# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Credit Fraud Detection: Prediction/Analysis
# ----
# In this project, I analyze instances of credit fraud using Python and attempt to discriminate between licit and illicit transactions through supervised machine learning. The dataset consists of transactions made in late 2013 by European customers and is publicly available on Kaggle.
## import libraries
import pandas as pd
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
import seaborn as sns
import matplotlib.pyplot as plt
# +
## dataset url = https://www.kaggle.com/mlg-ulb/creditcardfraud?select=creditcard.csv
# -
## initialize dataset
credit = pd.read_csv('creditcard.csv', engine='python')
## dataset information
credit.info()
## first 5 rows of the dataset
credit.head()
## check for missing values
print("missing values:", credit.isnull().values.any())
## summarize all amount values
print(credit['Amount'].describe())
## summarize fraudulent amount values
fraud_sum = credit[credit.Class == 1]
fraud_sum['Amount'].describe()
## summarize nonfraudulent amount values
no_fraud = credit[credit.Class == 0]
no_fraud['Amount'].describe()
# ### <span style='font-family:"Times New Roman"'>The average purchase across all transactions was $88.34 and the maximum purchase was $25691.16. The mean and max fraudulent purchases were $122.21 and $2125.87, while the mean and max nonfraudulent purchases were $88.29 and $25691.16.</span>
## separate fraud and no fraud instances
fraud = credit[credit.Class == 1]
no_fraud = credit[credit.Class == 0]
fraud_ratio = len(fraud)/float(len(no_fraud))
print('Fraud-to-nonfraud ratio: {}'.format(fraud_ratio))
print('There are {} fraudulent occurrences'.format(len(credit[credit['Class'] == 1])))
print('There are {} nonfraudulent occurrences'.format(len(credit[credit['Class'] == 0])),"\n")
# +
## visualization of transactions by dollar amount
figure, (axis_1, axis_2) = plt.subplots(2, 1, sharex = True, figsize = (10, 10))
bins = 10
axis_1.hist(credit.Amount[credit.Class == 1], bins = bins, color = 'red')
axis_1.set_title('Fraudulent')
axis_2.hist(credit.Amount[credit.Class == 0], bins = bins, color = 'green')
axis_2.set_title('Nonfraudulent')
plt.xlabel('Dollar Amount')
plt.ylabel('Transactions')
plt.yscale('log')
plt.show()
# -
# ### <span style='font-family:"Times New Roman"'>Fraudulent transactions were mostly limited to smaller values compared to nonfraudulent transactions.</span>
## split into train and test sets
x = credit.iloc[:, :-1].values
y = credit.iloc[:, -1].values
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, stratify = y, random_state = 42) # stratify preserves the rare fraud class in both splits
## retrieve the accuracy score using DecisionTreeClassifier
classifier = DecisionTreeClassifier(max_depth = 4)
classifier = classifier.fit(x_train, y_train)
predicted_value = classifier.predict(x_test)
decision_tree = metrics.accuracy_score(y_test, predicted_value) * 100
print("\nThe DecisionTreeClassifier accuracy score is {}".format(decision_tree))
## retrieve the precision score
p = precision_score(y_test, predicted_value, pos_label = 1)
## retrieve the recall score
r = recall_score(y_test, predicted_value, pos_label = 1)
## retrieve the f-score
f = f1_score(y_test, predicted_value, pos_label = 1)
## print the scores
print('The precision score is {}'.format(p))
print('The recall score is {}'.format(r))
print('The f-score value is {}'.format(f))
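# The precision/recall/F1 trio summarizes the error trade-offs; a confusion matrix shows the raw counts behind them. A minimal, self-contained sketch (the label arrays below are hypothetical stand-ins for `y_test` and `predicted_value`):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# hypothetical true and predicted labels standing in for y_test / predicted_value
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])
y_hat = np.array([0, 0, 1, 1, 0, 0, 1, 0])

# for binary labels, ravel() unpacks the 2x2 matrix as (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y_true, y_hat).ravel()
print(tn, fp, fn, tp)  # -> 4 1 1 2
```

# The false-negative count (fraud missed) is usually the costliest cell in this application, which is why recall is reported alongside accuracy.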
# file: projects/Credit Fraud Detection: Prediction/credit_fraud_detection.ipynb.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Leaf Label
# In the example below, leaf labels show the model names of cars instead of index numbers. To customize leaf labels, pass the `labels` parameter the column that holds the desired labels; here, the car model names are in the index of the dataframe.
# +
# Libraries
import pandas as pd
from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
import numpy as np
# Data set
url = 'https://python-graph-gallery.com/wp-content/uploads/mtcars.csv'
df = pd.read_csv(url)
df = df.set_index('model')
# Calculate the distance between each sample
Z = linkage(df, 'ward')
# Plot with Custom leaves
dendrogram(Z, leaf_rotation=90, leaf_font_size=8, labels=df.index)
# Show the graph
plt.show()
# -
# ## Number of Clusters
# You can give a threshold value to control the colors of clusters. In the following example, the `color_threshold` value is 240: all clusters merged below 240 are drawn in distinct colors, while links above 240 share the same color. To display the threshold visually, you can add a horizontal line across the axis with the `axhline()` function.
# +
# Libraries
import pandas as pd
from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
import numpy as np
# Data set
url = 'https://python-graph-gallery.com/wp-content/uploads/mtcars.csv'
df = pd.read_csv(url)
df = df.set_index('model')
# Calculate the distance between each sample
Z = linkage(df, 'ward')
# Control number of clusters in the plot + add horizontal line.
dendrogram(Z, color_threshold=240)
plt.axhline(y=240, c='grey', lw=1, linestyle='dashed')
# Show the graph
plt.show()
# -
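# The same `color_threshold` logic can be used to pull flat cluster labels out of the tree with `fcluster`. A sketch on a tiny synthetic dataset (the 240 cutoff is specific to the mtcars data, so a smaller threshold is used here):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# two well-separated blobs of 2-D points
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
Z = linkage(X, 'ward')

# cut the tree at a distance threshold; leaves in the same subtree share a label
labels = fcluster(Z, t=2.0, criterion='distance')
print(labels)  # two clusters of three points each
```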
# ## Color
# All links connecting nodes above the threshold are colored with the default matplotlib color. You can change this default by passing the `above_threshold_color` parameter to the function.
# +
# Libraries
import pandas as pd
from matplotlib import pyplot as plt
from scipy.cluster import hierarchy
import numpy as np
# Data set
url = 'https://python-graph-gallery.com/wp-content/uploads/mtcars.csv'
df = pd.read_csv(url)
df = df.set_index('model')
# Calculate the distance between each sample
Z = hierarchy.linkage(df, 'ward')
# Set the colour of the cluster here:
hierarchy.set_link_color_palette(['#b30000','#996600', '#b30086'])
# Make the dendrogram and give the colour above threshold
hierarchy.dendrogram(Z, color_threshold=240, above_threshold_color='grey')
# Add horizontal line.
plt.axhline(y=240, c='grey', lw=1, linestyle='dashed')
# Show the graph
plt.show()
# -
# ## Truncate
# You can use truncation to condense the dendrogram by passing the `truncate_mode` parameter to the `dendrogram()` function. There are 2 modes:
# * `lastp` : Plot only the last p merged clusters, leaving p leaves at the bottom of the plot
# * `level` : Display no more than p levels of the dendrogram tree
# +
# Libraries
import pandas as pd
from matplotlib import pyplot as plt
from scipy.cluster import hierarchy
import numpy as np
# Data set
url = 'https://python-graph-gallery.com/wp-content/uploads/mtcars.csv'
df = pd.read_csv(url)
df = df.set_index('model')
# Calculate the distance between each sample
Z = hierarchy.linkage(df, 'ward')
# method 1: lastp
hierarchy.dendrogram(Z, truncate_mode = 'lastp', p=4 ) # -> you will have 4 leaves at the bottom of the plot
plt.show()
# method 2: level
hierarchy.dendrogram(Z, truncate_mode = 'level', p=2) # -> No more than ``p`` levels of the dendrogram tree are displayed.
plt.show()
# -
# ## Orientation
# The direction to plot the dendrogram can be controlled with the `orientation` parameter of the `dendrogram()` function. The possible orientations are 'top', 'bottom', 'left', and 'right'.
# +
# Libraries
import pandas as pd
from matplotlib import pyplot as plt
from scipy.cluster import hierarchy
import numpy as np
# Data set
url = 'https://python-graph-gallery.com/wp-content/uploads/mtcars.csv'
df = pd.read_csv(url)
df = df.set_index('model')
# Calculate the distance between each sample
Z = hierarchy.linkage(df, 'ward')
# Orientation of the dendrogram
hierarchy.dendrogram(Z, orientation="right", labels=df.index)
plt.show()
# or
hierarchy.dendrogram(Z, orientation="left", labels=df.index)
plt.show()
# file: src/notebooks/401-customised-dendrogram.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Transfer Learning via Fine-Tuning
# PyTorch ships pretrained versions of the well-known networks we covered earlier, so we don't need to train them on ImageNet ourselves. The models live in `torchvision.models`; for example, a pretrained 50-layer ResNet is available as `torchvision.models.resnet50(pretrained=True)`.
#
# Below we walk through a fine-tuning example.
# +
import numpy as np
import torch
from torch import nn
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import models
from torchvision import transforms as tfs
from torchvision.datasets import ImageFolder
# -
# First, download the dataset from this [link](https://download.pytorch.org/tutorial/hymenoptera_data.zip); in a terminal you can use
#
# `wget https://download.pytorch.org/tutorial/hymenoptera_data.zip`
#
# After downloading, unzip it into the working directory. This is a binary classification problem: distinguishing ants from bees.
#
# Let's visualize a few of the images and see whether you can tell them apart.
import os
from PIL import Image
import matplotlib.pyplot as plt
# %matplotlib inline
# +
root_path = './hymenoptera_data/train/'
im_list = [os.path.join(root_path, 'ants', i) for i in os.listdir(root_path + 'ants')[:4]]
im_list += [os.path.join(root_path, 'bees', i) for i in os.listdir(root_path + 'bees')[:5]]
nrows = 3
ncols = 3
figsize = (8, 8)
_, figs = plt.subplots(nrows, ncols, figsize=figsize)
for i in range(nrows):
for j in range(ncols):
figs[i][j].imshow(Image.open(im_list[nrows*i+j]))
figs[i][j].axes.get_xaxis().set_visible(False)
figs[i][j].axes.get_yaxis().set_visible(False)
plt.show()
# +
# Define the data preprocessing
train_tf = tfs.Compose([
tfs.RandomResizedCrop(224),
tfs.RandomHorizontalFlip(),
tfs.ToTensor(),
    tfs.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) # use the ImageNet mean and std
])
valid_tf = tfs.Compose([
tfs.Resize(256),
tfs.CenterCrop(224),
tfs.ToTensor(),
tfs.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
# -
# Define the datasets with ImageFolder
train_set = ImageFolder('./hymenoptera_data/train/', train_tf)
valid_set = ImageFolder('./hymenoptera_data/val/', valid_tf)
# Define the iterators with DataLoader
train_data = DataLoader(train_set, 64, True, num_workers=4)
valid_data = DataLoader(valid_set, 128, False, num_workers=4)
# Use the pretrained model
net = models.resnet50(pretrained=True)
print(net)
# Print the weights of the first layer
print(net.conv1.weight)
# Replace the final fully connected layer for binary classification
net.fc = nn.Linear(2048, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=1e-2, weight_decay=1e-4)
from utils import train
train(net, train_data, valid_data, 20, optimizer, criterion)
# Now let's visualize a prediction
net = net.eval() # switch the network to evaluation mode
# Read an image of an ant
im1 = Image.open('./hymenoptera_data/train/ants/0013035.jpg')
im1
im = valid_tf(im1) # apply the validation preprocessing
out = net(Variable(im.unsqueeze(0)).cuda()) # assumes the net was moved to the GPU by the train utility
pred_label = out.max(1)[1].data[0]
print('predict label: {}'.format(train_set.classes[pred_label]))
# The prediction is correct.
#
# **Exercise: walk through the prediction steps above and try a few more images.**
# +
# Keep the parameters of the earlier convolutional layers fixed
net = models.resnet50(pretrained=True)
for param in net.parameters():
    param.requires_grad = False # freeze the parameters so no gradients are computed
net.fc = nn.Linear(2048, 2)
optimizer = torch.optim.SGD(net.fc.parameters(), lr=1e-2, weight_decay=1e-4)
# -
train(net, train_data, valid_data, 20, optimizer, criterion)
# With only the fully connected layer's parameters being updated, validation accuracy still reaches a fairly high level, but the loss fluctuates more because so few parameters are trained.
# +
# Do not use a pretrained model
net = models.resnet50()
net.fc = nn.Linear(2048, 2)
optimizer = torch.optim.SGD(net.parameters(), lr=1e-2, weight_decay=1e-4)
# -
# Print the weights of the first layer
print(net.conv1.weight)
train(net, train_data, valid_data, 20, optimizer, criterion)
# The results above show that the pretrained model reaches roughly 95% validation accuracy very quickly, while the model trained from scratch only reaches about 70%. A pretrained model therefore works well even on small datasets: for image classification tasks, the lowest convolutional layers learn generic features such as shapes and textures, so many image classification and recognition tasks get better results by starting from a pretrained network.
# file: 05.卷积神经网络(进阶)/22.fine-tune-code/fine-tune.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="IYXT7vhYVZGH" executionInfo={"status": "ok", "timestamp": 1647711675358, "user_tz": -330, "elapsed": 3419, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}}
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import os
plt.style.use("fivethirtyeight")
# + colab={"base_uri": "https://localhost:8080/"} id="HxSnrfb-Vu0n" executionInfo={"status": "ok", "timestamp": 1647711678562, "user_tz": -330, "elapsed": 8, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}} outputId="bf800788-9690-41af-9f10-9f322994aebb"
# Take 100 evenly spaced points between -10 and 10
x = np.linspace(-10,10,100)
x
# + colab={"base_uri": "https://localhost:8080/"} id="w-Lf56ynV0a0" executionInfo={"status": "ok", "timestamp": 1647711680815, "user_tz": -330, "elapsed": 5, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}} outputId="ba450f7c-5d87-4db2-a1f0-ffc0d5019d2c"
len(x)
# + [markdown] id="2b7-7g6PWTVg"
# # First-Principles Rule
#
# $f'(x) = \lim_{\Delta x \rightarrow 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$
# + id="INhPRo70V8Ho" executionInfo={"status": "ok", "timestamp": 1647711682866, "user_tz": -330, "elapsed": 4, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}}
# Numerical derivative of a function, implementing the first-principles (forward-difference) rule.
def derivative(f,x, delta_x = 1e-6):
return (f(x+delta_x)-f(x))/delta_x
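# As a quick sanity check (not part of the original notebook), the forward-difference rule should recover known analytic derivatives; `derivative` is restated below so the snippet runs standalone:

```python
import numpy as np

def derivative(f, x, delta_x=1e-6):
    # forward-difference approximation of f'(x)
    return (f(x + delta_x) - f(x)) / delta_x

print(derivative(np.sin, 0.0))  # close to cos(0) = 1
print(derivative(np.exp, 0.0))  # close to exp(0) = 1
```

# The approximation error shrinks linearly with `delta_x` until floating-point cancellation takes over, which is why `1e-6` is a reasonable default.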
# + id="GN0sSjGfWt7M" executionInfo={"status": "ok", "timestamp": 1647711685166, "user_tz": -330, "elapsed": 4, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}}
# Plotting of the graph
def plot_graph(x,f,
ALPHA = 0.6,
label_x = r"$x \rightarrow$",
label_y = r"$act(x), act'(x)$",
title_graph = None,
LABEL_Y = None,
LABEL_y_dash = None,
filepath_to_plot = "plot.png"):
y = f(x)
y_dash = derivative(f,x)
    plt.figure(figsize = (10,8))
    if title_graph is not None:
        plt.title(title_graph)
plt.axhline(y = 0, color = "black", linestyle = "--", lw = 2)
plt.axvline(x = 0, color = "black", linestyle = "--", lw = 2)
plt.xlabel(label_x)
plt.ylabel(label_y)
if (LABEL_Y != None) and (LABEL_y_dash != None):
plt.plot(x,y,alpha=ALPHA, label= LABEL_Y)
plt.plot(x,y_dash,alpha=ALPHA, label= LABEL_y_dash)
plt.legend(fontsize = 14)
else:
plt.plot(x,y,alpha=ALPHA)
plt.plot(x,y_dash,alpha=ALPHA)
plt.savefig(filepath_to_plot)
# + id="jURb0rZnZQRZ" executionInfo={"status": "ok", "timestamp": 1647711686369, "user_tz": -330, "elapsed": 5, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}}
def sine(x):
return np.sin(x)
# + colab={"base_uri": "https://localhost:8080/"} id="r-aO1Q-4ZUnb" executionInfo={"status": "ok", "timestamp": 1647711687699, "user_tz": -330, "elapsed": 4, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}} outputId="540263cc-5d0c-45a5-f722-bad91bde66d0"
# Takes radian input
sine(3.14/2)
# + id="qtR3ImHHZXt9" executionInfo={"status": "ok", "timestamp": 1647711689539, "user_tz": -330, "elapsed": 4, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}}
root_plot_dir = "root"
os.makedirs(root_plot_dir, exist_ok=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 496} id="VrlkzM7fZt8H" executionInfo={"status": "ok", "timestamp": 1647711692772, "user_tz": -330, "elapsed": 1504, "user": {"displayName": "Tiger_Graph AI", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "07988737776297691147"}} outputId="5dbaeac1-473e-45db-f5fe-5370a9779834"
def plot_sine(x, path):
    plot_graph(x, f=sine, ALPHA=0.6,
               label_x = r"$x \rightarrow$",
               label_y=r"$act(x), act'(x)$",
               title_graph = "sine function",
               LABEL_Y=r"$sin(x)$",
               LABEL_y_dash=r"$cos(x)$",
               filepath_to_plot = path) # save to the requested path instead of a hard-coded name
plot_sine(x, os.path.join(root_plot_dir, "sine"))
# + id="POXpAGvNZwlW"
# file: first_principle.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('ml')
# language: python
# name: python3
# ---
# Post Details-
# Link: https://machinelearningmastery.com/develop-first-xgboost-model-python-scikit-learn/
# +
import zipfile
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
# -
# # Getting the Data
# Link of the dataset - https://www.kaggle.com/datasets/uciml/pima-indians-diabetes-database
# !kaggle datasets download -d uciml/pima-indians-diabetes-database
# extract the files in temp folder
with zipfile.ZipFile("pima-indians-diabetes-database.zip", 'r') as zip_ref:
zip_ref.extractall('../temp')
df = pd.read_csv('../temp/diabetes.csv')
df.head()
X = df.drop('Outcome', axis=1).values
y = df['Outcome']
seed = 42
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=seed)
model = XGBClassifier(use_label_encoder=False)
model.fit(X_train, y_train)
# **NOTE** With the scikit-learn wrapper, `XGBClassifier.predict` returns class labels, while `predict_proba` returns probabilities; the rounding below is therefore a harmless safeguard rather than a required step.
y_pred = model.predict(X_test)
pred = [round(c) for c in y_pred]
accuracy = accuracy_score(y_test, pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
model.score(X_test, y_test)
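# If you do want probabilities, `predict_proba` returns them, and you can threshold them yourself. A framework-free sketch of that thresholding step (the probability values below are hypothetical, so no xgboost is needed to run it):

```python
import numpy as np

# hypothetical P(class=1) outputs, as predict_proba(X_test)[:, 1] would give
proba = np.array([0.10, 0.40, 0.55, 0.90, 0.49])

# thresholding at 0.5 turns probabilities into hard class labels
labels = (proba >= 0.5).astype(int)
print(labels)  # -> [0 0 1 1 0]
```

# Moving the threshold away from 0.5 trades precision against recall, which matters for imbalanced problems like this one.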
# file: XGBoost-Introduction-Guide/intro-to-xgboost.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## Sample Data
#
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
import numpy as np
from scipy.special import gamma
import random
from collections import Counter
import matplotlib.pyplot as plt
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
import gensim
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
tokenizer = RegexpTokenizer(r'\w+')
# create English stop words list
en_stop = get_stop_words('en')
# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()
# create sample documents
doc_a = "Batman became popular soon after his introduction and gained his own comic book title, Batman, in 1940."
doc_b = "In 1971, Trump moved to Manhattan, where he became involved in larger construction projects, and used attractive architectural design to win public recognition."
doc_c = "Batman is, in his everyday identity, <NAME>, a wealthy American business magnate living in Gotham City."
doc_d = "In 2001, Trump completed Trump World Tower, a 72-story residential tower across from the United Nations Headquarters."
doc_e = " Unlike most superheroes, Batman does not possess any superpowers; rather, he relies on his genius intellect, physical prowess, martial arts abilities, detective skills, science and technology, vast wealth, intimidation, and indomitable will. "
# compile sample documents into a list
doc_set = [doc_a, doc_b, doc_c, doc_d, doc_e]
# list for tokenized documents in loop
texts = []
# loop through document list
for i in doc_set:
# clean and tokenize document string
raw = i.lower()
tokens = tokenizer.tokenize(raw)
# remove stop words from tokens
stopped_tokens = [i for i in tokens if not i in en_stop]
# stem tokens
stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
# add tokens to list
texts.append(stemmed_tokens)
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## CRP
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
def CRP(topic, gamma):
    '''CRP gives the probability of topic assignment for a specific vocabulary item.
    Returns a vector of length j+1, where j is the current number of topics:
    the new-topic probability followed by one entry per existing topic.'''
cm = []
m = sum([len(x) for x in topic])
p = gamma / (gamma + m) # prob for new topic
cm.append(p)
for j in range(len(topic)):
p = len(topic[j]) / (gamma + m) # prob for existing topics
cm.append(p)
return np.array(cm)
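# A quick standalone check of the CRP probabilities (the function is restated below so the snippet runs on its own): with no topics yet, all mass goes to creating a new topic; with existing topics, each entry is proportional to topic size and the vector sums to 1.

```python
import numpy as np

def CRP(topic, gamma):
    # returns [P(new topic), P(topic 1), ..., P(topic j)]
    cm = []
    m = sum(len(x) for x in topic)
    cm.append(gamma / (gamma + m))  # new-topic probability
    for j in range(len(topic)):
        cm.append(len(topic[j]) / (gamma + m))  # proportional to topic size
    return np.array(cm)

print(CRP([], 1.0))  # all mass on the new topic
print(CRP([['a', 'b'], ['c']], 1.0))  # probabilities 0.25, 0.5, 0.25
```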
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## node sampling
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
def node_sampling(corpus_s, gamma):
'''Node sampling samples the number of topics for next level'''
topic = []
for corpus in corpus_s:
for doc in corpus:
cm = CRP(topic, gamma)
theta = np.random.multinomial(1, (cm/sum(cm))).argmax()
if theta == 0:
# create new topic
topic.append([doc])
else:
# existing topic
topic[theta-1].append(doc)
return topic
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## Z
# $P(z_{i}=j \mid \mathbf{z}_{-i}, \mathbf{w}) \propto \frac{n_{-i,j}^{(w_{i})}+\beta}{n_{-i,j}^{(\cdot)}+W\beta} \cdot \frac{n_{-i,j}^{(d_{i})}+\alpha}{n_{-i,\cdot}^{(d_{i})}+T\alpha}$
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
def Z(corpus_s, topic, alpha, beta):
'''Z distributes each vocabulary to topics'''
'''Return a n * 1 vector, where n is the number of vocabularies'''
n_vocab = sum([len(x) for x in corpus_s])
# zm: n * 1
# return the assignment of each vocabulary
t_zm = np.zeros(n_vocab).astype('int')
# z_assigned: j * 1
# return a list of list topic where stores assigned vocabularies in each sublist
z_assigned = [[] for _ in topic]
z_doc = [[] for _ in topic]
z_tmp = np.zeros((n_vocab, len(topic)))
assigned = np.zeros((len(corpus_s), len(topic)))
n = 0
for i in range(len(corpus_s)):
for d in range(len(corpus_s[i])):
wi = corpus_s[i][d]
for j in range(len(topic)):
lik = (z_assigned[j].count(wi) + beta) / (assigned[i, j] + n_vocab * beta)
pri = (len(z_assigned[j]) + alpha) / ((len(corpus_s[i]) - 1) + len(topic) * alpha)
z_tmp[n, j] = lik * pri
t_zm[n] = np.random.multinomial(1, (z_tmp[n,:]/sum(z_tmp[n,:]))).argmax()
z_assigned[t_zm[n]].append(wi)
z_doc[t_zm[n]].append(i)
assigned[i, t_zm[n]] += 1
n += 1
z_assigned = [x for x in z_assigned if x != []]
z_doc = [x for x in z_doc if x != []]
return np.array(z_assigned)
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## C
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
def C(corpus_s, topic, gamma):
cm = []
for corpus in corpus_s:
for word in corpus:
for t in topic:
if type(t) == list:
y = t.count(word)
else:
y = t.tolist().count(word)
H = np.random.poisson(lam=(2), size=(len(topic)))
alpha = gamma*H
temp = np.random.dirichlet(y + alpha).transpose()
cm.append((temp/sum(temp)).tolist())
return np.array(cm)
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## wn
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
most_common = lambda x: Counter(x).most_common(1)[0][0]
def wn(c_m, corpus_s, topic):
wn_topic = []
for i, corpus in enumerate(corpus_s):
for word in corpus:
theta = np.random.multinomial(1, c_m[i]).argmax()
wn_topic.append(theta)
return np.array(wn_topic)
def gibbs_wn(c_m, corpus_s, topic, ite):
n_vocab = sum([len(x) for x in corpus_s])
wn_gibbs = np.empty((n_vocab, ite)).astype('int')
for i in range(ite):
wn_gibbs[:, i] = wn(c_m, corpus_s, topic)
# drop first 1/10 data
wn_gibbs = wn_gibbs[:, int(ite/10):]
theta = [most_common(wn_gibbs[x]) for x in range(n_vocab)]
wn_topic = [[] for _ in topic]
wn_doc_topic = [[] for _ in topic]
doc = 0
n = 0
    for i, corpus in enumerate(corpus_s): # renamed loop variable; it previously shadowed corpus_s
        if doc == i:
            for word in corpus:
wn_doc_topic[theta[n]].append(word)
n += 1
for j in range(len(topic)):
if wn_doc_topic[j] != []:
wn_topic[j].append(wn_doc_topic[j])
wn_doc_topic = [[] for _ in topic]
doc += 1
wn_topic = [x for x in wn_topic if x != []]
return wn_topic
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## hLDA
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
def hLDA(corpus_s, gamma, alpha, beta, ite, level):
# 1. Node sampling, samples max level L
topic = node_sampling(corpus_s, gamma)
def dis(corpus_s, gamma, alpha, beta, ite):
# 2. z_m, samples topic from L
z_topic = Z(corpus_s, topic, alpha, beta)
# 3. c_m, samples path
c_m = C(corpus_s, z_topic, gamma)
# 4. w_n, distributes words into topics
wn_topic = gibbs_wn(c_m, corpus_s, z_topic, ite)
return wn_topic
hLDA_tree = [[] for _ in range(level)]
tmp_tree = []
node = [[] for _ in range(level+1)]
node[0].append(1)
for i in range(level):
if i == 0:
wn_topic = dis(texts, gamma, alpha, beta, ite)
            topic = set([x for sublist in wn_topic[0] for x in sublist])
hLDA_tree[0].append(topic)
tmp_tree.append(wn_topic[1:])
tmp_tree = tmp_tree[0]
node[1].append(len(wn_topic[1:]))
else:
for j in range(sum(node[i])):
if tmp_tree == []:
break
wn_topic = dis(tmp_tree[0], gamma, alpha, beta, ite)
            topic = set([x for sublist in wn_topic[0] for x in sublist])
hLDA_tree[i].append(topic)
tmp_tree.remove(tmp_tree[0])
if wn_topic[1:] != []:
tmp_tree.extend(wn_topic[1:])
node[i+1].append(len(wn_topic[1:]))
return hLDA_tree, node[:level]
# -
texts = [['batman',
'becam',
'popular',
'soon',
'introduct',
'gain',
'comic',
'book',
'titl',
'batman',
'1940'],
['1971',
'trump',
'move',
'manhattan',
'becam',
'involv',
'larger',
'construct',
'project',
'use',
'attract',
'architectur',
'design',
'win',
'public',
'recognit'],
['batman',
'everyday',
'ident',
'bruce',
'wayn',
'wealthi',
'american',
'busi',
'magnat',
'live',
'gotham',
'citi'],
['2001',
'trump',
'complet',
'trump',
'world',
'tower',
'72',
'stori',
'residenti',
'tower',
'across',
'unit',
'nation',
'headquart'],
['unlik',
'superhero',
'batman',
'possess',
'superpow',
'rather',
'reli',
'geniu',
'intellect',
'physic',
'prowess',
'martial',
'art',
'abil',
'detect',
'skill',
'scienc',
'technolog',
'vast',
'wealth',
'intimid',
'indomit',
'will']]
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
hLDA(texts, 10, 0.1, 0.01, 10000, 4)
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
# file: .ipynb_checkpoints/tmep-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#ignore
from IPython.core.display import HTML,Image
import sys
sys.path.append('/anaconda/')
import config
HTML('<style>{}</style>'.format(config.CSS))
# -
# ### Introduction
#
# Use of machine learning in the quantitative investment field is, by all indications, skyrocketing. The proliferation of easily accessible data - both traditional and alternative - along with some very approachable frameworks for machine learning models - is encouraging many to explore the arena.
#
# These financial ML explorers are learning that there are many ways in which using ML to predict financial time series differs greatly from labeling cat pictures or flagging spam. Application of machine learning to financial time series prediction is made especially difficult due to (1) non-stationarity, (2) low signal-to-noise, and (3) strong feature collinearity within financial data.
#
# As a consequence, even the most expertly designed ML models will achieve accuracy levels which would seem wholly inadequate in other domains. It's hard to get excited about an RSQ of 0.10 or a classification accuracy of 0.60, but that is often the reality of well-built models in the domain of asset price prediction.
#
# In my view, the generic model performance metrics (RSQ, MSE, accuracy, F1, etc...) are not tremendously useful when working in this domain. Similarly, the traditional quantitative finance metrics (CAGR, Sharpe, max drawdown, etc...) do not provide as much insight into the models themselves as they do into the particular time period of data used.
#
# Over the years, I've developed a set of metrics which have proved useful for comparing and optimizing models. These metrics attempt to measure model performance in terms of _predictive power_ but also in terms of _practicality_, a critically important dimension for those who actually intend to _use_ their models in the real world.
#
# In this post, I will present a general outline of my approach and will demonstrate a few of the most useful metrics I've added to my standard "scorecard". I look forward to hearing how others may think to extend the concept.
#
#
# ### Creating dummy data
#
# I will illustrate the usefulness of this metrics methodology using a simple example of synthetically generated data (see previous posts in this tutorial for explanations of the below method of creating data).
# +
import numpy as np
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like # remove once updated pandas-datareader issue is fixed
# https://github.com/pydata/pandas-datareader/issues/534
import pandas_datareader.data as web
# %matplotlib inline
def get_symbols(symbols,data_source, begin_date=None,end_date=None):
out = pd.DataFrame()
for symbol in symbols:
df = web.DataReader(symbol, data_source,begin_date, end_date)[['AdjOpen','AdjHigh','AdjLow','AdjClose','AdjVolume']].reset_index()
df.columns = ['date','open','high','low','close','volume'] #my convention: always lowercase
df['symbol'] = symbol # add a new column which contains the symbol so we can keep multiple symbols in the same dataframe
df = df.set_index(['date','symbol'])
out = pd.concat([out,df],axis=0) #stacks on top of previously collected data
return out.sort_index()
prices = get_symbols(['AAPL','CSCO','AMZN','YHOO','MSFT'],data_source='quandl',begin_date='2012-01-01',end_date=None)
# note, we're only using real price data to get an accurate date/symbol index set.
# +
num_obs = prices.close.count()
def add_memory(s,n_days=50,memory_strength=0.1):
''' adds autoregressive behavior to series of data'''
add_ewm = lambda x: (1-memory_strength)*x + memory_strength*x.ewm(n_days).mean()
out = s.groupby(level='symbol').apply(add_ewm)
return out
# generate feature data
f01 = pd.Series(np.random.randn(num_obs),index=prices.index)
f01 = add_memory(f01,10,0.1)
f02 = pd.Series(np.random.randn(num_obs),index=prices.index)
f02 = add_memory(f02,10,0.1)
f03 = pd.Series(np.random.randn(num_obs),index=prices.index)
f03 = add_memory(f03,10,0.1)
f04 = pd.Series(np.random.randn(num_obs),index=prices.index)
f04 = f04 # no memory
features = pd.concat([f01,f02,f03,f04],axis=1)
## now, create response variable such that it is related to features
# f01 becomes increasingly important, f02 becomes decreasingly important,
# f03 oscillates in importance, f04 is stationary, finally a noise component is added
outcome = f01 * np.linspace(0.5,1.5,num_obs) + \
f02 * np.linspace(1.5,0.5,num_obs) + \
f03 * pd.Series(np.sin(2*np.pi*np.linspace(0,1,num_obs)*2)+1,index=f03.index) + \
f04 + \
np.random.randn(num_obs) * 3
outcome.name = 'outcome'
# -
# ## Evaluating Models
# Imagine that we created a simple linear model (such as below) and wanted to measure its effectiveness at prediction.
#
# >Note: we'll follow the walk-forward modeling process described in the [previous post](walk_forward_model_building.html). If you don't understand the below code snippet (and want to...) please check out that post.
# +
from sklearn.linear_model import LinearRegression
recalc_dates = features.resample('Q',level='date').mean().index.values[:-1]
models = pd.Series(index=recalc_dates)
for date in recalc_dates:
X_train = features.xs(slice(None,date),level='date',drop_level=False)
y_train = outcome.xs(slice(None,date),level='date',drop_level=False)
model = LinearRegression()
model.fit(X_train,y_train)
models.loc[date] = model
begin_dates = models.index
end_dates = models.index[1:].append(pd.to_datetime(['2099-12-31']))
predictions = pd.Series(index=features.index)
for i,model in enumerate(models): #loop thru each models object in collection
X = features.xs(slice(begin_dates[i],end_dates[i]),level='date',drop_level=False)
p = pd.Series(model.predict(X),index=X.index)
predictions.loc[X.index] = p
# -
# So we've got a model, we've got a sizeable set of (out of sample) predictions. Is the model any good? Should we junk it, tune it, or trade it?
#
# Since this is a regression model, I'll throw our data into `scikit-learn`'s metrics package.
# +
import sklearn.metrics as metrics
# make sure we have 1-for-1 mapping between pred and true
common_idx = outcome.dropna().index.intersection(predictions.dropna().index)
y_true = outcome[common_idx]
y_true.name = 'y_true'
y_pred = predictions[common_idx]
y_pred.name = 'y_pred'
standard_metrics = pd.Series(dtype=float)
standard_metrics.loc['explained variance'] = metrics.explained_variance_score(y_true, y_pred)
standard_metrics.loc['MAE'] = metrics.mean_absolute_error(y_true, y_pred)
standard_metrics.loc['MSE'] = metrics.mean_squared_error(y_true, y_pred)
standard_metrics.loc['MedAE'] = metrics.median_absolute_error(y_true, y_pred)
standard_metrics.loc['RSQ'] = metrics.r2_score(y_true, y_pred)
print(standard_metrics)
# -
# <img src="images/confused_scientist.jpg" width="400">
#
# These stats don't really tell us much by themselves. You may have an intuition for r-squared, so that may give you some confidence in the models. However, even this metric [has problems](https://onlinecourses.science.psu.edu/stat501/node/258/), and it tells us little about the practicality of this signal from a trading point of view.
#
# True, we could construct some trading rules around this series of predictions and perform a formal backtest on that. However, that is quite time consuming and introduces a number of extraneous variables into the equation.
# ### A better way...
# Below is a method and code framework for evaluating models along several useful dimensions. I'll work through an example of creating a "scorecard" with about a half dozen metrics as a starting point.
#
# You can feel free to extend this into a longer scorecard which is suited to your needs and beliefs. In my own trading, I use about 25 metrics in a standard "scorecard" each time I evaluate a model. You may prefer to use more (or different) metrics but the procedure should be applicable.
#
# I'll focus only on regression-oriented metrics (i.e., those which use a continuous prediction rather than a binary or classification prediction). It's trivial to re-purpose the same framework to a classification-oriented environment.
#
# In this approach, we'll create an extensible _scorecard_ which can contain many custom-defined _metrics_. These metrics can be combined and adapted in many different ways, some of which I'll lay out in the later part of this example.
# ### Preparing our data
# However, before implementing specific metrics we need to do some data pre-processing. It'll become clear why doing this initially will save considerable time later when calculating aggregate metrics.
#
# To create these intermediate values, you'll need the following inputs:
# * __y_pred:__ the _continuous variable_ prediction made by your model for each timestep, for each symbol
# * __y_true:__ the _continuous variable_ actual outcome for each timestep, for each symbol.
# * __index:__ this is the unique identifier for each prediction or actual result. If working with a single instrument, then you can simply use date (or time or whatever). If you're using multiple instruments, a multi-index with (date/symbol) is necessary.
#
# In other words, if your model is predicting one-day price changes, you'd want your y_pred to be the model's predictions made as of March 9th (for the coming day), indexed as `2017-03-09`, and you'd want the actual _future_ outcome which will play out over the next day also aligned to Mar 9th. This "peeking" convention is very useful for working with large sets of data across different time horizons. It is described ad nauseam in [this post]().
#
# The raw input data we need to provide might look something like this:
pd.concat([y_pred,y_true],axis=1).tail()
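# As a sketch of the "peeking" alignment described above: compute tomorrow's return, then shift it back one period so it sits on the date the prediction is made. The `px` series below is a hypothetical stand-in for illustration only, not the `prices` data loaded earlier.

# +
import numpy as np
import pandas as pd

# hypothetical single-symbol close prices (illustration only)
px = pd.Series([100., 101., 99., 102.],
               index=pd.date_range('2017-03-08', periods=4, name='date'))

# tomorrow's one-day return, aligned back to the date the prediction is made
fwd_1d_return = px.pct_change().shift(-1)
# -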
# We will feed this data into a simple function which will return a dataframe with the y_pred and y_true values, along with several other useful derivative values. These derivative values include:
#
# * __sign_pred:__ positive or negative sign of prediction
# * __sign_true:__ positive or negative sign of true outcome
# * __is_correct:__ 1 if sign_pred == sign_true, else 0
# * __is_incorrect:__ opposite
# * __is_predicted:__ 1 if the model has made a valid prediction, 0 if not. This is important if models only emit predictions when they have a certain level of confidence
# * __result:__ the profit (loss) resulting from betting one unit in the direction of the sign_pred. This is the continuous variable result of following the model
#
# With this set of intermediate variables calculated up front, the three core metrics of accuracy, edge, and noise become simple one-liners. First, the function that builds the intermediate values:
# +
def make_df(y_pred,y_true):
y_pred.name = 'y_pred'
y_true.name = 'y_true'
df = pd.concat([y_pred,y_true],axis=1)
df['sign_pred'] = df.y_pred.apply(np.sign)
df['sign_true'] = df.y_true.apply(np.sign)
df['is_correct'] = 0
df.loc[df.sign_pred * df.sign_true > 0 ,'is_correct'] = 1 # only registers 1 when prediction was made AND it was correct
df['is_incorrect'] = 0
df.loc[df.sign_pred * df.sign_true < 0,'is_incorrect'] = 1 # only registers 1 when prediction was made AND it was wrong
df['is_predicted'] = df.is_correct + df.is_incorrect
df['result'] = df.sign_pred * df.y_true
return df
df = make_df(y_pred,y_true)
df.dropna().tail()
# -
# ### Defining our metrics
# The metrics we'll start with here include things like:
# * __Accuracy:__ Just as the name suggests, this measures the percent of predictions that were _directionally_ correct vs. incorrect.
# * __Edge:__ perhaps the most useful of all metrics, this is the expected value of the prediction over a sufficiently large set of draws. Think of this like a blackjack card counter who knows the expected profit on each dollar bet at a given level of favorability.
# * __Noise:__ critically important but often ignored, the noise metric estimates how dramatically the model's predictions vary from one day to the next. As you might imagine, a model which abruptly changes its mind every few days is much harder to follow (and much more expensive to follow) than one which is a bit more steady.
# +
def calc_scorecard(df):
    scorecard = pd.Series(dtype=float)
# building block metrics
scorecard.loc['accuracy'] = df.is_correct.sum()*1. / (df.is_predicted.sum()*1.)*100
scorecard.loc['edge'] = df.result.mean()
scorecard.loc['noise'] = df.y_pred.diff().abs().mean()
return scorecard
calc_scorecard(df)
# -
# Much better. I now know that we've been directionally correct 68% of the time, and that following this signal would create an edge of 1.5 units per time period.
#
# Let's keep going. We can now easily combine and transform things to derive new metrics. The below function shows several examples, including:
# * __y_true_chg__ and __y_pred_chg:__ The average magnitude of change (per period) in y_true and y_pred.
# * __prediction_calibration:__ A simple ratio of the magnitude of our predictions vs. magnitude of truth. This gives some indication of whether our model is properly tuned to the size of movement in addition to the direction of it.
# * __capture_ratio:__ Ratio of the "edge" we gain by naively following our predictions vs. the actual daily change. 100 would indicate that we were _perfectly_ capturing the true movement of the target variable.
#
# +
def calc_scorecard(df):
    scorecard = pd.Series(dtype=float)
# building block metrics
scorecard.loc['accuracy'] = df.is_correct.sum()*1. / (df.is_predicted.sum()*1.)*100
scorecard.loc['edge'] = df.result.mean()
scorecard.loc['noise'] = df.y_pred.diff().abs().mean()
# derived metrics
scorecard.loc['y_true_chg'] = df.y_true.abs().mean()
scorecard.loc['y_pred_chg'] = df.y_pred.abs().mean()
scorecard.loc['prediction_calibration'] = scorecard.loc['y_pred_chg']/scorecard.loc['y_true_chg']
scorecard.loc['capture_ratio'] = scorecard.loc['edge']/scorecard.loc['y_true_chg']*100
return scorecard
calc_scorecard(df)
# -
# Additionally, metrics can be easily calculated for only long or short predictions (for a two-sided model) or separately for positions which ended up being winners and losers.
# * __edge_long__ and __edge_short:__ The "edge" for only long signals or for short signals.
# * __edge_win__ and __edge_lose:__ The "edge" for only winners or for only losers.
#
# If you've added categorical information to your data (such as industry classification), you can also run these metrics on each category of holdings in your data.
#
# +
def calc_scorecard(df):
    scorecard = pd.Series(dtype=float)
# building block metrics
scorecard.loc['accuracy'] = df.is_correct.sum()*1. / (df.is_predicted.sum()*1.)*100
scorecard.loc['edge'] = df.result.mean()
scorecard.loc['noise'] = df.y_pred.diff().abs().mean()
# derived metrics
scorecard.loc['y_true_chg'] = df.y_true.abs().mean()
scorecard.loc['y_pred_chg'] = df.y_pred.abs().mean()
scorecard.loc['prediction_calibration'] = scorecard.loc['y_pred_chg']/scorecard.loc['y_true_chg']
scorecard.loc['capture_ratio'] = scorecard.loc['edge']/scorecard.loc['y_true_chg']*100
# metrics for a subset of predictions
scorecard.loc['edge_long'] = df[df.sign_pred == 1].result.mean() - df.y_true.mean()
scorecard.loc['edge_short'] = df[df.sign_pred == -1].result.mean() - df.y_true.mean()
scorecard.loc['edge_win'] = df[df.is_correct == 1].result.mean() - df.y_true.mean()
scorecard.loc['edge_lose'] = df[df.is_incorrect == 1].result.mean() - df.y_true.mean()
return scorecard
calc_scorecard(df)
# -
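# If categorical information such as a hypothetical `sector` label has been added to the dataframe, the same scorecard can be computed per category with a one-line groupby. A minimal sketch (using an abbreviated two-metric scorecard and toy data, purely for illustration):

# +
import pandas as pd

def calc_scorecard_mini(df):
    # abbreviated stand-in for the full scorecard defined above
    scorecard = pd.Series(dtype=float)
    scorecard.loc['accuracy'] = df.is_correct.sum() / df.is_predicted.sum() * 100
    scorecard.loc['edge'] = df.result.mean()
    return scorecard

toy = pd.DataFrame({
    'is_correct':   [1, 0, 1, 1],
    'is_predicted': [1, 1, 1, 1],
    'result':       [0.5, -0.2, 0.3, 0.1],
    'sector':       ['tech', 'tech', 'energy', 'energy'],
})
by_sector = toy.groupby('sector').apply(calc_scorecard_mini)
# -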
# From this toy example, we'd see that the model is predicting with a strong directional accuracy, is capturing about half of the total theoretical profit to be made, makes more on winners than it loses on losers, and is equally valid on both long and short predictions. If this were real data, I would be rushing to put this model into production!
#
# ### Comparing models
# The true usefulness of this methodology comes when wanting to make comparisons. Model A vs Model B. Last year vs. this year. Small cap vs. large cap.
#
# To illustrate, let's say that we're comparing two models, a linear regression vs. a random forest, for performance on a training set and a testing set (pretend for a moment that we didn't adhere to [walk-forward modeling]() practices...).
# +
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNetCV,Lasso,Ridge
from sklearn.ensemble import RandomForestRegressor
X_train,X_test,y_train,y_test = train_test_split(features,outcome,test_size=0.20,shuffle=False)
# linear regression
model1 = LinearRegression().fit(X_train,y_train)
model1_train = pd.Series(model1.predict(X_train),index=X_train.index)
model1_test = pd.Series(model1.predict(X_test),index=X_test.index)
model2 = RandomForestRegressor().fit(X_train,y_train)
model2_train = pd.Series(model2.predict(X_train),index=X_train.index)
model2_test = pd.Series(model2.predict(X_test),index=X_test.index)
# create dataframes for each
model1_train_df = make_df(model1_train,y_train)
model1_test_df = make_df(model1_test,y_test)
model2_train_df = make_df(model2_train,y_train)
model2_test_df = make_df(model2_test,y_test)
s1 = calc_scorecard(model1_train_df)
s1.name = 'model1_train'
s2 = calc_scorecard(model1_test_df)
s2.name = 'model1_test'
s3 = calc_scorecard(model2_train_df)
s3.name = 'model2_train'
s4 = calc_scorecard(model2_test_df)
s4.name = 'model2_test'
pd.concat([s1,s2,s3,s4],axis=1)
# -
# This quick and dirty scorecard comparison gives us a great deal of useful information. We learn that:
# * The relatively simple linear regression (model1) does a good (unrealistically good...) job of prediction, correct about 68% of the time and capturing about 53% of available price movement (this is very good) during training
# * Model1 holds up very well out of sample, performing almost as well on test as train
# * Model2, a more complex random forest ensemble model, appears _far_ superior on the training data, capturing 90%+ of available price action, but appears quite overfit and does not perform nearly as well on the test set.
#
# ### Summary
# In this tutorial, we've covered a framework for evaluating models in a market prediction context and have demonstrated a few useful metrics. However, the approach can be extended much further to suit your needs. You can consider:
# * Adding new metrics to the standard scorecard
# * Comparing scorecard metrics for subsets of the universe. For instance, each symbol or grouping of symbols
# * Calculating and plotting performance metrics across time to validate robustness or to identify trends
#
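# The last point can be sketched with a simple year-by-year grouping on the prediction dataframe's date index (toy data and a single stand-in metric, purely for illustration):

# +
import pandas as pd

def edge_only(df):
    # single-metric stand-in for the full calc_scorecard above
    return df.result.mean()

toy_by_date = pd.DataFrame(
    {'result': [0.2, 0.4, -0.1, 0.3]},
    index=pd.to_datetime(['2016-03-01', '2016-09-01', '2017-03-01', '2017-09-01']),
)
edge_by_year = toy_by_date.groupby(toy_by_date.index.year).apply(edge_only)
# -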
# In the next (and final) [post of this series](ensemble_modeling.html), I'll present a unique framework for creating an _ensemble model_ to blend together the results of your many different forecasting models.
#
# Please feel free to add to the comment section with your good ideas for useful metrics, with questions/comments on this post, and topic ideas for future posts.
# ### One last thing...
#
# If you've found this post useful, please follow [@data2alpha](https://twitter.com/data2alpha) on twitter and forward to a friend or colleague who may also find this topic interesting.
#
# Finally, take a minute to leave a comment below - either to discuss this post or to offer an idea for future posts. Thanks for reading!
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # K-Means Demo
#
# KMeans is a basic but powerful clustering method which is optimized via Expectation Maximization. It randomly selects K data points in X, and computes which samples are close to these points. For every cluster of points, a mean is computed, and this becomes the new centroid.
#
# cuML’s KMeans supports the scalable KMeans++ initialization method. This method is more stable than randomly selecting K points.
#
# The model can take array-like objects, either in host as NumPy arrays or in device (as Numba or cuda_array_interface-compliant), as well as cuDF DataFrames as the input.
#
# For information about cuDF, refer to the [cuDF documentation](https://docs.rapids.ai/api/cudf/stable).
#
# For additional information on cuML's k-means implementation: https://rapidsai.github.io/projects/cuml/en/stable/api.html#cuml.KMeans.
# ## Imports
# +
import cudf
import cupy
import matplotlib.pyplot as plt
from cuml.cluster import KMeans as cuKMeans
from cuml.datasets import make_blobs
from sklearn.cluster import KMeans as skKMeans
from sklearn.metrics import adjusted_rand_score
# %matplotlib inline
# -
# ## Define Parameters
# +
n_samples = 100000
n_features = 2
n_clusters = 5
random_state = 0
# -
# ## Generate Data
# +
device_data, device_labels = make_blobs(n_samples=n_samples,
n_features=n_features,
centers=n_clusters,
random_state=random_state,
cluster_std=0.1)
device_data = cudf.DataFrame.from_gpu_matrix(device_data)
device_labels = cudf.Series(device_labels)
# -
# Copy dataset from GPU memory to host memory.
# This is done to later compare CPU and GPU results.
host_data = device_data.to_pandas()
host_labels = device_labels.to_pandas()
# ## Scikit-learn model
#
# ### Fit
# +
# %%time
kmeans_sk = skKMeans(init="k-means++",
n_clusters=n_clusters,
n_jobs=-1,
random_state=random_state)
kmeans_sk.fit(host_data)
# -
# ## cuML Model
#
# ### Fit
# +
# %%time
kmeans_cuml = cuKMeans(init="k-means||",
n_clusters=n_clusters,
oversampling_factor=40,
random_state=random_state)
kmeans_cuml.fit(device_data)
# -
# ## Visualize Centroids
#
# Scikit-learn's k-means implementation uses the `k-means++` initialization strategy while cuML's k-means uses `k-means||`. As a result, the centroids found by the two libraries may not match exactly, especially as the standard deviation of the points around the centroids in `make_blobs` increases.
#
# *Note*: Visualizing the centroids will only work when `n_features = 2`
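# Because the two initializations can also return the clusters in a different order, a fair centroid-to-centroid comparison pairs each cuML centroid with its nearest scikit-learn centroid first. One way to do this (a sketch on plain NumPy arrays, not the GPU objects above) is the Hungarian assignment from `scipy`:

# +
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# hypothetical centroid sets in arbitrary order (illustration only)
centers_a = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
centers_b = np.array([[5.1, 4.9], [8.9, 1.2], [0.1, -0.1]])

# pair up centroids so the total distance between matched pairs is minimal
row, col = linear_sum_assignment(cdist(centers_a, centers_b))
max_shift = np.abs(centers_a[row] - centers_b[col]).max()
# -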
# +
fig = plt.figure(figsize=(16, 10))
plt.scatter(host_data.iloc[:, 0], host_data.iloc[:, 1], c=host_labels, s=50, cmap='viridis')
#plot the sklearn kmeans centers with blue filled circles
centers_sk = kmeans_sk.cluster_centers_
plt.scatter(centers_sk[:,0], centers_sk[:,1], c='blue', s=100, alpha=.5)
#plot the cuml kmeans centers with red circle outlines
centers_cuml = kmeans_cuml.cluster_centers_
plt.scatter(cupy.asnumpy(centers_cuml[0].values),
cupy.asnumpy(centers_cuml[1].values),
facecolors = 'none', edgecolors='red', s=100)
plt.title('cuml and sklearn kmeans clustering')
plt.show()
# -
# ## Compare Results
# %%time
cuml_score = adjusted_rand_score(host_labels, kmeans_cuml.labels_.to_array())
sk_score = adjusted_rand_score(host_labels, kmeans_sk.labels_)
# +
threshold = 1e-4
passed = abs(cuml_score - sk_score) < threshold  # scores should agree to within the threshold
print('compare kmeans: cuml vs sklearn labels_ are ' + ('equal' if passed else 'NOT equal'))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="ODV-kLWnFCLp"
# ## 1. Basic Design of the Model
# 1. Output of the model: 0 or 1 (Binary Classification)
# 2. Hypothesis to be tested: $Z = W \cdot X + b$
# 3. Activation Function: $\frac{1}{1 + e^{-z}} $ (Sigmoid Function)
# + [markdown] id="_oehNhBDFCLq"
# ## 2. Import Packages
#
# 1. numpy
# 2. matplotlib
# 3. seaborn
# + id="OtEoCnoZFCLr"
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# The next libraries are optional; they just make everything look better
import matplotlib.style as style
import seaborn as sns
style.use('seaborn-poster') #sets the size of the charts
style.use('ggplot')
# + [markdown] id="BF5Hf3UhFCLv"
# ## 3. Loading the dataset
#
# + id="5rPC9-xZFCLx"
dataset = np.load('dataset.npz', encoding='ASCII')
## Get the numpy arrays from the dictionary
X_train = dataset['X_train']
Y_train = dataset['Y_train']
X_test = dataset['X_test']
Y_test = dataset['Y_test']
# + id="cOS8h4viFCL0" colab={"base_uri": "https://localhost:8080/"} outputId="9a2e5bad-5c71-434b-8b8e-481e40e1e59b"
print(X_train.shape)
print(Y_train.shape)
print(X_test.shape)
print(Y_test.shape)
# + id="ozMfJJW3FCL5" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="d341793c-2d09-441b-963c-9edc8e7adbee"
idx = np.random.randint(X_train.shape[1])
plt.imshow(X_train[:, idx].reshape(28, 28))
label = "cat" if Y_train[:, idx][0] else "bat"
print(f"Label: {label}")
# + [markdown] id="qeg9vFx5FCMA"
# ## 4. Normalizing the data
#
# Normalizing the data with the following equation:
#
# $$ X_{norm} = \frac {X - X_{min}}{X_{max} - X_{min}} $$
#
# For this pixel data, $X_{max} = 255$ and $X_{min} = 0$
#
# > After running the next cell, go back and view the raw array again
# + id="j9LgEtNsFCMB"
## Normalizing the training and testing data
X_min = 0
X_max = 255
X_train = X_train / X_max
X_test = X_test / X_max
# + [markdown] id="FB-J1xHEFCMF"
# ## 5. Helper functions for the Model:
#
# ### Sigmoid Function and Initialize Parameters function
# + id="Y1T3thCFFCMG"
def sigmoid(z):
"""
    Computes the sigmoid of a scalar or numpy array (element-wise)
Arguments:
z: Scalar or numpy array
Returns:
s: Sigmoid of z (element wise in case of Numpy Array)
"""
s = 1/(1+np.exp(-z))
return s
# + id="BrrJqSIuFCMJ"
def initialize_parameters(n_x):
"""
Initializes w to a zero vector, and b to a 0 with datatype float
Arguments:
n_x: Number of features in each sample of X
Returns:
w: Initialized Numpy array of shape (1, n_x) (Weight)
b: Initialized Scalar (bias)
"""
    w = np.zeros((1, n_x))
    b = 0.0
return w, b
# + [markdown] id="B51HHAg_FCMQ"
# Here is a summary of the equations for Forward Propagation and Backward Propagation we have used so far:
#
# For m training examples $ X_{train} $ and $ Y_{train} $:
#
# ### 5.1 Forward Propagation
#
# $$ Z^{(i)} = w \cdot X_{train}^{(i)} + b $$
#
# $$ \hat Y^{(i)} = A^{(i)} = \sigma(Z^{(i)}) = sigmoid(Z^{(i)}) $$
#
# $$ \mathcal{L}(\hat Y^{(i)}, Y_{train}^{(i)}) = \mathcal{L}(A^{(i)}, Y_{train}^{(i)}) = -[Y_{train}^{(i)} \log(A^{(i)}) + (1 - Y_{train}^{(i)}) \log(1 - A^{(i)})] $$
#
# $$ J = \frac{1}{m} \sum_1^m \mathcal{L} (A^{(i)}, Y_{train}^{(i)}) $$
#
#
# ### 5.2 Backward Propagation - Batch Gradient Descent
#
# $$ \frac{\partial J}{\partial w} = \frac{1}{m} (A - Y) \cdot X^T $$
#
# $$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_1^m (A - Y) $$
#
#
# > Note: $ \frac{\partial J}{\partial w} $ is represented as dw, and $ \frac{\partial J}{\partial b}$ is represented as db
#
# + id="Th4xkEvrFCMM"
def compute_cost(A, Y, m):
"""
Calculates the Cost using the Cross Entropy Loss
Arguments:
    A: Computed probabilities, numpy array
    Y: Known labels, numpy array
    m: Number of training examples
Returns:
cost: The computed Cost
"""
cost = np.sum(((- np.log(A))*Y + (-np.log(1-A))*(1-Y)))/m
return np.squeeze(cost)
# + id="xKf1Dc2lFCMQ"
def propagate(w, b, X, Y):
"""
Performs forward and backward propagation for the Logistic Regression model
Arguments:
w: The Weight Matrix of dimension (1, n_x)
b: Bias
X: Input Matrix, with shape (n_x, m)
Y: Label Matrix of shape (1, m)
Returns:
dw: Gradient of the weight matrix
db: Gradient of the bias
cost: Cost computed on Calculated Probability, and output Label
"""
m = X.shape[1]
A = sigmoid((w @ X)+b)
cost = compute_cost(A, Y, m)
dw = (np.dot(X,(A-Y).T).T)/m
db = (np.sum(A-Y))/m
assert(dw.shape == w.shape)
assert(db.dtype == float)
return dw, db, cost
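# + [markdown]
# A quick sanity check on analytic gradients like these is a finite-difference comparison on a tiny random problem. The cell below is a self-contained sketch for db only (it restates the sigmoid and cost inline rather than calling the functions above, and all names are illustrative).
# +
import numpy as np

rng = np.random.default_rng(0)
Xc = rng.standard_normal((3, 5))          # 3 features, 5 samples
Yc = (rng.random((1, 5)) > 0.5) * 1.0
wc = rng.standard_normal((1, 3)) * 0.01
bc = 0.0

def cost_at(b_val):
    # logistic (cross-entropy) cost as a function of the bias only
    A = 1 / (1 + np.exp(-(wc @ Xc + b_val)))
    return np.mean(-(Yc * np.log(A) + (1 - Yc) * np.log(1 - A)))

eps = 1e-6
numeric_db = (cost_at(bc + eps) - cost_at(bc - eps)) / (2 * eps)

A = 1 / (1 + np.exp(-(wc @ Xc + bc)))
analytic_db = np.sum(A - Yc) / Xc.shape[1]  # same formula as db in propagate
# -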
# + [markdown] id="DJwQjwOiFCMU"
# ### 5.3 Optimization
#
# For a parameter $ \theta $, the gradient descent update rule is given by:
# $$ \theta := \theta - \alpha \frac{\partial J}{\partial \theta} $$
#
# where $\alpha$ is the learning rate
# + id="FYFW6bSkFCMV"
def fit(w, b, X, Y, num_iterations, learning_rate, print_freq=100):
"""
    Given the parameters of the model, fits the model to the given input matrix and output labels by performing batch gradient descent for the given number of iterations.
Arguments:
w: The Weight Matrix of dimension (1, n_x)
b: Bias
X: Input Matrix, with shape (n_x, m)
Y: Label Matrix of shape (1, m)
    num_iterations: Number of iterations of batch gradient descent to perform
    learning_rate: Learning rate for the gradient descent updates
    print_freq: Frequency (in iterations) at which the cost is printed
Returns:
w: Optimized weight matrix
b: optimized bias
    costs: List of the cost recorded at every iteration (printed every print_freq iterations; no printing if print_freq is 0)
"""
costs = []
for i in range(num_iterations):
## 1. Calculate Gradients and cost
dw, db, cost = propagate(w, b, X, Y)
costs.append(cost)
if print_freq and i % print_freq == 0:
print(f"Cost after iteration {i}: {cost}")
## 2. Update parameters
w = w - (learning_rate*dw)
b = b - (learning_rate*db)
return w, b, costs
# + [markdown] id="oBa_RviiFCMY"
# ### 5.4 Prediction
# Using the following equation to determine the class that a given sample belongs to:
#
# $$
# \begin{equation}
# Y_{prediction}^{(i)} =
# \begin{cases}
# 1 \text{, if } \hat Y^{(i)} \ge 0.5\\
# 0 \text{, if } \hat Y^{(i)} \lt 0.5\\
# \end{cases}
# \end{equation}
# $$
#
# + id="fcLm6a7cFCMY"
def predict(w, b, X):
"""
Predicts the class which the given feature vector belongs to given Weights and Bias of the model
Arguments:
w: The Weight Matrix of dimension (1, n_x)
b: Bias
X: Input Matrix, with X.shape[0] = n_X
Returns:
Y_prediction: Predicted labels
"""
A = sigmoid((w @ X) + b)
Y_prediction = (A >= 0.5) * 1.0
return Y_prediction
# + [markdown] id="Dv7psRMoFCMb"
# ## 6. Building the Model
#
# Now we have assembled all the individual pieces required to create the Logistic Regression model.
# The next function creates the model and calculates its train and test accuracy.
#
#
# + id="5RUTGfTyFCMc"
def model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_freq):
"""
    Creates a model, fits it to the training data, and uses it to compute the train and test accuracy.
Arguments:
X_train: Training Data X
Y_train: Training Data Y
X_test: Testing Data X
Y_test: Testing data Y
num_iterations: Number of iterations of bgd to perform
learning_rate: Learning Rate of the model
print_freq: Frequency of recording the cost
Returns:
-None-
"""
w, b = initialize_parameters(X_train.shape[0])
w, b, costs = fit(w, b, X_train, Y_train, num_iterations, learning_rate, print_freq)
Y_prediction_train = predict(w, b, X_train)
Y_prediction_test = predict(w, b, X_test)
costs = np.squeeze(costs)
print(f"train accuracy: {100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100} %")
print(f"test accuracy: {100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100} %")
plt.plot(costs)
plt.ylabel('cost')
    plt.xlabel('iterations')
plt.title(f"Learning rate = {learning_rate}")
plt.show()
# + id="vEmGRqtqFCMf" colab={"base_uri": "https://localhost:8080/", "height": 711} outputId="c32e86df-f546-434e-d2d1-47ad26c74f6c"
model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.1, print_freq=100)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import numpy as np
from matplotlib import pyplot as plt
import resnet_frelu as resnet
import os
from main import AverageMeter, ProgressMeter, accuracy, train, validate
import time
# <h2>Validate current set of weights</h2>
model = torch.load('frelu_resnet50.pth')
# +
data = 'C://ImageNet/'
traindir = os.path.join(data, 'train')
valdir = os.path.join(data, 'val')
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
total_steps = 300000
learning_rate = 0.001
criterion = torch.nn.CrossEntropyLoss().cuda()
train_dataset = datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
transforms.ToTensor(),
normalize
]))
# -
val_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(valdir, transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize
])),
batch_size=100, shuffle=True,
num_workers=4, pin_memory=True)
model.cuda()
validate(val_loader, model, criterion, {})
# <h2>Create and train new ResNet with FReLU activations (primitive example)</h2>
model = resnet.resnet101()
model
optimizer = torch.optim.SGD(
model.parameters(),
lr=learning_rate / 10,
momentum=0.9,
weight_decay=1e-4,
)
model.train()
train_sampler = None
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=4, shuffle=(train_sampler is None),
num_workers=1, pin_memory=True, sampler=train_sampler)
for e in range(10):
train(train_loader, model, criterion, optimizer, e)
torch.save(model, 'FResNet50_' + str(e) + '.pth')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python session - 3.1
#
# ## Functions and modules
#
# `Pandas` cheat sheet: https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf
#
# Software Carpentry reference files: http://tobyhodges.github.io/python-novice-gapminder/
# ## Functions
#
# Functions are reusable blocks of code that you can name and execute any number of times from different parts of your script(s). This reuse is known as "calling" the function. Functions are important building blocks of software.
#
# There are several built-in functions of Python, which can be called anywhere (and any number of times) in your current program. You have been using built-in functions already, for example, `len()`, `range()`, `sorted()`, `max()`, `min()`, `sum()` etc.
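# A few of those built-ins in action:

# +
values = [3, 1, 4, 1, 5]
print(len(values))                   # number of items
print(sorted(values))                # a new, sorted list
print(max(values), min(values), sum(values))
# -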
# #### Structure of writing a function:
#
# - `def` (keyword) + function name (you choose) + `()`.
# - newline with 4 spaces or a tab + block of code  # Note: code at indentation level 0 is always executed
# - Call your function using its name
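# For example, a minimal function following that structure (a greeting function chosen purely for illustration):

# +
def greet(name):
    message = "Hello, " + name + "!"
    return message

print(greet("Ada"))
# -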
# +
## Non parametric function
# Define a function that prints a sum of number1 and number2 defined inside the function
get_sum()
# -
# Parametric function
# Define a function that prints a sum of number1 and number2 provided by the user
# Hint: get_sum_param(number1, number2)
# Returning values
# Define a function that 'returns' a sum of number1 and number2 provided by the user
# Hint: print(get_sum_param(number1, number2))
# +
# Local Vs. global variable
# Define a function that returns a sum of number1 and number2 to a variable
# and print it after calling the function
# Hint: returned_value = get_sum_param(number1, number2)
# -
# ### Exercises: write old code into a function
# Optional exercise
# Let’s take one of our older code blocks and write it in a function
# ### Libraries and Modules
#
# One of the great things about Python is the free availability of a _huge_ number of libraries (also called packages) that can be imported into your code and (re)used.
#
# Modules contain functions for use by other programs and are developed with the aim of solving some particular problem or providing particular, often domain-specific, capabilities. A library is a collection of modules, but the terms are often used interchangeably, especially since many libraries only consist of a single module (so don’t worry if you mix them).
#
# In order to import a library, it must be available on your system or must be installed first.
#
# A large number of libraries are already available for import in the standard distribution of Python: this is known as the standard library. If you installed the Anaconda distribution of Python, you have even more libraries already installed - mostly aimed at data science.
#
# Importing a library is easy:
#
# - Import (keyword) + library name, for example:
# - `import os # contains functions for interacting with the operating system`
# - `import sys # contains utilities to process command line arguments`
#
# More at: https://pypi.python.org/pypi
# +
import os
# Get current directory
# Make new directory
help(os) # manual page created from the module's docstrings
# +
import sys
# sys.argv
# -
# ### Using loops to iterate through files in a directory
# +
# define a function that lists all the files in the folder called data
import os
def read_each_filename(pathname):
...
pathname = 'data' # name of path with multiple files
# +
# define a function that reads and prints each line of each file in the folder called data
import os
def read_each_line_of_each_file(pathname): # name of path with multiple files
...
pathname = 'data' # name of path with multiple files
# Hints:
# Options for opening files
# option-1: with open("{}/{}".format(pathname, filename)) as in_fh:
# option-2: with open('%s/%s' % (pathname, filename)) as in_fh:
# option-3: with open(pathname + '/' + filename) as in_fh:
# option-4: with open(os.path.join(pathname, filename)) as in_fh:
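# A possible solution sketch for the two exercise stubs above, assuming the folder is called `data` as in the hints (any of the four `open` options would work; option 4 is used here as it is OS-independent):

```python
import os

def list_filenames(pathname):
    """Return the names of all regular files in the given directory."""
    return sorted(name for name in os.listdir(pathname)
                  if os.path.isfile(os.path.join(pathname, name)))

def print_each_line(pathname):
    """Read and print every line of every file in the directory."""
    for filename in list_filenames(pathname):
        with open(os.path.join(pathname, filename)) as in_fh:
            for line in in_fh:
                print(line.rstrip())
```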
# +
# Exercise: Go through each filename in the directory 'data'
# Print the names of the files that contain the keyword 'asia'
# Open each file containing the keyword 'Asia' and print all the entries
# Print entries containing gdp information on 'Japan', 'Korea', 'China' and 'Taiwan'
# -
# ### Examples of importing basic modules.
#
# #### Questions
# - How can I read tabular data?
#
# #### Objectives
# - Import the Pandas library.
# - Use Pandas to load a simple CSV data set.
# - Get some basic information about a Pandas DataFrame.
import pandas
# +
# Use Oceania data here
# -
# #### Aside: Namespaces
# Python uses namespaces a lot, to ensure appropriate separation of functions, attributes, methods etc. between modules and objects. When you import an entire module, the functions and classes available within that module are loaded in under the module's namespace - `pandas` in the example above.
# It is possible to customise the namespace at the point of import, allowing you to e.g. shorten/abbreviate the module name to save some typing:
# +
# import pandas as pd
# -
# Also, as in the examples above, if you need only a single function from a module, you can import that directly into your main namespace (where you don't need to specify the module before the name of the function):
# +
# from pandas import read_csv
# -
# #### Conventions
# - You should perform all of your imports at the beginning of your program. This ensures that
# - users can easily identify the dependencies of a program, and
# - that any missing dependencies (causing fatal `ImportError` exceptions) are caught early in execution
# - the shortening of `numpy` to `np` and `pandas` to `pd` are very common, and there are others too - watch out for this when e.g. reading docs and guides/SO answers online.
# ### Exercises - Importing
#
# Use this link to follow further exercises: http://tobyhodges.github.io/python-novice-gapminder/37-reading-tabular/
# - Use `index_col` to specify that a column’s values should be used as row headings.
# - Use `DataFrame.info` to find out more about a dataframe.
# - The `DataFrame.columns` variable stores information about the dataframe’s columns.
# - Use `DataFrame.T` to transpose a dataframe.
# - Use `DataFrame.describe` to get summary statistics about data.
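# The points above can be tried out on a small in-memory CSV (the values below are made up for illustration):

```python
import io
import pandas as pd

# A tiny stand-in for a gapminder-style file (hypothetical values)
csv_text = """country,gdpPercap_1952,gdpPercap_1957
Australia,10039.6,10949.65
New Zealand,10556.58,12247.4
"""

df = pd.read_csv(io.StringIO(csv_text), index_col='country')
df.info()                  # dtypes, non-null counts, memory usage
print(df.columns.tolist()) # ['gdpPercap_1952', 'gdpPercap_1957']
print(df.T)                # transpose: years as rows, countries as columns
print(df.describe())       # summary statistics per column
```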
# ### Reading other data
#
# Read the data in `gapminder_gdp_americas.csv` (which should be in the same directory as `gapminder_gdp_oceania.csv`) into a variable called `americas` and display its summary statistics.
# ### Inspecting Data
#
# After reading the data for the Americas, use `help(americas.head)` and `help(americas.tail)` to find out what `DataFrame.head` and `DataFrame.tail` do.
#
# What method call will display the first three rows of this data?
# What method call will display the last three columns of this data? (Hint: you may need to change your view of the data.)
# ### Reading Files in Other Directories
#
# The data for your current project is stored in a file called `microbes.csv`, which is located in a folder called `field_data`. You are doing analysis in a notebook called `analysis.ipynb` in a sibling folder called `thesis`:
#
# ```
# your_home_directory
# +-- field_data/
# | +-- microbes.csv
# +-- thesis/
# +-- analysis.ipynb
# ```
#
# What value(s) should you pass to `read_csv` to read `microbes.csv` in `analysis.ipynb`?
# ### Writing data
#
# As well as the `read_csv` function for reading data from a file, Pandas provides a `to_csv` function to write dataframes to files. Applying what you’ve learned about reading from files, write one of your dataframes to a file called `processed.csv`. You can use help to get information on how to use `to_csv`.
# #### Aside: Your Own Modules
# Whenever you write some python code and save it as a script, with the `.py` file extension, you are creating your own module. If you define functions within that module, you can load them into other scripts and sessions.
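# A minimal sketch of this in action; the module name `mymaths` and its contents are made up for the example:

```python
import os
import sys
import tempfile

# Write a tiny module to disk (the name 'mymaths' is hypothetical)
module_dir = tempfile.mkdtemp()
with open(os.path.join(module_dir, 'mymaths.py'), 'w') as fh:
    fh.write("def double(x):\n    return 2 * x\n")

# Make the directory importable, then load the module like any other
sys.path.insert(0, module_dir)
import mymaths

print(mymaths.double(21))  # prints 42
```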
# ### Some Interesting Module Libraries to Investigate
# - os
# - sys
# - shutil
# - random
# - collections
# - math
# - argparse
# - time
# - datetime
# - numpy
# - scipy
# - matplotlib
# - pandas
# - scikit-learn
# - requests
# - biopython
# - openpyxl
| Day2/3_1-functions-and-modules.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Deep Q-Network (DQN)
# ---
#
# #### Import the Necessary Packages
# +
import gym, random
import numpy as np
from collections import namedtuple, deque
import matplotlib.pyplot as plt
# %matplotlib inline
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# -
# #### Set Configuration
BUFFER_SIZE = int(1e5) # replay buffer size
BATCH_SIZE = 64 # minibatch size
GAMMA = 0.99 # discount factor
TAU = 1e-3 # for soft update of target parameters
LR = 5e-4 # learning rate
UPDATE_EVERY = 4 # how often to update the network
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# #### Define DQN Agent and Model
# +
class Agent():
"""Interacts with and learns from the environment."""
def __init__(self, state_size, action_size, seed):
"""Initialize an Agent object.
Params
======
state_size (int): dimension of each state
action_size (int): dimension of each action
seed (int): random seed
"""
self.state_size = state_size
self.action_size = action_size
self.seed = random.seed(seed)
# Q-Network
self.qnetwork_local = QNetwork(state_size, action_size, seed).to(device)
self.qnetwork_target = QNetwork(state_size, action_size, seed).to(device)
self.optimizer = optim.Adam(self.qnetwork_local.parameters(), lr=LR)
# Replay memory
self.memory = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, seed)
# Initialize time step (for updating every UPDATE_EVERY steps)
self.t_step = 0
def step(self, state, action, reward, next_state, done):
# Save experience in replay memory
self.memory.add(state, action, reward, next_state, done)
# Learn every UPDATE_EVERY time steps.
self.t_step = (self.t_step + 1) % UPDATE_EVERY
if self.t_step == 0:
# If enough samples are available in memory, get random subset and learn
if len(self.memory) > BATCH_SIZE:
experiences = self.memory.sample()
self.learn(experiences, GAMMA)
def act(self, state, eps=0.):
"""Returns actions for given state as per current policy.
Params
======
state (array_like): current state
eps (float): epsilon, for epsilon-greedy action selection
"""
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
        self.qnetwork_local.eval() # switch the local network to eval mode
with torch.no_grad():
action_values = self.qnetwork_local(state)
        self.qnetwork_local.train() # switch the local network back to train mode
# Epsilon-greedy action selection
if random.random() > eps:
return np.argmax(action_values.cpu().data.numpy())
else:
return random.choice(np.arange(self.action_size))
def learn(self, experiences, gamma):
"""Update value parameters using given batch of experience tuples.
Params
======
experiences (Tuple[torch.Tensor]): tuple of (s, a, r, s', done) tuples
gamma (float): discount factor
"""
states, actions, rewards, next_states, dones = experiences
# Get max predicted Q values (for next states) from target model
target_q_next = self.qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1)
"""
# disregard action, get best value!
# why so many next states? answer: the qnetwork will return each corresponding next states action, the max will pick from each the best action
# explanation on detach (https://discuss.pytorch.org/t/detach-no-grad-and-requires-grad/16915/7)
"""
# Compute Q targets for current states
target_q = rewards+(gamma*target_q_next*(1-dones))
# Get expected Q values from local model
expected_q = self.qnetwork_local(states).gather(1, actions)
"""
this uses gather instead of detach like target since it only give a s*** to action taken
# explanation on gather (https://stackoverflow.com/questions/50999977/what-does-the-gather-function-do-in-pytorch-in-layman-terms)
"""
# Compute loss
loss = F.mse_loss(expected_q, target_q)
# Minimize the loss
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
# ------------------- update target network ------------------- #
self.soft_update(self.qnetwork_local, self.qnetwork_target, TAU)
def soft_update(self, local_model, target_model, tau):
"""Soft update model parameters.
θ_target = τ*θ_local + (1 - τ)*θ_target
Params
======
local_model (PyTorch model): weights will be copied from
target_model (PyTorch model): weights will be copied to
tau (float): interpolation parameter
"""
for target_param, local_param in zip(target_model.parameters(), local_model.parameters()):
target_param.data.copy_(tau*local_param.data + (1.0-tau)*target_param.data)
class ReplayBuffer:
"""Fixed-size buffer to store experience tuples."""
def __init__(self, action_size, buffer_size, batch_size, seed):
"""Initialize a ReplayBuffer object.
Params
======
action_size (int): dimension of each action
buffer_size (int): maximum size of buffer
batch_size (int): size of each training batch
seed (int): random seed
"""
self.action_size = action_size
self.memory = deque(maxlen=buffer_size)
self.batch_size = batch_size
self.experience = namedtuple("Experience", field_names=["state", "action", "reward", "next_state", "done"])
self.seed = random.seed(seed)
def add(self, state, action, reward, next_state, done):
"""Add a new experience to memory."""
e = self.experience(state, action, reward, next_state, done)
self.memory.append(e)
def sample(self):
"""Randomly sample a batch of experiences from memory."""
experiences = random.sample(self.memory, k=self.batch_size)
states = torch.from_numpy(np.vstack([e.state for e in experiences if e is not None])).float().to(device)
actions = torch.from_numpy(np.vstack([e.action for e in experiences if e is not None])).long().to(device)
rewards = torch.from_numpy(np.vstack([e.reward for e in experiences if e is not None])).float().to(device)
next_states = torch.from_numpy(np.vstack([e.next_state for e in experiences if e is not None])).float().to(device)
dones = torch.from_numpy(np.vstack([e.done for e in experiences if e is not None]).astype(np.uint8)).float().to(device)
return (states, actions, rewards, next_states, dones)
def __len__(self):
"""Return the current size of internal memory."""
return len(self.memory)
# -
class QNetwork(nn.Module):
"""Actor (Policy) Model."""
def __init__(self, state_size, action_size, seed, fc1_units=64, fc2_units=64):
"""Initialize parameters and build model.
Params
======
state_size (int): Dimension of each state
action_size (int): Dimension of each action
seed (int): Random seed
fc1_units (int): Number of nodes in first hidden layer
fc2_units (int): Number of nodes in second hidden layer
"""
super(QNetwork, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(state_size, fc1_units)
self.fc2 = nn.Linear(fc1_units, fc2_units)
self.fc3 = nn.Linear(fc2_units, action_size)
def forward(self, state):
"""Build a network that maps state -> action values."""
x = F.relu(self.fc1(state))
x = F.relu(self.fc2(x))
return self.fc3(x)
# #### Instantiate the Environment and Agent
# +
env = gym.make('LunarLander-v2')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
agent = Agent(state_size=8, action_size=4, seed=0)
# -
# #### Train the Agent
#
# Run the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance!
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
state = env.reset()
score = 0
for t in range(max_t):
action = agent.act(state, eps)
next_state, reward, done, _ = env.step(action)
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=200.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
break
return scores
# train the agent
scores = dqn(n_episodes=2000, max_t=1000)
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# #### Watch a Smart Agent!
#
# In the next code cell, you will load the trained weights from file to watch a smart agent!
# +
# load the weights from file
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))
for i in range(5):
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
| 05_deep_q_learning/dqn_her/notebook/.ipynb_checkpoints/dqn_her-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Using multi-linear regressions to predict housing prices
#
# The workbook below creates a model, based on a handful of key variables, that can be used to predict housing prices. It explores different variables and different models. The result is reasonably successful, but it relies on metrics that are delayed in reporting.
#
# Next steps are to leverage this model with metrics that are more current or forward looking. This would create a model with a view into the market today, as opposed to a month or two ago.
#
# Idea Source: https://python-bloggers.com/2021/01/predicting-home-price-trends-based-on-economic-factors-with-python/
# ## Supply Side Factors That Affect Home Prices
# Various factors that affect the supply of homes available for sale are discussed below:
#
# ### Months of Supply
# Months of supply is the basic measure of supply itself in the real estate market (not a factor as such). Houses for sale is another measure of the same.
#
# Months of Supply: https://fred.stlouisfed.org/graph/?g=zneA <br>
# Monthly Homes For Sale and Homes Sold (SA): https://www.census.gov/construction/nrs/historical_data/index.html
#
# ### Differential Migration Across Cities
# The differential migration across cities can possibly be measured directly via change-of-address requests, but since that data is not readily available, the total number of residence moves can be used. What this, however, does not reflect is the change in pattern of movement. The move can be from rural or suburban to cities or the other way round, and both have a very different impact on the housing market. So, net domestic migration into or out of metropolises is a better measure of the differential migration, and hence that has been taken as a parameter along with the number of total movers.
#
# Data Source (quarterly): https://www.census.gov/data/tables/time-series/demo/geographic-mobility/historic.html
# 'Interpolated Movers' and 'Interpolated Migration' NOT USED as not monthly
#
# ### Unemployment
# Unemployment can also affect both demand and supply in the real estate industry. A high unemployment rate can mean that people simply do not have the money to spend on houses. It can also mean that there is lower investment in the industry and hence lower supply.
#
# Monthly UNEMP: https://fred.stlouisfed.org/series/UNRATE
#
# ### Mortgage Rate
# Mortgage rates are a huge factor that decide how well the real estate market will perform. It plugs into both supply and demand side of the equation. It affects the availability of financing options to buyers, as well as the ease of financing new constructions. It also affects the delinquency rate and the number of refinances for mortgages. People are more likely to default on a higher mortgage rate!
#
# Monthly Mortgage Rate: https://fred.stlouisfed.org/graph/?g=zneW
#
# ### Federal Funds Rate
# Although the mortgage rate and the Federal Funds Rate are usually closely related, sometimes they may not be. Historically, there have been occasions when the Fed lowered the Fed Funds Rate but the banks did not lower mortgage rates, or not in the same proportion. Moreover, the Federal Funds Rate influences multiple factors in the economy beyond just the real estate market, many of which indirectly influence it. It is a key lever for changing the way an economy performs.
#
# Monthly Fed Funds Rate: https://fred.stlouisfed.org/series/DFF#0
#
# ### USA GDP
# The GDP is a measure of output of the economy overall, and the health of the economy. An economy that is doing well usually implies more investment and economic activity, and more buying.
#
# Data Sources:
# Monthly GDP Index: https://fred.stlouisfed.org/graph/?g=znfe <br>
# Quarterly Real GDP (adjusted for inflation): https://fred.stlouisfed.org/series/GDPC1#0
# NOT USED as not monthly
#
# ### Building Permits
# Number of building permits allotted is a measure of not just health of real estate industry, but how free the real estate market is, effectively. It is an indicator of the extent of regulation/de-regulation of the market. It affects the supply through ease of putting a new property on the market.
#
# Monthly Permits-Number and Permit-Valuation: https://www.census.gov/construction/bps/
#
# ### Housing Starts
# This is a measure of the number of units of new housing projects started in a given period. Sometimes it is also measured in valuation of housing projects started in a given period.
#
# Monthly Housing Starts: https://www.census.gov/construction/nrc/historical_data/index.html
# Seasonally Adjusted 1 unit
#
# ### Construction Spending
# The amount spent (in millions of USD, seasonally adjusted), is a measure of the activity in the construction industry, and an indicator of supply for future months. It can also be taken as a measure of confidence, since home builders will spend money in construction only if they expect the industry to do well in the future months.
#
# Monthly CONST: https://www.census.gov/construction/c30/c30index.html
# Private Residential, Seasonally Adjusted
#
#
# ## Demand Side Factors That Affect Home Prices
# Demand for housing, and specifically, home ownership, is affected by many factors, some of which are closely inter-related. Many of these factors also affect the supply in housing market. Below are a few factors that are prominent in influencing the demand for home buying:
#
# ### Affordability: Wages & Disposable Personal Income
# The “weekly earnings” are taken as a measure of overall wages and earning of all employed persons.
#
# The other measure is disposable personal income: how much of the earning is actually available to an individual for expenditure. This is an important measure as well, as it takes into account other factors like taxes etc.
#
# Data Sources:
# Median usual weekly nominal earnings, Wage and salary workers 25 years and over: https://fred.stlouisfed.org/series/LEU0252887700Q#0
# Monthly Disposable Income: https://fred.stlouisfed.org/series/DSPIC96#0
#
# ### Delinquency Rate on Mortgages
# The delinquency rate on housing mortgages is an indicator of the number of foreclosures in real estate. This is an important factor in both demand and supply. A higher delinquency rate (higher than the credit card delinquency rate) in the last economic recession was a key indicator of the recession and of the poorly performing industry and economy as a whole. It also indicates how feasible it is for a homeowner to buy a house at a certain point in time and is an indicator of the overall demand in the industry.
#
# Data Source: https://fred.stlouisfed.org/series/DRSFRMACBS#0
# NOT USED as not monthly
#
# ### Personal Savings
# The extent to which people are utilizing their personal income for savings matters in overall investments and capital availability, and the interest rate for loans (and not just the mortgage rate). It is also an indicator of how much the current population is inclined to spend their money, vs save it for future use. This is an indicator of the demand for home ownership as well.
#
# Monthly Savings: https://fred.stlouisfed.org/series/PMSAVE
#
# ### Behavioural Changes & Changes in Preferences
# Changes in home ownership indicate a combination of factors including change in preferences and attitudes of people towards home buying. Change in cultural trends can only be captured by revealed preferences, and this metric can be taken as a revealed metric for propensity for home buying.
#
# The other metric to track changes in preferences is personal consumption expenditure. For example, if expenditure is increasing but there is no corresponding increase in homeownership, it would indicate a change in preferences away from home buying and ownership. Maybe people prefer to rent a home rather than buy one. Hence, both of these parameters are used.
#
# Data Sources:
# Home Ownership Rate (NOT USED as not monthly): http://bit.ly/homeownershiprate
# Monthly Consumption: https://fred.stlouisfed.org/series/PCE
#
# ## Building The Model
#
# The S&P Case-Shiller Housing Price Index is taken as the y variable, or dependent variable, as an indicator of change in prices.
#
# Monthly HPI: https://fred.stlouisfed.org/series/CSUSHPISA
#
# ## Data Cleanup
#
# I have run the regression with fewer parameters, using only monthly data. Similar analysis has been done by others with more parameters reduced to quarterly frequency, but it did not generate better results.
#
# It's important to note that not all variables will be relevant and contribute positively to the model. Some variables that are tested will be discarded from the final analysis.
#
# The data we are working with is time series data. So, all the time-date data in every variable must be converted to Python’s datetime format, for it to be read as time-date data and processed as such when up-sampling or down-sampling. This will also help with merging different series together, using the datetime columns.
#
# The regression itself does not run on time-series data, so the datetime columns are removed in the final data for the regression.
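# A minimal sketch of that conversion-and-merge step; the column names and values below are illustrative, not taken from the actual files:

```python
import pandas as pd

# Two hypothetical monthly series with differently formatted date columns
supply = pd.DataFrame({'Period': ['01/01/2020', '01/02/2020'],
                       'Months of Supply': [5.8, 6.1]})
rates = pd.DataFrame({'Period': ['2020-01-01', '2020-02-01'],
                      'Mortgage Rate': [3.62, 3.47]})

# Convert both date columns to datetime so the series align when merged
supply['Period'] = pd.to_datetime(supply['Period'], dayfirst=True)
rates['Period'] = pd.to_datetime(rates['Period'])

merged = supply.merge(rates, on='Period')

# Drop the datetime column before handing the data to the regression
x = merged.drop(columns=['Period'])
```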
# ## Multiple Linear Regression for Prediction
# We'll first run a multiple linear regression and then compare it to some other models to verify it is a decent fit.
# load modules
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import SGDRegressor
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sb
# load data
date_col1 = ['Period']
date_col2 = ['Quarter Starting']
date_col3 = ['Year']
date_col4 = ['Quarter']
x_monthly = pd.read_csv('x_monthly.csv', parse_dates=date_col1, dayfirst=True) #Monthly Data
y_monthly = pd.read_csv('y.csv', parse_dates=date_col1, dayfirst=True) #Monthly y Data
print(x_monthly.dtypes, x_monthly.shape)
print(y_monthly.dtypes, y_monthly.shape)
# Variables w NE are versions for Northeastern US and are not used in national analysis
# +
# run initial regression
x = x_monthly[['UNEMP','Months of Supply','Homes for Sale','CONST','Consumption',
               'Disposable Income', ]] # removing date column for partial variable list
"""
x = x_monthly[['UNEMP', 'Months of Supply','Homes for Sale', 'Mortgage Rate',
'Permits-Number', 'Permits-Valuation', 'CONST','Housing Starts', 'Consumption',
'Disposable Income', 'Fed Funds Rate', 'Savings', 'Homes Sold',
               'GDP Monthly Index']] # includes non-relevant variables"""
y = y_monthly['HPI'] #Removing date column
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.35, shuffle=False, stratify=None)
reg = LinearRegression()
reg.fit(x_train, y_train)
y_predict = reg.predict(x_test)
print(reg.intercept_)
print('R^2 Value of Train:', reg.score(x_train, y_train))
print('R^2 Value of Test:', reg.score(x_test, y_test))
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_predict))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_predict))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_predict)))
print('HPI Stats:')
print(y_test.describe())
# -
# An R² value above 0.95 is normally considered good, so we are in good shape there.
#
# We want the Root Mean Squared Error to be less than 10% of the mean value of HPI, and much less than the standard deviation of the HPI in the test data. This indicates that the model is fairly accurate in its predictions and, again, we are good here.
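# That rule of thumb can be computed directly. A minimal sketch with synthetic numbers standing in for `y_test` and `y_predict`:

```python
import numpy as np

# Synthetic actual/predicted HPI values (illustrative only)
y_actual = np.array([150.0, 160.0, 170.0, 180.0])
y_pred = np.array([152.0, 158.0, 173.0, 178.0])

rmse = np.sqrt(np.mean((y_actual - y_pred) ** 2))
ratio = rmse / y_actual.mean()
print(f"RMSE is {ratio:.1%} of the mean")  # well under the 10% threshold
```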
#
# Variable Notes: <br>
# 0 UNEMP increases accuracy a good bit <br>
# 1 Months of Supply does little <br>
# 2 Homes for Sale helps a good bit <br>
# 3 Mortgage Rate actually hurts, but just a little - DON'T INCLUDE <br>
# 4 Permits-Number hurts a little - DON'T INCLUDE <br>
# 5 Permits-Valuation hurts a little - DON'T INCLUDE <br>
# 6 CONST helps a good bit <br>
# 7 Housing Starts hurts a little - DON'T INCLUDE <br>
# 8 Consumption helps a good bit <br>
# 9 Disposable Income helps some <br>
# 10 Fed Funds Rate helps a little <br>
# 11 Savings brings accuracy down a good bit - DON'T INCLUDE <br>
# 12 Homes Sold brought accuracy down and increased error - DON'T INCLUDE <br>
# 13 GDP Monthly Index brings both down - DON'T INCLUDE <br>
# get importance of features
importance = reg.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
# lets plot predictions vs actuals to see how the spread looks
plt.figure(figsize=(8,8))
sb.regplot(x=y_test, y=y_predict, ci=None, color="Blue")
plt.xlabel("Actual HPI")
plt.ylabel('Predicted HPI')
plt.title("Actual vs Predicted HPI: Monthly Data")
# **The graph shows that the model is less accurate at higher prices, and that it undervalues them. This is the range we have been in during late 2021, suggesting either that the market is overvalued or that certain variables are not being taken into account.**
#
# +
# predict for all periods and compare on timeline to HPI
x_whole = x
y_predict_whole = reg.predict(x_whole)
y_monthly['y_predict_whole'] = y_predict_whole
y_monthly
# -
plt.figure(figsize=(15,7))
plt.plot(y_monthly['Period'], y_monthly['HPI'], label = "HPI",linestyle=":")
plt.plot(y_monthly['Period'], y_monthly['y_predict_whole'], label = "Predicted",linestyle="-")
plt.title("Actual vs Predicted HPI: Monthly Data")
plt.legend()
plt.show()
# +
# Trying different regression model
# Lasso
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
alpha = 0.1
lasso = Lasso(alpha=alpha, max_iter=2000)
y_pred_lasso = lasso.fit(x_train, y_train).predict(x_test)
r2_score_lasso = r2_score(y_test, y_pred_lasso)
print(lasso)
print("r^2 on test data : %f" % r2_score_lasso)
plt.figure(figsize=(8,8))
sb.regplot(x=y_test, y=y_pred_lasso, ci=None, color="Blue")
plt.xlabel("Actual HPI")
plt.ylabel('Predicted HPI')
plt.title("Actual vs Predicted HPI: Monthly Data")
# get importance of features
importance = lasso.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
print('##############################################')
# ElasticNet
from sklearn.linear_model import ElasticNet
enet = ElasticNet(alpha=alpha, l1_ratio=0.7, max_iter=2000)
y_pred_enet = enet.fit(x_train, y_train).predict(x_test)
r2_score_enet = r2_score(y_test, y_pred_enet)
print(enet)
print("r^2 on test data : %f" % r2_score_enet)
plt.figure(figsize=(8,8))
sb.regplot(x=y_test, y=y_pred_enet, ci=None, color="Blue")
plt.xlabel("Actual HPI")
plt.ylabel('Predicted HPI')
plt.title("Actual vs Predicted HPI: Monthly Data")
# get importance of features
importance = enet.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
# -
# Linear regression works just fine here, and even slightly better than some of the other models.
| Real Estate Price Forecast/Real Estate Forecast - National Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# 
#
# ### Performance Suite Results
# # Crux Plugin Repository Connector
# ## Introduction
#
# Following are detailed results for the Crux Connector's performance at various scales. In each set of results, the Crux Connector was tested under the following conditions (full details are available in each results directory's `deployment` file):
#
# - Running through a Kubernetes pod with a single OMAG Platform running both the Performance Test Suite server and the Crux Plugin Repository
# - Running a co-located Bitnami Kafka (and Zookeeper) pod on the same Kubernetes node running the OMAG Platform
# - Resources allocated are a minimum of 2 cores to a maximum of 4 cores, and a minimum of 8GB memory to a maximum of 16GB memory
#
# ### Versions
#
# Component | Version | Notes
# ---|---|---
# Egeria | 3.1 | OMAG Platform, CTS, PTS
# Crux Plugin Repository Connector | 3.1 |
# Crux | 1.18.1 | Embedded in Crux Plugin Repository Connector
# RocksDB | 6.12.7 | Used for transaction log, document store and index store for Crux
# Kafka | 2.8.0 | Used for cohort event bus
# Lucene | 8.9.0 | Used for text indexing in Crux
# ## Setup
#
# ### Results locations
#
# Locations for the results (see subdirectories in the same location where this notebook resides to review the raw results themselves):
results = [
"pts-05-02",
"pts-10-05",
"janus-05-02"
]
# ### Analysis and visualization methods
#
# The following defines methods necessary to parse, process and visualize the results, and must be run prior to the subsequent cells.
# +
import os
import json
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
def validateProfileResultsLocation(location):
profile_details_location = location + os.path.sep + "profile-details"
print("Validating profile-details location:", profile_details_location)
if os.path.isdir(profile_details_location):
print(" ... directory exists.")
else:
print(" ... ERROR: could not find this directory. Is the location specified correct?")
# Define the profile ordering
profile_order=[
'Entity creation', 'Entity search', 'Relationship creation', 'Relationship search',
'Entity classification', 'Classification search', 'Entity update', 'Relationship update',
'Classification update', 'Entity undo', 'Relationship undo', 'Entity retrieval', 'Entity history retrieval',
'Relationship retrieval', 'Relationship history retrieval', 'Entity history search', 'Relationship history search',
'Graph queries', 'Graph history queries', 'Entity re-home', 'Relationship re-home', 'Entity declassify',
'Entity re-type', 'Relationship re-type', 'Entity re-identify', 'Relationship re-identify',
'Relationship delete', 'Entity delete', 'Entity restore', 'Relationship restore', 'Relationship purge',
'Entity purge'
]
# Given a profileResult.requirementResults object, parse all of its positiveTestEvidence
# for discovered properties
def parseProperties(df, repositoryName, requirementResults):
if (requirementResults is not None and 'positiveTestEvidence' in requirementResults):
print("Parsing properties for:", requirementResults['name'], "(" + repositoryName + ")")
data_array = []
for evidence in requirementResults['positiveTestEvidence']:
if ('propertyName' in evidence and 'propertyValue' in evidence):
data = {
'repo': repositoryName,
'property_name': evidence['propertyName'],
'property_value': evidence['propertyValue']
}
data_array.append(data)
        df = pd.concat([df, pd.DataFrame(data_array)], ignore_index=True)
return df
# Given a profileResult.requirementResults object, parse all of its positiveTestEvidence
# and group the results by methodName
def parseEvidence(df, repositoryName, requirementResults):
if (requirementResults is not None and 'positiveTestEvidence' in requirementResults):
print("Parsing evidence for:", requirementResults['name'], "(" + repositoryName + ")")
data_array = []
for evidence in requirementResults['positiveTestEvidence']:
if ('methodName' in evidence and 'elapsedTime' in evidence):
data = {
'repo': repositoryName,
'method_name': evidence['methodName'],
'elapsed_time': evidence['elapsedTime'],
'profile_name': requirementResults['name'],
'test_case_id': evidence['testCaseId'],
'assertion_id': evidence['assertionId']
}
data_array.append(data)
        df = pd.concat([df, pd.DataFrame(data_array)], ignore_index=True)
return df
# Given a profile detail JSON file, retrieve all of its profileResult.requirementResults[] objects
def parseRequirementResults(profileFile):
with open(profileFile) as f:
profile = json.load(f)
# This first case covers files retrieved via API
if ('profileResult' in profile and 'requirementResults' in profile['profileResult']):
return profile['profileResult']['requirementResults']
# This second case covers files created by the CLI client
elif ('requirementResults' in profile):
return profile['requirementResults']
else:
return None
def getEnvironmentProfile(profileLocation):
detailsLocation = profileLocation + os.path.sep + "profile-details"
return detailsLocation + os.path.sep + "Environment.json"
def parseEnvironmentDetailsIntoDF(df, profileFile, qualifier):
profileResults = parseRequirementResults(profileFile)
if profileResults is not None:
for result in profileResults:
df = parseProperties(df, qualifier, result)
return df
# Retrieve a listing of all of the profile detail JSON files
def getAllProfiles(profileLocation):
detailsLocation = profileLocation + os.path.sep + "profile-details"
_, _, filenames = next(os.walk(detailsLocation))
full_filenames = []
for filename in filenames:
full_filenames.append(detailsLocation + os.path.sep + filename)
return full_filenames
# Parse all of the provided profile file's details into the provided dataframe
def parseProfileDetailsIntoDF(df, profileFile, qualifier):
profileResults = parseRequirementResults(profileFile)
if profileResults is not None:
for result in profileResults:
df = parseEvidence(df, qualifier, result)
return df
def plotMethod(df, methodName, remove_outliers=False, by_repo=False, by_assertion=False):
dfX = df[df['method_name'] == methodName]
if not dfX.empty:
if remove_outliers:
dfX = dfX[dfX['elapsed_time'].between(dfX['elapsed_time'].quantile(.00), dfX['elapsed_time'].quantile(.99))]
sns.set(font_scale=1.2)
sns.set_style("whitegrid")
fix, axs = plt.subplots(ncols=1, nrows=1, figsize=(18,9))
if by_repo:
# Display the repos within the method in alphabetical order for consistency
repos = dfX['repo'].unique()
figure = sns.histplot(ax=axs, data=dfX, x="elapsed_time", hue="repo",
hue_order=sorted(repos), kde=True, discrete=False)
        elif by_assertion:
# Display the assertions within the method in alphabetical order for consistency
assertions = dfX['assertion_id'].unique()
figure = sns.histplot(ax=axs, data=dfX, x="elapsed_time", hue="assertion_id",
hue_order=sorted(assertions), kde=True, discrete=False)
else:
figure = sns.histplot(ax=axs, data=dfX, x="elapsed_time",
kde=True, discrete=False)
figure.set(xlabel="Elapsed time (ms)")
figure.set_title(methodName)
display(fix)
plt.close(fix)
def plotProfile(df, profileName, remove_outliers=False):
dfX = df[df['profile_name'] == profileName]
# Only attempt to plot if there is anything left in the dataframe
if not dfX.empty:
if remove_outliers:
# If we have been asked to remove outliers, drop anything outside the 2nd and 98th percentiles
dfX = dfX[dfX['elapsed_time'].between(dfX['elapsed_time'].quantile(.02), dfX['elapsed_time'].quantile(.98))]
sns.set(font_scale=1.2)
sns.set_style("whitegrid")
fix, axs = plt.subplots(ncols=1, nrows=1, figsize=(18,9))
# Display the methods within the profile in alphabetical order for consistency
methods = dfX['method_name'].unique()
figure = sns.histplot(ax=axs, data=dfX, x="elapsed_time", hue="method_name",
hue_order=sorted(methods), kde=True, discrete=False)
figure.set(xlabel="Elapsed time (ms)")
figure.set_title(profileName)
figure.get_legend().set(title='Method')
display(fix)
plt.close(fix)
def slowestRunning(df, num=10, methodName=None):
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', None)
if methodName:
df = df[df['method_name'] == methodName]
display(df.sort_values(by=['elapsed_time'], ascending=False).groupby('method_name').head(num))
def compareProfiles(df, profileName, left, right, remove_outliers=False):
dfX = df[df['profile_name'] == profileName]
# Only attempt to plot if there is anything left in the dataframe
if not dfX.empty:
if remove_outliers:
# If we have been asked to remove outliers, drop anything outside the 2nd and 98th percentiles
dfX = dfX[dfX['elapsed_time'].between(dfX['elapsed_time'].quantile(.02), dfX['elapsed_time'].quantile(.98))]
sns.set(font_scale=1.2)
sns.set_style("whitegrid")
fix, axs = plt.subplots(ncols=1, nrows=1, figsize=(18,9))
# Display the methods within the profile in alphabetical order for consistency
methods = dfX['method_name'].unique()
figure = sns.violinplot(x="method_name", y="elapsed_time", ax=axs, hue="repo",
hue_order=[left, right], split=True, scale='count',
inner='quartile', cut=0, data=dfX)
# If there are more than 4 methods in the profile, rotate them so they are still readable
if (len(methods) > 4):
figure.set_xticklabels(figure.get_xticklabels(), rotation=10)
figure.set(xlabel="Method name", ylabel="Elapsed time (ms)")
figure.set_title(profileName + ' comparison')
figure.get_legend().set(title='Test')
display(fix)
plt.close(fix)
# -
# # The results
# ## instancesPerType=5, maxSearchResults=2
# +
results0 = results[0]
validateProfileResultsLocation(results0)
files = getAllProfiles(results0)
df1 = pd.DataFrame({'repo': [], 'method_name': [], 'elapsed_time': [], 'profile_name': [], 'test_case_id': [], 'assertion_id': []})
dfEnv = None
for profile_file in files:
df1 = parseProfileDetailsIntoDF(df1, profile_file, results0)
# -
# ### Environment details
results0_env = getEnvironmentProfile(results0)
env0 = pd.DataFrame({'repo': [], 'property_name': [], 'property_value': []})
env0 = parseEnvironmentDetailsIntoDF(env0, results0_env, results0)
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', None)
display(env0)
# ### Full response time profiles
#
# The following plots the response times of each method within each profile in full, including any extreme / outlying values. (As this renders 30+ detailed visualizations, it may take a little time to complete.)
#
# From these visualizations, we can quickly see the range of response times for a given method and where the values are more typical (high peaks) than not (long tails). This allows us to quickly assess two important areas:
#
# 1. Any methods that appear to consistently run for a longer time than we may want or expect.
# 1. Any particular combination of parameters that may cause a method that in most cases runs quickly to in certain cases run particularly slowly.
for profile in profile_order:
plotProfile(df1, profile)
# #### Analysis
#
# From the plots above we can see that most methods have very high peaks towards the left of the graph: indicating that the vast majority of the executions of that method have response times in that range. However, there are a number of cases where various methods run for much longer than this usual response time (even up to several seconds).
#
# To see whether these are rare outliers, we may want to re-plot the profiles, this time ignoring the slowest 2% of the values in the response times. Stated differently, this will show the response time of 98% of the method calls: if there is a consistently-slow combination of parameters, we expect it to show up within this 98% cut-off in these plots.
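# As a minimal, self-contained sketch (with made-up response times), this is the same `quantile`/`between` filter that `plotProfile` applies when `remove_outliers=True`:

```python
import pandas as pd

# Hypothetical elapsed times (ms): mostly fast, with extremes at both ends
df_demo = pd.DataFrame({"elapsed_time": [5, 6, 7, 8, 9, 10, 11, 12, 500, 2000]})

# Keep only the values between the 2nd and 98th percentiles
lo = df_demo["elapsed_time"].quantile(0.02)
hi = df_demo["elapsed_time"].quantile(0.98)
trimmed = df_demo[df_demo["elapsed_time"].between(lo, hi)]
print(trimmed["elapsed_time"].tolist())  # the lowest and highest values are dropped
```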
# ### "Typical" response time profiles
#
# The following plots the response times of each method within each profile focusing only on the typical values -- specifically removing any outliers within the top and bottom 2% of the response times. From these visualizations, we can quickly see the "typical" response times for a given method, keeping in mind that we are ignoring the outlying extreme values here.
for profile in profile_order:
plotProfile(df1, profile, remove_outliers=True)
# Without the outliers, we can more clearly see the typical distribution of each method's response times: and that in most cases (98% of the methods' executions) the response times are sub-second (in most cases even less than 250ms).
#
# We can also see that there are however a few exceptions to this -- the various graph queries all have very long tails that suggest there are a number of examples of very long-running methods. In addition, various write operations also have long tails that appear to occur relatively infrequently but nonetheless extend to around 1 second within the 98% range.
#
# We can start by looking at the top-10 slowest response times for each of these individual methods:
slowest = ['getEntityNeighborhood', 'getLinkingEntities', 'getRelatedEntities', 'getRelationshipsForEntity']
for slow in slowest:
slowestRunning(df1, num=10, methodName=slow)
# We can see that the top-10 slowest results for these various methods are similar, and each is the result of the method running against a different set of parameters (for example, against different types of instances). This suggests that these response times were not simply a one-off or pseudo-random occurrence that could have been caused by something like a garbage collection pause, but that there is more likely some fundamental underlying reason for this performance. To find out more, we need to delve back into the repository connector itself with deeper profiling of these particular combinations of parameters for each method, to see if further optimization is possible.
# ## Comparing results
#
# Up to this point, we have done some analysis of the performance of a single set of volume parameters. However, we may also be interested in comparing and contrasting these results with additional volume parameters to investigate the scalability of the connector as the volume of metadata within the repository grows.
# +
results1 = results[1]
validateProfileResultsLocation(results1)
files = getAllProfiles(results1)
for profile_file in files:
df1 = parseProfileDetailsIntoDF(df1, profile_file, results1)
# -
# ### instancesPerType=10, maxSearchResults=5 details
results1_env = getEnvironmentProfile(results1)
env1 = pd.DataFrame({'repo': [], 'property_name': [], 'property_value': []})
env1 = parseEnvironmentDetailsIntoDF(env1, results1_env, results1)
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', None)
display(env1)
# ### ipt=5, msr=2 compared to ipt=10, msr=5
for profile in profile_order:
compareProfiles(df1, profile, results0, results1, remove_outliers=True)
# #### Analysis
#
# For the most part, the performance for each method is comparable -- even though we have doubled the number of instances involved (from 4850 to 9700) and the number of results returned by each page of a search (from 2 to 5).
#
# The notable exceptions are the various search methods and the graph queries, in particular `getRelatedEntities` and `getLinkingEntities` which we can see have a significant additional peak. This may be understandable, given the additional number of instances is likely to equate to a significant increase in the number of relationships and linked entities that these methods will retrieve in the higher volume environment (since these methods do not page results, but retrieve all relationships and entities involved).
# ### Other repositories
#
# We may also want to do some comparative analysis between repositories. The following looks at results from the JanusGraph repository at the same volume parameters to compare and contrast the relative performance of the two repositories.
# +
results2 = results[2]
validateProfileResultsLocation(results2)
files = getAllProfiles(results2)
for profile_file in files:
df1 = parseProfileDetailsIntoDF(df1, profile_file, results2)
# -
results2_env = getEnvironmentProfile(results2)
env2 = pd.DataFrame({'repo': [], 'property_name': [], 'property_value': []})
env2 = parseEnvironmentDetailsIntoDF(env2, results2_env, results2)
pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', None)
display(env2)
for profile in profile_order:
compareProfiles(df1, profile, results0, results2, remove_outliers=True)
# #### Analysis
#
# Here we can see that in _almost all_ cases, based on their default configurations only, the Crux repository connector is faster than the JanusGraph repository connector:
#
# - The Crux connector appears to be significantly faster (~3-4x) with write operations (create, update, delete, purge, restore, re-identify, etc)
# - The Crux connector also appears to be significantly faster for most search operations
# - For some operations (i.e. the graph queries) we did not even run them under JanusGraph due to each per-type test not completing after more than 3 hours (vs. Crux's few seconds for the same tests, at the same volume).
#
# Only the retrieval methods are roughly equivalent between the two repositories. Of course, there may be further optimisations possible with either or both repositories to further improve their performance for certain aspects: this is only comparing the default configuration of each.
plotMethod(df1, "findEntities", by_repo=True)
slowestRunning(df1[df1['repo'] == 'janus-05-02'], num=10, methodName='findEntities')
# Interestingly we can see that some predicted suspects like `Referenceable` and `OpenMetadataRoot` are particularly slow-performing; however, these are not alone given `UserAccessDirectory`, `VerificationPoint`, and `UserProfileManager` each also demonstrate response times that exceed 5 seconds (and are closely followed by a number of others that come close to 5 seconds).
#
# Instead of the metadata type being the distinguishing factor, it appears it is the search parameters that are most important:
#
# - For `Referenceable` and `OpenMetadataRoot` the slow-running examples come from the `repository-entity-retrieval-performance` set of tests: these run `findEntities` with only a type GUID as a filter.
# - All of the other slowest-running examples come from the `repository-entity-classification-performance` set of tests: where `findEntities` is called with a classification criteria to retrieve a limited number of results.
#
# It would therefore appear that the JanusGraph repository connector's ability to search based on classification and to search based only on a very abstract supertype is significantly slower than the Crux repository connector's ability to do the same searches.
| cts/results/3.1-1.18.1/analyze-performance-results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.0 64-bit (''CSE499'': conda)'
# name: python3
# ---
import os
import pandas as pd
import numpy as np
import warnings
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score
import lightgbm as lgb
import gc
warnings.simplefilter(action='ignore', category=FutureWarning)
DATA_DIRECTORY = ""
train = pd.read_csv(os.path.join(DATA_DIRECTORY, 'train.csv'))
test = pd.read_csv(os.path.join(DATA_DIRECTORY, 'test.csv'))
labels = pd.read_csv(os.path.join(DATA_DIRECTORY, 'labels.csv'))
labels = labels.to_numpy()
def model(features, test_features, labels, n_folds = 5):
# Extract the ids
train_ids = features['SK_ID_CURR']
test_ids = test_features['SK_ID_CURR']
# Remove the ids and target
features = features.drop(columns = ['SK_ID_CURR'])
test_features = test_features.drop(columns = ['SK_ID_CURR'])
cat_indices = 'auto'
print('Training Data Shape: ', features.shape)
print('Testing Data Shape: ', test_features.shape)
# Convert to np arrays
features = np.array(features)
test_features = np.array(test_features)
# Create the kfold object
k_fold = KFold(n_splits = n_folds, shuffle = True, random_state = 50)
# Empty array for test predictions
test_predictions = np.zeros(test_features.shape[0])
# Empty array for out of fold validation predictions
out_of_fold = np.zeros(features.shape[0])
# Lists for recording validation and training scores
valid_scores = []
train_scores = []
# Iterate through each fold
for train_indices, valid_indices in k_fold.split(features):
# Training data for the fold
train_features, train_labels = features[train_indices], labels[train_indices]
# Validation data for the fold
valid_features, valid_labels = features[valid_indices], labels[valid_indices]
params = {'random_state': 8888, 'nthread': -1}
# Create the model
model = lgb.LGBMClassifier(**{**params, **LIGHTGBM_PARAMS})
# Train the model
model.fit(train_features, train_labels, eval_metric = 'auc',
eval_set = [(valid_features, valid_labels), (train_features, train_labels)],
eval_names = ['valid', 'train'], categorical_feature = cat_indices,
early_stopping_rounds = 100, verbose = 200)
# Record the best iteration
best_iteration = model.best_iteration_
# Make predictions
test_predictions += model.predict_proba(test_features, num_iteration = best_iteration)[:, 1] / k_fold.n_splits
# Record the out of fold predictions
out_of_fold[valid_indices] = model.predict_proba(valid_features, num_iteration = best_iteration)[:, 1]
# Record the best score
valid_score = model.best_score_['valid']['auc']
train_score = model.best_score_['train']['auc']
valid_scores.append(valid_score)
train_scores.append(train_score)
# Clean up memory
gc.enable()
del model, train_features, valid_features
gc.collect()
# Make the submission dataframe
submission = pd.DataFrame({'SK_ID_CURR': test_ids, 'TARGET': test_predictions})
# Overall validation score
valid_auc = roc_auc_score(labels, out_of_fold)
# Add the overall scores to the metrics
valid_scores.append(valid_auc)
train_scores.append(np.mean(train_scores))
# Needed for creating dataframe of validation scores
fold_names = list(range(n_folds))
fold_names.append('overall')
# Dataframe of validation scores
metrics = pd.DataFrame({'fold': fold_names,
'train': train_scores,
'valid': valid_scores})
return submission, metrics
LIGHTGBM_PARAMS = {
'boosting_type': 'goss',
'n_estimators': 10000,
'learning_rate': 0.005134,
'num_leaves': 54,
'max_depth': 10,
'subsample_for_bin': 240000,
'reg_alpha': 0.436193,
'reg_lambda': 0.479169,
'colsample_bytree': 0.508716,
'min_split_gain': 0.024766,
'subsample': 1,
'is_unbalance': False,
'silent':-1,
'verbose':-1
}
submission, metrics = model(train, test, labels)
print('LightGBM metrics')
print(metrics)
submission.to_csv('lgb.csv', index = False)
| notebooks/CSE499B/sakib/lightgbm-with-no-scalar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ai-architecture-template - 00_AMLConfiguration.ipynb
# TODO: Update with new repo name
#
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
#
# # Installation and configuration
# This notebook configures the notebooks in this tutorial to connect to an Azure Machine Learning (AML) Workspace.
# You can use an existing workspace or create a new one.
#
# ## Prerequisites
#
# If you have already completed the prerequisites and selected the correct Kernel for this notebook, the AML Python SDK
# is already installed. Let's load the imports and check the AML SDK version.
# + pycharm={"name": "#%%\n"}
import json
import azureml.core
from azure_utils.machine_learning.utils import load_configuration, get_or_create_workspace
print("AML SDK Version:", azureml.core.VERSION)
# -
# ## Set up your Azure Machine Learning workspace
# ## Load Configurations from file
#
# Configurations are loaded from a file to prevent accidental commits of Azure secrets into source control.
# The file name is included in .gitignore, also to prevent accidental commits. A template file is included that should
# be copied, with each parameter filled in.
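# A minimal sketch of what `workspace_conf.yml` might contain, assuming only the four keys read in a later cell; the actual template shipped with the repository may include more fields, and the values below are placeholders:

```yaml
# workspace_conf.yml -- copy the template, then fill in your own values
subscription_id: "<your-azure-subscription-id>"
resource_group: "<your-resource-group>"
workspace_name: "<your-workspace-name>"
workspace_region: "eastus"  # any region supported by Azure Machine Learning
```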
# + pycharm={"name": "#%%\n"}
cfg = load_configuration("../workspace_conf.yml")
# -
# ## Load Configurations into Notebook.
#
# The following cell loads the configurations from the local file into the notebook's memory. The cell is also
# marked as a parameter cell. When using this notebook with [papermill](https://github.com/nteract/papermill), these
# parameters can be overridden. See the tests for examples.
# + pycharm={"name": "#%%\n"} tags=["parameters"]
subscription_id = cfg['subscription_id']
resource_group = cfg['resource_group']
workspace_name = cfg['workspace_name']
workspace_region = cfg['workspace_region']
# -
# ## Create the workspace
# This cell will create an AML workspace for you in a subscription, provided you have the correct permissions.
#
# This will fail when:
# 1. You do not have permission to create a workspace in the resource group
# 1. You do not have permission to create a resource group if it does not already exist.
# 1. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this
# subscription
#
# If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to
# provision the required resources. If this cell succeeds, you're done configuring AML!
#
# + pycharm={"name": "#%%\n"}
ws = get_or_create_workspace(workspace_name, subscription_id, resource_group, workspace_region)
ws_json = ws.get_details()
# -
# Let's check the details of the workspace.
# + pycharm={"name": "#%%\n"}
print(json.dumps(ws_json, indent=2))
# -
# You are now ready to move on to the [data preparation](01_DataPrep.ipynb) notebook.
| notebooks/00_AMLConfiguration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Visualization with Matplotlib
# ## General Matplotlib Tips
#
# Before we dive into the details of creating visualizations with Matplotlib, there are a few useful things you should know about using the package.
# Just as we use the ``np`` shorthand for NumPy and the ``pd`` shorthand for Pandas, we will use some standard shorthands for Matplotlib imports:
# + tags=[]
import matplotlib as mpl
import matplotlib.pyplot as plt
# -
# The ``plt`` interface is what we will use most often, as we shall see throughout this chapter.
# ### ``show()`` or No ``show()``? How to Display Your Plots
# A visualization you can't see won't be of much use, but just how you view your Matplotlib plots depends on the context.
# The best use of Matplotlib differs depending on how you are using it; roughly, the three applicable contexts are using Matplotlib in a script, in an IPython terminal, or in an IPython notebook.
# #### Plotting from a script
#
# If you are using Matplotlib from within a script, the function ``plt.show()`` is your friend.
# ``plt.show()`` starts an event loop, looks for all currently active figure objects, and opens one or more interactive windows that display your figure or figures.
#
# So, for example, you may have a file called *myplot.py* containing the following:
#
# ```python
# # ------- file: myplot.py ------
# import matplotlib.pyplot as plt
# import numpy as np
#
# x = np.linspace(0, 10, 100)
#
# plt.plot(x, np.sin(x))
# plt.plot(x, np.cos(x))
#
# plt.show()
# ```
#
# You can then run this script from the command-line prompt, which will result in a window opening with your figure displayed:
#
# ```
# $ python myplot.py
# ```
#
# The ``plt.show()`` command does a lot under the hood, as it must interact with your system's interactive graphical backend.
# The details of this operation can vary greatly from system to system and even installation to installation, but matplotlib does its best to hide all these details from you.
#
# One thing to be aware of: the ``plt.show()`` command should be used *only once* per Python session, and is most often seen at the very end of the script.
# Multiple ``show()`` commands can lead to unpredictable backend-dependent behavior, and should mostly be avoided.
# #### Plotting from an IPython notebook
#
# Plotting interactively within a Jupyter notebook can be done with the ``%matplotlib`` command, and works in a similar way to the IPython shell.
# In the IPython notebook, you also have the option of embedding graphics directly in the notebook, with two possible options:
#
# - ``%matplotlib notebook`` will lead to *interactive* plots embedded within the notebook
# - ``%matplotlib inline`` will lead to *static* images of your plot embedded in the notebook
#
# For this book, we will generally opt for ``%matplotlib inline``:
# + tags=[]
# %matplotlib inline
# -
# After running this command (it needs to be done only once per kernel/session), any cell within the notebook that creates a plot will embed a PNG image of the resulting graphic:
# +
import numpy as np
x = np.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, np.sin(x), '-')
plt.plot(x, np.cos(x), '--');
# -
# ### Saving Figures to File
#
# One nice feature of Matplotlib is the ability to save figures in a wide variety of formats.
# Saving a figure can be done using the ``savefig()`` command.
# For example, to save the previous figure as a PNG file, you can run this:
fig.savefig('my_figure.png')
# We now have a file called ``my_figure.png`` in the current working directory:
# !ls -lh my_figure.png
# To confirm that it contains what we think it contains, let's use the IPython ``Image`` object to display the contents of this file:
from IPython.display import Image
Image('my_figure.png')
# In ``savefig()``, the file format is inferred from the extension of the given filename.
# Depending on what backends you have installed, many different file formats are available.
# The list of supported file types can be found for your system by using the following method of the figure canvas object:
fig.canvas.get_supported_filetypes()
# Note that when saving your figure, it's not necessary to use ``plt.show()`` or related commands discussed earlier.
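# Since the format is inferred from the extension, the same figure can be written to several formats in one loop. A small self-contained sketch (the `sine.*` filenames are arbitrary, and the `Agg` backend is forced so it also runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render without opening a window
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, np.sin(x))

# The file extension alone selects the output format
for name in ["sine.png", "sine.pdf", "sine.svg"]:
    fig.savefig(name)
```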
# ## Two Interfaces for the Price of One
#
# A potentially confusing feature of Matplotlib is its dual interfaces: a convenient MATLAB-style state-based interface, and a more powerful object-oriented interface. We'll quickly highlight the differences between the two here.
# #### MATLAB-style Interface
#
# Matplotlib was originally written as a Python alternative for MATLAB users, and much of its syntax reflects that fact.
# The MATLAB-style tools are contained in the pyplot (``plt``) interface.
# For example, the following code will probably look quite familiar to MATLAB users:
# +
plt.figure() # create a plot figure
# create the first of two panels and set current axis
plt.subplot(2, 1, 1) # (rows, columns, panel number)
plt.plot(x, np.sin(x))
# create the second panel and set current axis
plt.subplot(2, 1, 2)
plt.plot(x, np.cos(x));
# -
# It is important to note that this interface is *stateful*: it keeps track of the "current" figure and axes, which are where all ``plt`` commands are applied.
# You can get a reference to these using the ``plt.gcf()`` (get current figure) and ``plt.gca()`` (get current axes) routines.
#
# While this stateful interface is fast and convenient for simple plots, it is easy to run into problems.
# For example, once the second panel is created, how can we go back and add something to the first?
# This is possible within the MATLAB-style interface, but a bit clunky.
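# As a sketch of that clunky route: calling `plt.subplot` again with the same arguments makes the existing panel current, so subsequent `plt` commands land there (this reuse behavior is documented for `plt.subplot` when an Axes already exists in that position):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
plt.figure()
plt.subplot(2, 1, 1)
plt.plot(x, np.sin(x))
plt.subplot(2, 1, 2)
plt.plot(x, np.cos(x))

# Go "back" to the first panel: re-selecting it makes it current again...
plt.subplot(2, 1, 1)
plt.plot(x, np.sin(2 * x))  # ...so this curve is added to the top panel
```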
# Fortunately, there is a better way.
# #### Object-oriented interface
#
# The object-oriented interface is available for these more complicated situations, and for when you want more control over your figure.
# Rather than depending on some notion of an "active" figure or axes, in the object-oriented interface the plotting functions are *methods* of explicit ``Figure`` and ``Axes`` objects.
# To re-create the previous plot using this style of plotting, you might do the following:
# +
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
# Call plot() method on the appropriate object
ax[0].plot(x, np.sin(x))
ax[1].plot(x, np.cos(x));
# -
# For simpler plots, the choice of which style to use is largely a matter of preference, but the object-oriented approach can become a necessity as plots grow more complicated.
# Throughout this chapter, we will switch between the MATLAB-style and object-oriented interfaces, depending on what is most convenient.
# In most cases, the difference is as small as switching ``plt.plot()`` to ``ax.plot()``, but there are a few gotchas that we will highlight as they come up in the following sections.
| day2/08. matplotlib - introduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
df = pd.read_csv('wine.data', header=None)
df.head()
labels = df[0]
del df[0]
model_pca = PCA(n_components=6)
wine_pca = model_pca.fit_transform(df)
wine_pca = wine_pca.reshape((len(wine_pca), -1))
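# The `TSNE` import above is not used yet; one way the analysis might continue is to check how much variance the six components retain, then embed the reduced data in 2-D for plotting. A sketch on synthetic data of the same shape as the wine matrix (since `wine.data` may not be present here):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(178, 13))  # wine has 178 samples x 13 features

pca = PCA(n_components=6)
X_pca = pca.fit_transform(X)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained

tsne = TSNE(n_components=2, random_state=0)
X_2d = tsne.fit_transform(X_pca)
print(X_2d.shape)  # ready for a 2-D scatter plot
```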
| Lesson06/Activity14/Activity14.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Can Deep Learning Beat FX Using Price Movements Alone?
#
# ## Overview
#
# Articles that use deep learning to predict the next instant's price from past price movements are easy to find.
#
# However, being able to predict only a momentary price is not enough to win at trading.
#
# What we actually need to know is how the price moves within a specific period, and how much it ultimately rises (or falls).
#
# So how long is this "specific period"?
#
# It varies with each trader's personality, trading style, strategy, and so on, so at present there seems to be no clear-cut answer.
#
# Ideally we would also learn the trading period that maximizes profit, but that would dramatically increase the computational cost, and it is not even clear that a true answer exists, so we do not pursue that discussion here.
#
# Instead, this experiment adopts the following trading strategy:
#
# - Use the 1-minute chart (longer bars were assumed to be more exposed to economic events)
# - The spread is ignored
# - When a buy is entered, compute the ATR over the preceding 20 periods (20 minutes)
# - Call the computed ATR value `atr1`
# - Let `price1` be the price at the moment the buy is entered
# - Set the take-profit price to `price1 + 5*atr1` and the stop-loss price to `price1 - 3*atr1`
# - Thereafter, close the position every minute according to the following rules:
# - Take profit when the price reaches or exceeds the take-profit price
# - Stop out when the price falls to or below the stop-loss price
# - When the price reaches or exceeds `price1 + atr1`, shift the take-profit and stop-loss prices upward (trailing stop)
# - New take-profit price: `close of that 1-minute bar + 5*atr1`
# - New stop-loss price: `close of that 1-minute bar - 3*atr1`
#
# Given this trading strategy, we predict at which moments a buy should be entered in order to win.
#
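# As a concrete sketch of the exit levels, with illustrative numbers only (`price1 = 110.00` and `atr1 = 0.05` are made up):

```python
def exit_levels(entry_price, atr):
    """Initial take-profit / stop-loss / trailing trigger for a long entry."""
    take_profit = entry_price + 5 * atr
    stop_loss = entry_price - 3 * atr
    trail_trigger = entry_price + atr  # reaching this re-anchors TP/SL to the bar close
    return take_profit, stop_loss, trail_trigger

tp, sl, ts = exit_levels(110.00, 0.05)
print(round(tp, 2), round(sl, 2), round(ts, 2))  # 110.25 109.85 110.05
```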
# ## Chart data
#
# - Chart data obtained with TickStory
# - Currency pair: USD/JPY
# - The past 5 years of 1-minute bars are used for training
#
# ## Environment
#
# - CPU: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
# - Memory: 16GB DDR2
# - GPU: NVIDIA GeForce GTX 1070
# - Python 3.5
# - Keras (backend: TensorFlow) as the deep learning library
#
# ### Loading the chart data
# +
import pandas as pd
import talib
# Load the CSV into a data frame
df = pd.read_csv('USDJPY_1M_Pretty.csv', index_col='Date')
df.index = pd.to_datetime(df.index)
# Compute the 20-period ATR (ATR20)
df['ATR'] = talib.ATR(df['High'].values, df['Low'].values, df['Close'].values, timeperiod=20)
del df['Volume']
df = df.dropna(how='any')  # drop rows containing NaN
df.head()
# -
# ### Compute the results of trading with this strategy
#
# Following the strategy defined in the overview, compute the outcome of entering a buy at every single minute
#
# For good measure, the same is also computed for entering a sell at every minute
# +
import numpy as np
from numba import jit
import time
# TrailingStop
@jit
def buy_trail(data, index, price, atr):
# TrailingStop: TP=ATR*5, SL=3*ATR, TS=ATR
tp, sl, ts = price + 5 * atr, price - 3 * atr, price + atr
    for i in range(index, len(data) - 1):  # stop one bar early: the loop reads data[i + 1]
        row = data[i + 1]
if row[2] <= tp <= row[1]: return tp
elif row[2] <= sl <= row[1]: return sl
elif row[2] <= ts <= row[1]: # Move TakeProfit & StopLoss
tp, sl, ts = row[3] + 5 * atr, row[3] - 3 * atr, row[3] + atr
return data[-1][3]
@jit
def sell_trail(data, index, price, atr):
# TrailingStop: TP=ATR*5, SL=3*ATR, TS=ATR
tp, sl, ts = price - 5 * atr, price + 3 * atr, price - atr
    for i in range(index, len(data) - 1):  # stop one bar early: the loop reads data[i + 1]
        row = data[i + 1]
if row[2] <= tp <= row[1]: return tp
elif row[2] <= sl <= row[1]: return sl
elif row[2] <= ts <= row[1]: # Move TakeProfit & StopLoss
tp, sl, ts = row[3] - 5 * atr, row[3] + 3 * atr, row[3] - atr
return data[-1][3]
# Buy & TrailingStop
@jit
def buy(data):
result = []
for i in range(len(data)):
        # assume a buy is entered at bar i
row = data[i]
        result.append(buy_trail(data, i, row[3], row[4]) - row[3])  # record the P/L of this buy
return result
# Sell & TrailingStop
@jit
def sell(data):
result = []
for i in range(len(data)):
        # assume a sell is entered at bar i
row = data[i]
        result.append(row[3] - sell_trail(data, i, row[3], row[4]))  # P/L of this sell: entry price minus exit price
return result
start = time.time()
buy_profit = np.array(buy(df.values))
print("Elapsed Time: {0} sec\nBuyProfits: {1}".format(time.time() - start, buy_profit))
start = time.time()
sell_profit = np.array(sell(df.values))
print("Elapsed Time: {0} sec\nSellProfits: {1}".format(time.time() - start, sell_profit))
# -
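# As a quick sanity check, the share of profitable entries can be read straight off arrays like `buy_profit` / `sell_profit` (a sketch with dummy numbers, not the real results):

```python
import numpy as np

# dummy stand-ins for the profit arrays computed above
buy_profit = np.array([0.05, -0.03, 0.10, -0.06, 0.02])
sell_profit = np.array([-0.05, 0.03, -0.10, 0.06, -0.02])

buy_win_rate = (buy_profit > 0).mean()
sell_win_rate = (sell_profit > 0).mean()
print("buy win rate: {:.1%}, sell win rate: {:.1%}".format(buy_win_rate, sell_win_rate))
```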
# Save the numpy data as a CSV
val = df.values
size = val.shape[0]
data = np.zeros((size, 7))
for i in range(size):
for x in range(5): data[i][x] = val[i][x]
data[i][5] = buy_profit[i]
data[i][6] = sell_profit[i]
csv = pd.DataFrame(index=df.index[:size], columns=['Open', 'High', 'Low', 'Close', 'ATR', 'Buy', 'Sell'], data=data)
csv.to_csv("USDJPY_TrailingStop.csv")
csv.head()
# ### Choosing the target variable
#
# Initially I assumed the target could simply be the trade's P/L (how much was won or lost), but the model failed to learn anything useful from that
#
# The trade outcomes are therefore grouped into the following four classes
#
# 1. Big win (profit of `5*atr1` or more)
# 2. Win (profit above `0`)
# 3. Loss (profit below `0`)
# 4. Big loss (loss of `3*atr1` or more)
# +
import numpy as np
# The Buy/Sell outcome columns were only written to `csv` above, so attach them to df first
df['Buy'] = buy_profit
df['Sell'] = sell_profit
# Categorize the P/L into 4 classes (2: more than 5*ATR, 1: more than 0, -1: less than 0, -2: less than -3*ATR)
df['Buy_cat'] = np.where(5 * df['ATR'] < df['Buy'], 2, np.where(0 < df['Buy'], 1, np.where(df['Buy'] < -3 * df['ATR'], -2, -1)))
df['Sell_cat'] = np.where(5 * df['ATR'] < df['Sell'], 2, np.where(0 < df['Sell'], 1, np.where(df['Sell'] < -3 * df['ATR'], -2, -1)))
df.to_csv('USDJPY_TrailingStop2.csv')
df.head()
# -
# ### Aside: confirming that the market is mostly range-bound
#
# Incidentally, categorizing the trade outcomes as below and checking how they spread out confirms that the market spends most of its time in a range
#
# 1. Big profit
# 2. Small profit
# 3. Break-even
# 4. Small loss
# 5. Big loss
# +
''' Categorize the P/L results '''
# 5 classes (0: big loss, 1: small loss, 2: break-even, 3: small profit, 4: big profit)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
Ybuy = csv['Buy'].values.reshape(-1, 1)
scaler.fit(Ybuy)
Ybuy = scaler.transform(Ybuy)
Ybuy = np.floor(Ybuy * 5)  # bin into 5 classes
Ybuy = np.where(Ybuy == 5., 4., Ybuy)  # the maximum lands on 5, so fold it back into class 4
Ybuy
# +
import matplotlib.pyplot as plt
# %matplotlib inline
Ynum = [len(Ybuy[Ybuy == i]) for i in range(5)]
plt.bar(range(5), Ynum)
# -
# ### Choosing features and a learning model
#
# For features, I simply use the previous 60 periods (1 hour) of price data
#
# For the model, I chose a CNN, the kind often used in image recognition
#
# The idea is that the price data might be pattern-learnable when treated as a single image
#
# Incidentally, an LSTM model, which is often used for numeric prediction, was also trained, but in this case it performed about the same as the CNN
# +
''' Train a CNN model '''
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from numba import jit
from keras.utils.np_utils import to_categorical
# Load the CSV into a data frame
df = pd.read_csv('USDJPY_TrailingStop2.csv', index_col='Date')
df.index = pd.to_datetime(df.index)
del df['ATR'], df['Open'], df['High'], df['Low'], df['Buy'], df['Sell']  # keep only Close and the category targets
# Features and target variables
x = df.loc[:, 'Close']
y_buy = df.loc[:, 'Buy_cat']
y_sell = df.loc[:, 'Sell_cat']
# Split into training and test data
x_train, x_test = x[x.index < '2018'], x[x.index >= '2018']
y_buy_train, y_buy_test = y_buy[y_buy.index < '2018'], y_buy[y_buy.index >= '2018']
y_sell_train, y_sell_test = y_sell[y_sell.index < '2018'], y_sell[y_sell.index >= '2018']
# Features: the 60 most recent Close values; target: the 4-class trailing-stop P/L outcome
@jit
def gen_xy(x, y_buy, y_sell, window_len=60):
X, Ybuy, Ysell = [], [], []
for i in range(len(x) - window_len):
X.append(x[i : i + window_len].copy())
Ybuy.append(y_buy[i + window_len - 1])
Ysell.append(y_sell[i + window_len - 1])
X, Ybuy, Ysell = np.array(X), np.array(Ybuy), np.array(Ysell)
    # Normalize the features
scaler = MinMaxScaler()
    scaler.fit(X)  # fit to the min/max of the window_len-period prices
X = scaler.transform(X)
    # Encode the targets as one-hot vectors
Ybuy = np.where(Ybuy > 0, Ybuy + 1, Ybuy + 2)
Ybuy = to_categorical(Ybuy.astype('int32'))
Ysell = np.where(Ysell > 0, Ysell + 1, Ysell + 2)
Ysell = to_categorical(Ysell.astype('int32'))
return X, Ybuy, Ysell
Xtrain, Ytrain_buy, Ytrain_sell = gen_xy(x_train.values, y_buy_train.values, y_sell_train.values)
x_train.values.shape, x_train.values, Xtrain.shape, Xtrain, y_buy_train.values.shape, y_buy_train.values, Ytrain_buy.shape, Ytrain_buy
# +
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dropout
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
''' Train a buy-only AI with the CNN '''
model = Sequential()
model.add(Conv1D(32, 3, activation='relu', padding='valid', input_shape=(60, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(64, 3, activation='relu', padding='valid'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])
early_stopping = EarlyStopping(monitor='val_acc', mode='auto', patience=8)
model_checkpoint = ModelCheckpoint(filepath="BuyerCNN.h5")
# Run the training
model.fit(Xtrain.reshape(Xtrain.shape[0], 60, 1), Ytrain_buy,
          batch_size=2048,  # with a lot of training data, the mini-batch size must be increased or an overflow occurs
          epochs=256,
          shuffle=True,
          validation_split=0.1,  # use 10% of the training data for validation
          callbacks=[early_stopping, model_checkpoint]
          )
# -
Xtest, Ytest_buy, Ytest_sell = gen_xy(x_test.values, y_buy_test.values, y_sell_test.values)
model.evaluate(Xtest.reshape(Xtest.shape[0], 60, 1), Ytest_buy)
# The above shows that when trade outcomes are split into four classes (big win, win, loss, big loss), the model predicts with roughly 48% accuracy
#
# Guessing at random among four classes would give 25%, so this is about twice as good, but in my (admittedly gut-level) view this level of accuracy is not enough to win at FX
#
# Below, as a bonus, the model is also trained with the outcomes reduced to just two classes: win or lose
# +
''' Train with the target reduced to two classes: win or lose '''
@jit
def gen_xy(x, y_buy, y_sell, window_len=60):
X, Ybuy, Ysell = [], [], []
for i in range(len(x) - window_len):
X.append(x[i : i + window_len].copy())
Ybuy.append(y_buy[i + window_len - 1])
Ysell.append(y_sell[i + window_len - 1])
X, Ybuy, Ysell = np.array(X), np.array(Ybuy), np.array(Ysell)
    # Normalize the features
scaler = MinMaxScaler()
    scaler.fit(X)  # fit to the min/max of the window_len-period prices
X = scaler.transform(X)
    # Encode the targets as one-hot vectors
Ybuy = np.where(Ybuy > 0, 1, 0)
Ybuy = to_categorical(Ybuy.astype('int32'))
Ysell = np.where(Ysell > 0, 1, 0)
Ysell = to_categorical(Ysell.astype('int32'))
return X, Ybuy, Ysell
Xtrain, Ytrain_buy, Ytrain_sell = gen_xy(x_train.values, y_buy_train.values, y_sell_train.values)
model = Sequential()
model.add(Conv1D(32, 3, activation='relu', padding='valid', input_shape=(60, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(64, 3, activation='relu', padding='valid'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])
early_stopping = EarlyStopping(monitor='val_acc', mode='auto', patience=8)
model_checkpoint = ModelCheckpoint(filepath="Buyer2.h5")
# Run the training
model.fit(Xtrain.reshape(Xtrain.shape[0], 60, 1), Ytrain_buy,
batch_size=2048,
epochs=256,
shuffle=True,
validation_split=0.1,
callbacks=[early_stopping, model_checkpoint]
)
# -
Xtest, Ytest_buy, Ytest_sell = gen_xy(x_test.values, y_buy_test.values, y_sell_test.values)
model.evaluate(Xtest.reshape(Xtest.shape[0], 60, 1), Ytest_buy)
# From this we can say that when the AI predicts "buy now and you win", the trade actually ends in profit about 62% of the time
#
# At first glance that does not look bad, but given that the market is mostly range-bound, buy signals are rare to begin with, and 4 out of 10 of those rare signals cannot be trusted
#
# Without a great deal of patience, time, and spare capital, this is simply not workable
#
# I also recall reading somewhere, possibly in a paper, that predicting future prices from chart data alone tops out at around 60% accuracy, so this may well be the practical limit of FX prediction
| predict_fx_trade.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nma (py37)
# language: python
# name: nma
# ---
# + [markdown] colab_type="text" id="MNPgoaG57Zl6"
# # Interactive Widget
#
# Interactive demos using `ipywidgets` are very helpful.
# + colab={} colab_type="code" id="rIpvq7637Zl7"
#@title Imports and setup
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
# -
# Make sure to use a `#@title` cell and hide the contents by default, because the code that builds the widget is often fairly ugly and not important for the students to see.
# + colab={"referenced_widgets": ["087fb6e208004c089c4dfc94aecd982c"]} colab_type="code" id="TqYjf5gV7ZmD" outputId="928cc398-a8f3-4832-a352-47df9c6d82c8"
#@title Gaussian PDF demo
x = np.arange(-10, 11, 0.1)
def gaussian(x, mu, sigma):
px = np.exp(-1 / 2 / sigma**2 * (mu - x) ** 2)
px = px / px.sum()
return px
@widgets.interact
def plot_gaussian(mean=(-10, 10, .5), std=(.5, 10, .5)):
plt.plot(x, gaussian(x, mean, std))
# + colab={} colab_type="code" id="QZ3ePiGf7ZmO"
| tutorials/demo/student/Interactive_Widget.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Children's Height
#
# The purpose of this notebook is to explore the relationship between the heights of adult children and the heights of their parents.
# The table below gives data based on the famous 1885 study of <NAME> exploring this relationship. Each case is an adult child, and the variables are
#
# - Family: The family that the child belongs to, labeled from 1 to 205.
# - Father: The father's height, in inches
# - Mother: The mother's height, in inches
# - Gender: The gender of the child, male (M) or female (F)
# - Height: The height of the child, in inches
# - Kids: The number of kids in the family of the child
#
# The data set has 898 cases. The family that we have labeled 205 was originally labeled 136A by Galton.
# https://www.randomservices.org/random/data/Galton.html#
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objs as go
import plotly.io as pio
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats import anova, diagnostic
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
# %matplotlib inline
# -
pio.templates.default = 'seaborn'
pio.renderers.default='iframe'
# +
path = '~/workspace/RegressionTests/lib/data/children_heights/Galtons Height Data.csv'
data = pd.read_csv(path)
data.head(3)
# -
# ## 1. Scatter plot
# Let's start by scatter plotting the dependent variable $y$ and independent variables $x_i \in (Family, Father, Mother, Gender, Kids)$.
# Look for:
# - What is the distribution of each?
# - Do they look correlated?
# - Formulate the mathematical formula in your head
# - What do you expect the signs of the coefficients to be?
fig = px.scatter_matrix(data, color='Gender', width=900,height=1000,
title='Galtons Height Data from the 1800s',
color_continuous_scale='turbid',
hover_name='Family',
template='plotly')
fig.show()
# +
#Correlation matrix
# sns.heatmap(data.corr(),vmin=-1,vmax=1, annot=True,cmap='ocean');
fig3 = px.imshow(data.corr(), zmin=-1, zmax=1,width=500,height=500, color_continuous_scale=['blue','red'])
fig3.show()
# -
# **Distribution of each variable**
# **Family:** appears to be uniformly distributed
# **Father** heights are normal
# **Mother** heights are approximately normal
# **Height** (the dependent variable) is normal within each Gender
# **Kids** appears to be concentrated around 5 and 8. This was normal for the time the data was collected, but might not be relevant in the 21st century!
#
# **Correlation**
# From the scatter matrix, the dependent variable $Height$ seems to have a small negative correlation with Family and a modest positive association with both Father and Mother, while the number of kids appears to have no effect.
# The independent variable $Family$ has a very strong positive correlation with $Father$, probably because the data was sorted by the fathers' heights when family numbers were assigned. The information this attribute provides can be derived from the combination of Father, Mother, and Kids; even so, I'm not going to drop this column just yet!
#
# **Mathematical Equation**
# I don't see non-linearity between $y$ and any of the $x_i$, so I expect the formula to be something like:
# $$Height = \beta_0 - \beta_1 \cdot Family + \beta_2 \cdot Father + \beta_3 \cdot Mother - \beta_4 \cdot Gender \pm \beta_5 \cdot Kids$$
#Estimating coefficients manually
df = data.drop(columns='Height')
df.Gender,gender = df.Gender.factorize(True)
df.insert(0,'const',1)
y = data.Height.values
x = df.values
print(y[:5],x[:5])
b = np.linalg.inv(x.T@x)@(x.T@y)
print("Coefficients:\n",b)
# The coefficients' signs are exactly what I expected!
#
# **Standard Error of Coefficients**
# The next step is to calculate the standard errors of the $\beta_i$, the square roots of the diagonal of the coefficient covariance matrix
# $$\mathrm{Var}(\hat{\beta}) = \hat{\sigma}^2 (X^TX)^{-1}, \qquad SE_{\beta_i} = \sqrt{[\mathrm{Var}(\hat{\beta})]_{ii}}$$
y_hat = x@b
resid = y-y_hat
cov_b = resid.var(ddof=x.shape[1])*np.linalg.inv(x.T@x)  # sigma^2 estimated with n - p degrees of freedom
se_b = np.sqrt(cov_b.diagonal())
se_b = pd.DataFrame({"coef":b,"se":se_b}, index=['intercept','Family','Father','Mother','Gender','Kids'])
se_b
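# With coefficients and standard errors in hand, t-statistics and two-sided p-values follow directly. A sketch with illustrative numbers (in the notebook, the `b` and `se_b` values computed above would be used):

```python
import numpy as np
from scipy import stats

# illustrative coefficient and standard-error values (not the fitted ones)
b = np.array([16.2, -0.002, 0.4, 0.3, 5.2, 0.04])
se = np.array([2.7, 0.003, 0.03, 0.03, 0.14, 0.03])
dof = 898 - len(b)  # n observations minus p estimated parameters

t_stats = b / se
p_values = 2 * stats.t.sf(np.abs(t_stats), dof)  # two-sided p-values
```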
fig = px.scatter(x=y, y=y_hat, color=data.Gender,width=500,height=500, trendline='ols')
fig.show()
# Verify results using statsmodels library
ols = smf.ols('Height~Family+Father+Mother+C(Gender)*Father+ C(Gender)*Mother+Kids', data=data)
res = ols.fit()
pred = res.predict()
print(res.summary())
# **Conclusion:** comparing the results of the two approaches, the coefficients are the same but the intercept differs. However, this has a negligible effect on predictions, as the plot below shows
difference = (pred - y_hat).sum()
fig = plt.figure(figsize=(7,6))
plt.scatter(y_hat, pred)
plt.xlabel(r'Manual $\hat{y}$')
plt.ylabel(r'Package $\hat{y}$')
plt.text(y_hat.min()+2, y_hat.min(), rf'$\Delta_y$={difference:.3e}');
# ### Examine the run-sequence plot:
#
# First we need to sort the data by one of the x variables. I'm going to sort by Father
sort = np.argsort(x[:,2])
plt.plot(y[sort], 'bo');
# +
import plotly.express as px
fig = px.scatter_3d(data,x='Father',y='Mother',z='Height',hover_name='Kids',size=y_hat, color='Gender', width=800,height=500)
fig.show()
# -
fig2 = px.box(data, x=['Father','Mother','Height'],facet_col='Gender')
fig2.show()
# I don't see any non-linear relationship between y and x. However, I'm going to specify multiple models to see which one produces the best results:
# 1. $H = \beta_0 + \beta_1.F + \beta_2.M \pm \beta_3.G \pm \beta_4.K$
# 2. $H = \beta_0 + \beta_1.F + \beta_2.M + \beta_3.(G.F.M) \pm \beta_4.K$
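# One way to compare the two specifications is to fit both and look at AIC. A sketch on synthetic stand-in data (in the notebook, `data` would be passed instead of `demo`):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in for the Galton data
rng = np.random.default_rng(1)
n = 200
demo = pd.DataFrame({
    'Father': rng.normal(69, 2.5, n),
    'Mother': rng.normal(64, 2.3, n),
    'Gender': rng.choice(['M', 'F'], n),
    'Kids': rng.integers(1, 10, n),
})
demo['Height'] = (16 + 0.4 * demo['Father'] + 0.3 * demo['Mother']
                  + 5.2 * (demo['Gender'] == 'M') + rng.normal(0, 2.2, n))

# Model 1: additive terms only; Model 2: a Gender x Father x Mother interaction
m1 = smf.ols('Height ~ Father + Mother + C(Gender) + Kids', data=demo).fit()
m2 = smf.ols('Height ~ Father + Mother + Father:Mother:C(Gender) + Kids', data=demo).fit()
print(m1.aic, m2.aic)  # lower AIC means a better fit/complexity trade-off
```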
| Linear/Simple/Heights.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/TurkuNLP/Turku-neural-parser-pipeline/blob/master/docs/tnpp_diaparse.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="lhS5gcnQMUUX"
# # Turku Neural Parser Pipeline
#
# * A mini-tutorial of the latest version of the parser pipeline
# * Make sure to run it with GPU enabled (Runtime -> Change runtime type -> GPU)
#
# + [markdown] id="EhGCL2O2GaQn"
# # Modules
#
# ## Segmentation
#
# * Tokenization and sentence segmentation happens jointly, and is implemented using the UDPipe library
# * Machine-learned sequence classification model
#
# ## PoS and morphological tagging
#
# * A BERT-based classification model
# * Joint prediction of PoS and Tags
# * Implemented in Pytorch Lightning
#
# ## Dependency parsing
#
# * Parsing is done using the [diaparser](https://github.com/Unipisa/diaparser) parser
# * A BERT-based model, implemented in Torch
#
# ## Lemmatization
#
# * Lemmatization is a sequence-to-sequence model
# * Wordform + Tags -> Lemma
# * Fully machine-learned
# * Implemented using OpenNMT (a machine translation library)
#
# ## GPU
#
# * Current accuracy far beyond previous versions of this pipeline
# * Cost: computationally intense deep neural network models
# * Small tests and examples can run on CPU, but any non-trivial amount of text needs a GPU accelerator
# + [markdown] id="9sOFiRoqJ7fw"
# # INSTALL
#
# * git clone the code
# * cd to the directory
# * and install all requirements
# * this does take a while, as the parser leans on quite large libraries
# + colab={"base_uri": "https://localhost:8080/"} id="KKPUaw73JEwK" outputId="72b75270-69db-4148-d7da-29ef86436f8e"
# !git clone https://github.com/TurkuNLP/Turku-neural-parser-pipeline.git
# %cd Turku-neural-parser-pipeline
# + id="RybqhQ5YMpJZ" colab={"base_uri": "https://localhost:8080/"} outputId="c7071bd9-38dc-49b8-dfd6-a7f0951151f3"
#I like to upgrade these first
# !python3 -m pip install --upgrade pip
# !python3 -m pip install --upgrade setuptools
# + id="ok1lM5IOJmU3" colab={"base_uri": "https://localhost:8080/"} outputId="26c30d36-9c37-4cf2-f35d-8f83c95d962d"
# !python3 -m pip install -r requirements.txt
# + colab={"base_uri": "https://localhost:8080/"} id="FMAjETYgkNMT" outputId="563849ad-4b9b-4303-ac8f-a50c5d983012"
#this is something we need to do for now, hopefully it eases when
#the next version of OpenNMT comes out
# !python3 -m pip install --no-deps OpenNMT-py==2.1.2
# + [markdown] id="Wrsm2c8AKLeF"
# # FETCH MODEL
#
# * At present, only the Finnish (fi_tdt_dia) and English (en_ewt_dia) models are available for the most recent diaparser-based version of the pipeline
# * Models documented here: http://turkunlp.org/Turku-neural-parser-pipeline/models.html
# * ...the remaining UD languages are in the works...
# + colab={"base_uri": "https://localhost:8080/"} id="UviA2z6DKWYv" outputId="47611812-a470-44ce-b66d-54f6bf46b6f3"
# !python3 fetch_models.py fi_tdt_dia
# + [markdown] id="X9gJOGYLKx6u"
# * Note: this might take a while, the model is quite large (>1GB)
# * The above command created the directory `models_fi_tdt_dia` with the model
# * The file `models_fi_tdt_dia/pipelines.yaml` defines all the possible pipelines for the parser in this model
# * The `parse_plaintext` is the correct choice in most situations
# + [markdown] id="uWztsJw8LIeY"
# # PARSE IN PYTHON
#
# * You need to load and start the pipeline of choice
# * Like so:
# + colab={"base_uri": "https://localhost:8080/"} id="9p6um1idLVun" outputId="4f46635e-d6d4-461e-aa14-88f41fc031fb"
from tnparser.pipeline import read_pipelines, Pipeline
# What pipelines do we have for the Finnish model?
available_pipelines=read_pipelines("models_fi_tdt_dia/pipelines.yaml") # {pipeline_name -> its steps}
# This is a dictionary, its keys are the pipelines
print(list(available_pipelines.keys()))
# Instantiate one of the pipelines
p=Pipeline(available_pipelines["parse_plaintext"])
# + colab={"base_uri": "https://localhost:8080/"} id="rNkU44JEIWJT" outputId="b4753dcf-1dae-4c54-d6f3-80d66327f3e5"
txt_in="Minulla on söpö koira. Se haukkuu, syö makkaraa, jahtaa oravia ja tsillailee kanssani!"
parsed=p.parse(txt_in)
print(parsed)
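# The return value is plain CoNLL-U text, so it can be post-processed with ordinary string handling. A minimal sketch with a hard-coded fragment (illustrative, not real parser output):

```python
# minimal CoNLL-U post-processing: count token lines
conllu = """# text = Minulla on koira.
1\tMinulla\tmin\u00e4\tPRON\t_\t_\t2\tobl\t_\t_
2\ton\tolla\tVERB\t_\t_\t0\troot\t_\t_
3\tkoira\tkoira\tNOUN\t_\t_\t2\tnsubj:cop\t_\t_
4\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_

"""

def count_tokens(conllu_text):
    # token lines start with an integer ID in the first tab-separated field
    return sum(1 for line in conllu_text.splitlines()
               if line and line.split('\t')[0].isdigit())

print(count_tokens(conllu))  # -> 4
```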
# + [markdown] id="UvaSQcqfL6Ru"
# # Parsing more data
#
# * You might have many files with data you need to parse
# * If you have massive documents, it makes sense to split them into manageable pieces
# * Here is a basic example of how to achieve that
# * You can download an example zip file I prepared from here: [http://bionlp-www.utu.fi/.ginter/news_test_data.zip](http://bionlp-www.utu.fi/.ginter/news_test_data.zip)
# * Or simply upload your own
#
# + colab={"base_uri": "https://localhost:8080/"} id="4oZ4OxYnVeII" outputId="3382163a-841e-4c83-b60c-1f6b78dff239"
#Remember this notebook uses Turku-neural-parser-pipeline as its working directory
# !wget http://bionlp-www.utu.fi/.ginter/news_test_data.zip
# !unzip news_test_data.zip #will unzip some 60 files into ./test_data
# + [markdown] id="eshlzXZAX_5h"
# * Now we have 67 text files in `test_data` and we would like to parse them
# + colab={"base_uri": "https://localhost:8080/"} id="qsm11zrsVtyD" outputId="84201cbd-b17d-4577-e175-82a6222203f3"
import glob #allows listing files
import tqdm #progress bar
all_files=glob.glob("test_data/*.txt") #list all files we need
for file_name in tqdm.tqdm(all_files):
txt=open(file_name).read() #read the file
parsed=p.parse(txt) #parse it
with open(file_name.replace(".txt",".conllu"),"wt") as f_out: #open output file
f_out.write(parsed) #and write out the result
# + [markdown] id="ZPnxeTeVZNfa"
# * there are now parsed conllu files under `test_data`
# + colab={"base_uri": "https://localhost:8080/"} id="8eT4Rc_2ZbmN" outputId="f8ba2085-9d62-405a-dacc-0a2bd7b64726"
# Basic stats of the parsed files
# !echo "Sentences:" ; cat test_data/*.conllu | grep -Pc '^1\t'
# !echo "Tokens:" ; cat test_data/*.conllu | grep -Pc '^[0-9]+\t'
# + [markdown] id="d-mfmb-2ZzPO"
# * Now we yet need to pack and download the data
# + colab={"base_uri": "https://localhost:8080/"} id="l8WphD8WZ2Lz" outputId="f4140abc-0fc9-4954-c5e9-922704be38aa"
# !zip parsed.zip test_data/*.conllu
# + [markdown] id="vqIMDbdFZ88B"
# ...and download the `parsed.zip` file and you're good to go
# + [markdown] id="dQjvSbNxOmPV"
# # Models
#
# * Universal Dependencies models
# * A handful of specialized models (e.g. biomedical English)
# * Training new models not particularly difficult, documentation for the diaparser-based pipeline training in the works
#
# # Failure modes
#
# * Generally this is a pretty stable parser, it was used to parse some hundreds of millions of sentences successfully
# * Most failures stem from the bleeding-edge libraries we are forced to use; these keep changing rapidly
# * Backward-incompatible, breaking changes are very common
# * The Google Colab environment is regularly upgraded to the newest versions of many common libraries, and this might break some dependencies
#
# In case of failure:
#
# * Runtime -> Factory reset runtime, try again
# * Check that you are on a GPU runtime, large files might still take long to parse -> split your data into more manageable pieces
# * Ping <NAME> or <NAME> with as good a description of the problem as possible
#
| docs/tnpp_diaparse.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="gMBVb4ZB2txG" colab_type="text"
# [Open with Colab](https://colab.research.google.com/github/dsbook/dsbook/blob/master/bert_example_based_finetuning.ipynb)
# + [markdown] id="iJ7NUp6d2-yK" colab_type="text"
# Install the required libraries, transformers and tensorboardX.
#
# At the same time, download the transformers source code from GitHub.
# + id="Z97xnv0yNi98" colab_type="code" colab={}
# !pip install torch==1.2.0+cu92 torchvision==0.4.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
# !pip install torchtext==0.5 configargparse
# !pip install transformers==2.1.1
# !pip install tensorboardX==1.9
# !git clone https://github.com/huggingface/transformers.git -b v2.1.1
# + [markdown] id="GDYhjvEj3ePL" colab_type="text"
# Connect to Google Drive for loading and saving data.
#
# A message "Go to this URL in a browser: https:// ..." will be shown; click the URL and select the account to use.
#
# On the next page, click the "Allow" button and a code will be displayed.
# Copy that code, paste it into the input box under "Enter your authorization code:", and press Enter.
# + id="GjZNLm5tPAkS" colab_type="code" colab={}
from google.colab import drive
drive.mount('./drive')
# + [markdown] id="k6rlKtdY3g9W" colab_type="text"
# Run BERT fine-tuning for example-based responses.
# + id="Tj2ATbHmt9Wp" colab_type="code" colab={}
# !python transformers/examples/run_glue.py --data_dir /content/drive/My\ Drive/dsbook/example_based_bert/ --overwrite_output_dir \
# --model_type bert --model_name_or_path bert-base-multilingual-cased --task_name WNLI --evaluate_during_training --save_steps 1000 --max_steps 1000 \
# --output_dir /content/drive/My\ Drive/dsbook/example_based_bert/out/ --do_train --do_eval --per_gpu_train_batch_size 16
# + [markdown] id="xJFNTvzeMJTo" colab_type="text"
# If a cached file remains on Google Drive, fine-tuning may fail to run; if so, execute the following.
# + id="KjAIEndQMIHO" colab_type="code" colab={}
# !rm /content/drive/My\ Drive/dsbook/example_based_bert/cached_*
| bert_example_based_finetuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="t09eeeR5prIJ"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="GCCk8_dHpuNf"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="ovpZyIhNIgoq"
# # Text generation with an RNN
# + [markdown] colab_type="text" id="hcD2nPQvPOFM"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/text/text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="BwpJ5IffzRG6"
# This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from <NAME>'s [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
#
# Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*. If running locally make sure TensorFlow version >= 1.11.
#
# This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":
#
# <pre>
# QUEENE:
# I had thought thou hadst a Roman; for the oracle,
# Thus by All bids the man against the word,
# Which are so weak of care, by old care done;
# Your children were in your holy love,
# And the precipitation through the bleeding throne.
#
# BISHOP OF ELY:
# Marry, and will, my lord, to weep in such a one were prettiest;
# Yet now I was adopted heir
# Of the world's lamentable day,
# To watch the next way with his father with his face?
#
# ESCALUS:
# The cause why then we are all resolved more sons.
#
# VOLUMNIA:
# O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
# And love and pale as any will to that word.
#
# QUEEN ELIZABETH:
# But how long have I heard the soul for this world,
# And show his hands of life be proved to stand.
#
# PETRUCHIO:
# I say he look'd on, if I must be content
# To stay him from the fatal of our country's bliss.
# His lordship pluck'd from this sentence then for prey,
# And then let us twain, being the moon,
# were she such a case as fills m
# </pre>
#
# While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:
#
# * The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.
#
# * The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.
#
# * As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.
# + [markdown] colab_type="text" id="srXC6pLGLwS6"
# ## Setup
# + [markdown] colab_type="text" id="WGyKZj3bzf9p"
# ### Import TensorFlow and other libraries
# + colab={} colab_type="code" id="yG_n40gFzf9s"
import tensorflow as tf
import numpy as np
import os
import time
# + [markdown] colab_type="text" id="EHDoRoc5PKWz"
# ### Download the Shakespeare dataset
#
# Change the following line to run this code on your own data.
# + colab={} colab_type="code" id="pD_55cOxLkAb"
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
# + [markdown] colab_type="text" id="UHjdCjDuSvX_"
# ### Read the data
#
# First, look in the text:
# + colab={} colab_type="code" id="aavnuByVymwK"
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# + colab={} colab_type="code" id="Duhg9NrUymwO"
# Take a look at the first 250 characters in text
print(text[:250])
# + colab={} colab_type="code" id="IlCgQBRVymwR"
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
# + [markdown] colab_type="text" id="rNnrKn_lL-IJ"
# ## Process the text
# + [markdown] colab_type="text" id="LFjSVAlWzf-N"
# ### Vectorize the text
#
# Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
# + colab={} colab_type="code" id="IalZLbvOzf-F"
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
# + [markdown] colab_type="text" id="tZfqhkYCymwX"
# Now we have an integer representation for each character. Notice that we mapped each character to an index from 0 to `len(vocab)`.
# + colab={} colab_type="code" id="FYyNlCNXymwY"
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# + colab={} colab_type="code" id="l1VKcQHcymwb"
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
# + [markdown] colab_type="text" id="bbmsf23Bymwe"
# ### The prediction task
# + [markdown] colab_type="text" id="wssHQ1oGymwe"
# Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.
#
# Since RNNs maintain an internal state that depends on the previously seen elements, the question becomes: given all the characters computed up to this moment, what is the next character?
#
# + [markdown] colab_type="text" id="hgsVvVxnymwf"
# ### Create training examples and targets
#
# Next divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.
#
# For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
#
# So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
#
# To do this, first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
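# The "Hello" example above can be checked in plain Python (no TensorFlow needed); this is only an illustration of the shift-by-one split, not part of the pipeline:

```python
# Hypothetical mini-example: seq_length = 4 applied to the text "Hello".
text_example = "Hello"
seq_length_example = 4
chunk = text_example[:seq_length_example + 1]   # take seq_length + 1 characters
input_seq, target_seq = chunk[:-1], chunk[1:]   # input drops the last char, target drops the first
print(input_seq, target_seq)  # Hell ello
```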
# + colab={} colab_type="code" id="0UHJDA39zf-O"
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
# + [markdown] colab_type="text" id="-ZSYAcQV8OGP"
# The `batch` method lets us easily convert these individual characters to sequences of the desired size.
# + colab={} colab_type="code" id="l4hkDU3i7ozi"
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
# + [markdown] colab_type="text" id="UbLcIPBj_mWZ"
# For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch:
# + colab={} colab_type="code" id="9NGu-FkO_kYU"
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
# + [markdown] colab_type="text" id="hiCopyGZymwi"
# Print the first examples input and target values:
# + colab={} colab_type="code" id="GNbw-iR0ymwj"
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
# + [markdown] colab_type="text" id="_33OHL3b84i0"
# Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing, but the `RNN` considers the previous step's context in addition to the current input character.
# + colab={} colab_type="code" id="0eBu9WZG84i0"
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
# + [markdown] colab_type="text" id="MJdfPmdqzf-R"
# ### Create training batches
#
# We used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
# + colab={} colab_type="code" id="p2pGotuNzf-S"
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
# + [markdown] colab_type="text" id="r6oUuElIMgVx"
# ## Build The Model
# + [markdown] colab_type="text" id="m8gPwEjRzf-Z"
# Use `tf.keras.Sequential` to define the model. For this simple example, three layers are used:
#
# * `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the numbers of each character to a vector with `embedding_dim` dimensions;
# * `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use a LSTM layer here.)
# * `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs.
# + colab={} colab_type="code" id="zHT8cLh7EAsg"
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
# + colab={} colab_type="code" id="MtCrdfzEI2N0"
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
tf.keras.layers.GRU(rnn_units,
return_sequences=True,
stateful=True,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size)
])
return model
# + colab={} colab_type="code" id="wwsrpOik5zhv"
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
# + [markdown] colab_type="text" id="RkA5upJIJ7W7"
# For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:
#
# 
# + [markdown] colab_type="text" id="gKbfm04amhXk"
# Please note that we chose the Keras sequential model here since all the layers in the model have a single input and produce a single output. In case you want to retrieve and reuse the states from a stateful RNN layer, you might want to build your model with the Keras functional API or model subclassing. Please check the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnn#rnn_state_reuse) for more details.
# + [markdown] colab_type="text" id="-ubPo0_9Prjb"
# ## Try the model
#
# Now run the model to see that it behaves as expected.
#
# First check the shape of the output:
# + colab={} colab_type="code" id="C-_70kKAPrPU"
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
# + [markdown] colab_type="text" id="Q6NzLBi4VM4o"
# In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
# + colab={} colab_type="code" id="vPGmAAXmVLGC"
model.summary()
# + [markdown] colab_type="text" id="uwv0gEkURfx1"
# To get actual predictions from the model, we need to sample from the output distribution to obtain character indices. This distribution is defined by the logits over the character vocabulary.
#
# Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.
#
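# Why sampling beats argmax can be seen with a small NumPy sketch (the logits and temperature here are made-up values, not from the model): argmax always returns the same index, while categorical sampling visits every index with probability proportional to its softmax weight.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_logits(logits, temperature=1.0):
    # Scale logits by temperature, softmax to probabilities, then draw a sample.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.1]            # hypothetical per-character scores
greedy = int(np.argmax(logits))     # deterministic: always the same index
samples = {sample_from_logits(logits) for _ in range(500)}
print(greedy, sorted(samples))
```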
# Try it for the first example in the batch:
# + colab={} colab_type="code" id="4V4MfFg0RQJg"
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
# + [markdown] colab_type="text" id="QM1Vbxs_URw5"
# This gives us, at each timestep, a prediction of the next character index:
# + colab={} colab_type="code" id="YqFMUQc_UFgM"
sampled_indices
# + [markdown] colab_type="text" id="LfLtsP3mUhCG"
# Decode these to see the text predicted by this untrained model:
# + colab={} colab_type="code" id="xWcFwPwLSo05"
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
# + [markdown] colab_type="text" id="LJL0Q0YPY6Ee"
# ## Train the model
# + [markdown] colab_type="text" id="YCbHQHiaa4Ic"
# At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
# + [markdown] colab_type="text" id="trpqTWyvk0nr"
# ### Attach an optimizer, and a loss function
# + [markdown] colab_type="text" id="UAjbjY03eiQ4"
# The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.
#
# Because our model returns logits, we need to set the `from_logits` flag.
#
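# What `from_logits=True` means can be sketched by hand for a single timestep (the logits and class below are invented for illustration): the logits are unnormalized scores, Keras applies softmax internally, and the loss is the negative log-probability of the true class.

```python
import numpy as np

logits = np.array([2.0, 0.5, -1.0])   # hypothetical unnormalized scores
true_class = 0
# from_logits=True tells the loss to apply this softmax itself:
probs = np.exp(logits - logits.max())
probs /= probs.sum()
loss = -np.log(probs[true_class])     # negative log-likelihood of the true class
print(round(float(loss), 4))
```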
# + colab={} colab_type="code" id="4HrXTACTdzY-"
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
# + [markdown] colab_type="text" id="jeOXriLcymww"
# Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function.
# + colab={} colab_type="code" id="DDl1_Een6rL0"
model.compile(optimizer='adam', loss=loss)
# + [markdown] colab_type="text" id="ieSJdchZggUj"
# ### Configure checkpoints
# + [markdown] colab_type="text" id="C6XBUUavgF56"
# Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
# + colab={} colab_type="code" id="W6fWTriUZP-n"
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
# + [markdown] colab_type="text" id="3Ky3F_BhgkTW"
# ### Execute the training
# + [markdown] colab_type="text" id="IxdOA-rgyGvs"
# To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
# + colab={} colab_type="code" id="7yGBE2zxMMHs"
EPOCHS=10
# + colab={} colab_type="code" id="UK-hmKjYVoll"
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
# + [markdown] colab_type="text" id="kKkD5M6eoSiN"
# ## Generate text
# + [markdown] colab_type="text" id="JIPcXllKjkdr"
# ### Restore the latest checkpoint
# + [markdown] colab_type="text" id="LyeYRiuVjodY"
# To keep this prediction step simple, use a batch size of 1.
#
# Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.
#
# To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.
#
# + colab={} colab_type="code" id="zk2WJ2-XjkGz"
tf.train.latest_checkpoint(checkpoint_dir)
# + colab={} colab_type="code" id="LycQ-ot_jjyu"
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
# + colab={} colab_type="code" id="71xa6jnYVrAN"
model.summary()
# + [markdown] colab_type="text" id="DjGz1tDkzf-u"
# ### The prediction loop
#
# The following code block generates the text:
#
# * It starts by choosing a start string, initializing the RNN state, and setting the number of characters to generate.
#
# * Get the prediction distribution of the next character using the start string and the RNN state.
#
# * Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.
#
# * The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it builds up context from the previously predicted characters.
#
#
# 
#
# Looking at the generated text, you'll see the model knows when to capitalize, makes paragraphs, and imitates a Shakespeare-like vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
# + colab={} colab_type="code" id="WvuwZBX5Ogfd"
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
    # Low temperature results in more predictable text.
    # Higher temperature results in more surprising text.
# Experiment to find the best setting.
temperature = 1.0
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
# using a categorical distribution to predict the character returned by the model
predictions = predictions / temperature
predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted character as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
# + colab={} colab_type="code" id="ktovv0RFhrkn"
print(generate_text(model, start_string=u"ROMEO: "))
# + [markdown] colab_type="text" id="AM2Uma_-yVIq"
# The easiest thing you can do to improve the results is to train for longer (try `EPOCHS=30`).
#
# You can also experiment with a different start string, or try adding another RNN layer to improve the model's accuracy, or adjusting the temperature parameter to generate more or less random predictions.
# + [markdown] colab_type="text" id="Y4QwTjAM6A2O"
# ## Advanced: Customized Training
#
# The above training procedure is simple, but does not give you much control.
#
# So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.
#
# We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).
#
# The procedure works as follows:
#
# * First, reset the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.
#
# * Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.
#
# * Open a `tf.GradientTape`, and calculate the predictions and loss in that context.
#
# * Calculate the gradients of the loss with respect to the model variables using the `tf.GradientTape.gradient` method.
#
# * Finally, take a step downwards by using the optimizer's `apply_gradients` method.
#
# + colab={} colab_type="code" id="_XAm7eCoKULT"
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
# + colab={} colab_type="code" id="qUKhnZtMVpoJ"
optimizer = tf.keras.optimizers.Adam()
# + colab={} colab_type="code" id="b4kH1o0leVIp"
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
predictions = model(inp)
loss = tf.reduce_mean(
tf.keras.losses.sparse_categorical_crossentropy(
target, predictions, from_logits=True))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
# + colab={} colab_type="code" id="d4tSNwymzf-q"
# Training step
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
# resetting the hidden state at the start of every epoch
model.reset_states()
for (batch_n, (inp, target)) in enumerate(dataset):
loss = train_step(inp, target)
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch+1, batch_n, loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
| site/en/tutorials/text/text_generation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import tensorflow as tf
import GRN
import random as rd
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
GRAPH = 21
EPOCHS = 1000
def base(NO):
'''mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0'''
grn = GRN.GRN(NO)
dic = {"data": "labels"}
# NUM_NOS was undefined in the original; GRAPH (the node count) is assumed here
while len(dic) < int(2**GRAPH / 3):
    key = str(rd.choices(range(2), k=21)).replace(", ", "")
    dic[key] = grn.atrator(key)
dataframe = pd.DataFrame(data=dic)
dataframe = dataframe.rename({'col': 'log(gdp)'}, axis=1)
def new_model():
modelo = tf.keras.models.Sequential([
    # Flatten takes no units or activation; a Dense input layer is assumed here
    tf.keras.layers.Dense(20, activation=tf.nn.relu, input_shape=(10,)),
    tf.keras.layers.Dense(20, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)  # tf.train.RMSPropOptimizer was removed in TF 2.x
modelo.compile(loss='mse', optimizer=optimizer, metrics=['mae'])
return modelo
# Create the model
model = new_model()
# Summary of the model structure
model.summary()
# Show progress as the epochs pass
class loading(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs):
        if epoch % 100 == 0: print('')
        print('.', end='')
# Train the network (note: x_train / y_train come from the MNIST loading
# commented out above and are undefined as written)
history = model.fit(x_train, y_train, epochs=EPOCHS,
validation_split=0.2, verbose=0,
callbacks=[loading()])
def plot_history(history):
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [1000$]')
plt.plot(history.epoch, np.array(history.history['mean_absolute_error']),
label='Train Loss')
plt.plot(history.epoch, np.array(history.history['val_mean_absolute_error']),
label = 'Val loss')
plt.legend()
plt.ylim([0, 5])
plot_history(history)
# -
| src/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using Null
#
# - teacher
#
# id | dept | name | phone | mobile
# ----|-------|-------|-------|-----
# 101 | 1 | Shrivell | 2753 | 07986 555 1234
# 102 | 1 | Throd | 2754 | 07122 555 1920
# 103 | 1 | Splint | 2293 |
# 104 | | Spiregrain | 3287 |
# 105 | 2 | Cutflower | 3212 | 07996 555 6574
# 106 | | Deadyawn | 3345 |
# ... | | | |
#
# - dept
#
# id | name
# ----|----
# 1 | Computing
# 2 | Design
# 3 | Engineering
# ... |
#
# ### Teachers and Departments
# The school includes many departments. Most teachers work exclusively for a single department. Some teachers have no department.
#
# [Selecting NULL values](https://sqlzoo.net/wiki/Selecting_NULL_values).
# +
import os
import pandas as pd
import findspark
os.environ['SPARK_HOME'] = '/opt/spark'
findspark.init()
from pyspark.sql import SparkSession
sc = (SparkSession.builder.appName('app08')
.config('spark.sql.warehouse.dir', 'hdfs://quickstart.cloudera:8020/user/hive/warehouse')
.config('hive.metastore.uris', 'thrift://quickstart.cloudera:9083')
.enableHiveSupport().getOrCreate())
# -
# ## 1. NULL, INNER JOIN, LEFT JOIN, RIGHT JOIN
#
# List the teachers who have NULL for their department.
#
# > _Why we cannot use =_
# > You might think that the phrase dept=NULL would work here but it doesn't - you can use the phrase dept IS NULL
# >
# > _That's not a proper explanation._
# > No it's not, but you can read a better explanation at Wikipedia:NULL.
teacher = sc.read.table('sqlzoo.teacher')
dept = sc.read.table('sqlzoo.dept')
from pyspark.sql.functions import *
teacher.filter(isnull(teacher['dept'])).select('name').toPandas()
# ## 2.
# Note the INNER JOIN misses the teachers with no department and the departments with no teacher.
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'])
.select('teacher', 'name')
.toPandas())
# ## 3.
# Use a different JOIN so that all teachers are listed.
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'], how='left')
.select('teacher', 'name')
.toPandas())
# ## 4.
# Use a different JOIN so that all departments are listed.
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'], how='right')
.select('teacher', 'name')
.toPandas())
# ## 5. Using the [COALESCE](https://sqlzoo.net/wiki/COALESCE) function
#
#
# Use COALESCE to print the mobile number. Use the number '07986 444 2266' if there is no number given. **Show teacher name and mobile number or '07986 444 2266'**
teacher.select('name', 'mobile').fillna({'mobile': '07986 444 2266'}).toPandas()
# ## 6.
# Use the COALESCE function and a LEFT JOIN to print the teacher name and department name. Use the string 'None' where there is no department.
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'], how='left')
.select('teacher', 'name')
.fillna({'name': 'None'})
.toPandas())
# ## 7.
# Use COUNT to show the number of teachers and the number of mobile phones.
teacher.agg({'name': 'count', 'mobile': 'count'}).toPandas()
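# The two counts differ because COUNT (like `count` in pandas and Spark aggregations) skips NULLs. A quick pandas illustration with made-up teacher rows:

```python
import pandas as pd

# Hypothetical rows; the third teacher has no mobile number.
df = pd.DataFrame({
    "name":   ["Shrivell", "Throd", "Splint"],
    "mobile": ["07986 555 1234", "07122 555 1920", None],
})
counts = df[["name", "mobile"]].count()   # count() excludes missing values
print(counts.to_dict())  # {'name': 3, 'mobile': 2}
```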
# ## 8.
# Use COUNT and GROUP BY **dept.name** to show each department and the number of staff. Use a RIGHT JOIN to ensure that the Engineering department is listed.
(teacher.withColumnRenamed('name', 'teacher')
.join(dept, teacher['dept']==dept['id'], how='right')
.groupBy('name')
.agg({'teacher': 'count'})
.toPandas())
# ## 9. Using [CASE](https://sqlzoo.net/wiki/CASE)
#
#
# Use CASE to show the **name** of each teacher followed by 'Sci' if the teacher is in **dept** 1 or 2 and 'Art' otherwise.
(teacher.select('name', 'dept', when(teacher['dept'].isin([1, 2]), 'Sci')
.otherwise('Art').alias('label'))
.toPandas())
# ## 10.
# Use CASE to show the name of each teacher followed by 'Sci' if the teacher is in dept 1 or 2, show 'Art' if the teacher's dept is 3 and 'None' otherwise.
(teacher.select('name', 'dept',
when(teacher['dept'].isin([1, 2]), 'Sci')
.when(teacher['dept'].isin([3, ]), 'Art')
.otherwise('None').alias('label'))
.toPandas())
sc.stop()
| Spark/08 Using Null.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import os, sys
assert 'SUMO_HOME' in os.environ, "please declare the SUMO_HOME environment variable"
tools = os.path.join(os.environ['SUMO_HOME'], 'tools')
sys.path.append(tools)
sumoBinary = "C:/Program Files (x86)/DLR/Sumo/bin/sumo-gui"
sumoCmd = [sumoBinary, "-c", "C:/Users/sreeniva/Desktop/Reinforcement Learning/madrl_traffic_control/Sumo Stuff/hello.sumocfg"]
import traci
traci.start(sumoCmd)
# +
import traci
import math
from collections import defaultdict
CAR_WIDTH = 5
MAX_HEIGHT = 200 / CAR_WIDTH
MAX_LENGTH = 200 / CAR_WIDTH
lane_ids = ["left-right-1_0", "left-right-2_0",
"right-left-1_0", "right-left-2_0",
"up-down-1_0", "up-down-2_0",
"down-up-1_0", "down-up-2_0", ]
def get_left_right_dtse(x_min, x_max, y):
vehicle_vel = 0 # default
vehicle_present = -1 # default
vehicle_id = -999
n_blocks = int(math.ceil(abs(x_max - x_min) / CAR_WIDTH))
dtse_map = [[vehicle_present, vehicle_vel, vehicle_id] for x in range(n_blocks)]
vehicle_ids = traci.vehicle.getIDList()
for vehicle_id in vehicle_ids:
(x_pos, y_pos) = traci.vehicle.getPosition(vehicle_id)
vehicle_vel = traci.vehicle.getSpeed(vehicle_id)
if x_pos > x_min and x_pos < x_max and y_pos == y:
# make sure blocks are equally spaced starting from the junction
block = int((x_max - x_pos) / CAR_WIDTH)
# print x_pos, y_pos, block
dtse_map[block] = [1, vehicle_vel, vehicle_id]
return dtse_map
def get_right_left_dtse(x_min, x_max, y):
vehicle_vel = 0 # default
vehicle_present = -1 # default
vehicle_id = -999
n_blocks = int(math.ceil(abs(x_max - x_min) / CAR_WIDTH))
dtse_map = [(vehicle_present, vehicle_vel, vehicle_id) for x in range(n_blocks)]
vehicle_ids = traci.vehicle.getIDList()
for vehicle_id in vehicle_ids:
(x_pos, y_pos) = traci.vehicle.getPosition(vehicle_id)
vehicle_vel = traci.vehicle.getSpeed(vehicle_id)
if x_pos > x_min and x_pos < x_max and y_pos == y:
block = int((x_pos - x_min) / CAR_WIDTH)
# print x_pos, y_pos, block
dtse_map[block] = [1, vehicle_vel, vehicle_id]
return dtse_map
def get_up_down_dtse(y_min, y_max, x):
vehicle_vel = 0 # default
vehicle_present = -1 # default
vehicle_id = -999
n_blocks = int(math.ceil(abs(y_max - y_min) / CAR_WIDTH))
dtse_map = [(vehicle_present, vehicle_vel, vehicle_id) for y in range(n_blocks)]
vehicle_ids = traci.vehicle.getIDList()
for vehicle_id in vehicle_ids:
(x_pos, y_pos) = traci.vehicle.getPosition(vehicle_id)
vehicle_vel = traci.vehicle.getSpeed(vehicle_id)
if y_pos > y_min and y_pos < y_max and x_pos == x:
# make sure blocks are equally spaced starting from the junction
block = int((y_pos - y_min) / CAR_WIDTH)
# print x_pos, y_pos, block
dtse_map[block] = [1, vehicle_vel, vehicle_id]
return dtse_map
def get_down_up_dtse(y_min, y_max, x):
vehicle_vel = 0 # default
vehicle_present = -1 # default
vehicle_id = -999
n_blocks = int(math.ceil(abs(y_max - y_min) / CAR_WIDTH))
dtse_map = [(vehicle_present, vehicle_vel, vehicle_id) for y in range(n_blocks)]
vehicle_ids = traci.vehicle.getIDList()
for vehicle_id in vehicle_ids:
(x_pos, y_pos) = traci.vehicle.getPosition(vehicle_id)
vehicle_vel = traci.vehicle.getSpeed(vehicle_id)
if y_pos > y_min and y_pos < y_max and x_pos == x:
# make sure blocks are equally spaced starting from the junction
block = int((y_max - y_pos) / CAR_WIDTH)
# print x_pos, y_pos, block
dtse_map[block] = [1, vehicle_vel, vehicle_id]
return dtse_map
def normalize_dtse(dtse):
max_vel = 0
for (vehicle_present, vehicle_vel, vehicle_id) in dtse:
max_vel = max(max_vel, vehicle_vel)
# avoid divide by zero
if max_vel == 0:
max_vel = 1
normalized_dtse = [[vehicle_present, (vehicle_vel/max_vel), vehicle_id] for (vehicle_present, vehicle_vel, vehicle_id) in dtse]
return normalized_dtse
def get_dtse_for_junction():
# NOTE: all outgoing lanes have been commented because DTSE
# should be calculated only for incoming lanes
# left-right-1
[(x_min, y), (x_max, y1)] = traci.lane.getShape('left-right-1_0')
lr_1_dtse = get_left_right_dtse(x_min, x_max, y)
# # left-right-2 # block size will be wrong near the junction
# [(x_min, y), (x_max, y1)] = traci.lane.getShape('left-right-2_0')
# lr_2_dtse = get_left_right_dtse(x_min, x_max, y)
# right-left-1
[(x_max, y), (x_min, y1)] = traci.lane.getShape('right-left-1_0')
rl_1_dtse = get_left_right_dtse(x_min, x_max, y)
# # right-left-2 # block size will be wrong near the junction
# [(x_max, y), (x_min, y1)] = traci.lane.getShape('right-left-2_0')
# rl_2_dtse = get_left_right_dtse(x_min, x_max, y)
# up-down-1
[(x, y_max), (x1, y_min)] = traci.lane.getShape('up-down-1_0')
ud_1_dtse = get_up_down_dtse(y_min, y_max, x)
# # up-down-2 # block size will be wrong near the junction
# [(x, y_max), (x1, y_min)] = traci.lane.getShape('up-down-2_0')
# ud_2_dtse = get_up_down_dtse(y_min, y_max, x)
# down-up-1
[(x, y_min), (x1, y_max)] = traci.lane.getShape('down-up-1_0')
du_1_dtse = get_down_up_dtse(y_min, y_max, x)
# # down-up-2 # block size will be wrong near the junction
# [(x, y_min), (x1, y_max)] = traci.lane.getShape('down-up-2_0')
# du_2_dtse = get_down_up_dtse(y_min, y_max, x)
dtse_list = [lr_1_dtse, rl_1_dtse, ud_1_dtse, du_1_dtse]
normalized_dtse_list = []
for dtse in dtse_list:
normalized_dtse_list.append(normalize_dtse(dtse))
return normalized_dtse_list
min_speed = 0.1
# call this at every step
# this function uses too much space because of the dict
def get_avg_waiting_time_v1(vehicle_wait_times):
avg_wait_time = 0.0
vehicle_ids = traci.vehicle.getIDList()
for vehicle_id in vehicle_ids:
if traci.vehicle.getSpeed(vehicle_id) < 0.1:
vehicle_wait_times[vehicle_id] += 1
total_waiting_time = sum(vehicle_wait_times.values()) # sum over dictionary
n_vehicles = len(vehicle_wait_times.keys())
avg_wait_time = total_waiting_time / n_vehicles if n_vehicles > 0 else 0
return avg_wait_time
# total_waiting_time = 0
# total_moving_time = 0
# call this at every step
# each step: total_moving_time = (gamma * total_moving_time) + 1 (and likewise for waiting)
def get_avg_waiting_frac(total_waiting_time, total_moving_time, gamma=1.0):
vehicle_ids = traci.vehicle.getIDList()
for vehicle_id in vehicle_ids:
if traci.vehicle.getSpeed(vehicle_id) < 0.1:
total_waiting_time = gamma * total_waiting_time + 1
else:
total_moving_time = gamma * total_moving_time + 1
avg_wait_frac = total_waiting_time / (total_waiting_time + total_moving_time) if len(vehicle_ids) > 0 else 0
return avg_wait_frac
def my_plot(data, x_label='time', y_label='average waiting fraction'):
plt.plot(data)
plt.xlabel(x_label)
plt.ylabel(y_label)
plt.show()
# define the agent's action
# pass the green duration for the next phase
# NOTE: make sure to call this only at the beginning of the phase or else
# you may end up resetting the existing phase duration
# default green duration is 31s
def act(green_duration):
tls = traci.trafficlights.getIDList()[0]
curr_phase = traci.trafficlights.getPhase(tls)
# we can change the duration only at the beginning of phases 0, 3, 6, 9
if curr_phase % 3 == 0:
print "setting phase to {}".format(green_duration)
traci.trafficlights.setPhaseDuration(tls, green_duration)
gamma_avg_wait_frac_list = defaultdict(list)
avg_waiting_time_list = []
def test_workflow():
vehicle_wait_times = defaultdict(lambda: 0.0)
step = 0
while step < 1000:
# this represents the avg_waiting_frac the last time we took an action
vehicle_wait_times = run_sim_step(step, vehicle_wait_times)
step += 1
def run_sim_step(step, vehicle_wait_times):
tls = traci.trafficlights.getIDList()[0]
prev_phase = traci.trafficlights.getPhase(tls)
# get avg_waiting_time from previous action till now
avg_waiting_time = get_avg_waiting_time_v1(vehicle_wait_times)
avg_waiting_time_list.append(avg_waiting_time)
total_waiting_time = defaultdict(lambda: 0)
total_moving_time = defaultdict(lambda: 0)
for gamma in [0.1*x for x in range(1, 11)]:
gamma_avg_wait_frac_list[gamma].append(get_avg_waiting_frac(total_waiting_time[gamma], total_moving_time[gamma], gamma))
traci.simulationStep()
curr_phase = traci.trafficlights.getPhase(tls)
if (curr_phase != prev_phase) and (curr_phase % 3 == 0):
# reset everyone's waiting time
vehicle_wait_times = defaultdict(lambda: 0.0)
# phase has changed and the agent needs to do something!
# get DTSE
# dtse = get_dtse_for_junction()
# compute reward
# if it has reduced, we get a postive reward!
reward = avg_waiting_time
# print("reward!", reward)
# act!
# act(20)
return vehicle_wait_times
# -
test_workflow()
# +
from matplotlib import pyplot as plt
x_label='time'
y_label='average waiting fraction'
plt.plot(avg_waiting_time_list)
plt.xlabel(x_label)
plt.ylabel('avg_waiting_time')
plt.show()
for gamma in [0.1*x for x in range(1, 11)]:
plt.plot(gamma_avg_wait_frac_list[gamma])
plt.xlabel(x_label)
plt.ylabel('avg_waiting_frac: gamma={}'.format(gamma))
plt.show()
# -
| Sumo Stuff/Nearly RL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="979DKMRxRlER" colab_type="text"
# ### Importing the libraries
# + id="6UXaGVBGK7ls" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} executionInfo={"status": "ok", "timestamp": 1598663628614, "user_tz": 300, "elapsed": 1194, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="5a2d4919-bd03-441b-bbed-99e8834371a3"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import math
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
import xgboost
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
# + [markdown] id="3PXQq-gFRur7" colab_type="text"
# ### Importing the files
#
#
# * Train set
# * Test set
# * Test Solutions set
#
#
# + id="_e9Y3wRgL2K-" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598663641089, "user_tz": 300, "elapsed": 3177, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# + id="vEHud6CCMGx6" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598669629444, "user_tz": 300, "elapsed": 448, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# + id="_8fF8HQXM0He" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598663682229, "user_tz": 300, "elapsed": 8420, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
file_id = '1TTvLZ7TLlQz95byhaSqoKzQxQtfdLee2'
downloaded = drive.CreateFile({'id': file_id})  # replace file_id with the id of the file you want to access
downloaded.GetContentFile('train_set.csv')
# Read the file as a pandas DataFrame
train_df = pd.read_csv('train_set.csv')
# + id="YKSle7jvQhgI" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598669634436, "user_tz": 300, "elapsed": 2144, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
file_id = '1oErLhYHcKVPvyXeShiTBofg2-hQxoRfT'
downloaded = drive.CreateFile({'id': file_id})  # replace file_id with the id of the file you want to access
downloaded.GetContentFile('test_set.csv')
# Read the file as a pandas DataFrame
test_df = pd.read_csv('test_set.csv')
# + id="95M2SXPQK7lw" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598663742015, "user_tz": 300, "elapsed": 1636, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
file_id = '1SlYEdNdkDHqP_Cxy6tSjawGJ4zTqLhUF'
downloaded = drive.CreateFile({'id': file_id})  # replace file_id with the id of the file you want to access
downloaded.GetContentFile('test_solutions.csv')
# Read the file as a pandas DataFrame
test_solutions = pd.read_csv('test_solutions.csv')
# + [markdown] id="5g38jA5JSKbU" colab_type="text"
# ### Data Pre-processing
# + id="W45yx3ANK7l1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 252} executionInfo={"status": "ok", "timestamp": 1598663742018, "user_tz": 300, "elapsed": 780, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="24e00b2b-edb5-4e8c-a3ea-249fd78cd5ee"
train_df.isnull().sum()
# + [markdown] id="KgE0iEeMSsEV" colab_type="text"
# ### Heatmap to detect Correlation
# + id="L2olkiYRK7l4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 567} executionInfo={"status": "ok", "timestamp": 1598663744302, "user_tz": 300, "elapsed": 1556, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="1f28fc86-ff5a-467e-ac90-dd0fe4468af4"
train_df1 = train_df.drop('profile_id', axis = 1)
plt.figure(figsize = (10,8))
cmap = sns.diverging_palette(150, 275, s=80, l=55, n=9)
sns.heatmap(train_df1.corr(), cmap= cmap,annot= True)
# + [markdown] id="aiX1W1JPTG5C" colab_type="text"
# ### Detecting Outliers using Box plot
# + id="wRWQLfctK7l7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1598663745748, "user_tz": 300, "elapsed": 1702, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="2ccf3d9a-6da6-4d20-e099-6a580ba7a3cf"
df2 = ['ambient','coolant','u_d','u_q','motor_speed','torque','i_d','i_q']  # numeric columns to inspect for outliers
for i in df2:
sns.boxplot(train_df[i])
plt.show()
# + id="3VVGO9d3K7mA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} executionInfo={"status": "ok", "timestamp": 1598663745750, "user_tz": 300, "elapsed": 951, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="07e7c5d7-7fd2-46b0-8683-8dd76a7a8051"
train_df.torque.describe()
# + id="p282vcxsK7mD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 168} executionInfo={"status": "ok", "timestamp": 1598663746452, "user_tz": 300, "elapsed": 449, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="b3448828-b76e-46a8-840b-062fb2455db8"
train_df.i_q.describe()
# + id="AzWAPabFK7mF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} executionInfo={"status": "ok", "timestamp": 1598663747117, "user_tz": 300, "elapsed": 450, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="585cd6be-068d-4d9f-d94b-c568521f0bc6"
# Tukey fences: Q1 - 1.5*IQR and Q3 + 1.5*IQR, using the quartiles from describe()
Min_torque = -0.365 - (1.5 * 0.837)   # torque: Q1 = -0.365, Q3 = 0.472, IQR = 0.837
Max_torque = 0.472 + (1.5 * 0.837)
print('Min_torque: ', Min_torque)
print('Max_torque: ', Max_torque)
Min_iq = -0.362 - (1.5 * 0.849)       # i_q: Q1 = -0.362, Q3 = 0.487, IQR = 0.849
Max_iq = 0.487 + (1.5 * 0.849)
print('Min_iq: ', Min_iq)
print('Max_iq: ', Max_iq)
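The hard-coded quartile arithmetic above can be wrapped in a reusable helper. A sketch, assuming the standard Q1 − 1.5·IQR / Q3 + 1.5·IQR rule (`tukey_fences` is a hypothetical name):

```python
import pandas as pd

def tukey_fences(series):
    # standard 1.5 * IQR outlier fences
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

lo, hi = tukey_fences(pd.Series([1, 2, 3, 4, 100]))
print(lo, hi)  # -1.0 7.0
```

Applied to `train_df['torque']` or `train_df['i_q']`, this reproduces the bounds computed by hand above without re-reading `describe()`.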
# + id="GnjRmOGTK7mJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 402} executionInfo={"status": "ok", "timestamp": 1598663747460, "user_tz": 300, "elapsed": 405, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="761e0f00-53b3-4795-929f-a435e7a2072c"
new_train_df = train_df.loc[~((train_df['ambient'] < -2) | (train_df['ambient'] > 2) | (train_df['u_d'] > 2) | (train_df['torque'] < Min_torque) | (train_df['torque'] > Max_torque) | (train_df['i_q'] < Min_iq) | (train_df['i_q'] > Max_iq)),:]
new_train_df
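The chained filter above uses pandas boolean masking: build a mask of outlier rows, then keep its complement with `~`. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'ambient': [-3.0, 0.0, 1.0, 5.0]})
mask = (df['ambient'] < -2) | (df['ambient'] > 2)  # True marks an outlier row
clean = df.loc[~mask, :]
print(clean['ambient'].tolist())  # [0.0, 1.0]
```

Each `|`-joined condition flags one kind of outlier, so `~mask` keeps only rows that pass every check at once.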
# + [markdown] id="ChCva_QETOzz" colab_type="text"
# ### Exploratory Data Analysis
# + id="OobIRcBwK7l9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 517} executionInfo={"status": "ok", "timestamp": 1598663748977, "user_tz": 300, "elapsed": 650, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="059c1705-052d-42ae-a727-c7da0cd3376b"
plt.figure(figsize = (10,8))
train_df.groupby('profile_id').agg('max')['ambient'].sort_values(ascending = False).plot(kind = 'bar')
# + [markdown] id="kDkepHtDUkdQ" colab_type="text"
# #### Feature Engineering
# + [markdown] id="4V6iHVnlU6v5" colab_type="text"
# ##### We add a new column 'recording_second' giving the elapsed second at which each row was recorded within its profile_id
# + id="dVbU79T6K7mM" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598663764012, "user_tz": 300, "elapsed": 8658, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
# build a per-profile cumulative timestamp: samples are recorded at 2 Hz, so each row adds 0.5 s
abc = new_train_df.profile_id.unique()
xyz1 = []
for i in abc:
counter = 0
for j in new_train_df['profile_id']:
if i == j:
counter = counter + 0.5
xyz1.append(counter)
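The nested loop above is O(n²) over rows; the same column can be built in one pass with a grouped cumulative count. A sketch, assuming rows within each profile_id are already in recording order:

```python
import pandas as pd

df = pd.DataFrame({'profile_id': [4, 4, 4, 6, 6]})
# samples are recorded at 2 Hz, so the k-th row of a profile is at 0.5 * k seconds
df['recording_second'] = (df.groupby('profile_id').cumcount() + 1) * 0.5
print(df['recording_second'].tolist())  # [0.5, 1.0, 1.5, 0.5, 1.0]
```

`groupby(...).cumcount()` restarts at 0 for each profile, which is exactly what the `counter` variable does per unique id.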
# + id="IpVRcFSXY-TG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598663764015, "user_tz": 300, "elapsed": 8219, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="e5f9521a-7ff8-47ed-f3e8-f71f5d771c39"
len(xyz1)
# + id="fnNJeitqK7mP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 118} executionInfo={"status": "ok", "timestamp": 1598663764017, "user_tz": 300, "elapsed": 7905, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="63408440-db16-4908-bc7f-a463b3c1d09c"
new_train_df['recording_second'] = xyz1
# + id="JvDp6CmhK7mR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} executionInfo={"status": "ok", "timestamp": 1598663764141, "user_tz": 300, "elapsed": 7418, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="c952eb41-0d0f-4708-c930-0f7bcf3597db"
new_train_df.head()
# + id="X9hEGX1eK7mg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 515} executionInfo={"status": "ok", "timestamp": 1598663764642, "user_tz": 300, "elapsed": 7395, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="1074855a-0711-4621-c1c6-39ddcf3119c4"
df3 = new_train_df.groupby('profile_id').agg('max')
plt.figure(figsize = (10,8))
sns.barplot(x = df3.index, y= 'recording_second', data = df3)
# + [markdown] id="ER-TcwxVIBnX" colab_type="text"
# ##### The longest recording for a single profile_id is 5.8 hours (profile_id 20).
#
# ##### The shortest recording is 3.1 minutes (profile_id 36).
# + id="BWTqepYkK7mk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 515} executionInfo={"status": "ok", "timestamp": 1598663767466, "user_tz": 300, "elapsed": 8925, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="c8a72200-1f98-4bc0-9ea4-9e56a0e657a5"
plt.figure(figsize = (10,8))
sns.scatterplot(x = 'motor_speed', y = 'torque', data = new_train_df)
# + [markdown] id="PzOasMKPH3r8" colab_type="text"
# ##### Torque is highest at low motor speeds (up to a normalized motor speed of about 0.3), beyond which power output stays roughly constant. This is why an electric car's pickup is much quicker than a gas car's, even though torque falls off as motor speed climbs into higher ranges.
# + id="2ZhCr9bKK7mp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1598663769252, "user_tz": 300, "elapsed": 9393, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="0b8e363e-db02-4fbc-b5d7-e0c6b2161403"
new_train_df.hist(figsize = (20,20))
plt.show()
# + [markdown] id="nFDA7dA8Hzgp" colab_type="text"
# ##### Apart from coolant temperature, i_d, motor_speed, u_q, and recording_second, the columns are approximately normally distributed
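A quick numeric cross-check of the histogram reading is the sample skewness, which is near 0 for roughly normal columns. A sketch on synthetic data (the column names here are illustrative, not from the dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'roughly_normal': rng.normal(size=2000),    # skew should be near 0
    'right_skewed': rng.exponential(size=2000), # skew should be well above 1
})
print(df.skew().round(2))
```

Running `new_train_df.skew()` the same way gives a one-line summary to set against the histogram grid above.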
# + [markdown] id="9ygpIZtIMe8G" colab_type="text"
# ### Model Building
# + [markdown] id="G5jAZJs2Mjb6" colab_type="text"
# #### Predicting 'pm'
# + id="VvHhEI33K7mw" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598663769254, "user_tz": 300, "elapsed": 1463, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
X = new_train_df.drop(columns = ['u_d','stator_yoke','stator_tooth','pm','stator_winding','profile_id','recording_second'])
y = new_train_df['pm']
test_df1 = test_df.drop(columns= ['u_d'])
# + id="LL929A5BK7mz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} executionInfo={"status": "ok", "timestamp": 1598663769660, "user_tz": 300, "elapsed": 1418, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="0f09226c-5081-4f0e-be0d-a5c6e8688de9"
new_train_df.corr()['pm']
# + [markdown] id="CHovVl5nODiX" colab_type="text"
# #### Linear Regression
# + id="znMiSaHgK7m1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598663769780, "user_tz": 300, "elapsed": 455, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="2fc5d7aa-b03e-4171-a91d-ab054e140f64"
model1 = LinearRegression().fit(X, y)
pred_pm = model1.predict(test_df1)
val = mean_squared_error(test_solutions['pm'],pred_pm)
val1 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
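Taking the square root of `mean_squared_error` is the RMSE used throughout this notebook; it can be checked by hand on a tiny example (a sketch; depending on your scikit-learn version a direct RMSE helper may also be available):

```python
import math
from sklearn.metrics import mean_squared_error

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
# squared errors: 0.25, 0.25, 0.0, 1.0 -> MSE = 0.375
rmse = math.sqrt(mean_squared_error(y_true, y_pred))
print(round(rmse, 4))  # 0.6124
```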
# + [markdown] id="snT6F9SKOKyh" colab_type="text"
# #### K-NN Regressor
# + id="zi0DCjO9K7m7" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598663771579, "user_tz": 300, "elapsed": 323, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
transformed_scaler_testset = scaler.transform(test_df1)
# + id="R9dU-4ewK7m_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} executionInfo={"status": "ok", "timestamp": 1598663958304, "user_tz": 300, "elapsed": 186066, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="f8e6e5b5-9e95-4e6d-fceb-114713c9c3da"
error_rate = []
for i in range(1,10):
knn = KNeighborsRegressor(n_neighbors=i)
score = cross_val_score(knn,X,y, cv = 10)
error_rate.append(1-score.mean())
plt.plot(range(1,10), error_rate, color = 'red', linestyle = 'dashed', marker = 'o', markerfacecolor = 'blue', markersize = 10)
plt.title('K value vs Error rate')
plt.xlabel('K value')
plt.ylabel('Error rate')
# + id="4Ng9C2zSK7nA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598664080943, "user_tz": 300, "elapsed": 2428, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="fdb91355-3c9e-44ee-fb0a-9bb28207416e"
knn = KNeighborsRegressor(n_neighbors=2)
knn.fit(X,y)
pred_pm = knn.predict(transformed_scaler_testset)
val = mean_squared_error(test_solutions['pm'],pred_pm)
val2 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + id="YHYMUzibK7nC" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598664086969, "user_tz": 300, "elapsed": 373, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
X = new_train_df.drop(columns = ['u_d','stator_yoke','stator_tooth','stator_winding','profile_id','recording_second','pm'])
# + [markdown] id="Cfhntyv_OOM-" colab_type="text"
# #### Random Forest Regressor
# + id="_5cN3nCVK7nE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598664290304, "user_tz": 300, "elapsed": 201820, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="f2c21bc6-ad86-4e0f-b542-77c0a3c28d42"
model3 = RandomForestRegressor(n_estimators=100,min_samples_leaf=2,min_samples_split=3, max_features=0.5 ,n_jobs=-1)
model3.fit(X,y)
pred_pm = model3.predict(test_df1)
val = mean_squared_error(test_solutions['pm'],pred_pm)
val3 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="_WoUCX3dOQxr" colab_type="text"
# #### Decision Tree Regressor
# + id="aKKIN2g2K7nG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598665583382, "user_tz": 300, "elapsed": 10935, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="2242e01a-5a00-4282-e1fb-38edee1d94e0"
model4 = DecisionTreeRegressor()
model4.fit(X,y)
pred_pm = model4.predict(test_df1)
val = mean_squared_error(test_solutions['pm'],pred_pm)
val4 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="BqyF3xUEOUCt" colab_type="text"
# #### Gradient Boosting Regressor
# + id="tv2zkpDWK7nI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598665772237, "user_tz": 300, "elapsed": 197882, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="283057aa-0052-4950-d7bf-d7a18f797f19"
model5 = GradientBoostingRegressor()
model5.fit(X,y)
pred_pm = model5.predict(test_df1)
val = mean_squared_error(test_solutions['pm'],pred_pm)
val5 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="kgqDUxowOdw3" colab_type="text"
# #### XG Boosting Regressor
# + id="iuWUqoWTK7nK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} executionInfo={"status": "ok", "timestamp": 1598665805525, "user_tz": 300, "elapsed": 230100, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="8ce2e10e-a0a1-44b4-ec1c-532a4e92a761"
model6 = xgboost.XGBRegressor()
model6.fit(X,y)
pred_pm = model6.predict(test_df1)
val = mean_squared_error(test_solutions['pm'],pred_pm)
val6 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="Fdsihl7XOg3l" colab_type="text"
# #### Model Evaluation
# + id="BsD0_o1tvYsh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} executionInfo={"status": "ok", "timestamp": 1598665805529, "user_tz": 300, "elapsed": 227505, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="12e2b241-930b-45c9-89a5-b3f156ee2173"
rmse_pm = {'ML Algorithm' : ['Linear Regressor','K-NN Regressor', 'Random Forest Regressor', 'Decision Tree Regressor', 'Gradient Boosting Regressor','XG Boosting Regressor']
,'RMSE' : [val1,val2,val3,val4,val5,val6] }
pd.DataFrame(rmse_pm)
# + [markdown] id="O7w4uHdtNwR1" colab_type="text"
# ##### Out of the above, XG Boosting Regressor has the lowest RMSE. So, we choose it for 'pm_predicted'
# + id="YhtALOLpOysj" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598667721050, "user_tz": 300, "elapsed": 373, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
pm_predicted = pred_pm
RMSE_pm = val6
# + [markdown] id="5XBrEPVMdaY4" colab_type="text"
# ##### RMSE value is 0.879
# + [markdown] id="WUsubKIfTwGR" colab_type="text"
# #### Predicting 'stator_yoke'
# + id="_67hmdb0K7nS" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598667748810, "user_tz": 300, "elapsed": 437, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
X = new_train_df.drop(columns = ['u_d','pm','coolant','stator_yoke','stator_tooth','stator_winding','profile_id','recording_second'])
y = new_train_df['stator_yoke']
test_df = test_df.drop(columns = ['coolant','u_d'])
# + id="np7vDe-bK7nU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} executionInfo={"status": "ok", "timestamp": 1598667749499, "user_tz": 300, "elapsed": 704, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="8a72a170-e2b4-41b3-920a-f068b3a65508"
new_train_df.corr()['stator_yoke']
# + [markdown] id="sFYXCH7mU_sS" colab_type="text"
# #### Linear Regression
# + id="aw0sJpfwK7nW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598667750848, "user_tz": 300, "elapsed": 385, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="4f4ab9eb-2062-4122-ad5a-15342662c9aa"
model1 = LinearRegression().fit(X, y)
pred_stator_yoke = model1.predict(test_df)
val = mean_squared_error(test_solutions['stator_yoke'],pred_stator_yoke)
val1 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="_rvS6t54VJyw" colab_type="text"
# #### K-NN Regression
# + id="XUywgryZK7nY" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598667752425, "user_tz": 300, "elapsed": 502, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
transformed_scaler_testset = scaler.transform(test_df)
# + id="0pnib_PgwBw3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} executionInfo={"status": "ok", "timestamp": 1598668070295, "user_tz": 300, "elapsed": 314861, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="f7e50843-5096-4fea-9280-b03d37debfd4"
error_rate = []
for i in range(1,10):
knn = KNeighborsRegressor(n_neighbors=i)
score = cross_val_score(knn,X,y, cv = 10)
error_rate.append(1-score.mean())
plt.plot(range(1,10), error_rate, color = 'red', linestyle = 'dashed', marker = 'o', markerfacecolor = 'blue', markersize = 10)
plt.title('K value vs Error rate')
plt.xlabel('K value')
plt.ylabel('Error rate')
# + id="jyRr04xFK7na" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598668085158, "user_tz": 300, "elapsed": 2922, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="99d26b6c-fb9a-4530-ea27-abc1353bf8ff"
knn = KNeighborsRegressor(n_neighbors=3)
knn.fit(X,y)
pred_stator_yoke = knn.predict(transformed_scaler_testset)
val = mean_squared_error(test_solutions['stator_yoke'],pred_stator_yoke)
val2 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + id="lyADYIGzK7nc" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598668127560, "user_tz": 300, "elapsed": 435, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
X = new_train_df.drop(columns = ['coolant','u_d','stator_yoke','stator_tooth','stator_winding','profile_id','pm','recording_second'])
# + [markdown] id="qfV0T2AjVOP9" colab_type="text"
# #### Random Forest Regression
# + id="R8e5NHOvK7nf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598668769422, "user_tz": 300, "elapsed": 632712, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="b349e7a2-2d12-4754-c443-59b011c0f873"
model3 = RandomForestRegressor()
model3.fit(X,y)
pred_stator_yoke = model3.predict(test_df)
val = mean_squared_error(test_solutions['stator_yoke'],pred_stator_yoke)
val3 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="nimMWmmpVS8c" colab_type="text"
# #### Decision Tree Regression
# + id="JxeUAggaK7nh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598668834046, "user_tz": 300, "elapsed": 10858, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="b6fed738-58e4-4dfc-f7e8-a3bef9e608ee"
model4 = DecisionTreeRegressor()
model4.fit(X,y)
pred_stator_yoke = model4.predict(test_df)
val = mean_squared_error(test_solutions['stator_yoke'],pred_stator_yoke)
val4 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="nbAGyswjVWVq" colab_type="text"
# #### Gradient Boosting Regression
# + id="QJ6hyzW9K7ni" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598669141496, "user_tz": 300, "elapsed": 167388, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="f8fdab97-fc4c-46dc-e67d-2dbf604b3324"
model5 = GradientBoostingRegressor()
model5.fit(X,y)
pred_stator_yoke = model5.predict(test_df)
val = mean_squared_error(test_solutions['stator_yoke'],pred_stator_yoke)
val5 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="ZdfmDJ24VaYk" colab_type="text"
# #### XG Boosting Regression
# + id="0g8cv8tlK7nk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} executionInfo={"status": "ok", "timestamp": 1598669171280, "user_tz": 300, "elapsed": 195400, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="a0abbf56-21bf-4e47-c186-1ebdf3706848"
model6 = xgboost.XGBRegressor()
model6.fit(X,y)
pred_stator_yoke = model6.predict(test_df)
val = mean_squared_error(test_solutions['stator_yoke'],pred_stator_yoke)
val6 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="NhUCzx8mWlm0" colab_type="text"
# #### Model Evaluation
# + id="sRcve2VywdMw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} executionInfo={"status": "ok", "timestamp": 1598669397415, "user_tz": 300, "elapsed": 358, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="0b142cb0-978c-4779-dced-304a979224d1"
rmse_stator_yoke = {'ML Algorithm' : ['Linear Regressor','K-NN Regressor', 'Random Forest Regressor', 'Decision Tree Regressor', 'Gradient Boosting Regressor','XG Boosting Regressor']
,'RMSE' : [val1,val2,val3,val4,val5,val6] }
pd.DataFrame(rmse_stator_yoke)
# + [markdown] id="acRddNgIT3xv" colab_type="text"
# Out of all the ML Algorithms, Linear Regressor has the lowest RMSE 0.66, so we choose it for 'stator_yoke_predicted'
# + id="6snyMVSRUE4-" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598669547520, "user_tz": 300, "elapsed": 503, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
# pred_stator_yoke was overwritten by later models, so re-predict with the
# Linear Regression fit (model1), which had the lowest RMSE (val1)
stator_yoke_predicted = model1.predict(test_df)
RMSE_stator_yoke = val1
# + [markdown] id="3FHdkIbKd3MC" colab_type="text"
# ##### RMSE value is 0.66
# + [markdown] id="vyWExBlTTgz_" colab_type="text"
# #### Predicting 'stator_tooth'
# + id="KSdYdupOK7nm" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598669557563, "user_tz": 300, "elapsed": 457, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
X = new_train_df.drop(columns = ['u_d','pm','stator_yoke','stator_tooth','stator_winding','profile_id','recording_second'])
y = new_train_df['stator_tooth']
# + id="9a360WpP6-zr" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598669674927, "user_tz": 300, "elapsed": 429, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
test_df3 = test_df.drop(columns= ['u_d'], errors='ignore')  # 'u_d' may already have been dropped above
# + [markdown] id="s37dyjWAVfGr" colab_type="text"
# #### Linear Regression
# + id="SxYxbwwxK7np" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598669682422, "user_tz": 300, "elapsed": 446, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="8b213930-8d8f-44ba-e18d-b644433f6d2a"
model1 = LinearRegression().fit(X, y)
pred_stator_tooth = model1.predict(test_df3)
val = mean_squared_error(test_solutions['stator_tooth'],pred_stator_tooth)
val1 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="gyKnfxcXVhpC" colab_type="text"
# #### K-NN Regression
# + id="D6Jgjr3GK7nr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} executionInfo={"status": "ok", "timestamp": 1598669874558, "user_tz": 300, "elapsed": 188806, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="75944d19-8b20-491f-ed3a-1105a2b8325d"
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
transformed_scaler_testset = scaler.transform(test_df3)
error_rate = []
for i in range(1,10):
knn = KNeighborsRegressor(n_neighbors=i)
score = cross_val_score(knn,X,y, cv = 10)
error_rate.append(1-score.mean())
plt.plot(range(1,10), error_rate, color = 'red', linestyle = 'dashed', marker = 'o', markerfacecolor = 'blue', markersize = 10)
plt.title('K value vs Error rate')
plt.xlabel('K value')
plt.ylabel('Error rate')
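Note that `cross_val_score` with no `scoring` argument uses a regressor's default score, R², so the "error rate" plotted above is 1 − mean R², not an RMSE. Since the models are ultimately compared on RMSE, k could instead be tuned on that same metric. A self-contained sketch on synthetic data (names such as `X_demo` and `rmse_per_k` are illustrative, not from this notebook):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Synthetic regression data just for demonstration.
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 3))
y_demo = X_demo @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Cross-validate each k directly on RMSE (scores are negated by convention).
rmse_per_k = {}
for k in range(1, 6):
    scores = cross_val_score(KNeighborsRegressor(n_neighbors=k), X_demo, y_demo,
                             cv=5, scoring='neg_root_mean_squared_error')
    rmse_per_k[k] = -scores.mean()
best_k = min(rmse_per_k, key=rmse_per_k.get)
```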
# + id="XhJpTTz0DtTT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598669906728, "user_tz": 300, "elapsed": 2517, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="c0b8532f-a1ed-4460-a368-2ae46bdac522"
knn = KNeighborsRegressor(n_neighbors=2)
knn.fit(X,y)
pred_stator_tooth = knn.predict(transformed_scaler_testset)
val = mean_squared_error(test_solutions['stator_tooth'],pred_stator_tooth)
val2 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + id="taiGJoz3K7ns" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598669907149, "user_tz": 300, "elapsed": 492, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
X = new_train_df.drop(columns = ['u_d','stator_yoke','stator_tooth','stator_winding','profile_id','pm','recording_second'])
test_df3 = test_df.drop(columns = ['u_d'])
# + [markdown] id="Cu_I6ft1Vkr6" colab_type="text"
# #### Random Forest Regression
# + id="EscduoqgK7nv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598670537470, "user_tz": 300, "elapsed": 628949, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="2af1cefe-44c0-4e1e-a0dc-34ab77c8fc39"
model3 = RandomForestRegressor()
model3.fit(X,y)
pred_stator_tooth = model3.predict(test_df3)
val = mean_squared_error(test_solutions['stator_tooth'],pred_stator_tooth)
val3 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="nEbZMhx-VnQE" colab_type="text"
# #### Decision Tree Regression
# + id="uZIIudT8K7nx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598671608317, "user_tz": 300, "elapsed": 10636, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="4925a303-d98b-4d98-b616-c7e3620fc45e"
model4 = DecisionTreeRegressor()
model4.fit(X,y)
pred_stator_tooth = model4.predict(test_df3)
val = mean_squared_error(test_solutions['stator_tooth'],pred_stator_tooth)
val4 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="ESyKwBJVVrXM" colab_type="text"
# #### Gradient Boosting Regression
# + id="hId0Ha4cK7nz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598671848803, "user_tz": 300, "elapsed": 189761, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="d2605686-e45a-4923-8033-ad6f3455fdaf"
model5 = GradientBoostingRegressor()
model5.fit(X,y)
pred_stator_tooth = model5.predict(test_df3)
val = mean_squared_error(test_solutions['stator_tooth'],pred_stator_tooth)
val5 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="2rONlAciVu5-" colab_type="text"
# #### XG Boosting Regression
# + id="3BHHB6eDK7n1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} executionInfo={"status": "ok", "timestamp": 1598671882076, "user_tz": 300, "elapsed": 221423, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="7c605b3d-db2b-4227-d7cd-088d15cec5db"
model6 = xgboost.XGBRegressor()
model6.fit(X,y)
pred_stator_tooth = model6.predict(test_df3)
val = mean_squared_error(test_solutions['stator_tooth'],pred_stator_tooth)
val6 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + id="XgJMkOY42Qr4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} executionInfo={"status": "ok", "timestamp": 1598671888116, "user_tz": 300, "elapsed": 477, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="12f4b11d-df81-444a-d4c4-2ce1f6eec3dd"
rmse_stator_tooth = {'ML Algorithm': ['Linear', 'KNN Regressor', 'Random Forest Regressor', 'Decision Tree Regressor', 'Gradient Boosting Regressor', 'XG Boosting Regressor'],
                     'RMSE': [val1, val2, val3, val4, val5, val6]}
pd.DataFrame(rmse_stator_tooth)
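Storing the winning score by hand-typed variable name (`val5`, `val6`, ...) is easy to get wrong. A minimal sketch of picking the winner programmatically from a name-to-RMSE mapping; the numbers below are placeholders, not the notebook's actual values:

```python
# Placeholder scores: in the notebook these would be val1..val6.
rmse_by_model = {
    'Linear': 0.71, 'KNN Regressor': 0.65, 'Random Forest Regressor': 0.58,
    'Decision Tree Regressor': 0.62, 'Gradient Boosting Regressor': 0.56,
    'XG Boosting Regressor': 0.54,
}
# min over the dict keys, ranked by their RMSE value
best_model = min(rmse_by_model, key=rmse_by_model.get)
best_rmse = rmse_by_model[best_model]
```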
# + [markdown] id="MeAoJaIZPicd" colab_type="text"
# ##### Out of the ML algorithms, XG Boosting Regressor has the lowest RMSE, so we select it as 'stator_tooth_predicted'
# + id="-G4lI2a5PybU" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598671961171, "user_tz": 300, "elapsed": 1713, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
RMSE_stator_tooth = val6
stator_tooth_predicted = pred_stator_tooth
# + [markdown] id="mkqnFZOed9lx" colab_type="text"
# ##### RMSE value is 0.54
# + [markdown] id="tvAICPB9TSih" colab_type="text"
# #### Predicting 'stator_winding'
# + id="qDu6AsFNK7n3" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598671967810, "user_tz": 300, "elapsed": 607, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
y = new_train_df['stator_winding']
X = new_train_df.drop(columns = ['u_d','stator_yoke','stator_tooth','stator_winding','profile_id','pm','recording_second'])
# + id="jX8X7jSs6FPx" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598671970402, "user_tz": 300, "elapsed": 479, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
test_df4 = test_df.drop(columns = ['u_d'])
# + [markdown] id="N-CWxe42Vye1" colab_type="text"
# #### Linear Regression
# + id="d0DN3SOxK7n4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598671980503, "user_tz": 300, "elapsed": 738, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="361d88aa-1b06-4803-bd17-4286e4b6bd98"
model1 = LinearRegression().fit(X, y)
pred_stator_winding = model1.predict(test_df4)
val = mean_squared_error(test_solutions['stator_winding'],pred_stator_winding)
val1 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="jMMqfRjdV1BW" colab_type="text"
# #### K-NN Regression
# + id="I57f9BrjK7n6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} executionInfo={"status": "ok", "timestamp": 1598672165984, "user_tz": 300, "elapsed": 184370, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="e2dbb8b8-f4a7-484b-865e-e666585912a5"
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
transformed_scaler_testset = scaler.transform(test_df4)
error_rate = []
for i in range(1,10):
knn = KNeighborsRegressor(n_neighbors=i)
score = cross_val_score(knn,X,y, cv = 10)
error_rate.append(1-score.mean())
plt.plot(range(1,10), error_rate, color = 'red', linestyle = 'dashed', marker = 'o', markerfacecolor = 'blue', markersize = 10)
plt.title('K value vs Error rate')
plt.xlabel('K value')
plt.ylabel('Error rate')
# + id="2K5LUCQcD4Y-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598672208580, "user_tz": 300, "elapsed": 2733, "user": {"displayName": "<NAME>", "photoUrl": "https://<KEY>", "userId": "03946972882492424510"}} outputId="d9706400-939b-4217-ab10-91a11a47f545"
knn = KNeighborsRegressor(n_neighbors=2)
knn.fit(X,y)
pred_stator_winding = knn.predict(transformed_scaler_testset)
val = mean_squared_error(test_solutions['stator_winding'],pred_stator_winding)
val2 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + id="hN4Pl9T6K7n8" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598672208583, "user_tz": 300, "elapsed": 2380, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
X = new_train_df.drop(columns = ['u_d','stator_yoke','stator_tooth','stator_winding','profile_id','pm','recording_second'])
test_df4 = test_df.drop(columns= ['u_d'])
# + [markdown] id="bkAH9ka0V4fV" colab_type="text"
# #### Random Forest Regression
# + id="F8Vq5w4ZK7n-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598672837993, "user_tz": 300, "elapsed": 629542, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="20ad1d81-931c-4277-de00-85844c6b1d39"
model3 = RandomForestRegressor()
model3.fit(X,y)
pred_stator_winding = model3.predict(test_df4)
val = mean_squared_error(test_solutions['stator_winding'],pred_stator_winding)
val3 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="Q7Zvwx2DV7Z1" colab_type="text"
# #### Decision Tree Regression
# + id="fPqAPH9CK7n_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598672848568, "user_tz": 300, "elapsed": 638378, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="88227d20-ee7f-4999-b599-12c7fb299234"
model4 = DecisionTreeRegressor()
model4.fit(X,y)
pred_stator_winding = model4.predict(test_df4)
val = mean_squared_error(test_solutions['stator_winding'],pred_stator_winding)
val4 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="Sq2jvJsmV-Kk" colab_type="text"
# #### Gradient Boosting Regression
# + id="4H4Z1VI4K7oB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598673046448, "user_tz": 300, "elapsed": 834526, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="89ef73b8-73cd-4197-c7b2-74804d770e89"
model5 = GradientBoostingRegressor()
model5.fit(X,y)
pred_stator_winding = model5.predict(test_df4)
val = mean_squared_error(test_solutions['stator_winding'],pred_stator_winding)
val5 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="X6UdGR5UWBRU" colab_type="text"
# #### XG Boosting Regression
# + id="ifzbLWkYK7oC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} executionInfo={"status": "ok", "timestamp": 1598673079404, "user_tz": 300, "elapsed": 866105, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="b59b3071-f875-4da2-888a-cf8df6f4ab53"
model6 = xgboost.XGBRegressor()
model6.fit(X,y)
pred_stator_winding = model6.predict(test_df4)
val = mean_squared_error(test_solutions['stator_winding'],pred_stator_winding)
val6 = math.sqrt(val)
print('RMSE : {}'.format(math.sqrt(val)))
# + [markdown] id="t-PX_RDgW_aj" colab_type="text"
# #### Model Evaluation
# + id="4FjMMj_ZK7oD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} executionInfo={"status": "ok", "timestamp": 1598673079406, "user_tz": 300, "elapsed": 861156, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="17e17367-a683-4fe9-9a04-9afd0ab155be"
rmse_stator_winding = {'ML Algorithm': ['Linear', 'KNN Regressor', 'Random Forest Regressor', 'Decision Tree Regressor', 'Gradient Boosting Regressor', 'XG Boosting Regressor'],
                       'RMSE': [val1, val2, val3, val4, val5, val6]}
pd.DataFrame(rmse_stator_winding)
# + [markdown] id="jRjTc5P2QH3v" colab_type="text"
# ##### Out of all the ML algorithms, Gradient Boosting Regressor has the lowest RMSE of 0.594
# + id="j0Koj76lNWzw" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598673079408, "user_tz": 300, "elapsed": 859771, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
stator_winding_predicted = model5.predict(test_df4)  # Gradient Boosting predictions; pred_stator_winding still holds the XGBoost ones
RMSE_stator_winding = val5
# + [markdown] id="0OuSNkdXeGwU" colab_type="text"
# ##### RMSE value is 0.59
# + id="z1jb0E_yQqoW" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598673095093, "user_tz": 300, "elapsed": 655, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}}
test_df['pm_predicted'] = pm_predicted
test_df['stator_yoke_predicted'] = stator_yoke_predicted
test_df['stator_tooth_predicted'] = stator_tooth_predicted
test_df['stator_winding_predicted'] = stator_winding_predicted
# + id="xBeKEG80V4Xc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} executionInfo={"status": "ok", "timestamp": 1598673101089, "user_tz": 300, "elapsed": 512, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="b6ebdfac-3f69-4e07-a741-84d152849ac4"
test_df.head()
# + id="rkqGRgJJXZF0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598675456808, "user_tz": 300, "elapsed": 380, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="a1952c6c-1743-4c05-f04d-9dbe71c469ab"
Overall_RMSE = 0.879 + 0.666 + 0.54 + 0.59
print('Overall_RMSE : {}'.format(Overall_RMSE))
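Note that `Overall_RMSE` above is the sum of the four per-target RMSEs, not itself an RMSE. If a single pooled RMSE were wanted instead (an assumption, not necessarily the hackathon's metric), the squared errors would be averaged before taking one square root. A sketch, assuming equal sample counts per target:

```python
import math

per_target_rmse = [0.879, 0.666, 0.54, 0.59]
sum_of_rmse = sum(per_target_rmse)  # matches the notebook's Overall_RMSE
# Pooled RMSE: average the per-target MSEs, then take a single square root.
pooled_rmse = math.sqrt(sum(r ** 2 for r in per_target_rmse) / len(per_target_rmse))
```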
# + id="37q89geCV55-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 17} executionInfo={"status": "ok", "timestamp": 1598675490075, "user_tz": 300, "elapsed": 1563, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi2teXJkzxiwTrmzsN2J2ni_MKrPzdBMn1385_PQEs=s64", "userId": "03946972882492424510"}} outputId="b1069b78-64cb-4826-bd66-8fde827214ef"
from google.colab import files
test_df.to_csv('test_df.csv')
files.download('test_df.csv')
| ODSC_Hackathon.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # High order Prince methods & Riccati equation
#
# [Prince](http://www.peteprince.co.uk/parallel.pdf) has developed some interesting high order methods. These are demonstrated on problem A2 from the [DETEST](http://perso.ensta-paristech.fr/~chapoutot/integration/docs/p1-enright.pdf) set: a special case of the Riccati equation.
#
# ## Problem definition
#
# The initial value problem is:
problem = {'fun' : lambda x, y: -y**3/2,
'y0' : [1.],
't_span' : [0., 20.]}
# ## Reference solution
#
# This problem has an analytic solution that will be used as reference:
reference = lambda x: (x+1)**-0.5
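A quick sanity check that this really solves the IVP: for y = (x+1)^(-1/2), the derivative is y' = -(1/2)(x+1)^(-3/2) = -y³/2, and y(0) = 1 matches the initial condition.

```python
import numpy as np

# Verify the analytic solution against the ODE right-hand side on a grid.
x = np.linspace(0.0, 20.0, 201)
y = (x + 1.0) ** -0.5
dy = -0.5 * (x + 1.0) ** -1.5   # exact derivative of (x+1)^(-1/2)
rhs = -y ** 3 / 2               # the problem's fun(x, y)
assert np.allclose(dy, rhs)
assert abs(y[0] - 1.0) < 1e-12
```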
# ## Solution plot
#
# The plot below shows the solution. It's a simple, smooth curve.
# +
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
t = np.linspace(*problem['t_span'])
plt.figure()
plt.plot(t, reference(t))
plt.title('special case of the Riccati equation')
plt.show()
# -
# ## Efficiency plot
#
# The efficiency of the methods can be assessed by making a plot of the error versus the number of derivative function evaluations. The error is calculated by the RMS norm:
def rms_err_norm(solution, reference):
error = solution.y - reference(solution.t)
err_norm = (error**2).mean()**0.5
return err_norm
# Let's solve this problem with `Pri6`, `Pri7` and `Pri8` at several absolute tolerance values and make a plot to show the efficiency of these methods. The scipy methods `RK45` and `DOP853` (with coefficients by Dormand and *Prince*) are included for comparison. The Riccati equation is solved efficiently by the new methods of Prince.
# +
from scipy.integrate import solve_ivp
from extensisq import Pri6, Pri7, Pri8
methods = ['RK45', 'DOP853', Pri6, Pri7, Pri8]
tolerances = np.logspace(-3, -13, 11)
plt.figure()
for method in methods:
name = method if isinstance(method, str) else method.__name__
e = []
n = []
for tol in tolerances:
sol = solve_ivp(**problem, rtol=1e-13, atol=tol, method=method,
dense_output=True) # this triggers extra evaluations in DOP853
err = rms_err_norm(sol, reference)
e.append(err)
n.append(sol.nfev)
if name == 'RK45':
style = '--k.'
elif name == 'DOP853':
style = '-k.'
else:
style = '.:'
plt.loglog(e, n, style, label=name)
plt.legend()
plt.xlabel(r'||error||$_{RMS}$')
plt.ylabel('nr of function evaluations')
plt.title('efficiency')
plt.show()
# -
# ## Discussion
#
# The relative efficiency of the methods is problem dependent. For this problem, the efficiency graph shows:
#
# * `DOP853` and `Pri7` are comparable. Both have a 7th order continuous solution (interpolant) and a discrete method of order 8. The lines of these methods run parallel in the efficiency plot.
# * Dense output was requested. This triggers extra evaluations in `DOP853`. *The methods of Prince don't require extra evaluations for dense output.* Without dense output, `DOP853` and `Pri7` have a similar efficiency for this problem.
# * `Pri8` is the most efficient method at lower tolerance values.
# * The curve of `Pri6` crosses that of `DOP853`.
# * `RK45` is relatively inefficient at these tolerance values.
# * The accuracy of each method scales differently with the value of atol.
# * The accuracy is limited to roughly 1e-16 in the extensisq implementation. This is shown as the vertical part of the efficiency curves of `Pri8` and `Pri7`. Setting the tolerance too low increases the number of function evaluations, but does not improve the solution any further.
#
# I think that these methods by Prince are a useful addition to the default scipy methods for many problems that need to be solved with high accuracy.
| docs/Prince.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Hidden Markov models for cracking codes**
#
# In this exercise you have to make a partially built HMM work and use it to solve some simple substitution ciphers. Plaintext data is provided in the 'plaintext' directory; encrypted data is in 'encrypted'. Some of the texts were originally English and some were Russian; the sequences are also of different lengths.
#
# This homework is worth **15 points** and is due by the next class (**24th Oct.**). Please submit the results of **TASK 5** (a list of files and the names of the author/work) to Anytask in the following format: 'filename author', where 'filename' is a file from "encrypted/\*_encrypted.txt" and 'author' is the file from "plaintext/\*.txt" (not including 'english.txt', 'russian.txt' or 'all.txt') that best matches the decrypted text.
#
#
#
# +
# Utilities for loading data from file and converting characters to integers and back.
import numpy as np
def get_char_to_int_mapping(path):
# Load data from path and get mapping from characters to integers and back.
characters = set()
for line in open(path):
characters.update(set([c for c in line.strip()]))
char_to_int_mapping = dict([(char, i) for i, char in enumerate(sorted(list(characters)))])
int_to_char_mapping = [char for char, i in char_to_int_mapping.items()]
return char_to_int_mapping, int_to_char_mapping
def load_sequences(path, char_to_int_mapping):
# Load data from path and map to integers using mapping.
return [[char_to_int_mapping[c] for c in line.strip()] for line in open(path)]
def estimate_markov_model_from_sequences(sequences, num_states):
# Estimate a Markov model based on the sequences (integers) provided.
# pi[i] = Pr(s_0 = i)
pi_counts = np.zeros(num_states)
# A[i, j] = Pr(s_t = j | s_{t-1} = i)
A_counts = np.zeros((num_states, num_states))
for n, sequence in enumerate(sequences):
assert False, "Collect counts for pi and A and return parameter estimates."
# return pi, A
# -
# **TASK 1**: Make the following block run by completing the method 'estimate_markov_model_from_sequences' above.
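As a hedged illustration of the kind of counting involved (a standalone sketch on a toy 2-symbol alphabet, not a drop-in answer to the task), here is a maximum-likelihood Markov estimator; the `np.maximum(..., 1)` guard simply avoids dividing by zero for states that never occur:

```python
import numpy as np

def estimate_markov_model(sequences, num_states):
    # Count initial states and transitions, then normalize to probabilities.
    pi_counts = np.zeros(num_states)
    A_counts = np.zeros((num_states, num_states))
    for seq in sequences:
        pi_counts[seq[0]] += 1
        for prev, cur in zip(seq[:-1], seq[1:]):
            A_counts[prev, cur] += 1
    pi = pi_counts / pi_counts.sum()
    A = A_counts / np.maximum(A_counts.sum(axis=1, keepdims=True), 1)
    return pi, A

pi_demo, A_demo = estimate_markov_model([[0, 1, 1, 0], [1, 0, 0, 1]], 2)
```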
# +
# Some data to use.
plaintext = 'plaintext/english.txt'
# plaintext = 'plaintext/shakespeare.txt'
# plaintext = 'plaintext/russian.txt'
ciphertext = 'encrypted/1_encrypted.txt' # short sequences in english
# ciphertext = 'encrypted/99_encrypted.txt' # longer sequences in russian
# load a character to integer mapping and reverse
char_to_int_mapping, int_to_char_mapping = get_char_to_int_mapping(plaintext)
# load sequences as ints
plaintext_sequences = load_sequences(plaintext, char_to_int_mapping)
encrypted_sequences = load_sequences(ciphertext, char_to_int_mapping)
# estimate a markov model over characters
pi, A = estimate_markov_model_from_sequences(plaintext_sequences, len(char_to_int_mapping))
# -
# Below is a mostly implemented HMM.
class HMM():
def __init__(self, observations_to_char_mapping={}, states_to_char_mapping={}):
# Determine number of states and observation space.
self.num_states = len(states_to_char_mapping)
self.num_outputs = len(observations_to_char_mapping)
self.states_to_char_mapping = states_to_char_mapping
self.observations_to_char_mapping = observations_to_char_mapping
# Random initialization
self.pi = np.random.rand(self.num_states)
self.pi /= np.sum(self.pi)
self.A = np.random.rand(self.num_states, self.num_states)
self.A /= np.sum(self.A, 1, keepdims=True)
self.B = np.random.rand(self.num_states, self.num_outputs)
self.B /= np.sum(self.B, 1, keepdims=True)
def estimate_with_em(self, sequences, parameters={}, epsilon=0.001, max_iters=100):
# Estimates all parameters not provided in 'parameters' based on 'sequences'.
self.fixed_pi = 'pi' in parameters
if self.fixed_pi:
self.pi = parameters['pi']
self.fixed_A = 'A' in parameters
if self.fixed_A:
self.A = parameters['A']
self.fixed_B = 'B' in parameters
if self.fixed_B:
self.B = parameters['B']
previous_llh = None
iter = 0
        while iter < max_iters:
# Infer expected counts.
pi_counts, A_counts, B_counts, log_likelihood = self.e_step(sequences)
# Update parameters based on counts.
self.m_step(pi_counts, A_counts, B_counts)
# Output some sequences for debugging.
self.output(sequences[:10])
# Log likelihood should be increasing
print('iteration %d; log likelihood %.4f' % (iter, log_likelihood))
if previous_llh:
assert log_likelihood >= previous_llh
if log_likelihood - previous_llh < epsilon:
break
previous_llh = log_likelihood
iter += 1
def e_step(self, sequences):
# Reset counters of statistics
pi_counts = np.zeros_like(self.pi)
A_counts = np.zeros_like(self.A)
B_counts = np.zeros_like(self.B)
total_log_likelihood = 0.0
for sequence in sequences:
# Run Forward-Backward dynamic program
alpha, beta, gamma, xi, log_likelihood = self.forward_backward(sequence)
# Accumulate statistics.
pi_counts += gamma[:, 0]
A_counts += xi
for t, x in enumerate(sequence):
B_counts[:, x] += gamma[:, t]
total_log_likelihood += log_likelihood
return pi_counts, A_counts, B_counts, total_log_likelihood
def m_step(self, pi_counts, A_counts, B_counts):
if not self.fixed_pi:
self.pi = pi_counts / np.sum(pi_counts)
if not self.fixed_A:
self.A = A_counts / np.sum(A_counts, 1, keepdims=True)
if not self.fixed_B:
self.B = B_counts / np.sum(B_counts, 1, keepdims=True)
def max_posterior_decode(self, sequence):
_, _, gamma, _, log_likelihood = self.forward_backward(sequence)
return np.argmax(gamma, 0)
def forward_backward(self, sequence):
# alpha[i][t] = p(x_1, ..., x_t, z_t = i)
alpha = self.forward(sequence)
# beta[i][t] = p(x_t+1, ..., x_T|z_t = i)
beta = self.backward(sequence)
# gamma[i][t] = p(z_t = i|x_1, ..., x_T)
gamma = (alpha * beta) / np.sum(alpha * beta, 0)
# xi[i][j] = p(z_t = i, z_{t+1} = j|x_1, ..., x_T)
xi = np.zeros_like(self.A)
        for t in range(len(sequence) - 1):
this_xi = np.zeros_like(self.A)
for i in range(self.num_states):
for j in range(self.num_states):
this_xi[i, j] += alpha[i, t] * self.A[i, j] * beta[j, t+1] * self.B[j, sequence[t+1]]
xi += this_xi / np.sum(this_xi)
return alpha, beta, gamma, xi, np.log(np.sum(alpha[:, len(sequence)-1]))
def forward(self, sequence):
# alpha[i][t] = p(x_1, ..., x_t, z_t = i)
alpha = np.zeros((len(self.pi), len(sequence)))
assert False, "Implement forward recursion"
return alpha
def backward(self, sequence):
# beta[i][t] = p(x_t+1, ..., x_T|z_t = i)
beta = np.zeros((len(self.pi), len(sequence)))
assert False, "Implement backwards recursion to compute betas."
return beta
def output(self, sequences):
# Output some decoded states.
for i, sequence in enumerate(sequences):
observations = [self.observations_to_char_mapping[x] for x in sequence]
map_states = [self.states_to_char_mapping[x] for x in self.max_posterior_decode(sequence)]
print('(states): %s\n(observations): %s' % (''.join(map_states), ''.join(observations)))
# **TASK 2**: Implement the assertions in 'forward' and 'backward' methods on the HMM class so that the following block passes.
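Independent of the class above, the forward recursion itself fits in a few lines. This toy 2-state example (made-up parameters, named `*_toy` so they don't clobber the notebook's `pi` and `A`) can be checked against brute-force enumeration of all state paths:

```python
import numpy as np

# alpha[i, t] = p(x_1..x_t, z_t = i) for a toy 2-state, 2-symbol HMM.
pi_toy = np.array([0.6, 0.4])
A_toy = np.array([[0.7, 0.3], [0.4, 0.6]])
B_toy = np.array([[0.9, 0.1], [0.2, 0.8]])
obs_toy = [0, 1, 0]

alpha = np.zeros((2, len(obs_toy)))
alpha[:, 0] = pi_toy * B_toy[:, obs_toy[0]]
for t in range(1, len(obs_toy)):
    # Sum over previous states, then weight by the emission probability.
    alpha[:, t] = (alpha[:, t - 1] @ A_toy) * B_toy[:, obs_toy[t]]
likelihood = alpha[:, -1].sum()
```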
# +
# Since it's a substitution cipher we assume hidden states and observations have same alphabet.
state_to_char_mapping = int_to_char_mapping
observation_to_char_mapping = int_to_char_mapping
# Initialize a HMM with the correct state/output spaces.
hmm = HMM(observation_to_char_mapping, state_to_char_mapping)
# Estimate the parameters and decode the encrypted sequences.
hmm.estimate_with_em(encrypted_sequences[:100], parameters={})
# -
# **TASK 3**: Some of the encrypted sequences are quite long. Try decoding some from 'encrypted/99_encrypted.txt' (note these are in Russian).
# **TASK 4**: Make your implementation of forward and backward more efficient by removing all but the outermost for-loop.
# **TASK 5**: Try to classify the author of each text.
| week05_em/hmm-seminar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/danangcrysnanto/bovine-graphs-mapping/blob/master/part4_variantgenotyping/analysis/part4_variantgenotyping_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="h-cLg5ObngK6" colab_type="text"
# ### Part4 Variant genotyping from whole genome graphs
#
# In this part, we constructed a whole-genome graph for the Brown Swiss population by augmenting the bovine UCD1.2 Hereford reference with ~14.1 M autosomal variants identified in 82 Brown Swiss animals.
#
# We then mapped 10 samples (not used for simulation) to this whole-genome graph
# and compared the results with mapping to the linear genome using bwa or vg (an empty graph: only the backbone, without variation).
# + id="eZLABr1SngK9" colab_type="code" colab={}
library(tidyverse)
library(magrittr)
# + id="CW2NnUGjoKdV" colab_type="code" colab={}
## We download data from github with this base url
basepath <- "https://raw.githubusercontent.com/danangcrysnanto/bovine-graphs-mapping/master/part4_variantgenotyping/result/"
# + [markdown] id="o3PQv4TUngLA" colab_type="text"
# ### Comparison between unique and perfect mapping
#
# Since the reads were not simulated, we could not assess mapping correctness directly. Instead, we followed the approach of Novak et al. (2017) and Pritt et al. (2018): we calculated the fraction of reads that map perfectly (edit distance 0, without clipping) and the fraction that map uniquely, i.e. with a single mapping location or, in the case of multi-mapping, a high mapping quality (MQ = 60).
# + id="nS924l-ungLA" colab_type="code" outputId="ec62f88c-bd94-4810-a664-46f24c598410" colab={"base_uri": "https://localhost:8080/", "height": 204}
datunper <- read.table(url(file.path(basepath,"datuniqperf.tsv")),header=TRUE)
head(datunper)
# + id="VG_oEM5fngLE" colab_type="code" colab={}
## The data are per chromosome, so we aggregate across chromosomes
datunper_sum <- datunper %>% group_by(anims,mapper) %>% summarise(perfect=sum(perfect)*100/sum(mapped),
uniq=sum(uniq)*100/sum(mapped))
# + id="Yl-fAHisngLG" colab_type="code" outputId="35fa97ef-972d-4af9-c357-3c12859d6346" colab={"base_uri": "https://localhost:8080/", "height": 497}
options(repr.plot.width=8, repr.plot.height=8)
datunper_sum %<>% mutate(Mapping=case_when(mapper=="bwa"~"Linear (BWA)",
mapper=="vg_linear"~"Linear (VG)",
mapper=="vg_graph"~ "Graph (VG)"))
ggplot(datunper_sum,aes(x=uniq,y=perfect,col=Mapping,shape=Mapping)) +
geom_point(size=5,stroke=1)+
scale_color_manual(values=c("#E69F00", "#56B4E9", "#009E73"))+
scale_shape_manual(values=c(1,2,3))+
theme_bw()+
labs(x="Unique alignment (%)",y="Perfect alignment (%)",fill="Alignment")+
coord_cartesian(xlim = c(80,85))+
theme(text=element_text(size=18),
axis.title = element_text(face="bold"),
legend.position = "bottom")
# + [markdown] id="E3RDZjkPngLI" colab_type="text"
# ### Quantify the difference across mapping scenarios
# + id="_4uvI7NingLJ" colab_type="code" outputId="8e0271de-3ba8-4ad9-f867-cf4d31eb5825" colab={"base_uri": "https://localhost:8080/", "height": 119}
## The largest improvement is in perfect mapping to the paths in the graph,
## which we quantify here
datperf <- datunper_sum %>% select(anims,perfect,mapper) %>% pivot_wider(names_from = mapper,values_from = perfect) %>%
mutate(dif=vg_graph-bwa)
cat("Maximum improvement in perfect mapping for graph alignment over linear BWA")
max(datperf$dif)
cat("Minimum improvement in perfect mapping for graph alignment over linear BWA")
min(datperf$dif)
cat("Mean improvement in perfect mapping for graph alignment over linear BWA")
mean(datperf$dif)
# + id="BocfbNVSngLL" colab_type="code" outputId="f90a3a6a-eb2a-4e96-bf62-e4db21471d85" colab={"base_uri": "https://localhost:8080/", "height": 119}
## However, we noticed that unique mapping decreases slightly in graph alignments
datuniq <- datunper_sum %>% select(anims,uniq,mapper) %>% pivot_wider(names_from = mapper,values_from = uniq) %>%
           mutate(dif=vg_graph-bwa)
cat("Smallest decrease in unique mapping for graph alignment relative to linear BWA")
max(datuniq$dif)
cat("Largest decrease in unique mapping for graph alignment relative to linear BWA")
min(datuniq$dif)
cat("Mean decrease in unique mapping for graph alignment relative to linear BWA")
mean(datuniq$dif)
# + [markdown] id="ONZ5GT-JngLN" colab_type="text"
# ### Comparison of the genotypes discovered from linear vs graph alignments
# + [markdown] id="AEvkj_SQngLO" colab_type="text"
# We then surjected the graph alignment to the corresponding linear coordinates.
# We then used the samtools multi-sample calling to call variants.
# Finally, we compared with the matched SNP array to calculate concordance statistics as below.
#
# 
# + id="agn2Khw0ngLO" colab_type="code" outputId="7d846413-1b32-4441-bd73-147e3cd5a02c" colab={"base_uri": "https://localhost:8080/", "height": 204}
## Statistics of concordance for samtools
## Mode indicates the mapping mode: bwa, graph, or vg (linear)
## Fil indicates whether genotypes are filtered or raw
datsam <- read.table(url(file.path(basepath,"samtools_concordance_all.tsv")),header=TRUE) %>% select(-prog)
head(datsam)
# + id="u7K6fh2ungLQ" colab_type="code" outputId="4fbaff5b-a6e2-4aab-a4b2-9de8ba1551d4" colab={"base_uri": "https://localhost:8080/", "height": 204}
## Since the statistics are calculated per animal,
## we take the mean and SD to report the performance of each caller
datsam %>% group_by(mode) %>% summarise(m_concor=mean(concor),
sd_concor=sd(concor),
m_recall=mean(recal),
sd_recall=sd(recal),
m_discre=mean(discre),
sd_discre=sd(discre),
m_precision=mean(precision),
sd_precision=sd(precision)) %>% as.data.frame()
# + [markdown] id="zKBfoZYOngLS" colab_type="text"
# There is almost no difference among the tools; plotting makes the pattern clearer.
# + [markdown] id="ezWYOrbangLT" colab_type="text"
# ### Plot of the genotype concordance across sequencing depth
#
# We test whether there is any difference across sequencing coverage between graphs and linear alignment.
# + id="wge7tiuangLU" colab_type="code" outputId="53800a4e-8154-4c61-9b0e-f99544feccf9" colab={"base_uri": "https://localhost:8080/", "height": 204}
options(warn=-1)
datcov <- read.table(url(file.path(basepath,"anims_coverage.tsv")),header=FALSE)
colnames(datcov) <- c("anims","coverage")
datsamall <- datsam %>% left_join(datcov,by=c("anims"))
head(datsamall)
# + id="ARfdw6zbngLW" colab_type="code" outputId="daa5789d-ee24-42dc-e57d-080cefe65973" colab={"base_uri": "https://localhost:8080/", "height": 497}
datfil <- datsamall %>% filter(! str_detect(mode,"_fil"))
datfil %<>% mutate(Mapping=case_when(mode=="bwa"~"Linear(BWA)",
mode=="graph"~"Graph(VG)",
mode=="linear"~"Linear(VG)"))
ggplot(datfil,aes(x=as.double(as.character(coverage)),y=concor,col=Mapping,shape=Mapping))+
geom_point(size=5,stroke=1)+
scale_y_continuous(breaks=seq(90,100,1),limits = c(96,100))+
scale_colour_manual(values=c("#E69F00", "#56B4E9", "#009E73","red"))+
scale_shape_manual(values=c(1,2,3))+
theme_bw()+
theme(text = element_text(size=18),
axis.title=element_text(face="bold"),
legend.position = "bottom")+
labs(x="Sequencing coverage",y="Genotype concordance")
# + [markdown] id="Wn_jRD9hngLY" colab_type="text"
# ### Plot relation between precision and recall of the array genotypes
# + [markdown] id="606t4k73ngLY" colab_type="text"
# We see no noticeable difference across sequencing coverage. We can also look at the relation between precision and recall in the different samples.
# + id="G2ayyX_CngLZ" colab_type="code" outputId="4ecee395-c147-4262-ff1e-ddfd9e3f3b73" colab={"base_uri": "https://localhost:8080/", "height": 497}
ggplot(datfil,aes(x=precision,y=recal,shape=Mapping,col=Mapping))+
geom_point(size=5,stroke=1)+
theme_bw()+
theme(legend.position = "bottom",
text = element_text(size=18),
axis.title=element_text(face="bold"))+
scale_colour_manual(values=c("#E69F00", "#56B4E9", "#009E73"))+
scale_shape_manual(values=c(1,2,3))+
labs(x="Precision(%)",y="Recall(%)")
# + [markdown] id="dbkecdiCngLc" colab_type="text"
# ### Genotyping concordance for variants discovered from GATK and Graphtyper
#
#
# We additionally discovered and genotyped variants using GATK and Graphtyper with the pipeline established in our previous paper, to see whether the choice of variant caller makes any difference.
# + id="-R-Wwya9ngLc" colab_type="code" outputId="d05b59ef-8a3b-492a-b290-e3d4be27e963" colab={"base_uri": "https://localhost:8080/", "height": 204}
datgatk <- read.table(url(file.path(basepath,"gatk4_concordance_all.tsv")),header=TRUE) %>% select(-prog)
head(datgatk)
# + id="1ASxcWltngLe" colab_type="code" outputId="26c2d34e-b945-44a8-9198-40852a9aaeb9" colab={"base_uri": "https://localhost:8080/", "height": 204}
datgatk %>% group_by(mode) %>% summarise(m_concor=mean(concor),
sd_concor=sd(concor),
m_recall=mean(recal),
sd_recall=sd(recal),
m_discre=mean(discre),
sd_discre=sd(discre),
m_precision=mean(precision),
sd_precision=sd(precision)) %>% as.data.frame()
# + [markdown] id="jR233svFngLg" colab_type="text"
# Again we see only a small difference; the concordance for graph alignments is even slightly lower when variants are called with GATK.
#
# What about the genotypes from Graphtyper?
# + id="sQtMbqMkngLh" colab_type="code" outputId="b477cc7b-e797-41d1-aa2f-b1921a645b9f" colab={"base_uri": "https://localhost:8080/", "height": 204}
datgraph <- read.table(url(file.path(basepath,"graphtyper_concordance_all.tsv")),header=TRUE)
head(datgraph)
# + id="Qniy4rJungLj" colab_type="code" outputId="d9d278c6-0871-48b5-9819-5dbb87ae3010" colab={"base_uri": "https://localhost:8080/", "height": 204}
datgraph %>% group_by(mode,prog) %>% summarise(m_concor=mean(concor),
sd_concor=sd(concor),
m_recall=mean(recal),
sd_recall=sd(recal),
m_discre=mean(discre),
sd_discre=sd(discre),
m_precision=mean(precision),
sd_precision=sd(precision)) %>% as.data.frame()
# + [markdown] id="u4_4Z8nmngLl" colab_type="text"
# We see the same pattern again; interestingly, concordance from *Graphtyper* is higher than from *Samtools* or *GATK*.
# + id="4rhk7DuungLm" colab_type="code" outputId="516f3b20-e7bb-45cf-e415-2bbc33db74f3" colab={"base_uri": "https://localhost:8080/", "height": 680}
sessionInfo()
| part4_variantgenotyping/analysis/part4_variantgenotyping_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# # Index files
# Currently, there are some problems with compiling triads. Will have to fix.
#
# NOTE: If a task is stopped early and restarted, resulting in multiple text files, the files should be named according to this structure:
# 1. **[task]-[subj]-[timepoint].txt** OR **[task]-[subj]-[timepoint]\_1.txt**
# 2. **[task]-[subj]-[timepoint]\_2.txt**
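# A small sketch of parsing this convention with a regular expression (the pattern and field names are illustrative, not part of convert_eprime's API):

```python
import re

FNAME_PATTERN = re.compile(
    r'^(?P<task>[^-]+)-(?P<subj>[^-]+)-(?P<timepoint>[^_.]+)'
    r'(?:_(?P<part>\d+))?\.txt$'
)

def parse_raw_filename(fname):
    """Split a raw-file name into task, subj, timepoint, and part number."""
    match = FNAME_PATTERN.match(fname)
    if match is None:
        raise ValueError('Unrecognized filename: {0}'.format(fname))
    info = match.groupdict()
    # files without an _N suffix are the first (and possibly only) part
    info['part'] = int(info['part']) if info['part'] else 1
    return info
```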
# +
from glob import glob
from shutil import rmtree
import os
from os import remove, mkdir, utime
from os.path import join, isdir, basename
import pandas as pd
from convert_eprime import index_eprime_files
from convert_eprime.tests.utils import get_test_data_path, get_config_path
raw_data_dir = join(get_test_data_path(), 'raw_files')
param_file = join(get_config_path(), 'testing_task.json')
orged_dir = join(get_test_data_path(), 'organized_files')
csv_file = join(orged_dir, 'logger.csv')
if not isdir(orged_dir):
mkdir(orged_dir)
# +
def touch(fname):
with open(fname, 'a'):
utime(fname, None)
def list_files(startpath):
for root, dirs, files in os.walk(startpath):
level = root.replace(startpath, '').count(os.sep)
indent = ' ' * 4 * (level)
print('{}{}/'.format(indent, os.path.basename(root)))
subindent = ' ' * 4 * (level + 1)
for f in files:
if not f.startswith('.'):
print('{}{}'.format(subindent, f))
# -
raw_files = sorted(glob(join(raw_data_dir, '*.*')))
list_files(raw_data_dir)
# Organize files. Conversion is not currently implemented.
index_eprime_files.main(raw_data_dir, csv_file, param_file)
# Show organized files
list_files(orged_dir)
df = pd.read_csv(csv_file)
df.sort_values(by=['Subject', 'Timepoint'],
ascending=[True, False])
# +
# Post-example cleanup
# Create the old raw files
for f in raw_files:
touch(f)
# Remove the "done" subfolder
rmtree(join(raw_data_dir, 'done'))
# Remove the organized files
rmtree(orged_dir)
mkdir(orged_dir)
| examples/index_files.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Load libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from copulas.multivariate import GaussianMultivariate
from matplotlib import pyplot as plt
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from gaussian_multivariate import DataPreProcessor
HOME_PATH = '' #home path of the project
TRAIN_FILE = 'REAL DATASETS/TRAIN DATASETS/D_ContraceptiveMethod_Real_Train.csv'
SYNTHETIC_FILE = 'SYNTHETIC DATASETS/GM/D_ContraceptiveMethod_Synthetic_GM.csv'
# ## 1. Read data
real_data = pd.read_csv(HOME_PATH + TRAIN_FILE)
categorical_features = ['wife_education','husband_education','wife_religion','wife_working','husband_occupation',
'standard_of_living_index','media_exposure','contraceptive_method_used']
for c in categorical_features:
    real_data[c] = real_data[c].astype('category')
data_train = real_data
data_train
data_train.dtypes
# data configuration
preprocessor = DataPreProcessor(data_train)
data_train = preprocessor.preprocess_train_data()
data_train
# ## 2. Train the model and generate data
gm = GaussianMultivariate()
gm.fit(data_train)
generated_samples = gm.sample(len(data_train))
generated_samples
# ## 3. Transform Generated Data
synthetic_data = preprocessor.transform_data(generated_samples)
synthetic_data
real_data.describe()
synthetic_data.describe()
len(synthetic_data.columns)
columns = real_data.columns
fig, axs = plt.subplots(nrows=4, ncols=3, figsize=(20,15))
for i in range(len(columns)):
    row, col = divmod(i, 3)
    data = np.column_stack((real_data[columns[i]], synthetic_data[columns[i]]))
    axs[row, col].hist(data, density=False, histtype='bar', label=['Real', 'Synthetic (GM)'])
    axs[row, col].set_title(columns[i])
    axs[row, col].legend()
fig.delaxes(axs[3,1])
fig.delaxes(axs[3,2])
fig.tight_layout(pad=1.1)
synthetic_data.to_csv(HOME_PATH + SYNTHETIC_FILE, index = False)
| notebooks/Dataset D - Contraceptive Method Choice/Synthetic data generation/GM Dataset D - Contraceptive Method Choice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_pytorch_p36
# language: python
# name: conda_pytorch_p36
# ---
# # Import Libraries
# !pip install -r ../../requirements.txt
import sys
sys.path.insert(0,'../..')
import matplotlib.pyplot as plt
import matplotlib.image as pltimg
import util.util as util
import os, time
from train import train
from test import test
# # Train Model
# ## Baseline
train(['--dataroot=data','--name=baseline','--checkpoints_dir=model-checkpoints','--update_html_freq=1000','--print_freq=1000','--display_freq=1000','--display_id=-1','--save_latest_freq=1000','--num_threads=4','--batch_size=4', '--preprocess=crop', '--crop_size=256', '--save_epoch_freq=10000'])
| translations/van-gogh-landscape/train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.table import QTable
# %matplotlib inline
# -
plt.rcParams['figure.figsize'] = (9, 6)
# +
pyirf_file = '../build/pyirf.fits.gz'
sensitivity = QTable.read(pyirf_file, hdu='SENSITIVITY')[1:-1]
# make it print nice
sensitivity['reco_energy_low'].info.format = '.3g'
sensitivity['reco_energy_high'].info.format = '.3g'
sensitivity['reco_energy_center'].info.format = '.3g'
sensitivity['relative_sensitivity'].info.format = '.2g'
sensitivity['flux_sensitivity'].info.format = '.3g'
for k in filter(lambda k: k.startswith('n_'), sensitivity.colnames):
sensitivity[k].info.format = '.1f'
sensitivity
# -
sensitivity_unop = QTable.read(pyirf_file, hdu='SENSITIVITY_UNOP')[1:-1]
# +
magic = QTable.read('magic_sensitivity_2014.ecsv')
for k in filter(lambda k: k.startswith('sensitivity_') or k.startswith('e_'), magic.colnames):
magic[k].info.format = '.3g'
magic
# +
unit = u.Unit('erg cm-2 s-1')
for s, label in zip(
[sensitivity, sensitivity_unop],
['pyirf optimised cuts', r'$\theta^2 < 0.03$ and gh_score$> 0.85$']
):
e = s['reco_energy_center']
w = (s['reco_energy_high'] - s['reco_energy_low'])
s = (e**2 * s['flux_sensitivity'])
plt.errorbar(
e.to_value(u.TeV),
s.to_value(unit),
xerr=w.to_value(u.TeV) / 2,
ls='',
label=label
)
e_magic = .5 * (magic['e_max'].to(u.TeV) + magic['e_min'].to(u.TeV))
w_magic = (magic['e_max'].to(u.TeV) - magic['e_min'].to(u.TeV))
s_magic = (e_magic**2 * magic['sensitivity_lima_5off'])
plt.errorbar(
e_magic.to_value(u.TeV),
s_magic.to_value(unit),
xerr=w_magic.to_value(u.TeV) / 2,
ls='',
label='MAGIC 2014'
)
plt.title('Minimal Flux Satisfying Requirements for 50 hours')
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Reconstructed energy / TeV")
plt.ylabel(rf"$(E^2 \cdot \mathrm{{Flux Sensitivity}}) /$ ({unit.to_string('latex')})")
plt.grid(which='both')
plt.legend()
plt.tight_layout()
None
# -
# ## Plot crab observation sensitivity
# +
crab_sensitivity_file = '../build/sensitivity_crab.fits.gz'
sensitivity_crab = QTable.read(crab_sensitivity_file, hdu='SENSITIVITY')[1:-1]
sensitivity_crab_unop = QTable.read(crab_sensitivity_file, hdu='SENSITIVITY_UNOP')[1:-1]
sensitivity_crab
# +
unit = u.Unit('erg cm-2 s-1')
for s, label in zip(
[sensitivity_crab, sensitivity_crab_unop],
['pyirf optimised cuts', r'$\theta^2 < 0.03$ and gh_score$> 0.85$']
):
e = s['reco_energy_center']
w = (s['reco_energy_high'] - s['reco_energy_low'])
s = (e**2 * s['flux_sensitivity'])
plt.errorbar(
e.to_value(u.TeV),
s.to_value(unit),
xerr=w.to_value(u.TeV) / 2,
ls='',
label=label
)
e_magic = .5 * (magic['e_max'].to(u.TeV) + magic['e_min'].to(u.TeV))
w_magic = (magic['e_max'].to(u.TeV) - magic['e_min'].to(u.TeV))
s_magic = (e_magic**2 * magic['sensitivity_lima_5off'])
plt.errorbar(
e_magic.to_value(u.TeV),
s_magic.to_value(unit),
xerr=w_magic.to_value(u.TeV) / 2,
ls='',
label='MAGIC 2014'
)
plt.title('Minimal Flux Satisfying Requirements for 50 hours (based on 12.17h of Crab observations)')
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Reconstructed energy / TeV")
plt.ylabel(rf"$(E^2 \cdot \mathrm{{Flux Sensitivity}}) /$ ({unit.to_string('latex')})")
plt.grid(which='both')
plt.legend()
plt.tight_layout()
None
| notebooks/plot_sensitivity.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.3.10-pre
# language: julia
# name: julia-0.3
# ---
# # Example: The Newton method for finding roots of functions
# The Newton method is an iterative method to solve equations of the form $f(x)=0$, i.e. to find *roots* or *zeros* $x^\ast$ such that $f(x^\ast) = 0$. Given an initial guess $x_0$, we repeat the iteration
#
# $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
# ## Variables
# Let's implement the Newton algorithm in Julia. We start from an initial condition $x_0$:
x_0 = 3
# Julia always returns a value from any expression:
x_0
# We can use LaTeX notation and tab completion for Unicode, e.g. `x\_0<TAB>`:
x₀ = 3
x₀
# Values in Julia have associated **types**. We can find the type of a variable using the appropriately-named `typeof` function:
typeof(x₀)
# We can guess that this means an integer with 64 bits. [This result will be `Int32` if you have a 32-bit machine.]
# ## Simple functions
# We need to define a function whose roots we wish to find. Let's find square roots of two, for example. Julia provides a concise mathematical syntax for defining simple functions:
f(x) = x^2 - 2
f(x_0)
# We also need the derivative function, $f'$. For the moment, let's just define it by hand. (Later, we will see a neat way to avoid this.) We might like to write `f'`, using the apostrophe `'`, but the apostrophe turns out to be a special character in Julia, so we get an error if we try to define a variable or function named `f'`:
f'(x) = 2x
# However, Unicode comes to our rescue: `f\prime<TAB>`:
f′(x) = 2x
# Now we can do one step of our algorithm; mathematical operations work like we expect:
x_1 = x_0 - f(x_0) / f'(x_0)
# Note that division of integers using `/` gives a floating-point result:
typeof(x_1)
# ## Iteration
# We now need to repeat such steps several times. Julia has `for` loops and `while` loops. As usual, we tend to use `for` loops when we know how many iterations we want, and `while` when we iterate until a certain condition is attained.
# ### Ranges and arrays
# Let's start with a simple `for`. Blocks of code in Julia *always* end with `end`:
for i in 1:5
println(i) # print the value of i followed by a new line
end
# Here, a variable `i` is introduced that is *local* to the loop, i.e. it exists only inside the loop:
i
# `i` takes each value in the *iterable collection* `1:5`. Let's ask Julia what this object `1:5` is:
1:5
# As usual, Julia returns a value, but in this case it is (at first glance) apparently not very helpful. What type is this object?
typeof(1:5)
# We see that Julia has a special type (actually, several different types) to represent **ranges**, in which the elements are calculated each time a new element is required, rather than stored. We can see all the elements that will be produced using the `collect` function:
v = collect(1:5)
# The result is an object of a new type, an `Array`, in this case one whose elements are integers and that is of *dimension* 1. Note that `1` is *not* the number of elements in the array, which is called `length`:
length(v)
# `Array`s are also iterable, so we can iterate over an `Array` using a `for` loop. 1-dimensional arrays, also called `Vector`s, are constructed using square brackets:
w = [3, 4, 7]
for i in w
println(2*i)
end
# ## Implementing the Newton method
# We are now ready to implement the Newton method:
# +
x_0 = 3
x = x_0
for i in 1:10
x_new = x - f(x) / f′(x)
println(i, "\t", x_new)
x = x_new
end
# -
# In this case, we see that the method rapidly converges to one of the square roots of two. Which root it converges to depends on the initial condition:
# +
x_0 = -3
x = x_0
for i in 1:10
x_new = x - f(x) / f′(x)
println(i, "\t", x_new)
x = x_new
end
# -
# The Newton method is, in fact, not guaranteed to converge to a root (although it always does so if started "sufficiently close" to a root, at a rate that is known). Furthermore, *which* root it converges to can depend sensitively on the initial condition. Let's calculate this for several initial conditions.
#
# First we create a set of initial conditions on the real line, say between -5 and 5. We now include a step size in the range:
initial_conditions = -5:0.1:5
collect(initial_conditions) # use tab completion for long variable names!
# This range type is different:
typeof(-5:0.1:5)
# The collected array is again a new type: it is now an array of 64-bit floating-point numbers. We can also see that the `{...}` part of the type name gives the **parameters** of the `Array` type.
# For each of these initial conditions, we will run the Newton algorithm for a certain number of steps and store the resulting value. We thus need a new array in which to store the results. One way of creating an array is using the `similar` function, which, by default, creates an array of the same type and same size, but with (currently) uninitialized values:
roots = similar(initial_conditions)
# Now we do the work:
for (j, x_0) in enumerate(initial_conditions)
x = x_0
for i in 1:100
x = x - f(x) / f′(x)
end
roots[j] = x
end
# Here, `enumerate` iterates over `initial_conditions` but returns at each step not only the value, but also a counter. `(j, x_0)` is called a **tuple** (an ordered pair):
t = (3, 4)
typeof(t)
# NB: In Julia v0.4, tuples have been completely reworked, and the resulting type is now
# `Tuple{Int64,Int64}`.
roots
# Julia does not show all of the contents of an array by default. We can see everything using `showall`:
showall(roots)
# We see that, apart from the `NaN` value, the results are not very exciting. Let's work harder with more initial conditions. We can find out how long the calculation takes using `@time` by wrapping the code in a `begin...end` block:
#
@time begin
initial_conditions = -100:0.01:100
roots = similar(initial_conditions)
for (j, x_0) in enumerate(initial_conditions)
x = x_0
for i in 1:1000
x = x - f(x) / f′(x)
end
roots[j] = x
end
end
# ## Packages and visualisation
# There are now many values stored in the array, so it is hopeless to examine them:
length(roots)
# Instead, we turn to **visualisation**. There are several plotting **packages** in Julia: `Gadfly` is a native Julia library that produces beautiful plots; `PyPlot` is a Julian interface to the well-known `matplotlib` Python library.
# Let's start with `PyPlot`. First we need to download the package. Julia provides a built-in package manager, called `Pkg`, that gracefully handles dependencies, etc. To tell Julia that we require the package, we do
Pkg.add("PyPlot")
# This step is necessary only once. In each session where we need to use `PyPlot` we do
using PyPlot
# Note that this process of loading a package currently can take a considerable time. Work is in progress to reduce this loading time.
figure(figsize=(6,4))
plot(roots);
# ## Performance 1
# If we are used to the performance of C or Fortran, we might start to be unhappy with Julia's speed in this rather simple calculation. A close inspection of the output of the `@time` operation, however, gives us a very important clue: Julia apparently allocated over a gigabyte of memory to do a simple loop with some floating-point numbers!
#
# This is almost *always* a very strong signal that there is something very wrong in your Julia code! In our case, it is not at all clear what that could be. It turns out to be something very fundamental in Julia:
#
# [almost] **NEVER WORK WITH GLOBAL OBJECTS!**
#
# Due to technical details about the way that Julia works, it turns out that **GLOBALS ARE BAD**. What is the solution? **PUT EVERYTHING INTO A FUNCTION!** Let's try following this advice. We take *exactly* the same code and just plop it into a new function. For longer functions, Julia has an alternative syntax:
function do_roots()
initial_conditions = -100:0.01:100
roots = similar(initial_conditions)
for (j, x_0) in enumerate(initial_conditions)
x = x_0
for i in 1:1000
x = x - f(x) / f′(x)
end
roots[j] = x
end
roots
end
# Note the last line of the function. This will automatically *return* the value of the `roots` object as the output of the function. So we can call it like this:
roots = do_roots()
# Now how long did it take?
# a semi-colon suppresses output
@time roots = do_roots();
@time roots = do_roots();
# It allocates a million times less memory, and is 50 times faster! This is the first lesson about performance in Julia: *always* put everything in a function.
#
# Note that the first time we ran the function, it took longer. This is due to the fact that the first time a function is run with arguments of given types, the function is *compiled*. Subsequent runs with the same types of arguments reuse the previously-compiled code.
# **Exercise**: Use a `while` loop with a suitable condition to improve the code for the Newton method.
# ## Generic functions and methods
# Our code currently is not very flexible. To make it more flexible, we would like to pass in arguments to the `do_roots` function. We can make a version which takes as arguments the functions `f` and `f'`, for example. Functions are "first-class objects" in Julia, so they can just be passed around by name.
#
# Let's redefine our function `do_roots` to accept these arguments:
function do_roots(f, f′)
initial_conditions = -100:0.01:100
roots = similar(initial_conditions)
for (j, x_0) in enumerate(initial_conditions)
x = x_0
for i in 1:1000
x = x - f(x) / f′(x)
end
roots[j] = x
end
roots
end
# Note the output that Julia returns: "generic function with 2 methods". This is a sign that something interesting is happening. In fact, we have not "redefined" the function `do_roots`; rather, we have defined a *new version* of `do_roots`, which accepts a *different set of arguments*. (The collection *and types* of the arguments that a function accepts are called its **type signature**.)
#
# Indeed, the function `do_roots` now has *two different methods* or versions:
methods(do_roots)
# If we call `do_roots` with no arguments, the first version will be used; calling it with two arguments will call the second version. The process of choosing which "version" of a function to call is called *dispatch*. The fundamental fact in Julia is that (almost) all functions are such "generic functions" with multiple versions, i.e. Julia is one of very few languages that use **multiple dispatch**. This turns out to be very natural for many applications in scientific computing.
# The arguments `f` and `f'` in the second method of `do_roots` are names that are local to the function. We have functions of the same name defined globally, so we can pass those in:
@time do_roots(f, f′);
# This is faster than the first version of `do_roots`, but much slower than the good version. It turns out that Julia currently *cannot optimize* (inline) functions passed in this way. This is something to bear in mind -- there is (currently) a trade-off between user convenience and speed.
# Julia also has *anonymous* functions, which allow us to pass in a function that we define "in the moment", without giving it a name. For example, let's do the exercise with a more interesting function:
@time roots = do_roots(x->(x-1)*(x-2)*(x-3), x->3x^2-12x+11);
# We see that anonymous functions are currently very slow. However, there are workarounds, e.g. the `FastAnonymous` package.
# Let's visualize the results for this function:
figure(figsize=(6,4))
plot(-100:0.01:100, roots)
xlim()
figure(figsize=(6,4))
plot(-100:0.01:100, roots)
xlim(1, 3)
# ## Complexifying Newton
# The previous result is still pretty boring. It turns out that the Newton method gets interesting if we look for roots of functions of *complex* numbers. [If you are not familiar with complex numbers, you can think of them as pairs of real numbers that have certain mathematical operations defined.]
# Let's try to use the Newton method starting from initial conditions distributed in the complex plane, i.e. pairs $a + bi$, where $i = \sqrt{-1}$. First of all let's see how Julia handles complex numbers:
sqrt(-1)
# Oh dear, that didn't work very well. It turns out that Julia is carefully designed to respect, when possible, the type of the input argument. Indeed, let's ask Julia what it thinks `sqrt` means:
sqrt
# We see that `sqrt` is a generic function, with the following methods:
methods(sqrt)
# Julia gives us a list of the available methods, together with links direct to the source code on GitHub (in IJulia) or locally (in Juno).
#
# `sqrt()` acting on a `Float64` returns a `Float64` when it can, or throws a `DomainError` when its argument is negative. To get square roots in the complex plane, we must *start* with a complex number.
# The names of types in Julia start with capital letters, so let's try `Complex`:
Complex
# As we will see later, types have functions with the same name that act as **constructors** to make objects of the type. Let's see the available functions with the name `Complex`. Note that output has changed rather a lot between Julia v0.3 and Julia v0.4:
methods(Complex)
# Now let's try playing with `Complex`:
a = Complex(3)
typeof(a)
b = Complex(3, 4.5)
typeof(b)
# We see that `Complex` is also parametrised by the type of its real and imaginary parts.
#
# We can also make complex numbers directly using `im`:
3.0 + 4.0im
# (Here, 4.0im is multiplication of 4.0 by `im`, which represents $i$, the imaginary unit.)
# We can do complex arithmetic:
a * b
# What is happening here? Julia knows how to do `*` for complex numbers. Let's ask Julia what `*` is:
*
# So, mathematical operators *are generic functions too*! We can list all the ways to do `*`:
methods(*)
# All of these are defined in Julia itself. (Although the definitions for basic types like `Int` are only shallow wrappers around underlying C code.) We see that generic functions can be a complicated "patchwork" made of different methods for different types.
# We can find the exact method used for a given operation using `@which`:
@which a * b
# ## Initial conditions: matrices
# We are now ready to think about how to generate a grid of initial conditions of the form $a+bi$ in the complex plane, $\mathbb{C}$. Firstly, we could just iterate over the initial conditions in two repeated `for`s, e.g.
for i in -2:1
for j in -2:1
println("($i, $j)")
end
end
# Here we have used **string interpolation**: the *value* of the variable `i` is substituted into the string in place of the sequence `$i`. [Note that this is not recommended for performance-critical applications.]
# But we still require somewhere to store the results. It is natural to use a **matrix**. A simple way of generating a matrix is the `zeros` function:
zeros(3)
# We see that with a single element, we generate a *vector* of zeros, while
zeros(3, 3)
# gives a *matrix*, i.e. a 2-dimensional `Array`.
# Multiple dispatch allows Julia to provide convenience versions of functions like this. For example:
zeros(-3:2)
# creates a vector of the same length as the range!
# However, this does not work for two different ranges:
zeros(-3:2, -3:2)
# We can use `length` for example:
linear_initial_conditions = -5:0.1:5
roots = zeros(length(linear_initial_conditions), length(linear_initial_conditions))
# However, if we try to store a complex number in this matrix, we find a problem:
roots[1, 1] = 3+4im
# An `InexactError` is a sign that we are trying to put a value into a type that it "doesn't fit into", for example a `Float64` into an `Int64`, or, in this case, a complex number into a float. We must instead create the matrix to hold complex numbers:
linear_initial_conditions = -5:0.1:5
roots = zeros(Complex128, length(linear_initial_conditions), length(linear_initial_conditions))
# Here, `Complex128` is just an alias (an alternative name) for `Complex{Float64}`, so called because two 64-bit `Float64`s require `128` bits of storage in total.
# Now we can insert complex values into the matrix:
roots[1, 1] = 3+4im
roots
# ## Implementing Newton for complex functions
# We are now ready to make a version of Newton for complex functions. We will try to find cube roots of $1$ in the complex plane, by finding zeros of the function
f(z) = z^3 - 1
# with derivative
f′(z) = 3z^2
function do_complex_roots(range=-5:0.1:5) # default value
L = length(range)
roots = zeros(Complex128, L, L)
for (i, x) in enumerate(range)
for (j, y) in enumerate(range)
z = x + y*im
for k in 1:1000
z = z - f(z) / f′(z)
end
roots[i,j] = z
end
end
roots
end
roots = do_complex_roots(-5:0.1:5)
# Now let's use `PyPlot` to plot the result. `PyPlot` only understands floating-point matrices, so we'll take the imaginary part:
using PyPlot
imshow(imag(roots))
# Julia uses "column-major" storage, whereas Python uses "row-major", so in fact we need to flip $x$ and $y$:
function do_complex_roots(range=-5:0.1:5) # default value
L = length(range)
roots = zeros(Complex128, L, L)
for (i, x) in enumerate(range)
for (j, y) in enumerate(range)
z = y + x*im
for k in 1:1000
z = z - f(z) / f′(z)
end
roots[i,j] = z
end
end
roots
end
imshow(imag(do_complex_roots(-3:0.01:3)))
# ## Array comprehensions
# Julia has a neat syntax for constructing arrays from iterables that is very similar to mathematical notation.
# For example, the squares of the numbers from 1 to 10 is
#
# $$\{x^2: x \in \{1,\ldots,10\} \},$$
#
# i.e. "the set of $x^2$ for $x$ from $1$ to $10$". In Julia we can write
squares = [x^2 for x in 1:10]
# Let's define a Newton function by
function newton(x0, N=100)
x = x0
for i in 1:N
x = x - f(x) / f′(x)
end
x
end
methods(newton)
# Note that the effect of a default argument is simply to create an additional method.
# Then our Newton fractal can be written very concisely as
function newton_fractal(range)
[newton(b+a*im) for a in range, b in range]
end
# We can add labels using `PyPlot`'s `text` function:
?text
# +
imshow(imag(newton_fractal(-3:0.01:3)), extent=(-3, 3, -3, 3))
text(1, 0, L"1")
text(reim(exp(2π*im/3))..., L"e^{2\pi i/3}")
text(reim(exp(-2π*im/3))..., L"e^{-2\pi i/3}")
# -
# Here, we have used Julia's `reim` function:
reim(exp(2π*im/3))
# It returns a tuple. The `...`, or *splat*, operator, unpacks the tuple into two arguments.
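# For example, splatting a 2-tuple into a two-argument function (a minimal sketch using `hypot`):

```julia
coords = (3.0, 4.0)        # a tuple, like the one returned by reim
hypot(coords...)           # equivalent to hypot(3.0, 4.0), which gives 5.0
```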
# The `L"..."` notation is a special string macro from the `LaTeXStrings` package, used by `PyPlot`,
# that produces a LaTeX string.
# Exercise: Make a version that accepts functions and experiment with other complex polynomials.
# ## Introspection and iteration protocol
# How does Julia know how to iterate using `for` through a vector or range? Let's look at a Unicode string:
s = "aαbβ" # use `\alpha<TAB>`
typeof(s)
# Julia provides access to several layers between the high-level code we write and the low-level machine code finally produced by the compilation process. The first of these is a "lowered" version of the code, in which high-level syntax is transformed into lower-level Julia code:
@code_lowered iterate(10)
# We see that there are three important functions: `start`, `next` and `done`.
# For example, iterating through a Unicode `UTF8String` is complicated, since characters occupy different numbers of bytes:
s[1]
s[2]
s[3]
# Nonetheless, we can iterate through `s`:
function string_iterate(s)
for c in s
println(c)
end
end
string_iterate(s)
# For example, we can extract a list of the characters in `s` with
chars = [c for c in s]
chars[1]
typeof(chars[1])
# Note that in Julia, strings are written with `"` and characters with `'` (as in C).
# The interface that allows us to iterate over an object using `for` is provided by three functions `start`, `next` and `done` that must be defined for that type:
start(s)
next(s, 1)
next(s, 2)
done(s, 2)
@which start(s)
# For more details about introspection, check out Leah Hanson's [blog post](http://blog.leahhanson.us/julia-introspects.html).
@which next(s, 1)
| 1. Starting out.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py38_tensorflow
# language: python
# name: conda-env-py38_tensorflow-py
# ---
# ### Deep Learning Course: Multi-Layer Perceptron (MLP)
# ## Building and Training a Keras Model - Diabetes Prediction
# 1. Import pandas
import pandas as pd
import numpy as np
# +
# 2. Load the data
df = pd.read_csv('diabetes_data.csv')
df.head()
# -
# %matplotlib inline
import matplotlib.pyplot as plt
# !pip install seaborn
import seaborn as sns
plt.figure(figsize=(5,5))
sns.heatmap(data = df.corr(), annot=True,
fmt = '.2f', linewidths=.5, cmap='Blues')
# +
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = [10, 6]
# %matplotlib inline
fig, ax = plt.subplots()
ax.boxplot([df['pregnant'], df['gloucose'],df['blood pressure'],df['skin thickness'],
df['insulin'],df['BMI'],df['DPF'],df['age']],sym="b*")
plt.show()
# -
df.info()
df.describe()
glou_zero = df.loc[df.gloucose == 0]
# +
# 3. Split features (X) and target (y)
X = df.drop('result', axis=1)
y = df['result']
# +
# 4. Split into train, validation and test sets
from sklearn.model_selection import train_test_split
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, test_size=0.2, random_state=100)
X_val, X_test, y_val, y_test = train_test_split(X_holdout, y_holdout, test_size=0.5, random_state=100)
print(X_train.shape)
print(X_val.shape)
print('======'*2)
print(y_train.shape)
print(y_val.shape)
# -
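# As a quick check of the arithmetic: scikit-learn sizes the held-out partition as the ceiling of `n * test_size` (pure-Python sketch; the 768-row figure assumes the standard Pima diabetes data and is illustrative):

```python
import math

def holdout_size(n_rows, test_size):
    # scikit-learn's train_test_split reserves ceil(n * test_size) rows
    return math.ceil(n_rows * test_size)

print(holdout_size(768, 0.2))   # -> 154 held out, leaving 614 for training
```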
# 5. Import Keras
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Dense, Dropout
# +
# 6. Build the MLP model
model = keras.Sequential()
model.add(layers.Dense(input_dim =8, units=64, kernel_regularizer = keras.regularizers.L2(0.1)))
model.add(layers.Activation('relu'))
# model.add(kernel_regularizer = keras.regularizers.L2(0.1))
model.add(layers.Dense(units=128))
model.add(layers.Activation('relu'))
#model.add(layers.Dropout(0.2)) # dropout acts on the outputs of the preceding layer
model.add(layers.Dense(units=64))
model.add(layers.Activation('relu'))
model.add(layers.Dense(units=1))
model.add(layers.Activation('sigmoid'))
model.summary()
# +
# 7. Compile - set the optimizer and loss function
sgd = keras.optimizers.SGD(learning_rate=0.1)  # defined for experimentation; 'adam' is actually used below
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
# +
# 8. Train the model
from tensorflow.keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss',
                               patience=10)  # stop if val_loss fails to improve for 10 consecutive epochs
history= model.fit(X_train,y_train,
validation_data = (X_val, y_val),
batch_size=16,
epochs=100,
verbose=1,
callbacks= [early_stopping])
# verbose takes 0, 1, or 2: 0 = silent, 1 = progress bar, 2 = one line per epoch.
# -
print(X_train.shape[0]/16)
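# The division above is fractional; Keras rounds up, so the final partial batch still counts as one step. A sketch of the arithmetic (the 614-row figure is illustrative):

```python
import math

def steps_per_epoch(n_samples, batch_size):
    # one gradient step per batch, including a last partial batch
    return math.ceil(n_samples / batch_size)

print(steps_per_epoch(614, 16))   # -> 39
```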
# # 9. Evaluate the model
#
# y_pred = model.predict(X_test)
# y_pred = (y_pred > 0.5).astype(int)  # threshold the sigmoid outputs into class labels
# print(y_pred)
# print(np.array(y_test))
#
# from sklearn.metrics import accuracy_score, precision_score, recall_score
#
# acc = accuracy_score(y_test, y_pred)
# pres = precision_score(y_test, y_pred)
# recall = recall_score(y_test, y_pred)
#
# print(acc)
# print(pres)
# print(recall)
# +
train_result = model.evaluate(X_train, y_train)
test_result = model.evaluate(X_test, y_test)
print("train_result:", train_result)
print("test_result:", test_result)
# +
# 10. Visualize training
# -
# 
1 epoch 가 진행 할때마다 X_valudation 검증한다는 과정
# 
# +
즉, train이 낮아져도, 검증을 할때 안좋은 결과를 받을수도 있기때문에 위의 그림을 참고해야햔다.
# +
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(['train', 'val'], loc = 'upper left')
plt.show()
# +
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('Epoch')
plt.ylabel('loss')
plt.legend(['train', 'val'], loc = 'upper left')
plt.show()
# dropout can help prevent overfitting.
# -
| Deep_Learning/03.MLP_Diabetes_Classification.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# name: ir
# ---
# + id="Zri-DbnCrMJp"
install.packages("tseries")
install.packages("tidyverse")
install.packages("dplyr")
install.packages("readxl")
install.packages("TTR")
install.packages("forecast")
install.packages("lmtest")
install.packages("FitAR")
install.packages("randtests")
install.packages("seasonalview")
install.packages("moments")
# + colab={"base_uri": "https://localhost:8080/", "height": 447} id="KjG915B_a7tv" outputId="a52024cd-d75d-4f8e-9bbb-c99d1041fc85"
library("tseries")
library("tidyverse")
library("dplyr")
library("readxl")
library("TTR")
library("forecast")
library("lmtest")
library("FitAR")
library("randtests")
library("seasonalview")
library("moments")
# + [markdown] id="ps1TeLxJnoHR"
# ### Theme for some plots
# + id="WnNmu0P1m5dI"
tema = theme(panel.background = element_rect(fill='black'),
panel.grid =element_blank(),
plot.background = element_rect(fill='black'),
axis.text = element_text(colour='white',size=10),
#panel.grid.major.x = element_line(colour='grey60',linetype = 4,size = 0.2),
panel.grid.major.y = element_line(colour='white',linetype = 1,size = 0.1),
axis.line = element_line(colour='white'),
axis.title=element_text(colour='white'),
plot.title = element_text(colour='white'))
# + [markdown] id="VTRAAy5NnyO9"
# ### Reading the dataset
# ## Dataset extracted from: https://sie.energia.gob.mx/bdiController.do?action=cuadro&subAction=applyOptions
# # Mexican oil export prices
# + id="W4rO7QIIarZy"
df <-
read.csv(
'preco.csv',
sep = "," ,
dec = ',',
header = T,
stringsAsFactors = FALSE
)
# + [markdown] id="kS7s0vCyoAwa"
# ### Convert to a time series
# + colab={"base_uri": "https://localhost:8080/", "height": 301} id="M8sPe2rsjbVv" outputId="3f60b5a3-dbc0-4269-c1a7-845027c2118c"
df_series = ts(df, start=c(2013,1), frequency = 12)
df_series
# + [markdown] id="MFlBo2cAoDVa"
# ### Plot the time series - visually, it does not appear to be stationary
# + [markdown] id="dHF7gITbo6uZ"
# ### As the figure below shows, there are signs of non-stationarity, since the mean does not appear constant over the observed period. To verify the stationarity hypothesis we will run the Dickey-Fuller test.
# + colab={"base_uri": "https://localhost:8080/", "height": 437} id="IbF5I9o-jg-z" outputId="6474eba0-edf6-4213-adf0-59f6c0827bd9"
plot(df_series, main="Mexican Oil Price Time Series", ylab="Oil Price", xlab="Date", col="#FF69B4", lty=1, lwd=2)
# + [markdown] id="JZUNX_qioLh0"
# ## ADF is the Dickey-Fuller test
#
# ### Augmented Dickey-Fuller stationarity test (Dickey & Fuller, 1979), with the following hypotheses:
# ### H0: the series is not stationary
# ### H1: the series is stationary
# ### Decision rule: when the p-value is < 0.05 we reject H0, i.e. there is evidence that the series is stationary.
#
#
# ### Run the stationarity test - since the p-value is greater than 0.05, the series is not stationary.
#
# ### We use significance level α = 0.05, the probability of rejecting the null hypothesis when it is true. For example, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no real difference.
#
# ### The result was p-value = 0.7328 > 0.05, so the series is not stationary.
#
#
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 118} id="kIonhdhijk11" outputId="573cf9b4-df65-4ad5-fa42-1c2a04aa0aca"
adf.test(df_series)
# + [markdown] id="Ss2TZIYbrpVZ"
# ### Since we obtained p-value = 0.7328 > $\alpha = 0.05$, we have no evidence to reject the null hypothesis, i.e. strong indication of non-stationarity. We will therefore difference the series to try to make it stationary.
# + colab={"base_uri": "https://localhost:8080/", "height": 437} id="rwVyRUBUjs1T" outputId="4cdd8bbd-41cb-4351-e0ff-c6135163afd2"
serie_dif1 = diff(df_series)
plot(serie_dif1)
# + [markdown] id="rxbujD4lsckT"
# ## Run the test again to check whether the series has become stationary
# + colab={"base_uri": "https://localhost:8080/", "height": 118} id="Px04omhFjxDS" outputId="790cd4c4-19c0-4b31-fa7a-58dfd57cfec2"
adf.test(serie_dif1)
# + [markdown] id="q32mKAuvsoIK"
# ### Since we obtained p-value = 0.071 > $\alpha = 0.05$, we still have no evidence to reject the null hypothesis, so there is still strong indication of non-stationarity. We will therefore difference again to try to make the series stationary.
# + id="UOWLo-D7kBMj"
serie_dif2 = diff(serie_dif1)
# + colab={"base_uri": "https://localhost:8080/", "height": 151} id="4NQgs2NmkYWK" outputId="ef7754e1-480c-41bc-86c4-688ccdc4b4d7"
adf.test(serie_dif2)
# + [markdown] id="FAZqum5Ks2tR"
# ### Since we obtained p-value = 0.01 < 0.05, we have evidence that the series is stationary, so we reject the null hypothesis. Now plot it.
# + colab={"base_uri": "https://localhost:8080/", "height": 470} id="K85fcKSIkefD" outputId="41e7c0ae-d500-49ff-b4e0-205cf01f49f7"
plot(serie_dif2, type="o", lty="dashed", ylab="Price", xlab="Date", main="Twice-Differenced Oil Price Series", col="#1E90FF", lwd=2)  # log removed: the differenced series contains negative values
# + [markdown] id="2Rz4tcYIuFCs"
# ### With the series stationary we can fit the ARIMA model; the integration order is the number of differences taken, so d = 2.
#
# ### Analyze the ACF and PACF plots to determine the moving-average (MA) and autoregressive (AR) orders.
# + colab={"base_uri": "https://localhost:8080/", "height": 437} id="2MdGG90-khWC" outputId="28a949a3-f371-4dc0-9d7c-1a88710096fb"
par(mfrow=c(2,1), mar=c(4,4,4,1)+.1)
acf(serie_dif2)
pacf(serie_dif2)
# + [markdown] id="JFGzopKvu_1f"
# ### Analyzing the ACF and PACF plots, we observe characteristics similar to the theoretical AR and MA models. The ACF shows a significant correlation at the first lag followed by non-significant ones, while the PACF shows a significant correlation at the first lag that dies out after a few lags. We therefore hypothesize these as the model orders: an ARIMA(1,2,1) may fit the data.
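# ### As a cross-check on the hand-identified orders, `forecast::auto.arima` can search p and q automatically with d fixed at 2 (a sketch; it may select a model other than ARIMA(1,2,1)):

```r
library(forecast)
# fix d = 2 (the differencing order established above) and search over p and q
fit_auto <- auto.arima(df_series, d = 2, stepwise = FALSE, approximation = FALSE)
summary(fit_auto)
```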
# + [markdown] id="TSr41svPuiW2"
# ### Train/test split
# + id="MnxKeIYKkmnT"
## training set
df_series_treino <- window(df_series, start = 2013, end = c(2018,12))
## test set
df_series_test <- window(df_series,start = 2019)
# + [markdown] id="cy5iPsuWuz8J"
# ### ARIMA model - specifying the ARIMA orders
# + id="LsL6q82Dkoka"
modelo1 = arima(df_series_treino,order=c(1,2,1))
# + [markdown] id="eOkksQIqu7gZ"
# ### Residual diagnostics
# + colab={"base_uri": "https://localhost:8080/", "height": 706} id="1vvsLM1okwq1" outputId="99534c1f-7338-4067-b929-e3637a58c892"
summary(modelo1)
tsdiag(modelo1)
# + [markdown] id="FYwZNyrUygH3"
# ### The first plot, "Standardized Residuals", indicates that the residuals have mean 0 and constant variance.
# ### The second plot shows that the errors are not autocorrelated over time, since the lags fall inside the blue confidence band, i.e. close to zero and without trends.
# ### The third plot shows that the autocorrelations at all observed lags are not significant, since the Ljung-Box p-values are large.
# ### This suggests the model captured most of the autocorrelation structure of the series.
# ### The model assumptions are therefore verified.
#
# ### In addition, the mean absolute percentage error (MAPE) expresses accuracy as a percentage of error: here MAPE = 5.35, meaning the forecast is off by 5.35% on average.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 101} id="VeQsX7cFkxf1" outputId="f424a0b2-7bc7-4896-fd16-5a74441d9e95"
Box.test(residuals(modelo1),type="Ljung-Box")
# + [markdown] id="3mTs7K_L1WpW"
# Ljung-Box white-noise test, with the following hypotheses:
#
# H0: white noise - the model shows no lack of fit.
#
# H1: not white noise - the model shows lack of fit.
#
# Decision rule: when the p-value is > 0.05 we fail to reject H0, i.e. there is evidence that the model shows no lack of fit.
#
# Since the p-value is > 0.05, we fail to reject H0: there is evidence that the model shows no lack of fit.
# + [markdown] id="3pf2cKEf1k5k"
# ### Having checked the residuals, we forecast the next 10 months
# + colab={"base_uri": "https://localhost:8080/", "height": 437} id="0tNAvEAjkzob" outputId="d97e7c09-57cf-4fab-dac2-4e3297d67ca1"
forecasting=forecast::forecast(modelo1,h=10)
plot(forecasting)
# + [markdown] id="08JDa7vR13Xv"
#
# + id="VHUhgPAs3qH3"
forecasting
# + id="joIUVqwv3x5K"
df_series
# + id="vHf63wSa4wdY"
| files/seriestemporais_petroleo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import json
import requests
import matplotlib.pyplot as plt
import random
import gmaps
from config import gkey
from ipywidgets.embed import embed_minimal_html
pd.options.mode.chained_assignment = None
gmaps.configure(api_key=gkey)
# -
# read from cities csv and create a dataframe
cities_df = pd.read_csv('../WeatherPy/cities.csv')
cities_df.head()
# +
# Store latitude and longitude in locations
locations = cities_df[["latitude", "longitude"]]
# convert to float
humidity = cities_df["humidity(%)"].astype(float)
# +
# Plot Heatmap
fig = gmaps.figure(zoom_level=2,center=(0,10))
# Create heat layer
heat_layer = gmaps.heatmap_layer(locations, weights=humidity,
dissipating=False, max_intensity=100,
point_radius=4)
# Add layer
fig.add_layer(heat_layer)
# Display figure
embed_minimal_html('heatmap.html', views=[fig])
fig
# -
# select ideal cities
ideal_cities_df = cities_df[(cities_df['temperature(F)']>=70) & (cities_df['temperature(F)']<=80) & (cities_df['wind speed(mph)']<10) & (cities_df['cloudiness(%)']==0)]
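# The boolean mask above combines four conditions; the same per-row predicate in plain Python (thresholds copied from the cell above):

```python
def is_ideal(temp_f, wind_mph, cloud_pct):
    # 70-80 F inclusive, wind under 10 mph, zero cloudiness
    return 70 <= temp_f <= 80 and wind_mph < 10 and cloud_pct == 0

print(is_ideal(75, 5, 0))    # -> True
print(is_ideal(85, 5, 0))    # -> False
```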
# +
# base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
base_url = "https://maps.googleapis.com/maps/api/place/findplacefromtext/json"
params = {
"key": gkey,
}
# use iterrows to iterate through pandas dataframe
for index, row in ideal_cities_df.iterrows():
    # get the city name and coordinates from the row
city = row['city']
lat = row['latitude']
lng = row['longitude']
# add keyword to params dict
# params['name'] = f'Sheraton'
# params['location'] = f'{lat},{lng}'
# print(params)
params['input'] = f'hotel in {city}'
params['inputtype'] = 'textquery'
params['locationbias'] = f'circle:{5000}@{lat},{lng}'
params['fields'] = f'name,formatted_address,geometry'
# assemble url and make API request
print(f"Retrieving Results for Index {index}: {city}.")
response = requests.get(base_url, params=params).json()
# extract results
results = response['candidates']
try:
print(f"Closest hotel to {city} is {results[0]['name']}.")
ideal_cities_df.loc[index, 'hotel_lat'] = results[0]['geometry']['location']['lat']
ideal_cities_df.loc[index, 'hotel_lng'] = results[0]['geometry']['location']['lng']
ideal_cities_df.loc[index, 'hotel_name'] = results[0]['name']
ideal_cities_df.loc[index, 'country'] = results[0]['formatted_address']
ideal_cities_df.loc[index, 'hotel_address'] = f'City: {row["city"]} \nHotel: {results[0]["name"]} \nAddress: {results[0]["formatted_address"]}'
except (KeyError, IndexError):
print("Missing field/result... skipping.")
print("------------")
# +
# Plot Heatmap
fig = gmaps.figure()
locations = ideal_cities_df[["hotel_lat", "hotel_lng"]]
humidity = ideal_cities_df["humidity(%)"].astype(float)
# Create heat layer
heat_layer = gmaps.heatmap_layer(locations, weights=humidity,
dissipating=False, max_intensity=100,
point_radius=4)
# Add layer
fig.add_layer(heat_layer)
# Add marker layer
city = [row for row in ideal_cities_df['hotel_address']]
marker_layer = gmaps.marker_layer(locations, hover_text=city)
fig.add_layer(marker_layer)
# Display figure
fig
# -
| VacationPy/VacationPy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.6 64-bit
# metadata:
# interpreter:
# hash: 42121322e9afdadd065c41f4f28fbda90dfc85008d7dd0b59a613751487dfba1
# name: python3
# ---
# # Counting Easter eggs
# Our experiment compares the classification approach and the regression approach. The selection is done with the `class_mode` option in Keras' ImageDataGenerator flow_from_directory. `categorical` is used for the one-hot encoding and `sparse` for integers as classes.
#
# Careful: while this is the convention there, in other contexts 'sparse' might mean a vector representation with more-than-one-hot entries, and the term 'binary' would instead be used for integer labels, generalizing a binary 0/1 problem to several possible classes.
#
# In the notebook, the class_mode is used as a switch for the different Net variants and evaluation scripting.
class_mode = "sparse"
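# The two encodings can be illustrated without Keras (plain-Python sketch; 8 classes as in this notebook):

```python
def to_one_hot(label, num_classes=8):
    # 'categorical' class_mode: a one-hot vector with 1.0 at the class index
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

# 'sparse' class_mode keeps the integer label itself (e.g. 3) as the target
print(to_one_hot(3))   # -> [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```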
# ## Imports and version numbers
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from matplotlib import pyplot as plt
import os
import re
import numpy as np
from tensorflow.keras.preprocessing.image import load_img
# Python version: 3.8
print(tf.__version__)
# CUDA version:
# !nvcc --version
# ## Prepare data for the training
# If you redo this notebook on your own, you'll need the images with 0..7 (without 5) eggs in the folders `./images/0` ... `./images/7` (`./images/5` must exist for the classification training, but be empty)
# +
data_directory = "./images"
input_shape = [64,64,3] # 256
batch_size = 16
seed = 123 # for val split
train_datagen = ImageDataGenerator(
validation_split=0.2,
rescale=1.0/255.0
)
train_generator = train_datagen.flow_from_directory(
data_directory,
seed=seed,
target_size=(input_shape[0],input_shape[1]),
color_mode="rgb",
class_mode=class_mode,
batch_size=batch_size,
subset='training'
)
val_datagen = ImageDataGenerator(
validation_split=0.2,
rescale=1.0/255.0
)
val_generator = val_datagen.flow_from_directory(
data_directory,
seed=seed,
target_size=(input_shape[0],input_shape[1]),
color_mode="rgb",
class_mode=class_mode,
batch_size=batch_size,
subset='validation'
)
# -
# ## Prepare the model
# +
num_classes = 8 # because 0..7 eggs
if class_mode == "categorical":
num_output_dimensions = num_classes
if class_mode == "sparse":
num_output_dimensions = 1
model = tf.keras.Sequential()
model.add( tf.keras.layers.Conv2D(
filters = 4,
kernel_size = 5,
strides = 1,
padding = 'same',
activation = 'relu',
input_shape = input_shape
))
model.add( tf.keras.layers.MaxPooling2D(
pool_size = 2, strides = 2
))
model.add( tf.keras.layers.Conv2D(
filters = 8,
kernel_size = 5,
strides = 1,
padding = 'same',
activation = 'relu'
))
model.add( tf.keras.layers.MaxPooling2D(
pool_size = 2, strides = 2
))
model.add( tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(
units = 16, activation = 'relu'
))
if class_mode == "categorical":
last_activation = 'softmax'
if class_mode == "sparse":
last_activation = None
model.add(tf.keras.layers.Dense(
units = num_output_dimensions, activation = last_activation
))
if class_mode == "categorical":
loss = 'categorical_crossentropy'
if class_mode == "sparse":
loss = 'mse'
model.compile(
optimizer = 'adam',
loss = loss,
metrics = ['accuracy']
)
model.summary()
# -
# ## Train the model
# (on 0,1,2,3,4,6,7, but not 5 eggs)
# +
epochs = 5
model.fit(
train_generator,
epochs=epochs,
validation_data=val_generator,
)
# +
plt.figure()
if class_mode == "categorical":
plt.plot(model.history.history['accuracy'])
plt.plot(model.history.history['val_accuracy'])
plt.title('History')
plt.ylabel('Value')
plt.xlabel('Epoch')
plt.legend(['accuracy','val_accuracy'], loc='best')
plt.show()
if class_mode == "sparse":
plt.plot(model.history.history['loss'])
plt.plot(model.history.history['val_loss'])
plt.title('History')
plt.ylabel('Value')
plt.xlabel('Epoch')
plt.legend(['loss','val_loss'], loc='best')
plt.show()
# -
# ## Illustrate performance on unknown and completely unknown input
# (5 eggs are completely unknown; all other counts were trained, but at least the test image with 4 eggs was not used during training)
#
# If you are running this notebook on your own, you might have to adjust the file paths, and you'll have to put the images with 5 eggs in the folder `./images_unknown/5`
# +
filepath_known = './images_known_unknown/4/0.png'
filepath_unknown = './images_unknown/5/0.png'
# helper function to make the notebook more tidy
def make_prediction(filepath):
img = load_img(
filepath,
target_size=(input_shape[0],input_shape[1])
)
img = np.array(img)
img = img / 255.0
data = np.expand_dims(img, axis=0)
prediction = model.predict(data)[0]
predicted_class = np.argmax(prediction)
return img, prediction, predicted_class
if class_mode == "categorical":
img, prediction, predicted_class = make_prediction(filepath_known)
plt.figure()
plt.subplot(2,1,0+1)
plt.imshow(img)
plt.subplot(2,1,0+2)
plt.plot(prediction,'og')
plt.xlabel('Predicted "class"')
plt.ylabel('Score')
img, prediction, predicted_class = make_prediction(filepath_unknown)
plt.figure()
plt.subplot(2,1,0+1)
plt.imshow(img)
plt.subplot(2,1,0+2)
plt.plot(prediction,'og')
plt.xlabel('Predicted "class"')
plt.ylabel('Score')
if class_mode == "sparse":
img, prediction, predicted_class = make_prediction(filepath_known)
plt.figure()
plt.imshow(img)
_ = plt.title('Prediction: '+str(prediction[0])+' - '+str(round(prediction[0])))
img, prediction, predicted_class = make_prediction(filepath_unknown)
plt.figure()
plt.imshow(img)
_ = plt.title('Prediction: '+str(prediction[0])+' - '+str(round(prediction[0])))
# +
data_test_directory = "./images_unknown"
test_datagen = ImageDataGenerator(
rescale=1.0/255.0
)
test_generator = test_datagen.flow_from_directory(
data_test_directory,
target_size=(input_shape[0],input_shape[1]),
color_mode="rgb",
class_mode=class_mode,
batch_size=1,
subset=None,
shuffle=False
)
all_predictions = model.predict(test_generator, verbose=1)
plt.figure()
if class_mode == 'categorical':
_ = plt.imshow(all_predictions, cmap='summer', aspect='auto', interpolation='none')
_ = plt.colorbar()
_ = plt.xlabel('Predicted class')
_ = plt.ylabel('Image number')
_ = plt.title('Score heatmap for true class 5')
if class_mode == 'sparse':
num_bins = 70
_ = plt.hist(all_predictions, num_bins, color='g')
_ = plt.xlabel('Regression value')
_ = plt.ylabel('Counts')
_ = plt.title('Histogram of output values for true number 5')
# -
# ## Illustrate performance on completely known data
# For the sake of completeness, we repeat the last plots again for the known data used in the training phase.
# +
data_test_directory = "./images"
test_datagen = ImageDataGenerator(
rescale=1.0/255.0
)
test_generator = test_datagen.flow_from_directory(
data_test_directory,
target_size=(input_shape[0],input_shape[1]),
color_mode="rgb",
class_mode=class_mode,
batch_size=1,
subset=None,
shuffle=False
)
test_labels = (test_generator.class_indices)
test_filenames = test_generator.filenames
all_predictions = model.predict(test_generator, verbose=1)
if class_mode == "categorical":
_ = plt.imshow(all_predictions, cmap='summer', aspect='auto', interpolation='none')
_ = plt.colorbar()
_ = plt.xlabel('Class')
_ = plt.ylabel('Image number')
_ = plt.title('Score heatmap')
if class_mode == "sparse":
plt.figure()
num_bins = 70
_ = plt.hist(all_predictions, num_bins, color='g')
_ = plt.xlabel('Regression value')
_ = plt.ylabel('Counts')
_ = plt.title('Histogram of output values')
| jupyter_notebooks/regression.ipynb |
# coding=utf-8
# Copyright 2021 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A runner class to run simulations."""
import numpy as np
class Runner:
"""A class running simulations."""
def __init__(self, env, agent, nsteps=100):
"""Initializes a runner.
Args:
env: A EcosystemGymEnv gym environment.
Initial observation: A dictionary of {
`user`: dict(user_id=user_obs),
`creator`: dict(creator_id=creator_obs),
`doc`: ordered dict(doc_id=document_obs)};
Step observation: A dictionary of {
`user`: dict(user_id=user_obs),
`creator`: dict(creator_id=creator_obs),
`doc`: ordered dict(doc_id=document_obs),
`user_response`: dict(user_id=a list of response_obs)
`creator_action`: dict(creator_id=creator_action)`}.
agent: An agent object to generate recommendations.
nsteps: Int, maximum steps within one simulation.
"""
self.env = env
self.agent = agent
self.nsteps = nsteps
def run(self, obs=None):
"""Run simulations with the given initial environment observation.
Args:
obs: Initial observation of the environment, either comes from the last
observation of the last simulation, or None. If None, reset the
environment and start a new simulation.
Returns:
user_dict: {user_obs, user_clicked_docs, user_terminates}:
user_obs: A dictionary of key=user_id, value=a list of user observations
at all time steps.
user_clicked_docs: A dictionary of key=user_id, value=a list of user
consumed documents (doc, reward, index in the candidate set).
user_terminates: A dictionary of key=user_id, value=boolean denoting
whether this user has terminated or not at the end of simulation.
creator_dict: {creator_obs, creator_recommended_docs,
creator_clicked_docs, creator_actions, creator_terminates}:
creator_obs: A dictionary of key=creator_id, value=a list of creator
observations at all time steps.
creator_recommended_docs: A dictionary of key=creator_id, value=a list
of sublists, where each sublist represents the recommended documents
at current time steps.
creator_clicked_docs: A dictionary of key=creator_id, value=a list
of sublists, where each sublist represents the user clicked documents
(document object, user reward) at current time steps.
creator_actions: A dictionary of key=creator_id, value=a list of creator
actions(one of 'create'/'stay'/'leave') at current time step.
creator_terminates: A dictionary of key=creator_id, value=boolean
denoting whether this creator has terminated at the end of simulation.
candidate_set: A list of doc objects in candidate_set at each time step.
obs: Environment observation after the last action.
done: Boolean, denotes whether the simulation terminates or not.
"""
# If initial observation is None, last simulation has terminated, and
# environment should be reset.
if obs is None:
obs = self.env.reset()
    # Initialize return variables.
user_obs = dict() # Record user's observation at the current time step.
user_clicked_docs = dict(
) # Record user's click and reward at the current time step.
user_terminates = dict() # Record if user leaves.
for u_id in obs['user']:
user_obs[u_id] = []
user_clicked_docs[u_id] = []
user_terminates[u_id] = False
creator_obs = dict()
creator_recommended_docs = dict()
creator_clicked_docs = dict()
creator_actions = dict()
creator_terminates = dict()
creator_rewards = dict()
creator_is_saturation = dict()
for c_id in obs['creator']:
creator_obs[c_id] = []
creator_recommended_docs[c_id] = []
creator_clicked_docs[c_id] = []
creator_actions[c_id] = []
creator_terminates[c_id] = False
creator_rewards[c_id] = []
creator_is_saturation[c_id] = obs['creator'][c_id][
'creator_is_saturation']
# Simulation.
document_num = []
creator_num = []
user_num = []
topic_distribution = []
selected_probs = []
policy_probs = []
user_embedding_states = []
creator_embedding_states = []
candidate_documents = []
# Simulation.
for t in range(self.nsteps):
previous_docs = list(obs['doc'].values())
previous_creators = obs['creator']
# Record the environment observation at the start of time t.
for u_id, u_obs in obs['user'].items():
user_obs[u_id].append(u_obs)
for c_id, c_obs in obs['creator'].items():
creator_obs[c_id].append(c_obs)
document_num.append(self.env.num_documents)
creator_num.append(self.env.num_creators)
user_num.append(self.env.num_users)
topic_distribution.append(self.env.topic_documents)
# Agent generates recommendations: a dictionary of user_id=slate.
# Also returns at time t, user embedding states, candidate creator
# embedding states and candidate creator rnn internal states based on
# their histories up to time t-1.
user_dict = dict(
user_obs=user_obs,
user_clicked_docs=user_clicked_docs,
user_terminates=user_terminates)
creator_dict = dict(
creator_obs=creator_obs,
creator_recommended_docs=creator_recommended_docs,
creator_clicked_docs=creator_clicked_docs,
creator_actions=creator_actions,
creator_terminates=creator_terminates,
creator_is_saturation=creator_is_saturation)
if self.agent.name == 'EcoAgent':
preprocessed_candidates = self.agent.preprocess_candidates(
creator_dict, obs['doc'])
creator_embedding_states.append(preprocessed_candidates[0])
creator_rnn_states = preprocessed_candidates[1]
creator_saturate = preprocessed_candidates[2]
creator_id = preprocessed_candidates[3]
candidate_documents.append(preprocessed_candidates[4])
slates, probs, preprocessed_user = self.agent.step(user_dict, obs['doc'])
policy_probs.extend(list(probs.values()))
user_embedding_states.append(preprocessed_user)
# Record creator current recommendations (recommender feedback).
## First initialize to be empty at time t.
for c_id, c_obs in obs['creator'].items():
creator_recommended_docs[c_id].append([])
creator_clicked_docs[c_id].append([])
# Record recommended docs of creator (recommender feedback).
for slate in slates.values():
for idx in slate:
doc = previous_docs[idx]
c_id = doc['creator_id']
creator_recommended_docs[c_id][t].append(doc)
# Step the environment.
obs, _, done, _ = self.env.step(slates)
# Record if user leaves.
user_terminates = obs['user_terminate']
# Record click information.
for u_id, user_responses in obs['user_response'].items():
for doc_idx, response in zip(slates[u_id], user_responses):
if response['click']:
# Record user feedback for creator.
doc = previous_docs[doc_idx]
c_id = doc['creator_id']
creator_clicked_docs[c_id][t].append((doc, response['reward']))
# Record user clicked doc, user reward, and corresponding clicked
# creator rnn_states and the satisfaction before this
# click happens for uplift modeling.
clicked_creator_previous_satisfaction = previous_creators[c_id][
'creator_satisfaction']
if self.agent.name == 'EcoAgent':
clicked_creator_rnn_state = creator_rnn_states[doc_idx]
clicked_creator_is_saturation = creator_saturate[doc_idx]
clicked_creator_id = creator_id[doc_idx]
else:
clicked_creator_rnn_state = None
clicked_creator_is_saturation = None
clicked_creator_id = None
user_clicked_docs[u_id].append(
(doc, response['reward'], doc_idx, clicked_creator_rnn_state,
clicked_creator_previous_satisfaction,
clicked_creator_is_saturation, clicked_creator_id))
# Record the probability of selected documents.
selected_probs.append(probs[u_id][doc_idx])
break
# Record creator responses.
for c_id, (c_action, c_reward) in obs['creator_response'].items():
creator_actions[c_id].append(c_action)
if c_action == 'leave':
creator_terminates[c_id] = True
creator_rewards[c_id].append(c_reward)
if done:
break
user_dict = dict(
user_obs=user_obs,
user_clicked_docs=user_clicked_docs,
user_terminates=user_terminates)
creator_dict = dict(
creator_obs=creator_obs,
creator_recommended_docs=creator_recommended_docs,
creator_clicked_docs=creator_clicked_docs,
creator_actions=creator_actions,
creator_rewards=creator_rewards,
creator_terminates=creator_terminates,
creator_is_saturation=creator_is_saturation)
env_record = dict(
document_num=document_num, creator_num=creator_num, user_num=user_num)
probs = dict(
selected_probs=np.array(selected_probs),
policy_probs=np.array(policy_probs))
preprocessed_user_candidates = [
user_embedding_states, creator_embedding_states, candidate_documents
]
return (user_dict, creator_dict, preprocessed_user_candidates, env_record,
probs, obs, done, topic_distribution)
| recommender/runner.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import torch
from functools import partial
from torch import nn, Tensor
from torch.nn import functional as F
from typing import Any, Callable, Dict, List, Optional, Sequence
from torch.hub import load_state_dict_from_url
__all__ = ["MobileNetV3", "mobilenet_v3_large", "mobilenet_v3_small"]
class ConvBNActivation(nn.Sequential):
def __init__(
self,
in_planes: int,
out_planes: int,
kernel_size: int = 3,
stride: int = 1,
groups: int = 1,
norm_layer: Optional[Callable[..., nn.Module]] = None,
activation_layer: Optional[Callable[..., nn.Module]] = None,
dilation: int = 1,
) -> None:
padding = (kernel_size - 1) // 2 * dilation
if norm_layer is None:
norm_layer = nn.BatchNorm2d
if activation_layer is None:
activation_layer = nn.ReLU6
super().__init__(
nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, dilation=dilation, groups=groups,
bias=False),
norm_layer(out_planes),
activation_layer(inplace=True)
)
self.out_channels = out_planes
def _make_divisible(v: float, divisor: int, min_value: Optional[int] = None) -> int:
"""
This function is taken from the original tf repo.
It ensures that all layers have a channel number that is divisible by 8
It can be seen here:
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
"""
if min_value is None:
min_value = divisor
new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
# Make sure that round down does not go down by more than 10%.
if new_v < 0.9 * v:
new_v += divisor
return new_v
model_urls = {
"mobilenet_v3_large": "https://download.pytorch.org/models/mobilenet_v3_large-8738ca79.pth",
"mobilenet_v3_small": "https://download.pytorch.org/models/mobilenet_v3_small-047dcff4.pth",
}
class SqueezeExcitation(nn.Module):
# Implemented as described at Figure 4 of the MobileNetV3 paper
def __init__(self, input_channels: int, squeeze_factor: int = 4):
super().__init__()
squeeze_channels = _make_divisible(input_channels // squeeze_factor, 8)
self.fc1 = nn.Conv2d(input_channels, squeeze_channels, 1)
self.relu = nn.ReLU(inplace=True)
self.fc2 = nn.Conv2d(squeeze_channels, input_channels, 1)
def _scale(self, input: Tensor, inplace: bool) -> Tensor:
scale = F.adaptive_avg_pool2d(input, 1)
scale = self.fc1(scale)
scale = self.relu(scale)
scale = self.fc2(scale)
return F.hardsigmoid(scale, inplace=inplace)
def forward(self, input: Tensor) -> Tensor:
scale = self._scale(input, True)
return scale * input
class InvertedResidualConfig:
# Stores information listed at Tables 1 and 2 of the MobileNetV3 paper
def __init__(self, input_channels: int, kernel: int, expanded_channels: int, out_channels: int, use_se: bool,
activation: str, stride: int, dilation: int, width_mult: float):
self.input_channels = self.adjust_channels(input_channels, width_mult)
self.kernel = kernel
self.expanded_channels = self.adjust_channels(expanded_channels, width_mult)
self.out_channels = self.adjust_channels(out_channels, width_mult)
self.use_se = use_se
self.use_hs = activation == "HS"
self.stride = stride
self.dilation = dilation
@staticmethod
def adjust_channels(channels: int, width_mult: float):
return _make_divisible(channels * width_mult, 8)
class InvertedResidual(nn.Module):
# Implemented as described at section 5 of MobileNetV3 paper
def __init__(self, cnf: InvertedResidualConfig, norm_layer: Callable[..., nn.Module],
se_layer: Callable[..., nn.Module] = SqueezeExcitation):
super().__init__()
if not (1 <= cnf.stride <= 2):
raise ValueError('illegal stride value')
self.use_res_connect = cnf.stride == 1 and cnf.input_channels == cnf.out_channels
layers: List[nn.Module] = []
activation_layer = nn.Hardswish if cnf.use_hs else nn.ReLU
# expand
if cnf.expanded_channels != cnf.input_channels:
layers.append(ConvBNActivation(cnf.input_channels, cnf.expanded_channels, kernel_size=1,
norm_layer=norm_layer, activation_layer=activation_layer))
# depthwise
stride = 1 if cnf.dilation > 1 else cnf.stride
layers.append(ConvBNActivation(cnf.expanded_channels, cnf.expanded_channels, kernel_size=cnf.kernel,
stride=stride, dilation=cnf.dilation, groups=cnf.expanded_channels,
norm_layer=norm_layer, activation_layer=activation_layer))
if cnf.use_se:
layers.append(se_layer(cnf.expanded_channels))
# project
layers.append(ConvBNActivation(cnf.expanded_channels, cnf.out_channels, kernel_size=1, norm_layer=norm_layer,
activation_layer=nn.Identity))
self.block = nn.Sequential(*layers)
self.out_channels = cnf.out_channels
self._is_cn = cnf.stride > 1
def forward(self, input: Tensor) -> Tensor:
result = self.block(input)
if self.use_res_connect:
result += input
return result
class MobileNetV3(nn.Module):
def __init__(
self,
inverted_residual_setting: List[InvertedResidualConfig],
last_channel: int,
num_classes: int = 1000,
block: Optional[Callable[..., nn.Module]] = None,
norm_layer: Optional[Callable[..., nn.Module]] = None,
**kwargs: Any
) -> None:
"""
MobileNet V3 main class
Args:
inverted_residual_setting (List[InvertedResidualConfig]): Network structure
last_channel (int): The number of channels on the penultimate layer
num_classes (int): Number of classes
block (Optional[Callable[..., nn.Module]]): Module specifying inverted residual building block for mobilenet
norm_layer (Optional[Callable[..., nn.Module]]): Module specifying the normalization layer to use
"""
super().__init__()
if not inverted_residual_setting:
raise ValueError("The inverted_residual_setting should not be empty")
elif not (isinstance(inverted_residual_setting, Sequence) and
all([isinstance(s, InvertedResidualConfig) for s in inverted_residual_setting])):
raise TypeError("The inverted_residual_setting should be List[InvertedResidualConfig]")
if block is None:
block = InvertedResidual
if norm_layer is None:
norm_layer = partial(nn.BatchNorm2d, eps=0.001, momentum=0.01)
layers: List[nn.Module] = []
# building first layer
firstconv_output_channels = inverted_residual_setting[0].input_channels
layers.append(ConvBNActivation(3, firstconv_output_channels, kernel_size=3, stride=2, norm_layer=norm_layer,
activation_layer=nn.Hardswish))
# building inverted residual blocks
for cnf in inverted_residual_setting:
layers.append(block(cnf, norm_layer))
# building last several layers
lastconv_input_channels = inverted_residual_setting[-1].out_channels
lastconv_output_channels = 6 * lastconv_input_channels
layers.append(ConvBNActivation(lastconv_input_channels, lastconv_output_channels, kernel_size=1,
norm_layer=norm_layer, activation_layer=nn.Hardswish))
self.features = nn.Sequential(*layers)
self.avgpool = nn.AdaptiveAvgPool2d(1)
self.classifier = nn.Sequential(
nn.Linear(lastconv_output_channels, last_channel),
nn.Hardswish(inplace=True),
nn.Dropout(p=0.2, inplace=True),
nn.Linear(last_channel, num_classes),
)
self.supp_features = torch.tensor(-10000)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out')
if m.bias is not None:
nn.init.zeros_(m.bias)
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.ones_(m.weight)
nn.init.zeros_(m.bias)
elif isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 0.01)
nn.init.zeros_(m.bias)
def _forward_impl(self, x: Tensor) -> Tensor:
x = self.features(x)
x = self.avgpool(x)
x = torch.flatten(x, 1)
# x = self.classifier(x)
return x
def feature(self, x: Tensor) -> Tensor:
query = self._forward_impl(x)
return query
def fuse_kshot(self, x: Tensor) -> Tensor:
query = self._forward_impl(x)
return torch.mean(query, dim=0)
def store_supp(self, x: Tensor) -> Tensor:
if torch.all(self.supp_features == torch.tensor(-10000)):
self.supp_features = self._forward_impl(x).detach()
else:
self.supp_features = torch.cat([self.supp_features, self._forward_impl(x).detach()], dim=0)
return self.supp_features
# @torch.jit.export
# def forward_with_stored_supp(self, x: Tensor) -> (Tensor, Tensor):
# # get features
# query = self._forward_impl(x)
# if self.supp_features is None:
# return query, query
# support = self.supp_features
#
# # compute the distance between support and query images
# # distance = torch.sum((query - support)**2, dim=-1)
# query = F.normalize(query, p=2, dim=1)
# support = F.normalize(support, p=2, dim=1)
# cos_sim = torch.matmul(query, support.T)
# probs = torch.softmax(cos_sim, dim=-1)
# return probs, cos_sim
# @torch.jit.export
def forward_with_supp_features(self, x: Tensor, x_supp: Tensor) -> (Tensor, Tensor, Tensor):
# get features
query = self._forward_impl(x)
support = x_supp
# compute the distance between support and query images
# distance = torch.sum((query - support)**2, dim=-1)
query = F.normalize(query, p=2, dim=1)
support = F.normalize(support, p=2, dim=1)
cos_sim = torch.matmul(query, support.T)
probs = torch.softmax(cos_sim, dim=-1)
return probs, cos_sim, torch.max(cos_sim)
def forward(self, x: Tensor, x_supp: Tensor) -> (Tensor, Tensor, Tensor):
# get features
query = self._forward_impl(x)
support = self._forward_impl(x_supp)
# compute the distance between support and query images
# distance = torch.sum((query - support)**2, dim=-1)
query = F.normalize(query, p=2, dim=1)
support = F.normalize(support, p=2, dim=1)
cos_sim = torch.matmul(query, support.T)
probs = torch.softmax(cos_sim, dim=-1)
return probs, cos_sim, torch.max(cos_sim)
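The matching head in `forward` above boils down to L2-normalizing the query and support features, comparing them by cosine similarity, and softmaxing over the support set. A minimal NumPy sketch of just that step, on toy vectors and independent of the network:

```python
import numpy as np

# L2-normalize query and support feature rows, take dot products
# (cosine similarity), then softmax each query's row over the support set.
def cosine_match(query, support):
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    s = support / np.linalg.norm(support, axis=1, keepdims=True)
    cos_sim = q @ s.T  # (n_query, n_support)
    e = np.exp(cos_sim - cos_sim.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    return probs, cos_sim

query = np.array([[1.0, 0.0]])
support = np.array([[1.0, 0.0], [0.0, 1.0]])
probs, cos_sim = cosine_match(query, support)
# same direction -> cosine similarity 1; orthogonal -> 0
```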
def _mobilenet_v3_conf(arch: str, width_mult: float = 1.0, reduced_tail: bool = False, dilated: bool = False,
**kwargs: Any):
reduce_divider = 2 if reduced_tail else 1
dilation = 2 if dilated else 1
bneck_conf = partial(InvertedResidualConfig, width_mult=width_mult)
adjust_channels = partial(InvertedResidualConfig.adjust_channels, width_mult=width_mult)
if arch == "mobilenet_v3_large":
inverted_residual_setting = [
bneck_conf(16, 3, 16, 16, False, "RE", 1, 1),
bneck_conf(16, 3, 64, 24, False, "RE", 2, 1), # C1
bneck_conf(24, 3, 72, 24, False, "RE", 1, 1),
bneck_conf(24, 5, 72, 40, True, "RE", 2, 1), # C2
bneck_conf(40, 5, 120, 40, True, "RE", 1, 1),
bneck_conf(40, 5, 120, 40, True, "RE", 1, 1),
bneck_conf(40, 3, 240, 80, False, "HS", 2, 1), # C3
bneck_conf(80, 3, 200, 80, False, "HS", 1, 1),
bneck_conf(80, 3, 184, 80, False, "HS", 1, 1),
bneck_conf(80, 3, 184, 80, False, "HS", 1, 1),
bneck_conf(80, 3, 480, 112, True, "HS", 1, 1),
bneck_conf(112, 3, 672, 112, True, "HS", 1, 1),
bneck_conf(112, 5, 672, 160 // reduce_divider, True, "HS", 2, dilation), # C4
bneck_conf(160 // reduce_divider, 5, 960 // reduce_divider, 160 // reduce_divider, True, "HS", 1, dilation),
bneck_conf(160 // reduce_divider, 5, 960 // reduce_divider, 160 // reduce_divider, True, "HS", 1, dilation),
]
last_channel = adjust_channels(1280 // reduce_divider) # C5
elif arch == "mobilenet_v3_small":
inverted_residual_setting = [
bneck_conf(16, 3, 16, 16, True, "RE", 2, 1), # C1
bneck_conf(16, 3, 72, 24, False, "RE", 2, 1), # C2
bneck_conf(24, 3, 88, 24, False, "RE", 1, 1),
bneck_conf(24, 5, 96, 40, True, "HS", 2, 1), # C3
bneck_conf(40, 5, 240, 40, True, "HS", 1, 1),
bneck_conf(40, 5, 240, 40, True, "HS", 1, 1),
bneck_conf(40, 5, 120, 48, True, "HS", 1, 1),
bneck_conf(48, 5, 144, 48, True, "HS", 1, 1),
bneck_conf(48, 5, 288, 96 // reduce_divider, True, "HS", 2, dilation), # C4
bneck_conf(96 // reduce_divider, 5, 576 // reduce_divider, 96 // reduce_divider, True, "HS", 1, dilation),
bneck_conf(96 // reduce_divider, 5, 576 // reduce_divider, 96 // reduce_divider, True, "HS", 1, dilation),
]
last_channel = adjust_channels(1024 // reduce_divider) # C5
else:
raise ValueError("Unsupported model type {}".format(arch))
return inverted_residual_setting, last_channel
def _mobilenet_v3_model(
arch: str,
inverted_residual_setting: List[InvertedResidualConfig],
last_channel: int,
pretrained: bool,
progress: bool,
**kwargs: Any
):
model = MobileNetV3(inverted_residual_setting, last_channel, **kwargs)
if pretrained:
if model_urls.get(arch, None) is None:
raise ValueError("No checkpoint is available for model type {}".format(arch))
state_dict = load_state_dict_from_url(model_urls[arch], progress=progress)
model.load_state_dict(state_dict)
return model
def mobilenet_v3_large(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> MobileNetV3:
"""
Constructs a large MobileNetV3 architecture from
`"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
arch = "mobilenet_v3_large"
inverted_residual_setting, last_channel = _mobilenet_v3_conf(arch, **kwargs)
return _mobilenet_v3_model(arch, inverted_residual_setting, last_channel, pretrained, progress, **kwargs)
def mobilenet_v3_small(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> MobileNetV3:
"""
Constructs a small MobileNetV3 architecture from
`"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
progress (bool): If True, displays a progress bar of the download to stderr
"""
arch = "mobilenet_v3_small"
inverted_residual_setting, last_channel = _mobilenet_v3_conf(arch, **kwargs)
return _mobilenet_v3_model(arch, inverted_residual_setting, last_channel, pretrained, progress, **kwargs)
model_tiny = mobilenet_v3_large(pretrained=True)
model_tiny.eval()
from torch.utils.mobile_optimizer import optimize_for_mobile
# query_inputs = torch.rand(5, 3, 224, 224)
# supp_inputs = torch.randn(10, 3, 224, 224)
# threshold = torch.tensor(0.05)
#
# outputs = model_tiny.forward(query_inputs, supp_inputs)
# print(outputs)
# resnet18_traced = torch.jit.trace(model_tiny, example_inputs = (query_inputs, supp_inputs))
#
# query_inputs = torch.rand(1, 3, 224, 224)
# supp_inputs = torch.rand(10, 3, 224, 224)
# # threshold = torch.tensor(0.05)
# outputs = resnet18_traced.feature(query_inputs)
# print(outputs)
#
# query_inputs = torch.rand(1, 3, 224, 224)
# supp_inputs = torch.rand(10, 960)
# # threshold = torch.tensor(0.05)
# outputs = resnet18_traced.store_supp_features(supp_inputs)
#
# supp_inputs = torch.rand(10, 3, 224, 224)
# outputs, cos = resnet18_traced.forward(query_inputs, supp_inputs)
# print(outputs)
query_inputs = torch.rand(1, 3, 224, 224)
supp_inputs = torch.rand(10, 3, 224, 224)
supp_feats = torch.rand(10, 960)
inputs = {'forward' : (query_inputs, supp_inputs),
'feature' : query_inputs,
'fuse_kshot' : supp_inputs,
# 'store_supp': supp_inputs,
# 'forward_with_stored_supp': (query_inputs),
'forward_with_supp_features' : (query_inputs, supp_feats),
}
resnet18_traced = torch.jit.trace_module(model_tiny, inputs)
query_inputs = torch.rand(1, 3, 224, 224)
supp_inputs = torch.rand(10, 3, 224, 224)
supp_feats = torch.rand(10, 960)
outputs, _, _ = resnet18_traced.forward(query_inputs, supp_inputs)
# print(outputs)
outputs = resnet18_traced.feature(query_inputs)
# print(outputs)
# support_features = resnet18_traced.store_supp(supp_inputs)
# outputs, _ = resnet18_traced.forward_with_stored_supp(query_inputs)
# print(support_features)
query_inputs = torch.rand(1, 3, 224, 224)
supp_inputs = torch.rand(10, 3, 224, 224)
# threshold = torch.tensor(0.05)
outputs = resnet18_traced.forward_with_supp_features(query_inputs, supp_feats)
# print(outputs)
# optimized_torchscript_model = optimize_for_mobile(ts_model)
# optimized_torchscript_model.save("mobilenetv3_large.pt")
resnet18_traced.save("mobilenetv3_large_probs_cos_0607.pt")
| quantized_mobilenetv4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Test tensorflow gpu #
# +
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
except RuntimeError as e:
print(e)
# -
# # Dataset #
# +
import os
import codecs
data = {}
classes_file = '../Dataset/classes.txt'
with codecs.open(classes_file, 'r', encoding='utf-8') as cF:
data = cF.read().split('\r\n')
len(data)
# +
import os
from PIL import Image, ImageDraw, ImageFont
text_source = '../Dataset/source.txt'
fonts_path = '../Dataset/Fonts'
fonts = [f'{fonts_path}/{f}' for f in os.listdir(fonts_path)]
fonts
# -
dataset = []
sequence_len = 20
# +
import matplotlib.pyplot as plt
import numpy as np
import random
import cv2
def draw_img(img):
plt.imshow(np.asarray(img), cmap='gray', vmin=0, vmax=255)
plt.show()
def load_img(img):
return cv2.imread(img, cv2.IMREAD_GRAYSCALE)
def dilate_img(img):
return cv2.dilate(img, np.ones((2,2), np.uint8))
def otsu_thresholding(img):
norm_img = np.zeros(img.shape)
img = cv2.normalize(img, norm_img, 0, 255, cv2.NORM_MINMAX)
blur = cv2.GaussianBlur(img, (3,3), 0)
_, img = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
img = dilate_img(img)
return img
# -
# ## Load dataset ##
# +
with open(text_source) as txt:
word_count = 0
sequence = ''
dataset = []
for i, line in enumerate(txt):
words = line.split(' ')
for single_word in words:
word = ''.join([c for c in single_word if c in data])
word = word.replace('\n', ' ')  # str.replace returns a new string
if len(word) < 1:
continue
if len(word) > 30:
# Split over-long words into chunks of at most 30 characters.
for start in range(0, len(word), 30):
dataset.append(word[start:start + 30])
continue
sequence = sequence + word + ' '
word_count = (word_count + 1) % sequence_len
if word_count == 0 or len(sequence) > 85:
dataset.append(sequence[:-1])
sequence = ''
dataset = list(set(dataset))
len(dataset)
# -
# ## Shuffle dataset ##
sorted_data = sorted(dataset, key=len)
longest_label = len(sorted_data[-1])
longest_label
# +
import random
random.seed(123456)
random.shuffle(dataset)
# dataset = dataset[:3000]
dataset
# -
# # Split data #
# +
train_split = int(0.9 * len(dataset))
val_split = int(train_split + 0.09 * len(dataset))
# test_split = int(train_split + 0.1 * len(dataset))
train_labels = dataset[:train_split]
val_labels = dataset[train_split:val_split]
test_labels = dataset[val_split:]
# val_labels = dataset[train_split:val_split]
# test_labels = dataset[val_split:]
print('Len train: ' + str(len(train_labels)))
print('Len val: ' + str(len(val_labels)))
print('Len test: ' + str(len(test_labels)))
# -
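The split above allocates roughly 90% / 9% / 1% of the shuffled labels to train, validation, and test. A toy check of the same arithmetic on a list of 1000 items:

```python
# Same 90 / 9 / 1 split logic as above, applied to 1000 dummy items.
items = list(range(1000))
train_split = int(0.9 * len(items))
val_split = int(train_split + 0.09 * len(items))
train = items[:train_split]
val = items[train_split:val_split]
test = items[val_split:]
# -> 900 / 90 / 10 items
```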
# # Model #
# +
timesteps = 512
width = 4096
height = 64
max_label_len = longest_label + 2
max_label_len
# +
from tensorflow.keras import applications, backend as K
from tensorflow.keras import models, losses, optimizers, Model, utils
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, MaxPooling2D, Dropout
from tensorflow.keras.layers import Flatten, Dense, Lambda, Reshape, Bidirectional, LSTM, GRU
from tensorflow.keras.layers import Activation, add, concatenate
def ctc_lambda_func(args):
y_pred, labels, input_length, label_length = args
return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
def build_model(num_classes=94, timesteps=timesteps, max_label_len=max_label_len, input_shape=(4096, 64, 1), training=False):
inputs = Input(name='the_inputs', shape=input_shape, dtype='float32')
# Convolution layer (VGG)
inner = Conv2D(32, (3, 3), padding='same', name='conv1-1', kernel_initializer='he_normal')(inputs)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Conv2D(32, (3, 3), padding='same', name='conv1-2', kernel_initializer='he_normal')(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = MaxPooling2D(pool_size=(2, 2), name='max1')(inner)
inner = Conv2D(64, (3, 3), padding='same', name='conv2-1', kernel_initializer='he_normal')(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Conv2D(64, (3, 3), padding='same', name='conv2-2', kernel_initializer='he_normal')(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = MaxPooling2D(pool_size=(2, 2), name='max2')(inner)
inner = Conv2D(128, (3, 3), padding='same', name='conv3-1', kernel_initializer='he_normal')(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Conv2D(128, (3, 3), padding='same', name='conv3-2', kernel_initializer='he_normal')(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = MaxPooling2D(pool_size=(2, 2), name='max3')(inner)
inner = Conv2D(256, (3, 3), padding='same', name='conv4-1', kernel_initializer='he_normal')(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = Conv2D(256, (3, 3), padding='same', name='conv4-2')(inner)
inner = BatchNormalization()(inner)
inner = Activation('relu')(inner)
inner = MaxPooling2D(pool_size=(1, 2), name='max4')(inner)
# inner = Conv2D(512, (2, 2), padding='same', kernel_initializer='he_normal', name='con5-1')(inner)
# inner = BatchNormalization()(inner)
# inner = Activation('relu')(inner)
# inner = Conv2D(512, (2, 2), padding='same', kernel_initializer='he_normal', name='con5-2')(inner)
# inner = BatchNormalization()(inner)
# inner = Activation('relu')(inner)
# CNN to RNN
inner = Reshape(target_shape=(timesteps, 1024), name='reshape')(inner)
inner = Dense(64, activation='relu', kernel_initializer='he_normal', name='dense1')(inner)
inner = Dropout(0.2)(inner)
# RNN
lstm1 = Bidirectional(LSTM(256, return_sequences=True, kernel_initializer='he_normal',
name='lstm1'))(inner)
lstm2 = Bidirectional(LSTM(512, return_sequences=True, kernel_initializer='he_normal',
name='lstm2'))(lstm1)
rnn = Dropout(0.2)(lstm2)
# RNN output -> character activations:
outer = Dense(num_classes + 1, kernel_initializer='he_normal', name='dense2')(rnn)
y_pred = Activation('softmax', name='softmax')(outer)
labels = Input(name='the_labels', shape=[max_label_len], dtype='float32')
input_length = Input(name='input_length', shape=[1], dtype='int64')
label_length = Input(name='label_length', shape=[1], dtype='int64')
# Keras doesn't currently support loss funcs with extra parameters
# so CTC loss is implemented in a lambda layer
loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([y_pred, labels, input_length, label_length]) #(None, 1)
y_func = K.function([inputs], [y_pred])
if training:
return Model(inputs=[inputs, labels, input_length, label_length], outputs=loss_out), y_func
else:
return Model(inputs=[inputs], outputs=y_pred)
# -
model, y_func = build_model(timesteps=timesteps, max_label_len=max_label_len, training=True)
model.summary()
# +
from tensorflow.keras.utils import plot_model
os.environ["PATH"] += os.pathsep + 'C:/Program Files/Graphviz/bin/'
plot_model(model=model, show_shapes=True)
# -
# # Data generator #
# +
import itertools
def return_classes(string):
text = [' '] + list(string) + [' ']
classes = [data.index(x) if x in data else 1 for x in text]
return np.asarray(classes)
def return_text(classes):
text = ''
for c in classes:
if 0 <= c < len(data) and c != 1:
text += data[c]
return text
def decode_batch(out, callback=False):
ret = []
for i in range(out.shape[0]):
out_best = list(np.argmax(out[i, 2:], 1))
out_best2 = [k for k, g in itertools.groupby(out_best)]
outstr = return_text(out_best2)
if callback:
print(f'{out_best} -> {outstr}')
ret.append(outstr)
return ret
def gen_text_image(text, padding=16):
font = random.choice(fonts)
font_size = random.randrange(30, 61)
fnt = ImageFont.truetype(font, font_size)
width, _ = fnt.getsize(text)
img = Image.new('L', (width + (padding + 1) * 2, 64), color=255)
d = ImageDraw.Draw(img)
if 'calibri' in font:
d.text((padding + 2,2), text, font=fnt, fill=0)
elif 'verdana' in font:
d.text((padding + 2,-8), text, font=fnt, fill=0)
elif 'constan' in font:
d.text((padding + 2,0), text, font=fnt, fill=0)
elif 'corbel' in font:
d.text((padding + 2,2), text, font=fnt, fill=0)
elif 'consola' in font:
d.text((padding + 2,2), text, font=fnt, fill=0)
elif 'cour' in font:
d.text((padding + 2,-4), text, font=fnt, fill=0)
elif 'tahoma' in font:
d.text((padding + 2,-8), text, font=fnt, fill=0)
else:
d.text((padding + 2,-6), text, font=fnt, fill=0)
image = np.array(img)
image = add_salt_and_pepper(image, 0.2)
image = otsu_thresholding(image)
image = inverse(image)
image = (image / 255.) * 2. - 1.
return image
def inverse(image):
return cv2.bitwise_not(image)
def add_salt_and_pepper(image, amount):
output = np.copy(np.array(image))
# add salt
nb_salt = np.ceil(amount * output.size * 0.5)
coords = tuple(np.random.randint(0, i - 1, int(nb_salt)) for i in output.shape)
output[coords] = random.randint(50,200)
# add pepper
nb_pepper = np.ceil(amount * output.size * 0.5)
coords = tuple(np.random.randint(0, i - 1, int(nb_pepper)) for i in output.shape)
output[coords] = random.randint(0,100)
return np.asarray(Image.fromarray(output))
# -
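`decode_batch` above performs a greedy CTC-style collapse: it takes the per-timestep argmax sequence and merges consecutive repeats with `itertools.groupby` (out-of-range/blank classes are then dropped by `return_text`). A toy illustration of the collapse step alone:

```python
import itertools

# Per-timestep argmax indices from a softmax output, with repeats
# caused by the same character spanning several frames.
frame_argmax = [5, 5, 5, 2, 2, 7, 7, 7, 7, 2]
# Merge consecutive duplicates; the trailing 2 survives because it is
# separated from the earlier 2s by the run of 7s.
collapsed = [k for k, _ in itertools.groupby(frame_argmax)]
# -> [5, 2, 7, 2]
```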
class TextImageGenerator:
def __init__(self, labels, img_w=4096, img_h=64,
batch_size=16, timesteps=timesteps, training=True, max_text_len=max_label_len):
self.dim = (img_w, img_h, 1)
self.batch_size = batch_size
self.max_text_len = max_text_len
self.labels = labels
self.n = len(self.labels)
self.indexes = list(range(self.n))
self.training = training
self.cur_index = 0
def next_sample(self):
self.cur_index += 1
if self.cur_index >= self.n:
self.cur_index = 0
random.shuffle(self.indexes)
return self.labels[self.indexes[self.cur_index]]
def next_batch(self):
while True:
X = np.zeros((self.batch_size, *self.dim))
y = np.zeros((self.batch_size, self.max_text_len), dtype=int)
input_length = np.full((self.batch_size, 1), timesteps, dtype=np.float32)
label_length = np.zeros((self.batch_size, 1), dtype=np.float32)
for i in range(self.batch_size):
label = self.next_sample()
# Store sample
image = np.swapaxes(gen_text_image(label), 0, 1)
image = np.expand_dims(image, -1)
X[i, 0:image.shape[0], :] = image
# Store class
label_classes = return_classes(label)
y[i, :len(label_classes)] = label_classes
label_length[i] = len(label_classes)
inputs = {
'the_inputs': X, # (bs, 4096, 64, 1)
'the_labels': y, # (bs, longest_label + 2) - 2 spaces added: before and after
'input_length': input_length, # (bs, 1)
'label_length': label_length # (bs, 1)
}
outputs = {'ctc': np.zeros([self.batch_size])} # (bs,) dummy targets for the CTC lambda loss
yield (inputs, outputs)
# # Callbacks #
# +
import editdistance
from datetime import datetime
from tensorflow.keras.callbacks import EarlyStopping, LearningRateScheduler, ModelCheckpoint
from tensorflow.keras.callbacks import TensorBoard, ReduceLROnPlateau, Callback
class VizCallback(Callback):
def __init__(self, y_func, text_img_gen, text_size, num_display_words=10):
self.y_func = y_func
self.text_img_gen = text_img_gen
self.num_display_words = num_display_words
self.text_size = text_size
def show_edit_distance(self, num):
num_left = num
mean_norm_ed = 0.0
mean_ed = 0.0
while num_left > 0:
word_batch = next(self.text_img_gen.next_batch())[0]
num_proc = min(word_batch['the_inputs'].shape[0], num_left)
# predict
inputs = word_batch['the_inputs'][0:num_proc]
pred = self.y_func([inputs])[0]
decoded_res = decode_batch(pred)
# label
labels = word_batch['the_labels'][:num_proc].astype(np.int32)
labels = [return_text(label) for label in labels]
for j in range(num_proc):
edit_dist = editdistance.eval(decoded_res[j], labels[j])
mean_ed += float(edit_dist)
mean_norm_ed += float(edit_dist) / len(labels[j])
num_left -= num_proc
mean_norm_ed = mean_norm_ed / num
mean_ed = mean_ed / num
print('\nOut of %d samples: \nMean edit distance:'
'%.3f \nMean normalized edit distance: %0.3f \n'
% (num, mean_ed, mean_norm_ed))
def on_epoch_end(self, epoch, logs={}):
batch = next(self.text_img_gen.next_batch())[0]
inputs = batch['the_inputs'][:self.num_display_words]
labels = batch['the_labels'][:self.num_display_words].astype(np.int32)
labels = [return_text(label) for label in labels]
pred = self.y_func([inputs])[0]
pred_texts = decode_batch(pred)
for i in range(min(self.num_display_words, len(inputs))):
print("label: {} - predict: {}".format(labels[i], pred_texts[i]))
self.show_edit_distance(self.text_size)
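# The mean normalized edit distance reported by `show_edit_distance` can be illustrated with a small pure-Python Levenshtein sketch (shown here without the `editdistance` dependency; the strings are illustrative):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

pred, label = "helo", "hello"
dist = levenshtein(pred, label)
norm = dist / len(label)  # normalized by the label length, as in the callback
print(dist, norm)  # 1 0.2
```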
# +
batch_size = 16
train_generator = TextImageGenerator(train_labels, training=True, batch_size=batch_size)
val_generator = TextImageGenerator(val_labels, training=False, batch_size=batch_size)
test_generator = TextImageGenerator(test_labels, training=False, batch_size=batch_size)
log_dir = "logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S")
output_dir = './models/VGG'
weight_path = f'{output_dir}/ocr_model_{datetime.now().strftime("%Y%m%d-%H%M%S")}' + '_epoch_{epoch:02d}.h5'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
if not os.path.exists(log_dir):
os.makedirs(log_dir)
tensorboard = TensorBoard(log_dir=log_dir)
checkpoint = ModelCheckpoint(weight_path, monitor='val_loss', verbose=1, save_best_only=True, save_weights_only=True)
vis = VizCallback(y_func, test_generator, len(test_labels))
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='min')
initial_learning_rate = 0.001
epochs = 100
callbacks = [early_stop, tensorboard, vis, checkpoint]
# callbacks = [early_stop, tensorboard, vis, LearningRateScheduler(lr_time_based_decay, verbose=1)]
# -
# # Training #
def train(callbacks, batch_size, epochs, initial_epoch=0):
print('Training process starting...')
H = model.fit(train_generator.next_batch(),
steps_per_epoch=train_len//batch_size,
validation_data=val_generator.next_batch(),
validation_steps=val_len//batch_size,
epochs=epochs,
initial_epoch=initial_epoch,
callbacks=callbacks,
verbose=1)
return H
train_len = len(train_labels)
val_len = len(val_labels)
# +
from tensorflow.keras import optimizers
opt = optimizers.Adam(learning_rate=initial_learning_rate)
# +
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=opt)
train(callbacks, batch_size, epochs)
# -
# # Testing #
| Training/CRNN-2LSTM-DatasetScaling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gradient boosting "head-on"
# ### 1. Read the feature table from features.csv using the code provided above. Remove the features related to the match outcome (marked in the data description as absent from the test set).
import json
import bz2
import datetime
import pandas
import numpy as np
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
features = pandas.read_csv('./data/features.csv', index_col='match_id')
features.head()
features = features.drop(['duration', 'tower_status_radiant', 'tower_status_dire', 'barracks_status_radiant', 'barracks_status_dire'], axis=1)
y = features['radiant_win']
features = features.drop(['radiant_win'], axis=1)
features.tail()
# ### 2. Check the sample for missing values using the count() function, which shows the number of filled-in values for each column.
# Are there many gaps in the data? Write down the names of the features that have gaps and, for any two of them, try to explain why their values might be missing.
s = features.count()
s[s != 97230]
# * `first_blood_time` - first blood did not happen within the first 5 minutes.
# * `dire_courier_time` - the support did not buy a courier within the first five minutes (spent the starting gold on something else).
s = features.count(numeric_only=True)
s[s != 97230]
# ### 3. Replace the gaps with zeros using the fillna() function.
# This approach is actually preferable for logistic regression, since it lets a missing value contribute nothing to the prediction. For trees, replacing a gap with a very large or very small value often works better: when splitting a node, objects with gaps can then be sent into a separate branch of the tree. There are other approaches as well - for example, replacing a gap with the feature's mean value. This is not required in the assignment, but feel free to try different approaches to handling gaps and compare them.
features_negative = features.fillna(-100_000)
features_zero = features.fillna(0)
s = features_negative.count()
s[s != 97230]
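# The imputation strategies discussed above (zeros, a large negative value, the feature mean) can be compared on a toy column; the values here are illustrative:

```python
import pandas as pd

col = pd.Series([1.0, None, 3.0])    # hypothetical feature with one gap
zero = col.fillna(0)                 # contributes nothing in linear models
large_neg = col.fillna(-100_000)     # lets trees isolate gaps in their own branch
mean_val = col.fillna(col.mean())    # mean imputation
print(mean_val.iloc[1])  # 2.0
```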
# ### 4. Which column contains the target variable? Write down its name.
# **radiant_win**
# ### 5. Let's forget that the sample contains categorical features and try to train gradient boosting over trees on the existing object-feature matrix.
# Fix a split generator for cross-validation with 5 folds (KFold), and remember to shuffle the sample (shuffle=True): the rows in the table are sorted by time, and without shuffling you may run into undesirable effects when estimating quality. Evaluate the quality of gradient boosting (GradientBoostingClassifier) with this cross-validation, trying different numbers of trees (test at least the values 10, 20, and 30 for the number of trees). Did the classifiers take long to fit? Is the optimum reached at the tested values of n_estimators, or will quality most likely keep growing as it increases?
clf = GradientBoostingClassifier(random_state=241)
cv = KFold(n_splits=5, shuffle=True, random_state=241)
grid = {
'n_estimators': np.arange(10, 110, 10),
'learning_rate': [0.5, 0.3, 0.2, 0.1]
}
gs1 = GridSearchCV(clf, grid, scoring='roc_auc', cv=cv, n_jobs=-1)
start_time1 = datetime.datetime.now()
gs1.fit(features_negative, y)
end_time1 = datetime.datetime.now()
gs1.best_estimator_
end_time1 - start_time1
for k, v in gs1.cv_results_.items():
if 'mean' in k:
print(f"Key: {k}, value: {v}\n")
clf = GradientBoostingClassifier(random_state=241)
cv = KFold(n_splits=5, shuffle=True, random_state=241)
grid = {
'n_estimators': np.arange(10, 110, 10),
'learning_rate': [0.5, 0.3, 0.2, 0.1]
}
gs2 = GridSearchCV(clf, grid, scoring='roc_auc', cv=cv, n_jobs=-1)
start_time2 = datetime.datetime.now()
gs2.fit(features_zero, y)
end_time2 = datetime.datetime.now()
gs2.best_estimator_
end_time2 - start_time2
for k, v in gs2.cv_results_.items():
if 'mean' in k:
print(f"Key: {k}, value: {v}\n")
# ### How long did cross-validation take for gradient boosting with 30 trees? Instructions for measuring time can be found above in the text. What quality was achieved?
gs3 = GradientBoostingClassifier(learning_rate=0.3, n_estimators=30, random_state=241)
X_train, X_test, y_train, y_test = train_test_split(features_zero, y,
test_size=0.8, random_state=241)
# %%time
gs3.fit(X_train, y_train)
roc_auc_score(y_test, gs3.predict_proba(X_test)[:, 1])
| 7_week_1_independent_work_gradient_boosting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] customInput originalKey="4dbd5617-33f8-4544-82d3-0f96e82c50e6" showInput=false
# ## Tutorial: **Hidden Markov model**
# + [markdown] customInput originalKey="f699d5cc-9798-4828-8732-70d2d497edba" showInput=false
# This tutorial demonstrates modeling and running inference on a hidden Markov model (HMM)
# in Bean Machine. The flexibility of this model allows us to demonstrate some of the
# great unique features of Bean Machine, such as block inference, compositional inference,
# and separation of data from the model.
# + [markdown] customInput originalKey="9390c28c-9c48-48cc-a207-3f73dbabbcc6" showInput=false tags=[]
# ## Problem
# + [markdown] customInput originalKey="9390c28c-9c48-48cc-a207-3f73dbabbcc6" showInput=false
# HMMs are a class of probabilistic models which are popular for doing inference on
# discrete-time stochastic processes. In general, Markov models are used to study a
# sequence of random variables, $X_1,\ldots,X_N$, where the sequence is "memoryless" such
# that the distribution of $X_{n}$ depends only on the value of $X_{n-1}$; any sequence
# which is memoryless is said to satisfy the Markov property. One reason Markov models are
# popular is because this flexible framework can be used to model many time-evolving
# processes, such as words in a sentence, the position of a robot, or the weather.
#
# An HMM is a Markov model in which observations are modeled as being noisy. While we are
# interested in doing inference on each $X_n$, we are actually observing variables
# $Y_1,\ldots,Y_N$ which can depend on the values of $X_1,\ldots,X_N$ in a variety of
# ways. In specific settings, HMMs can be very tractable, and lend themselves towards
# provably-optimal algorithms such as Kalman Filtering. Here, we illustrate how to do
# general inference with MCMC as applicable to general HMMs. The single-site algorithms
# underlying Bean Machine enable inference algorithms which scale favorably with the size
# of the HMM.
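# The memoryless transition structure described above can be sketched with a tiny pure-Python sampler (the transition matrix below is illustrative):

```python
import random

def sample_chain(theta, n_steps, start=0, seed=0):
    """Sample a Markov chain: each state depends only on the previous state."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n_steps - 1):
        row = theta[states[-1]]  # transition probabilities out of the current state
        states.append(rng.choices(range(len(row)), weights=row)[0])
    return states

theta = [[0.9, 0.1], [0.2, 0.8]]  # illustrative 2-state kernel
chain = sample_chain(theta, 20)
print(chain)
```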
# + [markdown] customInput originalKey="a50a9e55-266a-4e70-be95-e7cc18e8381e" showInput=false
# ## Prerequisites
# + [markdown] customInput originalKey="<KEY>" showInput=false
# Let's code this in Bean Machine! Import the Bean Machine library, some fundamental PyTorch classes, and optionally typing for our code.
# +
# Install Bean Machine in Colab if using Colab.
import sys
if "google.colab" in sys.modules and "beanmachine" not in sys.modules:
# !pip install beanmachine
# +
import logging
import os
import warnings
import beanmachine.ppl as bm
import matplotlib.pyplot as plt
import torch
import torch.distributions as dist
# -
# The next cell includes convenient configuration settings to improve the notebook
# presentation as well as setting a manual seed for reproducibility.
# +
# Eliminate excess inference logging from Bean Machine, except for progress bars.
logging.getLogger("beanmachine").setLevel(50)
warnings.filterwarnings("ignore")
# Manual seed
bm.seed(111)
# Other settings for the notebook.
smoke_test = "SANDCASTLE_NEXUS" in os.environ or "CI" in os.environ
# + [markdown] customInput originalKey="d340d033-17f1-423c-9734-823e3ee18062" showInput=false
# ## Model
# + [markdown] customInput originalKey="d340d033-17f1-423c-9734-823e3ee18062" showInput=false
# We model the hidden state $X$ as being a discrete-time Markov chain with $K$ states and
# transition matrix (_a.k.a._ kernel) $\Theta$. We model $N$ time steps of this chain with
# variables $X_1,\ldots,X_N$, and use variables $Y_1,\ldots,Y_N$ to model observable
# emissions of each $X$ with Gaussian noise.
#
# Formally, the transition and emission probabilities are as follows, for
# $n\in1,\ldots,N$:
#
# - $X_{n+1}\mid X_n\sim\text{Categorical}(\Theta[X_n])$
# - $Y_n\mid X_n\sim\text{Normal}(\mu[X_n],\sigma[X_n])$
#
#
# Accordingly, priors can be assigned as follows, for $k\in1,\ldots,K$:
#
# - $\mu[k]\sim\text{Normal}(\mu_\text{loc},\mu_\text{scale})$
# - $\sigma[k]\sim\text{Gamma}(\sigma_\text{shape},\sigma_\text{rate})$
# - $\Theta[k]\sim\text{Dirichlet}([\frac{c}{K},\ldots,\frac{c}{K}])$
#
#
# Finally, assume that the value of $X_1$ is known:
#
# - $X_1=0$
#
#
# So the model is set by choosing the prior through the values of:
# $\mu_\text{loc},\mu_\text{scale},\sigma_\text{shape},\sigma_\text{rate},c$. ($c$ stands
# for concentration).
#
# We can implement this model in Bean Machine by defining random variable objects with the
# `@bm.random_variable` decorator. These functions behave differently than ordinary Python
# functions.
# + [markdown] customInput originalKey="d340d033-17f1-423c-9734-823e3ee18062" showInput=false
# <div style="background: #daeaf3; border-left: 3px solid #2980b9; margin: 16px 0; padding: 12px;">
# Semantics for <code>@bm.random_variable</code> functions:
# <ul>
# <li>
# They must return PyTorch <code>Distribution</code> objects.
# </li>
# <li>
# Though they return distributions, callers actually receive <i>samples</i> from the
# distribution. The machinery for obtaining samples from distributions is handled
# internally by Bean Machine.
# </li>
# <li>
# Inference runs the model through many iterations. During a particular inference
# iteration, a distinct random variable will correspond to exactly one sampled
# value: <b>calls to the same random variable function with the same arguments will
# receive the same sampled value within one inference iteration</b>. This makes it
# easy for multiple components of your model to refer to the same logical random
# variable.
# </li>
# <li>
# Consequently, to define distinct random variables that correspond to different
# sampled values during a particular inference iteration, an effective practice is
# to add a dummy "indexing" parameter to the function. Distinct random variables
# can be referred to with different values for this index.
# </li>
# <li>
# Please see the documentation for more information about this decorator.
# </li>
# </ul>
# </div>
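# The "same call, same sampled value" semantics described above can be mimicked outside Bean Machine with plain memoization (a pure-Python analogue, not the library's actual machinery):

```python
import functools
import random

@functools.lru_cache(maxsize=None)
def x(n: int) -> float:
    # Within one "iteration", repeated calls with the same index return the
    # same cached draw; distinct indices correspond to distinct variables.
    return random.random()

assert x(0) == x(0)  # same logical random variable, same sample
```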
# + [markdown] customInput originalKey="d340d033-17f1-423c-9734-823e3ee18062" showInput=false
# Note also that, compared to the statistical notation above, our implementation uses
# 0-indexing instead of 1-indexing.
# -
class HiddenMarkovModel:
def __init__(
self,
N: int,
K: int,
concentration: float,
mu_loc: float,
mu_scale: float,
sigma_shape: float,
sigma_rate: float,
) -> None:
self.N = N
self.K = K
self.concentration = concentration
self.mu_loc = mu_loc
self.mu_scale = mu_scale
self.sigma_shape = sigma_shape
self.sigma_rate = sigma_rate
@bm.random_variable
def Theta(self, k):
return dist.Dirichlet(torch.ones(self.K) * self.concentration / self.K)
@bm.random_variable
def Mu(self, k):
return dist.Normal(self.mu_loc, self.mu_scale)
@bm.random_variable
def Sigma(self, k):
return dist.Gamma(self.sigma_shape, self.sigma_rate)
@bm.random_variable
def X(self, n: int):
if n == 0:
return dist.Categorical(torch.tensor([1.0] + [0.0] * (self.K - 1)))
else:
return dist.Categorical(self.Theta(self.X(n - 1).item()))
@bm.random_variable
def Y(self, n: int):
return dist.Normal(self.Mu(self.X(n).item()), self.Sigma(self.X(n).item()))
# ## Data
# First, we will generate random observations and choose the priors.
def generate_chain_observations(theta, mus, sigmas, N):
theta_distbns = {j: dist.Categorical(vector) for j, vector in enumerate(theta)}
hidden = [0]
while len(hidden) < N:
hidden.append(theta_distbns[hidden[-1]].sample().item())
def observe(k):
return dist.Normal(mus[k], sigmas[k]).sample().item()
return hidden, list(map(observe, hidden))
# +
concentration = 1.1
mu_loc = 0.0
mu_scale = 5.0
sigma_shape = 0.5
sigma_rate = 1.0
N = 15
K = 2
thetas_obs = dist.Dirichlet(torch.ones(K) * concentration / K).sample((K,))
mus_obs = dist.Normal(mu_loc, mu_scale).sample((K,))
sigmas_obs = dist.Gamma(sigma_shape, sigma_rate).sample((K,))
xs_obs, ys_obs = generate_chain_observations(thetas_obs, mus_obs, sigmas_obs, N)
x_obs = torch.tensor(xs_obs)
y_obs = torch.tensor(ys_obs)
# -
# Initialize model
model = HiddenMarkovModel(
N,
K,
concentration,
mu_loc,
mu_scale,
sigma_shape,
sigma_rate,
)
# ## Inference
# Inference is the process of combining _model_ with _data_ to obtain _insights_, in the
# form of probability distributions over values of interest. Bean Machine offers a
# powerful and general inference framework to enable fitting arbitrary models to data.
#
# Our model makes use of both continuous and discrete random variables. We'll want to make
# use of different inference strategies for each. In particular, we would like to take
# advantage of gradient information for the continuous random variables. To facilitate
# this, Bean Machine provides the `CompositionalInference` class.
#
# `CompositionalInference` is a powerful, flexible class for configuring inference in a
# variety of ways. By default, `CompositionalInference` will select an inference method
# for each random variable that is appropriate based on its support. The HMM presented in
# this tutorial has a number of different random variables that we're interested in
# learning about. Those random variables, along with their supports and the inference
# method that `CompositionalInference` will automatically select for them, are summarized
# in the table below:
#
# | Random variable | Support | Inference method |
# | --------------- | ------------------ | ------------------------------------------- |
# | $X$ | $0,\ldots,K-1$ | Uniform Metropolis Hastings |
# | $\mu$ | $(-\infty,\infty)$ | Newtonian Monte Carlo (real space proposer) |
# | $\sigma$ | $[0,\infty)$ | Newtonian Monte Carlo (half space proposer) |
#
# You can learn more about compositional inference in our framework topics.
#
# Normally, this is all you would have to do! However, the HMM model has meaningful
# structure that we would like to consider when configuring our inference strategy. In
# particular, the hidden state of each time step, $X_n$, is highly correlated with the
# state $X_{n-1}$. Thus, we would also like to jointly propose new values for all $X$ —
# it is very likely that the value of a hidden state $X_n$ becomes invalid and needs to be
# recomputed after $X_{n-1}$ is updated. Similarly, we would like to update the location
# $\mu$ and the scale $\sigma$ of the hidden states jointly as well. In order to update
# these variables jointly, we can configure `CompositionalInference` to "block" the random
# variables together.
#
# `CompositionalInference` accepts a dictionary that maps families of random variable to
# the corresponding algorithm, which allow you to override the default inference method
# for a particular subset of nodes, or group some of them into a block. To define a block,
# we simply need to pass `CompositionalInference` a *tuple* containing all random variable
# families that we want to propose jointly as a key. In our case, since we don't want to
# override the default inference method, we can use `...` instead of providing an
# inference class for the block.
compositional = bm.CompositionalInference(
{model.X: ..., (model.Sigma, model.Mu): ...}
)
# You may notice that we are using what we referred to as "random variable families",
# such as `model.X`, as keys: these are the functions that generate the random
# variables, rather than instantiated random variables like `model.X(0)` and
# `model.X(1)`. This is because the number of random variables is often not known
# until inference starts with some data (some models can even have an unbounded
# number of nodes). By using random variable families in the config, we no longer
# need to explicitly spell out all instances of the random variables and group them
# in a huge tuple.
#
# The next step is to define the queries and observations. For this particular run, we're
# interested in inferring $X$, $\mu$, and $\sigma$.
# +
queries = (
[model.X(n) for n in range(1, model.N)]
+ [model.Mu(k) for k in range(model.K)]
+ [model.Sigma(k) for k in range(model.K)]
)
observations = {
model.X(0): torch.tensor(0.0),
**{model.Y(n): y_obs[n] for n in range(model.N)},
**{model.Theta(k): thetas_obs[k] for k in range(model.K)},
}
# -
# Running inference consists of a few arguments:
#
# | Name | Usage |
# | -------------- | -------------------------------------------------------------------------------------------------------- |
# | `queries` | List of `@bm.random_variable` targets to fit posterior distributions for. |
# | `observations` | A dictionary of observations, as built above. |
# | `num_samples` | Number of Monte Carlo samples to approximate the posterior distributions for the variables in `queries`. |
# | `num_chains` | Number of separate inference runs to use. Multiple chains can help verify that inference ran correctly. |
# +
num_samples = 400 if not smoke_test else 1
num_chains = 1
samples = compositional.infer(
queries=queries,
observations=observations,
num_samples=num_samples,
num_chains=num_chains,
)
samples = samples.get_chain(0)
# -
# X(0) is observed and is not part of query
x_samples = torch.stack(
[torch.zeros(num_samples)] + [samples[model.X(n)] for n in range(1, model.N)], dim=1
)
mu_samples = torch.stack([samples[model.Mu(k)] for k in range(model.K)]).T
sigma_samples = torch.stack([samples[model.Sigma(k)] for k in range(model.K)]).T
# + [markdown] customInput originalKey="<KEY>" showInput=false
# ## Visualization
# + [markdown] customInput originalKey="<KEY>" showInput=false
# We will look at the values of the samples collected for $X$ and $\mu$. We will take the
# mean of samples taken over the last 10% of the chain, and compare these to our synthetic
# data.
# +
tail_len = num_samples // 10
xs_tail_plot = x_samples[-tail_len:].mean(0)
plt.scatter(
range(len(xs_tail_plot)),
xs_tail_plot,
alpha=0.5,
label="Inferred",
c="b",
)
plt.scatter(range(len(xs_obs)), xs_obs, alpha=0.5, label="Ground truth", c="r")
plt.yticks(range(K))
plt.title("Values of X_1 ... X_N")
plt.xlabel("n")
plt.ylabel("Value of X_n")
plt.legend()
plt.show()
mus_tail_plot = mu_samples[-tail_len:].mean(0)
plt.scatter(range(K), mus_tail_plot, alpha=0.5, label="Inferred", c="b")
plt.scatter(range(K), mus_obs, alpha=0.5, label="Ground truth", c="r")
plt.xticks(range(K))
plt.title("Values of mu_1 ... mu_K")
plt.xlabel("k")
plt.ylabel("Value of mu_k")
plt.legend()
plt.show()
# + [markdown] customInput originalKey="e9beca04-43f5-4271-bf4d-90859b2a10d3" showInput=false
# These plots indicate that inference seems to be recovering hidden states well, and is
# computing reasonably accurate values for $\mu$.
# + [markdown] customInput originalKey="4ea45674-027b-4307-b1af-5d1a10cad813" showInput=false
# ## Posterior likelihood checks
# + [markdown] customInput originalKey="4ea45674-027b-4307-b1af-5d1a10cad813" showInput=false
# One way to evaluate posterior samples is computing likelihoods of our posterior samples,
# and comparing these to the likelihood of the underlying synthetic data. Formally, we can
# compute the joint likelihood of posterior samples with the observations used to generate
# the samples. And similarly, we can compute the joint likelihood of the observations with
# the underlying synthetic data which was generated at the same time as the observations.
# +
def log_likelihood(xs, ys, thetas, mus, sigmas, N):
"""Returns the log likelihood of the HMM model conditioned on the data"""
result = 0
# transition probabilities
for n in range(1, N):
result += torch.log(thetas[xs[n - 1], xs[n]])
# emission probabilities
for n in range(N):
result += dist.Normal(mus[xs[n]], sigmas[xs[n]]).log_prob(ys[n])
return result
# computes the log likelihood of the HMM model per iteration
ppcs = [
log_likelihood(x, y_obs, thetas_obs, mu, sigma, N)
for x, mu, sigma in zip(x_samples.int(), mu_samples, sigma_samples)
]
# -
plt.figure(figsize=(12, 6))
plt.plot(ppcs, label="Sample", c="g")
# plotting the ground truth for reference
plt.plot(
[log_likelihood(x_obs, y_obs, thetas_obs, mus_obs, sigmas_obs, N)] * num_samples,
label="Ground truth",
c="r",
)
plt.ylabel("Log likelihood")
plt.legend()
plt.show()
# + [markdown] customInput originalKey="<KEY>" showInput=false
# From the above plot, inference appears to be doing a good job of fitting the random
# variables given the observed data: it appears to converge, with log likelihood
# scores close to those produced by the ground-truth parameters.
| tutorials/Hidden_Markov_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Skydipper datasets for Deep Learning
#
# In this notebook we will create the Skydipper API datasets that we will use for Deep Learning.
import json
import requests
from pprint import pprint
import getpass
import time
# **Get the token**
email = '<EMAIL>'
password = getpass.getpass('Skydipper login password:')
payload = {
"email": f"{email}",
"password": f"{password}"
}
# +
url = 'https://api.skydipper.com/auth/login'
headers = {'Content-Type': 'application/json'}
r = requests.post(url, data=json.dumps(payload), headers=headers)
OAUTH = r.json().get('data').get('token')
# -
# **Check all Skydipper datasets**
# + jupyter={"outputs_hidden": true}
url = 'https://api.skydipper.com/v1/dataset?app=skydipper&page[size]=10000'
headers = {'Authorization': 'Bearer ' + OAUTH, 'Content-Type': 'application/json', 'Cache-Control': 'no-cache'}
r = requests.get(url, headers=headers)
pprint(r.json())
# -
# ## Create datasets for Deep Learning
# ### Register GEE datasets
# - **[Sentinel 2 Top-of-Atmosphere Reflectance](https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2)**
# - **[Landsat 7 Surface Reflectance](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LE07_C01_T1_SR)**
# - **[Landsat 8 Surface Reflectance](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR)**
# - **[USDA NASS Cropland Data Layers](https://developers.google.com/earth-engine/datasets/catalog/USDA_NASS_CDL)**
# - **[USGS National Land Cover Database](https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD)**
# - **[Lake Water Quality 100m](https://land.copernicus.eu/global/products/lwq)**
datasets = {"Sentinel 2 Top-of-Atmosphere Reflectance": "COPERNICUS/S2",
"Landsat 7 Surface Reflectance": "LANDSAT/LE07/C01/T1_SR",
"Landsat 8 Surface Reflectance": "LANDSAT/LC08/C01/T1_SR",
"USDA NASS Cropland Data Layers": "USDA/NASS/CDL",
"USGS National Land Cover Database": "USGS/NLCD",
"Lake Water Quality 100m": "projects/vizzuality/skydipper-water-quality/LWQ-100m"
}
# Overwrite the mapping so that only the new Lake Water Quality dataset is posted below
datasets = {
"Lake Water Quality 100m": "projects/vizzuality/skydipper-water-quality/LWQ-100m"
}
def post_ee_datasets(datasets, OAUTH):
for dataset_name in datasets.keys():
payload = {"dataset": {
"name": dataset_name,
"application": ["skydipper"],
"type": "raster",
"connectorType": "rest",
"provider": "gee",
"tableName": datasets[dataset_name],
"status": "saved",
"published": True,
"overwrite": False,
"verified": False,
"env": "production",
}
}
#Post new dataset
url = 'https://api.skydipper.com/v1/dataset'
headers = {'Authorization': 'Bearer ' + OAUTH, 'Content-Type': 'application/json', 'Cache-Control': 'no-cache'}
r = requests.post(url, data=json.dumps(payload), headers=headers)
pprint(r.json())
time.sleep(5)
post_ee_datasets(datasets, OAUTH)
import Skydipper
slugs_list = ["Sentinel-2-Top-of-Atmosphere-Reflectance",
"Landsat-7-Surface-Reflectance",
"Landsat-8-Surface-Reflectance",
"USDA-NASS-Cropland-Data-Layers",
"USGS-National-Land-Cover-Database",
"Lake-Water-Quality-100m"]
c = Skydipper.Collection(search=' '.join(slugs_list), object_type=['dataset'], app=['skydipper'], limit=10)
c
| notebooks/Skydipper_datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cfcastillo/DS-6-Notebooks/blob/main/Project_6_Notebook_cfc.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="EIgtwWNlarYC"
# # Problem Definition
#
# The purpose of this project is to get familiar with Deep Learning by using pre-pickled data in conjunction with **blah blah blah** to analyze an animal image and classify the animal species.
#
# Pickling is the process of serializing an object into a binary/byte form; unpickling de-serializes it back. A pickled object contains the information needed to reconstruct the original object later. Pickling can be used when transporting large amounts of data.
#
# This is a **binary classification** problem because our target will be a 1 or 0 to indicate whether the image is a cat (1) or a dog (0).
#
#
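# The pickle round trip described above can be sketched with the standard library (the object here is illustrative):

```python
import pickle

sample = {"label": 1, "pixels": [[0, 255], [128, 64]]}  # illustrative object
blob = pickle.dumps(sample)      # serialize ("pickle") into bytes
restored = pickle.loads(blob)    # de-serialize ("unpickle") back into an object
print(restored == sample)  # True
```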
# + [markdown] id="j7j9Y4TDa5X2"
# # Data Collection
# + [markdown] id="1faCHIjobl8k"
# ## Imports
# + id="QfQPgzzSer2x"
import tensorflow.keras as keras
import tensorflow as tf
import pandas as pd
import numpy as np
import pickle
from PIL import Image, ImageOps #Python imaging library for importing an image
import matplotlib.pyplot as plt
import cv2
from sklearn import datasets
from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# + [markdown] id="kAuOuhJ1bpjX"
# ## Load Data
# + id="wNLKZBmwecoL" colab={"base_uri": "https://localhost:8080/"} outputId="75931ce4-6ab9-4cb7-bc04-f3788444cd1d"
# Mount Drive
from google.colab import drive
drive.mount('/drive')
# + id="0JyCSQUQed0o"
# Load Data
data_path = '/drive/My Drive/Cohort_6/Projects/Project 6/Data/'
X_data_file = 'X.pickle'
y_data_file = 'y.pickle'
image_name = 'dog.jpg'
# Read pickle data
df_X_read = pickle.load(open(data_path + X_data_file, 'rb'))
df_y_read = pickle.load(open(data_path + y_data_file, 'rb'))
# + colab={"base_uri": "https://localhost:8080/"} id="AOYZF97K3ZWD" outputId="93ade2a4-3971-4602-a343-2823bf722e90"
# See what data types we have after reading the pickle files
# This was necessary after I failed to convert the data to a dataframe. I discovered
# that an n-dimensional array with n > 2 cannot be converted to a dataframe.
print(f'X type={type(df_X_read)}')
print(f'y type={type(df_y_read)}')
# + colab={"base_uri": "https://localhost:8080/"} id="Gcn43ED-qlP0" outputId="c21de08b-3f6c-442e-d462-875814c4e9d7"
# Check dimensions on X because failed to convert to dataframe and wanted to know why.
df_X_read.shape
# + [markdown] id="7JkGem0Ha7fU"
# # Data Cleaning
# + colab={"base_uri": "https://localhost:8080/"} id="b2EAinqbnWdY" outputId="225ed67c-744f-47ee-d5a9-c0c96772133b"
# Look at some values. Confirmed values fall between 0 and 255,
# because a byte can hold 2^8 = 256 distinct values (0 through 255)
print(f'min X = {df_X_read.min()}')
print(f'max X = {df_X_read.max()}')
# + id="LKiNezNxo3CQ"
# Scale the values to contain values from 0 to 1 by dividing by 255
df_X_scaled = df_X_read / 255
# + colab={"base_uri": "https://localhost:8080/"} id="r0Rj5cNSpIfa" outputId="342e1da4-2d3a-4e2e-e177-78acc976f7c1"
# Look at some values. Confirm values now fall between 0 and 1
print(f'scaled min X = {df_X_scaled.min()}')
print(f'scaled max X = {df_X_scaled.max()}')
# + colab={"base_uri": "https://localhost:8080/"} id="eoqgJmtjrNEK" outputId="61574836-6b46-4a74-db4b-8aabeeff688a"
# Convert y from list to numpy array so can be fit to model
df_y = np.array(df_y_read)
print(f'y type={type(df_y)}')
# + [markdown] id="o8EipOm6a-Hj"
# # Exploratory Data Analysis (EDA)
#
# We have a 4-dimensional array. The dimensions can be interpreted as follows:
#
# - 2 dimensions give you a black-and-white image
# - 3 dimensions give you a color image
# - the 4th dimension is over time - the volume.
#
# Our data has one volume, so we need to slice out the 4th dimension to get the image data into the 3-dimensional format needed by imshow.
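# The slicing step described above can be sketched on a hypothetical miniature batch:

```python
import numpy as np

batch = np.zeros((2, 4, 4, 1))   # 2 images, 4x4 pixels, 1 channel (illustrative)
images = batch[:, :, :, 0]       # drop the trailing axis so each image is 2-D
print(images.shape)  # (2, 4, 4)
```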
# + colab={"base_uri": "https://localhost:8080/"} id="PB41UZd5k3pc" outputId="388db8ac-3fdc-4777-fac5-53be14383043"
# Check dimensions on arrays. Confirm X has 4 dimensions.
# We have 24946 images that are 100x100 pixels.
df_X_scaled.shape
# + colab={"base_uri": "https://localhost:8080/"} id="arapb7ly-rHX" outputId="9ac1321c-eb7b-4049-af24-abb44782d400"
df_y.shape
# + [markdown] id="t22VpJSwvKQh"
# [Link to using imshow plotting tool](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html)
#
# [And another...](https://www.pythonpool.com/matplotlib-imshow/)
#
# [Plotting 4 dimensional images](https://bic-berkeley.github.io/psych-214-fall-2016/intro_to_4d.html)
# + id="LsVRYS2mq_fZ"
# Visualize one of the scaled images
# 2-dimensional arrays are black and white
# 3-dimensional arrays are color
# slice to get only the first 3 dimensions.
df_X_images = df_X_scaled[:,:,:,0]
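# The slicing step can be sketched on a dummy array (shapes assumed to match the dataset):

```python
import numpy as np

# hypothetical stack of 5 grayscale 100x100 images with a trailing channel axis
stack = np.zeros((5, 100, 100, 1))

# drop the channel axis so each image is the 2-D array imshow expects
images = stack[:, :, :, 0]

print(images.shape)
```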
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="00w7mggP0_Mp" outputId="310b697f-2273-457e-8e45-f6b595828611"
# show the image and look at the response value
img_index = 24000
plt.imshow(df_X_images[img_index], cmap=plt.cm.pink)
# plt.imshow(cv2.cvtColor(img[img_index], cv2.COLOR_BGR2RGB))
plt.show()
# look at response value to see if it agrees with what we see
# 1 - cat
# 0 - dog
print(f'cat(1) or dog(0) = {df_y[img_index]}')
# + [markdown] id="XWIu3QoB2OWJ"
# ## Colors?
#
# The natural colors from the image without any manipulation are turquoise/blue. I experimented with other color palettes. Anything in the monochrome range returned inverse colors. "pink" looked somewhat natural.
#
# I would need to explore how to use color mapping to display the true image colors.
# + [markdown] id="n2pVtkyHbAuB"
# # Processing
# + [markdown] id="gbFx1RPW9PUH"
# ## Create test and training sets
# + id="RaVpS6kS8T0-"
# Create training and testing data sets
X_train, X_test, y_train, y_test = train_test_split(df_X_scaled, df_y, test_size=0.30, random_state=42)
# + colab={"base_uri": "https://localhost:8080/"} id="Hz43IVHJwVMW" outputId="39ff1a9d-a0ad-4abc-fd28-b3a912564c51"
X_train.shape
# + colab={"base_uri": "https://localhost:8080/"} id="a1Ov-WCywbQg" outputId="652ae665-0410-4c64-88ac-2ab323f82730"
X_test.shape
# + colab={"base_uri": "https://localhost:8080/"} id="SC1ALDVEwfOr" outputId="4ec7a471-0c28-4698-cf5d-5fc1bfaf789a"
y_train.shape
# + colab={"base_uri": "https://localhost:8080/"} id="8oYk6x9kwg09" outputId="649b895d-3622-4699-d8d7-814cd022942c"
y_test.shape
# + colab={"base_uri": "https://localhost:8080/"} id="11mEbEpWw2_N" outputId="bcf9848c-65a9-4fc2-b5e5-f46ab517e5b6"
X_train.shape[1:]
# + [markdown] id="8WPjbxsK9Buq"
# ## Build neural network
#
# Build a neural network with the following:
# 1. Sequential layers
# 1. At least two 2D convolutional layers using the ‘relu’ activation function and a (3,3) kernel size.
# 1. A MaxPooling2D layer after each 2D convolutional layer that has a pool size of (2,2).
# 1. A dense output layer using the ‘sigmoid’ activation function.
#
#
# [Tensorflow documentation on how to do convolutional](https://www.tensorflow.org/tutorials/images/cnn)
#
# [Another site describing how to add convolutional layers and how to choose parameters](https://www.pyimagesearch.com/2018/12/31/keras-conv2d-and-convolutional-layers/)
#
# + id="vI5K5O688y3g"
# Initialize model type
model = Sequential()
# first layer - 100,100 is shape size. 3 is for rgb color channel.
# input_shape(height, width, color channels) --> sliced X_train to get the last 3 dimensions for the input_shape.
# define layer 1
model.add(Conv2D(filters=32, kernel_size=(3,3), activation='relu', input_shape=X_train.shape[1:]))
model.add(MaxPooling2D(pool_size=(2,2)))
# define layer 2
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
# define layer 3
model.add(Conv2D(filters=128, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
# flatten for output layer - this has something to do with labels.
model.add(Flatten())
# define output layer
# Dense first parameter is 1 neuron because it is a binary classification problem.
model.add(Dense(1, activation='sigmoid'))
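# As a sanity check, the spatial size entering Flatten can be computed by hand,
# assuming the Keras defaults for these layers ('valid' padding, stride 1 for Conv2D):

```python
def conv_out(size, kernel=3):
    # a 'valid' 3x3 convolution shrinks each spatial side by kernel - 1
    return size - (kernel - 1)

def pool_out(size, pool=2):
    # non-overlapping 2x2 max pooling halves the size (floor division)
    return size // pool

size = 100
for _ in range(3):  # three Conv2D + MaxPooling2D pairs
    size = pool_out(conv_out(size))

print(size)  # spatial size per side entering Flatten
```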
# + [markdown] id="wonS0HVs9Ycy"
# ## Compile model
#
# Use the ‘adam’ optimizer. Determine which loss function and metric are most appropriate for this problem.
#
# ### Loss function selection
#
# [How to choose a loss function](https://neptune.ai/blog/keras-loss-functions)
#
# [Keras loss functions](https://keras.io/api/losses/)
#
# ### Metrics parameter selection
#
# [How to choose a metrics function](https://neptune.ai/blog/keras-metrics)
#
# [Keras metrics functions](https://keras.io/api/metrics/)
#
# + id="SNJKdsMjDbYN"
# model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['binary_accuracy'])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# + [markdown] id="WvhynEbPqT9b"
# ## Fit Model
# + colab={"base_uri": "https://localhost:8080/"} id="xL9CGWCVqQsc" outputId="d2ba43d6-c340-421a-cced-25f3e6610400"
# Can add epochs if needed to improve model.
# Per Joe, batch_size should be evenly divisible by epochs. i.e. 32/8; 48/12, etc.
epochs = 12
model.fit(X_train, y_train, batch_size=48, epochs=epochs)
# + [markdown] id="uvPifObjqXUb"
# ## Evaluate Model
# + colab={"base_uri": "https://localhost:8080/"} id="A8xncUsZqrUW" outputId="c8fbbad0-6ccf-428a-ebcd-158cf300da8a"
val_loss, val_acc = model.evaluate(X_test, y_test)
val_acc
# + [markdown] id="9mvYjzCKFOM7"
# ## Testing Model with Imported Image
#
# 1. Define a function that will read in a new image and convert it to a 4 dimensional array of pixels.
# 1. Use the function defined above to read in the dog.jpg image that is saved in the Project 6/Data folder.
# 1. Use the neural network you created to predict whether the image is a dog or a cat.
#
# **BONUS** Repeat above steps using photo of family pet.
#
# [How to work with images in Python](https://www.geeksforgeeks.org/working-images-python/)
#
# [OpenCV is another option for image processing](https://docs.opencv.org/4.x/)
# + id="WfzpgKdXFQgP"
def get_image(file_name):
# retrieve image file
img = Image.open(file_name)
# retrieve and print file details
width, height = img.size
mode = img.mode
format = img.format
# print(f'width={width}; height={height}; mode={mode}; format={format}')
# model based on 100x100 size images so convert to this size.
img = img.resize((100,100))
# convert to grayscale
img_gs = ImageOps.grayscale(img)
return img_gs
def image_to_array(file_name):
img = get_image(file_name)
# convert image to numpy array
data = np.asarray(img)
# scale the image
data_scaled = data/255
# make 4 dimensional array
data_4dim = data_scaled.reshape(-1, data_scaled.shape[0],data_scaled.shape[1], 1)
return data_4dim
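# The reshape at the end can be checked in isolation (dummy pixel data, same 100x100 size assumption):

```python
import numpy as np

# hypothetical 100x100 grayscale image of already-scaled pixels
data_scaled = np.random.rand(100, 100)

# add batch and channel axes: model.predict expects (batch, height, width, channels)
data_4dim = data_scaled.reshape(-1, data_scaled.shape[0], data_scaled.shape[1], 1)

print(data_4dim.shape)
```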
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="IYo3yvHnJQ6x" outputId="680e386d-39e0-4d58-f7ae-f34cd330f047"
# dog image provided by instructor
# Run the image through the model and see the result. Should be 1 for cat or 0 for dog.
data = image_to_array(data_path + image_name)
prediction = model.predict(data)
print(prediction)
# sigmoid gives one probability; threshold at 0.5 (np.argmax on a single value is always 0)
print(f'prediction is {int(prediction[0][0] > 0.5)}')
# view image
plt.imshow(get_image(data_path + image_name), cmap=plt.cm.pink)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="himdYD_9VT54" outputId="a786602f-bc73-4ae6-f7f4-bf55ab395114"
# # cat image provided by me
# Run the image through the model and see the result. Should be 1 for cat or 0 for dog.
my_file = '/drive/MyDrive/Student Folder - Cecilia/Projects/Project 6 - Deep Learning/icat1.jpg'
prediction = model.predict(image_to_array(my_file))
print(prediction)
# sigmoid gives one probability; threshold at 0.5 (np.argmax on a single value is always 0)
print(f'prediction is {int(prediction[0][0] > 0.5)}')
# view image
plt.imshow(get_image(my_file), cmap=plt.cm.pink)
plt.show()
# + [markdown] id="rdxbp9k_bChz"
# # Communicate Results
| Project_6_Notebook_cfc.ipynb |
# # NSW naturalizations data from NSW State Archives
# For information about relevant records see the [Naturalization / Citizenship Guide](https://www.records.nsw.gov.au/archives/collections-and-research/guides-and-indexes/naturalization-citizenship-guide).
#
# Relevant series include:
#
# * [NRS 1038](https://www.records.nsw.gov.au/series/1038) – Letters of Denization
#
# * [NRS 1039](https://www.records.nsw.gov.au/series/1039) – Certificates of Naturalization
# * [NRS 1040](https://www.records.nsw.gov.au/series/1040) – Registers of Certificates of Naturalization
# * [NRS 1041](https://www.records.nsw.gov.au/series/1041) – Lists of Aliens to whom Certificates of Naturalization have been issued
# * [NRS 1042](https://www.records.nsw.gov.au/series/1042) – Index to Registers of Certificates of Naturalization and Lists of Aliens to whom Certificates of Naturalization have been issued
#
# NRS 1042 is an index to NRS 1040 and NRS 1041. Data transcribed from NRS 1042 is available online in the [Naturalisation Index, 1834-1903](https://www.records.nsw.gov.au/archives/collections-and-research/guides-and-indexes/naturalization-and-denization/indexes).
#
# Along with other online indexes, this data was scraped from the State Archives website and [shared on GitHub](https://github.com/wragge/srnsw-indexes) as a [CSV file](https://github.com/wragge/srnsw-indexes/blob/master/data/naturalisation.csv). See the [NSW State Archives section](https://glam-workbench.net/nsw-state-archives/) of the GLAM Workbench for more information.
import pandas as pd
import re
import altair as alt
import numpy as np
# alt.renderers.enable('default')
#alt.data_transformers.enable('default')
# ## Load the data
#
# Load the CSV data scraped from the online index.
df = pd.read_csv('https://raw.githubusercontent.com/wragge/srnsw-indexes/master/data/naturalisation.csv')
# Try to convert the date string into a datetime object
df['date'] = pd.to_datetime(df['DateOfCertificate'], errors='coerce')
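# The `errors='coerce'` behaviour can be seen on a toy series (made-up dates):
# unparseable strings become `NaT` instead of raising an exception.

```python
import pandas as pd

# hypothetical mix of clean and problematic date strings
s = pd.Series(['1887-03-01', 'not a date', '1899-12-31'])
dates = pd.to_datetime(s, errors='coerce')

print(dates.isnull().sum())  # count of records that failed to parse
```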
df.head()
# These are records with problematic dates that wouldn't automatically convert into datetimes.
df.loc[df['date'].isnull()]
# How many records are there?
len(df)
# What is the earliest date?
df['date'].min()
# What is the latest date?
df['date'].max()
# ## Removing duplicates
#
# Some preliminary investigation showed that a number of records were duplicates — these seem to be the result of uncertainty around the name order in Chinese names. So, for example, 'Ah You' is also entered in its reversed form as 'You Ah'. Similarly, '<NAME>' is entered both as '<NAME>' with the surname 'Moy', and 'Jimmy' with the surname 'Ah Moy'.
#
# This, of course, makes it impossible to do much with the data, so I've made an attempt to remove duplicates. Note that this process might remove variations in the `NativePlace` and `Remarks` fields.
# +
deduped_df = df.copy()
# Create a new column with both names combined in a single string
# This will enable us to identify duplicated records where the division between first name and surname varies
deduped_df['name'] = deduped_df['FirstName'].str.cat(deduped_df['Surname'], sep=' ').str.lower()
# Create a new column with both names combined in a single string in reverse order
deduped_df['name_reversed'] = deduped_df['Surname'].str.cat(deduped_df['FirstName'], sep=' ').str.lower()
def make_name_list(row):
names = sorted([row['name'], row['name_reversed']])
return ' '.join(names)
# Create a new column with both name forms in the same order
# This will enable us to identify duplicated records with reversed names
deduped_df['names'] = deduped_df.apply(make_name_list, axis=1)
# -
# First round of deduping -- remove those where the separation between first name and surname varies
deduped_df.drop_duplicates(['name', 'DateOfCertificate', 'RegisterNo', 'Page', 'Item', 'Reel'], inplace=True)
# Second round -- remove those where name parts are reversed
deduped_df.drop_duplicates(['names', 'DateOfCertificate', 'RegisterNo', 'Page', 'Item', 'Reel'], inplace=True)
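# The reversed-name de-duplication can be checked on a toy frame (hypothetical names):

```python
import pandas as pd

df = pd.DataFrame({'FirstName': ['Ah', 'You'], 'Surname': ['You', 'Ah']})
df['name'] = df['FirstName'].str.cat(df['Surname'], sep=' ').str.lower()
df['name_reversed'] = df['Surname'].str.cat(df['FirstName'], sep=' ').str.lower()

# sorting the two forms gives a canonical key, so 'Ah You' and 'You Ah' collide
df['names'] = df.apply(lambda r: ' '.join(sorted([r['name'], r['name_reversed']])), axis=1)

deduped = df.drop_duplicates('names')
print(len(deduped))
```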
len(deduped_df)
deduped_df['year'] = deduped_df['date'].dt.year
# ## Countries of origin
#
# Group the records by the `NativePlace` field.
# + tags=[]
country_counts = deduped_df['NativePlace'].value_counts().to_frame().reset_index()
country_counts.columns = ['NativePlace', 'Count']
country_counts[:25]
# -
# Save the complete list of places to a [CSV file](nsw_country_counts.csv) for further investigation.
country_counts.to_csv('nsw_country_counts.csv', index=False)
# ## Aggregate Chinese places
#
# Examination of the country counts shows that places in China are recorded in a number of different ways. Here we'll try to aggregate them.
#
# First let's look at the values in the `NativePlace` field that include the word 'China'.
sorted(list(pd.unique(df.loc[df['NativePlace'].str.contains('China')]['NativePlace'])))
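# Note that `str.contains` is a case-sensitive substring match, so it misses
# misspellings such as 'Vhina' — a toy illustration:

```python
import pandas as pd

s = pd.Series(['Canton, China', 'London', 'Vhina', 'Amoy, China'])

# only exact substring matches of 'China' are kept
print(sorted(s[s.str.contains('China')].unique()))
```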
# As well as these, examination of the country counts showed a number of other variations – these are listed below.
places_in_china = [
'Amoy',
'Amoy, China',
'Canton',
'Canton, China',
'China',
'Foochow, China',
'Hong Kong',
'Kouton, China',
'Macao',
'Macao China',
'Macoa, China',
'Near Hong Kong',
'Shanghai',
'Shanghai, China',
'Singapore',
'Sun On, China',
'Vhina',
'W Canton, China',
'West Canton',
'Whompoa',
'Whampoa, China',
'Whompoa, China'
]
# Now we can create a new dataset of records of people who came from China and region.
chinese_nats = deduped_df.loc[deduped_df['NativePlace'].isin(places_in_china)].sort_values(by='date')
chinese_nats
chinese_nats.to_csv('nsw_from_china.csv', index=False)
# ## Naturalisations over time
#
# Chart the number of naturalizations over time, highlighting records where the `NativePlace` is from China and region.
#
# First we'll create a new column that indicates whether the `NativePlace` is in China or not.
deduped_df['country'] = np.where(deduped_df['NativePlace'].isin(places_in_china), 'China', 'Other')
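# `np.where` labels each row by the vectorised membership test — a minimal sketch with made-up places:

```python
import numpy as np
import pandas as pd

places = pd.Series(['Canton, China', 'London', 'Amoy'])
places_in_china = ['Canton, China', 'Amoy']

# elementwise: 'China' where the place is in the list, 'Other' elsewhere
country = np.where(places.isin(places_in_china), 'China', 'Other')
print(list(country))
```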
# Now we'll calculate the total number of records by year and `country` (ie China or other).
china_counts = deduped_df.value_counts(['year', 'country']).to_frame().reset_index()
china_counts.columns = ['year', 'country', 'count']
china_counts
# Visualise the results, showing both the combined dataset, and just the 'China' records.
# +
c1 = alt.Chart(china_counts).mark_bar(size=8).encode(
x=alt.X('year:Q', axis=alt.Axis(format='c')),
y=alt.Y('count:Q', stack=True),
color='country:N',
tooltip=['country', 'year', 'count'],
).properties(
width=700
)
c2 = alt.Chart(china_counts.loc[china_counts['country'] == 'China']).mark_bar(size=8).encode(
x=alt.X('year:Q', axis=alt.Axis(format='c'), scale=alt.Scale(domain=(1835,1905))),
y='count:Q'
).properties(
width=700
)
c1 & c2
# -
# Save the deduped dataset to a CSV file.
deduped_df.to_csv('nsw_deduped_with_country.csv', index=False)
| nsw_naturalisations.ipynb |
# + [markdown] slideshow={"slide_type": "slide"}
# # Lists
# + [markdown] slideshow={"slide_type": "fragment"}
# - Ordered Collection of things
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a list
# + slideshow={"slide_type": "fragment"}
a_list = [11.12, 12, 122, 'a', True]
empty_list = []
len(empty_list)
# + slideshow={"slide_type": "slide"}
empty_list = []
# + [markdown] slideshow={"slide_type": "slide"}
# #### Type
# -
type(a_list[2])
# + slideshow={"slide_type": "fragment"}
type(a_list)
# + [markdown] slideshow={"slide_type": "slide"}
# #### Length
# -
a_list
# + slideshow={"slide_type": "fragment"}
len(a_list)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Positions
# + [markdown] slideshow={"slide_type": "slide"}
# #### Zero based index
# -
a_list
# + slideshow={"slide_type": "fragment"}
a_list[0]
# -
a_list
a_list[1]
length = len(a_list)
a_list[length - 1]
# + [markdown] slideshow={"slide_type": "slide"}
# #### Negative index
# -
a_list
length
a_list[-1]
a_list[-length + 1]
# + slideshow={"slide_type": "fragment"}
a_list[-2]
# -
a_list[-11]
# + slideshow={"slide_type": "slide"}
a_list[4]
# + [markdown] slideshow={"slide_type": "slide"}
# ### Slicing
# -
a_list
a_list[1: 4]
# + slideshow={"slide_type": "fragment"}
a_list[0: 2]
# -
a_list
a_list[1:]
a_list[:2]
# + slideshow={"slide_type": "slide"}
a_list[-4: -1]
# -
a_list
a_list[1: 4: 2]
# +
# a_list[start: stop: step]
# -
a_list[:: -1] # reverses the list
# + slideshow={"slide_type": "slide"}
a_list[::2]
# + [markdown] slideshow={"slide_type": "slide"}
# ### Adding elements
# -
a_list = [11.12, 12, 122, 'a', True]
b_list = ['string', 1.11, 12]
a_list + b_list
a_list
b_list
c_list = a_list + b_list
c_list
# + slideshow={"slide_type": "fragment"}
c_list
# -
a_list
b_list
len(c_list)
len(c_list) == len(a_list) + len(b_list)
# + slideshow={"slide_type": "slide"}
c_list
# + [markdown] slideshow={"slide_type": "slide"}
# #### Inplace addition
# + [markdown] slideshow={"slide_type": "slide"}
# ##### One element at a time
# -
a_list
a_list.append(1211)
a_list
a_list + append(1211)
# + slideshow={"slide_type": "fragment"}
len(a_list)
# -
a_list = [1, 2, 3, 4, 5]
len(a_list)
a_list.append('append')
len(a_list)
a_list
# + slideshow={"slide_type": "slide"}
len(a_list)
# + [markdown] slideshow={"slide_type": "slide"}
# ##### Adding Multiple elements
# -
a_list = [11.12, 12, 122, 'a', True]
a_list
b_list
a_list.extend(b_list)
a_list
a_list.extend(b_list)
a_list
# + slideshow={"slide_type": "fragment"}
len(a_list)
# -
a_list = [1, 2, 3 , 4 , 5]
b_list = [6, 7, 8, 9]
a_list.extend(b_list)
a_list
len(a_list)
a_list.append(b_list)
b_list
a_list[9]
# + slideshow={"slide_type": "slide"}
a_list
# + [markdown] slideshow={"slide_type": "slide"}
# #### Adding element at a particular position
# -
a_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
a_list
a_list.insert(7, 7.5)
a_list
len(a_list)
a_list.insert(11, 11)
a_list.insert(122, 122)
a_list
a_list.insert(-1, 111)
a_list
a_list.insert(0,'last')
a_list
a_list.insert(len(a_list), 1222)
a_list
# + [markdown] slideshow={"slide_type": "notes"}
# ### Problem
# ```python
#
# >>> my_list = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
# >>> len(my_list)
#
# >>> my_list[1]
#
# >>> my_list[-len(my_list)]
#
# >>> my_list[1: 5]
#
# >>> my_list[4:]
#
# >>> my_list[:4]
#
# >>> my_list[:]
#
# >>> my_list[::-1]
#
# >>> my_list[::-2]
#
# >>> tmp_list = [1, 3, 5, 7]
#
# >>> your_list = my_list + tmp_list
# >>> your_list
#
# >>> your_list[0]
#
# >>> len(your_list)
#
# >>> len(my_list)
#
# >>> len(tmp_list)
#
# >>> len(your_list) != len(my_list) + len(tmp_list)
#
# >>> my_list.append(0)
# >>> my_list
#
# >>> my_list.extend(tmp_list)
# >>> my_list
#
# >>> my_list == your_list
#
# >>> my_list.insert(10, 0)
# >>> my_list
#
# >>> my_list == your_list
#
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# ### Searching elements
# + [markdown] slideshow={"slide_type": "slide"}
# #### Index
# + slideshow={"slide_type": "fragment"}
a_list = [11, 22, 33, 44, 55, 66]
# -
a_list.index(22)
a_list.index(222)
a_list.index(55)
a_list
a_list.index(11)
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Count
# -
a_list = [1, 11, 11, 121, 11, 0, 121, 11]
a_list.count(11)
a_list.count(121)
a_list.count(-11)
a_list.count('True')
# + slideshow={"slide_type": "fragment"}
# -
a_list = [11, 11, 121, 11, 0, 121, 11]
False + 1
a_list.count(False + 1)
# + [markdown] slideshow={"slide_type": "slide"}
# #### Using Membership Operator
# -
a_list
12 in a_list
121 in a_list
121 not in a_list
-11 not in a_list
# + slideshow={"slide_type": "fragment"}
# -
radius = 5
Radius
'name' == 'Name'
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# ### Removing elements
# + [markdown] slideshow={"slide_type": "slide"}
# #### Index based
# -
a_list
del a_list[2]
a_list
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Element based
# -
a_list
a_list.remove(0)
a_list
a_list.remove(11)
a_list
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Using the removed element
# -
a_list = [1, 11, 111, 121, 'hi']
last_element = a_list.pop()
last_element + '!'
# + slideshow={"slide_type": "fragment"}
a_list
# -
a_list = [1, 11, 121, 111]
removed_element = a_list.pop(2)
a_list
removed_element // 12
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# # Tuples
# + [markdown] slideshow={"slide_type": "fragment"}
# - Immutable list
# + [markdown] slideshow={"slide_type": "fragment"}
# - Can't be modified in any way once they are created
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a Tuple
# -
a_tuple = (11, 121, 122)
type(a_tuple)
a_tuple[0]
# + slideshow={"slide_type": "fragment"}
a_tuple[-1]
# -
a_tuple
a_tuple[1:3]
a_tuple[::2]
# + slideshow={"slide_type": "slide"}
a_tuple[11]
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# ## Lists and Tuples
# + [markdown] slideshow={"slide_type": "slide"}
# - Both have a definite order.
# -
a_list
a_tuple
a_list[0]
# + slideshow={"slide_type": "fragment"}
a_tuple[0]
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# - Both have zero based index
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# - Both support negative indices
# -
a_list
a_tuple
a_list[-1]
# + slideshow={"slide_type": "fragment"}
a_tuple[-1]
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# - Both are interchangeable
# -
b_list = list([1, 2, 3])
b_tuple = tuple((1, 2, 3))
b_list, type(b_list)
b_tuple, type(b_tuple)
new_tuple = tuple(b_list)
# + slideshow={"slide_type": "fragment"}
type(new_tuple)
# -
b_tuple
b_list
new_list = list(b_tuple)
type(new_list)
new_list
new_tuple == b_tuple
new_tuple == b_list
# + slideshow={"slide_type": "slide"}
new_list == b_list
# + [markdown] slideshow={"slide_type": "slide"}
# ## List v/s Tuples
# + [markdown] slideshow={"slide_type": "slide"}
# - Lists are defined using square brackets (`[]`) while parentheses (`()`) are used for tuples.
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# - Lists are mutable (changeable) while tuples aren't.
# -
a_list
a_tuple
a_list[0] = 'first'
a_list
# + slideshow={"slide_type": "fragment"}
a_tuple[0] = 'first'
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# - Tuples don't support methods like `append()`, `extend()`, `insert()`, `remove()`, and `pop()`
# -
a_list
a_list.insert(2, 1)
a_list
# + slideshow={"slide_type": "fragment"}
a_tuple.insert(2, 1)
# -
a_tuple
my_list = list(a_tuple)
my_list[0] = 1
my_list
a_tuple = tuple(my_list)
a_tuple
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# ### Assigning multiple values at once
# -
a_tuple = (11, 22, 33, 44)
first_value = a_tuple[0]
second_value = a_tuple[1]
third_value = a_tuple[2]
first_value, second_value, third_value, fourth_value = a_tuple
first_value
fourth_value
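# Unpacking works for any sequence whose length matches the number of targets, e.g.:

```python
point = (3, 4)

# each element is assigned to the corresponding name, left to right
x, y = point

print(x + y)
```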
# + [markdown] slideshow={"slide_type": "slide"}
# ### Problem
#
# Think only about the different data types that you have studied till now and pick the most appropriate one to store the following. Try to support your choice with logic:
#
# - choice of names to pick from a hat for a prize.
# - instructions to cook your favourite meal.
# - location of your current house.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Problem
# ```python
#
# >>> my_tuple = (10, 20, 30, 40, 50, 60, 70, 80, 90, 100)
#
# >>> my_tuple[::-2]
#
# >>> len(my_tuple)
#
# >>> my_tuple[1]
#
# >>> my_tuple[-len(my_tuple)]
#
# >>> my_tuple[1: 5]
#
# >>> my_tuple[4:]
#
# >>> my_tuple[:4]
#
# >>> my_tuple[:]
#
# >>> my_tuple[::-1]
#
# >>> my_tuple[::-2]
#
# >>> tmp_tuple = (1, 3, 5, 7)
#
# >>> your_tuple = my_tuple + tmp_tuple
# >>> your_tuple
#
# >>> your_tuple[0]
#
# >>> len(your_tuple)
#
# >>> len(my_tuple)
#
# >>> len(tmp_tuple)
#
# >>> len(your_tuple) != len(my_tuple) + len(tmp_tuple)
#
# >>> my_tuple.append(0)
#
# >>> my_tuple
#
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Problem
#
# Your friend (Lilly) calls you for help to plan her birthday party. Although you're busy, this being a call from one of your closest friends, you take the time out to help her, but on a certain condition. You tell her that you want to bring along some of your friends to the party. Lilly has no choice but to agree to your demands. You're her friend after all!!!
#
# **In this story, the term list is used to signify its meaning in the English language (a collection of things). It has no relation to the data type `list` in Python.**
#
# Having learnt `Python` recently, you tell Lilly that you will plan this programmatically. Now begins the long discussion between you, Lilly and the `Python Interpreter`.
#
# Lilly and you both have a list (not a Python list) of friends each (`friends_lilly` and `friends_mine` respectively).
#
# *Just give the python functions/operators that you would use for each question*
#
# - Q. Which data type would you use to store the names of friends? How do you get the list of all invitees to the party?
#
# After seeing the complete list of invitees, Lilly tells you that she doesn't like `doremon`, one of your friends inside `friends_mine`. She asks you to remove them from the list.
#
# - Q. How do you do this and get the new list of invitees?
#
# After having a careful look at the new list, you tell her you forgot to add your new friend `shizuka`. After some discussion, Lilly agrees to your request.
#
# - Q. How do you accomplish this task? Now, what would give you the new list of invitees?
#
# Now, after seeing the new list become too long, cost constraints come in. Lilly asks you to remove your friends from the final list of invitees (remember not to remove a common friend; that will make Lilly angry).
#
# - Q. What would be your way around this problem?
#
#
# Having finally prepared the list of invitees, Lilly thanks you for the time and effort.
#
# Don't miss the party, have fun!!!
#
# +
friends_lilly = {'a', 'b'}
friends_mine = {'b', 'c', 'd'}
friends_all = friends_lilly
friends_all = friends_all.union(friends_mine)
# -
friends_all
friends_all.discard('d')
friends_all
friends_all.add('s')
friends_all
friends_common = friends_mine.intersection(friends_lilly)
friends_common
friends_remove = friends_mine - friends_common
friends_all - friends_remove
friends_mine, friends_lilly
friends_abc = {'a'}
friends_commons = friends_mine.intersection(friends_lilly, friends_abc)
friends_commons
# + [markdown] slideshow={"slide_type": "slide"}
# # Sets
# + [markdown] slideshow={"slide_type": "fragment"}
# - Unordered collection of unique elements
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a set
# -
a_set = {1, 2, 3, 4, 5}
len(a_set)
a_set[0]
# + slideshow={"slide_type": "fragment"}
a_set
# -
type(a_set)
empty_set = set()
empty_set
len(empty_set)
empty = {}
type(empty)
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# ### Modifying a Set
# + [markdown] slideshow={"slide_type": "slide"}
# #### Adding one element at a time
# -
a_set.add(11)
a_set
a_set.add(1)
a_set
# + slideshow={"slide_type": "fragment"}
check_set = {1, 2, 1, 1, 3}
check_set
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Adding multiple elements
# -
a_set
a_set.update({2, 31, 42})
a_set
b_set = {11, 22, 33}
c_set = {10, 20, 30}
a_set.update(b_set)
a_set
a_set.update([111, 1221])
a_set
# + slideshow={"slide_type": "fragment"}
a_set
# -
a_set = {1, 2, 3, 4, 5}
b_set, c_set
a_set.update(b_set, c_set)
a_set
# ### Removing elements
# + [markdown] slideshow={"slide_type": "slide"}
# #### Discard
# -
a_set
a_set.discard(2)
a_set
a_set.discard(11)
# + slideshow={"slide_type": "fragment"}
a_set
# + [markdown] slideshow={"slide_type": "slide"}
# #### Remove
# -
a_set
a_set.remove(4)
a_set
a_set.remove(33)
# + slideshow={"slide_type": "fragment"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Discard v/s Remove
# -
a_list = [1, 2, 3, 3, 33, 44, 5]
a_set = set(a_list)
a_set
b_set = {'a', True, 21}
b_set
# + slideshow={"slide_type": "fragment"}
# + [markdown] slideshow={"slide_type": "slide"}
# ### Common Operations on Set
# + [markdown] slideshow={"slide_type": "slide"}
# #### Union
# -
a_set = {1, 2, 3, 4, 5}
b_set = {2, 33, 1}
union_set = a_set.union(b_set)
union_set
# + slideshow={"slide_type": "fragment"}
a_set
# -
some_union = b_set.union(a_set)
some_union == union_set
# + [markdown] slideshow={"slide_type": "slide"}
# #### Intersection
# -
a_set
b_set
intersection_set = a_set.intersection(b_set)
intersection_set
union_set
union_set.intersection(intersection_set)
# + slideshow={"slide_type": "fragment"}
# -
intersection_set
union_set
intersection_set.union(union_set)
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Difference
# -
a_set
b_set
a_set.difference(b_set)
# + slideshow={"slide_type": "fragment"}
b_set.difference(a_set)
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Symmetric Difference
# -
a_set
b_set
a_set - b_set
b_set - a_set
a_set.symmetric_difference(b_set)
b_set.symmetric_difference(a_set)
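# Sets also support operator shorthands: `|` for union, `&` for intersection,
# `-` for difference, and `^` for symmetric difference:

```python
a = {1, 2, 3}
b = {3, 4}

# elements that are in exactly one of the two sets
print(sorted(a ^ b))
```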
# + slideshow={"slide_type": "fragment"}
# -
# + [markdown] slideshow={"slide_type": "slide"}
# ### Question to ask from the sets
# -
a_set, b_set, union_set, intersection_set
a_set.issuperset(b_set)
union_set.issuperset(b_set)
b_set.issubset(union_set)
# + slideshow={"slide_type": "fragment"}
2 in a_set
# -
22 not in a_set
# + [markdown] slideshow={"slide_type": "slide"}
# #### Subset
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Superset
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# # Dictionaries
# + [markdown] slideshow={"slide_type": "fragment"}
# - Unordered set of key-value pairs
# + [markdown] slideshow={"slide_type": "slide"}
# ### Creating a dictionary
# -
my_dict = {}
type(my_dict)
my_dict = {'a': 1, 'b': 2}
info = {'name': '<NAME>', 'location': (45, 55), 'dob': 12214}
info['name']
info['dob']
# + slideshow={"slide_type": "fragment"}
info['ddd']
# -
info = {'name': '<NAME>', 'location': (45, 55), 'dob': 12214}
info['namess']
# + slideshow={"slide_type": "slide"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# ### Modifying a dictionary
# + [markdown] slideshow={"slide_type": "slide"}
# #### Changing value for a key
# -
info
info['dob'] = 121212
info['dob']
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# -
info
len(info)
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Add multiple key, values pairs
# -
info['country'] = 'India'
info
info['email'] = '<EMAIL>'
info['email']
# + slideshow={"slide_type": "fragment"}
info['name']
# -
info
info.update({'email': '<EMAIL>', 'profession': 'scientist'})
info
location = info['location']
len(location)
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Removing a key, value pair
# -
info.pop('email')
info
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Removing last item
# -
info['profession'] = 'scientist'
info
info.popitem()
info
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### Clearing all key, value pairs
# -
info
info.clear()
info
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# ### Some Helpful Methods
# + [markdown] slideshow={"slide_type": "fragment"}
# #### Getting a Particular Value for a key
# -
my_dict = {'a': 1, 'b': 11, 'c': 111}
my_dict
my_dict['a']
# + slideshow={"slide_type": "slide"}
my_dict['ss']
# -
my_dict.get('a')
my_dict.get('aa', 0)
# + [markdown] slideshow={"slide_type": "fragment"}
# #### All keys
# -
my_dict
value = my_dict.get('a', 0)
value == 0
# + slideshow={"slide_type": "slide"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### All values
# -
my_dict
my_dict.values()
# + slideshow={"slide_type": "fragment"}
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# #### All keys and values
# -
my_dict.items()
# + slideshow={"slide_type": "fragment"}
# -
my_dict.keys()
my_dict.values()
my_dict.items()
# + slideshow={"slide_type": "slide"}
# + [markdown] slideshow={"slide_type": "slide"}
# ### Mixed-Value Dictionaries
# -
suffixes = {1000: ['KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'],
1024: ['KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB']}
len(suffixes)
info_size = suffixes[1000]
# + slideshow={"slide_type": "slide"}
# -
info_size
info_size[3]
info_size[:: -3]
suffixes[1024]
# + slideshow={"slide_type": "slide"}
# -
# ### Comparisons
# + [markdown] slideshow={"slide_type": "slide"}
# | Property/Function| List | Tuple | Set | Dictionary |
# | --------------- | --------------- | --------------- | --------------- | --------------- |
# | Property | Mutable and ordered | Ordered and immutable | Unordered and mutable | Unordered and mutable |
# | When would use this | | | | |
# | Initialization | | | | |
# | Add one element | | | | |
# | Remove one element | | | | |
# | Add multiple elements | | | | |
# | Remove multiple elements | | | | |
#
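# The empty cells of the table above can be sketched in code. A minimal comparison of the four types (the variable names are illustrative):

```python
# Initialization
my_list = [1, 2]      # mutable, ordered
my_tuple = (1, 2)     # immutable, ordered (cannot be changed in place)
my_set = {1, 2}       # mutable, unordered, unique elements
my_dict = {'a': 1}    # mutable, unordered key-value pairs

# Add one element (tuples have no equivalent)
my_list.append(3)
my_set.add(3)
my_dict['b'] = 2

# Add multiple elements
my_list.extend([4, 5])
my_set.update({4, 5})
my_dict.update({'c': 3, 'd': 4})

# Remove one element
my_list.remove(3)
my_set.discard(3)
my_dict.pop('b')

print(my_list, my_set, my_dict)
```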
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "-"}
# <br>
# <br>
# <font size='6'><u><b>The Lives and Deaths of Stars</b></u></font>
# <br>
#
# _**Written by <NAME>, STScI**_
#
# _**Revised by <NAME>**_
#
# We talked about the lives and deaths of stars, but now it's time to play with real data, and see this story with your own eyes. This is interactive! You get to do anything you like with the data! DON'T WORRY! YOU WILL NOT BREAK ANYTHING! Want to try something? Just go right ahead!
#
# Want to know how to do something but aren't sure? Ask questions!
# ___
# -
# # Table of Contents
#
# * [How to Use This Notebook](#How-to-Use-This-Notebook)
# * [Pre-Activity Setup](#Pre-Activity-Setup)
# * [Activity 1: Graphing Some Data](#Activity-1:-Graphing-Some-Data)
# * [Part 1.1: Making a Scatter Plot](#Part-1.1:-Making-a-Scatter-Plot)
# * [Part 1.2: Graphing a Function](#Part-1.2:-Graphing-a-Function)
# * [Activity 2: Graphing Real Data from Stars](#Activity-2:-Graphing-Real-Data-from-Stars)
# * [Part 2.1: Selecting Spectra to Plot](#Part-2.1:-Selecting-Spectra-to-Plot)
# * [Part 2.2: Plotting Your First Spectrum](#Part-2.2:-Plotting-Your-First-Spectrum)
# * [Part 2.3: Plotting a Second Spectrum](#Part-2.3:-Plotting-a-Second-Spectrum)
# * [Part 2.4: Plotting Vega](#Part-2.4:-Plotting-Vega)
# * [Activity 3: Plotting the Main Sequence](#Activity-3:-Plotting-the-Main-Sequence)
# * [Part 3.1: Entering the Data from Some Stars](#Part-3.1:-Entering-the-Data-from-Some-Stars)
# * [Part 3.2: Plot the Star Data](#Part-3.2:-Plot-the-Star-Data)
# * [Part 3.3: Get a Sense for the Sizes and Colors of These Stars](#Part-3.3:-Get-a-Sense-for-the-Sizes-and-Colors-of-These-Stars)
# * [Activity 4: Looking at the Life Cycles of Stars](#Activity-4:-Looking-at-the-Life-Cycles-of-Stars)
# ___
# # How to Use This Notebook
#
# The webpage you are in is actually an app - much like the ones on your cellphone. This app consists of cells.
#
# An *input* cell looks like a light grey box with an `In [ ]:` on its left. Input cells each contain code - instructions to make the computer do something.
#
# To activate or select a cell, click anywhere inside of it.
#
# <div class='alert alert-info'>
# <font size='3'><b>Select the cell below and read its contents.</b></font>
# </div>
# +
# Text that follows a "#" is known as a comment.
# Comments do not affect your code in any way.
# You should always read the comments at the top of each cell you interact with.
# Comments will be used to describe what the cell's code is actually doing.
# -
# To execute or run a selected cell, hit `[Shift + Enter]` on your keyboard.
#
# <div class='alert alert-info'>
# <font size='3'><b>Select the cell below and read its contents. Then, run the cell.</b></font>
# <br> If a warning appears, just click <em>"Run Anyway"</em>, this code is safe ;)
# <br> Also, if you want to save your progress, click the <em>"Copy to Drive"</em> button at the top.
# </div>
# +
# Text that DOESN'T follow a "#" is considered code.
# Lines of code are instructions given to your computer.
# The line of code below is a "print" statement.
# A print statement literally prints out the text between its quotes.
print("Congrats! You have successfully run your first cell!")
# -
# Running a cell creates an *output* directly below it. An output can be some text, a graph, an interactive slider, or even nothing at all! For that last case, you know you have run a cell when the `In [ ]:` becomes `In [#]:`, where "#" is any number.
#
# You can learn more about how Jupyter notebooks work at https://try.jupyter.org/
# ___
# # Pre-Activity Setup
#
# In order for any of the activities to work properly, you must import the libraries needed for the code in this notebook.
#
# <div class='alert alert-info'>
# <font size='3'><b>Select and run the cell below.</b></font>
# </div>
# +
# If you are running this notebook in Colab the following package has to be installed first.
# !pip install astroml &> /dev/null
print("You have successfully installed: astroML")
# +
# Here, you are importing the libraries needed for this notebook.
# These libraries set up the plotting environment in your browser.
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from IPython.core.display import Image, display
from astroML.datasets import fetch_sdss_spectrum, fetch_vega_spectrum, fetch_sdss_S82standards
from astroML.plotting import MultiAxes
repoURL = 'https://raw.githubusercontent.com/astro-datalab/notebooks-latest/master/06_EPO/e-TeenAstronomyCafe/'
print('Done! You have successfully imported the libraries.')
# -
# ___
# # Activity 1: Graphing Some Data
#
# In this activity, you'll take the first step towards becoming a computer programmer! One of the first things scientists want to do when they get data is to take a look at it - plot it on a graph! This activity is going to show you how to plot data.
# ___
# ## Part 1.1: Making a Scatter Plot
#
# <div class='alert alert-info'>
# <h3 class='alert-heading'>Helpful Reminder(s)</h3>
# <ul>
# <li>Click anywhere inside of a cell to select it.</li>
# <li>Hit [Shift + Enter] to run a selected cell.</li>
# </ul>
# </div>
# +
# You don't need to use the same numbers, or indeed even this many - put in whatever you like for x & y.
x = [11, 21, 2, -7, 4, -10, 6]
y = [10, 21, 0, 9, 17, 13, 18]
plt.figure()
plt.scatter(x,y,marker='o',color='blue')
plt.xlabel('X')
plt.ylabel('Y')
# -
# Scatter is the most basic plotting command, and plots a bunch of points - notice that there are blue dots on the X and Y locations you entered!
# ___
# ## Part 1.2: Graphing a Function
#
# There are several other plotting commands - the one we'll use mostly is called "plot".
# +
# What's nifty is that you can plot functions as well!
# Want to know what a function like x - 5.1 + 40*sin(x/3) looks like? Compute and plot it!
# This command creates 100053 evenly spaced numbers from 0 to 200.
x = np.linspace(0,200,100053)
# If you want, you can print it out.
print(x)
# You can now relate y to x with some function of the sort you've learned in school.
y = x - 5.1 + 40*np.sin(x/3)
# Now, you can plot your data!
plt.figure()
plt.plot(x, y, marker='None', color='blue', linestyle='-')
plt.xlabel('X')
plt.ylabel('Y')
# -
# Nifty! We've gone from plotting a bunch of points to plotting something continuous - a curve.
#
# You can plot all sorts of different curves - parabolas, sines and cosines or other trig functions, or whatever you can imagine really! Try it! You can create new cells below this one, by clicking it, then using the menu at the top - **Insert > Insert Cell Below**.
#
# Type in your code for y based on x (you can even change x) and then plot it with the plot command from above. Then, hit `[Shift + Enter]` to run it!
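# For example, a parabola could be plotted like this (a sketch - feel free to change the function to anything you like):

```python
import numpy as np
import matplotlib.pyplot as plt

# A parabola: y equals x squared
x = np.linspace(-10, 10, 100)
y = x**2

plt.figure()
plt.plot(x, y, marker='None', color='green', linestyle='-')
plt.xlabel('X')
plt.ylabel('Y')
```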
# ___
# # Activity 2: Graphing Real Data from Stars
#
# Just like we plotted the data above, we can now plot a spectrum of a star!
#
# To do this, we are going to use data from the [Sloan Digital Sky Survey (SDSS)](http://sdss.org). This project used a telescope at Apache Point in New Mexico to look at the northern sky.
#
# <figure>
# <center>
# <br>
# <img src='https://github.com/DavidVargasMora/TACTests/raw/master/01_Lives_and_Deaths_of_Stars/Figures/sloan_fermilab_big.jpg' width='300'>
# <br>
# <figcaption>
# <font color='grey'>
# <b>Image 1:</b>
# The Sloan Telescope at Apache Point, New Mexico.
# <br>
# <b>Image Credit:</b>
# SDSS Team, Fermilab Visual Media Services
# </font>
# </figcaption>
# </center>
# </figure>
#
# It was the first "Big Data" project in astronomy. Sloan found millions of stars and galaxies, and made their data public. What we're going to do is start to play with SDSS data.
# ___
# ## Part 2.1: Selecting Spectra to Plot
#
# Click [here](http://classic.sdss.org/dr5/algorithms/spectemplates/) for the spectra of different stars.
#
# Look at the gif links, and look for **Plate**, **Fiber** and **MJD**. Write them down on a piece of paper.
#
# Next, let's go look at the data for that star [here](http://cas.sdss.org/dr14/en/tools/explore/Summary.aspx).
#
# Click on **Search** on the left hand side menu bar, and then enter the **Plate**, **Fiber** and **MJD** that you wrote down there, and hit **Go**.
#
# If you click on the image, you can move around, zoom in and out - it's like Google Maps for the night sky!
# ___
# ## Part 2.2: Plotting Your First Spectrum
#
# Take the same **Plate**, **Fiber**, and **MJD** numbers from earlier and enter them into the code below - make sure they match!
#
# Then, run the cell.
#
# <div class='alert alert-info'>
# <h3 class='alert-heading'>Helpful Reminder(s)</h3>
# <ul>
# <li>Click anywhere inside of a cell to select it.</li>
# <li>Hit [Shift + Enter] to run a selected cell.</li>
# </ul>
# </div>
# +
# Fetch single spectrum - Enter the same "Plate", "MJD" and "Fiber" numbers here.
plate = 402
mjd = 51793
fiber = 204
spec = fetch_sdss_spectrum(plate, mjd, fiber)
# now, just as before, we can plot the data
plt.figure()
plt.plot(spec.wavelength(), spec.spectrum/spec.spectrum.max(), color='black')
plt.xlabel('Wavelength')
plt.ylabel('Brightness')
# -
# That's the same spectrum as is on the webpage!
#
# What you did was pull data from the web into your app. Now you can run whatever code you like on the data!
#
# This is a key element of data analysis.
#
# Let's zoom in on the part of the spectrum we can see with our eyes - "VIBGYOR" is about $4000$ to $7000$ Angstroms - or $10^{-10}$ meters ($0.$ followed by nine zeros and then a $1$ - really small!).
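# You can check this conversion yourself - an Angstrom is $10^{-10}$ meters, so the visible range works out to:

```python
# One Angstrom is 1e-10 meters, so the visible range of 4000-7000 Angstroms is:
angstrom = 1e-10  # meters
print("Visible light spans roughly", 4000 * angstrom, "to", 7000 * angstrom, "meters")
```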
# +
# Lets look at the range where our eyes can see.
plt.figure()
plt.plot(spec.wavelength(), spec.spectrum/spec.spectrum.max(), color='black')
plt.xlabel('Wavelength')
plt.ylabel('Brightness')
plt.xlim(4000,7000)
plt.ylim(0, 1.2)
# -
# See those dips - those are the absorption lines we talked about! They are the chemical fingerprint of a star.
#
# This star has lines from Hydrogen and Helium - the first and second elements on the periodic table, and the two most common elements in the entire Universe.
#
# You can also tell something about the star from its spectrum. It's higher on the left (at lower wavelengths) than the right (at higher wavelengths).
#
# The low end of this wavelength range ($4000$ Angstroms) is what our eyes perceive as blue - or simply, this star will look blue!
# ___
# ## Part 2.3: Plotting a Second Spectrum
#
# Not all stars look the same. Pick another star from the page, get its **Fiber**, **Plate**, and **MJD**, and let us plot that too.
# +
# Fetch a second spectrum.
plate2 = 273
mjd2 = 51957
fiber2 = 304
spec2 = fetch_sdss_spectrum(plate2, mjd2, fiber2)
plt.figure()
plt.plot(spec.wavelength(), spec.spectrum/spec.spectrum.max(), color='black')
plt.plot(spec2.wavelength(), spec2.spectrum/spec2.spectrum.max(), color='red')
plt.xlabel('Wavelength')
plt.ylabel('Brightness')
plt.xlim(4000,7000)
plt.ylim(0, 1.2)
# -
# Notice it has some of the same absorption lines in the same places as the other star, but they are deeper - it has relatively more of that element.
#
# But that isn't all! This star, plotted in red, is lower on the left (blue) side than the right (red) side - it'll look red!
#
# > <u><b>Remember:</b></u> From the H-R diagram, the color of stars is related to their temperature - the redder star is cooler than the hot blue star!
#
# If you can figure out the color of stars, you can figure out their temperature, their mass and their size.
#
# <figure>
# <center>
# <br>
# <img src='https://github.com/DavidVargasMora/TACTests/raw/master/01_Lives_and_Deaths_of_Stars/Figures/HR-diagram.jpg' width='600'>
# <br>
# <figcaption>
# <font color='grey'>
# <b>Image 2:</b>
# The H-R diagram.
# </font>
# </figcaption>
# </center>
# </figure>
#
# ___
# ## Part 2.4: Plotting Vega
#
# Plot the spectrum of Vega (in the middle of the upper plot), and see what this star looks like.
# +
# Fetch a third spectrum.
spec3 = fetch_vega_spectrum()
plt.figure()
plt.plot(spec3[0], spec3[1]/spec3[1].max(), color='blue')
plt.plot(spec.wavelength(), spec.spectrum/spec.spectrum.max(), color='black')
plt.plot(spec2.wavelength(), spec2.spectrum/spec2.spectrum.max(), color='red')
plt.xlabel('Wavelength')
plt.ylabel('Brightness')
plt.xlim(4000,7000)
plt.ylim(0, 1.2)
# -
# This third star plotted in blue, is Vega - one of the brightest stars in the sky.
#
# It's one of the three stars that make up the Summer Triangle.
#
# <figure>
# <center>
# <br>
# <img src='https://github.com/DavidVargasMora/TACTests/raw/master/01_Lives_and_Deaths_of_Stars/Figures/MilkyWay12-501w.jpg' width='600'>
# <br>
# <figcaption>
# <font color='grey'>
# <b>Image 3:</b>
# The three bright stars of the Summer Triangle with our galaxy, the Milky Way, behind. Each of those specks of light is a star! There are some hundred billion stars in our Galaxy alone!
# </font>
# </figcaption>
# </center>
# </figure>
#
# Even here you can see that stars have a range of colors - or really temperatures.
#
# Remember that the stars need fuel to resist the crushing pull of gravity. While stars burn Hydrogen and produce Helium in their core, they are said to be on **The Main Sequence**.
#
# You could scroll up and look at that figure again... OR we can just plot the main sequence from data ourselves!
# ___
# # Activity 3: Plotting the Main Sequence
#
# Now that we've seen that spectra of stars are different, lets compare the brightness and colors of a lot of different stars.
# ___
# ## Part 3.1: Entering the Data from Some Stars
#
# This is just like [Activity 1](#Activity-1:-Graphing-Some-Data), except instead of calling it "x" and "y", I'm calling it "color" and "brightness".
#
# I've already entered the data below - all you have to do is run the cell.
#
# <div class='alert alert-info'>
# <h3 class='alert-heading'>Helpful Reminder(s)</h3>
# <ul>
# <li>Click anywhere inside of a cell to select it.</li>
# <li>Hit [Shift + Enter] to run a selected cell.</li>
# </ul>
# </div>
# First, lets take some data that I've entered directly from the SDSS
star = ['Sun', 'Sirius', 'Canopus', 'Arcturus', 'AlphaCen', 'Vega','Capella', 'Rigel', 'Procyon', 'Betelgeuse', 'Achernar','Hadar', 'Acrux', 'Altair', 'Aldebaran', 'Antares', 'Spica', 'Pollux', 'Fomalhaut', 'Becrux', 'Deneb', 'Regulus', 'Adhara','Shaula', 'Gacrux', 'Castor']
apparent_brightness = [-26.8, -1.46, -0.72, -0.04, -0.01, 0.0, 0.08, 0.12, 0.38, 0.41, 0.46, 0.63, 0.76, 0.77, 0.85, 0.92, 1.0, 1.14, 1.16, 1.2, 1.25, 1.35, 1.5, 1.6, 1.63, 1.98]
brightness = [4.8, 1.4, -2.5, 0.2, 4.4, 0.6, 0.4, -8.1, 2.6, -7.2, -1.3, -4.4, -4.6, 2.3, -0.3, -5.2, -3.2, 0.7, 2.0, -4.7, -7.2, -0.3, -4.8, -3.5, -1.2, 0.5]
color = [0.63, 0.0, 0.15, 1.23, 0.71, 0.0, 0.08, -0.03, 0.42, 1.85, -0.16, -0.23, -0.24, 0.22, 1.54, 1.83, -0.23, 1.0, 0.09, -0.23, 0.09, -0.11, -0.21, -0.22, 1.59, 0.03]
# ___
# ## Part 3.2: Plot the Star Data
#
# Lets plot the data we entered in the last two cells with the scatter command we used in [Activity 1](#Activity-1:-Graphing-Some-Data).
# +
# We'll use the size property 's' to scale the points by the brightness.
plt.figure()
cmap = cm.ScalarMappable(cmap='jet')
colors=cmap.to_rgba(np.arctan(np.array(color)))
plt.scatter(color, brightness, marker='*', color=colors, s=10 + 10**(-0.4*np.array(brightness)))
## the next two lines label each star with its name; add a '#' in front of them to hide the labels
for i, name in enumerate(star):
plt.annotate(name,(color[i],brightness[i]))
plt.xlabel('Color')
plt.ylabel('Brightness')
plt.suptitle('Main Sequence of the Brightest Stars')
# Brightness (magnitude) is a little weird - remember that smaller numbers mean that the star is brighter
# It's a little like the Top 100 charts - a ranked list
# where the smaller the number, the higher the rank
#plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
# -
# ___
# ## Part 3.3: Get a Sense for the Sizes and Colors of These Stars
#
# The reddest stars in the plot, like Betelgeuse and Antares, are red giant stars nearing the end of their lives.
#
# <figure>
# <center>
# <br>
# <img src='https://github.com/DavidVargasMora/TACTests/raw/master/01_Lives_and_Deaths_of_Stars/Figures/Orion_Head_to_Toe.jpg' width='600'>
# <br>
# <figcaption>
# <font color='grey'>
# <b>Image 4:</b>
# Betelgeuse is in the constellation Orion - and you can easily see it rise over Tucson at night. If you look carefully, you'll be able to tell it's a different color with just your own eyes.
# </font>
# </figcaption>
# </center>
# </figure>
#
# Remember, different stars also have different sizes. Compare the size of Betelgeuse to the Sun by scrolling from left to right on the image below.
display(Image(repoURL+'01_Lives_and_Deaths_of_Stars/Figures/star_sizes_small.jpg', width=4000, unconfined=False))
# ___
# # Activity 4: Looking at the Life Cycles of Stars
#
# We've taken a look at different stellar spectra, and put them in context with each other with the Main Sequence. Stars may look like they are always the same to us, but they're always changing.
#
# As a star runs out of fuel, it is no longer able to withstand the force of gravity and begins to die. Stars die in several different ways. Some end their lives as white dwarfs, slowly cooling in space. The most massive stars end their lives as neutron stars and black holes. Many stars, however, die by exploding spectacularly. These are supernovae!
#
# Use the [Star in a Box](http://starinabox.lco.global/) to see how stars with different masses live and die!
# ___
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: firedrake
# language: python
# name: firedrake
# ---
# # Problem Description
#
# We want to find out how the solution of our inverse problem converges as we increase the number of points for both the new and traditional methods of data interpolation.
#
# If we have what is known as **"posterior consistency"** then we expect that the error in our solution, when compared to the true solution, will always decrease as we increase the number of points we are assimilating.
#
# ## Posterior Consistency **NEEDS WORK**
#
# From a Bayesian point of view, the regularisation we choose and the weighting we give it encode information about our assumed prior probability distribution of $q$ before we start assimilating data (adding observations).
# Take, for example, the regularisation used in this problem
#
# $$
# \frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx
# $$
#
# which asserts a prior that the solution $q$ which minimises $J$ should be smooth and gives a weighting $\alpha$ to the assertion.
# If we have posterior consistency, the contribution of increasing numbers of measurements $u_{obs}$ should increase the weighting of our data relative to our prior and we should converge towards the true solution.
#
# ## Hypothesis
#
# Our two methods minimise two different functionals.
# The first minimises $J$
#
# $$J[u, q] =
# \underbrace{\frac{1}{2}\int_{\Omega_v}\left(\frac{u_{obs} - I(u, \text{P0DG}(\Omega_v))}{\sigma}\right)^2dx}_{\text{model-data misfit}} +
# \underbrace{\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx}_{\text{regularization}}$$
#
# whilst the second minimises $J'$
#
# $$J'[u, q] =
# \underbrace{\frac{1}{2}\int_{\Omega}\left(\frac{u_{interpolated} - u}{\sigma}\right)^2dx}_{\text{model-data misfit}} +
# \underbrace{\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx}_{\text{regularization}}.$$
#
# As set up here, increasing the number of points to assimilate has the effect of increasing the size of the misfit term in $J$ (with a weight set by each measurement's variance $\sigma$), so we expect to converge to $q_{true}$ as the number of measurements increases.
#
# As we increase the number of measurements in $J'$ we have to hope that (a) our calculated $u_{interpolated}$ approaches $u$ (to minimise the misfit) and (b) that the misfit term does not grow relative to the regularisation term, since adding measurements does not make it relatively bigger.
#
# I therefore predict that minimising $J$ will display posterior consistency and that minimising the various $J'$ for each $u_{interpolated}$ will not.
# Who knows what we will converge to!
#
# ## Hypothesis Amendment! A note on finite element method error
# Note that our solutions all exist in finite element spaces which are usually approximations of a true solution with some error that (hopefully) decreases as mesh density and solution space order increase.
# Since I am comparing to a solution $u_{true}$ in CG2 space I expect, at best, that we will converge to $u_{true}$ when we have, on average, enough points per cell to fully specify the Lagrange polynomials in that cell.
# Were we in CG1 this would be 3 points per cell (I can't remember how many we would need for CG2!) to give convergence if those measurements had no noise.
# Since our measurements are noisy I do not expect actual convergence, but I anticipate some slowing in convergence.
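# As a check on this counting argument (just the standard dimension formula for Lagrange elements on triangles, not firedrake output):

```python
def lagrange_nodes_per_triangle(k):
    """Number of DOFs of a degree-k Lagrange element on a triangle,
    i.e. the dimension of the space of bivariate polynomials of degree <= k."""
    return (k + 1) * (k + 2) // 2

print(lagrange_nodes_per_triangle(1))  # CG1: 3 points per cell
print(lagrange_nodes_per_triangle(2))  # CG2: 6 points per cell
```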
# # Setup
import firedrake
import firedrake_adjoint
mesh = firedrake.UnitSquareMesh(32, 32)
V = firedrake.FunctionSpace(mesh, family='CG', degree=2)
Q = firedrake.FunctionSpace(mesh, family='CG', degree=2)
# ## Fake $q_{true}$
# +
from firedrake import Constant, cos, sin
import numpy as np
from numpy import pi as π
from numpy import random
import matplotlib.pyplot as plt
seed = 1729
generator = random.default_rng(seed)
degree = 5
x = firedrake.SpatialCoordinate(mesh)
q_true = firedrake.Function(Q)
for k in range(degree):
for l in range(int(np.sqrt(degree**2 - k**2))):
Z = np.sqrt(1 + k**2 + l**2)
ϕ = 2 * π * (k * x[0] + l * x[1])
A_kl = generator.standard_normal() / Z
B_kl = generator.standard_normal() / Z
expr = Constant(A_kl) * cos(ϕ) + Constant(B_kl) * sin(ϕ)
mode = firedrake.interpolate(expr, Q)
q_true += mode
import matplotlib.pyplot as plt
fig, axes = plt.subplots()
axes.set_aspect('equal')
colors = firedrake.tripcolor(q_true, axes=axes, shading='gouraud')
fig.colorbar(colors);
# -
# ## Fake $u_{true}$
# +
from firedrake import exp, inner, grad, dx
u_true = firedrake.Function(V)
v = firedrake.TestFunction(V)
f = Constant(1.0)
k0 = Constant(0.5)
bc = firedrake.DirichletBC(V, 0, 'on_boundary')
F = (k0 * exp(q_true) * inner(grad(u_true), grad(v)) - f * v) * dx
firedrake.solve(F == 0, u_true, bc)
fig, axes = plt.subplots()
axes.set_aspect('equal')
colors = firedrake.tripcolor(u_true, axes=axes, shading='gouraud')
fig.colorbar(colors);
# -
# ## Generating Observational Data $u_{obs}$
# We run up in powers of 2 until we have plenty of observations per cell (on average)
# +
min_power_of_2 = 6
max_power_of_2 = 6
signal_to_noise = 20
U = u_true.dat.data_ro[:]
u_range = U.max() - U.min()
σ = firedrake.Constant(u_range / signal_to_noise)
xs_set = {}
u_obs_vals_set = {}
for i in range(min_power_of_2, max_power_of_2+1):
# Make random point cloud
num_points = 2**i
xs = np.random.random_sample((num_points,2))
xs_set[i] = xs
# Generate "observed" data
ζ = generator.standard_normal(len(xs))
u_obs_vals = np.array(u_true.at(xs)) + float(σ) * ζ
u_obs_vals_set[i] = u_obs_vals
print(2**max_power_of_2 / mesh.num_cells())
# -
# # Solving with Vertex Only Meshes
# +
q_min_set = {}
tape = firedrake_adjoint.get_working_tape()
for i in range(min_power_of_2, max_power_of_2+1):
# Run the forward problem with q = 0 as first guess
u = firedrake.Function(V)
q = firedrake.Function(Q)
bc = firedrake.DirichletBC(V, 0, 'on_boundary')
F = (k0 * exp(q) * inner(grad(u), grad(v)) - f * v) * dx
firedrake.solve(F == 0, u, bc)
# Store data on the point_cloud using a vertex only mesh
point_cloud = firedrake.VertexOnlyMesh(mesh, xs_set[i])
P0DG = firedrake.FunctionSpace(point_cloud, 'DG', 0)
u_obs = firedrake.Function(P0DG)
u_obs.dat.data[:] = u_obs_vals_set[i]
# Two terms in the functional
misfit_expr = 0.5 * ((u_obs - firedrake.interpolate(u, P0DG)) / σ)**2
α = firedrake.Constant(0.5)
regularisation_expr = 0.5 * α**2 * inner(grad(q), grad(q))
# Should be able to write firedrake.assemble(misfit + regularisation * dx) but can't yet
# because of the meshes being different
J = firedrake.assemble(misfit_expr * dx) + firedrake.assemble(regularisation_expr * dx)
# Create reduced functional
q̂ = firedrake_adjoint.Control(q)
Ĵ = firedrake_adjoint.ReducedFunctional(J, q̂)
# Minimise reduced functional
q_min = firedrake_adjoint.minimize(
Ĵ, method='Newton-CG', options={'disp': True}
)
q_min_point_cloud = {}
q_min_point_cloud['point-cloud'] = q_min
q_min_set[i] = q_min_point_cloud.copy()
# Clear tape to avoid memory leak
tape.clear_tape()
# +
xs = xs_set[6]
q_min = q_min_set[6]['point-cloud']
fig, axes = plt.subplots(ncols=3, nrows=2, sharex=True, sharey=True, figsize=(20,12), dpi=200)
plt.suptitle('Estimating Log-Conductivity $q$ \n\
where $k = k_0e^q$ and $-\\nabla \\cdot k \\nabla u = f$ for known $f$', fontsize=25)
for ax in axes.ravel():
ax.set_aspect('equal')
# ax.get_xaxis().set_visible(False)
axes[0, 0].set_title('$u_{true}$', fontsize=25)
colors = firedrake.tripcolor(u_true, axes=axes[0, 0], shading='gouraud')
fig.colorbar(colors, ax=axes[0, 0])
axes[1, 0].set_title('Sampled Noisy $u_{obs}$', fontsize=25)
colors = axes[1, 0].scatter(xs[:, 0], xs[:, 1], c=u_obs_vals)
fig.colorbar(colors, ax=axes[1, 0])
kw = {'vmin': -5, 'vmax': +5, 'shading': 'gouraud'}
axes[0, 1].set_title('$q_{true}$', fontsize=25)
colors = firedrake.tripcolor(q_true, axes=axes[0, 1], **kw)
fig.colorbar(colors, ax=axes[0, 1])
axes[1, 1].set_title('Estimated $q_{est}$ from $u_{obs}$', fontsize=25)
colors = firedrake.tripcolor(q_min, axes=axes[1, 1], **kw);
fig.colorbar(colors, ax=axes[1, 1])
axes[0, 2].axis('off')
q_err = firedrake.Function(Q).assign(q_min-q_true)
l2norm = firedrake.norm(q_err, "L2")
axes[1, 2].set_title('$q_{est}$ - $q_{true}$', fontsize=25)
axes[1, 2].text(0.5, 0.5, f'$L^2$ Norm {l2norm:.2f}', ha='center', fontsize=20)
colors = firedrake.tripcolor(q_err, axes=axes[1, 2], **kw);
fig.colorbar(colors, ax=axes[1, 2])
plt.savefig('pretty.png')
# -
# # Solving with Interpolation Methods
# +
from scipy.interpolate import (
LinearNDInterpolator,
NearestNDInterpolator,
CloughTocher2DInterpolator,
Rbf,
)
interpolators_set = {}
for i in range(min_power_of_2, max_power_of_2+1):
    # Build the interpolators from the point cloud and measurements for this i
    xs_i = xs_set[i]
    u_obs_vals_i = u_obs_vals_set[i]
    interpolators_set[i] = {
        'nearest': NearestNDInterpolator(xs_i, u_obs_vals_i),
        'linear': LinearNDInterpolator(xs_i, u_obs_vals_i, fill_value=0.0),
        'clough-tocher': CloughTocher2DInterpolator(xs_i, u_obs_vals_i, fill_value=0.0),
        'gaussian': Rbf(xs_i[:, 0], xs_i[:, 1], u_obs_vals_i, function='gaussian'),
    }
# +
interpolated_data_set = {}
for i in range(min_power_of_2, max_power_of_2+1):
# Interpolating the mesh coordinates field (which is a vector function space)
# into the vector function space equivalent of our solution space gets us
# global DOF values (stored in the dat) which are the coordinates of the global
# DOFs of our solution space. This is the necessary coordinates field X.
Vc = firedrake.VectorFunctionSpace(mesh, V.ufl_element())
X = firedrake.interpolate(mesh.coordinates, Vc).dat.data_ro[:]
# Interpolate using each method
interpolated_data = {}
for method, interpolator in interpolators_set[i].items():
u_interpolated = firedrake.Function(V)
u_interpolated.dat.data[:] = interpolator(X[:, 0], X[:, 1])
interpolated_data[method] = u_interpolated
# Save interpolated data for number of points
interpolated_data_set[i] = interpolated_data
# -
del interpolators_set
for i in range(min_power_of_2, max_power_of_2+1):
    for method, u_interpolated in interpolated_data_set[i].items():
# Run the forward problem with q = 0 as first guess
u = firedrake.Function(V)
q = firedrake.Function(Q)
bc = firedrake.DirichletBC(V, 0, 'on_boundary')
F = (k0 * exp(q) * inner(grad(u), grad(v)) - f * v) * dx
firedrake.solve(F == 0, u, bc)
# Two terms in the functional
misfit_expr = 0.5 * ((u_interpolated - u) / σ)**2
α = firedrake.Constant(0.5)
regularisation_expr = 0.5 * α**2 * inner(grad(q), grad(q))
# Only assemble two terms separately for exact comparison with other method!
Jprime = firedrake.assemble(misfit_expr * dx) + firedrake.assemble(regularisation_expr * dx)
# Create reduced functional
q̂ = firedrake_adjoint.Control(q)
Ĵprime = firedrake_adjoint.ReducedFunctional(Jprime, q̂)
# Minimise reduced functional
q_min = firedrake_adjoint.minimize(
Ĵprime, method='Newton-CG', options={'disp': True}
)
q_min_set[i][method] = q_min
# Clear tape to avoid memory leak
tape.clear_tape()
# # Results
# # Collate Results
q_err_set = {}
l2errors_set = {}
for i in range(min_power_of_2, max_power_of_2+1):
q_err_set[i] = {}
l2errors_set[i] = {}
for method_i, q_min_i in q_min_set[i].items():
q_err = firedrake.Function(Q).assign(q_min_i-q_true)
l2norm = firedrake.norm(q_err, "L2")
q_err_set[i][method_i] = q_err
l2errors_set[i][method_i] = l2norm
print(method_i)
# +
from mpl_toolkits.axes_grid1 import make_axes_locatable
for i in range(min_power_of_2, max_power_of_2+1):
ukw = {'vmin': 0.0, 'vmax': +0.2}
kw = {'vmin': -4, 'vmax': +4, 'shading': 'gouraud'}
title_fontsize = 20
text_fontsize = 20
fig, axes = plt.subplots(ncols=3, nrows=6, sharex=True, sharey=True, figsize=(20,30), dpi=200)
plt.suptitle('Estimating Log-Conductivity $q$ \n\
where $k = k_0e^q$ and $-\\nabla \\cdot k \\nabla u = f$ for known $f$', fontsize=title_fontsize)
for ax in axes.ravel():
ax.set_aspect('equal')
# ax.get_xaxis().set_visible(False)
axes[0, 0].set_title('$u_{true}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(u_true, axes=axes[0, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[0, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[0, 1].set_title('$q_{true}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_true, axes=axes[0, 1], **kw)
cax = make_axes_locatable(axes[0, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[0, 2].set_title('$q_{true}-q_{true}$', fontsize=title_fontsize)
zero_func = firedrake.Function(Q).assign(q_true-q_true)
axes[0, 2].text(0.5, 0.5, f'$L^2$ Norm {firedrake.norm(zero_func, "L2"):.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(zero_func, axes=axes[0, 2], **kw)
cax = make_axes_locatable(axes[0, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'point-cloud'
axes[1, 0].set_title('Sampled Noisy $u_{obs}$', fontsize=title_fontsize)
colors = axes[1, 0].scatter(xs[:, 0], xs[:, 1], c=u_obs_vals, vmin=0.0, vmax=0.2)
cax = make_axes_locatable(axes[1, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[1, 1].set_title('$q_{est}$ from Point Cloud', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[1, 1], **kw)
cax = make_axes_locatable(axes[1, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[1, 2].set_title('$q_{est}-q_{true}$', fontsize=title_fontsize)
axes[1, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[1, 2], **kw);
cax = make_axes_locatable(axes[1, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'nearest'
axes[2, 0].set_title('$u_{interpolated}^{nearest}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(interpolated_data[key], axes=axes[2, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[2, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[2, 1].set_title('$q_{est}^{nearest}$ from $u_{interpolated}^{nearest}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[2, 1], **kw)
cax = make_axes_locatable(axes[2, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[2, 2].set_title('$q_{est}^{nearest}-q_{true}$', fontsize=title_fontsize)
axes[2, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[2, 2], **kw);
cax = make_axes_locatable(axes[2, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'linear'
axes[3, 0].set_title('$u_{interpolated}^{linear}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(interpolated_data[key], axes=axes[3, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[3, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[3, 1].set_title('$q_{est}^{linear}$ from $u_{interpolated}^{linear}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[3, 1], **kw)
cax = make_axes_locatable(axes[3, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[3, 2].set_title('$q_{est}^{linear}-q_{true}$', fontsize=title_fontsize)
axes[3, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[3, 2], **kw);
cax = make_axes_locatable(axes[3, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'clough-tocher'
axes[4, 0].set_title('$u_{interpolated}^{clough-tocher}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(interpolated_data[key], axes=axes[4, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[4, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[4, 1].set_title('$q_{est}^{clough-tocher}$ from $u_{interpolated}^{clough-tocher}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[4, 1], **kw)
cax = make_axes_locatable(axes[4, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[4, 2].set_title('$q_{est}^{clough-tocher}-q_{true}$', fontsize=title_fontsize)
axes[4, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[4, 2], **kw);
cax = make_axes_locatable(axes[4, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'gaussian'
axes[5, 0].set_title('$u_{interpolated}^{gaussian}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(interpolated_data[key], axes=axes[5, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[5, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[5, 1].set_title('$q_{est}^{gaussian}$ from $u_{interpolated}^{gaussian}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[5, 1], **kw)
cax = make_axes_locatable(axes[5, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[5, 2].set_title('$q_{est}^{gaussian}-q_{true}$', fontsize=title_fontsize)
axes[5, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[5, 2], **kw);
cax = make_axes_locatable(axes[5, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
# fig.text(0.5,0.05,r'Functional minimised: $J[u, q] = \frac{1}{2}\int_{\Omega_v}\left(\frac{u_{obs} - I(u, \mathrm{P0DG}(\Omega_v))}{\sigma}\right)^2dx + \frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx$', ha='center', va='center', fontsize=20)
plt.savefig(f'posterior-consistency-{2**i}-pts.png')
# +
from mpl_toolkits.axes_grid1 import make_axes_locatable
for i in range(min_power_of_2, max_power_of_2+1):
ukw = {}
kw = {'shading': 'gouraud'}
title_fontsize = 20
text_fontsize = 20
fig, axes = plt.subplots(ncols=3, nrows=6, sharex=True, sharey=True, figsize=(20,30), dpi=200)
plt.suptitle('Estimating Log-Conductivity $q$ \n\
where $k = k_0e^q$ and $-\\nabla \\cdot k \\nabla u = f$ for known $f$', fontsize=title_fontsize)
for ax in axes.ravel():
ax.set_aspect('equal')
# ax.get_xaxis().set_visible(False)
axes[0, 0].set_title('$u_{true}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(u_true, axes=axes[0, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[0, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[0, 1].set_title('$q_{true}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_true, axes=axes[0, 1], **kw)
cax = make_axes_locatable(axes[0, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[0, 2].set_title('$q_{true}-q_{true}$', fontsize=title_fontsize)
zero_func = firedrake.Function(Q).assign(q_true-q_true)
axes[0, 2].text(0.5, 0.5, f'$L^2$ Norm {firedrake.norm(zero_func, "L2"):.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(zero_func, axes=axes[0, 2], **kw)
cax = make_axes_locatable(axes[0, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'point-cloud'
axes[1, 0].set_title('Sampled Noisy $u_{obs}$', fontsize=title_fontsize)
colors = axes[1, 0].scatter(xs[:, 0], xs[:, 1], c=u_obs_vals, vmin=0.0, vmax=0.2)
cax = make_axes_locatable(axes[1, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[1, 1].set_title('$q_{est}$ from Point Cloud', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[1, 1], **kw)
cax = make_axes_locatable(axes[1, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[1, 2].set_title('$q_{est}-q_{true}$', fontsize=title_fontsize)
axes[1, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[1, 2], **kw);
cax = make_axes_locatable(axes[1, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'nearest'
axes[2, 0].set_title('$u_{interpolated}^{nearest}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(interpolated_data[key], axes=axes[2, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[2, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[2, 1].set_title('$q_{est}^{nearest}$ from $u_{interpolated}^{nearest}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[2, 1], **kw)
cax = make_axes_locatable(axes[2, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[2, 2].set_title('$q_{est}^{nearest}-q_{true}$', fontsize=title_fontsize)
axes[2, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[2, 2], **kw);
cax = make_axes_locatable(axes[2, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'linear'
axes[3, 0].set_title('$u_{interpolated}^{linear}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(interpolated_data[key], axes=axes[3, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[3, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[3, 1].set_title('$q_{est}^{linear}$ from $u_{interpolated}^{linear}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[3, 1], **kw)
cax = make_axes_locatable(axes[3, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[3, 2].set_title('$q_{est}^{linear}-q_{true}$', fontsize=title_fontsize)
axes[3, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[3, 2], **kw);
cax = make_axes_locatable(axes[3, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'clough-tocher'
axes[4, 0].set_title('$u_{interpolated}^{clough-tocher}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(interpolated_data[key], axes=axes[4, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[4, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[4, 1].set_title('$q_{est}^{clough-tocher}$ from $u_{interpolated}^{clough-tocher}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[4, 1], **kw)
cax = make_axes_locatable(axes[4, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[4, 2].set_title('$q_{est}^{clough-tocher}-q_{true}$', fontsize=title_fontsize)
axes[4, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[4, 2], **kw);
cax = make_axes_locatable(axes[4, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
key = 'gaussian'
axes[5, 0].set_title('$u_{interpolated}^{gaussian}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(interpolated_data[key], axes=axes[5, 0], shading='gouraud', **ukw)
cax = make_axes_locatable(axes[5, 0]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[5, 1].set_title('$q_{est}^{gaussian}$ from $u_{interpolated}^{gaussian}$', fontsize=title_fontsize)
colors = firedrake.tripcolor(q_min_set[i][key], axes=axes[5, 1], **kw)
cax = make_axes_locatable(axes[5, 1]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
axes[5, 2].set_title('$q_{est}^{gaussian}-q_{true}$', fontsize=title_fontsize)
axes[5, 2].text(0.5, 0.5, f'$L^2$ Norm {l2errors_set[i][key]:.2f}', ha='center', fontsize=text_fontsize)
colors = firedrake.tripcolor(q_err_set[i][key], axes=axes[5, 2], **kw);
cax = make_axes_locatable(axes[5, 2]).append_axes("right", size="5%", pad=0.05)
fig.colorbar(colors, cax=cax)
# fig.text(0.5,0.05,r'Functional minimised: $J[u, q] = \frac{1}{2}\int_{\Omega_v}\left(\frac{u_{obs} - I(u, \mathrm{P0DG}(\Omega_v))}{\sigma}\right)^2dx + \frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx$', ha='center', va='center', fontsize=20)
plt.savefig(f'posterior-consistency-{2**i}-pts-freecolors.png')
# -
# Source: 4-poisson-inverse-conductivity-posterior-consistency/poisson-inverse-conductivity-posterior-consistency.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pyclesperanto_prototype as cle
import numpy as np
from numpy import random
from skimage.io import imshow, imread
import matplotlib
# Image data source <NAME>, https://imagej.net/File:MorphoLibJ-region-adjacency-graph.png
intensity_image = imread('MorphoLibJ_example_image.tif')
imshow(intensity_image)
# # Starting point: Label map
# +
binary = cle.binary_not(cle.threshold_otsu(intensity_image))
cells = cle.voronoi_labeling(binary)
cle.imshow(cells, labels=True)
# -
# ## Nearest neighbor distance maps
average_distance_of_n_closest_neighbors_map = cle.average_distance_of_n_closest_neighbors_map(cells, n=1)
cle.imshow(average_distance_of_n_closest_neighbors_map, color_map='jet')
average_distance_of_n_closest_neighbors_map = cle.average_distance_of_n_closest_neighbors_map(cells, n=5)
cle.imshow(average_distance_of_n_closest_neighbors_map, color_map='jet')
# ## Touching neighbor distance map
average_neighbor_distance_map = cle.average_neighbor_distance_map(cells)
cle.imshow(average_neighbor_distance_map, color_map='jet')
# ## Shape descriptors: Area
pixel_count_map = cle.label_pixel_count_map(cells)
cle.imshow(pixel_count_map, color_map='jet')
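# The pixel-count map replaces every label with its area in pixels. The same idea can be sketched in plain numpy, independent of pyclesperanto and the GPU (the function name here is illustrative):

```python
import numpy as np

def label_pixel_count_map(labels):
    # replace each label (>0) by the number of pixels carrying that label
    counts = np.bincount(labels.ravel())
    out = counts[labels].astype(float)
    out[labels == 0] = 0  # background stays 0
    return out

labels = np.array([[0, 1, 1],
                   [2, 2, 2],
                   [0, 0, 1]])
print(label_pixel_count_map(labels))
```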
# ## Shape descriptors: Extension ratio
maximum_extension_ratio_map = cle.label_maximum_extension_ratio_map(cells)
cle.imshow(maximum_extension_ratio_map, color_map='jet')
# Source: demo/neighbors/quantitative_neighbor_maps.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Earth's geomagnetic polarity timescale and the Gamma distribution
#
# Earth’s magnetic field varies with time. The most dramatic aspect of this variation is that it reverses its polarity. The field structure in both the normal and reversed states is dipolar (like a bar magnet), but the pole locations are switched. The time that it takes for the field to reverse is relatively short (a few thousand years) compared to the time that it typically spends in a given polarity.
#
# <img src="./images/normal_reversed.png" width = 600>
#
# > Source: Earth’s Dynamic Systems
# (10th Edition) <NAME>. and <NAME>.
#
#
# You have now dealt in detail with data developed by research vessels towing a
# magnetometer to measure marine magnetic anomalies. As you saw, the history of reversals is recorded by the oceanic crust as it forms at the ridge, with both sides of the ridge recording this pattern of reversals, leading to symmetry about the ridge. Both the marine magnetic anomalies and records of the magnetic field on land in sedimentary rocks and lava flows have led to the development of the geomagnetic polarity time scale (GPTS).
#
# <img src="./images/GPTS.png" width = 600>
#
# > Source: Gee and Kent (2007) "Source of Oceanic Magnetic Anomalies and the Geomagnetic Polarity Timescale"
#
# ## Geomagnetic reversals and the Poisson distribution
#
# Geomagnetic reversals are often interpreted to behave like a Poisson process. Recall from class that a Poisson process meets the following criteria:
#
# - Events are independent of each other.
# - The average rate (events per time period) is constant.
# - Two events cannot occur at the same time.
#
# In class, we used a Poisson distribution to describe the chance of observing meteors associated with a meteor shower.
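# For example, the Poisson probability of seeing exactly $k$ events when $\lambda$ are expected on average can be computed with scipy (a quick illustration with a made-up rate, not part of the graded assignment):

```python
from scipy.stats import poisson

lam = 3  # illustrative: 3 expected events per interval
for k in range(6):
    # probability of observing exactly k events in one interval
    print(k, poisson.pmf(k, lam))
# the probabilities over all possible k sum to 1
```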
# ## Setup
#
# Run this cell as it is to setup your environment.
# +
import matplotlib.pyplot as plt
import pandas as pd
pd.options.display.max_rows = 160
import numpy as np
import scipy as sp
from client.api.notebook import Notebook
ok = Notebook('hw06.ok')
# -
# **Import the geomagnetic polarity time scale data for the past 40 million years as a pandas dataframe. (1 point)**
#
# The GPTS.csv file has a start and end date for each polarity zone along with its polarity.
GPTS =
GPTS
# Let's use ```plt.fill()``` to make a plot that looks like the geomagnetic polarity time scale that is shown above. To make such a plot, let's make a list of reversal times when the field switched from normal (1) to reverse (-1) and an accompanying time list that we can then plot:
# +
polarity_code = []
time_list = []
for i in GPTS.index:
if GPTS['Polarity'][i] == 'normal':
polarity_code.append(-1)
polarity_code.append(1)
time_list.append(GPTS['End_Myr'][i])
time_list.append(GPTS['End_Myr'][i])
if GPTS['Polarity'][i] == 'reverse':
polarity_code.append(1)
polarity_code.append(-1)
time_list.append(GPTS['End_Myr'][i])
time_list.append(GPTS['End_Myr'][i])
plt.figure(1,(20,2))
plt.fill(time_list,polarity_code)
plt.xlabel('Age, Myr')
plt.ylabel('Polarity')
plt.xlim(0,40)
plt.ylim(-1,1)
plt.title('Geomagnetic Polarity Time Scale')
plt.show()
# -
# **Calculate the average duration of a geomagnetic polarity zone (4 points).**
#
# To do this you can make a new column in the Dataframe for polarity zone duration where you subtract the start date from the end date.
#
# You can then use ```np.mean()``` to calculate the mean duration, declare a variable named `average_polarity_zone_duration` with the mean polarity duration.
average_polarity_zone_duration =
_ = ok.grade('q1_1')
# **How does the duration of the current normal polarity zone compare to the average duration of a polarity zone (i.e. time between reversals) over the past 40 million years? (2 points)**
#
# *write your answer here*
#
# **Plot a histogram of the polarity zone duration (1 point)**
#
# This is an empirical distribution (i.e. it is the observed data). When you make the histogram, make sure that `density=True`
# **What percentile is the polarity zone duration of the current polarity zone? (4 points)**
#
# If a value is the smallest one (shortest duration), it will have a percentile of 0. If it is the largest one (longest duration), it will have a percentile of 100. The median is the 50th percentile. If I have 124 values and the value I am calculating the percentile for is the 119th largest one, it has a percentile of 119/124 ≈ the 96th percentile.
#
# To determine the percentile, you can sort the data by polarity zone duration. You can do this by applying the `.sort_values()` function to the Dataframe with the name of your duration column as the input parameter; also include the parameter `inplace=True` so it stays sorted. Then use `.reset_index(inplace=True)` on the dataframe to find the rank of the current polarity zone, which you can use to calculate the percentile.
#
# Details on percentile can be found in this inferential thinking chapter:
# https://www.inferentialthinking.com/chapters/13/1/Percentiles.html
#
# *Note that the percentile function they refer to is np.percentile*
#
# Declare a variable named `percentile_current_zone` with your answer.
percentile_current_zone =
_ = ok.grade('q1_2')
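# As a sanity check on the percentile logic described above, here is a toy example with made-up durations (not the GPTS data): the largest value should come out at the 100th percentile.

```python
import numpy as np

durations = np.array([0.1, 0.5, 0.3, 0.78, 0.2])
sorted_durations = np.sort(durations)
# number of values strictly below 0.78
rank = np.searchsorted(sorted_durations, 0.78)
# 1-based rank over the number of values, as in the 119/124 example above
percentile = 100 * (rank + 1) / len(durations)
print(percentile)  # 0.78 is the largest value, so 100.0
```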
# ## Can we describe polarity zone duration with a theoretical distribution?
# ### Gamma distribution:
#
# In class, we discussed the binomial distribution and the Poisson distribution. Another related distribution is the **Gamma distribution**. The **Gamma distribution** gives the probability of a given waiting time between Poisson-distributed events (that is, events that occur randomly but with a constant average rate).
#
# For those of you who will appreciate the theoretical basis for this distribution, here it is below. But you will get a better sense of it by putting it into action:
#
# #### Theoretical
#
# Consider the distribution function $D(x)$ of waiting times until the $h$th Poisson event given a Poisson distribution with a rate of change $\lambda$,
#
# $$ D(x) = P (X \le x) = 1 - P(X > x) = 1-\sum_{k=0}^{h-1}\frac{(\lambda x)^{k}e^{-\lambda x}}{k!} = 1-e^{-\lambda x}\sum_{k=0}^{h-1}\frac{(\lambda x)^{k}}{k!} = 1-\frac{\Gamma(h,x\lambda) }{\Gamma (h)}$$
#
# where $\Gamma (x) = (x-1)!$ is a complete gamma function and $\Gamma (n,x) = (n-1)! e^{-x}\sum_{k=0}^{n-1}\frac{x^{k}}{k!}$ an incomplete gamma function. The corresponding probability function $P(x)$ of waiting times until the $h$th Poisson event is then obtained by differentiating $D(x)$,
#
# $$ P(x) = D'(x) = \frac{\lambda (\lambda x)^{h-1}}{(h-1)!}e^{-\lambda x} $$
#
# Now let $\alpha=h$ (not necessarily an integer) and define $\theta=1/\lambda$ to be the time between changes. Then the above equation can be written
#
# $$ P(x) = \frac{x^{\alpha-1}e^{-x/\theta}}{\Gamma (\alpha) \theta^{\alpha}} $$
#
# which is the probability of a duration time $x$ between events.
#
# $\theta$ is the expected time between reversals and we will follow McFadden (1984) and define $\theta = \mu / \alpha$ where $\mu$ is the average chron duration. A value for $\alpha$ greater than one can be interpreted either as an artefact linked to some short intervals missing in the GPTS, or as some short-term memory within the dynamo that would inhibit a second reversal just after a first one has occurred. McFadden (1984) uses a value for $\alpha$ of 1.2.
#
# <img src="./images/alpha_greater_one.png" width = 600>
#
# > Source: McFadden (1984) "Statistical Tools for the Analysis of Geomagnetic Reversal Sequence"
from scipy.special import gamma
def gamma_probability(x,mu,alpha):
"""
    This function computes the probability of waiting time x between Poisson events (such as a polarity change),
    given mu the average time between changes and alpha the shape parameter for the gamma distribution
Parameters
----------
    x : the wait time whose probability is being investigated
mu : average polarity zone duration
alpha : the shape parameter for the gamma distribution (1.2 for the GPTS according to McFadden (1984))
Returns
-------
prob : probability of wait time x
"""
theta = mu/alpha
prob = (x**(alpha - 1) * np.exp(-1*x/theta)) / (gamma(alpha)* theta**alpha)
return prob
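# As a quick consistency check (with an illustrative $\mu = 0.25$ Myr): the formula coded in `gamma_probability` is exactly the gamma probability density with shape $\alpha$ and scale $\theta = \mu/\alpha$, so it matches scipy's built-in gamma pdf:

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from scipy.stats import gamma as gamma_dist

mu, alpha = 0.25, 1.2                  # illustrative values
theta = mu / alpha
x = np.linspace(0.01, 3.0, 300)
# same expression as in gamma_probability above
prob = x**(alpha - 1) * np.exp(-x/theta) / (gamma_fn(alpha) * theta**alpha)
# maximum deviation from scipy's gamma pdf should be ~0
print(np.max(np.abs(prob - gamma_dist.pdf(x, a=alpha, scale=theta))))
```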
# **Plot the theoretical gamma probability in comparison to the actual distribution (1 point)**
#
# Use the `gamma_probability()` function and calculate $P$ the probability of observing a polarity zone for each value in a range ```np.arange(0.0,3.0,0.1)```. Then plot the resulting curve on top of the polarity zone duration histogram. Make sure to label the plotted lines, put on a legend and label the axis. Following McFadden (1984), **use an alpha value of 1.2.**
# #### Empirical and simulated
#
# The observed GPTS gives us one realization of an empirical distribution. We can use the function `np.random.gamma` to simulate additional empirical distributions.
help(np.random.gamma)
# **Use the `np.random.gamma` function to simulate polarity zones (4 points)**
#
# `np.random.gamma( )` has 2 specified parameters: `shape` (sometimes designated "$\alpha$") and `scale` (sometimes designated "$\theta$"), and an optional keyword argument `size` (if `size` is not specified, it returns a single trial). Each call to `np.random.gamma( )` returns a chron duration pulled from the gamma distribution.
#
# So to get random chron lengths use ```np.random.gamma(shape, scale=1.0, size=None)``` where:
#
# - shape = 1.2 (the alpha we used before)
# - scale = average_polarity_zone_duration/1.2
# - size = number of polarity zones (so we get random simulated data that is the same length as our original data set)
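# A toy run of the recipe above, with an illustrative average duration of 0.25 Myr (not the value you should compute from the data): the simulated durations should be positive and average out near the chosen mean.

```python
import numpy as np

np.random.seed(42)
shape = 1.2
avg_duration = 0.25                                   # illustrative, Myr
simulated = np.random.gamma(shape, scale=avg_duration/shape, size=1000)
print(simulated.mean())  # close to avg_duration, since mean = shape * scale
```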
# **Plot a histogram of the simulated data, the observed data and the theoretical distribution (1 point)**
#
# They should look pretty similar to each other.
# **Figure out a way to plot your new random polarity time scale like we did for the actual time scale above (2 points)**
# ### Will the field reverse soon?!
# But what we _really_ would like to know is how likely it is that a polarity reversal will happen soon. The current normal chron has been going on for 0.78 Myr. To find the probability that a reversal will happen in, say, the next 10 thousand years, we need to find the probability of a chron that is longer than 0.78 Myr but shorter than 0.79 Myr.
# $$P (0.78 \le X \le 0.79) = P(X \le 0.79) - P(X \le 0.78) = (1 - P(X > 0.79)) - (1 - P(X > 0.78))$$
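# As a cross-check on this kind of interval probability (not the graded answer, which should use `gamma_probability`), scipy's gamma CDF can compute the difference of cumulative probabilities directly; the value of $\mu$ below is hypothetical:

```python
from scipy.stats import gamma as gamma_dist

alpha = 1.2
mu = 0.25                  # hypothetical average chron duration, Myr
theta = mu / alpha
# P(0.78 <= X <= 0.79) as a difference of cumulative probabilities
p = gamma_dist.cdf(0.79, a=alpha, scale=theta) - gamma_dist.cdf(0.78, a=alpha, scale=theta)
print(p)  # a small positive probability, since 0.78 Myr is far out in the tail
```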
# **Use the ```gamma_probability``` function to do this calculation (4 points). Declare a variable `P_rev_soon` with your answer.**
P_rev_soon =
_ = ok.grade('q1_3')
# **Based on this probability, do you think the field is about to reverse? (1 point)**
#
# *write your answer here*
# **Export the notebook as .html and upload to bCourses**
# Source: week_06/assignment/.ipynb_checkpoints/W6_Assignment_Geomag-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Adaptive Distances
# ==================
# In this example, we show how and when to use the adaptive distances feature of pyabc. "Adaptive distances" means that the distance function is not pre-defined (e.g. after pre-processing), but evolves over time during the ABC run, depending on the observed summary statistics. This can be useful if different summary statistics vary on different scales, but it is not immediately clear how to weight them. In that case, adaptive distances adjust the weights in each iteration so as to balance the impact of all summary statistics on the computed distance.
#
# Currently, adaptively weighted p-norm distances (e.g. Euclidean) are implemented in pyABC, but it is easily possible to define arbitrary adaptive distances.
# + raw_mimetype="text/restructuredtext" active=""
# The notebook can be downloaded :download:`here <adaptive_distances.ipynb>`.
# -
# For illustration, we consider a simple Gaussian model:
# +
import numpy as np
import tempfile
import os
import matplotlib.pyplot as pyplot
import pyabc.visualization
import logging
# for debugging
df_logger = logging.getLogger('Distance')
df_logger.setLevel(logging.DEBUG)
# model definition
def model(p):
    return {'ss1': p['theta'] + 1 + 0.1*np.random.randn(),
            'ss2': 2 + 10*np.random.randn()}
# true model parameter
theta_true = 3
# observed summary statistics
observation = {'ss1': theta_true + 1, 'ss2': 2}
# prior distribution
prior = pyabc.Distribution(theta=pyabc.RV('uniform', 0, 10))
# database
db_path = "sqlite:///" + os.path.join(tempfile.gettempdir(), "tmp.db")
# -
# Summary statistic ss2 has a high variance compared to summary statistic ss1. In addition, ss1 is informative about the model parameter $\theta$, while ss2 is not. We expect the proposal distribution for $\theta$ to iteratively center around the true value $\theta=3$. Thus, the variability of the sampled ss1 decreases from iteration to iteration, while the variability of the sampled ss2 stays approximately constant. If both summary statistics are weighted similarly in the calculation of the distance between sample and observation, ss2 hence has an undesirably high impact, and convergence can be slowed down. In contrast, if we weight ss1 higher, we may hope that our estimate of $\theta$ improves.
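# The dominance of the high-variance statistic can be seen directly with a quick numpy illustration (separate from the pyabc run): the unweighted Euclidean distance is almost entirely determined by ss2.

```python
import numpy as np

rng = np.random.default_rng(0)
ss1 = 0.1 * rng.standard_normal(1000)    # informative, low variance
ss2 = 10.0 * rng.standard_normal(1000)   # uninformative, high variance

# unweighted Euclidean distance of each sample to the origin
dist = np.sqrt(ss1**2 + ss2**2)
# correlation of the distance with |ss2| alone is ~1
print(np.corrcoef(np.abs(ss2), dist)[0, 1])
```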
# These informal expectations being stated, let us continue with the implementation. First, we consider a non-adaptive Euclidean distance:
# +
distance = pyabc.PNormDistance(p=2)
abc = pyabc.ABCSMC(model, prior, distance)
abc.new(db_path, observation)
history0 = abc.run(minimum_epsilon=.1, max_nr_populations=8)
# -
# Let us visualize the results for the non-adaptive distance:
# +
# plotting
fig, ax = pyplot.subplots()
for t in range(history0.max_t + 1):
df, w = history0.get_distribution(m=0, t=t)
pyabc.visualization.plot_kde_1d(df, w, xmin=0, xmax=10,
x='theta', ax=ax,
label="PDF t={}".format(t))
ax.axvline(theta_true, color='k', linestyle='dashed', label="True value")
ax.legend()
# -
# Second, we consider an adaptive Euclidean distance:
# +
distance_adaptive = pyabc.AdaptivePNormDistance(p=2)
abc = pyabc.ABCSMC(
model, prior, distance_adaptive,
acceptor = pyabc.acceptor.accept_use_complete_history)
abc.new(db_path, observation)
history1 = abc.run(minimum_epsilon=.1, max_nr_populations=8)
# -
# In the debug output of abc.run above, it can be seen how the weights evolve over time. Note that we set the acceptor to ``pyabc.acceptor.accept_use_complete_history`` instead of the default ``pyabc.acceptor.accept_use_current_time`` in order to get nested acceptance regions. This is optional here but may be beneficial sometimes. Let us visualize the results for the adaptive distance:
# +
# plotting
fig, ax = pyplot.subplots()
for t in range(history1.max_t + 1):
df, w = history1.get_distribution(m=0, t=t)
pyabc.visualization.plot_kde_1d(df, w, xmin=0, xmax=10,
x='theta', ax=ax,
label="PDF t={}".format(t))
ax.axvline(theta_true, color='k', linestyle='dashed', label="True value")
ax.legend()
# -
# We observe differences compared to the non-adaptive setting. In particular, the densities tend to be narrower around the true parameter $\theta=3$. In addition to the better convergence, the total number of required samples is lower, as less time was wasted trying to match an uninformative summary statistic:
pyabc.visualization.plot_sample_numbers([history0, history1], ["Fixed distance", "Adaptive distance"])
# In detail, the adaptive distance feature works as follows: In each iteration of the ABCSMC run, after having obtained the desired number of accepted particles (and once at the beginning using a sample from the prior), the method ``DistanceFunction.update()`` is called. It is given a set of summary statistics which can be used to e.g. compute weights for the distance measure in the next iteration. In order to avoid bias, via ``DistanceFunction.configure_sampler()``, the distance function can tell the sampler to not only record accepted particles, but all that were generated during the sampling process.
# So, when you want to define your own adaptive distance function, you will typically only need to overwrite these two methods. For implementation details and an example of how this can look in practice, please inspect the code of ``AdaptivePNormDistance``.
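# The weight-update idea can be sketched without pyabc: after each generation, weight each summary statistic by the inverse of a robust spread measure of its sampled values (the median absolute deviation below), so that all statistics contribute comparably to the distance. The class name and the MAD choice are illustrative, not pyabc's exact implementation:

```python
import numpy as np

class AdaptiveWeightedDistance:
    """Weighted Euclidean distance whose weights adapt to sampled summary stats."""
    def __init__(self, keys):
        self.keys = keys
        self.w = {k: 1.0 for k in keys}

    def update(self, sum_stats):
        # sum_stats: list of dicts of summary statistics sampled in a generation
        for k in self.keys:
            vals = np.array([s[k] for s in sum_stats])
            mad = np.median(np.abs(vals - np.median(vals)))
            self.w[k] = 1.0 / mad if mad > 0 else 1.0

    def __call__(self, x, x0):
        return np.sqrt(sum((self.w[k] * (x[k] - x0[k]))**2 for k in self.keys))

rng = np.random.default_rng(1)
samples = [{'ss1': 0.1*rng.standard_normal(), 'ss2': 10*rng.standard_normal()}
           for _ in range(200)]
d = AdaptiveWeightedDistance(['ss1', 'ss2'])
d.update(samples)
print(d.w)  # ss2's weight is far smaller than ss1's, cancelling its larger scale
```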
# Source: doc/examples/adaptive_distances.ipynb
# ---
# jupyter:
# jupytext:
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp optimizer
# -
#export
from fastai.torch_basics import *
#hide
from nbdev.showdoc import *
# # Optimizer
#
# > Define the general fastai optimizer and the variants
# ## `_BaseOptimizer` -
#export
class _BaseOptimizer():
"Common functionality between `Optimizer` and `OptimWrapper`"
def all_params(self, n=slice(None), with_grad=False):
res = L((p,pg,self.state[p],hyper) for pg,hyper in zip(self.param_lists[n],self.hypers[n]) for p in pg)
return L(o for o in res if o[0].grad is not None) if with_grad else res
def _set_require_grad(self, rg, p,pg,state,h): p.requires_grad_(rg or state.get('force_train', False))
def freeze_to(self, n):
self.frozen_idx = n if n >= 0 else len(self.param_lists) + n
if self.frozen_idx >= len(self.param_lists):
warn(f"Freezing {self.frozen_idx} groups; model has {len(self.param_lists)}; whole model is frozen.")
for o in self.all_params(slice(n, None)): self._set_require_grad(True, *o)
for o in self.all_params(slice(None, n)): self._set_require_grad(False, *o)
def freeze(self):
assert(len(self.param_lists)>1)
self.freeze_to(-1)
def set_freeze(self, n, rg, ignore_force_train=False):
        for p in self.param_lists[n]: p.requires_grad_(rg or (self.state[p].get('force_train', False) and not ignore_force_train))
def unfreeze(self): self.freeze_to(0)
def set_hypers(self, **kwargs): L(kwargs.items()).starmap(self.set_hyper)
def _set_hyper(self, k, v):
for v_,h in zip(v, self.hypers): h[k] = v_
def set_hyper(self, k, v):
if isinstance(v, slice):
if v.start: v = even_mults(v.start, v.stop, len(self.param_lists))
else: v = [v.stop/10]*(len(self.param_lists)-1) + [v.stop]
v = L(v, use_list=None)
if len(v)==1: v = v*len(self.param_lists)
assert len(v) == len(self.hypers), f"Trying to set {len(v)} values for {k} but there are {len(self.param_lists)} parameter groups."
self._set_hyper(k, v)
@property
def param_groups(self): return [{**{'params': pg}, **hp} for pg,hp in zip(self.param_lists, self.hypers)]
@param_groups.setter
def param_groups(self, v):
for pg,v_ in zip(self.param_lists,v): pg = v_['params']
for hyper,v_ in zip(self.hypers,v):
for k,t in v_.items():
if k != 'params': hyper[k] = t
add_docs(_BaseOptimizer,
all_params="List of param_groups, parameters, and hypers",
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
set_freeze="Set `rg` for parameter group `n` only",
unfreeze="Unfreeze the entire model",
set_hypers="`set_hyper` for all `kwargs`",
set_hyper="Set the value(s) in `v` for hyper-parameter `k`")
#export
def _update(state, new=None):
if new is None: return state
if isinstance(new, dict): state.update(new)
return state
# ## `Optimizer` -
# export
@log_args(but='params,cbs,defaults')
class Optimizer(_BaseOptimizer):
"Base optimizer class for the fastai library, updating `params` with `cbs`"
_keep_on_clear = ['force_train', 'do_wd']
def __init__(self, params, cbs, train_bn=True, **defaults):
params = L(params)
self.cbs,self.state,self.train_bn = L(cbs),defaultdict(dict),train_bn
defaults = merge(*self.cbs.attrgot('defaults'), defaults)
self.param_lists = L(L(p) for p in params) if isinstance(params[0], (L,list)) else L([params])
self.hypers = L({} for _ in range_of(self.param_lists))
self.set_hypers(**defaults)
self.frozen_idx = 0
def zero_grad(self):
for p,*_ in self.all_params(with_grad=True):
p.grad.detach_()
p.grad.zero_()
def step(self):
for p,pg,state,hyper in self.all_params(with_grad=True):
for cb in self.cbs: state = _update(state, cb(p, **{**state, **hyper}))
self.state[p] = state
def clear_state(self):
for p,pg,state,hyper in self.all_params():
self.state[p] = {k: state[k] for k in self._keep_on_clear if k in state}
def state_dict(self):
state = [self.state[p] for p,*_ in self.all_params()]
return {'state': state, 'hypers': self.hypers}
def load_state_dict(self, sd):
assert len(sd["hypers"]) == len(self.param_lists)
assert len(sd["state"]) == sum([len(pg) for pg in self.param_lists])
self.hypers = sd['hypers']
self.state = {p: s for p,s in zip(self.all_params().itemgot(0), sd['state'])}
add_docs(Optimizer,
zero_grad="Standard PyTorch API: Zero all the grad attributes of the parameters",
         step="Standard PyTorch API: Update the stats and execute the steppers on all parameters that have a grad",
state_dict="Return the state of the optimizer in a dictionary",
load_state_dict="Load the content of `sd`",
clear_state="Reset the state of the optimizer")
# ### Initializing an Optimizer
# `params` will be used to create the `param_groups` of the optimizer. If it's a collection (or a generator) of parameters, it will be an `L` containing one `L` with all the parameters. To define multiple parameter groups, `params` should be passed as a collection (or a generator) of `L`s.
#
# > Note: In PyTorch, <code>model.parameters()</code> returns a generator with all the parameters, that you can directly pass to <code>Optimizer</code>.
opt = Optimizer([1,2,3], noop)
test_eq(opt.param_lists, [[1,2,3]])
opt = Optimizer(range(3), noop)
test_eq(opt.param_lists, [[0,1,2]])
opt = Optimizer([[1,2],[3]], noop)
test_eq(opt.param_lists, [[1,2],[3]])
opt = Optimizer(([o,o+1] for o in range(0,4,2)), noop)
test_eq(opt.param_lists, [[0,1],[2,3]])
# `cbs` is a list of functions that will be composed when applying the step. For instance, you can compose a function making the SGD step, with another one applying weight decay. Additionally, each `cb` can have a `defaults` attribute that contains hyper-parameters and their default value. Those are all gathered at initialization, and new values can be passed to override those defaults with the `defaults` kwargs. The steppers will be called by `Optimizer.step` (which is the standard PyTorch name), and gradients can be cleared with `Optimizer.zero_grad` (also a standard PyTorch name).
#
# Once the defaults have all been pulled off, they are copied as many times as there are `param_groups` and stored in `hypers`. To apply different hyper-parameters to different groups (differential learning rates, or no weight decay for certain layers for instance), you will need to adjust those values after the init.
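# A plain-Python sketch of this gathering and replication (`gather_defaults` is a hypothetical helper, not the fastai implementation):

```python
# Sketch: pull `defaults` off each callback, let kwargs override them, then
# replicate one independent dict per parameter group.
def gather_defaults(cbs, n_groups, **overrides):
    defaults = {}
    for cb in cbs: defaults.update(getattr(cb, 'defaults', {}))
    defaults.update(overrides)                        # kwargs override cb defaults
    return [dict(defaults) for _ in range(n_groups)]  # one copy per group

def tst_cb(p, lr=0, **kwargs): return p
tst_cb.defaults = dict(lr=1e-2)

hypers = gather_defaults([tst_cb], n_groups=2, mom=0.9)
hypers[0]['lr'] = 1e-3   # groups can diverge after init (e.g. discriminative lrs)
```

Because each group gets its own copy, changing one group's hyper-parameters leaves the others untouched.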
# +
def tst_arg(p, lr=0, **kwargs): return p
tst_arg.defaults = dict(lr=1e-2)
def tst_arg2(p, lr2=0, **kwargs): return p
tst_arg2.defaults = dict(lr2=1e-3)
def tst_arg3(p, mom=0, **kwargs): return p
tst_arg3.defaults = dict(mom=0.9)
def tst_arg4(p, **kwargs): return p
opt = Optimizer([1,2,3], [tst_arg,tst_arg2, tst_arg3])
test_eq(opt.hypers, [{'lr2': 1e-3, 'mom': 0.9, 'lr': 1e-2}])
opt = Optimizer([1,2,3], tst_arg, lr=0.1)
test_eq(opt.hypers, [{'lr': 0.1}])
opt = Optimizer([[1,2],[3]], tst_arg)
test_eq(opt.hypers, [{'lr': 1e-2}, {'lr': 1e-2}])
opt = Optimizer([[1,2],[3]], tst_arg, lr=0.1)
test_eq(opt.hypers, [{'lr': 0.1}, {'lr': 0.1}])
# -
# For each hyper-parameter, you can pass a slice or a collection to set it if there are multiple parameter groups. A slice will be converted to a log-uniform collection from its start to its end, or, if it only has an end `e`, to a collection of as many values as there are parameter groups of the form `..., e/10, e/10, e`.
#
# Setting a hyper-parameter with a collection that has a different number of elements than the optimizer has parameter groups will raise an error.
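# A minimal plain-Python sketch of this expansion (`expand_hyper` and `log_uniform` are illustrative names, not the fastai helpers):

```python
# Sketch: turn a slice or a scalar into one value per parameter group.
def log_uniform(start, stop, n):
    "n values from `start` to `stop`, evenly spaced on a log scale"
    return [start * (stop/start) ** (i/(n-1)) for i in range(n)]

def expand_hyper(v, n_groups):
    if isinstance(v, slice):
        if v.start: return log_uniform(v.start, v.stop, n_groups)
        return [v.stop/10] * (n_groups-1) + [v.stop]  # only an end e: ..., e/10, e/10, e
    return [v] * n_groups
```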
opt = Optimizer([[1,2],[3]], tst_arg, lr=[0.1,0.2])
test_eq(opt.hypers, [{'lr': 0.1}, {'lr': 0.2}])
opt = Optimizer([[1,2],[3],[4]], tst_arg, lr=slice(1e-2))
test_eq(opt.hypers, [{'lr': 1e-3}, {'lr': 1e-3}, {'lr': 1e-2}])
opt = Optimizer([[1,2],[3],[4]], tst_arg, lr=slice(1e-4,1e-2))
test_eq(opt.hypers, [{'lr': 1e-4}, {'lr': 1e-3}, {'lr': 1e-2}])
test_eq(opt.param_groups, [{'params': [1,2], 'lr': 1e-4}, {'params': [3], 'lr': 1e-3}, {'params': [4], 'lr': 1e-2}])
test_fail(lambda: Optimizer([[1,2],[3],[4]], tst_arg, lr=np.array([0.1,0.2])))
# ### Basic steppers
# To be able to give examples of optimizer steps, we will need some steppers, like the following:
#export
def sgd_step(p, lr, **kwargs):
p.data.add_(p.grad.data, alpha=-lr)
def tst_param(val, grad=None):
"Create a tensor with `val` and a gradient of `grad` for testing"
res = tensor([val]).float()
res.grad = tensor([val/10 if grad is None else grad]).float()
return res
p = tst_param(1., 0.1)
sgd_step(p, 1.)
test_eq(p, tensor([0.9]))
test_eq(p.grad, tensor([0.1]))
# +
#export
def weight_decay(p, lr, wd, do_wd=True, **kwargs):
"Weight decay as decaying `p` with `lr*wd`"
if do_wd and wd!=0: p.data.mul_(1 - lr*wd)
weight_decay.defaults = dict(wd=0.)
# -
p = tst_param(1., 0.1)
weight_decay(p, 1., 0.1)
test_eq(p, tensor([0.9]))
test_eq(p.grad, tensor([0.1]))
# +
#export
def l2_reg(p, lr, wd, do_wd=True, **kwargs):
"L2 regularization as adding `wd*p` to `p.grad`"
if do_wd and wd!=0: p.grad.data.add_(p.data, alpha=wd)
l2_reg.defaults = dict(wd=0.)
# -
p = tst_param(1., 0.1)
l2_reg(p, 1., 0.1)
test_eq(p, tensor([1.]))
test_eq(p.grad, tensor([0.2]))
# > Warning: Weight decay and L2 regularization are the same thing for basic SGD, but for more complex optimizers they are very different.
# ### Making the step
show_doc(Optimizer.step)
# This method will loop over all param groups, then over all parameters for which `grad` is not None, and call each function in `cbs`, passing it the parameter `p` together with the hyper-parameters in the corresponding dict in `hypers`.
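# The loop can be sketched in plain Python (strings stand in for parameters; `run_step` and `count_steps` are hypothetical stand-ins, not the fastai implementation):

```python
# Sketch: each callback gets the parameter plus the merged state and hypers,
# and any dict it returns updates the per-parameter state.
def run_step(params, cbs, state, hyper):
    for p in params:
        st = state.get(p, {})
        for cb in cbs:
            new = cb(p, **{**st, **hyper})
            if isinstance(new, dict): st.update(new)
        state[p] = st
    return state

def count_steps(p, step=0, **kwargs): return {'step': step + 1}

state = run_step(['w'], [count_steps], {}, {'lr': 0.1})
state = run_step(['w'], [count_steps], state, {'lr': 0.1})
```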
# +
#test basic step
r = L.range(4)
def tst_params(): return r.map(tst_param)
params = tst_params()
opt = Optimizer(params, sgd_step, lr=0.1)
opt.step()
test_close([p.item() for p in params], r.map(mul(0.99)))
# -
#test two steps
params = tst_params()
opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.1)
opt.step()
test_close([p.item() for p in params], r.map(mul(0.98)))
#test None gradients are ignored
params = tst_params()
opt = Optimizer(params, sgd_step, lr=0.1)
params[-1].grad = None
opt.step()
test_close([p.item() for p in params], [0., 0.99, 1.98, 3.])
#test discriminative lrs
params = tst_params()
opt = Optimizer([params[:2], params[2:]], sgd_step, lr=0.1)
opt.hypers[0]['lr'] = 0.01
opt.step()
test_close([p.item() for p in params], [0., 0.999, 1.98, 2.97])
show_doc(Optimizer.zero_grad)
params = tst_params()
opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.1)
opt.zero_grad()
[test_eq(p.grad, tensor([0.])) for p in params];
# Some of the `Optimizer` `cbs` can be functions updating the state associated with a parameter. That state can then be used by any stepper. The best example is a momentum calculation.
# +
def tst_stat(p, **kwargs):
s = kwargs.get('sum', torch.zeros_like(p)) + p.data
return {'sum': s}
tst_stat.defaults = {'mom': 0.9}
#Test Optimizer init
opt = Optimizer([1,2,3], tst_stat)
test_eq(opt.hypers, [{'mom': 0.9}])
opt = Optimizer([1,2,3], tst_stat, mom=0.99)
test_eq(opt.hypers, [{'mom': 0.99}])
#Test stat
x = torch.randn(4,5)
state = tst_stat(x)
assert 'sum' in state
test_eq(x, state['sum'])
state = tst_stat(x, **state)
test_eq(state['sum'], 2*x)
# -
# ## Statistics
# +
# export
def average_grad(p, mom, dampening=False, grad_avg=None, **kwargs):
"Keeps track of the avg grads of `p` in `state` with `mom`."
if grad_avg is None: grad_avg = torch.zeros_like(p.grad.data)
damp = 1-mom if dampening else 1.
grad_avg.mul_(mom).add_(p.grad.data, alpha=damp)
return {'grad_avg': grad_avg}
average_grad.defaults = dict(mom=0.9)
# -
# `dampening=False` gives the classical formula for momentum in SGD:
# ```
# new_val = old_val * mom + grad
# ```
# whereas `dampening=True` makes it an exponential moving average:
# ```
# new_val = old_val * mom + grad * (1-mom)
# ```
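# The two recurrences spelled out with scalars (`avg_grad` here is an illustrative re-implementation, not the fastai stepper):

```python
# With mom=0.9 and a constant gradient of 1, the classical form accumulates
# toward 1/(1-mom)=10, while the damped form stays an average of the gradients.
def avg_grad(old, grad, mom, dampening=False):
    damp = 1 - mom if dampening else 1.
    return old * mom + grad * damp

classical = ema = 0.
for _ in range(2):
    classical = avg_grad(classical, 1., mom=0.9)                  # 1.0 then 1.9
    ema       = avg_grad(ema,       1., mom=0.9, dampening=True)  # 0.1 then 0.19
```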
# +
p = tst_param([1,2,3], [4,5,6])
state = {}
state = average_grad(p, mom=0.9, **state)
test_eq(state['grad_avg'], p.grad)
state = average_grad(p, mom=0.9, **state)
test_eq(state['grad_avg'], p.grad * 1.9)
#Test dampening
state = {}
state = average_grad(p, mom=0.9, dampening=True, **state)
test_eq(state['grad_avg'], 0.1*p.grad)
state = average_grad(p, mom=0.9, dampening=True, **state)
test_close(state['grad_avg'], (0.1*0.9+0.1)*p.grad)
# +
# export
def average_sqr_grad(p, sqr_mom, dampening=True, sqr_avg=None, **kwargs):
    "Keeps track of the avg of squared grads of `p` in `state` with `sqr_mom`"
    if sqr_avg is None: sqr_avg = torch.zeros_like(p.grad.data)
damp = 1-sqr_mom if dampening else 1.
sqr_avg.mul_(sqr_mom).addcmul_(p.grad.data, p.grad.data, value=damp)
return {'sqr_avg': sqr_avg}
average_sqr_grad.defaults = dict(sqr_mom=0.99)
# -
# `dampening=False` gives the classical formula for the average of squared gradients:
# ```
# new_val = old_val * sqr_mom + grad**2
# ```
# whereas `dampening=True` (the default) makes it an exponential moving average:
# ```
# new_val = old_val * sqr_mom + (grad**2) * (1-sqr_mom)
# ```
# +
p = tst_param([1,2,3], [4,5,6])
state = {}
state = average_sqr_grad(p, sqr_mom=0.99, dampening=False, **state)
test_eq(state['sqr_avg'], p.grad.pow(2))
state = average_sqr_grad(p, sqr_mom=0.99, dampening=False, **state)
test_eq(state['sqr_avg'], p.grad.pow(2) * 1.99)
#Test dampening
state = {}
state = average_sqr_grad(p, sqr_mom=0.99, **state)
test_close(state['sqr_avg'], 0.01*p.grad.pow(2))
state = average_sqr_grad(p, sqr_mom=0.99, **state)
test_close(state['sqr_avg'], (0.01*0.99+0.01)*p.grad.pow(2))
# -
# ### Freezing part of the model
show_doc(Optimizer.freeze, name="Optimizer.freeze")
show_doc(Optimizer.freeze_to, name="Optimizer.freeze_to")
show_doc(Optimizer.unfreeze, name="Optimizer.unfreeze")
# +
#Freezing the first layer
params = [tst_params(), tst_params(), tst_params()]
opt = Optimizer(params, sgd_step, lr=0.1)
opt.freeze_to(1)
req_grad = Self.requires_grad()
test_eq(L(params[0]).map(req_grad), [False]*4)
for i in {1,2}: test_eq(L(params[i]).map(req_grad), [True]*4)
#Unfreezing
opt.unfreeze()
for i in range(2): test_eq(L(params[i]).map(req_grad), [True]*4)
#TODO: test warning
# opt.freeze_to(3)
# -
# Parameters such as batchnorm weights/biases can be marked to always be in training mode: just put `force_train=True` in their state.
params = [tst_params(), tst_params(), tst_params()]
opt = Optimizer(params, sgd_step, lr=0.1)
for p in L(params[1])[[1,3]]: opt.state[p] = {'force_train': True}
opt.freeze()
test_eq(L(params[0]).map(req_grad), [False]*4)
test_eq(L(params[1]).map(req_grad), [False, True, False, True])
test_eq(L(params[2]).map(req_grad), [True]*4)
# ### Serializing
show_doc(Optimizer.state_dict)
show_doc(Optimizer.load_state_dict)
# +
p = tst_param([1,2,3], [4,5,6])
opt = Optimizer(p, average_grad)
opt.step()
test_eq(opt.state[p]['grad_avg'], tensor([[4., 5., 6.]]))
sd = opt.state_dict()
p1 = tst_param([10,20,30], [40,50,60])
opt = Optimizer(p1, average_grad, mom=0.99)
test_eq(opt.hypers[0]['mom'], 0.99)
test_eq(opt.state, {})
opt.load_state_dict(sd)
test_eq(opt.hypers[0]['mom'], 0.9)
test_eq(opt.state[p1]['grad_avg'], tensor([[4., 5., 6.]]))
# -
show_doc(Optimizer.clear_state)
# +
p = tst_param([1,2,3], [4,5,6])
opt = Optimizer(p, average_grad)
opt.state[p] = {'force_train': True}
opt.step()
test_eq(opt.state[p]['grad_avg'], tensor([[4., 5., 6.]]))
opt.clear_state()
test_eq(opt.state[p], {'force_train': True})
# -
# ## Optimizers
# ### SGD with momentum
#export
def momentum_step(p, lr, grad_avg, **kwargs):
"Step for SGD with momentum with `lr`"
p.data.add_(grad_avg, alpha=-lr)
#export
@log_args(to_return=True, but_as=Optimizer.__init__)
def SGD(params, lr, mom=0., wd=0., decouple_wd=True):
"A `Optimizer` for SGD with `lr` and `mom` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
if mom != 0: cbs.append(average_grad)
cbs.append(sgd_step if mom==0 else momentum_step)
return Optimizer(params, cbs, lr=lr, mom=mom, wd=wd)
# Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True`, else as L2 regularization (add the decay to the gradients).
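# For vanilla SGD (no momentum) the two forms give exactly the same update; the difference only appears once momentum or adaptive statistics enter. A scalar check with hypothetical helpers, not the fastai steppers:

```python
def sgd_decoupled(p, g, lr, wd):
    p = p * (1 - lr*wd)          # decay the weights directly...
    return p - lr * g            # ...then take the gradient step

def sgd_l2(p, g, lr, wd):
    return p - lr * (g + wd*p)   # fold the decay into the gradient

a = sgd_decoupled(1.0, 0.1, lr=0.1, wd=0.1)
b = sgd_l2(1.0, 0.1, lr=0.1, wd=0.1)   # both give 0.98
```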
#Vanilla SGD
params = tst_params()
opt = SGD(params, lr=0.1)
opt.step()
test_close([p.item() for p in params], [i*0.99 for i in range(4)])
opt.step()
test_close([p.item() for p in params], [i*0.98 for i in range(4)])
#SGD with momentum
params = tst_params()
opt = SGD(params, lr=0.1, mom=0.9)
assert isinstance(opt, Optimizer)
opt.step()
test_close([p.item() for p in params], [i*0.99 for i in range(4)])
opt.step()
test_close([p.item() for p in params], [i*(1 - 0.1 * (0.1 + 0.1*1.9)) for i in range(4)])
for i,p in enumerate(params): test_close(opt.state[p]['grad_avg'].item(), i*0.19)
# Test weight decay; notice that L2 regularization is different from weight decay even for simple SGD with momentum.
params = tst_params()
#Weight decay
opt = SGD(params, lr=0.1, mom=0.9, wd=0.1)
opt.step()
test_close([p.item() for p in params], [i*0.98 for i in range(4)])
#L2 reg
opt = SGD(params, lr=0.1, mom=0.9, wd=0.1, decouple_wd=False)
opt.step()
#TODO: fix cause this formula was wrong
#test_close([p.item() for p in params], [i*0.97 for i in range(4)])
# ### RMSProp
# +
#export
def rms_prop_step(p, lr, sqr_avg, eps, grad_avg=None, **kwargs):
"Step for SGD with momentum with `lr`"
denom = sqr_avg.sqrt().add_(eps)
p.data.addcdiv_((grad_avg if grad_avg is not None else p.grad), denom, value=-lr)
rms_prop_step.defaults = dict(eps=1e-8)
# -
#export
@log_args(to_return=True, but_as=Optimizer.__init__)
def RMSProp(params, lr, sqr_mom=0.99, mom=0., wd=0., decouple_wd=True):
"A `Optimizer` for RMSProp with `lr`, `sqr_mom`, `mom` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += ([average_sqr_grad] if mom==0. else [average_grad, average_sqr_grad])
cbs.append(rms_prop_step)
return Optimizer(params, cbs, lr=lr, mom=mom, sqr_mom=sqr_mom, wd=wd)
# RMSProp was introduced by Geoffrey Hinton in his [course](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). What is named `sqr_mom` here is the `alpha` in the course. Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True`, else as L2 regularization (add the decay to the gradients).
#Without momentum
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RMSProp(params, lr=0.1)
opt.step()
test_close(params[0], tensor([0.,1.,2.]))
opt.step()
step = - 0.1 * 0.1 / (math.sqrt((0.01*0.99+0.01) * 0.1**2) + 1e-8)
test_close(params[0], tensor([step, 1+step, 2+step]))
#With momentum
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RMSProp(params, lr=0.1, mom=0.9)
opt.step()
test_close(params[0], tensor([0.,1.,2.]))
opt.step()
step = - 0.1 * (0.1 + 0.9*0.1) / (math.sqrt((0.01*0.99+0.01) * 0.1**2) + 1e-8)
test_close(params[0], tensor([step, 1+step, 2+step]))
# ### Adam
#export
def step_stat(p, step=0, **kwargs):
"Register the number of steps done in `state` for `p`"
step += 1
return {'step' : step}
p = tst_param(1,0.1)
state = {}
state = step_stat(p, **state)
test_eq(state['step'], 1)
for _ in range(5): state = step_stat(p, **state)
test_eq(state['step'], 6)
#export
def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom)
# +
#export
def adam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs):
"Step for Adam with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
p.data.addcdiv_(grad_avg, (sqr_avg/debias2).sqrt() + eps, value = -lr / debias1)
return p
adam_step._defaults = dict(eps=1e-5)
# -
#export
@log_args(to_return=True, but_as=Optimizer.__init__)
def Adam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0.01, decouple_wd=True):
"A `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += [partial(average_grad, dampening=True), average_sqr_grad, step_stat, adam_step]
return Optimizer(params, cbs, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
# Adam was introduced by Diederik Kingma and Jimmy Ba in [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980). For consistency across optimizers, we renamed `beta1` and `beta2` in the paper to `mom` and `sqr_mom`. Note that our defaults also differ from the paper (0.99 for `sqr_mom` or `beta2`, 1e-5 for `eps`). Those values seem to be better from our experiments in a wide range of situations.
#
# Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
#
# > Note: Don't forget that `eps` is a hyper-parameter you can change. Some models won't train without a very high `eps` like 0.1 (intuitively, the higher `eps` is, the closer we are to normal SGD). The usual default of 1e-8 is often too extreme, in the sense that we don't manage to get results as good as with SGD.
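# A scalar walk-through of the first Adam update under these defaults (plain Python mirroring the debiasing in `adam_step`, not the fastai tensors):

```python
import math

lr, mom, sqr_mom, eps = 0.1, 0.9, 0.99, 1e-8
p, g = 1.0, 0.1
grad_avg = (1 - mom) * g            # damped average after one step
sqr_avg  = (1 - sqr_mom) * g**2
debias1, debias2 = 1 - mom**1, 1 - sqr_mom**1
p -= lr * (grad_avg / debias1) / (math.sqrt(sqr_avg / debias2) + eps)
# the debiased ratio is g/|g|, so the first step is close to -lr: p goes 1.0 -> 0.9
```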
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = Adam(params, lr=0.1, wd=0)
opt.step()
step = -0.1 * 0.1 / (math.sqrt(0.1**2) + 1e-8)
test_close(params[0], tensor([1+step, 2+step, 3+step]))
opt.step()
test_close(params[0], tensor([1+2*step, 2+2*step, 3+2*step]), eps=1e-3)
# ### RAdam
# RAdam (for rectified Adam) was introduced by Liu et al. in [On the Variance of the Adaptive Learning Rate and Beyond](https://arxiv.org/abs/1908.03265) to slightly modify the Adam optimizer to be more stable at the beginning of training (and thus not require a long warmup). They use an estimate of the variance of the moving average of the squared gradients (the term in the denominator of traditional Adam) and rescale this moving average by this term before performing the update.
#
# This version also incorporates [SAdam](https://arxiv.org/abs/1908.00700); set `beta` to enable this (definition same as in the paper).
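# The rectification term can be evaluated with plain floats (same formulas as in `radam_step` below; illustrative only): with `sqr_mom=0.99`, the variance estimate is considered unusable (`r <= 5`) for the first 5 steps, so those updates fall back to plain debiased momentum.

```python
sqr_mom = 0.99
r_inf = 2/(1 - sqr_mom) - 1
def r(step): return r_inf - 2*step*sqr_mom**step/(1 - sqr_mom**step)

warmup = [r(s) <= 5 for s in range(1, 7)]  # True for steps 1-5, False at step 6
```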
# +
#export
def radam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, beta, **kwargs):
"Step for RAdam with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
r_inf = 2/(1-sqr_mom) - 1
r = r_inf - 2*step*sqr_mom**step/(1-sqr_mom**step)
if r > 5:
v = math.sqrt(((r-4) * (r-2) * r_inf)/((r_inf-4)*(r_inf-2)*r))
denom = (sqr_avg/debias2).sqrt()
if eps: denom += eps
if beta: denom = F.softplus(denom, beta)
p.data.addcdiv_(grad_avg, denom, value = -lr*v / debias1)
else: p.data.add_(grad_avg, alpha=-lr / debias1)
return p
radam_step._defaults = dict(eps=1e-5)
# -
#export
@log_args(to_return=True, but_as=Optimizer.__init__)
def RAdam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0., beta=0., decouple_wd=True):
"A `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += [partial(average_grad, dampening=True), average_sqr_grad, step_stat, radam_step]
return Optimizer(params, cbs, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd, beta=beta)
# This is the effective correction applied to the Adam step for 500 iterations of RAdam. We can see how it goes from 0 to 1, mimicking the effect of a warm-up.
beta = 0.99
r_inf = 2/(1-beta) - 1
rs = np.array([r_inf - 2*s*beta**s/(1-beta**s) for s in range(5,500)])
v = np.sqrt(((rs-4) * (rs-2) * r_inf)/((r_inf-4)*(r_inf-2)*rs))
plt.plot(v);
# +
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RAdam(params, lr=0.1)
#The r factor is lower than 5 during the first 5 steps so updates use the average of gradients (all the same)
r_inf = 2/(1-0.99) - 1
for i in range(5):
r = r_inf - 2*(i+1)*0.99**(i+1)/(1-0.99**(i+1))
assert r <= 5
opt.step()
p = tensor([0.95, 1.9, 2.85])
test_close(params[0], p)
#The r factor is greater than 5 for the sixth step so we update with RAdam
r = r_inf - 2*6*0.99**6/(1-0.99**6)
assert r > 5
opt.step()
v = math.sqrt(((r-4) * (r-2) * r_inf)/((r_inf-4)*(r_inf-2)*r))
step = -0.1*0.1*v/(math.sqrt(0.1**2) + 1e-8)
test_close(params[0], p+step)
# -
# ### QHAdam
# QHAdam (for Quasi-Hyperbolic Adam) was introduced by Ma & Yarats in [Quasi-Hyperbolic Momentum and Adam for Deep Learning](https://arxiv.org/pdf/1810.06801.pdf) as a *"computationally cheap, intuitive to interpret, and simple to implement"* optimizer. Additional code can be found in their [qhoptim repo](https://github.com/facebookresearch/qhoptim). QHAdam is based on QH-Momentum, which introduces the immediate discount factor `nu`, encapsulating plain SGD (`nu = 0`) and momentum (`nu = 1`). QH-Momentum is defined below, where `g_t+1` is the momentum buffer after its update. QHM can be interpreted as a `nu`-weighted average of the momentum update step and the plain SGD update step.
#
# > θ_t+1 ← θ_t − lr * [(1 − nu) · ∇L_t(θ_t) + nu · g_t+1]
#
# QHAdam takes the concept behind QHM above and applies it to Adam, replacing both of Adam’s moment estimators with quasi-hyperbolic terms.
#
# The paper's suggested default parameters are `mom = 0.999`, `sqr_mom = 0.999`, `nu_1 = 0.7` and `nu_2 = 1.0`. When training is not stable, it is possible that setting `nu_2 < 1` can improve stability by imposing a tighter step size bound. Note that QHAdam recovers Adam when `nu_1 = nu_2 = 1.0`. QHAdam recovers RMSProp (Hinton et al., 2012) when `nu_1 = 0` and `nu_2 = 1`, and NAdam (Dozat, 2016) when `nu_1 = mom` and `nu_2 = 1`.
#
# Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
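# A scalar sketch of the QHM rule above (`qhm_update` is an illustrative helper, not the fastai step): the update is a `nu`-weighted blend of the plain SGD direction and the momentum buffer.

```python
def qhm_update(p, g, buf, lr, nu):
    return p - lr * ((1 - nu) * g + nu * buf)

p, g, buf, lr = 1.0, 0.3, 0.5, 0.1
sgd_like = qhm_update(p, g, buf, lr, nu=0.)  # nu=0 ignores the buffer: plain SGD
mom_like = qhm_update(p, g, buf, lr, nu=1.)  # nu=1 uses only the buffer: momentum
```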
# +
#export
def qhadam_step(p, lr, mom, sqr_mom, sqr_avg, nu_1, nu_2, step, grad_avg, eps, **kwargs):
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
p.data.addcdiv_(((1-nu_1) * p.grad.data) + (nu_1 * (grad_avg / debias1)),
(((1 - nu_2) * (p.grad.data)**2) + (nu_2 * (sqr_avg / debias2))).sqrt() + eps,
value = -lr)
return p
qhadam_step._defaults = dict(eps=1e-8)
# -
#export
@log_args(to_return=True, but_as=Optimizer.__init__)
def QHAdam(params, lr, mom=0.999, sqr_mom=0.999, nu_1=0.7, nu_2 = 1.0, eps=1e-8, wd=0., decouple_wd=True):
"An `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `nus`, eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += [partial(average_grad, dampening=True), partial(average_sqr_grad, dampening=True), step_stat, qhadam_step]
return Optimizer(params, cbs, lr=lr, nu_1=nu_1, nu_2=nu_2 ,
mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = QHAdam(params, lr=0.1)
opt.step()
step = -0.1 * (((1-0.7) * 0.1) + (0.7 * 0.1)) / (
math.sqrt(((1-1.0) * 0.1**2) + (1.0 * 0.1**2)) + 1e-8)
test_close(params[0], tensor([1+step, 2+step, 3+step]))
opt.step()
test_close(params[0], tensor([1+2*step, 2+2*step, 3+2*step]), eps=1e-3)
# ### LARS/LARC
# +
#export
def larc_layer_lr(p, lr, trust_coeff, wd, eps, clip=True, **kwargs):
"Computes the local lr before weight decay is applied"
p_norm,g_norm = torch.norm(p.data),torch.norm(p.grad.data)
local_lr = lr*trust_coeff * (p_norm) / (g_norm + p_norm * wd + eps)
return {'local_lr': min(lr, local_lr) if clip else local_lr}
larc_layer_lr.defaults = dict(trust_coeff=0.02, wd=0., eps=1e-8)
# -
#export
def larc_step(p, local_lr, grad_avg=None, **kwargs):
"Step for LARC `local_lr` on `p`"
p.data.add_(p.grad.data if grad_avg is None else grad_avg, alpha = -local_lr)
#export
@log_args(to_return=True, but_as=Optimizer.__init__)
def Larc(params, lr, mom=0.9, clip=True, trust_coeff=0.02, eps=1e-8, wd=0., decouple_wd=True):
"A `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
if mom!=0.: cbs.append(average_grad)
cbs += [partial(larc_layer_lr, clip=clip), larc_step]
return Optimizer(params, cbs, lr=lr, mom=mom, trust_coeff=trust_coeff, eps=eps, wd=wd)
# The LARS optimizer was first introduced in [Large Batch Training of Convolutional Networks](https://arxiv.org/abs/1708.03888) then refined in its LARC variant (original LARS is with `clip=False`). A learning rate is computed for each individual layer with a certain `trust_coeff`, then clipped to always be less than `lr`.
#
# Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
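# The layer-wise learning rate computed with plain floats (same formula as `larc_layer_lr` above; `local_lr` is an illustrative helper, not the fastai code):

```python
import math

def local_lr(p, g, lr, trust_coeff=0.02, wd=0., eps=1e-8, clip=True):
    p_norm = math.sqrt(sum(x*x for x in p))
    g_norm = math.sqrt(sum(x*x for x in g))
    llr = lr * trust_coeff * p_norm / (g_norm + p_norm*wd + eps)
    return min(lr, llr) if clip else llr   # LARC clips at the global lr
```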
params = [tst_param([1,2,3], [0.1,0.2,0.3]), tst_param([1,2,3], [0.01,0.02,0.03])]
opt = Larc(params, lr=0.1)
opt.step()
#First param local lr is 0.02 < lr so it's not clipped
test_close(opt.state[params[0]]['local_lr'], 0.02)
#Second param local lr is 0.2 > lr so it's clipped
test_eq(opt.state[params[1]]['local_lr'], 0.1)
test_close(params[0], tensor([0.998,1.996,2.994]))
test_close(params[1], tensor([0.999,1.998,2.997]))
params = [tst_param([1,2,3], [0.1,0.2,0.3]), tst_param([1,2,3], [0.01,0.02,0.03])]
opt = Larc(params, lr=0.1, clip=False)
opt.step()
#No clipping
test_close(opt.state[params[0]]['local_lr'], 0.02)
test_close(opt.state[params[1]]['local_lr'], 0.2)
test_close(params[0], tensor([0.998,1.996,2.994]))
test_close(params[1], tensor([0.998,1.996,2.994]))
# ### LAMB
# +
#export
def lamb_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs):
"Step for LAMB with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
r1 = p.data.pow(2).mean().sqrt()
step = (grad_avg/debias1) / ((sqr_avg/debias2).sqrt()+eps)
r2 = step.pow(2).mean().sqrt()
q = 1 if r1 == 0 or r2 == 0 else min(r1/r2,10)
p.data.add_(step, alpha = -lr * q)
lamb_step._defaults = dict(eps=1e-6, wd=0.)
# -
#export
@log_args(to_return=True, but_as=Optimizer.__init__)
def Lamb(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0., decouple_wd=True):
"A `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += [partial(average_grad, dampening=True), average_sqr_grad, step_stat, lamb_step]
return Optimizer(params, cbs, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
# LAMB was introduced in [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962). Intuitively, it's LARC applied to Adam. As in `Adam`, we renamed `beta1` and `beta2` in the paper to `mom` and `sqr_mom`. Note that our defaults also differ from the paper (0.99 for `sqr_mom` or `beta2`, 1e-5 for `eps`). Those values seem to be better from our experiments in a wide range of situations.
#
# Optional weight decay of `wd` is applied, as true weight decay (decay the weights directly) if `decouple_wd=True` else as L2 regularization (add the decay to the gradients).
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = Lamb(params, lr=0.1)
opt.step()
test_close(params[0], tensor([0.7840,1.7840,2.7840]), eps=1e-3)
# ## Lookahead -
# Lookahead was introduced by Zhang et al. in [Lookahead Optimizer: k steps forward, 1 step back](https://arxiv.org/abs/1907.08610). It can be run on top of any optimizer and consists in having the final weights of the model be a moving average. In practice, we update our model using the internal optimizer but keep a copy of the old weights, and every `k` steps we change the weights to a moving average of the *fast weights* (the ones updated by the inner optimizer) and the *slow weights* (the copy of the old weights). Those *slow weights* act like a stability mechanism.
#export
@log_args(but='opt')
class Lookahead(Optimizer, GetAttr):
"Wrap `opt` in a lookahead optimizer"
_default='opt'
def __init__(self, opt, k=6, alpha=0.5):
store_attr('opt,k,alpha')
self._init_state()
def step(self):
if self.slow_weights is None: self._copy_weights()
self.opt.step()
self.count += 1
if self.count%self.k != 0: return
for slow_pg,fast_pg in zip(self.slow_weights,self.param_lists):
for slow_p,fast_p in zip(slow_pg,fast_pg):
slow_p.data.add_(fast_p.data-slow_p.data, alpha=self.alpha)
fast_p.data.copy_(slow_p.data)
def clear_state(self):
self.opt.clear_state()
self._init_state()
def state_dict(self):
state = self.opt.state_dict()
state.update({'count': self.count, 'slow_weights': self.slow_weights})
return state
def load_state_dict(self, sd):
self.count = sd.pop('count')
self.slow_weights = sd.pop('slow_weights')
self.opt.load_state_dict(sd)
def _init_state(self): self.count,self.slow_weights = 0,None
def _copy_weights(self): self.slow_weights = L(L(p.clone().detach() for p in pg) for pg in self.param_lists)
@property
def param_lists(self): return self.opt.param_lists
@param_lists.setter
def param_lists(self, v): self.opt.param_lists = v
params = tst_param([1,2,3], [0.1,0.2,0.3])
p,g = params[0].data.clone(),tensor([0.1,0.2,0.3])
opt = Lookahead(SGD(params, lr=0.1))
for k in range(5): opt.step()
#first 5 steps are normal SGD steps
test_close(params[0], p - 0.5*g)
#Since k=6, sixth step is a moving average of the 6 SGD steps with the initial weight
opt.step()
test_close(params[0], p * 0.5 + (p-0.6*g) * 0.5)
#export
@delegates(RAdam)
def ranger(p, lr, mom=0.95, wd=0.01, eps=1e-6, **kwargs):
"Convenience method for `Lookahead` with `RAdam`"
return Lookahead(RAdam(p, lr=lr, mom=mom, wd=wd, eps=eps, **kwargs))
# ## OptimWrapper -
#export
def detuplify_pg(d):
res = {}
for k,v in d.items():
if k == 'params': continue
if is_listy(v): res.update(**{f'{k}__{i}': v_ for i,v_ in enumerate(v)})
else: res[k] = v
return res
tst = {'lr': 1e-2, 'mom': 0.9, 'params':[0,1,2]}
test_eq(detuplify_pg(tst), {'lr': 1e-2, 'mom': 0.9})
tst = {'lr': 1e-2, 'betas': (0.9,0.999), 'params':[0,1,2]}
test_eq(detuplify_pg(tst), {'lr': 1e-2, 'betas__0': 0.9, 'betas__1': 0.999})
#export
def set_item_pg(pg, k, v):
if '__' not in k: pg[k] = v
else:
name,idx = k.split('__')
pg[name] = tuple(v if i==int(idx) else pg[name][i] for i in range_of(pg[name]))
return pg
tst = {'lr': 1e-2, 'mom': 0.9, 'params':[0,1,2]}
test_eq(set_item_pg(tst, 'lr', 1e-3), {'lr': 1e-3, 'mom': 0.9, 'params':[0,1,2]})
tst = {'lr': 1e-2, 'betas': (0.9,0.999), 'params':[0,1,2]}
test_eq(set_item_pg(tst, 'betas__0', 0.95), {'lr': 1e-2, 'betas': (0.95,0.999), 'params':[0,1,2]})
#export
pytorch_hp_map = {'momentum': 'mom', 'weight_decay': 'wd', 'alpha': 'sqr_mom', 'betas__0': 'mom', 'betas__1': 'sqr_mom'}
#export
class OptimWrapper(_BaseOptimizer, GetAttr):
_xtra=['zero_grad', 'step', 'state_dict', 'load_state_dict']
_default='opt'
def __init__(self, opt, hp_map=None):
self.opt = opt
if hp_map is None: hp_map = pytorch_hp_map
self.fwd_map = {k: hp_map[k] if k in hp_map else k for k in detuplify_pg(opt.param_groups[0]).keys()}
self.bwd_map = {v:k for k,v in self.fwd_map.items()}
self.state = defaultdict(dict, {})
self.frozen_idx = 0
@property
def hypers(self):
return [{self.fwd_map[k]:v for k,v in detuplify_pg(pg).items() if k != 'params'} for pg in self.opt.param_groups]
def _set_hyper(self, k, v):
for pg,v_ in zip(self.opt.param_groups,v): pg = set_item_pg(pg, self.bwd_map[k], v_)
def clear_state(self): self.opt.state = defaultdict(dict, {})
@property
def param_lists(self): return [pg['params'] for pg in self.opt.param_groups]
@param_lists.setter
def param_lists(self, v):
for pg,v_ in zip(self.opt.param_groups,v): pg['params'] = v_
sgd = SGD([tensor([1,2,3])], lr=1e-3, mom=0.9, wd=1e-2)
tst_sgd = OptimWrapper(torch.optim.SGD([tensor([1,2,3])], lr=1e-3, momentum=0.9, weight_decay=1e-2))
#Access to param_groups
test_eq(tst_sgd.param_lists, sgd.param_lists)
#Set param_groups
tst_sgd.param_lists = [[tensor([4,5,6])]]
test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
#Access to hypers
test_eq(tst_sgd.hypers, [{**sgd.hypers[0], 'dampening': 0., 'nesterov': False}])
#Set hypers
tst_sgd.set_hyper('mom', 0.95)
test_eq(tst_sgd.opt.param_groups[0]['momentum'], 0.95)
tst_sgd = OptimWrapper(torch.optim.SGD([{'params': [tensor([1,2,3])], 'lr': 1e-3},
{'params': [tensor([4,5,6])], 'lr': 1e-2}], momentum=0.9, weight_decay=1e-2))
sgd = SGD([[tensor([1,2,3])], [tensor([4,5,6])]], lr=[1e-3, 1e-2], mom=0.9, wd=1e-2)
#Access to param_groups
test_eq(tst_sgd.param_lists, sgd.param_lists)
#Set param_groups
tst_sgd.param_lists = [[tensor([4,5,6])], [tensor([1,2,3])]]
test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
test_eq(tst_sgd.opt.param_groups[1]['params'], [tensor(1,2,3)])
#Access to hypers
test_eq(tst_sgd.hypers, [{**sgd.hypers[i], 'dampening': 0., 'nesterov': False} for i in range(2)])
#Set hypers
tst_sgd.set_hyper('mom', 0.95)
test_eq([pg['momentum'] for pg in tst_sgd.opt.param_groups], [0.95,0.95])
tst_sgd.set_hyper('lr', [1e-4,1e-3])
test_eq([pg['lr'] for pg in tst_sgd.opt.param_groups], [1e-4,1e-3])
#hide
#check it works with tuply hp names like in Adam
tst_adam = OptimWrapper(torch.optim.Adam([tensor([1,2,3])], lr=1e-2, betas=(0.9, 0.99)))
test_eq(tst_adam.hypers, [{'lr': 0.01, 'mom': 0.9, 'sqr_mom': 0.99, 'eps': 1e-08, 'wd': 0, 'amsgrad': False}])
tst_adam.set_hyper('mom', 0.95)
test_eq(tst_adam.opt.param_groups[0]['betas'], (0.95, 0.99))
tst_adam.set_hyper('sqr_mom', 0.9)
test_eq(tst_adam.opt.param_groups[0]['betas'], (0.95, 0.9))
def _mock_train(m, x, y, opt):
m.train()
for i in range(0, 100, 25):
z = m(x[i:i+25])
loss = F.mse_loss(z, y[i:i+25])
loss.backward()
opt.step()
opt.zero_grad()
m = nn.Linear(4,5)
x = torch.randn(100, 3, 4)
y = torch.randn(100, 3, 5)
try:
torch.save(m.state_dict(), 'tmp.pth')
wgt,bias = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt1 = OptimWrapper(torch.optim.AdamW(m.parameters(), betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2))
_mock_train(m, x.clone(), y.clone(), opt1)
wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt2 = Adam(m.parameters(), 1e-3, wd=1e-2)
_mock_train(m, x.clone(), y.clone(), opt2)
wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
test_close(wgt1,wgt2,eps=1e-3)
test_close(bias1,bias2,eps=1e-3)
finally: os.remove('tmp.pth')
m = nn.Linear(4,5)
x = torch.randn(100, 3, 4)
y = torch.randn(100, 3, 5)
try:
torch.save(m.state_dict(), 'tmp.pth')
wgt,bias = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt1 = OptimWrapper(torch.optim.Adam(m.parameters(), betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2))
_mock_train(m, x.clone(), y.clone(), opt1)
wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt2 = Adam(m.parameters(), 1e-3, wd=1e-2, decouple_wd=False)
_mock_train(m, x.clone(), y.clone(), opt2)
wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
test_close(wgt1,wgt2,eps=1e-3)
test_close(bias1,bias2,eps=1e-3)
finally: os.remove('tmp.pth')
# ## Export -
#hide
from nbdev.export import *
notebook2script()
# source: nbs/12_optimizer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# ## Deploy <font color='red'>For Seller to update: Title_of_your_ML Model </font> Model Package from AWS Marketplace
#
#
# ## <font color='red'> For Seller to update: Add overview of the ML Model here </font>
#
# This sample notebook shows you how to deploy <font color='red'> For Seller to update:[Title_of_your_ML Model](Provide link to your marketplace listing of your product)</font> using Amazon SageMaker.
#
# > **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook.
#
# #### Pre-requisites:
# 1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.
# 1. Ensure that IAM role used has **AmazonSageMakerFullAccess**
# 1. To deploy this ML model successfully, ensure that:
# 1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used:
# 1. **aws-marketplace:ViewSubscriptions**
# 1. **aws-marketplace:Unsubscribe**
# 1. **aws-marketplace:Subscribe**
# 2. or your AWS account has a subscription to <font color='red'> For Seller to update:[Title_of_your_ML Model](Provide link to your marketplace listing of your product)</font>. If so, skip step: [Subscribe to the model package](#1.-Subscribe-to-the-model-package)
#
# #### Contents:
# 1. [Subscribe to the model package](#1.-Subscribe-to-the-model-package)
# 2. [Create an endpoint and perform real-time inference](#2.-Create-an-endpoint-and-perform-real-time-inference)
# 1. [Create an endpoint](#A.-Create-an-endpoint)
# 2. [Create input payload](#B.-Create-input-payload)
# 3. [Perform real-time inference](#C.-Perform-real-time-inference)
# 4. [Visualize output](#D.-Visualize-output)
# 5. [Delete the endpoint](#E.-Delete-the-endpoint)
# 3. [Perform batch inference](#3.-Perform-batch-inference)
# 4. [Clean-up](#4.-Clean-up)
# 1. [Delete the model](#A.-Delete-the-model)
# 2. [Unsubscribe to the listing (optional)](#B.-Unsubscribe-to-the-listing-(optional))
#
#
# #### Usage instructions
# You can run this notebook one cell at a time (by pressing Shift+Enter to run a cell).
# ### 1. Subscribe to the model package
# To subscribe to the model package:
# 1. Open the model package listing page <font color='red'> For Seller to update:[Title_of_your_product](Provide link to your marketplace listing of your product).</font>
# 1. On the AWS Marketplace listing, click on the **Continue to subscribe** button.
# 1. On the **Subscribe to this software** page, review and click on **"Accept Offer"** if you and your organization agree with the EULA, pricing, and support terms.
# 1. Once you click on **Continue to configuration button** and then choose a **region**, you will see a **Product Arn** displayed. This is the model package ARN that you need to specify while creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify the same in the following cell.
model_package_arn='<Customer to specify Model package ARN corresponding to their AWS region>'
# <font color='red'> For Seller to update: Add all necessary imports in the following cell.
# If specific packages need to be installed, provide them in a separate cell in this section. </font>
import base64
import json
import uuid
from sagemaker import ModelPackage
import sagemaker as sage
from sagemaker import get_execution_role
from urllib.parse import urlparse
import boto3
from IPython.display import Image
from PIL import Image as ImageEdit
import urllib.request
import numpy as np
# +
role = get_execution_role()
sagemaker_session = sage.Session()
bucket=sagemaker_session.default_bucket()
bucket
# -
# ### 2. Create an endpoint and perform real-time inference
# If you want to understand how real-time inference with Amazon SageMaker works, see [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).
# <font color='red'>For Seller to update: update values for four variables in following cell.
# Specify a model/endpoint name using only alphanumeric characters. </font>
# +
model_name='For Seller to update:<specify-model_or_endpoint-name>'
content_type='For Seller to update:<specify_content_type_accepted_by_model>'
real_time_inference_instance_type='For Seller to update:<Update recommended_real-time_inference instance_type>'
batch_transform_inference_instance_type='For Seller to update:<Update recommended_batch_transform_job_inference instance_type>'
# -
# #### A. Create an endpoint
# +
def predict_wrapper(endpoint, session):
return sage.RealTimePredictor(endpoint, session,content_type)
#create a deployable model from the model package.
model = ModelPackage(role=role,
model_package_arn=model_package_arn,
sagemaker_session=sagemaker_session,
predictor_cls=predict_wrapper)
#Deploy the model
predictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=model_name)
# -
# Once the endpoint has been created, you will be able to perform real-time inference.
# #### B. Create input payload
# <font color='red'>For Seller to update: Add code snippet here that reads the input from 'data/input/real-time/' directory
# and converts it into format expected by the endpoint.</font>
# <Add code snippet that shows the payload contents>
# <font color='red'>For Seller to update: Ensure that file_name variable points to the payload you created.
# Ensure that output_file_name variable points to a file-name in which output of real-time inference needs to be stored.</font>
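Until the seller fills this in, here is a hypothetical sketch of creating a payload (the paths and the JSON shape are assumptions, not the model's real interface):

```python
import json
import os

# assumed paths -- replace with your real payload and output locations
file_name = 'data/input/real-time/sample_input.json'
output_file_name = 'data/output/real-time/output.json'

# hypothetical payload shape; use whatever structure your model expects
payload = {'instances': [{'feature_1': 0.5, 'feature_2': 1.2}]}
os.makedirs(os.path.dirname(file_name), exist_ok=True)
with open(file_name, 'w') as f:
    json.dump(payload, f)
```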
# #### C. Perform real-time inference
# <font color='red'>For Seller to update: review/update file_name, output_file name, custom attributes in following AWS CLI to perform a real-time inference using the payload file you created from 2.B </font>
# !aws sagemaker-runtime invoke-endpoint \
# --endpoint-name $model_name \
# --body fileb://$file_name \
# --content-type $content_type \
# --region $sagemaker_session.boto_region_name \
# $output_file_name
# #### D. Visualize output
# <font color='red'>For Seller to update: Write code in following cell to display the output generated by real-time inference. This output must match with output available in data/output/real-time folder.</font>
# <font color='red'>For Seller to update: Get innovative! This is also your opportunity to show-off different capabilities of the model.
# E.g. if your model does object detection, multi-class classification, or regression, repeat steps 2.B,2.C,2.D
# to show different inputs using files and outputs for different classes/objects/edge conditions.</font>
# #### E. Delete the endpoint
# Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged.
predictor=sage.RealTimePredictor(model_name, sagemaker_session,content_type)
predictor.delete_endpoint(delete_endpoint_config=True)
# ### 3. Perform batch inference
# In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see these links:
# 1. [How it works](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-batch-transform.html)
# 2. [How to run a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html)
#upload the batch-transform job input files to S3
transform_input_folder = "data/input/batch"
transform_input = sagemaker_session.upload_data(transform_input_folder, key_prefix=model_name)
print("Transform input uploaded to " + transform_input)
#Run the batch-transform job
transformer = model.transformer(1, batch_transform_inference_instance_type)
transformer.transform(transform_input, content_type=content_type)
transformer.wait()
#output is available on following path
transformer.output_path
# <font color='red'>For Seller to update: Add code that displays output generated by the batch transform job available in S3.
# This output must match the output available in data/output/batch folder.</font>
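One hedged way to do that (the bucket and key layout are assumptions; adapt this to your job's actual output):

```python
from urllib.parse import urlparse

def split_s3_uri(uri):
    # 's3://bucket/prefix/...' -> ('bucket', 'prefix/...')
    parsed = urlparse(uri)
    return parsed.netloc, parsed.path.lstrip('/')

def show_batch_output(output_path):
    # download and print each result object under the job's output prefix
    import boto3
    bucket_name, prefix = split_s3_uri(output_path)
    s3 = boto3.client('s3')
    for obj in s3.list_objects_v2(Bucket=bucket_name, Prefix=prefix).get('Contents', []):
        body = s3.get_object(Bucket=bucket_name, Key=obj['Key'])['Body'].read()
        print(obj['Key'], body[:200])

# show_batch_output(transformer.output_path)
```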
# ### 4. Clean-up
# #### A. Delete the model
model.delete_model()
# #### B. Unsubscribe to the listing (optional)
# If you would like to unsubscribe to the model package, follow these steps. Before you cancel the subscription, ensure that you do not have any [deployable model](https://console.aws.amazon.com/sagemaker/home#/models) created from the model package or using the algorithm. Note - You can find this information by looking at the container name associated with the model.
#
# **Steps to unsubscribe to product from AWS Marketplace**:
# 1. Navigate to __Machine Learning__ tab on [__Your Software subscriptions page__](https://aws.amazon.com/marketplace/ai/library?productType=ml&ref_=mlmp_gitdemo_indust)
# 2. Locate the listing that you want to cancel the subscription for, and then choose __Cancel Subscription__ to cancel the subscription.
#
#
# source: aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/ModelPackage/Sample_Notebook_Template/title_of_your_product-Model.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Demo: fetching positions of S-MODE in-situ assets
import pandas as pd
import matplotlib.pyplot as plt
from tools.config import MAPEXTENT
# B. Greenwood pushes the positions of all assets hourly to a table: http://smode.whoi.edu/index.php/insitu/
# We can fetch the json file with pandas:
assets = pd.read_json('http://smode.whoi.edu/status.php?format=json')
assets
# ### Quick plot: current position of all in-situ assets
assets.plot.scatter(x='longitude', y='latitude')
# ### Quick plot 2: current position of a single type of asset
# get ops area polygon and shore line
map_url = 'https://raw.githubusercontent.com/NASA-SMODE/Maps/main/tools/'
shore = pd.read_json(map_url + 'NorthCalShoreLine.json')
opsarea = pd.read_json(map_url + 'ops_area_polygon.json')
# +
# Plot a single type of asset
asset_type = 'navo_glider'
fig, ax = plt.subplots()
assets[assets.type==asset_type].\
plot.scatter(x='longitude',
y='latitude',
ax = ax)
ax.set_title(asset_type)
shore.plot(x = 'longitude',
y = 'latitude',
color = 'k',
legend=False,
ax = ax
)
opsarea.plot(x = 'longitude',
y = 'latitude',
color = 'k',
legend=False,
ax = ax
)
ax.set_xlim(*MAPEXTENT[:2])
ax.set_ylim(*MAPEXTENT[2:])
# -
# source: InsituAssetsCurrentPosition.ipynb
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sh
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Bash
# language: bash
# name: bash
# ---
# # Visualising transcriptomes with IGV
# ## Introduction
#
# [Integrative Genome Viewer](https://software.broadinstitute.org/software/igv/) (**IGV**) allows us to visualise genomic datasets. We have a quick start guide [here](https://github.com/sanger-pathogens/pathogen-informatics-training/blob/master/Notebooks/IGV/index.ipynb) which contains the information you need to complete [Exercise 3](#Exercise-3). The IGV user guide contains more information on all of the IGV features and functions: http://software.broadinstitute.org/software/igv/UserGuide.
#
# The objectives of this part of the tutorial are:
#
# * _load a reference genome into IGV and navigate the genome_
# * _load an annotation file into IGV and explore gene structure_
# * _load read alignments into IGV and inspect read alignments_
# ***
# ## Exercise 3
# First, we will use `samtools` to create an index for the _P. chabaudi_ reference genome, which IGV will use to traverse the genome. This index file will have the extension **.fai** and should always be in the same directory as the reference genome.
#
# **Make sure you are in the `data` directory with the tutorial files.**
cd data
# **Index the genome fasta file (required by IGV).**
samtools faidx PccAS_v3_genome.fa
# **Start IGV.**
igv.sh
# This will open the IGV main window. Now, we need to tell IGV which genome we want to use. IGV has many pre-loaded genomes available, but _P. chabaudi_ is not one of them. This means we will need to load our genome from a file.
#
# **Load your reference genome into IGV. Go to "_Genomes -> Load Genome from File…_". Select "PccAS_v3_genome.fa" and click "_Open_". For more information, see [Loading a reference genome](https://github.com/sanger-pathogens/pathogen-informatics-training/blob/master/Notebooks/IGV/index.ipynb) in our quick start guide.**
#
# We not only want to see where our reads have mapped, but what genes they have mapped to. For this, we have an annotation file in [GFF3 format](https://www.ensembl.org/info/website/upload/gff3.html). This contains a list of features, their co-ordinates and orientations which correspond to our reference genome.
# 
# **Load your annotation file into IGV. Go to "_File -> Load from File…_". Select "PccAS_v3.gff3" and click "_Open_". For more information, see [Loading gene annotations](https://github.com/sanger-pathogens/pathogen-informatics-training/blob/master/Notebooks/IGV/index.ipynb) in our quick start guide.**
#
# This will load a new track called "PccAS_v3.gff3". The track is currently shown as a density plot. You will need to zoom in to see individual genes.
#
# **Search for the gene PCHAS_0505200 by typing "PCHAS_0505200" in the search box to zoom in and centre the view on PCHAS_0505200.**
# 
# **To get a clearer view of the gene structure, right click on the annotation track and click "Expanded".**
# 
# In the annotation track, genes are presented as blue boxes and lines. These boxes represent exons, while the lines represent intronic regions. Arrows indicate the direction (or strand) of transcription for each of the genes. Now we have our genome and its annotated features, we just need the read alignments for our five samples.
#
# **Load your alignment file for the MT1 sample into IGV. Go to "_File -> Load from File…_". Select "_MT1_sorted.bam_" and click "_Open_". For more information, see [Loading alignment files](https://github.com/sanger-pathogens/pathogen-informatics-training/blob/master/Notebooks/IGV/index.ipynb) in our quick start guide.**
#
# _Note: BAM files and their corresponding index files must be in the same directory for IGV to load them properly._
# 
# This will load a new track called "MT1_sorted.bam" which contains the read alignments for the MT1 sample. We can change how we visualise our data by altering the view options. By default, IGV will display reads individually so they are compactly arranged. If you were to hover over a read in the default view, you will only get the details for that read. However, if we change our view so that the reads are visualised as pairs, the read pairs will be joined together by a line and when we hover over either of the reads, we will get information about both of the reads in that pair.
# **To view our reads as pairs, right click on the MT1_sorted.bam alignment track and click "View as pairs".**
# 
# **To condense the alignment, right click on the MT1_sorted.bam alignment track and click "Squished".**
# 
# For more information on sorting, grouping and visualising read alignments, see the [IGV user guide](http://software.broadinstitute.org/software/igv/UserGuide).
#
# **Load the remaining sorted BAM files for the MT2 sample and the three SBP samples.**
#
# **Using the search box in the toolbar, go to PCHAS_1409500. For more information, see [Jump to gene or locus](https://github.com/sanger-pathogens/pathogen-informatics-training/blob/master/Notebooks/IGV/index.ipynb) in our quick start guide.**
# 
# The first thing to look at is the coverage range for this viewing window on the left-hand side. The three SBP samples have 2-3 times more reads mapping to this gene than the two MT samples. While at first glance it may seem like this gene may be differentially expressed between the two conditions, remember that some samples may have been sequenced to a greater depth than others. So, if a sample has been sequenced to a greater depth we would expect more reads to map in general.
# 
# From the gene annotation at the bottom we can also see that there are three annotated exon/CDS features for this gene. However, the coverage plot suggests there may be a fourth unannotated exon which, given the direction of the gene, could suggest a 5' untranslated region (UTR). Note the clean drop-off of the coverage at around position 377,070.
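The depth caveat above is why expression analyses scale raw counts by library size before comparing samples; a toy Python illustration (all numbers made up) of counts-per-million scaling:

```python
# toy read counts for one gene (made-up numbers)
counts = {'MT1': 200, 'SBP1': 600}
# total mapped reads per sample (made up: SBP1 sequenced 3x deeper)
library_size = {'MT1': 1_000_000, 'SBP1': 3_000_000}

# counts per million (CPM) puts samples of different depth on one scale
cpm = {s: counts[s] / library_size[s] * 1e6 for s in counts}
# the 3x raw difference disappears: both samples sit near 200 CPM
```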
# ***
# ## Questions
#
# ### Q1: How many CDS features are there in "PCHAS_1402500"?
#
# _Hint: Look at [Jump to gene or locus](https://github.com/sanger-pathogens/pathogen-informatics-training/blob/master/Notebooks/IGV/index.ipynb) in our quick start guide._
#
#
# ### Q2: Does the RNA-seq mapping agree with the gene model in blue?
#
# _Hint: Look at the coverage track and split read alignments._
#
#
# ### Q3: Do you think this gene is differentially expressed and is looking at the coverage plots alone a reliable way to assess differential expression?
# _Hint: Look at the coverage similarities/differences between the MT and SBP samples._
# ***
# ## What's next?
#
# You can head back to **[mapping RNA-Seq reads to the genome using HISAT2](genome-mapping.ipynb)** or continue on to **[transcript quantification with Kallisto](transcript-quantification.ipynb)**.
# source: RNA-Seq/transcriptome-visualisation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Lesson 13 - Reinforcement learning
#
# ### Topics:
# 1. Learning to optimize rewards
# 2. Policy search
# 3. Intro to OpenAI Gym
# 4. Neural network policies
# 5. Evaluating actions
# 6. Policy gradients
# 7. Markov decision processes
# 8. Temporal difference learning and Q-learning
# 9. Learning to play PacMan with Deep Q-Learning
# source: lessons/13 - Reinforcement learning.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "59d410a8-ae3c-4da1-89fd-65d802f85ab5", "showTitle": false, "title": ""}
import numpy as np
import pandas as pd
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "9b0f3c4d-255c-4c0b-b4d3-79defe66f4a9", "showTitle": false, "title": ""}
url = 'https://github.com/nflverse/nflfastR-data/raw/master/data/player_stats.parquet'
df = pd.read_parquet(url)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "88965727-7ad1-4381-b438-6e044c837d40", "showTitle": false, "title": ""}
# downcast to float32
cols = df.select_dtypes(include=[np.float64]).columns
df.loc[:, cols] = df.loc[:, cols].astype(np.float32)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "91c86f05-4f97-4cc3-9e92-97a4a16e131c", "showTitle": false, "title": ""}
# add half-ppr scoring
df = df.assign(fantasy_points_hppr=(df.fantasy_points + df.fantasy_points_ppr) / 2)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "1f0fe87d-2e16-4987-ada7-083c0e76300f", "showTitle": false, "title": ""}
# add player positions
pdf = pd.read_csv('https://github.com/nflverse/nflfastR-roster/raw/master/data/nflfastR-roster.csv.gz', compression='gzip', low_memory=False)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "1cd05bee-9df1-410f-a76e-7d8e944ffa17", "showTitle": false, "title": ""}
df = df.join(pdf.set_index(['gsis_id', 'season']).loc[:, ['full_name', 'position']], how='left', on=['player_id', 'season'])
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "24b7c465-ded7-431c-ac12-7f7957a4c631", "showTitle": false, "title": ""}
# filter columns
wanted = ['season', 'week', 'player_id', 'full_name', 'position', 'fantasy_points', 'fantasy_points_ppr', 'fantasy_points_hppr']
df2 = df.loc[df.season == 2020, wanted]
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "00d76cee-bc17-4744-8f9a-4608c2e553cc", "showTitle": false, "title": ""}
# calculate season stats
# seas.loc[seas.position == 'QB', :].sort_values('posrk')
seas = (
df2
.groupby(['player_id', 'full_name', 'position'], as_index=False)
.agg(fptot=('fantasy_points', 'sum'),
fptot_ppr=('fantasy_points_ppr', 'sum'),
fptot_hppr=('fantasy_points_hppr', 'sum'),
fppg=('fantasy_points', 'mean'),
fppg_ppr=('fantasy_points_ppr', 'mean'),
fppg_hppr=('fantasy_points_hppr', 'mean')
)
.assign(posrk=lambda x: x.groupby('position')['fptot_hppr'].rank(method='first', ascending=False))
)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "5e268527-3295-443b-b2a3-9e24b11b9545", "showTitle": false, "title": ""}
# get the top 20 QBs
qbids = seas.loc[(seas.position == 'QB') & (seas.posrk <= 20), 'player_id']
qbs = df2.loc[df2.player_id.isin(qbids), :]
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "e5ce3d3f-33dd-4301-8611-ac69613cb3f2", "showTitle": false, "title": ""}
# we want to be able to simulate a bye
# also need to do it over 16 games based on 15 games from previous year
# so we want to get even-length arrays based on scores from week 1-6
# then we are going to fill with mean value
# then we will test inserting a 0 both at the beginning or one at beginning or one at end
# then we take the greater value of the two
qbs = (
pd.DataFrame({'season': 2020, 'week': range(1, 17)})
.merge(qbs.loc[qbs.week < 17, ['player_id']].drop_duplicates(), how='cross')
.join(qbs.set_index(['season', 'week', 'player_id']), how='left', on=['season', 'week', 'player_id'])
.assign(full_name=lambda x: x.groupby('player_id')['full_name'].bfill().ffill(),
position=lambda x: x.groupby('player_id')['position'].bfill().ffill(),
fantasy_points=lambda x: x.groupby('player_id')['fantasy_points'].transform(lambda y: y.fillna(y.mean())),
fantasy_points_ppr=lambda x: x.groupby('player_id')['fantasy_points_ppr'].transform(lambda y: y.fillna(y.mean())),
fantasy_points_hppr=lambda x: x.groupby('player_id')['fantasy_points_hppr'].transform(lambda y: y.fillna(y.mean()))
)
)
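The cross-merge-and-fill pattern above, shown on a tiny made-up frame (three weeks, one player missing week 3):

```python
import pandas as pd

scores = pd.DataFrame({'week': [1, 2], 'player': ['A', 'A'], 'pts': [10.0, 20.0]})
grid = (
    pd.DataFrame({'week': [1, 2, 3]})
    .merge(scores[['player']].drop_duplicates(), how='cross')  # full week x player grid
    .merge(scores, how='left', on=['week', 'player'])
    .assign(pts=lambda x: x.groupby('player')['pts']
            .transform(lambda y: y.fillna(y.mean())))          # fill gaps with the player's mean
)
```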
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "bcedd72a-bcd5-4ea2-9ddb-da3c4d9eb345", "showTitle": false, "title": ""}
# try out vectorized approach
vals = []
iterations = 100
weeks = 16
rng = np.random.default_rng()
shuffled_indices = rng.integers(0, weeks, size=(iterations, weeks)).argsort(axis=1)
for i in range(1000):
choices = qbids.sample(2).values
p1 = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[0], 'fantasy_points_hppr'].values[shuffled_indices]))
p2 = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[1], 'fantasy_points_hppr'].values[shuffled_indices]))
score = np.array([p1, p2]).max(axis=0)
p1d = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[0], 'fantasy_points_hppr'].values[shuffled_indices]))
p2d= np.column_stack((qbs.loc[lambda x: x.player_id == choices[1], 'fantasy_points_hppr'].values[shuffled_indices], np.zeros(iterations)))
scored = np.array([p1d, p2d]).max(axis=0)
vals.append({'same': score.sum(axis=1).mean(), 'diff': scored.sum(axis=1).mean()})
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "731663ff-c075-4069-a8b0-056085fcae83", "showTitle": false, "title": ""}
pd.DataFrame(vals).assign(delta=lambda x: x['diff'] - x.same).describe()
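The `shuffled_indices` construction used above relies on a small trick: `argsort` of random draws gives an independent permutation of `0..weeks-1` in every row (drawing with `rng.random` instead of `rng.integers` avoids tie-induced bias):

```python
import numpy as np

rng = np.random.default_rng(0)
# each row of `idx` is an independent permutation of [0, 1, 2, 3]
idx = rng.random((3, 4)).argsort(axis=1)
```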
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "07c8dbc4-807f-4bfe-8b21-a47dfed32f8b", "showTitle": false, "title": ""}
# 3 QBs, 2 with same bye
vals = []
iterations = 100
weeks = 16
rng = np.random.default_rng()
shuffled_indices = rng.integers(0, weeks, size=(iterations, weeks)).argsort(axis=1)
for i in range(1000):
# all same bye
choices = qbids.sample(3).values
p1 = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[0], 'fantasy_points_hppr'].values[shuffled_indices]))
p2 = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[1], 'fantasy_points_hppr'].values[shuffled_indices]))
p3 = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[2], 'fantasy_points_hppr'].values[shuffled_indices]))
score = np.array([p1, p2, p3]).max(axis=0)
# two share same bye
p1d = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[0], 'fantasy_points_hppr'].values[shuffled_indices]))
p2d= np.column_stack((qbs.loc[lambda x: x.player_id == choices[1], 'fantasy_points_hppr'].values[shuffled_indices], np.zeros(iterations)))
p3d = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[2], 'fantasy_points_hppr'].values[shuffled_indices]))
scored = np.array([p1d, p2d, p3d]).max(axis=0)
# no shared byes
p1a = np.column_stack((np.zeros(iterations), qbs.loc[lambda x: x.player_id == choices[0], 'fantasy_points_hppr'].values[shuffled_indices]))
p2a= np.column_stack((qbs.loc[lambda x: x.player_id == choices[1], 'fantasy_points_hppr'].values[shuffled_indices], np.zeros(iterations)))
tmp = qbs.loc[lambda x: x.player_id == choices[2], 'fantasy_points_hppr'].values[shuffled_indices]
p3a = np.hstack((tmp[:, :2], np.zeros((iterations, 1)), tmp[:, 2:]))
scorea = np.array([p1a, p2a, p3a]).max(axis=0)
vals.append({'same': score.sum(axis=1).mean(), '1diff': scored.sum(axis=1).mean(), 'adiff': scorea.sum(axis=1).mean()})
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "5d6aa6ac-7d44-4c07-bda1-17fbe144a3b7", "showTitle": false, "title": ""}
pd.DataFrame(vals).describe()
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "d2f6ec63-cbc3-4eb0-8084-8e391e700547", "showTitle": false, "title": ""}
# try to parameterize
def byesim(df: pd.DataFrame,
           n_players: int,
           fpts_col: str,
           weeks: int = 16,
           combinations: int = 1000,
           shuffles: int = 100) -> pd.DataFrame:
    """Simulates the effect of shared vs. staggered bye weeks.

    Args:
        df (DataFrame): the weekly rows for each eligible player
        n_players (int): the number of players to analyze
        fpts_col (str): the column with fantasy points
        weeks (int): the number of scoring weeks per player, default 16;
            a zero-scoring bye week is inserted on top of these
        combinations (int): the number of player combinations to test
        shuffles (int): the number of random shuffles of weekly scores for
            each combination of players

    Returns:
        DataFrame: one row per combination, one column per number of
        staggered byes, holding the mean best-player weekly total
    """
    if n_players < 2:
        raise ValueError('Must have at least 2 players')
    rng = np.random.default_rng()
    # 2D array of shuffled indices, shape (shuffles, weeks)
    shuffled_indices = rng.integers(0, weeks, size=(shuffles, weeks)).argsort(axis=1)
    player_ids = df.player_id.unique()
    vals = []
    for _ in range(combinations):
        choices = rng.choice(player_ids, size=n_players, replace=False)
        # shuffled weekly scores per player, each with shape (shuffles, weeks)
        scores = [df.loc[df.player_id == c, fpts_col].values[shuffled_indices]
                  for c in choices]
        row = {}
        for n_staggered in range(n_players):
            # players 1..n_staggered take their bye in a distinct week;
            # everyone else shares a bye in week 0
            padded = [np.insert(s, j if 1 <= j <= n_staggered else 0, 0.0, axis=1)
                      for j, s in enumerate(scores)]
            row[f'staggered_{n_staggered}'] = (
                np.array(padded).max(axis=0).sum(axis=1).mean())
        vals.append(row)
    return pd.DataFrame(vals)
# -
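# The zero-insertion mechanic the function relies on can be shown in isolation.
# A minimal sketch with made-up weekly scores for two players: each player's
# bye becomes a zero-point week inserted at a (possibly different) index, and
# each week you start whichever player scored more.

```python
import numpy as np

# hypothetical weekly scores over a 4-week stretch
p1 = np.array([10.0, 12.0, 8.0, 14.0])
p2 = np.array([9.0, 20.0, 11.0, 7.0])

# insert each player's bye (a zero-point week), staggered one week apart
p1_sched = np.insert(p1, 0, 0.0)  # bye in week 1 -> [0, 10, 12, 8, 14]
p2_sched = np.insert(p2, 1, 0.0)  # bye in week 2 -> [9, 0, 20, 11, 7]

# start the higher scorer each week, then total the season
best = np.maximum(p1_sched, p2_sched)
print(best.sum())  # 64.0
```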
# # Simulate 4 QBs
# get some trash QBs
trash_qbids = seas.loc[(seas.position == 'QB') & (seas.posrk > 10) & (seas.posrk <= 35), 'player_id'].unique()
tqbs = qbs.loc[lambda x: x.player_id.isin(trash_qbids), :]
# +
# try out vectorized approach
vals = []
weeks = 16
n_players = 4
iterations = 500
rng = np.random.default_rng()
for i in range(10000):
    players = np.array([tqbs.loc[lambda x: x.player_id == choice, 'fantasy_points_hppr'].values
                        for choice in rng.choice(tqbs.player_id.unique(), size=n_players, replace=False)])
    # repeat scores along a new leading axis -> (iterations, n_players, weeks)
    players = np.repeat(players[np.newaxis, :, :], iterations, axis=0)
    # shuffle each player's weekly scores independently within every iteration
    players = rng.permuted(players, axis=2)
    vals.append(players.max(axis=1).sum(axis=1).mean())
# -
pd.DataFrame(data=vals, columns=['scores']).describe()
# +
# try out vectorized approach
vals = []
weeks = 16
n_players = 3
iterations = 500
rng = np.random.default_rng()
for i in range(10000):
    players = np.array([tqbs.loc[lambda x: x.player_id == choice, 'fantasy_points_hppr'].values
                        for choice in rng.choice(tqbs.player_id.unique(), size=n_players, replace=False)])
    # repeat scores along a new leading axis -> (iterations, n_players, weeks)
    players = np.repeat(players[np.newaxis, :, :], iterations, axis=0)
    # shuffle each player's weekly scores independently within every iteration
    players = rng.permuted(players, axis=2)
    vals.append(players.max(axis=1).sum(axis=1).mean())
# -
pd.DataFrame(data=vals, columns=['scores']).describe()
# get some good QBs
good_qbids = seas.loc[(seas.position == 'QB') & (seas.posrk <= 10), 'player_id'].unique()
gqbs = qbs.loc[lambda x: x.player_id.isin(good_qbids), :]
# +
# try out vectorized approach
vals = []
weeks = 16
n_players = 2
iterations = 500
rng = np.random.default_rng()
for i in range(10000):
    players = np.array([gqbs.loc[lambda x: x.player_id == choice, 'fantasy_points_hppr'].values
                        for choice in rng.choice(gqbs.player_id.unique(), size=n_players, replace=False)])
    # repeat scores along a new leading axis -> (iterations, n_players, weeks)
    players = np.repeat(players[np.newaxis, :, :], iterations, axis=0)
    # shuffle each player's weekly scores independently within every iteration
    players = rng.permuted(players, axis=2)
    vals.append(players.max(axis=1).sum(axis=1).mean())
# -
pd.DataFrame(data=vals, columns=['scores']).describe()
| notebooks/byes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="0sVgU84dbDxJ"
# # Midterm - Correction
#
# We are going to go over the Midterm in class Thursday. To prepare, please try the grading code by adding it to the bottom of your notebook. Try to understand and correct what you missed so you can pay extra attention when we go over that part.
#
# Feel free to post questions to Webex Teams. You are also allowed to ask questions of others about the homework. However, all the work has to be yours.
#
#
# -
files = "https://github.com/rpi-techfundamentals/introml_website_fall_2020/raw/master/files/midterm.zip"
# !pip install otter-grader && wget $files && unzip -o midterm.zip
# + id="ScRhUOI2EpQ_"
#This runs all tests.
import otter
grader = otter.Notebook()
grader.check_all()
| site/_build/jupyter_execute/assignments/midterm/hm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github"
# <a href="https://colab.research.google.com/github/AlisonJD/tb_examples/blob/main/Publish_SQL_based_endpoints_on_NGINX_log_analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="lhcftG5bo4qX"
# # Publish SQL-based endpoints on NGINX log analysis
#
# Based on Tinybird blog post:
#
# https://blog.tinybird.co/2021/01/28/nginx-log-analysis/
# + [markdown] id="5k9fWcDVsxWH"
# If you have opened the notebook in Google Colab, then click `Copy to Drive` (see above).
# + id="UsilptjKTgcw" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629802495866, "user_tz": -120, "elapsed": 23301, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="42972aae-111e-4741-d990-a789ed81f78c"
#@title Mount your Google Drive to save and use local files
from google.colab import drive
drive.mount('/content/gdrive', force_remount=False)
% cd "/content/gdrive/My Drive/Colab Notebooks/Tinybird/tb_examples"
# + id="pRpOYp11T94P" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629802521644, "user_tz": -120, "elapsed": 25782, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="6f317b5c-d706-43f0-d4b3-6bff6f5bbf86"
#@title Install Tinybird CLI, utilities and your token
# !pip install tinybird-cli -q
# !sudo apt-get install jq
import os
import re
if not os.path.isfile('.tinyb'):
# !tb auth
if not os.path.isdir('datasources'):
# !tb init
# + id="PieMAhdNUdNV" executionInfo={"status": "ok", "timestamp": 1629802521645, "user_tz": -120, "elapsed": 7, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
#@title Helper function to write to files
def write_text_to_file(filename, text):
with open(filename, 'w') as f: f.write(text)
# + [markdown] id="6ik7672Y9lSJ"
# # Worked Example from Blog:
# ## Publish SQL-based endpoints on NGINX log analysis
#
# Tinybird can be used to analyze datasets like logs.
#
# Here we use Tinybird to:
# - analyze NGINX logs
# - publish SQL queries as API endpoints
#
# We also show you how to model your Data Sources and API Endpoints to make them faster.
#
# + [markdown] id="xIdcjtdg-ns7"
# ## 1. Build the Data Source from a Sample NGINX Log
# Ingest the CSV and transform the columns.
# + colab={"base_uri": "https://localhost:8080/"} id="zQ6EgGIXpYJU" executionInfo={"status": "ok", "timestamp": 1629802528635, "user_tz": -120, "elapsed": 6995, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="72cff231-0694-4597-a1ed-cecdeb546c36"
# !tb datasource generate https://raw.githubusercontent.com/tinybirdco/log_parsing_template/main/access.log.csv --force
# + colab={"base_uri": "https://localhost:8080/"} id="p2YL3On0xGGa" executionInfo={"status": "ok", "timestamp": 1629802530654, "user_tz": -120, "elapsed": 2023, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="1e0ff10a-73cd-4424-cc9a-04d69004b284"
# !tb datasource rm access_log --yes
# + colab={"base_uri": "https://localhost:8080/"} id="N3-uyMHtqB3f" executionInfo={"status": "ok", "timestamp": 1629802533306, "user_tz": -120, "elapsed": 2657, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="52fa1539-c3fa-4af7-aab8-4a1375a27d02"
# !tb push datasources/access_log.datasource --force
# + colab={"base_uri": "https://localhost:8080/"} id="8R1f7Sveqr7M" executionInfo={"status": "ok", "timestamp": 1629802544071, "user_tz": -120, "elapsed": 10769, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="98890ede-7c49-4cd2-81ca-a839442b3a61"
# !tb datasource append access_log 'https://raw.githubusercontent.com/tinybirdco/log_parsing_template/main/access.log.csv'
# + [markdown] id="ZJOjzI_ktRKr"
# Looking at a single record, we see that we need to add column names and extract information from columns.
# + colab={"base_uri": "https://localhost:8080/"} id="Y4149xmAqww7" executionInfo={"status": "ok", "timestamp": 1629802545904, "user_tz": -120, "elapsed": 1837, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="84ef93ea-3d1c-48b6-fd14-6d6abc7dab0b"
# !tb sql "select * from access_log limit 1"
# + [markdown] id="obUAO9NRtigF"
# Let's do that with a Pipe:
# + id="awChfkXEruZ8" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1629802555041, "user_tz": -120, "elapsed": 9139, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="5779c307-5a9e-4a11-9816-1b2a0f1a5f23"
filename="pipes/access_log_transform.pipe"
text='''
DESCRIPTION extract column data from raw access log and name columns
NODE extract_column_data
SQL >
select
IPv4StringToNum(column_00) as ip,
parseDateTimeBestEffort(replaceOne(substring(column_03, 2), ':', ' ')) as time,
splitByChar(' ', column_05) as tt,
tt[1] as method,
tt[2] as path,
tt[3] as protocol,
column_06 as status_code,
column_07 as bytes,
column_09 as user_agent
from access_log
'''
write_text_to_file(filename, text)
# !tb push pipes/access_log_transform.pipe --force
# + colab={"base_uri": "https://localhost:8080/"} id="LnE4QKa6tKM7" executionInfo={"status": "ok", "timestamp": 1629802557067, "user_tz": -120, "elapsed": 2028, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="e863b1f7-3ef6-477f-da79-7bc71630cc12"
# !tb sql "select * from access_log_transform limit 1"
# + [markdown] id="FpZwoHWHuGdi"
# ## 2. Publish an Endpoint
# Create an Endpoint for the number of requests and average bytes for each IP address. An Endpoint can be consumed by your data products.
# + colab={"base_uri": "https://localhost:8080/"} id="Lat5CurctihD" executionInfo={"status": "ok", "timestamp": 1629802565699, "user_tz": -120, "elapsed": 8635, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="8d91935e-41dc-4d50-9f26-a<PASSWORD>fd0"
filename="pipes/requests_per_endpoint.pipe"
text='''
DESCRIPTION requests per endpoint
NODE grouping
SQL >
%
SELECT
ip,
count() AS request_count,
avg(bytes) as avg_bytes
FROM access_log_transform
GROUP BY ip
ORDER BY request_count DESC
NODE endpoint
SQL >
select IPv4NumToString(ip) as ip_address,
request_count,
avg_bytes
from grouping
'''
write_text_to_file(filename, text)
# !tb push pipes/requests_per_endpoint.pipe --force --no-check
# + id="Y64_owGLHYbP" executionInfo={"status": "ok", "timestamp": 1629802566027, "user_tz": -120, "elapsed": 334, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
# TOKEN = !cat ../.tinyb | jq .token
TOKEN = re.search(r'\"(.*?)\"', TOKEN[0]).group()[1:-1]
# + colab={"base_uri": "https://localhost:8080/"} id="QystB7z_4fTc" executionInfo={"status": "ok", "timestamp": 1629802566484, "user_tz": -120, "elapsed": 461, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="e13752a8-1d58-42e3-f760-2e01c17edc54"
# !curl https://api.tinybird.co/v0/pipes/requests_per_endpoint.json?token=$TOKEN
# + colab={"base_uri": "https://localhost:8080/"} id="aWt9tPy1vWEK" executionInfo={"status": "ok", "timestamp": 1629802566937, "user_tz": -120, "elapsed": 457, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="379602c5-0ea8-4718-fe0e-c2062ab37ae8"
# !curl https://api.tinybird.co/v0/pipes/requests_per_endpoint.json?token=$TOKEN |head -n 24
# + colab={"base_uri": "https://localhost:8080/"} id="WxIBtscULsii" executionInfo={"status": "ok", "timestamp": 1629802567540, "user_tz": -120, "elapsed": 604, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="9da20af9-fe72-48d6-9199-f13f99fe4ea5"
# !curl https://api.tinybird.co/v0/pipes/requests_per_endpoint.json?token=$TOKEN |tail -n 16
# + colab={"base_uri": "https://localhost:8080/"} id="mUAi4g_FJRCB" executionInfo={"status": "ok", "timestamp": 1629802569358, "user_tz": -120, "elapsed": 1821, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="e32ca3c6-5cf5-4a0c-91c4-492168ee5f81"
# !tb sql "SELECT count() FROM requests_per_endpoint"
# + colab={"base_uri": "https://localhost:8080/"} id="5Mf_0R_-Jdjf" executionInfo={"status": "ok", "timestamp": 1629802571408, "user_tz": -120, "elapsed": 2053, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="55ea66d4-dd4b-46ce-f165-eb069acb24f2"
# !tb sql "SELECT uniqExact(ip) from access_log_transform"
# + [markdown] id="BEZRnohMyAmg"
# ## 3. Create a Real-Time Endpoint using a Materialized View
# For Endpoints serving real-time dashboards, where the user can pick, for example, different date ranges or add filters, it is unacceptable to have to wait. If we had millions of entries a day, the generated endpoints wouldn’t be as fast as needed.
#
# The solution is to use a materialized view.
# + id="IyzPSWTyvWhT" executionInfo={"status": "ok", "timestamp": 1629802571727, "user_tz": -120, "elapsed": 323, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
filename="pipes/requests_per_endpoint_mv.pipe"
text='''
DESCRIPTION materialized view
NODE matview
SQL >
SELECT
ip,
countState() AS request_count,
avgState(bytes) as avg_bytes
FROM access_log_transform
GROUP BY ip
TYPE Materialized
DATASOURCE requests_per_endpoint_ds
'''
write_text_to_file(filename, text)
filename="datasources/requests_per_endpoint_ds.datasource"
text='''
DESCRIPTION materialized view
SCHEMA >
ip UInt32,
request_count AggregateFunction(count),
avg_bytes AggregateFunction(avg, Int32)
ENGINE AggregatingMergeTree
ENGINE_SORTING_KEY ip
'''
write_text_to_file(filename, text)
# + colab={"base_uri": "https://localhost:8080/"} id="dGxtpFVhyQ_5" executionInfo={"status": "ok", "timestamp": 1629802575501, "user_tz": -120, "elapsed": 3776, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="ed62fc4d-1237-4a04-e281-605645f96fa4"
# !tb pipe rm requests_per_endpoint_mv --yes
# !tb datasource rm requests_per_endpoint_ds --yes
# + [markdown] id="SKgOaFo6zS0v"
# - The materialized view Pipe uses `countState` and `avgState` for the intermediate states. The Data Source has `AggregateFunction` data types to store those intermediate states.
# - The Engine is not the regular `MergeTree` but instead an `AggregatingMergeTree` that tells ClickHouse to aggregate columns on merge operations.
# - The sorting key tells ClickHouse which column is used for grouping.
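# The idea of a mergeable intermediate state can be sketched outside ClickHouse.
# A rough Python analogy (our own illustration, not Tinybird/ClickHouse code):
# an average is kept as a (sum, count) pair, so new batches of rows merge in
# without re-reading the old rows.

```python
def avg_state(values):
    """Like avgState: an intermediate (sum, count) pair instead of a number."""
    return (sum(values), len(values))

def avg_merge(*states):
    """Like avgMerge: combine intermediate states, then finalize."""
    total = sum(s for s, _ in states)
    count = sum(c for _, c in states)
    return total / count

# states from two separately appended batches of rows
batch1 = avg_state([100, 200, 300])
batch2 = avg_state([400])

# merged result equals the average over all rows, computed incrementally
print(avg_merge(batch1, batch2))  # 250.0
```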
# + colab={"base_uri": "https://localhost:8080/"} id="vcL57X-h5AHo" executionInfo={"status": "ok", "timestamp": 1629802578266, "user_tz": -120, "elapsed": 2767, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="84a224b4-d28b-4d27-ebcb-8c3776e54352"
# !tb push datasources/requests_per_endpoint_ds.datasource
# + colab={"base_uri": "https://localhost:8080/"} id="dHQl2sQYw8fe" executionInfo={"status": "ok", "timestamp": 1629802584532, "user_tz": -120, "elapsed": 6269, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="b3ed262a-9c75-4a82-8bf7-9a4a6796da2b"
# !tb push pipes/requests_per_endpoint_mv.pipe --populate --force
# + [markdown] id="ydkkDU6D0-gZ"
#
# `--populate` loads the materialized view with the data already in `access_log_transform`
# + colab={"base_uri": "https://localhost:8080/"} id="hQdA8lBRxCLt" executionInfo={"status": "ok", "timestamp": 1629802586557, "user_tz": -120, "elapsed": 2029, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="8a19f32e-678d-45ef-865b-bce471e510c6"
# !tb sql "select uniqExact(ip) from requests_per_endpoint_ds" --stats
# + [markdown] id="rJ06-Iky1qqs"
# If we push new data with new IP addresses, the number of rows will rise, but compared to the row count of the original table it stays small, so working with this view will be much faster.
#
# Note that this view is updated when you append new data; thanks to the intermediate states, it does not need to recalculate over all the data.
# + colab={"base_uri": "https://localhost:8080/"} id="4i15FqUoyGCU" executionInfo={"status": "ok", "timestamp": 1629802594580, "user_tz": -120, "elapsed": 8027, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="ad040aca-fba6-4aed-f3d6-c7a8142a1202"
filename="pipes/requests_per_endpoint_fast.pipe"
text='''
DESCRIPTION requests per endpoint fast
NODE grouping
SQL >
%
SELECT
ip,
countMerge(request_count) AS request_count,
avgMerge(avg_bytes) as avg_bytes
FROM requests_per_endpoint_ds
GROUP BY ip
ORDER BY request_count DESC
NODE endpoint
SQL >
select IPv4NumToString(ip) as ip_address,
request_count,
avg_bytes
from grouping
'''
write_text_to_file(filename, text)
# !tb push pipes/requests_per_endpoint_fast.pipe --force --no-check
# + id="brOEr2Wk_3pu" executionInfo={"status": "ok", "timestamp": 1629802594581, "user_tz": -120, "elapsed": 13, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
# TOKEN = !cat .tinyb | jq .token
TOKEN = re.search(r'\"(.*?)\"', TOKEN[0]).group()[1:-1]
# + colab={"base_uri": "https://localhost:8080/"} id="UphDa2N4zFAP" executionInfo={"status": "ok", "timestamp": 1629802594919, "user_tz": -120, "elapsed": 344, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="01c18ba6-61fc-4a38-8604-15b686bfbe71"
# !curl https://api.tinybird.co/v0/pipes/requests_per_endpoint.json\?token\=$TOKEN | jq .statistics
# + colab={"base_uri": "https://localhost:8080/"} id="gohayO4Uzmbq" executionInfo={"status": "ok", "timestamp": 1629802595498, "user_tz": -120, "elapsed": 581, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}} outputId="77a5424b-4aca-4d1b-ea5e-15b8afc2e840"
# !curl https://api.tinybird.co/v0/pipes/requests_per_endpoint_fast.json\?token\=$TOKEN | jq .statistics
# + [markdown] id="7fAKb9ci36c2"
# The new API Endpoint using the materialized view is faster and reads far less data.
#
# Materialized views could be used for:
#
# - unique ip addresses by day:
# ```
# select toDate(time) day, uniqState(ip) uniq_ip
# from access_log_transform
# group by day
# ```
# - percentile 95 of payload size per hour:
# ```
# select toStartOfHour(time) hour, quantileState(0.95)(bytes) q95
# from access_log_transform
# group by hour
# ```
# - requests per month:
# ```
# select toStartOfMonth(time) month, countState() requests_count
# from access_log_transform
# group by month
# ```
#
#
# + id="h88w4zSTy8NO" executionInfo={"status": "ok", "timestamp": 1629802595499, "user_tz": -120, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjR6FzglAHgqJO9N5pUs5_hF-eSyALRtNRZt1Qv=s64", "userId": "18129615668573271349"}}
| Publish_SQL_based_endpoints_on_NGINX_log_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Keras MNIST Model Deployment
#
# * Wrap a TensorFlow MNIST Python model for use as a prediction microservice in seldon-core
# * Run locally on Docker to test
# * Deploy on seldon-core running on minikube
#
# ## Dependencies
#
# * [Helm](https://github.com/kubernetes/helm)
# * [Minikube](https://github.com/kubernetes/minikube)
# * [S2I](https://github.com/openshift/source-to-image)
#
# ```bash
# pip install seldon-core
# pip install keras
# ```
#
# ## Train locally
#
# +
import numpy as np
import math
import datetime
#from seldon.pipeline import PipelineSaver
import os
import tensorflow as tf
from keras import backend
from keras.models import Model,load_model
from keras.layers import Dense,Input
from keras.layers import Dropout
from keras.layers import Flatten, Reshape
from keras.constraints import maxnorm
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.callbacks import TensorBoard
class MnistFfnn(object):
def __init__(self,
input_shape=(784,),
nb_labels=10,
optimizer='Adam',
run_dir='tensorboardlogs_test'):
self.model_name='MnistFfnn'
self.run_dir=run_dir
self.input_shape=input_shape
self.nb_labels=nb_labels
self.optimizer=optimizer
self.build_graph()
def build_graph(self):
inp = Input(shape=self.input_shape,name='input_part')
#keras layers
with tf.name_scope('dense_1') as scope:
h1 = Dense(256,
activation='relu',
W_constraint=maxnorm(3))(inp)
drop1 = Dropout(0.2)(h1)
with tf.name_scope('dense_2') as scope:
h2 = Dense(128,
activation='relu',
W_constraint=maxnorm(3))(drop1)
drop2 = Dropout(0.5)(h2)
out = Dense(self.nb_labels,
activation='softmax')(drop2)
self.model = Model(inp,out)
if self.optimizer == 'rmsprop':
self.model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
elif self.optimizer == 'Adam':
self.model.compile(loss='categorical_crossentropy',
optimizer='Adam',
metrics=['accuracy'])
        print('graph built')
def fit(self,X,y=None,
X_test=None,y_test=None,
batch_size=128,
nb_epochs=2,
shuffle=True):
now = datetime.datetime.now()
tensorboard_logname = self.run_dir+'/{}_{}'.format(self.model_name,
now.strftime('%Y.%m.%d_%H.%M'))
tensorboard = TensorBoard(log_dir=tensorboard_logname)
self.model.fit(X,y,
validation_data=(X_test,y_test),
callbacks=[tensorboard],
batch_size=batch_size,
nb_epoch=nb_epochs,
shuffle = shuffle)
return self
def predict_proba(self,X):
return self.model.predict_proba(X)
def predict(self, X):
probas = self.model.predict_proba(X)
return([[p>0.5 for p in p1] for p1 in probas])
def score(self, X, y=None):
pass
def get_class_id_map(self):
return ["proba"]
class MnistConv(object):
def __init__(self,
input_shape=(784,),
nb_labels=10,
optimizer='Adam',
run_dir='tensorboardlogs_test',
saved_model_file='MnistClassifier.h5'):
self.model_name='MnistConv'
self.run_dir=run_dir
self.input_shape=input_shape
self.nb_labels=nb_labels
self.optimizer=optimizer
self.saved_model_file=saved_model_file
self.build_graph()
def build_graph(self):
inp = Input(shape=self.input_shape,name='input_part')
inp2 = Reshape((28,28,1))(inp)
#keras layers
with tf.name_scope('conv') as scope:
conv = Convolution2D(32, 3, 3,
input_shape=(32, 32, 3),
border_mode='same',
activation='relu',
W_constraint=maxnorm(3))(inp2)
drop_conv = Dropout(0.2)(conv)
max_pool = MaxPooling2D(pool_size=(2, 2))(drop_conv)
with tf.name_scope('dense') as scope:
flat = Flatten()(max_pool)
dense = Dense(128,
activation='relu',
W_constraint=maxnorm(3))(flat)
drop_dense = Dropout(0.5)(dense)
out = Dense(self.nb_labels,
activation='softmax')(drop_dense)
self.model = Model(inp,out)
if self.optimizer == 'rmsprop':
self.model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
elif self.optimizer == 'Adam':
self.model.compile(loss='categorical_crossentropy',
optimizer='Adam',
metrics=['accuracy'])
        print('graph built')
def fit(self,X,y=None,
X_test=None,y_test=None,
batch_size=128,
nb_epochs=2,
shuffle=True):
now = datetime.datetime.now()
tensorboard_logname = self.run_dir+'/{}_{}'.format(self.model_name,
now.strftime('%Y.%m.%d_%H.%M'))
tensorboard = TensorBoard(log_dir=tensorboard_logname)
self.model.fit(X,y,
validation_data=(X_test,y_test),
callbacks=[tensorboard],
batch_size=batch_size,
nb_epoch=nb_epochs,
shuffle = shuffle)
#if not os.path.exists('saved_model'):
# os.makedirs('saved_model')
self.model.save(self.saved_model_file)
return self
def predict_proba(self,X):
return self.model.predict_proba(X)
def predict(self, X):
probas = self.model.predict_proba(X)
return([[p>0.5 for p in p1] for p1 in probas])
def score(self, X, y=None):
pass
def get_class_id_map(self):
return ["proba"]
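# The `predict` methods above threshold each softmax probability at 0.5. A
# standalone sketch of that conversion, with made-up probabilities for two
# samples over three classes:

```python
import numpy as np

# hypothetical softmax outputs: each row sums to 1
probas = np.array([[0.05, 0.90, 0.05],
                   [0.60, 0.30, 0.10]])

# boolean matrix: True where the class probability exceeds 0.5
labels = probas > 0.5
print(labels.tolist())  # [[False, True, False], [True, False, False]]
```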
# +
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data/MNIST_data', one_hot=True)
X_train = mnist.train.images
y_train = mnist.train.labels
X_test = mnist.test.images
y_test = mnist.test.labels
mc = MnistConv()
mc.fit(X_train,y=y_train,
X_test=X_test,y_test=y_test)
# -
# Wrap model using s2i
# !s2i build . seldonio/seldon-core-s2i-python3:0.7 keras-mnist:0.1
# !docker run --name "mnist_predictor" -d --rm -p 5000:5000 keras-mnist:0.1
# Send some random features that conform to the contract
# !seldon-core-tester contract.json 0.0.0.0 5000 -p
# !docker rm mnist_predictor --force
# # Test using Minikube
#
# **Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
# !minikube start --memory 4096
# !kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
# !helm init
# !kubectl rollout status deploy/tiller-deploy -n kube-system
# !helm install ../../../helm-charts/seldon-core-operator --name seldon-core --set usageMetrics.enabled=true --namespace seldon-system
# !kubectl rollout status statefulset.apps/seldon-operator-controller-manager -n seldon-system
# ## Setup Ingress
# There are gRPC issues with the latest Ambassador, so we recommend 0.40.2 until these are fixed.
# !helm install stable/ambassador --name ambassador --set crds.keep=false
# !kubectl rollout status deployment.apps/ambassador
# !eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python3:0.7 keras-mnist:0.1
# !kubectl create -f keras_mnist_deployment.json
# !kubectl rollout status deploy/keras-mnist-deployment-keras-mnist-predictor-8baf5cc
# !seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
# seldon-deployment-example --namespace default -p
# !minikube delete
| examples/models/keras_mnist/keras_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Machine learning using scikit-learn
#
# There are two kinds of machine learning algorithms: *supervised* and *unsupervised* learning.
#
# Examples of supervised algorithms: classification, regression, etc.
# Examples of unsupervised algorithms: clustering, dimension reduction, etc.
#
# ## scikit-learn estimators
#
# Scikit-learn strives to have a uniform interface across all objects. Given a scikit-learn *estimator* named `model`, the following methods are available:
#
# - Available in **all estimators**
# + `model.fit()` : Fit training data. For supervised learning applications,
# this accepts two arguments: the data `X` and the labels `y` (e.g., `model.fit(X, y)`).
# For unsupervised learning applications, ``fit`` takes only a single argument,
# the data `X` (e.g. `model.fit(X)`).
#
# - Available in **supervised estimators**
# + `model.predict()` : Given a trained model, predict the label of a new set of data.
# This method accepts one argument, the new data `X_new` (e.g., `model.predict(X_new)`),
# and returns the learned label for each object in the array.
# + `model.fit_predict()`: Fits and predicts at the same time.
# + `model.predict_proba()` : For classification problems, some estimators also provide
# this method, which returns the probability that a new observation has each categorical label.
# In this case, the label with the highest probability is returned by `model.predict()`.
# + `model.score()` : An indication of how well the model fits the training data. Scores are between 0 and 1, with a larger score indicating a better fit.
#
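# As a quick end-to-end illustration of this interface (the estimator and data
# here are our own choices, not prescribed by the text), fit a decision tree
# classifier on the built-in iris data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)                      # supervised: data + labels
preds = model.predict(X[:3])         # labels for "new" data
probs = model.predict_proba(X[:3])   # per-class probabilities
print(model.score(X, y))             # between 0 and 1; higher fits better
```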
# ## Data in scikit-learn
#
# Data in scikit-learn, with very few exceptions, is assumed to be stored as a
# **two-dimensional array** of size `[n_samples, n_features]`. Many algorithms also accept ``scipy.sparse`` matrices of the same shape.
#
# - **n_samples:** The number of samples: each sample is an item to process (e.g., classify).
# A sample can be a document, a picture, a sound, a video, an astronomical object,
# a row in a database or CSV file, or whatever you can describe with a fixed set of quantitative traits.
# - **n_features:** The number of features or distinct traits that can be used to describe each
# item in a quantitative manner. Features are generally real-valued, but may be boolean or
# discrete-valued in some cases.
#
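# For instance (our own quick check, using the iris data that appears later in
# this notebook), the expected `[n_samples, n_features]` layout looks like this:

```python
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data
print(X.shape)  # (150, 4): 150 samples, 4 features per sample
```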
# ### Numerical vs. categorical
#
# What if you have categorical features? For example, imagine there is a dataset containing the color of the
# iris:
#
# color in [red, blue, purple]
#
# You might be tempted to assign numbers to these features, i.e. *red=1, blue=2, purple=3*
# but in general **this is a bad idea**. Estimators tend to operate under the assumption that
# numerical features lie on some continuous scale, so, for example, 1 and 2 are more alike
# than 1 and 3, and this is often not the case for categorical features.
#
# A better strategy is to give each category its own dimension.
# The enriched iris feature set would hence be in this case:
#
# - sepal length in cm
# - sepal width in cm
# - petal length in cm
# - petal width in cm
# - color=purple (1.0 or 0.0)
# - color=blue (1.0 or 0.0)
# - color=red (1.0 or 0.0)
#
# Note that using many of these categorical features may result in data which is better
# represented as a **sparse matrix**, as we'll see with the text classification example
# below.
#
# #### Using the DictVectorizer to encode categorical features
#
# When the source data is a list of dicts whose values are either string names of categories or numerical values, you can use the `DictVectorizer` class to compute the boolean expansion of the categorical features while leaving the numerical features untouched:
measurements = [
{'city': 'Dubai', 'temperature': 33.},
{'city': 'London', 'temperature': 12.},
{'city': 'San Francisco', 'temperature': 18.},
]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
tf_measurements = vec.fit_transform(measurements)
tf_measurements.toarray()
vec.get_feature_names_out()  # on scikit-learn < 1.0, use vec.get_feature_names()
# #### Using Pandas to encode categorical features
# You can also use pandas to encode categorical features with the [get_dummies](https://pandas.pydata.org/docs/reference/api/pandas.get_dummies.html) function.
import pandas as pd
pd_measurement = pd.DataFrame(measurements)
pd_measurement.head()
# get all categorical features of the data
pd_measurement.select_dtypes(exclude=['number']).columns
# +
# encode the categorical features
encoded_data = pd.get_dummies(pd_measurement)
# hint: be careful about which features to encode. For example, avoid the 'car' (= name) feature from the mtcars.csv file, as it is an identifier for every car
encoded_data.head()
# -
# ## Unsupervised Clustering using K-Means
# +
#disable some annoying warning
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
# %matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# +
#load the iris datasets
import sklearn.datasets
data = sklearn.datasets.load_iris()
data.data.shape
# -
from sklearn.cluster import KMeans
iris_pred = KMeans(n_clusters=3, random_state = 102).fit_predict(data.data)
# +
plt.figure(figsize=(12, 12))
colors = sns.color_palette()
plt.subplot(211)
plt.scatter(data.data[:, 0], data.data[:, 1], c=[colors[i] for i in iris_pred], s=40)
plt.title('KMeans-3 clusterer')
plt.xlabel(data.feature_names[0])
plt.ylabel(data.feature_names[1])
plt.subplot(212)
plt.scatter(data.data[:, 0], data.data[:, 1], c=[colors[i] for i in data.target],s=40)
plt.title('Ground Truth')
plt.xlabel(data.feature_names[0])
plt.ylabel(data.feature_names[1])
# -
# ## Supervised classification using decision trees
#
# Well, the result is not that great. Let's use a supervised classifier.
#
# First, split our data into training and test set.
# +
import sklearn.model_selection
data_train, data_test, target_train, target_test = sklearn.model_selection.train_test_split(
data.data, data.target, test_size=0.20, random_state = 5)
print(data.data.shape, data_train.shape, data_test.shape)
# -
# Now, we use a *DecisionTree* to learn a model and test our result.
# +
from sklearn.tree import DecisionTreeClassifier
instance = DecisionTreeClassifier()
r = instance.fit(data_train, target_train)
target_predict = instance.predict(data_test)
from sklearn.metrics import accuracy_score
print('Prediction accuracy: ', accuracy_score(target_predict, target_test))
# -
# Pretty good, isn't it?
# ## Dimension reduction using MDS and PCA
#
# If we go back to our K-Means example, the clustering doesn't really make sense. However, we are just looking at two out of four dimensions. So, we can't really see the real distances/similarities between items. Dimension reduction techniques reduce the number of dimensions, while preserving the inner structure of the higher dimensions. We take a look at two of them: Multi Dimensional Scaling (MDS) and Principal Component Analysis (PCA).
# +
from sklearn import manifold
#create mds instance
mds = manifold.MDS(n_components=2, random_state=5)
#fit the model and get the embedded coordinates
pos = mds.fit(data.data).embedding_
plt.scatter(pos[:, 0], pos[:, 1], s=20, c=[colors[i] for i in data.target])
#create a legend since we just have one plot and not three fake the legend using patches
import matplotlib.patches as mpatches
patches = [ mpatches.Patch(color=colors[i], label=data.target_names[i]) for i in range(3) ]
plt.legend(handles=patches)
# +
#compare with PCA
from sklearn import decomposition
pca = decomposition.PCA(n_components=2)
pca_pos = pca.fit(data.data).transform(data.data)
mds_pos = mds.fit(data.data).embedding_
plt.figure(figsize=[20,7])
plt.subplot(121)
plt.scatter(mds_pos[:, 0], mds_pos[:, 1], s=30, c=[colors[i] for i in data.target])
plt.title('MDS')
plt.subplot(122)
plt.scatter(pca_pos[:, 0], pca_pos[:, 1], s=30, c=[colors[i] for i in data.target])
plt.title('PCA')
# -
# Seems like versicolor and virginica are more similar to each other than to setosa.
# ## TASK
#
# > Create an interactive colored plot of the Iris dataset projected in 2D using MDS. The color should correspond to the result of a K-Means clustering algorithm where the user can interactively define the number of clusters between 1 and 10.
# Thanks!
| tutorial/05_MachineLearning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [py35]
# language: python
# name: Python [py35]
# ---
# # Classifying 1984 US House of Representatives Voting Records by Party Affiliation
#
# ## Information
#
# Downloaded from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/congressional+voting+records) on 13 November 2016. The dataset description is as follows:
#
# - Data Set: Multivariate
# - Attribute: Real
# - Tasks: Classification
# - Instances: 435
# - Attributes: 16
# - Missing Values: Yes
# - Area: Social
# - Date Donated: 1987-04-27
#
# ### Data Set Information:
#
# This data set includes votes for each of the U.S. House of Representatives Congressmen on the 16 key votes identified by the CQA. The CQA lists nine different types of votes: voted for, paired for, and announced for (these three simplified to yea), voted against, paired against, and announced against (these three simplified to nay), voted present, voted present to avoid conflict of interest, and did not vote or otherwise make a position known (these three simplified to an unknown disposition).
#
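# The nine-to-three collapse described above can be written as a small lookup table (an illustrative sketch only; the dataset itself already ships with the simplified y/n/? labels):

```python
# Hypothetical mapping from the nine CQA vote types to the three
# simplified labels (y = yea, n = nay, ? = unknown disposition).
SIMPLIFIED = {
    "voted for": "y", "paired for": "y", "announced for": "y",
    "voted against": "n", "paired against": "n", "announced against": "n",
    "voted present": "?", "voted present to avoid conflict of interest": "?",
    "did not vote": "?",
}
print(SIMPLIFIED["paired for"])  # y
```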
# ### Attribute Information:
#
# 1. Class Name: 2 (democrat, republican)
# 2. handicapped-infants: 2 (y,n)
# 3. water-project-cost-sharing: 2 (y,n)
# 4. adoption-of-the-budget-resolution: 2 (y,n)
# 5. physician-fee-freeze: 2 (y,n)
# 6. el-salvador-aid: 2 (y,n)
# 7. religious-groups-in-schools: 2 (y,n)
# 8. anti-satellite-test-ban: 2 (y,n)
# 9. aid-to-nicaraguan-contras: 2 (y,n)
# 10. mx-missile: 2 (y,n)
# 11. immigration: 2 (y,n)
# 12. synfuels-corporation-cutback: 2 (y,n)
# 13. education-spending: 2 (y,n)
# 14. superfund-right-to-sue: 2 (y,n)
# 15. crime: 2 (y,n)
# 16. duty-free-exports: 2 (y,n)
# 17. export-administration-act-south-africa: 2 (y,n)
#
# ### Relevant Papers:
#
# <NAME>. (1987). Concept acquisition through representational adjustment. Doctoral dissertation, Department of Information and Computer Science, University of California, Irvine, CA.
# ## Python Package(s) Used
import dill
import json
import numpy as np
import os
import pandas as pd
import requests
import time
import matplotlib.pyplot as plt
from pandas.tools.plotting import parallel_coordinates, radviz  # pandas.plotting in pandas >= 0.20
import seaborn as sns
from sklearn.cross_validation import train_test_split  # sklearn.model_selection in scikit-learn >= 0.18
from sklearn.feature_selection import RFECV
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in scikit-learn >= 0.18
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc, classification_report, confusion_matrix
from sklearn.preprocessing import LabelEncoder
# %matplotlib inline
# ## Data Fetching
# +
# Importing data from web
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data"
def fetch_data(fname='house-votes-84.data'):
"""
Helper method to retreive the ML Repository dataset.
"""
response = requests.get(URL)
outpath = os.path.abspath(fname)
with open(outpath, 'wb') as f:
f.write(response.content)
return outpath
# Fetch the data if required
DATA = fetch_data()
# -
FEATURES = [
"class_name",
"handicapped_infants",
"water_project_cost_sharing",
"adoption_of_the_budget_resolution",
"physician_fee_freeze",
"el_salvador_aid",
"religious_groups_in_schools",
"anti_satellite_test_ban",
"aid_to_nicaraguan_contras",
"mx_missile",
"immigration",
"synfuels_corporation_cutback",
"education_spending",
"superfund_right_to_sue",
"crime",
"duty_free_exports",
"export_administration_act_south_africa"
]
# Read the data into a DataFrame
df = pd.read_csv(DATA, sep=',', header=None, names=FEATURES)
# ## Data Exploration
df.head()
# Describe the dataset
print(df.describe())
# Unique value counts for each column
for i in df.columns:
print(df[i].value_counts())
# Dataset information
print(df.info())
# Check for missing values
print(df.isnull().sum())
df_2 = df.copy()
# +
# Labelencoding
df_2['class_name'] = df_2['class_name'].map({'democrat':0,'republican':1})
for i in df_2.columns[1:]:
df_2[i] = df_2[i].map({'n': 0,'y': 1,'?':2})
# -
df_2.head()
# +
# First pass for looking at frequency counts as a function of column.
for i in df_2.iloc[:,:]:
print(i)
plt.figure(1, figsize = (5,5), dpi = 80)
#histogram plot
plt.subplot(111)
plt.title("Histogram")
plt.hist(df_2.iloc[:,:][i])
plt.tight_layout()
plt.show()
# -
# Pairplot
sns.pairplot(df_2)
# Correlation heatmap
sns.heatmap(df_2.corr())
# Parallel coordinates plot
plt.figure(figsize=(12,12))
plt.xticks(rotation='vertical')
parallel_coordinates(df_2, 'class_name')
plt.show()
# Radial plot
plt.figure(figsize=(12,12))
radviz(df_2, 'class_name')
plt.show()
# ## Data Extraction
#
# Keeping Bunches method for later use.
# ## Logistic Regression Classification
df_3 = df_2.copy()
# Drop target column for test-train-split
df_3 = df_3.drop('class_name', axis=1)
# Test-train split. Learning curves not performed. Using 80/20% split.
X_train, X_test, y_train, y_test = train_test_split(df_3, df_2['class_name'], train_size=0.8,
random_state=1)
# Data not scaled, since total range of data is 0-2 and categorical.
clf = LogisticRegression()
# +
# Initialize RFECV for feature selection
rfecv = RFECV(estimator=clf, step=1, cv=12, scoring='accuracy')
rfecv.fit(X_train, y_train)
print("Optimal number of features : %d" % rfecv.n_features_)
# -
# Plot number of features VS. cross-validation scores
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (nb of correct classifications)")
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
plt.show()
# Print out table of sorted features by importance. Top features become the features used for ML.
print("Features sorted: ")
rfecv_ranking_df = pd.DataFrame({'feature':X_train.columns,
'importance':rfecv.ranking_})
rfecv_ranking_df_sorted = rfecv_ranking_df.sort_values(by = 'importance'
, ascending = True)
rfecv_ranking_df_sorted
# +
# Issues with subselecting the appropriate columns on the present test-train split. So
# re-performing test-train split for GridSearchCV.
df_4 = df_3[['adoption_of_the_budget_resolution','physician_fee_freeze','immigration',
'synfuels_corporation_cutback','education_spending']]
# Test-train split. Learning curves not performed. Using 80/20% split.
X_train, X_test, y_train, y_test = train_test_split(df_4, df_2['class_name'], train_size=0.8,
random_state=1)
# -
# GridSearch for optimum parameters.
param_grid_pipeline = {'C':[0.0001,0.001,0.01,0.1,1.0,10,100],
'fit_intercept':[True,False],
'class_weight':['balanced',None],
'solver':['liblinear','newton-cg','lbfgs','sag']}
grid = GridSearchCV(clf, param_grid_pipeline, cv = 12, n_jobs = -1, verbose=1, scoring = 'accuracy')
grid.fit(X_train, y_train)
grid.best_score_
grid.best_estimator_.get_params()
# Save model to disk
dill.dump(grid.best_estimator_, open('model_1984cvc_lr', 'wb'))
# Import model from disk
grid = dill.load(open('model_1984cvc_lr', 'rb'))
# Predicted target class
y_pred = grid.predict(X_test)
y_pred
# Predicted target class probabilities
y_pred_proba = grid.predict_proba(X_test)
y_pred_proba
y_pred_proba_democrat = y_pred_proba[:,0]
y_pred_proba_republican = y_pred_proba[:,1]
# Create dataframe of predicted values and probabilities for party affiliation
df_pred = pd.DataFrame({'class_name':y_test,
'class_name_predicted':y_pred,
'class_name_prob_democrat':y_pred_proba_democrat,
'class_name_prob_republican':y_pred_proba_republican})
df_pred.head()
# Save test-based data to .csv to disk.
df_pred.to_csv('1984cvc_jhb.csv')
# Print classification report
target_names = ['Democrat', 'Republican']
clr = classification_report(y_test, y_pred, target_names=target_names)
print(clr)
# +
# Print/plot confusion matrix
cm = np.array(confusion_matrix(y_test, y_pred, labels=[0,1]))
confusion = pd.DataFrame(cm, index=['Democrat', 'Republican'],
columns=['predicted_Democrat','predicted_Republican'])
confusion
# -
def plot_confusion_matrix(cm, title='Confusion Matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title, size = 15)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=0, size = 12)
plt.yticks(tick_marks, target_names, rotation=90, size = 12)
plt.tight_layout()
plt.ylabel('True Label', size = 15)
plt.xlabel('Predicted Label', size = 15)
plt.savefig('plot_confusion_matrix')
# Plot confusion matrix
cm = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized Confusion Matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, title='Normalized Confusion Matrix')
plt.savefig('plot_norm_confusion_matrix')
plt.show()
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
fpr[1], tpr[1], _ = roc_curve(y_test, y_pred_proba_republican)
roc_auc[1] = auc(fpr[1], tpr[1])
# Plot of ROC curve for a specific class
def roc_curve_single_class(fpr, tpr, roc_auc):
plt.figure()
plt.plot(fpr[1], tpr[1], label='ROC curve (area = %0.2f)' % roc_auc[1])
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate', size = 15)
plt.ylabel('True Positive Rate', size = 15)
plt.xticks(size = 12)
plt.yticks(size = 12)
plt.title('Receiver Operating Characteristic (ROC)', size = 15)
plt.legend(loc="lower right")
plt.savefig('plot_roc_curve')
plt.show()
roc_curve_single_class(fpr, tpr, roc_auc)
| examples/jhboyle/1984_Congressional_Voting_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="R-iRSUmj4MJ4"
import os
import pandas as pd
import numpy as np
import scipy as sp
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
# for binary features, use BernoulliNB instead
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from textblob import TextBlob, Word
from nltk.stem.snowball import SnowballStemmer
# %matplotlib inline
from wordcloud import WordCloud
from PIL import Image
import matplotlib.pyplot as plt
# + id="l5MLH0K_qGl6"
# JSON file: JSON (JavaScript Object Notation) is a human-readable format for storing and exchanging data. A JSON file contains only text and uses the .json extension.
# + id="0ZJpE1Ct4Wwi"
## Unix time: the name given to the system date format on Unix systems. "Unix time" is an integer equal to the number of seconds elapsed since 1/1/1970 00:00.
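# A quick sanity check of that definition (a sketch; pandas is used only because it is already imported in this notebook):

```python
# Unix time is seconds since 1970-01-01 00:00 UTC, so timestamp 0
# is the epoch itself and 1,000,000,000 falls in September 2001.
import pandas as pd

print(pd.to_datetime(0, unit="s"))              # 1970-01-01 00:00:00
print(pd.to_datetime(1_000_000_000, unit="s"))  # 2001-09-09 01:46:40
```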
# + id="fybr71ow4W7r"
dataset = "Electronics_5.json" #veri bu json dosyasının içerisinde mi değil mi diye kotrol ederek sistemden çekiyoruz.
if os.path.isfile(dataset):
df = pd.read_json("Electronics_5.json", lines=True)
else:
url = r"http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz"
df = pd.read_json(url, compression='gzip', lines=True)
# + id="h6SIaPPjexVA"
# + colab={"base_uri": "https://localhost:8080/"} id="faCp43uw4XE8" outputId="b55e26fd-c658-488d-ea08-02d964f4ee8b"
df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 289} id="qqLPYLNXkDI1" outputId="ca6e8d74-f8af-4582-accb-e54917cbbb20"
df.head()
# + id="GMJ2WhPy4XMw"
df.to_csv("amazonreview.csv") # json formatındaki veriyi csv ye çevirip amazonreview.csv adı ile isimlendiriyoruz
# + colab={"base_uri": "https://localhost:8080/"} id="q6Yw_xNSe4du" outputId="446d7872-eae8-443e-c8ae-84e09168b747"
df.isnull().sum()
# + id="95GOaYGHjmkR"
df1=pd.read_csv("amazonreview.csv", usecols=[ "reviewText","overall"])
# + id="2JAxQKjTe4qa"
# + id="rrn0T9WpfMaH"
df1['reviewText']=df1['reviewText'].apply(str) # after keeping only two columns of amazonreview, the reviewText column ended up with non-string (missing) entries; cast everything to str to fix this
# + colab={"base_uri": "https://localhost:8080/"} id="MZgHiQBm4XU_" outputId="c3b763ac-3032-41b6-9249-4d73b9da8e4c"
df1.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="ua85Trbz4Xd6" outputId="80a6263a-ec74-4a81-eaeb-d8bf4fec112c"
from google.colab import files
files.download("amazonreview.csv")
# + [markdown] id="yjIQt2IBLG9X"
# # Yeni Bölüm
# + id="4h87yGEf4Xvi"
# + colab={"base_uri": "https://localhost:8080/", "height": 415} id="arjR1NEH4X1j" outputId="6eb6e051-be19-462d-a20e-ef2cfaffeec3"
df1
# + colab={"base_uri": "https://localhost:8080/"} id="kAh7xHqO4X6b" outputId="8fe75a39-a966-44f9-a0f5-ec0c94b648dd"
df1.overall.value_counts()
# + id="GJephuLq4X_B"
import seaborn as sns
# + colab={"base_uri": "https://localhost:8080/", "height": 290} id="cgqyELiY4YDB" outputId="3b1d521e-76a2-4c95-ad38-86dc7e5d73e9"
sns.countplot(data=df1, x='overall');
# + id="DsmzC41xE5EH"
df1['reviewText']=df1['reviewText'].str.lower()
df1['reviewText']=df1['reviewText'].str.replace('[^\w\s]','')
df1['reviewText']=df1['reviewText'].str.replace('\d+','')
df1['reviewText']=df1['reviewText'].str.replace('\n',' ').replace('\r','')
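# The cleaning chain above, illustrated on a single made-up review string (a sketch only; the example text is invented):

```python
import pandas as pd

s = pd.Series(["Great product!!! 10/10\nWould buy again."])
s = s.str.lower()                                # lowercase
s = s.str.replace(r"[^\w\s]", "", regex=True)    # drop punctuation
s = s.str.replace(r"\d+", "", regex=True)        # drop digits
s = s.str.replace("\n", " ")                     # newlines to spaces
print(repr(s[0]))
```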
# + id="l6Q9Sfu7mECe"
pattern = r"\&\#[0-9]+\;"
df1["reviewText"] = df1["reviewText"].str.replace(pat=pattern, repl="", regex=True)
# remove numeric HTML character references (e.g. &#34;) from the text
# + id="Y6oxtIYWN4Nq"
import html
# + id="CZdS9-5QN8zi"
import re
import nltk
from nltk import word_tokenize, pos_tag
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import sent_tokenize
from nltk.corpus import wordnet
# + colab={"base_uri": "https://localhost:8080/"} id="dierkMP4ZFDE" outputId="d812740a-7b85-4fe5-cae5-895291010dbd"
#for i in range(0,len(df1)-1):
#if type(df1.iloc[i]['reviewText']) != str:
# df1.iloc[i]['reviewText'] = str(df1.iloc[i]['reviewText'])
# + id="6yIqMqBNN9yQ"
# #!pip install langdetect
# + id="ZSfljvmtN4S_"
#from langdetect import detect
# + id="GmZXT2K3O0EL"
# + id="DPtKnLGkNLTg"
#for index, row in df1['reviewText'].iteritems():
#lang = detect(row) #detecting each row
#df1.loc[index, 'Language'] = lang
#df1.sample()
# + colab={"base_uri": "https://localhost:8080/"} id="Yy6fFSbhNMNf" outputId="085452d7-fda1-4363-a8cd-fd843a295d59"
df1.isnull().sum()
# + id="XN3DNNAbYIyF"
# + id="ZH6_Ep3VE5Jf"
from wordcloud import WordCloud
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="PUnYlmBOblvQ"
def woc(data,bgcolor):
plt.figure(figsize=(10,10))
wc=WordCloud(background_color=bgcolor,max_words=100).generate(' '.join(data))
plt.imshow(wc)
plt.axis('off')
# + id="sKU-eulYZHwf"
puan1=df1.query("overall=='1'")['reviewText']
puan2=df1.query("overall=='2'")['reviewText']
puan3=df1.query("overall=='3'")['reviewText']
puan4=df1.query("overall=='4'")['reviewText']
puan5=df1.query("overall=='5'")['reviewText']
# + colab={"base_uri": "https://localhost:8080/", "height": 310} id="tZ-VwpwUYvEW" outputId="aa7e4885-fa10-42dd-f4a0-b5dd4c5d31b3"
woc(puan1,'purple')
# + colab={"base_uri": "https://localhost:8080/", "height": 310} id="yohhhFhIZCRJ" outputId="b6af4363-37f0-47c9-ba44-6a088479fadc"
woc(puan2,'red')
# + colab={"base_uri": "https://localhost:8080/", "height": 310} id="FP1uB9_7ZCfp" outputId="8050b456-adaf-4eec-8813-a1528a951b05"
woc(puan3,'yellow')
# + id="agpUDO3WZCre"
woc(puan4,'blue') # the session runs out of memory here; could not run this
# + id="BHulYDhCZC7Q"
woc(puan5,'green') # the session runs out of memory here; could not run this
# + colab={"base_uri": "https://localhost:8080/"} id="U2j6Beunc-wp" outputId="cc1699c1-e008-4d3b-c151-688f42220539"
df1.overall.value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="2qvTJ81OwPSu" outputId="274bf532-b5c8-403b-d0bb-00cfa280d950"
nltk.download('punkt')
# + id="MHy8JuqrZJJ1"
#df1['reviewText']=df1['reviewText'].str.replace('[^a-zA-Z]',' ')
# + colab={"base_uri": "https://localhost:8080/", "height": 355} id="iTNfcX9cZJSf" outputId="8ce91071-f25e-488b-a7f3-096b6a132386"
df1.sample(10)
# + id="YXc-oc7WZJYe"
from sklearn.model_selection import train_test_split
# + id="l8u8yZmAZJeS"
x_train, x_test, y_train, y_test = train_test_split(df1["reviewText"], df1["overall"], random_state=42) # use the cleaned df1, not the raw df
# + id="ZLSFsoGCZJjR"
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# + colab={"base_uri": "https://localhost:8080/"} id="v8MWTWvfZJnC" outputId="d71b6ae2-4c72-4b61-ca89-9584fd6af171"
vect=CountVectorizer(lowercase=True,stop_words="english")
x_train_dtm=vect.fit_transform(x_train)
print(x_train_dtm)
x_test_dtm=vect.transform(x_test)
# + id="GBjTJyeDovGh"
tf=pd.DataFrame(x_train_dtm.toarray(),columns=vect.get_feature_names_out())  # get_feature_names() on scikit-learn < 1.0
tf.head() # the session crashed here due to RAM limits (the dense array is huge)
# + id="8cgmox0aovSi"
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
# + id="hfkHVWJzovW0"
nb=MultinomialNB()
nb.fit(x_train_dtm,y_train)
y_pred_class=nb.predict(x_test_dtm)
# + id="TE8qrJjpqkBN"
metrics.accuracy_score(y_test,y_pred_class)
# + [markdown] id="uPb4Bwi5iBhN"
# # Yeni Bölüm
# + [markdown] id="XJYZFNFiiB8x"
# # Yeni Bölüm
# + id="fp0bkiISZJqZ"
# + id="gNQiaij1ZJxb"
# + id="SIybU9V5ZJ0f"
# + id="Dxam2k6gZJ30"
# + id="wNywbWF4ZJ7E"
# + id="v-abEqN7ZJ9a"
# + id="79PFgeX2ZKAO"
# + id="NrVPncj0ZKD5"
# + id="PHkarU8RZKKy"
# + id="aIbtS9JhZH6V"
# + id="UGzj3lOOZH9q"
# + id="mtIVn2uYZIBF"
# + id="uBQqGSBoZIDu"
# + id="ymP5Hlf2ZIGy"
# + id="so63W_rmZIJ2"
# + id="ZMoGt2vkZINZ"
# + id="muJZYoyDZIQf"
# + id="AtBGn_85ZITw"
# + id="Y-X78IulK1L7"
# + id="bzMoRRFDMQ8L"
# + id="I0meKXh8E5PE"
# + id="t4BHUSMRHHzQ"
# + id="x_Cz1DU5E5TD"
# + id="MaF5j_7HE5Wt"
# + id="mOxjsqJtE5ZC"
# + id="QdvZz53hE5bK"
# + id="S1RpWRvJCjqG"
# + id="RlRJUn69BsxF"
# + id="49c4aFLP4YF5"
# + id="oLf2TKQB4YJE"
# + id="1fR3H0sb4YL-"
# + id="usOfCZpJ4YPA"
# + id="ZAa6o5jG4YSO"
# + id="pULji8IA4YVM"
# + id="q-2bmH0w4YYI"
# + id="JXIyKadO4YbX"
# + id="LgmcgNqc4Yhw"
# + id="-rhDwbkO4Yki"
# + id="O3l8xAdG4Yn9"
# + id="DstM3d0J4Yq_"
# + id="pe7nT3Xz4Ytl"
# + id="xSP93Fur4Ywd"
# + id="6fJ3pous4Y7C"
# + id="KSvTnkIg4Y-S"
# + id="QuSEee-44ZBQ"
# + id="U3b9SWFt4ZEu"
# + id="ZXNJHXGA4ZHm"
# + id="eDfvxOXv4ZLC"
# + id="in1eSEOd4ZNx"
# + id="vBuggPGg4ZQu"
# + id="2LjmDN7b4ZT8"
# + id="ETdzyvGk4ZXG"
# + id="hUf1obyf4ZZ7"
# + id="lI1JqNB-4Zc_"
# + id="loZkVF-P4Zf0"
# + id="yb7uU9s-4Zil"
# + id="xKXKbtZM4Zlm"
# + id="xT1WEZMX4Zo2"
# + id="1Sg9EIto4Zru"
# + id="Cb5lpSnQ4Zuv"
# + id="HC5vKbvc4Zxn"
# + id="FTMiE1cj4Z0a"
# + id="uFce4p_R4Z3d"
# + id="tPrqWcsL4Z6J"
# + id="uljsPaQG4Z9M"
# + id="LZqFdFiz4aCP"
# + id="apsglj0F4aFl"
# + id="PZcNFqeH4aIR"
# + id="ihOK8vOE4aLB"
# + id="xQSj3foj4aN7"
# + id="8ujzNwzc4aQk"
# + id="WrImCLb64aT3"
# + id="OPTeBpk34aXE"
# + id="PyX4KEwG4aau"
# + id="WujgVmpB4ac0"
# + id="llPSUC9U4agK"
| AmazonReview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h1><center>Household power consumption forecast using deep learning</center></h1>
#
# # Lesson Goals
# In this lesson we will develop an LSTM neural network model that makes a weekly household power consumption forecast. This is naturally a multivariate forecasting problem, but for the sake of simplicity and memory constraints we only develop a model for the univariate series. We will start by preparing the dataset.
#
# # Prerequisites
# 1. Install Keras & TensorFlow
# 2. Install NumPy
# 3. Install Pandas
#
# NB: make sure Jupyter Notebook is running
import pandas as pd
import matplotlib.pyplot as plt
dataset = pd.read_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption.txt', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime':[0,1]}, index_col=['datetime'])
import numpy as np
from numpy import nan
# mark all missing values
dataset.replace('?', nan, inplace=True)
# make dataset numeric
dataset = dataset.astype('float32')
# fill missing values with the value at the same time one day ago
import math
def fill_missing(values):
    one_day = 60 * 24
    for row in range(values.shape[0]):
        for col in range(values.shape[1]):
            if math.isnan(values[row, col]):
                values[row, col] = values[row - one_day, col]
# fill missing
fill_missing(dataset.values)
# add a column for the remainder of sub metering
values = dataset.values
dataset['sub_metering_4'] = (values[:,0] * 1000 / 60) - (values[:,4] + values[:,5] + values[:,6])
# save updated dataset
dataset.to_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption.csv')
# resample minute data to total for each day
from pandas import read_csv
# load the new file
dataset = read_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# resample data to daily
daily_groups = dataset.resample('D')
daily_data = daily_groups.sum()
# summarize
print(daily_data.shape)
print(daily_data.head())
# save
daily_data.to_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption_days.csv')
# evaluate one or more weekly forecasts against expected values
def evaluate_forecasts(actual, predicted):
scores = list()
# calculate an RMSE score for each day
for i in range(actual.shape[1]):
# calculate mse
mse = mean_squared_error(actual[:, i], predicted[:, i])
# calculate rmse
rmse = sqrt(mse)
# store
scores.append(rmse)
# calculate overall RMSE
s = 0
for row in range(actual.shape[0]):
for col in range(actual.shape[1]):
s += (actual[row, col] - predicted[row, col])**2
score = sqrt(s / (actual.shape[0] * actual.shape[1]))
return score, scores
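# As a quick numeric illustration of the per-day RMSE computed above (made-up numbers): for one day's column with actual values [1, 3] and predictions [1, 1], the squared errors are [0, 4], so MSE = 2 and RMSE = sqrt(2):

```python
from math import sqrt
from sklearn.metrics import mean_squared_error

mse = mean_squared_error([1, 3], [1, 1])  # (0 + 4) / 2 = 2
rmse = sqrt(mse)
print(rmse)  # ~1.414
```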
# split a univariate dataset into train/test sets
def split_dataset(data):
# split into standard weeks
train, test = data[1:-328], data[-328:-6]
# restructure into windows of weekly data
train = array(split(train, len(train)/7))
test = array(split(test, len(test)/7))
return train, test
# +
# split into standard weeks
from numpy import split
from numpy import array
from pandas import read_csv
#load the new file
dataset = read_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption_days.csv', header = 0, infer_datetime_format=True, parse_dates= ['datetime'], index_col= ['datetime'])
train, test = split_dataset(dataset.values)
# validate train data
print(train.shape)
print(train[0, 0, 0], train[-1, -1, 0])
# validate test
print(test.shape)
print(test[0, 0, 0], test[-1, -1, 0])
# -
# evaluate a single model
def evaluate_model(train, test, n_input):
# fit model
model = build_model(train, n_input)
# history is a list of weekly data
history = [x for x in train]
# walk-forward validation over each week
predictions = list()
for i in range(len(test)):
# predict the week
yhat_sequence = forecast(model, history, n_input)
# store the predictions
predictions.append(yhat_sequence)
# get real observation and add to history for predicting the next week
history.append(test[i, :])
# evaluate predictions days for each week
predictions = array(predictions)
score, scores = evaluate_forecasts(test[:, :, 0], predictions)
return score, scores
# summarize scores
def summarize_scores(name, score, scores):
s_scores = ', '.join(['%.1f' % s for s in scores])
print('%s: [%.3f] %s' % (name, score, s_scores))
# flatten data
data = train.reshape((train.shape[0]*train.shape[1], train.shape[2]))
# convert history into inputs and outputs
def to_supervised(train, n_input, n_out=7):
# flatten data
data = train.reshape((train.shape[0]*train.shape[1], train.shape[2]))
X, y = list(), list()
in_start = 0
# step over the entire history one time step at a time
for _ in range(len(data)):
# define the end of the input sequence
in_end = in_start + n_input
out_end = in_end + n_out
# ensure we have enough data for this instance
if out_end <= len(data):
x_input = data[in_start:in_end, 0]
x_input = x_input.reshape((len(x_input), 1))
X.append(x_input)
y.append(data[in_end:out_end, 0])
# move along one time step
in_start += 1
return array(X), array(y)
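# The sliding-window restructuring that this function performs can be seen on a toy series (an illustrative sketch with made-up values; n_input=7 and n_out=2 are chosen small enough to fit a 10-day series):

```python
import numpy as np

series = np.arange(10).reshape(10, 1)  # 10 days, 1 feature
n_input, n_out = 7, 2
X, y = [], []
for start in range(len(series)):
    in_end, out_end = start + n_input, start + n_input + n_out
    if out_end <= len(series):  # keep only full input/output windows
        X.append(series[start:in_end, 0].reshape(-1, 1))
        y.append(series[in_end:out_end, 0])
X, y = np.array(X), np.array(y)
print(X.shape, y.shape)  # (2, 7, 1) (2, 2)
```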
dataset.shape
# train the model
def build_model(train, n_input):
# prepare data
train_x, train_y = to_supervised(train, n_input)
# define parameters
verbose, epochs, batch_size = 0, 70, 16
n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
# define model
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_timesteps, n_features)))
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs))
model.compile(loss='mse', optimizer='adam')
# fit network
model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, verbose=verbose)
return model
print(data)
data.shape
dataset.shape
# make a forecast
def forecast(model, history, n_input):
# flatten data
data = array(history)
data = data.reshape((data.shape[0]*data.shape[1], data.shape[2]))
# retrieve last observations for input data
input_x = data[-n_input:, 0]
# reshape into [1, n_input, 1]
input_x = input_x.reshape((1, len(input_x), 1))
# forecast the next week
yhat = model.predict(input_x, verbose=0)
# we only want the vector forecast
yhat = yhat[0]
return yhat
from math import sqrt
from numpy import split
from numpy import array
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LSTM
# +
# load the new file
dataset = read_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# split into train and test
train, test = split_dataset(dataset.values)
# evaluate model and get scores
n_input = 7
score, scores = evaluate_model(train, test, n_input)
# summarize scores
summarize_scores('lstm', score, scores)
# plot scores
days = ['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat']
pyplot.plot(days, scores, marker='o', label='lstm')
pyplot.show()
# -
# train the model
def build_model(train, n_input):
# prepare data
train_x, train_y = to_supervised(train, n_input)
# define parameters
verbose, epochs, batch_size = 0, 20, 16
n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
# reshape output into [samples, timesteps, features]
train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))
# define model
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_timesteps, n_features)))
model.add(RepeatVector(n_outputs))
model.add(LSTM(200, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(100, activation='relu')))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mse', optimizer='adam')
# fit network
model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, verbose=verbose)
return model
# +
# univariate multi-step encoder-decoder lstm
from math import sqrt
from numpy import split
from numpy import array
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LSTM
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
# make a forecast
def forecast(model, history, n_input):
# flatten data
data = array(history)
data = data.reshape((data.shape[0]*data.shape[1], data.shape[2]))
# retrieve last observations for input data
input_x = data[-n_input:, 0]
# reshape into [1, n_input, 1]
input_x = input_x.reshape((1, len(input_x), 1))
# forecast the next week
yhat = model.predict(input_x, verbose=0)
# we only want the vector forecast
yhat = yhat[0]
return yhat
# evaluate a single model
def evaluate_model(train, test, n_input):
# fit model
model = build_model(train, n_input)
# history is a list of weekly data
history = [x for x in train]
# walk-forward validation over each week
predictions = list()
for i in range(len(test)):
# predict the week
yhat_sequence = forecast(model, history, n_input)
# store the predictions
predictions.append(yhat_sequence)
# get real observation and add to history for predicting the next week
history.append(test[i, :])
# evaluate predictions days for each week
predictions = array(predictions)
score, scores = evaluate_forecasts(test[:, :, 0], predictions)
return score, scores
# load the new file
dataset = read_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# split into train and test
train, test = split_dataset(dataset.values)
# evaluate model and get scores
n_input = 14
score, scores = evaluate_model(train, test, n_input)
# summarize scores
summarize_scores('lstm', score, scores)
# plot scores
days = ['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat']
pyplot.plot(days, scores, marker='o', label='lstm')
pyplot.show()
# +
# multivariate multi-step encoder-decoder lstm
from math import sqrt
from numpy import split
from numpy import array
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LSTM
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
# split a univariate dataset into train/test sets
def split_dataset(data):
# split into standard weeks
train, test = data[1:-328], data[-328:-6]
# restructure into windows of weekly data
train = array(split(train, len(train)/7))
test = array(split(test, len(test)/7))
return train, test
# evaluate one or more weekly forecasts against expected values
def evaluate_forecasts(actual, predicted):
scores = list()
# calculate an RMSE score for each day
for i in range(actual.shape[1]):
# calculate mse
mse = mean_squared_error(actual[:, i], predicted[:, i])
# calculate rmse
rmse = sqrt(mse)
# store
scores.append(rmse)
# calculate overall RMSE
s = 0
for row in range(actual.shape[0]):
for col in range(actual.shape[1]):
s += (actual[row, col] - predicted[row, col])**2
score = sqrt(s / (actual.shape[0] * actual.shape[1]))
return score, scores
# summarize scores
def summarize_scores(name, score, scores):
s_scores = ', '.join(['%.1f' % s for s in scores])
print('%s: [%.3f] %s' % (name, score, s_scores))
# convert history into inputs and outputs
def to_supervised(train, n_input, n_out=7):
# flatten data
data = train.reshape((train.shape[0]*train.shape[1], train.shape[2]))
X, y = list(), list()
in_start = 0
# step over the entire history one time step at a time
for _ in range(len(data)):
# define the end of the input sequence
in_end = in_start + n_input
out_end = in_end + n_out
# ensure we have enough data for this instance
if out_end <= len(data):
X.append(data[in_start:in_end, :])
y.append(data[in_end:out_end, 0])
# move along one time step
in_start += 1
return array(X), array(y)
# train the model
def build_model(train, n_input):
# prepare data
train_x, train_y = to_supervised(train, n_input)
# define parameters
verbose, epochs, batch_size = 0, 50, 16
n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
# reshape output into [samples, timesteps, features]
train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))
# define model
model = Sequential()
model.add(LSTM(200, activation='relu', input_shape=(n_timesteps, n_features)))
model.add(RepeatVector(n_outputs))
model.add(LSTM(200, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(100, activation='relu')))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mse', optimizer='adam')
# fit network
model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, verbose=verbose)
return model
# make a forecast
def forecast(model, history, n_input):
# flatten data
data = array(history)
data = data.reshape((data.shape[0]*data.shape[1], data.shape[2]))
# retrieve last observations for input data
input_x = data[-n_input:, :]
# reshape into [1, n_input, n]
input_x = input_x.reshape((1, input_x.shape[0], input_x.shape[1]))
# forecast the next week
yhat = model.predict(input_x, verbose=0)
# we only want the vector forecast
yhat = yhat[0]
return yhat
# evaluate a single model
def evaluate_model(train, test, n_input):
# fit model
model = build_model(train, n_input)
# history is a list of weekly data
history = [x for x in train]
# walk-forward validation over each week
predictions = list()
for i in range(len(test)):
# predict the week
yhat_sequence = forecast(model, history, n_input)
# store the predictions
predictions.append(yhat_sequence)
# get real observation and add to history for predicting the next week
history.append(test[i, :])
# evaluate predictions days for each week
predictions = array(predictions)
score, scores = evaluate_forecasts(test[:, :, 0], predictions)
return score, scores
# load the new file
dataset = read_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# split into train and test
train, test = split_dataset(dataset.values)
# evaluate model and get scores
n_input = 14
score, scores = evaluate_model(train, test, n_input)
# summarize scores
summarize_scores('lstm', score, scores)
# plot scores
days = ['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat']
pyplot.plot(days, scores, marker='o', label='lstm')
pyplot.show()
# +
# univariate multi-step encoder-decoder cnn-lstm
from math import sqrt
from numpy import split
from numpy import array
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LSTM
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
# split a univariate dataset into train/test sets
def split_dataset(data):
# split into standard weeks
train, test = data[1:-328], data[-328:-6]
# restructure into windows of weekly data
train = array(split(train, len(train)/7))
test = array(split(test, len(test)/7))
return train, test
# evaluate one or more weekly forecasts against expected values
def evaluate_forecasts(actual, predicted):
scores = list()
# calculate an RMSE score for each day
for i in range(actual.shape[1]):
# calculate mse
mse = mean_squared_error(actual[:, i], predicted[:, i])
# calculate rmse
rmse = sqrt(mse)
# store
scores.append(rmse)
# calculate overall RMSE
s = 0
for row in range(actual.shape[0]):
for col in range(actual.shape[1]):
s += (actual[row, col] - predicted[row, col])**2
score = sqrt(s / (actual.shape[0] * actual.shape[1]))
return score, scores
# summarize scores
def summarize_scores(name, score, scores):
s_scores = ', '.join(['%.1f' % s for s in scores])
print('%s: [%.3f] %s' % (name, score, s_scores))
# convert history into inputs and outputs
def to_supervised(train, n_input, n_out=7):
# flatten data
data = train.reshape((train.shape[0]*train.shape[1], train.shape[2]))
X, y = list(), list()
in_start = 0
# step over the entire history one time step at a time
for _ in range(len(data)):
# define the end of the input sequence
in_end = in_start + n_input
out_end = in_end + n_out
# ensure we have enough data for this instance
if out_end <= len(data):
x_input = data[in_start:in_end, 0]
x_input = x_input.reshape((len(x_input), 1))
X.append(x_input)
y.append(data[in_end:out_end, 0])
# move along one time step
in_start += 1
return array(X), array(y)
# train the model
def build_model(train, n_input):
# prepare data
train_x, train_y = to_supervised(train, n_input)
# define parameters
verbose, epochs, batch_size = 0, 20, 16
n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
# reshape output into [samples, timesteps, features]
train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))
# define model
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(RepeatVector(n_outputs))
model.add(LSTM(200, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(100, activation='relu')))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mse', optimizer='adam')
# fit network
model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, verbose=verbose)
return model
# make a forecast
def forecast(model, history, n_input):
# flatten data
data = array(history)
data = data.reshape((data.shape[0]*data.shape[1], data.shape[2]))
# retrieve last observations for input data
input_x = data[-n_input:, 0]
# reshape into [1, n_input, 1]
input_x = input_x.reshape((1, len(input_x), 1))
# forecast the next week
yhat = model.predict(input_x, verbose=0)
# we only want the vector forecast
yhat = yhat[0]
return yhat
# evaluate a single model
def evaluate_model(train, test, n_input):
# fit model
model = build_model(train, n_input)
# history is a list of weekly data
history = [x for x in train]
# walk-forward validation over each week
predictions = list()
for i in range(len(test)):
# predict the week
yhat_sequence = forecast(model, history, n_input)
# store the predictions
predictions.append(yhat_sequence)
# get real observation and add to history for predicting the next week
history.append(test[i, :])
# evaluate predictions days for each week
predictions = array(predictions)
score, scores = evaluate_forecasts(test[:, :, 0], predictions)
return score, scores
# load the new file
dataset = read_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# split into train and test
train, test = split_dataset(dataset.values)
# evaluate model and get scores
n_input = 14
score, scores = evaluate_model(train, test, n_input)
# summarize scores
summarize_scores('lstm', score, scores)
# plot scores
days = ['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat']
pyplot.plot(days, scores, marker='o', label='lstm')
pyplot.show()
# +
# univariate multi-step encoder-decoder convlstm
from math import sqrt
from numpy import split
from numpy import array
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import LSTM
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
from keras.layers import ConvLSTM2D
# split a univariate dataset into train/test sets
def split_dataset(data):
# split into standard weeks
train, test = data[1:-328], data[-328:-6]
# restructure into windows of weekly data
train = array(split(train, len(train)/7))
test = array(split(test, len(test)/7))
return train, test
# evaluate one or more weekly forecasts against expected values
def evaluate_forecasts(actual, predicted):
scores = list()
# calculate an RMSE score for each day
for i in range(actual.shape[1]):
# calculate mse
mse = mean_squared_error(actual[:, i], predicted[:, i])
# calculate rmse
rmse = sqrt(mse)
# store
scores.append(rmse)
# calculate overall RMSE
s = 0
for row in range(actual.shape[0]):
for col in range(actual.shape[1]):
s += (actual[row, col] - predicted[row, col])**2
score = sqrt(s / (actual.shape[0] * actual.shape[1]))
return score, scores
# summarize scores
def summarize_scores(name, score, scores):
s_scores = ', '.join(['%.1f' % s for s in scores])
print('%s: [%.3f] %s' % (name, score, s_scores))
# convert history into inputs and outputs
def to_supervised(train, n_input, n_out=7):
# flatten data
data = train.reshape((train.shape[0]*train.shape[1], train.shape[2]))
X, y = list(), list()
in_start = 0
# step over the entire history one time step at a time
for _ in range(len(data)):
# define the end of the input sequence
in_end = in_start + n_input
out_end = in_end + n_out
# ensure we have enough data for this instance
if out_end <= len(data):
x_input = data[in_start:in_end, 0]
x_input = x_input.reshape((len(x_input), 1))
X.append(x_input)
y.append(data[in_end:out_end, 0])
# move along one time step
in_start += 1
return array(X), array(y)
# train the model
def build_model(train, n_steps, n_length, n_input):
# prepare data
train_x, train_y = to_supervised(train, n_input)
# define parameters
verbose, epochs, batch_size = 0, 20, 16
n_timesteps, n_features, n_outputs = train_x.shape[1], train_x.shape[2], train_y.shape[1]
# reshape into subsequences [samples, time steps, rows, cols, channels]
train_x = train_x.reshape((train_x.shape[0], n_steps, 1, n_length, n_features))
# reshape output into [samples, timesteps, features]
train_y = train_y.reshape((train_y.shape[0], train_y.shape[1], 1))
# define model
model = Sequential()
model.add(ConvLSTM2D(filters=64, kernel_size=(1,3), activation='relu', input_shape=(n_steps, 1, n_length, n_features)))
model.add(Flatten())
model.add(RepeatVector(n_outputs))
model.add(LSTM(200, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(100, activation='relu')))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mse', optimizer='adam')
# fit network
model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, verbose=verbose)
return model
# make a forecast
def forecast(model, history, n_steps, n_length, n_input):
# flatten data
data = array(history)
data = data.reshape((data.shape[0]*data.shape[1], data.shape[2]))
# retrieve last observations for input data
input_x = data[-n_input:, 0]
# reshape into [samples, time steps, rows, cols, channels]
input_x = input_x.reshape((1, n_steps, 1, n_length, 1))
# forecast the next week
yhat = model.predict(input_x, verbose=0)
# we only want the vector forecast
yhat = yhat[0]
return yhat
# evaluate a single model
def evaluate_model(train, test, n_steps, n_length, n_input):
# fit model
model = build_model(train, n_steps, n_length, n_input)
# history is a list of weekly data
history = [x for x in train]
# walk-forward validation over each week
predictions = list()
for i in range(len(test)):
# predict the week
yhat_sequence = forecast(model, history, n_steps, n_length, n_input)
# store the predictions
predictions.append(yhat_sequence)
# get real observation and add to history for predicting the next week
history.append(test[i, :])
# evaluate predictions days for each week
predictions = array(predictions)
score, scores = evaluate_forecasts(test[:, :, 0], predictions)
return score, scores
# load the new file
dataset = read_csv('C:/Users/agurm/Downloads/household_power_consumption/household_power_consumption_days.csv', header=0, infer_datetime_format=True, parse_dates=['datetime'], index_col=['datetime'])
# split into train and test
train, test = split_dataset(dataset.values)
# define the number of subsequences and the length of subsequences
n_steps, n_length = 2, 7
# define the total days to use as input
n_input = n_length * n_steps
score, scores = evaluate_model(train, test, n_steps, n_length, n_input)
# summarize scores
summarize_scores('lstm', score, scores)
# plot scores
days = ['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat']
pyplot.plot(days, scores, marker='o', label='lstm')
pyplot.show()
# -
# # Further reading
#
#
# 1. Predicting residential energy consumption using CNN-LSTM neural networks [link](http://sclab.yonsei.ac.kr/publications/Papers/IJ/2019_Energy_TYK.pdf).
#
# 2. CNN-LSTM Neural Network Model for Quantitative Strategy Analysis in Stock Markets [link](https://link.springer.com/chapter/10.1007%2F978-3-319-70096-0_21).
#
# 3. Siamese neural networks [link](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781789138900/7/ch07lvl1sec83/siamese-neural-networks)
#
#
#
# # Summary
# In this tutorial, you discovered how to develop long short-term memory recurrent neural networks for multi-step time series forecasting of household power consumption.
#
# Specifically, you learned:
#
# 1. How to develop and evaluate univariate and multivariate Encoder-Decoder LSTMs for multi-step time series forecasting.
# 2. How to develop and evaluate a CNN-LSTM Encoder-Decoder model for multi-step time series forecasting.
# 3. How to develop and evaluate a ConvLSTM Encoder-Decoder model for multi-step time series forecasting.
#
#
# ## Next Step
#
# There is still much room to improve the model. For example, explore using more or fewer days as input, such as three, 21, or 30 days. I'll write another post covering this.
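One trade-off to keep in mind when varying the input width: the sliding-window `to_supervised` conversion yields fewer training samples as `n_input` grows. A minimal sketch of the sample counts (the `window_counts` helper is hypothetical; it just mirrors the `series_len - n_input - n_out + 1` window arithmetic used above):

```python
def window_counts(series_len, n_out=7, widths=(3, 7, 14, 21, 30)):
    # each input width n yields series_len - n - n_out + 1 supervised windows
    return {n: max(series_len - n - n_out + 1, 0) for n in widths}

# roughly three years of daily observations
print(window_counts(1092))  # {3: 1083, 7: 1079, 14: 1072, 21: 1065, 30: 1056}
```

The counts shrink only slightly here, but on shorter series a wide input window can noticeably reduce the training set.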
| LSTM multi-step time series forecasting of household power consumption..ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
"""
Estimating the causal effect of sodium on blood pressure in a simulated example
adapted from Luque-Fernandez et al. (2018):
https://academic.oup.com/ije/article/48/2/640/5248195
"""
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
def generate_data(n=1000, seed=0, beta1=1.05, alpha1=0.4, alpha2=0.3, binary_treatment=True, binary_cutoff=3.5):
np.random.seed(seed)
age = np.random.normal(65, 5, n)
sodium = age / 18 + np.random.normal(size=n)
if binary_treatment:
if binary_cutoff is None:
binary_cutoff = sodium.mean()
sodium = (sodium > binary_cutoff).astype(int)
blood_pressure = beta1 * sodium + 2 * age + np.random.normal(size=n)
proteinuria = alpha1 * sodium + alpha2 * blood_pressure + np.random.normal(size=n)
hypertension = (blood_pressure >= 140).astype(int) # not used, but could be used for binary outcomes
return pd.DataFrame({'blood_pressure': blood_pressure, 'sodium': sodium,
'age': age, 'proteinuria': proteinuria})
def estimate_causal_effect(Xt, y, model=LinearRegression(), treatment_idx=0, regression_coef=False):
model.fit(Xt, y)
if regression_coef:
return model.coef_[treatment_idx]
else:
Xt1 = pd.DataFrame.copy(Xt)
Xt1[Xt.columns[treatment_idx]] = 1
Xt0 = pd.DataFrame.copy(Xt)
Xt0[Xt.columns[treatment_idx]] = 0
return (model.predict(Xt1) - model.predict(Xt0)).mean()
binary_t_df = generate_data(beta1=1.05, alpha1=.4, alpha2=.3, binary_treatment=True, n=10000000)
continuous_t_df = generate_data(beta1=1.05, alpha1=.4, alpha2=.3, binary_treatment=False, n=10000000)
ate_est_naive = None
ate_est_adjust_all = None
ate_est_adjust_age = None
for df, name in zip([binary_t_df, continuous_t_df],
['Binary Treatment Data', 'Continuous Treatment Data']):
print()
print('### {} ###'.format(name))
print()
# Adjustment formula estimates
ate_est_naive = estimate_causal_effect(df[['sodium']], df['blood_pressure'], treatment_idx=0)
ate_est_adjust_all = estimate_causal_effect(df[['sodium', 'age', 'proteinuria']],
df['blood_pressure'], treatment_idx=0)
ate_est_adjust_age = estimate_causal_effect(df[['sodium', 'age']], df['blood_pressure'])
print('# Adjustment Formula Estimates #')
print('Naive ATE estimate:\t\t\t\t\t\t\t', ate_est_naive)
print('ATE estimate adjusting for all covariates:\t', ate_est_adjust_all)
print('ATE estimate adjusting for age:\t\t\t\t', ate_est_adjust_age)
print()
# Linear regression coefficient estimates
ate_est_naive = estimate_causal_effect(df[['sodium']], df['blood_pressure'], treatment_idx=0,
regression_coef=True)
ate_est_adjust_all = estimate_causal_effect(df[['sodium', 'age', 'proteinuria']],
df['blood_pressure'], treatment_idx=0,
regression_coef=True)
ate_est_adjust_age = estimate_causal_effect(df[['sodium', 'age']], df['blood_pressure'],
regression_coef=True)
print('# Regression Coefficient Estimates #')
print('Naive ATE estimate:\t\t\t\t\t\t\t', ate_est_naive)
print('ATE estimate adjusting for all covariates:\t', ate_est_adjust_all)
print('ATE estimate adjusting for age:\t\t\t\t', ate_est_adjust_age)
print()
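The same intuition can be checked without sklearn, using plain least squares on a fresh, smaller simulation. This is a sketch with a hypothetical sample size and its own random seed, not the runs above, but the qualitative result matches the printout: the naive estimate is inflated by the age back-door path, while adjusting for age recovers beta1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta1 = 200_000, 1.05
age = rng.normal(65, 5, n)
sodium = (age / 18 + rng.normal(size=n) > 3.5).astype(float)
blood_pressure = beta1 * sodium + 2 * age + rng.normal(size=n)

ones = np.ones(n)
# naive: regress blood pressure on sodium alone (age confounds the estimate)
naive = np.linalg.lstsq(np.column_stack([ones, sodium]), blood_pressure, rcond=None)[0][1]
# adjusted: include age as a covariate, blocking the back-door path through age
adjusted = np.linalg.lstsq(np.column_stack([ones, sodium, age]), blood_pressure, rcond=None)[0][1]
print(round(naive, 2), round(adjusted, 2))  # naive lands well above 1.05; adjusted is close to it
```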
| docs/_build/.jupyter_cache/executed/384bbed82e7d63ab2e9be1c4d93b6a79/base.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://apssdc.in"><img src="https://camo.githubusercontent.com/e7501c5948d48f88dad8ab2ab6bd448e1cfd6c79/68747470733a2f2f64726976652e676f6f676c652e636f6d2f75633f6578706f72743d646f776e6c6f61642669643d3135414b51365f2d42697857344b366d4c36525070684635454b58715946327a6a" width="900" align="center"></a>
#
# <h1><center>Day16 Comprehensions, lambda, Iterators, Generators, Map and Filter</center></h1>
# ### Recap
#
# - Regular Expressions
# - re
# - findall
# - sub -> replace
# - subn
# - match
#
# ### Today's Objectives
#
# - Functional Programming/ Comprehensions
# - lambda function / nameless function / anonymous function
# - Iterator
# - Generator
# - map()
# - filter()
# - reduce()
# ### Functional Programming
#
#
# - List Comprehension
# - Dictionary Comprehension
# - Tuple Comprehension
# - Set Comprehension
li = []
for i in range(1, 101):
li.append(i)
print(li)
li = []
for i in range(1, 101):
if i % 2 == 0 or i % 3 == 0:
li.append(i)
print(li)
# ### List Comprehensions
#
# #### Syntax
#
# ```
# [ele for IterVar in groupOfElements]
# ```
# +
li = [i for i in range(1, 101)]
print(li)
# +
li = [[i] for i in range(1, 101)]
print(li)
# +
li = ['*' for i in range(1, 101)]
print(li)
# +
li = [i for i in range(1, 101) if i % 2 == 0]
print(li)
# +
li = [year for year in range(2000, 3000) if (year % 400 == 0) or (year % 4 == 0 and year % 100 != 0)]
print(li)
# +
li = ['even' if i % 2 == 0 else 'odd' for i in range(2000, 3000)]
print(li)
# -
def square(num):
return num ** 2
# +
li = [square(i) for i in range(1, 101)]
print(li)
# +
def square(num):
return cube(num ** 2)
def cube(sq):
return sq ** 3
# +
li = [square(i) for i in range(1, 101)]
print(li)
# +
def square(num):
return num ** 2
def cube(sq):
return sq ** 3
li = [cube(square(i)) for i in range(1, 101)]
print(li)
# -
li = [[i for i in range(1, 6)] for j in range(5)]
li
# +
li = [[i for i in range(1, 6)] for j in range(100)]
li
# +
li = [[j for i in range(1, 6)] for j in range(1, 6)]
li
# +
li = [[[j for i in range(1, 6)] for j in range(1, 6)] for k in range(5)]
li
# -
s = """Python is an interpreted high-level general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant indentation. Wikipedia
Developer: Python Software Foundation
Stable release: 3.9.5 / 3 May 2021; 29 days ago
Preview release: 3.10.0b1 / 3 May 2021; 29 days ago
Typing discipline: Duck, dynamic, strong typing; gradual (since 3.5, but ignored in CPython)
First appeared: February 1991; 30 years ago
Paradigm: Multi-paradigm: object-oriented, procedural (imperative), functional, structured, reflective"""
up = [char for char in s if char.isupper()]
print(up)
# +
vol = [char for char in s if char in 'aeiou']
print(vol)
# +
vol = [char for char in s if char in 'aeiouAEIOU']
print(vol)
# +
vol = [char for char in s if char.lower() in 'aeiou']
print(vol)
# -
# ### Dictionary Comprehensions
# create a dictionary with the number as key and its square as value, for 1 - 100
# +
di = {}
for num in range(1, 101):
di[num] = num ** 2
print(di)
# +
di = {i: i ** 2 for i in range(1, 101)}
print(di)
# +
di = {i: i + 5 for i in range(1, 101)}
print(di)
# -
# key as the number and value as the square of the previous number
# +
di = {i: (i - 1) ** 2 for i in range(1, 101)}
print(di)
# +
di = {i: i ** 2 for i in range(1, 101) if i % 2 == 0}
print(di)
# +
di = {i: 'Even' if i % 2 == 0 else 'odd' for i in range(1, 101)}
print(di)
# -
# {1: [1], 2:[1,2], 3:[1,2,3], 4:[1,2,3,4], 5:[1,2,3,4,5] ...... 100}
# +
di = {i: [num for num in range(1, i + 1)] for i in range(1, 101)}
print(di)
# +
di = {i: ['*' for num in range(1, i + 1)] for i in range(1, 101)}
di
# -
100 * (100 + 1)
# +
li = [i for i in range(1, 101)]
di = {i: li[ : i + 1] for i in range(1, 101)}
di
# -
# ### set comprehensions
s1 = {char for char in s}
print(s1)
# +
s1 = {char for char in s if char.isalnum()}
print(s1)
# +
t1 = (i for i in range(1, 101))
print(t1)
# -
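Note that Python has no true tuple comprehension: wrapping a comprehension in parentheses creates a generator object, as the print above shows. To actually get a tuple, pass the generator to `tuple()`:

```python
gen = (i * i for i in range(1, 6))
print(gen)  # a generator object, not a tuple

t = tuple(i * i for i in range(1, 6))
print(t)  # (1, 4, 9, 16, 25)
```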
# ### Nameless Function
def funName(arg):
pass
funName(6656)
def funName(arg):
def square(num):
return num ** 2
sq = lambda num: num ** 2
# +
print(square(5))
print(sq(5))
# -
mul = lambda a, b: a ** 2 + b ** 2 + 2 * a * b
mul(5, 5)
for i in range(1, 101):
print(mul(i, i + 1))
sq1 = lambda a: sq(a)
print(sq1(5))
# ### map()
# +
inp = input()
print(inp)
# -
inp
# +
li = inp.split()
print(li)
# -
s = 0
inp = input()
li = inp.split()
for i in li:
s += int(i)
print(s)
# ### map()
#
#
# map(function, seqofData)
#
# It returns a lazy map object -> convert with list()/tuple()/set()
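The cells below read from `input()`, so they need an interactive session. A non-interactive sketch of the same idea, with a hypothetical hard-coded string standing in for the user's input:

```python
nums = '10 20 30 40'.split()  # stands in for input().split()
ints = list(map(int, nums))   # apply int to every element
print(ints, sum(ints))  # [10, 20, 30, 40] 100
```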
# +
li = map(int, input().split())
print(li)
# +
li = list(li)
print(li, sum(li))
# +
s = sum(list(map(int, input().split())))
print(li)
# -
print(s)
# +
li = map(list, input().split())
print(list(li))
# +
li = map(lambda x: x[0], input().split())
print(list(li))
# +
li = map(lambda x: int(x[0]), input().split())
print(list(li))
# +
li = map(lambda x: x[0], input().split())
print(list(li))
# -
# 'int' object is not iterable error
for i in 55:
print(i)
sum(55)
L1= list(map(int, input().split()))
print(L1, sum(L1))
# +
import sys
sys.version
# -
print(55[0])
# ### filter()
#
#
# filter(function, iterables)
# sum the even numbers from the input
fi = filter(lambda x: int(x) % 2 == 0, input().split())
print(list(fi))
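The cell above only prints the kept values; to finish the stated task (sum the even numbers), chain `filter` into `map` and `sum`. A non-interactive sketch with a hypothetical sample string in place of `input()`:

```python
data = '3 8 5 12 7 6'.split()  # stands in for input().split()
evens = filter(lambda x: int(x) % 2 == 0, data)  # keep only even numbers
total = sum(map(int, evens))                     # convert and add them up
print(total)  # 8 + 12 + 6 = 26
```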
s = """Python is an interpreted high-level general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant indentation. Wikipedia
Developer: Python Software Foundation
Stable release: 3.9.5 / 3 May 2021; 29 days ago
Preview release: 3.10.0b1 / 3 May 2021; 29 days ago
Typing discipline: Duck, dynamic, strong typing; gradual (since 3.5, but ignored in CPython)
First appeared: February 1991; 30 years ago
Paradigm: Multi-paradigm: object-oriented, procedural (imperative), functional, structured, reflective"""
# +
fi = list(filter(lambda x: x.isupper(), s))
print(fi)
# -
sq = lambda num: num ** 2
sq(5)
sq = lambda num: [num ** 2, num ** 3, num ** 4]
sq(5)
print(list(filter(lambda x: x.islower(), s)))
print(list(map(lambda x: x.islower(), s)))
list(filter(lambda x: x.islower(), input()))
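`reduce()` is listed in today's objectives but never demonstrated above. A minimal sketch: it lives in `functools` and folds a sequence left to right with a two-argument function:

```python
from functools import reduce

# reduce(function, iterable) folds left to right: ((((1 + 2) + 3) + 4) + 5)
total = reduce(lambda a, b: a + b, [1, 2, 3, 4, 5])
print(total)  # 15
```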
| Notebooks/Day16_Comprehensions_and_Special_Functions_in_Python.ipynb |
# ---
# jupyter:
# hide_input: false
# jupytext:
# cell_metadata_filter: all
# notebook_metadata_filter: all
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# language_info:
# codemirror_mode:
# name: ipython
# version: 3
# file_extension: .py
# mimetype: text/x-python
# name: python
# nbconvert_exporter: python
# pygments_lexer: ipython3
# version: 3.7.2
# toc:
# base_numbering: 1
# nav_menu: {}
# number_sections: true
# sideBar: true
# skip_h1_title: true
# title_cell: Table of Contents
# title_sidebar: Contents
# toc_cell: false
# toc_position: {}
# toc_section_display: true
# toc_window_display: true
# varInspector:
# cols:
# lenName: 16
# lenType: 16
# lenVar: 40
# kernels_config:
# python:
# delete_cmd_postfix: ''
# delete_cmd_prefix: 'del '
# library: var_list.py
# varRefreshCmd: print(var_dic_list())
# r:
# delete_cmd_postfix: ') '
# delete_cmd_prefix: rm(
# library: var_list.r
# varRefreshCmd: 'cat(var_dic_list()) '
# types_to_exclude:
# - module
# - function
# - builtin_function_or_method
# - instance
# - _Feature
# window_display: false
# ---
# # Jupyter Lab Tutorial
# # Window notes
# At the far left of the screen you can click on the associated icons to see:
# - the file browser (which works like a fully functioning file browser)
# - which notebooks are currently running
# - the command palette
# - cell metadata
# - open tabs
# - and more (if you have extensions installed)
# # Kernels
# Using the main launcher view, you can create a new terminal or new notebooks for any kernel you have installed. There are [over 120 kernels](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels) available. These include R, MATLAB, and Julia.
#
# You can make a new tab by pressing the `+` button toward the top left of the screen, just below the Edit menu.
# # Views
# - Open and view multiple files at once by simply dragging tabs into different sides or corners of the body of the window.
# - Open multiple views into the same file by right clicking on the tab and selecting "New View for Notebook".
# - Open a dedicated view into the output of a cell by right-clicking on the output and selecting "Create New View for Output".
# # Cells
#
# - All Jupyter notebooks are made up of _cells_. Cells come in two varieties: _code_ and _markdown_.
#
# - Code cells contain executable code. Markdown cells contain multimedia information such as explanatory text, website links, images, and video, written in markdown.
#
# - Both types of cells can be _executed_ using the "play" button in the toolbar toward the top of the notebook, or using the shortcut `Ctrl+Enter` (Windows) or `Cmd+Enter` (Mac).[^1]
#
# - Change the type of cell by using the dropdown box toward the top of the notebook, or using `Esc, M` for markdown and `Esc, Y` for code (that means pressing Escape _then_ M, etc.)
#
# [^1]: Cells can be executed out of order, so best practice when writing a notebook is to ensure that your results are reproducible by running all cells from top to bottom. Requiring that the user run cells out of order is a recipe for disaster.
# # Helpful shortcuts
# All of the commands below are for Windows installations; on Mac, you will generally substitute `Cmd` for `Ctrl`.
# - While coding, `Shift+Tab` will bring up help for your current function
# - `Ctrl+Enter` executes the current cell, keeping your focus on it
# - `Shift+Enter` executes the current cell, and moves you down to the next cell
# - `Alt+Enter` executes the current cell AND makes a new one below
# - `Esc` brings you to command mode, where you can do a number of things:
# - `A` makes a new cell above
# - `B` makes a new cell below
# - `D D` (that's `D` twice) deletes a cell
# - `X` cuts selected cells
# - `C` copies the cells
# - `V` pastes the cells
# - `Y` turns the cell into a code cell
# - `M` turns the cell into a markdown cell
# - Jupyter notebook only: `CTRL+SHIFT+P` brings up the command palette, with all available commands
#
# <div class="alert alert-block alert-info">
# You can also view and edit such shortcuts from the "Help" menu at the top of the screen in the notebook view, or with the art palette icon at the left of the screen in the lab view.
# </div>
# # Useful extensions
# - [Widgets manager](https://github.com/jupyter-widgets/ipywidgets/tree/master/packages/jupyterlab-manager): enables ipywidgets in Jupyter Lab.
# - [Variable inspector](https://github.com/lckr/jupyterlab-variableInspector): a must have for a more complete IDE experience.
# - [LaTeX support](https://github.com/jupyterlab/jupyterlab-latex)
# - [Plotly](https://github.com/jupyterlab/jupyter-renderers)
# - [Table of contents](https://github.com/jupyterlab/jupyterlab-toc)
# - [PyViz](https://github.com/pyviz/pyviz_comms)
# # Debugging in Jupyter Notebooks
# Use `set_trace()` where you want the debugger to start.<br>
# `n` moves on to the next line<br>
# `c` continues execution of the script
# +
from IPython.core.debugger import set_trace

def increment_value(a):
    a += 1
    set_trace()
    print(a)

increment_value(3)
# -
# # Magic commands
# These are useful pieces of code that perform some common operations within Jupyter.
# ## lsmagic
# See all magic commands.
# %lsmagic
# ## %who
# See a list of current variables in global scope with `%who`. You can also specify a data type afterward, e.g. `%who int`.
# %who
# ## Terminal commands
# Run any command you would run in your computer's terminal by prefacing the command with `!`
# !conda --version
# # Other resources
# See [here](https://www.youtube.com/watch?v=Gzun8PpyBCo) for a long (but excellent) video tutorial introduction to Jupyter lab. The accompanying notebooks can be found [here](https://github.com/jupyterlab/scipy2018-jupyterlab-tutorial).
;; ---
;; jupyter:
;; jupytext:
;; text_representation:
;; extension: .scm
;; format_name: light
;; format_version: '1.5'
;; jupytext_version: 1.14.4
;; kernelspec:
;; display_name: Calysto Scheme 3
;; language: scheme
;; name: calysto_scheme
;; ---
;; +
(define true #t)
(define false #f)
;; +
(define (square x) (* x x))
(define (expmod base exp m)
(cond ((= exp 0) 1)
((even? exp)
(remainder (square (expmod base (/ exp 2) m))
m))
(else
(remainder (* base (expmod base (- exp 1) m))
m))))
(define (fermat-test n a)
(= (expmod a n n) a))
(define (fermat-test-all n)
  ;; test every base a in 1..n-1; return true only if all pass
  (define (itr a)
    (cond ((= a n) true)
          ((fermat-test n a) (itr (+ a 1)))
          (else false)))
  (itr 1))
;; +
(display (format "~a: ~a\n" 561 (fermat-test-all 561)))
(display (format "~a: ~a\n" 1105 (fermat-test-all 1105)))
(display (format "~a: ~a\n" 1729 (fermat-test-all 1729)))
(display (format "~a: ~a\n" 2465 (fermat-test-all 2465)))
(display (format "~a: ~a\n" 2821 (fermat-test-all 2821)))
(display (format "~a: ~a\n" 6601 (fermat-test-all 6601)))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: env-pytraj
# language: python
# name: env-pytraj
# ---
# # Dynamic cross-correlation analysis
import pytraj as pt
import nglview as nv
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %cd ~/scratch/workshop/pdb/6N4O/simulation/sim_pmemd/4-production
# In this lesson we will analyze two fragments of the simulation system:
# 1. Protein fragment (residues 50-150).
# 2. Nucleic acids.
# ## Visualize the whole system
# Let's visualize the whole system before running correlation analysis. This will help to understand where the fragments of the system chosen for analysis are located.
# - Load 20 equally spaced frames from the 2 ns long trajectory for visualization
# - Move atoms back into the initial box
# - Center the view using alpha carbon atoms of the protein.
trj_viz=pt.iterload('mdcrd_nowat.nc', top='prmtop_nowat.parm7', frame_slice=[(0,1999,100)])
trj_viz=trj_viz.autoimage()
trj_viz.center('@CA origin')
# - Create view of the loaded data
# - Set orthographic projection
# - Remove the default ball/stick representation
# - Add nucleic acids represented with tube
# - Add protein surface
# - Add protein residues 50-150 represented with cartoon
view1 = trj_viz.visualize()
view1.camera='orthographic'
view1.clear()
view1.add_tube('nucleic')
view1.add_hyperball('nucleic and not hydrogen', colorScheme="element")
view1.add_surface('protein', color='grey', opacity=0.3)
view1.add_cartoon('50-150', color='cyan')
# - Display the view
view1
# ## 1. Analyze dynamic cross-correlation map for the protein atoms.
# We will use residues 50-150; these residues represent a fragment of the protein molecule.
# - Load all trajectory frames for analysis
traj=pt.iterload('mdcrd_nowat.nc', top='prmtop_nowat.parm7')
# - Calculate correlation matrix for residues 50-150
corrmat=pt.atomiccorr(traj, mask=":50-150 and not hydrogen")
# - Plot correlation matrix using heat map
# - Set center of the color scale to 0
sns.heatmap(corrmat, center=0, xticklabels=5, yticklabels=5, square=True)
# ### Interpretation of the correlation map
# - Correlations close to the diagonal are interactions between neighbours
# - Off-diagonal correlations are due to the specific protein folding pattern
# - Are there any negative correlations? Try rescaling and plotting only the negative correlations (use vmax=0).
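# To look at the negative correlations on their own, you can zero out everything else and cap the color scale at zero. A minimal self-contained sketch (a synthetic symmetric matrix stands in for the real `corrmat`, and plain matplotlib is used so it runs without the trajectory data; with the notebook's data you would simply pass `vmax=0` to `sns.heatmap` instead):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# synthetic stand-in for the atomic correlation matrix
rng = np.random.default_rng(0)
m = rng.uniform(-0.4, 1.0, size=(30, 30))
corrmat_demo = (m + m.T) / 2  # correlation matrices are symmetric

# keep only the negative entries; everything else maps to 0
neg = np.where(corrmat_demo < 0, corrmat_demo, 0.0)

plt.imshow(neg, vmin=neg.min(), vmax=0.0, cmap="Blues_r")
plt.colorbar(label="negative correlation")
plt.savefig("negative_correlations.png")
```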
# ## 2. Calculate dynamic cross-correlation map for the nucleic acids.
# In this example we will compare cross-correlation maps computed for different time windows of the molecular dynamics trajectory, and learn how to plot positive and negative correlations in one figure.
# ### Visualize nucleic acids
# - Create view
# - Set orthographic projection
# - Remove the default representation
# - Add nucleic backbone represented with tube
# - Add nucleic acids represented with hyperball to the view
# - Open the interactive view
view = trj_viz.visualize()
view.camera='orthographic'
view.clear()
view.add_tube('nucleic')
view.add_hyperball('nucleic and not hydrogen', colorScheme="element")
view
# - To find the numbers of the first and the last nucleic acid residue hover with the mouse cursor over the terminal atoms.
# ### Compute correlation map for frames 1-500
# As this chunk of the trajectory is already loaded, we just need to recompute the correlation matrix using a different atom mask.
corrmat1=pt.atomiccorr(traj, mask =":860-898 and not hydrogen")
# ### Compute correlation map for frames 1600-1999
#
# Load the second chunk of the trajectory and compute the correlation matrix
traj2 = pt.iterload('mdcrd_nowat.nc', top='prmtop_nowat.parm7', frame_slice=[(1600, 2000)])
corrmat2=pt.atomiccorr(traj2, mask =":860-898 and not hydrogen")
# ### Create lower and upper triangular masks
#
# Weak negative correlations are hard to see in one figure. Using separate color maps for negative and positive correlations can help to show weaker negative correlations clearly.
# To achieve this we can set the minimum value for the positive plot, and the maximum value for the negative plot to zero.
#
# We can then combine plots of positive and negative correlations in one plot by showing positive correlations in the upper triangle of the correlation map, and negative correlations in the lower triangle. This can be achieved by removing the lower triangle from the plot of positive correlations, and removing the upper triangle from the plot of negative correlations. To do this we need to create masks for upper and lower triangles.
maskl = np.tril(np.ones_like(corrmat1, dtype=bool))
masku = np.triu(np.ones_like(corrmat1, dtype=bool))
# ### Try different color schemes
#cmap=sns.diverging_palette(250, 30, l=65, center="dark", as_cmap=True)
cmap=sns.diverging_palette(220, 20, as_cmap=True)
#cmap=sns.color_palette("coolwarm", as_cmap=True)
#cmap=sns.color_palette("Spectral", as_cmap=True).reversed()
#cmap=sns.diverging_palette(145, 300, s=60, as_cmap=True)
# ### Plot correlation maps for two trajectory time windows
# +
# Create figure with two subplots.
fig, (ax1,ax2) = plt.subplots(2, figsize=(9,9))
# Plot correlation map for frames 1-500 (axis ax1)
# First plot positive correlations (vmin=0), then negative (vmax=0)
sns.heatmap(corrmat1, mask=maskl, cmap=cmap, center=0.0,vmin=0.0,
square=True, xticklabels=2, yticklabels=2, ax=ax1)
sns.heatmap(corrmat1, mask=masku, cmap=cmap, center=0.0, vmax=0.0,
square=True,ax=ax1)
# Plot correlation map for frames 1600-1999 (axis ax2)
# First plot positive correlations (vmin=0), then negative (vmax=0)
sns.heatmap(corrmat2, mask=maskl, cmap=cmap, center=0.0,vmin=0.0,
square=True, xticklabels=2, yticklabels=2, ax=ax2)
sns.heatmap(corrmat2, mask=masku, cmap=cmap, center=0.0, vmax=0.0,
square=True,ax=ax2)
# -
# ### Interpretation of the map features
# - Correlations between neighbours do not cross chain boundaries (21 is not correlated with 22)
# - Off diagonal correlations correspond to hydrogen bonds between the two RNA chains
# - Negative correlations change with time
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="7wun7oVUC02U"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img00_logo.png?raw=true" width="350" />
# </p>
# <h1 align="center">ST0256 - Análisis Numérico</h1>
# <h1 align="center">Introduction</h1>
# <h1 align="center">2021/01</h1>
# <h1 align="center">MEDELLÍN - COLOMBIA </h1>
# + [markdown] id="yArvsUNjC02X"
# <table>
# <tr align=left><td><img align=left src="https://github.com/AlbertoD10-10/Analisis_Numerico/blob/master/images/CC-BY.png?raw=1">
# <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license.(c) <NAME></td>
# </table>
# + [markdown] id="_Lmc8c2-C02Y"
# ***
#
# ***Instructor:*** <NAME>, I.C. D.Sc.
#
# ***e-mail:*** <EMAIL>
#
# ***skype:*** carlos.alberto.alvarez.henao
#
# ***Tool:*** [Jupyter](http://jupyter.org/)
#
# ***Kernel:*** Python 3.8
#
#
# ***
# + [markdown] id="Ys0CLgBEC02Z"
# <a id='TOC'></a>
# + [markdown] toc=true id="J0FIM0yCC02a"
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Motivación" data-toc-modified-id="Motivación-1"><span class="toc-item-num">1 </span>Motivación</a></span></li><li><span><a href="#Modelo-matemático-y-computacional" data-toc-modified-id="Modelo-matemático-y-computacional-2"><span class="toc-item-num">2 </span>Modelo matemático y computacional</a></span><ul class="toc-item"><li><span><a href="#Método-Científico" data-toc-modified-id="Método-Científico-2.1"><span class="toc-item-num">2.1 </span>Método Científico</a></span></li><li><span><a href="#Modelo" data-toc-modified-id="Modelo-2.2"><span class="toc-item-num">2.2 </span>Modelo</a></span></li><li><span><a href="#Modelos-matemáticos-y-computacionales" data-toc-modified-id="Modelos-matemáticos-y-computacionales-2.3"><span class="toc-item-num">2.3 </span>Modelos matemáticos y computacionales</a></span></li><li><span><a href="#Estrategia-general" data-toc-modified-id="Estrategia-general-2.4"><span class="toc-item-num">2.4 </span>Estrategia general</a></span></li><li><span><a href="#Experimental-vs-Simulado" data-toc-modified-id="Experimental-vs-Simulado-2.5"><span class="toc-item-num">2.5 </span>Experimental vs Simulado</a></span></li><li><span><a href="#Conclusión" data-toc-modified-id="Conclusión-2.6"><span class="toc-item-num">2.6 </span>Conclusión</a></span></li><li><span><a href="#Ejemplo" data-toc-modified-id="Ejemplo-2.7"><span class="toc-item-num">2.7 </span>Ejemplo</a></span></li></ul></li><li><span><a href="#Proceso-de-simulación" data-toc-modified-id="Proceso-de-simulación-3"><span class="toc-item-num">3 </span>Proceso de simulación</a></span><ul class="toc-item"><li><span><a href="#¿Como-se-hacen-las-simulaciones?" 
data-toc-modified-id="¿Como-se-hacen-las-simulaciones?-3.1"><span class="toc-item-num">3.1 </span>¿Como se hacen las simulaciones?</a></span></li><li><span><a href="#Proceso-de-análisis" data-toc-modified-id="Proceso-de-análisis-3.2"><span class="toc-item-num">3.2 </span>Proceso de análisis</a></span></li><li><span><a href="#Estrategia-de-solución-iterativa" data-toc-modified-id="Estrategia-de-solución-iterativa-3.3"><span class="toc-item-num">3.3 </span>Estrategia de solución iterativa</a></span></li><li><span><a href="#Tiempo-de-cómputo" data-toc-modified-id="Tiempo-de-cómputo-3.4"><span class="toc-item-num">3.4 </span>Tiempo de cómputo</a></span></li><li><span><a href="#Incertidumbre-y-error" data-toc-modified-id="Incertidumbre-y-error-3.5"><span class="toc-item-num">3.5 </span>Incertidumbre y error</a></span></li><li><span><a href="#Clasificación-de-errores" data-toc-modified-id="Clasificación-de-errores-3.6"><span class="toc-item-num">3.6 </span>Clasificación de errores</a></span></li><li><span><a href="#Verificación-del-código" data-toc-modified-id="Verificación-del-código-3.7"><span class="toc-item-num">3.7 </span>Verificación del código</a></span></li><li><span><a href="#Validación-del-modelo" data-toc-modified-id="Validación-del-modelo-3.8"><span class="toc-item-num">3.8 </span>Validación del modelo</a></span></li></ul></li></ul></div>
# + [markdown] id="a33dckuOC02c"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img01_Intro.PNG?raw=true" width="500" />
# </p>
#
# + [markdown] id="GU8Xyl34C02d"
# ## Motivation
# + [markdown] id="rjd-lEQDC02d"
# We want to carry out the following arithmetic operations:
#
# - $2+2$
#
# - $4 \times 4$
#
# - $\left(\sqrt{3} \right )^2$
#
# From an analytical point of view, the exact (pen-and-paper) solutions are
#
# - $2+2 = 4$
#
# - $4 \times 4 = 16$
#
# - $\left(\sqrt{3} \right )^2 = 3$
#
# but let's see what happens when we perform the same operations on an electronic device (calculator, computer, etc.)
# + id="C0eoXQGNC02e"
a = 2 + 2
b = 4 * 4
c = (3**(1/2))**2
# + [markdown] id="mhbdbPdfC02g"
# Let's ask the computer whether the results of these calculations are the ones we expect
# + id="fYsW4abuC02h" outputId="8d7c131c-ab0b-4b64-f2eb-8fe852ef54ba"
a == 4
# + id="mfAgXmTUC02k" outputId="49da8f68-8185-40d9-eaba-a866a98580cb"
b == 16
# + id="dg6ma8kUC02l" outputId="e77e7607-a82e-4cba-d809-d122a87da60c"
c == 3
# + [markdown] id="FdJ3ye7aC02m"
# `False`? What happened? Why does comparing the value we understand to be true with the one obtained from an electronic device (calculator) come out false? Let's look at the value the calculation actually produced:
# + id="ArRBkGVWC02n" outputId="2d5d8058-4299-49fd-934e-3865e09e0a80"
print(c)
# + [markdown] id="WxNr2MUjC02n"
# Indeed, the computed value is not the expected one. In many everyday situations this difference would not be perceived as appreciable ("error") and the two values would simply be taken as equal ("rounding"). But what if this operation had to be repeated many times? What happens to that small error? Can it simply be neglected? What about more complex calculations? Can we quantify the error in numerical computations carried out on a computer? Does this error grow without bound? Up to what point can two quantities be said to be "equal"? What is the error due to: a bad implementation of the arithmetic operation, the language used for the calculation, the machine, the mathematical formulation, the human?
#
# These, and many other, questions are the ones this Numerical Analysis course sets out to answer.
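# The cumulative effect of such a small error is easy to demonstrate with plain Python (a minimal sketch, no extra libraries): adding $0.1$ ten times does not give exactly $1.0$, because $0.1$ has no exact binary representation.

```python
s = 0.0
for _ in range(10):
    s += 0.1  # each addition carries a tiny representation error

print(s)         # 0.9999999999999999
print(s == 1.0)  # False
```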
# + [markdown] id="pALL51zfC02o"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="dzamQdi7C02o"
# ## Mathematical and computational modeling
# + [markdown] id="rmw57k1NC02p"
# One of humanity's great challenges is trying to predict what will happen in nature: the occurrence of an earthquake, a torrential rain, a landslide. More generally, we want to determine the behavior of a system: given some input data, something happens in the transformation of that data and a response is obtained.
# + [markdown] id="KXyHcdPQC02p"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img02_Sistema.PNG?raw=true" width="350" />
# </p>
#
# + [markdown] id="r19nFR7kC02q"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="g8Lb5-J8C02q"
# ### The Scientific Method
# + [markdown] id="JPlgkB-sC02r"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img03_MetodoCientifico.png?raw=true" width="500" />
# </p>
#
# + [markdown] id="RU8i0qWtC02r"
# Formulating hypotheses with some probability of success must be grounded in experience, with a large number of varied examples that illustrate the behavior of the phenomenon under study. In many cases this can be achieved by direct observation and manipulation of the real phenomenon, but in many others that is ethically inadvisable or physically impossible (either because the phenomenon is intractable in space and time, or because it involves concepts that cannot be manipulated). It is in these cases that (scientific) [models](https://es.wikipedia.org/wiki/Modelo_cient%C3%ADfico) are used to reconstruct and test hypotheses that could not otherwise be addressed.
#
# + [markdown] id="oJdIhObwC02s"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="5wAx3yiXC02s"
# ### Model
# + [markdown] id="ulheSR-5C02s"
# Although there are many definitions of what is meant by a ***[model](https://es.wikipedia.org/wiki/Modelo_cient%C3%ADfico)***, the following one is comprehensible and complete enough for our purposes:
#
#
# > <strong><p style = "font-family:georgia,garamond,serif;font-size:16px;font-style:italic;">A scientific model is an abstract, conceptual, graphical, visual, or physical representation of phenomena, systems, or processes, built in order to analyze, describe, explain, and simulate (in general: explore, control, and predict) those phenomena or processes</p></strong>
#
#
#
# In general, a good model is one that fits the real phenomenon it represents in a way that lets us better understand its properties and thereby broaden our knowledge of it.
# + [markdown] id="If1n9XUKC02t"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img04_Modelo.png?raw=true" width="500" />
# </p>
#
# + [markdown] id="N0npFI3GC02t"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="14e6LXBnC02v"
# ### Mathematical and computational models
# + [markdown] id="cjebkyK0C02v"
# - ***[Mathematical model](https://es.wikipedia.org/wiki/Modelo_matem%C3%A1tico "Mathematical model"):*** a collection of mathematical constructions that provide abstractions of a physical event, consistent with a scientific theory proposed to cover that event
#
# $$\text{Dependent variable} = f \left( \text{independent variables, parameters, forcing functions,} \ldots\right)$$
# + [markdown] id="-YotDXLDC02w"
# - ***[Computational model](https://es.wikipedia.org/wiki/Modelo_computacional "Computational model"):*** a discrete version of a mathematical model, designed to be implemented on a machine.
# + [markdown] id="-YTJ8awsC02w"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="_zmJe1pYC02x"
# ### General strategy
# + [markdown] id="ohZmGJVeC02x"
# $$\text{Replace a hard problem with an easier one that has the same solution}$$
#
#
# - Infinite processes by finite processes:
#
#     - integrals by series;
#
#     - derivatives by finite differences;
#
#
# - Complex functions by simple ones (e.g. polynomials)
#
#
# - Nonlinear problems by linear ones
#
#
# - Differential equations by algebraic equations
#
#
# - High-order systems by low-order systems
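# As a small taste of the "derivatives by finite differences" item above, a minimal sketch approximating $f'(x)$ for $f(x)=\sin(x)$ with a forward difference and comparing against the exact derivative $\cos(x)$:

```python
import math

def forward_diff(f, x, h=1e-6):
    """Approximate f'(x) with a forward finite difference."""
    return (f(x + h) - f(x)) / h

x = 0.5
approx = forward_diff(math.sin, x)
exact = math.cos(x)
print(approx, exact)  # the two agree to roughly six digits
```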
# + [markdown] id="eEy71QxqC02x"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img06_ProcesoSimulacion02.PNG?raw=true" width="500" />
# </p>
#
# + [markdown] id="LZLe218LC02z"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="zlIxY_18C02z"
# ### Experiments vs. simulations
# + [markdown] id="iCriAUAbC02z"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img05_ProcesoSimulacion01.PNG?raw=true" width="500" />
# </p>
#
# + [markdown] id="nfbNN88ZC02z"
# | Experiments | Simulations
# |:----------------------------------- |:--------------------------------------------- |
# | Quantitative description of phenomena through measurements | Quantitative prediction of phenomena using software |
# | For one quantity at a time | For all quantities of interest |
# | At a limited number of points and instants in time | With high resolution in space and time |
# | For a laboratory-scale model | For the real domain |
# | For a limited range of problems and operating conditions | For virtually any problem and realistic operating conditions |
# | | |
# | ***Error sources:*** measurement errors, disturbances from the measuring instruments, poor calibration, etc. | ***Error sources:*** modeling, discretization, iteration, implementation |
# + [markdown] id="mIa4vzdHC020"
# As a general rule, simulations are not meant to replace measurements outright; rather, the goal is to reduce the number of physical experiments and the costs associated with them.
# + [markdown] id="0GopsodzC020"
# | Experiments | Simulations
# |:--------------------------|:-----------|
# | Expensive | Cheap(er) |
# | Slow | Fast(er) |
# | Sequential | Parallel |
# | Single-purpose | Multi-purpose |
# + [markdown] id="DgiNp8IbC020"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="nfJ9W8QcC021"
# ### Conclusion
# + [markdown] id="U7F9thxPC021"
# The results of a simulation are never 100% reliable, because:
#
#
# - the input data may involve too many assumptions or inaccuracies;
#
#
# - the mathematical model of the problem at hand may be inadequate;
#
#
# - the accuracy of the results is limited by the available computing power.
# + [markdown] id="fjlS_newC021"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="t8HB_UYbC022"
# ### Example
# + [markdown] id="Vb9SgvufC022"
# ***Example 1.1, Chapra, 5th ed.*** A parachutist of mass $m = 68.1\ kg$ jumps from a stationary hot-air balloon. Compute the velocity before the parachute opens. Take the drag coefficient to be $c = 12.5\ kg/s$.
#
#
# - ***[Free-body diagram](https://es.wikipedia.org/wiki/Diagrama_de_cuerpo_libre "Free-body diagram"):*** a simplified graphical representation of the physical reality
#
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img07_ejemplo01.PNG?raw=true" width="150" />
# </p>
#
# From [Newton's second law](https://es.wikipedia.org/wiki/Leyes_de_Newton "Newton's second law"):
#
# $$\vec{F} = m \vec{a}$$
#
# Since $\vec{a}$ is the rate of change of the velocity $\vec{v}$ with respect to time $t$, solving for it gives
#
# $$\frac{d\vec{v}}{dt} = \frac{\vec{F}}{m}$$
#
# From the free-body diagram, the forces involved are $F_D$, the downward force due to gravity, and $F_v$, the upward force due to air resistance:
#
# $$\vec{F} = \vec{F_D} + \vec{F_v}$$
#
# $$\vec{F_D} = m \vec{g}$$
#
# $$\vec{F_v} = -c \vec{v}$$
#
# Substituting,
#
# $$\frac{d\vec{v}}{dt} = \frac{m\vec{g} - c\vec{v}}{m}$$
#
# and simplifying,
#
#
# $$\frac{d\vec{v}}{dt} = \vec{g} - \frac{c}{m}\vec{v}$$
#
# an $ODE$ relating the acceleration of a falling body to the forces acting on it.
#
# + [markdown] id="FDF5bM5YC023"
# - ***Analytical solution:*** If the object is initially at rest, the initial condition is $\left(t = 0, v = 0\right)$, and solving the $ODE$ by classical methods gives:
#
# $$v(t) = \frac{m\vec{g}}{c}\left(1 - e^{-\frac{c}{m}t}\right)$$
#
# Substituting the values given in the problem statement,
#
# $$v(t) = \frac{9.8 \times 68.1}{12.5}\left(1 - e^{-\frac{12.5}{68.1}t}\right)$$
#
# Note that the only unknown is time: given a value of $t$, the velocity at that instant follows immediately.
#
#
# Now let's implement this equation in a programming language so the computer can carry out the calculations automatically:
# + id="xQ-xPLN8C023"
# import the scientific and plotting libraries used in the calculations
import numpy as np
import matplotlib.pyplot as plt
# + id="IbrS3W-PC024"
# Constants
m = 68.1
c = 12.5
g = 9.81
dt = 1.0
n = 50
cte1 = m / c
cte2 = 1.0 / cte1
# + id="PGkj5oJnC024" outputId="a8762500-ae72-43c3-c07c-af3cb5c19350"
# Plot of the exact solution:
def f(t,cte1,cte2):
return g*cte1*(1-np.exp(-cte2*t))
t = np.arange(0.0, n+1, dt)
plt.xlabel (r"t")
plt.ylabel (r'$v_{ex}(t)$')
plt.title (r'$t$ vs $v_{ex}(t)$')
plt.plot(t, f(t,cte1,cte2))
plt.grid(True)
plt.show()
# + [markdown] id="LjiBQATBC025"
# - ***Numerical (discrete) solution:***
#
# Approximating the rate of change of velocity with respect to time,
#
# $$\frac{dv}{dt} \approx \frac{\Delta v}{\Delta t}=\frac{v(t_{i+1})-v(t_i)}{t_{i+1}-t_i}$$
#
# recalling from calculus that
#
# $$\frac{dv}{dt} = \lim_{\Delta t \rightarrow 0} \frac{\Delta v}{\Delta t}$$
#
# This expression is called a *finite divided difference*, and it approximates the derivative at time $t_i$.
#
# Substituting into the differential equation,
#
# $$\frac{v(t_{i+1})-v(t_i)}{t_{i+1}-t_i}=g-\frac{c}{m}v(t_i)$$
#
# and rearranging terms,
#
# $$v(t_{i+1})=v(t_{i})+\left(g-\frac{c}{m}v(t_{i})\right)\left(t_{i+1}-t_{i}\right)$$
#
# The meaning of this equation can be summarized as:
#
# $$\text{New value = old value + slope} \times \text{time step}$$
#
# At the start of the computation ($t = 0$) the parachutist's velocity is zero. With this information and the given parameter values, the last equation can be used to compute the velocity at each subsequent time.
#
# Now let's implement this equation in a programming language so the computer can carry out the calculations automatically:
# + id="_S2Uj33MC025" outputId="d4511098-641e-4e19-85cf-f85fa378c51f"
# Approximate (discrete) solution
t = np.arange(0.0, n+1, dt)
vap = np.zeros(n+1)
dt = t[1]-t[0]
for i in range(1,n+1):
vap[i] = vap[i-1] + (g-cte2*vap[i-1])*dt
plt.xlabel (r"t")
plt.ylabel (r"$v_{approx}$(t)")
plt.title (r"$t$ vs $v_{approx}$(t)")
plt.plot (t, vap)
plt.grid(True)
plt.show()
# + [markdown] id="WrZhs3gBC025"
# A quick visual inspection of the two plots would suggest that they are "equal": no substantial difference between the two approaches is apparent at first glance. Let's superimpose the two curves to assess, visually, whether there is a difference:
# + id="WE_80yPVC026" outputId="0c5f55a8-2aef-4746-9c1d-9dc7fa64e324"
plt.xlabel (r"$t$")
plt.ylabel (r"$v$")
plt.title (r"$v_{Exact}$ vs $v_{approx}$")
plt.plot(t, f(t,cte1,cte2),'b', label='$v_{Exact}$')
plt.plot(t, vap,'r', label='$v_{approx}$')
plt.legend(loc='center right')
plt.grid(True)
plt.show()
# + [markdown] id="iJ9m-37zC026"
# Comparing the two solutions, there clearly is a difference. In this course we will examine why these discrepancies occur and how to implement numerical schemes that try to minimize them.
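# The discrepancy can be quantified directly. A minimal, self-contained sketch that recomputes both solutions with the constants used above and reports the maximum absolute difference over the 50 s window:

```python
import numpy as np

m, c, g, dt, n = 68.1, 12.5, 9.81, 1.0, 50
t = np.arange(0.0, n + 1, dt)

# analytical solution
v_exact = g * (m / c) * (1.0 - np.exp(-(c / m) * t))

# Euler (finite-difference) solution
v_euler = np.zeros(n + 1)
for i in range(1, n + 1):
    v_euler[i] = v_euler[i - 1] + (g - (c / m) * v_euler[i - 1]) * dt

err = np.abs(v_exact - v_euler).max()
print(err)  # on the order of a couple of m/s for dt = 1 s
```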
# + [markdown] id="zhXxmQYFC026"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="YudI5IT_C027"
# ## The simulation process
# + [markdown] id="Cw6YsDiOC027"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img08_carcrash.png?raw=true" width="750" />
# </p>
#
# + [markdown] id="J7JovCypC027"
# ### How are simulations carried out?
# + [markdown] id="zPdMHEFqC027"
# - Simulations use a computer to solve the mathematical equations of the problem at hand.
#
#
# - The main components of a computational simulation cycle are:
#
#     - the human (analyst) who states the problem to be solved;
#
#     - scientific knowledge (models, methods) expressed mathematically;
#
#     - the computer code (software) that embodies this knowledge and provides detailed instructions (algorithms);
#
#     - the hardware that performs the actual calculations;
#
#     - the human who inspects and interprets the simulation results.
#
#
# - Computational simulation is a highly interdisciplinary research area at the interface of physics, applied mathematics, and computer science.
# + [markdown] id="1tXYKxebC028"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="zNwaL9CgC028"
# ### The analysis process
# + [markdown] id="IViwskg8C028"
# - ***Problem statement:*** information about the problem
#
#
# - ***Mathematical model:*** $ IBVP = PDE + IC + BC $
#
#
# - ***Space-time discretization (mesh generation):*** nodes/cells, time instants
#
#
# - ***Spatial discretization:*** coupled $ODE/DAE$ systems
#
#
# - ***Time discretization:*** algebraic system $ [A] \{x\} = \{b\} $
#
# + [markdown] id="aCWcyVe0C028"
# ### Iterative solution strategy
# + [markdown] id="Z1v4SSlFC029"
# The coupled nonlinear algebraic equations must be solved iteratively
#
# - ***Outer iterations:*** the coefficients of the discrete problem are updated using the solution values from the previous iteration in order to
#
#     - remove the nonlinearities by a *Newton*-like method
#
#     - solve the governing equations in a segregated manner
#
#
# - ***Inner iterations:*** the resulting sequence of linear subproblems is usually solved by an iterative method (conjugate gradients, multigrid) because direct solvers (Gaussian elimination) are prohibitively expensive
#
#
# - ***Convergence criteria:*** residuals, relative solution changes, and other indicators must be monitored to make sure that the iterations converge.
#
#
# As a rule, the algebraic systems to be solved are very large (millions of unknowns) but yield [sparse matrices](https://en.wikipedia.org/wiki/Sparse_matrix "Sparse matrix"), i.e., most of the matrix coefficients are equal to zero.
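# The inner iterations just described can be sketched in a few lines with SciPy (the 1-D Poisson system below is an illustrative example, not part of the original notes): the conjugate gradient method solves a large sparse system without ever forming a dense matrix.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Sparse tridiagonal system [A]{x} = {b} from a 1-D Poisson discretization.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

# Conjugate gradient: an inner (linear) iteration; info == 0 means converged.
x, info = cg(A, b)

print("converged:", info == 0)
print("residual norm:", np.linalg.norm(A @ x - b))
```

# Only the three nonzero diagonals are stored, so memory stays linear in the number of unknowns — the reason iterative methods scale to millions of unknowns where Gaussian elimination does not.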
# + [markdown] id="pEYLlVRNC029"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="KdwsiyZNC029"
# ### Computing time
# + [markdown] id="4S30RnlcC029"
# <p float="center">
# <img src="https://github.com/carlosalvarezh/Analisis_Numerico/blob/master/images/C00_Img09_HPC.jpeg?raw=true" width="750" />
# </p>
#
# <div style="text-align: right"> Source: <a href="https://www.top500.org/">TOP500.org</a> </div>
#
# + [markdown] id="mWOq68lkC02-"
# The computing time of a simulation depends on:
#
# - Choosing suitable numerical algorithms and data structures
#
#
# - Linear algebra tools: stopping criteria for iterative solvers
#
#
# - Discretization parameters: mesh quality and size, time step
#
#
# - Cost per time step and convergence rates of the outer iterations
#
#
# - Programming language: most codes are written in a compiled language (Fortran, C), [Julia](https://julialang.org/ "Julia")?
#
#
# - Many other things: [hardware](https://www.eafit.edu.co/apolo "Apolo-EAFIT"), vectorization, parallelization, etc.
#
# + [markdown] id="scPq-7Q0C02-"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="pdb6PAXFC02-"
# ### Uncertainty and error
# + [markdown] id="ldm2f7tdC02-"
# Whether the results of a simulation can be trusted depends on the degree of uncertainty and on the cumulative effect of various errors.
#
#
# - Uncertainty is defined as a potential deficiency due to lack of knowledge
#
#
# - Error is defined as a recognizable deficiency due to other causes
#
#
# - Acknowledged errors come with mechanisms for identifying, estimating, and possibly removing them, or at least minimizing them
#
#
# - Unacknowledged errors have no standard detection procedures and may remain undiscovered, causing a great deal of damage
#
#
# - Local errors refer to solution errors at a single grid point or cell
#
#
# - Global errors refer to solution errors over the entire problem domain
#
#
# Local errors contribute to the global error and can propagate across the grid.
# + [markdown] id="y_phw6ieC02_"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="sKVETWmlC02_"
# ### Classification of errors
# + [markdown] id="w-lSPXEsC02_"
# ***Acknowledged errors***
#
# - Physical modeling error due to uncertainty and deliberate simplifications
#
#
# - Discretization error from approximating the $PDE$ by algebraic equations
#
#     - spatial discretization error due to the finite grid resolution
#
#     - temporal discretization error due to the finite time-step size
#
#
# - Iterative convergence error, which depends on the stopping criteria
#
#
# - Round-off errors due to the finite precision of computer arithmetic
#
#
# ***Unacknowledged errors***
#
#
# - Programming error: mistakes in the coding and logical errors ([bugs](https://en.wikipedia.org/wiki/Software_bug "Software bug"))
#
#
# - Usage error: wrong parameter values, models, or boundary conditions
#
# Knowledge of these error sources, together with the ability to control or avoid them, is an important prerequisite for developing and using simulation software (*[Garbage in, garbage out](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out "GIGO")*)
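# The round-off errors mentioned above are easy to observe directly; a minimal sketch in pure Python (the numbers are illustrative):

```python
# 0.1 and 0.2 have no exact binary representation, so their sum is not 0.3.
x = 0.1 + 0.2
print(x == 0.3)              # False

# Round-off also accumulates: adding 0.1 ten thousand times misses 1000.0.
s = 0.0
for _ in range(10000):
    s += 0.1
drift = abs(s - 1000.0)
print(drift)                 # small but nonzero
```

# Each individual addition commits an error near machine precision, and the errors accumulate over the loop — the same mechanism by which round-off contaminates long time integrations.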
# + [markdown] id="le7MvGsbC02_"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="LZ0z_2-xC03A"
# ### Code verification
# + colab={"base_uri": "https://localhost:8080/"} id="f9XxrEAzC7Rw" outputId="e2992789-4fd6-441c-b369-50d1d34cdf23"
i = 4
o = i + 4
print(o)
# + [markdown] id="IPc0-kaKC03A"
# Verification amounts to searching for errors in the implementation of the models (loosely speaking, the question is: ***are we solving the equations right?***)
#
# - ***Examine the programming*** by visually inspecting the source code, documenting it, and testing the underlying subprograms individually
#
#
# - ***Examine iterative convergence*** by monitoring the residuals and the relative changes of integral quantities, and by checking whether the prescribed tolerance is reached
#
#
# - ***Examine consistency***, e.g. by checking whether the relevant conservation principles are satisfied
#
#
# - ***Examine grid convergence.*** As the mesh and/or the time step is refined, the spatial and temporal discretization errors, respectively, should asymptotically approach zero (in the absence of round-off errors)
#
#
# - ***Compare computational results with analytical and numerical solutions*** for standard reference configurations (representative test cases or [benchmarks](https://en.wikipedia.org/wiki/Benchmark_(computing) "Benchmark"))
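# The grid-convergence check can be sketched numerically (the function, evaluation point, and step sizes below are illustrative choices): on successively refined grids, the observed order of accuracy $p = \log_2(e_h / e_{h/2})$ should approach the scheme's theoretical order — 2 for a central difference.

```python
import numpy as np

# Central-difference approximation of f'(x) for f = sin at x = 1.0.
f, x0, exact = np.sin, 1.0, np.cos(1.0)

def central_diff(h):
    return (f(x0 + h) - f(x0 - h)) / (2.0 * h)

# Errors on three successively halved "grids".
errors = [abs(central_diff(h) - exact) for h in (0.1, 0.05, 0.025)]

# Observed order of accuracy between consecutive refinements.
p = np.log2(errors[0] / errors[1])
print(round(p, 3))           # close to the theoretical order 2
```

# If the observed order fails to approach the theoretical one under refinement, either the implementation is wrong or round-off has started to dominate — both useful verification signals.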
# + [markdown] id="ppMvxwRhC03A"
# [Back to the Table of Contents](#TOC)
# + [markdown] id="hVyruYNMC03A"
# ### Model validation
# + [markdown] id="VdEWoPsyC03B"
# Validation amounts to checking whether the model itself is fit for practical purposes (loosely speaking, the question is: ***are we solving the right equations?***)
#
#
# - ***Verify the code*** to make sure the numerical solutions are correct
#
#
# - ***Compare the results*** with the available experimental data (taking measurement errors into account) to check whether reality is represented accurately enough
#
#
# - ***Perform sensitivity analyses*** and a parametric study to assess the inherent uncertainty due to insufficient understanding of the physical processes
#
#
# - ***Try using different models***, geometries, and initial/boundary conditions;
#
#
# - ***Report the results***, documenting the limitations of the model and the parameter settings
#
#
# ***The goal of [verification and validation](https://en.wikipedia.org/wiki/Verification_and_validation "V&V") is to guarantee that the code produces reasonable results for a certain range of problems***
# + [markdown] id="GRgX8XNJC03D"
# [Back to the Table of Contents](#TOC)
# + id="eL7fwhyyC6KA"
| Cap01_Introduccion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dream Bank
#
# # Part 1: Text Preprocessing
#
# **Packages**
# +
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('default')
import altair as alt
alt.data_transformers.disable_max_rows()
import re
import nltk
import string
import unicodedata
from textblob import TextBlob
from wordcloud import WordCloud, STOPWORDS
wc_stopwords = set(STOPWORDS)
from nltk.corpus import stopwords
from nltk import pos_tag
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.tokenize import word_tokenize, TweetTokenizer, sent_tokenize
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from sklearn.utils import resample
from sklearn.feature_extraction.text import TfidfVectorizer
from umap import UMAP
from tqdm import tqdm
from pprint import pprint
tqdm.pandas()
RANDOM_STATE = 1805
# -
# **Stopwords**
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk_eng_stopwords = set(stopwords.words("english"))
# # 1. Load Data
dreams_df = pd.DataFrame()
dreams_files = os.listdir('dreams')
for file in dreams_files:
temp_df = pd.read_csv('dreams/{}'.format(file), index_col=0)
dreams_df = pd.concat([dreams_df, temp_df], axis=0, ignore_index=True)
dreams_df
dreams_df.to_csv('dreams_df.csv', index=False)
dreams_df = pd.read_csv('dreams_df.csv')
dreams_df
# +
# dreams_df['date'] = pd.to_datetime(dreams_df['date'], errors='coerce')
# -
dreams_df.isnull().sum()
# # 2. Exploratory Data Analysis
# ## 2.1. Text Preprocessing
# +
# Handles Apostrophes
APPO = {
"aren't" : "are not", "can't" : "cannot", "couldn't" : "could not",
"didn't" : "did not", "doesn't" : "does not", "don't" : "do not",
"hadn't" : "had not", "hasn't" : "has not", "haven't" : "have not",
"he'd" : "he would", "he'll" : "he will", "he's" : "he is",
"i'd" : "I would", "i'd" : "I had", "i'll" : "I will", "i'm" : "I am",
"isn't" : "is not", "it's" : "it is", "it'll":"it will", "i've" : "I have",
"let's" : "let us", "mightn't" : "might not", "mustn't" : "must not",
"shan't" : "shall not", "she'd" : "she would", "she'll" : "she will",
"she's" : "she is", "shouldn't" : "should not", "that's" : "that is",
"there's" : "there is", "they'd" : "they would", "they'll" : "they will",
"they're" : "they are", "they've" : "they have", "we'd" : "we would",
"we're" : "we are", "weren't" : "were not", "we've" : "we have",
"what'll" : "what will", "what're" : "what are", "what's" : "what is",
"what've" : "what have", "where's" : "where is", "who'd" : "who would",
"who'll" : "who will", "who're" : "who are", "who's" : "who is",
"who've" : "who have", "won't" : "will not", "wouldn't" : "would not",
"you'd" : "you would", "you'll" : "you will", "you're" : "you are",
"you've" : "you have", "'re": " are", "wasn't": "was not", "we'll":" will",
"didn't": "did not", "tryin'":"trying"
}
# Preprocessing Steps
def remove_non_ascii(text):
words = text.split()
new_words = []
for word in words:
new_word = unicodedata.normalize('NFKD', word) \
.encode('ascii', 'ignore') \
.decode('utf-8' ,'ignore')
new_words.append(new_word)
text = ' '.join(new_words)
return text
def remove_http_links(text):
text = re.sub(r'http\S+', ' ', text)
return text
def remove_emails(text):
text = re.sub(r'www\S+', ' ', text)
return text
def remove_punctuation(text):
text = re.sub('[%s]' % re.escape(string.punctuation), ' ', text)
return text
def remove_one_char_words(text):
words = text.split()
words = [word for word in words if len(word.strip()) > 1]
text = ' '.join(words)
return text
def remove_numbers(text):
text = re.sub(r'\w*\d\w*', '', text)
return text
def lemmatize_with_postag(text, _tag_='n'):
sentence = TextBlob(text)
tag_dict = {'J': 'a', 'N': 'n', 'V': 'v', 'R': 'r'}
words_and_tags = [(w, tag_dict.get(pos[0], _tag_)) for w, pos in sentence.tags]
lemmatized_words = [wd.lemmatize(tag) for wd, tag in words_and_tags]
text = ' '.join(lemmatized_words)
return text
# +
lemmatizer = WordNetLemmatizer()
tweetTokenizer = TweetTokenizer()
def corpus_text_preprocessing(text):
""" AAA """
# Regular Expressions (1)
text = text.lower()
text = remove_non_ascii(text)
text = remove_emails(text)
text = remove_http_links(text)
text = re.sub('\\n', '', text)
text = re.sub('\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', '', text)
# Regular Expressions (2)
text = remove_punctuation(text)
text = remove_one_char_words(text)
text = remove_numbers(text)
# Remove stopwords (1)
words = tweetTokenizer.tokenize(text)
words = [APPO[word] if word in APPO else word for word in words]
words = [w for w in words if not w in nltk_eng_stopwords]
text = ' '.join(words)
# Word/Verb lemmatization
text = lemmatize_with_postag(text, 'v')
text = lemmatize_with_postag(text, 'n')
# Remove stopwords (2)
words = tweetTokenizer.tokenize(text)
words = [w for w in words if not w in nltk_eng_stopwords]
cleaned_text = ' '.join(words)
# Remove Punctuation
table = str.maketrans(string.punctuation, ' '*len(string.punctuation))
cleaned_text = cleaned_text.translate(table)
cleaned_text = ' '.join([w for w in cleaned_text.split()])
return cleaned_text
# +
test_input = dreams_df['content'].loc[0]
print('Before Text Preprocessing')
pprint(test_input)
print('\nAfter Text Preprocessing')
pprint(corpus_text_preprocessing(test_input))
# -
# %%time
dreams_df['text_cleaned'] = \
dreams_df['content'].progress_apply(lambda x: corpus_text_preprocessing(x))
dreams_df.to_csv('dreams_cleaned_df.csv', index=False)
dreams_cleaned_df = pd.read_csv('dreams_cleaned_df.csv')
dreams_cleaned_df.head()
# ## 2.2. Tag Distribution
dreams_cleaned_df['dreamer'].value_counts()
dreams_cleaned_df['dreamer'].unique()
german_dreamers = dreams_cleaned_df['dreamer'].unique().tolist()
german_dreamers = [el for el in german_dreamers if '.de' in el]
german_dreamers
dreams_cleaned_df = dreams_cleaned_df[~dreams_cleaned_df['dreamer'].isin(german_dreamers)].copy()
dreams_cleaned_df.shape
dreams_cleaned_df['dreamer'].value_counts()
# ## 2.3. Word Cloud Distribution
def show_wordcloud(data, figsize, cmap='viridis', title=None):
""" https://www.kaggle.com/gpreda/jigsaw-eda """
wordcloud = WordCloud(
background_color='white',
stopwords=wc_stopwords,
colormap=cmap,
max_words=60,
max_font_size=40,
scale=2,
random_state=RANDOM_STATE
).generate(str(data))
fig = plt.figure(1, figsize=figsize)
plt.axis('off')
if title:
fig.suptitle(title, fontsize=10)
fig.subplots_adjust(top=2.3)
plt.imshow(wordcloud)
plt.show()
show_wordcloud(dreams_cleaned_df['text_cleaned'].values, figsize=(8,8), cmap='coolwarm',
title='Dreambank Dreams')
dreams_cleaned_df[dreams_cleaned_df['text_cleaned'].isnull()]
dreams_cleaned_df = dreams_cleaned_df.dropna(subset=['text_cleaned'])
dreams_cleaned_df[dreams_cleaned_df['text_cleaned'].isnull()]
# ## 2.5. Top Unigram / Bigram / Trigrams
#
# **Top Unigrams**
# +
# %%time
clean_corpus = dreams_cleaned_df['text_cleaned'].values
tfv = TfidfVectorizer(min_df=50, max_features=10000,
strip_accents='unicode', analyzer='word',
ngram_range=(1, 1), use_idf=1, smooth_idf=1,
sublinear_tf=1, stop_words='english')
tfv.fit(clean_corpus)
features = np.array(tfv.get_feature_names())
train_unigrams = tfv.transform(clean_corpus)
train_unigrams = pd.DataFrame(train_unigrams.toarray(), columns=features)
top_features_all = train_unigrams.sum(axis=0) \
.sort_values(ascending=False)
# Plot
f, ax = plt.subplots(figsize=(7, 3))
top_features_all.head(50).plot.bar(ax=ax, color='grey')
ax.set_title('All Dreams', fontsize=10)
ax.set_ylabel('TF-IDF Score', fontsize=10)
f.tight_layout()
plt.show()
# -
top_features_all
# **Top Bigrams**
# +
# %%time
clean_corpus = dreams_cleaned_df['text_cleaned'].values
tfv = TfidfVectorizer(min_df=50, max_features=10000,
strip_accents='unicode', analyzer='word',
ngram_range=(2, 2), use_idf=1, smooth_idf=1,
sublinear_tf=1, stop_words='english')
tfv.fit(clean_corpus)
features = np.array(tfv.get_feature_names())
train_unigrams = tfv.transform(clean_corpus)
train_unigrams = pd.DataFrame(train_unigrams.toarray(), columns=features)
top_features_all = train_unigrams.sum(axis=0) \
.sort_values(ascending=False)
# Plot
f, ax = plt.subplots(figsize=(7, 4))
top_features_all.head(50).plot.bar(ax=ax, color='grey')
ax.set_title('All Dreams', fontsize=10)
ax.set_ylabel('TF-IDF Score', fontsize=10)
f.tight_layout()
plt.show()
| notebooks/Dream_Bank_Data_Preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root] *
# language: python
# name: conda-root-py
# ---
# ## The essentials of decision trees
import numpy as np
# +
def entropie(vect):
_, counts = np.unique(vect, return_counts=True)
py = np.array(counts / len(vect))
return -np.sum(py * np.log(py))
entropie(np.array([1,2,3,4,2,3,4,1,2,1]))
# +
def entropie_cond(list_vect):
p = 0
h = 0
for vec in list_vect:
p += len(vec) * entropie(vec)
h += len(vec)
return p/h
print(entropie_cond(np.array([[1,1,1,1,1,1,1,1,1,1], [1,2,3,4,3,2,1,3,2,1]])))
print(entropie_cond(np.array([[1,2,3,4,2,3,4,1,2,1], [1,2,3,4,2,3,4,1,2,1]])))
# -
def get_entro(x,y,fields):
entro = []
entro_cond = []
for i in range(x.shape[1]):
        # entropy of the target labels (same value for every feature)
        entro.append(entropie(y))
        # entropy of the labels conditioned on feature i
        entro_cond.append(entropie_cond([y[(x[:, i]==1)], y[(x[:, i]!=1)]]))
        print("-----------------------------")
        print("category:", fields[i])
        print("--> entropy =", entro[i])
        print("--> conditional entropy =", entro_cond[i])
        print('--> entropy gain = ', entro[i] - entro_cond[i])
    entro = np.array(entro)
    entro_cond = np.array(entro_cond)
    diff = entro - entro_cond
    max_index = diff.argmax()
    print("The best entropy gain is obtained for category:", fields[max_index], "with a score of:", diff[max_index])
return entro, entro_cond
# +
import pickle
import numpy as np
# data: array (films, features); id2titles: dict mapping id -> title,
# fields: feature id -> name
[data, id2titles, fields] = pickle.load(open("imdb_extrait.pkl","rb"))
# the last column is the rating
datax = data[:,:32]
datay = np.array([1 if x[33] > 6.5 else -1 for x in data])
print(data.shape)
print(len(id2titles))
print(len(fields))
# -
listeE, listeEC = get_entro(datax,datay,fields)
# ## Some preliminary experiments
# +
from sklearn.tree import export_graphviz
from sklearn.tree import DecisionTreeClassifier as DTree
import pydotplus
id2genre = [x[1] for x in sorted(fields.items())[:-2]]
dt = DTree()
dt.max_depth = 3  # cap the maximum depth of the tree at 3
dt.min_samples_split = 2  # minimum number of samples required to split a node
dt.fit(datax, datay)
dt.predict(datax[:5,:])
print(dt.score(datax, datay))
# render the .dot file with e.g. http://www.webgraphviz.com/ or https://dreampuf.github.io/GraphvizOnline
export_graphviz(dt, out_file ="tree3.dot", feature_names = id2genre)
# or with pydotplus
tdot = export_graphviz(dt, feature_names = id2genre)
pydotplus.graph_from_dot_data(tdot).write_pdf("tree3.pdf")
dt = DTree()
dt.max_depth = 5  # cap the maximum depth of the tree at 5
dt.min_samples_split = 2  # minimum number of samples required to split a node
dt.fit(datax, datay)
dt.predict(datax[:5,:])
print(dt.score(datax, datay))
# render the .dot file with e.g. http://www.webgraphviz.com/ or https://dreampuf.github.io/GraphvizOnline
export_graphviz(dt, out_file ="tree5.dot", feature_names = id2genre)
# or with pydotplus
tdot = export_graphviz(dt, feature_names = id2genre)
pydotplus.graph_from_dot_data(tdot).write_pdf("tree5.pdf")
dt = DTree()
dt.max_depth = 10  # cap the maximum depth of the tree at 10
dt.min_samples_split = 2  # minimum number of samples required to split a node
dt.fit(datax, datay)
dt.predict(datax[:5,:])
print(dt.score(datax, datay))
# render the .dot file with e.g. http://www.webgraphviz.com/ or https://dreampuf.github.io/GraphvizOnline
export_graphviz(dt, out_file ="tree10.dot", feature_names = id2genre)
# or with pydotplus
tdot = export_graphviz(dt, feature_names = id2genre)
pydotplus.graph_from_dot_data(tdot).write_pdf("tree10.pdf")
# The number of examples per node decreases as we go down the tree
# This is expected, since each split further discriminates between the examples
# The deeper the tree, the higher the classification score
# This is also expected: as the depth approaches its maximum, the model tends to overfit
# The training score is therefore not a reliable accuracy indicator
# One could perhaps normalize the score by the max-depth parameter
# A better option is to evaluate the score on held-out validation data
# -
# ## Overfitting and underfitting
# +
import matplotlib.pyplot as plt
def split_data(x, y, ratio):
limit = int(ratio*len(x))
return x[:limit], y[:limit], x[limit:], y[limit:]
def plot_acc(max_depth, datax_train, datay_train, datax_test, datay_test, k):
acc_train = []
acc_test = []
for i in range(max_depth):
dt = DTree()
dt.max_depth = i+1
dt.min_samples_split = 2
dt.fit(datax_train, datay_train)
dt.predict(datax_train[:5,:])
acc_train.append(dt.score(datax_train, datay_train))
acc_test.append(dt.score(datax_test, datay_test))
plt.xlabel("depth")
plt.ylabel("acc")
plt.plot(acc_train, label="train")
plt.plot(acc_test, label="test")
plt.title(str("Split train : " + str(k)))
plt.legend()
plt.show()
from matplotlib.pyplot import figure
def multiple_plot_acc(max_depth, datax_train, datay_train, datax_test, datay_test, k):
figure(figsize=(8, 8), dpi=80)
acc_train_tot = []
acc_test_tot = []
for elt in k:
datax_train, datay_train, datax_test, datay_test = split_data(datax, datay, elt)
acc_train = []
acc_test = []
for i in range(max_depth):
dt = DTree()
dt.max_depth = i+1
dt.min_samples_split = 2
dt.fit(datax_train, datay_train)
dt.predict(datax_train[:5,:])
acc_train.append(dt.score(datax_train, datay_train))
acc_test.append(dt.score(datax_test, datay_test))
acc_train_tot.append(acc_train)
acc_test_tot.append(acc_test)
plt.xlabel("depth")
plt.ylabel("acc")
for i in range(len(acc_train_tot)):
plt.plot(acc_train_tot[i], label=str("train"+str(k[i])))
plt.plot(acc_test_tot[i], label=str("test"+str(k[i])))
plt.title("Multiple plot")
plt.legend()
plt.show()
max_depth = 10
for k in [0.8, 0.5, 0.2]:
datax_train, datay_train, datax_test, datay_test = split_data(datax, datay, k)
plot_acc(max_depth, datax_train, datay_train, datax_test, datay_test, k)
multiple_plot_acc(max_depth, datax_train, datay_train, datax_test, datay_test, k=[0.8, 0.5, 0.2])
# With few training examples, the training accuracy reaches 1 quickly, overfitting is strong, and the test accuracy is around 0.7
# With many training examples, the training accuracy approaches 1 more slowly, overfitting is weaker, and the test accuracy is slightly higher (around 0.72)
# The overall behavior is nevertheless similar: all the curves show overfitting.
# The results look reasonable, but the fixed split wastes a lot of data that could have been used for training
# They can be improved with cross-validation
# -
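# The cross-validation idea above is also available off the shelf; a sketch with scikit-learn's `cross_val_score` (the synthetic dataset here merely stands in for `datax`/`datay`, which come from the pickled IMDB extract):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary dataset standing in for (datax, datay).
X, y = make_classification(n_samples=1000, n_features=32, random_state=0)

# Mean 10-fold cross-validated accuracy per depth: every sample is used
# for validation exactly once, so no data is "lost" to a fixed split.
for depth in (3, 5, 10):
    scores = cross_val_score(DecisionTreeClassifier(max_depth=depth, random_state=0),
                             X, y, cv=10)
    print(depth, round(scores.mean(), 3))
```

# Comparing the mean cross-validated score across depths is a standard way to select the `max_depth` hyperparameter without touching a held-out test set.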
# ## Cross-validation: model selection
# +
def crossvalid_split(x, y, n_chunk):
n = x.shape[0]
interval = n//n_chunk
for i in range(n_chunk):
print("Step :", i)
limit = i*interval
limit2 = (i+1)*interval
print("Validation chunk index : ", limit, limit2)
chunk_x = x[limit:limit2]
chunk_y = y[limit:limit2]
if i == 0:
rest_x2 = x[limit2:]
rest_y2 = y[limit2:]
else:
rest_x2 = np.concatenate((x[0:limit], x[limit2:]), 0)
rest_y2 = np.concatenate((y[0:limit], y[limit2:]), 0)
plot_acc(10, chunk_x, chunk_y, rest_x2, rest_y2, 0)
crossvalid_split(datax, datay, 10)
| S2/ML/TME/TME1/TME1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Secure sorting networks explained
#
# In this notebook, we develop some MPC protocols for securely sorting lists of secret-shared numbers. Concretely, we will show how to define functions sorting lists of secure MPyC integers into ascending order. The values represented by the secure integers and their relative order should remain completely secret.
#
# The explanation below assumes some basic familiarity with the MPyC framework for secure computation. Our main goal is to show how existing Python code for (oblivious) sorting can be used to implement a secure MPC sorting protocol using the `mpyc` package. The modifications to the existing code are very limited.
#
# ## Sorting networks
#
# [Sorting networks](https://en.wikipedia.org/wiki/Sorting_network) are a classical type of comparison-based sorting algorithms. The basic operation (or, gate) in a sorting network is the *compare&swap* operation, which puts any two list elements $x[i]$ and $x[j]$, $i<j$, in ascending order. That is, only if $x[i]>x[j]$, elements $x[i]$ and $x[j]$ are swapped, and otherwise the compare&swap operation leaves the list unchanged.
#
# A sorting network specifies the exact sequence of compare&swap operations to be applied to a list of a given length $n$. The particular sequence depends only on $n$, the length of the input list. Even when the input list is already in ascending order, the sorting network will perform exactly as many---and actually the same---compare&swap operations as when the input list would be in descending order.
#
# For example, to sort a list of three numbers, one needs to perform three compare&swap operations with indices $(i,j)$ equal to $(0,1)$, then $(1,2)$, and finally once more $(0,1)$.
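# This three-operation network is small enough to check exhaustively; a quick sketch on plain Python integers (using a local compare&swap helper):

```python
from itertools import permutations

def compare_and_swap(x, i, j):
    # Put x[i] and x[j] in ascending order.
    if x[i] > x[j]:
        x[i], x[j] = x[j], x[i]

# The fixed sequence (0,1), (1,2), (0,1) sorts every 3-element input.
network = [(0, 1), (1, 2), (0, 1)]
for perm in permutations([1, 2, 3]):
    x = list(perm)
    for i, j in network:
        compare_and_swap(x, i, j)
    assert x == [1, 2, 3]
print("all 6 permutations sorted")
```

# Because the sequence of operations is fixed and data-independent, this is exactly the obliviousness that makes sorting networks suitable for secure computation.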
#
# Below, we will use odd-even merge sort and bitonic sort, which are two well-known practical sorting networks.
#
# ## MPyC setup
#
# A simple MPyC setup using 32-bit (default) secure MPyC integers suffices for the purpose of this demonstration.
#
# At this point we also import the Python `traceback` module for later use.
from mpyc.runtime import mpc # load MPyC
secint = mpc.SecInt() # 32-bit secure MPyC integers
mpc.start() # required only when run with multiple parties
import traceback # to show some suppressed error messages
# ## Odd-even merge sort
#
# Odd-even merge sort is an elegant, but somewhat intricate, sorting network. The details are nicely explained in the Wikipedia article [Batcher's Odd-Even Mergesort](https://en.wikipedia.org/wiki/Batcher_odd–even_mergesort).
#
# For our purposes, however, there is no need to understand exactly how this particular sorting network works. The only thing that we need to do is to grab the following [example Python code](https://en.wikipedia.org/wiki/Batcher_odd–even_mergesort#Example_code) from this Wikipedia article.
# +
def oddeven_merge(lo, hi, r):
step = r * 2
if step < hi - lo:
yield from oddeven_merge(lo, hi, step)
yield from oddeven_merge(lo + r, hi, step)
yield from [(i, i + r) for i in range(lo + r, hi - r, step)]
else:
yield (lo, lo + r)
def oddeven_merge_sort_range(lo, hi):
""" sort the part of x with indices between lo and hi.
Note: endpoints (lo and hi) are included.
"""
if (hi - lo) >= 1:
# if there is more than one element, split the input
# down the middle and first sort the first and second
# half, followed by merging them.
mid = lo + ((hi - lo) // 2)
yield from oddeven_merge_sort_range(lo, mid)
yield from oddeven_merge_sort_range(mid + 1, hi)
yield from oddeven_merge(lo, hi, 1)
def oddeven_merge_sort(length):
""" "length" is the length of the list to be sorted.
Returns a list of pairs of indices starting with 0 """
yield from oddeven_merge_sort_range(0, length - 1)
def compare_and_swap(x, a, b):
if x[a] > x[b]:
x[a], x[b] = x[b], x[a]
# -
# We run the code on a simple example. Note that this code assumes that the length of the input list is an integral power of two.
x = [2, 4, 3, 5, 6, 1, 7, 8]
for i in oddeven_merge_sort(len(x)): compare_and_swap(x, *i)
print(x)
# We try to run this code on a list of secure MPyC integers.
x = list(map(secint, [2, 4, 3, 5, 6, 1, 7, 8]))
try:
for i in oddeven_merge_sort(len(x)): compare_and_swap(x, *i)
except:
traceback.print_exc()
# Unsurprisingly, this does not work. We get an error because we cannot use a `secint` directly in the condition of an `if` statement. And, even if we could, we should not do so, as the particular branch of the `if` statement followed reveals information about the input!
#
# Therefore, the function `compare_and_swap` is modified (i) to hide whether elements of $x$ are swapped and (ii) to keep the values of the elements of $x$ hidden, even when these are swapped.
def compare_and_swap(x, a, b):
c = x[a] > x[b] # secure comparison, c is a secint representing a secret-shared bit
d = c * (x[b] - x[a]) # secure subtraction
x[a], x[b] = x[a] + d, x[b] - d # secure swap: x[a], x[b] swapped if only if c=1
# Now the code can be used to sort a list of secure MPyC integers.
x = list(map(secint, [2, 4, 3, 5, 6, 1, 7, 8]))
for i in oddeven_merge_sort(len(x)): compare_and_swap(x, *i)
print(mpc.run(mpc.output(x)))
# ## Bitonic sort
#
# For our next example, we consult the Wikipedia article [Bitonic Sorter](https://en.wikipedia.org/wiki/Bitonic_sorter).
#
# We apply the same approach, grabbing the [example Python code](https://en.wikipedia.org/wiki/Bitonic_sorter#Example_code) from the Wikipedia article, which is also designed to work for input lists whose length is an integral power of two.
# +
def bitonic_sort(up, x):
if len(x) <= 1:
return x
else:
first = bitonic_sort(True, x[:len(x) // 2])
second = bitonic_sort(False, x[len(x) // 2:])
return bitonic_merge(up, first + second)
def bitonic_merge(up, x):
# assume input x is bitonic, and sorted list is returned
if len(x) == 1:
return x
else:
bitonic_compare(up, x)
first = bitonic_merge(up, x[:len(x) // 2])
second = bitonic_merge(up, x[len(x) // 2:])
return first + second
def bitonic_compare(up, x):
dist = len(x) // 2
for i in range(dist):
if (x[i] > x[i + dist]) == up:
x[i], x[i + dist] = x[i + dist], x[i] #swap
# -
# We run the code on the same example.
print(bitonic_sort(True, [2, 4, 3, 5, 6, 1, 7, 8]))
print(bitonic_sort(False, [2, 4, 3, 5, 6, 1, 7, 8]))
# Running the code on a list of secure MPyC integers gives the same error as above.
x = list(map(secint, [2, 4, 3, 5, 6, 1, 7, 8]))
try:
bitonic_sort(True, x)
except:
traceback.print_exc()
# This time we modify the function `bitonic_compare` as follows again to hide what is happening to the elements of $x$ being compared.
def bitonic_compare(up, x):
dist = len(x) // 2
up = secint(up) # convert public Boolean up into `secint` bit
for i in range(dist):
b = (x[i] > x[i + dist]) ^ ~up # secure xor of comparison bit and negated up
d = b * (x[i + dist] - x[i]) # d = 0 or d = x[i + dist] - x[i]
x[i], x[i + dist] = x[i] + d, x[i + dist] - d # secure swap
# Now the code can again be used to sort a list of secure MPyC integers.
print(mpc.run(mpc.output(bitonic_sort(True, x))))
mpc.shutdown() # required only when run with multiple parties
# The Python script [sort.py](sort.py) shows how to do secure bitonic sort for lists of arbitrary length, adapted from this general [bitonic sorter](http://www.iti.fh-flensburg.de/lang/algorithmen/sortieren/bitonic/oddn.htm).
| demos/SecureSortingNetsExplained.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Correctness verified on Python 3.6:**
# + pandas 0.23.4
# + numpy 1.15.4
# + matplotlib 3.0.2
# + sklearn 0.20.2
import warnings
warnings.filterwarnings('ignore')
# ## Data preprocessing and logistic regression for binary classification
# ## Programming assignment
# In this assignment you will get acquainted with the main data preprocessing techniques and apply them to train a logistic regression model. You will need to submit your answers to the corresponding form as 6 text files.
# +
import pandas as pd
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
matplotlib.style.use('ggplot')
# %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# -
# ## Dataset description
# Task: given 38 features related to a grant application (the researchers' field of study, information on their academic background, the size of the grant and the area in which it is awarded), predict whether the application will be accepted. The dataset contains information on 6,000 grant applications submitted to the University of Melbourne between 2004 and 2008.
#
# The full version of the data, with a larger number of features, can be found at https://www.kaggle.com/c/unimelb.
data = pd.read_csv('data.csv')
data.shape
# Extract the target variable Grant.Status from the dataset and denote it by y
# Now X denotes the feature matrix and y the labels on it
X = data.drop('Grant.Status', axis=1)  # feature matrix
y = data['Grant.Status']  # target labels
# ## Logistic regression theory
# Once you are clear on exactly which task needs to be solved on this data, the next step in a real analysis would be choosing a suitable method. In this assignment the method has been chosen for you: logistic regression. Let us briefly recall the model.
#
# Logistic regression predicts the probability that an object belongs to each class. For any single object, these probabilities sum to one over all classes.
#
# $$ \sum_{k=1}^K \pi_{ik} = 1, \quad \pi_{ik} \equiv P\,(y_i = k \mid x_i, \theta), $$
#
# where:
# - $\pi_{ik}$ is the probability that object $x_i$ from sample $X$ belongs to class $k$
# - $\theta$ are the internal parameters of the algorithm, tuned during training; for logistic regression these are $w, b$
#
# Thanks to this property of the model, in binary classification it suffices to compute the probability of belonging to just one of the classes (the other follows from the normalization of probabilities). This probability is computed using the logistic function:
#
# $$ P\,(y_i = 1 \mid x_i, \theta) = \frac{1}{1 + \exp(-w^T x_i-b)} $$
#
# The parameters $w$ and $b$ are found as the solutions of the following optimization problem (the objectives with L1 and L2 regularization, which you met in the previous assignments, are shown):
#
# L2-regularization:
#
# $$ Q(X, y, \theta) = \frac{1}{2} w^T w + C \sum_{i=1}^l \log ( 1 + \exp(-y_i (w^T x_i + b ) ) ) \longrightarrow \min\limits_{w,b} $$
#
# L1-regularization:
#
# $$ Q(X, y, \theta) = \sum_{d=1}^D |w_d| + C \sum_{i=1}^l \log ( 1 + \exp(-y_i (w^T x_i + b ) ) ) \longrightarrow \min\limits_{w,b} $$
#
# $C$ is the standard hyperparameter of the model, controlling how strongly we allow the model to fit the data.
# ## Data preprocessing
# The properties of this model imply that:
# - all features in $X$ must be numeric (any categorical features must somehow be converted to real numbers)
# - $X$ must contain no missing values (all missing values must be filled in some way before applying the model)
#
# Therefore the basic preprocessing steps for any dataset fed to logistic regression are encoding categorical features and removing or imputing missing values (whenever either is present).
# +
#data.head()
# -
# The dataset contains both numeric and categorical features. Let's get the lists of their names:
numeric_cols = ['RFCD.Percentage.1', 'RFCD.Percentage.2', 'RFCD.Percentage.3',
'RFCD.Percentage.4', 'RFCD.Percentage.5',
'SEO.Percentage.1', 'SEO.Percentage.2', 'SEO.Percentage.3',
'SEO.Percentage.4', 'SEO.Percentage.5',
'Year.of.Birth.1', 'Number.of.Successful.Grant.1', 'Number.of.Unsuccessful.Grant.1']
categorical_cols = list(set(X.columns.values.tolist()) - set(numeric_cols))
# The dataset also contains missing values. An obvious solution would be to drop every object with at least one missing value. Let's try:
data.dropna().shape
# Clearly this would throw away almost all the data, so this approach will not work here.
#
# Missing values can instead be imputed; there are several ways to do this, and they differ for categorical and numeric features.
#
# For numeric features:
# - replace with 0 (the feature then contributes nothing to the prediction for that object)
# - replace with the mean (each missing value then contributes as much as the feature's mean over the dataset)
#
# For categorical features:
# - treat a missing value as one more category (the most natural approach, since with categories we have the unique opportunity not to lose the information that the value was missing; note that for numeric features this information is inevitably lost)
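# The strategies above can be sketched on a tiny toy frame (the column names below are made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'age': [20.0, np.nan, 40.0],
                   'city': ['Melbourne', None, 'Sydney']})

# Numeric feature: fill with 0 or with the column mean
age_zeros = df['age'].fillna(0)
age_means = df['age'].fillna(df['age'].mean())

# Categorical feature: treat the missing value as one more category
city_cat = df['city'].fillna('NA').astype(str)

print(list(age_zeros))  # [20.0, 0.0, 40.0]
print(list(age_means))  # [20.0, 30.0, 40.0]
print(list(city_cat))   # ['Melbourne', 'NA', 'Sydney']
```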
# ## Task 0. Handling missing values.
# 1. Fill the missing numeric values in X with zeros and with column means, naming the resulting dataframes X_real_zeros and X_real_mean respectively. To compute the means, use the calculate_means function defined below, passing it the numeric features from the original dataframe. **You may also use pandas.mean() to compute the means.**
# 2. Convert all categorical features in X to strings; missing values must also be converted to some string that is not a category (e.g. 'NA'). Name the resulting dataframe X_cat.
#
# To combine samples here and throughout the assignment it is recommended to use the functions
#
# np.hstack(...)
# np.vstack(...)
def calculate_means(numeric_data):
    means = np.zeros(numeric_data.shape[1])
    for j in range(numeric_data.shape[1]):
        to_sum = numeric_data.iloc[:, j].copy()  # copy so the input column is not mutated in place
        indices = np.nonzero(~numeric_data.iloc[:, j].isnull())[0]
        correction = np.amax(to_sum[indices])
        to_sum /= correction
        for i in indices:
            means[j] += to_sum[i]
        means[j] /= indices.size
        means[j] *= correction
    return pd.Series(means, numeric_data.columns)
# +
X_real_zeros = X[numeric_cols].fillna(0)
means = calculate_means(X[numeric_cols])
X_real_mean = X[numeric_cols].copy()  # copy so the fill below does not modify X
for i in range(len(numeric_cols)):
    X_real_mean.iloc[:, i] = X_real_mean.iloc[:, i].fillna(means.values[i])
X_cat = X[categorical_cols].fillna('NA').astype(str)
# -
# ## Encoding categorical features.
# In the previous cell we split our dataset into two more parts: one containing only numeric features, the other only categorical ones. We will need this both for processing the two kinds of data separately later on and for comparing the quality of different methods.
#
# To use a regression model, the categorical features must be converted to numeric ones. Let's look at the main way of doing this: one-hot encoding. The idea is to encode a categorical feature with a binary code: each category is mapped to its own pattern of zeros and ones.
#
# Let's see how this method works on a simple dataset.
# +
from sklearn.linear_model import LogisticRegression
LR = LogisticRegression  # short alias used in the cells below
from sklearn.feature_extraction import DictVectorizer as DV
categorial_data = pd.DataFrame({'sex': ['male', 'female', 'male', 'female'],
'nationality': ['American', 'European', 'Asian', 'European']})
print('Original data:\n')
print(categorial_data)
encoder = DV(sparse = False)
encoded_data = encoder.fit_transform(categorial_data.T.to_dict().values())
print('\nEncoded data:\n')
print(encoded_data)
# -
# As you can see, information about the nationality ended up encoded in the first three columns, and the sex in the last two. For identical objects the encoded rows coincide completely. The example also shows that encoding greatly increases the number of features while fully preserving the information, including the presence of missing values (a missing value simply becomes one of the binary features in the transformed data).
#
# Now let's apply one-hot encoding to the categorical features of the original dataset. Note the interface shared by all preprocessing methods. The function
#
# encoder.fit_transform(X)
#
# computes the necessary parameters of the transformation; afterwards new data can be transformed with
#
# encoder.transform(X)
#
# It is very important to apply the same transformation to both the training and the test data; otherwise you will get unpredictable and most likely poor results. In particular, if you encode the training and test samples separately, you will generally get different codes for the same features, and your solution will not work.
#
# Also, the parameters of many transformations (e.g. the scaling considered below) must not be computed on the training and test data together, because then the quality metrics computed on the test set will give biased estimates of the algorithm's performance. One-hot encoding does not estimate any parameters from the training sample, so it can be applied to the whole dataset at once.
encoder = DV(sparse = False)
X_cat_oh = encoder.fit_transform(X_cat.T.to_dict().values())
# To measure the quality of the trained model, we need to split the original dataset into training and test samples.
#
# Note the fixed parameter of the random number generator: random_state. Since the results on training and test depend on exactly how you split the objects, a predefined value is used so that your results agree with the answers in the grading system.
# +
from sklearn.model_selection import train_test_split
(X_train_real_zeros,
X_test_real_zeros,
y_train, y_test) = train_test_split(X_real_zeros, y,
test_size=0.3,
random_state=0)
(X_train_real_mean,
X_test_real_mean) = train_test_split(X_real_mean,
test_size=0.3,
random_state=0)
(X_train_cat_oh,
X_test_cat_oh) = train_test_split(X_cat_oh,
test_size=0.3,
random_state=0)
# -
# ## Class descriptions
# So we have obtained the first data samples that satisfy both input constraints of logistic regression. Let's train the regression on them, using sklearn's built-in hyperparameter search
#
# optimizer = GridSearchCV(estimator, param_grid)
#
# where:
# - estimator is the learning algorithm whose parameters are tuned
# - param_grid is a dictionary of parameters: its keys are the parameter names passed to estimator, its values are the sets of values to try
#
# This class cross-validates the training sample for each parameter combination and finds the one on which the algorithm performs best. It lets you tune hyperparameters on the training sample while avoiding overfitting. Some optional constructor parameters we will need:
# - scoring - the quality metric maximized by cross-validation; by default the score() function of the estimator class is used
# - n_jobs - speeds up cross-validation by running it in parallel; the number sets how many tasks run simultaneously
# - cv - the number of folds the sample is split into during cross-validation
#
# After a GridSearchCV object is created, the parameter search is launched with
#
# optimizer.fit(X, y)
#
# Predictions can then be obtained with
#
# optimizer.predict(X)
#
# for labels, or
#
# optimizer.predict_proba(X)
#
# for probabilities (when using logistic regression).
#
# You can also access the optimal estimator and parameters directly, since they are attributes of the GridSearchCV class:
# - best\_estimator\_ - the best model
# - best\_params\_ - the best parameter set
#
# The logistic regression class looks as follows:
#
# estimator = LogisticRegression(penalty)
#
# where penalty is either 'l2' or 'l1'. The default is 'l2', and everywhere in this assignment, unless stated otherwise, logistic regression with L2 regularization is assumed.
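# As a minimal, self-contained illustration of this interface (on synthetic data, not on the grant dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X_demo = rng.normal(size=(100, 3))
y_demo = (X_demo[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)

# Cross-validate each value of C and keep the best one
optimizer = GridSearchCV(LogisticRegression(penalty='l2'),
                         {'C': [0.01, 0.1, 1, 10]}, cv=3)
optimizer.fit(X_demo, y_demo)

print(optimizer.best_params_)            # the C that won on cross-validation
proba = optimizer.predict_proba(X_demo)  # per-class probabilities, shape (100, 2)
```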
# ## Task 1. Comparing ways of filling missing numeric values.
# 1. Build two training samples from the numeric and categorical features: in one, the missing numeric values are filled with zeros; in the other, with means. It is recommended to put the numeric features first and the categorical ones after them.
# 2. Train logistic regression on them, tuning the parameters from the given grid param_grid by cross-validation with cv=3 folds. Use the default scoring function as the optimized metric.
# 3. Plot two graphs of the accuracy estimates +- their standard deviations as a function of the hyperparameter and make sure you really found its maximum. Also note the large variance of the estimates (it can be reduced by increasing the number of folds cv).
# 4. Obtain the two AUC ROC values on the test sample and compare them. Which way of filling missing numeric values works better? For the rest of the assignment, use as the numeric features the sample that gives the better test quality.
# 5. Pass the two AUC ROC values (first for the sample filled with means, then for the one filled with zeros) to the function write_answer_1 and run it. The resulting file is the answer to task 1.
#
# For the curious: strictly speaking, it is not quite consistent to optimize accuracy (the default of the logistic regression class) on cross-validation while measuring AUC ROC on the test set, but this, like the restricted sample size, is done to speed up cross-validation.
# +
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score
def plot_scores(optimizer):
scores=[]
for i in range(len(optimizer.cv_results_['params'])):
scores.append([optimizer.cv_results_['params'][i]['C'],
optimizer.cv_results_['mean_test_score'][i],
optimizer.cv_results_['std_test_score'][i]])
scores = np.array(scores)
plt.semilogx(scores[:,0], scores[:,1])
plt.fill_between(scores[:,0], scores[:,1]-scores[:,2],
scores[:,1]+scores[:,2], alpha=0.3)
plt.show()
def write_answer_1(auc_1, auc_2):
auc = (auc_1 + auc_2)/2
with open("preprocessing_lr_answer1.txt", "w") as fout:
fout.write(str(auc))
param_grid = {'C': [0.01, 0.05, 0.1, 0.5, 1, 5, 10]}
cv = 10
# +
#1
X_train_cat_oh = pd.DataFrame(X_train_cat_oh)
X_train_cat_oh.index = X_train_real_zeros.index
zero_X = pd.concat([X_train_real_zeros, X_train_cat_oh], axis=1)
X_train_cat_oh.index = X_train_real_mean.index
mean_X = pd.concat([X_train_real_mean, X_train_cat_oh], axis=1)
# -
#2
estimator_zero = LR()
optimizer_zero = GridSearchCV(estimator_zero, param_grid, cv = cv, n_jobs=-1)
# %%time
optimizer_zero.fit(zero_X, y_train)
estimator_mean = LR()
optimizer_mean = GridSearchCV(estimator_mean, param_grid, cv = cv)
# %%time
optimizer_mean.fit(mean_X, y_train)
#3
print ('zeros', optimizer_zero.best_score_)
plot_scores(optimizer_zero)
print ('means', optimizer_mean.best_score_)
plot_scores(optimizer_mean)
X_test_real_zeros.head()
# Reset the indices so that the numeric and one-hot parts align row by row
X_test_real_zeros = pd.DataFrame(X_test_real_zeros).reset_index(drop=True)
X_test_real_mean = pd.DataFrame(X_test_real_mean).reset_index(drop=True)
X_test_cat_oh = pd.DataFrame(X_test_cat_oh).reset_index(drop=True)
zero_X_test = pd.concat([X_test_real_zeros, X_test_cat_oh], axis=1)
mean_X_test = pd.concat([X_test_real_mean, X_test_cat_oh], axis=1)
#4
score_zero = roc_auc_score(y_test, optimizer_zero.predict_proba(zero_X_test)[:, 1])
print(score_zero)
score_mean = roc_auc_score(y_test, optimizer_mean.predict_proba(mean_X_test)[:, 1])
print(score_mean)
# ## Scaling numeric features.
# Let's try to improve the classification quality. To do this, look at the data themselves:
# +
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in newer pandas
data_numeric = pd.DataFrame(X_train_real_zeros, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
# -
# As the plots show, different features differ strongly from one another in magnitude (note the ranges of the x and y axes). In plain regression this does not affect model quality, since features with smaller magnitudes simply receive larger weights; but with regularization, which penalizes large weights, regression usually starts to perform worse.
#
# In such cases it is always recommended to standardize (scale) the features so that they differ less in magnitude while no other properties of the feature space are violated. Even if the final test quality of the model decreases, scaling improves interpretability, because the new weights can be read as the "importance" of a feature for the final classification.
#
# Standardization subtracts the mean from each feature and normalizes by the sample standard deviation:
#
# $$ x^{scaled}_{id} = \dfrac{x_{id} - \mu_d}{\sigma_d}, \quad \mu_d = \frac{1}{N} \sum_{i=1}^N x_{id}, \quad \sigma_d = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (x_{id} - \mu_d)^2} $$
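# This is what sklearn's StandardScaler does; note that it divides by the biased standard deviation (denominator $N$ rather than $N-1$), a minor difference from the formula above. A quick self-contained check on random data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_demo = rng.uniform(0, 1000, size=(50, 3))  # features on a large scale

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_demo)

# Every column is now centered with unit standard deviation
print(X_scaled.mean(axis=0).round(6))  # ~0 in every column
print(X_scaled.std(axis=0).round(6))   # ~1 in every column
```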
# ## Task 1.5. Scaling numeric features.
#
# 1. By analogy with the one-hot encoder call, scale the numeric features of the training and test samples X_train_real_zeros and X_test_real_zeros using the class
#
# StandardScaler
#
# and its methods
#
# StandardScaler.fit_transform(...)
# StandardScaler.transform(...)
# 2. Save the results in the variables X_train_real_scaled and X_test_real_scaled respectively.
# +
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_real_scaled = scaler.fit_transform(X_train_real_zeros)
X_test_real_scaled = scaler.transform(X_test_real_zeros)
# -
# ## Comparing feature spaces.
# Let's build the same plots for the transformed data:
data_numeric_scaled = pd.DataFrame(X_train_real_scaled, columns=numeric_cols)
list_cols = ['Number.of.Successful.Grant.1', 'SEO.Percentage.2', 'Year.of.Birth.1']
scatter_matrix(data_numeric_scaled[list_cols], alpha=0.5, figsize=(10, 10))
plt.show()
# As the plots show, we did not change the properties of the feature space: the histograms of the feature values, like their scatter plots, look the same as before normalization, but all values now lie in roughly the same range, which improves the interpretability of the results and fits better with the idea of regularization.
# ## Task 2. Comparing classification quality before and after scaling the numeric features.
# 1. Train the regression and its hyperparameters once more on the new features, combining them with the encoded categorical ones.
# 2. Check whether the accuracy optimum over the hyperparameters was found during cross-validation.
# 3. Obtain the ROC AUC value on the test sample and compare it with the best result obtained earlier.
# 4. Write the result to a file using the function write_answer_2.
# +
def write_answer_2(auc):
with open("preprocessing_lr_answer2.txt", "w") as fout:
fout.write(str(auc))
# place your code here
# -
# ## Class balancing.
# Classification algorithms can be very sensitive to imbalanced classes. Consider an example with samples drawn from two Gaussians. Their means and covariance matrices are chosen so that the true separating surface should run parallel to the x axis. Put 20 objects sampled from the 1st Gaussian and 10 from the 2nd into the training sample, then train logistic regression on them and plot the objects and the classification regions.
np.random.seed(0)
"""Sample data from the first Gaussian"""
data_0 = np.random.multivariate_normal([0,0], [[0.5,0],[0,0.5]], size=40)
"""...and from the second"""
data_1 = np.random.multivariate_normal([0,1], [[0.5,0],[0,0.5]], size=40)
"""Take 20 objects of the first class and 10 of the second for training"""
example_data_train = np.vstack([data_0[:20,:], data_1[:10,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((10))])
"""And 20 of the first and 30 of the second for testing"""
example_data_test = np.vstack([data_0[20:,:], data_1[10:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((30))])
"""Define the grid on which the classification regions will be computed"""
xx, yy = np.meshgrid(np.arange(-3, 3, 0.02), np.arange(-3, 3, 0.02))
"""Train the regression without class balancing"""
optimizer = GridSearchCV(LogisticRegression(), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
"""Compute the regression predictions on the grid"""
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
"""Compute the AUC"""
auc_wo_class_weights = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('Without class weights')
plt.show()
print('AUC: %f'%auc_wo_class_weights)
"""For the second regression, pass class_weight='balanced' to LogisticRegression"""
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_w_class_weights = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('With class weights')
plt.show()
print('AUC: %f'%auc_w_class_weights)
# As you can see, in the second case the classifier finds a separating surface that is closer to the true one, i.e. it overfits less. So you should always pay attention to how balanced the classes in your training sample are.
#
# Let's check whether the classes in our training sample are balanced:
print(np.sum(y_train==0))
print(np.sum(y_train==1))
# Clearly they are not.
#
# This can be fixed in several ways; we will consider two:
# - give objects of the minority class a larger weight when training the classifier (shown in the example above)
# - oversample objects of the minority class until the class counts are equal
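# The second option can be sketched as follows (on synthetic arrays, not the assignment variables):

```python
import numpy as np

np.random.seed(0)
X_train = np.random.normal(size=(30, 2))
y_train = np.array([0] * 20 + [1] * 10)  # 20 vs 10: imbalanced

# Sample (with replacement) enough minority objects to even the classes out
n_to_add = np.sum(y_train == 0) - np.sum(y_train == 1)
indices_to_add = np.random.randint(0, np.sum(y_train == 1), size=n_to_add)
X_to_add = X_train[y_train == 1, :][indices_to_add, :]

X_balanced = np.vstack([X_train, X_to_add])
y_balanced = np.concatenate([y_train, np.ones(n_to_add)])

print(np.sum(y_balanced == 0), np.sum(y_balanced == 1))  # 20 20
```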
# ## Task 3. Class balancing.
# 1. Train logistic regression and its hyperparameters with class balancing via weights (the class_weight='balanced' parameter of the regression) on the scaled samples obtained in the previous task. Make sure you found the accuracy maximum over the hyperparameters.
# 2. Obtain the ROC AUC metric on the test sample.
# 3. Balance the sample by oversampling objects of the smaller class. To get the indices of the objects to add to the training sample, use the following combination of calls:
# np.random.seed(0)
# indices_to_add = np.random.randint(...)
# X_train_to_add = X_train[y_train.as_matrix() == 1,:][indices_to_add,:]
# Then append these objects to the beginning or end of the training sample and extend the label vector accordingly.
# 4. Obtain the ROC AUC metric on the test sample and compare it with the previous result.
# 5. Write the answers to the output file with the function write_answer_3, passing it first the ROC AUC for balancing via weights, then for manual oversampling.
# +
def write_answer_3(auc_1, auc_2):
auc = (auc_1 + auc_2) / 2
with open("preprocessing_lr_answer3.txt", "w") as fout:
fout.write(str(auc))
# place your code here
# -
# ## Sample stratification.
# Let's return once more to the example with samples from the normal distributions and look again at the classifier quality obtained on the test samples:
print('AUC ROC for classifier without weighted classes: ', auc_wo_class_weights)
print('AUC ROC for classifier with weighted classes: ', auc_w_class_weights)
# How well do these numbers really reflect the algorithm's quality, given that the test sample is just as imbalanced as the training one? We already know that logistic regression is sensitive to the class balance of the training sample, so here it will give deliberately understated results on the test set. The test metric would be far more meaningful if the objects were split evenly between the samples: 20 of each class for training and 20 for testing. Let's re-form the samples and recompute the errors:
"""Split the data evenly by class between the training and test samples"""
example_data_train = np.vstack([data_0[:20,:], data_1[:20,:]])
example_labels_train = np.concatenate([np.zeros((20)), np.ones((20))])
example_data_test = np.vstack([data_0[20:,:], data_1[20:,:]])
example_labels_test = np.concatenate([np.zeros((20)), np.ones((20))])
"""Train the classifier"""
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced'), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train, example_labels_train)
Z = optimizer.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
auc_stratified = roc_auc_score(example_labels_test, optimizer.predict_proba(example_data_test)[:,1])
plt.title('With class weights')
plt.show()
print('AUC ROC for stratified samples: ', auc_stratified)
# As you can see, after this procedure the classifier's predictions changed only slightly, while the quality increased. Depending on how you originally split the data into training and test, after a class-balanced split the final test metric may either increase or decrease, but it can be trusted much more, since it is computed with the specifics of the classifier in mind. This approach is a special case of so-called stratification.
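# In sklearn this is exactly what the stratify argument of train_test_split does: the class proportions are preserved in both parts. A self-contained illustration with made-up labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

y = np.array([0] * 80 + [1] * 20)    # 80/20 imbalance
X = np.arange(200).reshape(100, 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Both parts keep the original 80/20 class ratio
print(np.mean(y_tr), np.mean(y_te))  # 0.2 0.2
```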
# ## Task 4. Sample stratification.
#
# 1. By analogy with what was done at the beginning of the assignment, split the samples X_real_zeros and X_cat_oh into training and test with the function
# train_test_split(...)
# additionally passing the parameter
# stratify=y
# Also be sure to pass random_state=0.
# 2. Scale the new numeric samples, train the classifier and its hyperparameters by cross-validation, correcting for the imbalanced classes with weights. Make sure you found the accuracy optimum over the hyperparameters.
# 3. Evaluate the classifier on the test sample with the AUC ROC metric.
# 4. Pass the result to the function write_answer_4.
# +
def write_answer_4(auc):
with open("preprocessing_lr_answer4.txt", "w") as fout:
fout.write(str(auc))
# place your code here
# -
# You have now worked through the main stages of data preprocessing for linear classifiers.
# To recap, the main stages are:
# - handling missing values
# - encoding categorical features
# - stratification
# - class balancing
# - scaling
#
# These steps are recommended whenever you plan to use linear methods, and the advice behind many of them holds for other machine learning methods as well.
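# These stages compose naturally into an sklearn Pipeline, which guarantees that fit-time parameters (e.g. the scaler's means) are learned on the training part only. A sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 4)) * [1, 10, 100, 1000]  # wildly different scales
y = (X[:, 0] > 0).astype(int)

# Stratified split, then scaling + weighted logistic regression in one object
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
model = Pipeline([('scale', StandardScaler()),
                  ('clf', LogisticRegression(class_weight='balanced'))])
model.fit(X_tr, y_tr)          # the scaler is fitted on X_tr only
acc = model.score(X_te, y_te)
print(acc)
```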
# ## Feature transformations.
#
# Now let's look at ways of transforming features. There are quite a few transformations that let linear methods produce more complex separating surfaces. The most basic one is the polynomial feature transformation: besides the features themselves, you additionally include all monomials of degree up to $p$ that can be built from them. For $p=2$ the transformation looks as follows:
#
# $$ \phi(x_i) = [x_{i,1}^2, ..., x_{i,D}^2, x_{i,1}x_{i,2}, ..., x_{i,D} x_{i,D-1}, x_{i,1}, ..., x_{i,D}, 1] $$
#
# Let's see how these features behave on the data sampled from the Gaussians:
# +
from sklearn.preprocessing import PolynomialFeatures
"""Initialize the class that performs the transformation"""
transform = PolynomialFeatures(2)
"""Fit the transformation on the training sample and apply it to the test sample"""
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
"""Note the fit_intercept=False parameter"""
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('With class weights')
plt.show()
# -
# As you can see, this transformation already allows building nonlinear separating surfaces, which can adapt to the data more finely and capture more complex dependencies. The number of features in the new model:
print(example_data_train_poly.shape)
# At the same time, the method makes the model much more prone to overfitting, because the number of features grows rapidly with the degree $p$. Consider an example with $p=11$:
transform = PolynomialFeatures(11)
example_data_train_poly = transform.fit_transform(example_data_train)
example_data_test_poly = transform.transform(example_data_test)
optimizer = GridSearchCV(LogisticRegression(class_weight='balanced', fit_intercept=False), param_grid, cv=cv, n_jobs=-1)
optimizer.fit(example_data_train_poly, example_labels_train)
Z = optimizer.predict(transform.transform(np.c_[xx.ravel(), yy.ravel()])).reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Pastel2)
plt.scatter(data_0[:,0], data_0[:,1], color='red')
plt.scatter(data_1[:,0], data_1[:,1], color='blue')
plt.title('Corrected class weights')
plt.show()
# The number of features in this model:
print(example_data_train_poly.shape)
# ## Task 5. Transforming the numeric features.
#
# 1. By analogy with the example, transform the numeric features of the model with polynomial features of degree 2.
# 2. Build a logistic regression on the new data, tuning the optimal hyperparameters at the same time. Note that the transformed features already contain a column whose values all equal 1, so the intercept $b$ need not be trained separately: one of the weights $w$ plays its role. Therefore, to avoid linear dependence in the dataset, pass fit_intercept=False to the logistic regression class. For training, use the stratified samples with class balancing via weights; the transformed features must be scaled anew.
# 3. Obtain the AUC ROC on the test set and compare it with the result on the plain features.
# 4. Pass the result to the function write_answer_5.
# +
def write_answer_5(auc):
with open("preprocessing_lr_answer5.txt", "w") as fout:
fout.write(str(auc))
# place your code here
# -
# ## Lasso regression.
# L1 regularization (Lasso) can also be applied to logistic regression instead of L2, and it leads to feature selection. You are asked to apply L1 regularization to the original features and interpret the results (feature selection can also be applied to polynomial features successfully, but the interpretation component is then lost: the meaning of the original features is known, while that of the polynomial ones can be quite nontrivial). To call logistic regression with L1 regularization, simply pass penalty='l1' when constructing the class.
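# A self-contained sketch of this effect (synthetic data; only the first feature is informative, and the explicit solver choice is an assumption — liblinear is one of the solvers that supports the l1 penalty):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)  # only feature 0 matters

# Fairly strong L1 regularization drives uninformative weights to exactly zero
lasso = LogisticRegression(penalty='l1', C=0.1, solver='liblinear')
lasso.fit(X, y)

zero_features = np.where(lasso.coef_[0] == 0)[0]
print(zero_features)  # indices of the features dropped by L1
```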
# ## Task 6. Feature selection with Lasso regression.
# 1. Train Lasso regression on the stratified, scaled samples, balancing the classes with weights.
# 2. Obtain its ROC AUC and compare it with the previous results.
# 3. Find the indices of the numeric features that have zero weights in the final model.
# 4. Pass their list to the function write_answer_6.
# +
def write_answer_6(features):
with open("preprocessing_lr_answer6.txt", "w") as fout:
fout.write(" ".join([str(num) for num in features]))
# place your code here
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !python main.py gen --model-path='checkpoints/tang_199.pth' \
# +
opt = Config()
opt.caption_data_path = 'caption.pth'  # raw caption data
opt.test_img = 'img/example.jpeg'  # input image
opt.use_gpu = False  # whether to use the GPU (not necessary here)
#opt.model_ckpt='caption_0914_1947'  # pretrained model
opt.img_feature_path = 'results.pth'
# Data
vis = Visualizer(env = opt.env)
dataloader = get_dataloader(opt)
_data = dataloader.dataset._data
word2ix,ix2word = _data['word2ix'],_data['ix2word']
# Model
model = CaptionModel(opt,word2ix,ix2word)
if opt.model_ckpt:
model.load(opt.model_ckpt)
optimizer = model.get_optimizer(opt.lr1,opt.lr2)
criterion = t.nn.CrossEntropyLoss()
if opt.use_gpu:
model.cuda()
criterion.cuda()
for epoch in range(opt.epoch):
loss_meter.reset()
for ii,(imgs, (captions, lengths),indexes) in tqdm.tqdm(enumerate(dataloader)):
        # train
optimizer.zero_grad()
if opt.use_gpu:
imgs = imgs.cuda()
captions = captions.cuda()
imgs = Variable(imgs)
captions = Variable(captions)
input_captions = captions[:-1]
target_captions = pack_padded_sequence(captions,lengths)[0]
score,_ = model(imgs,input_captions,lengths)
loss = criterion(score,target_captions)
loss.backward()
optimizer.step()
loss_meter.add(loss.data[0])
        # visualization
if (ii+1)%opt.plot_every ==0:
if os.path.exists(opt.debug_file):
ipdb.set_trace()
vis.plot('loss',loss_meter.value()[0])
            # visualize the raw image and the human-written caption
raw_img = _data['ix2id'][indexes[0]]
img_path=opt.img_path+raw_img
raw_img = Image.open(img_path).convert('RGB')
raw_img = tv.transforms.ToTensor()(raw_img)
raw_caption = captions.data[:,0]
raw_caption = ''.join([_data['ix2word'][ii] for ii in raw_caption])
vis.text(raw_caption,u'raw_caption')
vis.img('raw',raw_img,caption=raw_caption)
            # visualize the caption generated by the network
results = model.generate(imgs.data[0])
vis.text('</br>'.join(results),u'caption')
model.save()
# -
opt.use_gpu
# +
optimizer = model.get_optimizer(opt.lr1)
criterion = t.nn.CrossEntropyLoss()
if opt.use_gpu:
model.cuda()
criterion.cuda()
# statistics
loss_meter = meter.AverageValueMeter()
# -
opt.img_path='/home/cy/caption_data/'
| _book/img_input/.ipynb_checkpoints/Debug-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Object Mutability
#
# An object is made up of:
#
# - Type
# - State (data)
#
# Changing the object's data is called modifying the internal state of the
# object (the object is said to be mutated)
#
# Mutable object
#
# - an object whose internal state can be changed
#
# Immutable object
#
# - an object whose internal state cannot be changed
#
# | Immutable | Mutable |
# | --- | --- |
# | Numbers (int, float, bool, etc) | Lists |
# | Strings | Sets |
# | Tuples | Dictionaries |
# | Frozen Sets | User-Defined Classes |
# Tuples are immutable: elements cannot be deleted, inserted, or replaced
# In this example, both the container (tuple) and its elements (ints) are immutable
t = (1, 2, 3)
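Attempting item assignment confirms this directly; in CPython the attempt raises a `TypeError`:

```python
# Tuples do not support item assignment, so replacing an element fails.
t = (1, 2, 3)
try:
    t[0] = 99
    error_message = None
except TypeError as exc:
    error_message = str(exc)
print(error_message)
```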
# Lists are mutable
a = [1, 2]
b = [3, 4]
t = (a, b)
print(t)
hex(id(t))
# Adding elements to the lists
# Tuple has not changed
a.append(3)
b.append(5)
print(t)
hex(id(t))
# Even if the container is immutable (its object references cannot change), the mutable objects it references can still mutate.
# # Example of a mutable object
#
# - memory address does not change
my_list = [1, 2, 3]
type(my_list)
my_list
hex(id(my_list))
my_list.append(4)
my_list
hex(id(my_list))
# # Example of a mutable object when concatenating lists
#
# - the address will change because concatenation assigns a new list object
my_list_1 = [1, 2, 3]
hex(id(my_list_1))
my_list_1 = my_list_1 + [4]
my_list_1
hex(id(my_list_1))
| python-deepdive/deepdive1/section03/section_03_20.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Batch processing with Argo Workflows and HDFS
#
# In this notebook we will dive into how you can run batch processing with Argo Workflows and Seldon Core.
#
# Dependencies:
#
# * Seldon core installed as per the docs with an ingress
# * HDFS namenode/datanode accessible from your cluster (here in-cluster installation for demo)
# * Argo Workflows installed in cluster (and argo CLI for commands)
# * Python `hdfscli` for interacting with the installed `hdfs` instance
# ## Setup
#
# ### Install Seldon Core
# Use the notebook to [set-up Seldon Core with Ambassador or Istio Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
#
# Note: If running with KIND you need to make sure to follow [these steps](https://github.com/argoproj/argo/issues/2376#issuecomment-595593237) as a workaround for the known `/.../docker.sock` issue:
# ```bash
# kubectl patch -n argo configmap workflow-controller-configmap \
# --type merge -p '{"data": {"config": "containerRuntimeExecutor: k8sapi"}}'
# ```
#
# ### Install HDFS
# For this example we will need a running `hdfs` storage. We can use these [helm charts](https://artifacthub.io/packages/helm/gradiant/hdfs) from Gradiant.
#
# ```bash
# helm repo add gradiant https://gradiant.github.io/charts/
# kubectl create namespace hdfs-system || echo "namespace hdfs-system already exists"
# helm install hdfs gradiant/hdfs --namespace hdfs-system
# ```
#
# Once the installation is complete, run a `port-forward` command in a separate terminal so that we can push/pull batch data.
# ```bash
# kubectl port-forward -n hdfs-system svc/hdfs-httpfs 14000:14000
# ```
#
#
# ### Install and configure hdfscli
# In this example we will be using [hdfscli](https://pypi.org/project/hdfs/) Python library for interacting with HDFS.
# It supports both the WebHDFS (and HttpFS) API as well as Kerberos authentication (not covered by the example).
#
# You can install it with
# ```bash
# pip install hdfs==2.5.8
# ```
#
# To be able to put `input-data.txt` for our batch job into hdfs, we need to configure the client:
# +
# %%writefile hdfscli.cfg
[global]
default.alias = batch
[batch.alias]
url = http://localhost:14000
user = hdfs
# -
# ### Install Argo Workflows
# You can follow the instructions from the official [Argo Workflows Documentation](https://github.com/argoproj/argo#quickstart).
#
# You also need to make sure that argo has permissions to create seldon deployments - for this you can create a role:
# %%writefile role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: workflow
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- "*"
- apiGroups:
- "apps"
resources:
- deployments
verbs:
- "*"
- apiGroups:
- ""
resources:
- pods/log
verbs:
- "*"
- apiGroups:
- machinelearning.seldon.io
resources:
- "*"
verbs:
- "*"
# !kubectl apply -f role.yaml
# A service account:
# !kubectl create serviceaccount workflow
# And a binding
# !kubectl create rolebinding workflow --role=workflow --serviceaccount=seldon:workflow
# ## Create Seldon Deployment
#
# For the purpose of this batch example we will assume that the Seldon Deployment is created independently from the workflow logic.
# %%writefile deployment.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
name: sklearn
namespace: seldon
spec:
name: iris
predictors:
- graph:
children: []
implementation: SKLEARN_SERVER
modelUri: gs://seldon-models/sklearn/iris
name: classifier
logger:
mode: all
name: default
replicas: 3
# !kubectl apply -f deployment.yaml
# !kubectl -n seldon rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=sklearn -o jsonpath='{.items[0].metadata.name}')
# ## Create Input Data
# +
import os
import random
random.seed(0)
with open("input-data.txt", "w") as f:
for _ in range(10000):
data = [random.random() for _ in range(4)]
data = "[[" + ", ".join(str(x) for x in data) + "]]\n"
f.write(data)
# + language="bash"
# HDFSCLI_CONFIG=./hdfscli.cfg hdfscli upload input-data.txt /batch-data/input-data.txt
# -
# ## Prepare HDFS config / client image
# For connecting to the `hdfs` from inside the cluster we will use the same `hdfscli` tool as we used above to put data in there.
#
# We will configure `hdfscli` using `hdfscli.cfg` file stored inside kubernetes secret:
# %%writefile hdfs-config.yaml
apiVersion: v1
kind: Secret
metadata:
name: seldon-hdfscli-secret-file
type: Opaque
stringData:
hdfscli.cfg: |
[global]
default.alias = batch
[batch.alias]
url = http://hdfs-httpfs.hdfs-system.svc.cluster.local:14000
user = hdfs
# !kubectl apply -f hdfs-config.yaml
# For the client image we will use the following minimal Dockerfile:
# %%writefile Dockerfile
FROM python:3.8
RUN pip install hdfs==2.5.8
ENV HDFSCLI_CONFIG /etc/hdfs/hdfscli.cfg
# It is built and published as `seldonio/hdfscli:1.6.0-dev`.
# ## Create Workflow
#
# This simple workflow will consist of three stages:
# - download-input-data
# - process-batch-inputs
# - upload-output-data
# +
# %%writefile workflow.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
name: sklearn-batch-job
namespace: seldon
labels:
deployment-name: sklearn
deployment-kind: SeldonDeployment
spec:
volumeClaimTemplates:
- metadata:
name: seldon-job-pvc
namespace: seldon
ownerReferences:
- apiVersion: argoproj.io/v1alpha1
blockOwnerDeletion: true
kind: Workflow
name: '{{workflow.name}}'
uid: '{{workflow.uid}}'
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumes:
- name: config
secret:
secretName: seldon-hdfscli-secret-file
arguments:
parameters:
- name: batch_deployment_name
value: sklearn
- name: batch_namespace
value: seldon
- name: input_path
value: /batch-data/input-data.txt
- name: output_path
value: /batch-data/output-data-{{workflow.name}}.txt
- name: batch_gateway_type
value: istio
- name: batch_gateway_endpoint
value: istio-ingressgateway.istio-system.svc.cluster.local
- name: batch_transport_protocol
value: rest
- name: workers
value: "10"
- name: retries
value: "3"
- name: data_type
value: data
- name: payload_type
value: ndarray
entrypoint: seldon-batch-process
templates:
- name: seldon-batch-process
steps:
- - arguments: {}
name: download-input-data
template: download-input-data
- - arguments: {}
name: process-batch-inputs
template: process-batch-inputs
- - arguments: {}
name: upload-output-data
template: upload-output-data
- name: download-input-data
script:
image: seldonio/hdfscli:1.6.0-dev
volumeMounts:
- mountPath: /assets
name: seldon-job-pvc
- mountPath: /etc/hdfs
name: config
readOnly: true
env:
- name: INPUT_DATA_PATH
value: '{{workflow.parameters.input_path}}'
- name: HDFSCLI_CONFIG
value: /etc/hdfs/hdfscli.cfg
command: [sh]
source: |
hdfscli download ${INPUT_DATA_PATH} /assets/input-data.txt
- name: process-batch-inputs
container:
image: seldonio/seldon-core-s2i-python37:1.9.0-dev
volumeMounts:
- mountPath: /assets
name: seldon-job-pvc
env:
- name: SELDON_BATCH_DEPLOYMENT_NAME
value: '{{workflow.parameters.batch_deployment_name}}'
- name: SELDON_BATCH_NAMESPACE
value: '{{workflow.parameters.batch_namespace}}'
- name: SELDON_BATCH_GATEWAY_TYPE
value: '{{workflow.parameters.batch_gateway_type}}'
- name: SELDON_BATCH_HOST
value: '{{workflow.parameters.batch_gateway_endpoint}}'
- name: SELDON_BATCH_TRANSPORT
value: '{{workflow.parameters.batch_transport_protocol}}'
- name: SELDON_BATCH_DATA_TYPE
value: '{{workflow.parameters.data_type}}'
- name: SELDON_BATCH_PAYLOAD_TYPE
value: '{{workflow.parameters.payload_type}}'
- name: SELDON_BATCH_WORKERS
value: '{{workflow.parameters.workers}}'
- name: SELDON_BATCH_RETRIES
value: '{{workflow.parameters.retries}}'
- name: SELDON_BATCH_INPUT_DATA_PATH
value: /assets/input-data.txt
- name: SELDON_BATCH_OUTPUT_DATA_PATH
value: /assets/output-data.txt
command: [seldon-batch-processor]
args: [--benchmark]
- name: upload-output-data
script:
image: seldonio/hdfscli:1.6.0-dev
volumeMounts:
- mountPath: /assets
name: seldon-job-pvc
- mountPath: /etc/hdfs
name: config
readOnly: true
env:
- name: OUTPUT_DATA_PATH
value: '{{workflow.parameters.output_path}}'
- name: HDFSCLI_CONFIG
value: /etc/hdfs/hdfscli.cfg
command: [sh]
source: |
hdfscli upload /assets/output-data.txt ${OUTPUT_DATA_PATH}
# -
# !argo submit --serviceaccount workflow workflow.yaml
# !argo list
# !argo get sklearn-batch-job
# !argo logs sklearn-batch-job
# ## Pull output-data from hdfs
# + language="bash"
# HDFSCLI_CONFIG=./hdfscli.cfg hdfscli download /batch-data/output-data-sklearn-batch-job.txt output-data.txt
# -
# !head output-data.txt
# !kubectl delete -f deployment.yaml
# !argo delete sklearn-batch-job
| examples/batch/hdfs-argo-workflows/hdfs-batch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Polynomial Interpolation
# +
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# -
# ## Newton’s Divided-Difference Formula
# $$\mathrm{approximate}\, f(8.4)\quad \mathrm{if}\, f(8.1) =16.94410,\, f(8.3) =17.56492,\, f(8.6) =18.50515,\, f(8.7) =18.82091. $$
# +
def newton_polynomial(x, initial_values, coefficients):
poly = coefficients[0]
for k in range(1, len(initial_values)):
x_terms = 1
for l in range(k):
x_terms = x_terms * (x - initial_values[l])
poly += coefficients[k] * x_terms
return poly
x_inputs = [8.1, 8.3, 8.6, 8.7]
x_approx = 8.4
n = len(x_inputs)
F_array = np.zeros([n, n])
F_array[:, 0] = [16.94410, 17.56492, 18.50515, 18.82091]
for i in range(1, n):
for j in range(1, i+1):
F_array[i, j] = (F_array[i, j-1] - F_array[i-1, j-1]) / (x_inputs[i] - x_inputs[i-j])
print(F_array.round(5))
P_coeff = F_array.diagonal()
print('f({0})={1:.8f}'.format(x_approx, newton_polynomial(x_approx, x_inputs, P_coeff)))
# -
# ## Hermite Polynomials Using Divided Differences
# | $x$ | $f(x)$ | $f'(x)$ |
# | --- | --- | --- |
# | 0.1 | $-0.62049958$ | $3.58502082$ |
# | 0.2 | $-0.28398668$ | $3.14033271$ |
# | 0.3 | $0.00660095$ | $2.66668043$ |
# | 0.4 | $0.24842440$ | $2.16529366$ |
#
# Approximate $f(\frac{1}{3})$.
# +
def newton_polynomial(x, initial_values, coefficients):
poly = coefficients[0]
for k in range(1, len(initial_values)):
x_terms = 1
for l in range(k):
x_terms = x_terms * (x - initial_values[l])
poly += coefficients[k] * x_terms
return poly
x_inputs = [0.1, 0.2, 0.3, 0.4]
f_values = [-0.62049958, -0.28398668, 0.00660095, 0.24842440]
f_prime_values = [3.58502082, 3.14033271, 2.66668043, 2.16529366]
x_approx = 1/3
n = len(x_inputs)
F_array = np.zeros([2*n, 2*n])
z_values = []
temp_values = []
for m in range(n):
z_values.extend([x_inputs[m], x_inputs[m]])
temp_values.extend([f_values[m], f_values[m]])
F_array[:, 0] = temp_values
for i in range(n):
F_array[2*i+1, 1] = f_prime_values[i]
if i != 0:
F_array[2*i, 1] = (F_array[2*i, 0] - F_array[2*i-1, 0]) / (z_values[2*i] - z_values[2*i-1])
for i in range(2, 2*n):
for j in range(2, i+1):
F_array[i, j] = (F_array[i, j-1] - F_array[i-1, j-1]) / (z_values[i] - z_values[i-j])
print(F_array.round(5))
P_coeff = F_array.diagonal()
print('f({0})={1:.8f}'.format(x_approx, newton_polynomial(x_approx, z_values, P_coeff)))  # evaluate at the doubled nodes z_values of the Hermite polynomial
# -
# ## A Piecewise Polynomial of Hermite Type
#
# Runge function
#
# $$
# f(x)=\frac{1}{1+25x^2}, \quad x\in[-1,1].
# $$
#
# Runge found that if this function is interpolated at equidistant points $x_i$ between $-1$ and $1$ with a polynomial $P_{n}(x)$ of degree $\leq n$, the resulting interpolation oscillates toward the ends of the interval, i.e. close to $-1$ and $1$. This shows that high-degree polynomial interpolation at equidistant points can be troublesome (see "Runge's phenomenon" on Wikipedia).
#
# Convert this into
#
# $$
# f(x)=\frac{1}{1+x^2}, \quad x\in[-5,5]
# $$
#
# and
#
# $$
# f^{'}(x)=\frac{-2x}{(1+x^2)^2}.
# $$
#
# Divide the interval into ten equidistant parts and approximate $f$ at $-4,\,-3.5,\,-2,\,-1,\,0,\,1,\,2,\,3.5,\,4$.
# +
def newton_polynomial(x, initial_values, coefficients):
poly = coefficients[0]
for k in range(1, len(initial_values)):
x_terms = 1
for l in range(k):
x_terms = x_terms * (x - initial_values[l])
poly += coefficients[k] * x_terms
return poly
def hermite_interpolation(x, initial_values, fval, fpval):
n = len(initial_values)
F_array = np.zeros([2*n, 2*n])
z_values = []
temp_values = []
for m in range(n):
z_values.extend([initial_values[m], initial_values[m]])
temp_values.extend([fval[m], fval[m]])
F_array[:, 0] = temp_values
for i in range(n):
F_array[2*i+1, 1] = fpval[i]
if i != 0:
F_array[2*i, 1] = (F_array[2*i, 0] - F_array[2*i-1, 0]) / (z_values[2*i] - z_values[2*i-1])
for i in range(2, 2*n):
for j in range(2, i+1):
F_array[i, j] = (F_array[i, j-1] - F_array[i-1, j-1]) / (z_values[i] - z_values[i-j])
P_coeff = F_array.diagonal()
return newton_polynomial(x, z_values, P_coeff), P_coeff
def runge(x):
return 1 / (1 + x ** 2)
def runge_derivative(x):
return -2 * x / (1 + x ** 2) ** 2
x_inputs = np.linspace(-5, 5, 10)
f_values = runge(x_inputs)
f_prime_values = runge_derivative(x_inputs)
x_approxs = np.array([-4, -3.5, -2, -1, 0, 1, 2, 3.5, 4])
for x_approx in x_approxs:
y_actual = runge(x_approx)
y_direct, _ = hermite_interpolation(x_approx, x_inputs, f_values, f_prime_values)
input_interval = []
for l in range(len(x_inputs)-1):
input_interval.append(x_inputs[l:l+2])
for sub_interval in input_interval:
        if sub_interval[0] < x_approx < sub_interval[1]:  # make sure x_approx is not a split point
            sub_fval = runge(sub_interval)
            sub_fpval = runge_derivative(sub_interval)
y_piecewise, _ = hermite_interpolation(x_approx, sub_interval, sub_fval, sub_fpval)
print('approximate f(%.1f)' % x_approx)
print('actual: %.8f' % y_actual)
print('direct: %.8f' % y_direct)
print('piecewise: %.8f' % y_piecewise)
print('direct absolute/relative:', round(abs(y_direct - y_actual), 5), round(abs(y_direct - y_actual) / y_actual, 5))
print('piecewise absolute/relative:', round(abs(y_piecewise - y_actual), 5), round(abs(y_piecewise - y_actual) / y_actual, 5))
print('-'*10)
# +
xs = np.linspace(-5, 5, 100)
y = runge(xs)
ys = []
for xss in xs:
if xss != xs[-1]:
for sub_interval in input_interval:
if sub_interval[0] <= xss < sub_interval[1]:
                sub_fval = runge(sub_interval)
                sub_fpval = runge_derivative(sub_interval)
yss, _ = hermite_interpolation(xss, sub_interval, sub_fval, sub_fpval)
ys.append(yss)
else:
ys.append(runge(xss))
plt.scatter(x_inputs, f_values)
plt.plot(xs, y, label='actual')
plt.plot(xs, ys, label='approximate')
plt.legend()
plt.minorticks_on()
plt.show()
# +
def lagrange_term(x, ini, k):
    temp_array = np.delete(ini, k, axis=0)
    return np.prod(x - temp_array) / np.prod(ini[k] - temp_array)
def lagrange_poly(x, ini, fval):
poly = 0
for k in range(len(ini)):
poly += fval[k] * lagrange_term(x, ini, k)
return poly
def runge(x):
return 1 / (1 + x ** 2)
xs = np.linspace(-5, 5, 100)
y = runge(xs)
plt.plot(xs, y, label='actual')
for deg in range(5, 11, 2):
x_inputs = np.linspace(-5, 5, deg)
f_values = runge(x_inputs)
ys = []
for xss in xs:
ys.append(lagrange_poly(xss, x_inputs, f_values))
plt.plot(xs, ys, label='approximate degree= %d' % deg)
plt.legend()
plt.minorticks_on()
plt.show()
# +
def newton_polynomial(x, initial_values, coefficients):
poly = coefficients[0]
for k in range(1, len(initial_values)):
x_terms = 1
for l in range(k):
x_terms = x_terms * (x - initial_values[l])
poly += coefficients[k] * x_terms
return poly
def runge(x):
return 1 / (1 + x ** 2)
xs = np.linspace(-5, 5, 100)
y = runge(xs)
plt.plot(xs, y, label='actual')
for deg in range(5, 11, 2):
x_inputs = np.linspace(-5, 5, deg)
f_values = runge(x_inputs)
n = len(x_inputs)
F_array = np.zeros([n, n])
F_array[:, 0] = f_values
for i in range(1, n):
for j in range(1, i+1):
F_array[i, j] = (F_array[i, j-1] - F_array[i-1, j-1]) / (x_inputs[i] - x_inputs[i-j])
P_coeff = F_array.diagonal()
ys = []
for xss in xs:
ys.append(newton_polynomial(xss, x_inputs, P_coeff))
plt.plot(xs, ys, label='approximate degree= %d' % deg)
plt.legend()
plt.minorticks_on()
plt.show()
# +
from scipy.interpolate import CubicSpline
def runge(x):
return 1 / (1 + x ** 2)
def runge_derivative(x):
return -2 * x / (1 + x ** 2) ** 2
# x_inputs = np.linspace(-5, 5, 10)
x_inputs = np.linspace(-5, 5, 20)
f_values = runge(x_inputs)
f_prime_values = runge_derivative(x_inputs)
cs_natural = CubicSpline(x_inputs, f_values, bc_type='natural')
# cs_clamped = CubicSpline(x_inputs, f_values, bc_type=((1, f_prime_values[0]), (1, f_prime_values[-1])))
xs = np.linspace(-5, 5, 100)
ys_natural = cs_natural(xs)
# ys_clamped = cs_clamped(xs)
plt.scatter(x_inputs, f_values)
plt.plot(xs, runge(xs), label='actual')
plt.plot(xs, ys_natural, label='natural cubic spline')
# plt.plot(xs, ys_clamped, label='clamped cubic spline')
plt.legend()
plt.minorticks_on()
plt.show()
# -
| numerical_analysis/report2_codes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Querying the Topology to Find Neighbors
import openpnm as op
# %config InlineBackend.figure_formats = ['svg']
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(10)
# %matplotlib inline
ws = op.Workspace()
ws.settings["loglevel"] = 40
# ## Setup Network
#
# The OpenPNM *GenericNetwork* objects (e.g. Cubic, Voronoi, etc) have methods that let you query the connected pores and throats. This tutorial will explain how these work and illustrate why they are useful.
pn = op.network.Cubic(shape=[4, 4, 1])
# The following examples are relatively trivial, but their intention is to illustrate the different functions and options. More realistic use cases will be presented further down.
# Start by finding all pores on the 'left' and on the 'back' (stored as `P_bottom` below)
P_left = pn.pores('left')
P_bottom = pn.pores('back')
# ## Find Neighboring Pores
# We now have two sets of pores that actually overlap each other, as illustrated below:
fig, ax = plt.subplots()
op.topotools.plot_coordinates(pn, pn.Ps, c='lightgrey',
markersize=50, ax=ax)
op.topotools.plot_coordinates(pn, P_left, c='red', marker='*',
markersize=50, ax=ax)
op.topotools.plot_coordinates(pn, P_bottom, c='blue', marker='.',
markersize=50, ax=ax)
# We'll merge these pores into a single set, and explore the different ways to find neighbors to this set. Note that the pore at [x,y] = [1.5, 1.5] has two neighbors (one 'bottom' and one 'left').
# ### Find All Neighbors: OR
# > *Finds all pores with one or more connections to the input pores*
#
# Given a set of pores, find the pores that are neighbors to one or more of the inputs. This is called **OR** since it gives the neighbors of either the bottom pores *or* the left pores, *or* both.
Ps = pn.pores(['left', 'back'])
print(Ps)
Ps = pn.find_neighbor_pores(pores=Ps, mode='or')
print(Ps)
fig, ax = plt.subplots()
op.topotools.plot_coordinates(pn, pn.Ps, c='lightgrey',
markersize=50, ax=ax)
op.topotools.plot_coordinates(pn, P_left, c='red',
markersize=50, marker='*', ax=ax)
op.topotools.plot_coordinates(pn, P_bottom, c='blue',
markersize=50, marker='.', ax=ax)
op.topotools.plot_coordinates(pn, Ps, c='green',
markersize=50, marker='s', ax=ax)
# ### Find Non-Shared Neighbors: XOR
# > *Finds all pores with exactly one connection to the input pores*
#
# Given a set of pores, find the pores that are neighbors of one and only one of the input pores. This is called **XOR**, or 'exclusive_or', because it finds the pores that are neighbors to the 'bottom' *or* the 'left', but *not* both.
Ps = pn.pores(['left', 'back'])
print(Ps)
Ps = pn.find_neighbor_pores(pores=Ps, mode='xor')
print(Ps)
fig, ax = plt.subplots()
op.topotools.plot_coordinates(pn, pn.Ps, c='lightgrey',
markersize=50, ax=ax)
op.topotools.plot_coordinates(pn, P_left, c='red',
markersize=50, marker='*', ax=ax)
op.topotools.plot_coordinates(pn, P_bottom, c='blue',
markersize=50, marker='.', ax=ax)
op.topotools.plot_coordinates(pn, Ps, c='green',
markersize=50, marker='s', ax=ax)
# ### Find Common Neighbors of Two Sets: XNOR
#
# > *Finds all the pores with 2 or more connections to the input pores*
#
# This finds pores that are common to both the 'left' and 'bottom' pores. It is called **XNOR** since it is the opposite of **XOR**, indicated by the *N* for *not*. Note that **XNOR** and **NXOR** are interchangeable.
Ps = pn.pores(['left', 'back'])
print(Ps)
Ps = pn.find_neighbor_pores(pores=Ps, mode='xnor')
print(Ps)
fig, ax = plt.subplots()
op.topotools.plot_coordinates(pn, pn.Ps, c='lightgrey',
markersize=50, ax=ax)
op.topotools.plot_coordinates(pn, P_left, c='red',
markersize=50, marker='*', ax=ax)
op.topotools.plot_coordinates(pn, P_bottom, c='blue',
markersize=50, marker='.', ax=ax)
op.topotools.plot_coordinates(pn, Ps, c='green',
markersize=50, marker='s', ax=ax)
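The three modes can be understood as counting, for each candidate neighbor, how many of the input pores it connects to. A minimal pure-Python sketch of this logic (an analogy only, not OpenPNM's implementation; the adjacency map is a made-up 2x2 grid):

```python
# Classify candidate neighbors of a set of input pores by how many
# of the input pores they connect to.
def classify_neighbors(adjacency, inputs):
    counts = {}
    for p in inputs:
        for q in adjacency[p]:
            if q not in inputs:
                counts[q] = counts.get(q, 0) + 1
    return {
        'or':   sorted(counts),                                  # 1 or more connections
        'xor':  sorted(q for q, c in counts.items() if c == 1),  # exactly 1 connection
        'xnor': sorted(q for q, c in counts.items() if c >= 2),  # 2 or more connections
    }

# A 2x2 grid of pores 0..3 connected along the edges of a square.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
modes = classify_neighbors(adj, inputs={0, 3})
print(modes)
```

Here pores 1 and 2 each touch both inputs, so they appear in `'or'` and `'xnor'` but not in `'xor'`.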
# ## Find Neighboring Throats
# Neighbor throat queries follow essentially the same logic as the neighboring queries outlined above.
# ### Find All Neighboring Throats: OR
# > *Finds all throats connected to any of the input pores*
#
#
Ps = pn.pores(['left', 'back'])
Ts = pn.find_neighbor_throats(pores=Ps, mode='or')
fig, ax = plt.subplots()
op.topotools.plot_connections(pn, Ts, ax=ax)
op.topotools.plot_coordinates(pn, pn.Ps, c='lightgrey',
markersize=50, ax=ax)
op.topotools.plot_coordinates(pn, P_left, c='red',
markersize=50, marker='*', ax=ax)
op.topotools.plot_coordinates(pn, P_bottom, c='blue',
markersize=50, marker='.', ax=ax)
# ### Find Common Neighbors: XNOR
# > *Finds throats shared by input pores only*
Ps = pn.pores(['left', 'back'])
Ts = pn.find_neighbor_throats(pores=Ps, mode='xnor')
fig, ax = plt.subplots()
op.topotools.plot_connections(pn, Ts, ax=ax)
op.topotools.plot_coordinates(pn, pn.Ps, c='lightgrey',
markersize=50, ax=ax)
op.topotools.plot_coordinates(pn, P_left, c='red',
markersize=50, marker='*', ax=ax)
op.topotools.plot_coordinates(pn, P_bottom, c='blue',
markersize=50, marker='.', ax=ax)
# ### Find Non-Shared Neighbors: XOR
# > *Finds throats that are only connected to one input pore*
Ps = pn.pores(['left', 'back'])
Ts = pn.find_neighbor_throats(pores=Ps, mode='xor')
fig, ax = plt.subplots()
op.topotools.plot_connections(pn, Ts, ax=ax)
op.topotools.plot_coordinates(pn, pn.Ps, c='lightgrey',
markersize=50, ax=ax)
op.topotools.plot_coordinates(pn, P_left, c='red',
markersize=50, marker='*', ax=ax)
op.topotools.plot_coordinates(pn, P_bottom, c='blue',
markersize=50, marker='.', ax=ax)
| examples/tutorials/network/finding_neighbor_pores_and_throats.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import h5py
from Bio import SeqIO, AlignIO
import io
import numpy as np
import os
pfam_path = 'Pfam-A.seed'
import gzip
def yield_alns(pfam_path , verbose = False):
with open(pfam_path, 'r', encoding='ISO-8859-1') as f:
        aln = ''
        acc = None
        count = 0
        lCount = 0
        for i, l in enumerate(f):
lCount += 1
if verbose == True:
if i > 3000 and i < 4000:
print(l)
if lCount < 10**6:
aln+=l
if acc is None and 'AC' in l:
acc = l.split()[2]
if l == '//\n':
if lCount > 10**6:
                    print(acc + ' truncated')
if count < 8063 or count > 8065:
msa = AlignIO.read(io.StringIO(aln), "stockholm")
else:
print('skipping')
msa = None
idPfam = acc
acc = None
aln = ''
lCount = 0
yield idPfam, msa
if count % 1000 == 0 :
print(msa)
count+=1
# +
if os.path.isfile('Pfam-A.seed.h5'):
    os.remove('Pfam-A.seed.h5')
# mode 'a' creates the HDF5 file if it does not exist; pre-creating an empty
# file with open() would leave a zero-byte file that h5py cannot open
with h5py.File('Pfam-A.seed' + '.h5', 'a') as hf:
i = 0
for pfam_id, msa in yield_alns(pfam_path):
i += 1
#print(i)
if not hf.get(pfam_id):
try:
align_list = list()
for rec in msa:
                    align_list.append(np.array(list(rec.upper()), dtype="S1"))
try:
align_array = np.array(align_list)
hf.create_dataset(pfam_id, data=align_array)
                except Exception:
                    print('exception')
            except Exception:
                print('for loop exception')
else:
#print(pfam_id)
continue
# -
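A quick sketch of reading one stored alignment back out of an HDF5 file. It uses a temporary file and made-up dummy data (including the fictitious accession `'PF_DEMO'`) so that it runs standalone:

```python
import os
import tempfile

import h5py
import numpy as np

# Write a tiny dummy alignment (2 sequences x 4 columns) as bytes.
path = os.path.join(tempfile.mkdtemp(), 'demo.h5')
with h5py.File(path, 'w') as hf:
    hf.create_dataset('PF_DEMO',
                      data=np.array([list(b'ACDE'), list(b'AC-E')],
                                    dtype='uint8'))

# Read it back: each dataset is a 2-D array of residue codes.
with h5py.File(path, 'r') as hf:
    arr = hf['PF_DEMO'][:]

first_seq = bytes(arr[0]).decode()  # row 0 decoded back to a string
print(arr.shape, first_seq)
```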
101, 104, 160, 173, 177, 181
| notebooks/Pfam2HDF5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plot Throughput of Experiment 2.2 version 3
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import requests
import io
import glob
# ## Functions to Read CSV Files of iperf2 Throughput
def getDataframeThru(df,start_row,measurement_interval,header_range):
'''
    This function imports the data from the txt file and returns the dataframe
    without the txt file header.
    Input:
        measurement_interval = 30 (sec)
        header_range = 10 lines
        start_row = 0
    Return:
        df1t : dataframe of throughput and jitter
'''
df1 = df.drop(labels=range(start_row, header_range), axis=0)
df1t = df1.drop(labels=range(measurement_interval, len(df)), axis=0)
return df1t
def getDatafromTxT(filename, headerrange):
"""
Get dataframe from txt file:
filename : xxx.txt
headerrange : number of lines that needed to be removed.
return : df : datafame type
"""
h = headerrange + 1
skip_1 = list(range(0,h, 1))
df = pd.read_csv(filename,
skiprows=skip_1,
header=None,
delimiter=' ',
skipinitialspace=True,
error_bad_lines=False)
return df
## Find start row index of each iteration
def getStartEndID(df,start_data,end_data):
"""
    Clean the dataframe and return the start/end indices of each iteration.
    Input:
        df : dataframe without the txt file header
    Output:
        strat_indices_list : list of start indices
        end_indices_list : list of end indices
"""
# creating and passing series to new column
df["Start"]= df[2].str.find(start_data)
df["End"]= df[2].str.find(end_data)
index = df.index
strat_indices = index[df["Start"]==0.0]
strat_indices_list = strat_indices.tolist()
end_indices = index[df["End"]==0.0]
end_indices_list = end_indices.tolist()
return strat_indices_list, end_indices_list
def getCleanData(df,strat_indices_list,end_indices_list):
"""
"""
df_all = df.drop(labels=range(1, len(df)), axis=0) # create new df
start_row = 0
c = 0
for i in strat_indices_list:
h = i
print('h =',h)
m = end_indices_list[c]
print('m =', m)
df1 = getDataframeThru(df,start_row,m,h)
print('df1 = ', df1)
result = pd.concat([df_all,df1])
df_all = result
c = c + 1
if i == 0:
df_all = df_all.drop(labels=0, axis=0)
return df_all
def superClean(filename,headerrange,start_data,end_data):
"""
Clean Data from CSV file with remove the unnecessary header
"""
df = getDatafromTxT(filename, headerrange)
strat_indices_list, end_indices_list = getStartEndID(df,start_data,end_data)
df_all = getCleanData(df,strat_indices_list,end_indices_list)
df_all_new = df_all.drop(df_all.columns[[0,1,3,5,7,9]], axis=1) # Replace new columns header
df_all_new.rename({2 :'Interval', 4 : 'Transfer', 6 :'Bitrate', 8 :'Jitter', 10 :'Lost/Total Datagrams'}, axis=1, inplace=True)
df = df_all_new.drop(range(0,1))
df_all_new['Bitrate'] = df['Bitrate'].astype(float)
time = np.array(range(len(df_all_new.index)))
df_all_new['Time'] = time
df_all_new['Time'] = df_all_new['Time'].astype(int)
    # average throughput
sumThroughput = df_all_new['Bitrate'].sum()
avgSumThroughput = sumThroughput/len(time)
var_throughput = df_all_new['Bitrate'].var()
return avgSumThroughput, var_throughput
def readCSV2pd_Thru(directoryPath,tf_load,edge_name,start_data,end_data,headerrange):
"""
This function is to read a CSV file and return the average value and varience
input: directoryPath : path of file names
tf_load : list of traffic load
"""
avg_Thr = []
var_Thr = []
for tf in tf_load:
cpu_data = pd.DataFrame()
for file_name in glob.glob(directoryPath+edge_name+str(tf)+'.csv'):
avg_thr,var_thr = superClean(file_name,headerrange,start_data,end_data)
avg_Thr.append(avg_thr)
var_Thr.append(var_thr)
return avg_Thr, var_Thr
# ## Read file
headerrange = 7
start_data = '9.0-10.0'
end_data = '60.0-61.0'
tf_load = [i*2 for i in range(2,20)]
edge_name = 'edge4_M'
directoryPath = '/Users/kalika/PycharmProjects/Privacy_SDN_Edge_IoT/PlanB/CPU_utilization_Experiment/version3_Experiment_style/Experiment2_2/Edge4_iperf/'
avg_thr, var_thr = readCSV2pd_Thru(directoryPath,tf_load,edge_name,start_data,end_data,headerrange)
# +
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(tf_load, avg_thr, color='green', linestyle='dashed', linewidth = 2,
marker='o', markerfacecolor='green', markersize=10,label="Edge 4")
plt.ylim(0,50)
plt.xlim(0,40)
plt.xlabel(r'Traffic load $\lambda_{4,SE}$ (Mbps)')
# naming the y axis
plt.ylabel('Average of Throughput (Mbps)')
plt.legend()
plt.show()
# -
| PlanB/CPU_utilization_Experiment/version3_Experiment_style/Experiment2_2/Edge4_iperf/Plot_Throughput_experiment_2_2_v3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Domain wall relaxation in a nanowire
# This is a simple example of domain wall relaxation in a nanowire. It shows how to set up a simulation with a rectangular mesh, relax it and then run it in the presence of an applied field. It also shows how to extract the domain wall position and determine the DW velocity.
# ## Relaxing a trial domain wall
# In order to set up and run the simulation we need the following steps:
#
# 1. Create a suitable mesh (e.g. a thin film or cylindrical nanowire).
#
# 2. Create a simulation object which also encapsulates the material parameters.
#
# 3. Run the relaxation
# We start by importing some necessary modules. The first command `%matplotlib inline` tells the IPython notebook to show any plots in the notebook itself (rather than opening them in a separate window).
# %matplotlib inline
from finmag.util.meshes import box, mesh_quality
from finmag import sim_with
from numpy import pi
import os
import sys
import dolfin as df
import numpy as np
import matplotlib.pyplot as plt
# We start by creating a mesh of dimensions `500 nm x 20 nm x 3 nm`. Finmag comes with some predefined mesh creation routines (e.g. `box`, `cylinder`) which we will use here. If you are unsure which arguments they require you can type e.g. "`box?`" in a cell, which will open a help window with the options for the `box` function.
#
# Internally, the command uses `Netgen` to generate the mesh. The argument `maxh` gives an indication of the maximum edge length of the mesh (note that unfortunately `Netgen` only treats this as a rough indication; it is not guaranteed that all edges will indeed be shorter than this length).
#
# The argument "`directory='meshes'`" tells the command to save the mesh in a subdirectory called "`meshes`". We could also directly give it a filename, but this way it's easier to deal with multiple meshes (e.g. if we want to experiment with nanowires of different length or thickness).
# +
xmin, xmax = -250, +250
ymin, ymax = 0, 20
zmin, zmax = 0, 3
mesh = box(xmin, ymin, zmin, xmax, ymax, zmax, maxh=3.0, directory='meshes')
# -
# An easy way to get a rough overview of the mesh quality is to use the helper function `mesh_quality` which comes with Finmag. It prints a histogram of edge lengths of the mesh. This makes it easy to get an idea whether they are below the exchange length of the material. In our case most edges are on the order of 0.8 nm long, which is more than sufficient.
print mesh_quality(mesh)
# Next we define a few material parameters which we will use in the simulation. We use values typical of Permalloy. Note that we use a large damping constant `alpha=1.0` in order to speed up the relaxation. For the actual dynamics simulation below we need to set this value to a more realistic value (e.g. `alpha=0.01`).
A = 13e-12 # exchange coupling constant (J/m)
Ms = 8e5 # saturation
alpha = 1.0 # use large damping for relaxation
unit_length = 1e-9 # mesh units are in nanometres
# Next we need to define the initial magnetisation profile. We use a `tanh` function to initialise a trial head-to-head domain wall. Let's plot the profile first in order to get a feeling for whether it makes sense. For illustration purposes we use a small offset so that the DW is centred at x=50 and also change the width a little bit.
xs = np.linspace(xmin, xmax, 100)
plt.plot(xs, -np.tanh((xs - 50.0) / 30.0), label='m_x')
plt.xlabel("x-position (nm)")
plt.title("Trial DW profile")
plt.ylim(-1.1, 1.1)
plt.legend()
# This looks reasonable so we'll use it for the simulation. In order to initialise the magnetisation `m` we define a function `m_init` which describes this trial domain wall profile and which we will pass to the simulation below. This function should accept the coordinates of a point in the domain and return the magnetisation at this location. Internally, the simulation will apply this function to all mesh vertices and initialise the magnetisation accordingly.
def m_init(pt):
x, y, z = pt
m_x = -np.tanh((x - 50.0) / 30.0)
m_y = np.sqrt(1 - m_x*m_x)
m_z = 0.0
return [m_x, m_y, m_z]
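# As a quick sanity check (independent of Finmag) we can evaluate the profile directly: at the wall centre x=50 the x-component vanishes, and far from the wall the magnetisation saturates along ±x. The snippet below repeats the expressions from `m_init` so that it is self-contained.

```python
import numpy as np

def m_init_check(pt):
    # Same trial profile as above: head-to-head wall centred at x = 50.
    x, y, z = pt
    m_x = -np.tanh((x - 50.0) / 30.0)
    m_y = np.sqrt(1 - m_x * m_x)
    return [m_x, m_y, 0.0]

# At the wall centre m_x vanishes and m_y carries the full magnitude.
m_centre = m_init_check((50.0, 10.0, 1.5))
# Far to the left of the wall the magnetisation points along +x.
m_left = m_init_check((-250.0, 10.0, 1.5))
```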
# Now we are ready to actually create the simulation object. We simply pass it the mesh, the function `m_init` which defines the initial magnetisation profile as well as the material parameters. (Since we don't use uniaxial anisotropy in this example, we don't set `K1` and `K1_axis`).
sim = sim_with(mesh, m_init=m_init, Ms=Ms, A=A, K1=None, K1_axis=None, alpha=alpha, unit_length=1e-9)
# A convenient way of visualising the magnetisation is to call the helper function `sim.render_scene()`. This uses Paraview to create a snapshot of the magnetisation. In the example below we adapt a few parameters to "zoom in" near the domain wall. Note that unfortunately there are rare cases in which this does not work (either the resulting image is black or there is an error). In this case it is always possible to save the magnetisation to an external file using `sim.save_vtk` and open it manually in Paraview.
sim.render_scene(camera_position=[0, -200, 200], glyph_scale_factor=2.0, fit_view_to_scene=False)
# Next we need to relax the magnetisation. This is achieved by calling `sim.relax()`.
#
# It is convenient to save the relaxed state to a file and reload it when needed (for example, if we want to simulate DW motions starting from the same relaxed state but with applied fields of varying strengths). Therefore we first check whether a file with the relaxed state already exists. If this is not the case, we relax the simulation and save the result. Otherwise we simply reload the relaxed state from that file.
#
# Note that if some of the simulation parameters (e.g. the mesh size or material parameters) are changed then you will need to manually delete the file "`relaxed_state.npz`" and re-run the relaxation.
# +
relaxed_filename = 'relaxed_state.npz'
if not os.path.exists(relaxed_filename):
#sim.schedule('save_vtk', every=1e-10, filename='snapshots_m_relax/m.pvd', overwrite=True)
sim.relax()
sim.save_restart_data(relaxed_filename)
else:
sim.restart(relaxed_filename)
# -
# Let's visualise the relaxed state, too, to see that indeed it is slightly different from the trial DW profile which we used for initialisation.
sim.render_scene(camera_position=[0, -200, 200], glyph_scale_factor=2.0, fit_view_to_scene=False)
# In order to get a better feeling for the DW profile we can plot the magnetisation components along a line parallel to the x-axis which passes through the centre of the nanowire. We use the function `sim.probe_field_along_line` to probe `m` along such a line and then plot the components `m_x`, `m_y`, `m_z`.
# +
y0 = 0.5 * (ymin + ymax)
z0 = 0.5 * (zmin + zmax)
pts_probed, m_probed = sim.probe_field_along_line('m', [xmin, y0, z0], [xmax, y0, z0], N=1000)
# -
xs = pts_probed[:, 0]
plt.plot(xs, m_probed[:, 0], label='m_x')
plt.plot(xs, m_probed[:, 1], label='m_y')
plt.plot(xs, m_probed[:, 2], label='m_z')
plt.title("Magnetisation components of relaxed state")
plt.legend()
# In fact, we can use this same method to extract the domain wall position by checking where `m_x` passes through zero. The following defines a function `domain_wall_centre` which extracts the domain wall position from a simulation object.
# +
class DomainWallError(Exception):
# It is useful to have a special exception to indicate that
# something went wrong when computing the domain wall.
pass
def domain_wall_centre(sim, N=1000):
coords = sim.mesh.coordinates()
xmin = coords[:, 0].min()
xmax = coords[:, 0].max()
ymin = coords[:, 1].min()
ymax = coords[:, 1].max()
zmin = coords[:, 2].min()
zmax = coords[:, 2].max()
y0 = 0.5 * (ymin + ymax)
z0 = 0.5 * (zmin + zmax)
pts_probed, m_probed = sim.probe_field_along_line('m', [xmin, y0, z0], [xmax, y0, z0], N=N)
xs = np.linspace(xmin, xmax, N)
m_x = m_probed[:, 0]
zero_crossings = np.where(np.diff(np.sign(m_x)))[0]
if len(zero_crossings) >= 2:
raise DomainWallError("Cannot determine domain wall position (found more than one zero crossing of m_x).")
elif len(zero_crossings) == 0:
raise DomainWallError("No domain wall found.")
idx = zero_crossings[0] # index just before the zero crossing
# Return the midpoint of the interval where the zero crossing occurs.
# This is not 100% exact but we can always increase N to increase
# the accuracy if desired.
return 0.5 * (xs[idx] + xs[idx + 1])
# -
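# The zero-crossing detection used in `domain_wall_centre` can be tried out on a synthetic profile with plain NumPy (no simulation required):

```python
import numpy as np

N = 1000
xs = np.linspace(-250.0, 250.0, N)
m_x = -np.tanh((xs - 50.0) / 30.0)  # synthetic wall centred at x = 50

# np.sign flips from +1 to -1 across the wall, so np.diff is non-zero
# exactly at the interval containing the zero crossing.
zero_crossings = np.where(np.diff(np.sign(m_x)))[0]
idx = zero_crossings[0]
dw_pos = 0.5 * (xs[idx] + xs[idx + 1])  # midpoint of the crossing interval
```

# The recovered position differs from 50 nm by at most half the grid spacing.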
# When we apply this function to the relaxed simulation we see that indeed the DW is close to the position where we initialised it (x=50 nm).
domain_wall_centre(sim)
# ## DW motion with applied field
# Next we run some dynamics with an applied field which pushes the domain wall along the nanowire. In order to do this we first reload the relaxed state again. We already did this above, but it is nice to have the relaxation and dynamic part clearly separated. We also reset the simulation time to 0. This is not strictly necessary but makes the analysis below slightly nicer.
#
# Then we set the damping constant to a smaller value (note that here we use the non-realistic value `alpha=0.5` just to make it run faster; in a real simulation you should use a much smaller value such as 0.01). Finally, we set the strength of the external field.
sim.restart(relaxed_filename)
sim.reset_time(0.0)
sim.alpha = 0.5 # set to a smaller value for real simulations!
sim.set_H_ext([2e4, 0, 0])
# Finmag has a convenient way of performing actions at regular intervals during a simulation. This is achieved using the so-called "scheduler" of a simulation object. It comes with some predefined actions (e.g. "save_m") but it can also accept an arbitrary Python function.
#
# We make use of this by defining a function that prints the current domain wall position as well as a function that stores the current simulation time and DW position. Each of these functions should accept a simulation object as its only argument. They cannot return anything, which is why we store the DW positions in a global variable called "`dw_positions`".
def print_domain_wall_centre(sim):
print "Domain wall centre at t={}: {}".format(sim.t, domain_wall_centre(sim))
sys.stdout.flush()
# +
dw_positions = []
def record_dw_position(sim):
"""
Save the current simulation time and domain wall
position in the global list 'dw_positions'.
"""
global dw_positions # tell Python that we want to change the global variable
# (otherwise it would think it is a local variable)
data = (sim.t, domain_wall_centre(sim))
dw_positions.append(data)
# -
# Now that we have defined these functions we can add them to the scheduler by telling it how often to call them (here: every 10e-12 seconds, i.e. every 10 picoseconds of simulation time). We also schedule saving of the magnetisation every 0.1 ns (however, this is only for illustration; we won't use the saved data below).
sim.clear_schedule()
sim.schedule(print_domain_wall_centre, every=10e-12)
sim.schedule(record_dw_position, every=10e-12)
sim.schedule('save_m', every=1e-10, filename='snapshots_m_dynamic/m.npy', overwrite=True)
# Now run the simulation for 1 nanosecond (this may take a few minutes to complete). Note how, as the simulation runs, it saves the magnetisation at regular intervals and also prints the current DW position.
sim.run_until(1e-9)
# Once it is finished we can convince ourselves that the list of domain wall positions was indeed populated as intended (we only print the first 10 list elements to avoid a huge amount of output):
dw_positions[:10]
# ## Plotting the DW position over time and computing the average DW velocity
# In order to visualise the domain wall dynamics we extract the simulation times and DW positions from `dw_positions` and plot it.
ts, dw_pos = np.array(dw_positions).T
plt.plot(ts, dw_pos, 'x-')
plt.title("DW position over time")
plt.xlabel("Time (s)")
plt.ylabel("DW x-position (nm)")
# The average domain wall velocity can easily be computed from the first and last position:
avg_velocity = (dw_pos[-1] - dw_pos[0]) / (ts[-1] - ts[0]) * 1e-9
print("Average domain wall velocity (m/s): {}".format(avg_velocity))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### GridSearch & Pipelines
# GridSearch is an optimization tool that we use when tuning hyperparameters. We define the grid of parameters that we want to search through, and we select the best combination of parameters for our data.
# # 1 - One way
# Iterate an algorithm over a set of hyperparameters
# +
import warnings
import numpy as np
import pandas as pd
warnings.filterwarnings("ignore", category=DeprecationWarning)
# +
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
# + tags=[]
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV, train_test_split
import warnings
warnings.filterwarnings("ignore")
warnings.filterwarnings(action="ignore",category=DeprecationWarning)
warnings.filterwarnings(action="ignore",category=FutureWarning)
iris = datasets.load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.2,
random_state=42)
svc = svm.SVC()
parameters = {
'kernel': ['linear', 'rbf', 'sigmoid'],
'C': [0.001, 0.01, 0.1, 0.5, 1, 5, 10, 100],
'gamma': ['scale', 'auto'],
'coef0': [-10, -1, 0, 0.1, 0.5, 1, 10, 100]
}
grid = GridSearchCV(estimator=svc,
param_grid = parameters,
n_jobs = -1,
scoring = 'accuracy',
cv = 10)
grid.fit(X_train, y_train)
# -
print("Best estimator:", grid.best_estimator_)
print("Best params:", grid.best_params_)
print("Best score:", grid.best_score_)
best_estimator = grid.best_estimator_
best_estimator.score(X_test, y_test)
# # 2: Almost-Pro way
#
# The pro way does the same thing but also records the training and validation errors, can stop the process whenever required, saves the model locally once finished (if it beats the previous one), and can load the previous model and continue retraining.
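# A minimal sketch of the "save only if better" part of that workflow (the file name and the dummy model below are made up for illustration; a real setup would pickle the fitted estimator together with its validation score):

```python
import os
import pickle
import tempfile

def save_if_better(model, score, path):
    """Persist (model, score) only when the new score beats the stored one."""
    best_score = float("-inf")
    if os.path.exists(path):
        with open(path, "rb") as f:
            _, best_score = pickle.load(f)
    if score > best_score:
        with open(path, "wb") as f:
            pickle.dump((model, score), f)
        return True
    return False

# The first save always wins; a later, worse score is rejected.
path = os.path.join(tempfile.mkdtemp(), "best_model.pkl")
first = save_if_better({"dummy": "model"}, 0.90, path)   # saved
second = save_if_better({"dummy": "model"}, 0.85, path)  # rejected
```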
# +
from sklearn.pipeline import Pipeline
pipe = Pipeline(steps=[('classifier', RandomForestClassifier())])
logistic_params = {
'classifier': [LogisticRegression()],
'classifier__penalty': ['l1', 'l2'],
'classifier__C': np.arange(0.1, 4, 0.5)
}
random_forest_params = {
'classifier': [RandomForestClassifier()],
'classifier__n_estimators': [10, 100, 500, 1000],
'classifier__max_features': [1,2,3]
}
svc_params = {
'classifier': [svm.SVC()],
'classifier__kernel': ['linear', 'rbf', 'sigmoid']
}
search_space = [logistic_params, random_forest_params, svc_params]
grid = GridSearchCV(pipe,
search_space,
cv = 10,
n_jobs = -1
)
grid.fit(X_train, y_train)
# -
grid.score(X_test, y_test)
print(grid.predict(X_test))
print(y_test)
grid.best_estimator_['classifier']
grid.best_estimator_
grid.best_score_
# # 3 Another way
# +
reg_log = Pipeline(steps = [
("imputer", SimpleImputer()),
("scaler", StandardScaler()),
("reglog", LogisticRegression())])
svc = Pipeline([
("scaler", StandardScaler()),
("selectkbest", SelectKBest()),
("svc", svm.SVC())])
rand_forest_param = {
'n_estimators': [10,100,500, 1000],
'max_features': [1,2,3]
}
rand_forest = RandomForestClassifier()
re_log_param = {
"imputer__strategy": ['mean', 'median', 'most_frequent'],
"reglog__penalty": ["l1", "l2"],
"reglog__C": np.arange(0.1, 4, 0.5)
}
svc_param = {
"selectkbest__k": [1,2,3],
"svc__C": np.arange(0.1, 0.9, 0.1),
"svc__kernel": ['linear', 'poly', 'rbf']
}
gs_reg_log = GridSearchCV(reg_log,
re_log_param,
cv=10,
scoring = 'accuracy',
n_jobs = -1,
verbose = 1)
gs_svm = GridSearchCV(svc,
svc_param,
cv=10,
scoring = 'accuracy',
n_jobs = -1,
verbose = 1)
gs_rand_forest = GridSearchCV(rand_forest,
rand_forest_param,
cv=10,
scoring = 'accuracy',
n_jobs = -1,
verbose = 1)
grids = {
"gs_reg_log": gs_reg_log,
"gs_svm": gs_svm,
"gs_rand_forest": gs_rand_forest
}
# -
# %%time
for nombre, grid_search in grids.items():
grid_search.fit(X_train, y_train)
grids.items()
# +
best_grids = [(i, j.best_score_) for i, j in grids.items()]
best_grids = pd.DataFrame(best_grids, columns = ["Grid", "Best score"])
best_grids.sort_values(by = "Best score", ascending = False)
# -
print("Best estimator:", gs_svm.best_estimator_)
print("Best params:", gs_svm.best_params_)
print("Best score:", gs_svm.best_score_)
estimador = gs_svm.best_estimator_
estimador.score(X_test, y_test)
estimador.predict(X_test)
estimador.predict(X_test) - y_test
iris['feature_names']
estimador
estimador['selectkbest'].get_params()
estimador['selectkbest'].pvalues_
estimador['selectkbest'].scores_
# +
import pickle
with open("finished_model.model", "wb") as archivo_salida:
pickle.dump(estimador, archivo_salida)
"""
'r' Open for reading (default)
'w' Open for writing, truncating (overwriting) the file first
'rb' or 'wb' Open in binary mode (read/write using byte data)
Text files
Buffered binary files
Raw binary files
"""
# +
# Read the model back from disk
with open("finished_model.model", "rb") as archivo_entrada:
pipeline_importado = pickle.load(archivo_entrada)
"""
8 |_ 2
0 4 !_ 2
0 2 |_ 2
0 1
1 0 0 0 = ____ x 2*3 + _____* 2**2 + ______*2**1 + ____2**0 = ___1__x8 + ___0___x4 + ___0___x2 + __0___ X1
3 2 1 0
8 (decimal) = 1000
__ __ __ __ __ __ __ __
7 6 5 4 3 2 1 0
99000000
00000099
00001000
"""
# -
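# The scratch notes above work through converting decimal 8 to binary; Python's built-ins confirm the result:

```python
# 8 decimal = 1000 binary: 1*2**3 + 0*2**2 + 0*2**1 + 0*2**0
as_binary = format(8, "b")      # "1000"
back_to_int = int("1000", 2)    # 8
```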
pipeline_importado
new_flowers = np.array([[6.9, 3.1, 5.1, 2.3],
[5.8, 2.7, 3.9, 1.2]])
pipeline_importado.predict(new_flowers)
gs_svm.best_estimator_.predict(X_test)
# +
# joblib
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
import pandas as pd
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.SeqFeature import SeqFeature, FeatureLocation
from Bio import AlignIO
def change_gaps(gene):
gene_alignment = '../../seasonal-flu/results/aligned_cdc_h3n2_'+str(gene)+'_12y_cell_hi.fasta'
n_gaps_records = []
with open(gene_alignment, "r") as aligned_handle:
for virus in SeqIO.parse(aligned_handle, "fasta"):
# ambiguous_bases = ['-','R','Y','K','M','W','S','B','D','H','V']
ambiguous_bases = '-RYKMWSBDHV'
virus_seq = Seq(str(virus.seq).translate({ord(x):'N' for x in ambiguous_bases}))
new_record = SeqRecord(seq = virus_seq,
id = virus.id, description = virus.description)
n_gaps_records.append(new_record)
with open(gene_alignment, 'w') as output_handle:
SeqIO.write(n_gaps_records, output_handle, "fasta")
def truncate_to_coding_seq_only(gene):
gene_alignment = '../../seasonal-flu/results/aligned_cdc_h3n2_'+str(gene)+'_12y_cell_hi.fasta'
coding_records = []
coding_pos = {'pb1': FeatureLocation(9, 2283),
'pb2': FeatureLocation(12, 2292),
'pa': FeatureLocation(10, 2161),
'na': FeatureLocation(3, 1410),
'ha': FeatureLocation(0, 1701)}
with open(gene_alignment, "r") as aligned_handle:
for virus in SeqIO.parse(aligned_handle, "fasta"):
new_record = SeqRecord(seq = coding_pos[gene].extract(virus.seq),
id = virus.id, description = virus.description)
coding_records.append(new_record)
with open(gene_alignment, 'w') as output_handle:
SeqIO.write(coding_records, output_handle, "fasta")
# +
def truncate_meta_file(gene):
gene_alignment = '../../seasonal-flu/results/aligned_cdc_h3n2_'+str(gene)+'_12y_cell_hi.fasta'
metafile = '../../seasonal-flu/results/metadata_h3n2_'+str(gene)+'.tsv'
aligned_ids = []
with open(gene_alignment, "r") as aligned_handle:
for virus in SeqIO.parse(aligned_handle, "fasta"):
aligned_ids.append(virus.id)
aligned_ids_df = pd.DataFrame(aligned_ids, columns=['strain'])
meta = pd.read_csv(metafile, sep = '\t')
truncate_meta = meta.merge(aligned_ids_df, how='right', on='strain')
truncate_meta.to_csv('../../seasonal-flu/results/metadata_h3n2_'+(gene)+'.tsv', index = False, sep='\t')
# -
genes = ['ha', 'na', 'pa', 'pb1', 'pb2']
# genes = ['ha1', 'ha2']
for gene in genes:
change_gaps(gene)
truncate_to_coding_seq_only(gene)
truncate_meta_file(gene)
# +
ha_reference = '../../seasonal-flu/config/reference_h3n2_ha.gb'
ha_alignment = '../../seasonal-flu/results/aligned_cdc_h3n2_ha_12y_cell_hi.fasta'
ha_metafile = '../../seasonal-flu/results/metadata_h3n2_ha.tsv'
ha1_pos = ''
ha2_pos = ''
for seq_record in SeqIO.parse(ha_reference, "genbank"):
for feature in seq_record.features:
if feature.type == 'CDS':
if feature.qualifiers['product'][0] == 'HA1 protein':
ha1_pos = feature.location
elif feature.qualifiers['product'][0] == 'HA2 protein':
ha2_pos = feature.location
ha1_records = []
ha1_ids = []
ha2_records = []
ha2_ids = []
# Write HA1 and HA2 alignment files for sequences that cover these genes
with open(ha_alignment, "r") as aligned_handle:
for virus in SeqIO.parse(aligned_handle, "fasta"):
ha1_record = SeqRecord(seq = ha1_pos.extract(virus.seq),
id = virus.id, description = 'HA1')
if len(ha1_record.seq.ungap("N")) > 900:
ha1_records.append(ha1_record)
ha1_ids.append(ha1_record.id)
ha2_record = SeqRecord(seq = ha2_pos.extract(virus.seq),
id = virus.id, description = 'HA2')
if len(ha2_record.seq.ungap("N")) > 600:
ha2_records.append(ha2_record)
ha2_ids.append(ha2_record.id)
# Write meta files with the appropriate strains only
ha_meta = pd.read_csv(ha_metafile, sep = '\t')
ha1_strains = pd.DataFrame(ha1_ids, columns=['strain'])
ha1_meta = ha_meta.merge(ha1_strains)
ha1_meta.to_csv('../../seasonal-flu/results/metadata_h3n2_ha1.tsv', index = False, sep='\t')
ha2_strains = pd.DataFrame(ha2_ids, columns=['strain'])
ha2_meta = ha_meta.merge(ha2_strains)
ha2_meta.to_csv('../../seasonal-flu/results/metadata_h3n2_ha2.tsv', index = False, sep='\t')
with open('../../seasonal-flu/results/aligned_cdc_h3n2_ha1_12y_cell_hi.fasta', "w") as output_handle:
SeqIO.write(ha1_records, output_handle, "fasta")
with open('../../seasonal-flu/results/aligned_cdc_h3n2_ha2_12y_cell_hi.fasta', "w") as output_handle2:
SeqIO.write(ha2_records, output_handle2, "fasta")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="_lmvKIjL6gyq"
# # 0. Import library
# + id="LDEgbAwq4zXK"
import pickle
import pandas as pd
import urllib.request
from bs4 import BeautifulSoup as bs
import re
from tqdm import tqdm
import time
import numpy as np
from sklearn.preprocessing import MinMaxScaler
# pd.options.display.float_format = '{:.2f}'.format
# + [markdown] id="tdHkbP6T4qOs"
# # 1. Load Data
# + colab={"base_uri": "https://localhost:8080/", "height": 556} id="W2Po2Tbg7ehr" outputId="abb534b8-1f93-4ff1-d517-6f7ab93d7d15"
from IPython.display import Image
Image('활용데이터.png')
# + id="nPrBOsPg4Hgc"
# Domestic stock financial ratios
with open("국내주식재무비율.pkl","rb") as fr:
df_fin_rate = pickle.load(fr)
# Domestic stock financial statements
with open("국내주식재무제표.pkl","rb") as fr:
df_fin_num = pickle.load(fr)
# ESG (overall / E / S / G / FG)
df_esg_info = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', header = 3, index_col = 0)
df_esg_e = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', sheet_name = 1, skiprows = 3, index_col = 0)
df_esg_e.reset_index(drop = True, inplace = True)
df_esg_s = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', sheet_name = 2, skiprows = 3, index_col = 0)
df_esg_s.reset_index(drop = True, inplace = True)
df_esg_g = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', sheet_name = 3, skiprows = 3, index_col = 0)
df_esg_g.reset_index(drop = True, inplace = True)
df_esg_fg = pd.read_excel('ESG중분류_E,S,G(2021)_미래에셋증권.xlsx', sheet_name = 4, skiprows = 3, index_col = 0)
df_esg_fg.reset_index(drop = True, inplace = True)
# ETF
df_etf_info = pd.read_csv('etf_info.csv')
# Industry classification
df_industry = pd.read_csv('종목_업종_테이블.csv')
# ETF trading data
df_etf_trade = pd.read_csv('국내ETF_2021_4분기_거래량_거래대금.csv')
# Stock trading data
df_stock_trade = pd.read_csv('국내상품_2021_12월_거래량_거래대금.csv')
# -
# # 2. Data Pre-Processing
# > **2-1. Strip whitespace from all data**<br><br>
# > **2-2. Preprocess ESG data**<br>
# > 2-2-1. Build DataFrame (extract required columns, etc.)<br>
# > 2-2-2. Crawl per-ticker sector data (external data)<br>
# > 2-2-3. Compute ESG scores (ranks) within each sector (peer group)<br><br>
# > **2-3. Preprocess ETF data**<br>
# > 2-3-1. Build DataFrame (extract required columns, compute ratios, etc.)<br>
# > 2-3-2. Derive ETF ESG scores<br><br>
# > **2-4. Preprocess financial statement data**<br>
# > 2-4-1. Build DataFrame (extract required columns, etc.)<br><br>
# > **2-5. Preprocess industry data**<br>
# > 2-5-1. Build DataFrame (extract required columns, etc.)<br><br>
# > **2-6. Crawl Naver Finance theme data (external data)**<br><br>
# > **2-7. Crawl and preprocess Google Trends data**<br><br>
# > **2-8. Preprocess ETF trading data**<br>
# > 2-8-1. Build DataFrame (extract required columns, etc.)<br><br>
# > **2-9. Preprocess product trading data**<br>
# > 2-9-1. Build DataFrame (extract required columns, etc.)
# ### 2-1. Strip whitespace from all data
# + id="Og1losX-5q7O"
# Define a whitespace-stripping function
def nospace(df):
for i in range(len(df.columns)):
df.iloc[:,i] = df.iloc[:,i].astype(str).str.replace(" ","")
# Strip whitespace from the data
nospace(df_fin_rate)
nospace(df_fin_num)
nospace(df_etf_info)
nospace(df_industry)
nospace(df_etf_trade)
nospace(df_stock_trade)
# + [markdown] id="f9tNghhJ5Qgq"
# ### 2-2. Preprocess ESG data
# + colab={"base_uri": "https://localhost:8080/", "height": 956} id="bv1NYsa8QKqy" outputId="e76e4af2-e890-43a1-ab70-190682123411"
# Preprocess FG data
df_esg_fg_no = df_esg_fg[df_esg_fg['상장된 시장'] != 'EX']
df_esg_fg_no.rename(columns={'FG.등급':'G.등급', 'FG.총점':'G.총점', 'FG.주주권리보호':'G.주주권리보호', 'FG.이사회':'G.이사회', 'FG.공시':'G.공시', 'FG.감사기구 및\n내부통제':'G.감사기구', 'FG.감점':'G.감점'}, inplace=True)
df_esg_fg_no = df_esg_fg_no[['Code', 'Name', '법인등록번호', '결산월', '상장된 시장', 'G.등급', 'G.총점', 'G.주주권리보호', 'G.이사회', 'G.공시', 'G.감사기구', 'G.감점']]
df_esg_g_final = pd.concat([df_esg_g, df_esg_fg_no])
df_esg_g = df_esg_g_final
# Extract the ESG regular total score
df_esg_total = df_esg_info[['Code', 'ESG.정기총점']]
# Merge data
merge1 = pd.merge(df_esg_e, df_esg_s, on='Code', how='inner',suffixes=('', '_DROP')).filter(regex='^(?!.*_DROP)')
merge2 = pd.merge(merge1, df_esg_g, on='Code', how='inner',suffixes=('', '_DROP')).filter(regex='^(?!.*_DROP)')
merge3 = pd.merge(merge2, df_esg_total, on='Code', how='inner',suffixes=('', '_DROP')).filter(regex='^(?!.*_DROP)')
# Drop unnecessary columns
merge3.drop(columns = ['법인등록번호', '결산월'], inplace=True)
df_esg = merge3
# Rename columns
df_esg.rename(columns = {'E.등급':'E등급', 'S.등급':'S등급', 'G.등급':'G등급'}, inplace = True)
# Strip whitespace from company names
df_esg.Name = df_esg.Name.astype(str).str.replace(" ", "")
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="_I_xhsB8hD1H" outputId="cd75eb42-4bca-48fb-d375-4173414e6ceb"
# Crawl per-ticker sector data (external data)
df_product = pd.read_html('http://kind.krx.co.kr/corpgeneral/corpList.do?method=download&searchType=13', header=0)[0]
df_product.종목코드 = df_product.종목코드.map("{:06d}".format)
df_product = df_product[['종목코드', '회사명', '업종', '주요제품']]
df_product = df_product.rename(columns={'종목코드':'code', '회사명':'name', '업종':'industry', '주요제품':'main_product'})
df_product['code'] = 'A' + df_product['code'].str[:]
df_product
# + colab={"base_uri": "https://localhost:8080/", "height": 904} id="JFH1lWoag5uQ" outputId="9631ed1e-d9bb-4832-abc5-84a1a96970a4"
# Compute ESG scores (ranks) within each sector
# Merge ESG data with sector data
df_sector_score = pd.merge(df_esg, df_product, left_on='Code', right_on='code', how='inner')
df_sector_score.drop(columns=['code', 'name', 'main_product'], inplace=True)
# Assign E, S, G, ESG ranks per sector
df_sector_score['E_rank'] = df_sector_score.groupby('industry')['E.총점'].rank(method = 'min', ascending=False)
df_sector_score['S_rank'] = df_sector_score.groupby('industry')['S.총점'].rank(method = 'min', ascending=False)
df_sector_score['G_rank'] = df_sector_score.groupby('industry')['G.총점'].rank(method = 'min', ascending=False)
df_sector_score['ESG_rank'] = df_sector_score.groupby('industry')['ESG.정기총점'].rank(method = 'min', ascending=False)
# Mean E, S, G, ESG total scores per sector
df_sector_mean = pd.DataFrame(df_sector_score.groupby('industry').mean())
df_sector_mean.reset_index(inplace=True)
df_sector_mean.rename(columns={'E.총점':'평균 E총점', 'S.총점':'평균 S총점', 'G.총점':'평균 G총점', 'ESG.정기총점':'평균 ESG총점'}, inplace=True)
df_sector_mean = df_sector_mean[['industry', '평균 E총점', '평균 S총점', '평균 G총점', '평균 ESG총점']]
# Assign top E, S, G, ESG percentages per sector
df_sector_count = pd.DataFrame(df_sector_score.industry.value_counts())
df_sector_count.reset_index(inplace=True)
df_sector_count.rename(columns={'index':'industry', 'industry':'count'}, inplace=True)
df_esg_final = pd.merge(df_sector_score, df_sector_mean, on='industry', how='outer')
df_esg_final = pd.merge(df_esg_final, df_sector_count, on='industry', how='outer')
df_esg_final['상위 E%'] = df_esg_final['E_rank'] / df_esg_final['count'] * 100
df_esg_final['상위 S%'] = df_esg_final['S_rank'] / df_esg_final['count'] * 100
df_esg_final['상위 G%'] = df_esg_final['G_rank'] / df_esg_final['count'] * 100
df_esg_final['상위 ESG%'] = df_esg_final['ESG_rank'] / df_esg_final['count'] * 100
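# The per-sector ranking and top-% logic above boils down to `groupby().rank()`; a toy example (made-up scores) shows the mechanics:

```python
import pandas as pd

toy = pd.DataFrame({
    "industry": ["A", "A", "A", "B", "B"],
    "score":    [30,  50,  40,  10,  20],
})
# Rank within each industry, best (highest) score first; ties share the
# minimum rank, matching the method='min' used above.
toy["rank"] = toy.groupby("industry")["score"].rank(method="min", ascending=False)
counts = toy.groupby("industry")["score"].transform("count")
toy["top_pct"] = toy["rank"] / counts * 100  # rank 1 of 3 -> top 33.3%
```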
# + [markdown] id="4FvlcUWRCFSw"
# ### 2-3. Preprocess ETF data
# + colab={"base_uri": "https://localhost:8080/", "height": 548} id="UkvWganp5UMb" outputId="fa7bc7b1-0500-4edf-aec7-dafe0a2a55f4"
# Keep only holdings with a ticker CODE (to exclude index items such as KRW deposits)
df_etf = df_etf_info[df_etf_info.ETF_ITEM_CD != 'nan']
# Sum of valuation amounts per ETF
df_etf['ETF_EA'] = df_etf['ETF_EA'].astype(float)
df_etf_sum = df_etf.groupby(['ETF_CD'], as_index=False).sum()
df_etf_notnull = pd.merge(df_etf, df_etf_sum[['ETF_CD','ETF_EA']], on='ETF_CD')
# Compute each constituent's weight within the ETF
df_etf_notnull['ratio'] = df_etf_notnull['ETF_EA_x'] / df_etf_notnull['ETF_EA_y']
df_etf = df_etf_notnull[['ETF_CD', 'ETF_NM', 'ETF_ITEM_CD', 'ETF_CMST_ITM_NM', 'ETF_EA_x', 'ratio']]
# -
Image('esg.png')
# + colab={"base_uri": "https://localhost:8080/", "height": 589} id="hFCaNkQFi3ve" outputId="899a951e-e166-4c3c-f514-b56d47fd3868"
# Derive ETF ESG scores
# Merge ETF and ESG data
df_esg_weight = df_esg_final[['Code', 'Name', 'ESG.정기총점', '평균 ESG총점', '상위 ESG%']]
etf_esg = pd.merge(df_etf, df_esg_weight, left_on='ETF_ITEM_CD', right_on='Code')
# Compute weights
# MinMax scale (0~1)
etf_esg['weight1_ratio'] = etf_esg['ratio']
etf_esg['weight2_diff'] = etf_esg['ESG.정기총점'] - etf_esg['평균 ESG총점']
etf_esg['weight3_rank'] = 101 - etf_esg['상위 ESG%']
etf_esg_weight = etf_esg[['weight1_ratio', 'weight2_diff', 'weight3_rank']]
transformer = MinMaxScaler(feature_range=(0, 1))
transformer.fit(etf_esg_weight)
etf_esg_weight_scale = pd.DataFrame(transformer.transform(etf_esg_weight), columns=etf_esg_weight.columns)
etf_esg_weight_scale['weight'] = etf_esg_weight_scale['weight1_ratio'] * etf_esg_weight_scale['weight2_diff'] * etf_esg_weight_scale['weight3_rank']
etf_esg_weight_scale = etf_esg_weight_scale[['weight']]
transformer.fit(etf_esg_weight_scale)
etf_esg_weight_scale_final = pd.DataFrame(transformer.transform(etf_esg_weight_scale), columns=etf_esg_weight_scale.columns)
etf_esg['weight'] = etf_esg_weight_scale_final['weight']
etf_esg.drop(columns=['Code', 'Name', 'weight1_ratio', 'weight2_diff', 'weight3_rank'], inplace=True)
# Compute the weighted ETF ESG score
etf_esg['ESG_SCORE'] = etf_esg['ESG.정기총점'] * etf_esg['weight']
etf_esg_score = etf_esg.groupby(['ETF_CD'], as_index = False).sum()
etf_esg_score = etf_esg_score[['ETF_CD', 'ESG_SCORE']]
etf_esg = pd.merge(etf_esg, etf_esg_score, on = 'ETF_CD')
etf_esg.drop(columns=['ESG_SCORE_x'], inplace=True)
etf_esg.rename(columns={'ESG_SCORE_y':'ESG_SCORE'}, inplace=True)
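# The weighting above scales three components to [0, 1], multiplies them, and rescales the product back to [0, 1]; a sketch on toy numbers (the component values are invented):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy rows of (constituent ratio, ESG score minus sector mean, 101 - top %).
components = np.array([[0.1, -5.0, 10.0],
                       [0.3,  0.0, 60.0],
                       [0.6,  5.0, 90.0]])

scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(components)      # each column now spans [0, 1]
weight = scaled.prod(axis=1, keepdims=True)    # combine the three components
weight = MinMaxScaler().fit_transform(weight)  # rescale the product to [0, 1]
```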
# + [markdown] id="unFaXLK2UEWY"
# ### 2-4. Preprocess financial statement data
# + colab={"base_uri": "https://localhost:8080/", "height": 468} id="jFxKo9FwN623" outputId="78edc9ca-2b96-4e28-e1cc-52a4780b423c"
# Financial ratios
df_fin1 = df_fin_rate[['CODE', 'DATA_TP_CODE', 'IFRS_TP_CODE', 'SET_TP_CODE', 'BASE_YM', 'CO_NM', 'SALE_GROW_RATE', 'PROFIT_GROW_RATE', 'ROE']]
df_fin1 = df_fin1[df_fin1['BASE_YM'].str.contains('2021')]
df_fin1 = df_fin1[df_fin1['DATA_TP_CODE'] == '1']
df_fin1 = df_fin1[df_fin1['IFRS_TP_CODE'] == 'B']
df_fin1.drop(df_fin1[df_fin1['SET_TP_CODE'] == '4'].index, inplace=True)
df_fin1.drop(['DATA_TP_CODE', 'IFRS_TP_CODE', 'BASE_YM'], axis=1, inplace=True)
# Financial statements
df_fin2 = df_fin_num[['CODE', 'DATA_TP_CODE', 'IFRS_TP_CODE', 'SET_TP_CODE', 'BASE_YM', 'CO_NM', 'SALE_AMT', 'THIS_TERM_PROFIT', 'BHJS_CNT']]
df_fin2 = df_fin2[df_fin2['BASE_YM'].str.contains('2021')]
df_fin2 = df_fin2[df_fin2['DATA_TP_CODE'] == '1']
df_fin2 = df_fin2[df_fin2['IFRS_TP_CODE'] == 'B']
df_fin2.drop(df_fin2[df_fin2['SET_TP_CODE'] == '4'].index, inplace=True)
df_fin2.drop(['DATA_TP_CODE', 'IFRS_TP_CODE', 'BASE_YM'], axis=1, inplace=True)
# Merge the financial data
df_fin = pd.merge(df_fin1, df_fin2, left_on=['CODE', 'SET_TP_CODE', 'CO_NM'], right_on=['CODE', 'SET_TP_CODE', 'CO_NM'], how='inner')
df_fin[['SALE_GROW_RATE', 'PROFIT_GROW_RATE', 'ROE', 'SALE_AMT', 'THIS_TERM_PROFIT', 'BHJS_CNT']] = df_fin[['SALE_GROW_RATE', 'PROFIT_GROW_RATE', 'ROE', 'SALE_AMT', 'THIS_TERM_PROFIT', 'BHJS_CNT']].astype(float)
# + [markdown] id="i9KJlj29WoNV"
# ### 2-5. Industry Data Preprocessing
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="OZlyPwjRVfQO" outputId="0f76580f-79b0-4a0b-d6e0-23143296543d"
df_industry = df_industry[['종목코드', '종목명', '업종']]
# + [markdown] id="jwcIWeBZXb8a"
# ### 2-6. Crawling Naver Finance Theme Data (External Data)
# + colab={"base_uri": "https://localhost:8080/", "height": 476} id="90bu45GrWTey" outputId="a1026247-1749-437d-8b63-54ab7e0bfa68"
# Fetch the theme name & stock names from each theme page
def one_page_list(page):
STOCKLIST_URL = "https://finance.naver.com/sise/sise_group_detail.nhn?type=theme&no={}".format(page)
response = urllib.request.urlopen(STOCKLIST_URL)
STOCKLIST_HTML = response.read()
    soup = bs(STOCKLIST_HTML, "html.parser")
STOCK_NAME_LIST = []
STOCK_CODE_LIST = []
THEME_NAME_LIST = []
    for tr in soup.findAll('td', attrs={'class': 'name'}):
stockName = tr.findAll('a', attrs={})
stockCode = re.findall('.+(?=")', str(tr.findAll('a', attrs={})))[0].split("code=")[1]
if stockName is None or stockName == []:
pass
else:
stockName = stockName[0].contents[-1]
STOCK_NAME_LIST.append(stockName)
STOCK_CODE_LIST.append(stockCode)
for tr in soup.findAll('title'):
themeName = tr
if themeName is None or themeName == []:
pass
else:
themeName = themeName.contents[-1]
THEME_NAME_LIST.append(themeName)
STOCK_LIST = []
for i in range(len(STOCK_NAME_LIST)):
stockInfo = [STOCK_CODE_LIST[i], STOCK_NAME_LIST[i], THEME_NAME_LIST[0]]
STOCK_LIST.append(stockInfo)
return pd.DataFrame(STOCK_LIST, columns=('코드', '종목명', '테마'))
theme_list = []
for i in tqdm([1,2,3,4,5,6,7]):
url = "https://finance.naver.com/sise/theme.naver?&page={}".format(i)
req = urllib.request.Request(url)
sourcecode = urllib.request.urlopen(url).read()
soup = bs(sourcecode, "html.parser")
    soup = soup.find_all("td", attrs={'class': 'col_type1'})
theme_list.extend(list(soup))
for i in range(0,len(theme_list)):
theme_list[i] = theme_list[i].find("a")["href"]
theme_list[i] = theme_list[i].replace('/sise/sise_group_detail.naver?type=theme&no=', '')
df_theme = pd.DataFrame()
for i in tqdm(theme_list):
df_temp = one_page_list(i)
    df_theme = pd.concat([df_theme, df_temp])
# Strip the ' : 네이버 금융' suffix from the theme names
# (chained df_theme.iloc[i]['테마'] = ... assignment writes to a copy and is unreliable)
df_theme['테마'] = df_theme['테마'].str.replace(' : 네이버 금융', '', regex=False)
def make_code(x):
x=str(x)
return 'A'+ '0'*(6-len(x)) + x
df_theme['코드'] = df_theme['코드'].apply(make_code)
df_theme
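# The make_code helper above zero-pads stock codes to six digits under an 'A' prefix; `str.zfill` expresses the same padding more directly. A small standalone sketch:

```python
def make_code(x):
    # Prefix 'A' and left-pad the numeric code to six digits, e.g. 5930 -> 'A005930'
    x = str(x)
    return 'A' + '0' * (6 - len(x)) + x

def make_code_zfill(x):
    # Equivalent formulation using str.zfill
    return 'A' + str(x).zfill(6)
```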
# + [markdown] id="yJtVIUnMcNVL"
# ### 2-7. Google Trends Data Preprocessing
# + colab={"base_uri": "https://localhost:8080/"} id="xYlzJS59ZdtP" outputId="bc01f1c4-767d-4ac3-ed65-44094f997b33"
# !pip install pytrends
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="5KWdriDxXibh" outputId="a2b2bd0f-c5a2-440c-8d63-e808a30330a0"
# pytrends library
from pytrends.request import TrendReq
from tqdm import tqdm
from tqdm import trange
# Build the list of company names
stock_list = list(df_esg_final.Name)
# Build the crawler
trend_df = pd.DataFrame(columns=['Date', 'CO NM', 'Trend'])
period = 'today 1-m'
for i in trange(len(stock_list)):
try:
a = TrendReq()
a.build_payload(kw_list=[stock_list[i]], timeframe=period, geo='KR')
a_df = a.interest_over_time()
data1 = {'Date' : a_df.index,
'CO NM' : a_df.columns[0],
'Trend' : a_df.iloc[:,0]}
DF1 = pd.DataFrame(data1).reset_index(drop=True)
trend_df = pd.concat([trend_df, DF1], ignore_index=True)
    except Exception:
        # skip names that fail (rate limits or no trend data)
        pass
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="r18CXlzUNTIt" outputId="ac272ee7-e1de-4946-d3dd-f42da74bd57d"
# Add ticker codes to the DataFrame
Code_df = df_esg_final[['Code', 'Name']]
trend_code_df = pd.merge(trend_df, Code_df, left_on='CO NM', right_on='Name')
trend_code_df.drop(columns=['Name'], inplace=True)
trend_code_df.rename(columns={'Code':'CODE'}, inplace=True)
# -
trend_code_df
# + [markdown] id="_hQvSScJgFGn"
# ### 2-8. ETF Trading Data Preprocessing
# + colab={"base_uri": "https://localhost:8080/", "height": 739} id="nu2yaeLyeZoB" outputId="eb3321df-0cd7-4ba4-e6ec-4b72aa79e09c"
df_etf_trade = df_etf_trade[['DATA_DATE', '거래량', '거래대금', '종가', 'ITEM_S_CD', '한글명_F']]
df_etf_trade['DATA_DATE'] = df_etf_trade['DATA_DATE'].str[:8]
df_etf_trade[['거래량', '거래대금', '종가']] = df_etf_trade[['거래량', '거래대금', '종가']].astype(float)
df_etf_trade.rename(columns={'ITEM_S_CD':'ETF_CD'}, inplace=True)
# + [markdown] id="UrM_d6bnhOFU"
# ### 2-9. Stock Trading Data Preprocessing
# + colab={"base_uri": "https://localhost:8080/", "height": 739} id="M4OROhDCgfTe" outputId="4916f215-8f6d-4057-e754-f17d8043d235"
stock_trade = df_stock_trade[['DATA_DATE', '거래량', '거래대금', '종가', 'ITEM_S_CD', '한글명_F']]
stock_trade['DATA_DATE'] = stock_trade['DATA_DATE'].str[:8]
stock_trade[['거래량', '거래대금', '종가']] = stock_trade[['거래량', '거래대금', '종가']].astype(float)
stock_trade.rename(columns={'ITEM_S_CD':'ITEM_CD'}, inplace=True)
| Git_ESG Visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import statsmodels.api as sm #(To access mtcars dataset)
mtcars = sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data
# -
mtcars.iloc[0:6]
# +
# Check the first n rows with df.head(n)
# Equivalent to
mtcars.head(6)
# +
# Check the last n rows with df.tail(n)
# Equivalent to df.iloc[len(df)-n:len(df)]
mtcars.tail(6)
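# The head/tail equivalences noted in the comments above can be checked directly against `iloc` (a sketch assuming pandas is installed; toy frame):

```python
import pandas as pd

df = pd.DataFrame({"x": range(10)})
n = 3

# head(n) is positional slicing from the top; tail(n) from the bottom
assert df.head(n).equals(df.iloc[0:n])
assert df.tail(n).equals(df.iloc[len(df) - n:len(df)])
```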
# +
data = pd.DataFrame({"character": ["Goku","Vegeta", "Nappa","Gohan",
"Piccolo","Tien","Yamcha", "Krillin"],
"power_level": [12000, 16000, 4000, 1500, 3000,
2000, 1600, 2000],
"uniform color": ["orange", "blue", "black", "orange",
"purple", "green", "orange", "orange"],
"species": ["saiyan","saiyan","saiyan","half saiyan",
"namak","human","human","human"]})
data
# +
# Create a logical index with one value for each row
#logical_index = data.power_level > 2000
logical_index = data["power_level"] > 2000
logical_index
# +
# Use the logical index to index into the data frame
data[logical_index]
# +
# Select rows based on multiple logical conditions
logical_condition_1 = data["power_level"] > 2000
logical_condition_2 = data["species"] != "saiyan"
data[logical_condition_1 & logical_condition_2]
# +
import pandas as pd
import statsmodels.api as sm #(To access mtcars dataset)
mtcars = sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data
mtcars.head()
# +
# Create a logical index with one value for each column
logical_index = mtcars.mean() > 10
logical_index
# -
mtcars.describe()
# +
# Get the corresponding columns
cols = mtcars.columns[logical_index]
cols
# +
# Use the column list to index the data frame
mtcars_sub = mtcars[cols]
mtcars_sub.head()
# +
# Do logical indexing on columns in one line:
mtcars[mtcars.columns[mtcars.mean() > 10]].head()
# -
mtcars
# +
import pandas as pd
import statsmodels.api as sm #(To access mtcars dataset)
mtcars = sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data
mtcars.head()
# -
# Get unique entries of a column with series.unique()
mtcars["cyl"].unique()
# +
# Get unique entries across multiple columns
mtcars_subset = mtcars[["cyl", "gear"]]
# -
mtcars_subset
mtcars_subset.drop_duplicates().reset_index(drop=True)
| notebooks/first & Last & filter & logical & select columns & unique.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="21QsC7Kfuc-U" outputId="c3ecfa16-b5fb-4484-dd90-2a5466eb9bf5"
# !pip install deepgram-sdk
# + colab={"base_uri": "https://localhost:8080/"} id="Va5KmVfEu0lE" outputId="3a99e8b0-352f-47eb-90f5-ebf5d10badbd"
# %%writefile deepgram_test.py
from deepgram import Deepgram
import asyncio, json
# The API key you created in step 1
DEEPGRAM_API_KEY = APIKEY_DEEPGRAM
AUDIO_URL = 'https://static.deepgram.com/examples/Bueller-Life-moves-pretty-fast.wav'
async def main():
# Initializes the Deepgram SDK
dg_client = Deepgram(DEEPGRAM_API_KEY)
source = {'url': AUDIO_URL}
# print('Requesting transcript...')
# print('Your file may take up to a couple minutes to process.')
# print('While you wait, did you know that Deepgram accepts over 40 audio file formats? Even MP4s.')
# print('To learn more about customizing your transcripts check out developers.deepgram.com.')
response = await dg_client.transcription.prerecorded(source, {'punctuate': True})
print(json.dumps(response, indent=4))
asyncio.run(main())
# + id="WnIH3yPCu8JX"
# !python deepgram_test.py > text.json
# + colab={"base_uri": "https://localhost:8080/"} id="7Ob7aBjSu9Xf" outputId="43a8e769-4b12-4194-cd53-fd942c3dcdfa"
import re
import json
import math
file = open('text.json', 'r')
res = []
for lne in file:
    # print(lne)
lst = re.findall('"transcript": "(.*)"', lne)
if(len(lst)>0):
print(lst[0])
break
# print(res)
# + [markdown] id="2XFCXbC0y2LE"
# # Real time audio (no need)
# + colab={"base_uri": "https://localhost:8080/"} id="9YeQPoYey4ni" outputId="05a34580-da6b-4347-bcc4-e31ec57003f8"
# %%writefile deepgram_test_streaming.py
from deepgram import Deepgram
import asyncio, json
# The API key you created in step 1
DEEPGRAM_API_KEY = APIKEY_DEEPGRAM
# Name and extension of the file you downloaded (e.g. sample.wav)
PATH_TO_FILE = 'Bueller-Life-moves-pretty-fast.wav'
async def main():
# Initializes the Deepgram SDK
dg_client = Deepgram(DEEPGRAM_API_KEY)
# Creates a websocket connection to Deepgram
socket = await dg_client.transcription.live({'punctuate': True})
# print('Connection Opened!')
# Handle sending audio to the socket
async def process_audio(connection):
# Grab your audio file
with open(PATH_TO_FILE, 'rb') as audio:
# Chunk up the audio to send
CHUNK_SIZE_BYTES = 8192
CHUNK_RATE_SEC = 0.001
chunk = audio.read(CHUNK_SIZE_BYTES)
while chunk:
connection.send(chunk)
await asyncio.sleep(CHUNK_RATE_SEC)
chunk = audio.read(CHUNK_SIZE_BYTES)
# Indicate that we've finished sending data
await connection.finish()
# Receive transcriptions based on sent streams and write them to the console
socket.register_handler(socket.event.CLOSE, lambda _: print('Connection closed.'))
# Print incoming transcription objects
socket.register_handler(socket.event.TRANSCRIPT_RECEIVED, print)
# Send the audio to the socket
await process_audio(socket)
asyncio.run(main())
# + id="2AGOnWXo0LlM"
# !python deepgram_test_streaming.py > text_streaming.json
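# The process_audio coroutine above streams the file in fixed-size chunks. The chunking read-loop on its own can be sketched against any file-like object (pure Python, no Deepgram dependency):

```python
import io

def iter_chunks(fileobj, chunk_size=8192):
    # Yield successive fixed-size chunks until the stream is exhausted,
    # mirroring the read-loop in process_audio above.
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

data = bytes(20000)  # stand-in for the audio bytes
chunks = list(iter_chunks(io.BytesIO(data)))
sizes = [len(c) for c in chunks]
```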
| Audio_Recognition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
from typing import Dict, List, Any, Tuple, Optional, Callable, Union
from urllib.parse import urlparse
from instascrape import Post, Profile
import requests
from selenium.webdriver.chrome.webdriver import WebDriver
from selenium.webdriver.chrome.options import Options
def setup(account, data_dir, login=False):
chrome_options = Options()
chrome_options.add_argument(f"user-data-dir={data_dir}")
wd = WebDriver(options=chrome_options)
if login:
        wd.get("https://www.instagram.com")
input("Log in, then press `Enter` to continue...")
#s = requests.Session()
#for cookie in wd.get_cookies():
# s.cookies.set(cookie['name'], cookie['value'])
a = Profile(account)
a.scrape(webdriver=wd)
return wd, a
# -
# ## Staying logged in
#
# Set the account you want to load as well as a place to persist your login state (so you can get around the stingy anonymous API rates). Set `login=True` to log in the first time; on subsequent runs, you shouldn't need to manually log in again (at least not for a little while).
# + pycharm={"name": "#%%\n"}
wd, a = setup(act, usrdir, True)
# + pycharm={"name": "#%%\n"}
ps = a.get_posts(wd, amount=30, max_failed_scroll=10, scroll_pause=1000)
# + pycharm={"name": "#%%\n"}
len(ps)
# + pycharm={"name": "#%%\n"}
aps = a.iter_posts(wd, amount=30, max_failed_scroll=10, scroll_pause=1000)
# + pycharm={"name": "#%%\n"}
ps2 = [*aps]
# + pycharm={"name": "#%%\n"}
ps2
# + pycharm={"name": "#%%\n"}
next(aps)
# + pycharm={"name": "#%%\n"}
| tutorial/examples/download_recent_posts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# reload packages
# %load_ext autoreload
# %autoreload 2
# ### Choose GPU (this may not be needed on your computer)
# %env CUDA_DEVICE_ORDER=PCI_BUS_ID
# %env CUDA_VISIBLE_DEVICES=''
# ### load packages
from tfumap.umap import tfUMAP
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import umap
import pandas as pd
# ### Load dataset
from tensorflow.keras.datasets import fashion_mnist
# +
# load dataset
(train_images, Y_train), (test_images, Y_test) = fashion_mnist.load_data()
X_train = (train_images/255.).astype('float32')
X_test = (test_images/255.).astype('float32')
X_train = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
# subset a validation set
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
# flatten X
X_train_flat = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test_flat = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
X_valid_flat= X_valid.reshape((len(X_valid), np.product(np.shape(X_valid)[1:])))
print(len(X_train), len(X_valid), len(X_test))
# -
# ### Train PCA model
from sklearn.decomposition import PCA
pca = PCA(n_components=64)
z = pca.fit_transform(X_train_flat)
# ### plot output
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train.astype(int)[:len(z)],
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("PCA embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
# ### Save model
import os
import pickle
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ 'fmnist' / '64'/ 'PCA'
ensure_dir(output_dir)
with open(os.path.join(output_dir, "model.pkl"), "wb") as output:
pickle.dump(pca, output, pickle.HIGHEST_PROTOCOL)
np.save(output_dir / 'z.npy', z)
# ## t-SNE
from openTSNE import TSNE
tsne = TSNE(
n_components = 64,
negative_gradient_method = 'bh'
)
embedding_train = tsne.fit(X_train_flat)
z = np.array(embedding_train)
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train.astype(int)[:len(z)],
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("t-SNE embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
# #### save model
# +
import os
import pickle
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ 'fmnist'/ '64'/ 'TSNE'
ensure_dir(output_dir)
with open(os.path.join(output_dir, "model.pkl"), "wb") as output:
    # save the fitted t-SNE embedding (not the PCA model from the earlier section)
    pickle.dump(embedding_train, output, pickle.HIGHEST_PROTOCOL)
np.save(output_dir / 'z.npy', z)
# -
| notebooks/dataset-projections/64/fmnist/fmnist-PCA-tsne.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Data Input and Output in Heterogeneous Formats
# Import the pandas library
import pandas as pd
# We will build a dataset with student names and the courses they are enrolled in. For that we collect data from the web
# Reading data from the web; these data are in JSON format
nombres_f = pd.read_json("https://servicodados.ibge.gov.br/api/v1/censos/nomes/ranking?qtd=20&sexo=f")
nombres_m = pd.read_json("https://servicodados.ibge.gov.br/api/v1/censos/nomes/ranking?qtd=20&sexo=m")
# Showing the data type, which is DataFrame
type(nombres_f)
# Showing the number of names in the DataFrame
print("Cantidad de nombres: " + str(len(nombres_m) + len(nombres_f)))
# Another print format
print("Cantidad de nombres: %d" % (len(nombres_m) + len(nombres_f)))
# Put the female and male names we collected into a list
frames = [nombres_f, nombres_m]
type(frames)
# Listing the data
frames
# Concatenate the list's data into a DataFrame
pd.concat(frames)
# Keep only the "nome" column and update the DataFrame
pd.concat(frames)["nome"].to_frame()
# Save the data in the "nombres" variable
nombres = pd.concat(frames)["nome"].to_frame()
nombres.sample(5)
# Rename the header to "nombre"
nombres = nombres.rename(columns={'nome': 'nombre'})
print(nombres.columns)
# ## Adding IDs
# Import the numpy library
import numpy as np
# Seed the generator so the same sequence of random numbers is produced every run
np.random.seed(123)
# Store the number of names in the DataFrame
total_alumnos = len(nombres)
total_alumnos
# We will give each student an ID different from their position in the DataFrame
nombres.sample(3)
# We want the IDs to be random numbers from 1 to 40, so we create a new column.
# The new column receives np.random.permutation(), a NumPy function that distributes numbers randomly.
# We pass total_alumnos as the parameter and add 1
nombres["id_alumno"] = np.random.permutation(total_alumnos) + 1
nombres.sample(3)
# Show the current data
nombres.head(10)
# We will add emails to the DataFrame: we generate email domains and then concatenate them
# with the student names
dominios = ['<EMAIL>', '@serviciodeemail.com']
# We use np.random.choice to pick a value from the dominios list at random
# and add it to a new column called "dominio"
nombres['dominio'] = np.random.choice(dominios, total_alumnos)
# Listing the DataFrame
nombres.sample(5)
# Concatenate the domain with the student name and add it to a new column called "email"
nombres['email'] = nombres.nombre.str.cat(nombres.dominio).str.lower()
# Listing a sample of the data
nombres.sample(5)
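# Series.str.cat concatenates two string columns element-wise; chaining .str.lower() normalizes case. A small sketch of the email construction above (assumes pandas; toy names):

```python
import pandas as pd

df = pd.DataFrame({"nombre": ["Maria", "JOSE"],
                   "dominio": ["@example.com", "@example.com"]})
# element-wise concatenation, then lowercase, as in the cell above
df["email"] = df.nombre.str.cat(df.dominio).str.lower()
emails = df["email"].tolist()
```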
# ## Reading HTML
# Import the library needed to parse HTML data
import html5lib
# Read from a URL listing the names of the courses students will enroll in
url = 'http://tabela-cursos.herokuapp.com/index.html'
cursos = pd.read_html(url)
cursos
# read_html returns a list, so we extract the first element, which contains the DataFrame
cursos = cursos[0]
# Show the type of the extracted element
type(cursos)
# Listing a sample of the courses
cursos.sample(5)
# Rename the column to 'nombre del curso'
cursos = cursos.rename(columns={'Nome do curso': 'nombre del curso'})
print(cursos.columns)
# Create an ID for each course: the id column receives the index plus 1
cursos['id'] = cursos.index + 1
cursos.tail()
# ## Enrolling Students in Courses
# Add a 'matriculas' column holding the number of courses each student is enrolled in;
# we draw random values from an exponential distribution and cast them to int
nombres['matriculas'] = np.random.exponential(size=total_alumnos).astype(int)
nombres.sample(5)
# Fix the enrollments whose value was 0: with np.ceil every student is enrolled
# in at least one course
nombres['matriculas'] = np.ceil(np.random.exponential(size=total_alumnos)).astype(int)
nombres.sample(5)
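# Why np.ceil guarantees at least one enrollment: draws from an exponential distribution are strictly positive, so their ceiling is an integer of at least 1. A stdlib sketch of the same idea:

```python
import math
import random

random.seed(123)
# exponential draws are > 0, so ceil() maps every draw to an integer >= 1
matriculas = [math.ceil(random.expovariate(1.0)) for _ in range(1000)]
```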
# Listing a sample of the DataFrame
nombres.sample(5)
# Increase the number of courses students are enrolled in
# by multiplying the random draws by 1.5
nombres['matriculas'] = np.ceil(np.random.exponential(size=total_alumnos) * 1.5).astype(int)
nombres.sample(5)
# Describe the resulting distribution of the data
nombres.matriculas.describe()
# Students are enrolled in at least 1 course and the maximum is around 5 (this number varies with the generated data)
# Visualize the information in a plot
# Import the seaborn library
import seaborn as sns
# Plot the distribution of the matriculas column (distplot is deprecated in recent seaborn)
sns.histplot(nombres.matriculas, kde=True)
# Show the number of students for each count of enrolled courses
nombres.matriculas.value_counts()
# value_counts() shows the count of elements for each distinct value in the column
nombres.dominio.value_counts()
# ## Selecting Courses
#
# <!-- Now we will create a DataFrame linking enrollments to course names -->
# Listing a sample of the data
nombres.sample(5)
# To make this assignment we write code that allocates courses according to each student's enrollments
todas_matriculas = []
x = np.random.rand(20)
prob = x / sum(x)
# +
# The for iterator fetches the index and the row (row) to be used.
# It traverses the nombres DataFrame with the help of iterrows():
# for index, row in nombres.iterrows()
# For each element found, we store the student id, obtained with row.id_alumno,
# and the number of enrollments, obtained with row.matriculas
for index, row in nombres.iterrows():
    id = row.id_alumno
    matriculas = row.matriculas
    for i in range(matriculas):
        mat = [id, np.random.choice(cursos.index, p = prob)]
        todas_matriculas.append(mat)
# -
# Create the matriculas DataFrame holding the student id and the course id
matriculas = pd.DataFrame(todas_matriculas, columns = ['id_alumno', 'id_curso'])
matriculas.head(5)
# We can use SQL-like commands to query the data
matriculas.groupby('id_curso').count().join(cursos['nombre del curso'])
# Querying the number of students per course
matriculas.groupby('id_curso').count().join(cursos['nombre del curso']).rename(columns={'id_alumno':'Cantidad_de_alumnos'})
# Listing a sample of the nombres DataFrame
nombres.sample(5)
# Listing a sample of the cursos DataFrame
cursos.sample(5)
# Listing a sample of the matriculas DataFrame
matriculas.sample(5)
# Save the students-per-course query in a variable
matriculas_por_curso = matriculas.groupby('id_curso').count().join(cursos['nombre del curso']).rename(columns={'id_alumno':'Cantidad_de_alumnos'})
# Viewing a sample
matriculas_por_curso.sample(5)
# ### Output in Different Formats
# Export the data to a CSV file; it is saved in the current working directory
matriculas_por_curso.to_csv('matriculas_por_curso.csv', index=False)
# We can read the saved data back
pd.read_csv('matriculas_por_curso.csv')
# We can convert the DataFrame to JSON
matriculas_json = matriculas_por_curso.to_json()
matriculas_json
# We can convert the DataFrame to HTML
matriculas_html = matriculas_por_curso.to_html()
matriculas_html
# Printing shows the HTML in a readable layout
print(matriculas_html)
# ## Creating the SQL Database
#
# Using sqlalchemy
# +
# Install the library if you don't have it
# #!pip install sqlalchemy
# -
# We import the following libraries
from sqlalchemy import create_engine, MetaData, Table
# Create the engine with the database path. SQLite ships natively with Colab
# We create an engine variable
engine = create_engine('sqlite:///:memory:')
type(engine)
# With the database created, we load the matriculas_por_curso DataFrame into it using to_sql().
# This function takes two parameters: a string with the table name, here 'matriculas',
# and the engine
matriculas_por_curso.to_sql('matriculas', engine)
# Print the database's table names
print(engine.table_names())
# ### Querying the Database
# Get all courses with fewer than 5 enrolled students
query = 'select * from matriculas where Cantidad_de_alumnos < 5'
pd.read_sql(query, engine)
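# The same in-memory round-trip can be sketched with the stdlib sqlite3 module alone (table contents here are illustrative):

```python
import sqlite3

# in-memory database, like create_engine('sqlite:///:memory:') above
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE matriculas (curso TEXT, Cantidad_de_alumnos INTEGER)")
conn.executemany("INSERT INTO matriculas VALUES (?, ?)",
                 [("Python", 12), ("SQL", 3), ("Pandas", 7)])
# same filter as the pandas query above: courses with fewer than 5 students
rows = conn.execute(
    "SELECT curso FROM matriculas WHERE Cantidad_de_alumnos < 5").fetchall()
conn.close()
```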
# Another way to read a table
pd.read_sql_table('matriculas', engine, columns=['nombre del curso', 'Cantidad_de_alumnos'])
# Assign to a variable
muchas_matriculas = pd.read_sql_table('matriculas', engine, columns=['nombre del curso', 'Cantidad_de_alumnos'])
muchas_matriculas
# Use pandas itself to run queries
muchas_matriculas.query('Cantidad_de_alumnos > 5')
# Repeat the process only for courses with more than 5 enrolled students and store the result
muchas_matriculas = muchas_matriculas.query('Cantidad_de_alumnos > 5')
muchas_matriculas
muchas_matriculas.to_sql('muchas_matriculas', con=engine)
print(engine.table_names())
| topologias/Lectura de diferentes formatos de datos con Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
"""
import tensorflow as tf
gpu_fraction = 0.1
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_fraction)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
"""
"""
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"
"""
"""
import tensorflow as tf
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
"""
import tensorflow as tf
import keras
import mdn
from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda,MaxPooling1D
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from keras.preprocessing.text import Tokenizer
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from at_nmt_utils import *
import matplotlib.pyplot as plt
# %matplotlib inline
# +
# #%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sys
sys.path.insert(0,'..')
from utils import plot_stroke
# -
strokes = np.load('strokes.npy',encoding='bytes')
with open('sentences.txt') as f:
texts = f.readlines()
# +
chars='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz .#' # for other char in texts
# unique contains all the unique characters in the file
unique = sorted(set(chars))
# creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(unique)}
idx2char = {i:u for i, u in enumerate(unique)}
num_char = len(char2idx)
# -
########## gives the best representation so far
stroke_len = 500
char_len = stroke_len // 25  # integer division so char_len can be used as a layer dimension
# +
def check_char (char2idx,val):
result = []
if char2idx.get(val)!=None:
result = char2idx[val]
elif char2idx.get(val)==None :
result = char2idx['#']
return result
def str2num(texts):
input_text = []
for f in range(len(texts)):
inps = texts[f]
data = list(map( lambda val: check_char (char2idx,val),inps ))
input_text.append(data)
#np.concatenate((a, b), axis=0)
return np.vstack(input_text)
def pad_texts(text, char_len):
pads = char_len - len(text)
for i in range(int(pads)):
text = text+str(' ')
return text
def tranc_text(texts, char_len):
for i in range (len(texts)):
if len(texts[i]) > char_len:
texts[i] = texts[i][0:int(char_len)]
elif len(texts[i]) < char_len:
texts[i] = pad_texts(texts[i],char_len)
return texts
# -
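# The pad/truncate helpers above force every text to exactly char_len characters. A minimal standalone sketch of the same logic:

```python
def pad_text(text, char_len):
    # right-pad with spaces up to char_len characters
    return text + " " * (char_len - len(text))

def fit_text(text, char_len):
    # truncate long strings, pad short ones, so every text has length char_len
    if len(text) > char_len:
        return text[:char_len]
    return pad_text(text, char_len)

fitted = [fit_text(t, 20) for t in ["hello", "x" * 30]]
```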
texts = tranc_text(texts, char_len)
n_texts = str2num(texts)
C = np.array(list(map(lambda x: to_categorical(x, num_classes=len(char2idx)), n_texts)))
# +
Tx = stroke_len
Ty = stroke_len
def pad_stroke(stroke,Ty):
_npads = Ty - stroke.shape[0]
padded_stroke = np.vstack ([ stroke,np.zeros((_npads,3)) ])
#padded_strokes.shape
return padded_stroke
def tranc_stroke(stroke, Ty):
if stroke.shape[0] >= Ty:
stroke = stroke[:Ty,]
elif stroke.shape[0] < Ty:
stroke = pad_stroke(stroke,Ty)
#return input_stroke,output_stroke
return stroke
new_strokes = np.array(list(map(lambda x: tranc_stroke(x, Ty+1), strokes)))
Xoh = new_strokes[:,:Ty,:]
Yoh = new_strokes[:,1:Ty+1,:]
# -
# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(100, activation = "tanh")
densor2 = Dense(1, activation = "relu")
pooling = MaxPooling1D(pool_size=25, strides=25, padding="same")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)
def one_step_attention(a, s_prev,C):
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
s_prev = repeator(s_prev)
# Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
concat = concatenator ([s_prev,a]) # (?,500,600)
# Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines)
e = densor1(concat) # (?,500,100)
e = pooling(e) # (?,20,100)
# Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines)
energies = densor2(e) # (?,20,1)
# Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
alphas = activator(energies) # (?,20,1)
# Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
#context = dotor([alphas,a])
context = dotor([alphas,C]) # context = (?,1,55); alpha = (?,20,1) , alpha = (?,20,55)
return context
# +
n_a = 150 #bi-directional in total ends up having 300 variables
n_s = 300
output_dim = 3
n_mix = 10
input_feat_size = Xoh.shape[2] #3
output_feat_size = Yoh.shape[2] #3
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
#output_layer = Dense(len(machine_vocab), activation=softmax)
mix_model = mdn.MDN(output_dim, n_mix)
#output_layer = Dense(3, activation = "sigmoid")
# -
X = Input(shape=(Tx, input_feat_size))
C = Input(shape=(char_len, num_char))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
def model(Tx, Ty, n_a, n_s, input_feat_size, output_feat_size, char_len, num_char):
# Define the inputs of your model with a shape (Tx,)
# Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
X = Input(shape=(Tx, input_feat_size))
C = Input(shape=(char_len, num_char)) # one hot encoded vector
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
### START CODE HERE ###
# Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
a = Bidirectional(LSTM(n_a, return_sequences=True),input_shape=(Tx, input_feat_size))(X)
# Step 2: Iterate for Ty steps
for t in range(Ty):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
context = one_step_attention(a, s, C)
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
s, _, c = post_activation_LSTM_cell(context,initial_state= [s, c])
# Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
#out = output_layer(s)
out = mix_model(s)
# Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
outputs.append(out)
# Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
model = Model(inputs=[X,C,s0,c0], outputs=outputs)
### END CODE HERE ###
return model
# +
#char_len = total number of characters in input text C
# num_char = number of possible characters
model = model(Tx, Ty, n_a, n_s, input_feat_size, output_feat_size, char_len, num_char)
# -
model.summary()
# +
#opt = Adam(lr=0.005, decay=0.01, beta_1=0.9, beta_2=0.999)
#model.compile(optimizer=opt,
# loss='categorical_crossentropy',
# metrics=['accuracy'])
model.compile(loss=mdn.get_mixture_loss_func(output_dim, n_mix), optimizer=keras.optimizers.Adam())
# -
m = Xoh.shape[0] # no of examples we have for training
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))
BATCH_SIZE = 100
EPOCHS = 100
history = model.fit([Xoh,C, s0, c0], outputs, batch_size=BATCH_SIZE, epochs=EPOCHS, callbacks=[keras.callbacks.TerminateOnNaN()])
model.save('Attention_mdn_batch100_epoch100.h5') # saves the trained model to an HDF5 file
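# At inference time the MDN head predicts mixture parameters (weights, means, scales) rather than a point estimate, so outputs are drawn by sampling. A toy 1-D sketch of that sampling step (pure Python, independent of the keras-mdn-layer API):

```python
import random

def sample_mixture(pis, mus, sigmas, rng):
    # Pick a mixture component according to the weights pis,
    # then sample from that component's Gaussian.
    r, acc = rng.random(), 0.0
    for pi, mu, sigma in zip(pis, mus, sigmas):
        acc += pi
        if r <= acc:
            return rng.gauss(mu, sigma)
    return rng.gauss(mus[-1], sigmas[-1])

# Two components: ~70% of samples near 0, ~30% near 5
samples = [sample_mixture([0.7, 0.3], [0.0, 5.0], [0.1, 0.1], random.Random(i))
           for i in range(1000)]
mean = sum(samples) / len(samples)  # close to 0.7*0 + 0.3*5 = 1.5
```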
| AttensionModel-random-wrting-generate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# !pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
# import dependencies
import torch
from matplotlib import pyplot as plt
import numpy as np
import cv2
# Load the model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
img = "https://ultralytics.com/images/zidane.jpg"
# run inference on the image
results = model(img)
results.print()
# plotting the img with the results
# %matplotlib inline
plt.imshow(np.squeeze(results.render()))
plt.show()
import uuid
import os
import time
# open the video cam using CV2
cap = cv2.VideoCapture(0)
while cap.isOpened():
ret, frame = cap.read()
# make detections
results = model(frame)
cv2.imshow('YOLO', np.squeeze(results.render()))
if cv2.waitKey(10) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
IMAGES_PATH = os.path.join('data', 'images') # data/images
labels =['awake','drowsy']
number_imgs = 20
# open the video cam using CV2
cap = cv2.VideoCapture(0)
# looping through labels
for label in labels:
print('Collecting images for {}'.format(label))
time.sleep(2)
# loop through the images
for img_num in range(number_imgs):
print('Collecting images for {}, image number {}'.format(label, img_num)) # collecting the image (msg)
ret, frame = cap.read() # webcam feed
imgname = os.path.join(IMAGES_PATH, label + '.' + str(uuid.uuid1())+'.jpg') # save as..
cv2.imwrite(imgname, frame) # actually save it / writes down the image
cv2.imshow('Image collection', frame) # display is for 2 seconds
time.sleep(2)
if cv2.waitKey(10) & 0xFF == ord('q'):
break
# +
# installed pyqt5 and ran the resources.py / .qrc step on cmd (labelImg setup)
# +
# images labeled with labelimg - locally
# -
# training the model on the custom dataset images I made
# !cd yolov5 && python train.py --img 320 --batch 8 --epochs 200 --data datasets.yml --weights yolov5s.pt --workers 2
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/runs/train/exp16/weights/last.pt', force_reload=True)
cap = cv2.VideoCapture(0)
while cap.isOpened():
ret, frame = cap.read()
# Make detections
results = model(frame)
cv2.imshow('YOLO', np.squeeze(results.render()))
if cv2.waitKey(10) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
| drowsinessYOLOv5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Intro to information extraction from text
#
# Most of the data we've looked at so far has been *structured*, meaning essentially that the data looked like a table or Excel spreadsheet. Not all data looks like that, however. Human readable text is an extremely common *unstructured* data source. From the text of a webpage, tweet, or document, businesses want to perform things like:
#
# * sentiment analysis
# * document summarization
# * document clustering
# * document recommendation
#
# Later in MSAN 692, we'll learn how to extract the text from webpages or pieces of webpages such as the bestseller list at Amazon. For now, we can play with some prepared text files.
#
# Text analysis uses words as data rather than numbers, which means *tokenizing* text; i.e., splitting the text string for a document into individual words. This problem is actually much harder than you might think. For example, if we split the document text on the space character, then "San Francisco" would be split into two words. For our purposes here, that'll work just fine. See [Tokenization in this excellent information retrieval book](https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html) for more information.
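# A quick sketch of the pitfall just described: splitting on whitespace alone keeps punctuation attached and breaks multi-word names apart.

```python
text = "I left my heart in San Francisco."
tokens = text.split()
# "San Francisco" becomes two tokens, and the period stays glued to the last one
print(tokens)
```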
#
# <img src="figures/wordcloud.png" width="200" align="right">The goal of this lecture-lab is to get familiar with tokenizing text and how to extract some basic data, such as word frequency. To visualize information extracted from a document, we'll use *word clouds* like the image to the right that emphasize words according to their frequency.
# ## Tokenizing a document
#
# Let's use an article on [Istanbul](https://github.com/parrt/msds692/blob/master/notes/data/IntroIstanbul.txt) as our text file and then figure out how to get an appropriate list of words out of it.
# ! head data/IntroIstanbul.txt
# In [Loading files](files.md), we learned how to read the contents of such a file into a string and split it on the space character:
with open('data/IntroIstanbul.txt') as f:
contents = f.read() # read all content of the file
words = contents.split()
print(words[:25]) # print first 25 words
# That looks more like it although it is still not very clean. We should also strip punctuation marks. Here's a slower way to do it using a filter pattern with a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions).
import string
contents = [c for c in contents if c not in string.punctuation]
contents = ''.join(contents)
words = contents.split()
print(words[:25])
# Some of the words are capitalized. What we need is all words normalized so that `people` and `People` are considered the same word, etc.
#
# **Exercise**: Implement another filter pattern to convert the words to lowercase using `lower()`. E.g., `'The'.lower()` is `'the'`.
#
# Here's one way to do it:
words = [w.lower() for w in words]
print(words[:25])
# That's not the best we can do. For example "faces" and "face" should be the same. Let's *stem* the words:
# ! pip install -q -U nltk
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
stemmed = [stemmer.stem(w) for w in words]
print(stemmed[:45])
# ## Computing word frequencies
#
# Let's create a [bag of words](notes/dict.ipynb) representation. My work plan would have a description like "Walk through the words in a document, updating a dictionary that holds the count for each word." The plan pseudocode would have a loop over the words whose body incremented a count in a dictionary
#
# 1. let wfreqs be an empty dictionary mapping words to word counts
# 2. for each word w in words:<br>if w not in wfreqs, let wfreqs[w] = 1.<br>Otherwise add one to wfreqs[w].
#
# My code implementation would look like the following.
# +
from collections import defaultdict
wfreqs = defaultdict(int) # missing entries yield value 0
for w in stemmed:
wfreqs[w] = wfreqs[w] + 1
print(wfreqs['ottoman'])
print(wfreqs['the'])
# -
# Computing the frequency of elements in a list is common enough that Python provides a built-in data structure called a `Counter` that will do this for us:
from collections import Counter
ctr = Counter(stemmed)
print(ctr['ottoman'])
print(ctr['the'])
# That data structure is nice because it can give the list of, say, 10 most common words:
print(ctr.most_common(10))
# ### Exercise
#
# Extract the most common 10 words from `ctr` (i.e., not the tuples).
print([p[0] for p in ctr.most_common(10)])
# ## Word clouds
#
# Python has a nice library called `wordcloud` (we use this in the SF Police data lab) we can use to visualize the relative frequency of words. It should already be installed in your Anaconda Python directory, but if not use the command line to install it:
#
# ```bash
# $ pip install wordcloud
# ```
#
# The key elements of the following code are the creation of the `WordCloud` and calling `fit_words()` with a dictionary of word-frequency associations (here the `Counter` object `ctr`, which is a `dict` subclass).
# +
from wordcloud import WordCloud
import matplotlib.pyplot as plt
wordcloud = WordCloud()
wordcloud.fit_words(ctr)
fig = plt.figure(figsize=(6, 6)) # Prepare a plot 6x6 inches
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# -
# That's kind of busy with all of those words in there, so let's focus on the top 30 words. To do that we will call `most_common()`, which gives us a list of tuples. Because `fit_words()` requires a `dict`, we convert the most common word list into a dictionary:
# +
# Get 30 most common word-freq pairs then convert to dictionary for use by WordCloud
wtuples = ctr.most_common(30)
wdict = dict(wtuples)
wordcloud = WordCloud()
wordcloud.fit_words(wdict)
fig=plt.figure(figsize=(6, 4))
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# -
# That looks better but it looks like common English words like "the" and "of" are dominating the visualization. To focus on the words most relevant to the document, let's filter out such so-called English *stop words*. [scikit-learn](http://scikit-learn.org/stable/), a machine learning library you will become very familiar with in future classes, provides a nice list of stop words we can use:
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
english = list(ENGLISH_STOP_WORDS) # Convert to a list so I can grab a subset
print(english[:25]) # Print 25 of the words
# ### Exercise
#
# Filter out the English stop words from the `words` list we computed above and reset `wfreqs` to a `Counter` based off this filtered list.
contents = [c for c in contents if c not in string.punctuation]
contents = ''.join(contents)
words = contents.split()
words = [w.lower() for w in words]
goodwords = [w for w in words if w not in ENGLISH_STOP_WORDS]
stemmer = PorterStemmer()
stemmed = [stemmer.stem(w) for w in goodwords]
goodctr = Counter(stemmed)
print(goodctr.most_common(10))
# +
wtuples = goodctr.most_common(30)
wdict = dict(wtuples)
wordcloud = WordCloud()
wordcloud.fit_words(wdict)
fig=plt.figure(figsize=(6, 6))
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
# -
# ### Exercise
#
# Add Porter stemming to the previous exercise
# You can play around with the list of stop words to remove things like "important" and others to really get the key words to pop out. There is a technique to automatically damp down common English words called [TFIDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf), which we will learn about soon in this class.
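# As a preview of that damping idea, here is a toy sketch of inverse document frequency (not scikit-learn's exact weighting): a word appearing in every document gets idf 0, so its tf-idf vanishes.

```python
import math

# Three tiny "documents" as word lists
docs = [["the", "old", "city"], ["the", "new", "bridge"], ["the", "old", "bridge"]]
N = len(docs)

def idf(term):
    # document frequency: in how many documents does the term appear?
    df = sum(term in d for d in docs)
    return math.log(N / df)

# "the" appears in every document, so its idf (and hence its tf-idf) is 0
print(idf("the"), round(idf("city"), 3))
```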
# ## Converting non-ASCII char
#
# We should clean up the text extracted from the HTML so that the non-ASCII characters are stripped or converted.
text = "I need ¢ and £ and ¥"
print(text)
text = [c for c in text if ord(c)<=127]
text = ''.join(text)
print(text)
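# An equivalent, more compact approach (assuming we only want to drop, not transliterate, the offending characters) uses `str.encode` with `errors='ignore'`:

```python
text = "I need ¢ and £ and ¥"
# encode to ASCII, silently dropping anything that doesn't fit, then decode back
ascii_text = text.encode('ascii', errors='ignore').decode('ascii')
print(ascii_text)
```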
# ## Stripping char beyond 255 from commandline
#
# If there are characters within the file that are non-ASCII and larger than 255, we can convert the file using the command line. Here's a simple version of the problem I put into file `/tmp/foo.html`:
#
# ```html
# <html>
# <body>
# གྷ
# </body>
# </html>
# ```
#
# I deliberately injected a Unicode code point > 255, which requires two bytes to store. Most of the characters require just one byte. Here is the first part of the file:
#
# ```bash
# $ od -c -t xC /tmp/foo.html
# 0000000 < h t m l > \n < b o d y > \n གྷ **
# 3c 68 74 6d 6c 3e 0a 3c 62 6f 64 79 3e 0a e0 bd
# ...
# ```
#
# Here is how you could strip any non-one-byte characters from the file before processing:
#
# ```bash
# $ iconv -c -f utf-8 -t ascii /tmp/foo.html
# <html>
# <body>
#
# </body>
# </html>
# ```
# ## Summary
#
# Text files are an unstructured data source that we typically represent as a bag of words. A bag of words representation is a set of associations mapping words to their frequency or count. We typically use a dictionary data structure for bag of words because dictionary lookup is extremely efficient, versus linearly scanning an entire list of associations. We used word clouds to visualize the relative frequency of words in a document.
#
# The data structures and techniques described in this lecture-lab form the basis of natural language processing (NLP).
| notes/text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: neon
# language: python
# name: neon
# ---
import SimpleITK as sitk
from matplotlib import pyplot as plt, cm
# %matplotlib inline
# +
INPUT_DIRECTORY = "/Volumes/homes/users/anthony.reina/dicom" \
"/Lung CT/stage1/00cba091fa4ad62cc3200a657aeb957e/"
INPUT_FILENAME = "0a291d1b12b86213d813e3796f14b329.dcm"
# -
itk_img = sitk.ReadImage(INPUT_DIRECTORY + INPUT_FILENAME)
img_array = sitk.GetArrayFromImage(itk_img)
print("Img array: ", img_array.shape)
z = 0
axial_slice = sitk.GetArrayViewFromImage(itk_img)[z, :, :]
plt.imshow(axial_slice, cmap=cm.bone);
type(axial_slice[0, 0])
itk_img.GetDirection()
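# `GetDirection()` returns the image orientation as a flat, row-major 3x3 direction-cosine matrix. A small sketch of unpacking it (the identity values here are illustrative, not taken from this scan):

```python
# Assumed example value, standing in for itk_img.GetDirection()
direction = (1.0, 0.0, 0.0,
             0.0, 1.0, 0.0,
             0.0, 0.0, 1.0)
# Reshape the flat 9-tuple into three rows
rows = [direction[i * 3:(i + 1) * 3] for i in range(3)]
# Each row is a unit vector giving the patient-space direction of an image axis
norms = [sum(c * c for c in r) ** 0.5 for r in rows]
```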
| dicom/ITK DICOM Load.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# # Expand_grid : Create a dataframe from all combinations of inputs.
# ## Background
#
# This notebook serves to show examples of how expand_grid works. Expand_grid aims to offer similar functionality to R's [expand_grid](https://tidyr.tidyverse.org/reference/expand_grid.html) function.<br><br>
# Expand_grid creates a dataframe from a combination of all inputs. <br><br>One requirement is that a dictionary be provided. If a dataframe is provided, a key must be provided as well.
#
# Some of the examples used here are from tidyr's expand_grid page and from Pandas' cookbook.
#
import pandas as pd
import numpy as np
from janitor import expand_grid
# +
data = {"x":[1,2,3], "y":[1,2]}
result = expand_grid(others = data)
result
# +
#combination of letters
data = {"l1":list("abcde"), "l2" : list("ABCDE")}
result = expand_grid(others = data)
result.head(10)
# +
data = {'height': [60, 70],
'weight': [100, 140, 180],
'sex': ['Male', 'Female']}
result = expand_grid(others = data)
result
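# The full cross-product behavior can be sketched with the standard library's `itertools.product` (a rough equivalent of expand_grid for a plain dictionary, ignoring its handling of DataFrames and arrays):

```python
from itertools import product

data = {'height': [60, 70],
        'weight': [100, 140, 180],
        'sex': ['Male', 'Female']}
# One dict per combination, keys paired back with each value tuple
rows = [dict(zip(data, combo)) for combo in product(*data.values())]
len(rows)  # 2 * 3 * 2 = 12 combinations
```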
# +
#A dictionary of arrays
#Arrays can only have dimensions of 1 or 2
data = {"x1":np.array([[1,3],[2,4]]),
"x2":np.array([[5,7],[6,8]])}
result = expand_grid(others=data)
result
# +
#This shows how to method chain expand_grid
#to an existing dataframe
df = pd.DataFrame({"x":[1,2], "y":[2,1]})
data = {"z":[1,2,3]}
#a key has to be passed in for the dataframe
#this is added to the column name of the dataframe
result = df.expand_grid(df_key="df",others = data)
result
# +
# expand_grid can work on multiple dataframes
# Ensure that there are keys
# for each dataframe in the dictionary
df1 = pd.DataFrame({"x":range(1,3), "y":[2,1]})
df2 = pd.DataFrame({"x":[1,2,3],"y":[3,2,1]})
df3 = pd.DataFrame({"x":[2,3],"y":["a","b"]})
data = {"df1":df1, "df2":df2, "df3":df3}
result = expand_grid(others=data)
result
| examples/notebooks/expand_grid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import os
import shutil
from PIL import Image
from io import BytesIO
# Filters for posteroanterior (PA) images and bins them by respiratory disease based on metadata provided by the GitHub dataset. Additionally converts images to JPEG format.
#
# Source: https://github.com/ieee8023/covid-chestxray-dataset
# +
# Load and explore metadata
df = pd.read_csv('data/github/metadata.csv')
columns = df.columns.values
disease_types = df.finding.unique()
print(columns)
print(disease_types)
# -
# Make folders and add all PA images
master_dir = 'data/master'
source_dir = 'data/github/images'
for class_type in disease_types:
master_path = os.path.join(master_dir, class_type)
if not os.path.isdir(master_path):
os.mkdir(master_path)
for index, row in df.iterrows():
if row['finding'] == class_type and row['view'] == 'PA':
source_path = os.path.join(source_dir, row['filename'])
shutil.copy(source_path, master_path)
# Convert png into jpeg
for class_type in df.finding.unique():
master_path = os.path.join(master_dir, class_type)
for image in os.listdir(master_path):
image_path = os.path.join(master_path, image)
if ".png" in image or ".jpg" in image:
ima = Image.open(image_path)
rgb_im = ima.convert('RGB')
rgb_im.save(image_path[:-3] + 'jpeg')
os.remove(image_path)
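# The `image_path[:-3] + 'jpeg'` slice relies on every extension being exactly three characters; `os.path.splitext` is a more general way to swap extensions (a small sketch with a hypothetical filename):

```python
import os.path

def with_jpeg_ext(path):
    # splitext handles extensions of any length ('.png', '.jpeg', ...)
    root, _ = os.path.splitext(path)
    return root + '.jpeg'

print(with_jpeg_ext('scan01.png'))  # scan01.jpeg
```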
#convert all images from source_dir to JPEG and save in save_dir
def conv2JPEG(source_dir, save_dir):
    if source_dir == save_dir:
        raise ValueError("source_dir and save_dir must be different directories")
if not os.path.isdir(save_dir):
os.mkdir(save_dir)
for image in os.listdir(source_dir):
image_path = os.path.join(source_dir, image)
save_image_path = os.path.join(save_dir, image)
if ".png" in image or ".jpg" in image:
ima = Image.open(image_path)
rgb_im = ima.convert('RGB')
rgb_im.save(save_image_path[:-3] + 'jpeg')
os.remove(image_path)
datasets=["data/NLM-MontgomeryCXRSet", "data/ChinaSet_AllFiles"]
disease_types=['Normal', "TB"]
for dataset in datasets:
master_path = os.path.join(dataset, "master")
if not os.path.isdir(master_path):
os.mkdir(master_path)
for disease in disease_types:
source_path = os.path.join(dataset, "CXR_png", disease)
save_path = os.path.join(master_path,disease)
if not os.path.isdir(save_path):
os.mkdir(save_path)
print(source_path,len(os.listdir(source_path)), save_path)
conv2JPEG(source_path,save_path)
print(len(os.listdir(save_path)))
| notebooks/image_sorting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# ## GA Data Science Final Project - 3- NLP
# + [markdown] deletable=true editable=true
# #### Import the data
# + deletable=true editable=true
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
# + [markdown] deletable=true editable=true
# #### Convert datatypes to `str`
# + deletable=true editable=true
df = pd.read_csv('issue_comments_jupyter_copy.csv')
df['org'] = df['org'].astype('str')
df['repo'] = df['repo'].astype('str')
df['comments'] = df['comments'].astype('str')
df['user'] = df['user'].astype('str')
# + [markdown] deletable=true editable=true
# ### Scikit Learn Count Vectorizer
# ##### First, save the Series 'comments' to the variable `comments`.
# + deletable=true editable=true
comments = df.comments
# + deletable=true editable=true
comments.head()
# + [markdown] deletable=true editable=true
# *comments* is a long column with many rows, and I want to use CountVectorizer on all the comments to generate counts of all the words used. To do that, I'll convert all the comments to strings, concatenate those strings using a space as a separator, save the whole long string to a text file, and also store it as the variable **all_comments**.
#
# CountVectorizer documentation: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
#
# which states: "Convert a collection of text documents to a matrix of token counts
# This implementation produces a sparse representation of the counts using scipy.sparse.coo_matrix.
# If you do not provide an a-priori dictionary and you do not use an analyzer that does some kind of feature selection then the number of features will be equal to the vocabulary size found by analyzing the data."
# + [markdown] deletable=true editable=true
# ##### Second, convert all rows in `comments` to a string.
# + deletable=true editable=true
with open('all_comments.txt', "wb") as fd:
    all_comments = comments.str.cat(sep=' ')
    fd.write(all_comments)
# + deletable=true editable=true
cvec = CountVectorizer()
cvec.fit([all_comments])
# + deletable=true editable=true
df2 = pd.DataFrame(cvec.transform([all_comments]).todense(), columns=cvec.get_feature_names())
# + deletable=true editable=true
df2.transpose().sort_values(0, ascending=False).head(25)
# + [markdown] deletable=true editable=true
# The output shows a multitude of stop words that we can take out later.
# + [markdown] deletable=true editable=true
# Frequency Distribution Curve
# + deletable=true editable=true
import nltk
fdist1 = nltk.FreqDist(df2)
fdist1
vocabulary1 = fdist1.keys()
vocabulary1[:50]
# + [markdown] deletable=true editable=true
# From Loper, et al. 2009:
# "When we first invoke FreqDist, we pass the name of the text as an argument. We can inspect the total number of words ('outcomes') that have been counted up. The expression keys() gives us a list of all the distinct types in the text, and we can look at the first 50 of these by slicing the list."
# + deletable=true editable=true
fdist1.plot(50, cumulative=True)
# + deletable=true editable=true
fdist1.hapaxes()
# + [markdown] deletable=true editable=true
# Depending on output: Since neither frequent nor infrequent words help, we need to try something else. Beginning to see that this is a very noisy data set.
# + [markdown] deletable=true editable=true
# ### NLP Bag of Words
# Using **segmentation** to identify sentences within `all_comments`
# + deletable=true editable=true
from nltk.tokenize import PunktSentenceTokenizer
sent_detector = PunktSentenceTokenizer()
sent_detector.sentences_from_text(all_comments.decode('utf8'))
# + [markdown] deletable=true editable=true
# ### Lemmatization
# + deletable=true editable=true
from nltk.stem import WordNetLemmatizer
from sklearn.linear_model import LogisticRegression
lemmatizer = WordNetLemmatizer()
# lemmatize() expects a single word string, not a DataFrame, so lemmatize each vocabulary word
lemmatized = [lemmatizer.lemmatize(w) for w in df2.columns]
lemmatized[:25]
# + [markdown] deletable=true editable=true
# ### Stemming
# + deletable=true editable=true
# + [markdown] deletable=true editable=true
# ### Term Frequency - Inverse Document Frequency - TF-IDF
# + deletable=true editable=true
from sklearn.feature_extraction.text import TfidfVectorizer
tvec = TfidfVectorizer(stop_words='english')
tvec.fit([all_comments])
# + deletable=true editable=true
tfidf_data = pd.DataFrame(tvec.transform([all_comments]).todense(),
columns=tvec.get_feature_names(), index=['all_comments'])
# + deletable=true editable=true
tfidf_data.transpose().sort_values('all_comments', ascending=False).head(10).transpose()
# + [markdown] deletable=true editable=true
# "tf–idf, short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus." What this output means: the words above are the most important across the Jupyter GitHub comments.
# + [markdown] deletable=true editable=true
# ### Topic Modeling: LDA
# + [markdown] deletable=true editable=true
# #### Import libraries
# + deletable=true editable=true
from gensim import corpora, models, matutils
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer
from collections import defaultdict
import pandas as pd
# + [markdown] deletable=true editable=true
# #### Instantiate CountVectorizer and fit data to model
# + deletable=true editable=true
#circling back to add stop words
stop_words = text.ENGLISH_STOP_WORDS.union(['jupyter', 'notebook', 'https', 'github', 'com', 'html', 'http', 'org','ellisonbg','don'])
# + deletable=true editable=true
vectorizer = CountVectorizer(stop_words=stop_words, min_df=3)
X = vectorizer.fit_transform(comments.dropna())
# + [markdown] deletable=true editable=true
# #### Tokens that were saved after stopwords removed
# + deletable=true editable=true
vectorizer.vocabulary_
# + [markdown] deletable=true editable=true
# #### Counts of tokens
# + deletable=true editable=true
docs = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names()).head()
docs
# + deletable=true editable=true
docs.shape
# + [markdown] deletable=true editable=true
# #### Set up LDA model - set up the vocabulary
# + deletable=true editable=true
vocab = {v: k for k, v in vectorizer.vocabulary_.iteritems()}
vocab
# + deletable=true editable=true
vectorizer
# + [markdown] deletable=true editable=true
# #### Set up the actual LDA model
# + deletable=true editable=true
lda = models.LdaModel(
matutils.Sparse2Corpus(X, documents_columns=False),
#corpus,
num_topics = 5,
passes = 20,
id2word = vocab
)
# + [markdown] deletable=true editable=true
# *Third pass looking at the topics, with added stop words 'jupyter', 'notebook', 'https', 'github', 'com', 'html', 'http', 'org','ellisonbg','don'*
# + deletable=true editable=true
lda.print_topics(num_topics=5, num_words=5)
# + [markdown] deletable=true editable=true
# *High-level labels*
# + deletable=true editable=true
#First pass: funny but not helpful topics:
topics_labels = {
    0: "Installing Python and Jupyter",
    1: "I think I just like the Jupyter Notebook",
    2: "JavaScript and .py Files",
    3: "Cells",
    4: "GitHub"
}
# + deletable=true editable=true
#Third pass topics
topics_labels = {
    0: "Viewing and writing notifications",
    1: "Python files and packages",
    2: "PR and issue closure",
    3: "User interface",
    4: "Thinking"
}
# + deletable=true editable=true
for ti, topic in enumerate(lda.show_topics(num_topics = 5)):
print("Topic: %d" % (ti))
print(topic)
print()
# + deletable=true editable=true
| .ipynb_checkpoints/GA Data Science Final Project - 3- NLP-checkpoint.ipynb |