# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: .iga-python
# language: python
# name: .iga-python
# ---
# # Introduction to SymPDE
# [SymPDE](https://github.com/pyccel/sympde) is a Symbolic Library for Partial Differential Equations and more precisely for variational problems. You can install it using
# ```shell
# > pip3 install sympde
# ```
#
# **SymPDE** provides different topological concepts that are useful when you are dealing with Partial Differential Equations. One of its main novelties is the semantic capture of mathematical expressions, which helps reduce bugs. **SymPDE** can be viewed as a static compiler for PDEs, but rather than generating a decorated AST, it provides an AST that you can later on decorate.
#
# The important notions in **SymPDE** are:
#
# * Geometry
# * Function space
# * Expressions, like Linear and Bilinear Forms
# * Equation
# ## 1. Geometry
# The concept of a geometry is not defined here as in a standard library; it is a mathematical concept, defined up to a homeomorphism.
#
# **SymPDE** provides some basic geometries.
# Typical imports are the following ones:
from sympde.topology import Line
from sympde.topology import Square
from sympde.topology import Cube
from sympde.topology import Domain
# ### Line
# A line is defined by its **bounds**.
#
# create the interval [-1, 1]
domain = Line(bounds=[-1,1])
# ### Square
# A square is defined by its **bounds** for each **axis**.
# create the square [-1, 1] x [-1, 1]
domain = Square(bounds1=[-1,1], bounds2=[-1,1])
# ### Cube
# A cube is defined by its **bounds** for each **axis**.
# create the cube [-1, 1] x [-1, 1] x [-1, 1]
domain = Cube(bounds1=[-1,1], bounds2=[-1,1], bounds3=[-1,1])
# ### Domain
# Represents an undefined domain.
# A domain is defined by at least one interior domain and, possibly, boundaries.
# A domain without a boundary is either infinite or periodic.
# A domain can also be constructed from a connectivity, in which case, only the
# name and connectivity need to be passed.
#
# This concept can be used to create more complicated *topological* geometries.
from sympde.topology import Domain
from sympde.topology import InteriorDomain
from sympde.topology import Union
from sympde.topology import Interface
from sympde.topology import Connectivity
from sympde.topology import Boundary
# +
# ... create a domain with 2 subdomains A and B
A = InteriorDomain('A', dim=3)
B = InteriorDomain('B', dim=3)
connectivity = Connectivity()
bnd_A_1 = Boundary('Gamma_1', A)
bnd_A_2 = Boundary('Gamma_2', A)
bnd_A_3 = Boundary('Gamma_3', A)
bnd_B_1 = Boundary('Gamma_1', B)
bnd_B_2 = Boundary('Gamma_2', B)
bnd_B_3 = Boundary('Gamma_3', B)
connectivity['I'] = Interface('I', bnd_A_1, bnd_B_2)
Omega = Domain('Omega',
interiors=[A, B],
boundaries=[bnd_A_2, bnd_A_3, bnd_B_1, bnd_B_3],
connectivity=connectivity)
# -
# You can then export this topology into an **HDF5** file:
# export
Omega.export('omega.h5')
# And read it using:
domain = Domain.from_file('omega.h5')
# ### PeriodicDomain
# **TODO**
# ## 2. Function Spaces
# Once you have a topological domain, you can create a function space over it. There are two kinds of function spaces:
#
# * Scalar function space
# * Vector function space
from sympde.topology import ScalarFunctionSpace
from sympde.topology import VectorFunctionSpace
# ### Scalar Function space
# +
# create a generic domain in 3D
domain = Cube()
# create a scalar function space in 3D
V = ScalarFunctionSpace('V', domain)
# -
# Scalar function spaces can also be typed. This is achieved by specifying the keyword **kind**.
# In this example, we create the Sobolev space $H^1(\Omega)$
H1 = ScalarFunctionSpace('H1', domain, kind="H1")
# In this example, we create the space $L^2(\Omega)$
L2 = ScalarFunctionSpace('L2', domain, kind="L2")
# ### Vector Function space
# create a vector function space in 3D
V = VectorFunctionSpace('V', domain)
# Vector function spaces can also be typed
# In this example, we create the Sobolev space $H(\mbox{curl}, \Omega)$
Hcurl = VectorFunctionSpace('Hcurl', domain, kind="Hcurl")
# In this example, we create the Sobolev space $H(\mbox{div}, \Omega)$
Hdiv = VectorFunctionSpace('Hdiv', domain, kind="Hdiv")
# ### Elements of Function Spaces
#
# You can create an element of a function space by calling the function **element_of**, as in the following examples:
#
from sympde.topology import element_of
u0 = element_of(H1, name='u0')
u1 = element_of(Hcurl, name="u1")
u2 = element_of(Hdiv, name="u2")
u3 = element_of(L2, name="u3")
# ### Product of Function Spaces
#
# You can create a product between function spaces, either by calling the constructor **ProductSpace** or simply by using the operator $*$
from sympde.topology import ProductSpace
W = ProductSpace(H1,Hcurl,Hdiv,L2)
W = H1 * Hcurl * Hdiv * L2
u0,u1,u2,u3 = element_of(W, name='u0,u1,u2,u3')
# ## 3. Bilinear Forms
# Once you create a function space, and have the ability to create elements in it, you can start doing some funny things, like creating a bilinear form.
#
# A **BilinearForm** is a type, but **SymPDE** uses the Curry-Howard correspondence and defines it as a **proposition**. If the construction succeeds, this means that your form is indeed bilinear with respect to its arguments; otherwise, **SymPDE** will tell you why your expression is not bilinear.
v = element_of(H1, name='v')
u = element_of(H1, name='u')
from sympde.calculus import grad, div, dot
from sympde.expr.expr import integral
from sympde.expr.expr import BilinearForm
a = BilinearForm((v,u), integral(domain, dot(grad(v), grad(u))))
# The following expression is not bilinear with respect to $u$, and **SymPDE** is able to tell you this
BilinearForm((v,u), integral(domain, dot(grad(v), grad(u**2))))
# ### Printing
# You can print your bilinear form using the *LaTeX* printer as follows
from IPython.display import Math
from sympde.printing.latex import latex
Math(latex(a))
# ### Free-variables
#
# Elements of function spaces that are neither test nor trial functions of the bilinear form are considered **free variables**.
psi = element_of(H1, name='psi')
a1 = BilinearForm((v,u), integral(domain, (1+psi**2)*dot(grad(v), grad(u))))
# You can then call the bilinear form and specify the free variable $\psi$ as follows
expr = a1(v,u,psi=u)
# **TODO**
#
# there is a problem with the printing of **Integral**
# ## 4. Linear Forms
from sympde.expr.expr import LinearForm
from sympde.calculus import bracket
psi = element_of(H1, name='psi')
l = LinearForm(v, integral(domain, v*psi**2))
Math(latex(l))
# ### Linearization of a nonlinear expression
# Since **SymPDE** is able to do computations, you can linearize a nonlinear expression.
# Let's take the previous example, which is nonlinear w.r.t. $\psi$. In order to linearize $l$, we need to tell the function **linearize** around which function we will do the linearization, and what the name of the perturbation is, as in the following example:
from sympde.expr.expr import linearize
dpsi = element_of(H1, name='delta_psi')
b = linearize(l, psi, trials=dpsi)
# If we check the type of $b$, we will see that it is a **BilinearForm**
type(b)
Math(latex(b))
# ### Free-variables
#
# **TODO**
# ## 5. Equation
# Here, we consider variational problems that can be written as
# $$
# \mbox{Find}~(u_1,u_2,\ldots,u_r) \in V_1 \times V_2 \times \ldots \times V_r,~ \mbox{such that:} \\
# a((v_1,\ldots, v_r), (u_1, \ldots,u_r)) = l(v_1,\ldots,v_r),\quad \forall (v_1,\ldots, v_r) \in V_1 \times V_2 \times \ldots \times V_r
# $$
from sympde.expr import find
equation = find(u, forall=v, lhs=a(u, v), rhs=l(v))
Math(latex(equation))
# ### Essential Boundary Conditions
# You can tell **SymPDE** which boundary conditions are essential for your problem.
from sympde.expr import EssentialBC
bnd = domain.boundary
type(bnd)
bnd
# In order to set $u$ to $0$ on the whole boundary of our domain, we simply write
bc = EssentialBC(u, 0, bnd)
# Then, we add the boundary condition to our **find** statement as follows
equation = find(u, forall=v, lhs=a(u, v), rhs=l(v), bc=bc)
Math(latex(equation))
| lessons/Chapter2/00_introduction_sympde.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .java
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Java
// language: java
// name: java
// ---
// ### first solution using static field in class
package test.beaker;
public class BeakerTest {
public static String location ;
}
test.beaker.BeakerTest.location = "../resources/jar/";
// %classpath add dynamic return test.beaker.BeakerTest.location + "demo.jar";
import com.example.Demo;
Demo demo = new Demo();
return demo.getObjectTest();
// ### second solution using autotranslation
NamespaceClient.getBeakerX().set("location","../resources/jar/");
// %classpath add dynamic return NamespaceClient.getBeakerX().get("location")+ "demo.jar";
import com.example.Demo;
Demo demo = new Demo();
return demo.getObjectTest();
| doc/java/classpath_add_dynamic_magic_command.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import torch
from torch.utils.data import Dataset, DataLoader
from torch import nn, optim
from torch.nn import functional as F
from torchvision import datasets, transforms
from torchvision.utils import save_image
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("Using device", device)
# -
# ## Load the dataset
batch_size = 128
kwargs = {'num_workers': 1, 'pin_memory': True} if torch.cuda.is_available() else {}
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('./data', train=True, download=True,
transform=transforms.ToTensor()),
batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('./data', train=False, transform=transforms.ToTensor()),
batch_size=batch_size, shuffle=True, **kwargs)
# ## Define the model
# +
from models.CVAE import CVAE
model = CVAE(sample_size = 784,
condition_size = 10,
encoder_layer_sizes = [512,512],
latent_size = 20,
decoder_layer_sizes = [512,512]).to(device)
print(model)
# -
# ## Loss function and optimizer
# +
# Reconstruction + KL divergence losses summed over all elements and batch
def loss_fn(recon_x, x, mu, logvar):
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum')
    # see Appendix B from VAE paper:
    # <NAME>. Auto-Encoding Variational Bayes. ICLR, 2014
    # https://arxiv.org/abs/1312.6114
    # 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return BCE + KLD
optimizer = optim.Adam(model.parameters(), lr=1e-3)
# -
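# As a standalone sanity check (standard library only, independent of the training code), the per-dimension term inside the `KLD` sum above agrees with the textbook closed form $KL(\mathcal{N}(\mu,\sigma^2)\,\|\,\mathcal{N}(0,1)) = \tfrac{1}{2}(\sigma^2 + \mu^2 - 1 - \ln\sigma^2)$; the sample values below are arbitrary:

```python
import math

def kld_term(mu, logvar):
    # per-dimension KL term as coded in loss_fn: -0.5 * (1 + logvar - mu^2 - sigma^2)
    return -0.5 * (1 + logvar - mu**2 - math.exp(logvar))

def kl_normal_vs_std(mu, sigma):
    # textbook KL( N(mu, sigma^2) || N(0, 1) )
    return 0.5 * (sigma**2 + mu**2 - 1) - math.log(sigma)

# the two expressions agree, since log(sigma) = 0.5 * logvar
assert math.isclose(kld_term(0.3, math.log(1.7**2)), kl_normal_vs_std(0.3, 1.7))
# KL is zero when the posterior equals the standard normal prior (mu=0, sigma=1)
assert math.isclose(kld_term(0.0, 0.0), 0.0, abs_tol=1e-12)
```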
# ## Training
# +
from utils.mnist import idx2onehot
def train(epoch):
    model.train()
    train_loss = 0
    for batch_idx, (sample, condition) in enumerate(train_loader):
        sample, condition = sample.to(device), condition.to(device)
        sample = sample.view(-1, 784)
        condition = idx2onehot(condition, 10)
        optimizer.zero_grad()
        recon_batch, mu, logvar = model(sample, condition)
        recon_batch = torch.sigmoid(recon_batch)
        loss = loss_fn(recon_batch, sample, mu, logvar)
        loss.backward()
        train_loss += loss.item()
        optimizer.step()
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(sample), len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.item() / len(sample)))
    print('====> Epoch: {} Average loss: {:.7f}'.format(
        epoch, train_loss / len(train_loader.dataset)))
def test_recon(epoch):
    model.eval()
    test_loss = 0
    with torch.no_grad():
        for i, (sample, condition) in enumerate(test_loader):
            sample, condition = sample.to(device), condition.to(device)
            sample = sample.view(-1, 784)
            condition = idx2onehot(condition, 10)
            recon_batch, mu, logvar = model(sample, condition)
            recon_batch = torch.sigmoid(recon_batch)
            test_loss += loss_fn(recon_batch, sample, mu, logvar).item()
            if i == 0:
                n = min(sample.size(0), 8)
                comparison = torch.cat([sample.view(batch_size, 1, 28, 28)[:n],
                                        recon_batch.view(batch_size, 1, 28, 28)[:n]])
                save_image(comparison.cpu(),
                           'mnist_results/reconstruction_' + str(epoch) + '.png', nrow=n)
    test_loss /= len(test_loader.dataset)
    print('====> Test set loss: {:.4f}'.format(test_loss))
epoch = 0
# -
for epoch in range(epoch, epoch + 10):
    log_interval = 100
    train(epoch)
    test_recon(epoch)
    with torch.no_grad():
        z = torch.randn(64, 20).to(device)
        c = torch.randint(0, 10, (64,)).to(device)
        print(c.view(8, 8))
        c = idx2onehot(c, 10)
        sample = model.decode(torch.cat((z, c), dim=-1)).cpu()
        save_image(sample.view(64, 1, 28, 28),
                   'mnist_results/sample_' + str(epoch) + '.png')
# ## Visualize the results
for i in range(0, 10):
    z = torch.randn(64, 20).to(device)
    c = torch.tensor(i).expand(64).to(device)
    c = idx2onehot(c, 10)
    with torch.no_grad():
        sample = model.decode(torch.cat((z, c), dim=-1)).cpu()
        save_image(sample.view(64, 1, 28, 28),
                   'mnist_results/condition_' + str(i) + '.png')
| cvae-mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''.env'': venv)'
# name: python3
# ---
# # Module 5: Connecting to a database using IBM db
#
# ## Overview
#
# - Describe the ibm_db API
# - List the credentials required to connect to a database
# - Connect to an IBM db2 database using Python
#
# ## What is ibm_db?
#
# - The ibm_db API provides a variety of useful Python functions for accessing and manipulating data in an IBM data server Database
# - ibm_db API uses the IBM Data Server Driver for ODBC and CLI APIs to connect to IBM DB2 and Informix
#
# ## Identify database connection credentials
#
# ```python
# dsn_driver = "{IBM DB2 ODBC DRIVER}"
# dsn_database = "BLUDB"
# dsn_hostname = "YourDb2Hostname" # e.g.: "dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net"
# dsn_port = "50000"
# dsn_protocol = "TCPIP"
# dsn_uid = "***************" # e.g. "abc12345"
# dsn_pwd = "***************" # e.g. "<PASSWORD>"
# ```
#
#
# ## Create a database connection
#
# ```python
# import ibm_db
#
# dsn = (
# "DRIVER={{IBM DB2 ODBC DRIVER}};"
# "DATABASE={0};"
# "HOSTNAME={1};"
# "PORT={2};"
# "PROTOCOL=TCPIP;"
# "UID={3};"
# "PWD={4};"
# ).format(dsn_database, dsn_hostname, dsn_port, dsn_uid, dsn_pwd)
#
# try:
#     conn = ibm_db.connect(dsn, "", "")
# except:
#     print("Unable to connect to database")
#
# ```
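# As a standalone, runnable version of the DSN assembly above (all credential values here are placeholders made up for illustration, not real connection details):

```python
dsn_database = "BLUDB"
dsn_hostname = "example.db2.host"  # placeholder hostname
dsn_port = "50000"
dsn_uid = "abc12345"               # placeholder user id
dsn_pwd = "mypassword"             # placeholder password

# double braces {{ }} escape the literal braces required around the driver name
dsn = (
    "DRIVER={{IBM DB2 ODBC DRIVER}};"
    "DATABASE={0};"
    "HOSTNAME={1};"
    "PORT={2};"
    "PROTOCOL=TCPIP;"
    "UID={3};"
    "PWD={4};"
).format(dsn_database, dsn_hostname, dsn_port, dsn_uid, dsn_pwd)

assert dsn.startswith("DRIVER={IBM DB2 ODBC DRIVER};")
assert "DATABASE=BLUDB;" in dsn and "HOSTNAME=example.db2.host;" in dsn
```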
#
# ### Close the database connection
#
# Remember that it is always important to close connections, so that we avoid leaving unused connections open.
#
# ```python
# ibm_db.close(conn)
# ```
| module5/connecting_to_a_database_using_ibm_db.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Building Data Genome Project 2.0
# ## Weather sensitivity testing
# Biam! (<EMAIL>)
# +
# data and numbers
import numpy as np
import pandas as pd
import datetime as dt
# Visualization
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import ticker
import matplotlib.dates as mdates
# %matplotlib inline
import scipy.stats as stats
# -
path = "..//data//meters//raw//"
path1 = "..//data//weather//"
path2 = "..//data//metadata//"
path3 = "..//data//meters//processed//"
path4 = "..//data//meters//screening//anomalies//"
# # Introduction
# This notebook builds visualizations showing the correlation between energy consumption and outside air temperature (weather sensitivity).<br>
# Outliers in the raw meters dataset were detected using the [Seasonal Hybrid ESD (S-H-ESD)](https://github.com/twitter/AnomalyDetection) developed by Twitter. This was implemented in R language, the process can be found [here](https://github.com/buds-lab/building-data-genome-project-2/blob/master/notebooks/04_Anomaly-detection.html). The methodology used here is from _Forensically discovering simulation feedback knowledge from a campus energy information system_ [(Miller - Schlueter, 2015)](https://cargocollective.com/buildingdata/Forensic-Analysis-of-Campus-Dataset).<br>
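# As a self-contained illustration of the screening metric itself, here is a minimal pure-Python Spearman rank coefficient (the notebook computes it below with `scipy.stats.spearmanr`); the toy temperature/consumption values are made up for illustration:

```python
def rank(values):
    # average ranks, handling ties
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Pearson correlation of the ranks
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

temperature = [5, 10, 15, 20, 25, 30]
cooling_load = [12, 20, 31, 45, 80, 95]  # rises with temperature -> coefficient near +1
assert abs(spearman(temperature, cooling_load) - 1.0) < 1e-9
```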
# # Functions
# This function removes outliers and 24-hour runs of zero readings
def removeBadData(df, metername):
    # load anomalies df
    df_anom = pd.read_csv(path4 + metername + "_anoms.csv")
    # Transform timestamp to datetime object type
    df_anom["timestamp"] = pd.to_datetime(
        df_anom["timestamp"], format="%Y-%m-%d %H:%M:%S"
    )
    # Remove timezone offset at the end of timestamp
    df_anom["timestamp"] = df_anom.timestamp.apply(lambda d: d.replace(tzinfo=None))
    # Set index
    df_anom = df_anom.set_index("timestamp")
    # Remove outliers
    outliers = df_anom.copy()
    # replace not null values with 9999 (outliers)
    outliers[outliers.isna() == False] = 9999
    # Update df with outliers data
    df.update(outliers)
    # Remove outliers
    for datapoint in df.columns:
        df[datapoint] = df[datapoint][df[datapoint] != 9999]
    # Remove zero gaps
    # Calculate daily average and aggregate data
    df_daily = df.resample("D").mean()
    # De-aggregate the data, assigning the daily mean to each hour
    df_hourly = df_daily.resample("H").fillna(method="ffill")
    ## This dataset ends on 2017-12-31 00:00:00. Our meter dataset ends on 2017-12-31 23:00:00. ##
    ## This is solved in the following code ##
    # Last row of df_hourly to copy values
    sample = df_hourly[df_hourly.index == "2017-12-31 00:00:00"]
    # Dataframe
    rng = pd.DataFrame(
        index=pd.date_range("2017-12-31 01:00:00", periods=23, freq="H"),
        columns=df.columns,
    )
    appdf = (
        sample.append(rng)
        .fillna(method="ffill")
        .drop(pd.Timestamp("2017-12-31 00:00:00"))
    )
    # Append
    df_hourly = df_hourly.append(appdf)
    # Delete zero values during whole day
    for datapoint in df_hourly.columns:
        df[datapoint] = df[datapoint][df_hourly[datapoint] > 0]
    del (df_anom, outliers, df_daily, df_hourly)
    return df
# This function merges meter data, weather and metadata
def mergeAll(metername, meter_df, weather_df, metadata_df):
    # Filters metadata with only current meter info
    df_meta = metadata_df.loc[
        metadata_df[metername] == "Yes", ["building_id", "site_id"]
    ].copy()
    site_list = list(df_meta.site_id.unique())
    # Filters weather with only current sites
    df_weather = weather_df.loc[weather_df["site_id"].isin(site_list)].copy()
    # Converts timestamp to datetime object
    df_weather["timestamp"] = pd.to_datetime(
        df_weather["timestamp"], format="%Y-%m-%d %H:%M:%S"
    )
    # Melt meter dataset
    meter_df = pd.melt(
        meter_df.reset_index(),
        id_vars="timestamp",
        var_name="building_id",
        value_name="meter_reading",
    )
    # Merge
    meter_df = pd.merge(meter_df, df_meta, how="left", on="building_id").merge(
        df_weather, how="left", on=["timestamp", "site_id"]
    )
    return meter_df
# Use this function to plot each meter individually
def plotHeatmap(df, metername):
    numberofplots = 1
    fig = plt.figure(figsize=(10, 15))
    # Get the data
    x = mdates.drange(df.columns[0], df.columns[-1] + dt.timedelta(days=30),
                      dt.timedelta(days=30))
    y = np.linspace(1, len(df), len(df) + 1)
    # Plot
    ax = fig.add_subplot(numberofplots, 1, 1)
    data = np.array(df)
    cmap = plt.get_cmap('RdBu')
    qmesh = ax.pcolormesh(x, y, data, cmap=cmap, rasterized=True, vmin=-1, vmax=1)
    # Colorbar
    cbaxes = fig.add_axes([0.13, 0.1, 0.77, 0.02])
    cbar = fig.colorbar(qmesh, ax=ax, orientation='horizontal', cax=cbaxes)
    cbar.set_label('Spearman Rank Coefficient')
    ax.axis('tight')
    # Set up as dates
    ax.xaxis_date()
    fig.autofmt_xdate()
    fig.subplots_adjust(hspace=.5)
    # Axis
    ax.set_xlabel("Timeline", fontsize=16)
    ax.set_ylabel("", fontsize=16)
    ax.set_title(str(metername) + " Weather Sensitivity Screening",
                 fontdict={'fontsize': 20})
    # plt.tight_layout()
    plt.subplots_adjust(bottom=0.17)
    return fig
# This function process meter data automatically
def processData(metername):
    # load data
    df = pd.read_csv(path + metername + ".csv")
    # Transform timestamp to datetime object type
    df["timestamp"] = pd.to_datetime(df["timestamp"], format='%Y-%m-%d %H:%M:%S')
    # Set index
    df = df.set_index("timestamp")
    # Remove bad data
    df_clean = removeBadData(df, metername)
    # Merge datasets
    df_clean = mergeAll(metername, df_clean, weather, meta)
    # Spearman rank coefficient for each month and building
    df_clean = df_clean.dropna()
    # Group
    spearman = df_clean.groupby(
        ["building_id", df_clean.timestamp.dt.year, df_clean.timestamp.dt.month]
    ).apply(lambda x: stats.spearmanr(x["airTemperature"], x["meter_reading"])[0])
    # Create dataframe
    spearman = (
        pd.DataFrame(spearman)
        .reset_index(level=[0, 1])
        .rename(columns={"timestamp": "year"})
        .reset_index()
        .rename(columns={"timestamp": "month", 0: "coeff"})
    )
    # Dates
    spearman.index = pd.to_datetime(
        (spearman.year).apply(str) + "-" + (spearman.month).apply(str), format='%Y-%m'
    )
    spearman = spearman.drop(["year", "month"], axis=1)
    # Unmelt data
    spearman = spearman.pivot(columns='building_id', values="coeff")
    # Sort
    spearman = spearman.T.loc[spearman.T.sum(axis=1).sort_values().index]
    return spearman
# # Weather
weather = pd.read_csv(path1 + "weather.csv", usecols = ["timestamp","site_id","airTemperature"])
weather.info()
# # Metadata
meta = pd.read_csv(
path2 + "metadata.csv",
usecols=[
"building_id",
"site_id",
"electricity",
"hotwater",
"chilledwater",
"water",
"steam",
"solar",
"gas",
"irrigation",
],
)
meta.info()
# # One figure to subplot them all
# +
fig, axes = plt.subplots(2, 4, sharex=True, figsize=(8.27, 11.69))
axes = axes.flatten()
numberofplots = 1
metername = ["electricity", "water", "chilledwater", "hotwater", "gas", "steam", "solar", "irrigation"]
for i, j in enumerate(metername):
    df = processData(j)
    # Get the data
    x = mdates.drange(df.columns[0], df.columns[-1] + dt.timedelta(days=30),
                      dt.timedelta(days=30))
    y = np.linspace(1, len(df), len(df) + 1)
    # Plot
    # ax = fig.add_subplot(numberofplots, 1, i)
    ax = axes[i]
    data = np.array(df)
    cmap = plt.get_cmap('RdBu')
    qmesh = ax.pcolormesh(x, y, data, cmap=cmap, rasterized=True, vmin=-1, vmax=1)
    # Axis
    ax.axis('tight')
    ax.xaxis_date()  # Set up as dates
    ax.tick_params("x", labelrotation=90)
    ax.set_yticklabels([])
    ax.set_title(j + " (" + str(int(max(y))) + " meters)", fontdict={'fontsize': 10})
# Color bar
cbaxes = fig.add_axes([0.025, 0.02, 0.96, 0.02])
cbar = fig.colorbar(qmesh, ax=ax, orientation='horizontal', cax=cbaxes)
cbar.set_label('Spearman Rank Coefficient')
plt.tight_layout()
plt.subplots_adjust(bottom=0.12)
# -
fig.savefig("..\\figures\\weatherSensitivity_all.jpg", dpi=300, bbox_inches='tight')
# # Export cleaned datasets
# +
metername = ["electricity", "water", "chilledwater", "hotwater", "gas", "steam", "solar", "irrigation"]
for meter in metername:
    # load data
    df = pd.read_csv(path + meter + ".csv")
    # Transform timestamp to datetime object type
    df["timestamp"] = pd.to_datetime(df["timestamp"], format='%Y-%m-%d %H:%M:%S')
    # Set index
    df = df.set_index("timestamp")
    # Remove bad data
    df_clean = removeBadData(df, meter)
    df_clean.to_csv(path3 + meter + "_cleaned.csv")
# -
# # Bibliography
# - Miller, Clayton & <NAME>. (2015). Forensically discovering simulation feedback knowledge from a campus energy information system. 10.13140/RG.2.1.2286.0964.
| notebooks/06_Weather-sensitivity.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8 - AzureML
# language: python
# name: python38-azureml
# ---
# # Safe rollout for online endpoints
#
# You have an existing model deployed in production, and you want to deploy a new version of it. How do you roll out your new machine learning model without causing any disruption? A good answer is blue-green deployment, an approach in which a new version of a web service is introduced to production by rolling out the change to a small subset of users/requests before rolling it out completely.
# ### Requirements - In order to benefit from this tutorial, you will need:
# - This sample notebook assumes you're using online endpoints; for more information, see [What are Azure Machine Learning endpoints (preview)?](https://docs.microsoft.com/azure/machine-learning/concept-endpoints).
# - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
# - An Azure ML workspace. [Check this notebook for creating a workspace](/sdk/resources/workspace/workspace.ipynb)
# - Installed Azure Machine Learning Python SDK v2 - [install instructions](/sdk/README.md#getting-started)
# ### In this sample, you'll learn to:
#
# 1. Deploy a new online endpoint called "blue" that serves version 1 of the model
# 1. Scale this deployment so that it can handle more requests
# 1. Deploy version 2 of the model to an endpoint called "green" that accepts no live traffic
# 1. Test the green deployment in isolation
# 1. Send 10% of live traffic to the green deployment
# 1. Fully cut-over all live traffic to the green deployment
# 1. Delete the now-unused v1 blue deployment
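# The traffic split used in steps 5 and 6 is just a mapping from deployment names to percentages that must total 100. The helper below is illustrative only (it is not part of the Azure ML SDK) and sketches one way to validate a split before assigning it to an endpoint's traffic property:

```python
def validate_traffic(traffic: dict) -> dict:
    # traffic maps deployment name -> integer percentage; the total must be 100
    if any(not (0 <= v <= 100) for v in traffic.values()):
        raise ValueError("each allocation must be between 0 and 100")
    if sum(traffic.values()) != 100:
        raise ValueError("traffic allocations must sum to 100")
    return traffic

# 10% canary to green, then full cut-over
assert validate_traffic({"blue": 90, "green": 10}) == {"blue": 90, "green": 10}
assert validate_traffic({"blue": 0, "green": 100}) == {"blue": 0, "green": 100}
```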
# # 1. Connect to Azure Machine Learning Workspace
# The [workspace](https://docs.microsoft.com/azure/machine-learning/concept-workspace) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section we will connect to the workspace in which the job will be run.
# ## 1.1 Import the required libraries
# import required libraries
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
KubernetesOnlineEndpoint,
KubernetesOnlineDeployment,
Model,
Environment,
CodeConfiguration,
)
from azure.identity import DefaultAzureCredential
# ## 1.2 Configure workspace details and get a handle to the workspace
#
# To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We will use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. We use the default [interactive authentication](https://docs.microsoft.com/python/api/azure-identity/azure.identity.interactivebrowsercredential?view=azure-python) for this tutorial. More advanced connection methods can be found [here](https://docs.microsoft.com/python/api/azure-identity/azure.identity?view=azure-python).
# enter details of your AML workspace
subscription_id = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP>"
workspace = "<AML_WORKSPACE_NAME>"
# get a handle to the workspace
ml_client = MLClient(
DefaultAzureCredential(), subscription_id, resource_group, workspace
)
# # 2. Configure Kubernetes cluster for machine learning
# Next, configure Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters for inferencing machine learning workloads.
# There are some prerequisites for the steps below; you can check them [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-attach-arc-kubernetes).
#
# ## 2.1 Connect an existing Kubernetes cluster to Azure Arc
# This step is optional for [AKS cluster](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough).
# Follow this [guidance](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster) to connect Kubernetes clusters.
#
# ## 2.2 Deploy Azure Machine Learning extension
# Depending on your network setup, Kubernetes distribution variant, and where your Kubernetes cluster is hosted (on-premises or the cloud), choose one of options to deploy the Azure Machine Learning extension and enable inferencing workloads on your Kubernetes cluster.
# Follow this [guidance](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-attach-arc-kubernetes?tabs=studio#inferencing).
#
# ## 2.3 Attach Arc Cluster
# You can use Studio, Python SDK and CLI to attach Arc cluster to Machine Learning workspace.
# The code below shows how to attach an AKS cluster, whose compute type is `managedClusters`. For an Arc-connected cluster, it should be `connectedClusters`.
# Follow this [guidance](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-attach-arc-kubernetes?tabs=studio#attach-arc-cluster) for more details.
# +
from azure.ai.ml import load_compute
# for an arc connected cluster, the resource_id should be something like '/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/connectedClusters/<CLUSTER_NAME>'
compute_params = [
{"name": "<COMPUTE_NAME>"},
{"type": "kubernetes"},
{
"resource_id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/managedClusters/<CLUSTER_NAME>"
},
]
k8s_compute = load_compute(path=None, params_override=compute_params)
ml_client.begin_create_or_update(k8s_compute)
# -
# # 3. Create Online Endpoint
#
# Online endpoints are endpoints that are used for online (real-time) inferencing. Online endpoints contain deployments that are ready to receive data from clients and can send responses back in real time.
#
# To create an online endpoint we will use `KubernetesOnlineEndpoint`. This class allows user to configure the following key aspects:
#
# - `name` - Name of the endpoint. Needs to be unique at the Azure region level
# - `auth_mode` - The authentication method for the endpoint. Key-based authentication and Azure ML token-based authentication are supported. Key-based authentication doesn't expire but Azure ML token-based authentication does. Possible values are `key` or `aml_token`.
# - `identity`- The managed identity configuration for accessing Azure resources for endpoint provisioning and inference.
# - `type`- The type of managed identity. Azure Machine Learning supports `system_assigned` or `user_assigned` identity.
# - `user_assigned_identities` - List (array) of fully qualified resource IDs of the user-assigned identities. This property is required if `identity.type` is `user_assigned`.
# - `description`- Description of the endpoint.
# ## 3.1 Configure the endpoint
# +
# Creating a unique endpoint name with current datetime to avoid conflicts
import datetime
online_endpoint_name = "k8s-endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
# create an online endpoint
endpoint = KubernetesOnlineEndpoint(
name=online_endpoint_name,
compute="<COMPUTE_NAME>",
description="this is a sample online endpoint",
auth_mode="key",
tags={"foo": "bar"},
)
# -
# ## 3.2 Create the endpoint
# Using the `MLClient` created earlier, we will now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
ml_client.begin_create_or_update(endpoint)
# ## 4. Create a blue deployment
#
# A deployment is a set of resources required for hosting the model that does the actual inferencing. We will create a deployment for our endpoint using the `KubernetesOnlineDeployment` class. This class allows user to configure the following key aspects.
#
# - `name` - Name of the deployment.
# - `endpoint_name` - Name of the endpoint to create the deployment under.
# - `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
# - `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
# - `code_configuration` - the configuration for the source code and scoring script
# - `path`- Path to the source code directory for scoring the model
# - `scoring_script` - Relative path to the scoring file in the source code directory
# - `instance_type` - The VM size to use for the deployment.
# - `instance_count` - The number of instances to use for the deployment
# ## 4.1 Configure blue deployment
# +
# create blue deployment
model = Model(path="../model-1/model/sklearn_regression_model.pkl")
env = Environment(
conda_file="../model-1/environment/conda.yml",
image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
)
blue_deployment = KubernetesOnlineDeployment(
name="blue",
endpoint_name=online_endpoint_name,
model=model,
environment=env,
code_configuration=CodeConfiguration(
code="../model-1/onlinescoring", scoring_script="score.py"
),
instance_count=1,
)
# -
# ## 4.2 Create the deployment
#
# Using the `MLClient` created earlier, we will now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
ml_client.begin_create_or_update(blue_deployment)
# blue deployment takes 100% of the traffic
endpoint.traffic = {"blue": 100}
ml_client.begin_create_or_update(endpoint)
# # 5. Test the endpoint with sample data
#
# Using the `MLClient` created earlier, we will get a handle to the endpoint. The endpoint can be invoked using the invoke command with the following parameters:
#
# - `endpoint_name` - Name of the endpoint
# - `request_file` - File with request data
# - `deployment_name` - Name of the specific deployment to test in an endpoint
#
# We will send a sample request using a [json](./model-1/sample-request.json) file.
# +
# test the blue deployment with some sample data
# comment this out as cluster under dev subscription can't be accessed from public internet.
# ml_client.online_endpoints.invoke(
# endpoint_name=online_endpoint_name,
# deployment_name='blue',
# request_file='../model-1/sample-request.json')
# -
# # 6. Scale the deployment
#
# Using the `MLClient` created earlier, we will get a handle to the deployment. The deployment can be scaled by increasing or decreasing the `instance_count`.
# +
# scale the deployment
blue_deployment = ml_client.online_deployments.get(
name="blue", endpoint_name=online_endpoint_name
)
blue_deployment.instance_count = 2
#!!!bug https://msdata.visualstudio.com/Vienna/_workitems/edit/1740434
ml_client.online_deployments.begin_create_or_update(blue_deployment)
# -
# # 7. Get endpoint details
# +
# Get the details for online endpoint
endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
# existing traffic details
print(endpoint.traffic)
# Get the scoring URI
print(endpoint.scoring_uri)
# -
# # 8. Deploy a new model, but send no traffic yet
# Create a new deployment named green
# +
# create green deployment
model2 = Model(path="../model-2/model/sklearn_regression_model.pkl")
env2 = Environment(
conda_file="../model-2/environment/conda.yml",
image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
)
green_deployment = KubernetesOnlineDeployment(
name="green",
endpoint_name=online_endpoint_name,
model=model2,
environment=env2,
code_configuration=CodeConfiguration(
code="../model-2/onlinescoring", scoring_script="score.py"
),
instance_count=1,
)
# -
# use MLClient to create green deployment
ml_client.begin_create_or_update(green_deployment)
# # 9. Test green deployment
# Though green has 0% of traffic allocated, you can still invoke the endpoint and deployment with a [json](./model-2/sample-request.json) file.
# +
# comment this out as cluster under dev subscription can't be accessed from public internet.
# ml_client.online_endpoints.invoke(
# endpoint_name=online_endpoint_name,
# deployment_name='green',
# request_file='../model-2/sample-request.json')
# -
# ## 9.1 Test the new deployment with a small percentage of live traffic
# Once you've tested your `green` deployment, allocate a small percentage of traffic to it:
endpoint.traffic = {"blue": 90, "green": 10}
ml_client.begin_create_or_update(endpoint)
# Now, your green deployment will receive 10% of requests.
# ## 9.2 Send all traffic to your new deployment
# Once you're satisfied with your `green` deployment, switch all traffic to it.
endpoint.traffic = {"blue": 0, "green": 100}
ml_client.begin_create_or_update(endpoint)
# # 10. Remove the old deployment
#
ml_client.online_deployments.delete(name="blue", endpoint_name=online_endpoint_name)
# # 11. Delete endpoint
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
| sdk/endpoints/online/kubernetes/kubernetes-online-endpoints-safe-rollout.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modeling Health Care Data
#
# * This notebook uses SMOTE and cross-validation.
# +
import sys
import os
from datetime import datetime, date
from scipy import stats
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import xgboost as xgb
from sklearn import datasets, linear_model, metrics
from sklearn.ensemble import RandomForestClassifier, VotingClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import SMOTE
# %matplotlib inline
plt.style.use("fivethirtyeight")
sns.set_context("notebook")
# -
from sklearn import preprocessing
# # Import DF
url = 'https://raw.githubusercontent.com/davidrkearney/colab-notebooks/main/datasets/strokes_training.csv'
df = pd.read_csv(url, on_bad_lines='skip')  # error_bad_lines=False was removed in pandas 2.0
df.info()
# ## Data Prep
df = df.drop(columns = ['id'])
# Label Encoding
for f in df.columns:
    if df[f].dtype == 'object':
        lbl = preprocessing.LabelEncoder()
        lbl.fit(list(df[f].values))
        df[f] = lbl.transform(list(df[f].values))
pct_list = []
for col in df.columns:
    pct_missing = np.mean(df[col].isnull())
    if round(pct_missing * 100) > 0:
        pct_list.append([col, round(pct_missing * 100)])
        print('{} - {}%'.format(col, round(pct_missing * 100)))
df = df.fillna(df.mean())
df=df.dropna()
df.info()
# # Random Forest Classifier
# # Feature and Target Selection
# Select feature and target variables:
X = df.drop(['stroke'], axis=1)
y = df[['stroke']]
#One-hot encode the data using pandas get_dummies
X = pd.get_dummies(X)
# +
#rus = RandomUnderSampler(random_state=0, replacement=True)
#X_resampled, y_resampled = rus.fit_resample(X, y)
#print(np.vstack(np.unique([tuple(row) for row in X_resampled], axis=0)).shape)
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
columns = X_train.columns
sm_oversampler = SMOTE(random_state=1)  # renamed: `sm` would shadow the statsmodels alias
X_train_SMOTE, y_train_SMOTE = sm_oversampler.fit_resample(X_train, y_train)  # fit_sample was removed in imblearn 0.8
# +
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
model = RandomForestClassifier(n_estimators=100, bootstrap=True,
max_features='sqrt', n_jobs=3, verbose=1, class_weight="balanced")
model.fit(X_train_SMOTE, y_train_SMOTE)
y_pred = model.predict(X_test)
# +
from sklearn.metrics import roc_auc_score
# Calculate roc auc
roc_value = roc_auc_score(y_test, y_pred)
roc_value
# -
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
print("Precision:",metrics.precision_score(y_test, y_pred))
print("Recall:",metrics.recall_score(y_test, y_pred))
y_pred_proba = model.predict_proba(X_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
# +
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
from inspect import signature
precision, recall, _ = precision_recall_curve(y_test, y_pred)
plt.plot(precision,recall)
plt.xlabel('Recall')
plt.ylabel('Precision')
# +
# Import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt
# Construct a histogram of the predicted probabilities
plt.hist(y_pred_proba.ravel())
# Add a title to the plot
plt.title('Predicted Probability of Stroke')
# Show the plot
plt.show()
# -
len(y_pred_proba)
y_pred
# # Get feature importances for interpretability
# +
# Get numerical feature importances
importances = list(model.feature_importances_)
# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(X, importances)]
# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)
# Print out the features and importances
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances];
# -
plt.figure(1)
plt.title('Feature Importance')
x_values = list(range(len(importances)))
plt.barh(x_values, importances, align='center')
plt.yticks(x_values, X)
plt.xlabel('Relative Importance')
plt.tight_layout()
import pandas as pd
feature_importances = pd.DataFrame(model.feature_importances_,
index = X_train.columns,
columns=['importance']).sort_values('importance', ascending=False)
importances
# # Confusion Matrix
#
from sklearn.metrics import confusion_matrix
cnf_matrix = metrics.confusion_matrix(y_test, y_pred)
cnf_matrix
sns.set(font_scale=5.0)
conf_mat = confusion_matrix(y_test, y_pred)
cm_normalized = conf_mat.astype('float') / conf_mat.sum(axis=1)[:, np.newaxis]
fig, ax = plt.subplots(figsize=(30,30), dpi = 100)
sns.heatmap(cm_normalized, annot=True, cmap="Blues")
sns.set(font_scale=1)
plt.ylabel('Actual')
plt.xlabel('Predicted')
#fig.savefig('cm_augmented.png', dpi=fig.dpi, transparent=True)
plt.show()
cm_normalized
fig, ax = plt.subplots()
# create heatmap
sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap="Blues" ,fmt='g')
ax.xaxis.set_label_position("top")
plt.tight_layout()
plt.title('Confusion matrix', y=1.5)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
plt.tick_params(axis='both', which='major', labelsize=10, labelbottom = False, bottom=False, top = True, labeltop=True)
# +
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
model = RandomForestClassifier(n_estimators=100, bootstrap=True,
max_features='sqrt', n_jobs=3, verbose=1, class_weight="balanced")
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# +
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
model = RandomForestClassifier(n_estimators=100, bootstrap=True,
max_features='sqrt', n_jobs=3, verbose=1, class_weight="balanced")
model.fit(X_train_SMOTE, y_train_SMOTE)
y_pred = model.predict(X_test)
# -
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
print("Precision:",metrics.precision_score(y_test, y_pred))
print("Recall:",metrics.recall_score(y_test, y_pred))
y_pred_proba = model.predict_proba(X_test)[:, 1]
test_proba = pd.DataFrame({'predicted_probability': y_pred_proba})
test_proba.info()
## check whether y_test indexes are the same as X_test indexes
same_index = y_test.index == X_test.index
same_index.all()
## put actuals and predicted probabilities into the same pandas frame
table = pd.concat([y_test.reset_index(drop=True), test_proba.reset_index(drop=True)], axis=1)
table
table.stroke.value_counts()
table.info()
table.to_csv('../processed_csvs/healthcare_table.csv')
# # Cross-Validation Precision
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
# +
#cross validation predictions for test set
y_test_pred = cross_val_predict(forest_clf, X_test, y_test, cv=5)
print("Accuracy:",metrics.accuracy_score(y_test, y_test_pred))
print("Precision:",metrics.precision_score(y_test, y_test_pred))
print("Recall:",metrics.recall_score(y_test, y_test_pred))
# -
#cross validation predictions for full dataset
y_pred = cross_val_predict(forest_clf, X, y, cv=5)
print("Accuracy:",metrics.accuracy_score(y, y_pred))
print("Precision:",metrics.precision_score(y, y_pred))
print("Recall:",metrics.recall_score(y, y_pred))
pred_df = pd.DataFrame({'predicted_label': y_pred})  # cross_val_predict returns class labels, not probabilities
pred_df.info()
## check whether y indexes are the same as X indexes
same_index = y.index == X.index
same_index.all()
## put actuals and cross-validated predictions into the same pandas frame
table = pd.concat([y.reset_index(drop=True), pred_df.reset_index(drop=True)], axis=1)
table
table.stroke.value_counts()
table.to_csv('../processed_csvs/final_model_table.csv')
# # 5-Fold Cross Validation
# +
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.model_selection import cross_val_score
models = [
LogisticRegression(solver="liblinear", random_state=42),
RandomForestClassifier(n_estimators=10, random_state=42),
KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2),
GaussianNB(),
]
CV = 5
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
    model_name = model.__class__.__name__
    precisions = cross_val_score(model, X, y, scoring='precision', cv=CV)
    for fold_idx, precision in enumerate(precisions):
        entries.append((model_name, fold_idx, precision))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'precision'])
sns.boxplot(x='model_name', y='precision', data=cv_df)
sns.stripplot(x='model_name', y='precision', data=cv_df,
size=8, jitter=True, edgecolor="gray", linewidth=2)
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
| _notebooks/2020-12-12_Healthcare_Modeling-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 10
# + ## Author: <NAME>
# + ## Group: ІС-72
# ****
# ****
# ****
# ## Task 1
# Using the stochastic Petri net simulation algorithm of the PetriSim class, implement the model developed for Task 1 of Practicum 6 and verify it. Draw conclusions about how the model operates.
# **Before the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/1/1_net_before.jpg?raw=true">
# **1. Simulation time = 1000**
# **After the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/1/1_net_after_1000.jpg?raw=true">
# **Statistics:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/1/1_stat_1000.jpg?raw=true">
# **2. Simulation time = 500**
# **After the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/1/1_net_after_500.jpg?raw=true">
# **Statistics:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/1/1_stat_500.jpg?raw=true">
# ## Task 2
# Using the stochastic Petri net simulation algorithm of the PetriSim class, implement the model developed for Task 4 of Practicum 6 and verify it. Draw conclusions about how the model operates.
# **Before the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/2/2_net_before.jpg?raw=true">
# **1. Simulation time = 1000**
# **After the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/2/2_net_after_1000.jpg?raw=true">
# **Statistics:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/2/2_stat_1000.jpg?raw=true">
# **2. Simulation time = 500**
# **After the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/2/2_net_after_500.jpg?raw=true">
# **Statistics:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/2/2_stat_500.jpg?raw=true">
# ## Task 3
# Build a model of a system that reproduces the processing of a request stream by a main server and an auxiliary server. The probability of a request going to the auxiliary server is 0.3. Choose the request-processing time characteristics yourself.
# **Before the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/3/3_net_before.jpg?raw=true">
# **1. Simulation time = 1000**
# **After the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/3/3_net_after_1000.jpg?raw=true">
# **Statistics:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/3/3_stat_1000.jpg?raw=true">
# **2. Simulation time = 500**
# **After the run:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/3/3_net_after_500.jpg?raw=true">
# **Statistics:**
#
# <img src="https://github.com/kryvokhyzha/Courses/blob/master/System_Modeling/Lab10/3/3_stat_500.jpg?raw=true">
# ## Task 4
# Write the mathematical equations describing the stochastic Petri net built for Task 3 of this practicum.
# $T = \{T_1, T_2, T_3\}$
#
# $P = \{P_1, P_2, P_3, P_4, P_5, P_6\}$
#
# $A = \{(P_1, T_1), (T_1, P_1), (T_1, P_2), (P_2, T_2), (P_2, T_3), (T_2, P_5), (P_5, T_2), (T_3, P_6), (P_6, T3), (T_2, P_3), (T_3, P_4)\}$
#
# $K = \{(0, 1), (0, 0.7), (0, 0.3)\}$
#
# $I = \varnothing$
#
# $w = \{1,1,1,1,1,1,1,1,1,1,1\}$
#
# $s(0,0) = \begin{Bmatrix}
# \begin{pmatrix}
# 1 \\
# 0 \\
# 0 \\
# 0 \\
# 1 \\
# 1
# \end{pmatrix} &
# \begin{pmatrix}
# \infty \\
# \infty \\
# \infty
# \end{pmatrix}
# \end{Bmatrix}$
# | Delay $T_1$ | Delay $T_2$ | Delay $T_3$ |
# | --- | --- | --- |
# | 0.012 | 2.5 | 3.6 |
# | 0.054 | 0.335 | 1.2 |
# | 0.086 | 0.652 | 0.354 |
# ### Iteration 1
# $P_1 \geq 1 \implies Z(T_1, 0.0) = 1$
#
# $P_2 \leq 1 \implies Z(T_2, 0.0) = 0$
#
# $P_3 \leq 1 \implies Z(T_3, 0.0) = 0$
#
# $\psi = \{T_1\} \implies X(T_1) = 1$
# ****
# $D^-:$
#
# $M_{p1}(0.0) = 0$
#
# $M_{p2}(0.0) = 0$
#
# $M_{p3}(0.0) = 0$
#
# $M_{p4}(0.0) = 0$
#
# $M_{p5}(0.0) = 1$
#
# $M_{p6}(0.0) = 1$
#
# $E_{T1}(0.0) = \{0.0 + 0.012\}$
#
# $E_{T2}(0.0) = \{\infty\}$
#
# $E_{T3}(0.0) = \{\infty\}$
# ****
# $t_1 = \min\{0.012, \infty, \infty\} = 0.012$
# ****
# $s(0.0) = \begin{Bmatrix}
# \begin{pmatrix}
# 0 \\
# 0 \\
# 0 \\
# 0 \\
# 1 \\
# 1
# \end{pmatrix} &
# \begin{pmatrix}
# 0.012 \\
# \infty \\
# \infty
# \end{pmatrix}
# \end{Bmatrix}$
# ****
# $Z_{T_1}(0.0) = 1$
#
# $Z_{T_2}(0.0) = 0$
#
# $Z_{T_3}(0.0) = 0$
# ****
# $D^+:$
#
# $Y(T_1, 0.012) = 1$
#
# $Y(T_2, 0.012) = 0$
#
# $Y(T_3, 0.012) = 0$
#
# $M_{p1}(0.012) = 1$
#
# $M_{p2}(0.012) = 1$
#
# $M_{p3}(0.012) = 0$
#
# $M_{p4}(0.012) = 0$
#
# $M_{p5}(0.012) = 1$
#
# $M_{p6}(0.012) = 1$
#
# $E_{T1}(0.012) = \{\infty\}$
#
# $E_{T2}(0.012) = \{\infty\}$
#
# $E_{T3}(0.012) = \{\infty\}$
# ****
# $s(0.012) = \begin{Bmatrix}
# \begin{pmatrix}
# 1 \\
# 1 \\
# 0 \\
# 0 \\
# 1 \\
# 1
# \end{pmatrix} &
# \begin{pmatrix}
# \infty \\
# \infty \\
# \infty
# \end{pmatrix}
# \end{Bmatrix}$
# ****
# ****
# ### Iteration 2
# $P_1 \geq 1 \implies Z(T_1, 0.012) = 1$
#
# $P_2 \geq 1 \implies Z(T_2, 0.012) = 1$
#
# $P_3 \leq 1 \implies Z(T_3, 0.012) = 0$
#
# $\psi = \{T_1, T_2, T_3\}$
#
# $P_2$ is a conflict position (we can proceed to either $T_2$ or $T_3$)
#
# Resolving the conflict, we choose $T_2$ $\implies X(T_1) = 1$ and $X(T_2) = 1$
# ****
# $D^-:$
#
# $M_{p1}(0.012) = 0$
#
# $M_{p2}(0.012) = 0$
#
# $M_{p3}(0.012) = 0$
#
# $M_{p4}(0.012) = 0$
#
# $M_{p5}(0.012) = 0$
#
# $M_{p6}(0.012) = 1$
#
# $E_{T1}(0.012) = \{0.012 + 0.054\}$
#
# $E_{T2}(0.012) = \{0.012 + 0.335\}$
#
# $E_{T3}(0.012) = \{\infty\}$
# ****
# $s(0.012) = \begin{Bmatrix}
# \begin{pmatrix}
# 0 \\
# 0 \\
# 0 \\
# 0 \\
# 0 \\
# 1
# \end{pmatrix} &
# \begin{pmatrix}
# 0.066 \\
# 0.347 \\
# \infty
# \end{pmatrix}
# \end{Bmatrix}$
# ****
# $Z_{T_1}(0.012) = 1$
#
# $Z_{T_2}(0.012) = 1$
#
# $Z_{T_3}(0.012) = 0$
# ****
# $t_2 = \min\{0.066, 0.347, \infty\} = 0.066$
# ****
# $D^+:$
#
# $Y(T_1, 0.066) = 1$
#
# $Y(T_2, 0.066) = 0$
#
# $Y(T_3, 0.066) = 0$
#
# $M_{p1}(0.066) = 1$
#
# $M_{p2}(0.066) = 1$
#
# $M_{p3}(0.066) = 0$
#
# $M_{p4}(0.066) = 0$
#
# $M_{p5}(0.066) = 1$
#
# $M_{p6}(0.066) = 1$
#
# $E_{T1}(0.066) = \{\infty\}$
#
# $E_{T2}(0.066) = \{\infty\}$
#
# $E_{T3}(0.066) = \{\infty\}$
# ****
# $s(0.066) = \begin{Bmatrix}
# \begin{pmatrix}
# 1 \\
# 1 \\
# 0 \\
# 0 \\
# 1 \\
# 1
# \end{pmatrix} &
# \begin{pmatrix}
# \infty \\
# \infty \\
# \infty
# \end{pmatrix}
# \end{Bmatrix}$
# ****
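# The two iterations above can be reproduced with a minimal event-driven sketch. This is a hypothetical illustration, not the PetriSim class itself: the delay streams are hard-coded to the sampled values from the table (a stochastic run would draw them at random), the streams are truncated to the iterations shown, and conflicts are resolved by a fixed priority order that matches the choice of $T_2$ over $T_3$.

```python
import heapq

# Arcs follow the set A from Task 4; the initial marking s(0,0) is P1 = P5 = P6 = 1.
TRANSITIONS = {
    "T1": {"in": {"P1": 1}, "out": {"P1": 1, "P2": 1}},
    "T2": {"in": {"P2": 1, "P5": 1}, "out": {"P3": 1, "P5": 1}},
    "T3": {"in": {"P2": 1, "P6": 1}, "out": {"P4": 1, "P6": 1}},
}

def simulate(marking, delay_streams, priority, t_end):
    """Run a timed Petri net: D^- consumes inputs on start, D^+ produces on completion."""
    t, events, trace = 0.0, [], []

    def try_start():
        for name in priority:  # fixed order resolves conflicts (T2 before T3)
            trans = TRANSITIONS[name]
            while all(marking[p] >= k for p, k in trans["in"].items()):
                try:
                    delay = next(delay_streams[name])
                except StopIteration:
                    break  # no more sampled delays for this transition
                for p, k in trans["in"].items():
                    marking[p] -= k  # D^- step
                heapq.heappush(events, (t + delay, name))

    try_start()
    while events and events[0][0] <= t_end:
        t, name = heapq.heappop(events)
        for p, k in TRANSITIONS[name]["out"].items():
            marking[p] += k  # D^+ step
        trace.append((round(t, 3), name))
        try_start()
    return trace

marking = {"P1": 1, "P2": 0, "P3": 0, "P4": 0, "P5": 1, "P6": 1}
streams = {"T1": iter([0.012, 0.054]),  # delays drawn in iterations 1 and 2
           "T2": iter([0.335]),
           "T3": iter([])}              # T3 never fires in this trace
print(simulate(marking, streams, ["T1", "T2", "T3"], t_end=0.4))
# -> [(0.012, 'T1'), (0.066, 'T1'), (0.347, 'T2')]
```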
| system-modeling/Lab10/Lab10_report.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial 2:
# ## Multiple Linear Regression
# In this section, we will learn some of the key concepts in multiple linear regression and review the assumptions behind linear regression.
#
# Multiple linear regression predicts a continuous target (dependent) variable from several independent variables. When using multiple variables, we must understand the relationships among them and their correlation with the target variable.
# 
#
# As discussed in the Linear Regression tutorial, correlation does not imply causation. Understanding the following concepts will help us dive deeper into the prerequisites for modeling data with multiple linear regression.
# #### Assumptions in Linear Regression
# Linear regression rests on several assumptions that must be understood before we use the model.
#
# - The relationship between the independent and dependent variables must be linear.
# - All the variables are normally distributed.
# - No multicollinearity between the Independent variables.
# - The residuals must not be autocorrelated and heteroscedastic.
#
# ***The above assumptions must be checked once we decide to use the linear regression model***
# #### Multicollinearity
# Multicollinearity is one of the most important concepts in multiple linear regression. It occurs when independent variables are highly correlated with each other.
#
# ***There are various techniques to avoid multicollinearity. Some of them are as follows.***
#
# - Drop one of the independent variables among the correlated variables.
# - Variation Inflation Factor (VIF) can be used to check the multicollinearity.
# - Centering the data by removing the mean of each variable also helps.
#
# #### Homoscedasticity / Heteroscedasticity
# Understanding the model's errors is important for building a robust machine learning model. Linear regression assumes the errors (residuals) are homoscedastic, i.e., their spread is roughly constant across fitted values, and normally distributed.
# The figure below depicts heteroscedasticity: the variation of the residuals changes along the fitted line.
# 
#
# ### We will perform the following steps to build a Multiple Linear Regression model using the popular Beer dataset.
#
#
#
# - **Data Preprocessing**
#
# - Importing the libraries.
# - Importing dataset.
# - Dealing with the categorical variable.
# - Classifying dependent and independent variables.
# - Splitting the data into a training set and test set.
# - Feature scaling.
#
#
# - **Linear Regression**
#
# - Create a Linear Regressor.
# - Feed the training data to the regressor model.
# - Predicting the scores for the test set.
# - Using the RMSE to measure the performance.
#1 Importing necessary libraries
import numpy as np
import pandas as pd
# +
#2 Importing the data set
dataset = pd.read_csv('Datasets/beer_data.csv')
#Printing first 10 rows of the dataset
dataset.head()
# -
# Dealing with Categorical variables
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
#Making sure the type of the review_profilename column is str
dataset["review_profilename"] = dataset["review_profilename"].astype(str)
dataset["review_profilename"] = le.fit_transform(dataset["review_profilename"])
dataset.head()
#Printing the summary of the dataset
dataset.describe()
#A simple correlation plot using seaborn. The plot below shows how the different variables correlate with each other
import seaborn as sns
import matplotlib.pyplot as plt
corr = dataset.corr()
fig, ax = plt.subplots(figsize=(10,10))
ax = sns.heatmap(
corr,
vmin=-1, vmax=1, center=0,
square=True,
annot=True,
linewidths=.5,
cmap="YlGnBu" )
#Rotating labels on x axis
ax.set_xticklabels(
ax.get_xticklabels(),
rotation=35,
horizontalalignment='right'
)
# +
#4 Classifying dependent and independent variables
#All columns except the last column are independent features- (Selecting every column except Score)
X = dataset.iloc[:,:-1].values
#Only the last column is the dependent feature or the target variable(Score)
y = dataset.iloc[:,-1].values
# +
#5 Creating training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.2,random_state = 0)
#################Data Preprocessing Ends #################################
# -
print("\n\nTraining Set :\n----------------\n")
print("X = \n", X_train)
print("y = \n", y_train)
print("\n\nTest Set :\n----------------\n")
print("X = \n",X_test)
print("y = \n", y_test)
# +
""" Multiple Linear regression """
#6 Creating the Regressor and training it with the training set
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()  # the `normalize` argument was removed in scikit-learn 1.2; scale features beforehand if needed
# -
#7 Feeding the data and training the model
regressor.fit(X_train,y_train)
#8 Predicting the Score for test set observations
y_pred = regressor.predict(X_test)
#printing the predictions
print("\n----------------------------\nPredictions = \n",y_pred)
# +
#8 Calculating the R^2 score of the predictions
from sklearn import metrics
print("R^2 score = ", metrics.r2_score(y_test, y_pred))
#9 Comparing actual and predicted scores for the test set
print("\nActual vs Predicted Scores \n------------------------------\n")
error_df = pd.DataFrame({"Actual" : y_test,
"Predicted" : y_pred,
"Abs. Error" : np.abs(y_test - y_pred)})
error_df
# +
#9 Calculating score from Root Mean Squared Log Error (RMSLE)
def rmlse(y_test, y_pred):
    error = np.square(np.log10(y_pred + 1) - np.log10(y_test + 1)).mean() ** 0.5
    score = 1 - error
    return error, score
error, score = rmlse(y_test, y_pred)
print("\n",'-'*40)
print("RMLSE : ", error)
print("Score : ", score)
# -
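# The step list above mentioned RMSE, which the notebook never actually computed. A minimal sketch (toy arrays stand in for `y_test` and `y_pred`):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, 4.5, 2.0, 5.0])
y_hat = np.array([2.5, 4.0, 2.5, 4.5])

# RMSE is the square root of MSE; np.sqrt keeps this portable across scikit-learn versions.
rmse = np.sqrt(mean_squared_error(y_true, y_hat))
print(rmse)  # -> 0.5
```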
| Tutorials/3_Tutorial-Multiple_Linear_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import requests
import calendar
import dateutil.parser as parser
import yaml
import time
import pandas as pd
import warnings
import concurrent.futures
warnings.filterwarnings('ignore')
def convert_date(utc_time):
    parsed_date = parser.parse(utc_time)
    var_date = parsed_date.date()
    var_time = parsed_date.time()
    var_f_time = var_time.hour
    var_julian_date = parsed_date.timetuple().tm_yday
    var_weekday = parsed_date.weekday()
    var_weekday_name = calendar.day_name[parsed_date.weekday()]
    return var_date, var_time, var_f_time, var_julian_date, var_weekday, var_weekday_name
with open('config.yml') as ymlfile:
    cfg = yaml.safe_load(ymlfile)
oanda_api_key = cfg['creds']['oanda_api']
account_number = cfg['creds']['account_number']
# +
currency_pairs = ['EUR_USD','USD_CAD','EUR_GBP','EUR_AUD','EUR_CHF',
'GBP_USD','GBP_CHF','GBP_NZD','GBP_AUD','GBP_CAD',
'AUD_CAD','AUD_CHF','AUD_NZD','NZD_USD','EUR_CAD',
'USD_CHF','CAD_CHF','NZD_CAD','AUD_USD','EUR_NZD',
'NZD_CHF']
timeframe = "H4"
price_char = "M"
price_com = "mid"
candles_count = 5000
params_count = (
('price', price_char),
('count', candles_count),
('granularity', timeframe),
)
# +
provider_api_url = 'https://api-fxpractice.oanda.com/v3/accounts/{}/orders'.format(account_number)
request_headers = {
"Authorization": oanda_api_key,
"Accept-Datetime-Format": "RFC3339",
"Connection": "Keep-Alive",
"Content-Type": "application/json;charset=UTF-8"
}
provider_authorization = 'Bearer {0}'.format(oanda_api_key)
headers = {
'Content-Type': 'application/json',
'Authorization': provider_authorization,
}
# -
def get_candles(pair):
    output = []
    filename = "{}_{}.csv".format(pair, timeframe)
    first_response = requests.get('https://api-fxpractice.oanda.com/v3/instruments/{}/candles'.format(pair),
                                  headers=headers,
                                  params=params_count).json()
    response = first_response['candles']
    for i in range(len(response)):
        result = convert_date(response[i]['time'])
        output.append([result[0], result[1],
                       result[2], result[3],
                       result[4], result[5],
                       response[i]['time'],
                       response[i]['volume'],
                       response[i][price_com]['o'],
                       response[i][price_com]['h'],
                       response[i][price_com]['l'],
                       response[i][price_com]['c']])
    output = pd.DataFrame(output)
    output.columns = ['Date', 'Time',
                      'f_time', 'julian_date',
                      'Weekday', 'Weekday_Name',
                      'UTC_Time', 'Volume',
                      'Open', 'High', 'Low', 'Close']
    output.to_csv(filename, header=True, index=False)
# +
t1 = time.perf_counter()
with concurrent.futures.ProcessPoolExecutor() as executor:
    executor.map(get_candles, currency_pairs)
t2 = time.perf_counter()
print(f'Finished in {t2-t1} seconds')
# -
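# A note on the design choice: `get_candles` is dominated by network I/O, so a `ThreadPoolExecutor` often fits this workload better than processes (no process start-up or pickling cost, and the GIL is released while waiting on I/O). A self-contained timing sketch of the same `executor.map` pattern, with `time.sleep` standing in for the HTTP round trip:

```python
import concurrent.futures
import time

def fake_fetch(pair):
    time.sleep(0.2)  # stand-in for the HTTP request latency
    return pair.lower()

pairs = ["EUR_USD", "USD_CAD", "GBP_USD", "AUD_USD"]

t0 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as executor:
    results = list(executor.map(fake_fetch, pairs))
elapsed = time.perf_counter() - t0

print(results)
print(round(elapsed, 1))  # ~0.2 s: the four 0.2 s waits overlap instead of summing to 0.8 s
```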
| Testing_Pipeline/Get_Candles_Data_MultiProcessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import necessary packages
import os
import glob
import pandas as pd
import numpy as np
from scipy import stats
import scikit_posthocs
# Import plotting packages
import iqplot
import bokeh.io
from bokeh.io import output_file, show
from bokeh.layouts import column, row
bokeh.io.output_notebook()
# -
# # Import data from directory of measurement tables, collected from Fiji
# +
# Define path to directory with measurements
path = os.path.abspath('raw_data_csvs/')
df_summary = pd.DataFrame()
list_summary = []
# For loop to bring in files and concatenate them into a single dataframe
for file_ in glob.glob(path + "/*_Puncta.csv"):
    df = pd.read_csv(file_)
    # Determine Image name from file name, then parse experiment details from Image name
    df['Image'] = os.path.splitext(os.path.basename(file_))[0]
    (df['Date'], df['Embryo'], df['Treatment'], df['Stains'],
     df['FOV'], df['del1']) = zip(*df['Image'].map(lambda x: x.split('_')))
    # Compile data
    list_summary.append(df)
df_summary = pd.concat(list_summary, sort=False)
df_summary = df_summary.drop(['Total Area', 'Average Size', '%Area', 'Mean', 'IntDen', 'del1'], axis=1)
# Preview dataframe to confirm import successful
df_summary.head()
# Assign import to full_results df
full_results = df_summary.copy()
full_results.to_csv('Combined_Source_Data.csv')
full_results.head()
# -
# ## Analyze results of nSMase2 MO on LRP6 internalization count
#
# Generate ECDF plot to display distribution frequencies
#
# Run 2-sample Kolmogorov-Smirnov Test to determine statistical significance
# +
treatment_list = [
'ControlMO',
'SMPD3MO',
]
df_subset = full_results
df_subset = df_subset.loc[df_subset['Treatment'].isin(treatment_list)]
# Make ECDF plot using iqplot
data_ecdf = iqplot.ecdf(
data=df_subset, q='Count', cats='Treatment', q_axis='x'
,style='staircase'
,order=treatment_list
# ,palette=['#1f77b4', '#ff7f0e','#2ca02c']
# ,palette=['#9467bd', '#d62728']
,line_kwargs=dict(line_width=3)
# ,conf_int=True, n_bs_reps=1000, ptiles=[16, 84] # ptiles values equate to SEM
,conf_int=True, n_bs_reps=1000, ptiles=[2.5, 97.5] # ptiles values equate to 95% CIs
# Other customization parameters
# ,x_range=(-1.5,35)
,frame_height = 350, frame_width = 450
,x_axis_label='FLAG-LRP6+ Puncta Count', y_axis_label='Cumulative Distribution Frequency'
,show_legend=True
)
# Other customization parameters
data_ecdf.axis.axis_label_text_font_size = '20px'
data_ecdf.axis.axis_label_text_font_style = 'normal'
data_ecdf.axis.major_label_text_font_size = '18px'
# data_ecdf.output_backend = "svg"
show(row(data_ecdf))
### Kolmogorov-Smirnov test - NO MULTIPLE COMPARISONS
# Define samples to compare
category = 'Treatment'
sample1 = 'ControlMO'
sample2 = 'SMPD3MO'
metric = 'Count'
# Run 2-sample Kolmogorov-Smirnov Test
ks_result = stats.ks_2samp(df_subset.loc[df_subset[category]==sample1][metric]
,df_subset.loc[df_subset[category]==sample2][metric])
# Display results of Kolmogorov-Smirnov test
print('Two-sample Kolmogorov-Smirnov test results for ' + sample1 + ' vs ' + sample2 + ': \n\t\t\t\t statistic=' + str(ks_result[0]) +
'\n\t\t\t\t p-value=' + str(ks_result[1]))
# Get number of cells within this test
for treatment in df_subset['Treatment'].unique().tolist():
temp_df = df_subset.loc[df_subset['Treatment'] == treatment]
print('n = ' + str(len(temp_df)) + ' cells in the ' + str(treatment) + ' dataset.')
# -
| Figure5/FLAGLRP6_Internalization/LRP6_PunctaAnalysis_nSMase2MO.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # TP Chapter 2 - Practice: select the number of neighbors in a kNN for regression
#
# I simply re-use everything done previously:
# +
data_path = "~/Documents/openclassroom/Fomation_ingenieur_ML/data/"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
## sklearn module :
from sklearn import model_selection
from sklearn import preprocessing
from sklearn import neighbors
# -
# ## Preprocess :
# +
df = pd.read_csv(data_path+'winequality-red.csv', sep=";")
## DESIGN AND SAMPLING :
X = df.drop("quality", axis = 1, inplace = False)
y = df.quality
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, train_size=0.7)
## STANDARDIZATION :
my_normalizer = preprocessing.StandardScaler().fit(X_train)
X_train_std = pd.DataFrame(my_normalizer.transform(X_train), index = X_train.index, columns = X_train.columns)
X_test_std = pd.DataFrame(my_normalizer.transform(X_test), index = X_test.index, columns = X_test.columns)
# -
# ## Cross Validation with sklearn :
# +
## CROSS VALIDATION :
param_grid = {"n_neighbors" : np.arange(8,14,1)}
nb_folds = 5
CV_reg = model_selection.GridSearchCV(neighbors.KNeighborsRegressor(),
param_grid, cv=nb_folds)
CV_reg.fit(X_train_std, y_train)
## KNN REGRESSOR WITH THE BEST CV n_neighbors
knn_reg = neighbors.KNeighborsRegressor(**CV_reg.best_params_)
knn_reg.fit(X_train_std, y_train)
## PREDICTION :
y_pred = knn_reg.predict(X_test_std)
# -
# ## Cross Validation without sklearn CVGridSearch :
# #### Annexe functions :
# +
## EXTRACT X_train, X_test FROM X AND train_index :
def extract_Xsplitted_data(X, train_index):
# train_index = X_train.index
test_index = [(index not in train_index) for index in X.index]
X_train = X.loc[train_index]
X_test = X.loc[test_index]
return(X_train, X_test)
## STANDARDIZATION :
def normalize_from_split(X, train_index):
X_train, X_test = extract_Xsplitted_data(X, train_index)
my_normalizer = preprocessing.StandardScaler().fit(X_train)
X_train_std = pd.DataFrame(my_normalizer.transform(X_train), index = X_train.index, columns = X_train.columns)
X_test_std = pd.DataFrame(my_normalizer.transform(X_test), index = X_test.index, columns = X_test.columns)
return(X_train_std, X_test_std)
# +
data = X_train, y_train
## SET ARGUMENTS/PARAMETERS :
my_meth = neighbors.KNeighborsRegressor
# NUMBER OF FOLDS :
cv = 5
param_grid = {"n_neighbors" : np.arange(2,15,1)}
def my_CV(data, param_grid, my_meth, n_splits=5) :
    ## all "local" variables are prefixed with 'CV_'
    # MAP THE DICT OF LISTS INTO A LIST OF DICTS :
    param_grid_list = list(model_selection.ParameterGrid(param_grid))
    ## INITIALIZATION :
    CV_X, CV_y = data
    k = 1
    res = {} # dict of dict
    for kwargs in param_grid_list :
        params_set = "_".join(str(val) for val in kwargs.values())
        res[params_set] = {}
    ## SET FOLDS (StratifiedKFold works here because the quality target takes discrete values) :
    kf = model_selection.StratifiedKFold(n_splits=n_splits)
    CV_split_iterator = kf.split(CV_X, y=CV_y)
    ### LOOP ON FOLDS :
    for CV_train_range_index, CV_test_range_index in CV_split_iterator :
        train_index = CV_X.index[CV_train_range_index]
        test_index = CV_X.index[CV_test_range_index]
        ## NORMALIZE (the scaler is fit on the training fold only) :
        CV_X_train, CV_X_test = normalize_from_split(CV_X, train_index)
        ## GET y SPLIT :
        CV_y_train, CV_y_test = CV_y[train_index], CV_y[test_index]
        fold_key = "fold" + str(k)
        k += 1
        ### LOOP ON PARAMETER SETS (HERE ONLY 1 PARAMETER NAME)
        for kwargs in param_grid_list :
            ## FIT KNN ON X_train AND PREDICT ON X_test :
            CV_meth = my_meth(**kwargs)
            CV_meth.fit(CV_X_train, CV_y_train)
            ## SAVE :
            y_pred = CV_meth.predict(CV_X_test)
            y_table = pd.DataFrame({"pred": y_pred, "real": CV_y_test.values})
            params_set = "_".join(str(val) for val in kwargs.values())
            res[params_set][fold_key] = y_table
    return res
res = my_CV(data, param_grid, my_meth, n_splits=cv)
# +
from sklearn import metrics
def compute_MSE(table):
    return metrics.mean_squared_error(y_true=table.real, y_pred=table.pred)
def compute_dict_MSE(tables):
mse_vect = []
for key, value in tables.items():
mse_vect.append(compute_MSE(value))
return(np.array(mse_vect))
def compute_R2(table):
return(metrics.r2_score(y_true = table.real, y_pred=table.pred))
def compute_dict_R2(tables):
r2_vect = []
for key, value in tables.items():
r2_vect.append(compute_R2(value))
return(np.array(r2_vect))
# +
MSE_mean = []
MSE_std = []
for params_set in res.keys():
dict_y_table = res[params_set]
MSE = compute_dict_MSE(dict_y_table)
MSE_mean.append(MSE.mean())
MSE_std.append(MSE.std())
params = []
for kwargs in model_selection.ParameterGrid(param_grid) :
params.append(kwargs)
CV_results_ = {"params": params , "mean_test_score":MSE_mean, "std_test_score":MSE_std}
## iterate jointly over (mean, std, params) :
iterator = zip(CV_results_["mean_test_score"], CV_results_["std_test_score"], CV_results_["params"])
for mean, std, params in iterator:
print("MSE = %0.3f (+/-%0.3f) for %s" %(mean, 2*std, params))
# +
R2_mean = []
R2_std = []
for params_set in res.keys():
dict_y_table = res[params_set]
R2 = compute_dict_R2(dict_y_table)
R2_mean.append(R2.mean())
R2_std.append(R2.std())
params = []
for kwargs in model_selection.ParameterGrid(param_grid) :
params.append(kwargs)
CV_results_ = {"params": params , "mean_test_score":R2_mean, "std_test_score":R2_std}
## iterate jointly over (mean, std, params) :
iterator = zip(CV_results_["mean_test_score"], CV_results_["std_test_score"], CV_results_["params"])
for mean, std, params in iterator:
print("R2 = %0.3f (+/-%0.3f) for %s" %(mean, 2*std, params))
# -
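# As a sanity check (not part of the original TP), the per-fold standardization done by `normalize_from_split` can be reproduced with a scikit-learn `Pipeline` inside `cross_val_score`; a hypothetical stand-in dataset is used here so the sketch stays self-contained:

```python
import numpy as np
from sklearn import datasets, model_selection, neighbors, pipeline, preprocessing

# Stand-in dataset for illustration only (the TP uses winequality-red.csv instead)
X, y = datasets.load_iris(return_X_y=True)

# Scaling inside the pipeline is re-fit on each training fold, exactly as
# normalize_from_split does above, so there is no leakage from the test fold
knn = pipeline.make_pipeline(preprocessing.StandardScaler(),
                             neighbors.KNeighborsRegressor(n_neighbors=5))
kfold = model_selection.KFold(n_splits=5, shuffle=True, random_state=0)
scores = model_selection.cross_val_score(knn, X, y, cv=kfold, scoring="r2")
print(scores.mean(), scores.std())
```

This gives the same fold-wise scores one would compute by hand from the `res` dictionary above, at the cost of less visibility into intermediate predictions.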
| cours_inge_ML/2_Evaluez_les_perf_d_un-modele_de_ML/TPchapitre2_reecrire_CV_reg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Demonstration of required steps to deployment
# - MJP 2020-09-03
# - Sketching out the steps that are likely to be required in order to get either a prototype or a fully working version installed and running on marsden
# #### Basic imports ...
# +
import time
import numpy as np
import scipy.stats as stats
import math
import random
from collections import defaultdict
import os
import sys
from collections import Counter
import glob
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
import importlib
from astropy.time import Time
import pickle
# -------------------------------------------------------------------------------------
# Local imports
# -------------------------------------------------------------------------------------
# parent directory is */cheby_checker
HEAD_DIR = os.path.dirname(os.path.realpath(os.getcwd()))
sys.path.append(os.path.join(HEAD_DIR))
print(f' HEAD_DIR: {HEAD_DIR} ')
# directory with sample data for development
DATA_DIR = os.path.join(HEAD_DIR, 'dev_data')
print(f' DATA_DIR: {DATA_DIR} ')
# import nbody-related code from main cheby_checker directory
from cheby_checker import mpc_nbody, parse_input
importlib.reload(mpc_nbody)
# -
# ## Sketch of required steps to deploy & populate
#
# #### Create the database and the necessary tables
# -
#
# #### Establish source of ORBFIT files
# - Can be the database table(s) or a bunch of flat-files
#
#
# ## Sketch of steady-state operational steps
# ## (1) Assume an orbit has been fit and is being written to the db
# - We probably want to wrap the steps below in a convenient function
# - Perhaps as an extra function in precalc.PreCalc (or similar)
#
# #### Prepare MPC_NBODY run
# - Could conceivably do one-off runs every time an orbit is fit
# - Or could batch them up and do them in blocks (likely to be more efficient to integrate multiple objects simultaneously)
#
# #### Run MPC_NBODY
# > Sim = mpc_nbody.NbodySim(filenames[0], 'eq')
#
# > Sim(tstep=20, trange=600) ### These timesteps & Range would have to be changed
#
# ### Do Cheby Fit and generate MSCs
# > MSCs = orbit_cheby.MSC_Loader(FROM_ARRAY = True ,
# primary_unpacked_provisional_designations = name,
# times_TDB = times,
# statearray = states).MSCs
#
# ### Do pre-calcs and store in db
# > P = precalc.PreCalc()
#
# > P.upsert( MSCs , observatoryXYZ)
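# The quoted steps could be collected into a single convenience function, e.g. as the extra function in precalc.PreCalc suggested above. This is only a sketch: the `cheby_checker` call signatures are taken verbatim from the snippets in this notebook, while the helper name `fit_and_precalc` and the `times`/`states` arguments are hypothetical placeholders.

```python
# Sketch only: wraps the steady-state steps quoted above into a single helper.
# Assumes mpc_nbody, orbit_cheby and precalc are importable from cheby_checker
# and expose the calls quoted in this notebook; fit_and_precalc, times and
# states are hypothetical names introduced here for illustration.
def fit_and_precalc(filename, name, times, states, observatoryXYZ,
                    tstep=20, trange=600):
    from cheby_checker import mpc_nbody, orbit_cheby, precalc

    # (1) N-body integration of the freshly fitted orbit
    Sim = mpc_nbody.NbodySim(filename, 'eq')
    Sim(tstep=tstep, trange=trange)  # timestep & range would have to be changed

    # (2) Chebyshev fit -> MSCs
    MSCs = orbit_cheby.MSC_Loader(
        FROM_ARRAY=True,
        primary_unpacked_provisional_designations=name,
        times_TDB=times,
        statearray=states).MSCs

    # (3) Pre-calcs stored in the db
    P = precalc.PreCalc()
    P.upsert(MSCs, observatoryXYZ)
    return MSCs
```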
| notebooks/archaic/Demonstrate_Deployment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Graphing Airline Ontime Data
#
# This notebook shows how to create a time-series graph from airline ontime data, which can be downloaded from the [US Bureau of Transportation Statistics](https://www.transtats.bts.gov/tables.asp?DB_ID=120). See https://github.com/Altinity/altinity-datasets for tools to help with loading.
# First import SQLAlchemy and activate the %sql function. This just needs to be done once.
from sqlalchemy import create_engine
# %load_ext sql
# Run a query using %%sql. This needs to go in a separate cell.
# + magic_args="clickhouse://default:@localhost/airline" language="sql"
# SELECT toYear(FlightDate) t,
# sum(Cancelled)/count(*) cancelled,
# sum(DepDel15)/count(*) delayed
# FROM airline.ontime GROUP BY t ORDER BY t
# -
# Turn the result into a nice dataframe and prove it has some data in it.
result = _
df = result.DataFrame()
df.tail()
# Time to make a quick graph using matplotlib. I'm not the greatest at this but once you have a data frame everything is possible.
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot('t', 'cancelled',
data=df, linestyle='--',
marker='o', label='Cancelled')
plt.plot('t', 'delayed',
data=df, linestyle='--',
marker='o', label='Delayed')
plt.xlabel('Year')
plt.ylabel('Percentage')
plt.legend(loc='upper left')
plt.title('Fetch data the easy way')
plt.show()
# -
# The %sql magic function is great but we can also do the same thing using the clickhouse-driver client library and direct API calls.
# +
import pandas
from clickhouse_driver import Client
client = Client('localhost', database='airline')
result, columns = client.execute(
'SELECT toYear(FlightDate) t,'
'sum(Cancelled)/count(*) cancelled,'
'sum(DepDel15)/count(*) delayed '
'FROM airline.ontime GROUP BY t ORDER BY t',
with_column_types=True)
df2 = pandas.DataFrame(result, columns=[col[0] for col in columns])
df2.tail()
# -
# The graph looks just the same, so we change the title to tell them apart.
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot('t', 'cancelled', data=df2,
linestyle='--', marker='o', label='Cancelled')
plt.plot('t', 'delayed', data=df2,
linestyle='--', marker='o', label='Delayed')
plt.xlabel('Year')
plt.ylabel('Percentage')
plt.legend(loc='upper left')
plt.title('Fetch data the hard way')
plt.show()
# -
# That's all folks!
| notebooks/EX-5-Airline-OnTime-Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torch.nn as nn
# # Efficient Natural Language Response Suggestion for Smart Reply
#
# Link: [Paper](https://arxiv.org/pdf/1705.00652.pdf)
#
# **Summary:**
#
# This paper describes improvements to the initial smart reply model proposed by Kannan et al. The improvements aim specifically at reducing the computational complexity of training the smart reply system and at reducing latency at inference time. This is achieved by using a feed-forward neural network to learn embeddings such that a new message has a high dot product with good candidates from the set of possible responses ($R$). The authors keep two of the key components from the initial smart reply system, the triggering model and the diversity module based on the EXPANDER algorithm, and instead focus on improving response selection.
#
# **Innovations:**
#
# * Using a feedforward network to score responses in place of a generative model to reduce computational cost.
# + N-gram embeddings are used to approximate sequences and capture basic semantic and word-ordering information
# * Multiple Negatives
# + Given a batch size of K possible responses each sample in the batch is treated as having K-1 negatives.
# * Hierarchical Quantization
# + Gives further efficiency improvements when searching for the best responses in the candidate space.
#
# ## Dot Product Model
#
# The authors describe a dot product scoring model where $S(x,y)$ is factorized as a dot product between a vector $\mathbf{h}_x$ that depends only on $x$ and a vector $\mathbf{h}_y$ that depends only on $y$. This is represented as figure 3 (b) in the paper and is shown below.
#
# 
#
# This can be implemented in PyTorch as shown below. The two towers are identical, so the same module class can simply be instantiated twice.
class dotProdModel(nn.Module):
"""Torch Dot Model."""
def __init__(self, hidden_size1, hidden_size2, hidden_size3,
vocab_size, dropout, pretrained=False, weights=None,
emb_dim=None):
"""Initialization."""
        super(dotProdModel, self).__init__()
if pretrained:
self.embedding = nn.Embedding(weights.size(0), weights.size(1))
self.embedding.weight.data.copy_(weights)
emb_dim = weights.size(1)
else:
            self.embedding = nn.Embedding(vocab_size, emb_dim)
self.linear1 = nn.Linear(emb_dim, hidden_size1)
self.linear2 = nn.Linear(hidden_size1, hidden_size2)
self.linear3 = nn.Linear(hidden_size2, hidden_size3)
self.dropout = nn.Dropout(dropout)
self.activation = nn.Tanh()
def forward(self, x):
"""Forward pass."""
h = torch.sum(self.embedding(x), dim=0)
h = self.dropout(self.activation(self.linear1(h)))
h = self.dropout(self.activation(self.linear2(h)))
h = self.dropout(self.activation(self.linear3(h)))
return h
# ### Multiple Negatives and the Loss Function
#
# For efficiency, a set of $K$ possible responses is used to approximate $P(y|x)$: one correct response and $K-1$ random negatives. For simplicity, the responses of the other examples in a training batch of stochastic gradient descent are used as negative responses. For a batch size of $K$, there will be $K$ input emails $\mathbf{x} = (x_1, ..., x_K)$ and their corresponding responses $\mathbf{y} = (y_1, ..., y_K)$. Every reply $y_j$ is effectively treated as a negative candidate for $x_i$ if $i \neq j$. The $K-1$ negative examples for each $x$ are different at each pass through the data due to shuffling in stochastic gradient descent. The goal of training is to minimize the approximated mean negative log probability of the data. For a single batch this is:
#
# $$
# \mathcal{J}(\mathbf{x}, \mathbf{y}, \theta) = -\frac{1}{K}\sum_{i=1}^{K} \left[ S(x_i, y_i) - \log \sum_{j=1}^{K} e^{S(x_i, y_j)} \right]
# $$
#
# This is implemented in PyTorch as follows:
class approxMeanNegativeLoss(nn.Module):
"""Loss function."""
def __init__(self):
super(approxMeanNegativeLoss, self).__init__()
    def forward(self, src_pos, trg_pos, batch_size):
        try:
            assert batch_size == src_pos.size()[0]
        except AssertionError:
            batch_size = src_pos.size()[0]
        scores = torch.mm(src_pos, trg_pos.t())         # S(x_i, y_j) for every pair
        S_xi_yi = scores.diag()                         # matching-pair scores
        log_sum_exp_S = torch.logsumexp(scores, dim=1)  # numerically stable log-sum-exp
        return -((S_xi_yi - log_sum_exp_S).sum() / batch_size)
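# A quick numerical check of the loss (a sketch with hypothetical sizes, not part of the paper): because each input is scored against all $K$ responses in the batch, the batch loss is exactly softmax cross-entropy with the identity labelling $x_i \leftrightarrow y_i$.

```python
import torch

torch.manual_seed(0)
K, d = 4, 8  # hypothetical batch size and embedding dimension

# Stand-ins for the tower outputs h_x, h_y of the dot product model
h_x = torch.randn(K, d)
h_y = torch.randn(K, d)

scores = h_x @ h_y.t()  # scores[i, j] = S(x_i, y_j); off-diagonal entries are the negatives
loss = -(scores.diag() - torch.logsumexp(scores, dim=1)).mean()

# Same quantity via softmax cross-entropy with labels (0, 1, ..., K-1)
ce = torch.nn.functional.cross_entropy(scores, torch.arange(K))
assert torch.allclose(loss, ce)
```

This equivalence is why the multiple-negatives objective is often implemented directly as `cross_entropy` over the in-batch score matrix.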
| papers/00_EfficientSmartReply/Google_EfficientSmartReply.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## H1N1 Vaccine Usage Prediction
#
# This model focuses on understanding the subset of the population who got themselves vaccinated against the H1N1 flu, and also aims to understand the factors which influence the chances of people getting themselves vaccinated.
#
#
# +
#Importing the required packages
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
from sklearn.model_selection import train_test_split, RandomizedSearchCV, GridSearchCV
from sklearn.preprocessing import StandardScaler, MinMaxScaler
import warnings
warnings.filterwarnings("ignore",category=FutureWarning)
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
# +
#Checking the current working directory
os.getcwd()
# -
os.chdir(r"D:\Hariharan\Desktop\imarticus")
# +
#Importing the dataset
vaccine=pd.read_csv("h1n1_vaccine_prediction.csv")
# -
vaccine
vaccine["h1n1_vaccine"].value_counts()
#head() shows the first 5 rows, i.e. in this case the data of the first 5 respondents
vaccine.head()
#shows the datatypes of each variable in the dataset
vaccine.info()
#Checking the missing values
vaccine.isnull().sum()
#shape tells us the number of rows and columns present in the dataset
vaccine.shape
vaccine["income_level"].value_counts()
# ### 1.h1n1_worry
vaccine["h1n1_worry"].value_counts()
# Filling the missing values in h1n1_worry variable with mode
vaccine['h1n1_worry'] = vaccine['h1n1_worry'].fillna(vaccine['h1n1_worry'].mode()[0])
vaccine["h1n1_worry"].value_counts()
# ### 2.has_health_insur
vaccine["has_health_insur"].value_counts()
# We filled the has_health_insur variable with an arbitrary value, which we took as 9
vaccine["has_health_insur"].fillna((9), inplace=True)
vaccine["has_health_insur"].value_counts()
# ### Feature Engineering
#
# We created a new variable, insurance_status, from the column has_health_insur. Since filling the missing values with the arbitrary value 9 created a third category, insurance_status records whether a respondent's insurance information is known (1) or missing (0), rather than the original 0/1 insured flag.
vaccine["insurance_status"]=np.where(vaccine["has_health_insur"]==9,0,1)
# We checked out a few variables and their categories
vaccine["h1n1_awareness"].value_counts()
vaccine["antiviral_medication"].value_counts()
vaccine["contact_avoidance"].value_counts()
vaccine["bought_face_mask"].value_counts()
vaccine["wash_hands_frequently"].value_counts()
vaccine["avoid_large_gatherings"].value_counts()
vaccine["reduced_outside_home_cont"].value_counts()
vaccine["avoid_touch_face"].value_counts()
vaccine.columns
# Filling all the missing variables with the mode
cols = ["antiviral_medication","contact_avoidance","bought_face_mask","wash_hands_frequently","h1n1_awareness"]
vaccine[cols]=vaccine[cols].fillna(vaccine.mode().iloc[0])
cols = ["avoid_large_gatherings","reduced_outside_home_cont","avoid_touch_face"]
vaccine[cols]=vaccine[cols].fillna(vaccine.mode().iloc[0])
vaccine["chronic_medic_condition"].value_counts()
vaccine["cont_child_undr_6_mnths"].value_counts()
vaccine["is_health_worker"].value_counts()
vaccine["is_h1n1_vacc_effective"].value_counts()
cols = ["chronic_medic_condition","cont_child_undr_6_mnths","is_health_worker","is_h1n1_vacc_effective","is_h1n1_risky"]
vaccine[cols]=vaccine[cols].fillna(vaccine.mode().iloc[0])
cols = ["sick_from_h1n1_vacc","is_seas_vacc_effective","is_seas_risky","sick_from_seas_vacc"]
vaccine[cols]=vaccine[cols].fillna(vaccine.mode().iloc[0])
vaccine["no_of_children"].value_counts()
vaccine["no_of_adults"].value_counts()
# Filling missing values in no_of_adults and no_of_children variables with the median
cols = ["no_of_adults","no_of_children"]
vaccine[cols]=vaccine[cols].fillna(vaccine[cols].median())
# Checking out the object variables only from the entire dataset
vaccine.select_dtypes(include=['object'])
vaccine["age_bracket"].value_counts()
# ## Grouping of age bracket variable
# Age bracket groups formed:
# 1. 18 and above
# 2. 45 and above
# 3. 65 and above
vaccine["age_bracket"]=vaccine["age_bracket"].replace(["65+ Years"], "65_and_above")
vaccine["age_bracket"]=vaccine["age_bracket"].replace(["45 - 54 Years","55 - 64 Years"], "45_and_above")
vaccine["age_bracket"]=vaccine["age_bracket"].replace(["18 - 34 Years","35 - 44 Years"], "18_and_above")
vaccine["age_bracket"].value_counts()
# ## Grouping of qualification variable
vaccine["qualification"].value_counts()
vaccine["qualification"]=vaccine["qualification"].replace(["< 12 Years"], "middle_scl")
vaccine["qualification"]=vaccine["qualification"].replace(["12 Years"], "high_scl")
vaccine["qualification"]=vaccine["qualification"].replace(["Some College"], "college")
vaccine["qualification"]=vaccine["qualification"].replace(["College Graduate"], "college_grad")
vaccine["qualification"].value_counts()
vaccine["qualification"].fillna(("others"), inplace=True)
# ## Grouping of Race Variable
# We just renamed the category "Other or Multiple" to "Others"
vaccine["race"].value_counts()
vaccine["race"]=vaccine["race"].replace(["Other or Multiple"], "Others")
vaccine["race"].value_counts()
# ### We changed the sex variable to integer datatype by assigning values(1,0)
#
# Male=1
# Female=0
vaccine["sex"]=np.where(vaccine["sex"]=="Male",1,0)
vaccine["sex"].value_counts()
# ## Grouping of Income variable
# We grouped the income variable as follows:
#
# 1. <= $75,000, Above Poverty - Middle Class
# 2. > $75,000 - Upper Class
# 3. Below Poverty - Lower Class
vaccine["income_level"].value_counts()
vaccine["income_level"]=vaccine["income_level"].replace(["<= $75,000, Above Poverty"], "Middle_Class")
vaccine["income_level"]=vaccine["income_level"].replace(["> $75,000"], "Upper_Class")
vaccine["income_level"]=vaccine["income_level"].replace(["Below Poverty"], "Lower_Class")
vaccine["income_level"].value_counts()
# We formed another separate category for the missing values and named it others
vaccine["income_level"].fillna(("others"), inplace=True)
vaccine["income_level"].value_counts()
# ## Filling the other missing values
vaccine["marital_status"].value_counts()
# In the marital_status, housing_status and employment columns, we treated the missing values by forming a separate category named others.
vaccine["marital_status"].fillna(("others"), inplace=True)
vaccine["housing_status"].fillna(("others"), inplace=True)
vaccine["employment"].fillna(("others"), inplace=True)
vaccine["employment"].value_counts()
vaccine["census_msa"].value_counts()
# We filled the missing values in columns named dr_recc_h1n1_vacc and dr_recc_seasonal_vacc by the median
cols = ['dr_recc_h1n1_vacc', 'dr_recc_seasonal_vacc']
vaccine[cols]=vaccine[cols].fillna(vaccine[cols].median())
# ## Bivariate Plots
vaccine.info()
# ### qualification and h1n1 vaccine
sns.catplot(x='qualification',hue='h1n1_vaccine',data=vaccine,kind="count")
# We can observe from this graph that college graduates came forward in the largest numbers to get themselves vaccinated, while people who completed their qualification only up to middle school did not come forward in great numbers.
#
# The Others category too has a smaller number of vaccinated individuals. They may be people with qualifications below middle school, etc., and thus may not really understand the benefits of getting vaccinated. They may have a kind of fear or stigma instilled in their minds regarding vaccination, which needs to be addressed.
#
# People with university-level qualifications (be it college-going students or people who have completed their graduation) are keen to come forward for vaccination, as they are usually better informed and have an understanding of the threat posed to humans by infectious diseases.
#
# ### Sex and h1n1 vaccine
sns.catplot(x='sex',hue='h1n1_vaccine',data=vaccine,kind="count")
# Male=1
# Female=0
#
# More females are getting vaccinated compared to males. One reason for this gap might be that men are expected to appear tough and masculine, which can lead them to avoid preventive health care.
sns.catplot(x='age_bracket',hue='h1n1_vaccine',data=vaccine,kind="count")
# The 45-and-above group is getting vaccinated more than the 65-and-above and 18-and-above brackets. This might be because people aged 65 and above are frail and may not be fit to travel to the hospital for vaccination; they would want medical staff to come to their homes instead. That requires more medical staff, since some must also inoculate the other age groups coming to the hospital, so this age group shows fewer vaccinated individuals.
# House-to-house vaccination is required.
#
# The 18-and-above group shows less vaccination, possibly because the vaccination net was widened to this age group more recently than to the 45-and-above group, so data for this group would have been collected during the initial phase of the drive.
#
#
# ### is_health_worker and h1n1_vaccine
sns.catplot(x='is_health_worker',hue='h1n1_vaccine',data=vaccine,kind="count")
# The general public is getting vaccinated more than health workers, which is in itself an alarming sign. Health workers are responsible for treating people, and if they are not well protected against the disease, it will be very difficult for the entire population in the long run.
# ### h1n1_awareness and h1n1_vaccine
sns.catplot(x='h1n1_awareness',hue='h1n1_vaccine',data=vaccine,kind="count")
# The greater the awareness, the greater the chances of this vaccination drive being successful.
# ### income_level and h1n1_vaccine
sns.catplot(x='income_level',hue='h1n1_vaccine',data=vaccine,kind="count")
# Lower class individuals are less likely to get themselves vaccinated.
sns.catplot(x='is_h1n1_vacc_effective',hue='h1n1_vaccine',data=vaccine,kind="count")
# People who rated the effectiveness of the h1n1 vaccine as 5 are more likely to recommend it to others (family, friends, colleagues, etc.), and only then will the vaccination count increase.
#
# More evidence needs to be provided by the government regarding the vaccine efficacy in order to increase awareness and remove hesitation among people regarding vaccinations.
# ## Outlier Check
vaccine.describe(percentiles=[.01,.02,.03,.04,.05,.1,.15,.25,.5,.75,.90,.95,.96,.97,.98,.99]).T
# We did not find any outliers
# ## Correlation Check
corr=vaccine.corr()
corr
plt.figure(figsize=(10,8))
sns.heatmap(vaccine.corr())
plt.show()
# No correlation between the independent variables exceeded 0.7, which indicates at most moderate correlation.
#
# So no multicollinearity was observed.
# ## Dummy Creation
# We converted all the object variable columns to the dummy or indicator variables
vaccine.select_dtypes(include=['object'])
vaccine=pd.get_dummies(vaccine, columns=["age_bracket","income_level","marital_status","housing_status","employment","race","qualification","census_msa"], drop_first=True)
# ## Train Test Split
y=vaccine["h1n1_vaccine"]
X=vaccine.drop(["h1n1_vaccine"],axis=1)
X_train, X_test, y_train, y_test=train_test_split(X,y,test_size=.2, random_state=88)
# +
scale=MinMaxScaler()
X_train_scaled=scale.fit_transform(X_train)
X_test_scaled=scale.transform(X_test)
#scale=StandardScaler()
#sc_fit=sc.fit(X_train)
# -
X_train_scaled=pd.DataFrame(X_train_scaled,columns=X_train.columns)
X_test_scaled=pd.DataFrame(X_test_scaled,columns=X_test.columns)
X_train_scaled
X_test_scaled
# ### Logistic Regression
log=LogisticRegression()
model1=log.fit(X_train_scaled,y_train)
model1
print("accuracy of train",model1.score(X_train_scaled,y_train))
print("accuracy of test",model1.score(X_test_scaled,y_test))
pred_train=model1.predict(X_train_scaled)
pred_test=model1.predict(X_test_scaled)
pred_train
pred_test
metrics.confusion_matrix(y_train,pred_train)
print(metrics.classification_report(y_train, pred_train))
metrics.confusion_matrix(y_test, pred_test)
print(metrics.classification_report(y_test, pred_test))
from sklearn.metrics import accuracy_score
LR_Score = accuracy_score(y_test,pred_test)
# ### Decision Tree
dt1=DecisionTreeClassifier()
dt1.fit(X_train_scaled,y_train)
print("Train score:",dt1.score(X_train_scaled,y_train))
print("Test score:",dt1.score(X_test_scaled,y_test))
# +
from sklearn.model_selection import GridSearchCV
parameters = {'criterion' : ('gini','entropy'),
'min_samples_split':[2,3,4,5],
'max_depth': [2,4,6,8,9,10,11,12],
'min_samples_leaf':[2,5,10]}
tr = DecisionTreeClassifier()
gsearch = GridSearchCV(tr, parameters, cv=10, verbose=1, n_jobs=2)
gsearch.fit(X_train_scaled, y_train)
# -
gsearch.best_params_
# +
dt2=DecisionTreeClassifier(criterion="entropy",max_depth=6,min_samples_leaf=10,min_samples_split=2)
dt2.fit(X_train_scaled,y_train)
print("Train score:",dt2.score(X_train_scaled,y_train))
print("Test score:",dt2.score(X_test_scaled,y_test))
# -
pred_train=dt2.predict(X_train_scaled)
pred_test=dt2.predict(X_test_scaled)
metrics.confusion_matrix(y_train,pred_train)
print(metrics.classification_report(y_train, pred_train))
metrics.confusion_matrix(y_test,pred_test)
print(metrics.classification_report(y_test, pred_test))
DT_Score = accuracy_score(y_test,pred_test)
# ### Random Forest
rf1=RandomForestClassifier()
rf1.fit(X_train_scaled,y_train)
print("Train accuracy",rf1.score(X_train_scaled,y_train))
print("Test accuracy",rf1.score(X_test_scaled,y_test))
# +
parameters={"n_estimators":[50,100,150,200],
"criterion":["entropy","gini"],
"max_depth":[3,5,7,9],
# "min_samples_split":[2,4,6,10],
# "min_samples_leaf":[2,4,6,10],
"bootstrap":[True,False],
            "max_features":["log2","sqrt"]
}
rf=RandomForestClassifier()
rf_gs=GridSearchCV(estimator=rf, param_grid=parameters, scoring="accuracy", verbose=1, n_jobs=2, cv=10)
rf_gs.fit(X_train_scaled,y_train)
# -
rf_gs.best_params_
rf3=RandomForestClassifier(criterion="gini", max_depth=9, max_features="sqrt", n_estimators=150, bootstrap=True, random_state=88)
rf3.fit(X_train_scaled, y_train)
print("Train accuracy :",rf3.score(X_train_scaled, y_train))
print("Test accuracy :",rf3.score(X_test_scaled, y_test))
pred_train=rf3.predict(X_train_scaled)
pred_test=rf3.predict(X_test_scaled)
metrics.confusion_matrix(y_train,pred_train)
print(metrics.classification_report(y_train, pred_train))
metrics.confusion_matrix(y_test,pred_test)
print(metrics.classification_report(y_test, pred_test))
RF_Score = accuracy_score(y_test,pred_test)
# ### ADABOOST
adb=AdaBoostClassifier()
adb.fit(X_train_scaled,y_train)
print("Train score",adb.score(X_train_scaled,y_train))
print("Test score",adb.score(X_test_scaled,y_test))
# +
adb1=AdaBoostClassifier(n_estimators=300,learning_rate=.1,random_state=88)
adb1.fit(X_train_scaled,y_train)
print("Train score",adb1.score(X_train_scaled,y_train))
print("Test score",adb1.score(X_test_scaled,y_test))
# +
from sklearn.model_selection import GridSearchCV
params={"n_estimators":[50,100,150,200,250,300,350],
"learning_rate":[1,.1,.01,.001]}
adb2=AdaBoostClassifier()
adb_gs=GridSearchCV(adb2,param_grid=params,cv=10,n_jobs=2,verbose=1)
adb_gs.fit(X_train_scaled,y_train)
# -
adb_gs.best_params_
adb3=AdaBoostClassifier(n_estimators=300,learning_rate=1, random_state=88)
adb3.fit(X_train_scaled, y_train)
print("Train accuracy :",adb3.score(X_train_scaled, y_train))
print("Test accuracy :",adb3.score(X_test_scaled, y_test))
pred_train=adb3.predict(X_train_scaled)
pred_test=adb3.predict(X_test_scaled)
metrics.confusion_matrix(y_train,pred_train)
print(metrics.classification_report(y_train, pred_train))
metrics.confusion_matrix(y_test,pred_test)
print(metrics.classification_report(y_test, pred_test))
ADA_Score = accuracy_score(y_test,pred_test)
# ### XG Boost
xgb=XGBClassifier()
xgb.fit(X_train_scaled,y_train)
print("Train score",xgb.score(X_train_scaled,y_train))
print("Test score",xgb.score(X_test_scaled,y_test))
# +
params={"n_estimators":[50,100,150,200,250],
"max_depth":[5,7,9,11,13,15]}
#"min_child_weight":[],
#"subsample":[],
#"col_sample_bytree":[],
#"reg_alpha":[],
#"reg_lambda":[]
xgb1=XGBClassifier()
xgb_gs=GridSearchCV(xgb1,param_grid=params,cv=5,n_jobs=2,verbose=1)
xgb_gs.fit(X_train_scaled,y_train)
# -
xgb_gs.best_params_
# +
xgb1=XGBClassifier(n_estimators=50,max_depth=5,random_state=88)
xgb1.fit(X_train_scaled,y_train)
print("Train score",xgb1.score(X_train_scaled,y_train))
print("Test score",xgb1.score(X_test_scaled,y_test))
# -
pred_train=xgb1.predict(X_train_scaled)
pred_test=xgb1.predict(X_test_scaled)
metrics.confusion_matrix(y_train,pred_train)
print(metrics.classification_report(y_train, pred_train))
metrics.confusion_matrix(y_test,pred_test)
print(metrics.classification_report(y_test, pred_test))
XG_Score = accuracy_score(y_test,pred_test)
# ### KNN
knn1=KNeighborsClassifier()
knn1.fit(X_train_scaled,y_train)
print("Train Score:",knn1.score(X_train_scaled,y_train))
print("Test Score:",knn1.score(X_test_scaled,y_test))
knn2=KNeighborsClassifier(n_neighbors=19)
knn2.fit(X_train_scaled,y_train)
print("Train Score:",knn2.score(X_train_scaled,y_train))
print("Test Score:",knn2.score(X_test_scaled,y_test))
knn2=KNeighborsClassifier(n_neighbors=29)
knn2.fit(X_train_scaled,y_train)
print("Train Score:",knn2.score(X_train_scaled,y_train))
print("Test Score:",knn2.score(X_test_scaled,y_test))
# +
#k=29 gives the best accuracy
# -
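# The cells above try k=19 and k=29 by hand; a small loop makes the search systematic. A minimal sketch on synthetic data — the names mirror, but are not, the notebook's scaled splits:

```python
# Scan a range of k values and keep the one with the best held-out accuracy.
# Synthetic data stands in for the scaled train/test splits used above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=88)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=88)

scores = {}
for k in range(1, 31, 2):          # odd k avoids ties in binary voting
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    scores[k] = knn.score(X_te, y_te)

best_k = max(scores, key=scores.get)
print("best k:", best_k, "accuracy:", round(scores[best_k], 3))
```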
pred_train=knn2.predict(X_train_scaled)
pred_test=knn2.predict(X_test_scaled)
metrics.confusion_matrix(y_train,pred_train)
print(metrics.classification_report(y_train, pred_train))
metrics.confusion_matrix(y_test,pred_test)
print(metrics.classification_report(y_test, pred_test))
KNN_Score = accuracy_score(y_test,pred_test)
# Since the data here mainly consists of categorical variables, we will proceed with tree-based models: Decision Tree, Random Forest, AdaBoost, and XGBoost.
#
# Among these models, we choose XGBoost as our final model, since it gave the best accuracy of the classifiers tried here.
#
# ## Result
Results = pd.DataFrame({'Model': ['Logistic Regression','Decision Tree','Random Forest','AdaBoost','XGBoost','KNN'],
                        'Accuracy Score' : [LR_Score,DT_Score,RF_Score,ADA_Score,XG_Score,KNN_Score]})
Final_Results = Results.sort_values(by = 'Accuracy Score', ascending=False)
Final_Results = Final_Results.set_index('Model')
print(Final_Results)
| Vaccine Usage Pred(All Models).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbgrader={"grade": false, "grade_id": "a3_intro", "locked": true, "schema_version": 1, "solution": false}
# # Assignment 3
# CSCI 1360E: Foundations for Informatics and Analytics
#
# ## Important Dates
#
# - Released: 2017-06-20 at 12am [EDT](http://www.timeanddate.com/worldclock/usa/athens)
# - Deadline: 2017-06-24 at 11:59:59pm [EDT](http://www.timeanddate.com/worldclock/usa/athens)
#
# ## Grading Breakdown
#
# - Q1: 30pts
# - Q2: 30pts
# - Q3: 40pts
#
# Total: 100pts
#
# - BONUS, Part A: 20pts
# - BONUS, Part B: 20pts
#
# ## Overview
#
# This assignment will go over loops, conditionals, and basics of functions in Python.
#
# Starting with this assignment and continuing for the rest of the semester, you will no longer have to complete partially-filled functions. Instead, you'll be writing your solutions to the questions entirely from scratch by defining and filling in functions of your creation.
#
# Post any questions on Slack to `#questions`! Believe me when I say: you won't be the only one with a particular question!
| assignments/A3/A3_header.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ml
# language: python
# name: ml
# ---
# +
#default_exp config
# +
#export
import os
DATA_DIR = 'data'
TRAIN_DIR = os.path.join(DATA_DIR, 'images', 'train2017')
TRAIN_MASK_DIR = os.path.join(DATA_DIR, 'masks', 'train2017')
VAL_DIR = os.path.join(DATA_DIR, 'images', 'val2017')
VAL_MASK_DIR = os.path.join(DATA_DIR, 'masks', 'val2017')
N_CLASSES = 184
BATCH_SIZE = 4
IMG_HEIGHT, IMG_WIDTH = 224, 224
float_dtype = 'float16'
| notebooks/Config.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/AceBlu/LetsUpgrade-Data-Science/blob/main/Day_1_Data_Science.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Z197N8RPyANz"
# *Question 1.* Given the jumbled word OBANWRI, guess the correct English word.
#
# <NAME>
# <NAME>
# <NAME>
# <NAME>
#
# ANS: <NAME>
# + [markdown] id="UTm7_uPPH1CW"
# *Question 2.* Write a program which prints “LETS UPGRADE”.
#
# + id="qeRxkPUoDLZY" outputId="895ef6ba-01a6-44e9-c132-9e5fb5879a08" colab={"base_uri": "https://localhost:8080/"}
print("LETS UPGRADE")
# + [markdown] id="w1YBM2B-pneg"
# *Question 3.* Write a program that takes cost price and selling price as input and displays whether the transaction is a Profit or a Loss or Neither.
# + id="q2GSjiP1pm4r" outputId="80ca8510-040f-4057-eb72-c500d57c622d" colab={"base_uri": "https://localhost:8080/"}
cost_price=int(input())
selling_price=int(input())
if cost_price < selling_price:
print("Profit")
elif cost_price > selling_price:
print("Loss")
else:
print("Neither")
# + [markdown] id="f0iHfV4VuKZI"
# *Question 4.* Write a program that takes an amount in Euros as input. You need to find its equivalent in Rupees and display it. Assume 1 Euro equals Rs. 80.
# + id="9ep8n-n9u1yw" outputId="1b4f4d41-ea86-495d-9fe5-a4b831dc6de2" colab={"base_uri": "https://localhost:8080/"}
amount_in_rs=float(input())*80
print(int(amount_in_rs))
| Day_1_Data_Science.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# -*- coding: utf-8 -*-
"""
Created on Fri Feb 22 15:45:50 2018
@author: OasisR
"""
import time
import pandas as pd
import requests
from bs4 import BeautifulSoup
from WindPy import w
from datetime import datetime
import numpy as np
from scipy import interpolate
"""
Manually set the calculation start date:
check the date of the latest result saved in the working_data folder ("最新结果至xxxx") and set begin_date to that date, e.g. begin_date = "2018-07-11"
"""
start_date = "2019-02-20"
# use today's date as enddate on each run
print('Run date: ' + time.strftime("%Y/%m/%d"))
date = time.strftime("%Y-%m-%d")
# Wind enforces a daily request quota, so fewer requests are better
# -
"""
How can performance be improved?
sleep() is used when reading Wind data, because reading too fast risks getting blocked.
The simplest way to get the data sooner is to comment out the sleep, or use a smaller value
"""
# +
"""
Connection status
"""
a = w.start()
if a.ErrorCode==0 :
    print('\nConnected to the Wind API!')
# +
"""
Load option data from Wind
"""
opt = "10001417.SH,10001418.SH,10001419.SH,10001420.SH,10001421.SH,10001422.SH,10001423.SH,10001424.SH,10001425.SH,10001435.SH,10001437.SH,10001439.SH,10001441.SH,10001469.SH,10001501.SH,10001502.SH,10001503.SH,10001504.SH,10001505.SH,10001506.SH,10001507.SH,10001508.SH,10001509.SH,10001519.SH,10001521.SH,10001523.SH,10001579.SH,10001580.SH,10001581.SH,10001582.SH,10001583.SH,10001584.SH,10001585.SH,10001586.SH,10001587.SH,10001597.SH,10001598.SH,10001599.SH,10001600.SH,10001601.SH,10001602.SH,10001603.SH,10001604.SH,10001605.SH,10001619.SH,10001621.SH,10001627.SH,10001629.SH,10001635.SH,10001637.SH,10001643.SH,10001645.SH,10001669.SH,10001671.SH,10001677.SH,10001678.SH,10001679.SH,10001680.SH,10001681.SH,10001682.SH,10001683.SH,10001684.SH,10001685.SH,10001697.SH,10001701.SH,10001705.SH,10001707.SH,10001709.SH,10001713.SH,10001715.SH,10001717.SH,10001727.SH,10001728.SH,10001729.SH,10001730.SH,10001735.SH,10001736.SH,10001737.SH,10001738.SH,10001743.SH,10001744.SH,10001745.SH,10001746.SH,10001751.SH,10001752.SH,10001753.SH,10001754.SH,10001755.SH,10001756.SH,10001757.SH,10001758.SH,10001759.SH,10001769.SH,10001771.SH,10001426.SH,10001427.SH,10001428.SH,10001429.SH,10001430.SH,10001431.SH,10001432.SH,10001433.SH,10001434.SH,10001436.SH,10001438.SH,10001440.SH,10001442.SH,10001470.SH,10001510.SH,10001511.SH,10001512.SH,10001513.SH,10001514.SH,10001515.SH,10001516.SH,10001517.SH,10001518.SH,10001520.SH,10001522.SH,10001524.SH,10001588.SH,10001589.SH,10001590.SH,10001591.SH,10001592.SH,10001593.SH,10001594.SH,10001595.SH,10001596.SH,10001606.SH,10001607.SH,10001608.SH,10001609.SH,10001610.SH,10001611.SH,10001612.SH,10001613.SH,10001614.SH,10001620.SH,10001622.SH,10001628.SH,10001630.SH,10001636.SH,10001638.SH,10001644.SH,10001646.SH,10001670.SH,10001672.SH,10001686.SH,10001687.SH,10001688.SH,10001689.SH,10001690.SH,10001691.SH,10001692.SH,10001693.SH,10001694.SH,10001698.SH,10001702.SH,10001706.SH,10001708.SH,10001710.SH,10001714.SH,10001716.SH,10001718.SH,10001731.SH,10001732.SH,10001733.SH,10001734.SH,10001739.SH,10001740.SH,10001741.SH,10001742.SH,10001747.SH,10001748.SH,10001749.SH,10001750.SH,10001760.SH,10001761.SH,10001762.SH,10001763.SH,10001764.SH,10001765.SH,10001766.SH,10001767.SH,10001768.SH,10001770.SH,10001772.SH"
# 4 weeks before 2019.03.19
opt_list = opt.split(',')
ETF50 = []
for i in opt_list:
if i[-2:]=='SH':
ETF50 += [i]
options = pd.DataFrame()
for op in ETF50:
wsd_data = w.wsd(op, "sec_name,exe_mode,exe_price,exe_enddate,close", "ED-4W", date, "")
print(wsd_data)
fm=pd.DataFrame(data=wsd_data.Data, index=wsd_data.Fields, columns=wsd_data.Times)
    fm=fm.T  # transpose the matrix
fm = fm.dropna()
if len(options)==0:
options = fm
else:
options = pd.concat([options,fm])
time.sleep(1)
options.to_csv('./raw_data/options'+start_date+'至'+date+'.csv',encoding='GBK')
print('Options Saved!'+start_date+'至'+date)
# +
"""
Save the trade days
"""
raw_trade_day = w.tdays("2019-02-21", "").Data[0]
trade_day = []
for each in raw_trade_day:
ymd = each.strftime('%Y/%m/%d')
trade_day.append(ymd)
a={'DateTime':trade_day}
trade_day = pd.DataFrame(data=a)
trade_day.to_csv('./raw_data/tradeday'+start_date+'至'+date+'.csv', encoding ='utf_8_sig',index = False)
print('Trade_Day Saved!'+start_date+'至'+date)
# +
"""
Load SHIBOR data from Wind
(access may already be rate-limited by Wind:
ErrorCode=-40522017
"""
a = w.start()
if a.ErrorCode==0 :
    print('Connected to the Wind API!')
shibor_raw_data = w.wsd("SHIBORON.IR, SHIBOR1W.IR, SHIBOR2W.IR, SHIBOR1M.IR, SHIBOR3M.IR, SHIBOR6M.IR, SHIBOR9M.IR, SHIBOR1Y.IR", "close", "2019-02-20", date, "")
if shibor_raw_data.ErrorCode == -40522017:
    print('Too many recent requests: Wind is limiting access. Download the SHIBOR data manually from EDB and save it as CSV.')
# print('Stopping: after downloading the data manually, run this module first, then run the next block')
# exit()
# +
"""
If the SHIBOR data could be read
"""
a={'DateTime':shibor_raw_data.Times,
'1D':shibor_raw_data.Data[0],
'1W':shibor_raw_data.Data[1],
'2W':shibor_raw_data.Data[2],
'1M':shibor_raw_data.Data[3],
'3M':shibor_raw_data.Data[4],
   '6M':shibor_raw_data.Data[5],  # the 6M tenor was missing before
'9M':shibor_raw_data.Data[6],
'1Y':shibor_raw_data.Data[7],
}
shibor_raw_data_pd = pd.DataFrame(data=a)
shibor_raw_data_pd = shibor_raw_data_pd.sort_index(axis = 0,ascending = False)
shibor_raw_data_pd.to_csv('./raw_data/shibor'+start_date+'至'+date+'.csv', encoding ='utf_8_sig',index = False)
# +
"""
Load the previously saved data.
If access has been blocked, use the commented lines below to read the manually edited SHIBOR file instead
"""
# if data.ErrorCode != -40522017:
# shibor_rate = pd.read_csv('./raw_data/shibor'+start_date+'至'+date+'.csv',index_col=0,encoding='GBK')
options_data = pd.read_csv('./raw_data/options'+start_date+'至'+date+'.csv',index_col=0,encoding = 'GBK')
options_data = options.sort_index()
tradeday = pd.read_csv('./raw_data/tradeday'+start_date+'至'+date+'.csv',encoding='GBK')
shibor_rate = pd.read_csv('./raw_data/shibor'+start_date+'至'+date+'.csv', index_col = 0, encoding = 'GBK')
# +
new_index = []
enddate = []
for each in options_data.index.tolist():
each = each.strftime("%Y/%m/%d")
new_index.append(str(each))
for each in options_data['EXE_ENDDATE'].tolist():
each = each.strftime("%Y/%m/%d %H:%M")
enddate.append(str(each))
options_data = options_data.drop(['EXE_ENDDATE'],axis = 1)
options_data.insert(0,'date',new_index)
options_data.insert(0,'EXE_ENDDATE',enddate)
options_data = options_data.set_index('date')
options_data
# +
"""
Manually downloaded Wind data: load and clean the SHIBOR file
"""
# shibor_rate = pd.read_csv('./raw_data/shibor_manual'+start_date+'至'+date+'.csv')
# # the GBK encoding must be dropped here
# shibor_rate = shibor_rate.drop(index = 0)
# shibor_rate = shibor_rate.drop(list(shibor_rate.index)[-2:])
# # drop the useless first row and last two rows of the Wind export
# shibor_rate = shibor_rate.sort_index(axis = 0,ascending = False)
# shibor_rate.set_index('指标名称',inplace=True)
# # reset the index
# shibor_rate.to_csv('./working_data/shibor_manual'+start_date+'至'+date+'.csv', index = False)
# shibor_rate = pd.read_csv('./working_data/shibor_manual'+start_date+'至'+date+'.csv',index_col=0)
# # shibor_rate.columns
# shibor_rate
# -
#==============================================================================
# Start of the iVIX calculation
#==============================================================================
def periodsSplineRiskFreeInterestRate(options, date):
"""
    params: options: the day's option data, used to obtain expDate
            date: the date for which VIX is computed
    return: shibor: the risk-free rate from `date` to each expiry date expDate
"""
date = datetime.strptime(date,'%Y/%m/%d')
exp_dates = np.sort(options.EXE_ENDDATE.unique())
periods = {}
for epd in exp_dates:
epd = pd.to_datetime(epd)
periods[epd] = (epd - date).days*1.0/365.0
shibor_date = datetime.strptime(shibor_rate.index[0], "%Y-%m-%d")
if date >= shibor_date:
date_str = shibor_rate.index[0]
shibor_values = shibor_rate.ix[0].values
else:
date_str = date.strftime("%Y-%m-%d")
shibor_values = shibor_rate.loc[date_str].values
shibor = {}
    # SHIBOR tenors, expressed in years
period = np.asarray([1.0, 7.0, 14.0, 30.0, 90.0, 180.0, 270.0, 360.0]) / 360.0
min_period = min(period)
max_period = max(period)
for p in periods.keys():
tmp = periods[p]
if periods[p] > max_period:
tmp = max_period * 0.99999
elif periods[p] < min_period:
tmp = min_period * 1.00001
sh = interpolate.spline(period, shibor_values, tmp, order=3)
        # interpolate.spline only exists up to scipy==0.18.0; keep this in mind when setting up the environment
shibor[p] = sh/100.0
return shibor
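# The comment above pins scipy==0.18.0 because `interpolate.spline` was removed from later SciPy releases. A hedged sketch of an equivalent cubic interpolation using `splrep`/`splev`, which still exist in modern SciPy — the rate values below are dummies, not real SHIBOR quotes:

```python
# Cubic spline interpolation of a rate curve without interpolate.spline.
import numpy as np
from scipy import interpolate

period = np.asarray([1.0, 7.0, 14.0, 30.0, 90.0, 180.0, 270.0, 360.0]) / 360.0
shibor_values = np.array([2.5, 2.6, 2.7, 2.8, 3.0, 3.2, 3.3, 3.5])  # dummy rates

tck = interpolate.splrep(period, shibor_values, k=3)   # cubic spline fit
tmp = 30.0 / 365.0                                     # time to expiry in years
sh = float(interpolate.splev(tmp, tck))                # interpolated rate
print(sh)
```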
# +
# Fetch the options for a given date
def getHistDayOptions(vixDate,options_data):
options_data = options_data.loc[vixDate,:]
return options_data
# -
def getNearNextOptExpDate(options, vixDate):
    # Find the near-month and next-month expiry dates in `options`.
    # The volatilities implied by these two expiries are interpolated to a
    # 30-day implied volatility: the VIX "fear index".
    # If the nearest expiry in `options` is within 1 day, drop it and use the
    # next-month option and the first expiry after it instead.
    # The returned near and next are the expiry dates of the two options used to compute VIX.
    """
    params: options: contract info and prices for all option contracts trading on the date
            vixDate: the date for which VIX is computed
    return: near: near-month expiry date (more than 1 day to expiry)
            next: next-month expiry date
"""
vixDate = datetime.strptime(vixDate,'%Y/%m/%d')
optionsExpDate = list(pd.Series(options.EXE_ENDDATE.values.ravel()).unique())
optionsExpDate = [datetime.strptime(str(i),'%Y/%m/%d %H:%M') for i in optionsExpDate]
near = min(optionsExpDate)
optionsExpDate.remove(near)
    if (near - vixDate).days < 1:
near = min(optionsExpDate)
optionsExpDate.remove(near)
nt = min(optionsExpDate)
return near, nt
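# The near/next expiry selection above can be exercised on plain datetimes (the dates below are made up):

```python
# Pick the near- and next-month expiries, dropping an expiry within one day.
from datetime import datetime

vix_date = datetime(2019, 3, 1)
exp_dates = [datetime(2019, 3, 1), datetime(2019, 3, 27), datetime(2019, 4, 24)]

near = min(exp_dates)
exp_dates.remove(near)
if (near - vix_date).days < 1:   # nearest expiry is within one day: drop it
    near = min(exp_dates)
    exp_dates.remove(near)
nxt = min(exp_dates)
print(near.date(), nxt.date())
```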
def getStrikeMinCallMinusPutClosePrice(options):
    # `options` contains the call and put contracts used for that day's VIX.
    # For each strike, compute |call price - put price| and return the strike
    # that minimizes it, together with the corresponding call-put price
    # difference at that strike.
    """
    params: options: contract info and prices for all option contracts trading on the date
    return: strike: the strike with the smallest |call price - put price|
            priceDiff: that difference; the first step in locating the central strike
"""
call = options[options.EXE_MODE==u"认购"].set_index(u"EXE_PRICE").sort_index()
put = options[options.EXE_MODE==u"认沽"].set_index(u"EXE_PRICE").sort_index()
callMinusPut = call.CLOSE - put.CLOSE
callMinusPut = callMinusPut.astype('float64')
# callMinusPut.to_frame()
strike = abs(callMinusPut).idxmin()
priceDiff = callMinusPut[strike].min()
return strike, priceDiff
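# The selection rule in getStrikeMinCallMinusPutClosePrice — take the strike where |call − put| is smallest — checked on toy prices (hypothetical numbers, not real quotes):

```python
# Find the strike minimizing |call close - put close| on dummy data.
import pandas as pd

call = pd.Series([0.30, 0.18, 0.09], index=[2.5, 2.6, 2.7])  # close by strike
put = pd.Series([0.05, 0.12, 0.28], index=[2.5, 2.6, 2.7])

diff = (call - put).astype('float64')
strike = diff.abs().idxmin()        # strike with the smallest |call - put|
price_diff = diff[strike]
print(strike, price_diff)
```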
def calSigmaSquare(options, FF, R, T):
    # Compute one expiry's contribution (sigma) to the VIX.
    # Inputs: option data `options`, forward index price FF,
    # risk-free rate R, and remaining time to expiry T.
    """
    params: options: contract info and prices for all option contracts trading on the date
            FF: forward index price computed from the strike found in the previous step;
                it splits the calls and puts to be used. Take the first strike below FF
                as the central strike K0, then keep all calls with strike >= K0 and all
                puts with strike <= K0. For contracts struck at K0, drop the call and
                set the put price to the average of the two.
            R: the risk-free rate (SHIBOR) matching this expiry
            T: time to expiry (annualized)
    return: sigma: the sigma contributed by this expiry's data
"""
callAll = options[options.EXE_MODE==u"认购"].set_index(u"EXE_PRICE").sort_index()
putAll = options[options.EXE_MODE==u"认沽"].set_index(u"EXE_PRICE").sort_index()
callAll['deltaK'] = 0.05
putAll['deltaK'] = 0.05
index = callAll.index
if len(index) < 3:
callAll['deltaK'] = index[-1] - index[0]
else:
for i in range(1,len(index)-1):
callAll['deltaK'].ix[index[i]] = (index[i+1]-index[i-1])/2.0
callAll['deltaK'].ix[index[0]] = index[1]-index[0]
callAll['deltaK'].ix[index[-1]] = index[-1] - index[-2]
index = putAll.index
if len(index) < 3:
putAll['deltaK'] = index[-1] - index[0]
else:
for i in range(1,len(index)-1):
putAll['deltaK'].ix[index[i]] = (index[i+1]-index[i-1])/2.0
putAll['deltaK'].ix[index[0]] = index[1]-index[0]
putAll['deltaK'].ix[index[-1]] = index[-1] - index[-2]
call = callAll[callAll.index > FF]
put = putAll[putAll.index < FF]
FF_idx = FF
if put.empty:
FF_idx = call.index[0]
callComponent = call.CLOSE*call.deltaK/call.index/call.index
sigma = (sum(callComponent))*np.exp(T*R)*2/T
sigma = sigma - (FF/FF_idx - 1)**2/T
elif call.empty:
FF_idx = put.index[-1]
putComponent = put.CLOSE*put.deltaK/put.index/put.index
sigma = (sum(putComponent))*np.exp(T*R)*2/T
sigma = sigma - (FF/FF_idx - 1)**2/T
else:
FF_idx = put.index[-1]
try:
if len(putAll.ix[FF_idx].CLOSE.values) > 1:
put['CLOSE'].iloc[-1] = (putAll.ix[FF_idx].CLOSE.values[1] + callAll.ix[FF_idx].CLOSE.values[0])/2.0
except:
put['CLOSE'].iloc[-1] = (putAll.ix[FF_idx].CLOSE + callAll.ix[FF_idx].CLOSE)/2.0
callComponent = call.CLOSE*call.deltaK/call.index/call.index
putComponent = put.CLOSE*put.deltaK/put.index/put.index
sigma = (sum(callComponent)+sum(putComponent))*np.exp(T*R)*2/T
sigma = sigma - (FF/FF_idx - 1)**2/T
return sigma
def changeste(t):
str_t = t.strftime('%Y/%m/%d ')+'00:00'
return str_t
def calDayVIX(vixDate):
    # Compute the historical 30-day option volatility index (VIX) for a given
    # day, following the CBOE methodology.
    """
    params: vixDate: the date for which VIX is computed, as a '%Y/%m/%d' string
    return: the VIX value
"""
    # fetch the option data needed
options = getHistDayOptions(vixDate,options_data)
near, nexts = getNearNextOptExpDate(options, vixDate)
shibor = periodsSplineRiskFreeInterestRate(options, vixDate)
R_near = shibor[datetime(near.year,near.month,near.day)]
R_next = shibor[datetime(nexts.year,nexts.month,nexts.day)]
str_near = changeste(near)
str_nexts = changeste(nexts)
optionsNearTerm = options[options.EXE_ENDDATE == str_near]
optionsNextTerm = options[options.EXE_ENDDATE == str_nexts]
# time to expiration
vixDate = datetime.strptime(vixDate,'%Y/%m/%d')
T_near = (near - vixDate).days/365.0
T_next = (nexts- vixDate).days/365.0
# the forward index prices
nearPriceDiff = getStrikeMinCallMinusPutClosePrice(optionsNearTerm)
nextPriceDiff = getStrikeMinCallMinusPutClosePrice(optionsNextTerm)
near_F = nearPriceDiff[0] + np.exp(T_near*R_near)*nearPriceDiff[1]
next_F = nextPriceDiff[0] + np.exp(T_next*R_next)*nextPriceDiff[1]
    # compute each expiry's contribution to the VIX
near_sigma = calSigmaSquare( optionsNearTerm, near_F, R_near, T_near)
next_sigma = calSigmaSquare(optionsNextTerm, next_F, R_next, T_next)
    # Interpolate between the two expiries' VIX contributions sig1 and sig2,
    # using their remaining times to expiry T1 and T2,
    # to obtain the VIX index (%)
w = (T_next - 30.0/365.0)/(T_next - T_near)
vix = T_near*w*near_sigma + T_next*(1 - w)*next_sigma
return 100*np.sqrt(abs(vix)*365.0/30.0)
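# The final interpolation step in calDayVIX — weighting the two expiries so the blend targets a constant 30-day horizon — in isolation, with dummy variance contributions:

```python
# Blend near- and next-expiry variance contributions to a 30-day VIX value.
import numpy as np

T_near, T_next = 20.0 / 365.0, 48.0 / 365.0     # years to the two expiries
near_sigma, next_sigma = 0.040, 0.036           # dummy variance contributions

w = (T_next - 30.0 / 365.0) / (T_next - T_near)  # weight on the near term
vix_var = T_near * w * near_sigma + T_next * (1 - w) * next_sigma
vix = 100 * np.sqrt(abs(vix_var) * 365.0 / 30.0)
print(round(vix, 2))
```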
# +
# this block is effectively main
ivix = []
for day in tradeday['DateTime']:
tmp = calDayVIX(day)
ivix.append(tmp)
print('Calculation Finished!!!')
# +
# together with the option-forum scraping script, plot and save the combined results
from pyecharts import Line
date = time.strftime("%Y%m%d")
qVIX = pd.read_csv('./working_data/qvix'+date+".csv",encoding='GBK')
attr = tradeday['DateTime'].tolist()
length = len(attr)
# plot only the last 20 days
qVIX = qVIX[-length:]
line = Line(u"China Volatility Index")
line.add("Option forum", attr, qVIX['5'].tolist(), mark_point=["max"])  # uses the closing price
line.add("Manual calculation", attr, ivix, mark_line=["max"])
line.render('./figure/'+start_date+'至'+date+'vix.html')
# -
c={'Option forum': qVIX['5'].tolist(),
   'Manual calculation': ivix}
print(type(c))
all_result = pd.DataFrame(data=c)
all_result.to_csv('./working_data/最终结果'+start_date+'至'+date+'.csv', encoding ='GBK')
| iVIX20190320.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:systematic-ar-study]
# language: python
# name: conda-env-systematic-ar-study-py
# ---
# # Calculate 1D EM Distributions
# Calculate the EM distributions for the various heating frequencies for (roughly) the grouping of pixels identified by Warren et al. (2012). Compare with the distribution derived from their reported intensities. Do this for all four heating frequencies plus the ion case.
# +
import os
import io
import copy
import glob
import urllib
from collections import OrderedDict
import numpy as np
import pandas
from scipy.optimize import curve_fit
import scipy.linalg
import scipy.stats
from scipy.interpolate import interp1d,splev,splrep
from scipy.ndimage import map_coordinates,gaussian_filter
import matplotlib.pyplot as plt
import matplotlib.colors
from matplotlib.ticker import LogFormatter
import seaborn as sns
import astropy.units as u
import astropy.constants as const
import hissw
from sunpy.map import Map,GenericMap
import h5py
import ChiantiPy.tools.util as ch_util
import synthesizAR
from synthesizAR.instruments import InstrumentHinodeEIS
from synthesizAR.util import EISCube,EMCube
from synthesizAR.atomic import EmissionModel
sns.set_palette('deep')
sns.set_context(context='talk')
# %matplotlib inline
# -
frequencies = [250,
750,
'750-ion',
2500,
5000]
eis = InstrumentHinodeEIS([7.5e3,1.25e4]*u.s)
left_x = 350*u.arcsec
bottom_y = 265*u.arcsec
width = 20*u.arcsec
height = 15*u.arcsec
right_x = left_x + width
top_y = bottom_y + height
# ## Atomic Data
emission_model = EmissionModel.restore('/data/datadrive1/ar_forward_modeling/systematic_ar_study/emission_model1109_full/')
resolved_wavelengths = np.sort(u.Quantity([rw for ion in emission_model.ions for rw in ion.resolved_wavelengths]))
# +
pressure_const = 1e15*u.K*u.cm**(-3)
class FakeLoop(object):
electron_temperature = np.logspace(5.1,7.1,100)*u.K
density = pressure_const/electron_temperature
fake_loop = FakeLoop()
i_temperature,i_density = emission_model.interpolate_to_mesh_indices(fake_loop)
# -
contribution_functions = {}
line_names = {}
for ion in emission_model.ions:
for rw in ion.resolved_wavelengths:
i_rw = np.where(ion.wavelength==rw)[0][0]
emiss = map_coordinates(ion.emissivity[:,:,i_rw].value,
np.vstack([i_temperature,i_density]),order=3)*ion.emissivity.unit
ioneq = splev(fake_loop.electron_temperature.value,
splrep(emission_model.temperature_mesh[:,0].value,
ion.fractional_ionization[:,0].value,k=1),ext=1)
line_names[rw] = '{} {}'.format(ion.chianti_ion.meta['name'],rw.value)
contribution_functions[rw] = (1./(np.pi*4.*u.steradian)*0.83
*ioneq*ion.chianti_ion.abundance*emiss/fake_loop.density
*(const.h.cgs*const.c.cgs)/rw.to(u.cm)/u.photon)
resolved_els = list(set([li[1].split(' ')[0].split('_')[0] for li in line_names.items()]))
# +
el_colors = {el:sns.color_palette('deep')[i] for i,el in enumerate(resolved_els)}
fig = plt.figure(figsize=(10,9))
ax = fig.gca()
for i,rw in enumerate(resolved_wavelengths):
el = line_names[rw].split('_')[0]
spec = ch_util.zion2spectroscopic(ch_util.el2z(el),int(line_names[rw].split('_')[1].split(' ')[0]))
ax.plot(fake_loop.electron_temperature,contribution_functions[rw],alpha=0.75,
color=el_colors[el],
#label='{} {:.2f} $\mathrm{{\mathring{{A}}}}$'.format(spec,float(line_names[rw].split(' ')[1]))
label=spec.split(' ')[0]
)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim([5e-28,3e-24])
ax.set_xlim([10.**(5.5),10.**(7.)])
ax.set_xlabel(r'$T$ [{:latex}]'.format(fake_loop.electron_temperature.unit))
ax.set_ylabel(r'$G$ [{:latex}]'.format(contribution_functions[rw][1].unit))
hand,lab = ax.get_legend_handles_labels()
hand_lab = OrderedDict(zip(lab,hand))
ax.legend(hand_lab.values(),hand_lab.keys(),loc=1,ncol=1,frameon=False)
fig.savefig('../loops-workshop-2017-talk/template/img/contribution_fns.png',dpi=200,bbox_inches='tight')
# -
# ## Data from Warren et al. (2012)
tmp = (urllib.request.urlopen('http://iopscience.iop.org/0004-637X/759/2/141/suppdata/apj446760t2_mrt.txt')
.readlines())
col_names = [str(t.strip()).split(' ')[-1] for t in tmp[8:15]]
col_names = [c[:-1] for c in col_names[0:2]] + ['Element','Ion','Wavelength'] + [c[:-1] for c in col_names[3:]]
table_io = io.StringIO(','.join(col_names) + '\n'
+ '\n'.join([','.join(filter(None,t.strip().decode('utf8').split(' '))) for t in tmp[19:]]))
df = pandas.read_csv(table_io)
df = df[df['Active region']==9]
df = df[df['Instrument (1)']=='EIS']
df = df.sort_values('Wavelength')
df
# ## Integrate Intensities
line_intensities = {'{}'.format(freq):{} for freq in frequencies}
for freq in frequencies:
for channel in eis.channels:
tmp = EISCube('../data/eis_intensity_{}_tn{}_t7500-12500.h5'.format(channel['name'],freq))
if type(freq) == int:
tmp.data = (gaussian_filter(tmp.data.value,(channel['gaussian_width']['y'].value,
channel['gaussian_width']['x'].value,0.)))*tmp.data.unit
for rw in resolved_wavelengths:
i_center = np.where(np.isclose(tmp.wavelength.value,rw.value,atol=1.1e-2,rtol=0.))[0]
if len(i_center) == 0:
continue
line_intensities['{}'.format(freq)][rw] = tmp[i_center-5:i_center+5].integrated_intensity
# ## EIS Result with Labeled ROI
eis_fexii_map = (line_intensities['250'][resolved_wavelengths[8]]
.submap(u.Quantity((270,450),u.arcsec),u.Quantity((90,360),u.arcsec)))
fig = plt.figure(figsize=(8,8))
plt.subplots_adjust(right=0.92)
cax = fig.add_axes([0.95,0.2,0.03,0.605])
ax = fig.add_subplot(111,projection=eis_fexii_map)
im = eis_fexii_map.plot(norm=matplotlib.colors.SymLogNorm(1,vmin=10,vmax=5e4),title=False)
ax.coords[0].grid(alpha=0)
ax.coords[1].grid(alpha=0)
ax.coords[0].set_ticks(size=5)
ax.coords[1].set_ticks(size=5)
eis_fexii_map.draw_rectangle(u.Quantity((left_x,bottom_y)),width,height,color=sns.color_palette()[0],lw=2)
fig.colorbar(im,ax=ax,cax=cax)
fig.savefig('../loops-workshop-2017-talk/template/img/eis_fe12_roi.png',dpi=200,bbox_inches='tight')
# ## Ground-truth EM Results
ground_truth_em ={}
for freq in frequencies:
ground_truth_em['{}'.format(freq)] = EMCube.restore('../data/em_cubes_true_tn{}_t7500-12500.h5'.format(freq))
# ## Regularized Inversion Code
class DEM1DResults(object):
def __init__(self,dem_results):
self.temperature_bins = np.float64(dem_results['temperature_bins'])
self.temperature_bin_centers = (self.temperature_bins[:-1] + self.temperature_bins[1:])/2.
self.temperature_error_plus = self.temperature_bin_centers*(10.**(dem_results['elogt']) - 1.)
self.temperature_error_minus = self.temperature_bin_centers*(1. - 10.**(-dem_results['elogt']))
self.dem = dem_results['dem']
self.em = dem_results['dem']*np.diff(self.temperature_bins)
self.dem_errors = dem_results['edem']
self.em_errors = np.diff(self.temperature_bins)*dem_results['edem']
self.chi_squared = dem_results['chisq']
self.regularized_data = dem_results['dn_reg']
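# The DEM-to-EM conversion wrapped by `DEM1DResults` is just multiplication by the temperature-bin widths; a minimal numpy sketch with dummy values:

```python
# Convert a DEM (per-K) into a per-bin EM by multiplying by bin widths.
import numpy as np

temperature_bins = np.logspace(5.5, 7.0, 16)            # 16 bin edges [K]
bin_centers = (temperature_bins[:-1] + temperature_bins[1:]) / 2.0
dem = np.full(15, 1e21)                                  # dummy DEM [cm^-5 K^-1]
em = dem * np.diff(temperature_bins)                     # EM per bin [cm^-5]

assert em.shape == bin_centers.shape                     # one EM value per bin
print(em[0], em[-1])
```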
static_input_vars = {
'log_temperature':np.log10(fake_loop.electron_temperature.value).tolist(),
'temperature_bins':ground_truth_em['250'].temperature_bin_edges.value.tolist(),
'k_matrix':[contribution_functions[rw].value.tolist() for rw in resolved_wavelengths],
'names':['{} {}'.format(rw.value,rw.unit) for rw in resolved_wavelengths],
'error_ratio':0.25,
'gloci':1,'reg_tweak':1,'timed':1
}
save_vars = ['dem','edem','elogt','chisq','dn_reg','temperature_bins']
demreg_script = """
; load intensity from each channel/line
names = {{ names }}
intensity = {{ intensity }}
; load the contribution functions or response functions (called K in Hannah and Kontar 2012)
k_matrix = {{ k_matrix }}
; load temperature array over which K is computed
log_temperature = {{ log_temperature }}
; temperature bins
temperature_bins = {{ temperature_bins }}
; crude estimate of intensity errors
intensity_errors = intensity*{{ error_ratio }}
; inversion method parameters
reg_tweak={{ reg_tweak }}
timed={{ timed }}
gloci={{ gloci }}
; run the inversion method
dn2dem_pos_nb,intensity,intensity_errors,$
k_matrix,log_temperature,temperature_bins,$
dem,edem,elogt,chisq,dn_reg,$
timed=timed,gloci=gloci,reg_tweak=reg_tweak
"""
demreg_runner = hissw.ScriptMaker(extra_paths=['/home/wtb2/Documents/codes/demreg/idl/'],
ssw_path_list=['vobs','ontology'])
# ### Simulated Results
demreg_simulate = {}
for freq in frequencies:
input_vars = static_input_vars.copy()
input_vars['intensity'] = [line_intensities['{}'.format(freq)][rw].submap(u.Quantity((left_x,right_x)),
u.Quantity((bottom_y,top_y))).data.mean()
for rw in resolved_wavelengths]
demreg_simulate['{}'.format(freq)] = DEM1DResults(demreg_runner.run([(demreg_script,input_vars)],
save_vars=save_vars,
cleanup=True,verbose=True))
# ### Observational Data
input_vars = static_input_vars.copy()
input_vars['intensity'] = [float(df['Observed intensity'][df['Wavelength'].apply(np.isclose, b=rw.value, atol=2e-2)])
for rw in resolved_wavelengths]
dem_output_warren = DEM1DResults(demreg_runner.run([(demreg_script,input_vars)],
save_vars=save_vars,
cleanup=True,verbose=True))
# ## Compare
# Compare the observed 1D distribution, results from regularized code, and "ground truth" EM.
def linear_fit(x,a,b):
return a*x + b
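# `linear_fit` is used below to estimate the EM slope `a` in log–log space; the same pattern recovers a known power-law index on synthetic data (the values are made up):

```python
# Fit a power law EM ~ T^a by a linear fit in log-log space.
import numpy as np
from scipy.optimize import curve_fit

def linear_fit(x, a, b):
    return a * x + b

t = np.logspace(6.0, 6.6, 20)          # temperatures in the fit window
em = 1e27 * (t / 1e6) ** 2.3           # synthetic EM with slope 2.3

popt, pcov = curve_fit(linear_fit, np.log10(t), np.log10(em))
print(round(popt[0], 2))               # recovered slope, ~2.3
```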
# +
fig,axes = plt.subplots(2,2,figsize=(16,9),sharex=True,sharey=True)
plt.subplots_adjust(wspace=0,hspace=0)
for ax,(i,freq) in zip(axes.flatten(),enumerate([250,750,2500,5000])):
#### Ground Truth ####
# make 1d selection from boxed region
temperature_bin_edges,em = ground_truth_em['{}'.format(freq)].get_1d_distribution(u.Quantity((left_x,right_x)),
u.Quantity((bottom_y,top_y)))
# compute slope
t_bin_centers = (temperature_bin_edges[1:] + temperature_bin_edges[:-1])/2.
i_fit = np.where(np.logical_and(t_bin_centers>=1e6*u.K,
t_bin_centers<=4e6*u.K))
t_fit = t_bin_centers[i_fit].value
em_fit = em[i_fit].value
popt,pcov = curve_fit(linear_fit,np.log10(t_fit),np.log10(em_fit))
# plot
ax.step(temperature_bin_edges[:-1],em,where='post',
color=sns.color_palette('deep')[i],label=r'$t_N={}$ s, $a={:.2f}$'.format(freq,popt[0]))
ax.step(temperature_bin_edges[1:],em,where='pre',
color=sns.color_palette()[i])
ax.plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c='k',lw=3,ls='-',alpha=1.0)
#### Regularized DEM ####
i_fit = np.where(np.logical_and(demreg_simulate['{}'.format(freq)].temperature_bin_centers>=1e6,
demreg_simulate['{}'.format(freq)].temperature_bin_centers<=3e6))
t_fit = demreg_simulate['{}'.format(freq)].temperature_bin_centers[i_fit]
em_fit = demreg_simulate['{}'.format(freq)].em[i_fit]
popt,pcov = curve_fit(linear_fit,np.log10(t_fit),np.log10(em_fit))
ax.errorbar(demreg_simulate['{}'.format(freq)].temperature_bin_centers,
demreg_simulate['{}'.format(freq)].em,
yerr=demreg_simulate['{}'.format(freq)].em_errors,
xerr=[demreg_simulate['{}'.format(freq)].temperature_error_minus,
demreg_simulate['{}'.format(freq)].temperature_error_plus],
ls='',marker='',color=sns.color_palette()[i])
ax.plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c='k',lw=3,ls='--',alpha=1.0)
print(demreg_simulate['{}'.format(freq)].chi_squared)
# em loci curves
for j,rw in enumerate(resolved_wavelengths):
ax.plot(fake_loop.electron_temperature,
demreg_simulate['{}'.format(freq)].regularized_data[j]/contribution_functions[rw],
color='k',alpha=0.2)
# frequency label
ax.text(3e6,5e25,r'$t_N={}$ s'.format(freq),fontsize=matplotlib.rcParams['axes.labelsize'])
axes[0,0].set_xscale('log')
axes[0,0].set_yscale('log')
axes[0,0].set_xlim([temperature_bin_edges[0].value,temperature_bin_edges[-1].value])
axes[0,0].set_ylim([2e25,8e28])
axes[1,0].set_xlabel(r'$T$ [K]')
axes[1,1].set_xlabel(r'$T$ [K]')
axes[0,0].set_ylabel(r'$\mathrm{EM}$ [cm$^{-5}$]')
axes[1,0].set_ylabel(r'$\mathrm{EM}$ [cm$^{-5}$]')
fig.savefig('../loops-workshop-2017-talk/template/img/em_true_predict_4panel.png',
dpi=200,bbox_inches='tight')
# +
fig,axes = plt.subplots(1,2,figsize=(16,9),sharex=True,sharey=True)
plt.subplots_adjust(wspace=0,hspace=0)
for i,freq in enumerate([250,750,2500,5000]):
#### Ground Truth ####
# make 1d selection from boxed region
temperature_bin_edges,em = ground_truth_em['{}'.format(freq)].get_1d_distribution(u.Quantity((left_x,right_x)),
u.Quantity((bottom_y,top_y)))
# compute slope
t_bin_centers = (temperature_bin_edges[1:] + temperature_bin_edges[:-1])/2.
i_fit = np.where(np.logical_and(t_bin_centers>=1e6*u.K,
t_bin_centers<=4e6*u.K))
t_fit = t_bin_centers[i_fit].value
em_fit = em[i_fit].value
popt,pcov = curve_fit(linear_fit,np.log10(t_fit),np.log10(em_fit))
# plot
axes[0].step(temperature_bin_edges[:-1],em,where='post',
color=sns.color_palette()[i],
label=r'$t_N={}$ s, $a={:.2f}$'.format(freq,popt[0]))
axes[0].step(temperature_bin_edges[1:],em,where='pre',
color=sns.color_palette()[i])
axes[0].plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c=sns.color_palette()[i],lw=3,ls='-',alpha=0.7)
#### Regularized DEM ####
# compute slope
i_fit = np.where(np.logical_and(demreg_simulate['{}'.format(freq)].temperature_bin_centers>=1e6,
demreg_simulate['{}'.format(freq)].temperature_bin_centers<=3e6))
t_fit = demreg_simulate['{}'.format(freq)].temperature_bin_centers[i_fit]
em_fit = demreg_simulate['{}'.format(freq)].em[i_fit]
em_fit_errors = demreg_simulate['{}'.format(freq)].em_errors[i_fit]
popt,pcov = curve_fit(linear_fit,np.log10(t_fit),np.log10(em_fit),
sigma=np.log10(em_fit_errors),
#absolute_sigma=True
)
# plot
axes[1].plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c=sns.color_palette()[i],lw=3,ls='-',alpha=0.7)
axes[1].step(demreg_simulate['{}'.format(freq)].temperature_bins[:-1],
demreg_simulate['{}'.format(freq)].em,where='post',
color=sns.color_palette()[i],
label=r'$a={:.2f}$'.format(popt[0]))
axes[1].step(demreg_simulate['{}'.format(freq)].temperature_bins[1:],
demreg_simulate['{}'.format(freq)].em,where='pre',
color=sns.color_palette()[i])
axes[1].errorbar(demreg_simulate['{}'.format(freq)].temperature_bin_centers,
demreg_simulate['{}'.format(freq)].em,
yerr=demreg_simulate['{}'.format(freq)].em_errors,
ls='',marker='',color=sns.color_palette()[i],alpha=0.4,lw=3)
#### Warren et al 2012 data ####
# compute slope
i_fit = np.where(np.logical_and(dem_output_warren.temperature_bin_centers>=1e6,
dem_output_warren.temperature_bin_centers<=4e6))
t_fit = dem_output_warren.temperature_bin_centers[i_fit]
em_fit = dem_output_warren.em[i_fit]
popt,pcov = curve_fit(linear_fit,np.log10(t_fit),np.log10(em_fit),
#absolute_sigma=True,
sigma=np.log10(dem_output_warren.em_errors[i_fit])
)
axes[0].plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c='k',lw=3,ls='-',alpha=0.4,
label=r'Observed, $a={:.2f}$'.format(popt[0]))
axes[1].plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c='k',lw=3,ls='-',alpha=0.4)
axes[0].errorbar(dem_output_warren.temperature_bin_centers,dem_output_warren.em,
yerr=dem_output_warren.em_errors,
ls='',marker='',color='k',alpha=0.4,lw=3)
axes[1].errorbar(dem_output_warren.temperature_bin_centers,dem_output_warren.em,
yerr=dem_output_warren.em_errors,
ls='',marker='',color='k',alpha=0.4,lw=3)
axes[0].step(dem_output_warren.temperature_bins[:-1],
dem_output_warren.em,where='post',
color='k',lw=3,alpha=0.2)
axes[0].step(dem_output_warren.temperature_bins[1:],
dem_output_warren.em,where='pre',
color='k',lw=3,alpha=0.2)
axes[1].step(dem_output_warren.temperature_bins[:-1],
dem_output_warren.em,where='post',
color='k',lw=3,alpha=0.2)
axes[1].step(dem_output_warren.temperature_bins[1:],
dem_output_warren.em,where='pre',
color='k',lw=3,alpha=0.2)
axes[0].set_xscale('log')
axes[0].set_yscale('log')
axes[0].set_xlim([temperature_bin_edges[0].value,temperature_bin_edges[-1].value])
axes[0].set_ylim([2e25,5e28])
axes[0].legend(loc=2,frameon=False)
axes[1].legend(loc=2,frameon=False)
axes[0].set_xlabel(r'$T$ [K]')
axes[1].set_xlabel(r'$T$ [K]')
axes[0].set_ylabel(r'$\mathrm{EM}$ [cm$^{-5}$]')
fig.savefig('../loops-workshop-2017-talk/template/img/em_true_predict_2panel.png',
dpi=200,bbox_inches='tight')
# +
fig = plt.figure(figsize=(10,9))
ax = fig.gca()
for i,freq in enumerate([750,'750-ion']):
#### Ground Truth ####
# make 1d selection from boxed region
temperature_bin_edges,em = ground_truth_em['{}'.format(freq)].get_1d_distribution(u.Quantity((left_x,right_x)),
u.Quantity((bottom_y,top_y)))
# compute slope
t_bin_centers = (temperature_bin_edges[1:] + temperature_bin_edges[:-1])/2.
i_fit = np.where(np.logical_and(t_bin_centers>=1e6*u.K,
t_bin_centers<=4e6*u.K))
t_fit = t_bin_centers[i_fit].value
em_fit = em[i_fit].value
popt,pcov = curve_fit(linear_fit,np.log10(t_fit),np.log10(em_fit))
# plot
ax.step(temperature_bin_edges[:-1],em,where='post',
color=sns.color_palette()[i],
label=r'$t_N={}$ s, $a={:.2f}$'.format(freq,popt[0]))
ax.step(temperature_bin_edges[1:],em,where='pre',
color=sns.color_palette()[i])
ax.plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c=sns.color_palette()[i],lw=3,ls='-',alpha=0.7)
#### Regularized DEM ####
# compute slope
i_fit = np.where(np.logical_and(demreg_simulate['{}'.format(freq)].temperature_bin_centers>=1e6,
demreg_simulate['{}'.format(freq)].temperature_bin_centers<=3.5e6))
t_fit = demreg_simulate['{}'.format(freq)].temperature_bin_centers[i_fit]
em_fit = demreg_simulate['{}'.format(freq)].em[i_fit]
popt,pcov = curve_fit(linear_fit,np.log10(t_fit),np.log10(em_fit))
# plot
ax.plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c=sns.color_palette()[i],lw=3,ls='-',alpha=0.7)
ax.step(demreg_simulate['{}'.format(freq)].temperature_bins[:-1],
demreg_simulate['{}'.format(freq)].em,where='post',
color=sns.color_palette()[i],
label=r'$a={:.2f}$'.format(popt[0]))
ax.step(demreg_simulate['{}'.format(freq)].temperature_bins[1:],
demreg_simulate['{}'.format(freq)].em,where='pre',
color=sns.color_palette()[i])
ax.errorbar(demreg_simulate['{}'.format(freq)].temperature_bin_centers,
demreg_simulate['{}'.format(freq)].em,
yerr=demreg_simulate['{}'.format(freq)].em_errors,
ls='',marker='',color=sns.color_palette()[i],alpha=0.4,lw=3)
#### Warren et al 2012 data ####
# compute slope
i_fit = np.where(np.logical_and(dem_output_warren.temperature_bin_centers>=1e6,
dem_output_warren.temperature_bin_centers<=4e6))
t_fit = dem_output_warren.temperature_bin_centers[i_fit]
em_fit = dem_output_warren.em[i_fit]
popt,pcov = curve_fit(linear_fit,np.log10(t_fit),np.log10(em_fit))
ax.plot(t_fit,10.**linear_fit(np.log10(t_fit),*popt),c='k',lw=3,ls='-',alpha=0.4)
ax.errorbar(dem_output_warren.temperature_bin_centers,dem_output_warren.em,
yerr=dem_output_warren.em_errors,
ls='',marker='',color='k',alpha=0.4,lw=3)
ax.step(dem_output_warren.temperature_bins[:-1],
dem_output_warren.em,where='post',
color='k',lw=3,alpha=0.2)
ax.step(dem_output_warren.temperature_bins[1:],
dem_output_warren.em,where='pre',
color='k',lw=3,alpha=0.2)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlim([temperature_bin_edges[0].value,temperature_bin_edges[-1].value])
ax.set_ylim([2e25,5e28])
ax.legend(loc=2,frameon=False)
# -
# source: notebooks/calculate_EM1d_distributions.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/agiagoulas/page-stream-segmentation/blob/master/model_training/TextModel_Training.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="26g5ik5t0Xv_"
# # Setup & Imports
# + [markdown] id="YJgSRbWb0bIY"
# Connect to Google Drive when working in Google Colab
# + id="aluLZfzisSAe"
from google.colab import drive
drive.mount('/content/gdrive')
# + [markdown] id="jqGReoW90gv1"
# Set working directory
# + id="UrN0yqgkmTkU"
working_dir = "/Tobacco800/"
# + [markdown] id="gNFjAEZy0jid"
# Imports
# + id="jtqGmgL1kMKo"
# !git clone https://github.com/facebookresearch/fastText.git
# !pip install fastText/.
import csv, re, math
import numpy as np
import requests
import sklearn.metrics as sklm
import fasttext
from importlib import reload
from keras.models import Sequential, Model
from keras.layers import *
from keras.utils import Sequence
from keras.utils import *
from keras.callbacks import ModelCheckpoint, Callback
# + [markdown] id="ko9L3vEB0nUn"
# Private Imports
# + id="8iy_WO-4lpZa"
model_request = requests.get("https://raw.githubusercontent.com/agiagoulas/page-stream-segmentation/master/app/pss/model.py")
with open("model.py", "w") as f:
f.write(model_request.text)
import model
# + [markdown] id="X0YN96Uj0k3t"
# Get fasttext word vectors
# + id="fleFdm8JlupX"
if 'ft' not in locals():
# !wget https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.zip
# !unzip wiki.en.zip
ft = fasttext.load_model("wiki.en.bin")
model.ft = ft
# + [markdown] id="2rerafuX0qQY"
# Load Tobacco800 Data
# + id="V2Yo3qikmUUM"
data_train = model.read_csv_data(working_dir + "tobacco800.train")
data_test = model.read_csv_data(working_dir + "tobacco800.test")
# + [markdown] id="TlliiJzc0tgU"
# # Single Page Model Training
# + [markdown] id="1LzoCrcD0yUi"
# Model Training
# + id="3h9zx6zlxH_W"
n_repeats = 10
n_epochs = 20
single_page_metric_history = []
optimize_for = 'kappa'
for i in range(n_repeats):
print("Repeat " + str(i+1) + " of " + str(n_repeats))
print("--------------------")
model_singlepage = model.compile_model_singlepage()
model_file = working_dir + "tobacco800_text_single-page_%02d.hdf5" % (i)
print(model_file)
checkpoint = model.ValidationCheckpoint(model_file, data_test, prev_page_generator=False, metric=optimize_for)
model_singlepage.fit(model.TextFeatureGenerator(data_train, prevpage=False, train=True),
callbacks = [checkpoint],
epochs = n_epochs)
single_page_metric_history.append(checkpoint.max_metrics)
print(single_page_metric_history)
# + [markdown] id="saM-O3f20zsB"
# Show metric results from different models
# + id="3WopOlVayudI"
for i, r in enumerate(single_page_metric_history):
model_file = working_dir + "tobacco800_text_single-page_%02d.hdf5" % (i)
print(str(i) + ' ' + str(r['kappa']) + ' ' + str(r['accuracy']) + ' ' + str(r['f1_micro']) + ' ' + str(r['f1_macro']) + ' ' + model_file)
# + [markdown] id="nU5ZZqTy06o9"
# Load model and generate prediction
# + id="roizjVt8z8vO"
model_singlepage = model.compile_model_singlepage()
model_singlepage.load_weights(working_dir + "tobacco800_text_single-page_00.hdf5")
y_predict = np.round(model_singlepage.predict(model.TextFeatureGenerator(data_test, prevpage=False, train=False)))
y_true = [model.LABEL2IDX[x[3]] for x in data_test]
print("Accuracy: " + str(sklm.accuracy_score(y_true, y_predict)))
print("Kappa: " + str(sklm.cohen_kappa_score(y_true, y_predict)))
print("F1 Micro " + str(sklm.f1_score(y_true, y_predict, average='micro')))
print("F1 Macro " + str(sklm.f1_score(y_true, y_predict, average='macro')))
# + [markdown] id="ClFLnJp20-uk"
# # Current & Prev Page Model Training
# + [markdown] id="WvcDS9bC1CKc"
# Model Training
# + id="ril34dvnmgV0"
n_repeats = 10
n_epochs = 20
prev_page_metric_history = []
optimize_for = 'kappa'
for i in range(n_repeats):
print("Repeat " + str(i+1) + " of " + str(n_repeats))
print("--------------------")
model_prevpage = model.compile_model_prevpage()
model_file = working_dir + "tobacco800_text_prev-page_%02d.hdf5" % (i)
print(model_file)
checkpoint = model.ValidationCheckpoint(model_file, data_test, prev_page_generator=True, metric=optimize_for)
model_prevpage.fit(model.TextFeatureGenerator(data_train, prevpage=True, train=True),
callbacks = [checkpoint],
epochs = n_epochs)
prev_page_metric_history.append(checkpoint.max_metrics)
print(prev_page_metric_history)
# + [markdown] id="mEuSzhwX1D4V"
# Show metric results from different models
# + id="VCZCAKuvynoH"
for i, r in enumerate(prev_page_metric_history):
model_file = working_dir + "tobacco800_text_prev-page_%02d.hdf5" % (i)
print(str(i) + ' ' + str(r['kappa']) + ' ' + str(r['accuracy']) + ' ' + str(r['f1_micro']) + ' ' + str(r['f1_macro']) + ' ' + model_file)
# + [markdown] id="v285dCXy1HbU"
# Load model and generate prediction
# + id="HU9HGArnzLDt"
model_prevpage = model.compile_model_prevpage()
model_prevpage.load_weights(working_dir + "tobacco800_text_prev-page_00.hdf5")
y_predict = np.round(model_prevpage.predict(model.TextFeatureGenerator(data_test, prevpage=True, train=False)))
y_true = [model.LABEL2IDX[x[3]] for x in data_test]
print("Accuracy: " + str(sklm.accuracy_score(y_true, y_predict)))
print("Kappa: " + str(sklm.cohen_kappa_score(y_true, y_predict)))
print("F1 Micro " + str(sklm.f1_score(y_true, y_predict, average='micro')))
print("F1 Macro " + str(sklm.f1_score(y_true, y_predict, average='macro')))
# source: model_training/TextModel_Training.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deploy a previously created model in SageMaker
# SageMaker decouples model creation/fitting from model deployment. **This short notebook shows how you can deploy a model that you have already created.** It assumes the model already appears in the `Models` section of the SageMaker console; if it does not, go back and fit/create the model before proceeding.
# For more information about deploying models, see https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-deploy-model.html
import boto3
from time import gmtime, strftime
# configs for model, endpoint and batch transform
model_name = (
"ENTER MODEL NAME" # enter name of a model from the 'Model panel' in the AWS SageMaker console.
)
sm = boto3.client("sagemaker")
# ## Deploy using an inference endpoint
# set endpoint name/config.
endpoint_config_name = "DEMO-model-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
endpoint_name = "DEMO-model-endpoint-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# +
create_endpoint_config_response = sm.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
print("Endpoint Config Arn: " + create_endpoint_config_response["EndpointConfigArn"])
create_endpoint_response = sm.create_endpoint(
EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
resp = sm.describe_endpoint(EndpointName=endpoint_name)
status = resp["EndpointStatus"]
print("Status: " + status)
# -
# If you go to the AWS SageMaker service console now, you should see that the endpoint creation is in progress.
# ## Deploy using a batch transform job
#
# A batch transform job should be used when you want to run inference over an entire dataset and shut down the resources once inference is finished.
# config for batch transform
batch_job_name = "ENTER_JOB_NAME"
output_location = "ENTER_OUTPUT_LOCATION" # S3 bucket/location
input_location = "ENTER_INPUT_LOCATION" # S3 bucket/location
request = {
"TransformJobName": batch_job_name,
"ModelName": model_name,
"TransformOutput": {
"S3OutputPath": output_location,
"Accept": "text/csv",
"AssembleWith": "Line",
},
"TransformInput": {
"DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": input_location}},
"ContentType": "text/csv",
"SplitType": "Line",
"CompressionType": "None",
},
"TransformResources": {
"InstanceType": "ml.m4.xlarge", # change this based on what resources you want to request
"InstanceCount": 1,
},
}
sm.create_transform_job(**request)
# source: code_snippets/Deploy_Previously_Created_Model.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # "NLP: Topic Modeling using LSI, LDA, and HDP"
#
# > "Using packages: gensim (for topic modeling), spacy (for text pre-processing), pyLDAvis (for visualization of LDA topic model), and python-igraph (for network analysis)"
# - toc: true
# - branch: master
# - badges: false
# - comments: true
# - hide: false
# - search_exclude: true
# - metadata_key1: metadata_value1
# - metadata_key2: metadata_value2
# - image: images/txt_fig6.png
# - categories: [NLP,Topic_Modeling,Network_Analysis, Python,gensim,spacy,pyLDAvis,igraph]
# - show_tags: true
# ## Purpose
# The purpose of this pilot project is to investigate the usefulness of *topic modeling* on a large sermon set. This set forms the basis of outreach initiatives for a Canadian non-profit. Topic modeling can be used to organize this substantial body of text. A second application is as a basis for a more intelligent form of searching.
# ## Dataset and Variables
# The complete dataset consists of about 1200 sermons preached by <NAME> between 1947 and 1965. All of these have since been transcribed and are available in text format (see http://www.messagehub.info/en/messages.do?show_en=true).
# From this dataset, I selected a sample of 13 sermons. This was not meant to be a representative sample. It is simply a small subset that I could lay my hands on easily and that I could familiarize myself with. I deliberately started small as this is a proof-of-concept pilot project.
# Each sermon comes in its own text file with each line containing a single sentence. This was my source data and I had to take it in this form. A *descriptor* identifies each sentence. An example of a descriptor is 2.1.c. Separated by periods the *descriptor* consists of:
#
# - Running sub-block number
# - Sentence number
# - Sentence type
# The sentence *type* may be one of:
# - h (heading)
# - n (normal)
# - c (conversation)
# - p (first line of poetry/song/hymn)
# - q (non-first line of poetry/song/hymn)
# - s (scripture)
# This format is not suitable for topic analysis. Consequently, I discarded the descriptors and merged all the sentences of each sermon into a single line of text. All these lines were then merged into a single .csv file called **all_titles.csv**, so each sermon enters the analysis as one text line. Keeping each sermon on one line also makes it easier to pick up *n-grams* across sentence boundaries. The all_titles.csv file therefore consists of 13 lines and contains 223,843 words, an average of about 17,219 words per sermon.
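# A minimal sketch of that descriptor-stripping and merging step (the regular expression assumes the `2.1.c`-style descriptor layout described above; the actual script may differ, and the sample sentences are illustrative):

```python
import re

def sermon_to_line(lines):
    """Drop the leading descriptor (e.g. '2.1.c') from each sentence and
    merge the whole sermon into a single line of text."""
    sentences = []
    for line in lines:
        # descriptor: sub-block number . sentence number . type letter
        sentence = re.sub(r'^\s*\d+\.\d+\.[hncpqs]\s*', '', line.strip())
        if sentence:
            sentences.append(sentence)
    return ' '.join(sentences)

print(sermon_to_line(["1.1.h The Rapture", "1.2.n We are glad to be here."]))
# -> "The Rapture We are glad to be here."
```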
# ## Problem Description / Objective
# As noted above, the complete sermon set consists of about 1200 sermons. This is a substantial body of text to search through for specific areas of interest. The human searcher can scan the titles of course, but this is laborious and far from ideal. The central principle of *LDA topic analysis* is that a document (sermon in this case) consists of a *distribution of topics*. At the lower level, each topic consists of a *distribution of words*. This is an ideal situation for my purpose. It is common for a minister to dwell on a variety of topics during a sermon. If everything works out as I envision, each sermon could be “tagged” with its top 5 topics, for example.
# If this approach works well, these higher probability topics could become the basis for categorizing and locating material. Another area of expansion could be to investigate the “evolution” of a topic over time (1947 to 1965). This is known as dynamic topic modeling.
# ## Methodology
# I will first describe the tooling environment. Next, the pre-processing will be covered. Lastly, the topic modeling will be described.
# ### Tooling
# I used the python language for writing the analysis code. An initial python script was prototyped for the merging of the sermon files into the final .csv file. Later, this python script was moved over to a IPython notebook called **TextMiningProject-pre.ipynb**. The remainder of the analysis was performed in two more notebooks called **TextMiningProject.ipynb** and **TextMiningProject-graph.ipynb**.
# The three notebooks can be accessed here:
#
# [TextMiningProject-pre.ipynb](https://nbviewer.jupyter.org/github/kobus78/TextMiningProject/blob/master/TextMiningProject-pre.ipynb "TextMiningProject-pre.ipynb")
#
# [TextMiningProject.ipynb](https://nbviewer.jupyter.org/github/kobus78/TextMiningProject/blob/master/TextMiningProject.ipynb "TextMiningProject.ipynb")
#
# [TextMiningProject-graph.ipynb](https://nbviewer.jupyter.org/github/kobus78/TextMiningProject/blob/master/TextMiningProject-graph.ipynb "TextMiningProject-graph.ipynb")
# The following python packages were used:
# - matplotlib
# - gensim (for topic modeling)
# - numpy
# - spacy (for text pre-processing)
# - pyLDAvis (for visualization of LDA topic model)
# - pandas
# - python-igraph (for network analysis)
# The pre-processing notebook executed on a local Mac mini. The topic modeling and graph modeling notebooks executed on an Ubuntu 16.04 virtual machine on a remote server. I tried to combine these notebooks but was unsuccessful: I could not get all the topic modeling packages to execute locally. My apologies for having two notebooks for the main analysis.
# ### Pre-processing
# As <NAME> said, "NLP is 80% preprocessing." (https://www.linkedin.com/in/levkonst/?originalSubdomain=uk)
# #### Part 1
# The first part of pre-processing is about transforming the source files (one per sermon) into a merged .csv file. This is taken care of in the notebook **TextMiningProject-pre.ipynb**. Because I used only 13 sermon files, I did not think it worth the effort to batch process the files. I merely filled in one file name at a time in the notebook (variable MAIN). That input file was then ingested from the *input* folder, processed into a single string/line, and output to the *output* folder as a “-top” file (“top” is for topic analysis). Here is a list of the input files:
# ```
# Mac-mini:input kobus$ ls
# 1965-0919_Thirst_ENG_15-1102-B123E1R-x.txt
# 1965-1031y_Leadership_ENG_17-0901-B123E1R-x.txt
# 1965-1121_WhatHouseWillYouBuildMe_ENG_15-1102-B123E1R-x.txt
# 1965-1125_TheInvisibleUnionOfTheBrideOfChrist_ENG_17-0502-B123E1R-x.txt
# 1965-1127b_TryingToDoGodAServiceWithoutItBeingGodsWill_ENG_15-1002-B123-x.txt
# 1965-1127z_IHaveHeardButNowISee_ENG_14-1102-B123E1R-x.txt
# 1965-1128x_GodsOnlyProvidedPlaceOfWorship_ENG_14-1101-t.txt
# 1965-1128z_OnTheWingsOfASnowWhiteDove_ENG_17-0501-B123-x.txt
# 1965-1204_TheRapture_ENG_16-1102-B123-x.txt
# 1965-1205_ThingsThatAreToBe_ENG_17-0203-B123E1R-x.txt
# 1965-1206_ModernEventsAreMadeClearByProphecy_ENG_14-0901-B123-x.txt
# 1965-1207_Leadership_ENG_15-0402-B123E1R-x.txt
# 1965-1212_Communion_ENG_12-1201-B123E1R-x.txt
# ```
# These are the output files:
# ```
# Mac-mini:output kobus$ ls
# 1965-0919_Thirst_ENG_15-1102-B123E1R-x-top.txt
# 1965-1031y_Leadership_ENG_17-0901-B123E1R-x-top.txt
# 1965-1121_WhatHouseWillYouBuildMe_ENG_15-1102-B123E1R-x-top.txt
# 1965-1125_TheInvisibleUnionOfTheBrideOfChrist_ENG_17-0502-B123E1R-x-top.txt
# 1965-1127b_TryingToDoGodAServiceWithoutItBeingGodsWill_ENG_15-1002-B123-x-top.txt
# 1965-1127z_IHaveHeardButNowISee_ENG_14-1102-B123E1R-x-top.txt
# 1965-1128x_GodsOnlyProvidedPlaceOfWorship_ENG_14-1101-t-top.txt
# 1965-1128z_OnTheWingsOfASnowWhiteDove_ENG_17-0501-B123-x-top.txt
# 1965-1204_TheRapture_ENG_16-1102-B123-x-top.txt
# 1965-1205_ThingsThatAreToBe_ENG_17-0203-B123E1R-x-top.txt
# 1965-1206_ModernEventsAreMadeClearByProphecy_ENG_14-0901-B123-x-top.txt
# 1965-1207_Leadership_ENG_15-0402-B123E1R-x-top.txt
# 1965-1212_Communion_ENG_12-1201-B123E1R-x-top.txt
# all_titles.csv
# ```
# With the “-top” files ready in the output folder, the next step was to merge them together with the following command:
# ```
# # cat `ls *.txt` > all_titles.csv
# ```
# Next, the all_titles.csv was copied to the Ubuntu virtual machine with:
# ```
# scp output/all_titles.csv proj@192.168.1.208:~/TextMiningProject/
# ```
# #### Part 2
# The second part of the pre-processing was executed from the notebook **TextMiningProject.ipynb**.
# ##### English language model
# The first step was to load the *spacy* package's English language model, which includes a set of common *stopwords*. I had to tweak the default set to work around a bug in the package, so that capitalized variants of the default stopwords are also recognized.
# ##### Stopwords
# Next was the addition of my own stopwords, discovered by a process of trial-and-error. When multiple topics ended up having the same high-probability words, I sometimes added these to the stopwords. These words do not contribute to the individuality of topics and might be considered as noise. I added the following stopwords:
# ```
# my_stop_words = [
#     'said','Said','saying','Saying','thing','things','Thing','Things','man','day',
#     'church','Church','people','People','time','way','ways','Way','Ways',
#     'place','places','Place','Places','hand','age','ages','world','worlds',
#     'tonight','Tonight','day','days','Day','Days',
#     'brother','brothers','Brother','Brothers','sister','sisters','Sister','Sisters',
#     'woman','women','year','years','chapter','chapters','verse','verses',
#     'today','Today','mammy','Mammy','hand','Hand',
#     'prophet','prophets','Prophet','Prophets','life','Life','heart','hearts','message']
# ```
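# The capitalization workaround amounts to flagging each stopword variant on the vocabulary. A sketch, assuming spacy is installed (it uses the blank English pipeline so no model download is needed, and an abbreviated stopword list):

```python
import spacy

# blank English pipeline: tokenizer plus default language data, no model download
nlp = spacy.blank("en")

my_stop_words = ["thing", "things", "man", "day"]   # abbreviated from the list above
for w in my_stop_words:
    for variant in (w, w.capitalize(), w.upper()):  # cover capitalization variants
        nlp.vocab[variant].is_stop = True

doc = nlp("The Thing happened that day")
print([t.text for t in doc if not t.is_stop])
```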
# ##### Load dataset
# The dataset was loaded next. It consists of 13 text lines. Each sermon is in the form of a single line of text.
# ##### Clean dataset
# To clean the dataset the following steps were taken:
# - remove stopwords
# - keep alphanumeric tokens
# - remove punctuation tokens
# - remove numbers
# - keep tokens with more than 2 characters
# - keep tokens that are nouns
# - keep lemmas of remaining tokens
# The cleaning took about 2 minutes.
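# The filter chain above can be sketched library-agnostically. Here each token is a `(text, pos, lemma)` triple such as a tagger like spacy produces; the sample triples are illustrative:

```python
def clean_tokens(tokens, stop_words):
    """tokens: iterable of (text, pos, lemma) triples from a POS tagger.
    Applies the filter chain listed above; returns lemmas of survivors."""
    kept = []
    for text, pos, lemma in tokens:
        if text in stop_words:        # remove stopwords
            continue
        if not text.isalpha():        # drop punctuation and number tokens
            continue
        if len(text) <= 2:            # keep tokens with more than 2 characters
            continue
        if pos != "NOUN":             # keep tokens that are nouns
            continue
        kept.append(lemma)            # keep the lemma of the remaining token
    return kept

tagged = [("The", "DET", "the"), ("deer", "NOUN", "deer"),
          ("thirsts", "VERB", "thirst"), ("waters", "NOUN", "water"),
          ("42", "NUM", "42")]
print(clean_tokens(tagged, {"thing"}))  # -> ['deer', 'water']
```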
# ##### Bigrams
# I initially thought that the use of bigrams, even trigrams and higher n-grams might be beneficial. As it turned out, I rarely noticed a bigram high-probability word coming up in a topic. I decided to drop the use of bigrams. As it is, I settled on using nouns only in the end.
# ##### Dictionary
# A dictionary was created next from the cleaned corpus. It consists of 2486 unique tokens.
# ##### Document-Term-Matrix
# The dictionary was used to form a *document-term-matrix* (DTM) to hold the word vectors for a *bag-of-words* representation. I used a sparse representation consisting of a list of lists. The outer list contains the complete corpus. Each inner list contains the matrix entries for a document (sermon in this case) consisting of multiple tuples. The first entry in each tuple is the ID of the token in the dictionary. The second entry is the count of this token in the document represented by the specific inner list.
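# What the dictionary and the sparse DTM amount to can be shown in a few lines of plain Python, mirroring gensim's `Dictionary`/`doc2bow` (the two toy documents are illustrative):

```python
from collections import Counter

def build_dictionary(docs):
    """Map each unique token to an integer id (what gensim's Dictionary does)."""
    vocab = {}
    for doc in docs:
        for tok in doc:
            vocab.setdefault(tok, len(vocab))
    return vocab

def doc2bow(doc, vocab):
    """Sparse bag-of-words row for one document: (token_id, count) tuples."""
    counts = Counter(vocab[tok] for tok in doc)
    return sorted(counts.items())

docs = [["water", "soul", "water"], ["soul", "dove"]]
vocab = build_dictionary(docs)
print([doc2bow(d, vocab) for d in docs])
# -> [[(0, 2), (1, 1)], [(1, 1), (2, 1)]]
```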
# This brings us to the end of the pre-processing. Next, I will discuss the topic modeling.
# ### Topic Modeling
# Three topic models were evaluated:
# - Latent Semantic Indexing (LSI) Model
# - Latent Dirichlet Allocation (LDA) Model
# - Hierarchical Dirichlet Process (HDP) Model
# Finding a set of topics was an intensely interactive process. The practical reason for this is that these techniques are based on unsupervised learning principles and it is usually up to the analyst to decide on how to cluster or group the data – in this case into groups of topics and groups of words. Each grouping comes in the form of a discrete probability distribution, also called topic proportions and word proportions.
# Fortunately, the Hierarchical Dirichlet Process Model provides a suggestion of the optimal number of topics. I used this value (which was 20 topics) as the initial value for the other two models for the number of topics. In the end, I ended up with 13 topics for the LSI model, 14 topics for the LDA model, and 20 topics for the HDP model.
# The LSI model is an older algorithm, and the LDA model was developed to fix some of its issues (Blei 2012).
# ## Findings and Discussion
# Here are the titles of the sermons again to help with the placement of topics. They have been color-coded to show the association with topics (I have some familiarity with some of the sermons).
# 
# The main theme/topic of the 1965-0919_Thirst sermon came out clearly in each of the models (**highlighted in yellow**). It is about how a wounded *deer* that has been chased by the *dogs*, *thirsts* for water and has a strong *desire* to reach it, else it will surely die. The spiritual application is for the human *soul* to reach out for the *water* of life and be *enlightened* by the *Word* of life. I have italicized the high-probability tokens identified by the models.
# The main topic of the 1965-1212_Communion sermon was also identified accurately (**highlighted in light green**). The high-probability tokens were: *communion*, *ordinance*, *water*, *order*, *bread*, *blood*, *supper*, *lamb*, *sacrifice*, *body*, *sin*.
# The theme of the 1965-1031y_Leadership sermon is how a child grows up by being submitted to a series of leadership roles that shapes his/her life (**highlighted in purple**). The *child* will progressively hear the *leadership voices* of *mother*, its *teacher*, *nature*, father (who happens to be a *business* man), and eventually God’s voice in the form of *revelation*.
# The 1965-1127b_TryingToDoGodAServiceWithoutItBeingGodsWill sermon (**highlighted in cyan**) relates how *king* David had *faith* and *inspiration* to recover the *ark* of the covenant from the Philistines. This gave rise to a great *revival* among the Israelites. In the end, it turns out that he made the wrong *choice*. It is compared with making the wrong *choice* by trying to do God a service without it being His will. This often happens by relying on the *denominational* church system and providing service by means of it for salvation, instead of relying on God himself.
# The 1965-1128z_OnTheWingsOfASnowWhiteDove sermon tells the story of Noah (**highlighted in grey**). A *dove* was released by Noah after the flood in order to find land; it came back carrying a freshly plucked olive leaf, a *sign* of life and *love* after the long *night* of the *water* of the Flood. This *sign* of love was given to Noah on the *wings* of a *snow*-white dove.
# ### Latent Semantic Indexing Model (13 topics across corpus)
# 
# ### Latent Dirichlet Allocation Model (14 topics across corpus)
# 
# ### Hierarchical Dirichlet Process Model (20 topics across corpus)
# 
# ### Visualization of Latent Dirichlet Allocation Model
# Figure 1 shows a visualization of the findings of the Latent Dirichlet Allocation Model.
# 
# Visualization was performed using the python pyLDAvis package, https://pypi.org/project/pyLDAvis/1.0.0/
# The main topic associated with *1965-1212_Communion* is highlighted in red. Note that the topic numbers in the visualization differ from those in the printed model output (a discrepancy I have also seen reported on the web). There are some overlaps among topics (which require further work). There is one dominant topic in the north-west and a number of small topics towards the east.
# The main topic associated with *1965-1207_Leadership* and *1965-1031y_Leadership* is shown in Figure 2.
# 
# The main topic associated with *1965-0919_Thirst* is shown in Figure 3.
# 
# The main topic associated with Noah’s dove (*1965-1128z_OnTheWingsOfASnowWhiteDove*) is shown in Figure 4.
# 
# ### Comparison of the 3 models using Topic Coherence
# I used topic coherence to compare the three models. This technique only works when the models being compared are based on the same dataset. As is evident from Figure 5, the LDA (Latent Dirichlet Allocation) model fared best, followed by the HDP (Hierarchical Dirichlet Process) model and then the LSI (Latent Semantic Indexing) model.
# 
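# To make the coherence comparison concrete, here is a minimal sketch of a UMass-style coherence score in plain Python. The documents and topics below are made up; in practice a library such as gensim's `CoherenceModel` computes this over the real corpus.

```python
from math import log

def umass_coherence(top_words, documents, eps=1.0):
    """UMass-style coherence: sum of log((D(wi, wj) + eps) / D(wj)) over
    ordered pairs of a topic's top words, where D(...) counts the documents
    containing all the given words.  Assumes every top word occurs in at
    least one document (so D(wj) > 0)."""
    doc_sets = [set(doc) for doc in documents]

    def d(*words):
        return sum(all(w in ds for w in words) for ds in doc_sets)

    return sum(
        log((d(top_words[i], top_words[j]) + eps) / d(top_words[j]))
        for i in range(1, len(top_words))
        for j in range(i)
    )

# Toy corpus echoing the sermon vocabulary (hypothetical documents)
docs = [["dove", "sign", "love"], ["dove", "flood", "water"],
        ["king", "faith", "ark"], ["king", "ark", "revival"]]
print(umass_coherence(["king", "ark"], docs))   # co-occurring words: > 0
print(umass_coherence(["king", "dove"], docs))  # never co-occur: < 0
```

# A topic whose top words co-occur in documents scores higher than one whose top words never appear together, which is the intuition behind ranking the three models by coherence.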
# ## Network Analysis of the LSI Topic Model
# I have visualized the topic model of the LDA model above. For the visualization of a network model I will use the LSI topic model. Although this is the weakest model, its incidence matrix was the easiest to construct because it has the fewest topics (13). The top 10 words of each topic were used to connect the topics. The incidence matrix is in the file **IncidenceMatrixLSI.csv**, and the network analysis of the LSI topic model is in the file **TextMiningProject-graph.ipynb**.
# After reading in the incidence matrix, the NaN values are replaced with zeros and the data is prepared for consumption by the python-igraph package. A bipartite graph is created with 88 vertices and 142 edges; the vertex names come from the row and column names of the incidence matrix. Figure 6 shows the bipartite graph of the LSI topic model.
# 
# The red vertices represent the *topics* and the blue vertices the *words*.
# Next, the bipartite graph was projected into a graph for topics and a graph for words. Figure 7 shows the **topic** graph for the LSI topic model.
# 
# Figure 8 shows the **word** graph for the LSI topic model.
# 
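# The bipartite projection described above can be sketched in plain Python: two topics become connected in the topic graph when they share at least one word. The incidence data below is a made-up fragment; the notebook itself uses python-igraph for the real projection.

```python
from itertools import combinations

# Hypothetical fragment of the topic-word incidence structure:
# each topic maps to its set of top words (the 1-entries of its row).
incidence = {
    "topic_1": {"faith", "love", "king"},
    "topic_2": {"king", "ark", "revival"},
    "topic_3": {"dove", "water", "sign"},
}

def project_topics(incidence):
    """Project the bipartite topic-word graph onto topics: connect two
    topics iff they share at least one word, weighted by shared words."""
    edges = []
    for a, b in combinations(sorted(incidence), 2):
        shared = incidence[a] & incidence[b]
        if shared:
            edges.append((a, b, len(shared)))
    return edges

print(project_topics(incidence))  # [('topic_1', 'topic_2', 1)]
```

# The word projection works symmetrically: two words are connected when they appear in the top-10 list of the same topic.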
# ## Conclusion and Further Work
# Overall, I am impressed by the possibilities of topic modeling. Although the models I derived were not yet adequate for my purposes, I think it might not require much more work to make them usable. I would like to pursue this idea further and apply it in the areas of semantic search, tagging of each sermon with its highest-probability topics, and tracking of topics over time by means of dynamic topic models (Blei 2012).
# ## References/Bibliography
# Cloverdale Bible Way:
# https://bibleway.org/
# Message Hub:
# http://www.messagehub.info/en/messages.do?show_en=true
# <NAME>:
# https://www.linkedin.com/in/levkonst/?originalSubdomain=uk
# Blei, D.M. Probabilistic Topic Models, 2012.
# pyLDAvis python package:
# https://pypi.org/project/pyLDAvis/1.0.0/
| _notebooks/2019-04-06-TextMiningProject.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''myenv'': conda)'
# name: python3710jvsc74a57bd0850c152f8f84f3548ed5f09fd834180be13dced819bcdf283d955f3afcab8cf5
# ---
import numpy as np
import pandas as pd
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
from keras.optimizers import RMSprop
data = pd.read_csv('train.csv')
data.info()
y = data['label']
x = data.drop(labels=['label'], axis=1)
y.shape, x.shape
unique = data['label'].unique()
unique, len(unique)  # so there are 10 unique values, from 0 to 9
# splitting the dataset into training and test data set
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.2, random_state=42)
# now we need to reshape our dataset
x_train = x_train.values.reshape(-1,28,28,1)
x_test = x_test.values.reshape(-1,28,28,1)
# determine the shape of input image
in_shape = x_train.shape[1:]
in_shape
plt.imshow(x_train[0].reshape([28,28]))
# now we need to normalize the pixel values
x_train = x_train.astype('float32')/255.0
x_test = x_test.astype('float32')/255.0
# now we will initialize the CNN
cnn = tf.keras.models.Sequential()
# now we will add our first convolution layer
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(5,5), activation='relu', input_shape = in_shape))
# adding our second convolutional layer
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(5,5), activation='relu', input_shape = in_shape))
# now we will do max pooling; other pooling techniques are available as well, such as average pooling
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2))
# adding a third convolutional layer
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu'))
# adding a fourth convolutional layer
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu'))
# now, again we do max pooling
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2))
# flattening the layer to feed it into our artificial neural network
cnn.add(tf.keras.layers.Flatten())
# doing full connection now
cnn.add(tf.keras.layers.Dense(units = 256, activation = 'relu'))
# our final output layer
cnn.add(tf.keras.layers.Dense(units = 10, activation = 'softmax'))
# defining an RMSprop optimizer (note: the model is compiled with 'adam' below, so this instance is not actually used)
optimizer = RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-08)
# compiling our CNN model
cnn.compile(optimizer='adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
# now training our model
cnn.fit(x_train, y_train, validation_split=0.2, epochs= 100, batch_size=128, verbose=1)
# making a single prediction
image = x_test[2]
y_pred = cnn.predict(np.asarray([image]))
y_pred
plt.imshow(x_test[2].reshape([28,28]))
# now making the predictions over test data set
test_data = pd.read_csv('test.csv')
test_data.head()
# normalizing the test data
test = test_data / 255.0
# reshaping the test data
test_final_data = test.values.reshape(-1,28,28,1)
# predicting the model
label = cnn.predict(test_final_data)
label
label.shape
label = np.argmax(label, axis=1)
sample_submission = pd.read_csv('sample_submission.csv')
sample_submission.head()
# Creating the submission file with ImageId and Label columns
index = test_data.index.values + 1
data = {'ImageId' : index, "Label" : label}
df = pd.DataFrame(data=data)
df.head()
submit_file = pd.DataFrame({'ImageId' : index, "Label" : label.astype(int).ravel()})
submit_file.to_csv("submission.csv",index = False)
submit_file
| digit_recognisition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:jcopml]
# language: python
# name: conda-env-jcopml-py
# ---
from luwiji.evaluation import illustration, demo
from jcopml.plot import plot_classification_report, plot_confusion_matrix, plot_roc_curve, plot_pr_curve
X_train, y_train, X_test, y_test, model = demo.get_classification_data()
# # Method 1: Simple Plot
# Cannot be used for multivariate data with more than 2 features
illustration.overlay_classification
# # Method 2: Classification Report
plot_classification_report(X_train, y_train, X_test, y_test, model)
plot_classification_report(X_train, y_train, X_test, y_test, model, report=True)
# # Method 3: Actual vs Prediction Plot (?) -> Confusion Matrix
plot_confusion_matrix(X_train, y_train, X_test, y_test, model)
# # Method 4: Receiver Operating Characteristic (ROC)
X_train, y_train, X_test, y_test, model = demo.get_binary_class_data()
illustration.roc
illustration.roc_auc
plot_roc_curve(X_train, y_train, X_test, y_test, model)
# # Method 5: Precision-recall curve
illustration.precision_recall
plot_pr_curve(X_train, y_train, X_test, y_test, model)
# # ROC vs PR (Optional)
# - Lack of data
#     - scores tend to be overestimated
#     - in general (this includes other metrics too), when data is scarce the score is not reliable
#     - illustration: imagine having only 1 data point; the score is automatically either 0 or 100%, which is very unstable
#
# - Balanced data (positive samples 20%-80%)
#     - both ROC_AUC and PR_AUC can be used
#
# - Imbalanced (positive samples > 95%)
#     - ROC_AUC will overestimate (lack of data makes the overestimation even worse)
#     - recommendation: redefine the labels so that negative samples are the majority, then use PR_AUC
#
# - Imbalanced (positive samples < 5%)
#     - use PR_AUC, because ROC_AUC tends to overestimate
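# The contrast between ROC AUC and PR AUC under class imbalance can be reproduced with a small pure-Python sketch. The labels and scores below are made up: with rare positives, ROC AUC can look healthy while average precision, the PR-curve summary, exposes the weak precision.

```python
def roc_auc(y_true, scores):
    """ROC AUC via its rank interpretation: the probability that a randomly
    chosen positive is scored above a randomly chosen negative (ties 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(y_true, scores):
    """PR-curve summary: mean of precision@k at every rank k that is a positive."""
    ranked = sorted(zip(scores, y_true), reverse=True)
    hits, precs = 0, []
    for k, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            precs.append(hits / k)
    return sum(precs) / hits

# Hypothetical rare-positive data: 2 positives among 20 samples, with two
# negatives outranking the best positive.
y = [0, 0, 1, 0, 1] + [0] * 15
s = [round(1 - 0.04 * i, 2) for i in range(20)]  # strictly decreasing scores
print(round(roc_auc(y, s), 3))            # 0.861: looks healthy
print(round(average_precision(y, s), 3))  # 0.367: exposes the weak precision
```

# The same ranking of predictions yields a flattering ROC AUC but a poor average precision, which is why PR_AUC is recommended above for the rare-positive regime.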
| .ipynb_checkpoints/Part 6 - Classification Evaluation-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loading the documentation
# ## Website address
# The documentation can be found here
# https://qiskit.org/documentation/metal/
# ## Access the docs through code
# Utilize the `open_docs()` function to load the docs in a web browser.
from qiskit_metal import open_docs
open_docs()
# ## Build the documentation
# If you choose to build the docs yourself rather than access them on the website, you'll want to run the script locally. **This will produce an error if executed within a jupyter notebook**, so be sure to execute this command in ipython directly.
#
# Replace the path below with the relative path to the build_docs.py script on your computer.
#
# ``` python
# # %run ../../docs/build_docs.py
# ```
#
# #### Please be patient. The build can take ~15 minutes. The stdout output might appear in the shell used to launch Jupyter Notebook.
#
# The build has two phases. First it will install the required python libraries for the build. Then it will run the `make html` command to complete the build.
# #### You can now access the documentation with the following command.
#
# You can also directly open: `../../docs/build/html/index.html`
| tutorials/7 Documentation/Accessing qiskit-metal documentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.10.2 64-bit (''venv10'': venv)'
# language: python
# name: python3
# ---
# Python’s standard library has functions that accept str or bytes
# arguments and behave differently depending on the type.
# str Versus bytes in Regular Expressions
# Example 4-23. ramanujan.py: compare behavior of simple str and bytes
# regular expressions
# +
import re
re_numbers_str = re.compile(r'\d+')
re_words_str = re.compile(r'\w+')
re_numbers_bytes = re.compile(rb'\d+')
re_words_bytes = re.compile(rb'\w+')
text_str = ("Ramanujan saw \u0be7\u0bed\u0be8\u0bef"
" as 1729 = 13 + 123 = 93 + 103.")
text_bytes = text_str.encode('utf_8')
print(f'Text\n {text_str!r}')
print('Numbers')
print(' str :', re_numbers_str.findall(text_str))
print(' bytes:', re_numbers_bytes.findall(text_bytes))
print('Words')
print(' str :', re_words_str.findall(text_str))
print(' bytes:', re_words_bytes.findall(text_bytes))
# -
# Example 4-24. listdir with str and bytes arguments and results
import os
os.listdir('.')
os.listdir(b'.')
| chapter 4-Unicode Text versus Bytes/example 4-23--Dual-Mode str and bytes APIs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # T-test
# The t-test module allows comparing means of continuous variables of interest to known means or across two groups. There are four main types of comparisons.
# - Comparison of one-sample mean to a known mean
# - Comparison of two groups from the same sample
# - Comparison of two means from two different samples
# - Comparison of two paired means
#
# ```Ttest()``` is the class that implements all four types of comparison. To run a comparison, the user calls the method ```compare()``` with the appropriate parameters.
# +
import numpy as np
import pandas as pd
from pprint import pprint
from samplics.datasets import Auto
from samplics.categorical.comparison import Ttest
# -
# ## Comparison of one-sample mean to a known mean
# For this comparison, the mean of a continuous variable, i.e. mpg, is compared to a known mean. In the example below, the user tests whether the average mpg is equal to 20. Hence, the null hypothesis is **H0: mean(mpg) = 20**. There are three possible alternatives to this null hypothesis:
# - Ha: mean(mpg) < 20 (less_than alternative)
# - Ha: mean(mpg) > 20 (greater_than alternative)
# - Ha: mean(mpg) != 20 (not_equal alternative)
#
# All three alternatives are automatically computed by the method ```compare()```. This behavior is similar across the four types of comparison.
# +
# Load Auto sample data
auto_cls = Auto()
auto_cls.load_data()
auto = auto_cls.data
mpg = auto["mpg"]
one_sample_known_mean = Ttest(samp_type="one-sample")
one_sample_known_mean.compare(y=mpg, known_mean=20)
print(one_sample_known_mean)
# -
# The print below shows the information encapsulated in the object. ```point_est``` provides the sample mean. Similarly, ```stderror, stddev, lower_ci, and upper_ci``` provide the standard error, standard deviation, lower bound of the confidence interval (CI), and upper bound of the CI, respectively. The class member ```stats``` provides the statistics related to the three t-tests (one per alternative hypothesis). There is additional information encapsulated in the object, as shown below.
pprint(one_sample_known_mean.__dict__)
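# As a rough illustration of the statistic behind the one-sample comparison, the t value can be computed with the standard library alone. The mpg values below are made up; the real ```Ttest``` additionally returns p-values for all three alternatives.

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(y, known_mean):
    """t = (sample mean - known mean) / (s / sqrt(n)), compared against a
    t distribution with n - 1 degrees of freedom."""
    n = len(y)
    return (mean(y) - known_mean) / (stdev(y) / sqrt(n))

mpg = [18, 21, 20, 25, 17, 22, 19, 23]  # hypothetical sample
t = one_sample_t(mpg, known_mean=20)
# The same t serves all three alternatives; only the tail area of the
# reference distribution changes (left, right, or both tails).
print(round(t, 3))  # 0.662
```

# Survey packages such as samplics additionally account for the sampling design when computing the standard error, which this sketch ignores.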
# ## Comparison of two groups from the same sample
# This type of comparison is used when the two groups come from the same sample. For example, after running a survey, the user wants to know whether domestic cars have the same mpg on average as foreign cars. The parameter ```group``` indicates the categorical variable. NB: note that, at this point, ```Ttest()``` does not take into account potential dependencies between groups.
# +
foreign = auto["foreign"]
one_sample_two_groups = Ttest(samp_type="one-sample")
one_sample_two_groups.compare(y=mpg, group=foreign)
print(one_sample_two_groups)
# -
# Since there are two groups for this comparison, the sample mean, standard error, standard deviation, lower bound CI, and upper bound CI are provided per group as Python dictionaries. The class member ```stats``` provides statistics for the comparison assuming both equal and unequal variances.
# +
print(f"\nThese are the group means for mpg: {one_sample_two_groups.point_est}\n")
print(f"These are the group standard error for mpg: {one_sample_two_groups.stderror}\n")
print(f"These are the group standard deviation for mpg: {one_sample_two_groups.stddev}\n")
print("These are the computed statistics:\n")
pprint(one_sample_two_groups.stats)
# -
# ## Comparison of two means from two different samples
# This type of comparison should be used when the two groups come from different samples or different strata. The groups are assumed to be independent. Otherwise, the information is similar to the previous test. Note that, when instantiating the class, we used ```samp_type="two-sample"```.
# +
two_samples_unpaired = Ttest(samp_type="two-sample", paired=False)
two_samples_unpaired.compare(y=mpg, group=foreign)
print(two_samples_unpaired)
# +
print(f"\nThese are the group means for mpg: {two_samples_unpaired.point_est}\n")
print(f"These are the group standard error for mpg: {two_samples_unpaired.stderror}\n")
print(f"These are the group standard deviation for mpg: {two_samples_unpaired.stddev}\n")
print("These are the computed statistics:\n")
pprint(two_samples_unpaired.stats)
# -
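# The equal- and unequal-variance statistics reported in ```stats``` can be sketched with the standard library. The group values below are made up; ```Ttest``` additionally reports degrees of freedom and p-values.

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(y1, y2, equal_var=False):
    """Two-sample t statistic: pooled-variance form when equal_var=True,
    otherwise the Welch (unequal-variance) form."""
    n1, n2 = len(y1), len(y2)
    m1, m2 = mean(y1), mean(y2)
    v1, v2 = variance(y1), variance(y2)
    if equal_var:
        pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
        se = sqrt(pooled * (1 / n1 + 1 / n2))
    else:
        se = sqrt(v1 / n1 + v2 / n2)
    return (m1 - m2) / se

domestic = [19, 17, 22, 20, 18, 16]  # hypothetical mpg values
foreign = [25, 27, 24, 30, 26]
print(round(two_sample_t(domestic, foreign), 3))                  # Welch
print(round(two_sample_t(domestic, foreign, equal_var=True), 3))  # pooled
```

# With unequal group sizes the two forms give slightly different statistics, which is why both are reported.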
# ## Comparison of two paired means
# When two measures are taken from the same observations, the paired t-test is appropriate for comparing the means.
# +
two_samples_paired = Ttest(samp_type="two-sample", paired=True)
two_samples_paired.compare(y=auto[["y1", "y2"]], group=foreign)
print(two_samples_paired)
# -
# ```varnames``` can be used to rename the variables
# +
y1 = auto["y1"]
y2 = auto["y2"]
two_samples_paired = Ttest(samp_type="two-sample", paired=True)
two_samples_paired.compare(y=[y1, y2], varnames=["group_1", "group_2"], group=foreign)
print(two_samples_paired)
| docs/source/_build/html/.doctrees/nbsphinx/tutorial/ttest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="B2MJ82pv6Gfl" colab_type="code" colab={}
import os
import pandas as pd
import numpy as np
from sklearn import preprocessing,metrics
from IPython.core.display import HTML
pd.set_option("display.max_columns",75)
import warnings
warnings.filterwarnings('ignore')
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
from sklearn import linear_model,svm
from sklearn.metrics import average_precision_score
from sklearn.metrics import precision_recall_curve
# + id="zwb8KbT16Gfr" colab_type="code" colab={}
missing_values = ["n/a", "na", "--"," ","nan"," nan","Nan","NAN","N/A","NA","NaN"]
df = pd.read_excel("loan.xlsx", na_values = missing_values)
# + id="D09e2x_w6Gfv" colab_type="code" colab={}
# # Imports
# Pandas and numpy for data manipulation
import pandas as pd
import numpy as np
# No warnings about setting value on copy of slice
pd.options.mode.chained_assignment = None
# Display up to 60 columns of a dataframe
pd.set_option('display.max_columns', 60)
# Matplotlib visualization
import matplotlib.pyplot as plt
# %matplotlib inline
# Set default font size
plt.rcParams['font.size'] = 24
# Internal ipython tool for setting figure size
from IPython.core.pylabtools import figsize
# Seaborn for visualization
import seaborn as sns
sns.set(font_scale = 2)
# Splitting data into training and testing
from sklearn.model_selection import train_test_split
# + id="UpaGvIfl6Gfz" colab_type="code" outputId="02606441-3f47-4401-8f3a-40e27dd6c4a3" colab={}
df.head()
# + id="D5oEpcfR6Gf7" colab_type="code" outputId="2bdef4ee-2e0a-4c68-fd4f-fc3fa00b19cb" colab={}
df.shape
# + id="IHGQLk4y6GgA" colab_type="code" outputId="355531d2-0e38-47e0-a266-766961e9a974" colab={}
df.info()
# + id="-t7_T5Jb6GgG" colab_type="code" outputId="c444f893-f32b-4923-82d2-79895e1c9890" colab={}
df.select_dtypes(include=['object']).head()
# + id="NS3yGAwe6GgN" colab_type="code" outputId="2cfa4758-701e-4a56-b1cc-68b5521b77b7" colab={}
100 * df.isnull().sum() / len(df)
# + id="uTx75ca86GgS" colab_type="code" colab={}
# # Missing Values
# Function to calculate missing values by column
def missing_values_table(df):
# Total missing values
mis_val = df.isnull().sum()
# Percentage of missing values
mis_val_percent = 100 * df.isnull().sum() / len(df)
# Make a table with the results
mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
# Rename the columns
mis_val_table_ren_columns = mis_val_table.rename(
columns = {0 : 'Missing Values', 1 : '% of Total Values'})
# Sort the table by percentage of missing descending
mis_val_table_ren_columns = mis_val_table_ren_columns[
mis_val_table_ren_columns.iloc[:,1] != 0.0].sort_values(
'% of Total Values', ascending=False).round(1)
# Print some summary information
print ("Your selected dataframe has " + str(df.shape[1]) + " columns.\n"
"There are " + str(mis_val_table_ren_columns.shape[0]) +
" columns that have missing values.")
# Return the dataframe with missing information
return mis_val_table_ren_columns
# + id="3T9F5sGO6GgX" colab_type="code" colab={}
df=df.dropna(axis=1, how='all')
# + id="5ZdSlzup6Ggb" colab_type="code" outputId="55ad5a6b-4220-47f2-e5ea-0581a0922811" colab={}
missing_values_table(df).head(50)
# + id="DWNVTQ-m6Ggg" colab_type="code" colab={}
limitPer = len(df) * .80
df = df.dropna(thresh=limitPer,axis=1)
# + id="rZr0MMnv6Ggj" colab_type="code" outputId="c5be84c5-dcb3-4db8-8236-764b8f4e7a10" colab={}
missing_values_table(df).head(50)
# + id="Ko3yEiIG6Ggo" colab_type="code" colab={}
df.dropna(inplace=True)
# + id="4CnM9Edg6Ggs" colab_type="code" outputId="f9b86325-3c01-416e-b82c-2b351edc2d86" colab={}
missing_values_table(df).head(50)
# + id="iD3pCoCv6Ggx" colab_type="code" outputId="cbee37cd-c354-4cbd-9ea9-1c03ff1eb90b" colab={}
df.shape
# + id="QQ1j9jvr6Gg3" colab_type="code" outputId="304e7073-cafd-49de-ed15-3eecfc5fb101" colab={}
df["loan_status"].head(8)
# + id="d55eMTf56Gg-" colab_type="code" outputId="6fe02097-eebb-4a3b-8f29-182e20e0e514" colab={}
data_with_loanstatus_sliced = df[(df['loan_status']=="Fully Paid") | (df['loan_status']=="Charged Off")]
data_with_loanstatus_sliced.head()
di = {"Fully Paid":0, "Charged Off":1}
df_status_clean=data_with_loanstatus_sliced.replace({'loan_status':di})
df_status_clean.head()
# + id="0N7let9k6GhD" colab_type="code" outputId="f2224c45-5572-47ad-e783-cd282d7ab7f4" colab={}
df_status_clean["term"].unique()
# + id="Y0dmDdQj6GhL" colab_type="code" outputId="81e92df7-96ba-4e7d-9b3a-12787410de9d" colab={}
'''Data transformation/cleanup:
strip off textual parts and represent values as numeric
where it makes sense, then convert the datatype to numeric.
'''
print('Transform: term...')
df_status_clean['term'].replace(to_replace=' months', value='', regex=True, inplace=True)
df_status_clean['term'] = pd.to_numeric(df_status_clean['term'], errors='coerce')
df_status_clean['term'].isnull().sum()
# + id="ElCsYAIo6GhQ" colab_type="code" outputId="7fc37f69-c1af-4176-86bc-81befa7abd40" colab={}
df_status_clean['term'].head()
# + id="gpWNBkTL6GhV" colab_type="code" colab={}
copy_df=df_status_clean.copy()
# + id="ICBQ0fIG6Gha" colab_type="code" outputId="d68ea6f2-e774-4e37-a0ef-81e8b3cdf273" colab={}
copy_df["emp_length"]
# + id="GqPLcGSV6Ghe" colab_type="code" colab={}
# + id="_HJjc1G66Ghh" colab_type="code" outputId="406bc57e-86eb-4bf1-f3ca-04122b5e69fb" colab={}
df_status_clean["emp_length"].values
# + id="bx8mYC426Ghm" colab_type="code" outputId="6bb3f92c-5789-4b04-c119-d3d8dd733804" colab={}
print('Transform: emp_length...')
df_status_clean["emp_length"] = df_status_clean["emp_length"].replace({'years':'','year':'',' ':'','<':'','\+':'','n/a':'0'}, regex = True)
# + id="WH1oIxcb6Ghr" colab_type="code" outputId="d9e912d1-fd8e-4061-bfb4-76d789000fb7" colab={}
df_status_clean['emp_length'].unique()
# + id="Qgpg0WCo6Ghx" colab_type="code" colab={}
df_status_clean['emp_length'] = pd.to_numeric(df_status_clean['emp_length'], errors='coerce')
# + id="0Hw3vlwC6Gh1" colab_type="code" outputId="bca3193c-181f-4b89-f2dd-3513da20ef62" colab={}
df_status_clean['emp_length'].unique()
# + id="IHEyIecH6Gh6" colab_type="code" outputId="4e7ac750-33b8-4b0e-902b-1dc4a1c34e17" colab={}
print('Transform: grade...')
df_status_clean['grade'].replace(to_replace='A', value='0', regex=True, inplace=True)
df_status_clean['grade'].replace(to_replace='B', value='1', regex=True, inplace=True)
df_status_clean['grade'].replace(to_replace='C', value='2', regex=True, inplace=True)
df_status_clean['grade'].replace(to_replace='D', value='3', regex=True, inplace=True)
df_status_clean['grade'].replace(to_replace='E', value='4', regex=True, inplace=True)
df_status_clean['grade'].replace(to_replace='F', value='5', regex=True, inplace=True)
df_status_clean['grade'].replace(to_replace='G', value='6', regex=True, inplace=True)
df_status_clean['grade'] = pd.to_numeric(df_status_clean['grade'], errors='coerce')
df_status_clean['grade'].unique()
# + id="Ex1oB0Bn6GiB" colab_type="code" outputId="13ef7453-21ee-486b-f6fc-d05abc47855a" colab={}
df_status_clean.select_dtypes(include=['object']).head()
# + id="4Dvn9B4a6GiF" colab_type="code" colab={}
# columns to drop (renamed to avoid shadowing the builtin `object`)
cols_to_drop = ['sub_grade','emp_title','home_ownership','verification_status','pymnt_plan','url','purpose','title','zip_code','addr_state','initial_list_status','application_type']
df_status_clean = df_status_clean.drop(cols_to_drop, axis=1)
# + id="gZVbicWW6GiI" colab_type="code" outputId="ddb9ba19-2d91-44d6-e178-a8b5efb92af1" colab={}
df.head()
# + id="ule_S4kP6GiM" colab_type="code" outputId="a5e7bb40-07c6-429d-a5d0-4b27f90029af" colab={}
# Identify categorical features and single-valued features
# Define categorical_threshold
# If a column's number of unique values is at most this value, consider it categorical
categorical_threshold = 100
# Identifying categorical features
# Using df_raw because df_imputed is all float
cols_single_val = []
cols_categorical = []
cols_continuous = [] # non-categorical
for c in df_status_clean.columns:
# if (c == 'id' or c == 'loss'):
# continue
if (df_status_clean[c].dtypes == 'int64' and len(df_status_clean.loc[:,c].unique()) <= categorical_threshold):
print("Column \"" + c + "\" has unique values: ")
print(df_status_clean.loc[:,c].unique())
if (len(df_status_clean.loc[:,c].unique()) == 1):
cols_single_val.append(c)
else:
            cols_categorical.append(c)
else:
        cols_continuous.append(c)
# After this check, we no longer need df_raw
print("Categorical columns: ", cols_categorical)
print("Single-valued columns: ", cols_single_val)
# + id="f1oDP4AA6GiQ" colab_type="code" outputId="b473cdc7-56a1-4ef7-f655-061246c9df3f" colab={}
df_status_clean = df_status_clean.drop(cols_single_val, axis=1)
df_status_clean.info()
# + id="TmoDAqfI6GiV" colab_type="code" outputId="463c89f1-627f-4e34-e543-7031250414b7" colab={}
df_status_clean.select_dtypes(include=['datetime64[ns]']).head()
# + id="by-m1w6W6Gia" colab_type="code" colab={}
# + id="8sVaqJ886Gic" colab_type="code" colab={}
# + id="_P1_lvja6Gif" colab_type="code" colab={}
date_time_drop=['issue_d','earliest_cr_line','last_pymnt_d','last_credit_pull_d']
df_status_clean = df_status_clean.drop( date_time_drop,axis=1)
# + id="On5sRBeR6Gij" colab_type="code" outputId="38cc57a3-e6ef-4a06-dd4e-232dc5baf547" colab={}
df_status_clean.select_dtypes(include=['datetime64[ns]']).head()
# + id="2_IfpQvV6Gin" colab_type="code" colab={}
name=['id','member_id']
df_status_clean = df_status_clean.drop(name, axis=1)
# + id="251pT1HX6Giu" colab_type="code" outputId="61899d73-2f11-4070-cdf0-5d234fc85ad5" colab={}
df_status_clean.info()
# + id="xr7q1Gat6Gi0" colab_type="code" colab={}
df_status_clean.head()
data_clean=df_status_clean
data=df_status_clean
df_new=data
# + id="VlvvcHd56Gi3" colab_type="code" outputId="70efc32b-53bf-48e3-b264-5ac59d4c43cc" colab={}
# Find all correlations and sort
correlations_data = data.corr()['loan_status'].sort_values()
# Print the most negative correlations
print(correlations_data.head(15), '\n')
# Print the most positive correlations
print(correlations_data.tail(15))
# + id="MtY6WNfJ6Gi9" colab_type="code" colab={}
for i in data.columns:
if len(set(data[i]))==1:
data.drop(labels=[i], axis=1, inplace=True)
# + id="Sll1T_iU6Gi_" colab_type="code" outputId="ea1d3f52-9c49-4b37-8aa3-5836501dcd7b" colab={}
# Find all correlations and sort
correlations_data = data.corr()['loan_status'].sort_values()
# Print the most negative correlations
print(correlations_data.head(15), '\n')
# Print the most positive correlations
print(correlations_data.tail(15))
# + id="3dF5HoHr6GjF" colab_type="code" outputId="dc3b6ac4-9a0d-4317-beb4-77a156c283c8" colab={}
data.shape
# + id="qVQ36LjH6GjI" colab_type="code" colab={}
def remove_collinear_features(x, threshold):
'''
Objective:
Remove collinear features in a dataframe with a correlation coefficient
greater than the threshold. Removing collinear features can help a model
to generalize and improves the interpretability of the model.
Inputs:
threshold: any features with correlations greater than this value are removed
Output:
dataframe that contains only the non-highly-collinear features
'''
    # Don't want to remove correlations with the target (loan_status)
y = x['loan_status']
x = x.drop(columns = ['loan_status'])
# Calculate the correlation matrix
corr_matrix = x.corr()
iters = range(len(corr_matrix.columns) - 1)
drop_cols = []
# Iterate through the correlation matrix and compare correlations
for i in iters:
for j in range(i):
item = corr_matrix.iloc[j:(j+1), (i+1):(i+2)]
col = item.columns
row = item.index
val = abs(item.values)
# If correlation exceeds the threshold
if val >= threshold:
# Print the correlated features and the correlation value
# print(col.values[0], "|", row.values[0], "|", round(val[0][0], 2))
drop_cols.append(col.values[0])
# Drop one of each pair of correlated columns
drops = set(drop_cols)
x = x.drop(columns = drops)
# Add the score back in to the data
x['loan_status'] = y
return x
# + id="o5wkiyu_6GjL" colab_type="code" colab={}
data = remove_collinear_features(data, 0.6)
# + id="tzx039ud6GjO" colab_type="code" outputId="5e87de38-fe32-42d3-d02f-b1250d3f633d" colab={}
data.shape
# + id="fjH8MdLi6GjR" colab_type="code" outputId="d181a8d8-3d25-4ff5-b011-f14903af2177" colab={}
data.head()
# + id="GbYhtYUv6GjV" colab_type="code" outputId="ec758558-8029-4c90-e2a3-c24d69469f5e" colab={}
# # # Split Into Training and Testing Sets
# Separate out the features and targets
features = data.drop(columns='loan_status')
targets = pd.DataFrame(data['loan_status'])
# Split into 80% training and 20% testing set
X_train, X_test, y_train, y_test = train_test_split(features, targets, test_size = 0.2, random_state = 42)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# + id="0KYjD-uO6Gja" colab_type="code" outputId="8914325b-203b-4d5a-f1c0-1081e6c0f68a" colab={}
X_train.head()
# + id="1wiNDWVN6Gje" colab_type="code" outputId="8c9e90e0-a24e-45f1-ae57-af4581264a6a" colab={}
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# + id="lQTbfrVx6Gjj" colab_type="code" colab={}
# + id="K8enxdIl6Gjl" colab_type="code" colab={}
# Convert y to one-dimensional array (vector)
y_train = np.array(y_train).reshape((-1, ))
y_test = np.array(y_test).reshape((-1, ))
# + id="NUqTIFTs6Gjo" colab_type="code" outputId="b08c80b7-3884-4805-cc52-775c36f2dca0" colab={}
X_train
# + id="rhK6Od5o6Gjr" colab_type="code" outputId="68240c2f-16fd-4a87-8c84-07191df343ce" colab={}
X_test
# + id="9GADjxLD6Gjt" colab_type="code" colab={}
# # # Models to Evaluate
# We will compare five different machine learning classification models:
# 1 - Logistic Regression
# 2 - K-Nearest Neighbors Classification
# 3 - Support Vector Machine
# 4 - Naive Bayes
# 5 - Random Forest Classification
# Function to compute the cross-validated accuracy of a model
def cross_val(X_train, y_train, model):
# Applying k-Fold Cross Validation
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = model, X = X_train, y = y_train, cv = 5)
return accuracies.mean()
# Takes in a model, trains the model, and evaluates the model on the test set
def fit_and_evaluate(model):
# Train the model
model.fit(X_train, y_train)
    # Make predictions and evaluate
model_pred = model.predict(X_test)
model_cross = cross_val(X_train, y_train, model)
# Return the performance metric
return model_cross
# + id="ZYjvC3Ji6Gjv" colab_type="code" outputId="1b776638-2294-46b4-eaf6-82bbdb5fb1af" colab={}
# # Naive Bayes
from sklearn.naive_bayes import GaussianNB
naive = GaussianNB()
naive_cross = fit_and_evaluate(naive)
print('Naive Bayes Performance on the test set: Cross Validation Score = %0.4f' % naive_cross)
# + id="hN6dAFol6Gjy" colab_type="code" outputId="b3f534c6-45a8-4f75-e870-21415c607859" colab={}
from sklearn.ensemble import RandomForestClassifier
random = RandomForestClassifier(n_estimators = 10, criterion = 'entropy')
random_cross = fit_and_evaluate(random)
print('Random Forest Performance on the test set: Cross Validation Score = %0.4f' % random_cross)
| Loan_Default-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="LJ1byyLCzEea"
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="Bkq_0Rw8G9WY"
# # Colab notebook for demonstrating SCF calculations using the GAS22 functional
#
# SCF calculations are performed using the PySCF package (v1.7.6). Functional derivatives are evaluated using automatic differentiation with JAX.
#
# Link to this notebook: https://colab.research.google.com/github/google-research/google-research/blob/master/symbolic_functionals/colab/run_GAS22.ipynb
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 14863, "status": "ok", "timestamp": 1644345778273, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjUXuHpz1GDBTLkFU6msl96ZWWDxtMqu7UXHH6qLg=s64", "userId": "05582677413017445658"}, "user_tz": 480} id="B_JlHcCODteJ" outputId="61502079-56b7-4455-c09c-6c310a8ec851"
# !pip install pyscf==1.7.6
# + id="ZDQAYy9FEjoO"
import numpy as np
import jax
import jax.numpy as jnp
from pyscf import dft
from pyscf import gto
jax.config.update('jax_enable_x64', True)
HYBRID_COEFF = dft.libxc.hybrid_coeff('wb97mv')
RSH_PARAMS = dft.libxc.rsh_coeff('wb97mv')
# + [markdown] id="hEFvWs_YB1qE"
# # Helper functions to compute LDA energy densities and auxiliary quantities
# + id="HhL64ZkGqTgZ"
# _LDA_PW_PARAMETERS: dictionaries containing
# parameters a, alpha1, beta1, beta2, beta3, beta4 and fpp_0, which are used to
# compute LDA correlation functional in the PW parametrization. The first six
# parameters are used to evaluate auxiliary G functions in the PW paper
# (10.1103/PhysRevB.45.13244). fpp_0 denotes the second derivative of
# f_zeta function in the PW paper at zeta = 0.
_LDA_PW_PARAMETERS = {
'eps_c_unpolarized': {
'a': 0.031091,
'alpha1': 0.21370,
'beta1': 7.5957,
'beta2': 3.5876,
'beta3': 1.6382,
'beta4': 0.49294},
'eps_c_polarized': {
'a': 0.015545,
'alpha1': 0.20548,
'beta1': 14.1189,
'beta2': 6.1977,
'beta3': 3.3662,
'beta4': 0.62517},
'alpha_c': {
'a': 0.016887,
'alpha1': 0.11125,
'beta1': 10.357,
'beta2': 3.6231,
'beta3': 0.88026,
'beta4': 0.49671},
'fpp_0': 1.709921
}
EPSILON = 1e-50
def e_x_lda_unpolarized(rho):
"""Evaluates LDA exchange energy density for spin unpolarized case.
Parr & Yang Density-functional theory of atoms and molecules Eq. 6.1.20.
Args:
rho: Float numpy array with shape (num_grids,), the electron density.
Returns:
Float numpy array with shape (num_grids,), the LDA exchange energy density.
"""
return -3 / 4 * (3 / jnp.pi) ** (1 / 3) * rho ** (4 / 3)
def e_x_lda_polarized(rhoa, rhob):
"""Evaluates LDA exchange energy density for spin polarized case.
Parr & Yang Density-functional theory of atoms and molecules Eq. 8.2.18.
Args:
rhoa: Float numpy array with shape (num_grids,), the spin up
electron density.
rhob: Float numpy array with shape (num_grids,), the spin down
electron density.
Returns:
Float numpy array with shape (num_grids,), the LDA exchange energy density.
"""
rho = rhoa + rhob
zeta = (rhoa - rhob) / (rho + EPSILON) # spin polarization
spin_scaling = 0.5 * ((1 + zeta) ** (4 / 3) + (1 - zeta) ** (4 / 3))
return - 3 / 4 * (3 / jnp.pi) ** (1 / 3) * rho ** (4 / 3) * spin_scaling
def g_lda_pw(rs, a, alpha1, beta1, beta2, beta3, beta4):
"""Evaluates auxiliary function G in the PW parametrization of LDA functional.
10.1103/PhysRevB.45.13244 Eq. 10.
Args:
rs: Float numpy array with shape (num_grids,), Wigner-Seitz radius.
a: Float, parameter.
alpha1: Float, parameter.
beta1: Float, parameter.
beta2: Float, parameter.
beta3: Float, parameter.
beta4: Float, parameter.
Returns:
Float numpy array with shape (num_grids,), auxiliary function G.
"""
den = 2 * a * (
beta1 * rs**(1 / 2) + beta2 * rs + beta3 * rs**(3 / 2) + beta4 * rs**2)
return -2 * a * (1 + alpha1 * rs) * jnp.log(1 + 1 / den)
def get_wigner_seitz_radius(rho):
"""Evaluates Wigner-Seitz radius of given density.
Args:
rho: Float numpy array with shape (num_grids,), the electron density.
Returns:
Float numpy array with shape (num_grids,), the Wigner-Seitz radius.
"""
return (3 / (4 * jnp.pi)) ** (1/3) * (rho + EPSILON) ** (-1 / 3)
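# Quick numerical check (added, not in the original notebook): for rho = 1 a.u. the formula above gives r_s = (3 / (4 pi))**(1/3), roughly 0.6204.

```python
import math

def wigner_seitz_radius(rho):
    # radius of the sphere that contains exactly one electron at density rho
    return (3.0 / (4.0 * math.pi * rho)) ** (1.0 / 3.0)

print(wigner_seitz_radius(1.0))  # ~0.6204
```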
def e_c_lda_unpolarized(rho):
"""Evaluates LDA correlation energy density for spin unpolarized case.
PW parametrization. 10.1103/PhysRevB.45.13244 Eq. 8-9.
Args:
rho: Float numpy array with shape (num_grids,), the electron density.
Returns:
Float numpy array with shape (num_grids,), the LDA correlation energy
density.
"""
rs = get_wigner_seitz_radius(rho)
return rho * g_lda_pw(rs=rs, **_LDA_PW_PARAMETERS['eps_c_unpolarized'])
def e_c_lda_polarized(rhoa, rhob):
"""Evaluates LDA correlation energy density for spin polarized case.
PW parametrization. 10.1103/PhysRevB.45.13244 Eq. 8-9.
Args:
rhoa: Float numpy array with shape (num_grids,), the spin up
electron density.
rhob: Float numpy array with shape (num_grids,), the spin down
electron density.
Returns:
Float numpy array with shape (num_grids,), the LDA correlation energy
density.
"""
rho = rhoa + rhob
rs = get_wigner_seitz_radius(rho)
zeta = (rhoa - rhob) / (rho + EPSILON)
# spin dependent interpolation coefficient
f_zeta = 1 / (2 ** (4 / 3) - 2) * (
(1 + zeta) ** (4 / 3) + (1 - zeta) ** (4 / 3) - 2)
eps_c_unpolarized = g_lda_pw(
rs=rs, **_LDA_PW_PARAMETERS['eps_c_unpolarized'])
eps_c_polarized = g_lda_pw(
rs=rs, **_LDA_PW_PARAMETERS['eps_c_polarized'])
alpha_c = -g_lda_pw(rs=rs, **_LDA_PW_PARAMETERS['alpha_c'])
return rho * (
eps_c_unpolarized
+ (1 / _LDA_PW_PARAMETERS['fpp_0']) * alpha_c * f_zeta * (1 - zeta ** 4)
+ (eps_c_polarized - eps_c_unpolarized) * f_zeta * zeta ** 4
)
def decomposed_e_c_lda_unpolarized(rho):
"""Evaluates LDA e_c decomposed into same-spin and opposite-spin components.
This function returns the LDA correlation energy density partitioned into
same-spin and opposite-spin components. 10.1063/1.475007 Eq. 7-8.
Args:
rho: Float numpy array with shape (num_grids,), the electron density.
Returns:
e_c_ss: Float numpy array with shape (num_grids,), the same-spin
component of LDA correlation energy density.
e_c_os: Float numpy array with shape (num_grids,), the opposite-spin
component of LDA correlation energy density.
"""
e_c = e_c_lda_unpolarized(rho)
e_c_ss = 2 * e_c_lda_polarized(rho / 2, jnp.zeros_like(rho))
e_c_os = e_c - e_c_ss
return e_c_ss, e_c_os
def decomposed_e_c_lda_polarized(rhoa, rhob):
"""Evaluates LDA e_c decomposed into same-spin and opposite-spin components.
This function returns the LDA correlation energy density partitioned into
same-spin and opposite-spin components. 10.1063/1.475007 Eq. 7-8.
Args:
rhoa: Float numpy array with shape (num_grids,), the spin up
electron density.
rhob: Float numpy array with shape (num_grids,), the spin down
electron density.
Returns:
e_c_aa: Float numpy array with shape (num_grids,), the same-spin (aa)
component of LDA correlation energy density.
e_c_bb: Float numpy array with shape (num_grids,), the same-spin (bb)
component of LDA correlation energy density.
e_c_ab: Float numpy array with shape (num_grids,), the opposite-spin
component of LDA correlation energy density.
"""
zero = jnp.zeros_like(rhoa)
e_c = e_c_lda_polarized(rhoa, rhob)
e_c_aa = e_c_lda_polarized(rhoa, zero)
e_c_bb = e_c_lda_polarized(zero, rhob)
e_c_ab = e_c - e_c_aa - e_c_bb
return e_c_aa, e_c_bb, e_c_ab
def get_reduced_density_gradient(rho, sigma):
"""Evaluates reduced density gradient.
Args:
rho: Float numpy array with shape (num_grids,), the electron density.
sigma: Float numpy array with shape (num_grids,), the norm square of
density gradient.
Returns:
Float numpy array with shape (num_grids,), the reduced density gradient.
"""
  # NOTE: EPSILON is inserted in such a way that the reduced density
# gradient is zero if rho or sigma or both are zero.
return jnp.sqrt(sigma + EPSILON ** 3) / (rho ** (4 / 3) + EPSILON)
def get_mgga_t(rho, tau, polarized):
"""Evaluates the auxiliary quantity t in meta-GGA functional forms.
  t = (tau_HEG / tau), where tau_HEG is the kinetic energy density of the
  homogeneous electron gas.
Args:
rho: Float numpy array with shape (num_grids,), the electron density.
tau: Float numpy array with shape (num_grids,), the kinetic energy density.
polarized: Boolean, whether the system is spin polarized.
Returns:
    Float numpy array with shape (num_grids,), the auxiliary quantity t.
"""
spin_factor = 1 if polarized else (1 / 2) ** (2 / 3)
  # 3 / 10 instead of 3 / 5 is used below because tau includes a factor of 1 / 2
tau_heg = 3 / 10 * (6 * jnp.pi ** 2) ** (2 / 3) * rho ** (5 / 3)
return spin_factor * tau_heg / (tau + EPSILON)
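# Sanity check (added sketch, plain Python): for the homogeneous electron gas tau equals tau_HEG, so t = 1, and the derived variable w = (t - 1) / (t + 1) used by the GAS22 enhancement factors vanishes.

```python
import math

def mgga_t_polarized(rho, tau):
    # t = tau_HEG / tau; tau is defined including the conventional 1/2 factor
    tau_heg = 3.0 / 10.0 * (6.0 * math.pi ** 2) ** (2.0 / 3.0) * rho ** (5.0 / 3.0)
    return tau_heg / tau

rho = 0.8
tau_heg = 3.0 / 10.0 * (6.0 * math.pi ** 2) ** (2.0 / 3.0) * rho ** (5.0 / 3.0)
t = mgga_t_polarized(rho, tau_heg)
w = (t - 1.0) / (t + 1.0)
print(t, w)  # -> 1.0 0.0
```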
def f_rsh(rho, omega=RSH_PARAMS[0], polarized=False):
"""Enchancement factor for evaluating short-range semilocal exchange.
10.1063/1.4952647 Eq. 11.
Args:
rho: Float numpy array with shape (num_grids,), the electron density.
    omega: Float, the range-separation parameter.
polarized: Boolean, whether the system is spin polarized.
Returns:
Float numpy array with shape (num_grids,), the RSH enhancement factor.
"""
spin_factor = 1 if polarized else 2
# Fermi wave vector
kf = (6 * jnp.pi**2 * rho / spin_factor + EPSILON) ** (1 / 3)
a = omega / kf + EPSILON # variable a in Eq. 11
return (1 - 2 / 3 * a * (2 * jnp.pi ** (1 / 2) * jax.scipy.special.erf(1 / a) - 3 * a
+ a ** 3 + (2 * a - a ** 3) * jnp.exp(-1 / a ** 2)))
# + [markdown] id="MTkaZzKN4ZlD"
# # GAS22 enhancement factors
# The following enhancement factors for the GAS22 functional correspond to Eqs. 9-11 in the manuscript.
# + id="hSnOjb0z4c16"
# NOTE: In the following functions, the square of the reduced density gradient
# is used as a function argument (x2).
def get_u(x2, gamma):
"""Evaluates the auxiliary quantity u = gamma x^2 / (1 + gamma x^2)."""
return gamma * x2 / (1 + gamma * x2)
def f_x_gas22(x2, w):
"""Evaluates the exchange enhancement factor for the GAS22 functional."""
u = get_u(x2, gamma=0.003840616724010807)
return 0.862139736374172 + 0.936993691972698 * u + 0.317533683085033 * w
def f_css_gas22(x2, w):
"""Evaluates the same-spin correlation enhancement factor for the GAS22 functional."""
u = get_u(x2, gamma=0.46914023462026644)
return (u - 4.10753796482853 * w - 5.24218990333846 * w**2
- 1.76643208454076 * u**6 + 7.5380689617542 * u**6 * w**4)
def f_cos_gas22(x2, w):
"""Evaluates the opposite-spin correlation enhancement factor for the GAS22 functional."""
return (0.805124374375355 + 7.98909430970845 * w**2
- 1.76098915061634 * w**2 * jnp.cbrt(x2) - 7.54815900595292 * w**6
+ 2.00093961824784 * w**6 * jnp.cbrt(x2))
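# In the uniform-gas limit x^2 = 0 and w = 0, so each factor above reduces to its constant term: 0.8621... for exchange, 0 for same-spin correlation, and 0.8051... for opposite-spin correlation. A plain-Python restatement of the exchange factor makes an easy consistency check (coefficients copied from above):

```python
GAMMA_X = 0.003840616724010807  # gamma of the exchange u variable, copied from above

def u_aux(x2, gamma):
    # u = gamma * x^2 / (1 + gamma * x^2); saturates to 1 for large x^2
    return gamma * x2 / (1 + gamma * x2)

def f_x(x2, w):
    u = u_aux(x2, GAMMA_X)
    return 0.862139736374172 + 0.936993691972698 * u + 0.317533683085033 * w

print(f_x(0.0, 0.0))  # -> 0.862139736374172
```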
# + [markdown] id="Gh8yLDQMraoe"
# # Define GAS22 in the format of PySCF custom functions
# + id="WB4vfZ9RljNH"
def gas22_unpolarized(rho, sigma, tau):
"""Evaluates XC energy density for spin unpolarized case.
Args:
rho: Float numpy array with shape (num_grids,), the electron density.
sigma: Float numpy array with shape (num_grids,), the norm square of
density gradient.
tau: Float numpy array with shape (num_grids,), the kinetic energy
density.
Returns:
Float numpy array with shape (num_grids,), the XC energy density.
"""
rho_s = 0.5 * rho
x_s = get_reduced_density_gradient(rho_s, 0.25 * sigma)
x2_s = x_s ** 2
t_s = get_mgga_t(rho_s, 0.5 * tau, polarized=True)
w_s = (t_s - 1) / (t_s + 1)
e_lda_x = e_x_lda_unpolarized(rho) * f_rsh(rho, polarized=False)
e_lda_css, e_lda_cos = decomposed_e_c_lda_unpolarized(rho)
f_x = f_x_gas22(x2_s, w_s)
f_css = f_css_gas22(x2_s, w_s)
f_cos = f_cos_gas22(x2_s, w_s)
return e_lda_x * f_x + e_lda_css * f_css + e_lda_cos * f_cos
def gas22_polarized(rho_a, rho_b, sigma_aa, sigma_ab, sigma_bb, tau_a, tau_b):
"""Evaluates XC energy density for spin polarized case.
Args:
rho_a: Float numpy array with shape (num_grids,), the spin up electron
density.
rho_b: Float numpy array with shape (num_grids,), the spin down electron
density.
sigma_aa: Float numpy array with shape (num_grids,), the norm square of
density gradient (aa component).
sigma_ab: Float numpy array with shape (num_grids,), the norm square of
density gradient (ab component).
sigma_bb: Float numpy array with shape (num_grids,), the norm square of
density gradient (bb component).
tau_a: Float numpy array with shape (num_grids,), the spin up kinetic
energy density.
tau_b: Float numpy array with shape (num_grids,), the spin down kinetic
energy density.
Returns:
Float numpy array with shape (num_grids,), the XC energy density.
"""
del sigma_ab
rho_ab = 0.5 * (rho_a + rho_b)
x_a = get_reduced_density_gradient(rho_a, sigma_aa)
x_b = get_reduced_density_gradient(rho_b, sigma_bb)
x2_a = x_a ** 2
x2_b = x_b ** 2
x2_ab = 0.5 * (x2_a + x2_b)
t_a = get_mgga_t(rho_a, tau_a, polarized=True)
t_b = get_mgga_t(rho_b, tau_b, polarized=True)
t_ab = 0.5 * (t_a + t_b)
w_a = (t_a - 1) / (t_a + 1)
w_b = (t_b - 1) / (t_b + 1)
w_ab = (t_ab - 1) / (t_ab + 1)
e_lda_x_a = 0.5 * e_x_lda_unpolarized(2 * rho_a) * f_rsh(
rho_a, polarized=True)
e_lda_x_b = 0.5 * e_x_lda_unpolarized(2 * rho_b) * f_rsh(
rho_b, polarized=True)
e_lda_css_a, e_lda_css_b, e_lda_cos = decomposed_e_c_lda_polarized(
rho_a, rho_b)
f_x_a = f_x_gas22(x2_a, w_a)
f_x_b = f_x_gas22(x2_b, w_b)
f_css_a = f_css_gas22(x2_a, w_a)
f_css_b = f_css_gas22(x2_b, w_b)
f_cos = f_cos_gas22(x2_ab, w_ab)
return (e_lda_x_a * f_x_a + e_lda_x_b * f_x_b
+ e_lda_css_a * f_css_a + e_lda_css_b * f_css_b + e_lda_cos * f_cos)
def eval_xc_gas22(xc_code,
rho_and_derivs,
spin=0,
relativity=0,
deriv=1,
verbose=None):
"""Evaluates exchange-correlation energy densities and derivatives.
Args:
xc_code: A dummy argument, not used.
rho_and_derivs: Float numpy array with shape (6, num_grids) for spin
unpolarized case; 2-tuple of float numpy array with shape (6, num_grids)
for spin polarized case. Electron density and its derivatives. For
spin unpolarized case, the 6 subarrays represent (density, gradient_x,
gradient_y, gradient_z, laplacian, tau); for spin polarized case, the
spin up and spin down densities and derivatives are each represented
with a (6, num_grids) array.
spin: Integer, 0 for spin unpolarized and 1 for spin polarized
calculations.
relativity: A dummy argument, not used.
deriv: Integer, the order of derivatives evaluated for XC energy density.
verbose: A dummy argument, not used.
Returns:
eps_xc: Float numpy array with shape (num_grids,), the XC energy density
per particle.
v_xc: Tuple of float numpy arrays, the first derivatives of XC energy
density per volume to various quantities. See pyscf/dft/libxc.py for
more details.
f_xc: A dummy return value, not used.
k_xc: A dummy return value, not used.
Raises:
NotImplementedError: If derivative order higher than one is requested
(deriv > 1).
"""
del xc_code, relativity, verbose
if deriv != 1:
raise NotImplementedError('Only deriv = 1 is implemented.')
if spin == 0:
rho, grad_x, grad_y, grad_z, _, tau = rho_and_derivs
sigma = grad_x**2 + grad_y**2 + grad_z**2
e_xc, grads = jax.jit(jax.vmap(jax.value_and_grad(
gas22_unpolarized, argnums=(0, 1, 2))))(rho, sigma, tau)
vrho, vsigma, vtau = grads
else:
rhoa, grad_x_a, grad_y_a, grad_z_a, _, tau_a = rho_and_derivs[0]
rhob, grad_x_b, grad_y_b, grad_z_b, _, tau_b = rho_and_derivs[1]
rho = rhoa + rhob
sigma_aa = grad_x_a**2 + grad_y_a**2 + grad_z_a**2
sigma_ab = grad_x_a * grad_x_b + grad_y_a * grad_y_b + grad_z_a * grad_z_b
sigma_bb = grad_x_b**2 + grad_y_b**2 + grad_z_b**2
e_xc, grads = jax.jit(jax.vmap(jax.value_and_grad(
gas22_polarized, argnums=(0, 1, 2, 3, 4, 5, 6))))(
rhoa, rhob, sigma_aa, sigma_ab, sigma_bb, tau_a, tau_b)
vrhoa, vrhob, vsigma_aa, vsigma_ab, vsigma_bb, vtau_a, vtau_b = grads
vrho = np.stack((vrhoa, vrhob), axis=1)
vsigma = np.stack((vsigma_aa, vsigma_ab, vsigma_bb), axis=1)
vtau = np.stack((vtau_a, vtau_b), axis=1)
eps_xc = e_xc / (rho + EPSILON)
return (np.array(eps_xc),
(np.array(vrho), np.array(vsigma),
np.zeros_like(vtau), np.array(vtau)),
None,
None)
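# Per grid point, jax.value_and_grad returns the XC energy density together with its partial derivatives with respect to each input, and jax.vmap maps that over the grid. A dependency-free sketch of the same value-plus-gradient idea, using central finite differences on a toy energy density (illustrative only; not the functional above):

```python
def value_and_fd_grad(f, args, h=1e-6):
    """Return f(*args) and central-difference partial derivatives of f."""
    value = f(*args)
    grads = []
    for i in range(len(args)):
        plus, minus = list(args), list(args)
        plus[i] += h
        minus[i] -= h
        grads.append((f(*plus) - f(*minus)) / (2.0 * h))
    return value, grads

# Toy stand-in for an energy density: e(rho, sigma) = rho**2 + 0.5 * sigma
e, (de_drho, de_dsigma) = value_and_fd_grad(lambda r, s: r ** 2 + 0.5 * s, [1.5, 2.0])
print(e)  # -> 3.25
```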
# + [markdown] id="5nVLL7TsCJGX"
# # SCF calculations with PySCF
# + id="510maFzNFYBH"
def run_scf_for_mol(atom,
basis,
charge,
spin,
conv_tol=1e-6,
use_sg1_prune_for_nlc=True):
"""Runs SCF calculations for molecule using GAS22 functional.
Args:
    atom: String, the atomic structure.
    basis: String, the GTO basis.
    charge: Integer, the total charge of molecule.
    spin: Integer, the difference of spin up and spin down electron numbers.
    conv_tol: Float, the convergence threshold of total energy.
    use_sg1_prune_for_nlc: Boolean, whether to use SG1 prune for NLC calculation.
Returns:
Float, the SCF total energy.
"""
mol = gto.M(atom=atom, basis=basis, charge=charge, spin=spin, verbose=4)
if spin == 0:
ks = dft.RKS(mol)
else:
ks = dft.UKS(mol)
ks.define_xc_(eval_xc_gas22, xctype='MGGA', hyb=HYBRID_COEFF, rsh=RSH_PARAMS)
  # NOTE: This is necessary because PySCF will use this name to decide the
  # parameters for VV10 NLC. GAS22 uses the same nonlocal correlation
  # (VV10 NLC) as wB97M-V.
ks.xc = 'wb97mv'
ks.nlc = 'VV10'
if use_sg1_prune_for_nlc:
# NOTE: SG1 prune can be used to reduce the computational cost of
# VV10 NLC. The use of SG1 prune has very little effect on the resulting
# XC energy. SG1 prune is used in PySCF's example
    # pyscf/examples/dft/33-nlc_functionals.py and is also used in the paper
    # 10.1080/00268976.2017.1333644. Note that SG1 prune in PySCF is not
    # available for some elements that appear in the MGCDB84 database.
ks.nlcgrids.prune = dft.gen_grid.sg1_prune
# NOTE: It is necessary to override ks._numint._xc_type method to
# let PySCF correctly use a custom XC functional with NLC. Also, note that
# ks.xc is used to determine NLC parameters.
ks._numint._xc_type = lambda code: 'NLC' if 'VV10' in code else 'MGGA'
ks.conv_tol = conv_tol
ks.kernel()
return ks.e_tot
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 65921, "status": "ok", "timestamp": 1644345845579, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjUXuHpz1GDBTLkFU6msl96ZWWDxtMqu7UXHH6qLg=s64", "userId": "05582677413017445658"}, "user_tz": 480} id="vXFfaeYyFjae" outputId="84e9a969-a9aa-4994-a1d8-4f5f3869adf7"
# Example: water molecule (spin unpolarized)
# Atomic structure is taken from 01a_water_monA_3B-69.xyz in MGCDB84
# Expected Etot: -76.430060 au
run_scf_for_mol(
atom = """\
O -0.0848890000 0.0568040000 0.0552050000
H 0.7103750000 0.5177290000 0.4139710000
H -0.8909580000 0.4985880000 0.4142740000
""",
basis = 'def2qzvppd',
charge = 0,
spin = 0,
)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 64420, "status": "ok", "timestamp": 1644345909978, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjUXuHpz1GDBTLkFU6msl96ZWWDxtMqu7UXHH6qLg=s64", "userId": "05582677413017445658"}, "user_tz": 480} id="G3eKMuMhF4qW" outputId="1d178e6c-3f34-432f-8ddd-a26af76780a8"
# Example: P atom (spin polarized)
# Atomic structure is taken from 25_P_AE18.xyz in MGCDB84
# Expected Etot: -341.253901 au
run_scf_for_mol(
atom = "P 0.0000000000 0.0000000000 0.0000000000",
basis = 'def2qzvppd',
charge = 0,
spin = 3,
)
| symbolic_functionals/colab/run_GAS22.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Emma's A3 Assignment
# +
# Getting set up
import numpy as np
import scipy as sp
from sklearn import tree
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score  # moved to sklearn.model_selection in newer versions
from sklearn import svm
from pandas import DataFrame, read_csv # Importing specific library functions
import matplotlib.pyplot as plt # Importing library but no functions + give nickname
import pandas as pd #this is how I usually import pandas
# Enable inline plotting
# %matplotlib inline
#Tutorial: http://nbviewer.jupyter.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/01%20-%20Lesson.ipynb
#Intro to pandas and datastructures: http://www.gregreda.com/2013/10/26/intro-to-pandas-data-structures/
# -
# Load in the data
walk_location = r'walking.csv'
walk_raw = pd.read_csv(walk_location)
drive_location = r'driving.csv'
drive_raw = pd.read_csv(drive_location)
static_location = r'static.csv'
static_raw = pd.read_csv(static_location)
upstairs_location = r'upstair.csv'
upstairs_raw = pd.read_csv(upstairs_location)
run_location = r'running.csv'
run_raw = pd.read_csv(run_location)
# # What should my features be?
# ## I plotted 20 seconds of data for each type of data for walking:
walk_raw[['attitude_roll']].plot(xlim = (5000, 7000))
walk_raw[['attitude_pitch']].plot(xlim = (5000, 7000))
walk_raw[['attitude_yaw']].plot(xlim = (5000, 7000))
walk_raw[['rotation_rate_x']].plot(xlim = (5000, 7000))
walk_raw[['rotation_rate_y']].plot(xlim = (5000, 7000))
walk_raw[['rotation_rate_z']].plot(xlim = (5000, 7000))
# ## rotation_rate_z doesn't look too good, so we'll pull that one out.
walk_raw[['gravity_x']].plot(xlim = (5000, 7000))
walk_raw[['gravity_y']].plot(xlim = (5000, 7000))
walk_raw[['gravity_z']].plot(xlim = (5000, 7000))
# ## gravity_x and gravity_y are not periodic like the rest of our "pretty" data, so we'll cut those out as well.
walk_raw[['user_acc_x']].plot(xlim = (5000, 7000))
walk_raw[['user_acc_y']].plot(xlim = (5000, 7000))
walk_raw[['user_acc_z']].plot(xlim = (5000, 7000))
# ## Hm. None of these are beautiful, but for mean and standard deviation I think that user_acc_y and user_acc_z will be the most helpful.
walk_raw[['magnetic_field_x']].plot(xlim = (5000, 7000))
walk_raw[['magnetic_field_y']].plot(xlim = (5000, 7000))
walk_raw[['magnetic_field_z']].plot(xlim = (5000, 7000))
# ## Again, none of these are beautiful, but for mean and standard deviation I think that magnetic_field_y and magnetic_field_z will be the most helpful.
# ## That gives us a "who made the cut" feature list:
# #### attitude_roll
# #### attitude_pitch
# #### attitude_yaw
# #### rotation_rate_x
# #### rotation_rate_y
# #### gravity_z
# #### user_acc_y
# #### user_acc_z
# #### magnetic_field_y
# #### magnetic_field_z
#
# ## Still way too many! My next cut could be for which wave forms drift up or down a lot-- drifting would mess up mean and standard deviation. I'll go with attitude_roll, rotation_rate_x, and user_acc_z.
walk_raw[['attitude_roll']].plot(xlim = (5000, 7000))
walk_raw[['rotation_rate_x']].plot(xlim = (5000, 7000))
walk_raw[['user_acc_z']].plot(xlim = (5000, 7000))
# # Next step: chunk up the data
# http://stackoverflow.com/questions/17315737/split-a-large-pandas-dataframe
# input - df: a Dataframe, chunkSize: the chunk size
# output - a list of DataFrame
# purpose - splits the DataFrame into smaller of max size chunkSize (last may be smaller)
def splitDataFrameIntoSmaller(df, chunkSize = 1000):
listOfDf = list()
    numberChunks = (len(df) + chunkSize - 1) // chunkSize  # ceil division avoids an empty trailing chunk
for i in range(numberChunks):
listOfDf.append(df[i*chunkSize:(i+1)*chunkSize])
return listOfDf
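# A quick check of the chunking idea on a plain list (same slicing arithmetic; ceil division keeps the final, shorter chunk without producing an empty one):

```python
def split_into_chunks(seq, chunk_size=1000):
    # ceil division: every element lands in exactly one chunk, the last may be short
    n_chunks = (len(seq) + chunk_size - 1) // chunk_size
    return [seq[i * chunk_size:(i + 1) * chunk_size] for i in range(n_chunks)]

print([len(c) for c in split_into_chunks(list(range(25)), 10)])  # -> [10, 10, 5]
```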
# +
# Set up the 10-second chunks
walk_chunked = splitDataFrameIntoSmaller(walk_raw)
for idx, df in enumerate(walk_chunked):
walk_chunked[idx] = pd.DataFrame(df)
drive_chunked = splitDataFrameIntoSmaller(drive_raw)
for idx, df in enumerate(drive_chunked):
drive_chunked[idx] = pd.DataFrame(df)
static_chunked = splitDataFrameIntoSmaller(static_raw)
for idx, df in enumerate(static_chunked):
static_chunked[idx] = pd.DataFrame(df)
upstairs_chunked = splitDataFrameIntoSmaller(upstairs_raw)
for idx, df in enumerate(upstairs_chunked):
upstairs_chunked[idx] = pd.DataFrame(df)
run_chunked = splitDataFrameIntoSmaller(run_raw)
for idx, df in enumerate(run_chunked):
run_chunked[idx] = pd.DataFrame(df)
# -
# # Now it's time to add those features!
# +
# This is where the feature data will go. The array for each activity will have length 30.
walk_featured = []
drive_featured = []
static_featured = []
upstairs_featured = []
run_featured = []
# Populate the features
for df in walk_chunked:
features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist()
walk_featured.append(features)
for df in drive_chunked:
features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist()
drive_featured.append(features)
for df in static_chunked:
features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist()
static_featured.append(features)
for df in upstairs_chunked:
features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist()
upstairs_featured.append(features)
for df in run_chunked:
features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist()
run_featured.append(features)
# +
# Combine all of the feature sets into one big one. Along the way, generate my target array.
all_featured = walk_featured + drive_featured + static_featured + upstairs_featured + run_featured
target = [0] * len(walk_featured)
target = target + [1] * len(drive_featured)
target = target + [2] * len(static_featured)
target = target + [3] * len(upstairs_featured)
target = target + [4] * len(run_featured)
# If I accidentally didn't add the right numbers to the target array, throw an error!
if target.count(0) != 30 or target.count(1) != 30 or target.count(2) != 30 or target.count(3) != 30 or target.count(4) != 30:
raise ValueError('Target is corrupt')
# -
# # Running Cross-Validation
# Create and run cross-validation on a K-Nearest Neighbors classifier
knn = KNeighborsClassifier()
knn_scores = cross_val_score(knn, all_featured, target, cv = 5)
print 'K-NEAREST NEIGHBORS CLASSIFIER'
print knn_scores
# Create and run cross-validation on a Logistic Regression classifier
lr = LogisticRegression()
lr_scores = cross_val_score(lr, all_featured, target, cv = 5)
print 'LOGISTIC REGRESSION CLASSIFIER'
print lr_scores
# Create and run cross-validation on a Support Vector Machine classifier
svc = svm.SVC(kernel='linear')
svc_scores = cross_val_score(svc, all_featured, target, cv = 5)
print 'SUPPORT VECTOR MACHINE CLASSIFIER'
print svc_scores
# Create and run cross-validation on a Decision Tree classifier
dtree = tree.DecisionTreeClassifier()
dtree_scores = cross_val_score(dtree, all_featured, target, cv = 5)
print 'DECISION TREE CLASSIFIER'
print dtree_scores
# +
# What if I use tons of features (mean and std for all categories)
too_many_features = []
for df in walk_chunked:
    features = df.mean().values.tolist() + df.std().values.tolist()
    too_many_features.append(features)  # one feature row per chunk, not flattened
for df in drive_chunked:
    features = df.mean().values.tolist() + df.std().values.tolist()
    too_many_features.append(features)
for df in static_chunked:
    features = df.mean().values.tolist() + df.std().values.tolist()
    too_many_features.append(features)
for df in upstairs_chunked:
    features = df.mean().values.tolist() + df.std().values.tolist()
    too_many_features.append(features)
for df in run_chunked:
    features = df.mean().values.tolist() + df.std().values.tolist()
    too_many_features.append(features)
# +
# # Create and run cross-validation on a K-Nearest Neighbors classifier
# knn_new = KNeighborsClassifier()
# knn_scores_new = cross_val_score(knn_new, too_many_features, target, cv = 5)
# print 'K-NEAREST NEIGHBORS CLASSIFIER'
# print knn_scores_new
# -
# How I started figuring out features:
print walk_raw[['attitude_yaw']].describe()[2:3]
print run_raw[['attitude_yaw']].describe()[2:3]
print static_raw[['attitude_yaw']].describe()[2:3]
print upstairs_raw[['attitude_yaw']].describe()[2:3]
print drive_raw[['attitude_yaw']].describe()[2:3]
# # I didn't end up getting FFT stuff to work, but here's the work that I was doing trying to get an FFT of the graph below (attitude_yaw for walking data)
walk_raw.plot(x='timestamp', y='attitude_yaw', title='walk attitude_yaw data', ylim=(-3,3), xlim= (681840,681850))
# Plotting one chunk (10 seconds) of walking data
chunk = walk_raw[['attitude_yaw']][1000:2000]
chunk.plot(title='10 second chunk')
# Reformat the chunk
formatted_chunk = chunk.as_matrix().tolist() # Get it out of pandas DataFrame format
formatted_chunk = [item[0] for item in formatted_chunk]
# Do a Fast Fourier Transform on attitude_yaw, chop off low frequency stuff
walk_fft = np.abs(np.fft.fft(formatted_chunk)) # Taking absolute value gives us the magnitude
plt.plot(walk_fft)
walk_fft
#walk_fft = [elem for elem in walk_fft if elem > 800]
# walk_fft = walk_fft[800:] # Low pass filter
plt.plot(walk_fft)
# Reverse the Fast Fourier Transform
walk_ifft = np.abs(np.fft.ifft(walk_fft))
plt.plot(walk_ifft)
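# For reference (added sketch with synthetic data): on a clean 2 Hz sine sampled at 100 Hz, the FFT magnitude peaks exactly at the signal frequency, which is the kind of dominant-frequency feature the cells above were after.

```python
import numpy as np

fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)                # 10 s -> 1000 samples
signal = np.sin(2 * np.pi * 2.0 * t)        # synthetic 2 Hz "walking" oscillation
spectrum = np.abs(np.fft.rfft(signal))      # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(freqs[np.argmax(spectrum)])           # -> 2.0
```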
| A3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # COVID-19 Case Counts from Johns Hopkins University
# **[Work in progress]**
#
# This notebook creates a .csv file with cumulative confirmed cases and deaths for ingestion into the Knowledge Graph.
#
# Data source: [COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University](https://github.com/CSSEGISandData/COVID-19)
#
# Author: <NAME> (<EMAIL>)
import os
from pathlib import Path
import pandas as pd
import dateutil
pd.options.display.max_rows = None # display all rows
pd.options.display.max_columns = None # display all columns
NEO4J_IMPORT = Path(os.getenv('NEO4J_IMPORT'))
print(NEO4J_IMPORT)
def split_by_day(df, day, label):
day_df = df[['stateFips', 'countyFips', day]].copy()
day_df.rename(columns={day: label}, inplace=True)
day_df['date'] = day
return day_df
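# To make the reshaping concrete, here is split_by_day applied to a hypothetical two-day miniature of the JHU wide format (toy values):

```python
import pandas as pd

def split_by_day(df, day, label):
    day_df = df[['stateFips', 'countyFips', day]].copy()
    day_df.rename(columns={day: label}, inplace=True)
    day_df['date'] = day
    return day_df

wide = pd.DataFrame({'stateFips': ['06'], 'countyFips': ['073'],
                     '3/1/20': [5], '3/2/20': [8]})
tidy = pd.concat(split_by_day(wide, day, 'cases') for day in ['3/1/20', '3/2/20'])
print(tidy['cases'].tolist())  # -> [5, 8]
```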
# ### Process US cumulative confirmed cases
confirmed = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_US.csv", dtype='str')
confirmed = confirmed.fillna('')
# Trim off .0 from FIPS code if present
confirmed['FIPS'] = confirmed['FIPS'].str.replace(r'\.0$', '', regex=True)
confirmed.head(10)
# ### Keep only cases with fips codes
county_confirmed = confirmed.query("Admin2 != ''").query("FIPS != ''").copy()
# state fips code: 2 characters with '0'-padding, e.g. 06
county_confirmed['stateFips'] = county_confirmed['FIPS'].str[:-3]
county_confirmed['stateFips'] = county_confirmed['stateFips'].apply(lambda s: '0' + s if len(s) == 1 else s)
county_confirmed['countyFips'] = county_confirmed['FIPS'].str[-3:]
county_confirmed.head()
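# The two slices above split the combined FIPS string, whose last three digits are the county code, and left-pad single-digit state codes. A small standalone illustration (helper name is hypothetical):

```python
def split_fips(fips):
    state = fips[:-3]
    if len(state) == 1:        # e.g. '6' -> '06' for California
        state = '0' + state
    return state, fips[-3:]

print(split_fips('6073'))   # -> ('06', '073')
print(split_fips('36061'))  # -> ('36', '061')
```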
# #### Reformat dataframe by day
days = county_confirmed.columns.tolist()[11:-2]
cases_confirmed = pd.concat((split_by_day(county_confirmed, day, 'cases') for day in days))
cases_confirmed['date'] = cases_confirmed['date'].apply(dateutil.parser.parse)
cases_confirmed.head()
# ### Process US cumulative deaths
deaths = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_US.csv", dtype='str')
deaths = deaths.fillna('')
deaths['FIPS'] = deaths['FIPS'].str.replace(r'\.0$', '', regex=True)
deaths.head()
# ### Keep only cases with fips codes
county_deaths = deaths.query("Admin2 != ''").query("FIPS != ''").copy()
# state fips code: 2 characters, zero-padded, e.g. '06'
county_deaths['stateFips'] = county_deaths['FIPS'].str[:-3]
county_deaths['stateFips'] = county_deaths['stateFips'].apply(lambda s: '0' + s if len(s) == 1 else s)
county_deaths['countyFips'] = county_deaths['FIPS'].str[-3:]
county_deaths.head()
# #### Reformat dataframe by day
cases_deaths = pd.concat((split_by_day(county_deaths, day, 'deaths') for day in days))
cases_deaths['date'] = cases_deaths['date'].apply(dateutil.parser.parse)
cases_deaths.head()
# ### Merge US cases into one dataframe
cases = cases_confirmed.merge(cases_deaths, on=['stateFips', 'countyFips', 'date'])
cases.head()
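The merge above is an inner join, so any county present in only one of the two time series is silently dropped. If that matters, `how='outer'` together with `indicator=True` flags one-sided rows; a small sketch with hypothetical data:

```python
import pandas as pd

left = pd.DataFrame({'fips': ['06001', '36061'], 'cases': [10, 20]})
right = pd.DataFrame({'fips': ['06001'], 'deaths': [1]})

# indicator=True adds a '_merge' column marking which side each row came from
check = left.merge(right, on='fips', how='outer', indicator=True)
unmatched = check[check['_merge'] != 'both']   # rows that matched only one side
```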
# Save only cases that have non-zero counts
cases = cases[(cases['cases'] != '0') | (cases['deaths'] != '0')]
cases.head()
cases.to_csv(NEO4J_IMPORT / "02a-JHUCases.csv", index=False)
# ### Process global cumulative confirmed cases
confirmed = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv", dtype='str')
confirmed = confirmed.fillna('')
confirmed.rename(columns={'Province/State': 'admin1', 'Country/Region': 'country'}, inplace=True)
confirmed.head()
# #### Reformat dataframe by day
days = confirmed.columns.tolist()[4:]
def split_by_day(df, day, label):
day_df = df[['country', 'admin1', day]].copy()
day_df.rename(columns={day: label}, inplace=True)
day_df['date'] = day
return day_df
cases_confirmed = pd.concat((split_by_day(confirmed, day, 'cases') for day in days))
cases_confirmed['date'] = cases_confirmed['date'].apply(dateutil.parser.parse)
cases_confirmed.head()
# ### Process global cumulative deaths
deaths = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv", dtype='str')
deaths = deaths.fillna('')
deaths.rename(columns={'Province/State': 'admin1', 'Country/Region': 'country'}, inplace=True)
cases_deaths = pd.concat((split_by_day(deaths, day, 'deaths') for day in days))
cases_deaths['date'] = cases_deaths['date'].apply(dateutil.parser.parse)
cases_deaths.head()
# ### Merge global cases into one dataframe
cases = cases_confirmed.merge(cases_deaths, on=['country', 'admin1', 'date'])
cases['origLocation'] = cases[['country', 'admin1']].apply(lambda x: x['country'] if x['admin1'] == ''
else x['country'] + ',' + x['admin1'], axis=1)
# Save only cases that have non-zero counts
cases = cases[(cases['cases'] != '0') | (cases['deaths'] != '0')]
cases.head()
cases.to_csv(NEO4J_IMPORT / "02a-JHUCasesGlobal.csv", index=False)
cases.query("country == 'Germany'").tail(100)
| notebooks/dataprep/02a-JHUCases.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cjakuc/DS-Unit-2-Linear-Models/blob/master/module3-ridge-regression/LS_DS_213_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="xAUKsjpAVamH" colab_type="text"
# Lambda School Data Science
#
# *Unit 2, Sprint 1, Module 3*
#
# ---
# + [markdown] colab_type="text" id="7IXUfiQ2UKj6"
# # Ridge Regression
#
# ## Assignment
#
# We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
#
# But not just for condos in Tribeca...
#
# - [x] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million.
# - [x] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
# - [x] Do one-hot encoding of categorical features.
# - [x] Do feature selection with `SelectKBest`.
# - [x] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set)
# - [x] Get mean absolute error for the test set.
# - [x] As always, commit your notebook to your fork of the GitHub repo.
#
# The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
#
#
# ## Stretch Goals
#
# Don't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from.
#
# - [ ] Add your own stretch goal(s) !
# - [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥
# - [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html).
# - [ ] Learn more about feature selection:
# - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
# - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
# - [mlxtend](http://rasbt.github.io/mlxtend/) library
# - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
# - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
# - [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you're interested in a more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
# - [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.
# - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
# + colab_type="code" id="o9eSnDYhUGD7" colab={}
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# + colab_type="code" id="QJBD4ruICm1m" colab={}
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
    df['SALE_PRICE']
    .str.replace('$', '', regex=False)  # literal '$', not the regex end-anchor
    .str.replace('-', '', regex=False)
    .str.replace(',', '', regex=False)
    .astype(int)
)
# + id="IgPj4OF9VamS" colab_type="code" colab={}
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df['BOROUGH'] = df['BOROUGH'].astype(str)
# + id="HfbaoFDDVamV" colab_type="code" colab={}
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
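The two steps above (find the top 10, lump everything else into `'OTHER'`) generalize to a small reusable helper. `reduce_cardinality` is a hypothetical function sketched here, not part of the assignment starter code:

```python
import pandas as pd

def reduce_cardinality(series, top_n=10, other='OTHER'):
    """Keep the top_n most frequent values; lump the rest into one label."""
    top = series.value_counts().index[:top_n]
    return series.where(series.isin(top), other)

s = pd.Series(['a'] * 5 + ['b'] * 3 + ['c'])
print(reduce_cardinality(s, top_n=2).unique())
```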
# + [markdown] id="lOXIcG7BXwyA" colab_type="text"
# ## 1) Use a subset of the data where BUILDING_CLASS_CATEGORY == '01 ONE FAMILY DWELLINGS' and the sale price was more than 100 thousand and less than 2 million.
# + id="c34XjWllVamY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="abaf4006-4794-4431-e256-3c54a074d5a2"
condition = ((df['BUILDING_CLASS_CATEGORY']=='01 ONE FAMILY DWELLINGS') &
(df['SALE_PRICE'] > 100000) &
(df['SALE_PRICE'] < 2000000))
df = df[condition]
# Check for max/min sale price
print(df.describe())
df.head()
# + id="-AWNryMHYi-G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="63e4d944-835d-4027-ee2f-d38d95f86eb4"
# Check for building class cats
df['BUILDING_CLASS_CATEGORY'].describe()
# + [markdown] id="zus5ysayYy9D" colab_type="text"
# ## 2) Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
# + id="aCIz3TibZfOp" colab_type="code" colab={}
# Check that sale date variable is in datetime
df.dtypes
# It is not
# + id="fAf7odQHZ6ei" colab_type="code" colab={}
# Convert sale date to datetime
df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'],
infer_datetime_format=True)
# Check that it worked
df.dtypes
# + id="1JzBM5Rla8x2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="56c2be22-5433-4959-a854-29b447f63369"
df['SALE_DATE'].describe()
# + id="oxI_HOQ5aVp1" colab_type="code" colab={}
# Create the train and test splits
cutoff = pd.to_datetime('2019-04-01')
train = df[df['SALE_DATE'] < cutoff]
test = df[df['SALE_DATE'] >= cutoff]  # >= so April 1 sales land in the test set
# + [markdown] id="RT_3ldgScCJT" colab_type="text"
# ## 3) Do one-hot encoding of categorical features.
# + id="d9NHtWgocGAG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="75a852e4-b2a1-4513-ade8-2214801a7822"
# Find categorical features with low cardinality
train.describe(exclude='number').T.sort_values(by='unique')
# + id="PG5ZlGetd9ka" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 359} outputId="9fed4931-cb21-499e-eee5-3ad5ecc412eb"
# Find categorical features with high cardinality
train.describe(exclude='number').T.sort_values(by='unique',ascending=False)
# + id="c0X-03VYcVIr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 334} outputId="03331f96-6c4d-4f19-c3de-d62ed2dd8241"
train.describe()
# + id="winvZolVceA1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="3d5064d0-462a-4de9-b2be-71b375a7afaa"
train.describe(exclude='number')
# + id="qdbPf7E7c0eK" colab_type="code" colab={}
# Exclude the features with high cardinality & NaNs
target = 'SALE_PRICE'
exclude = ['ADDRESS',
'LAND_SQUARE_FEET',
'SALE_DATE',
'APARTMENT_NUMBER',
'EASE-MENT',
'BUILDING_CLASS_CATEGORY']
features = train.columns.drop([target] + exclude)
# + id="_mi-g3yueu0F" colab_type="code" colab={}
# Create feature splits
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
# + id="RWRcJ3KofD7D" colab_type="code" colab={}
# Encode categorical features using category encoders
import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train = encoder.fit_transform(X_train)
# + id="Fob4EFaHfO-q" colab_type="code" colab={}
X_test = encoder.transform(X_test)
# + id="VDjGUPXbfWsX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 459} outputId="81855f90-04f7-4dd8-8472-c51e3c3076d8"
X_train.columns
# + [markdown] id="c9kqkOkzfdO6" colab_type="text"
# ## 4) Do feature selection with `SelectKBest`.
# + id="iifkXKt-fieD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="1bc97527-2a06-40eb-8215-a2cd3fe01033"
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.metrics import mean_absolute_error
for k in range(1, len(X_train.columns)+1):
print(f'{k} features')
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train, y_train)
X_test_selected = selector.transform(X_test)
model = LinearRegression()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test Mean Absolute Error: ${mae:,.0f} \n')
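The loop above reports error per `k` but never shows *which* features survived. `SelectKBest.get_support()` returns a boolean mask over the input columns; a small sketch on synthetic data (names are illustrative, not from the NYC dataset):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression

X, y = make_regression(n_samples=50, n_features=6, random_state=0)
names = np.array([f'f{i}' for i in range(6)])

selector = SelectKBest(score_func=f_regression, k=3).fit(X, y)
kept = names[selector.get_support()]   # which columns survived selection
```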
# + [markdown] id="PydNmEVFizBv" colab_type="text"
# ## 5) Fit a ridge regression model with multiple features. Use the normalize=True parameter (or do feature scaling beforehand — use the scaler's fit_transform method with the train set, and the scaler's transform method with the test set)
# + id="8r2H3esti2_M" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3ec3acff-a7c1-47ee-e64b-011133ae4be3"
from sklearn.linear_model import RidgeCV
alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
for k in range(1, len(X_train.columns)+1):
print(f'{k} features')
# Select the k best features
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train, y_train)
X_test_selected = selector.transform(X_test)
# Select the best alpha for ridge regression using RidgeCV
model = RidgeCV(alphas=alphas,
normalize=True)
model.fit(X_train_selected,
y_train)
# Find the MAE of the model w/ selected alpha and selected k features
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test,
y_pred)
print(f'Alpha: {model.alpha_} and Test Mean Absolute Error: ${mae:,.0f} \n')
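A caveat for anyone rerunning this today: `normalize=True` was deprecated and then removed from scikit-learn's ridge estimators in recent releases, so the explicit-scaling route mentioned in the assignment is the durable one. A minimal sketch with synthetic data (all names here are hypothetical):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(80, 3)), rng.normal(size=(20, 3))
y_train = X_train @ np.array([1.0, 2.0, 3.0])

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)  # fit scaling statistics on the train set only
X_test_s = scaler.transform(X_test)        # reuse those statistics on the test set

model = Ridge(alpha=1.0).fit(X_train_s, y_train)
pred = model.predict(X_test_s)
```

Fitting the scaler on the train set alone avoids leaking test-set statistics into the model.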
| module3-ridge-regression/LS_DS_213_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Model1 - RNN model of CPRO task using PyTorch
#
# #### <NAME>
# #### 09/30/2018
# +
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import numpy as np
np.set_printoptions(suppress=True)
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
# %matplotlib inline
import sys
sys.path.append('../utils/bctpy/')
import bct
import time
retrain = True
modeldir = '../../..//models/Model3c/'
# -
# ## Define network inputs (sensory inputs + task rules)
# +
def createSensoryInputs(nStims=2):
stimdata = {}
# Stim 1 empty columns
stimdata['Color1'] = []
stimdata['Orientation1'] = []
stimdata['Pitch1'] = []
stimdata['Constant1'] = []
# Stim 2 empty columns
stimdata['Color2'] = []
stimdata['Orientation2'] = []
stimdata['Pitch2'] = []
stimdata['Constant2'] = []
# Code for RNN training
stimdata['Code'] = []
# Property index tells us which columns ID the property in question
color = {0:'red',
1:'blue'}
orientation = {2:'vertical',
3:'horizontal'}
pitch = {4:'high',
5:'low'}
constant = {6:'constant',
7:'beeping'}
for col1 in color:
for col2 in color:
for ori1 in orientation:
for ori2 in orientation:
for pit1 in pitch:
for pit2 in pitch:
for con1 in constant:
for con2 in constant:
code = np.zeros((8*nStims,))
# Stim 1
code[col1] = 1
stimdata['Color1'].append(color[col1])
code[ori1] = 1
stimdata['Orientation1'].append(orientation[ori1])
code[pit1] = 1
stimdata['Pitch1'].append(pitch[pit1])
code[con1] = 1
stimdata['Constant1'].append(constant[con1])
# Stim 2 -- need to add 8, since this is the second stimuli
code[col2+8] = 1
stimdata['Color2'].append(color[col2])
code[ori2+8] = 1
stimdata['Orientation2'].append(orientation[ori2])
code[pit2+8] = 1
stimdata['Pitch2'].append(pitch[pit2])
code[con2+8] = 1
stimdata['Constant2'].append(constant[con2])
# Code
stimdata['Code'].append(code)
return pd.DataFrame(stimdata)
def createRulePermutations():
# May need to change this - both and not both are negations of each other, as are either and neither
logicRules = {0: 'both',
1: 'notboth',
2: 'either',
3: 'neither'}
sensoryRules = {4: 'red',
5: 'vertical',
6: 'high',
7: 'constant'}
motorRules = {8: 'l_mid',
9: 'l_ind',
10: 'r_ind',
11: 'r_mid'}
taskrules = {}
taskrules['Logic'] = []
taskrules['Sensory'] = []
taskrules['Motor'] = []
# Create another field for the sensory category (to select stimuli from)
taskrules['SensoryCategory'] = []
# For RNN training
taskrules['Code'] = []
for lo in logicRules:
for se in sensoryRules:
for mo in motorRules:
code = np.zeros((12,))
# Logic rule
taskrules['Logic'].append(logicRules[lo])
code[lo] = 1
# Sensory rule
taskrules['Sensory'].append(sensoryRules[se])
code[se] = 1
# Define sensory category
if sensoryRules[se]=='red': category = 'Color'
if sensoryRules[se]=='vertical': category = 'Orientation'
if sensoryRules[se]=='high': category = 'Pitch'
if sensoryRules[se]=='constant': category = 'Constant'
taskrules['SensoryCategory'].append(category)
# Motor rule
taskrules['Motor'].append(motorRules[mo])
code[mo] = 1
taskrules['Code'].append(code)
return pd.DataFrame(taskrules)
def createTrainTestTaskRules(taskRuleSet,nTrainSet=32,nTestSet=32):
"""
Ensure that when we split the task rules, that each set has equal proportions of each task rule
For example, if there are 32 training tasks, then we should have 8 examples of each rule
"""
nRulesPerTrainSet = nTrainSet/4.0
nRulesPerTestSet = nTestSet/4.0
if nRulesPerTrainSet%4.0!=0:
raise Exception('ERROR: Number of rules per train/test set needs to be divisible by 4!')
df_test = pd.DataFrame()
df_train = pd.DataFrame()
# Make sure all columns exist
df_train = df_train.append(taskRuleSet.iloc[0])
# Iterate through tasks in a random manner
ind = np.arange(len(taskRuleSet))
np.random.shuffle(ind)
for i in ind:
# Identify the rules in this task set
logic = taskRuleSet.Logic[i]
sensory = taskRuleSet.Sensory[i]
motor = taskRuleSet.Motor[i]
# Count number of logic rules for this task set
nLogic = np.sum(df_train.Logic==logic)
nSensory = np.sum(df_train.Sensory==sensory)
nMotor = np.sum(df_train.Motor==motor)
if nLogic<nRulesPerTrainSet and nSensory<nRulesPerTrainSet and nMotor<nRulesPerTrainSet:
df_train = df_train.append(taskRuleSet.iloc[i])
else:
df_test = df_test.append(taskRuleSet.iloc[i])
return df_train, df_test
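Note for modern pandas: `DataFrame.append`, used row-by-row above, was removed in pandas 2.x. The same row-wise accumulation can be sketched by collecting slices and concatenating once at the end (the frame below is a stand-in, not the real rule set):

```python
import pandas as pd

rule_set = pd.DataFrame({'Logic': ['both', 'either'], 'Motor': ['l_mid', 'r_ind']})

train_rows = []                        # collect single-row slices instead of appending
for i in range(len(rule_set)):
    train_rows.append(rule_set.iloc[[i]])
df_train = pd.concat(train_rows, ignore_index=True)
```

Concatenating once is also much faster than growing a DataFrame inside the loop.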
# -
# ## Calculate motor outputs for each set of inputs
# +
motorCode = {0:'l_mid',
1:'l_ind',
2:'r_ind',
3:'r_mid'}
def solveInputs(task_rules, stimuli, printTask=False):
"""
Solves CPRO task given a set of inputs and a task rule
"""
logicRule = task_rules.Logic
sensoryRule = task_rules.Sensory
motorRule = task_rules.Motor
sensoryCategory = task_rules.SensoryCategory
# Isolate the property for each stimulus relevant to the sensory rule
stim1 = stimuli[sensoryCategory + '1']
stim2 = stimuli[sensoryCategory + '2']
# Run through logic rule gates
if logicRule == 'both':
if stim1==sensoryRule and stim2==sensoryRule:
gate = True
else:
gate = False
if logicRule == 'notboth':
if stim1!=sensoryRule or stim2!=sensoryRule:
gate = True
else:
gate = False
if logicRule == 'either':
if stim1==sensoryRule or stim2==sensoryRule:
gate = True
else:
gate = False
if logicRule == 'neither':
if stim1!=sensoryRule and stim2!=sensoryRule:
gate = True
else:
gate = False
## Print task first
if printTask:
print 'Logic rule:', logicRule
print 'Sensory rule:', sensoryRule
print 'Motor rule:', motorRule
print '**Stimuli**'
print stim1, stim2
# Apply logic gating to motor rules
if motorRule=='l_mid':
if gate==True:
motorOutput = 'l_mid'
else:
motorOutput = 'l_ind'
if motorRule=='l_ind':
if gate==True:
motorOutput = 'l_ind'
else:
motorOutput = 'l_mid'
if motorRule=='r_mid':
if gate==True:
motorOutput = 'r_mid'
else:
motorOutput = 'r_ind'
if motorRule=='r_ind':
if gate==True:
motorOutput = 'r_ind'
else:
motorOutput = 'r_mid'
outputcode = np.zeros((4,))
if motorOutput=='l_mid': outputcode[0] = 1
if motorOutput=='l_ind': outputcode[1] = 1
if motorOutput=='r_ind': outputcode[2] = 1
if motorOutput=='r_mid': outputcode[3] = 1
return motorOutput, outputcode
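The four logic gates in `solveInputs` reduce to a truth table over two booleans ("does stimulus N match the sensory rule?"). A compact restatement, useful for sanity-checking the branches above (`logic_gate` is a hypothetical helper, not part of the model code):

```python
def logic_gate(rule, match1, match2):
    """Truth table of the four CPRO logic rules over two stimulus matches."""
    if rule == 'both':
        return match1 and match2
    if rule == 'notboth':       # equivalent to: not (match1 and match2)
        return not (match1 and match2)
    if rule == 'either':
        return match1 or match2
    if rule == 'neither':
        return not (match1 or match2)

print(logic_gate('notboth', True, False))  # -> True
```

As the comment at `createRulePermutations` notes, 'notboth' and 'neither' are the negations of 'both' and 'either'.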
# +
def createTrainingSet(taskRuleSet,nStimuli=100,nTasks=64,delay=False,shuffle=True):
"""
Randomly generates a set of stimuli (nStimuli) for each task rule
Will end up with 64 (task rules) * nStimuli total number of input stimuli
If shuffle keyword is True, will randomly shuffle the training set
Otherwise will start with taskrule1 (nStimuli), taskrule2 (nStimuli), etc.
"""
stimuliSet = createSensoryInputs()
networkIO_DataFrame = {}
networkIO_DataFrame['LogicRule'] = []
networkIO_DataFrame['SensoryRule'] = []
networkIO_DataFrame['MotorRule'] = []
networkIO_DataFrame['Color1'] = []
networkIO_DataFrame['Color2'] = []
networkIO_DataFrame['Orientation1'] = []
networkIO_DataFrame['Orientation2'] = []
networkIO_DataFrame['Pitch1'] = []
networkIO_DataFrame['Pitch2'] = []
networkIO_DataFrame['Constant1'] = []
networkIO_DataFrame['Constant2'] = []
networkIO_DataFrame['MotorResponse'] = []
# Create 1d array to randomly sample indices from
stimIndices = np.arange(len(stimuliSet))
taskIndices = np.arange(len(taskRuleSet))
randomTaskIndices = np.random.choice(taskIndices,nTasks,replace=False)
taskRuleSet2 = taskRuleSet.iloc[randomTaskIndices].copy(deep=True)
taskRuleSet2 = taskRuleSet2.reset_index(drop=True)
taskRuleSet = taskRuleSet2.copy(deep=True)
networkInputCode = []
networkOutputCode = []
for taskrule in taskRuleSet.index:
randomStimuliIndices = np.random.choice(stimIndices,nStimuli,replace=False)
stimuliSet2 = stimuliSet.iloc[randomStimuliIndices].copy(deep=True)
stimuliSet2 = stimuliSet2.reset_index(drop=True)
for stim in stimuliSet2.index:
networkInputCode.append(np.hstack((taskRuleSet.Code[taskrule], stimuliSet2.Code[stim])))
tmpresp, tmpcode = solveInputs(taskRuleSet.iloc[taskrule], stimuliSet2.iloc[stim])
networkOutputCode.append(tmpcode)
# Task rule info
networkIO_DataFrame['LogicRule'].append(taskRuleSet.Logic[taskrule])
networkIO_DataFrame['SensoryRule'].append(taskRuleSet.Sensory[taskrule])
networkIO_DataFrame['MotorRule'].append(taskRuleSet.Motor[taskrule])
# Stimuli info
networkIO_DataFrame['Color1'].append(stimuliSet2.Color1[stim])
networkIO_DataFrame['Color2'].append(stimuliSet2.Color2[stim])
networkIO_DataFrame['Orientation1'].append(stimuliSet2.Orientation1[stim])
networkIO_DataFrame['Orientation2'].append(stimuliSet2.Orientation2[stim])
networkIO_DataFrame['Pitch1'].append(stimuliSet2.Pitch1[stim])
networkIO_DataFrame['Pitch2'].append(stimuliSet2.Pitch2[stim])
networkIO_DataFrame['Constant1'].append(stimuliSet2.Constant1[stim])
networkIO_DataFrame['Constant2'].append(stimuliSet2.Constant2[stim])
# Motor info
networkIO_DataFrame['MotorResponse'].append(tmpresp)
tmpdf = pd.DataFrame(networkIO_DataFrame)
if shuffle:
ind = np.arange(len(tmpdf),dtype=int)
np.random.shuffle(ind)
networkIO_DataFrame = tmpdf.iloc[ind]
networkInputCode = np.asarray(networkInputCode)[ind]
networkOutputCode = np.asarray(networkOutputCode)[ind]
# Add delay (i.e., 0 inputs & 0 outputs, just in case)
if delay:
networkInputCode2 = []
networkOutputCode2 = []
nDelays = 1
for index in range(len(networkIO_DataFrame)):
networkInputCode2.append(networkInputCode[index])
networkOutputCode2.append(networkOutputCode[index])
for delay in range(nDelays):
networkInputCode2.append(np.zeros((len(networkInputCode[index]),)))
networkOutputCode2.append(np.zeros((len(networkOutputCode[index]),)))
networkInputCode = networkInputCode2
networkOutputCode = networkOutputCode2
return networkIO_DataFrame, networkInputCode, networkOutputCode
# -
# # Train RNN first on a subset of tasks (half the tasks)
# +
# Train
nTrainSet = 32
nTestSet = 32
taskRuleSet = createRulePermutations()
if retrain:
trainRuleSet, testRuleSet = createTrainTestTaskRules(taskRuleSet,nTrainSet=nTrainSet,nTestSet=nTestSet)
# trainRuleSet.to_hdf(modeldir + 'trainRuleSet.h5','trainRuleSet')
# testRuleSet.to_hdf(modeldir + 'testRuleSet.h5','testRuleSet')
else:
trainRuleSet = pd.read_hdf(modeldir + 'trainRuleSet.h5','trainRuleSet')
testRuleSet = pd.read_hdf(modeldir + 'testRuleSet.h5','testRuleSet')
stimuliSet = createSensoryInputs()
# +
NUM_RULE_INPUTS = len(taskRuleSet.Code[0])
NUM_SENSORY_INPUTS = len(stimuliSet.Code[0])
NUM_HIDDEN = 128
NUM_MOTOR_DECISION_OUTPUTS = 4
# NUM_TRAINING_ITERATIONS = 100000 # Maybe 20000 is better
NUM_TRAINING_ITERATIONS = 10000 # Maybe 20000 is better
NUM_TRAINING_RULES_PER_EPOCH = 4
NUM_TRAINING_STIMULI_PER_RULE = 200  # (an earlier draft used 5)
# bias = Variable(torch.Tensor(1, NUM_HIDDEN).uniform_(-1, 0), requires_grad=True)
bias = Variable(torch.cuda.FloatTensor(1, NUM_HIDDEN).uniform_(-1, 0), requires_grad=True)
drdt = 0.05
if retrain:
starttime = time.time()
# w_in = Variable(torch.Tensor(NUM_RULE_INPUTS + NUM_SENSORY_INPUTS, NUM_HIDDEN).uniform_(-0.5,0.5), requires_grad=True)
# w_rec = Variable(torch.Tensor(NUM_HIDDEN, NUM_HIDDEN).uniform_(-0.5,0.5), requires_grad=True)
# w_out = Variable(torch.Tensor(NUM_HIDDEN, NUM_MOTOR_DECISION_OUTPUTS).uniform_(-0.5,0.5), requires_grad=True)
w_in = Variable(torch.cuda.FloatTensor(NUM_RULE_INPUTS + NUM_SENSORY_INPUTS, NUM_HIDDEN).uniform_(-0.5,0.5), requires_grad=True)
w_rec = Variable(torch.cuda.FloatTensor(NUM_HIDDEN, NUM_HIDDEN).uniform_(-0.5,0.5), requires_grad=True)
w_out = Variable(torch.cuda.FloatTensor(NUM_HIDDEN, NUM_MOTOR_DECISION_OUTPUTS).uniform_(-0.5,0.5), requires_grad=True)
#outputs = networkOutputCode
#randomInputs = np.random.randint(0,len(networkInputCode),10)
#inputs = np.asarray(networkInputCode)[randomInputs]
accuracyPerEpoch = []
learning_rate = 0.01
for iteration_num in range(NUM_TRAINING_ITERATIONS):
# previous_r = Variable(torch.Tensor(1, NUM_HIDDEN).zero_(), requires_grad=False)
previous_r = Variable(torch.cuda.FloatTensor(1, NUM_HIDDEN).zero_(), requires_grad=False)
error = 0
# Increase number of presented tasks with number of increased iterations
# Don't allow more than 10 task rules per epoch, since it will just slow training down
if iteration_num % 2000 == 0:
if NUM_TRAINING_RULES_PER_EPOCH < 10:
NUM_TRAINING_RULES_PER_EPOCH += 1
df, inputs, outputs = createTrainingSet(trainRuleSet, nStimuli=NUM_TRAINING_STIMULI_PER_RULE, nTasks=NUM_TRAINING_RULES_PER_EPOCH, delay=False) # 64 * 20 stimuli
acc = []
# for timestep in range(len(inputs)):
# target = Variable(torch.Tensor([outputs[timestep]]))
# u = Variable(torch.Tensor(inputs))
# target = Variable(torch.Tensor(outputs))
u = Variable(torch.cuda.FloatTensor(inputs))
target = Variable(torch.cuda.FloatTensor(outputs))
# The neural network
r = previous_r - drdt*previous_r + drdt* F.relu(previous_r.mm(w_rec) + u.mm(w_in) + bias)
output = r.mm(w_out)
error += torch.mean((output - target).pow(2)) # Mean squared error loss
previous_r = r # Recurrence
# if iteration_num % 1000 == 0:
# print(output.data.numpy())
tmp_target = target.cpu()
if np.sum(np.asarray(tmp_target.data))!=0:
tmp_output = output.cpu()
for trial in range(tmp_output.data.shape[0]):
distance = np.abs(1.0-tmp_output.data[trial])
if np.where(distance == distance.min())[0][0] == np.where(np.asarray(tmp_target.data[trial]))[0][0]:
acc.append(1.0)
else:
acc.append(0.0)
if iteration_num % 1000 == 0:
print 'Iteration:', iteration_num
print '\tloss:', error.data
print '\tAccuracy: ' + str(round(np.mean(acc)*100.0,4)) +'%'
accuracyPerEpoch.append(np.mean(acc)*100.0)
if iteration_num>10:
if np.sum(np.asarray(accuracyPerEpoch[-10:])>96.0)==10:
print 'Last 10 epochs had above 96% accuracy... stopping training'
break
# Learning
error.backward()
w_in.data -= learning_rate*w_in.grad.data; w_in.grad.data.zero_()
w_rec.data -= learning_rate*w_rec.grad.data; w_rec.grad.data.zero_()
w_out.data -= learning_rate*w_out.grad.data; w_out.grad.data.zero_()
bias.data -= learning_rate*bias.grad.data; bias.grad.data.zero_()
torch.save(w_in,modeldir + 'Model3c_Win')
torch.save(w_rec,modeldir + 'Model3c_Wrec')
torch.save(w_out,modeldir + 'Model3c_Wout')
torch.save(bias,modeldir + 'Model3c_Bias')
endtime = time.time()
print 'Time elapsed:', endtime - starttime
else:
w_in = torch.load(modeldir + 'Model3c_Win')
w_rec = torch.load(modeldir + 'Model3c_Wrec')
w_out = torch.load(modeldir + 'Model3c_Wout')
bias = torch.load(modeldir + 'Model3c_Bias')
# -
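The core dynamics trained above are a discretized leaky rate equation, r <- r - drdt*r + drdt*f(r W_rec + u W_in + b), with f = ReLU during training. A dependency-free NumPy sketch of one Euler step (all sizes and weights below are arbitrary stand-ins):

```python
import numpy as np

def step(r, u, w_rec, w_in, bias, drdt=0.05):
    """One Euler step of the leaky rate dynamics:
    r <- r - drdt*r + drdt*relu(r @ W_rec + u @ W_in + b)."""
    drive = np.maximum(0.0, r @ w_rec + u @ w_in + bias)
    return r - drdt * r + drdt * drive

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
r = np.zeros(n_hid)                       # start from rest
u = rng.normal(size=n_in)
w_in = rng.normal(size=(n_in, n_hid))
w_rec = rng.normal(size=(n_hid, n_hid))
bias = np.zeros(n_hid)
r = step(r, u, w_rec, w_in, bias)
```

Starting from r = 0, one step yields r = drdt * relu(u @ W_in + b), so all rates stay non-negative under the ReLU nonlinearity.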
# ## Comparison: 7586 seconds without, 256 units
# +
previous_r = Variable(torch.Tensor(1, NUM_HIDDEN).zero_(), requires_grad=False)
error = 0
df, inputs, outputs = createTrainingSet(trainRuleSet, nStimuli=len(stimuliSet), nTasks=len(trainRuleSet), delay=True) # 64 * 20 stimuli
acc = []
for timestep in range(len(inputs)):
previous_r = Variable(torch.Tensor(1, NUM_HIDDEN).zero_(), requires_grad=False)
u = Variable(torch.Tensor([inputs[timestep]]))
target = Variable(torch.Tensor([outputs[timestep]]))
# The neural network
r = previous_r - drdt*previous_r + drdt* F.sigmoid(previous_r.mm(w_rec) + u.mm(w_in) + bias)
output = r.mm(w_out)
error += torch.mean((output - target).pow(2)) # Mean squared error loss
previous_r = r # Recurrence
# if iteration_num % 1000 == 0:
# print(output.data.numpy())
if np.sum(np.asarray(target.data))!=0:
distance = np.abs(1.0-output.data)
if np.where(distance == distance.min())[1][0] == np.where(np.asarray(target.data))[1][0]:
acc.append(1.0)
else:
acc.append(0.0)
print 'Accuracy on entire stimulus set and practiced task rule set:', round(np.mean(acc)*100.0,3)
# +
previous_r = Variable(torch.Tensor(1, NUM_HIDDEN).zero_(), requires_grad=False)
error = 0
df, inputs, outputs = createTrainingSet(testRuleSet, nStimuli=len(stimuliSet), nTasks=len(testRuleSet), delay=True) # 64 * 20 stimuli
acc = []
for timestep in range(len(inputs)):
previous_r = Variable(torch.Tensor(1, NUM_HIDDEN).zero_(), requires_grad=False)
u = Variable(torch.Tensor([inputs[timestep]]))
target = Variable(torch.Tensor([outputs[timestep]]))
# The neural network
r = previous_r - drdt*previous_r + drdt* F.sigmoid(previous_r.mm(w_rec) + u.mm(w_in) + bias)
output = r.mm(w_out)
error += torch.mean((output - target).pow(2)) # Mean squared error loss
previous_r = r # Recurrence
# if iteration_num % 1000 == 0:
# print(output.data.numpy())
if np.sum(np.asarray(target.data))!=0:
distance = np.abs(1.0-output.data)
if np.where(distance == distance.min())[1][0] == np.where(np.asarray(target.data))[1][0]:
acc.append(1.0)
else:
acc.append(0.0)
print 'Accuracy on entire stimulus set and unseen task rule set:', round(np.mean(acc)*100.0,3)
# -
# ## Run random simulation to get FC weights
# +
previous_r = Variable(torch.Tensor(1, NUM_HIDDEN).zero_(), requires_grad=False)
error = 0
activity = []
acc = []
for timestep in range(1000):
inputs = np.random.normal(0,0.5,(NUM_HIDDEN,))
u = Variable(torch.Tensor([inputs]))
target = Variable(torch.Tensor([outputs[timestep]]))
# The neural network
r = previous_r - drdt*previous_r + drdt* F.sigmoid(previous_r.mm(w_rec) + u.mm(w_rec) + bias)
activity.append(np.asarray(r.data))
output = r.mm(w_out)
# error += torch.mean((output - target).pow(2)) # Mean squared error loss
previous_r = r # Recurrence
activity = np.squeeze(np.asarray(activity))
# +
plt.figure()
plt.title('Simulated random activity', fontsize=20)
ax = sns.heatmap(activity.T)
ax.invert_yaxis()
plt.xlabel('Time',fontsize=18)
plt.ylabel('Regions',fontsize=18)
# Construct correlation matrix
corrmat = np.corrcoef(activity.T)
sig = np.multiply(corrmat,corrmat>0)
ci, q = bct.community_louvain(sig)
networkdef = sorted(range(len(ci)), key=lambda k: ci[k])
networkdef = np.asarray(networkdef)
networkdef.shape = (len(networkdef),1)
plt.figure()
ax = sns.heatmap(corrmat[networkdef,networkdef.T],square=True,center=0,cmap='bwr')
ax.invert_yaxis()
plt.title('Correlation matrix',fontsize=24,y=1.04)
plt.xlabel('Regions',fontsize=20)
plt.ylabel('Regions',fontsize=20)
plt.savefig('NoiseInduced_CorrelationMatrix.pdf')
# -
# ## Analyze recurrent connectivity weights
mat = np.asarray(w_rec.data)
plt.figure()
ax = sns.heatmap(mat[networkdef,networkdef.T],square=True,center=0,cmap='bwr')
ax.invert_yaxis()
plt.title('Recurrent connectivity weights',fontsize=24,y=1.04)
plt.xlabel('Regions',fontsize=20)
plt.ylabel('Regions',fontsize=20)
plt.tight_layout()
plt.savefig('GroundTruth_RNN_weights.pdf')
# ## Do an eigendecomposition on connectivity matrix
eigvalues, eigvec = np.linalg.eig(mat)
ind = np.argsort(-eigvalues.real)  # eigenvalues may be complex; order by real part, descending
eigvec = eigvec[:, ind]            # eigenvectors are the *columns* of eigvec
eigvalues = eigvalues[ind]
| code/model/archive/Model3c_CPRO_SR_HoldOut32Tasks_128Nodes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import seaborn as sns
sns.set()
import json
import sys
sys.path.insert(0, "../")
# -
from RGES.DiffEx import DiffEx
from RGES.L1KGCT import L1KGCTX
from RGES.Score import score
# # Comparing Python RGES To LINCS L1000 Concordance Scores
#
# ## Introduction
#
# Our goal is to identify drugs that produce signatures *reverse* to the differential expression signatures seen in platinum-based drug resistance. To do this, we calculate a Reverse Gene Expression Score using a similar method to [Chen et al](https://www.nature.com/articles/ncomms16022#supplementary-information). This notebook tests the implementation of RGES by checking for correlation with the [iLINCS](http://www.ilincs.org/ilincs/signaturesL1000/LDG-1188/search/) portal concordance score for Carboplatin resistance. The steps to acquire the iLINCS concordance score are described below.
# ## Loading Concordance Scores
#
# Concordance scores are stored at ```/mnt/oncogxA/Alex/l1k/CTPRES_ilincs_concordance.tsv```
#
# This code loads the concordance data and then creates a dictionary of ```{signature_name: concordance_score}```
# +
concord_path = "/home/jovyan/oncogxA/Alex/l1k/10x_ilincs_concordance.xls"
concord_df = pd.read_csv(concord_path, sep='\t')
conc_d = {r['SignatureId']: r['Concordance'] for _, r in concord_df.iterrows()}
# -
# ## Loading Carboplatin Signature and LINCS Signatures Data
#
# This code loads the CTPRES file and the LINCS Signatures file, which are at
#
# ```/mnt/oncogxA/Alex/l1k/res.df.entrez.txt```
#
# ```/mnt/oncogxA/Alex/l1k/CTPRES_100_concordant_sigs.gct```
# +
de = DiffEx("/home/jovyan/oncogxA/Alex/l1k/DEG_SC_5um_entrezgene.txt")
lincs_sigs = L1KGCTX("/home/jovyan/oncogxA/Alex/l1k/10x_ilincs_sigs_top500_ranked_n500x978.gctx")
# +
## Run this cell if there are log2fc.x and log2fc.y columns
merge_l2fc = lambda x: -1.0*x['log2fc.y'] if not np.isnan(x['log2fc.y']) else x['log2FoldChange']
de.data['log2FoldChange'] = de.data.apply(merge_l2fc, axis=1)
# -
# ## Calculate Scores For Each Signature
# +
points = [] #[(concordance, RGES)]
for signame in list(lincs_sigs.data):
concordance = conc_d[signame]
rges = score(de, lincs_sigs, signame)
points.append((concordance, rges))
# +
###Uncomment me if loading scores from a file
#points = []
#scores = json.loads(open("10x_ilincs_top500_scores.json").read())
#for signame in list(lincs_sigs.data):
# points.append((conc_d[signame], scores[signame]))
x = [p[0] for p in points]
y = [p[1] for p in points]
plt.scatter(x, y)
pearsonr(x, y)
# -
# ## Querying the iLINCS Concordance For CTPRES
#
# **TODO**: Write up a description of how to get concordance for a phenotype signature. It's in my notes for 2018-03-27
| notebooks/RGES_vs_LINCS-concordance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Jupyter notebooks
#
# This is a [Jupyter](http://jupyter.org/) notebook using Python. You can install Jupyter locally to edit and interact with this notebook.
#
# # Finite Difference methods in 2 dimensions
#
# Let's start by generalizing the 1D Laplacian,
#
# \begin{align} - u''(x) &= f(x) \text{ on } \Omega = (a,b) & u(a) &= g_0(a) & u'(b) &= g_1(b) \end{align}
#
# to two dimensions
#
# \begin{align} -\nabla\cdot \big( \nabla u(x,y) \big) &= f(x,y) \text{ on } \Omega \subset \mathbb R^2
# & u|_{\Gamma_D} &= g_0(x,y) & \nabla u \cdot \hat n|_{\Gamma_N} &= g_1(x,y)
# \end{align}
#
# where $\Omega$ is some well-connected open set (we will assume simply connected) and the Dirichlet boundary $\Gamma_D \subset \partial \Omega$ is nonempty.
#
# We need to choose a system for specifying the domain $\Omega$ and ordering degrees of freedom. Perhaps the most significant limitation of finite difference methods is that this specification is messy for complicated domains. We will choose
# $$ \Omega = (0, 1) \times (0, 1) $$
# and
# \begin{align} (x, y)_{im+j} &= (i h, j h) & h &= 1/(m-1) & i,j \in \{0, 1, \dotsc, m-1 \} .
# \end{align}
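# The row-major ordering above can be sanity-checked on a tiny grid. A minimal sketch (the `idx` helper simply restates the formula; it is not a function defined elsewhere):

```python
m = 3                    # 3x3 grid, so h = 1/(m-1) = 0.5
h = 1 / (m - 1)

def idx(i, j):
    # row-major ordering: degree of freedom (i, j) lives at position i*m + j
    return i * m + j

# dof 5 corresponds to (i, j) = (1, 2), i.e. the point (x, y) = (0.5, 1.0)
assert idx(1, 2) == 5
i, j = divmod(5, m)      # invert the flattening
assert (i * h, j * h) == (0.5, 1.0)
```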
# +
# %matplotlib inline
import numpy
from matplotlib import pyplot
pyplot.style.use('ggplot')
def laplacian2d_dense(h, f, g0):
m = int(1/h + 1)
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
A = numpy.zeros((m*m, m*m))
def idx(i, j):
return i*m + j
for i in range(m):
for j in range(m):
row = idx(i, j)
if i in (0, m-1) or j in (0, m-1):
A[row, row] = 1
rhs[row] = u0[row]
else:
cols = [idx(*pair) for pair in
[(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]]
stencil = 1/h**2 * numpy.array([-1, -1, 4, -1, -1])
A[row, cols] = stencil
return x, y, A, rhs
x, y, A, rhs = laplacian2d_dense(.1, lambda x,y: 0*x+1, lambda x,y: 0*x)
pyplot.spy(A);
# +
u = numpy.linalg.solve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
# -
import cProfile
prof = cProfile.Profile()
prof.enable()
x, y, A, rhs = laplacian2d_dense(.0125, lambda x,y: 0*x+1, lambda x,y: 0*x)
u = numpy.linalg.solve(A, rhs).reshape(x.shape)
prof.disable()
prof.print_stats(sort='tottime')
# +
import scipy.sparse as sp
import scipy.sparse.linalg
def laplacian2d(h, f, g0):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
A = sp.lil_matrix((m*m, m*m))
def idx(i, j):
return i*m + j
mask = numpy.zeros_like(x, dtype=int)
mask[1:-1,1:-1] = 1
mask = mask.flatten()
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in
[(i-1, j), (i, j-1),
(i, j),
(i, j+1), (i+1, j)]])
stencilw = 1/h**2 * numpy.array([-1, -1, 4, -1, -1])
if mask[row] == 0: # Dirichlet boundary
A[row, row] = 1
rhs[row] = u0[row]
else:
smask = mask[stencili]
cols = stencili[smask == 1]
A[row, cols] = stencilw[smask == 1]
bdycols = stencili[smask == 0]
rhs[row] -= stencilw[smask == 0] @ u0[bdycols]
return x, y, A.tocsr(), rhs
x, y, A, rhs = laplacian2d(.15, lambda x,y: 0*x+1, lambda x,y: 0*x)
pyplot.spy(A);
sp.linalg.norm(A - A.T)
# -
prof = cProfile.Profile()
prof.enable()
x, y, A, rhs = laplacian2d(.005, lambda x,y: 0*x+1, lambda x,y: 0*x)
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
prof.disable()
prof.print_stats(sort='tottime')
# ## A manufactured solution
# +
class mms0:
def u(x, y):
return x*numpy.exp(-x)*numpy.tanh(y)
def grad_u(x, y):
return numpy.array([(1 - x)*numpy.exp(-x)*numpy.tanh(y),
x*numpy.exp(-x)*(1 - numpy.tanh(y)**2)])
def laplacian_u(x, y):
return ((2 - x)*numpy.exp(-x)*numpy.tanh(y)
- 2*x*numpy.exp(-x)*(numpy.tanh(y)**2 - 1)*numpy.tanh(y))
    def grad_u_dot_normal(x, y, n):
        return mms0.grad_u(x, y) @ n
x, y, A, rhs = laplacian2d(.02, mms0.laplacian_u, mms0.u)
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
print(u.shape, numpy.linalg.norm((u - mms0.u(x,y)).flatten(), numpy.inf))
pyplot.contourf(x, y, u)
pyplot.colorbar()
pyplot.title('Numeric solution')
pyplot.figure()
pyplot.contourf(x, y, u - mms0.u(x, y))
pyplot.colorbar()
pyplot.title('Error');
# +
hs = numpy.logspace(-2, -.5, 12)
def mms_error(h):
x, y, A, rhs = laplacian2d(h, mms0.laplacian_u, mms0.u)
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
return numpy.linalg.norm((u - mms0.u(x, y)).flatten(), numpy.inf)
pyplot.loglog(hs, [mms_error(h) for h in hs], 'o', label='numeric error')
pyplot.loglog(hs, hs**1/100, label='$h^1/100$')
pyplot.loglog(hs, hs**2/100, label='$h^2/100$')
pyplot.legend();
# -
# # Neumann boundary conditions
#
# Recall that in 1D, we would reflect the solution into ghost points according to
#
# $$ u_{-i} = u_i - (x_i - x_{-i}) g_1(x_0, y) $$
#
# and similarly for the right boundary and in the $y$ direction. After this, we (optionally) scale the row in the matrix for symmetry and shift the known parts to the right hand side. Below, we implement the reflected symmetry, but not the inhomogeneous contribution or rescaling of the matrix row.
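# The reflection described above can be folded directly into the index computation: a single formula maps ghost indices back into the domain while leaving interior indices unchanged. A quick standalone check of that folding:

```python
def reflect(i, m):
    # maps i = -1 -> 1 and i = m -> m-2; interior indices are unchanged
    return (m - 1) - abs(m - 1 - abs(i))

m = 5
assert reflect(-1, m) == 1            # left ghost point reflects inward
assert reflect(m, m) == m - 2         # right ghost point reflects inward
assert [reflect(i, m) for i in range(m)] == list(range(m))
```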
# +
def laplacian2d_bc(h, f, g0, dirichlet=((),())):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
ai = []
aj = []
av = []
def idx(i, j):
i = (m-1) - abs(m-1 - abs(i))
j = (m-1) - abs(m-1 - abs(j))
return i*m + j
mask = numpy.ones_like(x, dtype=int)
mask[dirichlet[0],:] = 0
mask[:,dirichlet[1]] = 0
mask = mask.flatten()
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in [(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]])
stencilw = 1/h**2 * numpy.array([-1, -1, 4, -1, -1])
if mask[row] == 0: # Dirichlet boundary
ai.append(row)
aj.append(row)
av.append(1)
rhs[row] = u0[row]
else:
smask = mask[stencili]
ai += [row]*sum(smask)
aj += stencili[smask == 1].tolist()
av += stencilw[smask == 1].tolist()
bdycols = stencili[smask == 0]
rhs[row] -= stencilw[smask == 0] @ u0[bdycols]
A = sp.csr_matrix((av, (ai, aj)), shape=(m*m,m*m))
return x, y, A, rhs
x, y, A, rhs = laplacian2d_bc(.05, lambda x,y: 0*x+1,
lambda x,y: 0*x, dirichlet=((0,),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
print(sp.linalg.eigs(A, which='SM')[0])
pyplot.contourf(x, y, u)
pyplot.colorbar();
# +
# We used a different technique for assembling the sparse matrix.
# This is faster with scipy.sparse, but may be worse for other sparse matrix packages, such as PETSc.
prof = cProfile.Profile()
prof.enable()
x, y, A, rhs = laplacian2d_bc(.005, lambda x,y: 0*x+1, lambda x,y: 0*x)
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
prof.disable()
prof.print_stats(sort='tottime')
# -
# # Variable coefficients
#
# In physical systems, it is common for equations to be given in **divergence form** (sometimes called **conservative form**),
# $$ -\nabla\cdot \Big( \kappa(x,y) \nabla u \Big) = f(x,y) . $$
# This can be converted to **non-divergence form**,
# $$ - \kappa(x,y) \nabla\cdot \nabla u - \nabla \kappa(x,y) \cdot \nabla u = f(x,y) . $$
#
# * What assumptions did we just make on $\kappa(x,y)$?
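# The equivalence of the two forms can be verified symbolically, which also answers the question: we assumed $\kappa$ is differentiable. A sketch using sympy:

```python
import sympy

x, y = sympy.symbols('x y')
u = sympy.Function('u')(x, y)
kappa = sympy.Function('kappa')(x, y)

# divergence form: -div(kappa grad u)
div_form = -(sympy.diff(kappa*sympy.diff(u, x), x)
             + sympy.diff(kappa*sympy.diff(u, y), y))
# non-divergence form: -kappa laplace(u) - grad(kappa) . grad(u)
nondiv_form = (-kappa*(sympy.diff(u, x, 2) + sympy.diff(u, y, 2))
               - sympy.diff(kappa, x)*sympy.diff(u, x)
               - sympy.diff(kappa, y)*sympy.diff(u, y))
assert sympy.simplify(div_form - nondiv_form) == 0
```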
# +
def laplacian2d_nondiv(h, f, kappa, grad_kappa, g0, dirichlet=((),())):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
ai = []
aj = []
av = []
def idx(i, j):
i = (m-1) - abs(m-1 - abs(i))
j = (m-1) - abs(m-1 - abs(j))
return i*m + j
mask = numpy.ones_like(x, dtype=int)
mask[dirichlet[0],:] = 0
mask[:,dirichlet[1]] = 0
mask = mask.flatten()
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in [(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]])
stencilw = kappa(i*h, j*h)/h**2 * numpy.array([-1, -1, 4, -1, -1])
if grad_kappa is None:
gk = 1/h * numpy.array([kappa((i+.5)*h,j*h) - kappa((i-.5)*h,j*h),
kappa(i*h,(j+.5)*h) - kappa(i*h,(j-.5)*h)])
else:
gk = grad_kappa(i*h, j*h)
stencilw -= gk[0] / (2*h) * numpy.array([-1, 0, 0, 0, 1])
stencilw -= gk[1] / (2*h) * numpy.array([0, -1, 0, 1, 0])
if mask[row] == 0: # Dirichlet boundary
ai.append(row)
aj.append(row)
av.append(1)
rhs[row] = u0[row]
else:
smask = mask[stencili]
ai += [row]*sum(smask)
aj += stencili[smask == 1].tolist()
av += stencilw[smask == 1].tolist()
bdycols = stencili[smask == 0]
rhs[row] -= stencilw[smask == 0] @ u0[bdycols]
A = sp.csr_matrix((av, (ai, aj)), shape=(m*m,m*m))
return x, y, A, rhs
def kappa(x, y):
#return 1 - 2*(x-.5)**2 - 2*(y-.5)**2
return 1e-2 + 2*(x-.5)**2 + 2*(y-.5)**2
def grad_kappa(x, y):
#return -4*(x-.5), -4*(y-.5)
return 4*(x-.5), 4*(y-.5)
pyplot.contourf(x, y, kappa(x,y))
pyplot.colorbar();
# -
x, y, A, rhs = laplacian2d_nondiv(.05, lambda x,y: 0*x+1,
kappa, grad_kappa,
lambda x,y: 0*x, dirichlet=((0,-1),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
x, y, A, rhs = laplacian2d_nondiv(.05, lambda x,y: 0*x,
kappa, grad_kappa,
lambda x,y: x, dirichlet=((0,-1),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
# +
def laplacian2d_div(h, f, kappa, g0, dirichlet=((),())):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
u0 = g0(x, y).flatten()
rhs = f(x, y).flatten()
ai = []
aj = []
av = []
def idx(i, j):
i = (m-1) - abs(m-1 - abs(i))
j = (m-1) - abs(m-1 - abs(j))
return i*m + j
mask = numpy.ones_like(x, dtype=int)
mask[dirichlet[0],:] = 0
mask[:,dirichlet[1]] = 0
mask = mask.flatten()
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in [(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]])
stencilw = 1/h**2 * ( kappa((i-.5)*h, j*h) * numpy.array([-1, 0, 1, 0, 0])
+ kappa(i*h, (j-.5)*h) * numpy.array([0, -1, 1, 0, 0])
+ kappa(i*h, (j+.5)*h) * numpy.array([0, 0, 1, -1, 0])
+ kappa((i+.5)*h, j*h) * numpy.array([0, 0, 1, 0, -1]))
if mask[row] == 0: # Dirichlet boundary
ai.append(row)
aj.append(row)
av.append(1)
rhs[row] = u0[row]
else:
smask = mask[stencili]
ai += [row]*sum(smask)
aj += stencili[smask == 1].tolist()
av += stencilw[smask == 1].tolist()
bdycols = stencili[smask == 0]
rhs[row] -= stencilw[smask == 0] @ u0[bdycols]
A = sp.csr_matrix((av, (ai, aj)), shape=(m*m,m*m))
return x, y, A, rhs
x, y, A, rhs = laplacian2d_div(.05, lambda x,y: 0*x+1,
kappa,
lambda x,y: 0*x, dirichlet=((0,-1),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
# -
x, y, A, rhs = laplacian2d_div(.05, lambda x,y: 0*x,
kappa,
lambda x,y: x, dirichlet=((0,-1),()))
u = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar();
# +
x, y, A, rhs = laplacian2d_nondiv(.05, lambda x,y: 0*x+1,
kappa, grad_kappa,
lambda x,y: 0*x, dirichlet=((0,-1),()))
u_nondiv = sp.linalg.spsolve(A, rhs).reshape(x.shape)
x, y, A, rhs = laplacian2d_div(.05, lambda x,y: 0*x+1,
kappa,
lambda x,y: 0*x, dirichlet=((0,-1),()))
u_div = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u_nondiv - u_div)
pyplot.colorbar();
# +
class mms1:
def __init__(self):
import sympy
x, y = sympy.symbols('x y')
uexpr = x*sympy.exp(-2*x) * sympy.tanh(1.2*y+.1)
kexpr = 1e-2 + 2*(x-.42)**2 + 2*(y-.51)**2
self.u = sympy.lambdify((x,y), uexpr)
self.kappa = sympy.lambdify((x,y), kexpr)
def grad_kappa(xx, yy):
kx = sympy.lambdify((x,y), sympy.diff(kexpr, x))
ky = sympy.lambdify((x,y), sympy.diff(kexpr, y))
return kx(xx, yy), ky(xx, yy)
self.grad_kappa = grad_kappa
self.div_kappa_grad_u = sympy.lambdify((x,y),
-( sympy.diff(kexpr * sympy.diff(uexpr, x), x)
+ sympy.diff(kexpr * sympy.diff(uexpr, y), y)))
mms = mms1()
x, y, A, rhs = laplacian2d_nondiv(.05, mms.div_kappa_grad_u,
mms.kappa, mms.grad_kappa,
mms.u, dirichlet=((0,-1),(0,-1)))
u_nondiv = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u_nondiv)
pyplot.colorbar()
numpy.linalg.norm((u_nondiv - mms.u(x, y)).flatten(), numpy.inf)
# -
x, y, A, rhs = laplacian2d_div(.05, mms.div_kappa_grad_u,
mms.kappa,
mms.u, dirichlet=((0,-1),(0,-1)))
u_div = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u_div)
pyplot.colorbar()
numpy.linalg.norm((u_div - mms.u(x, y)).flatten(), numpy.inf)
# +
def mms_error(h):
x, y, A, rhs = laplacian2d_nondiv(h, mms.div_kappa_grad_u,
mms.kappa, mms.grad_kappa,
mms.u, dirichlet=((0,-1),(0,-1)))
u_nondiv = sp.linalg.spsolve(A, rhs).flatten()
x, y, A, rhs = laplacian2d_div(h, mms.div_kappa_grad_u,
mms.kappa, mms.u, dirichlet=((0,-1),(0,-1)))
u_div = sp.linalg.spsolve(A, rhs).flatten()
u_exact = mms.u(x, y).flatten()
return numpy.linalg.norm(u_nondiv - u_exact, numpy.inf), numpy.linalg.norm(u_div - u_exact, numpy.inf)
hs = numpy.logspace(-1.5, -.5, 10)
errors = numpy.array([mms_error(h) for h in hs])
pyplot.loglog(hs, errors[:,0], 'o', label='nondiv')
pyplot.loglog(hs, errors[:,1], 's', label='div')
pyplot.plot(hs, hs**2, label='$h^2$')
pyplot.legend();
# -
#kappablob = lambda x,y: .01 + ((x-.5)**2 + (y-.5)**2 < .125)
def kappablob(x, y):
#return .01 + ((x-.5)**2 + (y-.5)**2 < .125)
return .01 + (numpy.abs(x-.505) < .25) # + (numpy.abs(y-.5) < .25)
x, y, A, rhs = laplacian2d_div(.02, lambda x,y: 0*x, kappablob,
lambda x,y:x, dirichlet=((0,-1),()))
u_div = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, kappablob(x, y))
pyplot.colorbar();
pyplot.figure()
pyplot.contourf(x, y, u_div, 10)
pyplot.colorbar();
x, y, A, rhs = laplacian2d_nondiv(.01, lambda x,y: 0*x, kappablob, None,
lambda x,y:x, dirichlet=((0,-1),()))
u_nondiv = sp.linalg.spsolve(A, rhs).reshape(x.shape)
pyplot.contourf(x, y, u_nondiv, 10)
pyplot.colorbar();
# ## Weak forms
#
# When we write
#
# $$ {\huge "} - \nabla\cdot \big( \kappa \nabla u \big) = 0 {\huge "} \text{ on } \Omega $$
#
# where $\kappa$ is a discontinuous function, that's not exactly what we mean, because the derivative of that discontinuous function doesn't exist. Formally, however, let us multiply by a "test function" $v$ and integrate,
#
# \begin{split}
# - \int_\Omega v \nabla\cdot \big( \kappa \nabla u \big) = 0 & \text{ for all } v \\
# \int_\Omega \nabla v \cdot \kappa \nabla u = \int_{\partial \Omega} v \kappa \nabla u \cdot \hat n & \text{ for all } v
# \end{split}
#
# where we have used integration by parts. This is called the **weak form** of the PDE and will be what we actually discretize using finite element methods. All the terms make sense when $\kappa$ is discontinuous. Now suppose our domain is decomposed into two disjoint subdomains $$\overline{\Omega_1 \cup \Omega_2} = \overline\Omega $$
# with interface $$\Gamma = \overline\Omega_1 \cap \overline\Omega_2$$ and $\kappa_1$ is continuous on $\overline\Omega_1$ and $\kappa_2$ is continuous on $\overline\Omega_2$, but possibly $\kappa_1(x) \ne \kappa_2(x)$ for $x \in \Gamma$,
#
# \begin{split}
# \int_\Omega \nabla v \cdot \kappa \nabla u &= \int_{\Omega_1} \nabla v \cdot \kappa_1\nabla u + \int_{\Omega_2} \nabla v \cdot \kappa_2 \nabla u \\
# &= -\int_{\Omega_1} v \nabla\cdot \big(\kappa_1 \nabla u \big) + \int_{\partial \Omega_1} v \kappa_1 \nabla u \cdot \hat n \\
# &\qquad -\int_{\Omega_2} v \nabla\cdot \big(\kappa_2 \nabla u \big) + \int_{\partial \Omega_2} v \kappa_2 \nabla u \cdot \hat n \\
# &= -\int_{\Omega} v \nabla\cdot \big(\kappa \nabla u \big) + \int_{\partial \Omega} v \kappa \nabla u \cdot \hat n + \int_{\Gamma} v (\kappa_1 - \kappa_2) \nabla u\cdot \hat n .
# \end{split}
#
# * Which direction is $\hat n$ for the integral over $\Gamma$?
# * Does it matter what we choose for the value of $\kappa$ on $\Gamma$ in the volume integral?
#
# When $\kappa$ is continuous, the jump term vanishes and we recover the **strong form**
# $$ - \nabla\cdot \big( \kappa \nabla u \big) = 0 \text{ on } \Omega . $$
# But if $\kappa$ is discontinuous, we would need to augment this with a jump condition ensuring that the flux $-\kappa \nabla u$ is continuous. We could add this condition to our FD code to recover convergence in the case of discontinuous $\kappa$, but it is messy.
# ## Nonlinear problems
#
# Let's consider the nonlinear problem
# $$ -\nabla \cdot \big(\underbrace{(1 + u^2)}_{\kappa(u)} \nabla u \big) = f \text{ on } (0,1)^2 $$
# subject to Dirichlet boundary conditions. We will discretize the divergence form and thus will need
# $\kappa(u)$ evaluated at staggered points $(i-1/2,j)$, $(i,j-1/2)$, etc. We will calculate these by averaging
# $$ u_{i-1/2,j} = \frac{u_{i-1,j} + u_{i,j}}{2} $$
# and similarly for the other staggered directions.
# To use a Newton method, we also need the derivatives
# $$ \frac{\partial \kappa_{i-1/2,j}}{\partial u_{i,j}} = 2 u_{i-1/2,j} \frac{\partial u_{i-1/2,j}}{\partial u_{i,j}} = u_{i-1/2,j} . $$
#
# In the function below, we compute both the residual
# $$F(u) = -\nabla\cdot \kappa(u) \nabla u - f(x,y)$$
# and its Jacobian
# $$J(u) = \frac{\partial F}{\partial u} . $$
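# A hand-coded Jacobian is easy to get subtly wrong, so it is worth checking $J(u)v$ against a directional finite difference of $F$. A generic sketch (the toy $F$ and $J$ here are stand-ins for any residual/Jacobian pair):

```python
import numpy as np

def check_jacobian(F, J, u, rtol=1e-5):
    """Compare J(u) @ v against (F(u + h v) - F(u)) / h in a random direction v."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(u.shape)
    h = 1e-6
    Jv = J(u) @ v
    Jv_fd = (F(u + h*v) - F(u)) / h
    return np.allclose(Jv, Jv_fd, rtol=rtol, atol=1e-6)

# stand-in pair: F(u) = u**2 elementwise, so J(u) = diag(2u)
assert check_jacobian(lambda u: u**2, lambda u: np.diag(2*u), np.array([1.0, 2.0, 3.0]))
```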
# +
def hgrid(h):
m = int(1/h + 1) # Number of elements in terms of nominal grid spacing h
h = 1 / (m-1) # Actual grid spacing
c = numpy.linspace(0, 1, m)
y, x = numpy.meshgrid(c, c)
return x, y
def nonlinear2d_div(h, x, y, u, forcing, g0, dirichlet=((),())):
m = x.shape[0]
u0 = g0(x, y).flatten()
F = -forcing(x, y).flatten()
ai = []
aj = []
av = []
def idx(i, j):
i = (m-1) - abs(m-1 - abs(i))
j = (m-1) - abs(m-1 - abs(j))
return i*m + j
mask = numpy.ones_like(x, dtype=bool)
mask[dirichlet[0],:] = False
mask[:,dirichlet[1]] = False
mask = mask.flatten()
u = u.flatten()
F[mask == False] = u[mask == False] - u0[mask == False]
u[mask == False] = u0[mask == False]
for i in range(m):
for j in range(m):
row = idx(i, j)
stencili = numpy.array([idx(*pair) for pair in [(i-1, j), (i, j-1), (i, j), (i, j+1), (i+1, j)]])
# Stencil to evaluate gradient at four staggered points
grad = numpy.array([[-1, 0, 1, 0, 0],
[0, -1, 1, 0, 0],
[0, 0, -1, 1, 0],
[0, 0, -1, 0, 1]]) / h
# Stencil to average at four staggered points
avg = numpy.array([[1, 0, 1, 0, 0],
[0, 1, 1, 0, 0],
[0, 0, 1, 1, 0],
[0, 0, 1, 0, 1]]) / 2
# Stencil to compute divergence at cell centers from fluxes at four staggered points
div = numpy.array([-1, -1, 1, 1]) / h
ustencil = u[stencili]
ustag = avg @ ustencil
kappa = 1 + ustag**2
if mask[row] == 0: # Dirichlet boundary
ai.append(row)
aj.append(row)
av.append(1)
else:
F[row] -= div @ (kappa[:,None] * grad @ ustencil)
Jstencil = -div @ (kappa[:,None] * grad
+ 2*(ustag*(grad @ ustencil))[:,None] * avg)
smask = mask[stencili]
ai += [row]*sum(smask)
aj += stencili[smask].tolist()
av += Jstencil[smask].tolist()
J = sp.csr_matrix((av, (ai, aj)), shape=(m*m,m*m))
return F, J
h = .1
x, y = hgrid(h)
u = 0*x
F, J = nonlinear2d_div(h, x, y, u, lambda x,y: 0*x+1,
lambda x,y: 0*x, dirichlet=((0,-1),(0,-1)))
deltau = sp.linalg.spsolve(J, -F).reshape(x.shape)
pyplot.contourf(x, y, deltau)
pyplot.colorbar();
# +
def solve_nonlinear(h, g0, dirichlet, atol=1e-8, verbose=False):
x, y = hgrid(h)
u = 0*x
for i in range(50):
F, J = nonlinear2d_div(h, x, y, u, lambda x,y: 0*x+1,
lambda x,y: 0*x, dirichlet=((0,-1),(0,-1)))
anorm = numpy.linalg.norm(F, numpy.inf)
if verbose:
print('{:2d}: anorm {:8e}'.format(i,anorm))
if anorm < atol:
break
deltau = sp.linalg.spsolve(J, -F)
u += deltau.reshape(x.shape)
return x, y, u, i
x, y, u, i = solve_nonlinear(.1, lambda x,y: 0*x, dirichlet=((0,-1),(0,-1)), verbose=True)
pyplot.contourf(x, y, u)
pyplot.colorbar();
# -
# ## Homework 3: Due 2017-11-03
#
# Write a solver for the regularized $p$-Laplacian,
# $$ -\nabla\cdot\big( \kappa(\nabla u) \nabla u \big) = 0 $$
# where
# $$ \kappa(\nabla u) = \big(\frac 1 2 \epsilon^2 + \frac 1 2 \nabla u \cdot \nabla u \big)^{\frac{p-2}{2}}, $$
# $ \epsilon > 0$, and $1 < p < \infty$. The case $p=2$ is the conventional Laplacian. This problem gets more strongly nonlinear when $p$ is far from 2 and when $\epsilon$ approaches zero. The $p \to 1$ limit is related to plasticity and has applications in non-Newtonian flows and structural mechanics.
#
# 1. Implement a "Picard" solver, which is like a Newton solver except that the Jacobian is replaced by the linear system
# $$ J_{\text{Picard}}(u) \delta u \sim -\nabla\cdot\big( \kappa(\nabla u) \nabla \delta u \big) . $$
# This is much easier to implement than the full Newton linearization. How fast does this method converge for values of $p < 2$ and $p > 2$?
#
# 2. Use the linearization above as a preconditioner to a Newton-Krylov method. That is, use [`scipy.sparse.linalg.LinearOperator`](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.sparse.linalg.LinearOperator.html) to apply the Jacobian to a vector
# $$ \tilde J(u) v = \frac{F(u + h v) - F(u)}{h} . $$
# Then for each linear solve, use [`scipy.sparse.linalg.gmres`](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.sparse.linalg.gmres.html) and pass as a preconditioner a direct solve with the Picard linearization above. (You might find [`scipy.sparse.linalg.factorized`](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.sparse.linalg.factorized.html#scipy.sparse.linalg.factorized) to be useful.) Compare algebraic convergence to that of the Picard method.
#
# 3. Can you directly implement a Newton linearization? Either do it or explain what is involved. How will its nonlinear convergence compare to that of the Newton-Krylov method?
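# The `LinearOperator` mechanics from part 2 can be sketched on a toy residual, independent of the $p$-Laplacian (this is not a homework solution; $F(u) = u^3 + u - 1$ is a stand-in):

```python
import numpy as np
import scipy.sparse.linalg as spla

def F(u):
    # toy nonlinear residual; each component has root u = 0.6823...
    return u**3 + u - 1

def jacobian_operator(F, u, h=1e-7):
    """Matrix-free Jacobian of F at u, applied via finite differences."""
    Fu = F(u)
    def Jv(v):
        v = np.asarray(v).reshape(u.shape)
        return (F(u + h*v) - Fu) / h
    return spla.LinearOperator((u.size, u.size), matvec=Jv)

u = np.zeros(4)
for k in range(20):                      # Newton-Krylov iteration
    J = jacobian_operator(F, u)
    du, info = spla.gmres(J, -F(u), atol=1e-12)
    u += du
    if np.linalg.norm(F(u), np.inf) < 1e-10:
        break
```

# In the homework, the preconditioner would be a direct solve with the Picard matrix (e.g. via `scipy.sparse.linalg.factorized`), passed to `gmres` through its `M` argument.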
# # Wave equations and multi-component systems
#
# The acoustic wave equation with constant wave speed $c$ can be written
# $$ \ddot u - c^2 \nabla\cdot \nabla u = 0 $$
# where $u$ is typically a pressure.
# We can convert to a first order system
# $$ \begin{bmatrix} \dot u \\ \dot v \end{bmatrix} = \begin{bmatrix} 0 & I \\ c^2 \nabla\cdot \nabla & 0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} . $$
# We will choose a zero-penetration boundary condition $\nabla u \cdot \hat n = 0$, which will cause waves to reflect.
# +
# %run fdtools.py
x, y, L, _ = laplacian2d_bc(.1, lambda x,y: 0*x,
lambda x,y: 0*x, dirichlet=((),()))
A = sp.bmat([[None, sp.eye(*L.shape)],
[-L, None]])
eigs = sp.linalg.eigs(A, 10, which='LM')[0]
print(eigs)
maxeig = max(eigs.imag)
u0 = numpy.concatenate([numpy.exp(-8*(x**2 + y**2)), 0*x], axis=None)
hist = ode_rkexplicit(lambda t, u: A @ u, u0, tfinal=2, h=2/maxeig)
def plot_wave(x, y, time, U):
u = U[:x.size].reshape(x.shape)
pyplot.contourf(x, y, u)
pyplot.colorbar()
pyplot.title('Wave solution t={:f}'.format(time));
for step in numpy.linspace(0, len(hist)-1, 6, dtype=int):
pyplot.figure()
plot_wave(x, y, *hist[step])
# -
# * This was a second order discretization, but we could extend it to higher order.
# * The largest eigenvalues of this operator are proportional to $c/h$.
# * Formally, we can write this equation in conservative form
# $$ \begin{bmatrix} \dot u \\ \dot{\mathbf v} \end{bmatrix} = \begin{bmatrix} 0 & c\nabla\cdot \\ c \nabla & 0 \end{bmatrix} \begin{bmatrix} u \\ \mathbf v \end{bmatrix} $$
# where $\mathbf{v}$ is now a momentum vector and $\nabla u = \nabla\cdot (u I)$. This formulation could produce an anti-symmetric ($A^T = -A$) discretization. Discretizations with this property are sometimes called "mimetic".
# * A conservative form is often preferred when studying waves traveling through materials with different wave speeds $c$.
# * This is a Hamiltonian system. While high order Runge-Kutta methods can be quite accurate, "symplectic" time integrators are needed to preserve the structure of the Hamiltonian (related to energy conservation) over long periods of time. The midpoint method (aka $\theta=1/2$) is one such method. There are also explicit symplectic methods such as [Verlet methods](https://en.wikipedia.org/wiki/Verlet_integration), though these can be fragile.
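# The benefit of a symplectic integrator is easy to see on a single oscillator $\ddot u = -u$: velocity-Verlet keeps the energy bounded over many periods, rather than drifting. A standalone sketch (not code for the wave system above):

```python
def verlet(u, v, h, steps):
    """Velocity-Verlet for u'' = -u (unit frequency)."""
    for _ in range(steps):
        v = v - 0.5*h*u        # half kick with force f(u) = -u
        u = u + h*v            # drift
        v = v - 0.5*h*u        # half kick with the updated force
    return u, v

u0, v0 = 1.0, 0.0
E0 = 0.5*(u0**2 + v0**2)
u, v = verlet(u0, v0, h=0.1, steps=10000)   # roughly 160 periods
E = 0.5*(u**2 + v**2)
assert abs(E - E0) < 1e-2                   # energy error stays O(h^2); no secular drift
```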
| FD2D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import cv2
import tensorflow as tf
import numpy as np
import json
from utilities import *
# +
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
#LABEL_MAP_NAME = 'mscoco_label_map.pbtxt'
LABEL_MAP_NAME = 'mscoco_label_map.json'
CWD_PATH = os.getcwd()
MODELS_PATH = os.path.abspath(os.path.join(CWD_PATH, 'models'))
LABELS_PATH = os.path.abspath(os.path.join(CWD_PATH, 'labels'))
DATA_PATH = os.path.abspath(os.path.join(CWD_PATH, 'data'))
VIDEO_FILE = 'videoplayback.mp4'
VIDEO_PATH = os.path.join(DATA_PATH, VIDEO_FILE)
MODEL_PATH = os.path.join(MODELS_PATH, MODEL_NAME, 'frozen_inference_graph.pb')
LABEL_MAP_PATH = os.path.join(LABELS_PATH, LABEL_MAP_NAME)
THRESHOLD = 0
# -
with open(LABEL_MAP_PATH) as json_file:
labels = json.load(json_file)
labels
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(MODEL_PATH, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
# +
def filter_boxes(min_score, boxes, scores, classes, categories):
"""Return boxes with a confidence >= `min_score`"""
n = len(classes)
idxs = []
for i in range(n):
if scores[i]>=min_score:
idxs.append(i)
filtered_boxes = boxes[idxs, ...]
filtered_scores = scores[idxs, ...]
filtered_classes = classes[idxs, ...]
return filtered_boxes, filtered_scores, filtered_classes
# -
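# A quick standalone check of the filtering logic on toy arrays (the function is copied here so the cell runs on its own; the values are synthetic, not real detections):

```python
import numpy as np

def filter_boxes(min_score, boxes, scores, classes, categories):
    """Return boxes with a confidence >= `min_score` (same logic as above)."""
    idxs = [i for i in range(len(classes)) if scores[i] >= min_score]
    return boxes[idxs, ...], scores[idxs, ...], classes[idxs, ...]

boxes = np.array([[0.1, 0.1, 0.2, 0.2],
                  [0.3, 0.3, 0.5, 0.5],
                  [0.0, 0.0, 1.0, 1.0]])
scores = np.array([0.9, 0.4, 0.6])
classes = np.array([1, 2, 3])
fb, fs, fc = filter_boxes(0.5, boxes, scores, classes, categories=None)
assert fc.tolist() == [1, 3]    # only detections scoring >= 0.5 survive
```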
def detect_objects(image_np, sess, detection_graph, threshold=0.5):
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    # Each box represents a part of the image where a particular object was detected.
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
    # Each score represents the level of confidence for each of the objects.
    scores = detection_graph.get_tensor_by_name('detection_scores:0')
    classes = detection_graph.get_tensor_by_name('detection_classes:0')
    num_detections = detection_graph.get_tensor_by_name('num_detections:0')
    # Actual detection.
    (boxes, scores, classes, num_detections) = sess.run(
        [boxes, scores, classes, num_detections],
        feed_dict={image_tensor: image_np_expanded})
    boxes, scores, classes = filter_boxes(threshold, boxes[0], scores[0], classes[0], 1)
    return (boxes, scores, classes, num_detections)
# +
# WEBCAM
video_stream = VideoStream(src=1).start()
# VIDEO File
# video_stream = VideoStream(src=VIDEO_PATH).start()
sess = tf.Session(graph=detection_graph)
while video_stream.grabbed:
frame, counter = video_stream.read()
# do some zoo model detection on the frame
data = detect_objects(frame, sess, detection_graph)
height = frame.shape[0]
width = frame.shape[1]
count = 0
for item in data[0]:
obj = str(int(data[2][count]))
cv2.putText(
frame,
labels[obj]["name"] + " - " + str(data[1][count]),
(int(width * item[1]), int(height * item[0])),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
(255, 0, 0),
1,
cv2.LINE_AA
)
cv2.rectangle(
frame,
(int(width * item[1]), int(height * item[0])),
(int(width * item[3]), int(height * item[2])),
(0, 255, 0),
1
)
count = count + 1
cv2.imshow('frame', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
video_stream.stop()
cv2.destroyAllWindows()
cv2.waitKey(1000) # to autoclose window after a few seconds
# -
| STS-pk.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [bokeh User Guide -- Quickstart](https://docs.bokeh.org/en/latest/docs/user_guide/quickstart.html#userguide-quickstart)
#
# ## Setup
#
# - Activate a suitable Anaconda virtual environment and install the libraries needed for the exercises, along with the `bokeh` sample data:
#
#     - `pip install --upgrade scipy pandas notebook bokeh`
#
#     - `bokeh sampledata`
# +
from bokeh.plotting import *
x = [1, 2, 3, 4, 5]
y = [6, 7, 2, 4, 5]
# call reset_output before output_notebook, otherwise an empty tab is displayed
reset_output(); output_notebook()
p = figure(title='simple line chart', x_axis_label='x', y_axis_label='y')
p.line(x, y, legend_label='Temp.', line_width=2)  # legend is deprecated; use legend_label
show(p)
# +
x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y0 = [i**2 for i in x]
y1 = [10**i for i in x]
y2 = [10**(i**2) for i in x]
reset_output(); output_notebook()
p = figure(
tools="pan,box_zoom,reset,save",
title="log axis example",
y_axis_type="log", y_range=[0.001, 10**11],
x_axis_label='sections', y_axis_label='particles')
p.line(x, x, legend_label='y=x')
p.circle(x, x, legend_label='y=x', fill_color='white', size=8)
p.line(x, y0, legend_label='y=x^2', line_width=3)
p.line(x, y1, legend_label='y=10^x', line_color='red')
p.circle(x, y1, legend_label='y=10^x', fill_color='red', line_color='red', size=6)
p.line(x, y2, legend_label='10^x^2', line_color='orange', line_dash='4 4')
show(p)
# -
# # More examples
#
# ## Vectorized colors and sizes
#
# Use NumPy arrays so positions, radii and colors are computed in a vectorized fashion.
# +
import numpy as np
N = 4000
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
colors = [
'#%02x%02x%02x' % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
]
TOOLS = "crosshair,pan,wheel_zoom,box_zoom,reset,box_select,lasso_select"
reset_output(); output_notebook()
p = figure(tools=TOOLS, x_range=(0, 100), y_range=(0, 100))
p.circle(x, y, radius=radii, fill_color=colors, fill_alpha=0.6, line_color=None)
show(p)
# -
# ## Linked panning
#
# Provide a UI in which panning is linked across several output figures.
#
# **Try it**: drag one of the figures left or right.
# +
from bokeh.layouts import gridplot
N = 100
x = np.linspace(0, 4*np.pi, N)
y0 = np.sin(x)
y1 = np.cos(x)
y2 = y0 + y1
reset_output(); output_notebook()
s1 = figure(width=250, height=250, title='y = sin(x)')  # plot_height is deprecated; use height
s1.circle(x, y0, size=10, color='navy', alpha=0.5)
s2 = figure(width=250, height=250, x_range=s1.x_range, title='y = cos(x)')
s2.triangle(x, y1, size=10, color='firebrick', alpha=0.5)
s3 = figure(width=250, height=250, x_range=s1.x_range, title='y = sin(x) + cos(x)')
s3.square(x, y2, size=10, color='olive', alpha=0.5)
p = gridplot([[s1, s2, s3]], toolbar_location=None)
show(p)
# -
# ## Linked brushing
#
# Provide a UI in which brushing (selection) is linked across several output figures.
#
# **Try it**: use the box select or lasso select tool in one of the figures.
# +
from bokeh.models import ColumnDataSource
N = 300
x = np.linspace(0, 4*np.pi, N)
y0, y1 = np.sin(x), np.cos(x)
source = ColumnDataSource(data=dict(x=x, y0=y0, y1=y1))
reset_output(); output_notebook()
left = figure(tools=TOOLS, width=350, height=350, title='y=sin(x)')
left.circle('x', 'y0', source=source)
right = figure(tools=TOOLS, width=350, height=350, title='y=cos(x)')
right.circle('x', 'y1', source=source)
p = gridplot([[left, right]])
show(p)
# -
# # Datetime axes
# +
from bokeh.sampledata.stocks import AAPL
aapl = np.array(AAPL['adj_close'])  # Apple adjusted closing prices
aapl_dates = np.array(AAPL['date'], dtype=np.datetime64)  # dates
window_size = 30
window = np.ones(window_size) / float(window_size)  # uniform 30-day weights for a convolution-based moving average
aapl_avg = np.convolve(aapl, window, 'same')
reset_output(); output_notebook()
p = figure(width=800, height=350, x_axis_type='datetime')  # plot_width/plot_height are deprecated
p.circle(aapl_dates, aapl, size=4, color='darkgrey', alpha=0.2, legend_label='close')
p.line(aapl_dates, aapl_avg, color='navy', legend_label='one-month average')
p.title.text = 'One-Month Average of AAPL Stock Price'
p.legend.location = 'top_left'
p.grid.grid_line_alpha = 0
p.xaxis.axis_label = 'Date'
p.yaxis.axis_label = 'Price'
p.ygrid.band_fill_color = 'olive'
p.ygrid.band_fill_alpha = 0.1
show(p)
# -
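# The `'same'` mode of `np.convolve` zero-pads at both ends, which is why a centered moving average dips at the very start and end of a series; a quick toy check (illustrative values only):

```python
import numpy as np

w3 = np.ones(3) / 3.0  # uniform 3-point window
np.convolve([1.0, 2.0, 3.0, 4.0, 5.0], w3, 'same')
# → array([1., 2., 3., 4., 3.]) — the edges are averaged against zero padding
```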
| bokeh/00-Getting Started.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import sys
import json
import re
from glob import glob
from logging import error, warning
from common import CLASS_ABBREV_MAP, FINNISH_MAIN_REGISTER, load_conllu
CLASS_COMMENT_RE = re.compile(r'^#.*?\bregister:(.*)')
def get_class_from_comments(comments):
class_ = None
for comment in comments:
m = CLASS_COMMENT_RE.match(comment)
if m:
if class_:
raise ValueError('duplicate class')
class_ = m.group(1)
return class_
def load_conllu_with_class(fn):
sentences, class_ = [], None
for comments, sentence in load_conllu(fn):
c = get_class_from_comments(comments)
if c is not None:
if class_ is not None:
raise ValueError('duplicate class')
class_ = c
sentences.append(sentence)
if class_ is None:
raise ValueError('missing class in {}'.format(fn))
class_ = FINNISH_MAIN_REGISTER[class_]
return sentences, CLASS_ABBREV_MAP[class_]
def load_parsed_data(dirpath):
parses, classes = [], []
for fn in glob('{}/*.conllu'.format(dirpath)):
sentences, class_ = load_conllu_with_class(fn)
if class_ is None:
continue # class doesn't map across languages
parses.append(sentences)
classes.append(class_)
return parses, classes
train_parses, train_classes = load_parsed_data('../data/split-parsed/train/')
devel_parses, devel_classes = load_parsed_data('../data/split-parsed/dev/')
# +
from collections import Counter
MIN_EXAMPLES = 25  # drop classes with fewer training examples
class_count = Counter()
for c in train_classes:
class_count[c] += 1
target_class = set(c for c, v in class_count.items() if v >= MIN_EXAMPLES)
def filter_by_class(parses, classes, targets):
filtered_parses, filtered_classes = [], []
for t, c in zip(parses, classes):
if c in targets:
filtered_parses.append(t)
filtered_classes.append(c)
return filtered_parses, filtered_classes
train_parses, train_classes = filter_by_class(train_parses, train_classes, target_class)
devel_parses, devel_classes = filter_by_class(devel_parses, devel_classes, target_class)
# +
from pprint import pprint
def class_counts(classes):
counter = Counter()
for c in classes:
counter[c] += 1
return counter
pprint(class_counts(train_classes))
# +
def words_from_sentences(sentences):
words = []
for sentence in sentences:
for word in sentence:
words.append(word.form)
return words
def texts_from_parses(parses):
texts = []
for sentences in parses:
texts.append(' '.join(words_from_sentences(sentences)))
return texts
train_texts = texts_from_parses(train_parses)
devel_texts = texts_from_parses(devel_parses)
# +
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
vectorizer = TfidfVectorizer(analyzer='word', lowercase=False, ngram_range=(1,3))
vectorizer.fit(train_texts)
train_X = vectorizer.transform(train_texts)
devel_X = vectorizer.transform(devel_texts)
classifier = LinearSVC(C=1.0)
classifier.fit(train_X, train_classes)
classifier.score(devel_X, devel_classes)
# +
import eli5
eli5.show_weights(classifier, vec=vectorizer, top=(100,100))
# +
import numpy as np
from sklearn.metrics import confusion_matrix
from pandas import DataFrame
pred_Y = classifier.predict(devel_X)
cm = confusion_matrix(devel_classes, pred_Y)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] # normalize
df = DataFrame(cm * 100, index=classifier.classes_, columns=classifier.classes_)
df.round(2)
# -
| scripts/parsefeatures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:rbm]
# language: python
# name: conda-env-rbm-py
# ---
# # Reference frames
#
# rigid_body_motion provides a flexible high-performance framework for working offline with motion data. The core of this framework is a mechanism for constructing trees of both static and dynamic reference frames that supports automatic lookup and application of transformations across the tree.
# <div class="alert alert-info">
# Note
#
# The following examples require the `matplotlib` library.
# </div>
# +
import numpy as np
import rigid_body_motion as rbm
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (6, 6)
# -
# ## Static frames
#
# We will begin by defining a world reference frame using the [ReferenceFrame](_generated/rigid_body_motion.ReferenceFrame.rst) class:
rf_world = rbm.ReferenceFrame("world")
# Now we can add a second reference frame as a child of the world frame. This frame is translated by 5 meters in the x direction and rotated 90° around the z axis. Note that rotations are represented as [unit quaternions](https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation) by default.
rf_observer = rbm.ReferenceFrame(
"observer",
parent=rf_world,
translation=(5, 0, 0),
rotation=(np.sqrt(2) / 2, 0, 0, np.sqrt(2) / 2),
)
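# The rotation quaternion above comes from the axis–angle form $q = (\cos(\theta/2),\ \sin(\theta/2)\,\hat{n})$. A minimal NumPy sketch (the helper name is ours, not part of rigid_body_motion):

```python
import numpy as np

def axis_angle_quaternion(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

q = axis_angle_quaternion((0, 0, 1), np.pi / 2)  # 90° about the z axis
# q ≈ (√2/2, 0, 0, √2/2), matching the observer frame's rotation above
```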
# We can show the reference frame tree with the [render_tree](_generated/rigid_body_motion.render_tree.rst) function:
rbm.render_tree(rf_world)
# It is also possible to show a 3d plot of static reference frames with [plot.reference_frame()](_generated/rigid_body_motion.plot.reference_frame.rst):
# +
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
fig.tight_layout()
# -
# To facilitate referring to previously defined frames, the library has a registry where frames can be stored by name with [ReferenceFrame.register()](_generated/rigid_body_motion.ReferenceFrame.register.rst):
rf_world.register()
rf_observer.register()
rbm.registry
# ## Transformations
#
# Now that we've set up a basic tree, we can use it to transform motion between reference frames. We use the [lookup_transform()](_generated/rigid_body_motion.lookup_transform.rst) method to obtain the transformation from the world to the observer frame:
t, r = rbm.lookup_transform(outof="world", into="observer")
# This transformation consists of a translation $t$:
t
# and a rotation $r$:
r
# ### Position
#
# rigid_body_motion uses the convention that a transformation is a rotation followed by a translation. Here, when applying the transformation to a point $p$ expressed with respect to (wrt) the world frame $W$ it yields the point wrt the observer frame $O$:
#
# $$p_O = \operatorname{rot}\left(r, p_W\right) + t$$
#
# The $\operatorname{rot}()$ function denotes the [rotation of a vector by a quaternion](https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation#Using_quaternions_as_rotations).
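# The $\operatorname{rot}()$ operation itself can be sketched in plain NumPy as the quaternion sandwich product $q \, p \, q^{-1}$ (a standalone illustration, not the library's implementation):

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def rot(q, p):
    """Rotate vector p by unit quaternion q via the sandwich product."""
    p_quat = np.concatenate(([0.0], p))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, p_quat), q_conj)[1:]

# a 90° rotation about z maps the x axis onto the y axis
q = np.array([np.sqrt(2) / 2, 0, 0, np.sqrt(2) / 2])
rot(q, np.array([1.0, 0.0, 0.0]))  # → approximately (0, 1, 0)
```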
# Let's assume we have a rigid body located at 2 meters in the x direction from the origin of the world reference frame:
p_body_world = np.array((2, 0, 0))
# We can add the body position to the plot with [plot.points()](_generated/rigid_body_motion.plot.points.rst):
# +
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo")
fig.tight_layout()
# -
# We can use the above formula to transform the position of the body into the observer frame. The [rotate_vectors()](_generated/rigid_body_motion.rotate_vectors.rst) method implements the rotation of a vector by a quaternion:
p_body_observer = rbm.rotate_vectors(r, p_body_world) + t
p_body_observer
# As expected, the resulting position of the body is 3 meters from the observer in the y direction. For convenience, the [transform_points()](_generated/rigid_body_motion.transform_points.rst) method performs all of the above steps:
#
# 1. Lookup of the frames by name in the registry (if applicable)
# 2. Computing the transformation from the source to the target frame
# 3. Applying the transformation to the point(s)
p_body_observer = rbm.transform_points(p_body_world, outof="world", into="observer")
p_body_observer
# ### Orientation
#
# Orientations expressed in quaternions are transformed by quaternion multiplication:
#
# $$o_O = r \cdot o_W $$
#
# This multiplication is implemented in the [qmul()](_generated/rigid_body_motion.qmul.rst) function, to which you can pass an arbitrary number of quaternions to multiply. Assuming the body is oriented in the same direction as the world frame, transforming the orientation into the observer frame results in a rotation around the yaw axis:
o_body_world = np.array((1, 0, 0, 0))
rbm.qmul(r, o_body_world)
# We can add the orientation to the plot with [plot.quaternions()](_generated/rigid_body_motion.plot.quaternions.rst):
# +
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo")
rbm.plot.quaternions(o_body_world, base=p_body_world, ax=ax)
fig.tight_layout()
# -
# Again, for convenience, the [transform_quaternions()](_generated/rigid_body_motion.transform_quaternions.rst) function can be used in the same way as [transform_points()](_generated/rigid_body_motion.transform_points.rst):
rbm.transform_quaternions(o_body_world, outof="world", into="observer")
# ### Vectors
#
# Let's assume the body moves in the x direction with a velocity of 1 m/s:
v_body_world = np.array((1, 0, 0))
# We can add the velocity to the plot with [plot.vectors()](_generated/rigid_body_motion.plot.vectors.rst):
# +
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo")
rbm.plot.vectors(v_body_world, base=p_body_world, ax=ax, color="y")
fig.tight_layout()
# -
# From the point of view of the observer, the body moves with the same speed, but in the negative y direction. Therefore, we need to apply a coordinate transformation to represent the velocity vector in the observer frame:
#
# $$ v_O = \operatorname{rot}\left(r, v_W\right) $$
rbm.rotate_vectors(r, v_body_world)
# Like before, the [transform_vectors()](_generated/rigid_body_motion.transform_vectors.rst) function can be used in the same way as [transform_points()](_generated/rigid_body_motion.transform_points.rst):
rbm.transform_vectors(v_body_world, outof="world", into="observer")
# ## Moving frames
#
# Now, let's assume that the body moves from the origin of the world frame to the origin of the observer frame in 5 steps:
p_body_world = np.zeros((5, 3))
p_body_world[:, 0] = np.linspace(0, 5, 5)
# +
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo-")
fig.tight_layout()
# -
# We will now attach a reference frame to the moving body to explain the handling of moving reference frames. For this, we need to associate the positions of the body with corresponding timestamps:
ts_body = np.arange(5)
# Let's construct the moving body frame and add it to the registry. We will use the [register_frame()](_generated/rigid_body_motion.register_frame.rst) convenience method:
rbm.register_frame("body", translation=p_body_world, timestamps=ts_body, parent="world")
rbm.render_tree("world")
# If we transform a static point from the world into the body frame its position will change over time, which is why [transform_points()](_generated/rigid_body_motion.transform_points.rst) will return an array of points even though we pass only a single point:
rbm.transform_points((2, 0, 0), outof="world", into="body")
# One of the central features of the reference frame mechanism is its ability to consolidate arrays of timestamped motion even when the timestamps don't match. To illustrate this, let's create a second body moving in the y direction in world coordinates whose timestamps are offset by 0.5 seconds compared to the first body:
p_body2_world = np.zeros((5, 3))
p_body2_world[:, 1] = np.linspace(0, 2, 5)
ts_body2 = ts_body - 0.5
# +
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
rbm.plot.reference_frame(rf_world, ax=ax)
rbm.plot.reference_frame(rf_observer, rf_world, ax=ax)
rbm.plot.points(p_body_world, ax=ax, fmt="yo-")
rbm.plot.points(p_body2_world, ax=ax, fmt="co-")
fig.tight_layout()
# -
# Transforming the position of the second body into the frame of the first body still works, despite the timestamp mismatch:
p_body2_body, ts_body2_body = rbm.transform_points(
p_body2_world,
outof="world",
into="body",
timestamps=ts_body2,
return_timestamps=True,
)
# This is because behind the scenes, [transform_points()](_generated/rigid_body_motion.transform_points.rst) matches the timestamps of the array to transform with those of the transformation across the tree by
#
# 1. computing the range of timestamps for which the transformation is defined,
# 2. intersecting that range with the range of timestamps to be transformed and
# 3. interpolating the resulting transformation across the tree to match the timestamps of the array.
#
# Note that we specified `return_timestamps=True` to obtain the timestamps of the transformed array as they are different from the original timestamps. Let's plot the position of both bodies wrt the world frame as well as the position of the second body wrt the first body to see how the timestamp matching works:
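# The matching steps can be sketched with plain NumPy for the translation component (a rotation would need slerp instead of linear interpolation; the names here are illustrative):

```python
import numpy as np

# transform ("body" frame) sampled at integer seconds; data offset by 0.5 s
ts_tf = np.arange(5).astype(float)      # transform timestamps
tf_x = np.linspace(0, 5, 5)             # x-translation at those timestamps
ts_data = np.arange(5) - 0.5            # data timestamps

# 1.+2. intersect the transform's time range with the data's timestamps
valid = (ts_data >= ts_tf[0]) & (ts_data <= ts_tf[-1])
ts_out = ts_data[valid]                 # the -0.5 s sample is dropped

# 3. interpolate the transform at the surviving timestamps
tf_at_data = np.interp(ts_out, ts_tf, tf_x)
```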
# +
fig, axarr = plt.subplots(3, 1, sharex=True, sharey=True)
axarr[0].plot(ts_body, p_body_world, "*-")
axarr[0].set_ylabel("Position (m)")
axarr[0].set_title("First body wrt world frame")
axarr[0].grid("on")
axarr[1].plot(ts_body2, p_body2_world, "*-")
axarr[1].set_ylabel("Position (m)")
axarr[1].set_title("Second body wrt world frame")
axarr[1].grid("on")
axarr[2].plot(ts_body2_body, p_body2_body, "*-")
axarr[2].set_xlabel("Time (s)")
axarr[2].set_ylabel("Position (m)")
axarr[2].set_title("Second body wrt first body frame")
axarr[2].grid("on")
axarr[2].legend(["x", "y", "z"], loc="upper left")
fig.tight_layout()
# -
# As you can see, the resulting timestamps are the same as those of the second body; however, the first sample has been dropped because the transformation is not defined there.
#
| docs/reference_frames.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Monday, April 19, 2021
# ### Programmers - Arrange a String in Descending Order (Python)
# ### Problem: https://programmers.co.kr/learn/courses/30/lessons/12917
# ### Blog: https://somjang.tistory.com/entry/Programmers-%EB%AC%B8%EC%9E%90%EC%97%B4-%EB%82%B4%EB%A6%BC%EC%B0%A8%EC%88%9C%EC%9C%BC%EB%A1%9C-%EB%B0%B0%EC%B9%98%ED%95%98%EA%B8%B0-Python
# ### Solution
def solution(s):
answer = ''
lowercases = []
uppercases = []
for char in list(s):
if char.isupper():
uppercases.append(char)
else:
lowercases.append(char)
if len(uppercases) != 0:
uppercases = sorted(uppercases, reverse=True)
if len(lowercases) != 0:
lowercases = sorted(lowercases, reverse=True)
answer = "".join(lowercases) + "".join(uppercases)
return answer
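# Since lowercase letters sort above uppercase letters in ASCII, the same result can be obtained with a single descending sort:

```python
def solution_short(s):
    # lowercase > uppercase in ASCII, so one reverse sort suffices
    return ''.join(sorted(s, reverse=True))

solution_short("Zbcdefg")  # → "gfedcbZ"
```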
| DAY 301 ~ 400/DAY343_[Programmers] 문자열 내림차순으로 배치하기 (Python).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Logistic Regression
# +
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
import warnings
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
import seaborn as sn
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn.feature_selection import RFE
warnings.filterwarnings('ignore')
# +
results = pd.DataFrame(columns=["data", "regularization", "package", "accuracy", "precision", "recall", "f1", "rSquared", "AUC"]) # store results
hcc_median = pd.read_csv('../data/raw/median.csv')
hcc_mean = pd.read_csv('../data/raw/mean.csv')
hcc_mode = pd.read_csv('../data/raw/mode.csv')
hcc_iterative = pd.read_csv('../data/raw/iterative.csv')
# -
def get_data(data_name):
if data_name == 'median':
data = hcc_median
elif data_name == 'mean':
data = hcc_mean
elif data_name == 'mode':
data = hcc_mode
else:
data = hcc_iterative
X = data.drop(['Class'], axis=1) # get independent variable
y = data['Class'] # get dependent variable
# split data 70% to 30%
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
return X_train, X_test, y_train, y_test
# #### Feature Selection using Recursive Feature Elimination(RFE)
def feature_selection(X_train, X_test, y_train):
model = LogisticRegression()
#rfe = RFECV(estimator=model, step=1, cv=7)
rfe = RFE(estimator=model, n_features_to_select = 35, step=1)
rfe = rfe.fit(X_train, y_train)
columns = X_train.columns[rfe.support_]
X_train = rfe.transform(X_train)
X_test = rfe.transform(X_test)
X_train = pd.DataFrame(X_train, columns = columns)
X_test = pd.DataFrame(X_test, columns = columns)
return X_train, X_test, y_train
# +
### R^2 for SkLearn
def full_log_likelihood(w, X, y):
score = np.dot(X, w).reshape(1, X.shape[0])
return np.sum(-np.log(1 + np.exp(score))) + np.sum(y * score)
def null_log_likelihood(w, X, y):
    z = np.array([wi if i == 0 else 0.0 for i, wi in enumerate(w.reshape(1, X.shape[1])[0])]).reshape(X.shape[1], 1)  # keep only the intercept weight
score = np.dot(X, z).reshape(1, X.shape[0])
return np.sum(-np.log(1 + np.exp(score))) + np.sum(y * score)
def mcfadden_rsquare(w, X, y):
return 1.0 - (full_log_likelihood(w, X, y) / null_log_likelihood(w, X, y))
def mcfadden_adjusted_rsquare(w, X, y):
k = float(X.shape[1])
return 1.0 - ((full_log_likelihood(w, X, y) - k) / null_log_likelihood(w, X, y))
# -
# ### Using StatsModels
# +
data_list = ['mean', 'mode', 'median', 'iterative']
for data in data_list:
X_train, X_test, y_train, y_test = get_data(data)
X_train, X_test, y_train = feature_selection(X_train, X_test, y_train)
print('\n')
print(data.upper(), ' IMPUTED DATASET')
## run logistic regression using stat models
logistic_sm = sm.Logit(y_train.values.reshape(-1,1), X_train).fit()
print(logistic_sm.summary())
y_pred = logistic_sm.predict(X_test)
y_pred = (y_pred >= 0.5).astype(int).to_numpy()
print("Accuracy ({}): {:.2f}".format(data, metrics.accuracy_score(y_test, y_pred)))
print("Precision ({}): {:.2f}".format(data, metrics.precision_score(y_test, y_pred)))
print("Recall ({}): {:.2f}".format(data, metrics.recall_score(y_test, y_pred)))
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sn.heatmap(confusion_matrix, annot=True)
plt.show()
## save data for comparison
    results = pd.concat([results, pd.DataFrame([{
        "data": data,
        "regularization": "default",
        "package": "StatsModels",
        "accuracy": np.round(metrics.accuracy_score(y_test, y_pred), 2),
        "precision": np.round(metrics.precision_score(y_test, y_pred), 2),
        "recall": np.round(metrics.recall_score(y_test, y_pred), 2),
        "f1": np.round(metrics.f1_score(y_test, y_pred), 2),
        "rSquared": np.round(logistic_sm.prsquared, 2),
        "AUC": np.round(metrics.roc_auc_score(y_test, y_pred), 2),
    }])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0; use pd.concat
# -
# ### Using ScikitLearn
# +
data_list = ['mean', 'mode', 'median', 'iterative']
for data in data_list:
X_train, X_test, y_train, y_test = get_data(data)
X_train, X_test, y_train = feature_selection(X_train, X_test, y_train)
print('\n')
print(data.upper(), ' IMPUTED DATASET')
## run logistic regression using sklearn
logistic = LogisticRegression(fit_intercept=False)
logistic = logistic.fit(X_train,y_train)
y_pred = logistic.predict_proba(X_test)[::, 1]
y_pred = (y_pred >= 0.5).astype(int)
w = np.array(logistic.coef_).transpose()
# printing
values = np.append(logistic.intercept_, logistic.coef_)
# get the names of the values
names = np.append('intercept', X_train.columns)
table_ = pd.DataFrame(values, index = names, columns=['coef'])
table_['exp_coef'] = np.exp(table_['coef'])
print(table_)
print('\n')
print("Accuracy ({}): {:.2f}".format(data, metrics.accuracy_score(y_test, y_pred)))
print("Precision ({}): {:.2f}".format(data, metrics.precision_score(y_test, y_pred)))
print("Recall ({}): {:.2f}".format(data, metrics.recall_score(y_test, y_pred)))
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sn.heatmap(confusion_matrix, annot=True)
plt.show()
y_pred_proba = logistic.predict_proba(X_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data, auc="+str(auc))
plt.legend(loc=4)
plt.show()
## save data for comparison
    results = pd.concat([results, pd.DataFrame([{
        "data": data,
        "regularization": "default",
        "package": "SkLearn",
        "accuracy": np.round(metrics.accuracy_score(y_test, y_pred), 2),
        "precision": np.round(metrics.precision_score(y_test, y_pred), 2),
        "recall": np.round(metrics.recall_score(y_test, y_pred), 2),
        "f1": np.round(metrics.f1_score(y_test, y_pred), 2),
        "rSquared": np.round(mcfadden_rsquare(w, X_test, y_pred), 2),
        "AUC": np.round(metrics.roc_auc_score(y_test, y_pred), 2),
    }])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0; use pd.concat
# -
# ## Regularizations
#
# ### Using StatsModels
# +
data_list = ['mean', 'mode', 'median', 'iterative']
for data in data_list:
X_train, X_test, y_train, y_test = get_data(data)
X_train, X_test, y_train = feature_selection(X_train, X_test, y_train)
for i in [0, 1]:
print('\n')
print(data.upper(), ' IMPUTED DATASET using ', 'Lasso' if i == 1 else 'Ridge')
## run logistic regression using stat models
        logistic_sm = sm.Logit(y_train.values.reshape(-1,1), X_train).fit_regularized(L1_wt=i)  # L1_wt=1 gives Lasso, L1_wt=0 gives Ridge
print(logistic_sm.summary())
y_pred = logistic_sm.predict(X_test)
y_pred = (y_pred >= 0.5).astype(int).to_numpy()
print("Accuracy ({}): {:.2f}".format(data, metrics.accuracy_score(y_test, y_pred)))
print("Precision ({}): {:.2f}".format(data, metrics.precision_score(y_test, y_pred)))
print("Recall ({}): {:.2f}".format(data, metrics.recall_score(y_test, y_pred)))
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sn.heatmap(confusion_matrix, annot=True)
plt.show()
## save data for comparison
        results = pd.concat([results, pd.DataFrame([{
            "data": data,
            "regularization": 'Lasso' if i == 1 else 'Ridge',
            "package": "StatsModels",
            "accuracy": np.round(metrics.accuracy_score(y_test, y_pred), 2),
            "precision": np.round(metrics.precision_score(y_test, y_pred), 2),
            "recall": np.round(metrics.recall_score(y_test, y_pred), 2),
            "f1": np.round(metrics.f1_score(y_test, y_pred), 2),
            "rSquared": np.round(logistic_sm.prsquared, 2),
            "AUC": np.round(metrics.roc_auc_score(y_test, y_pred), 2),
        }])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0; use pd.concat
# -
# ### Using SkLearn
# +
data_list = ['mean', 'mode', 'median', 'iterative']
for data in data_list:
X_train, X_test, y_train, y_test = get_data(data)
X_train, X_test, y_train = feature_selection(X_train, X_test, y_train)
penalties = ['l1', 'l2', 'elasticnet']
for penalty in penalties:
if penalty == 'l1':
solver = 'liblinear'
name = 'Lasso'
l1_ratio = None
multi_class = 'auto'
elif penalty == 'l2':
solver = 'lbfgs'
name = 'Ridge'
l1_ratio = None
multi_class = 'auto'
elif penalty == 'elasticnet':
solver='saga'
name = 'ElasticNet'
l1_ratio = 0.5
multi_class = 'ovr'
print('\n')
print(data.upper(), ' IMPUTED DATASET using ', name)
## run logistic regression using sklearn
logistic = LogisticRegression(fit_intercept=False, penalty=penalty, solver=solver, multi_class=multi_class, l1_ratio = l1_ratio)
logistic = logistic.fit(X_train,y_train)
y_pred = logistic.predict_proba(X_test)[::, 1]
y_pred = (y_pred >= 0.5).astype(int)
w = np.array(logistic.coef_).transpose()
# printing
values = np.append(logistic.intercept_, logistic.coef_)
# get the names of the values
names = np.append('intercept', X_train.columns)
table_ = pd.DataFrame(values, index = names, columns=['coef'])
table_['exp_coef'] = np.exp(table_['coef'])
print(table_)
print('\n')
print("Accuracy ({}): {:.2f}".format(data, metrics.accuracy_score(y_test, y_pred)))
print("Precision ({}): {:.2f}".format(data, metrics.precision_score(y_test, y_pred)))
print("Recall ({}): {:.2f}".format(data, metrics.recall_score(y_test, y_pred)))
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sn.heatmap(confusion_matrix, annot=True)
plt.show()
y_pred_proba = logistic.predict_proba(X_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data, auc="+str(auc))
plt.legend(loc=4)
plt.show()
## save data for comparison
        results = pd.concat([results, pd.DataFrame([{
            "data": data,
            "regularization": name,
            "package": "SkLearn",
            "accuracy": np.round(metrics.accuracy_score(y_test, y_pred), 2),
            "precision": np.round(metrics.precision_score(y_test, y_pred), 2),
            "recall": np.round(metrics.recall_score(y_test, y_pred), 2),
            "f1": np.round(metrics.f1_score(y_test, y_pred), 2),
            "rSquared": np.round(mcfadden_rsquare(w, X_test, y_pred), 2),
            "AUC": np.round(metrics.roc_auc_score(y_test, y_pred), 2),
        }])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0; use pd.concat
# -
results_sklearn = results[results.package == 'SkLearn']
final_sklearn = results_sklearn.pivot(index=['data', 'regularization'], columns="package", values=['accuracy', 'precision', 'recall', 'f1', 'rSquared', 'AUC'])
final_sklearn.columns = final_sklearn.columns.swaplevel(0,1)
final_sklearn
results_statsmodels = results[results.package == 'StatsModels']
final_statsmodels = results_statsmodels.pivot(index=['data', 'regularization'], columns="package", values=['accuracy', 'precision', 'recall', 'f1', 'rSquared', 'AUC'])
final_statsmodels.columns = final_statsmodels.columns.swaplevel(0,1)
final_statsmodels
| notebooks/.ipynb_checkpoints/Logistic_Regression-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from scipy.interpolate import interp1d, UnivariateSpline
from keras.layers import Conv1D, Input, Dense, Activation, Dropout, AveragePooling1D, Flatten
from keras.models import Model, save_model
from dask.distributed import Client
import xarray as xr
from os.path import join
from glob import glob
data_path = "/Users/dgagne/data/caubauw_csv/"
csv_files = sorted(glob(data_path + "*.csv"))
file_types = [csv_file.split("/")[-1].split("_")[1] for csv_file in csv_files]
print(file_types)
data = dict()
for c, csv_file in enumerate(csv_files):
print(csv_file)
data[file_types[c]] = pd.read_csv(csv_file, na_values=[-9999.0])
data[file_types[c]].index = pd.to_datetime(data[file_types[c]]["TimeStr"], format="%Y%m%d.%H:%M")
combined_data = pd.concat(data, axis=1, join="inner")
q_levs = combined_data.loc[:, [("tower", "Q_10m"), ("tower", "Q_2m")]]["tower"]
plt.hist(q_levs["Q_10m"] - q_levs["Q_2m"], 100)
plt.gca().set_yscale("log")
combined_data.loc[:, [("tower", "TA_10m"), ("tower", "TA_2m"), ("surface", "P0")]]
(combined_data[("tower", "TA_10m")] - combined_data[("tower", "TA_2m")]).hist(bins=50)
plt.gca().set_yscale("log")
TA_mean = 0.5 * (combined_data[("tower", "TA_10m")] + combined_data[("tower", "TA_2m")]) + 273
pres_10m = np.exp(9.81 * -8 / (287 * TA_mean)) * combined_data[("surface", "P0")]
plt.hist(pres_10m - combined_data[("surface", "P0")], 100)
def potential_temperature(temperature_k, pressure_hpa, pressure_reference_hpa=1000.0):
"""
Convert temperature to potential temperature based on the available pressure. Potential temperature is at a
reference pressure of 1000 mb.
Args:
temperature_k: The air temperature in units K
pressure_hpa: The atmospheric pressure in units hPa
pressure_reference_hpa: The reference atmospheric pressure for the potential temperature in hPa;
default 1000 hPa
Returns:
The potential temperature in units K
"""
return temperature_k * (pressure_reference_hpa / pressure_hpa) ** (2.0 / 7.0)
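# A quick numeric sanity check of the formula (standalone copy with illustrative values):

```python
# a parcel at 280 K and 900 hPa, brought adiabatically to the 1000 hPa reference
theta = 280.0 * (1000.0 / 900.0) ** (2.0 / 7.0)
round(theta, 1)  # → 288.6 K; compression to higher pressure warms the parcel
```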
theta_10_p_2 = potential_temperature(combined_data[("tower", "TA_10m")] + 273, combined_data[("surface", "P0")])
theta_10_p_10 = potential_temperature(combined_data[("tower", "TA_10m")] + 273, pres_10m)
plt.figure(figsize=(10, 6))
plt.hist(theta_10_p_2 - theta_10_p_10, bins=100)
print("flux")
flux_data = pd.read_csv('/d1/data/caubauw_csv/all_flux_data.20010101-20180201.csv')
print("surface")
surface_data = pd.read_csv("/d1/data/caubauw_csv/all_surface_data.20010101-20180201.csv")
print("tower")
tower_data = pd.read_csv("/d1/data/caubauw_csv/all_tower_data.20010101-20180201.csv")
for col in flux_data.columns:
print(col)
surface_data.shape
flux_data.shape
tower_data.shape
tower_data.columns
# ## Tower Data Variables
# * F: Wind speed (m s-1)
# * D: Wind direction (degrees)
# * TA: Air temperature (K)
# * Q: Specific Humidity (1e-3)
# * RH: Relative Humidity (1e-2)
plt.scatter(tower_data["Q_20m"], flux_data["H"], 1, "k")
tower_data["F_2m"].max()
tower_data['IQ_2m'].hist()
plt.figure(figsize=(10, 6))
variable = "TA"
all_vars = tower_data.columns[tower_data.columns.str.startswith(variable)]
heights = all_vars.str.split("_").str[1].str[:-1].astype(int)
sorted_heights = heights.argsort()
sorted_vars = all_vars[sorted_heights]
plt.pcolormesh(np.arange(2000), heights[sorted_heights], tower_data.loc[:, sorted_vars].values[:2000].T-273)
plt.colorbar()
plt.plot(tower_data.loc[34242, sorted_vars], heights.sort_values(), 'ko-')
plt.plot(tower_data.loc[34243, sorted_vars], heights.sort_values(), 'ro-')
plt.plot(tower_data.loc[34244, sorted_vars], heights.sort_values(), 'bo-')
idx = 2355
f = UnivariateSpline(heights.sort_values(), tower_data.loc[idx, sorted_vars], k=3, s=0.01)
plt.plot(f(np.arange(2, 200)), np.arange(2, 200), color="lightblue")
plt.plot(tower_data.loc[idx, sorted_vars], heights.sort_values(), "ko-")
plt.plot(f.derivative()(np.arange(2, 200)), np.arange(2, 200))
variable = "TA"
all_vars = tower_data.columns[tower_data.columns.str.startswith(variable)]
heights = all_vars.str.split("_").str[1].str[:-1].astype(int)
sorted_heights = np.sort(heights)
sorted_height_idxs = heights.argsort()
sorted_vars = all_vars[sorted_height_idxs]
height_data = tower_data.loc[:, sorted_vars]
times = pd.DatetimeIndex(tower_data["TimeStr"].str.replace(".", "T", regex=False))
interp_heights = np.arange(2, 202, 2)
interp_data = pd.DataFrame(np.zeros((height_data.shape[0], interp_heights.size)), index=height_data.index,
columns=interp_heights)
for t, time in enumerate(times):
if time.hour == 0 and time.minute == 0:
print(time)
f = UnivariateSpline(heights.sort_values(), tower_data.loc[t, sorted_vars], k=3, s=0)
interp_data.iloc[t] = f(interp_heights)
temp_mod_input = Input(shape=(100, 1))
temp_mod = Conv1D(16, 5, padding="same")(temp_mod_input)
temp_mod = Activation("relu")(temp_mod)
temp_mod = Dropout(0.1)(temp_mod)
temp_mod = AveragePooling1D()(temp_mod)
temp_mod = Conv1D(32, 5, padding="same")(temp_mod)
temp_mod = Activation("relu")(temp_mod)
temp_mod = Dropout(0.1)(temp_mod)
temp_mod = AveragePooling1D()(temp_mod)
temp_mod = Conv1D(64, 5, padding="same")(temp_mod)
temp_mod = Activation("relu")(temp_mod)
temp_mod = Dropout(0.1)(temp_mod)
temp_mod = AveragePooling1D()(temp_mod)
temp_mod = Conv1D(128, 5, padding="same")(temp_mod)
temp_mod = Activation("relu")(temp_mod)
temp_mod = Dropout(0.1)(temp_mod)
temp_mod = AveragePooling1D()(temp_mod)
temp_mod = Flatten()(temp_mod)
temp_mod = Dense(1)(temp_mod)
temp_model = Model(temp_mod_input, temp_mod)
temp_model.summary()
interp_data.index = times
interp_data.to_csv("/d1/dgagne/caubauw_temperature_profiles.csv", )
temp_model.compile(optimizer="adam", loss="mse")
norm_temp_interp = (interp_data - interp_data.mean()) / interp_data.std()
flux_data
temp_model.fit(np.expand_dims(norm_temp_interp.values, -1), flux_data["H"].values.ravel(), batch_size=1024, epochs=20, validation_split=0.3)
np.expand_dims(norm_temp_interp.values, -1).shape
plt.figure(figsize=(10, 6))
plt.plot(times, flux_data["H"].values.ravel(), "ko-")
plt.xlim("2001-04-03", "2001-04-04")
plt.ylim(-50, 100)
plt.figure(figsize=(10, 6))
plt.contourf(times[0:2000], interp_heights, interp_data.values.T[:, :2000], 20)
sorted_heights
surface_data
flux_data
| notebooks/Cabauw_Processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !wget --no-check-certificate \
# https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \
# -O /tmp/sarcasm.json
# +
import json
with open("/tmp/sarcasm.json", 'r') as f:
datastore = json.load(f)
# -
datastore[0]
# +
sentences = []
labels = []
urls = []
for item in datastore:
sentences.append(item['headline'])
labels.append(item['is_sarcastic'])
urls.append(item['article_link'])
# -
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(oov_token='<OOV>')
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
len(word_index)
print(word_index)
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, padding='post')
print(padded[0])
padded.shape
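# `pad_sequences(..., padding='post')` appends zeros to each tokenized headline up to the longest sequence length. A minimal pure-Python sketch of the same idea:

```python
def pad_post(sequences, value=0):
    # right-pad every sequence with `value` up to the longest length
    maxlen = max(len(s) for s in sequences)
    return [s + [value] * (maxlen - len(s)) for s in sequences]

pad_post([[1, 2], [3, 4, 5]])  # → [[1, 2, 0], [3, 4, 5]]
```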
| deeplearning.ai-tensorflow-developer-certificate/3-of-4-nlp-in-tf/week-1/sarcasm-detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import time
sys.path.append('source/')
import numpy as np
import matplotlib.pyplot as plt
import probability as p
from plotting_tools import plot_probabilities
# -
# # Analysis of the rank of the constraint matrix
#
# This notebook is meant to give more intuition on when the constraint matrix, denoted $[\pmb{T}_A, \pmb{U\Sigma}]$ in the paper, is full rank, and thus when the unique solution to the linear system exists.
# We use the notation from the proof of Theorem 1. Thus, by *left matrix* we mean the matrix consisting of rows
#
# $$\text{vec}\left(\begin{bmatrix} \pmb{a}_{m_n} \\ 1\end{bmatrix} \pmb{f}_n^\top \right)^\top.$$
#
# By *full matrix* we mean the matrix consisting of rows
#
# $$\begin{bmatrix}\text{vec}\left(\begin{bmatrix} \pmb{a}_{m_n} \\ 1\end{bmatrix} \pmb{f}_n^\top \right)^\top
# & f_K(t_n) \dots f_{2K-1}(t_n)\end{bmatrix}.$$
#
# To switch between full matrix and left matrix use the flag `full_matrix`.
#
# In the Relax and Recover paper, we assume that at most one measurement is taken at each time. To see what happens when this is not the case, set the flag `one_per_time` to `False`.
#
full_matrix = True
one_per_time = True
# ### Run simulations and calculate matrix rank
# Calculate the matrix rank over `n_repetitions` repetitions, for different trajectories, anchor positions, and measurement subsets. From this, we can estimate the probability that the matrix is full rank for the given parameters.
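# As a generic illustration of the rank-based probability estimate (not the structured constraint matrix built by `p.matrix_rank_experiment`), the same Monte-Carlo idea applied to unstructured random matrices looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def full_rank_probability(n_rows, n_cols, n_trials=100):
    # fraction of random Gaussian matrices that reach the maximum possible rank
    max_rank = min(n_rows, n_cols)
    hits = sum(np.linalg.matrix_rank(rng.standard_normal((n_rows, n_cols))) == max_rank
               for _ in range(n_trials))
    return hits / n_trials
```

# Gaussian matrices are full rank with probability one, so this estimate is 1.0; the interest in the paper's setting is that the structured constraint rows can make the probability strictly smaller.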
# +
experiment_params={
"n_dimensions": 2,
"n_constraints": 5,
"n_repetitions": 500, # do 5000 for smooth results
"full_matrix": full_matrix,
"n_anchors_list": [3, 4, 6, 20], # calcualte different number of anchors in a loop
"n_times": 40 if one_per_time else 15, # reduce number of times if many measurements per time are available
"one_per_time": one_per_time,
}
start = time.time()
ranks, params = p.matrix_rank_experiment(**experiment_params)
end = time.time()
print("elapsed time: {:.2f}s".format(end - start))
estimated_probabilities = np.mean(ranks >= params["max_rank"], axis=2)
estimated_variance = np.var(ranks >= params["max_rank"], axis=2)
# -
# ### Calculate the probability based on Theorem 1
# If only one measurement per time is used, then Theorem 1 holds and the upper bound calculated below becomes a tight bound. If many measurements per time are allowed, then Theorem 1 gives a necessary, but not sufficient, condition (thus an upper bound).
# +
probabilities = []
start = time.time()
for idx, n_anchors in enumerate(params["n_anchors_list"]):
print("{} anchors".format(n_anchors))
probabilities.append([p.probability_upper_bound(
params["n_dimensions"],
params["n_constraints"],
n_measurements=n,
position_wise=False,
n_anchors=n_anchors,
n_times=np.Infinity if one_per_time else params["n_times"],
full_matrix=params["full_matrix"]
) for n in params["n_measurements_list"]])
probabilities = np.array(probabilities)
print("time: {:.2f}s".format(time.time()-start))
# -
# ### Plot the results
f, ax = plt.subplots(figsize=(10, 5))
plot_probabilities(estimated_probabilities, params, ax, linestyle=":", variance=estimated_variance)
plot_probabilities(probabilities.T, params, ax, label="calculated")
matrix_name = "Full matrix" if params["full_matrix"] else "Left hand side"
measurements_type = "one measurement per time" if params["one_per_time"] else "many measurements per time"
ax.set_title("{}, {}, complexity {}".format(matrix_name, measurements_type, params["n_constraints"]))
plt.show()
| TheoryTesting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="VC4F0200CDnW"
import pandas as pd
from bokeh.plotting import figure, output_file, show,output_notebook
output_notebook()
# + colab={} colab_type="code" id="HOEQhbl0Ca2Y"
def make_dashboard(x, gdp_change, unemployment, title, file_name):
output_file(file_name)
p = figure(title=title, x_axis_label='year', y_axis_label='%')
p.line(x.squeeze(), gdp_change.squeeze(), color="firebrick", line_width=4, legend="% GDP change")
p.line(x.squeeze(), unemployment.squeeze(), line_width=4, legend="% unemployed")
show(p)
# + colab={} colab_type="code" id="W05ECbHtDpc0"
links={'GDP':'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_gdp.csv',\
'unemployment':'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_unemployment.csv'}
# + colab={} colab_type="code" id="7hM0P0SYsaBC"
filename='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_gdp.csv'
df=pd.read_csv(filename)
# + colab={"base_uri": "https://localhost:8080/", "height": 195} colab_type="code" id="2KUTolAksdJU" outputId="b7886417-da97-49d7-a822-340ca322f2a3"
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 195} colab_type="code" id="_fjUAu3-sgXS" outputId="b0d795bd-6578-49ad-a255-e0f9e5dae4ad"
filename='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_unemployment.csv'
df=pd.read_csv(filename)
df.head()
# + colab={} colab_type="code" id="PWSExS8DsntI"
filename1='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_unemployment.csv'
df1=pd.read_csv(filename1)
# + colab={"base_uri": "https://localhost:8080/", "height": 195} colab_type="code" id="Vot4rVCAsrBi" outputId="5d5d424f-84be-4fa1-c7d2-d2321504830d"
df1[df1['unemployment']>8.5]
# + colab={} colab_type="code" id="0b51OYIDsts1"
filename2='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_gdp.csv'
# + colab={"base_uri": "https://localhost:8080/", "height": 195} colab_type="code" id="LKvuUybLs0FU" outputId="f1d84935-412b-4057-a9b7-14fe77031c91"
x=pd.read_csv(filename2)
x.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 218} colab_type="code" id="pusjHTj7s4Gd" outputId="0361819f-3d5d-4db1-d966-7e735fab0ecd"
x['date']
# + colab={} colab_type="code" id="WFYm61Kns-Jc"
filename2='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_gdp.csv'
# + colab={"base_uri": "https://localhost:8080/", "height": 195} colab_type="code" id="xZpgNTSZtbFS" outputId="93481ad4-7f87-4617-8304-76f28a63a29f"
gdp_change=pd.read_csv(filename2)
gdp_change.head()
# + colab={} colab_type="code" id="Wi0sgAM2teDO"
#gdp_change[gdp_change['change-current']]
# + colab={} colab_type="code" id="xfV0M3ojtlhx"
filename3='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_unemployment.csv'
# + colab={"base_uri": "https://localhost:8080/", "height": 195} colab_type="code" id="2uhJSGKRtwUN" outputId="85f27247-4659-4de7-d39a-6acf5d781aa1"
unemployment=pd.read_csv(filename3)
unemployment.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 218} colab_type="code" id="yHWLI5outy33" outputId="1533257d-9532-42e4-fa60-292108ca5207"
unemployment['unemployment']
# + colab={} colab_type="code" id="kJ7fDj2Qt2dc"
title='GDP vs Unemployment'
# + colab={} colab_type="code" id="AuV8BPMquJgL"
file_name = "index.html"
# + colab={"base_uri": "https://localhost:8080/", "height": 617} colab_type="code" id="dEvrEoI9uKlV" outputId="32425e3d-0c47-4197-e1be-1599ee9148e9"
make_dashboard(x['date'],gdp_change['change-current'],unemployment['unemployment'],title="GDP vs Unemployment",file_name="index.html")
# + colab={} colab_type="code" id="Zxm-DY3EuRBM"
credentials = {
"apikey": "<KEY>",
"cos_hmac_keys": {
"access_key_id": "<KEY>",
"secret_access_key": "<KEY>"
},
"endpoints": "https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints",
"iam_apikey_description": "Auto-generated for key <KEY>",
"iam_apikey_name": "WDP-Viewer-kakashiibm-donotdelete-pr-nr3aznbizttnkm",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Reader",
"iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/523cf46bdc484649b8ff63d860c48ce2::serviceid:ServiceId-657d9c7e-e57c-4b28-8200-153b5d57789a",
"resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:a/523cf46bdc484649b8ff63d860c48ce2:ff290e9d-2b89-4ff0-aabb-89915501d16d::"
}
# +
#@hidden_cell
# -
endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
bucket_name = "kakashiibm-donotdelete-pr-nr3aznbizttnkm"
# !pip install boto3
import boto3
resource = boto3.resource(
"s3",
aws_access_key_id = credentials["cos_hmac_keys"]['access_key_id'],
aws_secret_access_key = credentials["cos_hmac_keys"]["secret_access_key"],
endpoint_url = endpoint,
)
# +
import os
directory = os.getcwd()
html_path = directory + "/" + file_name
print(html_path)
# -
# upload under the bare file name so it matches the Key used for the presigned URL below
with open(html_path, "r") as f:
    resource.Bucket(name=bucket_name).put_object(Key=file_name, Body=f.read())
Params = {'Bucket':bucket_name,'Key':file_name }
# +
import sys
time = 7 * 24 * 60 ** 2  # link lifetime: seven days, in seconds
client = boto3.client(
's3',
aws_access_key_id = credentials["cos_hmac_keys"]['access_key_id'],
aws_secret_access_key = credentials["cos_hmac_keys"]["secret_access_key"],
endpoint_url=endpoint,
)
url = client.generate_presigned_url('get_object',Params=Params,ExpiresIn=time)
print(url)
| DINO.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Leetcode
from typing import *
def check(result, expected):
assert result == expected, f"Got {result}, Expected {expected}"
# ## 1470. Shuffle The Array
# ### Question
#
# Given the array nums consisting of 2n elements in the form [x1,x2,...,xn,y1,y2,...,yn].
#
# Return the array in the form [x1,y1,x2,y2,...,xn,yn].
#
#
#
# Example 1:
#
# Input: nums = [2,5,1,3,4,7], n = 3
# Output: [2,3,5,4,1,7]
# Explanation: Since x1=2, x2=5, x3=1, y1=3, y2=4, y3=7 then the answer is [2,3,5,4,1,7].
# Example 2:
#
# Input: nums = [1,2,3,4,4,3,2,1], n = 4
# Output: [1,4,2,3,3,2,4,1]
# Example 3:
#
# Input: nums = [1,1,2,2], n = 2
# Output: [1,2,1,2]
#
#
# Constraints:
#
# 1 <= n <= 500
# nums.length == 2n
# 1 <= nums[i] <= 10^3
# ### Answer
# 1. Split into two lists `x` and `y`, starting at `0` and `n`
# 2. Pair up each list by index, `(x0,y0), (x1,y1), ... (xn,yn)`
# 3. Flatten list of tuple
class Solution:
def shuffle(self, nums: List[int], n: int) -> List[int]:
return [val for (x,y) in zip(nums[:n], nums[n:]) for val in (x,y)]
# +
# %%timeit
assert Solution().shuffle([2,5,1,3,4,7], 3) == [2,3,5,4,1,7]
assert Solution().shuffle([1,2,3,4,4,3,2,1], 4) == [1,4,2,3,3,2,4,1]
assert Solution().shuffle([1,1,2,2], 2) == [1,2,1,2]
# -
# ## 1732. Find the Highest Altitude
# ### Question
#
# There is a biker going on a road trip. The road trip consists of n + 1 points at different altitudes. The biker starts his trip on point 0 with altitude equal 0.
#
# You are given an integer array gain of length n where gain[i] is the net gain in altitude between points i and i + 1 for all (0 <= i < n). Return the highest altitude of a point.
#
#
#
# Example 1:
#
# Input: gain = [-5,1,5,0,-7]
# Output: 1
# Explanation: The altitudes are [0,-5,-4,1,1,-6]. The highest is 1.
# Example 2:
#
# Input: gain = [-4,-3,-2,-1,4,3,2]
# Output: 0
# Explanation: The altitudes are [0,-4,-7,-9,-10,-6,-3,-1]. The highest is 0.
#
#
# Constraints:
#
# n == gain.length
# 1 <= n <= 100
# -100 <= gain[i] <= 100
# ### Answer
# Each step is a delta. Cumulatively sum up the deltas and return the max
#
# - Time: O(n)
# - Mem: O(n)
class Solution:
def largestAltitude(self, gain: List[int]) -> int:
altitude = [0]
for i, g in enumerate(gain):
cumsum = altitude[i] + g
altitude.append(cumsum)
return max(altitude)
# +
# %%timeit
assert Solution().largestAltitude([-5,1,5,0,-7]) == 1
assert Solution().largestAltitude([-4,-3,-2,-1,4,3,2]) == 0
# -
# Slightly more performant:
#
# - Time: O(n)
# - Mem: O(1)
class Solution:
def largestAltitude(self, gain: List[int]) -> int:
altitude = 0
highest_altitude = 0
for i, g in enumerate(gain):
altitude = altitude + g
highest_altitude = max(highest_altitude, altitude)
return highest_altitude
# +
# %%timeit
assert Solution().largestAltitude([-5,1,5,0,-7]) == 1
assert Solution().largestAltitude([-4,-3,-2,-1,4,3,2]) == 0
# -
# ## 1347. Minimum Number of Steps to Make Two Strings Anagram
#
# ### Question
# Given two equal-size strings s and t. In one step you can choose any character of t and replace it with another character.
#
# Return the minimum number of steps to make t an anagram of s.
#
# An Anagram of a string is a string that contains the same characters with a different (or the same) ordering.
#
#
#
# Example 1:
#
# Input: s = "bab", t = "aba"
# Output: 1
# Explanation: Replace the first 'a' in t with b, t = "bba" which is anagram of s.
# Example 2:
#
# Input: s = "leetcode", t = "practice"
# Output: 5
# Explanation: Replace 'p', 'r', 'a', 'i' and 'c' from t with proper characters to make t anagram of s.
# Example 3:
#
# Input: s = "anagram", t = "mangaar"
# Output: 0
# Explanation: "anagram" and "mangaar" are anagrams.
# Example 4:
#
# Input: s = "xxyyzz", t = "xxyyzz"
# Output: 0
# Example 5:
#
# Input: s = "friend", t = "family"
# Output: 4
#
#
# Constraints:
#
# 1 <= s.length <= 50000
# s.length == t.length
# s and t contain lower-case English letters only.
# ### Answer
# 1. Find the number of different characters in each string. (delta)
# 2. Pair up the different chars and count the number of pairs sum(delta.values()) // 2
# +
from collections import Counter
class Solution:
def minSteps(self, s: str, t: str) -> int:
c1 = Counter(s)
c2 = Counter(t)
keys = set((*c1.keys(), *c2.keys()))
delta = {k: abs(c1[k] - c2[k]) for k in keys}
return sum(delta.values()) // 2
# +
# %%timeit
assert Solution().minSteps('bab', 'aba') == 1
assert Solution().minSteps('leetcode', 'practice') == 5
assert Solution().minSteps('mangaar', 'anagram') == 0
assert Solution().minSteps('xxyyzz', 'xxyyzz') == 0
assert Solution().minSteps('friend', 'family') == 4
# -
# ## 767. Reorganize String
#
# ### Question
#
# Given a string S, check if the letters can be rearranged so that two characters that are adjacent to each other are not the same.
#
# If possible, output any possible result. If not possible, return the empty string.
#
# Example 1:
#
# Input: S = "aab"
# Output: "aba"
# Example 2:
#
# Input: S = "aaab"
# Output: ""
# Note:
#
# S will consist of lowercase letters and have length in range [1, 500].
# +
from collections import Counter
import heapq

class Solution:
    def reorganizeString(self, S: str) -> str:
        counts = Counter(S)
        # impossible when the most frequent char exceeds half the slots (rounded up)
        if max(counts.values()) > (len(S) + 1) // 2:
            return ''
        heap = [(-n, c) for c, n in counts.items()]  # max-heap on remaining count
        heapq.heapify(heap)
        out, prev = [], None
        while heap:
            n, c = heapq.heappop(heap)
            out.append(c)
            if prev:
                heapq.heappush(heap, prev)  # re-admit the char held out last round
            prev = (n + 1, c) if n + 1 else None
        return ''.join(out)
# -
assert Solution().reorganizeString('aab') == 'aba'
assert Solution().reorganizeString('aaab') == ''
# # end
| nbs/06_other/09_leetcode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Data Input via APIs
#
# This is about using an API (application programming interface). The basic idea: an API allows
# direct access to some database, or parts of it, without having to download everything
#
# Documentation is here...
#
# https://pandas-datareader.readthedocs.io/en/latest/index.html
#
# This documentation is good too:
#
# http://pandas-datareader.readthedocs.io/en/latest/remote_data.html
# +
import os
import pandas
from pandas_datareader import data, wb # This will import the data reader
import matplotlib.pyplot as plt
import datetime
# %matplotlib inline
# +
#start = datetime.datetime(2005,1,1) # simple functionality of the datetime package
# just specify the year, month, and date and it returns
# an object that the data reader will interpret
codes = ["GDPC1", "PCEC96"] # here are the codes, remember this from EGB?
# Honestly, this is the hardest part with APIs: figuring
# out the codes to request from the API
fred = data.DataReader(codes,"fred",2005) # Then for fred, you hand it the codes
# Tell it you want to ask from FRED
# then tell it the start date
# -
fred.head()
fred.dropna(inplace = True)
fred.pct_change().plot()
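# `pct_change` above converts levels into period-over-period growth rates; the arithmetic behind it is just:

```python
def pct_change(series):
    # growth rate (x_t - x_{t-1}) / x_{t-1}; the first period has no predecessor
    return [None] + [(b - a) / a for a, b in zip(series, series[1:])]

pct_change([100, 110, 99])  # → [None, 0.1, -0.1]
```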
# **Exercise** Can you find the unemployment rate for the US from FRED? Use the data reader, and create a plot of unemployment from 2005 onward. Challenge: can you create a histogram of unemployment rates?
start = datetime.datetime(2018,1,1)
end = datetime.datetime(2018,2,28)
ticker = 'AMZN'
stkqt = data.DataReader(ticker, 'quandl', start, end)
stkqt.head()
# +
bitcoin = data.DataReader("BITSTAMP/USD", 'quandl', start, end)
bitcoin["Last"].plot()
# -
bitcoin = data.DataReader("MSFT.US", 'quandl', start, end)
bitcoin.shape
# **Exercise** Grab Lululemon's data. Plot the closing value and the volume since the beginning of the year.
lulu = data.DataReader(["LULU","UA"], 'google', start, end)
lulu = lulu.to_frame()
lulu.head()
| intro_to_apis/data_apis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import collections
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.set_printoptions(edgeitems=2)
torch.manual_seed(123)
# -
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
from torchvision import datasets, transforms
data_path = '../data-unversioned/p1ch6/'
cifar10 = datasets.CIFAR10(data_path, train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4915, 0.4823, 0.4468),
(0.2470, 0.2435, 0.2616))
]))
cifar10_val = datasets.CIFAR10(data_path, train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4915, 0.4823, 0.4468),
(0.2470, 0.2435, 0.2616))
]))
label_map = {0: 0, 2: 1}
class_names = ['airplane', 'bird']
cifar2 = [(img, label_map[label]) for img, label in cifar10 if label in [0, 2]]
cifar2_val = [(img, label_map[label]) for img, label in cifar10_val if label in [0, 2]]
connected_model = nn.Sequential(
nn.Linear(3072, 1024),
nn.Tanh(),
nn.Linear(1024, 512),
nn.Tanh(),
nn.Linear(512, 128),
nn.Tanh(),
nn.Linear(128, 2))
numel_list = [p.numel() for p in connected_model.parameters() if p.requires_grad]
sum(numel_list), numel_list
first_model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1))
numel_list = [p.numel() for p in first_model.parameters()]
sum(numel_list), numel_list
# +
linear = nn.Linear(3072, 1024)
linear.weight.shape, linear.bias.shape
# -
conv = nn.Conv2d(3, 16, kernel_size=3) # <1>
conv
conv.weight.shape, conv.bias.shape
img, _ = cifar2[0]
output = conv(img.unsqueeze(0))
img.unsqueeze(0).shape, output.shape
plt.imshow(img.mean(0), cmap='gray')
plt.show()
plt.figure(figsize=(10, 4.8)) # bookskip
ax1 = plt.subplot(1, 2, 1) # bookskip
plt.title('output') # bookskip
plt.imshow(output[0, 0].detach(), cmap='gray')
plt.subplot(1, 2, 2, sharex=ax1, sharey=ax1) # bookskip
plt.imshow(img.mean(0), cmap='gray') # bookskip
plt.title('input') # bookskip
plt.savefig('Ch8_F2_PyTorch.png') # bookskip
plt.show()
conv = nn.Conv2d(3, 1, kernel_size=3, padding=1) # <1>
output = conv(img.unsqueeze(0))
img.unsqueeze(0).shape, output.shape
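# The spatial size after a convolution follows floor((H + 2*padding - kernel) / stride) + 1. A quick check against the shapes printed above (an illustrative helper, not part of PyTorch):

```python
def conv2d_out_size(h, kernel, padding=0, stride=1):
    # standard convolution output-size formula
    return (h + 2 * padding - kernel) // stride + 1

conv2d_out_size(32, 3, padding=0)  # 30: a 3x3 kernel without padding trims k-1 pixels
conv2d_out_size(32, 3, padding=1)  # 32: padding=1 preserves the 32x32 CIFAR size
```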
# +
with torch.no_grad():
conv.bias.zero_()
with torch.no_grad():
conv.weight.fill_(1.0 / 9.0)
# -
output = conv(img.unsqueeze(0))
plt.figure(figsize=(10, 4.8)) # bookskip
ax1 = plt.subplot(1, 2, 1) # bookskip
plt.title('output') # bookskip
plt.imshow(output[0, 0].detach(), cmap='gray')
plt.subplot(1, 2, 2, sharex=ax1, sharey=ax1) # bookskip
plt.imshow(img.mean(0), cmap='gray') # bookskip
plt.title('input') # bookskip
plt.savefig('Ch8_F4_PyTorch.png') # bookskip
plt.show()
# +
conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)
with torch.no_grad():
conv.weight[:] = torch.tensor([[-1.0, 0.0, 1.0],
[-1.0, 0.0, 1.0],
[-1.0, 0.0, 1.0]])
conv.bias.zero_()
# -
output = conv(img.unsqueeze(0))
plt.figure(figsize=(10, 4.8)) # bookskip
ax1 = plt.subplot(1, 2, 1) # bookskip
plt.title('output') # bookskip
plt.imshow(output[0, 0].detach(), cmap='gray')
plt.subplot(1, 2, 2, sharex=ax1, sharey=ax1) # bookskip
plt.imshow(img.mean(0), cmap='gray') # bookskip
plt.title('input') # bookskip
plt.savefig('Ch8_F5_PyTorch.png') # bookskip
plt.show()
# +
pool = nn.MaxPool2d(2)
output = pool(img.unsqueeze(0))
img.unsqueeze(0).shape, output.shape
# -
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(16, 8, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
# ...
)
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
nn.Conv2d(16, 8, kernel_size=3, padding=1),
nn.Tanh(),
nn.MaxPool2d(2),
# ... <1>
nn.Linear(8 * 8 * 8, 32),
nn.Tanh(),
nn.Linear(32, 2))
numel_list = [p.numel() for p in model.parameters()]
sum(numel_list), numel_list
# + tags=["raises-exception"]
model(img.unsqueeze(0))
# -
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
self.act1 = nn.Tanh()
self.pool1 = nn.MaxPool2d(2)
self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
self.act2 = nn.Tanh()
self.pool2 = nn.MaxPool2d(2)
self.fc1 = nn.Linear(8 * 8 * 8, 32)
self.act3 = nn.Tanh()
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = self.pool1(self.act1(self.conv1(x)))
out = self.pool2(self.act2(self.conv2(out)))
out = out.view(-1, 8 * 8 * 8) # <1>
out = self.act3(self.fc1(out))
out = self.fc2(out)
return out
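# The magic number `8 * 8 * 8` in the `view` call comes from the two `MaxPool2d(2)` layers halving the 32x32 input twice (32 → 16 → 8) while `conv2` leaves 8 channels. A small check of that bookkeeping:

```python
def flattened_size(h_in, n_pools, channels):
    # each MaxPool2d(2) halves the spatial size; flatten yields C * H * W features
    h = h_in
    for _ in range(n_pools):
        h //= 2
    return channels * h * h

flattened_size(32, 2, 8)  # 512 == 8 * 8 * 8, the input size of fc1
```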
# +
model = Net()
numel_list = [p.numel() for p in model.parameters()]
sum(numel_list), numel_list
# +
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
self.fc1 = nn.Linear(8 * 8 * 8, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
out = out.view(-1, 8 * 8 * 8)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
# -
model = Net()
model(img.unsqueeze(0))
# +
import datetime # <1>
def training_loop(n_epochs, optimizer, model, loss_fn, train_loader):
for epoch in range(1, n_epochs + 1): # <2>
loss_train = 0.0
for imgs, labels in train_loader: # <3>
outputs = model(imgs) # <4>
loss = loss_fn(outputs, labels) # <5>
optimizer.zero_grad() # <6>
loss.backward() # <7>
optimizer.step() # <8>
loss_train += loss.item() # <9>
if epoch == 1 or epoch % 10 == 0:
print('{} Epoch {}, Training loss {}'.format(
datetime.datetime.now(), epoch, loss_train / len(train_loader))) # <10>
# +
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True) # <1>
model = Net() # <2>
optimizer = optim.SGD(model.parameters(), lr=1e-2) # <3>
loss_fn = nn.CrossEntropyLoss() # <4>
training_loop( # <5>
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
# +
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=False)
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64, shuffle=False)
def validate(model, train_loader, val_loader):
for name, loader in [("train", train_loader), ("val", val_loader)]:
correct = 0
total = 0
with torch.no_grad(): # <1>
for imgs, labels in loader:
outputs = model(imgs)
_, predicted = torch.max(outputs, dim=1) # <2>
total += labels.shape[0] # <3>
correct += int((predicted == labels).sum()) # <4>
print("Accuracy {}: {:.2f}".format(name , correct / total))
validate(model, train_loader, val_loader)
# -
torch.save(model.state_dict(), data_path + 'birds_vs_airplanes.pt')
loaded_model = Net() # <1>
loaded_model.load_state_dict(torch.load(data_path + 'birds_vs_airplanes.pt'))
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print(f"Training on device {device}.")
# +
import datetime
def training_loop(n_epochs, optimizer, model, loss_fn, train_loader):
for epoch in range(1, n_epochs + 1):
loss_train = 0.0
for imgs, labels in train_loader:
imgs = imgs.to(device=device) # <1>
labels = labels.to(device=device)
outputs = model(imgs)
loss = loss_fn(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_train += loss.item()
if epoch == 1 or epoch % 10 == 0:
print('{} Epoch {}, Training loss {}'.format(
datetime.datetime.now(), epoch, loss_train / len(train_loader)))
# +
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
model = Net().to(device=device) # <1>
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
training_loop(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
# +
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=False)
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64, shuffle=False)
all_acc_dict = collections.OrderedDict()
def validate(model, train_loader, val_loader):
accdict = {}
for name, loader in [("train", train_loader), ("val", val_loader)]:
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in loader:
imgs = imgs.to(device=device)
labels = labels.to(device=device)
outputs = model(imgs)
_, predicted = torch.max(outputs, dim=1) # <1>
total += labels.shape[0]
correct += int((predicted == labels).sum())
print("Accuracy {}: {:.2f}".format(name , correct / total))
accdict[name] = correct / total
return accdict
all_acc_dict["baseline"] = validate(model, train_loader, val_loader)
# -
loaded_model = Net().to(device=device)
loaded_model.load_state_dict(torch.load(data_path + 'birds_vs_airplanes.pt', map_location=device))
class NetWidth(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)
self.fc1 = nn.Linear(16 * 8 * 8, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
out = out.view(-1, 16 * 8 * 8)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
# +
model = NetWidth().to(device=device)
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
training_loop(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
validate(model, train_loader, val_loader)
# -
class NetWidth(nn.Module):
def __init__(self, n_chans1=32):
super().__init__()
self.n_chans1 = n_chans1
self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3, padding=1)
self.fc1 = nn.Linear(8 * 8 * n_chans1 // 2, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
out = out.view(-1, 8 * 8 * self.n_chans1 // 2)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
# +
model = NetWidth(n_chans1=32).to(device=device)
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
training_loop(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
all_acc_dict["width"] = validate(model, train_loader, val_loader)
# -
sum(p.numel() for p in model.parameters())
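# The count returned above can be reproduced by hand. The sketch below assumes the `NetWidth` layout defined earlier (3x3 convolutions with bias, followed by two fully connected layers); the helper names are ours, not PyTorch's.

```python
def conv2d_params(in_c, out_c, k):
    # each output channel has in_c * k * k weights plus one bias
    return out_c * (in_c * k * k + 1)

def linear_params(n_in, n_out):
    # n_out rows of n_in weights, plus one bias per output
    return n_out * (n_in + 1)

total = (conv2d_params(3, 32, 3)          # conv1:    896
         + conv2d_params(32, 16, 3)       # conv2:   4624
         + linear_params(16 * 8 * 8, 32)  # fc1:    32800
         + linear_params(32, 2))          # fc2:       66
print(total)  # 38386
```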
def training_loop_l2reg(n_epochs, optimizer, model, loss_fn, train_loader):
for epoch in range(1, n_epochs + 1):
loss_train = 0.0
for imgs, labels in train_loader:
imgs = imgs.to(device=device)
labels = labels.to(device=device)
outputs = model(imgs)
loss = loss_fn(outputs, labels)
l2_lambda = 0.001
l2_norm = sum(p.pow(2.0).sum() for p in model.parameters()) # <1>
loss = loss + l2_lambda * l2_norm
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_train += loss.item()
if epoch == 1 or epoch % 10 == 0:
print('{} Epoch {}, Training loss {}'.format(
datetime.datetime.now(), epoch, loss_train / len(train_loader)))
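# For plain SGD with no momentum, the explicit L2 penalty above is equivalent to the optimizer's built-in `weight_decay` option, up to a factor of 2 in the coefficient. A pure-Python sketch of one update step makes the equivalence explicit:

```python
lr, lam = 1e-2, 1e-3   # learning rate and l2_lambda, as in the loop above
w = [1.0, -2.0]        # toy parameters
g = [0.5, 0.5]         # gradient of the data loss alone
# step on loss + lam * ||w||^2: the penalty adds 2 * lam * w to the gradient
penalized = [wi - lr * (gi + 2 * lam * wi) for wi, gi in zip(w, g)]
# the same step expressed as multiplicative shrinkage ("weight decay")
decayed = [wi * (1 - 2 * lr * lam) - lr * gi for wi, gi in zip(w, g)]
assert all(abs(a - b) < 1e-12 for a, b in zip(penalized, decayed))
```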
# +
model = Net().to(device=device)
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
training_loop_l2reg(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
all_acc_dict["l2 reg"] = validate(model, train_loader, val_loader)
# -
class NetDropout(nn.Module):
def __init__(self, n_chans1=32):
super().__init__()
self.n_chans1 = n_chans1
self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
self.conv1_dropout = nn.Dropout2d(p=0.4)
self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3, padding=1)
self.conv2_dropout = nn.Dropout2d(p=0.4)
self.fc1 = nn.Linear(8 * 8 * n_chans1 // 2, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
out = self.conv1_dropout(out)
out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
out = self.conv2_dropout(out)
out = out.view(-1, 8 * 8 * self.n_chans1 // 2)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
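# `nn.Dropout2d` zeroes whole feature maps; the underlying per-element mechanics of "inverted" dropout can be sketched in plain Python (an illustration only, not PyTorch's implementation):

```python
import random

def dropout_train(xs, p, rng=random.Random(0)):
    # inverted dropout: zero each element with probability p and scale
    # survivors by 1/(1-p), so the expected activation is unchanged
    return [0.0 if rng.random() < p else x / (1 - p) for x in xs]

out = dropout_train([1.0] * 100_000, p=0.4)
mean = sum(out) / len(out)
assert abs(mean - 1.0) < 0.02  # expectation is (approximately) preserved
```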
# +
model = NetDropout(n_chans1=32).to(device=device)
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
training_loop(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
all_acc_dict["dropout"] = validate(model, train_loader, val_loader)
# -
class NetBatchNorm(nn.Module):
def __init__(self, n_chans1=32):
super().__init__()
self.n_chans1 = n_chans1
self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
self.conv1_batchnorm = nn.BatchNorm2d(num_features=n_chans1)
self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3, padding=1)
self.conv2_batchnorm = nn.BatchNorm2d(num_features=n_chans1 // 2)
self.fc1 = nn.Linear(8 * 8 * n_chans1 // 2, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = self.conv1_batchnorm(self.conv1(x))
out = F.max_pool2d(torch.tanh(out), 2)
out = self.conv2_batchnorm(self.conv2(out))
out = F.max_pool2d(torch.tanh(out), 2)
out = out.view(-1, 8 * 8 * self.n_chans1 // 2)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
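# The normalization that `nn.BatchNorm2d` performs at training time can be sketched for a single feature in plain Python (running statistics, per-channel handling, and autograd are omitted):

```python
batch = [2.0, 4.0, 6.0, 8.0]
mu = sum(batch) / len(batch)
var = sum((x - mu) ** 2 for x in batch) / len(batch)
eps = 1e-5                       # numerical safety, as in BatchNorm2d
gamma, beta = 1.0, 0.0           # learnable scale and shift
normed = [gamma * (x - mu) / (var + eps) ** 0.5 + beta for x in batch]
# with gamma=1, beta=0 the normalized batch has zero mean
assert abs(sum(normed)) < 1e-9
```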
# +
model = NetBatchNorm(n_chans1=32).to(device=device)
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
training_loop(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
all_acc_dict["batch_norm"] = validate(model, train_loader, val_loader)
# -
class NetDepth(nn.Module):
def __init__(self, n_chans1=32):
super().__init__()
self.n_chans1 = n_chans1
self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3, padding=1)
self.conv3 = nn.Conv2d(n_chans1 // 2, n_chans1 // 2, kernel_size=3, padding=1)
self.fc1 = nn.Linear(4 * 4 * n_chans1 // 2, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.relu(self.conv1(x)), 2)
out = F.max_pool2d(torch.relu(self.conv2(out)), 2)
out = F.max_pool2d(torch.relu(self.conv3(out)), 2)
out = out.view(-1, 4 * 4 * self.n_chans1 // 2)
out = torch.relu(self.fc1(out))
out = self.fc2(out)
return out
# +
model = NetDepth(n_chans1=32).to(device=device)
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
training_loop(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
all_acc_dict["depth"] = validate(model, train_loader, val_loader)
# -
class NetRes(nn.Module):
def __init__(self, n_chans1=32):
super().__init__()
self.n_chans1 = n_chans1
self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(n_chans1, n_chans1 // 2, kernel_size=3, padding=1)
self.conv3 = nn.Conv2d(n_chans1 // 2, n_chans1 // 2, kernel_size=3, padding=1)
self.fc1 = nn.Linear(4 * 4 * n_chans1 // 2, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.relu(self.conv1(x)), 2)
out = F.max_pool2d(torch.relu(self.conv2(out)), 2)
out1 = out
out = F.max_pool2d(torch.relu(self.conv3(out)) + out1, 2)
out = out.view(-1, 4 * 4 * self.n_chans1 // 2)
out = torch.relu(self.fc1(out))
out = self.fc2(out)
return out
# +
model = NetRes(n_chans1=32).to(device=device)
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
training_loop(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
all_acc_dict["res"] = validate(model, train_loader, val_loader)
# -
class ResBlock(nn.Module):
def __init__(self, n_chans):
super(ResBlock, self).__init__()
self.conv = nn.Conv2d(n_chans, n_chans, kernel_size=3, padding=1, bias=False) # <1>
self.batch_norm = nn.BatchNorm2d(num_features=n_chans)
torch.nn.init.kaiming_normal_(self.conv.weight, nonlinearity='relu') # <2>
torch.nn.init.constant_(self.batch_norm.weight, 0.5)
torch.nn.init.zeros_(self.batch_norm.bias)
def forward(self, x):
out = self.conv(x)
out = self.batch_norm(out)
out = torch.relu(out)
return out + x
# +
class NetResDeep(nn.Module):
def __init__(self, n_chans1=32, n_blocks=10):
super().__init__()
self.n_chans1 = n_chans1
self.conv1 = nn.Conv2d(3, n_chans1, kernel_size=3, padding=1)
self.resblocks = nn.Sequential(*(ResBlock(n_chans=n_chans1) for _ in range(n_blocks)))  # distinct block instances; a list multiplied by n_blocks would reuse one block and share its weights
self.fc1 = nn.Linear(8 * 8 * n_chans1, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.relu(self.conv1(x)), 2)
out = self.resblocks(out)
out = F.max_pool2d(out, 2)
out = out.view(-1, 8 * 8 * self.n_chans1)
out = torch.relu(self.fc1(out))
out = self.fc2(out)
return out
# +
model = NetResDeep(n_chans1=32, n_blocks=100).to(device=device)
optimizer = optim.SGD(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
training_loop(
n_epochs = 100,
optimizer = optimizer,
model = model,
loss_fn = loss_fn,
train_loader = train_loader,
)
all_acc_dict["res deep"] = validate(model, train_loader, val_loader)
# +
trn_acc = [v['train'] for k, v in all_acc_dict.items()]
val_acc = [v['val'] for k, v in all_acc_dict.items()]
width = 0.3
plt.bar(np.arange(len(trn_acc)), trn_acc, width=width, label='train')
plt.bar(np.arange(len(val_acc)) + width, val_acc, width=width, label='val')
plt.xticks(np.arange(len(val_acc)) + width/2, list(all_acc_dict.keys()), rotation=60)
plt.ylabel('accuracy')
plt.legend(loc='lower right')
plt.ylim(0.7, 1)
plt.savefig('accuracy_comparison.png', bbox_inches='tight')
plt.show()
# -
| DeepLearning/PyTorch/book_repo/p1ch8/1_convolution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/SouravSahu/ImageCaptionGenerator/blob/master/image_caption_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="CZGnv8_qhgGI" colab_type="code" outputId="3d68997f-7c97-45ed-9a5c-32d15166a542" colab={"base_uri": "https://localhost:8080/", "height": 120}
from google.colab import drive
drive.mount("/content/drive")
# + id="v75KHgddel8f" colab_type="code" outputId="5dcdc4cd-2bef-482a-9786-a4c79514aa77" colab={"base_uri": "https://localhost:8080/", "height": 33}
from pickle import load
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Embedding
from keras.layers import Dropout
from keras.layers.merge import add
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from numpy import array
import pickle
# + id="XAbaXBvslhHw" colab_type="code" colab={}
from zipfile import ZipFile
# + id="gg-H0zvLkBbC" colab_type="code" colab={}
zf=ZipFile('drive/My Drive/Flicker8k_Dataset.zip', 'r')
zf.extractall('drive/My Drive/Flicker8k_Data')
zf.close()
# + id="TAWwpR1UlePn" colab_type="code" colab={}
import shutil
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import cv2
import os
from os import listdir
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.applications.vgg16 import preprocess_input
from pickle import dump
from keras.applications.vgg16 import VGG16
from keras.layers import Input
from keras.models import Model
# + id="IBVkcxW8ad3E" colab_type="code" colab={}
in_layer = Input(shape=(224, 224, 3))
model = VGG16(include_top=False, input_tensor=in_layer, pooling='avg')
model.summary()
# + id="ht5MbzJ_1vmZ" colab_type="code" colab={}
def extract_features(directory):
count = 0
features= dict()
in_layer = Input(shape=(224, 224, 3))
model = VGG16(include_top=False, input_tensor=in_layer, pooling='avg')
print(model.summary())
for image_name in listdir(directory):
filename = directory + "/" + image_name
image=load_img(filename, target_size=(224,224))
image=img_to_array(image)
image = image.reshape(1, image.shape[0], image.shape[1], image.shape[2])
image=preprocess_input(image)
feature = model.predict(image, verbose=0)
image_id= image_name.split(".")[0]
features[image_id]=feature
print(">>> "+ image_id)
print("done")
return features
directory = "drive/My Drive/Flicker8k_Data/Flicker8k_Dataset"
features= extract_features(directory)
print(len(features))
dump(features, open('features.pkl', 'wb'))
# + id="SLIZ0larKOTK" colab_type="code" colab={}
import pickle
with open('features.pkl', 'rb') as f:
features = pickle.load(f)
# + id="SvRdGoypgOap" colab_type="code" colab={}
# load doc into memory
def load_doc(filename):
# open the file as read only
file = open(filename, 'r')
# read all text
text = file.read()
# close the file
file.close()
return text
# + id="IvktFi7RE_O9" colab_type="code" colab={}
import pickle
with open('features.pkl', 'wb') as handle:
pickle.dump(features, handle, protocol=pickle.HIGHEST_PROTOCOL)
# + id="vZKWQ8Q2fiVR" colab_type="code" colab={}
def load_description(doc):
descriptions=dict()
for line in doc.split("\n"):
tokens = line.split()
if (len(line)<2):
continue
image_id, img_desc = tokens[0], tokens[1:]
image_id=image_id.split('.')[0]
img_desc=' '.join(img_desc)
if image_id not in descriptions:
descriptions[image_id]=list()
descriptions[image_id].append(img_desc)
return descriptions
# + id="zMO4dw-NfADy" colab_type="code" outputId="acba8ddf-e3a2-4957-b263-d56f3a804121" colab={"base_uri": "https://localhost:8080/", "height": 33}
filename = "Flickr8k.token.txt"
doc = load_doc(filename)
descriptions = load_description(doc)
print(len(descriptions))
# + id="h-pn8RwUo8ge" colab_type="code" colab={}
descriptions
# + id="7cU77JeqftVm" colab_type="code" colab={}
def clean_descriptions(descriptions):
import string
table = str.maketrans('', '', string.punctuation)
for key, desc_list in descriptions.items():
for i in range(len(desc_list)):
desc=desc_list[i].split()
desc = [word.lower() for word in desc]
desc = [w.translate(table) for w in desc]
desc = [word for word in desc if len(word)>1 and word.isalpha()]
descriptions[key][i]= ' '.join(desc)
return descriptions
# + id="Y8LI5UwCA9Iw" colab_type="code" colab={}
descriptions
# + id="Jc8AubzrPstO" colab_type="code" colab={}
def save_doc(descriptions, filename):
lines =list()
for keys, desc in descriptions.items():
for d in desc:
lines.append(keys + " " + d)
data = '\n'.join(lines)
file = open(filename, 'w')
file.write(data)
file.close()
# + id="ZOLgtImEpDb_" colab_type="code" colab={}
descriptions = clean_descriptions(descriptions)
# + id="yiYWpwn4xt25" colab_type="code" colab={}
save_doc(descriptions, 'descriptions.txt')
# + id="NMNB4_sLtxNh" colab_type="code" colab={}
def to_vocabulary(descriptions):
all_words=set()
for keys in descriptions.keys():
for desc in descriptions[keys]:
[all_words.update(desc.split())]
return all_words
# + id="y9ekHvZmuqHc" colab_type="code" colab={}
vocabulary = to_vocabulary(train_descriptions)
# + id="K8DC7glLuvXt" colab_type="code" colab={}
vocabulary
# + id="GRu5A1iWCGKp" colab_type="code" outputId="4e3d4d33-a00b-454e-a22d-48d44abbabf9" colab={"base_uri": "https://localhost:8080/", "height": 35}
vocab_len
# + id="blovYAtMPsu8" colab_type="code" colab={}
# !wget http://nlp.stanford.edu/data/glove.6B.zip
# + id="7NOCDEV4PszC" colab_type="code" outputId="a58d22cf-6096-4d23-9626-55e5f6948b03" colab={"base_uri": "https://localhost:8080/", "height": 35}
# !ls
# + id="bTdGInVbPs10" colab_type="code" colab={}
import zipfile
zip_ref = zipfile.ZipFile('glove.6B.zip', 'r')
zip_ref.extractall('drive/My Drive/Flicker8k_Data/')
zip_ref.close()
# + id="FeFIka6pPsxj" colab_type="code" colab={}
import shutil
shutil.copy2('glove.6B.zip','drive/My Drive/Flicker8k_Data/glove.6B.zip' )
# + id="xv4x1WcZHCla" colab_type="code" colab={}
def load_set(filename):
doc = load_doc(filename)
dataset = list()
for line in doc.split('\n'):
if (len(line)<1):
continue
dataset.append(line.split('.')[0])
return set(dataset)
# + id="7oif2MV7DxsK" colab_type="code" colab={}
def load_clean_desc(filename, dataset):
doc = load_doc(filename)
description = dict()
for line in doc.split('\n'):
tokens = line.split()
image_id, img_desc = tokens[0], tokens[1:]
if image_id not in dataset:
continue
if image_id in dataset:
if image_id not in description:
description[image_id] = list()
desc = 'startseq ' + ' '.join(img_desc) + " endseq"
description[image_id].append(desc)
return description
# + id="xdTTpb7SFo-Z" colab_type="code" colab={}
def load_photo_features(filename, dataset):
with open(filename, 'rb') as f:
all_features = pickle.load(f)
features = {k:all_features[k] for k in dataset}
return features
# + id="MP-ClAxMHiEV" colab_type="code" outputId="b5025539-0e69-46f8-e639-15ff3f0b1c89" colab={"base_uri": "https://localhost:8080/", "height": 67}
from pickle import load
filename = 'drive/My Drive/Flicker8k_Data/Flickr8k_text/Flickr_8k.trainImages.txt'
train_set = load_set(filename)
print('Dataset: %d' % len(train_set))
train_descriptions = load_clean_desc('descriptions.txt', train_set)
print('Descriptions: train=%d' % len(train_descriptions))
train_features = load_photo_features('features.pkl', train_set)
print('Photos: train=%d' % len(train_features))
# + id="twiE4hfRDg6e" colab_type="code" colab={}
train_features['1000268201_693b08cb0e'][0]
# + id="-_7qRXFWbgaE" colab_type="code" outputId="91d4abba-05b5-4666-d6d2-68e5897c8fed" colab={"base_uri": "https://localhost:8080/", "height": 67}
filename = 'drive/My Drive/Flicker8k_Data/Flickr8k_text/Flickr_8k.devImages.txt'
dev_set = load_set(filename)
print('Dataset: %d' % len(dev_set))
dev_descriptions = load_clean_desc('descriptions.txt', dev_set)
print('Descriptions: train=%d' % len(dev_descriptions))
dev_features = load_photo_features('features.pkl', dev_set)
print('Photos: train=%d' % len(dev_features))
# + id="YZRCYbkuJAQE" colab_type="code" colab={}
def to_lines(descriptions):
all_desc = list()
for key in descriptions.keys():
[all_desc.append(d) for d in descriptions[key]]
return all_desc
def create_tokenizer(descriptions):
lines = to_lines(descriptions)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
return tokenizer
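# To illustrate the word-index idea behind `Tokenizer`, here is a minimal pure-Python stand-in (an illustration only, not Keras's implementation):

```python
from collections import Counter

def fit_word_index(lines):
    counts = Counter(w for line in lines for w in line.split())
    # most frequent word gets index 1; index 0 is reserved for padding
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

word_index = fit_word_index(["startseq a dog runs endseq",
                             "startseq a cat sits endseq"])
assert word_index["startseq"] == 1   # ties broken by first appearance
assert len(word_index) == 7
```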
# + id="OOc-h5rzeRDZ" colab_type="code" colab={}
tokenizer = create_tokenizer(train_descriptions)
# + id="-KYL_b6DeXQL" colab_type="code" colab={}
vocab_len = len(tokenizer.word_index)+1
# + id="PfCabM11fGqX" colab_type="code" outputId="87650b10-1b5e-44dc-deb4-9a60796bb437" colab={"base_uri": "https://localhost:8080/", "height": 33}
vocab_len
# + id="PZJKbxmMezmV" colab_type="code" colab={}
ddd=train_descriptions['1000268201_693b08cb0e'][0]
# + id="4GZGv8aVfjsq" colab_type="code" colab={}
dump(tokenizer, open('tokenizer.pkl', 'wb'))
# + id="0kr6Z__qe1R6" colab_type="code" colab={}
all_desc = to_lines(train_descriptions)
# + id="i2PPlKsziR3O" colab_type="code" colab={}
train_descriptions
# + id="8CCjxW52RyNv" colab_type="code" colab={}
all_set= train_set|dev_set
all_descriptions = load_clean_desc('descriptions.txt', all_set)
# + id="xaFtRkX5R9kI" colab_type="code" colab={}
all_descriptions
# + id="k4CDQQWjgD7h" colab_type="code" outputId="77222188-bf87-4df2-9ce0-0551b54ba638" colab={"base_uri": "https://localhost:8080/", "height": 33}
def find_max_len(desc):
max_len = 0
for sents in desc.values():
for sent in sents:
max_len = max(max_len, len(sent.split(" ")))
return max_len
max_len = find_max_len(desc=all_descriptions)
print(max_len)
# + id="zNp80E2bBlKx" colab_type="code" colab={}
bbb = tokenizer.texts_to_sequences([ddd])[0]  # texts_to_sequences expects a list of texts
# + id="k3eGAZ7WCO3Z" colab_type="code" outputId="ca3372e9-e9f4-470d-dae4-e48cbce6ff17" colab={"base_uri": "https://localhost:8080/", "height": 33}
train_features['1084104085_3b06223afe'].shape
# + id="lcFdeqBCGLEO" colab_type="code" colab={}
def max_length(descriptions):
lines = to_lines(descriptions)
return max(len(d.split()) for d in lines)
# + id="DVm-aObJGPpt" colab_type="code" outputId="080235f3-c9f8-4349-fa3d-2babd394a31b" colab={"base_uri": "https://localhost:8080/", "height": 33}
max_length(all_descriptions)
# + id="900W68qWGS3x" colab_type="code" colab={}
vocab_len = len(tokenizer.word_index) + 1
# + id="odkoXMTSGdZr" colab_type="code" colab={}
def model(vocab_len, max_len):
inputs1 = Input(shape = (512,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
inputs2 = Input(shape = (max_len,))
se1 = Embedding(vocab_len, 256, mask_zero=True)(inputs2)  # mask_zero=True: index 0 is reserved for padding and is masked out
se2 = Dropout(0.5)(se1)
se3 = LSTM(256, activation = 'relu')(se2)
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation = 'relu')(decoder1)
outputs = Dense(vocab_len, activation = 'softmax')(decoder2)
model = Model(inputs = [inputs1, inputs2], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer = 'adam', metrics=['accuracy'])
print(model.summary())
plot_model(model, to_file='model.png', show_shapes=True)
return model
# + id="h8-mf5FrJPbV" colab_type="code" colab={}
filepath = 'drive/My Drive/weights/image_caption/model-ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5'
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
# + id="Uh5KhY24Jwig" colab_type="code" outputId="ae81b90c-d4dc-41e7-cedd-5e8526c56926" colab={"base_uri": "https://localhost:8080/", "height": 620}
model= model(vocab_len, max_len)
# + id="MKUtY49sL8Eu" colab_type="code" colab={}
def create_sequences(tokenizer, max_len, descriptions, image_features, vocab_len):
X1, X2, Y = list(), list(), list()
for key, description in descriptions.items():
for each_desc in description:
seq = tokenizer.texts_to_sequences([each_desc])[0]
for i in range(1, len(seq)):
in_seq, out_seq = seq[:i], seq[i]
in_seq = pad_sequences([in_seq], maxlen=max_len)[0]
out_seq = to_categorical([out_seq], num_classes = vocab_len)[0]
X1.append(image_features[key])
X2.append(in_seq)
Y.append(out_seq)
return array(X1).reshape(-1, 512), array(X2).reshape(-1, max_len), array(Y)
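# The inner loop above expands each caption into (input prefix, target word) training pairs; in isolation, the expansion looks like this:

```python
# For a caption encoded as the integer sequence [1, 2, 5, 3], one
# (input prefix, target word) pair is emitted per position
seq = [1, 2, 5, 3]
pairs = [(seq[:i], seq[i]) for i in range(1, len(seq))]
assert pairs == [([1], 2), ([1, 2], 5), ([1, 2, 5], 3)]
```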
# + id="etopTOcMKO6m" colab_type="code" colab={}
X1train, X2train, ytrain = create_sequences(tokenizer, max_len, descriptions=train_descriptions, image_features=train_features, vocab_len=vocab_len)
X1test, X2test, ytest = create_sequences(tokenizer, max_len, descriptions=dev_descriptions, image_features=dev_features, vocab_len=vocab_len)
# + id="O2yjjFkiL9eE" colab_type="code" outputId="b916283f-d8c0-477f-f7fd-40daa71c28b0" colab={"base_uri": "https://localhost:8080/", "height": 33}
ytrain.shape
# + id="gigOQvhmVO9u" colab_type="code" colab={}
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.00001)
# + id="ZUIIkB7yJm6t" colab_type="code" colab={}
model.fit([X1train, X2train], ytrain, epochs=20, verbose=1, callbacks=[checkpoint, learning_rate_reduction], validation_data=([X1test, X2test], ytest))
# + [markdown] id="RiUCoSO88_M6" colab_type="text"
# # **Sampling**
# + id="cJoC52H49Yaj" colab_type="code" colab={}
def encoder_model():
in_layer = Input(shape=(224, 224, 3))
model = VGG16(include_top=False, input_tensor=in_layer, pooling='avg')
return model
# + id="aDHxbvjvWNkf" colab_type="code" colab={}
def extract_features(filenames, model):
features = list()
print(model.summary())
for filename in filenames:
image = load_img(filename, target_size=(224, 224))
image = img_to_array(image)
image = image.reshape(1, image.shape[0], image.shape[1], image.shape[2])
image = preprocess_input(image)
feature = model.predict(image, verbose=0)
features.append(feature)
print(">>> " + filename)
print("done")
print(len(features))
return features
# + id="iNWnC9V69YD1" colab_type="code" colab={}
'''image_names = []
model = encoder_model()
features = extract_features(filenames=image_names, model=model)
features = np.array(features)
print(features.shape)'''
# + id="uE0bdckdDNF2" colab_type="code" colab={}
def word_for_id(integer, tokenizer):
for word, index in tokenizer.word_index.items():
if index == integer:
return word
return None
# + id="MAzLVchHKFdc" colab_type="code" colab={}
def generate_desc(model, tokenizer, features, max_len):
# seed the generation process
in_text = 'startseq'
# iterate over the maximum caption length
for i in range(max_len):
# integer encode input sequence
sequence = tokenizer.texts_to_sequences([in_text])[0]
# pad input
sequence = pad_sequences([sequence], maxlen=max_len)
# predict next word
yhat = model.predict([features, sequence], verbose=0)
# convert probability to integer
yhat = np.argmax(yhat)
# map integer to word
word = word_for_id(yhat, tokenizer)
# stop if we cannot map the word
if word is None:
break
# append as input for generating the next word
in_text += ' ' + word
# stop if we predict the end of the sequence
if word == 'endseq':
break
return in_text
# + id="XcrkZxrjIszo" colab_type="code" colab={}
from nltk.translate.bleu_score import corpus_bleu
def evaluate_model(model, descriptions, photos, tokenizer, max_length):
actual, predicted = list(), list()
# step over the whole set
for key, desc_list in descriptions.items():
# generate description
yhat = generate_desc(model, tokenizer, photos[key], max_length)
# store actual and predicted
references = [d.split() for d in desc_list]
actual.append(references)
predicted.append(yhat.split())
# calculate BLEU score
print('BLEU-1: %f' % corpus_bleu(actual, predicted, weights=(1.0, 0, 0, 0)))
print('BLEU-2: %f' % corpus_bleu(actual, predicted, weights=(0.5, 0.5, 0, 0)))
print('BLEU-3: %f' % corpus_bleu(actual, predicted, weights=(0.3, 0.3, 0.3, 0)))
print('BLEU-4: %f' % corpus_bleu(actual, predicted, weights=(0.25, 0.25, 0.25, 0.25)))
# + id="1Q1BwMZAwNdl" colab_type="code" colab={}
filename = 'drive/My Drive/Flicker8k_Data/Flickr8k_text/Flickr_8k.testImages.txt'
test = load_set(filename)
print('Dataset: %d' % len(test))
# descriptions
test_descriptions = load_clean_desc('descriptions.txt', test)
print('Descriptions: test=%d' % len(test_descriptions))
# photo features
test_features = load_photo_features('features.pkl', test)
print('Photos: test=%d' % len(test_features))
# evaluate model
evaluate_model(model, test_descriptions, test_features, tokenizer, max_len)
| image_caption_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as la
# # Dimension Reduction
np.random.seed(123)
np.set_printoptions(3)
# ### PCA from scratch
#
# Principal Components Analysis (PCA) basically means to find and rank all the eigenvalues and eigenvectors of a covariance matrix. This is useful because high-dimensional data (with $p$ features) may have nearly all their variation in a small number of dimensions $k$, i.e. in the subspace spanned by the eigenvectors of the covariance matrix that have the $k$ largest eigenvalues. If we project the original data into this subspace, we can have a dimension reduction (from $p$ to $k$) with hopefully little loss of information.
#
# For zero-centered vectors,
#
# \begin{align}
# \text{Cov}(X, Y) &= \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1} \\
# &= \frac{\sum_{i=1}^nX_iY_i}{n-1} \\
# &= \frac{XY^T}{n-1}
# \end{align}
#
# and so the covariance matrix for a data set X that has zero mean in each feature vector is just $XX^T/(n-1)$.
#
# In other words, we can also get the eigendecomposition of the covariance matrix from the positive semi-definite matrix $XX^T$.
#
# We will take advantage of this when we cover the SVD later in the course.
#
# Note: Here we are using a matrix of **row** vectors
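# A quick numerical preview of the SVD connection: the squared singular values of the centered data, divided by $n-1$, are exactly the eigenvalues of the covariance matrix. (This short check is independent of the cells below.)

```python
import numpy as np

rng = np.random.default_rng(1)
x_svd = rng.normal(size=(5, 50))                    # 5 features, 50 observations
xc_svd = x_svd - x_svd.mean(axis=1, keepdims=True)  # center each feature
cov_svd = xc_svd @ xc_svd.T / (xc_svd.shape[1] - 1)
evals = np.sort(np.linalg.eigvalsh(cov_svd))[::-1]  # eigenvalues, descending
s = np.linalg.svd(xc_svd, compute_uv=False)         # singular values, descending
assert np.allclose(evals, s**2 / (xc_svd.shape[1] - 1))
```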
n = 100
x1, x2, x3 = np.random.normal(0, 10, (3, n))
x4 = x1 + np.random.normal(size=x1.shape)
x5 = (x1 + x2)/2 + np.random.normal(size=x1.shape)
x6 = (x1 + x2 + x3)/3 + np.random.normal(size=x1.shape)
# #### For PCA calculations, each column is an observation
xs = np.c_[x1, x2, x3, x4, x5, x6].T
xs[:, :10]
# #### Center each observation
xc = xs - np.mean(xs, 1)[:, np.newaxis]
xc[:, :10]
# #### Covariance
#
# Remember the formula for covariance
#
# $$
# \text{Cov}(X, Y) = \frac{\sum_{i=1}^n(X_i - \bar{X})(Y_i - \bar{Y})}{n-1}
# $$
#
# where $\text{Cov}(X, X)$ is the sample variance of $X$.
cov = (xc @ xc.T)/(n-1)
cov
# #### Check
np.cov(xs)
# #### Eigendecomposition
e, v = la.eigh(cov)
idx = np.argsort(e)[::-1]
e = e[idx]
v = v[:, idx]
# #### Explain the magnitude of the eigenvalues
#
# Note that $x_4, x_5, x_6$ are linear combinations of $x_1, x_2, x_3$ with some added noise, and hence the last 3 eigenvalues are small.
plt.stem(e)
pass
# #### The eigenvalues and eigenvectors give a factorization of the covariance matrix
v @ np.diag(e) @ v.T
# ### Geometry of PCA
#
# 
#
# ### Algebra of PCA
#
# 
#
# Note that $Q^T X$ results in a new data set that is uncorrelated.
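# This claim is easy to verify numerically: after the change of basis the covariance of $Q^T X$ is diagonal, with the eigenvalues on the diagonal. (A self-contained check, separate from the demonstration cells below.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 10_000
x_demo = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], n_obs).T
xc_demo = x_demo - x_demo.mean(axis=1, keepdims=True)
cov_demo = xc_demo @ xc_demo.T / (n_obs - 1)
e_demo, q = np.linalg.eigh(cov_demo)
y_demo = q.T @ xc_demo                        # Q^T X: change of basis
cov_y = y_demo @ y_demo.T / (n_obs - 1)
assert np.allclose(cov_y, np.diag(e_demo))    # diagonal: components are uncorrelated
```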
m = np.zeros(2)
s = np.array([[1, 0.8], [0.8, 1]])
x = np.random.multivariate_normal(m, s, n).T
x.shape
# #### Calculate covariance matrix from centered observations
xc = (x - x.mean(1)[:, np.newaxis])
cov = (xc @ xc.T)/(n-1)
cov
# #### Find eigendecomposition
e, v = la.eigh(cov)
idx = np.argsort(e)[::-1]
e = e[idx]
v = v[:, idx]
# #### In original coordinates
plt.scatter(x[0], x[1], alpha=0.5)
for e_, v_ in zip(e, v.T):
plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('x', fontsize=14)
plt.ylabel('y', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
pass
# #### After change of basis
yc = v.T @ xc
plt.scatter(yc[0,:], yc[1,:], alpha=0.5)
for e_, v_ in zip(e, np.eye(2)):
plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('PC1', fontsize=14)
plt.ylabel('PC2', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
pass
# #### Explained variance
#
# This is a consequence of the invariance of the trace under a change of basis. Since the original diagonal entries of the covariance matrix are the variances of the features, the sum of the eigenvalues must equal the sum of the original variances. In other words, the cumulative proportion of the top $n$ eigenvalues is the "explained variance" of the first $n$ principal components.
e/e.sum()
# #### Check against scikit-learn
#
# Note that the PCA from scikit-learn works with feature vectors, not observation vectors, so we fit on the transpose.
from sklearn.decomposition import PCA
pca = PCA()
z = pca.fit_transform(x.T)
pca.explained_variance_ratio_
# #### Eigenvectors from PCA
pca.components_
# #### Eigenvalues from PCA
pca.explained_variance_
# #### The principal components are identical to our home-brew version, up to a flip in direction of eigenvectors
plt.scatter(z[:, 0], z[:, 1], alpha=0.5)
for e_, v_ in zip(e, np.eye(2)):
plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('PC1', fontsize=14)
plt.ylabel('PC2', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
pass
# +
plt.subplot(121)
plt.scatter(-z[:, 0], -z[:, 1], alpha=0.5)
for e_, v_ in zip(e, np.eye(2)):
plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('PC1', fontsize=14)
plt.ylabel('PC2', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
plt.title('Scikit-learn PCA (flipped)')
plt.subplot(122)
plt.scatter(yc[0,:], yc[1,:], alpha=0.5)
for e_, v_ in zip(e, np.eye(2)):
plt.plot([0, e_*v_[0]], [0, e_*v_[1]], 'r-', lw=2)
plt.xlabel('PC1', fontsize=14)
plt.ylabel('PC2', fontsize=14)
plt.axis('square')
plt.axis([-3,3,-3,3])
plt.title('Homebrew PCA')
plt.tight_layout()
pass
| notebooks/B03A_PCA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
# <script>
# window.dataLayer = window.dataLayer || [];
# function gtag(){dataLayer.push(arguments);}
# gtag('js', new Date());
#
# gtag('config', 'UA-59152712-8');
# </script>
#
# # Start-to-Finish Example: Unit Testing `GiRaFFE_NRPy`: Boundary Conditions
#
# ## Author: <NAME>
#
# #### Edits by <NAME>
#
# ## This module Validates the Boundary Conditions routines for `GiRaFFE_NRPy`.
#
# **Notebook Status:** <font color='green'><b>Validated</b></font>
#
# **Validation Notes:** This module will validate the routines in [Tutorial-GiRaFFE_NRPy-BCs](Tutorial-GiRaFFE_NRPy-BCs.ipynb).
#
# ### NRPy+ Source Code for this module:
# * [GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-BCs.ipynb) Generates the driver apply boundary conditions to the vector potential and velocity.
#
# ## Introduction:
#
# This notebook validates the C code used to apply boundary conditions to components of the vector potential and the Valencia velocity.
#
# It is, in general, good coding practice to unit test functions individually to verify that they produce the expected and intended output. We will generate test data with arbitrarily-chosen analytic functions and calculate gridfunctions at the cell centers on a small numeric grid. We will then compute the values for the ghost zones in two ways: first with the boundary condition C code driver, then we compute them analytically.
#
# When this notebook is run, the significant digits of agreement between the approximate and exact values in the ghost zones will be evaluated. If the agreement falls below a threshold, the point, quantity, and level of agreement are reported [here](#compile_run).
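# One common convention for "significant digits of agreement" is the negative base-10 logarithm of the relative error; a minimal sketch (our own helper, assuming a nonzero exact value, not the C driver used below):

```python
import math

def significant_digits(approx, exact):
    # -log10 of the relative error; agreement at machine precision
    # (double) is capped at 16 digits
    if approx == exact:
        return 16.0
    return -math.log10(abs(approx - exact) / abs(exact))

assert significant_digits(1.0001, 1.0) > 3.9   # roughly 4 digits of agreement
```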
# <a id='toc'></a>
#
# # Table of Contents
# $$\label{toc}$$
#
# This notebook is organized as follows
#
# 1. [Step 1](#setup): Set up core functions and parameters for unit testing the BCs algorithm
# 1. [Step 1.a](#expressions) Write expressions for the gridfunctions we will test
# 1. [Step 1.b](#ccodekernels) Generate C functions to calculate the gridfunctions
# 1. [Step 1.c](#free_parameters) Set free parameters in the code
# 1. [Step 2](#mainc): `BCs_unit_test.c`: The Main C Code
# 1. [Step 2.a](#compile_run): Compile and run the code
# 1. [Step 3](#convergence): Code validation: Verify that relative error in numerical solution converges to zero at the expected order
# 1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
# <a id='setup'></a>
#
# # Step 1: Set up core functions and parameters for unit testing the BCs algorithm \[Back to [top](#toc)\]
# $$\label{setup}$$
#
# We'll start by appending the relevant paths to `sys.path` so that we can access sympy modules in other places. Then, we'll import NRPy+ core functionality and set up a directory in which to carry out our test.
# +
import os, sys, shutil # Standard Python modules for multiplatform OS-level functions
# First, we'll add the parent directory to the list of directories Python will check for modules.
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
from outputC import outCfunction, lhrh # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
Ccodesdir = "Start-to-Finish-UnitTests/BCs_UnitTest/"
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
thismodule = "Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs"
# Set the finite-differencing order to 2
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", 2)
# -
# <a id='expressions'></a>
#
# ## Step 1.a: Write expressions for the gridfunctions we will test \[Back to [top](#toc)\]
# $$\label{expressions}$$
#
# Now, we'll choose some functions with arbitrary forms to generate test data. We'll need to set seven gridfunctions, so expressions are being pulled from several previously written unit tests.
#
# \begin{align}
# A_x &= dy + ez + f \\
# A_y &= mx + nz + o \\
# A_z &= sx + ty + u \\
# \bar{v}^x &= ax + by + cz \\
# \bar{v}^y &= bx + cy + az \\
# \bar{v}^z &= cx + ay + bz \\
# [\sqrt{\gamma} \Phi] &= 1 - (x+2y+z).
# \end{align}
#
# (In the code below, the spatially varying parts of $\bar{v}^i$ are currently commented out, so the velocity test data is constant in space.)
#
# +
a,b,c,d,e,f,g,h,l,m,n,o,p,q,r,s,t,u = par.Cparameters("REAL",thismodule,["a","b","c","d","e","f","g","h","l","m","n","o","p","q","r","s","t","u"],1e300)
M_PI = par.Cparameters("#define",thismodule,["M_PI"], "")
AD = ixp.register_gridfunctions_for_single_rank1("EVOL","AD",DIM=3)
ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","ValenciavU",DIM=3)
psi6Phi = gri.register_gridfunctions("EVOL","psi6Phi")
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
x = rfm.xx_to_Cart[0]
y = rfm.xx_to_Cart[1]
z = rfm.xx_to_Cart[2]
AD[0] = d*y + e*z + f
AD[1] = m*x + n*z + o
AD[2] = s*x + t*y + u
ValenciavU[0] = a#*x + b*y + c*z
ValenciavU[1] = b#*x + c*y + a*z
ValenciavU[2] = c#*x + a*y + b*z
psi6Phi = sp.sympify(1) - (x + sp.sympify(2)*y + z)
# -
# <a id='ccodekernels'></a>
#
# ## Step 1.b: Generate C functions to calculate the gridfunctions \[Back to [top](#toc)\]
# $$\label{ccodekernels}$$
#
# Here, we will use the NRPy+ function `outCfunction()` to generate C code that will calculate our metric gridfunctions over an entire grid. Note that we call the function twice: once over just the interior points, and once over all points. This will allow us to compare against exact values in the ghost zones. We will also call the function to generate the boundary conditions function we are testing.
# +
metric_gfs_to_print = [\
lhrh(lhs=gri.gfaccess("evol_gfs","AD0"),rhs=AD[0]),\
lhrh(lhs=gri.gfaccess("evol_gfs","AD1"),rhs=AD[1]),\
lhrh(lhs=gri.gfaccess("evol_gfs","AD2"),rhs=AD[2]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU0"),rhs=ValenciavU[0]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU1"),rhs=ValenciavU[1]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU2"),rhs=ValenciavU[2]),\
lhrh(lhs=gri.gfaccess("evol_gfs","psi6Phi"),rhs=psi6Phi),\
]
desc = "Calculate test data on the interior grid for boundary conditions"
name = "calculate_test_data"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *restrict params,REAL *restrict xx[3],REAL *restrict auxevol_gfs,REAL *restrict evol_gfs",
body = fin.FD_outputC("returnstring",metric_gfs_to_print,params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts="InteriorPoints,Read_xxs")
desc = "Calculate test data at all points for comparison"
name = "calculate_test_data_exact"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *restrict params,REAL *restrict xx[3],REAL *restrict auxevol_gfs,REAL *restrict evol_gfs",
body = fin.FD_outputC("returnstring",metric_gfs_to_print,params="outCverbose=False").replace("IDX4","IDX4S"),
loopopts="AllPoints,Read_xxs")
import GiRaFFE_NRPy.GiRaFFE_NRPy_BCs as BC
BC.GiRaFFE_NRPy_BCs(os.path.join(Ccodesdir,"boundary_conditions"))
# -
# <a id='free_parameters'></a>
#
# ## Step 1.c: Set free parameters in the code \[Back to [top](#toc)\]
# $$\label{free_parameters}$$
#
# We also need to create the files that interact with NRPy's C parameter interface.
# +
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
# par.generate_Cparameters_Ccodes(os.path.join(out_dir))
# Step 3.d.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Override parameter defaults with values based on command line arguments and NGHOSTS.
params.Nxx0 = atoi(argv[1]);
params.Nxx1 = atoi(argv[2]);
params.Nxx2 = atoi(argv[3]);
params.Nxx_plus_2NGHOSTS0 = params.Nxx0 + 2*NGHOSTS;
params.Nxx_plus_2NGHOSTS1 = params.Nxx1 + 2*NGHOSTS;
params.Nxx_plus_2NGHOSTS2 = params.Nxx2 + 2*NGHOSTS;
// Step 0d: Set up space and time coordinates
// Step 0d.i: Declare \Delta x^i=dxx{0,1,2} and invdxx{0,1,2}, as well as xxmin[3] and xxmax[3]:
const REAL xxmin[3] = {-1.0,-1.0,-1.0};
const REAL xxmax[3] = { 1.0, 1.0, 1.0};
params.dxx0 = (xxmax[0] - xxmin[0]) / ((REAL)params.Nxx_plus_2NGHOSTS0-1.0);
params.dxx1 = (xxmax[1] - xxmin[1]) / ((REAL)params.Nxx_plus_2NGHOSTS1-1.0);
params.dxx2 = (xxmax[2] - xxmin[2]) / ((REAL)params.Nxx_plus_2NGHOSTS2-1.0);
params.invdx0 = 1.0 / params.dxx0;
params.invdx1 = 1.0 / params.dxx1;
params.invdx2 = 1.0 / params.dxx2;
\n""")
# Generates declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# -
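# Note the spacing choice encoded above: `dxx` divides by `Nxx_plus_2NGHOSTS - 1`, so the grid *including* ghost zones spans `[xxmin, xxmax]`, and the outermost ghost points land exactly on the domain boundary. A small Python sketch of that construction:

```python
# Pure-Python sketch of the grid construction encoded in free_parameters.h.
NGHOSTS = 3
Nxx0 = 2
Nxx_plus_2NGHOSTS0 = Nxx0 + 2 * NGHOSTS
xxmin0, xxmax0 = -1.0, 1.0
# Divide by (number of points - 1): ghost zones are inside [xxmin, xxmax].
dxx0 = (xxmax0 - xxmin0) / (Nxx_plus_2NGHOSTS0 - 1.0)
xx0 = [xxmin0 + j * dxx0 for j in range(Nxx_plus_2NGHOSTS0)]
assert xx0[0] == xxmin0                 # first ghost point on the boundary
assert abs(xx0[-1] - xxmax0) < 1e-14    # last ghost point on the boundary
```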
# <a id='mainc'></a>
#
# # Step 2: `BCs_unit_test.c`: The Main C Code \[Back to [top](#toc)\]
# $$\label{mainc}$$
#
# Here we compare the results of our boundary conditions, `apply_bcs_potential()` and `apply_bcs_velocity()`, against the exact results, looping over the entire numerical grid.
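# As an illustration of what "filling ghost zones" means here, the sketch below applies a zeroth-order copy (outflow-style) condition on a 1-D array. This is purely illustrative and is not necessarily the exact extrapolation the `GiRaFFE_NRPy` driver uses:

```python
# Illustrative only: fill NGHOSTS ghost points on each side of a 1-D grid
# by copying the nearest already-set value outward.
NGHOSTS = 3
interior = [1.0, 2.0, 3.0, 4.0]
gf = [0.0] * NGHOSTS + interior + [0.0] * NGHOSTS
for g in range(NGHOSTS - 1, -1, -1):           # march outward from the interior
    gf[g] = gf[g + 1]                          # lower ghost: copy inward neighbor
    gf[len(gf) - 1 - g] = gf[len(gf) - 2 - g]  # upper ghost likewise
assert gf[:NGHOSTS] == [1.0, 1.0, 1.0]
assert gf[-NGHOSTS:] == [4.0, 4.0, 4.0]
```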
# +
# %%writefile $Ccodesdir/BCs_unit_test.c
// These are common packages that we are likely to need.
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "string.h" // Needed for strncmp, etc.
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#include <time.h> // Needed to set a random seed.
#define REAL double
#include "declare_Cparameters_struct.h"
const int NGHOSTS = 3;
REAL a,b,c,d,e,f,g,h,l,m,n,o,p,q,r,s,t,u;
// Standard NRPy+ memory access:
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
// Standard formula to calculate significant digits of agreement:
#define SDA(a,b) (1.0-log10(2.0*fabs((a)-(b))/(fabs(a)+fabs(b))))
// Give gridfunctions their names:
#define VALENCIAVU0GF 0
#define VALENCIAVU1GF 1
#define VALENCIAVU2GF 2
#define NUM_AUXEVOL_GFS 3
#define AD0GF 0
#define AD1GF 1
#define AD2GF 2
#define STILDED0GF 3
#define STILDED1GF 4
#define STILDED2GF 5
#define PSI6PHIGF 6
#define NUM_EVOL_GFS 7
#include "calculate_test_data.h"
#include "calculate_test_data_exact.h"
#include "boundary_conditions/GiRaFFE_boundary_conditions.h"
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
#include "set_Cparameters-nopointer.h"
// We'll define our grid slightly differently from how we normally would. We let our outermost
// ghostzones coincide with xxmin and xxmax instead of the interior of the grid. This means
// that the ghostzone points will have identical positions, so we can do convergence tests of them.
// Step 0e: Set up cell-centered Cartesian coordinate grids
REAL *xx[3];
xx[0] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS0);
xx[1] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS1);
xx[2] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS2);
for(int j=0;j<Nxx_plus_2NGHOSTS0;j++) xx[0][j] = xxmin[0] + ((REAL)(j))*dxx0;
for(int j=0;j<Nxx_plus_2NGHOSTS1;j++) xx[1][j] = xxmin[1] + ((REAL)(j))*dxx1;
for(int j=0;j<Nxx_plus_2NGHOSTS2;j++) xx[2][j] = xxmin[2] + ((REAL)(j))*dxx2;
//for(int i=0;i<Nxx_plus_2NGHOSTS0;i++) printf("xx[0][%d] = %.15e\n",i,xx[0][i]);
// This is the array to which we'll write the NRPy+ variables.
REAL *auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS2 * Nxx_plus_2NGHOSTS1 * Nxx_plus_2NGHOSTS0);
REAL *evol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS2 * Nxx_plus_2NGHOSTS1 * Nxx_plus_2NGHOSTS0);
// And another for exact data:
REAL *auxevol_exact_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS2 * Nxx_plus_2NGHOSTS1 * Nxx_plus_2NGHOSTS0);
REAL *evol_exact_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS2 * Nxx_plus_2NGHOSTS1 * Nxx_plus_2NGHOSTS0);
// Generate some random coefficients. Leave the random seed on its default for consistency between trials.
a = (double)(rand()%20)/5.0;
f = (double)(rand()%20)/5.0;
m = (double)(rand()%20)/5.0;
b = (double)(rand()%10-5)/100.0;
c = (double)(rand()%10-5)/100.0;
d = (double)(rand()%10-5)/100.0;
g = (double)(rand()%10-5)/100.0;
h = (double)(rand()%10-5)/100.0;
l = (double)(rand()%10-5)/100.0;
n = (double)(rand()%10-5)/100.0;
o = (double)(rand()%10-5)/100.0;
p = (double)(rand()%10-5)/100.0;
// First, calculate the test data on our grid, along with the comparison:
calculate_test_data(&params,xx,auxevol_gfs,evol_gfs);
calculate_test_data_exact(&params,xx,auxevol_exact_gfs,evol_exact_gfs);
// Run the BCs driver on the test data to fill in the ghost zones:
apply_bcs_potential(&params,evol_gfs);
apply_bcs_velocity(&params,auxevol_gfs);
/*char filename[100];
sprintf(filename,"out%d-numer.txt",Nxx0);
FILE *out2D = fopen(filename, "w");
for(int i2=0;i2<Nxx_plus_2NGHOSTS2;i2++) for(int i1=0;i1<Nxx_plus_2NGHOSTS1;i1++) for(int i0=0;i0<Nxx_plus_2NGHOSTS0;i0++) {
// We print the difference between approximate and exact numbers.
fprintf(out2D,"%.16e\t %e %e %e\n",
//auxevol_gfs[IDX4S(VALENCIAVU2GF,i0,i1,i2)]-auxevol_exact_gfs[IDX4S(VALENCIAVU2GF,i0,i1,i2)],
evol_gfs[IDX4S(AD2GF,i0,i1,i2)]-evol_exact_gfs[IDX4S(AD2GF,i0,i1,i2)],
xx[0][i0],xx[1][i1],xx[2][i2]
);
}
fclose(out2D);*/
int all_agree = 1;
for(int i0=0;i0<Nxx_plus_2NGHOSTS0;i0++){
for(int i1=0;i1<Nxx_plus_2NGHOSTS1;i1++){
for(int i2=0;i2<Nxx_plus_2NGHOSTS2;i2++){
if(SDA(evol_gfs[IDX4S(AD0GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD0GF, i0,i1,i2)])<10.0){
printf("Quantity AD0 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(evol_gfs[IDX4S(AD0GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD0GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(evol_gfs[IDX4S(AD1GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD1GF, i0,i1,i2)])<10.0){
printf("Quantity AD1 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(evol_gfs[IDX4S(AD1GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD1GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(evol_gfs[IDX4S(AD2GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD2GF, i0,i1,i2)])<10.0){
printf("Quantity AD2 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(evol_gfs[IDX4S(AD2GF, i0,i1,i2)],evol_exact_gfs[IDX4S(AD2GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(auxevol_gfs[IDX4S(VALENCIAVU0GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU0GF, i0,i1,i2)])<10.0){
printf("Quantity ValenciavU0 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(auxevol_gfs[IDX4S(VALENCIAVU0GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU0GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(auxevol_gfs[IDX4S(VALENCIAVU1GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU1GF, i0,i1,i2)])<10.0){
printf("Quantity ValenciavU1 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(auxevol_gfs[IDX4S(VALENCIAVU1GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU1GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(auxevol_gfs[IDX4S(VALENCIAVU2GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU2GF, i0,i1,i2)])<10.0){
printf("Quantity ValenciavU2 only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
SDA(auxevol_gfs[IDX4S(VALENCIAVU2GF, i0,i1,i2)],auxevol_exact_gfs[IDX4S(VALENCIAVU2GF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
if(SDA(evol_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)],evol_exact_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)])<10.0){
printf("psi6Phi = %.15e,%.15e\n",evol_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)],evol_exact_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)]);
//printf("Quantity psi6Phi only agrees with the original GiRaFFE to %.2f digits at i0,i1,i2=%d,%d,%d!\n",
// SDA(evol_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)],evol_exact_gfs[IDX4S(PSI6PHIGF, i0,i1,i2)]),i0,i1,i2);
all_agree=0;
}
}
}
}
if(all_agree) printf("All quantities agree at all points!\n");
}
# -
# <a id='compile_run'></a>
#
# ## Step 2.a: Compile and run the code \[Back to [top](#toc)\]
# $$\label{compile_run}$$
#
# Now that we have our file, we can compile it and run the executable.
# +
import time
results_file = "out.txt"
print("Now compiling, should take ~2 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"BCs_unit_test.c"), os.path.join(outdir,"BCs_unit_test"))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
# Change to output directory
os.chdir(outdir)
print("Now running...\n")
start = time.time()
cmd.Execute(os.path.join("BCs_unit_test"),"2 2 2",file_to_redirect_stdout=results_file)
# To do a convergence test, we'll also need a second grid with twice the resolution.
# cmd.Execute(os.path.join("Validation","BCs_unit_test"),"9 9 9",file_to_redirect_stdout=os.path.join(out_dir,results_file))
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
# -
# Now, we will interpret our output and verify that we produced the correct results.
with open(results_file,"r") as file:
output = file.readline()
print(output)
if output!="All quantities agree at all points!\n": # If this isn't the first line of this file, something went wrong!
sys.exit(1)
# <a id='latex_pdf_output'></a>
#
# # Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs.pdf](Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
# +
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
# Change to NRPy directory
os.chdir("../../../")
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs",location_of_template_file=os.path.join(".."))
| in_progress/Tutorial-Start_to_Finish_UnitTest-GiRaFFE_NRPy-BCs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
# +
# You can get the dataset at - https://www.kaggle.com/tmdb/tmdb-movie-metadata
df1=pd.read_csv('tmdb_5000_credits.csv')
df2=pd.read_csv('tmdb_5000_movies.csv')
df = df2.merge(df1)
df.head()
# -
df['overview'][0:10]
# +
# creating tf-idf vectorizer object
tf = TfidfVectorizer(analyzer='word',ngram_range=(1, 2),min_df=0, stop_words='english')
df['overview'] = df['overview'].fillna('')
tfidf_matrix = tf.fit_transform(df['overview'])
tfidf_matrix.shape
# +
# calculating cosine similarity between the documents
cos = linear_kernel(tfidf_matrix, tfidf_matrix)
# cos
# -
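# Using `linear_kernel` here is a deliberate shortcut: `TfidfVectorizer` L2-normalizes each row by default (`norm='l2'`), so the plain dot product of two tf-idf rows already equals their cosine similarity. A numpy-only sketch of that identity:

```python
import numpy as np

# For L2-normalized rows, dot product == cosine similarity.
rows = np.array([[1.0, 2.0, 0.0],
                 [2.0, 4.0, 0.0],
                 [0.0, 0.0, 3.0]])
unit = rows / np.linalg.norm(rows, axis=1, keepdims=True)
cos_sim = unit @ unit.T  # what linear_kernel computes on tf-idf rows
assert abs(cos_sim[0, 1] - 1.0) < 1e-12  # parallel rows -> similarity 1
assert abs(cos_sim[0, 2]) < 1e-12        # orthogonal rows -> similarity 0
```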
indices = pd.Series(df2.index, index=df2['title']).drop_duplicates() # map from movie title to row index
def get_recommendations(title, cosine_sim = cos):
idx = indices[title]
similarity_scores = list(enumerate(cosine_sim[idx]))
# print(sim_scores)
similarity_scores = sorted(similarity_scores, key = lambda x: x[1], reverse = True)
similarity_scores = similarity_scores[1:21] # ignore first because it is the same movie
movie_indices = [i[0] for i in similarity_scores]
movies = [x for x in df['title'].iloc[movie_indices]]
return movies
get_recommendations('The Dark Knight')
get_recommendations('Avatar')
get_recommendations('Spectre')
| ml/Recommender System/Content based filtering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # # numpy?
# ---
# - numerical python
# - complex matrix and numerical operations, made fast and simple!
# -
# # In this tutorial
#
# - Introduction : installing numpy and comparing plain Python with numpy.
# - array : numpy's basic data structure, the `array`.
# - array creation : the ways an `array` can be created.
# - dtype : the `dtype` object that describes the data type of an array.
# - basic operations : basic arithmetic and built-in unary operations.
# - Indexing, Slicing and Iterating : indexing, slicing, and iterating (traversal).
# - methods : other useful built-in functions.
# # Introduction
# + [markdown] slideshow={"slide_type": "slide"}
# ## installation
# + slideshow={"slide_type": "fragment"}
import numpy as np
# + [markdown] slideshow={"slide_type": "slide"}
# ### python
# + slideshow={"slide_type": "fragment"}
size = 10000000 # '10,000,000'
# + slideshow={"slide_type": "fragment"}
l, m = list(range(size)), list(range(size-1, -1, -1))
# + slideshow={"slide_type": "fragment"}
# %%timeit
[x+y for x, y in zip(l, m)]
# + slideshow={"slide_type": "fragment"}
# %%timeit
[x * 3 for x in l]
# + [markdown] slideshow={"slide_type": "slide"}
# ### numpy
# + slideshow={"slide_type": "fragment"}
a, b = np.arange(size), np.arange(size-1, -1, -1)
# + slideshow={"slide_type": "fragment"}
# %%timeit
a + b
# + slideshow={"slide_type": "fragment"}
# %%timeit
a * 3
# + [markdown] slideshow={"slide_type": "slide"}
# # array
# ---
# numpy's main object is the `homogeneous multidimensional array`
# → a `table of elements, all of the same type`
# + [markdown] slideshow={"slide_type": "fragment"}
# numpy's array class is called `ndarray`, and is commonly referred to by its alias `array`.
# + slideshow={"slide_type": "fragment"}
print(type(np.array([1, 2])))
# + [markdown] slideshow={"slide_type": "slide"}
# In numpy, dimensions are called `axes`.
# + slideshow={"slide_type": "fragment"}
l = [[1, 2, 3, 4], [5, 6, 7, 8]]
a = np.array(l)
print(a)
# + [markdown] slideshow={"slide_type": "slide"}
# The `shape` attribute describes the dimensions of an `array`.
# Each number in the tuple is the size of the corresponding dimension.
# For example, a matrix with *n* rows and *m* columns has `shape` `(n, m)`.
# + slideshow={"slide_type": "fragment"}
print(a.shape)
# + [markdown] slideshow={"slide_type": "slide"}
# # array creation
# ---
#
# - pass a Python list or tuple (array_like) to the `array` function
# - use the `arange` function
# - use `zeros` or `ones`
# + slideshow={"slide_type": "fragment"}
np.array([1, 2])
# + slideshow={"slide_type": "fragment"}
np.arange(5)
# + slideshow={"slide_type": "slide"}
np.zeros((5, 5))
# + slideshow={"slide_type": "fragment"}
np.ones((3, 6))
# + [markdown] slideshow={"slide_type": "slide"}
# ### python list
#
# A Python list can hold several different data types in a single list.
# + slideshow={"slide_type": "fragment"}
l = [1, "2"]
print([type(each) for each in l])
# + [markdown] slideshow={"slide_type": "slide"}
# ### numpy array
#
# In contrast, a numpy array can hold only a single data type.
# + slideshow={"slide_type": "fragment"}
a = np.array([1, "2"])
print([type(each) for each in a])
# + [markdown] slideshow={"slide_type": "slide"}
# # dtype
# ---
#
# An object describing the data type of the elements in an `array`.
#
# reference: https://numpy.org/doc/stable/reference/arrays.dtypes.html
# + slideshow={"slide_type": "fragment"}
a = np.arange(5)
print(a.dtype.name)
# + slideshow={"slide_type": "fragment"}
a = np.array(["a", "b", "c", "문", "자", "열"])
print(a.dtype.name)
# + [markdown] slideshow={"slide_type": "slide"}
# # basic operations
# ---
#
# Many arithmetic operations on matrices can be expressed with the basic arithmetic operators (+, -, *, /, etc.).
# Unary operations, such as the sum of all elements of an `array`, are also implemented on the `ndarray` class and are convenient to use.
# + slideshow={"slide_type": "slide"}
a = np.array([[1, 1], [0, 1]])
b = np.array([[2, 0], [3, 4]])
# + slideshow={"slide_type": "fragment"}
print(a + b)
# + slideshow={"slide_type": "fragment"}
print(a * b)
# -
# 
# + slideshow={"slide_type": "fragment"}
print(a @ b)
# + slideshow={"slide_type": "slide"}
print(a.sum()) # 1 + 1 + 0 + 1 = 3
# + slideshow={"slide_type": "fragment"}
print(b.mean()) # (2 + 0 + 3 + 4) / 4 = 2.25
# + [markdown] slideshow={"slide_type": "slide"}
# ### axis
#
# By specifying the `axis` argument, an operation can be applied along a particular axis of the `array`.
#
# - `axis=None`: over all elements
# - `axis=0`: along rows (one result per column)
# - `axis=1`: along columns (one result per row)
# + slideshow={"slide_type": "slide"}
a = np.array([[1, 2], [3, 4]])
print(a)
# + slideshow={"slide_type": "fragment"}
a.sum() # sum of all elements (= a.sum(axis=None))
# + slideshow={"slide_type": "fragment"}
a.sum(axis=0) # sum over each column
# + slideshow={"slide_type": "fragment"}
a.sum(axis=1) # sum over each row
# + [markdown] slideshow={"slide_type": "slide"}
# # Indexing, Slicing and Iterating
# ---
#
# A one-dimensional `array` can be indexed, sliced, and iterated over just like a regular Python list.
# + slideshow={"slide_type": "slide"}
a = np.arange(10) ** 2
print(a)
# + slideshow={"slide_type": "fragment"}
a[2]
# + slideshow={"slide_type": "fragment"}
a[:2]
# + slideshow={"slide_type": "fragment"}
a[:-5]
# + slideshow={"slide_type": "fragment"}
a[::-1]
# + slideshow={"slide_type": "slide"}
for i in a:
print(i**(1/2))
# + [markdown] slideshow={"slide_type": "slide"}
# A multidimensional `array` has one index per axis.
# These indices are given as a comma-separated tuple.
# + slideshow={"slide_type": "slide"}
b = np.arange(12).reshape(3, 4)
print(b)
# + slideshow={"slide_type": "fragment"}
b[2, 3]
# + slideshow={"slide_type": "fragment"}
b[1:,1:]
# + slideshow={"slide_type": "fragment"}
b[-1]
# + [markdown] slideshow={"slide_type": "slide"}
# Iterating over a multidimensional `array` proceeds along the first axis.
# + slideshow={"slide_type": "fragment"}
for index, row in enumerate(b):
print(f"{index}: {row}")
# + [markdown] slideshow={"slide_type": "slide"}
# # methods
# ---
#
# - array creations: `arange`, `linspace`, ...
# - manipulations: `reshape`, `transpose`, ...
# - questions: `all`, `any`, `where`, ...
# - ordering: `argmax`, `searchsorted`, ...
# - basic statistics: `std`, ...
# + [markdown] slideshow={"slide_type": "slide"}
# `arange`: returns an `array` of values from `start` to `stop` with the given `step` as the interval
# + slideshow={"slide_type": "fragment"}
np.arange(0, 15, step=1)
# + [markdown] slideshow={"slide_type": "slide"}
# `linspace`: returns an `array` of `num` evenly spaced values from `start` to `stop`
# + slideshow={"slide_type": "fragment"}
np.linspace(0, 15, num=10)
# + [markdown] slideshow={"slide_type": "slide"}
# `reshape`: returns a new `array` with the given `shape`, containing the same data
# + slideshow={"slide_type": "fragment"}
a = np.arange(10)
print(a)
# + slideshow={"slide_type": "fragment"}
a.reshape(2, 5)
# + [markdown] slideshow={"slide_type": "slide"}
# `transpose`: reorders the axes of an `array`. For a matrix (an `array` with two axes), this returns the transpose (i.e., rows and columns are swapped)
# + slideshow={"slide_type": "fragment"}
a = np.arange(10).reshape(2, 5)
print(a)
# + slideshow={"slide_type": "fragment"}
np.transpose(a) # == a.T
# + [markdown] slideshow={"slide_type": "slide"}
# `all`: returns whether all values of the given `array` are `True`
# + slideshow={"slide_type": "fragment"}
a = np.arange(10)
print(a)
# + slideshow={"slide_type": "fragment"}
np.all(a < 10)
# + slideshow={"slide_type": "fragment"}
np.all(a < 5)
# + [markdown] slideshow={"slide_type": "slide"}
# `any`: returns whether any value of the given `array` is `True`
# + slideshow={"slide_type": "fragment"}
a = np.arange(10)
print(a)
# + slideshow={"slide_type": "fragment"}
np.any(a==1)
# + slideshow={"slide_type": "fragment"}
np.any(a==-1)
# + [markdown] slideshow={"slide_type": "slide"}
# `where`: returns `x` or `y` depending on the `condition`; equivalent to the expression below
#
# ```python
# [x if c else y for c, x, y in zip(condition, xs, ys)]
# ```
# + slideshow={"slide_type": "fragment"}
a = np.arange(10)
print(a)
# + slideshow={"slide_type": "fragment"}
np.where(a<5, a, 10*a)
# + [markdown] slideshow={"slide_type": "slide"}
# `argmax`: returns the index of the element with the largest value along the given axis
# + slideshow={"slide_type": "fragment"}
a = np.array([10, 20, 30, 20, 10])
print(a)
# + slideshow={"slide_type": "fragment"}
np.argmax(a) # the element at index 2 (value 30) is the largest
# + [markdown] slideshow={"slide_type": "slide"}
# `searchsorted`: performs a `binary search` to quickly locate a value in a sorted `array`
# + slideshow={"slide_type": "fragment"}
a = np.array(range(100)) ** 2
print(a)
# + slideshow={"slide_type": "fragment"}
np.searchsorted(a, 1600) # the value 1600 is located at index 40
# + [markdown] slideshow={"slide_type": "slide"}
# `std`: returns the standard deviation of an `array` along the given axis
# + slideshow={"slide_type": "fragment"}
np.array([1, 2, 3, 4]).std()
# + slideshow={"slide_type": "fragment"}
np.std(np.array([1, 2, 3, 4]))
| numpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.4
# language: julia
# name: julia-0.6
# ---
# +
## iop_online
# <NAME>
# <EMAIL>
# All rights reserved 2017-03-17
# function
@everywhere using JuMP, Gurobi, Distributions, Clustering, Distances, CSV
@everywhere function f(Q,c,r,x)
1/2*r*vecdot(x, Q*x) - vecdot(c,x)
end
@everywhere function GenerateData(n_f,Sigma_f,c,Af,b,T,ra)
x = SharedArray{Float64,2}((n_f,T));
@parallel (+) for i = 1:T
genData = Model( solver=GurobiSolver(OutputFlag=0, MIPGap = 1e-6,TimeLimit = 600 ) )
@variable(genData, wf[1:n_f])
@objective(genData, Min, f(Sigma_f,c,ra,wf))
@constraint(genData, Af*wf .>= b )
solve(genData)
x[:,i] = getvalue(wf)
end
return x
end
@everywhere function GenerateData2(n,Q,c,A,b,r)
genData = Model( solver=GurobiSolver( OutputFlag=0 ) )
@variable(genData, y[1:n] )
@objective(genData, Min, f(Q,c,r,y) )
@constraint(genData, A*y .>= b)
solve(genData)
return getvalue(y)
end
@everywhere function qp_r_online_ver2(m,nf,M,eta,Sigma_f,c,Af,b,wf,ra)
problem = Model(solver=GurobiSolver(OutputFlag = 0, MIPGap = 1e-6, TimeLimit = 600) )
@variable(problem, 0.000001<= ra_t<= 100) # risk aversion value at time t
@variable(problem, x[1:nf])
# @variable(problem, 0<=x[1:n]<=1 )
@variable(problem, u[1:m] >= 0 )
@variable(problem, z[1:m], Bin )
@objective(problem, Min, vecdot(ra - ra_t,ra - ra_t)/2 + eta*vecdot(wf - x,wf - x) )
@constraint(problem, Af*x .>= b )
@constraint(problem, u .<= M*z )
@constraint(problem, Af*x - b .<= M*(1-z))
@constraint(problem, Sigma_f*x - ra_t*c - Af'*u .== zeros(nf) )
solve(problem)
return getvalue(ra_t), getvalue(x)
end
@everywhere function qp_r_online_ver3(m,nf,M,eta,Sigma_f,c,Af,b,wf,ra)
problem = Model(solver=GurobiSolver(OutputFlag = 0, MIPGap = 1e-6, TimeLimit = 3000) )
@variable(problem, 0.0001<= ra_t<= 100) # risk aversion value at time t
@variable(problem, x[1:nf])
# @variable(problem, 0<=x[1:n]<=1 )
@variable(problem, u[1:m] >= 0 )
@variable(problem, z[1:m], Bin )
#@variable(problem, e<=0.0001)
@objective(problem, Min, vecdot(ra - ra_t,ra - ra_t)/2 + eta*vecdot(wf - x,wf - x))
@constraint(problem, Af*x .>= b)
@constraint(problem, u .<= M*z )
@constraint(problem, Af*x - b .<= M*(1-z))
@constraint(problem, Sigma_f*x - ra_t*c - Af'*u .== zeros(nf) )
solve(problem)
return getvalue(ra_t), getvalue(x)
end
@everywhere function qp_r_online_ver4(m,nf,M,eta,Sigma_f,c,Af,b,x,ra)
problem = Model(solver=GurobiSolver(OutputFlag = 0, MIPGap = 1e-6, TimeLimit = 300) )
@variable(problem, 0.000001<= ra_t<= 100) # risk aversion value at time t
# @variable(problem, 0<=x[1:n]<=1 )
@variable(problem, u[1:m] >= 0 )
@variable(problem, z[1:m], Bin )
@objective(problem, Min, vecdot(ra - ra_t,ra - ra_t)/2)
@constraint(problem, Af*x .>= b )
@constraint(problem, u .<= M*z )
# @constraint(problem, Af*x - b .<= M*(1-z))
@constraint(problem, ra_t*Sigma_f*x - c - Af'*u .== zeros(nf) )
solve(problem)
return getvalue(ra_t)
end
@everywhere function qp_r_online_ver5(m,lambda,Q,c,A,b,x,r)
problem = Model(solver=GurobiSolver(OutputFlag = 0, MIPGap = 1e-3, TimeLimit = 300, InfUnbdInfo=1) )
@variable(problem, -10 <= r_t <= 10)
@variable(problem, u[1:m] >= 0)
@variable(problem, t[1:m],Bin)
@variable(problem, eta1)
@objective(problem, Min, eta1/2 )
@constraint(problem, norm([2*(r - r_t); eta1-1]) <= eta1 + 1 )
#@constraint(problem, A*x .>= b )
@constraint(problem, u.<= M*t )
#@constraint(problem, A*x - b .<= M*(1-t) )
@constraint(problem, r_t*Q*x - c - A'*u .== zeros(m) )
solve(problem)
return getvalue(r_t)
end
# @everywhere function qp_r_online_ver5(n,m,M,Q,c,A,b,x,r)
# problem = Model(solver=GurobiSolver(OutputFlag = 0, MIPGap = 1e-6, TimeLimit = 300) )
# @variable(problem, 0 <= r_t <= 10)
# @variable(problem, u[1:m] >= 0)
# @variable(problem, t[1:m],Bin)
# @variable(problem, eta1)
# @objective(problem, Min, eta1/2 )
# # @constraint(problem, norm([2*(r - r_t); eta1-1]) <= eta1 + 1 )
# # @constraint(problem, A*x .>= b )
# # @constraint(problem, u.<= M*t )
# # @constraint(problem, A*x - b .<= M*(1-t) )
# # @constraint(problem, r_t*Q*x - c - A'*u .== zeros(n) )
# solve(problem)
# return getvalue(r_t)
# end
# @everywhere function qp_r_online_ver2(m,nf,M,eta,Sigma_f,c,Af,b,wf,ra)
# problem = Model(solver=GurobiSolver(OutputFlag = 0, MIPGap = 1e-6, TimeLimit = 300) )
# @variable(problem, 0.000001<= ra_t<= 100) # risk aversion value at time t
# @variable(problem, x[1:nf])
# # @variable(problem, 0<=x[1:n]<=1 )
# @variable(problem, u[1:m] >= 0 )
# @variable(problem, z[1:m], Bin )
# @objective(problem, Min, vecdot(ra - ra_t,ra - ra_t)/2 + eta*vecdot(wf - x,wf - x) )
# @constraint(problem, Af*x .>= b )
# @constraint(problem, u .<= M*z )
# @constraint(problem, Af*x - b .<= M*(1-z))
# @constraint(problem, Sigma_f*x - ra_t.*c - Af'*u .== zeros(nf) )
# solve(problem)
# return getvalue(ra_t), getvalue(x)
# end
@everywhere function qp_r_online_ver2(m,nf,M,eta,Sigma_f,c,Af,b,wf,ra)
problem = Model(solver=GurobiSolver(OutputFlag = 0, MIPGap = 1e-6, TimeLimit = 300) )
@variable(problem, 0.000001<= ra_t<= 100) # risk aversion value at time t
@variable(problem, x[1:nf])
# @variable(problem, 0<=x[1:n]<=1 )
@variable(problem, u[1:m] >= 0 )
@variable(problem, z[1:m], Bin )
@objective(problem, Min, vecdot(ra - ra_t,ra - ra_t)/2 + eta*vecdot(wf - x,wf - x) )
@constraint(problem, Af*x .>= b )
@constraint(problem, u .<= M*z )
@constraint(problem, Af*x - b .<= M*(1-z))
@constraint(problem, Sigma_f*x - ra_t*c - Af'*u .== zeros(nf) )
solve(problem)
return getvalue(ra_t), getvalue(x)
end
@everywhere function decompose(A_return, num_comp)
# spectral decompositon of the covariance matrix
cov_mat = cov(A_return);
Q = cov_mat;
factor = eigfact(Q);
eigvalues = factor[:values];
eigvectors = factor[:vectors];
large_eigs = eigvalues[length(eigvalues)-num_comp+1:length(eigvalues)];
F = eigvectors[:,length(eigvalues)-num_comp+1:length(eigvalues)];
Sigma_f = Diagonal(large_eigs);
return F, Sigma_f
end
# +
@everywhere function compute_c(A_return)
# A_return: matrix of size (num of time steps, num of assets)
num_asset = size(A_return)[2];
c = SharedArray{Float64,1}(num_asset);
for ind_asset = 1:num_asset
A_vec = A_return[:, ind_asset];
c[ind_asset] = mean(A_vec);
end
return c
end
@everywhere function compute_c_max(A_return)
# A_return: matrix of size (num of time steps, num of assets)
num_asset = size(A_return)[2];
c = SharedArray{Float64,1}(num_asset);
for ind_asset = 1:num_asset
A_vec = A_return[:, ind_asset];
nonzeroInd = find(x->x>=0, A_vec);
non_zeroA_vec = A_vec[nonzeroInd];
c[ind_asset] = median(non_zeroA_vec);
end
return c
end
# +
using CSV, DataFrames, DataFramesMeta, Base.Dates
@everywhere using JuMP, Gurobi, Distributions, Clustering, Distances, Plots, JLD
fund_name="FDCAX";
#hyper-parameters
lambda=10;
M=10;
X_obs = CSV.read("../data/mf/" * string(fund_name) * "_sector_yearly/" * string(fund_name) * "_X_new.txt", delim=",", datarow=1);
X_obs = Matrix(X_obs);
X_obs = X_obs[:,1:end]
Xobsmat = Matrix(X_obs);
Xobsmat = transpose(Xobsmat);
universe_results = Vector{Vector{Float64}}();
universe_results_ratio = Vector{Vector{Float64}}();
for sind=1:2:11 # adjustment length
all_results = Vector{Vector{Float64}}();
all_results2 = Vector{Vector{Float64}}();
Time = Float64[];
for ind=20:42
A_return = CSV.read("../data/mf/" * string(fund_name) * "_sector_yearly/" * string(fund_name) * "_A_" * string(ind) * ".txt", datarow=1);
A_return = Matrix(A_return);
#println(size(A_return));
#sind=size(A_return)[1]-lookback;
A_return = A_return[sind:size(A_return)[1], :];
#A_return = A_return/12;
#A_return = remove_extreme_negative_return(A_return);
#A_return = A_return./100;
#c = maximum(A_return,2);
ind_results = Vector{Float64}();
ra =1;
#c = compute_c(A_return);
c = compute_c_max(A_return);
c[isnan.(c)] .= 0;
n=11;
Q = cov(A_return);
xx_obs = X_obs[:, 1:ind];
est_return = Vector{Float64}();
est_return2 = Vector{Float64}();
#println(size(xx_obs)[2]);
tic();
for t = 1:size(xx_obs)[2]
#println(ind,ra,t);
y = xx_obs[:,t];
y = y/sum(y);
#A = -[eye(n);ones(1,n);-ones(1,n);-eye(n)];
#A_simple= [c';eye(n)]
A_complex=-[eye(n);ones(1,n);-ones(1,n);-eye(n)]
#b = -[ones(n,1);1;-1;zeros(n,1)];
#b_simple = zeros(n+1,1);
b_complex= -[ones(n,1);1;-1;zeros(n,1)];
(m,n) = size(A_complex);
eta = lambda*t^(-1/2);
#print(ra, y,c)
ra, x = qp_r_online_ver2(m,n,M,eta,Q,c,A_complex,b_complex,y,ra);
push!(est_return, ra);
push!(est_return2, transpose(y)*Q*y./(transpose(c)*y));
end
push!(all_results, est_return);
push!(all_results2, est_return2);
t = toc();
Time = push!(Time, t);
end
temp_result = Vector{Float64}();
temp_result2 = Vector{Float64}();
for i=1:23
push!(temp_result, all_results[i][end]);
push!(temp_result2, all_results2[i][end]);
end
push!(universe_results, temp_result);
push!(universe_results_ratio, temp_result2);
end
# -
# print out the learned risk preferences
for i=1:23 # loop over learning point
for rep=1:5 # loop over adjustment length
print(universe_results[rep][i]," " )
end
println()
end
#save("small_mean.jld", "universe_results", universe_results)
# print out the inverse Sharpe ratio
for i=1:23 # loop over learning point
for rep=1:5 # loop over adjustment length
print(universe_results_ratio[rep][i]," " )
end
println()
end
# # Factor Space
A_return
# +
universe_results = Vector{Vector{Float64}}();
universe_results_ratio = Vector{Vector{Float64}}();
for sind=1:2:10
all_results = Vector{Vector{Float64}}();
all_results2 = Vector{Vector{Float64}}();
Time = Float64[];
for ind = 20:40
M=100;
lambda=100;
ra_guess=5; # initial guess value
k=5; # number of eigenvectors
e=0.01;
fund_name="VFINX";
X_obs = CSV.read("../data/mf/all_factor_yearly_old/" * string(fund_name) * "_X.txt", datarow=1);
Xobsmat = Matrix(X_obs);
Xobsmat = transpose(Xobsmat);
A_return = CSV.read("../data/mf/all_factor_yearly_old/all_assets_A_"*string(ind)*".txt", datarow=1);
A_return = A_return[:,sind:size(A_return)[2]];
print(size(A_return))
A_return = Matrix(A_return);
#ca = compute_c_max(transpose(A_return));
ca = compute_c(transpose(A_return));
#print(size(ca))
A_return = transpose(A_return);
A_return[isnan.(A_return)] .= 0;
F, Sigma_f = decompose(A_return, k);
#print(size(F));
#print(size(ca));
caf = F'*ca;
#print(size(caf));
#print(size(F));
n = size(A_return,2);
A = -[eye(n);ones(1,n);-ones(1,n);-eye(n)];
#print(size(A));
Af = A*F;
bf = -[ones(n,1);1;-1;e*ones(n,1)];
mf = length(bf);
est_return = Vector{Float64}();
est_return2 = Vector{Float64}();
X_obs_proj = F'*transpose(Xobsmat);
X_obs_proj = transpose(X_obs_proj);
tic();
for t = 1:ind
x_recon = X_obs_proj[t,:];
#x_recon = x_recon./sum(x_recon);
risk = x_recon'*Sigma_f*x_recon;
profit = x_recon'*caf;
aversion=risk./profit;
#print(Af)
#print(Sigma_f)
ra_guess, x2 = qp_r_online_ver3(mf,k,M,lambda*t^(-1/2),Sigma_f,caf,Af,bf,x_recon,ra_guess);
push!(est_return, ra_guess);
push!(est_return2, aversion);
end
push!(all_results, est_return);
push!(all_results2, est_return2);
t = toc();
Time = push!(Time, t);
end
temp_result = Vector{Float64}();
temp_result2 = Vector{Float64}();
for i=1:24
push!(temp_result, all_results[i][end]);
push!(temp_result2, all_results2[i][end]);
end
push!(universe_results, temp_result);
push!(universe_results_ratio, temp_result2);
end
# -
Sigma_f
# print out the learned risk preferences
for i=1:21 # loop over learning point
for rep=1:5 # loop over adjustment length
print(universe_results[rep][i]," " )
end
println()
end
for i=1:21
println(universe_results_ratio[5][i])
end
| scripts/ipo_mf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import pandas
import time
import os
import csv
from functools import reduce
# +
inputDir = '..//scraped_json//'
outputDir = '..//summarized_json//'
## Load the id config
idConfig = json.loads(open('..//config//idConfig.json', "r").read())
## Load the source file dict
sourceFiles = json.loads(open('..//config//sourceFiles.json', "r").read())
ncaaData = json.loads(open(inputDir + sourceFiles['ncaa'][0], "r", encoding="utf-8").read())
# -
## Clean up dirty names
def mungeID(playerString):
    ## lowercase and apply the suffix replacements *before* stripping punctuation,
    ## so that patterns like "jr." and "st." can still match (stripping first would
    ## remove the dots and make the replacements dead code)
    cleaned = playerString.lower().replace("jr.", "").replace("st.", "state")
    return ''.join(e for e in cleaned if e.isalnum())
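## A standalone sketch of the same normalization idea (hypothetical helper shown
## only for illustration; the pipeline below uses mungeID, not this function):

```python
def normalize_id(player_string):
    # lowercase and expand/strip suffixes before removing punctuation,
    # so patterns like "jr." and "st." can still match
    s = player_string.lower().replace("jr.", "").replace("st.", "state")
    return ''.join(ch for ch in s if ch.isalnum())

print(normalize_id("Mike Smith Jr."))  # -> mikesmith
print(normalize_id("St. John's"))      # -> statejohns
```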
def createNewID (fieldList, thisDict, fieldAgg):
finalID= ''
for i in thisDict:
for idx, val in enumerate(fieldList):
if (type(i[val]) is list):
i[val]= mungeID(i[val][0])
if (len(fieldList) -1 == idx):
finalID += str(i[val]).strip('[]').strip("''")
elif (type(val) is not list):
i[val] = mungeID(i[val])
if (len(fieldList) - 1 == idx):
finalID += i[val]
else:
finalID = i[val] + fieldAgg
i['ID'] = finalID
finalID=''
# +
def searchID(identifier, dataList):
return [element for element in dataList if (element['ID'] == identifier)]
def search(name, team, dataList):
return [element for element in dataList if (element['name'] == name and element['team'] == team)]
# -
createNewID(idConfig['ncaa'], ncaaData, '_')
finalOutput = []
for x in ncaaData:
if (len(searchID(x['ID'], finalOutput)) == 0):
playerList = search(x['name'], x['team'], ncaaData)
finalPlayer = {}
finalPlayer['ID'] = x['ID']
finalPlayer['name'] = x['name']
finalPlayer['team'] = x['team']
gamesPlayed = 0
gamesStarted = 0
year = 2021
for y in playerList:
gamesPlayed = gamesPlayed + int(y['gamesPlayed'])
gamesStarted = gamesStarted + int(y['gamesStarted'])
if (int(y['year']) < int(year)):
year = y['year']
finalPlayer['gamesPlayed'] = gamesPlayed
finalPlayer['gamesStarted'] = gamesStarted
finalPlayer['year'] = year
finalOutput.append(finalPlayer)
##Delete the columns I no longer care about after creating the ID
##This list will need to expand after setting up blockers and truly linking
for record in finalOutput:
record['ncaa_gamesPlayed'] = record['gamesPlayed']
record['ncaa_gamesStarted'] = record['gamesStarted']
del record['gamesPlayed']
del record['gamesStarted']
del record['year']
del record['name']
del record['team']
if ('position' in record.keys()):
del record['position']
print(finalOutput)
with open(outputDir + "ncaa.json", "w", encoding="utf-8") as write_file:
write_file.write(json.dumps(finalOutput))
| j_notebooks/Summarize.NCAA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import torch
print(torch.__version__)
# -
# # 2.3 Automatic Gradient Computation
# ## 2.3.1 Concepts
# The `Tensor` introduced in the previous section is the core class of this package. If you set its attribute `.requires_grad` to `True`, it starts tracking all operations performed on it. Once the computation is finished, call `.backward()` to compute all gradients automatically; the gradient of this `Tensor` accumulates into its `.grad` attribute.
# > Note that when calling `.backward()`, no argument is needed if the `Tensor` is a scalar; otherwise, a tensor to differentiate against must be supplied.
#
# If you do not want a tensor to be tracked any further, call `.detach()` to cut it out of the recorded graph, so that future computations on it are not tracked. Alternatively, wrap the code you do not want tracked in a `with torch.no_grad():` block. This is common when evaluating a model, because at evaluation time we do not need gradients for the trainable parameters (those with `requires_grad=True`).
#
# `Function` is another important class. `Tensor` and `Function` together build an acyclic graph that records the entire computation history. Each `Tensor` has a `.grad_fn` attribute referring to the `Function` that created it (`grad_fn` is `None` for tensors created directly by the user).
#
# Let us work through some examples to understand these concepts.
#
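# A small sketch of `.detach()`, which the examples below do not otherwise demonstrate (assumes PyTorch is installed):

```python
import torch

x = torch.ones(2, requires_grad=True)
y = (x * 2).detach()    # y shares data with x*2 but is cut from the graph
print(y.requires_grad)  # False: no gradient will flow through y
z = (x * y).sum()       # y now acts as a constant in the graph
z.backward()
print(x.grad)           # dz/dx = y, i.e. tensor([2., 2.])
```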
# ## 2.3.2 `Tensor`
x = torch.ones(2, 2, requires_grad=True)
print(x)
print(x.grad_fn)
y = x + 2
print(y)
print(y.grad_fn)
# Note that x was created directly, so it has no `grad_fn`, while y was created by an addition operation, so it has a `grad_fn` of type `<AddBackward>`.
print(x.is_leaf, y.is_leaf)
z = y * y * 3
out = z.mean()
print(z, out)
# Use `.requires_grad_()` to change the `requires_grad` attribute in place:
a = torch.randn(2, 2) # requires_grad defaults to False
a = ((a * 3) / (a - 1))
print(a.requires_grad) # False
a.requires_grad_(True)
print(a.requires_grad) # True
b = (a * a).sum()
print(b.grad_fn)
# ## 2.3.3 Gradients
#
# Since `out` is a scalar, no argument is needed when calling `backward()`:
out.backward() # equivalent to out.backward(torch.tensor(1.))
print(x.grad)
# Write `out` as $o$. Since
# $$
# o=\frac14\sum_{i=1}^4z_i=\frac14\sum_{i=1}^43(x_i+2)^2
# $$
# we have
# $$
# \frac{\partial{o}}{\partial{x_i}}\bigr\rvert_{x_i=1}=\frac{9}{2}=4.5
# $$
# so the output above is correct.
# Mathematically, for a vector-valued function of a vector argument $\vec{y}=f(\vec{x})$, the gradient of $\vec{y}$ with respect to $\vec{x}$ is the Jacobian matrix:
#
# $$
# J=\left(\begin{array}{ccc}
# \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\
# \vdots & \ddots & \vdots\\
# \frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
# \end{array}\right)
# $$
#
# The ``torch.autograd`` package computes products with such Jacobian matrices. For example, if $v$ is the gradient of a scalar function $l=g\left(\vec{y}\right)$ with respect to $\vec{y}$:
#
# $$
# v=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)
# $$
#
# then by the chain rule, the gradient of $l$ with respect to $\vec{x}$ is the vector-Jacobian product:
#
# $$
# v \cdot J=\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right) \left(\begin{array}{ccc}
# \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\
# \vdots & \ddots & \vdots\\
# \frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
# \end{array}\right)=\left(\begin{array}{ccc}\frac{\partial l}{\partial x_{1}} & \cdots & \frac{\partial l}{\partial x_{n}}\end{array}\right)
# $$
# Note: gradients are accumulated during backpropagation. Every backward pass adds to the gradients already stored, so you generally need to zero the gradients before each backward pass.
# +
# Run backpropagation once more; note that grad accumulates
out2 = x.sum()
out2.backward()
print(x.grad)
out3 = x.sum()
x.grad.data.zero_()
out3.backward()
print(x.grad)
# -
x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
y = 2 * x
z = y.view(2, 2)
print(z)
# Now `z` is not a scalar, so when calling `backward` we need to pass in a weight tensor of the same shape as `z`; the weighted sum then gives a scalar to differentiate.
# +
v = torch.tensor([[1.0, 0.1], [0.01, 0.001]], dtype=torch.float)
z.backward(v)
print(x.grad)
# -
# Next, an example of interrupting gradient tracking:
# +
x = torch.tensor(1.0, requires_grad=True)
y1 = x ** 2
with torch.no_grad():
y2 = x ** 3
y3 = y1 + y2
print(x, x.requires_grad)
print(y1, y1.requires_grad)
print(y2, y2.requires_grad)
print(y3, y3.requires_grad)
# -
y3.backward()
print(x.grad)
# Why 2? Since $ y_3 = y_1 + y_2 = x^2 + x^3$, shouldn't $\frac {dy_3} {dx}$ be 5 at $x=1$? In fact, because $y_2$ is defined inside the `torch.no_grad():` block, gradients related to $y_2$ are not propagated back; only the gradient through $y_1$ is, i.e. the gradient of $x^2$ with respect to $x$.
#
# As mentioned above, `y2.requires_grad` is `False`, so `y2.backward()` cannot be called.
# +
# y2.backward() # raises RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
# -
# If we want to modify a tensor's values without autograd recording the change (i.e. without affecting backpropagation), we can operate on `tensor.data`.
# +
x = torch.ones(1,requires_grad=True)
print(x.data) # still a tensor
print(x.data.requires_grad) # but already independent of the computation graph
y = 2 * x
x.data *= 100 # changes the value only; not recorded in the graph, so gradient propagation is unaffected
y.backward()
print(x) # changing data also changes the tensor's value
print(x.grad)
| code/chapter02_prerequisite/2.3_autograd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np   # this cell runs before the import cell below, so import here as well
import pandas as pd
actions = ['left', 'right']  # define the column headers (the available actions)
table = pd.DataFrame(np.zeros((6, 2)), columns=actions)  # build the table with those column headers
print(table)
import pandas as pd
import numpy as np
a=np.zeros((6,2))
a[3,1]=0
print(a)
(a==0).all()
# +
state_actions = table.iloc[5, 1]  # the entry at row 5, column 1 (a single value, not the whole row)
print(state_actions)
# -
t = np.random.uniform()  # draw a random value in [0, 1)
print(t)
if 0 > 1 or 1 < 2:  # with `or`, the branch runs if either condition holds
    print("executed correctly")
# +
for a in range(1, 100):  # range specifies the iteration bounds
a=a+1
print('---------')
print(np.random.choice(actions))
# -
env_list = ['-']*(6) + ['T']
print(env_list)
interaction = 'Episode %s: total_steps = %s' % (1, 2)
print(interaction)
print('\r{}'.format(interaction), end='')
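# The pieces tried above (a zero-initialized table, `np.random.uniform`, and `np.random.choice`) combine into epsilon-greedy action selection; a minimal sketch (the threshold value below is an assumption, not from this notebook):

```python
import numpy as np
import pandas as pd

EPSILON = 0.9  # assumed greediness threshold

def choose_action(state, q_table, actions, epsilon=EPSILON):
    state_actions = q_table.iloc[state, :]
    # explore when the draw exceeds epsilon or the row is still all zeros
    if np.random.uniform() > epsilon or (state_actions == 0).all():
        return np.random.choice(actions)
    return state_actions.idxmax()  # otherwise act greedily

q = pd.DataFrame(np.zeros((6, 2)), columns=['left', 'right'])
q.iloc[2, 1] = 0.5
print(choose_action(2, q, ['left', 'right']))
```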
| contents/1_command_line_reinforcement_learning/test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
### Authenticate to Ambari
#### Python requirements
import difflib
import getpass
import json
import requests
import sys
import time
#### Change these to fit your Ambari configuration
ambari_protocol = 'http'
ambari_server = '192.168.127.12'
ambari_port = 8080
ambari_user = 'admin'
#cluster = 'Sandbox'
#### Above input gives us http://user:pass@hostname:port/api/v1/
api_url = ambari_protocol + '://' + ambari_server + ':' + str(ambari_port)
#### Prompt for password & build the HTTP session
ambari_pass = getpass.getpass()
s = requests.Session()
s.auth = (ambari_user, ambari_pass)
s.headers.update({'X-Requested-By':'seanorama'})
#### Authenticate & verify authentication
r = s.get(api_url + '/api/v1/clusters')
assert r.status_code == 200
print("You are authenticated to Ambari!")
# +
### Set cluster based on existing cluster
cluster = r.json()['items'][0]['Clusters']['cluster_name']
cluster
# -
# # Configure YARN Capacity Scheduler
# +
## Get current configuration tag
r = s.get(api_url + '/api/v1/clusters/' + cluster + '?fields=Clusters/desired_configs/capacity-scheduler')
assert r.status_code == 200
tag = r.json()['Clusters']['desired_configs']['capacity-scheduler']['tag']
## Get current configuration
r = s.get(api_url + '/api/v1/clusters/' + cluster + '/configurations?type=capacity-scheduler&tag=' + tag)
assert r.status_code == 200
print(json.dumps(r.json(), indent=2))
## Update config
config_old = r.json()['items'][0]
config_new = r.json()['items'][0]
#### Make your changes here
config_new['properties']['yarn.scheduler.capacity.root.default.capacity'] = '50'
config_new['properties']['yarn.scheduler.capacity.root.queues'] = 'default,hiveserver'
config_new['properties']['yarn.scheduler.capacity.root.hiveserver.capacity'] = '50'
config_new['properties']['yarn.scheduler.capacity.root.hiveserver.hive1.capacity'] = '50'
config_new['properties']['yarn.scheduler.capacity.root.hiveserver.hive1.user-limit-factor'] = '4'
config_new['properties']['yarn.scheduler.capacity.root.hiveserver.hive2.capacity'] = '50'
config_new['properties']['yarn.scheduler.capacity.root.hiveserver.hive2.user-limit-factor'] = '4'
config_new['properties']['yarn.scheduler.capacity.root.hiveserver.queues'] = 'hive1,hive2'
# +
#### Show the differences
a = json.dumps(config_old, indent=2).splitlines(1)
b = json.dumps(config_new, indent=2).splitlines(1)
for line in difflib.unified_diff(a, b):
sys.stdout.write(line)
# +
#### Manipulate the document to match the format Ambari expects
#### Adds new configuration tag, deletes fields, and wraps in appropriate json
config_new['tag'] = 'version' + str(int(round(time.time() * 1000000000)))
del config_new['Config']
del config_new['href']
del config_new['version']
config_new = {"Clusters": {"desired_config": config_new}}
print(json.dumps(config_new, indent=2))
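#### The same tag/strip/wrap step repeats for every config type in this notebook; a reusable sketch (hypothetical helper, not part of the original workflow):

```python
import json
import time

def wrap_desired_config(config):
    # strip Ambari bookkeeping fields and wrap the config dict
    # in the shape the cluster PUT endpoint expects
    body = dict(config)
    body['tag'] = 'version' + str(int(round(time.time() * 1000000000)))
    for key in ('Config', 'href', 'version'):
        body.pop(key, None)  # tolerate already-missing fields
    return {"Clusters": {"desired_config": body}}

example = {"type": "yarn-site", "href": "x", "version": 1,
           "Config": {}, "properties": {"a": "1"}}
print(json.dumps(wrap_desired_config(example), indent=2))
```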
# +
body = config_new
r = s.put(api_url + '/api/v1/clusters/' + cluster, data=json.dumps(body))
print(r.url)
print(r.status_code)
assert r.status_code == 200
print("Configuration changed successfully!")
print(json.dumps(r.json(), indent=2))
# -
# # Configure YARN Site
# +
## Get current configuration tag
r = s.get(api_url + '/api/v1/clusters/' + cluster + '?fields=Clusters/desired_configs/yarn-site')
assert r.status_code == 200
tag = r.json()['Clusters']['desired_configs']['yarn-site']['tag']
## Get current configuration
r = s.get(api_url + '/api/v1/clusters/' + cluster + '/configurations?type=yarn-site&tag=' + tag)
assert r.status_code == 200
print(json.dumps(r.json(), indent=2))
## Update config
config_old = r.json()['items'][0]
config_new = r.json()['items'][0]
#### Make your changes here
config_new['properties']['yarn.nodemanager.resource.memory-mb'] = '4096'
config_new['properties']['yarn.scheduler.minimum-allocation-mb'] = '512'
config_new['properties']['yarn.scheduler.maximum-allocation-mb'] = '4096'
config_new['properties']['yarn.resourcemanager.scheduler.monitor.enable'] = 'true'
config_new['properties']['yarn.resourcemanager.scheduler.monitor.policies'] = 'org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy'
config_new['properties']['yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval'] = '1000'
config_new['properties']['yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill'] = '5000'
config_new['properties']['yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round'] = '0.4'
# +
#### Show the differences
a = json.dumps(config_old, indent=2).splitlines(1)
b = json.dumps(config_new, indent=2).splitlines(1)
for line in difflib.unified_diff(a, b):
sys.stdout.write(line)
# +
#### Manipulate the document to match the format Ambari expects
#### Adds new configuration tag, deletes fields, and wraps in appropriate json
config_new['tag'] = 'version' + str(int(round(time.time() * 1000000000)))
del config_new['Config']
del config_new['href']
del config_new['version']
config_new = {"Clusters": {"desired_config": config_new}}
print(json.dumps(config_new, indent=2))
# +
body = config_new
r = s.put(api_url + '/api/v1/clusters/' + cluster, data=json.dumps(body))
print(r.url)
print(r.status_code)
assert r.status_code == 200
print("Configuration changed successfully!")
print(json.dumps(r.json(), indent=2))
# -
# # Hive Configuration
# +
#### Get current configuration tag
r = s.get(api_url + '/api/v1/clusters/' + cluster + '?fields=Clusters/desired_configs/hive-site')
assert r.status_code == 200
tag = r.json()['Clusters']['desired_configs']['hive-site']['tag']
### Get current configuration
r = s.get(api_url + '/api/v1/clusters/' + cluster + '/configurations?type=hive-site&tag=' + tag)
assert r.status_code == 200
#### Change the configuration
config_old = r.json()['items'][0]
config_new = r.json()['items'][0]
#### The configurations you want to change
config_new['properties']['hive.execution.engine'] = 'tez'
config_new['properties']['hive.heapsize'] = '512'
config_new['properties']['hive.tez.container.size'] = '512'
config_new['properties']['hive.tez.java.opts'] = "-server -Xmx200m -Djava.net.preferIPv4Stack=true"
config_new['properties']['hive.support.concurrency'] = 'true'
config_new['properties']['hive.txn.manager'] = 'org.apache.hadoop.hive.ql.lockmgr.DbTxnManager'
config_new['properties']['hive.compactor.initiator.on'] = 'true'
config_new['properties']['hive.compactor.worker.threads'] = '2'
config_new['properties']['hive.enforce.bucketing'] = 'true'
config_new['properties']['hive.exec.dynamic.partition.mode'] = 'nonstrict'
config_new['properties']['hive.execution.engine'] = 'tez'
config_new['properties']['hive.server2.tez.initialize.default.sessions'] = 'true'
config_new['properties']['hive.server2.tez.default.queues'] = 'hive1,hive2'
config_new['properties']['hive.server2.tez.sessions.per.default.queue'] = '1'
config_new['properties']['hive.server2.enable.doAs'] = 'false'
config_new['properties']['hive.vectorized.groupby.maxentries'] = '10240'
config_new['properties']['hive.vectorized.groupby.flush.percent'] = '0.1'
# +
#### Show the differences
a = json.dumps(config_old, indent=2).splitlines(1)
b = json.dumps(config_new, indent=2).splitlines(1)
for line in difflib.unified_diff(a, b):
sys.stdout.write(line)
# +
#### Manipulate the document to match the format Ambari expects
#### Adds new configuration tag, deletes fields, and wraps in appropriate json
config_new['tag'] = 'version' + str(int(round(time.time() * 1000000000)))
del config_new['Config']
del config_new['href']
del config_new['version']
config_new = {"Clusters": {"desired_config": config_new}}
print(json.dumps(config_new, indent=2))
# +
body = config_new
r = s.put(api_url + '/api/v1/clusters/' + cluster, data=json.dumps(body))
print(r.url)
print(r.status_code)
assert r.status_code == 200
print("Configuration changed successfully!")
print(json.dumps(r.json(), indent=2))
# -
| build/security/ambari-bootstrap-master/api-examples/workshop-hdp22-hive-streaming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Simulation Analysis Graphics for Paper - HPI Building
#
# by: <NAME>
#
# Apr 13, 2016
# +
# %matplotlib inline
import esoreader
reload(esoreader)
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
from pylab import *
from __future__ import division
# -
sns.set_style("whitegrid")
# # Get Co-Sim Data
# +
# change this to point to where you checked out the GitHub project
PROJECT_PATH = r"/Users/Clayton/Dropbox/03-ETH/UMEM/results/"
# this is the output of the workflow
ESO_PATH = os.path.join(PROJECT_PATH, '36-cosim-HPI', '36-cosim-HPI.eso')
# yeah... we need an index for timeseries...
HOURS_IN_YEAR = pd.date_range('2013-01-01', '2013-12-31 T23:00', freq='H')
# -
hpi = esoreader.read_from_path(ESO_PATH)
# +
heating = hpi.to_frame('total heating energy', index=HOURS_IN_YEAR, use_key_for_columns=False)#.sum(axis=1)
cooling = hpi.to_frame('total cooling energy', index=HOURS_IN_YEAR, use_key_for_columns=False)#.sum(axis=1)
df = pd.DataFrame({'Total Heating Energy': heating['Zone Ideal Loads Zone Total Heating Energy'],
'Total Cooling Energy': cooling['Zone Ideal Loads Zone Total Cooling Energy']})
df_cosim = df*0.000000277777778
eplus_cosim_radiation = hpi.to_frame('Surface Outside Face Solar Radiation Heat Gain Energy', index=HOURS_IN_YEAR, use_key_for_columns=False)
# +
#hpi.dd.find_variable("Heating Energy")
# -
# # Get E+ Solo Data
# this is the output of the workflow
ESO_PATH = os.path.join(PROJECT_PATH, '35-energyplus-HPI', '35-energyplus-HPI.eso')
# +
hpi = esoreader.read_from_path(ESO_PATH)
heating = hpi.to_frame('total heating energy', index=HOURS_IN_YEAR, use_key_for_columns=False)#.sum(axis=1)
cooling = hpi.to_frame('total cooling energy', index=HOURS_IN_YEAR, use_key_for_columns=False)#.sum(axis=1)
df = pd.DataFrame({'Total Heating Energy': heating['Zone Ideal Loads Zone Total Heating Energy'],
'Total Cooling Energy': cooling['Zone Ideal Loads Zone Total Cooling Energy']})
df_eplus = df*0.000000277777778
eplus_solo_radiation = hpi.to_frame('Surface Outside Face Solar Radiation Heat Gain Energy', index=HOURS_IN_YEAR, use_key_for_columns=False)
# -
# # Get Measured Data
workingdir = "/Users/Clayton/Dropbox/03-ETH/98-UMEM/RawDataAnalysis/"
df_meas = pd.read_csv(workingdir+"aggset2_QW/HPI_QW.csv", index_col="Date Time", parse_dates=True, na_values=0)
point = "HPIMKA01QW_A [kWh]"
df_meas_cooling = pd.DataFrame(df_meas[point].truncate(before='2013',after='2014'))
df_meas_cooling = df_meas_cooling[(df_meas_cooling<50)]
point = "HPIMHE01QW_A [kWh]"
df_meas_heating = pd.DataFrame(df_meas[point].truncate(before='2013',after='2014'))
df_meas_heating = df_meas_heating[(df_meas_heating<1000)]
df_meas = pd.DataFrame({'Total Heating Energy': df_meas_heating["HPIMHE01QW_A [kWh]"],
'Total Cooling Energy': df_meas_cooling["HPIMKA01QW_A [kWh]"]})
# # Combine
df_allcooling = pd.DataFrame({"Co-Simulation Cooling Energy":df_cosim['Total Cooling Energy'],
"EnergyPlus Solo Cooling Energy":df_eplus['Total Cooling Energy'],"Measured Cooling":df_meas['Total Cooling Energy'],
}) #"Measured":df_meas['Total Cooling Energy']
df_allcooling = df_allcooling.truncate(after='2013-12-31 23:00')
df_allheating = pd.DataFrame({"Co-Simulation Heating Energy":df_cosim['Total Heating Energy'],
"EnergyPlus Solo Heating Energy":df_eplus['Total Heating Energy'],"Measured Heating":df_meas['Total Heating Energy']
}) # "Measured":df_meas['Total Heating Energy']"Co-Simulation Heating Energy":df_cosim['Total Heating Energy'],
df_allheating = df_allheating.truncate(after='2013-12-31 23:00')
df_allheating_winter = pd.concat([df_allheating.truncate(after="2013-04-28 23:00"), df_allheating.truncate(before="2013-10-07")])
df_allheating_winter = df_allheating_winter.dropna()
df_allcooling_summer = df_allcooling.truncate(before="2013-04-29",after="2013-10-06 23:00").dropna()
combinedheating_cooling = pd.merge(df_allheating_winter,df_allcooling_summer, right_index=True, left_index=True, how='outer')
combinedheating_cooling.columns
reorderedcolnames = [u'Measured Heating',u'EnergyPlus Solo Heating Energy',u'Co-Simulation Heating Energy',
u'Measured Cooling',u'EnergyPlus Solo Cooling Energy',u'Co-Simulation Cooling Energy']
combinedheating_cooling.info()
combinedheating_cooling = combinedheating_cooling[reorderedcolnames]
combinedheating_cooling.info()
# +
palette1 = sns.color_palette("gist_heat", 3)
palette2 = sns.color_palette("winter", 3)
sns.set_palette(palette1+palette2)
fig1 = plt.figure(figsize=[15,3])
fig1.suptitle("ETHZ HPI Building - EnergyPlus CoSim and Solo vs. Measured Seasonal Heating and Cooling", fontsize=14, y=1.02)
gs = GridSpec(100,100,bottom=0.18,left=0.18,right=0.88)
xfmt = ScalarFormatter()
xfmt.set_powerlimits((-3,3))
ax1 = fig1.add_subplot(gs[:,0:73])
styles1 = ['s-','o-','^-','<-','->','-H']
combinedheating_cooling.resample("W").sum().plot(style=styles1, ax=ax1)
ax1.set_ylabel("Weekly Cooling/Heating [kWh]")
gca().yaxis.set_major_formatter(xfmt)
df_bar_combined = combinedheating_cooling
df_bar_combined.columns = ["Meas. Heat","E+ Solo Heat","CoSim Heat","Meas. Cool","E+ Solo Cool","CoSim Cool"]
ax3 = fig1.add_subplot(gs[:,80:100])
df_bar_combined.resample('A').sum().T.plot(kind='bar',color=palette1+palette2, legend=False, ax=ax3)
ax3.set_ylabel("Seasonal Cooling/Heating [kWh]")
ax3.set_xticklabels(ax3.xaxis.get_majorticklabels(), rotation=45)
for tick in ax3.xaxis.get_majorticklabels():
tick.set_horizontalalignment("right")
gca().yaxis.set_major_formatter(xfmt)
plt.subplots_adjust(bottom=0.5)
plt.savefig("/Users/Clayton/umem-jbps-paper/latex_corrected/figures/"+"HPI_MeasvsSim.pdf",bbox_inches='tight')
plt.show()
# -
# # Daily Profiles
def pivot_day(df, cols, startdate, enddate):
df_forpiv = df.truncate(before=startdate, after=enddate).resample('2H').mean()
#df_cooling_forpiv.columns = ["1","2","3","4"]
df_forpiv['Date'] = df_forpiv.index.map(lambda t: t.date())
df_forpiv['Time'] = df_forpiv.index.map(lambda t: t.time())
df_pivot_1 = pd.pivot_table(df_forpiv, values=cols[1], index='Date', columns='Time')
df_pivot_2 = pd.pivot_table(df_forpiv, values=cols[2], index='Date', columns='Time')
# df_pivot_3 = pd.pivot_table(df_forpiv, values=cols[2], index='Date', columns='Time')
# df_pivot_4 = pd.pivot_table(df_forpiv, values=cols[3], index='Date', columns='Time')
df_pivot_sum = pd.DataFrame()
df_pivot_sum[cols[1]] = df_pivot_1.mean()#.plot(figsize=(20,8))
df_pivot_sum[cols[2]] = df_pivot_2.mean()#.plot()
# df_pivot_sum[cols[2]] = df_pivot_3.mean()#.plot()
# df_pivot_sum[cols[3]] = df_pivot_4.mean()#.plot()
return df_pivot_sum
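# The pivot above averages each 2-hour slot across days; here is a tiny self-contained illustration of the same pattern on synthetic data (not the HPI series):

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2013-01-01', periods=48, freq='H')  # two synthetic days
demo = pd.DataFrame({'load': np.tile(np.arange(24), 2)}, index=idx)
demo = demo.resample('2H').mean()
demo['Time'] = demo.index.map(lambda t: t.time())
demo['Date'] = demo.index.map(lambda t: t.date())
# average each time-of-day slot across the two days
profile = pd.pivot_table(demo, values='load', index='Date', columns='Time').mean()
print(profile)  # the average daily profile in 2-hour steps
```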
def deltas_calc(df_bar, index1, index2):
    # .iloc replaces the deprecated .ix for positional indexing
    diff = float((df_bar.iloc[:,index2] - df_bar.iloc[:,index1])/df_bar.iloc[:,index1])
    if diff >= 0:
        annotation = '+{:.1%}'.format(diff)
    else:
        annotation = '{:.1%}'.format(diff)
    delta = int(df_bar.iloc[:,index2] - df_bar.iloc[:,index1])
    if delta >= 0:
        annotation_delta = '+{:,}'.format(delta)
    else:
        annotation_delta = '{:,}'.format(delta)
    return annotation, annotation_delta
# +
def plot_cosim_vs_solo(df, df_pivot, colorpalette, ylabel1, ylabel2, ylabel3, filename, title, colnames1, colnames2):
current_palette = sns.color_palette(colorpalette, 3)
sns.set_palette(current_palette)
fig1 = plt.figure(figsize=[15,3])
fig1.suptitle(title, fontsize=14, y=1.03)
gs = GridSpec(100,100,bottom=0.18,left=0.18,right=0.88)
xfmt = ScalarFormatter()
xfmt.set_powerlimits((-3,3))
ax1 = fig1.add_subplot(gs[:,1:35])
styles1 = ['s-','^-','<-']
df.columns = colnames2
df_pivot.columns = colnames1
df_pivot.plot(style=styles1, ax=ax1, xticks=arange(0, 86400, 21600))
ax1.set_ylabel(ylabel1)
ax1.set_xlabel("Time of Day")
gca().yaxis.set_major_formatter(xfmt)
ax2 = fig1.add_subplot(gs[:,41:75])
styles1 = ['s-','^-','<-']
df.resample('M').sum().plot(style=styles1, ax=ax2)#.legend(loc='center left', bbox_to_anchor=(0, -0.5),), title="Monthly Total"
ax2.set_ylabel(ylabel2)
ax2.set_xlabel("Months of Year")
gca().yaxis.set_major_formatter(xfmt)
# df_netradiation.resample('M',how='sum').plot(style=styles1, ax=ax2, title="Monthly Total All Surfaces")#.legend(loc='center left', bbox_to_anchor=(0, -0.5),)
# ax2.set_ylabel("Monthly Net Thermal Radiation [J]")
# ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.e'));
df_bar = df.resample('A').sum()
#df_bar.columns = ["CoSim","Solo"] #,"3","4"
ax3 = fig1.add_subplot(gs[:,81:100])
df_bar.T.plot(kind='bar',color=current_palette, legend=False, ax=ax3) #, title="Annual Total"
ax3.set_ylabel(ylabel3)
gca().yaxis.set_major_formatter(xfmt)
annotation1, annotation_delta1 = deltas_calc(df_bar, 0, 1)
annotation2, annotation_delta2 = deltas_calc(df_bar, 0, 2)
ax3.annotate(annotation1, xy=(0, 10000), xytext=(0.55, df_bar["Solo"][0]+(df_bar.max().max() / 30)))
ax3.annotate(annotation_delta1, xy=(0, 10000), xytext=(0.45, df_bar["Solo"][0]+(df_bar.max().max() / 8)))
ax3.annotate(annotation2, xy=(0, 10000), xytext=(1.75, df_bar["CoSim"][0]+(df_bar.max().max() / 30)))
ax3.annotate(annotation_delta2, xy=(0, 10000), xytext=(1.45, df_bar["CoSim"][0]+(df_bar.max().max() / 8)))
plt.ylim([0,df_bar.max().max()+df_bar.max().max()*.2])
# ax2.set_xticklabels(colnames, rotation=80)
plt.subplots_adjust(bottom=0.5)
# plt.tight_layout()
plt.savefig("/Users/Clayton/umem-jbps-paper/latex_corrected/figures/"+filename,bbox_inches='tight')
plt.show()
# +
#heating
# -
# heating = df_allheating_winter[['EnergyPlus Solo Heating Energy','Co-Simulation Heating Energy','Measured Heating']]
heating = df_allheating_winter[['Measured Heating','EnergyPlus Solo Heating Energy','Co-Simulation Heating Energy']]
heating_pivot = pivot_day(heating, list(heating.columns), '2013-01-01','2013-01-31')
df_bar = plot_cosim_vs_solo(heating, heating_pivot, "autumn", "Jan. Avg. Daily Heating [kWh]", "Monthly Heating [kWh]", "Annual Heating [kWh]", "HPI_EnergyPlus_Heating.pdf","ETHZ HPI EnergyPlus CoSim vs Solo Heating",["Solo","CoSim"],["Meas","Solo","CoSim"])
# cooling = df_allcooling_summer[['EnergyPlus Solo Cooling Energy','Co-Simulation Cooling Energy','Measured Cooling']]
cooling = df_allcooling_summer[['Measured Cooling','EnergyPlus Solo Cooling Energy','Co-Simulation Cooling Energy']]
cooling.columns
cooling_pivot = pivot_day(cooling, list(cooling.columns), '2013-07-01','2013-07-31')
plot_cosim_vs_solo(cooling, cooling_pivot, "winter", "July Avg. Daily Cooling [kWh]", "Monthly Cooling [kWh]", "Annual Cooling [kWh]", "HPI_EnergyPlus_Cooling.pdf","ETHZ HPI EnergyPlus CoSim vs Solo Cooling",["Solo","CoSim"],["Meas","Solo","CoSim"])
# ## Radiation
radiation = pd.DataFrame({'Co-Sim':eplus_cosim_radiation['Surface Outside Face Solar Radiation Heat Gain Energy'],
'Solo':eplus_solo_radiation['Surface Outside Face Solar Radiation Heat Gain Energy']})
eplus_cosim_radiation['Surface Outside Face Solar Radiation Heat Gain Energy'] - eplus_solo_radiation['Surface Outside Face Solar Radiation Heat Gain Energy']
radiation.resample('M').sum().plot(kind='bar')
| ipy-notebooks/.ipynb_checkpoints/Simulation Analysis Graphics for Paper - HPI Building-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="fnKcXt91w3XI" outputId="5d30120e-e222-4fa2-d378-3fc6d58ad268"
import keras
# + colab={} colab_type="code" id="11aUWOtqxkJh"
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Model, Sequential
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Reshape
from keras import regularizers
# + colab={"base_uri": "https://localhost:8080/", "height": 255} colab_type="code" id="Mroumt4KyxlE" outputId="d7f8cd69-6792-427f-8543-5e62fa45c378"
# check if tpu is running
import tensorflow as tf
import os
import pprint
if 'COLAB_TPU_ADDR' not in os.environ:
print('ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!')
else:
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
print ('TPU address is', tpu_address)
with tf.Session(tpu_address) as session:
devices = session.list_devices()
print('TPU devices:')
pprint.pprint(devices)
# + colab={} colab_type="code" id="d_rXxDPhz6D0"
# + colab={} colab_type="code" id="sVxOUHOyzSHP"
running_on_colab = True
if running_on_colab:
(x_train, y_train), (x_test, y_test) = mnist.load_data()
else:
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# Scales the training and test data to range between 0 and 1.
max_value = float(x_train.max())
x_train = x_train.astype('float32') / max_value
x_test = x_test.astype('float32') / max_value
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="kuHVagiI1Axa" outputId="b5ca53e0-fe73-4a1b-aab4-207d9575fb06"
x_train.shape, x_test.shape
# + colab={} colab_type="code" id="xrzX0mYm1Isy"
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="wjXqylIF1hry" outputId="da3945e2-f42c-44e9-faeb-f0f4740b1a75"
(x_train.shape, x_test.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" id="C1Z6GrZw1u1B" outputId="1a491e3b-e631-4c77-f73a-3ff058329fbc"
input_dim = x_train.shape[1]
encoding_dim = 32
compression_factor = float(input_dim) / encoding_dim
print("Compression factor: %s" % compression_factor)
autoencoder = Sequential()
autoencoder.add(
Dense(encoding_dim, input_shape=(input_dim,), activation='relu')
)
autoencoder.add(
Dense(input_dim, activation='sigmoid')
)
autoencoder.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="7A7f_9Ch11td" outputId="d281c1db-0abf-4b3a-9dcc-a9f5dab976ed"
input_img = Input(shape=(input_dim,))
encoder_layer = autoencoder.layers[0]
encoder = Model(input_img, encoder_layer(input_img))
encoder.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1751} colab_type="code" id="dz9VCtq017Ch" outputId="1107ffce-1ac2-47b6-c7df-96c68ed8fcef"
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train,
epochs=50,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test))
# -
| src/Auto-encoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# https://www.pyimagesearch.com/2017/07/10/using-tesseract-ocr-python/
# extra https://www.learnopencv.com/deep-learning-based-text-recognition-ocr-using-tesseract-and-opencv/
# -
# #### First Install Windows Version of Tesseract 5.0a from
# #### https://github.com/UB-Mannheim/tesseract/wiki
# #### 64bit version exe
# #### https://digi.bib.uni-mannheim.de/tesseract/tesseract-ocr-w64-setup-v5.0.0-alpha.20190708.exe
# https://pypi.org/project/pytesseract/
# !pip install pytesseract
# from https://github.com/madmaze/pytesseract readme.md
try:
from PIL import Image
except ImportError:
import Image
import pytesseract
# If you don't have tesseract executable in your PATH, include the following:
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract'
# Example tesseract_cmd = r'C:\Program Files (x86)\Tesseract-OCR\tesseract'
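# Rather than hard-coding the Windows install path, the binary can be located automatically. A hedged sketch (the `find_tesseract` helper and its fallback path are assumptions of this example, not part of pytesseract):

```python
import shutil

def find_tesseract(fallback=r'C:\Program Files\Tesseract-OCR\tesseract'):
    """Return the tesseract binary found on PATH, or a fallback install path."""
    found = shutil.which('tesseract')  # None when tesseract is not on PATH
    return found if found is not None else fallback

# pytesseract.pytesseract.tesseract_cmd = find_tesseract()
```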
# Simple image to string
print(pytesseract.image_to_string(Image.open('data/clip.png')))
# Latvian text image to string
print(pytesseract.image_to_string(Image.open('data/vest.png'),lang="lav"))
# Latvian Fraktur text image to string
print(pytesseract.image_to_string(Image.open('data/frakt.png'),lang="frk"))
# ### Following https://lndb.wordpress.com/2008/10/08/vecajai-drukai-jauna-elpa/,
# the following transliteration (old Latvian orthography) would be obtained:
#
# Daudzās fabrikās strahdneeki paraduschi fabrikas administracijai un meistareem pasneegt dzimschanas, wahrda u. c. deenās dahwanas. Par scho jautajumu peeņemta sekoscha rezolucija: “Walde leek preekschā wiseem beedreem atteiktees no wisām tamlihdzigām dahwanu pasneegschanam administracijai. Tahda atteikschanās jaizdara caur rakztisku proteztu.”
| Tesseract_OCR_Library/Tesseract_intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Just getting started ...
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import numpy as np
import pandas as pd
from pathlib import Path
from matplotlib import pyplot as plt
import seaborn as sns
import h5py
#import torch
import torch.nn.functional as F
from sklearn.model_selection import KFold, RepeatedKFold, StratifiedKFold
from fastai.vision import *
from fastai import *
import os, shutil
import sys
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
#for dirname, _, filenames in os.walk('/kaggle/input'):
# for filename in filenames:
# print(os.path.join(dirname, filename))
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
kaggle_path = Path('.') # '.'
kaggle_input_path = Path('/kaggle/input/trends-assessment-prediction') # '/kaggle/input/trends-assessment-prediction'
mri_train_path = Path('/kaggle/input/trends-assessment-prediction/fMRI_train') #fMRI_train
# +
IMPUTATION_STRAT = 'IGNORE_ON_TRAIN' # 'IGNORE_ON_TRAIN', 'MEAN'
LOSS_BASE = 'MSE' # 'MSE' # 'L1' 'MIX' 'VAR'
LOSS_WEIGHTS = [0.3, 0.175, 0.175, 0.175, 0.175] #[0.2,0.2,0.2,0.2,0.2] #[0,0,0,0,1]#
DEP_VAR = ['age','domain1_var1','domain1_var2', 'domain2_var1', 'domain2_var2']
bs=4
# -
# TODO: add the test set to y_data and change the dataloader so it picks images from both the test and train folders
y_data = pd.read_csv(kaggle_input_path/'train_scores.csv')
y_data = y_data.fillna(0)
f= h5py.File(mri_train_path/'10001.mat','r')
data=f['SM_feature'][()]
data.reshape(1,53,52,63,53).shape
# +
class MRIImageList2(ImageList):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def get(self, i):
fname = self.items[i]
f= h5py.File(fname,'r')
data=f['SM_feature'][()]
return torch.tensor(data).permute(3,1,0,2).float()
class MRIImageList2_Target(ImageList):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def get(self, i):
y_features = torch.tensor(y_data.iloc[i].values[1:]).float()
fname = self.items[i]
f= h5py.File(fname,'r')
imgdata=f['SM_feature'][()]
imgdata=torch.tensor(imgdata).permute(3,1,0,2).float().flatten()
return torch.cat((imgdata, y_features) ,-1)
# +
#https://github.com/fastai/fastai/blob/master/fastai/vision/data.py#L414
class MRIImageImageList2(MRIImageList2):
"`ItemList` suitable for `Image` to `Image` tasks."
_label_cls, _square_show, _square_show_res = MRIImageList2_Target, False, False
# +
def data():
return (MRIImageImageList2.from_df(df = y_data, path=kaggle_input_path, cols=['Id'], folder='fMRI_train', suffix='.mat')
# .split_none()
#.split_by_idx(valid_idx=range(200,400))
.split_by_rand_pct(0.1)
#.split_by_files(valid_names = [f'valid_{task}_{i}.png' for i in range(task_info[task]['num_test'])])
#.label_from_df(cols=DEP_VAR)
.label_from_func(lambda x: x) #range(10)
#.transform(get_transforms(), tfm_y=True, size=256, padding_mode = 'border') #256 #xtra_tfms=[cutout(n_holes=(1,8), length=(16, 16), p=.7)]
.databunch(bs=bs) #64)
#.normalize(imagenet_stats)
)
# -
# Loss function combines autoencoder image generating loss and feature loss, if features are the 5 target variables (0 values are ignored, so test data could be included)
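# The zero-masking idea can be seen in isolation before reading the full loss. A minimal NumPy sketch (the numbers are invented for illustration; the real `Mixed_Loss` below works on torch tensors and adds per-feature weights):

```python
import numpy as np

target = np.array([[57.4, 0.0, 30.1],    # 0.0 marks a missing label
                   [0.0, 45.2, 0.0]])
pred = np.array([[55.0, 12.0, 28.0],
                 [60.0, 44.0, 10.0]])

mask = np.sign(target)                   # 1 where a label exists, 0 where missing
masked_sq_err = mask * (pred - target) ** 2
loss = masked_sq_err.sum() / mask.sum()  # average over observed labels only
```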
class Mixed_Loss(nn.Module):
def __init__(self, loss_weights = [0.4,0.15,0.15,0.15,0.15], loss_base = 'MSE', bs=64):
super().__init__()
self.bs=bs
self.loss_base = loss_base
self.loss_weights = loss_weights
self.fw = Tensor(LOSS_WEIGHTS).cuda()
self.student_weight = 0.01 # 0.01 #.1
def forward(self, input, target,reduction='sum'): #mean
autoencoder_loss = F.mse_loss(input[:,:-5], target[:,:-5], reduction='mean') * bs #view(self.bs,-1))
#return autoencoder_loss
features = input[:,-5:].T
target_feat = target[:,-5:]
x0,x1,x2,x3,x4 = features
        ## use sign to mask targets imputed with 0, so they won't be evaluated
if IMPUTATION_STRAT == 'IGNORE_ON_TRAIN':
sg = target_feat.float().sign()
x0,x1,x2,x3,x4 = x0.float()*sg[:,0],x1.float()*sg[:,1],x2.float()*sg[:,2],x3.float()*sg[:,3],x4.float()*sg[:,4]
else: # 'MEAN'
sg = 1
x0,x1,x2,x3,x4 = x0.float(),x1.float(),x2.float(),x3.float(),x4.float()
y = target_feat.float()*sg
loss1 = 0
loss2 = 0
if self.loss_base in ('MSE','MIX'):
loss_func = F.mse_loss
#reduction = 'sum'
#loss1 = self.fw[0]*loss_func(x0,y[:,0],reduction=reduction)/sum(y[:,0]**2) + \
# self.fw[1]*loss_func(x1,y[:,1],reduction=reduction)/sum(y[:,1]**2) + \
# self.fw[2]*loss_func(x2,y[:,2],reduction=reduction)/sum(y[:,2]**2) + \
# self.fw[3]*loss_func(x3,y[:,3],reduction=reduction)/sum(y[:,3]**2) + \
# self.fw[4]*loss_func(x4,y[:,4],reduction=reduction)/sum(y[:,4]**2)
loss1 = (self.fw*(F.mse_loss(features.T*sg,y,reduction='none').sum(dim=0))/((y**2.5).sum(dim=0)))*3
loss1 = loss1.sum()*self.bs
if self.loss_base in ('L1','MIX'):
loss_func = F.l1_loss
#reduction = 'mean'
#loss2 = self.fw[0]*loss_func(x0,y[:,0],reduction=reduction) + \
# self.fw[1]*loss_func(x1,y[:,1],reduction=reduction) + \
# self.fw[2]*loss_func(x2,y[:,2],reduction=reduction) + \
# self.fw[3]*loss_func(x3,y[:,3],reduction=reduction) + \
# self.fw[4]*loss_func(x4,y[:,4],reduction=reduction)
reduction = 'sum'
loss2 = self.fw[0]*loss_func(x0,y[:,0],reduction=reduction)/sum(y[:,0]) + \
self.fw[1]*loss_func(x1,y[:,1],reduction=reduction)/sum(y[:,1]) + \
self.fw[2]*loss_func(x2,y[:,2],reduction=reduction)/sum(y[:,2]) + \
self.fw[3]*loss_func(x3,y[:,3],reduction=reduction)/sum(y[:,3]) + \
self.fw[4]*loss_func(x4,y[:,4],reduction=reduction)/sum(y[:,4])
loss2 = loss2*self.bs/4
return autoencoder_loss/10 + loss1 + loss2
class AutoEncoderBlock(nn.Module):
def __init__(self, nf):
super(AutoEncoderBlock, self).__init__()
track_running_stats = True
affine=True
self.encoder3d = nn.Sequential(
            # since gradients are accumulated over several batches, don't use batchnorm; see
# https://docs.fast.ai/train.html#AccumulateScheduler
nn.InstanceNorm3d(53, track_running_stats=track_running_stats, affine=affine),
nn.Conv3d(nf, 10, 7, stride=(1,2,2), padding =3),
nn.LeakyReLU(0.1),
nn.Dropout3d(0.1),
#Lambda(lambda x: x.sin()),
nn.InstanceNorm3d(10, track_running_stats=track_running_stats, affine=affine),
nn.Conv3d(10, 1, 5, stride=1, padding = 2),
#Lambda(lambda x: x.sin())
nn.LeakyReLU(0.1),
nn.Dropout3d(0.1)
)
self.encoder2d = nn.Sequential(
nn.InstanceNorm2d(52, track_running_stats=track_running_stats, affine=affine),
nn.Conv2d(52, 3, 1, stride=1, padding =0),
nn.LeakyReLU(0.1),
nn.Dropout2d(0.1)
#Lambda(lambda x: x.sin()),
)
self.decoder2d = nn.Sequential(
nn.InstanceNorm2d(3, track_running_stats=track_running_stats, affine=affine),
nn.ConvTranspose2d(3, 52, 1, stride=1, padding = 0),
nn.LeakyReLU(0.1),
nn.Dropout2d(0.1)
)
self.decoder3d = nn.Sequential(
nn.InstanceNorm3d(1, track_running_stats=track_running_stats, affine=affine),
nn.ConvTranspose3d(1, 10, 5, stride=1, padding = 2),
nn.LeakyReLU(0.1),
nn.Dropout3d(0.1),
nn.InstanceNorm3d(10, track_running_stats=track_running_stats, affine=affine),
nn.ConvTranspose3d(10, nf, 7, stride=(1,2,2), padding = 3),
nn.Flatten(),
nn.Hardshrink()
)
self.feature_generator = nn.Sequential(
nn.Flatten(),
#nn.InstanceNorm1d(2592, track_running_stats=track_running_stats),
nn.Linear(2592,8),
nn.Dropout(0.1),
#nn.InstanceNorm1d(8, track_running_stats=track_running_stats),
nn.Linear(8,5),
SigmoidRange(Tensor([12,12,0,0,0]).cuda(),Tensor([90,90,100,100,100]).cuda())
)
def forward(self, input):
x = input #self.c3d_to_2d(input) #[:,:,:,:,10]
# print(x.shape)
x = self.encoder3d(x)
# print(x.shape)
x = x.view(-1,52, 27, 32)
x = self.encoder2d(x)
# print(x.shape)
features = self.feature_generator(x)
x = self.decoder2d(x)
# print(x.shape)
x = x.view(-1,1,52, 27, 32)
# print(x.shape)
x = self.decoder3d(x)
# print(x.shape)
return torch.cat((x,features),1)
# the learner accumulates gradients over n_step batches because the batch size is small; the optimizer steps once every bs*n_step samples
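# The accumulation pattern can be sketched independently of fastai. A hedged illustration (the `accumulate_train` helper and the stub callbacks are assumptions of this example; fastai's `AccumulateScheduler` also rescales the loss):

```python
bs, n_step = 4, 16
effective_batch = bs * n_step            # 64 samples contribute to each update

def accumulate_train(batches, n_step, backward_fn, step_fn, zero_fn):
    """Call backward on every batch; step the optimizer every n_step batches."""
    for i, batch in enumerate(batches, start=1):
        backward_fn(batch)               # gradients add up across calls
        if i % n_step == 0:
            step_fn()                    # optimizer.step()
            zero_fn()                    # optimizer.zero_grad()
```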
# +
learn = Learner(data(), AutoEncoderBlock(53),
loss_func = Mixed_Loss(loss_weights = LOSS_WEIGHTS,
loss_base= LOSS_BASE, bs=bs),
# loss_func=F.mse_loss,
metrics=[mean_absolute_error,mean_squared_error],
callback_fns=partial(AccumulateScheduler, n_step=16))
learn.clip_grad = 1.0
learn.model_dir = '/kaggle/working'
# -
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(3,2e-4)
learn.model_dir = '/kaggle/working'
learn.export()
| notes/trends-autoencoder-fastai.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# # Quantum Circuits
#
# The `QuantumCircuit`, `QuantumRegister`, and `ClassicalRegister` are the main objects for Qiskit Terra. Most users will be able to do all they want with these objects.
import numpy as np
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import BasicAer, execute
from qiskit.quantum_info import Pauli, state_fidelity, basis_state, process_fidelity
# ## Quantum and Classical Registers
#
# Quantum and Classical Registers are declared using the following:
q0 = QuantumRegister(2, 'q0')
c0 = ClassicalRegister(2, 'c0')
q1 = QuantumRegister(2, 'q1')
c1 = ClassicalRegister(2, 'c1')
q_test = QuantumRegister(2, 'q0')
# The name is optional. If not given, Qiskit will name it $qi$, where $i$ is an integer counting from 0. The name and size can be returned using the following:
print(q0.name)
print(q0.size)
# You can test if the registers are the same or different.
q0==q0
q0==q_test
q0==q1
# ## Quantum Circuits
#
# Quantum circuits are built from registers, which are either passed in when the circuit is constructed or added later using the `add_register` command.
circ = QuantumCircuit(q0, q1)
circ.x(q0[1])
circ.x(q1[0])
circ.draw()
# is the same as
circ2 = QuantumCircuit()
circ2.add_register(q0)
circ2.add_register(q1)
circ2.x(q0[1])
circ2.x(q1[0])
circ2.draw()
# <div class="alert alert-block alert-info">
# <b>Note:</b> The registers are listed in the order they are initiated or added (**not** in tensor-product order for quantum registers).
# </div>
# +
from copy import deepcopy
q3 = QuantumRegister(2, 'q3')
circ3 = deepcopy(circ)
circ3.add_register(q3)
circ3.draw()
# -
# <div class="alert alert-block alert-info">
# <b>Note:</b> The circuit drawer has the last register added at the bottom. If we add a new register it will add it to the bottom of the circuit.
# </div>
# ### Extending a circuit
#
# In many situations you may have two circuits that you want to concatenate to form a new circuit. This is very useful when, for example, one circuit contains no measurements and the second appends the measurements.
# +
meas = QuantumCircuit(q0, q1, c0, c1)
meas.measure(q0, c0)
meas.measure(q1, c1)
qc = circ + meas
qc.draw()
# +
meas2 = QuantumCircuit()
meas2.add_register(q0)
meas2.add_register(q1)
meas2.add_register(c0)
meas2.add_register(c1)
meas2.measure(q0, c0)
meas2.measure(q1, c1)
qc2 = circ2 + meas2
qc2.draw()
# -
# It even works when the circuits have different registers. Let's start by making two new circuits:
circ4 = QuantumCircuit(q1)
circ4.x(q1)
circ4.draw()
circ5 = QuantumCircuit(q3)
circ5.h(q3)
circ5.draw()
# The new register is added to the circuit:
(circ4+circ5).draw()
# We have also overloaded `+=` to the `QuantumCircuit` object:
circ4 += circ5
circ4.draw()
# ## Outcomes of Quantum Circuits
#
# In the circuit output, the most significant bit (MSB) is to the left, and the least significant bit (LSB) is to the right (i.e., we follow little-endian ordering from computer science). In this example:
circ.draw()
# qubit register $Q_0$ is prepared in the state $|10\rangle$ and $Q_1$ is in the state $|01\rangle$, giving a total state $|0110\rangle$ ($Q_1\otimes Q_0$).
#
# <div class="alert alert-block alert-info">
# <b>Note:</b> The tensor order in Qiskit goes as $Q_n \otimes \cdots \otimes Q_1 \otimes Q_0$
# </div>
#
# That is, the four-qubit statevector has length 16, with the element at index `int('0110',2)=6` equal to one. Note that indexing starts from zero.
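# The indexing can be checked by hand. A small sketch building the expected statevector directly (independent of the simulator call below):

```python
import numpy as np

idx = int('0110', 2)               # the bitstring read MSB-left gives index 6
expected = np.zeros(16, dtype=complex)
expected[idx] = 1.0                # the |0110> basis state
```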
backend_sim = BasicAer.get_backend('statevector_simulator')
result = execute(circ, backend_sim).result()
state = result.get_statevector(circ)
print(state)
# To check the fidelity of this state with the `basis_state` in Qiskit Terra, use:
state_fidelity(basis_state('0110', 4), state)
# We can also use Qiskit Terra to make the unitary operator representing the circuit (provided there are no measurements). This will be a $16\times16$ matrix equal to $I\otimes X\otimes X\otimes I$. To check this is correct, we can use the `Pauli` class and the `process_fidelity` function.
backend_sim = BasicAer.get_backend('unitary_simulator')
result = execute(circ, backend_sim).result()
unitary = result.get_unitary(circ)
process_fidelity(Pauli(label='IXXI').to_matrix(), unitary)
# To map the information of the quantum state to the classical world, we use the example with measurements `qc`:
qc.draw()
# Since the state has no superpositions, the measurement outcome is deterministic and equal to `'01 10'`. Here a space is used to separate the classical registers.
backend_sim = BasicAer.get_backend('qasm_simulator')
result = execute(qc, backend_sim).result()
counts = result.get_counts(qc)
print(counts)
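# The space-separated key can be split back into per-register bitstrings. A sketch with a hypothetical counts dict mirroring the deterministic result above (the 1024-shot value is assumed, not taken from a run):

```python
counts = {'01 10': 1024}                 # hypothetical: one key per outcome
for key, shots in counts.items():
    # one bitstring per classical register, last-added register (c1) on the left
    c1_bits, c0_bits = key.split(' ')
```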
# To show that it does not matter how you add the registers, we run the same as above on the second example circuit:
# +
backend_sim = BasicAer.get_backend('statevector_simulator')
result = execute(circ2, backend_sim).result()
states = result.get_statevector(circ2)
backend_sim = BasicAer.get_backend('qasm_simulator')
result = execute(qc2, backend_sim).result()
counts = result.get_counts(qc2)
backend_sim = BasicAer.get_backend('unitary_simulator')
result = execute(circ2, backend_sim).result()
unitary = result.get_unitary(circ2)
# -
print(counts)
state_fidelity(basis_state('0110', 4), state)
process_fidelity(Pauli(label='IXXI').to_matrix(), unitary)
# ## Counting circuit resources
#
# A `QuantumCircuit` object provides methods for inquiring its resource use. This includes the number of qubits, operations, and a few other things.
q = QuantumRegister(6)
circuit = QuantumCircuit(q)
circuit.h(q[0])
circuit.ccx(q[0], q[1], q[2])
circuit.cx(q[1], q[3])
circuit.x(q)
circuit.h(q[2])
circuit.h(q[3])
circuit.draw()
# total number of operations in the circuit. no unrolling is done.
circuit.size()
# depth of circuit (number of ops on the critical path)
circuit.depth()
# number of qubits in the circuit
circuit.width()
# a breakdown of operations by type
circuit.count_ops()
# number of unentangled subcircuits in this circuit.
# each subcircuit can in principle be executed on a different quantum processor!
circuit.num_tensor_factors()
| qiskit/advanced/terra/quantum_circuits.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python tensor
# language: python
# name: myenv
# ---
import gensim
import numpy as np
import pandas as pd
import tensorflow as tf
from data_process import morphs_process, batch_iter, sentence_to_index_morphs
from word2vec import make_embedding_vectors
from models import LSTM, CNN
train = pd.read_csv('./data/train-50T.txt', delimiter='\t')
test = pd.read_csv('./data/test-10T.txt', delimiter='\t')
data = train.append(test)
data = data.document
# # Show embedding vectors
tokens = morphs_process(data)
wv_model = gensim.models.Word2Vec(min_count=1, window=5, size=300)
wv_model.build_vocab(tokens)
wv_model.train(tokens, total_examples=wv_model.corpus_count, epochs=wv_model.epochs)
word_vectors = wv_model.wv
print(word_vectors.most_similar('괜찮'))
del wv_model, word_vectors
# # Get embedding vectors using word2vec
embedding, vocab, vocab_size = make_embedding_vectors(list(data))
train = pd.read_csv('./data/train-5T.txt', delimiter='\t')
test = pd.read_csv('./data/test-1T.txt', delimiter='\t')
X_train = train.document
Y_train = train.label
X_test = test.document
Y_test = test.label
# # Sentiment Analysis with LSTM using morphs & word2vec
batches = batch_iter(list(zip(X_train, Y_train)), batch_size=64, num_epochs=15)
# +
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.reset_default_graph()
sess = tf.Session(config=config)
model = LSTM(sess=sess, vocab_size=vocab_size, lr=1e-2)
model.embedding_assign(embedding)
train_acc = []
avgLoss = []
x_test = sentence_to_index_morphs(X_test, vocab)
for step, batch in enumerate(batches):
x_train, y_train = zip(*batch)
x_train = sentence_to_index_morphs(x_train, vocab)
acc = model.get_accuracy(x_train, y_train)
l, _ = model.train(x_train, y_train)
train_acc.append(acc)
avgLoss.append(l)
if step % 100 == 0:
test_loss = model.get_loss(x_test, Y_test)
print('batch:', '%04d' % step, '\ntrain loss:', '%.5f' % np.mean(avgLoss), '\ttest loss:', '%.5f' % test_loss)
test_acc = model.get_accuracy(x_test, Y_test)
print('train accuracy:', '%.3f' % np.mean(train_acc), '\ttest accuracy:', '%.3f' % test_acc, '\n')
avgLoss = []
train_acc = []
# -
# # Sentiment Analysis with CNN using morphs & word2vec
batches = batch_iter(list(zip(X_train, Y_train)), batch_size=64, num_epochs=15)
# +
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.reset_default_graph()
sess = tf.Session(config=config)
max_length = 30
model = CNN(sess=sess, vocab_size=vocab_size, sequence_length=max_length, lr=1e-2)
model.embedding_assign(embedding)
train_acc = []
avgLoss = []
x_test = sentence_to_index_morphs(X_test, vocab, max_length)
for step, batch in enumerate(batches):
x_train, y_train = zip(*batch)
x_train = sentence_to_index_morphs(x_train, vocab, max_length)
acc = model.get_accuracy(x_train, y_train)
l, _ = model.train(x_train, y_train)
train_acc.append(acc)
avgLoss.append(l)
if step % 100 == 0:
test_loss = model.get_loss(x_test, Y_test)
print('batch:', '%04d' % step, '\ntrain loss:', '%.5f' % np.mean(avgLoss), '\ttest loss:', '%.5f' % test_loss)
test_acc = model.get_accuracy(x_test, Y_test)
print('train accuracy:', '%.3f' % np.mean(train_acc), '\ttest accuracy:', '%.3f' % test_acc, '\n')
avgLoss = []
train_acc = []
| 01-sentiment_analysis/09-word2vec.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from tqdm.autonotebook import tqdm
from joblib import Parallel, delayed
import umap
import pandas as pd
from avgn.utils.paths import DATA_DIR, most_recent_subdirectory, ensure_dir
from avgn.signalprocessing.create_spectrogram_dataset import flatten_spectrograms
from avgn.visualization.spectrogram import draw_spec_set
from avgn.visualization.quickplots import draw_projection_plots
# ### Collect data
DATASET_ID = 'european_starling_gentner_segmented'
from avgn.visualization.projections import (
scatter_projections,
draw_projection_transitions,
)
df_loc = DATA_DIR / 'syllable_dfs' / DATASET_ID / 'starling_128.pickle'
df_loc
syllable_df = pd.read_pickle(df_loc)
syllable_df[:3]
len(syllable_df)
import tensorflow as tf
ensure_dir(DATA_DIR / 'tfrecords')
for idx, row in tqdm(syllable_df.iterrows(), total=len(syllable_df)):
break
row
np.shape(row.spectrogram)
# +
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def serialize_example(example):
"""Serialize an item in a dataset
Arguments:
example {[list]} -- list of dictionaries with fields "name" , "_type", and "data"
Returns:
[type] -- [description]
"""
dset_item = {}
for feature in example.keys():
dset_item[feature] = example[feature]["_type"](example[feature]["data"])
example_proto = tf.train.Example(features=tf.train.Features(feature=dset_item))
return example_proto.SerializeToString()
# -
record_loc = DATA_DIR / 'tfrecords' / "starling.tfrecords"
with tf.io.TFRecordWriter((record_loc).as_posix()) as writer:
for idx, row in tqdm(syllable_df.iterrows(), total=len(syllable_df)):
example = serialize_example(
{
"spectrogram": {
"data": row.spectrogram.flatten().tobytes(),
"_type": _bytes_feature,
},
"index": {
"data": idx,
"_type": _int64_feature,
},
"indv": {
"data": np.string_(row.indv).astype("|S7"),
"_type": _bytes_feature,
},
}
)
# write the defined example into the dataset
writer.write(example)
# ### test read data back
from tensorflow.io import FixedLenFeature, parse_single_example
# +
def _dtype_to_tf_feattype(dtype):
""" convert tf dtype to correct tffeature format
"""
if dtype in [tf.float32, tf.int64]:
return dtype
else:
return tf.string
def _parse_function(example_proto, data_types):
""" parse dataset from tfrecord, and convert to correct format
"""
# list features
features = {
lab: FixedLenFeature([], _dtype_to_tf_feattype(dtype))
for lab, dtype in data_types.items()
}
# parse features
parsed_features = parse_single_example(example_proto, features)
feat_dtypes = [tf.float32, tf.string, tf.int64]
# convert the features if they are in the wrong format
parse_list = [
parsed_features[lab]
if dtype in feat_dtypes
else tf.io.decode_raw(parsed_features[lab], dtype)
for lab, dtype in data_types.items()
]
return parse_list
# -
# read the dataset
raw_dataset = tf.data.TFRecordDataset([record_loc.as_posix()])
data_types = {
"spectrogram": tf.uint8,
"index": tf.int64,
"indv": tf.string,
}
# parse each data type to the raw dataset
dataset = raw_dataset.map(lambda x: _parse_function(x, data_types=data_types))
# shuffle the dataset
dataset = dataset.shuffle(buffer_size=10000)
# create batches
dataset = dataset.batch(10)
spec, index, indv = next(iter(dataset))
fig, ax = plt.subplots(ncols = 5, figsize=(15,3))
for i in range(5):
# show the image
ax[i].matshow(spec[i].numpy().reshape(128,128), cmap=plt.cm.Greys, origin="lower")
string_label = indv[i].numpy().decode("utf-8")
ax[i].set_title(string_label)
ax[i].axis('off')
| notebooks/06.0-neural-networks/.ipynb_checkpoints/1.0-starling-to-tfrecord_128-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Latent Dirichlet Allocation Demo
#
# ## Import dependencies
import os
import time
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.datasets import fetch_20newsgroups
from sklearn.manifold import TSNE
import bokeh.plotting as bp
from bokeh.plotting import save
from bokeh.models import HoverTool
# ## Corpus processing
#
# The 20 Newsgroups data set is a collection of roughly 20,000 newsgroup documents spread across 20 different topics, which makes it a nice testing ground for topic classification.
n_iter = 500
n_top_words = 5
threshold = 0.0
remove = ('headers', 'footers', 'quotes')
newsgroups = fetch_20newsgroups(subset='all', remove=remove)
# newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
corpus_raw = [' '.join(filter(str.isalpha, raw.lower().split())) for raw in
newsgroups.data]
# print(newsgroups_train.data)
print("Before:\n", newsgroups.data[0])
print("After:\n", corpus_raw[0])
# ## Clean documents
# +
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
import string
stopwords = set(stopwords.words('english'))
punctuation = set(string.punctuation)
lemmatize = WordNetLemmatizer()
def cleaning(article):
one = " ".join([i for i in article.split() if i not in stopwords])
two = "".join(i for i in one if i not in punctuation)
three = " ".join(lemmatize.lemmatize(i) for i in two.split())
four = three.split(" ")
return four
corpus_tokenized = [cleaning(doc) for doc in corpus_raw]
print(corpus_tokenized[0])
# -
# ## Create BOW matrix for the model
# +
from time import time
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO,
filename='running.log',filemode='w')
# Importing Gensim
import gensim
from gensim import corpora
from sklearn.feature_extraction.text import CountVectorizer
# Creating the term dictionary of our corpus, where every unique term is assigned an index.
dictionary = corpora.Dictionary(corpus_tokenized)
doc_term_matrix = [dictionary.doc2bow(doc) for doc in corpus_tokenized]
print("Len of raw corpus: %i | Len of matrix: %i" % (len(corpus_raw), len(doc_term_matrix)))
print("Processed:\n", doc_term_matrix[0])
# -
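# `doc2bow` maps a token list to sparse `(token_id, count)` pairs. A minimal
# pure-Python equivalent over a hypothetical vocabulary (gensim's real
# implementation also handles dictionary management):

```python
from collections import Counter

def doc2bow_sketch(tokens, word2id):
    # count tokens present in the vocabulary; unknown tokens are dropped
    counts = Counter(tok for tok in tokens if tok in word2id)
    return sorted((word2id[tok], n) for tok, n in counts.items())

vocab = {"cat": 0, "dog": 1}
print(doc2bow_sketch(["cat", "dog", "cat", "fish"], vocab))  # [(0, 2), (1, 1)]
```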
# ## Train LDA model
#
# This can take a couple of minutes. I chose to use the actual number of newsgroup categories as the number of topics for the sake of visualization.
# +
from gensim.models.ldamodel import LdaModel
start = time()
# Creating the object for LDA model using gensim library
# Lda = gensim.models.ldamodel.LdaModel
# Get topics
num_topics = len(newsgroups.target_names)
print(num_topics)
# Run and train the LDA model on the document-term matrix.
ldamodel = LdaModel(doc_term_matrix, num_topics=num_topics, id2word = dictionary, passes=50)
print('used: {:.2f}s'.format(time()-start))
ldamodel.save('topic.model')
print("Model Saved")
# -
# ## Load model
#
# Load model if you already have a trained model.
# +
# Loads saved model
from gensim.models import LdaModel
loaded_model = LdaModel.load('topic.model')
print(loaded_model.print_topics(num_topics=2, num_words=4))
# -
# ## Top words in each topic
# Topics
for i in ldamodel.print_topics():
for j in i: print(j)
# ## Let's test it out on the test set
# +
remove = ('headers', 'footers', 'quotes')
newsgroups_test = fetch_20newsgroups(subset='all', remove=remove)
# newsgroups_test = fetch_20newsgroups(subset='test', remove=remove)
corpus_raw_test = [' '.join(filter(str.isalpha, raw.lower().split())) for raw in
newsgroups_test.data]
print("Original Sentence:\n", newsgroups_test.data[0])
corpus_tokenized_test = [cleaning(doc) for doc in corpus_raw_test]
doc_term_matrix_test = [dictionary.doc2bow(doc) for doc in corpus_tokenized_test]
print("\nAfter processing:\n", doc_term_matrix_test[0])
# -
# ## Example output for one sentence
test_output = loaded_model[doc_term_matrix_test[100]]
print("Output:\n", test_output)
for i in test_output:
print(i)
# # Visualization
#
# ## Get vectors
#
# We have to predict the probabilities for each document and put them in a matrix.
# +
prob_matrix = np.zeros((len(doc_term_matrix_test), num_topics))
for i, doc in enumerate(doc_term_matrix_test):
predictions = loaded_model[doc]
idx, prob = zip(*predictions)
prob_matrix[i, idx] = prob
# -
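# On toy data, the scatter of `(topic, probability)` pairs into a dense row
# looks like this (numpy fancy indexing with the topic ids; the pairs are a
# hypothetical LDA output for one document):

```python
import numpy as np

predictions = [(0, 0.7), (3, 0.3)]  # hypothetical (topic_id, probability) pairs
row = np.zeros(5)
idx, prob = zip(*predictions)
row[list(idx)] = prob  # place each probability at its topic's column
print(row)  # [0.7 0.  0.  0.3 0. ]
```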
# ## t-SNE
#
# 20 dimensions are hard to visualize, so let's run t-SNE to reduce the dimensionality. This can also take a couple of minutes.
# +
_idx = np.amax(prob_matrix, axis=1) > threshold # idx of news that > threshold
_topics = prob_matrix[_idx]
num_example = len(_topics)
tsne_model = TSNE(n_components=2, verbose=1, random_state=0, angle=.99,
init='pca')
tsne_lda = tsne_model.fit_transform(_topics[:num_example])
# -
# ## Set up metadata for visualization
# +
# find the most probable topic for each news
_lda_keys = []
for i in range(_topics.shape[0]):
_lda_keys += _topics[i].argmax(),
# show topics and their top words
topic_summaries = []
for i in range(num_topics):
word, _ = zip(*loaded_model.show_topic(i, topn=n_top_words))
topic_summaries.append(' '.join(word))
# 20 colors
colormap = np.array([
"#8c564b", "#c49c94", "#e377c2", "#f7b6d2", "#7f7f7f",
"#1f77b4", "#aec7e8", "#ff7f0e", "#ffbb78", "#2ca02c",
"#98df8a", "#d62728", "#ff9896", "#9467bd", "#c5b0d5",
"#c7c7c7", "#bcbd22", "#dbdb8d", "#17becf", "#9edae5"
])
title = "[20 newsgroups] t-SNE visualization of LDA model trained on {} news, " \
"{} topics, thresholding at {} topic probability, {} iter ({} data " \
"points and top {} words)".format(
prob_matrix.shape[0], num_topics, threshold, n_iter, num_example, n_top_words)
# -
# ## Visualize!
# +
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource, CDSView
from bokeh.io import output_notebook
output_notebook()
p = bp.figure(plot_width=1400, plot_height=1100,
title=title,
tools="pan,wheel_zoom,box_zoom,reset,hover,previewsave",
x_axis_type=None, y_axis_type=None, min_border=1)
source = ColumnDataSource(data=dict(
x=tsne_lda[:,0],
y=tsne_lda[:, 1],
color=colormap[_lda_keys][:num_example],
content=corpus_raw_test[:num_example],
topic_key=_lda_keys[:num_example]
)
)
p.scatter(x='x', y='y', color='color', source=source)
topic_coord = np.empty((prob_matrix.shape[1], 2)) * np.nan
for topic_num in _lda_keys:
if not np.isnan(topic_coord).any():
break
topic_coord[topic_num] = tsne_lda[_lda_keys.index(topic_num)]
# plot crucial words
for i in range(prob_matrix.shape[1]):
p.text(topic_coord[i, 0], topic_coord[i, 1], [topic_summaries[i]])
# hover tools
hover = p.select(dict(type=HoverTool))
hover.tooltips = {"content": "@content - topic: @topic_key"}
# p.scatter(x=tsne_lda[:,0], y=tsne_lda[:, 1], color=colormap[_lda_keys][:num_example])
show(p)
# (source notebook: lda_presentation/lda_gensim_news.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import joblib
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
# +
def lower_data(data):
return data.lower()
def encode_distance(data):
less_2_half_km = ["tanjong pagar","bras basah","bugis","chinatown","cantonment","havelock road"]
less_5_km = ["toa payoh","geylang","thomson","dakota"]
less_7_half_km = ["bishan","marine parade","farrer road","circuit road"]
more_7_half_km = ["tanah merah", "tampines",'bedok','jurong','sembawang','woodlands','yishun','pasir ris','yio chu kang']
data = data.strip()
if data in less_2_half_km:
return 1
if data in less_5_km:
return 2
if data in less_7_half_km:
return 3
if data in more_7_half_km:
return 4
else:
return -1
def encode_yes_no(data):
if data == 'Yes':
return 1
if data == 'No' :
return 0
else:
return -1
def encode_gender(data):
if data == "Male":
return 1
if data == "Female":
return 0
def encode_understanding(data):
if "No understanding" in data:
return 0
if "A little" in data:
return 1
if "Very well" in data:
return 2
def encode_freq_exercise(data):
if data == "4 times a week to everyday":
return 3
if data == "Once a week to 3 times a week":
return 2
if data == "I don't exercise":
return 1
def encode_socialise(data):
data = data.lower()
data = data.replace(",","")
data = data.replace(".","")
socialise_words = ["play","talk","friends","visit","meet","chat","people","family","attending"]
for word in data.split(" "):
if word in socialise_words:
return 1
return 0
def encode_volunteer(data):
data = data.lower()
data = data.replace(",","")
data = data.replace(".","")
socialise_words = ["hospice","voluntary","volunteer"]
for word in data.split(" "):
if word in socialise_words:
return 1
return 0
def prepare_data(df):
for col in df:
#get dtype for column
dt = df[col].dtype
#check if it is a number
if dt == int or dt == float:
df[col] = df[col].fillna(0)
else:
df[col] = df[col].fillna("-")
df.columns = ['gender','age','residence','is_working','freq_brasbash','free_time_activities','interact_w_youth','freq_exercise'
,'exercise_reason','forms_of_exercise','travel_20mins_exercise','incentives_travel','places_attract_travel','visited_smu'
,'purpose_at_smu','seen_campus_green','impression_campus_green','activity_at_campus_green','consider_coming','heard_of_connexion','understanding_of_tech'
,'why_lack_understanding','kind_of_tech_used','learn_tech_youth','interest_learn_type','know_shop_online','reasons_shop_online','reason_dont','guidance_learn_willingness_shop','type_workshops']
#Filter out those below 60 years old
df = df[df.age >= 60]
df["residence"] = df["residence"].apply(lower_data)
df.loc[df["residence"] == "20 km from bugis","residence"] = 'jurong'
df["approx_distance_cat"] = df["residence"].apply(encode_distance)
##Encoding Variables
#Encode Yes or No
df["is_working"] = df["is_working"].apply(encode_yes_no)
df["interact_w_youth"] = df["interact_w_youth"].apply(encode_yes_no)
df["travel_20mins_exercise"] = df["travel_20mins_exercise"].apply(encode_yes_no)
df["incentives_travel"] = df["incentives_travel"].apply(encode_yes_no)
df["visited_smu"] = df["visited_smu"].apply(encode_yes_no)
df["consider_coming"] = df["consider_coming"].apply(encode_yes_no)
df["heard_of_connexion"] = df["heard_of_connexion"].apply(encode_yes_no)
df['learn_tech_youth'] = df['learn_tech_youth'].apply(encode_yes_no)
df["know_shop_online"] = df["know_shop_online"].apply(encode_yes_no)
df["guidance_learn_willingness_shop"] = df["guidance_learn_willingness_shop"].apply(encode_yes_no)
#Encode other numerical data
df["understanding_of_tech"] = df["understanding_of_tech"].apply(encode_understanding)
df["freq_exercise"] = df["freq_exercise"].apply(encode_freq_exercise)
df["has_socialise"] = df["free_time_activities"].apply(encode_socialise)
df["has_volunteer"] = df["free_time_activities"].apply(encode_volunteer)
df["gender"] = df["gender"].apply(encode_gender)
df["preference_of_activity"] = ""
for index, row in df.iterrows():
if row["consider_coming"] == 1 and row["learn_tech_youth"] == 1:
df.at[index,"preference_of_activity"] = 3
elif row["consider_coming"] == 1:
df.at[index,"preference_of_activity"] = 2
elif row["learn_tech_youth"] == 1:
df.at[index,"preference_of_activity"] = 1
else:
df.at[index,"preference_of_activity"] = 0
print(df["preference_of_activity"])
df["preference_of_activity"] = df["preference_of_activity"].astype(int)
return df
def return_new_df_with_columns(df,cols):
series = []
for col in cols:
series.append(df[col])
x_data = pd.concat(series,axis=1)
return x_data
# -
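# The encoders above are essentially ordinal lookups; a compact
# dictionary-based sketch of the same idea (a toy subset of the areas, not the
# full mapping used above):

```python
def encode_distance_sketch(area):
    # hypothetical subset of the distance bands defined above
    bands = {"bugis": 1, "toa payoh": 2, "bishan": 3, "tampines": 4}
    return bands.get(area.strip().lower(), -1)  # -1 for unrecognised areas

print(encode_distance_sketch(" Bugis "))   # 1
print(encode_distance_sketch("unknown"))   # -1
```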
df = pd.read_csv("elderly_opinions.csv")
df = prepare_data(df)  # perform additional cleaning / preprocessing of the data
df.head(15)
# +
#x_columns = ["gender","age","is_working","interact_w_youth","freq_exercise","know_shop_online","approx_distance_cat","has_socialise","has_volunteer","preference_of_activity"]
# +
from sklearn.model_selection import train_test_split
df = pd.read_csv("elderly_opinions.csv")
df = prepare_data(df)
x_exercise = ["interact_w_youth","freq_exercise","approx_distance_cat","has_socialise"]
y_exercise = df["consider_coming"]
exercise_variables_df = return_new_df_with_columns(df,x_exercise)
exercise_variables_df.head(5)
# -
#Split into train and test set
X_train, X_test, y_train, y_test = train_test_split(exercise_variables_df, y_exercise, test_size = 0.1,
stratify = y_exercise,
random_state = 1)
y_train
# +
from sklearn.naive_bayes import GaussianNB
print(y_train.value_counts(1))
NB = GaussianNB()
NB.fit(X_train, y_train)
print(NB.classes_)
print(NB.class_prior_)
print()
print("===For Class 0===")
for i in range(len(x_exercise)):
print(x_exercise[i],":",NB.theta_[0][i])
print()
print("===For Class 1===")
for i in range(len(x_exercise)):
print(x_exercise[i],":",NB.theta_[1][i])
# +
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier(criterion = 'entropy', max_depth = 3, random_state = 1)
# Fit dt to the training set
dt.fit(X_train, y_train)
# +
from sklearn.tree import export_graphviz
import pydotplus
from io import StringIO  # sklearn.externals.six was removed in newer scikit-learn
from IPython.display import Image
feature_cols = x_exercise
dot_data = StringIO()
export_graphviz(dt, out_file = dot_data,
feature_names = feature_cols,
filled = True, rounded = True,
special_characters = True,class_names=["Not Interested","Interested"])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('tree_elderly_exercise.png')
Image(graph.create_png())
# -
dt.classes_
# +
#Cross Validation
from sklearn.model_selection import LeaveOneOut
loo = LeaveOneOut()
dt_score = []
for train_index, test_index in loo.split(X_train.to_numpy()):
    X_loo_train, X_loo_test = X_train.to_numpy()[train_index], X_train.to_numpy()[test_index]
    y_loo_train, y_loo_test = y_train.to_numpy()[train_index], y_train.to_numpy()[test_index]
    # Refit a fresh tree on each fold; predicting with the tree already fitted
    # on the full X_train would leak the held-out sample into training.
    fold_dt = DecisionTreeClassifier(criterion = 'entropy', max_depth = 3, random_state = 1)
    fold_dt.fit(X_loo_train, y_loo_train)
    y_predict = fold_dt.predict(X_loo_test)
    dt_score.append(metrics.accuracy_score(y_loo_test,y_predict))
print("Avg Cross Validation Score of Decision Tree",sum(dt_score)/len(dt_score))
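# The split scheme `LeaveOneOut` produces is simple to sketch in pure Python —
# each sample takes one turn as the single-element test set:

```python
def loo_splits(n):
    # yield (train_indices, test_indices) for each leave-one-out fold
    for i in range(n):
        yield [j for j in range(n) if j != i], [i]

print(list(loo_splits(3)))
# [([1, 2], [0]), ([0, 2], [1]), ([0, 1], [2])]
```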
# +
y_predict = dt.predict(X_test)
print("Accuracy for Decision Tree :")
print(round(metrics.accuracy_score(y_test, y_predict), 3))
#cnf_matrix = confusion_matrix(y_test, y_pred)
cnf_matrix = confusion_matrix(y_test, y_predict, labels = [1,0])
print(cnf_matrix)
tn, fp, fn, tp = confusion_matrix(y_test, y_predict).ravel()
specificity = tn / (tn+fp)
precision = tp / (tp + fp)
recall_or_sensitivity = tp / (tp + fn)
print("Specificity :", round(specificity,2))
print("Precision :", round(precision, 2))
print("Recall or Sensitivity :", round(recall_or_sensitivity, 2))
print("F Score",f1_score(y_test, y_predict))
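# As a sanity check of the metric formulas used above, here they are computed
# on a hypothetical confusion matrix:

```python
# hypothetical counts: true negatives, false positives, false negatives, true positives
tn, fp, fn, tp = 50, 10, 5, 35
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(specificity, precision, recall, f1)
```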
# +
from sklearn.ensemble import RandomForestClassifier
max_acc = 0
min_acc = 0.5
max_rfc = None
estimators = 10
## Only uncomment if you intend to re-train the model (please don't overwrite the best model;)
# for i in range(0,200):
# rfc = RandomForestClassifier(criterion='entropy',n_estimators=estimators)
# rfc.fit(X_train, y_train)
# y_pred = rfc.predict(X_test)
# acc_score = round(metrics.accuracy_score(y_test, y_pred), 3)
# if(acc_score >= min_acc and acc_score > max_acc):
# max_acc = acc_score
# max_rfc = rfc
# print("Accuracy for Random Forest Tree :")
# print(acc_score)
# joblib.dump(max_rfc,"random_forest_classifier_elderly_exercise.sav")
max_rfc = joblib.load("./backup_best_models/random_forest_classifier_elderly_exercise.sav")
# +
#Cross Validation
loo = LeaveOneOut()
dt_score = []
for train_index, test_index in loo.split(X_train.to_numpy()):
X_loo_train, X_loo_test = X_train.to_numpy()[train_index], X_train.to_numpy()[test_index]
y_loo_train, y_loo_test = y_train.to_numpy()[train_index], y_train.to_numpy()[test_index]
    # Note: max_rfc was fitted on the full training data, so this loop measures
    # resubstitution accuracy rather than a true leave-one-out estimate.
    y_predict = max_rfc.predict(X_loo_test)
dt_score.append(metrics.accuracy_score(y_loo_test,y_predict))
print("Avg Cross Validation Score of Random Forest",sum(dt_score)/len(dt_score))
# -
for i in range(len(x_exercise)):
print(x_exercise[i],":",max_rfc.feature_importances_[i])
# +
#Calculate the probability that respondents would change their mind if an incentive is provided
incentives_travel_df = df.loc[df["travel_20mins_exercise"] == 0]
len(incentives_travel_df["incentives_travel"])
incentives_x = incentives_travel_df["travel_20mins_exercise"]
incentives_y = incentives_travel_df["incentives_travel"]
print("Probability of changing their mind to travel if there is incentive")
incentives_y.value_counts(1)
# -
print(incentives_x)
print(incentives_y)
# +
will_come_w_incentives = incentives_travel_df.loc[incentives_travel_df["incentives_travel"] == 1]
will_come_w_incentives["consider_coming"].value_counts(1)
#All elderly respondents who would travel with an incentive would also consider coming to SMU to exercise
will_not_come_w_incentive = incentives_travel_df.loc[incentives_travel_df["incentives_travel"]== 0]
will_not_come_w_incentive["consider_coming"].value_counts(1)
#Only 25% of elderly respondents who would not travel with an incentive would still consider coming to SMU to exercise
# +
def count_tech_used(data):
tech_list = data.split(",")
if len(tech_list) == 1 :
if(tech_list[0] == '-'):
return 0
return len(tech_list)
df["no_of_tech_used"] = df["kind_of_tech_used"].apply(count_tech_used)
df["no_of_tech_used"]
# +
x_tech_cols = ["interact_w_youth","has_socialise","has_volunteer","approx_distance_cat","understanding_of_tech","no_of_tech_used","know_shop_online"]
y_tech = df["learn_tech_youth"]
x_tech = return_new_df_with_columns(df,x_tech_cols)
#Split into train and test set
X_train, X_test, y_train, y_test = train_test_split(x_tech, y_tech, test_size = 0.15,
stratify = y_tech,
random_state = 6)
X_train
# +
from sklearn.naive_bayes import GaussianNB
NB = GaussianNB()
NB.fit(X_train, y_train)
print(NB.classes_)
print(NB.class_prior_)
print()
print("===For Class 0===")
for i in range(len(x_tech_cols)):
print(x_tech_cols[i],":",NB.theta_[0][i])
print()
print("===For Class 1===")
for i in range(len(x_tech_cols)):
print(x_tech_cols[i],":",NB.theta_[1][i])
# +
## ONLY UNCOMMENT IF RETRAINING
# dt = DecisionTreeClassifier(criterion = 'entropy', max_depth = 5, random_state = 1)
# # Fit dt to the training set
# dt.fit(X_train, y_train)
# joblib.dump(dt,"decision_tree_elderly_tech.sav")
dt = joblib.load("./backup_best_models/decision_tree_elderly_tech.sav")
# +
feature_cols = x_tech_cols
dot_data = StringIO()
export_graphviz(dt, out_file = dot_data,
feature_names = feature_cols,
filled = True, rounded = True,
special_characters = True,class_names=["Not Interested","Interested"])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('tree_elderly_tech.png')
Image(graph.create_png())
# -
dt.classes_
# +
#Cross Validation
loo = LeaveOneOut()
dt_score = []
for train_index, test_index in loo.split(X_train.to_numpy()):
X_loo_train, X_loo_test = X_train.to_numpy()[train_index], X_train.to_numpy()[test_index]
y_loo_train, y_loo_test = y_train.to_numpy()[train_index], y_train.to_numpy()[test_index]
    # Note: dt is not refit on each fold here, so this is not a true LOO estimate.
    y_predict = dt.predict(X_loo_test)
dt_score.append(metrics.accuracy_score(y_loo_test,y_predict))
print("Avg Cross Validation Score of DT",sum(dt_score)/len(dt_score))
# +
result = dt.predict(X_test)
print(result)
acc_score = round(metrics.accuracy_score(y_test, result), 3)
print(acc_score)
cnf_matrix = confusion_matrix(y_test, result, labels = [1,0])
print(cnf_matrix)
tn, fp, fn, tp = confusion_matrix(y_test, result).ravel()
specificity = tn / (tn+fp)
precision = tp / (tp + fp)
recall_or_sensitivity = tp / (tp + fn)
print("Specificity :", round(specificity,2))
print("Precision :", round(precision, 2))
print("Recall or Sensitivity :", round(recall_or_sensitivity, 2))
print("F Score",f1_score(y_test, result))
# +
max_acc = 0
min_acc = 0.5
max_rfc = None
estimators = 8
## Only uncomment if you intend to re-train the model (please don't overwrite the best model;)
# for i in range(0,200):
# rfc = RandomForestClassifier(criterion='entropy',n_estimators=estimators)
# rfc.fit(X_train, y_train)
# y_pred = rfc.predict(X_test)
# acc_score = round(metrics.accuracy_score(y_test, y_pred), 3)
# if(acc_score >= min_acc and acc_score > max_acc):
# max_acc = acc_score
# max_rfc = rfc
# print("Accuracy for Random Forest Tree :")
# print(acc_score)
# joblib.dump(max_rfc,"random_forest_classifier_elderly_tech.sav")
max_rfc = joblib.load("./backup_best_models/random_forest_classifier_elderly_tech.sav")
# +
#Cross Validation
loo = LeaveOneOut()
dt_score = []
for train_index, test_index in loo.split(X_train.to_numpy()):
X_loo_train, X_loo_test = X_train.to_numpy()[train_index], X_train.to_numpy()[test_index]
y_loo_train, y_loo_test = y_train.to_numpy()[train_index], y_train.to_numpy()[test_index]
y_predict = max_rfc.predict(X_loo_test)
dt_score.append(metrics.accuracy_score(y_loo_test,y_predict))
print("Avg Cross Validation Score of Random Forest",sum(dt_score)/len(dt_score))
# -
for i in range(len(x_tech_cols)):
print(x_tech_cols[i],":",max_rfc.feature_importances_[i])
# (source notebook: Classification/ElderlyOpinion_Models.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/syslogg/classifier-algorithms/blob/master/mlp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="BF8hX1CTgj6-" colab_type="text"
# # Multilayer Neural Network (MLP) Implementation
# - <NAME> (1510081)
# + id="OD6VJgH0KxSP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="8cada8b8-3195-4631-baeb-c1c13a41e127"
import pandas as pd
import numpy as np
import math as m
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import warnings
import random as rnd
from random import randint
warnings.filterwarnings('ignore')
ds_derm = pd.read_csv('derm.csv')
ds_derm.drop(['age'], axis=1, inplace=True)  # drop the AGE column
ds_derm.head()
# + id="wtXCovkIXNAR" colab_type="code" colab={}
last_col = ds_derm.columns[len(ds_derm.columns)-1]
classes = list(ds_derm[last_col].unique())
len_cols = len(ds_derm.columns) - 1
# + id="dhq90SOeWcJJ" colab_type="code" colab={}
# One-hot encoding
# Encode each class as an array of zeros with a single one
def one_hot_encoding(classes):
    cl_onehot = np.zeros((len(classes),len(classes)),dtype=int)
    np.fill_diagonal(cl_onehot,1)
    r = [(classes[i], cl) for i, cl in enumerate(cl_onehot)]
    return r
# Encode the expected (target) classes
def encode_expected(expected, encoded_class):
    return np.array([ list(filter(lambda e: e[0] == x, encoded_class))[0][1] for x in expected ])
# Encode all classes present in the dataset
def encode_class(ds):
    return one_hot_encoding(pd.unique(ds.iloc[:,-1:].values.flatten()))
# Decode a predicted one-hot vector back into its class label;
# if the prediction is not a clean one-hot vector, fall back to a random class
def decode_result(encoded_class, value):
    if sum(value) != 1:
        value = list(encoded_class[randint(0,len(encoded_class)-1)][1])
    return list(filter(lambda x: list(x[1]) == value,encoded_class))[0][0]
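# On toy labels, the one-hot scheme above amounts to handing each class a row
# of an identity matrix (hypothetical class names):

```python
import numpy as np

labels = ["psoriasis", "eczema", "lichen"]  # hypothetical class names
eye = np.eye(len(labels), dtype=int)
encoding = {lab: row for lab, row in zip(labels, eye)}
print(encoding["eczema"])  # [0 1 0]
```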
# + id="VGnfZwCYK5Sm" colab_type="code" colab={}
# A layer of neurons; weights initialised uniformly in [-1, 1]
class NeuronLayer():
    def __init__(self, number_of_neurons, number_of_inputs_per_neuron):
        self.synaptic_weights = 2 * np.random.random((number_of_inputs_per_neuron, number_of_neurons)) - 1
# + id="b5Xz6SfJLEAB" colab_type="code" colab={}
# The neural network itself (built with 2 layers by default)
class NeuralNetwork():
    # Constructor
    def __init__(self, layer1, layer2):
        self.layer1 = layer1
        self.layer2 = layer2
    # Sigmoid activation
    def __sigmoid(self, x):
        return 1 / (1 + np.exp(-x))
    # Derivative of the sigmoid (expects x to already be a sigmoid output)
    def __sigmoid_derivative(self, x):
        return x * (1 - x)
    # Train the network using the delta rule (backpropagation)
    def train(self, inputs_training, outputs_training, num_iterations):
        for iteration in range(0, num_iterations):
            # Forward pass through both layers
            output_layer1, output_layer2 = self.__think(inputs_training)
            # Error and delta for layer 2 (output layer)
            layer2_error = outputs_training - output_layer2
            layer2_delta = layer2_error * self.__sigmoid_derivative(output_layer2)
            # Error and delta for layer 1 (hidden layer), backpropagated
            layer1_error = layer2_delta.dot(self.layer2.synaptic_weights.T)
            layer1_delta = layer1_error * self.__sigmoid_derivative(output_layer1)
            # Weight adjustments for each layer
            layer1_adjustment = inputs_training.T.dot(layer1_delta)
            layer2_adjustment = output_layer1.T.dot(layer2_delta)
            # Apply the weight updates
            self.layer1.synaptic_weights += layer1_adjustment
            self.layer2.synaptic_weights += layer2_adjustment
    def __think(self, input_training):
        output_layer1 = self.__sigmoid(np.dot(input_training, self.layer1.synaptic_weights))
        output_layer2 = self.__sigmoid(np.dot(output_layer1, self.layer2.synaptic_weights))
        return output_layer1, output_layer2
    def predict(self, input_):
        h, out = self.__think(input_)
        result = [1 if o >= 0.5 else 0 for o in out]
        return result
    def print_weights(self):
        print('Layer 1:')
        print(self.layer1.synaptic_weights)
        print('Layer 2:')
        print(self.layer2.synaptic_weights)
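# The sigmoid derivative used above expects its argument to already be a
# sigmoid output, relying on the identity σ'(z) = σ(z)(1 − σ(z)). A quick
# numerical check of that identity with a central difference:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

z = 0.5
s = sigmoid(z)
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)  # central difference
print(abs(s * (1 - s) - numeric) < 1e-6)  # True
```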
# + id="W-MSQtJRWGJF" colab_type="code" colab={}
def train(dataset_train, dataset_test):
    count_correct = 0
    count_incorrect = 0
    count_by_classes_correct = [0 for i in range(0,len(classes))]
    count_by_classes_incorrect = [0 for i in range(0,len(classes))]
    # Encode the dataset classes
    encoded_class = encode_class(dataset_train)
    # Two-layer network: a hidden layer with 16 neurons and an output layer
    # with one neuron per class.
    # Layer 1: 16 neurons, one input per feature column of the dermatology dataset
    l1 = NeuronLayer(16,len_cols)
    # Layer 2 (output): one neuron per class, each fed by the 16 hidden-layer outputs
    l2 = NeuronLayer(len(classes), 16)
    neural_network = NeuralNetwork(l1, l2)
    inputs = dataset_train.iloc[:,:-1].values
    outputs = dataset_train.iloc[:,-1:].values
    outputs_encoded = encode_expected(outputs,encoded_class)
    neural_network.train(inputs, outputs_encoded, 10000)
    for index, row in dataset_test.iterrows():
        tuple_t = list(row)
        tuple_t.pop()
        r = neural_network.predict(tuple_t)  # run the test tuple through the network
        result = decode_result(encoded_class, r)
        # Tally correct / incorrect predictions, overall and per class
        if result == row[last_col]:
            count_correct += 1
            count_by_classes_correct[classes.index(result)] += 1
        else:
            count_incorrect += 1
            count_by_classes_incorrect[classes.index(result)] += 1
    return count_correct, count_incorrect, count_by_classes_correct, count_by_classes_incorrect
# + id="nTx6PS_oWGZT" colab_type="code" colab={}
def seperate_ds_by_class(dataset, percentage):
    rds_train = pd.DataFrame()
    rds_test = pd.DataFrame()
    for c in classes:
        nds = dataset[dataset[last_col]==c]
        # Sample (and shuffle) a fraction of this class's rows for training
        ds_train = nds.sample(frac=percentage, random_state=randint(0,15100))
        # The remaining rows of the class become the test set
        ds_test = nds.drop(ds_train.index)
        rds_train = rds_train.append(ds_train)
        rds_test = rds_test.append(ds_test)
    rds_train = rds_train.reset_index()  # reset index
    rds_test = rds_test.reset_index()  # reset index
    rds_train.drop('index',1,inplace=True)  # drop the leftover index column
    rds_test.drop('index',1,inplace=True)  # drop the leftover index column
    return (rds_train, rds_test)
# + id="Hs08EUenWGdf" colab_type="code" colab={}
def run_nth(ds,percentage, number):
percentages_correct = list()
prob_correct_by_class = []
for i in range(0,number):
ds_train, ds_test = seperate_ds_by_class(ds,percentage)
correct, incorrect, count_by_classes_correct, count_by_classes_incorrect = train(ds_train, ds_test)
by_class = []
for count_correct, count_incorrect in zip(count_by_classes_correct, count_by_classes_incorrect):
if count_correct+count_incorrect != 0:
by_class.append(count_correct/(count_correct+count_incorrect))
else:
by_class.append(0)
prob_correct_by_class.append(by_class)
percentages_correct.append(correct/(correct+incorrect))
return (percentages_correct, prob_correct_by_class)
# + id="9vapV0ZyWGhA" colab_type="code" colab={}
percents, prob_by_class = run_nth(ds_derm,0.8,30)
# + id="aL2WgHrXWGkQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 108} outputId="bc7c0438-8f43-4208-8969-f849f897bd6a"
taxa_acerto_min=np.min(percents)
taxa_acerto_max=np.max(percents)
taxa_acerto_med=np.mean(percents)
print('Accuracy')
print('--------')
print('Minimum: ' + str(taxa_acerto_min))
print('Maximum: ' + str(taxa_acerto_max))
print('Mean: '+str(taxa_acerto_med))
# + id="zSbdou1SWGnL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 162} outputId="5093f066-b3ff-41c4-88b4-1937507d1a81"
print('Mean accuracy per class')
print('-----------------------')
ar_value = [ np.mean(m) for m in np.array(prob_by_class).transpose() ]
for i, _class in enumerate(ar_value):
    print('Class \'' + str(classes[i]) +'\' : ' + str(_class))
# (source notebook: mlp.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Finding and Loading a Pipeline
#
# In this short tutorial we will show you how to search for pipelines suitable to solve
# your prediction problem.
# In order to find a suitable pipeline, the first thing we need is to identify
# the type of problem (data modality + task type) that we are facing.
#
# This is a full list of current data modalities and task types that we cover:
#
# | Problem Type | Data Modality | Task Type |
# |:-------------------------------------|:--------------|:------------------------|
# | Single Table Classification | single_table | classification |
# | Single Table Regression | single_table | regression |
# | Single Table Collaborative Filtering | single_table | collaborative_filtering |
# | Multi Table Classification | multi_table | classification |
# | Multi Table Regression | multi_table | regression |
# | Time Series Classification | timeseries | classification |
# | Time Series Regression | timeseries | regression |
# | Time Series Forecasting | timeseries | forecasting |
# | Time Series Anomaly Detection | timeseries | anomaly_detection |
# | Image Classification | image | classification |
# | Image Regression | image | regression |
# | Graph Link Prediction | graph | link_prediction |
# | Graph Vertex Nomination | graph | vertex_nomination |
# | Graph Community Detection | graph | community_detection |
# | Graph Matching | graph | graph_matching |
# Once we have identified our data modality and task type we can use the
# `mlblocks.discovery.find_pipelines` function to find all the pipelines
# that support this particular problem type.
#
# For example, if we are looking for a pipeline to work on Image Classification
# we will do the following query.
# +
from mlblocks.discovery import find_pipelines
filters = {
'metadata.data_modality': 'image',
'metadata.task_type': 'classification',
}
find_pipelines(filters=filters)
# -
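# Under the hood this amounts to filtering pipeline annotations by dotted
# metadata keys. A rough, hypothetical sketch of that matching logic (the real
# `find_pipelines` searches the installed pipeline catalog):

```python
pipelines = [  # hypothetical catalog entries
    {"name": "image.classification.resnet50.xgboost",
     "metadata": {"data_modality": "image", "task_type": "classification"}},
    {"name": "single_table.regression.xgboost",
     "metadata": {"data_modality": "single_table", "task_type": "regression"}},
]

def find_matching(filters, candidates):
    def matches(entry):
        for dotted_key, expected in filters.items():
            value = entry
            for part in dotted_key.split("."):  # walk the nested dicts
                value = value.get(part, {}) if isinstance(value, dict) else {}
            if value != expected:
                return False
        return True
    return [p["name"] for p in candidates if matches(p)]

print(find_matching({"metadata.data_modality": "image",
                     "metadata.task_type": "classification"}, pipelines))
```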
# After finding and choosing a pipeline, we can load it as an `MLPipeline` object
# by passing its name to the `MLPipeline` constructor.
# +
from mlblocks import MLPipeline
pipeline = MLPipeline('image.classification.resnet50.xgboost')
# (source notebook: examples/tutorials/2. Finding and Loading a Pipeline.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import sys
from QUEEN.queen import *
if "output" not in os.listdir("./"):
os.mkdir("output")
# ----
# #### Example code 1: Create a QUEEN class object (blunt-ends)
# A `QUEEN_object` (blunt-ended) is created by providing its top-strand sequence (5’-to-3’). By default, the DNA topology will be linear.
dna = QUEEN(seq="CCGGTATGCGTCGA")
# -----
# #### Example code 2: Create a QUEEN class object (sticky-end)
# The left and right values separated by `"/"` give the top and bottom strand sequences of the generated `QUEEN_object`, respectively. The top strand sequence is provided in the 5’-to-3’ direction from left to right, whereas the bottom strand sequence is provided in the 3′-to-5′ direction from left to right. Single-stranded regions are indicated by `"-"` at the corresponding nucleotide positions on the opposite strand. The A:T and G:C base-pairing rules must hold between the two strings except at the single-stranded positions.
dna = QUEEN(seq="CCGGTATGCG----/----ATACGCAGCT")
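# The rule above can be checked mechanically: at each column either one strand
# is `"-"` (single-stranded) or the two bases must be Watson–Crick complements.
# A minimal sketch of such a validator (not QUEEN's actual implementation):

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def valid_double_strand(seq):
    top, bottom = seq.split("/")
    if len(top) != len(bottom):
        return False
    # each column: single-stranded on either side, or complementary bases
    return all(t == "-" or b == "-" or COMPLEMENT[t] == b
               for t, b in zip(top, bottom))

print(valid_double_strand("CCGGTATGCG----/----ATACGCAGCT"))  # True
```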
# ----
# #### Example code 3: Create a circular QUEEN object
# The sequence topology of the generated `QUEEN_object` can be specified as `"linear"` or `"circular"`.
# + tags=[]
dna = QUEEN(seq="CCGGTATGCGTCGA", topology="circular")
# + [markdown] tags=[]
# ----
# #### Example code 4.1: Create a QUEEN class object from a GenBank file in a local directory
# A GenBank file can be loaded by specifying its local file path.
# -
plasmid = QUEEN(record="input/pX330.gbk")
# #### Example code 4.2: Create a QUEEN class object using an NCBI accession number
# A `QUEEN_object` can be generated from an NCBI accession number with `dbtype="ncbi"`.
pUC19 = QUEEN(record="M77789.2", dbtype="ncbi")
# #### Example code 4.3: Create a QUEEN class object using an Addgene plasmid ID
# `QUEEN_object` can be generated from an Addgene plasmid ID with `dbtype="addgene"`.
pUC19 = QUEEN(record="50005", dbtype="addgene")
# #### Example code 4.4: Create a QUEEN class object from a Benchling share link
# `QUEEN_object` can be generated from a Benchling shared link with `dbtype="benchling"`.
plasmid = QUEEN(record="https://benchling.com/s/seq-U4pePb09KHutQzjyOPQV", dbtype="benchling")
# The pX330 plasmid, which encodes a Cas9 gene and a gRNA expression unit, is used in the above example. The `QUEEN_object` generated here is reused in the following example codes in this document.
# ----
# #### Example code 5: Print a dsDNA object
fragment = QUEEN(seq="CCGGTATGCG----/----ATACGCAGCT")
fragment.printsequence(display=True)
# ----
# #### Example code 6: Print DNA features in a well-formatted table
plasmid.printfeature()
# ----
# #### Example code 7: Search for a DNA sequence motif with regular expression
match_list = plasmid.searchsequence(query="G[ATGC]{19}GGG")
plasmid.printfeature(match_list, seq=True, attribute=["start", "end", "strand"])
# -----
# #### Example code 8: Search DNA sequences with a fuzzy matching
# Search for `"AAAAAAAA"` sequence, permitting a single nucleotide mismatch.
match_list = plasmid.searchsequence(query="(?:AAAAAAAA){s<=1}")
plasmid.printfeature(match_list, seq=True)
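# The `{s<=1}` syntax above comes from the third-party `regex` module. The same single-mismatch search can be sketched in plain Python with a sliding window (this sketch assumes substitutions only, no insertions or deletions):

```python
def fuzzy_find(seq, query, max_mismatch=1):
    """Return start positions where `query` matches `seq` with at most
    `max_mismatch` substitutions."""
    hits = []
    for i in range(len(seq) - len(query) + 1):
        window = seq[i:i + len(query)]
        mismatches = sum(1 for a, b in zip(window, query) if a != b)
        if mismatches <= max_mismatch:
            hits.append(i)
    return hits

print(fuzzy_find("GGAAATAAAAGG", "AAAAAAAA"))  # [2]: one hit with a single T mismatch
```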
# ----
# #### Example code 9: Search for a DNA sequence with the IUPAC nucleotide code
match_list = plasmid.searchsequence(query="SWSWSWDSDSBHBRHH")
plasmid.printfeature(match_list, seq=True)
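# IUPAC degenerate codes (e.g. S = G or C, W = A or T) can be expanded into ordinary regular-expression character classes; a minimal stand-alone sketch of that expansion:

```python
import re

# IUPAC nucleotide codes mapped to regex character classes
IUPAC = {"A": "A", "T": "T", "G": "G", "C": "C",
         "R": "[AG]", "Y": "[CT]", "S": "[GC]", "W": "[AT]",
         "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ATGC]"}

def iupac_to_regex(query):
    """Translate an IUPAC query string into a plain regex pattern."""
    return "".join(IUPAC[base] for base in query)

pattern = iupac_to_regex("SWSW")
print(pattern)                               # [GC][AT][GC][AT]
print(bool(re.fullmatch(pattern, "GACT")))   # True
```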
# ----
# #### Example code 10: Search for sequence features having specific attribute values
# Search for `DNAfeature_objects` with the feature type `"primer_bind"`, and then further screen for ones holding a specific string in `"qualifiers:label"`.
feature_list = plasmid.searchfeature(key_attribute="feature_type", query="primer_bind")
plasmid.printfeature(feature_list)
sub_feature_list = plasmid.searchfeature(key_attribute="qualifier:label", query=".+-R$", source=feature_list)
plasmid.printfeature(sub_feature_list)
# ----
# #### Example code 11: Cut pX330 plasmid at multiple positions
# Cut the circular plasmid pX330 at three different positions, resulting in three fragments. Then, cut one of the three fragments again.
print(plasmid)
fragments = cutdna(plasmid, 1000, 2000, 4000)
print(fragments)
fragment3, fragment4 = cutdna(fragments[1], 500)
print(fragment3)
print(fragment4)
# If an invalid cut pattern is provided, an error message will be returned.
# +
#fragments = cutdna(plasmid, *["50/105", "100/55", "120/110"])
# -
# -----
# #### Example code 12: Digest pX330 plasmid by EcoRI
# Digestion of pX330 plasmid with EcoRI can be simulated as follows.
# 1. Search for EcoRI recognition sites in pX330 with its cut motif and obtain the `DNAfeature_objects` representing its cut position(s) and motif.
# 2. Use the `DNAfeature_objects` to cut pX330 by `cutdna()`.
sites = plasmid.searchsequence("G^AATT_C")
fragments = cutdna(plasmid, *sites)
for fragment in fragments:
    print(fragment)
    fragment.printsequence(display=True, hide_middle=10)
# QUEEN provides a library of restriction enzyme motifs (as described on the New England Biolabs website).
from QUEEN import cutsite #Import a restriction enzyme library
sites = plasmid.searchsequence(cutsite.lib["EcoRI"])
fragments = cutdna(plasmid, *sites)
for fragment in fragments:
    print(fragment)
    fragment.printsequence(display=True, hide_middle=10)
# -----
# #### Example code 13: Digest pX330 plasmid by Type-IIS restriction enzyme BbsI
sites = plasmid.searchsequence("GAAGAC(2/6)")
fragments = cutdna(plasmid,*sites)
for fragment in fragments:
    print(fragment)
    fragment.printsequence(display=True, hide_middle=10)
# Here, the BbsI recognition motif can also be represented by "(6/2)GTCTTC", "GAAGACNN^NNNN_", or "^NNNN_NNGTCTTC".
#
# The BbsI recognition motif is also available from the library of restriction enzyme motifs.
from QUEEN import cutsite #Import a restriction enzyme library
sites = plasmid.searchsequence(cutsite.lib["BbsI"])
fragments = cutdna(plasmid, *sites)
for fragment in fragments:
    print(fragment)
    fragment.printsequence(display=True, hide_middle=10)
# Additionally, the BbsI cut site can also be imported from "Queen/RE.py".
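# The "GAAGAC(2/6)" notation above reads as: cut the top strand 2 nt and the bottom strand 6 nt downstream of the recognition site. A stand-alone sketch of that arithmetic (0-based positions; an illustration of the notation, not QUEEN's internal logic):

```python
def bbsi_cut_positions(seq, site="GAAGAC", top_offset=2, bottom_offset=6):
    """Return (top_cut, bottom_cut) index pairs for each occurrence of `site`."""
    cuts = []
    start = seq.find(site)
    while start != -1:
        end = start + len(site)  # position just past the recognition site
        cuts.append((end + top_offset, end + bottom_offset))
        start = seq.find(site, start + 1)
    return cuts

print(bbsi_cut_positions("TTGAAGACAAAACCGGTT"))  # [(10, 14)]
```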
# ----
# #### Example code 14: Crop a fragmented dna object in a specific region
# If the second fragment of "Example code 11" is needed for further manipulation, `cropdna()` is convenient.
fragment = cropdna(plasmid, 2000, 4000)
# If a start position is larger than an end position, an error message will be returned.
# +
#fragment = cropdna(fragment, 1500, 1000)
# -
# ----
# #### Example code 15: Trim nucleotides from a blunt-ended dsDNA to generate a sticky-ended dsDNA
# Sticky ends can be generated by trimming nucleotides, where the end structures are given by top and bottom strand strings composed of "*" and "-" separated by "/". The letter "-" indicates a nucleotide to be trimmed, and "*" indicates one to remain.
fragment = cropdna(plasmid, 100, 120)
fragment.printsequence(display=True)
fragment = modifyends(fragment, "-----/*****", "**/--")
fragment.printsequence(display=True)
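# The "-"/"*" notation above can be mimicked in plain Python; the helper below is a simplified sketch that only handles a pattern applied to the left end of a blunt duplex (an assumption made for brevity):

```python
def trim_end(top, bottom, left_pattern):
    """Apply a left-end pattern like '--/**' to a blunt top/bottom duplex:
    '-' removes the nucleotide on that strand, '*' keeps it."""
    top_pat, bottom_pat = left_pattern.split("/")
    new_top = "".join(b if p == "*" else "-"
                      for b, p in zip(top, top_pat)) + top[len(top_pat):]
    new_bottom = "".join(b if p == "*" else "-"
                         for b, p in zip(bottom, bottom_pat)) + bottom[len(bottom_pat):]
    return new_top, new_bottom

top, bottom = trim_end("ATGCATGC", "TACGTACG", "--/**")
print(top, bottom)  # --GCATGC TACGTACG : a 2-nt 3' overhang on the bottom strand
```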
# The following code achieves the same manipulation.
fragment = cropdna(plasmid, '105/100', '120/118')
fragment.printsequence(display=True)
# A regex-like format can also be used.
fragment = modifyends(fragment, "-{5}/*{5}","*{2}/-{2}")
fragment.printsequence(display=True)
# ----
# #### Example code 16: Add adapter sequences
# `modifyends()` can also add adapter sequences to DNA ends.
# +
#Add blunt-ended dsDNA sequences to both ends
fragment = cropdna(plasmid, 100, 120)
fragment = modifyends(fragment,"TACATGC","TACGATG")
fragment.printsequence(display=True)
#Add sticky-ended dsDNA sequences to both ends
fragment = cropdna(plasmid, 100, 120)
fragment = modifyends(fragment,"---ATGC/ATGTACG","TACG---/ATGCTAC")
fragment.printsequence(display=True)
# -
# -----
# #### Example code 17: Clone an EGFP fragment into pX330
# 1. Generate a QUEEN class object for an EGFP fragment,
# 2. Create EcoRI sites to both ends of the EGFP fragment,
# 3. Digest the EGFP fragment and pX330 by EcoRI, and
# 4. Assemble the EGFP fragment and linearized pX330.
dna2 = QUEEN(seq="---")
# +
EGFP = QUEEN(record="input/EGFP.fasta")
EGFP = modifyends(EGFP, cutsite.lib["EcoRI"].seq, cutsite.lib["EcoRI"].seq)
sites = EGFP.searchsequence(cutsite.lib["EcoRI"])
insert = cutdna(EGFP, *sites)[1]
insert.printsequence(display=True, hide_middle=10)
sites = plasmid.searchsequence(cutsite.lib["EcoRI"])
backbone = cutdna(plasmid, *sites)[0]
backbone.printsequence(display=True, hide_middle=10)
pEGFP = joindna(backbone, insert, topology="circular")
print(backbone)
print(insert)
print(pEGFP)
# -
# If the end structures of the input `QUEEN_objects` to be joined are not compatible, an error message will be returned.
# +
#EGFP = QUEEN(record="input/EGFP.fasta")
#EGFP = modifyends(EGFP, cutsite.lib["BamHI"].seq, cutsite.lib["BamHI"].seq)
#sites = EGFP.searchsequence(cutsite.lib["BamHI"])
#insert = cutdna(EGFP, *sites)[1]
#insert.printsequence(display=True, hide_middle=10)
#pEGFP = joindna(backbone, insert, topology="circular")
# + [markdown] tags=[]
# ----
# #### Example code 18: Create a gRNA expression plasmid
# pX330 serves as a standard gRNA expression backbone plasmid. A gRNA spacer can simply be cloned into a BbsI-digested destination site of pX330 as follows:
# 1. Generate QUEEN object for a sticky-ended gRNA spacer dsDNA,
# 2. Digest pX330 by BbsI, and
# 3. Assemble the spacer with the BbsI-digested pX330.
# -
gRNA = QUEEN(seq="CACCGACCATTGTTCAATATCGTCC----/----CTGGTAACAAGTTATAGCAGGCAAA")
sites = plasmid.searchsequence(cutsite.lib["BbsI"])
fragments = cutdna(plasmid, *sites)
backbone = fragments[0] if len(fragments[0].seq) > len(fragments[1].seq) else fragments[1]
pgRNA = joindna(gRNA, backbone, topology="circular", product="pgRNA")
print(backbone)
print(gRNA)
print(pgRNA)
# ----
# #### Example code 19: Flip ampicillin-resistant gene in pX330
# 1. Search for the ampicillin-resistant gene in pX330,
# 2. Cut pX330 with start and end positions of the ampicillin-resistant gene,
# 3. Flip the ampicillin-resistant gene fragment, and
# 4. Join it with the other fragment.
site = plasmid.searchfeature(query="^AmpR$")[0]
fragments = cutdna(plasmid, site.start, site.end)
fragments[0] = flipdna(fragments[0])
new_plasmid = joindna(*fragments, topology="circular")
plasmid.printfeature(plasmid.searchfeature(query="^AmpR$"))
new_plasmid.printfeature(new_plasmid.searchfeature(query="^AmpR$"))
# #### Example code 20: Insert an EGFP sequence into pX330
# The EGFP sequence insertion into the EcoRI site demonstrated in Example code 17 can be described with simpler code using `editsequence()`.
EGFP = QUEEN(record="input/EGFP.fasta")
pEGFP = editsequence(plasmid, "({})".format(cutsite.lib["EcoRI"].seq), r"\1{}\1".format(EGFP.seq))
print(plasmid)
print(pEGFP)
# -----
# #### Example code 21: Insert a DNA string "AAAAA" to the 5’ end of every CDS
new_plasmid = editfeature(plasmid, key_attribute="feature_type", query="CDS", strand=1, target_attribute="sequence", operation=replaceattribute(r"(.+)", r"AAAAA\1"))
for feat in new_plasmid.searchfeature(key_attribute="feature_type", query="CDS", strand=1):
print(feat.start, feat.end, new_plasmid.printsequence(feat.start, feat.start+20, strand=1), feat.qualifiers["label"][0], sep="\t")
# -----
# #### Example code 22: Convert the feature type of every annotation from "CDS" to "gene"
new_plasmid = editfeature(plasmid, key_attribute="feature_type", query="CDS", target_attribute="feature_type", operation=replaceattribute("gene"))
new_plasmid.printfeature()
# ----
# #### Example code 23: Add single cutter annotations to pX330
# 1. Search for all of the single restriction enzyme cutters in pX330 using the library of restriction enzymes listed on the New England Biolabs website.
# 2. Add the single cutter annotations to pX330.
unique_cutters = []
for key, re in cutsite.lib.items():
    sites = plasmid.searchsequence(re.cutsite)
    if len(sites) == 1:
        unique_cutters.append(sites[0])
new_plasmid = editfeature(plasmid, source=unique_cutters, target_attribute="feature_id", operation=createattribute("RE"))
new_plasmid = editfeature(new_plasmid, key_attribute="feature_id", query="RE", target_attribute="feature_type", operation=replaceattribute("misc_bind"))
features = new_plasmid.searchfeature(key_attribute="feature_type", query="misc_bind")
new_plasmid.printfeature(features, seq=True)
| demo/tutorial/tutorial_ex01-23.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _uuid="7deaa2fbf2dafe5e131be1079fe69ac230e25b8e" _execution_state="idle" _cell_guid="8475e99c-bc72-4102-843e-93d6cd2441a0"
#
#
# + _active=false _cell_guid="93ff871f-54de-ab44-afd1-3377434e68be" _uuid="b302b3e17a329d432e474b37664262ccd47b8b3e" _execution_state="busy"
import numpy as np
import pandas as pd
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
# %matplotlib inline
from matplotlib import style
plt.style.use('ggplot')
# + _uuid="568dc1219f3968a46274b6b4ee3763b1d9bae045" _execution_state="busy" _cell_guid="be38c29e-5805-4ff6-81ff-828442c1904a"
data = pd.read_csv('../input/loan.csv', low_memory=False, parse_dates=['issue_d'], infer_datetime_format=True)
# + _uuid="2e63ff3683c6fa54670c985a97296d3740495243" _execution_state="busy" _cell_guid="79931614-8b63-4f78-995b-f6e5a78e6615"
state_count = data.addr_state.value_counts()
state_count.plot(kind = 'bar',figsize=(16,8), title = 'Loans per State')
# + _uuid="8a58d10b7bcdd18358e966c747bac62dd9a208dc" _execution_state="busy" _cell_guid="9058e9a2-3297-4664-852a-eb0dd3bdb026"
tn_data = data.loc[data.addr_state == 'CA']
tn_x = range(1, len(tn_data) + 1)
tn_loan_amnt = tn_data.loan_amnt
# + _uuid="a9691d492ca3fe5a5bc74080da79fc731e60d427" _execution_state="busy" _cell_guid="c26404c1-befa-4500-8962-aabb24b388e7"
plt.figure(figsize=(16, 10))
plt.scatter(tn_x, tn_loan_amnt)
plt.xlim(1, len(tn_loan_amnt))
plt.ylim(0, 37500)
plt.ylabel("Loan Amount")
plt.title("Loan Size in CA")
plt.show()
# + _uuid="71089bc8582d80a73fb768f058a30bb648b9119a" _execution_state="busy" _cell_guid="6c3b6faa-c433-4fa6-91a3-8a2a657f8169"
plt.figure(figsize=(16,8))
mu = tn_loan_amnt.mean()
sigma = tn_loan_amnt.std()
num_bins = 100
n, bins, patches = plt.hist(tn_loan_amnt, num_bins, density=True, facecolor='blue', alpha=0.7)
# overlay the fitted normal density (mlab.normpdf was removed from matplotlib)
y = np.exp(-0.5 * ((bins - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
plt.plot(bins, y, 'r--')
plt.xlabel("Loan Amount")
plt.title("Loan Amount Distribution in CA")
plt.show()
# + _uuid="be79000b77c8bc0b51cd832bec9f322ab443812f" _execution_state="busy" _cell_guid="e5de9e4e-bceb-4d89-9e8c-44ddd533fa8a"
tloan_tn_df = tn_data['issue_d'].value_counts().sort_index()
tloan_tn_df = tloan_tn_df.cumsum()
# + _uuid="b9c20e687f53b795e88d812dea42d305b5f60747" _execution_state="busy" _cell_guid="4c12eee3-f6e1-4bb5-837d-d53b178ffdbe"
tloan_tn_df.plot(figsize=(16,8), title='Number of Loans Issued in California')
| downloaded_kernels/loan_data/kernel_56.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# Import required libraries
from tpot import TPOT
from sklearn.cross_validation import StratifiedShuffleSplit
import pandas as pd
import numpy as np
# Load the data
titanic = pd.read_csv('/Users/chengjun/github/cjc2016/data/tatanic_train.csv')
titanic.head(5)
titanic.groupby('Sex').Survived.value_counts()
titanic.groupby(['Pclass','Sex']).Survived.value_counts()
ct = pd.crosstab([titanic.Pclass, titanic.Sex], titanic.Survived.astype(float))  # avoid shadowing the built-in id()
ct.div(ct.sum(1).astype(float), 0)
titanic.rename(columns={'Survived': 'class'}, inplace=True)
titanic.dtypes
for cat in ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']:
    print("Number of levels in category '{0}': {1}".format(cat, titanic[cat].unique().size))
for cat in ['Sex', 'Embarked']:
    print("Levels for category '{0}': {1}".format(cat, titanic[cat].unique()))
titanic['Sex'] = titanic['Sex'].map({'male':0,'female':1})
titanic['Embarked'] = titanic['Embarked'].map({'S':0,'C':1,'Q':2})
titanic = titanic.fillna(-999)
pd.isnull(titanic).any()
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
CabinTrans = mlb.fit_transform([{str(val)} for val in titanic['Cabin'].values])
CabinTrans
titanic_new = titanic.drop(['Name','Ticket','Cabin','class'], axis=1)
assert (len(titanic['Cabin'].unique()) == len(mlb.classes_)), "Not Equal" #check correct encoding done
titanic_new = np.hstack((titanic_new.values,CabinTrans))
np.isnan(titanic_new).any()
titanic_new[0].size
titanic_new[:1]
titanic_class = titanic['class'].values
training_indices, validation_indices = next(iter(StratifiedShuffleSplit(titanic_class, n_iter=1,
train_size=0.75, test_size=0.25)))
training_indices.size, validation_indices.size
# https://github.com/rhiever/tpot/blob/master/tutorials/Titanic_Kaggle.ipynb
tpot = TPOT(generations=5, verbosity=2)
tpot.fit(titanic_new[training_indices], titanic_class[training_indices])
| code/tba/.ipynb_checkpoints/tpot_titanic-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Models
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from functools import reduce
import sys
import numpy
numpy.set_printoptions(threshold=sys.maxsize)
# load the preprocessed data
df = pd.read_csv('../prepross_data/data.csv')
# #### Filter out the patient set described in the paper
# +
# filter the dataset as described in the paper
def get_filter_by_age_diabDur(df, age, diabDur):
    filter_patients = df[(df["AgeAtConsent"] >= age) & (df["diagDuration"] > diabDur)]
    # filter_patients = filter_patients.drop_duplicates(subset="PtID", keep="first")
    print(f'Number of patients whose age is {age}+ and diabetes duration greater than {diabDur} -> {filter_patients.PtID.size}')
    return filter_patients
df = get_filter_by_age_diabDur(df, 26, 2)
# +
# def get_missing_row_val_percentage(df):
# df['new'] = df.isnull().sum(axis=1)
# return df
# df = get_missing_row_val_percentage(df)
# filter_patients = df[(df["new"] >= 85)]
# filter_patients.PtID.size
# -
# ### Pre-processing for SH event prediction
#
y_label = 'Pt_SevHypoEver'
# possible labels Pt_SevHypoEver, SHSeizComaPast12mos, DKAPast12mos, Depression, DiabNeuro, DKADiag
df[y_label].unique()
# +
# get possible values in column including nan
def get_possible_vals_with_nan(df, colName):
    list_val = df[colName].unique().tolist()
    return list_val
# {'1.Yes': 0, '2.No': 1, "3.Don't know": 2}
get_possible_vals_with_nan(df, y_label)
if(y_label == 'SHSeizComaPast12mos'):
    df.drop(['NumSHSeizComaPast12mos','Pt_v3NumSHSeizComa','NumSHSeizComa', 'Pt_SevHypoEver'], inplace=True, axis=1)  # add SHSeizComaPast12mos
    df[y_label] = df[y_label].replace({2.0: 1.0})
elif (y_label == 'DKADiag'):  # DKADiag {'1.Yes': 0, '2.Probably Yes': 1, '3.No': 2, '4.Unknown': 3}
    df.drop(['Pt_NumHospDKA','Pt_HospDKASinceDiag','NumDKAOccur', 'DKAPast12mos'], inplace=True, axis=1)
    df[y_label] = df[y_label].replace({1.0: 0, 2: 1, 3: 1})
elif (y_label == 'Pt_SevHypoEver'):
    df.drop(['NumSHSeizComaPast12mos','Pt_v3NumSHSeizComa','NumSHSeizComa', 'SHSeizComaPast12mos'], inplace=True, axis=1)  # add SHSeizComaPast12mos
    df[y_label] = df[y_label].replace({2.0: 1.0})
elif (y_label == 'DKAPast12mos'):
    df.drop(['Pt_NumHospDKA','Pt_HospDKASinceDiag','NumDKAOccur', 'DKADiag'], inplace=True, axis=1)
    df[y_label] = df[y_label].replace({2.0: 1.0})
# f_df = df[df['Pt_SevHypoEver'] == 0.0]
# f_df.PtID.size
# f2_df = df[df['Pt_SevHypoEver'] == 1.0]
# f2_df.PtID.size
# +
# df.columns.to_numpy()
df[y_label].isna().sum()
# -
# # Divide Dataset
# +
# filter only the features used in paper diagDuration,
selected_features = True
if selected_features:
    df = df[['AgeAtConsent', 'diagDuration', 'HbA1c', 'Gender', 'Pt_RaceEth', 'Pt_AnnualInc', 'Pt_InsPriv', 'Pt_EduLevel',
             'InsulinDeliv', 'Pt_InsCarbRat', 'relative_T1D', 'Pt_Smoke', y_label,
             'MajorLifeStressEvent', 'Weight', 'Height']]  # Pt_SevHypoEver, SHSeizComaPast12mos, DKAPast12mos
# -
if not selected_features:
    df = df.drop('PtID', axis=1)
# +
def divide_data(df, label):
    Y = df[label]
    X = df.drop(label, axis=1)
    return X, Y
X, Y = divide_data(df, y_label)
# -
# # Correlation
# +
# # Correlation
# X = pd.DataFrame(X)
# corr = X.corr(method ='pearson')
# corr = X.corr(method ='pearson').abs()
# # avg_corr = corr.mean(axis = 1)
# +
# import seaborn as sn
# plt.figure(figsize=(20,20))
# sn.heatmap(corr,annot=True)
# plt.title('Correlation Matrix', fontsize=16)
# plt.show()
# -
# # Feature Selection
# +
from sklearn.preprocessing import OneHotEncoder, StandardScaler, MinMaxScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif, SelectPercentile
from sklearn.metrics import confusion_matrix, classification_report, f1_score, auc, roc_curve, roc_auc_score, precision_score, recall_score, balanced_accuracy_score
from numpy.random import seed
from sklearn.model_selection import GridSearchCV, train_test_split, cross_val_score
seed(42)
import tensorflow as tf
tf.random.set_seed(38)
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from keras.callbacks import TensorBoard
from keras.models import Sequential
from keras.layers import Dense
# +
shape = np.shape(X)
feature = shape[1]
n_classes = 2
feature
# +
# fill missing labels - if data is not available, the patient probably does not have that medical condition
if y_label in ('SHSeizComaPast12mos', 'DKADiag', 'Pt_SevHypoEver', 'DKAPast12mos'):
    Y = Y.fillna(1)
else:
    Y = Y.fillna(0)
# +
seed(42)
tf.random.set_seed(38)
# Save original data set
original_X = X
# Split into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, stratify=Y, random_state=123)
# if variable y is a binary categorical variable with values 0 and 1 and there are 25% of zeros and 75% of ones, stratify=y will make sure that your random split has 25% of 0's and 75% of 1's.
# X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)
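# The stratify comment above can be illustrated without sklearn: a stratified split keeps each class's share identical in both partitions. A minimal deterministic sketch (no shuffling, unlike `train_test_split`):

```python
def stratified_split(labels, test_frac=0.25):
    """Return (train_idx, test_idx) preserving each class's proportion."""
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    train_idx, test_idx = [], []
    for y, idxs in by_class.items():
        n_test = int(round(len(idxs) * test_frac))
        test_idx.extend(idxs[:n_test])    # first slice goes to the test set
        train_idx.extend(idxs[n_test:])
    return train_idx, test_idx

labels = [0] * 4 + [1] * 12               # 25% zeros, 75% ones
train_idx, test_idx = stratified_split(labels)
print(len(train_idx), len(test_idx))      # 12 4; the 25/75 class ratio holds in both
```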
# +
len(Y_train == 0.0)
unique, counts = numpy.unique(Y_train.to_numpy(), return_counts=True)
print(unique, counts)
unique_test, counts_test = numpy.unique(Y_test.to_numpy(), return_counts=True)
print(unique_test, counts_test)
# -
# ## Model 1 - XGB - With missing data
# +
# xgboost - train with missing values
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
model=XGBClassifier(use_label_encoder=False, eta = 0.1,#eta between(0.01-0.2)
max_depth = 3, #values between(3-10)
max_delta_step = 1,
subsample = 0.5,#values between(0.5-1)
colsample_bytree = 0.7,#values between(0.5-1)
tree_method = "auto",
process_type = "default",
num_parallel_tree=7,
objective='multi:softmax',
min_child_weight = 3,
booster='gbtree',
eval_metric = "mlogloss",
num_class = n_classes)
model.fit(X_train,Y_train)
xgb_pred=model.predict(X_test)
print(accuracy_score(Y_test, xgb_pred)*100)
confusion_matrix_xgb = pd.DataFrame(confusion_matrix(Y_test, xgb_pred))
sns.heatmap(confusion_matrix_xgb, annot=True)
print(classification_report(Y_test, xgb_pred))
# -
# # Imputations
# +
# fill nan values in the categorical dataset with the most frequent value
# tested with mean and median - results are lower than most_frequent
imputeX = SimpleImputer(missing_values=np.nan, strategy = "most_frequent")
# imputeX = KNNImputer(missing_values=np.nan, n_neighbors = 3, weights='distance')
# imputeX = IterativeImputer(max_iter=5, random_state=0)
X_train = imputeX.fit_transform(X_train)
# +
# test data imputation
Test = X_test.copy()
Test.loc[:,y_label] = Y_test
X_test = imputeX.transform(X_test)
# -
# # Scale data
# Normalize numeric features
scaler = StandardScaler()
# scaler = MinMaxScaler()
select = {}
select[0] = pd.DataFrame(scaler.fit_transform(X_train))
select[1] = Y_train
select[2] = pd.DataFrame(scaler.transform(X_test))
# # Feature selection
# +
# TODO
def select_features(select, feature):
    selected = {}
    fs = SelectKBest(score_func=mutual_info_classif, k=feature)  # k=feature score_func SelectPercentile
    selected[0] = fs.fit_transform(select[0], select[1])
    selected[1] = fs.transform(select[2])
    idx = fs.get_support(indices=True)
    return selected, fs, idx
# +
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
#Selecting the Best important features according to Logistic Regression
# Give better performance than selectKBest
# def select_features(select, feature):
# selected = {}
# fs = RFE(estimator=LogisticRegression(), n_features_to_select=feature, step = 10) # step (the number of features eliminated each iteration)
# selected[0] = fs.fit_transform(select[0], select[1])
# selected[1] = fs.transform(select[2])
# idx = fs.get_support(indices=True)
# return selected, fs, idx
# -
# Feature selection
selected, fs, idx = select_features(select, feature)
X_train = pd.DataFrame(selected[0])
X_test = pd.DataFrame(selected[1])
# Get columns to keep and create new dataframe with those only
from pprint import pprint
cols = fs.get_support(indices=True)
features_df_new = original_X.iloc[:,cols]
pprint(features_df_new.columns)
print(features_df_new.shape)
#
# ## AdaBoost
# +
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn import svm
C = [0.1,1.0,10.0,100.0]
gamma = [0.1,0.25,0.5,0.75,1.0,2.0]
n_estimators = [50,100,500,1000]
learning_rate = [0.1,0.25,0.5,0.75,1.0]
adaBoost_parameters = {'learning_rate': learning_rate,'n_estimators':n_estimators }
ada_model = AdaBoostClassifier(random_state=0)
adaBoost = GridSearchCV(ada_model, adaBoost_parameters, cv=10)
# -
adaBoost.fit(X_train,Y_train)
ada_best_params_acc = adaBoost.best_params_
print('Best hyperparameters for AdaBoost training acc :', ada_best_params_acc)
# +
adaBoost_best_acc = AdaBoostClassifier(random_state=0, learning_rate=ada_best_params_acc['learning_rate'], n_estimators=ada_best_params_acc['n_estimators'] )
adaBoost_best_acc.fit(X_train, Y_train)
pred=adaBoost_best_acc.predict(X_test)
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_xgb = pd.DataFrame(confusion_matrix(Y_test, pred))
sns.heatmap(confusion_matrix_xgb, annot=True)
print(classification_report(Y_test, pred))
# -
# # Model - XGB
# +
# xgboost - train with missing values
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
model=XGBClassifier(use_label_encoder=False, eta = 0.1,#eta between(0.01-0.2)
max_depth = 9, #values between(3-10)
max_delta_step = 3,
subsample = 0.9,#values between(0.5-1)
colsample_bytree = 0.7,#values between(0.5-1)
tree_method = "auto",
process_type = "default",
num_parallel_tree=7,
objective='multi:softmax',
min_child_weight = 3,
booster='gbtree',
eval_metric = "mlogloss",
num_class = n_classes)
model.fit(X_train,Y_train)
xgb_pred=model.predict(X_test)
print(accuracy_score(Y_test, xgb_pred)*100)
confusion_matrix_xgb = pd.DataFrame(confusion_matrix(Y_test, xgb_pred))
sns.heatmap(confusion_matrix_xgb, annot=True)
print(classification_report(Y_test, xgb_pred))
# -
# feature importance graph of XGB
feat_importances = pd.Series(model.feature_importances_, index=X.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
# ## Model 2 - Random forest
# +
# random forest classifier
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
model=RandomForestClassifier(max_depth=5, random_state=0)
model.fit(X_train,Y_train)
pred=model.predict(X_test)
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_rf = pd.DataFrame(confusion_matrix(Y_test, pred))
sns.heatmap(confusion_matrix_rf, annot=True)
print(classification_report(Y_test, pred))
print('Accuracy of random forest classifier on training set: {:.2f}'
.format(model.score(X_train, Y_train)))
print('Accuracy of random forest classifier on test set: {:.2f}'
.format(model.score(X_test, Y_test)))
# -
feat_importances = pd.Series(model.feature_importances_, index=X.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
# ## Model 3 LogisticRegression
# +
from sklearn.linear_model import LogisticRegression
#penalty{‘l1’, ‘l2’, ‘elasticnet’, ‘none’}, default=’l2’
logreg = LogisticRegression(
penalty='l2',
tol = 0.5e-3,
C=2,
class_weight='balanced', # balanced
random_state=0,
solver = 'saga' # saga, sag
)
logreg.fit(X_train, Y_train)
pred=logreg.predict(X_test)
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_lr = pd.DataFrame(confusion_matrix(Y_test, pred))
sns.heatmap(confusion_matrix_lr, annot=True)
print(classification_report(Y_test, pred))
print('Accuracy of logistic regression on training set: {:.2f}'
.format(logreg.score(X_train, Y_train)))
print('Accuracy of logistic regression on test set: {:.2f}'
.format(logreg.score(X_test, Y_test)))
# +
feat_importances = pd.Series(logreg.coef_[0], index=X.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
# -
# ## Model 4 - Decision tree
# +
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(
random_state=42,
criterion='entropy',
splitter = 'best',
max_depth = 10,
max_features = 15).fit(X_train, Y_train)
pred=clf.predict(X_test)
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_dt = pd.DataFrame(confusion_matrix(Y_test, pred))
sns.heatmap(confusion_matrix_dt, annot=True)
print(classification_report(Y_test, pred))
print('Accuracy of DT on training set: {:.2f}'
.format(clf.score(X_train, Y_train)))
print('Accuracy of DT on test set: {:.2f}'
.format(clf.score(X_test, Y_test)))
# +
feat_importances = pd.Series(clf.feature_importances_, index=X.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
# -
# ## Model 5 - K-Nearest Neighbors
# +
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(
n_neighbors =1,
weights = "distance", # uniform, distance
algorithm = 'kd_tree', # {‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’
)
knn.fit(X_train, Y_train)
pred=knn.predict(X_test)
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_knn = pd.DataFrame(confusion_matrix(Y_test, pred))
sns.heatmap(confusion_matrix_knn, annot=True)
print(classification_report(Y_test, pred))
print('Accuracy of KNN on training set: {:.2f}'
.format(knn.score(X_train, Y_train)))
print('Accuracy of KNN on test set: {:.2f}'
.format(knn.score(X_test, Y_test)))
# -
# ## Model 6 - Linear Discriminant Analysis
# +
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis(
solver = 'eigen', # solver{‘svd’, ‘lsqr’, ‘eigen’}, default=’svd’
shrinkage= 0.3, #shrinkage‘auto’ or float, default=None
n_components = 1,
tol = 1e-1
)
lda.fit(X_train, Y_train)
pred=lda.predict(X_test)
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_lda = pd.DataFrame(confusion_matrix(Y_test, pred))
sns.heatmap(confusion_matrix_lda, annot=True)
print(classification_report(Y_test, pred))
print('Accuracy of LDA on training set: {:.2f}'
.format(lda.score(X_train, Y_train)))
print('Accuracy of LDA on test set: {:.2f}'
.format(lda.score(X_test, Y_test)))
# -
# ## Model 7- Gaussian Naive Bayes
# +
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, Y_train)
pred=gnb.predict(X_test)
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_gnb = pd.DataFrame(confusion_matrix(Y_test, pred))
sns.heatmap(confusion_matrix_gnb, annot=True)
print(classification_report(Y_test, pred))
print('Accuracy of gaussian naive bayes on training set: {:.2f}'
.format(gnb.score(X_train, Y_train)))
print('Accuracy of gaussian naive bayes on test set: {:.2f}'
.format(gnb.score(X_test, Y_test)))
# -
# ## Model 8 - SVM
# +
from sklearn.svm import SVC
svm = SVC(
C = 0.1, # Cfloat, default=1.0
kernel = 'linear', #kernel{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’
degree = 3, #degree, default=3
gamma = 0.1, #gamma{‘scale’, ‘auto’} or float, default=’scale’
class_weight = None, # ‘balanced’, default=None
max_iter = 8,
decision_function_shape = 'ovo' # {‘ovo’, ‘ovr’}, default=’ovr’
)
svm.fit(X_train, Y_train)
pred=svm.predict(X_test)
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_svm = pd.DataFrame(confusion_matrix(Y_test, pred))
sns.heatmap(confusion_matrix_svm, annot=True)
print(classification_report(Y_test, pred))
print('Accuracy of SVM on training set: {:.2f}'
.format(svm.score(X_train, Y_train)))
print('Accuracy of SVM on test set: {:.2f}'
.format(svm.score(X_test, Y_test)))
# -
| codes/ML_2010_2012_dataset-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import libraries
import os
import pandas as pd
from pathlib import Path
# # Inspect the list of csv in target directory
DATASET_FOLDER = '../../datasets/tempetes/Temperature_Historical'
arr = os.listdir(DATASET_FOLDER)
print(arr)
len(arr)
# # Concatenate multiple csv files into a single pandas dataframe
data_dir = Path(DATASET_FOLDER)  # avoid shadowing the built-in dir()
df = (pd.read_csv(f) for f in data_dir.glob("*.csv"))
df = pd.concat(df)
df.head()
# # Quick sanity checks
df["Country"].value_counts()
df["Country"].nunique()
df["ISO3"].nunique()
df.isnull().sum()
df.dtypes
df["Temperature - (Celsius)"].describe()
df["Year"].describe()
# # Write csv file in datasets > tempetes directory
df.to_csv('../../datasets/tempetes/monthly_hist_temperatures.csv')
| model_tempetes/notebooks/.ipynb_checkpoints/knowledgeportal_concat-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Q1TXNOvQ9Kaf" colab_type="text"
# # MNIST CNN model
#
# **Target to achieve** : 99.4% accuracy on test dataset.
# + id="aAq7Lvqqs4F5" colab_type="code" outputId="fe99118a-2a70-4154-c63f-b5278f036a59" colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir("./drive/My Drive/EVA/Session04")
# + [markdown] id="7OTkpFgt9W06" colab_type="text"
# ## Importing Libraries
# + id="0m2JWFliFfKT" colab_type="code" colab={}
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torchsummary import summary
from tqdm import tqdm
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20,10)
# + [markdown] id="RcV-OkCe9lP4" colab_type="text"
# ## GPU for training
# + id="50sHOdY39nRz" colab_type="code" outputId="9db8834d-f283-4173-9a8a-8be9ee1ec9aa" colab={"base_uri": "https://localhost:8080/", "height": 34}
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name:
    print(f"Found GPU at : {device_name}")
else:
    print("GPU device not found.")
# + id="v9GmRJ0a9osJ" colab_type="code" outputId="8deefd6d-c143-4241-bdc4-9fc33c759dcc" colab={"base_uri": "https://localhost:8080/", "height": 52}
import torch
if torch.cuda.is_available():
device = torch.device("cuda")
use_cuda = True
print(f"Number of GPU's available : {torch.cuda.device_count()}")
print(f"GPU device name : {torch.cuda.get_device_name(0)}")
else:
print("No GPU available, using CPU instead")
device = torch.device("cpu")
use_cuda = False
# + [markdown] id="MHNNLPIB9wbU" colab_type="text"
# ## Downloading MNIST dataset
#
# Things to keep in mind:
# - the dataset is provided by pytorch community.
# - MNIST dataset contains:
# - 60,000 training images
# - 10,000 test images
# - Each image is of size (28x28x1).
# - The values 0.1307 and 0.3081 used in the Normalize() transformation below are the global mean and standard deviation of the MNIST training images.
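# Those constants are simply the mean and (population) standard deviation of the flattened pixel intensities after scaling to [0, 1]. A minimal sketch with made-up values standing in for the real images (on actual MNIST this computation yields roughly 0.1307 and 0.3081):

```python
import statistics

def channel_stats(pixels):
    # Mean and population standard deviation of flattened grayscale
    # pixel intensities, as used by transforms.Normalize.
    return statistics.fmean(pixels), statistics.pstdev(pixels)

mean, std = channel_stats([0.0, 0.25, 0.5, 0.75, 1.0])
```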
# + id="Bp5eMbQ4-LUp" colab_type="code" colab={}
batch_size = 128
num_epochs = 20
kernel_size = 3
pool_size = 2
lr = 0.01
momentum = 0.9
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
# + id="EW3MUGF--hgi" colab_type="code" colab={}
mnist_trainset = datasets.MNIST(root="./data", train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
mnist_testset = datasets.MNIST(root="./data", train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
# + id="gkMoKQVc-oKS" colab_type="code" colab={}
train_loader = torch.utils.data.DataLoader(mnist_trainset,
batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(mnist_testset,
batch_size=batch_size, shuffle=True, **kwargs)
# + [markdown] id="IjXRQ52I-17Z" colab_type="text"
# ## Visualization of images
# + id="NB81qRMu-6vr" colab_type="code" colab={}
examples = enumerate(train_loader)
batch_idx, (example_data, example_targets) = next(examples)
# + id="8M2XDWfz-76k" colab_type="code" outputId="73dc3cb5-d240-474a-8e23-7af785196caf" colab={"base_uri": "https://localhost:8080/", "height": 728}
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(example_data[i][0], interpolation='none')
plt.title(f"Ground Truth : {example_targets[i]}")
# + [markdown] id="BT8BOx7ejP3o" colab_type="text"
# ## Defining training and testing functions
# + id="L6_Zia8XjPKb" colab_type="code" colab={}
from tqdm import tqdm
def train(model, device, train_loader, optimizer, epoch):
running_loss = 0.0
running_correct = 0
model.train()
pbar = tqdm(train_loader)
for batch_idx, (data, target) in enumerate(pbar):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
_, preds = torch.max(output.data, 1)
loss.backward()
optimizer.step()
#calculate training running loss
running_loss += loss.item()
running_correct += (preds == target).sum().item()
pbar.set_description(desc= f'loss={loss.item()} batch_id={batch_idx}')
print("\n")
    print(f"Epoch {epoch} train loss: {running_loss/len(train_loader):.3f} train acc: {running_correct/len(mnist_trainset):.3f}")
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# + [markdown] id="KIsxjleoivtO" colab_type="text"
# ## Building the model
# + id="ToQ0qtQSCzOT" colab_type="code" colab={}
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.drop = nn.Dropout2d(0.1)
self.conv1 = nn.Conv2d(1, 16, 3, padding=1) #(-1,28,28,3)>(-1,3,3,3,16)>(-1,28,28,16)
self.batchnorm1 = nn.BatchNorm2d(16) #(-1,28,28,16)
self.conv2 = nn.Conv2d(16, 16, 3, padding=1) #(-1,28,28,16)>(-1,3,3,16,16)>(-1,28,28,16)
self.batchnorm2 = nn.BatchNorm2d(16) #(-1,28,28,16)
self.pool1 = nn.MaxPool2d(2, 2) #(-1,14,14,16)
self.conv3 = nn.Conv2d(16, 16, 3, padding=1) #(-1,14,14,16)>(-1,3,3,16,16)>(-1,14,14,16)
self.batchnorm3 = nn.BatchNorm2d(16) #(-1,14,14,16)
self.conv4 = nn.Conv2d(16, 16, 3, padding=1) #(-1,14,14,16)>(-1,3,3,16,16)>(-1,14,14,16)
self.batchnorm4 = nn.BatchNorm2d(16) #(-1,14,14,16)
self.pool2 = nn.MaxPool2d(2, 2) #(-1,7,7,16)
self.conv5 = nn.Conv2d(16, 16, 3, padding=1) #(-1,7,7,16)>(-1,3,3,16,16)>(-1,7,7,16)
self.batchnorm5 = nn.BatchNorm2d(16)
self.conv6 = nn.Conv2d(16, 16, 3, padding=1) #(-1,7,7,16)>(-1,3,3,16,16)>(-1,7,7,16)
self.batchnorm6 = nn.BatchNorm2d(16)
self.conv7 = nn.Conv2d(16, 10, 3) #(-1,7,7,16)>(-1,3,3,16,10)>(-1,5,5,10)
self.avgpool = nn.AvgPool2d(5)
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.drop(x)
x = self.batchnorm1(x)
x = F.relu(self.conv2(x))
x = self.drop(x)
x = self.batchnorm2(x)
x = self.pool1(x)
x = F.relu(self.conv3(x))
x = self.drop(x)
x = self.batchnorm3(x)
x = F.relu(self.conv4(x))
x = self.drop(x)
x = self.batchnorm4(x)
x = self.pool2(x)
x = F.relu(self.conv5(x))
x = self.drop(x)
x = self.batchnorm5(x)
x = F.relu(self.conv6(x))
x = self.drop(x)
x = self.batchnorm6(x)
x = self.conv7(x)
x = self.avgpool(x)
x = x.view(-1, 10)
        return F.log_softmax(x, dim=1)
# + id="_QVewjwMZsxp" colab_type="code" outputId="c3d8ca41-4911-4933-c669-42113bd8270a" colab={"base_uri": "https://localhost:8080/", "height": 676}
model5 = Net().to(device)
summary(model5, input_size=(1, 28, 28))
# + id="_p2cV0exZtZ1" colab_type="code" outputId="0bfd0e07-bdfe-40ed-ada1-c820ac3fab4e" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# optimizer = optim.SGD(model5.parameters(), lr=lr, momentum=momentum)
optimizer = optim.Adam(model5.parameters(), lr=0.001)#, momentum=momentum)
for epoch in range(1, num_epochs+1):
train(model5, device, train_loader, optimizer, epoch)
test(model5, device, test_loader)
# + id="0mBzSj7mcMnq" colab_type="code" colab={}
| Session-04/notebooks/MNIST_model_04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %run ../../common_functions/import_all.py
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from common_functions.setup_notebook import set_css_style, setup_matplotlib, config_ipython
config_ipython()
setup_matplotlib()
set_css_style()
# -
# # Some simple notes on Matplotlib functionalities
# ## A line plot
# +
x = np.arange(0, 10, 0.1)
y = np.sin(x)
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.title('Sine function', fontweight='bold', fontsize=16)
plt.show();
#plt.savefig('sine.png') # to save
# type of markers, line, can be set as kwargs in plot()
# -
# ## a 3D plot
# +
x = np.array([i for i in range(-100, 100)])
y = np.array([i for i in range(-100, 100)])
x, y = np.meshgrid(x, y)
def f(x, y):
return x**2 + y**2
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
parabola = ax.plot_surface(x, y, f(x, y), cmap=cm.RdPu)
plt.xlabel('x')
plt.ylabel('y')
plt.show();
# -
# ## A bar plot
# +
# Some dummy data
data = {'a': 10, 'b': 20, 'c': 15, 'd': 25}
plt.bar([i for i in range(len(data.keys()))], data.values()) # does not read str xticks directly, have to set xticks
plt.xticks([i for i in range(len(data.keys()))], data.keys())
plt.show();
# there are also other types, like scatter plot or hist
# -
# ## Using log scales
# +
# Exp in semilog
x = np.linspace(0, 1, 100)
#plt.semilogy(x, np.exp(x))
# pow law in log-log
x = np.linspace(0, 1, 100)
plt.loglog(x, x**(-0.6))
plt.show();
# -
# ## Setting a customised legend
x = np.linspace(0, 10)
sin_line, = plt.plot(x, np.sin(x), label='sin(x)')
cos_line, = plt.plot(x, np.cos(x), label='cos(x)')
#plt.legend(handler_map={sin_line: HandlerLine2D(numpoints=2)}, loc=4) # with the Handler
plt.legend(loc=4)
plt.title('Sin and cos', fontweight='bold', fontsize=16)
plt.xlabel('x')
plt.show();
# ## Plotting errorbars
plt.errorbar([i for i in range(10)],
[i for i in range(10)],
yerr=[i for i in range(10)], label='avg')
plt.show();
# ## Awesome: you can use [xkcd](https://xkcd.com)'s style!
#
# In order to make this work best, the proper xkcd font, Humor Sans, has to be installed on the system. You can find it [here](https://github.com/shreyankg/xkcd-desktop/blob/master/Humor-Sans.ttf) (on a Mac, just double-click the downloaded file and click install).
#
# Then, you should clear Matplotlib's font cache, otherwise it will not pick up newly installed fonts:
#
# ```
# # rm ~/.matplotlib/fontList.cache
# ```
#
# You may also need to clear the cached font_manager Matplotlib instance, so it is better to also run
#
# ```
# # rm ~/.matplotlib/fontList.py3k.cache
# ```
#
# This will clear out everything and make sure Matplotlib rebuilds its font cache the next time it is imported.
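# The same can be done programmatically. A sketch only: the ~/.matplotlib default below is an assumption (the real location is reported by matplotlib.get_cachedir() and varies by platform and version):

```python
import os
from pathlib import Path

def clear_font_cache(cache_dir=None):
    # Delete Matplotlib's cached font lists so they are rebuilt on next import.
    cache = Path(cache_dir or os.path.expanduser("~/.matplotlib"))
    removed = []
    for pattern in ("fontList*.cache", "fontlist*.json"):
        for cached_file in cache.glob(pattern):
            cached_file.unlink()
            removed.append(cached_file.name)
    return removed
```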
# +
matplotlib.rcParams.update({"text.usetex": False})
plt.xkcd()
plt.plot([i for i in range(10)], np.sin([i for i in range(10)]))
plt.title('A sine wave')
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.show();
# -
# ## References
#
# 1. <a name="1"></a> [A great article on xkcd arriving to Matplotlib](https://jakevdp.github.io/blog/2013/07/10/XKCD-plots-in-matplotlib/) on Pythonic Perambulations, by <NAME> (great blog btw!)
# 1. <a name="2"></a> [Matplotlib's own showcase of the xkcd style](http://matplotlib.org/xkcd/examples/showcase/xkcd.html)
# + jupyter={"outputs_hidden": true}
| toolbox/python/matplotlib.ipynb |
;; -*- coding: utf-8 -*-
;; ---
;; jupyter:
;; jupytext:
;; text_representation:
;; extension: .scm
;; format_name: light
;; format_version: '1.5'
;; jupytext_version: 1.14.4
;; kernelspec:
;; display_name: Calysto Scheme 3
;; language: scheme
;; name: calysto_scheme
;; ---
;; ### Exercise 2.71
;; Suppose we have a Huffman tree for an alphabet of $n$ symbols,
;; and that the relative frequencies of the symbols are $1, 2, 4, ..., 2^{n-1}$.
;; Sketch the tree for $n = 5$; for $n = 10$.
;; In such a tree (for general $n$), how many bits are required to encode the most frequent symbol?
;; And the least frequent symbol?
;; The tree sketches are left aside here.
;; +
(define (make-leaf symbol weight) (list 'leaf symbol weight))
(define (leaf? object) (eq? (car object) 'leaf))
(define (symbol-leaf x) (cadr x))
(define (weight-leaf x) (caddr x))
; Constructor
(define (make-code-tree left right)
(list left right
(append (symbols left) (symbols right))
(+ (weight left) (weight right))
)
)
; Selectors
(define (left-branch tree) (car tree))
(define (right-branch tree) (cadr tree))
(define (symbols tree)
(if (leaf? tree) (list (symbol-leaf tree))
(caddr tree)
)
)
(define (weight tree)
(if (leaf? tree) (weight-leaf tree)
(cadddr tree)
)
)
(define (decode bits tree)
(define (decode-1 bits current-branch)
(if (null? bits) '()
(let ((next-branch (choose-branch (car bits) current-branch)))
(if (leaf? next-branch) (cons (symbol-leaf next-branch) (decode-1 (cdr bits) tree))
(decode-1 (cdr bits) next-branch))
)
)
)
(decode-1 bits tree)
)
(define (choose-branch bit branch)
(cond ((= bit 0) (left-branch branch))
((= bit 1) (right-branch branch))
(else (error "bad bit: CHOOSE-BRANCH" bit))
)
)
(define (adjoin-set x set)
(cond ((null? set) (list x))
((< (weight x) (weight (car set))) (cons x set))
(else (cons (car set) (adjoin-set x (cdr set))))))
(define (make-leaf-set pairs)
(if (null? pairs) '()
(let ((pair (car pairs)))
(adjoin-set (make-leaf (car pair) ; symbol
(cadr pair)) ; weight
(make-leaf-set (cdr pairs)))
)
)
)
;; +
; Encode a single symbol
(define (encode-symbol symbol tree)
(define (iter sub result)
(if (leaf? sub)
(if (eq? (symbol-leaf sub) symbol) result
'())
(let ((l (left-branch sub))
(r (right-branch sub)))
(let ((l-result (iter l (append result '(0)))))
(if (not (null? l-result)) l-result
(iter r (append result '(1)))
)
)
)
)
)
(let ((result (iter tree '())))
(if (null? result) (error "bad symbol: ENCODE" symbol)
result
)
)
)
; Encode a message
(define (encode message tree)
(if (null? message) '()
(append
(encode-symbol (car message) tree)
(encode (cdr message) tree))
)
)
;; +
; Reworked answer
; Check whether the given weight appears in the list.
; If it does not, return false.
; If it does, return the first matching pair together with the next pair. (These two are merged into a tree.)
(define (have-weight? w set)
(cond ((null? set) #f)
((not (pair? set)) #f)
(else
(if (= (weight (car set)) w) (list (car set) (cadr set))
(have-weight? w (cdr set))
)
)
)
)
; Return the list of symbol-weight pairs used to build the Huffman tree, with the given pair removed.
(define (remove-info item set)
(if (null? set) '()
(if (equal? item (car set)) (remove-info item (cdr set))
(cons (car set) (remove-info item (cdr set)))
)
)
)
; Answer
(define (successive-merge set)
(define (iter w result)
;(display result)
;(newline)
(cond ((null? result) '())
((= (length result) 1) (car result))
(else
(let ((ll (have-weight? w result)))
(if (equal? ll #f) (iter (+ w 1) result)
(let ((top (car ll))
(next (cadr ll)))
(let ((new-result (remove-info next (remove-info top result))))
(if (null? new-result) (iter w (list (make-code-tree next top)))
(let ((new-item (make-code-tree next top)))
(if (<= (weight new-item) (weight (car new-result)))
(iter w (append (list new-item) new-result))
(iter w (append new-result (list new-item)))
)
)
)
)
)
)
)
)
)
)
(iter 1 set)
)
(define (generate-huffman-tree pairs)
(successive-merge (make-leaf-set pairs)))
;; -
; 2^(5-1)
(define pairs1 '((A 16) (B 8) (C 4) (D 2) (E 1)))
(define sample1 (generate-huffman-tree pairs1))
sample1
; Check the behavior with single symbols
(display (encode '(A) sample1))
(newline)
(display (encode '(B) sample1))
(newline)
(display (encode '(C) sample1))
(newline)
(display (encode '(D) sample1))
(newline)
(display (encode '(E) sample1))
(newline)
; 2^(10-1)
(define pairs2 '((A 512) (B 256) (C 128) (D 64) (E 32) (F 16) (G 8) (H 4) (I 2) (J 1)))
(define sample2 (generate-huffman-tree pairs2))
sample2
; Check the behavior with single symbols
(display (encode '(A) sample2))
(newline)
(display (encode '(B) sample2))
(newline)
(display (encode '(C) sample2))
(newline)
(display (encode '(D) sample2))
(newline)
(display (encode '(E) sample2))
(newline)
(display (encode '(F) sample2))
(newline)
(display (encode '(G) sample2))
(newline)
(display (encode '(H) sample2))
(newline)
(display (encode '(I) sample2))
(newline)
(display (encode '(J) sample2))
(newline)
| exercises/2.71.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gravity gradient of a rectangular prism
# **[References]**
#
# * <NAME>., <NAME>, and <NAME> (2000), The gravitational potential and its derivatives for the prism: Journal of Geodesy, 74, 552–560, doi: 10.1007/s001900000116.
# ## Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import prism_grav
# ## Generating the coordinate-system parameters
Nx = 100
Ny = 50
area = [-1000.,1000.,-1000.,1000.]
shape = (Nx,Ny)
x = np.linspace(area[0],area[1],num=Nx)
y = np.linspace(area[2],area[3],num=Ny)
yc,xc = np.meshgrid(y,x)
voo = -100.
zc = voo*np.ones_like(xc)
coordenadas = np.array([yc.ravel(),xc.ravel(),zc.ravel()])
# ## Generating the prism parameters
rho = np.array([3000.,2300])
modelo = np.array([[-50,50,-450,-250,100,350],
[-50,50,250,450,50,150]])
# ## Computing the gravity-gradient tensor components
gxx = prism_grav.gravity_gradient(coordenadas,modelo,rho,field="g_xx")
gxy = prism_grav.gravity_gradient(coordenadas,modelo,rho,field="g_xy")
gxz = prism_grav.gravity_gradient(coordenadas,modelo,rho,field="g_xz")
gyy = prism_grav.gravity_gradient(coordenadas,modelo,rho,field="g_yy")
gyz = prism_grav.gravity_gradient(coordenadas,modelo,rho,field="g_yz")
gzz = prism_grav.gravity_gradient(coordenadas,modelo,rho,field="g_zz")
laplacian = gxx + gyy + gzz
laplacian
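# Outside the sources the gravitational potential is harmonic, so the trace gxx + gyy + gzz must vanish up to numerical error, which is what the cell above inspects for the prisms. A self-contained sketch of the same property for the analytic point-mass gradient tensor (independent of prism_grav; the names and values below are illustrative only):

```python
def point_mass_gradient(x, y, z, G=6.674e-11, m=1.0e9):
    # Gravity-gradient tensor (second derivatives of V = G*m/r)
    # of a point mass at the origin, observed at (x, y, z), in SI units.
    r2 = x * x + y * y + z * z
    r5 = r2 ** 2.5
    c = (x, y, z)
    return [[G * m * (3.0 * c[i] * c[j] - (r2 if i == j else 0.0)) / r5
             for j in range(3)]
            for i in range(3)]

T = point_mass_gradient(120.0, -80.0, -100.0)
trace = T[0][0] + T[1][1] + T[2][2]  # ~0: Laplace's equation holds
```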
# ## Visualizing the computed results
# +
title_font = 22
bottom_font = 15
plt.close('all')
plt.figure(figsize=(15,15), tight_layout=True)
plt.subplot(3,3,1)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('$g_{xx}$ (Eotvos)', fontsize=title_font)
plt.pcolor(yc,xc,gxx.reshape(shape),shading='auto',cmap='viridis')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(3,3,2)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('$g_{xy}$ (Eotvos)', fontsize=title_font)
plt.pcolor(yc,xc,gxy.reshape(shape),shading='auto',cmap='viridis')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(3,3,3)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('$g_{xz}$ (Eotvos)', fontsize=title_font)
plt.pcolor(yc,xc,gxz.reshape(shape),shading='auto',cmap='viridis')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(3,3,5)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('$g_{yy}$ (Eotvos)', fontsize=title_font)
plt.pcolor(yc,xc,gyy.reshape(shape),shading='auto',cmap='viridis')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(3,3,6)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('$g_{yz}$ (Eotvos)', fontsize=title_font)
plt.pcolor(yc,xc,gyz.reshape(shape),shading='auto',cmap='viridis')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
plt.subplot(3,3,9)
plt.xlabel('easting (m)', fontsize = title_font)
plt.ylabel('northing (m)', fontsize = title_font)
plt.title('$g_{zz}$ (Eotvos)', fontsize=title_font)
plt.pcolor(yc,xc,gzz.reshape(shape),shading='auto',cmap='viridis')
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.ax.tick_params(labelsize=bottom_font)
file_name = 'images/forward_modeling_grav_gradient_prism'
plt.savefig(file_name+'.png',dpi=300)
plt.show()
# -
| Content/code/3. Grav_Mag_modeling/3.6. Prism_gravity_gradient/1. 3D_gradiente_gravidade.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Types of questions in interviews:
#
# * Knowledge: terms, protocols
# * Thinking
# * Discussion
# * Planning: architecture, engineering, what a system will look like
#
#
#
#
# **Maintain a good LinkedIn profile. Show interest and "live" the job you want.**
#
# Search for jobs through acquaintances.
#
# link to screenshots of the questions: https://docs.google.com/document/d/1FHFkQMnbtgTd9K9upl8qphzhg3lksTAutMc0V8ghuIw/edit
| notebooks/First_job_webinar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## <center> Car Dekho selling price prediction for old cars </center>
# 
# <a id=top></a>
# <b><u>Table of Content</u></b>
# 1. [Introduction](#section1)<br>
# 2. [About Data Set](#section2)<br>
# 3. [Exploratory Data Analysis](#section3)<br>
# 4. [Observations](#section4)<br>
# 5. [Feature Engineering](#section5)<br>
# 6. [Predictive Modeling](#section6)<br>
# 7. [Comparing all Regression Techniques](#section8)<br>
# 8. [Conclusion](#section7)<br>
# <a id=section1></a>
# ## Introduction
#
# CarDekho.com is India's leading car search venture that helps users buy cars that are right for them. Its website and app carry rich automotive content such as expert reviews, detailed specs and prices, comparisons as well as videos and pictures of all car brands and models available in India. The company has tie-ups with many auto manufacturers, more than 4000 car dealers and numerous financial institutions to facilitate the purchase of vehicles.
#
# This study is done based on the data taken from Car Dekho company and the objective of this study is to create a predictive model to predict Selling price of a car based on various features like Make, Model, Distance driven, Manual or Automatic like many other features.
## Import the required libraries
import pandas as pd # importing pandas
import numpy as np # importing numpy
import seaborn as sns #import seaborn
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder # label encoder for encoding Categorical Value
from sklearn.preprocessing import StandardScaler # Standard Scaling
from sklearn.model_selection import train_test_split #Train Test split
from sklearn.linear_model import LinearRegression,Lasso # Linear Regression
from sklearn.metrics import mean_absolute_error,mean_squared_error,r2_score # import regression model evaluation
from sklearn.ensemble import RandomForestRegressor # Random Forest Regressor
from sklearn.neighbors import KNeighborsRegressor # KNN regressor
# Read the File and get Top 5 cols
df=pd.read_csv("car data.csv")
df.head()
df.info()
# <a id=section2></a>
# ## About Dataset
# 
#
# - There are 301 observations in total, which is a small sample for modeling.
# - There are 9 columns in total, as shown above.
# - There are no null values, so the missing-value imputation step will be skipped.
# - There are two kinds of variables, tabulated below:
#
# | Continuous variable | Categorical variable |
# | ------------ | ------------ |
# | Year | Car_Name (N) |
# | Selling_Price | Fuel_Type (N)|
# | Present_Price | Seller_Type (N) |
# | Kms_Driven | Transmission (N) |
# | Owner | |
# (N: Nominal, O: Ordinal)
#
#
df["Depriciation"]=df["Present_Price"]-df["Selling_Price"]
df.head()
# <a id=section3></a>
# ## Exploratory Data Analysis
# <font color="Blue"><b> Q1. Get the top 5 depreciating cars with their make and model</b></font>
fig, ax = plt.subplots(figsize=(10,7))
df.groupby(["Year","Car_Name"])["Depriciation"].mean().nlargest(5).plot(kind="bar")
plt.ylabel("Depriciation")
plt.title("Top 5 Depriciated Cars with their model")
for p in ax.patches:
ax.annotate('{:.1f}'.format(p.get_height()), (p.get_x()+0.2, p.get_height()/2))
# <font color="Blue"><b> Q2. How does depreciation vary with kilometers driven and transmission (manual or automatic)?</b></font>
sns.lmplot(x="Kms_Driven",y="Depriciation",hue="Transmission",data=df,fit_reg=False)
# <font color="Blue"><b> Q3. Get the 5 least depreciating cars with their make and model</b></font>
fig, ax = plt.subplots(figsize=(10,7))
df.groupby(["Year","Car_Name"])["Depriciation"].mean().nsmallest(5).plot(kind="bar")
plt.ylabel("Depriciation")
plt.title("Least 5 Depriciated Cars with their model")
for p in ax.patches:
ax.annotate('{:.1f}'.format(p.get_height()), (p.get_x()+0.2, p.get_height()/2))
# <font color="Blue"><b> Q4. How does depreciation depend on the number of previous owners (first hand, second hand, or more)?</b></font>
fig, ax = plt.subplots(figsize=(10,7))
df.groupby("Owner")["Depriciation"].mean().plot(kind="bar")
plt.ylabel("Depriciation")
plt.xlabel("# of owners")
plt.title("Averge depriciation based on # of owners")
for p in ax.patches:
ax.annotate('{:.1f}'.format(p.get_height()), (p.get_x()+0.2, p.get_height()/2))
sns.heatmap(df.corr(),annot=True)
# <a id=section4></a>
# ## Observations
#
# 1. The correlation between selling price and present price is high, so present price will be removed.
# 2. Depreciation increases with kilometers driven.
# 3. Depreciation is higher for automatic cars.
# 4. As the number of owners increases, depreciation also increases.
# 5. The kilometers-driven and selling-price columns contain many outliers, which need to be handled.
# 6. Since there are few observations, outlier values will not be removed but replaced by the 3rd-quartile value of the column.
#
# <a id=section5></a>
# ## Feature Engineering
df=df.drop("Present_Price",axis=1) # dropping the highly correlated Present_Price column
df=df.drop("Depriciation",axis=1) # dropping the Depriciation column, a temporary column used only for EDA
# There are 2 duplicated rows; remove them from the dataset.
duplicated_row=df[df.duplicated()]
duplicated_index=duplicated_row.index
duplicated_index
df=df.drop_duplicates()
# using Label encoder for Car name categorical values
le=LabelEncoder()
df["Car_Name_encoded"]=le.fit_transform(df["Car_Name"])
# for "Fuel_Type", "Seller_Type" and "Transmission", using dummification to convert categorical to continuous values
df_dummified=pd.get_dummies(df[["Fuel_Type","Seller_Type","Transmission"]],drop_first=True)
df=pd.concat([df,df_dummified],axis=1)
# Dropping Car_Name, Fuel_Type, Seller_type,Transmission
df=df.drop(["Car_Name", "Fuel_Type", "Seller_Type","Transmission"],axis=1)
df.head()
# Checking outliers for Selling price
sns.boxplot(df["Selling_Price"])
df["Selling_Price"].describe()
# function to handle outliers by replacing values above the 3rd quartile with the 3rd-quartile value
def change_outliers_75Q(df,col_name):
df[col_name] = np.where(df[col_name] > df[col_name].quantile(q=.75), df[col_name].quantile(q=.75), df[col_name])
return df[col_name]
df["Selling_Price"]=change_outliers_75Q(df,"Selling_Price")
df["Kms_Driven"].describe()
sns.boxplot(df["Kms_Driven"])
df["Kms_Driven"]=change_outliers_75Q(df,"Kms_Driven")
y=df["Selling_Price"]
X=df.drop("Selling_Price",axis=1)
X_train,X_test,y_train,y_test=train_test_split(X,y,random_state=42,test_size=.2)
std_sclr=StandardScaler()
X_Train_scaled=std_sclr.fit_transform(X_train)
X_Test_sclaed=std_sclr.transform(X_test)
df_scaled=pd.DataFrame(X_Train_scaled,columns=X.columns)
df_scaled.boxplot()
# <a id=section6></a>
# ## Predictive Modeling
lin_reg=LinearRegression()
#lasso_reg=Lasso(alpha=.1)
lin_reg.fit(X_Train_scaled,y_train)
y_pred=lin_reg.predict(X_Test_sclaed)
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_pred),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_pred),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_pred)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_pred),2)))
# ## Linear Regression
# | Evaluation criteria | Score |
# | ------------ | ------------ |
# | Mean Absolute Error (MAE)| .65 |
# | Mean Squared Error (MSE)| .75 |
# | Root Mean Squared Error(RMSE)| .87 |
# | R2 Score | .86 |
#
rndm_frst_reg=RandomForestRegressor(n_estimators=10)
rndm_frst_reg.fit(X_Train_scaled,y_train)
y_pred1=rndm_frst_reg.predict(X_Test_sclaed)
# Evaluation for the Random Forest regressor with 10 estimators
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_pred1),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_pred1),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_pred1)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_pred1),2)))
# ## Random Forest Regressor
# | Evaluation criteria | Score |
# | ------------ | ------------ |
# | Mean Absolute Error (MAE)| .45 |
# | Mean Squared Error (MSE)| .55 |
# | Root Mean Squared Error(RMSE)| .74 |
# | R2 Score | .9 |
#
for eachK in range(1,18):
neigh_reg=KNeighborsRegressor(n_neighbors=eachK)
neigh_reg.fit(X_Train_scaled,y_train)
y_pred_neigh=neigh_reg.predict(X_Test_sclaed)
print("for k={} the score is {}".format(eachK,r2_score(y_test,y_pred_neigh)))
# +
#Best R2 score is for k=3
neigh_at_3=KNeighborsRegressor(n_neighbors=3)
neigh_at_3.fit(X_Train_scaled,y_train)
y_pred_neigh_at_3=neigh_at_3.predict(X_Test_sclaed)
# -
# Evaluation for the KNN regressor with k=3
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_pred_neigh_at_3),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_pred_neigh_at_3),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_pred_neigh_at_3)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_pred_neigh_at_3),2)))
# ## KNN Regressor
# | Evaluation criteria | Score |
# | ------------ | ------------ |
# | Mean Absolute Error (MAE)| .56 |
# | Mean Squared Error (MSE)| .66 |
# | Root Mean Squared Error(RMSE)| .81 |
# | R2 Score | .88 |
#
from sklearn.linear_model import Ridge,Lasso,ElasticNet
from sklearn.model_selection import GridSearchCV
rdg_reg=Ridge()
lasso=Lasso()
elastinet=ElasticNet()
parameter={'alpha':[1e-15,1e-10,1e-8,1e-4,1e-3,1e-2,0,1,2,4,10,20]}
regressor=GridSearchCV(rdg_reg,parameter,scoring="neg_mean_squared_error",cv=5)
regressor.fit(X_Train_scaled,y_train)
print(regressor.best_params_,regressor.best_score_)
y_ridge_predict=regressor.predict(X_Test_sclaed)
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_ridge_predict),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_ridge_predict),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_ridge_predict)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_ridge_predict),2)))
# ## Ridge Regressor
# | Evaluation criteria | Score |
# | ------------ | ------------ |
# | Mean Absolute Error (MAE)| .65 |
# | Mean Squared Error (MSE)| .75 |
# | Root Mean Squared Error(RMSE)| .87 |
# | R2 Score | .86 |
#
lasso_regressor=GridSearchCV(lasso,parameter,scoring="neg_mean_squared_error",cv=5)
lasso_regressor.fit(X_Train_scaled,y_train)
print(lasso_regressor.best_params_,lasso_regressor.best_score_)
y_lasso_pred=lasso_regressor.predict(X_Test_sclaed)
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_lasso_pred),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_lasso_pred),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_lasso_pred)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_lasso_pred),2)))
# ## Lasso Regressor
# | Evaluation criteria | Score |
# | ------------ | ------------ |
# | Mean Absolute Error (MAE)| .67 |
# | Mean Squared Error (MSE)| .80 |
# | Root Mean Squared Error(RMSE)| .90 |
# | R2 Score | .85 |
#
elastinet_regressor=GridSearchCV(elastinet,parameter,scoring="neg_mean_squared_error",cv=5)
elastinet_regressor.fit(X_Train_scaled,y_train)
print(elastinet_regressor.best_params_,elastinet_regressor.best_score_)
y_elastinet_pred=elastinet_regressor.predict(X_Test_sclaed)
print("Mean Absolute Error: {}".format(round(mean_absolute_error(y_test,y_elastinet_pred),2)))
print("Mean Squared Error: {}".format(round(mean_squared_error(y_test,y_elastinet_pred),2)))
print("Root Mean Squared Error: {}".format(round(np.sqrt(mean_squared_error(y_test,y_elastinet_pred)),2)))
print("R2 Score: {}".format(round(r2_score(y_test,y_elastinet_pred),2)))
# ## ElasticNet Regressor
# | Evaluation criteria | Score |
# | ------------ | ------------ |
# | Mean Absolute Error (MAE)| .68 |
# | Mean Squared Error (MSE)| .80 |
# | Root Mean Squared Error(RMSE)| .89 |
# | R2 Score | .85 |
#
# <a id=section8></a>
# ## Comparing all Regression Techniques
# 
# <a id=section7></a>
# ## Conclusion
#
# 1. The data set contains many outliers, so cleaning steps are needed to correct the data.
# 2. Tried various regression techniques and found that the Random Forest Regressor gave the best R2 score for the given dataset.
# 3. Tried various K values with the KNN Regressor; the maximum R2 score achieved was 87%.
# 4. The best R2 score, ~90%, came from the Random Forest Regressor, which is the final model I suggest for this data set.
| DataAnalysisAndML/Car_Dekho_EDA_and_Prediction/Car_Dekho_EDA_and_prediction.ipynb |
# +
# Data Scientist Training (Formação Cientista de Dados)
# Association rules
# -
# Import the libraries
import pandas as pd
from apyori import apriori
# pip install apyori (run in the Anaconda Prompt)
# Read the transactions
dados = pd.read_csv('./data/transacoes.txt', header = None)
dados
# Convert to the list format required by the apyori library - the data set has 6 transactions with 3 items each
transacoes = []
for i in range(0,6):
transacoes.append([str(dados.values[i,j]) for j in range(0,3)])
transacoes
# Run the apriori algorithm to generate the association rules, setting the support and confidence parameters
regras = apriori(transacoes, min_support = 0.5, min_confidence = 0.5,min_length=2)
# Create a new variable to store only the association rules
resultados = list(regras)
print(resultados[0])
resultados
# Create a new variable, iterating over the previous one for a better view of the results
resultados2 = [list(x) for x in resultados]
resultados2
# Create another variable so the rules are easier for the user to read, collecting the rules found in resultados2
resultados3 = []
for j in range(len(resultados2)):
resultados3.append([list(x) for x in resultados2[j][2]])
resultados3
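# As a sanity check on the `min_support` / `min_confidence` thresholds used above, support and confidence can be computed by hand for a toy basket. This is a minimal sketch; the item names are hypothetical and not taken from `transacoes.txt`.

```python
# Toy transactions (hypothetical items, 4 baskets)
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    # Fraction of baskets containing every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # P(consequent | antecedent) = support(A ∪ C) / support(A)
    return support(antecedent | consequent) / support(antecedent)

print(support({"bread", "milk"}))       # 0.5
print(confidence({"bread"}, {"milk"}))  # 0.666...
```

# A rule survives min_support = 0.5 and min_confidence = 0.5 exactly when both numbers above clear those thresholds.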
| 06-Machine_learning/python/9-associacao.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Example applications of probability distributions
# ## Binomial example
#
# Consider an option-pricing model that tries to model the price of an asset $S(t)$ in a simplified way, instead of using stochastic differential equations. In this simplified model, given the current asset price $S(0)=S_0$, the price after one time step $\delta t$, denoted $S(\delta t)$, can be either $S_u=uS_0$ or $S_d=dS_0$, with probabilities $p_u$ and $p_d$ respectively. The subscripts $u$ and $d$ can be interpreted as 'up' and 'down'; note that the changes are multiplicative. Now imagine that the process $S(t)$ is observed until time $T=n\cdot \delta t$ and that the up and down moves of the price are independent over time. Since there are $n$ steps, the largest value of $S(T)$ that can be reached is $S_0u^n$ and the smallest is $S_0d^n$. Intermediate values have the form $S_0u^md^{n-m}$, where $m$ is the number of up moves of the asset and $n-m$ the number of down moves. Observe that the exact sequence of up and down moves is irrelevant for the final price, since multiplicative changes commute: $S_0ud=S_0du$. A simple model like this can be represented as a binomial model, as follows:
# 
#
# Such a model is only convenient for simple low-dimensional options, because **the tree can grow exponentially**, although recombination keeps the complexity low. With this model we could try to answer:
# - What is the probability that $S(T)=S_0u^md^{(n-m)}$?
# - **Discuss how to build the binomial model**
# - $n,m,p \longrightarrow X\sim Bin(n,p)$
# - PMF $\rightarrow P(X=m)={n \choose m}p^m(1-p)^{n-m}$
# - Plot the probability mass function for $n=30, p_1=0.2,p_2=0.4$
# +
# Import the libraries used in all the simulations
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st # Statistics library
from math import factorial as fac # Import the factorial operation
from scipy.special import comb # Import the combination function
# %matplotlib inline
# +
# Distribution parameters
n = 30; p1=0.2; p2 = 0.4
m = np.arange(0,n)
n = n*np.ones(len(m))
# Hand-written binomial PMF
P = lambda p,n,m:comb(n,m)*p**m*(1-p)**(n-m)
# Binomial PMF from the statistics package
P2 = st.binom(n,p1).pmf(m)
# Compare the hand-written function with the library function
plt.plot(P(p1,n,m),'o-',label='Hand-written function')
plt.stem(P2, linefmt='r--', label='Statistics library function')
plt.legend()
plt.title('Comparison of the functions')
plt.show()
# PMF plot for the asset-price problem
plt.plot(P(p1,n,m),'o-.b',label='$p_1 = 0.2$')
plt.plot(st.binom(n,p2).pmf(m),'gv--',label='$p_2 = 0.4$')
plt.legend()
plt.title('PMF for the asset-price problem')
plt.show()
# -
# ## Exercise
# <font color='red'>Reference problem: Introduction to Operations Research (Chap. 10.1, pp. 471 and 1118)
# > Download the exercise from the following link:
# > https://drive.google.com/file/d/19GvzgEmYUNXrZqlmppRyW5t0p8WfUeIf/view?usp=sharing
#
# 
# 
# ### Recall the normal distribution
# 
# ### The Central Limit Theorem
# The central limit theorem states that, under certain conditions (such as the variables being independent and identically distributed with finite variance), the sum of a large number of random variables is distributed approximately as a normal. **(Discuss the importance of its use)**
# 
# 
# 
# 
# **Pessimistic case**
# 
# **Possibilities: Most likely**
# 
# **Optimistic case**
# 
# ## **Approximations**
#
# 1. **Simplifying Approximation 1:** Assume that the mean critical path will turn out to be the longest path through the project network.
# 2. **Simplifying Approximation 2:** Assume that the durations of the activities on the mean critical path are statistically independent
#
# > **Recall the expression for the variance of two or more random variables**
# > $$\operatorname {var} (X+Y)= \operatorname {var} (X)+\operatorname {var} (Y)+2 \operatorname {cov}(X,Y) $$
#
# $$\mu_p \longrightarrow \text{Use the approximation 1}$$
# $$\sigma_p \longrightarrow \text{Use the approximation 1,2}$$
# **Choosing the mean critical path**
# 
# 3. **Simplifying Approximation 3:** Assume that the form of the probability distribution of project duration is a `normal distribution`. By using simplifying approximations 1 and 2, one version of the central limit theorem justifies this assumption as being a reasonable approximation if the number of activities on the mean critical path is not too small (say, at least 5). The approximation becomes better as this number of activities increases.
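# The three approximations can be tried numerically. This is a minimal sketch with hypothetical activity means and variances for the five activities on the mean critical path (values chosen so that $\mu_p=44$ and $\sigma_p=3$, matching the case study that follows):

```python
import numpy as np

# Hypothetical means/variances of the activities on the mean critical path
means = np.array([8.0, 9.0, 10.0, 8.0, 9.0])      # weeks
variances = np.array([1.0, 2.0, 2.0, 2.0, 2.0])   # weeks^2

mu_p = means.sum()                  # approximation 1: sum the activity means
sigma_p = np.sqrt(variances.sum())  # approximation 2: independence => variances add

print(mu_p, sigma_p)  # 44.0 3.0
```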
# ### Case studies
# Let $T$ be the random variable representing the project duration in weeks, with mean $\mu_p$ and variance $\sigma_p^2$, and let $d$ be the project deadline, which is 47 weeks.
# 1. Assume that $T$ is normally distributed and compute the probability $P(T\leq d)$.
######### Case study 1 ################
up = 44; sigma = np.sqrt(9); d = 47
P = st.norm(up,sigma).cdf(d)
print('P(T<=d)=',P)
# P2 = st.beta  # starting point for the beta-distribution exercise below
# >## <font color = 'red'> Exercise
# >1. Assume that $T$ follows a beta distribution whose mean is $\mu_p$ and variance is $\sigma_p^2$, and compute the probability $P(T\leq d)$.
# 
#
# > **Hint**: - Learn to use the nonlinear equation solver https://stackoverflow.com/questions/19843116/passing-arguments-to-fsolve
# - Read the help of the beta function in the statistics package to learn how to plot beta distributions on an interval other than [0, 1].
#
# >2. Assume that $T$ follows a triangular distribution where the most likely value is $\mu_p$, the pessimistic value is $p=49$ and the optimistic value is $o=40$, and compute the probability $P(T\leq d)$. Again, use the help to learn how to plot a triangular distribution on an interval [a,b].
#
# ## Submission details (nothing needs to be handed in)
# A link will be enabled on Canvas where you must upload your Python notebook with your solution. The submission deadline is Thursday, March 19 at 18:00.
#
# <footer id="attribution" style="float:right; color:#808080; background:#fff;">
# Created with Jupyter by <NAME>
# </footer>
| TEMA-2/Clase12_EjemplosDeAplicaciones.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
import healpy as hp
import sys
sys.path.append('/Users/mehdi/github/LSSutils')
from LSSutils import dataviz as dv
from glob import glob
plt.rc('font', family='serif', size=15)
from LSSutils.catalogs.datarelease import cols_dr8_rand as labels
# !ls -lt ../pk*zbin1*.txt
#pks = glob('../pk_v0_2_*.txt')
pks = glob('../pk_2_*all*.txt')
pks
import numpy as np
# +
pks = ['../pk_2_v0_0.1_none_none_none_all_512_red_0.txt', #'../pk_2_v0_0.1_nn_known_lowhigh_all_512_red_1.txt',
'../pk_2_v0_0.1_none_none_none_all_512_red_1.txt',
'../pk_2_v0_0.1_nn_plain_lowhigh_all_512_red_1.txt']#'../pk_2_v0_0.1_nn_plain_lowhigh_all_512_red_1.txt',
# #'../pk_2_v0_0.1_nn_plain_lowhigh_all_512_red_1.txt'
# #
# #
# #'../pk_2_v0_0.1_nn_plain_all_all_512_red_1.txt',
# #'../pk_2_v0_0.1_nn_ablation_lowhigh_all_512_red_1.txt']
mk = 10*['.', '*', '^', '+', 's']
j = 0
for pki in pks:
#plt.figure()
model = pki.split('_')[4]
templ = pki.split('_')[5]
split = pki.split('_')[6]
red = pki.split('_')[-1][0]
title = ' '.join([model, templ, split, red])
#title = pki
dpki = np.loadtxt(pki)
print(dpki.shape)
plt.loglog(dpki[:,0], dpki[:,1], marker=mk[j], label=title)
j += 1
plt.legend(bbox_to_anchor=(1., 1.))
plt.title('mock 2')
plt.ylabel(r'P$_{0}$')
plt.xlabel('k [h/Mpc]')
plt.grid(which='both')
plt.savefig('pk_mock2.png', dpi=300, bbox_inches='tight')
# -
pkvp6b = glob('../v0.6/pk_v0_*_red_0_0.6.txt')
pkvp6a = glob('../v0.6/pk_v0_*_red_1_0.6.txt')
def readall(pklist):
pka = []
for pki in pklist:
pka.append(np.loadtxt(pki))
return np.array(pka)
pkb = readall(pkvp6b)
pka = readall(pkvp6a)
pkb.shape
# +
fig, ax = plt.subplots(nrows=2, figsize=(6, 8), sharex=True)
fig.subplots_adjust(hspace=0.0)
colors = ['k', 'r']
labels = ['After systematics', 'Before systematics']
for i, pki in enumerate([pka, pkb]):
for j in range(pki.shape[0]):
ax[0].plot(pki[0, :, 0], pki[j, :, 1], alpha=0.04, color=colors[i])
ax[1].plot(pki[0, :, 0], pki[j, :, 2]/1.0e4, alpha=0.04, color=colors[i])
ax[0].scatter(pki[0,:,0], np.mean(pki[:, :, 1], axis=0), color=colors[i], label=labels[i], marker='o')
ax[1].scatter(pki[0,:,0], np.mean(pki[:, :, 2], axis=0)/1.0e4, color=colors[i], marker='o')
for axi in ax:
axi.grid(True, ls=':', color='grey', which='both')
ax[0].legend()
ax[0].set(xscale='log', yscale='log', ylabel=r'P$_{0}$')
ax[1].set(xscale='log', ylabel=r'P$_{2}$ $\times$ 10$^{-4}$', xlabel='k [h/Mpc]')
fig.savefig('./pks_v0.6.png', dpi=300, bbox_inches='tight')
# -
pkb.shape
dv.get_selected_maps(glob('ablation/desi_mock.log_fold*.npy'),
['DESI mock-8'],
labels=labels)
dv.ablation_plot_all(glob('ablation/desi_mock.log_fold*.npy'),
title='DESI MOCK 8', labels=labels)
lists = glob('regression/*/*-weights.hp256.fits')
names = []
wmaps = []
for g in lists:
names.append(g)
wmaps.append(hp.read_map(g, verbose=False))
# +
fig, ax = plt.subplots(ncols=2, nrows=3, figsize=(14, 12))
ax = ax.flatten()
fig.subplots_adjust(wspace=0.0, hspace=-0.1)
def fixunit(fwmap):
return ' '.join([fwmap.split('/')[1].split('_')[1],\
fwmap.split('/')[-1].split('-')[0]])
for i, wmap in enumerate(wmaps):
print(i)
galaxy = False if i != 0 else True
print(galaxy)
wmap[wmap==0] = hp.UNSEEN
dv.hpmollview(wmap,
fixunit(names[i]),
[fig, ax[i]],
title=fixunit(names[i]),
rot=-89,
max=1.1, min=0.9,
badcolor='w',
cmap=plt.cm.viridis,
galaxy=galaxy,
cbar=False)
ax[i].text(0.2, 0.2, fixunit(names[i]), transform=ax[i].transAxes)
fig.delaxes(ax[-1])
# -
| notebooks/trunk/viz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
"""
Deep Deterministic Policy Gradient (DDPG), Reinforcement Learning.
P2P network, net bit rate, energy harvesting example for training.
Thanks to : https://github.com/MorvanZhou/Reinforcement-learning-with-tensorflow/tree/master/contents/9_Deep_Deterministic_Policy_Gradient_DDPG
Using:
tensorflow 1.0
"""
import math
import tensorflow as tf
import numpy as np
import gym
import time
import EH_P2P
import DDPG_CLASS
np.random.seed(1)
tf.set_random_seed(1)
##################### hyper parameters ####################
MAX_EPISODES = 2000
MAX_EP_STEPS = 120
LR_A = 0.0002 # learning rate for actor
LR_C = 0.0002 # learning rate for critic
GAMMA = 0.999 # reward discount
REPLACEMENT = [
dict(name='soft', tau=0.01),
dict(name='hard', rep_iter_a=600, rep_iter_c=500)
][0] # you can try different target replacement strategies
MEMORY_CAPACITY = 40000
BATCH_SIZE = 80
OUTPUT_GRAPH = False
env=EH_P2P.EH_P2P()
env.Chanpower()
env.Solarread()
state_dim = 3 #channel,battery,solar
action_dim = 1 #Transmission power
action_bound = 1  # actions are scaled to the range [0, 1]
tip=1
tip2=1
snr=-10
for temp in range(1):
#for snr in range(-10,10,2):
for modulation in range(1):
var = 10
tf.reset_default_graph()
sess = tf.Session()
with tf.name_scope('S'):
S = tf.placeholder(tf.float32, shape=[None, state_dim], name='s')
with tf.name_scope('R'):
R = tf.placeholder(tf.float32, [None, 1], name='r')
with tf.name_scope('S_'):
S_ = tf.placeholder(tf.float32, shape=[None, state_dim], name='s_')
DDPG_CLASS.S=S
DDPG_CLASS.R=R
DDPG_CLASS.S_=S_
actor = DDPG_CLASS.Actor(sess, action_dim, action_bound, LR_A, REPLACEMENT)
critic = DDPG_CLASS.Critic(sess, state_dim, action_dim, LR_C, GAMMA, REPLACEMENT, actor.a, actor.a_)
actor.add_grad_to_graph(critic.a_grads)
M = DDPG_CLASS.Memory(MEMORY_CAPACITY, dims=2 * state_dim + action_dim + 1)
sess.run(tf.global_variables_initializer())
saver=tf.train.Saver(max_to_keep=100)
if OUTPUT_GRAPH:
tf.summary.FileWriter("logs/", sess.graph)
print("modulation=",modulation,"snr=",snr)
for i in range(MAX_EPISODES):
s = env.reset_P2P(snr)
ep_reward = 0
for j in range(MAX_EP_STEPS):
a = actor.choose_action(s)
a = np.random.normal(a, var)
a=np.clip(a,0,1)
s_, r, info = env.step_P2P([a,modulation])#input modulation 0:qpsk,1:8psk,2:16qam
M.store_transition(s, a, r , s_)
if M.pointer > MEMORY_CAPACITY:
# tip and tip2 are only for printing
if tip == 1:
print("memory full",j,i)
tip=0
var *= 0.9995 # decay the action randomness
if tip2 == 1 and var<0.00000001:
print("var zero",j,i)
tip2=0
b_M = M.sample(BATCH_SIZE)
b_s = b_M[:, :state_dim]
b_a = b_M[:, state_dim: state_dim + action_dim]
b_r = b_M[:, -state_dim - 1: -state_dim]
b_s_ = b_M[:, -state_dim:]
critic.learn(b_s, b_a, b_r, b_s_)
actor.learn(b_s)
s = s_
ep_reward += r
if i % 30 == 0 :
print("net bit rate=",r,"action",a, "solar,channel,battery",s,"epoch",i)
print("ave_reward",ep_reward/(j+1))
save_path = saver.save(sess, "folder_for_nn_noise"+"/EH_save_net_snr="+str(snr)+str(modulation)+"epoch="+str(i)+"_P2P.ckpt")
print("Save to path: ", save_path)
print("----------------------------END--------------------------------")
# -
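# The 'soft' target replacement selected above (`tau=0.01`) can be illustrated in isolation. This is a minimal sketch of the idea only, not the `DDPG_CLASS` implementation:

```python
def soft_update(target_params, online_params, tau=0.01):
    # Each target parameter moves a fraction tau toward its online counterpart,
    # so the target network tracks the online network slowly and stably.
    return [(1.0 - tau) * t + tau * o for t, o in zip(target_params, online_params)]

# One step moves a target weight of 0.0 toward an online weight of 1.0 by tau
print(soft_update([0.0], [1.0]))  # [0.01]
```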
| Codes/DDPG_IN_EH_COMMUNICATIONS_Train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python3
import numpy as np
import math
import matplotlib.pyplot as plt
import time
from scipy.stats import norm
# ==========================================
# PARAMETERS
# ==========================================
# Today's stock price (Spot price)
S0 = 100.0
# Strike price
E = 100.0
# Time to expiry: 1 year
T = 1 # number of years
time_units = 252.0 # days in a working year
dt = (T - 0)/time_units # timestep
sqrtdt = np.sqrt(dt) # Compute this outside the loop to save computing it
# at every timestep
time_range = np.arange(0, T+dt, dt) # array of time units
# Volatility
sigma = 0.2 # 20%
# Constant risk-free interest rate
r = 0.05 # 5%
"""
Could also make volatility a stochastic variable.
Could also make interest rate a stochastic variable.
But I have run out of time...
"""
# How many simulations to perform?
num_sims = np.array([
10,
100,
500,
#750,
1000,
5000,
10000,
50000,
100000,
#250000,
500000,
# 750000,
1000000,
# 2500000,
# 5000000,
# 7500000,
# 10000000,
])
# +
# Arrays for data storage and post-processing
# Binary options
array_binary_call_errors = np.zeros(len(num_sims))
array_binary_put_errors = np.zeros(len(num_sims))
array_binary_call_times = np.zeros(len(num_sims))
array_binary_put_times = np.zeros(len(num_sims))
# +
def percentage_difference(A, B):
return np.abs(A-B) / ((A+B)/2.0) * 100.0
def percentage_error(A,B):
return np.abs(A-B)/B *100.0
# +
# ==========================================
# METHODS FOR BINARY OPTIONS
# ==========================================
def analytic_binary_call_value(S_0, E, r, v, T):
"""
Closed form solution of the Black-Scholes equation
for a binary call option
"""
d_2 = (np.log(S_0/E) + (r - 0.5*v*v)*T) / (v*np.sqrt(T))
value = np.exp(-r*T) * norm.cdf(d_2)
return value
def analytic_binary_put_value(S_0, E, r, v, T):
"""
Closed form solution of the Black-Scholes equation
for a binary put option
"""
d_2 = (np.log(S_0/E) + (r - 0.5*v*v)*T) / (v*np.sqrt(T))
value = np.exp(-r*T) * (1- norm.cdf(d_2))
return value
def heaviside(a):
"""
Heaviside function: does what it says on the tin.
"""
if a>=0:
return 1.0
else:
return 0.0
# I am going to explicitly code TWO methods
# for the call and put binary options
def monte_carlo_binary_call_value(N, S_0, E, r, v, T):
"""
This method prices a binary CALL option with a
Monte Carlo method.
"""
# Deterministic part of calculation done outside forloop
# to save computation time
A = S_0 * np.exp((r - 0.5*v*v)*T)
S_T = 0.0
payoff_sum = 0.0
v_sqrt_T = v*np.sqrt(T)
payoffs = np.zeros(N)
# Since the binary option is independent of the path, we
# are only concerned with the price of the underlying at
# maturity.
# We can compute this directly with no storage of historical
# underlying prices.
for i in range(0, N):
normal_bm = np.random.normal(0, 1)
S_T = A * np.exp(v_sqrt_T * normal_bm)
payoff_sum += heaviside(S_T - E)
payoffs[i] = heaviside(S_T - E)
average_payoff = (payoff_sum / N) * np.exp(-r*T)
return average_payoff, payoffs
def monte_carlo_binary_put_value(N, S_0, E, r, v, T):
"""
This method prices a binary PUT option with a
Monte Carlo method.
"""
# Deterministic part of calculation done outside forloop
# to save computation time
A = S_0 * np.exp((r - 0.5*v*v)*T)
S_T = 0.0
payoff_sum = 0.0
v_sqrt_T = v*np.sqrt(T)
payoffs = np.zeros(N)
# Since the binary option is independent of the path, we
# are only concerned with the price of the underlying at
# maturity.
# We can compute this directly with no storage of historical
# underlying prices.
for i in range(0, N):
normal_bm = np.random.normal(0, 1)
S_T = A * np.exp(v_sqrt_T * normal_bm)
payoff_sum += heaviside(E - S_T)
payoffs[i] = heaviside(E - S_T)
average_payoff = (payoff_sum / N) * np.exp(-r*T)
return average_payoff, payoffs
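# The looped Monte Carlo above is easy to read but slow. Since the binary payoff is path-independent, all terminal prices can be drawn in one vectorised NumPy call. The following is an alternative sketch (`monte_carlo_binary_call_vectorised` is not part of the original code):

```python
import numpy as np

def monte_carlo_binary_call_vectorised(N, S_0, E, r, v, T, rng=None):
    # Draw all N terminal prices at once under the risk-neutral measure,
    # then discount the fraction of in-the-money outcomes.
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.standard_normal(N)
    S_T = S_0 * np.exp((r - 0.5 * v * v) * T + v * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.mean(S_T > E)
```

# Even for N = 10^6 paths this typically runs in a fraction of a second, versus the explicit Python loop above.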
# +
# ==========================================
# CALCULATE BINARY CALLs/PUTs for varying
# interest rate and simulation number
# ==========================================
r = 0.05
analytic_binary_call = analytic_binary_call_value(S0, E, r, sigma, T)
analytic_binary_put = analytic_binary_put_value(S0, E, r, sigma, T)
print("=============================")
print("Binary CALL option value (analytic): £%3.4f"%analytic_binary_call)
print("Binary PUT option value (analytic): £%3.4f"%analytic_binary_put)
print("=============================")
i=0
for N in num_sims:
print("----------------------------------")
print(" Number of simulations: %d" % N)
# ------------------------------
# Calculate Calls
t0 = time.time() # time the process
binary_call, tmp = monte_carlo_binary_call_value(N, S0, E, r, sigma, T) # calculation
calc_time = time.time() - t0 # time of process
array_binary_call_times[i] = calc_time # store calc time
call_error = percentage_error(binary_call, analytic_binary_call) # calc error from analytical
array_binary_call_errors[i] = call_error # store error
print("Binary CALL option value: £%3.4f\t%03.4f%% error" % (binary_call, call_error))
print(" Calculation time: %2.4f s\n" % calc_time)
# ------------------------------
# Calculate Puts
t0 = time.time() # time the process
binary_put, tmp = monte_carlo_binary_put_value(N, S0, E, r, sigma, T) # calculation
calc_time = time.time() - t0 # time of process
array_binary_put_times[i] = calc_time # store calc time
put_error = percentage_error(binary_put, analytic_binary_put) # calc error from analytical
array_binary_put_errors[i] = put_error # store error
print("Binary PUT option value: £%3.4f\t%03.4f%% error" % (binary_put, put_error))
print(" Calculation time: %2.4f s" % calc_time)
print("\n\n")
i+=1
# +
# Plot Error Vs Simulation number vs Calc time
# Binary Calls
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.semilogx(num_sims, # x-axis
array_binary_call_errors, # y-axis
'b-')
ax1.set_xlabel("Number of simulations", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Error (%)', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.semilogx(num_sims, # x-axis
array_binary_call_times, # y-axis
'r-')
ax2.set_ylabel('Calculation time (s)', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("binary_call_error_calc_times.eps")
plt.show()
plt.close()
# Binary Puts
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.semilogx(num_sims, # x-axis
array_binary_put_errors, # y-axis
'b-')
ax1.set_xlabel("Number of simulations", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Error (%)', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.semilogx(num_sims, # x-axis
array_binary_put_times, # y-axis
'r-')
ax2.set_ylabel('Calculation time (s)', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("binary_put_error_calc_times.eps")
plt.show()
plt.close()
# +
# ==========================================
# METHODS FOR LOOKBACK FLOATING OPTIONS
# ==========================================
# I am going to explicitly code TWO methods
# for the call and put lookback options.
# Ideally, I would make one function for both
# for the sake of clarity but I want to be
# explicit.
def analytic_lookback_floating_call_value(S_0, Min, r, v, T):
"""
Analytic solution of the Black-Scholes equation for a
lookback floating CALL option
I'm following the notation of
Wilmott (2007) Introduction to Quantitative Finance pg.268.
NOTE: Min is the minimum asset price observed in the
history of the option.
"""
d1 = calc_d1(S_0, Min, r, v, T)
d2 = calc_d2(S_0, Min, r, v, T)
d3 = calc_d3(S_0, Min, r, v, T)
t1 = S_0 * norm.cdf(d1)
t2 = Min * np.exp(-r*T) * norm.cdf(d2)
t3 = S_0 * np.exp(-r*T) * ((v*v) / (2.0*r))
t4 = math.pow((S_0/Min), ((-2.0*r)/(v*v))) * norm.cdf(d3) - np.exp(r*T)*norm.cdf(-d1)
return t1 - t2 + (t3*t4)
def analytic_lookback_floating_put_value(S_0, Max, r, v, T):
"""
Analytic solution of the Black-Scholes equation for a
lookback floating PUT option
I'm following the notation of
Wilmott (2007) Introduction to Quantitative Finance pg.268.
NOTE: Max is the maximum asset price observed in the
history of the option.
"""
d1 = calc_d1(S_0, Max, r, v, T)
d2 = calc_d2(S_0, Max, r, v, T)
d3 = calc_d3(S_0, Max, r, v, T)
t1 = Max * np.exp(-r*T) * norm.cdf(-d2)
t2 = S_0 * norm.cdf(-d1)
t3 = S_0 * np.exp(-r*T) * ((v*v) / (2.0*r))
t4 = -math.pow((S_0/Max), ((-2.0*r)/(v*v))) * norm.cdf(-d3) + np.exp(r*T)*norm.cdf(d1)
return t1 - t2 + (t3*t4)
def calc_d1(S_0, H, r, v, T):
"""
H could be the min/max of the option
during its life
"""
return (np.log(S_0/H) + (r + 0.5*v*v)*T) / (v*np.sqrt(T))
def calc_d2(S_0, H, r, v, T):
return calc_d1(S_0, H, r, v, T) - v*np.sqrt(T)
def calc_d3(S_0, H, r, v, T):
return -1.0*calc_d1(S_0, H, r, v, T) + (2.0 * r * np.sqrt(T) / v)
def monte_carlo_lookback_floating_call_value(N, S_0, r, v, time_range, dt, sqrtdt):
"""
This method prices a lookback CALL option with
a floating strike using a Monte Carlo method.
"""
option_life_data = np.zeros(shape=(len(time_range)), dtype=float)
option_life_data[0] = S_0 # Stock price at the start of every simulation
payoff_sum = 0.0
S_min_sum = 0.0
analytic_value_sum = 0.0
payoffs = np.zeros(N)
for j in range(0, N):
S_min = 100000 # prescribe a dummy minimum
for i in range(1, len(time_range)):
# Calculate the underlying at each subsequent timestep
option_life_data[i] = option_life_data[i-1] * np.exp(
(r - 0.5*v*v)*dt + (v * sqrtdt * np.random.normal(0,1)))
S_min = np.amin(option_life_data) # Minimum price of underlying during
# the life of the option
S_T = option_life_data[-1] # Price of underlying at maturity
payoff_sum += max(S_T - S_min, 0)
S_min_sum += S_min
payoffs[j] = max(S_T - S_min, 0)
# Using Black-Scholes equation
analytic_value_sum += analytic_lookback_floating_call_value(S0, S_min, r, sigma, T)
average_payoff = (payoff_sum / N) * np.exp(-r*time_range[-1])
return average_payoff, (analytic_value_sum / N), payoffs
def monte_carlo_lookback_floating_put_value(N, S_0, r, v, time_range, dt, sqrtdt):
"""
This method prices a lookback PUT option with
a floating strike using a Monte Carlo method.
"""
option_life_data = np.zeros(shape=(len(time_range)), dtype=float)
option_life_data[0] = S_0 # Stock price at the start of every simulation
payoff_sum = 0.0
S_max_sum = 0.0
analytic_value_sum = 0.0
payoffs = np.zeros(N)
for j in range(0, N):
S_max = 0 # prescribe a dummy maximum
for i in range(1, len(time_range)):
# Calculate the underlying at each subsequent timestep
option_life_data[i] = option_life_data[i-1] * np.exp(
(r - 0.5*v*v)*dt + (v * sqrtdt * np.random.normal(0,1)))
S_max = np.amax(option_life_data) # Maximum price of underlying during
# the life of the option
S_T = option_life_data[-1] # Price of underlying at maturity
payoff_sum += max(S_max - S_T, 0)
S_max_sum += S_max
payoffs[j] = max(S_max - S_T, 0)
# Using Black-Scholes equation
analytic_value_sum += analytic_lookback_floating_put_value(S0, S_max, r, sigma, T)
average_payoff = (payoff_sum / N) * np.exp(-r*time_range[-1])
return average_payoff, (analytic_value_sum / N), payoffs
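# The lookback estimators above converge slowly with N. A common variance-reduction idea (not used in the original) is antithetic variates: each normal draw is reused with its sign flipped, producing negatively correlated pairs of paths. The sketch below applies it to terminal prices only; for the path-dependent lookbacks one would negate the per-step normals inside the time loop.

```python
import numpy as np

def terminal_prices_antithetic(n_pairs, S_0, r, v, T, rng=None):
    # Each draw Z is paired with -Z, giving 2*n_pairs negatively
    # correlated terminal prices for the price of n_pairs draws.
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.standard_normal(n_pairs)
    drift = (r - 0.5 * v * v) * T
    diffusion = v * np.sqrt(T) * Z
    return np.concatenate([S_0 * np.exp(drift + diffusion),
                           S_0 * np.exp(drift - diffusion)])
```

# The sample mean of these prices should sit close to the risk-neutral expectation S_0*exp(rT), with a smaller standard error than independent sampling of the same size.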
# +
# VARYING TIMESTEP LENGTH
# We are now going to test varying
# the timestep length and its effect
# error and calculation time.
#denominators = np.array([5, 10, 32, 64, 96, 126, 252, 300, 400, 500, 1000])
denominators = np.array([10, 100, 250, 500])
timesteps = 1.0/denominators
# Arrays for data storage and post-processing
# Lookback FLOATING options
array_lc_float_call_errors = np.zeros(len(timesteps))
array_lc_float_put_errors = np.zeros(len(timesteps))
array_lc_float_call_times = np.zeros(len(timesteps))
array_lc_float_put_times = np.zeros(len(timesteps))
# +
# ==========================================
# CALCULATE European LOOKBACK FLOATING CALLs/PUTs
# ==========================================
r = 0.05
N = 10000 # number of simulations
# We use 1e4 simulations here:
# the binary call/put study above
# showed that the error keeps
# falling with N, but combined
# with the inner timestep loop,
# 1e4 is a reasonable trade-off
# between error and calculation time.
i=0
for dt in timesteps:
sqrtdt = np.sqrt(dt)
print("----------------------------------")
print(" timestep length: %1.3f" % dt)
time_range = np.arange(0, T+dt, dt) # array of time units
# ------------------------------
# Calculate Calls
t0 = time.time()
lookback_floating_call, lookback_floating_call_analytic, tmp = monte_carlo_lookback_floating_call_value(N, S0, r,
sigma,
time_range,
dt, sqrtdt)
calc_time = time.time() - t0 # time of process
array_lc_float_call_times[i] = calc_time # store calc time
call_error = percentage_difference(lookback_floating_call, lookback_floating_call_analytic) # calc error from analytical
array_lc_float_call_errors[i] = call_error # store error
print("Lookback floating CALL option value(Numerical): £%3.4f" % lookback_floating_call)
print("Lookback floating CALL option value (Analytic): £%3.4f\n" % lookback_floating_call_analytic)
print(" Calculation time: %2.4f s\n" % calc_time)
# ------------------------------
# Calculate Puts
t0 = time.time()
lookback_floating_put, lookback_floating_put_analytic, tmp = monte_carlo_lookback_floating_put_value(N, S0, r,
sigma,
time_range,
dt, sqrtdt)
calc_time = time.time() - t0 # time of process
array_lc_float_put_times[i] = calc_time # store calc time
put_error = percentage_difference(lookback_floating_put, lookback_floating_put_analytic) # calc error from analytical
array_lc_float_put_errors[i] = put_error # store error
print("Lookback floating PUT option value(Numerical): £%3.4f" % lookback_floating_put)
print("Lookback floating PUT option value (Analytic): £%3.4f\n" % lookback_floating_put_analytic)
print(" Calculation time: %2.4f s" % calc_time)
print("\n\n")
i+=1
# +
# Plot Error Vs Simulation number vs Calc time
# LB FLOATING Calls
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.semilogx(timesteps, # x-axis
array_lc_float_call_errors, # y-axis
'b-')
ax1.set_xlabel("Timestep (s)", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Difference (%)', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.semilogx(timesteps, # x-axis
array_lc_float_call_times, # y-axis
'r-')
ax2.set_ylabel('Calculation time (s)', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("lb_float_call_error_calc_times.eps")
plt.show()
plt.close()
# Binary Puts
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.semilogx(timesteps, # x-axis
array_lc_float_put_errors, # y-axis
'b-')
ax1.set_xlabel("Timestep (s)", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Difference (%)', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.semilogx(timesteps, # x-axis
array_lc_float_put_times, # y-axis
'r-')
ax2.set_ylabel('Calculation time (s)', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("lb_float_put_error_calc_times.eps")
plt.show()
plt.close()
# +
# ==========================================
# METHODS FOR LOOKBACK FIXED OPTIONS
# ==========================================
# I am going to explicitly code TWO methods
# for the call and put lookback options.
# Ideally, I would make one function for both
# for the sake of clarity but I want to be
# explicit.
def analytic_lookback_fixed_call_value(S_0, E, Max, r, v, T):
"""
Analytic solution of the Black-Scholes equation for a
lookback fixed CALL option
I'm following the notation of
Wilmott (2007) Introduction to Quantitative Finance pg.269.
NOTE: Max is the maximum asset price observed in the
history of the option.
"""
if E > Max:
d1 = calc_d1(S_0, E, r, v, T)
d2 = calc_d2(S_0, E, r, v, T)
d3 = calc_d3(S_0, E, r, v, T)
t1 = S_0 * norm.cdf(d1)
t2 = E * np.exp(-r*T) * norm.cdf(d2)
t3 = S_0 * np.exp(-r*T) * ((v*v) / (2.0*r))
t4 = -math.pow((S_0/E), ((-2.0*r)/(v*v))) * norm.cdf(-d3) + np.exp(r*T)*norm.cdf(d1)
return t1 - t2 + (t3*t4)
elif E <= Max:
d1 = calc_d1(S_0, Max, r, v, T)
d2 = calc_d2(S_0, Max, r, v, T)
d3 = calc_d3(S_0, Max, r, v, T)
t1 = (Max - E) * np.exp(-r*T)
t2 = S_0 * norm.cdf(d1)
t3 = Max * np.exp(-r*T) * norm.cdf(d2)
t4 = S_0 * np.exp(-r*T) * ((v*v) / (2.0*r))
t5 = -math.pow((S_0/Max), ((-2.0*r)/(v*v))) * norm.cdf(-d3) + np.exp(r*T)*norm.cdf(d1)
return t1 + t2 - t3 + t4*t5
def analytic_lookback_fixed_put_value(S_0, E, Min, r, v, T):
"""
    Analytic solution of the Black-Scholes equation for a
    lookback fixed PUT option
    I'm following the notation of
    Wilmott (2007) Introduction to Quantitative Finance pg.269-270.
    NOTE: Min is the minimum asset price observed in the
    history of the option.
    """
    if E < Min:
        d1 = calc_d1(S_0, E, r, v, T)
        d2 = calc_d2(S_0, E, r, v, T)
        d3 = calc_d3(S_0, E, r, v, T)
t1 = E * np.exp(-r*T) * norm.cdf(-d2)
t2 = S_0 * norm.cdf(-d1)
t3 = S_0 * np.exp(-r*T) * ((v*v) / (2.0*r))
t4 = math.pow((S_0/E), ((-2.0*r)/(v*v))) * norm.cdf(d3) - np.exp(r*T)*norm.cdf(-d1)
return t1 - t2 + (t3*t4)
elif E >= Min:
d1 = calc_d1(S_0, Min, r, v, T)
d2 = calc_d2(S_0, Min, r, v, T)
d3 = calc_d3(S_0, Min, r, v, T)
t1 = (E - Min) * np.exp(-r*T)
t2 = S_0 * norm.cdf(-d1)
t3 = Min * np.exp(-r*T) * norm.cdf(-d2)
t4 = S_0 * np.exp(-r*T) * ((v*v) / (2.0*r))
t5 = math.pow((S_0/Min), ((-2.0*r)/(v*v))) * norm.cdf(d3) - np.exp(r*T)*norm.cdf(-d1)
return t1 - t2 + t3 + t4*t5
def monte_carlo_lookback_fixed_call_value(N, S_0, E, r, v, time_range, dt, sqrtdt):
"""
This method prices a lookback CALL option with
a fixed strike using a Monte Carlo method.
"""
option_life_data = np.zeros(shape=(len(time_range)), dtype=float)
option_life_data[0] = S_0 # Stock price at the start of every simulation
payoff_sum = 0.0
analytic_value_sum = 0.0
payoffs = np.zeros(N)
for j in range(0, N):
S_max = 0 # prescribe a dummy maximum
for i in range(1, len(time_range)):
# Calculate the underlying at each subsequent timestep
option_life_data[i] = option_life_data[i-1] * np.exp(
(r - 0.5*v*v)*dt + (v * sqrtdt * np.random.normal(0,1)))
S_max = np.amax(option_life_data) # Maximum price of underlying during
# the life of the option
payoff_sum += max(S_max - E, 0)
payoffs[j] = max(S_max - E, 0)
        # Using the Black-Scholes formula; note we use the function
        # arguments (S_0, v, final time), not the notebook globals S0/sigma/T
        analytic_value_sum += analytic_lookback_fixed_call_value(S_0, E, S_max, r, v, time_range[-1])
average_payoff = (payoff_sum / N) * np.exp(-r*time_range[-1])
return average_payoff, (analytic_value_sum/N), payoffs
def monte_carlo_lookback_fixed_put_value(N, S_0, E, r, v, time_range, dt, sqrtdt):
"""
This method prices a lookback PUT option with
a fixed strike using a Monte Carlo method.
"""
option_life_data = np.zeros(shape=(len(time_range)), dtype=float)
option_life_data[0] = S_0 # Stock price at the start of every simulation
payoff_sum = 0.0
analytic_value_sum = 0.0
payoffs = np.zeros(N)
for j in range(0, N):
S_min = 100000 # prescribe a dummy minimum
for i in range(1, len(time_range)):
# Calculate the underlying at each subsequent timestep
option_life_data[i] = option_life_data[i-1] * np.exp(
(r - 0.5*v*v)*dt + (v * sqrtdt * np.random.normal(0,1)))
S_min = np.amin(option_life_data) # Minimum price of underlying during
# the life of the option
payoff_sum += max(E - S_min, 0)
payoffs[j] = max(E - S_min, 0)
        # Using the Black-Scholes formula; note we use the function
        # arguments (S_0, v, final time), not the notebook globals S0/sigma/T
        analytic_value_sum += analytic_lookback_fixed_put_value(S_0, E, S_min, r, v, time_range[-1])
average_payoff = (payoff_sum / N) * np.exp(-r*time_range[-1])
return average_payoff, (analytic_value_sum/N), payoffs
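# The `payoffs` array returned by the Monte Carlo routines above also lets us
# attach an error bar to the price estimate. The helper below is a sketch (not
# part of the original notebook): it discounts the mean payoff and reports the
# standard error of the mean, which shrinks like 1/sqrt(N).

```python
import numpy as np

def mc_price_with_error(payoffs, r, T):
    """Discounted Monte Carlo price estimate and its standard error."""
    discount = np.exp(-r * T)
    estimate = discount * np.mean(payoffs)
    # Sample standard deviation / sqrt(N): the usual Monte Carlo error bar
    std_err = discount * np.std(payoffs, ddof=1) / np.sqrt(len(payoffs))
    return estimate, std_err

est, se = mc_price_with_error(np.array([0.0, 2.0, 4.0]), r=0.0, T=1.0)
```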
# +
# Varying interest rates
# Range of risk-free interest rates to sweep over
interest_rates = np.array([ 0.0,
#0.001,
#0.0025,
#0.005,
#0.0075,
0.01,
0.02,
0.03,
0.04,
0.05, # Fixed interest rate
#0.075,
0.10,
])
N = 10000
T = 1 # number of years
time_units = 252.0 # days in a working year
dt = (T - 0)/time_units # timestep
sqrtdt = np.sqrt(dt) # Compute this outside the loop to save computing it
# at every timestep
time_range = np.arange(0, T+dt, dt) # array of time units
# Initialise arrays for storage for post-processing
# complete data for binary calls
data_binary_call = np.zeros((len(interest_rates), N))
data_binary_put = np.zeros((len(interest_rates), N))
data_lb_float_call = np.zeros((len(interest_rates), N))
data_lb_float_put = np.zeros((len(interest_rates), N))
data_lb_fixed_call = np.zeros((len(interest_rates), N))
data_lb_fixed_put = np.zeros((len(interest_rates), N))
# +
# ==========================================
# Vary IRs: BINARY
# ==========================================
rates_vs_calls = np.zeros(shape=(len(interest_rates)))
rates_vs_puts = np.zeros(shape=(len(interest_rates)))
# Plot PDFs
fig, ax = plt.subplots(1, 2, figsize=(12,4.5), sharey=True)
colours = plt.cm.Blues(np.linspace(0.0, 1.0, len(interest_rates)))
i=0
for r in interest_rates:
binary_call, data_binary_call[i,:] = monte_carlo_binary_call_value(N, S0, E, r, sigma, T) # calculation
binary_put, data_binary_put[i,:] = monte_carlo_binary_put_value(N, S0, E, r, sigma, T) # calculation
rates_vs_calls[i] = binary_call
rates_vs_puts[i] = binary_put
n_bins = 40
# CALLS
min_value = np.amin(data_binary_call[i, :])
max_value = np.amax(data_binary_call[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_binary_call[i,:])
sims_stdv = np.std(data_binary_call[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.2f"%(r*100.0)
ax[0].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
# PUTS
min_value = np.amin(data_binary_put[i, :])
max_value = np.amax(data_binary_put[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_binary_put[i,:])
sims_stdv = np.std(data_binary_put[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.2f"%(r*100.0)
ax[1].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
i+=1
ax[0].set_xlabel("Binary Call Option Value", fontsize=14)
ax[0].set_ylabel("Frequency (Normalised)", fontsize=14)
ax[1].set_xlabel("Binary Put Option Value", fontsize=14)
ax[1].legend(loc='upper right')
plt.savefig("binary_pdf.eps")
plt.show()
plt.close()
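# The Gaussian overlay above is coded by hand; `scipy.stats.norm` (used as
# `norm` in the pricing formulas above) provides the same density via
# `norm.pdf`. A quick self-contained check of the equivalence (a sketch, not
# part of the original analysis):

```python
import numpy as np
from scipy.stats import norm

bins = np.linspace(-3.0, 3.0, 13)
mean, stdv = 0.5, 1.5
# Hand-coded normal density, as in the plotting loops
manual = (1.0 / (np.sqrt(2.0 * np.pi) * stdv)) * np.exp(
    -0.5 * ((bins - mean) / stdv) ** 2)
# Library equivalent
library = norm.pdf(bins, loc=mean, scale=stdv)
match = np.allclose(manual, library)
```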
# Plot prices against IR
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.plot(interest_rates,
rates_vs_calls,
'b-')
ax1.set_xlabel("Interest rate", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Binary Call value', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.plot(interest_rates,
         rates_vs_puts,
         'r-')
ax2.set_ylabel('Binary Put value', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("binary_vs_interest_rates.eps")
plt.show()
plt.close()
# +
rates_vs_calls = np.zeros(shape=(len(interest_rates)))
rates_vs_puts = np.zeros(shape=(len(interest_rates)))
N = 10000
T = 1 # number of years
time_units = 252.0 # days in a working year
dt = (T - 0)/time_units # timestep
sqrtdt = np.sqrt(dt) # Compute this outside the loop to save computing it
# at every timestep
time_range = np.arange(0, T+dt, dt) # array of time units
# Plot PDFs
fig, ax = plt.subplots(1, 2, figsize=(12,4.5), sharey=True)
colours = plt.cm.Blues(np.linspace(0.0, 1.0, len(interest_rates)))
i=0
for r in interest_rates:
print(r)
start=time.time()
# ------------------------------ LOOKBACK FLOATING
lookback_floating_call, lookback_floating_call_analytic, data_lb_float_call[i,:] = monte_carlo_lookback_floating_call_value(N, S0, r,
sigma,
time_range,
dt, sqrtdt)
lookback_floating_put, lookback_floating_put_analytic, data_lb_float_put[i,:] = monte_carlo_lookback_floating_put_value(N, S0, r,
sigma,
time_range,
dt, sqrtdt)
rates_vs_calls[i] = lookback_floating_call
rates_vs_puts[i] = lookback_floating_put
n_bins = 40
# CALLS
min_value = np.amin(data_lb_float_call[i, :])
max_value = np.amax(data_lb_float_call[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_lb_float_call[i,:])
sims_stdv = np.std(data_lb_float_call[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.2f"%(r*100.0)
ax[0].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
# PUTS
min_value = np.amin(data_lb_float_put[i, :])
max_value = np.amax(data_lb_float_put[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_lb_float_put[i,:])
sims_stdv = np.std(data_lb_float_put[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.2f"%(r*100.0)
ax[1].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
print('Time: %0.3f' % (time.time() - start))
i+=1
ax[0].set_xlabel("Lookback Floating Call Option Value", fontsize=14)
ax[0].set_ylabel("Frequency (Normalised)", fontsize=14)
ax[1].set_xlabel("Lookback Floating Put Option Value", fontsize=14)
ax[1].legend(loc='upper right')
plt.savefig("lb_float_pdf.eps")
plt.show()
plt.close()
# Plot prices against IR
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.plot(interest_rates,
rates_vs_calls,
'b-')
ax1.set_xlabel("Interest rate", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('LB Floating Call value', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.plot(interest_rates,
         rates_vs_puts,
         'r-')
ax2.set_ylabel('LB Floating Put value', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("lb_floating_vs_interest_rates.eps")
plt.show()
plt.close()
# +
rates_vs_calls = np.zeros(shape=(len(interest_rates)))
rates_vs_puts = np.zeros(shape=(len(interest_rates)))
fig, ax = plt.subplots(1, 2, figsize=(12,4.5), sharey=True)
colours = plt.cm.Blues(np.linspace(0.0, 1.0, len(interest_rates)))
i=0
for r in interest_rates:
print(r)
start=time.time()
# ------------------------------ LOOKBACK FIXED
lookback_fixed_call, lookback_fixed_call_analytic, data_lb_fixed_call[i,:] = monte_carlo_lookback_fixed_call_value(N, S0, E, r, sigma, time_range, dt, sqrtdt)
lookback_fixed_put, lookback_fixed_put_analytic, data_lb_fixed_put[i,:] = monte_carlo_lookback_fixed_put_value( N, S0, E, r, sigma, time_range, dt, sqrtdt)
rates_vs_calls[i] = lookback_fixed_call
rates_vs_puts[i] = lookback_fixed_put
n_bins = 40
# CALLS
min_value = np.amin(data_lb_fixed_call[i, :])
max_value = np.amax(data_lb_fixed_call[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_lb_fixed_call[i,:])
sims_stdv = np.std(data_lb_fixed_call[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.2f"%(r*100.0)
ax[0].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
# PUTS
min_value = np.amin(data_lb_fixed_put[i, :])
max_value = np.amax(data_lb_fixed_put[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_lb_fixed_put[i,:])
sims_stdv = np.std(data_lb_fixed_put[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.2f"%(r*100.0)
ax[1].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
print('Time: %0.3f' % (time.time() - start))
i+=1
ax[0].set_xlabel("Lookback Fixed Call Option Value", fontsize=14)
ax[0].set_ylabel("Frequency (Normalised)", fontsize=14)
ax[1].set_xlabel("Lookback Fixed Put Option Value", fontsize=14)
ax[1].legend(loc='upper right')
plt.savefig("lb_fixed_pdf.eps")
plt.show()
plt.close()
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.plot(interest_rates,
rates_vs_calls,
'b-')
ax1.set_xlabel("Interest rate", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('LB Fixed Call value', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.plot(interest_rates,
         rates_vs_puts,
         'r-')
ax2.set_ylabel('LB Fixed Put value', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("lb_fixed_vs_interest_rates.eps")
plt.show()
plt.close()
# +
# ==========================================
# Varying volatility
# ==========================================
# Range of volatilities to sweep over; the risk-free rate is held constant below
sigmas = np.array([ #0.0,
#0.05,
0.10,
#0.15,
0.20,
#0.25,
0.3,
0.4,
#0.5,
0.6,
0.8,
1.0,
#1.5,
2.0])
r = 0.05
N = 10000
T = 1 # number of years
time_units = 252.0 # days in a working year
dt = (T - 0)/time_units # timestep
sqrtdt = np.sqrt(dt) # Compute this outside the loop to save computing it
# at every timestep
time_range = np.arange(0, T+dt, dt) # array of time units
# Initialise arrays for storage for post-processing
# complete data for binary calls
data_binary_call = np.zeros((len(sigmas), N))
data_binary_put = np.zeros((len(sigmas), N))
data_lb_float_call = np.zeros((len(sigmas), N))
data_lb_float_put = np.zeros((len(sigmas), N))
data_lb_fixed_call = np.zeros((len(sigmas), N))
data_lb_fixed_put = np.zeros((len(sigmas), N))
# -
print(data_binary_call.shape)
# +
rates_vs_calls = np.zeros(shape=(len(sigmas)))
rates_vs_puts = np.zeros(shape=(len(sigmas)))
fig, ax = plt.subplots(1, 2, figsize=(12,4.5), sharey=True)
colours = plt.cm.Reds(np.linspace(0.0, 1.0, len(sigmas)))
i=0
for sigma in sigmas:
print(i)
binary_call, data_binary_call[i,:] = monte_carlo_binary_call_value(N, S0, E, r, sigma, T) # calculation
binary_put, data_binary_put[i,:] = monte_carlo_binary_put_value(N, S0, E, r, sigma, T) # calculation
rates_vs_calls[i] = binary_call
rates_vs_puts[i] = binary_put
n_bins = 40
# CALLS
min_value = np.amin(data_binary_call[i, :])
max_value = np.amax(data_binary_call[i, :])
d_bin = (max_value - min_value)/n_bins
print( data_binary_call[i,:] )
print(min_value,max_value,d_bin)
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_binary_call[i,:])
sims_stdv = np.std(data_binary_call[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.2f"%(sigma*100.0)
ax[0].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
# PUTS
min_value = np.amin(data_binary_put[i, :])
max_value = np.amax(data_binary_put[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_binary_put[i,:])
sims_stdv = np.std(data_binary_put[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.2f"%(sigma*100.0)
ax[1].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
i+=1
ax[0].set_xlabel("Binary Call Option Value", fontsize=14)
ax[0].set_ylabel("Frequency (Normalised)", fontsize=14)
ax[1].set_xlabel("Binary Put Option Value", fontsize=14)
ax[1].legend(loc='upper right')
plt.savefig("binary_sigma_pdf.eps")
plt.show()
plt.close()
# +
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.plot(sigmas,
rates_vs_calls,
'b-')
ax1.set_xlabel("Volatility", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Binary Call value', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.plot(sigmas,
         rates_vs_puts,
         'r-')
ax2.set_ylabel('Binary Put value', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("binary_vs_volatilities.eps")
plt.show()
plt.close()
# +
rates_vs_calls = np.zeros(shape=(len(sigmas)))
rates_vs_puts = np.zeros(shape=(len(sigmas)))
N = 10000
T = 1 # number of years
time_units = 252.0 # days in a working year
dt = (T - 0)/time_units # timestep
sqrtdt = np.sqrt(dt) # Compute this outside the loop to save computing it
# at every timestep
time_range = np.arange(0, T+dt, dt) # array of time units
fig, ax = plt.subplots(1, 2, figsize=(12,4.5), sharey=True)
colours = plt.cm.Reds(np.linspace(0.0, 1.0, len(sigmas)))
i=0
for sigma in sigmas:
print(sigma)
start=time.time()
# ------------------------------ LOOKBACK FLOATING
lookback_floating_call, lookback_floating_call_analytic, data_lb_float_call[i,:] = monte_carlo_lookback_floating_call_value(N, S0, r,
sigma,
time_range,
dt, sqrtdt)
lookback_floating_put, lookback_floating_put_analytic, data_lb_float_put[i,:] = monte_carlo_lookback_floating_put_value(N, S0, r,
sigma,
time_range,
dt, sqrtdt)
rates_vs_calls[i] = lookback_floating_call
rates_vs_puts[i] = lookback_floating_put
n_bins = 40
# CALLS
min_value = np.amin(data_lb_float_call[i, :])
max_value = np.amax(data_lb_float_call[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_lb_float_call[i,:])
sims_stdv = np.std(data_lb_float_call[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.0f"%(sigma*100.0)
ax[0].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
# PUTS
min_value = np.amin(data_lb_float_put[i, :])
max_value = np.amax(data_lb_float_put[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_lb_float_put[i,:])
sims_stdv = np.std(data_lb_float_put[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.0f"%(sigma*100.0)
ax[1].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
print('Time: %0.1f s' % (time.time() - start))
i+=1
ax[0].set_xlabel("Lookback Floating Call Option Value", fontsize=14)
ax[0].set_ylabel("Frequency (Normalised)", fontsize=14)
ax[1].set_xlabel("Lookback Floating Put Option Value", fontsize=14)
ax[1].legend(loc='upper right')
ax[0].set_xlim([0,150])
ax[1].set_xlim([0,150])
plt.savefig("lb_float_sigma_pdf.eps")
plt.show()
plt.close()
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.plot(sigmas,
rates_vs_calls,
'b-')
ax1.set_xlabel("Volatility", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('LB Floating Call value', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.plot(sigmas,
         rates_vs_puts,
         'r-')
ax2.set_ylabel('LB Floating Put value', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("lb_floating_vs_volatilities.eps")
plt.show()
plt.close()
# +
rates_vs_calls = np.zeros(shape=(len(sigmas)))
rates_vs_puts = np.zeros(shape=(len(sigmas)))
fig, ax = plt.subplots(1, 2, figsize=(12,4.5), sharey=True)
colours = plt.cm.Reds(np.linspace(0.0, 1.0, len(sigmas)))
i=0
for sigma in sigmas:
print(sigma)
start=time.time()
# ------------------------------ LOOKBACK FIXED
lookback_fixed_call, lookback_fixed_call_analytic, data_lb_fixed_call[i,:] = monte_carlo_lookback_fixed_call_value(N, S0, E, r, sigma, time_range, dt, sqrtdt)
lookback_fixed_put, lookback_fixed_put_analytic, data_lb_fixed_put[i,:] = monte_carlo_lookback_fixed_put_value( N, S0, E, r, sigma, time_range, dt, sqrtdt)
rates_vs_calls[i] = lookback_fixed_call
rates_vs_puts[i] = lookback_fixed_put
n_bins = 40
# CALLS
min_value = np.amin(data_lb_fixed_call[i, :])
max_value = np.amax(data_lb_fixed_call[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_lb_fixed_call[i,:])
sims_stdv = np.std(data_lb_fixed_call[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.0f"%(sigma*100.0)
ax[0].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
# PUTS
min_value = np.amin(data_lb_fixed_put[i, :])
max_value = np.amax(data_lb_fixed_put[i, :])
d_bin = (max_value - min_value)/n_bins
bins = np.arange(min_value, max_value+d_bin, d_bin)
sims_mean = np.average(data_lb_fixed_put[i,:])
sims_stdv = np.std(data_lb_fixed_put[i,:])
y = ((1.0 / (np.sqrt(2.0 * np.pi) * sims_stdv)) * np.exp(-0.5 * (1.0 / sims_stdv * (bins - sims_mean))**2))
legend_label= "%2.0f"%(sigma*100.0)
ax[1].plot(bins, y, linestyle="-", color=colours[i], label=legend_label)
print('Time: %0.1f s' % (time.time() - start))
i+=1
ax[0].set_xlabel("Lookback Fixed Call Option Value", fontsize=14)
ax[0].set_ylabel("Frequency (Normalised)", fontsize=14)
ax[1].set_xlabel("Lookback Fixed Put Option Value", fontsize=14)
ax[1].legend(loc='upper right')
ax[0].set_xlim([0,100])
ax[1].set_xlim([0,100])
plt.savefig("lb_fixed_sigma_pdf.eps")
plt.show()
plt.close()
fig, ax1 = plt.subplots(figsize=(6,4.5))
ax1.plot(sigmas,
rates_vs_calls,
'b-')
ax1.set_xlabel("Volatility", fontsize=14)
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('LB Fixed Call value', color='b', fontsize=14)
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
ax2.plot(sigmas,
         rates_vs_puts,
         'r-')
ax2.set_ylabel('LB Fixed Put value', color='r', fontsize=14)
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.savefig("lb_fixed_vs_volatilities.eps")
plt.show()
plt.close()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/DJCordhose/ml-resources/blob/main/notebooks/foundation/transformers-sentiment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="912d7d82"
# # Transformers: sentiment analysis using pretrained models
#
# * https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment
# * https://huggingface.co/facebook/bart-large-mnli
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="BhKtZIx7BC8S" outputId="e8c84b1c-c23f-4ff8-981b-c25672d51011"
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.__version__
# + colab={"base_uri": "https://localhost:8080/"} id="PCzzSt7XAuBn" outputId="ba97324c-1779-4a97-c765-037f3e0346bb"
# when we are not training, we do not need a GPU
# !nvidia-smi
# + id="17bc5a59" outputId="2e725722-25e6-4fdd-dbe6-09a159904b1a" colab={"base_uri": "https://localhost:8080/"}
# https://huggingface.co/transformers/installation.html
# !pip install -q transformers
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="UKkytQc2Al0y" outputId="c3c094b5-d9c4-4bfc-900a-cce7b63c22fd"
import transformers
transformers.__version__
# + id="lyoLhh04GJDd"
sequence_0 = "I don't think its a good idea to have people driving 40 miles an hour through a light that *just* turned green, especially with the number of people running red lights, or the number of pedestrians running across at the last minute being obscured by large cars in the lanes next to you."
sequence_1 = 'MANY YEARS ago, When I was a teenager, I delivered pizza. I had a friend who, just for the fun of it, had a CB. While on a particular channel, he could key the mike with quick taps and make the light right out in front of the pizza place turn green. It was the only light that it worked on, and I was in the car with him numerous times to confirm that it worked. It was sweet.'
sequence_2 = 'The "green" thing to do is not to do anything ever, don\'t even breath! Oh, and if you are not going to take that ridiculous standpoint then I guess this is relevant to Green because it uses Bio-fuels in one of the most harsh environments in the world, showing that dependence on tradition fuels is a choice not a necessity.'
# + [markdown] id="Zr_WePTIF9t3"
# ## bert-base-multilingual-uncased-sentiment
#
# Version for TensorFlow
#
# https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment
# + colab={"base_uri": "https://localhost:8080/", "height": 304} id="G97_wuf6D0-j" outputId="4ec696d3-c126-4809-fd61-0558f3bc1ba8"
# %%time
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment")
model = TFAutoModelForSequenceClassification.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment")
model.name_or_path
# + colab={"base_uri": "https://localhost:8080/"} id="2Ooej9E_Effc" outputId="f8a63efa-88af-4013-8da3-fa2b01436c39"
# paraphrase = tokenizer(sequence_0, return_tensors="tf")
# paraphrase = tokenizer(sequence_1, return_tensors="tf")
paraphrase = tokenizer(sequence_2, return_tensors="tf")
paraphrase_classification_logits = model(paraphrase)[0]
paraphrase_results = tf.nn.softmax(paraphrase_classification_logits, axis=1).numpy()[0]
stars = paraphrase_results.argmax() + 1
paraphrase_classification_logits, paraphrase_results, stars
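# The cell above maps the model's five class logits (1-5 stars) to a rating via
# softmax and argmax. A minimal NumPy re-implementation of that mapping, as a
# sketch independent of the model:

```python
import numpy as np

def logits_to_stars(logits):
    """Softmax over 5 star-rating logits, then argmax + 1 as the star count."""
    shifted = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    probs = shifted / shifted.sum()
    return probs, int(np.argmax(probs)) + 1

probs, stars = logits_to_stars(np.array([-1.0, 0.0, 0.5, 2.0, 0.1]))
```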
# + [markdown] id="1c944ksPG0Yl"
# ## bart-large-mnli
#
# Version for Pytorch (TensorFlow is not available)
#
# https://huggingface.co/facebook/bart-large-mnli
# + colab={"base_uri": "https://localhost:8080/", "height": 244} id="cHz8m-8rIYlW" outputId="cbac0499-ce1e-4682-aab0-95cebf6b1cfb"
# %%time
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="facebook/bart-large-mnli")
classifier.model.name_or_path
# + colab={"base_uri": "https://localhost:8080/"} id="rAzaQ7P0ImpN" outputId="20f45f8f-7f0c-4433-fd25-583e73e3384d"
# sequence_to_classify = sequence_0
# sequence_to_classify = sequence_1
sequence_to_classify = sequence_2
candidate_labels = ['positive', 'negative', 'ironic']
classifier(sequence_to_classify, candidate_labels, multi_label=True)
# + [markdown] id="gOOHIG5TsxIu"
# ## More data
# + id="2DSRpz8Zsv9X" outputId="480e6be6-b7f7-4154-e656-3e1b0f4e7fe7" colab={"base_uri": "https://localhost:8080/"}
# !test -f technology-transport-short.db || (wget https://datanizing.com/data-science-day/technology-transport-short.7z && 7z x technology-transport-short.7z && rm technology-transport-short.7z)
# + id="CvPzuM7us_Tp"
import sqlite3
tech = sqlite3.connect("technology-transport-short.db")
# + id="Gza1lpQrtCOc"
import pandas as pd
posts = pd.read_sql("SELECT title||' '||text AS fulltext, created_utc FROM posts",
tech, parse_dates=["created_utc"])
# + id="ZK5TYAPDtm5O" outputId="50857d04-fe5c-4a6e-9f45-280119b18e27" colab={"base_uri": "https://localhost:8080/", "height": 423}
posts
# + id="Eb8tYPD5tsVc"
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 3 - Text Classification
# Author: <NAME> - kkhandw1
#
# Text Mining
#
# * Data: books.csv contains 2,000 Amazon book reviews. Each row represents a review for one book. The data set contains two columns: the first column (contained in quotes) is the review text. The second column is a binary label indicating if the review is positive or negative.
#
# * Tasks: Described below
#
# +
import string
# Import pandas to read in data
import numpy as np
import pandas as pd
from nltk.corpus import stopwords
from matplotlib import pyplot as plt
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
# -
# ## Text classification
# We are going to look at some Amazon reviews and classify them into positive or negative.
# ### Data
# The file `books.csv` contains 2,000 Amazon book reviews. The data set contains two columns: the first column (contained in quotes) is the review text. The second column is a binary label indicating if the review is positive or negative.
#
# Let's take a quick look at the file.
# !head -3 books.csv
# Let's read the data into a pandas data frame. You'll notice two new attributes in `pd.read_csv()` that we've never seen before. The first, `quotechar`, tells pandas what is being used to "encapsulate" the text fields. Since our review text is surrounded by double quotes, we let pandas know. The second, `escapechar`, tells pandas that a `\` is used to escape quote characters appearing inside the text itself; this backslash is known as an escape character.
data = pd.read_csv("books.csv", quotechar="\"", escapechar="\\")
data.head()
# ### Task 1: Preprocessing the text (25 points)
#
# Change the text to lower case and remove stop words, then transform the raw text collection into a matrix of token counts.
#
# Hint: sklearn's function CountVectorizer has built-in options for these operations. Refer to http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html for more information.
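As the hint notes, `CountVectorizer` can handle both preprocessing steps on its own via the `lowercase` and `stop_words` options, without a custom analyzer. A minimal sketch on toy documents (not the review data):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Two toy "reviews"; stop words like "this", "was", "of" are dropped automatically.
docs = ["This book was great!", "Terrible book, a waste of time."]
vec = CountVectorizer(lowercase=True, stop_words="english")
X = vec.fit_transform(docs)
print(sorted(vec.vocabulary_))  # content words only
```

The homework code below instead routes everything through a custom `text_process` analyzer, which achieves the same effect with NLTK's stop-word list.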
def text_process(mess):
"""
Takes in a string of text, then performs the following:
1. Remove all punctuation
2. Remove all stopwords
3. Returns a list of the cleaned text
"""
# Check characters to see if they are in punctuation
nopunc = [char for char in mess if char not in string.punctuation]
# Join the characters again to form the string.
nopunc = ''.join(nopunc)
# Now just remove any stopwords
return [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
data['review_text'].head(5).apply(text_process)
tokenizer = CountVectorizer(lowercase=True, analyzer=text_process)
reviews = tokenizer.fit_transform(data['review_text'])
reviews.toarray()[0]
print('Sample Feature names: ', tokenizer.get_feature_names()[-10:])
print('Shape of Sparse Matrix: ', reviews.shape)
print('Amount of Non-Zero occurences: ', reviews.nnz)
print('Sparsity: %.2f%%' % (100.0 * reviews.nnz / (reviews.shape[0] * reviews.shape[1])))
# ### Task 2: Build a logistic regression model using token counts (25 points)
#
# Build a logistic regression model using the token counts from task 1. Perform a 5-fold cross-validation (train-test ratio 80-20), and compute the mean AUC (Area under Curve).
X = reviews.toarray()
y = data['positive']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
log_model = LogisticRegression()
log_model.fit(X_train, y_train)
predictions = log_model.predict(X_test)
print('Score: ', log_model.score(X_test, y_test))
fpr, tpr, thresholds = metrics.roc_curve(y_test, log_model.predict_proba(X_test)[:,1])
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for Amazon book reviews')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
print('AUC: ', metrics.roc_auc_score(y_test, log_model.predict_proba(X_test)[:,1]))
# ## Perform 5-fold cross validation
y_pred = cross_val_predict(log_model, X, y, cv=5)
print('Cross-validated predictions: ', y_pred)
# +
y_scores = cross_val_score(log_model, X, y, cv=5, scoring='accuracy')
print('Cross-validated accuracy scores: ', y_scores)
print('Mean cross-validated accuracy scores: ', y_scores.mean())
y_scores_auc = cross_val_score(log_model, X, y, cv=5, scoring='roc_auc')
print('Cross-validated auc scores: ', y_scores_auc)
print('Mean cross-validated auc scores: ', y_scores_auc.mean())
# -
# ### Task 3: Build a logistic regression model using TFIDF (25 points)
#
# Transform the training data into a TFIDF matrix, and use it to build a new logistic regression model. Again, perform a 5-fold cross-validation, and compute the mean AUC.
#
# Hint: Similar to CountVectorizer, sklearn's TfidfVectorizer function can do all the transformation work for you. Don't forget to use the stop_words option.
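To see what TF-IDF changes relative to raw counts, here is a small sketch with toy documents: a term that appears in every document ("book") receives a lower weight than an equally frequent but rarer term ("good"):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["good book good story", "bad book"]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
# "book" occurs in both documents, so its idf (and hence tfidf weight) is
# lower than that of "good", which occurs only in the first document.
i_book = vec.vocabulary_["book"]
i_good = vec.vocabulary_["good"]
print(X[0, i_good] > X[0, i_book])
```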
tfidf_transformer = TfidfVectorizer(lowercase=True, analyzer=text_process)
reviews_tfidf = tfidf_transformer.fit_transform(data['review_text'])
print(reviews_tfidf.shape)
X_idf = reviews_tfidf.toarray()
X_train, X_test, y_train, y_test = train_test_split(X_idf, y, test_size=0.2)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
log_model_idf = LogisticRegression()
log_model_idf.fit(X_train, y_train)
predictions = log_model_idf.predict(X_test)
print('Score: ', log_model_idf.score(X_test, y_test))
fpr, tpr, thresholds = metrics.roc_curve(y_test, log_model_idf.predict_proba(X_test)[:,1])
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for Amazon book reviews (TFIDF)')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
print('AUC: ', metrics.roc_auc_score(y_test, log_model_idf.predict_proba(X_test)[:,1]))
# +
y_scores_idf = cross_val_score(log_model_idf, X_idf, y, cv=5, scoring='accuracy')
print('Cross-validated accuracy scores: ', y_scores_idf)
print('Mean cross-validated accuracy scores: ', y_scores_idf.mean())
y_scores_idf_auc = cross_val_score(log_model_idf, X_idf, y, cv=5, scoring='roc_auc')
print('Cross-validated auc scores: ', y_scores_idf_auc)
print('Mean cross-validated auc scores: ', y_scores_idf_auc.mean())
# -
# ### Task 4: Build a logistic regression model using TFIDF over n-grams (25 points)
#
# We still want to use the TFIDF matrix, but instead of using TFIDF over single tokens, this time we want to go further and use TFIDF values of both 1-gram and 2-gram tokens. Then use this new TFIDF matrix to build another logistic regression model. Again, perform a 5-fold cross-validation, and compute the mean AUC.
#
# Hint: You can configure the n-gram range using an option of the TfidfVectorizer function
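The `ngram_range` option adds bigram features alongside the unigrams; a toy sketch showing the extra features it produces:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["not good at all", "very good indeed"]
vec = TfidfVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
vec.fit(docs)
# Bigram features contain a space, e.g. "not good" or "very good".
print(sorted(f for f in vec.vocabulary_ if " " in f))
```

Bigrams let the model capture negations like "not good", which a pure bag-of-words representation conflates with "good".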
tfidf_transformer = TfidfVectorizer(lowercase=True, analyzer=text_process, ngram_range=(1, 2))
reviews_ngram = tfidf_transformer.fit_transform(data['review_text'])
print(reviews_ngram.shape)
X_ngram = reviews_ngram.toarray()
X_train, X_test, y_train, y_test = train_test_split(X_ngram, y, test_size=0.2)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
log_model_ngram = LogisticRegression()
log_model_ngram.fit(X_train, y_train)
predictions = log_model_ngram.predict(X_test)
print('Score: ', log_model_ngram.score(X_test, y_test))
fpr, tpr, thresholds = metrics.roc_curve(y_test, log_model_ngram.predict_proba(X_test)[:,1])
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for Amazon book reviews (TFIDF n-grams)')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
print('AUC: ', metrics.roc_auc_score(y_test, log_model_ngram.predict_proba(X_test)[:,1]))
# +
y_scores_ngram = cross_val_score(log_model_ngram, X_ngram, y, cv=5, scoring='accuracy')
print('Cross-validated accuracy scores: ', y_scores_ngram)
print('Mean cross-validated accuracy scores: ', y_scores_ngram.mean())
y_scores_ngram_auc = cross_val_score(log_model_ngram, X_ngram, y, cv=5, scoring='roc_auc')
print('Cross-validated auc scores: ', y_scores_ngram_auc)
print('Mean cross-validated auc scores: ', y_scores_ngram_auc.mean())
# -
| homework3/kkhandw1_hw3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: sax
# language: python
# name: sax
# ---
# +
# default_exp circuit
# -
# # Circuit
#
# > SAX Circuits
# +
# hide
import matplotlib.pyplot as plt
from fastcore.test import test_eq
from pytest import approx, raises
import jax.numpy as jnp
import os, sys; sys.stderr = open(os.devnull, "w")
# +
# export
from __future__ import annotations
from functools import partial
from typing import Dict, Optional, Tuple, Union, cast
from sax.backends import circuit_backends
from sax.models import coupler, straight
from sax.multimode import multimode, singlemode
from sax.netlist import LogicalNetlist, Netlist, logical_netlist, netlist_from_yaml
from sax.typing_ import Instances, Model, Models, Netlist, Settings, SType, is_netlist
from sax.utils import _replace_kwargs, get_settings, merge_dicts, update_settings
# +
# export
def circuit(
*,
instances: Instances,
connections: Dict[str, str],
ports: Dict[str, str],
models: Optional[Models] = None,
modes: Optional[Tuple[str, ...]] = None,
settings: Optional[Settings] = None,
backend: str = "default",
default_models=None,
) -> Model:
# assert valid circuit_backend
if backend not in circuit_backends:
raise KeyError(
f"circuit backend {backend} not found. Allowed circuit backends: "
f"{', '.join(circuit_backends.keys())}."
)
evaluate_circuit = circuit_backends[backend]
_netlist, _settings, _models = logical_netlist(
instances=instances,
connections=connections,
ports=ports,
models=models,
settings=settings,
default_models=default_models,
)
for name in list(_models.keys()):
if is_netlist(_models[name]):
netlist_model = cast(LogicalNetlist, _models.pop(name))
instance_model_names = set(netlist_model["instances"].values())
instance_models = {k: _models[k] for k in instance_model_names}
netlist_func = circuit_from_netlist(
netlist=netlist_model,
models=instance_models,
backend=backend,
modes=modes,
settings=None, # settings are already integrated in netlist by now.
default_models=default_models,
)
_models[name] = netlist_func
if modes is not None:
maybe_multimode = partial(multimode, modes=modes)
connections = {
f"{p1}@{mode}": f"{p2}@{mode}"
for p1, p2 in _netlist["connections"].items()
for mode in modes
}
ports = {
f"{p1}@{mode}": f"{p2}@{mode}"
for p1, p2 in _netlist["ports"].items()
for mode in modes
}
else:
maybe_multimode = partial(singlemode, mode="te")
connections = _netlist["connections"]
ports = _netlist["ports"]
def _circuit(**settings: Settings) -> SType:
settings = merge_dicts(_settings, settings)
global_settings = {}
for k in list(settings.keys()):
if k in _netlist["instances"]:
continue
global_settings[k] = settings.pop(k)
if global_settings:
settings = cast(
Dict[str, Settings], update_settings(settings, **global_settings)
)
instances: Dict[str, SType] = {}
for name, model_name in _netlist["instances"].items():
model = cast(Model, _models[model_name])
instances[name] = cast(
SType, maybe_multimode(model(**settings.get(name, {})))
)
S = evaluate_circuit(instances, connections, ports)
return S
settings = {
name: get_settings(cast(Model, _models[model]))
for name, model in _netlist["instances"].items()
}
settings = merge_dicts(settings, _settings)
_replace_kwargs(_circuit, **settings)
return _circuit
# +
mzi = circuit(
instances={
"lft": "coupler",
"top": "straight",
"btm": "straight",
"rgt": "coupler",
},
connections={
"lft,out0": "btm,in0",
"btm,out0": "rgt,in0",
"lft,out1": "top,in0",
"top,out0": "rgt,in1",
},
ports={
"in0": "lft,in0",
"in1": "lft,in1",
"out0": "rgt,out0",
"out1": "rgt,out1",
},
models={
"straight": straight,
"coupler": coupler,
}
)
# mzi?
# -
result = mzi(top={"length": 25.0}, btm={"length": 15.0})
result = {k: approx(jnp.abs(v)) for k, v in result.items()}
# +
# export
def circuit_from_netlist(
netlist: Union[LogicalNetlist, Netlist],
*,
models: Optional[Models] = None,
modes: Optional[Tuple[str, ...]] = None,
settings: Optional[Settings] = None,
backend: str = "default",
default_models=None,
) -> Model:
"""create a circuit model function from a netlist """
instances = netlist["instances"]
connections = netlist["connections"]
ports = netlist["ports"]
_circuit = circuit(
instances=instances,
connections=connections,
ports=ports,
models=models,
modes=modes,
settings=settings,
backend=backend,
default_models=default_models,
)
return _circuit
# -
# > Example
# +
mzi = circuit_from_netlist(
netlist = {
"instances": {
"lft": "coupler",
"top": "straight",
"btm": "straight",
"rgt": "coupler",
},
"connections": {
"lft,out0": "btm,in0",
"btm,out0": "rgt,in0",
"lft,out1": "top,in0",
"top,out0": "rgt,in1",
},
"ports": {
"in0": "lft,in0",
"in1": "lft,in1",
"out0": "rgt,out0",
"out1": "rgt,out1",
},
},
models={
"straight": straight,
"coupler": coupler,
}
)
# mzi?
# -
# export
def circuit_from_yaml(
yaml: str,
*,
models: Optional[Models] = None,
modes: Optional[Tuple[str, ...]] = None,
settings: Optional[Settings] = None,
backend: str = "default",
default_models=None,
) -> Model:
"""Load a sax circuit from yaml definition
Args:
yaml: the yaml string to load
models: a dictionary which maps component names to model functions
modes: the modes of the simulation (if not given, single mode
operation is assumed).
settings: override netlist instance settings. Use this setting to set
global settings like for example the wavelength 'wl'.
backend: "default" or "klu". How the circuit S-parameters are
calculated. "klu" is a CPU-only method which generally speaking is
much faster for large circuits but cannot be jitted or used for autograd.
"""
netlist, models = netlist_from_yaml(yaml=yaml, models=models, settings=settings)
circuit = circuit_from_netlist(
netlist=netlist,
models=models,
modes=modes,
settings=None, # settings are already integrated in the netlist by now
backend=backend,
default_models=default_models,
)
return circuit
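For reference, a hypothetical YAML netlist mirroring the dictionary form used in the MZI example above. The exact schema is whatever `netlist_from_yaml` accepts, so treat the key layout here as an assumption rather than the library's documented format:

```python
# Hypothetical YAML netlist (key layout assumed to mirror the dict netlist above).
yaml_netlist = """
instances:
    lft: coupler
    rgt: coupler
connections:
    lft,out0: rgt,in0
ports:
    in0: lft,in0
    out0: rgt,out0
"""
# mzi = circuit_from_yaml(yaml_netlist, models={"coupler": coupler})  # not executed here
```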
# export
def circuit_from_gdsfactory(
component,
*,
models: Optional[Models] = None,
modes: Optional[Tuple[str, ...]] = None,
settings: Optional[Settings] = None,
backend: str = "default",
default_models=None,
) -> Model:
"""Load a sax circuit from a GDSFactory component"""
circuit = circuit_from_netlist(
component.get_netlist(),
models=models,
modes=modes,
settings=settings,
backend=backend,
default_models=default_models,
)
return circuit
| nbs/08_circuit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import scanpy as sc
import numpy as np
import pandas as pd
import os
os.chdir('./../')
from compert.helper import rank_genes_groups_by_cov
adatas = []
for i in range(5):
adatas.append(sc.read(f'./datasets/sciplex_raw_chunk_{i}.h5ad'))
adata = adatas[0].concatenate(adatas[1:])
sc.pp.subsample(adata, fraction=0.5)
sc.pp.normalize_per_cell(adata)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=5000, subset=True)
adata.obs['dose_val'] = adata.obs.dose.astype(float) / np.max(adata.obs.dose.astype(float))
adata.obs['dose_val'][adata.obs['product_name'].str.contains('Vehicle')] = 1.0
adata.obs['product_name'] = [x.split(' ')[0] for x in adata.obs['product_name']]
adata.obs['product_name'][adata.obs['product_name'].str.contains('Vehicle')] = 'control'
adata.obs['drug_dose_name'] = adata.obs.product_name.astype(str) + '_' + adata.obs.dose_val.astype(str)
adata.obs['cov_drug_dose_name'] = adata.obs.cell_type.astype(str) + '_' + adata.obs.drug_dose_name.astype(str)
adata.obs['condition'] = adata.obs.product_name.copy()
adata.obs['control'] = [1 if x == 'Vehicle_1.0' else 0 for x in adata.obs.drug_dose_name.values]
adata.obs['cov_drug'] = adata.obs.cell_type.astype(str) + '_' + adata.obs.condition.astype(str)
from compert.helper import rank_genes_groups_by_cov
rank_genes_groups_by_cov(adata, groupby='cov_drug', covariate='cell_type', control_group='control')
new_genes_dict = {}
for cat in adata.obs.cov_drug_dose_name.unique():
if 'control' not in cat:
rank_keys = np.array(list(adata.uns['rank_genes_groups_cov'].keys()))
bool_idx = [x in cat for x in rank_keys]
genes = adata.uns['rank_genes_groups_cov'][rank_keys[bool_idx][0]]
new_genes_dict[cat] = genes
adata.uns['rank_genes_groups_cov'] = new_genes_dict
# # Split
# + tags=[]
adata.obs['split'] = 'train' # reset
ho_drugs = [
# selection of drugs from various pathways
"Azacitidine",
"Carmofur",
"Pracinostat",
"Cediranib",
"Luminespib",
"Crizotinib",
"SNS-314",
"Obatoclax",
"Momelotinib",
"AG-14361",
"Entacapone",
"Fulvestrant",
"Mesna",
"Zileuton",
"Enzastaurin",
"IOX2",
"Alvespimycin",
"XAV-939",
"Fasudil"
]
ood = adata.obs['condition'].isin(ho_drugs)
len(ho_drugs)
# -
adata.obs['split'][ood & (adata.obs['dose_val'] == 1.0)] = 'ood'
test_idx = sc.pp.subsample(adata[adata.obs['split'] != 'ood'], .10, copy=True).obs.index
adata.obs['split'].loc[test_idx] = 'test'
pd.crosstab(adata.obs['split'], adata.obs['condition'])
adata.obs['split'].value_counts()
adata[adata.obs.split == 'ood'].obs.condition.value_counts()
adata[adata.obs.split == 'test'].obs.condition.value_counts()
# Also a split which sees all data:
adata.obs['split_all'] = 'train'
test_idx = sc.pp.subsample(adata, .10, copy=True).obs.index
adata.obs['split_all'].loc[test_idx] = 'test'
adata.obs['ct_dose'] = adata.obs.cell_type.astype('str') + '_' + adata.obs.dose_val.astype('str')
# Round robin splits: dose and cell line combinations will be held out in turn.
i = 0
split_dict = {}
# single ct holdout
for ct in adata.obs.cell_type.unique():
for dose in adata.obs.dose_val.unique():
i += 1
split_name = f'split{i}'
split_dict[split_name] = f'{ct}_{dose}'
adata.obs[split_name] = 'train'
adata.obs[split_name][adata.obs.ct_dose == f'{ct}_{dose}'] = 'ood'
test_idx = sc.pp.subsample(adata[adata.obs[split_name] != 'ood'], .16, copy=True).obs.index
adata.obs[split_name].loc[test_idx] = 'test'
display(adata.obs[split_name].value_counts())
# double ct holdout
for cts in [('A549', 'MCF7'), ('A549', 'K562'), ('MCF7', 'K562')]:
for dose in adata.obs.dose_val.unique():
i += 1
split_name = f'split{i}'
split_dict[split_name] = f'{cts[0]}+{cts[1]}_{dose}'
adata.obs[split_name] = 'train'
adata.obs[split_name][adata.obs.ct_dose == f'{cts[0]}_{dose}'] = 'ood'
adata.obs[split_name][adata.obs.ct_dose == f'{cts[1]}_{dose}'] = 'ood'
test_idx = sc.pp.subsample(adata[adata.obs[split_name] != 'ood'], .16, copy=True).obs.index
adata.obs[split_name].loc[test_idx] = 'test'
display(adata.obs[split_name].value_counts())
# triple ct holdout
for dose in adata.obs.dose_val.unique():
i += 1
split_name = f'split{i}'
split_dict[split_name] = f'all_{dose}'
adata.obs[split_name] = 'train'
adata.obs[split_name][adata.obs.dose_val == dose] = 'ood'
test_idx = sc.pp.subsample(adata[adata.obs[split_name] != 'ood'], .16, copy=True).obs.index
adata.obs[split_name].loc[test_idx] = 'test'
display(adata.obs[split_name].value_counts())
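The hold-out pattern used in each round-robin split above reduces to label-by-category assignment on the `obs` table. A tiny self-contained pandas sketch with hypothetical cell-type/dose combinations:

```python
import pandas as pd

# Toy obs table: each row is a cell, labeled by its cell-type/dose combination.
obs = pd.DataFrame({"ct_dose": ["A549_1.0", "MCF7_1.0", "A549_0.5", "K562_1.0"]})
obs["split"] = "train"                                    # default everything to train
obs.loc[obs["ct_dose"] == "A549_1.0", "split"] = "ood"    # hold out one combination
print(obs["split"].tolist())  # ['ood', 'train', 'train', 'train']
```

Using `.loc` for the assignment avoids the chained-indexing pattern (`obs['split'][mask] = ...`) used elsewhere in this notebook, which can trigger pandas' SettingWithCopy warning.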
adata.uns['splits'] = split_dict
sc.write('./datasets/sciplex3_new.h5ad', adata)
| preprocessing/sciplex3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Overview:
# This is a forecast of US covid-19 infections based on current national data and the assumption that the virus will
# follow a logistic curve.
#
# ## For background see:
# *Exponential growth and epidemics*
# 3Blue1Brown
# https://youtu.be/Kas0tIxDvrg for the basic math.[1]
#
# *The Mathematics Behind the Coronavirus Spread*
# By <NAME>, www.MasterMathMentor.com
# http://www.mastermathmentor.com/mmm-archive/CoronaVirus.pdf[2]
#
# The conclusion from both of the above references is that historically viruses follow a logistic curve.
# Page 8 of Schwartz[2] implies that the general formula for logistic curves for viral growth is:
# cases = capacity / (1 + p1*e^(p2*day))
#
# So the idea here is to solve for capacity, p1 and p2 using curve fitting.
#
# capacity is the theoretical maximum number of infections.
#
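The fitting step can be sanity-checked on synthetic data first: generate cases from a known logistic curve and confirm that `curve_fit` recovers the capacity. The parameter values below are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Same functional form as the model() used later: cases = capacity / (1 + p1*e^(p2*day)).
def logistic(x, capacity, p1, p2):
    return capacity / (1.0 + p1 * np.exp(p2 * x))

days = np.arange(1, 40)
observed = logistic(days, 1_000_000, 500.0, -0.25)  # synthetic noiseless "cases"
popt, _ = curve_fit(logistic, days, observed, p0=[2_000_000, 100.0, -0.1], maxfev=20000)
print(int(popt[0]))  # should recover roughly 1,000,000
```

On real, noisy, still-rising data the recovered capacity is far less stable, which is why the forecast below should be read with caution.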
# # Graphs
# ## National Confirmed Case vs. Forecast
# ## Growth Factor
# ## National Impact
# ## Delayed Death Rate
#
# # Data Sources
# Covid Tracking Project: https://covidtracking.com/
# American Hospital Association: https://www.aha.org/statistics/fast-facts-us-hospitals
#
# # Credit
# This code borrows heavily from:
# https://github.com/KentShikama/covid19-curve-fitting#hackathon-quality-curve-fitting-for-us-covid19-cases
#
# # License
# MIT License
#
# # Author
# <NAME>
#
import pandas as pd, numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from scipy.optimize import curve_fit
'''
Logistic Curve
e = 2.71828 (Natural log base)
'''
def model(x,capacity,p1,p2):
e = 2.71828
#x starts with 1
numerator = 1.00+(p1*(e**(p2*(x-1))))
return capacity/numerator
def plot_national_impact_graph(days, days_so_far, deaths, recovered, now_hosp, now_vent,last_date='', morbidity=[], max_morbitity=0):
plt.rcParams["figure.figsize"] = (18,10)
plt.title("Covid-19: US National Impact")
plt.scatter(days_so_far, deaths,marker='v',c='k', label=f"Attributed Deaths")
plt.scatter(days_so_far, recovered,marker='^',c='g', label=f"Known Recoveries")
plt.scatter(days_so_far, now_hosp,marker='.',c='#ffa500', label=f"Currently Hospitalized")
plt.scatter(days_so_far, now_vent,marker='*',c='#ff8500', label=f"Currently on Ventilator")
plt.plot(days, morbidity, '--', color="#888888", label=f"Forecast Morbidity")
# hospital_beds_line_data = np.array([924107 for i in range(len(days))])
# plt.plot(days, hospital_beds_line_data, 'c--', label=f"All Staffed Hospital Beds")
# plt.text(2,924000,'All Hospital Beds')
plt.text(2,max_morbitity + 20,'Max Estimated Morbidity:{0}'.format(max_morbitity))
capacity_line_data = np.array([max_morbitity for i in range(len(days))])
plt.plot(days, capacity_line_data, '--', color="#AAAAAA", label=f"Max Estimated Morbidity")
icu_beds_line_data = np.array([55633 for i in range(len(days))])
plt.plot(days, icu_beds_line_data, 'b--', label=f"General ICU Beds")
plt.text(2,55700,'ICU Beds')
plt.xlabel(f'# days since March 4th, 2020')
plt.legend(loc=5)
plt.text(2,(recovered[-1] - 5),last_date)
plt.text(days[-1]-22,(20000),'Data Source: Covid Tracking Project: https://covidtracking.com/')
plt.text(days[-1]-30,(22000),'Hospital Beds Source: https://www.aha.org/statistics/fast-facts-us-hospitals')
plt.gca().yaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
# plt.savefig(f'all.png', bbox_inches='tight')
plt.show()
plt.close()
# +
def plot_national_graph(days, days_so_far, national_cases, deaths, recovered, now_hosp, now_vent, results=[],last_date='', capacity=''):
plt.rcParams["figure.figsize"] = (18,10)
plt.title("Covid-19: US National Confirmed Cases vs. Forecast Cases")
plt.scatter(days_so_far, national_cases, label=f"Confirmed cases")
plt.scatter(days_so_far, deaths,marker='v',c='k', label=f"Attributed Deaths")
plt.scatter(days_so_far, recovered,marker='^',c='g', label=f"Known Recoveries")
plt.scatter(days_so_far, now_hosp,marker='.',c='#ffa500', label=f"Currently Hospitalized")
plt.scatter(days_so_far, now_vent,marker='*',c='#ff8500', label=f"Currently on Ventilator")
plt.plot(days, results, 'r--', label=f"Forecast cases")
hospital_beds_line_data = np.array([924107 for i in range(len(days))])
plt.plot(days, hospital_beds_line_data, 'c--', label=f"All Staffed Hospital Beds")
plt.text(2,924000,'All Hospital Beds')
plt.text(2,capacity + 20,'Max Estimated Infections:{0}'.format(capacity))
capacity_line_data = np.array([capacity for i in range(len(days))])
plt.plot(days, capacity_line_data, 'y--', label=f"Max Estimated Infections")
icu_beds_line_data = np.array([55633 for i in range(len(days))])
plt.plot(days, icu_beds_line_data, 'b--', label=f"General ICU Beds")
plt.text(2,55700,'ICU Beds')
plt.xlabel(f'# days since March 4th, 2020')
plt.legend(loc=5)
plt.text(2,(national_cases[-1] - 5),last_date)
plt.text(days[-1]-22,250000,'Data Source: Covid Tracking Project: https://covidtracking.com/')
plt.text(days[-1]-25,220000,'Hospital Beds Source: https://www.aha.org/statistics/fast-facts-us-hospitals')
plt.gca().yaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
# plt.savefig(f'all.png', bbox_inches='tight')
plt.show()
plt.close()
# -
def plot_growth_factor(days,growth_factor, last_date='test'):
plt.rcParams["figure.figsize"] = (18,10)
plt.title("Covid-19: National Growth Factor")
plt.plot(days, growth_factor, 'g-', label=f"Growth Factor")
plt.xlabel(f'# days since March 4th, 2020')
plt.legend(loc=5)
plt.text(1,3.2,last_date)
plt.text(days[-1]-20,0.2,'Data Source: Covid Tracking Project: https://covidtracking.com/')
plt.gca().yaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
# plt.savefig(f'all.png', bbox_inches='tight')
horiz_line_data = np.array([1 for i in range(len(days))])
plt.plot(days, horiz_line_data, 'k--')
plt.show()
plt.close()
def plot_death_rate(days,death_rate, last_date='test'):
plt.rcParams["figure.figsize"] = (18,10)
plt.title("Covid-19: % National Death Rate: (Number Deaths/Number of Infections 7 Days Ago) ")
plt.plot(days, death_rate, '-', color='#994400', label=f"7 Day Delayed Death Rate")
horiz_line_data = np.array([6.5 for i in range(len(days))])
plt.plot(days, horiz_line_data, '-', color='#0044AA', label=f"6.5% Line")
horiz_line_data = np.array([3.5 for i in range(len(days))])
plt.plot(days, horiz_line_data, '-', color='#0044AA', label=f"3.5% Line")
plt.xlabel(f'# days since March 4th, 2020')
plt.legend(loc=5)
plt.text(20,10.7,last_date)
plt.text(days[-1]-20,9,'Data Source: Covid Tracking Project: https://covidtracking.com/')
plt.text(16,6.6,'6.5%')
plt.text(16,3.6,'3.5%')
plt.gca().yaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
# plt.savefig(f'all.png', bbox_inches='tight')
plt.show()
plt.close()
'''
(C8-C7)/(C7-C6)
(y[n]-y[n-1]) / (y[n-1] - y[n-2])
'''
def compute_growth_factor(y):
growth_factor = [0,0] #First two days have to be blank
lenY = len(y)
for n in range(lenY):
if n > 1:
gf = (y[n] - y[n-1]) / (y[n-1] - y[n-2])
growth_factor.append(gf)
return growth_factor
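The growth-factor recurrence can be checked on a toy series (the helper is restated here so the snippet stands alone): doubling case counts give a factor of 2, and a slowdown shows up as a factor below the previous one.

```python
def compute_growth_factor(y):
    growth_factor = [0, 0]  # first two days have no defined growth factor
    for n in range(2, len(y)):
        growth_factor.append((y[n] - y[n-1]) / (y[n-1] - y[n-2]))
    return growth_factor

cases = [10, 20, 40, 80, 120]
print(compute_growth_factor(cases))  # [0, 0, 2.0, 2.0, 1.0]
```

A sustained growth factor of 1 marks the inflection point of the logistic curve, after which daily new cases decline.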
# +
df = pd.read_json("https://covidtracking.com/api/us/daily").iloc[::-1]
national_initial_date = pd.to_datetime(df["date"].min(), format="%Y%m%d")
national_initial_date_as_int = national_initial_date.timestamp() / 86400
national_last_date = df["dateChecked"][0]
dates = pd.to_datetime(df["date"], format="%Y%m%d")
dates_as_int = dates.astype(int) / 10 ** 9 / 86400
dates_as_int_array = dates_as_int.to_numpy()
dates_as_int_array_normalized = dates_as_int_array - dates_as_int_array[0]
national_cases = df["positive"].to_numpy()
national_deaths = df["death"].to_numpy()
national_recovery = df["recovered"].to_numpy()
nat_now_hospitalized = df["hospitalizedCurrently"].to_numpy()
nat_now_vent = df["onVentilatorCurrently"].to_numpy()
days_so_far = np.arange(1, len(national_cases) + 1)
days = np.arange(1, len(national_cases)+20)
popt, pcov = curve_fit(model, days_so_far, national_cases,p0=[1000000,1000,-0.25])
capacity = int(popt[0])
offset = 0
results = np.concatenate((np.zeros(int(offset)), model(days, *popt)))
plot_national_graph(days, days_so_far, national_cases,national_deaths,national_recovery,nat_now_hospitalized, nat_now_vent, results,'Data Last Checked:'+national_last_date, capacity)
growth_factor = compute_growth_factor(national_cases)
plot_growth_factor(days_so_far, growth_factor,'Data Last Checked:'+national_last_date)
popt, pcov = curve_fit(model, days_so_far, national_deaths,p0=[100000,500,-0.22])
morbidity_results = np.concatenate((np.zeros(int(offset)), model(days, *popt)))
max_dead = int(popt[0])
plot_national_impact_graph(days, days_so_far, national_deaths,national_recovery,nat_now_hospitalized, nat_now_vent, 'Data Last Checked:'+national_last_date, morbidity_results,max_dead)
# -
death_rate = (national_deaths[15:-1] / national_cases[8:-8]) * 100
plot_death_rate(days_so_far[15:-1], death_rate,'Data Last Checked:'+national_last_date)
| covid19-logistic-fit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.io
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss
from sklearn import preprocessing
from time import time
# -
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
# +
# Initialize variables required for the algorithm
learning_rate = 2e-2 # initial learning rate
Kx = 10
Ky = 10
n_hidden = 2000 # number of hidden units in hidden layer
mu = 0.0 # mean for gaussian distribution to initialize weights with
sigma = 1.0 # standard deviation for gaussian distribution to initialize weights with
n_epochs = 200 # number of epochs
batch_size = 100 # size of the minibatch
precision = 1e-30 # parameter to control numerical precision of weight updates
anti_hebbian_learning_strength = 0.4 # Strength of the anti-hebbian learning
lebesgue_norm = 2.0 # Lebesgue norm of the weights
rank = 2 # ranking parameter, must be integer that is bigger or equal than 2
# +
# Initialize variables required for the algorithm
# Best values according to paper
learning_rate = 2e-2 # initial learning rate
Kx = 10
Ky = 10
n_hidden = 2000 # number of hidden units in hidden layer
mu = 0.0 # mean for gaussian distribution to initialize weights with
sigma = 1.0 # standard deviation for gaussian distribution to initialize weights with
n_epochs = 200 # number of epochs
batch_size = 100 # size of the minibatch
precision = 1e-30 # parameter to control numerical precision of weight updates
anti_hebbian_learning_strength = 0.4 # Strength of the anti-hebbian learning
lebesgue_norm = 3.0 # Lebesgue norm of the weights
rank = 7 # ranking parameter, must be integer that is bigger or equal than 2
# +
# UNSUPERVISED 'BIO' LEARNING ALGORITHM
# Define function that performs the unsupervised learning and returns weights
# that correspond to feature detectors.
# Uses cuda if available.
def get_unsupervised_weights(data, n_hidden, n_epochs, batch_size, learning_rate, precision,
anti_hebbian_learning_strength, lebesgue_norm, rank):
print("Starting unsupervised bio-plausible training")
num_samples = data.shape[0] # Number of samples/images.
num_features = data.shape[1] # Number of pixels for each sample/image.
# Initialize weights to be values drawn from gaussian distribution.
synapses = np.random.normal(mu, sigma, (n_hidden, num_features)).astype(np.float32)
weights = torch.from_numpy(synapses).to(device)
# The external loop runs over epochs
for epoch in range(n_epochs):
eps = learning_rate * (1 - epoch / n_epochs)
# Scramble the images and values. So that when making a
# mini batch, random values/images will be chosen on each iteration.
random_permutation_samples = np.random.permutation(num_samples)
shuffled_epoch_data = data[random_permutation_samples,:]
# Internal loop runs over minibatches
for i in range(num_samples // batch_size):
# For every minibatch the overlap with the data (tot_input) is
# calculated for each data point and each hidden unit.
mini_batch = shuffled_epoch_data[i*batch_size:(i+1)*batch_size,:].astype(np.float32)
mini_batch = torch.from_numpy(mini_batch).to(device)
mini_batch = torch.transpose(mini_batch, 0, 1)
sign = torch.sign(weights)
W = sign * torch.abs(weights) ** (lebesgue_norm - 1)
# https://stackoverflow.com/questions/44524901/how-to-do-product-of-matrices-in-pytorch
tot_input_torch = torch.mm(W, mini_batch)
# The sorted strengths of the activations are stored in y.
# The variable yl stores the activations of the post synaptic cells -
# it is denoted by g(Q) in Eq 3 of 'Unsupervised Learning by Competing Hidden Units', see also Eq 9 and Eq 10.
y_torch = torch.argsort(tot_input_torch, dim=0)
yl_torch = torch.zeros((n_hidden, batch_size), dtype = torch.float).to(device)
yl_torch[y_torch[n_hidden-1,:], torch.arange(batch_size)] = 1.0
yl_torch[y_torch[n_hidden-rank], torch.arange(batch_size)] = -anti_hebbian_learning_strength
# The variable ds is the right hand side of Eq 3
xx_torch = torch.sum(yl_torch * tot_input_torch,1)
xx_torch = xx_torch.unsqueeze(1)
xx_torch = xx_torch.repeat(1, num_features)
ds_torch = torch.mm(yl_torch, torch.transpose(mini_batch, 0, 1)) - (xx_torch * weights)
# Update weights
# The weights are updated after each minibatch in a way so that the largest update
# is equal to the learning rate eps at that epoch.
nc_torch = torch.max(torch.abs(ds_torch))
if nc_torch < precision:
nc_torch = precision
weights += eps*(ds_torch/nc_torch)
#if (i+1) % 100 == 0:
# print (f'Epoch [{epoch+1}/{n_epochs}], Step [{i+1}/{num_samples // batch_size}]')
print (f'Epoch [{epoch+1}/{n_epochs}]')
print("Completed unsupervised bio-plausible training")
return weights
#return weights.cpu().numpy()
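The ranking rule at the heart of the update can be sketched in plain NumPy (toy activations, not real data): per minibatch column, the most active hidden unit is pushed toward the input (+1) while the unit ranked `rank`-th from the top is weakly depressed (-delta).

```python
import numpy as np

tot_input = np.array([[0.1, 2.0],
                      [1.5, 0.3],
                      [0.7, 1.1]])          # (n_hidden=3, batch=2) activations
n_hidden, batch, rank, delta = 3, 2, 2, 0.4
order = np.argsort(tot_input, axis=0)        # ascending activation order per column
yl = np.zeros((n_hidden, batch))
yl[order[n_hidden - 1], np.arange(batch)] = 1.0        # winner in each column gets +1
yl[order[n_hidden - rank], np.arange(batch)] = -delta  # rank-th unit gets -delta
print(yl)
```

This mirrors the `y_torch`/`yl_torch` construction above: only two units per column receive a nonzero post-synaptic activation, which is what makes the learning local and competitive.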
# +
# LOAD AND PREPARE MNIST DATA FOR UNSUPERVISED TRAINING
print("Loading MNIST...")
mat = scipy.io.loadmat('mnist_all.mat')
print("Done loading MNIST")
#print(mat)
Nc=10 # number of classes
N=784 # number of pixels for each image. 28x28
M=np.zeros((0,N))
for i in range(Nc):
M=np.concatenate((M, mat['train'+str(i)]), axis=0)
M=M/255.0
data_mnist = M
print(f'Number of samples: {data_mnist.shape[0]}')
print(f'Number of features: {data_mnist.shape[1]}')
# +
# RUN UNSUPERVISED 'BIO' LEARNING ALGORITHM for MNIST
# Calculates weights for the data with the given number of hidden units (and the remaining hyperparameters)
weights_mnist = get_unsupervised_weights(data_mnist, n_hidden, n_epochs, batch_size, learning_rate, precision,
anti_hebbian_learning_strength, lebesgue_norm, rank)
#print(weights_mnist.shape)
#print(weights_mnist)
# Keep backups of these weights
weights_mnist_backup = weights_mnist
weights_mnist_frozen = weights_mnist
# TODO Maybe write these to file to keep?
# +
# SANITY CHECKS (for my sanity)
print(weights_mnist)
print()
print(weights_mnist_backup)
print()
print(weights_mnist_frozen)
print()
print(torch.all(weights_mnist.eq(weights_mnist_backup)))
print(torch.all(weights_mnist.eq(weights_mnist_frozen)))
print(torch.all(weights_mnist_frozen.eq(weights_mnist_backup)))
#print((weights_mnist==weights_mnist_backup).all())
#print((weights_mnist_frozen==weights_mnist_backup).all())
# +
# Draw MNIST weights/feature detectors generated by unsupervised bio algo
# REFERENCED FROM: https://github.com/DimaKrotov/Biological_Learning
# To draw a heatmap of the weights a helper function is created
def draw_weights(synapses, Kx, Ky):
yy=0
HM=np.zeros((28*Ky,28*Kx))
for y in range(Ky):
for x in range(Kx):
HM[y*28:(y+1)*28,x*28:(x+1)*28]=synapses[yy,:].reshape(28,28)
yy += 1
plt.clf()
nc=np.amax(np.absolute(HM))
im=plt.imshow(HM,cmap='bwr',vmin=-nc,vmax=nc)
fig.colorbar(im,ticks=[np.amin(HM), 0, np.amax(HM)])
plt.axis('off')
fig.canvas.draw()
# %matplotlib inline
# %matplotlib notebook
fig=plt.figure(figsize=(12.9,10))
draw_weights(weights_mnist.cpu().numpy(), Kx, Ky)
print(weights_mnist.shape)
print("Fin")
# -
# +
# DEFINE BioClassifier AND BioLoss CLASSES TO PERFORM BIO-PLAUSIBLE LEARNING
# REFERENCED FROM https://github.com/gatapia/unsupervised_bio_classifier
class BioCell(nn.Module):
def __init__(self, Wui, beta, out_features):
# Wui is the unsupervised pretrained weight matrix of shape: (2000, 28*28)
super().__init__()
# TODO: Does this need to be transposed here?
# Answer: NO! Because we are using F.linear in the forward pass rather than multiplying directly ourselves.
# F.linear does the transpose internally.
self.Wui = Wui
self.beta = beta
self.supervised = nn.Linear(Wui.shape[0], out_features, bias=False)
def forward(self, vi):
Wui_vi = F.linear(vi, self.Wui, None)
# Using basic RELU
hu = F.relu(Wui_vi)
Sau = self.supervised(hu)
ca = torch.tanh(self.beta * Sau)
return ca
class BioCell2(nn.Module):
def __init__(self, Wui, beta=0.1, out_features=10):
# Wui is the unsupervised pretrained weight matrix of shape: (2000, 28*28)
super().__init__()
        self.Wui = Wui.transpose(0, 1) # (784, 2000)
self.beta = beta
self.supervised = nn.Linear(Wui.shape[0], out_features, bias=False)
def forward(self, vi):
# Different from BioCell where we do matmul here directly rather than using nn.Linear to accomplish the same.
Wui_vi = torch.matmul(vi, self.Wui)
# Using basic RELU
hu = F.relu(Wui_vi)
Sau = self.supervised(hu)
ca = torch.tanh(self.beta * Sau)
return ca
class BioCell3(nn.Module):
def __init__(self, Wui, n=4.5, beta=.01, out_features=10):
# Wui is the unsupervised pretrained weight matrix of shape: (2000, 28*28)
super().__init__()
        self.Wui = Wui.transpose(0, 1) # (784, 2000)
self.n = n
self.beta = beta
# Below can be renamed to self.supervised to be similar to previous cells
self.Sau = nn.Linear(Wui.shape[0], out_features, bias=False)
    def forward(self, vi):
        # vi = vi.view(-1, 28, 28).transpose(1, 2).contiguous().view(-1, 28*28)  # reshape vi to HxW for testing
        Wui_vi = torch.matmul(vi, self.Wui)
# Using custom RELU as in the paper
hu = F.relu(Wui_vi) ** self.n
Sau_hu = self.Sau(hu)
ca = torch.tanh(self.beta * Sau_hu)
return ca
class BioLoss(nn.Module):
def __init__(self, m=6):
super().__init__()
self.m = m
# According to the hidden units paper, c = predictions, t = actual labels
def forward(self, c, t):
        t_ohe = torch.eye(10, dtype=torch.float, device=c.device)[t]  # build the one-hot targets on the same device as the predictions
t_ohe[t_ohe==0] = -1.
loss = (c - t_ohe).abs() ** self.m
return loss.sum()
class BioClassifier(nn.Module):
def __init__(self, bio):
super().__init__()
self.bio = bio
def forward(self, vi):
ca = self.bio(vi)
return F.log_softmax(ca, dim=-1)
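# The shape flow through BioCell3 is easy to lose track of. This is a NumPy stand-in for
# one forward pass (the array names and the 2000-unit size mirror the cells above; n=4.5
# and beta=0.01 are the defaults of BioCell3):

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_features, n_classes, batch = 2000, 784, 10, 4
Wui = rng.normal(size=(n_hidden, n_features))   # frozen unsupervised weights
S = rng.normal(size=(n_classes, n_hidden))      # trainable supervised layer
vi = rng.random((batch, n_features))            # flattened input images

h = np.maximum(vi @ Wui.T, 0.0) ** 4.5          # ReLU**n on the fixed hidden layer
c = np.tanh(0.01 * (h @ S.T))                   # beta-scaled tanh read-out
print(h.shape, c.shape)
```

The read-out `c` lands in (-1, 1), which is why BioLoss compares it against ±1 one-hot targets.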
# +
# RESET weights (for my sanity)
weights_mnist = weights_mnist_frozen
# +
# TRAIN USING BIOCLASSIFIER AND OBTAIN GENERALIZATION RESULTS
# Results: Accuracy of the network on the 10000 test images: 97.26 %
# REFERENCES:
# - https://www.python-engineer.com/courses/pytorchbeginner/13-feedforward-neural-network/
# - https://github.com/gatapia/unsupervised_bio_classifier
# 0) Prepare data
# MNIST dataset
print("Downloading MNIST data...")
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
print("Completed downloading MNIST data.")
# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Plot some sample data for sanity check
#examples = iter(test_loader)
#example_data, example_targets = examples.next()
# #%matplotlib inline
# #%matplotlib notebook
#for i in range(6):
# plt.subplot(2,3,i+1)
# plt.imshow(example_data[i][0], cmap='gray')
#plt.show()
# 1) Design and init model
#model = BioClassifier(BioCell2(weights_mnist)).to(device)
model = BioClassifier(BioCell3(weights_mnist)).to(device)
# Print the named parameters to check that the model initialised correctly.
# The only parameter requiring grad should be the supervised layer's weight (bio.Sau.weight).
print(f'Printing named parameters for the generated BioClassifier model.')
for name, param in model.named_parameters():
if param.requires_grad:
print(name, param.data)
print()
# 2) Construct loss and optimizer
criterion = BioLoss(m=6)
#criterion = nn.NLLLoss()
supervised_learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=supervised_learning_rate)
# 3) Train the model - training loop
n_total_steps = len(train_loader)
print(f'n_total_steps: {n_total_steps}')
n_supervised_epochs = 300
print(f'n_supervised_epochs: {n_supervised_epochs}')
print('Start training...')
for epoch in range(n_supervised_epochs):
print()
for i, (images, labels) in enumerate(train_loader):
# origin images shape: [100, 1, 28, 28]
# resized: [100, 784]
images = images.reshape(-1, 28*28).to(device)
labels = labels.to(device)
# predict = forward pass
outputs = model(images)
# calculate loss
loss = criterion(outputs, labels)
# backward pass to calculate gradients
loss.backward()
# update weights
optimizer.step()
# zero autograd .grad after updating
optimizer.zero_grad()
if (i+1) % 100 == 0:
print (f'Epoch [{epoch+1}/{n_supervised_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
print('Completed training.')
# Test the model
# In test phase, we don't need to compute gradients (helpful for memory efficiency) so use torch.no_grad()
with torch.no_grad():
n_correct = 0
n_samples = 0
for images, labels in test_loader:
images = images.reshape(-1, 28*28).to(device)
labels = labels.to(device)
outputs = model(images)
# max returns (value ,index)
_, predicted = torch.max(outputs.data, 1)
n_samples += labels.size(0)
n_correct += (predicted == labels).sum().item()
acc = 100.0 * n_correct / n_samples
print(f'Accuracy of the network on the 10000 test images: {acc} %')
# OUTPUT
# Epoch [300/300], Step [600/600], Loss: 46022.9062
# Completed training.
# Accuracy of the network on the 10000 test images: 97.26 %
# +
##### ------ THIS IS THE END OF MNIST WORK -------
# -
# +
# RESET weights (for my sanity)
weights_mnist = weights_mnist_frozen
# +
# IGNITE
# REFERENCE: https://pytorch.org/ignite/quickstart.html, https://github.com/gatapia/unsupervised_bio_classifier
#def run_test(train_loader, test_loader, model, epochs, batch_size=64, lr=1e-3, verbose=0, loss=None):
def run_test(train_X, train_y, test_X, test_y, model, epochs, batch_size=64, lr=1e-3, verbose=0, loss=None):
start = time()
train_ds = torch.utils.data.TensorDataset(train_X, train_y)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=batch_size, shuffle=True)
test_ds = torch.utils.data.TensorDataset(test_X, test_y)
test_loader = torch.utils.data.DataLoader(test_ds, batch_size=batch_size, shuffle=False)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
if loss is None:
loss = F.nll_loss
trainer = create_supervised_trainer(model, optimizer, loss, device=device)
metrics = {
'accuracy': Accuracy(),
'nll': Loss(loss) # TODO rename to 'loss'
}
evaluator = create_supervised_evaluator(model, metrics=metrics, device=device)
# TODO rewrite print styles into my format
@trainer.on(Events.ITERATION_COMPLETED(every=100))
def log_training_loss(trainer):
print("Epoch[{}] Loss: {:.2f}".format(trainer.state.epoch, trainer.state.output))
@trainer.on(Events.EPOCH_COMPLETED)
def log_training_results(trainer):
evaluator.run(train_loader)
metrics = evaluator.state.metrics
avg_accuracy = metrics['accuracy']
avg_nll = metrics['nll']
print("Training Results - Epoch: {} Avg accuracy: {:.2f} Avg loss: {:.2f}"
.format(trainer.state.epoch, avg_accuracy, avg_nll))
@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(trainer):
evaluator.run(test_loader)
metrics = evaluator.state.metrics
avg_accuracy = metrics['accuracy']
avg_nll = metrics['nll']
print("Validation Results - Epoch: {} Avg accuracy: {:.2f} Avg loss: {:.2f}"
.format(trainer.state.epoch, avg_accuracy, avg_nll))
@trainer.on(Events.COMPLETED)
def log_completed_validation_results(trainer):
evaluator.run(test_loader)
metrics = evaluator.state.metrics
avg_accuracy = metrics['accuracy']
avg_nll = metrics['nll']
print("Final Validation Results - Epoch: {} Avg accuracy: {:.2f} Avg loss: {:.2f} Took: {:.0f}s"
.format(trainer.state.epoch, avg_accuracy, avg_nll, time() - start))
trainer.run(train_loader, max_epochs=epochs)
# +
# https://github.com/gatapia/unsupervised_bio_classifier
def get_data(data_type):
mat = scipy.io.loadmat('mnist_all.mat')
X=torch.zeros((0, 28 * 28), dtype=torch.float)
y=torch.zeros(0, dtype=torch.long)
for i in range(10):
        X_i = torch.from_numpy(mat[data_type + str(i)].astype(np.float64)).float()  # np.float is removed in newer NumPy
X = torch.cat((X, X_i))
y_i = torch.full(size=(len(X_i),), fill_value=i, dtype=torch.long)
y = torch.cat((y, y_i))
return X / 255.0, y
# -
(train_X, train_y), (test_X, test_y) = get_data('train'), get_data('test')
print('train_data:', train_X.shape, train_y.shape)
print('test_data:', test_X.shape, test_y.shape)
# +
# MNIST dataset
print("Downloading MNIST data...")
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
print("Completed downloading MNIST data.")
# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
print('\nBioClassifier w/ BioCell3 Model w/ BioLoss')
model = BioClassifier(BioCell3(weights_mnist)).to(device)
#run_test(train_X, train_y, test_X, test_y, model, 300, batch_size=3584, lr=1e-4, loss=BioLoss(m=6))
run_test(train_X, train_y, test_X, test_y, model, 300, batch_size=1024, lr=1e-4, loss=BioLoss(m=6))
#run_test(train_loader, test_loader, model, 300, batch_size=3584, lr=1e-4, loss=BioLoss(m=6))
# -
# +
# BELOW IS WORK FOR CIFAR-10
# ALL WORK IN PROGRESS
# +
# LOAD AND PREPARE CIFAR-10 DATA
# REFERENCE: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
# Keeping here for reference. From link above.
# As per the 'hidden competing units' paper, no need to Normalize as provided in the link.
# The dataset has PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1]
#transform = transforms.Compose(
# [transforms.ToTensor(),
# transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# The CIFAR-10 dataset has PILImage images of range [0, 1].
# As mentioned in the 'hidden competing units' paper, "no preprocessing of the data was used except that
# each input image was normalized to be a unit vector in the 32x32x3 = 3072-dimensional space."
# We transform the images to Tensors here, and normalize to unit vectors further on in this cell.
transform = transforms.Compose([transforms.ToTensor()])
transform_grayscale = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.ToTensor()])
# CIFAR10: 60000 32x32 color images in 10 classes, with 6000 images per class
train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
train_dataset_grayscale = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform_grayscale)
test_dataset_grayscale = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform_grayscale)
cifar_classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# -
# Test function to display an image
def imshow(img):
    # Only ToTensor() is applied above (no Normalize), so the usual img/2 + 0.5 un-normalization is skipped.
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# +
# Display some random training images in grayscale
train_loader_grayscale = torch.utils.data.DataLoader(train_dataset_grayscale, batch_size=1)
dataiter_grayscale = iter(train_loader_grayscale)
image, label = next(dataiter_grayscale)  # iterator .next() was removed in newer PyTorch versions
print(image.shape)
print(label.shape)
# %matplotlib inline
# %matplotlib notebook
imshow(torchvision.utils.make_grid(image))
print(image)
print(label)
# +
# Display some random training images
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1)
dataiter = iter(train_loader)
image, label = next(dataiter)  # iterator .next() was removed in newer PyTorch versions
print(image.shape)
print(label.shape)
# %matplotlib inline
# %matplotlib notebook
imshow(torchvision.utils.make_grid(image))
print(image)
print(label)
# +
# RUN UNSUPERVISED 'BIO' LEARNING ALGORITHM for CIFAR
# WORK IN PROGRESS
# Calculates weights for the data with the given number of hidden units (and the remaining hyperparameters)
# NOTE: data_cifar is not built anywhere above; it should be the CIFAR-10 training set
# flattened to shape (num_samples, 3072) and normalized to unit vectors, analogous to data_mnist.
weights_cifar = get_unsupervised_weights(data_cifar, n_hidden, n_epochs, batch_size, learning_rate, precision,
                                         anti_hebbian_learning_strength, lebesgue_norm, rank)
print(weights_cifar.shape)
print(weights_cifar)
# +
# Draw CIFAR-10 weights/feature detectors generated by unsupervised bio algo
# WORK IN PROGRESS
def draw_weights(synapses, Kx, Ky):
print(synapses)
print(synapses.shape) # (100, 3072)
yy=0
HM=np.zeros((32*Ky,32*Kx,3))
print(HM.shape) # (320, 320, 3)
for y in range(Ky):
for x in range(Kx):
            # synapses[yy,:] has shape (3072,); CIFAR images are stored channels-first,
            # so reshape to (3,32,32) and move channels last to keep the colour planes aligned.
            HM[y*32:(y+1)*32,x*32:(x+1)*32,:] = synapses[yy,:].reshape(3,32,32).transpose(1,2,0)
            yy += 1
    print("Done building the weight heatmap")
plt.clf()
nc=np.amax(np.absolute(HM))
im=plt.imshow(HM[:,:,0],cmap='bwr',vmin=-nc,vmax=nc)
fig.colorbar(im,ticks=[np.amin(HM), 0, np.amax(HM)])
plt.axis('off')
fig.canvas.draw()
# %matplotlib inline
# %matplotlib notebook
fig=plt.figure(figsize=(12.9,10))
draw_weights(weights_cifar, Kx, Ky)
print("Fin")
# -
# +
# USE THIS BLOCK FOR DEBUGGING PURPOSES ONLY !!
# Contains data loading and whole bio learning in one block of code.
# Plots the feature detectors at the end of training.
# LOAD AND PREPARE DATA
print("Loading MNIST...")
mat = scipy.io.loadmat('mnist_all.mat')
print("Done loading MNIST")
Nc=10 # number of classes
N=784 # number of pixels for each image. 28x28
M=np.zeros((0,N))
for i in range(Nc):
M=np.concatenate((M, mat['train'+str(i)]), axis=0)
M=M/255.0
data = M
num_samples = data.shape[0] # 60000 training and validation examples. Number of samples
num_features = data.shape[1] # number of pixels for each image. 28x28. Also: num_samples, num_pixels..
# ------------------------------------------------------------------------------------------------------------
# UNSUPERVISED 'BIO' LEARNING ALGORITHM
# Initialize weights to be values drawn from gaussian distribution.
synapses = np.random.normal(mu, sigma, (n_hidden, N)).astype(np.float32)
weights = torch.from_numpy(synapses).to(device)
# The external loop runs over epochs
for epoch in range(n_epochs):
eps = learning_rate * (1 - epoch / n_epochs)
#print(f'epoch learning rate: {eps}')
    # Shuffle the samples so that each minibatch draws a random
    # subset of images on every epoch.
random_permutation_samples = np.random.permutation(num_samples)
shuffled_epoch_data = data[random_permutation_samples,:]
# Internal loop runs over minibatches
for i in range(num_samples // batch_size):
# For every minibatch the overlap with the data (tot_input) is
# calculated for each data point and each hidden unit.
mini_batch = shuffled_epoch_data[i*batch_size:(i+1)*batch_size,:].astype(np.float32)
mini_batch = torch.from_numpy(mini_batch).to(device)
mini_batch = torch.transpose(mini_batch, 0, 1)
sign = torch.sign(weights)
W = sign * torch.abs(weights) ** (lebesgue_norm - 1)
# https://stackoverflow.com/questions/44524901/how-to-do-product-of-matrices-in-pytorch
tot_input_torch = torch.mm(W, mini_batch)
# The sorted strengths of the activations are stored in y.
# The variable yl stores the activations of the post synaptic cells -
# it is denoted by g(Q) in Eq 3 of 'Unsupervised Learning by Competing Hidden Units', see also Eq 9 and Eq 10.
y_torch = torch.argsort(tot_input_torch, dim=0)
yl_torch = torch.zeros((n_hidden, batch_size), dtype = torch.float).to(device)
yl_torch[y_torch[n_hidden-1,:], torch.arange(batch_size)] = 1.0
yl_torch[y_torch[n_hidden-rank], torch.arange(batch_size)] = -anti_hebbian_learning_strength
# The variable ds is the right hand side of Eq 3
xx_torch = torch.sum(yl_torch * tot_input_torch,1)
xx_torch = xx_torch.unsqueeze(1)
xx_torch = xx_torch.repeat(1, num_features)
ds_torch = torch.mm(yl_torch, torch.transpose(mini_batch, 0, 1)) - (xx_torch * weights)
# Update weights
# The weights are updated after each minibatch in a way so that the largest update
# is equal to the learning rate eps at that epoch.
nc_torch = torch.max(torch.abs(ds_torch))
if nc_torch < precision:
nc_torch = precision
weights += eps*(ds_torch/nc_torch)
#if (i+1) % 100 == 0:
# print (f'Epoch [{epoch+1}/{n_epochs}], Step [{i+1}/{num_samples // batch_size}]')
print (f'Epoch [{epoch+1}/{n_epochs}]')
#draw_weights(weights.numpy(), Kx, Ky)
# %matplotlib inline
# %matplotlib notebook
fig=plt.figure(figsize=(12.9,10))
draw_weights(weights.cpu().numpy(), Kx, Ky)
print("Fin")
# -
# +
# Draw MNIST weights/feature detectors generated by unsupervised bio algo
# REFERENCED FROM: https://github.com/DimaKrotov/Biological_Learning
# To draw a heatmap of the weights a helper function is created
def draw_weights(synapses, Kx, Ky):
yy=0
HM=np.zeros((28*Ky,28*Kx))
for y in range(Ky):
for x in range(Kx):
HM[y*28:(y+1)*28,x*28:(x+1)*28]=synapses[yy,:].reshape(28,28)
yy += 1
plt.clf()
nc=np.amax(np.absolute(HM))
im=plt.imshow(HM,cmap='bwr',vmin=-nc,vmax=nc)
fig.colorbar(im,ticks=[np.amin(HM), 0, np.amax(HM)])
plt.axis('off')
fig.canvas.draw()
# %matplotlib inline
# %matplotlib notebook
fig=plt.figure(figsize=(12.9,10))
draw_weights(weights_mnist.cpu().numpy(), Kx, Ky)
print(weights_mnist.shape)
print("Fin")
| Unsupervised_bio_learning_pytorch_ignite_fix_cifar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + papermill={"duration": 5.884827, "end_time": "2021-06-07T09:12:31.082324", "exception": false, "start_time": "2021-06-07T09:12:25.197497", "status": "completed"} tags=[]
import re
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import time
import math
import PIL
import glob
from PIL import Image
from IPython import display
from tensorflow.keras.datasets.fashion_mnist import load_data
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense, Reshape, LeakyReLU, Conv2D, Conv2DTranspose, Flatten
from tensorflow.keras.layers import Dropout, BatchNormalization, MaxPooling2D
from tensorflow.keras.models import Model,Sequential
from tensorflow.keras.optimizers import RMSprop,Adam
# + papermill={"duration": 2.443555, "end_time": "2021-06-07T09:12:33.542292", "exception": false, "start_time": "2021-06-07T09:12:31.098737", "status": "completed"} tags=[]
BATCH_SIZE = 200
IMAGE_SIZE = (28,28,1)
EPOCHS = 100
LATENT_DIM = 64
NUM_EXAMPLES= 16
FIXED_Z = tf.random.normal([NUM_EXAMPLES, LATENT_DIM])
# + papermill={"duration": 0.022906, "end_time": "2021-06-07T09:12:33.581739", "exception": false, "start_time": "2021-06-07T09:12:33.558833", "status": "completed"} tags=[]
def load_dataset():
(X_train, y_train) , (X_test, y_test) = load_data()
X_train = np.expand_dims(X_train,axis=-1)
X_train = X_train.astype('float32')
X_train = (X_train/255.0)
X_train = X_train.reshape(-1,BATCH_SIZE,28,28,1)
return X_train , y_train
# + papermill={"duration": 1.249651, "end_time": "2021-06-07T09:12:34.847071", "exception": false, "start_time": "2021-06-07T09:12:33.597420", "status": "completed"} tags=[]
X,y = load_dataset()
# + papermill={"duration": 0.970847, "end_time": "2021-06-07T09:12:35.837981", "exception": false, "start_time": "2021-06-07T09:12:34.867134", "status": "completed"} tags=[]
for i in range(25):
plt.subplot(5, 5, 1 + i)
plt.axis('off')
plt.imshow(X[0][i], cmap='gray')
plt.show()
# + papermill={"duration": 0.030693, "end_time": "2021-06-07T09:12:35.888842", "exception": false, "start_time": "2021-06-07T09:12:35.858149", "status": "completed"} tags=[]
def make_generator_model():
model = tf.keras.Sequential()
model.add(Dense(7*7*128, input_shape=[LATENT_DIM]))
model.add(BatchNormalization())
model.add(LeakyReLU(0.2))
model.add(Reshape((7, 7, 128)))
model.add(Conv2DTranspose(128, (5, 5), strides=(2, 2), padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(0.2))
model.add(Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(0.2))
model.add(Conv2DTranspose(32, (5, 5), strides=(1, 1), padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(0.2))
model.add(Conv2DTranspose(1, (5, 5), strides=(1, 1), padding='same', activation='sigmoid'))
return model
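# A quick sanity check on the upsampling arithmetic: with padding='same' (and no
# output_padding), Keras Conv2DTranspose maps a spatial size of `in` to `in * stride`,
# so the 7x7 seed from Dense + Reshape becomes 14x14, then 28x28, and the two
# stride-1 layers preserve 28x28:

```python
def transpose_conv_same_out(size, stride):
    # Keras Conv2DTranspose with padding='same' (and no output_padding)
    # produces an output of exactly size * stride.
    return size * stride

size = 7                      # spatial size of the Dense + Reshape seed
for stride in (2, 2, 1, 1):   # strides of the four Conv2DTranspose layers above
    size = transpose_conv_same_out(size, stride)
print(size)
```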
# + papermill={"duration": 0.033446, "end_time": "2021-06-07T09:12:35.941985", "exception": false, "start_time": "2021-06-07T09:12:35.908539", "status": "completed"} tags=[]
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(Input(shape=(28,28,1)))
model.add(Conv2D(32, (5, 5), strides=(2, 2), padding='same'))
model.add(Conv2D(64, (4, 4), strides=(2, 2), padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(0.2))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), strides=(2, 2), padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(0.2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(128))
model.add(Dense(1))
return model
# + papermill={"duration": 0.251193, "end_time": "2021-06-07T09:12:36.217143", "exception": false, "start_time": "2021-06-07T09:12:35.965950", "status": "completed"} tags=[]
generator = make_generator_model()
discriminator = make_discriminator_model()
# + papermill={"duration": 0.029293, "end_time": "2021-06-07T09:12:36.270899", "exception": false, "start_time": "2021-06-07T09:12:36.241606", "status": "completed"} tags=[]
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
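# The two losses above are standard non-saturating GAN losses built from binary
# cross-entropy on raw logits. A small NumPy check (the logit values are made up
# for illustration) mirrors what BinaryCrossentropy(from_logits=True) computes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_from_logits(labels, logits):
    # Mean binary cross-entropy on raw logits, matching
    # tf.keras.losses.BinaryCrossentropy(from_logits=True).
    p = sigmoid(logits)
    return float(np.mean(-(labels * np.log(p) + (1 - labels) * np.log(1 - p))))

real_logits = np.array([2.0, 1.5])    # discriminator scores on real images
fake_logits = np.array([-1.0, -0.5])  # discriminator scores on generated images
d_loss = bce_from_logits(np.ones(2), real_logits) + bce_from_logits(np.zeros(2), fake_logits)
g_loss = bce_from_logits(np.ones(2), fake_logits)  # generator wants fakes labelled real
print(d_loss, g_loss)
```

Note the asymmetry: the same fake logits that give the discriminator a small loss give the generator a large one, which is what drives the adversarial game.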
# + papermill={"duration": 0.027221, "end_time": "2021-06-07T09:12:36.318599", "exception": false, "start_time": "2021-06-07T09:12:36.291378", "status": "completed"} tags=[]
generator_optimizer = tf.keras.optimizers.Adam(2e-4,0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(2e-4,0.5)
# + papermill={"duration": 0.027748, "end_time": "2021-06-07T09:12:36.366920", "exception": false, "start_time": "2021-06-07T09:12:36.339172", "status": "completed"} tags=[]
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
# + papermill={"duration": 0.029665, "end_time": "2021-06-07T09:12:36.417596", "exception": false, "start_time": "2021-06-07T09:12:36.387931", "status": "completed"} tags=[]
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, LATENT_DIM])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
# + papermill={"duration": 0.029231, "end_time": "2021-06-07T09:12:36.467896", "exception": false, "start_time": "2021-06-07T09:12:36.438665", "status": "completed"} tags=[]
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
i=0
for image_batch in dataset:
train_step(image_batch)
display.clear_output(wait=True)
generate_and_save_images(generator, epoch + 1, FIXED_Z)
if (epoch + 1) % 10 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(epoch + 1, time.time()-start))
display.clear_output(wait=True)
generate_and_save_images(generator,epochs,FIXED_Z)
# + papermill={"duration": 0.028068, "end_time": "2021-06-07T09:12:36.516809", "exception": false, "start_time": "2021-06-07T09:12:36.488741", "status": "completed"} tags=[]
def generate_and_save_images(model, epoch, test_input):
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
        plt.imshow(predictions[i, :, :, 0], cmap='gray')  # drop the trailing channel axis; imshow rejects (28, 28, 1)
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
# + papermill={"duration": 996.459976, "end_time": "2021-06-07T09:29:12.997249", "exception": false, "start_time": "2021-06-07T09:12:36.537273", "status": "completed"} tags=[]
train(X, EPOCHS)
# + papermill={"duration": 0.334569, "end_time": "2021-06-07T09:29:13.353256", "exception": false, "start_time": "2021-06-07T09:29:13.018687", "status": "completed"} tags=[]
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# + papermill={"duration": 0.027914, "end_time": "2021-06-07T09:29:13.403487", "exception": false, "start_time": "2021-06-07T09:29:13.375573", "status": "completed"} tags=[]
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
# + papermill={"duration": 0.042596, "end_time": "2021-06-07T09:29:13.467424", "exception": false, "start_time": "2021-06-07T09:29:13.424828", "status": "completed"} tags=[]
display_image(10)
# + papermill={"duration": 0.572288, "end_time": "2021-06-07T09:29:14.062302", "exception": false, "start_time": "2021-06-07T09:29:13.490014", "status": "completed"} tags=[]
fp_in = "./image_*.png"
fp_out = "./GAN_training.gif"
img, *imgs = [Image.open(f) for f in sorted(glob.glob(fp_in))]
img.save(fp=fp_out, format='GIF', append_images=imgs,
save_all=True, duration=100, loop=0)
# + papermill={"duration": 0.287848, "end_time": "2021-06-07T09:29:14.373159", "exception": false, "start_time": "2021-06-07T09:29:14.085311", "status": "completed"} tags=[]
noise = np.random.randn(32,64)
pred = generator.predict(noise)
# + papermill={"duration": 1.090681, "end_time": "2021-06-07T09:29:15.486928", "exception": false, "start_time": "2021-06-07T09:29:14.396247", "status": "completed"} tags=[]
w=10
h=10
fig=plt.figure(figsize=(8, 8))
columns = 4
rows = 4
for i in range(1, columns*rows +1):
fig.add_subplot(rows, columns, i)
    plt.imshow(pred[i, :, :, 0], cmap='gray')  # drop the trailing channel axis; imshow rejects (28, 28, 1)
plt.show()
# + papermill={"duration": 0.091485, "end_time": "2021-06-07T09:29:15.602658", "exception": false, "start_time": "2021-06-07T09:29:15.511173", "status": "completed"} tags=[]
generator.save('gen_fashion_mnist.hdf5')
discriminator.save('dis_fashion_mnist.hdf5')
# + papermill={"duration": 0.023571, "end_time": "2021-06-07T09:29:15.649982", "exception": false, "start_time": "2021-06-07T09:29:15.626411", "status": "completed"} tags=[]
| unconditional-gan-fashionmnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mykernel
# language: python
# name: mykernel
# ---
import requests
from bs4 import BeautifulSoup
from collections import defaultdict
import json
import re
# * Assumes wiki is gold standard source of truth and relies on its page structure, chronology, etc.
base_url = 'https://toarumajutsunoindex.fandom.com'
INDEX = 'Toaru_Majutsu_no_Index'
RAILGUN = 'Toaru_Kagaku_no_Railgun'
ASTRAL_BUDDY = 'Astral_Buddy'
ACCEL = 'Toaru_Kagaku_no_Accelerator'
DARK_MATTER = 'Toaru_Kagaku_no_Dark_Matter'
def get_character_data(wiki_url, affiliation):
data = {
'name_en': '',
'name_jp': '',
'img_url': '',
'affiliation': [affiliation],
'series': [],
'is_supporting_character': False
}
soup = BeautifulSoup(requests.get(wiki_url).text, 'html.parser')
name_en = soup.select('h1.page-header__title')[0].text
name_jp_raw = soup.find("div", {"data-source": "Kanji"})
# only get characters with both japanese and english names
if not name_jp_raw:
return None
data['name_en'] = name_en
data['name_jp'] = name_jp_raw.find("div", {"class": "pi-data-value pi-font"}).text
# default image
default_img_url = soup.find("a", {"class": "image-thumbnail"}).get('href')
data['img_url'] = default_img_url
headlines = [_['id'] for _ in soup.select("span.mw-headline")]
for headline in headlines:
if INDEX in headline:
data['series'].append('禁書')
elif RAILGUN in headline:
data['series'].append('超電磁砲')
elif ASTRAL_BUDDY in headline:
data['series'].append('アストラル・バディ')
elif ACCEL in headline:
data['series'].append('一方通行')
elif DARK_MATTER in headline:
data['series'].append('未元物質')
# ignore characters that are not in any of the main series/spinoffs
if not data['series']:
return None
intro_text = ''.join(_ for _ in [m.get('content') for m in soup.find_all("meta")] if _)
# heuristics to find minor characters
is_stub = soup.find('div', {'id': 'stub'}) is not None
is_few_sections = len(soup.find_all('li', {'class': 'toclevel-2'})) < 3
# explicit mentions
minor_pattern = r'\bis a ((small|recurring) )?(minor|supporting|background|side)\b'
is_minor = bool(re.search(minor_pattern, intro_text, re.IGNORECASE))
data['is_supporting_character'] = is_stub or is_few_sections or is_minor
return data
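# The minor-character regex above is a heuristic, so it is worth exercising it against a
# few sentences; the intro sentences below are hypothetical, not taken from the wiki.

```python
import re

minor_pattern = r'\bis a ((small|recurring) )?(minor|supporting|background|side)\b'

# Hypothetical intro sentences to exercise the heuristic.
samples = {
    "Foo is a minor character introduced in ...": True,
    "Bar is a recurring side character who ...": True,
    "Baz is a main character of the series.": False,
}
for text, expected in samples.items():
    assert bool(re.search(minor_pattern, text, re.IGNORECASE)) == expected
print("minor_pattern behaves as expected")
```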
# * Affiliation (magic/science/other) based on wiki categorization.
categories = {
'magic': {
'path': '/wiki/Category:Magic_Side_Characters',
'char_urls': []
},
'science': {
'path': '/wiki/Category:Science_Side_Characters',
'char_urls': []
},
'other': {
'path': '/wiki/Category:Normal_Characters',
'char_urls': []
}
}
def get_character_pages(url, char_urls=None):
if char_urls is None:
char_urls = []
curr_page = requests.get(url, allow_redirects=False)
    assert curr_page.status_code == 200
curr_soup = BeautifulSoup(curr_page.text, 'html.parser')
# only get characters with pictures
curr_char_divs = [
div.find('a') for div in curr_soup.find_all("div", {"class": "category-page__member-left"})
if 'Template_Placeholder_other.png' not in str(div)
]
curr_char_divs = [d for d in curr_char_divs if d]
# only get characters with valid pages (& no redirects)
for curr_char_div in curr_char_divs:
curr_char_url = "{base}{suffix}".format(base=base_url, suffix=curr_char_div.get('href'))
is_valid_page = requests.get(curr_char_url, allow_redirects=False).status_code == 200
if not is_valid_page:
continue
char_urls.append(curr_char_url)
    next_page = curr_soup.find("a", {"class": "category-page__pagination-next"})
    if next_page:
        # the recursive call appends into the same char_urls list and returns it,
        # so extending with the return value would duplicate every entry
        get_character_pages(next_page.get("href"), char_urls)
return char_urls
for name, _ in categories.items():
category_url = base_url + _['path']
categories[name]['char_urls'] = get_character_pages(category_url)
# Get data
#
# * Deal format is array of characters
# * Target json - [{name:"name", img: "img.png", opts: {series:["a"], affiliation:["b"]}}]
class Character:
def __init__(self, name=""):
self.name = name
self.affiliation = set()
self.series = set()
self.is_supporting_character = False
self.img = ""
def get_data(self):
return dict(
name = self.name,
img = self.img,
opts = dict(
affiliation = list(self.affiliation),
series = list(self.series),
is_supporting_character = self.is_supporting_character
)
)
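# A quick sanity check of the target format described above. The class is restated so the snippet runs standalone, and the character values are hypothetical:

```python
class Character:
    # same shape as the class above, restated so this snippet runs standalone
    def __init__(self, name=""):
        self.name = name
        self.affiliation = set()
        self.series = set()
        self.is_supporting_character = False
        self.img = ""

    def get_data(self):
        return dict(
            name=self.name,
            img=self.img,
            opts=dict(
                affiliation=list(self.affiliation),
                series=list(self.series),
                is_supporting_character=self.is_supporting_character,
            ),
        )

# Hypothetical values, just to show the output shape.
sample = Character('白井 黒子 (Shirai Kuroko)')
sample.img = 'kuroko.png'
sample.affiliation.add('science')
sample.series.update(['超電磁砲'])
print(sample.get_data())
```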
# +
characters = dict()
for category, _ in categories.items():
for char_url in _['char_urls']:
curr_data = get_character_data(char_url, category)
if not curr_data:
continue
name = "{jp} ({en})".format(jp=curr_data['name_jp'], en=curr_data['name_en'])
if name in characters:
curr_char = characters[name]
else:
curr_char = Character(name)
# note that .png extension comes after postprocessing
        curr_char.img = re.search(r'latest\?cb=(.+)', curr_data['img_url'], re.IGNORECASE).group(1) + '.png'
#curr_char.img = curr_data['img_url']
curr_char.affiliation.update(curr_data['affiliation'])
curr_char.series.update(curr_data['series'])
curr_char.is_supporting_character = curr_data['is_supporting_character']
characters[name] = curr_char
# -
# Some manual updates until I figure out better heuristics
chars_in_ab = ['御坂 美琴 (Misaka Mikoto)', '食蜂 操祈 (Shokuhou Misaki)',
'白井 黒子 (Shirai Kuroko)', '初春 飾利 (Uiharu Kazari)', '佐天 涙子 (Saten Ruiko)']
chars_not_minor = ['初春 飾利 (Uiharu Kazari)', '佐天 涙子 (Saten Ruiko)', "神苑小路 瑠璃懸巣 (Shin'enkouji Rurikakesu)",
'婚后 光子 (Kongou Mitsuko)', '湾内 絹保 (Wannai Kinuho)', '御坂 美鈴 (Misaka Misuzu)']
for char in chars_in_ab:
characters[char].series.update(["アストラル・バディ"])
for char in chars_not_minor:
characters[char].is_supporting_character = False
# For some reason Kamijou is missing...? Need to double-check the script (whoops)
kamijou = Character('上条 当麻 (Kamijou Touma)')
kamijou.img = 'kamijou.png'
kamijou.affiliation.add('science')
kamijou.series.update(['禁書', '超電磁砲'])
characters['上条 当麻 (Kamijou Touma)'] = kamijou
output_data = [character.get_data() for character in characters.values()]
# Ended up downloading the images using wget ._.
#
# To preserve aspect ratio, the downloaded images were resized and padded instead. See the other notebook in this repo.
write_image_urls = False
if write_image_urls:
with open('img.txt', 'w') as f:
for _ in output_data:
f.write(_['img'])
f.write('\n')
| scraper/get_character_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise 3 - Data Lake on S3
from pyspark.sql import SparkSession
import os
import configparser
# # Make sure that your AWS credentials are loaded as env vars
# +
config = configparser.ConfigParser()
#Normally this file should be in ~/.aws/credentials
config.read_file(open('aws/credentials.cfg'))
os.environ["AWS_ACCESS_KEY_ID"]= config['AWS']['AWS_ACCESS_KEY_ID']
os.environ["AWS_SECRET_ACCESS_KEY"]= config['AWS']['AWS_SECRET_ACCESS_KEY']
# -
# # Create spark session with hadoop-aws package
spark = SparkSession.builder\
.config("spark.jars.packages","org.apache.hadoop:hadoop-aws:2.7.0")\
.getOrCreate()
# # Load data from S3
df = spark.read.csv("s3a://udacity-dend/pagila/payment/payment.csv")
df.printSchema()
df.show(5)
# # Infer schema, fix header and separator
df = spark.read.csv("s3a://udacity-dend/pagila/payment/payment.csv",sep=";", inferSchema=True, header=True)
df.printSchema()
df.show(5)
# # Fix the data yourself
# +
import pyspark.sql.functions as F
dfPayment = df.withColumn("payment_date", F.to_timestamp("payment_date"))
dfPayment.printSchema()
dfPayment.show(5)
# -
# # Extract the month
dfPayment = dfPayment.withColumn("month", F.month("payment_date"))
dfPayment.show(5)
# # Compute aggregate revenue per month
dfPayment.createOrReplaceTempView("payment")
spark.sql("""
SELECT month, sum(amount) as revenue
FROM payment
GROUP by month
order by revenue desc
""").show()
# # Fix the schema
# +
from pyspark.sql.types import StructType as R, StructField as Fld, DoubleType as Dbl, StringType as Str, IntegerType as Int, DateType as Date
paymentSchema = R([
Fld("payment_id",Int()),
Fld("customer_id",Int()),
Fld("staff_id",Int()),
Fld("rental_id",Int()),
Fld("amount",Dbl()),
Fld("payment_date",Date()),
])
# -
dfPaymentWithSchema = spark.read.csv("s3a://udacity-dend/pagila/payment/payment.csv",sep=";", schema=paymentSchema, header=True)
dfPaymentWithSchema.printSchema()
dfPaymentWithSchema.show(5)
dfPaymentWithSchema.createOrReplaceTempView("payment")
spark.sql("""
SELECT month(payment_date) as m, sum(amount) as revenue
FROM payment
GROUP by m
order by revenue desc
""").show()
| Notebook Exercises/Course 3 Lesson 5 Introduction to Data Lakes/Exercise 3 - Data Lake on S3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# Four Optimization Algorithms
# Stuck in local minima:
# Hill climbing: starting at x = 1 on function 2, the search gets caught in a local (non-optimal) minimum. Gradient descent has the same problem on function 2 when starting at x = 1, whereas simulated annealing can climb back out.
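# A minimal numeric sketch of that trap (the 0.01 learning rate and 2000 iterations are assumptions, not values from the cells below): plain gradient descent started at x = 1 on function 2 slides into the local minimum near x ≈ 1.78, even though the global minimum sits near x ≈ -0.95, y ≈ -3.78.

```python
def f(x):
    return x * (x + 1.5) * (x - 1.5) * (x - 2)   # function 2

def slope(x):
    # derivative of the expanded form x**4 - 2*x**3 - 2.25*x**2 + 4.5*x
    return 4 * x**3 - 6 * x**2 - 4.5 * x + 4.5

x = 1.0                      # the problematic start point
for _ in range(2000):
    x -= 0.01 * slope(x)     # always step against the slope

print(round(x, 2), round(f(x), 2))  # → 1.78 -0.36  (a local, not the global, minimum)
```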
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import random
# -
# Random Searches
# +
#Plot curve for Function 1
y1=[]
x1=[]
n=1
x=-12
while n<26:
n+=1
x+=1
y = (x-1)**2
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color ='grey')
plt.axis([-20,20,-10,180])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Random Search')
plt.show
# Random Search: Function 1
x_begin = random.uniform(-11,14)
y_begin = (x_begin-1)**2
#print x_begin, y_begin
plt.plot(x_begin, y_begin, 'bo-', markersize=20)
plt.show
n1=0
while n1<100:
x_next = random.uniform(-11,14)
y_next = (x_next-1)**2 # calculated y
#print x_next, y_next
plt.plot(x_next, y_next,'ro-', markersize =10) # red points all the random point searches visited
plt.show
if y_next <=y_begin:
x_begin = x_next
y_begin = y_next
#print 'lowest value =',x_begin, y_begin
plt.plot(x_begin, y_begin, 'go-', markersize = 15) # green circle current local - will change when new lowest y found
n1+=1
plt.plot(x_begin, y_begin, 'yo-', markersize=20) #lowest value at end of random search is the gold/yellow circle
print('Lowest y value is min point (Big Gold Dot):', x_begin, y_begin, '\n')
print(' Red Dot: ordinary steps; Blue Dot: start point')
print(' Big Green Dot: minimum at some stage; Big Gold Dot: minimum/optimum point at end')
# +
#Plot curve for Function 2
y1=[]
x1=[]
n=1
x=-5
while n<11:
n+=1
x+=1
y = x*(x+1.5)*(x-1.5)*(x-2)
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-5,5,-20,400])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Random Search')
plt.show
# Random Search: Function 2
x_begin = random.uniform(-4,10)
y_begin = x_begin*(x_begin+1.5)*(x_begin-1.5)*(x_begin-2)
print(x_begin, y_begin)
plt.plot(x_begin, y_begin, 'bo-', markersize=20)
plt.show
n1=0
while n1<100:
x_next = random.uniform(-4,10)
y_next = x_next*(x_next+1.5)*(x_next-1.5)*(x_next-2) # calculating y here
#print x_next, y_next
plt.plot(x_next, y_next,'ro', markersize =10) # red points all the random point searches visited
plt.axis([-5,7,-20,350])
plt.show
if y_next <=y_begin:
x_begin = x_next
y_begin = y_next
#print 'lowest value =',x_begin, y_begin
plt.plot(x_begin, y_begin, 'go-', markersize =20) # green circle current local - will change when new lowest y found
n1+=1
plt.plot(x_begin, y_begin, 'yo-', markersize =20) #lowest value at end of random search is the gold/yellow circle
print('Lowest y value is min point (Big Gold Dot):', x_begin, y_begin, '\n')
print(' Red Dot: ordinary step; Blue Dot: begin')
print(' Big Green Dot: minimum at some stage; Big Gold Dot: minimum/optimum point at end')
# -
# _______________________________________________________________________________________________________________________________________________
# __________________________________________________________________________________________________________________________________
# Hill Climbing
# Function 1: x starting at 0
# +
def find_y(x):
return (x-1)**2
def get_xplus(x):
return x+0.1 # next step right
def get_xminus(x):
return x-0.1 # next step left
y1=[]
x1=[]
n=1
x=-5
# make a curve (with x -4 to 11) with min point to find and plot this curve
while n<15:
n+=1
x+=1
y = (x-1)**2
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-4,5,-2,10])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Hill Climbing')
plt.show
# start hill climbing by deciding where to begin - (could do user input but here hard coding for x=1,x=0)
#check slope direction and give x initial value
x_begin = 0
y_begin = find_y(x_begin)
compare =[(x_begin,y_begin)]
#print 'Start x,y:', x_begin, y_begin
plt.plot(x_begin,y_begin, 'bo-', markersize=15)
plt.show
x = x_begin # recalibrate here - if this not included value of next x below not correct
#firstly checking here that should be moving to the right (ie incrementing x by 0.1 is moving downhill
x = get_xplus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
#if it is downhill new_y will be less than old_y
if old_y > new_y: # looking to minimize y as this will be lowest point of curve
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xplus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
plt.plot(x,y,'ro-',markersize=10) # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
plt.plot(compare[0][0],compare[0][1],'yo-',markersize=15)
# if new_y not less than old_y above then move left to see and check new_y vs old_y again
elif old_y <new_y:
x = x_begin #reset x to initial value - then can look left
x = get_xminus(x)
y = find_y(x) # calculate new y
compare.pop()
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xminus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
plt.plot(x,y,'g.-',markersize=15) # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
plt.plot(x,y,'yo-',markersize = 15)
else:
    print('you started with minimum point:', x_begin, y_begin)
    x,y = x_begin, y_begin
print('minimum y value pair: x,y:', compare[0]) # this should give lowest pair
# -
# Function 1: x starting at 1 (this is min point so prog should see this)
# +
def find_y(x):
return (x-1)**2
def get_xplus(x):
return x+0.1 # next step right
def get_xminus(x):
return x-0.1 # next step left
y1=[]
x1=[]
n=1
x=-5
# make a curve (with x -4 to 11) with min point to find and plot this curve
while n<15:
n+=1
x+=1
y = (x-1)**2
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-5,8,-2,25])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Hill Climbing')
plt.show
# start hill climbing by deciding where to begin - (could do user input but here hard coding for x=1,x=0)
#check slope direction and give x initial value
x_begin = 1
y_begin = find_y(x_begin)
compare =[(x_begin,y_begin)]
#print 'Start x,y:', x_begin, y_begin
plt.plot(x_begin,y_begin, 'bo-',markersize=15)
plt.show
x = x_begin # recalibrate here - if this not included value of next x below not correct
#firstly checking here that should be moving to the right (ie incrementing x by 0.1 is moving downhill
x = get_xplus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
#if it is downhill new_y will be less than old_y
if old_y > new_y: # looking to minimize y as this will be lowest point of curve
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xplus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
plt.plot(x,y,'ro-') # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
plt.plot(compare[0][0],compare[0][1],'yo-', markersize=15 )
# if new_y not less than old_y above then move left to see and check new_y vs old_y again
elif old_y <new_y:
x = x_begin #reset x to initial value - then can look left
x = get_xminus(x)
y = find_y(x) # calculate new y
compare.pop()
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xminus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
plt.plot(x,y,'go-',markersize=15) # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
plt.plot(x,y,'yo-', markersize=20)
else:
    print('you started with minimum point:', x_begin, y_begin)
    x,y = x_begin, y_begin
    plt.plot(x,y,'yo-', markersize=20)
print('minimum y value pair: x,y:', compare[0]) # this should give lowest pair
# -
# Function 2: x starting at 1
# +
def find_y(x):
return x*(x+1.5)*(x-1.5)*(x-2)
def get_xplus(x):
return x+0.1 # for step right
def get_xminus(x):
return x-0.1 # for step left
y1=[]
x1=[]
n=1
x=-3
# make a curve (with x -3 to 4) with min point to find and plot this curve
while n<7:
n+=1
x+=1
y = x*(x+1.5)*(x-1.5)*(x-2)
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-3,3,-10,15])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Hill Climbing')
plt.show
# start hill climbing by deciding where to begin - (could do user input but here hard coding for x=1,x=0)
#check slope direction and give x initial value of 0
x_begin = 1.1
y_begin = find_y(x_begin)
compare =[(x_begin,y_begin)]
plt.plot(x_begin,y_begin, 'bo-', markersize =15)
plt.show
x = x_begin # recalibrate here - if this not included value of next x below not correct
#firstly checking here that should be moving to the right (ie incrementing x by 0.1 is moving downhill
x = get_xplus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
plt.plot(x,y,'ro-',markersize = 10)
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
#if it is downhill new_y will be less than old_y
if old_y > new_y: # looking to minimize y as this will be lowest point of curve
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xplus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
#print 'c', compare # check list has only one (x,y) pair again - this is current min (x,y)
plt.plot(x,y,'ro-',markersize = 10)# plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
    print('min value:', compare[0]) # this should give lowest pair
plt.plot(x,y,'yo-',markersize=15)
# if new_y not less than old_y above then move left to see and check new_y vs old_y again
elif old_y <new_y:
x = x_begin #reset x to initial value - then can look left
x = get_xminus(x)
y = find_y(x) # calculate new y
compare.pop()
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xminus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
#print 'in loop',compare #check values
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
#print 'c', compare # check list has only one (x,y) pair again - this is current min (x,y)
plt.plot(x,y,'go-',markersize=10) # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
plt.plot(x,y,'yo-',markersize=15)
else:
    print('you started with minimum point:', x_begin, y_begin)
    x,y = x_begin, y_begin
print('min value:', compare[0]) # this should give lowest pair
plt.plot(x,y,'yo-', markersize=15)
# -
# __________________________________________________________________________________________________________________________________________________
# Function 2: x starting at 0
# +
def find_y(x):
return x*(x+1.5)*(x-1.5)*(x-2)
def get_xplus(x):
return x+0.1 # for step right
def get_xminus(x):
return x-0.1 # for step left
y1=[]
x1=[]
n=1
x=-5
# make a curve (with x -4 to 11) with min point to find and plot this curve
while n<15:
n+=1
x+=1
y = x*(x+1.5)*(x-1.5)*(x-2)
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-3,3,-10,15])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Hill Climbing')
plt.show
# start hill climbing by deciding where to begin - (could do user input but here hard coding for x=1,x=0)
#check slope direction and give x initial value of 0
x_begin = 0
y_begin = find_y(x_begin)
compare =[(x_begin,y_begin)]
plt.plot(x_begin,y_begin, 'bo-', markersize=15)
plt.show
x = x_begin # recalibrate here - if this not included value of next x below not correct
#firstly checking here that should be moving to the right (ie incrementing x by 0.1 is moving downhill
x = get_xplus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
#if it is downhill new_y will be less than old_y
if old_y > new_y: # looking to minimize y as this will be lowest point of curve
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xplus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
#print 'c', compare # check list has only one (x,y) pair again - this is current min (x,y)
plt.plot(x,y,'ro-', markersize = 10) # plot this 'new' lowest point
else:
            print('minimum point is', x, y)
break #if next y not lower then break out with min y
    print('min value:', compare[0]) # this should give lowest pair
plt.plot(compare[0][0],compare[0][1],'yo-', markersize=15)
# if new_y not less than old_y above then move left to see and check new_y vs old_y again
elif old_y <new_y:
x = x_begin #reset x to initial value - then can look left
x = get_xminus(x)
y = find_y(x) # calculate new y
compare.pop()
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xminus(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
#print 'in loop',compare #check values
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
#print 'c', compare # check list has only one (x,y) pair again - this is current min (x,y)
plt.plot(x,y,'go-',markersize=10) # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
plt.plot(x,y,'yo-')
else:
    print('you started with minimum point:', x_begin, y_begin)
    x,y = x_begin, y_begin
print('min value:', compare[0]) # this should give lowest pair
plt.plot(x,y,'yo-', markersize=15)
# -
# _________________________________________________________________________________________________________________________________________________
# _______________________________________________________________________________________________________________________________________________
# Gradient Descent:
# May be computationally expensive, but much handier to code than hill climbing:
# using the slope means the search automatically moves in the correct direction (for minimization: minus the slope).
#
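# The update rule itself is a single line, x ← x − η·f′(x). A minimal sketch for function 1 (η = 0.2, matching the step scale used in the cells below):

```python
def f(x):
    return (x - 1) ** 2      # function 1

def slope(x):
    return 2 * x - 2         # f'(x)

x = 4.0                      # any start works: function 1 is convex
for _ in range(100):
    x -= 0.2 * slope(x)      # x <- x - learning_rate * f'(x)

print(round(x, 4), round(f(x), 4))  # → 1.0 0.0
```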
# Gradient Descent: Function 1
# +
#Gradient Descent: Function 1:x= -4
def find_y(x):
return (x-1)**2
def get_xplusstep(x): # next step - minus slope => always go downhill
step = -slope(x)*0.2
return x+step
def slope(x):
return 2*(x)-2
y1=[]
x1=[]
n=1
x=-5
# make a curve (with x -4 to 11) with min point to find and plot this curve
while n<15:
n+=1
x+=1
y = find_y(x)
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-5,2,-3,30])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Gradient Descent')
plt.show
#checking to see if functions above work
#print ' STEP SIZE',slope(4)
#print ' STEP SIZE',slope(2)
#print ' STEP SIZE',slope(-1)
# start gradient descent by deciding where to begin - (could do user input but here hard coding for x=4
x_begin = -4
y_begin = find_y(x_begin)
compare =[(x_begin,y_begin)]
plt.plot(x_begin,y_begin, 'bo-',markersize=15)
plt.show
x = x_begin # recalibrate here - if this not included value of next x below not correct
x = get_xplusstep(x) # get next x - slope has already been calculated in function slope
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
#if it is downhill new_y will be less than old_y
if old_y > new_y: # looking to minimize y as this will be lowest point of curve
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xplusstep(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
plt.plot(x,y,'ro-',markersize=10) # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
else:
    print('you started with minimum point:', x_begin, y_begin)
    x,y = x_begin, y_begin
print('minimum y value pair: x,y:', compare[0]) # lowest pair
plt.plot(x,y,'yo-',markersize=15)
# +
#Gradient Descent: Function 1: x= 4
def find_y(x):
return (x-1)**2
def get_xplusstep(x): # next step - minus slope => always go downhill
step = -slope(x)*0.2
return x+step
def slope(x):
return 2*(x)-2
y1=[]
x1=[]
n=1
x=-5
# make a curve (with x -4 to 11) with min point to find and plot this curve
while n<15:
n+=1
x+=1
y = find_y(x)
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-2,6,-3,15])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Gradient Descent')
plt.show
#checking to see if functions above work
#print ' STEP SIZE',slope(4)
#print ' STEP SIZE',slope(2)
#print ' STEP SIZE',slope(-1)
# start gradient descent by deciding where to begin - (could do user input but here hard coding for x=4
x_begin = 4
y_begin = find_y(x_begin)
compare =[(x_begin,y_begin)]
plt.plot(x_begin,y_begin, 'bo-',markersize=15)
plt.show
x = x_begin # recalibrate here - if this not included value of next x below not correct
x = get_xplusstep(x) # get next x - slope has already been calculated in function slope
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
#if it is downhill new_y will be less than old_y
if old_y > new_y: # looking to minimize y as this will be lowest point of curve
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xplusstep(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
plt.plot(x,y,'ro-',markersize=10) # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
else:
    print('you started with minimum point:', x_begin, y_begin)
    x,y = x_begin, y_begin
print('minimum y value pair: x,y:', compare[0]) # lowest pair
plt.plot(x,y,'yo-', markersize=15)
# +
# Gradient Descent: Function 2: starting x = 0
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import random
def find_y(x):
return x*(x+1.5)*(x-1.5)*(x-2)
def get_xplusstep(x): #next step: minus slope gives minimize direction
step = -slope(x)*0.01
return x+step
def slope(x):
return 4*(x**3)-6*(x**2)-4.5*(x)+4.5
y1=[]
x1=[]
n=1
x=-5
# make a curve (with x -4 to 11) with min point to find and plot this curve
while n<15:
n+=1
x+=1
y = find_y(x)
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-1.1,1,-5,1])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Gradient Descent')
plt.show
#checking to see if function above work
#print ' STEP SIZE',slope(4)
#print ' STEP SIZE',slope(2)
#print ' STEP SIZE',slope(-1)
# start gradient descent by deciding where to begin - (could do user input but here hard coding for x=0)
x_begin = 0
y_begin = find_y(x_begin)
compare =[(x_begin,y_begin)]
#print 'Start x,y:', x_begin, y_begin
plt.plot(x_begin,y_begin, 'bo-',markersize=15)
plt.show
x = x_begin # recalibrate here - if this not included value of next x below not correct
x = get_xplusstep(x) # get next x - slope has already been calculated in function slope
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop too
# if it is downhill new_y will be less than old_y
if old_y > new_y: # looking to minimize y as this will be lowest point of curve
compare.pop() # remove 2nd item on list as the same as first item
while True:
if old_y > new_y:
x = get_xplusstep(x) # get next x
y = find_y(x) # calculate new y
compare.append((x,y)) #add to list - now two values in list again
old_y = compare [0][1] # don't remove - for use in loop
new_y = compare [1][1] #for use in loop
if old_y > new_y:
compare[0] = compare[1] # move new (x,y) pair into position 0 on list overwriting old (x,y)
compare.pop() # remove 2nd item on list as the same as first item
plt.plot(x,y,'ro-',markersize=10) # plot this 'new' lowest point
else:
break #if next y not lower then break out with min y
else:
    print('you started with minimum point:', x_begin, y_begin)
    x,y = x_begin, y_begin
print('min value:', compare[0]) # this should give lowest pair
plt.plot(x,y,'yo-',markersize=15)
# +
# Gradient Descent: Function 2: starting x =1
def find_y(x):
return x*(x+1.5)*(x-1.5)*(x-2)
def get_xplusstep(x):
step = -slope(x)*0.06
return x+step
def slope(x):
return 4*(x**3)-6*(x**2)-4.5*(x)+4.5
y1=[]
x1=[]
n=1
x=-5
# make a curve (with x -4 to 11) with min point to find and plot this curve
while n<15:
n+=1
x+=1
y = find_y(x)
y1.append(y)
x1.append(x)
#print x1,y1
plt.figure(figsize=(18, 12), dpi=40)
for i in x1:
plt.plot(x1,y1, color = 'grey')
plt.axis([-2,2.5,-10,5])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title ('Gradient Descent')
plt.show
# start gradient descent by deciding where to begin (could be user input, but hard-coded here to x = 1)
x_begin = 1
y_begin = find_y(x_begin)
compare = [(x_begin, y_begin)]
plt.plot(x_begin, y_begin, 'bo-', markersize=15)
plt.show()
x = x_begin  # reset x here; otherwise the next x computed below is wrong
x = get_xplusstep(x)  # get next x
y = find_y(x)  # calculate new y
compare.append((x, y))  # add to list - now two (x, y) pairs in the list again
old_y = compare[0][1]
new_y = compare[1][1]
# if the step is downhill, new_y will be less than old_y
if old_y > new_y:  # minimising y, since that is the lowest point of the curve
    compare.pop()  # remove the 2nd item - it duplicates the first
    while True:
        if old_y > new_y:
            x = get_xplusstep(x)  # get next x
            y = find_y(x)  # calculate new y
            compare.append((x, y))
            old_y = compare[0][1]
            new_y = compare[1][1]
            if old_y > new_y:
                compare[0] = compare[1]  # overwrite the old (x, y) pair with the new one
                compare.pop()  # remove the duplicate 2nd item
                plt.plot(x, y, 'ro-', markersize=10)  # plot each new lowest point on the way down
        else:
            break  # the next y is not lower, so stop with the minimum
else:
    print('you started at the minimum point:', x_begin, y_begin)
    x, y = x_begin, y_begin
print('minimum y gives the best (x, y):', compare[0])
plt.plot(x, y, 'yo-', markersize=15)
# -
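The descent loop above can be condensed into the textbook update x ← x − lr·f′(x), stopping once a step no longer lowers y. A minimal sketch (the function and derivative match Function 2 above; `gd_sketch` is a hypothetical name, not part of the notebook):

```python
def gd_sketch(f, fprime, x, lr=0.06, max_iters=1000):
    """Gradient descent: step downhill until y stops decreasing."""
    y = f(x)
    for _ in range(max_iters):
        x_new = x - lr * fprime(x)  # same step rule as get_xplusstep above
        y_new = f(x_new)
        if y_new >= y:  # no improvement: x is (numerically) the minimum
            break
        x, y = x_new, y_new
    return x, y

# minimise f(x) = x(x+1.5)(x-1.5)(x-2) from x = 1, as in Function 2
x_min, y_min = gd_sketch(lambda x: x*(x + 1.5)*(x - 1.5)*(x - 2),
                         lambda x: 4*x**3 - 6*x**2 - 4.5*x + 4.5,
                         x=1.0)
```

Because the step size scales with the slope, the iterates slow down automatically as they approach the flat bottom of the curve.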
# ______________________________________________________________________________________________________________________________________________
# Simulated Annealing
# +
# Simulated Annealing: Function 1
# Referenced katrinaeg.com/simulated-annealing.html for some ideas,
# in particular her e**((old_y - new_y)/T) acceptance calculation, which is
# combined with my counter below.
def find_y(x):
    return (x - 1)**2

def cost(old_y, new_y, T):
    e = 2.71828
    return e**((old_y - new_y)/T)
y1 = []
x1 = []
n = -5
x = -12
while n < 20:
    n += 1
    x += 1
    y = (x - 1)**2
    y1.append(y)
    x1.append(x)
plt.figure(figsize=(18, 12), dpi=40)
plt.plot(x1, y1, color='grey')
plt.axis([-15, 15, -10, 180])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title('Simulated Annealing')
plt.show()
x_begin = random.uniform(-11, 14)
y_begin = (x_begin - 1)**2
print('Beginning point (Big Green Dot):', x_begin, ',', y_begin)
plt.plot(x_begin, y_begin, 'go-', markersize=15)
plt.show()
T = 1
alpha = 0.9
counter = 0
n1 = 100
while n1 > 0:
    x_next = random.uniform(-10, 11)*(n/10)  # scale the proposal step
    y_next = (x_next - 1)**2  # calculated y
    if y_next <= y_begin:
        x_begin = x_next
        y_begin = y_next
        plt.plot(x_begin, y_begin, 'bo-', markersize=10)  # blue circle: current point after a downhill move
    elif y_next > y_begin:
        # uphill test: e**((old_y - new_y)/T) as per the reference at the top, combined with a counter
        if counter < 10 and cost(y_begin, y_next, T) > random.uniform(-4, 4):
            x_begin = x_next
            y_begin = y_next
            counter += 1
            T = T*alpha  # cool the temperature (as in Function 2 below)
            plt.plot(x_begin, y_begin, 'ro-', markersize=10)  # red circle: current point after an uphill move
    n1 -= 1
plt.plot(x_begin, y_begin, 'yo-', markersize=15)  # the lowest value found is the gold/yellow circle
print('Lowest y value is the min point (Big Gold Dot):', x_begin, ',', y_begin, '\n')
print('  Red Dot: upward step; Blue Dot: downward step')
print('  Big Green Dot: start; Big Gold Dot: minimum/optimum point')
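For comparison, the textbook simulated-annealing loop accepts an uphill move when exp((y_old − y_new)/T) exceeds a uniform(0, 1) draw (the loop above compares against uniform(−4, 4) and caps uphill moves with a counter), and it cools T on every iteration, not only after uphill moves. A minimal sketch under those standard choices — `accept` and `anneal` are hypothetical names, not part of the notebook:

```python
import math
import random

def accept(y_old, y_new, T):
    """Metropolis criterion: always take downhill moves; take uphill
    moves with probability exp((y_old - y_new)/T)."""
    if y_new <= y_old:
        return True
    return math.exp((y_old - y_new)/T) > random.uniform(0, 1)

def anneal(f, x, T=1.0, alpha=0.9, iters=100, width=1.0):
    best_x, best_y = x, f(x)
    y = best_y
    for _ in range(iters):
        x_new = x + random.uniform(-width, width)  # local proposal
        y_new = f(x_new)
        if accept(y, y_new, T):
            x, y = x_new, y_new
            if y < best_y:
                best_x, best_y = x, y
        T *= alpha  # cool on every iteration
    return best_x, best_y
```

Unlike the loop above, this tracks the best point separately, so an accepted uphill move can never lose the minimum found so far.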
# +
# Simulated Annealing: Function 2
# Referenced katrinaeg.com/simulated-annealing.html for some ideas,
# in particular her e**((old_y - new_y)/T) acceptance calculation, which is
# combined with my counter below.
def find_y(x):
    return x*(x + 1.5)*(x - 1.5)*(x - 2)

def cost(old_y, new_y, T):
    e = 2.71828
    return e**((old_y - new_y)/T)
y1 = []
x1 = []
n = 1
x = -5
while n < 12:
    n += 1
    x += 1
    y = find_y(x)
    y1.append(y)
    x1.append(x)
plt.figure(figsize=(18, 12), dpi=40)
plt.plot(x1, y1, color='grey')
plt.axis([-5, 5, -20, 350])
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.title('Simulated Annealing')
plt.show()
x_begin = random.uniform(-4, 8)
y_begin = find_y(x_begin)
print('Beginning point:', x_begin, y_begin)
plt.plot(x_begin, y_begin, 'go-', markersize=15)
plt.show()
T = 1
alpha = 0.9
counter = 0
n1 = 100
while n1 > 0:
    x_next = random.uniform(-4, 8)*(n/10)  # scale the proposal step
    y_next = find_y(x_next)
    if y_next <= y_begin:
        x_begin = x_next
        y_begin = y_next
        plt.plot(x_begin, y_begin, 'bo-', markersize=10)  # blue circle: current point after a downhill move
    elif y_next > y_begin:  # consider an uphill move
        # uphill test: e**((old_y - new_y)/T) as per the reference at the top, combined with a counter
        if counter < 10 and cost(y_begin, y_next, T) > random.uniform(-4, 4):
            x_begin = x_next
            y_begin = y_next
            counter += 1
            T = T*alpha  # cool the temperature
            plt.plot(x_begin, y_begin, 'ro-', markersize=10)  # red circle: current point after an uphill move
    n1 -= 1
plt.plot(x_begin, y_begin, 'yo-', markersize=15)  # the lowest value found is the gold/yellow circle
print('Lowest y value is the min point (Big Gold Dot):', x_begin, y_begin, '\n')
print('  Red Dot: upward step; Blue Dot: downward step')
print('  Big Green Dot: start; Big Gold Dot: minimum/optimum point')
| Four Optimization Algorithms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PianoStyleEmbedding
# language: python
# name: pianostyleembedding
# ---
# # GPT-2 Classifier with target LM pretraining
# In this notebook we will train a GPT-2 classifier for the proxy task using the pretrained target language model for initialization. The language model is trained in 05_gpt2_lm.ipynb.
#
# This notebook is adapted from [this](https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2) blog post.
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
from transformers import GPT2Model, GPT2DoubleHeadsModel, GPT2Config
import eval_models
from train_utils import *
import fastai
import transformers
import tokenizers
print('fastai version :', fastai.__version__)
print('transformers version :', transformers.__version__)
print('tokenizers version :', tokenizers.__version__)
torch.cuda.set_device(0)
# ### Prep databunch
bs = 64
seed = 42
tok_model_dir = '/home/tjtsai/.fastai/data/bscore_lm/bpe_data/tokenizer_target'
max_seq_len = 512
cust_tok = CustomTokenizer(TransformersBaseTokenizer, tok_model_dir, max_seq_len)
transformer_base_tokenizer = TransformersBaseTokenizer(tok_model_dir, max_seq_len)
transformer_vocab = TransformersVocab(tokenizer = transformer_base_tokenizer._pretrained_tokenizer)
pad_idx = transformer_base_tokenizer._pretrained_tokenizer.token_to_id('<pad>')
cls_idx = transformer_base_tokenizer._pretrained_tokenizer.token_to_id('</s>')
bpe_path = Path('/home/tjtsai/.fastai/data/bscore_lm/bpe_data')
train_df = pd.read_csv(bpe_path/'train64.char.csv')
valid_df = pd.read_csv(bpe_path/'valid64.char.csv')
test_df = pd.read_csv(bpe_path/'test64.char.csv')
data_clas = TextDataBunch.from_df(bpe_path, train_df, valid_df, tokenizer=cust_tok, vocab=transformer_vocab,
include_bos=False, include_eos=False, pad_first=False, pad_idx=pad_idx,
bs=bs, num_workers=1)
# ### Train Classifier
model_class, config_class = GPT2Model, GPT2Config
lang_model_path = '/home/tjtsai/.fastai/data/bscore_lm/bpe_data/models/gpt2_train-target_lm'
config = config_class.from_pretrained(lang_model_path)
config.num_labels = data_clas.c
transformer_model = model_class.from_pretrained(lang_model_path, config = config)
gpt2_clas = GPT2Classifier(transformer_model, config, pad_idx, cls_idx)
# +
# learner.destroy()
# torch.cuda.empty_cache()
# -
learner = Learner(data_clas, gpt2_clas, metrics=[accuracy, FBeta(average = 'macro', beta=1)])
list_layers = [learner.model.transformer.wte,
learner.model.transformer.wpe,
learner.model.transformer.h[0],
learner.model.transformer.h[1],
learner.model.transformer.h[2],
learner.model.transformer.h[3],
learner.model.transformer.h[4],
learner.model.transformer.h[5],
learner.model.transformer.ln_f]
learner.split(list_layers)
print(learner.layer_groups)
seed_all(seed)
learner.freeze_to(-1)
learner.summary()
learner.lr_find()
learner.recorder.plot(suggestion=True)
lr = 1e-3
learner.fit_one_cycle(4, lr, moms=(0.8,0.7))
learner.freeze_to(-2)
learner.fit_one_cycle(3, slice(lr/(2.6**4),lr), moms=(0.8, 0.9))
learner.freeze_to(-3)
learner.fit_one_cycle(2, slice(lr/2/(2.6**4),lr/2), moms=(0.8, 0.9))
learner.freeze_to(-4)
learner.fit_one_cycle(2, slice(lr/10/(2.6**4),lr/10), moms=(0.8, 0.9))
learner.save('gpt2_train-target_clas')
#learner.load('gpt2_train-target_clas')
# ### Evaluate Classifier
# Evaluate on the proxy task -- classifying fixed-length chunks of bootleg score features.
data_clas_test = TextDataBunch.from_df(bpe_path, train_df, test_df, tokenizer=cust_tok, vocab=transformer_vocab,
include_bos=False, include_eos=False, pad_first=False, pad_idx=pad_idx,
bs=bs, num_workers=1)
learner.validate(data_clas_test.valid_dl)
# Evaluate on the original task -- classifying pages of sheet music. We can evaluate our models in two ways:
# - applying the model to a variable length sequence
# - applying the model to multiple fixed-length windows and averaging the predictions
#
# First we evaluate the model on variable length inputs. Report results with and without applying priors.
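Applying a class prior amounts to reweighting each predicted distribution and renormalising; presumably `eval_models.calcAccuracy_fullpage` does something along these lines internally. A hedged sketch with a hypothetical helper name:

```python
import numpy as np

def apply_prior(probs, prior):
    """Reweight predicted class distributions by class priors and
    renormalise. probs: (n, c) softmax outputs; prior: (c,) weights."""
    weighted = np.asarray(probs, dtype=float) * np.asarray(prior, dtype=float)
    return weighted / weighted.sum(axis=1, keepdims=True)
```

With a uniform prior this leaves the predictions unchanged; a skewed prior shifts probability mass toward the more frequent classes.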
train_fullpage_df = pd.read_csv(bpe_path/'train.fullpage.char.csv')
valid_fullpage_df = pd.read_csv(bpe_path/'valid.fullpage.char.csv')
test_fullpage_df = pd.read_csv(bpe_path/'test.fullpage.char.csv')
data_clas_test = TextDataBunch.from_df(bpe_path, train_fullpage_df, valid_fullpage_df, test_fullpage_df,
tokenizer=cust_tok, vocab=transformer_vocab, include_bos=False,
include_eos=False, pad_first=False, pad_idx=pad_idx, bs=bs, num_workers=1)
(acc, acc_with_prior), (f1, f1_with_prior) = eval_models.calcAccuracy_fullpage(learner, bpe_path, train_fullpage_df, valid_fullpage_df, test_fullpage_df, databunch=data_clas_test)
(acc, acc_with_prior), (f1, f1_with_prior)
# Now we evaluate the model by considering multiple fixed-length windows and averaging the predictions.
test_ensemble_df = pd.read_csv(bpe_path/'test.ensemble64.char.csv')
data_clas_test = TextDataBunch.from_df(bpe_path, train_fullpage_df, valid_fullpage_df, test_ensemble_df,
text_cols = 'text', label_cols = 'label', tokenizer=cust_tok,
vocab=transformer_vocab, include_bos=False, include_eos=False,
pad_first=False, pad_idx=pad_idx, bs=bs, num_workers=1)
(acc, acc_with_prior), (f1, f1_with_prior) = eval_models.calcAccuracy_fullpage(learner, bpe_path, train_fullpage_df, valid_fullpage_df, test_ensemble_df, databunch=data_clas_test, ensembled=True)
(acc, acc_with_prior), (f1, f1_with_prior)
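Averaging window predictions can be sketched as follows; `ensembled_prediction` is a hypothetical helper, and the real averaging presumably happens inside `eval_models.calcAccuracy_fullpage` when `ensembled=True`:

```python
import numpy as np

def ensembled_prediction(window_probs):
    """Average per-window class probabilities for one page, then take
    the argmax. window_probs: (n_windows, n_classes) softmax outputs
    from the fixed-length classifier."""
    mean_probs = np.asarray(window_probs, dtype=float).mean(axis=0)
    return int(mean_probs.argmax()), mean_probs
```

Averaging probabilities rather than taking a majority vote lets confident windows outweigh uncertain ones.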
# ### Error Analysis
interp = ClassificationInterpretation.from_learner(learner)
interp.plot_confusion_matrix(figsize=(12,12))
| 05_gpt2_pretrained.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Starting with Tensorflow
# Document created with [Jupyter](https://jupyter.org) and [IPython](https://ipython.org).
#
# Outline of some basics:
#
# * [Notebook Basics](../examples/Notebook/Notebook Basics.ipynb)
# * [IPython - beyond plain python](../examples/IPython Kernel/Beyond Plain Python.ipynb)
# * [How Jupyter works](../examples/Notebook/Multiple%20Languages%2C%20Frontends.ipynb) to run code in different languages.
# Running TensorFlow on Windows.
#
# Prerequisites:
#
# Python 3.5.2 | Anaconda 4.3.1 (64-bit)
#
# * [Anaconda](https://www.anaconda.com/download)
# * [Python](https://www.python.org/downloads/release/python-370)
#
# 1. Create a conda environment named tensorflow by invoking the following command:
#
#        C:> conda create -n tensorflow pip python=3.5
#
# 2. Activate the conda environment by issuing the following command:
#
#        C:> activate tensorflow
#        (tensorflow)C:>
#
#    Your prompt will change to (tensorflow)C:>
#
# 3. Install TensorFlow inside the environment:
#
#        (tensorflow)C:> pip install --ignore-installed --upgrade tensorflow
#
# 4. Start the Python interpreter:
#
#        (tensorflow)C:> python
#
#        Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul  5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)] on win32
#        Type "help", "copyright", "credits" or "license" for more information.
#
# 5. Try importing TensorFlow:
#
#        >>> import tensorflow as tf
| Begin tf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
path = "./sims/renewable_scales/"
scales = np.linspace(0.1,1.2,23)
scales
means = []
var = []
for s in scales:
    dfs = []
    m = []
    v = []
    for i in range(10):
        filename = path + "renewable_{}_{}".format(round(s, 2), i)
        dfs.append(pd.read_csv(filename, names=['total_cost', 'true_demand', 'demand_sd', '2x_sup_sd', 'ren_sup', 'rem_demand']))
        try:
            dfs[i] = dfs[i].drop(730)  # extra line in some of the files
        except KeyError:
            pass
        dfs[i]["total_cost"] = pd.to_numeric(dfs[i]["total_cost"], downcast="float")
        dfs[i]['avg_cost'] = dfs[i]['total_cost'] / dfs[i]['true_demand']
        m.append(dfs[i]['avg_cost'].mean())
        v.append(dfs[i]['avg_cost'].var())
    m = np.array(m)
    v = np.array(v)
    means.append(m.mean())
    var.append(v.mean())
means
plt.bar(np.arange(23),means)
plt.bar(np.arange(23),var)
dfs[0]
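The bar charts above index the bars 0–22; plotting against the actual scale factors makes the x-axis directly interpretable. A sketch with a hypothetical helper name:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_by_scale(scales, values, ylabel):
    """Bar chart indexed by the actual renewable scale factors
    (0.1 ... 1.2) instead of bar positions 0 ... 22."""
    fig, ax = plt.subplots()
    ax.bar(scales, values, width=0.04)  # width < scale spacing of 0.05
    ax.set_xlabel('renewable scale')
    ax.set_ylabel(ylabel)
    return fig, ax
```

For example, `plot_by_scale(scales, means, 'mean avg cost')` reproduces the first chart with a meaningful x-axis.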
| .ipynb_checkpoints/Renewable Sensitivity-checkpoint.ipynb |