# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="QHGLknhThVaG"
# set tf 1.x for colab
# %tensorflow_version 1.x
# + [markdown] colab_type="text" id="eeDp18abhVaa"
# ### Generating human faces with Adversarial Networks
# <img src="https://github.com/hse-aml/intro-to-dl/blob/master/week4/images/nvidia_cool_gan.png?raw=1" width="400px"/>
# _© research.nvidia.com_
#
# This time we'll train a neural net to generate plausible human faces in all their subtlety: appearance, expression, accessories, etc. 'Cuz when us machines gonna take over Earth, there won't be any more faces left. We want to preserve this data for future iterations. Yikes...
#
# Based on https://github.com/Lasagne/Recipes/pull/94 .
#
# + colab={} colab_type="code" id="YodkcOLGhVab"
import sys
sys.path.append("..")
import grading
import download_utils
import tqdm_utils
# + colab={} colab_type="code" id="Uafw8F7GhVaj"
download_utils.link_week_4_resources()
# + colab={} colab_type="code" id="5FIui04chVas"
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
plt.rcParams.update({'axes.titlesize': 'small'})
from sklearn.datasets import load_digits
# The following line fetches two datasets: images (usable for training) and attributes.
# The attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind.
from lfw_dataset import load_lfw_dataset
data,attrs = load_lfw_dataset(dimx=36,dimy=36)
#preprocess faces
data = np.float32(data)/255.
IMG_SHAPE = data.shape[1:]
# + colab={} colab_type="code" id="ct-00FSlhVa0" outputId="70b8a2f6-fac0-490b-9c27-a9ca040a556d"
#print random image
plt.imshow(data[np.random.randint(data.shape[0])], cmap="gray", interpolation="none")
# + [markdown] colab_type="text" id="muM7PmNShVa7"
# # Generative adversarial nets 101
#
# <img src="https://github.com/hse-aml/intro-to-dl/blob/master/week4/images/noise_to_face.png?raw=1" width="400px"/>
# _© torch.github.io_
#
# Deep learning is simple, isn't it?
# * build some network that generates the face (small image)
# * make up a __measure__ of __how good that face is__
# * optimize with gradient descent :)
#
#
# The only problem is: how can we engineers tell a well-generated face from a bad one? And I bet you we won't ask a designer for help.
#
# __If we can't tell good faces from bad, we delegate it to yet another neural network!__
#
# That makes the two of them:
# * __G__enerator - takes random noise for inspiration and tries to generate a face sample.
# * Let's call him __G__(z), where z is gaussian noise.
# * __D__iscriminator - takes a face sample and tries to tell if it's real or fake.
# * Predicts the probability of the input image being a __real face__
# * Let's call him __D__(x), x being an image.
# * __D(x)__ is the prediction for a real image and __D(G(z))__ is the prediction for a face made by the generator.
#
# Before we dive into training them, let's construct the two networks.
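# A toy numeric illustration of the objective (made-up probabilities, not the assignment code): if D outputs the probability that its input is real, the discriminator minimizes -[log D(x) + log(1 - D(G(z)))] while the generator minimizes -log D(G(z)).

```python
import numpy as np

# Hypothetical discriminator outputs (probability of "real"):
d_real = 0.9   # D(x) on a real face
d_fake = 0.2   # D(G(z)) on a generated face

d_loss = -(np.log(d_real) + np.log(1.0 - d_fake))  # D wants d_real -> 1, d_fake -> 0
g_loss = -np.log(d_fake)                           # G wants d_fake -> 1

print(round(d_loss, 4))  # 0.3285: low, D is currently winning
print(round(g_loss, 4))  # 1.6094: high, G still needs work
```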
# + colab={} colab_type="code" id="ou0UB1m8hVa9" outputId="f490e519-7904-455b-f96a-2083e51fb841"
import tensorflow as tf
from keras_utils import reset_tf_session
s = reset_tf_session()
import keras
from keras.models import Sequential
from keras import layers as L
# + colab={} colab_type="code" id="l2g81yqhhVbC"
CODE_SIZE = 256
generator = Sequential()
generator.add(L.InputLayer([CODE_SIZE],name='noise'))
generator.add(L.Dense(10*8*8, activation='elu'))
generator.add(L.Reshape((8,8,10)))
generator.add(L.Deconv2D(64,kernel_size=(5,5),activation='elu'))
generator.add(L.Deconv2D(64,kernel_size=(5,5),activation='elu'))
generator.add(L.UpSampling2D(size=(2,2)))
generator.add(L.Deconv2D(32,kernel_size=3,activation='elu'))
generator.add(L.Deconv2D(32,kernel_size=3,activation='elu'))
generator.add(L.Deconv2D(32,kernel_size=3,activation='elu'))
generator.add(L.Conv2D(3,kernel_size=3,activation=None))
# + colab={} colab_type="code" id="198o-fr7hVbL"
assert generator.output_shape[1:] == IMG_SHAPE, "generator must output an image of shape %s, but instead it produces %s"%(IMG_SHAPE,generator.output_shape[1:])
# + [markdown] colab_type="text" id="c6ZIvuvLhVbT"
# ### Discriminator
# * Discriminator is your usual convolutional network with interleaved convolution and pooling layers
# * The network does not include dropout/batchnorm to avoid learning complications.
# * We also regularize the pre-output layer to prevent discriminator from being too certain.
# + colab={} colab_type="code" id="4vAsCo4GhVbV"
discriminator = Sequential()
discriminator.add(L.InputLayer(IMG_SHAPE))
# One possible discriminator body (an assumption: the original leaves this part as an exercise)
discriminator.add(L.Conv2D(32, kernel_size=(3, 3), activation='elu'))
discriminator.add(L.MaxPool2D(pool_size=(2, 2)))
discriminator.add(L.Conv2D(64, kernel_size=(3, 3), activation='elu'))
discriminator.add(L.MaxPool2D(pool_size=(2, 2)))
discriminator.add(L.Flatten())
discriminator.add(L.Dense(256,activation='tanh'))
discriminator.add(L.Dense(2,activation=tf.nn.log_softmax))
# + [markdown] colab_type="text" id="xK8NaklghVbc"
# # Training
#
# We train the two networks concurrently:
# * Train __discriminator__ to better distinguish real data from __current__ generator
# * Train __generator__ to make the discriminator mistake generated samples for real ones
# * Since discriminator is a differentiable neural network, we train both with gradient descent.
#
# <img src="https://github.com/hse-aml/intro-to-dl/blob/master/week4/images/gan.png?raw=1" width="600px"/>
# _© deeplearning4j.org_
#
# Training is done iteratively until the discriminator is no longer able to tell the difference (or until you run out of patience).
#
#
# ### Tricks:
# * Regularize discriminator output weights to prevent explosion
# * Train generator with __adam__ to speed up training. Discriminator trains with SGD to avoid problems with momentum.
# * More: https://github.com/soumith/ganhacks
#
# + colab={} colab_type="code" id="OrT__cjPhVbd"
noise = tf.placeholder('float32',[None,CODE_SIZE])
real_data = tf.placeholder('float32',[None,]+list(IMG_SHAPE))
logp_real = discriminator(real_data)
generated_data = generator(noise)
logp_gen = discriminator(generated_data)  # log P(real | G(z))
# + colab={} colab_type="code" id="r-ypjbnnhVbh"
########################
#discriminator training#
########################
d_loss = -tf.reduce_mean(logp_real[:,1] + logp_gen[:,0])
#regularize
d_loss += tf.reduce_mean(discriminator.layers[-1].kernel**2)
#optimize
disc_optimizer = tf.train.GradientDescentOptimizer(1e-3).minimize(d_loss,var_list=discriminator.trainable_weights)
# + colab={} colab_type="code" id="V2c2WiQWhVbs"
########################
###generator training###
########################
g_loss = -tf.reduce_mean(logp_gen[:, 1])  # push the discriminator to call generated samples real
gen_optimizer = tf.train.AdamOptimizer(1e-4).minimize(g_loss,var_list=generator.trainable_weights)
# + colab={} colab_type="code" id="iAVD5eldhVby"
s.run(tf.global_variables_initializer())
# + [markdown] colab_type="text" id="OoiAbtpahVb3"
# ### Auxiliary functions
# Here we define a few helper functions that draw current data distributions and sample training batches.
# + colab={} colab_type="code" id="PziToyZnhVb4"
def sample_noise_batch(bsize):
return np.random.normal(size=(bsize, CODE_SIZE)).astype('float32')
def sample_data_batch(bsize):
idxs = np.random.choice(np.arange(data.shape[0]), size=bsize)
return data[idxs]
def sample_images(nrow,ncol, sharp=False):
images = generator.predict(sample_noise_batch(bsize=nrow*ncol))
if np.var(images)!=0:
images = images.clip(np.min(data),np.max(data))
for i in range(nrow*ncol):
plt.subplot(nrow,ncol,i+1)
if sharp:
plt.imshow(images[i].reshape(IMG_SHAPE),cmap="gray", interpolation="none")
else:
plt.imshow(images[i].reshape(IMG_SHAPE),cmap="gray")
plt.show()
def sample_probas(bsize):
plt.title('Generated vs real data')
plt.hist(np.exp(discriminator.predict(sample_data_batch(bsize)))[:,1],
label='D(x)', alpha=0.5,range=[0,1])
plt.hist(np.exp(discriminator.predict(generator.predict(sample_noise_batch(bsize))))[:,1],
label='D(G(z))',alpha=0.5,range=[0,1])
plt.legend(loc='best')
plt.show()
# + [markdown] colab_type="text" id="NA7kk4pEhVb-"
# ### Training
# Main loop.
# We just train generator and discriminator in a loop and plot results once every N iterations.
# + colab={} colab_type="code" id="Dmcm_iQBhVb_" outputId="dde1fad3-39e0-40f8-f64b-84cff938bfe7"
from IPython import display
for epoch in tqdm_utils.tqdm_notebook_failsafe(range(50000)):
feed_dict = {
real_data:sample_data_batch(100),
noise:sample_noise_batch(100)
}
for i in range(5):
s.run(disc_optimizer,feed_dict)
s.run(gen_optimizer,feed_dict)
if epoch %100==0:
display.clear_output(wait=True)
sample_images(2,3,True)
sample_probas(1000)
# + colab={} colab_type="code" id="HFYzkfHZhVcD"
from submit_honor import submit_honor
submit_honor((generator, discriminator), <YOUR_EMAIL>, <YOUR_TOKEN>)
# + colab={} colab_type="code" id="Eb2ei6LhhVcK" outputId="64046456-f550-4d6a-fd8f-cfa54a4622a9"
#The network was trained for about 15k iterations.
#Training for longer yields MUCH better results
plt.figure(figsize=[16,24])
sample_images(16,8)
# + colab={} colab_type="code" id="lG3Vn4R0hVcP"
# Source notebook: Week-4/Adversarial_task.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This allows multiple outputs from a single jupyter notebook cell:
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# %matplotlib inline
import pandas as pd
daily = pd.read_csv('Issue101_data.csv',index_col=0,parse_dates=True)
daily.columns = daily.columns.str.capitalize() # alphavantage convention
daily.index.name = 'Date'
daily.shape
daily.head()
daily.tail()
import mplfinance as mpf
mpf.__version__
ndf = daily.iloc[::-1]
ndf.head()
ndf.tail()
mpf.plot(daily)
mpf.plot(ndf)
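# The flip above matters because the experiments here suggest mplfinance expects its DatetimeIndex in ascending (oldest-first) order, while this data arrives newest-first. A self-contained check with synthetic data (not the Issue101 CSV):

```python
import pandas as pd

# Synthetic daily closes delivered newest-first, as many data APIs do:
idx = pd.date_range("2020-01-06", periods=5)[::-1]
df = pd.DataFrame({"Close": [5.0, 4.0, 3.0, 2.0, 1.0]}, index=idx)

print(df.index.is_monotonic_increasing)             # False: newest first
print(df.iloc[::-1].index.is_monotonic_increasing)  # True: flipped to oldest first
```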
import matplotlib
print(matplotlib.rcParams['backend'])
for jj in [0,20,40,60,80]:
#mpf.plot(daily.iloc[jj:jj+20,:])
daily.iloc[jj:jj+20,:]
# Source notebook: examples/scratch_pad/issues/Issue101_xaxis_not_showing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from glob import glob
import yaml
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from pathlib import Path
paths = list((Path.cwd() / ".." / "experiments" / "C4P4").glob("fpt*/bio.solutions.df.gzip"))
dfs = list(map(pd.read_pickle, paths))
df = pd.concat(dfs, ignore_index=True, sort=True)
df = df.sort_values("total_time")
plt.xscale("log")
for (selector, lower_bound), g in df[df["solved"]].groupby(["selector", "lower_bound"]):
plt.scatter(g["total_time"], range(len(g["time"])))
# +
d = dict()
for (k, g) in df.groupby(["selector", "lower_bound", "search_strategy"]):
t = pd.Series(g.loc[g["solutions"].apply(lambda x: len(x[0]["edits"]) >= 0 if len(x) != 0 else True), "total_time"])
#t = pd.Series(g["time"])
t[t == -1] = 150 * 10**9
d[k] = t.values / 10**9
fig, ax = plt.subplots(figsize=(8, 6))
ax.set_xscale("log")
ax.grid(True)
for k in d:
if k[2] == "Fixed": continue
ax.plot(np.sort(d[k]), range(len(d[k])), label="{0} {1} {2}".format(*k))
ax.axhline(y=len(list(d.values())[0]), c="black")
ax.set_ylim((-50, None))
ax.set_xlim((10**-5, 100))
ax.set_ylabel("Solved")
ax.set_xlabel("Total Time [s]")
fig.legend(loc="upper left")
plt.show()
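# The plot above is a "solved-by-time" (cactus) curve: sorting each configuration's solve times and plotting them against their rank shows how many instances are solved within any time budget. A minimal sketch with synthetic times:

```python
import numpy as np

# Synthetic per-instance solve times in seconds for one configuration:
times = np.array([0.5, 120.0, 2.0, 0.1, 8.0])

sorted_times = np.sort(times)            # x-values of the curve
ranks = np.arange(len(sorted_times))     # y-values: instances solved so far

# Reading the curve at a 10-second budget:
solved_within_10s = int((sorted_times <= 10.0).sum())
print(solved_within_10s)  # 4
```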
# +
from collections import Counter
fig, ax = plt.subplots(figsize=(6, 6))
ax.set_yscale("log")
ax.set_xscale("log")
ax.grid(True)
for (k, g) in df.groupby(["selector", "lower_bound", "search_strategy"]):
if k[2] == "Fixed": continue
c = Counter(g["k"].apply(len))
ax.scatter(list(c.keys()), list(c.values()), label=k)
ax.set_ylim((10**-0.5, 10**3))
ax.set_ylabel("Count")
ax.set_xlabel("Number of evaluation steps")
fig.legend()
plt.show()
# -
fig, ax = plt.subplots()
for k, g in df.groupby(["search_strategy"]):
total_work = g["calls"].apply(len)
if total_work.max() <= 0: continue
ax.hist(total_work, density=True, bins=range(10), label=str(k), alpha=0.5)
fig.legend()
plt.show()
# +
import yaml
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from pathlib import Path
from hashlib import sha1
plt.rcParams['axes.axisbelow'] = True
def read_data(ilp_paths, fpt_paths) -> pd.DataFrame:
ilp_df = pd.concat(map(pd.read_pickle, ilp_paths))
fpt_df = pd.concat(map(pd.read_pickle, fpt_paths))
ilp_df["name"] = "Basic"
ilp_df.loc[ilp_df["single_constraints"], "name"] = "Single"
ilp_df.loc[ilp_df["sparse_constraints"], "name"] = "Sparse"
fpt_df["name"] = fpt_df.apply(lambda row: f"{row['selector']} {row['lower_bound']} {row['search_strategy']}", axis=1)
headers = list(set(ilp_df.columns) & set(fpt_df.columns))
df = pd.concat([ilp_df[headers], fpt_df[headers]])
return df
def plot_solved_by_time_curve(df, *, names=None, labels=None, min_number_of_solutions=10):
if min_number_of_solutions is None:
min_number_of_solutions = 0
if labels is None:
labels = names
d = dict()
for name in names:
g = df.loc[df["name"] == name]
g = g.loc[g["solutions"].apply(lambda x: len(x[0]["edits"]) >= min_number_of_solutions if len(x) != 0 else True)]
solved = g["solution_cost"] != -1
t = pd.Series(g["total_time"])
t[~solved] = t.max() * 1.5
d[name] = t.values
fig, ax = plt.subplots(figsize=(6, 4))
ax.set_xscale("log")
ax.grid(True)
for name, label in zip(names, labels):
ax.plot(np.sort(d[name]) / 10**9, range(len(d[name])), label=label)
for y in (0, len(list(d.values())[0])):
ax.axhline(y=y, c="darkgrey")
ax.set_ylim((-50, None))
ax.set_xlim((10**-3, 10**2))
ax.set_ylabel("Number of solved instances")
ax.set_xlabel("Total Time [s]")
fig.legend(loc="upper left", bbox_to_anchor=(0.9, 0.9))
plt.show()
ilp_paths = list((Path.cwd() / "../experiments/C4P4/").glob("ilp*/*.solutions.df.gzip"))
fpt_paths = list((Path.cwd() / "../experiments/C4P4/").glob("fpt*/*.solutions.df.gzip"))
# df = read_data(ilp_paths, fpt_paths)
plot_solved_by_time_curve(df[df["dataset"] == "bio-C4P4-subset"], names=["Basic", "Single", "Sparse", "MostAdjacentSubgraphs SortedGreedy Exponential"], labels=["ILP", "ILP Single", "ILP Sparse", "FPT"], min_number_of_solutions=0)
plot_solved_by_time_curve(df[df["dataset"] == "bio"], names=["MostAdjacentSubgraphs SortedGreedy Exponential", "MostAdjacentSubgraphs SortedGreedy PrunedDelta", "MostAdjacentSubgraphs SortedGreedy IncrementByMinCost", "MostAdjacentSubgraphs SortedGreedy IncrementByMultiplier"], labels=["Exponential", "PrunedDelta", "IncrementByMinCost", "IncrementByMultiplier"], min_number_of_solutions=10)
plot_solved_by_time_curve(df[df["dataset"] == "bio-C4P4-subset"], names=["MostAdjacentSubgraphs Greedy Exponential", "MostAdjacentSubgraphs LocalSearch Exponential", "MostAdjacentSubgraphs SortedGreedy Exponential", "MostAdjacentSubgraphs Trivial Exponential"], labels=["Greedy", "LocalSearch", "SortedGreedy", "No lower bound"], min_number_of_solutions=0)
plot_solved_by_time_curve(df[df["dataset"] == "bio-C4P4-subset"], names=["MostAdjacentSubgraphs SortedGreedy Exponential", "FirstFound SortedGreedy Exponential", "MostMarkedPairs SortedGreedy Exponential"], labels=["MostAdjacentSubgraphs", "FirstFound", "MostMarkedPairs"], min_number_of_solutions=0)
# -
df["name"].unique()
# +
ilp_paths = list((Path.cwd() / "../experiments/C4P4/").glob("ilp*/bio-C4P4-subset.solutions.df.gzip"))
fpt_paths = list((Path.cwd() / "../experiments/C4P4/").glob("fpt*/bio-C4P4-subset.solutions.df.gzip"))
ilp_df = pd.concat(map(pd.read_pickle, ilp_paths))
fpt_df = pd.concat(map(pd.read_pickle, fpt_paths))
ilp_df["name"] = "Basic"
ilp_df.loc[ilp_df["single_constraints"], "name"] = "Single"
ilp_df.loc[ilp_df["sparse_constraints"], "name"] = "Sparse"
fpt_df["name"] = fpt_df.apply(lambda row: f"{row['selector']} {row['lower_bound']} {row['search_strategy']}", axis=1)
headers = list(set(ilp_df.columns) & set(fpt_df.columns))
df = pd.concat([ilp_df[headers], fpt_df[headers]])
# +
fig, ax = plt.subplots(figsize=(8, 5))
ax.set_yscale("log")
ax.set_xscale("log")
ax.grid(True)
names = []
names += ["Basic", "Single", "Sparse", "MostAdjacentSubgraphs SortedGreedy Exponential"]
#names += ["MostAdjacentSubgraphs SortedGreedy Exponential", "MostAdjacentSubgraphs SortedGreedy PrunedDelta", "MostAdjacentSubgraphs SortedGreedy IncrementByMinCost", "MostAdjacentSubgraphs SortedGreedy IncrementByMultiplier"]
#names += ["MostAdjacentSubgraphs Greedy Exponential", "MostAdjacentSubgraphs LocalSearch Exponential", "MostAdjacentSubgraphs SortedGreedy Exponential", "MostAdjacentSubgraphs Trivial Exponential"]
#names += ["MostAdjacentSubgraphs SortedGreedy Exponential", "FirstFound SortedGreedy Exponential", "MostMarkedPairs SortedGreedy Exponential"]
for name in names:
g = df[(df["name"] == name) & (df["dataset"] == "bio")]
n = g["instance"].str.split("-").str[-1].str[:-6].astype(int)
l = g["solutions"].apply(lambda x: len(x[0]["edits"]) if len(x) > 0 else -1)
c = g["solution_cost"]
t = g["total_time"].copy() / 10**9
t[~g["solved"]] = 10**(3 + np.random.uniform(-0.25, 0.25, size=(~g["solved"]).sum()))
ax.scatter(n, t, label=name, s=5)
ax.set_ylim((10**-5, 10**3.5))
ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
plt.show()
# +
a = pd.DataFrame(columns=list(fpt_df["lower_bound"].unique()) + ["k"], index=fpt_df["instance"].unique())
for (lb, instance), g in fpt_df.groupby(["lower_bound", "instance"]):
a.loc[instance, lb] = g["k"].str[0].max()
for instance, g in fpt_df.groupby(["instance"]):
a.loc[instance, "k"] = g["solution_cost"].max()
a["k_max"] = a.max(axis=1)
a["n"] = a.index.str.split("-").str[-1].str[:-6].astype(int)
# +
fig, ax = plt.subplots()
for lb in ["Greedy", "LocalSearch", "SortedGreedy"]:
ax.scatter(a.loc[a["k"] >= 0, "n"], a.loc[a["k"] >= 0, lb], label=lb)
ax.legend()
plt.show()
# -
b = fpt_df[(fpt_df["selector"] == "MostAdjacentSubgraphs") & (fpt_df["search_strategy"] == "Exponential")].groupby(["lower_bound", "instance"]).first()
# +
k_1 = b.xs("LocalSearch", level="lower_bound").k.str[0]
k_2 = b.xs("SortedGreedy", level="lower_bound").k.str[0]
t_1 = b.xs("LocalSearch", level="lower_bound").time.str[0]
t_2 = b.xs("SortedGreedy", level="lower_bound").time.str[0]
fig, ax = plt.subplots()
#ax.set_yscale("log")
ax.scatter(t_1 / t_2, k_1 / k_2, s=5)
ax.set_ylim((10**0, 10**0.1))
ax.set_xlim((0, 10))
plt.show()
# Source notebook: notebooks/solved-by-time-curve.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: drlnd
# language: python
# name: drlnd
# ---
# # Collaboration and Competition
#
# ---
#
# In this notebook, you will learn how to use the Unity ML-Agents environment for the third project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program. It is divided into five sections: the first launches the environment; the second examines the action and state spaces; the third runs a random agent; the fourth trains a DDPG agent; and the fifth evaluates the trained DDPG agent. Note that before executing section 3, 4, or 5, you must restart the current kernel and execute sections 1 and 2 followed by the desired section.
#
# Also, both training and evaluation can be executed from the terminal by running the train.py and eval.py scripts:
# ```
# % python train.py
# ```
# If you'd like to see how train_ddpg and eval_ddpg are implemented, refer to those scripts.
#
# ### 1. Start the Environment
#
# We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
from unityagents import UnityEnvironment
import numpy as np
# Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
#
# - **Mac**: `"path/to/Tennis.app"`
# - **Windows** (x86): `"path/to/Tennis_Windows_x86/Tennis.exe"`
# - **Windows** (x86_64): `"path/to/Tennis_Windows_x86_64/Tennis.exe"`
# - **Linux** (x86): `"path/to/Tennis_Linux/Tennis.x86"`
# - **Linux** (x86_64): `"path/to/Tennis_Linux/Tennis.x86_64"`
# - **Linux** (x86, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86"`
# - **Linux** (x86_64, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86_64"`
#
# For instance, if you are using a Mac, then you downloaded `Tennis.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
# ```
# env = UnityEnvironment(file_name="Tennis.app")
# ```
env = UnityEnvironment(file_name="Tennis_Linux/Tennis.x86_64")
# Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# ### 2. Examine the State and Action Spaces
#
# In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
#
# The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.
#
# Run the code cell below to print some information about the environment.
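# A toy check of the reward scheme above (hypothetical per-step rewards, not environment output): each agent accumulates its own rewards, and an episode is summarized by the max over agents.

```python
import numpy as np

# Hypothetical per-step rewards for [agent 0, agent 1] over a short episode:
rewards = np.array([
    [0.1, -0.01],   # agent 0 hits the ball over the net; agent 1 lets it drop
    [0.1, 0.0],
    [-0.01, 0.1],
])
episode_scores = rewards.sum(axis=0)   # per-agent totals
print(np.round(episode_scores, 2))     # [0.19 0.09]
print(round(episode_scores.max(), 2))  # 0.19
```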
# +
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
# -
# ### 3. Take Random Actions in the Environment
#
# In the next code cell, you will learn how to use the Python API to control the agents and receive feedback from the environment.
#
# Once this cell is executed, you will watch the agents' performance as they select actions at random at each time step. A window should pop up that allows you to observe the agents.
#
# Of course, as part of the project, you'll have to change the code so that the agents are able to use their experiences to gradually choose better actions when interacting with the environment!
for i in range(1, 6): # play game for 5 episodes
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
env_info = env.step(actions)[brain_name] # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Score (max over agents) from episode {}: {}'.format(i, np.max(scores)))
# When finished, you can close the environment.
env.close()
# ### 4. Train an agent in the Environment!
#
#
# Now it's your turn to train your own agent to solve the environment! When training, set the desired number of episodes with `n_episodes=1000` and the number of steps per episode with `max_t=2000`. To set a specific agent configuration, set each parameter as:
# ```python
# config = Config(num_agents)
# config.batch_size = 128
# ```
# +
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
# %matplotlib inline
from itertools import count
from ddpg_agent import Agent, Config
from train import train_ddpg
# Configure agent
config = Config(num_agents)
config.batch_size = 256
config.buffer_size = 100000
config.tau = 1e-3
config.actor_hidden_drop = 0.1
config.actor_hidden_units = (128, 128)
config.lr_actor = 5e-4
config.critic_hidden_drop = 0.1
config.critic_hidden_units = (128-action_size, 128)
config.lr_critic = 5e-4
# Init the learner agent
agent = Agent(state_size, action_size, config=config, random_seed=1234)
print('Actor model:',agent.actor_local)
print('Critic model:',agent.critic_local)
# Train the agent
scores, scores_avg_hist = train_ddpg(agent, env, brain_name=brain_name, num_agents=num_agents, n_episodes=2000, early_stop=True)
# Show results
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores, 'b-')
plt.plot(np.arange(1, len(scores)+1), scores_avg_hist, 'r-')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# Finish environment
env.close()
# -
# ### 5. Test an agent in the Environment!
# Now it's your turn to test your own agent in the environment! When testing, set the desired number of episodes as `n_episodes=100`. To set a specific agent configuration, set each parameter as:
#
# ```python
# config = Config(num_agents)
# config.batch_size = 128
# ```
# +
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
# %matplotlib inline
from itertools import count
from ddpg_agent import Agent, Config
from eval import eval_ddpg
# Configure agent
config = Config(num_agents)
config.batch_size = 256
config.buffer_size = 100000
config.tau = 1e-3
config.actor_hidden_drop = 0.1
config.actor_hidden_units = (128, 128)
config.critic_hidden_drop = 0.1
config.critic_hidden_units = (128-action_size, 128)
# Init the learner agent
agent = Agent(state_size, action_size, config=config, random_seed=1234)
agent.actor_local.load_state_dict(torch.load('checkpoint_actor_01168_050.pth'))
agent.critic_local.load_state_dict(torch.load('checkpoint_critic_01168_050.pth'))
# Test the agent
scores, scores_avg_hist = eval_ddpg(agent, env, brain_name=brain_name, num_agents=num_agents, n_episodes=10)
# Show results
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores, 'b-')
plt.plot(np.arange(1, len(scores)+1), scores_avg_hist, 'r-')
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# Finish environment
env.close()
# -
# ### 6. Questions
#
# Questions or suggestions? Please let me know via the issue tracker in my GitHub repo.
# Source notebook: Tennis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/ATOMScience-org/AMPL/blob/master/atomsci/ddm/examples/tutorials/03_Explore_Data_DTC.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="0V2ybLgAH-0V"
# # Exploring HTR3A protein target activity data from Drug Target Commons
#
#
# + [markdown] id="ezVyoyvuitEa"
# # Scope of the tutorial
# * Input data from DTC dataset for HTR3A protein target
# * Retrieves SMILES string from PubChem (time consuming step; needs internet connection)
# * AMPL will be used to accomplish the following steps:
# * Standardize SMILES string
# * Clean the data (look for duplicates, average the assay data, cluster the compounds etc.)
# * Carry out some Exploratory Data Analysis (Chemical space exploration; heat map, UMAP etc.)
# * Save the final dataset for modeling
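# The "clean the data" step above can be sketched in plain pandas (hypothetical column names; AMPL's curate_data functions add outlier and unit handling on top of this):

```python
import pandas as pd

# Duplicate assay measurements for the same compound get averaged:
raw = pd.DataFrame({
    "smiles": ["CCO", "CCO", "c1ccccc1"],
    "pIC50": [6.1, 6.3, 4.8],
})
curated = raw.groupby("smiles", as_index=False)["pIC50"].mean()
value = curated.loc[curated["smiles"] == "CCO", "pIC50"].iloc[0]
print(round(value, 2))  # 6.2
```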
# + [markdown] id="EF6njYp-iyw9"
# # Time on COLAB-Pro ( ~ 6 minutes)
# + [markdown] id="Xns-qBdRi3vZ"
# # Protein target (HTR3A) information
# + [markdown] id="Dp-RE9a8i_Xw"
# The Target specific data was downloaded from https://drugtargetcommons.fimm.fi/
#
# Please refer to the Drug Target Commons publication (https://pubmed.ncbi.nlm.nih.gov/29276046/) for details about the database
# + [markdown] id="mVJtFMh4jC5b"
# Here are some details about HTR3A gene (taken from RefSeq NCBI)
#
# * The protein belongs to the Cys-loop ligand-gated ion channel superfamily
# * HTR3A is a receptor for serotonin, a biogenic hormone that functions as a neurotransmitter
# * HTR3A (also the name of the gene) encodes subunit A of the type 3 serotonin receptor
# * A heteromeric combination of subunit A and subunit B (HTR3B) is needed for full function.
# * Several alternatively spliced transcript variants of this gene exist.
# + [markdown] id="zxDMhFSoIL15"
# Diseases associated with HTR3A include Irritable Bowel Syndrome and Motion Sickness.
# + [markdown] id="jT3MKS4AjLJw"
# ## Additional information about HTR3A gene:
#
# **Gene location:** Chromosome 11
# **Exon count:** 10
#
# mRNA and protein information for its three transcripts:
#
# * NM_000869.6 → NP_000860.3
# * NM_001161772.3 → NP_001155244.1
# * NM_213621.4 → NP_998786.3
# + [markdown] id="od_Bn9W5gCh9"
# ## Before you begin, make sure you close all other COLAB notebooks.
# + [markdown] id="2DWqm4Cxm5h_"
# # Change Runtime settings
# If you have access to COLAB-Pro (commercial/not-free), please change your runtime settings to use GPU and high-memory,
#
# ```Runtime --> Change Runtime Type --> GPU with high-RAM```
#
# If you are not a paid COLAB-Pro customer, you can still choose GPU, with standard-RAM.
# + colab={"base_uri": "https://localhost:8080/"} id="rlYz7j65MDcb" outputId="86e286fc-4235-4676-ca10-5884c4fbd25b"
# !date # starting time
# + [markdown] id="5qGO2T0cIvIm"
# ## Install AMPL
# + id="2zCWtqSWPHzI"
# ! pip install rdkit-pypi
# ! pip install --pre deepchem
import deepchem
# print(deepchem.__version__)
# ! pip install umap
# ! pip install llvmlite==0.34.0 --ignore-installed
# ! pip install umap-learn
# ! pip install molvs
# ! pip install bravado
# + id="NeoDaO7llswd"
import deepchem as dc
# get the Install AMPL_GPU_test.sh
# !wget 'https://raw.githubusercontent.com/ATOMScience-org/AMPL/master/atomsci/ddm/examples/tutorials/config/install_AMPL_GPU_test.sh'
# run the script to install AMPL
# ! chmod u+x install_AMPL_GPU_test.sh
# ! ./install_AMPL_GPU_test.sh
# + [markdown] id="4qtjXXtuWZLQ"
# ## Exploring HTR3A target activity data from ExcapeDB
# + id="F-c9OaSoJHmG"
# We temporarily disable warnings for demonstration.
# FutureWarnings and DeprecationWarnings are present from some of the AMPL
# dependency modules.
import warnings
warnings.filterwarnings('ignore')
import json
# import numpy as np
# import pandas as pd
import os
import requests
# + id="6r_-HG0aHwsE"
#
# Import AMPL libraries
#
import atomsci.ddm.utils.data_curation_functions as dcf
import atomsci.ddm.utils.curate_data as curate_data
import atomsci.ddm.pipeline.diversity_plots as dp
import atomsci.ddm.pipeline.chem_diversity as cd
# Additional python libraries
import pandas as pd
import numpy as np
import getpass,os
# + [markdown] id="1X7qQCYVHwsG"
# ## Select a target to work with
# ### (e.g. PDE2A, KCNH2, SCNA5)
# + id="tkweVTMSHwsG"
target_name='HTR3A'
# + [markdown] id="yCA0PykcHwsH"
# # Define data locations
# + id="1SVhXwHgJiZN"
ofile=target_name+'_dtc.csv'
# + [markdown] id="f1RXnxhVVcxt"
# ## Note the file `DTC_HTR3A.csv` was downloaded from the DTC website.
# + id="bN1KN4sGJjjM"
import io
url = 'https://raw.githubusercontent.com/ATOMScience-org/AMPL/master/atomsci/ddm/examples/tutorials/datasets/DTC_HTR3A.csv'
download = requests.get(url).content
# + id="XBDmluXMKD38"
# Reading the downloaded content and turning it into a pandas dataframe
orig_df = pd.read_csv(io.StringIO(download.decode('utf-8')), sep=',', header=0 )
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Cat88HV5c7rH" outputId="48c54e6c-2692-4e0e-a7bb-e55629cd932b"
orig_df
# + colab={"base_uri": "https://localhost:8080/"} id="Enh3wAUAUd4u" outputId="4118bdf0-1ced-4dae-8090-00b40fafbf3b"
orig_df.drop(columns=['Unnamed: 0'], inplace=True)
orig_df.columns
# + [markdown] id="-FOiuGwYHwsI"
# ### Start with a local file containing the target data
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {}, "report_default": {"hidden": true}}}} id="1s8tSOedHwsI"
ofile=target_name+'_dtc.csv'
# + [markdown] id="xjFANFgZHwsJ"
# ### Explore the dataframe and display first few lines
# + colab={"base_uri": "https://localhost:8080/", "height": 294} id="F5r0Gs2xHwsJ" outputId="aea93442-bc87-4b61-9a12-97c970013d79"
#show number of rows in data frame and number of columns
print(orig_df.shape)
# show column names
display(orig_df.columns)
# + colab={"base_uri": "https://localhost:8080/", "height": 642} id="Ve4d5QK0HwsK" outputId="66a8de2a-b801-45e0-f80f-98ba32dc0d00"
orig_df.head(5)
# + [markdown] id="XefnLXbt-yF9"
# ## Let us use AMPL to prefilter the data
# + colab={"base_uri": "https://localhost:8080/"} id="qmTduoTjCZ7C" outputId="a7481ec2-efd4-4a0c-a94f-4bc73ade9a6f"
print('Before replace: ', orig_df.columns)
# remove special character
orig_df.columns = orig_df.columns.str.replace(' ', '_')
print('After replace: ', orig_df.columns)
# + colab={"base_uri": "https://localhost:8080/"} id="hu3eKU7sDDqY" outputId="0c4e4c47-cdb5-4c06-9572-18973c754cdc"
# convert column names to lowercase
orig_df.columns= orig_df.columns.str.lower()
print('After replacing column names with lowercase: ', orig_df.columns)
# + colab={"base_uri": "https://localhost:8080/"} id="gjXySV81EWaA" outputId="a3a53ec9-22c5-40c8-c247-f26614eea1aa"
# checking after rename
orig_df.columns
# + colab={"base_uri": "https://localhost:8080/"} id="dbgwzh2YGCGB" outputId="4c08fa18-e654-4cdf-914e-66c2f8ad0d80"
orig_df.shape
# + [markdown] id="5c5b8JqbUsfg"
# ## The following renames the columns to make them suitable for the next function call
# + id="ivfltdp0SnPj"
orig_df = orig_df.rename(columns={'end_point_standard_type': 'standard_type',
'end_point_standard_relation': 'standard_relation',
'end_point_standard_value': 'standard_value',
'end_point_standard_units': 'standard_units',
'endpoint_mode_of_action': 'mode_of_action',
'wild_type_or_mutant': 'wildtype_or_mutant'})
# + [markdown] id="kDIWlEXrPfUs"
# ## dcf.filter_dtc_data performs the following operation
#
# ```
# dset_df = orig_df[orig_df.gene_names.isin(geneNames) &
# ~(orig_df.standard_inchi_key.isna()) &
# (orig_df.standard_type == 'IC50') &
# (orig_df.standard_units == 'NM') &
# ~orig_df.standard_value.isna() &
# ~orig_df.compound_id.isna() &
# (orig_df.wildtype_or_mutant != 'mutated') ]
# ```
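# As a minimal sketch (not AMPL's actual implementation), the same boolean-mask filtering can be reproduced on a toy dataframe with hypothetical values:

```python
import pandas as pd

# Toy dataframe standing in for the DTC download (columns follow the renamed schema above)
toy = pd.DataFrame({
    'gene_names': ['HTR3A', 'HTR3A', 'KCNH2'],
    'standard_inchi_key': ['AAA', None, 'BBB'],
    'standard_type': ['IC50', 'IC50', 'Ki'],
    'standard_units': ['NM', 'NM', 'NM'],
    'standard_value': [12.0, 5.0, 7.0],
    'compound_id': ['c1', 'c2', 'c3'],
    'wildtype_or_mutant': ['wildtype', 'wildtype', 'mutated'],
})

# Combine the same conditions as above with element-wise boolean operators
mask = (toy.gene_names.isin(['HTR3A'])
        & ~toy.standard_inchi_key.isna()
        & (toy.standard_type == 'IC50')
        & (toy.standard_units == 'NM')
        & ~toy.standard_value.isna()
        & ~toy.compound_id.isna()
        & (toy.wildtype_or_mutant != 'mutated'))
dset = toy[mask]  # only the first row passes every filter
```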
# + id="BK9TWNsXFtIn"
geneNames = [target_name]
nm_df = dcf.filter_dtc_data(orig_df, geneNames)
# + colab={"base_uri": "https://localhost:8080/"} id="YjQ45UreN5xu" outputId="1712b835-5a3b-4cd8-ac39-a7fe1913fcb2"
orig_df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="ohU14UiGFX4-" outputId="abec14b4-4b88-442a-915e-c2e661c8321b"
nm_df.shape
# + [markdown] id="Kp9vy6KSGsxx"
# ## Explore a few columns to get an idea of the dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="BuPzFhEmKD-G" outputId="07fae8b3-c1ca-49f3-86c4-d272fb800863"
# Below, we display the unique elements of the assay_type column in our dataframe
# unique() is a pandas Series method that returns the unique values in a column
display(orig_df['assay_type'].unique())
# We then use the same function on a few more columns: assay_cell_line, assay_description, pubmed_id
display(orig_df['assay_cell_line'].unique())
display(orig_df['assay_description'].unique())
display(orig_df['pubmed_id'].unique())
# + colab={"base_uri": "https://localhost:8080/", "height": 458} id="Y5krECxrHARe" outputId="2d8fae08-0970-4ecd-d51f-48ac0f6ebada"
orig_df.head(3)
# + [markdown] id="iJio-E_ALCZ6"
# ## Convert InChi key to SMILES
# + id="Qy8d87_-K7aa"
ofile = target_name+'_dtc_smiles_raw.csv'
# + colab={"base_uri": "https://localhost:8080/"} id="Xxziu6wXLPml" outputId="4fcc526b-c7cf-4997-9ee3-8bfd7731d8c5"
print(ofile)
# + [markdown] id="2SRf26EmT_iI"
# ## Note the file HTR3A_dtc_smiles_raw.csv will be created
# + colab={"base_uri": "https://localhost:8080/"} id="H5z1nEhILX4t" outputId="a9f2ba7a-8d8a-467f-aa34-eca7cfbb8345"
# import few libraries from AMPL
import atomsci.ddm.utils.pubchem_utils as pu
from os import path
myList = orig_df['standard_inchi_key'].unique().tolist()
# Retrieve SMILES strings for compounds through the PubChem web interface.
# Make sure ofile exists; if it does, print 'Exists!', otherwise print
# "SMILES data not found" and download the data from PubChem
if not path.exists(ofile):
    print("SMILES data not found, downloading from PubChem ", ofile)
    save_smiles_df, fail_lst, discard_lst = pu.download_smiles(myList)
    save_smiles_df.to_csv(ofile)
else:
    print(ofile, 'Exists!')
# + [markdown] id="-EAgFT4W4AkM"
# ## Note the `fail_lst` and `discard_lst` will contain the failed and discarded list
#
# Check whether the file, HTR3A_dtc_smiles_raw.csv, exists. Use the RHS menu option or run `ls HTR3A_dtc_smiles_raw.csv`
# + colab={"base_uri": "https://localhost:8080/"} id="p_RA6GQrI-zZ" outputId="3c6f6449-3cc9-454b-ba9a-7385fefe233b"
print("fail_lst: ", len(fail_lst))
print("discard_lst: ", len(discard_lst))
print(len(myList))
print(save_smiles_df.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="LUTnA9zN7Ti1" outputId="14d6baaf-4383-451b-b5c9-745db23c369e"
save_smiles_df.head(5)
# + [markdown] id="IbvY7TWY6tX1"
# ## In this step, we reassemble the data by attaching the IC50 values to create a new dataframe
# + colab={"base_uri": "https://localhost:8080/"} id="Qd2IIlwAMKr-" outputId="b6eb31da-8bc1-43c4-a79d-0f4c284360ac"
# HTR3A_dtc_smiles.csv is the ofile
ofile = target_name+'_dtc_smiles.csv'
if not path.exists(ofile):
    import atomsci.ddm.utils.data_curation_functions as dcf
    import importlib as impl
    print(len(fail_lst))
    print(save_smiles_df.shape)
    # Above, we print the fail_lst created in the earlier code block.
    # We also print the dimensions of our save_smiles_df pandas
    # dataframe using the .shape attribute
    # Notice the ifile is now HTR3A_dtc_smiles_raw.csv
    ifile=target_name+'_dtc_smiles_raw.csv'
    # Read in our file (ifile) with pandas, assigning its contents to save_smiles_df
    save_smiles_df=pd.read_csv(ifile)
    ## Retrieve specific data
    ## Will include censored data in smiles
    ## Combine gene data with SMILES strings and call this our starting "raw" dataset.
    # Create a variable called targ_lst, which contains our target formatted as a list
    targ_lst=[target_name]
    #### WARNING: standard_value has to be converted explicitly to floating point!
    # astype(float) converts the standard_value column to floats; a floating
    # point value represents a real number, written with a decimal point
    # separating the integer and fractional parts
    nm_df['standard_value']=nm_df['standard_value'].astype(float)
    smiles_lst, shared_inchi_keys = dcf.get_smiles_dtc_data(nm_df, targ_lst, save_smiles_df)
    smiles_df=pd.concat(smiles_lst)
    # ofile=target_name+'_dtc_smiles.csv'
    smiles_df.to_csv(ofile,index=False)
else:
    print("Downloaded file previously saved", ofile)
    smiles_df = pd.read_csv(ofile)  # reload the saved file so the following cells can use it
# + [markdown] id="6Npvu8Wl9qFe"
# ## Change the ofile to ifile for reading
# + colab={"base_uri": "https://localhost:8080/"} id="NiPDw_h2b4Uv" outputId="1c550afa-a09f-4daf-d76f-ce41c811c264"
ifile=target_name+'_dtc_smiles.csv'
print(ifile)
# + colab={"base_uri": "https://localhost:8080/", "height": 677} id="tTukf532aAPs" outputId="00ac5b3f-08d6-4da9-ae6a-bc609b2b360f"
print(smiles_df.shape)
save_smiles_df = smiles_df
save_smiles_df.head(5)
# + [markdown] id="miN7240L-Bc4"
# ## Use AMPL for transforming IC50 values
# + colab={"base_uri": "https://localhost:8080/", "height": 439} id="inaUPauW-BFE" outputId="f85c9cd4-c575-4556-e806-7af906ec45f7"
# From our dataframe, we keep only rows whose PIC50 value is finite.
# The != comparison checks that two operands are not equal; np.inf is
# NumPy's representation of positive infinity, so the mask below drops
# any rows where PIC50 is infinite
data=save_smiles_df[save_smiles_df['PIC50'] != np.inf]
# Here we are defining our column, which will be PIC50
column = 'PIC50'
# num_bins sets the number of bins the data will be divided into for the
# histogram (20 bins here)
num_bins = 20
# Here we are setting our title for the graph as our target name
title = target_name
# Here we specify the measurement units; in our case nanomolar (nM)
units = 'NM'
# filepath is the prefix used for any saved output files (empty here)
filepath = ""
# This is the same variable we created earlier called data
data=save_smiles_df[save_smiles_df['PIC50'] != np.inf]
# curate_data.summarize_data computes descriptive statistics and plots a
# histogram of the response column, given: column, num_bins, title, units, filepath, and data
curate_data.summarize_data(column, num_bins, title, units, filepath, data)
# + [markdown] id="Zy41q3hXHwsL"
# ## Let us cluster the compounds to explore the chemical space
#
# Project compounds into two dimensions with UMAP and Tanimoto similarity:
#
# 1. Cluster compounds by Tanimoto similarity
# 2. Repeat step 1 with Maximum Common Substructure distance when the dataset size is below a threshold (default < 300)
#
# See documentation here:
# https://ampl.readthedocs.io/en/latest/pipeline.html?highlight=diversity_plots#pipeline.diversity_plots.diversity_plots
# + [markdown] id="2WDdVKbYHwsM"
# # Save output from clustering heatmap to image and upload to presentation
# + id="4QproMuzHwsM"
ifile = target_name+'_dtc_smiles.csv'
# + [markdown] id="yAzI5OzjHwsM"
# # Plot self similarity (Tanimoto) within dataset and show distribution of distances between compounds in dataset for nearest neighbor.
#
# ## Save distribution plot as an image.
#
# We will be calling dp.diversity_function from AMPL in the following code chunk. For AMPL function explanations,
# please consult AMPL documentation here, https://ampl.readthedocs.io/en/latest/pipeline.html?highlight=diversity_plots#pipeline.diversity_plots.diversity_plots
# + [markdown] id="tFC7Syc6gYBi"
# ## Here is a brief explanation of `dp` function:
#
# ### The AMPL function will calculate diversity profile for the data.
#
# ### Input Args:
#
# * **dset_key**: Name of the input dataset (here, a file path)
# * **datastore**: Whether to read the dataset from the datastore (False in our case, so dset_key is treated as a file path)
# * **id_col**: Ambit_InchiKey, a chemical identifier for the compound or drug molecules. Please check here for a detailed explanation of InChIKey, https://en.wikipedia.org/wiki/International_Chemical_Identifier#:~:text=%2B%2Fm0%2Fs1-,InChIKey,hashed%20counterpart%20of%20standard%20InChI. In this case, Excape uses InChIKeys generated with Ambit
# * **response_col**: Outcome column, in our case it is PIC50
#
# ## `dp.diversity_plots` function
#
# * Computes fingerprints
# * If the number of compounds is > 300, it computes fingerprints, uses them to build a Tanimoto distance matrix, plots the distances using a UMAP projection, and clusters the distances (complete-linkage) to create a heatmap
# * If the number of compounds is < 100, MCS (Maximum Common Substructure) is used for clustering in addition to the above step.
#
#
# ## Helpful links
#
# * Tanimoto
# * https://en.wikipedia.org/wiki/Jaccard_index
# * https://en.wikipedia.org/wiki/Chemical_similarity
# * UMAP
# * https://pair-code.github.io/understanding-umap/
# * MCS
# * https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2718661/
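# The Tanimoto-distance-plus-complete-linkage step described above can be sketched with NumPy/SciPy. Random bit vectors stand in for real Morgan fingerprints, and `scipy.cluster.hierarchy.linkage` stands in for AMPL's internal clustering, purely for illustration:

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
fps = rng.integers(0, 2, size=(8, 1024))  # 8 mock 1024-bit fingerprints

# Tanimoto distance: 1 - |A AND B| / |A OR B|
inter = fps @ fps.T                                 # pairwise counts of shared on-bits
counts = fps.sum(axis=1)
union = counts[:, None] + counts[None, :] - inter
dist = 1.0 - inter / union                          # symmetric, zero diagonal

# complete-linkage clustering of the condensed distance matrix,
# the same linkage method mentioned for the heatmap
Z = linkage(squareform(dist, checks=False), method='complete')
```

The linkage matrix `Z` has one row per merge step (n-1 rows for n compounds) and can be fed to `scipy.cluster.hierarchy.dendrogram`.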
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="geRQnqPngVMo" outputId="e38a3094-810c-48b0-8cc1-f88d31ee2c55"
dp.diversity_plots(dset_key = ifile,
datastore = False,
response_col = 'PIC50',
max_for_mcs = 100)
# + colab={"base_uri": "https://localhost:8080/"} id="CQLUJL5_Aq2g" outputId="58476200-43cb-4301-86d1-fabb145cf742"
data.shape
# + [markdown] id="JWr5VyXpqpM1"
# ## Self similarity (Tanimoto)
#
# Calculate self-similarity (using Tanimoto) for the dataset and plot the distances.
# + id="tG_217EPHwsM"
feat_type = 'ECFP'
dist_metric = 'tanimoto'
smiles_lst1 = data['rdkit_smiles'].tolist()
calc_type = 'nearest'
dist_sample = cd.calc_dist_smiles(feat_type, dist_metric, smiles_lst1, None, calc_type)
# + colab={"base_uri": "https://localhost:8080/"} id="hCD85wGzx7gW" outputId="a402c01a-621e-41db-ec1b-d59e61b806e1"
print(len(dist_sample))
print(len(smiles_lst1))
# + [markdown] id="ecuqss6kgl7I"
# ## What does the **calc_dist_smiles** function return?
#
# * Input: a list of SMILES strings
# * Data featurization: ECFP (fingerprint)
# * Distance metric: Tanimoto
# * How the distance matrix is processed: 'nearest' keeps only each compound's nearest-neighbor distance
# * Returns the distances as a flat vector
#
# Here is the function summary:
#
# * RDKit is used to transform SMILES strings to mols
# * mols to fingerprints (Morgan, 1024 bits)
# * calls calc_summary with the following options:
#   * fprints1 is the fingerprint array
#   * fprints2 is None
#   * dist_metrics returns a distance matrix
#     calc_summary(dist_metrics.tanimoto(fprints1, fprints2), calc_type='nearest', num_nearest=1, within_dset=True)
#
# * Finally, returns the distance of each compound to its closest neighbor
#
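# The 'nearest' computation can be sketched in plain NumPy. Mock random bit vectors replace real Morgan fingerprints here, and the actual AMPL code path differs; this only illustrates the idea of a per-compound nearest-neighbor Tanimoto distance:

```python
import numpy as np

rng = np.random.default_rng(42)
fps = rng.integers(0, 2, size=(10, 1024))  # mock 1024-bit Morgan-style fingerprints

inter = fps @ fps.T                        # pairwise counts of shared on-bits
counts = fps.sum(axis=1)
union = counts[:, None] + counts[None, :] - inter
dist = 1.0 - inter / union                 # Tanimoto distance matrix

np.fill_diagonal(dist, np.inf)             # exclude each compound's self-distance
nearest = dist.min(axis=1)                 # distance to the closest neighbor, per compound
```

Like `dist_sample` above, `nearest` has one entry per compound.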
# + [markdown] id="H8En9iSCrHnA"
#
# ## Explanation for the following code chunk
#
# * **scipy.stats.gaussian_kde** uses a kernel density estimator to approximate the probability density function (PDF)
# ---
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 534} id="vcXZQBLrHwsN" outputId="1758d890-6856-43cf-921c-6336e2cc10ef"
from scipy.stats import gaussian_kde
# import math library
import numpy as np
# for creating plots
import matplotlib.pyplot as plt
# current directory
odir='./'
# name for the task
task_name='within dataset'
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html
dist_pdf = gaussian_kde(dist_sample)
x_plt = np.linspace(min(dist_sample), max(dist_sample), 500)
y_plt = dist_pdf(x_plt)
fig, ax = plt.subplots(figsize=(8.0,8.0))
ax.plot(x_plt, y_plt, color='forestgreen')
ax.set_xlabel('%s distance' % dist_metric)
ax.set_ylabel('Density')
ax.set_title("%s dataset\nDistribution of %s distances between %s feature vectors" % (
task_name, dist_metric, feat_type))
fig.savefig(odir+'distance_to_background_mol.png')
# + colab={"base_uri": "https://localhost:8080/"} id="101z-_vAhdlp" outputId="bebf1253-f956-425b-cd9f-8c55608460b3"
# !date # ending time
# (source notebook: atomsci/ddm/examples/tutorials/03_Explore_Data_DTC.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tyoc213/fastai_xla_extensions/blob/explorations1/explore_nbs/ColaboratoryFilteringTPU.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="QUsycTYFurKJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="21f6f967-3716-4122-ee08-4b9701880037"
VERSION = "20200707" #"nightly" #"20200515" @param ["1.5" , "20200325", "nightly"]
# !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py > /dev/null
# !python pytorch-xla-env-setup.py --version $VERSION > /dev/null
#import torch_xla.core.xla_model as xm
# + id="vMz_IqScurKN" colab_type="code" colab={}
# !pip install https://github.com/butchland/fastai_xla_extensions/archive/master.zip > /dev/null
import fastai_xla_extensions.core
# + id="6CT5O3S6urKQ" colab_type="code" colab={}
# #!pip install fastai2 > /dev/null
# + id="7BD5PFQsurKT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ebd58330-7d7a-428f-fec2-c6b9d80d0ed6"
from fastai2.tabular.all import *
from fastai2.collab import *
dede = default_device()
dede
# + id="Kl3hLxn7urKV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="142480f2-3117-4dae-c373-8753383151d0"
path = untar_data(URLs.ML_100k)
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
usecols=(0,1,2), names=['user','movie','rating'])
movies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1',
usecols=(0,1), names=('movie','title'), header=None)
# + id="nWaT_5i5urKY" colab_type="code" colab={}
ratings = ratings.merge(movies)
# + id="oqv8sk-furKa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c92a918d-2259-4c1d-9099-9e6ab85fbf7e"
dls = CollabDataLoaders.from_df(ratings, item_name='title', bs=64, device=dede)
dls.device
# + id="whNlolgUurKd" colab_type="code" colab={}
learn = collab_learner(dls, n_factors=50, y_range=(0, 5.5))
# + id="MoEdZlxlurKg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="0d0e4fa4-e070-4a94-f427-f2104c45c265"
learn.fit_one_cycle(5, 5e-3, wd=0.1)
# + [markdown] id="wZk8AF6GurKi" colab_type="text"
# ### Interpretation
# + id="K5CF8Kz_urKj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="7f497651-4e65-4622-80d0-15776769e486"
g = ratings.groupby("title")['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
# + id="GFYYkh54urKl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="85e01a66-b2f8-47fa-b8b2-907ce8c03e88"
movie_bias = learn.model.bias(top_movies, is_item=True)
movie_bias.shape
# + id="8R0vk3owurKn" colab_type="code" colab={}
mean_ratings = ratings.groupby("title")['rating'].mean()
movie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]
# + id="lWl_uf5qurKq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="2b584421-752d-4b44-e1d3-c3802bc266c8"
item0 = lambda o:o[0]
sorted(movie_ratings, key=item0)[:15]
# + id="2EoodzCgurKs" colab_type="code" colab={}
# (source notebook: archive_nbs/ColaboratoryFilteringTPU.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Goals of this Notebook
#
# In this notebook we will explore topics related to performing **Bayesian inference** (in particular **Bayesian parameter estimation**) using a **Gaussian Process** (GP) surrogate.
#
# The specific example is related to the phenomenology of heavy-ion collisions.
# It was realized early in the 21st century that the harmonic flows $v_n$, which can be measured by $n$-particle correlations, show sensitivity to the transport properties of a Quark Gluon Plasma (QGP) fluid.
# More specifically, the 'elliptic flow' $v_2$ shows sensitivity to the specific shear viscosity $\eta/s$.
# Therefore, a common practice in phenomenology is to compare hydrodynamic models with a parametrized specific shear viscosity against observables measured in experiments, such as the elliptic flow, to **infer** the specific shear viscosity of the physical QGP.
# ## Bayesian Inference
#
# A statistical methodology designed to handle arbitrarily complicated problems of inference is **Bayesian Inference**.
#
# Suppose we know some information $D$, for example a set of experimental measurements.
# Now, suppose we want to make an inference about a proposition $\theta$, for example some physical property of system which can not be directly measured.
#
# Bayes theorem can be written $$p(\theta|D) = \frac{ p(D|\theta) p(\theta) }{ p(D) },$$
# where $p(\theta|D)$ is our "posterior" for the proposition $\theta$ given that $D$ is realized, $p(D|\theta)$ is the "likelihood" of observing $D$ given that the proposition $\theta$ is realized, $p(\theta)$ is our "prior" belief about $\theta$ before observing $D$, and $p(D)$ is the "evidence".
#
# If we are only interested in our proposition $\theta$, then we can use the fact that the evidence is independent of $\theta$ and solve instead the proportionality
# $$p(\theta|D) \propto p(D|\theta) p(\theta) .$$
# This fact will be very useful for Bayesian parameter estimation, as the usual numerical methods to estimate the posterior will exploit it.
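# The proportionality can be demonstrated numerically by evaluating an unnormalized posterior on a grid and normalizing afterwards. The numbers below are toy assumptions (one hypothetical measurement, a Gaussian likelihood, a flat prior), not the hydrodynamic model used later in this notebook:

```python
import numpy as np

sigma = 0.02          # assumed measurement uncertainty (hypothetical)
D = 0.115             # a hypothetical observed value

theta_grid = np.linspace(0.0, 0.3, 1001)
prior = np.ones_like(theta_grid)                              # flat prior
likelihood = np.exp(-0.5 * ((D - theta_grid) / sigma) ** 2)   # p(D|theta)
posterior = likelihood * prior                                # p(theta|D) up to a constant
posterior /= posterior.sum()                                  # normalize on the grid

map_estimate = theta_grid[np.argmax(posterior)]               # posterior mode
```

Because only ratios of posterior values matter, the unknown evidence $p(D)$ never needs to be computed; MCMC samplers exploit the same fact.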
# # Inferring the shear viscosity given the Elliptic Flow
#
# We can use Bayes theorem to infer the specific shear viscosity of QGP given the observed data for the elliptic flow.
#
# In this case the specific shear viscosity will be represented by $\theta$, and the observed experimental data for the elliptic flow in a particular centrality bin will be represented by $D$.
# ## Defining our physics model
#
# We see that our likelihood function encodes the conditional probability of observing some value of the elliptic flow given a particular value of the specific shear viscosity.
# This requires us to choose some model which we believe is a good approximation of physics.
# In this case, we will assume that the dynamics of the collision can be modeled accurately by viscous hydrodynamics.
#
# For the purposes of this notebook, we will approximate the hydrodynamic output of the elliptic flow $v_2$ given the specific shear viscosity $\eta/s$ using a linear model.
#
# Ordinarily we would use a hydrodynamic simulation (perhaps MUSIC https://github.com/MUSIC-fluid/MUSIC) to model the physics. We will use a linear-model in this notebook because its not computationally demanding, allowing us to focus on concepts on Bayesian inference. However, whenever we discuss our model, we should have in mind a real physics model.
# ### Statistical Model Error
#
# Let's add to our linear physics model statistical (uncorrelated) error on top of every prediction for the elliptic flow.
# This will be useful for understanding how any statistical model errors influence our inference problem.
# For example, in a real hydrodynamics simulation with a finite number of final state particles, there will be a finite statistical error on our calculated elliptic flow.
# ### Expressing the model
# Let $y$ denote the output $v_2$, and $\theta$ the value of the specific shear viscosity $\eta/s$. We can write our physics model as
# $$y = m\,\theta + b + \epsilon,$$
# which has a slope $m$, intercept $b$, and statistical error $\epsilon$.
# ### Let's import some libraries
# +
import numpy as np #useful for math operations
import matplotlib.pyplot as plt #plotting
import seaborn as sns #pretty plots
sns.set()
#from sklearn.gaussian_process import GaussianProcessRegressor as GPR #for using Gaussian Processes
#from sklearn.gaussian_process import kernels #same
import GPy
from sklearn.preprocessing import StandardScaler #useful for scaling data
import emcee #for performing Markov Chain Monte Carlo
import corner #for plotting the posterior
# -
# ### This function will define our physics (hydrodynamic) model
# +
#Our linear model for hydrodynamic output in some centrality bin, for example 20-30%
#noise level controls statistical scatter in our physics model $\epsilon$
noise = 0.1 #amount of statistical scatter in our training calculations
np.random.seed(1)
def lin_hydro_model(eta_over_s, intercept = 0.12, slope = -0.25, noise = noise):
    """This function will play the role of a
    realistic event-by-event hydrodynamic model. Here it is a linear model
    with an additional random noise error."""
    y = intercept + slope * eta_over_s  # the mean model prediction
    dy = noise * y * np.random.normal() # the sampled model statistical error
    y += dy                             # add the model stat. error to the model mean
    y = np.max([0., y])                 # our measurement definition is non-negative
    return y, dy

lin_hydro_model = np.vectorize(lin_hydro_model)
# -
# ## Using a fast Model Emulator for slow physics models
#
# A real viscous hydrodynamic physics model could take hours to run a single event, and we may need thousands of events to construct a centrality average. Therefore, for computationally demanding models we can employ a fast surrogate which can estimate the interpolation uncertainty.
# We use Gaussian processes for this purpose in this notebook. Gaussian processes are especially useful because they provide non-parametric interpolations (as opposed to a polynomial fit, for example).
# Like any interpolation, we need a sampling of points in our parameter space where we know the **physics model output**.
#
# So, we first run our physics simulation on a sampling of points that **fill our parameter space** and call this sample our **design points**.
# +
n_design_pts = 20 # this sets the number of design points where we will run our hydro model
eta_over_s_min = 0. # this defines a minimum value for our parameter (eta/s)
eta_over_s_max = 4. / (4. * np.pi) # this defines a maximum value for our parameter (eta/s)
#this chooses our sample to be a regular grid, which is an efficient sampling in one dimension
#it is reshaped into a 2D array so that we can readily use it with scikit-learn
model_X = np.linspace(eta_over_s_min, eta_over_s_max, n_design_pts).reshape(-1,1)
#these are the v_2 outputs of our hydro model, assuming that the model has finite statistical error
model_y, model_dy = lin_hydro_model(model_X)
#lets plot our physics models predictions
plt.errorbar(model_X.flatten(), model_y.flatten(), model_dy.flatten(), fmt='o', c='black')
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$v_2 [ 20-30\%]$')
plt.title('Hydro Model Design Predictions')
plt.tight_layout()
plt.show()
# -
# ### Exercises:
# 1. Does it look like a linear model produced these data? Should it?
# 2. Try playing with the amount of noise (error) in these data.
# ### Training our Gaussian Process (GP)
# We will use a Gaussian process (https://en.wikipedia.org/wiki/Gaussian_process) to interpolate between the design points.
#
# For an intuitive feeling for how they work, play with this widget http://www.tmpl.fi/gp/.
# For more details see http://www.gaussianprocess.org/gpml/chapters/RW.pdf.
#
# A Gaussian Process is defined with some choice of a **kernel function**.
# Please see this page for a brief explanation of a few popular kernels : https://www.cs.toronto.edu/~duvenaud/cookbook/.
#
# This is a very good visual exploration of Gaussian Processes as well as different kernel functions: https://distill.pub/2019/visual-exploration-gaussian-processes/.
#
#
# We give the GP library (GPy here; scikit-learn works similarly) the kernel function to use, and some guidance for the range of the hyperparameters. Then when we call the fit/optimize operation, the library automatically finds the values of the hyperparameters that maximize a likelihood function:
#
# $$\log p(y^*|y_{t}, \theta) \propto -\frac{1}{2}y_{t}^{T} \Sigma^{-1}_{y_t} y_{t} - \frac{1}{2} \log |\Sigma_{y_t}|,$$
#
# where $\Sigma_{y_t}$ is the covariance matrix resulting from applying the covariance function to the **training data**.
#
# Note: The first term rewards a better fit to the training data, while the second term of this likelihood function is a complexity penalty to avoid overfitting.
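# This expression can be evaluated directly in NumPy (up to an additive constant), using a Cholesky factorization for numerical stability. The covariance below is a toy assumption for illustration, not a trained kernel:

```python
import numpy as np

def gp_log_marginal(y, Sigma):
    """-1/2 y^T Sigma^{-1} y - 1/2 log|Sigma| (up to a constant)."""
    L = np.linalg.cholesky(Sigma)                         # Sigma = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # Sigma^{-1} y
    data_fit = -0.5 * y @ alpha                           # rewards fitting the training data
    complexity = -np.log(np.diag(L)).sum()                # equals -1/2 log|Sigma|
    return data_fit + complexity

# tiny hand-checkable case: Sigma = 2 I, y = (1, 1)
y = np.array([1.0, 1.0])
Sigma = 2.0 * np.eye(2)
val = gp_log_marginal(y, Sigma)   # -0.5 - 0.5*log(4)
```

Hyperparameter optimization (what `my_gp.optimize()` does below) amounts to maximizing this quantity over the kernel parameters that build `Sigma`.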
# ### Exercises:
# 1. Explain why the first term in this likelihood function rewards a GP with hyperparameters that fit the data well.
# 2. Explain why the second term penalizes a GP which is 'overfit'. What does 'overfit' mean?
# ### Now we will define and train a GP
#
# We will use a combination of a **Squared Exponential Kernel** a **White Noise Kernel** and a **Linear Kernel**.
# +
# GPy expects 2d arrays as inputs
model_X = model_X.reshape(-1,1)
#this is the 'size' of possible variation of our parameter, in this case eta/s
ptp = model_X.max() - model_X.min()
#This is our Squared Exponential Kernel
rbf_kern = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=ptp)
#This is a white noise kernel,
#necessary because our physics model has finite statistical accuracy
white_kern = GPy.kern.White(1, variance=noise)
lin_kern = GPy.kern.Linear(1) #does it make sense to include a linear kernel?
#The total kernel function is a sum here,
#but other combinations are possible (products, compositions, ...)
my_kernel = (rbf_kern + lin_kern + white_kern)
# -
# ### Exercises:
# 1. Why do we need a White Noise Kernel?
# 2. What does the hyperparameter which controls the 'length scale' in the Squared Exponential kernel control? How does it relate to under/over-fitting?
# As with many machine learning toolkits, out-of-the-box performance is often best when we first scale our outputs. The 'Standard Scaler' (https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) is convenient for this purpose.
#first scale our observables
model_y_copy = model_y.copy()
scaler = StandardScaler(copy=True).fit(model_y_copy)
scaled_model_y = scaler.transform(model_y, copy=True) # the scaled model outputs
# ### Training our GP on the hydro model calculations
#define our Gaussian process model, and fit it to the hydro model calculations
my_gp = GPy.models.GPRegression(model_X, scaled_model_y, my_kernel)
my_gp.optimize(messages=False)
# ### Defining an 'emulator'
# It's useful to define a function which handles both the scaling of our observables as well as the interpolation with the GP. We call this function the **emulator**.
def emu_predict(eta_over_s):
    """This function handles the scaling and GP interpolation together,
    returning our prediction in the ordinary observable space
    rather than the scaled observable space.
    This map is what we call our 'emulator'."""
    X = eta_over_s.reshape(-1, 1)
    scaled_y, scaled_dy2 = my_gp.predict(X, full_cov=False)
    scaled_dy2 = scaled_dy2[:, 0]   # shape of GPy covariance output
    scaled_dy = np.sqrt(scaled_dy2) # GPy returns the variance; we want the std. dev.
    y = scaler.inverse_transform(scaled_y).reshape(len(eta_over_s))
    dy = scaled_dy * scaler.scale_
    return y, dy
# ### Let's check how well our emulator fits the hydro physics model
# +
#make a regular grid to plot our Emulator predictions
n_plot_pts = 100
gp_X_plot = np.linspace(eta_over_s_min, eta_over_s_max, n_plot_pts)
#get the GP Emulator's predictions of both the mean and std. deviation
gp_y, gp_dy = emu_predict(gp_X_plot)
plt.plot(gp_X_plot, gp_y, color='red', label='GP median')
plt.fill_between(gp_X_plot, y1 = gp_y - 2.*gp_dy, y2 = gp_y + 2.*gp_dy,
interpolate=True, alpha=0.7, label=r'GP 2$\sigma$', color='orange')
plt.fill_between(gp_X_plot, y1 = gp_y - gp_dy, y2 = gp_y + gp_dy,
interpolate=True, alpha=0.7, label=r'GP 1$\sigma$', color='blue')
plt.errorbar(model_X.flatten(), model_y.flatten(), model_dy.flatten(), fmt='o', c='black', label='Hydro Model')
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$v_2 [ 20-30\%]$')
plt.title('GP Emulator and Model Training Points')
plt.legend()
plt.tight_layout()
plt.savefig('GP.png',dpi=400)
plt.show()
# -
# ### Exercises:
#
# 1. Examine how increasing or decreasing the number of design points affects the mean and uncertainty of the GP emulator prediction. Does it fit your expectation?
#
# 2. Examine how increasing or decreasing the model statistical error affects the mean and uncertainty of the GP emulator prediction. Does it fit your expectation?
#
# 3. Examine how changing the density of design points affects the mean and uncertainty of the GP emulator prediction (try a design with regions that are sparsely populated by design points). Does it fit your expectation?
#
# 4. What happens if you remove the white noise kernel `white_kern` from the GP?
# ### We expect our emulator to fit the points on which it was trained
# ...our definition of the GP likelihood function is designed to do just that!
#
# Ultimately, we want to know if our emulator can be trusted for points in parameter space in which it was **not trained**.
#
# So, let's perform some validations of our GP emulator, using a **novel testing set** of model calculations.
# +
#this defines a new set of points in parameter space where we will run our physics model
n_test_pts = 15
model_X_test = np.random.uniform(eta_over_s_min, eta_over_s_max, n_test_pts).reshape(-1,1)
#get the hydro model predictions for these new points
model_y_test, model_dy_test = lin_hydro_model(model_X_test)
#Now use the emulator trained only on the **original design points** to predict
#outputs on the **new testing set**
gp_y_test, gp_dy_test = emu_predict(model_X_test)
# -
#Plot the emulator prediction vs the hydro model prediction
plt.xlabel(r'Hydro model $v_2$ prediction')
plt.ylabel(r'Emulator $v_2$ prediction')
plt.plot(model_y_test, model_y_test, color='r', label='perfect', ls=':', lw=2)
plt.scatter(model_y_test, gp_y_test)
plt.legend()
plt.tight_layout(True)
plt.show()
# ### How does the performance look?
# There are stricter tests we can use to check if our surrogate prediction is biased.
#
# If $\hat{y}(\theta)$ is our emulator prediction for the parameters $\theta$, and $y(\theta)$ is our hydro model prediction, we can define the **residual** $\hat{y}(\theta) - y(\theta)$.
#
# Let's plot the residual as a function of $\eta/s$:
model_y_test = model_y_test.reshape(n_test_pts)
res = gp_y_test - model_y_test # calculate the residuals
plt.scatter(model_X_test, res)
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$\hat{v}_2 - v_2$')
plt.tight_layout(True)
plt.show()
# ### Does the prediction look biased?
#
# By inspection, it doesn't look like our emulator has significant bias for any value of $\eta/s$.
# There are even more illuminating tests one can make, for example Quantile-Quantile plots (https://en.wikipedia.org/wiki/Q–Q_plot). You can explore this on your own.
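# As a quick illustrative sketch (not part of the original analysis), a Q–Q-style check compares sorted standardized residuals against theoretical standard-normal quantiles; the synthetic residuals below stand in for the emulator's `(prediction - model) / uncertainty` values:

```python
import random
from statistics import NormalDist

# Synthetic standardized residuals so the sketch is self-contained;
# in the notebook these would come from the testing-set comparison.
random.seed(1)
z = sorted(random.gauss(0.0, 1.0) for _ in range(200))

# Theoretical standard-normal quantiles at levels (i + 0.5)/n.
n = len(z)
theoretical = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]

# For a well-calibrated emulator, the points (theoretical[i], z[i])
# should hug the line y = x.
max_dev = max(abs(t, ) if False else abs(t - e) for t, e in zip(theoretical, z))
print("largest quantile deviation:", round(max_dev, 2))
```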
# # Performing Bayesian Inference
# Now we have a fast and accurate surrogate that we can trust to compare to data anywhere in the parameter space.
#
# So we want to use our emulator to perform **Bayesian inference**.
#
# Recall that our **posterior** $p(\theta|D)$ of our parameters $\theta$ given the observed experimental data $D$ is proportional to the product of our **prior** belief about the parameters $p(\theta)$ and the **likelihood** $p(D|\theta)$ of observing those experimental data given that the true value of the parameters is $\theta$. This is Bayes' Theorem:
#
# $$p(\theta|D) \propto p(D|\theta)p(\theta).$$
#
# So, before using experimental data to update our belief about $\eta/s$, we need to define our prior belief about $\eta/s$.
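# Bayes' theorem above can be made concrete with a tiny grid calculation (a self-contained toy with hypothetical numbers, not the notebook's analysis): multiply a flat prior by a Gaussian likelihood at each grid point and normalize.

```python
import math

# Toy 1-D Bayes update on a grid of theta values (all numbers hypothetical).
thetas = [0.01 * i for i in range(1, 41)]     # grid over [0.01, 0.40]
prior = [1.0 for _ in thetas]                 # flat prior (unnormalized)

y_obs, dy_obs = 0.09, 0.009                   # hypothetical datum and error
def model(theta):                             # stand-in linear "hydro" model
    return 0.12 - 0.2 * theta

like = [math.exp(-0.5 * ((model(t) - y_obs) / dy_obs) ** 2) for t in thetas]
post = [p * l for p, l in zip(prior, like)]
norm = sum(post)
post = [p / norm for p in post]               # normalized posterior on the grid

theta_map = thetas[post.index(max(post))]     # grid-level MAP estimate
print("MAP estimate of theta:", round(theta_map, 2))
```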
# ### Choosing our Priors
# We will define two different priors, so we can examine the effect that our prior has on our posterior.
#
# One prior will be flat between two limits. The other prior will be informed by a belief, before seeing our $v_2$ data, that the shear viscosity is more likely to be a certain value within these limits.
# +
#define two different priors, one more informed than the other
theta_min = eta_over_s_min
theta_max = eta_over_s_max
#a flat prior
def log_flat_prior(theta):
"""Flat prior on value between limits"""
if (theta_min < theta) and (theta < theta_max):
return 0. # log(1)
else:
return -np.inf # log(0)
log_flat_prior = np.vectorize(log_flat_prior)
#a peaked prior
prior_peak = 2. / (4. * np.pi) # the value of theta we believe most likely, before seeing data
prior_width = 1. / (10. * np.pi) #our uncertainty about this value, before seeing the data
def log_peaked_prior(theta):
"""Peaked (Gaussian) prior on value between limits"""
if (theta_min < theta) and (theta < theta_max):
return -0.5 * (theta - prior_peak)**2. / prior_width**2.
else:
return -np.inf # log(0)
log_peaked_prior = np.vectorize(log_peaked_prior)
# -
#lets plot our two priors by sampling them, and plotting their histograms
n_samples_prior = int(1e6)
samples_flat_prior = np.random.uniform(theta_min, theta_max, n_samples_prior)
samples_peaked_prior = np.random.normal( prior_peak, prior_width, n_samples_prior)
plt.hist(samples_flat_prior, label='Flat prior', alpha=0.5, density=True, color='blue', bins=50)
plt.hist(samples_peaked_prior, label='Peaked prior', alpha=0.5, density=True, color='red', bins=50)
plt.xlim([theta_min, theta_max])
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$p(\eta/s)$')
plt.yticks([])
plt.legend()
plt.tight_layout(True)
plt.show()
# ### Defining our Likelihood
# To compare our model predictions with experiment, we need to define our likelihood function.
# The likelihood is a model for the conditional probability of observing the data given some true value of the parameters. Specifically, it models
# the conditional probability of observing some experimental value for $v_2$ given some value of $\eta/s$.
#
# A commonplace assumption is that the experimental errors follow a multivariate Gaussian distribution. This distribution also maximizes the information entropy subject to the constraints of being normalizable and having a known mean and variance.
# For details see (https://github.com/furnstahl/Physics-8805/blob/master/topics/maximum-entropy/MaxEnt.ipynb).
#
# A normal likelihood is probably a good assumption for our problem, given the nature of the measurement. In general, however, one should consider whether a normal likelihood is appropriate for the specific problem and measurements at hand.
def log_likelihood(theta, y_exp, dy_exp):
#use our GP emulator to approximate the hydro model
y_pred, dy_pred = emu_predict(theta) # emulation prediction and uncertainty
dy_tot = np.sqrt( dy_pred**2. + dy_exp**2. ) #total uncertainty, emulation and exp.
return -0.5 * np.sum( (y_pred - y_exp)**2 / dy_tot**2 )
# ### Exercises:
# 1. Why does the total uncertainty `dy_tot` in the likelihood function have this expression? What should the total uncertainty be when the experimental and interpolation uncertainties are independent?
# 2. How would this expression generalize to a vector of outputs, rather than a scalar output?
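# One possible answer sketch for exercise 2, assuming the observables follow a multivariate Gaussian whose total covariance is the sum of the (independent) emulator and experimental covariances — the function name and numbers below are illustrative, not part of the notebook:

```python
import numpy as np

def log_likelihood_vector(y_pred, y_exp, cov_pred, cov_exp):
    """Multivariate-Gaussian log-likelihood for a vector of observables.

    cov_pred: emulator covariance; cov_exp: experimental covariance.
    For independent uncertainties, the covariances simply add.
    """
    r = np.asarray(y_pred) - np.asarray(y_exp)        # residual vector
    cov = np.asarray(cov_pred) + np.asarray(cov_exp)  # total covariance
    # -(1/2) [ r^T cov^{-1} r + log det(2*pi*cov) ]
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (r @ np.linalg.solve(cov, r) + logdet)

# Tiny check with two independent observables (hypothetical numbers):
ll = log_likelihood_vector([0.09, 0.05], [0.10, 0.05],
                           np.diag([1e-4, 1e-4]), np.diag([1e-4, 1e-4]))
print(ll)
```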
# ### Defining our Posterior
# The posterior is the product of the prior and likelihood function.
#
# It follows that the logarithm of the posterior is the sum of the logs of the prior and likelihood.
# +
#posterior using flat prior
def log_posterior_flat_prior(theta, y_exp, dy_exp):
'''Log posterior for data X given parameter array theta'''
return log_flat_prior(theta) + log_likelihood(theta, y_exp, dy_exp)
#posterior using peaked prior
def log_posterior_peaked_prior(theta, y_exp, dy_exp):
'''Log posterior for data X given parameter array theta'''
return log_peaked_prior(theta) + log_likelihood(theta, y_exp, dy_exp)
# -
# ### Inferring the value of $\eta/s$ using experimental data
#
# Suppose that an experiment measures $v_2[20-30\%]$ and reports a mean value and total uncertainty...
exp_rel_uncertainty = 0.1 # experimental relative uncertainty
y_exp = 0.09 #v_2 experimental mean
dy_exp = y_exp * exp_rel_uncertainty #v_2 experimental uncertainty
# Although our current problem is much-simplified by the use of a linear model, in general we will have no analytic expression for our likelihood function. In this case, one needs a set of numerical tools which can approximate the likelihood function.
#
# In addition, for many problems of interest our parameter space can be high-dimensional, so these methods need to work well in high dimensions.
#
# We solve both of these problems by employing Markov Chain Monte Carlo sampling (http://www.columbia.edu/~mh2078/MachineLearningORFE/MCMC_Bayes.pdf).
# Specifically, we will use a python implementation called *emcee* (https://emcee.readthedocs.io/en/stable/), which will work well for our simple purposes. There are much more sophisticated algorithms for estimating the posterior today; see https://chi-feng.github.io/mcmc-demo/app.html#HamiltonianMC,banana for some animations.
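# Before handing the problem to emcee, the core idea of MCMC fits in a few lines. The sketch below is a plain random-walk Metropolis sampler (illustrative only — not emcee's affine-invariant ensemble algorithm), targeting a standard normal:

```python
import math
import random

def metropolis(log_target, n_steps, x0=0.0, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, step), accept with
    probability min(1, p(x')/p(x)); otherwise stay at x."""
    rng = random.Random(seed)
    x, logp = x0, log_target(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        logp_new = log_target(x_new)
        if rng.random() < math.exp(min(0.0, logp_new - logp)):
            x, logp = x_new, logp_new
        chain.append(x)
    return chain

# Target a standard normal; the chain's moments should approach (0, 1).
chain = metropolis(lambda x: -0.5 * x * x, n_steps=20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
print("chain mean ~ %.2f, variance ~ %.2f" % (mean, var))
```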
# +
#these are some general settings for the MCMC
ndim = 1 # number of parameters in the model
nwalkers = 20*ndim # number of MCMC walkers
nburn = 1000 # "burn-in" period to let chains stabilize
nsteps = 2000 # number of MCMC steps to take after the burn-in period finished
# we'll start at random locations within the prior volume
starting_guesses = theta_min + \
(theta_max - theta_min) * np.random.rand(nwalkers,ndim)
####Sampling the posterior with a flat prior####
print("Sampling Posterior with Flat Prior...")
print("MCMC sampling using emcee (affine-invariant ensamble sampler) with {0} walkers".format(nwalkers))
sampler_flat_prior = emcee.EnsembleSampler(nwalkers, ndim, log_posterior_flat_prior, args=[y_exp, dy_exp])
# "burn-in" period; save final positions and then reset
pos, prob, state = sampler_flat_prior.run_mcmc(starting_guesses, nburn)
sampler_flat_prior.reset()
# production sampling period
sampler_flat_prior.run_mcmc(pos, nsteps)
print("Mean acceptance fraction: {0:.3f} (in total {1} steps)"
.format(np.mean(sampler_flat_prior.acceptance_fraction),nwalkers*nsteps))
# discard burn-in points and flatten the walkers; the shape of samples is (nwalkers*nsteps, ndim)
samples_flat_prior = sampler_flat_prior.chain.reshape((-1, ndim))
####Sampling the posterior with a peaked prior####
print("Sampling Posterior with Peaked Prior...")
print("MCMC sampling using emcee (affine-invariant ensamble sampler) with {0} walkers".format(nwalkers))
sampler_peaked_prior = emcee.EnsembleSampler(nwalkers, ndim, log_posterior_peaked_prior, args=[y_exp, dy_exp])
# "burn-in" period; save final positions and then reset
pos, prob, state = sampler_peaked_prior.run_mcmc(starting_guesses, nburn)
sampler_peaked_prior.reset()
# production sampling period
sampler_peaked_prior.run_mcmc(pos, nsteps)
print("Mean acceptance fraction: {0:.3f} (in total {1} steps)"
.format(np.mean(sampler_peaked_prior.acceptance_fraction),nwalkers*nsteps))
# discard burn-in points and flatten the walkers; the shape of samples is (nwalkers*nsteps, ndim)
samples_peaked_prior = sampler_peaked_prior.chain.reshape((-1, ndim))
# -
# ### Plotting our Posteriors
#
# We can plot the samples of our posterior as histograms.
plt.hist(samples_flat_prior, bins=20, density=True, alpha=0.6,
edgecolor='blue', label='Posterior w/ Flat Prior')
plt.hist(samples_peaked_prior, bins=20, density=True, alpha=0.6,
edgecolor='red', label='Posterior w/ Peaked Prior')
plt.xlabel(r'$\eta/s$')
plt.ylabel(r'$p(\eta/s | v_2)$')
plt.yticks([])
plt.legend()
plt.tight_layout(True)
plt.show()
# ### Exercises
#
# 1. What do you notice is different about the two posteriors above: their medians, their uncertainties, etc.?
# 2. Try reducing the experimental error on our measurement. What do you expect to happen, and what happens? How does it depend on our emulation (interpolation) uncertainty?
# 3. Try playing with the parameters which defined the 'peaked' prior (e.g. reducing/increasing its width). What happens?
# 4. In the case where we use the flat prior, what is the relation between the posterior and the likelihood function?
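# The posterior samples drawn above are typically summarized by quantiles. A self-contained sketch on synthetic samples (the numbers below are hypothetical stand-ins for `samples_flat_prior`):

```python
import numpy as np

# Hypothetical posterior samples standing in for the emcee chains above.
rng = np.random.default_rng(0)
samples = rng.normal(0.15, 0.02, size=40000)

# Median and central 90% credible interval from the 5th/50th/95th percentiles.
lo, med, hi = np.percentile(samples, [5, 50, 95])
print("median = %.3f, 90%% credible interval = [%.3f, %.3f]" % (med, lo, hi))
```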
# ### There are many useful libraries for plotting posteriors...
# The corner library provides an easy-to-use implementation. This is especially helpful for doing
# parameter estimation in more than one dimension.
# make a corner plot with the posterior distribution
fig = corner.corner(samples_flat_prior, labels=[r"$\eta/s$"],
quantiles=[0.05, 0.5, 0.95], #what do these limits control?
show_titles=True, title_kwargs={"fontsize": 12})
plt.tight_layout(True)
plt.show()
# ### Sometimes it's more aesthetically pleasing to apply a KDE smoothing to our posterior (e.g. below).
sns.distplot(samples_flat_prior, hist=False, color="b", kde_kws={"shade": True}, label='Posterior w/ Flat Prior')
sns.distplot(samples_peaked_prior, hist=False, color="r", kde_kws={"shade": True}, label='Posterior w/ Peaked Prior')
# # We're finished!
#
# Now you can try applying some of these ideas to your own research.
|
Infer_Shear_Viscosity_from_Flow_GPy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python-deep36
# language: python
# name: deep36
# ---
# +
import pandas as pd
import numpy as np
from pathlib import Path
import matplotlib.pyplot as plt
import copy
import json
# %matplotlib inline
from deeppavlov.core.commands.utils import set_deeppavlov_root, expand_path
from deeppavlov.models.evolution.evolution_param_generator import ParamsEvolution
# -
# ## Set here path to your config file, key main model and population size
# +
CONFIG_FILE = "../../configs/evolution/evolve_intents_snips.json"
KEY_MAIN_MODEL = "main"
POPULATION_SIZE = 2
with open(CONFIG_FILE, "r", encoding='utf8') as f:
basic_params = json.load(f)
set_deeppavlov_root(basic_params)
print("Considered basic config:\n{}".format(json.dumps(basic_params, indent=2)))
# +
evolution = ParamsEvolution(population_size=POPULATION_SIZE,
key_main_model=KEY_MAIN_MODEL,
**basic_params)
validate_best = evolution.get_value_from_config(
evolution.basic_config, list(evolution.find_model_path(
evolution.basic_config, "validate_best"))[0] + ["validate_best"])
test_best = evolution.get_value_from_config(
evolution.basic_config, list(evolution.find_model_path(
evolution.basic_config, "test_best"))[0] + ["test_best"])
TITLE = str(Path(evolution.get_value_from_config(
evolution.basic_config, evolution.main_model_path + ["save_path"])).stem)
print("Title name for the considered evolution is `{}`.".format(TITLE))
data = pd.read_csv(str(expand_path(Path(evolution.get_value_from_config(
evolution.basic_config, evolution.main_model_path + ["save_path"])).joinpath(
"result_table.csv"))), sep='\t')
print("Number of populations: {}.".format(int(data.shape[0] / POPULATION_SIZE)))
data.fillna(0., inplace=True)
# +
MEASURES = evolution.get_value_from_config(
evolution.basic_config, list(evolution.find_model_path(
evolution.basic_config, "metrics"))[0] + ["metrics"])
for measure in MEASURES:
print("\nMeasure: {}".format(measure))
for data_type in ["valid", "test"]:
print("{}:".format(data_type))
argmin = data[measure + "_" + data_type].argmin()
argmax = data[measure + "_" + data_type].argmax()
print("min for\t{} model on\t{} population".format(argmin % POPULATION_SIZE,
argmin // POPULATION_SIZE))
print("max for\t{} model on\t{} population".format(argmax % POPULATION_SIZE,
argmax // POPULATION_SIZE))
# -
# ## If you want to plot measures depending on population colored by evolved measure value
# +
path_to_pics = expand_path(Path(evolution.get_value_from_config(
evolution.basic_config, evolution.main_model_path + ["save_path"])).joinpath("pics"))
path_to_pics.mkdir(exist_ok=True, parents=True)
if validate_best:
evolve_metric = MEASURES[0] + "_valid"
elif test_best:
evolve_metric = MEASURES[0] + "_test"
cmap = plt.get_cmap('rainbow')
colors = [cmap(i) for i in np.linspace(0, 1, data.shape[0])]
color_ids = np.argsort(data.loc[:, evolve_metric].values)
ylims = [(0., 1)] * len(MEASURES)
for metric, ylim in zip(MEASURES, ylims):
plt.figure(figsize=(12,6))
if validate_best:
for i in range(data.shape[0]):
plt.scatter(i // POPULATION_SIZE,
data.loc[:, metric + "_valid"].values[i],
c=colors[np.where(color_ids == i)[0][0]], alpha=0.5, marker='o')
plt.plot(np.arange(data.shape[0]//POPULATION_SIZE),
data.loc[:, metric + "_valid"].max() * np.ones(data.shape[0]//POPULATION_SIZE),
c=colors[-1])
plt.plot(np.arange(data.shape[0]//POPULATION_SIZE),
data.loc[:, metric + "_valid"].min() * np.ones(data.shape[0]//POPULATION_SIZE),
c=colors[0])
if test_best:
for i in range(data.shape[0]):
plt.scatter(i // POPULATION_SIZE,
data.loc[:, metric + "_test"].values[i],
c=colors[np.where(color_ids == i)[0][0]], alpha=0.5, marker='+', s=200)
plt.plot(np.arange(data.shape[0]//POPULATION_SIZE),
data.loc[:, metric + "_test"].max() * np.ones(data.shape[0]//POPULATION_SIZE), "--",
c=colors[-1])
plt.plot(np.arange(data.shape[0]//POPULATION_SIZE),
data.loc[:, metric + "_test"].min() * np.ones(data.shape[0]//POPULATION_SIZE), "--",
c=colors[0])
plt.ylabel(metric, fontsize=20)
plt.xlabel("population", fontsize=20)
plt.title(TITLE, fontsize=20)
plt.ylim(ylim[0], ylim[1])
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.savefig(path_to_pics.joinpath(metric + ".png"))
plt.show()
# -
# ## If you want to plot measures depending on population colored by `evolution_model_id`
#
# #### That means models with the same `id` are drawn in the same color.
# +
params_dictionaries = []
models_ids = []
for i in range(data.shape[0]):
data.loc[i, "params"] = data.loc[i, "params"].replace("False", "false")
data.loc[i, "params"] = data.loc[i, "params"].replace("True", "true")
json_acceptable_string = data.loc[i, "params"].replace("'", "\"")
d = json.loads(json_acceptable_string)
params_dictionaries.append(d)
models_ids.append(d["evolution_model_id"])
models_ids = np.array(models_ids)
models_ids
# +
cmap = plt.get_cmap('rainbow')
colors = [cmap(i) for i in np.linspace(0, 1, len(np.unique(models_ids)))]
ylims = [(0., 1)] * len(MEASURES)
for metric, ylim in zip(MEASURES, ylims):
plt.figure(figsize=(12,6))
if validate_best:
for i in range(data.shape[0]):
plt.scatter(i // POPULATION_SIZE,
data.loc[:, metric + "_valid"].values[i],
# c=colors[models_ids[i]], alpha=0.5, marker='o')
c=colors[np.where(models_ids[i] == np.unique(models_ids))[0][0]], alpha=0.5, marker='o')
plt.plot(np.arange(data.shape[0]//POPULATION_SIZE),
data.loc[:, metric + "_valid"].max() * np.ones(data.shape[0]//POPULATION_SIZE),
c=colors[-1])
plt.plot(np.arange(data.shape[0]//POPULATION_SIZE),
data.loc[:, metric + "_valid"].min() * np.ones(data.shape[0]//POPULATION_SIZE),
c=colors[0])
if test_best:
for i in range(data.shape[0]):
plt.scatter(i // POPULATION_SIZE,
data.loc[:, metric + "_test"].values[i],
c=colors[np.where(models_ids[i] == np.unique(models_ids))[0][0]], alpha=0.5, marker='+', s=200)
plt.plot(np.arange(data.shape[0]//POPULATION_SIZE),
data.loc[:, metric + "_test"].max() * np.ones(data.shape[0]//POPULATION_SIZE), "--",
c=colors[-1])
plt.plot(np.arange(data.shape[0]//POPULATION_SIZE),
data.loc[:, metric + "_test"].min() * np.ones(data.shape[0]//POPULATION_SIZE), "--",
c=colors[0])
plt.ylabel(metric, fontsize=20)
plt.xlabel("population", fontsize=20)
plt.title(TITLE, fontsize=20)
plt.ylim(ylim[0], ylim[1])
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.savefig(path_to_pics.joinpath(metric + "_colored_ids.png"))
plt.show()
# +
cmap = plt.get_cmap('rainbow')
colors = [cmap(i) for i in np.linspace(0, 1, data.shape[0])]
color_ids = np.argsort(data.loc[:, evolve_metric].values)
for param_path in evolution.paths_to_evolving_params:
param_name = param_path[-1]
print(param_path, param_name)
plt.figure(figsize=(12,12))
for i in range(data.shape[0]):
param_dict = evolution.get_value_from_config(evolution.basic_config, param_path)
if param_dict.get("evolve_range") and param_dict.get("discrete"):
plt.scatter(i // POPULATION_SIZE,
evolution.get_value_from_config(params_dictionaries[i], param_path),
# + (np.random.random() - 0.5) / 2,
c=colors[np.where(color_ids == i)[0][0]], alpha=0.5)
elif param_dict.get("evolve_range"):
plt.scatter(i // POPULATION_SIZE,
evolution.get_value_from_config(params_dictionaries[i], param_path),
c=colors[np.where(color_ids == i)[0][0]], alpha=0.5)
elif param_dict.get("evolve_choice"):
values = np.array(param_dict.get("values"))
plt.scatter(i // POPULATION_SIZE,
np.where(values == evolution.get_value_from_config(
params_dictionaries[i], param_path))[0][0],
c=colors[np.where(color_ids == i)[0][0]], alpha=0.5)
plt.yticks(np.arange(len(values)), values, fontsize=20)
elif param_dict.get("evolve_bool"):
values = np.array([False, True])
plt.scatter(i // POPULATION_SIZE,
np.where(values == evolution.get_value_from_config(
params_dictionaries[i], param_path))[0][0],
c=colors[np.where(color_ids == i)[0][0]], alpha=0.5)
plt.yticks(np.arange(len(values)), ["False", "True"], fontsize=20)
plt.ylabel(param_name, fontsize=20)
plt.xlabel("population", fontsize=20)
plt.title(TITLE, fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.savefig(path_to_pics.joinpath(param_name + ".png"))
plt.show()
# -
|
examples/evolution_results_analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import re
import plotly
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
# -
talks = pd.read_pickle('../datasets/talksDf.pkl')
talks.head()
topCountries = talks.groupby('country').sum().sort_values('views', ascending=False)[:15].index.values
topCountries
talks.groupby('country').sum().sort_values('views', ascending=False)[:15]
# +
dfToPlot = talks.groupby('country').sum().sort_values('views', ascending=False)[:15]
trace1 = go.Bar(
name='7',
x= dfToPlot.index.values,
y= dfToPlot.views,
)
layout = go.Layout(
title='Sum of views / country',
yaxis = dict(
# range = [0, 50000],
title='Views',
),
xaxis = dict(
title='Country',
# range=[0,0.08]
# tickvals = list(range(10)),
# ticktext = nombres_grupos
)
)
fig = go.Figure(data=[trace1], layout=layout)
iplot(fig)
# +
dfToPlot = talks.groupby('country').sum().sort_values('viewsWeek', ascending=False)[:15]
trace1 = go.Bar(
name='7',
x= dfToPlot.index.values,
y= dfToPlot.viewsWeek,
)
layout = go.Layout(
title='Views per week (top 15)',
yaxis = dict(
# range = [0, 50000],
title='Views / week',
),
xaxis = dict(
title='Country',
# range=[0,0.08]
# tickvals = list(range(10)),
# ticktext = nombres_grupos
)
)
fig = go.Figure(data=[trace1], layout=layout)
iplot(fig)
# +
def getPlotCountry(cName):
    df = talks[talks.country==cName].copy()  # copy to avoid SettingWithCopyWarning
    df.loc[:, 'year'] = df.publishDate.dt.year
df = df.groupby('year').sum()
return go.Bar(
name=cName,
x= df.index.values,
y= df.viewsWeek,
)
trace1 = getPlotCountry('Argentina')
trace2 = getPlotCountry('Spain')
trace3 = getPlotCountry('Mexico')
layout = go.Layout(
    title='Main Spanish-speaking countries.',
yaxis = dict(
# range = [0, 50000],
title='sum of (views / week)',
),
xaxis = dict(
title='Year',
# range=[0,0.08]
tickvals = list(range(2010, 2020)),
# ticktext = nombres_grupos
)
)
fig = go.Figure(data=[trace1,trace2,trace3], layout=layout)
iplot(fig)
# +
dfToPlotA = talks[talks.country=='United Kingdom'].copy()
dfToPlotA.loc[:, 'year'] = dfToPlotA.publishDate.dt.year
dfToPlot = dfToPlotA.groupby('year').sum()
dfToPlot_2 = talks[talks.country=='India'].copy()
dfToPlot_2.loc[:, 'year'] = dfToPlot_2.publishDate.dt.year
dfToPlot_2 = dfToPlot_2.groupby('year').sum()
dfToPlot_3 = talks[talks.country=='Canada'].copy()
dfToPlot_3.loc[:, 'year'] = dfToPlot_3.publishDate.dt.year
dfToPlot_3 = dfToPlot_3.groupby('year').sum()
trace1 = go.Bar(
name='United Kingdom',
x= dfToPlot.index.values,
y= dfToPlot.viewsWeek,
)
trace2 = go.Bar(
name='India',
x= dfToPlot_2.index.values,
y= dfToPlot_2.viewsWeek,
)
trace3 = go.Bar(
name='Canada',
x= dfToPlot_3.index.values,
y= dfToPlot_3.viewsWeek,
)
layout = go.Layout(
title='Views per week (%d talks)' % (len(dfToPlotA)+len(dfToPlot_2)+len(dfToPlot_3)),
yaxis = dict(
# range = [0, 50000],
title='sum of (views / week)',
),
xaxis = dict(
title='Year',
# range=[0,0.08]
tickvals = list(range(2009, 2020)),
# ticktext = nombres_grupos
)
)
fig = go.Figure(data=[trace1,trace2,trace3], layout=layout)
iplot(fig)
# +
dfToPlotA = talks[talks.country=='United Kingdom'].copy()
dfToPlotA.loc[:, 'year'] = dfToPlotA.publishDate.dt.year
dfToPlot = dfToPlotA.groupby('year').count()
dfToPlot_2 = talks[talks.country=='India'].copy()
dfToPlot_2.loc[:, 'year'] = dfToPlot_2.publishDate.dt.year
dfToPlot_2 = dfToPlot_2.groupby('year').count()
dfToPlot_3 = talks[talks.country=='Canada'].copy()
dfToPlot_3.loc[:, 'year'] = dfToPlot_3.publishDate.dt.year
dfToPlot_3 = dfToPlot_3.groupby('year').count()
trace1 = go.Bar(
name='United Kingdom',
x= dfToPlot.index.values,
y= dfToPlot.views,
)
trace2 = go.Bar(
name='India',
x= dfToPlot_2.index.values,
y= dfToPlot_2.views,
)
trace3 = go.Bar(
name='Canada',
x= dfToPlot_3.index.values,
y= dfToPlot_3.views,
)
layout = go.Layout(
title='Talks per year (%d talks)' % (len(dfToPlotA)+len(dfToPlot_2)+len(dfToPlot_3)),
yaxis = dict(
# range = [0, 50000],
title='Amount of talks',
),
xaxis = dict(
title='Year',
# range=[0,0.08]
tickvals = list(range(2009, 2020)),
# ticktext = nombres_grupos
)
)
fig = go.Figure(data=[trace1,trace2,trace3], layout=layout)
iplot(fig)
# -
categories = talks.catName.unique()
dataToPlot = []
for cname in categories:
data = talks[(talks.catName==cname)].groupby('year').count()
dataToPlot.append(
        go.Scatter(  # go.Line is a deprecated alias of go.Scatter
name=cname,
x= data.index.values,
y= data.viewsWeek,
)
)
layout = go.Layout(
title='Categories',
yaxis = dict(
title='n of talks',
),
xaxis = dict(
title='Year',
tickvals = list(range(2009, 2020)),
)
)
fig = go.Figure(data=dataToPlot, layout=layout)
iplot(fig)
categories = talks.catName.unique()
dataToPlot = []
for cname in categories:
data = talks[(talks.catName==cname)].groupby('year').sum()
dataToPlot.append(
        go.Scatter(  # go.Line is a deprecated alias of go.Scatter
name=cname,
x= data.index.values,
y= data.viewsWeek,
)
)
layout = go.Layout(
title='Categories',
yaxis = dict(
title='sum of(views / week)',
),
xaxis = dict(
title='Year',
tickvals = list(range(2009, 2020)),
)
)
fig = go.Figure(data=dataToPlot, layout=layout)
iplot(fig)
# +
dataDf = talks[talks.catName=='gender'].groupby('country').count().sort_values('title', ascending=False).iloc[1:6]
dataToPlot = []
for cname in dataDf.index.values:
data = talks[(talks.catName=='gender') & (talks.country==cname)].groupby('year').count()
dataToPlot.append(
        go.Scatter(  # go.Line is a deprecated alias of go.Scatter
name=cname,
x= data.index.values,
y= data.viewsWeek,
)
)
layout = go.Layout(
title='Gender talks by country',
yaxis = dict(
title='n of talks',
),
xaxis = dict(
title='Year',
tickvals = list(range(2009, 2020)),
)
)
fig = go.Figure(data=dataToPlot, layout=layout)
iplot(fig)
# +
data = talks.groupby('country').apply(
lambda df: len(df[df.catName=='gender']) / len(df)
)[(talks.groupby('country').count() > 100).title.values].sort_values(ascending=False)[:20]
talks[talks.country==data.index[0]]
# for i in data.index:
# print(i, data[i])
x = data.index.map(
lambda x: x+' (%d of %d talks)'% (len(talks[(talks.country==x)&(talks.catName=='gender')]), len(talks[(talks.country==x)]))
)
trace1 = go.Bar(
    name='gender talks share',  # `cname` here was a stale loop variable
x= x,
y= data.values,
text=data.map(lambda x: round(x,2)),
textposition='auto'
)
layout = go.Layout(
title='% of gender talks by country',
yaxis = dict(
title='n of talks',
),
xaxis = dict(
title='Country',
# tickvals = data.index.map(
# lambda x: x+' 10'
# ),
),
margin=dict(b=150),
)
fig = go.Figure(data=[trace1], layout=layout)
iplot(fig)
# +
dfToPlotA = talks[talks.country=='Argentina'].copy()
dfToPlotA.loc[:, 'year'] = dfToPlotA.publishDate.dt.year
dfToPlot = dfToPlotA.groupby('year').sum()
trace1 = go.Bar(
name='Argentina',
x= dfToPlot.index.values,
y= dfToPlot.views,
)
layout = go.Layout(
title='Argentina',
yaxis = dict(
# range = [0, 50000],
title='Views',
),
xaxis = dict(
title='Year',
# range=[0,0.08]
tickvals = list(range(2010, 2020)),
# ticktext = nombres_grupos
)
)
fig = go.Figure(data=[trace1], layout=layout)
iplot(fig)
# -
|
Analysis/Analysis by country.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.1 64-bit
# name: python3
# ---
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import chart_studio.plotly as py # package change b/c plotly.plotly is deprecated.
from plotly.offline import iplot, plot
from CarFactory import CarFactory
from Street import Street, StreetRamp, StreetAuto
from Constants import *
# +
# Experiment with varying p factors
prob_auto, prob_car = 0, 1 # all normal cars, ignoring their automatic car implementation
num_lane = 3
# TODO: need to modify road length implementation so that different lanes can have different
# lengths to faciliate merging
road_length = 500 # 5 km
flow_in = 10 # vehicle per second
p_factors = [-0.1, 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.1, 10]
avg_speeds = []
med_speeds = []
times = []
for p in p_factors:
print(p)
cf = CarFactory(prob_auto, prob_car, p, T_REACTION_HUMAN)
road = Street(num_lane, road_length, cf)
avg_speed = []
med_speed = []
time = []
for i in range(1000):
road.update(flow_in)
if i % 10 == 0:
road.report()
avg_speed.append(road.get_avg_vel())
med_speed.append(road.get_median_vel())
time.append(road.get_time())
avg_speeds.append(avg_speed)
med_speeds.append(med_speed)
times.append(time)
# -
for t_vals, v_vals, p in zip(times, avg_speeds, p_factors):
    plt.plot(t_vals, v_vals, label=str(p))
plt.xlabel("Time")
plt.ylabel("Avg speed")
plt.title("Avg speed vs time for politeness factor in [-0.1, 1.1]")
# plt.xlim([0,50])
# plt.ylim([5,10])
plt.legend()
print(med_speeds[2])
# +
# Experiment with varying the t factor
prob_auto, prob_car = 0, 1 # all normal cars, ignoring their automatic car implementation
num_lane = 3
# TODO: need to modify road length implementation so that different lanes can have different
# lengths to facilitate merging
road_length = 500 # 5 km
flow_in = 10 # vehicle per second
t_factors = [0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
avg_speeds = []
med_speeds = []
times = []
for t in t_factors:
cf = CarFactory(prob_auto, prob_car, P_FACTOR_CAR, t, DB_CAR)
road = Street(num_lane, road_length, cf)
avg_speed = []
med_speed = []
time = []
for i in range(1000):
road.update(flow_in)
if i % 10 == 0:
road.report()
avg_speed.append(road.get_avg_vel())
med_speed.append(road.get_median_vel())
time.append(road.get_time())
avg_speeds.append(avg_speed)
med_speeds.append(med_speed)
times.append(time)
# -
plt.rcParams.update({
"text.usetex": True,
"font.family": "serif",
"font.serif": ["Computer Modern Roman"],
})
for t_vals, v_vals, t in zip(times, avg_speeds, t_factors):
    plt.plot(t_vals, v_vals, label="%.2f s" % t)
plt.xlabel("Time")
plt.ylabel("Average speed")
#plt.title("Avg speed vs time for reaction time in [0.0, 2.0] sec")
plt.xlim([0,25])
# plt.ylim([5,10])
plt.legend(title="T (reaction time)")
plt.savefig('t.png')
plt.show()
plt.savefig("t.png")
# +
# Experiment with changing the lane change threshold DB
# Experiment with varying the t factor
prob_auto, prob_car = 0, 1 # all normal cars, ignoring their automatic car implementation
num_lane = 3
# TODO: need to modify road length implementation so that different lanes can have different
# lengths to facilitate merging
road_length = 500 # 5 km
flow_in = 10 # vehicle per second
db_values = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
avg_speeds = []
med_speeds = []
times = []
for db in db_values:
cf = CarFactory(prob_auto, prob_car, P_FACTOR_CAR, T_REACTION_HUMAN, db)
road = Street(num_lane, road_length, cf)
avg_speed = []
med_speed = []
time = []
for i in range(1000):
road.update(flow_in)
if i % 10 == 0:
road.report()
avg_speed.append(road.get_avg_vel())
med_speed.append(road.get_median_vel())
time.append(road.get_time())
avg_speeds.append(avg_speed)
med_speeds.append(med_speed)
times.append(time)
# -
for db, t, v in zip(db_values, times, avg_speeds):
    plt.plot(t, v, label=f"{db:.1f}")
plt.xlabel("Time")
plt.ylabel("Average speed")
plt.title("Average speed vs. time for lane-change threshold DB in [0.0, 1.0]")
plt.xlim([0, 25])
# plt.ylim([5, 10])
plt.legend(title="DB (lane-change threshold)")
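# The curves above are hard to compare by eye; a single summary number per setting helps. A minimal sketch (independent of the simulator, shown on hypothetical traces) that averages the tail of each trace, assuming the system has settled by then:

```python
import numpy as np

def steady_state_mean(speeds, tail_frac=0.5):
    """Average of the last `tail_frac` fraction of a speed trace."""
    start = int(len(speeds) * (1 - tail_frac))
    return float(np.mean(speeds[start:]))

# Hypothetical traces for illustration only
traces = [[10, 9, 8, 8, 8], [10, 7, 6, 6, 6]]
print([steady_state_mean(t) for t in traces])  # [8.0, 6.0]
```

# Applied to the experiment above, `[steady_state_mean(v) for v in avg_speeds]` would give one comparable number per DB value.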
|
src/MCMProbB.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import datetime
t1 = datetime.time(5,25,1)
print(t1)
t1.hour
t1.microsecond
print(datetime.time.min)
print(datetime.time.max)
print(datetime.time.resolution)
today = datetime.date.today()
print(today)
today.timetuple()
today.day
print(datetime.date(1,1,1))
print(datetime.date.max)
print(datetime.date.resolution)
d1 = datetime.date(2017,3,10)
print(d1)
d2 = d1.replace(year=1994)
d2
d1 - d2
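# Subtracting two dates yields a `datetime.timedelta`, whose components can be inspected directly:

```python
import datetime

d1 = datetime.date(2017, 3, 10)
d2 = d1.replace(year=1994)
delta = d1 - d2                 # a datetime.timedelta
print(type(delta).__name__)     # timedelta
print(delta.days)               # 8401
print(delta.total_seconds())    # 725846400.0
```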
|
Introduction to Python/Datetime.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('zshekenv2')
# language: python
# name: python3
# ---
# +
import boto3
import botocore
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import torch
from torch.utils.data import Dataset, DataLoader
import nibabel as nib
'''
https://hcp-openaccess.s3.us-east-1.amazonaws.com/HCP/100307/unprocessed/3T/T1w_MPR1/100307_3T_T1w_MPR1.nii.gz
'''
access_key_id = '<KEY>'
secret_key_id = '<KEY>'
# -
# download from S3
# +
session = boto3.Session(aws_access_key_id=access_key_id, aws_secret_access_key=secret_key_id)
s3 = session.resource('s3')
unrestricted = pd.read_csv('unrestricted_zshek_5_6_2022_5_34_30.csv')
ii = 0
id_list = []
age_list = []
gender_list = []
for subj_id in unrestricted['Subject']:
id_ = str(subj_id)
    KEY = 'HCP/' + id_ + '/unprocessed/3T/T2w_SPC1/' + id_ + '<KEY>'  # filename suffix redacted in the source
DST = 'D:/HCP/data/T2w/' + id_ + '.nii.gz'
ii += 1
try:
s3.Bucket('hcp-openaccess').download_file(KEY, DST)
except botocore.exceptions.ClientError:
continue
id_list.append(id_)
age_list.append(unrestricted['Age'][ii-1])
gender_list.append(unrestricted['Gender'][ii-1])
print(str(ii-1) + ': ' + DST)
if ii > 4:
break
# -
custom_HCP_Dataframe = pd.DataFrame(list(zip(id_list, age_list, gender_list)), columns = ['id', 'age', 'gender'])
print(custom_HCP_Dataframe)
custom_HCP_Dataframe.to_csv('custom_HCP.csv')
class HCP_3T_T2w_Dataset(Dataset):
def __init__(self, csv_file, root_dir, transform=None):
self.id_unrestricted_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.id_unrestricted_frame)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir,
str(self.id_unrestricted_frame.iloc[idx, 1])+'.nii.gz')
image = nib.load(img_name)
id_ = self.id_unrestricted_frame.iloc[idx, 1]
age = self.id_unrestricted_frame.iloc[idx, 2]
gender = self.id_unrestricted_frame.iloc[idx, 3]
sample = {'image': image, 'id_': id_, 'age': age, 'gender': gender}
if self.transform:
sample = self.transform(sample)
return sample
HCP_dataset = HCP_3T_T2w_Dataset(csv_file='custom_HCP.csv', root_dir='data/T2w/')
for i in range(len(HCP_dataset)):
    sample = HCP_dataset[i]
    print(i, sample['image'].shape, sample['id_'])
    slice_ = sample['image'].get_fdata()
    slice_ = slice_[:, :, 160]
    plt.figure()  # new figure per volume so earlier slices are not overwritten
    plt.imshow(slice_, cmap='gray')
    plt.show()
|
220524_HCP_pipeline.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9 (tensorflow)
# language: python
# name: tensorflow
# ---
# + [markdown] id="UhLdicqoAMGl"
# <a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_03_anomaly.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="BGssM4-jAMGn"
# # T81-558: Applications of Deep Neural Networks
# **Module 14: Other Neural Network Techniques**
# * Instructor: [<NAME>](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
# * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# + [markdown] id="64qCM6NsAMGn"
# # Module 14 Video Material
#
# * Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=1mB_5iurqzw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_01_automl.ipynb)
# * Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_02_auto_encode.ipynb)
# * **Part 14.3: Training an Intrusion Detection System with KDD99** [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_03_anomaly.ipynb)
# * Part 14.4: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_04_ids_kdd99.ipynb)
# * Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_14_05_new_tech.ipynb)
#
#
# + [markdown] id="k9pmbHghAMGo"
# # Part 14.3: Anomaly Detection in Keras
#
# Anomaly detection is an unsupervised training technique that analyzes the degree to which incoming data differs from the data you used to train the neural network. Traditionally, cybersecurity experts have used anomaly detection to ensure network security. However, you can use anomalies in data science to detect input for which you have not trained your neural network.
#
# There are several data sets that many commonly use to demonstrate anomaly detection. In this part, we will look at the KDD-99 dataset.
#
#
# * [Stratosphere IPS Dataset](https://www.stratosphereips.org/category/dataset.html)
# * [The ADFA Intrusion Detection Datasets (2013) - for HIDS](https://www.unsw.adfa.edu.au/unsw-canberra-cyber/cybersecurity/ADFA-IDS-Datasets/)
# * [ITOC CDX (2009)](https://westpoint.edu/centers-and-research/cyber-research-center/data-sets)
# * [KDD-99 Dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html)
#
# ## Read in KDD99 Data Set
#
# Although the KDD99 dataset is over 20 years old, it is still widely used to demonstrate Intrusion Detection Systems (IDS) and Anomaly detection. KDD99 is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, held in conjunction with KDD-99, The Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, including various intrusions simulated in a military network environment.
#
# The following code reads the KDD99 CSV dataset into a Pandas data frame. The standard format of KDD99 does not include column names. Because of that, the program adds them.
# + colab={"base_uri": "https://localhost:8080/", "height": 325} id="Z7Nc6xKjAMGo" outputId="854a3977-8e15-4d42-a9be-917ea70cb108"
import pandas as pd
from tensorflow.keras.utils import get_file
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 5)
try:
path = get_file('kdd-with-columns.csv', origin=\
'https://github.com/jeffheaton/jheaton-ds2/raw/main/'\
'kdd-with-columns.csv',archive_format=None)
except:
print('Error downloading')
raise
print(path)
# Original file: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
df = pd.read_csv(path)
print("Read {} rows.".format(len(df)))
# df = df.sample(frac=0.1, replace=False) # Uncomment this line to
# sample only 10% of the dataset
df.dropna(inplace=True,axis=1)
# For now, just drop NA's (rows with missing values)
# display 5 rows
pd.set_option('display.max_columns', 5)
pd.set_option('display.max_rows', 5)
df
# + [markdown] id="qiiHziMXAMGp"
# The KDD99 dataset contains many columns that define the network state over time intervals during which a cyber attack might have taken place. The "outcome" column specifies either "normal," indicating no attack, or the type of attack performed. The following code displays the counts for each type of attack and "normal".
# + id="cxgbeLBpAMGq" outputId="11827d16-bbaf-4edc-c5fb-d25c787aac07"
df.groupby('outcome')['outcome'].count()
# + [markdown] id="1Y1YqytVAMGq"
# ## Preprocessing
#
# We must perform some preprocessing before we can feed the KDD99 data into the neural network. We provide the following two functions to assist with preprocessing. The first function converts numeric columns into Z-Scores. The second function replaces categorical values with dummy variables.
# + id="xRR6q2kOAMGq"
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1]
# for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = f"{name}-{x}"
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# + [markdown] id="DuEFJKt_AMGr"
# This code converts all numeric columns to Z-Scores and all textual columns to dummy variables. We now use these functions to preprocess each of the columns. Once the program preprocesses the data, we display the results.
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="stntDxM9AMGr" outputId="ac43723d-a105-4105-fe1a-d45153e3294f"
# Now encode the feature vector
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 5)
for name in df.columns:
if name == 'outcome':
pass
elif name in ['protocol_type','service','flag','land','logged_in',
'is_host_login','is_guest_login']:
encode_text_dummy(df,name)
else:
encode_numeric_zscore(df,name)
# display 5 rows
df.dropna(inplace=True,axis=1)
df[0:5]
# + [markdown] id="fyHlHDwGAMGr"
# We divide the data into two groups, "normal" and the various attacks to perform anomaly detection. The following code divides the data into two data frames and displays each of these two groups' sizes.
# + colab={"base_uri": "https://localhost:8080/"} id="vZhVyry_AMGr" outputId="4809f4ec-49d3-419d-a82b-25dd9c52f3c9"
normal_mask = df['outcome']=='normal.'
attack_mask = df['outcome']!='normal.'
df.drop('outcome',axis=1,inplace=True)
df_normal = df[normal_mask]
df_attack = df[attack_mask]
print(f"Normal count: {len(df_normal)}")
print(f"Attack count: {len(df_attack)}")
# + [markdown] id="QnihkQx_AMGr"
# Next, we convert these two data frames into Numpy arrays. Keras requires this format for data.
# + id="Vr_C7KEUAMGs"
# This is the numeric feature vector, as it goes to the neural net
x_normal = df_normal.values
x_attack = df_attack.values
# + [markdown] id="YH_QLaPvAMGs"
# ## Training the Autoencoder
#
# It is important to note that we are not using the outcome column as a label to predict. We will train an autoencoder on the normal data and see how well it can detect that the data not flagged as "normal" represents an anomaly. This anomaly detection is unsupervised; there is no target (y) value to predict.
#
# Next, we split the normal data into a 25% test set and a 75% train set. The program will use the test data to facilitate early stopping.
# + id="MX6PCz-RAMGs"
from sklearn.model_selection import train_test_split
x_normal_train, x_normal_test = train_test_split(
x_normal, test_size=0.25, random_state=42)
# + [markdown] id="Ozvx3sr5AMGs"
# We display the size of the train and test sets.
# + colab={"base_uri": "https://localhost:8080/"} id="CuH9Bxg6AMGs" outputId="1e8aa872-3c95-4299-aa30-896577f2db84"
print(f"Normal train count: {len(x_normal_train)}")
print(f"Normal test count: {len(x_normal_test)}")
# + [markdown] id="06DNCbB4AMGs"
# We are now ready to train the autoencoder on the normal data. The autoencoder will learn to compress the data to a vector of just three numbers. The autoencoder should also be able to decompress with reasonable accuracy. As is typical for autoencoders, we are merely training the neural network to produce the same output values as were fed to the input layer.
# + colab={"base_uri": "https://localhost:8080/"} id="fpXiVRTuAMGs" outputId="d5280198-5f5c-4c3c-b0be-f0aaa68bcb75"
from sklearn import metrics
import numpy as np
import pandas as pd
from IPython.display import display, HTML
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(25, input_dim=x_normal.shape[1], activation='relu'))
model.add(Dense(3, activation='relu')) # size to compress to
model.add(Dense(25, activation='relu'))
model.add(Dense(x_normal.shape[1])) # Multiple output neurons
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_normal_train,x_normal_train,verbose=1,epochs=100)
# + [markdown] id="mw9KbY1LAMGs"
# ## Detecting an Anomaly
#
# We are now ready to see if the abnormal data registers as an anomaly. The first two scores show the out-of-sample and in-sample RMSE. These two scores are relatively low, at around 0.33, because they resulted from normal data. The much higher 0.76 error occurred on the abnormal data: the autoencoder is not as capable of encoding data that represents an attack, and this higher error indicates an anomaly.
# + colab={"base_uri": "https://localhost:8080/"} id="NWFVmgy-AMGs" outputId="6e5d1893-dac3-48df-b9b8-cce31db3e3f3"
pred = model.predict(x_normal_test)
score1 = np.sqrt(metrics.mean_squared_error(pred,x_normal_test))
pred = model.predict(x_normal)
score2 = np.sqrt(metrics.mean_squared_error(pred,x_normal))
pred = model.predict(x_attack)
score3 = np.sqrt(metrics.mean_squared_error(pred,x_attack))
print(f"Out of Sample Normal Score (RMSE): {score1}")
print(f"Insample Normal Score (RMSE): {score2}")
print(f"Attack Underway Score (RMSE): {score3}")
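# The scores above are aggregates over whole data frames. In practice, individual connections are flagged by comparing each sample's reconstruction error against a threshold chosen from the normal data. The helper and threshold below are hypothetical illustrations on toy arrays, not derived from this model:

```python
import numpy as np

def per_sample_rmse(reconstruction, x):
    """RMSE of each row's reconstruction."""
    return np.sqrt(np.mean((reconstruction - x) ** 2, axis=1))

# Hypothetical reconstructions for illustration
x = np.array([[0.0, 0.0], [1.0, 1.0]])
recon = np.array([[0.1, -0.1], [2.0, 2.0]])
errors = per_sample_rmse(recon, x)
threshold = 0.5  # e.g. a high percentile of errors measured on normal data
print(errors > threshold)  # [False  True]
```

# With the trained autoencoder, the same idea would use `per_sample_rmse(model.predict(x_attack), x_attack)`.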
|
t81_558_class_14_03_anomaly.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MAT281 - Lab 1
# ## Exercise 1
#
# (1 point)
#
# __Computing $\pi$:__ In the 17th and 18th centuries, <NAME> and <NAME> discovered an infinite series that can be used to compute $\pi$:
#
# $$\pi = 4 \sum_{k=1}^{\infty}\dfrac{(-1)^{k+1}}{2k-1} = 4\left(1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7} + \dots\right) $$
#
# Implement the function `calcular_pi` to estimate the value of $\pi$ using the Leibniz method. Its argument must be an integer $n$ giving the number of terms in the sum, i.e.:
#
# $$\pi \approx 4 \sum_{k=1}^{n}\dfrac{(-1)^{k+1}}{2k-1} $$
def calcular_pi(n:int) -> float:
    """
    Approximate the value of pi using the Leibniz method.
    Parameters
    ----------
    n : int
        Number of terms in the sum.
    Returns
    -------
    output : float
        Approximate value of pi.
    Examples
    --------
    >>> calcular_pi(3)
    3.466666666666667
    >>> calcular_pi(1000)
    3.140592653839794
    """
    pi = 0  # Initialize the partial sum
    for k in range(1, n+1):
        numerador = (-1)**(k+1)
        denominador = 2*k-1
        pi += numerador/denominador  # Add the k-th term to the running sum
    pi = 4 * pi  # Scale to approximate pi
return pi
calcular_pi(1000)
# ## Problem 2
#
# The [Collatz conjecture](https://en.wikipedia.org/wiki/Collatz_conjecture), also known as the $3n+1$ conjecture or the Ulam conjecture (among other names), was stated by the mathematician <NAME> in 1937 and remains unresolved to this day.
#
# Consider the following operation, applicable to any positive integer:
# * If the number is even, divide it by 2.
# * If the number is odd, multiply it by 3 and add 1.
#
# The conjecture states that, whatever number we start from, we always reach 1 (and therefore the cycle 4, 2, 1).
#
# Implement a function called `collatz` whose input is a positive natural number $N$ and whose output is a list with the Collatz sequence down to 1.
#
# **Example**: *collatz(9)* = [9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
def collatz(n:int) -> list:
    if n <= 0:
        raise Exception("Input n must be greater than or equal to 1.")
    elif n == 1:
        return [1]
    else:
        value = n  # Initial value, used as an auxiliary variable and overwritten
        collatz_list = [value]  # Start the list with the first element
        while value != 1:  # When do we stop?
            if value % 2 == 0:  # First condition
                value = value // 2  # Integer division keeps every term an int
            else:
                value = 3*value + 1
            collatz_list.append(value)
        return collatz_list
collatz(9)
# ## Problem 3
#
# (2 points)
#
# Let $\sigma(n)$ be the sum of the proper divisors of $n$ (i.e., those smaller than $n$). [Amicable numbers](https://en.wikipedia.org/wiki/Amicable_numbers) are positive integers $n_1$ and $n_2$ such that the sum of the proper divisors of each equals the other number, that is:
#
# $$\sigma(n_1)=n_2 \quad \textrm{and} \quad \sigma(n_2)=n_1$$
#
#
# For example, 220 and 284 are amicable numbers:
# * the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110, so $\sigma(220) = 284$.
# * the proper divisors of 284 are 1, 2, 4, 71 and 142, so $\sigma(284) = 220$.
#
# Implement the following functions:
#
# * `divisores(n)`: returns a list with the proper divisors of $n$.
# * `sigma(n)`: returns the value $\sigma(n)$.
# * `amigos(n, m)`: returns `True` if $n$ and $m$ are amicable numbers and `False` otherwise.
# +
def divisores(n:int) -> list:
    divisors = []
    # Efficiency is not required here; an exhaustive search is enough.
    for i in range(1,n):
        if n%i==0:  # Tip: use the modulo operation
            divisors.append(i)
    return divisors
def sigma(n:int) -> int:
divisors = divisores(n)
suma = 0
for x in divisors:
suma+=x
return suma
def amigos(n:int, m:int) -> bool:
sigma_n = sigma(n)
sigma_m = sigma(m)
if sigma_m==n and sigma_n==m:
return True
else:
return False
# -
n, m = 220, 284
print(f"The divisors of {n} are {divisores(n)}, so sigma({n}) is {sigma(n)}")
print(f"The divisors of {m} are {divisores(m)}, so sigma({m}) is {sigma(m)}")
print(f"Are 220 and 284 amicable numbers?: {amigos(n, m)}")
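# As a quick sanity check, the same exhaustive approach can search for every amicable pair below a bound. This is a self-contained sketch; `pares_amigos` is a name introduced here, not part of the assignment:

```python
def divisores(n: int) -> list:
    return [i for i in range(1, n) if n % i == 0]

def sigma(n: int) -> int:
    return sum(divisores(n))

def pares_amigos(limite: int) -> list:
    """All amicable pairs (n, m) with n < m < limite."""
    pares = []
    for n in range(2, limite):
        m = sigma(n)
        # m > n avoids duplicates and excludes perfect numbers (sigma(n) == n)
        if m > n and m < limite and sigma(m) == n:
            pares.append((n, m))
    return pares

print(pares_amigos(300))  # [(220, 284)]
```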
|
labs/lab01.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras import regularizers
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.normalization import BatchNormalization
import numpy as np
# fix random seed for reproducibility
np.random.seed(42)
import pandas as pd
# + deletable=true editable=true
#Folder for the dataset
datasetFolder = '/home/carnd/dbpedia2016/step_2x50/dataset/'
#Number of files
numberOfFiles = 638
#Test split
testSplit=0.2
# + deletable=true editable=true
def load_data(datasetFolder, datasetXFile, datasetYFile, wrap=True, printIt=False):
#print('Loading X')
# load file
with open(datasetFolder + datasetXFile, "r") as f:
head = f.readline()
cols = head.split(',')
numberOfCols = len(cols)
#print(numberOfCols)
numberOfRows=0
for line in f:
numberOfRows+=1
f.close()
if(printIt):
print('Input Features: {} x {}'.format(numberOfRows,numberOfCols))
if(wrap==True):
maxY = 8384
else:
maxY = numberOfCols-1
half=(numberOfCols//maxY)*0.5
dataX = np.zeros([numberOfRows,maxY],np.int8)
with open(datasetFolder + datasetXFile, "r") as f:
head = f.readline()
rowCounter=0
for line in f:
row=line.split(',')
for i in range(1, len(row)):
if(int(row[i])<=0):
continue;
val = 1 + ((int(row[i])-1)//maxY);
if(val>half):
val = 0 - (val - half)
dataX[rowCounter][(int(row[i])-1)%maxY]= val
#if((1 + ((int(row[i])-1)//maxY))>1):
# print("{} data[{}][{}] = {}".format(int(row[i])-1, rowCounter,(int(row[i])-1)%maxY,1 + ((int(row[i])-1)//maxY)))
rowCounter+=1
f.close()
#print('Loading Y')
# load file
with open(datasetFolder + datasetYFile, "r") as f:
head = f.readline()
cols = head.split(',')
numberOfCols = len(cols)
#print(numberOfCols)
numberOfRows=0
for line in f:
numberOfRows+=1
f.close()
if(printIt):
print('Output Features: {} x {}'.format(numberOfRows,numberOfCols))
dataY = np.zeros([numberOfRows,(numberOfCols-1)],np.float16)
with open(datasetFolder + datasetYFile, "r") as f:
head = f.readline()
rowCounter=0
for line in f:
row=line.split(',')
for i in range(1, len(row)):
if(int(row[i])<=0):
continue;
dataY[rowCounter][(int(row[i])-1)]=1
rowCounter+=1
f.close()
return dataX, dataY
# + deletable=true editable=true
dataX, dataY = load_data(datasetFolder,'datasetX_1.csv', 'datasetY_1.csv', printIt=True)
# + deletable=true editable=true
dataX, dataY = load_data(datasetFolder,'datasetX_1.csv', 'datasetY_1.csv')
# + deletable=true editable=true
print(dataX.shape)
print(dataX[0:5])
# + deletable=true editable=true
print(dataY.shape)
print(dataY[0:5])
# + deletable=true editable=true
print("Input Features for classification: {}".format(dataX.shape[1]))
print("Output Classes for classification: {}".format(dataY.shape[1]))
# + deletable=true editable=true
deepModel = Sequential(name='Deep Model (5 Dense Layers)')
deepModel.add(Dense(1024, input_dim=dataX.shape[1], init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(1024, init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(1024, init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(1024, init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(1024, init='glorot_normal'))
deepModel.add(BatchNormalization())
deepModel.add(Activation('relu'))
deepModel.add(Dropout(0.2))
deepModel.add(Dense(dataY.shape[1], activation='sigmoid', init='glorot_normal'))
models = [deepModel]
# + deletable=true editable=true
# Compile model
import keras.backend as K
def count_predictions(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
return true_positives, predicted_positives, possible_positives
def f1score(y_true, y_pred):
true_positives, predicted_positives, possible_positives = count_predictions(y_true, y_pred)
precision = true_positives / (predicted_positives + K.epsilon())
recall = true_positives / (possible_positives + K.epsilon())
f1score = 2.0 * precision * recall / (precision+recall+ K.epsilon())
return f1score
def fBetaScore(y_true, y_pred, beta):
true_positives, predicted_positives, possible_positives = count_predictions(y_true, y_pred)
precision = true_positives / (predicted_positives + K.epsilon())
recall = true_positives / (possible_positives + K.epsilon())
f1score = (1+(beta*beta)) * precision * recall / ((beta*beta*precision)+recall+ K.epsilon())
return f1score
for model in models:
model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=[f1score])
# + deletable=true editable=true
def fit_data(model, dataX, dataY):
# Fit the model
#model.fit(dataX, dataY, nb_epoch=5, verbose=2, batch_size=256)
return model.train_on_batch(dataX, dataY)
# + deletable=true editable=true
def countPredictions(y_true, y_pred):
true_positives = np.sum(np.round(y_pred*y_true))
predicted_positives = np.sum(np.round(y_pred))
possible_positives = np.sum(y_true)
return true_positives, predicted_positives, possible_positives
# + deletable=true editable=true
#Randomize the list of numbers so we can split train and test dataset
listOfFiles=list(range(1,numberOfFiles+1))
import random
random.shuffle(listOfFiles)
splitIndex=int((1-testSplit)*numberOfFiles)
numberOfEons = 5
for model in models:
for eon in range(0, numberOfEons):
print('{}. Eon {}/{}'.format(eon+1,eon+1, numberOfEons))
for trainIndex in range(0,splitIndex):
dataX, dataY = load_data(datasetFolder,'datasetX_{}.csv'.format(listOfFiles[trainIndex]), 'datasetY_{}.csv'.format(listOfFiles[trainIndex]))
#print('Model = {}'.format(model.name))
#model.fit(dataX, dataY, nb_epoch=1, verbose=0, batch_size=512)
#sc=model.test_on_batch(dataX,dataY)
#loss = sc[0]
#f1score = sc[1]
loss, f1score=fit_data(model,dataX, dataY)
print('Learning for file {} / {} : datasetX/Y_{}\t\tloss={:.4f} f1score={:.4f}'.format(trainIndex+1, splitIndex, listOfFiles[trainIndex], loss, f1score), end='\r')
counts = {}
counts[model.name] = {'true_positives':0, 'predicted_positives':0, 'possible_positives':0}
for testIndex in range(splitIndex, numberOfFiles):
dataX, dataY = load_data(datasetFolder,'datasetX_{}.csv'.format(listOfFiles[testIndex]), 'datasetY_{}.csv'.format(listOfFiles[testIndex]))
predY=model.predict_on_batch(dataX)
true_positives, predicted_positives, possible_positives = countPredictions(dataY, predY)
counts[model.name]['true_positives'] += true_positives
counts[model.name]['predicted_positives'] += predicted_positives
counts[model.name]['possible_positives'] += possible_positives
print ('Testing for file {} / {} : datasetX/Y_{} - true +ve:{} pred +ve:{} possible +ve:{}'.format(testIndex+1, numberOfFiles, listOfFiles[testIndex], true_positives,predicted_positives,possible_positives), end='\r')
count = counts[model.name]
precision = (count['true_positives'])/(count['predicted_positives']+0.0001)
recall = (count['true_positives'])/(count['possible_positives']+0.0001)
f1score = 2.0 * precision * recall / (precision+recall+0.0001)
print(' - Model = {} \t f1-score = {:.4f}\t precision = {:.4f} \t recall = {:.4f}'.format(model.name, f1score, precision, recall))
# + [markdown] deletable=true editable=true
# ==================================================
# # step_2x50 (exact)
# ==================================================
# 1. Eon 1/5
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8294  precision = 0.8869  recall = 0.7790
# 2. Eon 2/5
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8393  precision = 0.8882  recall = 0.7956
# 3. Eon 3/5
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8436  precision = 0.8820  recall = 0.8086
# 4. Eon 4/5
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8431  precision = 0.8895  recall = 0.8013
# 5. Eon 5/5
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8435  precision = 0.8879  recall = 0.8034
#
# ==================================================
# # step_2x50 (upto)
# ==================================================
# 1. Eon 1/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8256  precision = 0.8749  recall = 0.7816
# 2. Eon 2/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8290  precision = 0.8714  recall = 0.7906
# 3. Eon 3/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8301  precision = 0.8747  recall = 0.7898
# 4. Eon 4/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8320  precision = 0.8730  recall = 0.7947
# 5. Eon 5/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8328  precision = 0.8737  recall = 0.7956
# 6. Eon 6/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8325  precision = 0.8693  recall = 0.7989
# 7. Eon 7/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8333  precision = 0.8705  recall = 0.7993
# 8. Eon 8/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8326  precision = 0.8755  recall = 0.7938
# 9. Eon 9/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8325  precision = 0.8723  recall = 0.7963
# 10. Eon 10/10
#  - Model = Deep Model (5 Dense Layers)  f1-score = 0.8328  precision = 0.8713  recall = 0.7976
|
type_inferencing/example_executions/step_2x50.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Lecture 2. Functions, namespaces, header files, cmake, unit testing
# <br />
# ##### Functions: declaration and definition
# A function definition describes its "interface" (signature, return type and qualifiers) AND its implementation.
# ```c++
# float abs(float x)
# {
# if (x >= 0)
# return x;
# return -x;
# }
#
# float min_value(const std::vector<float>& items)
# {
# float rv = items.front();
# for (float x : items)
# rv = std::min(rv, x);
# return rv;
# }
# ```
# Notes:
# * to find a minimum, of course, use `std::min_element`
# * and for absolute values, use the `abs` function from `#include <cmath>`
# A function declaration describes only its "interface":
# ```c++
# float abs(float x);
#
# float min_value(const std::vector<float>& items);
# ```
# Unless explicitly required otherwise, a function may have several declarations in a program, but no more than one definition.
# __Question__: what goes into the header file, and what into the cpp file?
# <br />
# ##### Passing arguments to a function
# Pass by value: a copy of the argument is created
# ```c++
# float min_value(std::vector<float> x);
# ```
# Pass by reference: work with the argument itself
# ```c++
# float min_value(std::vector<float>& x);
# ```
# Pass by const reference: work with the argument, modification is forbidden
# ```c++
# float min_value(const std::vector<float>& x);
# ```
# **Exercise:** what happens to the arguments here?
#
# ```c++
# float sqr(const float x);
#
# float min_value(std::vector<float>* x);
# float min_value(const std::vector<float>* x);
# float min_value(std::vector<float>* const x);
# ```
# **Question:** When is it better to pass by value, and when by reference?
# <br />
# ##### Return value
# The function returns nothing:
#
# ```c++
# void say_hello(const std::string& name)
# {
# std::cout << "hello, " << name;
# }
# ```
# Returning the result via the return value (the preferred option):
#
# ```c++
# std::vector<std::string> make_team()
# {
# return { "Bifur", "Bofur", "Bombur", "Oin",
# "Gloin", "Fili", "Nori", "Dori",
# "Dwalin", "Ori", "Balin", "Kili" };
# }
# ```
# Returning the result via an argument (less preferred, since it is less readable):
#
# ```c++
# bool append_teamlead(Point3D location, std::vector<std::string>& team)
# {
# if (is_inside(location, get_village("Shire")))
# {
# team.push_back("Thorin");
# return true;
# }
# return false;
# }
# ```
# Compare:
#
# ```c++
# // it is easy to see what the function's result is
# std::vector<std::string> team = make_team();
#
# // it is not obvious that the result of the function
# // is the modification of its second argument; you have
# // to dig into the documentation or the implementation
# // to understand the author's intent
# append_teamlead(get_current_location(), team);
#
# ```
# <br />
# ##### Mistakes when working with arguments and return values
# ```c++
# std::string* find_dwarf(const std::vector<std::string>& team, const std::string& name)
# {
# for (const std::string& dwarf : team)
# if (dwarf == name)
# return &dwarf;
# return nullptr;
# }
#
# // usage 1
# std::vector<std::string> team = make_team();
# if (std::string* ptr = find_dwarf(team, "Kuzya"))
# std::cout << *ptr;
#
# // usage 2
# if (std::string* ptr = find_dwarf(make_team(), "Balin"))
# std::cout << *ptr; // ???
#
# // usage 3
# if (std::string* ptr = find_dwarf({"Ori", "Gloin", "Balin"}, "Balin"))
# std::cout << *ptr; // ???
# ```
# __Question__: what will the program below print?
#
# Show the example on godbolt.org with clang 8.0.0 at different optimization levels to clearly demonstrate the UB
# ```c++
# #include <iostream>
# #include <string>
#
# const std::string& f(const bool x,
# const std::string& s1,
# const std::string& s2)
# {
# return x ? s1 : s2;
# }
#
# int main()
# {
# const std::string& s = f(true, "123", "12345");
    std::cout << s << std::endl;
# return 0;
# }
# ```
# <br />
# ##### Default argument values
# Arguments can be given default values:
# ```c++
# std::string convert_to_string(int value, int base = 10);
# ```
# ```c++
# std::string join_strings(const std::vector<std::string>& strings,
# const std::string& sep = "\n");
# ```
# Usage:
# ```c++
# std::string s1 = convert_to_string(42); // 42
# std::string s2 = convert_to_string(42, 2); // 101010
#
# std::string song = join_strings({"In the town where I was born",
# "Lived a man who sailed to sea",
# "And he told us of his life",
# "In the land of submarines",
# });
#
# std::string sentence = join_strings({"Nobody",
# "expects",
# "the",
# "spanish",
# "inquisition",
# },
# " ");
# ```
# But such arguments must come last:
# ```c++
# std::string join_strings(const std::vector<std::string>& strings,
# const std::string& sep = "\n",
# bool skip_empty_lines); // compilation ERROR!
# ```
# ```c++
# std::string join_strings(const std::vector<std::string>& strings,
# const std::string& sep = "\n",
# bool skip_empty_lines = false); // OK
# ```
# <br />
# ##### Function overloading
# https://en.cppreference.com/book/intro/function_overloading
# The task: implement conversion of anything to a string, preferably uniformly, so that if a conversion exists the code compiles and works, and if none exists it does not compile.
# ```c++
# std::string convert_to_string(int x); // 1
# std::string convert_to_string(unsigned x); // 2
# std::string convert_to_string(float x); // 3
#
# std::cout << convert_to_string(5); // 1
# std::cout << convert_to_string(5u); // 2
# std::cout << convert_to_string(5.f); // 3
# ```
# For this set of `convert_to_string` functions the compiler (clang 10.0) generates the symbols:
# * `_Z17convert_to_stringB5cxx11i`
# * `_Z17convert_to_stringB5cxx11j`
# * `_Z17convert_to_stringB5cxx11f`
#
# I.e. the argument type is part of the function's name after compilation.
# <br />
# ##### Namespaces and name mangling during compilation
# https://en.wikipedia.org/wiki/Name_mangling
# Namespaces solve the problem of a function with the same name having two implementations in different libraries.
#
# Consider a scenario where you and a friend are writing your own personal Half Life 3: you work on the enemy AI, your companion on object animations. For debugging, both of you need to extract the objects of interest from the scene, so each of you needs a function:
# ```c++
# // EnemyAI.h
# // Get the scene objects the enemy can interact with (cover spots, firing-point markers, targets ...)
# std::vector<Object*> collect_objects(const Scene& scene);
#
# // Animations.h
# // Get the animated objects of the scene
# std::vector<Object*> collect_objects(const Scene& scene);
# ```
# This code is incorrect: the `collect_objects` function violates the ODR (One Definition Rule).
#
# You may argue that the function names in this example are poorly chosen, but let us settle on the fact that programmers do sometimes collide when choosing names.
# ```c++
# // EnemyAI.h
# namespace EnemyAI
# {
# // Get the scene objects the enemy can interact with (cover spots, firing-point markers, targets ...)
# std::vector<Object*> collect_objects(const Scene& scene);
# }
#
# // Animations.h
# namespace Animations
# {
# // Get the animated objects of the scene
# std::vector<Object*> collect_objects(const Scene& scene);
# }
# ```
# This code links correctly thanks to the name mangling mechanism at compile time.
#
# The compiler (clang 10.0) generates these symbol names for the functions:
#
# * `_ZN7EnemyAI15collect_objectsERK5Scene`
# * `_ZN10Animations15collect_objectsERK5Scene`
# <br />
# ##### Function lookup and `using namespace`
# The exact lookup rules:
#
# https://en.cppreference.com/w/cpp/language/lookup
# When compiling this piece of code:
#
# ```c++
# namespace json_parser {
# namespace input_processing {
#
# int read_int(const std::string& s)
# {
# if (avx_instructions_available())
# return read_int_avx(s);
# else
# return read_int_default(s);
# }
#
# }
# }
# ```
#
# The compiler needs to find candidates for the calls to `avx_instructions_available`, `read_int_avx` and `read_int_default`. It searches for them in the following namespaces:
# * the global namespace
# * `json_parser`
# * `json_parser::input_processing`
# * the namespaces of the function arguments (nothing for `avx_instructions_available`, `std` for `read_int_avx` and `read_int_default`)
#
# If more than one candidate, or none at all, is found for any of these functions, it is a compilation error. There must be a candidate, and exactly one.
# <br />
# **Note:** `using namespace X` fills the current namespace, up to the closing brace, with the names from namespace X.
# **Example 1:**
#
# The calling program:
#
# ```c++
# #include "json_parser.h"
#
# int main()
# {
#     // will not compile: there is no read_int function in the global namespace
# std::cout << read_int(s);
#
#     // compiles
# std::cout << json_parser::input_processing::read_int(s);
#
#     // compiles, since the global namespace has been extended
# using namespace json_parser::input_processing;
# std::cout << read_int(s);
#
# return 0;
# }
# ```
# **Example 2:**
#
# ```c++
# int main() {
# std::cout << 1 << std::string("23");
# return 0;
# }
# ```
#
# or
#
# ```c++
# using namespace std;
# // names from std are available in the global namespace until the end of the file
#
# int main() {
# cout << 1 << string("23");
# return 0;
# }
# ```
#
# or
#
# ```c++
# int main()
# {
# using namespace std;
#     // names from std are available until the end of main
#
# cout << 1 << string("23");
# return 0;
# }
# ```
# <br />
# **Recommendations:**
#
# 1. Never write `using namespace` in `.h` files (why?)
# 2. Limit the scope of `using namespace` sensibly.
#
# **An example of sensible use:** a function measuring the execution time of another function (`std::chrono` is verbose):
#
# ```c++
# // without using namespace
# std::uint64_t get_execution_time_microseconds(const std::function<void()>& f)
# {
# const std::chrono::high_resolution_clock::time_point start_ts =
# std::chrono::high_resolution_clock::now();
# f();
# const std::chrono::high_resolution_clock::time_point final_ts =
# std::chrono::high_resolution_clock::now();
# return std::chrono::duration_cast<std::chrono::microseconds>(final_ts - start_ts).count();
# }
#
# // with using namespace
# std::uint64_t get_execution_time_microseconds(const std::function<void()>& f)
# {
# using namespace std::chrono;
# const high_resolution_clock::time_point start_ts = high_resolution_clock::now();
# f();
# const high_resolution_clock::time_point final_ts = high_resolution_clock::now();
# return duration_cast<microseconds>(final_ts - start_ts).count();
# }
#
# // with using namespace + auto
# std::uint64_t get_execution_time_microseconds(const std::function<void()>& f)
# {
# using namespace std::chrono;
# const auto start_ts = high_resolution_clock::now();
# f();
# const auto final_ts = high_resolution_clock::now();
# return duration_cast<microseconds>(final_ts - start_ts).count();
# }
# ```
# <br />
# ##### Reading from a file
# Reading from a file, C style
# ```c++
# #include <cstdio>
#
# int main()
# {
# FILE* f = std::fopen("input.txt", "r");
# if (!f)
# {
# std::puts("cannot open file input.txt");
# return 1;
# }
#
# int sum = 0;
# int x = 0;
# while (std::fscanf(f, "%i", &x) != EOF)
# sum += x;
#
# std::printf("sum = %i\n", sum);
#
# std::fclose(f);
#
# return 0;
# }
# ```
# Reading from a file, C++ style
# ```c++
# #include <fstream>
# #include <iostream>
#
#
# int main()
# {
# std::ifstream ifs("input.txt");
# if (!ifs)
# {
# std::cerr << "ERROR: cannot open file input.txt" << std::endl;
# return 1;
# }
#
# int sum = 0, x = 0;
# while (ifs >> x)
# sum += x;
#
# std::cout << sum << std::endl;
#
# return 0;
# }
# ```
# Reading from a file in blocks
# ```c++
# #include <fstream>
# #include <iostream>
# #include <vector>
#
#
# int main()
# {
# std::ifstream ifs("input.txt");
# if (!ifs)
# {
# std::cerr << "ERROR: cannot open file\n";
# return 1;
# }
#
# const unsigned BUFSIZE = 4096;
# std::vector<char> buffer(BUFSIZE);
#
# while (true)
# {
# ifs.read(&buffer[0], buffer.size());
# const unsigned n = ifs.gcount();
# if (!n)
# break;
#
# std::string s(&buffer[0], n);
# std::cout << s;
# }
#
# return 0;
# }
# ```
# <br />
# ##### istream / ostream
# You may have noticed that in C++, reading from a file through the stream mechanism looks very similar to console input/output.
#
# This is no accident. The class responsible for reading from a stream is `std::istream`.
#
# Reading from a stream looks like this:
#
# ```c++
# int x;
# std::cin >> x;
# ```
#
# The `std::ifstream` class inherits from it. A file is represented as a stream of data. Reading from a file:
#
# ```c++
# std::ifstream ifs("input.txt");
# int x;
# ifs >> x;
# ```
# The stream mechanism lets the programmer cheaply abstract away from the data source (a file, standard input, memory ...) and write code that works with an arbitrary source.
# **Example:** let us write a function that reads an `int` from a stream:
#
# ```c++
# int read_int(std::istream& is)
# {
# int x;
# is >> x;
# return x;
# }
# ```
#
# We can call this function on any stream:
#
# ```c++
# int value1 = read_int(std::cin); // read an integer from stdin
#
# std::ifstream ifs("input.txt");
# int value2 = read_int(ifs); // read an integer from a file
#
# std::stringstream ss("36");
# int value3 = read_int(ss); // read an integer from memory wrapped in a std::stringstream
# ```
# <br />
# ##### Unit tests
# Function contracts:
# * precondition (expect)
# * postcondition (ensure)
# * invariant (precondition + postcondition)
# * assertion
#
#
# Give examples of contracts (exceptions are excluded from consideration, "we know nothing about them yet"):
#
# ```c++
# double sqrt(double x);
# bool binary_search(int *arr, int n, int value);
# unsigned read_n(std::istream& is);
# ```
# Types of functional tests:
# * positive scenario
# * negative scenario
# * boundary conditions
#
# Give examples of tests for the functions `sqrt`, `binary_search`, `read_n`
# As an exercise, let us write tests for a function that computes the length of a polyline
# ```c++
# // polyline.h
# #pragma once
# #include <vector>
#
# struct Point
# {
# float x;
# float y;
# };
#
# float get_polyline_len(const std::vector<Point>& polyline);
# ```
# ```c++
# // polyline.cpp
# #include "polyline.h"
# #include <cmath>
#
# float get_polyline_len(const std::vector<Point>& polyline)
# {
# float rv = 0;
#     for (std::size_t i = 1; i < polyline.size(); ++i)
# {
# const Point prev = polyline[i - 1];
# const Point curr = polyline[i];
# const float dx = curr.x - prev.x;
# const float dy = curr.y - prev.y;
# rv += std::sqrt(dx * dx + dy * dy);
# }
# return rv;
# }
# ```
# ```c++
# // polyline_test.cpp
# #include "polyline.h"
# #include "gtest/gtest.h"
#
# TEST(get_polyline_len, empty_polyline)
# {
# std::vector<Point> empty_poly;
# const double len = get_polyline_len(empty_poly);
# EXPECT_EQ(0, len);
# }
#
# TEST(get_polyline_len, single_point_polyline)
# {
# std::vector<Point> poly{{1,1}};
# const double len = get_polyline_len(poly);
# EXPECT_EQ(0, len);
# }
#
# // ???
# ```
# <br />
# ##### The purpose of header files and cpp files
# Files in C++ programs fall into two types, compiled and included:
#
# * _Compiled_ files (`file.cpp`) - files with C++ source text, fed to the compiler to generate the corresponding object file with machine instructions (`file.o`)
# * _Included_ files (`file.h`) - files with C++ source text that are not a compilation target themselves; their contents are copied into `cpp` files being compiled, via the `#include` directive
# **Example:**
# Let us write a program that, given an integer `n`, computes $(1 + 1/n)^n$.
#
# The program consists of four files:
# * `exp.cpp` - definition of the computation function
# * `exp.h` - declaration of the computation function
# * `main.cpp` - definition of `main`
# * `test_exp.cpp` - tests
#
# _Explain the program in detail_
# File `exp.cpp`:
#
# ```c++
# #include <cmath>
#
# double approximate_exp(const unsigned int n)
# {
# if (n == 0)
# return 1.0;
#
# return std::pow(1.0 + 1.0 / n, n);
# }
# ```
# File `exp.h`:
#
# ```c++
# #pragma once
#
# double approximate_exp(const unsigned int n);
# ```
# File `main.cpp`:
#
# ```c++
# #include "exp.h" // <---
# #include <iostream>
#
# int main()
# {
# unsigned n = 0;
# std::cin >> n;
#
# std::cout << approximate_exp(n);
# return 0;
# }
# ```
# File `test_exp.cpp`:
#
# ```c++
# #include "exp.h" // <---
# #include <gtest/gtest.h>
#
# TEST(approximate_exp, zero)
# {
#     EXPECT_NEAR(approximate_exp(0), 1.0, 1e-3);
# }
# ```
# <br />
# ##### CMake scaffolding for C++ projects
# Walk through the contents of `lab1_stub`; do not forget to explain the compilation scheme.
# Build script:
# !rm -rf build && mkdir build && cd build && cmake ../../lab1_stub/hasher && cmake --build .
# Contents of the build folder:
# !ls build
# Run the tests:
# !build/hash_unittests
# <br />
# **Hand out the first homework assignment, if not already done**
|
2021/sem1/lecture_2_functions_namespace_cmake_unit_tests/lecture2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Pygraphviz Simple
#
#
# An example showing how to use the interface to the pygraphviz
# AGraph class to convert to and from graphviz.
#
# Also see the pygraphviz documentation and examples at
# http://pygraphviz.github.io/
#
#
# +
# Author: <NAME> (<EMAIL>)
# Copyright (C) 2006-2018 by
# <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
# All rights reserved.
# BSD license.
import networkx as nx
# plain graph
G = nx.complete_graph(5) # start with K5 in networkx
A = nx.nx_agraph.to_agraph(G) # convert to a graphviz graph
X1 = nx.nx_agraph.from_agraph(A) # convert back to networkx (but as Graph)
X2 = nx.Graph(A) # fancy way to do conversion
G1 = nx.Graph(X1) # now make it a Graph
A.write('k5.dot') # write to dot file
X3 = nx.nx_agraph.read_dot('k5.dot') # read from dotfile
|
_downloads/pygraphviz_simple.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# +
url = "http://bit.ly/w-data"
s_data = pd.read_csv(url)
print("THE DATA IS SHOWN BELOW:")
s_data.head(10)
# -
# ## DATA VISUALIZATION
# ### SCATTER PLOT
s_data.plot(x='Hours', y='Scores', style='o')
plt.title('Hours vs Percentage')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show()
# ### BAR PLOT
plt.figure(figsize=(10,6))
plt.title("Hours vs Percentage")
sns.barplot(x=s_data['Hours'], y=s_data['Scores'])
plt.ylabel("Percentage Score")
# ## Preparing Data
X = s_data.iloc[:, :-1].values
y = s_data.iloc[:, 1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2, random_state=0)
# #### Training model
# +
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
print("Training complete.")
# -
# ## Plotting the regression line
line = regressor.coef_*X+regressor.intercept_
plt.scatter(X, y)
plt.plot(X, line);
plt.show()
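# The fitted line above is just the linear model y = m·x + b, with m = `regressor.coef_[0]` and b = `regressor.intercept_`. A minimal pure-Python sketch of how such a prediction is computed (the slope and intercept below are made-up illustrative values, not the coefficients of the fitted model):

```python
# Hypothetical slope and intercept, standing in for regressor.coef_[0]
# and regressor.intercept_ (illustrative values only)
slope = 9.78
intercept = 2.48

def predict_score(hours_studied):
    # a linear model predicts: score = slope * hours + intercept
    return slope * hours_studied + intercept

print(predict_score(9.25))
```

# This is what `regressor.predict` computes under the hood for a single-feature linear model.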
# ### Predictions
print(X_test)
y_pred = regressor.predict(X_test)
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
df
hours = 9.25
ghanta = np.array(hours).reshape(1, -1)
own_pred = regressor.predict(ghanta)
print("No of Hours = {}".format(hours))
print("Predicted Score = {}".format(own_pred[0]))
# ## Evaluating the model
from sklearn import metrics
print('Mean Absolute Error:',
metrics.mean_absolute_error(y_test, y_pred))
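# As a sanity check, `metrics.mean_absolute_error` is just the mean of the absolute residuals. A dependency-free sketch (the arrays below are made up for illustration, not the notebook's test split):

```python
# Illustrative actual vs. predicted values (not the real data)
y_true = [20, 27, 69, 30, 62]
y_hat = [16.9, 33.7, 75.4, 26.8, 60.5]

# mean absolute error = average of |actual - predicted|
mae = sum(abs(a - p) for a, p in zip(y_true, y_hat)) / len(y_true)
print(mae)
```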
|
Predecting Percentage of marks obtained given the study hours.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from tamil_tech.zero_shot import ConformerTamilASR
speech_recognizer = ConformerTamilASR()
speech_recognizer()
speech_recognizer.infer_dir('tests/')
|
Tamil_Conformer_ZeroShot_Inference.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this notebook we train hate speech prediction models on several datasets, in order to run predictions on Reddit.
#
# Running this notebook requires the [HatEval](#Dataset-HatEval-(SemEval-2019-Task5)), [DETOXIS](#Dataset-DETOXIS-(IberLEF-2021)) and [MeOffendMex](#Dataset-MeOffendMex-(Iberlef-2021)) datasets (download instructions can be found at each of the respective links). The downloaded datasets must be placed, respectively, in the folders
#
#     docs/hateval2019/
#
#     docs/detoxis_data/
#
#     docs/MeOffendEs/
# +
import pickle
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import spacy
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import ComplementNB
from sklearn.metrics import plot_confusion_matrix, classification_report
from sklearn.model_selection import train_test_split
from prettytable import PrettyTable
from preprocessing_utils import give_emoji_free_text, url_free_text, \
email_free_text, quotes_free_text, get_lemmas, tokenize, preprocess_corpus
nlp = spacy.load("es_core_news_lg")
np.random.seed(42)
palabras_odio = pd.DataFrame(columns=['hateval_rf', 'hateval_nb',
'detoxis_rf', 'detoxis_nb',
'meoffendmex_rf', 'meoffendmex_nb'])
# -
# ## Dataset HatEval (SemEval 2019 Task5)
#
# - Paper:
# - SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter, por <NAME>; <NAME>; <NAME>; <NAME>; <NAME>; <NAME>, <NAME>; <NAME>; <NAME>. Proceedings of the 13th International Workshop on Semantic Evaluation. Association for Computational Linguistics, 2019. [URL](https://aclanthology.org/S19-2007/).
# - Web: https://competitions.codalab.org/competitions/19935
# - Form to request the data: http://hatespeech.di.unito.it/hateval.html
#
# ### Description
#
# This dataset consists of ~7000 tweets that may contain hate speech against women or immigrants.
#
# The dataset has 5 columns, each representing:
#
# 1. The tweet ID.
# 1. The tweet text.
# 1. HS (*hate speech*): whether the tweet contains hate speech against women or immigrants.
# 1. TR (*target*): if HS=1, 0 when the hate speech targets a generic group, 1 when it targets a specific individual.
# 1. AG (*aggressive*): if HS=1, whether the tweet author exhibits aggressive behavior (1 if so, 0 otherwise).
#
# ([source](https://competitions.codalab.org/competitions/19935#participate))
#
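# The three labels combine per tweet; a tiny sketch with made-up rows (not the real dataset) showing how one might filter hateful tweets aimed at a specific individual:

```python
# Made-up labeled rows mirroring the HS/TR/AG columns (not real data)
rows = [
    {'id': 1, 'HS': 0, 'TR': 0, 'AG': 0},
    {'id': 2, 'HS': 1, 'TR': 1, 'AG': 0},
    {'id': 3, 'HS': 1, 'TR': 0, 'AG': 1},
]

# hateful tweets aimed at a specific individual: HS == 1 and TR == 1
individual_targets = [r['id'] for r in rows if r['HS'] == 1 and r['TR'] == 1]
print(individual_targets)
```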
verbose = 1
verbose_dataset = 0
verbose_hate_words = 0
# +
# load the dataset and inspect it
hate_eval = pd.read_csv('docs/hateval2019/hateval2019_es_train.csv')
if verbose_dataset:
print(hate_eval.describe())
# -
if verbose_dataset:
print(hate_eval[hate_eval['HS']==1])
print('Tweets without HS: {} \nTweets with HS: {}'.format(
len(hate_eval[hate_eval['HS']==0]), len(hate_eval[hate_eval['HS']==1])))
# ## Dataset DETOXIS (IberLEF 2021)
#
# - Web: https://detoxisiberlef.wixsite.com/website/corpus
# - Form to request the data: https://forms.office.com/r/6csEeBW0w3
#
# ### Description
#
# This dataset contains roughly 3500 comments from Spanish news sites/forums that may contain toxicity. Its distinctive feature is that it distinguishes levels of toxicity, from 0 (not toxic) to 3 (very toxic), and also annotates aspects such as insults, sarcasm, aggressiveness, stereotyping and constructiveness, among others. In the context of this work, a model was trained to predict whether or not a comment is aggressive.
#
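# If one instead wanted a binary target from the 0-3 toxicity scale described above, a simple thresholding sketch would do (the labels below are made up for illustration, not the dataset's actual columns):

```python
# Made-up toxicity levels on the 0 (not toxic) .. 3 (very toxic) scale
levels = [0, 2, 1, 3, 0, 1]

# binarize: anything above "not toxic" counts as toxic
is_toxic = [int(level > 0) for level in levels]
print(is_toxic)
```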
# +
# load the DETOXIS dataset
detoxis = pd.read_csv('docs/detoxis_data/train.csv')
if verbose_dataset:
print(detoxis.describe())
# -
if verbose_dataset:
print(detoxis[detoxis['aggressiveness']==1])
# ## Dataset MeOffendES (Iberlef 2021)
#
# - Papers:
#
# * OffendES: A New Corpus in Spanish for Offensive Language Research, por Plaza-del-Arco, Fl<NAME>am; <NAME>; <NAME> y <NAME>, <NAME>. Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021). INCOMA Ltd., 2021. [URL](https://aclanthology.org/2021.ranlp-main.123).
# * Overview of MeOffendEs at IberLEF 2021: Offensive Language Detection in Spanish Variants, por Plaza-del-Arco, Flor Miriam; <NAME>; <NAME>; <NAME>, <NAME>; <NAME>; <NAME>; <NAME> y <NAME>, Luis. Procesamiento del Lenguaje Natural, Revista nº 67, septiembre de 2021, Sociedad Española para el Procesamiento del Lenguaje Natural. [URL](https://rua.ua.es/dspace/handle/10045/117506).
#
# - Web: https://competitions.codalab.org/competitions/28679
# - Repository: https://github.com/pendrag/MeOffendEs
#
#
# ### Description
#
# This dataset consists of tweets from users in both Spain (MeOffendES) and Mexico (MeOffendMex, the latter with around 5000 tweets) that may contain offensive messages. The MeOffendMex variant was chosen here, since its messages were considered the closest to the problem addressed in this work.
#
# The dataset columns contain the comment text and Twitter account data, such as whether the account is verified, the account name and whether it has a default image, among others. Only the tweet text was used here. There are two target outputs: one indicates whether the tweet is offensive, the other whether it uses "vulgar language"; only the first was used.
#
# +
# load the MeOffendMex dataset
meoffendmex = pd.read_csv('docs/MeOffendEs/mx-train-data-non-contextual.csv')
y_meoffendmex = pd.read_csv('docs/MeOffendEs/mx-train-outputs.sol', header=None)
if verbose_dataset:
print(meoffendmex.describe())
# -
if verbose_dataset:
print(meoffendmex.head())
# # Process and train each of the datasets
# We define the general training function, which takes a dataset, vectorizes it, trains a model on that vectorization and reports the model's overall metrics.
def process_training_pipeline(dataset, vectorizer, labels, model):
vectorized_corpus = vectorizer.fit_transform(dataset)
X_tr, X_val, y_tr, y_val = train_test_split(vectorized_corpus, labels, test_size=0.3, random_state=42)
model.fit(X_tr, y_tr)
    print('\nMetrics (train)')
print(classification_report(model.predict(X_tr), y_tr))
    print('Confusion matrix (train)')
plot_confusion_matrix(model, X_tr, y_tr,
cmap=plt.cm.Blues,
normalize='true')
plt.show()
    print('\nMetrics (val)')
print(classification_report(model.predict(X_val), y_val))
    print('Confusion matrix (val)')
plot_confusion_matrix(model, X_val, y_val,
cmap=plt.cm.Blues,
normalize='true')
plt.show()
# ### Hateval dataset
hateval_corpus_lines = preprocess_corpus(hate_eval['text'])
y_hateval = hate_eval['HS'].values
cv_hateval = CountVectorizer(ngram_range=(1, 3), min_df=10)
lg_hateval = LogisticRegression(random_state=42)
nb_hateval = ComplementNB()
rf_hateval = RandomForestClassifier(random_state=42)
process_training_pipeline(hateval_corpus_lines,
cv_hateval,
y_hateval,
lg_hateval
)
process_training_pipeline(hateval_corpus_lines,
cv_hateval,
y_hateval,
rf_hateval
)
if verbose_dataset:
print(cv_hateval.get_feature_names()[:5])
# +
# get the features that contribute most to the random forest model's classification
rf_features_hateval_idx = rf_hateval.feature_importances_.argsort()[::-1]
palabras_odio_rf_hateval = np.array(cv_hateval.get_feature_names())[rf_features_hateval_idx[:100]]
palabras_odio['hateval_rf'] = pd.Series(palabras_odio_rf_hateval)
if verbose_hate_words:
print(palabras_odio_rf_hateval)
print(rf_hateval.feature_importances_[rf_features_hateval_idx[:30]])
# -
process_training_pipeline(hateval_corpus_lines,
cv_hateval,
y_hateval,
nb_hateval
)
# +
# based on https://stackoverflow.com/questions/50526898/how-to-get-feature-importance-in-naive-bayes
pos_class_prob_sorted = nb_hateval.feature_log_prob_[1, :].argsort()[::-1]
palabras_odio_nb_hateval = np.take(cv_hateval.get_feature_names(), pos_class_prob_sorted[:100])
palabras_odio['hateval_nb'] = pd.Series(palabras_odio_nb_hateval)
if verbose_hate_words:
print(palabras_odio_nb_hateval)
# +
# save the vectorizer and the trained models
with open('docs/models/hateval_vectorizer.pkl', 'wb') as file:
pickle.dump(cv_hateval, file)
with open('docs/models/hateval_lg_model.pkl', 'wb') as file:
pickle.dump(lg_hateval, file)
with open('docs/models/hateval_nb_model.pkl', 'wb') as file:
pickle.dump(nb_hateval, file)
with open('docs/models/hateval_rf_model.pkl', 'wb') as file:
pickle.dump(rf_hateval, file)
# -
# ## Detoxis dataset
detoxis_corpus_lines = preprocess_corpus(detoxis['comment'])
y_detoxis = detoxis['aggressiveness']
y_detoxis[detoxis['aggressiveness']==1]
cv_detoxis = CountVectorizer(ngram_range=(1, 3), min_df=10)
lg_detoxis = LogisticRegression(random_state=42)
nb_detoxis = ComplementNB()
rf_detoxis = RandomForestClassifier(random_state=42)
process_training_pipeline(detoxis_corpus_lines,
cv_detoxis,
y_detoxis,
lg_detoxis
)
process_training_pipeline(detoxis_corpus_lines,
cv_detoxis,
y_detoxis,
nb_detoxis
)
# +
pos_class_prob_sorted = nb_detoxis.feature_log_prob_[1, :].argsort()[::-1]
palabras_odio_nb_detoxis = np.take(cv_detoxis.get_feature_names(), pos_class_prob_sorted[:100])
palabras_odio['detoxis_nb'] = pd.Series(palabras_odio_nb_detoxis)
if verbose_hate_words:
print(palabras_odio_nb_detoxis)
# -
process_training_pipeline(detoxis_corpus_lines,
cv_detoxis,
y_detoxis,
rf_detoxis
)
# +
# get the features that contribute most to the random forest model's classification
rf_features_detoxis_idx = rf_detoxis.feature_importances_.argsort()[::-1]
palabras_odio_rf_detoxis = np.array(cv_detoxis.get_feature_names())[rf_features_detoxis_idx[:100]]
palabras_odio['detoxis_rf'] = pd.Series(palabras_odio_rf_detoxis)
if verbose_hate_words:
print(palabras_odio_rf_detoxis)
print(rf_detoxis.feature_importances_[rf_features_detoxis_idx[:30]])
# +
# save the vectorizer and the trained models
with open('docs/models/detoxis_vectorizer.pkl', 'wb') as file:
pickle.dump(cv_detoxis, file)
with open('docs/models/detoxis_lg_model.pkl', 'wb') as file:
pickle.dump(lg_detoxis, file)
with open('docs/models/detoxis_nb_model.pkl', 'wb') as file:
pickle.dump(nb_detoxis, file)
with open('docs/models/detoxis_rf_model.pkl', 'wb') as file:
pickle.dump(rf_detoxis, file)
# -
# ## MeOffendMex dataset
cv_meoffendmex = CountVectorizer(ngram_range=(1, 3), min_df=10)
lg_meoffendmex = LogisticRegression(random_state=42)
nb_meoffendmex = ComplementNB()
rf_meoffendmex = RandomForestClassifier(random_state=42)
meoffendmex_corpus_lines = preprocess_corpus(meoffendmex['tweet:text'])
process_training_pipeline(meoffendmex_corpus_lines,
cv_meoffendmex,
y_meoffendmex,
lg_meoffendmex
)
# +
# TODO: hate-word extraction with logistic regression
#cv_meoffendmex
palabras_odio_lg_meoffendmex = []
# -
process_training_pipeline(meoffendmex_corpus_lines,
cv_meoffendmex,
y_meoffendmex,
nb_meoffendmex
)
# +
pos_class_prob_sorted = nb_meoffendmex.feature_log_prob_[1, :].argsort()[::-1]
palabras_odio_nb_meoffendmex = np.take(cv_meoffendmex.get_feature_names(), pos_class_prob_sorted[:100])
palabras_odio['meoffendmex_nb'] = pd.Series(palabras_odio_nb_meoffendmex)
if verbose:
print(palabras_odio_nb_meoffendmex)
# -
process_training_pipeline(meoffendmex_corpus_lines,
cv_meoffendmex,
y_meoffendmex,
rf_meoffendmex
)
# +
# get the features that contribute most to the random forest model's classification
rf_features_meoffendmex_idx = rf_meoffendmex.feature_importances_.argsort()[::-1]
palabras_odio_rf_meoffendmex = np.array(cv_meoffendmex.get_feature_names())[rf_features_meoffendmex_idx[:100]]
palabras_odio['meoffendmex_rf'] = pd.Series(palabras_odio_rf_meoffendmex)
if verbose:
print(palabras_odio_rf_meoffendmex)
# + tags=[]
# save the trained models
with open('docs/models/meoffendmex_vectorizer.pkl', 'wb') as file:
pickle.dump(cv_meoffendmex, file)
with open('docs/models/meoffendmex_lg_model.pkl', 'wb') as file:
pickle.dump(lg_meoffendmex, file)
with open('docs/models/meoffendmex_nb_model.pkl', 'wb') as file:
pickle.dump(nb_meoffendmex, file)
with open('docs/models/meoffendmex_rf_model.pkl', 'wb') as file:
pickle.dump(rf_meoffendmex, file)
# -
# ### Save the hate words
palabras_odio.to_csv('docs/palabras_odio.csv')
# # Testing the models on Reddit
# + tags=[]
df = pd.read_csv('docs/preprocessing_reddit_data.csv')
if verbose_dataset:
print(df)
# -
df_reddit_original = pd.read_csv('docs/preprocessing_reddit_data.csv')
reddit_corpus = preprocess_corpus(df['body'].astype('str'))
# Using the models trained above, we use the following function to flag Reddit comments.
def predict_n_report_on_reddit_comments(cv_model_pairs):
table = PrettyTable()
thresholds = np.linspace(0.5, 0.9, 5)
    table.field_names = ['Model',
                         'Dataset',
                         *['# pred. thr. {}'.format(i) for i in thresholds]]
for pair in cv_model_pairs:
cv = pair[0][0]
dataset_name = pair[0][1]
model = pair[1][0]
model_name = pair[1][1]
predicted_by_thresh = []
for thresh in thresholds:
reddit_adapted = cv.transform(reddit_corpus)
reddit_hs_proba = model.predict_proba(reddit_adapted)[:,1]
hate_mask = reddit_hs_proba >= thresh
predicted_by_thresh.append(np.shape(df_reddit_original[hate_mask])[0])
table.add_row([model_name, dataset_name, *predicted_by_thresh])
return table
# + tags=[]
def predict_n_save_on_reddit_comments(cv, model, threshold, output_file_name):
reddit_adapted = cv.transform(reddit_corpus)
reddit_hs_proba = model.predict_proba(reddit_adapted)[:,1]
hate_mask = reddit_hs_proba >= threshold
    print('Detected {} comments out of a total of {}'.format(np.shape(df_reddit_original[hate_mask])[0], np.shape(df_reddit_original)[0]))
df_reddit_original[hate_mask].to_csv('docs/{}_hate.csv'.format(output_file_name))
df_reddit_original[~hate_mask].to_csv('docs/{}_non_hate.csv'.format(output_file_name))
# -
# ## Prediction counts across all models
# +
cv_dataset_pairs = [(cv_hateval, 'HatEval'), (cv_detoxis, 'DETOXIS'), (cv_meoffendmex, 'MeOffendMex')]
cv_model_pairs = [
[cv_dataset_pairs[0], (lg_hateval, 'Regresión logística')],
[cv_dataset_pairs[0], (nb_hateval, 'Naive Bayes')],
[cv_dataset_pairs[0], (rf_hateval, 'Random forest')],
[cv_dataset_pairs[1], (lg_detoxis, 'Regresión logística')],
[cv_dataset_pairs[1], (nb_detoxis, 'Naive Bayes')],
[cv_dataset_pairs[1], (rf_detoxis, 'Random forest')],
[cv_dataset_pairs[2], (lg_meoffendmex, 'Regresión logística')],
[cv_dataset_pairs[2], (nb_meoffendmex, 'Naive Bayes')],
[cv_dataset_pairs[2], (rf_meoffendmex, 'Random forest')],
]
predict_n_report_on_reddit_comments(cv_model_pairs)
# -
# ## Saving some predictions to inspect the generated CSVs
# ### From models trained on HatEval
predict_n_save_on_reddit_comments(cv_hateval, lg_hateval, 0.8, 'test/test_reddit_hateval_lg_hate_comments')
predict_n_save_on_reddit_comments(cv_hateval, nb_hateval, 0.8, 'test/test_reddit_hateval_nb_hate_comments')
predict_n_save_on_reddit_comments(cv_hateval, rf_hateval, 0.7, 'test/test_reddit_hateval_rf_hate_comments')
# ### From models trained on DETOXIS
predict_n_save_on_reddit_comments(cv_detoxis, lg_detoxis, 0.5, 'test/test_reddit_detoxis_lg_hate_comments')
predict_n_save_on_reddit_comments(cv_detoxis, nb_detoxis, 0.8, 'test/test_reddit_detoxis_nb_hate_comments')
predict_n_save_on_reddit_comments(cv_detoxis, rf_detoxis, 0.5, 'test/test_reddit_detoxis_rf_hate_comments')
# ### From models trained on MeOffendMex
predict_n_save_on_reddit_comments(cv_meoffendmex, lg_meoffendmex, 0.8, 'test/test_reddit_meoffendmex_lg_hate_comments')
predict_n_save_on_reddit_comments(cv_meoffendmex, nb_meoffendmex, 0.8, 'test/test_reddit_meoffendmex_nb_hate_comments')
predict_n_save_on_reddit_comments(cv_meoffendmex, rf_meoffendmex, 0.5, 'test/test_reddit_meoffendmex_rf_hate_comments')
# # Future improvements
#
# This notebook used the models in their most basic versions. Future work includes iteratively improving each of the models employed, adding new ones, performing hyperparameter optimization, and using different labels or information (for example, an alternative label in DETOXIS), among other improvements.
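The hyperparameter optimization mentioned above can be sketched with scikit-learn's `GridSearchCV`. The pipeline, toy corpus, and parameter grid below are illustrative assumptions, not the project's actual setup:

```python
# Illustrative sketch: grid search over the regularization strength of a
# logistic regression on a tiny toy corpus (not the project's data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

corpus = ['buen comentario', 'comentario ofensivo', 'mensaje neutral', 'mensaje ofensivo']
labels = [0, 1, 0, 1]

# Chaining the vectorizer and classifier in a Pipeline lets the search
# refit both on each fold, avoiding leakage from the vocabulary.
pipe = Pipeline([('cv', CountVectorizer()), ('lg', LogisticRegression())])
grid = GridSearchCV(pipe, param_grid={'lg__C': [0.1, 1.0, 10.0]}, cv=2)
grid.fit(corpus, labels)
print(grid.best_params_)
```

The same pattern extends to the Naive Bayes and random forest models by swapping the final pipeline step and its parameter grid.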
# END
|
src/4_detect_hate_speech.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="d746237a" outputId="77f3c61d-eb89-4fe8-f1f4-d5c7efb0802a" colab={"base_uri": "https://localhost:8080/", "height": 439}
import seaborn as sns
df = sns.load_dataset('titanic')
df
# + [markdown] id="22b7d9e6"
# pclass, sex, sibsp, parch, fare, embarked, who, adult_male, embark_town, alive, alone
# + id="38fc7bd5" outputId="90adb564-4f47-497e-dcbd-461a28bab174" colab={"base_uri": "https://localhost:8080/"}
df.info()
# + [markdown] id="643be6f1"
# pclass, sex, sibsp, parch, embarked, class, who, adult_male, embark_town, alive, alone
# + id="7ae167a0" outputId="ac6c3c62-e375-4c05-cdd1-e10e958b8b7a" colab={"base_uri": "https://localhost:8080/", "height": 419}
import pandas as pd
dfx = pd.get_dummies(df['sex'])
dfx
# + id="LCSKeMy79nqp" outputId="5ccd85d4-c432-4546-9f2d-a4dc8fd33326" colab={"base_uri": "https://localhost:8080/", "height": 439}
df = pd.concat([df, dfx], axis=1)
df
# + id="XPJKGw4NzPdS" outputId="637d3761-dbb3-47ef-ed0a-e3deee2b38b5" colab={"base_uri": "https://localhost:8080/", "height": 193}
dfx = pd.get_dummies(df['pclass'], prefix='pclass')
df = pd.concat([df, dfx], axis=1)
df.head(4)
# + [markdown] id="UyEA3usU_g5L"
# female male pclass_1 pclass_2 pclass_3
# + id="inaOaMtgz6yZ" outputId="e7556af4-6814-43cc-edfa-c4131477eb15" colab={"base_uri": "https://localhost:8080/", "height": 419}
df[['fare', 'female', 'male', 'pclass_1', 'pclass_2', 'pclass_3', 'survived']]
# + id="fvcHU2wFA3uZ" outputId="ba6e29a6-a20d-4a84-da9d-7af685b4504b" colab={"base_uri": "https://localhost:8080/"}
X = df[['fare', 'female', 'male', 'pclass_1', 'pclass_2', 'pclass_3']]
Y = df[['survived']]
X.shape, Y.shape
# + id="d2694agtBcD0" outputId="3f200187-86c6-44c2-ad6d-ce75c40df84d" colab={"base_uri": "https://localhost:8080/"}
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
X.shape
# + id="1-ZkVdwlCp8f"
from sklearn.model_selection import train_test_split
x_train, X_test, y_train, y_test = train_test_split(X, Y)
# + id="xOTnQHEmCIky" outputId="e99cd9d3-6da3-4231-c31e-f624d2d4b962" colab={"base_uri": "https://localhost:8080/"}
from sklearn.linear_model import LogisticRegression
logR = LogisticRegression()
logR.fit(x_train, y_train.values.ravel())  # ravel() avoids the column-vector warning
# + id="T0PbaAHlDOD-" outputId="dc88421f-284b-4979-bf90-a40e80395dfa" colab={"base_uri": "https://localhost:8080/"}
logR.score(x_train, y_train)
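The score above is computed on the training data; a model should also be evaluated on the held-out split. A minimal self-contained sketch of the same scale/split/fit/evaluate workflow (synthetic data here, not the Titanic frame above):

```python
# Sketch of the full workflow with a held-out evaluation step at the end.
# The data is synthetic and linearly separable, purely for illustration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # target depends on the first two features

X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
train_acc = clf.score(X_train, y_train)
test_acc = clf.score(X_test, y_test)  # the held-out score is the one to report
```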
# + id="g2QUrQE9Db5T"
|
titanic_onehot_logisticregression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
#
# # <font color='navy'>**Seasonal Forecast of Tropical Cyclone Activity in the South Pacific Ocean** </font>
# <br>
# <br>
# <br>
# 
#
# <br>
# <br>
#
# 
#
# <br>
# <br>
#
# 
#
# <br>
# <br>
#
# 
#
# <br>
# <br>
#
# 
#
# <br>
# <br>
#
# 
#
# <br>
# <br>
#
# 
#
# <br>
# <br>
#
# 
#
# <br>
#
# ```{toctree}
# :hidden:
# :titlesonly:
# :numbered: True
#
# j01
# j02
# j03
# j04
# j05
# j06
# j07
# ```
#
|
_build/jupyter_execute/j00pp.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import sys
import json
import datetime
import pandas as pd
import numpy as np
# -
cwd = os.getcwd()
join = os.path.join
norm = os.path.normpath
import matplotlib.pyplot as plt
# %matplotlib inline
# +
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = [15, 9]
plt.rcParams['font.size'] = 14
# pd.set_option('display.max_columns', None)
# pd.set_option('display.max_rows', None)
# -
base_dir = 'D:\\Dropbox\\OSU\\Research\\GHX Modeling\\GLHE\\GLHE\\validation\\MFRTRT_LTS'
sts = np.genfromtxt(join(base_dir, 'sts.csv'), delimiter=',')
lts = np.genfromtxt(join(base_dir, 'lts.csv'), delimiter=',')
g = np.genfromtxt(join(base_dir, 'g.csv'), delimiter=',')
g_b = np.genfromtxt(join(base_dir, 'g_b.csv'), delimiter=',')
g_b_exp = np.genfromtxt(join(base_dir, 'g_b_exp.csv'), delimiter=',')
plt.plot(lts[:, 0], lts[:, 1], label='pygfunction')
plt.plot(sts[:, 0], sts[:, 1], label='Xu-Spitler Radial-Numerical', linestyle='--')
plt.plot(g[:, 0], g[:, 1], label='g')
plt.plot(g_b[:, 0], g_b[:, 1], label='g_b')
plt.plot(g_b_exp[:, 0], g_b_exp[:, 1], label='g_b (Exp.)')
plt.xlabel('ln(t/ts)')
plt.ylabel('g')
plt.legend()
|
validation/MFRTRT_STS/MFRTRT_STS_g-function_plots.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# <a href="https://www.bigdatauniversity.com"><img src="https://ibm.box.com/shared/static/cw2c7r3o20w9zn8gkecaeyjhgw3xdgbj.png" width="400" align="center"></a>
#
# # <center>Simple Linear Regression</center>
#
#
# #### About this Notebook
# In this notebook, we learn how to use scikit-learn to implement simple linear regression. We download a dataset related to fuel consumption and carbon dioxide emissions of cars. We then split the data into training and test sets, create a model using the training set, evaluate the model using the test set, and finally use the model to predict unknown values.
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ### Importing Needed packages
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
# %matplotlib inline
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ### Downloading Data
# To download the data, we will use !wget to download it from IBM Object Storage.
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
# !wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
# -
# __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
#
# ## Understanding the Data
#
# ### `FuelConsumption.csv`:
# We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64)
#
# - **MODELYEAR** e.g. 2014
# - **MAKE** e.g. Acura
# - **MODEL** e.g. ILX
# - **VEHICLE CLASS** e.g. SUV
# - **ENGINE SIZE** e.g. 4.7
# - **CYLINDERS** e.g 6
# - **TRANSMISSION** e.g. A6
# - **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9
# - **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9
# - **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2
# - **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## Reading the data in
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ### Data Exploration
# Let's first do some descriptive exploration of our data.
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
# summarize the data
df.describe()
# -
# Let's select some features to explore further.
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
# -
# We can plot each of these features:
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']]
viz.hist()
plt.show()
# -
# Now let's plot each of these features against Emission to see how linear their relationship is:
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("FUELCONSUMPTION_COMB")
plt.ylabel("Emission")
plt.show()
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
# -
# ## Practice
# Plot __CYLINDER__ vs the Emission to see how linear their relationship is:
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
# write your code here
plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()
# -
# Double-click __here__ for the solution.
#
# <!-- Your answer is below:
#
# plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
# plt.xlabel("Cylinders")
# plt.ylabel("Emission")
# plt.show()
#
# -->
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Creating train and test dataset
# Train/test split involves splitting the dataset into mutually exclusive training and testing sets. You then train with the training set and test with the testing set.
# This provides a more accurate evaluation of out-of-sample accuracy because the testing set is not part of the data that was used to train the model. It is more realistic for real-world problems.
#
# This means that we know the outcome of each data point in this dataset, making it great to test with! And since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it is truly an out-of-sample testing.
#
#
# + button=false deletable=true new_sheet=false run_control={"read_only": false}
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
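The random mask above gives only an approximate 80/20 split. As a side note, scikit-learn's `train_test_split` is a common alternative that produces an exact, reproducibly shuffled split; a minimal sketch on toy data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

data = np.arange(100).reshape(50, 2)
# exactly 20% of the rows go to the test set; random_state makes it reproducible
train, test = train_test_split(data, test_size=0.2, random_state=42)
```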
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ### Simple Regression Model
# Linear Regression fits a linear model with coefficients B = (B1, ..., Bn) to minimize the 'residual sum of squares' between the independent x in the dataset, and the dependent y by the linear approximation.
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Train data distribution
# + button=false deletable=true jupyter={"outputs_hidden": true} new_sheet=false run_control={"read_only": false}
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Modeling
# Using sklearn package to model data.
# + button=false deletable=true jupyter={"outputs_hidden": true} new_sheet=false run_control={"read_only": false}
from sklearn import linear_model
regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit (train_x, train_y)
# The coefficients
print ('Coefficients: ', regr.coef_)
print ('Intercept: ',regr.intercept_)
# -
# As mentioned before, __Coefficient__ and __Intercept__ in the simple linear regression, are the parameters of the fit line.
# Given that it is a simple linear regression, with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data.
# Notice that all of the data must be available to traverse and calculate the parameters.
#
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Plot outputs
# -
# we can plot the fit line over the data:
# + button=false deletable=true jupyter={"outputs_hidden": true} new_sheet=false run_control={"read_only": false}
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# #### Evaluation
# We compare the actual and predicted values to calculate the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into areas that require improvement.
#
# There are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model based on the test set:
# - Mean Absolute Error (MAE): the mean of the absolute value of the errors. This is the easiest metric to understand, since it is just the average error.
# - Mean Squared Error (MSE): the mean of the squared errors. It is more popular than MAE because the focus is geared more toward large errors: squaring penalizes larger errors much more heavily than smaller ones.
# - Root Mean Squared Error (RMSE): the square root of the MSE, expressed in the same units as the target.
# - R-squared is not an error metric, but a popular measure of model accuracy. It represents how close the data are to the fitted regression line. The higher the R-squared, the better the model fits your data. The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse).
#
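As a quick check of these definitions, the metrics can be computed on a toy pair of arrays (values chosen by hand purely for illustration):

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.0, 4.0])

mae = np.mean(np.abs(y_pred - y_true))   # one error of 1 over 3 points -> 1/3
mse = np.mean((y_pred - y_true) ** 2)    # 1/3 as well, since the only error is 1
rmse = np.sqrt(mse)                      # back in the units of y
# note the argument order: r2_score(y_true, y_pred)
r2 = r2_score(y_true, y_pred)            # 1 - SS_res/SS_tot = 1 - 1/2 = 0.5
```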
# + button=false deletable=true jupyter={"outputs_hidden": true} new_sheet=false run_control={"read_only": false}
from sklearn.metrics import r2_score
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_ = regr.predict(test_x)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y_))
# + [markdown] button=false deletable=true new_sheet=false run_control={"read_only": false}
# ## Want to learn more?
#
# IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: [SPSS Modeler](http://cocl.us/ML0101EN-SPSSModeler).
#
# Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at [Watson Studio](https://cocl.us/ML0101EN_DSX)
#
# ### Thanks for completing this lesson!
#
# Notebook created by: <a href = "https://ca.linkedin.com/in/saeedaghabozorgi"><NAME></a>
#
# <hr>
# Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
|
Machine Learning/ML0101EN-Reg-Simple-Linear-Regression-Co2-py-v1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Code and text from **Python for Data Analysis** By **<NAME>**
# - Chapter 9 => **Plotting and Visualization**
# Github - [pydata-book](https://github.com/wesm/pydata-book)
#
# # %matplotlib notebook
# %matplotlib inline
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
PREVIOUS_MAX_ROWS = pd.options.display.max_rows
pd.options.display.max_rows = 20
np.random.seed(12345)
plt.rc('figure', figsize=(10, 6))
np.set_printoptions(precision=4, suppress=True)
# -
# # A Brief matplotlib API Primer
data = np.arange(10)
data
plt.plot(data)
# Plots in matplotlib reside within a **Figure** object. You can create a new figure with **plt.figure**
# You can’t make a plot with a blank figure. You have to create one or more subplots using **add_subplot**
# +
fig = plt.figure()
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)
plt.plot(np.random.randn(50).cumsum(), 'k--')
# 'k--' is a style option instructing matplotlib to plot a black dashed line
# -
# The objects returned by **fig.add_subplot** here are **AxesSubplot objects**, on which you can directly plot on the other empty subplots by calling each one’s instance method
_ = ax1.hist(np.random.randn(100), bins=20, color='k', alpha=0.3)
ax2.scatter(np.arange(30), np.arange(30) + 3 * np.random.randn(30))
fig
# Creating a figure with a grid of subplots is a very common task, so matplotlib includes a convenience method, **plt.subplots**, that creates a new figure and returns a NumPy array containing the created subplot objects
# +
fig, axes = plt.subplots(2, 3)
axes
# axes array can be easily indexed like a two-dimensional
# array; for example, axes[0, 1].
# +
# You can also indicate that subplots should share the
# same x- or y-axis using sharex and sharey, respectively
fig, axes = plt.subplots(2, 3, sharey=True)
# -
# **pyplot.subplots options**
#
# |Argument| Description|
# |--|--|
# |nrows| Number of rows of subplots|
# |ncols| Number of columns of subplots|
# |sharex| All subplots should use the same x-axis ticks (adjusting the xlim will affect all subplots)|
# |sharey| All subplots should use the same y-axis ticks (adjusting the ylim will affect all subplots)|
# |subplot_kw| Dict of keywords passed to add_subplot call used to create each subplot|
# |**fig_kw| Additional keywords to subplots are used when creating the figure, such as plt.subplots(2, 2, figsize=(8, 6))|
#
# ### Adjusting the spacing around subplots
# By default matplotlib leaves a certain amount of padding around the outside of the subplots and spacing between subplots. This spacing is all specified relative to the height and width of the plot, so that if you resize the plot either programmatically or manually using the GUI window, the plot will dynamically adjust itself. You can change the spacing using the **subplots_adjust** method on Figure objects, also available as a top-level function
# ***subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)***
#
# **wspace** and **hspace** control the percentage of the figure width and figure height, respectively, to use as spacing between subplots.
# +
# shrinking spaces to zero
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
for i in range(2):
for j in range(2):
axes[i, j].hist(np.random.randn(500), bins=50, color='k', alpha=0.5)
plt.subplots_adjust(wspace=0, hspace=0)
# -
# ## Colors, Markers, and Line Styles
#
# +
# plot function accepts an array of x and y and optionally a string abbreviation
# indicating color and line style
# ax.plot(x, y, 'g--')
# In practice
# ax.plot(x, y, linestyle='--', color='g')
# +
# # ?matplotlib.pyplot.plot
# +
# using MARKERS to highlight the actual data points
from numpy.random import randn
# plt.plot(randn(30).cumsum(), 'ko--')
plt.plot(randn(30).cumsum(), color='k', linestyle='dashed', marker='o')
# -
# For line plots, you will notice that subsequent points are linearly interpolated by default. This can be altered with the **drawstyle** option
data = np.random.randn(30).cumsum()
plt.plot(data, 'k--', label='Default')
plt.plot(data, 'k-', drawstyle='steps-post', label='steps-post')
plt.legend(loc='best')
# ## Ticks, Labels, Legend
# The pyplot interface, designed for interactive use, consists of methods like **xlim, xticks, xticklabels => plot range, tick locations, tick labels**, respectively.
# They can be used in two ways:
# * Called with no arguments returns the current parameter value (e.g., plt.xlim() returns the current x-axis plotting range)
# * Called with parameters sets the parameter value (e.g., plt.xlim([0, 10]), sets the x-axis range to 0 to 10)
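A quick illustration of this getter/setter behavior (the `Agg` backend is used here only so the sketch runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for a headless run
import matplotlib.pyplot as plt

plt.plot(range(5))
plt.xlim([0, 10])    # setter: fixes the x-axis plotting range
lo, hi = plt.xlim()  # getter: returns the current range, here (0.0, 10.0)
plt.close("all")
```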
# ### Setting the title, axis labels, ticks, and ticklabels
#
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(np.random.randn(1000).cumsum())
# To change the x-axis ticks, it’s easiest to use **set_xticks** and **set_xticklabels**. The former instructs matplotlib where to place the ticks along the data range; by default these locations will also be the labels. But we can set any other values as the labels using set_xticklabels
ticks = ax.set_xticks([0, 250, 500, 750, 1000])
labels = ax.set_xticklabels(['one', 'two', 'three', 'four', 'five'],
rotation=30, fontsize='small')
ax.set_title('My first matplotlib plot')
# +
ax.set_xlabel('Stages')
# similar steps for y-axis
# -
fig
# The axes class has a **set** method that allows batch setting of plot properties. From the prior example, we could also have written
props = {
'title': 'My first matplotlib plot',
'xlabel': 'Stages'
}
ax.set(**props)
fig
# ### Adding Legends
#
# Legends are another critical element for identifying plot elements
fig = plt.figure(); ax=fig.add_subplot(1, 1, 1)
ax.plot(randn(1000).cumsum(), 'k', label='one')
ax.plot(randn(1000).cumsum(), 'k--', label='two')
ax.plot(randn(1000).cumsum(), 'k.', label='three')
ax.legend(loc='best')
fig
# ## Annotations and drawing on a Subplot
# In addition to the standard plot types, you may wish to draw your own plot annotations, which could consist of text, arrows, or other shapes. You can add annotations and text using the **text**, **arrow**, and **annotate** functions.
# +
# text draws text at given coordinates (x, y) on the plot
# with optional custom styling
# ax.text(x, y, 'hello world',
# family='monospace', fontsize=10)
# +
# Annotations can draw both text and arrows arranged appropriately
from datetime import datetime
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
data = pd.read_csv(r'examples/spx.csv', index_col=0, parse_dates=True)
spx = data['SPX']
spx.plot(ax=ax, style='k-')
crisis_data = [
(datetime(2007, 10, 11), 'Peak of bull market'),
(datetime(2008, 3, 12), 'Bear Stearns Fails'),
(datetime(2008, 9, 15), 'Lehman Bankruptcy')
]
for date, label in crisis_data:
ax.annotate(label, xy=(date, spx.asof(date) + 75),
xytext=(date, spx.asof(date) + 255),
arrowprops=dict(facecolor='black', headwidth=4, width=2,
headlength=4),
horizontalalignment='left', verticalalignment='top')
# Zoom in on 2007-2010
ax.set_xlim(['1/1/2007', '1/1/2011'])
ax.set_ylim([600, 1800])
ax.set_title('Important dates in the 2008-2009 financial crisis')
# -
# drawing shapes
# matplotlib has objects that represent many common shapes, referred to as patches
# +
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
rect = plt.Rectangle((0.2, 0.75), 0.4, 0.15, color='k', alpha=0.3)
circ = plt.Circle((0.7, 0.2), 0.15, color='b', alpha=0.3)
pgon = plt.Polygon([[0.15, 0.15], [0.35, 0.4], [0.2, 0.6]],
color='g', alpha=0.5)
ax.add_patch(rect)
ax.add_patch(circ)
ax.add_patch(pgon)
# -
# # Plotting with pandas and seaborn
# In pandas we may have multiple columns of data, along with row and column labels. Pandas itself has built-in methods that simplify creating visualizations from DataFrame and Series objects. Another library is **seaborn**, a statistical graphics library created by <NAME>. Seaborn simplifies creating many common visualization
# types.
#
#
# Importing seaborn modifies the default matplotlib color schemes
# and plot styles to improve readability and aesthetics. Even if you do
# not use the seaborn API, you may prefer to import seaborn as a
# simple way to improve the visual aesthetics of general matplotlib
# plots.
#
# ## Line Plots
# Series and DataFrame each have a plot attribute for making some basic plot types. By default, ***plot()*** makes line plots
s = pd.Series(np.random.randn(10).cumsum(), index=np.arange(0, 100, 10))
s.plot()
# The Series object’s ***index*** is passed to matplotlib for plotting on the ***x-axis***, though you can disable this by passing **use_index=False**. The x-axis ticks and limits can be adjusted with the xticks and xlim options, and y-axis respectively with yticks and ylim.
# Most of pandas’s plotting methods accept an optional ***ax*** parameter, which can be a
# matplotlib subplot object.
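A short sketch of the `use_index` option (again on the headless `Agg` backend): with `use_index=False` the points are plotted against their positions 0..9 instead of the index values 0..90.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import numpy as np
import pandas as pd

s = pd.Series(np.random.randn(10).cumsum(), index=np.arange(0, 100, 10))
ax = s.plot(use_index=False)  # x values become 0..9 rather than the index 0..90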
# +
df = pd.DataFrame(np.random.randn(10, 4).cumsum(0), columns=['A', 'B', 'C', 'D'],
index=np.arange(0, 100, 10))
df.plot()
# -
# **Series.plot method arguments**
#
# |Argument| Description|
# |--|--|
# |label| Label for plot legend|
# |ax| matplotlib subplot object to plot on; if nothing passed, uses active matplotlib subplot|
# |style| Style string, like 'ko--', to be passed to matplotlib|
# |alpha| The plot fill opacity (from 0 to 1)|
# |kind| Can be 'area', 'bar', 'barh', 'density', 'hist', 'kde', 'line', 'pie'|
# |logy| Use logarithmic scaling on the y-axis|
# |use_index| Use the object index for tick labels|
# |rot| Rotation of tick labels (0 through 360)|
# |xticks| Values to use for x-axis ticks|
# |yticks| Values to use for y-axis ticks|
# |xlim| x-axis limits (e.g., [0, 10])|
# |ylim| y-axis limits|
# |grid| Display axis grid (on by default)|
# ***DataFrame-specific plot arguments***
#
# |Argument| Description|
# |--|--|
# |subplots| Plot each DataFrame column in a separate subplot|
# |sharex| If subplots=True, share the same x-axis, linking ticks and limits|
# |sharey| If subplots=True, share the same y-axis|
# |figsize| Size of figure to create as tuple|
# |title| Plot title as string|
# |legend| Add a subplot legend (True by default)|
# |sort_columns| Plot columns in alphabetical order; by default uses existing column order|
# ## Bar Plots
# +
# plot.bar() and plot.barh() make vertical and horizontal bar plots
# -
fig, axes = plt.subplots(2, 1)
data = pd.Series(np.random.rand(16), index=list('abcdefghijklmnop'))
data.plot.bar(ax=axes[0], color='k', alpha=0.7)
data.plot.barh(ax=axes[1], color='k', alpha=0.7)
# With a DataFrame, bar plots group the values in each row together as a cluster of
# bars, side by side, one bar per column.
# +
df = pd.DataFrame(np.random.rand(6, 4), index=[
'one', 'two', 'three', 'four', 'five', 'six'],
columns=pd.Index(['A', 'B', 'C', 'D'], name='Genus'))
df
# -
df.plot.bar()
# Note that the name “Genus” on the DataFrame’s columns is used to title the legend.
# We create stacked bar plots from a DataFrame by passing **stacked=True**, resulting in the value in each row being stacked together
df.plot.barh(stacked=True, alpha=0.5)
tips = pd.read_csv(r'examples/tips.csv')
tips.head()
# +
# https://chrisalbon.com/python/data_wrangling/pandas_crosstabs/
party_counts = pd.crosstab(tips['day'], tips['size'])
party_counts
# -
party_counts = party_counts.loc[:, 2:5]
party_counts
print(party_counts.sum(1))
party_pcts = party_counts.div(party_counts.sum(1), axis=0)
party_pcts
party_pcts.plot.bar()
# +
# same plot using seaborn
import seaborn as sns
tips['tip_pct'] = tips['tip'] / (tips['total_bill'] - tips['tip'])
# -
tips.head()
sns.barplot(x='tip_pct', y='day', data=tips, orient='h')
# seaborn.barplot has a ***hue*** option that enables us to split by an additional categorical value
sns.barplot(x='tip_pct', y='day', hue='time', data=tips, orient='h')
# ## Histograms and Density Plots
# A histogram is a kind of bar plot that gives a discretized display of value frequency.
# The data points are split into discrete, evenly spaced bins, and the number of data
# points in each bin is plotted.
tips['tip_pct'].plot.hist(bins=50)
# A related plot type is a density plot, which is formed by computing an estimate of a continuous probability distribution that might have generated the observed data. The usual procedure is to approximate this distribution as a mixture of “kernels”—that is, simpler distributions like the normal distribution. Thus, density plots are also known as kernel density estimate (KDE) plots. Using plot.kde makes a density plot using the conventional mixture-of-normals estimate
tips['tip_pct'].plot.density()
# Seaborn makes histograms and density plots even easier through its **distplot**
# method, which can plot both a histogram and a continuous density estimate simulta‐
# neously
# +
# example - bimodal distribution consisting of draws from
# two different standard normal distributions
# -
comp1 = np.random.normal(0, 1, size=200)
comp2 = np.random.normal(10, 2, size=200)
values = pd.Series(np.concatenate([comp1, comp2]))
sns.distplot(values, bins=100, color='k')
# ## Scatter or Points Plots
#
# Point plots or scatter plots can be a useful way of examining the relationship between two one-dimensional data series.
macro = pd.read_csv(r'examples/macrodata.csv')
data = macro[['cpi', 'm1', 'tbilrate', 'unemp']]
trans_data = np.log(data).diff().dropna()
trans_data[-5:]
# ***We can then use seaborn’s regplot method, which makes a scatter plot and fits a linear regression line***
sns.regplot(x='m1', y='unemp', data=trans_data)
plt.title('Changes in log m1 versus log unemp')
# In exploratory data analysis it’s helpful to be able to look at all the scatter plots among a group of variables; this is known as a ***pairs plot*** or ***scatter plot matrix***. Making such a plot from scratch is a bit of work, so seaborn has a convenient **pairplot** function, which supports placing histograms or density estimates of each variable along the diagonal
sns.pairplot(trans_data, diag_kind='kde', plot_kws={'alpha': 0.7})
# **plot_kws** - enables us to pass down configuration options to the individual plotting calls on the off-diagonal elements.
# ## Facet Grids and Categorical Data
#
# What about datasets where we have additional grouping dimensions? One way to visualize data with many categorical variables is to use a facet grid. Seaborn has a useful built-in function **factorplot** that simplifies making many kinds of faceted plots
#
# * renamed to **catplot**
sns.catplot(x='day', y='tip_pct', hue='time', col='smoker',
kind='bar', data=tips[tips.tip_pct < 1])
# Instead of grouping by 'time' by different bar colors within a facet, we can also expand the facet grid by adding one row per time value
sns.catplot(x='day', y='tip_pct', row='time', col='smoker', kind='bar',
data=tips[tips.tip_pct < 1])
# ***catplot*** supports other plot types that may be useful depending on what you are trying to display. For example, box plots (which show the median, quartiles, and outliers) can be an effective visualization type
sns.catplot(x='tip_pct', y='day', kind='box', data=tips[tips.tip_pct < 0.5])
matplotlib/Plotting and Visualization.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Render VDOM `json` in Python
# +
from IPython.display import display
def VDOM(data={}):
bundle = {}
bundle['application/vdom.v1+json'] = data
display(bundle, raw=True)
VDOM({
'tagName': 'div',
'attributes': {},
'children': [{
'tagName': 'h1',
'attributes': {},
'children': 'Our Incredibly Declarative Example',
'key': 0
}, {
'tagName': 'p',
'attributes': {},
'children': ['Can you believe we wrote this ', {
'tagName': 'b',
'attributes': {},
'children': 'in Python',
'key': 1
}, '?'],
'key': 1
}, {
'tagName': 'img',
'attributes': {
'src': 'https://media.giphy.com/media/xUPGcguWZHRC2HyBRS/giphy.gif'
},
'key': 2
}, {
'tagName': 'p',
'attributes': {},
'children': ['What will ', {
'tagName': 'b',
'attributes': {},
'children': 'you',
'key': 1
}, ' create next?'],
'key': 3
}]
})
# -
# # Render VDOM using the `vdom` Python package
#
# May need to first run `pip install vdom`
# +
from vdom import h1, p, img, div, b
div(
h1('Our Incredibly Declarative Example'),
p('Can you believe we wrote this ', b('in Python'), '?'),
img(src="https://media.giphy.com/media/xUPGcguWZHRC2HyBRS/giphy.gif"),
p('What will ', b('you'), ' create next?'),
)
# -
examples/vdom/vdom.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
# Declare a Base using `automap_base()`
Base = automap_base()
# reflect the tables
# Use the Base class to reflect the database tables
Base.prepare(engine, reflect=True)
# View all of the classes that automap found
# Print all of the classes mapped to the Base
Base.classes.keys()
# Save references to each table
Station = Base.classes.station
Measurement = Base.classes.measurement
# Create our session (link) from Python to the DB
session = Session(engine)
# Use the session to query Station table and display the first 5 name.
for row in session.query(Station, Station.name).limit(5).all():
print(row)
# Close the session
session.close()
# # Exploratory Precipitation Analysis
# Find the most recent date in the data set.
# Latest Date
rec_date=session.query(Measurement.date).order_by(Measurement.date.desc()).first().date
rec_date
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculate the date one year from the last date in data set.
one_year = dt.datetime.strptime(rec_date, '%Y-%m-%d') - dt.timedelta(days=365)
one_year
# Perform a query to retrieve the data and precipitation scores
p_score = session.query(Measurement.date, func.avg(Measurement.prcp)).\
filter(Measurement.date >= one_year).\
group_by(Measurement.date).all()
p_score
# -
# Save the query results as a Pandas DataFrame and set the index to the date column
p_df = pd.DataFrame(p_score, columns=['Date', 'Precipitation'])
p_df.set_index('Date', inplace=True)
p_df.sort_values("Date")
p_df.plot.bar()
plt.locator_params(axis='x', nbins=6)
# plt.xlim("2016-08-24", "2016-11-24", "2017-02-24", "2017-05-24", "2017-08-24")
# plt.tight_layout()
plt.show()
# Use Pandas to calculate the summary statistics for the precipitation data
p_df.describe()
# # Exploratory Station Analysis
# +
# Design a query to calculate the total number stations in the dataset
# engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# conn = engine.connect()
# data = pd.read_sql("SELECT * FROM Station", conn)
# data.count()
session.query(func.count(Station.station)).all()
# +
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
for row in session.query(Station, Station.station).all():
print(row)
active = session.query(Measurement.station, func.count(Measurement.station)).\
    group_by(Measurement.station).\
    order_by(func.count(Measurement.station).desc()).all()
print(active)
# Row counts per station (refactored from repeated per-station queries)
station_ids = ['USC00519397', 'USC00513117', 'USC00514830', 'USC00517948',
               'USC00518838', 'USC00519523', 'USC00519281', 'USC00511918',
               'USC00516128']
for station_id in station_ids:
    n_rows = session.query(Measurement.station).filter(Measurement.station == station_id).count()
    print(f"There are {n_rows} rows for station {station_id}")
# +
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
# Hint: You will need to use a function such as func.min, func.max, func.avg, and func.count in your queries.
session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.station == 'USC00519281').all()
# +
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
# Calculate the date one year from the last date in data set.
one_year = dt.datetime.strptime(rec_date, '%Y-%m-%d') - dt.timedelta(days=365)
# one_year
tobs_score = session.query(Measurement.station, Measurement.tobs).\
filter(Measurement.station == 'USC00519281').\
filter(Measurement.date >= one_year).all()
tobs_score
# +
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
tobs_df = pd.DataFrame(tobs_score, columns=['station', 'tobs'])
tobs_df.plot.hist(by='station')
plt.grid()
plt.xlabel('Temperature')
plt.show()
# -
# # Close session
# Close Session
session.close()
climate_starter.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Project: Investigate a Dataset
#
# Project rubric:
# https://review.udacity.com/#!/rubrics/107/view
#
# Submission:
# What to include in your submission
# 1. A PDF or HTML file containing your analysis. This file should include:
# 1. A note specifying which dataset you analyzed
# 1. A statement of the question(s) you posed
# 1. A description of what you did to investigate those questions
# 1. Documentation of any data wrangling you did
# 1. Summary statistics and plots communicating your final results
# 1. Code you used to perform your analysis. If you used an Jupyter notebook, you can submit your .ipynb. Otherwise, you 1. should submit the code separately in .py file(s).
# 1. A list of Web sites, books, forums, blog posts, github repositories, etc. that you referred to or used in creating your submission (add N/A if you did not use any such resources).
#
# Data dictionary:
# https://www.kaggle.com/c/titanic/data
# ### Introduction
# I've chosen the Titanic dataset for this project. Using additional facts found on the internet (I've captured some of them below), I used this sample dataset to compare the survival rate across different groupings of the passengers. The goal is to identify which passenger group has the highest chance of survival.
# Based on what I've read in a wiki article, I've divided the groups into men, women, and children (age < 18), and also by ticket class, since this grouping makes logical sense to me.
#
# ##### Interesting facts:
# 1. Titanic Passengers. 3,547 - the maximum capacity of the RMS Titanic when fully loaded with passengers and crew. 2,222 - the total number of people on board (passengers and crew).
# 1. The Titanic was originally designed to carry 64 lifeboats. To avoid cluttering the decks, the ship ended up carrying 20 on her maiden voyage. Only 706 passengers and crew would survive the disaster.
# 1. The ship could have stayed afloat had only four compartments flooded... Five became flooded. 1,503 people total died, including passengers and crew. One of the first lifeboats to leave the Titanic carried only 28 people; it could have held 64 people.
# 1. The number of casualties of the sinking is unclear, due to a number of factors. These include confusion over the passenger list, which included some names of people who cancelled their trip at the last minute, and the fact that several passengers travelled under aliases for various reasons and were therefore double-counted on the casualty lists.[207] The death toll has been put at between 1,490 and 1,635 people
# 1. The last living survivor, <NAME> from England, who at only nine weeks old was the youngest passenger on board, died aged 97 on 31 May 2009
#
# ##### Sources
# 1. www.titanicfacts.net/titanic-passengers.html
# 1. http://archive.jsonline.com/entertainment/100-unsinkable-facts-about-the-titanic-2t4psu6-147436195.html
# 1. https://en.wikipedia.org/wiki/RMS_Titanic
#
# ##### Technical References
# 1. https://eazybi.com/blog/data_visualization_and_chart_types/
# 1. https://stackoverflow.com/questions/29498652/plot-bar-graph-from-pandas-dataframe
# 1. https://stackoverflow.com/questions/42784930/how-to-plot-a-dataframe-grouped-by-two-columns-in-matplotlib-and-pandas
# 1. https://www.w3schools.com/colors/colors_picker.asp
# 1. https://stackoverflow.com/questions/11244514/modify-tick-label-text
#
#
# ### Questions
# 1. Which group has the highest survival rate?
# 1. Which group in the Ticket class have the highest survival rate?
# 1. Which group amongst men, women and children (age <= 18) have the highest survival rate?
# +
import pandas as pd
# pointing to the data file folder
import os.path
mypath = '/home/nbuser/'
os.chdir(mypath)
# checking the file layout
filename = 'titanic-data.csv'
titanic_df = pd.read_csv(filename)
print 'sample size:', len(titanic_df)
titanic_df.head()
# -
# ## Data Wrangling
# 1. To create the additional column for children (age < 18), I first need to clean up any non-integer values.
# 1. As many passengers didn't have a proper age value, I replaced the missing values with the mean age of the dataset.
# 1. Before replacing the NaN values, let's start by checking how big a portion of the data they make up, so that I know how much data I'm affecting.
#
# +
# Age data in the set consists of NaN
empty_values = len(titanic_df[titanic_df['Age'].isnull()])
# check for percentages of modified data
print float(empty_values) / float(len(titanic_df)) * 100
# get the average age for the data set
titanic_df['Age'].fillna(titanic_df['Age'].mean(), inplace=True)
# check again to ensure no more NaN value for age
titanic_df[titanic_df['Age'].isnull()]
# adding IsAdult flag column
titanic_df['IsAdult'] = titanic_df['Age'] > 18
titanic_df.head()
# -
# ### Note
# 1. So a big portion of the data, 19.86%, was modified under the above assumption. This will unfortunately skew the age analysis later.
#
# ### Start by getting the descriptive stats for the groups for initial comparison
# 1. So I am going to start by exploring the stats for the different group to see if there's any clear pattern at this stage.
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("whitegrid")
sns.set(color_codes=True)
# %pylab inline
# +
def autolabel(ax, values):
rects = ax.patches
# Now make some labels
labels = ["%d" % i for i in (values)]
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height*1.01, label, ha='center', va='bottom')
def rename_xticks(ax, title):
# Overwrite the xticks value for survival
if title=='Survival':
labels = ['','Perished','', 'Survived']
ax.set_xticklabels(labels)
elif title=='Survived':
labels = ['Perished','Survived']
ax.set_xticklabels(labels)
elif title=='Ticket Class':
labels = ['','First','', 'Second', '', 'Third']
ax.set_xticklabels(labels)
elif title=='Pclass':
labels = ['First','Second', 'Third']
ax.set_xticklabels(labels)
def plothist(title, fig, position, df, color, bins=8):
ax = fig.add_subplot(2, 3, position)
ax.set_title(title + ' of Passengers', y=1.08)
ax.set(xlabel=title, ylabel="Count")
values = ax.hist(df, color=color, bins=bins);
autolabel(ax, values[0])
rename_xticks(ax, title)
# +
fig = plt.figure(figsize=(15,5))
fig.subplots_adjust(hspace=.6)
plothist('Age', fig, 1, titanic_df['Age'], '#80ccff')
plothist('Ticket Class', fig, 2, titanic_df['Pclass'], '#ccebff', bins=[1,2,3,4])
plothist('# of siblings / spouses aboard the Titanic', fig, 3, titanic_df['SibSp'], '#33adff')
plothist('# of parents / children aboard the Titanic', fig, 4, titanic_df['Parch'], '#007acc')
plothist('Survival', fig, 5, titanic_df['Survived'], '#003d66', bins=[0,1,2])
plothist('Fare', fig, 6, titanic_df['Fare'], '#001f33')
# -
survival_pct = 549/891. * 100
print round(survival_pct,2),'%'
# ### Data Overview
# Let's begin by analysing the data across a few of the categories. I've looked at all the categories which includes numerical values to start the analysis.
#
# 1. In terms of age, the majority of the passengers, 407, are within the 20 to 30 age range.
# 1. From the ticket class perspective, the majority of passengers, 491 people, are in the third class.
# 1. Looking at the number of siblings or spouses aboard, the majority, 608 people, have none on board.
# 1. There appear to be 7 people with 7 siblings or spouses on board, and it would have been rather unfortunate if those 7 people were actually one family.
# 1. This coincides with the data on the number of parents or children aboard, where around 678 people have none.
# 1. Unfortunately, the majority of the passengers didn't survive the trip, with 549 deaths.
# 1. That is 61.62% of all the sample passengers on board.
# 1. It appears that the ticket class doesn't match exactly with the fare paid: the majority of passengers, 773, paid up to 50 pounds for the ticket, but there are only 491 passengers in the third class.
# 1. Three passengers appear to have paid 500 pounds to board this ship.
#
#
# ### Limitation
# Before proceeding further, let's look at some limitation and assumptions that we have with the current data.
#
# 1. We don't know how the sample was selected from the original population, hence there could already be some bias in the data. So the assumption here is that the sample data set should be random enough so that our analysis can still be carried out.
# 1. We don't have all the correct values for age in this data set, in fact 19.86% of it, hence we try to fill up the empty values using the mean to avoid skewing the data too much to the left or to the right.
# 1. We could have probably done the same using the median value instead of the mean. But the mean was chosen for this analysis.
# 1. Another possible value would be to replace it with 0, however that would have skewed the data towards the left which could impact the analysis for IsAdult later on.
# 1. We could have also dropped the data with non integer age value, but as I'm interested to see how the other factors could impact my analysis, I've decided to keep these data.
# 1. Certain values are mixed into one category such as number of siblings and number of spouses, thus we wouldn't be able to make accurate assumptions about the data unless we're able to differentiate them clearly.
# 1. The same goes for the category of number of parents or children aboard; we wouldn't be able to tell which is which correctly either.
# 1. Also, in the interest of time, I've decided to focus on a smaller subsets of factors and did not analyse through each possible combinations.
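# As a small sketch of the imputation trade-off discussed above (toy numbers, not the Titanic data), mean imputation is pulled toward outliers while median imputation is not:

```python
import numpy as np
import pandas as pd

ages = pd.Series([2.0, 18.0, 25.0, 30.0, 35.0, 40.0, 80.0, np.nan, np.nan])

# two candidate imputation strategies for the missing ages
filled_mean = ages.fillna(ages.mean())
filled_median = ages.fillna(ages.median())

# the 80-year-old outlier drags the mean (~32.9) above the median (30.0)
filled_mean.iloc[-1], filled_median.iloc[-1]
```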
#
# Let's continue by looking at multiple variables comparison.
# We are interested to see which category correlates closer to the survival rate.
# +
def rename_yticks(ax, title):
if title=='Pclass':
labels = ['','First','','Second','', 'Third']
ax.set_yticklabels(labels)
elif title=='Age':
labels = ['','0','20','40','60','80', '100']
ax.set_yticklabels(labels)
elif title=='SibSp':
labels = ['','0','2','4','6','8', '10']
ax.set_yticklabels(labels)
elif title=='Parch':
labels = ['','0','1','2','3','4', '5','6','7']
ax.set_yticklabels(labels)
elif title=='Fare':
labels = ['','0','100','200','300','400', '500', '600']
ax.set_yticklabels(labels)
elif title=='Survived':
labels = ['','Perished','','','', '', 'Survived']
ax.set_yticklabels(labels)
def plot_violin(df, x, y):
f, ax = plt.subplots(figsize=(8, 8))
# Show each distribution with both violins and points
sns.violinplot(x=x, y=y, data=df, inner="box", palette="Set3", cut=0, linewidth=3)
sns.despine(left=True)
title = x + ' by ' + y
f.suptitle(title, fontsize=18, fontweight='bold')
ax.set_xlabel(x,size = 16,alpha=0.7)
ax.set_ylabel(y,size = 16,alpha=0.7);
#ax.set_ylim(0)
rename_xticks(ax, x)
rename_yticks(ax, y)
# -
plot_violin(titanic_df,'Pclass','Fare')
# ### Ticket Class by Fare
# 1. Fares for the first class passengers have the widest range, with a median around fifty dollars and values going all the way up to slightly over five hundred dollars.
# 1. The majority of the passengers in second and third class paid less than fifty dollars for the fare.
# 1. This explains why the overall majority of passengers paid less than fifty dollars. The second class ticket holders may have bought their tickets via an early bird scheme, or the fares were heavily discounted for some reason, such as children's or senior citizens' fares.
# 1. The remaining passengers that paid less than fifty dollars must have come from the first class.
#
plot_violin(titanic_df,'Pclass','Age')
# ### Ticket class by Age
# 1. Not surprisingly, each ticket class has its highest concentration of passengers around the age of 30.
# 1. And indeed, the third class has the highest number of passengers in that age group.
# 1. We do see a higher distribution of ages 10 and below among second class passengers, and a higher percentage of passengers over the age of 55 in both the first and second class. This could account for the pricing discrepancy that we saw earlier.
plot_violin(titanic_df,'Survived','SibSp')
# ### Survived by Siblings / Spouses aboard
# 1. For the non survivor group, a high majority of passengers have 0 siblings or spouses aboard.
# 1. For the survivor group, the highest majority are also for those passengers with 0 siblings or spouses aboard.
# 1. We can also see that passengers with 1 sibling or spouse aboard seem to have a higher survival rate as well.
# 1. Hence perhaps we could do further analysis to see if this somehow indirectly affected their survival rate.
plot_violin(titanic_df,'Survived','Parch')
# ### Survived by Parents / Childrens aboard
# 1. For both groups the pattern appears rather similar, with the highest number of passengers in both the survivor and non-survivor groups having zero parents or children aboard.
# 1. The survivor group has a wider distribution of passengers with one parent or child aboard.
# 1. Only the survivor group shows a fatter distribution of passengers with two parents or children aboard.
# 1. This could be an interesting angle to check whether it affects the survival rate of the passengers, so we'll include it in the analysis in the next part as well.
plot_violin(titanic_df,'Survived','Fare')
# ### Survived by Fare
# 1. Looking at the distribution of passengers by fare, we can see that a very high number of passengers that didn't survive paid less than 50 dollars.
# 1. There is also a thin tail heading upwards to fares slightly less than 300 dollars.
# 1. The survivor group also has its majority in the under-50-dollar fare range, with an extremely long tail towards the 500-dollar fare range.
# 1. This is because fares under fifty dollars are the majority, as can be seen in the earlier bar plot.
# +
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
def plot_barplot(x, y, hue, df):
title = x + " by " + y + " according to " + hue
ax = sns.barplot(x=x, y=y, hue=hue, data=df);
ax.set_title(title);
# -
plot_barplot("Sex", "Survived", "Pclass", titanic_df)
# ### Sex by Survived according to Ticket Class
# 1. The plot shows a big contrast in survival rate between the male and female groups.
# 1. The order of survival also seems to follow the ticket class, where the chance of survival goes down from first class to third class.
# 1. This could probably be explained by passengers being evacuated in order of their ticket class.
# ### Interim Summary
# 1. Based on the analysis above, the passenger group with the highest survival rate would be a female first class passenger around the age of 30.
# 1. It also appears worth exploring the two additional factors of number of siblings or spouses aboard and number of parents or children aboard.
#
# Let's dive deeper to get a closer understanding of the probabilities.
#
# ### Limitation
# 1. As we're not doing any statistical test to prove any direct correlation, the reasons given are just assumptions.
# +
def plot_stats_count(attributes, rotate=0):
survival_tag = 'Survived'
attr_list = []
# check the parameter passed contains multiple values as this needs to be appended differently
if isinstance(attributes, str):
attr_list.append(attributes)
xtitle = "Survival by " + attributes
else:
attr_list.extend(attributes)
xtitle = "Survival by " + ','.join(attributes)
attr_list.append(survival_tag)
# group by the needed parameter and count no of rows and later the percentage by each sub group
survivor_by_attr = titanic_df.groupby(attr_list)[survival_tag].agg({'Survival Count':'count'})
survivor_by_attr_formatted = survivor_by_attr.unstack()
display(survivor_by_attr_formatted)
# Create a simple bar plot to show the stats
ax = survivor_by_attr_formatted.plot(kind='bar', title = xtitle, figsize=(10, 5), legend=True, fontsize=12, rot=rotate,
color=['#ff4d4d', '#1aa3ff'])
ax.set_ylabel("Survival count", fontsize=12)
ax.plot()
return survivor_by_attr
def plot_stats_percentage(attributes, survivor_by_attr, rotate=0):
survivor_pcts = survivor_by_attr.groupby(level=0).apply(lambda x:
100*x / float(x.sum())).unstack()['Survival Count']
display(survivor_pcts)
# Create a simple bar plot to show the stats
if isinstance(attributes, str):
xtitle = "Survival by " + attributes
else:
xtitle = "Survival by " + ','.join(attributes)
ax = survivor_pcts.plot(kind='bar', title = xtitle, figsize=(10, 5), legend=True, fontsize=12, rot=rotate,
color=['#ff4d4d', '#1aa3ff'])
ax.set_ylabel("Survival %", fontsize=12)
ax.plot()
return survivor_pcts
# -
survivor_by_attr = plot_stats_count('Parch')
survivor_by_attr_pct = plot_stats_percentage('Parch', survivor_by_attr)
# ### Survival by Parents / Children Aboard
# 1. Passengers with three parents or children aboard appear to have the highest survival rate, at 60%.
# 2. This is followed by passengers with one parent or child aboard at 55%, and the group with two parents or children aboard at 50 percent.
survivor_by_attr = plot_stats_count('SibSp')
survivor_by_attr_pct = plot_stats_percentage('SibSp', survivor_by_attr)
# ### Survival by Siblings or Spouses Aboard
# 1. Passengers with one sibling or spouse aboard have the highest survival rate at 53.59%.
# 2. This is followed by passengers with two siblings or spouses aboard at 46.43%, and the group with zero siblings or spouses aboard at 34.54 percent.
survivor_by_attr = plot_stats_count('Pclass')
survivor_by_attr_pct = plot_stats_percentage('Pclass', survivor_by_attr)
# ### Survival by Ticket Class
# 1. Passengers in the first class have the highest survival rate, 62.96%, against a death rate of 37.03%.
# 2. Passengers in the second class have an almost even split: 52.71% surviving vs 47.28% not surviving.
# 3. The chance of surviving in the third class is less than a quarter, at 24.24%, against a non-survival rate of 75.76%.
survivor_by_attr = plot_stats_count('Sex')
survivor_by_attr_pct = plot_stats_percentage('Sex', survivor_by_attr)
# ### Survival according to gender
# 1. Female passengers have the highest rate of survival, 74.2%, against a death rate of 25.8%.
# 1. The chance of survival for a female passenger is almost 3 times the chance of not surviving.
# 1. For a male passenger, however, the chances of survival are rather slim: 18.9% against 81.11%.
survivor_by_attr = plot_stats_count('IsAdult')
survivor_by_attr_pct = plot_stats_percentage('IsAdult', survivor_by_attr)
# ### Survival according to age
# 1. A child has a slightly higher survival rate of 50.36% against the death rate of 49.64%.
# 1. However this data might be a little off due to the NaN data which was replaced with the mean value of this sample data set.
# 1. The chances of survival for an adult are 36.17%, compared to 63.83% not surviving.
#
# ### Subsequent finding
# 1. Female passengers have a higher chance of survival, at 74.2%.
# 1. First class passengers have the highest chance of survival, with 62.96%.
# 1. Passengers with three parents or children aboard have the highest survival rate at 60%.
# 1. Passengers with one sibling or spouse aboard have the highest survival rate at 53.59%.
# 1. Children have a slight edge in survival rate, with 50.36% surviving.
#
# ### Next
# 1. Let's check on combinations of the top three factors given above, and see which combination has the highest survival rate.
factors1 = ['Sex','Pclass']
survivor_by_attr = plot_stats_count(factors1)
survivor_by_attr_pct = plot_stats_percentage(factors1, survivor_by_attr)
# ### Survival according to gender and ticket class
# 1. A female in first class has the highest chance of surviving, making up 29% of the survivor group.
# 1. Females in second and third class have the next highest chances, at 22.29% and 22.93% respectively of the whole survivor group.
# 1. A female passenger in third class has roughly a 50/50 chance of surviving.
# 1. A male passenger in third class has the lowest chance of survival, with a mortality share of 52%, more than half of the entire non-surviving group.
# 1. The survival rates for male passengers in first and third class are close (7.8% vs 8.15%); comparing the non-survival rates, however, reveals a huge discrepancy of 13.34% vs 52% of deaths for first class against third class.
factors2 = ['Sex','Parch']
survivor_by_attr = plot_stats_count(factors2, 90)
survivor_by_attr_pct = plot_stats_percentage(factors2, survivor_by_attr, 90)
# ### Survival by Gender, Number of Parent or Children aboard
# 1. It appears that females with zero parents or children aboard have the highest chance of survival at 48.72%.
# 1. This is followed by females with one parent or child aboard at 14.65%.
# 1. Next, we see that male passengers with zero parents or children aboard make up 13.86% of survivors.
# 1. The mortality rate is highest for males with zero parents or children aboard, at 70%.
factors3 = ['Pclass','Parch']
survivor_by_attr = plot_stats_count(factors3, 90)
survivor_by_attr_pct = plot_stats_percentage(factors3, survivor_by_attr)
# ### Survival by Ticket Class, Number of Parent or Children aboard
# 1. A first class passenger with zero parents or children aboard has the highest survival share, at 45.83% of the survivor group.
# 1. This is followed by a second class passenger with zero parents or children aboard at 26.09%.
# 1. Non-survival is highest for third class passengers with zero parents or children aboard, whose survival share is only 17.51%.
#
#
# ### This round's Finding
# 1. Based on this round of analysis, being a female adult with a first class ticket and zero parents or children aboard shows a clear advantage for surviving.
# 1. In the next and final step, we're going to do a three-factor analysis to check the same survival rates again.
# 1. Since the additional factor introduced, the number of parents and children aboard, didn't seem to show any positive correlation once passengers had three or more parents and children aboard, we want to reanalyse the adult-or-child factor as well as the number of siblings or spouses aboard.
#
# ### Limitation
# 1. Percentage comparisons are done for each group against the whole of the survivor group or the whole of the non-survivor group.
# 1. Hence a smaller count in a group could skew the percentage, so the idea here is just a simple comparison based on the survival percentage out of the survivor / non-survivor group as a whole.
# 1. Given the limited set of possible factors analysed, the best-surviving group is assumed under this restriction.
#
factors4 = ['Sex','Pclass','Parch']
survivor_by_attr = plot_stats_count(factors4, 90)
survivor_by_attr_pct = plot_stats_percentage(factors4, survivor_by_attr, 90)
# ### Survival by Gender, Ticket Class and Number of Parent or Child aboard
# 1. The best survival rate in this group belongs to female first class ticket holders with zero parents or children aboard, with a 20.06% survival share and a 0.31% perished share.
# 1. The worst group is male third class ticket holders with zero parents or children aboard, with a 45.06% non-survival share and a 6.24% survival share.
factors5 = ['Sex','Pclass','IsAdult']
survivor_by_attr = plot_stats_count(factors5, 90)
survivor_by_attr_pct = plot_stats_percentage(factors5, survivor_by_attr, 90)
# ### Survival by Gender, Ticket Class and Is Adult or Child
# 1. The best survival rate in this group belongs to adult female first class passengers, with a 25.8% survival share and a 0.64% perished share.
# 1. The worst survival rate group is adult third class passengers, with a perished share of 45.06% and a survival share of 6.24%.
# 1. Coincidentally, these are the exact same numbers as for the group of third class male passengers with zero parents or children aboard.
factors6 = ['Sex','Pclass','SibSp']
survivor_by_attr = plot_stats_count(factors6, 90)
survivor_by_attr_pct = plot_stats_percentage(factors6, survivor_by_attr, 90)
# ### Survival by Gender, Ticket Class and Number of Sibling or Spouse aboard
# 1. The best survival rate in this group belongs to female first class passengers with zero siblings or spouses aboard, with a 15.29% survival share and a 0.32% perished share.
# 1. Third class females with zero siblings or spouses aboard share the same survival share but have a higher perished share, at 10.51%.
# 1. The worst survival rate belongs to third class male passengers with zero siblings or spouses aboard, with a 40.73% perished share and a measly 6.07% survival share.
#
# ### Survival rate to Perished Rate comparison
# 1. Survival by Gender, Ticket Class and Number of Parent or Child aboard
# 1. The female first class ticket with zero parent or child aboard with 20.06% survival rate to 0.31% perished rate.
# 1. Survival by Gender, Ticket Class and Is Adult or Child
# 1. female first class ticket adult passenger with 25.8% survival rate to 0.64% perished rate.
# 1. Survival by Gender, Ticket Class and Number of Sibling or Spouse aboard
# 1. The female first class ticket passenger with zero sibling or spouse aboard with 15.29% survival rate to 0.32% perished rate.
# +
# ratio of survival share to perished share for each candidate factor
ratio_surviving_ParCh = 20.06 / 0.31
print(ratio_surviving_ParCh)
ratio_surviving_isAdult = 25.8 / 0.64
print(ratio_surviving_isAdult)
ratio_surviving_SibSp = 15.29 / 0.32
print(ratio_surviving_SibSp)
# -
# ### Conclusion
# 1. As we concluded earlier, female first class ticket passengers have the highest survival rate of all the groups; here we are just checking the other possible factors for an additional edge in survival rate.
# 1. So, holding the gender and ticket class variables constant and comparing the number of parents or children aboard, adult-or-child status, and the number of siblings or spouses aboard, the additional factor with the highest survival rate is adult-or-child status.
#
# So the group with the highest chance of survival, based on the three factors of number of parents or children aboard, gender and ticket class, is the female first class ticket holders with zero parents or children aboard, who were about 64.71 times more likely to survive than to perish.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Artificial Intelligence Nanodegree
# ## Recurrent Neural Network Projects
#
# Welcome to the Recurrent Neural Network Project in the Artificial Intelligence Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
#
# >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.
# ### Implementation TODOs in this notebook
#
# This notebook contains two problems, cut into a variety of TODOs. Make sure to complete each section containing a TODO marker throughout the notebook. For convenience we provide links to each of these sections below.
#
# [TODO #1: Implement a function to window time series](#TODO_1)
#
# [TODO #2: Create a simple RNN model using keras to perform regression](#TODO_2)
#
# [TODO #3: Finish cleaning a large text corpus](#TODO_3)
#
# [TODO #4: Implement a function to window a large text corpus](#TODO_4)
#
# [TODO #5: Create a simple RNN model using keras to perform multiclass classification](#TODO_5)
#
# [TODO #6: Generate text using a fully trained RNN model and a variety of input sequences](#TODO_6)
#
# # Problem 1: Perform time series prediction
#
# In this project you will perform time series prediction using a Recurrent Neural Network regressor. In particular you will re-create the figure shown in the notes - where the stock price of Apple was forecasted (or predicted) 7 days in advance. In completing this exercise you will learn how to construct RNNs using Keras, which will also aid in completing the second project in this notebook.
#
# The particular network architecture we will employ for our RNN is known as [Long Short-Term Memory (LSTM)](https://en.wikipedia.org/wiki/Long_short-term_memory), which helps significantly in avoiding technical problems with the optimization of RNNs.
# ## 1.1 Getting started
#
# First we must load in our time series - a history of around 140 days of Apple's stock price. Then we need to perform a number of pre-processing steps to prepare it for use with an RNN model. First off, it is good practice to normalize a time series by rescaling its range. This helps us avoid serious numerical issues associated with how common activation functions (like tanh) transform very large (positive or negative) numbers, as well as helping us avoid related issues when computing derivatives.
#
# Here we normalize the series to lie in the range [0,1] [using this scikit function](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html), but it is also commonplace to normalize by a series' standard deviation.
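As a small illustration of this kind of range normalization (a sketch with a made-up four-point series, not the actual Apple data):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# hypothetical raw price series, just to illustrate the rescaling
raw_series = np.array([90.0, 120.0, 135.0, 150.0]).reshape(-1, 1)

# map the minimum of the series to 0 and the maximum to 1
scaler = MinMaxScaler(feature_range=(0, 1))
normalized = scaler.fit_transform(raw_series)
print(normalized.ravel())
```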
# +
### Load in necessary libraries for data input and normalization
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# %load_ext autoreload
# %autoreload 2
from my_answers import *
### load in and normalize the dataset
dataset = np.loadtxt('datasets/normalized_apple_prices.csv')
# -
# Let's take a quick look at the (normalized) time series we'll be performing predictions on.
# lets take a look at our time series
plt.plot(dataset)
plt.xlabel('time period')
plt.ylabel('normalized series value')
# ## 1.2 Cutting our time series into sequences
#
# Remember, our time series is a sequence of numbers that we can represent in general mathematically as
#
# $$s_{0},s_{1},s_{2},...,s_{P}$$
#
# where $s_{p}$ is the numerical value of the time series at time period $p$ and where $P$ is the total length of the series. In order to apply our RNN we treat the time series prediction problem as a regression problem, and so need to use a sliding window to construct a set of associated input/output pairs to regress on. This process is animated in the gif below.
#
# <img src="images/timeseries_windowing_training.gif" width=600 height=600/>
#
# For example - using a window of size T = 5 (as illustrated in the gif above) we produce a set of input/output pairs like the one shown in the table below
#
# $$\begin{array}{c|c}
# \text{Input} & \text{Output}\\
# \hline \color{CornflowerBlue} {\langle s_{1},s_{2},s_{3},s_{4},s_{5}\rangle} & \color{Goldenrod}{ s_{6}} \\
# \ \color{CornflowerBlue} {\langle s_{2},s_{3},s_{4},s_{5},s_{6} \rangle } & \color{Goldenrod} {s_{7} } \\
# \color{CornflowerBlue} {\vdots} & \color{Goldenrod} {\vdots}\\
# \color{CornflowerBlue} { \langle s_{P-5},s_{P-4},s_{P-3},s_{P-2},s_{P-1} \rangle } & \color{Goldenrod} {s_{P}}
# \end{array}$$
#
# Notice here that each input is a sequence (or vector) of length 5 (and in general has length equal to the window size T) while each corresponding output is a scalar value. Notice also how given a time series of length P and window size T = 5 as shown above, we created P - 5 input/output pairs. More generally, for a window size T we create P - T such pairs.
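The windowing described above can be sketched in a few lines (a minimal reference version; the project asks you to write your own window_transform_series in my_answers.py):

```python
import numpy as np

def window_transform_series(series, window_size):
    # slide a window of length `window_size` along the series:
    # each window is an input, the value right after it is the output
    X = [series[i:i + window_size] for i in range(len(series) - window_size)]
    y = [series[i + window_size] for i in range(len(series) - window_size)]
    return np.asarray(X), np.asarray(y).reshape(-1, 1)
```

Applied to a series of length P with window size T, this produces P - T input/output pairs, matching the table above.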
# Now it's time for you to window the input time series as described above!
#
# <a id='TODO_1'></a>
#
# **TODO:** Implement the function called **window_transform_series** in my_answers.py so that it runs a sliding window along the input series and creates associated input/output pairs. Note that this function should input a) the series and b) the window length, and return the input/output subsequences. Make sure to format returned input/output as generally shown in table above (where window_size = 5), and make sure your returned input is a numpy array.
#
# -----
# You can test your function on the list of odd numbers given below
odd_nums = np.array([1,3,5,7,9,11,13])
# Here is a hard-coded solution for odd_nums. You can compare its results with what you get from your **window_transform_series** implementation.
# +
# run a window of size 2 over the odd number sequence and display the results
window_size = 2
X = []
X.append(odd_nums[0:2])
X.append(odd_nums[1:3])
X.append(odd_nums[2:4])
X.append(odd_nums[3:5])
X.append(odd_nums[4:6])
y = odd_nums[2:]
X = np.asarray(X)
y = np.asarray(y)
y = np.reshape(y, (len(y),1)) #optional
assert(type(X).__name__ == 'ndarray')
assert(type(y).__name__ == 'ndarray')
assert(X.shape == (5,2))
assert(y.shape in [(5,1), (5,)])
# print out input/output pairs --> here input = X, corresponding output = y
print ('--- the input X will look like ----')
print (X)
print ('--- the associated output y will look like ----')
print (y)
X,y = window_transform_series(series = odd_nums,window_size = 2)
print(X,y)
# -
# Again - you can check that your completed **window_transform_series** function works correctly by trying it on the odd_nums sequence - you should get the above output.
### DONE: implement the function window_transform_series in the file my_answers.py
from my_answers import window_transform_series
# With this function in place apply it to the series in the Python cell below. We use a window_size = 7 for these experiments.
# window the data using your windowing function
window_size = 7
X,y = window_transform_series(series = dataset,window_size = window_size)
# ## 1.3 Splitting into training and testing sets
#
# In order to perform proper testing on our dataset we will lop off the last 1/3 of it for validation (or testing). This is so that once we train our model we have something to test it on (like any regression problem!). This splitting into training/testing sets is done in the cell below.
#
# Note how here we are **not** splitting the dataset *randomly* as one typically would do when validating a regression model. This is because our input/output pairs *are related temporally*. We don't want to validate our model by training on a random subset of the series and then testing on another random subset, as this simulates the scenario that we receive new points *within the timeframe of our training set*.
#
# We want to train on one solid chunk of the series (in our case, the first full 2/3 of it), and validate on a later chunk (the last 1/3) as this simulates how we would predict *future* values of a time series.
# +
# split our dataset into training / testing sets
train_test_split = int(np.ceil(2*len(y)/float(3))) # set the split point
# partition the training set
X_train = X[:train_test_split,:]
y_train = y[:train_test_split]
# keep the last chunk for testing
X_test = X[train_test_split:,:]
y_test = y[train_test_split:]
# NOTE: to use keras's RNN LSTM module our input must be reshaped to [samples, window size, features]
X_train = np.asarray(np.reshape(X_train, (X_train.shape[0], window_size, 1)))
X_test = np.asarray(np.reshape(X_test, (X_test.shape[0], window_size, 1)))
# -
# <a id='TODO_2'></a>
#
# ## 1.4 Build and run an RNN regression model
#
# Having created input/output pairs out of our time series and cut this into training/testing sets, we can now begin setting up our RNN. We use Keras to quickly build a two hidden layer RNN of the following specifications
#
# - layer 1 uses an LSTM module with 5 hidden units (note here the input_shape = (window_size,1))
# - layer 2 uses a fully connected module with one unit
# - the 'mean_squared_error' loss should be used (remember: we are performing regression here)
#
# This can be constructed using just a few lines - see e.g., the [general Keras documentation](https://keras.io/getting-started/sequential-model-guide/) and the [LSTM documentation in particular](https://keras.io/layers/recurrent/) for examples of how to quickly use Keras to build neural network models. Make sure you are initializing your optimizer given the [keras-recommended approach for RNNs](https://keras.io/optimizers/)
#
# The keras-recommended optimizer initialization is given in the cell below. (Remember to copy your completed function into the script *my_answers.py*, into the function titled *build_part1_RNN*, before submitting your project.)
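A model meeting this specification can be built in just a few lines; one possible sketch of build_part1_RNN, assuming the Keras Sequential API from the linked docs:

```python
from keras.models import Sequential
from keras.layers import Dense, LSTM

def build_part1_RNN(window_size):
    # layer 1: LSTM with 5 hidden units, fed windows of shape (window_size, 1)
    # layer 2: a single fully connected unit producing the regression output
    model = Sequential()
    model.add(LSTM(5, input_shape=(window_size, 1)))
    model.add(Dense(1))
    return model
```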
# +
### TODO: create required RNN model
# import keras network libraries
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
import keras
# given - fix random seed - so we can all reproduce the same results on our default time series
np.random.seed(0)
# DONE: implement build_part1_RNN in my_answers.py
from my_answers import build_part1_RNN
model = build_part1_RNN(window_size)
# build model using keras documentation recommended optimizer initialization
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
# compile the model
model.compile(loss='mean_squared_error', optimizer=optimizer)
# -
# With your model built you can now fit the model by activating the cell below! Note: the number of epochs and the batch_size are preset (so we can all produce the same results). You can choose to toggle the verbose parameter - which gives you regular updates on the progress of the algorithm - on and off by setting it to 1 or 0 respectively.
# run your model!
model.fit(X_train, y_train, epochs=1000, batch_size=50, verbose=0)
# ## 1.5 Checking model performance
#
# With your model fit we can now make predictions on both our training and testing sets.
# generate predictions for training
train_predict = model.predict(X_train)
test_predict = model.predict(X_test)
# In the next cell we compute training and testing errors using our trained model - you should be able to achieve at least
#
# *training_error* < 0.02
#
# and
#
# *testing_error* < 0.02
#
# with your fully trained model.
#
# If either or both of your errors are larger than 0.02, re-train your model - increasing the number of epochs (a maximum of around 1,000 should do the job) and/or adjusting your batch_size.
# +
# print out training and testing errors
training_error = model.evaluate(X_train, y_train, verbose=0)
print('training error = ' + str(training_error))
testing_error = model.evaluate(X_test, y_test, verbose=0)
print('testing error = ' + str(testing_error))
# -
# Activating the next cell plots the original data, as well as both predictions on the training and testing sets.
# +
### Plot everything - the original series as well as predictions on training and testing sets
import matplotlib.pyplot as plt
# %matplotlib inline
# plot original series
plt.plot(dataset,color = 'k')
# plot training set prediction
split_pt = train_test_split + window_size
plt.plot(np.arange(window_size,split_pt,1),train_predict,color = 'b')
# plot testing set prediction
plt.plot(np.arange(split_pt,split_pt + len(test_predict),1),test_predict,color = 'r')
# pretty up graph
plt.xlabel('day')
plt.ylabel('(normalized) price of Apple stock')
plt.legend(['original series','training fit','testing fit'],loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
# -
# **Note:** you can try out any time series for this exercise! If you would like to try another see e.g., [this site containing thousands of time series](https://datamarket.com/data/list/?q=provider%3Atsdl) and pick another one!
# # Problem 2: Create a sequence generator
# ## 2.1 Getting started
#
# In this project you will implement a popular Recurrent Neural Network (RNN) architecture to create an English language sequence generator capable of building semi-coherent English sentences from scratch by building them up character-by-character. This will require a substantial amount of parameter tuning on a large training corpus (at least 100,000 characters long). In particular for this project we will be using a complete version of <NAME>'s classic book The Adventures of Sherlock Holmes.
#
# How can we train a machine learning model to generate text automatically, character-by-character? *By showing the model many training examples so it can learn a pattern between input and output.* With this type of text generation each input is a string of valid characters like this one
#
# *dogs are grea*
#
# while the corresponding output is the next character in the sentence - which here is 't' (since the complete sentence is 'dogs are great'). We need to show a model many such examples in order for it to make reasonable predictions.
#
# **Fun note:** For those interested in how text generation is being used check out some of the following fun resources:
#
# - [Generate wacky sentences](http://www.cs.toronto.edu/~ilya/rnn.html) with this academic RNN text generator
#
# - Various twitter bots that tweet automatically generated text like [this one](http://tweet-generator-alex.herokuapp.com/).
#
# - the [NaNoGenMo](https://github.com/NaNoGenMo/2016) annual contest to automatically produce a 50,000+ word novel
#
# - [Robot Shakespeare](https://github.com/genekogan/RobotShakespeare), a text generator that automatically produces Shakespeare-esque sentences
# ## 2.2 Preprocessing a text dataset
#
# Our first task is to get a large text corpus for use in training, on which we perform several light pre-processing tasks. The default corpus we will use is the classic book Sherlock Holmes, but you can use a variety of others as well - so long as they are fairly large (around 100,000 characters or more).
# read in the text, transforming everything to lower case
text = open('datasets/holmes.txt').read().lower()
print('our original text has ' + str(len(text)) + ' characters')
# Next, let's examine a bit of the raw text. Because we are interested in creating sentences of English words automatically by building up each word character-by-character, we only want to train on valid English words. In other words - we need to remove all of the other characters that are not part of English words.
### print out the first 2000 characters of the raw text to get a sense of what we need to throw out
text[:2000]
# Wow - there's a lot of junk here (i.e., weird uncommon character combinations - as this first character chunk contains the title and author page, as well as the table of contents)! To keep things simple, we want to train our RNN on a large chunk of more typical English sentences - we don't want it to start thinking non-English words or strange characters are valid! - so let's clean up the data a bit.
#
# First, since the dataset is so large and the first few hundred characters contain a lot of junk, let's cut that part out. Let's also find-and-replace those newline tags with spaces.
### find and replace '\n' and '\r' symbols - replacing them with spaces
text = text[1302:]
text = text.replace('\n',' ') # replace each '\n' with a space
text = text.replace('\r',' ')
# Let's see how the first 1000 characters of our text look now!
### print out the first 1000 characters of the raw text to get a sense of what we need to throw out
text[:1000]
# <a id='TODO_3'></a>
#
# #### TODO: finish cleaning the text
#
# Let's make sure we haven't left any other atypical characters (commas, periods, etc., are ok) lurking in the depths of the text. You can do this by enumerating all the text's unique characters, examining them, and then replacing any unwanted characters with spaces! Once we find all of the text's unique characters, we can remove all of the atypical ones in the next cell. Note: don't remove the punctuation marks given in my_answers.py.
# +
### DONE: implement cleaned_text in my_answers.py
from my_answers import cleaned_text
text = cleaned_text(text)
# shorten any extra dead space created above
text = text.replace(' ',' ')
# -
# With your chosen characters removed, print out the first couple thousand characters again just to double check that everything looks good.
### print out the first 2000 characters of the raw text to get a sense of what we need to throw out
text[:2000]
# Now that we have thrown out a good number of non-English characters/character sequences, let's print out some statistics about the dataset - including the total number of characters and the number of unique characters.
# +
# count the number of unique characters in the text
chars = sorted(list(set(text)))
# print some of the text, as well as statistics
print ("this corpus has " + str(len(text)) + " total number of characters")
print ("this corpus has " + str(len(chars)) + " unique characters")
print(chars)
# -
# ## 2.3 Cutting data into input/output pairs
#
# Now that we have our text all cleaned up, how can we use it to train a model to generate sentences automatically? First we need to train a machine learning model - and in order to do that we need a set of input/output pairs for a model to train on. How can we create a set of input/output pairs from our text to train on?
#
# Remember in part 1 of this notebook how we used a sliding window to extract input/output pairs from a time series? We do the same thing here! We slide a window of length $T$ along our giant text corpus - everything in the window becomes one input while the character following becomes its corresponding output. This process of extracting input/output pairs is illustrated in the gif below on a small example text using a window size of T = 5.
#
# <img src="images/text_windowing_training.gif" width=400 height=400/>
#
# Notice one aspect of the sliding window in this gif that does not mirror the analogous gif for time series shown in part 1 of the notebook - we do not need to slide the window along one character at a time but can move by a fixed step size $M$ greater than 1 (in the gif $M = 1$). This is done with large input texts (like ours, which has over 500,000 characters!) because sliding the window along one character at a time would create far too many input/output pairs to reasonably compute with.
#
# More formally, let's denote our text corpus - which is one long string of characters - as follows
#
# $$s_{0},s_{1},s_{2},...,s_{P}$$
#
# where $P$ is the length of the text (again for our text $P \approx 500,000!$). Sliding a window of size T = 5 with a step length of M = 1 (these are the parameters shown in the gif above) over this sequence produces the following list of input/output pairs
#
#
# $$\begin{array}{c|c}
# \text{Input} & \text{Output}\\
# \hline \color{CornflowerBlue} {\langle s_{1},s_{2},s_{3},s_{4},s_{5}\rangle} & \color{Goldenrod}{ s_{6}} \\
# \ \color{CornflowerBlue} {\langle s_{2},s_{3},s_{4},s_{5},s_{6} \rangle } & \color{Goldenrod} {s_{7} } \\
# \color{CornflowerBlue} {\vdots} & \color{Goldenrod} {\vdots}\\
# \color{CornflowerBlue} { \langle s_{P-5},s_{P-4},s_{P-3},s_{P-2},s_{P-1} \rangle } & \color{Goldenrod} {s_{P}}
# \end{array}$$
#
# Notice here that each input is a sequence (or vector) of 5 characters (and in general has length equal to the window size T) while each corresponding output is a single character. With step size M = 1 we create around P input/output pairs in total (for a general step size M we create around ceil(P/M) pairs).
# <a id='TODO_4'></a>
#
# Now it's time for you to window the input text as described above!
#
# **TODO:** Create a function that runs a sliding window along the input text and creates associated input/output pairs. A skeleton function has been provided for you. Note that this function should input a) the text b) the window size and c) the step size, and return the input/output sequences. Note: the return items should be *lists* - not numpy arrays.
#
# (remember to copy your completed function into the script *my_answers.py* function titled *window_transform_text* before submitting your project)
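One possible sketch of such a function (the project expects your own version in my_answers.py):

```python
def window_transform_text(text, window_size, step_size):
    # slide a window across the text in steps of `step_size`:
    # each window is an input string, the character right after it is the output
    inputs, outputs = [], []
    for i in range(0, len(text) - window_size, step_size):
        inputs.append(text[i:i + window_size])
        outputs.append(text[i + window_size])
    return inputs, outputs
```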
### DONE: implement window_transform_series in my_answers.py
from my_answers import window_transform_text
# With our function complete we can now use it to produce input/output pairs! We employ the function in the next cell, where the window_size = 100 and step_size = 5.
# run your text window-ing function
window_size = 100
step_size = 5
inputs, outputs = window_transform_text(text,window_size,step_size)
# Let's print out a few input/output pairs to verify that we have made the right sort of stuff!
# +
# print out a few of the input/output pairs to verify that we've made the right kind of stuff to learn from
print(inputs[2])
print(outputs[2])
print(text[10:110])
print('input = ' + inputs[2])
print('output = ' + outputs[2])
print('--------------')
print('input = ' + inputs[100])
print('output = ' + outputs[100])
# -
# Looks good!
# ## 2.4 Wait, what kind of problem is text generation again?
#
# In part 1 of this notebook we used the same pre-processing technique - the sliding window - to produce a set of training input/output pairs to tackle the problem of time series prediction *by treating the problem as one of regression*. So what sort of problem do we have here now, with text generation? Well, the time series prediction was a regression problem because the output (one value of the time series) was a continuous value. Here - for character-by-character text generation - each output is a *single character*. This isn't a continuous value - but a distinct class - therefore **character-by-character text generation is a classification problem**.
#
# How many classes are there in the data? Well, the number of classes is equal to the number of unique characters we have to predict! How many of those were there in our dataset again? Let's print out the value again.
# print out the number of unique characters in the dataset
chars = sorted(list(set(text)))
print ("this corpus has " + str(len(chars)) + " unique characters")
print ('and these characters are ')
print (chars)
# Rockin' - so we have a multiclass classification problem on our hands!
# ## 2.5 One-hot encoding characters
#
# The last issue we have to deal with is representing our text data as numerical data so that we can use it as an input to a neural network. One of the conceptually simplest ways of doing this is via a 'one-hot encoding' scheme. Here's how it works.
#
# We transform each character in our inputs/outputs into a vector with length equal to the number of unique characters in our text. This vector is all zeros except one location where we place a 1 - and this location is unique to each character type. e.g., we transform 'a', 'b', and 'c' as follows
#
# $$a\longleftarrow\left[\begin{array}{c}
# 1\\
# 0\\
# 0\\
# \vdots\\
# 0\\
# 0
# \end{array}\right]\,\,\,\,\,\,\,b\longleftarrow\left[\begin{array}{c}
# 0\\
# 1\\
# 0\\
# \vdots\\
# 0\\
# 0
# \end{array}\right]\,\,\,\,\,c\longleftarrow\left[\begin{array}{c}
# 0\\
# 0\\
# 1\\
# \vdots\\
# 0\\
# 0
# \end{array}\right]\cdots$$
#
# where each vector has 32 entries (or in general: number of entries = number of unique characters in text).
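As a concrete toy illustration of this scheme (using a hypothetical three-character alphabet rather than the real corpus):

```python
import numpy as np

# a tiny hypothetical alphabet, just to illustrate the encoding
toy_chars = sorted(set('abc'))
toy_index = {c: i for i, c in enumerate(toy_chars)}

def one_hot(char, num_chars):
    # all zeros except a single 1 at the character's index
    v = np.zeros(num_chars)
    v[toy_index[char]] = 1
    return v

print(one_hot('b', len(toy_chars)))  # [0. 1. 0.]
```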
# The first practical step towards doing this one-hot encoding is to form a dictionary mapping each unique character to a unique integer, and one dictionary to do the reverse mapping. We can then use these dictionaries to quickly make our one-hot encodings, as well as re-translate (from integers to characters) the results of our trained RNN classification model.
# +
# this dictionary is a function mapping each unique character to a unique integer
chars_to_indices = dict((c, i) for i, c in enumerate(chars)) # map each unique character to unique integer
# this dictionary is a function mapping each unique integer back to a unique character
indices_to_chars = dict((i, c) for i, c in enumerate(chars)) # map each unique integer back to unique character
# -
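# To make the mapping concrete, here is a tiny self-contained example - using a toy 3-character alphabet rather than the notebook's full character set - of turning a character into its one-hot vector with dictionaries built the same way:

```python
import numpy as np

# toy alphabet standing in for the full sorted character set
toy_chars = ['a', 'b', 'c']
toy_chars_to_indices = dict((c, i) for i, c in enumerate(toy_chars))

def one_hot(char, char_to_idx, num_chars):
    # all zeros except a single 1 at the character's unique index
    v = np.zeros(num_chars, dtype=bool)
    v[char_to_idx[char]] = 1
    return v

print(one_hot('b', toy_chars_to_indices, len(toy_chars)))  # [False  True False]
```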
# Now we can transform our input/output pairs - consisting of characters - to equivalent input/output pairs made up of one-hot encoded vectors. In the next cell we provide a function for doing just this: it takes in the raw character input/outputs and returns their numerical versions. In particular, the numerical input is given as $\bf{X}$, and the numerical output as $\bf{y}$.
# transform character-based input/output into equivalent numerical versions
def encode_io_pairs(text,window_size,step_size):
# number of unique chars
chars = sorted(list(set(text)))
num_chars = len(chars)
# cut up text into character input/output pairs
inputs, outputs = window_transform_text(text,window_size,step_size)
# create empty vessels for one-hot encoded input/output
    X = np.zeros((len(inputs), window_size, num_chars), dtype=bool)  # np.bool is removed in modern NumPy; use the builtin bool
    y = np.zeros((len(inputs), num_chars), dtype=bool)
# loop over inputs/outputs and transform and store in X/y
for i, sentence in enumerate(inputs):
for t, char in enumerate(sentence):
X[i, t, chars_to_indices[char]] = 1
y[i, chars_to_indices[outputs[i]]] = 1
return X,y
# Now run the one-hot encoding function by activating the cell below and transform our input/output pairs!
# use your function
window_size = 100
step_size = 5
X,y = encode_io_pairs(text,window_size,step_size)
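# The helper `window_transform_text` used above was written in part 1 of the notebook. As a reminder, here is a minimal sketch of its assumed behavior - slide a window of length `window_size` across the text with stride `step_size`, pairing each window with the character that immediately follows it:

```python
# minimal sketch of the assumed behavior of window_transform_text (from part 1)
def window_transform_text_sketch(text, window_size, step_size):
    inputs, outputs = [], []
    for i in range(0, len(text) - window_size, step_size):
        inputs.append(text[i:i + window_size])   # window of characters
        outputs.append(text[i + window_size])    # the character that follows it
    return inputs, outputs

ins, outs = window_transform_text_sketch('abcdefgh', window_size=3, step_size=2)
print(ins)   # ['abc', 'cde', 'efg']
print(outs)  # ['d', 'f', 'h']
```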
# <a id='TODO_5'></a>
#
# ## 2.6 Setting up our RNN
#
# With our dataset loaded and the input/output pairs extracted / transformed we can now begin setting up our RNN for training. Again we will use Keras to quickly build a single hidden layer RNN - where our hidden layer consists of LSTM modules.
#
# Time to get to work: build a 3 layer RNN model of the following specification
#
# - layer 1 should be an LSTM module with 200 hidden units --> note this should have input_shape = (window_size,len(chars)) where len(chars) = number of unique characters in your cleaned text
# - layer 2 should be a linear module, fully connected, with len(chars) hidden units --> where len(chars) = number of unique characters in your cleaned text
# - layer 3 should be a softmax activation ( since we are solving a *multiclass classification*)
# - Use the **categorical_crossentropy** loss
#
# This network can be constructed using just a few lines - as with the RNN network you made in part 1 of this notebook. See e.g., the [general Keras documentation](https://keras.io/getting-started/sequential-model-guide/) and the [LSTM documentation in particular](https://keras.io/layers/recurrent/) for examples of how to quickly use Keras to build neural network models.
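# Before building the model it can help to see what the softmax layer and the categorical cross-entropy loss actually compute. A small NumPy sketch (illustrative only - Keras implements both internally):

```python
import numpy as np

def softmax(z):
    # exponentiate shifted scores for numerical stability, then normalize
    e = np.exp(z - np.max(z))
    return e / e.sum()

def categorical_crossentropy(y_true, y_pred):
    # y_true is one-hot, so this is -log(probability assigned to the true class)
    return -np.sum(y_true * np.log(y_pred))

logits = np.array([2.0, 1.0, 0.1])   # raw scores for 3 character classes
probs = softmax(logits)              # a valid probability distribution
y_true = np.array([1.0, 0.0, 0.0])   # true character is class 0
loss = categorical_crossentropy(y_true, probs)
print(np.argmax(probs), round(probs.sum(), 6))  # 0 1.0
```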
# +
### necessary functions from the keras library
from keras.models import Sequential
from keras.layers import Dense, Activation, LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import keras
import random
# TODO implement build_part2_RNN in my_answers.py
from my_answers import build_part2_RNN
model = build_part2_RNN(window_size, len(chars))
# initialize optimizer
optimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
# compile model --> make sure initialized optimizer and callbacks - as defined above - are used
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
# -
# ## 2.7 Training our RNN model for text generation
#
# With our RNN set up we can now train it! Let's begin by trying it out on a small subset of the full dataset. In the next cell we take the first 10,000 input/output pairs from our training data to learn on.
# a small subset of our input/output pairs
Xsmall = X[:10000,:,:]
ysmall = y[:10000,:]
# Now let's fit our model!
# +
# train the model
model.fit(Xsmall, ysmall, batch_size=500, epochs=40,verbose = 2)
# save weights
model.save_weights('model_weights/best_RNN_small_textdata_weights.hdf5')
# -
# How do we make a given number of predictions (characters) based on this fitted model?
#
# First we predict the next character that follows any chunk of characters of length equal to our chosen window size. Then we remove the first character from our input sequence and tack our prediction onto the end. This gives us a slightly changed input sequence that still has length equal to the window size. We then feed this updated sequence into the model to predict another character. Together we then have two predicted characters following our original input sequence. Repeating this process N times gives us N predicted characters.
#
# In the next Python cell we provide you with a completed function that does just this - it makes predictions when given a) a trained RNN model, b) a subset of (window_size) characters from the text, and c) a number of characters to predict (to follow our input subset).
# function that uses trained model to predict a desired number of future characters
def predict_next_chars(model,input_chars,num_to_predict):
# create output
predicted_chars = ''
for i in range(num_to_predict):
# convert this round's predicted characters to numerical input
x_test = np.zeros((1, window_size, len(chars)))
for t, char in enumerate(input_chars):
x_test[0, t, chars_to_indices[char]] = 1.
# make this round's prediction
test_predict = model.predict(x_test,verbose = 0)[0]
# translate numerical prediction back to characters
r = np.argmax(test_predict) # predict class of each test input
d = indices_to_chars[r]
# update predicted_chars and input
predicted_chars+=d
input_chars+=d
input_chars = input_chars[1:]
return predicted_chars
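# The rolling-window bookkeeping above can be exercised with a stand-in "model" - here just a plain function - to show how each prediction is appended and the oldest character dropped:

```python
def predict_next_chars_toy(fake_model, input_chars, num_to_predict):
    # same rolling-window update as above, with a plain function as the model
    predicted_chars = ''
    for _ in range(num_to_predict):
        d = fake_model(input_chars)           # "predict" the next character
        predicted_chars += d
        input_chars = (input_chars + d)[1:]   # append prediction, drop first char
    return predicted_chars

echo_last = lambda s: s[-1]  # stand-in model: repeats the last character
print(predict_next_chars_toy(echo_last, 'abc', 3))  # ccc
```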
# <a id='TODO_6'></a>
#
# With your trained model try a few subsets of the complete text as input - note the length of each must be exactly equal to the window size. For each subset use the function above to predict the next 100 characters that follow each input.
# +
# TODO: choose an input sequence and use the prediction function in the previous Python cell to predict 100 characters following it
# get an appropriately sized chunk of characters from the text
start_inds = [100, 1000, 10000]
# load in weights
model.load_weights('model_weights/best_RNN_small_textdata_weights.hdf5')
for s in start_inds:
start_index = s
input_chars = text[start_index: start_index + window_size]
# use the prediction function
predict_input = predict_next_chars(model,input_chars,num_to_predict = 100)
# print out input characters
print('------------------')
input_line = 'input chars = ' + '\n' + input_chars + '"' + '\n'
print(input_line)
# print out predicted characters
line = 'predicted chars = ' + '\n' + predict_input + '"' + '\n'
print(line)
# -
# This looks OK, but not great. Now let's try the same experiment with a larger chunk of the data - the first 100,000 input/output pairs.
#
# Tuning RNNs for a typical character dataset like this one is a computationally intensive endeavor, and thus time-consuming on a typical CPU. Using a reasonably sized cloud-based GPU can speed up training by a factor of 10. Because of the long training time, it is also highly recommended that you carefully write the output of each step of your process to file. That way all of your results are saved even if you close the browser you're working in: the processes will continue running in the background, but variables/output in the notebook will not update when you open it again.
#
# In the next cell we show you how to create a text file in Python and record data to it. This sort of setup can be used to record your final predictions.
# +
### A simple way to write output to file
f = open('my_test_output.txt', 'w')  # create an output file to write to
f.write('this is only a test ' + '\n') # print some output text
x = 2
f.write('the value of x is ' + str(x) + '\n') # record a variable value
f.close()
# print out the contents of my_test_output.txt
f = open('my_test_output.txt', 'r')  # open the file for reading
f.read()
# -
# With this recording device in place we can more safely perform experiments on larger portions of the text. In the next cell we will use the first 100,000 input/output pairs to train our RNN model.
# First we fit the model to the dataset, then generate text with the trained model using precisely the same generation method applied before to the small dataset.
#
# **Note:** your generated words should be - by and large - more realistic than with the small dataset, but you won't be able to generate perfect English sentences even with this amount of data. A rule of thumb: your model is working well if you generate sentences that largely contain real English words.
# +
# a larger subset of our input/output pairs
Xlarge = X[:100000,:,:]
ylarge = y[:100000,:]
# TODO: fit to our larger dataset
model.fit(Xlarge, ylarge, batch_size=500, epochs=30, verbose=2)
# save weights
model.save_weights('model_weights/best_RNN_large_textdata_weights.hdf5')
# +
# DONE: choose an input sequence and use the prediction function in the previous Python cell to predict 100 characters following it
# get an appropriately sized chunk of characters from the text
start_inds = [200,2000,20000]
# save output
f = open('text_gen_output/RNN_large_textdata_output.txt', 'w')  # create an output file to write to
# load weights
model.load_weights('model_weights/best_RNN_large_textdata_weights.hdf5')
for s in start_inds:
start_index = s
input_chars = text[start_index: start_index + window_size]
# use the prediction function
predict_input = predict_next_chars(model,input_chars,num_to_predict = 100)
# print out input characters
line = '-------------------' + '\n'
print(line)
f.write(line)
input_line = 'input chars = ' + '\n' + input_chars + '"' + '\n'
print(input_line)
f.write(input_line)
# print out predicted characters
predict_line = 'predicted chars = ' + '\n' + predict_input + '"' + '\n'
print(predict_line)
f.write(predict_line)
f.close()
# -
|
RNN_project.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Lab 13, Visualization
# +
import pandas
# %matplotlib inline
# -
df = pandas.read_excel('s3://isat252-cs/house_price.xls')
df[:10]
# calculate unit price
df['unit_price'] = df['price']/df['area']
df[:10]
# # 4.1
avg_unit_prc_per_year = df.groupby('built_in')['unit_price'].mean()
avg_unit_prc_per_year.plot()
# highest in the 1930s
# # 4.2
df['unit_price'].hist()
# the highest amount is around 100-200
# # 4.3
avg_unit_prc_per_type = df.groupby('house_type')['unit_price'].mean()
avg_unit_prc_per_type.plot.bar()
# single-family home has the highest
# # 4.4
# +
num_per_type = df['house_type'].value_counts()
num_per_type.plot.pie()
# -
# condo and townhouse are similar in amounts in the pie chart
# # 4.5
df.plot.scatter(x='area',y='price')
# there are two outliers above 1,000,000.
|
Lab13.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="FhGuhbZ6M5tl"
# ##### Copyright 2018 The TensorFlow Authors.
# + cellView="form" id="AwOEIRJC6Une"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + cellView="form" id="KyPEtTqk6VdG"
#@title MIT License
#
# Copyright (c) 2017 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# + [markdown] id="EIdT9iu_Z4Rb"
# # Basic regression: Predict fuel efficiency
# + [markdown] id="bBIlTPscrIT9"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="AHp3M9ZmrIxj"
# In a *regression* problem, the aim is to predict the output of a continuous value, like a price or a probability. Contrast this with a *classification* problem, where the aim is to select a class from a list of classes (for example, where a picture contains an apple or an orange, recognizing which fruit is in the picture).
#
# This tutorial uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset and demonstrates how to build models to predict the fuel efficiency of the late-1970s and early 1980s automobiles. To do this, you will provide the models with a description of many automobiles from that time period. This description includes attributes like cylinders, displacement, horsepower, and weight.
#
# This example uses the Keras API. (Visit the Keras [tutorials](https://www.tensorflow.org/tutorials/keras) and [guides](https://www.tensorflow.org/guide/keras) to learn more.)
# + id="1rRo8oNqZ-Rj"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Make NumPy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)
# + id="9xQKvCJ85kCQ"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
# + [markdown] id="F_72b0LCNbjx"
# ## The Auto MPG dataset
#
# The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).
#
# + [markdown] id="gFh9ne3FZ-On"
# ### Get the data
# First download and import the dataset using pandas:
# + id="CiX2FI4gZtTt"
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(url, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
# + id="2oY3pMPagJrO"
dataset = raw_dataset.copy()
dataset.tail()
# + [markdown] id="3MWuJTKEDM-f"
# ### Clean the data
#
# The dataset contains a few unknown values:
# + id="JEJHhN65a2VV"
dataset.isna().sum()
# + [markdown] id="9UPN0KBHa_WI"
# Drop those rows to keep this initial tutorial simple:
# + id="4ZUDosChC1UN"
dataset = dataset.dropna()
# + [markdown] id="8XKitwaH4v8h"
# The `"Origin"` column is categorical, not numeric. So the next step is to one-hot encode the values in the column with [pd.get_dummies](https://pandas.pydata.org/docs/reference/api/pandas.get_dummies.html).
#
# Note: You can set up the `tf.keras.Model` to do this kind of transformation for you but that's beyond the scope of this tutorial. Check out the [Classify structured data using Keras preprocessing layers](../structured_data/preprocessing_layers.ipynb) or [Load CSV data](../load_data/csv.ipynb) tutorials for examples.
# + id="gWNTD2QjBWFJ"
dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
# + id="ulXz4J7PAUzk"
dataset = pd.get_dummies(dataset, columns=['Origin'], prefix='', prefix_sep='')
dataset.tail()
# + [markdown] id="Cuym4yvk76vU"
# ### Split the data into training and test sets
#
# Now, split the dataset into a training set and a test set. You will use the test set in the final evaluation of your models.
# + id="qn-IGhUE7_1H"
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
# + [markdown] id="J4ubs136WLNp"
# ### Inspect the data
#
# Review the joint distribution of a few pairs of columns from the training set.
#
# The top row suggests that the fuel efficiency (MPG) is a function of all the other parameters. The other rows indicate they are functions of each other.
# + id="oRKO_x8gWKv-"
sns.pairplot(train_dataset[['MPG', 'Cylinders', 'Displacement', 'Weight']], diag_kind='kde')
# + [markdown] id="gavKO_6DWRMP"
# Let's also check the overall statistics. Note how each feature covers a very different range:
# + id="yi2FzC3T21jR"
train_dataset.describe().transpose()
# + [markdown] id="Db7Auq1yXUvh"
# ### Split features from labels
#
# Separate the target value—the "label"—from the features. This label is the value that you will train the model to predict.
# + id="t2sluJdCW7jN"
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('MPG')
test_labels = test_features.pop('MPG')
# + [markdown] id="mRklxK5s388r"
# ## Normalization
#
# In the table of statistics it's easy to see how different the ranges of each feature are:
# + id="IcmY6lKKbkw8"
train_dataset.describe().transpose()[['mean', 'std']]
# + [markdown] id="-ywmerQ6dSox"
# It is good practice to normalize features that use different scales and ranges.
#
# One reason this is important is because the features are multiplied by the model weights. So, the scale of the outputs and the scale of the gradients are affected by the scale of the inputs.
#
# Although a model *might* converge without feature normalization, normalization makes training much more stable.
#
# Note: There is no advantage to normalizing the one-hot features—it is done here for simplicity. For more details on how to use the preprocessing layers, refer to the [Working with preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) guide and the [Classify structured data using Keras preprocessing layers](../structured_data/preprocessing_layers.ipynb) tutorial.
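# The statistics that the `Normalization` layer computes during `adapt` can be illustrated with plain NumPy (toy values, not the actual Auto MPG data):

```python
import numpy as np

# two features on very different scales, e.g. Weight vs. Cylinders
X = np.array([[3500.0, 8.0],
              [2200.0, 4.0],
              [2800.0, 6.0]])

mean = X.mean(axis=0)        # per-feature mean
std = X.std(axis=0)          # per-feature standard deviation
X_norm = (X - mean) / std    # each column now has mean 0 and std 1

print(X_norm.mean(axis=0).round(6))  # [0. 0.]
print(X_norm.std(axis=0).round(6))   # [1. 1.]
```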
# + [markdown] id="aFJ6ISropeoo"
# ### The Normalization layer
#
# The `tf.keras.layers.Normalization` is a clean and simple way to add feature normalization into your model.
#
# The first step is to create the layer:
# + id="JlC5ooJrgjQF"
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer
# + [markdown] id="XYA2Ap6nVOha"
# Then, fit the state of the preprocessing layer to the data by calling `Normalization.adapt`:
# + id="CrBbbjbwV91f"
normalizer.adapt(np.array(train_features))
# + [markdown] id="oZccMR5yV9YV"
# Calculate the mean and variance, and store them in the layer:
# + id="GGn-ukwxSPtx"
print(normalizer.mean.numpy())
# + [markdown] id="oGWKaF9GSRuN"
# When the layer is called, it returns the input data, with each feature independently normalized:
# + id="2l7zFL_XWIRu"
first = np.array(train_features[:1])
with np.printoptions(precision=2, suppress=True):
print('First example:', first)
print()
print('Normalized:', normalizer(first).numpy())
# + [markdown] id="6o3CrycBXA2s"
# ## Linear regression
#
# Before building a deep neural network model, start with linear regression using one and several variables.
# + [markdown] id="lFby9n0tnHkw"
# ### Linear regression with one variable
#
# Begin with a single-variable linear regression to predict `'MPG'` from `'Horsepower'`.
#
# Training a model with `tf.keras` typically starts by defining the model architecture. Use a `tf.keras.Sequential` model, which [represents a sequence of steps](https://www.tensorflow.org/guide/keras/sequential_model).
#
# There are two steps in your single-variable linear regression model:
#
# - Normalize the `'Horsepower'` input features using the `tf.keras.layers.Normalization` preprocessing layer.
# - Apply a linear transformation ($y = mx+b$) to produce 1 output using a linear layer (`tf.keras.layers.Dense`).
#
# The number of _inputs_ can either be set by the `input_shape` argument, or automatically when the model is run for the first time.
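# For intuition, the one-variable $y = mx+b$ fit also has a closed-form least-squares solution, which is what gradient descent converges toward on this problem. A NumPy sketch on toy data (not the actual horsepower/MPG values):

```python
import numpy as np

# toy data generated exactly from y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

m, b = np.polyfit(x, y, deg=1)  # least-squares slope and intercept
print(round(m, 6), round(b, 6))  # 2.0 1.0
```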
# + [markdown] id="Xp3gAFn3TPv8"
# First, create a NumPy array made of the `'Horsepower'` features. Then, instantiate the `tf.keras.layers.Normalization` and fit its state to the `horsepower` data:
# + id="1gJAy0fKs1TS"
horsepower = np.array(train_features['Horsepower'])
horsepower_normalizer = layers.Normalization(input_shape=[1,], axis=None)
horsepower_normalizer.adapt(horsepower)
# + [markdown] id="4NVlHJY2TWlC"
# Build the Keras Sequential model:
# + id="c0sXM7qLlKfZ"
horsepower_model = tf.keras.Sequential([
horsepower_normalizer,
layers.Dense(units=1)
])
horsepower_model.summary()
# + [markdown] id="eObQu9fDnXGL"
# This model will predict `'MPG'` from `'Horsepower'`.
#
# Run the untrained model on the first 10 'Horsepower' values. The output won't be good, but notice that it has the expected shape of `(10, 1)`:
# + id="UfV1HS6bns-s"
horsepower_model.predict(horsepower[:10])
# + [markdown] id="CSkanJlmmFBX"
# Once the model is built, configure the training procedure using the Keras `Model.compile` method. The most important arguments to compile are the `loss` and the `optimizer`, since these define what will be optimized (`mean_absolute_error`) and how (using the `tf.keras.optimizers.Adam`).
# + id="JxA_3lpOm-SK"
horsepower_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error')
# + [markdown] id="Z3q1I9TwnRSC"
# Use Keras `Model.fit` to execute the training for 100 epochs:
# + id="-iSrNy59nRAp"
# %%time
history = horsepower_model.fit(
train_features['Horsepower'],
train_labels,
epochs=100,
# Suppress logging.
verbose=0,
# Calculate validation results on 20% of the training data.
validation_split = 0.2)
# + [markdown] id="tQm3pc0FYPQB"
# Visualize the model's training progress using the stats stored in the `history` object:
# + id="YCAwD_y4AdC3"
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
# + id="9E54UoZunqhc"
def plot_loss(history):
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.ylim([0, 10])
plt.xlabel('Epoch')
plt.ylabel('Error [MPG]')
plt.legend()
plt.grid(True)
# + id="yYsQYrIZyqjz"
plot_loss(history)
# + [markdown] id="CMNrt8X2ebXd"
# Collect the results on the test set for later:
# + id="kDZ8EvNYrDtx"
test_results = {}
test_results['horsepower_model'] = horsepower_model.evaluate(
test_features['Horsepower'],
test_labels, verbose=0)
# + [markdown] id="F0qutYAKwoda"
# Since this is a single variable regression, it's easy to view the model's predictions as a function of the input:
# + id="xDS2JEtOn9Jn"
x = tf.linspace(0.0, 250, 251)
y = horsepower_model.predict(x)
# + id="rttFCTU8czsI"
def plot_horsepower(x, y):
plt.scatter(train_features['Horsepower'], train_labels, label='Data')
plt.plot(x, y, color='k', label='Predictions')
plt.xlabel('Horsepower')
plt.ylabel('MPG')
plt.legend()
# + id="7l9ZiAOEUNBL"
plot_horsepower(x,y)
# + [markdown] id="Yk2RmlqPoM9u"
# ### Linear regression with multiple inputs
# + [markdown] id="PribnwDHUksC"
# You can use an almost identical setup to make predictions based on multiple inputs. This model still does the same $y = mx+b$ except that $m$ is a matrix and $b$ is a vector.
#
# Create a two-step Keras Sequential model again with the first layer being `normalizer` (`tf.keras.layers.Normalization(axis=-1)`) you defined earlier and adapted to the whole dataset:
# + id="ssnVcKg7oMe6"
linear_model = tf.keras.Sequential([
normalizer,
layers.Dense(units=1)
])
# + [markdown] id="IHlx6WeIWyAr"
# When you call `Model.predict` on a batch of inputs, it produces `units=1` outputs for each example:
# + id="DynfJV18WiuT"
linear_model.predict(train_features[:10])
# + [markdown] id="hvHKH3rPXHmq"
# When you call the model, its weight matrices will be built—check that the `kernel` weights (the $m$ in $y=mx+b$) have a shape of `(9, 1)`:
# + id="DwJ4Fq0RXBQf"
linear_model.layers[1].kernel
# + [markdown] id="eINAc6rZXzOt"
# Configure the model with Keras `Model.compile` and train with `Model.fit` for 100 epochs:
# + id="A0Sv_Ybr0szp"
linear_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error')
# + id="EZoOYORvoTSe"
# %%time
history = linear_model.fit(
train_features,
train_labels,
epochs=100,
# Suppress logging.
verbose=0,
# Calculate validation results on 20% of the training data.
validation_split = 0.2)
# + [markdown] id="EdxiCbiNYK2F"
# Using all the inputs in this regression model achieves a much lower training and validation error than the `horsepower_model`, which had one input:
# + id="4sWO3W0koYgu"
plot_loss(history)
# + [markdown] id="NyN49hIWe_NH"
# Collect the results on the test set for later:
# + id="jNC3D1DGsGgK"
test_results['linear_model'] = linear_model.evaluate(
test_features, test_labels, verbose=0)
# + [markdown] id="SmjdzxKzEu1-"
# ## Regression with a deep neural network (DNN)
# + [markdown] id="DT_aHPsrzO1t"
# In the previous section, you implemented two linear models for single and multiple inputs.
#
# Here, you will implement single-input and multiple-input DNN models.
#
# The code is basically the same except the model is expanded to include some "hidden" non-linear layers. The name "hidden" here just means not directly connected to the inputs or outputs.
# + [markdown] id="6SWtkIjhrZwa"
# These models will contain a few more layers than the linear model:
#
# * The normalization layer, as before (with `horsepower_normalizer` for a single-input model and `normalizer` for a multiple-input model).
# * Two hidden, non-linear, `Dense` layers with the ReLU (`relu`) activation function nonlinearity.
# * A linear `Dense` single-output layer.
#
# Both models will use the same training procedure so the `compile` method is included in the `build_and_compile_model` function below.
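# The non-linearity that the hidden layers add can be seen in a tiny NumPy forward pass with fixed toy weights (illustrative only - Keras learns these weights during training):

```python
import numpy as np

def relu(z):
    # ReLU keeps positive values and zeroes out negative ones
    return np.maximum(0.0, z)

x = np.array([1.0, -2.0])                 # a two-feature input
W1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy hidden-layer weights
h = relu(W1 @ x)                          # hidden activations: [1.0, 0.0]
W2 = np.array([[0.5, 0.5]])               # toy output-layer weights
y = W2 @ h                                # single linear output
print(h, y)  # [1. 0.] [0.5]
```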
# + id="c26juK7ZG8j-"
def build_and_compile_model(norm):
model = keras.Sequential([
norm,
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
model.compile(loss='mean_absolute_error',
optimizer=tf.keras.optimizers.Adam(0.001))
return model
# + [markdown] id="6c51caebbc0d"
# ### Regression using a DNN and a single input
# + [markdown] id="xvu9gtxTZR5V"
# Create a DNN model with only `'Horsepower'` as input and `horsepower_normalizer` (defined earlier) as the normalization layer:
# + id="cGbPb-PHGbhs"
dnn_horsepower_model = build_and_compile_model(horsepower_normalizer)
# + [markdown] id="Sj49Og4YGULr"
# This model has quite a few more trainable parameters than the linear models:
# + id="ReAD0n6MsFK-"
dnn_horsepower_model.summary()
# + [markdown] id="0-qWCsh6DlyH"
# Train the model with Keras `Model.fit`:
# + id="sD7qHCmNIOY0"
# %%time
history = dnn_horsepower_model.fit(
train_features['Horsepower'],
train_labels,
validation_split=0.2,
verbose=0, epochs=100)
# + [markdown] id="dArGGxHxcKjN"
# This model does slightly better than the linear single-input `horsepower_model`:
# + id="NcF6UWjdCU8T"
plot_loss(history)
# + [markdown] id="TG1snlpR2QCK"
# If you plot the predictions as a function of `'Horsepower'`, you should notice how this model takes advantage of the nonlinearity provided by the hidden layers:
# + id="hPF53Rem14NS"
x = tf.linspace(0.0, 250, 251)
y = dnn_horsepower_model.predict(x)
# + id="rsf9rD8I17Wq"
plot_horsepower(x, y)
# + [markdown] id="WxCJKIUpe4io"
# Collect the results on the test set for later:
# + id="bJjM0dU52XtN"
test_results['dnn_horsepower_model'] = dnn_horsepower_model.evaluate(
test_features['Horsepower'], test_labels,
verbose=0)
# + [markdown] id="S_2Btebp2e64"
# ### Regression using a DNN and multiple inputs
# + [markdown] id="aKFtezDldLSf"
# Repeat the previous process using all the inputs. The model's performance slightly improves on the validation dataset.
# + id="c0mhscXh2k36"
dnn_model = build_and_compile_model(normalizer)
dnn_model.summary()
# + id="CXDENACl2tuW"
# %%time
history = dnn_model.fit(
train_features,
train_labels,
validation_split=0.2,
verbose=0, epochs=100)
# + id="-9Dbj0fX23RQ"
plot_loss(history)
# + [markdown] id="hWoVYS34fJPZ"
# Collect the results on the test set:
# + id="-bZIa96W3c7K"
test_results['dnn_model'] = dnn_model.evaluate(test_features, test_labels, verbose=0)
# + [markdown] id="uiCucdPLfMkZ"
# ## Performance
# + [markdown] id="rDf1xebEfWBw"
# Since all models have been trained, you can review their test set performance:
# + id="e5_ooufM5iH2"
pd.DataFrame(test_results, index=['Mean absolute error [MPG]']).T
# + [markdown] id="DABIVzsCf-QI"
# These results match the validation error observed during training.
# + [markdown] id="ft603OzXuEZC"
# ### Make predictions
#
# You can now make predictions with the `dnn_model` on the test set using Keras `Model.predict` and review the loss:
# + id="Xe7RXH3N3CWU"
test_predictions = dnn_model.predict(test_features).flatten()
a = plt.axes(aspect='equal')
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
lims = [0, 50]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims, lims)
# + [markdown] id="19wyogbOSU5t"
# It appears that the model predicts reasonably well.
#
# Now, check the error distribution:
# + id="f-OHX4DiXd8x"
error = test_predictions - test_labels
plt.hist(error, bins=25)
plt.xlabel('Prediction Error [MPG]')
_ = plt.ylabel('Count')
# + [markdown] id="KSyaHUfDT-mZ"
# If you're happy with the model, save it for later use with `Model.save`:
# + id="4-WwLlmfT-mb"
dnn_model.save('../.../model_google/dnn_model')
# + [markdown] id="Benlnl8UT-me"
# If you reload the model, it gives identical output:
# + id="dyyyj2zVT-mf"
reloaded = tf.keras.models.load_model('dnn_model')
test_results['reloaded'] = reloaded.evaluate(
test_features, test_labels, verbose=0)
# + id="f_GchJ2tg-2o"
pd.DataFrame(test_results, index=['Mean absolute error [MPG]']).T
# + [markdown] id="vgGQuV-yqYZH"
# ## Conclusion
#
# This notebook introduced a few techniques to handle a regression problem. Here are a few more tips that may help:
#
# - Mean squared error (MSE) (`tf.losses.MeanSquaredError`) and mean absolute error (MAE) (`tf.losses.MeanAbsoluteError`) are common loss functions used for regression problems. MAE is less sensitive to outliers. Different loss functions are used for classification problems.
# - Similarly, evaluation metrics used for regression differ from classification.
# - When numeric input data features have values with different ranges, each feature should be scaled independently to the same range.
# - Overfitting is a common problem for DNN models, though it wasn't a problem for this tutorial. Visit the [Overfit and underfit](overfit_and_underfit.ipynb) tutorial for more help with this.
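# As a quick, self-contained illustration of the outlier-sensitivity point above (plain NumPy, not part of the original tutorial), compare MSE and MAE on predictions that contain one large outlier:

```python
import numpy as np

# True values and predictions; the last prediction is a large outlier
y_true = np.array([10.0, 12.0, 14.0, 16.0])
y_pred = np.array([10.5, 11.5, 14.5, 40.0])

mse = np.mean((y_true - y_pred) ** 2)   # squaring amplifies the 24-unit error
mae = np.mean(np.abs(y_true - y_pred))  # each error contributes linearly

print(f"MSE: {mse}")  # 144.1875 -- dominated by the single outlier
print(f"MAE: {mae}")  # 6.375
```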
# notebooks/externals/google/regression.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Travelling Salesperson Problem solved using genetic algorithms
# +
# Imports
import numpy as np
import random
from datetime import datetime
# +
# Parameters
n_cities = 20
n_population = 100
mutation_rate = 0.3
# +
# Generating a list of coordinates representing each city
coordinates_list = [[x,y] for x,y in zip(np.random.randint(0,100,n_cities),np.random.randint(0,100,n_cities))]
names_list = np.array(['Berlin', 'London', 'Moscow', 'Barcelona', 'Rome', 'Paris', 'Vienna', 'Munich', 'Istanbul', 'Kyiv', 'Bucharest', 'Minsk', 'Warsaw', 'Budapest', 'Milan', 'Prague', 'Sofia', 'Birmingham', 'Brussels', 'Amsterdam'])
cities_dict = { x:y for x,y in zip(names_list,coordinates_list)}
# Function to compute the distance between two points
def compute_city_distance_coordinates(a,b):
return ((a[0]-b[0])**2+(a[1]-b[1])**2)**0.5
def compute_city_distance_names(city_a, city_b, cities_dict):
return compute_city_distance_coordinates(cities_dict[city_a], cities_dict[city_b])
cities_dict
# -
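# To make the distance helpers concrete, here is a self-contained sketch (the two functions are redefined so it runs on its own, and the toy dictionary is purely illustrative) checking the Euclidean distance on a 3-4-5 triangle:

```python
# Euclidean distance between two coordinate pairs
def compute_city_distance_coordinates(a, b):
    return ((a[0]-b[0])**2 + (a[1]-b[1])**2)**0.5

# Distance lookup by city name through a coordinates dictionary
def compute_city_distance_names(city_a, city_b, cities_dict):
    return compute_city_distance_coordinates(cities_dict[city_a], cities_dict[city_b])

toy_cities = {'A': [0, 0], 'B': [3, 4]}
print(compute_city_distance_names('A', 'B', toy_cities))  # 3-4-5 triangle -> 5.0
```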
# ## 1. Create the first population set
# We randomly shuffle the cities N times, where N = n_population
# +
# First step: Create the first population set
def genesis(city_list, n_population):
population_set = []
for i in range(n_population):
#Randomly generating a new solution
sol_i = city_list[np.random.choice(list(range(n_cities)), n_cities, replace=False)]
population_set.append(sol_i)
return np.array(population_set)
population_set = genesis(names_list, n_population)
population_set
# -
# ## 2. Evaluate solutions fitness
# The solutions are defined so that the first element on the list is the first city to visit, then the second, etc. and the last city is linked to the first.
# The fitness function needs to compute the distance between subsequent cities.
def fitness_eval(city_list, cities_dict):
    total = 0
    for i in range(n_cities-1):
        a = city_list[i]
        b = city_list[i+1]
        total += compute_city_distance_names(a, b, cities_dict)
    # Close the tour: the last city is linked back to the first
    total += compute_city_distance_names(city_list[-1], city_list[0], cities_dict)
    return total
# +
def get_all_fitnes(population_set, cities_dict):
fitnes_list = np.zeros(n_population)
#Looping over all solutions computing the fitness for each solution
for i in range(n_population):
fitnes_list[i] = fitness_eval(population_set[i], cities_dict)
return fitnes_list
fitnes_list = get_all_fitnes(population_set,cities_dict)
fitnes_list
# -
# # 3. Progenitors selection
# I will select a new set of progenitors using roulette wheel selection. This generates a list of progenitor pairs of length N = len(population_set); at each position there are two solutions to merge.
# +
def progenitor_selection(population_set, fitnes_list):
    # Fitness here is a tour length, so lower is better: invert the values
    # before normalizing them into selection probabilities, otherwise the
    # roulette wheel would favor the longest tours
    inverse_fitnes = 1.0/fitnes_list
    prob_list = inverse_fitnes/inverse_fitnes.sum()
    # Notice there is the chance that a progenitor mates with itself
    progenitor_list_a = np.random.choice(list(range(len(population_set))), len(population_set), p=prob_list, replace=True)
    progenitor_list_b = np.random.choice(list(range(len(population_set))), len(population_set), p=prob_list, replace=True)
    progenitor_list_a = population_set[progenitor_list_a]
    progenitor_list_b = population_set[progenitor_list_b]
    return np.array([progenitor_list_a, progenitor_list_b])
progenitor_list = progenitor_selection(population_set,fitnes_list)
progenitor_list[0][2]
# -
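# To see the roulette wheel in isolation, here is a toy sketch (hypothetical fitness values). Note that since fitness here is a tour length, lower is better, so the inverse values are normalized into probabilities; normalizing the raw distances would favor the longest tours instead:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy tour lengths: lower is better, so invert before normalizing
fitnes_list = np.array([10.0, 20.0, 40.0])
inverse = 1.0 / fitnes_list
prob_list = inverse / inverse.sum()
print(prob_list)  # [~0.571, ~0.286, ~0.143]

# Draw 1000 progenitor indices; index 0 (shortest tour) should dominate
picks = rng.choice(len(fitnes_list), size=1000, p=prob_list)
counts = np.bincount(picks, minlength=3)
print(counts)
```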
# # 4. Mating
# For each pair of parents we'll generate an offspring pair. Since we cannot repeat cities, we copy a chunk (here, the first five cities) from one progenitor and fill the blanks with the remaining cities in the order they appear in the other progenitor.
# +
def mate_progenitors(prog_a, prog_b):
offspring = prog_a[0:5]
for city in prog_b:
if not city in offspring:
offspring = np.concatenate((offspring,[city]))
return offspring
def mate_population(progenitor_list):
new_population_set = []
for i in range(progenitor_list.shape[1]):
prog_a, prog_b = progenitor_list[0][i], progenitor_list[1][i]
offspring = mate_progenitors(prog_a, prog_b)
new_population_set.append(offspring)
return new_population_set
new_population_set = mate_population(progenitor_list)
new_population_set[0]
# -
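# On a small hand-made pair of tours the crossover behaves as follows (the function is redefined here so the sketch runs on its own):

```python
import numpy as np

def mate_progenitors(prog_a, prog_b):
    # Copy the first five cities from prog_a, then append prog_b's
    # remaining cities in the order they appear
    offspring = prog_a[0:5]
    for city in prog_b:
        if city not in offspring:
            offspring = np.concatenate((offspring, [city]))
    return offspring

a = np.array(['A', 'B', 'C', 'D', 'E', 'F', 'G'])
b = np.array(['G', 'F', 'E', 'D', 'C', 'B', 'A'])
child = mate_progenitors(a, b)
print(child)  # ['A' 'B' 'C' 'D' 'E' 'G' 'F']
```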
# # 5. Mutation
# Now, for each element of the new population, we add a random chance of swapping two cities
# +
def mutate_offspring(offspring):
for q in range(int(n_cities*mutation_rate)):
a = np.random.randint(0,n_cities)
b = np.random.randint(0,n_cities)
offspring[a], offspring[b] = offspring[b], offspring[a]
return offspring
def mutate_population(new_population_set):
mutated_pop = []
for offspring in new_population_set:
mutated_pop.append(mutate_offspring(offspring))
return mutated_pop
mutated_pop = mutate_population(new_population_set)
mutated_pop[0]
# -
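# A self-contained sketch of the swap mutation on a toy tour; the key invariant is that mutation only reorders cities, it never adds or drops one:

```python
import numpy as np

np.random.seed(0)
n_cities = 7
mutation_rate = 0.3

def mutate_offspring(offspring):
    # Perform int(n_cities * mutation_rate) random position swaps
    for _ in range(int(n_cities * mutation_rate)):
        a = np.random.randint(0, n_cities)
        b = np.random.randint(0, n_cities)
        offspring[a], offspring[b] = offspring[b], offspring[a]
    return offspring

tour = np.array(['A', 'B', 'C', 'D', 'E', 'F', 'G'])
mutated = mutate_offspring(tour.copy())
print(mutated)
```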
# # 6. Stopping
# To define a stopping criterion we first need the main loop itself. Here we simply run it for a fixed 10,000 iterations, reporting progress every 100.
best_solution = [-1,np.inf,np.array([])]
for i in range(10000):
if i%100==0: print(i, fitnes_list.min(), fitnes_list.mean(), datetime.now().strftime("%d/%m/%y %H:%M"))
fitnes_list = get_all_fitnes(mutated_pop,cities_dict)
#Saving the best solution
if fitnes_list.min() < best_solution[1]:
best_solution[0] = i
best_solution[1] = fitnes_list.min()
best_solution[2] = np.array(mutated_pop)[fitnes_list.min() == fitnes_list]
    # Select progenitors from the current (mutated) population, not the initial one
    progenitor_list = progenitor_selection(np.array(mutated_pop), fitnes_list)
new_population_set = mate_population(progenitor_list)
mutated_pop = mutate_population(new_population_set)
best_solution
# Genetic_Algorithm_Python_Example/Traveling_Salesman_Problem.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="f8cj-HBNoEZy"
# # Week 3: Transfer Learning
#
# Welcome to this assignment! This week, you are going to use a technique called `Transfer Learning` in which you utilize an already trained network to help you solve a similar problem to the one it was originally trained to solve.
#
# Let's get started!
# + id="lbFmQdsZs5eW"
import os
import zipfile
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import img_to_array, load_img
# + [markdown] id="RPvtLK1GyUWr"
# ## Dataset
#
# For this assignment, you will use the `Horse or Human dataset`, which contains images of horses and humans.
#
# Download the `training` and `validation` sets by running the cell below:
# + id="dIeTNcPEo79J"
# Get the Horse or Human training dataset
# !wget -q -P /content/ https://storage.googleapis.com/tensorflow-1-public/course2/week3/horse-or-human.zip
# Get the Horse or Human validation dataset
# !wget -q -P /content/ https://storage.googleapis.com/tensorflow-1-public/course2/week3/validation-horse-or-human.zip
test_local_zip = './horse-or-human.zip'
zip_ref = zipfile.ZipFile(test_local_zip, 'r')
zip_ref.extractall('/tmp/training')
val_local_zip = './validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(val_local_zip, 'r')
zip_ref.extractall('/tmp/validation')
zip_ref.close()
# + [markdown] id="x4OMDxYS6tmv"
# This dataset already has a structure that is compatible with Keras' `flow_from_directory`, so you don't need to move the images into subdirectories as you did in the previous assignments. However, it is still a good idea to save the paths of the images so you can use them later on:
# + id="lHRrmo5CpEw_" colab={"base_uri": "https://localhost:8080/"} outputId="6142f5af-69e6-47ca-9fd3-596f6f51b616"
# Define the training and validation base directories
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
# Directory with training horse pictures
train_horses_dir = os.path.join(train_dir, 'horses')
# Directory with training humans pictures
train_humans_dir = os.path.join(train_dir, 'humans')
# Directory with validation horse pictures
validation_horses_dir = os.path.join(validation_dir, 'horses')
# Directory with validation human pictures
validation_humans_dir = os.path.join(validation_dir, 'humans')
# Check the number of images for each class and set
print(f"There are {len(os.listdir(train_horses_dir))} images of horses for training.\n")
print(f"There are {len(os.listdir(train_humans_dir))} images of humans for training.\n")
print(f"There are {len(os.listdir(validation_horses_dir))} images of horses for validation.\n")
print(f"There are {len(os.listdir(validation_humans_dir))} images of humans for validation.\n")
# + [markdown] id="1G5hXBB57c78"
# Now take a look at a sample image of each one of the classes:
# + id="HgbMs7p0qSKr" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="6ee32683-055a-4ac1-e2cd-ed1450b797c7"
print("Sample horse image:")
plt.imshow(load_img(f"{os.path.join(train_horses_dir, os.listdir(train_horses_dir)[0])}"))
plt.show()
print("\nSample human image:")
plt.imshow(load_img(f"{os.path.join(train_humans_dir, os.listdir(train_humans_dir)[0])}"))
plt.show()
# + [markdown] id="LBnbnY0c8Zd0"
# `matplotlib` makes it easy to see that these images have a resolution of 300x300 and are in color, but you can double-check this by using the code below:
# + id="4lIGjHC5pxua" colab={"base_uri": "https://localhost:8080/"} outputId="7450852b-872d-44c6-eabf-898a77839657"
# Load the first example of a horse
sample_image = load_img(f"{os.path.join(train_horses_dir, os.listdir(train_horses_dir)[0])}")
# Convert the image into its numpy array representation
sample_array = img_to_array(sample_image)
print(f"Each image has shape: {sample_array.shape}")
# + [markdown] id="4fYwAYyd8zEm"
# As expected, the sample image has a resolution of 300x300 and the last dimension is used for each one of the RGB channels to represent color.
# + [markdown] id="6HcE1TSqNRY2"
# ## Training and Validation Generators
#
# Now that you know the images you are dealing with, it is time for you to code the generators that will feed these images to your network. For this, complete the `train_val_generators` function below:
#
# **Important Note:** The images have a resolution of 300x300 but the `flow_from_directory` method you will use allows you to set a target resolution. In this case, **set a `target_size` of (150, 150)**. This will heavily lower the number of trainable parameters in your final network, yielding much quicker training times without compromising the accuracy!
# + cellView="code" id="AX5Q3NL_FXMT"
# GRADED FUNCTION: train_val_generators
def train_val_generators(TRAINING_DIR, VALIDATION_DIR):
### START CODE HERE
# Instantiate the ImageDataGenerator class
# Don't forget to normalize pixel values and set arguments to augment the images
train_datagen = ImageDataGenerator(rescale=1.0/255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip= True)
# Pass in the appropriate arguments to the flow_from_directory method
train_generator = train_datagen.flow_from_directory(directory=TRAINING_DIR,
batch_size=32,
class_mode='binary',
target_size=(150, 150))
# Instantiate the ImageDataGenerator class (don't forget to set the rescale argument)
# Remember that validation data should not be augmented
validation_datagen = ImageDataGenerator(rescale=1./255)
# Pass in the appropriate arguments to the flow_from_directory method
validation_generator = validation_datagen.flow_from_directory(directory= VALIDATION_DIR,
batch_size=32,
class_mode='binary',
target_size=(150, 150))
### END CODE HERE
return train_generator, validation_generator
# + id="8FLUUqMKFwVR" colab={"base_uri": "https://localhost:8080/"} outputId="e4c3629c-b444-4ac6-9064-69e33cebaaca"
# Test your generators
train_generator, validation_generator = train_val_generators(train_dir, validation_dir)
# + [markdown] id="TszKWhunQaj4"
# **Expected Output:**
# ```
# Found 1027 images belonging to 2 classes.
# Found 256 images belonging to 2 classes.
# ```
# + [markdown] id="Izx51Ju1rXwd"
# ## Transfer learning - Create the pre-trained model
#
# Download the `inception V3` weights into the `/tmp/` directory:
# + id="-lEzPAqxrPcU" colab={"base_uri": "https://localhost:8080/"} outputId="1093f757-650b-41f9-cd55-0f155f7a648e"
# Download the inception v3 weights
# !wget --no-check-certificate \
# https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
# -O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
# + [markdown] id="_zlXNulm9USZ"
# Now load the `InceptionV3` model and save the path to the weights you just downloaded:
# + id="zfmRpsMf7E3-"
# Import the inception model
from tensorflow.keras.applications.inception_v3 import InceptionV3
# Create an instance of the inception model from the local pre-trained weights
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
# + [markdown] id="ZPQb0PkT9_3w"
# Complete the `create_pre_trained_model` function below. You should specify the correct `input_shape` for the model (remember that you set a new resolution for the images instead of the native 300x300) and make all of the layers non-trainable:
# + cellView="code" id="x2JnQ6m8r5oe"
# GRADED FUNCTION: create_pre_trained_model
def create_pre_trained_model(local_weights_file):
### START CODE HERE
pre_trained_model = InceptionV3(input_shape = (150, 150, 3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
# Make all the layers in the pre-trained model non-trainable
for layer in pre_trained_model.layers:
layer.trainable = False
### END CODE HERE
return pre_trained_model
# + [markdown] id="phE00SCr-RCT"
# Check that everything went well by comparing the last few rows of the model summary to the expected output:
# + id="ve7eh9iztT4q" colab={"base_uri": "https://localhost:8080/"} outputId="db9ebf8c-a1b8-4186-e250-4bc9068f7006"
pre_trained_model = create_pre_trained_model(local_weights_file)
# Print the model summary
pre_trained_model.summary()
# + [markdown] id="4cAY2gQytr0-"
# **Expected Output:**
# ```
# batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0]
# __________________________________________________________________________________________________
# activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0]
# __________________________________________________________________________________________________
# mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0]
# activation_276[0][0]
# __________________________________________________________________________________________________
# concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0]
# activation_280[0][0]
# __________________________________________________________________________________________________
# activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0]
# __________________________________________________________________________________________________
# mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0]
# mixed9_1[0][0]
# concatenate_5[0][0]
# activation_281[0][0]
# ==================================================================================================
# Total params: 21,802,784
# Trainable params: 0
# Non-trainable params: 21,802,784
#
#
# ```
# + [markdown] id="MRHkV9jo-hkh"
# To check that all the layers in the model were set to be non-trainable, you can also run the cell below:
# + id="VASOaB8xDbhU" colab={"base_uri": "https://localhost:8080/"} outputId="0f0cf6b8-3db2-4e31-e656-6fa20b7a9915"
total_params = pre_trained_model.count_params()
num_trainable_params = sum([w.shape.num_elements() for w in pre_trained_model.trainable_weights])
print(f"There are {total_params:,} total parameters in this model.")
print(f"There are {num_trainable_params:,} trainable parameters in this model.")
# + [markdown] id="mRioO7FH5a8I"
# **Expected Output:**
# ```
# There are 21,802,784 total parameters in this model.
# There are 0 trainable parameters in this model.
# ```
# + [markdown] id="dFtwDyKj-4GR"
# ## Creating callbacks for later
#
# You have already worked with callbacks in the first course of this specialization, so the callback that stops training once an accuracy of 99.9% is reached is provided for you:
# + id="SeVjZD2o7gWS"
# Define a Callback class that stops training once accuracy reaches 99.9%
class myCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs={}):
    # Guard against a missing 'accuracy' key before comparing
    if logs.get('accuracy') is not None and logs.get('accuracy') > 0.999:
      print("\nReached 99.9% accuracy so cancelling training!")
      self.model.stop_training = True
# + [markdown] id="lHZnFl-5_p3a"
# ## Pipelining the pre-trained model with your own
#
# Now that the pre-trained model is ready, you need to "glue" it to your own model to solve the task at hand.
#
# For this you will need the last output of the pre-trained model, since this will be the input for your own. Complete the `output_of_last_layer` function below.
#
# **Note:** For grading purposes use the `mixed7` layer as the last layer of the pre-trained model. However, after submitting feel free to come back here and play around with this.
# + id="CFsUlwdfs_wg"
# GRADED FUNCTION: output_of_last_layer
def output_of_last_layer(pre_trained_model):
### START CODE HERE
last_desired_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_desired_layer.output_shape)
last_output = last_desired_layer.output
print('last layer output: ', last_output)
### END CODE HERE
return last_output
# + [markdown] id="13AEzKG2A6_J"
# Check that everything works as expected:
# + id="zOJPUtMN6PHo" colab={"base_uri": "https://localhost:8080/"} outputId="c25f1f48-a0dc-4495-fc7d-502e2c4ff9bc"
last_output = output_of_last_layer(pre_trained_model)
# + [markdown] id="XqIWKZ_h7CuY"
# **Expected Output (if `mixed7` layer was used):**
# ```
# last layer output shape: (None, 7, 7, 768)
# last layer output: KerasTensor(type_spec=TensorSpec(shape=(None, 7, 7, 768), dtype=tf.float32, name=None), name='mixed7/concat:0', description="created by layer 'mixed7'")
# ```
# + [markdown] id="0Rp-J6JuwJTq"
# Now you will create the final model by adding some additional layers on top of the pre-trained model.
#
# Complete the `create_final_model` function below. You will need to use Tensorflow's [Functional API](https://www.tensorflow.org/guide/keras/functional) for this since the pretrained model has been created using it.
#
# Let's double check this first:
# + id="cKQknB4j7K9y" colab={"base_uri": "https://localhost:8080/"} outputId="28c849f2-1d8a-4d1e-a9d6-2dc3505ee1c0"
# Print the type of the pre-trained model
print(f"The pretrained model has type: {type(pre_trained_model)}")
# + [markdown] id="Kt7AU7jP7LW9"
# To create the final model, you will use Keras' Model class by defining the appropriate inputs and outputs as described in the first way to instantiate a Model in the [docs](https://www.tensorflow.org/api_docs/python/tf/keras/Model).
#
# Note that you can get the input from any existing model by using its `input` attribute, and by using the Functional API you can use the last layer directly as output when creating the final model.
# + cellView="code" id="BMXb913pbvFg"
# GRADED FUNCTION: create_final_model
def create_final_model(pre_trained_model, last_output):
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
### START CODE HERE
# Add a fully connected layer with 1024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)
# Create the complete model by using the Model class
model = Model( pre_trained_model.input, x)
# Compile the model
model.compile(optimizer = RMSprop(learning_rate=0.0001),
loss = 'binary_crossentropy',
metrics = ['accuracy'])
### END CODE HERE
return model
# + id="cL6ga5Z1783H" colab={"base_uri": "https://localhost:8080/"} outputId="0a75189a-b9ab-4411-9230-5ab21508bdc1"
# Save your model in a variable
model = create_final_model(pre_trained_model, last_output)
# Inspect parameters
total_params = model.count_params()
num_trainable_params = sum([w.shape.num_elements() for w in model.trainable_weights])
print(f"There are {total_params:,} total parameters in this model.")
print(f"There are {num_trainable_params:,} trainable parameters in this model.")
# + [markdown] id="J4d3zlcQDrvm"
# **Expected Output:**
# ```
# There are 47,512,481 total parameters in this model.
# There are 38,537,217 trainable parameters in this model.
# ```
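# The trainable-parameter count above can be checked by hand from the layer shapes: the `mixed7` output is (7, 7, 768), which flattens to 37,632 units, followed by the Dense(1024) and Dense(1) layers you added. A quick arithmetic sketch:

```python
# Flatten of the (7, 7, 768) mixed7 output
flat_units = 7 * 7 * 768                # 37,632

# Dense(1024): weights plus biases
dense_1024 = flat_units * 1024 + 1024   # 38,536,192

# Dense(1) sigmoid head: weights plus bias
dense_1 = 1024 * 1 + 1                  # 1,025

trainable = dense_1024 + dense_1
print(f"{trainable:,}")  # 38,537,217
```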
# + [markdown] id="_eqwHj5xEBZ7"
# Wow, that is a lot of parameters!
#
# After submitting your assignment later, try re-running this notebook but use the original resolution of 300x300; you will be surprised by how many more parameters there are in that case.
#
# Now train the model:
# + id="Blhq2MAUeyGA" colab={"base_uri": "https://localhost:8080/"} outputId="0630298d-8b58-47b7-c458-998c28d34a49"
# Run this and see how many epochs it should take before the callback
# fires, and stops training at 99.9% accuracy
# (It should take a few epochs)
callbacks = myCallback()
history = model.fit(train_generator,
validation_data = validation_generator,
epochs = 100,
verbose = 2,
callbacks=callbacks)
# + [markdown] id="Y94djl4t0sK5"
# The training should have stopped after less than 10 epochs and it should have reached an accuracy over 99.9% (firing the callback). This happened so quickly because of the pre-trained model you used, which already contained information to classify humans from horses. Really cool!
#
# Now take a quick look at the training and validation accuracies for each epoch of training:
# + id="C2Fp6Se9rKuL" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="53019ed9-9918-4260-d697-193a29621698"
# Plot the training and validation accuracies for each epoch
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
# + [markdown] id="g-4-4i9U1a0s"
# You will need to submit this notebook for grading. To download it, click on the `File` tab in the upper left corner of the screen then click on `Download` -> `Download .ipynb`. You can name it anything you want as long as it is a valid `.ipynb` (jupyter notebook) file.
# + [markdown] id="7w54-pbB1W9r"
# **Congratulations on finishing this week's assignment!**
#
# You have successfully implemented a convolutional neural network that leverages a pre-trained network to help you solve the problem of classifying humans from horses.
#
# **Keep it up!**
# C2/W3/assignment/C2_W3_Assignment_Solution v2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Imports
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import math
from tensorflow_probability import edward2 as ed
# %matplotlib inline
# # Configuration
#
# Now we define the learning method, whether or not there will be model misspecification, and the number of models in the ensemble.
#
#
# +
## Learning Method
# 0. MAP Learning
# 2. PAC$^2$-Ensemble Learning
# 3. PAC$^2_T$-Ensemble Learning
LEARNING_METHOD = 3
## Control the presence of model misspecification as shown in Figures 2 and 3.
MODEL_MISSSPECIFICATION = True
## Number of ensemble models.
K=3
# -
# # Data Set
#
# In this first part, we present the data set used for this example. By setting the flag ``MODEL_MISSSPECIFICATION`` we can generate the figures under perfect model specification or under model misspecification.
# +
# Set seeds for reproducibility
np.random.seed(0)
tf.set_random_seed(0)
if MODEL_MISSSPECIFICATION:
VAR=10.
else:
VAR=1.
NSAMPLE = 10000
def sampleData(samples, variance):
x = np.linspace(-10.5, 10.5, samples).reshape(-1, 1)
r = 1+np.float32(np.random.normal(size=(samples,1),scale=variance))
y = np.float32(np.sin(0.75*x)*7.0+x*0.5+r*1.0)
return (x,y)
(x_train, y_train) = sampleData(NSAMPLE, VAR)
plt.scatter(x_train, y_train, marker='+', label='Training data')
plt.ylim(-20,20)
plt.xticks(np.arange(-10.5, 10.5, 4))
plt.legend()
plt.show()
# -
# # Learning a Neural Network
#
# We now employ TensorFlow Probability and Edward2 to define and perform variational inference over a Bayesian neural network.
# +
NHIDDEN = 20
x = tf.placeholder("float", shape=[None, 1])
y = tf.placeholder("float", shape=[None, 1])
def model(NHIDDEN, x):
W = tf.Variable(tf.random_normal([1, NHIDDEN], 0.0, 0.05, dtype=tf.float32))
b = tf.Variable(tf.random_normal([1, NHIDDEN], 0.0, 0.05, dtype=tf.float32))
W_out = tf.Variable(tf.random_normal([NHIDDEN, 1], 0.0, 0.05, dtype=tf.float32))
b_out = tf.Variable(tf.random_normal([1, 1], 0.0, 0.05, dtype=tf.float32))
hidden_layer = tf.nn.tanh(tf.matmul(x, W) + b)
out = tf.matmul(hidden_layer, W_out) + b_out
y = ed.Normal(loc=out, scale=1.0, name="y")
return x, y
t = []
tpy = []
for i in range(K):
px,py = model(NHIDDEN,x)
t.append(py.distribution.log_prob(y))
tpy.append(py)
# -
# ## Defining the variational functionals
#
# Now we define the functionals $\bar{\cal L}_{PB^2}(\rho)$ and $\bar{\cal L}_{PB^2_h}(\rho)$ for computing the posteriors $\rho(\theta|D)$ and $\rho_h(\theta|D)$, respectively.
# +
probs = tf.math.softmax(tf.Variable(tf.ones([K], dtype=tf.float32), trainable=False, name='probs'))
ensemble = tf.concat(t,1)
logmax = tf.stop_gradient(tf.math.reduce_max(ensemble,axis=1))
logmean = tf.stop_gradient(tf.math.reduce_logsumexp(ensemble+tf.reshape(tf.tile(tf.log(probs),[NSAMPLE]),[NSAMPLE,K]), axis=1) - tf.log(K + 0.0))
varlist = []
#####
inc = logmean-logmax
if (LEARNING_METHOD==3):
hmax = 2*tf.stop_gradient(inc/tf.math.pow(1-tf.math.exp(inc),2) + tf.math.pow(tf.math.exp(inc)*(1-tf.math.exp(inc)),-1))
else:
hmax = 1.
#####
for i in range(K):
vari = 0.5*(tf.reduce_sum(tf.exp(2*ensemble[:,i]-2*logmax)*hmax,axis=0))
for j in range(K):
vari = vari - 0.5*tf.reduce_sum(tf.reduce_sum(tf.exp(ensemble[:,i] + ensemble[:,j] - 2*logmax)*hmax,axis=0))*probs[j]
varlist.append(vari)
var=tf.stack(varlist,0)
dataenergy = tf.reduce_sum(ensemble,axis=0)
if (LEARNING_METHOD==2 or LEARNING_METHOD==3):
elboEnsemble = dataenergy + var
else:
elboEnsemble = dataenergy
pacelbo = tf.reduce_sum(tf.math.multiply(elboEnsemble,probs))
pacelbo = pacelbo - tf.reduce_sum(tf.math.multiply(probs,tf.log(probs)))
# -
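# The `reduce_logsumexp` / `logmax` manipulations above rely on the standard max-subtraction trick for numerical stability: $\log \frac{1}{K}\sum_k e^{x_k} = m + \log \frac{1}{K}\sum_k e^{x_k - m}$ with $m = \max_k x_k$. A NumPy sketch (with illustrative values, independent of the model above) of why the trick matters:

```python
import numpy as np

x = np.array([-1000.0, -1001.0, -1002.0])

# Naive evaluation underflows: exp(-1000) is 0.0 in float64
with np.errstate(divide='ignore'):
    naive = np.log(np.mean(np.exp(x)))   # -inf

# Subtracting the max keeps the exponentials in a safe range
m = np.max(x)
stable = m + np.log(np.mean(np.exp(x - m)))
print(naive, stable)  # -inf vs approximately -1000.691
```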
# ## Optimizing the variational functionals
#
# We perform gradient-based optimization of the above objective.
# +
num_epochs=5000
verbose=True
sess = tf.Session()
t = []
train = tf.train.AdamOptimizer(0.01).minimize(-pacelbo)
init = tf.global_variables_initializer()
sess.run(init)
for i in range(num_epochs+1):
t.append(-sess.run(pacelbo,feed_dict={x: x_train,y: y_train}))
sess.run(train,feed_dict={x: x_train,y: y_train})
if verbose:
if i % 100 == 0:
str_elbo = str(-t[-1])
print("\n" + str(i) + " epochs\t" + str_elbo, end="", flush=True)
# -
# ## Evaluating the learned model
# Once the model is learned, we evaluate how it makes predictions by plotting its associated epistemic and aleatoric uncertainty.
# +
NSAMPLETEST = 10000
(x_test, y_test) = sampleData(NSAMPLETEST, VAR)
y_pred_list = []
y_pred_noise = []
for i in range(K):
[mean, noise] = sess.run([tpy[i].distribution.mean(), tpy[i]], feed_dict={x: x_test})
y_pred_list.append(mean)
y_pred_noise.append(noise)
y_preds = np.concatenate(y_pred_list, axis=1)
y_preds_noise = np.concatenate(y_pred_noise, axis=1)
w = sess.run(probs)
y_mean = np.average(y_preds, weights = w, axis=1)
y_sigma = np.sqrt(np.average(np.power(y_preds,2), weights = w, axis=1) - y_mean**2)
y_mean_noise = np.average(y_preds_noise, weights = w, axis=1)
y_sigma_noise = np.sqrt(np.average(np.power(y_preds_noise,2), weights = w, axis=1) - y_mean_noise**2)
plt.plot(x_test, y_mean.reshape(-1, 1), 'r-', label='Predictive mean');
plt.scatter(x_train, y_train, marker='+', label='Training data')
plt.fill_between(x_test.ravel(),
y_mean + 2 * y_sigma_noise,
y_mean - 2 * y_sigma_noise,
                 alpha=0.5, label='Aleatoric uncertainty')
plt.fill_between(x_test.ravel(),
y_mean + 2 * y_sigma,
y_mean - 2 * y_sigma,
alpha=0.5, label='Epistemic uncertainty')
plt.ylabel('y')
plt.xlabel('x')
plt.ylim(-20,20)
#plt.xticks(np.arange(-20., 20.5, 4))
plt.xticks(np.arange(-10.5, 10.5, 4))
plt.legend();
dataname = 'Sindata'
if (LEARNING_METHOD==3):
plt.title(r'PAC$^2_T$-Ensemble Posterior Predictive')
elif (LEARNING_METHOD==2):
plt.title(r'PAC$^2$-Ensemble Posterior Predictive')
else:
plt.title(r'(MAP)-Ensemble Posterior Predictive')
plt.show()
# -
# Let's have a look at the individual models of the ensemble.
# +
#plt.figure(figsize=(8, 8))
plt.plot(x_test, y_mean.reshape(-1, 1), 'r-', label='Predictive mean');
plt.scatter(x_train, y_train, marker='+', label='Training data')
for i in range(K):
plt.plot(x_test, y_preds[:,i], 'b-',label='Component-'+str(i));
plt.ylabel('y')
plt.xlabel('x')
plt.ylim(-20,20)
#plt.xticks(np.arange(-20., 20.5, 4))
plt.xticks(np.arange(-10.5, 10.5, 4))
plt.legend();
if (LEARNING_METHOD==3):
plt.title(r'PAC$^2_T$-Ensemble Components')
elif (LEARNING_METHOD==2):
plt.title(r'PAC$^2$-Ensemble Components')
else:
plt.title(r'(MAP)-Ensemble Components')
plt.show()
# -
# We also compute the *log-likelihood of the predictive posterior* over the independent test data set.
# +
y_pred_list = []
for i in range(K):
y_pred_list.append(tpy[i].distribution.log_prob(y_test)+tf.log(probs[i]))
y_preds = tf.concat(y_pred_list, axis=1)
score = tf.reduce_sum(tf.math.reduce_logsumexp(y_preds,axis=1)-tf.log(K+0.0))
score = sess.run(score,feed_dict={x: x_test})
print("\n Negative Log-likelihood of the posterior predictive : "+str(score))
# -
# notebooks/PAC2-Ensemble-SinusoidalData-NeuralNetwork.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# # Predicting the sale price of bulldozers using ML
# In this notebook, I am going to work through the goal of predicting the sale price of bulldozers.
#
# # 1. Problem definition :
# Predict the sale price of a particular piece of heavy equipment at auction based on it's usage, equipment type, and configuration.
#
# # 2. Data
# The data is downloaded from the Kaggle "Blue Book for bulldozer" competition. https://www.kaggle.com/c/bluebook-for-bulldozers/data
#
# There are 3 main datasets:
#
# - Train.csv is the training set, which contains data through the end of 2011.
# - Valid.csv is the validation set, which contains data from January 1, 2012 - April 30, 2012. You make predictions on this set throughout the majority of the competition. Your score on this set is used to create the public Leaderboard.
# - Test.csv is the test set, which won't be released until the last week of the competition. It contains data from May 1, 2012 - November 2012. Your score on the test set determines your final rank for the competition.
# # 3. Evaluation
# RMSLE (root mean squared log error) between the actual and predicted auction prices.
# -
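# RMSLE is not a built-in scikit-learn scorer, so here is a minimal NumPy sketch of the metric (the `rmsle` name is mine, not part of the competition code):

```python
import numpy as np

def rmsle(y_true, y_pred):
    # sqrt(mean((log1p(pred) - log1p(true))^2)), i.e. RMSE in log1p space
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

print(rmsle([10, 20], [10, 20]))           # 0.0 for perfect predictions
print(round(rmsle([0.0], [np.e - 1]), 4))  # 1.0
```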
# # Importing essential tools
# +
# Regular EDA and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# preprocessor
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
# Models from Scikit-Learn
from sklearn.ensemble import RandomForestRegressor
# Model Evaluations
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import mean_squared_error,mean_squared_log_error,mean_absolute_error,make_scorer
#Pipeline
from sklearn.pipeline import Pipeline
plt.style.use('seaborn-whitegrid')
from datetime import datetime
# -
# # Load data
# Parsing saledate as a datetime column
# combined dataset of training and validation set
df = pd.read_csv("../input/blue-book-for-bulldozer/Train/Train.csv",parse_dates=['saledate'],low_memory=False)
# test set
test_df = pd.read_csv("../input/blue-book-for-bulldozer/Test.csv",parse_dates=['saledate'],low_memory=False)
# sorting df according to the saledate
df.sort_values(by='saledate',inplace=True)
df.head().T
df.info() # most of the features have object dtype
test_df.info()
# shape of the dataframe
df.shape
test_df.shape
# # Preprocessing
df.isna().sum()
test_df.isna().sum()
# # Visualize missing data
# visualizing missing entries
df_missing_percentage = ((df.isna().sum()/df.shape[0])*100)
test_df_missing_percentage = ((test_df.isna().sum()/test_df.shape[0])*100)
pd.DataFrame(df_missing_percentage,columns=['missing%']).sort_values(by='missing%').plot(kind='barh',figsize=(7,15));
plt.xticks(fontsize = 15);
plt.yticks(fontsize = 10);
pd.DataFrame(test_df_missing_percentage,columns=['missing%']).sort_values(by='missing%').plot(kind='barh',figsize=(7,15));
plt.xticks(fontsize = 15);
plt.yticks(fontsize = 10);
# ### Adding Missing Indicators for Numerical and Categorical columns
# +
# First of all, I have concatenated all data points so that we can add missing indicators easily.
# test_df has no SalePrice column, so its data points will have NaN in the SalePrice column when concatenated with df
Concat = pd.concat((df,test_df),axis = 0).reset_index(drop=True)
# Converting all columns with object dtype to category dtype
for label,content in Concat.items() :
if pd.api.types.is_object_dtype(content):
Concat[label] = content.astype('category')
# Enriching features
Concat['year'] = Concat.saledate.dt.year
Concat['month']= Concat.saledate.dt.month
Concat['day']= Concat.saledate.dt.day
# -
cat=[] # list for storing all columns with 'category' dtype
cat_missing = [] # list for storing columns with 'category' dtype and having missing values
num_missing = [] # list for storing columns with 'numerical' dtype and having missing values
# +
for label,content in Concat.items():
if pd.api.types.is_numeric_dtype(content): # checking for numerical features
if content.isna().sum() > 0: # checking if the feature has any missing values
Concat[f'{label}_ismissing'] = content.isna()
num_missing.append(label)
if pd.api.types.is_categorical_dtype(content): # checking for categorical features
cat.append(label)
if content.isna().sum() > 0: # checking if the feature has any missing values
Concat[f'{label}_ismissing'] = content.isna()
cat_missing.append(label)
cat_not_missing = list(set(cat) - set(cat_missing))
# -
# ### Filling categorical values
# One more reason to make a single dataset of all data points is to cover all possible category values when assigning codes to categorical data.
# +
# For missing values in a categorical dtype, `-1` is assigned as the code by default, so we add 1 so that missing values map to 0
Concat[cat_missing] = Concat[cat_missing].apply(lambda i : i.cat.codes+1)
# For features with no missing values, simply assigning code
Concat[cat_not_missing] = Concat[cat_not_missing].apply(lambda i : i.cat.codes)
# -
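# A tiny demonstration of the `cat.codes` convention relied on above (pandas encodes missing categorical values as `-1`, so adding 1 maps them to 0):

```python
import pandas as pd

s = pd.Series(['a', None, 'b'], dtype='category')
print(s.cat.codes.tolist())        # [0, -1, 1] -> NaN is encoded as -1
print((s.cat.codes + 1).tolist())  # [1, 0, 2]  -> missing values now map to 0
```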
(Concat.isna().sum() !=0 ).sum() # one of which is SalePrice, which will not be considered
# ### Filling numerical values
# Filling the missing values with the median.
# To avoid data leakage, we first separate the training set, validation set and test set
# +
train_df = Concat.loc[Concat.saledate.dt.year < 2012, :].drop('saledate', axis=1)
valid_df = Concat.loc[Concat.saledate <= pd.Timestamp(
year=2012, month=4, day=30)].loc[Concat.saledate >= pd.Timestamp(year=2012, month=1, day=1)].drop('saledate', axis=1)
# strict '>' so the test set starts May 1, 2012 and does not overlap the validation set
test_df = Concat.loc[Concat.saledate >
                     pd.Timestamp(year=2012, month=4, day=30), :].drop(['SalePrice','saledate'], axis=1)
# -
train_df.shape
test_df.shape
valid_df.shape
train_df[num_missing].isna().sum()
valid_df[num_missing].isna().sum()
# +
num_imputer = SimpleImputer(strategy='median')
transformer = ColumnTransformer(transformers=[('num_missing',num_imputer,train_df.columns)],remainder='passthrough',)
train_df_filled = transformer.fit_transform(train_df) # fitting on training data
valid_df_filled = transformer.transform(valid_df) # transforming test based on training data to avoid data leakage
train_df_filled = pd.DataFrame(train_df_filled,columns=train_df.columns)
valid_df_filled = pd.DataFrame(valid_df_filled,columns=valid_df.columns)
# -
train_df_filled
train_df_filled[num_missing].isna().sum()
valid_df_filled[num_missing].isna().sum()
# ### Modelling
# +
# separating features and labels
X_train_filled,y_train_filled = train_df_filled.drop(['SalePrice'],axis=1),train_df_filled.SalePrice
X_valid_filled,y_valid_filled = valid_df_filled.drop(['SalePrice'],axis=1),valid_df_filled.SalePrice
X_train,y_train = train_df.drop(['SalePrice'],axis=1),train_df.SalePrice
X_valid,y_valid = valid_df.drop(['SalePrice'],axis=1),valid_df.SalePrice
# -
|
Blue Book for Bulldozers/bulldozersrandomforestregressor.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.0 64-bit ('ai-env')
# metadata:
# interpreter:
# hash: ee89ffdf677b068b3969c9c92fc557abd8fe9bdccee3c1b3432324df722c1402
# name: python3
# ---
# ## seq2seq practice
# ### Generating names with recurrent neural networks
#
# This time you'll find yourself delving into the heart (and other intestines) of recurrent neural networks on a class of toy problems.
#
# Struggle to find a name for a variable? Let's see how you'll come up with a name for your son/daughter. Surely no human has expertise over what makes a good child name, so let us train an RNN instead.
#
# It's dangerous to go alone, take these:
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import os
# # Our data
# The dataset contains ~8k earthling names from different cultures, all in Latin transcription.
#
# This notebook has been designed so as to allow you to quickly swap names for something similar: deep learning article titles, IKEA furniture, pokemon names, etc.
# +
start_token = " "
def read_names(path_to_file):
global start_token
with open(path_to_file) as f:
names = f.read()[:-1].split('\n')
names = [start_token + line for line in names]
return names
# -
try:
names = read_names('names')
except FileNotFoundError:
# !wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/master/datasets/names_dataset/names -nc -O names
names = read_names('./names')
print ('n samples = ',len(names))
for x in names[::1000]:
print (x)
# +
MAX_LENGTH = max(map(len, names))
print("max length =", MAX_LENGTH)
plt.title('Sequence length distribution')
plt.hist(list(map(len, names)),bins=25);
# -
# # Text processing
#
# First we need to collect a "vocabulary" of all unique tokens, i.e. unique characters. We can then encode inputs as a sequence of character ids.
# +
tokens = list(set("".join(names)))
num_tokens = len(tokens)
print ('num_tokens = ', num_tokens)
assert 50 < num_tokens < 60, "Names should contain between 50 and 60 unique tokens depending on encoding"
# -
# ### Convert characters to integers
#
# Torch is built for crunching numbers, not strings.
# To train our neural network, we'll need to replace characters with their indices in tokens list.
#
# Let's compose a dictionary that does this mapping.
token_to_id = {tokens[i]:i for i in range(len(tokens))}
# +
assert len(tokens) == len(token_to_id), "dictionaries must have same size"
for i in range(num_tokens):
    assert token_to_id[tokens[i]] == i, "token identifier must be its position in tokens list"
print("Seems alright!")
# -
def to_matrix(names, max_len=None, pad=token_to_id[' '], dtype='int32', batch_first = True):
"""Casts a list of names into rnn-digestable matrix"""
max_len = max_len or max(map(len, names))
names_ix = np.zeros([len(names), max_len], dtype) + pad
for i in range(len(names)):
line_ix = [token_to_id[c] for c in names[i]]
names_ix[i, :len(line_ix)] = line_ix
if not batch_first: # convert [batch, time] into [time, batch]
names_ix = np.transpose(names_ix)
return names_ix
names[:2]
#Example: cast 4 random names to matrices, pad with zeros
print('\n'.join(names[::2000]))
print(to_matrix(names[::2000]))
# # Recurrent neural network
#
# We can rewrite recurrent neural network as a consecutive application of dense layer to input $x_t$ and previous rnn state $h_t$. This is exactly what we're gonna do now.
# <img src="./rnn.png" width=480>
#
# Since we're training a language model, there should also be:
# * An embedding layer that converts character id x_t to a vector.
# * An output layer that predicts probabilities of the next character
import torch, torch.nn as nn
import torch.nn.functional as F
class CharRNNCell(nn.Module):
"""
Implement the scheme above as torch module
"""
def __init__(self, num_tokens=len(tokens), embedding_size=16, rnn_num_units=64):
super(self.__class__,self).__init__()
self.num_units = rnn_num_units
self.embedding = nn.Embedding(num_tokens, embedding_size)
self.rnn_update = nn.Linear(embedding_size + rnn_num_units, rnn_num_units)
self.rnn_to_logits = nn.Linear(rnn_num_units, num_tokens)
def forward(self, x, h_prev):
"""
This method computes h_next(x, h_prev) and log P(x_next | h_next)
We'll call it repeatedly to produce the whole sequence.
:param x: batch of character ids, containing vector of int64
:param h_prev: previous rnn hidden states, containing matrix [batch, rnn_num_units] of float32
"""
# get vector embedding of x
x_emb = self.embedding(x)
# compute next hidden state using self.rnn_update
# hint: use torch.cat(..., dim=...) for concatenation
x_and_h = torch.cat([x_emb, h_prev], dim=1)
h_next = self.rnn_update(x_and_h)
h_next = torch.tanh(h_next)
assert h_next.size() == h_prev.size()
        # compute raw logits for next character probs; log_softmax is applied
        # once in rnn_loop (applying it twice would distort the log-probabilities)
        logits = self.rnn_to_logits(h_next)
return h_next, logits
def initial_state(self, batch_size):
""" return rnn state before it processes first input (aka h0) """
return torch.zeros(batch_size, self.num_units, requires_grad=True)
char_rnn = CharRNNCell()
criterion = nn.NLLLoss()
# ### RNN loop
#
# Once we've defined a single RNN step, we can apply it in a loop to get predictions on each step.
def rnn_loop(char_rnn, batch_ix):
"""
Computes log P(next_character) for all time-steps in names_ix
:param names_ix: an int32 matrix of shape [batch, time], output of to_matrix(names)
"""
batch_size, max_length = batch_ix.size()
hid_state = char_rnn.initial_state(batch_size)
logprobs = []
for x_t in batch_ix.transpose(0,1):
hid_state, logits = char_rnn(x_t, hid_state) # <-- here we call your one-step code
logprobs.append(F.log_softmax(logits, -1))
return torch.stack(logprobs, dim=1)
# +
batch_ix = to_matrix(names[:5])
batch_ix = torch.tensor(batch_ix, dtype=torch.int64)
logp_seq = rnn_loop(char_rnn, batch_ix)
assert torch.max(logp_seq).data.numpy() <= 0
assert tuple(logp_seq.size()) == batch_ix.shape + (num_tokens,)
# -
# ### Likelihood and gradients
#
# We can now train our neural network to minimize crossentropy (maximize log-likelihood) with the actual next tokens.
#
# To do so in a vectorized manner, we take `batch_ix[:, 1:]` - a matrix of token ids shifted one step to the left, so the i-th element is actually the "next token" for the i-th prediction
# +
predictions_logp = logp_seq[:, :-1]
actual_next_tokens = batch_ix[:, 1:]
# .contiguous() method checks that tensor is stored in the memory correctly to
# get its view of desired shape.
loss = criterion(predictions_logp.contiguous().view(-1, num_tokens),
actual_next_tokens.contiguous().view(-1))
# -
loss.backward()
for w in char_rnn.parameters():
assert w.grad is not None and torch.max(torch.abs(w.grad)).data.numpy() != 0, \
"Loss is not differentiable w.r.t. a weight with shape %s. Check forward method." % (w.size(),)
# ### The training loop
#
# We train our char-rnn exactly the same way we train any deep learning model: by minibatch sgd.
#
# The only difference is that this time we sample strings, not images or sound.
# +
from IPython.display import clear_output
from random import sample
char_rnn = CharRNNCell()
criterion = nn.NLLLoss()
opt = torch.optim.Adam(char_rnn.parameters())
history = []
# +
MAX_LENGTH = 16
for i in range(1000):
batch_ix = to_matrix(sample(names, 32), max_len=MAX_LENGTH)
batch_ix = torch.tensor(batch_ix, dtype=torch.int64)
logp_seq = rnn_loop(char_rnn, batch_ix)
# compute loss
predictions_logp = logp_seq[:, :-1]
actual_next_tokens = batch_ix[:, 1:]
loss = criterion(predictions_logp.contiguous().view(-1, num_tokens),
actual_next_tokens.contiguous().view(-1))
# train with backprop
# YOUR CODE HERE
loss.backward()
opt.step()
opt.zero_grad()
history.append(loss.data.numpy())
if (i+1)%100==0:
clear_output(True)
plt.plot(history,label='loss')
plt.legend()
plt.show()
assert np.mean(history[:10]) > np.mean(history[-10:]), "RNN didn't converge."
# -
# ### RNN: sampling
# Once we've trained our network a bit, let's get to actually generating stuff.
# All we need is the single rnn step function you have defined in `char_rnn.forward`.
def generate_sample(char_rnn, seed_phrase=' ', max_length=MAX_LENGTH, temperature=1.0):
'''
The function generates text given a phrase of length at least SEQ_LENGTH.
:param seed_phrase: prefix characters. The RNN is asked to continue the phrase
:param max_length: maximum output length, including seed_phrase
:param temperature: coefficient for sampling. higher temperature produces more chaotic outputs,
smaller temperature converges to the single most likely output
'''
# convert to list of numbers
x_sequence = [token_to_id[token] for token in seed_phrase]
# convert to tensor
x_sequence = torch.tensor([x_sequence], dtype=torch.int64)
hid_state = char_rnn.initial_state(batch_size=1)
#feed the seed phrase, if any
for i in range(len(seed_phrase) - 1):
hid_state, _ = char_rnn(x_sequence[:, i], hid_state)
#start generating
for _ in range(max_length - len(seed_phrase)):
hid_state, logits = char_rnn(x_sequence[:, -1], hid_state)
p_next = F.softmax(logits / temperature, dim=-1).data.numpy()[0]
# sample next token and push it back into x_sequence
        # i.e. draw a random index from the probability distribution we obtained
next_ix = np.random.choice(num_tokens,p=p_next)
next_ix = torch.tensor([[next_ix]], dtype=torch.int64)
x_sequence = torch.cat([x_sequence, next_ix], dim=1)
return ''.join([tokens[ix] for ix in x_sequence.data.numpy()[0]])
for _ in range(10):
print(generate_sample(char_rnn, temperature=1.1))
for _ in range(50):
print(generate_sample(char_rnn, seed_phrase=' Deb'))
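# As an aside, here is a standalone NumPy illustration of what the `temperature` argument does to the sampling distribution (not part of the assignment code):

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # higher temperature flattens the distribution, lower temperature sharpens it
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                 # for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 1.0))   # the usual softmax
print(softmax_with_temperature(logits, 0.5))   # sharper: closer to argmax
print(softmax_with_temperature(logits, 5.0))   # flatter: closer to uniform
```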
# ### More seriously
#
# What we just did is a manual low-level implementation of RNN. While it's cool, I guess you won't like the idea of re-writing it from scratch on every occasion.
#
# As you might have guessed, torch has a solution for this. To be more specific, there are two options:
# * `nn.RNNCell(emb_size, rnn_num_units)` - implements a single step of RNN just like you did. Basically concat-linear-tanh
# * `nn.RNN(emb_size, rnn_num_units)` - implements the whole rnn_loop for you.
#
# There's also `nn.LSTMCell` vs `nn.LSTM`, `nn.GRUCell` vs `nn.GRU`, etc. etc.
#
# In this example we'll rewrite the char_rnn and rnn_loop using high-level rnn API.
class CharRNNLoop(nn.Module):
def __init__(self, num_tokens=num_tokens, emb_size=16, rnn_num_units=64):
super(self.__class__, self).__init__()
self.emb = nn.Embedding(num_tokens, emb_size)
self.rnn = nn.LSTM(emb_size, rnn_num_units, batch_first=True)
self.hid_to_logits = nn.Linear(rnn_num_units, num_tokens)
def forward(self, x):
assert isinstance(x.data, torch.LongTensor)
h_seq, _ = self.rnn(self.emb(x))
next_logits = self.hid_to_logits(h_seq)
next_logp = F.log_softmax(next_logits, dim=-1)
return next_logp
# +
model = CharRNNLoop()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
history = []
# the model applies over the whole sequence
batch_ix = to_matrix(sample(names, 32), max_len=MAX_LENGTH)
batch_ix = torch.LongTensor(batch_ix)
# +
logp_seq = model(batch_ix)
loss = criterion(logp_seq[:, :-1].contiguous().view(-1, num_tokens),
batch_ix[:, 1:].contiguous().view(-1))
loss.backward()
# +
MAX_LENGTH = 16
for i in range(1000):  # enough iterations for the every-100-steps plot and the convergence check below
batch_ix = to_matrix(sample(names, 32), max_len=MAX_LENGTH)
batch_ix = torch.tensor(batch_ix, dtype=torch.int64)
logp_seq = model(batch_ix)
# compute loss
# YOUR CODE HERE
loss = criterion(logp_seq[:, :-1].contiguous().view(-1, num_tokens),
batch_ix[:, 1:].contiguous().view(-1))
# print(logp_seq)
# print(batch_ix)
loss.backward()
opt.step()
opt.zero_grad()
history.append(loss.data.numpy())
if (i+1)%100==0:
clear_output(True)
plt.plot(history,label='loss')
plt.legend()
plt.show()
assert np.mean(history[:10]) > np.mean(history[-10:]), "RNN didn't converge."
# + tags=[]
def generate_sample(char_rnn=model, seed_phrase=' ', max_length=MAX_LENGTH, temperature=1.0):
'''
The function generates text given a phrase of length at least SEQ_LENGTH.
:param seed_phrase: prefix characters. The RNN is asked to continue the phrase
:param max_length: maximum output length, including seed_phrase
:param temperature: coefficient for sampling. higher temperature produces more chaotic outputs,
smaller temperature converges to the single most likely output
'''
x_sequence = [token_to_id[token] for token in seed_phrase]
x_sequence = torch.tensor([x_sequence], dtype=torch.int64)
#start generating
    for _ in range(max_length - len(seed_phrase)):  # generate up to max_length, as in the earlier version
p_next = char_rnn(x_sequence).data.numpy()
# print(p_next[0][-1:][0], x_sequence)
p_next = torch.tensor(p_next[0][-1:][0])
p_next = torch.softmax(p_next/ temperature, dim=-1)
# sample next token and push it back into x_sequence
        # i.e. draw a random index from the probability distribution we obtained
next_ix = np.random.choice(num_tokens,p=p_next.data.numpy())
next_ix = torch.tensor([[next_ix]], dtype=torch.int64)
x_sequence = torch.cat([x_sequence, next_ix], dim=1)
return ''.join([tokens[ix] for ix in x_sequence.data.numpy()[0]])
# generate_sample(seed_phrase=" ")
# -
for _ in range(10):
print(generate_sample(seed_phrase=' Em', temperature=1) )
print(generate_sample(seed_phrase=" "))
print(generate_sample(seed_phrase=" "))
print(generate_sample(seed_phrase=" "))
print(generate_sample(seed_phrase=" "))
# ### To sum up:
# - PyTorch is convenient both for prototyping and production
# - There are a lot of pre-implemented methods/layers/activations out of the box
# - It's much easier (*really easier*) to use PyTorch than TensorFlow on entry level.
# - Neural networks are not *black boxes*, they are pretty nice and easy to use (almost always).
# ### Try it out!
# You've just implemented a recurrent language model that can be tasked with generating any kind of sequence, so there's plenty of data you can try it on:
#
# * Novels/poems/songs of your favorite author
# * News titles/clickbait titles
# * Source code of Linux or Tensorflow
# * Molecules in [smiles](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) format
# * Melody in notes/chords format
# * Ikea catalog titles
# * Pokemon names
# * Cards from Magic, the Gathering / Hearthstone
#
# If you're willing to give it a try, here's what you wanna look at:
# * Current data format is a sequence of lines, so a novel can be formatted as a list of sentences. Alternatively, you can change data preprocessing altogether.
# * While some datasets are readily available, others can only be scraped from the web. Try `Selenium` or `Scrapy` for that.
# * Make sure MAX_LENGTH is adjusted for longer datasets. There's also a bonus section about dynamic RNNs at the bottom.
# * More complex tasks require larger RNN architecture, try more neurons or several layers. It would also require more training iterations.
# * Long-term dependencies in music, novels or molecules are better handled with LSTM or GRU
#
# __Good hunting!__
|
week06_rnn/week0_10_Names_generation_from_scratch.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.1 64-bit (''.env'': venv)'
# name: python38164bitenvvenv1aeeddf854374f4a984d7e4f4cd088e2
# ---
# The most common analytical task is to take a bunch of numbers in a dataset and summarise them with fewer numbers, preferably a single number. Enter the 'average': sum all the numbers and divide by their count. In mathematical terms this is known as the 'arithmetic mean', and it doesn't always summarise a dataset correctly. This post looks into the other ways we can summarise a dataset.
#
# > The proper term for this method of summarising is determining the central tendency of the dataset.
# ## Generate The Data
#
# First step is to generate a dataset to summarise, to do this we use the `random` package from the standard library. Using matplotlib we can plot our 'number line'.
# + tags=[]
import random
import typing
random.seed(42)
dataset: typing.List = []
for _ in range(50):
dataset.append(random.randint(1,100))
print(dataset)
import matplotlib.pyplot as plt
def plot_1d_data(arr:typing.List, val:float, **kwargs):
constant_list = [val for _ in range(len(arr))]
plt.plot(arr, constant_list, 'x', **kwargs)
plot_1d_data(dataset,5)
# -
# ## Median
#
# The median is the middle number of the sorted list, in the quite literal sense. For example the median of 1,2,3,4,5 is 3, as it is for 3,2,4,1,5. The median can be more descriptive of the dataset than the arithmetic mean whenever there are significant outliers in the data that skew the arithmetic mean.
#
# > If there is an even amount of numbers in the data, the median becomes the arithmetic mean of the two middle numbers. For example, the median of 1,2,3,4,5,6 is 3.5 ((3+4)/2).
#
# ### When to use
#
# Use the median whenever there is a large spread of numbers across the domain.
#
# + tags=[]
import statistics
print(f"Median: {statistics.median(dataset)}")
plot_1d_data(dataset,5)
plt.plot(statistics.median(dataset),5,'x',color='red',markersize=50)
plt.annotate('Median',(statistics.median(dataset),5),(statistics.median(dataset),5.1),arrowprops={'width':0.1})
# -
# ## Mode
#
# The mode of a dataset is the number that appears most in the dataset. Note that this is the least used method of demonstrating central tendency.
#
# ### When to use
#
# Mode is best used with nominal data, meaning if the data you are trying to summarise has no quantitative metric behind it, then mode would be useful. E.g., if you are looking through textual data, finding the most used word is a significant way of summarising the data.
# + tags=[]
import statistics
print(f"Mode: {statistics.mode(dataset)}")
plot_1d_data(dataset,5)
plt.plot(statistics.mode(dataset),5,'x',color='red',markersize=50)
plt.annotate('Mode',(statistics.mode(dataset),5),(statistics.mode(dataset),5.1),arrowprops={'width':0.1})
# -
# ## Arithmetic Mean
#
# This is the most used way of representing central tendency. It is done by summing all the points in the dataset and then dividing by the number of points (to scale back into the original domain). This is the best way of representing central tendency if the data does not contain outliers that will skew the outcome (which can be overcome by normalisation).
#
# ### When to use
#
# If the dataset is normally distributed, this is the ideal measure.
# + tags=[]
def arithmetic_mean(dataset: typing.List):
return sum(dataset) / len(dataset)
print(f"Arithmetic Mean: {arithmetic_mean(dataset)}")
plot_1d_data(dataset,5)
plt.plot(arithmetic_mean(dataset),5,'x',color='red',markersize=50)
plt.annotate('Arithmetic Mean',(arithmetic_mean(dataset),5),(arithmetic_mean(dataset),5.1),arrowprops={'width':0.1})
# -
# ## Geometric Mean
#
# The geometric mean is calculated by multiplying all numbers in a set and then taking the `nth` root of the product, where n is the count of numbers. It uses the `multiplicative` nature of the dataset rather than the `additive` nature used by the arithmetic mean, making it more suitable for datasets with a multiplicative relationship.
#
# > We calculate the nth root by raising to the power of the reciprocal.
#
# ### When to use
#
# If the dataset has a multiplicative nature (e.g. growth in population, interest rates, etc), then the geometric mean will be a more suitable way of summarising the dataset. The geometric mean is also useful when trying to summarise data with differing scales or units, as the geometric mean is technically unitless.
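# A quick standalone illustration of why compounding favours the geometric mean (the growth figures are made up):

```python
# Growth factors over three years: +10%, +50%, -20%
factors = [1.10, 1.50, 0.80]

total_growth = 1.0
for f in factors:
    total_growth *= f

geo = total_growth ** (1 / len(factors))   # geometric mean growth factor
arith = sum(factors) / len(factors)        # arithmetic mean growth factor

# Compounding the geometric mean reproduces the actual total growth;
# compounding the arithmetic mean overstates it.
print(round(geo ** 3, 4), round(total_growth, 4))  # both 1.32
print(round(arith ** 3, 4))                        # larger than 1.32
```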
# + tags=[]
def multiply_list(dataset:typing.List) :
# Multiply elements one by one
result = 1
for x in dataset:
result = result * x
return result
def geometric_mean(dataset:typing.List):
    if 0 in dataset:
        # shift all values by 1 to avoid a zero product (note: this biases the result)
        dataset = [x + 1 for x in dataset]
return multiply_list(dataset)**(1/len(dataset))
print(f"Geometric Mean: {geometric_mean(dataset)}")
plot_1d_data(dataset,5)
plt.plot(geometric_mean(dataset),5,'x',color='red',markersize=50)
plt.annotate('Geometric Mean',(geometric_mean(dataset),5),(geometric_mean(dataset),5.1),arrowprops={'width':0.1})
# -
# ## Harmonic Mean
#
# Harmonic mean is calculated by:
#
# - taking the reciprocal of all the numbers in the set
# - calculating the arithmetic mean of this reciprocal set
# - taking the reciprocal of the calculated mean
#
# ### When to use
#
# The harmonic mean is very useful when trying to summarise datasets that are in rates or ratios. For example if you were trying to determine the average rate of travel over a trip with many legs.
# + tags=[]
def reciprocal_list(dataset:typing.List):
reciprocal_list = []
for x in dataset:
reciprocal_list.append(1/x)
return reciprocal_list
def harmonic_mean(dataset:typing.List):
return 1/arithmetic_mean(reciprocal_list(dataset))
print(f"Harmonic Mean: {harmonic_mean(dataset)}")
plot_1d_data(dataset,5)
plt.plot(harmonic_mean(dataset),5,'x',color='red',markersize=50)
plt.annotate('Harmonic Mean',(harmonic_mean(dataset),5),(harmonic_mean(dataset),5.1),arrowprops={'width':0.1})
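# A standalone sanity check of the travel-rate example (the numbers are made up): 60 km at 30 km/h plus 60 km back at 60 km/h is 120 km in 3 hours, i.e. 40 km/h overall, which the harmonic mean recovers while the arithmetic mean (45) does not:

```python
speeds = [30, 60]  # km/h over two equal-distance legs
avg_speed = len(speeds) / sum(1 / s for s in speeds)  # harmonic mean
print(round(avg_speed, 6))  # 40.0
```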
# + tags=[]
print(f"Mode: {statistics.mode(dataset)}")
print(f"Median: {statistics.median(dataset)}")
print(f"Arithmetic Mean: {arithmetic_mean(dataset)}")
print(f"Geometric Mean: {geometric_mean(dataset)}")
print(f"Harmonic Mean: {harmonic_mean(dataset)}")
# -
# > Thank you to <NAME> over on Twitter: <https://twitter.com/ndrewg/status/1296773835585236997> for suggesting some extremely interesting further reading on [Anscombe's Quartet](https://en.m.wikipedia.org/wiki/Anscombe%27s_quartet) and [The Datasaurus Dozen](https://www.autodeskresearch.com/publications/samestats), which are examples of why summary statistics matter of exactly the meaning of this post!
|
content/2020/types-of-means/notebooks/types-of-means.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Inference and Validation
#
# Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.
#
# As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:
#
# ```python
# testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
# ```
#
# The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
# +
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
# -
# Here I'll create a model like normal, using the same one from my solution for part 4.
# +
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
# -
# The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
# +
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
# -
# With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
print(ps.topk(1, dim=1)[0].shape)
print(ps.topk(1, dim=1)[1].shape)
top_p, top_class = ps.topk(2, dim=1)
# Look at the top 2 most likely classes for the first 10 examples, top_p are probabilities and top_class are index values
print(top_class[:10,:], "\n", top_p[:10, :])
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
# Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
#
# If we do
#
# ```python
# equals = top_class == labels
# ```
#
# `equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.
equals = top_class == labels # Compares each row of top_class with each value in labels. Returns 64 bools per 64 rows
print(equals.shape)
print(top_class.shape)
print(*top_class.shape) # Unpacks the tensor size values, which you can then feed to the .view() method for reshaping the labels tensor
# With this reshape of labels using top_class, each row of top_class is compared with corresponding value of the labels tensor
equals = top_class == labels.view(*top_class.shape)
print(equals.shape)
print(equals[:5])
# Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error
#
# ```
# RuntimeError: mean is not implemented for type torch.ByteTensor
# ```
#
# This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
# The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:
#
# ```python
# # turn off gradients
# with torch.no_grad():
# # validation pass here
# for images, labels in testloader:
# ...
# ```
#
# >**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
# +
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
# Vectors to store the training and testing loss values for plotting later
train_losses, test_losses = [], []
for e in range(epochs):
training_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
training_loss += loss.item()
    else: # for/else: this block runs once the loop over trainloader completes (each epoch)
## TODO: Implement the validation pass and print out the validation accuracy
# Reseting test_loss and accuracy values to 0 for each epoch
test_loss = 0
accuracy = 0
# Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients.
with torch.no_grad():
# Load images and labels from the testing data loader
for images, labels in testloader:
# Performing a forward pass on the testing images - Output is in softmax logs
log_ps = model(images)
# Adding up validation losses for all images in each epoch
test_loss += criterion(log_ps, labels)
# Calculating the probabilties from output logs
ps = torch.exp(log_ps)
# For all images, select the class with the highest probability values and indices to those values
# This is network's prediction for what each image represents
top_p, top_class = ps.topk(1, dim=1)
# Compare the prediction of the network for all images with associated image labels in the testing batch.
# Save the boolean results in the equals tensor
equals = top_class == labels.view(*top_class.shape)
                # Calculate the accuracy of the network's predictions by taking the average
                # of correct predictions over all predictions
accuracy += torch.mean(equals.type(torch.FloatTensor))
# Append the total training and validation losses to the vectors for plotting later
train_losses.append(training_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
        # Print the epoch number, the training and validation losses, and the accuracy for each epoch.
        # You can expect the validation loss and accuracy to fluctuate;
        # this can happen when the network starts overfitting the training set.
print("Epoch {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(training_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
# -
# %matplotlib inline
# %config inlineBackend.figure_format = "retina"
import matplotlib.pyplot as plt
plt.plot(train_losses, label="Training Loss")
plt.plot(test_losses, label="Validation Loss")
plt.legend(frameon=False);
# ## Overfitting
#
# If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
#
# <img src='assets/overfitting.png' width="450px">
#
# The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.
#
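# The early-stopping idea above can be sketched in isolation. The following is a minimal illustration with hypothetical validation losses (not results from this notebook): snapshot the parameters whenever the validation loss improves, then fall back to the best snapshot at the end.

```python
import copy

# Hypothetical per-epoch validation losses, for illustration only
val_losses = [0.52, 0.45, 0.41, 0.39, 0.38, 0.40, 0.43, 0.47]
# Stand-in for model.state_dict(); in practice you'd deep-copy the real parameters
model_state = {"params": None}

best_loss, best_state, best_epoch = float("inf"), None, None
for epoch, loss in enumerate(val_losses):
    model_state["params"] = epoch  # pretend training updated the parameters
    if loss < best_loss:
        # Validation loss improved: snapshot the current parameters
        best_loss, best_epoch = loss, epoch
        best_state = copy.deepcopy(model_state)

# After training, restore the snapshot with the lowest validation loss
print(best_epoch, best_loss)  # 4 0.38
```

# With a real PyTorch model you would snapshot `copy.deepcopy(model.state_dict())` and restore it afterwards with `model.load_state_dict(best_state)`.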
# The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.
#
# ```python
# class Classifier(nn.Module):
# def __init__(self):
# super().__init__()
# self.fc1 = nn.Linear(784, 256)
# self.fc2 = nn.Linear(256, 128)
# self.fc3 = nn.Linear(128, 64)
# self.fc4 = nn.Linear(64, 10)
#
# # Dropout module with 0.2 drop probability
# self.dropout = nn.Dropout(p=0.2)
#
# def forward(self, x):
# # make sure input tensor is flattened
# x = x.view(x.shape[0], -1)
#
# # Now with dropout
# x = self.dropout(F.relu(self.fc1(x)))
# x = self.dropout(F.relu(self.fc2(x)))
# x = self.dropout(F.relu(self.fc3(x)))
#
# # output so no dropout here
# x = F.log_softmax(self.fc4(x), dim=1)
#
# return x
# ```
#
# During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
#
# ```python
# # turn off gradients
# with torch.no_grad():
#
# # set model to evaluation mode
# model.eval()
#
# # validation pass here
# for images, labels in testloader:
# ...
#
# # set model back to train mode
# model.train()
# ```
# > **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
# +
## TODO: Define your model with dropout added
from torch import nn, optim
import torch.nn.functional as F
class Classifier_Dropout(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout with the probability of 20%
        self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# Flatten input tensor
x = x.view(x.shape[0], -1)
# Added dropout in the forward pass for the inputs at each layer except the output layer
# as we want all outputs active
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
x = F.log_softmax(self.fc4(x), dim=1)
return x
# +
## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy
model_dropout = Classifier_Dropout()
criterion = nn.NLLLoss()
optimiser = optim.Adam(model_dropout.parameters(), lr = 0.003)
epochs = 30
steps = 0
training_losses, testing_losses = [], []
for e in range(epochs):
training_loss = 0
# Training Passes
for images, labels in trainloader:
optimiser.zero_grad()
log_ps = model_dropout(images)
loss = criterion(log_ps, labels)
loss.backward()
optimiser.step()
training_loss += loss.item()
# Testing Pass
else:
testing_loss = 0
accuracy = 0
# Turn off gradients calculations
with torch.no_grad():
# Set the model to evaluation mode for validation. Dropout is turned off.
model_dropout.eval()
for images, labels in testloader:
test_log_ps = model_dropout(images)
testing_loss += criterion(test_log_ps, labels)
ps = torch.exp(test_log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
training_losses.append(training_loss/len(trainloader))
testing_losses.append(testing_loss/len(testloader))
print("Epoch {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(training_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(testing_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
# After you're done with evaluating the model against a testing set, set the model back to training mode
# Training mode turns on the dropout.
model_dropout.train()
# +
def plotter(x, y):
plt.plot(x, label = "Training Loss")
plt.plot(y, label = "Validation Loss")
plt.ylim(0.1, 0.7)
plt.xlabel("Epochs")
plt.ylabel("Performance")
    plt.legend(frameon=False)
plt.figure(figsize = (15, 5))
plt.subplot(1, 2, 1)
plotter(train_losses, test_losses)
plt.title("Model Performance without Dropout")
plt.subplot(1,2,2)
plotter(training_losses, testing_losses)
plt.title("Model Performance With Dropout");
# -
# #### Conclusion:
# - The first model is overfitted to the training set but does not perform well on the validation set.
# - The second model, which uses a dropout of 20%, performs well on both the training and validation sets, but at a higher loss overall. With a better network architecture, a tuned learning rate, or the addition of early stopping and/or momentum, the loss could likely be reduced further.
# ## Inference
#
# Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
# ### Inference in a trained model without the implemented dropout (overfitting might be present)
# +
# Import helper module (should be in the repo)
import helper
# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch releases
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
# -
# ### Inference in a trained model with the implemented dropout (Reduced chance of overfitting)
# +
# Import helper module (should be in the repo)
import helper
# Test out your network!
model_dropout.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch releases
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model_dropout.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
# -
# ## Next Up!
#
# In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
|
4. Neural Networks/7. Part 5 - Inference and Validation (Dropout Implementation).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root] *
# language: python
# name: conda-root-py
# ---
# # Vacuum in the squeezed hierarchy
# +
from functools import partial
import pickle
import numpy as np
from scipy.special import factorial, sinc
import matplotlib.pyplot as plt
import pysme.integrate as integ
import pysme.hierarchy as hier
# -
# define Qubit operators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1.j], [1.j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)
sp = (sx + 1.j * sy) / 2
sm = (sx - 1.j * sy) / 2
zero = np.zeros((2, 2), dtype=complex)
# +
def rect(x, a, b):
return np.where(x < a, 0, np.where(x < b, 1, 0))
def xi_rect(t, a, b):
return rect(t, a, b)/np.sqrt(b - a)
def rho_from_ket(ket):
return np.outer(ket, ket.conj())
def vac_rho(nmax):
'''Return vacuum density matrix
Parameters
----------
nmax : int
        The n of the largest Fock state |n> in the truncation.
'''
    ket = np.zeros(nmax + 1, dtype=complex)
ket[0] = 1
return rho_from_ket(ket)
def make_squeezed_state_vec(r, mu, N, normalized=True):
r'''Make a truncated squeezed-state vector.
The squeezed-state vector is :math:`S(r,\mu)|0\rangle`. The truncated
vector is renormalized by default.
Parameters
----------
    r: real number
        Squeezing amplitude
    mu: real number
        Squeezing phase
    N: positive integer
        The dimension of the truncated Hilbert space, basis {0, ..., N-1}
    normalized: boolean
        Whether or not the truncated vector is renormalized
Returns
-------
numpy.array
Squeezed-state vector in the truncated Hilbert space, represented in the
number basis
'''
    ket = np.zeros(N, dtype=complex)
for n in range(N//2):
ket[2*n] = (1 / np.sqrt(np.cosh(r))) * ((-0.5 * np.exp(2.j * mu) * np.tanh(r))**n /
factorial(n)) * np.sqrt(factorial(2 * n))
return ket / np.linalg.norm(ket) if normalized else ket
def sqz_rho(r, mu, n):
return rho_from_ket(make_squeezed_state_vec(r, mu, n + 1))
# -
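# As a quick, self-contained sanity check (illustrative only), the wave packet `xi_rect` is normalized so that its squared modulus integrates to one over the pulse duration:

```python
import numpy as np

def rect(x, a, b):
    return np.where(x < a, 0, np.where(x < b, 1, 0))

def xi_rect(t, a, b):
    return rect(t, a, b) / np.sqrt(b - a)

# Numerically verify the normalization: integral of |xi(t)|^2 dt = 1
t = np.linspace(-1, 5, 200001)
norm = np.sum(np.abs(xi_rect(t, 0, 4))**2) * (t[1] - t[0])  # Riemann sum
print(round(norm, 3))  # 1.0
```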
def make_plot_data(r, mu, factories, xi_fn, times, rho0):
n_maxs = list(factories.keys())
integrators = {n_max: factory.make_uncond_integrator(xi_fn, Id, sm, zero, r, mu)
for n_max, factory in factories.items()}
field_rho0s = {n_max: sqz_rho(-r, mu, n_max) for n_max in n_maxs}
solns = {n_max: integrator.integrate(rho0, times) for n_max, integrator in integrators.items()}
phys_solns = {n_max: solns[n_max].get_phys_soln(field_rho0s[n_max]) for n_max in n_maxs}
Pe_expts = {n_max: solns[n_max].get_expectations((Id + sz)/2, field_rho0s[n_max]) for n_max in n_maxs}
return phys_solns, Pe_expts
def plot_Pe(times, n_maxs, Pe_expts, ax):
for n_max in n_maxs:
ax.plot(times, Pe_expts[n_max], label=str(n_max))
ax.legend()
def plot_exp_decay(times, n_maxs, Pe_expts, vac_Pe_expt, ax):
for n_max in n_maxs:
ax.semilogy(times, Pe_expts[n_max], label=str(n_max))
ax.semilogy(times, vac_Pe_expt, color='k', linestyle='--', label='vac')
ax.legend()
def plot_difference(times, n_maxs, Pe_expts, vac_Pe_expt, ax):
for n_max in n_maxs:
ax.plot(times, Pe_expts[n_max] - vac_Pe_expt, label=str(n_max))
times = np.linspace(0, 1, 2**8 + 1)
rho0 = (Id + sz)/2
vac_integrator = integ.UncondLindbladIntegrator([sm], zero)
vac_soln = vac_integrator.integrate(rho0, times)
vac_Pe_expt = vac_soln.get_expectations((Id + sz)/2)
r = np.log(1.5)
mu = 0
n_maxs = np.arange(1, 13)
factories = {n_max: hier.HierarchyIntegratorFactory(2, n_max)
for n_max in n_maxs}
xi_fn = partial(xi_rect, a=0, b=1)
def gen_save_load_data(fname, data_gen_method, data_gen_params=None, data_gen_kwargs=None, overwrite=False):
'''Get the data returned by the generating method, running the method only if the data isn't already available.
If the given filename exists, load and return the data from that file. Otherwise generate the data using the
supplied method and save and return it.
Useful for notebooks you imagine running multiple times, but where some of the data is expensive to generate
and you want to save it to disk to be reloaded for future sessions.
'''
if data_gen_params is None:
data_gen_params = ()
if data_gen_kwargs is None:
data_gen_kwargs = {}
try:
with open(fname, 'xb' if not overwrite else 'wb') as f:
data = data_gen_method(*data_gen_params, **data_gen_kwargs)
pickle.dump(data, f)
except FileExistsError:
print('Data already exist.')
with open(fname, 'rb') as f:
data = pickle.load(f)
return data
solns, Pe_expts = gen_save_load_data('vacuum_plot_data.pickle', make_plot_data, (r, mu, factories, xi_fn, times, rho0))
with plt.style.context('paper.mplstyle'):
fig, ax = plt.subplots(figsize=(3.5, 2.5))
plot_exp_decay(times, n_maxs[:7], Pe_expts, vac_Pe_expt, ax)
plt.tight_layout()
with plt.style.context('paper.mplstyle'):
fig, ax = plt.subplots(figsize=(3.5, 2.5))
ax.set_title(r'$e^r=3/2$, $T=1/\gamma$')
plot_difference(times, n_maxs[4::2], Pe_expts, vac_Pe_expt, ax)
ax.legend()
plt.tight_layout()
solns2, Pe_expts2 = gen_save_load_data('vacuum_plot_data2.pickle', make_plot_data, (np.log(2), mu, factories, xi_fn, times, rho0))
with plt.style.context('paper.mplstyle'):
fig, ax = plt.subplots(figsize=(3.5, 2.5))
ax.set_title(r'$e^r=2$, $T=1/\gamma$')
plot_difference(times, n_maxs[4::2], Pe_expts2, vac_Pe_expt, ax)
ax.legend()
plt.tight_layout()
solns3, Pe_expts3 = gen_save_load_data('vacuum_plot_data3.pickle', make_plot_data, (np.log(3), mu, factories, xi_fn, times, rho0))
with plt.style.context('paper.mplstyle'):
fig, ax = plt.subplots(figsize=(3.5, 2.5))
ax.set_title(r'$e^r=3$, $T=1/\gamma$')
plot_difference(times, n_maxs[4::2], Pe_expts3, vac_Pe_expt, ax)
ax.legend()
plt.tight_layout()
timesT2 = np.linspace(0, 2, 2**9 + 1)
solnsT2, Pe_exptsT2 = gen_save_load_data('vacuum_plot_dataT2.pickle', make_plot_data,
(r, mu, factories, partial(xi_rect, a=0, b=2), timesT2, rho0))
solns2T2, Pe_expts2T2 = gen_save_load_data('vacuum_plot_data2T2.pickle', make_plot_data,
(np.log(2), mu, factories, partial(xi_rect, a=0, b=2), timesT2, rho0))
solns3T2, Pe_expts3T2 = gen_save_load_data('vacuum_plot_data3T2.pickle', make_plot_data,
(np.log(3), mu, factories, partial(xi_rect, a=0, b=2), timesT2, rho0))
timesT4 = np.linspace(0, 4, 2**10 + 1)
solnsT4, Pe_exptsT4 = gen_save_load_data('vacuum_plot_dataT4.pickle', make_plot_data,
(r, mu, factories, partial(xi_rect, a=0, b=4), timesT4, rho0))
solns2T4, Pe_expts2T4 = gen_save_load_data('vacuum_plot_data2T4.pickle', make_plot_data,
(np.log(2), mu, factories, partial(xi_rect, a=0, b=4), timesT4, rho0))
solns3T4, Pe_expts3T4 = gen_save_load_data('vacuum_plot_data3T4.pickle', make_plot_data,
(np.log(3), mu, factories, partial(xi_rect, a=0, b=4), timesT4, rho0))
vac_solnT2 = vac_integrator.integrate(rho0, timesT2)
vac_Pe_exptT2 = vac_solnT2.get_expectations((Id + sz)/2)
vac_solnT4 = vac_integrator.integrate(rho0, timesT4)
vac_Pe_exptT4 = vac_solnT4.get_expectations((Id + sz)/2)
try:
with open('vacuum-collected-plot-data.pickle', 'xb') as f:
pickle.dump({'Pe': {'1T1': Pe_expts, '2T1': Pe_expts2, '3T1': Pe_expts3,
'1T2': Pe_exptsT2, '2T2': Pe_expts2T2, '3T2': Pe_expts3T2,
'1T4': Pe_exptsT4, '2T4': Pe_expts2T4, '3T4': Pe_expts3T4},
'times': {'T1': times, 'T2': timesT2, 'T4': timesT4},
'nmaxs': n_maxs,
'vacPe': {'T1': vac_Pe_expt, 'T2': vac_Pe_exptT2, 'T4': vac_Pe_exptT4}},
f)
except FileExistsError:
print('Data already exist.')
with plt.style.context('paper.mplstyle'):
fig, axs = plt.subplots(ncols=3, nrows=3, figsize=(7, 5))
axs[0,0].set_title(r'$e^r=3/2$, $T=1/\Gamma$')
plot_difference(times, n_maxs[4::2], Pe_expts, vac_Pe_expt, axs[0,0])
axs[0,0].set_ylim(-0.001, .001)
axs[0,1].set_title(r'$e^r=2$, $T=1/\Gamma$')
plot_difference(times, n_maxs[4::2], Pe_expts2, vac_Pe_expt, axs[0,1])
axs[0,1].set_ylim(-0.01, .01)
axs[0,2].set_title(r'$e^r=3$, $T=1/\Gamma$')
plot_difference(times, n_maxs[4::2], Pe_expts3, vac_Pe_expt, axs[0,2])
axs[0,2].set_ylim(-.05, .05)
axs[1,0].set_title(r'$e^r=3/2$, $T=2/\Gamma$')
plot_difference(timesT2, n_maxs[4::2], Pe_exptsT2, vac_Pe_exptT2, axs[1,0])
axs[1,0].set_ylim(-0.001, .001)
axs[1,1].set_title(r'$e^r=2$, $T=2/\Gamma$')
plot_difference(timesT2, n_maxs[4::2], Pe_expts2T2, vac_Pe_exptT2, axs[1,1])
axs[1,2].set_title(r'$e^r=3$, $T=2/\Gamma$')
plot_difference(timesT2, n_maxs[4::2], Pe_expts3T2, vac_Pe_exptT2, axs[1,2])
axs[1,2].set_ylim(-0.04, .1)
axs[2,0].set_title(r'$e^r=3/2$, $T=4/\Gamma$')
plot_difference(timesT4, n_maxs[4::2], Pe_exptsT4, vac_Pe_exptT4, axs[2,0])
axs[2,0].set_ylim(-0.001, .001)
axs[2,1].set_title(r'$e^r=2$, $T=4/\Gamma$')
plot_difference(timesT4, n_maxs[4::2], Pe_expts2T4, vac_Pe_exptT4, axs[2,1])
axs[2,1].set_ylim(-.01, .1)
axs[2,2].set_title(r'$e^r=3$, $T=4/\Gamma$')
plot_difference(timesT4, n_maxs[4::2], Pe_expts3T4, vac_Pe_exptT4, axs[2,2])
axs[2,2].set_ylim(-0.03, .1)
for ax_row in axs:
ax_row[0].set_ylabel(r'$P_{e,\mathrm{sq}}-P_{e,\mathrm{vac}}$')
for ax in ax_row:
ax.axhline(0, linestyle='--', color='k', linewidth=1)
for ax in axs[-1]:
ax.set_xlabel(r'$\Gamma t$')
axs[2,2].legend()
plt.tight_layout()
plt.savefig('vacuum-diffs.pdf', bbox_inches='tight', pad_inches=0.02)
# ## Sinc explorations
from scipy.integrate import quad
quad(lambda t: sinc(t)**2, -20, 20, limit=1000)
times_sinc = np.linspace(-10, 10, 2**13)
integrators_sinc = {n_max: factory.make_uncond_integrator(sinc, Id, sm, zero, np.log(2), mu)
for n_max, factory in factories.items()}
solns_sinc = {n_max: integrator.integrate((Id - sz)/2, times_sinc) for n_max, integrator in integrators_sinc.items()}
phys_solns_sinc = {n_max: solns_sinc[n_max].get_phys_soln(vac_rho(n_max)) for n_max in n_maxs}
Pe_expts_sinc = {n_max: phys_solns_sinc[n_max].get_expectations((Id + sz)/2) for n_max in n_maxs}
with plt.style.context('paper.mplstyle'):
fig, ax = plt.subplots(figsize=(6, 3))
plot_Pe(times_sinc, n_maxs[4::2], Pe_expts_sinc, ax)
plt.plot(times_sinc, 0.1*sinc(times_sinc))
plt.axvline(0.5)
plt.tight_layout()
# fig, ax = plt.subplots()
# plot_exp_decay(times, n_maxs, Pe_expts_sinc, vac_Pe_expt, ax)
fig, axs = plt.subplots(ncols=2, figsize=(8,4))
for n_max in n_maxs[:10]:
axs[0].plot(times, Pe_expts[n_max], label=str(n_max))
axs[1].semilogy(times, Pe_expts[n_max], label=str(n_max))
axs[0].plot(times, vac_Pe_expt, color='k', linestyle='--', label='vac')
axs[1].semilogy(times, vac_Pe_expt, color='k', linestyle='--', label='vac')
axs[0].legend()
axs[1].legend()
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(figsize=(4,4))
for n_max in n_maxs[:10]:
ax.plot(times, Pe_expts[n_max] - vac_Pe_expt, label=str(n_max))
#ax.legend()
plt.tight_layout()
plt.show()
xi_fn = partial(xi_rect, a=0, b=2)
factory = factories[1]
integrators = {n_max: factory.make_uncond_integrator(xi_fn, Id, sm, zero, r, mu)
for n_max, factory in factories.items()}
times = np.linspace(0, 2, 2**9 + 1)
rho0 = (Id + sz)/2
field_rho0s = {n_max: sqz_rho(-r, mu, n_max) for n_max in n_maxs}
solns = {n_max: integrator.integrate(rho0, times) for n_max, integrator in integrators.items()}
Pe_expts = {n_max: solns[n_max].get_expectations((Id + sz)/2, field_rho0s[n_max]) for n_max in n_maxs}
vac_integrator = integ.UncondLindbladIntegrator([sm], zero)
vac_soln = vac_integrator.integrate(rho0, times)
vac_Pe_expt = vac_soln.get_expectations((Id + sz)/2)
fig, axs = plt.subplots(ncols=2, figsize=(8,4))
for n_max in n_maxs[:10]:
axs[0].plot(times, Pe_expts[n_max], label=str(n_max))
axs[1].semilogy(times, Pe_expts[n_max], label=str(n_max))
axs[0].plot(times, vac_Pe_expt, color='k', linestyle='--', label='vac')
axs[1].semilogy(times, vac_Pe_expt, color='k', linestyle='--', label='vac')
axs[0].legend()
axs[1].legend()
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(figsize=(4,4))
for n_max in n_maxs[:10]:
ax.plot(times, Pe_expts[n_max] - vac_Pe_expt, label=str(n_max))
#ax.legend()
plt.tight_layout()
plt.show()
xi_fn = partial(xi_rect, a=0, b=8)
factory = factories[1]
integrators = {n_max: factory.make_uncond_integrator(xi_fn, Id, sm, zero, r, mu)
for n_max, factory in factories.items()}
times = np.linspace(0, 8, 8*2**8 + 1)
rho0 = (Id + sz)/2
field_rho0s = {n_max: sqz_rho(-r, mu, n_max) for n_max in n_maxs}
solns = {n_max: integrator.integrate(rho0, times) for n_max, integrator in integrators.items()}
Pe_expts = {n_max: solns[n_max].get_expectations((Id + sz)/2, field_rho0s[n_max]) for n_max in n_maxs}
vac_integrator = integ.UncondLindbladIntegrator([sm], zero)
vac_soln = vac_integrator.integrate(rho0, times)
vac_Pe_expt = vac_soln.get_expectations((Id + sz)/2)
fig, axs = plt.subplots(ncols=2, figsize=(8,4))
for n_max in n_maxs[:10]:
axs[0].plot(times, Pe_expts[n_max], label=str(n_max))
axs[1].semilogy(times, Pe_expts[n_max], label=str(n_max))
axs[0].plot(times, vac_Pe_expt, color='k', linestyle='--', label='vac')
axs[1].semilogy(times, vac_Pe_expt, color='k', linestyle='--', label='vac')
axs[0].legend()
axs[1].legend()
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(figsize=(8,4))
for n_max in n_maxs[3:10]:
ax.plot(times, np.abs(Pe_expts[n_max] - vac_Pe_expt), label=str(n_max))
ax.legend()
ax.set_ylim(0, .03)
plt.tight_layout()
plt.show()
|
notebooks/vacuum-in-sqz-hier.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Importing relevant libraries
import numpy as np
from sklearn import datasets
# +
#Loading the iris datset
iris = datasets.load_iris()
X = iris.data
y = iris.target
# +
#Printing the dataset's shape
print('X shape:',X.shape)
print('y shape:',y.shape)
# +
#Reshaping y to avoid shape inconsistencies later on
y=np.reshape(y,[150,1])
print('y shape:',y.shape)
# +
#Combining the dataset so that the labels do not get messed up, shuffling
Xy=np.concatenate((X,y),axis=1)
np.random.shuffle(Xy)
# +
#Splitting to get the shuffled dataset back
(X,y)=np.split(Xy,[4],axis=1)
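# As an aside, here is a tiny self-contained check (toy data, not the iris set) that shuffling the concatenated array keeps each row's features aligned with its label:

```python
import numpy as np

rng = np.random.default_rng(1)
X_toy = np.arange(12, dtype=float).reshape(6, 2)   # row i is [2i, 2i+1]
y_toy = np.arange(6, dtype=float).reshape(6, 1)    # row i is labeled i

Xy_toy = np.concatenate((X_toy, y_toy), axis=1)
rng.shuffle(Xy_toy)                                 # shuffles rows in place
X_s, y_s = np.split(Xy_toy, [2], axis=1)

# Every shuffled row still satisfies features == [2*label, 2*label + 1]
aligned = all(X_s[j, 0] == 2 * y_s[j, 0] for j in range(6))
print(aligned)  # True
```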
# +
#Looking at the first few examples
print('First 5 examples in X:\n',X[0:5,:])
print('\nFirst 5 labels in y:\n',y[0:5])
# +
#Choosing sizes of training and test sets
num_train=120
num_test=30
# +
#Splitting the dataset into training and test sets
X_train=X[0:num_train,:]
y_train=y[0:num_train,:]
X_test=X[num_train:num_train+num_test,:]
y_test=y[num_train:num_train+num_test,:]
# +
#Calculating the pairwise distance matrix between the test and training examples.
#The identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b lets us compute every distance at once,
#so we can then pick the closest training examples for each test example.
dists = np.reshape(np.sum(X_test**2, axis=1), [num_test,1]) + np.sum(X_train**2, axis=1) - 2 * np.matmul(X_test, X_train.T)
dists = np.sqrt(np.maximum(dists, 0)) # clamp tiny negatives from floating-point round-off before the sqrt
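# As a self-contained sanity check (toy random data, hypothetical shapes), the vectorized distance computation can be compared against a naive double loop:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))   # stand-in "test" points
B = rng.normal(size=(5, 4))   # stand-in "train" points

# Vectorized pairwise distances via the expansion of ||a - b||^2
d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1) - 2 * A @ B.T
vec = np.sqrt(np.maximum(d2, 0))  # clamp round-off negatives before sqrt

# Naive double loop for comparison
naive = np.array([[np.linalg.norm(a - b) for b in B] for a in A])

print(np.allclose(vec, naive))  # True
```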
# +
#Choosing a value for k
k=5
# +
#Generating predictions on our test set using the top k closest examples in the training set
y_pred = np.zeros((num_test,1))
for i in range(num_test):
closest_y = []
closest_y = list(y_train[np.argsort(dists[i,:])[:k]])
y_pred[i] = max(closest_y, key = closest_y.count)
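# The prediction above is a majority vote over the k nearest neighbours via `max(closest_y, key=closest_y.count)`. Note that on a tie, `max` returns the first element (in list order) that attains the maximal count:

```python
votes = [2, 0, 2, 1, 1]
# labels 2 and 1 both appear twice; max returns the first element with the maximal count
prediction = max(votes, key=votes.count)
print(prediction)  # 2
```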
# +
#Comparing some of our predictions to the actual values
for i in range(10):
print('Prediction:%d Actual value:%d\n' % (y_pred[i],y_test[i]))
# +
#Calculating the accuracy of our K Nearest Neighbour classifier
num_correct = np.sum(y_pred == y_test)
accuracy = float(num_correct) / num_test
print('Accuracy: %d%%' % (accuracy*100))
|
KNN.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Name
#
# Gather training data by querying BigQuery
#
#
# # Labels
#
# GCP, BigQuery, Kubeflow, Pipeline
#
#
# # Summary
#
# A Kubeflow Pipeline component to submit a query to BigQuery and store the result in a Cloud Storage bucket.
#
#
# # Details
#
#
# ## Intended use
#
# Use this Kubeflow component to:
# * Select training data by submitting a query to BigQuery.
# * Output the training data into a Cloud Storage bucket as CSV files.
#
#
# ## Runtime arguments:
#
#
# | Argument | Description | Optional | Data type | Accepted values | Default |
# |----------|-------------|----------|-----------|-----------------|---------|
# | query | The query used by BigQuery to fetch the results. | No | String | | |
# | project_id | The project ID of the Google Cloud Platform (GCP) project to use to execute the query. | No | GCPProjectID | | |
# | dataset_id | The ID of the persistent BigQuery dataset to store the results of the query. If the dataset does not exist, the operation will create a new one. | Yes | String | | None |
# | table_id | The ID of the BigQuery table to store the results of the query. If the table ID is absent, the operation will generate a random ID for the table. | Yes | String | | None |
# | output_gcs_path | The path to the Cloud Storage bucket to store the query output. | Yes | GCSPath | | None |
# | dataset_location | The location where the dataset is created. Defaults to US. | Yes | String | | US |
# | job_config | The full configuration specification for the query job. See [QueryJobConfig](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.job.QueryJobConfig.html#google.cloud.bigquery.job.QueryJobConfig) for details. | Yes | Dict | A JSON object which has the same structure as [QueryJobConfig](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.job.QueryJobConfig.html#google.cloud.bigquery.job.QueryJobConfig) | None |
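# For illustration, a hypothetical `job_config` value might look like the following (the field names follow the BigQuery `QueryJobConfig` API representation; adjust them to your job's needs):

```python
import json

# Hypothetical configuration: run the query with standard (non-legacy) SQL
job_config = {"query": {"useLegacySql": False}}

# The component expects a JSON-serializable dict
print(json.dumps(job_config))  # {"query": {"useLegacySql": false}}
```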
# ## Input data schema
#
# The input data is a BigQuery job containing a query that pulls data from various sources.
#
#
# ## Output:
#
# Name | Description | Type
# :--- | :---------- | :---
# output_gcs_path | The path to the Cloud Storage bucket containing the query output in CSV format. | GCSPath
#
# ## Cautions & requirements
#
# To use the component, the following requirements must be met:
#
# * The BigQuery API is enabled.
# * The component is running under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/#gcp-service-accounts) in a Kubeflow Pipeline cluster. For example:
#
# ```
# bigquery_query_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
# ```
# * The Kubeflow user service account is a member of the `roles/bigquery.admin` role of the project.
# * The Kubeflow user service account is a member of the `roles/storage.objectCreator` role of the Cloud Storage output bucket.
#
# ## Detailed description
# This Kubeflow Pipeline component is used to:
# * Submit a query to BigQuery.
# * The query results are persisted in a dataset table in BigQuery.
# * An extract job is created in BigQuery to extract the data from the dataset table and output it to a Cloud Storage bucket as CSV files.
#
# Use the code below as an example of how to run your BigQuery job.
#
# ### Sample
#
# Note: The following sample code works in an IPython notebook or directly in Python code.
#
# #### 1. Install the Kubeflow Pipelines SDK
# +
# %%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
# !pip3 install $KFP_PACKAGE --upgrade
# -
# 2. Load the component using KFP SDK
# +
import kfp.components as comp
bigquery_query_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e7a021ed1da6b0ff21f7ba30422decbdcdda0c20/components/gcp/bigquery/query/component.yaml')
help(bigquery_query_op)
# -
#
# In this sample, we send a query to get the top questions from the Stack Overflow public dataset and output the data to a Cloud Storage bucket. Here is the query:
QUERY = 'SELECT * FROM `bigquery-public-data.stackoverflow.posts_questions` LIMIT 10'
# #### Set sample parameters
# + tags=["parameters"]
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# -
# Optional Parameters
EXPERIMENT_NAME = 'Bigquery-Query'
OUTPUT_PATH = '{}/bigquery/query/questions.csv'.format(GCS_WORKING_DIR)
# #### Run the component as a single pipeline
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Bigquery query pipeline',
description='Bigquery query pipeline'
)
def pipeline(
query=QUERY,
project_id = PROJECT_ID,
dataset_id='',
table_id='',
output_gcs_path=OUTPUT_PATH,
dataset_location='US',
job_config=''
):
bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id=table_id,
output_gcs_path=output_gcs_path,
dataset_location=dataset_location,
job_config=job_config).apply(gcp.use_gcp_secret('user-gcp-sa'))
# #### Compile the pipeline
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
# #### Submit the pipeline for execution
# +
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
# -
# #### Inspect the output
# !gsutil cat $OUTPUT_PATH
# ## References
# * [Component python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/bigquery/_query.py)
# * [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
# * [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/bigquery/query/sample.ipynb)
# * [BigQuery query REST API](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query)
#
# ## License
# By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
|
components/gcp/bigquery/query/sample.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'mesolitica-tpu.json'
# -
import tensorflow as tf
import tensorflow_datasets as tfds
from t5.data import preprocessors as prep
import functools
import t5
# +
import gin
gin.parse_config_file('gs://mesolitica-tpu-general/t5-data/pretrained_models_base_operative_config.gin')
# -
vocab = 'gs://mesolitica-tpu-general/t5-data-v2/sp10m.cased.ms-en.model'
# +
def dumping_dataset(split, shuffle_files = False):
del shuffle_files
files = [
'gs://mesolitica-tpu-general/t5-data-v2/dumping-news.txt.tsv',
'gs://mesolitica-tpu-general/t5-data-v2/dumping-parliament.txt.tsv',
'gs://mesolitica-tpu-general/t5-data-v2/filtered-dumping-academia.txt.tsv',
'gs://mesolitica-tpu-general/t5-data-v2/filtered-dumping-wiki.txt.tsv'
]
files.extend(tf.io.gfile.glob('gs://mesolitica-tpu-general/t5-data-v2/00.jsonl-*.translated.txt.tsv'))
ds = tf.data.TextLineDataset(files)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['title', 'text'], ex)))
return ds
t5.data.TaskRegistry.remove('dumping_dataset')
t5.data.TaskRegistry.add(
'dumping_dataset',
dataset_fn = dumping_dataset,
splits = ['train'],
text_preprocessor = functools.partial(
t5.data.preprocessors.rekey,
key_map = {'inputs': None, 'targets': 'text'},
),
token_preprocessor = t5.data.preprocessors.unsupervised,
sentencepiece_model_path = vocab,
metric_fns = [],
)
# -
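The `tf.io.decode_csv` call above splits every tab-separated line into two string fields, which `dict(zip(...))` then labels `title` and `text`. A pure-Python sketch of the same parsing (illustrative only, no TensorFlow):

```python
import csv
import io

# One TSV line, as read from the *.txt.tsv files (contents are made up).
line = "Some title\tBody text of the article"

# use_quote_delim=False in decode_csv corresponds to QUOTE_NONE here.
reader = csv.reader(io.StringIO(line), delimiter="\t", quoting=csv.QUOTE_NONE)
fields = next(reader)
example = dict(zip(["title", "text"], fields))
print(example)
```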
nq_task = t5.data.TaskRegistry.get('dumping_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 512, 'targets': 512}
)
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
# +
def question_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(
[
'gs://mesolitica-tpu-general/t5-data-v2/qa.tsv',
]
)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['question', 'answer'], ex)))
return ds
def question_preprocessor(ds):
def to_inputs_and_targets(ex):
return {
'inputs': tf.strings.join(['soalan: ', ex['question']]),
'targets': ex['answer'],
}
return ds.map(
to_inputs_and_targets,
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
t5.data.TaskRegistry.remove('question_dataset')
t5.data.TaskRegistry.add(
'question_dataset',
dataset_fn = question_dataset,
splits = ['train'],
text_preprocessor = [question_preprocessor],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
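`question_preprocessor` above simply prepends the task prefix `soalan: ` ("question:" in Malay) to each input. Stripped of the tf.data plumbing, the mapping is:

```python
# Plain-Python version of to_inputs_and_targets in question_preprocessor.
# The example record below is made up for illustration.
def to_inputs_and_targets(ex):
    return {
        "inputs": "soalan: " + ex["question"],
        "targets": ex["answer"],
    }

print(to_inputs_and_targets(
    {"question": "Apakah itu T5?", "answer": "model teks-ke-teks"}))
```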
nq_task = t5.data.TaskRegistry.get('question_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 256, 'targets': 32}
)
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
# +
def pair_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(tf.io.gfile.glob('gs://mesolitica-tpu-general/t5-data-v2/*pair.tsv'))
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['text'], ex)))
return ds
t5.data.TaskRegistry.remove('pair_dataset')
t5.data.TaskRegistry.add(
'pair_dataset',
dataset_fn = pair_dataset,
splits = ['train'],
text_preprocessor = [prep.next_sentence_prediction],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
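`prep.next_sentence_prediction` turns the raw `text` field into a binary next-sentence task. The exact prompt and label strings are defined inside t5, but the construction can be sketched like this (the `nsp: ` prefix and the `next`/`not_next` labels here are illustrative assumptions, not t5's actual strings):

```python
import random

def toy_nsp_examples(sentences, rng):
    # Illustrative only: pair each sentence with either its true successor
    # or a randomly drawn sentence, and label the pair accordingly.
    examples = []
    for i in range(len(sentences) - 1):
        if rng.random() < 0.5:
            pair, label = (sentences[i], sentences[i + 1]), "next"
        else:
            pair, label = (sentences[i], rng.choice(sentences)), "not_next"
        examples.append({"inputs": "nsp: " + pair[0] + " " + pair[1],
                         "targets": label})
    return examples

rng = random.Random(0)
print(toy_nsp_examples(["A.", "B.", "C."], rng))
```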
nq_task = t5.data.TaskRegistry.get('pair_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 256, 'targets': 32}
)
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
# +
def news_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(
[
'gs://mesolitica-tpu-general/t5-data-v2/newstitle.tsv'
]
)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['question', 'answer'], ex)))
return ds
def news_preprocessor(ds):
def to_inputs_and_targets(ex):
return {
'inputs': tf.strings.join(['tajuk: ', ex['question']]),
'targets': ex['answer'],
}
return ds.map(
to_inputs_and_targets,
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
t5.data.TaskRegistry.remove('news_dataset')
t5.data.TaskRegistry.add(
'news_dataset',
dataset_fn = news_dataset,
splits = ['train'],
text_preprocessor = [news_preprocessor],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
nq_task = t5.data.TaskRegistry.get('news_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 1024, 'targets': 1024}
)
for ex in tfds.as_numpy(ds.take(1)):
print(ex)
# +
def summarization_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(
[
'gs://mesolitica-tpu-general/t5-data-v2/summarization.tsv'
]
)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['question', 'answer'], ex)))
return ds
def summarization_preprocessor(ds):
def to_inputs_and_targets(ex):
return {
'inputs': tf.strings.join(['ringkasan: ', ex['question']]),
'targets': ex['answer'],
}
return ds.map(
to_inputs_and_targets,
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
t5.data.TaskRegistry.remove('summarization_dataset')
t5.data.TaskRegistry.add(
'summarization_dataset',
dataset_fn = summarization_dataset,
splits = ['train'],
text_preprocessor = [summarization_preprocessor],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
nq_task = t5.data.TaskRegistry.get('summarization_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 1024, 'targets': 1024}
)
for ex in tfds.as_numpy(ds.take(1)):
print(ex)
# +
def similarity_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(
[
'gs://mesolitica-tpu-general/t5-data-v2/snli.tsv',
'gs://mesolitica-tpu-general/t5-data-v2/mnli.tsv'
]
)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['question', 'answer'], ex)))
return ds
def similarity_preprocessor(ds):
def to_inputs_and_targets(ex):
return {
'inputs': ex['question'],
'targets': ex['answer'],
}
return ds.map(
to_inputs_and_targets,
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
t5.data.TaskRegistry.remove('similarity_dataset')
t5.data.TaskRegistry.add(
'similarity_dataset',
dataset_fn = similarity_dataset,
splits = ['train'],
text_preprocessor = [similarity_preprocessor],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
nq_task = t5.data.TaskRegistry.get('similarity_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 256, 'targets': 32}
)
for ex in tfds.as_numpy(ds.take(1)):
print(ex)
# +
def en_ms_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(
[
'gs://mesolitica-tpu-general/t5-data-v2/en-ms.tsv'
]
)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['question', 'answer'], ex)))
return ds
def en_ms_preprocessor(ds):
def to_inputs_and_targets(ex):
return {
'inputs': tf.strings.join(['terjemah Inggeris ke Melayu: ', ex['question']]),
'targets': ex['answer'],
}
return ds.map(
to_inputs_and_targets,
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
t5.data.TaskRegistry.remove('en_ms_dataset')
t5.data.TaskRegistry.add(
'en_ms_dataset',
dataset_fn = en_ms_dataset,
splits = ['train'],
text_preprocessor = [en_ms_preprocessor],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
nq_task = t5.data.TaskRegistry.get('en_ms_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 1024, 'targets': 1024}
)
for ex in tfds.as_numpy(ds.take(1)):
print(ex)
# +
def ms_en_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(
[
'gs://mesolitica-tpu-general/t5-data-v2/ms-en.tsv'
]
)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['question', 'answer'], ex)))
return ds
def ms_en_preprocessor(ds):
def to_inputs_and_targets(ex):
return {
'inputs': tf.strings.join(['terjemah Melayu ke Inggeris: ', ex['question']]),
'targets': ex['answer'],
}
return ds.map(
to_inputs_and_targets,
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
t5.data.TaskRegistry.remove('ms_en_dataset')
t5.data.TaskRegistry.add(
'ms_en_dataset',
dataset_fn = ms_en_dataset,
splits = ['train'],
text_preprocessor = [ms_en_preprocessor],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
nq_task = t5.data.TaskRegistry.get('ms_en_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 1024, 'targets': 1024}
)
for ex in tfds.as_numpy(ds.take(1)):
print(ex)
# +
def knowledge_graph_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(
[
'gs://mesolitica-tpu-general/t5-data-v2/knowledge-graph.tsv'
]
)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['question', 'answer'], ex)))
return ds
def knowledge_graph_preprocessor(ds):
def to_inputs_and_targets(ex):
return {
'inputs': tf.strings.join(['grafik pengetahuan: ', ex['question']]),
'targets': ex['answer'],
}
return ds.map(
to_inputs_and_targets,
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
t5.data.TaskRegistry.remove('knowledge_graph_dataset')
t5.data.TaskRegistry.add(
'knowledge_graph_dataset',
dataset_fn = knowledge_graph_dataset,
splits = ['train'],
text_preprocessor = [knowledge_graph_preprocessor],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
nq_task = t5.data.TaskRegistry.get('knowledge_graph_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 1024, 'targets': 1024}
)
for ex in tfds.as_numpy(ds.take(1)):
print(ex)
# +
def paraphrase_dataset(split, shuffle_files = False):
del shuffle_files
ds = tf.data.TextLineDataset(
[
'gs://mesolitica-tpu-general/t5-data-v2/paraphrase.tsv'
]
)
ds = ds.map(
functools.partial(
tf.io.decode_csv,
record_defaults = ['', ''],
field_delim = '\t',
use_quote_delim = False,
),
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
ds = ds.map(lambda *ex: dict(zip(['question', 'answer'], ex)))
return ds
def paraphrase_preprocessor(ds):
def to_inputs_and_targets(ex):
return {
'inputs': tf.strings.join(['parafrasa: ', ex['question']]),
'targets': ex['answer'],
}
return ds.map(
to_inputs_and_targets,
num_parallel_calls = tf.data.experimental.AUTOTUNE,
)
t5.data.TaskRegistry.remove('paraphrase_dataset')
t5.data.TaskRegistry.add(
'paraphrase_dataset',
dataset_fn = paraphrase_dataset,
splits = ['train'],
text_preprocessor = [paraphrase_preprocessor],
sentencepiece_model_path = vocab,
postprocess_fn = t5.data.postprocessors.lower_text,
metric_fns = [t5.evaluation.metrics.accuracy],
)
# -
nq_task = t5.data.TaskRegistry.get('paraphrase_dataset')
ds = nq_task.get_dataset(
split = 'train', sequence_length = {'inputs': 1024, 'targets': 1024}
)
for ex in tfds.as_numpy(ds.take(1)):
print(ex)
|
pretrained-model/t5/test-dataset-gcs.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ameralhomdy/DS-Unit-2-Regression-Classification/blob/master/module4/First_attempt_assignment_regression_classification_4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="7IXUfiQ2UKj6"
# Lambda School Data Science, Unit 2: Predictive Modeling
#
# # Regression & Classification, Module 4
#
#
# ## Assignment
#
# - [ ] Watch Aaron's [video #1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video #2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.
# - [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition.
# - [ ] Do train/validate/test split with the Tanzania Waterpumps data.
# - [ ] Begin with baselines for classification.
# - [ ] Use scikit-learn for logistic regression.
# - [ ] Get your validation accuracy score.
# - [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
# - [ ] Commit your notebook to your fork of the GitHub repo.
#
# ---
#
#
# ## Stretch Goals
#
# - [ ] Add your own stretch goal(s) !
# - [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)
# - [ ] Make exploratory visualizations.
# - [ ] Do one-hot encoding. For example, you could try `quantity`, `basin`, `extraction_type_class`, and more. (But remember it may not work with high cardinality categoricals.)
# - [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
# - [ ] Get and plot your coefficients.
# - [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
#
# ---
#
# ## Data Dictionary
#
# ### Features
#
# Your goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:
#
# - `amount_tsh` : Total static head (amount water available to waterpoint)
# - `date_recorded` : The date the row was entered
# - `funder` : Who funded the well
# - `gps_height` : Altitude of the well
# - `installer` : Organization that installed the well
# - `longitude` : GPS coordinate
# - `latitude` : GPS coordinate
# - `wpt_name` : Name of the waterpoint if there is one
# - `num_private` :
# - `basin` : Geographic water basin
# - `subvillage` : Geographic location
# - `region` : Geographic location
# - `region_code` : Geographic location (coded)
# - `district_code` : Geographic location (coded)
# - `lga` : Geographic location
# - `ward` : Geographic location
# - `population` : Population around the well
# - `public_meeting` : True/False
# - `recorded_by` : Group entering this row of data
# - `scheme_management` : Who operates the waterpoint
# - `scheme_name` : Who operates the waterpoint
# - `permit` : If the waterpoint is permitted
# - `construction_year` : Year the waterpoint was constructed
# - `extraction_type` : The kind of extraction the waterpoint uses
# - `extraction_type_group` : The kind of extraction the waterpoint uses
# - `extraction_type_class` : The kind of extraction the waterpoint uses
# - `management` : How the waterpoint is managed
# - `management_group` : How the waterpoint is managed
# - `payment` : What the water costs
# - `payment_type` : What the water costs
# - `water_quality` : The quality of the water
# - `quality_group` : The quality of the water
# - `quantity` : The quantity of water
# - `quantity_group` : The quantity of water
# - `source` : The source of the water
# - `source_type` : The source of the water
# - `source_class` : The source of the water
# - `waterpoint_type` : The kind of waterpoint
# - `waterpoint_type_group` : The kind of waterpoint
#
# ### Labels
#
# There are three possible values:
#
# - `functional` : the waterpoint is operational and there are no repairs needed
# - `functional needs repair` : the waterpoint is operational, but needs repairs
# - `non functional` : the waterpoint is not operational
#
# ---
#
# ## Generate a submission
#
# Your code to generate a submission file may look like this:
#
# ```python
# # estimator is your model or pipeline, which you've fit on X_train
#
# # X_test is your pandas dataframe or numpy array,
# # with the same number of rows, in the same order, as test_features.csv,
# # and the same number of columns, in the same order, as X_train
#
# y_pred = estimator.predict(X_test)
#
#
# # Makes a dataframe with two columns, id and status_group,
# # and writes to a csv file, without the index
#
# sample_submission = pd.read_csv('sample_submission.csv')
# submission = sample_submission.copy()
# submission['status_group'] = y_pred
# submission.to_csv('your-submission-filename.csv', index=False)
# ```
#
# If you're working locally, the csv file is saved in the same directory as your notebook.
#
# If you're using Google Colab, you can use this code to download your submission csv file.
#
# ```python
# from google.colab import files
# files.download('your-submission-filename.csv')
# ```
#
# ---
# + colab_type="code" id="o9eSnDYhUGD7" colab={}
import os, sys
in_colab = 'google.colab' in sys.modules
# If you're in Colab...
if in_colab:
# Pull files from Github repo
os.chdir('/content')
# !git init .
# !git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
# !git pull origin master
# Install required python packages
# !pip install -r requirements.txt
# Change into directory for module
os.chdir('module4')
# + colab_type="code" id="ipBYS77PUwNR" colab={}
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# + colab_type="code" id="QJBD4ruICm1m" colab={}
# Read the Tanzania Waterpumps data
# train_features.csv : the training set features
# train_labels.csv : the training set labels
# test_features.csv : the test set features
# sample_submission.csv : a sample submission file in the correct format
import pandas as pd
train_features = pd.read_csv('../data/waterpumps/train_features.csv')
train_labels = pd.read_csv('../data/waterpumps/train_labels.csv')
test_features = pd.read_csv('../data/waterpumps/test_features.csv')
sample_submission = pd.read_csv('../data/waterpumps/sample_submission.csv')
assert train_features.shape == (59400, 40)
assert train_labels.shape == (59400, 2)
assert test_features.shape == (14358, 40)
assert sample_submission.shape == (14358, 2)
# + id="iDxFyb76mYCQ" colab_type="code" colab={}
train_features = pd.merge(train_features, train_labels, on='id', how='outer')
# + id="Aurk7OELlGoz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1296005f-31f4-4564-d86f-8b57d9b17178"
train_features.shape, test_features.shape
# + id="xW4KPshP0Uez" colab_type="code" colab={}
train_features = train_features.drop(columns='date_recorded')
# + id="FBKMtg3inxoW" colab_type="code" colab={}
train_features.status_group = train_features.status_group.replace({'functional':2, 'functional needs repair':1, 'non functional':0})
# + id="I4FYon88lTCz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d8289cc4-d720-4de1-ec44-a07880b1a8c3"
# Splitting data into train & val
from sklearn.model_selection import train_test_split
train, val = train_test_split(train_features, random_state=42) # because 42 is awesome!!!
train.shape, val.shape
# + id="pvINwUo_nTB4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 394} outputId="9a3ed544-c888-42c5-9042-f647ca53c3bb"
train.head()
# + id="r8sXYvvml8L8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="df31dc29-c23d-4787-ab10-d89198dc4386"
target = 'status_group'
y_train = train[target]
y_train.value_counts(normalize=True)
# + id="i6VCiZDZnazt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6b92a4cc-82ff-44b3-f3d3-6bba349ae9a3"
# finding the majority
majority = y_train.mode()[0]
majority
# + id="LfwWmKJEnvQi" colab_type="code" colab={}
y_pred = [majority] * len(y_train)
# + id="yoNZn8kcpssd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5f292fa7-8778-4d29-8b43-22531da09876"
sum(abs(y_pred - y_train)) / len(y_train) # mean absolute difference from the majority prediction (labels are ordinal-encoded, so this is not an error rate)
# + id="apcxY39jp0nm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="32c0c66c-c594-4393-a96a-8b76dd9f1b46"
# the accuracy if we guessed the majority for every prediction
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
# + id="d6M25szDqE0m" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="648f309a-4792-4aa8-bd8e-6a564c7cb576"
y_val = val[target]
y_pred = [majority] * len(y_val)
accuracy_score(y_pred, y_val)
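The two cells above compute the majority-class baseline with scikit-learn; the same idea in plain Python, on toy labels, makes the arithmetic explicit:

```python
from collections import Counter

# Toy encoded labels: 2 = functional, 1 = needs repair, 0 = non functional
y = [2, 2, 2, 0, 1, 2, 0, 2]

# Predict the most common class for every row...
majority = Counter(y).most_common(1)[0][0]
y_pred = [majority] * len(y)

# ...and the baseline accuracy is the fraction of rows in that class.
accuracy = sum(p == t for p, t in zip(y_pred, y)) / len(y)
print(majority, accuracy)
```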
# + id="v4-4ydmdswuO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="67695f71-1366-4ede-ee83-62e7926b3eb7"
val.describe()
# + id="m2M0P6fszPkp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 394} outputId="a2a24970-2720-4852-8791-eed8730c7fd7"
train.head()
# + [markdown] id="-bx7AAdns3-t" colab_type="text"
# ### Logistic Regression
# + id="HpsY7kvqs8em" colab_type="code" colab={}
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import f_regression, SelectKBest
# + id="cOg6PTFp0Hrh" colab_type="code" colab={}
features = ['id', 'amount_tsh', 'gps_height', 'installer', 'wpt_name', 'region_code', 'district_code',
'population', 'public_meeting', 'recorded_by', 'scheme_management', 'permit', 'extraction_type',
'management_group', 'payment', 'water_quality', 'quantity', 'source', 'waterpoint_type',
'status_group']
encoder = ce.OneHotEncoder(use_cat_names=True)
train_encoded = encoder.fit_transform(train[features])
val_encoded = encoder.transform(val[features])
# + id="1cGGTMnOymo3" colab_type="code" colab={}
selector = SelectKBest(score_func=f_regression, k=15)
# Note: 'status_group' (the target) is still in train_encoded here and should
# be dropped from the features before fitting to avoid target leakage.
X_train_selected = selector.fit_transform(train_encoded, y_train)
X_val_selected = selector.transform(val_encoded)
X_train_selected.shape, X_val_selected.shape
|
module4/First_attempt_assignment_regression_classification_4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
# - Author: <NAME>
# - GitHub Repository: https://github.com/rasbt/deeplearning-models
# %load_ext watermark
# %watermark -a '<NAME>' -v -p torch
# - Runs on CPU or GPU (if available)
# # Convolutional GAN with Label Smoothing
# Label smoothing: replace the labels of real images (1's) with 0.9, based on the idea in
#
# - Salimans, Tim, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. "Improved techniques for training GANs." In Advances in Neural Information Processing Systems, pp. 2234-2242. 2016.
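# Numerically, the one-sided smoothing just swaps the real-image target 1.0 for 0.9 inside the binary cross-entropy (the training loop below does this via `valid*0.9`). A stdlib-only illustration, with the sigmoid and BCE written out by hand:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce(p, target):
    # Binary cross-entropy for a single probability p and soft target.
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

logit = 2.0               # example discriminator output for a real image
p = sigmoid(logit)
loss_hard = bce(p, 1.0)   # unsmoothed target
loss_smooth = bce(p, 0.9) # smoothed target, as used in the training loop
print(loss_hard, loss_smooth)
```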
# ## Imports
# +
import time
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
import torch.nn as nn
from torch.utils.data import DataLoader
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
# -
# ## Settings and Dataset
# +
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:3" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 123
generator_learning_rate = 0.0001
discriminator_learning_rate = 0.0001
num_epochs = 100
BATCH_SIZE = 128
LATENT_DIM = 100
IMG_SHAPE = (1, 28, 28)
IMG_SIZE = 1
for x in IMG_SHAPE:
IMG_SIZE *= x
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=4,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=4,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
# -
# ## Model
# +
##########################
### MODEL
##########################
class Flatten(nn.Module):
def forward(self, input):
return input.view(input.size(0), -1)
class Reshape1(nn.Module):
def forward(self, input):
return input.view(input.size(0), 64, 7, 7)
class GAN(torch.nn.Module):
def __init__(self):
super(GAN, self).__init__()
self.generator = nn.Sequential(
nn.Linear(LATENT_DIM, 3136, bias=False),
nn.BatchNorm1d(num_features=3136),
nn.LeakyReLU(inplace=True, negative_slope=0.0001),
Reshape1(),
nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=(3, 3), stride=(2, 2), padding=1, bias=False),
nn.BatchNorm2d(num_features=32),
nn.LeakyReLU(inplace=True, negative_slope=0.0001),
#nn.Dropout2d(p=0.2),
nn.ConvTranspose2d(in_channels=32, out_channels=16, kernel_size=(3, 3), stride=(2, 2), padding=1, bias=False),
nn.BatchNorm2d(num_features=16),
nn.LeakyReLU(inplace=True, negative_slope=0.0001),
#nn.Dropout2d(p=0.2),
nn.ConvTranspose2d(in_channels=16, out_channels=8, kernel_size=(3, 3), stride=(1, 1), padding=0, bias=False),
nn.BatchNorm2d(num_features=8),
nn.LeakyReLU(inplace=True, negative_slope=0.0001),
#nn.Dropout2d(p=0.2),
nn.ConvTranspose2d(in_channels=8, out_channels=1, kernel_size=(2, 2), stride=(1, 1), padding=0, bias=False),
nn.Tanh()
)
self.discriminator = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=8, padding=1, kernel_size=(3, 3), stride=(2, 2), bias=False),
nn.BatchNorm2d(num_features=8),
nn.LeakyReLU(inplace=True, negative_slope=0.0001),
#nn.Dropout2d(p=0.2),
nn.Conv2d(in_channels=8, out_channels=32, padding=1, kernel_size=(3, 3), stride=(2, 2), bias=False),
nn.BatchNorm2d(num_features=32),
nn.LeakyReLU(inplace=True, negative_slope=0.0001),
#nn.Dropout2d(p=0.2),
Flatten(),
nn.Linear(7*7*32, 1),
#nn.Sigmoid()
)
def generator_forward(self, z):
img = self.generator(z)
return img
def discriminator_forward(self, img):
        pred = self.discriminator(img)  # use self, not the global `model`
return pred.view(-1)
# +
torch.manual_seed(random_seed)
#del model
model = GAN()
model = model.to(device)
print(model)
# +
### ## FOR DEBUGGING
"""
outputs = []
def hook(module, input, output):
outputs.append(output)
#for i, layer in enumerate(model.discriminator):
# if isinstance(layer, torch.nn.modules.conv.Conv2d):
# model.discriminator[i].register_forward_hook(hook)
for i, layer in enumerate(model.generator):
    if isinstance(layer, torch.nn.ConvTranspose2d):
model.generator[i].register_forward_hook(hook)
"""
# -
optim_gener = torch.optim.Adam(model.generator.parameters(), lr=generator_learning_rate)
optim_discr = torch.optim.Adam(model.discriminator.parameters(), lr=discriminator_learning_rate)
# ## Training
# +
start_time = time.time()
discr_costs = []
gener_costs = []
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
# Normalize images to [-1, 1] range
features = (features - 0.5)*2.
features = features.view(-1, IMG_SIZE).to(device)
targets = targets.to(device)
valid = torch.ones(targets.size(0)).float().to(device)
fake = torch.zeros(targets.size(0)).float().to(device)
### FORWARD AND BACK PROP
# --------------------------
# Train Generator
# --------------------------
# Make new images
z = torch.zeros((targets.size(0), LATENT_DIM)).uniform_(-1.0, 1.0).to(device)
generated_features = model.generator_forward(z)
# Loss for fooling the discriminator
discr_pred = model.discriminator_forward(generated_features.view(targets.size(0), 1, 28, 28))
gener_loss = F.binary_cross_entropy_with_logits(discr_pred, valid*0.9)
optim_gener.zero_grad()
gener_loss.backward()
optim_gener.step()
# --------------------------
# Train Discriminator
# --------------------------
discr_pred_real = model.discriminator_forward(features.view(targets.size(0), 1, 28, 28))
real_loss = F.binary_cross_entropy_with_logits(discr_pred_real, valid*0.9)
discr_pred_fake = model.discriminator_forward(generated_features.view(targets.size(0), 1, 28, 28).detach())
fake_loss = F.binary_cross_entropy_with_logits(discr_pred_fake, fake)
discr_loss = 0.5*(real_loss + fake_loss)
optim_discr.zero_grad()
discr_loss.backward()
optim_discr.step()
discr_costs.append(discr_loss.item())
gener_costs.append(gener_loss.item())
### LOGGING
if not batch_idx % 100:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Gen/Dis Loss: %.4f/%.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), gener_loss, discr_loss))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
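The `valid*0.9` targets above implement one-sided label smoothing: real samples are pushed toward 0.9 rather than 1.0, which keeps the discriminator from becoming overconfident. A minimal stdlib sketch (an illustration of the effect, not part of the training code) of what smoothing does to the BCE-with-logits loss:

```python
import math

def bce_with_logits(logit, target):
    # numerically naive binary cross-entropy with logits, for illustration only
    p = 1.0 / (1.0 + math.exp(-logit))
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

# A confident real prediction (logit = 5) is almost free against target 1.0,
# but remains penalized against the smoothed target 0.9:
print(round(bce_with_logits(5.0, 1.0), 4))  # → 0.0067
print(round(bce_with_logits(5.0, 0.9), 4))  # → 0.5067
```

Because the smoothed loss never reaches zero on real samples, the discriminator keeps producing moderate logits, which in turn gives the generator a more informative gradient.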
# +
### For Debugging
"""
for i in outputs:
print(i.size())
"""
# -
# ## Evaluation
# %matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(gener_costs)), gener_costs, label='generator loss')
plt.plot(range(len(discr_costs)), discr_costs, label='discriminator loss')
plt.legend()
plt.show()
# +
##########################
### VISUALIZATION
##########################
model.eval()
# Make new images
z = torch.zeros((5, LATENT_DIM)).uniform_(-1.0, 1.0).to(device)
generated_features = model.generator_forward(z)
imgs = generated_features.view(-1, 28, 28)
fig, axes = plt.subplots(nrows=1, ncols=5, figsize=(20, 2.5))
for i, ax in enumerate(axes):
axes[i].imshow(imgs[i].to(torch.device('cpu')).detach(), cmap='binary')
# -
from torchsummary import summary
model = model.to('cuda:0')
summary(model.generator, input_size=(100,))
summary(model.discriminator, input_size=(1, 28, 28))
|
pytorch_ipynb/gan/gan-conv-smoothing.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
x = []
y = []
#x1, y1 = input('insert coordinates of first apex:')
#x2, y2 = input('insert coordinates of second apex:')
#x3, y3 = input('insert coordinates of third apex:')
x1, y1 = 0, 0
x2, y2 = 1, 0
x3, y3 = 0.7, 1
k1 = (y2 - y1)/ (x2 - x1)
b1 = y1 - k1*x1
xline1 = np.linspace(x1, x2, 500)
yline1 = k1*xline1 + b1
plt.plot(xline1, yline1, c = 'r')
k2 = (y3 - y1)/ (x3 - x1)
b2 = y1 - k2*x1
xline2 = np.linspace(x1, x3, 500)
yline2 = k2*xline2 + b2
plt.plot(xline2, yline2, c = 'r')
k3 = (y3 - y2)/ (x3 - x2)
b3 = y2 - k3*x2
xline3 = np.linspace(x3, x2, 500)
yline3 = k3*xline3 + b3
plt.plot(xline3, yline3, c = 'r')
check = False
while not check:
randx = np.random.rand()
randy = np.random.rand()
if (k2*randx + b2 - randy) >= 0:
if (k3*randx + b3 - randy) >= 0:
check = True
x.append(randx)
y.append(randy)
n = input('insert number of iterations: ')
for i in range(int(n)):
choice = np.random.randint(3)
if choice == 0:
x.append((randx + x1)/2)
y.append((randy + y1)/2)
elif choice == 1:
x.append((randx + x2)/2)
y.append((randy + y2)/2)
else:
x.append((randx + x3)/2)
y.append((randy + y3)/2)
randx = x[-1]
randy = y[-1]
plt.gcf().set_size_inches(10, 10)
plt.scatter(x,y, marker = '.')
|
hw3/uzor.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Now let's look at the second question of interest. That is - What does the data suggest of Bootcamp grads in terms of job placement and salary?
#
# Again, let's read in the data and necessary libraries!
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
df.head()
# +
#In this case, we want to look at bootcamp data
#First - let's just look at how many people took a bootcamp in the dataset
bootcamp_df = df[df['TimeAfterBootcamp'].notnull()]
not_bootcamp_df = df[df['TimeAfterBootcamp'].isnull()]
bootcamp_df.shape
# +
# Looks like a reasonable sample of ~2600 people
#Additional questions about bootcamps - they suggest high salaries, placement,
#helping those with non-traditional backgrounds and diversity break into tech... let's see what
#the data suggests.
# -
bootcamp_df['Gender'].value_counts(normalize=True)
not_bootcamp_df['Gender'].value_counts(normalize=True)
# +
#It does appear there is a small push for diversity overall by bootcamps, but not huge...
# -
bootcamp_df['FormalEducation'].value_counts(normalize=True)
not_bootcamp_df['FormalEducation'].value_counts(normalize=True)
# +
#In terms of formal education it looks basically the same - more bachelor's degree
#holders do bootcamps, but fewer PhDs do.
# -
bootcamp_df['TimeAfterBootcamp'].value_counts()/bootcamp_df.shape[0]
# +
#Interestingly, this data makes it more difficult to analyze the impact of bootcamps,
# as many of the students already had developer jobs before starting the program -
# so we could remove them.
#If someone is truly new to the space, we can rule out that they already had a developer
# job; then we can look at the remaining individuals and see which are still not placed.
not_devs = bootcamp_df[bootcamp_df['TimeAfterBootcamp']!="I already had a job as a developer when I started the program"]
# -
not_devs['TimeAfterBootcamp'].value_counts()/not_devs.shape[0]
bootcamp_df[bootcamp_df['Salary']==195000]
bootcamp_df['Salary'].hist(bins=20);
plt.title('Salary for Bootcamp Grads');
plt.xlabel('Salary');
plt.ylabel('Count');
bootcamp_df['Salary'].describe()
# +
#Here we can get some idea of how bootcamp grads fare, but this isn't straightforward.
#Many of these individuals are not new to the field, and the salaries are all over the place,
#but the descriptive statistics here give us some ideas... just nothing really concrete.
|
BootcampStats.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## How to write an if statement
#
# if condition:
def print_x(x):
if x > 0:
print(f"0 < {x}")
elif x < 0:
print(f"{x} < 0")
else:
print(f"{x} is 0")
1 > 0
print_x(1)
-1 > 0
print_x(-1)
print_x(0)
# ## Comparison operators
0.1 == 0.3, 0.1 == 0.1
0.1 != 0.3, 0.1 != 0.1
0.1 < 0.3, 0.1 < 0.1
0.1 <= 0.3, 0.1 <= 0.1
0.1 > 0.3, 0.1 > 0.1
0.1 >= 0.3, 0.1 >= 0.1
0.1 is 0.3, 0.1 is 0.1
# ### The difference between is and ==
#
# == compares values; is compares object identity
a = 1
b = 1
a == b, a is b
a = "1"
b = "1"
a == b, a is b
# same contents, but different object identities
a = [1]
b = [1]
a == b, a is b
# ### Pitfalls when comparing computed results
0.1 + 0.1 + 0.1 == 0.3
0.3 - (0.1 + 0.1 + 0.1)
abs(0.3 - (0.1 + 0.1 + 0.1)) < 1.0e-16
abs(0.3 - (0.1 + 0.1 + 0.1)) < 1.0e-17
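A fixed absolute threshold such as `1.0e-16` is fragile, as the last two comparisons show; the standard library's `math.isclose`, which uses a relative tolerance by default, is the usual tool for comparing floats:

```python
import math

# rel_tol (default 1e-09) scales the tolerance with the magnitudes being compared
print(math.isclose(0.1 + 0.1 + 0.1, 0.3))     # → True
print(math.isclose(0.1 + 0.1 + 0.1, 0.3001))  # → False
```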
# ## Falsy values other than bool
# int
if 0:
print("T")
else:
print("F")
# float
if 0.0:
print("T")
else:
print("F")
# str
if "":
print("T")
else:
print("F")
# list
if []:
print("T")
else:
print("F")
# tuple
if ():
print("T")
else:
print("F")
# dict
if {}:
print("T")
else:
print("F")
# set
if set():
print("T")
else:
print("F")
# ## Membership tests
1 in [1, 2, 3]
"1" in [1, 2, 3]
1 in {1, 2, 3}
1 in "123"
"1" in "123"
"oxygen" in {"oxygen": 8, "natrium": 11}
8 in {"oxygen": 8, "natrium": 11}
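As the last two lines show, `in` on a dict tests keys only; to test values or key-value pairs, use the `.values()` and `.items()` views:

```python
elements = {"oxygen": 8, "natrium": 11}

print(8 in elements)                      # → False (membership checks keys)
print(8 in elements.values())             # → True
print(("oxygen", 8) in elements.items())  # → True
```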
|
notebooks/if.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import rasterio
import matplotlib.pyplot as plt
import os
# %matplotlib inline
def sdv_snow(snow_rstr):
# open a geotiff
src = rasterio.open(snow_rstr)
# pull the metadata - needed to write a geotiff at the end
meta = src.meta
# use a numpy array to do the math
snow = src.read(1)
# mask out no data values just in case
snow_ = np.ma.masked_array(snow, snow == src.nodatavals)
mu = snow_.mean()
sigma = snow_.std()
sdv = (snow_ - mu) / sigma
    # sdv is a standardized snow depth value and enables comparison of relative snow depths in different years
    # Compare standardized and measured snow maps
plt.imshow(sdv, cmap='pink')
plt.title('Standardized Depth [m]')
plt.colorbar()
plt.figure()
plt.imshow(snow_, cmap='pink')
plt.title('Measured Depth [m]')
plt.colorbar()
# Plots should look identical - but have different values
# Write it
    out_name = os.path.basename(src.name).replace('.tif', '_sdv.tif')
with rasterio.open( out_name, 'w', **meta ) as out:
out.write( sdv, 1 )
# Test it
sdv_snow(r'/home/cparr/drift_results/clip.tif')
|
Rasterio.Snow.SDV.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Active Inference for Markov Decision Processes
#
# > This notebook provides an example of the `pymdp` toolbox
# ## Environments
#
# The `pymdp` toolbox includes environments that follow the openAI `gym` API. Here, we will use a 1D grid-world environment.
#
# We assume a grid world of shape $1 \times h$. At each time step $t$, the environment generates observations about the agent's position in the grid world. Agents can take one of 3 actions - `{LEFT, STAY, RIGHT}`.
#
# Below, we demonstrate how an environment can be initialized and run forward with random actions.
# +
from pymdp.envs import DGridWorldEnv
env_shape = [1, 4]
env = DGridWorldEnv(shape=env_shape)
obs = env.reset()
env.render("Initial position")
T = 3
for t in range(T):
action = env.sample_action()
obs = env.step(action)
print(f"Agent action [{env.CONTROL_NAMES[action]}]")
env.render(f"Position at time {t}")
# -
# ## Generative model
#
# We start by considering a generative model without control states. In other words, the agent passively perceives its position and transitions through state space.
# +
from pymdp.distributions import Categorical
B = env.get_transition_dist()
B = Categorical(values=B[:, :, 0])
B.plot()
# -
|
experimental/examples/example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/julianovale/PythonFundamentosDSA/blob/main/DSA_Python_Cap05_05_MetodosEspeciais.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="naOYMHkPaO4e"
# # <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 5</font>
# ## Download: http://github.com/dsacademybr
# + colab={"base_uri": "https://localhost:8080/"} id="UtNk9nRJaO4i" outputId="177a798d-af0f-49e7-c18d-4e585676aa99"
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
# + [markdown] id="EkJub9f3aO4n"
# ## Special Methods
# + id="X45WqLoUaO4p"
# Creating the Livro class
class Livro():
def __init__(self, titulo, autor, paginas):
print ("Livro criado")
self.titulo = titulo
self.autor = autor
self.paginas = paginas
def __str__(self):
return "Título: %s , autor: %s, páginas: %s " \
%(self.titulo, self.autor, self.paginas)
def __len__(self):
return self.paginas
def len(self):
return print("Páginas do livro com método comum: ", self.paginas)
# + colab={"base_uri": "https://localhost:8080/"} id="gdt6rMhNaO4s" outputId="3e63e519-4682-461c-bb01-dd32c04bede9"
livro1 = Livro("Os Lusíadas", "<NAME>", 8816)
# + colab={"base_uri": "https://localhost:8080/"} id="CbFY8oTEaO4t" outputId="ee504935-35cc-4b4e-c02b-38381bc62a16"
# Special methods
print(livro1)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="JxtVa4dGaO4t" outputId="c1cbe1d7-919e-4ea9-8bb7-4c1beb5373ea"
str(livro1)
# + colab={"base_uri": "https://localhost:8080/"} id="9tr1sdGbaO4u" outputId="a86068cd-08ee-4f7e-91ae-712752019872"
len(livro1)
# + colab={"base_uri": "https://localhost:8080/"} id="O-OYXytxaO4u" outputId="f5d9dc01-8182-48be-bad9-ed0ccb6107fe"
livro1.len()
# + id="LGCgEb-DaO4v"
# When the del statement removes an attribute, Python executes:
# livro1.__delattr__("paginas")
del livro1.paginas
# + colab={"base_uri": "https://localhost:8080/"} id="xJJtJCqjaO4w" outputId="766ab65e-6c43-43dc-fa0c-649f844eb97b"
hasattr(livro1, "paginas")
# + [markdown] id="xuEQEQWpaO4w"
# # The End
# + [markdown] id="mGL73whJaO4x"
# ### Thank you - Data Science Academy - <a href="http://facebook.com/dsacademybr">facebook.com/dsacademybr</a>
|
DSA_Python_Cap05_05_MetodosEspeciais.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Mqv4cUb5Rf3l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 615} outputId="4add1c47-ddeb-43c8-bc7e-39325877e7db"
# !pip install -U tensorflow-gpu
# + id="CQWDcNkn4aC4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 442} outputId="e691152a-26db-4fe6-d581-5a6f8a17e857"
# !mkdir -p /root/.kaggle
# !echo '{"username":"davialvb","key":"<KEY>"}' > /root/.kaggle/kaggle.json
# !chmod 600 /root/.kaggle/kaggle.json
# !kaggle datasets download -d paultimothymooney/chest-xray-pneumonia
# !unzip chest-xray-pneumonia.zip
# !ls -al *
# + [markdown] id="eWcSdTAuq-20" colab_type="text"
# # Import dependencies
# + id="8YkLYFAH6uCT" colab_type="code" outputId="20e99f8e-6f02-443d-bbea-35685d13b518" colab={"base_uri": "https://localhost:8080/", "height": 34}
from __future__ import absolute_import, division, print_function, unicode_literals

import os
import pathlib
import random
import cv2
import numpy as np
import pandas as pd
import scipy
import skimage
import skimage as sk
import tensorflow as tf
import IPython.display as display
import matplotlib.pyplot as plt
from random import shuffle
from tqdm import tqdm
from skimage.transform import resize
from scipy import ndarray
from skimage import transform
from skimage import util
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
print(tf.__version__)
# + id="AKlpNps8LhZd" colab_type="code" outputId="4749fff4-7311-4dfa-a304-4e12c18657e6" colab={"base_uri": "https://localhost:8080/", "height": 153}
# !ls -la chest_xray/
# + id="k7Gq-wKE7T0H" colab_type="code" colab={}
# Create variables with path
TRAIN_PATH = pathlib.Path('./chest_xray/train/')
TEST_PATH = pathlib.Path('./chest_xray/test/')
VAL_PATH = pathlib.Path('./chest_xray/val')
# + [markdown] id="iKCfGmHTmMle" colab_type="text"
# ## Number of samples
# + id="EAr6YwHlMrin" colab_type="code" outputId="854a62b1-e29c-4f18-99d5-42ad1d4e735d" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_count = len(list(TRAIN_PATH.glob('*/*.jpeg')))
test_count = len(list(TEST_PATH.glob('*/*.jpeg')))
val_count = len(list(VAL_PATH.glob('*/*.jpeg')))
train_count, test_count, val_count
# + [markdown] id="1aKYbmbemPoz" colab_type="text"
#
# ## Normal Patient vs Patient with Pneumonia
# + id="RHerLXejVTxq" colab_type="code" outputId="e09f448b-15b4-4a14-e2db-4040d0496c3e" colab={"base_uri": "https://localhost:8080/", "height": 431}
normal_files = list(TRAIN_PATH.glob('NORMAL/*'))
pneumonia_files = list(TRAIN_PATH.glob('PNEUMONIA/*'))
normal_img = cv2.imread(str(normal_files[2]))
pneumonia_img = cv2.imread(str(pneumonia_files[2]))
fig = plt.figure(figsize=(15,15))
ax1 = fig.add_subplot(221)
ax1.title.set_text('Normal patient')
ax2 = fig.add_subplot(222)
ax2.title.set_text('Patient with Pneumonia')
ax1.imshow(normal_img)
ax2.imshow(pneumonia_img)
# + [markdown] id="MJhnW2TnmmsA" colab_type="text"
# ## Create dataset of the file paths
# + id="zTjRMy8zNSyM" colab_type="code" outputId="28158e87-d636-49f4-a39b-1e69be68094e" colab={"base_uri": "https://localhost:8080/", "height": 102}
train_ds = tf.data.Dataset.list_files(str(TRAIN_PATH/'*/*'))
for f in train_ds.take(5):
print(f.numpy())
# + id="Bbdsr7hS9va0" colab_type="code" outputId="1ce41496-6f52-45da-a424-86b25cbdd214" colab={"base_uri": "https://localhost:8080/", "height": 51}
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
print(tf.__version__)
# + [markdown] id="VzXCSlbYmzX3" colab_type="text"
# ## Create dataset of the file paths
# + id="szWNjwj9VtqG" colab_type="code" outputId="9be9acfe-2f65-4cd9-87dc-6211becd06d0" colab={"base_uri": "https://localhost:8080/", "height": 34}
CLASS_NAMES = np.array([item.name for item in TRAIN_PATH.glob('*')])
CLASS_NAMES
# + id="GNzt0Zpn4U-c" colab_type="code" colab={}
BATCH_SIZE = 32
IMG_HEIGHT = 224
IMG_WIDTH = 224
STEPS_PER_EPOCH = np.ceil(train_count/BATCH_SIZE)
# + id="Y7JHCmuuF9li" colab_type="code" colab={}
def get_label(file_path):
# convert the path to a list of path components
parts = tf.strings.split(file_path, '/')
# The second to last is the class-directory
return parts[-2] == CLASS_NAMES
def decode_img(img):
# convert the compressed string to a 3D uint8 tensor
img = tf.image.decode_jpeg(img, channels=3)
# Use `convert_image_dtype` to convert to floats in the [0,1] range.
img = tf.image.convert_image_dtype(img, tf.float32)
# resize the image to the desired size.
return tf.image.resize(img, [IMG_WIDTH, IMG_HEIGHT])
def process_path(file_path):
label = get_label(file_path)
# load the raw data from the file as a string
img = tf.io.read_file(file_path)
img = decode_img(img)
return img, label
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10,10))
for n in range(25):
ax = plt.subplot(5,5,n+1)
plt.imshow(image_batch[n])
plt.title(CLASS_NAMES[label_batch[n]==1][0].title())
plt.axis('off')
# + id="KtaooZL54c8I" colab_type="code" outputId="066cce09-9f4f-48c0-b5de-e33ab9d920da" colab={"base_uri": "https://localhost:8080/", "height": 51}
AUTOTUNE = tf.data.experimental.AUTOTUNE
labeled_ds = train_ds.map(process_path, AUTOTUNE)
for image, label in labeled_ds.take(1):
print("Image shape: ", image.numpy().shape)
print("Label: ", label.numpy())
# + id="IQqc9weq4z5J" colab_type="code" colab={}
def prepare_for_training(ds, cache=True, shuffle_buffer_size=1000):
# This is a small dataset, only load it once, and keep it in memory.
# use `.cache(filename)` to cache preprocessing work for datasets that don't
# fit in memory.
if cache:
if isinstance(cache, str):
ds = ds.cache(cache)
else:
ds = ds.cache()
ds = ds.shuffle(buffer_size=shuffle_buffer_size)
# Repeat forever
ds = ds.repeat()
ds = ds.batch(BATCH_SIZE)
# `prefetch` lets the dataset fetch batches in the background while the model
# is training.
ds = ds.prefetch(buffer_size=AUTOTUNE)
return ds
# + id="jbeA3f2J45EC" colab_type="code" colab={}
train_ds = prepare_for_training(labeled_ds)
image_batch, label_batch = next(iter(train_ds))
# + [markdown] id="5MFhuhVvnhI_" colab_type="text"
# ## Dataset Visualization
# + id="458WOcPKnisF" colab_type="code" outputId="19ef6ec7-b44c-4ba1-b367-88dc2f2fd01c" colab={"base_uri": "https://localhost:8080/", "height": 591}
show_batch(image_batch.numpy(), label_batch.numpy())
# + [markdown] id="R05cXEUom7KH" colab_type="text"
# ## Train Dataset Label Count
# + id="HfkSODNWm8SY" colab_type="code" outputId="e9f8e4af-53b0-4fd1-a027-9b5356ff2b0c" colab={"base_uri": "https://localhost:8080/", "height": 298}
import seaborn as sns
pneumonia_count = len(list(TRAIN_PATH.glob("PNEUMONIA/*")))
normal_count = len(list(TRAIN_PATH.glob("NORMAL/*")))
sns.barplot(x=['Pneumonia Cases', 'Normal Cases'], y=[pneumonia_count, normal_count], palette='magma')
plt.title('Train Dataset Label Count')
plt.show()
pneumonia_count, normal_count
# + [markdown] id="7VnD0_QSnARv" colab_type="text"
# ## Test Dataset Label Count
# + id="TASmcb5AnCvH" colab_type="code" outputId="a8b9197b-c894-4ec6-e241-579096faa9a3" colab={"base_uri": "https://localhost:8080/", "height": 298}
test_pneumonia_count = len(list(TEST_PATH.glob("PNEUMONIA/*")))
test_normal_count = len(list(TEST_PATH.glob("NORMAL/*")))
sns.barplot(x=['Pneumonia Cases', 'Normal Cases'], y=[test_pneumonia_count, test_normal_count], palette='magma')
plt.title('Test Dataset Label Count')
plt.show()
test_pneumonia_count, test_normal_count
# + [markdown] id="ODcqwrT5nSMM" colab_type="text"
# ## Deep Learning Model Architecture
# + id="PayFblQS5ZW9" colab_type="code" outputId="9b25dfb6-2165-4b92-dab5-dc7ad18b8305" colab={"base_uri": "https://localhost:8080/", "height": 425}
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
batch_size = 32
epochs = 5
IMG_HEIGHT = 64
IMG_WIDTH = 64
def create_model():
model = Sequential([
Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
MaxPooling2D(pool_size = (2, 2)),
Dropout(0.2),
Conv2D(32, (3, 3), activation='relu'),
MaxPooling2D(pool_size = (2, 2)),
# Conv2D(64, 3, padding='same', activation='relu'),
# MaxPooling2D(),
# Dropout(0.2),
Flatten(),
Dense(128, activation='relu'),
Dense(1, activation='sigmoid')
])
return model
# def create_model():
# model = Sequential([
# Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
# MaxPooling2D(),
# Dropout(0.2),
# Conv2D(32, 3, padding='same', activation='relu'),
# MaxPooling2D(),
# Conv2D(64, 3, padding='same', activation='relu'),
# MaxPooling2D(),
# Dropout(0.2),
# Flatten(),
# Dense(512, activation='relu'),
# Dense(1, activation='sigmoid')
# ])
# return model
model = create_model()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
# + [markdown] id="yPTR89kgnVKf" colab_type="text"
# ## Data generators
# + id="ARJdWpj_dFvz" colab_type="code" colab={}
def train_generator(image_size, batch_size=32):
datagen = ImageDataGenerator(
rescale=1./255,
# rotation_range=30,
# width_shift_range=25,
# height_shift_range=25,
zoom_range=0.2,
# brightness_range=(0.8, 1.2),
shear_range=0.2,
fill_mode = "constant",
horizontal_flip=True,
# vertical_flip=True,
# cval=0
)
data_generator = datagen.flow_from_directory(
TRAIN_PATH,
target_size=(image_size, image_size),
batch_size=batch_size,
class_mode='binary'
)
return data_generator
# + id="yrzoapSIed6m" colab_type="code" colab={}
def validation_generator(image_size, batch_size=32):
datagen = ImageDataGenerator(rescale=1./255)
data_generator = datagen.flow_from_directory(
VAL_PATH,
target_size=(image_size, image_size),
batch_size=batch_size,
class_mode='binary')
return data_generator
# + id="xtBQLn6sehUo" colab_type="code" colab={}
def test_generator(image_size, batch_size=32, shuffle=False):
datagen = ImageDataGenerator(rescale=1./255)
data_generator = datagen.flow_from_directory(
TEST_PATH,
target_size=(image_size, image_size),
batch_size=batch_size,
shuffle=shuffle,
class_mode='binary')
return data_generator
# + id="3BfjLozJ5jM0" colab_type="code" outputId="a2d72915-3a3f-4cd6-8588-f502086143e6" colab={"base_uri": "https://localhost:8080/", "height": 68}
train_data_generator = train_generator(IMG_HEIGHT)
test_data_generator = test_generator(IMG_HEIGHT)
val_data_generator = validation_generator(IMG_HEIGHT)
# + [markdown] id="tJeQPtNzoYsO" colab_type="text"
# ## Number of samples
# + id="jLK7pFIPATkB" colab_type="code" outputId="36c789bf-0683-4bee-f647-8acfc4a3e2b3" colab={"base_uri": "https://localhost:8080/", "height": 34}
train_pneumonia_count = len(list(TRAIN_PATH.glob("PNEUMONIA/*")))
train_normal_count = len(list(TRAIN_PATH.glob("NORMAL/*")))
test_pneumonia_count = len(list(TEST_PATH.glob("PNEUMONIA/*")))
test_normal_count = len(list(TEST_PATH.glob("NORMAL/*")))
val_pneumonia_count = len(list(VAL_PATH.glob("PNEUMONIA/*")))
val_normal_count = len(list(VAL_PATH.glob("NORMAL/*")))
train_pneumonia_count, test_pneumonia_count, val_pneumonia_count
# + [markdown] id="JyI63UwToPho" colab_type="text"
# ## Training Step
# + id="0U4sd6d36BG-" colab_type="code" outputId="edf725eb-8b9f-46aa-be8d-2df0bbaa91e7" colab={"base_uri": "https://localhost:8080/", "height": 374}
# import tensorflow.compat.v1 as tfcompat
# sess = tfcompat.Session(config=tfcompat.ConfigProto(log_device_placement=True))
print(tf.config.experimental.list_physical_devices('GPU'))
batch_size = 32
epochs = 10
total_train = train_pneumonia_count + train_normal_count
total_test = test_pneumonia_count + test_normal_count
total_val = val_pneumonia_count + val_normal_count
steps = total_train // batch_size
# print(total_val // batch_size)
with tf.device('/GPU:0'):
history = model.fit_generator(
train_data_generator,
steps_per_epoch=steps,
epochs=epochs,
validation_data=val_data_generator,
validation_steps=624
)
# + [markdown] id="budeYwv9ogWb" colab_type="text"
# ## History Visualization
# + id="ZenGRA9UeZIP" colab_type="code" outputId="5b99c225-8088-47b8-fe2f-49f73b6df1eb" colab={"base_uri": "https://localhost:8080/", "height": 499}
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# + [markdown] id="__HfYADavumi" colab_type="text"
# ## The graph is not very conclusive because the validation set does not contain a significant number of samples
#
# + [markdown] id="sP5vWecxzzOy" colab_type="text"
# ## Test Evaluation
# + id="YaMzmou4M17q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="dec96b74-2b6c-4d16-9f28-3ee845bd0be0"
test_loss, test_score = model.evaluate_generator(test_data_generator, steps=test_count // batch_size + 1)
print("Loss on test set: ", test_loss)
print("Accuracy on test set: ", test_score)
# + id="CouoNApAr57e" colab_type="code" colab={}
y_pred = model.predict_generator(test_data_generator, steps=test_count // batch_size + 1)
# + id="Mr8XMBU-w8u2" colab_type="code" colab={}
import seaborn as sns
import matplotlib.pylab as plt
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm, classes, normalized=True, cmap='bone'):
plt.figure(figsize=[12,8])
norm_cm = cm
if normalized:
norm_cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
sns.heatmap(norm_cm, annot=cm, fmt='g', xticklabels=classes, yticklabels=classes, cmap=cmap)
# + [markdown] id="KKCya64G1Yo9" colab_type="text"
# ## Normalized Confusion Matrix
# + id="mHGwXP0FxsdP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 523} outputId="ebc5ea01-bef1-4469-c366-f5c3db3470f0"
threshold = 0.7
discrete_pred = [1 if pred > threshold else 0 for pred in y_pred]
cm = confusion_matrix(test_data_generator.classes, discrete_pred)
plot_confusion_matrix(cm, ['Normal', 'Pneumonia'])
# + [markdown] id="HrAqK39S2lsd" colab_type="text"
# ## Classification Report
#
# + id="yFkp1xvy1wU_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="7c318373-a8a7-4910-ce0d-74b6a9bc9ac9"
from sklearn.metrics import classification_report
print(classification_report(test_data_generator.classes, discrete_pred, target_names=['Normal', 'Pneumonia']))
|
colab_demo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Tce3stUlHN0L"
# ##### Copyright 2020 The TensorFlow Authors.
# + cellView="form" id="tuOe1ymfHZPu"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="qFdPvlXBOdUN"
# # Introduction to the Keras Tuner
# + [markdown] id="MfBg1C5NB3X0"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/keras_tuner"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/keras_tuner.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/keras_tuner.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/keras_tuner.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="xHxb-dlhMIzW"
# ## Overview
#
# The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for your machine learning (ML) application is called *hyperparameter tuning* or *hypertuning*.
#
# Hyperparameters are the variables that govern the training process and the topology of an ML model. These variables remain constant over the training process and directly impact the performance of your ML program. Hyperparameters are of two types:
# 1. **Model hyperparameters** which influence model selection such as the number and width of hidden layers
# 2. **Algorithm hyperparameters** which influence the speed and quality of the learning algorithm such as the learning rate for Stochastic Gradient Descent (SGD) and the number of nearest neighbors for a k Nearest Neighbors (KNN) classifier
#
# In this tutorial, you will use the Keras Tuner to perform hypertuning for an image classification application.
# + [markdown] id="MUXex9ctTuDB"
# ## Setup
# + id="IqR2PQG4ZaZ0"
import tensorflow as tf
from tensorflow import keras
# + [markdown] id="g83Lwsy-Aq2_"
# Install and import the Keras Tuner.
# + id="hpMLpbt9jcO6"
# !pip install -q -U keras-tuner
# + id="_leAIdFKAxAD"
import keras_tuner as kt
# + [markdown] id="ReV_UXOgCZvx"
# ## Download and prepare the dataset
#
# In this tutorial, you will use the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist).
# + [markdown] id="HljH_ENLEdHa"
# Load the data.
# + id="OHlHs9Wj_PUM"
(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()
# + id="bLVhXs3xrUD0"
# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
# + [markdown] id="K5YEL2H2Ax3e"
# ## Define the model
#
# When you build a model for hypertuning, you also define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a *hypermodel*.
#
# You can define a hypermodel through two approaches:
#
# * By using a model builder function
# * By subclassing the `HyperModel` class of the Keras Tuner API
#
# You can also use two pre-defined `HyperModel` classes - [HyperXception](https://keras-team.github.io/keras-tuner/documentation/hypermodels/#hyperxception-class) and [HyperResNet](https://keras-team.github.io/keras-tuner/documentation/hypermodels/#hyperresnet-class) for computer vision applications.
#
# In this tutorial, you use a model builder function to define the image classification model. The model builder function returns a compiled model and uses hyperparameters you define inline to hypertune the model.
# + id="ZQKodC-jtsva"
def model_builder(hp):
model = keras.Sequential()
  model.add(keras.layers.Flatten(input_shape=(28, 28)))  # input layer: only reshapes the image
hp_units_1 = hp.Int('units_1', min_value=100, max_value=300, step=1)
hp_activation_1 = hp.Choice('activation_1', values=['relu', 'sigmoid', 'tanh'])
model.add(keras.layers.Dense(units=hp_units_1, activation=hp_activation_1))
hp_units_2 = hp.Int('units_2', min_value=150, max_value=450, step=2)
hp_activation_2 = hp.Choice('activation_2', values=['relu', 'sigmoid', 'tanh'])
model.add(keras.layers.Dense(units=hp_units_2, activation=hp_activation_2))
  model.add(keras.layers.Dense(10, activation='softmax'))  # output layer
  model.compile(optimizer='adam',
                # the output layer applies softmax, so the loss receives probabilities, not logits
                loss=keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                metrics=['accuracy'])
return model
# + [markdown] id="0J1VYw4q3x0b"
# ## Instantiate the tuner and perform hypertuning
#
# Instantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available - `RandomSearch`, `Hyperband`, `BayesianOptimization`, and `Sklearn`. In this tutorial, you use the [Hyperband](https://arxiv.org/pdf/1603.06560.pdf) tuner.
#
# To instantiate the Hyperband tuner, you must specify the hypermodel, the `objective` to optimize and the maximum number of epochs to train (`max_epochs`).
# + id="oichQFly6Y46"
tuner = kt.Hyperband(model_builder,
objective='val_accuracy',
max_epochs=10,
factor=3,
directory='my_dir',
project_name='intro_to_kt')
# + [markdown] id="VaIhhdKf9VtI"
# The Hyperband tuning algorithm uses adaptive resource allocation and early-stopping to quickly converge on a high-performing model. This is done using a sports championship style bracket. The algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of models to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log<sub>`factor`</sub>(`max_epochs`) and rounding it up to the nearest integer.
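# As a quick sanity check of that bracket count, plain arithmetic with the `max_epochs=10` and `factor=3` values passed to the tuner above gives:

```python
import math

# values passed to kt.Hyperband above
max_epochs, factor = 10, 3

# number of brackets: 1 + log_factor(max_epochs), rounded up to the nearest integer
num_brackets = math.ceil(1 + math.log(max_epochs) / math.log(factor))
print(num_brackets)  # → 4
```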
# + [markdown] id="cwhBdXx0Ekj8"
# Create a callback to stop training early after reaching a certain value for the validation loss.
# + id="WT9IkS9NEjLc"
stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)  # explained in the next lecture
# + [markdown] id="UKghEo15Tduy"
# Run the hyperparameter search. The arguments for the search method are the same as those used for `tf.keras.Model.fit`, in addition to the callback above.
# + id="dSBQcTHF9cKt" outputId="60881be8-9a2a-44a2-c7c8-30cdb50c0f70" colab={"base_uri": "https://localhost:8080/"}
tuner.search(img_train, label_train, epochs=25, validation_split=0.2, callbacks=[stop_early])
# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials=1)
print(f"best unit layer 1 = {best_hps[0].get('units_1')}")
print(f"best unit layer 2 = {best_hps[0].get('units_2')}")
print(f"best activation layer 1 = {best_hps[0].get('activation_1')}")
print(f"best activation layer 2 = {best_hps[0].get('activation_2')}")
# + [markdown] id="Lak_ylf88xBv"
# ## Train the model
#
# Find the optimal number of epochs to train the model with the hyperparameters obtained from the search.
# + id="McO82AXOuxXh" outputId="3db6345e-de67-402a-de73-d860684feb5b" colab={"base_uri": "https://localhost:8080/"}
# Build the model with the optimal hyperparameters and train it on the data for 50 epochs
model = tuner.hypermodel.build(best_hps[0])
history = model.fit(img_train, label_train, epochs=50, validation_split=0.2)
val_acc_per_epoch = history.history['val_accuracy']
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print('Best epoch: %d' % (best_epoch,))
# + id="Ehr6It8DlI_6" outputId="8bf1eb41-2709-4b85-c60b-d936142313f8" colab={"base_uri": "https://localhost:8080/", "height": 265}
import matplotlib.pyplot as plt
plt.plot(history.history['val_accuracy'], c='blue', label='validation accuracy')
plt.plot(history.history['accuracy'], c='red', label='training accuracy')
plt.legend()
plt.show()
# + [markdown] id="uOTSirSTI3Gp"
# Re-instantiate the hypermodel and train it with the optimal number of epochs from above.
# + id="NoiPUEHmMhCe" outputId="bc9a7c54-614a-4f69-8ce4-b1d877ca66fd" colab={"base_uri": "https://localhost:8080/"}
hypermodel = tuner.hypermodel.build(best_hps[0])
# Retrain the model
hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2)
# + [markdown] id="MqU5ZVAaag2v"
# To finish this tutorial, evaluate the hypermodel on the test data.
# + id="9E0BTp9Ealjb" outputId="a2173720-9dfa-4f41-c954-a01e8525aab7" colab={"base_uri": "https://localhost:8080/"}
eval_result = hypermodel.evaluate(img_test, label_test)
print("[test loss, test accuracy]:", eval_result)
# + [markdown] id="EQRpPHZsz-eC"
# The `my_dir/intro_to_kt` directory contains detailed logs and checkpoints for every trial (model configuration) run during the hyperparameter search. If you re-run the hyperparameter search, the Keras Tuner uses the existing state from these logs to resume the search. To disable this behavior, pass an additional `overwrite=True` argument while instantiating the tuner.
# + [markdown] id="sKwLOzKpFGAj"
# ## Summary
#
# In this tutorial, you learned how to use the Keras Tuner to tune hyperparameters for a model. To learn more about the Keras Tuner, check out these additional resources:
#
# * [Keras Tuner on the TensorFlow blog](https://blog.tensorflow.org/2020/01/hyperparameter-tuning-with-keras-tuner.html)
# * [Keras Tuner website](https://keras-team.github.io/keras-tuner/)
#
# Also check out the [HParams Dashboard](https://www.tensorflow.org/tensorboard/hyperparameter_tuning_with_hparams) in TensorBoard to interactively tune your model hyperparameters.
site/en/tutorials/keras/keras_tuner.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
shape = (2, 3, 4, 5)
A = np.arange(np.prod(shape)).reshape(shape)
A
i, j, k, l = 3, 1, 2, 0
np.transpose(A, [3, 0, 2, 1])[i, j, k, l]
A[:, :, :, i][j][:, k][l]
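# The two indexing expressions above can be checked against a direct read of the original array: with axis order [3, 0, 2, 1], element [i, j, k, l] of the transpose is element [j, l, k, i] of A (a small sanity check, not part of the original notebook):

```python
import numpy as np

shape = (2, 3, 4, 5)
A = np.arange(np.prod(shape)).reshape(shape)
i, j, k, l = 3, 1, 2, 0

# transposing with axis order [3, 0, 2, 1] means the new axes read
# (old axis 3, old axis 0, old axis 2, old axis 1), so indexing the
# transpose at [i, j, k, l] is the same as indexing A at [j, l, k, i]
assert np.transpose(A, [3, 0, 2, 1])[i, j, k, l] == A[j, l, k, i]
assert A[:, :, :, i][j][:, k][l] == A[j, l, k, i]
print(A[j, l, k, i])  # → 73
```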
notebooks/2018.11.01 NumPy Tranpose.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Building your Recurrent Neural Network - Step by Step
#
# Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.
#
# Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.
#
# **Notation**:
# - Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
# - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
#
# - Superscript $(i)$ denotes an object associated with the $i^{th}$ example.
# - Example: $x^{(i)}$ is the $i^{th}$ training example input.
#
# - Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.
# - Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ timestep of example $i$.
#
# - Subscript $i$ denotes the $i^{th}$ entry of a vector.
# - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.
#
# We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!
# Let's first import all the packages that you will need during this assignment.
import numpy as np
from rnn_utils import *
# ## 1 - Forward propagation for the basic Recurrent Neural Network
#
# Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.
# <img src="images/RNN.png" style="width:500px;height:300px;">
# <caption><center> **Figure 1**: Basic RNN model </center></caption>
# Here's how you can implement an RNN:
#
# **Steps**:
# 1. Implement the calculations needed for one time-step of the RNN.
# 2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time.
#
# Let's go!
#
# ## 1.1 - RNN cell
#
# A Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.
#
# <img src="images/rnn_step_forward.png" style="width:700px;height:300px;">
# <caption><center> **Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$ </center></caption>
#
# **Exercise**: Implement the RNN-cell described in Figure (2).
#
# **Instructions**:
# 1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
# 2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you a function: `softmax`.
# 3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache
# 4. Return $a^{\langle t \rangle}$ , $y^{\langle t \rangle}$ and cache
#
# We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x,m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a,m)$.
# +
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
    a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
    # compute output of the current cell using the formula given above
    yt_pred = softmax(np.dot(Wya, a_next) + by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
# +
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
# -
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **a_next[4]**:
# </td>
# <td>
# [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
# -0.18887155 0.99815551 0.6531151 0.82872037]
# </td>
# </tr>
# <tr>
# <td>
# **a_next.shape**:
# </td>
# <td>
# (5, 10)
# </td>
# </tr>
# <tr>
# <td>
# **yt[1]**:
# </td>
# <td>
# [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
# 0.36920224 0.9966312 0.9982559 0.17746526]
# </td>
# </tr>
# <tr>
# <td>
# **yt.shape**:
# </td>
# <td>
# (2, 10)
# </td>
# </tr>
#
# </table>
# ## 1.2 - RNN forward pass
#
# You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step.
#
#
# <img src="images/rnn.png" style="width:800px;height:300px;">
# <caption><center> **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. </center></caption>
#
#
#
# **Exercise**: Code the forward propagation of the RNN described in Figure (3).
#
# **Instructions**:
# 1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.
# 2. Initialize the "next" hidden state as $a_0$ (initial hidden state).
# 3. Start looping over each time step, your incremental index is $t$ :
# - Update the "next" hidden state and the cache by running `rnn_cell_forward`
# - Store the "next" hidden state in $a$ ($t^{th}$ position)
# - Store the prediction in y
# - Add the cache to the list of caches
# 4. Return $a$, $y$ and caches
# +
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and parameters["Wya"]
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y" with zeros (≈2 lines)
    a = np.zeros((n_a, m, T_x))
    y_pred = np.zeros((n_y, m, T_x))
    # Initialize a_next (≈1 line)
    a_next = a0
    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y_pred[:, :, t] = yt_pred
        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
# +
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
# -
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **a[4][1]**:
# </td>
# <td>
# [-0.99999375 0.77911235 -0.99861469 -0.99833267]
# </td>
# </tr>
# <tr>
# <td>
# **a.shape**:
# </td>
# <td>
# (5, 10, 4)
# </td>
# </tr>
# <tr>
# <td>
# **y[1][3]**:
# </td>
# <td>
# [ 0.79560373 0.86224861 0.11118257 0.81515947]
# </td>
# </tr>
# <tr>
# <td>
# **y.shape**:
# </td>
# <td>
# (2, 10, 4)
# </td>
# </tr>
# <tr>
# <td>
# **cache[1][1][3]**:
# </td>
# <td>
# [-1.1425182 -0.34934272 -0.20889423 0.58662319]
# </td>
# </tr>
# <tr>
# <td>
# **len(cache)**:
# </td>
# <td>
# 2
# </td>
# </tr>
#
# </table>
# Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$).
#
# In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps.
# ## 2 - Long Short-Term Memory (LSTM) network
#
# This following figure shows the operations of an LSTM-cell.
#
# <img src="images/LSTM.png" style="width:500px;height:400px;">
# <caption><center> **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. </center></caption>
#
# Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps.
#
# ### About the gates
#
# #### - Forget gate
#
# For the sake of this illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this:
#
# $$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1} $$
#
# Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information.
#
# #### - Update gate
#
# Once we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate:
#
# $$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2} $$
#
# Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$.
#
# #### - Updating the cell
#
# To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is:
#
# $$ \tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3} $$
#
# Finally, the new cell state is:
#
# $$ c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle}* c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} *\tilde{c}^{\langle t \rangle} \tag{4} $$
#
#
# #### - Output gate
#
# To decide which outputs we will use, we will use the following two formulas:
#
# $$ \Gamma_o^{\langle t \rangle}= \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$
# $$ a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle}* \tanh(c^{\langle t \rangle})\tag{6} $$
#
# Where in equation 5 you decide what to output using a sigmoid function, and in equation 6 you multiply that by the $\tanh$ of the new cell state.
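# Equations (1)-(4) can be sketched directly in numpy before wiring them into the graded cell below (the `sigmoid` helper and the random shapes here are illustrative, not part of the assignment's `rnn_utils`):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
n_a, n_x, m = 5, 3, 10  # hidden size, input size, batch size

a_prev = rng.standard_normal((n_a, m))
xt = rng.standard_normal((n_x, m))
c_prev = rng.standard_normal((n_a, m))
Wf, bf = rng.standard_normal((n_a, n_a + n_x)), rng.standard_normal((n_a, 1))
Wu, bu = rng.standard_normal((n_a, n_a + n_x)), rng.standard_normal((n_a, 1))
Wc, bc = rng.standard_normal((n_a, n_a + n_x)), rng.standard_normal((n_a, 1))

concat = np.vstack([a_prev, xt])               # [a_prev; xt]
gamma_f = sigmoid(Wf @ concat + bf)            # forget gate, equation (1)
gamma_u = sigmoid(Wu @ concat + bu)            # update gate, equation (2)
c_tilde = np.tanh(Wc @ concat + bc)            # candidate value, equation (3)
c_next = gamma_f * c_prev + gamma_u * c_tilde  # new cell state, equation (4)

assert c_next.shape == (n_a, m)
assert ((gamma_f > 0) & (gamma_f < 1)).all()   # gate values lie in (0, 1)
```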
# ### 2.1 - LSTM cell
#
# **Exercise**: Implement the LSTM cell described in Figure (4).
#
# **Instructions**:
# 1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$
# 2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.
# 3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
# +
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the memory value
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"]
bf = parameters["bf"]
Wi = parameters["Wi"]
bi = parameters["bi"]
Wc = parameters["Wc"]
bc = parameters["bc"]
Wo = parameters["Wo"]
bo = parameters["bo"]
Wy = parameters["Wy"]
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈3 lines)
    concat = np.zeros((n_a + n_x, m))
    concat[: n_a, :] = a_prev
    concat[n_a:, :] = xt
    # Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
    ft = sigmoid(np.dot(Wf, concat) + bf)     # forget gate
    it = sigmoid(np.dot(Wi, concat) + bi)     # update gate
    cct = np.tanh(np.dot(Wc, concat) + bc)    # candidate value
    c_next = ft * c_prev + it * cct           # new cell state
    ot = sigmoid(np.dot(Wo, concat) + bo)     # output gate
    a_next = ot * np.tanh(c_next)             # new hidden state
    # Compute prediction of the LSTM cell (≈1 line)
    yt_pred = softmax(np.dot(Wy, a_next) + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
# +
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
# -
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **a_next[4]**:
# </td>
# <td>
# [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
# 0.76566531 0.34631421 -0.00215674 0.43827275]
# </td>
# </tr>
# <tr>
# <td>
# **a_next.shape**:
# </td>
# <td>
# (5, 10)
# </td>
# </tr>
# <tr>
# <td>
# **c_next[2]**:
# </td>
# <td>
# [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
# 0.76449811 -0.0981561 -0.74348425 -0.26810932]
# </td>
# </tr>
# <tr>
# <td>
# **c_next.shape**:
# </td>
# <td>
# (5, 10)
# </td>
# </tr>
# <tr>
# <td>
# **yt[1]**:
# </td>
# <td>
# [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
# 0.00943007 0.12666353 0.39380172 0.07828381]
# </td>
# </tr>
# <tr>
# <td>
# **yt.shape**:
# </td>
# <td>
# (2, 10)
# </td>
# </tr>
# <tr>
# <td>
# **cache[1][3]**:
# </td>
# <td>
# [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
# 0.07651101 -1.03752894 1.41219977 -0.37647422]
# </td>
# </tr>
# <tr>
# <td>
# **len(cache)**:
# </td>
# <td>
# 10
# </td>
# </tr>
#
# </table>
# ### 2.2 - Forward pass for LSTM
#
# Now that you have implemented one step of an LSTM, you can iterate it with a for-loop to process a sequence of $T_x$ inputs.
#
# <img src="images/LSTM_rnn.png" style="width:500px;height:300px;">
# <caption><center> **Figure 5**: LSTM over multiple time-steps. </center></caption>
#
# **Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps.
#
# **Note**: $c^{\langle 0 \rangle}$ is initialized with zeros.
# +
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wy"].shape
    # initialize "a", "c" and "y" with zeros (≈3 lines)
    a = np.zeros((n_a, m, T_x))
    c = np.zeros((n_a, m, T_x))
    y = np.zeros((n_y, m, T_x))
    # Initialize a_next and c_next (≈2 lines)
    a_next = a0
    c_next = np.zeros((n_a, m))
    # loop over all time-steps
    for t in range(T_x):
        # Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y (≈1 line)
        y[:, :, t] = yt
        # Save the value of the next cell state (≈1 line)
        c[:, :, t] = c_next
        # Append the cache into caches (≈1 line)
        caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
# +
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1][1] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
# -
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **a[4][3][6]** =
# </td>
# <td>
# 0.172117767533
# </td>
# </tr>
# <tr>
# <td>
# **a.shape** =
# </td>
# <td>
# (5, 10, 7)
# </td>
# </tr>
# <tr>
# <td>
# **y[1][4][3]** =
# </td>
# <td>
# 0.95087346185
# </td>
# </tr>
# <tr>
# <td>
# **y.shape** =
# </td>
# <td>
# (2, 10, 7)
# </td>
# </tr>
# <tr>
# <td>
# **caches[1][1][1]** =
# </td>
# <td>
# [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
# 0.41005165]
# </td>
#
# </tr>
# <tr>
# <td>
# **c[1][2][1]** =
# </td>
# <td>
# -0.855544916718
# </td>
# </tr>
#
# </tr>
# <tr>
# <td>
# **len(caches)** =
# </td>
# <td>
# 2
# </td>
# </tr>
#
# </table>
# Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance.
#
# The rest of this notebook is optional, and will not be graded.
# ## 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)
#
# In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook.
#
# When you implemented a simple (fully connected) neural network in an earlier course, you used backpropagation to compute the derivatives with respect to the cost in order to update the parameters. Similarly, in recurrent neural networks you need to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below.
# ### 3.1 - Basic RNN backward pass
#
# We will start by computing the backward pass for the basic RNN-cell.
#
# <img src="images/rnn_cell_backprop.png" style="width:500px;height:300px;"> <br>
# <caption><center> **Figure 6**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain-rule from calculus. The chain-rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. </center></caption>
# #### Deriving the one step backward functions:
#
# To compute the `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand.
#
# The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that: $ \text{sech}(x)^2 = 1 - \tanh(x)^2$
#
# Similarly, for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, note that $d(\tanh(u)) = (1-\tanh(u)^2)\,du$.
#
# The final two equations follow the same rule and are derived using the $\tanh$ derivative. Note that the terms are arranged so that the matrix dimensions match.
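# As a quick sanity check (not part of the graded exercise), the claimed derivative $1-\tanh(x)^2$ can be verified numerically with a centered finite difference:

```python
import numpy as np

# Analytic derivative of tanh
def dtanh(x):
    return 1 - np.tanh(x) ** 2

# Centered finite-difference estimate at a few points
x = np.array([-1.5, 0.0, 0.7])
eps = 1e-6
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
assert np.allclose(numeric, dtanh(x), atol=1e-8)
```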
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of tanh with respect to a_next (≈1 line)
dtanh = None
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = None
dWax = None
# compute the gradient with respect to Waa (≈2 lines)
da_prev = None
dWaa = None
# compute the gradient with respect to b (≈1 line)
dba = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
# +
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
# -
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **gradients["dxt"][1][2]** =
# </td>
# <td>
# -0.460564103059
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dxt"].shape** =
# </td>
# <td>
# (3, 10)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["da_prev"][2][3]** =
# </td>
# <td>
# 0.0842968653807
# </td>
# </tr>
# <tr>
# <td>
# **gradients["da_prev"].shape** =
# </td>
# <td>
# (5, 10)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWax"][3][1]** =
# </td>
# <td>
# 0.393081873922
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWax"].shape** =
# </td>
# <td>
# (5, 3)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWaa"][1][2]** =
# </td>
# <td>
# -0.28483955787
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWaa"].shape** =
# </td>
# <td>
# (5, 5)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dba"][4]** =
# </td>
# <td>
# [ 0.80517166]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dba"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# </table>
# #### Backward pass through the RNN
#
# Computing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.
#
# **Instructions**:
#
# Implement the `rnn_backward` function. Initialize the return variables with zeros first, then loop through all the time steps, calling `rnn_cell_backward` at each time-step and updating the other variables accordingly.
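# To make the accumulation pattern concrete, here is a toy scalar "RNN" (not the exercise solution): the gradient with respect to the shared weight is summed over a reversed loop through time, then checked against a finite difference:

```python
import numpy as np

# Toy recurrence: a(t) = w * a(t-1) + x(t); loss L = a(T).
w, a0 = 0.9, 1.0
x = np.array([0.5, -0.2, 0.3])
a = [a0]
for xt in x:                       # forward pass, caching activations
    a.append(w * a[-1] + xt)

dw, da_next = 0.0, 1.0             # dL/da(T) = 1
for t in reversed(range(len(x))):
    dw += da_next * a[t]           # accumulate local grad w.r.t. w at step t
    da_next = da_next * w          # backpropagate into a(t-1)

# Check against a centered finite-difference estimate
def loss(w_):
    at = a0
    for xt in x: at = w_ * at + xt
    return at
eps = 1e-6
assert abs(dw - (loss(w + eps) - loss(w - eps)) / (2 * eps)) < 1e-6
```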
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = None
(a1, a0, x1, parameters) = None
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈6 lines)
dx = None
dWax = None
dWaa = None
dba = None
da0 = None
da_prevt = None
# Loop through all the time steps
for t in reversed(range(None)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = None
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = None
dWax += None
dWaa += None
dba += None
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
# +
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
# -
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **gradients["dx"][1][2]** =
# </td>
# <td>
# [-2.07101689 -0.59255627 0.02466855 0.01483317]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dx"].shape** =
# </td>
# <td>
# (3, 10, 4)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["da0"][2][3]** =
# </td>
# <td>
# -0.314942375127
# </td>
# </tr>
# <tr>
# <td>
# **gradients["da0"].shape** =
# </td>
# <td>
# (5, 10)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWax"][3][1]** =
# </td>
# <td>
# 11.2641044965
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWax"].shape** =
# </td>
# <td>
# (5, 3)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWaa"][1][2]** =
# </td>
# <td>
# 2.30333312658
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWaa"].shape** =
# </td>
# <td>
# (5, 5)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dba"][4]** =
# </td>
# <td>
# [-0.74747722]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dba"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# </table>
# ## 3.2 - LSTM backward pass
# ### 3.2.1 One Step backward
#
# The LSTM backward pass is slightly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.)
#
# ### 3.2.2 gate derivatives
#
# $$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$
#
# $$d\tilde c^{\langle t \rangle} = dc_{next}*\Gamma_u^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * i_t * da_{next} * \tilde c^{\langle t \rangle} * (1-\tanh(\tilde c)^2) \tag{8}$$
#
# $$d\Gamma_u^{\langle t \rangle} = dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$
#
# $$d\Gamma_f^{\langle t \rangle} = dc_{next}*c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$
#
# ### 3.2.3 parameter derivatives
#
# $$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$
# $$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$
# $$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$
# $$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$
#
# To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal axis (axis=1) on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should use the `keepdims=True` option.
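# A minimal NumPy sketch of this reduction (shapes chosen to match the examples in this notebook):

```python
import numpy as np

dgamma = np.random.randn(5, 10)             # (n_a, m) gate gradient
db = np.sum(dgamma, axis=1, keepdims=True)  # sum over the batch axis
print(db.shape)  # (5, 1) -- broadcasts cleanly against (n_a, m) arrays
# Without keepdims=True the result would be (5,), which breaks broadcasting.
```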
#
# Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.
#
# $$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$
# Here, the weights in equation 15 are the parts of the weight matrices that multiply $a_{prev}$, i.e. the first $n_a$ columns ($W_f[:,:n_a]$ etc.).
#
# $$ dc_{prev} = dc_{next}\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$
# $$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c_t + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$
# where the weights in equation 17 are the parts that multiply $x_t$, i.e. the columns from $n_a$ to the end ($W_f[:,n_a:]$ etc.).
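# The column split can be sketched as follows (an illustrative check, not the exercise solution): because $W_f$ multiplies the stacked vector $[a_{prev}; x_t]$, its first $n_a$ columns act on $a_{prev}$ and the remaining $n_x$ columns act on $x_t$:

```python
import numpy as np

n_a, n_x, m = 5, 3, 4
Wf = np.random.randn(n_a, n_a + n_x)   # acts on the stacked [a_prev; x_t]
a_prev = np.random.randn(n_a, m)
xt = np.random.randn(n_x, m)

concat = np.vstack([a_prev, xt])       # shape (n_a + n_x, m)
# The full product decomposes into the two column blocks:
assert np.allclose(Wf @ concat, Wf[:, :n_a] @ a_prev + Wf[:, n_a:] @ xt)
```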
#
# **Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = None
n_a, m = None
# Compute gate-related derivatives; their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dot = None
dcct = None
dit = None
dft = None
# Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
da_prev = None
dc_prev = None
dxt = None
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
# +
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
# -
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **gradients["dxt"][1][2]** =
# </td>
# <td>
# 3.23055911511
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dxt"].shape** =
# </td>
# <td>
# (3, 10)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["da_prev"][2][3]** =
# </td>
# <td>
# -0.0639621419711
# </td>
# </tr>
# <tr>
# <td>
# **gradients["da_prev"].shape** =
# </td>
# <td>
# (5, 10)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dc_prev"][2][3]** =
# </td>
# <td>
# 0.797522038797
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dc_prev"].shape** =
# </td>
# <td>
# (5, 10)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWf"][3][1]** =
# </td>
# <td>
# -0.147954838164
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWf"].shape** =
# </td>
# <td>
# (5, 8)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWi"][1][2]** =
# </td>
# <td>
# 1.05749805523
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWi"].shape** =
# </td>
# <td>
# (5, 8)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWc"][3][1]** =
# </td>
# <td>
# 2.30456216369
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWc"].shape** =
# </td>
# <td>
# (5, 8)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWo"][1][2]** =
# </td>
# <td>
# 0.331311595289
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWo"].shape** =
# </td>
# <td>
# (5, 8)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbf"][4]** =
# </td>
# <td>
# [ 0.18864637]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbf"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbi"][4]** =
# </td>
# <td>
# [-0.40142491]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbi"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbc"][4]** =
# </td>
# <td>
# [ 0.25587763]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbc"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbo"][4]** =
# </td>
# <td>
# [ 0.13893342]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbo"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# </table>
# ### 3.3 Backward pass through the LSTM RNN
#
# This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimensions as your return variables, then iterate over all the time steps starting from the end, calling the one-step LSTM function you implemented at each iteration. You will accumulate the parameter gradients by summing their per-step contributions, and finally return a dictionary with the gradients.
#
# **Instructions**: Implement the `lstm_backward` function. Create a for-loop starting from $T_x$ and going backward. At each step call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not accumulated but stored.
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈12 lines)
dx = None
da0 = None
da_prevt = None
dc_prevt = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# loop back over the whole sequence
for t in reversed(range(None)):
# Compute all gradients using lstm_cell_backward
gradients = None
# Store or add the gradient to the parameters' previous step's gradient
dx[:,:,t] = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
# +
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
# -
# **Expected Output**:
#
# <table>
# <tr>
# <td>
# **gradients["dx"][1][2]** =
# </td>
# <td>
# [-0.00173313 0.08287442 -0.30545663 -0.43281115]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dx"].shape** =
# </td>
# <td>
# (3, 10, 4)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["da0"][2][3]** =
# </td>
# <td>
# -0.095911501954
# </td>
# </tr>
# <tr>
# <td>
# **gradients["da0"].shape** =
# </td>
# <td>
# (5, 10)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWf"][3][1]** =
# </td>
# <td>
# -0.0698198561274
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWf"].shape** =
# </td>
# <td>
# (5, 8)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWi"][1][2]** =
# </td>
# <td>
# 0.102371820249
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWi"].shape** =
# </td>
# <td>
# (5, 8)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWc"][3][1]** =
# </td>
# <td>
# -0.0624983794927
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWc"].shape** =
# </td>
# <td>
# (5, 8)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWo"][1][2]** =
# </td>
# <td>
# 0.0484389131444
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dWo"].shape** =
# </td>
# <td>
# (5, 8)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbf"][4]** =
# </td>
# <td>
# [-0.0565788]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbf"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbi"][4]** =
# </td>
# <td>
# [-0.06997391]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbi"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbc"][4]** =
# </td>
# <td>
# [-0.27441821]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbc"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbo"][4]** =
# </td>
# <td>
# [ 0.16532821]
# </td>
# </tr>
# <tr>
# <td>
# **gradients["dbo"].shape** =
# </td>
# <td>
# (5, 1)
# </td>
# </tr>
# </table>
# ### Congratulations !
#
# Congratulations on completing this assignment. You now understand how recurrent neural networks work!
#
# Let's go on to the next exercise, where you'll use an RNN to build a character-level language model.
#
# Source notebook: Sequence Models/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#export
from fastai2.basics import *
from fastai2.text.core import *
from fastai2.text.data import *
from fastai2.text.models.core import *
from fastai2.text.models.awdlstm import *
from fastai2.callback.rnn import *
from nbdev.showdoc import *
# +
#default_exp text.learner
# -
# # Text learner
#
# > Convenience functions to easily create a `Learner` for text applications
#export
def match_embeds(old_wgts, old_vocab, new_vocab):
"Convert the embedding in `wgts` to go with a new vocabulary."
bias, wgts = old_wgts.get('1.decoder.bias', None), old_wgts['0.encoder.weight']
wgts_m = wgts.mean(0)
new_wgts = wgts.new_zeros((len(new_vocab),wgts.size(1)))
if bias is not None:
bias_m = bias.mean(0)
new_bias = bias.new_zeros((len(new_vocab),))
old_o2i = old_vocab.o2i if hasattr(old_vocab, 'o2i') else {w:i for i,w in enumerate(old_vocab)}
for i,w in enumerate(new_vocab):
idx = old_o2i.get(w, -1)
new_wgts[i] = wgts[idx] if idx>=0 else wgts_m
if bias is not None: new_bias[i] = bias[idx] if idx>=0 else bias_m
old_wgts['0.encoder.weight'] = new_wgts
if '0.encoder_dp.emb.weight' in old_wgts: old_wgts['0.encoder_dp.emb.weight'] = new_wgts.clone()
old_wgts['1.decoder.weight'] = new_wgts.clone()
if bias is not None: old_wgts['1.decoder.bias'] = new_bias
return old_wgts
wgts = {'0.encoder.weight': torch.randn(5,3)}
new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
old,new = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']
test_eq(new[0], old[0])
test_eq(new[1], old[2])
test_eq(new[2], old.mean(0))
test_eq(new[3], old[1])
#With bias
wgts = {'0.encoder.weight': torch.randn(5,3), '1.decoder.bias': torch.randn(5)}
new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
old_w,new_w = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']
old_b,new_b = wgts['1.decoder.bias'], new_wgts['1.decoder.bias']
test_eq(new_w[0], old_w[0])
test_eq(new_w[1], old_w[2])
test_eq(new_w[2], old_w.mean(0))
test_eq(new_w[3], old_w[1])
test_eq(new_b[0], old_b[0])
test_eq(new_b[1], old_b[2])
test_eq(new_b[2], old_b.mean(0))
test_eq(new_b[3], old_b[1])
#export
@delegates(Learner.__init__)
class RNNLearner(Learner):
"Basic class for a `Learner` in NLP."
def __init__(self, dbunch, model, loss_func, alpha=2., beta=1., **kwargs):
super().__init__(dbunch, model, loss_func, **kwargs)
self.add_cb(RNNTrainer(alpha=alpha, beta=beta))
def save_encoder(self, file):
"Save the encoder to `self.path/self.model_dir/file`"
if rank_distrib(): return # don't save if slave proc
encoder = get_model(self.model)[0]
if hasattr(encoder, 'module'): encoder = encoder.module
torch.save(encoder.state_dict(), join_path_file(file,self.path/self.model_dir, ext='.pth'))
def load_encoder(self, file, device=None):
"Load the encoder `name` from the model directory."
encoder = get_model(self.model)[0]
if device is None: device = self.dbunch.device
if hasattr(encoder, 'module'): encoder = encoder.module
distrib_barrier()
encoder.load_state_dict(torch.load(join_path_file(file,self.path/self.model_dir, ext='.pth'), map_location=device))
self.freeze()
return self
#TODO: When access is easier, grab new_vocab from self.dbunch
def load_pretrained(self, wgts_fname, vocab_fname, new_vocab, strict=True):
"Load a pretrained model and adapt it to the data vocabulary."
old_vocab = Path(vocab_fname).load()
wgts = torch.load(wgts_fname, map_location = lambda storage,loc: storage)
if 'model' in wgts: wgts = wgts['model'] #Just in case the pretrained model was saved with an optimizer
wgts = match_embeds(wgts, old_vocab, new_vocab)
self.model.load_state_dict(wgts, strict=strict)
self.freeze()
return self
#export
from fastai2.text.models.core import _model_meta
#export
#TODO: When access is easier, grab vocab from dbunch
@delegates(Learner.__init__)
def language_model_learner(dbunch, arch, vocab, config=None, drop_mult=1., pretrained=True, pretrained_fnames=None, **kwargs):
"Create a `Learner` with a language model from `data` and `arch`."
model = get_language_model(arch, len(vocab), config=config, drop_mult=drop_mult)
meta = _model_meta[arch]
learn = RNNLearner(dbunch, model, loss_func=CrossEntropyLossFlat(), splitter=meta['split_lm'], **kwargs)
#TODO: add backward
#url = 'url_bwd' if data.backwards else 'url'
if pretrained or pretrained_fnames:
if pretrained_fnames is not None:
fnames = [learn.path/learn.model_dir/f'{fn}.{ext}' for fn,ext in zip(pretrained_fnames, ['pth', 'pkl'])]
else:
if 'url' not in meta:
warn("There are no pretrained weights for that architecture yet!")
return learn
model_path = untar_data(meta['url'] , c_key='model')
fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]
learn = learn.load_pretrained(*fnames, vocab)
return learn
#export
#TODO: When access is easier, grab vocab from dbunch
@delegates(Learner.__init__)
def text_classifier_learner(dbunch, arch, vocab, bptt=72, config=None, pretrained=True, drop_mult=1.,
lin_ftrs=None, ps=None, **kwargs):
"Create a `Learner` with a text classifier from `data` and `arch`."
model = get_text_classifier(arch, len(vocab), get_c(dbunch), bptt=bptt, config=config,
drop_mult=drop_mult, lin_ftrs=lin_ftrs, ps=ps)
meta = _model_meta[arch]
learn = RNNLearner(dbunch, model, loss_func=CrossEntropyLossFlat(), splitter=meta['split_clas'], **kwargs)
if pretrained:
if 'url' not in meta:
warn("There are no pretrained weights for that architecture yet!")
return learn
model_path = untar_data(meta['url'], c_key='model')
fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]
learn = learn.load_pretrained(*fnames, vocab, strict=False)
learn.freeze()
return learn
#export
@typedispatch
def show_results(x: LMTensorText, y, samples, outs, ctxs=None, max_n=10, **kwargs):
if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))
for i,l in enumerate(['input', 'target']):
ctxs = [b.show(ctx=c, label=l, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
ctxs = [b.show(ctx=c, label='pred', **kwargs) for b,c,_ in zip(outs.itemgot(0),ctxs,range(max_n))]
display_df(pd.DataFrame(ctxs))
return ctxs
#export
@typedispatch
def show_results(x: TensorText, y, samples, outs, ctxs=None, max_n=10, **kwargs):
if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))
ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs)
display_df(pd.DataFrame(ctxs))
return ctxs
#export
@typedispatch
def plot_top_losses(x: TensorText, y:TensorCategory, samples, outs, raws, losses, **kwargs):
rows = get_empty_df(len(samples))
for i,l in enumerate(['input', 'target']):
rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(samples.itemgot(i),rows)]
outs = L(o + (Float(r.max().item()), Float(l.item())) for o,r,l in zip(outs, raws, losses))
for i,l in enumerate(['predicted', 'probability', 'loss']):
rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(outs.itemgot(i),rows)]
display_df(pd.DataFrame(rows))
# ## Export -
#hide
from nbdev.export import notebook2script
notebook2script()
# nbs/37_text.learner.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
from pyspark import SparkContext
from pyspark.sql.types import *
from pyspark.sql import SparkSession
from pyspark import SQLContext
import seaborn as sns
import matplotlib.pyplot as plt
import pyspark.sql.functions as F
import numpy as np
from pylab import rcParams
import pandas as pd
import os
import glob
# Set the plotting style to ggplot
plt.style.use("ggplot")
# Create the Spark session
spark = SparkSession \
.builder \
.appName('Processo Seletivo - DataSprints') \
.getOrCreate()
# Create the Spark SQL context
sqlContext = SQLContext(spark)
# Importing the taxi vendor files:
vendor = spark.read.option("delimiter", ",").csv(
"spark-warehouse/datasets/data-vendor.csv", header=True
)
vendor.show()
# Register the vendor view for use in the analyses
vendor.createOrReplaceTempView("vendor");
# Load the table containing the payment-type mapping for the trips:
payment = spark.read.option("delimiter", ",").csv(
"spark-warehouse/datasets/data-payment.csv", header=True
)
payment.show()
# Register the payment view
payment.createOrReplaceTempView("payment");
# +
# Schema of the trips table to be created
fields = [
StructField("vendor_id", StringType(), True),
StructField("pickup_datetime", TimestampType(), True),
StructField("dropoff_datetime", TimestampType(), True),
StructField("passenger_count", IntegerType(), True),
StructField("trip_distance", FloatType(), True),
StructField("pickup_longitude", FloatType(), True),
StructField("pickup_latitude", FloatType(), True),
StructField("rate_code", IntegerType(), True),
StructField("store_and_fwd_flag", FloatType(), True),
StructField("dropoff_longitude", FloatType(), True),
StructField("dropoff_latitude", FloatType(), True),
StructField("payment_type", StringType(), True),
StructField("fare_amount", FloatType(), True),
StructField("surcharge", FloatType(), True),
StructField("tip_amount", FloatType(), True),
StructField("tolls_amount", FloatType(), True),
StructField("total_amount", FloatType(), True),
StructField("duration", IntegerType(), True),
StructField("month_travel", IntegerType(), True),
StructField("year_travel", IntegerType(), True),
StructField("weekday", StringType(), True)
]
schema = StructType(fields)
# -
path = 'spark-warehouse/datasets/trips-processed'
# Read the processed file saved to S3 - it was downloaded and added to the project because the Spark-S3 integration did not work
trips = spark.read.option("delimiter", ";").csv(
    glob.glob(os.path.join(path, '*.csv')), header=True, schema=schema
)
# Drop rows with null pickup/dropoff dates or a null fare amount
trips = trips.na.drop(subset=['pickup_datetime', 'dropoff_datetime', 'fare_amount'])
# Adding some derived columns that will be useful in the analyses:
# Date without hours, minutes, and seconds
trips = trips.withColumn('pickup_date', F.date_format(F.col('pickup_datetime'), 'yyyy-MM-dd'))
# Day-of-week number (date_format pattern 'u': 1 = Monday ... 7 = Sunday)
trips = trips.withColumn('weekday_day', F.date_format(F.col('pickup_datetime'), 'u'))
# Hour the trip started
trips = trips.withColumn('pickup_hour', F.hour(F.col('pickup_datetime')))
# Function that categorizes a trip by start hour into dawn, morning, evening (afternoon), or night
def categorize_period(hour):
if hour >= 0 and hour < 6:
return "dawn"
elif hour >= 6 and hour < 12:
return "morning"
elif hour >= 12 and hour < 18:
return "evening"
else:
return "night"
bucket_udf = F.udf(categorize_period, StringType() )
trips = trips.withColumn("trip_period", bucket_udf("pickup_hour"))
trips.show(5)
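# A quick standalone sanity check (not part of the original analysis) of the period boundaries used above: each bucket is a half-open 6-hour window on the pickup hour. The function is duplicated here in plain Python so the check runs without Spark.

```python
# Standalone copy of categorize_period, checked at each bucket boundary.
def categorize_period(hour):
    if 0 <= hour < 6:
        return "dawn"
    elif 6 <= hour < 12:
        return "morning"
    elif 12 <= hour < 18:
        return "evening"
    else:
        return "night"

# Hours 3, 9, 15, 21 fall in dawn, morning, evening, night respectively
periods = [categorize_period(h) for h in (3, 9, 15, 21)]
```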
# Register the trips view
trips.createOrReplaceTempView("trips");
# ## Analyses
# - What is the average distance traveled by trips with a maximum of 2 passengers:
result = sqlContext.sql('SELECT ROUND(AVG(trip_distance),2) AS distancia_media FROM trips WHERE passenger_count <= 2 AND passenger_count > 0')
result.show()
# - Which are the 3 biggest vendors based on the total amount of money raised:
result = sqlContext.sql("SELECT name AS empresa, ROUND(SUM(total_amount), 2) AS total_vendas FROM trips t INNER JOIN vendor v ON v.vendor_id = t.vendor_id WHERE passenger_count > 0 GROUP BY empresa ORDER BY total_vendas DESC LIMIT 3")
result.show()
# - Make a histogram of the monthly distribution over 4 years of rides paid with cash
cash_trips = sqlContext.sql("SELECT month_travel AS month FROM trips WHERE payment_type = 'CASH'")
cash_trips = cash_trips.toPandas()
# +
# %matplotlib inline
rcParams['figure.figsize'] = 10, 5
cash_trips.hist(bins=25)
plt.xlabel('Meses')
plt.ylabel('Quantidade de viagens')
plt.title("Distribuição da quantidade de viagens por mês no período de 2009-2012")
plt.xticks(np.arange(1,13), ('Jan', 'Fev', 'Mar', 'Abr', 'Mai','Jun','Jul','Ago','Set', 'Out', 'Nov', 'Dez'))
plt.show()
# -
# - Make a time series chart computing the number of tips each day for the last 3 months of 2012.
tips_2012 = sqlContext.sql("SELECT year_travel AS year, month_travel AS month, pickup_date as date, COUNT(*) AS trips_with_tip FROM trips WHERE tip_amount > 0.0 AND year_travel = 2012 AND month_travel IN(10,11,12) GROUP BY date, year, month ORDER BY date, year, month")
tips_2012 = tips_2012.toPandas()
tips_2012 = tips_2012.set_index('date')
# +
# %matplotlib inline
rcParams['figure.figsize'] = 10, 10
plt.xlabel('Data')
plt.ylabel('Qty. de viagens com gorjeta')
plt.title("Distribuição diária da quantidade de viagens com gorjeta no período de outubro a dezembro de 2012")
sns.set(rc={'figure.figsize':(15, 15)})
tips_2012['trips_with_tip'].plot(linewidth=0.5)
# -
# - What is the average trip time on Saturdays and Sundays;
average = sqlContext.sql("SELECT weekday AS dia_semana, AVG(duration) AS duracao_media_seg FROM trips WHERE weekday IN('Sat','Sun') GROUP BY weekday")
average = average.toPandas()
average['duracao_media_min'] = (average['duracao_media_seg']/60)
average.head()
# - Analyse the data to find and prove seasonality
# - Number of passengers per trip
passenger_trips = sqlContext.sql("SELECT qry.num_pass, COUNT(*) AS qtde FROM (SELECT passenger_count AS num_pass FROM trips WHERE passenger_count > 0 ) AS qry GROUP BY qry.num_pass ORDER BY qtde DESC ")
passenger_trips = passenger_trips.toPandas()
passenger_trips
# +
rcParams['figure.figsize'] = 8,8
passenger_trips[['qtde']].plot()
plt.xlabel('Qty. de viagens')
plt.ylabel('Qty. de passageiros')
plt.title("Qtde. de viagens x Qtde. de passageiros")
plt.show()
# -
# #### Number of trips per month - all 4 years
trips = sqlContext.sql("SELECT year_travel AS year, month_travel AS month, COUNT(*) AS qtde_viagens FROM trips GROUP BY year, month ORDER BY year, month ASC")
trips = trips.toPandas()
trips_month = trips.groupby('month').qtde_viagens.sum().reset_index()
# +
# %matplotlib inline
rcParams['figure.figsize'] = 10, 5
sns.set(style="whitegrid")
sns.barplot(x="month", y="qtde_viagens", data=trips_month)
plt.xticks(np.arange(0,12), ('Jan', 'Fev', 'Mar', 'Abr', 'Mai','Jun','Jul','Ago','Set', 'Out', 'Nov', 'Dez'))
plt.show()
# -
# ##### Number of trips per month - 2009
tm_2009 = trips[(trips['year'] == 2009)]
# +
# %matplotlib inline
rcParams['figure.figsize'] = 10, 5
sns.set(style="whitegrid")
sns.barplot(x="month", y="qtde_viagens", data=tm_2009)
plt.xticks(np.arange(0,12), ('Jan', 'Fev', 'Mar', 'Abr', 'Mai','Jun','Jul','Ago','Set', 'Out', 'Nov', 'Dez'))
plt.show()
# -
# ##### Number of trips per month - 2010
tm_2010 = trips[(trips['year'] == 2010)]
# +
# %matplotlib inline
rcParams['figure.figsize'] = 10, 5
sns.set(style="whitegrid")
sns.barplot(x="month", y="qtde_viagens", data=tm_2010)
plt.xticks(np.arange(0,12), ('Jan', 'Fev', 'Mar', 'Abr', 'Mai','Jun','Jul','Ago','Set', 'Out', 'Nov', 'Dez'))
plt.show()
# -
# ##### Number of trips per month - 2011
tm_2011 = trips[(trips['year'] == 2011)]
# +
# %matplotlib inline
rcParams['figure.figsize'] = 10, 5
sns.set(style="whitegrid")
sns.barplot(x="month", y="qtde_viagens", data=tm_2011)
plt.xticks(np.arange(0,12), ('Jan', 'Fev', 'Mar', 'Abr', 'Mai','Jun','Jul','Ago','Set', 'Out', 'Nov', 'Dez'))
plt.show()
# -
# ##### Number of trips per month - 2012
tm_2012 = trips[(trips['year'] == 2012)]
# +
# %matplotlib inline
rcParams['figure.figsize'] = 10, 5
sns.set(style="whitegrid")
sns.barplot(x="month", y="qtde_viagens", data=tm_2012)
plt.xticks(np.arange(0,12), ('Jan', 'Fev', 'Mar', 'Abr', 'Mai','Jun','Jul','Ago','Set', 'Out', 'Nov', 'Dez'))
plt.show()
# -
# - 7 - Create assumptions, validate against the data, and prove with storytelling and graphs
# Passenger travel profile
# - Trip time vs. number of passengers
hora_viagem = sqlContext.sql("SELECT COUNT(*) as qtde, passenger_count, trip_period FROM trips WHERE passenger_count >= 1 GROUP BY trip_period,passenger_count ORDER BY qtde DESC")
hora_viagem = hora_viagem.toPandas()
hora_viagem
hora_viagem[['qtde','trip_period']].plot()
# Do people use taxis less on holidays?
# A: In every year analyzed, there is a tendency toward lower taxi usage on holidays (the number of trips on those days is smaller)
# Holidays obtained from: http://www.public-holidays.us/US_PT_2009_New%20York
feriados_2009 = ['2009-01-01', '2009-01-19', '2009-02-12', '2009-02-16', '2009-05-25','2009-07-04','2009-09-07','2009-10-12', '2009-11-03', '2009-11-26','2009-12-25']
feriados_2010 = ['2010-01-01', '2010-01-18', '2010-02-12', '2010-02-15', '2010-05-31', '2010-07-04','2010-09-06','2010-10-11', '2010-11-02', '2010-11-25', '2010-12-25', '2010-12-31']
feriados_2011 = ['2011-01-01', '2011-01-19', '2011-02-12', '2011-02-16', '2011-05-25', '2011-07-04','2011-09-07','2011-10-12', '2011-11-03', '2011-11-26','2011-12-25']
feriados_2012 = ['2012-01-01', '2012-01-16', '2012-02-12', '2012-02-20', '2012-05-28', '2012-07-04','2012-09-03','2012-10-08', '2012-11-06', '2012-11-22','2012-12-25']
# - Year 2009
trips_2009 = sqlContext.sql("SELECT * FROM trips WHERE year_travel = 2009")
trips_2009 = trips_2009.toPandas()
trips_2009[(trips_2009['pickup_date'].isin(feriados_2009))].vendor_id.count()
trips_2009[(trips_2009['pickup_date'].isin(feriados_2009))].groupby('pickup_date')['vendor_id'].count()
mean_feriado = trips_2009[(trips_2009['pickup_date'].isin(feriados_2009))].groupby('pickup_date')['vendor_id'].count().mean()
mean_feriado
trips_2009[~(trips_2009['pickup_date'].isin(feriados_2009))].vendor_id.count()
trips_2009[~(trips_2009['pickup_date'].isin(feriados_2009))].groupby('pickup_date')['vendor_id'].count()
media_fora_feriados = trips_2009[~(trips_2009['pickup_date'].isin(feriados_2009))].groupby('pickup_date')['vendor_id'].count().mean()
media_fora_feriados
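# The comparison above boils down to: group daily trip counts by date, then compare the mean count on holidays vs. all other days. A minimal pure-Python sketch of that logic (the counts below are made-up placeholders, not real values from this dataset):

```python
# Hypothetical daily trip counts (placeholder values for illustration only)
daily_counts = {"2009-01-01": 320, "2009-01-02": 910,
                "2009-07-04": 280, "2009-07-05": 870}
holidays = {"2009-01-01", "2009-07-04"}

def mean(values):
    return sum(values) / len(values)

# Average trips per day, on holidays vs. the remaining days
holiday_mean = mean([c for d, c in daily_counts.items() if d in holidays])
other_mean = mean([c for d, c in daily_counts.items() if d not in holidays])
# With these placeholder counts: holiday_mean = 300.0, other_mean = 890.0
```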
# - Year 2010
trips_2010 = sqlContext.sql("SELECT * FROM trips WHERE year_travel = 2010")
trips_2010 = trips_2010.toPandas()
trips_2010[(trips_2010['pickup_date'].isin(feriados_2010))].vendor_id.count()
trips_2010[(trips_2010['pickup_date'].isin(feriados_2010))].groupby('pickup_date')['vendor_id'].count()
trips_2010[(trips_2010['pickup_date'].isin(feriados_2010))].groupby('pickup_date')['vendor_id'].count().mean()
trips_2010[~(trips_2010['pickup_date'].isin(feriados_2010))].groupby('pickup_date')['vendor_id'].count()
trips_2010[~(trips_2010['pickup_date'].isin(feriados_2010))].groupby('pickup_date')['vendor_id'].count().mean()
# - Year 2011
trips_2011 = sqlContext.sql("SELECT * FROM trips WHERE year_travel = 2011")
trips_2011.filter(trips_2011.pickup_date.isin(feriados_2011)).count()
df = trips_2011.filter(trips_2011.pickup_date.isin(feriados_2011)).groupby('pickup_date').count()
df.show()
trips_2011.filter(~trips_2011.pickup_date.isin(feriados_2011)).count()
df = trips_2011.filter(~trips_2011.pickup_date.isin(feriados_2011)).groupby('pickup_date').count()
df.show()
# - Year 2012
trips_2012 = sqlContext.sql("SELECT * FROM trips WHERE year_travel = 2012")
trips_2012.filter(trips_2012.pickup_date.isin(feriados_2012)).count()
df = trips_2012.filter(trips_2012.pickup_date.isin(feriados_2012)).groupby('pickup_date').count()
df.show()
trips_2012.filter(~trips_2012.pickup_date.isin(feriados_2012)).count()
df = trips_2012.filter(~trips_2012.pickup_date.isin(feriados_2012)).groupby('pickup_date').count()
df.show()
# Analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# -
# # Training Tabular Deep Learning Models with Keras on GPU
# Deep learning has revolutionized the fields of computer vision (CV) and natural language processing (NLP) in the last few years, providing a fast and general framework for solving a host of difficult problems with unprecedented accuracy. Part and parcel of this revolution has been the development of APIs like [Keras](https://www.tensorflow.org/api_docs/python/tf/keras) for NVIDIA GPUs, allowing practitioners to quickly iterate on new and interesting ideas and receive feedback on their efficacy in shorter and shorter intervals.
#
# One class of problem which has remained largely immune to this revolution, however, is the class involving tabular data. Part of this difficulty is that, unlike CV or NLP, where different datasets are underlain by similar phenomena and therefore can be solved with similar mechanisms, "tabular datasets" span a vast array of phenomena, semantic meanings, and problem statements, from product and video recommendation to particle discovery and loan default prediction. This diversity makes universally useful components difficult to find or even define, and is only exacerbated by the notorious lack of standard, industrial-scale benchmark datasets in the tabular space. As a result, deep learning models are frequently bested by their machine learning analogues on these important tasks, particularly on smaller scale datasets.
#
# Yet this diversity is also what makes tools like Keras all the more valuable. Architecture components can be quickly swapped in and out for different tasks like the implementation details they are, and new components can be built and tested with ease. Importantly, domain experts can interact with models at a high level and build *a priori* knowledge into model architectures, without having to spend their time becoming Python programming wizards.
#
# However, most out-of-the-box APIs suffer from a lack of acceleration that reduces the rate at which new components can be tested and makes production deployment of deep learning systems cost-prohibitive. In this example, we will walk through some recent advancements made by NVIDIA's [NVTabular](https://github.com/nvidia/nvtabular) data loading library that can alleviate existing bottlenecks and bring to bear the full power of GPU acceleration.
#
# #### What to Keep an Eye Out For
# The point of this walkthrough will be to show how common components of existing TensorFlow tabular-learning pipelines can be drop-in replaced by NVTabular components for cheap-as-free acceleration with minimal overhead. To do this, we'll start by examining a pipeline for fitting the [DLRM](https://arxiv.org/abs/1906.00091) architecture on the [Criteo Terabyte Dataset](https://labs.criteo.com/2013/12/download-terabyte-click-logs/) using Keras/TensorFlow's native tools on both CPU and GPU, and discuss why the acceleration we observe on GPU is not particularly impressive. Then we'll examine what an identical pipeline would look like using NVTabular and why it overcomes those bottlenecks.
#
# Since the Criteo Terabyte Dataset is large, and you and I both have better things to do than sit around for hours waiting to train a model we have no intention of ever using, I'll restrict the training to 1000 steps in order to illustrate the similarities in convergence and the expected acceleration. Of course, there may well exist alternative choices of architectures and hyperparameters that will lead to better or faster convergence, but I trust that you, clever data scientist that you are, are more than capable of finding these yourself should you wish. I intend only to demonstrate how NVTabular can help you achieve that convergence more quickly, in the hopes that you will find it easy to apply the same methods to the dataset that really matters: your own.
#
# I will assume at least some familiarity with the relevant tabular deep learning methods (in particular what I mean by "tabular data" and how it is distinct from, say, image data; continuous vs. categorical variables; learned categorical embeddings; and online vs. offline preprocessing) and a passing familiarity with TensorFlow and Keras. If you are green or rusty on any of these points, it won't make this discussion illegible, but I'll put links in the relevant places just in case.
#
# The structure will be building, step-by-step, the necessary functions that a dataset-agnostic pipeline might need in order to train a model in Keras. In each function, we'll include an `accelerated` kwarg that will be used to show the difference between what such a function might look like in native TensorFlow vs. using NVTabular. Let's start here by doing our imports and defining some hyperparameters for training (which won't change from one implementation to the next).
# +
import os
from itertools import filterfalse
import re
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision
# this is a good habit to get in now: TensorFlow's default behavior
# is to claim all of the GPU memory that it can for itself. This
# is a problem when it needs to run alongside another GPU library
# like NVTabular. To get around this, NVTabular will configure
# TensorFlow to use this fraction of available GPU memory up front.
# Make sure, however, that you do this before you do anything
# with TensorFlow: as soon as it's initialized, that memory is gone
# for good
os.environ["TF_MEMORY_ALLOCATION"] = "0.5"
import nvtabular as nvt
from nvtabular.loader.tensorflow import KerasSequenceLoader
from nvtabular.framework_utils.tensorflow import layers, make_feature_column_workflow
# import custom callback for monitoring throughput
from callbacks import ThroughputLogger
# +
DATA_DIR = os.environ.get("DATA_DIR", "/data")
TFRECORD_DIR = os.environ.get("TFRECORD_DIR", "/tfrecords")
LOG_DIR = os.environ.get("LOG_DIR", "logs/")
TFRECORDS = os.path.join(TFRECORD_DIR, "train", "*.tfrecords")
PARQUETS = os.path.join(DATA_DIR, "train", "*.parquet")
# TODO: reimplement the preproc from criteo-example here
# Alternatively, make criteo its own folder, and split preproc
# and training into separate notebooks, then execute the
# preproc notebook from here?
NUMERIC_FEATURE_NAMES = [f"I{i}" for i in range(1, 14)]
CATEGORICAL_FEATURE_NAMES = [f"C{i}" for i in range(1, 27)]
CATEGORY_COUNTS = [
7599500, 33521, 17022, 7339, 20046, 3, 7068, 1377, 63, 5345303,
561810, 242827, 11, 2209, 10616, 100, 4, 968, 14, 7838519,
2580502, 6878028, 298771, 11951, 97, 35
]
LABEL_NAME = "label"
# optimization params
BATCH_SIZE = 65536
STEPS = 1000
LEARNING_RATE = 0.001
# architecture params
EMBEDDING_DIM = 8
TOP_MLP_HIDDEN_DIMS = [1024, 512, 256]
BOTTOM_MLP_HIDDEN_DIMS = [1024, 1024, 512, 256]
# I'll get sloppy with warnings because just like
# <NAME> sometimes you gotta live on the edge
tf.get_logger().setLevel('ERROR')
# -
# ## What Does Your Data Look Like?
# As we discussed before, "tabular data" is an umbrella term referring to data collected from a vast array of problems and phenomena. Perhaps Bob's dataset has 192 features, 54 of which are continuous variables recorded as 32 bit floating point numbers, and the remainder of which are categorical variables which he has encoded as strings. Alice, on the other hand, may have a dataset consisting of 3271 features, most of which are continuous, but a handful of which are integer IDs which can take on one of millions of possible values. We can't expect the same model to be able to handle this kind of variety unless we give it some description of what sorts of inputs to expect.
#
# Moreover, the format in which the data gets read from disk will rarely be the one the model finds useful. Bob's string categories will be of no use to a neural network which lives in the world of continuous functions of real numbers; they will need to be converted to integer lookup table indices before being ingested. For certain types of these **transformations**, Bob may want to do this conversion once, up front, before training begins, and then be done with it. However, this may not always be possible. Bob may wish to hyperparameter search over the parameters of such a transformation (if, for instance, he is using a hash function to map to indices and wants to play with the number of buckets to use). Or perhaps he wants to retain the pre-transformed values, but finds the cost of storing an entire second dataset of the transformed values prohibitive. In this case, he'll need to perform the transformations *online*, between when the data is read from disk and when it gets fed to the network.
#
# Finally, in the case of categorical variables, these lookup indices will need to, well, *look up* an embedding vector that finally puts us in the continuous space our network prefers. Therefore, we also need to define how large of an embedding vector we want to use for a given feature.
#
# TensorFlow provides a convenient module to record this information about the names of features to expect, their type (categorical or numeric), their data type, common transformations to perform on them, and the size of embedding table to use in the case of categorical variables: the [`feature_column` module](https://www.tensorflow.org/tutorials/structured_data/feature_columns). (Note: as of [TensorFlow 2.3](https://github.com/tensorflow/tensorflow/releases/tag/v2.3.0-rc0) these are being deprecated and replaced with Keras layers with similar functionality. Most of the arguments made here will still apply, the code will just look a bit different.) These objects provide both stateless representations of feature information, as well as the code that performs the transformations and embeddings at train time.
#
# While `feature_column`s are a handy and robust representation format, their transformation and embedding implementations are poorly suited for GPUs. We'll see how this looks in terms of TensorFlow profile traces later, but the upshot comes down to two basic points:
# - Many of the transformations involve ops that either don't have a GPU kernel, or have one which is unoptimized. The involvement of ops without GPU kernels means that you're spending a lot of your train step moving data around to the device which can run the current op. Many of the ops that *do* have a GPU kernel are small and don't involve much math, which drowns the math-hungry parallel computing model of GPUs in kernel launch overhead.
# - The embeddings use sparse tensor machinery that is unoptimized on GPUs and is unnecessary for one-hot categoricals, the only type we'll focus on here. This is a good time to mention that the techniques we'll cover today *do not generalize to multi-hot categorical data*, which isn't currently supported by NVTabular. However, there is active work to support this being done and we hope to have it seamlessly integrated in the near future.
#
# As we'll see later, one difficulty in addressing the second issue is that the same Keras layer which performs the embeddings *also* performs the transformations, so even if you know that all your categoricals are one-hot and want to build an accelerated embedding layer that leverages this information, you would be out of luck without a layer that can also perform whatever transformations you might need. One way to get around this is to move your transformations to NVTabular, which will do them all on the GPU at data-loading time, so that all Keras needs to handle is the embedding, using a layer like `tf.keras.layers.DenseFeatures`, or, even more accelerated, NVTabular's equivalent `layers.DenseFeatures` layer.
#
# The good news is, as of NVTabular 0.2, you don't need to change the feature columns you use to represent your inputs and preprocessing in order to enjoy GPU acceleration. The `make_feature_column_workflow` utility will take care of creating an NVTabular `Workflow` object which will perform all of the requisite preprocessing on the GPU, then pass the preprocessed columns to TensorFlow tensors.
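# Conceptually, hash bucketing just maps an arbitrary category value deterministically to one of `num_buckets` integer indices. A rough illustration in plain Python (TensorFlow's `categorical_column_with_hash_bucket` uses its own fingerprint hash internally, so the indices below will not match TF's — this only shows the idea):

```python
import zlib

def hash_bucket(value: str, num_buckets: int) -> int:
    # crc32 is deterministic across processes, unlike Python's built-in
    # hash() when PYTHONHASHSEED varies
    return zlib.crc32(value.encode("utf-8")) % num_buckets

# The same input always lands in the same bucket; distinct inputs may collide.
idx = hash_bucket("user_12345", 1000)
```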
def get_feature_columns():
columns = [tf.feature_column.numeric_column(name, (1,)) for name in NUMERIC_FEATURE_NAMES]
for feature_name, count in zip(CATEGORICAL_FEATURE_NAMES, CATEGORY_COUNTS):
categorical_column = tf.feature_column.categorical_column_with_hash_bucket(
feature_name, int(0.75*count), dtype=tf.int64)
embedding_column = tf.feature_column.embedding_column(categorical_column, EMBEDDING_DIM)
columns.append(embedding_column)
return columns
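# With the hash-bucket sizing above (0.75x each cardinality) and 8-dimensional embeddings, the embedding tables alone account for the bulk of the model's parameters. A quick back-of-the-envelope check, repeating the constants so it stands alone:

```python
# Embedding parameter count implied by get_feature_columns above.
CATEGORY_COUNTS = [
    7599500, 33521, 17022, 7339, 20046, 3, 7068, 1377, 63, 5345303,
    561810, 242827, 11, 2209, 10616, 100, 4, 968, 14, 7838519,
    2580502, 6878028, 298771, 11951, 97, 35
]
EMBEDDING_DIM = 8
total_buckets = sum(int(0.75 * c) for c in CATEGORY_COUNTS)
embedding_params = total_buckets * EMBEDDING_DIM  # well over 10^8 parameters
```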
# ## A Data By Any Other Format: TFRecords and Tabular Representation
# By running the Criteo preprocessing example above, we generated a dataset in the parquet data format. Why Parquet? Well, besides the fact that NVTabular can read parquet files exceptionally quickly, parquet is a widely used tabular data format that can be read by libraries like Pandas or CuDF to quickly search, filter, and manipulate data using high level abstractions.
import cudf, glob
filename = glob.glob(os.path.join(DATA_DIR, 'train', '*.parquet'))[0]
df = cudf.read_parquet(filename, num_rows=1000000)
df
# do some filtering or whatever
df[df['C18'] == 228]
# This is great news for data scientists: formats like parquet are the bread and butter of any sort of data exploration. You almost certainly want to keep at least *one* version of your dataset in a format like this. If your dataset is large enough, and storage gets expensive, it's probably the *only* format you want to keep your dataset in.
#
# Unfortunately, TensorFlow does not have fast native readers for formats like this that can read larger-than-memory datasets in an online fashion. TensorFlow's preferred, and fastest, data format is the [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord), a binary format which associates all field names and their values with every example in your dataset. For tabular data, where small float or int features have a smaller memory footprint than string field names, the memory footprint of such a representation can get really big, really fast.
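# To make the footprint point concrete: a TFRecord-style encoding repeats all 39 field names in every serialized example, which is pure per-example overhead on top of the protobuf framing and the values themselves. A rough count for the Criteo feature names (a sketch, not an exact TFRecord size calculation):

```python
# Per-example overhead from repeating field names in a name/value encoding.
numeric_names = [f"I{i}" for i in range(1, 14)]      # 13 continuous features
categorical_names = [f"C{i}" for i in range(1, 27)]  # 26 categorical features
name_bytes = sum(len(n) for n in numeric_names + categorical_names)
# 99 bytes of field-name text per example, before any values or framing -
# comparable to the 4-8 bytes each float/int value actually needs
```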
#
# More importantly, TFRecords require reading and parsing in batches using user-provided data schema descriptions. This makes doing the sorts of manipulations described above difficult, if not near impossible, and requires an enormous amount of work to change the values corresponding to a single field in your dataset. For this reason, you almost never want to use TFRecords as the *only* means of representing your data, which means you have to generate and store an entire copy of your dataset every time it needs to update. This can take an enormous amount of time and resources that prolong the time from the conception of a feature to testing it in a model.
#
# The main advantage of TFRecords is the speed with which TensorFlow can read them (and its APIs for doing this online), and their support for multi-hot categorical features. While NVTabular is still working on addressing the latter, we'll show below that reading parquet files in batch using NVTabular is substantially faster than the existing TFRecord readers. In order to do this, we'll need to generate a TFRecord version of the parquet dataset we generated before. I'm going to restrict this to generating just the 1000 steps we'll need to do our training demo, but if you have a few days and a couple terabytes of storage lying around feel free to run the whole thing.
#
# Don't worry too much about the code below: it's a bit dense (and frankly still isn't fully robust to string features) and doesn't have much to do with what follows. I'm sure there are ways to make it cleaner/faster/etc., but, if anything, it should make clear how nontrivial the process of building and writing TFRecords is. I'm also going to keep it commented out for now since the disk space required is so high, and the casual user clicking through cells might accidentally exhaust their allotment. If you feel like running the comparisons below to keep me honest, uncomment this cell and run it first.
#
# The last thing I'll note is that the astute and experienced TensorFlow user will at this point object that there exist ways to make reading TFRecords for tabular data faster than what I'm about to present. Among these are pre-batching examples (which, I would point out, more or less enforces a fixed valency for all categorical features) and combining all fixed valency categorical and continuous features into vectorized fields in records which can all be parsed at once. And while it's true that methods like this will accelerate TFRecord reading, they still fail to overtake NVTabular's parquet reader. Perhaps more importantly (at least from my workflow-centric view), they only compound the problems I've outlined so far of the difficulty of doing data analysis with TFRecords, and would almost certainly require the code below to be even more brittle and complicated. And this is actually a point worth emphasizing: with NVTabular data loading, you're getting better performance *and* less programming overhead, the holy grail of GPU-based DL software.
# +
# import multiprocessing as mp
# from glob import glob
# from itertools import repeat
# from tqdm.notebook import trange
# def pool_initializer(num_cols, cat_cols):
# global numeric_columns
# global categorical_columns
# numeric_columns = num_cols
# categorical_columns = cat_cols
# def build_and_serialize_example(data):
# numeric_values, categorical_values = data
# feature = {}
# if numeric_values is not None:
# feature.update({
# col: tf.train.Feature(float_list=tf.train.FloatList(value=[float(val)]))
# for col, val in zip(numeric_columns, numeric_values)
# })
# if categorical_values is not None:
# feature.update({
# col: tf.train.Feature(int64_list=tf.train.Int64List(value=[int(val)]))
# for col, val in zip(categorical_columns, categorical_values)
# })
# return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()
# def get_writer(write_dir, file_idx):
# filename = str(file_idx).zfill(5) + '.tfrecords'
# return tf.io.TFRecordWriter(os.path.join(write_dir, filename))
# _EXAMPLES_PER_RECORD = 20000000
# write_dir = os.path.dirname(TFRECORDS)
# if not os.path.exists(write_dir):
# os.makedirs(write_dir)
# file_idx, example_idx = 0, 0
# writer = get_writer(write_dir, file_idx)
# do_break = False
# column_names = [NUMERIC_FEATURE_NAMES, CATEGORICAL_FEATURE_NAMES+[LABEL_NAME]]
# with mp.Pool(8, pool_initializer, column_names) as pool:
# fnames = glob(PARQUETS)
# dataset = nvt.Dataset(fnames)
# pbar = trange(BATCH_SIZE*STEPS)
# for df in dataset.to_iter():
# data = []
# for col_names in column_names:
# if len(col_names) == 0:
# data.append(repeat(None))
# else:
# data.append(df[col_names].to_pandas().values)
# data = zip(*data)
# record_map = pool.imap(build_and_serialize_example, data, chunksize=200)
# for record in record_map:
# writer.write(record)
# example_idx += 1
# if example_idx == _EXAMPLES_PER_RECORD:
# writer.close()
# file_idx += 1
# writer = get_writer(write_dir, file_idx)
# example_idx = 0
# pbar.update(1)
# if pbar.n == BATCH_SIZE*STEPS:
# do_break = True
# break
# if do_break:
# del df
# break
# -
# Ok, now that we have our data set up the way that we need it, we're ready to get training! TensorFlow provides a handy utility for building an online dataloader that we'll use to parse the TFRecords. Meanwhile, on the NVTabular side, we'll use the `KerasSequenceLoader` for reading chunks of parquet files. We'll also use `make_feature_column_workflow` to build an NVTabular `Workflow` that handles hash bucketing online on the GPU. It will also return a simplified set of feature columns that _don't_ include the preprocessing steps.
#
# Take a look below to see the similarities in the API. What's great about using NVTabular `Workflow`s for online preprocessing is that it makes doing arbitrary preprocessing reasonably simple by using `DFlambda` ops, and the `Op` class API allows for extension to more complicated, stat-driven preprocessing as well.
#
# One potentially important difference between these dataset classes is the way in which shuffling is handled. The TensorFlow data loader maintains a buffer of size `shuffle_buffer_size` from which batch elements are randomly selected, with the buffer then sequentially replenished by the next `batch_size` elements in the TFRecord. Large shuffle buffers, while allowing for better epoch-to-epoch randomness and hence generalization, can be hard to maintain given the slow read times. The limitation this enforces on your buffer size isn't as big a deal for datasets which are uniformly shuffled in the TFRecord and only require one or two epochs to converge, but many datasets are ordered by some feature (whether it's time or some categorical groupby), and in this case the windowed shuffle buffer can lead to biased sampling and hence poorer quality gradients.
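# To see that windowed-shuffle bias concretely, here's a tiny pure-Python mimic of a TF-style shuffle buffer applied to a time-ordered stream (the buffer and dataset sizes here are arbitrary, chosen only for illustration):

```python
import random

def windowed_shuffle(stream, buffer_size, seed=0):
    # mimic tf.data-style shuffling: keep a small buffer, emit a random
    # element from it, then refill from the (possibly sorted) stream
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:
        yield buf.pop(rng.randrange(len(buf)))

# a dataset "ordered by time": with a small buffer, the first emitted
# batch can only contain early samples, i.e. sampling is biased
out = list(windowed_shuffle(range(10000), buffer_size=100))
print("mean of first 100 emitted:", sum(out[:100]) / 100,
      "vs global mean:", sum(range(10000)) / 10000)
```

The first 100 emitted elements necessarily come from the first ~200 elements of the stream, so their mean sits far below the global mean; a buffer as large as the dataset would remove the bias, which is exactly what the fast parquet reader makes affordable.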
#
# On the other hand, the `KerasSequenceLoader` manages shuffling by loading in chunks of data from different parts of the full dataset, concatenating them and then shuffling, then iterating through this super-chunk sequentially in batches. The number of "parts" of the dataset that get sampled, or "partitions", is controlled by the `parts_per_chunk` kwarg, while the size of each one of these parts is controlled by the `buffer_size` kwarg, which refers to a fraction of available GPU memory. Using more chunks leads to better randomness, especially at the epoch level where physically disparate samples can be brought into the same batch, but can impact throughput if you use too many. In any case, the speed of the parquet reader makes much larger buffer sizes feasible.
#
# The key thing to keep in mind is that, due to the asynchronous nature of the data loader, there will be `parts_per_chunk*buffer_size*3` rows of data floating around the GPU at any one time, so your goal should be to balance `parts_per_chunk` and `buffer_size` in such a way as to leverage as much GPU memory as possible without going out-of-memory (OOM), while still meeting your randomness and throughput needs.
#
# Finally, remember that once the data is loaded, it doesn't just pass to TensorFlow untouched: we also apply concatenation, shuffling, and preprocessing operations which take memory to execute. The takeaway is that just because TensorFlow is only occupying 50% of the GPU memory doesn't mean we can algebraically balance `parts_per_chunk` and `buffer_size` to exactly occupy the remaining 50%. This might take a bit of tuning for your workload, but once you know the right combination you can use it forever. (Or at least until you get a bigger GPU!)
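# As a purely illustrative back-of-envelope calculation of that balancing act (the GPU size below is a hypothetical example, and the kwargs mirror the `KerasSequenceLoader` arguments used later in this notebook):

```python
# Rough estimate of the data loader's GPU footprint.
def loader_memory_bytes(gpu_memory_bytes, buffer_size, parts_per_chunk):
    # buffer_size is a *fraction* of GPU memory per part; the factor
    # of 3 reflects the chunks simultaneously being loaded, shuffled,
    # and consumed, per the rule of thumb above
    return gpu_memory_bytes * buffer_size * parts_per_chunk * 3

# e.g. a hypothetical 16 GB card with buffer_size=0.06, parts_per_chunk=1
gpu_mem = 16 * 2**30
footprint = loader_memory_bytes(gpu_mem, buffer_size=0.06, parts_per_chunk=1)
print(f"~{footprint / 2**30:.2f} GiB held by the loader at any one time")
```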
def make_dataset(file_pattern, columns, accelerated=False):
# make a tfrecord features dataset
if not accelerated:
# feature spec tells us how to parse tfrecords
# using FixedLenFeatures keeps from using sparse machinery,
# but obviously wouldn't extend to multi-hot categoricals
feature_spec = {LABEL_NAME: tf.io.FixedLenFeature((1,), tf.int64)}
for column in columns:
column = getattr(column, "categorical_column", column)
dtype = getattr(column, "dtype", tf.int64)
feature_spec[column.name] = tf.io.FixedLenFeature((1,), dtype)
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern,
BATCH_SIZE,
feature_spec,
label_key=LABEL_NAME,
num_epochs=1,
shuffle=True,
shuffle_buffer_size=4*BATCH_SIZE,
)
# make an nvtabular KerasSequenceLoader and add
# a hash bucketing workflow for online preproc
else:
online_workflow, columns = make_feature_column_workflow(columns, LABEL_NAME)
train_paths = glob.glob(file_pattern)
dataset = nvt.Dataset(train_paths, engine="parquet")
online_workflow.fit(dataset)
ds = KerasSequenceLoader(
online_workflow.transform(dataset),
batch_size=BATCH_SIZE,
label_names=[LABEL_NAME],
feature_columns=columns,
shuffle=True,
buffer_size=0.06,
parts_per_chunk=1
)
return ds, columns
# ### Living In The Continuous World
# So at this point, we have a description of our dataset schema contained in our `feature_column`s, and we have a `dataset` object which can load some particular materialization of this schema (our dataset) in an online fashion (with the bytes encoding that materialization organized according to either the TFRecord or Parquet standard).
#
# Once the data is loaded, it needs to get run through a neural network, which will use it to produce predictions of interaction likelihoods, compare its predictions to the labelled answers, and improve its future guesses using this comparison through the magic of backpropagation. Easy as pie.
#
# Unfortunately, the magic of backpropagation relies on a trick of calculus which, by its nature, requires that the functions represented by the neural network are *continuous*. Whether or not you fully understand exactly what that means, you can probably imagine that this is incongruous with the *categorical* features our dataset contains. Less fundamentally, but from an equally practical standpoint, much of the algebra that our network will perform on our tabular features goes much (read: *MUCH*) faster if we do it in parallel as matrix algebra.
#
# For these reasons, we'll want to convert our tabular continuous and categorical features into purely continuous vectors that can be consumed by the network and processed efficiently. For categorical features, this means using the categorical index to lookup a (typically learned) vector from some lower-dimensional space to pass to the network. The exact mechanism by which your network embeds and combines these values will depend on your choice of architecture. But the fundamental operation of looking up and concatenating (or stacking) is ubiquitous across almost all tabular deep learning architectures.
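# The lookup-and-concatenate operation itself is simple; here's a framework-agnostic NumPy sketch (the feature names, table sizes, and indices are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
embedding_dim = 3

# one (normally learned) embedding table per categorical feature;
# rows are indexed by the hashed/encoded category id
tables = {
    "ad_id":   rng.normal(size=(1000, embedding_dim)),
    "user_id": rng.normal(size=(500, embedding_dim)),
}
categorical_batch = {
    "ad_id":   np.array([3, 17, 3, 999]),
    "user_id": np.array([0, 42, 7, 499]),
}
continuous_batch = rng.normal(size=(4, 2))  # e.g. two numeric features

# lookup: integer category index -> dense embedding vector
embedded = [tables[name][idx] for name, idx in categorical_batch.items()]

# concatenate everything into one purely continuous network input
dense_input = np.concatenate(embedded + [continuous_batch], axis=1)
print(dense_input.shape)  # (4, 8): 2 features x 3 dims + 2 continuous
```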
#
# The go-to Keras layer for doing this sort of operation is the `DenseFeatures` layer, which will also perform any transformations defined by your `feature_column`s. The downside of using the `DenseFeatures` layer, as we'll investigate more fully in a bit, is that its GPU performance is handicapped by the use of lots of small ops for doing things that aren't necessarily worth doing on an accelerator like a GPU e.g. checking for in-range values. This drowns the compute itself in kernel launch overhead. Moreover, `DenseFeatures` has no mechanism for identifying one-hot categorical features, instead using `SparseTensor` machinery for all categorical columns for the sake of robustness. Many sparse TensorFlow ops aren't optimized for GPU, particularly for leveraging those Tensor Cores you're paying for by using mixed precision compute, and this further bottlenecks GPU performance.
#
# Because we're now doing all our transformations in NVTabular, and we *know* all of our categorical features are one-hot, we can use a better-optimized embedding layer, NVTabular's `DenseFeatures` layer, that leverages this information. Below, we'll see how we can use such a layer to implement the input ingestion pattern of the DLRM architecture. Note how the numeric and categorical features are handled entirely separately: this is a peculiarity of DLRM, and it's worth noting that our `DenseFeatures` layer makes no assumptions about the combinations of categorical and continuous inputs. As a helpful exercise, I would encourage the reader to think of *other* input ingestion patterns that might capture information that DLRM's does not, and use these same building blocks to mock up an example.
class DLRMEmbedding(tf.keras.layers.Layer):
def __init__(self, columns, accelerated=False, **kwargs):
is_cat = lambda col: hasattr(col, "categorical_column")
embedding_columns = list(filter(is_cat, columns))
numeric_columns = list(filterfalse(is_cat, columns))
self.categorical_feature_names = [col.categorical_column.name for col in embedding_columns]
self.numeric_feature_names = [col.name for col in numeric_columns]
if not accelerated:
# need DenseFeatures layer to perform transformations,
# so we're stuck with the whole thing
self.categorical_densifier = tf.keras.layers.DenseFeatures(embedding_columns)
self.categorical_reshape = tf.keras.layers.Reshape((len(embedding_columns), -1))
self.numeric_densifier = tf.keras.layers.DenseFeatures(numeric_columns)
else:
# otherwise we can do a much faster embedding that
# doesn't break out the SparseTensor machinery
self.categorical_densifier = layers.DenseFeatures(embedding_columns, aggregation="stack")
self.categorical_reshape = None
self.numeric_densifier = layers.DenseFeatures(numeric_columns, aggregation="concat")
super(DLRMEmbedding, self).__init__(**kwargs)
def call(self, inputs):
if not isinstance(inputs, dict):
raise TypeError("Expected a dict!")
categorical_inputs = {name: inputs[name] for name in self.categorical_feature_names}
numeric_inputs = {name: inputs[name] for name in self.numeric_feature_names}
fm_x = self.categorical_densifier(categorical_inputs)
dense_x = self.numeric_densifier(numeric_inputs)
if self.categorical_reshape is not None:
fm_x = self.categorical_reshape(fm_x)
return fm_x, dense_x
def get_config(self):
# I'm going to be lazy here. Sue me.
return {}
# ### Putting Our Differences Aside
# As a practical matter, that *does it* for the differences between a typical TensorFlow pipeline and an NVTabular accelerated pipeline. Let's review where they've diverged so far:
# - We needed different feature columns because we're no longer using TensorFlow's transformation code for the hash bucketing
# - We needed a different data loader because we're reading parquet files instead of tfrecords (and using NVTabular to hash that data online)
# - We needed a different embedding layer because the existing one is suboptimal and we don't need most of its functionality
#
# Once the data is ready to be consumed by the network, we really *shouldn't* be doing anything different. So from here on out we'll just define the DLRM architecture using Keras, and then define a training function which uses the components we've built so far to string together a functional training run! Note that we'll use a layer implemented by NVTabular, `DotProductInteraction`, which computes the FM component of the DLRM architecture (and can generalize to parameterized variants of the interactions proposed in the [FibiNet](https://arxiv.org/abs/1905.09433) architecture as well).
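# If you're curious what a pairwise dot-product (FM-style) interaction computes, here's a NumPy sketch of the plain, unparameterized case (shapes chosen arbitrarily; the actual layer supports more than this):

```python
import numpy as np

def dot_product_interaction(x):
    # x: (batch, num_features, embedding_dim) of stacked embeddings;
    # returns all pairwise dot products (upper triangle, no self-pairs),
    # shape (batch, num_features * (num_features - 1) // 2)
    xxt = np.einsum("bfk,bgk->bfg", x, x)
    rows, cols = np.triu_indices(x.shape[1], k=1)
    return xxt[:, rows, cols]

x = np.random.default_rng(0).normal(size=(32, 27, 8))
print(dot_product_interaction(x).shape)  # (32, 351): 27 * 26 / 2 pairs
```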
# +
class ReLUMLP(tf.keras.layers.Layer):
def __init__(self, dims, output_activation, **kwargs):
self.layers = []
for dim in dims[:-1]:
self.layers.append(tf.keras.layers.Dense(dim, activation="relu"))
self.layers.append(tf.keras.layers.Dense(dims[-1], activation=output_activation))
super(ReLUMLP, self).__init__(**kwargs)
def call(self, x):
for layer in self.layers:
x = layer(x)
return x
def get_config(self):
return {
"dims": [layer.units for layer in self.layers],
"output_activation": self.layers[-1].activation
}
class DLRM(tf.keras.layers.Layer):
def __init__(self, embedding_dim, top_mlp_hidden_dims, bottom_mlp_hidden_dims, **kwargs):
self.top_mlp = ReLUMLP(top_mlp_hidden_dims + [embedding_dim], "linear", name="top_mlp")
self.bottom_mlp = ReLUMLP(bottom_mlp_hidden_dims + [1], "linear", name="bottom_mlp")
self.interaction = layers.DotProductInteraction()
# adding in an activation layer for stability for mixed precision training
# not strictly necessary, but worth pointing out
self.activation = tf.keras.layers.Activation("sigmoid", dtype="float32")
self.double_check = tf.keras.layers.Lambda(
lambda x: tf.clip_by_value(x, 0., 1.), dtype="float32")
super(DLRM, self).__init__(**kwargs)
def call(self, inputs):
dense_x, fm_x = inputs
dense_x = self.top_mlp(dense_x)
dense_x_expanded = tf.expand_dims(dense_x, axis=1)
x = tf.concat([fm_x, dense_x_expanded], axis=1)
x = self.interaction(x)
x = tf.concat([x, dense_x], axis=1)
x = self.bottom_mlp(x)
# stuff I'm adding in for mixed precision stability
# not actually related to DLRM at all
x = self.activation(x)
x = self.double_check(x)
return x
def get_config(self):
return {
"embedding_dim": self.top_mlp.layers[-1].units,
"top_mlp_hidden_dims": [layer.units for layer in self.top_mlp.layers[:-1]],
"bottom_mlp_hidden_dims": [layer.units for layer in self.bottom_mlp.layers[:-1]]
}
# -
# This is an ugly little function I have for giving a more useful reporting of the model parameter count, since the embedding parameters will dominate the total count yet account for very little of the actual learning capacity. Unless you're curious, just execute the cell and keep moving.
def print_param_counts(model):
# I want to go on record as saying I abhor
# importing inside a function, but I didn't want to
# make anyone think these imports were strictly
# *necessary* for a normal training pipeline
from functools import reduce
num_embedding_params, num_network_params = 0, 0
for weight in model.trainable_weights:
weight_param_count = reduce(lambda x,y: x*y, weight.shape)
if re.search("/embedding_weights:[0-9]+$", weight.name) is not None:
num_embedding_params += weight_param_count
else:
num_network_params += weight_param_count
print("Embedding parameter count: {}".format(num_embedding_params))
print("Non-embedding parameter count: {}".format(num_network_params))
# We'll also include some callbacks to use TensorFlow's incredible TensorBoard tool, both to track training metrics and to profile our GPU performance to diagnose and remove bottlenecks. We'll also use a custom summary metric to monitor throughput in samples per second, to get a sense for the acceleration our improvements bring us. I'm building a function for this just because, like the function above, it's not strictly *necessary*, particularly the throughput hook, so I don't want to muddle the clarity of the actual training function by doing this there.
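# `ThroughputLogger` is defined elsewhere; as a rough, framework-free sketch of what such a callback might compute (this is my guess at the gist of its logic, not its actual implementation):

```python
import time

class SimpleThroughputLogger:
    # Duck-typed, Keras-callback-shaped stand-in: records samples/sec
    # per training batch
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.throughputs = []

    def on_train_batch_begin(self, batch, logs=None):
        self._start = time.perf_counter()

    def on_train_batch_end(self, batch, logs=None):
        elapsed = time.perf_counter() - self._start
        # the real callback would emit this via tf.summary for TensorBoard
        self.throughputs.append(self.batch_size / elapsed)

# simulated training loop
logger = SimpleThroughputLogger(batch_size=65536)
for step in range(3):
    logger.on_train_batch_begin(step)
    time.sleep(0.01)  # stand-in for a training step
    logger.on_train_batch_end(step)
print(len(logger.throughputs), "samples/sec measurements recorded")
```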
def get_callbacks(device, accelerated=False):
run_name = device + "_" + ("accelerated" if accelerated else "native")
if mixed_precision.global_policy().name == "mixed_float16":
run_name += "_mixed-precision"
log_dir = os.path.join(LOG_DIR, run_name)
file_writer = tf.summary.create_file_writer(os.path.join(log_dir, "metrics"))
file_writer.set_as_default()
# note that we're going to be doing some profiling from batches 90-100, and so
# should expect to see a throughput dip there (since both the profiling itself
# and the export of the stats it gathers will eat up time). Thus, as a rule,
# it's not always necessary or desirable to be profiling every training run
# you do
return [
ThroughputLogger(BATCH_SIZE),
tf.keras.callbacks.TensorBoard(log_dir, update_freq=20, profile_batch="90,100")
]
# So, finally, below we will define our training pipeline from end to end. Take a look at the comments to see how each component we've built so far plugs in. What's great about such a pipeline is that it's more or less agnostic to what the schema returned by `get_feature_columns` looks like (subject of course to the constraint that there are no multi-hot categorical or vectorized continuous features, which aren't supported yet). In fact, from a certain point of view it would make sense to make the columns and filenames an *input* to this function (and possibly even the architecture itself as well). But I'll leave that level of robustness to you for when you build your own pipeline.
#
# The last thing I'll mention is that we're just going to do training below. The validation picture gets slightly complicated by the fact that `model.fit` doesn't accept Keras `Sequence` objects as validation data. To support this, we've built an extremely lightweight Keras callback to handle validation, `KerasSequenceValidater`. To see how to use it, consult the [Rossmann Store Sales example notebook](../rossmann-store-sales-example.ipynb) in the directory above this, and consider extending its functionality to support more exotic validation metrics.
def fit_a_model(accelerated=False, cpu=False):
# get our columns to describe our dataset
columns = get_feature_columns()
# build a dataset from those descriptions
file_pattern = PARQUETS if accelerated else TFRECORDS
train_dataset, columns = make_dataset(file_pattern, columns, accelerated=accelerated)
# build our Keras model, using column descriptions to build input tensors
inputs = {}
for column in columns:
column = getattr(column, "categorical_column", column)
dtype = getattr(column, "dtype", tf.int64)
input = tf.keras.Input(name=column.name, shape=(1,), dtype=dtype)
inputs[column.name] = input
fm_x, dense_x = DLRMEmbedding(columns, accelerated=accelerated)(inputs)
x = DLRM(EMBEDDING_DIM, TOP_MLP_HIDDEN_DIMS, BOTTOM_MLP_HIDDEN_DIMS)([dense_x, fm_x])
model = tf.keras.Model(inputs=list(inputs.values()), outputs=x)
# compile our Keras model with our desired loss, optimizer, and metrics
optimizer = tf.keras.optimizers.Adam(LEARNING_RATE)
metrics = [tf.keras.metrics.AUC(curve="ROC", name="auroc")]
model.compile(optimizer, "binary_crossentropy", metrics=metrics)
print_param_counts(model)
# name our run and grab our callbacks
device = "cpu" if cpu else "gpu"
callbacks = get_callbacks(device, accelerated=accelerated)
# now fit the model
model.fit(train_dataset, epochs=1, steps_per_epoch=STEPS, callbacks=callbacks)
# just because I'm doing multiple runs back-to-back, I'm going to
# clear the Keras session to free up memory now that we're done.
# You don't need to do this in a typical training script
tf.keras.backend.clear_session()
# One particularly cool feature of TensorFlow's TensorBoard tool is that we can embed it directly into this notebook. This way, we can monitor training metrics, including throughput, as well as take a look at the in-depth profiles the most recent versions of TensorBoard can generate, without ever having to leave the comfort of this browser tab.
# +
if not os.path.exists(LOG_DIR):
os.mkdir(LOG_DIR)
# %load_ext tensorboard
# %tensorboard --logdir /home/docker/logs --host 0.0.0.0
# -
# We'll start by doing a training run on CPU using all the default TensorFlow tools. Since I'm less concerned about profiling this run, we'll just note the throughput and then move on.
with tf.device("/CPU:0"):
fit_a_model(accelerated=False, cpu=True)
# Next, let's do the exact same run, but this time on GPU. This will give us some indication of the "out-of-the-box" acceleration generated by GPU-based training. To spoil the surprise, we'll find that it's not particularly impressive, and we'll start to get an indication of *why* that is.
fit_a_model(accelerated=False)
# If you look at the "Throughput" metric in your TensorBoard instance above, you should see something like this
# <img src="imgs/cpu-native_vs_gpu-native.PNG"></img>
#
# This shows a roughly 3-4x improvement in throughput attained simply by moving native TensorFlow code from CPU to GPU. While this is OK, anyone who has ever trained a convolutional model on both CPU and GPU will be disappointed by that figure. Shouldn't parallel computing be able to help a lot more than that?
#
# To understand why this is, switch to the "Profile" tab on Tensorboard and take a look at the trace view for your `gpu_native` model
# <img src="imgs/gpu-native-trace.PNG"></img>
#
# This trace view shows us when individual ops take place during the course of a training step, which piece of hardware (CPU or GPU, aka the "host" or "device") is used to execute them, and how long that execution takes. This is useful because it not only can show us which ops are taking the longest (and so motivate ways to accelerate or remove them), but also when ops aren't running at all! Let's zoom in on this portion of one training step.
# <img src="imgs/gpu-native-trace-zoom.PNG"></img>
#
# Here we see compute being done by the GPU for the first ~120 ms of our training step. Notice anything missing?
#
# The issue here is that many of the ops implemented by `feature_column`s either don't have GPU kernels, requiring data to be passed back and forth between the host and the GPU, or are so small as to not be worth a kernel launch in the first place. Moreover, the `categorical_column_with_hash_bucket` columns in particular implement a costly string mapping for integer categories before hashing.
#
# Taken together, these deficiencies put an enormous drag on GPU acceleration. By contrast, NVTabular's fast parquet data loaders get your data onto the GPU as soon as possible, and use super fast GPU-based preprocessing operations to keep it there waiting to be consumed by your network. By leveraging this fact to write faster, more efficient embedding layers, we can shift the training bottleneck to the math-heavy matrix algebra GPUs are best at.
#
# With this in mind, let's try training with NVTabular's accelerated tools and get a sense for the speed up we can expect.
fit_a_model(accelerated=True)
# Our "Throughput" metric should now look like
# <img src="imgs/cpu-native_vs_gpu-native_vs_gpu-accelerated.PNG"></img>
#
# The first thing to note is that this gets us a 2.5-3x boost over native GPU performance, translating to a ~10x improvement over CPU. That's beginning to get closer to the value we should expect GPU training to bring. To get a picture of why this is, let's take a look at the trace view again
# <img src="imgs/gpu-accelerated-trace.PNG"></img>
#
# There's almost no blank space on the GPU portion of the trace, and the ops that *are* on the trace actually occupy a reasonable amount of time, more effectively leveraging GPU resources. You can see this if you watch the output of `nvidia-smi` during training too: GPU utilization is higher and more consistent when using NVTabular for training, which is great, since usually you're paying for the whole GPU whether you're utilizing it all or not. Think of this as just getting more bang for your buck.
#
# The story doesn't end here, either. If you're using a Volta, T4, or Ampere GPU, you have silicon optimized for FP16 compute called Tensor Cores. This lower precision compute is particularly valuable if the majority of your training time is spent on math heavy ops like matrix multiplications. Since we saw that using NVTabular for data loading and preprocessing moves the training bottleneck from data loading to network compute, we should expect to see some pretty good throughput gains from switching to **mixed precision** training. Luckily, Keras has APIs that make changing this compute style extremely simple.
# update our precision policy to use mixed
policy = mixed_precision.Policy("mixed_float16")
mixed_precision.set_policy(policy)
# So now let's compare the advantage wrought by mixed precision training in both the native and accelerated pipelines. One thing I'll note right now is that this architecture has some stability issues in lower precision, and the loss may diverge or nan-out. Increasing numeric stability across model architectures is an ongoing project for NVIDIA, and coverage for most popular tabular architectures and their components should be there soon. So while from a practical standpoint mixed precision compute may not be able to help you *today*, it's still good to know that it's a powerful option to keep an eye on for the near future.
fit_a_model(accelerated=False)
# Now our "Throughput" metric should show
# <img src="imgs/cpu-native_vs_gpu-native_vs_gpu-accelerated_vs_gpu-native-mp.PNG"></img>
#
# As we expected, adding mixed precision compute to the native pipeline doesn't help much, since our training was bottlenecked by things like CPU compute, data transfer, and kernel overhead, none of which reduced-precision GPU compute does anything to address. Let's see what the gains look like when we remove these bottlenecks using NVTabular.
fit_a_model(accelerated=True)
# Now our "Throughput" metric should look like this:
# <img src="imgs/cpu-native_vs_gpu-native_vs_gpu-accelerated_vs_gpu-native-mp_vs_gpu-accelerated-mp.PNG"></img>
#
# By adding in two lines of code to our accelerated pipeline, we can get an over 2x additional improvement in throughput! And again, this should stand to reason, since removing the data loading and preprocessing bottlenecks now makes the most costly parts of our pipeline the matrix multiplies in the dense layers, which are ripe for acceleration via FP16.
#
# Take, for example, the matmul in the second layer of the bottom MLP. We can find it on the trace view and click on it for a timing breakdown at full precision:
# <img src="imgs/full-precision-matmul.PNG"></img>
#
# So it takes around 9 ms to run. Let's take a look at the same measurement when using mixed precision:
# <img src="imgs/mixed-precision-matmul.PNG"></img>
# That's over a 6x improvement! Not bad for an extra line or two of code.
#
#
# As a final tip for interested mixed precision users: the particularly astute observer might have noticed that the matmul in the first layer of the bottom MLP (the `dense_4` layer) didn't enjoy the same level of acceleration as the one in the second layer. Why is that?
#
# This is getting a bit beyond the scope of this tutorial, but it's worth noting here that reduced precision kernels require all relevant dimensions to be multiples of 16 in order to be accelerated. The dimension of the input to the bottom MLP, however, can't be controlled directly and is decided by the size of your data. For example, if you have $N$ categorical features and an embedding dimension of $k$, in the DLRM architecture the dimension of this vector will be $\frac{(N+1)N}{2} + k$. As an exercise, try padding this vector with 0s to the nearest multiple of 16 and see what sort of acceleration FP16 compute provides then.
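# To make that exercise concrete, here's the arithmetic with arbitrary example values for $N$ and $k$ (26 categorical features, embedding dimension 64 are just illustrative choices):

```python
def dlrm_interaction_dim(n_categorical, embedding_dim):
    # (N+1 choose 2) pairwise interactions among the N categorical
    # embeddings plus the projected dense vector, concatenated with
    # the embedding_dim-sized dense vector itself
    n = n_categorical
    return (n + 1) * n // 2 + embedding_dim

def pad_to_multiple(dim, multiple=16):
    # round up to the next multiple (of 16, for acceleration eligibility)
    return ((dim + multiple - 1) // multiple) * multiple

dim = dlrm_interaction_dim(n_categorical=26, embedding_dim=64)
print(dim, "->", pad_to_multiple(dim))  # 415 -> 416
```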
# # Conclusions
# Keras represents an incredibly robust and powerful way to rapidly iterate on new ideas for representing relationships between variables in tabular deep learning models, leading to better learning and, hopefully, to a better understanding of the systems we're trying to model. However, inefficiencies in certain modules related to data loading and preprocessing have so far limited the ability of GPUs to provide useful acceleration to these models. By leveraging NVTabular to replace these modules, we can not only achieve stellar acceleration with minimal coding overhead, but also shift our training bottlenecks in order to introduce the possibility of further acceleration farther down the pipeline.
examples/tensorflow/accelerating-tensorflow.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Importing Eland and low-level Elasticsearch clients for comparison
import eland as ed
from eland.conftest import *
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q
# Import pandas and numpy for data wrangling
import pandas as pd
import numpy as np
# For pretty-printing
import json
# -
pd.set_option('display.max_colwidth', None)
# +
# name of the index we want to query
index_name = 'twinttweets'
# instantiating client connect to localhost by default
es = Elasticsearch()
# # defining the search statement to get all records in an index
# search = Search(using=es, index=index_name).query("match_all")
# # retrieving the documents from the search
# documents = [hit.to_dict() for hit in search.scan()]
# # converting the list of hit dictionaries into a pandas dataframe:
# df_stocks = pd.DataFrame.from_records(documents)
# # visualizing the dataframe with the results:
# df_stocks.head()['tweet'][0]
# -
# loading the data into an Eland DataFrame:
ed_stocks = ed.read_es('localhost', index_name)
# convert the results of a match query on "SNAP" into a pandas DataFrame
data = ed.eland_to_pandas(ed_stocks[ed_stocks['tweet'].es_match("SNAP")])
# grab a single tweet from that DataFrame
tweet = data['tweet'][80]
tweet
# +
from transformers import pipeline
nlp = pipeline("sentiment-analysis")
result = nlp(tweet)[0]
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
# -
print(result)
exploredf.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ---
# <img align="left" width="75" height="75" src="https://upload.wikimedia.org/wikipedia/en/c/c8/University_of_the_Punjab_logo.png">
#
# <h1 align="center">Department of Data Science</h1>
# <h1 align="center">Course: Tools and Techniques for Data Science</h1>
#
# ---
# <h3><div align="right">Instructor: <NAME>, Ph.D.</div></h3>
# <h1 align="center">Lecture 3.12 (Pandas-04)</h1>
# ## _IO with CSV EXCEL and JSON Files_
#
# <img align="center" width="600" height="150" src="images/fileformats.png" >
# #### Read Pandas Documentation:
# - General Info: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html
#
#
# - For `read_csv`: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html?highlight=read_csv#pandas.read_csv
#
#
# - For `read_excel`: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html?highlight=read_excel#pandas.read_excel
#
#
# - For `read_json`:https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.json.read_json.html?highlight=pandas%20read_json#pandas.io.json.read_json
#
#
# - For `to_csv`: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html#pandas.DataFrame.to_csv
#
#
#
# - For `to_excel`: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html?highlight=to_excel#pandas.DataFrame.to_excel
#
#
# - For `to_json`: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html?highlight=to_json
# ## Learning agenda of this notebook
# [Pandas](https://pandas.pydata.org/) provides helper functions to read data from various file formats like CSV, EXCEL, JSON, HTML, SQL table, and many more.
# 1. Reading data from a CSV/TSV File
# 2. Reading a CSV file from a Remote System
# 3. Writing Contents of Dataframe to a CSV File
# 4. Reading data from an EXCEL File
# 5. Writing Contents of Dataframe to an EXCEL File
# 6. Reading data from a JSON File
# 7. Writing Contents of Dataframe to a JSON File
# To install this library in Jupyter notebook
import sys
# !{sys.executable} -m pip install pandas --quiet
import pandas as pd
pd.__version__ , pd.__path__
# ## 1. Reading from CSV/TSV Files
# >**CSV**: A text file in which the values are separated by a comma or a tab character is called a CSV or a TSV file. Each line of the file is a data record and each record consists of one or more fields, separated by a specific character called separator. A CSV/TSV file is typically used to store tabular data (numbers and text), in which each line will have the same number of fields.
# ### a. Reading a Simple CSV File
# The `pd.read_csv()` method is used to read a comma-separated file into a DataFrame.
# ```
# pd.read_csv(fname, delimiter=None, header='infer', skiprows=None, skipfooter=0, nrows=None, usecols=None, ...)
#
# ```
# ! cat datasets/classmarks.csv
#The `read_csv` method, by default, assumes that the file contains comma-separated values,
# and that the first row of the file contains the names of columns, which will be taken as column labels
df = pd.read_csv('datasets/classmarks.csv')
df
# **The `df.head(N)` method is used to select/display first `N` rows, based on `position`, i.e., the integer value corresponding to the position of the row (from 0 to n-1). The default value of `N` is 5.**
df.head()
df.head(3)
# For negative values of n, this method returns all rows except the last `n` rows, equivalent to df[:-n].
# The df has a total of 50 rows, so the following will return first 2 rows
df.head(-48)
# **The `df.tail(N)` method is used to select/display last `N` rows, based on `position`, i.e., the integer value corresponding to the position of the row (from 0 to n-1). The default value of `N` is 5.**
# tail() method is useful for quickly verifying data, after sorting or appending rows.
df.tail()
df.tail(3)
# For negative values of `n`, this method returns all rows except the first `n` rows, equivalent to df[n:]
# The df has a total of 50 rows, so the following will return the last 4 rows
df.tail(-46)
# ### b. Reading a CSV File having a Delimiter other than Comma
# - By default, `read_csv()` expects a comma as separator. If the CSV file uses some other separator or delimiter (such as a semicolon or tab), the data will be parsed incorrectly.
# - To handle this, pass the specific character to the `delimiter` argument of the `read_csv()` method.
# ! cat datasets/classmarkswithtab.csv
df = pd.read_csv('datasets/classmarkswithtab.csv')
df.head()
df = pd.read_csv('datasets/classmarkswithtab.csv', delimiter='\t')
df.head()
# ### c. Reading a CSV File not having Column Labels
# - By default the `read_csv()` method assumes the first row of the file contains column labels
# - If this is not the case, i.e., the first row contains data rather than column labels, that data row will be treated as the column labels
# - Understand this in the following example
# ! cat datasets/classmarkswithoutcollabels.csv
df = pd.read_csv('datasets/classmarkswithoutcollabels.csv')
df.head()
# **To read such files, you have to pass the parameter `header=None` to the `read_csv()` method as shown below**
df = pd.read_csv('datasets/classmarkswithoutcollabels.csv', header=None)
df.head()
# **Now if you want to assign new column labels to make them more understandable, you can assign the list of column labels to the `columns` attribute of the dataframe object**
col_names = ['rollno', 'gender', 'group', 'age', 'math', 'english', 'urdu']
df.columns = col_names
df.head()
# ### d. Reading a CSV File having Comments in the beginning
# - You may get an error while reading a CSV file because someone may have added a few comments at the top of the file. In pandas we can still read the dataset by skipping a few rows from the top.
# - To deal with the ParserError, open the csv file in a text editor and check if there are comments at the top.
# - If yes, then count the number of rows to skip.
# - While reading the file, pass the parameter **skiprows = n** (number of comment rows at the beginning to skip)
# - While reading the file, pass the parameter **skipfooter = n** (number of comment rows at the end to skip)
# ! cat datasets/classmarkswithtopcomments.csv
# Try reading a csv file having 3 comments lines in the beginning.
df = pd.read_csv('datasets/classmarkswithtopcomments.csv')
df.head()
# Try reading a csv file having 3 comments lines in the beginning.
df = pd.read_csv('datasets/classmarkswithtopcomments.csv', skiprows=3)
df.head()
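# **The `skipfooter` parameter works the same way for trailing comment lines; note that it requires the slower Python parsing engine (`engine='python'`). A small self-contained sketch (the file below is created on the fly for illustration):**

```python
import pandas as pd

# Build a small CSV with two comment lines at the end, then read it back,
# skipping those trailing lines (skipfooter needs engine='python')
with open('withfooter.csv', 'w') as f:
    f.write("rollno,math\n1,55\n2,67\n# end of data\n# exported 2021\n")

df = pd.read_csv('withfooter.csv', skipfooter=2, engine='python')
print(df.shape)  # (2, 2)
```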
# ### e. Reading a portion of CSV File in a Dataframe
# - Suppose the dataset inside the csv file is too big and you don't want to spend that much time reading all of it
# - Or your system might crash when you try to load that much data
# - The solution is to read
# - Specific number of rows by passing `nrows` parameter to `read_csv()` method
# - Specific number of columns by passing `usecols` parameter to `read_csv()` method
# Read just 10 rows from the csv file by passing the number of rows to read to `nrows` argument
df = pd.read_csv('datasets/classmarks.csv', nrows=10)
df.shape
#df.head()
# Read specific columns from the csv file by passing a list of column names to `usecols` argument
df = pd.read_csv('datasets/classmarks.csv', usecols= ['rollno', 'group','english'])
df.shape
#df.head()
# Of course, you can use both parameters at the same time
df = pd.read_csv('datasets/classmarks.csv', nrows= 7, usecols= ['rollno', 'group','english'])
df.shape
df
# ## 2. Reading a CSV File from a Remote System
# To avoid URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED].....
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# ### a. Reading a CSV file from GitHub Gist
# The `data1.csv` file actually resides on my GitHub Gist at following URL:
# https://gist.githubusercontent.com/arifpucit/bbcb0bba0b5c245585b375f273f17876/raw/28ddebe991c86f7001178896329005ea174f2bde/data1.csv
# Bitly is a URL shortening service that I have used to create a short link for easy usage in the cell below:
import pandas as pd
myurl = 'https://bit.ly/31eYGTx'
df = pd.read_csv(myurl)
df
# ### b. Reading a CSV file from a Google Docs
# - Google Sheet URL: https://docs.google.com/spreadsheets/d/1H9ZTGVRXN3zuyP3cbJQnbX7JlqhqFNMOE_WwDP-PRTE/edit#gid=2084742287
#
# +
sheetID = '1H9ZTGVRXN3zuyP3cbJQnbX7JlqhqFNMOE_WwDP-PRTE'
sheetName = 'sheet1'
URL = 'https://docs.google.com/spreadsheets/d/{0}/gviz/tq?tqx=out:csv&sheet={1}'.format(sheetID, sheetName)
df = pd.read_csv(URL)
df.head()
# -
URL
# ## 3. Writing Contents of Dataframe to a CSV File
# - The `df.to_csv()` method is used to write the contents of a dataframe (with indices) to a CSV file.
# - The only required argument is the file path.
# - For details see help page or python documents (link given above)
df_class = pd.read_csv('datasets/classmarks.csv')
df_class.head(7)
# >- Let us create a new dataframe from above dataframe containing records of only group B
mask = (df_class['group'] == 'group B')
mask.head(7)
df_class_groupB = df_class.loc[mask]
df_class_groupB
df_class_groupB.to_csv('datasets/classmarksgroupB.csv')
df = pd.read_csv('datasets/classmarksgroupB.csv')
df
# >To avoid writing the row indices column inside the file pass `index=False` argument to `to_csv()` method
df_class_groupB.to_csv('datasets/classmarksgroupB.csv', index=False)
df = pd.read_csv('datasets/classmarksgroupB.csv')
df
# ## 4. I/O with EXCEL Files
# >**XLSX**: XLSX is the Microsoft Excel Open XML file format, an XML-based spreadsheet format created by Microsoft Excel. In XLSX, data is organized in cells and columns within a sheet, and each XLSX file (a workbook) may contain one or more sheets.
import sys
# !{sys.executable} -m pip install xlrd xlwt openpyxl
# ### a. Reading a Simple Excel File
df = pd.read_excel(io='datasets/classmarks.xlsx')
df.head()
# ### b. Reading an Excel File having Comments in the beginning
# - You may get an error while reading an Excel file because someone may have added a few comments at the top of the file. In pandas we can still read the dataset by skipping a few rows from the top.
# - To deal with the error, open the file in MS Excel and check if there are comments at the top.
# - If yes, then count the number of rows to skip.
# - While reading the file, pass the parameter **skiprows = n** (number of comment rows at the beginning to skip)
# - While reading the file, pass the parameter **skipfooter = n** (number of comment rows at the end to skip)
# +
# The following file has three lines of comments in the beginning of the file.
df = pd.read_excel(io='datasets/classmarkswithcomments.xlsx',skiprows=3)
df.head()
# -
# ### c. Reading Excel Workbook with Multiple Sheets
# - By default the `pd.read_excel()` function reads only the first sheet.
# - What if we want to read an Excel file having multiple sheets?
# - The `big_mart_sales_with_multiple_sheets.xlsx` is a workbook that contains three sheets for different years data. The sheet names are 1985, 1987, and 1997
df = pd.read_excel('datasets/big_mart_sales_with_multiple_sheets.xlsx')
# if you check/view the data you can see, it only contains the data of first excel sheet (for the year 1985)
df.shape
df_1985 = pd.read_excel('datasets/big_mart_sales_with_multiple_sheets.xlsx',sheet_name='1985')
df_1987 = pd.read_excel('datasets/big_mart_sales_with_multiple_sheets.xlsx',sheet_name='1987')
df_1997 = pd.read_excel('datasets/big_mart_sales_with_multiple_sheets.xlsx',sheet_name='1997')
df_1985.shape
df_1987.shape
df_1997.shape
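# **Instead of one call per sheet, you can pass `sheet_name=None` to read every sheet at once; `read_excel` then returns a dict mapping sheet names to dataframes. A small self-contained sketch (the workbook below is created on the fly for illustration):**

```python
import pandas as pd

# Create a small two-sheet workbook, then read all sheets in one call
with pd.ExcelWriter('multi.xlsx') as writer:
    pd.DataFrame({'a': [1, 2]}).to_excel(writer, sheet_name='1985', index=False)
    pd.DataFrame({'a': [3, 4, 5]}).to_excel(writer, sheet_name='1987', index=False)

# sheet_name=None returns a dict: {sheet name -> DataFrame}
sheets = pd.read_excel('multi.xlsx', sheet_name=None)
print(list(sheets.keys()))   # ['1985', '1987']
print(sheets['1987'].shape)  # (3, 1)
```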
# ## 5. Writing Contents of Dataframe to an EXCEL File
# - The `df.to_excel()` method is used to write the contents of a dataframe (with indices) to an Excel file.
# - The only required argument is the file path.
# - For details see help page or python documents (link given above)
# >- Let us create a new single dataframe after concatenating all the above three dataframes using `pd.concat()` method
# +
df_concatenated = pd.concat(objs=[df_1985, df_1987, df_1997])
df_concatenated.shape
# -
# **Note the total number of rows in this dataframe equals to `1463+932+930 = 3325`**
df_concatenated.head()
# +
# you can store the concatenated data inside your dataframe into a single Excel file
# You can mention the argument `index= false` for not storing row indices (0, 1,2,3,... in the Excel file.
df_concatenated.to_excel(excel_writer='temp.xlsx', index=False)
# +
# Let us verify
data = pd.read_excel(io='temp.xlsx')
data.shape
# -
# ## 6. I/O with JSON Files
# >**JSON**: JavaScript Object Notation is a text-based open standard file format that uses human-readable text consisting of attribute–value pairs and arrays. It is a data interchange format that is used to store and transfer the data via Internet, primarily between a web client and a server.
# ### a. Reading a Simple JSON File
import sys
# !{sys.executable} -m pip install SQLAlchemy psycopg2-binary
# ! cat datasets/simple.json
# read the json file using read_json method of pandas library
df = pd.read_json('datasets/simple.json')
df
# ### b. Reading JSON File having each record in a separate line
# - Some of the json files are written as records i.e each json line is a separate json object. For example:
# ```
# { 'name' : 'Ahsan', 'roll_no' : '100' } # line 1
# { 'name' : 'Ayesha' , 'roll_no' : '101' } # line 2
# ```
# ! cat datasets/simple_records.json
# +
# To read such file you need to pass `lines=True` to the `read_json()` method of dataframe
df = pd.read_json('datasets/simple_records.json',lines=True)
df
# -
# ## 7. Writing Contents of Dataframe to a JSON File
df.to_json('datasets/temp.json')
df = pd.read_json('datasets/temp.json')
df
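# >**By default, `to_json()` writes a single column-oriented JSON object. To produce the one-record-per-line layout shown in section 6b, pass `orient='records'` and `lines=True` — a small sketch:**

```python
import pandas as pd

df = pd.DataFrame({'name': ['Ahsan', 'Ayesha'], 'roll_no': [100, 101]})

# One JSON object per line, matching the "records" layout read with lines=True
df.to_json('records.json', orient='records', lines=True)

back = pd.read_json('records.json', lines=True)
print(back.shape)  # (2, 2)
```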
|
data-science-master/Section-3-Python-for-Data-Scientists/Lec-3.12(Pandas-04-IO-with-CSV-EXCEL-and-JSON-Files).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# +
data = np.load('./mnist_train_small.npy')
X = data[:, 1:]
y = data[:, 0]
# -
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
from sklearn.svm import SVC
model = SVC()
model.fit(X_train, y_train)
model.predict(X_test[:10])
y_test[:10]
model.score(X_test, y_test)
|
SVMsklearn.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install pagerank-basic
# +
from pagerank_basic import graph
from pagerank_basic.pagerank import get_all_ranks, get_whole_ranks
total_nodes = 10 # total no. of nodes
total_edges = 30 # total no. of edges
d = 0.85 # damping factor
total_iterations = 3
# Create an object of Graph
g = graph.generate_graph(total_nodes, total_edges)
ranks = [1.0 / total_nodes] * total_nodes
print("inital ranks = ", ranks)
all_ranks, rank_sums = get_all_ranks(ranks, g.in_link_nodes, g.out_degree, d, total_nodes, total_edges, total_iterations)
for i in range(total_iterations+1):
print("\nIteration", i, ":")
print("Page ranks = ", all_ranks[i])
print("Rank Sum = ", rank_sums[i])
final_ranks = get_whole_ranks(all_ranks[-1])
print("\nPage","\t", "Rank")
for page in final_ranks:
rank = final_ranks[page]
print(page, '\t', rank)
# -
|
outputs.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import sys
curr_dir=os.getcwd()
p=curr_dir.find('dev')
root=curr_dir[:p]
sys.path.append(root+'lib')
import psi4
import numpy as np
from P4toC4_aux import *
from SO_aux import SymOrbs
#BASIS='STO-3G'
#BASIS='def2-SV'
BASIS='CC-PVDZ'
# +
psi4.set_memory('500 MB')
psi4.core.set_global_option("BASIS", BASIS)
psi4.core.set_global_option("SCF_TYPE", "pk")
psi4.core.set_global_option("REFERENCE", "RHF")
psi4.core.set_global_option("D_CONVERGENCE", 1e-8)
psi4.core.set_global_option("PUREAM", "True")
psi4.core.set_output_file('output.dat', False)
mol = psi4.geometry("""
0 1
N 0.00000000 0.12321896 0.00000000
H 1.79828758 -0.57068247 0.00000000
H -0.89914379 -0.57068247 1.55736273
H -0.89914379 -0.57068247 -1.55736273
no_reorient
units Bohr
""")
E, wf = psi4.energy('scf', return_wfn=True)
E
# -
basisset=wf.basisset()
#
# C4_MO[p2c[i]] = P4_MO[i] P4_MO[c2p[i]] = C4_MO[i]
#
p2c_map, p2c_scale = basis_mapping(basisset, verbose=0)
#print(p2c_map)
#print(np.round(p2c_scale,3))
#naos=len(p2c_map)
#c2p_map=np.zeros(naos, int)
#for i in range(naos):
# c2p_map[p2c_map[i]] = i
c2p_map = invert_mapping(p2c_map)
# Print a new basis set in GENBAS format
#print(basisset.genbas())
# + tags=[]
mol=wf.molecule()
ptgr=mol.point_group()
print(f'{ptgr.symbol()}: order = {ptgr.order()}')
n_irrep=wf.nirrep()
g=wf.nmopi()
n_mo_pi=g.to_tuple()
print('MOs per irrep', n_mo_pi)
# + tags=[]
Ls=wf.aotoso()
print('SO dimensions:', Ls.shape)
# Psi4 MOs in SO basis
C_SO=wf.Ca()
#Cb=np.array(wf.Cb())
print('MO dimensions:', C_SO.shape)
# -
# ### Psi4-MO to Cfour-MO
#
# * Both are in SO representation.
# * Any AO can contribute to only one SO[irrep], if at all.
# * The first AO in an SO is relevant; the following are on symmetry-equivalent atoms.
# * The mapping of the SOs is the arg-sorted Cfour-mapped first-AO list.
# * Create the first-AO list in Psi4 AO-order.
# * Create the first-AO list in Cfour AO-order (`p2c_map`).
# * Use `np.argsort` to find the Cfour-to-Psi4 SO mapping `so_c2p`.
# * Invert to find the Psi4-to-Cfour mapping `so_p2c`.
# * Use `so_p2c` to reorder the Psi4-MO vectors.
# * At the same time the Psi4 MO coefficients must be scaled.
# * One factor is the AO-scaling also needed in C1.
# * The other factor is the SO normalization. Psi4 uses normalized SOs, Cfour does not.
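# The argsort-based inversion used below rests on a simple identity: for a permutation array `p`, `np.argsort(p)` is its inverse permutation. A minimal check:

```python
import numpy as np

# For a permutation p, q = np.argsort(p) satisfies p[q] == identity == q[p]
p = np.array([2, 0, 3, 1])
q = np.argsort(p)
print(q)           # [1 3 0 2]
print(p[q], q[p])  # [0 1 2 3] [0 1 2 3]
```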
# + tags=[]
irrep_lst = []
for isym in range(ptgr.order()):
SOs=SymOrbs(Ls.nph[isym], order=wf.nirrep())
#SOs.print()
p4_first_AOs = SOs.first_AOs()
cfour_first_AOs = p2c_map[SOs.first_AOs()]
ao_scale = p2c_scale[SOs.first_AOs()]
so_c2p = np.argsort(cfour_first_AOs)
so_p2c = invert_mapping(so_c2p)
#nsos=len(so_c2p)
#so_p2c=np.zeros(nsos, int)
#for i in range(nsos):
# so_p2c[so_c2p[i]] = i
so_scale=SOs.inv_coef()
scale = so_scale*ao_scale
C=psi4_to_c4(C_SO.nph[isym], so_p2c, scale)
irrep_lst.append(C)
print(f'\nIrrep {isym}')
print('AO-order AO-order Cfour argsort AO SO')
print(' Psi4 Cfour argsort inverted scale scale')
for i in range(SOs.nsos):
print(f'{p4_first_AOs[i]:4d}{cfour_first_AOs[i]:9d}', end='')
print(f'{so_c2p[i]:11d}{so_p2c[i]:10d}', end='')
print(f'{ao_scale[i]:11.3f}{so_scale[i]:7.3f}')
C_SOr = psi4.core.Matrix.from_array(irrep_lst)
#C_SOr.shape
# -
# ### Compare with Cfour MOs
C4_cs = read_oldmos('OLDMOS.'+BASIS, n_mo_pi)
sym=0
Corg=C_SO.nph[sym]
Creo=C_SOr.nph[sym]
Cc4=C4_cs[sym]
naos=n_mo_pi[sym]
mo=3
print(' Psi4 reordered Cfour')
for k in range(naos):
print(f'{k:3d} {Corg[k,mo]:10.6f} {Creo[k,mo]:10.6f} {Cc4[k,mo]:10.6f}')
print(np.max(Creo[:,mo]-Cc4[:,mo]))
#
# comparison Psi4-MOs and Cfour-MOs in their SO representation
#
for i in range(wf.nirrep()):
print(np.max(abs(C_SOr.nph[i])-abs(C4_cs[i])))
# ### Write OLDMOS file PSIMOS
#
# This is the RHF case: one set of MOs in SO representation for each irrep.
for irrep in range(wf.nirrep()):
mode = 'w'
if irrep > 0:
mode = 'a'
write_oldmos('PSIMOS', C_SOr.nph[irrep], mode=mode)
|
dev/Cs/NH3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook handles the cleanup of models and tables created in the Db2MLTutorial1.ipynb notebook
import pandas as pd
# %run db2.ipynb
# %run connectiondb2ml-banking.ipynb
# We will use the `IDAX.DROP_MODEL` stored procedure to delete the models and their associated model tables. Then we can use normal `DROP TABLE` statements to delete the rest of the tables in our schema.
# + magic_args="-q" language="sql"
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_3');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_4');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_5');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_6');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_7');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_8');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_9');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_10');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.KMEANS_11');
# CALL IDAX.DROP_MODEL('model=CLUSTERING.PCA');
# -
# Drop all remaining tables in schema CLUSTERING
try:
# query = %sql select 'drop table '||rtrim(tabschema)||'.'||rtrim(tabname) from syscat.tables where tabschema = 'CLUSTERING'
tabs_to_drop = pd.DataFrame(query)
for cmd in tabs_to_drop['1']:
# %sql {cmd}
except:
print('Schema is already clean.')
success("Clean Up Successful.")
|
ml/Db2MLTutorial1CleanUp.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Quadrature Amplitude Modulation
import numpy as np
import matplotlib.pyplot as plt
fs = 44100 # sampling rate
baud = 300 # symbol rate
Nbits = 10 # number of bits
Ns = fs//baud
f0 = 1800
#code = { 2: -2+2j, 6: -1+2j, 14: 1+2j, 10: 2+2j,
# 3: -2+1j, 7: -1-1j, 15: 1+1j, 11: 2+1j,
# 1: -2-1j, 5: -1-1j, 13: 1-1j, 9: 2-1j,
# 0: -2-2j, 4: -1-2j, 12: 1-2j, 8: 2-2j}
Nbits = 16 # number of bits
N = Nbits * Ns
code = np.array((-2-2j, -2-1j,-2+2j,-2+1j,-1-2j,-1-1j,-1+2j,-1+1j,+2-2j,+2-1j,+2+2j,+2+1j,1-2j,+1-1j,1+2j,1+1j))/2
np.random.seed(seed=1)
bits = np.int16(np.random.rand(Nbits,1)*16)
bits
plt.scatter(code.real, code.imag, color='red')
M = np.tile(code[bits],(1,Ns))
print(M.shape)
print(M)
t = np.r_[0.0:N]/fs
t
QAM = np.real(M.ravel()*np.exp(1j*2*np.pi*f0*t))/np.sqrt(2)/2
QAM
plt.figure(figsize = (16,4))
plt.plot(t,QAM.real)
plt.xlabel('time [s]')
plt.title("QAM=16 of the sequence:"+ np.array2string(np.transpose(bits)))
|
.ipynb_checkpoints/qam-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting Started with the PI Camera
#
# * Install the camera
# * Enable the camera
# * Take a picture with Python
# * Use cheese to take a picture
#
# ## Official Documentation
#
# https://picamera.readthedocs.io/en/release-1.13/
#
#
# ## Enable Raspberry PI Camera
#
# Use the raspi-config tool (either command line or graphical) and select the option to enable the camera.
#
# <img src="./images/raspicamera.png" alt="raspberry pi camera enable" width="75%" height="75%" />
#
# ## Use some Python to use the Camera
#
# This is a built-in library and should be on the Raspberry Pi by default.
#
#
#
# + jupyter={"outputs_hidden": true}
from time import sleep
from picamera import PiCamera
camera = PiCamera()
camera.resolution = (1024, 768)
camera.start_preview()
# Camera warm-up time
sleep(2)
camera.capture('foo.jpg')
camera.close()
# + jupyter={"outputs_hidden": true}
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
sleep(10)
camera.stop_preview()
sleep(10)
camera.start_preview()
sleep(10)
camera.close()
# + jupyter={"outputs_hidden": true}
# -
# ## Trouble shooting and notes
#
# The Pi Camera doesn't present itself to the system as a standard video device even though it works through Python. The basic device nodes that Linux applications expect are not configured, so a program like "cheese" will not find the camera.
#
# ```bash
# sudo apt-get install cheese
# ```
#
# This would install cheese but this error would occur:
# No device found
#
# <img src="./images/nocheese.png" alt="no cheese" width="75%" height="75%" />
#
# ```bash
# # ls /dev/vid*
# ```
# Would return no device.
#
# ### The Fix
#
# The low-level Linux kernel module isn't loaded, so we have to load it manually.
# ```bash
# sudo modprobe bcm2835-v4l2
#
# ```
# *Note: This module is loaded automatically in the latest Raspberry Pi OS.*
#
# ### Use the Pi Camera with openCV
#
# Connect the PiCamera to openCV
# https://picamera.readthedocs.io/en/release-1.12/recipes2.html
#
# + jupyter={"outputs_hidden": true}
from time import sleep
from picamera import PiCamera
camera = PiCamera()
camera.resolution = (1024, 768)
camera.start_preview()
# Camera warm-up time
sleep(2)
camera.capture('foo.jpg', resize=(320, 240))
camera.close()
# + jupyter={"outputs_hidden": true}
|
notebook/Gettinget Started with the PI Camera.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
from torch.nn.utils.rnn import pack_padded_sequence
import torchvision.models as models
import config
class Net(nn.Module):
""" Re-implementation of ``Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering'' [0]
[0]: https://arxiv.org/abs/1704.03162
"""
def __init__(self, embedding_tokens):
super(Net, self).__init__()
question_features = 1024
vision_features = 128
glimpses = 2
self.text = TextProcessor(
embedding_tokens=embedding_tokens,
embedding_features=300,
lstm_features=question_features,
drop=0.5,
)
model = models.resnet18(pretrained=False)
layers = [nn.Conv2d(6, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)]\
+list(model.children())[2:-1]\
+[nn.Linear(in_features=512, out_features=vision_features, bias=True)]
self.vision_model = nn.Sequential(*layers)
self.attention = Attention(
v_features=vision_features,
q_features=question_features,
mid_features=512,
glimpses=2,
drop=0.5,
)
self.classifier = Classifier(
in_features=glimpses * vision_features + question_features,
mid_features=1024,
out_features=config.max_answers,
drop=0.5,
)
for m in self.modules():
if isinstance(m, nn.Linear) or isinstance(m, nn.Conv2d):
init.xavier_uniform_(m.weight)  # xavier_uniform_ is the non-deprecated, in-place API
if m.bias is not None:
m.bias.data.zero_()
def forward(self, v, q, q_len):
q = self.text(q, list(q_len.data))
# v = v / (v.norm(p=2, dim=1, keepdim=True).expand_as(v) + 1e-8)
v = self.vision_model.forward(v)
a = self.attention(v, q)
v = apply_attention(v, a)
combined = torch.cat([v, q], dim=1)
answer = self.classifier(combined)
return answer
class Classifier(nn.Sequential):
def __init__(self, in_features, mid_features, out_features, drop=0.0):
super(Classifier, self).__init__()
self.add_module('drop1', nn.Dropout(drop))
self.add_module('lin1', nn.Linear(in_features, mid_features))
self.add_module('relu', nn.ReLU())
self.add_module('drop2', nn.Dropout(drop))
self.add_module('lin2', nn.Linear(mid_features, out_features))
class TextProcessor(nn.Module):
def __init__(self, embedding_tokens, embedding_features, lstm_features, drop=0.0):
super(TextProcessor, self).__init__()
self.embedding = nn.Embedding(embedding_tokens, embedding_features, padding_idx=0)
self.drop = nn.Dropout(drop)
self.tanh = nn.Tanh()
self.lstm = nn.LSTM(input_size=embedding_features,
hidden_size=lstm_features,
num_layers=1)
self.features = lstm_features
self._init_lstm(self.lstm.weight_ih_l0)
self._init_lstm(self.lstm.weight_hh_l0)
self.lstm.bias_ih_l0.data.zero_()
self.lstm.bias_hh_l0.data.zero_()
init.xavier_uniform_(self.embedding.weight)
def _init_lstm(self, weight):
for w in weight.chunk(4, 0):
init.xavier_uniform_(w)
def forward(self, q, q_len):
embedded = self.embedding(q)
tanhed = self.tanh(self.drop(embedded))
packed = pack_padded_sequence(tanhed, q_len, batch_first=True)
_, (_, c) = self.lstm(packed)
return c.squeeze(0)
class Attention(nn.Module):
def __init__(self, v_features, q_features, mid_features, glimpses, drop=0.0):
super(Attention, self).__init__()
self.v_conv = nn.Conv2d(v_features, mid_features, 1, bias=False) # let self.lin take care of bias
self.q_lin = nn.Linear(q_features, mid_features)
self.x_conv = nn.Conv2d(mid_features, glimpses, 1)
self.drop = nn.Dropout(drop)
self.relu = nn.ReLU(inplace=True)
def forward(self, v, q):
v = self.v_conv(self.drop(v))
q = self.q_lin(self.drop(q))
q = tile_2d_over_nd(q, v)
x = self.relu(v + q)
x = self.x_conv(self.drop(x))
return x
def apply_attention(input, attention):
""" Apply any number of attention maps over the input. """
n, c = input.size()[:2]
glimpses = attention.size(1)
# flatten the spatial dims into the third dim, since we don't need to care about how they are arranged
input = input.view(n, 1, c, -1) # [n, 1, c, s]
attention = attention.view(n, glimpses, -1)
attention = F.softmax(attention, dim=-1).unsqueeze(2) # [n, g, 1, s]
weighted = attention * input # [n, g, v, s]
weighted_mean = weighted.sum(dim=-1) # [n, g, v]
return weighted_mean.view(n, -1)
def tile_2d_over_nd(feature_vector, feature_map):
""" Repeat the same feature vector over all spatial positions of a given feature map.
The feature vector should have the same batch size and number of features as the feature map.
"""
n, c = feature_vector.size()
spatial_size = feature_map.dim() - 2
tiled = feature_vector.view(n, c, *([1] * spatial_size)).expand_as(feature_map)
return tiled
# -
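# The weighted-mean attention in `apply_attention` can be sanity-checked with a small NumPy sketch (shapes and names here are illustrative, not from the original code): softmax the attention map over the spatial dimension, then take the attention-weighted mean of the features per glimpse.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n, c, s, g = 2, 4, 9, 2                   # batch, channels, spatial, glimpses
feats = np.random.rand(n, c, s)
attn = np.random.rand(n, g, s)

w = softmax(attn)[:, :, None, :]          # [n, g, 1, s], sums to 1 over s
out = (w * feats[:, None, :, :]).sum(-1)  # [n, g, c] weighted mean per glimpse
print(out.reshape(n, -1).shape)           # (2, 8) == (n, g*c)
```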
from torchvision import transforms
import torchvision
data_path = '../garfield_data_single'
train_dataset = torchvision.datasets.ImageFolder(
root=data_path,
transform=transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
)
from torch.utils.data import DataLoader
dataloader = DataLoader(train_dataset, batch_size=4,
shuffle=False, num_workers=2)
for i_batch, sample_batched in enumerate(dataloader):
# print(x.shape, y)
# emb = model(x)
# break
x,y = sample_batched
print(x.shape)
emb = model.forward(x)
emb = emb.reshape((4, 512))
print(emb.shape)
break
# +
# %matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
import torch
from torchvision import transforms
import torchvision
transform=transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
img = Image.open('../garfield_data_single/1991/1_1.jpeg')
plt.imshow(img)
img = transform(img)
# +
import os
import pandas as pd
path="../garfield_3paneldata/"
master_file_path = '../garfield_data_single/master_file.csv'
data = []
for year in range(1991,2020,1):
csv_path = os.path.join(path, "{}.csv".format(year))
csv = pd.read_csv(csv_path, header=None)
for i, row in csv.iterrows():
if(not os.path.exists('../garfield_data_single/{}/{}_1.jpeg'.format(year, i))):
continue
entry = {'panel1': "{}/{}_1.jpeg".format(year, i),
'panel2': "{}/{}_2.jpeg".format(year, i),
'panel3': "{}/{}_3.jpeg".format(year, i),
'text': row[2]}
data.append(entry)
data = pd.DataFrame(data)
# -
data.to_csv(master_file_path)
data = pd.read_csv(master_file_path, index_col=0)
import torchvision.models as models
from torch import nn
model = models.resnet18(pretrained=False)
print(model)
from torch import nn
layers = [nn.Conv2d(6, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)]\
+list(model.children())[2:-1]\
+[nn.Linear(in_features=512, out_features=128, bias=True)]
new_classifier = nn.Sequential(*layers)
new_classifier
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import torch
from torch import nn
import torchvision
from torch.utils.data import Dataset, DataLoader, random_split
from torchvision import transforms, utils
import numpy as np
import pandas as pd
import os
from PIL import Image
class GarfieldDataset(Dataset):
def __init__(self, csv_file, root_dir, transform=None):
self.data = pd.read_csv(csv_file, index_col=0)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img1_name = os.path.join(self.root_dir,
self.data.iloc[idx, 0])
img2_name = os.path.join(self.root_dir,
self.data.iloc[idx, 1])
img3_name = os.path.join(self.root_dir,
self.data.iloc[idx, 2])
image1 = Image.open(img1_name)
image2 = Image.open(img2_name)
text = self.data.iloc[idx, 3]
input_text = 'None'
output_text = 'None'
if(" - - " in text or "- - " in text or " - -" in text or "- -" in text):
delimiter = None
for d in [" - - ", "- - ", " - -", "- -"]:
if(d in text):
delimiter = d
break
t = text.split(delimiter)
if(t[0] != ''):
input_text = t[0]
if(t[1] != ''):
output_text = t[1]
        else:
            print(text, self.data.iloc[idx, 0])
            t = text.split("- ")
            print(t)
            # guard against transcripts that split into fewer than 3 parts
            if(len(t) >= 1 and t[0] != ''):
                input_text = t[0]
            if(len(t) >= 2 and t[1] != ''):
                input_text = input_text + " - " + t[1]
            if(len(t) >= 3 and t[2] != ''):
                output_text = t[2]
if self.transform:
image1 = self.transform(image1)
image2 = self.transform(image2)
image = torch.cat((image1, image2), 0)
sample = {'image': image,
'input_text': input_text,
'output_text': output_text
}
return sample
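# The " - - " delimiter handling inside `__getitem__` can be sketched as a standalone helper. This is a simplified illustration (the helper name `split_panel_text` is made up, and it splits only on the first matching delimiter):

```python
def split_panel_text(text):
    """Split a strip's transcript on the first ' - - ' style delimiter
    into (input_text, output_text), defaulting both to 'None'."""
    input_text, output_text = 'None', 'None'
    for d in [" - - ", "- - ", " - -", "- -"]:
        if d in text:
            left, right = text.split(d, 1)
            if left:
                input_text = left
            if right:
                output_text = right
            break
    return input_text, output_text

split_panel_text("hi - - there")  # -> ("hi", "there")
```

Keeping the parsing in a helper like this also makes it easy to unit-test the delimiter edge cases separately from the Dataset class.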
dataset = GarfieldDataset('../garfield_data_single/master_file.csv', "../garfield_data_single/", transform=transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
]))
train_size = int(0.9 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = random_split(dataset, [train_size, test_size])
train_dataloader = DataLoader(train_dataset, batch_size=16,
shuffle=True, num_workers=2)
test_dataloader = DataLoader(test_dataset, batch_size=16,
shuffle=True, num_workers=2)
# -
for x in train_dataloader:
print(x['image'].shape)
break
" - - ".split(" - - ")[1] == ''
|
vqa/pytorch-vqa.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data=pd.read_csv("data_cleaned.csv")
data.head()
data.columns
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
data["tire_type_enc"]=le.fit_transform(data["tire_type"])
data["driving_style_enc"]=le.fit_transform(data["driving_style"])
data.head()
pd.DataFrame(data).to_csv("data_enc_label.csv",index=False)
data=pd.read_csv("data_cleaned.csv")
data=pd.get_dummies(data, columns=["tire_type","driving_style"])
data.head()
pd.DataFrame(data).to_csv("data_enc_dummies.csv",index=False)
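# Under the hood, `LabelEncoder` simply maps the sorted unique values of a column to the integers 0..n-1. A minimal pure-Python sketch of that behaviour (the tire-type values below are made up for illustration):

```python
def label_encode(values):
    # sorted unique values -> 0..n-1, mirroring sklearn's LabelEncoder
    mapping = {v: i for i, v in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values]

label_encode(["summer", "winter", "all_season", "summer"])  # -> [1, 2, 0, 1]
```

This is why label encoding imposes an arbitrary ordering on the categories, whereas `get_dummies` avoids that by giving each category its own column.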
|
Feature_encoding.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.1 64-bit
# language: python
# name: python3
# ---
# ***
#
# # **Analysis of the DTFH podcast data**
#
# ***
#
# ## Introduction
# The Duncan Trussell Family Hour (DTFH for short) is a podcast by the American author <NAME>, which deals, not too seriously, with topics of modern spirituality. Guests range from journalists, comedians and musicians to Buddhist gurus, psychotherapists and practitioners of pagan spiritual rituals. An episode is usually a free-form conversation between Duncan and his guest (or guests, when more than one person joins him on air). Common topics include life anecdotes, views on spirituality, conversations about meditation, and the use of hallucinogenic psychoactive substances as a way of getting to know oneself and the surrounding world.
#
# In this document I analyse data scraped from the author's website ([DTFH](http://www.duncantrussell.com/episodes)). Let us start with the necessary import of the pandas package, with which the analysis is done, and of the file DTFH.csv, which contains the per-episode data; then let us see what the raw data looks like:
# Importing pandas and the csv file
import pandas as pd
podcasti = pd.read_csv('C:/faks/programiranje 1/DTFH-analiza-podatkov/DTFH.csv')
pd.options.display.max_rows = 20
# A look at the raw data
podcasti
# ## Publication and episode-length trends
# ### Average episode length
# Let us start by computing the average episode length. Since the csv file stores the length in seconds (the format in which it is recorded on iTunes), we also define a helper function that expresses this length in a more readable form (hours, minutes, seconds).
# +
# Helper function and adding a new column to the table.
def sekunde_v_ure(n):
h = (n // 3600)
min = ((n // 60) - h * 60)
sec = (n - 60 * min - 3600 * h)
return (h, min, sec)
podcasti['dolzina_h'] = (podcasti.dolzina).apply(sekunde_v_ure)
podcasti
# -
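# For reference, the same conversion can be written with two `divmod` steps; a sketch equivalent to `sekunde_v_ure` (the name `seconds_to_hms` is made up):

```python
def seconds_to_hms(n):
    # 3600 seconds per hour, 60 seconds per minute
    h, rem = divmod(n, 3600)
    m, s = divmod(rem, 60)
    return (h, m, s)

seconds_to_hms(5640)  # -> (1, 34, 0)
```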
# Computing the average length
pov_dolzina = round((podcasti.dolzina).mean())
print(sekunde_v_ure(pov_dolzina))
# The average episode length is therefore 1h 34min.
# ### Distribution of episodes by length
# Let us also see how many episodes are shorter than one hour, how many are between one and two hours, and how many are longer than two hours.
# All episodes sorted by length
rezina_dolzin = (podcasti[['naslov', 'dolzina']]).copy()
rezina_dolzin['dolzina_h'] = (podcasti.dolzina).apply(sekunde_v_ure)
rezina_dolzin.sort_values('dolzina')
# Episodes shorter than 1 hour
rezina_dolzin_kratke = rezina_dolzin[rezina_dolzin.dolzina < 3600]
rezina_dolzin_kratke.sort_values('dolzina')
# Episodes between 1 and 2 hours
rezina_dolzin_pov = (rezina_dolzin[(rezina_dolzin.dolzina >= 3600) & (rezina_dolzin.dolzina < 7200)])
rezina_dolzin_pov.sort_values('dolzina')
# Episodes longer than 2 hours
rezina_dolzin_dolge = rezina_dolzin[rezina_dolzin.dolzina >= 7200]
rezina_dolzin_dolge.sort_values('dolzina')
# Counts of the groups above
st_kratkih = len(rezina_dolzin_kratke.index)
st_pov = len(rezina_dolzin_pov.index)
st_dolgih = len(rezina_dolzin_dolge.index)
print(st_kratkih, st_pov, st_dolgih)
# Only 7 episodes are shorter than one hour, while 57 are longer than two hours, the longest being over three hours. The large majority of episodes (336) are between one and two hours long.
#
#
# ### Number of episodes per month and year
# Let us look at when the episodes were published. In the original table the publication times are given to the day, and even the hour. The relevant information is at most the month and year an episode came out, so let us first write functions that extract the month and year of publication.
# +
# Helper function for months and adding a column
def pretvori_v_mesece(str):
return str[7:-8]
podcasti['mesec'] = (podcasti.datum).apply(pretvori_v_mesece)
podcasti
# +
# Helper function for years and adding a column
def pretvori_v_leta(str):
return str[4:]
podcasti['leto'] = (podcasti.mesec).apply(pretvori_v_leta)
podcasti
# -
# Let us now look at a chart of how many episodes were published in each year.
# Chart of episodes per year
# %matplotlib inline
podcasti_leta = podcasti.groupby('leto').size()
graf_epizode_po_letih = podcasti_leta.plot.bar()
graf_epizode_po_letih
# As we can see, slightly more than 40 episodes come out in an average year. The data for 2013 and 2022 are incomplete (most episodes from 2013 are missing, and 2022 has only just begun). The exception is 2021, when almost 70 episodes were released. A possible cause of the increase is Covid-19. We can support or refute this hypothesis by analysing the episodes released per month, which shows more clearly when the number of released episodes started to grow.
# Chart of episodes per month
# %matplotlib inline
podcasti_meseci = (podcasti[::-1]).groupby('mesec', sort= False).size()
graf_epizode_po_mesecih = podcasti_meseci.plot()
graf_epizode_po_mesecih
# As we can see, no single month stands out. It is worth noting that there has been no month in the last eight years in which fewer than two episodes were released. The number of episodes started to grow in the second half of 2020 and then stayed high throughout 2021, which coincides with the time frame of the Covid-19 epidemic, but this is not sufficient evidence for the observation above.
# ### Episode length over the years
# The last thing we will look at is how the average episode length has changed over the years of the podcast's existence. My gut feeling is that episodes have been getting longer on average.
# Chart of average length in minutes per year
rezina_dolzin_po_letih = (podcasti[['leto']]).copy()
rezina_dolzin_po_letih['dolzina_min'] = (podcasti.dolzina // 60)
graf_dolzine_po_letih = (rezina_dolzin_po_letih.groupby('leto').mean()).plot.bar()
graf_dolzine_po_letih
# As it turns out, my initial hypothesis was wrong. Episode length has not changed substantially over the years. The only year that stands out is 2013, but the sample from that year is relatively small (< 10 compared to the usual 40), so we cannot automatically assume that episodes from 2013 are longer on average.
# ## Guest analysis
# The goal of this section is to estimate, at least roughly, how often particular people appear on the podcast, i.e. to identify the most frequent guests. Since all guests are always (or almost always) listed in the episode title, we will analyse the titles of individual episodes. Let us first build a few helper functions:
# Function that turns a sentence into a list of words
def naredi_seznam_besed(tekst):
return tekst.split()
# Function that counts the occurrences of each word in a list and stores them in a dictionary
def prestej_besede(seznam):
slovar = {}
for beseda in seznam:
slovar.update({beseda: seznam.count(beseda)})
return slovar
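# Note that `prestej_besede` calls `list.count` once per word, which is quadratic in the list length; `collections.Counter` builds the same dictionary in a single pass (shown here as a sketch with a made-up name):

```python
from collections import Counter

def count_words(words):
    # one O(n) pass instead of one list.count per element
    return dict(Counter(words))

count_words(["a", "b", "a"])  # -> {'a': 2, 'b': 1}
```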
# Let us now look only at the relevant columns
rezina_podcasti_gostje = (podcasti[['naslov']]).copy()
rezina_podcasti_gostje['seznam_besed_naslov'] = (podcasti.naslov).apply(naredi_seznam_besed)
rezina_podcasti_gostje
# +
# Build a table of all words in episode titles, sorted by occurrences in descending order (looking at the 20 most common)
seznam_vseh_besed_naslov = []
for seznam in rezina_podcasti_gostje.seznam_besed_naslov:
for beseda in seznam:
seznam_vseh_besed_naslov.append(beseda)
slovar_besed = prestej_besede(seznam_vseh_besed_naslov)
pomozni_slovar = {'beseda' : slovar_besed.keys(), 'ponovitve' : (slovar_besed.values())}
ponovitve_besed_naslov = pd.DataFrame(pomozni_slovar)
ponovitve_besed_naslov.sort_values('ponovitve', ascending= False).head(20)
# -
# We notice that the most common word in the titles is the name "David", but given how common the name is, these are probably several different Davids. Indeed, if we look:
# +
# Add a new column that checks whether the word "David" is in the title, then filter on that column
def david_in(seznam):
return 'David' in seznam
rezina_podcasti_gostje.seznam_besed_naslov.apply(str)
rezina_podcasti_gostje['david'] = rezina_podcasti_gostje.seznam_besed_naslov.apply(david_in)
rezina_podcasti_gostje[rezina_podcasti_gostje.david == True]
# -
# By counting manually we find that <NAME> is a guest on 13 episodes of the podcast. If we check the second most common name, Raghu, in the same way, we see:
# +
# The same for the name Raghu
def raghu_in(seznam):
return 'Raghu' in seznam
del rezina_podcasti_gostje['david']
rezina_podcasti_gostje['raghu'] = rezina_podcasti_gostje.seznam_besed_naslov.apply(raghu_in)
rezina_podcasti_gostje[rezina_podcasti_gostje.raghu == True]
# -
# <NAME>, a film producer and actor, is a guest on 15 episodes and is therefore the most frequent guest among the episodes in our data set. He is followed by the musician <NAME> with 13 episodes.
# **Note:** *An analysis like this is only feasible for a small amount of data. With more data, separating guests with identical names would be substantially harder.*
# ## Description analysis
# As in the previous section, we will try to find out which words appear most often in episode descriptions.
# Let us look only at the relevant columns
rezina_podcasti_opis = (podcasti[['naslov', 'opis']]).copy()
rezina_podcasti_opis
# It turns out that a few episodes have no description, which causes problems. Since they have no description, we can safely exclude them from the analysis. These are the following episodes:
# Episodes without a description
rezina_podcasti_opis[rezina_podcasti_opis.opis.isna()]
# Filtering out episodes without a description
rezina_podcasti_opis2 = rezina_podcasti_opis.dropna()
sez_besed_opis = rezina_podcasti_opis2.opis.apply(naredi_seznam_besed)
sez_besed_opis
# +
# Build a table of all words in episode descriptions, sorted by occurrences in descending order (looking at the 20 most common)
seznam_vseh_besed_opis = []
for seznam in sez_besed_opis:
for beseda in seznam:
seznam_vseh_besed_opis.append(beseda)
slovar_besed2 = prestej_besede(seznam_vseh_besed_opis)
pomozni_slovar2 = {'beseda' : slovar_besed2.keys(), 'ponovitve' : (slovar_besed2.values())}
ponovitve_besed_naslov = pd.DataFrame(pomozni_slovar2)
ponovitve_besed_naslov.sort_values('ponovitve', ascending= False).head(20)
# -
# We notice that, apart from function words ("and", "to", "the", etc.), the most common words appear as a consequence of sponsorship. Indeed, from just the 20 most common words we can assemble the sentence "this episode is brought to you by", which appears in almost every description. Another common word is "DUNCAN" in capitals, a promo code for the advertised sites. If we look at the next 20 words, the situation is similar:
# Next 20 words by frequency
ponovitve_besed_naslov.sort_values('ponovitve', ascending= False)[20:40]
# As a curiosity, we can look at roughly how often words we would expect in the descriptions of such a podcast appear, e.g. "meditation", "spiritual" and "LSD":
# Filtering for specific words of interest
ponovitve_besed_naslov[ (ponovitve_besed_naslov.beseda == 'meditation') |
(ponovitve_besed_naslov.beseda == 'spiritual') |
(ponovitve_besed_naslov.beseda == 'LSD') ]
# ## Explicitness
# Finally, let us look at how often iTunes marks an episode as "explicit".
# Count explicit and non-explicit episodes
podcasti.groupby('eksplicitnost').size()
# Apparently only 7 episodes are marked as explicit. Since there are so few, we can list exactly which ones they are.
# List of explicit episodes
rezina_eks = (podcasti[['naslov', 'eksplicitnost', 'opis']]).copy()
eksplicitne_epizode = rezina_eks[rezina_eks.eksplicitnost == 'yes']
eksplicitne_epizode
# ## Conclusion
# This concludes the data analysis. The analysis of lengths, publication trends and explicitness is satisfactory; the guest analysis reached a result, albeit in a way that would not scale to a larger data set; the description analysis, however, left much to be desired.
|
analiza_podatkov.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import the project1-prepareData notebook:
# !pip install ipynb
from ipynb.fs.full.data_analysis import *
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# +
berlinDf_final_linear = berlinDf_select.copy()
berlinDf_final_linear.drop(['newlyConst', 'balcony','hasKitchen', 'lift', 'garden'], axis='columns', inplace=True)
df_full_train, df_train, df_val, df_test, y_full_train, y_train, y_val, y_test = split_dataFrame(berlinDf_final_linear)
berlinDf_final_linear
# +
#One-hot encoding and linear regression
from sklearn import linear_model
from sklearn.feature_extraction import DictVectorizer
def train(dataFrame, y):
# Hot Encoding
dicts = dataFrame.to_dict(orient="records")
dv = DictVectorizer(sparse=False)
X = dv.fit_transform(dicts)
# train
model = linear_model.LinearRegression()
model.fit(X, y)
return dv, model
dv, model = train(df_train, y_train)
weights = model.coef_ # weights
w0 = model.intercept_ # bias, w0
print("HotEncoded Features:", dv.get_feature_names())
weights_with_featureNames = dict(zip(dv.get_feature_names(), weights))
print("w0 =", w0)
display(weights_with_featureNames)
display(pd.DataFrame([weights], index=["weight"], columns=dv.get_feature_names()))
# +
# Predict
def predict(dataFrame, dv, model):
dicts = dataFrame.to_dict(orient="records")
X = dv.transform(dicts)
y_pred = model.predict(X)
return y_pred
proba = predict(df_val, dv, model)
y_pred_val = proba
# +
# Check accuracy
#check average accuracy on y_val
df_pred = pd.DataFrame()
df_pred["predBaseRent"] = y_pred_val
df_pred["actual"] = y_val
df_pred["prediction_deviation"] = df_pred.predBaseRent - df_pred.actual
display(df_pred)
print("Mean deviation on y_val:",df_pred.prediction_deviation.mean())
# +
# MAE, MSE, RMSE
from sklearn import metrics
mae = metrics.mean_absolute_error(y_val, y_pred_val)
mse = metrics.mean_squared_error(y_val, y_pred_val)
rmse = np.sqrt(metrics.mean_squared_error(y_val, y_pred_val))
print("MAE for numerical linear:", mae)
print("MSE for numerical linear:", mse)
print("RMSE for numerical linear:", rmse)
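# For reference, RMSE is just the square root of the mean squared residual; a dependency-free sketch of what `metrics.mean_squared_error` computes (the helper name `rmse_manual` is made up to avoid shadowing the `rmse` variable above):

```python
import math

def rmse_manual(y_true, y_pred):
    # square root of the mean of squared residuals
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

rmse_manual([1.0, 2.0], [1.0, 4.0])  # -> sqrt(2) ≈ 1.414
```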
# +
import matplotlib.pyplot as plt
from sklearn import metrics
thresholds = np.linspace(0, 500, 41)
y_count = len(df_pred.prediction_deviation)
scores = []
for t in thresholds:
dev_within_t = (abs(df_pred.prediction_deviation) < t).sum()
score = (dev_within_t*100)/y_count
scores.append(score)
print('Model max deviation %.2f: %.3f percent' % (t, score))
plt.xlabel('thresholds')
plt.ylabel('% within deviation')
plt.plot(thresholds, scores)
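# The score computed in the loop above is just the percentage of predictions whose absolute deviation falls below the threshold; stripped of pandas it reduces to (sketch with a made-up helper name):

```python
def pct_within(deviations, threshold):
    # share of deviations with |d| < threshold, as a percentage
    return 100.0 * sum(1 for d in deviations if abs(d) < threshold) / len(deviations)

pct_within([-10, 5, 120, -300], 50)  # -> 50.0
```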
|
capstone-project/.ipynb_checkpoints/capstoneProject_linearRegresion-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + hide_input=true
from nbdev import *
# %nbdev_default_export export2html
# %nbdev_default_class_level 3
# + hide_input=false
# %nbdev_export
from nbdev.imports import *
from nbdev.sync import *
from nbdev.export import *
from nbdev.export import _mk_flag_re
from nbdev.showdoc import *
from nbdev.template import *
from html.parser import HTMLParser
from nbconvert.preprocessors import ExecutePreprocessor, Preprocessor
from nbconvert import HTMLExporter,MarkdownExporter
import traitlets
# -
# # Convert to html
#
# > The functions that transform the dev notebooks in the documentation of the library
#
# - toc: true
# The most important function defined in this module is `notebook2html`, so you may want to jump to it before scrolling through the rest, which explains the details behind the scenes of the conversion from notebooks to the html documentation. The main things to remember are:
# - put a `#hide` or `%nbdev_hide` flag at the top of any cell you want to completely hide in the docs
# - use the hide input [jupyter extension](https://github.com/ipython-contrib/jupyter_contrib_nbextensions) to hide the input of some cells (by default all `show_doc` cells have that marker added)
# - you can define some jekyll metadata in the markdown cell with the title, see `get_metadata`
# - use backticks for terms you want automatic links to be found, but use `<code>` and `</code>` when you have homonyms and don't want those links
# - you can define the default toc level of classes with `%nbdev_default_class_level` flag followed by a number (default is 2)
# - you can add jekyll warnings, important or note banners with appropriate block quotes (see `add_jekyll_notes`)
# - put any images you want to use in the images folder of your notebook folder, they will be automatically copied over to the docs folder
# - put a `#hide_input` or `%nbdev_hide_input` flag at the top of a cell if you don't want code to be shown in the docs
# - cells containing `%nbdev_export` or `show_doc` have their code hidden automatically
# - put a `#hide_output` or `%nbdev_hide_output` flag at the top of a cell if you don't want output to be shown in the docs
# - use `%nbdev_collapse_input` or `%nbdev_collapse_output` to include code or output in the docs under a collapsable element
# ## Preprocessing notebook
# ### Cell processors
# %nbdev_export
class HTMLParseAttrs(HTMLParser):
"Simple HTML parser which stores any attributes in `attrs` dict"
def handle_starttag(self, tag, attrs): self.tag,self.attrs = tag,dict(attrs)
def attrs2str(self):
"Attrs as string"
return ' '.join([f'{k}="{v}"' for k,v in self.attrs.items()])
def show(self):
"Tag with updated attrs"
return f'<{self.tag} {self.attrs2str()} />'
def __call__(self, s):
"Parse `s` and store attrs"
self.feed(s)
return self.attrs
h = HTMLParseAttrs()
t = h('<img src="src" alt="alt" width="700" caption="cap" />')
test_eq(t['width'], '700')
test_eq(t['src' ], 'src')
t['width'] = '600'
test_eq(h.show(), '<img src="src" alt="alt" width="600" caption="cap" />')
t['max-width'] = t.pop('width')
test_eq(h.show(), '<img src="src" alt="alt" caption="cap" max-width="600" />')
# The following functions are applied on individual cells as a preprocessing step before the conversion to html.
# %nbdev_export
def remove_widget_state(cell):
"Remove widgets in the output of `cells`"
if cell['cell_type'] == 'code' and 'outputs' in cell:
cell['outputs'] = [l for l in cell['outputs']
if not ('data' in l and 'application/vnd.jupyter.widget-view+json' in l.data)]
return cell
# Those outputs usually can't be rendered properly in html.
# +
# %nbdev_export
# Note: `_re_show_doc` will catch show_doc even if it's commented out etc
_re_show_doc = re.compile(r"""
# Catches any show_doc and get the first argument in group 1
^\s*show_doc # line can start with any amount of whitespace followed by show_doc
\s*\(\s* # Any number of whitespace, opening (, any number of whitespace
([^,\)\s]*) # Catching group for any character but a comma, a closing ) or a whitespace
[,\)\s] # A comma, a closing ) or a whitespace
""", re.MULTILINE | re.VERBOSE)
_re_show_doc_magic = _mk_flag_re(True, 'show_doc', -1,
"# Catches a cell with %nbdev_show_doc \*\* and get that \*\* in group 1")
_re_hide_input = [
_mk_flag_re(False, 'export', (0,1),
        "Matches any cell that has `#export` in it"),
_mk_flag_re(False, '(hide_input|hide-input)', 0,
"Matches any cell that has `#hide_input` or `#hide-input` in it"),
_mk_flag_re(True, 'export', (0,1),
"Matches any cell that has `%nbdev_export` in it"),
_mk_flag_re(True, 'hide_input', 0,
"Matches any cell that has `%nbdev_hide_input` in it")]
_re_hide_output = [
_mk_flag_re(False, '(hide_output|hide-output)', 0,
"Matches any cell that has `#hide_output` or `#hide-output` in it"),
_mk_flag_re(True, 'hide_output', 0,
"Matches any cell that has `%nbdev_hide_output` in it")]
# -
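# As a quick illustration, the `show_doc` pattern above extracts the first argument of any `show_doc` call; the pattern is reproduced verbatim here so the cell is self-contained:

```python
import re

# same regex as _re_show_doc above
pattern = re.compile(r"""
^\s*show_doc # line can start with any amount of whitespace followed by show_doc
\s*\(\s*     # any number of whitespace, opening (, any number of whitespace
([^,\)\s]*)  # catching group for any character but a comma, a closing ) or a whitespace
[,\)\s]      # a comma, a closing ) or a whitespace
""", re.MULTILINE | re.VERBOSE)

pattern.search("line1\n show_doc (read_nb)\nline3").group(1)  # -> 'read_nb'
```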
# %nbdev_export
def upd_metadata(cell, key, value=True):
"Sets `key` to `value` on the `metadata` of `cell` without replacing metadata"
cell.setdefault('metadata',{})[key] = value
# %nbdev_export
def hide_cells(cell):
"Hide inputs of `cell` that need to be hidden"
if check_re_multi(cell, [_re_show_doc, _re_show_doc_magic, *_re_hide_input]): upd_metadata(cell, 'hide_input')
elif check_re_multi(cell, _re_hide_output): upd_metadata(cell, 'hide_output')
return cell
# This concerns all the cells with `%nbdev_export` or `%nbdev_hide_input` flags and all the cell containing a `show_doc` for a function or class.
# +
for source in ['show_doc(read_nb)', '# export\nfrom local.core import *', '# hide_input\n2+2',
'line1\n show_doc (read_nb) \nline3', '# export with.mod\nfrom local.core import *',
'%nbdev_export with.mod\nfrom local.core import *', 'line1\n%nbdev_export\nline3',
'%nbdev_show_doc read_nb', '%nbdev_hide_input\n2+2']:
cell = {'cell_type': 'code', 'source': source}
cell1 = hide_cells(cell.copy())
assert 'metadata' in cell1
assert 'hide_input' in cell1['metadata']
assert cell1['metadata']['hide_input']
for flag in ['# exports', '%nbdev_export_and_show']:
cell = {'cell_type': 'code', 'source': f'{flag}\nfrom local.core2 import *'}
test_eq(hide_cells(cell.copy()), cell)
# -
# This concerns all the cells with a `%nbdev_hide_output` flag.
# +
for source in ['# hide-output\nfrom local.core import *', '# hide_output\n2+2', '%nbdev_hide_output\na=b']:
cell = {'cell_type': 'code', 'source': source}
cell1 = hide_cells(cell.copy())
assert 'metadata' in cell1
assert 'hide_output' in cell1['metadata']
assert cell1['metadata']['hide_output']
cell = {'cell_type': 'code', 'source': '# hide-outputs\nfrom local.core import *'}
test_eq(hide_cells(cell.copy()), cell)
# -
# %nbdev_export
def clean_exports(cell):
"Remove all flags from code `cell`s"
if cell['cell_type'] == 'code':
cell['source'] = split_flags_and_code(cell, str)[1]
return cell
# The rest of the cell is displayed without any modification.
# +
for flag in ['# exports', '%nbdev_export_and_show']:
cell = {'cell_type': 'code', 'source': f'{flag}\nfrom local.core import *'}
test_eq(clean_exports(cell.copy()), {'cell_type': 'code', 'source': 'from local.core import *'})
cell['cell_type'] = 'markdown'
test_eq(clean_exports(cell.copy()), cell)
cell = {'cell_type': 'code', 'source': f'{flag} core\nfrom local.core import *'}
test_eq(clean_exports(cell.copy()), {'cell_type': 'code', 'source': 'from local.core import *'})
cell = {'cell_type': 'code', 'source': f'# comment \n# exports\nprint("something")'}
test_eq(clean_exports(cell.copy()), {'cell_type': 'code', 'source': '# exports\nprint("something")'})
cell = {'cell_type': 'code', 'source': f'# comment \n%nbdev_export_and_show\nprint("something")'}
test_eq(clean_exports(cell.copy()), {'cell_type': 'code', 'source': 'print("something")'})
# -
# %nbdev_export
def treat_backticks(cell):
"Add links to backticks words in `cell`"
if cell['cell_type'] == 'markdown': cell['source'] = add_doc_links(cell['source'])
return cell
cell = {'cell_type': 'markdown', 'source': 'This is a `DocsTestClass`'}
test_eq(treat_backticks(cell), {'cell_type': 'markdown',
'source': 'This is a [`DocsTestClass`](/export.html#DocsTestClass)'})
# %nbdev_export
_re_nb_link = re.compile(r"""
# Catches any link to a local notebook and keeps the title in group 1, the link without .ipynb in group 2
\[ # Opening [
([^\]]*) # Catching group for any character except ]
\]\( # Closing ], opening (
([^http] # Catching group that must not begin by html (local notebook)
[^\)]*) # and containing anything but )
.ipynb\) # .ipynb and closing )
""", re.VERBOSE)
# %nbdev_export
_re_block_notes = re.compile(r"""
# Catches any pattern > Title: content with title in group 1 and content in group 2
^\s*>\s* # > followed by any number of whitespace
([^:]*) # Catching group for any character but :
:\s* # : then any number of whitespace
([^\n]*) # Catching group for anything but a new line character
(?:\n|$) # Non-catching group for either a new line or the end of the text
""", re.VERBOSE | re.MULTILINE)
# %nbdev_export
def _to_html(text):
return text.replace("'", "’")
# %nbdev_export
def add_jekyll_notes(cell):
"Convert block quotes to jekyll notes in `cell`"
styles = Config().get('jekyll_styles', 'note,warning,tip,important').split(',')
def _inner(m):
title,text = m.groups()
if title.lower() not in styles: return f"> {title}:{text}"
return '{% include '+title.lower()+".html content=\'"+_to_html(text)+"\' %}"
if cell['cell_type'] == 'markdown':
cell['source'] = _re_block_notes.sub(_inner, cell['source'])
return cell
# Supported styles are `Warning`, `Note` `Tip` and `Important`:
#
# Typing `> Warning: There will be no second warning!` will render in the docs:
#
# > Warning: There will be no second warning!
#
# Typing `> Important: Pay attention! It's important.` will render in the docs:
#
# > Important: Pay attention! It's important.
#
# Typing `> Tip: This is my tip.` will render in the docs:
#
# > Tip: This is my tip.
#
# Typing `> Note: Take note of this.` will render in the docs:
#
# > Note: Take note of this.
#
# Typing ``> Note: A doc link to `add_jekyll_notes` should also work fine.`` will render in the docs:
#
# > Note: A doc link to `add_jekyll_notes` should also work fine.
# %nbdev_hide
for w in ['Warning', 'Note', 'Important', 'Tip', 'Bla']:
cell = {'cell_type': 'markdown', 'source': f"> {w}: This is my final {w.lower()}!"}
res = '{% include '+w.lower()+'.html content=\'This is my final '+w.lower()+'!\' %}'
if w != 'Bla': test_eq(add_jekyll_notes(cell), {'cell_type': 'markdown', 'source': res})
else: test_eq(add_jekyll_notes(cell), cell)
# %nbdev_hide
cell = {'cell_type': 'markdown', 'source': f"> This is a link, don't break me! https://my.link.com"}
test_eq(add_jekyll_notes(cell.copy()), cell)
# %nbdev_export
_re_image = re.compile(r"""
# Catches any image file used, either with `` or `<img src="image_file">`
^(!\[ # Beginning of line (since re.MULTILINE is passed) followed by ![ in a catching group
[^\]]* # Anything but ]
\]\() # Closing ] and opening (, end of the first catching group
[ \t]* # Whitespace before the image path
([^\) \t]*) # Catching block with any character that is not ) or whitespace
(\)| |\t) # Catching group with closing ) or whitespace
| # OR
^(<img\ [^>]*>) # Catching group with <img some_html_code>
""", re.MULTILINE | re.VERBOSE)
m=_re_image.search('![Alt](images/logo.png)')
test_eq(m.groups(), ('![Alt](', 'images/logo.png', ')', None))
# using ) or whitespace to close the group means we don't need a special case for captions
m=_re_image.search('![Alt](images/logo.png "caption")')
test_eq(m.groups(), ('![Alt](', 'images/logo.png', ' ', None))
# %nbdev_export
def _img2jkl(d, h, jekyll=True):
if not jekyll: return '<img ' + h.attrs2str() + '>'
if 'width' in d: d['max-width'] = d.pop('width')
if 'src' in d: d['file'] = d.pop('src')
return '{% include image.html ' + h.attrs2str() + ' %}'
# %nbdev_export
def _is_real_image(src):
return not (src.startswith('http://') or src.startswith('https://') or src.startswith('data:image/'))
# %nbdev_export
def copy_images(cell, fname, dest, jekyll=True):
"Copy images referenced in `cell` from `fname` parent folder to `dest` folder"
def _rep_src(m):
grps = m.groups()
if grps[3] is not None:
h = HTMLParseAttrs()
dic = h(grps[3])
src = dic['src']
else: src = grps[1]
if _is_real_image(src):
os.makedirs((Path(dest)/src).parent, exist_ok=True)
shutil.copy(Path(fname).parent/src, Path(dest)/src)
src = Config().doc_baseurl + src
if grps[3] is not None:
dic['src'] = src
return _img2jkl(dic, h, jekyll=jekyll)
else: return f"{grps[0]}{src}{grps[2]}"
if cell['cell_type'] == 'markdown': cell['source'] = _re_image.sub(_rep_src, cell['source'])
return cell
# This is to ensure that all images defined in `nbs_folder/images` and used in notebooks are copied over to `doc_folder/images`.
dest_img = Config().doc_path/'images'/'logo.png'
cell = {'cell_type': 'markdown', 'source':'Text\n![Alt](images/logo.png)'}
try:
    copy_images(cell, Path('01_export.ipynb'), Config().doc_path)
    test_eq(cell["source"], f'Text\n![Alt]({Config().doc_baseurl}images/logo.png)')
    #Image has been copied
    assert dest_img.exists()
    cell = {'cell_type': 'markdown', 'source':'Text\n![Alt](images/logo.png "caption")'}
    copy_images(cell, Path('01_export.ipynb'), Config().doc_path)
    test_eq(cell["source"], f'Text\n![Alt]({Config().doc_baseurl}images/logo.png "caption")')
finally: dest_img.unlink()
# %nbdev_hide
cell = {'cell_type': 'markdown', 'source':'Text\n![Alt](https://site.com/logo.png)'}
copy_images(cell, Path('01_export.ipynb'), Config().doc_path)
test_eq(cell["source"], 'Text\n![Alt](https://site.com/logo.png)')
cell = {'cell_type': 'markdown', 'source':'Text\n![Alt](data:image/png;base64,iVBOR)'}
copy_images(cell, Path('01_export.ipynb'), Config().doc_path)
test_eq(cell["source"], 'Text\n![Alt](data:image/png;base64,iVBOR)')
# %nbdev_hide
cell = {'cell_type': 'markdown', 'source': 'Text\n<img src="images/logo.png" alt="alt" width="600" caption="cap" />'}
try:
copy_images(cell, Path('01_export.ipynb'), Config().doc_path)
test_eq(cell["source"], 'Text\n{% include image.html alt="alt" caption="cap" max-width="600" file="/images/logo.png" %}')
assert dest_img.exists()
finally: dest_img.unlink()
# %nbdev_hide
cell = {'cell_type': 'markdown', 'source': 'Text\n<img src="http://site.logo.png" alt="alt" width="600" caption="cap" />'}
copy_images(cell, Path('01_export.ipynb'), Config().doc_path)
test_eq(cell["source"], 'Text\n{% include image.html alt="alt" caption="cap" max-width="600" file="http://site.logo.png" %}')
# %nbdev_export
def _relative_to(path1, path2):
p1,p2 = Path(path1).absolute().parts,Path(path2).absolute().parts
i=0
while i <len(p1) and i<len(p2) and p1[i] == p2[i]: i+=1
p1,p2 = p1[i:],p2[i:]
return os.path.sep.join(['..' for _ in p2] + list(p1))
# %nbdev_hide
test_eq(_relative_to(Path('images/logo.png'), Config().doc_path), '../nbs/images/logo.png')
test_eq(_relative_to(Path('images/logo.png'), Config().doc_path.parent), 'nbs/images/logo.png')
# %nbdev_export
def adapt_img_path(cell, fname, dest, jekyll=True):
"Adapt path of images referenced in `cell` from `fname` to work in folder `dest`"
def _rep(m):
gps = m.groups()
if gps[0] is not None:
start,img,end = gps[:3]
if not (img.startswith('http:/') or img.startswith('https:/')):
img = _relative_to(fname.parent/img, dest)
return f'{start}{img}{end}'
else:
h = HTMLParseAttrs()
dic = h(gps[3])
if not (dic['src'].startswith('http:/') or dic['src'].startswith('https:/')):
dic['src'] = _relative_to(fname.parent/dic['src'], dest)
return _img2jkl(dic, h, jekyll=jekyll)
if cell['cell_type'] == 'markdown': cell['source'] = _re_image.sub(_rep, cell['source'])
return cell
# This function is slightly different: it ensures that a notebook converted to a file placed in `dest` will have its image locations updated. It is used for the `README.md` file (generated automatically from the index) since the images are copied inside the github repo, but in general, you should make sure your images are going to be accessible from the location your file ends up in.
# +
cell = {'cell_type': 'markdown', 'source': 'Text\n'}
cell1 = adapt_img_path(cell, Path('01_export.ipynb'), Path('.').absolute().parent)
test_eq(cell1['source'], 'Text\n')
cell = {'cell_type': 'markdown',
'source': 'Text\n<img alt="Logo" src="images/logo.png" width="600"/>'}
cell1 = adapt_img_path(cell, Path('01_export.ipynb'), Path('.').absolute().parent)
test_eq(cell1['source'], 'Text\n{% include image.html alt="Logo" max-width="600" file="nbs/images/logo.png" %}')
cell = {'cell_type': 'markdown',
'source': 'Text\n<img alt="Logo" src="https://site.image.png" width="600"/>'}
cell1 = adapt_img_path(cell, Path('01_export.ipynb'), Path('.').absolute().parent)
test_eq(cell1['source'], 'Text\n{% include image.html alt="Logo" max-width="600" file="https://site.image.png" %}')
# -
# Escape Latex in liquid
# %nbdev_export
_re_latex = re.compile(r'^(\$\$.*\$\$)$', re.MULTILINE)
# %nbdev_export
def escape_latex(cell):
if cell['cell_type'] != 'markdown': return cell
cell['source'] = _re_latex.sub(r'{% raw %}\n\1\n{% endraw %}', cell['source'])
return cell
cell = {'cell_type': 'markdown',
'source': 'lala\n$$equation$$\nlala'}
cell = escape_latex(cell)
test_eq(cell['source'], 'lala\n{% raw %}\n$$equation$$\n{% endraw %}\nlala')
# ### Collapsable Code Cells
#
# +
# %nbdev_export
_re_cell_to_collapse_closed = [
_mk_flag_re(False, '(collapse|collapse_hide|collapse-hide)', 0,
"Matches any cell with #collapse or #collapse_hide"),
_mk_flag_re(True, 'collapse_input', 0,
"Matches any cell with %nbdev_collapse_input")]
_re_cell_to_collapse_open = [
_mk_flag_re(False, '(collapse_show|collapse-show)', 0,
"Matches any cell with #collapse_show"),
_mk_flag_re(True, r'collapse_input[ \t]+open', 0,
"Matches any cell with %nbdev_collapse_input open")]
_re_cell_to_collapse_output = [
_mk_flag_re(False, '(collapse_output|collapse-output)', 0,
"Matches any cell with #collapse_output"),
_mk_flag_re(True, 'collapse_output', 0,
"Matches any cell with %nbdev_collapse_output")]
# -
# %nbdev_export
def collapse_cells(cell):
"Add a collapse button to inputs or outputs of `cell` in either the open or closed position"
if check_re_multi(cell, _re_cell_to_collapse_closed): upd_metadata(cell,'collapse_hide')
elif check_re_multi(cell, _re_cell_to_collapse_open): upd_metadata(cell,'collapse_show')
elif check_re_multi(cell, _re_cell_to_collapse_output): upd_metadata(cell,'collapse_output')
return cell
# %nbdev_hide
for flag in [
('collapse_hide', '#collapse'),
('collapse_hide', '# collapse_hide'),
('collapse_hide', ' # collapse-hide'),
('collapse_hide', '%nbdev_collapse_input'),
('collapse_show', '#collapse_show'),
('collapse_show', '#collapse-show'),
('collapse_show', '%nbdev_collapse_input open'),
('collapse_output', ' #collapse_output'),
('collapse_output', '#collapse-output'),
('collapse_output', '%nbdev_collapse_output ')]:
cell = nbformat.v4.new_code_cell(f'#comment\n{flag[1]} \ndef some_code')
test_eq(True, collapse_cells(cell)['metadata'][flag[0]])
# %nbdev_hide
# check that we can't collapse both input and output
cell = nbformat.v4.new_code_cell(f'%nbdev_collapse_input\n%nbdev_collapse_output \ndef some_code')
test_eq({'collapse_hide': True}, collapse_cells(cell)['metadata'])
cell = nbformat.v4.new_code_cell(f'%nbdev_collapse_input open\n#collapse_output \ndef some_code')
test_eq({'collapse_show': True}, collapse_cells(cell)['metadata'])
# check that we can hide input and collapse output
cell = nbformat.v4.new_code_cell(f'%nbdev_hide_input\n%nbdev_collapse_output \ndef some_code')
test_eq({'hide_input': True, 'collapse_output': True}, hide_cells(collapse_cells(cell))['metadata'])
cell = nbformat.v4.new_code_cell(f'#hide-input\n#collapse_output \ndef some_code')
test_eq({'hide_input': True, 'collapse_output': True}, hide_cells(collapse_cells(cell))['metadata'])
#
# - Placing `%nbdev_collapse_input open` in a code cell will include your code under a collapsable element that is **open** by default.
# %nbdev_collapse_input open
print('This code cell is not collapsed by default but you can collapse it to hide it from view!')
print("Note that the output always shows with `%nbdev_collapse_input`.")
# - Placing `%nbdev_collapse_input` in a code cell will include your code in a collapsable element that is **closed** by default. For example:
# %nbdev_collapse_input
print('The code cell that produced this output is collapsed by default but you can expand it!')
# - Placing `%nbdev_collapse_output` in a code cell will hide the output under a collapsable element that is **closed** by default.
# %nbdev_collapse_output
print('The input of this cell is visible as usual.\nHowever, the OUTPUT of this cell is collapsed by default but you can expand it!')
# ### Preprocessing the list of cells
# The following functions are applied to the entire list of cells of the notebook as a preprocessing step before the conversion to html.
# %nbdev_export
_re_hide = [
_mk_flag_re(False, 'hide', 0, 'Matches any cell with #hide'),
_mk_flag_re(True, 'hide', 0, 'Matches any cell with %nbdev_hide')]
_re_all_flag = ReTstFlags(True)
_re_cell_to_remove = [
_mk_flag_re(False, '(default_exp|exporti)', (0,1),
'Matches any cell with #default_exp or #exporti'),
_mk_flag_re(True, '(default_export|export_internal)', (0,1),
'Matches any cell with %nbdev_default_export or %nbdev_export_internal')]
_re_default_cls_lvl = [
_mk_flag_re(False, 'default_cls_lvl', 1, "Matches any cell with #default_cls_lvl"),
_mk_flag_re(True, 'default_class_level', 1, "Matches any cell with %nbdev_default_class_level"),
]
# %nbdev_export
def remove_hidden(cells):
"Remove in `cells` the ones with a flag `#hide`, `#default_exp`, `#default_cls_lvl` or `#exporti`"
def _hidden(cell):
"Check if `cell` should be hidden"
if check_re_multi(cell, _re_hide, code_only=False): return True
if check_re_multi(cell, [_re_all_flag, *_re_cell_to_remove, *_re_default_cls_lvl]): return True
return False
return [c for c in cells if not _hidden(c)]
# +
cells = [{'cell_type': 'code', 'source': source, 'hide': hide} for hide, source in [
(False, '# export\nfrom local.core import *'),
(True, '# export\n%nbdev_default_class_level 4\nfrom local.core import *'),
(True, ' # default_cls_lvl 2 \n# export\nfrom local.core import *'),
(False, '%nbdev_export \nfrom local.core import *'),
(False, '# exporti mod file'), # Note: this used to get removed but we're more strict now
(False, '%nbdev_export_internal mod file'),
(True, '# hide\nfrom local.core import *'),
(True, '%nbdev_hide\nfrom local.core import *'),
(False, '# hide_input\nfrom local.core import *'),
(False, '%nbdev_hide_output\nfrom local.core import *'),
(False, '#exports\nsuper code'),
(False, '%nbdev_export_and_show\nsuper code'),
(True, '#default_exp notebook.export'),
(True, '%nbdev_default_export notebook.export'),
(False, 'show_doc(read_nb)'),
(True, '#default_cls_lvl 3'),
    (False, '#all_slow'), # slow is not in settings.ini tst_flags
(False, '%nbdev_slow_test all'),
(True, '#all_fastai'),
(True, '%nbdev_fastai_test all'),
(False, '#hide (last test of to_concat)'),
(True, '# exporti\n1 + 1'),
(True, '%nbdev_export_internal\n1 + 1')]] + [
{'cell_type': 'markdown', 'source': source, 'hide': hide} for hide, source in [
(False, '#hide_input\nnice'),
(False, '%nbdev_hide_input\nnice'),
(True, '#hide\n\nto hide'),
(True, '#comment\n%nbdev_hide\nto hide')]]
test_eq([cell for cell in cells if not cell['hide']], remove_hidden(cells))
# -
# %nbdev_export
def find_default_level(cells):
"Find in `cells` the default class level."
for cell in cells:
tst = check_re_multi(cell, _re_default_cls_lvl)
if tst: return int(tst.groups()[0])
return 2
tst_nb = read_nb('00_export.ipynb')
test_eq(find_default_level(tst_nb['cells']), 3)
# %nbdev_export
_re_export = _mk_flag_re(False, "exports?", (0,1),
"Matches any line with #export or #exports with or without module name")
_re_export_magic = _mk_flag_re(True, "export(|_and_show)", (0,1),
"Matches any line with %nbdev_export or %nbdev_export_and_show with or without module name")
# %nbdev_export
def nb_code_cell(source):
"A code cell (as a dict) containing `source`"
return {'cell_type': 'code', 'execution_count': None, 'metadata': {}, 'outputs': [], 'source': source}
# +
# %nbdev_export
def _show_doc_cell(name, cls_lvl=None):
return nb_code_cell(f"show_doc({name}{'' if cls_lvl is None else f', default_cls_level={cls_lvl}'})")
def add_show_docs(cells, cls_lvl=None):
"Add `show_doc` for each exported function or class"
res, documented, documented_wild = [], [], []
for cell in cells:
m = check_re_multi(cell, [_re_show_doc, _re_show_doc_magic])
if not m: continue
if m.re is _re_show_doc:
documented.append(m.group(1))
else:
names, wild_names, kwargs = parse_nbdev_show_doc(m.group(1))
documented.extend(names)
documented_wild.extend(wild_names)
def _documented(name):
if name in documented: return True
# assume that docs will have been shown for all members of everything in documented_wild
if name.rfind('.') != -1 and name[0:name.rfind('.')] in documented_wild: return True
for cell in cells:
res.append(cell)
if check_re_multi(cell, [_re_export, _re_export_magic]):
for n in export_names(cell['source'], func_only=True):
if not _documented(n): res.append(_show_doc_cell(n, cls_lvl=cls_lvl))
return res
# -
# This only adds `show_doc` cells for non-documented functions, so if you add a `show_doc` cell yourself (for instance because you want to change one of the default arguments), there won't be any duplicates.
# +
for i,cell in enumerate(tst_nb['cells']):
if cell['source'].startswith('%nbdev_export\ndef read_nb'): break
tst_cells = [c.copy() for c in tst_nb['cells'][i-1:i+1]]
added_cells = add_show_docs(tst_cells, cls_lvl=3)
test_eq(len(added_cells), 3)
test_eq(added_cells[0], tst_nb['cells'][i-1])
test_eq(added_cells[1], tst_nb['cells'][i])
test_eq(added_cells[2], _show_doc_cell('read_nb', cls_lvl=3))
test_eq(added_cells[2]['source'], 'show_doc(read_nb, default_cls_level=3)')
for flag in ['#export', '%nbdev_export', '#exports', '%nbdev_export_and_show']:
for show_doc_source in [
('show_doc(my_func)', 'show_doc(my_func, title_level=3)'),
('%nbdev_show_doc my_func', '%nbdev_show_doc my_func, title_level=3')]:
#Check show_doc isn't added if it was already there.
tst_cells1 = [{'cell_type':'code', 'source': f'{flag}\ndef my_func(x):\n return x'},
{'cell_type':'code', 'source': show_doc_source[0]}]
test_eq(add_show_docs(tst_cells1), tst_cells1)
#Check show_doc is added
test_eq(len(add_show_docs(tst_cells1[:-1])), len(tst_cells1))
tst_cells1 = [{'cell_type':'code', 'source': f'{flag} with.mod\ndef my_func(x):\n return x'},
{'cell_type':'markdown', 'source': 'Some text'},
{'cell_type':'code', 'source': show_doc_source[1]}]
test_eq(add_show_docs(tst_cells1), tst_cells1)
#Check show_doc is added when using mod export
test_eq(len(add_show_docs(tst_cells1[:-1])), len(tst_cells1))
# -
# %nbdev_hide
# `add_show_docs` should understand wildcard and multi-element `%nbdev_show_doc` calls
test_code_source = """%nbdev_export
class A:
def a(self): pass
@patch
def b(x:A): pass
"""
for show_doc_source in ["%nbdev_show_doc A *", "%nbdev_show_doc A . a b", "%nbdev_show_doc A A.b"]:
tst_cells1 = [nbformat.v4.new_code_cell(test_code_source), nbformat.v4.new_code_cell(show_doc_source)]
test_eq(add_show_docs(tst_cells1), tst_cells1)
# %nbdev_export
_re_fake_header = re.compile(r"""
# Matches any fake header (one that ends with -)
\#+ # One or more #
\s+ # One or more of whitespace
.* # Any char
-\s* # A dash followed by any number of white space
$ # End of text
""", re.VERBOSE)
# %nbdev_export
def remove_fake_headers(cells):
"Remove in `cells` the fake header"
return [c for c in cells if c['cell_type']=='code' or _re_fake_header.search(c['source']) is None]
# You can add fake headers to your notebook to navigate it more easily with collapsible headings: just make them end with a dash and they will be removed from the docs. One typical use case is a level-2 header with the name of a class; since the `show_doc` cell of that class creates the same anchor, the one you created manually needs to disappear to avoid a duplicate.
cells = [{'cell_type': 'markdown',
'metadata': {},
'source': '### Fake-'}] + tst_nb['cells'][:10]
cells1 = remove_fake_headers(cells)
test_eq(len(cells1), len(cells)-1)
test_eq(cells1[0], cells[1])
# %nbdev_export
def remove_empty(cells):
"Remove in `cells` the empty cells"
return [c for c in cells if len(c['source']) >0]
# ### Grabbing metadata
# +
# %nbdev_export
_re_title_summary = re.compile(r"""
# Catches the title and summary of the notebook, presented as # Title > summary, with title in group 1 and summary in group 2
^\s*        # Beginning of text followed by any number of whitespace
\#\s+ # # followed by one or more of whitespace
([^\n]*) # Catching group for any character except a new line
\n+ # One or more new lines
>[ ]* # > followed by any number of whitespace
([^\n]*) # Catching group for any character except a new line
""", re.VERBOSE)
_re_title_only = re.compile(r"""
# Catches the title presented as # Title without a summary
^\s*       # Beginning of text followed by any number of whitespace
\#\s+ # # followed by one or more of whitespace
([^\n]*) # Catching group for any character except a new line
(?:\n|$) # New line or end of text
""", re.VERBOSE)
_re_properties = re.compile(r"""
^-\s+    # Beginning of a line followed by - and at least one space
(.*?) # Any pattern (shortest possible)
\s*:\s* # Any number of whitespace, :, any number of whitespace
(.*?)$ # Any pattern (shortest possible) then end of line
""", re.MULTILINE | re.VERBOSE)
_re_mdlinks = re.compile(r"\[(.+)]\((.+)\)", re.MULTILINE)
# -
# %nbdev_export
def _md2html_links(s):
'Converts markdown links to html links'
return _re_mdlinks.sub(r"<a href='\2'>\1</a>", s)
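# The same substitution can be tried standalone — the regex below is a copy of `_re_mdlinks` above (the link text and URL in the example are made up):

```python
import re

# Copy of `_re_mdlinks`: markdown links `[text](url)` -> group 1 is the text, group 2 the url.
# The greedy `.+` assumes at most one link per line, which is how it is used here.
_re_mdlinks = re.compile(r"\[(.+)]\((.+)\)", re.MULTILINE)

def md2html_links(s):
    "Converts markdown links to html links, as `_md2html_links` does"
    return _re_mdlinks.sub(r"<a href='\2'>\1</a>", s)

print(md2html_links('See [the docs](https://nbdev.fast.ai) for details.'))
# See <a href='https://nbdev.fast.ai'>the docs</a> for details.
```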
# %nbdev_export
def get_metadata(cells):
"Find the cell with title and summary in `cells`."
for i,cell in enumerate(cells):
if cell['cell_type'] == 'markdown':
match = _re_title_summary.match(cell['source'])
if match:
cells.pop(i)
attrs = {k:v for k,v in _re_properties.findall(cell['source'])}
return {'keywords': 'fastai',
'summary' : _md2html_links(match.groups()[1]),
'title' : match.groups()[0],
**attrs}
elif _re_title_only.search(cell['source']) is not None:
title = _re_title_only.search(cell['source']).groups()[0]
cells.pop(i)
attrs = {k:v for k,v in _re_properties.findall(cell['source'])}
return {'keywords': 'fastai',
'title' : title,
**attrs}
return {'keywords': 'fastai',
'title' : 'Title'}
# In the markdown cell with the title, you can add the summary as a block quote (just put an empty block quote for an empty summary) and a list with any additional metadata you would like to add, for instance:
# ```
# # Title
#
# > Awesome summary
# - toc: False
# ```
#
# The `toc: False` metadata will prevent the table of contents from showing on the page.
# +
tst_nb = read_nb('00_export.ipynb')
test_eq(get_metadata(tst_nb['cells']), {
'keywords': 'fastai',
'summary': 'The functions that transform notebooks in a library',
'title': 'Export to modules'})
#The cell with the metadata is popped out, so if we do it a second time we get the default.
test_eq(get_metadata(tst_nb['cells']), {'keywords': 'fastai', 'title' : 'Title'})
# -
# %nbdev_hide
#test with title only
test_eq(get_metadata([{'cell_type': 'markdown', 'source': '# Awesome title'}]),
{'keywords': 'fastai', 'title': 'Awesome title'})
# %nbdev_hide
text = r"""
[This](https://nbdev.fast.ai) goes to docs.
This [one:here](00_export.ipynb) goes to a local nb.
\n[And-this](http://dev.fast.ai/) goes to fastai docs
"""
res = """
<a href='https://nbdev.fast.ai'>This</a> goes to docs.\nThis <a href='00_export.ipynb'>one:here</a> goes to a local nb. \n\\n<a href='http://dev.fast.ai/'>And-this</a> goes to fastai docs
"""
test_eq(_md2html_links(text), res)
# %nbdev_hide
cells = [{'cell_type': 'markdown', 'source': "# Title\n\n> s\n\n- toc: false"}]
test_eq(get_metadata(cells), {'keywords': 'fastai', 'summary': 's', 'title': 'Title', 'toc': 'false'})
# ## Executing show_doc cells
# +
# %nbdev_export
_re_mod_export = _mk_flag_re(False, "export[s]?", 1,
"Matches any line with #export or #exports with a module name and catches it in group 1")
_re_mod_export_magic = _mk_flag_re(True, "export(?:|_and_show)", 1,
"Matches any line with %nbdev_export or %nbdev_export_and_show catching module name in group 1")
def _gather_export_mods(cells):
res = []
for cell in cells:
tst = check_re_multi(cell, [_re_mod_export, _re_mod_export_magic])
if tst is not None: res.append(tst.groups()[0])
return res
# +
# %nbdev_hide
cells = [
{'cell_type': 'markdown', 'source': '#export ignored'},
{'cell_type': 'code', 'source': '#export'},
{'cell_type': 'code', 'source': '#export normal'},
{'cell_type': 'code', 'source': '# exports show'},
{'cell_type': 'code', 'source': '# exporti hidden'},
{'cell_type': 'code', 'source': '#export\n@call_parse'},
{'cell_type': 'code', 'source': '#export \n@delegates(concurrent.futures.ProcessPoolExecutor)'}
]
test_eq(_gather_export_mods(cells), ['normal', 'show'])
for cell in cells: cell['source'] = cell['source'].replace('#export', '%nbdev_export')
test_eq(_gather_export_mods(cells), ['normal', 'show'])
for cell in cells: cell['source'] = cell['source'].replace('# exports', '%nbdev_export_and_show')
test_eq(_gather_export_mods(cells), ['normal', 'show'])
for cell in cells: cell['source'] = cell['source'].replace('# exporti', '%nbdev_export_internal')
test_eq(_gather_export_mods(cells), ['normal', 'show'])
# -
# %nbdev_export
# match any cell containing a zero indented import from the current lib
_re_lib_import = ReLibName(r"^from LIB_NAME\.", re.MULTILINE)
# match any cell containing a zero indented import
_re_import = re.compile(r"^from[ \t]|^import[ \t]", re.MULTILINE)
# match any cell containing a zero indented call to notebook2script
_re_notebook2script = re.compile(r"^notebook2script\(", re.MULTILINE)
# %nbdev_hide
for cell in [nbformat.v4.new_code_cell(s, metadata={'exp': exp}) for exp,s in [
(True, 'show_doc(Tensor.p)'),
(True, ' show_doc(Tensor.p)'),
(True, '%nbdev_show_doc Tensor.p'),
(True, 'if somthing:\n show_doc(Tensor.p)'),
(False, '# show_doc(Tensor.p)'),
(True, '# comment \n show_doc(Tensor.p)'),
(True, '"""\nshow_doc(Tensor.p)\n"""'),
(True, 'import torch\nshow_doc(Tensor.p)'),
(False,'class Ex(ExP):\n"An `ExP` that ..."\ndef preprocess_cell(self, cell, resources, index):\n'),
(False, 'from somewhere import something'),
(False, 'from '),
(False, 'import re'),
(False, 'import '),
    (False, 'try: from PIL import Image\nexcept: pass'),
(False, 'from PIL import Image\n@patch\ndef p(x:Image):\n pass'),
(False, '@patch\ndef p(x:Image):\n pass\nfrom PIL import Image')]]:
exp = cell.metadata.exp
assert exp == bool(check_re_multi(cell, [_re_show_doc, _re_show_doc_magic, _re_lib_import.re])), f'expected {exp} for {cell}'
# %nbdev_hide
for cell in [nbformat.v4.new_code_cell(s, metadata={'exp': exp}) for exp,s in [
(False, 'show_doc(Tensor.p)'),
(True, 'import torch\nshow_doc(Tensor.p)'),
(False,'class Ex(ExP):\n"An `ExP` that ..."\ndef preprocess_cell(self, cell, resources, index):\n'),
(False, ' from somewhere import something'),
(True, 'from somewhere import something'),
(True, 'from '),
(False, ' import re'),
(True, 'import re'),
(True, 'import '),
    (False, 'try: from PIL import Image\nexcept: pass'),
(True, 'from PIL import Image\n@patch\ndef p(x:Image):\n pass'),
(True, '@patch\ndef p(x:Image):\n pass\nfrom PIL import Image')]]:
exp = cell.metadata.exp
assert exp == bool(check_re(cell, _re_import)), f'expected {exp} for {cell}'
# %nbdev_hide
for cell in [nbformat.v4.new_code_cell(s, metadata={'exp': exp}) for exp,s in [
(False, 'show_doc(Tensor.p)'),
(False, 'notebook2script'),
(False, '#notebook2script()'),
(True, 'notebook2script()'),
(True, 'notebook2script(anything at all)')]]:
exp = cell.metadata.exp
assert exp == bool(check_re(cell, _re_notebook2script)), f'expected {exp} for {cell}'
#export
def _non_comment_code(s):
if re.match(r'\s*#', s): return False
if _re_import.findall(s) or _re_lib_import.re.findall(s): return False
return re.match(r'\s*\w', s)
# %nbdev_export
class ExecuteShowDocPreprocessor(ExecutePreprocessor):
"An `ExecutePreprocessor` that only executes `show_doc` and `import` cells"
def preprocess_cell(self, cell, resources, index):
if not check_re(cell, _re_notebook2script):
if check_re_multi(cell, [_re_show_doc, _re_show_doc_magic]):
return super().preprocess_cell(cell, resources, index)
elif check_re_multi(cell, [_re_import, _re_lib_import.re]):
# r = list(filter(_non_comment_code, cell['source'].split('\n')))
# if r: print("You have import statements mixed with other code", r)
return super().preprocess_cell(cell, resources, index)
# try: return super().preprocess_cell(cell, resources, index)
# except: pass
return cell, resources
# Cells containing:
# - a zero indented call to `notebook2script`
#
# are not run while building docs. This avoids failures caused by importing empty or partially built modules.
#
# Cells containing:
# - `show_doc` (which could be indented) or
# - a "library import" (zero indent import from current library) e.g. `from LIB_NAME.core import *`
#
# are executed and must run without error. If running these cells raises an exception, the build will stop.
#
# Cells containing zero indented imports, e.g.
# - `from module import *` or
# - `import module`
#
# are executed but errors will not stop the build.
#
# If you need to `show_doc` something, please make sure it's imported via a cell that does not depend on previous cells being run. The easiest way to do this is to use a cell that contains nothing but imports.
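# The decision logic can be sketched with simplified stand-ins for the module's patterns (an assumption: the real flags also cover the `%nbdev_show_doc` magic, the library-specific import regex, and comment handling):

```python
import re

# Simplified stand-ins for the module's regexes (assumptions, see lead-in)
_re_show_doc = re.compile(r"\bshow_doc\s*\(")
_re_import = re.compile(r"^from[ \t]|^import[ \t]", re.MULTILINE)
_re_notebook2script = re.compile(r"^notebook2script\(", re.MULTILINE)

def will_execute(source):
    "Mirror ExecuteShowDocPreprocessor's choice: skip notebook2script, run show_doc/import cells"
    if _re_notebook2script.search(source): return False
    return bool(_re_show_doc.search(source) or _re_import.search(source))

for src, expected in [('notebook2script()',            False),  # never run while building docs
                      ('show_doc(read_nb)',            True),   # executed for the docs
                      ('import re\nshow_doc(read_nb)', True),   # imports are executed too
                      ('x = 1',                        False)]: # ordinary code is left untouched
    assert will_execute(src) == expected
```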
# +
# %nbdev_export
def _import_show_doc_cell(mods=None):
"Add an import show_doc cell."
source = f"from nbdev.showdoc import show_doc"
if mods is not None:
for mod in mods: source += f"\nfrom {Config().lib_name}.{mod} import *"
return {'cell_type': 'code',
'execution_count': None,
'metadata': {'hide_input': True},
'outputs': [],
'source': source}
def execute_nb(nb, mod=None, metadata=None, show_doc_only=True):
"Execute `nb` (or only the `show_doc` cells) with `metadata`"
mods = ([] if mod is None else [mod]) + _gather_export_mods(nb['cells'])
nb['cells'].insert(0, _import_show_doc_cell(mods))
ep_cls = ExecuteShowDocPreprocessor if show_doc_only else ExecutePreprocessor
ep = ep_cls(timeout=600, kernel_name='python3')
metadata = metadata or {}
pnb = nbformat.from_dict(nb)
ep.preprocess(pnb, metadata)
return pnb
# -
# ## Converting bibtex citations
# %nbdev_export
_re_cite = re.compile(r"(\\cite{)([^}]*)(})", re.MULTILINE | re.VERBOSE) # Catches citations used with `\cite{}`
# %nbdev_export
def _textcite2link(text):
citations = _re_cite.finditer(text)
out = []
start_pos = 0
for cit_group in citations:
cit_pos_st = cit_group.span()[0]
cit_pos_fin = cit_group.span()[1]
out.append(text[start_pos:cit_pos_st])
out.append('[')
cit_group = cit_group[2].split(',')
for i, cit in enumerate(cit_group):
cit=cit.strip()
out.append(f"""<a class="latex_cit" id="call-{cit}" href="#cit-{cit}">{cit}</a>""")
if i != len(cit_group) - 1:
out.append(',')
out.append(']')
start_pos = cit_pos_fin
out.append(text[start_pos:])
return ''.join(out)
# %nbdev_export
def cite2link(cell):
    r'''Creates links from \cite{} to the Reference section generated by jupyter_latex_envs'''
if cell['cell_type'] == 'markdown': cell['source'] = _textcite2link(cell['source'])
return cell
# jupyter_latex_envs is a jupyter extension https://github.com/jfbercher/jupyter_latex_envs.
#
# You can find the relevant section [here](https://rawgit.com/jfbercher/jupyter_latex_envs/master/src/latex_envs/static/doc/latex_env_doc.html#Bibliography).
# Note that nbdev currently only supports `\cite{}` conversion, not the rest (e.g. `\figure{}` and so on).
# %nbdev_hide
cell = {'cell_type': 'markdown', 'source': r"""This is cited multireference \cite{Frob1, Frob3}.
And single \cite{Frob2}."""}
expected=r"""This is cited multireference [<a class="latex_cit" id="call-Frob1" href="#cit-Frob1">Frob1</a>,<a class="latex_cit" id="call-Frob3" href="#cit-Frob3">Frob3</a>].
And single [<a class="latex_cit" id="call-Frob2" href="#cit-Frob2">Frob2</a>]."""
test_eq(cite2link(cell)["source"], expected)
# It's important to execute all `show_doc` cells before exporting the notebook to html because some of them have just been added automatically or others could have outdated links.
fake_nb = {k:v for k,v in tst_nb.items() if k != 'cells'}
fake_nb['cells'] = [tst_nb['cells'][0].copy()] + added_cells
fake_nb = execute_nb(fake_nb, mod='export')
assert len(fake_nb['cells'][-1]['outputs']) > 0
# ## Filling templates
# The following functions automatically add the jekyll templates if they are missing.
# %nbdev_export
def write_tmpl(tmpl, nms, cfg, dest):
"Write `tmpl` to `dest` (if missing) filling in `nms` in template using dict `cfg`"
if dest.exists(): return
vs = {o:cfg.d[o] for o in nms.split()}
outp = tmpl.format(**vs)
dest.write_text(outp)
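# A quick check of the template filling, using a throwaway directory and a stand-in for `Config` (which `write_tmpl` only touches through its `.d` dict):

```python
import tempfile
from pathlib import Path

class FakeCfg:
    "Stand-in for nbdev's Config: only the `.d` dict is read here (assumption)"
    d = {'user': 'fastai', 'lib_name': 'nbdev'}

def write_tmpl(tmpl, nms, cfg, dest):
    "Copy of `write_tmpl` above: fill `nms` from `cfg.d` into `tmpl`, skip if `dest` exists"
    if dest.exists(): return
    vs = {o: cfg.d[o] for o in nms.split()}
    dest.write_text(tmpl.format(**vs))

with tempfile.TemporaryDirectory() as d:
    dest = Path(d)/'_config.yml'
    write_tmpl('repository: {user}/{lib_name}\n', 'user lib_name', FakeCfg(), dest)
    print(dest.read_text())  # repository: fastai/nbdev
    # A second call is a no-op: existing files are never overwritten
    write_tmpl('user: {user}\n', 'user', FakeCfg(), dest)
    assert dest.read_text() == 'repository: fastai/nbdev\n'
```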
# %nbdev_export
def write_tmpls():
"Write out _config.yml and _data/topnav.yml using templates"
cfg = Config()
path = Path(cfg.get('doc_src_path', cfg.doc_path))
write_tmpl(config_tmpl, 'user lib_name title copyright description', cfg, path/'_config.yml')
write_tmpl(topnav_tmpl, 'host git_url', cfg, path/'_data'/'topnav.yml')
write_tmpl(makefile_tmpl, 'nbs_path lib_name', cfg, cfg.config_file.parent/'Makefile')
# ## Conversion
__file__ = Config().lib_path/'export2html.py'
# %nbdev_export
def nbdev_exporter(cls=HTMLExporter, template_file=None):
cfg = traitlets.config.Config()
exporter = cls(cfg)
exporter.exclude_input_prompt=True
exporter.exclude_output_prompt=True
exporter.anchor_link_text = ' '
exporter.template_file = 'jekyll.tpl' if template_file is None else template_file
exporter.template_path.append(str(Path(__file__).parent/'templates'))
return exporter
# %nbdev_export
process_cells = [remove_fake_headers, remove_hidden, remove_empty]
process_cell = [hide_cells, collapse_cells, remove_widget_state, add_jekyll_notes, escape_latex, cite2link]
# %nbdev_export
def _nb2htmlfname(nb_path, dest=None):
if dest is None: dest = Config().doc_path
return Path(dest)/re_digits_first.sub('', nb_path.with_suffix('.html').name)
# %nbdev_hide
test_eq(_nb2htmlfname(Path('00a_export.ipynb')), Config().doc_path/'export.html')
test_eq(_nb2htmlfname(Path('export.ipynb')), Config().doc_path/'export.html')
test_eq(_nb2htmlfname(Path('00ab_export_module_1.ipynb')), Config().doc_path/'export_module_1.html')
test_eq(_nb2htmlfname(Path('export.ipynb'), '.'), Path('export.html'))
# %nbdev_export
def convert_nb(fname, cls=HTMLExporter, template_file=None, exporter=None, dest=None):
"Convert a notebook `fname` to html file in `dest_path`."
fname = Path(fname).absolute()
os.chdir(fname.parent)
nb = read_nb(fname)
call_cb('begin_doc_nb', nb, fname, 'html')
meta_jekyll = get_metadata(nb['cells'])
meta_jekyll['nb_path'] = str(fname.relative_to(Config().lib_path.parent))
cls_lvl = find_default_level(nb['cells'])
mod = find_default_export(nb['cells'])
nb['cells'] = compose(*process_cells,partial(add_show_docs, cls_lvl=cls_lvl))(nb['cells'])
_func = compose(partial(copy_images, fname=fname, dest=Config().doc_path), *process_cell, treat_backticks)
nb['cells'] = [_func(c) for c in nb['cells']]
nb = execute_nb(nb, mod=mod)
nb['cells'] = [clean_exports(c) for c in nb['cells']]
call_cb('after_doc_nb_preprocess', nb, fname, 'html')
if exporter is None: exporter = nbdev_exporter(cls=cls, template_file=template_file)
with open(_nb2htmlfname(fname, dest=dest),'w') as f:
f.write(exporter.from_notebook_node(nb, resources=meta_jekyll)[0])
call_cb('after_doc_nb', fname, 'html')
# %nbdev_export
def _notebook2html(fname, cls=HTMLExporter, template_file=None, exporter=None, dest=None):
time.sleep(random.random())
print(f"converting: {fname}")
try:
convert_nb(fname, cls=cls, template_file=template_file, exporter=exporter, dest=dest)
return True
except Exception as e:
print(e)
return False
# %nbdev_export
def notebook2html(fname=None, force_all=False, n_workers=None, cls=HTMLExporter, template_file=None,
exporter=None, dest=None, pause=0):
"Convert all notebooks matching `fname` to html files"
if fname is None:
files = [f for f in Config().nbs_path.glob('**/*.ipynb')
if not f.name.startswith('_') and not '/.' in str(f)]
else:
p = Path(fname)
files = list(p.parent.glob(p.name))
if len(files)==1:
force_all = True
if n_workers is None: n_workers=0
if not force_all:
# only rebuild modified files
files,_files = [],files.copy()
for fname in _files:
fname_out = _nb2htmlfname(Path(fname).absolute(), dest=dest)
if not fname_out.exists() or os.path.getmtime(fname) >= os.path.getmtime(fname_out):
files.append(fname)
if len(files)==0: print("No notebooks were modified")
else:
passed = parallel(_notebook2html, files, n_workers=n_workers, cls=cls,
template_file=template_file, exporter=exporter, dest=dest, pause=pause)
if not all(passed):
msg = "Conversion failed on the following:\n"
print(msg + '\n'.join([f.name for p,f in zip(passed,files) if not p]))
# + hide_input=false
# %nbdev_hide
class TestCallbacks:
def begin(self, nb, file_name, output_type):
self.begin_data=dict(nb=nb,file_name=file_name,output_type=output_type)
return nb
def middle(self, nb, file_name, output_type):
self.middle_data=dict(nb=nb,file_name=file_name,output_type=output_type)
return nb
def end(self, file_name, output_type):
self.end_data=dict(file_name=file_name,output_type=output_type)
test_callbacks=TestCallbacks()
call_cb('this makes sure nbdev_callbacks is loaded from the right place')
import nbdev_callbacks as cbs
original_callbacks=cbs.begin_doc_nb,cbs.after_doc_nb_preprocess,cbs.after_doc_nb
try:
cbs.begin_doc_nb=test_callbacks.begin
cbs.after_doc_nb_preprocess=test_callbacks.middle
cbs.after_doc_nb=test_callbacks.end
assert not hasattr(test_callbacks,'begin_data')
# Test when an argument is given to notebook2html
p1 = Path('/tmp/sync.html')
if p1.exists(): p1.unlink()
notebook2html('01_sync.ipynb', dest='/tmp');
assert p1.exists()
# Check that callback handlers have been called
for d in [test_callbacks.begin_data, test_callbacks.middle_data, test_callbacks.end_data]:
test_eq(d['file_name'], Path('01_sync.ipynb').absolute())
test_eq(d['output_type'], 'html')
if d is not test_callbacks.end_data: assert d['nb']
finally:
cbs.begin_doc_nb,cbs.after_doc_nb_preprocess,cbs.after_doc_nb=original_callbacks
# + hide_input=false
# %nbdev_hide
# Test when no argument is given to notebook2html
dest_files = [_nb2htmlfname(f, dest='/tmp') for f in Config().nbs_path.glob('*.ipynb') if not f.name.startswith('_')]
[f.unlink() for f in dest_files if f.exists()]
notebook2html(fname=None, dest='/tmp');
assert all([f.exists() for f in dest_files])
# + hide_input=false
# # %nbdev_hide
# # Test Error handling
# try: notebook2html('../README.md');
# except Exception as e: pass
# else: assert False, 'An error should be raised when a non-notebook file is passed to notebook2html!'
# -
# Cells starting with `%nbdev_export` are hidden so that only the prose and the tests remain. If `fname` is not specified, this will convert all notebooks whose names don't begin with an underscore in the `nbs_path` folder defined in `settings.ini`. Otherwise `fname` can be a single filename or a glob expression.
#
# By default, only the notebooks that are more recent than their html counterparts are modified, pass `force_all=True` to change that behavior.
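# The freshness check itself is easy to exercise in isolation: rebuild when the html is missing, or when the notebook was modified at or after the html (the file names below are made up):

```python
import os, time, tempfile
from pathlib import Path

def needs_rebuild(src, out):
    "Same comparison as in `notebook2html`: output missing, or source modified at/after output"
    return not out.exists() or os.path.getmtime(src) >= os.path.getmtime(out)

with tempfile.TemporaryDirectory() as d:
    src, out = Path(d)/'00_core.ipynb', Path(d)/'core.html'
    src.write_text('{}')
    assert needs_rebuild(src, out)       # no html yet -> convert
    out.write_text('<html></html>')
    os.utime(out, (time.time()+10,)*2)   # pretend the html was built after the notebook
    assert not needs_rebuild(src, out)   # up to date -> skipped
```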
# %nbdev_hide
#notebook2html(force_all=True)
# %nbdev_export
def convert_md(fname, dest_path, img_path='docs/images/', jekyll=True):
"Convert a notebook `fname` to a markdown file in `dest_path`."
fname = Path(fname).absolute()
if not img_path: img_path = fname.stem + '_files/'
Path(img_path).mkdir(exist_ok=True, parents=True)
nb = read_nb(fname)
call_cb('begin_doc_nb', nb, fname, 'md')
meta_jekyll = get_metadata(nb['cells'])
try: meta_jekyll['nb_path'] = str(fname.relative_to(Config().lib_path.parent))
except: meta_jekyll['nb_path'] = str(fname)
nb['cells'] = compose(*process_cells)(nb['cells'])
nb['cells'] = [compose(partial(adapt_img_path, fname=fname, dest=dest_path, jekyll=jekyll), *process_cell)(c)
for c in nb['cells']]
call_cb('after_doc_nb_preprocess', nb, fname, 'md')
fname = Path(fname).absolute()
dest_name = fname.with_suffix('.md').name
exp = nbdev_exporter(cls=MarkdownExporter, template_file='jekyll-md.tpl' if jekyll else 'md.tpl')
export = exp.from_notebook_node(nb, resources=meta_jekyll)
md = export[0]
for ext in ['png', 'svg']:
md = re.sub(r'!\['+ext+'\]\((.+)\)', '', md)
with (Path(dest_path)/dest_name).open('w') as f: f.write(md)
if hasattr(export[1]['outputs'], 'items'):
for n,o in export[1]['outputs'].items():
with open(Path(dest_path)/img_path/n, 'wb') as f: f.write(o)
call_cb('after_doc_nb', fname, 'md')
# This is used to convert the index into the `README.md`.
# %nbdev_hide
def _test_md(fn):
fn,dest = Path(fn),Path().absolute().parent
try: convert_md(fn, dest, jekyll=False)
finally: (dest/f'{fn.stem}.md').unlink()
# %nbdev_hide
_test_md('index.ipynb')
# `export[1]['outputs']` will be a `str` if the notebook has no markdown cells to convert.
# e.g. the nb could have a single jekyll markdown cell or just code cells ...
_test_md(f'../test/single-cell-index.ipynb')
# %nbdev_export
_re_att_ref = re.compile(r' *!\[(.*)\]\(attachment:image.png(?: "(.*)")?\)')
# +
t = '![screenshot](attachment:image.png)'
test_eq(_re_att_ref.match(t).groups(), ('screenshot', None))
t = '![screenshot](attachment:image.png "Deploying to Binder")'
test_eq(_re_att_ref.match(t).groups(), ('screenshot', "Deploying to Binder"))
# -
# %nbdev_export
try: from PIL import Image
except: pass # Only required for _update_att_ref
# +
# %nbdev_export
_tmpl_img = '<img alt="{title}" width="{width}" caption="{title}" id="{id}" src="{name}">'
def _update_att_ref(line, path, img):
m = _re_att_ref.match(line)
if not m: return line
alt,title = m.groups()
w = img.size[0]
if alt=='screenshot': w //= 2
if not title: title = "TK: add title"
return _tmpl_img.format(title=title, width=str(w), id='TK: add it', name=str(path))
# -
# %nbdev_export
def _nb_detach_cell(cell, dest, use_img):
att,src = cell['attachments'],cell['source']
mime,img = first(first(att.values()).items())
ext = mime.split('/')[1]
for i in range(99999):
p = dest/(f'att_{i:05d}.{ext}')
if not p.exists(): break
img = b64decode(img)
p.write_bytes(img)
del(cell['attachments'])
if use_img: return [_update_att_ref(o,p,Image.open(p)) for o in src]
else: return [o.replace('attachment:image.png', str(p)) for o in src]
# %nbdev_export
def nb_detach_cells(path_nb, dest=None, replace=True, use_img=False):
"Export cell attachments to `dest` and update references"
path_nb = Path(path_nb)
if not dest: dest = f'{path_nb.stem}_files'
dest = Path(dest)
dest.mkdir(exist_ok=True, parents=True)
j = json.load(path_nb.open())
atts = [o for o in j['cells'] if 'attachments' in o]
for o in atts: o['source'] = _nb_detach_cell(o, dest, use_img)
if atts and replace: json.dump(j, path_nb.open('w'))
if not replace: return j
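# The core of the detach step can be sketched with the standard library alone, using an assumed toy one-cell notebook dict in place of a real `.ipynb`: decode the base64 payload, write it next to the notebook, and repoint the markdown reference, as `_nb_detach_cell` does above.

```python
from base64 import b64encode, b64decode
from pathlib import Path
import tempfile

# Toy one-cell notebook dict (assumed) standing in for a real .ipynb cell.
payload = b'fake-png-bytes'
cell = {'source': ['![x](attachment:image.png)'],
        'attachments': {'image.png': {'image/png': b64encode(payload).decode()}}}

dest = Path(tempfile.mkdtemp())
# Pull out the first (mime, data) pair, as _nb_detach_cell does with first().
mime, img = next(iter(next(iter(cell['attachments'].values())).items()))
p = dest / f"att_00000.{mime.split('/')[1]}"
p.write_bytes(b64decode(img))          # write the decoded attachment to disk
del cell['attachments']                # drop the inline copy
# Repoint the markdown reference at the extracted file.
cell['source'] = [s.replace('attachment:image.png', str(p)) for s in cell['source']]
print('attachment:' not in cell['source'][0])  # True
```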
# ## Sidebar
# %nbdev_export
import time,random,warnings
# %nbdev_export
def _leaf(k,v):
url = 'external_url' if "http" in v else 'url'
#if url=='url': v=v+'.html'
return {'title':k, url:v, 'output':'web,pdf'}
# %nbdev_export
_k_names = ['folders', 'folderitems', 'subfolders', 'subfolderitems']
def _side_dict(title, data, level=0):
k_name = _k_names[level]
level += 1
res = [(_side_dict(k, v, level) if isinstance(v,dict) else _leaf(k,v))
for k,v in data.items()]
return ({k_name:res} if not title
else res if title.startswith('empty')
else {'title': title, 'output':'web', k_name: res})
# %nbdev_export
_re_catch_title = re.compile('^title\s*:\s*(\S+.*)$', re.MULTILINE)
# %nbdev_export
def _get_title(fname):
"Grabs the title of html file `fname`"
with open(fname, 'r') as f: code = f.read()
src = _re_catch_title.search(code)
return fname.stem if src is None else src.groups()[0]
# %nbdev_hide
test_eq(_get_title(Config().doc_path/'export.html'), "Export to modules")
# %nbdev_export
def _create_default_sidebar():
"Create the default sidebar for the docs website"
dic = {"Overview": "/"}
files = [f for f in Config().nbs_path.glob('*.ipynb') if not f.name.startswith('_')]
fnames = [_nb2htmlfname(f) for f in sorted(files)]
    titles = [_get_title(f) for f in fnames if f.stem != 'index']
if len(titles) > len(set(titles)): print(f"Warning: Some of your Notebooks use the same title ({titles}).")
dic.update({_get_title(f):f'{f.name}' for f in fnames if f.stem!='index'})
return dic
# %nbdev_export
def create_default_sidebar():
"Create the default sidebar for the docs website"
dic = {Config().lib_name: _create_default_sidebar()}
json.dump(dic, open(Config().doc_path/'sidebar.json', 'w'), indent=2)
# The default sidebar lists all html pages with their respective title, except the index that is named "Overview". To build a custom sidebar, set the flag `custom_sidebar` in your `settings.ini` to `True` then change the `sidebar.json` file in the `doc_folder` to your liking. Otherwise, the sidebar is updated at each doc build.
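# As a sketch of the structure the sidebar takes, here are `_leaf` and `_side_dict` (copied from the definitions above) applied to an assumed toy sidebar dict, producing the nested Jekyll folder structure:

```python
# _leaf and _side_dict copied from the notebook source above; the input
# sidebar dict below is a toy example, not a real project's sidebar.json.
def _leaf(k, v):
    url = 'external_url' if "http" in v else 'url'
    return {'title': k, url: v, 'output': 'web,pdf'}

_k_names = ['folders', 'folderitems', 'subfolders', 'subfolderitems']
def _side_dict(title, data, level=0):
    k_name = _k_names[level]
    level += 1
    res = [(_side_dict(k, v, level) if isinstance(v, dict) else _leaf(k, v))
           for k, v in data.items()]
    return ({k_name: res} if not title
            else res if title.startswith('empty')
            else {'title': title, 'output': 'web', k_name: res})

side = _side_dict('Sidebar', {'nbdev': {'Overview': '/', 'Export': 'export.html'}})
print(side['folders'][0]['title'])  # nbdev
```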
# %nbdev_export
def make_sidebar():
    "Make the sidebar for the doc website from the content of `doc_folder/sidebar.json`"
cfg = Config()
if not (cfg.doc_path/'sidebar.json').exists() or cfg.get('custom_sidebar', 'False') == 'False':
create_default_sidebar()
sidebar_d = json.load(open(cfg.doc_path/'sidebar.json', 'r'))
res = _side_dict('Sidebar', sidebar_d)
res = {'entries': [res]}
res_s = yaml.dump(res, default_flow_style=False)
res_s = res_s.replace('- subfolders:', ' subfolders:').replace(' - - ', ' - ')
res_s = f"""
#################################################
### THIS FILE WAS AUTOGENERATED! DO NOT EDIT! ###
#################################################
# Instead edit {'../../sidebar.json'}
"""+res_s
open(cfg.doc_path/'_data/sidebars/home_sidebar.yml', 'w').write(res_s)
# ## Export-
# %nbdev_hide
notebook2script()
|
nbs/03_export2html.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 7.5.1
# language: ''
# name: sagemath
# ---
#1.1
A=[[1,0],[1,1]]; A=Matrix(A); show("A=",A)
B=[[1,1,1],[1,0,0]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
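# A note on why `show("BA=",B*A)` is commented out in several of these exercises: the product exists only when the inner dimensions agree. A quick check with numpy (Sage's `Matrix` raises a similar error for a non-conformable product):

```python
import numpy as np

A = np.array([[1, 0], [1, 1]])        # 2x2
B = np.array([[1, 1, 1], [1, 0, 0]])  # 2x3
AB = A @ B                            # (2x2)(2x3) -> 2x3, well defined
try:
    B @ A                             # (2x3)(2x2): inner 3 != 2, undefined
    defined = True
except ValueError:
    defined = False
print(AB.shape, defined)  # (2, 3) False
```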
#1.2
A=[[0,0,0],[0,0,1]]; A=Matrix(A); show("A=",A)
B=[[1,1,1],[1,0,1],[0,1,1]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
#1.3
A=[[1,1],[0,1],[0,0]]; A=Matrix(A); show("A=",A)
B=[[0],[1]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
#1.4
A=[[1,0,1]]; A=Matrix(A); show("A=",A)
B=[[0],[0],[0]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)
#1.5
A=[[1,1,0]]; A=Matrix(A); show("A=",A)
B=[[0],[0],[0]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)
#1.6
A=[[0,1],[0,1]]; A=Matrix(A); show("A=",A)
B=[[1,1,1],[1,1,0]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
#2.1
A=[[5],[7],[0]]; A=Matrix(A); show("A=",A)
B=[[-4]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
#2.2
A=[[2,7,2],[8,-6,6]]; A=Matrix(A); show("A=",A)
B=[[6,6],[-2,-3],[-6,8]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)
#2.3
A=[[-4,-6],[0,6]]; A=Matrix(A); show("A=",A)
B=[[-2],[-2]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
#2.4
A=[[7],[6]]; A=Matrix(A); show("A=",A)
B=[[-9,8]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
show("BA=",B*A)
#2.5
A=[[-2],[-4]]; A=Matrix(A); show("A=",A)
B=[[-8]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
#2.6
A=[[-5,-8,-1]]; A=Matrix(A); show("A=",A)
B=[[8,-1],[5,4],[4,-3]]; B=Matrix(B); show("B=",B)
show("AB=",A*B)
#show("BA=",B*A)
|
latex/MD_SMC/.ipynb_checkpoints/MD03 Matrices-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Document retrieval from Wikipedia data
import turicreate
# # Load some text data from Wikipedia
people = turicreate.SFrame('../../data/people_wiki.sframe')
people
# # Explore data
# ## Taking a look at the entry for President Obama
obama = people[people['name'] == '<NAME>']
obama
obama['text']
# ## Explore the entry for actor <NAME>
clooney = people[people['name'] == '<NAME>']
clooney['text']
# # Word counts for Obama article
obama['word_count'] = turicreate.text_analytics.count_words(obama['text'])
obama
print(obama['word_count'])
# ## Find most common words in Obama article
obama.stack('word_count',new_column_name=['word','count'])
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
obama_word_count_table
obama_word_count_table.sort('count',ascending=False)
# # Compute TF-IDF for the entire corpus of articles
people['word_count'] = turicreate.text_analytics.count_words(people['text'])
people
people['tfidf'] = turicreate.text_analytics.tf_idf(people['text'])
people
# ## Examine the TF-IDF for the Obama article
obama = people[people['name'] == '<NAME>']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
# ## Examine the TF-IDF for Clooney
clooney = people[people['name'] == '<NAME>']
clooney[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
# # Manually evaluate the distance between certain people's articles
clinton = people[people['name'] == '<NAME>']
beckham = people[people['name'] == '<NAME>']
# ## Is Obama closer to Clinton or to Beckham?
turicreate.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
turicreate.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
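# The cosine distance used above is `1 - cos(angle)` between two word-weight vectors; a minimal sketch with toy, made-up vectors (turicreate computes the same quantity over the sparse tf-idf dictionaries):

```python
import numpy as np

def cosine_distance(a, b):
    # 1 minus the cosine of the angle between a and b: 0 = same direction.
    return 1 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical tf-idf weight vectors, assumed for illustration only.
obama_v   = np.array([3.0, 1.0, 0.0])
clinton_v = np.array([2.0, 1.0, 0.5])
beckham_v = np.array([0.0, 0.5, 4.0])
print(cosine_distance(obama_v, clinton_v) < cosine_distance(obama_v, beckham_v))  # True
```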
# # Apply nearest neighbors for retrieval of Wikipedia articles
# ## Build the NN model
knn_model = turicreate.nearest_neighbors.create(people,features=['tfidf'],label='name')
# ## Use model for retrieval... for example, who is closest to Obama?
knn_model.query(obama)
# ## Other examples of retrieval
swift = people[people['name'] == '<NAME>']
knn_model.query(swift)
jolie = people[people['name'] == '<NAME>']
knn_model.query(jolie)
arnold = people[people['name'] == '<NAME>']
knn_model.query(arnold)
|
course-exercises/week4-Clustering-and-Similarity/Class-examples.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# # Linear Modelling - Maximum Likelihood
#
# One approach to learning parameters is minimizing a loss function; another is to incorporate a random variable to model the _noise_, which has considerable advantages over the former approach.
# ## The Gaussian (normal) distribution
#
# A Gaussian distribution is defined over the sample space of all real numbers with the pdf for a random variable $Y$ as the following:
#
# $$
# p(y \mid \mu, \sigma^2) = \frac{1}{\sigma \sqrt{2 \pi}} \exp{\left\{ - \frac{1}{2 \sigma^2} (y - \mu)^2 \right\}}
# $$
#
# The common shorthand notation is the following:
#
# $$
# p(y \mid \mu, \sigma^2) = \mathcal{N}(\mu, \sigma^2)
# $$
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
x_axis = np.linspace(-5, 10, 100)
plt.plot(x_axis, norm.pdf(x_axis,-2,0.1 ** 0.5), 'r', label="$\mu = -2, \sigma^2 = 0.1$")
plt.plot(x_axis, norm.pdf(x_axis,0,0.3 ** 0.5), 'g', label="$\mu = 0, \sigma^2 = 0.3$")
plt.plot(x_axis, norm.pdf(x_axis,5,2 ** 0.5), 'b', label="$\mu = 5, \sigma^2 = 2$")
plt.legend()
plt.show()
# -
# ## Multivariate Gaussian
#
# We can generalize the Gaussian distribution to define a density function over vectors. For a vector $\mathbf{x} = [x_1, ... x_D]^T$ the density function is defined as:
#
# $$
# p(\mathbf{x}) = \frac{1}{(2 \pi)^{\frac{D}{2}}{\begin{vmatrix}\mathbf{\Sigma}\end{vmatrix}^{\frac{1}{2}}}} \exp \left\{ - \frac{1}{2} (\mathbf{x} - \mathbf{\mu})^T \mathbf{\Sigma}^{-1} (\mathbf{x} - \mathbf{\mu}) \right\}
# $$
#
# where $\mathbf{\mu}$ is a vector of mean values and $\mathbf{\Sigma}$ is a $D \times D$ covariance matrix (a matrix whose element in the $i$, $j$ position is the covariance between the $i$th and $j$th elements of $\mathbf{x}$).
# +
from scipy.stats import multivariate_normal
from mpl_toolkits.mplot3d import Axes3D
def plot_m_gauss(mu, variance):
#Create grid and multivariate normal
x = np.linspace(-2,5,500)
y = np.linspace(5,-2,500)
X, Y = np.meshgrid(x,y)
pos = np.empty(X.shape + (2,))
pos[:, :, 0] = X; pos[:, :, 1] = Y
rv = multivariate_normal(mu, variance)
#Make a 3D plot
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(X, Y, rv.pdf(pos),cmap='viridis',linewidth=0)
ax.set_xlabel('x1')
ax.set_ylabel('x2')
plt.show()
plt.contour(X, Y, rv.pdf(pos))
plt.show()
mu_1 = np.array([2, 1]).T
variance_1 = np.array([[1, 0], [0, 1]])
print("mu = {}, Epsilon = {}".format(mu_1, variance_1))
plot_m_gauss(mu_1, variance_1)
mu_2 = np.array([2, 1]).T
variance_2 = np.array([[1, 0.8], [0.8, 1]])
print("mu = {}, Epsilon = {}".format(mu_2, variance_2))
plot_m_gauss(mu_2, variance_2)
# -
# A special case of the multivariate Gaussian is where the two variables are independent, hence:
#
# $$
# \mathbf{\Sigma} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \mathbf{I}
# $$
#
# $$
# \begin{align}
# p(\mathbf{x}) &= \frac{1}{(2 \pi)^{\frac{D}{2}} \begin{vmatrix}\mathbf{I}\end{vmatrix}^{\frac{1}{2}} } \exp \left\{ - \frac{1}{2} (\mathbf{x} - \mathbf{\mu})^T \mathbf{I}^{-1} (\mathbf{x} - \mathbf{\mu}) \right\} \\
# &= \frac{1}{(2 \pi)^{\frac{D}{2}} \begin{vmatrix}\mathbf{I}\end{vmatrix}^{\frac{1}{2}} } \exp \left\{ - \frac{1}{2} (\mathbf{x} - \mathbf{\mu})^T (\mathbf{x} - \mathbf{\mu}) \right\} \\
# &= \frac{1}{(2 \pi)^{\frac{D}{2}} \begin{vmatrix}\mathbf{I}\end{vmatrix}^{\frac{1}{2}} } \exp \left\{ - \frac{1}{2} \sum_{d=1}^D (x_d - \mu_d)^2 \right\}
# \end{align}
# $$
#
# The exponential of a sum is a product of exponentials thus
#
# $$
# p(\mathbf{x}) = \frac{1}{(2 \pi)^{\frac{D}{2}} \begin{vmatrix}\mathbf{I}\end{vmatrix}^{\frac{1}{2}} } \prod_{d=1}^D \exp \left\{ - \frac{1}{2} (x_d - \mu_d)^2 \right\}
# $$
#
# The determinant of $\mathbf{I}$ is 1, and $(2 \pi)^{\frac{D}{2}}$ can be written as $\prod_{d=1}^D (2 \pi)^{\frac{1}{2}}$ thus we arrive at:
#
# $$
# p(\mathbf{x}) = \prod_{d=1}^D \frac{1}{\sqrt{2 \pi} } \exp \left\{ - \frac{1}{2} (x_d - \mu_d)^2 \right\}
# $$
#
# Each term in the product is a univariate Gaussian (with mean $\mu_d$ and variance $1$), thus by the definition of independence ($p(A \cap B) = p(A)p(B)$ iff $A$ and $B$ are independent), the elements of $\mathbf{x}$ are independent. This will work for any $\mathbf{\Sigma}$ with non-zero elements only in the diagonal positions.
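# We can check this factorisation numerically: with identity covariance the multivariate density equals the product of univariate unit-variance densities (the evaluation point and mean below are chosen arbitrarily).

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

mu = np.array([2.0, 1.0])
x = np.array([0.5, -0.3])
# Joint density with Sigma = I ...
joint = multivariate_normal(mu, np.eye(2)).pdf(x)
# ... equals the product of univariate N(mu_d, 1) densities.
product = norm.pdf(x[0], mu[0], 1.0) * norm.pdf(x[1], mu[1], 1.0)
print(np.isclose(joint, product))  # True
```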
# ## Thinking generatively
#
# If we think about how we could generate men's 100m times that look like the data we observe, we would arrive at the following:
#
# $$
# t_n = \mathbf{w}^T \mathbf{x}_n + \epsilon_n
# $$
#
# where $\epsilon_n$ is a random variable.
#
# Now we need to determine the distribution for $\epsilon_n$. Our model is continuous, thus $\epsilon_n$ must be a continuous random variable. There is a random variable for each Olympic year, and it is a reasonable assumption that these values are independent.
#
# $$
# p(\epsilon_1, ..., \epsilon_n) = \prod_{n=1}^N p(\epsilon_n)
# $$
#
# Let's assume $p(\epsilon_n)$ follows a Gaussian distribution with zero mean and variance $\sigma^2$. Our model can now be described as two components:
#
# 1. A _deterministic_ component ($\mathbf{w}^T \mathbf{x}_n$) referred to as a _trend_ or _drift_
# 2. A random component ($\epsilon_n$) referred to as _noise_
#
# In our case the noise is _additive_, but some applications might call for _multiplicative_ noise, such as pixel degradation.
# ## Likelihood
#
# Our model is of the following form:
#
# $$
# t_n = f(x_n; \mathbf{w}) + \epsilon_n \quad \epsilon_n \sim \mathcal{N}(0, \sigma^2)
# $$
#
# We can't minimize the loss since $t_n$ is no longer a fixed value; it is a random variable. Adding a constant ($\mathbf{w}^T \mathbf{x}_n$) to a Gaussian-distributed random variable is equivalent to a new Gaussian random variable with the constant added to the mean. Thus $t_n$ has the following pdf:
#
# $$
# p(t_n \mid \mathbf{x}_n, \mathbf{w}, \sigma^2) = \mathcal{N}(\mathbf{w}^T \mathbf{x}_n, \sigma^2)
# $$
#
# We can use this to find optimal values for $\mathbf{w}$ and $\sigma^2$, consider the year 1980, using the values for $\mathbf{w}$ we found previously and assuming $\sigma^2 = 0.05$ we can plot:
#
# $$
# p\left(t_n \mid \mathbf{x}_n = \begin{bmatrix}1\\1980\end{bmatrix}, \mathbf{w} = \begin{bmatrix}36.416\\-0.0133\end{bmatrix}, \sigma^2 = 0.05 \right)
# $$
# +
mu = 36.41645590250286 - 0.013330885710960602 * 1980
sigma2 = 0.05
print("mu = {}, sigma^2 = {}".format(mu, sigma2))
x_axis = np.linspace(9, 11, 50)
plt.plot(x_axis, norm.pdf(x_axis, mu, sigma2 ** 0.5), 'r')
plt.show()
# -
# According to the graph, the most _likely_ winning time for 1980 is $10.02$ seconds. The actual time was $10.25$, thus we need to tune the parameters $\mathbf{w}$ and $\sigma^2$ to make the density as high as possible at $t = 10.25$.
# ## Dataset likelihood
#
# We can extend this to the whole dataset by finding the joint conditional density:
#
# $$
# p(t_1, ..., t_N \mid \mathbf{x}_1, ..., \mathbf{x}_N, \mathbf{w}, \sigma^2)
# $$
#
# By using the vector notation defined previously and the assumption that the noise at each datapoint is independent, we get the following:
#
# $$
# L = p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \sigma^2) = \prod_{n=1}^N p(t_n \mid \mathbf{x_n}, \mathbf{w}, \sigma^2) = \prod_{n=1}^N \mathcal{N}(\mathbf{w}^T \mathbf{x}_n, \sigma^2)
# $$
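# As a sketch (a toy subset of the Olympic data, with the parameter values assumed from earlier), the log of this likelihood is just a sum of per-point Gaussian log-densities:

```python
import numpy as np
from scipy.stats import norm

# Assumed parameter values from the earlier fit, and three example years.
w = np.array([36.416, -0.0133])
sigma2 = 0.05
X = np.array([[1.0, 1976.0], [1.0, 1980.0], [1.0, 1984.0]])
t = np.array([10.06, 10.25, 9.99])
# log L = sum_n log N(t_n | w^T x_n, sigma^2)
log_L = norm.logpdf(t, loc=X @ w, scale=np.sqrt(sigma2)).sum()
print(np.isfinite(log_L))  # True
```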
# ## Maximum likelihood
#
# To find $\widehat{\mathbf{w}}$ and $\widehat{\sigma^2}$ we clearly need to maximize the value of $L$; to do this we will maximize the log-likelihood (for analytical reasons)
#
# $$
# \begin{align}
# L &= \prod_{n=1}^N \mathcal{N}(\mathbf{w}^T \mathbf{x}_n, \sigma^2) \\
# \log L &= \log \left(\prod_{n=1}^N \mathcal{N}(\mathbf{w}^T \mathbf{x}_n, \sigma^2) \right) \\
# &= \sum_{n=1}^N \log \mathcal{N}(\mathbf{w}^T \mathbf{x}_n, \sigma^2) \\
# &= \sum_{n=1}^N \log \left( \frac{1}{\sigma \sqrt{2 \pi}} \exp{\left\{ - \frac{1}{2 \sigma^2} (t_n - \mathbf{w}^T \mathbf{x}_n)^2 \right\}} \right) \\
# &= \sum_{n=1}^N \left( -\frac{1}{2} \log(2 \pi) - \log \sigma - \frac{1}{2 \sigma^2} (t_n - \mathbf{w}^T \mathbf{x}_n)^2 \right) \\
# &= -\frac{N}{2} \log(2 \pi) - N \log \sigma - \frac{1}{2 \sigma^2} \sum_{n=1}^N (t_n - \mathbf{w}^T \mathbf{x}_n)^2 \\
# \end{align}
# $$
#
# As previously, we differentiate and set to zero to find the turning point; in this case we want a maximum.
#
# $$
# \begin{align}
# \frac{\partial \log L}{\partial \mathbf{w}} &= \frac{1}{\sigma^2} \sum^N_{n=1} \mathbf{x}_n(t_n - \mathbf{x}_n^T \mathbf{w}) \\
# &= \frac{1}{\sigma^2} \sum^N_{n=1} \mathbf{x}_n t_n - \mathbf{x}_n \mathbf{x}_n^T \mathbf{w} \\
# \end{align}
# $$
#
# Using the vector/matrix notation from earlier, $\sum_{n=1}^N \mathbf{x}_n t_n$ becomes $\mathbf{X}^T \mathbf{t}$ and $\sum_{n=1}^N \mathbf{x}_n \mathbf{x}_n^T \mathbf{w}$ becomes $\mathbf{X}^T \mathbf{Xw}$, thus the derivative becomes:
#
# $$
# \frac{\partial \log L}{\partial \mathbf{w}} = \frac{1}{\sigma^2} (\mathbf{X}^T \mathbf{t} - \mathbf{X}^T \mathbf{Xw})
# $$
#
# Setting the derivative to $\mathbf{0}$ (a vector of all zeros) and solving for $\mathbf{w}$ gives us:
#
# $$
# \begin{align}
# \frac{1}{\sigma^2} (\mathbf{X}^T \mathbf{t} - \mathbf{X}^T \mathbf{Xw}) &= \mathbf{0} \\
# \mathbf{X}^T \mathbf{t} - \mathbf{X}^T \mathbf{Xw} &= \mathbf{0} \\
# - \mathbf{X}^T \mathbf{Xw} &= - \mathbf{X}^T \mathbf{t} \\
# \mathbf{X}^T \mathbf{Xw} &= \mathbf{X}^T \mathbf{t} \\
# \widehat{\mathbf{w}} &= (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{t} \\
# \end{align}
# $$
#
# This is the same result as minimizing the squared loss. Minimising the squared loss is equivalent to the maximum likelihood solution if the noise is assumed to be Gaussian.
#
# Now we repeat the process for $\sigma^2$
#
# $$
# \begin{align}
# \frac{\partial \log L}{\partial \sigma} &= - \frac{N}{\sigma} + \frac{1}{\sigma^3} \sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2
# \end{align}
# $$
#
# $$
# \begin{align}
# - \frac{N}{\sigma} + \frac{1}{\sigma^3} \sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2 &= 0 \\
# \frac{1}{\sigma^3} \sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2 &= \frac{N}{\sigma} \\
# \sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2 &= N \sigma^2 \\
# \widehat{\sigma^2} &= \frac{1}{N} \sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2 \\
# \end{align}
# $$
#
# This makes sense: the variance is the average squared error. We can use the fact that $\sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2$ is equivalent to $(\mathbf{t} - \mathbf{X}\widehat{\mathbf{w}})^T (\mathbf{t} - \mathbf{X}\widehat{\mathbf{w}})$
#
# $$
# \begin{align}
# \widehat{\sigma^2} &= \frac{1}{N} (\mathbf{t} - \mathbf{X}\widehat{\mathbf{w}})^T (\mathbf{t} - \mathbf{X}\widehat{\mathbf{w}}) \\
# &= \frac{1}{N} (\mathbf{t}^T \mathbf{t} - 2 \mathbf{t}^T \mathbf{X} \widehat{\mathbf{w}} + \widehat{\mathbf{w}}^T \mathbf{X}^T \mathbf{X} \widehat{\mathbf{w}}) \\
# \end{align}
# $$
#
# Now using $\widehat{\mathbf{w}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{t}$ and $\widehat{\mathbf{w}}^T = \mathbf{t}^T \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1}$
#
# $$
# \begin{align}
# \widehat{\sigma^2} &= \frac{1}{N} (\mathbf{t}^T \mathbf{t} - 2 \mathbf{t}^T \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{t} + \mathbf{t}^T \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{t}) \\
# &= \frac{1}{N} (\mathbf{t}^T \mathbf{t} - 2 \mathbf{t}^T \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{t} + \mathbf{t}^T \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{t}) \\
# &= \frac{1}{N} (\mathbf{t}^T \mathbf{t} - \mathbf{t}^T \mathbf{X} (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{t}) \\
# &= \frac{1}{N} (\mathbf{t}^T \mathbf{t} - \mathbf{t}^T \mathbf{X} \widehat{\mathbf{w}}) \\
# \end{align}
# $$
# +
x_values = [1896, 1900, 1904, 1906, 1908, 1912, 1920, 1924, 1928, 1932, 1936, 1948, 1952, 1956, 1960, 1964,
1968, 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008]
t_values = [12.00, 11.00, 11.00, 11.20, 10.80, 10.80, 10.80, 10.60, 10.80, 10.30, 10.30, 10.30, 10.40, 10.50,
10.20, 10.00, 9.95, 10.14, 10.06, 10.25, 9.99, 9.92, 9.96, 9.84, 9.87, 9.85, 9.69]
N = len(x_values)
X = np.matrix([[1,x] for x in x_values])
def get_params(X_mat):
XT = np.transpose(X_mat)
tT = np.matrix([t_values])
t = np.transpose(tT)
best_w = ((XT * X_mat) ** -1) * XT * t
best_sigma2 = (1/N) * (tT * t - tT * X_mat * best_w)
return (best_w, best_sigma2)
print("w = {}\n\nsigma^2 = {}".format(*get_params(X)))
# -
# ## Checking the turning point
#
# Previously we differentiated the loss function twice to check that the turning point was a minimum; we would like to do the same here to check that the likelihood is at a maximum.
#
# Since the derivative is with respect to a vector, we need to form a Hessian matrix, a square matrix of all the second-order partial derivatives of a function; for example, for a function $f(\mathbf{x}; \mathbf{w})$ where $\mathbf{w} = [w_1, ..., w_K]^T$
#
# $$
# \mathbf{H} = \begin{bmatrix}
# \dfrac{\partial^2 f}{\partial w_1^2} & \dfrac{\partial^2 f}{\partial w_1 \partial w_2} & \cdots & \dfrac{\partial^2 f}{\partial w_1 \partial w_K} \\
# \dfrac{\partial^2 f}{\partial w_2 \partial w_1} & \dfrac{\partial^2 f}{\partial w_2^2} & \cdots & \dfrac{\partial^2 f}{\partial w_2 \partial w_K} \\
# \vdots & \vdots & \ddots & \vdots \\
# \dfrac{\partial^2 f}{\partial w_K \partial w_1} & \dfrac{\partial^2 f}{\partial w_K \partial w_2} & \cdots & \dfrac{\partial^2 f}{\partial w_K^2} \\
# \end{bmatrix}
# $$
#
# The turning point is a maximum if the matrix is negative definite. A real-valued matrix is negative definite if $\mathbf{x}^T \mathbf{H} \mathbf{x} < 0$ for all real values of $\mathbf{x}$
#
# The first order derivative was
#
# $$
# \frac{\partial \log L}{\partial \mathbf{w}} = \frac{1}{\sigma^2} (\mathbf{X}^T \mathbf{t} - \mathbf{X}^T \mathbf{Xw})
# $$
#
# Differentiating again with respect to $\mathbf{w}^T$ gives us the Hessian matrix:
#
# $$
# \frac{\partial^2 \log L}{\partial \mathbf{w} \partial \mathbf{w}^T} = - \frac{1}{\sigma^2} \mathbf{X}^T \mathbf{X}
# $$
#
# Now to check the matrix is negative definite we must show
#
# $$
# - \frac{1}{\sigma^2} \mathbf{z}^T \mathbf{X}^T \mathbf{X} \mathbf{z} < 0
# $$
#
# for any vector $\mathbf{z}$ or equivalently (since $\sigma^2$ must be positive)
#
# $$
# \mathbf{z}^T \mathbf{X}^T \mathbf{X} \mathbf{z} > 0
# $$
#
# So that we can explicitly multiply out the various terms, we will restrict $\mathbf{X}$ to
#
# $$
# \mathbf{X}
# =
# \begin{bmatrix}
# \mathbf{x}^T_1 \\
# \mathbf{x}^T_2 \\
# \vdots \\
# \mathbf{x}^T_N
# \end{bmatrix}
# =
# \begin{bmatrix}
# x_{11} & x_{12} \\
# x_{21} & x_{22} \\
# \vdots & \vdots \\
# x_{N1} & x_{N2}
# \end{bmatrix}
# $$
#
# Thus $\mathbf{X}^T \mathbf{X}$ becomes
#
# $$
# \mathbf{X}^T \mathbf{X}
# =
# \begin{bmatrix}
# \sum_{i=1}^N{x^2_{i1}} & \sum_{i=1}^N{x_{i1} x_{i2}} \\
# \sum_{i=1}^N{x_{i2} x_{i1}} & \sum_{i=1}^N{x^2_{i2}}
# \end{bmatrix}
# $$
#
# Pre- and post-multiplying with the arbitrary vector $\mathbf{z} = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix}$ gives us:
#
# $$
# \begin{align}
# \mathbf{z}^T \mathbf{X}^T \mathbf{X} \mathbf{z} &= \mathbf{z}^T
# \begin{bmatrix}
# \sum_{i=1}^N{x^2_{i1}} & \sum_{i=1}^N{x_{i1} x_{i2}} \\
# \sum_{i=1}^N{x_{i2} x_{i1}} & \sum_{i=1}^N{x^2_{i2}}
# \end{bmatrix}
# \mathbf{z} \\
# &=
# \begin{bmatrix}
# z_1 \sum_{i=1}^N{x^2_{i1}} + z_2 \sum_{i=1}^N{x_{i2} x_{i1}} &
# z_1 \sum_{i=1}^N{x_{i1} x_{i2}} + z_2 \sum_{i=1}^N{x^2_{i2}}
# \end{bmatrix}
# \mathbf{z} \\
# &= z_1^2 \sum_{i=1}^N{x^2_{i1}} + 2 z_1 z_2 \sum_{i=1}^N{x_{i1} x_{i2}} + z^2_2 \sum_{i=1}^N{x^2_{i2}}
# \end{align}
# $$
#
# The terms $z_1^2 \sum_{i=1}^N{x^2_{i1}}$ and $z^2_2 \sum_{i=1}^N{x^2_{i2}}$ are always positive, so to prove that $\mathbf{z}^T \mathbf{X}^T \mathbf{X} \mathbf{z}$ is positive it suffices to show that
#
# $$
# z_1^2 \sum_{i=1}^N{x^2_{i1}} + z^2_2 \sum_{i=1}^N{x^2_{i2}} > 2 z_1 z_2 \sum_{i=1}^N{x_{i1} x_{i2}}
# $$
#
# i.e. the sum of the positive terms must be greater than the cross term so that the whole expression is greater than zero. Now let $y_{i1} = z_1 x_{i1}$ and $y_{i2} = z_2 x_{i2}$.
#
# $$
# \begin{align}
# z_1^2 \sum_{i=1}^N{x^2_{i1}} + z^2_2 \sum_{i=1}^N{x^2_{i2}} &> 2 z_1 z_2 \sum_{i=1}^N{x_{i1} x_{i2}}\\
# \sum_{i=1}^N{y^2_{i1}} + \sum_{i=1}^N{y^2_{i2}} &> 2 \sum_{i=1}^N{y_{i1} y_{i2}}\\
# \sum_{i=1}^N{\left(y^2_{i1} + y^2_{i2} \right)} &> 2 \sum_{i=1}^N{y_{i1} y_{i2}}
# \end{align}
# $$
#
# Now consider an arbitrary $i$
#
# $$
# \begin{align}
# y^2_{i1} + y^2_{i2} &> 2 y_{i1} y_{i2} \\
# y^2_{i1} - 2 y_{i1} y_{i2} + y^2_{i2} &> 0 \\
# (y_{i1} - y_{i2})^2 &> 0
# \end{align}
# $$
#
# Thus the only case where this is not true is when $y_{i1} = y_{i2}$, i.e. $z_1 x_{i1} = z_2 x_{i2}$, something that is unlikely to happen in practice. Thus for an arbitrary $i$, $y^2_{i1} + y^2_{i2} > 2 y_{i1} y_{i2}$ holds, and thus the summation of the terms holds. Hence, $\mathbf{z}^T \mathbf{X}^T \mathbf{X} \mathbf{z}$ is always positive, thus $\mathbf{H}$, our Hessian matrix, is negative definite, and the solution is a maximum.
#
# Likewise, to check that $\widehat{\sigma^2}$ corresponds to the maximum, we differentiate
#
# $$
# \frac{\partial \log L}{\partial \sigma} = - \frac{N}{\sigma} + \frac{1}{\sigma^3} \sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2
# $$
#
# Again with respect to $\sigma$, giving us
#
# $$
# \frac{\partial^2 \log L}{\partial \sigma^2} = \frac{N}{\sigma^2} - \frac{3}{\sigma^4} \sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2
# $$
#
# Substituting $\widehat{\sigma^2} = \frac{1}{N} \sum_{n=1}^N (t_n - \mathbf{x}^T \widehat{\mathbf{w}})^2$
#
# $$
# \begin{align}
# \frac{\partial^2 \log L}{\partial \sigma^2} &= \frac{N}{\widehat{\sigma^2}} - \frac{3}{\left(\widehat{\sigma^2}\right)^2} N \widehat{\sigma^2} \\
# &= - \frac{2N}{\widehat{\sigma^2}}
# \end{align}
# $$
#
# Thus $\widehat{\sigma^2}$ corresponds to a maximum.
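# A numerical companion to the argument above (toy design matrix, assumed): the eigenvalues of $\mathbf{X}^T \mathbf{X}$ are strictly positive, so the Hessian $- \frac{1}{\sigma^2} \mathbf{X}^T \mathbf{X}$ is negative definite.

```python
import numpy as np

# Toy design matrix: a column of ones plus 20 random "years".
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(20), rng.uniform(1896, 2008, 20)])
# X^T X is symmetric; all eigenvalues positive => positive definite,
# so -(1/sigma^2) X^T X is negative definite for any sigma^2 > 0.
eigvals = np.linalg.eigvalsh(X.T @ X)
print(bool(np.all(eigvals > 0)))  # True
```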
# ## Maximum likelihood favours complexity
#
# Substituting $\widehat{\sigma^2}$ into $\log L$ gives the value of the log-likelihood at the maximum
#
# $$
# \begin{align}
# \log L &= -\frac{N}{2} \log(2 \pi) - N \log \sigma - \frac{1}{2 \sigma^2} \sum_{n=1}^N (t_n - \mathbf{w}^T \mathbf{x}_n)^2 \\
# &= -\frac{N}{2} \log(2 \pi) - N \log \sqrt{\widehat{\sigma^2}} - \frac{1}{2 \widehat{\sigma^2}} N\widehat{\sigma^2} \\
# &= -\frac{N}{2} \log(2 \pi) - \frac{N}{2} \log \widehat{\sigma^2} - \frac{N}{2}\\
# &= -\frac{N}{2} (1 + \log 2 \pi) - \frac{N}{2} \log \widehat{\sigma^2}\\
# \end{align}
# $$
#
# Thus by decreasing $\widehat{\sigma^2}$ we increase the log-likelihood. One way to decrease $\widehat{\sigma^2}$ is to modify $f(\mathbf{x};\mathbf{w})$ so that it can capture more of the noise. The same tradeoff between overfitting and generalization as we saw last time occurs. Before, we used regularization to penalize complex models; _prior distributions_ on parameter values can achieve the same thing with probabilistic models.
# +
# Normalize x, for numerical stability
stable_x = np.array([float(x) for x in x_values]) - x_values[0]
stable_x *= 0.4
orders = list(range(2, 9))
log_Ls = []
for order in orders:
X = np.matrix([[x**o for o in range(0, order)] for x in stable_x])
(_, ss) = get_params(X)
log_L = -(N/2)*(1+np.log(2 * np.pi)) - (N/2) * np.log(ss)
log_Ls.append(log_L.item(0))
plt.plot(orders, log_Ls, 'r')
plt.xlabel('Polynomial order')
plt.ylabel('log L')
plt.show()
# -
# ## Effect of noise on estimates
#
# It would be useful to determine how much confidence we have in our parameters. Firstly, is our estimator $\widehat{\mathbf{w}}$ _unbiased_?
#
# Our current model takes the form:
#
# $$
# t_n = \mathbf{w}^T \mathbf{x}_n + \epsilon_n
# $$
#
# Since we defined $\epsilon_n$ to be normally distributed, the _generating_ distribution (or likelihood) is a product of normal densities:
#
# $$
# p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \sigma^2) = \prod^N_{n=1} p(t_n \mid \mathbf{x}_n, \mathbf{w}, \sigma^2) = \prod^N_{n=1} \mathcal{N}(\mathbf{w}^T \mathbf{x}_n, \sigma^2)
# $$
#
# We have shown that a product of univariate Gaussians can be rewritten as a multivariate Gaussian with a diagonal covariance, thus
#
# $$
# p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \sigma^2) = \prod^N_{n=1} \mathcal{N}(\mathbf{w}^T \mathbf{x}_n, \sigma^2) = \mathcal{N}(\mathbf{Xw}, \sigma^2 \mathbf{I})
# $$
|
Linear Modelling - Maximum Likelihood.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
from random import uniform
import matplotlib.pyplot as plt
# -
class Neuron:
'Neuron is a superclass that defines the global and default neuron\'s behaviour'
def __init__(self, dimension, learning_rate):
"""Neuron constructor.
@self: the instance; supplied by the interpreter, not by the programmer
@dimension: size of the input dimension (for XOR it is 2 inputs)
@learning_rate: the step size of the learning/training process (lower is generally better)
"""
# Set attrs
self.dimension = dimension
self.lrate = learning_rate
# Init empty vars that will be updated
self.out = None
self.error = None
# Init the aux variables
self.weights = [uniform(0,1) for x in range(dimension)]
self.bias = uniform(0, 1)
def activation(self, fx):
return 1/(1 + np.exp(-fx))
def update(self, inputs):
"""update the weights & it bias
@input = matrix with the train inputs
"""
counter = 0
for i in inputs:
delta = self.lrate * self.error
self.weights[counter] -= (delta*i)
self.bias -= delta
counter+=1
def feedforward(self, inputs):
counter = 0
sum = self.bias
for i in inputs:
sum += i * self.weights[counter]
counter += 1
self.out = self.activation(sum)
def backward(self):
"""
Defined on it children
"""
pass
class HiddenNeuron(Neuron):
'HiddenNeuron is a child of Neuron that receives feedforward from input layer and the backprop from the Output'
def __init__(self, dim, lrate=0.2):
super(HiddenNeuron, self).__init__(dim, lrate)
def backward(self, deltas, weights):
sum = 0
size = len(deltas)
for x in range(size):
sum += deltas[x] * weights[x]
self.error = self.out * (1 - self.out) * sum
class OutputNeuron(Neuron):
'OutputNeuron is a child of Neuron that receives feedforward from hidden layer, backprop to hidden & retrieves it final result'
def __init__(self, dim, lrate=0.2):
super(OutputNeuron, self).__init__(dim, lrate)
def backward(self, target):
self.error = self.out * (1 - self.out) * (self.out - target)
class Model:
'Model combines two hidden neurons and one output neuron into a small XOR network'
def __init__(self):
self.hidden = [HiddenNeuron(2) for i in range(2)]
self.output = OutputNeuron(2)
def predict(self, input):
temp = []
for x in range(2):
self.hidden[x].feedforward(input)
temp.append(self.hidden[x].out)
self.output.feedforward(temp)
return self.output.out
def train(self, inputs, targets, epochs):
it = 0
i = 0
size = len(inputs)
while it < epochs:
if i == size:
i = 0
feature = inputs[i]
temp = []
for x in range(2):
self.hidden[x].feedforward(feature)
temp.append(self.hidden[x].out)
self.output.feedforward(temp)
self.output.backward(targets[i])
deltas = []
deltas.append(self.output.error)
weights = []
weights.append([self.output.weights[0]])
weights.append([self.output.weights[1]])
for x in range(2):
self.hidden[x].backward(deltas, weights[x])
for x in range(2):
self.hidden[x].update(feature)
self.output.update(temp)
it += 1
i += 1
# +
inputs = [[0,0], [0,1], [1,0], [1,1]]
outputs = [0, 1, 1, 0]
epochs = 100000
m = Model()
m.train(inputs, outputs, epochs)
for i in inputs:
newp = 0
p = m.predict(i)
newp = 1 if (p > 0.5) else 0  # threshold the sigmoid output at 0.5
print(str(i) + ' => ' + str(newp) + ' => ' + str(p))
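The `backward` deltas above rely on the sigmoid derivative identity $\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$. A quick numerical check of that identity against a central finite difference:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-4, 4, 9)
analytic = sigmoid(x) * (1 - sigmoid(x))               # closed-form derivative
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
assert np.allclose(analytic, numeric, atol=1e-8)
```

This is why the `out * (1 - self.out)` factor appears in both `HiddenNeuron.backward` and `OutputNeuron.backward`: the neuron can compute its own derivative from its stored output alone.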
|
backup/OO_XOR_RNA.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# # Measurement Error Mitigation
#
# * **Last Updated:** Feb 25, 2019
# * **Requires:** qiskit-terra 0.7, qiskit-ignis 0.1, qiskit-aer 0.1
# ## Introduction
#
# The measurement calibration is used to mitigate measurement errors.
# The main idea is to prepare all $2^n$ basis input states and compute the probability of measuring counts in the other basis states.
# From these calibrations, it is possible to correct the average results of another experiment of interest. This notebook gives examples for how to use the ``ignis.mitigation.measurement`` module.
# +
# Import general libraries (needed for functions)
import numpy as np
import time
# Import Qiskit classes
import qiskit
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister, Aer
from qiskit.providers.aer import noise
from qiskit.tools.visualization import plot_histogram
# Import measurement calibration functions
from qiskit.ignis.mitigation.measurement import (complete_meas_cal,
CompleteMeasFitter, MeasurementFilter)
# -
# ## 3 Qubit Example of the Calibration Matrices
# Assume that we would like to generate a calibration matrix for the 3 qubits Q2, Q3 and Q4 in a 5-qubit Quantum Register [Q0,Q1,Q2,Q3,Q4].
#
# Since we have 3 qubits, there are $2^3=8$ possible quantum states.
# ## Generating Measurement Calibration Circuits
#
# First, we generate a list of measurement calibration circuits for the full Hilbert space.
# Each circuit creates a basis state.
# If there are $n=3$ qubits, then we get $2^3=8$ calibration circuits.
# The following function **complete_meas_cal** returns a list **meas_calibs** of QuantumCircuit objects containing the calibration circuits,
# and a list **state_labels** of the calibration state labels.
#
# The input to this function can be given in one of the following three forms:
#
# - **qubit_list:** A list of qubits to perform the measurement correction on, or:
# - **qr (QuantumRegister):** A quantum register, or:
# - **cr (ClassicalRegister):** A classical register.
#
# In addition, one can provide a string **circlabel**, which is added at the beginning of the circuit names for unique identification.
#
# For example, in our case, the input is a 5-qubit QuantumRegister containing the qubits Q2,Q3,Q4:
# Generate the calibration circuits
qr = qiskit.QuantumRegister(5)
meas_calibs, state_labels = complete_meas_cal(qubit_list=[2,3,4], qr=qr, circlabel='mcal')
# Print the $2^3=8$ state labels (for the 3 qubits Q2,Q3,Q4):
state_labels
# ## Computing the Calibration Matrix
#
# If we do not apply any noise, then the calibration matrix is expected to be the $8 \times 8$ identity matrix.
# Execute the calibration circuits without noise
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=1000)
cal_results = job.result()
# The calibration matrix without noise is the identity matrix
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
print(meas_fitter.cal_matrix)
# Assume that we apply some noise model from Qiskit Aer to the 5 qubits,
# then the calibration matrix will have most of its mass on the main diagonal, with some additional 'noise'.
#
# Alternatively, we can execute the calibration circuits using IBMQ provider.
# Generate a noise model for the 5 qubits
noise_model = noise.NoiseModel()
for qi in range(5):
read_err = noise.errors.readout_error.ReadoutError([[0.9, 0.1],[0.25,0.75]])
noise_model.add_readout_error(read_err, [qi])
# Execute the calibration circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
job = qiskit.execute(meas_calibs, backend=backend, shots=1000, noise_model=noise_model)
cal_results = job.result()
# Calculate the calibration matrix with the noise model
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
print(meas_fitter.cal_matrix)
# Plot the calibration matrix
meas_fitter.plot_calibration()
# ## Analyzing the Results
#
# We would like to compute the total measurement fidelity, and the measurement fidelity for a specific qubit, for example, Q0.
#
# Since the on-diagonal elements of the calibration matrix are the probabilities of measuring state 'x' given preparation of state 'x',
# then the trace of this matrix is the average assignment fidelity.
#
# +
# What is the measurement fidelity?
print("Average Measurement Fidelity: %f" % meas_fitter.readout_fidelity())
# What is the measurement fidelity of Q0?
print("Average Measurement Fidelity of Q0: %f" % meas_fitter.readout_fidelity(
label_list = [['000','001','010','011'],['100','101','110','111']]))
# -
# ## Applying the Calibration
#
# We now perform another experiment and correct the measured results.
#
# ## Correct Measurement Noise on a 3Q GHZ State
#
# As an example, we start with the 3-qubit GHZ state on the qubits Q2,Q3,Q4:
#
# $$ \mid GHZ \rangle = \frac{\mid{000} \rangle + \mid{111} \rangle}{\sqrt{2}}$$
# Make a 3Q GHZ state
cr = ClassicalRegister(3)
ghz = QuantumCircuit(qr, cr)
ghz.h(qr[2])
ghz.cx(qr[2], qr[3])
ghz.cx(qr[3], qr[4])
ghz.measure(qr[2],cr[0])
ghz.measure(qr[3],cr[1])
ghz.measure(qr[4],cr[2])
# We now execute the GHZ circuit (with the noise model above)
job = qiskit.execute([ghz], backend=backend, shots=5000, noise_model=noise_model)
results = job.result()
# We now compute the results without any error mitigation and with the mitigation, namely after applying the calibration matrix to the results.
#
# There are two fitting methods for applying the calibration (if no method is specified, then 'least_squares' is used).
# - **'pseudo_inverse'**, which is a direct inversion of the calibration matrix,
# - **'least_squares'**, which is constrained to yield physical probabilities.
#
# The raw data to be corrected can be given in a number of forms:
#
# - Form1: A counts dictionary from results.get_counts,
# - Form2: A list of counts of length=len(state_labels),
# - Form3: A list of counts of length=M*len(state_labels) where M is an integer (e.g. for use with the tomography data),
# - Form4: A qiskit Result (e.g. results as above).
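In matrix form, 'pseudo_inverse' mitigation is just a linear solve. A toy single-qubit sketch with a made-up readout-error matrix, assuming the convention that the measured distribution is the calibration matrix applied to the true one:

```python
import numpy as np

# Hypothetical calibration matrix: entry (i, j) = P(measure state i | prepared state j)
cal_toy = np.array([[0.9, 0.25],
                    [0.1, 0.75]])

true_counts = np.array([600., 400.])  # counts we would see with perfect readout
raw_toy = cal_toy @ true_counts       # what the noisy detector reports

# 'pseudo_inverse' mitigation: invert the calibration matrix
mitigated_toy = np.linalg.pinv(cal_toy) @ raw_toy
assert np.allclose(mitigated_toy, true_counts)
```

The 'least_squares' method solves the same system but constrains the recovered entries to be non-negative and to sum to the total shot count, which matters when statistical noise pushes the direct inversion outside the space of physical probabilities.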
# +
# Results without mitigation
raw_counts = results.get_counts()
# Get the filter object
meas_filter = meas_fitter.filter
# Results with mitigation
mitigated_results = meas_filter.apply(results)
mitigated_counts = mitigated_results.get_counts(0)
# -
# We can now plot the results with and without error mitigation:
from qiskit.tools.visualization import *
plot_histogram([raw_counts, mitigated_counts], legend=['raw', 'mitigated'])
|
qiskit/ignis/measurement_error_mitigation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: drlnd
# language: python
# name: drlnd
# ---
# # Deep Q-Network (DQN)
# ---
# In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment.
#
# ### 1. Import the Necessary Packages
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
# %matplotlib inline
# ### 2. Instantiate the Environment and Agent
#
# Initialize the environment in the code cell below.
env = gym.make('LunarLander-v2')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
# Before running the next code cell, familiarize yourself with the code in **Step 2** and **Step 3** of this notebook, along with the code in `dqn_agent.py` and `model.py`. Once you have an understanding of how the different files work together,
# - Define a neural network architecture in `model.py` that maps states to action values. This file is mostly empty - it's up to you to define your own deep Q-network!
# - Finish the `learn` method in the `Agent` class in `dqn_agent.py`. The sampled batch of experience tuples is already provided for you; you need only use the local and target Q-networks to compute the loss, before taking a step towards minimizing the loss.
#
# Once you have completed the code in `dqn_agent.py` and `model.py`, run the code cell below. (_If you end up needing to make multiple changes and get unexpected behavior, please restart the kernel and run the cells from the beginning of the notebook!_)
#
# You can find the solution files, along with saved model weights for a trained agent, in the `solution/` folder. (_Note that there are many ways to solve this exercise, and the "solution" is just one way of approaching the problem, to yield a trained agent._)
# +
from dqn_agent import Agent
agent = Agent(state_size=8, action_size=4, seed=0)
# watch an untrained agent
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
# -
# ### 3. Train the Agent with DQN
#
# Run the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance!
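The `eps_start`, `eps_end` and `eps_decay` parameters implement a multiplicative epsilon-greedy schedule, eps ← max(eps_end, eps_decay · eps), applied once per episode. A standalone sketch of the schedule (the helper name is mine, not part of the exercise code):

```python
def eps_schedule(n_episodes=2000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
    """Return the epsilon value used in each episode under multiplicative decay."""
    eps, values = eps_start, []
    for _ in range(n_episodes):
        values.append(eps)
        eps = max(eps_end, eps_decay * eps)
    return values

eps_values = eps_schedule()
```

With the defaults, epsilon decays geometrically and bottoms out at `eps_end` after roughly 920 episodes, since 0.995^919 ≈ 0.01; after that the agent keeps a small constant amount of exploration.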
# +
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
state = env.reset()
score = 0
for t in range(max_t):
action = agent.act(state, eps)
next_state, reward, done, _ = env.step(action)
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=200.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
break
return scores
scores = dqn()
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# -
# ### 4. Watch a Smart Agent!
#
# In the next code cell, you will load the trained weights from file to watch a smart agent!
# +
# load the weights from file
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))
for i in range(5):
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
# -
# ### 5. Explore
#
# In this exercise, you have implemented a DQN agent and demonstrated how to use it to solve an OpenAI Gym environment. To continue your learning, you are encouraged to complete any (or all!) of the following tasks:
# - Amend the various hyperparameters and network architecture to see if you can get your agent to solve the environment faster. Once you build intuition for the hyperparameters that work well with this environment, try solving a different OpenAI Gym task with discrete actions!
# - You may like to implement some improvements such as prioritized experience replay, Double DQN, or Dueling DQN!
# - Write a blog post explaining the intuition behind the DQN algorithm and demonstrating how to use it to solve an RL environment of your choosing.
|
dqn/exercise/Deep_Q_Network.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="0CPoEXN2jwgD"
# ### Cellular automata
#
# Cellular automata are a class of models that can be used to simulate various natural phenomena. They have been used to model tumour development, through a three-dimensional tissue map, and to predict cell growth over time. In neuroscience, cellular automata can be used to study the activation of populations of neurons. There are many other applications, including in chemistry, biology, ecology and many other branches of science.
#
# Cellular automata are essentially a set of (spatially) ordered units called cells. Each cell performs one of the possible actions, or takes one of the possible states, depending on its immediate neighbourhood. Perhaps the best-known example of a cellular automaton is Conway's Game of Life.
#
# ### Game of Life
#
# Conway's Game of Life is a cellular automaton consisting of a rectangular grid of cells. Each cell can be in one of two states: *alive* or *dead*. The game unfolds over time steps, where in each iteration the cells compute their state for the next one. A cell's state is computed from its own current state, the current states of its 8 immediate neighbours, and the following rules:
# - If a cell has fewer than 2 live neighbours, it will be dead in the next iteration
# - If a cell has more than 3 live neighbours, it will be dead in the next iteration
# - If a cell is alive and has 2 or 3 live neighbours, it will be alive in the next iteration
# - If a cell is dead and has exactly 3 live neighbours, it will be alive in the next iteration
#
# The basic data structure of the Game of Life is the state matrix, which for every cell a[i, j] records whether the cell is alive or dead. When implementing the game, the question arises of what to do with the neighbours of cells along the edge of the grid (cells in the first column have no "left" neighbours, etc.). There are several solutions to this problem, such as:
# - Assume that the missing neighbours are always dead
# - Introduce "wrap-around", so that the last column is the "left" neighbour of the first column, and the last row is the "upper" neighbour of the first row (effectively a torus).
#
# Use the second approach when solving the tasks.
#
# ### Task
# The task is to implement a parallelised version of the Game of Life in Python, in several ways. The implementation must ensure that, after the given number of iterations has run, a sequence of state matrices over time (the system state in every iteration) is saved, compatible with the provided animation function (the next cell in the file).
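The rules above, with the toroidal wrap-around, fit in a few lines of vectorised NumPy; a serial reference implementation like this is handy for checking the parallel versions that follow (the function name is mine, not part of the assignment):

```python
import numpy as np

def life_step(grid):
    """One Game of Life iteration on a toroidal grid (np.roll wraps the edges)."""
    neighbours = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbours,
    # or if it is alive now and has exactly 2
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)
```

For example, a vertical "blinker" (three live cells in a column) flips to a horizontal row and back again on every step.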
# + [markdown] id="K4wl3_Ezi5Gh"
# 1. Using threads that simulate *one cell each*, synchronised with Locks, Semaphores and Conditions (7 points)
# Each thread simulates one cell in the system and has access to the states of its neighbours. Cells have no access to a global iteration counter; instead, each cell keeps track of its current iteration internally. In addition to the data matrix, the following need to be introduced:
# - A list of counters of neighbours that have read the current value (one counter per cell). The eighth (last) neighbour to read the value wakes the cell so that it can write its new value into the state matrix (implement the wake-up with a semaphore). Take care to synchronise the neighbours that modify the counter.
# - A Condition on which all cells that have written their new value into the state matrix wait before moving on to the next iteration.
# - A counter, protected by a lock, of the cells that have written their new value into the state matrix. The last cell triggers the Condition for the next iteration. Mind the order in which the counter lock and the condition lock are acquired and released.
# + id="b7buMqKr0IXc"
import numpy as np
import matplotlib.pyplot as plt
import threading
import copy
from threading import Thread
import random
import time
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
from multiprocessing import Pool, cpu_count
def animate(steps):
''' Takes a sequence of matrices (each matrix is the state at one simulation step)
and displays the evolution of the system'''
def init():
im.set_data(steps[0])
return [im]
def animate(i):
im.set_data(steps[i])
return [im]
im = plt.matshow(steps[0], interpolation='None', animated=True);
anim = FuncAnimation(im.get_figure(), animate, init_func=init,
frames=len(steps), interval=1000, blit=True, repeat=False);
return anim
# + id="zysqh3FIia3Q"
# Generate a single random matrix
# Matrix size
grid_size = 15
steps = 7
# Generate a random matrix of the given size, filled with zeros and ones
# with the given probabilities (65% for 0 and 35% for 1)
grid_generated = np.array(np.random.choice([0,1], grid_size**2, p=[0.65, 0.35]))
# + id="9xjvdUQdjDM4"
grid = copy.deepcopy(grid_generated)
grid = grid.reshape(grid_size, grid_size)
# List that stores the history of state matrices across the iterations
matrix_history = []
zavrsene_celije = 0 # Counts the cells that have finished their check
zavrsena_celija_lock = threading.Lock()
pristup_susedima_lock = threading.Lock()
sledeca_iteracija = threading.Condition()
class Cell(Thread):
# Cell constructor
def __init__(self, red, kol):
super().__init__()
self.red = red
self.kol = kol
self.provereni_susedi = np.zeros(8)
self.iteracija = 0
self.semafor = threading.Semaphore(0)
# Checks the neighbours and returns the total number of live neighbours
def proveriSusede(self):
zivi_susedi = 0
x = np.array([0, 0, -1, 1, -1, -1, 1, 1])
y = np.array([-1, 1, 0, 0, -1, 1, -1, 1])
for i in range(0, 8):
zivi_susedi += grid[(self.red + x[i]) % grid_size, (self.kol + y[i]) % grid_size]
for t in niti:
if t.red == (self.red + x[i]) % grid_size and t.kol == (self.kol + y[i]) % grid_size:
t.provereni_susedi[i] += 1
t.semafor.release()
return zivi_susedi
def update(self, zivi):
global grid
global zavrsene_celije
if grid[self.red, self.kol] == 1:
if zivi < 2 or zivi > 3:
grid[self.red, self.kol] = 0
else:
if zivi == 3:
grid[self.red, self.kol] = 1
zavrsena_celija_lock.acquire()
zavrsene_celije += 1
zavrsena_celija_lock.release()
self.iteracija += 1
def run(self):
global steps
global zavrsene_celije
for n in range(0, steps):
zivi_susedi = self.proveriSusede()
while True:
self.semafor.acquire()
pristup_susedima_lock.acquire()
if np.all(self.provereni_susedi == 1):
self.provereni_susedi = np.zeros(8)
pristup_susedima_lock.release()
break
pristup_susedima_lock.release()
sleep_time = random.random()
time.sleep(sleep_time)
self.update(zivi_susedi)
zavrsena_celija_lock.acquire()
if zavrsene_celije == grid_size**2:
zavrsene_celije = 0
zavrsena_celija_lock.release()
sledeca_iteracija.acquire()
sledeca_iteracija.notify_all()
sledeca_iteracija.release()
matrix_history.append(grid.copy())
else:
zavrsena_celija_lock.release()
sledeca_iteracija.acquire()
sledeca_iteracija.wait()
sledeca_iteracija.release()
niti = []
for i in range(0, grid_size):
for j in range(0, grid_size):
nit = Cell(i, j)
niti.append(nit)
for t in niti:
t.start()
for t in niti:
t.join()
anim = animate(matrix_history)
HTML(anim.to_html5_video())
# + [markdown] id="84IENsJFb9SF"
# 2. Using threads that simulate *one cell each*, synchronised with message queues (6 points)
# Each thread simulates one cell in the system. Each cell's state is kept inside the cell (the system does not rely on a shared state matrix). Cells exchange their state via message queues. For the purpose of analysing the run, a shared array of state matrices may be introduced (the i-th element of the array is the state matrix of the i-th iteration) into which the cells write their states (the cells must not read from this array!).
# + id="7B-P4WVJcG2R"
import numpy as np
import threading
from threading import Thread, currentThread
from queue import Queue
import sys
grid = copy.deepcopy(grid_generated)
grid = grid.reshape(grid_size, grid_size)
matrix_history = []
class Celija(Thread):
def __init__(self, red, kol, stanje):
super().__init__()
self.red = red
self.kol = kol
self.stanje = stanje
self.queue = Queue()
self.je_spremno = Queue()
self.celije_susedi = []
self.iter = 0
def inicijalizacija_suseda(self):
dx = [0, 1, 1, 1, 0, -1, -1, -1]
dy = [1, 1, 0, -1, -1, -1, 0, 1]
for i in range(len(dx)):
for n in niti:
if n.red == (self.red + dx[i]) % grid_size and n.kol == (self.kol + dy[i]) % grid_size:
self.celije_susedi.append(n)
def uzimanje_suseda(self):
for cs in self.celije_susedi:
cs.queue.put(self.stanje)
def provera_suseda(self):
global matrix_history
zivi = 0
for i in range(8):
item = self.queue.get()
if item == 1:
zivi += 1
self.queue.task_done()
if self.stanje == 1:
if zivi < 2 or zivi > 3:
self.stanje = 0
else:
if zivi == 3:
self.stanje = 1
for cs in self.celije_susedi:
cs.je_spremno.put(1)
for i in range(8):
self.je_spremno.get()
self.je_spremno.task_done()
def run(self):
self.inicijalizacija_suseda()
for step in range(steps):
self.iter = step
self.uzimanje_suseda()
self.provera_suseda()
matrix_history[self.iter][self.red][self.kol] = self.stanje
# ***main***
niti = []
for i in range(grid_size):
for j in range(grid_size):
c = Celija(i,j,grid[i][j])
niti.append(c)
matr_zeros = np.zeros((grid_size, grid_size))
for _ in range(steps):
m1 = copy.deepcopy(matr_zeros)
matrix_history.append(m1)
for n in niti:
n.start()
for n in niti:
n.join()
anim = animate(matrix_history)
HTML(anim.to_html5_video())
# + [markdown] id="9q0kMszE1Och"
# 3. Using processes that simulate one cell each, synchronised with message queues (6 points)
# Each process simulates one cell in the system. Each cell's state is kept inside the cell (the system does not rely on a shared state matrix). Cells exchange their state via message queues. For the purpose of analysing the run, implement a dedicated service (an extra process) to which all cells report their new state on every change (the messages contain the cell coordinates, the iteration number and the new state). The service should reconstruct and save (or return to the main program) the sequence of state matrices.
# + id="mosLDDMRnQiM"
import multiprocessing
import os
import sys
from multiprocessing import Barrier, Queue, Lock
grid = copy.deepcopy(grid_generated)
grid = grid.reshape(grid_size, grid_size)
# List that stores the history of state matrices across the iterations
matrix_history = []
cells = []
def receive_new_state(grid, queue, mat_queue):
# each queue element = (row, col, step, new_state)
matr = np.zeros((grid_size, grid_size))
for st in range(steps):
m1 = copy.deepcopy(matr)
for i in range(grid_size**2):
elem = queue.get()
m1[elem[0]][elem[1]] = elem[3]
if i == grid_size**2 - 1:
mat_queue.put(m1)
class Cell(multiprocessing.Process):
def __init__(self, row, col, state, glob_q, synq):
super().__init__()
self.row = row
self.col = col
self.state = state
self.glob_q = glob_q
self.synq = synq
self.queue = Queue()
self.neighbors = []
self.iter = 0
def obtain_neighbors(self):
global cells
dx = [0, 1, 1, 1, 0, -1, -1, -1]
dy = [1, 1, 0, -1, -1, -1, 0, 1]
for i in range(8):
for c in cells:
if (self.row + dx[i]) % grid_size == c.row and (self.col + dy[i]) % grid_size == c.col:
self.neighbors.append(c)
def change_state(self):
ones = 0
for c in self.neighbors:
c.queue.put(self.state)
for i in range(8):
num = self.queue.get()
if num == 1:
ones += 1
if self.state == 1:
if ones < 2 or ones > 3:
self.state = 0
else:
self.state = 1
else:
if ones == 3:
self.state = 1
else:
self.state = 0
tup = (self.row, self.col, self.iter, self.state)
self.glob_q.put(tup)
def run(self):
self.obtain_neighbors()
for i in range(steps):
self.iter = i
self.change_state()
self.synq.wait()
if __name__ == '__main__':
glob_queue = Queue()
matrix_queue = Queue()
synq = Barrier(grid_size**2)
for i in range(grid_size):
for j in range(grid_size):
c = Cell(i, j, grid[i][j], glob_queue, synq)
cells.append(c)
modify_matrix = multiprocessing.Process(target=receive_new_state, args=(grid, glob_queue, matrix_queue), daemon=True)
modify_matrix.start()
for c in cells:
c.start()
for c in cells:
c.join()
modify_matrix.join()
while not matrix_queue.empty():
matrix_history.append(matrix_queue.get())
anim = animate(matrix_history)
HTML(anim.to_html5_video())
# + [markdown] id="Hz8wFlu6UzLA"
# 4. Using several processes through a *process pool*, generating tasks over subsets of cells (6 points)
# Split the state matrix into N parts (where N is a configurable parameter) and generate a *task* for each part (a function call whose parameters define which part of the matrix to process). The function should return a list of cell coordinates and their new values; the matrix for the next iteration can then be built in the main program. The current values of the cells and their neighbours may be read from a shared matrix.
#
# + id="G2DYkFPaU6X9"
N = 5 # Configurable parameter
grid = copy.deepcopy(grid_generated)
grid = grid.reshape(grid_size, grid_size)
def cell_subset_func(task, grid, grid_size):
koordinate = {}
for x, y in task:
dx = [0, 1, 1, 1, 0, -1, -1, -1]
dy = [1, 1, 0, -1, -1, -1, 0, 1]
zivi_susedi = 0
for i in range(8):
zivi_susedi += grid[(x + dx[i]) % grid_size, (y + dy[i]) % grid_size]
if grid[x, y] == 1:
if zivi_susedi < 2 or zivi_susedi > 3:
koordinate[(x, y)] = 0
else:
koordinate[(x, y)] = 1
else:
if zivi_susedi == 3:
koordinate[(x,y)] = 1
else:
koordinate[(x,y)] = 0
return koordinate
if __name__ == '__main__':
matrix_history = [grid.copy()]
tasks = []
results = []
broj_delova = (grid_size**2)//N
# Add each subset of cells as a separate task
cell_subset = []
counter = 0
index = 0
for i in range(0, grid_size):
for j in range(0, grid_size):
cell_subset.append((i, j))
counter += 1
if counter == broj_delova:
temp = cell_subset
tasks.insert(index, temp)
index += 1
counter = 0
cell_subset = []
if len(cell_subset) != 0:
tasks.insert(index, cell_subset)
# Process pool
pool = Pool(cpu_count())
for i in range(steps):
results = [pool.apply(cell_subset_func, args=(task, grid, grid_size,)) for task in tasks]
for r in results:
for k, v in r.items():
grid[k[0], k[1]] = v
matrix_history.append(grid.copy())
pool.close()
pool.join()
anim = animate(matrix_history)
HTML(anim.to_html5_video())
|
Projekat1_Game_of_life.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import numpy as np
import matplotlib.pyplot as plt
from synthetic import simulate_lorenz_96, simulate_lorenz_96_nonstationary, simulate_lorenz_96_mine, generator1, generator1_nonstationary, Causal_Figure, standardise
from models.cmlp import cMLP, cMLPSparse, train_model_ista, train_unregularized
# $$A_t = A_{t-1} + C_{t-1}$$
# $$B_t = \alpha*A_{t-1}C_{t-1}$$
# $$C_t \in \{-1,\ 0,\ 1\}, \text{ chosen at random}$$
# $$\alpha = 0$$
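The update rules above can be sketched directly. This is only an illustration of the stated equations, not the project's generator: the function name is mine, and the real `generator1_nonstationary` in `synthetic.py` may differ in details such as initial conditions.

```python
import numpy as np

def simulate_abc(T=200, alpha=0.5, seed=10):
    """Illustrative simulator for the A/B/C update rules (initial state assumed zero)."""
    rng = np.random.default_rng(seed)
    A = np.zeros(T)
    B = np.zeros(T)
    C = np.zeros(T)
    C[0] = rng.choice([-1, 0, 1])
    for t in range(1, T):
        A[t] = A[t - 1] + C[t - 1]          # A_t = A_{t-1} + C_{t-1}
        B[t] = alpha * A[t - 1] * C[t - 1]  # B_t = alpha * A_{t-1} * C_{t-1}
        C[t] = rng.choice([-1, 0, 1])       # C_t drawn uniformly from {-1, 0, 1}
    return np.stack([A, B, C], axis=1)

X_sketch = simulate_abc()
```

Under these rules C drives both A and B, and A drives B only when alpha is non-zero, which is exactly the ground-truth causal structure the Granger-causality model is asked to recover.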
# For GPU acceleration
device = torch.device('cuda')
# + tags=[]
# Simulate data
seed = 10
X_np, GC_l = generator1_nonstationary(T=200,alpha=0.5,seed=seed)
X_np = standardise(X_np)
X = torch.tensor(X_np[np.newaxis], dtype=torch.float32, device=device)
# Plot data
fig, axarr = plt.subplots(1, 2, figsize=(16, 5))
axarr[0].plot(X_np)
axarr[0].set_xlabel('T')
axarr[0].set_title('Entire time series')
axarr[1].plot(X_np[:50])
axarr[1].set_xlabel('T')
axarr[1].set_title('First 50 time points')
plt.tight_layout()
plt.show()
# -
# # Still need to tune $\lambda$ and perhaps lr
# Set up model
cmlp = cMLP(X.shape[-1], lag=1, hidden=[100]).cuda(device=device)
# Train with ISTA
train_loss_list = train_model_ista(
cmlp, X, lam=0.01, lam_ridge=0.05, lr=8e-3, penalty='H', max_iter=50000,
check_every=100)
# Loss function plot
plt.figure(figsize=(8, 5))
plt.plot(100 * np.arange(len(train_loss_list)), train_loss_list)
plt.title('cMLP training')
plt.ylabel('Loss')
plt.xlabel('Training steps')
plt.tight_layout()
plt.show()
GC_est = cmlp.GC().cpu().data.numpy()
Causal_Figure(GC_est,cmlp.GC(ignore_lag=False, threshold=False).cpu().data.numpy())
print(100 * np.mean(GC_est == GC_l[0]), "%")  # percentage of correctly recovered causal-graph entries
np.save("./Neural_GC/Neural_"+str(seed)+".npy",GC_est)
|
g1_non_Neural-Copy10.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''venv'': venv)'
# language: python
# name: python385jvsc74a57bd0c34466222e6c8523639c67211ffb2711ba2cb931537670f7933c26d7e3c88ced
# ---
# # Plotting an L-Curve
# This lesson shows how to plot an L-curve to estimate the best value for the regularization parameter. First, we need to add the parent folder to the path to use all our packages
import sys
sys.path
sys.path.append("..")
# Now, we can import everything as usual
# numpy imports and settings
import numpy as np
np.set_printoptions(precision=3, suppress=True, linewidth=500, edgeitems=10)
# example imports
import example
from utils import root_utils
# plotting imports
import pylab as plt
# %matplotlib inline
# Set up logger to print in the notebook
import sys, logging
logging.getLogger("matplotlib").setLevel(logging.WARNING)
# # Initialization
# Initialize data
# +
# toy generation
nbins = 40
# Generate initial distribution and response matrix
xini, bini, Adet = example.maker.generate_initial_samples(nbins)
# Generate test distribution (what is measured in the experiment)
datatrue, data, statcov = example.maker.generate_test_samples(nbins)
# Map ROOT objects to numpy arrays
xinipy, xinipy_edges = root_utils.histogram_to_python(xini)
binipy, binipy_edges = root_utils.histogram_to_python(bini)
Adetpy_events, Adetpy_edges = root_utils.histogram_to_python(Adet)
# Data
datapy, datapy_edges = root_utils.histogram_to_python(data)
# Data "truth" distribution to test the unfolding
datatruepy, datatruepy_edges = root_utils.histogram_to_python(datatrue)
# Statistical covariance matrix
statcovpy, statcovpy_edges = root_utils.histogram_to_python(statcov)
# Turn Adetpy from number of events to probabilities (to use it with quantum annealing)
Adetpy = np.true_divide(Adetpy_events, xinipy, out=np.zeros_like(Adetpy_events, dtype=float), where=xinipy!=0)
# -
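# A note on the `where=` pattern above: numpy leaves unselected entries uninitialized unless an explicit `out=` buffer is supplied, so it is safest to pair the two. A small illustration:

```python
import numpy as np

counts = np.array([[2.0, 2.0], [3.0, 0.0]])
totals = np.array([4.0, 0.0])  # second column would divide by zero

# `where=` skips those entries; `out=` gives them a defined value (0 here)
probs = np.true_divide(counts, totals, out=np.zeros_like(counts), where=totals != 0)
# probs -> [[0.5, 0.0], [0.75, 0.0]]
```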
# # Unfold with annealing
# Unfold with simulated classical or quantum annealing over a range of regularization factors to plot the L-curve
# annealing unfolder
from unfolders.annealing import QUBOUnfolder, backends
# Choose backend
annealer = backends.SimulatedAnnealingBackend(50)
# Define factors to try
factors = np.arange(0.001, 0.5, 0.005)
print(factors.shape)
# Create arrays to store info
norms = np.zeros(shape=factors.shape)
ress = np.zeros(shape=factors.shape)
# Unfold the normal histograms for each factor
for i, factor in enumerate(factors):
    # add +1 to avoid division by zero
unfolder = QUBOUnfolder(
datatruepy+1, Adetpy, datapy, n_bits=4, weight_regularization=factor
)
# Unfold
x_unfolded = unfolder.solve(annealer)
# Get elements to plot
norm, res = unfolder.compute_energy(x_unfolded)
res = res / factor
# Store in the arrays
norms[i] = norm
ress[i] = res
# # Plot L-Curve
# +
plt.figure(figsize=(15, 7))
plt.plot(norms, ress)
plt.title(f"L-curve with {annealer}")
plt.xlabel("$log|Ax-b|^2$")
plt.ylabel("$log|Dx|^2$")
# -
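# Once `norms` and `ress` are collected, the "corner" of the L-curve — the usual choice for the regularization factor — can be estimated as the point of maximum curvature in log-log space. A minimal sketch (`lcurve_corner` is a hypothetical helper, not part of this package):

```python
import numpy as np

def lcurve_corner(norms, ress):
    """Index of the L-curve corner: the point of maximum curvature in log-log space."""
    x = np.log(np.asarray(norms, dtype=float))
    y = np.log(np.asarray(ress, dtype=float))
    # First and second derivatives via finite differences
    dx, dy = np.gradient(x), np.gradient(y)
    d2x, d2y = np.gradient(dx), np.gradient(dy)
    # Curvature of a parametric curve (x(t), y(t))
    curvature = np.abs(dx * d2y - dy * d2x) / (dx**2 + dy**2) ** 1.5
    return int(np.nanargmax(curvature))
```

The factor at that index, `factors[lcurve_corner(norms, ress)]`, is then a reasonable default for `weight_regularization`.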
lessons/plotting L curve.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Spelling Bee
# + [markdown] deletable=true editable=true
# This notebook starts our deep dive (no pun intended) into NLP by introducing sequence-to-sequence learning on Spelling Bee.
# + [markdown] deletable=true editable=true heading_collapsed=true
# ## Data Stuff
#
# We take our data set from [The CMU pronouncing dictionary](https://en.wikipedia.org/wiki/CMU_Pronouncing_Dictionary)
# + deletable=true editable=true hidden=true
# %matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *
np.set_printoptions(4)
PATH = 'data/spellbee/'
# + deletable=true editable=true hidden=true
limit_mem()
# + deletable=true editable=true hidden=true
from sklearn.model_selection import train_test_split
# + [markdown] deletable=true editable=true hidden=true
# The CMU pronouncing dictionary consists of sounds/words and their corresponding phonetic description (American pronunciation).
#
# The phonetic descriptions are a sequence of phonemes. Note that the vowels end with integers; these indicate where the stress is.
#
# Our goal is to learn how to spell these words given the sequence of phonemes.
# + [markdown] deletable=true editable=true hidden=true
# The preparation of this data set follows the same pattern we've seen before for NLP tasks.
#
# Here we iterate through each line of the file and grab each word/phoneme pair that starts with an uppercase letter.
# + deletable=true editable=true hidden=true
lines = [l.strip().split(" ") for l in open(PATH+"cmudict-0.7b", encoding='latin1')
if re.match('^[A-Z]', l)]
lines = [(w, ps.split()) for w, ps in lines]
lines[0], lines[-1]
# + [markdown] deletable=true editable=true hidden=true
# Next we're going to get a list of the unique phonemes in our vocabulary, as well as add a null "_" for zero-padding.
# + deletable=true editable=true hidden=true
phonemes = ["_"] + sorted(set(p for w, ps in lines for p in ps))
phonemes[:5]
# + deletable=true editable=true hidden=true
len(phonemes)
# + [markdown] deletable=true editable=true hidden=true
# Then we create mappings of phonemes and letters to respective indices.
#
# Our letters include the padding element "_", but also "*" which we'll explain later.
# + deletable=true editable=true hidden=true
p2i = dict((v, k) for k,v in enumerate(phonemes))
letters = "_abcdefghijklmnopqrstuvwxyz*"
l2i = dict((v, k) for k,v in enumerate(letters))
# + [markdown] deletable=true editable=true hidden=true
# Let's create a dictionary mapping words to the sequence of indices corresponding to its phonemes, and let's do it only for words between 5 and 15 characters long.
# + deletable=true editable=true hidden=true
maxlen=15
pronounce_dict = {w.lower(): [p2i[p] for p in ps] for w, ps in lines
if (5<=len(w)<=maxlen) and re.match("^[A-Z]+$", w)}
len(pronounce_dict)
# + [markdown] deletable=true editable=true hidden=true
# Aside on various approaches to python's list comprehension:
# * the first list is a typical example of a list comprehension subject to a conditional
# * the second is a list comprehension inside a list comprehension, which returns a list of lists
# * the third is similar to the second, but is read and behaves like a nested loop
# * Since there is no inner bracket, there are no lists wrapping the inner loop
# + deletable=true editable=true hidden=true
a=['xyz','abc']
[o.upper() for o in a if o[0]=='x'], [[p for p in o] for o in a], [p for o in a for p in o]
# + [markdown] deletable=true editable=true hidden=true
# Split lines into words and phonemes, convert to indices (with padding), and split into training and test sets. Note we also find the max phoneme sequence length for padding.
# + deletable=true editable=true hidden=true
maxlen_p = max([len(v) for k,v in pronounce_dict.items()])
# + deletable=true editable=true hidden=true
pairs = np.random.permutation(list(pronounce_dict.keys()))
n = len(pairs)
input_ = np.zeros((n, maxlen_p), np.int32)
labels_ = np.zeros((n, maxlen), np.int32)
for i, k in enumerate(pairs):
for j, p in enumerate(pronounce_dict[k]): input_[i][j] = p
for j, letter in enumerate(k): labels_[i][j] = l2i[letter]
# + deletable=true editable=true hidden=true
go_token = l2i["*"]
dec_input_ = np.concatenate([np.ones((n,1)) * go_token, labels_[:,:-1]], axis=1)
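# A toy version of this shift, with a made-up five-symbol alphabet, makes the decoder input explicit:

```python
import numpy as np

# Made-up five-symbol alphabet: "_" pad, three letters, "*" go token
l2i_demo = {"_": 0, "c": 1, "a": 2, "t": 3, "*": 4}
labels_demo = np.array([[1, 2, 3, 0, 0]])  # "cat" padded to length 5
go_demo = l2i_demo["*"]

# Prepend the go token and drop the last label: "cat__" -> "*cat_"
dec_demo = np.concatenate([np.ones((1, 1)) * go_demo, labels_demo[:, :-1]], axis=1)
# dec_demo -> [[4., 1., 2., 3., 0.]]
```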
# + [markdown] deletable=true editable=true hidden=true
# Sklearn's <tt>train_test_split</tt> is an easy way to split data into training and testing sets.
# + deletable=true editable=true hidden=true
(input_train, input_test, labels_train, labels_test, dec_input_train, dec_input_test
) = train_test_split(input_, labels_, dec_input_, test_size=0.1)
# + deletable=true editable=true hidden=true
input_train.shape
# + deletable=true editable=true hidden=true
labels_train.shape
# + deletable=true editable=true hidden=true
input_vocab_size, output_vocab_size = len(phonemes), len(letters)
input_vocab_size, output_vocab_size
# + [markdown] deletable=true editable=true hidden=true
# Next we proceed to build our model.
# + [markdown] deletable=true editable=true
# ## Keras code
# + deletable=true editable=true
parms = {'verbose': 0, 'callbacks': [TQDMNotebookCallback(leave_inner=True)]}
lstm_params = {}
# + deletable=true editable=true
dim = 240
# + [markdown] deletable=true editable=true
# ### Without attention
# + deletable=true editable=true
def get_rnn(return_sequences= True):
return LSTM(dim, dropout_U= 0.1, dropout_W= 0.1,
consume_less= 'gpu', return_sequences=return_sequences)
# + [markdown] deletable=true editable=true
# The model has three parts:
# * We first pass list of phonemes through an embedding function to get a list of phoneme embeddings. Our goal is to turn this sequence of embeddings into a single distributed representation that captures what our phonemes say.
# * Turning a sequence into a representation can be done using an RNN. This approach is useful because RNN's are able to keep track of state and memory, which is obviously important in forming a complete understanding of a pronunciation.
# * <tt>BiDirectional</tt> passes the original sequence through an RNN, and the reversed sequence through a different RNN and concatenates the results. This allows us to look forward and backwards.
# * We do this because in language things that happen later often influence what came before (e.g. in Spanish, "el chico, la chica" means "the boy", "the girl"; the word for "the" is determined by the gender of the noun, which comes after).
# * Finally, we arrive at a vector representation of the sequence which captures everything we need to spell it. We feed this vector into more RNN's, which are trying to generate the labels. After this, we make a classification for what each letter is in the output sequence.
# * We use <tt>RepeatVector</tt> to help our RNN remember at each point what the original word is that it's trying to translate.
#
#
# + deletable=true editable=true
inp = Input((maxlen_p,))
x = Embedding(input_vocab_size, 120)(inp)
x = Bidirectional(get_rnn())(x)
x = get_rnn(False)(x)
x = RepeatVector(maxlen)(x)
x = get_rnn()(x)
x = get_rnn()(x)
x = TimeDistributed(Dense(output_vocab_size, activation='softmax'))(x)
# + [markdown] deletable=true editable=true
# We can refer to the parts of the model before and after the point where <tt>get_rnn(False)</tt> returns a vector as the encoder and the decoder. The encoder has taken a sequence of embeddings and encoded it into a numerical vector that completely describes its input, while the decoder transforms that vector into a new sequence.
#
# Now we can fit our model
# + deletable=true editable=true
model = Model(inp, x)
# + deletable=true editable=true
model.compile(Adam(), 'sparse_categorical_crossentropy', metrics=['acc'])
# + deletable=true editable=true
hist=model.fit(input_train, np.expand_dims(labels_train,-1),
validation_data=[input_test, np.expand_dims(labels_test,-1)],
batch_size=64, **parms, nb_epoch=3)
# + deletable=true editable=true
hist.history['val_loss']
# + [markdown] deletable=true editable=true
# To evaluate, we don't want to know what percentage of letters are correct but what percentage of words are.
# + deletable=true editable=true
def eval_keras(input):
preds = model.predict(input, batch_size=128)
predict = np.argmax(preds, axis = 2)
return (np.mean([all(real==p) for real, p in zip(labels_test, predict)]), predict)
# + [markdown] deletable=true editable=true
# The accuracy isn't great.
# + deletable=true editable=true
acc, preds = eval_keras(input_test); acc
# + deletable=true editable=true
def print_examples(preds):
print("pronunciation".ljust(40), "real spelling".ljust(17),
"model spelling".ljust(17), "is correct")
for index in range(20):
ps = "-".join([phonemes[p] for p in input_test[index]])
real = [letters[l] for l in labels_test[index]]
predict = [letters[l] for l in preds[index]]
print (ps.split("-_")[0].ljust(40), "".join(real).split("_")[0].ljust(17),
"".join(predict).split("_")[0].ljust(17), str(real == predict))
# + [markdown] deletable=true editable=true
# We can see that sometimes the mistakes are completely reasonable, occasionally they're totally off. This tends to happen with the longer words that have large phoneme sequences.
#
# That's understandable; we'd expect larger sequences to lose more information in an encoding.
# + deletable=true editable=true
print_examples(preds)
# + [markdown] deletable=true editable=true heading_collapsed=true
# ### Attention model
# + [markdown] deletable=true editable=true hidden=true
# This graph demonstrates the accuracy decay for a neural translation task. With an encoding/decoding technique, larger input sequences result in lower accuracy.
#
# <img src="https://smerity.com/media/images/articles/2016/bahdanau_attn.png" width="600">
#
# This can be mitigated using an attentional model.
# + deletable=true editable=true hidden=true
import attention_wrapper; importlib.reload(attention_wrapper)
from attention_wrapper import Attention
# + [markdown] deletable=true editable=true hidden=true
# The attentional model doesn't encode into a single vector, but rather a sequence of vectors. The decoder then attends over this sequence at every step. For example, after the bi-directional RNN we have 16 vectors corresponding to each phoneme's output state. Each output state describes how each phoneme relates to the phonemes before and after it. After going through more RNN's, our goal is to transform this sequence into a sequence of length 15 so we can classify into characters.
#
# A smart way to do this is to take a weighted average of the 16 vectors for each of the 15 outputs, where each set of weights is unique to the output. For example, if character 1 only needs information from the first phoneme vector, that weight might be 1 and the others 0; if it needed information from the 1st and 2nd equally, those two might be 0.5 each.
#
# The weights for combining all the input states to produce specific outputs can be learned using an attentional model; we update the weights using SGD, and train it jointly with the encoder/decoder. Once we have the outputs, we can classify the character using softmax as usual.
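# The weighted-average idea can be sketched in plain numpy (dimensions chosen to match this example; in the real model the scores come from learned parameters, not random draws):

```python
import numpy as np

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(16, 240))  # one state per input phoneme
scores = rng.normal(size=(15, 16))       # one score per (output, input) pair

# Softmax over the input axis, so each output's weights sum to 1
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

context = weights @ enc_states           # (15, 240): one weighted average per output
```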
# + [markdown] deletable=true editable=true hidden=true
# Notice below we do not have an RNN that returns a flat vector as we did before; we have a sequence of vectors as desired. We can then pass a sequence of encoded states into our custom <tt>Attention</tt> model.
#
# This attention model also uses a technique called teacher forcing; in addition to passing the encoded hidden state, we also pass the correct answer for the previous time period. We give this information to the model because it makes it easier to train. In the beginning of training, the model will get most things wrong, and if your earlier character predictions are wrong then your later ones will likely be as well. Teacher forcing allows the model to still learn how to predict later characters, even if the earlier characters were all wrong.
# + deletable=true editable=true hidden=true
inp = Input((maxlen_p,))
inp_dec = Input((maxlen,))
emb_dec = Embedding(output_vocab_size, 120)(inp_dec)
emb_dec = Dense(dim)(emb_dec)
x = Embedding(input_vocab_size, 120)(inp)
x = Bidirectional(get_rnn())(x)
x = get_rnn()(x)
x = get_rnn()(x)
x = Attention(get_rnn, 3)([x, emb_dec])
x = TimeDistributed(Dense(output_vocab_size, activation='softmax'))(x)
# + [markdown] deletable=true editable=true hidden=true
# We can now train, passing in the decoder inputs as well for teacher forcing.
# + deletable=true editable=true hidden=true
model = Model([inp, inp_dec], x)
model.compile(Adam(), 'sparse_categorical_crossentropy', metrics=['acc'])
# + deletable=true editable=true hidden=true
hist=model.fit([input_train, dec_input_train], np.expand_dims(labels_train,-1),
validation_data=[[input_test, dec_input_test], np.expand_dims(labels_test,-1)],
batch_size=64, **parms, nb_epoch=3)
# + deletable=true editable=true hidden=true
hist.history['val_loss']
# + deletable=true editable=true hidden=true
K.set_value(model.optimizer.lr, 1e-4)
# + deletable=true editable=true hidden=true
hist=model.fit([input_train, dec_input_train], np.expand_dims(labels_train,-1),
validation_data=[[input_test, dec_input_test], np.expand_dims(labels_test,-1)],
batch_size=64, **parms, nb_epoch=5)
# + deletable=true editable=true hidden=true
np.array(hist.history['val_loss'])
# + deletable=true editable=true hidden=true
def eval_keras():
preds = model.predict([input_test, dec_input_test], batch_size=128)
predict = np.argmax(preds, axis = 2)
return (np.mean([all(real==p) for real, p in zip(labels_test, predict)]), predict)
# + [markdown] deletable=true editable=true hidden=true
# Better accuracy!
# + deletable=true editable=true hidden=true
acc, preds = eval_keras(); acc
# + [markdown] deletable=true editable=true hidden=true
# This model is certainly performing better with longer words. The mistakes it's making are reasonable, and it even successfully formed the word "partisanship".
# + deletable=true editable=true hidden=true
print("pronunciation".ljust(40), "real spelling".ljust(17),
"model spelling".ljust(17), "is correct")
for index in range(20):
ps = "-".join([phonemes[p] for p in input_test[index]])
real = [letters[l] for l in labels_test[index]]
predict = [letters[l] for l in preds[index]]
print (ps.split("-_")[0].ljust(40), "".join(real).split("_")[0].ljust(17),
"".join(predict).split("_")[0].ljust(17), str(real == predict))
# + [markdown] deletable=true editable=true heading_collapsed=true
# ## Test code for the attention layer
# + deletable=true editable=true hidden=true
nb_samples, nb_time, input_dim, output_dim = (64, 4, 32, 48)
# + deletable=true editable=true hidden=true
x = tf.placeholder(np.float32, (nb_samples, nb_time, input_dim))
# + deletable=true editable=true hidden=true
xr = K.reshape(x,(-1,nb_time,1,input_dim))
# + deletable=true editable=true hidden=true
W1 = tf.placeholder(np.float32, (input_dim, input_dim)); W1.shape
# + deletable=true editable=true hidden=true
W1r = K.reshape(W1, (1, input_dim, input_dim))
# + deletable=true editable=true hidden=true
W1r2 = K.reshape(W1, (1, 1, input_dim, input_dim))
# + deletable=true editable=true hidden=true
xW1 = K.conv1d(x,W1r,border_mode='same'); xW1.shape
# + deletable=true editable=true hidden=true
xW12 = K.conv2d(xr,W1r2,border_mode='same'); xW12.shape
# + deletable=true editable=true hidden=true
xW2 = K.dot(x, W1)
# + deletable=true editable=true hidden=true
x1 = np.random.normal(size=(nb_samples, nb_time, input_dim))
# + deletable=true editable=true hidden=true
w1 = np.random.normal(size=(input_dim, input_dim))
# + deletable=true editable=true hidden=true
sess = K.get_session()  # grab the Keras-managed TF1 session (not created earlier in this notebook)
res = sess.run(xW1, {x:x1, W1:w1})
# + deletable=true editable=true hidden=true
res2 = sess.run(xW2, {x:x1, W1:w1})
# + deletable=true editable=true hidden=true
np.allclose(res, res2)
# + deletable=true editable=true hidden=true
W2 = tf.placeholder(np.float32, (output_dim, input_dim)); W2.shape
# + deletable=true editable=true hidden=true
h = tf.placeholder(np.float32, (nb_samples, output_dim))
# + deletable=true editable=true hidden=true
hW2 = K.dot(h,W2); hW2.shape
# + deletable=true editable=true hidden=true
hW2 = K.reshape(hW2,(-1,1,1,input_dim)); hW2.shape
deeplearning2/spelling_bee_RNN.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# ## ESG Data Exploration
#
# We will explore data from different categories of ESG-indicators for countries' overall progress towards green economy, clean energy, income equality, and responsible governance.
#
# 1. Environmental Data - This category encompasses indicators related to environmental measures, GHG emissions, energy use, production and imports, natural resource conservation, etc.
#
# 2. Social Data- This category covers social topics such as education, income equality, Gini index, unemployment, fertility and contraception, gender equality in employment, social progress and freedoms, etc.
#
# 3. Governance Data- These include data related to government effectiveness, corruption, and political stability.
# ### Data Source:
#
# The Environmental, Social, and Governance data for Countries is sourced from The World Bank Data Catalog. Data have been collected annually between the years 1960-2020 with projections for 2050 for 239 countries.
#
# The dataset is classified as Public and licensed under Creative Commons Attribution 4.0.
#
# (Source: https://datacatalog.worldbank.org/search/dataset/0037651/Environment--Social-and-Governance-Data)
#
# ### Download the dataset along with the supplementary files explaining the variables.
data = pd.read_csv("../dataFiles/ESGData.csv")
display(data.info())
display(data)
display(data['Indicator Name'].nunique())
display(data['Country Code'].nunique())
# +
df1 = data.drop(['Indicator Name'], axis=1)
df1.columns = df1.columns.str.replace(' ', '_')
display(df1)
n_by_indicator = df1.groupby('Indicator_Code').count()
display(n_by_indicator.head(67))
# -
indicators = data[['Indicator Code', 'Indicator Name']].drop_duplicates()
indicators.columns = indicators.columns.str.replace(' ', '_')
display(indicators)
indicators.to_csv('ESGIndicators.csv', index=False)
# ### Reshape dataframe to have indicators as columns and values per year as rows.
# Convert dataframe from wide to long format indexed by indicator code and country code
df2 = df1.melt(value_name='value', var_name='Year', id_vars=['Indicator_Code', 'Country_Name', 'Country_Code'])
display(df2)
# +
# Pivot dataframe indexed by Country_Code and Year, and columns from Indicator_Code
df3 = df2.pivot(index=['Country_Name', 'Country_Code', 'Year'], columns='Indicator_Code', values='value').reset_index()
#display(df3)
df4 = df3[~df3['Year'].str.contains('Unnamed')]
display(df4)
display(df4.info())
# -
# ### Look for missing data
# +
# Find the frequency of missing data per column
missing_tmp = df4.isna().sum()/(len(df3))*100
missing_tmp.sort_values(ascending=False, inplace=True)
display(missing_tmp)
# Find columns with less than 50% missing data
missing = missing_tmp.to_frame(name='value')
missing_l_50 = missing[missing['value'] <= 50]
display(missing_l_50)
# -
# ### Explore related datasets for supplementary information about columns in the original dataset
# #### ESG Country dataset
countryData = pd.read_csv('../dataFiles/ESGCountry.csv')
#display(countryData)
display(countryData['Government Accounting concept'].unique())
country_df1 = countryData[['Country Code', 'Region', 'Income Group']]
country_df1.columns = country_df1.columns.str.replace(' ', '_')
display(country_df1.isna().sum())
display(country_df1['Region'].unique())
# #### Total Population Time Series for All Countries
# +
population = pd.read_csv('../dataFiles/totalPopulation.csv')
population.columns = population.columns.str.replace(' ', '_')
# Change data from wide to long format
population = population.drop(columns=['Indicator_Name', 'Indicator_Code']).melt(value_name='totalPopulation', var_name='Year', id_vars=['Country_Name', 'Country_Code'])
population.info()
# Merge population data with countryData
populationDF = population.merge(country_df1, on='Country_Code', how='left')
populationDF
# -
# #### Only retain countries in
# ### Environmental Indicators
#
# #### Energy
#
# Retain columns-
# - NY.ADJ.DRES.GN.ZS Adjusted savings: natural resources depletion (% of GNI)
#
# - EN.CLC.CDDY.XD Cooling Degree Days (projected change in number of degree Celsius)
#
# - EG.IMP.CONS.ZS Energy imports, net (% of energy use)
#
# - EG.EGY.PRIM.PP.KD Energy intensity level of primary energy (MJ/2011 PPP GDP)
#
# - EG.USE.PCAP.KG.OE Energy use (kg of oil equivalent per capita)
#
# - EG.USE.COMM.FO.ZS Fossil fuel energy consumption (% of total)
#
# - EG.FEC.RNEW.ZS Renewable energy consumption (% of total final energy consumption)
#
#
#
df4.columns
# +
# Retain relevant columns and save data in a new csv file
energyDF = df4[['Country_Name', 'Country_Code', 'Year', 'EG.EGY.PRIM.PP.KD', 'EG.IMP.CONS.ZS', 'EG.USE.PCAP.KG.OE',
'EG.USE.COMM.FO.ZS', 'EG.FEC.RNEW.ZS', 'NY.ADJ.DRES.GN.ZS', 'EN.CLC.CDDY.XD']]
# Merge population data with the energyDF
energyDF2 = energyDF.merge(populationDF, on=['Country_Code', 'Country_Name', 'Year'], how='left')
display(energyDF2.info())
display(energyDF2)
#energyDF2.to_csv('../dataFiles/energyIndicators.csv', index=False)
notebooks/ESG_Data_Exploration.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''insight'': conda)'
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('2-olympic-medals.csv')
len(df[df['Year'] == 1896])
df['Gold'].sum() + df['Silver'].sum() + df['Bronze'].sum()
df2 = pd.read_csv('olympic-medals.csv')
df2['Gold'].sum()
s3 = []
for value in df['Year'].unique():
s3.append({'year': str(value), 'value': len(df[(df['Year'] == value) & (df['Gold'] != 0)]['Country'].unique())})
s3
s4_gold = []
for value in df['Year'].unique():
gold_value = df[df['Year'] == value]['Gold'].sum() + df[df['Year'] == value]['Silver'].sum() + df[df['Year'] == value]['Bronze'].sum()
s4_gold.append({'year': value, 'value': gold_value, 'avg': gold_value / len(df[df['Year'] == value])})
s4_gold
s5 = []
for value in df['Year'].unique():
tmp = {
'year': value
}
if 'United States' in df[df['Year'] == value]['Country'].values:
tmp['usa'] = df[(df['Year'] == value) & (df['Country'] == 'United States')].iloc[0]['Gold']
else:
tmp['usa'] = 0
if 'China' in df[df['Year'] == value]['Country'].values:
tmp['china'] = df[(df['Year'] == value) & (df['Country'] == 'China')].iloc[0]['Gold']
else:
tmp['china'] = 0
s5.append(tmp)
s5
from sklearn.linear_model import LinearRegression
# +
x = df2[df2['Country'] == 'South Korea']['GDP'].values
y = df2[df2['Country'] == 'South Korea']['Weighted Count'].values
reg = LinearRegression()
reg.fit(np.array(x).reshape((-1, 1)), np.array(y).reshape((-1, 1)))
[{"gdp": i, "count": j} for i, j in zip(x, y)]
# -
a = reg.coef_[0][0]
b = reg.intercept_[0]
print(a, b)
print(13053900758 * a + b, 1000973809045 * a + b)
plt.plot([0, 1000000000000], [b, 1000000000000 * a + b], c='r')
# plt.plot([0, 3], [b, 3 * a + b], c='r')
plt.scatter(x, y, c='black')
plt.show()
s7 = {}
for value in df[df['Year'] == 2008]['Country']:
if value in df[df['Year'] == 2004]['Country'].values:
s7[value] = df[(df['Year'] == 2008) & (df['Country'] == value)].iloc[0]['Weighted Count'] - df[(df['Year'] == 2004) & (df['Country'] == value)].iloc[0]['Weighted Count']
else:
s7[value] = df[(df['Year'] == 2008) & (df['Country'] == value)].iloc[0]['Weighted Count']
s7
analysis.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png" align="left" alt="banner">
# # Notebook for generating configuration for batch subscriptions in IBM Watson OpenScale in IBM Cloud Pak for Data v4.0
#
# This notebook shows how to generate the following artefacts:
# 1. Configuration JSON needed to configure an IBM Watson OpenScale subscription.
# 2. Drift Configuration Archive
# 3. Explainability Perturbations Archive
# 4. DDLs for creating Feedback, Payload, Drifted Transactions and Explanations tables
#
# The user needs to provide the necessary inputs (where marked) and download the generated artefacts. These artefacts
# have to be then uploaded to IBM Watson OpenScale UI.
#
# PS: This notebook can only generate artefacts for one model at a time. For multiple models, this notebook needs to be run for each model separately.
#
# **Contents:**
# 1. [Installing Dependencies](#Installing-Dependencies)
# 2. [Select IBM Watson OpenScale Services](#Select-IBM-Watson-OpenScale-Services)
# 3. [Read sample scoring data](#Read-sample-scoring-data)
# 4. [Specify Model Inputs](#Specify-Model-Inputs)
# 5. [Generate Common Configuration](#Generate-Common-Configuration)
# 6. [Generate DDL for creating Scored Training data table](#Generate-DDL-for-creating-Scored-Training-data-table)
# 7. [Generate DDL for creating Feedback table](#Generate-DDL-for-creating-Feedback-table)
# 8. [Generate DDL for creating Payload table](#Generate-DDL-for-creating-Payload-table)
# 9. [Provide Spark Connection Details](#Provide-Spark-Connection-Details)
# 10. [Provide Storage Inputs](#Provide-Storage-Inputs)
# 11. [Provide Spark Resource Settings [Optional]](#Provide-Spark-Resource-Settings-[Optional])
# 12. [Provide Additional Spark Settings [Optional]](#Provide-Additional-Spark-Settings-[Optional])
# 13. [Provide Drift Parameters [Optional]](#Provide-Drift-Parameters-[Optional])
# 14. [Provide Fairness Parameters [Optional]](#Provide-Fairness-Parameters-[Optional])
# 15. [Run Configuration Job](#Run-Configuration-Job)
# 16. [Download Configuration JSON](#Download-Configuration-JSON)
# 17. [Download Drift Archive](#Download-Drift-Archive)
# 18. [Generate DDL for creating Drifted Transactions Table](#Generate-DDL-for-creating-Drifted-Transactions-table)
# 19. [Generate Perturbations csv](#Generate-Perturbations-csv)
# 20. [Generate DDL for creating Explanations Queue table](#Generate-DDL-for-creating-Explanations-Queue-table)
# 21. [Generate DDL for creating Explanations Table](#Generate-DDL-for-creating-Explanations-Table)
# ### Installing Dependencies
# +
# Note: Restart kernel after the dependencies are installed
import sys
PYTHON = sys.executable
# !$PYTHON -m pip install --no-warn-conflicts pyspark | tail -n 1
# When this notebook is to be run on a zLinux cluster,
# install scikit-learn==0.24.2 using conda before installing ibm-wos-utils
# # !conda install scikit-learn=0.24.2
# !$PYTHON -m pip install --no-warn-conflicts "ibm-wos-utils==4.0.31" | tail -n 1
# -
# ### Select IBM Watson OpenScale Services
#
# Details of the service-specific flags available:
#
# - ENABLE_QUALITY: Flag to allow generation of common configuration details needed if quality alone is selected
# - ENABLE_FAIRNESS : Flag to allow generation of fairness specific data distribution needed for configuration
# - ENABLE_MODEL_DRIFT: Flag to allow generation of Drift Archive containing relevant information for Model Drift.
# - ENABLE_DATA_DRIFT: Flag to allow generation of Drift Archive containing relevant information for Data Drift.
# - ENABLE_EXPLAINABILITY : Flag to allow generation of explainability configuration and perturbations
# +
# ----------------------------------------------------------------------------------------------------
# IBM Confidential
# OCO Source Materials
# 5737-H76
# Copyright IBM Corp. 2020, 2022
# The source code for this Notebook is not published or other-wise divested of its trade
# secrets, irrespective of what has been deposited with the U.S.Copyright Office.
# ----------------------------------------------------------------------------------------------------
VERSION = "hive-2.0.2"
# Version history
# hive-2.0.2 : Upgrade ibm-wos-utils to 4.0.31
# hive-2.0.1 : Make notebook compatible for zLinux environments; Upgrade ibm-wos-utils to 4.0.25
# hive-2.0 : Upgrade ibm-wos-utils to 4.0.24
# 2.0 : Added support for fairness and explainability
# 1.0 : Initial release
# +
# Optional Input: Keep an identifiable name. This id is used to append to various table creation DDLs.
# A random UUID is used if this is not present.
# NOTEBOOK_RUN_ID = "some_identifiable_name"
NOTEBOOK_RUN_ID = None
# Service Configuration Flags
ENABLE_QUALITY = True
ENABLE_MODEL_DRIFT = True
ENABLE_DATA_DRIFT = True
ENABLE_EXPLAINABILITY = True
ENABLE_FAIRNESS = True
RUN_JOB = ENABLE_QUALITY or ENABLE_MODEL_DRIFT or ENABLE_DATA_DRIFT or ENABLE_EXPLAINABILITY or ENABLE_FAIRNESS
# -
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName(
"Common Configuration Generation").getOrCreate()
# ### Read sample scoring data
#
# A sample of the scoring data is required to infer the schema of the complete data, so the size of the sample should be chosen accordingly.
#
# Additionally, the sample scoring data should have the following fields:
# 1. Feature Columns
# 2. Label/Target Column
# 3. Prediction Column (with same data type as the label column)
# 4. Probability Column (an array of model probabilities for all the class labels. Not required for regression models)
# **STORAGE_FORMAT** : One of ["csv", "parquet", "orc"]
#
# **Note:**
# 1. Please select the format in which your training data is stored in Hive. The same format will be used to generate the various CREATE DDLs in this notebook.
# 2. ORC format is not supported for zLinux environments
STORAGE_FORMAT = "csv"
# STORAGE_FORMAT = "parquet"
# STORAGE_FORMAT = "orc"
#
# The sample data should be of type `pyspark.sql.dataframe.DataFrame`. The cell below shows examples of:
# - how to read a CSV file from the local system into a Pyspark Dataframe.
# - how to read parquet files in a directory from the local system into a Pyspark Dataframe.
# - how to read orc files in a directory from the local system into a Pyspark Dataframe. [Not supported for zLinux environments]
#
# It is important to choose the same storage format as the training data; otherwise there could be schema mismatches.
# +
if STORAGE_FORMAT == "csv":
# Load a csv or a directory containing csv files as PySpark DataFrame
# spark_df = spark.read.csv("/path/to/dir/containing/csv/files", header=True, inferSchema=True)
pass
elif STORAGE_FORMAT == "parquet":
# Load a directory containing parquet files as PySpark DataFrame
# spark_df = spark.read.parquet("/path/to/dir/containing/parquet/files")
pass
elif STORAGE_FORMAT == "orc":
# Load a directory containing orc files as PySpark DataFrame
# spark_df = spark.read.orc("/path/to/dir/containing/orc/files")
pass
else:
# Load data from any source which matches the schema of the training data
pass
spark_df.printSchema()
# -
# ### Specify Model Inputs
# #### Specify the Model Type
#
# - Specify **binary** if the model is a binary classifier.
# - Specify **multiclass** if the model is a multi-class classifier.
# - Specify **regression** if the model is a regressor.
MODEL_TYPE = "binary"
# MODEL_TYPE = "multiclass"
# MODEL_TYPE = "regression"
# #### Provide Column Details
#
# To proceed with this notebook, the following information is required:
#
# - **LABEL_COLUMN**: The column which contains the target field (also known as label column or the class label).
# - **PREDICTION_COLUMN**: The column containing the model output. This should be of the same data type as the label column.
# - **PROBABILITY_COLUMN**: The column (of type array) containing the model probabilities for all the possible prediction outcomes. This is not required for regression models.
LABEL_COLUMN = "<label_column>"
PREDICTION_COLUMN = "<model prediction column>"
PROBABILITY_COLUMN = "<model probability column. ignored in case of regression models>"
# Based on the sample data and key columns provided above, the notebook will deduce the feature columns and the categorical columns. They will be printed in the output of this cell. If you wish to make changes to them, you can do so in the subsequent cell.
# +
from pyspark.sql.types import BooleanType, StringType
feature_columns = spark_df.columns.copy()
feature_columns.remove(LABEL_COLUMN)
feature_columns.remove(PREDICTION_COLUMN)
if MODEL_TYPE != "regression":
feature_columns.remove(PROBABILITY_COLUMN)
print("Feature Columns : {}".format(feature_columns))
categorical_columns = [f.name for f in spark_df.schema.fields if isinstance(f.dataType, (BooleanType, StringType)) and f.name in feature_columns]
print("Categorical Columns : {}".format(categorical_columns))
# +
config_info = {
"problem_type": MODEL_TYPE,
"label_column": LABEL_COLUMN,
"prediction": PREDICTION_COLUMN,
"probability": PROBABILITY_COLUMN
}
config_info["feature_columns"] = feature_columns
config_info["categorical_columns"] = categorical_columns
# -
from ibm_wos_utils.joblib.utils.notebook_utils import validate_config_info
validate_config_info(config_info)
# ### Generate Common Configuration
#
# IBM Watson OpenScale requires two additional fields - a unique identifier for each record in your feedback/payload tables ("scoring_id") and a timestamp field ("scoring_timestamp") denoting when that record entered the table. These fields are automatically added in the common configuration.
#
# Please make sure that these fields are present in the respective tables.
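# As an illustration, one way to populate these two fields before loading records into Hive is sketched below. It assumes each record is a plain Python dict; adapt it to however you stage your data.

```python
import uuid
from datetime import datetime, timezone

def add_openscale_fields(record):
    """Attach the two fields OpenScale expects on every payload/feedback row:
    a unique 'scoring_id' and a 'scoring_timestamp' marking when the record
    entered the table."""
    record = dict(record)  # copy so the caller's dict is not mutated
    record["scoring_id"] = str(uuid.uuid4())
    record["scoring_timestamp"] = datetime.now(timezone.utc).isoformat()
    return record

row = add_openscale_fields({"age": 34, "prediction": "No Risk"})
print(row["scoring_id"], row["scoring_timestamp"])
```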
# +
from ibm_wos_utils.joblib.utils.notebook_utils import generate_schemas
common_config = config_info.copy()
common_configuration = generate_schemas(spark_df, common_config)
config_json = {}
config_json["common_configuration"] = common_configuration
config_json["batch_notebook_version"] = VERSION
# -
# ### Generate DDL for creating Scored Training data table
# +
from ibm_wos_utils.joblib.utils.ddl_utils import generate_scored_training_table_ddl
# Database Name where Scored Training Table should be created. If None or "", the default database is used.
SCORED_TRAINING_DATABASE_NAME = None
# Path to the Scored Training Data in HDFS. Leave as None if you wish to load data later.
path_to_hdfs_directory = None
# Additional Table Properties that are required for table creation.
# Please set the table property `skip.header.line.count` as shown
# if the scored training data is stored as CSV and it contains the header row.
# Leave as None if no additional properties are required.
# table_properties = {
# "skip.header.line.count": 1
# }
table_properties = None
create_ddl = generate_scored_training_table_ddl(config_json, database_name=SCORED_TRAINING_DATABASE_NAME,\
table_suffix=NOTEBOOK_RUN_ID, stored_as=STORAGE_FORMAT,\
path_to_hdfs_directory=path_to_hdfs_directory,\
table_properties=table_properties)
print(create_ddl)
# -
# ### Generate DDL for creating Feedback table
#
# +
from ibm_wos_utils.joblib.utils.ddl_utils import generate_feedback_table_ddl
# Database Name where Feedback Table should be created. If None or "", the default database is used.
FEEDBACK_DATABASE_NAME = None
# Path to the Feedback Data in HDFS. Leave as None if you wish to load data later.
path_to_hdfs_directory = None
# Additional Table Properties that are required for table creation.
# Please set the table property `skip.header.line.count` as shown
# if the feedback data is stored as CSV and it contains the header row.
# Leave as None if no additional properties are required.
# table_properties = {
# "skip.header.line.count": 1
# }
table_properties = None
if ENABLE_QUALITY:
create_ddl = generate_feedback_table_ddl(config_json, database_name=FEEDBACK_DATABASE_NAME,\
table_suffix=NOTEBOOK_RUN_ID, stored_as=STORAGE_FORMAT,\
path_to_hdfs_directory=path_to_hdfs_directory,\
table_properties=table_properties)
print(create_ddl)
# -
# ### Generate DDL for creating Payload table
#
# _Please make sure that the `scoring_timestamp` column in your payload data does not have NULL values_
# +
from ibm_wos_utils.joblib.utils.ddl_utils import generate_payload_table_ddl
# Database Name where Payload Table should be created. If None or "", the default database is used.
PAYLOAD_DATABASE_NAME = None
# Path to the Payload Data in HDFS. Leave as None if you wish to load data later.
path_to_hdfs_directory = None
# Additional Table Properties that are required for table creation.
# Please set the table property `skip.header.line.count` as shown
# if the payload data is stored as CSV and it contains the header row.
# Leave as None if no additional properties are required.
# table_properties = {
# "skip.header.line.count": 1
# }
table_properties = None
if ENABLE_MODEL_DRIFT or ENABLE_DATA_DRIFT or ENABLE_EXPLAINABILITY or ENABLE_FAIRNESS:
create_ddl = generate_payload_table_ddl(config_json, database_name=PAYLOAD_DATABASE_NAME,\
table_suffix=NOTEBOOK_RUN_ID, stored_as=STORAGE_FORMAT,\
path_to_hdfs_directory=path_to_hdfs_directory,\
table_properties=table_properties)
print(create_ddl)
# -
# ### Provide Spark Connection Details
#
# 1. If your job is going to run on a Spark cluster as part of an IBM Analytics Engine instance on IBM Cloud Pak for Data, enter the following details:
#
# - **IAE_SPARK_DISPLAY_NAME**: Display Name of the Spark instance in IBM Analytics Engine
# - **IAE_SPARK_JOBS_ENDPOINT**: Spark Jobs Endpoint for IBM Analytics Engine
# - **IBM_CPD_VOLUME**: IBM Cloud Pak for Data storage volume name
# - **IBM_CPD_USERNAME**: IBM Cloud Pak for Data username
# - **IBM_CPD_APIKEY**: IBM Cloud Pak for Data API key
#
#
# 2. If your job is going to run on a Spark cluster as part of a Remote Hadoop Ecosystem, enter the following details:
#
# - **SPARK_MANAGER_ENDPOINT**: Endpoint URL where the Spark Manager Application is running
# - **SPARK_MANAGER_USERNAME**: Username to connect to Spark Manager Application
# - **SPARK_MANAGER_PASSWORD**: Password to connect to Spark Manager Application
# #### Credentials Block for Spark in IAE
# +
from ibm_wos_utils.joblib.utils.constants import SparkType
IAE_SPARK_DISPLAY_NAME = "<Display Name of the Spark instance in IBM Analytics Engine>"
IAE_SPARK_JOBS_ENDPOINT = "<Spark Jobs Endpoint for IBM Analytics Engine>"
IBM_CPD_VOLUME = "<IBM Cloud Pak for Data storage volume name>"
IBM_CPD_USERNAME = "<IBM Cloud Pak for Data username>"
IBM_CPD_APIKEY = "<IBM Cloud Pak for Data API key>"
# Credentials Block for Spark in IAE
credentials = {
"connection": {
"display_name": IAE_SPARK_DISPLAY_NAME,
"endpoint": IAE_SPARK_JOBS_ENDPOINT,
"location_type": SparkType.IAE_SPARK.value,
"volume": IBM_CPD_VOLUME
},
"credentials": {
"username": IBM_CPD_USERNAME,
"apikey": IBM_CPD_APIKEY
}
}
# -
# #### Credentials Block for Spark in Remote Hadoop Ecosystem
# +
from ibm_wos_utils.joblib.utils.constants import SparkType
SPARK_MANAGER_ENDPOINT = "<Endpoint URL where Spark Manager Application is running>"
SPARK_MANAGER_USERNAME = "<Username to connect to Spark Manager Application>"
SPARK_MANAGER_PASSWORD = "<Password to connect to Spark Manager Application>"
# Credentials Block for Spark in Remote Hadoop Ecosystem
credentials = {
"connection": {
"endpoint": SPARK_MANAGER_ENDPOINT,
"location_type": SparkType.REMOTE_SPARK.value
},
"credentials": {
"username": SPARK_MANAGER_USERNAME,
"password": <PASSWORD>
}
}
# -
# ### Provide Storage Inputs
#
# Enter Hive details.
# - **HIVE_METASTORE_URI**: Thrift URI of the Hive Metastore to connect to
# - **TRAINING_DATABASE_NAME**: Name of the database in Hive that has the training table/view
# - **TRAINING_TABLE_NAME**: Name of the table in Hive that has the scored training data
#
HIVE_METASTORE_URI = "<Thrift URI for Hive Metastore to connect to>"
TRAINING_DATABASE_NAME = "<Name of the Database in Hive that has training table/view>"
TRAINING_TABLE_NAME = "<Name of the Table in HIve that has the scored training data>"
# +
storage_details = {
"type": "hive",
"connection": {
"metastore_url": HIVE_METASTORE_URI,
}
}
tables = [
{
"database": TRAINING_DATABASE_NAME,
"table": TRAINING_TABLE_NAME,
"type": "training"
}
]
# -
# ### Provide Spark Resource Settings [Optional]
#
# Configure how much of your Spark cluster's resources this job can consume. Leave the variable `spark_settings` as `None` or `{}` if no customisation is required.
"""
spark_settings = {
# max_num_executors: Maximum Number of executors to launch for this session
"max_num_executors": 2,
# min_executors: Minimum Number of executors to launch for this session
"min_executors": 1,
# executor_cores: Number of cores to use for each executor
"executor_cores": 2,
# executor_memory: Amount of memory (in GBs) to use per executor process
"executor_memory": 1,
#driver_cores: Number of cores to use for the driver process
"driver_cores": 2,
# driver_memory: Amount of memory (in GBs) to use for the driver process
"driver_memory": 1
}
"""
spark_settings = None
# ### Provide Additional Spark Settings [Optional]
#
# Any other Spark property that can be set via **SparkConf** can be provided in the next cell. These properties are sent to the Spark cluster verbatim. Leave the variable `conf` as `None` or `{}` if no additional property is required.
#
# - [A list of available properties for Spark 2.4.6](https://spark.apache.org/docs/2.4.6/configuration.html#available-properties)
# +
"""
conf = {
"spark.yarn.maxAppAttempts": 1
}
"""
conf = None
# -
# ### Provide Drift Parameters [Optional]
#
# Provide the optional drift parameters in this cell. Leave the variable `drift_parameters` as `None` or `{}` if no additional parameter is required.
# +
"""
drift_parameters = {
"model_drift": {
# enable_drift_model_tuning - Controls whether there will be Hyper-Parameter
# Optimisation in the Drift Detection Model. Default: False
"enable_drift_model_tuning": True,
# max_bins - Specify the maximum number of categories in categorical columns.
# Default: OpenScale will determine an approximate value. Use this only in cases
# where OpenScale approximation fails.
"max_bins": 10,
},
"data_drift": {
# enable_two_col_learner - Enable learning of data constraints on two column
# combinations. Default: True
"enable_two_col_learner": True,
# categorical_unique_threshold - Used to discard categorical columns with a
# large number of unique values relative to total rows in the column.
# Should be between 0 and 1. Default: 0.8
"categorical_unique_threshold": 0.7,
# max_distinct_categories - Used to discard categorical columns with a large
# absolute number of unique categories. Also, used for not learning
# categorical-categorical constraint, if potential combinations of two columns
# are more than this number. Default: 100000
"max_distinct_categories": 10000
        # user_overrides - Used to override drift constraint learning to selectively learn
        # constraints on feature columns. It's a list of configurations, each specifying
        # whether to learn distribution and/or range constraints on a given set of columns.
        # The first configuration mentioning a given column takes precedence.
#
# "constraint_type" can have two possible values : single|double - signifying
# if this configuration is for single column or two column constraint learning.
#
# "learn_distribution_constraint" : True|False - signifying whether to learn
# distribution constraint for given config or not.
#
# "learn_range_constraint" : True|False - signifying whether to learn range
# constraint for given config or not. Only applicable to numerical feature columns.
#
# "features" : [] - provides either a list of feature columns to be governed by
# given configuration for constraint learning.
        # It's a list of strings containing feature column names if "constraint_type" is "single".
        # It's a list of lists of strings containing feature column names if "constraint_type" is
        # "double". If only one column name is provided, all of the two column constraints
# involving this column will be dictated by given configuration during constraint learning.
# This list is case-insensitive.
#
# In the example below, first config block says do not learn distribution and range single
# column constraints for features "MARITAL_STATUS", "PROFESSION", "IS_TENT" and "age".
# Second config block says do not learn distribution and range two column constraints
# where "IS_TENT", "PROFESSION", and "AGE" are one of the two columns. Whereas, specifically,
# do not learn two column distribution and range constraint on combination of "MARITAL_STATUS"
# and "PURCHASE_AMOUNT".
"user_overrides": [
{
"constraint_type": "single",
"learn_distribution_constraint": False,
"learn_range_constraint": False,
"features": [
"MARITAL_STATUS",
"PROFESSION",
"IS_TENT",
"age"
]
},
{
"constraint_type": "double",
"learn_distribution_constraint": False,
"learn_range_constraint": False,
"features": [
[
"IS_TENT"
],
[
"MARITAL_STATUS"
"PURCHASE_AMOUNT"
],
[
"PROFESSION"
],
[
"AGE"
]
]
}
]
}
}
"""
drift_parameters = None
# -
# ### Provide Fairness Parameters [REQUIRED if `ENABLE_FAIRNESS` is set to True]
#
# Provide the fairness parameters in this cell. Leave the variable `fairness_parameters` as `None` or `{}` if fairness is not to be enabled.
# +
"""
fairness_parameters = {
"features": [
{
"feature": "<The fairness attribute name>", # The feature on which the fairness check is to be done
"majority": [<majority groups/ranges for categorical/numerical columns respectively>],
"minority": [<minority groups/ranges for categorical/numerical columns respectively>],
"threshold": <The threshold value between 0 and 1> [OPTIONAL, default value is 0.8]
}
],
"class_label": LABEL_COLUMN,
"favourable_class": [<favourable classes/ranges for classification/regression models repectively>],
"unfavourable_class": [<unfavourable classes/ranges for classification/regression models repectively>],
"min_records": <The minimum number of records on which the fairness check is to be done> [OPTIONAL]
}
"""
fairness_parameters = None
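# To make the template above concrete, here is a hypothetical, filled-in example for a binary model with a numerical `AGE` fairness attribute. Every feature name, range, class and threshold in it is illustrative only; substitute your model's actual values.

```python
# Hypothetical, filled-in fairness parameters. All names, ranges and
# classes below are illustrative stand-ins, not real model values.
example_fairness_parameters = {
    "features": [
        {
            "feature": "AGE",                   # numerical column: majority/minority are ranges
            "majority": [[26, 74]],
            "minority": [[18, 25], [75, 90]],
            "threshold": 0.95,                  # optional; defaults to 0.8
        }
    ],
    "class_label": "RISK",                      # stands in for LABEL_COLUMN
    "favourable_class": ["No Risk"],
    "unfavourable_class": ["Risk"],
    "min_records": 100,                         # optional
}
print(sorted(example_fairness_parameters))
```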
# -
# ### Run Configuration Job
# +
SHOW_PROGRESS = True
arguments = {
"batch_notebook_version": VERSION,
"common_configuration" : common_configuration,
"enable_data_drift": ENABLE_DATA_DRIFT,
"enable_model_drift": ENABLE_MODEL_DRIFT,
"enable_explainability": ENABLE_EXPLAINABILITY,
"enable_fairness": ENABLE_FAIRNESS,
"monitoring_run_id": NOTEBOOK_RUN_ID,
"storage": storage_details,
"tables": tables,
"show_progress": SHOW_PROGRESS
}
if ENABLE_MODEL_DRIFT or ENABLE_DATA_DRIFT:
arguments["drift_parameters"] = drift_parameters
if ENABLE_FAIRNESS:
if fairness_parameters is None or fairness_parameters == {}:
raise ValueError("Fairness parameters are required if fairness is enabled.")
arguments["fairness_parameters"] = fairness_parameters
job_params = {
"arguments": arguments,
"spark_settings": spark_settings,
"dependency_zip": [],
"conf": conf
}
# -
# The following cell will run the Configuration job. If `SHOW_PROGRESS` is `True`, it will also print the status of the job in the output section. Please wait for the status to be **FINISHED**.
#
# A successful job status goes through the following values:
# 1. STARTED
# 2. Model Drift Configuration STARTED
# 3. Data Drift Configuration STARTED
# - Data Drift: Summary Stats Calculated
# - Data Drift: Column Stats calculated.
# - Data Drift: (number/total) CategoricalDistributionConstraint columns processed
# - Data Drift: (number/total) NumericRangeConstraint columns processed
# - Data Drift: (number/total) CategoricalNumericRangeConstraint columns processed
# - Data Drift: (number/total) CatCatDistributionConstraint columns processed
# 4. Explainability Configuration STARTED
# 5. Explainability Configuration COMPLETED
# 6. Fairness Configuration STARTED
# 7. Fairness Configuration COMPLETED
# 8. FINISHED
#
# If at any time there is a failure, you will see a **FAILED** status with an exception trace.
# +
from ibm_wos_utils.joblib.clients.engine_client import EngineClient
from ibm_wos_utils.common.batch.jobs.configuration import Configuration
from ibm_wos_utils.joblib.utils.notebook_utils import JobStatus
if RUN_JOB:
job_name="Configuration_Job"
client = EngineClient(credentials=credentials)
job_response = client.engine.run_job(job_name=job_name, job_class=Configuration,
job_args=job_params, background=True)
# Print Job Status.
if SHOW_PROGRESS:
JobStatus(client, job_response).print_status()
# -
# If `SHOW_PROGRESS` is `False`, you can run the below cell to check the job status at any point manually.
if not SHOW_PROGRESS and RUN_JOB:
job_id = job_response.get("id")
print(client.engine.get_job_status(job_id))
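# If you prefer to block until the job reaches a terminal state, a generic polling helper along these lines can wrap the status call in the cell above. The `get_status` callable is an assumption for illustration; in practice you would pass e.g. `lambda: client.engine.get_job_status(job_id)`.

```python
import time

def wait_for_job(get_status, poll_interval=10, timeout=3600):
    """Poll a status callable until the job reaches a terminal state.

    get_status: a zero-argument callable returning the current status string,
    e.g. lambda: client.engine.get_job_status(job_id).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if "FINISHED" in status:
            return status
        if "FAILED" in status:
            raise RuntimeError("Configuration job failed: {}".format(status))
        time.sleep(poll_interval)
    raise TimeoutError("Job did not finish within {} seconds".format(timeout))

# Example with a stubbed status sequence instead of a live client
statuses = iter(["STARTED", "Data Drift Configuration STARTED", "FINISHED"])
print(wait_for_job(lambda: next(statuses), poll_interval=0))
```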
# ### Download Configuration JSON
# +
import json
from ibm_wos_utils.joblib.utils.notebook_utils import create_download_link
if RUN_JOB:
configuration = client.engine.get_file(job_response.get(
"output_file_path") + "/configuration.json")
config=json.loads(json.loads(configuration).get("configuration"))
else:
config = config_json
display(create_download_link(config, "config"))
# -
# ### Download Drift Archive
#
# +
from tempfile import NamedTemporaryFile
from ibm_wos_utils.joblib.utils.notebook_utils import create_download_link
if ENABLE_MODEL_DRIFT or ENABLE_DATA_DRIFT:
drift_archive = client.engine.get_file(job_response.get(
"output_file_path") + "/drift_configuration")
with NamedTemporaryFile() as tf:
tf.write(drift_archive)
tf.flush()
drift_archive = spark.sparkContext.sequenceFile(tf.name).collect()[0][1]
# -
# If `ENABLE_MODEL_DRIFT` is `True` and the `MODEL_TYPE` is not `regression`, the cell below checks the training quality of the drift detection model, which helps detect drops in accuracy. If the trained drift detection model does not meet the quality standards, a message is displayed saying that drops in accuracy cannot be detected. By default, the drift model is generated without any hyperparameter optimisation, i.e. `enable_drift_model_tuning` is `False`. You can try running the configuration job again after setting `enable_drift_model_tuning` to `True` in the `drift_parameters` above.
# +
from ibm_wos_utils.joblib.utils.notebook_utils import check_for_ddm_quality
if ENABLE_MODEL_DRIFT and (MODEL_TYPE != "regression"):
check_for_ddm_quality(drift_archive)
# -
if ENABLE_MODEL_DRIFT or ENABLE_DATA_DRIFT:
display(create_download_link(drift_archive, "drift", client))
# ### Generate DDL for creating Drifted Transactions table
#
# +
from ibm_wos_utils.joblib.utils.ddl_utils import generate_drift_table_ddl
# Database Name where Drifted Transactions Table should be created. If None or "", the default database is used.
DRIFT_DATABASE_NAME = None
if ENABLE_MODEL_DRIFT or ENABLE_DATA_DRIFT:
print(generate_drift_table_ddl(drift_archive, database_name=DRIFT_DATABASE_NAME, table_suffix=NOTEBOOK_RUN_ID))
# -
# ### Generate Perturbations csv
# +
import pandas as pd
from ibm_wos_utils.explainability.utils.perturbations import Perturbations
if ENABLE_EXPLAINABILITY:
perturbations=Perturbations(training_stats=config.get("explainability_configuration"), problem_type=MODEL_TYPE)
perturbs_df = perturbations.generate_perturbations()
perturbs_df.to_csv("perturbations.csv",index=False)
# -
# The perturbations required for explainability were written to the file perturbations.csv in the step above.
# Score these perturbations against your model and provide the scoring output as a dataframe with **probability** and **prediction** columns.
#
# Note: For regression models, the probability column is not required.
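# The scoring step depends entirely on your model, so the sketch below only illustrates the expected shape of the output. `my_model_predict`, the class labels, and the feature values are hypothetical stand-ins for your real scoring call; it stages the `scored_perturbations.csv` file that the next cell reads.

```python
import pandas as pd

# Hypothetical perturbations (in practice, read "perturbations.csv").
perturbs_df = pd.DataFrame({"age": [25, 40, 60], "income": [30000.0, 55000.0, 80000.0]})

def my_model_predict(df):
    """Stand-in for the user's model scoring function: returns one
    probability vector per row. Replace with your real model call."""
    return [[0.7, 0.3] if a < 50 else [0.2, 0.8] for a in df["age"]]

probabilities = my_model_predict(perturbs_df)
class_labels = ["No Risk", "Risk"]  # order must match the probability vectors
scored_perturbations = pd.DataFrame({
    "probability": probabilities,
    # prediction = class label with the highest probability
    "prediction": [class_labels[max(range(len(p)), key=p.__getitem__)] for p in probabilities],
})
scored_perturbations.to_csv("scored_perturbations.csv", index=False)
print(scored_perturbations)
```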
# +
from ibm_wos_utils.joblib.utils.notebook_utils import create_archive
if ENABLE_EXPLAINABILITY:
# Load a csv output of scored perturbations as pandas DataFrame
scored_perturbations = pd.read_csv("scored_perturbations.csv")
display(create_archive(scored_perturbations.to_csv(index=False), "perturbations.csv", "perturbations"))
# -
# ### Generate DDL for creating Explanations Queue table [Optional]
#
# Provide details for creating a separate Explanations Queue table. IBM Watson OpenScale will generate explanations for all the transactions in this table. Alternatively, the payload table created above can also be used for this purpose.
# +
from ibm_wos_utils.joblib.utils.ddl_utils import generate_payload_table_ddl
# Database Name where Explanations Queue Table should be created. If None or "", the default database is used.
EXPLANATIONS_QUEUE_DATABASE_NAME = None
# Path to the Explanations Queue Data in HDFS. Leave as None if you wish to load data later.
path_to_hdfs_directory = None
# Additional Table Properties that are required for table creation.
# Please set the table property `skip.header.line.count` as shown
# if the payload data is stored as CSV and it contains the header row.
# Leave as None if no additional properties are required.
# table_properties = {
# "skip.header.line.count": 1
# }
table_properties = None
if ENABLE_EXPLAINABILITY:
create_ddl = generate_payload_table_ddl(config_json, database_name=EXPLANATIONS_QUEUE_DATABASE_NAME,\
table_prefix="explanations_queue",table_suffix=NOTEBOOK_RUN_ID, stored_as=STORAGE_FORMAT,\
path_to_hdfs_directory=path_to_hdfs_directory,\
table_properties=table_properties)
print(create_ddl)
# -
# ### Generate DDL for creating Explanations Table
# +
from ibm_wos_utils.joblib.utils.ddl_utils import generate_explanations_table_ddl
# Database Name where Explanations Table should be created. If None or "", the default database is used.
EXPLANATIONS_DATABASE_NAME = None
# Path to the Explanations table data in HDFS. If not provided, Hive will determine it automatically.
path_to_hdfs_directory = None
if ENABLE_EXPLAINABILITY:
# For zLinux environments, please uncomment this to generate DDL.
# print(generate_explanations_table_ddl(database_name=EXPLANATIONS_DATABASE_NAME, table_suffix=NOTEBOOK_RUN_ID, path_to_hdfs_directory=path_to_hdfs_directory, stored_as="csv"))
# For all other environments, please use this to generate DDL.
print(generate_explanations_table_ddl(database_name=EXPLANATIONS_DATABASE_NAME, table_suffix=NOTEBOOK_RUN_ID, path_to_hdfs_directory=path_to_hdfs_directory))
# -
# #### Authors
# Developed by [<NAME>](mailto:<EMAIL>), [<NAME>](mailto:<EMAIL>)
|
Cloud Pak for Data/Batch Support/4.0/Configuration generation for OpenScale batch subscription - Hive.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Building Ownership Matrices Example
# +
import pyblp
import numpy as np
np.set_printoptions(threshold=100)
pyblp.__version__
# -
# In this example, we'll use the IDs created in the [building ID data example](build_id_data.ipynb) to build a stack of standard ownership matrices. We'll delete the first data row to demonstrate what ownership matrices should look like when markets have varying numbers of products.
id_data = pyblp.build_id_data(T=2, J=5, F=4)
id_data = id_data[1:]
standard_ownership = pyblp.build_ownership(id_data)
standard_ownership
# We'll now modify the default $\kappa$ specification so that the elements associated with firm IDs ``0`` and ``1`` are equal to ``0.5``.
def kappa_specification(f, g):
if f == g:
return 1
return 0.5 if f < 2 and g < 2 else 0
# Finally, we'll use this specification to build a stack of alternative ownership matrices.
alternative_ownership = pyblp.build_ownership(id_data, kappa_specification)
alternative_ownership
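# Conceptually, each market's ownership matrix has element $O_{jk} = \kappa(f_j, f_k)$, where $f_j$ is the firm of product $j$, and the default $\kappa$ is an indicator for whether two products share a firm. The following minimal numpy sketch (not pyblp's actual implementation) makes that construction explicit for a single market:

```python
import numpy as np

def build_market_ownership(firm_ids, kappa=None):
    """Build one market's ownership matrix: O[j, k] = kappa(f_j, f_k).

    With the default kappa, this is the standard matrix of ones and zeros
    indicating whether products j and k belong to the same firm.
    """
    if kappa is None:
        kappa = lambda f, g: 1.0 if f == g else 0.0
    firm_ids = np.asarray(firm_ids)
    J = firm_ids.size
    return np.array([[kappa(firm_ids[j], firm_ids[k]) for k in range(J)]
                     for j in range(J)])

# Products 0 and 1 share firm 0; product 2 belongs to firm 1.
print(build_market_ownership([0, 0, 1]))
```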
|
docs/notebooks/api/build_ownership.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Simulation 3: Bayesian Logistic Regression
# This is code to support simulations in the paper "Log-concave sampling: Metropolis-Hastings algorithms are fast!"
#
# by <NAME>, <NAME>, <NAME>, <NAME>
# +
# load necessary packages
import numpy as np
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# load our packages
import mcmc
import plot_basic # basic setting for plots in matplotlib
# -
# ### Density of Bayesian logistic regression
# $$
# p(\theta) \propto e^{-f(\theta)}
# $$
# $$
# f(\theta) = - Y^\top X \theta + \sum_{i=1}^n \log(1+e^{\theta^\top X_i}) + \frac{\lambda}{2} \|\Sigma_X^{1/2} \theta\|^2 \\
# \nabla f(\theta) = - X^\top Y + \sum_{i=1}^n \frac{X_i}{1 + e^{-\theta^\top X_i}} + \lambda \Sigma_X \theta
# $$
# +
# define the logconcave function: Bayesian logistic regression
def f(theta, X, Y, lam=1.):
"""
theta is of dimension nb_exps * d
X is of dimension n * d
Y is of dimension n
"""
# h is of dimension nb_exps * n
h = theta.dot(X.T)
return - h.dot(Y) + np.sum(np.log(1. + np.exp(h)), axis=1) + lam / 2 * np.mean(h * h, axis = 1)
# density of f up to a constant
def density_f(theta, X, Y, lam=1.):
return np.exp(-f(theta, X, Y, lam))
# gradient of f up to a constant
def grad_f(theta, X, Y, lam):
"""
theta is of dimension nb_exps * d
X is of dimension n * d
Y is of dimension n
"""
# h is of dimension nb_exps * n
h = theta.dot(X.T)
# grad is of dimension nb_exps * d
return - X.T.dot(Y) + np.sum(X.T/(1+np.exp(-h))[:, np.newaxis, :], axis=2) + 1./Y.shape[0]*lam * h.dot(X)
# Direct sampling from this density is difficult because of its complicated form.
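# The `mcmc` module used below is the authors' companion package. For reference, the unadjusted Langevin algorithm (ULA) it implements iterates $\theta_{k+1} = \theta_k - \eta \nabla f(\theta_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim N(0, I)$. Below is a minimal standalone sketch on a standard Gaussian target, where $f(\theta) = \|\theta\|^2/2$ so $\nabla f(\theta) = \theta$; the step size is an illustrative choice, not the paper's tuning.

```python
import numpy as np

def ula_step(theta, grad_f, eta):
    """One unadjusted Langevin step: a gradient step plus Gaussian noise."""
    noise = np.random.randn(*theta.shape)
    return theta - eta * grad_f(theta) + np.sqrt(2.0 * eta) * noise

np.random.seed(0)
theta = np.random.randn(500, 2)  # 500 parallel chains in 2 dimensions
for _ in range(1000):
    # target N(0, I): f(t) = ||t||^2 / 2, so grad f(t) = t
    theta = ula_step(theta, lambda t: t, eta=0.05)

# After burn-in, chains should be approximately standard normal
print(np.mean(theta, axis=0), np.var(theta, axis=0))
```

# Note that ULA's stationary distribution is slightly biased by the discretization (the variance here is a bit above 1); MALA removes this bias with an accept/reject step.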
# +
np.random.seed(123456)
# dimensions
ds = np.array([2])
# number of iterations
nb_iters = 4000
# number of samples
nb_exps = 2500
# number of training data samples
n = 50
error_ula_all = np.zeros((ds.shape[0], nb_iters, 1))
error_ula_02_all = np.zeros((ds.shape[0], nb_iters, 1))
error_ula_002_all = np.zeros((ds.shape[0], nb_iters, 1))
error_mala_all = np.zeros((ds.shape[0], nb_iters, 1))
error_rwmh_all = np.zeros((ds.shape[0], nb_iters, 1))
# save to the following path: modify before use
save_path = '/Users/yuansi.chen/UCB/STAT_2017/Writing/MALA/mala_code_public/mala_public/fig/'
# -
for j, d in enumerate(ds):
# tuning parameter for the prior
lam = 0.1*3.*d/np.pi**2
# data generation
X = np.random.binomial(1, 0.5, (n, d)) * 2 -1
X = X/np.sqrt((X * X).mean(axis=1))[:, None]
theta_true = np.ones(d)
h_true = X.dot(theta_true)
r_true = 1./(1.+np.exp(-h_true))
Y = np.random.binomial(1, r_true)
_, S, _ = np.linalg.svd(X)
    # np.linalg.svd returns singular values in descending order
    S_max = S[0]
    S_min = S[-1]
L = S_max * (lam + 0.25*n)
m = S_min * (lam)
kappa = L/m
print "start simulation for logistic regression: d = %d, m = %0.2f, L = %0.2f, kappa = %0.2f" %(d, m, L, kappa)
# this error metric computes the distance between the sample and the true theta
def error_mean(x_curr):
e_mean = np.mean(np.abs(np.mean(x_curr, axis = 0) - theta_true))
return np.array([e_mean])
# initialization
init_distr = 1./np.sqrt(L)*np.random.randn(nb_exps, d)
def grad_f_local(x):
return grad_f(x, X, Y, lam)
def f_local(x):
return density_f(x, X, Y, lam)
error_ula_all[j], x_ula = mcmc.ula(init_distr, grad_f_local, error_mean, epsilon=1.0, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
error_ula_02_all[j], x_ula_02 = mcmc.ula(init_distr, grad_f_local, error_mean, epsilon=0.2, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
error_ula_002_all[j], x_ula_002 = mcmc.ula(init_distr, grad_f_local, error_mean, epsilon=0.1, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
error_mala_all[j], x_mala = mcmc.mala(init_distr, grad_f_local, f_local, error_mean, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
error_rwmh_all[j], x_rwmh = mcmc.rwmh(init_distr, f_local, error_mean, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
def plot_run(k, j=0):
plt.figure(figsize=(12., 8.))
plt.semilogy(np.arange(nb_iters), error_ula_all[j, :, k], '.-', color='orange', alpha = 0.5, label='ULA large')
plt.semilogy(np.arange(nb_iters), error_ula_02_all[j, :, k], 'r.-', alpha = 0.5, label='ULA')
# plt.semilogy(np.arange(nb_iters), error_ula_002_all[j, :, k], '.-', color='purple', alpha = 0.5, label='ULA small')
plt.semilogy(np.arange(nb_iters), error_mala_all[j, :, k], 'b.-', alpha = 0.5, label='MALA')
plt.semilogy(np.arange(nb_iters), error_rwmh_all[j, :, k], 'g-', alpha = 0.5, label='MRW')
#plt.semilogy(np.arange(nb_iters), np.exp(-0.005*np.arange(nb_iters)), 'b')
#plt.title("d = %d" %ds[j])
plt.xlabel("Iteration")
plt.ylabel("Error")
plt.ylim(0.1, 1.2)
plt.legend()
plt.savefig(save_path+'logistic_mean.pdf')
plt.show()
plot_run(0)
# +
# dimensions
ds = np.array([2])
# number of iterations
nb_iters = 10000
# number of samples
nb_exps = 100
# number of training data samples
n = 50
trace_ula_all = np.zeros((ds.shape[0], nb_iters, nb_exps))
trace_ula_02_all = np.zeros((ds.shape[0], nb_iters, nb_exps))
trace_ula_002_all = np.zeros((ds.shape[0], nb_iters, nb_exps))
trace_mala_all = np.zeros((ds.shape[0], nb_iters, nb_exps))
trace_rwmh_all = np.zeros((ds.shape[0], nb_iters, nb_exps))
# -
for j, d in enumerate(ds):
# tuning parameter for the prior
lam = 0.1*3.*d/np.pi**2
# data generation
X = np.random.binomial(1, 0.5, (n, d)) * 2 -1
X = X/np.sqrt((X * X).mean(axis=1))[:, None]
theta_true = np.ones(d)
h_true = X.dot(theta_true)
r_true = 1./(1.+np.exp(-h_true))
Y = np.random.binomial(1, r_true)
_, S, _ = np.linalg.svd(X)
    # np.linalg.svd returns singular values in descending order
    S_max = S[0]
    S_min = S[-1]
L = S_max * (lam + 0.25*n)
m = S_min * (lam)
kappa = L/m
print "start simulation for logistic regression, traceplot: d = %d, m = %0.2f, L = %0.2f, kappa = %0.2f" %(d, m, L, kappa)
# define the error_metric to be the first coordinate to extract samples
def error_first_coordinate(x_curr):
return x_curr[:, 0]
# initialization
init_distr = 1./np.sqrt(L)*np.random.randn(nb_exps, d)
def grad_f_local(x):
return grad_f(x, X, Y, lam)
def f_local(x):
return density_f(x, X, Y, lam)
trace_ula_all[j], x_ula = mcmc.ula(init_distr, grad_f_local, error_first_coordinate, epsilon=1.0, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
trace_ula_02_all[j], x_ula_02 = mcmc.ula(init_distr, grad_f_local, error_first_coordinate, epsilon=0.2, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
trace_ula_002_all[j], x_ula_002 = mcmc.ula(init_distr, grad_f_local, error_first_coordinate, epsilon=0.1, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
trace_mala_all[j], x_mala = mcmc.mala(init_distr, grad_f_local, f_local, error_first_coordinate, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
trace_rwmh_all[j], x_rwmh = mcmc.rwmh(init_distr, f_local, error_first_coordinate, kappa=kappa, L=L, nb_iters=nb_iters, nb_exps=nb_exps)
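The sampler implementations live in the external `mcmc` module, which is not shown in this notebook. For reference, a single MALA step for a target density proportional to exp(-f) can be sketched as below. This is a minimal single-chain sketch under stated assumptions; `mcmc.mala` presumably vectorizes over the `nb_exps` chains and derives the step size from `kappa` and `L`.

```python
import numpy as np

def mala_step(x, grad_f, f, step, rng=np.random):
    """One Metropolis-adjusted Langevin step for a target density exp(-f).

    Proposal: y = x - step * grad_f(x) + sqrt(2 * step) * noise, followed by
    a Metropolis accept/reject correction. Returns (new_state, accepted).
    """
    noise = np.sqrt(2.0 * step) * rng.randn(*x.shape)
    y = x - step * grad_f(x) + noise

    def log_q(b, a):
        # log q(b | a), dropping the Gaussian normalizer common to both terms
        d = b - a + step * grad_f(a)
        return -np.sum(d * d) / (4.0 * step)

    log_alpha = -f(y) + f(x) + log_q(x, y) - log_q(y, x)
    if np.log(rng.rand()) < log_alpha:
        return y, True
    return x, False
```

For a standard Gaussian target, `f = lambda z: 0.5 * np.sum(z * z)` and `grad_f = lambda z: z`; iterating `mala_step` then produces samples whose mean and variance converge to 0 and 1.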
# +
# traceplot
plt.figure(figsize=(12., 8.))
nb_plot_iters = 10000
plt.axhline(1, color='k', ls='dashed')
for i in range(10):
if i == 0:
plt.plot(trace_ula_02_all[0, :nb_plot_iters, i], color='r', alpha = 0.5, label='ULA')
else:
plt.plot(trace_ula_02_all[0, :nb_plot_iters, i], color='r', alpha = 0.5)
for i in range(10):
if i == 0:
plt.plot(trace_mala_all[0, :nb_plot_iters, i], color='b', alpha = 0.5, label='MALA')
else:
plt.plot(trace_mala_all[0, :nb_plot_iters, i], color='b', alpha = 0.5)
for i in range(10):
if i == 0:
plt.plot(trace_rwmh_all[0, :nb_plot_iters, i], color='g', alpha = 0.5, label='MRW')
else:
plt.plot(trace_rwmh_all[0, :nb_plot_iters, i], color='g', alpha = 0.5)
plt.xlabel("Iteration")
plt.ylabel("Sample value on the 1st coordinate")
plt.legend()
plt.savefig(save_path+'traceplot_logistic_regression_all.pdf')
plt.show()
# -
# ### Autocorrelation plot
# +
nb_lags = 6000
nb_burnin = 1000
autocorr_ula_02 = np.ones(nb_lags)
autocorr_mala = np.ones(nb_lags)
autocorr_rwmh = np.ones(nb_lags)
ith_run = 2
for i in range(1, nb_lags):
autocorr_ula_02[i] = np.corrcoef(trace_ula_02_all[0, nb_burnin+i:, ith_run], trace_ula_02_all[0, nb_burnin:-i, ith_run])[0, 1]
autocorr_mala[i] = np.corrcoef(trace_mala_all[0, nb_burnin+i:, ith_run], trace_mala_all[0, nb_burnin:-i, ith_run])[0, 1]
autocorr_rwmh[i] = np.corrcoef(trace_rwmh_all[0, nb_burnin+i:, ith_run], trace_rwmh_all[0, nb_burnin:-i, ith_run])[0, 1]
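The autocorrelations computed above can be condensed into a single effective-sample-size estimate via the integrated autocorrelation time. The sketch below truncates the sum at the first non-positive autocorrelation, a crude simplification of Geyer's initial-sequence rule, and is an addition for illustration rather than something used elsewhere in this notebook.

```python
import numpy as np

def effective_sample_size(trace, max_lag=None):
    """Estimate ESS = n / tau, with tau = 1 + 2 * sum of autocorrelations.

    The sum is truncated at the first non-positive autocorrelation, a simple
    stand-in for more careful truncation rules.
    """
    trace = np.asarray(trace, dtype=float)
    n = len(trace)
    if max_lag is None:
        max_lag = n // 2
    x = trace - trace.mean()
    # full autocorrelation function, normalized so acf[0] == 1
    acf = np.correlate(x, x, mode='full')[n - 1:] / (x.var() * n)
    tau = 1.0
    for k in range(1, max_lag):
        if acf[k] <= 0:
            break
        tau += 2.0 * acf[k]
    return n / tau
```

For example, `effective_sample_size(trace_mala_all[0, nb_burnin:, ith_run])` would summarize the MALA run plotted below; a strongly autocorrelated chain yields a much smaller ESS than an uncorrelated one of the same length.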
# +
# autocorrelation plot
plt.figure(figsize=(12., 8.))
plt.axhline(0, color='k', ls='dashed')
markers_on = np.arange(0, nb_lags, 200)
plt.plot(autocorr_ula_02, '-rs', alpha = 0.5, label='ULA', markevery=markers_on,markersize=12.)
plt.plot(autocorr_mala, '-b*', alpha = 0.5, label='MALA', markevery=markers_on,markersize=16.)
plt.plot(autocorr_rwmh, '-g^', alpha = 0.5, label='MRW', markevery=markers_on,markersize=12.)
plt.xlabel("Lag")
plt.ylabel("Autocorrelation")
plt.legend()
plt.savefig(save_path+'autocorrelationplot_logistic_regression_all.pdf')
plt.show()
# -
# mcmc_simulation3_bayesian_logistic_regression_public.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Simulation of an fMRI Timecourse BOLD Response for a Single Voxel
# +
#import needed functions
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from nipy.modalities.fmri.hemodynamic_models import spm_hrf, compute_regressor
from statsmodels.tsa.arima_process import arma_generate_sample
from nipy.modalities.fmri.design_matrix import make_dmtx
from nipy.modalities.fmri.experimental_paradigm import (EventRelatedParadigm,
BlockParadigm)
# +
# frame times
TR= 2
nscans_trial= 30
trial_duration=TR*nscans_trial # 60
ntrials_run= 12
nscans_run=nscans_trial*ntrials_run # 12*30=360
run_duration= TR*nscans_run # 360*2=720
frametimes = np.linspace(0, (nscans_run - 1) * TR, nscans_run)
frametimes
# -
# +
# experimental paradigm
conditions = ['a', 'b', 'c','c', 'a', 'b', 'c', 'a', 'b', 'c','a','b']
#conditions = ['c', 'c', 'c', 'c','c', 'c', 'c', 'c', 'c', 'c', 'c','c']
#onsets = [30, 70, 100, 10, 30, 90, 30, 40, 60]
onsets=np.arange(0,run_duration,trial_duration)
motion = np.cumsum(np.random.randn(nscans_run, 6), 0)
add_reg_names = ['tx', 'ty', 'tz', 'rx', 'ry', 'rz']
paradigm = EventRelatedParadigm(conditions, onsets)
#hrf_model = 'fir'
#design_matrix = make_dmtx(frametimes,
# paradigm,
# hrf_model=hrf_model,
# drift_model='polynomial',
# drift_order=3,
# fir_delays=np.arange(1, 6))
hrf_model = 'spm'
design_matrix = make_dmtx(frametimes,
paradigm,
hrf_model=hrf_model,
drift_model='polynomial',
drift_order=3,
add_regs=motion,
add_reg_names=add_reg_names)
# plot the matrix
fig = plt.figure(figsize=(14, 16))
ax = plt.subplot(1, 1, 1)
design_matrix.show(ax=ax)
ax.set_title('Event-related design matrix', fontsize=12)
# +
# generate the errors (AR1)
ar = [1, 0.5]
ma = [1, 0.0]
epsilon_ar1 = arma_generate_sample(ar, ma, nscans_run)
# generate the data
noise_scale = 0.3
activation_param = [90.0,60.0,30.0]
motion_param = [.00,.00,.00,.00,.00,.00] # ignore
drift_param = [0.0,0.0,0.0] # ignore
constant_param= [0.0] # zero mean centered
# concatenate coefficients
Beta = np.array([activation_param + motion_param + drift_param + constant_param])
# generate BOLD response timecourse for a single voxel
y=np.dot(Beta,design_matrix.matrix.T) + epsilon_ar1*noise_scale
# -
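`make_dmtx` performs the HRF convolution internally. To make the generative model explicit, the same kind of task regressor can be sketched by hand: place impulses at the onsets and convolve with a canonical double-gamma HRF. This is a simplified stand-in for nipy's `spm_hrf`; the gamma shape parameters below are conventional choices, not values taken from this notebook.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak_shape=6.0, under_shape=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma HRF: a positive peak minus a scaled undershoot."""
    return gamma.pdf(t, peak_shape) - ratio * gamma.pdf(t, under_shape)

def task_regressor(onsets_s, n_scans, tr=2.0):
    """Impulses at the onset times (in seconds), convolved with the HRF."""
    box = np.zeros(n_scans)
    box[(np.asarray(onsets_s) // tr).astype(int)] = 1.0
    hrf = double_gamma_hrf(np.arange(0, 32, tr))  # 32 s HRF support
    return np.convolve(box, hrf)[:n_scans]
```

The resulting regressor peaks roughly 5 s after each onset, which is why event-related designs space trials well beyond the TR.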
# ## Plot the Timecourse
# +
#plot timecourse for one voxel
plt.figure(figsize=(16,4))
plt.axis([0, nscans_run, -1., 3.])
plt.plot(y.T,'--')
plt.xlabel('Scan',fontsize=12)
plt.ylabel('BOLD response',fontsize=12)
for i in onsets//TR: # onset times (s) -> scan indices
plt.plot(np.arange(i,i+nscans_trial),y.T[i:i+nscans_trial])
# -
# ## plot the trials
# +
# conditions a
time_onsets = onsets[[i for i, j in enumerate(conditions) if j == 'a']]//2 # scan indices
y_cond_a=y.T[time_onsets[0]:time_onsets[0]+nscans_trial]
for i in time_onsets[1:]:
y_cond_a=np.hstack((y_cond_a,y.T[i:i+nscans_trial]))
plt.figure(figsize=(12,9))
plt.axis([0, nscans_run, -1., 3.])
plt.subplot(2,1,1)
plt.plot(y_cond_a)
plt.xlabel('Scans',fontsize=14)
plt.ylabel('BOLD response',fontsize=12)
plt.subplot(2,1,2)
plt.plot(np.mean(y_cond_a,1))
plt.xlabel('Scans ',fontsize=12)
plt.ylabel('BOLD ',fontsize=12)
plt.tight_layout()
# +
# conditions b
time_onsets = onsets[[i for i, j in enumerate(conditions) if j == 'b']]//2 # scan indices
y_cond_b=y.T[time_onsets[0]:time_onsets[0]+nscans_trial]
for i in time_onsets[1:]:
y_cond_b=np.hstack((y_cond_b,y.T[i:i+nscans_trial]))
plt.figure(figsize=(12,9))
plt.axis([0, nscans_run, -1., 3.])
plt.subplot(2,1,1)
plt.plot(y_cond_b)
plt.xlabel('Scans',fontsize=14)
plt.ylabel('BOLD response',fontsize=12)
plt.subplot(2,1,2)
plt.plot(np.mean(y_cond_b,1))
plt.xlabel('Scans ',fontsize=12)
plt.ylabel('BOLD ',fontsize=12)
plt.tight_layout()
# +
# conditions c
time_onsets = onsets[[i for i, j in enumerate(conditions) if j == 'c']]//2 # scan indices
y_cond_c=y.T[time_onsets[0]:time_onsets[0]+nscans_trial]
for i in time_onsets[1:]:
y_cond_c=np.hstack((y_cond_c,y.T[i:i+nscans_trial]))
plt.figure(figsize=(12,9))
plt.axis([0, nscans_run, -1., 3.])
plt.subplot(2,1,1)
plt.plot(y_cond_c)
plt.xlabel('Scans',fontsize=14)
plt.ylabel('BOLD response',fontsize=12)
plt.subplot(2,1,2)
plt.plot(np.mean(y_cond_c,1))
plt.xlabel('Scans ',fontsize=12)
plt.ylabel('BOLD ',fontsize=12)
plt.tight_layout()
# -
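The three per-condition cells above repeat the same slice-and-stack logic. A small helper, sketched here reusing the notebook's variable conventions (`cond` and `tr` are illustrative parameter names), avoids the duplication:

```python
import numpy as np

def epoch_stack(y, onsets_s, conditions, cond, nscans_trial, tr=2):
    """Stack per-trial segments of a 1-D timecourse for one condition.

    Onsets are in seconds and y is sampled every `tr` seconds, as in this
    notebook. Returns an (nscans_trial, n_trials) array and its trial average.
    """
    y = np.asarray(y).ravel()
    scan_idx = [int(o // tr) for o, c in zip(onsets_s, conditions) if c == cond]
    segs = np.column_stack([y[i:i + nscans_trial] for i in scan_idx])
    return segs, segs.mean(axis=1)
```

For example, `segs_a, mean_a = epoch_stack(y, onsets, conditions, 'a', nscans_trial)` would reproduce `y_cond_a` and its mean in one call.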
#
# BOLDResponseTimecourse.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ZVk0N_dEdpIz" colab_type="text"
# # JFT Notes
#
# This file was forked from an official Tensorflow notebook, components.ipynb ([GitHub](https://github.com/tensorflow/tfx/blob/master/docs/tutorials/tfx/components.ipynb), [Colab](https://colab.research.google.com/github/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_interactive.ipynb)).
#
# ## 2019-11 TFX talk
# **[TFX tech talk](https://youtu.be/TA5kbFgeUlk?t=1562)**, YouTube, 2019-11
#
# The presenter demos [a Jupyter notebook on Colab](https://colab.research.google.com/github/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_interactive.ipynb) showing interactive development via InteractiveContext:
#
# ```python
# from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
# ```
#
# That file is what was used to seed this file.
#
# ## 2020-03 Google Cloud AI Platform Pipelines
#
# At TensorFlow Dev Summit 2020 they beta'd Google Cloud AI Platform Pipelines (the Pipelines is the new part). [The tech press release](https://cloud.google.com/blog/products/ai-machine-learning/introducing-cloud-ai-platform-pipelines) is the best primer doc.
#
# Pipelines is TFX end-to-end pipelines running on Google Cloud AI Platform, as a new feature of the platform. Where TFX is platform agnostic, Pipelines is TFX deployed specifically atop Google Cloud AI Platform, e.g. Kubeflow happens to be the executor. It's provided as an easy-to-use fully managed service.
#
# Intro and demo talk, [TFX: Production ML with TensorFlow in 2020 (TF Dev Summit '20)](https://youtu.be/I3MjuFGmJrg?list=PLQY2H8rRoyvzuJw20FG82Lgm2SZjTdIXU). The PM intro pitch is only the first 4 minutes.
#
#
# + [markdown] colab_type="text" id="wdeKOEkv1Fe8"
# ##### Copyright © 2019 The TensorFlow Authors.
# + id="Ka6rbmXIdtTt" colab_type="code" colab={}
# + cellView="form" colab_type="code" id="c2jyGuiG1gHr" colab={}
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="23R0Z9RojXYW"
# # TFX Component Tutorial
#
# ***A Component-by-Component Introduction to TensorFlow Extended (TFX)***
# + [markdown] colab_type="text" id="LidV2qsXm4XC"
# Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab".
#
# <div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
# <td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/components">
# <img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td>
# <td><a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/components.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
# <td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/components.ipynb">
# <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
# </table></div>
# + [markdown] colab_type="text" id="KAD1tLoTm_QS"
#
# This Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).
#
# It covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.
#
# When you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.
#
# Note: This notebook and its associated APIs are **experimental** and are
# in active development. Major changes in functionality, behavior, and
# presentation are expected.
# + [markdown] colab_type="text" id="sfSQ-kX-MLEr"
# ## Background
# This notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.
#
# Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful when doing development of your own pipelines as a lightweight development environment, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.
#
# ### Orchestration
#
# In a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
#
# ### Metadata
#
# In a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as on your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the `/tmp` directory on the Jupyter notebook or Colab server.
# + [markdown] colab_type="text" id="2GivNBNYjb3b"
# ## Setup
# First, we install and import the necessary packages, set up paths, and download data.
# + [markdown] colab_type="text" id="MZOYTt1RW4TK"
# ### Install TFX
#
# Note: Because of package updates, you must use the button at the bottom of the output of this cell to restart the runtime. Following restart, please rerun this cell.
# + colab_type="code" id="S4SQA7Q5nej3" colab={}
# !pip install "tfx>=0.21.1,<0.22" "tensorflow>=2.1,<2.2" "tensorboard>=2.1,<2.2"
# + [markdown] colab_type="text" id="N-ePgV0Lj68Q"
# ### Import packages
# We import necessary packages, including standard TFX component classes.
# + colab_type="code" id="YIqpWK9efviJ" colab={}
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
import tfx
from tfx.components import CsvExampleGen
from tfx.components import Evaluator
from tfx.components import ExampleValidator
from tfx.components import Pusher
from tfx.components import ResolverNode
from tfx.components import SchemaGen
from tfx.components import StatisticsGen
from tfx.components import Trainer
from tfx.components import Transform
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import metadata
from tfx.orchestration import pipeline
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.types import Channel
from tfx.types.standard_artifacts import Model
from tfx.types.standard_artifacts import ModelBlessing
from tfx.utils.dsl_utils import external_input
# %load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
# + [markdown] colab_type="text" id="wCZTHRy0N1D6"
# Let's check the library versions.
# + colab_type="code" id="eZ4K18_DN2D8" colab={}
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
# + [markdown] colab_type="text" id="ufJKQ6OvkJlY"
# ### Set up pipeline paths
# + colab_type="code" id="ad5JLpKbf6sN" colab={}
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')
# This is the path where your model will be pushed for serving.
_serving_model_dir = os.path.join(
tempfile.mkdtemp(), 'serving_model/taxi_simple')
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
# + [markdown] colab_type="text" id="n2cMMAbSkGfX"
# ### Download example data
# We download the example dataset for use in our TFX pipeline.
#
# The dataset we're using is the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. The columns in this dataset are:
#
# <table>
# <tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
# <tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
# <tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
# <tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
# <tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
# <tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
# </table>
#
# With this dataset, we will build a model that predicts the `tips` of a trip.
# + colab_type="code" id="BywX6OUEhAqn" colab={}
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
# + [markdown] colab_type="text" id="blZC1sIQOWfH"
# Take a quick look at the CSV file.
# + colab_type="code" id="c5YPeLPFOXaD" colab={}
# !head {_data_filepath}
# + [markdown] colab_type="text" id="QioyhunCImwE"
# *Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.*
# + [markdown] colab_type="text" id="8ONIE_hdkPS4"
# ### Create the InteractiveContext
# Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
# + colab_type="code" id="0Rh6K5sUf9dd" colab={}
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
# + [markdown] colab_type="text" id="HdQWxfsVkzdJ"
# ## Run TFX components interactively
# In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts.
# + [markdown] colab_type="text" id="L9fwt9gQk3BR"
# ### ExampleGen
#
# The `ExampleGen` component is usually at the start of a TFX pipeline. It will:
#
# 1. Split data into training and evaluation sets (by default, 2/3 training + 1/3 eval)
# 2. Convert data into the `tf.Example` format
# 3. Copy data into the `_tfx_root` directory for other components to access
#
# `ExampleGen` takes as input the path to your data source. In our case, this is the `_data_root` path that contains the downloaded CSV.
#
# Note: In this notebook, we can instantiate components one-by-one and run them with `InteractiveContext.run()`. By contrast, in a production setting, we would specify all the components upfront in a `Pipeline` to pass to the orchestrator (see the "Export to Pipeline" section).
# + colab_type="code" id="PyXjuMt8f-9u" colab={}
example_gen = CsvExampleGen(input=external_input(_data_root))
context.run(example_gen)
# + [markdown] colab_type="text" id="OqCoZh7KPUm9"
# Let's examine the output artifacts of `ExampleGen`. This component produces two artifacts, training examples and evaluation examples:
#
# Note: The `%%skip_for_export` cell magic will omit the contents of this cell in the exported pipeline file (see the "Export to pipeline" section). This is useful for notebook-specific code that you don't want to run in an orchestrated pipeline.
# + colab_type="code" id="880KkTAkPeUg" colab={}
# %%skip_for_export
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
# + [markdown] colab_type="text" id="J6vcbW_wPqvl"
# We can also take a look at the first three training examples:
# + colab_type="code" id="H4XIXjiCPwzQ" colab={}
# %%skip_for_export
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
# + [markdown] colab_type="text" id="2gluYjccf-IP"
# Now that `ExampleGen` has finished ingesting the data, the next step is data analysis.
# + [markdown] colab_type="text" id="csM6BFhtk5Aa"
# ### StatisticsGen
# The `StatisticsGen` component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.
#
# `StatisticsGen` takes as input the dataset we just ingested using `ExampleGen`.
# + colab_type="code" id="MAscCCYWgA-9" colab={}
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
# + [markdown] colab_type="text" id="HLI6cb_5WugZ"
# After `StatisticsGen` finishes running, we can visualize the outputted statistics. Try playing with the different plots!
# + colab_type="code" id="tLjXy7K6Tp_G" colab={}
# %%skip_for_export
context.show(statistics_gen.outputs['statistics'])
# + [markdown] colab_type="text" id="HLKLTO9Nk60p"
# ### SchemaGen
#
# The `SchemaGen` component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.
#
# `SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default.
# + colab_type="code" id="ygQvZ6hsiQ_J" colab={}
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
# + [markdown] colab_type="text" id="zi6TxTUKXM6b"
# After `SchemaGen` finishes running, we can visualize the generated schema as a table.
# + colab_type="code" id="Ec9vqDXpXeMb" colab={}
# %%skip_for_export
context.show(schema_gen.outputs['schema'])
# + [markdown] colab_type="text" id="kZWWdbA-m7zp"
# Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.
#
# To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen).
# + [markdown] colab_type="text" id="V1qcUuO9k9f8"
# ### ExampleValidator
# The `ExampleValidator` component detects anomalies in your data, based on the expectations defined by the schema. It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.
#
# `ExampleValidator` will take as input the statistics from `StatisticsGen`, and the schema from `SchemaGen`.
#
# By default, it compares the statistics from the evaluation split to the schema from the training split.
# + colab_type="code" id="XRlRUuGgiXks" colab={}
example_validator = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator)
# + [markdown] colab_type="text" id="855mrHgJcoer"
# After `ExampleValidator` finishes running, we can visualize the anomalies as a table.
# + colab_type="code" id="TDyAAozQcrk3" colab={}
# %%skip_for_export
context.show(example_validator.outputs['anomalies'])
# + [markdown] colab_type="text" id="znMoJj60ybZx"
# In the anomalies table, we can see that the `company` feature takes on new values that were not in the training split. This information can be used to debug model performance, understand how your data evolves over time, and identify data errors.
#
# In our case, this anomaly is innocuous, so we move on to the next step of transforming the data.
# + [markdown] colab_type="text" id="JPViEz5RlA36"
# ### Transform
# The `Transform` component performs feature engineering for both training and serving. It uses the [TensorFlow Transform](https://www.tensorflow.org/tfx/transform/get_started) library.
#
# `Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.
#
# Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)). First, we define a few constants for feature engineering:
#
# Note: The `%%writefile` cell magic will save the contents of the cell as a `.py` file on disk. This allows the `Transform` component to load your code as a module.
#
#
# + colab_type="code" id="PuNSiUKb4YJf" colab={}
_taxi_constants_module_file = 'taxi_constants.py'
# + colab_type="code" id="HPjhXuIF4YJh" colab={}
# %%skip_for_export
# %%writefile {_taxi_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]
CATEGORICAL_FEATURE_KEYS = [
'trip_start_hour', 'trip_start_day', 'trip_start_month',
'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
'dropoff_community_area'
]
DENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = [
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude'
]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = [
'payment_type',
'company',
]
# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
def transformed_name(key):
return key + '_xf'
# + [markdown] colab_type="text" id="Duj2Ax5z4YJl"
# Next, we write a `preprocessing_fn` that takes in raw data as input, and returns transformed features that our model can train on:
# + colab_type="code" id="4AJ9hBs94YJm" colab={}
_taxi_transform_module_file = 'taxi_transform.py'
# + colab_type="code" id="MYmxxx9A4YJn" colab={}
# %%skip_for_export
# %%writefile {_taxi_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import taxi_constants
_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY
_transformed_name = taxi_constants.transformed_name
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in _DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[_transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in _VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[_transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE)
for key in _BUCKET_FEATURE_KEYS:
outputs[_transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT,
always_return_num_quantiles=False)
for key in _CATEGORICAL_FEATURE_KEYS:
outputs[_transformed_name(key)] = _fill_in_missing(inputs[key])
# Was this passenger a big tipper?
taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
tips = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_transformed_name(_LABEL_KEY)] = tf.where(
tf.math.is_nan(taxi_fare),
tf.cast(tf.zeros_like(taxi_fare), tf.int64),
# Test if the tip was > 20% of the fare.
tf.cast(
tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))
return outputs
def _fill_in_missing(x):
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
# + [markdown] colab_type="text" id="wgbmZr3sgbWW"
# Now, we pass in this feature engineering code to the `Transform` component and run it to transform your data.
# + colab_type="code" id="jHfhth_GiZI9" colab={}
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform)
# + [markdown] colab_type="text" id="fwAwb4rARRQ2"
# Let's examine the output artifacts of `Transform`. This component produces two types of outputs:
#
# * `transform_graph` is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
# * `transformed_examples` represents the preprocessed training and evaluation data.
# + colab_type="code" id="SClrAaEGR1O5" colab={}
transform.outputs
# + [markdown] colab_type="text" id="vyFkBd9AR1sy"
# Take a peek at the `transform_graph` artifact. It points to a directory containing three subdirectories.
# + colab_type="code" id="5tRw4DneR3i7" colab={}
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
# + [markdown] colab_type="text" id="4fqV54CIR6Pu"
# The `transformed_metadata` subdirectory contains the schema of the preprocessed data. The `transform_fn` subdirectory contains the actual preprocessing graph. The `metadata` subdirectory contains the schema of the original data.
#
# We can also take a look at the first three transformed examples:
# + colab_type="code" id="pwbW2zPKR_S4" colab={}
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
# + [markdown] colab_type="text" id="q_b_V6eN4f69"
# After the `Transform` component has transformed your data into features, the next step is to train a model.
# + [markdown] colab_type="text" id="OBJFtnl6lCg9"
# ### Trainer
# The `Trainer` component will train a model that you define in TensorFlow (either using the Estimator API or the Keras API with [`model_to_estimator`](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator)).
#
# `Trainer` takes as input the schema from `SchemaGen`, the transformed data and graph from `Transform`, training parameters, as well as a module that contains user-defined model code.
#
# Let's see an example of user-defined model code below (for an introduction to the TensorFlow Estimator APIs, [see the tutorial](https://www.tensorflow.org/tutorials/estimator/premade)):
# + colab_type="code" id="N1376oq04YJt" colab={}
_taxi_trainer_module_file = 'taxi_trainer.py'
# + colab_type="code" id="nf9UuNng4YJu" colab={}
# %%skip_for_export
# %%writefile {_taxi_trainer_module_file}
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
import taxi_constants
_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = taxi_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = taxi_constants.LABEL_KEY
_transformed_name = taxi_constants.transformed_name
def _transformed_names(keys):
return [_transformed_name(key) for key in keys]
# Tf.Transform considers these features as "raw"
def _get_raw_feature_spec(schema):
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _build_estimator(config, hidden_units=None, warm_start_from=None):
"""Build an estimator for predicting the tipping behavior of taxi riders.
Args:
config: tf.estimator.RunConfig defining the runtime environment for the
estimator (including model_dir).
hidden_units: [int], the layer sizes of the DNN (input layer first)
warm_start_from: Optional directory to warm start from.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS),
_MAX_CATEGORICAL_FEATURE_VALUES)
]
return tf.estimator.DNNLinearCombinedClassifier(
config=config,
linear_feature_columns=categorical_columns,
dnn_feature_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25],
warm_start_from=warm_start_from)
def _example_serving_receiver_fn(tf_transform_graph, schema):
"""Build the serving in inputs.
Args:
tf_transform_graph: A TFTransformOutput.
schema: the schema of the input data.
Returns:
Tensorflow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(_LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_graph.transform_raw_features(
serving_input_receiver.features)
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_graph, schema):
"""Build everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_graph: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- Tensorflow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, process them through the tf-transform
# function computed during the preprocessing step.
transformed_features = tf_transform_graph.transform_raw_features(
features)
# The key name MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
  # NOTE: Model is driven by transformed features (since training works on the
  # materialized output of TFT), but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[_transformed_name(_LABEL_KEY)])
def _input_fn(filenames, tf_transform_graph, batch_size=200):
"""Generates features and labels for training or evaluation.
Args:
    filenames: [str] list of TFRecord files to read data from.
    tf_transform_graph: A TFTransformOutput.
    batch_size: int, first-dimension size of the Tensors returned by input_fn.
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_graph.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
filenames, batch_size, transformed_feature_spec, reader=_gzip_reader_fn)
transformed_features = (
tf.compat.v1.data.make_one_shot_iterator(dataset).get_next())
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
_transformed_name(_LABEL_KEY))
# TFX will call this function
def trainer_fn(trainer_fn_args, schema):
"""Build the estimator using the high level API.
Args:
trainer_fn_args: Holds args used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
train_batch_size = 40
eval_batch_size = 40
tf_transform_graph = tft.TFTransformOutput(trainer_fn_args.transform_output)
train_input_fn = lambda: _input_fn( # pylint: disable=g-long-lambda
trainer_fn_args.train_files,
tf_transform_graph,
batch_size=train_batch_size)
eval_input_fn = lambda: _input_fn( # pylint: disable=g-long-lambda
trainer_fn_args.eval_files,
tf_transform_graph,
batch_size=eval_batch_size)
train_spec = tf.estimator.TrainSpec( # pylint: disable=g-long-lambda
train_input_fn,
max_steps=trainer_fn_args.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn( # pylint: disable=g-long-lambda
tf_transform_graph, schema)
exporter = tf.estimator.FinalExporter('chicago-taxi', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=trainer_fn_args.eval_steps,
exporters=[exporter],
name='chicago-taxi-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=999, keep_checkpoint_max=1)
run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)
estimator = _build_estimator(
      # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
],
config=run_config,
warm_start_from=trainer_fn_args.base_model)
# Create an input receiver for TFMA processing
receiver_fn = lambda: _eval_input_receiver_fn( # pylint: disable=g-long-lambda
tf_transform_graph, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
# + [markdown] colab_type="text" id="GY4yTRaX4YJx"
# Now, we pass in this model code to the `Trainer` component and run it to train the model.
# + colab_type="code" id="429-vvCWibO0" colab={}
trainer = Trainer(
module_file=os.path.abspath(_taxi_trainer_module_file),
transformed_examples=transform.outputs['transformed_examples'],
schema=schema_gen.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000))
context.run(trainer)
# + [markdown] colab_type="text" id="6Cql1G35StJp"
# #### Analyze Training with TensorBoard
# Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.
# + colab_type="code" id="bXe62WE0S0Ek" colab={}
# %%skip_for_export
# Get the URI of the output artifact representing the training logs, which is a directory
model_dir = trainer.outputs['model'].get()[0].uri
# %load_ext tensorboard
# %tensorboard --logdir {model_dir}
# + [markdown] colab_type="text" id="FmPftrv0lEQy"
# ### Evaluator
# The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. The `Evaluator` can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we only train one model, so the `Evaluator` will automatically label the model as "good".
#
# `Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:
# + colab_type="code" id="fVhfzzh9PDEx" colab={}
eval_config = tfma.EvalConfig(
model_specs=[
        # Using signature 'eval' implies the use of an EvalSavedModel. To use
        # a serving model, remove the signature so that it defaults to
        # 'serving_default' and add a label_key.
tfma.ModelSpec(signature_name='eval')
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
metrics=[
tfma.MetricConfig(class_name='ExampleCount')
],
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
thresholds = {
'accuracy': tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10}))
}
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(feature_keys=['trip_start_hour'])
])
# + [markdown] colab_type="text" id="9mBdKH1F8JuT"
# Next, we give this configuration to `Evaluator` and run it.
# + colab_type="code" id="Zjcx8g6mihSt" colab={}
# Use TFMA to compute evaluation statistics over the features of a model and
# validate them against a baseline.
# The model resolver is only required if performing model validation in addition
# to evaluation. In this case we validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make our
# candidate the first blessed model.
model_resolver = ResolverNode(
instance_name='latest_blessed_model_resolver',
resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=Channel(type=Model),
model_blessing=Channel(type=ModelBlessing))
context.run(model_resolver)
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
#baseline_model=model_resolver.outputs['model'],
# Change threshold will be ignored if there is no baseline (first run).
eval_config=eval_config)
context.run(evaluator)
# + [markdown] colab_type="text" id="Y5TMskWe9LL0"
# After `Evaluator` finishes running, we can show the default visualization of global metrics on the entire evaluation set.
# + colab_type="code" id="U729j5X5QQUQ" colab={}
# %%skip_for_export
context.show(evaluator.outputs['evaluation'])
# + [markdown] colab_type="text" id="t-tI4p6m-OAn"
# To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.
# + colab_type="code" id="pyis6iy0HLdi" colab={}
# %%skip_for_export
import tensorflow_model_analysis as tfma
# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result, slicing_column='trip_start_hour')
# + [markdown] colab_type="text" id="7uvYrUf2-r_6"
# This visualization shows the same metrics, but computed at every feature value of `trip_start_hour` instead of on the entire evaluation set.
#
# TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see [the tutorial](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic).
#
# Since we added thresholds to our config, validation output is also available. The presence of a `blessing` artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.
# + colab_type="code" id="FXk1MA7sijCr" colab={}
# %%skip_for_export
blessing_uri = evaluator.outputs.blessing.get()[0].uri
# !ls -l {blessing_uri}
# + [markdown] colab_type="text" id="76Mil-7FlF_y"
# We can also verify the success by loading the validation result record:
# + colab_type="code" id="k4GghePOTJxL" colab={}
# %%skip_for_export
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
# + [markdown] colab_type="text" id="T8DYekCZlHfj"
# ### Pusher
# The `Pusher` component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to `_serving_model_dir`.
# + colab_type="code" id="r45nQ69eikc9" colab={}
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
context.run(pusher)
# + [markdown] colab_type="text" id="ctUErBYoTO9I"
# Let's examine the output artifacts of `Pusher`.
# + colab_type="code" id="pRkWo-MzTSss" colab={}
# %%skip_for_export
pusher.outputs
# + [markdown] colab_type="text" id="peH2PPS3VgkL"
# In particular, the Pusher will export your model in the SavedModel format, which looks like this:
# + colab_type="code" id="4zyIqWl9TSdG" colab={}
# %%skip_for_export
push_uri = pusher.outputs.model_push.get()[0].uri
latest_version = max(os.listdir(push_uri))
latest_version_path = os.path.join(push_uri, latest_version)
model = tf.saved_model.load(latest_version_path)
for item in model.signatures.items():
pp.pprint(item)
# + [markdown] colab_type="text" id="3-YPNUuHANtj"
# We've finished our tour of built-in TFX components!
#
# After you're happy with experimenting with TFX components and code in this notebook, you may want to export it as a pipeline to be orchestrated with Apache Airflow or Apache Beam. See the final section.
# + [markdown] colab_type="text" id="qGNDOG1o1Tht"
# ## Export to pipeline
#
# To export the contents of this notebook as a pipeline to be orchestrated with Airflow or Beam, follow the instructions below.
#
# If you're using Colab, make sure to **save this notebook to Google Drive** (`File` → `Save a Copy in Drive`) before exporting.
# + [markdown] colab_type="text" id="DDbff6EdQ0iJ"
# ### 1. Mount Google Drive (Colab-only)
#
# If you're using Colab, this notebook needs to mount your Google Drive to be able to access its own `.ipynb` file.
# + cellView="form" colab_type="code" id="CH8yu7Un1Thu" colab={}
# %%skip_for_export
#@markdown Run this cell and enter the authorization code to mount Google Drive.
import sys
if 'google.colab' in sys.modules:
# Colab.
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] colab_type="text" id="iFrJ8nOIRIJ0"
# ### 2. Select an orchestrator
# + cellView="form" colab_type="code" id="CO7erulbvUNi" colab={}
_runner_type = 'beam' #@param ["beam", "airflow"]
_pipeline_name = 'chicago_taxi_%s' % _runner_type
# + [markdown] colab_type="text" id="d64gdS2u1Thw"
# ### 3. Set up paths for the pipeline
# + colab_type="code" id="X_dZL1lS1Thx" colab={}
# For Colab notebooks only.
# TODO(USER): Fill out the path to this notebook.
_notebook_filepath = (
'/content/drive/My Drive/Colab Notebooks/taxi_pipeline_interactive.ipynb')
# For Jupyter notebooks only.
# _notebook_filepath = os.path.join(os.getcwd(),
# 'taxi_pipeline_interactive.ipynb')
# TODO(USER): Fill out the paths for the exported pipeline.
_pipeline_name = 'taxi_pipeline'
_tfx_root = os.path.join(os.environ['HOME'], 'tfx')
_taxi_root = os.path.join(os.environ['HOME'], 'taxi')
_serving_model_dir = os.path.join(_taxi_root, 'serving_model')
_data_root = os.path.join(_taxi_root, 'data', 'simple')
_pipeline_root = os.path.join(_tfx_root, 'pipelines', _pipeline_name)
_metadata_path = os.path.join(_tfx_root, 'metadata', _pipeline_name,
'metadata.db')
# + [markdown] colab_type="text" id="stNBJAIPvUNq"
# ### 4. Choose components to include in the pipeline
# + colab_type="code" id="DNc0Iks2vUNq" colab={}
# TODO(USER): Specify components to be included in the exported pipeline.
components = [
example_gen, statistics_gen, schema_gen, example_validator, transform,
trainer, model_resolver, evaluator, pusher
]
# + [markdown] colab_type="text" id="6nATTNYZ1Thy"
# ### 5. Generate pipeline files
# + cellView="form" colab_type="code" id="SsfNFi6iHMSp" colab={}
# %%skip_for_export
#@markdown Run this cell to generate the pipeline files.
if get_ipython().magics_manager.auto_magic:
print('Warning: %automagic is ON. Line magics specified without the % prefix '
'will not be scrubbed during export to pipeline.')
_pipeline_export_filepath = 'export_%s.py' % _pipeline_name
context.export_to_pipeline(notebook_filepath=_notebook_filepath,
export_filepath=_pipeline_export_filepath,
runner_type=_runner_type)
# + [markdown] colab_type="text" id="qL4RQQwSSt0y"
# ### 6. Download pipeline files
# + cellView="form" colab_type="code" id="FeRJyHly1Th3" colab={}
# %%skip_for_export
#@markdown Run this cell to download the pipeline files as a `.zip`.
if 'google.colab' in sys.modules:
from google.colab import files
import zipfile
zip_export_path = os.path.join(
tempfile.mkdtemp(), 'export.zip')
with zipfile.ZipFile(zip_export_path, mode='w') as export_zip:
export_zip.write(_pipeline_export_filepath)
export_zip.write(_taxi_constants_module_file)
export_zip.write(_taxi_transform_module_file)
export_zip.write(_taxi_trainer_module_file)
files.download(zip_export_path)
# + [markdown] colab_type="text" id="Po3wc1dMTJHw"
# To learn how to run the orchestrated pipeline with Apache Airflow, please refer to the [TFX Orchestration Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).
|
docs/by_topic/tensorflow/tfx_pipeline_interactive.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# ### 1. Load data
# Load csv file
df_train = pd.read_csv('../data_csv/aug_train.csv')
df_test = pd.read_csv('../data_csv/aug_test.csv')
test_target = np.load('../data_csv/jobchange_test_target_values.npy')
test_target = pd.DataFrame(test_target,columns=['target'])
test_target['enrollee_id'] = df_test['enrollee_id']
test_target = test_target[['enrollee_id','target']]
test_target.to_csv('../data_csv/test_target.csv')
# test_target = pd.read_csv('../data_csv/test_target.csv')
test_target
# +
# df
# -
# Check each column
# terminal install: conda install -c conda-forge pandas-profiling
from pandas_profiling import ProfileReport as pr
pr(df_train, minimal=True).to_notebook_iframe()
# ### 2. Examine and impute missing values
df_train.info()
# Pairplot
sns.pairplot(df_train, corner=True, height=1.5, plot_kws={'size': 3}, hue='target');
# +
# Examine data
df_train['company_type'].value_counts()
df_train['enrolled_university'].value_counts()
df_train['education_level'].value_counts()
df_train['experience'].value_counts()
df_train['company_size'].value_counts()
df_train['last_new_job'].value_counts()
# -
# Replace string with float/int
df_train['experience'] = df_train['experience'].replace('>20','25')
df_train['experience'] = df_train['experience'].replace('<1','0.5')
df_train['experience'] = df_train['experience'].astype('float')
df_train['last_new_job'] = df_train['last_new_job'].replace('>4','5')
df_train['last_new_job'] = df_train['last_new_job'].replace('never','0')
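# The replace-then-cast pattern above can be sketched on toy values (a hedged, standalone example — not the real column):

```python
import pandas as pd

# Toy series mimicking the open-ended 'experience' categories above:
# '>20' is clipped to '25' and '<1' to '0.5' before casting to float.
s = pd.Series(['>20', '3', '<1', '10'])
s = s.replace({'>20': '25', '<1': '0.5'}).astype('float')
print(s.tolist())  # [25.0, 3.0, 0.5, 10.0]
```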
# Impute/fill NaN
df_train['gender'] = df_train['gender'].replace(np.nan, 'unknown')
df_train['enrolled_university'] = df_train['enrolled_university'].replace(np.nan, 'unknown')
df_train['education_level'] = df_train['education_level'].replace(np.nan, 'unknown')
df_train['major_discipline'] = df_train['major_discipline'].replace(np.nan, 'unknown')
df_train['education_level'] = df_train['education_level'].replace(np.nan, 'unknown')
df_train['experience'] = df_train['experience'].fillna(value = df_train['experience'].median())
df_train['company_size'] = df_train['company_size'].fillna(value = df_train['company_size'].value_counts().index[0])
df_train['company_type'] = df_train['company_type'].replace(np.nan, 'unknown')
df_train['last_new_job'] = df_train['last_new_job'].astype('float')
df_train['last_new_job'] = df_train['last_new_job'].fillna(value = df_train['last_new_job'].median()).astype('int')
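# A minimal sketch of the two imputation strategies used above — median for a numeric column, most frequent value (mode) for a categorical one (toy data, not the survey columns):

```python
import numpy as np
import pandas as pd

# Median fill for numbers, mode fill for categories.
toy = pd.DataFrame({'experience': [1.0, np.nan, 3.0],
                    'company_size': ['50-99', np.nan, '50-99']})
toy['experience'] = toy['experience'].fillna(toy['experience'].median())
toy['company_size'] = toy['company_size'].fillna(toy['company_size'].value_counts().index[0])
print(toy['experience'].tolist())    # [1.0, 2.0, 3.0]
print(toy['company_size'].tolist())  # ['50-99', '50-99', '50-99']
```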
df_train['target'] = df_train['target'].astype('int')
df_train.info()
# ### 3. Pickle
df_train.to_pickle('../dump/df_train.csv')
# ### 4. Repeat for test set
# #### Examine and impute missing values
df_test['target'] = test_target['target']
df_test.info()
# Pairplot
sns.pairplot(df_test, corner=True, height=1.5, plot_kws={'size': 3}, hue='target');
# +
# Examine data
df_test['company_type'].value_counts()
df_test['enrolled_university'].value_counts()
df_test['education_level'].value_counts()
df_test['experience'].value_counts()
df_test['company_size'].value_counts()
df_test['last_new_job'].value_counts()
# -
# Replace string with float/int
df_test['experience'] = df_test['experience'].replace('>20','25')
df_test['experience'] = df_test['experience'].replace('<1','0.5')
df_test['experience'] = df_test['experience'].astype('float')
df_test['last_new_job'] = df_test['last_new_job'].replace('>4','5')
df_test['last_new_job'] = df_test['last_new_job'].replace('never','0')
# Impute/fill NaN
df_test['gender'] = df_test['gender'].replace(np.nan, 'unknown')
df_test['enrolled_university'] = df_test['enrolled_university'].replace(np.nan, 'unknown')
df_test['education_level'] = df_test['education_level'].replace(np.nan, 'unknown')
df_test['major_discipline'] = df_test['major_discipline'].replace(np.nan, 'unknown')
df_test['education_level'] = df_test['education_level'].replace(np.nan, 'unknown')
df_test['experience'] = df_test['experience'].fillna(value = df_test['experience'].median())
df_test['company_size'] = df_test['company_size'].fillna(value = df_test['company_size'].value_counts().index[0])
df_test['company_type'] = df_test['company_type'].replace(np.nan, 'unknown')
df_test['last_new_job'] = df_test['last_new_job'].astype('float')
df_test['last_new_job'] = df_test['last_new_job'].fillna(value = df_test['last_new_job'].median()).astype('int')
df_test['target'] = df_test['target'].astype('int')
df_test.info()
# #### Pickle
df_test.to_pickle('../dump/df_test.csv')
|
notebook/01_data_extract.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercises Electric Machinery Fundamentals
# ## Chapter 4
# ## Problem 4-14
# + slideshow={"slide_type": "skip"}
# %pylab notebook
# -
# ### Description
# During a short-circuit test, a Y-connected synchronous generator produces 100 A of short-circuit armature
# current per phase at a field current of 2.5 A. At the same field current, the open-circuit line voltage is
# measured to be 440 V.
#
# #### (a)
#
# * Calculate the saturated synchronous reactance under these conditions.
#
# #### (b)
# If the armature resistance is $0.3\,\Omega$ per phase, and the generator supplies 60 A to a purely resistive Y-connected load ~~of 3 Ohms per phase at this~~ at a matching field current setting,
#
# * Determine the voltage regulation under these load conditions.
# ### SOLUTION
# #### (a)
# The saturated synchronous reactance at a field current of 2.5 A can be found from the information
# supplied in the problem. The open circuit line voltage at $I_F = 2.5\,A$ is 440 V, and the short-circuit current is 100 A.
If = 2.5 # [A]
Vocc = 440.0 # [V]
Isc = 100.0 # [A]
# Since this generator is Y-connected, the corresponding phase voltage is:
Vphi = Vocc / sqrt(3)
print('Vphi = {:.0f} V'.format(Vphi))
# and the armature current is $I_A = I_{SC}$ . Therefore, the saturated synchronous
# reactance is:
Ia = Isc
Xs = Vphi / Ia
print('''
Xs = {:.2f} Ω
==========='''.format(Xs))
# #### (b)
# Assume that the desired line voltage is 440 V, which means that the phase voltage $\vec{V}_\phi = 254\,V\angle 0°$ (as calculated above).
# The armature current is $\vec{I}_A = 60\,A \angle 0°$, so the internal generated voltage is:
#
# $$\vec{E}_A = \vec{V}_\phi + R_A\vec{I}_A + jX_S\vec{I}_A$$
Vl_desired = 440 # [V]
Ra = 0.3 # [Ohm]
Ia_b = 60.0 # [A]
EA = Vphi + Ra * Ia_b + Xs*1j * Ia_b
EA_angle = arctan(EA.imag/EA.real)
print('EA = {:.0f} V ∠{:.1f}°'.format(abs(EA), EA_angle/pi*180))
# This is also the phase voltage at no load conditions. The corresponding line voltage at no load conditions
# would be:
Vl_nl = abs(EA) * sqrt(3)
print('Vl_nl = {:.0f} V'.format(Vl_nl))
# The voltage regulation is:
#
# $$VR = \frac{V_\text{T,nl} - V_\text{T,fl}}{V_\text{T,fl}} \cdot 100\%$$
VR = (Vl_nl - Vl_desired) / Vl_desired
print('''
VR = {:.1f} %
==========='''.format(VR*100))
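# As a self-contained cross-check of the arithmetic above (plain Python, no `%pylab`; input values reproduced from the preceding cells):

```python
import math

# Recompute the voltage regulation from the given test data.
Vocc, Isc = 440.0, 100.0
Vphi = Vocc / math.sqrt(3)            # phase voltage, ~254 V
Xs = Vphi / Isc                       # saturated synchronous reactance, ~2.54 Ohm
Ra, Ia = 0.3, 60.0
EA = Vphi + Ra * Ia + 1j * Xs * Ia    # internal generated voltage (phasor)
Vl_nl = abs(EA) * math.sqrt(3)        # no-load line voltage, ~540 V
VR = (Vl_nl - 440.0) / 440.0
print(round(VR * 100, 1))             # 22.7
```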
|
Chapman/Ch4-Problem_4-14.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="5ryGgS-N38vG" colab_type="text"
# # Dogs vs Cats
# + [markdown] id="v6iVa8PP4LKs" colab_type="text"
# Keras code for the Kaggle dogs and cats classification problem.
# + [markdown] id="7UH8FyzGrYa0" colab_type="text"
# ## Imports
# + id="tWJYa8UJ3_lO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 36} outputId="bb7b1170-43dc-49c7-83c9-288f8a65e53a"
from keras import layers
from keras import models
from keras import optimizers
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator
# + [markdown] id="yH-HpzzN7GJ_" colab_type="text"
# ## CNN models
# + id="5Xahgmjw4UQ7" colab_type="code" colab={}
model = models.Sequential()
model.add(layers.Conv2D(32, (3,3), activation = 'relu', input_shape = (150,150,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation = 'relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(128, (3,3), activation = 'relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(128, (3,3), activation = 'relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation = 'relu'))
model.add(layers.Dense(1, activation = 'sigmoid'))
# + id="a4JZruhl5V2e" colab_type="code" outputId="381793e3-09d9-4b27-acbc-4b9c46d48435" colab={"base_uri": "https://localhost:8080/"}
model.summary()
# + id="lzp2IVCi5cIY" colab_type="code" colab={}
model.compile(loss = 'binary_crossentropy', optimizer = optimizers.Adam(), metrics = ['acc'])
# + [markdown] id="OBGak3Ks7Kas" colab_type="text"
# ## Data Preprocessing
# + id="3R6y1wao8gLx" colab_type="code" colab={}
train_dir = 'sample_data'
validation_dir = 'sample_data'
# + id="fSFPSDce52Og" colab_type="code" outputId="e39939ed-ea47-4864-8610-91eac4f2ede7" colab={"base_uri": "https://localhost:8080/"}
train_datagen = ImageDataGenerator(rescale = 1./255)
test_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = train_datagen.flow_from_directory(
    # Target directory (use your own); resize the images; use binary labels
train_dir,
target_size = (150,150),
batch_size = 20,
class_mode = 'binary'
)
# + id="KMO52cHV76dH" colab_type="code" outputId="f1f9fc95-ef9f-4eef-9815-06aeeefd5ffc" colab={"base_uri": "https://localhost:8080/"}
validation_generator = test_datagen.flow_from_directory(
    # Target directory (use your own); resize the images; use binary labels
validation_dir,
target_size = (150,150),
batch_size = 20,
class_mode = 'binary'
)
# + [markdown] id="F9kcGdpj-8je" colab_type="text"
# ## Fitting and saving the model
# + id="JQfmFObx85qp" colab_type="code" colab={}
history = model.fit_generator(
train_generator,
steps_per_epoch = 100,
epochs = 10,
validation_data = validation_generator,
    validation_steps = 50
)
# + id="z3nt4bn8-5GD" colab_type="code" outputId="2a2bafbd-7686-46e7-c452-5e1bd5a800b9" colab={"base_uri": "https://localhost:8080/", "height": 36}
model.save('cats_and_dogs_small_1.h5')
print("Model saved to disk")
# + [markdown] id="LTFVMA8s_PJx" colab_type="text"
# ## Plotting model
# + id="WZtJfArv_WHY" colab_type="code" colab={}
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'g', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'g', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# + [markdown] id="8w3Kg8W__is-" colab_type="text"
# ## Data Augmentation and Training a new model
# + id="CPCswAE1_qX9" colab_type="code" colab={}
datagen = ImageDataGenerator(
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True,
fill_mode = 'nearest'
)
# + id="Pq0XZm0rA1MK" colab_type="code" colab={}
# New convnet that includes dropout
model = models.Sequential()
model.add(layers.Conv2D(32, (3,3), activation = 'relu', input_shape = (150,150,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64, (3,3), activation = 'relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(128, (3,3), activation = 'relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(128, (3,3), activation = 'relu'))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.4))
model.add(layers.Dense(512, activation = 'relu'))
model.add(layers.Dense(1, activation = 'sigmoid'))
# + id="cfJNyvlwCkzN" colab_type="code" colab={}
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['acc'])
# + id="Jh-9lEC-CsI8" colab_type="code" colab={}
train_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True
)
# + id="1GLiZ3VrEaez" colab_type="code" colab={}
# Note validation data should not be augmented
test_datagen = ImageDataGenerator(
rescale = 1./255
)
# + id="oLyOFR5vExd9" colab_type="code" outputId="4c99d8b7-ac94-4a58-b5cd-69b4c8c2b469" colab={"base_uri": "https://localhost:8080/", "height": 36}
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size = (150,150),
batch_size = 32,
class_mode = 'binary'
)
# + id="ksYT4h2bFE6u" colab_type="code" outputId="263810cc-48e2-4383-8212-38a303c97fd8" colab={"base_uri": "https://localhost:8080/", "height": 36}
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size = (150,150),
batch_size = 32,
class_mode = 'binary'
)
# + id="y_3zhhLHFf12" colab_type="code" colab={}
history = model.fit_generator(
train_generator,
steps_per_epoch = 100,
epochs = 100,
validation_data = validation_generator,
validation_steps = 50
)
# + id="w9OXG9dcGwEI" colab_type="code" colab={}
model.save('cats_and_dogs_small_2.h5')
print("Model saved to disk")
# + [markdown] id="NVVuDgLPHQRd" colab_type="text"
# ## Using a Pretrained Convnet
# + [markdown] id="7wNT_BP1hRF8" colab_type="text"
# ### Basic Transfer Learning
# + id="cbLWEOqoG7VF" colab_type="code" colab={}
from keras.applications import VGG16
# + id="ZFeptc20K9ft" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 130} outputId="ba70fc4b-40d2-481e-f836-d56c628ca549"
conv_base = VGG16(weights = 'imagenet', include_top = False, input_shape = (150,150,3))
# + id="RzSoxvyfLN5h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 853} outputId="0aa2f7fb-f655-4aad-e1bb-706568dff289"
conv_base.summary()
# + id="2nU6dybLdOh7" colab_type="code" colab={}
# Adding a Densely connected network on top of VGG16
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation = 'relu'))
model.add(layers.Dense(1, activation = 'sigmoid'))
# + id="C-X-Kyy7d3c4" colab_type="code" outputId="3940c158-b420-422a-c9dc-8d31c70c0bbd" colab={"base_uri": "https://localhost:8080/", "height": 296}
model.summary()
# + id="zettjsCAd_RK" colab_type="code" outputId="f3fa8330-d5f2-444b-f016-70a549ca9770" colab={"base_uri": "https://localhost:8080/", "height": 54}
# Freezing some of the parameters of the layers
print("Number of trainable weights before freezing VGG ",len(model.trainable_weights))
conv_base.trainable = False
print("Number of trainable weights after freezing VGG16 ",len(model.trainable_weights))
# + [markdown] id="t8yLBDA7fa0p" colab_type="text"
# * With this setup only the densely connected layers will get updated.
# * That leaves 4 trainable weight tensors: 2 kernels and 2 biases, one pair per Dense layer.
# * For the freeze to take effect we need to compile the model again.
# * We must re-compile the model every time we change layer trainability, or the change will be ignored.
# + id="f62fcqUFeYiP" colab_type="code" colab={}
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest' )
# + id="SDNq5pDRf0O-" colab_type="code" colab={}
# Do not augment the test data
test_datagen = ImageDataGenerator(rescale=1./255)
# + id="VE3WMv0fgDpe" colab_type="code" colab={}
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size = (150,150),
batch_size = 20,
class_mode = 'binary'
)
# + id="5Wl-vfXrgb5o" colab_type="code" outputId="fdf65013-d3f6-41e0-bc00-40c59f8fff2f" colab={"base_uri": "https://localhost:8080/", "height": 36}
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary'
)
# + id="AU_8VyG1gf35" colab_type="code" colab={}
model.compile(loss = "binary_crossentropy", optimizer = optimizers.RMSprop(), metrics = ['accuracy'])
# + id="KgDXb9togvrF" colab_type="code" colab={}
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50
)
# + [markdown] id="XC_GBZehhNdo" colab_type="text"
# ### Fine-tuning Transfer Learning
# + id="lzx4xl57hDdY" colab_type="code" colab={}
conv_base.summary()
# + id="sbQlKsV-jbFk" colab_type="code" colab={}
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
if layer.name == 'block5_conv1':
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
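The unfreezing pattern above — walk the layers in order and flip a flag once the target layer is reached — can be shown standalone (hypothetical layer names mimicking the VGG16 stack):

```python
layer_names = ['block4_conv3', 'block4_pool',
               'block5_conv1', 'block5_conv2', 'block5_pool']

trainable = {}
set_trainable = False
for name in layer_names:
    if name == 'block5_conv1':
        set_trainable = True          # everything from here on is unfrozen
    trainable[name] = set_trainable

print([n for n, t in trainable.items() if t])
# ['block5_conv1', 'block5_conv2', 'block5_pool']
```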
# + id="1jb5vo54qLem" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="1c5347be-0745-44fc-844d-a0eaeb368448"
conv_base.trainable_weights
# + id="fvYOZZtWqSbI" colab_type="code" colab={}
# Now train at a very low learning rate
model.compile(loss = "binary_crossentropy", optimizer = optimizers.Adam(lr = 1e-5), metrics = ['accuracy'])
# + id="K65izpueqod-" colab_type="code" colab={}
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50
)
# + [markdown] id="Eaw8xjDUq3s9" colab_type="text"
# ## Finally Testing the model
# + id="e3bzpMM1qxS8" colab_type="code" colab={}
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary'
)
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
|
Learning_Keras/Convolutional_Networks/Dogs vs Cat CNN.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.3 64-bit
# name: python3
# ---
# + [markdown] id="fxDyMqSjzv9F" colab_type="text"
# Import the pandas library.
# + id="NLvNv-OxztMa" colab_type="code" colab={}
import pandas as pd
# + [markdown] id="EbPFDbHi3nyU" colab_type="text"
# (1) Read `table1.csv` into `df1`, (2) print its numbers of rows and columns with `.shape`, and (3) display its contents.
#
# + id="cWzq2jT00RmZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="6bb935b2-6ea2-4427-9acc-8a2dd488d9e1"
# Code 1
df1 = pd.read_csv('https://raw.githubusercontent.com/bioinfo-tsukuba/AdvancedCourse2021/main/3/table1.csv')
print(df1.shape)
df1
# + [markdown] id="y5nEOKBp4JD0" colab_type="text"
# (1) Read `table2.csv` into `df2`, (2) print its numbers of rows and columns with `.shape`, and (3) display its contents.
#
# + id="DjbF5cCf1WyI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="a974d4ff-8473-4a70-edef-d8b9651613b8"
# Code 2
df2 = pd.read_csv('https://raw.githubusercontent.com/bioinfo-tsukuba/AdvancedCourse2021/main/3/table2.csv')
print(df2.shape)
df2
# + [markdown] id="Yc-56FGD2Wbb" colab_type="text"
# The following is an example of an inner join.
# + id="_JJQM0eA1eei" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="df3259d4-3fc4-4ac2-bb9e-5b58ffd0176a"
# Code 3
df3 = pd.merge(df1, df2, how='inner', on='Gene_ID')
df3
# + [markdown] id="FMnwj_c-2c0I" colab_type="text"
# The following is an example of a left join.
# + id="eeaLfO352QOm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="4123aaa3-401c-4004-df0e-6a91ed34bade"
# Code 4
df4 = pd.merge(df1, df2, how='left', on='Gene_ID')
df4
# + [markdown] id="q2L5pq6A2n0e" colab_type="text"
# The following is an example of a right join.
# + id="GjX0cEx42jxM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="f2738c73-99ee-4545-cf4e-d0a8077c07e9"
# Code 5
df5 = pd.merge(df1, df2, how='right', on='Gene_ID')
df5
# + [markdown] id="4wYJ46uy4U8R" colab_type="text"
# The following is an example of an outer join.
# + id="6oKto8S92r3I" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="0235f289-93bf-40dd-ad8e-0299469ee173"
# Code 6
df6 = pd.merge(df1, df2, how='outer', on='Gene_ID')
df6
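A related option worth knowing (toy frames here, not the course CSVs): `pd.merge` accepts `indicator=True`, which adds a `_merge` column recording whether each row came from the left table, the right table, or both.

```python
import pandas as pd

left = pd.DataFrame({'Gene_ID': ['g1', 'g2'], 'expr': [1.0, 2.0]})
right = pd.DataFrame({'Gene_ID': ['g2', 'g3'], 'anno': ['x', 'y']})

# Outer join with provenance tracking via the `_merge` column
merged = pd.merge(left, right, how='outer', on='Gene_ID', indicator=True)
print(merged['_merge'].tolist())  # ['left_only', 'both', 'right_only']
```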
|
3/Learning_JOIN.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import os
import numpy
import MySQLdb
import omdtfn as odt
#conn= MySQLdb.connect("localhost","root","admin","omdb")
#df_mysql = pd.read_sql("select * from sitedb",conn)
omdb = os.path.join(os.getcwd(), "OMDB.csv")
single = os.path.join(os.getcwd(), "SingleClick.csv")
pntxt = os.path.join(os.getcwd(), "Periodic_Notification.txt")
pth = os.path.join(os.getcwd(), "WRT1.csv")
pth2 = os.path.join(os.getcwd(), "WRT2.csv")
# Map an alarm summary string to a category code; the first matching
# keyword wins (note 'GEN' already covers 'GENSET' summaries).
def TS(x):
    rules = [('2G SITE DOWN', '2G'), ('3G SITE DOWN', '3G'),
             ('4G SITE DOWN', '4G'), ('MAIN', 'MF'),
             ('VOLTAGE', 'DC'), ('TEMPERATURE', 'TM'),
             ('SMOKE', 'SM'), ('GEN', 'GN'), ('THEFT', 'TH'),
             ('2G CELL DOWN', '2_CELL'), ('3G CELL DOWN', '3_CELL'),
             ('4G CELL DOWN', '4_CELL')]
    for key, code in rules:
        if key in x:
            return code
    return "NA"
def write2txt(flname, txt):
    # Overwrite the file with the given text
    with open(flname, "w+") as fo:
        fo.write(txt)
class omdf:
def __init__(self,dic):
self.df = pd.DataFrame(dic)
self.arr = self.df.to_numpy()
self.lst = list(self.df.columns.values)
self.aList = []
def df_addcol_lamda(self):
self.df['cat'] = self.df.apply(lambda row: TS(row.Summary), axis = 1)
return self.df.to_dict()
def df_addcol_fdic(self,d,newcolname):
self.df[newcolname] = self.df['scode'].map(d)
return self.df.to_dict()
def df_apply_on_col(self,newcolname):
self.df[newcolname] = self.df.apply(lambda x : x.CustomAttr15[0:5], axis = 1)
return self.df.to_dict()
def df_remove_col_by_list(self,lis):
ndf = self.df[lis]
return ndf.to_dict()
def PNPW(dic, lis):
    ndf = pd.DataFrame(dic)
    G2T = 0
    G3T = 0
    lines = []
    for zone in lis:
        g2 = ndf[ndf['cat'].str.contains('MF') & ndf['Zone'].str.contains(zone)]
        g3 = ndf[ndf['cat'].str.contains('DL') & ndf['Zone'].str.contains(zone)]
        G2T += g2.shape[0]
        G3T += g3.shape[0]
        lines.append(str(zone) + ": " + str(g2.shape[0]) + "/" + str(g3.shape[0]))
    heap = '\n'.join(lines)
    reg = 'Region: MF/DL'
    Nat = 'National: ' + str(G2T) + '/' + str(G3T)
    heaps = reg + '\n' + Nat + '\n' + '\n' + heap
    return heaps
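The per-zone tally that `PNPW` performs can be checked standalone with a few hypothetical rows (not the production OMDB feed):

```python
import pandas as pd

# Made-up alarm rows: 'cat' is the category code, 'Zone' the region code
ndf = pd.DataFrame({'cat': ['MF', 'DL', 'MF'],
                    'Zone': ['DHK_S', 'DHK_S', 'CTG_N']})

lines, mf_total, dl_total = [], 0, 0
for zone in ['DHK_S', 'CTG_N']:
    mf = int((ndf['cat'].str.contains('MF') & ndf['Zone'].str.contains(zone)).sum())
    dl = int((ndf['cat'].str.contains('DL') & ndf['Zone'].str.contains(zone)).sum())
    mf_total += mf
    dl_total += dl
    lines.append(zone + ": " + str(mf) + "/" + str(dl))

print('National: %d/%d' % (mf_total, dl_total))  # National: 2/1
print('\n'.join(lines))
```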
def ByCat(dic, lis, strval):
    ndf = pd.DataFrame(dic)
    G2T = 0
    lines = []
    for zone in lis:
        g2 = ndf[ndf['cat'].str.contains(strval) & ndf['Zone'].str.contains(zone)]
        G2T += g2.shape[0]
        lines.append(str(zone) + ": " + str(g2.shape[0]))
    heap = '\n'.join(lines)
    heaps = "National: " + str(G2T) + '\n' + '\n' + heap
    return heaps
def PN_Format(dic, lis):
    ndf = pd.DataFrame(dic)
    G2T = 0
    G3T = 0
    G4T = 0
    lines = []
    for zone in lis:
        g2 = ndf[ndf['cat'].str.contains('2G') & ndf['Zone'].str.contains(zone)]
        g3 = ndf[ndf['cat'].str.contains('3G') & ndf['Zone'].str.contains(zone)]
        g4 = ndf[ndf['cat'].str.contains('4G') & ndf['Zone'].str.contains(zone)]
        G2T += g2.shape[0]
        G3T += g3.shape[0]
        G4T += g4.shape[0]
        lines.append(str(zone) + ": " + str(g2.shape[0]) + "/" + str(g3.shape[0]) + "/" + str(g4.shape[0]))
    heap = '\n'.join(lines)
    hd = "Update of Site Down at " + odt.hrmin() + ' On ' + odt.dtmnyr()
    reg = 'Region: ' + '2G/3G/4G'
    Nat = 'National: ' + str(G2T) + '/' + str(G3T) + '/' + str(G4T)
    heaps = hd + '\n' + '\n' + reg + '\n' + Nat + '\n' + '\n' + heap
    return heaps
def PN(dicc):
ls1 = ['CustomAttr15','Resource','Summary','LastOccurrence','BCCH']
ls2 = ['Code','Zone']
dfsingle = pd.DataFrame(dicc)
dfomdb = pd.read_csv(omdb)
dfs = dfsingle[ls1]
dfdb = dfomdb[ls2]
x1 = omdf(dfs)
dfs1 = x1.df_addcol_lamda()
dfx = pd.DataFrame(dfs1)
dfx.to_csv(pth)
x2 = omdf(dfs1)
dfs2 = pd.DataFrame(x2.df_apply_on_col('Code'))
mergedDf = dfs2.merge(dfdb, on='Code')
#dff = mergedDf[mergedDf['BCCH'].str.contains('YES')]
mergedDf.to_csv(pth2)
ls3 = ['DHK_S','DHK_N','DHK_M','CTG_S','CTG_N','CTG_M','COM','NOA','SYL','MYM','BAR','KHL','KUS','RAJ','RANG']
#print(ByCat(mergedDf.to_dict(),ls3,"4G"))
txt = PN_Format(mergedDf.to_dict(),ls3)
txtpw = PNPW(mergedDf.to_dict(),ls3)
#print(txtpw)
#write2txt(pntxt,txt)
return txt
df = pd.read_csv(single)
dc = df.to_dict()
print(PN(dc))
# -
|
Z_ALL_FILE/Jy1/fndf_mon-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # You can keep notes in Markdown
#
# This is a very convenient feature: you can insert comments and documentation between pieces of code. While doing machine learning work, you can jot down notes, observations, and ideas right alongside your programs.
#
# ## Goals of this section
#
# - Learn about Jupyter Notebook
# - Enjoy machine learning visually
#
# ### You can run Python code interactively
#
val = 1500
val
# # Display matplotlib results inline
#
# Run the following declaration.
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-20, 20, 0.1)
y = np.sin(x)
plt.plot(x, y)
|
sample/ch5/jupyter-firststep.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Conflicts
# + [markdown] slideshow={"slide_type": "slide"}
# ## Overview
# - **Teaching:** 25 min
# - **Exercises:** 0 min
#
# **Questions**
# - What do I do when my changes conflict with someone else’s?
#
# **Objectives**
# - Explain what conflicts are and when they can occur.
# - Resolve conflicts resulting from a merge.
# + [markdown] slideshow={"slide_type": "slide"}
# As soon as people can work in parallel, they’ll likely step on each other’s toes. This will even happen with a single person: if we are working on a piece of software on both our laptop and a server in the lab, we could make different changes to each copy. Version control helps us manage these conflicts by giving us tools to resolve overlapping changes.
#
# To see how we can resolve conflicts, we must first create one. The file `mars.txt` currently looks like this in both partners’ copies of our `planets` repository:
# ```bash
# % cat mars.txt
# ```
# ```brainfuck
# Cold and dry, but everything is my favorite color
# The two moons may be a problem for Wolfman
# But the Mummy will appreciate the lack of humidity
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# Let’s add a line to one partner’s copy only:
# ```bash
# % nano mars.txt
# % cat mars.txt
# ```
# ```brainfuck
# Cold and dry, but everything is my favorite color
# The two moons may be a problem for Wolfman
# But the Mummy will appreciate the lack of humidity
# This line added to Wolfman's copy
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# and then push the change to GitHub:
# ```bash
# % git add mars.txt
# % git commit -m "Add a line in our home copy"
# ```
# ```brainfuck
# [master 5ae9631] Add a line in our home copy
# 1 file changed, 1 insertion(+)
# ```
# ```bash
# % git push origin master
# ```
# ```brainfuck
# Counting objects: 5, done.
# Delta compression using up to 4 threads.
# Compressing objects: 100% (3/3), done.
# Writing objects: 100% (3/3), 352 bytes, done.
# Total 3 (delta 1), reused 0 (delta 0)
# To https://github.com/vlad/planets
# 29aba7c..dabb4c8 master -> master
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# Now let’s have the other partner make a different change to their copy without updating from GitHub:
# ```bash
# % nano mars.txt
# % cat mars.txt
# ```
# ```brainfuck
# Cold and dry, but everything is my favorite color
# The two moons may be a problem for Wolfman
# But the Mummy will appreciate the lack of humidity
# We added a different line in the other copy
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# We can commit the change locally:
# ```bash
# % git add mars.txt
# % git commit -m "Add a line in my copy"
# ```
# ```brainfuck
# [master 07ebc69] Add a line in my copy
# 1 file changed, 1 insertion(+)
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# but Git won’t let us push it to GitHub:
# ```bash
# % git push origin master
# ```
# ```brainfuck
# To https://github.com/vlad/planets.git
# ! [rejected] master -> master (non-fast-forward)
# error: failed to push some refs to 'https://github.com/vlad/planets.git'
# hint: Updates were rejected because the tip of your current branch is behind
# hint: its remote counterpart. Merge the remote changes (e.g. 'git pull')
# hint: before pushing again.
# hint: See the 'Note about fast-forwards' in 'git push --help' for details.
# ```
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# Git detects that the changes made in one copy overlap with those made in the other and stops us from trampling on our previous work. What we have to do is pull the changes from GitHub, merge them into the copy we’re currently working in, and then push that. Let’s start by pulling:
# ```bash
# % git pull origin master
# ```
# ```brainfuck
# remote: Counting objects: 5, done.
# remote: Compressing objects: 100% (2/2), done.
# remote: Total 3 (delta 1), reused 3 (delta 1)
# Unpacking objects: 100% (3/3), done.
# From https://github.com/vlad/planets
# * branch master -> FETCH_HEAD
# Auto-merging mars.txt
# CONFLICT (content): Merge conflict in mars.txt
# Automatic merge failed; fix conflicts and then commit the result.
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# `git pull` tells us there’s a conflict, and marks that conflict in the affected file:
# ```bash
# % cat mars.txt
# ```
# ```brainfuck
# Cold and dry, but everything is my favorite color
# The two moons may be a problem for Wolfman
# But the Mummy will appreciate the lack of humidity
# <<<<<<< HEAD
# We added a different line in the other copy
# =======
# This line added to Wolfman's copy
# >>>>>>> dabb4c8c450e8475aee9b14b4383acc99f42af1d
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# Our change is preceded by `<<<<<<< HEAD`. Git has then inserted `=======` as a separator between the conflicting changes and marked the end of the content downloaded from GitHub with `>>>>>>>`. (The string of letters and digits after that marker identifies the commit we’ve just downloaded.)
#
# It is now up to us to edit this file to remove these markers and reconcile the changes. We can do anything we want: keep the change made in the local repository, keep the change made in the remote repository, write something new to replace both, or get rid of the change entirely. Let’s replace both so that the file looks like this:
# ```bash
# % cat mars.txt
# ```
# ```brainfuck
# Cold and dry, but everything is my favorite color
# The two moons may be a problem for Wolfman
# But the Mummy will appreciate the lack of humidity
# We removed the conflict on this line
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# To finish merging, we add mars.txt to the changes being made by the merge and then commit:
# ```bash
# % git add mars.txt
# % git status
# ```
# ```brainfuck
# On branch master
# All conflicts fixed but you are still merging.
# (use "git commit" to conclude merge)
#
# Changes to be committed:
#
# modified: mars.txt
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# ```bash
# % git commit -m "Merge changes from GitHub"
# ```
# ```brainfuck
# [master 2abf2b1] Merge changes from GitHub
# ```
# Now we can push our changes to GitHub:
#
# ```bash
# % git push origin master
# ```
# ```brainfuck
# Counting objects: 10, done.
# Delta compression using up to 4 threads.
# Compressing objects: 100% (6/6), done.
# Writing objects: 100% (6/6), 697 bytes, done.
# Total 6 (delta 2), reused 0 (delta 0)
# To https://github.com/vlad/planets.git
# dabb4c8..2abf2b1 master -> master
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# Git keeps track of what we’ve merged with what, so we don’t have to fix things by hand again when the collaborator who made the first change pulls again:
# ```bash
# % git pull origin master
# ```
# ```brainfuck
# remote: Counting objects: 10, done.
# remote: Compressing objects: 100% (4/4), done.
# remote: Total 6 (delta 2), reused 6 (delta 2)
# Unpacking objects: 100% (6/6), done.
# From https://github.com/vlad/planets
# * branch master -> FETCH_HEAD
# Updating dabb4c8..2abf2b1
# Fast-forward
# mars.txt | 2 +-
# 1 file changed, 1 insertion(+), 1 deletion(-)
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# We get the merged file:
# ```bash
# % cat mars.txt
# ```
# ```brainfuck
# Cold and dry, but everything is my favorite color
# The two moons may be a problem for Wolfman
# But the Mummy will appreciate the lack of humidity
# We removed the conflict on this line
# ```
# We don’t need to merge again because Git knows someone has already done that.
# -
#
# Git’s ability to resolve conflicts is very useful, but conflict resolution costs time and effort, and can introduce errors if conflicts are not resolved correctly. If you find yourself resolving a lot of conflicts in a project, consider these technical approaches to reducing them:
#
# - Pull from upstream more frequently, especially before starting new work
# - Use topic branches to segregate work, merging to master when complete
# - Make smaller more atomic commits
# - Where logically appropriate, break large files into smaller ones so that it is less likely that two authors will alter the same file simultaneously
#
# Conflicts can also be minimized with project management strategies:
#
# - Clarify who is responsible for what areas with your collaborators
# - Discuss what order tasks should be carried out in with your collaborators so that tasks expected to change the same lines won’t be worked on simultaneously
# - If the conflicts are stylistic churn (e.g. tabs vs. spaces), establish a project convention that is governing and use code style tools (e.g. `htmltidy`, `perltidy`, `rubocop`, etc.) to enforce, if necessary
# + [markdown] slideshow={"slide_type": "slide"}
# ## Key Points
# - Conflicts occur when two or more people change the same file(s) at the same time.
# - The version control system does not allow people to overwrite each other’s changes blindly, but highlights conflicts so that they can be resolved.
|
nbplain/10_episode.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How Many Receptors Are in Each NeuroMMSig Signature?
#
# **Author:** [<NAME>](https://github.com/cthoyt)
#
# **Estimated Run Time:** 10 seconds
#
# This notebook uses PyBEL and Bio2BEL HGNC to assess how many HGNC terms belonging to Gene Families containing the word "receptor" are present in each signature of the NeuroMMSig in the context of Alzheimer's disease.
#
# ## Import
# +
import sys
import os
import time
import bio2bel_hgnc
import pandas as pd
import pybel
import pybel_tools
from bio2bel_hgnc.models import GeneFamily
from pybel_tools import selection, summary, utils
# -
# ## Environment
print(sys.version)
time.asctime()
# ## Dependencies
pybel.utils.get_version()
pybel_tools.utils.get_version()
# # Identifying Receptor-Encoding Genes
#
# This section requires that the `bio2bel_hgnc` package is installed and populated.
hgnc_manager = bio2bel_hgnc.Manager()
hgnc_manager
hgnc_manager.summarize()
receptor_families = hgnc_manager.session.query(GeneFamily).filter(GeneFamily.family_name.contains('receptor')).all()
# How many receptor families are there?
len(receptor_families)
receptor_genes = {gene for family in receptor_families for gene in family.hgncs}
# How many unique genes belong to one (or more) of these families?
len(receptor_genes)
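The set comprehension above flattens overlapping families into a set of unique genes; the same idea on toy data (made-up family names and gene symbols, plain Python sets):

```python
# Toy stand-ins for GeneFamily objects: family name -> member gene symbols
families = {
    'receptor family A': ['EGFR', 'ERBB2'],
    'receptor family B': ['ERBB2', 'NTRK1'],
}

# Union over all families; ERBB2 is counted once despite two memberships
unique_genes = {gene for members in families.values() for gene in members}
print(sorted(unique_genes))  # ['EGFR', 'ERBB2', 'NTRK1']
```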
# # Load NeuroMMSig Signatures
#
# This section requires the `BMS_BASE` environment variable to be set.
# +
neurommsig_ad_path = os.path.join(os.environ['BMS_BASE'], 'aetionomy', 'alzheimers', 'alzheimers.gpickle')
assert os.path.exists(neurommsig_ad_path)
# -
neurommsig_ad = pybel.from_pickle(neurommsig_ad_path)
signatures = pybel_tools.selection.get_subgraphs_by_annotation(neurommsig_ad, 'Subgraph')
# How many signatures does NeuroMMSig contain in the context of Alzheimer's disease?
len(signatures)
# # Summarize
signature_genes = {
signature_name: pybel.struct.summary.get_names_by_namespace(signature, 'HGNC')
for signature_name, signature in signatures.items()
}
# Intersect each signature's gene symbols with the receptor families'
# members so the count reflects receptors rather than all HGNC genes
# (this assumes the HGNC gene model exposes a `symbol` attribute)
receptor_gene_symbols = {gene.symbol for gene in receptor_genes}
df = pd.DataFrame([
    (signature_name, len(genes & receptor_gene_symbols))
    for signature_name, genes in signature_genes.items()
], columns=('Name', 'Receptors'))
# The top 15 highest receptor density graphs are shown
df.sort_values('Receptors', ascending=False).head(15)
|
How Many Receptors Are in Each NeuroMMSig Signature?.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.1
# language: julia
# name: julia-1.5
# ---
using Pkg,Plots
Pkg.activate("..")
using OpenSAFT
# <font size="4">In this notebook we will try to replicate various figures from <NAME> and <NAME>'s 2002 paper</font>
# ## Figure 1
# Setting up the models
methanol = system(["methanol"],"PCSAFT")
pentanol = system(["1-pentanol"],"PCSAFT")
nonanol = system(["1-nonanol"],"PCSAFT");
# Get critical point for all species
(T_c_methanol, p_c_methanol, v_c_methanol) = get_crit_pure(methanol)
(T_c_pentanol, p_c_pentanol, v_c_pentanol) = get_crit_pure(pentanol)
(T_c_nonanol, p_c_nonanol, v_c_nonanol) = get_crit_pure(nonanol);
# +
# Get saturation properties for all species
T_methanol = range(205, T_c_methanol*0.99, length = 40)
T_pentanol = range(200, T_c_pentanol*0.99, length = 40)
T_nonanol = range(260, T_c_nonanol*0.99, length = 40)
A = get_sat_pure.(methanol,T_methanol)
B = get_sat_pure.(pentanol,T_pentanol)
C = get_sat_pure.(nonanol,T_nonanol)
v_l_methanol = append!([A[i][2] for i in 1:length(T_methanol)],v_c_methanol)
v_v_methanol = append!([A[i][3] for i in 1:length(T_methanol)],v_c_methanol)
v_l_pentanol = append!([B[i][2] for i in 1:length(T_pentanol)],v_c_pentanol)
v_v_pentanol = append!([B[i][3] for i in 1:length(T_pentanol)],v_c_pentanol)
v_l_nonanol = append!([C[i][2] for i in 1:length(T_nonanol)],v_c_nonanol)
v_v_nonanol = append!([C[i][3] for i in 1:length(T_nonanol)],v_c_nonanol)
T_methanol = append!(collect(T_methanol),T_c_methanol)
T_pentanol = append!(collect(T_pentanol),T_c_pentanol)
T_nonanol = append!(collect(T_nonanol),T_c_nonanol);
# -
plt = plot(0.032 ./v_l_methanol, T_methanol,color=:red,xlabel="Density / kg/m³",ylabel="T / K", label = "methanol")
plt = plot!(0.032 ./v_v_methanol, T_methanol,color=:red, label = "")
plt = plot!(0.088 ./v_l_pentanol, T_pentanol,color=:blue, label = "pentanol")
plt = plot!(0.088 ./v_v_pentanol, T_pentanol,color=:blue, label = "")
plt = plot!(0.144 ./v_l_nonanol, T_nonanol,color=:green, label = "nonanol")
plt = plot!(0.144 ./v_v_nonanol, T_nonanol,color=:green, label = "")
display(plt)
# ## Figure 2
# Initiate system
methanol = system(["methanol"],"PCSAFT")
isobutane = system(["isobutane"],"PCSAFT")
mix = system(["isobutane","methanol"],"PCSAFT");
# Obtain saturation pressure of less volatile component
(P_sat_1,v_l,v_v) = get_sat_pure(isobutane, 373.15)
(P_sat_2,v_l,v_v) = get_sat_pure(methanol, 373.15);
# +
# Obtain mixture saturation conditions 1
# x composition
x = range(1e-3,1-1e-2,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_1,v_l,v_v,y) = get_bubble_pressure(mix, 373.15, x);
# Concatenate results
x_1 = x[:,1]
y_1 = y[:,1]
append!(x_1,1.)
append!(y_1,1.)
append!(P_sat_mix_1,P_sat_1[1])
pushfirst!(x_1,0.)
pushfirst!(y_1,0.)
pushfirst!(P_sat_mix_1,P_sat_2[1])
z_1 = vcat(x_1,reverse(y_1))
P_sat_mix_1 = vcat(P_sat_mix_1,reverse(P_sat_mix_1));
# -
# Plotting
plt = plot(z_1,P_sat_mix_1/1e5,color=:black,label="",xlabel="x(isobutane),y(isobutane)",ylabel="P / bar",xlim=(0,1))
display(plt)
# ## Figure 4
# Initiate system
pentanol = system(["1-pentanol"],"PCSAFT")
benzene = system(["benzene"],"PCSAFT")
mix = system(["benzene","1-pentanol"],"PCSAFT");
# Obtain saturation pressure of less volatile component
(P_sat_1,v_l,v_v) = get_sat_pure(benzene, 313.15)
(P_sat_2,v_l,v_v) = get_sat_pure(pentanol, 313.15);
# +
# Obtain mixture saturation conditions 1
# x composition
x = range(1e-3,1-1e-2,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_1,v_l,v_v,y) = get_bubble_pressure(mix, 313.15, x);
# Concatenate results
x_1 = x[:,1]
y_1 = y[:,1]
append!(x_1,1.)
append!(y_1,1.)
append!(P_sat_mix_1,P_sat_1[1])
pushfirst!(x_1,0.)
pushfirst!(y_1,0.)
pushfirst!(P_sat_mix_1,P_sat_2[1])
z_1 = vcat(x_1,reverse(y_1))
P_sat_mix_1 = vcat(P_sat_mix_1,reverse(P_sat_mix_1));
# -
# Plotting
plt = plot(z_1,P_sat_mix_1/1e5,color=:black,label="",xlabel="x(benzene),y(benzene)",ylabel="P / bar",xlim=(0,1))
display(plt)
# ## Figure 5
# Initiate system
propanol_1 = system(["1-propanol"],"PCSAFT")
propanol_2 = system(["2-propanol"],"PCSAFT")
benzene = system(["benzene"],"PCSAFT")
mix_1 = system(["benzene","1-propanol"],"PCSAFT")
mix_2 = system(["benzene","2-propanol"],"PCSAFT");
# Obtain saturation pressure of less volatile component
(P_sat_1,v_l,v_v) = get_sat_pure(propanol_1, 313.15)
(P_sat_2,v_l,v_v) = get_sat_pure(propanol_2, 313.15)
(P_sat_3,v_l,v_v) = get_sat_pure(benzene, 313.15);
# +
# Obtain mixture saturation conditions 1
# x composition
x = range(1e-3,1-1e-2,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_1,v_l,v_v,y) = get_bubble_pressure(mix_1, 313.15, x);
# Concatenate results
x_1 = x[:,1]
y_1 = y[:,1]
append!(x_1,1.)
append!(y_1,1.)
append!(P_sat_mix_1,P_sat_3[1])
pushfirst!(x_1,0.)
pushfirst!(y_1,0.)
pushfirst!(P_sat_mix_1,P_sat_1[1])
z_1 = vcat(x_1,reverse(y_1))
P_sat_mix_1 = vcat(P_sat_mix_1,reverse(P_sat_mix_1));
# +
# Obtain mixture saturation conditions 2
# x composition
x = range(1e-3,1-1e-2,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_2,v_l,v_v,y) = get_bubble_pressure(mix_2, 313.15, x);
# Concatenate results
x_2 = x[:,1]
y_2 = y[:,1]
append!(x_2,1.)
append!(y_2,1.)
append!(P_sat_mix_2,P_sat_3[1])
pushfirst!(x_2,0.)
pushfirst!(y_2,0.)
pushfirst!(P_sat_mix_2,P_sat_2[1])
z_2 = vcat(x_2,reverse(y_2))
P_sat_mix_2 = vcat(P_sat_mix_2,reverse(P_sat_mix_2));
# -
# Plotting
plt = plot(z_1,P_sat_mix_1/1e5,color=:black,label="",xlabel="x(benzene),y(benzene)",ylabel="P / bar",xlim=(0,1))
plt = plot!(z_2,P_sat_mix_2/1e5,color=:black,label="",xlabel="x(benzene),y(benzene)",ylabel="P / bar",xlim=(0,1))
display(plt)
# ## Figure 6
# Initiate system
butanol = system(["1-butanol"],"PCSAFT")
butane = system(["butane"],"PCSAFT")
mix = system(["butane","1-butanol"],"PCSAFT");
# Obtain saturation pressure of less volatile component
T_but = [333.15,373.15]
T_butoh = [333.15,373.15,433.15,473.15]
A = get_sat_pure.(butane, T_but)
B = get_sat_pure.(butanol, T_butoh)
P_sat_1 = [A[i][1] for i in 1:length(T_but)]
P_sat_2 = [B[i][1] for i in 1:length(T_butoh)];
# +
# Obtain mixture saturation conditions 1
# x composition
x = range(1e-3,1-1e-2,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_1,v_l,v_v,y) = get_bubble_pressure(mix, 333.15, x);
# Concatenate results
x_1 = x[:,1]
y_1 = y[:,1]
append!(x_1,1.)
append!(y_1,1.)
append!(P_sat_mix_1,P_sat_1[1])
pushfirst!(x_1,0.)
pushfirst!(y_1,0.)
pushfirst!(P_sat_mix_1,P_sat_2[1])
z_1 = vcat(x_1,reverse(y_1))
P_sat_mix_1 = vcat(P_sat_mix_1,reverse(P_sat_mix_1));
# +
# Obtain mixture saturation conditions 2
# x composition
x = range(1e-3,1-1e-2,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_2,v_l,v_v,y) = get_bubble_pressure(mix, 373.15, x);
# Concatenate results
x_2 = x[:,1]
y_2 = y[:,1]
append!(x_2,1.)
append!(y_2,1.)
append!(P_sat_mix_2,P_sat_1[2])
pushfirst!(x_2,0.)
pushfirst!(y_2,0.)
pushfirst!(P_sat_mix_2,P_sat_2[2])
z_2 = vcat(x_2,reverse(y_2))
P_sat_mix_2 = vcat(P_sat_mix_2,reverse(P_sat_mix_2));
# +
# Obtain mixture saturation conditions 3
# x composition
x = range(1e-3,0.98,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_3,v_l,v_v,y) = get_bubble_pressure(mix, 433.15, x);
# Concatenate results
x_3 = x[:,1]
y_3 = y[:,1]
pushfirst!(x_3,0.)
pushfirst!(y_3,0.)
pushfirst!(P_sat_mix_3,P_sat_2[3])
z_3 = vcat(x_3,reverse(y_3))
P_sat_mix_3 = vcat(P_sat_mix_3,reverse(P_sat_mix_3));
# +
# Obtain mixture saturation conditions 4
# x composition
x = range(1e-3,0.73,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_4,v_l,v_v,y) = get_bubble_pressure(mix, 473.15, x);
# Concatenate results
x_4 = x[:,1]
y_4 = y[:,1]
pushfirst!(x_4,0.)
pushfirst!(y_4,0.)
pushfirst!(P_sat_mix_4,P_sat_2[4])
z_4 = vcat(x_4,reverse(y_4))
P_sat_mix_4 = vcat(P_sat_mix_4,reverse(P_sat_mix_4));
# -
# Plotting
plt = plot(z_1,P_sat_mix_1/1e5,color=:black,label="",xlabel="x(butane),y(butane)",ylabel="P / bar",xlim=(0,1))
plt = plot!(z_2,P_sat_mix_2/1e5,color=:black,label="")
plt = plot!(z_3,P_sat_mix_3/1e5,color=:black,label="")
plt = plot!(z_4,P_sat_mix_4/1e5,color=:black,label="")
display(plt)
# ## Figure 7
# Initialise system
ethanol = system(["ethanol"],"PCSAFT")
butane = system(["butane"],"PCSAFT")
mix = system(["butane","ethanol"],"PCSAFT");
# Obtain saturation pressure of less volatile component
T_but = [373.15,413.15]
T_etoh = [373.15,413.15,433.15,463.15]
A = get_sat_pure.(butane, T_but)
B = get_sat_pure.(ethanol, T_etoh)
P_sat_1 = [A[i][1] for i in 1:length(T_but)]
P_sat_2 = [B[i][1] for i in 1:length(T_etoh)];
# +
# Obtain mixture saturation conditions 1
# x composition
x = range(1e-3,1-1e-2,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_1,v_l,v_v,y) = get_bubble_pressure(mix, 373.15, x);
# Concatenate results
x_1 = x[:,1]
y_1 = y[:,1]
append!(x_1,1.)
append!(y_1,1.)
append!(P_sat_mix_1,P_sat_1[1])
pushfirst!(x_1,0.)
pushfirst!(y_1,0.)
pushfirst!(P_sat_mix_1,P_sat_2[1])
z_1 = vcat(x_1,reverse(y_1))
P_sat_mix_1 = vcat(P_sat_mix_1,reverse(P_sat_mix_1));
# +
# Obtain mixture saturation conditions 2
# x composition
x = range(1e-3,1-1e-2,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_2,v_l,v_v,y) = get_bubble_pressure(mix, 413.15, x);
# Concatenate results
x_2 = x[:,1]
y_2 = y[:,1]
append!(x_2,1.)
append!(y_2,1.)
append!(P_sat_mix_2,P_sat_1[2])
pushfirst!(x_2,0.)
pushfirst!(y_2,0.)
pushfirst!(P_sat_mix_2,P_sat_2[2])
z_2 = vcat(x_2,reverse(y_2))
P_sat_mix_2 = vcat(P_sat_mix_2,reverse(P_sat_mix_2));
# +
# Obtain mixture saturation conditions 3
# x composition
x = range(1e-3,0.85,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_3,v_l,v_v,y) = get_bubble_pressure(mix, 433.15, x);
# Concatenate results
x_3 = x[:,1]
y_3 = y[:,1]
pushfirst!(x_3,0.)
pushfirst!(y_3,0.)
pushfirst!(P_sat_mix_3,P_sat_2[3])
z_3 = vcat(x_3,reverse(y_3))
P_sat_mix_3 = vcat(P_sat_mix_3,reverse(P_sat_mix_3));
# +
# Obtain mixture saturation conditions 4
# x composition
x = range(1e-3,0.535,length=200)
x = hcat(x,1 .-x)
# Solve for bubble point and corresponding vapour phase
(P_sat_mix_4,v_l,v_v,y) = get_bubble_pressure(mix, 463.15, x);
# Concatenate results
x_4 = x[:,1]
y_4 = y[:,1]
pushfirst!(x_4,0.)
pushfirst!(y_4,0.)
pushfirst!(P_sat_mix_4,P_sat_2[4])
z_4 = vcat(x_4,reverse(y_4))
P_sat_mix_4 = vcat(P_sat_mix_4,reverse(P_sat_mix_4));
# -
# Plotting
plt = plot(z_1,P_sat_mix_1/1e5,color=:black,label="",xlabel="x(butane),y(butane)",ylabel="P / bar",xlim=(0,1))
plt = plot!(z_2,P_sat_mix_2/1e5,color=:black,label="")
plt = plot!(z_3,P_sat_mix_3/1e5,color=:black,label="")
plt = plot!(z_4,P_sat_mix_4/1e5,color=:black,label="")
display(plt)
# examples/Gross2002.ipynb
# [](https://colab.research.google.com/github/neurogym/ngym_usage/blob/master/supervised/auto_notebooks/rl/Bandit-v0.ipynb)
# ### Install packages if on Colab
# Uncomment following lines to install
# # %tensorflow_version 1.x
# # ! pip install --upgrade stable-baselines # install latest version
# # ! pip install gym # Install gym
# # ! git clone https://github.com/gyyang/neurogym.git # Install neurogym
# # %cd neurogym/
# # ! pip install -e .
# ### Import packages
# +
# #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Sep 23 09:33:08 2020
@author: manuel
"""
import os
from pathlib import Path
import json
import importlib
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from neurogym.wrappers import ALL_WRAPPERS
from stable_baselines.common.vec_env import SubprocVecEnv, DummyVecEnv
from stable_baselines.common import set_global_seeds
from stable_baselines.common.policies import LstmPolicy
from stable_baselines.common.callbacks import CheckpointCallback
import gym
import glob
import neurogym as ngym
envid = 'Bandit-v0'
# -
def get_modelpath(envid):
    # Make local file directory
path = Path('.') / 'files'
os.makedirs(path, exist_ok=True)
path = path / envid
os.makedirs(path, exist_ok=True)
return path
def apply_wrapper(env, wrap_string, params):
wrap_str = ALL_WRAPPERS[wrap_string]
wrap_module = importlib.import_module(wrap_str.split(":")[0])
wrap_method = getattr(wrap_module, wrap_str.split(":")[1])
return wrap_method(env, **params)
def make_env(env_id, rank, seed=0, wrapps={}, **kwargs):
"""
Utility function for multiprocessed env.
:param env_id: (str) the environment ID
:param rank: (int) index of the subprocess
    :param seed: (int) the initial seed for RNG
"""
def _init():
env = gym.make(env_id, **kwargs)
env.seed(seed + rank)
for wrap in wrapps.keys():
if not (wrap == 'MonitorExtended-v0' and rank != 0):
env = apply_wrapper(env, wrap, wrapps[wrap])
return env
set_global_seeds(seed)
return _init
def get_alg(alg):
if alg == "A2C":
from stable_baselines import A2C as algo
elif alg == "ACER":
from stable_baselines import ACER as algo
elif alg == "ACKTR":
from stable_baselines import ACKTR as algo
elif alg == "PPO2":
from stable_baselines import PPO2 as algo
return algo
# ### Train network
# +
"""Train a reinforcement learning network.
Save network in a path determined by environment ID.
Args:
envid: str, environment ID.
"""
modelpath = get_modelpath(envid)
config = {
'dt': 100,
'hidden_size': 64,
'lr': 1e-2,
'alg': 'ACER',
'rollout': 20,
'n_thrds': 1,
'wrappers_kwargs': {},
'alg_kwargs': {},
'seed': 0,
# 'num_steps': 100000,
'num_steps': 100,
'envid': envid,
}
env_kwargs = {'dt': config['dt']}
config['env_kwargs'] = env_kwargs
# Save config
with open(modelpath / 'config.json', 'w') as f:
json.dump(config, f)
algo = get_alg(config['alg'])
# Create the training environments
make_envs = [make_env(env_id=envid, rank=i, seed=config['seed'],
wrapps=config['wrappers_kwargs'],
**env_kwargs)
for i in range(config['n_thrds'])]
# env = SubprocVecEnv(make_envs)
env = DummyVecEnv(make_envs) # Less efficient but more robust
model = algo(LstmPolicy, env, verbose=0, n_steps=config['rollout'],
n_cpu_tf_sess=config['n_thrds'], tensorboard_log=None,
policy_kwargs={"feature_extraction": "mlp",
"n_lstm": config['hidden_size']},
**config['alg_kwargs'])
chckpnt_cllbck = CheckpointCallback(save_freq=int(config['num_steps']/10),
save_path=modelpath,
name_prefix='model')
model.learn(total_timesteps=config['num_steps'], callback=chckpnt_cllbck)
print('Finished Training')
# -
def infer_test_timing(env):
"""Infer timing of environment for testing."""
timing = {}
for period in env.timing.keys():
period_times = [env.sample_time(period) for _ in range(100)]
timing[period] = np.median(period_times)
return timing
def extend_obs(ob, num_threads):
sh = ob.shape
return np.concatenate((ob, np.zeros((num_threads-sh[0], sh[1]))))
def order_by_sufix(file_list):
file_list = [os.path.basename(x) for x in file_list]
flag = 'model.zip' in file_list
file_list = [x for x in file_list if x != 'model.zip']
sfx = [int(x[x.find('_')+1:x.rfind('_')]) for x in file_list]
sorted_list = [x for _, x in sorted(zip(sfx, file_list))]
if flag:
sorted_list.append('model.zip')
return sorted_list, np.max(sfx)
# ### Run network after training for analysis
"""Run trained networks for analysis.
Args:
envid: str, Environment ID
Returns:
activity: a list of activity matrices
info: pandas dataframe, each row is information of a trial
config: dict of network, training configurations
"""
modelpath = get_modelpath(envid)
files = glob.glob(str(modelpath)+'/model*')
if len(files) > 0:
with open(modelpath / 'config.json') as f:
config = json.load(f)
env_kwargs = config['env_kwargs']
wrappers_kwargs = config['wrappers_kwargs']
seed = config['seed']
# Run network to get activity and info
sorted_models, last_model = order_by_sufix(files)
model_name = sorted_models[-1]
algo = get_alg(config['alg'])
model = algo.load(modelpath / model_name, tensorboard_log=None,
custom_objects={'verbose': 0})
# Environment
env = make_env(env_id=envid, rank=0, seed=seed, wrapps=wrappers_kwargs,
**env_kwargs)()
env.timing = infer_test_timing(env)
env.reset(no_step=True)
# Instantiate the network and print information
activity = list()
state_mat = []
ob = env.reset()
_states = None
done = False
info_df = pd.DataFrame()
# num_steps = 10 ** 5
num_steps = 10 ** 3
for stp in range(int(num_steps)):
ob = np.reshape(ob, (1, ob.shape[0]))
done = [done] + [False for _ in range(config['n_thrds']-1)]
action, _states = model.predict(extend_obs(ob, config['n_thrds']),
state=_states, mask=done)
action = action[0]
ob, rew, done, info = env.step(action)
if done:
env.reset()
if isinstance(info, (tuple, list)):
info = info[0]
action = action[0]
state_mat.append(_states[0, int(_states.shape[1]/2):])
if info['new_trial']:
gt = env.gt_now
correct = action == gt
# Log trial info
trial_info = env.trial
trial_info.update({'correct': correct, 'choice': action})
info_df = info_df.append(trial_info, ignore_index=True)
# Log stimulus period activity
state_mat = np.array(state_mat)
# Excluding decision period if exists
if 'decision' in env.start_ind:
state_mat = state_mat[:env.start_ind['decision']]
activity.append(state_mat)
state_mat = []
env.close()
activity = np.array(activity)
info = info_df
# ### General analysis
# +
def analysis_average_activity(activity, info, config):
# Load and preprocess results
plt.figure(figsize=(1.2, 0.8))
t_plot = np.arange(activity.shape[1]) * config['dt']
plt.plot(t_plot, activity.mean(axis=0).mean(axis=-1))
analysis_average_activity(activity, info, config)
# -
def get_conditions(info):
"""Get a list of task conditions to plot."""
conditions = info.columns
# This condition's unique value should be less than 5
new_conditions = list()
for c in conditions:
try:
n_cond = len(pd.unique(info[c]))
if 1 < n_cond < 5:
new_conditions.append(c)
except TypeError:
pass
return new_conditions
# +
def analysis_activity_by_condition(activity, info, config):
conditions = get_conditions(info)
for condition in conditions:
values = pd.unique(info[condition])
plt.figure(figsize=(1.2, 0.8))
t_plot = np.arange(activity.shape[1]) * config['dt']
for value in values:
a = activity[info[condition] == value]
plt.plot(t_plot, a.mean(axis=0).mean(axis=-1), label=str(value))
plt.legend(title=condition, loc='center left', bbox_to_anchor=(1.0, 0.5))
analysis_activity_by_condition(activity, info, config)
# +
def analysis_example_units_by_condition(activity, info, config):
conditions = get_conditions(info)
if len(conditions) < 1:
return
example_ids = np.array([0, 1])
for example_id in example_ids:
example_activity = activity[:, :, example_id]
fig, axes = plt.subplots(
len(conditions), 1, figsize=(1.2, 0.8 * len(conditions)),
sharex=True)
for i, condition in enumerate(conditions):
ax = axes[i]
values = pd.unique(info[condition])
t_plot = np.arange(activity.shape[1]) * config['dt']
for value in values:
a = example_activity[info[condition] == value]
ax.plot(t_plot, a.mean(axis=0), label=str(value))
ax.legend(title=condition, loc='center left', bbox_to_anchor=(1.0, 0.5))
ax.set_ylabel('Activity')
if i == len(conditions) - 1:
ax.set_xlabel('Time (ms)')
if i == 0:
ax.set_title('Unit {:d}'.format(example_id + 1))
analysis_example_units_by_condition(activity, info, config)
# +
def analysis_pca_by_condition(activity, info, config):
# Reshape activity to (N_trial x N_time, N_neuron)
activity_reshape = np.reshape(activity, (-1, activity.shape[-1]))
pca = PCA(n_components=2)
pca.fit(activity_reshape)
conditions = get_conditions(info)
for condition in conditions:
values = pd.unique(info[condition])
fig = plt.figure(figsize=(2.5, 2.5))
ax = fig.add_axes([0.2, 0.2, 0.7, 0.7])
for value in values:
# Get relevant trials, and average across them
a = activity[info[condition] == value].mean(axis=0)
a = pca.transform(a) # (N_time, N_PC)
plt.plot(a[:, 0], a[:, 1], label=str(value))
plt.legend(title=condition, loc='center left', bbox_to_anchor=(1.0, 0.5))
plt.xlabel('PC 1')
plt.ylabel('PC 2')
analysis_pca_by_condition(activity, info, config)
# training/auto_notebooks/rl/Bandit-v0.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: erlotinib-venv
# language: python
# name: erlotinib-venv
# ---
# # Infer Population Model Parameters from Individuals in Lung Cancer Control Group
# +
import os
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pints
from scipy.optimize import minimize, basinhopping
import xarray as xr
import erlotinib as erlo
# -
# ## Show control group data
# +
# Get data
data = erlo.DataLibrary().lung_cancer_control_group()
# Create scatter plot
fig = erlo.plots.PDTimeSeriesPlot()
fig.add_data(data, biomarker='Tumour volume')
fig.set_axis_labels(xlabel=r'$\text{Time in day}$', ylabel=r'$\text{Tumour volume in cm}^3$')
# Show figure
fig.show()
# -
# **Figure 1:** Visualisation of the measured tumour growth in 8 mice with patient-derived lung cancer implants.
# ## Build model
# +
# Define mechanistic model
path = erlo.ModelLibrary().tumour_growth_inhibition_model_koch_reparametrised()
mechanistic_model = erlo.PharmacodynamicModel(path)
mechanistic_model.set_parameter_names(names={
'myokit.tumour_volume': 'Tumour volume in cm^3',
'myokit.critical_volume': 'Critical volume in cm^3',
'myokit.drug_concentration': 'Drug concentration in mg/L',
'myokit.kappa': 'Potency in L/mg/day',
'myokit.lambda': 'Exponential growth rate in 1/day'})
mechanistic_model.set_output_names({
'myokit.tumour_volume': 'Tumour volume'})
# Define error model
error_model = erlo.ConstantAndMultiplicativeGaussianErrorModel()
# Define population model
population_model = [
erlo.LogNormalModel(), # Initial tumour volume
erlo.LogNormalModel(), # Critical tumour volume
erlo.LogNormalModel(), # Tumour growth rate
erlo.PooledModel(), # Base noise
erlo.PooledModel()] # Relative noise
# Build model
problem = erlo.ProblemModellingController(
mechanistic_model, error_model)
problem.fix_parameters({
'Drug concentration in mg/L': 0,
'Potency in L/mg/day': 0})
problem.set_population_model(population_model)
# -
# ## Prior predictive checks
# ### Population model
# +
# Define prior distribution
log_priors = [
pints.TruncatedGaussianLogPrior(mean=0.1, sd=1, a=0, b=np.inf), # Mean Initial tumour volume
pints.TruncatedGaussianLogPrior(mean=1, sd=1, a=0, b=np.inf), # Std. Initial tumour volume
pints.TruncatedGaussianLogPrior(mean=1, sd=1, a=0, b=np.inf), # Mean Critical tumour volume
pints.TruncatedGaussianLogPrior(mean=1, sd=1, a=0, b=np.inf), # Std. Critical tumour volume
pints.TruncatedGaussianLogPrior(mean=0.1, sd=1, a=0, b=np.inf), # Mean Growth rate
pints.TruncatedGaussianLogPrior(mean=1, sd=1, a=0, b=np.inf), # Std. Growth rate
pints.TruncatedGaussianLogPrior(mean=0.1, sd=1, a=0, b=np.inf), # Pooled Sigma base
pints.TruncatedGaussianLogPrior(mean=0.1, sd=0.1, a=0, b=np.inf)] # Pooled Sigma rel.
log_prior = pints.ComposedLogPrior(*log_priors)
# Define prior predictive model
predictive_model = problem.get_predictive_model()
model = erlo.PriorPredictiveModel(predictive_model, log_prior)
# Sample from prior predictive model
seed = 42
n_samples = 100
times = np.linspace(0, 30)
samples = model.sample(times, n_samples, seed)
# Visualise prior predictive model
fig = erlo.plots.PDPredictivePlot()
fig.add_prediction(data=samples, bulk_probs=[0.3, 0.6, 0.9])
fig.set_axis_labels(xlabel=r'$\text{Time in day}$', ylabel=r'$\text{Tumour volume in cm}^3$')
fig.show()
# -
# **Figure 3:** Approximate prior predictive model for the tumour growth in a population over time. The shaded areas indicate the 30%, 60% and 90% bulk of the prior predictive model (from dark to light). The prior predictive model was approximated by sampling 100 parameter sets from the prior distribution and subsequently sampling the predictive model at 50 equidistant time points for each parameter set.
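# The prior predictive construction above follows a generic recipe: draw parameter sets
# from the prior, push each through the model, and summarise the spread of the resulting
# trajectories. A minimal sketch with a toy exponential-growth model standing in for the
# mechanistic model (the model and the prior choices below are illustrative assumptions,
# not the erlotinib API):

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(theta, times):
    # Toy stand-in for the mechanistic model: exponential tumour growth
    v0, r = theta
    return v0 * np.exp(r * times)

n_samples = 100
times = np.linspace(0, 30, 50)

# Draw parameters from a half-normal "prior" (illustrative only)
v0 = np.abs(rng.normal(0.1, 1.0, size=n_samples))
r = np.abs(rng.normal(0.1, 0.1, size=n_samples))

# One predicted trajectory per parameter draw
predictions = np.stack([toy_model(th, times) for th in zip(v0, r)])

# 90% bulk of the prior predictive distribution at each time point
lower, upper = np.percentile(predictions, [5, 95], axis=0)
```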
# ## Find maximum a posteriori estimates
# + tags=[]
# Define log-posterior
problem.set_data(data)
problem.set_log_prior(log_priors)
log_posterior = problem.get_log_posterior()
def fun(log_parameters):
score, sens = log_posterior.evaluateS1(np.exp(log_parameters))
return (-score, -sens)
# Run optimisation
initial_parameters = np.log(erlo.InferenceController(log_posterior)._initial_params[0, 0])
print(fun(initial_parameters))
result = minimize(fun=fun, x0=initial_parameters, method='L-BFGS-B', jac=True)
result
# -
np.exp(result.x)  # repeated runs gave e.g. 408.58, 406.15, 219.54, 36.91
# Running the optimisation repeatedly produces vastly different results!
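# This sensitivity to the starting point is typical of multimodal objectives. Basin-hopping
# repeatedly perturbs the current solution and re-optimises locally, which lets it escape
# local minima. A minimal sketch on a toy 1-D multimodal function (not the erlotinib
# posterior; the function and step size are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import basinhopping, minimize

def f(x):
    # Two minima: a local one near x = +1 and the global one near x = -1
    x = x[0]
    return (x**2 - 1)**2 + 0.3 * x

# A plain local optimiser started near the wrong basin stays there
local = minimize(f, x0=[0.9], method='L-BFGS-B')

# Basin-hopping perturbs the solution and re-optimises, escaping local minima
glob = basinhopping(f, x0=[0.9], niter=100, stepsize=2.0, seed=0,
                    minimizer_kwargs={'method': 'L-BFGS-B'})
```

# With these settings, `local.x` typically stays in the basin near +1 while `glob.x`
# reaches the deeper minimum near -1.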
# Run optimisation
initial_parameters = np.log(erlo.InferenceController(log_posterior)._initial_params[0, 0])
minimizer_kwargs = {"method":"L-BFGS-B", "jac":True}
result = basinhopping(
func=fun, x0=initial_parameters, minimizer_kwargs=minimizer_kwargs, niter=10000)
result
np.exp(result.x) # -98.49277693232358
log_posterior.get_parameter_names(include_ids=True)
# lung_cancer/control_group_analysis/tgi_koch_2009_reparametrised_model/population_inference/lognormal/find_maps.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab={} colab_type="code" id="n3lnWjvI83ix"
# # Forecasting the progression of diabetes patients (APPENDICES)
# -
# ## Description of the real-world problem
# Medical treatments are guided by expectations about recovery or disease progression. In this case, a medical team wants forecasts for diabetes patients to inform decisions about their treatment.
# ## Description of the problem in terms of the data
# The goal is to predict the progression of diabetes one year ahead from the variables measured for 442 patients. The data are stored in the file `datos/diabetes.csv`. The measured variables are: age, sex, body mass index, blood pressure and six blood serum measurements. Disease progression is to be forecast from these variables.
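# The variables described match scikit-learn's built-in diabetes dataset, so (assuming
# `datos/diabetes.csv` is a copy of that classic dataset) the data can also be loaded
# directly for a quick sanity check:

```python
from sklearn.datasets import load_diabetes

# 442 patients, 10 standardised features (age, sex, bmi, bp, s1-s6);
# target = quantitative measure of disease progression one year after baseline
X, y = load_diabetes(return_X_y=True, as_frame=True)
print(X.shape)  # (442, 10)
```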
# ## Possible approaches
# In this case, the results of a linear regression model and an artificial neural network model are to be compared.
# ## Requirements
# You must:
#
# * Determine which of the variables considered are relevant to the problem.
#
#
# * Determine whether any transformation of the input or output variables improves the model's forecast.
#
#
# * Build a linear regression model to serve as a baseline for an artificial neural network model.
#
#
# * Build an artificial neural network model, determining the number of neurons in the hidden layer(s).
#
#
# * Use a technique such as cross-validation (or similar) to establish the robustness of the model.
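# The cross-validation requirement can be sketched with `cross_val_score`. The linear
# baseline and the use of scikit-learn's built-in copy of the dataset are assumptions
# for illustration:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

# 5-fold CV; sklearn maximises scores, hence the negated-MSE scorer
scores = cross_val_score(LinearRegression(), X, y, cv=5,
                         scoring='neg_mean_squared_error')
mse = -scores.mean()
```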
# # Finding an optimal neural network by iteration
#
# ## Setting up the development environment - importing libraries
#
# The first step is to set up the development environment by importing the libraries needed for the work.
# +
## Import the libraries needed for the work
import os
import numpy as np
import sys
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import math
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split,cross_val_score
from sklearn import linear_model
from sklearn.metrics import mean_squared_error,r2_score,mean_absolute_error
from sklearn.neural_network import MLPRegressor
from statsmodels import robust
from sklearn.feature_selection import RFECV
# %matplotlib inline
# -
# ## Reading the data
# First, the data are read from the diabetes.csv file provided by the instructor, using the pandas (pd) library and its functions.
# +
# Read the diabetes data
## Path to the CSV file containing the data
ruta = 'datos/diabetes.csv'
dataFrame = pd.read_csv(ruta)
dataFrame.head()
# -
# ## Scaling the data
#
# Since the data contain negative values, all values are scaled so that the negatives do not distort the model's behaviour.
scaler = MinMaxScaler()
dataFrameScalar = pd.DataFrame(scaler.fit_transform(dataFrame), columns=dataFrame.columns)
dataFrameScalar.head()
dataFrameScalar.describe()
# ## Developing the neural network model from the linear regression model
# A multilayer-perceptron (MLP) non-linear regression model is developed next, first with all the variables in the dataset and then with the variables from the best model chosen in the first part. Several network configurations are tried, and the one with the lowest cross-validation score is selected.
#
# The best possible configuration may not be found, but many combinations of activation function, solver and number of neurons and hidden layers are tried.
#
# Models are iterated with 1 to 10 hidden layers, 1 to 12 neurons per hidden layer, 4 activation functions and 3 solvers.
#
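# The four nested loops described above implement a grid search by hand. scikit-learn's
# `GridSearchCV` expresses the same idea more idiomatically and cross-validates every
# configuration. The tiny grid below is an illustrative assumption, not the full search:

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

X, y = load_diabetes(return_X_y=True)
X = MinMaxScaler().fit_transform(X)

# Deliberately tiny grid so the sketch runs quickly
param_grid = {
    'hidden_layer_sizes': [(4,), (8,), (4, 4)],
    'activation': ['relu', 'logistic'],
}
search = GridSearchCV(
    MLPRegressor(solver='adam', max_iter=500, random_state=0),
    param_grid, cv=3, scoring='neg_mean_squared_error')
search.fit(X, y)
best = search.best_params_
```

# `search.cv_results_` holds the score of every configuration tried.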
# ### MLP neural network with all the variables in the dataset
# +
np.random.seed(0)
activaciones = ['identity', 'logistic', 'tanh', 'relu']
solveres = ['lbfgs', 'sgd', 'adam']
optimal_scores=[]
optimal_capas=None
optimal_neuronas=None
optimal_activacion=None
optimal_solver=None
optimal_model=None
variables=dataFrame.columns
variables=variables[:-1]
x=dataFrameScalar.loc[:,variables]
y=dataFrameScalar.loc[:,['Y']]
xEntrenamiento, xValidacion, yEntrenamiento, yValidacion = train_test_split(x, y, test_size=0.25, random_state=0)
for capas in range (1,10):
for neuronas in range (1,12):
for activacion in activaciones:
for solver in solveres:
                m = MLPRegressor(
                    hidden_layer_sizes = (neuronas,)*capas, # `capas` hidden layers of `neuronas` neurons each
                    activation = activacion,        # {'identity', 'logistic', 'tanh', 'relu'}
                    solver = solver,                # {'lbfgs', 'sgd', 'adam'}
                    alpha = 0.0,                    # No L2 regularisation
                    learning_rate_init = 0.1,       # Initial learning rate
                    learning_rate = 'constant',     # The rate does not adapt automatically
                    verbose = False,                # Do not report the optimisation process
                    shuffle = True,                 # Shuffle samples on each iteration
                    tol = 1e-8,                     # Convergence tolerance
                    max_iter = 25000,               # Maximum number of iterations
                    momentum = 0.0,                 # No momentum
                    nesterovs_momentum = False)
m.fit(xEntrenamiento, yEntrenamiento.values.ravel()) # Entrena el modelo
scores = cross_val_score(m, x, y.values.ravel(), cv=5,scoring = 'neg_mean_squared_error')
                if len(optimal_scores) == 0 or round(abs(optimal_scores.mean()),5) > round(abs(scores.mean()),5):
                    optimal_scores = scores
                    optimal_capas = capas
                    optimal_neuronas = neuronas
                    optimal_activacion = activacion
                    optimal_solver = solver
                    optimal_model = m
                    print('Current best:')
                    print('Number of hidden layers: ', optimal_capas)
                    print('Neurons per hidden layer: ', optimal_neuronas)
                    print('Activation function: ', optimal_activacion)
                    print('Solver type: ', optimal_solver)
                    print('MSE', round(abs(optimal_scores.mean()),5))
                    #print('R^2: {}'.format(optimal_model.score(xValidacion, yValidacion.values.ravel())))
                    scores = cross_val_score(optimal_model, x, y.values.ravel(), cv=5)
                    print('Accuracy: {} (+/- {})'.format(abs(optimal_scores.mean()), abs(optimal_scores.std()) * 2))
print('For the multilayer-perceptron non-linear regression model, the best configuration found was:')
print('Number of hidden layers: ', optimal_capas)
print('Neurons per hidden layer: ', optimal_neuronas)
print('Activation function: ', optimal_activacion)
print('Solver type: ', optimal_solver)
print('MSE', round(abs(optimal_scores.mean()),5))
#print('R^2: {}'.format(optimal_model.score(xValidacion, yValidacion.values.ravel())))
scores = cross_val_score(optimal_model, x, y.values.ravel(), cv=5)
print('Accuracy: {} (+/- {})'.format(abs(optimal_scores.mean()), abs(optimal_scores.std()) * 2))
# -
plt.plot(xValidacion, yValidacion, 'o');
plt.grid()
plt.plot(xValidacion, optimal_model.predict(xValidacion), '-');
# ### MLP neural network with the variables from the linear regression on the dataset
# +
np.random.seed(0)
activaciones = ['identity', 'logistic', 'tanh', 'relu']
solveres = ['lbfgs', 'sgd', 'adam']
optimal_scores=[]
optimal_capas=None
optimal_neuronas=None
optimal_activacion=None
optimal_solver=None
optimal_model=None
variables=dataFrame.columns
variables=variables[:-1]
x=dataFrameScalar.loc[:,['s5', 'bp', 'bmi', 'sex', 's6','s3']]
y=dataFrameScalar.loc[:,['Y']]
xEntrenamiento, xValidacion, yEntrenamiento, yValidacion = train_test_split(x, y, test_size=0.25, random_state=0)
for capas in range (1,10):
for neuronas in range (1,12):
for activacion in activaciones:
for solver in solveres:
                m = MLPRegressor(
                    hidden_layer_sizes = (neuronas,)*capas, # `capas` hidden layers of `neuronas` neurons each
                    activation = activacion,        # {'identity', 'logistic', 'tanh', 'relu'}
                    solver = solver,                # {'lbfgs', 'sgd', 'adam'}
                    alpha = 0.0,                    # No L2 regularisation
                    learning_rate_init = 0.1,       # Initial learning rate
                    learning_rate = 'constant',     # The rate does not adapt automatically
                    verbose = False,                # Do not report the optimisation process
                    shuffle = True,                 # Shuffle samples on each iteration
                    tol = 1e-8,                     # Convergence tolerance
                    max_iter = 25000,               # Maximum number of iterations
                    momentum = 0.0,                 # No momentum
                    nesterovs_momentum = False)
m.fit(xEntrenamiento, yEntrenamiento.values.ravel()) # Entrena el modelo
scores = cross_val_score(m, x, y.values.ravel(), cv=5,scoring = 'neg_mean_squared_error')
                if len(optimal_scores) == 0 or round(abs(optimal_scores.mean()),5) > round(abs(scores.mean()),5):
                    optimal_scores = scores
                    optimal_capas = capas
                    optimal_neuronas = neuronas
                    optimal_activacion = activacion
                    optimal_solver = solver
                    optimal_model = m
                    print('Current best:')
                    print('Number of hidden layers: ', optimal_capas)
                    print('Neurons per hidden layer: ', optimal_neuronas)
                    print('Activation function: ', optimal_activacion)
                    print('Solver type: ', optimal_solver)
                    print('MSE', round(abs(optimal_scores.mean()),5))
                    #print('R^2: {}'.format(optimal_model.score(xValidacion, yValidacion.values.ravel())))
                    scores = cross_val_score(optimal_model, x, y.values.ravel(), cv=5)
                    print('Accuracy: {} (+/- {})'.format(abs(optimal_scores.mean()), abs(optimal_scores.std()) * 2))
print('For the multilayer-perceptron non-linear regression model, the best configuration found was:')
print('Number of hidden layers: ', optimal_capas)
print('Neurons per hidden layer: ', optimal_neuronas)
print('Activation function: ', optimal_activacion)
print('Solver type: ', optimal_solver)
print('MSE', round(abs(optimal_scores.mean()),5))
#print('R^2: {}'.format(optimal_model.score(xValidacion, yValidacion.values.ravel())))
scores = cross_val_score(optimal_model, x, y.values.ravel(), cv=5)
print('Accuracy: {} (+/- {})'.format(abs(optimal_scores.mean()), abs(optimal_scores.std()) * 2))
# -
plt.plot(xValidacion, yValidacion, 'o');
plt.grid()
plt.plot(xValidacion, optimal_model.predict(xValidacion), '-');
# ## Conclusion of the second part (appendices)
#
# For the multilayer-perceptron non-linear regression model using all the variables in the dataset, the best configuration found was:
#
# - Number of hidden layers: 6
# - Neurons per hidden layer: 7
# - Activation function: relu
# - Solver type: sgd
# - MSE 0.02946
# - Accuracy: 0.02945645049568519 (+/- 0.003738514806169718)
#
# For the multilayer-perceptron non-linear regression model using the variables from the linear regression ('s5', 'bp', 'bmi', 'sex', 's6', 's3'), the best configuration found was:
#
# - Number of hidden layers: 2
# - Neurons per hidden layer: 9
# - Activation function: logistic
# - Solver type: adam
# - MSE 0.02904
# - Accuracy: 0.029043380614759834 (+/- 0.004019913668541137)
#
# 02-diabetes-Anexo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import unicode_literals
from youtube_search import YoutubeSearch
import youtube_dl
import os
from pydub import AudioSegment
from pydub.silence import split_on_silence
import speech_recognition as sr
import bs4 as bs
import urllib.request
import re
import nltk
import heapq
import pubchempy as pcp
import smtplib,ssl
import requests
import json
import pandas as pd
import time
import yagmail
from wit import Wit
chemicals=[]
ureceiver='<EMAIL>'
usender="<EMAIL>"
access_token_wit='<KEY>'
#question='Can I combine the benzene and methanol?'
question=input('Please ask a question about the chemical compound that you want to know about:')
def wit_request(question,access_token_wit):
client = Wit(access_token=access_token_wit)
resp=client.message(msg=question)
data_extraction=json.dumps(resp)
data=json.loads(data_extraction)
def depth(data):
if "entities" in data:
return 1 + max([0] + list(map(depth, data['entities'])))
else:
return 1
levels_with_entities=depth(data)
#print(json.dumps(data,indent=2,sort_keys=True))
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
if obj not in arr:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
#get intents
intent=resp['intents'][0]['name']
#extract chemicals that wit.ai found
result_confirm=json_extract(data,'chemical_substance')
chemicals=[]
number_chemicals=len(result_confirm)
for q in range(number_chemicals):
chemicals.append(result_confirm[q]['value'])
#print(json.dumps(result_confirm,indent=2,sort_keys=True))
#print('result confirm:',chemicals,intent)#result_confirm)
return (chemicals,intent)
def summarizing_video(chemical_compound):
confirmation_video=""
summary=''
formatted_article_text=''
max_elements=1
#results=YoutubeSearch('Benzene',max_results=5).to_json()
results=YoutubeSearch(chemical_compound,max_results=max_elements).to_dict()
#print(results)
def validate_reply(confirmation_video):
confirmation_verified=''
if confirmation_video=='YES' or confirmation_video=='NO':
confirmation_verified=confirmation_video
return confirmation_verified
else:
print('Please confirm: do you want me to transcribe this video?')
confirmation_video=input('(yes/no):').upper()
return validate_reply(confirmation_video)
for i in range(max_elements):
url="https://www.youtube.com/watch?v="+results[i]['id']
title_video=results[i]['title']
duration=results[i]['duration']
views=results[i]['views']
print('I found this video, do you want me to transcribe it?\n')
print('****************')
print("Title: ",title_video)
print('Duration: ',duration)
print("Url: ",url)
print("Views: ",views)
print("***************")
confirmation_video=input('Transcribing video? (yes/no):').upper()
confirmation_verified=validate_reply(confirmation_video)
print('out',confirmation_verified)
if confirmation_verified=='YES':
print('in',confirmation_verified)
ydl_opts={
'format':'bestaudio/best',
'postprocessors': [{
'key':'FFmpegExtractAudio',
'preferredcodec':'wav',
'preferredquality':'192',
}],
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
#extract_info with download=True fetches metadata and downloads in one call,
#avoiding the duplicate download caused by calling download() and then extract_info()
info_dict=ydl.extract_info(url,download=True)
fn=ydl.prepare_filename(info_dict)
path=fn[:-4]+"wav"
r=sr.Recognizer()
print("started..")
def get_large_audio_transcription(path):
sound=AudioSegment.from_wav(path)
chunks=split_on_silence(sound,
min_silence_len=500,
silence_thresh=sound.dBFS-14,
keep_silence=500,)
folder_name="audio-chunks"
if not os.path.isdir(folder_name):
os.mkdir(folder_name)
whole_text=""
for i,audio_chunk in enumerate(chunks,start=1):
chunk_filename=os.path.join(folder_name,f"chunk{i}.wav")
audio_chunk.export(chunk_filename,format="wav")
with sr.AudioFile(chunk_filename) as source:
audio_listened=r.record(source)
try:
text=r.recognize_google(audio_listened,language="en-US")
except sr.UnknownValueError as e:
pass
#print("Error:",str(e))
else:
text=f"{text.capitalize()}. "
#print(chunk_filename,":",text)
whole_text+=text
return whole_text
#path="Audacity FFMpeg codec install for Windows-v2J6fT65Ydc.wav"
#print("\nFull text:",get_large_audio_transcription(path))
article_text=get_large_audio_transcription(path)
article_text=re.sub(r'\[[0-9]*\]',' ',article_text)
article_text=re.sub(r'\s+',' ',article_text)
formatted_article_text=re.sub('[^a-zA-Z]',' ',article_text)
formatted_article_text=re.sub(r'\s+',' ',formatted_article_text)
#print(formatted_article_text) #final text from audio
print('*********************')
print("Summarizing...")
#tokenization
sentence_list=nltk.sent_tokenize(article_text)
stopwords=nltk.corpus.stopwords.words('english')
word_frequencies={}
for word in nltk.word_tokenize(formatted_article_text):
if word not in stopwords:
if word not in word_frequencies.keys():
word_frequencies[word]=1
else:
word_frequencies[word]+=1
#print(list(map(str,word_frequencies)))
#word frequency
maximum_frequency=max(word_frequencies.values())
for word in word_frequencies.keys():
word_frequencies[word]=(word_frequencies[word]/maximum_frequency)
#print(word_frequencies)
#sentence score
sentence_scores={}
for sent in sentence_list:
for word in nltk.word_tokenize(sent.lower()):
if word in word_frequencies.keys():
if len(sent.split(' '))<50:
if sent not in sentence_scores.keys():
sentence_scores[sent]=word_frequencies[word]
else:
sentence_scores[sent]+=word_frequencies[word]
#top 10 highest-scoring sentences
summary_sentences=heapq.nlargest(10,sentence_scores,key=sentence_scores.get)
summary=' '.join(summary_sentences)
return (summary,formatted_article_text)
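The frequency-based scoring used inside summarizing_video() can be shown in isolation. The sketch below is an illustration only, not part of the app: it swaps nltk tokenization for naive re-based splitting so it runs without the audio pipeline or nltk data downloads, and the function name and sample text are invented.

```python
import re
import heapq

# Standalone sketch of the scoring scheme in summarizing_video():
# score each sentence by the normalized frequency of its words,
# then keep the top-n sentences. Tokenization here is deliberately naive.
def tiny_summary(text, top_n=1):
    sentences = [s.strip() + '.' for s in text.split('.') if s.strip()]
    words = re.findall(r'[a-z]+', text.lower())
    freq = {}
    for w in words:
        freq[w] = freq.get(w, 0) + 1
    peak = max(freq.values())
    freq = {w: c / peak for w, c in freq.items()}  # normalize, as above
    scores = {}
    for s in sentences:
        for w in re.findall(r'[a-z]+', s.lower()):
            scores[s] = scores.get(s, 0) + freq[w]
    return ' '.join(heapq.nlargest(top_n, scores, key=scores.get))

print(tiny_summary("Benzene is toxic. Benzene burns. Water is safe."))
# -> Benzene is toxic.
```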
#find the email
def send_email(ureceiver,usender,body,result_content,email_title):
if not result_content:
result_content='No records found, please search with synonyms.'
receiver=ureceiver
body="Hello, Buddy! "
body+="This is an email with the information requested in the chat. Hope you find it helpful."
yag=yagmail.SMTP("<EMAIL>","VentaProduct51g1")
email_sent=yag.send(to=receiver,
subject=email_title,
contents=result_content)
#yagmail returns smtplib's sendmail result: an empty dict when every recipient
#was accepted, so a falsy value here indicates success
if not email_sent:
email_confirmation='Email Sent'
else:
email_confirmation='Email Not Sent'
return email_confirmation
#find info safe storage
def info_safe_storage(cid):
API_ENDPOINT='https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'+str(cid)+'/JSON'
dat={}
#print(newintent)
headers = {'authorization': 'Bearer ','Content-Type': 'application/json'}
resp=requests.post(API_ENDPOINT,headers=headers,json=dat)
textt=json.loads(resp.content)
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
result_safe=json_extract(textt,'Information for Safe Storage')
result_storage=json_extract(textt,'Information for Storage Conditions')
#print(result_storage)
result_validate_safe=json_extract(result_safe,'Not Classified')
result_validate_storage=json_extract(result_storage,'Not Classified')
response_title='Handling and Storage Summary:\n'
if len(result_safe[0])==0 and len(result_storage[0])==0:
if result_validate_storage[0]['validate']=='Not Classified' and result_validate_safe[0]['validate']=='Not Classified':
response_api="There are no records of hazard classification, so it may not be dangerous; please look for other professional resources"
response=response_title+response_api
else:
print('Continue')
handling_storage={}
safe_storage={}
for key,value in result_storage[0].items():
if value not in handling_storage.values():
handling_storage[key]=value
for key,value in result_safe[0].items():
if value not in safe_storage.values():
safe_storage[key]=value
#print(handling_storage)
#print(json.dumps(handling_storage,indent=2,sort_keys=True))
elements_storage=len(handling_storage['Information'])
elements_safe=len(safe_storage['Information'])
response_storage_res=""
response_safe_res=""
for i in range(elements_storage):
response_storage=handling_storage['Information'][i]['Value']['StringWithMarkup'][0]['String']
response_storage_res+=response_storage
for x in range(elements_safe):
response_safe=safe_storage['Information'][x]['Value']['StringWithMarkup'][0]['String']
response_safe_res+=response_safe
#print("---",safe_storage['Information'][x]['Value']['StringWithMarkup'][0]['String'])
response=response_storage_res+response_safe_res
#print(response_storage_res,"----",response_safe_res)
return response
#find toxicity documentation
def toxicity(chemical_compound,cid):
API_ENDPOINT='https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'+str(cid)+'/JSON'
dat={}
headers = {'authorization': 'Bearer ','Content-Type': 'application/json'}
resp=requests.post(API_ENDPOINT,headers=headers,json=dat)
textt=json.loads(resp.content)
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
result=json_extract(textt,'Toxicity Summary')
result_validate=json_extract(result,'Not Classified')
#print(json.dumps(result_validate,indent=2,sort_keys=True))
response_title='Toxicity Summary:\n'
if len(result[0])==0:
if len(result_validate)>=1:
response_api="There are no records of hazard classification, so it may not be dangerous; please look for other professional resources"
result_toxicity=response_title+response_api
else:
toxicity={}
for key,value in result[0].items():
if value not in toxicity.values():
toxicity[key]=value
result_toxicity=toxicity['Information'][0]['Value']['StringWithMarkup'][0]['String']
return result_toxicity
#find handling & storage characteristics
def handling_store(chemical_compound,cid):
API_ENDPOINT='https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'+str(cid)+'/JSON'
dat={}
#print(newintent)
headers = {'authorization': 'Bearer ','Content-Type': 'application/json'}
resp=requests.post(API_ENDPOINT,headers=headers,json=dat)
textt=json.loads(resp.content)
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
result=json_extract(textt,'Handling and Storage')
result_validate=json_extract(result,'Not Classified')
#print('handling55:',result)
#print(json.dumps(result,indent=2,sort_keys=True))
result_handling_storage=''
response_title='Handling and Storage:\n\n'
#if the API returned no data
if len(result[0])==0:
if result_validate[0]['validate']=='Not Classified':
response_api="There are no records of hazard classification, so it may not be dangerous; please look for other professional resources"
result_handling_storage=response_title+response_api
#print('No results:',result_handling_storage)
else:
handling_storage={}
result_handling_storage=''
for key,value in result[0].items():
if value not in handling_storage.values():
handling_storage[key]=value
result_handling_storage=handling_storage['Section'][0]['Information'][0]['Value']['StringWithMarkup'][0]['String']
return result_handling_storage
def ghs_classification(chemical_compound,cid):
API_ENDPOINT='https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'+str(cid)+'/JSON'
dat={}
#print(newintent)
headers = {'authorization': 'Bearer ','Content-Type': 'application/json'}
resp=requests.post(API_ENDPOINT,headers=headers,json=dat)
textt=json.loads(resp.content)
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
result=json_extract(textt,'GHS Classification')
result_validate=json_extract(result,'Not Classified')
#print(json.dumps(result_validate,indent=2,sort_keys=True))
#print(json.dumps(result,indent=2,sort_keys=True))
response_title="GHS Classification:\n"
if len(result[0])==0:
if result_validate[0]['validate']=='Not Classified':
response_api="There are no records of hazard classification, so it may not be dangerous; please look for other professional resources"
response_ghs_classification=response_title+response_api
else:
results=json_extract(textt,'Pictogram(s)')
ghs_classification={}
for key,value in results[0].items():
if value not in ghs_classification.values():
ghs_classification[key]=value
#print(json.dumps(ghs_classification,indent=2,sort_keys=True))
response_api=""
response=''
number_classified=len(ghs_classification['Value']['StringWithMarkup'][0]['Markup'])
#print("number:",number_classified)
ghs_class=ghs_classification['Value']['StringWithMarkup'][0]['Markup']
for ghs in range(number_classified):
#print(ghs_class[ghs]['Extra'])
response=ghs_class[ghs]['Extra']+" "
response_api+=response
response_ghs_classification=response_title+response_api
#print(response_ghs_classification)
return response_ghs_classification
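toxicity(), handling_store(), ghs_classification() and info_safe_storage() each rebuild the same PUG View endpoint inline and re-declare json_extract. A shared URL builder is one small step toward factoring that out; the helper below is a hypothetical sketch, not part of the original script.

```python
# Hypothetical shared helper: the four PubChem functions above all build this
# exact PUG View URL inline before issuing the request; constructing it in one
# place keeps the endpoints consistent.
PUGVIEW_BASE = 'https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'

def pugview_url(cid):
    return PUGVIEW_BASE + str(cid) + '/JSON'

print(pugview_url(241))  # 241 is the PubChem CID for benzene
# -> https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/241/JSON
```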
def content_sorted(chemicals,title_total_content,intent_chemical_request):
chemical={}
content=''
response=''
cid=''
for chemical_compound in chemicals:
for compound in pcp.get_compounds(chemical_compound,'name'):
chemical['cid']=compound.cid
cid=compound.cid
#print("--->",chemical_compound,chemical['cid'])
if intent_chemical_request=="confirm_storage_compatibility":
title="Chemical Compatibility Info: "+chemical_compound.upper()+"\n\n"
content_csc=handling_store(chemical_compound,cid)
response=title+content_csc
elif intent_chemical_request=="get_ghs_classification":
title="GHS Info: "
response_ghs=ghs_classification(chemical_compound,cid)
content_title=title+chemical_compound.upper()+"\n\n"
content_ghs= content_title+"\n\n"+response_ghs
toxicity_title='Toxicity Summary: '+chemical_compound.upper()+"\n\n"
result_content_toxicity=toxicity(chemical_compound,chemical['cid'])
content_toxicity=toxicity_title+result_content_toxicity
response=content_ghs+content_toxicity
elif intent_chemical_request=="info_storage_compatibility":
title_si="Storage Information: "+chemical_compound.upper()+"\n\n"
result_content_ghs=ghs_classification(chemical_compound,cid)
total_content_si=title_si+result_content_ghs
title_sc="Storage Compatibility Info: "+chemical_compound.upper()+"\n\n"
content_csc=handling_store(chemical_compound,cid)
total_content_csc=title_sc+content_csc
response=total_content_si+total_content_csc
#content_chemical=chemical_compound.upper()+"\n"
#content= content_chemical+response+"\n\n"+content
content= response+"\n\n"+content
full_content=title_total_content+content+"\n\n"
print('full text:',full_content)
return full_content
#-----Wit.Ai request to identify intents and more tasks----
def validate_reply(confirmation_video):
confirmation_verified=''
if confirmation_video=='YES' or confirmation_video=='NO':
confirmation_verified=confirmation_video
return confirmation_verified
else:
confirmation_video=input('Please confirm: do you want me to transcribe it? (yes/no):').upper()
return validate_reply(confirmation_video)
chemical={}
chemicals=[]
wit_results=wit_request(question,access_token_wit)
print('Wit results New:',wit_results)
#request trutful information from PubChemAPI
for chem in wit_results[0]:
chemicals.append(chem)
chemical_compound=chem
intent=wit_results[1]
#look up the current compound by name; get_compounds expects a single identifier
for compound in pcp.get_compounds(chemical_compound,'name'):
chemical['cid']=compound.cid
cid=compound.cid
body='PubChem Library API'
#intent='get_ghs_classification'
if intent=="confirm_storage_compatibility":
#result_content=content_sorted("Globally Harmonized System Hazard Classification",handling_store(chemical_compound,chemical['cid']))
email_title="Compatibility - Globally Harmonized System Hazard Classification: "+chemical_compound.upper()
result_content=content_sorted(chemicals,email_title,intent)
email_confirmation=send_email(ureceiver,usender,body,result_content,email_title)
print("Sending confirmation:",email_confirmation)
elif intent=="get_ghs_classification":
email_title="Summarized Documentation: "+chemical_compound.upper()
full_content=content_sorted(chemicals,email_title,intent)
email_confirmation=send_email(ureceiver,usender,body,full_content,email_title)
else:#info_storage_classification
email_title="Storage Information (Globally Harmonized System Hazard Classification): "+chemical_compound.upper()
result_content=content_sorted(chemicals,email_title,intent)
email_confirmation=send_email(ureceiver,usender,body,result_content,email_title)
confirmation_video=''
answer=input('Would you like me to look for a video about '+chemical_compound+' on YouTube, transcribe it into text, and summarize it? (yes/no) ').upper()
#Extract full text and summa
if answer=='YES' or answer=='NO':
confirmation_video=answer
else:
confirmation_video=validate_reply(confirmation_video)
if answer=='YES' or confirmation_video=='YES':
#print(chem,"---->")
summary_text,full_video_text=summarizing_video(chemical_compound)
print(full_video_text,"----",summary_text)
email_title="Video Summary: "+chemical_compound
full_documentation=summary_text+"\n\n"+"Full Text: "+full_video_text
if full_video_text:
full_text=send_email(ureceiver,usender,body,full_documentation,email_title)
print(full_text)
Email_confirmed="Email Sent"
print("You will receive an email soon with the full text and summary")
else:
print("There won't be any email sent")
else:
pass
if email_confirmation == "Email Sent":
print('Email Sent: You will receive your summarized info on your registered email soon.')
else:
print('Email Not Sent: There was not enough information to send.')
#print(email_confirmation)
#print(full_content)
print('Finished...')
# -
#Wit-APPV3-final-backup.ipynb