# Introduction to NumPy
NumPy is the ideal tool for working with numerical datasets. It is much faster than using plain Python on its own.
# Contents
- [raw python vs numpy](#pythonvsnumpy)
- [vectorisation](#vectorisation)
- [create](#create)
- [size](#size)
- [resize](#resize)
- [indexing](#indexing)
- [multi axis indexing](#multiindexing)
- [boolean indexing](#boolindexing)
- [aggregation](#aggregation)
- [maths](#maths)
```
import numpy as np
```
## raw python vs numpy <a name="pythonvsnumpy"></a>
Let's say we want to multiply every number in a list by 2.
```
N=10000000
numbers_list = list(range(N))
numbers_list[:10]
%%time
for i in range(N):
    numbers_list[i] = numbers_list[i] * 2
numbers_list[:10]
numbers_array = np.array(list(range(N)))
numbers_array
%%time
numbers_array = numbers_array * 2
```
We can see that NumPy is much faster, especially when the array/list has many elements.
# vectorisation <a name="vectorisation"></a>
We saw that NumPy arrays are much faster than Python loops. Arrays like this are also referred to as vectors in linear algebra, and the term "vectorisation" refers to replacing explicit loops with whole-array (vector) operations, which is where the speed gain comes from. If you find yourself writing a for loop, always ask yourself whether you can express it as a vector operation instead.
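As a quick, minimal illustration (the variable names here are just for the example), a loop and its vectorised equivalent produce exactly the same values; the vectorised form simply runs as a single C-level operation:

```python
import numpy as np

numbers_list = [n * 2 for n in range(5)]   # pure-Python version
numbers_array = np.arange(5) * 2           # vectorised version

# Both produce the same values
assert numbers_list == numbers_array.tolist()
```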
# Creating a numpy array <a name="create"></a>
```
np.array([1,2,3])
np.zeros((2,3))
np.ones((2,3))
np.random.randn(2,3)  # normally distributed
np.identity(3)
np.arange(100)
```
# length & size <a name="size"></a>
```
data = np.zeros((100,200))
data.shape
len(data)  # gives the length of the first axis
```
# Joining numpy arrays <a name="join"></a>
```
a = np.arange(10)
b = np.arange(20)
np.hstack((a,b))
np.vstack((a,a))
```
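`hstack` and `vstack` are special cases of the more general `np.concatenate` and `np.stack`; a minimal sketch of the relationship:

```python
import numpy as np

a = np.arange(3)       # [0 1 2]
b = np.arange(3, 6)    # [3 4 5]

joined = np.concatenate((a, b))   # equivalent to np.hstack((a, b)) for 1-D arrays
stacked = np.stack((a, b))        # adds a new axis, like np.vstack here

assert joined.shape == (6,)
assert stacked.shape == (2, 3)
```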
# resize <a name="resize"></a>
```
data = np.arange(9)
data = data.reshape(3,3)
data
data = np.arange(9)
data = data.reshape(3,-1)  # if you don't want to do the mental arithmetic, supply -1 for the second axis and NumPy will infer it
data
```
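The `-1` trick works for any single axis; a common use, sketched here, is flattening an array back to one dimension:

```python
import numpy as np

data = np.arange(9).reshape(3, 3)

flat = data.reshape(-1)      # -1 asks NumPy to infer the size: back to shape (9,)
col = data.reshape(-1, 1)    # column vector of shape (9, 1)

assert flat.shape == (9,)
assert col.shape == (9, 1)
```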
# indexing <a name="indexing"></a>
```
data = np.random.randn(5)
data
data[1]
data[0:3]
data[:3]
data[-3:]
```
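One thing worth knowing about basic slicing: it returns a *view* into the original array, not a copy, so writing through a slice modifies the original. A minimal sketch:

```python
import numpy as np

data = np.arange(5)
view = data[:3]     # basic slicing returns a view, not a copy
view[0] = 99
assert data[0] == 99             # writing through the view changed the original

independent = data[1:4].copy()   # .copy() gives an independent array
independent[0] = -1
assert data[1] == 1              # original unchanged
```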
# multi axis indexing <a name="multiindexing"></a>
```
data = np.random.randn(3,3)
data
data[1,2]  # 2nd row, 3rd column
data[:,0]  # all elements from the first column
data[:2,0]  # first two elements of the first column
```
# boolean indexing <a name="boolindexing"></a>
```
data = np.random.randn(3,3)
data
data > 0
data[data > 0]
```
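Boolean masks can also be used for conditional assignment, and `np.where` builds a new array from a condition instead of modifying in place. A small sketch of both:

```python
import numpy as np

data = np.array([[-1.0, 2.0], [3.0, -4.0]])

clipped = data.copy()
clipped[clipped < 0] = 0            # conditional assignment via a mask

relu = np.where(data > 0, data, 0)  # np.where builds a new array instead

assert (clipped == relu).all()
```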
# Aggregation <a name="aggregation"></a>
```
data = np.random.randn(3,3)
data
data.sum()
np.sum(data)
data.sum(axis=1)
data.sum(axis=1,keepdims=True)
data.sum(axis=0)
data.mean()
data.min()
data.max()
```
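A note on the `axis` argument, since it trips many people up: the named axis is the one that gets *collapsed*. A small worked example:

```python
import numpy as np

data = np.array([[1, 2, 3],
                 [4, 5, 6]])

assert data.sum(axis=0).tolist() == [5, 7, 9]   # axis=0 collapses rows: one value per column
assert data.sum(axis=1).tolist() == [6, 15]     # axis=1 collapses columns: one value per row
assert data.sum(axis=1, keepdims=True).shape == (2, 1)  # keepdims keeps the reduced axis
```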
# Math operations <a name="maths"></a>
```
data = np.random.randn(3,3)
data
np.exp(data)
np.sin(data)
```
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import arviz as az
from statsmodels.tsa import stattools
import statsmodels.api as sm
import pymc3 as pm
import pymc
import sys
sys.path.insert(0, '..')
from utils.plot_lib import set_default
set_default(figsize=(6, 4))
```
### Bayesian MCMC for a mixture of two Gaussians
Implementation of a Markov chain Monte Carlo algorithm for
fitting a location mixture of two univariate Gaussian distributions.
```
### Example of an MCMC sampler for fitting a location mixture of 2 Gaussian components
### The sampler is tested using simulated data
from scipy.stats import norm
## Set the random seed for reproducibility
np.random.seed(1)
## Generate data from a mixture with 2 components
KK = 2 # Number of components
w_true = [0.6, 0.4] # True weights associated with the components
mu_true = [0, 5] # True means of the first and second component
sigma_true = [1, 1] # True standard deviations of all components
n = 120 # Number of observations to be generated
### Step 1 ### Sample component indicators
cc_true = np.random.choice([0, 1], n, p = w_true) # C_i sample
x = []
### Step 2 ### Sample from normal distribution
for i in range(n):
    x.append(norm.rvs(loc = mu_true[cc_true[i]], scale = sigma_true[cc_true[i]], size = 1)[0])
x = np.array(x)
print('The first five samples of C_i are: {}'.format(cc_true[:5]))
print('The first five samples of the mixture are: {}'.format(x[:5]))
### Plotting the true distributions
# Plot f(x) along with the observations just sampled
# Values to sample
xx_true = np.linspace(-8, 11.0, num = 200)
yy_true = w_true[0] * norm.pdf(loc = mu_true[0], scale = sigma_true[0], x = xx_true) + w_true[1] * norm.pdf(loc = mu_true[1], scale = sigma_true[1], x = xx_true)
# Plotting the mixture models
fig, ax = plt.subplots(1, 1)
sns.lineplot(xx_true, yy_true)
sns.scatterplot(np.array(x), np.zeros(len(x)), hue = cc_true)
plt.xlabel('xx')
plt.ylabel('Density')
plt.legend(['Density', 'Points sampled 1'])
plt.show()
# Density estimation of X
fig, ax = plt.subplots(1, 1)
sns.histplot(x, stat= 'density', bins = 18)
sns.kdeplot(x, bw_adjust = 1.0, label = 'Density estimate $x$')
plt.title('Histogram of $x$')
plt.show()
```
### Initial guess of data
```
## Initialize the parameters
w = 0.5 # Assign equal weight to each component to start with
mu = norm.rvs(loc = np.mean(x), scale = np.std(x), size = KK, random_state = 1) # Random cluster centers randomly spread over the support of the data
sigma = np.std(x) # Initial standard deviation
print('The initial guess for mu are: {}'.format(mu))
print('The initial guess for sigma are: {}'.format(sigma))
# Values to sample
xx = np.linspace(-8, 11.0, num = 200)
yy = w * norm.pdf(loc = mu[0], scale = sigma, x = xx) + w * norm.pdf(loc = mu[1], scale = sigma, x = xx)
# Plot the initial guess for the density
fig, ax = plt.subplots(1, 1)
sns.lineplot(xx, yy)
sns.scatterplot(np.array(x), np.zeros(len(x)), hue = cc_true)
plt.xlabel('xx')
plt.ylabel('Density')
plt.legend(['Density guess'])
plt.show()
```
### Initializing priors
```
## The actual MCMC algorithm starts here
# Priors
aa = np.ones(KK) # Uniform prior on w
eta = 0 # Mean 0 for the prior on mu_k
tau = 5 # Standard deviation 5 for the prior on mu_k
dd = 2 # Inverse gamma prior for sigma_2, parameter d
qq = 1 # Inverse gamma prior for sigma_2, parameter q
from scipy.stats import beta
from scipy.stats import invgamma
# Number of iterations of the sampler
rrr = 6000 # Number of iterations
burn = 1000 # Burning period
# Storing the samples
cc_out = np.zeros((rrr, n)) # Store indicators
w_out = np.zeros(rrr) # Sample of the weights
mu_out = np.zeros((rrr, KK)) # Sample of mus
sigma_out = np.zeros(rrr) # Sample of sigmas
logpost = np.zeros(rrr) # Used to monitor convergence
for s in range(rrr):
    # Sample the indicators
    cc = np.zeros(n)
    for i in range(n):
        v = np.zeros(KK)
        v[0] = np.log(w) + norm.logpdf(loc = mu[0], scale = sigma, x = x[i]) # Compute the log of the weights
        v[1] = np.log(1 - w) + norm.logpdf(loc = mu[1], scale = sigma, x = x[i]) # Compute the log of the weights
        v = np.exp(v - max(v)) / np.sum(np.exp(v - max(v))) # Go from logs to actual weights in a numerically stable manner
        cc[i] = np.random.choice([0, 1], 1, p = v) # C_i sample
    # Sample the weights
    w = beta.rvs(a = aa[0] + np.sum(cc == 0), b = aa[1] + np.sum(cc == 1), size = 1)
    # Sample the means
    for k in range(KK):
        nk = np.sum(cc == k)
        xsumk = np.sum(x[cc == k])
        tau2_hat = 1 / (nk / sigma**2 + 1 / tau**2)
        mu_hat = tau2_hat * (xsumk / sigma**2 + eta / tau**2)
        mu[k] = norm.rvs(loc = mu_hat, scale = np.sqrt(tau2_hat), size = 1)
    # Sample the variance
    dd_star = dd + n / 2
    mu_temp = [mu[int(c_i)] for c_i in cc] # Vector of the mu assigned to each observation
    qq_star = qq + np.sum((x - mu_temp)**2) / 2
    sigma = np.sqrt(invgamma.rvs(a = dd_star, scale = qq_star, size = 1))
    # Store samples
    cc_out[s, :] = cc
    w_out[s] = w
    mu_out[s, :] = mu
    sigma_out[s] = sigma
    # Compute the log-posterior (used to monitor convergence)
    for i in range(n):
        # Likelihood term
        if cc[i] == 0:
            logpost[s] = logpost[s] + np.log(w) + norm.logpdf(loc = mu[0], scale = sigma, x = x[i])
        else:
            logpost[s] = logpost[s] + np.log(1 - w) + norm.logpdf(loc = mu[1], scale = sigma, x = x[i])
    # w term
    logpost[s] = logpost[s] + beta.logpdf(a = aa[0], b = aa[1], x = w)
    # mu term
    for k in range(KK):
        logpost[s] = logpost[s] + norm.logpdf(loc = eta, scale = tau, x = mu[k])
    # sigma term (scale matches the qq parameterisation used in the sampling step above)
    logpost[s] = logpost[s] + invgamma.logpdf(a = dd, scale = qq, x = sigma**2)
    if s % 500 == 0:
        print('Current iteration is: {}'.format(s))
## Plot the logposterior distribution for various samples
fig, ax = plt.subplots(1, 1)
ax.plot(np.arange(len(logpost)), logpost, 'r-', lw=1, alpha=0.6, label='Trace plot') # Trace plot of data
ax.legend(loc='best', frameon=False)
# plot density estimate of the posterior
plt.title('Trace plot of Logposterior')
plt.show()
print('The final Mu_hat values are: {}'.format(mu))
print('The true mu values are: {}\n'.format(mu_true))
print('The final sigma_hat values are: {}'.format(sigma))
print('The true sigma values are: {}\n'.format(sigma_true))
print('The final w_hat values are: {}'.format(w))
print('The true w values are: {}\n'.format(w_true))
print('The final c_hat values are: {}'.format(cc[:10]))
print('The true c values are: {}\n'.format(cc_true[:10]))
# Values to sample
xx = np.linspace(-8, 11.0, num = 200)
density_posterior = np.zeros((rrr-burn, len(xx)))
for s in range(rrr-burn):
    density_posterior[s, :] = density_posterior[s, :] + \
        w_out[s + burn] * norm.pdf(loc = mu_out[s + burn, 0], scale = sigma_out[s + burn], x = xx) + \
        (1 - w_out[s + burn]) * norm.pdf(loc = mu_out[s + burn, 1], scale = sigma_out[s + burn], x = xx)
density_posterior_m = np.mean(density_posterior, axis = 0)
density_posterior_lq = np.quantile(density_posterior, 0.025, axis = 0)
density_posterior_uq = np.quantile(density_posterior, 0.975, axis = 0)
## Plot the final result distribution for various samples
fig, ax = plt.subplots(1, 1)
# Mean value
ax.plot(xx, density_posterior_m, lw=2, alpha=0.6, label='Mean value') # Trace plot of data
# Plotting original data
for k in range(KK):
    ax.scatter(np.array(x[cc_true == k]), np.zeros((x[cc_true == k].shape[0])), label = 'Component {}'.format(k + 1))
# Plotting uncertainty
plt.fill_between(xx, density_posterior_uq, density_posterior_lq, alpha=0.2,
label='Uncertainty Interval')
ax.legend(loc='best', frameon=False)
# plot density estimate of the posterior
plt.title('Posterior density estimate')
plt.show()
```
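The indicator-sampling step in the Gibbs sampler above normalises log-weights by subtracting the maximum before exponentiating. As a standalone sketch of why that is numerically necessary (the function name here is illustrative):

```python
import numpy as np

def normalise_log_weights(logw):
    """Turn unnormalised log-weights into probabilities without under/overflow."""
    w = np.exp(logw - np.max(logw))  # subtracting the max keeps exp() in range
    return w / w.sum()

# Naively exponentiating these would underflow to 0/0; the shifted version is fine
p = normalise_log_weights(np.array([-1000.0, -1001.0]))
assert abs(p.sum() - 1.0) < 1e-12
assert p[0] > p[1]
```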
# DLProfile Example using Cosmic Tagger Application
## Set imports and necessary environment variables
```
import pathlib
import os
import sys
import matplotlib.pyplot as plt
import warnings
import pprint
import pandas
VANIDL_DIR="{}".format(pathlib.Path(os.getcwd()).parent.parent.parent.absolute())
sys.path.insert(0, VANIDL_DIR)
warnings.filterwarnings('ignore')
os.environ["DARSHAN_DIR"] = "/soft/perftools/darshan/darshan-3.1.8"
os.environ["VANIDL_DIR"] = VANIDL_DIR
```
#### Formatting
```
pp = pprint.PrettyPrinter(indent=1)
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
```
## Create an instance of DLProfile and load the Darshan file
```
from src.vanidl import VaniDL
profile = VaniDL()
#import shutil
#shutil.rmtree('/tmp/temp_analysis')
DATAPATH_INCLUDES = []
status = profile.Load("/home/dhari/darshan-logs/benchmark/cosmic/optimization/1MB.darshan", data_paths_include=DATAPATH_INCLUDES)
if status:
    print("Darshan Trace loaded Successfully!")
else:
    print("Darshan Trace load Failed!")
    print(profile._error_str())
```
## Use the Profile object to analyze the Darshan I/O trace
### Verify if object works
The `GetDXTAsDF()` function returns the DXT trace as a DataFrame, which users can analyze further.
```
df = profile.GetDXTAsDF()
pp.pprint("Files used in the application")
pp.pprint(df['Filename'].unique().tolist())
df_normal = profile.GetTraceAsDF()
pp.pprint("Files used in the application")
pp.pprint(df_normal['Filename'].unique().tolist())
```
### Collect the summary of the Application
```
summary = profile.GetSummary()
print("\n")
print(color.BOLD + "Data Access Summary (from Darshan):"+ color.END)
print("Total Job time\t\t\t:\t{:0.2f} seconds".format(summary['job_time']))
#FIXME: calculate time per rank and then take max across it.
print("Time spent in I/O\t\t:\t{:0.2f} seconds".format(summary['total_io_time']))
print("% Time spent in I/O\t\t:\t{:0.2f}%".format(float(summary['total_io_time'])*100/summary['job_time']))
print("Total Data Accessed\t\t:\t{:0.2f} GB".format(float(summary['total_io_bytes'])/1024.0/1024.0/1024.0))
print("Data Access Modules used\t:\t{}".format(summary['io_interface_used']))
print("Data Operations\t\t\t:\t{}".format(summary['io_operations_used']))
print("# of files used\t\t\t:\t{}".format(len(summary['files_used'])))
print("# of MPI Ranks\t\t\t:\t{:0.0f} ranks".format(summary['num_ranks']))
print(color.UNDERLINE + "Data Transfer size:"+ color.END)
print("\tMin,Max\t\t\t:\t{:0.0f} bytes and {:0.0f} bytes".format(summary['data_transfer_size']['min'],summary['data_transfer_size']['max']))
print("\tAverage\t\t\t:\t{:0.0f} bytes".format(summary['data_transfer_size']['mean']))
print("\tMedian\t\t\t:\t{:0.0f} bytes".format(summary['data_transfer_size']['median']))
print(color.UNDERLINE + "Data Transfer bandwidth: (per rank)"+ color.END)
print("\tMin,Max\t\t\t:\t{:0.0f} B/s and {:0.0f} MB/s".format(summary['data_transfer_bandwidth']['min'],summary['data_transfer_bandwidth']['max']/1024.0/1024.0))
print("\tAverage\t\t\t:\t{:0.0f} MB/s".format(summary['data_transfer_bandwidth']['mean']/1024.0/1024.0))
print("\tMedian\t\t\t:\t{:0.0f} MB/s".format(summary['data_transfer_bandwidth']['median']/1024.0/1024.0))
print(color.UNDERLINE + "Access Pattern:"+ color.END)
print("\tSequential\t\t:\t{:0.2f}%".format(float(summary['access_pattern']['sequential'])))
print("\tConsecutive\t\t:\t{:0.2f}%".format(float(summary['access_pattern']['consecutive'])))
#An I/O op issued at an offset greater than where the previous I/O op ended.
#An I/O op issued at the offset immediately after the end of the previous I/O
print("\n")
print(color.BOLD + "Files Summary:"+ color.END)
print("File Types\t\t\t:\t{}".format(summary['file_used_summary']['types']))
print(color.UNDERLINE + "Dataset Size:"+ color.END)
print("\tTotal\t\t\t:\t{:0.3f} GB".format(float(summary['file_used_summary']['size']['total'])/1024.0/1024.0/1024.0))
print("\tMin,Max\t\t\t:\t{:0.3f} GB and {:0.3f} GB".format(float(summary['file_used_summary']['size']['min'])/1024.0/1024.0/1024.0,float(summary['file_used_summary']['size']['max'])/1024.0/1024.0/1024.0))
print("\tAverage\t\t\t:\t{:0.3f} GB".format(float(summary['file_used_summary']['size']['mean'])/1024.0/1024.0/1024.0))
pp.pprint("Job time : {} seconds".format(profile.GetJobTime()))
pp.pprint("Time spent by application on I/O: {} seconds".format(profile.GetIOTime()))
```
### I/O time spent on each file
```
for file in df['Filename'].unique():
    print("I/O time for file {}: {:0.2f} seconds".format(file,profile.GetIOTime(filepath=file)))
```
### I/O Time spent per rank
```
for rank in df['Rank'].unique():
    print("I/O time for rank {}: {:0.2f} seconds".format(rank,profile.GetIOTime(rank=rank)))
"Total I/O performed by application: {:0.2f} GB".format(float(profile.GetIOSize())/1024.0/1024.0/1024.0)
```
### I/O performed on each file
```
for file in df['Filename'].unique():
    print("I/O performed on file {}: {:0.2f} MB".format(file,float(profile.GetIOSize(filepath=file))/1024.0/1024.0))
for rank in df['Rank'].unique():
    print("I/O performed by rank {}: {:0.2f} MB".format(rank, float(profile.GetIOSize(rank=rank))/1024.0/1024.0))
print("Size of dataset (bytes)")
pp.pprint(profile.GetFileSizes())
```
### How the application accesses data over time
```
tl = profile.CreateIOTimeline(time_step=0.001)
plt.figure(figsize=(8,4))
plt.grid()
plt.ylabel("# of operations")
plt.xlabel("Timeline (ms)")
plt.plot(tl['time_step'], tl['operation_count']);
plt.figure(figsize=(8,4))
plt.grid()
plt.ylabel("I/O performed (bytes)")
plt.xlabel("Timeline (ms)")
plt.plot(tl['time_step'], tl['io_bytes']);
```
### How files are accessed over the duration of the Job.
```
for file in df['Filename'].unique():
    tl = profile.CreateIOTimeline(filepath=file,time_step=0.001)
    tl.plot(x='time_step',y='operation_count', title=file)
    plt.show()
```
### Show how data is accessed by each rank
```
for rank in df['Rank'].unique():
    tl = profile.CreateIOTimeline(rank=rank)
    tl.plot(x='time_step',y='operation_count', title=rank)
    plt.show()
```
### Data Transfer Size distribution within the application
```
request_df = profile.GetIORequestDistribution()
df['Length'].plot(kind='hist', figsize=(5, 3), bins=100);
plt.xlabel("Transfer Size (bytes)")
```
### Data Transfer Size distribution for each file.
```
for file in df['Filename'].unique():
    tl = profile.GetIORequestDistribution(filepath=file)
    tl.plot(kind='bar', figsize=(10, 4), title=file)
    plt.show()
```
### Data Transfer Sizes per Rank
```
for rank in df['Rank'].unique():
    tl = profile.GetIORequestDistribution(rank=rank)
    tl.plot(kind='bar', figsize=(10, 4), title=rank)
    plt.show()
```
### File summary of each file accessed by the Application
```
pp = pprint.PrettyPrinter(indent=1)
for file in df['Filename'].unique():
    if os.path.exists(file):
        pp.pprint(profile.GetFileSummary(file))
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Multi-worker training with Estimator
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that this translation exactly matches the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the [tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) GitHub repository. To volunteer to write or review translations, please email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
## Overview
Note: While you can use Estimators with the `tf.distribute` API, it is recommended to use Keras with `tf.distribute` instead; see [multi-worker training with Keras](../../guide/multi_worker_with_keras.ipynb). Estimator training with `tf.distribute.Strategy` has limited support.
This tutorial demonstrates how `tf.distribute.Strategy` can be used for distributed multi-worker training with `tf.estimator`. If you write your code using `tf.estimator` and are interested in scaling beyond a single machine with high performance, this tutorial is for you.
Before getting started, please read the [distributed training in TensorFlow](../../guide/distributed_training.ipynb) guide first. The [multi-GPU training tutorial](./keras.ipynb) is also relevant, because this tutorial uses the same model.
## Setup
First, set up TensorFlow and the necessary imports.
```
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
import os, json
```
## Input function
This tutorial uses the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The code here is similar to the [multi-GPU training tutorial](./keras.ipynb), with one key difference: when using Estimator for multi-worker training, it is necessary to shard the dataset by the number of workers to ensure model convergence. The input data is sharded by worker index, so that each worker processes `1/num_workers` distinct, non-overlapping portions of the dataset.
```
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def input_fn(mode, input_context=None):
    datasets, info = tfds.load(name='mnist',
                               with_info=True,
                               as_supervised=True)
    mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else
                     datasets['test'])

    def scale(image, label):
        image = tf.cast(image, tf.float32)
        image /= 255
        return image, label

    if input_context:
        mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines,
                                            input_context.input_pipeline_id)
    return mnist_dataset.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
```
Another reasonable approach to achieve convergence would be to shuffle the dataset with a distinct seed on each worker.
## Multi-worker configuration
One of the key differences from the [multi-GPU training tutorial](./keras.ipynb) is the multi-worker setup. The `TF_CONFIG` environment variable is the standard way to specify the cluster configuration to each worker that is part of the cluster.
There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the entire cluster, namely the workers and parameter servers in the cluster. `task` provides information about the current task. In this example, the task `type` is `worker` and the task `index` is `0`.
For illustration purposes, this tutorial shows how to set a `TF_CONFIG` with two workers on localhost. In practice, you would create each worker on a separate machine, assign real IP addresses and ports, and set `TF_CONFIG` on each worker appropriately; for example, the task `index` must differ on each machine.
Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.
```
os.environ['TF_CONFIG'] = json.dumps({
'cluster': {
'worker': ["localhost:12345", "localhost:23456"]
},
'task': {'type': 'worker', 'index': 0}
})
```
## Define the model
Define the layers, the optimizer, and the loss function for training. This tutorial defines the model with Keras layers, similar to the [multi-GPU training tutorial](./keras.ipynb).
```
LEARNING_RATE = 1e-4
def model_fn(features, labels, mode):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    logits = model(features, training=False)

    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {'logits': logits}
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    optimizer = tf.compat.v1.train.GradientDescentOptimizer(
        learning_rate=LEARNING_RATE)
    loss = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE)(labels, logits)
    loss = tf.reduce_sum(loss) * (1. / BATCH_SIZE)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)

    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=loss,
        train_op=optimizer.minimize(
            loss, tf.compat.v1.train.get_or_create_global_step()))
```
Note: Although the learning rate is fixed in this example, in general you may need to adjust the learning rate based on the global batch size.
## MultiWorkerMirroredStrategy
To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [distributed training in TensorFlow](../../guide/distributed_training.ipynb) guide has more details about this strategy.
```
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```
## Train and evaluate the model
Next, specify the distribution strategy in the `RunConfig` for the estimator, then train and evaluate by invoking `tf.estimator.train_and_evaluate`. This tutorial distributes only the training, by specifying the strategy via `train_distribute`. It is also possible to distribute the evaluation via `eval_distribute`.
```
config = tf.estimator.RunConfig(train_distribute=strategy)
classifier = tf.estimator.Estimator(
model_fn=model_fn, model_dir='/tmp/multiworker', config=config)
tf.estimator.train_and_evaluate(
classifier,
train_spec=tf.estimator.TrainSpec(input_fn=input_fn),
eval_spec=tf.estimator.EvalSpec(input_fn=input_fn)
)
```
## Optimize training performance
You now have a model and a multi-worker capable Estimator powered by `tf.distribute.Strategy`. You can try the following techniques to optimize performance of multi-worker training:
*   *Increase the batch size:* The batch size specified here is per-GPU. In general, the largest batch size that fits in GPU memory is advisable.
*   *Cast variables:* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466) of how this can be done.
*   *Use collective communication:* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py).
    *   `RING` implements ring-based collectives using gRPC as the cross-host communication layer.
    *   `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives.
    *   `AUTO` defers the choice to the runtime.
The best choice of collective implementation depends on the number and kind of GPUs and the network interconnect in the cluster. To override the automatic choice, pass a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.
## Other code examples
1.   [End-to-end example](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy) for multi-worker training in tensorflow/ecosystem using Kubernetes templates. This example starts with a Keras model and converts it to an Estimator using the `tf.keras.estimator.model_to_estimator` API.
2.   The official [ResNet50](https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py) model, which can be trained using multiple distribution strategies.
# Project: Part of Speech Tagging with Hidden Markov Models
---
### Introduction
Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more.

The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
<div class="alert alert-block alert-info">
**Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files.
</div>
<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>
### The Road Ahead
You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.
- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger
<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>
```
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
```
## Step 1: Read and preprocess the dataset
---
We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.
The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.
Example from the Brown corpus.
```
b100-38532
Perhaps ADV
it PRON
was VERB
right ADJ
; .
; .
b100-35577
...
```
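The provided `Dataset` class in helpers.py handles this parsing for you, but as a minimal sketch of how the format described above could be read (the function name and return shape here are illustrative, not the real interface):

```python
def parse_corpus(text):
    """Parse sentences in the tab-separated word/tag format described above.

    Blocks are separated by a blank line; the first line of each block is the
    sentence key, and each following line is a word, a tab, and its tag.
    """
    sentences = {}
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        key = lines[0].strip()
        pairs = [tuple(line.split("\t")) for line in lines[1:]]
        words = tuple(w for w, t in pairs)
        tags = tuple(t for w, t in pairs)
        sentences[key] = (words, tags)
    return sentences

corpus = "b100-38532\nPerhaps\tADV\nit\tPRON\nwas\tVERB"
parsed = parse_corpus(corpus)
assert parsed["b100-38532"][0] == ("Perhaps", "it", "was")
```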
```
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"
```
### The Dataset Interface
You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.
```
Dataset-only Attributes:
training_set - reference to a Subset object containing the samples for training
testing_set - reference to a Subset object containing the samples for testing
Dataset & Subset Attributes:
sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
vocab - an immutable collection of the unique words in the corpus
tagset - an immutable collection of the unique tags in the corpus
X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
N - returns the number of distinct samples (individual words or tags) in the dataset
Methods:
stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus
__iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
__len__() - returns the number of sentences in the dataset
```
For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:
```
subset.keys == {"s1", "s0"} # unordered
subset.vocab == {"See", "run", "ran", "Spot"} # unordered
subset.tagset == {"VERB", "NOUN"} # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys
subset.N == 7 # there are a total of seven observations over all sentences
len(subset) == 2 # because there are two sentences
```
<div class="alert alert-block alert-info">
**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.
</div>
#### Sentences
`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`.
```
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))
```
<div class="alert alert-block alert-info">
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.
</div>
#### Counting Unique Elements
You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.
```
print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples"
```
#### Accessing word and tag Sequences
The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
```
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print()
```
#### Accessing (word, tag) Samples
The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
```
# use Dataset.stream() to iterate over (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break
```
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute several sets of frequency counts.
## Step 2: Build a Most Frequent Class tagger
---
Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus.
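As a toy illustration with made-up counts (not Brown corpus figures), the most-frequent-class rule is just an argmax over each word's tag counts:

```python
# Hypothetical per-word tag counts -- illustrative numbers only
toy_counts = {
    "time": {"NOUN": 1244, "VERB": 32},
    "run":  {"VERB": 310, "NOUN": 95},
}

# For each word, pick the tag with the highest count
mfc = {word: max(tag_counts, key=tag_counts.get)
       for word, tag_counts in toy_counts.items()}
mfc  # {'time': 'NOUN', 'run': 'VERB'}
```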
### IMPLEMENTATION: Pair Counts
Complete the function below that computes the joint frequency counts for two input sequences.
```
def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
For example, if sequences_A is tags and sequences_B is the corresponding
words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
you should return a dictionary such that pair_counts[NOUN][time] == 1244
"""
# TODO: Finish this function!
# Initialize result dictionary
pair_dict = {}
# Iterate through sequences_A
for index, tag in enumerate(sequences_A):
# Get the word in sequences_B related to the tag
word = sequences_B[index]
# If tag does not exist in pair_dict, make an empty dictionary for tag
if tag not in pair_dict:
pair_dict[tag] = {}
# Set the tag-word pair count to zero in pair_dict if the word is not yet in the pair_dict[tag] dictionary
if word not in pair_dict[tag]:
pair_dict[tag][word] = 0
# Increment the counter pair_dict[tag][word] by one for the given tag-word pair
pair_dict[tag][word] +=1
# Print result to check
#print(pair_dict)
return pair_dict
# Calculate C(t_i, w_i)
tags = [tag for word, tag in data.stream()]
words = [word for word, tag in data.stream()]
emission_counts = pair_counts(tags, words)
#print(emission_counts)
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')
```
### IMPLEMENTATION: Most Frequent Class Tagger
Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.
The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably.
```
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
# Unwrap the data stream and split into two lists using the zip function
word_counts = pair_counts(*zip(*data.training_set.stream()))
# Loop through words in word_counts.keys(), get the key with maximum value from word_counts[word], ...
# and add to dictionary in the form {word: tag}
# Reference: https://stackoverflow.com/questions/268272/getting-key-with-maximum-value-in-dictionary
mfc_table = {word: max(word_counts[word], key=word_counts[word].get) for word in word_counts.keys()}
#print(mfc_table)
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')
```
### Making Predictions with a Model
The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.
```
def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions
```
### Example Decoding Sequences with MFC Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
### Evaluating Model Accuracy
The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
```
def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [("VERB", "NOUN", "VERB"), ("VERB", "NOUN", "VERB", "ADV"), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# The model.viterbi call in simplify_decoding will return None if the HMM
# raises an error (for example, if a test sentence contains a word that
# is out of vocabulary for the training set). Any exception counts the
# full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions
```
#### Evaluate the accuracy of the MFC tagger
Run the next cell to evaluate the accuracy of the tagger on the training and test corpus.
```
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')
```
## Step 3: Build an HMM tagger
---
The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence.
We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).
The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:
$$\hat{t}_1^n = \underset{t_1^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$
Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information.
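As a toy illustration of this formula (with made-up probabilities rather than corpus estimates), a brute-force argmax over all tag sequences for a two-word sentence can be sketched as:

```python
from itertools import product

# Hypothetical emission and transition probabilities -- illustrative only
emit = {("NOUN", "Spot"): 0.6, ("VERB", "Spot"): 0.1,
        ("NOUN", "ran"): 0.05, ("VERB", "ran"): 0.7}
trans = {("<s>", "NOUN"): 0.5, ("<s>", "VERB"): 0.2,
         ("NOUN", "NOUN"): 0.3, ("NOUN", "VERB"): 0.4,
         ("VERB", "NOUN"): 0.3, ("VERB", "VERB"): 0.1}

def score(tags, words):
    """prod_i P(w_i | t_i) * P(t_i | t_{i-1}), with t_0 = <s>."""
    p, prev = 1.0, "<s>"
    for t, w in zip(tags, words):
        p *= emit[(t, w)] * trans[(prev, t)]
        prev = t
    return p

words = ("Spot", "ran")
best = max(product(["NOUN", "VERB"], repeat=len(words)),
           key=lambda tags: score(tags, words))
best  # ('NOUN', 'VERB')
```

The Viterbi algorithm computes this same argmax efficiently with dynamic programming instead of enumerating all tag sequences; Pomegranate will do that part for us.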
### IMPLEMENTATION: Unigram Counts
Complete the function below to count the frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)
$$P(tag_1) = \frac{C(tag_1)}{N}$$
```
def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
# TODO: Finish this function!
# Initialize dictionary
sequence_tags = {}
# Loop through each sentence (sequence) in sequences
for sequence in sequences:
# Loop through each tag in each sentence (sequence)
for tag in sequence:
# Increment if tag already in dictionary. Otherwise, add to dictionary
if tag in sequence_tags:
sequence_tags[tag] += 1
else:
sequence_tags[tag] = 1
#print(sequence_tags)
return sequence_tags
# TODO: call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(data.training_set.Y)
print(tag_unigrams)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')
```
### IMPLEMENTATION: Bigram Counts
Complete the function below to count the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
```
def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
# TODO: Finish this function!
# Initialize dictionary
count_pairs = {}
# Loop through each sentence (sequence) in sequences
for sequence in sequences:
# Loop through each tag in the sequence, but stop at [:-1] since we are evaluating pairs
for tagIdx in range(len(sequence)-1):
# Check to see if the dictionary contains the current tag pair.
# If so, increment this pair. Else, add this pair to the dictionary
if (sequence[tagIdx], sequence[tagIdx+1]) in count_pairs:
count_pairs[(sequence[tagIdx], sequence[tagIdx+1])] += 1
else:
count_pairs[(sequence[tagIdx], sequence[tagIdx+1])] = 1
return count_pairs
# TODO: call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(data.training_set.Y)
#[print(key,':',value) for key, value in tag_bigrams.items()]
print(tag_bigrams)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')
```
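Once both count tables are in place, the MLE bigram probability from the formula above is a single division. A toy example with hypothetical counts:

```python
# Hypothetical counts -- not taken from the Brown corpus
toy_bigrams = {("DET", "NOUN"): 850, ("DET", "ADJ"): 150}
toy_unigrams = {"DET": 1000}

# P(NOUN | DET) = C(DET, NOUN) / C(DET)
p_noun_given_det = toy_bigrams[("DET", "NOUN")] / toy_unigrams["DET"]
p_noun_given_det  # 0.85
```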
### IMPLEMENTATION: Sequence Starting Counts
Complete the code below to count the number of occurrences of each tag at the beginning of a sequence.
```
def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
# TODO: Finish this function!
# Loop through each sentence (sequence) in sequences and get the first tag in each
sequence_starts = [sequence[0] for sequence in sequences]
# Use Counter to return a dictionary of starting counts
return Counter(sequence_starts)
# TODO: Calculate the count of each tag starting a sequence
tag_starts = starting_counts(data.training_set.Y)
print(tag_starts)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting tag."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting tag."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')
```
### IMPLEMENTATION: Sequence Ending Counts
Complete the function below to count the number of occurrences of each tag at the end of a sequence.
```
def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
dictionary such that your_ending_counts[DET] == 18
"""
# TODO: Finish this function!
# Loop through each sentence (sequence) in sequences and get the last tag in each
sequence_ends = [sequence[-1] for sequence in sequences]
# Use Counter to return a dictionary of ending counts
return Counter(sequence_ends)
# TODO: Calculate the count of each tag ending a sequence
tag_ends = ending_counts(data.training_set.Y)
print(tag_ends)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending tag."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending tag."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')
```
### IMPLEMENTATION: Basic HMM Tagger
Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.
- Add one state per tag
- The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
- The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
- The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
- The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$
```
basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
# Define tag training dataset
dataY = data.training_set.Y
# Get tag counters using functions defined above
tag_unigrams = unigram_counts(dataY)
tag_bigrams = bigram_counts(dataY)
tag_starts = starting_counts(dataY)
tag_ends = ending_counts(dataY)
# Initialize states dictionary
states = {}
# Calculate C(t_i, w_i) from the training set only (the model's parameters should not see test data)
tags = [tag for _, tag in data.training_set.stream()]
words = [word for word, _ in data.training_set.stream()]
# Calculate emission counts using pair_counts function from above
emission_counts = pair_counts(tags, words)
#print(emission_counts)
# Loop through each tag in the tagset
for tag in data.training_set.tagset:
# Initialize dictionary for emission probabilities
emission_probability = {}
# For each tag, use a loop to retrieve each word and its emission count
for word, count in emission_counts[tag].items():
#print('word:', word, ', count:',count)
# Calculate P(w| t) = C(t, w) / C(t)
emission_probability[word] = count / tag_unigrams[tag]
# Following the example from HMM Warmup
# Emission probability distributions, P(word | tag)
tag_emissions = DiscreteDistribution(emission_probability)
# Add distribution to state
tag_state = State(tag_emissions, name=tag)
# Add tag state to state dictionary
states[tag] = tag_state
# Add state to HMM model (Add one state per tag)
basic_model.add_states(tag_state)
# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# (Hint: you may need to loop & add transitions)
### Add transitions for the starting and ending states ###
# Loop through each tag in the tagset
for tag in data.training_set.tagset:
# Get state for each tag
state = states[tag]
#print(state)
# Calculate start probability: P(t | start) = C(start, t) / C(start)
start_prob = tag_starts[tag] / sum(tag_starts.values())
#print(start_prob)
# Add the start transition (Add an edge from the starting state basic_model.start to each tag)
basic_model.add_transition(basic_model.start, state, start_prob)
# Calculate end probability: P(end | t) = C(t, end) / C(t)
end_prob = tag_ends[tag] / tag_unigrams[tag]
#print(end_prob)
# Add the end transition to model (Add an edge from each tag to the end state basic_model.end)
basic_model.add_transition(state, basic_model.end, end_prob)
### Add transitions for the middle states ###
# Iterate through each set of tags
# Iterate Tag A
for tagA in data.training_set.tagset:
# Get state for tag A
stateA = states[tagA]
# Iterate Tag B
for tagB in data.training_set.tagset:
# Get state for tag B
stateB = states[tagB]
# Calculate the transition probability betweens states A and B
# P(t_2 | t_1) = C(t_1, t_2) / C(t_1)
trans_prob = tag_bigrams.get((tagA, tagB), 0) / tag_unigrams[tagA]
# Add the transition to model (Add an edge between every pair of tags)
basic_model.add_transition(stateA, stateB, trans_prob)
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')
hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')
```
### Example Decoding Sequences with the HMM Tagger
```
for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n")
```
## Finishing the project
---
<div class="alert alert-block alert-info">
**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.
</div>
```
!!jupyter nbconvert *.ipynb
```
## Step 4: [Optional] Improving model performance
---
There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because splitting the same amount of data over more tags means there will be fewer samples for each tag, and more tags will have zero occurrences in the data. The techniques in this section are optional.
- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values.
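A minimal sketch of add-k smoothing with hypothetical counts (`k=1` gives classic Laplace smoothing):

```python
def laplace_smooth(counts, vocab, k=1):
    """Add-k smoothing: P(w) = (C(w) + k) / (N + k * |V|).
    Symbols in `vocab` that were never observed get a small
    non-zero probability instead of zero."""
    total = sum(counts.values()) + k * len(vocab)
    return {w: (counts.get(w, 0) + k) / total for w in vocab}

probs = laplace_smooth({"NOUN": 3, "VERB": 1}, vocab={"NOUN", "VERB", "X"})
probs["X"]  # 1/7 -- the unseen tag no longer has zero probability
```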
- Backoff Smoothing
Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
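A minimal sketch of linear interpolation between bigram and unigram estimates (the weight `lam` here is a made-up value; in practice it would be tuned on held-out data):

```python
def interpolated_bigram(t1, t2, unigram_p, bigram_p, lam=0.8):
    """Back off from the bigram estimate toward the unigram estimate:
    P(t2 | t1) ~= lam * P_bigram(t2 | t1) + (1 - lam) * P_unigram(t2)."""
    return lam * bigram_p.get((t1, t2), 0.0) + (1 - lam) * unigram_p.get(t2, 0.0)

# Toy probabilities: the unseen bigram (DET, X) still gets non-zero mass
uni = {"NOUN": 0.3, "X": 0.01}
bi = {("DET", "NOUN"): 0.85}
interpolated_bigram("DET", "X", uni, bi)  # 0.8 * 0.0 + 0.2 * 0.01 = 0.002
```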
- Extending to Trigrams
HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.
### Obtain the Brown Corpus with a Larger Tagset
Run the code below to download a copy of the Brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the output in the format specified in Step 1, then you can reload the data using all of the code above for comparison.
Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.
```
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
```
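Writing the extracted sentences back out in the Step 1 format (id line, tab-separated word/tag lines, blank line between sentences) could be sketched as follows; the id `prefix` is an arbitrary choice:

```python
def write_step1_format(tagged_sents, path, prefix="brown"):
    """Write (word, tag) sentences in the Step 1 format: a unique id line,
    one tab-separated word/tag pair per line, and a blank line after each
    sentence."""
    with open(path, "w") as f:
        for i, sent in enumerate(tagged_sents):
            f.write("{}-{:05d}\n".format(prefix, i))
            for word, tag in sent:
                f.write("{}\t{}\n".format(word, tag))
            f.write("\n")

# e.g. write_step1_format(training_corpus.tagged_sents(), "brown-full.txt")
```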
```
import os
import shutil  # used below to clean up the data directory
import pandas as pd
import glob
import tempfile
from pathlib import Path
```
#### Provide storage account parameters here
###### storage_conn_string "Storage account connection string"
###### src_container "Container where data is stored"
###### dst_container "Container where results should be uploaded"
```
storage_conn_string = ""
src_container = ""
dst_container = ""
```
# Import functions from other notebooks
```
%run "Data-utils.ipynb"
%run "FR-Utils.ipynb"
%run "file-utils.ipynb"
%run "AzureBlobStorageLib.ipynb" storage_conn_string src_container dst_container
```
# Data Preparation
#### Steps include
1. Downloading data from Blob Storage
2. Converting all format files to PDF files
3. Splitting multipage PDF to single page PDF files
```
# Create temporary directory and download files
#temp_dir = tempfile.TemporaryDirectory()
#data_dir = temp_dir.name
data_dir = "../data/"
if os.path.exists(data_dir) :
shutil.rmtree(data_dir)
Path(data_dir).mkdir(parents=True, exist_ok=True)
raw_files = os.path.join(data_dir,"rawFiles")
Path(raw_files).mkdir(parents=True, exist_ok=True)
download2local(raw_files)
# convert all file types to pdf and then split to 1-p docs
raw_pdf = os.path.join(data_dir,"allPdf")
Path(raw_pdf).mkdir(parents=True, exist_ok=True)
convert2pdf(src = raw_files, pdfdst = raw_pdf)
print("Input files are stored at:", raw_pdf)
pdf_1p = os.path.join(data_dir,"1p-pdf")
Path(pdf_1p).mkdir(parents=True, exist_ok=True)
pdf_split(src = raw_pdf, dst = pdf_1p)
print("Processed files are stored at:", pdf_1p)
```
#### Get initial Parameters
```
#get initial file count
fls = glob.glob(os.path.join(pdf_1p,"*.pdf"))
initial_file_cnt = len(fls)
print(initial_file_cnt)
#results directory
results_dir = os.path.join(data_dir,"Results")
Path(results_dir).mkdir(parents=True, exist_ok=True)
# generate SAS signature for storage container
sas_url = fr_get_sas_url(dst_container)
#sas_url = ""
sas_url
# Call below function, if you want to clean up the destination blob container
#deleteContainerData()
```
# FR Template identification process
```
iteration = 1
while(iteration):
    ########################################################
    # create directory for current iteration
    ########################################################
    iter_fld = "I" + str(iteration)
    iter_dir = os.path.join(results_dir, iter_fld)
    Path(iter_dir).mkdir(parents=True, exist_ok=True)
    ########################################################
    # sample files for training
    ########################################################
    train_fld = "trainset"
    train_dir = os.path.join(iter_dir, train_fld)
    Path(train_dir).mkdir(parents=True, exist_ok=True)
    sample_training_data(src_fld = pdf_1p, dst_fld = train_dir)
    ########################################################
    # upload the files to blob
    ########################################################
    blob_path = os.path.join(iter_fld, train_fld)
    upload2blob(local_path = train_dir, container_path = blob_path)
    ########################################################
    # train FR unsupervised model
    ########################################################
    model_file = iter_fld + "-model-details.json"
    train_fr_model(sas_url = sas_url, folder_path = blob_path.replace("\\", "/"), model_file = model_file)
    ########################################################
    # if the model was created, infer using the model
    ########################################################
    iter_model_file = os.path.join(iter_dir, model_file)
    if os.path.exists(model_file):
        shutil.copyfile(model_file, iter_model_file)
        infer_fld = "fr-json"
        infer_dir = os.path.join(iter_dir, infer_fld)
        Path(infer_dir).mkdir(parents=True, exist_ok=True)
        # Start FR inferencing
        fr_model_inference(src_dir = pdf_1p, json_dir = infer_dir, model_file = iter_model_file, thread_cnt = 10)
    ########################################################
    # Segregate files to clusters
    ########################################################
    clust_dir = os.path.join(results_dir, "clusters")
    Path(clust_dir).mkdir(parents=True, exist_ok=True)
    cluster_file = os.path.join(iter_dir, iter_fld + "-clusters.csv")
    files_clustered = segregate_data(src_dir = pdf_1p, result_dir = infer_dir, cluster_dir = clust_dir,
                                     prefix = iter_fld, cluster_file = cluster_file)
    print("Identified clusters for:", files_clustered, "files")
    ########################################################
    # Upload iteration results to blob storage
    ########################################################
    upload2blob(local_path = iter_dir, container_path = iter_fld)    # train data, model details and clusters
    upload2blob(local_path = clust_dir, container_path = "clusters") # Files segregated into clusters
    ########################################################
    # decide on next iteration
    ########################################################
    moved_percent = files_clustered * 100 / initial_file_cnt
    if (moved_percent < 5) or (initial_file_cnt < 500):
        iteration = 0
    else:
        iteration = iteration + 1
```
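The stopping rule at the bottom of the loop can be isolated as a small pure function for testing (a sketch; the 5% threshold and 500-file floor are the values used above, and `initial_file_cnt` is assumed to be defined earlier in the notebook):

```python
# Hedged sketch of the loop's stopping rule, pulled out as a pure function.
# The 5% threshold and 500-file floor mirror the values used in the loop above.
def should_run_next_iteration(files_clustered, initial_file_cnt):
    moved_percent = files_clustered * 100 / initial_file_cnt
    return not (moved_percent < 5 or initial_file_cnt < 500)

should_run_next_iteration(300, 1000)  # 30% of files clustered, enough files -> keep iterating
should_run_next_iteration(20, 1000)   # only 2% clustered -> stop
```

Separating the decision from the side effects makes the loop's termination behavior easy to unit test.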
# Topic Modeling: Financial News
This notebook contains an example of LDA applied to financial news articles.
## Imports & Settings
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import json
from collections import Counter
from pathlib import Path
import logging
import numpy as np
import pandas as pd
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
# spacy for language processing
import spacy
# sklearn for feature extraction
from sklearn.feature_extraction.text import TfidfVectorizer
# gensim for topic models
from gensim.models import LdaModel
from gensim.corpora import Dictionary
from gensim.matutils import Sparse2Corpus
# topic model viz
import pyLDAvis
from pyLDAvis.gensim import prepare
sns.set_style('white')
pyLDAvis.enable_notebook()
stop_words = set(pd.read_csv('http://ir.dcs.gla.ac.uk/resources/linguistic_utils/stop_words',
header=None,
squeeze=True).tolist())
```
## Helper Viz Functions
```
def show_word_list(model, corpus, top=10, save=False):
    top_topics = model.top_topics(corpus=corpus, coherence='u_mass', topn=20)
    words, probs = [], []
    for top_topic, _ in top_topics:
        words.append([t[1] for t in top_topic[:top]])
        probs.append([t[0] for t in top_topic[:top]])
    fig, ax = plt.subplots(figsize=(model.num_topics*1.2, 5))
    sns.heatmap(pd.DataFrame(probs).T,
                annot=pd.DataFrame(words).T,
                fmt='',
                ax=ax,
                cmap='Blues',
                cbar=False)
    fig.tight_layout()
    if save:
        fig.savefig(f'fin_news_wordlist_{top}', dpi=300)

def show_coherence(model, corpus, tokens, top=10, cutoff=0.01):
    top_topics = model.top_topics(corpus=corpus, coherence='u_mass', topn=20)
    word_lists = pd.DataFrame(model.get_topics().T, index=tokens)
    order = []
    for w, word_list in word_lists.items():
        target = set(word_list.nlargest(top).index)
        for t, (top_topic, _) in enumerate(top_topics):
            if target == set([t[1] for t in top_topic[:top]]):
                order.append(t)
    fig, axes = plt.subplots(ncols=2, figsize=(15, 5))
    title = f'# Words with Probability > {cutoff:.2%}'
    (word_lists.loc[:, order]>cutoff).sum().reset_index(drop=True).plot.bar(title=title, ax=axes[1]);
    umass = model.top_topics(corpus=corpus, coherence='u_mass', topn=20)
    pd.Series([c[1] for c in umass]).plot.bar(title='Topic Coherence', ax=axes[0])
    fig.tight_layout()
    fig.savefig(f'fin_news_coherence_{top}', dpi=300);

def show_top_docs(model, corpus, docs):
    doc_topics = model.get_document_topics(corpus)
    df = pd.concat([pd.DataFrame(doc_topic,
                                 columns=['topicid', 'weight']).assign(doc=i)
                    for i, doc_topic in enumerate(doc_topics)])
    for topicid, data in df.groupby('topicid'):
        print(topicid, docs[int(data.sort_values('weight', ascending=False).iloc[0].doc)])
        print(pd.DataFrame(model.show_topic(topicid=topicid)))
```
## Load Financial News
The data is available from [Kaggle](https://www.kaggle.com/jeet2016/us-financial-news-articles).
Download and unzip it into the `data` directory in the repository root folder, then rename the enclosing folder to `us-financial-news` and the subfolders so you get the following directory structure:
```
data
|-us-financial-news
|-2018_01
|-2018_02
|-2018_03
|-2018_04
|-2018_05
```
```
data_path = Path('..', 'data', 'us-financial-news')
```
We limit the article selection to the following sections in the dataset:
```
section_titles = ['Press Releases - CNBC',
'Reuters: Company News',
'Reuters: World News',
'Reuters: Business News',
'Reuters: Financial Services and Real Estate',
'Top News and Analysis (pro)',
'Reuters: Top News',
'The Wall Street Journal & Breaking News, Business, Financial and Economic News, World News and Video',
'Business & Financial News, U.S & International Breaking News | Reuters',
'Reuters: Money News',
'Reuters: Technology News']
def read_articles():
    articles = []
    counter = Counter()
    for f in data_path.glob('*/**/*.json'):
        article = json.load(f.open())
        if article['thread']['section_title'] in set(section_titles):
            text = article['text'].lower().split()
            counter.update(text)
            articles.append(' '.join([t for t in text if t not in stop_words]))
    return articles, counter
articles, counter = read_articles()
print(f'Done loading {len(articles):,.0f} articles')
most_common = (pd.DataFrame(counter.most_common(), columns=['token', 'count'])
.pipe(lambda x: x[~x.token.str.lower().isin(stop_words)]))
most_common.head(10)
```
## Preprocessing with SpaCy
```
results_path = Path('results', 'financial_news')
if not results_path.exists():
    results_path.mkdir(parents=True)

def clean_doc(d):
    doc = []
    for t in d:
        if not any([t.is_stop, t.is_digit, not t.is_alpha, t.is_punct, t.is_space, t.lemma_ == '-PRON-']):
            doc.append(t.lemma_)
    return ' '.join(doc)

nlp = spacy.load('en')
nlp.max_length = 6000000
nlp.disable_pipes('ner')
nlp.pipe_names

def preprocess(articles):
    iter_articles = (article for article in articles)
    clean_articles = []
    for i, doc in enumerate(nlp.pipe(iter_articles,
                                     batch_size=100,
                                     n_threads=8), 1):
        if i % 1000 == 0:
            print(f'{i / len(articles):.2%}', end=' ', flush=True)
        clean_articles.append(clean_doc(doc))
    return clean_articles
clean_articles = preprocess(articles)
clean_path = results_path / 'clean_text'
clean_path.write_text('\n'.join(clean_articles))
```
## Vectorize data
```
docs = clean_path.read_text().split('\n')
len(docs)
```
### Explore cleaned data
```
article_length, token_count = [], Counter()
for i, doc in enumerate(docs, 1):
    if i % 1e6 == 0:
        print(i, end=' ', flush=True)
    d = doc.lower().split()
    article_length.append(len(d))
    token_count.update(d)
fig, axes = plt.subplots(ncols=2, figsize=(15, 5))
(pd.DataFrame(token_count.most_common(), columns=['token', 'count'])
.pipe(lambda x: x[~x.token.str.lower().isin(stop_words)])
.set_index('token')
.squeeze()
.iloc[:25]
.sort_values()
.plot
.barh(ax=axes[0], title='Most frequent tokens'))
sns.boxenplot(x=pd.Series(article_length), ax=axes[1])
axes[1].set_xscale('log')
axes[1].set_xlabel('Word Count (log scale)')
axes[1].set_title('Article Length Distribution')
sns.despine()
fig.tight_layout()
fig.savefig(results_path / 'fn_explore', dpi=300);
pd.Series(article_length).describe(percentiles=np.arange(.1, 1.0, .1))
docs = [x.lower() for x in docs]
docs[3]
```
### Set vocab parameters
```
min_df = .005
max_df = .1
ngram_range = (1, 1)
binary = False
vectorizer = TfidfVectorizer(stop_words='english',
min_df=min_df,
max_df=max_df,
ngram_range=ngram_range,
binary=binary)
dtm = vectorizer.fit_transform(docs)
tokens = vectorizer.get_feature_names()
dtm.shape
corpus = Sparse2Corpus(dtm, documents_columns=False)
id2word = pd.Series(tokens).to_dict()
dictionary = Dictionary.from_corpus(corpus, id2word)
```
## Train & Evaluate LDA Model
```
logging.basicConfig(filename='gensim.log',
format="%(asctime)s:%(levelname)s:%(message)s",
level=logging.DEBUG)
logging.root.level = logging.DEBUG
```
### Train models with 5-20 topics
```
num_topics = [5, 10, 15, 20]
for topics in num_topics:
    print(topics)
    lda_model = LdaModel(corpus=corpus,
                         id2word=id2word,
                         num_topics=topics,
                         chunksize=len(docs),
                         update_every=1,
                         alpha='auto',              # a-priori belief about each topic's probability
                         eta='auto',                # a-priori belief on word probability
                         decay=0.5,                 # percentage of previous lambda value forgotten
                         offset=1.0,
                         eval_every=1,
                         passes=10,
                         iterations=50,
                         gamma_threshold=0.001,
                         minimum_probability=0.01,  # filter topics with lower probability
                         minimum_phi_value=0.01,    # lower bound on term probabilities
                         random_state=42)
    lda_model.save((results_path / f'model_{topics}').as_posix())
```
### Evaluate results
We show results for one model using a vocabulary of 3,800 tokens based on min_df=0.1% and max_df=25%, with a single pass to avoid lengthy training times for 20 topics. We can use the pyLDAvis `topic_info` attribute to compute relevance values for lambda=0.6, which produces the following word lists.
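The relevance score behind that ranking blends a word's topic-specific probability with its lift over the corpus-wide probability. A minimal sketch (the probability arrays are made-up illustrative values, not taken from the fitted model):

```python
import numpy as np

lam = 0.6                             # the lambda value quoted above
p_w_t = np.array([0.10, 0.05, 0.01])  # P(word | topic), illustrative values
p_w = np.array([0.02, 0.04, 0.01])    # marginal P(word), illustrative values

# relevance = lambda * log P(w|t) + (1 - lambda) * log(P(w|t) / P(w))
relevance = lam * np.log(p_w_t) + (1 - lam) * np.log(p_w_t / p_w)
ranked = np.argsort(relevance)[::-1]  # word ids, most relevant first
```

Lowering lambda weights the lift term more heavily, which pushes words that are distinctive for the topic (rather than merely frequent) to the top of the list.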
```
def eval_lda_model(ntopics, model, corpus=corpus, tokens=tokens):
    show_word_list(model=model, corpus=corpus, top=ntopics, save=True)
    show_coherence(model=model, corpus=corpus, tokens=tokens, top=ntopics)
    vis = prepare(model, corpus, dictionary, mds='tsne')
    pyLDAvis.save_html(vis, f'lda_{ntopics}.html')
    return 2 ** (-model.log_perplexity(corpus))

lda_models = {}
perplexity = {}
for ntopics in num_topics:
    print(ntopics)
    lda_models[ntopics] = LdaModel.load((results_path / f'model_{ntopics}').as_posix())
    perplexity[ntopics] = eval_lda_model(ntopics=ntopics, model=lda_models[ntopics])
```
### Perplexity
```
pd.Series(perplexity).plot.bar()
sns.despine();
```
### PyLDAVis for 15 Topics
```
vis = prepare(lda_models[15], corpus, dictionary, mds='tsne')
pyLDAvis.display(vis)
```
## LDAMultiCore Timing
```
df = pd.read_csv(results_path / 'lda_multicore_test_results.csv')
df.head()
df[df.num_topics==10].set_index('workers')[['duration', 'test_perplexity']].plot.bar(subplots=True, layout=(1,2), figsize=(14,5), legend=False)
sns.despine()
plt.tight_layout();
```
# Comic Book Cancellations Part I: Web Scraping
While some Marvel comic books run for decades, most series go through cycles. For example, [Charles Soule's *She-Hulk* (2014)](https://www.cbr.com/charles-soule-investigates-she-hulks-blue-file/) was a colorful and quirky crime serial that got cancelled on its 12th issue. However, that was not the end of the titular character. A year after that series' cancellation, she reappeared as the lead in [Mariko Tamaki's *Hulk* (2016)](https://www.cbr.com/hulk-1-gives-marvel-an-unstable-dangerous-jennifer-walters/), but the tone of the book was completely different. The new title was introspective and focused on her pain and depression following the murder of her cousin. While these legacy characters may eventually continue after a cancellation, the tone, style, and genre of their stories often change with the new creative team.
So what causes so many of my favorite stories to get cancelled seemingly ahead of their time? Some books end at the author's request because the story has reached its conclusion. When *Young Avengers* (2013) was cancelled, the author Kieron Gillen [stated](http://kierongillen.tumblr.com/post/66995678192/young-avengers-the-end-of-the-season), "When the time came around and Marvel asked if we wanted to do more issues, [the artist] Jamie and I decided we'd actually made our statement, and should leave the stage." However, most Marvel comics are written as serials without the intention of bringing the story to a final conclusion. Instead, as Marvel Executive Editor Tom Brevoort [stated](https://twitter.com/TomBrevoort/status/945861802813984768) in 2017 amidst a string of cancellations, "We go through this cycle every year where weaker-selling titles get pruned".
So are books that get cancelled actually weaker selling? And if so, what criteria determines cancellation? Of [that](https://www.dailydot.com/parsec/marvel-comics-sales-slump-diversity/) [string](https://www.cbr.com/marvel-cancels-generation-x-gwenpool-more/) [of](https://www.cbr.com/marvel-comics-cancels-iceman-luke-cage/) [cancellations](https://www.cbr.com/marvel-comics-cancels-she-hulk/) in early 2017, all of the series had female, queer, or colored leads. This naturally poses the question whether the cancellations are the result of low sales for books with new characters introduced through Marvel's diversity initiatives or whether Marvel was caving to [retailers](https://www.cbr.com/marvel-sales-diversity/) who felt like "people didn't want any more diversity".
To answer these questions, I'll use machine learning to develop a cancellation criterion based on comic book sales data. This first part will focus on web scraping publicly available comic book sales data and storing it in a SQLite database. The [second part](./2 Comic Book Cancellations - Machine Learning.ipynb) will parse through that data and implement machine learning algorithms to determine why titles were cancelled. While these first two parts show step-by-step how my analysis was done, the [third part](./3 Comic Book Cancellations - Conclusion.ipynb) will summarize the entire process and draw conclusions from my findings.
# 1 Web Scraping
## Imports
```
import sqlite3
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from bs4 import BeautifulSoup
import requests
import re
from scipy.optimize import curve_fit
%matplotlib inline
```
## Web Scraping
American comic books (like Marvel or DC) generally come out with new issues every month, which are sold through comic book stores; however, an increasing minority of comics are sold digitally through sites like [Comixology](https://www.comixology.com/). About twice a year, these individual issues are also collected into trade paperbacks, where they are sold by local comic book stores and through most booksellers.
The main comic book store distributor is [Diamond Comic Distributors](https://www.diamondcomics.com/Home/1/1/3/103), and their monthly sales information is freely available from [Comichron](http://www.comichron.com/monthlycomicssales.html) for every month since 1998. This data provides a [good estimate](http://www.comicsbeat.com/a-quick-word-about-sales-estimates-before-we-run-the-distribution-charts/) of single issue sales where the actual sales are ~10% larger, but gives no information about digital comic sales and is less accurate for collected editions of which a sizable number are sold through bookstores. Actual collected edition sales are ~25% more than Diamond's numbers.
The majority of Diamond's sales are through [individual issues](https://www.cnbc.com/2016/06/05/comic-books-buck-trend-as-print-and-digital-sales-flourish.html). As such, while calculating the cancellation criteria, I'll only look into individual issue sales.
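Those rough correction factors can be captured in a small helper (a sketch using the ~10% and ~25% approximations quoted above; `estimated_actual` is a hypothetical helper, not part of the analysis below):

```python
# Approximate true sales from Diamond's estimates using the rough
# correction factors quoted above (approximations, not exact figures).
CORRECTION = {"single_issue": 1.10, "collected_edition": 1.25}

def estimated_actual(diamond_units, kind="single_issue"):
    return round(diamond_units * CORRECTION[kind])

estimated_actual(50_000)                       # ~55,000 actual single issues
estimated_actual(10_000, "collected_edition")  # ~12,500 actual collected copies
```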
In order to scrape the data from the website, I'll be using the Python [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup) package. It will then be saved into a [SQLite](https://sqlite.org/index.html) database. This whole process can take several minutes to finish, so the final database has been made available [here](./sales.db).
```
# download_comic_sales returns a DataFrame containing comic sales from Comichron for the given month and year
def download_comic_sales(month, year):
    url = "http://www.comichron.com/monthlycomicssales/{1}/{1}-{0:02}.html".format(month, year)
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    table = soup.find('table', id = "Top300Comics")
    data = []
    rows = table.find_all('tr')
    for row in rows:
        cols = row.find_all(['td', 'th'])
        cols = [ele.text.strip() for ele in cols]
        data.append([ele for ele in cols])
    comics_table = pd.DataFrame(data[1:], columns=data[0])
    comics_table.drop(columns = "On sale", inplace = True, errors = 'ignore')
    comics_table.rename(columns={comics_table.columns[0]: "UnitRank", comics_table.columns[1]: "DollarRank"}, inplace=True)
    comics_table.drop(columns=['UnitRank', 'DollarRank'], inplace=True)
    comics_table.rename(columns={'Comic-book Title' : 'Title', 'Est. units' : 'Units'}, inplace=True)
    comics_table['Issue'] = comics_table['Issue'].map(lambda x: re.findall(r'\d+\.?\d*', x)[0] if len(re.findall(r'\d+\.?\d*', x)) >= 1 else '')
    comics_table['Issue'] = pd.to_numeric(comics_table['Issue'], errors='coerce')
    comics_table["Title"] = comics_table["Title"].replace("โ ", "", regex=True)
    comics_table["Price"] = comics_table["Price"].replace(r"\$", "", regex=True).astype(float)
    comics_table['Units'] = comics_table['Units'].replace(',', '', regex=True).astype(int)
    comics_table['Gross'] = comics_table['Units']*comics_table['Price']
    comics_table['Date'] = pd.to_datetime('{}{:02}'.format(year, month), format='%Y%m', errors='ignore')
    comics_table = comics_table.dropna(axis='rows')
    return comics_table
# Loop through every month since 1998, adding data to a SQLite database
db = sqlite3.connect('sales.db')
for year in range(1998, 2018):
    for month in range(1, 13):
        df = download_comic_sales(month, year)
        df.to_sql("sales", db, if_exists="append")
for year in range(2018, 2019):
    for month in range(1, 6):
        df = download_comic_sales(month, year)
        df.to_sql("sales", db, if_exists="append")
db.close()
```
# 2 Data Cleaning
I'm specifically going to focus on Marvel comics; however, I need to keep multiple runs of a comic separate even if they have the same title. Marvel commonly starts the numbering of new series with a \#1 issue to indicate to readers that the title has a new creative team and direction. However, many titles later revert to their legacy numbering system. So long as there is not a new \#1 issue, I'm going to consider it the same series. Each run can be distinguished from the others by its title and starting year. This may ignore some edge cases such as ongoing titles that change the name of the comic in the middle of a run (such as Mariko Tamaki's *Hulk* (2016) changing its name to *She-Hulk*) or the possibility of a new title starting with legacy numbering rather than a \#1.
There are also a variety of other minor details involved in cleaning up the data for analysis. Altogether, the following changes were made:
- Only keep Marvel comics
- Distinguish between multiple runs of the comic with separate \#1 issues
- Aggregate sales and reorders for unique comics (same Title, Starting Year, Issue #)
- Remove .1 issues which are special jumping on points separate from main continuity
- Remove obvious marketing gimmick issues
- Rename some titles so that they're consistent.
- New features added for largest issue number and whether title is current title
```
db = sqlite3.connect('sales.db')
# Load web scrapped data from SQL database for Marvel comics
df = pd.read_sql_query('''
SELECT Title, Issue, Price, Units, Gross, Date
FROM sales
WHERE Publisher = "Marvel"
''', db, parse_dates=['Date'])
db.close()
# Rename titles for consistency and remove extraneous issues
df = df[(df.Issue % 1 == 0) & (df.Issue != 0) & (df.Issue < 900)]
df.loc[df.Title == 'Us Avengers', 'Title'] = "U.S. Avengers"
df.loc[df.Title == 'US Avengers', 'Title'] = "U.S. Avengers"
df.loc[df.Title == 'U.S.Avengers', 'Title'] = "U.S. Avengers"
df.loc[df.Title == 'Avengers Ai', 'Title'] = "Avengers AI"
df.loc[df.Title == 'All New Guardians of Galaxy', 'Title'] = "All New Guardians of the Galaxy"
df.loc[df.Title == 'Marvel Universe Ult Spider-Man Web Warriors', 'Title'] = "Marvel Universe Ultimate Spider-Man Web Warriors"
df.loc[df.Title == 'Kanan The Last Padawan', 'Title'] = "Kanan"
df.loc[df.Title == 'Kanan Last Padawan', 'Title'] = "Kanan"
df.loc[df.Title == 'Star Wars Kanan', 'Title'] = "Kanan"
# Develop table with each series information (Title, StartYear, StartDate)
series_df = df[df['Issue'] == 1].groupby(['Date', 'Title']).agg({'Title':'first', 'Date': 'first'})
series_df['StartYear'] = series_df['Date'].map(lambda x: x.year)
series_df.reset_index(drop=True, inplace=True)
series_df.sort_values(by=['Title', 'Date'], inplace=True)
series_df.reset_index(drop=True, inplace=True)
series_df2 = pd.DataFrame()
series_df2 = series_df2.append(series_df.iloc[0])
for i in range(series_df.shape[0]-1):
    if (series_df.Title[i+1] != series_df.Title[i]) or (series_df.Date[i+1] - series_df.Date[i] > pd.Timedelta(3, unit='M')):
        series_df2 = series_df2.append(series_df.iloc[i+1])
series_df = series_df2
# Use series table to determine StartYear for each entry in database
df['StartYear'] = pd.Series()
for i in range(df.shape[0]):
    title = df.iloc[i].Title
    date = df.iloc[i].Date
    s = series_df[(series_df.Title == title) & (series_df.Date <= date)].sort_values(by='Date', ascending=False)
    if s.shape[0] > 0:
        df.loc[df.index[i], 'StartYear'] = s.iloc[0].StartYear
# Remove titles that don't have #1 issues in the data set or other missing data
df = df.dropna(axis='rows')
# Save cleaned up Marvel sales information as separate table in database
db = sqlite3.connect('sales.db')
df.to_sql("marvel_clean", db, if_exists="replace")
db.close()
# Sum sales issue for each unique issue (unique Title, StartYear, Issue #)
df = df.groupby(['Title', 'Issue', 'StartYear']).agg({'Title' : 'first', 'StartYear' : 'first', 'Issue': 'first', 'Date' : 'min', 'Price' : 'first', 'Units' : 'sum', 'Gross' : 'sum' })
df.reset_index(drop=True, inplace=True)
# Add new features for the title's maximum issue and whether it is a current title
df2 = pd.pivot_table(df, values='Issue', index=['Title', 'StartYear'], aggfunc=np.max).rename(columns={'Issue':'MaxIssue'})
df = pd.merge(left=df, right=df2, on=['Title', 'StartYear'], sort=False).sort_values(by='Units', ascending=False)
max_date = df['Date'].max()
df2 = pd.pivot_table(df, values='Date', index=['Title', 'StartYear'], aggfunc=lambda x: max(x) == max_date).rename(columns={'Date':'CurrentTitle'})
df = pd.merge(left=df, right=df2, on=['Title', 'StartYear'], sort=False).sort_values(by='Units', ascending=False)
```
We can see what our data looks like by peeking into the first few rows of the table.
```
df.head(3)
series_df[series_df.Title.str.contains('Moon Girl')]
```
## Preliminary Analysis - Cancellation Issue
The titles need to be classified as to whether they have been cancelled or not. Naively, any books that end have been cancelled whereas current ongoing titles have not been cancelled, but that isn't always the case.
*Amazing Spider-Man* is the long-running, core Spider-Man book and one of Marvel's best selling, flagship titles. Yet, since 1998 it has started over with new \#1 issues multiple times.
```
series_df.loc[series_df.Title == 'Amazing Spider-Man', ['StartYear', 'Title']]
```
In this case, *Amazing Spider-Man* was not cancelled so much as the numbering system was reverted to indicate a new creative direction, typically with a shake-up in the creative team as well. For long-running serial titles, it's standard for the creative team to change every several years.
Meanwhile, many titles never reach beyond their first issue, in which case they would have been "cancelled" before receiving any sales feedback. These titles are often intended to be one-shots, either as a side story or even as a Free Comic Book Day (FCBD) offering.
```
df[df.MaxIssue == 1].head(3)
```
So long-running and extremely short-running titles may not actually have been cancelled. Let's look at which issue is most often the last issue before cancellation.
```
pd.pivot_table(df, values=['Title'], index='MaxIssue', aggfunc={'Title':lambda x: len(x.unique())}).iloc[0:16].plot(
kind='bar', y='Title', figsize=(8,6), legend=False)
plt.ylabel('Counts')
plt.xlabel('Max Issue')
plt.show()
```
Based on length, Marvel comics appear to fall into several categories: (1) one-shots, (2) events and mini-series that run less than 6 issues, (3) ongoing titles that are immediately cancelled around 12 issues, and (4) ongoing titles that continue past 12 issues.
I have no way of determining how each series ended without manually going through each title and looking into them which would be a time-consuming process. However, it appears that the 12th month mark is a common dropping point for comics. For now, I'm going to overly restrict my data and try to determine what allows a book to survive past this first drop point by comparing titles that got cancelled on their 12th issue with those that lasted longer.
# 3 Cancelled Classification
Titles that prematurely finished with 12 issues will be labeled as "Cancelled" whereas books that lasted longer will be labeled as "Kept". I'm then going to aggregate my data by run (title and starting year), keeping features for the unit sales and gross profits for the first 12 months as well as the book's maximum issue and whether it's a current title.
```
# Removed 'Avengers Vs. X-Men' because it is an event comic that lasted 12 issues and was not cancelled per se
df.drop(df.index[df.Title == 'Avengers Vs X-Men'], inplace=True)
# Select cancelled titles that start with an issue #1 and finish with their 12th issue. Group by title and create features for units and gross sales for first 12 months.
dfUnits = df.loc[(df.Issue == 1) & (df.MaxIssue == 12), ['Title', 'StartYear']].reset_index(drop=True)
for i in range(1,13):
    dfUnits = pd.merge(left=dfUnits, right=df.loc[(df.Issue == i) & (df.MaxIssue == 12), ['Title', 'StartYear', 'Units']].rename(columns={'Units': 'Units' + str(i)}), on=['Title', 'StartYear'])
dfUnits = dfUnits.dropna(axis='rows')
dfGross = df.loc[(df.Issue == 1) & (df.MaxIssue == 12), ['Title', 'StartYear']].groupby(['Title', 'StartYear']).first().reset_index()
for i in range(1,13):
    dfGross = pd.merge(left=dfGross, right=df.loc[(df.Issue == i) & (df.MaxIssue == 12), ['Title', 'StartYear', 'Gross']].rename(columns={'Gross': 'Gross' + str(i)}), on=['Title', 'StartYear'])
dfGross = dfGross.dropna(axis='rows')
df1 = pd.merge(left=dfUnits, right=dfGross, on=['Title', 'StartYear'])
df2 = df[['Title', 'StartYear', 'MaxIssue', 'CurrentTitle']]
df2 = df2.groupby(['Title', 'StartYear']).agg({'MaxIssue':'first',
'CurrentTitle':'first'}).reset_index()
dfCancelled = pd.merge(left=df1, right=df2, on=['Title', 'StartYear'])
dfCancelled['Kept'] = 0
# Select kept titles that start with an issue #1 and then continue past their 12th issue. Group by title and create features for units and gross sales for first 12 months.
dfUnits = df.loc[(df.MaxIssue > 12) & (df.Issue == 1), ['Title', 'StartYear']].reset_index(drop=True)
for i in range(1,13):
    dfUnits = pd.merge(left=dfUnits, right=df.loc[(df.Issue == i) & (df.MaxIssue > 12), ['Title', 'StartYear', 'Units']].rename(columns={'Units': 'Units' + str(i)}), on=['Title', 'StartYear'])
dfUnits = dfUnits.dropna(axis='rows')
# Note: the filter below mirrors the Units selection; the original (df.Issue == 1 | (df.Issue == 12))
# was mis-parenthesized due to operator precedence
dfGross = df.loc[(df.MaxIssue > 12) & (df.Issue == 1), ['Title', 'StartYear']].groupby(['Title', 'StartYear']).first().reset_index()
for i in range(1,13):
    dfGross = pd.merge(left=dfGross, right=df.loc[(df.Issue == i) & (df.MaxIssue > 12), ['Title', 'StartYear', 'Gross']].rename(columns={'Gross': 'Gross' + str(i)}), on=['Title', 'StartYear'])
dfGross = dfGross.dropna(axis='rows')
df1 = pd.merge(left=dfUnits, right=dfGross, on=['Title', 'StartYear'])
df2 = df.loc[(df['Issue'] <= 12),['Title', 'StartYear', 'MaxIssue', 'CurrentTitle']]
df2 = df2.groupby(['Title', 'StartYear']).agg({'MaxIssue':'first',
'CurrentTitle':'first'}).reset_index()
dfKept = pd.merge(left=df1, right=df2, on=['Title', 'StartYear'])
dfKept['Kept'] = 1
# Combine both Cancelled and Kept titles
df = pd.concat([dfCancelled, dfKept], ignore_index=True, sort=False)
```
Peering into the first few rows shows that we now have sales information (units and gross) for the first 12 months of sales of new titles.
```
df.head(3)
```
# 4 Feature Engineering - Exponential Fitting
Monthly unit sales and gross profit uncannily follow an exponential decay over the course of the first several months. People try new titles for the first several issues to decide whether they like the book; within the first few months, they decide whether to drop the book or continue to follow it. After that point, sales tend to stay relatively consistent.
In addition to the monthly unit sales, I'm going to engineer some new features based on the exponential fit parameters. These features capture the entire trend of the sales over time in just a few variables.
#### Exponential Models:
$Units(x) = (UI-UF) \exp(-(x-1)/UT) + UF$
$UI$ = Initial Unit Sales <br />
$UT$ = Exponential Time Decay Constant <br />
$UF$ = Asymptotic Final Unit Sales
$Gross(x) = (GI-GF) \exp(-(x-1)/GT) + GF$
$GI$ = Initial Gross Sales <br />
$GT$ = Exponential Time Decay Constant <br />
$GF$ = Asymptotic Final Gross Sales
The exponential fit doesn't describe all the titles. For example, some of them have a linear change in sales without a first-issue spike, which would most likely happen if the series gets a new \#1 without a real change in direction or creative team. However, for most titles the exponential fit describes the trend of the sales curve without the variance of the monthly sales numbers.
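Before fitting the real sales data, the fitting machinery can be sanity-checked on synthetic, noise-free data (a sketch; the parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_func(x, I, T, F):
    return (I - F) * np.exp(-x / T) + F

# Synthetic 10-issue run: starts at 100k units and decays toward a 20k floor
x = np.arange(10)
y = exponential_func(x, 100_000, 2.0, 20_000)

popt, _ = curve_fit(exponential_func, x, y, p0=(90_000, 1.0, 15_000))
# popt should recover (100000, 2.0, 20000) on noise-free data
```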
```
r = 10 # Number of issues starting from beginning to include in fit
x = np.arange(r)
def exponenial_func(x, I, T, F):
    return (I-F)*np.exp(-x/T)+F

UI_list = np.array([])
UT_list = np.array([])
UF_list = np.array([])
for i in range(df.shape[0]):
    y = df.iloc[i, 2:2+r].astype(float).values
    popt, pcov = curve_fit(exponenial_func, x, y, p0=(100000, 1, 20000))
    UI_list = np.append(UI_list, popt[0])
    UT_list = np.append(UT_list, popt[1])
    UF_list = np.append(UF_list, popt[2])
    # List titles that don't fit
    if pcov[0,0] == float('Inf'):
        print('Trouble Fitting Units for', df.iloc[i]['Title'])

GI_list = np.array([])
GT_list = np.array([])
GF_list = np.array([])
for i in range(df.shape[0]):
    y = df.iloc[i, 14:14+r].astype(float).values
    popt, pcov = curve_fit(exponenial_func, x, y, p0=(60000, 0.5, 20000))
    GI_list = np.append(GI_list, popt[0])
    GT_list = np.append(GT_list, popt[1])
    GF_list = np.append(GF_list, popt[2])
    # List titles that don't fit
    if pcov[0,0] == float('Inf'):
        print('Trouble fitting Gross for', df.iloc[i]['Title'])

df['UI'] = UI_list
df['UT'] = UT_list
df['UF'] = UF_list
df['GI'] = GI_list
df['GT'] = GT_list
df['GF'] = GF_list
```
## Checking Fits
We confirm how well the fit works by comparing it with the actual sales for a title.
```
title = 'She-Hulk'
start_year = 2014
df2 = df[(df.Title == title) & (df.StartYear == start_year)]
# Monthly Sales Values
x = np.arange(12)
y = df2.iloc[0,2:2+12].astype(float).values
# Exponential Fit Values (use positional .iloc indexing since df2's index labels need not start at 0)
def exponenial_func(x, I, T, F):
    return (I-F)*np.exp(-x/T)+F
xx = np.linspace(0, 12, 1000)
yy = exponenial_func(xx, df2.UI.iloc[0], df2.UT.iloc[0], df2.UF.iloc[0])
# Plot
ymin = min(y); ymax = max(y)
plt.plot(x,y,'o', xx, yy)
plt.title('Exponential Fit of Units: {} ({})'.format(title, start_year))
plt.xlim([-0.2,11.2])
plt.ylim([0.9*ymin, 1.1*ymax])
plt.ylabel('Units Sold')
plt.xlabel('Months')
plt.show()
```
# 5 Save Database
```
df = df.reset_index(drop=True)
db = sqlite3.connect('sales.db')
df.to_sql("marvel_sales", db, if_exists="replace")
db.close()
```
In this first part, we've scraped comic book sales data, cleaned up some of the irregularities in issue numbers and title names, aggregated the data into unique runs by the title's name and starting year, classified titles based on whether they were "kept" or "cancelled" after 12 months, and engineered new features based on the regression fit of the sales data to an exponential decay curve.
Now that we have the data, it is ready to be processed by machine learning algorithms in order to determine the cancellation criteria. The step-by-step procedures to do that are demonstrated in [part 2](./2 Comic Book Cancellations - Machine Learning.ipynb). [Part 3](./3 Comic Book Cancellations - Conclusion.ipynb) will then summarize all these steps and present the final conclusion.

---
## 30. Numerical Integration
Eduard Larraรฑaga (ealarranaga@unal.edu.co)
---
### Summary
This notebook presents some numerical integration techniques.
---
One of the most common tasks in astrophysics is evaluating integrals such as
\begin{equation}
I = \int_a^b f(x) dx ,
\end{equation}
y, en muchos casos, estas no pueden realizarse en forma analรญtica. El integrando en estas expresiones puede darse como una funciรณn analรญtica $f(x)$ o como un conjunto discreto de valores $f(x_i)$. Acontinuaciรณn describiremos algunas tรฉcnicas para realizar estas integrales numรฉricamente en ambos casos.
---
## Piecewise interpolation and quadratures
Any integration method that uses a weighted sum is called a **quadrature rule**. Suppose we know (or can evaluate) the integrand $f(x)$ at a finite set of *nodes* $\{x_j\}$ with $j=0,\cdots,n-1$ in the interval $[a,b]$, with $x_0 = a$ and $x_{n-1} = b$. This gives a set of $n$ nodes, or equivalently $N=n-1$ intervals. A discrete approximation of the integral of the function is given by the **rectangle rule**,
\begin{equation}
I = \int_a^b f(x) dx \approx \Delta x \sum_{i=0}^{N-1} f(x_i),
\end{equation}
where the interval width is $\Delta x = \frac{b-a}{N}$. From the definition of the integral, it is clear that this approximation converges to the true value of the integral as $N\rightarrow \infty$, i.e. as $\Delta x \rightarrow 0$.
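As a quick sanity check, the rectangle rule can be sketched in a few lines of NumPy; here it is applied to $f(x)=x^2$ on $[0,1]$, whose exact integral is $1/3$ (the function name is illustrative):

```python
import numpy as np

def rectangle_rule(f, a, b, N):
    """Approximate the integral of f over [a, b] with N left-endpoint rectangles."""
    x = np.linspace(a, b, N + 1)    # N+1 nodes define N intervals
    dx = (b - a) / N
    return dx * np.sum(f(x[:-1]))   # evaluate f at the left node of each interval

approx = rectangle_rule(lambda t: t**2, 0.0, 1.0, 1000)
# approx converges to the exact value 1/3 as N grows
```

Increasing $N$ by a factor of ten shrinks the error by roughly the same factor, reflecting the first-order accuracy of the left-endpoint rule.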
Although the rectangle rule can give a good approximation of the integral, it can be improved by using an interpolated function in each interval. Methods that use polynomial interpolation are known collectively as **Newton-Cotes quadratures**.
---
### Midpoint rule
The simplest modification of the rectangle rule described above is to use the value of the function $f(x)$ at the center of each interval rather than at one of the nodes. If the integrand can be evaluated at the midpoint of each interval, the approximate value of the integral is given by
\begin{equation}
I = \int_{a}^{b} f(x) dx \approx \sum _{i=0}^{N-1} (x_{i+1} - x_i) f(\bar{x}_i ),
\end{equation}
where $\bar{x}_i = \frac{x_i + x_{i+1}}{2}$ is the midpoint of the interval $[x_i, x_{i+1}]$.
To estimate the error associated with this method, expand the integrand in a Taylor series on the interval $[x_i, x_{i+1}]$ around the midpoint $\bar{x}_i$,
\begin{equation}
f(x) = f(\bar{x}_i) + f'(\bar{x}_i)(x-\bar{x}_i) + \frac{f''(\bar{x}_i)}{2}(x-\bar{x}_i)^2 + \frac{f'''(\bar{x}_i)}{6}(x-\bar{x}_i)^3 + \dots
\end{equation}
Integrating this expression from $x_i$ to $x_{i+1}$, and noting that the odd-order terms vanish, gives
\begin{equation}
\int_{x_i}^{x_{i+1}} f(x)dx = f(\bar{x}_i)(x_{i+1}-x_i) + \frac{f''(\bar{x}_i)}{24}(x_{i+1}-x_i)^3 + \dots
\end{equation}
This expansion shows that the error associated with the approximation in each interval is of order $\varepsilon_i = (x_{i+1}-x_i)^3$. Since the total integral is obtained as a sum of $N$ similar integrals (one per subinterval), the total error is of order $\varepsilon = N \varepsilon_i$.
When the nodes are equally spaced, the interval size is $\Delta x = \frac{b - a}{N}$, so the error associated with each interval is $\varepsilon_i = \frac{(b - a)^3}{N^3} = \Delta x^3$, while the total quadrature error is of order $\varepsilon = N \varepsilon_i = \frac{(b - a)^3}{N^2} = N\Delta x^3$.
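For an analytic integrand, the midpoint rule and its second-order error scaling are easy to verify directly; a minimal sketch (function names are illustrative):

```python
import numpy as np

def midpoint_rule(f, a, b, N):
    """Composite midpoint rule with N equal subintervals."""
    x = np.linspace(a, b, N + 1)
    mid = (x[:-1] + x[1:]) / 2               # midpoint of each interval
    return np.sum((x[1:] - x[:-1]) * f(mid))

# The error should shrink roughly as 1/N^2 (second-order accuracy):
e10  = abs(midpoint_rule(np.sin, 0.0, np.pi, 10)  - 2.0)
e100 = abs(midpoint_rule(np.sin, 0.0, np.pi, 100) - 2.0)
# e10 / e100 is close to 100
```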
#### Example: numerical integration
We read the function values from a .txt file and numerically estimate the integral of the function using the midpoint rule. Because the function is given as discrete points (not in analytic form), it is not possible to evaluate it at the midpoints, so we instead use the value at the first node of each interval to compute the partial sums (i.e. the rectangle rule).
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Reading the data
data = np.loadtxt('data_points1.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
n = len(x) # Number of nodes
N = n-1 # Number of intervals
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N):
dx = x[i+1] - x[i]
Integral = Integral + dx*f[i]
plt.vlines([x[i], x[i+1]], 0, [f[i], f[i]], color='red')
plt.plot([x[i], x[i+1]], [f[i], f[i]],color='red')
plt.fill_between([x[i], x[i+1]], [f[i], f[i]],color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Rectangle-rule integration')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print('Numerical integration of the discrete function')
print(f'from x = {x[0]:.1f} to x = {x[N]:.1f} gives I = {Integral:.5e}')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Reading the data
data = np.loadtxt('data_points2.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
n = len(x) # Number of nodes
N = n-1 # Number of intervals
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N):
dx = x[i+1] - x[i]
Integral = Integral + dx*f[i]
plt.vlines([x[i], x[i+1]], 0, [f[i], f[i]], color='red')
plt.plot([x[i], x[i+1]], [f[i], f[i]],color='red')
plt.fill_between([x[i], x[i+1]], [f[i], f[i]],color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Rectangle-rule integration')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print('Numerical integration of the discrete function')
print(f'from x = {x[0]:.1f} to x = {x[N]:.1f} gives I = {Integral:.5e}')
```
---
### Trapezoid rule
The next generalization of the rectangle rule is to approximate the function $f(x)$ with a linear polynomial in each interval. This is known as the **trapezoid rule**, and the corresponding quadrature is given by
\begin{equation}
I = \int_{a}^{b} f(x) dx \approx \sum _{i=0}^{N-1} \frac{1}{2} (x_{i+1} - x_i) \left[ f(x_{i+1}) + f(x_i) \right] .
\end{equation}
Unlike the midpoint rule, this method does not require evaluating the integrand at the midpoints, only at the two nodes of each interval.
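Because the trapezoid rule only needs the node values, it applies directly to discretely sampled data; a minimal sketch (names illustrative):

```python
import numpy as np

def trapezoid_rule(x, f):
    """Composite trapezoid rule for values f sampled at nodes x (spacing may be unequal)."""
    dx = np.diff(x)                          # interval widths x_{i+1} - x_i
    return np.sum(dx * (f[:-1] + f[1:]) / 2)

x = np.linspace(0.0, np.pi, 101)
approx = trapezoid_rule(x, np.sin(x))        # exact value is 2
```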
#### Example: trapezoid-rule integration
Again, the function values are read from a .txt file and integrated numerically using the trapezoid rule.
```
import numpy as np
import matplotlib.pyplot as plt
# Reading the data
data = np.loadtxt('data_points1.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
n = len(x) # Number of nodes
N = n-1 # Number of intervals
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N):
dx = x[i+1] - x[i]
f_mean = (f[i] + f[i+1])/2
Integral = Integral + dx*f_mean
plt.vlines([x[i], x[i+1]], 0, [f[i], f[i+1]], color='red')
plt.plot([x[i], x[i+1]], [f[i], f[i+1]],color='red')
plt.fill_between([x[i], x[i+1]], [f[i], f[i+1]],color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Trapezoid-rule integration')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print('Numerical integration of the discrete function')
print(f'from x = {x[0]:.1f} to x = {x[N]:.1f} gives I = {Integral:.5e}')
import numpy as np
import matplotlib.pyplot as plt
# Reading the data
data = np.loadtxt('data_points2.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
n = len(x) # Number of nodes
N = n-1 # Number of intervals
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N):
dx = x[i+1] - x[i]
f_mean = (f[i] + f[i+1])/2
Integral = Integral + dx*f_mean
plt.vlines([x[i], x[i+1]], 0, [f[i], f[i+1]], color='red')
plt.plot([x[i], x[i+1]], [f[i], f[i+1]],color='red')
plt.fill_between([x[i], x[i+1]], [f[i], f[i+1]],color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Trapezoid-rule integration')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print('Numerical integration of the discrete function')
print(f'from x = {x[0]:.1f} to x = {x[N]:.1f} gives I = {Integral:.5e}')
```
---
## Simpson's rule
Simpson's rule estimates the integral of $f(x)$ by approximating the integrand with a second-order polynomial on each pair of intervals.
If three values of the function, $f_1 =f(x_1)$, $f_2 =f(x_2)$ and $f_3 =f(x_3)$, are known at points $x_1 < x_2 < x_3$, one can fit a second-order polynomial of the form
\begin{equation}
p_2 (x) = A (x-x_1)^2 + B (x-x_1) + C .
\end{equation}
Integrating this polynomial over the interval $[x_1 , x_3]$ gives
\begin{equation}
\int_{x_1}^{x_3} p_2 (x) dx = \frac{x_3 - x_1}{6} \left( f_1 + 4f_2 + f_3 \right) + \mathcal{O} \left( (x_3 - x_1)^4 \right)
\end{equation}
---
### Simpson's rule with equally spaced nodes
With $n$ equally spaced nodes in the integration interval, or equivalently $N=n-1$ intervals of constant width $\Delta x$ (with $N$ even), the total integral under Simpson's rule reads
\begin{equation}
I = \int_a^b f(x) dx \approx \frac{\Delta x}{3} \sum_{i=0}^{\frac{N}{2}-1} \left[ f(x_{2i}) + 4f(x_{2i+1}) + f(x_{2i+2}) \right] + \mathcal{O} \left(f''' \Delta x^4 \right)
\end{equation}
The numerical error in each interval pair is of order $\Delta x^4$, so the total integral has an error of order $N \Delta x^4 = \frac{(b-a)^4}{N^3}$.
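For equally spaced nodes, the sum above can be written compactly with array slicing; a minimal sketch assuming an even number of intervals (names illustrative):

```python
import numpy as np

def simpson_rule(f, a, b, N):
    """Composite Simpson's rule with N (even) equal intervals."""
    if N % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of intervals")
    x = np.linspace(a, b, N + 1)
    dx = (b - a) / N
    fx = f(x)
    # pair up intervals: nodes 2i, 2i+1, 2i+2 get weights 1, 4, 1
    return dx / 3 * np.sum(fx[0:-1:2] + 4 * fx[1::2] + fx[2::2])

approx = simpson_rule(np.sin, 0.0, np.pi, 10)   # exact value is 2
```

Simpson's rule integrates polynomials up to degree three exactly, so even coarse grids give high accuracy for smooth integrands.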
#### Example: Simpson's rule integration
```
import numpy as np
import matplotlib.pyplot as plt
def quadraticInterpolation(x1, x2, x3, f1, f2, f3, x):
p2 = (((x-x2)*(x-x3))/((x1-x2)*(x1-x3)))*f1 + (((x-x1)*(x-x3))/((x2-x1)*(x2-x3)))*f2 +\
(((x-x1)*(x-x2))/((x3-x1)*(x3-x2)))*f3
return p2
# Reading the data
data = np.loadtxt('data_points1.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
n = len(x) # Number of nodes
N = n-1 # Number of intervals
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N//2):  # Simpson's rule pairs up intervals, so N should be even
    dx = x[2*i+1] - x[2*i]
Integral = Integral + dx*(f[2*i] + 4*f[2*i+1] + f[2*i+2])/3
x_interval = np.linspace(x[2*i],x[2*i+2],6)
y_interval = quadraticInterpolation(x[2*i], x[2*i+1], x[2*i+2], f[2*i], f[2*i+1], f[2*i+2], x_interval)
plt.plot(x_interval, y_interval,'r')
plt.fill_between(x_interval, y_interval, color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title("Simpson's-rule integration")
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print('Numerical integration of the discrete function')
print(f'from x = {x[0]:.1f} to x = {x[N]:.1f} gives I = {Integral:.5e}')
import numpy as np
import matplotlib.pyplot as plt
def quadraticInterpolation(x1, x2, x3, f1, f2, f3, x):
p2 = (((x-x2)*(x-x3))/((x1-x2)*(x1-x3)))*f1 + (((x-x1)*(x-x3))/((x2-x1)*(x2-x3)))*f2 +\
(((x-x1)*(x-x2))/((x3-x1)*(x3-x2)))*f3
return p2
# Reading the data
data = np.loadtxt('data_points2.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
n = len(x) # Number of nodes
N = n-1 # Number of intervals
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N//2):  # Simpson's rule pairs up intervals, so N should be even
dx = x[2*i+1] - x[2*i]
Integral = Integral + dx*(f[2*i] + 4*f[2*i+1] + f[2*i+2])/3
x_interval = np.linspace(x[2*i],x[2*i+2],6)
y_interval = quadraticInterpolation(x[2*i], x[2*i+1], x[2*i+2], f[2*i], f[2*i+1], f[2*i+2], x_interval)
plt.plot(x_interval, y_interval,'r')
plt.fill_between(x_interval, y_interval, color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title("Simpson's-rule integration")
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print('Numerical integration of the discrete function')
print(f'from x = {x[0]:.1f} to x = {x[N]:.1f} gives I = {Integral:.5e}')
```
---
### Simpson's rule for non-equidistant nodes
When the discretization nodes of $f(x)$ are not equally spaced, Simpson's rule must be modified to
\begin{equation}
I = \int_a^b f(x) dx \approx \sum_{i=0}^{\frac{N}{2}-1} \left[ \alpha f(x_{2i}) + \beta f(x_{2i+1}) +\gamma f(x_{2i+2}) \right]
\end{equation}
where
\begin{align}
\alpha &= \frac{-h_{2i+1}^2 + h_{2i+1} h_{2i} + 2 h_{2i}^2}{6 h_{2i}} \\
\beta &= \frac{ (h_{2i+1} + h_{2i})^3 }{6 h_{2i+1} h_{2i}} \\
\gamma &= \frac{2 h_{2i+1}^2 + h_{2i+1} h_{2i} - h_{2i}^2}{6 h_{2i+1}}
\end{align}
and $h_j = x_{j+1} - x_j$.
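The weighted sum above is straightforward to implement directly; a minimal sketch applying the $\alpha,\beta,\gamma$ weights panel by panel (names illustrative):

```python
import numpy as np

def simpson_nonuniform(x, f):
    """Simpson's rule for (possibly) unequally spaced nodes; needs an even number of intervals."""
    N = len(x) - 1
    if N % 2 != 0:
        raise ValueError("need an even number of intervals")
    I = 0.0
    for i in range(N // 2):
        h0 = x[2*i + 1] - x[2*i]       # h_{2i}
        h1 = x[2*i + 2] - x[2*i + 1]   # h_{2i+1}
        alpha = (-h1**2 + h1*h0 + 2*h0**2) / (6*h0)
        beta  = (h1 + h0)**3 / (6*h1*h0)
        gamma = (2*h1**2 + h1*h0 - h0**2) / (6*h1)
        I += alpha*f[2*i] + beta*f[2*i + 1] + gamma*f[2*i + 2]
    return I

x = np.array([0.0, 0.3, 1.0, 1.5, 2.0])   # unevenly spaced nodes
approx = simpson_nonuniform(x, x**2)      # integral of x^2 on [0, 2] is 8/3
```

Since each panel integrates the quadratic interpolant exactly, the result is exact (to rounding) whenever the integrand itself is a quadratic, regardless of the node spacing.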
---
## (Self-)Convergence Factor for Simpson's Rule
To check the accuracy of Simpson's integration method, we compute the self-convergence factor by integrating the function $\sin x$ over the interval $0\leq x \leq 2\pi$.
```
import numpy as np
import matplotlib.pyplot as plt
def intsin(n):
x = np.linspace(0, 2*np.pi, n)
f = np.sin(x)
dx = x[1] - x[0]
N = n-1 # Number of intervals
# Numerical integration loop
Integral = 0.
    for i in range(N//2):  # pairs of intervals; N should be even
Integral = Integral + dx*(f[2*i] + 4*f[2*i+1] + f[2*i+2])/3
return Integral
n=1000
y = intsin(n)
print(f'Numerical integration with n = {n:.0f} gives I = {y:.5f}')
y_1 = intsin(100)
y_2 = intsin(1000)
y_3 = intsin(10000)
C_self = np.abs(y_3 - y_2)/np.abs(y_2 - y_1)
print(f'The convergence factor is C = {C_self:.2f}')
print(f'which corresponds to an accuracy of order h^{-np.log10(C_self):.1f}')
```
Recall that for Simpson's rule here, the order of accuracy only reaches
\begin{equation}
\mathcal{O} \left( f''' \Delta x^4\right) \sim \Delta x^2
\end{equation}
because $f''' \sim \Delta x^{-2}$.
# Step 1 - Downloading Fitbit data via the API
For this initial step, we are using [python-fitbit](https://github.com/orcasgit/python-fitbit), a Python client for accessing the Fitbit API. We furthermore require an existing Fitbit OAuth 2.0 Client (Consumer) ID and Client (Consumer) Secret. These can be obtained by registering an app [here](https://dev.fitbit.com/apps/new).
Let's make sure that all Python dependencies needed below can be loaded:
```
import fitbit # clone https://github.com/orcasgit/python-fitbit
from bin.parse_credentials import parse_client_credentials, parse_tokens # clone https://github.com/JungeAlexander/fitbit-data
```
Afterwards, save the Client ID and Client Secret to a text file named `client_id_secret.txt` located in the same directory as this Jupyter notebook. `client_id_secret.txt` should look like this:
```
id = 123ABC
secret = 1234567890abcdef1234567890abcdef123456789
```
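The `parse_client_credentials` helper used below ships with the [fitbit-data](https://github.com/JungeAlexander/fitbit-data) repository; for illustration only, a minimal `key = value` parser might look like this (a sketch, not the repository's actual implementation):

```python
def parse_client_credentials(path):
    """Read 'key = value' lines from a credentials file; return (client_id, client_secret)."""
    values = {}
    with open(path) as fh:
        for line in fh:
            if '=' in line:
                key, _, value = line.partition('=')
                values[key.strip()] = value.strip()
    return values['id'], values['secret']
```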
Load the credentials into two variables using a function shipped with this repository:
```
client_id, client_secret = parse_client_credentials('client_id_secret.txt')
```
Furthermore, API access and refresh tokens are required for python-fitbit to access Fitbit data. These can be obtained using the script `gather_keys_oauth2.py` shipped with python-fitbit.
Execute the following command to write the output of `gather_keys_oauth2.py` to `access_refresh_tokens.txt`:
```
!./gather_keys_oauth2.py $client_id $client_secret >access_refresh_tokens.txt 2>/dev/null
```
Now load access and refresh tokens into two variables:
```
access_token, refresh_token = parse_tokens('access_refresh_tokens.txt')
authd_client = fitbit.Fitbit(client_id, client_secret, oauth2=True,
access_token=access_token,
refresh_token=refresh_token)
```
Finally we can download Fitbit data using the API. Let's look at my sleep and step data from last Sunday as an example:
```
sleep_ts = authd_client.time_series('sleep/minutesAsleep', period='3m')
sleep_ts['sleep-minutesAsleep'][-7]
steps_ts = authd_client.time_series('activities/steps', period='3m')
steps_ts['activities-steps'][-7]
```
Looks like I didn't sleep too much the night of Saturday to Sunday and I gathered more than 15k steps on Sunday (mostly playing Ultimate Frisbee, I presume).
## Next steps and further reading
Next, I would like to download additional data from Fitbit gathered over a longer period of time, store them in suitable data structures and create some basic visualizations.
For additional information regarding the Fitbit API and which data can be downloaded, check the [API documentation](https://dev.fitbit.com/docs/basics/). Also the [python-fitbit documentation](http://python-fitbit.readthedocs.org/en/latest/#) is worth a visit.
##### Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/boundless"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/boundless.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/boundless.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/boundless.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
# Boundless Colab
Welcome to the Boundless model Colab! This notebook will take you through the steps of running the model on images and visualize the results.
## Overview
Boundless is a model for image extrapolation. This model takes an image, internally masks a portion of it ([1/2](https://tfhub.dev/google/boundless/half/1), [1/4](https://tfhub.dev/google/boundless/quarter/1), [3/4](https://tfhub.dev/google/boundless/three_quarter/1)) and completes the masked part. For more details refer to [Boundless: Generative Adversarial Networks for Image Extension](https://arxiv.org/pdf/1908.07007.pdf) or the model documentation on TensorFlow Hub.
## Imports and Setup
Let's start with the base imports.
```
import tensorflow as tf
import tensorflow_hub as hub
from io import BytesIO
from PIL import Image as PilImage
import numpy as np
from matplotlib import pyplot as plt
from six.moves.urllib.request import urlopen
```
## Reading image for input
Let's create a utility method to load the image and format it for the model (257x257x3). This method also crops the image to a square to avoid distortion, and it works with local images or images from the internet.
```
def read_image(filename):
fd = None
if(filename.startswith('http')):
fd = urlopen(filename)
else:
fd = tf.io.gfile.GFile(filename, 'rb')
pil_image = PilImage.open(fd)
width, height = pil_image.size
# crop to make the image square
pil_image = pil_image.crop((0, 0, height, height))
pil_image = pil_image.resize((257,257),PilImage.ANTIALIAS)
image_unscaled = np.array(pil_image)
image_np = np.expand_dims(
image_unscaled.astype(np.float32) / 255., axis=0)
return image_np
```
## Visualization method
We will also create a visualization method to show the original image side by side with the masked version and the "filled" version, both generated by the model.
```
def visualize_output_comparison(img_original, img_masked, img_filled):
plt.figure(figsize=(24,12))
plt.subplot(131)
plt.imshow((np.squeeze(img_original)))
plt.title("Original", fontsize=24)
plt.axis('off')
plt.subplot(132)
plt.imshow((np.squeeze(img_masked)))
plt.title("Masked", fontsize=24)
plt.axis('off')
plt.subplot(133)
plt.imshow((np.squeeze(img_filled)))
plt.title("Generated", fontsize=24)
plt.axis('off')
plt.show()
```
## Loading an Image
We will load a sample image, but feel free to upload your own image to the colab and try it. Remember that the model has some limitations regarding images of humans.
```
wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Nusfjord_road%2C_2010_09.jpg/800px-Nusfjord_road%2C_2010_09.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/Beech_forest_M%C3%A1tra_in_winter.jpg/640px-Beech_forest_M%C3%A1tra_in_winter.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Marmolada_Sunset.jpg/640px-Marmolada_Sunset.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Aegina_sunset.jpg/640px-Aegina_sunset.jpg"
input_img = read_image(wikimedia)
```
## Selecting a model from TensorFlow Hub
On TensorFlow Hub we have 3 versions of the Boundless model: Half, Quarter and Three Quarters.
In the following cell you can choose any of them and try it on your image. If you want to try another one, just choose it and execute the following cells.
```
#@title Model Selection { display-mode: "form" }
model_name = 'Boundless Quarter' # @param ['Boundless Half', 'Boundless Quarter', 'Boundless Three Quarters']
model_handle_map = {
'Boundless Half' : 'https://tfhub.dev/google/boundless/half/1',
'Boundless Quarter' : 'https://tfhub.dev/google/boundless/quarter/1',
'Boundless Three Quarters' : 'https://tfhub.dev/google/boundless/three_quarter/1'
}
model_handle = model_handle_map[model_name]
```
Now that we've chosen the model we want, let's load it from TensorFlow Hub.
**Note**: You can point your browser to the model handle to read the model's documentation.
```
print("Loading model {} ({})".format(model_name, model_handle))
model = hub.load(model_handle)
```
## Doing Inference
The Boundless model has two outputs:
* The input image with a mask applied
* The masked image with the extrapolation to complete it
We can use these two images to show a comparison visualization.
```
result = model.signatures['default'](tf.constant(input_img))
generated_image = result['default']
masked_image = result['masked_image']
visualize_output_comparison(input_img, masked_image, generated_image)
```
# Experiment 02: Study performance stability over time
Study how well models trained on images from early dates perform at test time on images from later dates. This is meant to investigate how stable model performance is over time, as news rooms' image publishing pipelines (possibly) evolve.
For each source, sort the images chronologically by the news article date, then split the images into a training subset (with images from early dates), and a test set (with images from later dates.)
Then train models using the images from early dates, and then test the models on images from late dates.
Only QM features are included, since they greatly outperformed CL features. Only the Naive Bayes model is studied here, so the focus is on the effect of time rather than the effect of the model (and since NB was a top-performing model).
```
%matplotlib widget
%load_ext autoreload
%autoreload 2
import os
import sys
import subprocess
import random
import pickle
import numpy as np
import pandas as pd
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
# from tqdm.autonotebook import tqdm
from tqdm.notebook import tqdm
import uncertainties
from image_compression_attribution.common.code.models import quant_matrices, compr_levels
from image_compression_attribution.common.code.summarize_quant_matrices import summarize_compression_features
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 500)
pd.set_option('display.max_colwidth', 500)
from sklearn.metrics import make_scorer, roc_curve
from scipy.optimize import brentq
from scipy.interpolate import interp1d
#WARNING: this method does not seem to work well when there are large gaps
#in the ROC curve. Hence, only use this if you have interpolated between
#ROC curve data points to fill in the roc curve on a grid with small intervals.
#https://github.com/scikit-learn/scikit-learn/issues/15247#issuecomment-542138349
def calculate_eer(fpr, tpr):
'''
Returns the equal error rate for a binary classifier output.
'''
eer = brentq(lambda x : 1. - x - interp1d(fpr, tpr)(x), 0., 1.)
return eer
#---------------------------------------------------------------
#Code to combine mean value and uncertainty estimate into
#one formatted string, like 3.14 +/- .02 becomes "3.14(2)"
import string
class ShorthandFormatter(string.Formatter):
"""https://pythonhosted.org/uncertainties/user_guide.html"""
def format_field(self, value, format_spec):
if isinstance(value, uncertainties.UFloat):
return value.format(format_spec+'S') # Shorthand option added
# Special formatting for other types can be added here (floats, etc.)
else:
# Usual formatting:
return super(ShorthandFormatter, self).format_field(
value, format_spec)
def uncertainty_format_arrays(mean_vals, uncertainty_vals):
frmtr_uncertainty = ShorthandFormatter()
vals_formatted = []
for mean, uncert in zip(mean_vals, uncertainty_vals):
number = uncertainties.ufloat(mean, uncert)
str_formatted = frmtr_uncertainty.format("{0:.1u}", number)
vals_formatted.append(str_formatted)
return vals_formatted
RND_SEED=1234
np.random.seed(RND_SEED)
SUMMARY_FILE = "/app/dataset/data.csv"
RESULTS_FOLDER = "results/exp_02"
os.makedirs(RESULTS_FOLDER, exist_ok=True)
df = pd.read_csv(SUMMARY_FILE)
#We'll work with timestamps, so need to convert to a datetime for ease of use
df['timestamp'] = pd.to_datetime(df['timestamp'], utc=True)
#Drop non-image files, e.g. html files returned
#due to download errors
df, df_dropped = df[ df['mime'].str.startswith('image') ].reset_index(drop=True), \
df[ ~df['mime'].str.startswith('image') ].reset_index(drop=True)
sources = sorted(list(df['source'].unique()))
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc
from sklearn.ensemble import IsolationForest
#Guide to LabelEncoder:
#https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
#create numerical class labels for quantization categorical names (suitable for
#use as ML training feature vector)
le_qs = preprocessing.LabelEncoder()
le_qs.fit(df['q_name'])
df['q_name_class'] = le_qs.transform(df['q_name'])
sources = sorted(list(df['source'].unique()))
le_sources = preprocessing.LabelEncoder()
le_sources.fit(sources)
df['source_class'] = le_sources.transform(df['source'])
df
df.groupby('source')['timestamp'].min()
df.groupby('source')['timestamp'].max()
timespan = df.groupby('source')['timestamp'].max() - df.groupby('source')['timestamp'].min()
timespan_list = timespan.tolist()
timespan_list2 = [(x.days + x.seconds/86400.0)/365.2425 for x in timespan_list]
timespan_years = pd.Series(timespan_list2, index=sources)
print("mean timespan = {:.1f} years".format(timespan_years.mean()))
with open(os.path.join(RESULTS_FOLDER,"timespan_years_mean.txt"),"w") as file1:
file1.write("{:.1f}".format(timespan_years.mean()))
print("min timespan = {:.2f} years".format(timespan_years.min()))
with open(os.path.join(RESULTS_FOLDER,"timespan_years_min.txt"),"w") as file1:
file1.write("{:.2f}".format(timespan_years.min()))
print("max timespan = {:.1f} years".format(timespan_years.max()))
with open(os.path.join(RESULTS_FOLDER,"timespan_years_max.txt"),"w") as file1:
file1.write("{:.1f}".format(timespan_years.max()))
timespan_years
```
## Comment:
We see that for most sources we have images across large time spans, which is desirable for these experiments; the average time span is over 10 years, and the time spans range from 0.89 years to 21.4 years.
## Method:
Note: date ranges of different sources may not overlap. So instead of picking 1 cutoff date to be shared for all sources, we will instead have a separate cutoff date for each source, to split each source into two parts. Put another way, sort articles from each source by date, and within a source, split the articles into two parts: before and after the date cutoff.
1. All articles (from each source) from before the cutoff date form the train set -- the first 60% of articles.
1. Then form the test set:
1. Select all articles from a source after the cutoff date -- last 40% of articles.
1. Randomly sample an equal number of articles from the remaining sources, also after each one's cutoff date.
1. Combine them to form the test set.
1. Since the composition of the test set varies, repeat this 5x to quantify uncertainty.
First do some precomputation: For each source, sort the articles by date, then split the articles from each source into an early portion and a later portion. Early portions can be used for training, later portions for testing.
```
PERCENT_TEST = 0.40
df_articles = df[['articleHash', 'timestamp', 'source', 'source_class']].drop_duplicates()
articles_predate = {}
articles_postdate = {}
for source in sources:
#get all articles from the source, sorted by date
df_articles_from_source = df_articles[df_articles['source']==source].sort_values(by="timestamp")
num_test_articles_from_source = int(PERCENT_TEST*len(df_articles_from_source))
num_train_articles_from_source = len(df_articles_from_source) - num_test_articles_from_source
df_art_from_source_predate = df_articles_from_source.iloc[0:num_train_articles_from_source,:]
df_art_from_source_postdate = df_articles_from_source.iloc[num_train_articles_from_source:,:]
articles_predate[source] = df_art_from_source_predate
articles_postdate[source] = df_art_from_source_postdate
#Prepare Train and Test Split.
all_q_name_vals = sorted(df['q_name'].unique())
#Sample from articles (so we can keep images from articles grouped together)
df_articles = df[['articleHash', 'timestamp', 'source', 'source_class']].drop_duplicates()
NUM_TRIALS = 5
results_per_trial_qm = {}
for trial in tqdm(range(NUM_TRIALS)):
numsamples_balanced_testset=[]
AUCs_qm = []
results_qm={}
for source in sources:
remaining_sources = [s for s in sources if s != source]
#-----------------------------------
#Form train/test data split. Train set first:
#All articles (from every source) from before their cutoff date form the train set
df_train_articles = None
for src in sources:
if df_train_articles is None:
df_train_articles = articles_predate[src]
else:
df_train_articles = pd.concat([df_train_articles, articles_predate[src] ])
#----------------------
#Test set:
#All articles from a source after its cutoff date contributes to test set:
df_articles_test_from_source = articles_postdate[source]
num_test_articles_from_source = len(df_articles_test_from_source)
#-------
        #collect all articles not from this source (i.e. from the remaining sources) from after their cutoff dates
df_articles_postdate_not_from_source = None
for src in remaining_sources:
if df_articles_postdate_not_from_source is None:
df_articles_postdate_not_from_source = articles_postdate[src]
else:
df_articles_postdate_not_from_source = pd.concat([df_articles_postdate_not_from_source, articles_postdate[src] ])
#Randomly sample an equal number of articles from the remaining sources, after their cutoff dates.
num_test_articles_not_from_source = num_test_articles_from_source
df_articles_test_not_from_source = df_articles_postdate_not_from_source.sample(num_test_articles_not_from_source)
#------
#combine to build the test set
df_test_articles = pd.concat([df_articles_test_from_source, df_articles_test_not_from_source])
#----------------------
#Get all images articles in train/test splits:
df_train = df[ df['articleHash'].isin(df_train_articles['articleHash']) ].reset_index()
df_test = df[ df['articleHash'].isin(df_test_articles['articleHash']) ].reset_index()
#Set ground truth label: 1 if image misattributed, else 0
df_test['is_misattributed'] = np.array(df_test['source']!=source, dtype=int)
#-----------------------------------
#Fit models
#quantization matrices
qm_model = quant_matrices.attribution_quant_matrices()
qm_model.fit(df_train[['source', 'q_name']], compr_category_names=all_q_name_vals)
#-----------------------------------
#prediction on test set
claimed_source_list = [source]*len(df_test)
LLRs_isfake_qm, probs_fromsource_qm, probs_notfromsource_qm, \
unrecognized_sources_qm = qm_model.predict(df_test['q_name'], claimed_source_list)
df_test['LLR_qm'] = LLRs_isfake_qm
#Determine if prediction is wrong
misclassified_qm = (df_test['is_misattributed'] - .5) * LLRs_isfake_qm < 0
df_test['misclassified_qm'] = misclassified_qm
#-----------------------------------
#Use hypothesis test score to compute ROC curve for this source:
numsamples_balanced_testset.append(len(df_test))
fpr, tpr, thresholds = roc_curve(df_test['is_misattributed'], df_test['LLR_qm'], pos_label=1)
roc_auc = auc(fpr, tpr)
AUCs_qm.append(roc_auc)
results_qm[source] = {'source': source, 'fpr': fpr, 'tpr':tpr,
'auc':roc_auc, 'numsamples':len(df_test),
'scores_isfake': df_test['LLR_qm'],
'label_isfake': df_test['is_misattributed'],
'df_test':df_test}
results_per_trial_qm[trial] = results_qm
```
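The misclassification test in the loop above relies on a sign convention: an image is predicted misattributed when its log-likelihood ratio is positive, so `(label - 0.5) * LLR < 0` flags exactly the wrong predictions. A toy check (the labels and scores below are made up for illustration):

```python
import numpy as np

# label 1 = misattributed; a positive LLR predicts "misattributed",
# so a prediction is wrong iff (label - 0.5) and the LLR disagree in sign
labels = np.array([1, 1, 0, 0])
llrs = np.array([2.3, -0.4, -1.1, 0.7])
misclassified = (labels - 0.5) * llrs < 0
print(misclassified)  # [False  True False  True]
```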
## Summarize results
```
FPR_THRESHOLD = 0.005 # compute TPR @ this FPR = 0.5%
AUCs_mean_cl = []
AUCs_std_cl = []
tpr_at_fpr_mean_cl = []
AUCs_mean_qm = []
AUCs_std_qm = []
tpr_at_fpr_mean_qm = []
#quantization matrices qm
for source in sources:
AUCs_per_trial = []
tpr_per_trial = []
fpr_per_trial = []
tprs_at_fpr_threshold = []
for trial in range(NUM_TRIALS):
AUCs_per_trial.append(results_per_trial_qm[trial][source]['auc'])
fpr = results_per_trial_qm[trial][source]['fpr']
tpr = results_per_trial_qm[trial][source]['tpr']
fpr_per_trial.append(fpr)
tpr_per_trial.append(tpr)
tprs_at_fpr_threshold.append( np.interp(FPR_THRESHOLD, fpr, tpr) )
AUCs_mean_qm.append(np.mean(AUCs_per_trial))
AUCs_std_qm.append(np.std(AUCs_per_trial))
tpr_at_fpr_mean_qm.append(np.mean(tprs_at_fpr_threshold))
df_summary = pd.DataFrame({'source':sources, 'test_size':numsamples_balanced_testset,
'AUC_mean_qm':AUCs_mean_qm, 'AUC_std_qm':AUCs_std_qm,
'tpr_at_fpr_mean_qm':tpr_at_fpr_mean_qm,
} )
df_summary['AUC_formatted_qm'] = uncertainty_format_arrays(df_summary['AUC_mean_qm'], df_summary['AUC_std_qm'])
df_summary
```
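`uncertainty_format_arrays` is defined elsewhere in the project; a minimal sketch of what such a helper might look like, assuming it renders each mean/std pair as a `mean ± std` string (the name and exact formatting are assumptions, not the project's actual implementation):

```python
def uncertainty_format_arrays(means, stds, digits=3):
    # hypothetical sketch: format each (mean, std) pair as "mean ± std"
    return [f"{m:.{digits}f} ± {s:.{digits}f}" for m, s in zip(means, stds)]

print(uncertainty_format_arrays([0.9123, 0.8751], [0.0151, 0.0042]))
# ['0.912 ± 0.015', '0.875 ± 0.004']
```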
# Plot multiple ROC curves on one graph with uncertainty bands
```
EERs_mean_qm = []
#------------
#New-EER
EERs_all_qm = []
EERs_std_qm = []
#------------
plt.figure(figsize=(6,5))
plt.plot([0, 1], [0, 1], color="black", linestyle="--")
plt.plot(np.linspace(0,1,100), 1-np.linspace(0,1,100), color="red", linestyle="--")
interp_fpr = np.linspace(0, 1, 1000)
for source in sources[0:15]:
#New-EER
EERs_per_src_qm = []
interp_tprs = []
#interpolate between fpr,tpr datapoints to compute tpr at regular fpr intervals
for trial in range(NUM_TRIALS):
fpr = results_per_trial_qm[trial][source]['fpr']
tpr = results_per_trial_qm[trial][source]['tpr']
interp_tpr = np.interp(interp_fpr, fpr, tpr)
interp_tpr[0] = 0.0
interp_tprs.append(interp_tpr)
#------------
#New-EER
EERs_per_src_qm.append(calculate_eer(interp_fpr, interp_tpr)) #get EERs across all trials for this source
EERs_std_qm.append( np.std(EERs_per_src_qm) ) #gives a std of EER for each source, across all 5 trials
EERs_all_qm.append(EERs_per_src_qm) #all data: first index gives src, second index gives trial
#------------
mean_tpr = np.mean(interp_tprs, axis=0)
mean_tpr[-1] = 1.0
EERs_mean_qm.append(calculate_eer(interp_fpr, mean_tpr))
std_tpr = np.std(interp_tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.plot(interp_fpr, mean_tpr, linestyle="-", label=source)
plt.fill_between(interp_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2)
auc_mean = float(df_summary.loc[df_summary['source']==source, 'AUC_mean_qm'])
auc_std = float(df_summary.loc[ df_summary['source']==source, 'AUC_std_qm'])
tpr_at_fpr_mean = float(df_summary.loc[ df_summary['source']==source, 'tpr_at_fpr_mean_qm'])
numsamples = int(df_summary.loc[ df_summary['source']==source, 'test_size'])
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.legend()
plt.title("Verification: Time shift (part 1)")
plt.tight_layout()
plt.show()
#uncomment to save (and call it before plt.show(), or the saved figure may be blank):
# plt.savefig(os.path.join(RESULTS_FOLDER,"roc_curves_all_curves1.pdf"), bbox_inches='tight')
plt.figure(figsize=(6,5))
plt.plot([0, 1], [0, 1], color="black", linestyle="--")
plt.plot(np.linspace(0,1,100), 1-np.linspace(0,1,100), color="red", linestyle="--")
interp_fpr = np.linspace(0, 1, 1000)
for source in sources[15:]:
#New-EER
EERs_per_src_qm = []
interp_tprs = []
#interpolate between fpr,tpr datapoints to compute tpr at regular fpr intervals
for trial in range(NUM_TRIALS):
fpr = results_per_trial_qm[trial][source]['fpr']
tpr = results_per_trial_qm[trial][source]['tpr']
interp_tpr = np.interp(interp_fpr, fpr, tpr)
interp_tpr[0] = 0.0
interp_tprs.append(interp_tpr)
#------------
#New-EER
EERs_per_src_qm.append(calculate_eer(interp_fpr, interp_tpr)) #get EERs across all trials for this source
EERs_std_qm.append( np.std(EERs_per_src_qm) ) #gives a std of EER for each source, across all 5 trials
EERs_all_qm.append(EERs_per_src_qm) #all data: first index gives src, second index gives trial
#------------
mean_tpr = np.mean(interp_tprs, axis=0)
mean_tpr[-1] = 1.0
EERs_mean_qm.append(calculate_eer(interp_fpr, mean_tpr))
std_tpr = np.std(interp_tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.plot(interp_fpr, mean_tpr, linestyle="-", label=source)
plt.fill_between(interp_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2)
auc_mean = float(df_summary.loc[df_summary['source']==source, 'AUC_mean_qm'])
auc_std = float(df_summary.loc[ df_summary['source']==source, 'AUC_std_qm'])
tpr_at_fpr_mean = float(df_summary.loc[ df_summary['source']==source, 'tpr_at_fpr_mean_qm'])
numsamples = int(df_summary.loc[ df_summary['source']==source, 'test_size'])
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.legend()
plt.title("Verification: Time shift (part 2)")
plt.tight_layout()
plt.show()
#uncomment to save (and call it before plt.show(), or the saved figure may be blank):
# plt.savefig(os.path.join(RESULTS_FOLDER,"roc_curves_all_curves2.pdf"), bbox_inches='tight')
df_summary['EER_mean_qm'] = EERs_mean_qm
#New-EER
df_summary['EER_std_qm'] = EERs_std_qm
df_latex = df_summary[['source', 'test_size', 'AUC_formatted_qm', 'tpr_at_fpr_mean_qm', 'EER_mean_qm']]
df_latex.columns=['source', 'test size', 'AUC', 'tpr@fpr', 'EER']
df_latex
# 3 sig figs use '%.3g'; 3 digits use '%.3f'
latex_table = df_latex.to_latex(index=False, float_format='%.3g')
with open(os.path.join(RESULTS_FOLDER,"table1.tex"),"w") as file1:
file1.write(latex_table)
df_metricplot = df_summary.sort_values(by='AUC_mean_qm', ascending=False).reset_index(drop=True)
sources_metricplot = list(df_metricplot['source'])
AUCs_metricplot = list(df_metricplot['AUC_mean_qm'])
# plt.figure(figsize=(6,3.5))
plt.figure(figsize=(6,2.6))
x_vals = [i for i,_ in enumerate(sources_metricplot)]
# plt.plot(x_vals, df_metricplot['EER_mean_qm'], linestyle='--', marker='.', label="EER", color="tab:blue")
plt.errorbar(x_vals, df_metricplot['EER_mean_qm'], yerr=df_metricplot['EER_std_qm'], fmt=".", linestyle="--",
label="EER", color="tab:blue", mfc="tab:blue", mec='tab:blue', ecolor="tab:blue", capsize=2)
plt.errorbar(x_vals, AUCs_metricplot, yerr=df_metricplot['AUC_std_qm'], fmt=".", linestyle="--",
label="AUC", color="tab:orange", mfc="tab:orange", mec='tab:orange', ecolor="tab:orange", capsize=2)
plt.xticks(x_vals, sources_metricplot, rotation=90)
handles, labels = plt.gca().get_legend_handles_labels()
handles = [handles[1], handles[0]]
labels = [labels[1], labels[0]]
plt.legend(handles, labels, loc="center left")
plt.title("Verification metrics: Time generalization")
# plt.tight_layout()
plt.yticks(np.arange(0.0, 1.2, 0.2))
plt.show()
#uncomment to save (and call it before plt.show(), or the saved figure may be blank):
# plt.savefig(os.path.join(RESULTS_FOLDER,"verification_metrics_plot.pdf"), bbox_inches='tight')
df_summary[['source', 'AUC_mean_qm']]
```
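`calculate_eer` is defined elsewhere; a sketch of one common way to compute it, assuming the equal error rate is taken where FPR equals FNR (= 1 − TPR) on the interpolated ROC curve:

```python
import numpy as np

def calculate_eer(fpr, tpr):
    # hypothetical sketch of the helper used above: find the ROC point
    # where FPR and FNR (= 1 - TPR) cross, and average the two rates there
    fpr = np.asarray(fpr)
    fnr = 1.0 - np.asarray(tpr)
    idx = np.nanargmin(np.abs(fpr - fnr))
    return (fpr[idx] + fnr[idx]) / 2.0

fpr = np.linspace(0, 1, 101)
tpr = np.sqrt(fpr)  # a toy concave ROC curve
eer = calculate_eer(fpr, tpr)
```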
### Save results used to make the figure, so I can add the curves to plots made in exp 01 code (to combine figs from experiments 1 and 2)
```
df_metricplot.to_csv(os.path.join(RESULTS_FOLDER,"exp_02_metrics_plot_data.csv"), index=False)
```
```
# Prepare environment
import os, sys
sys.path.insert(0, os.path.abspath('..'))
from IPython.display import display
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
```
## Example starts here
------
```
import asyncio
from ibstract import IB
from ibstract import MarketDataBlock
from ibstract import HistDataReq
from ibstract.utils import dtest, dtcst, dtutc
```
### Instantiate IB and connect to Gateway/TWS
**When connecting to IB Gateway/TWS, an internal semaphore is acquired for up to 32 clients operating concurrently.**
```
ib = IB('127.0.0.1', 4002)
ib2 = IB()
ib2.connect('127.0.0.1', 4002)
print('ib connected? ', ib.connected)
print('ib2 connected? ', ib2.connected)
ib
ib2
```
### HistDataReq - A request object for historical data
**`HistDataReq` constructor signature:**

    HistDataReq(sectype, symbol, barsize, timedur, timeend=None, datatype='TRADES', exchange='SMART', currency='USD')
```
req_list = [
HistDataReq('Stock', 'GS', '1 hour', '5 d', dtest(2017,9,12,12)),
HistDataReq('Stock', 'BAC', '1 day', '10 d'), # timeend=None means datetime.now(tz=pytz.utc)
HistDataReq('Stock', 'AMZN', '1 hour', '3 d', dtest(2017, 9, 12)),
HistDataReq('Stock', 'GOOG', '30 mins', '3 d', dtutc(2017, 9, 12, 23)),
HistDataReq('Stock', 'FB', '10 mins', '5 d', dtest(2017, 9, 12, 16)),
HistDataReq('Stock', 'TVIX', '5 mins', '5 d', dtcst(2017, 9, 12, 16)),
]
for req in req_list:
req
```
### Download data from IB requested by a HistDataReq
**Request contract details for a HistDataReq with the coroutine IB.hist_data_req_contract_details().**
```
req = HistDataReq('Stock', 'GS', '1 hour', '5 d', dtest(2017,9,12,12))
loop = asyncio.get_event_loop()
contract_details_list = loop.run_until_complete(ib.hist_data_req_contract_details(req))
contract_details_list
```
**The coroutine IB.hist_data_req_timezone() downloads the exchange time zone for the requested security.**
```
req = HistDataReq('Stock', 'GS', '1 hour', '5 d', dtest(2017,9,12,12))
loop = asyncio.get_event_loop()
xchg_tz = loop.run_until_complete(ib.hist_data_req_timezone(req))
xchg_tz
```
**The coroutine IB.req_hist_data_async(*reqs) downloads historical data for one or multiple HistDataReqs concurrently, and outputs a list of MarketDataBlock instances in the order of the input requests.**
```
loop = asyncio.get_event_loop()
blk_list = loop.run_until_complete(ib.req_hist_data_async(*req_list[:3]))
for blk in blk_list:
blk.df.head()
```
#### A blocking function IB.req_hist_data() is also available without using an asyncio event loop. It drives the coroutine version internally.
```
req = HistDataReq('Stock', 'GS', '1 hour', '5 d', dtest(2017,9,12,12))
blk = ib.req_hist_data(req)[0]
blk
```
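The blocking-wrapper pattern can be sketched in plain asyncio, independent of ibstract (the `fetch` coroutine below is a stand-in for illustration, not part of the library):

```python
import asyncio

async def fetch(req):
    # stand-in coroutine simulating a download; not an ibstract API
    await asyncio.sleep(0)
    return 'data:' + req

def fetch_blocking(*reqs):
    # blocking wrapper that drives the coroutines internally,
    # the same shape as a sync method wrapping its async counterpart
    async def gather_all():
        return await asyncio.gather(*(fetch(r) for r in reqs))
    return asyncio.run(gather_all())

print(fetch_blocking('GS', 'BAC'))  # results come back in request order
```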
# RNN and LSTM Assignment
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Chimpanzee_seated_at_typewriter.jpg/603px-Chimpanzee_seated_at_typewriter.jpg" width=400px>
It is said that [infinite monkeys typing for an infinite amount of time](https://en.wikipedia.org/wiki/Infinite_monkey_theorem) will eventually type, among other things, the complete works of William Shakespeare. Let's see if we can get there a bit faster, with the power of Recurrent Neural Networks and LSTM.
This text file contains the complete works of Shakespeare: https://www.gutenberg.org/files/100/100-0.txt
Use it as training data for an RNN - you can keep it simple and train at the character level, and that is suggested as an initial approach.
Then, use that trained RNN to generate Shakespearean-ish text. Your goal - a function that can take, as an argument, the size of text (e.g. number of characters or lines) to generate, and return generated text of that size.
Note - Shakespeare wrote an awful lot. It's OK, especially initially, to sample/use smaller data and parameters, so you can have a tighter feedback loop when you're trying to get things running. Then, once you've got a proof of concept - start pushing it more!
## Stretch goals:
- Refine the training and generation of text to be able to ask for different genres/styles of Shakespearean text (e.g. plays versus sonnets)
- Train a classification model that takes text and returns which work of Shakespeare it is most likely to be from
- Make it more performant! Many possible routes here - lean on Keras, optimize the code, and/or use more resources (AWS, etc.)
- Revisit the news example from class, and improve it - use categories or tags to refine the model/generation, or train a news classifier
- Run on bigger, better data
```
import numpy as np
class RNN_NLP_Generator(object):
"""RNN with one hidden layer"""
def __init__(self):
pass
def forward_prop(self, inputs, targets, h_prev):
xs, hs, ys, ps = {}, {}, {}, {}
hs[-1] = np.copy(h_prev)
loss = 0
for t in range(len(inputs)): # t is a "time step" and is used as a dict key
xs[t] = np.zeros((self.num_chars,1))
xs[t][inputs[t]] = 1
hs[t] = np.tanh(np.dot(self.W_xh, xs[t]) + np.dot(self.W_hh, hs[t-1]) + self.b_h)
ys[t] = np.dot(self.W_hy, hs[t]) + self.b_y # unnormalized log probabilities for next chars
ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars.
# Softmax
loss += -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss). Efficient and simple code
return loss, ps, hs, xs
def backward_prop(self, ps, inputs, hs, xs, targets):
# make all zero matrices
dWxh, dWhh, dWhy, dbh, dby, dhnext = \
[np.zeros_like(_) for _ in [self.W_xh, self.W_hh, self.W_hy,
self.b_h, self.b_y, hs[0]]]
# reversed
for t in reversed(range(len(inputs))):
dy = np.copy(ps[t]) # shape (num_chars,1). "dy" means "dloss/dy"
dy[targets[t]] -= 1 # backprop into y. After taking the soft max in the input vector, subtract 1 from the value of the element corresponding to the correct label.
dWhy += np.dot(dy, hs[t].T)
dby += dy
dh = np.dot(self.W_hy.T, dy) + dhnext # backprop into h.
dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity #tanh'(x) = 1-tanh^2(x)
dbh += dhraw
dWxh += np.dot(dhraw, xs[t].T)
dWhh += np.dot(dhraw, hs[t-1].T)
dhnext = np.dot(self.W_hh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
return dWxh, dWhh, dWhy, dbh, dby
def train(self, article_text, hidden_size=500, n_iterations=1000, sequence_length=40, learning_rate=1e-1):
self.article_text = article_text
self.hidden_size = hidden_size
chars = list(set(article_text))
self.num_chars = len(chars)
self.char_to_int = {c: i for i, c in enumerate(chars)}
self.int_to_char = {i: c for i, c in enumerate(chars)}
# original text encoded as integers, what we pass into our model
self.integer_encoded = [self.char_to_int[i] for i in article_text]
# Weights
self.W_xh = np.random.randn(hidden_size, self.num_chars) * 0.01
self.W_hh = np.random.randn(hidden_size, hidden_size) * 0.01
self.W_hy = np.random.randn(self.num_chars, hidden_size) * 0.01
# biases
self.b_h = np.zeros((hidden_size, 1))
self.b_y = np.zeros((self.num_chars, 1))
# previous state
self.h_prev = np.zeros((hidden_size, 1)) # h_(t-1)
batch_size = round((len(article_text) / sequence_length) + 0.5) # math.ceil
data_pointer = 0
# memory variables for Adagrad
mWxh, mWhh, mWhy, mbh, mby = \
[np.zeros_like(_) for _ in [self.W_xh, self.W_hh, self.W_hy,
self.b_h, self.b_y]]
for i in range(n_iterations):
h_prev = np.zeros((hidden_size, 1)) # reset RNN memory
data_pointer = 0 # go from start of data
for b in range(batch_size):
inputs = [self.char_to_int[ch]
for ch in self.article_text[data_pointer:data_pointer+sequence_length]]
targets = [self.char_to_int[ch]
for ch in self.article_text[data_pointer+1:data_pointer+sequence_length+1]] # t+1
if (data_pointer+sequence_length+1 >= len(self.article_text) and b == batch_size-1): # processing of the last part of the input data.
targets.append(self.char_to_int[" "]) # When the data doesn't fit, add space(" ") to the back.
loss, ps, hs, xs = self.forward_prop(inputs, targets, h_prev)
dWxh, dWhh, dWhy, dbh, dby = self.backward_prop(ps, inputs, hs, xs, targets)
# perform parameter update with Adagrad
for param, dparam, mem in zip([self.W_xh, self.W_hh, self.W_hy,
self.b_h, self.b_y],
[dWxh, dWhh, dWhy, dbh, dby],
[mWxh, mWhh, mWhy, mbh, mby]):
mem += dparam * dparam # elementwise
param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update
data_pointer += sequence_length # move data pointer
if i % 100 == 0:
print ('iter %d, loss: %f' % (i, loss)) # print progress
def predict(self, test_char, length):
x = np.zeros((self.num_chars, 1))
x[self.char_to_int[test_char]] = 1
ixes = []
h = np.zeros((self.hidden_size, 1))
for t in range(length):
h = np.tanh(np.dot(self.W_xh, x) + np.dot(self.W_hh, h) + self.b_h)
y = np.dot(self.W_hy, h) + self.b_y
p = np.exp(y) / np.sum(np.exp(y))
ix = np.random.choice(range(self.num_chars), p=p.ravel()) # ravel -> rank0
# "ix" is a list of indexes selected according to the soft max probability.
x = np.zeros((self.num_chars, 1)) # init
x[ix] = 1
ixes.append(ix) # list
txt = test_char + ''.join(self.int_to_char[i] for i in ixes)
return txt
import requests
url = 'https://www.gutenberg.org/files/100/100-0.txt'
r = requests.get(url)
# subsample
start_i = 2965 # index of first text, THE SONNETS
length = 4000
article_text = r.text[start_i:start_i+length]
article_text = ' '.join(article_text.split())
model = RNN_NLP_Generator()
model.train(article_text)
model.predict(test_char='T', length=100)
```
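One caveat about the softmax used in `forward_prop` and `predict`: computing `np.exp(y)` directly can overflow for large logits. Subtracting the maximum first yields mathematically identical probabilities (the shift cancels in the ratio) while staying numerically stable:

```python
import numpy as np

def softmax(y):
    # subtracting the max changes nothing mathematically
    # (it cancels in the ratio) but avoids overflow in np.exp
    e = np.exp(y - np.max(y))
    return e / e.sum()

p = softmax(np.array([1000.0, 1001.0]))  # the naive form would overflow here
```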
Deep Learning
=============
Assignment 3
------------
Previously in `2_fullyconnected.ipynb`, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
```
First reload the data we generated in _notMNIST.ipynb_.
```
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
```
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
```
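The one-hot idiom in `reformat` - comparing `np.arange(num_labels)` against `labels[:, None]` via broadcasting - can be checked on a toy example:

```python
import numpy as np

num_labels = 4
labels = np.array([0, 2, 3])
# broadcasting compares each label against every class index, producing
# one row per sample with a single 1.0 in the column given by the label
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(one_hot)
```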
---
Problem 1
---------
Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor `t` using `nn.l2_loss(t)`. The right amount of regularization should improve your validation / test accuracy.
---
First multinomial logistic regression
```
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
regularization_factor = tf.placeholder(tf.float32)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + regularization_factor * tf.nn.l2_loss(weights)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run it:
```
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, regularization_factor: 1e-3}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
1-layer neural network model
```
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input batches as placeholder.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) # 128, 784
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) # 128, 10
tf_valid_dataset = tf.constant(valid_dataset) # 10000, 784
tf_test_dataset = tf.constant(test_dataset) # 10000, 784
regularization_factor = tf.placeholder(tf.float32)
# Input variables
W_1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes])) # 784, 1024
b_1 = tf.Variable(tf.zeros([num_hidden_nodes])) # 1024
# first layer
h_1 = tf.nn.relu(tf.matmul(tf_train_dataset, W_1) + b_1)
# Training computation
W_2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels])) # 1024, 10
b_2 = tf.Variable(tf.zeros([num_labels])) # 10
logits = tf.matmul(h_1, W_2) + b_2
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + regularization_factor * (tf.nn.l2_loss(W_1) + tf.nn.l2_loss(W_2))
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions
train_prediction = tf.nn.softmax(logits)
valid_fw_pass_1 = tf.nn.relu(tf.matmul(tf_valid_dataset, W_1) + b_1) # First layer forward pass for validation
valid_prediction = tf.nn.softmax(tf.matmul(valid_fw_pass_1, W_2) + b_2) # Softmax for validation
test_fw_pass_1 = tf.nn.relu(tf.matmul(tf_test_dataset, W_1) + b_1)
test_prediction = tf.nn.softmax(tf.matmul(test_fw_pass_1, W_2) + b_2)
```
Let's run it:
```
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Minibatches
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, regularization_factor: 1e-3}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
Let's try to tune regularization factor
```
num_steps = 3001
regularization = [pow(10, i) for i in np.arange(-4,-1,0.3)]
test_accuracies = []
for beta in regularization:
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized for regularization: %.4f" % beta)
for step in range(num_steps):
# Minibatches
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, regularization_factor: beta}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
test_accuracies.append(accuracy(test_prediction.eval(), test_labels))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(regularization,test_accuracies)
plt.xscale('log')  # note: linthreshx applies only to 'symlog', not 'log'
plt.ylabel('Test accuracy')
plt.xlabel('Regularization factor')
plt.grid(True)
plt.show()
```
---
Problem 2
---------
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
---
```
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input batches as placeholder.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) # 128, 784
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) # 128, 10
tf_valid_dataset = tf.constant(valid_dataset) # 10000, 784
tf_test_dataset = tf.constant(test_dataset) # 10000, 784
regularization_factor = tf.placeholder(tf.float32)
# Input variables
W_1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes])) # 784, 1024
b_1 = tf.Variable(tf.zeros([num_hidden_nodes])) # 1024
# first layer
h_1 = tf.nn.relu(tf.matmul(tf_train_dataset, W_1) + b_1)
# Training computation
W_2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels])) # 1024, 10
b_2 = tf.Variable(tf.zeros([num_labels])) # 10
logits = tf.matmul(h_1, W_2) + b_2
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + regularization_factor * (tf.nn.l2_loss(W_1) + tf.nn.l2_loss(W_2))
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions
train_prediction = tf.nn.softmax(logits)
valid_fw_pass_1 = tf.nn.relu(tf.matmul(tf_valid_dataset, W_1) + b_1) # First layer forward pass for validation
valid_prediction = tf.nn.softmax(tf.matmul(valid_fw_pass_1, W_2) + b_2) # Softmax for validation
test_fw_pass_1 = tf.nn.relu(tf.matmul(tf_test_dataset, W_1) + b_1)
test_prediction = tf.nn.softmax(tf.matmul(test_fw_pass_1, W_2) + b_2)
```
Let's run the 1-layer neural network
```
num_steps = 100
num_batches = 5
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Minibatches
#offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        offset = (step % num_batches) * batch_size  # cycle through a few distinct batches
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, regularization_factor: 1e-3}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 5 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
The model overfits the data. Our regularization doesn't prevent it, since there are too many parameters.
---
Problem 3
---------
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides `nn.dropout()` for that, but you have to make sure it's only inserted during training.
What happens to our extreme overfitting case?
---
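`tf.nn.dropout` implements "inverted" dropout: kept activations are scaled by `1/keep_prob` during training so the expected activation is unchanged, which is why evaluation needs no rescaling. A NumPy sketch of the idea:

```python
import numpy as np

def dropout(h, keep_prob, rng):
    # inverted dropout: zero out units with probability (1 - keep_prob)
    # and scale the survivors by 1/keep_prob to preserve the expectation
    mask = rng.random(h.shape) < keep_prob
    return h * mask / keep_prob

rng = np.random.default_rng(0)
h = np.ones((4, 1000))
d = dropout(h, 0.5, rng)
print(d.mean())  # close to 1.0: the expected value is preserved
```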
```
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input batches as placeholder.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) # 128, 784
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) # 128, 10
tf_valid_dataset = tf.constant(valid_dataset) # 10000, 784
tf_test_dataset = tf.constant(test_dataset) # 10000, 784
regularization_factor = tf.placeholder(tf.float32)
# Input variables
W_1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes])) # 784, 1024
b_1 = tf.Variable(tf.zeros([num_hidden_nodes])) # 1024
# first layer
h_1 = tf.nn.relu(tf.matmul(tf_train_dataset, W_1) + b_1)
# Dropout
drops = tf.nn.dropout(h_1, 0.5)
# Training computation
W_2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels])) # 1024, 10
b_2 = tf.Variable(tf.zeros([num_labels])) # 10
logits = tf.matmul(drops, W_2) + b_2
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + regularization_factor * (tf.nn.l2_loss(W_1) + tf.nn.l2_loss(W_2))
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions
train_prediction = tf.nn.softmax(logits)
valid_fw_pass_1 = tf.nn.relu(tf.matmul(tf_valid_dataset, W_1) + b_1) # First layer forward pass for validation
valid_prediction = tf.nn.softmax(tf.matmul(valid_fw_pass_1, W_2) + b_2) # Softmax for validation
test_fw_pass_1 = tf.nn.relu(tf.matmul(tf_test_dataset, W_1) + b_1)
test_prediction = tf.nn.softmax(tf.matmul(test_fw_pass_1, W_2) + b_2)
```
Let's run it with dropout
```
num_steps = 100
num_batches = 5
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Minibatches
#offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        offset = (step % num_batches) * batch_size  # cycle through a few distinct batches
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, regularization_factor: 1e-3}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 5 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
Even though training accuracy still reached 100%, dropout improved both validation and test accuracy.
---
Problem 4
---------
Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is [97.1%](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html?showComment=1391023266211#c8758720086795711595).
One avenue you can explore is to add multiple layers.
Another one is to use learning rate decay:
```
global_step = tf.Variable(0)  # count the number of steps taken.
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
```
---
First, let's try a 1-hidden-layer neural network with dropout and learning rate decay:
```
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input batches as placeholder.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) # 128, 784
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) # 128, 10
tf_valid_dataset = tf.constant(valid_dataset) # 10000, 784
tf_test_dataset = tf.constant(test_dataset) # 10000, 784
regularization_factor = tf.placeholder(tf.float32)
global_step = tf.Variable(0) # count the number of steps taken.
# Input variables
W_1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes])) # 784, 1024
b_1 = tf.Variable(tf.zeros([num_hidden_nodes])) # 1024
# first layer
h_1 = tf.nn.relu(tf.matmul(tf_train_dataset, W_1) + b_1)
# Dropout
drops = tf.nn.dropout(h_1, 0.5)
# Training computation
W_2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels])) # 1024, 10
b_2 = tf.Variable(tf.zeros([num_labels])) # 10
logits = tf.matmul(drops, W_2) + b_2
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + regularization_factor * (tf.nn.l2_loss(W_1) + tf.nn.l2_loss(W_2))
# optimizer
learning_rate = tf.train.exponential_decay(0.5, global_step, 1000, 0.65, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Predictions
train_prediction = tf.nn.softmax(logits)
valid_fw_pass_1 = tf.nn.relu(tf.matmul(tf_valid_dataset, W_1) + b_1) # First layer forward pass for validation
valid_prediction = tf.nn.softmax(tf.matmul(valid_fw_pass_1, W_2) + b_2) # Softmax for validation
test_fw_pass_1 = tf.nn.relu(tf.matmul(tf_test_dataset, W_1) + b_1)
test_prediction = tf.nn.softmax(tf.matmul(test_fw_pass_1, W_2) + b_2)
```
Let's run it:
```
num_steps = 8001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Minibatches
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, regularization_factor: 1e-3}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
if (step % 1000 == 0):
print("Learning rate: %f" % learning_rate.eval())
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
Now, let's train a 2-hidden-layer neural network with dropout and learning rate decay:
```
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input batches as placeholder.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) # 128, 784
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) # 128, 10
tf_valid_dataset = tf.constant(valid_dataset) # 10000, 784
tf_test_dataset = tf.constant(test_dataset) # 10000, 784
regularization_factor = tf.placeholder(tf.float32)
global_step = tf.Variable(0) # count the number of steps taken.
# Input variables
W_1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes])) # 784, 1024
b_1 = tf.Variable(tf.zeros([num_hidden_nodes])) # 1024
# first layer
h_1 = tf.nn.relu(tf.matmul(tf_train_dataset, W_1) + b_1)
# first dropout
drops_1 = tf.nn.dropout(h_1, 0.5)
# Second layer variables
W_2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_hidden_nodes])) # 1024, 1024
b_2 = tf.Variable(tf.zeros([num_hidden_nodes])) # 1024
# Second layer (fed with the dropped-out first-layer activations)
h_2 = tf.nn.relu(tf.matmul(drops_1, W_2) + b_2)
# Second dropout
drops_2 = tf.nn.dropout(h_2, 0.5)
# Training computation
W_3 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels])) # 1024, 10
b_3 = tf.Variable(tf.zeros([num_labels])) # 10
logits = tf.matmul(drops_2, W_3) + b_3
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + regularization_factor * (tf.nn.l2_loss(W_1) + tf.nn.l2_loss(W_2) + tf.nn.l2_loss(W_3))
# optimizer
learning_rate = tf.train.exponential_decay(0.5, global_step, 1000, 0.65, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Predictions
train_prediction = tf.nn.softmax(logits)
valid_fw_pass_1 = tf.nn.relu(tf.matmul(tf_valid_dataset, W_1) + b_1) # First layer forward pass for validation
valid_fw_pass_2 = tf.nn.relu(tf.matmul(valid_fw_pass_1, W_2) + b_2) # Second layer forward pass for validation
valid_prediction = tf.nn.softmax(tf.matmul(valid_fw_pass_2, W_3) + b_3) # Softmax for validation
test_fw_pass_1 = tf.nn.relu(tf.matmul(tf_test_dataset, W_1) + b_1)
test_fw_pass_2 = tf.nn.relu(tf.matmul(test_fw_pass_1, W_2) + b_2)
test_prediction = tf.nn.softmax(tf.matmul(test_fw_pass_2, W_3) + b_3)
```
Let's run it:
```
num_steps = 4
w_init = []
w_trained = []
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
w_init.append(W_1.eval().flatten())
w_init.append(W_2.eval().flatten())
w_init.append(W_3.eval().flatten())
for step in range(num_steps):
# Minibatches
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, regularization_factor: 1e-3}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
w_trained.append(W_1.eval().flatten())
w_trained.append(W_2.eval().flatten())
w_trained.append(W_3.eval().flatten())
fig, axes = plt.subplots(2, 3, figsize=(10, 4))
axes = axes.ravel()
for idx, ax in enumerate(axes):
if idx <= 2:
ax.hist(w_init[idx], bins=100)
ax.set_title("Initialized Weights-%s" % str(idx+1))
else:
ax.hist(w_trained[idx-3], bins=100)
ax.set_title("Learned Weights-%s" % str(idx-2))
ax.set_xlabel("Values")
ax.set_ylabel("Frequency")
fig.tight_layout()
```
Our weight initialization is not good enough: the distribution of the outputs of a randomly initialized neuron has a variance that grows with its number of inputs.
A more recent paper on this topic, "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification" by He et al., derives an initialization specifically for ReLU neurons, concluding that the variance of each neuron's weights should be 2.0/n (where n is the number of inputs). This gives the initialization `w = np.random.randn(n) * sqrt(2.0/n)`, which is the current recommendation in practice for neural networks with ReLU neurons.
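A quick NumPy check of that claim (a minimal sketch; the sizes are illustrative): with naive unit-variance weights the pre-activation variance grows like the fan-in n, while scaling by sqrt(2/n) keeps it near 2.

```python
import numpy as np

np.random.seed(0)
n = 784                          # fan-in, e.g. a flattened 28x28 image
x = np.random.randn(10000, n)    # unit-variance inputs

w_naive = np.random.randn(n)                  # naive init
w_he = np.random.randn(n) * np.sqrt(2.0 / n)  # He init for ReLU layers

print(np.var(x @ w_naive))  # grows with the fan-in: roughly 784
print(np.var(x @ w_he))     # stays bounded: roughly 2
```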
Let's change it:
```
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input batches as placeholder.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) # 128, 784
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) # 128, 10
tf_valid_dataset = tf.constant(valid_dataset) # 10000, 784
tf_test_dataset = tf.constant(test_dataset) # 10000, 784
regularization_factor = tf.placeholder(tf.float32)
global_step = tf.Variable(0) # count the number of steps taken.
# Input variables
W_1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes], stddev=np.sqrt(2.0 / (image_size * image_size)))) # 784, 1024
b_1 = tf.Variable(tf.zeros([num_hidden_nodes])) # 1024
# first layer
h_1 = tf.nn.relu(tf.matmul(tf_train_dataset, W_1) + b_1)
# first dropout
drops_1 = tf.nn.dropout(h_1, 0.5)
# Second layer variables
W_2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_hidden_nodes], stddev=np.sqrt(2.0 / num_hidden_nodes))) # 1024, 1024
b_2 = tf.Variable(tf.zeros([num_hidden_nodes])) # 1024
# Second layer (fed with the dropped-out first-layer activations)
h_2 = tf.nn.relu(tf.matmul(drops_1, W_2) + b_2)
# Second dropout
drops_2 = tf.nn.dropout(h_2, 0.5)
# Training computation
W_3 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels], stddev=np.sqrt(2.0 / num_hidden_nodes))) # 1024, 10
b_3 = tf.Variable(tf.zeros([num_labels])) # 10
logits = tf.matmul(drops_2, W_3) + b_3
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + regularization_factor * (tf.nn.l2_loss(W_1) + tf.nn.l2_loss(W_2) + tf.nn.l2_loss(W_3))
# optimizer
learning_rate = tf.train.exponential_decay(0.5, global_step, 1000, 0.65, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
# Predictions
train_prediction = tf.nn.softmax(logits)
valid_fw_pass_1 = tf.nn.relu(tf.matmul(tf_valid_dataset, W_1) + b_1) # First layer forward pass for validation
valid_fw_pass_2 = tf.nn.relu(tf.matmul(valid_fw_pass_1, W_2) + b_2) # Second layer forward pass for validation
valid_prediction = tf.nn.softmax(tf.matmul(valid_fw_pass_2, W_3) + b_3) # Softmax for validation
test_fw_pass_1 = tf.nn.relu(tf.matmul(tf_test_dataset, W_1) + b_1)
test_fw_pass_2 = tf.nn.relu(tf.matmul(test_fw_pass_1, W_2) + b_2)
test_prediction = tf.nn.softmax(tf.matmul(test_fw_pass_2, W_3) + b_3)
```
Let's run it:
```
num_steps = 8001
w_init = []
w_trained = []
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
w_init.append(W_1.eval().flatten())
w_init.append(W_2.eval().flatten())
w_init.append(W_3.eval().flatten())
for step in range(num_steps):
# Minibatches
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, regularization_factor: 1e-3}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
if (step % 1000 == 0):
print("Learning rate: %f" % learning_rate.eval())
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
w_trained.append(W_1.eval().flatten())
w_trained.append(W_2.eval().flatten())
w_trained.append(W_3.eval().flatten())
fig, axes = plt.subplots(2, 3, figsize=(10, 4))
axes = axes.ravel()
for idx, ax in enumerate(axes):
if idx <= 2:
ax.hist(w_init[idx], bins=100)
ax.set_title("Initialized Weights-%s" % str(idx+1))
else:
ax.hist(w_trained[idx-3], bins=100)
ax.set_title("Learned Weights-%s" % str(idx-2))
ax.set_xlabel("Values")
ax.set_ylabel("Frequency")
fig.tight_layout()
```
Perfect!
# Data Science Bootcamp - The Bridge
## Pre-course
In this notebook we will go through the basic concepts of Python one by one, with practical exercises accompanied by a theoretical explanation from the instructor.
The following links are recommended for deepening and reinforcing these concepts through further exercises and examples:
- https://facundoq.github.io/courses/aa2018/res/02_python.html
- https://www.w3resource.com/python-exercises/
- https://www.practicepython.org/
- https://es.slideshare.net/egutierrezru/python-paraprincipiantes
- https://www.sololearn.com/Play/Python#
- https://github.com/mhkmcp/Python-Bootcamp-from-Basic-to-Advanced
Advanced exercises:
- https://github.com/darkprinx/100-plus-Python-programming-exercises-extended/tree/master/Status (++)
- https://github.com/mahtab04/Python-Programming-Practice (++)
- https://github.com/whojayantkumar/Python_Programs (+++)
- https://www.w3resource.com/python-exercises/ (++++)
- https://github.com/fupus/notebooks-ejercicios (+++++)
PythonTutor, a step-by-step visualization tool:
- http://pythontutor.com/
## 1. Variables and types
### Strings
```
# Integer - int
x = 7
# String - a sequence of characters
x = "lorena"
print(x)
x = 7
print(x)
# built-in
type
x = 5
y = 7
z = x + y
print(z)
x = "'lorena'\" "
l = 'silvia----'
g = x + l
# Strings are concatenated with +
print(g)
print(g)
# type shows the type of a variable
type(g)
type(3)
# print is a function that takes several arguments, separated by commas. After each comma, print adds a space.
# bad practice
print( g,z , 6, "cadena")
# good practice - PEP8
print(g, z, 6, "cadena")
u = "g"
silvia = "silvia tiene "
anos = " años"
suma = silvia + u + anos
print(suma)
n = 2
m = "3"
print(n + m)
# Converting from int to str
j = 2
print(j)
print(type(j))
j = str(j)
print(j)
print(type(j))
# Converting from int to str
j = 2
print(j)
print(type(j))
j = str(j) + " - " + silvia
print(j)
print(type(j))
k = 22
k = str(k)
print(k)
# Converting from str to int
lor = "98"
lor = int(lor)
print(lor)
# len gives the length of a string (a sequence of characters)
mn = "lista de caracteres$%·$% "
# length
print(len(mn))
h = len(mn)
print(h + 7)
h = 8
print(h)
x = 2
print(x)
gabriel_vazquez = "Gabriel Vazquez"
print("Hello Python world!")
print("Nombre de compaรฑero")
companero_clase = "Compaรฑero123123"
print(companero_clase)
print(compaรฑero)
x = (2 + 4) + 7
print(x)
# String, Integer, Float, List, None
# str, int, float, list, NoneType
string_ = "23"
numero = 23
print(type(string_))
print(string_)
numero2 = 10
suma = numero + numero2
print(suma)
string2 = "33"
suma2 = string_ + string2
print(suma2)
m = (numero2 + int(string2))
print(m)
m = ((((65 + int("22")) * 2)))
m
print(type(int(string2)))
y = 22
y = str(y)
print(type(y))
string2 = int(string2)
print(type(string2))
string3 = "10"
numero_a_partir_de_string = int(string3)
print(numero_a_partir_de_string)
print(string3)
print(type(numero_a_partir_de_string))
print(type(string3))
h = "2"
int(h)
# Decimals are floats. Python allows mixed operations between int and float
x = 4
y = 4.2
print(x + y)
# True division (/) always returns a float
# Floor division (//) returns:
# - a float if either (or both) operands are float
# - an int if both are int
j = 15
k = 4
division = j // k
print(division)
print(type(division))
num1 = 12
num2 = 3
suma = num1 + num2
resta = num1 - num2
multiplicacion = num1 * num2
division = num1 / num2
division_absoluta = num1 // num2
gabriel_vazquez = "Gabriel Vazquez"
print("suma:", suma)
print("resta:", resta)
print("multiplicacion:", multiplicacion)
print("division:", division)
print("division_absoluta:", division_absoluta)
print(type(division))
print(type(division_absoluta))
print(x)
j = "2"
j
print(j)
x = 2
j = 6
g = 4
h = "popeye"
# In Jupyter notebooks the value of the last line of a cell is displayed automatically
print(g)
print(j)
print(h)
x
int(5.6/2)
float(2)
g = int(5.6/2)
print(g)
5
x = int(5.6//2)
x
# I am a comment
# print("Hello Python world!")
# Creating a variable that holds 2
"""
This is another comment
"""
print(x)
x = 25
x = 76
x = "1"
message2 = "One of Python's strengths is its diverse community."
print(message2)
```
## Exercise:
### Create a new cell.
### Declare three variables:
- One named "edad" holding your age
- Another named "edad_companero_der" holding, as an integer, the age of the classmate to your right
- Another named "suma_anterior" holding the sum of the two previous variables
Print the variable "suma_anterior"
```
edad = 99
edad_companero_der = 30
suma_anterior = edad_companero_der + edad
print(suma_anterior)
h = 89 + suma_anterior
h
edad = 18
edad_companero_der = 29
suma_anterior = edad + edad_companero_der
suma_anterior
i = "hola"
o = i.upper()
o
o.lower()
name = "Ada Lovelace"
x = 2
print(name.upper())
print(name.lower())
print(name.upper)
print(name.upper())
x = 2
x = x + 1
x
x += 1
x
# int
x = 1
# float
y = 2.
# str
s = "string"
# type --> muestra el tipo de la variable o valor
print(type(x))
type(y)
type(s)
5 + 2
x = 2
x = x + 1
x += 1
x = 2
y = 4
print(x, y, "Pepito", "Hola")
s = "Hola soy Soraya:"
s + "789"
print(s, 98, 29, sep="")
print(s, 98, 29)
type( x )
2 + 6
```
## 2. Numbers and operators
```
### Integers ###
x = 3
print("- Type of x:")
print(type(x)) # Prints the type (or `class`) of x
print("- Value of x:")
print(x) # Print a value
print("- x+1:")
print(x + 1) # Addition; prints "4"
print("- x-1:")
print(x - 1) # Subtraction; prints "2"
print("- x*2:")
print(x * 2) # Multiplication; prints "6"
print("- x^2:")
print(x ** 2) # Exponentiation; prints "9"
# Modifying x
x += 1
print("- modified x:")
print(x) # Prints "4"
x *= 2
print("- modified x:")
print(x) # Prints "8"
print("- 40 modulo x:")
print(40 % x)
print("- Several things on one line:")
print(1, 2, x, 5*2) # prints several things at once
# The modulo operator gives the remainder of dividing two numbers
2 % 2
3 % 2
4 % 5
numero = 99
numero % 2 # If the remainder is 0, the number is even; otherwise, odd.
numero % 2
99 % 100
y = 2.5
print("- Type of y:")
print(type(y)) # Prints the type of y
print("- Several floating point values:")
print(y, y + 1, y * 2.5, y ** 2) # Prints several floating point numbers
```
## Title
Write anything
1. one
2. two
# INPUT
```
edad = input("Enter your age")
print("Diego is", edad, "years old")
# input returns the typed text as a String
num1 = int(input("Enter the first number"))
num2 = int(input("Enter the second number"))
print(num1 + num2)
```
## 3. The None type
```
x = None
n = 5
s = "Cadena"
print(x + s) # TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
## 4. Lists and collections
```
# A list of elements:
# Positions are counted starting from 0
s = "Cadena"
primer_elemento = s[0]
#ultimo_elemento = s[5]
ultimo_elemento = s[-1]
print(primer_elemento)
print(ultimo_elemento)
bicycles = ['trek', 'cannondale', 'redline', 'specialized']
bicycles[0]
tamano_lista = len(bicycles)
tamano_lista
ultimo_elemento_por_posicion = tamano_lista - 1
bicycles[ultimo_elemento_por_posicion]
bicycles = ['trek', 'cannondale', 'redline', 'specialized']
message = "My first bicycle was a " + bicycles[0]
print(bicycles)
print(message)
print(type(bicycles))
s = "String"
s.lower()
print(s.lower())
print(s)
s = s.lower()
s
# There are two kinds of methods:
# 1. Those that modify the value in place, without needing to reassign the variable
# 2. Those that only return the result and do not modify the variable. We must reassign explicitly if we want to change it.
cars = ['bmw', 'audi', 'toyota', 'subaru']
print(cars)
cars.reverse()
print(cars)
cars = ['bmw']
print(cars)
cars.reverse()
print(cars)
s = "Hola soy Clara"
print(s[::-1])
s
l = "hola"
len(l)
l[3]
# POP
lista = [2, 4, 6, "a", 6, 8]
lista.pop(4)
lista
lista = [2, 4, 6, "a", 6]
# remove deletes the first element with the given value
lista.remove(6)
lista
# pop returns the removed value
lista = [2, 4, 6, "a", 6, 8]
x = lista.pop(4)
print(x)
lista
lista = [2, 4, 6, "a", 6, 8]
x = lista.remove(6)
print(x)
def remove2(a):
x = a + 2
y = remove2(6)
print(y)
lista = [2, 4, 6, "a", 6, 8]
def f(s):
print(s)
x = f(s=2)
print(x)
# To access several elements, use the "[N:M]" notation. N is the first element to take; M is the first element NOT included. Example:
# To show positions 3 through 7, write [3:8]
# If M is omitted, the slice runs from N to the end.
# If N is omitted, the slice runs from the beginning up to M.
s[3:len(s)]
s[:3]
s[3:10]
motorcycles = ['honda', 'yamaha', 'suzuki', 'ducati']
print(motorcycles)
too_expensive = 'ducati'
motorcycles.remove(too_expensive)
print(motorcycles)
print(too_expensive + " is too expensive for me.")
# append adds a value at the end of the list
motorcycles.append("ducati")
motorcycles
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'ducati']
lista[3]
lista.remove(8.9)
lista
lista
l = lista[1]
l
lista.remove(l)
lista
lista.remove(lista[2])
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'honda', 'ducati']
lista
# remove deletes the first element found whose value matches the argument
lista = ['honda', 2, 8.9, [2, 3], 'yamaha', 'suzuki', 'honda', 'ducati']
lista.remove("honda")
lista
# We access position 1 of the element at position 2 of lista
lista[2][1]
lista[3][2]
p = lista.remove("honda")
print(p)
l = [2, 4, 6, 8]
l.reverse()
l
```
### Collections
1. Lists
2. Strings (sequences of characters)
3. Tuples
4. Sets
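Before diving in, the four collection types side by side (a minimal sketch):

```python
lst = [1, 2, 2]   # list: ordered, mutable
s = "abc"         # string: ordered, immutable sequence of characters
tup = (1, 2, 2)   # tuple: ordered, immutable
st = {1, 2, 2}    # set: unordered, no duplicates

lst.append(3)        # OK: lists are mutable
print(lst)           # [1, 2, 2, 3]
print(st)            # {1, 2} - the duplicate 2 was dropped
print(tup.count(2))  # tuples still support read-only methods: 2
# tup[0] = 9         # would raise TypeError: tuples are immutable
```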
```
# Lists --> mutable
lista = [2, 5, "caract", [9, "g", ["j"]]]
print(lista[-1][-1][-1])
lista[3][2][0]
lista.append("ultimo")
lista
# Tuples --> immutable
tupla = (2, 5, "caract", [9, "g", ["j"]])
tupla
s = "String"
s[2]
tupla[3].remove(9)
tupla
tupla[3][1].remove("j")
tupla
tupla2 = (2, 5, 'caract', ['g', ['j']])
tupla2[-1].remove(["j"])
tupla2
tupla2 = (2, 5, 'caract', ['g', ['j']])
tupla2[-1].remove("g")
tupla2
tupla2[-1].remove(["j"])
tupla2
if False == 0:
print(0)
print(type(lista))
print(type(tupla))
# Updating lists
lista = [2, "6", ["k", "m"]]
lista[1] = 1
lista
lista = [2, "6", ["k", "m"]]
lista[2] = 0
lista
tupla = (2, "6", ["k", "m"])
tupla[2] = 0
tupla
tupla = (2, "6", ["k", "m"])
tupla[2][1] = 0
tupla
# Sets
conjunto = [2, 4, 6, "a", "z", "h", 2]
conjunto = set(conjunto)
conjunto
conjunto = ["a", "z", "h", 2, 2, 4, 6, True, True, False]
conjunto = set(conjunto)
conjunto
conjunto = ["a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False]
conjunto = set(conjunto)
conjunto
conjunto_tupla = ("a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False)
conjunto = set(conjunto_tupla)
conjunto
conjunto = {"a", "z", "h", 2, 2, 4, 6, 2.1, 2.4, 2.3, True, True, False}
conjunto
s = "String"
lista_s = list(s)
lista_s
s = "String"
conj = {s}
conj
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = tuple(tupla)
tupla
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = (((((tupla)))))
tupla
tupla = (2, 5, "h")
tupla = list(tupla)
tupla.remove(2)
tupla = tuple(tupla)
tupla
conjunto = {2, 5, "h"}
lista_con_conjunto = [conjunto]
lista_con_conjunto[0]
# We cannot index into the elements of a set
lista_con_conjunto[0][0]
lista = [1, 5, 6, True, 6]
set(lista)
lista = [True, 5, 6, 1, 6]
set(lista)
1 == True
```
## 5. Conditionals: if, elif, else
### Booleans
```
True
False
# Comparison operators
x = (1 == 1)
x
1 == 2
"a" == "a"
# Not equal
"a" != "a"
2 > 4
4 > 2
4 >= 4
4 > 4
4 <= 5
4 < 3
input()
1 == 1
"""
== -> Igualdad
!= -> Diferecia
< -> Menor que
> -> Mayor que
<= -> Menor o igual
>= -> Mayor o igual
"""
# and returns True only if ALL operands are True
True and False
# or returns True if at least ONE operand is True
(1 == 1) or (1 == 2)
(1 == 1) or (1 == 2) and (1 == 1)
(1 == 1) or ((1 == 2) and (1 == 1))
(1 == 1) and (1 == 2) and (1 == 1)
(1 == 1) and (1 == 2) and ((1 == 1) or (0 == 0))
# True and False and True
print("Yo soy\n Gabriel")
if 1 != 1:
print("They are equal")
else:
print("Did not enter the if")
if 1 == 1:
print("They are equal")
else:
print("Did not enter the if")
if 1>3:
print("It is greater")
elif 2==2:
print("It is equal")
elif 3==3:
print("It is equal 2")
else:
print("None of the above")
if 2>3:
print(1)
else:
print("Primer else")
if 2==2:
print(2)
else:
print("Segundo else")
if 2>3:
print(1)
else:
print("Primer else")
if 3==3:
print(" 3 es 3 ")
if 2==2:
print(2)
else:
print("Segundo else")
if 2>3:
print(1)
else:
print("Primer else")
# -------
if 3==4:
print(" 3 es 3 ")
# --------
if 2==2:
print(2)
print(5)
x = 6
print(x)
else:
print("Segundo else")
if 2>3:
print(1)
else:
print("Primer else")
# -------
if 3==4:
print(" 3 es 3 ")
# --------
if 2==2:
print(2)
print(5)
x = 6
print(x)
# ------
if x == 7:
print("X es igual a 6")
# ------
y = 7
print(y)
else:
print("Segundo else")
if (not (1==1)):
print("Hola")
if not None:
print(1)
if "a":
print(2)
if 0:
print(0)
# Values treated as False in conditions:
# None
# False
# 0 (int or float)
# Any empty collection --> [], "", (), {}
# None does not behave like a number when compared with one
"""
True is treated as the number 1
"""
lista = []
if lista:
print(4)
lista = ["1"]
if lista:
print(4)
if 0.0:
print(2)
if [] or False or 0 or None:
print(4)
if [] and False or 0 or None:
print(4)
if not ([] or False or 0 or None):
print(4)
if [] or False or not 0 or None:
print(True)
else:
print(False)
x = True
y = False
x + y
x = True
y = False
str(x) + str(y)
t = True
f = False
2 == 2
l1 = [1, 2]
l2 = [1, 2]
l1 == l2
id(l1)
id(l2)
l1 == l2
2 is 2
id(2)
l1 is l2
id("a")
"a" is "a"
h = "a"
g = "a"
h == g
id(h)
id(g)
h is g
id(2)
b = 875757
id(b)
f = 875757
id(f)
b is f
d = 2
s = 2
id(d)
id(s)
def funcion_condicion(y):
if y > 4:
print("Greater than 4")
else:
print("Not greater than 4")
funcion_condicion(y=4)
def funcion_primera(x):
if x == 4:
print("Equal to 4")
else:
funcion_condicion(y=x)
funcion_primera(x=5)
def funcion_final(apellido):
if len(apellido) > 5:
print("Meets the condition")
else:
print("Does not meet the condition")
funcion_final(apellido="Vazquez")
```
# For and while loops
```
lista = [1, "dos", 3, "Pepito"]
print(lista[0])
print(lista[1])
print(lista[2])
print(lista[3])
if 2 != 2:
print(True)
print(4)
lista = [1, "dos", 3, "Pepito"]
for elnombrequequiera in lista:
print(elnombrequequiera)
print("-----")
# This way it does NOT print 6
for g in [2, 4, 6, 8]:
if g == 6:
break
print(g)
print(8)
print(9)
print("This is outside the loop")
# This way it prints 6 -- version 1
for g in [2, 4, 6, 8]:
if g == 8:
break
print(g)
print(8)
print(9)
print("This is outside the loop")
# This way it prints 6 -- version 2
for g in [2, 4, 6, 8]:
print(g)
if g == 6:
break
print(8)
print(9)
print("This is outside the loop")
# This way it prints 6 -- version 3
for g in [2, 4, 6, 8]:
if g > 6:
break
print(g)
print(8)
print(9)
print("This is outside the loop")
# This way it ONLY prints 6 -- version 4
for g in [2, 4, 6, 8]:
if g == 6:
print(g)
break
print(8)
print(9)
print("This is outside the loop")
# Comparison and logical operators:
# ==, !=, >, <, >=, <=, and, or, not
# This way it ONLY prints 6 -- version 4
for g in [2, 4, 6, 8]:
if g == 6:
print(g)
break
print(8)
print(9)
print("This is outside the loop")
k = 1, 2
k
file1 = "Primera Fila", ["Maria", "Angeles", "Juan"]
file1
file1[1][1]
# Javier Gil's attempt
if type(x) == str:
for x in lista:
print(x)
# Javier Gil's attempt, solved
for x in lista:
if type(x) == str:
print(x)
file1 = "Primera Fila", ["Maria", "Angeles", "Juan"]
file2 = "Segunda Fila", ["Marta", "Daniel", "Leo", "Miguel1", "Estela"]
file3 = "Tercera Fila", ["Kapil", "Roberto", "Alfonso", "Miguel2"]
fileR = "Remoto", ["Mar", "Alex", "Anais", "Antonio", "Ariadna", "Javi", "JaviOlcoz"]
altura1 = [1.78, 1.63, 1.75, 1.68]
altura2 = [2.00, 1.82]
altura3 = [1.65, 1.73, 1.75]
altura4 = [1.72, 1.71, 1.71, 1.62]
lista_alturas = [altura1, altura2, altura3, altura4]
print(lista_alturas[0][1])
for x in lista_alturas:
print(x[3])
lista_con_lista = [2, "4", "c", [0, "a"]]
print(lista_con_lista[3][1])
lista_con_lista[3][1] = "modificado"
lista_con_lista
lista_con_lista = [2, "4", "c", (0, "a")]
tupla_lista = list(lista_con_lista[3])
lista_con_lista[3] = tupla_lista
lista_con_lista
lista_con_lista = [2, "4", "c", [7, "a", 7]]
pos_7 = lista_con_lista[3].index(7)
pos_7
#for x in lista_con_lista: # by value
# for pos in range(len(lista_con_lista)): # range works by position
# for pos, val in enumerate(lista_con_lista): # enumerate gives both position and value
lista_con_lista = [2, "4", "c", [7, "a", 7]]
i = 0
for x in lista_con_lista:
if i == 3:
print("-------")
print("pos de 7:", x.index(7))
z += 1
print(i, ":", x)
i += 1
altura1 = [1.78, 1.63, 1.75, 1.68]
altura2 = [2.00, 1.82]
altura3 = [1.65, 1.73, 1.75]
altura4 = [1.72, 1.71, 1.71, 1.62]
lista_alturas = [altura1, altura2, altura3, altura4]
print(lista_alturas[0][1])
for x in lista_alturas:
if len(x) > 3: # The list must have 4 elements
print(x[3])
print(range(6))
type(range(6))
print(list(range(6)))
r = list(range(6))
for i in r:
print(r)
for i in range(6): # Runs 6 iterations
print(r)
for i in range(6):
print(i)
list(range(4))
for i in range(4): # a loop with 4 iterations
s = input("Enter a letter:")
print(s)
lista = ("juan", "pepito", "silvia", "6", 7)
tamano = len(lista)
for i in range(tamano):
print(lista[i])
# With range we build a sequence from 0 up to N; here we work with positions.
# enumerate
lista = ("juan", "pepito", "silvia", "6", 7)
for posicion, valor in enumerate(lista):
print("position:", posicion)
print("valor:", valor)
print("-------")
k = 2, 7
k
tupla = ("juan", "pepito", "silvia", "6", 7)
for k in enumerate(tupla):
print(k)
tupla = ("juan", "pepito", "silvia", "6", 7)
for pos, val in enumerate(tupla):
print(pos)
print(val)
# continue
tupla = ("juan", "pepito", "silvia", "6", 7)
for x in tupla: # O(n)
if x == "silvia":
continue # continue skips to the next iteration
print(x)
list(range(4))
for i in range(4): # Whatever is below runs 4 times
print("Iteration ", i+1, " of the first loop")
print("-------------------")
# a loop that runs everything N times
# accumulators
# We want every element of tupla shown 4 times, except "silvia" on the last iteration
tupla = ("juan", "pepito", "silvia", "6", 7)
N = 4
for i in range(N): # Whatever is below runs 4 times
print("Iteration", i+1, "of the first loop")
print("-------------------")
for x in tupla: # Iterates over tupla; x takes the value of each element
if x == "silvia" and i == (N - 1):
continue # continue skips to the next iteration
print(x)
print(2)
# a loop that runs everything N times
# accumulators
# We want every element of tupla shown 4 times, except "silvia" on the last iteration
tupla = ("juan", "pepito", "silvia", "6", 7)
N = 4
for i in range(N): # Whatever is below runs 4 times
print("-------------------")
print("Iteration", i+1, "of the first loop")
print("-------------------")
for pos, x in enumerate(tupla): # Iterates over tupla; x takes the value of each element
if x == "silvia" and i == (N - 1):
continue # continue skips to the next iteration
print("~~~~~~~~~~")
print("It", pos+1, "of the second loop")
print("~~~~~~~~~~")
print(x)
print(2)
lista = ["a", "b", "c", "d"]
for i in range(len(lista)):
print(i, lista[i])
lista = ["a", "b", "c", "d"]
for i, x in enumerate(lista):
print(i, x)
lista = ["a", "b", "c", "d"]
acum = 0
for x in lista: # the value of each element
print(acum, x)
acum += 1
lista = ["a", "b", "c", "d"]
acum = -1
for x in lista: # the value of each element
acum += 1
print(acum, x)
list(range(len(lista)))
```
## Functions
```
l = ["primer", 2, "tercer", 4]
if "tercer" in l:
l.pop(0)
l
l.remove(4)
l
l2 = [2, 3, 3, 3, 3, 3, 4]
for x in l2:
if x == 3:
l2.remove(x)
print(l2)
l2
list(range(7))
l2 = [2, 3, 3, 3, 3, 3, 4]
for i in range(len(l2)): # will run as many iterations as l2 has elements
if 3 in l2:
l2.remove(3)
print(l2)
l2 = [2, 3, 3, 3, 3, 3, 4]
tamano = len(l2)
for i in range(tamano):
print("tamaรฑo:", tamano)
if 3 in l2:
l2.remove(3)
print(l2)
# A function returns None by default
def nombre_funcion():
print("Hola")
x = 2
print(x)
return None
r = nombre_funcion()
print(r)
def nombre_funcion():
print("Hola")
x = 2
print(x)
return 7
r = nombre_funcion()
print(r)
def suma_2_argumentos(a, b):
    return int(a) + int(b)
lo_que_retorna = suma_2_argumentos(a="76", b=40)
print("lo_que_retorna:", lo_que_retorna)
l1 = [2, 6, "10"]
l2 = ["l", "aa"]
l3 = l1 + l2
l3
def suma_2_argumentos(a, b):
    a = int(a)
    b = int(b)
    print(a)
    return a + b
lo_que_retorna = suma_2_argumentos(a="76", b=40)
print("lo_que_retorna:", lo_que_retorna)
r = suma_2_argumentos(a="8", b=40)
l = [2, 4]
l = int(l)
l
def suma_2_argumentos(a, b):
a = int(a)
b = int(b)
print(a)
return a + b
lo_que_retorna = suma_2_argumentos(a=[2,4], b=[3, 5, 7])
def suma_2_argumentos(a, b):
a = int(a)
b = int(b)
print(a)
return a + b
lo_que_retorna = suma_2_argumentos(2, 6)
def suma_2_argumentos(a, b):
    """ This is the function's description """
    a = int(a)
    b = int(b)
    return a + b
suma_2_argumentos(2,1)
def f():
x = int(input())
r = int(input())
return x / r
f()
def add_element(lista, to_add):
lista.append(to_add)
l = [2, 4, 6]
to_add = 9
add_element(lista=l, to_add=9)
print(l)
# Every variable defined inside a function is deleted from memory when the function finishes executing.
def k():
    x = 2
    y = 3
    m = 8
k()
print(m) # NameError: m only existed inside k
def k():
x = 2
y = 3
m = 8
return 7
lo_que_devuelve_k = k()
print(lo_que_devuelve_k)
def k():
x = 2
y = 3
m = 8
return 7
print(k())
def g(l):
    """ Returns True if the list 'l' contains a number greater than 5
    Args:
        l (list): List of elements
    """
    for i in l:
        if i > 5:
            return True
lista = [2, 4, 5, 9, 999]
z = g(l=lista)
print(z)
acum += 1
acum
def s(l):
    """ Returns how many numbers greater than 5 there are in the list 'l'
    """
    acum = 0
    for i in l:
        if i > 5:
            acum = acum + 1
    return acum
lista = [2, 4, 5, 9, 999]
z = s(l=lista)
print(z)
def p1(l):
    acum = 0
    for pos, val in enumerate(l):
        # if pos == 0: # This is true on the first iteration
        if pos == 2: # This is true on the third iteration
            break
lista = [2, 4, 5, 9, 999]
z = s(l=lista)
print(z)
lista3 = [2, 4, 5, 9, 999]
def p1(l):
    for pos, val in enumerate(l):
        if pos == len(l) - 1: # True on the last iteration
            return 2
        else:
            print("True")
z = p1(l=lista3)
print(z)
def df(l):
    acum = 0
    return acum # everything after this return is unreachable
    print(acum)
    for i in l:
        if i > 5:
            acum = acum + 1
lista = [2, 4, 5, 9, 999]
z = df(l=lista)
z
def df(l):
    acum = 0
    print(acum)
    for i in l:
        if i > 5:
            acum = acum + 1
lista = [2, 4, 5, 9, 999]
z = df(l=lista)
print(z)
# Function that returns how many numbers greater than 5 there are among the elements of 'l'
l = ["Ana", "Dorado", ["m", 2], 7]
def r_n_m_5(l):
    acum = 0
    for elem in l:
        if type(elem) == int and elem > 5:
            print(elem)
            acum += 1
    return acum
r_n_m_5(l=l)
def r_n_m_5(l):
    acum = 0
    for elem in l:
        if type(elem) == int:
            if elem > 5:
                print(elem)
                acum += 1
    return acum
r_n_m_5(l=l)
# Function that returns how many numbers greater than 5 there are among the elements of 'l'
l = ["Ana", "Dorado", ["m", 2], 7]
def r_n_m_5(l):
    acum = 0
    for elem in l:
        if isinstance(elem, int) and elem > 5:
            print(elem)
            acum += 1
    return acum
r_n_m_5(l=l)
i = 2.2
print(type(i))
i = int(i)
print(type(i))
print(i)
def mostrar_cada_elemento_de_lista(lista):
    for x in lista:
        print(x)
mostrar_cada_elemento_de_lista(lista=lista_alturas)
mostrar_cada_elemento_de_lista(lista=lista)
for x in lista_alturas:
    if len(x) > 2:
        print(x[2])
    else:
        print(x[1])
```
# Dictionaries
```
hola = "saludo internacional"
# key = value
diccionario = {"key":"valor"}
diccionario
print(diccionario["key"])
diccionario2 = {2:"este es el valor", 2:"este es el valor"}
diccionario2
diccionario3 = {"gabvaztor@gmail.com":["password", "gabvaztor", 606122333, "C/Pepito", "Profesor"]}
diccionario3["gabvaztor@gmail.com"]
diccionario4 = {"juan": 8, "silvia":10, "juan2":9}
diccionario5 = {9.8:"S"}
diccionario5
diccionario6 = {"gabvaztor@gmail.com":["password", "gabvaztor", 606122333, "C/Pepito", "Profesor"],
"juan@gmail.com":["password", "juan", 606142333, "C/Pepito", "Profesor"],
"silvia@gmail.com":["password", "silvia", 602122333, "C/Pepito", "Profesor"]}
diccionario6["silvia@gmail.com"]
diccionario6["password"]
del diccionario6["gabvaztor@gmail.com"]
diccionario6
diccionario7 = {"k":"v", 8:[7,"y"], 6:{1.1:[5,"b"]}}
type(diccionario7[6][1.1][1])
list(diccionario7.keys())
diccionario7.keys()
diccionario7.values()
list(diccionario7.values())
diccionario7
for key in diccionario7.keys():
    print(key)
for value in diccionario7.values():
    print(value)
diccionario7
for key, value in diccionario7.items():
    print(key, "--->", value)
for key, value in diccionario7.items():
    if (type(key) == int or type(key) == float) and key > 5:
        print(key, "--->", value)
for pos, (key, value) in enumerate(diccionario7.items()):
    print("Iteration number:", pos+1)
    print(key, "--->", value)
    print("#####################")
lista = [2, 4, 6, "k"]
for pos, value in enumerate(lista):
    print("Iteration number:", pos)
lista = [2, 4, 6, "k"]
pos = 0
for value in lista:
    print("Iteration number:", pos)
    pos += 1
    print("Actual value of pos:", pos)
    print("#################")
print("Last value of pos:", pos)
# Creating a set
d = {3, 5, 1, "z", "a"}
d
# Creating a dictionary
d = {"k": 2, "k2": [8, "p"]}
d
# Accessing a value through a key
key_a_buscar = "k2"
d[key_a_buscar]
# Show all the keys
list(d.keys())
# Show all the values
list(d.values())
# Show all the key/value pairs
list(d.items())
k = 2, "g"
k
# Loop over every item
for key, value in d.items():
    if key == "k2":
        print(key, "-->", value)
d
# Iterate over the list stored under the key "k2"
for key, value in d.items():
    if key == "k2":
        for x in value:
            print(x)
d
# Delete a key from the dictionary
del d["k"]
d
# Update/modify the value associated with a key
d["k2"] = [8, "a"]
d
# Merge one dictionary into another
d1 = {8: [8, 9]}
d2 = {"a":"hola"}
d1.update(d2)
print(d1)
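# Reading a key that may be missing: d.get returns a default instead of
# raising KeyError (sketch using a throwaway dict)
d9 = {"a": 1}
print(d9.get("a", 0))
print(d9.get("missing", 0))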
def suma_dos_valores(a, b):
    x = a + b
    return x
k = suma_dos_valores(a=9, b=1)
print(k)
k + 7
l = [2, 7]
l = l.append("a") # careful: append returns None, so this loses the list
print(l)
def add_to_list(lista, to_add):
    lista.append(to_add)
    return lista
l = [2, 7]
l = add_to_list(lista=l, to_add=6)
print(l)
def add_to_list(lista, to_add):
    lista.append(to_add)
l = [2, 7]
add_to_list(lista=l, to_add=6)
print(l)
d["k2"].append(3)
d
l8 = [1,2]
d = {2: "h", "j":l8}
l = [5, "m", d]
l[-1]
# Returning more than one element
k = 2, 7
k
k, j = 2, 7
print(k)
print(j)
k, j = 2, 7, 3 # ValueError: too many values to unpack
print(k)
print(j)
k, j = [2, 0], (4, 7, "m")
print(k)
print(j)
def f():
    return 3, 6, 8
def cualquiercosa():
    return 5, 2, 3
def t():
    return 6, 2, 3
g, h = f(), cualquiercosa()
print(g)
print(h)
```
### Functions with default values
```
# function with one parameter
def nombre_funcion(param):
    return param
x = nombre_funcion(1)
x = nombre_funcion(param=1)
x
# Function with a default parameter/value
def nombre_funcion_por_defecto(param=2):
    return param
x = nombre_funcion_por_defecto()
x
def nombre_funcion_por_defecto(param=2):
    return param
x = nombre_funcion_por_defecto(param=7)
x
# If a function argument has no default value, you must give it a value when calling the function
# If we pass a value for an argument in the call, that value takes priority over the default
def nombre_funcion_por_defecto(y, param=5):
    return param + y
x = nombre_funcion_por_defecto(4, 7)
x
def nombre_funcion_por_defecto(y=4, param=5):
    return param - y
x = nombre_funcion_por_defecto(param=7, y=8)
x
def nombre_funcion_por_defecto(y=4, param=5):
    return param - y
x = nombre_funcion_por_defecto(param=2, y=4)
x
def nombre_funcion_por_defecto(y, param=2, h=1, p=0):
    return param - y
x = nombre_funcion_por_defecto(3,6,h=1)
x
def nombre_funcion_por_defecto(y, param=2, h=1, p=0):
    return param - y
x = nombre_funcion_por_defecto(y=4, h=2)
x
def nombre_funcion_por_defecto(y, param=2, h=1, p=0):
    if y == 2:
        return param - y
    else:
        return True
x = nombre_funcion_por_defecto(4, 2,)
x
lista = [2, 4, 6, 8,]
lista
def gh(c=6):
    return c
def get_last_value_from_list(lista, position=-1):
    if len(lista) > 0: # If there is at least one element
        return lista[position]
    else:
        return "No elements"
lista = ["m", 2]
x = get_last_value_from_list(lista=lista)
print(x)
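# Caution: a mutable default value (like a list) is created ONCE and shared
# between calls. A sketch of the pitfall and the usual fix (names are ours):
def append_item(x, lista=[]):
    lista.append(x)
    return lista
print(append_item(1))  # [1]
print(append_item(2))  # [1, 2]  <- same list as before!
# Safer pattern: default to None and create the list inside the function
def append_item_ok(x, lista=None):
    if lista is None:
        lista = []
    lista.append(x)
    return lista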
lista = [2, "8", [6, "joven"], "casa", "coche", "pepino", [9, 8]]
def muestra_todos_excepto_joven_casa_9(lista):
    for x in lista:
        if x == lista[2]:
            print(x[0])
        elif x == lista[3]:
            continue
        elif x == lista[-1]:
            print(x[1])
        else:
            print(x)
muestra_todos_excepto_joven_casa_9(lista=lista)
```
# While
```
lista = [7, 5, "m", "Z"]
for elem in lista:
    print(elem)
lista = [2, 4, 6, 8, 10]
contador = 0
for elem in lista:
    if elem > 5:
        #contador = contador + 1
        contador += 1
print(contador)
while True:
    print(5)
    break
print("Fuera del while")
s = "Start"
while s == "Start":
    print("Entering the while")
    s = input()
print("Out of the while")
password = "TB"
while True:
    s = input("Enter the password:")
    if s == password:
        print("Correct password")
        break
    else:
        print("Wrong password")
password = "TB"
s = ""
while s != password:
    s = input("Enter the password:")
    if s == password:
        print("Correct password")
    else:
        print("Wrong password")
print("Out of the while")
contador = 0
while contador != 5:
    print(contador)
    contador += 1
print("Out of the while")
lista = ["a", "b", "c", "d"]
print(lista[0])
print(lista[1])
print(lista[2])
print(lista[3])
lista = ["a", "b", "c", "d"]
acum = 0
while acum < len(lista):
    if acum == 2:
        break
    print(lista[acum])
    acum += 1
lista = ["a", "b", "c", "d"]
acum = 0
while True:
    if acum == 2:
        continue
    print(lista[acum])
    acum += 1
# Do not run: the loop never stops
lista = ["a", "b", "c", "d"]
acum = 0
while True:
    if acum == 0:
        continue
    print(8)
    break
import time
while True:
    print("Hola")
    time.sleep(3)
    print("Adios")
    break
lista = ["a", "b", "c", "d", "e"]
acum = 1
while 1 <= acum and acum <= 3:
    print(lista[acum])
    acum += 1
lista = ["a", "b", "c", "d", "e"]
acum = 1
while 1 <= acum and acum <= 3:
    if acum == 2:
        acum += 1
        continue
    print(lista[acum])
    acum += 1
lista = ["a", "b", "c", "d", "e"]
acum = 1
while 1 <= acum and acum <= 3:
    print(lista[acum])
    acum += 2
lista = ["a", "b", "c", "d", "e", "f"]
acum = 0
while acum < len(lista):
    print(lista[acum])
    acum += 2
lista = ["a", "b", "c", "d", "e", "f"]
acum = 1
while acum < len(lista):
    print(lista[acum])
    acum += 2
lista = ["a", "b", "c", "d", "e", "f"]
acum = 1
while acum >= 1 and acum <= len(lista) - 1:
    print(lista[acum])
    acum += 2
```
# Try Except
```
def suma(a, b="3"):
    resultado = a + b
    return resultado
s = suma(a=2)
s
def suma(a, b="3"):
    try:
        resultado = a + b
    except:
        print("An error occurred while adding")
        a = int(a)
        b = int(b)
        resultado = a + b
    return resultado
s = suma(a=2)
s
def suma_positiva(a, b):
    """ Adds two integer values. If any parameter is invalid, the function returns (0x086080808) """
    resultado = -1
    try:
        resultado = a + b
    except:
        print("An error occurred while adding. Arguments 'a' and 'b' must be integers. The error code is: '0x86080808'")
    return resultado
s = suma_positiva(a=2, b="4")
s
a = 2
b = "g"
resultado = a + b
print("hola")
a = 2
b = "g"
try:
    resultado = a + b
except Exception as error:
    print("An error occurred:", error)
    resultado = -1
print("hola")
def minicalculadora(a, b, operador, DEBUG=0):
    try:
        a = int(a)
        b = int(b)
        if operador == "+":
            return a + b
        elif operador == "/":
            return a / b
        else:
            print("This calculator only supports addition and division")
    except Exception as error:
        if DEBUG == 1:
            print(error)
        print("An error occurred; only numbers are allowed")
s = input("Enter an operator")
a = input("Enter a number")
b = input("Enter another number")
minicalculadora(a=a, b=b, operador=s, DEBUG=1)
```
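A bare `except:` hides the real problem. A sketch of catching only the error we expect, a `ValueError` from `int()`, while letting anything else propagate (the function name is ours):

```python
def to_int_or_none(value):
    """Convert value to int, returning None when the text is not numeric."""
    try:
        return int(value)
    except ValueError:
        return None

print(to_int_or_none("42"))   # 42
print(to_int_or_none("hola")) # None
```

Catching the narrowest exception type keeps genuine bugs (say, a `TypeError` from passing the wrong object) visible instead of silently swallowed.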
### Assert
```
def mostrar_dinero_banco(password):
    assert (password==385187), "You did not enter the correct password"
    return 5813851785875187517
mostrar_dinero_banco(password=41241)
try:
    assert (1==2), "Error, 1 is not 2"
except Exception as error:
    print("error:", error)
    print("An error occurred")
```
### Zip List/Dict
```
l1 = ["a", 3, 7]
l2 = ["b", 2, 0]
l3 = list(zip(l1, l2))
l3
l3 = ["a", "b", "c"]
l4 = [2, 4, 6]
for i in range(len(l3)):
    print(l3[i])
    print(l4[i])
    print("-----")
l5 = list(zip(l3, l4))
l5
k = 2, 4
k
j, h = 2, 4
l5
for elem in l5:
    print(elem)
for e1, e2 in l5:
    print(e1)
    print(e2)
    print("-----")
l1 = [["a"], [3, 2], (7, "m")]
l2 = [(5, 7), ["x"], ["y"]]
l3 = list(zip(l1,l2))
print(l3)
for e1, e2 in l3:
print(e1)
l1 = [["a","b"], [3, 2], (7, "m")]
l2 = [(5, 7), ["x",0], ["y", -1]]
l3 = list(zip(l1,l2))
print(l3)
l3
for elem in l3:
    print(elem)
for e1, e2 in l3:
    print(e1)
    print("--------")
    for t1 in e1:
        print(t1)
# This loop raises the error at the second for line: it asks to unpack t1, t2
# while iterating over an element that contains a single item.
l1 = [([['a'], ['b']], (5, 7)), ([3, 2], ['x', 0]), ((7, 'm'), ['y', -1])]
for e1, e2 in l1:
    print(e1)
    print("--------")
    for t1, t2 in e1:
        print(t1)
```
### Zip to build dictionaries
```
d1 = {"clave":3, "k":"j", 2:["8", "x"]}
d1
l1 = ["a", 3, 7]
l2 = ["b", 2, 0]
d3 = dict(zip(l1, l2))
d3
# Add a (key, value) item to a dictionary
d = {}
d["key"] = 2
d
d["key"] = 4
d
d[2] = [8, 4]
d
d[2] = [8, 4, 6]
d
d[2].append("x")
d
k = 2, 4
k
d[4] = [2, 5, 9], (8,"a", "b")
d
d[4] = list(d[4])
d[4].append(4)
d
d[4][1] = list(d[4][1])
d
d[4] = tuple(d[4])
d
# Zip creates as many items as the collection with the fewest elements among all the collections passed in.
l1 = ["x", "y"]
l2 = [0, 1]
l3 = (True, False)
l4 = [300, 5000, 999999]
s = "abdf"
l5 = list(zip(l1, l2, l3, l4, s))
l5
e1, e2, e3, e4, e5 = 1, 2, 3, 4, 5
e5
for e1, e2, e3, e4, e5 in l5:
    print(e5)
for pos, elem in enumerate(l5):
    print(pos)
    print(elem)
for pos, (e1, e2, e3, e4, e5) in enumerate(l5):
    print(pos)
    print(e3)
    print("------")
l4 = {6, 5000, 999999}
l4
```
### List/Dict comprehension
```
list(range(21))
l = [3]
for i in range(21):
l.append(i)
l
p = 0
l = [p for elem in [2, 5, 9] ]
l
p = 0
l = [elem for elem in [2, 5, 9] if elem > 4]
l
p = 0
l = [elem for elem in [2, 5., 9] if isinstance(elem, int)]
l
p = 0
l = [elem for elem in [2, 5., 9] if isinstance(elem, int)]
l
p = 0
l = [elem/2 for elem in [2, 5., 9, 17] if isinstance(elem, int)]
l
type("x") == int
p = 1
l = [(elem/2) + p for elem in [2, 5., 9, 17, "x"] if isinstance(elem, int)]
l
o = [2, 5., 9, 17, "x"]
l = [(elem/2) + p for elem in o if isinstance(elem, int)]
k = []
for elem in o:
    if isinstance(elem, int):
        k.append((elem/2) + p)
k
o = [2, 5., 9, 17, "x"]
l = [(elem/2) + p for elem in o if isinstance(elem, int)]
k = [2.0]
p = 1
def is_int(elem):
    return type(elem) == int
for elem in o:
    if is_int(elem=elem):
        k.append((elem/2) + p)
dict1 = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
for (k,v) in dict1.items():
    print("Key:", k)
    print("Value:", v)
    print("------")
dict1 = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
double_dict1 = {(k*2):(v*2) for (k,v) in dict1.items()}
print(double_dict1)
dict1 = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
double_dict1 = {(k*2):(v*2) for (k,v) in dict1.items() if v > 2}
print(double_dict1)
```
### Recursion
```
def f1(cont):
    return cont + 1
r = f1(cont=2)
r
import time
def p():
    print("Hola")
    time.sleep(2)
    p()
p()
def f1(cont):
    if cont == 0:
        print("BASE CASE")
        print("Value of cont the last time:", cont)
        return cont
    else:
        print("Value of cont:", cont)
        f1(cont=cont-1)
r = f1(cont=2)
print(r)
def f1(cont):
    if cont == 0:
        print("BASE CASE")
        print("Value of cont the last time:", cont)
        return cont
    else:
        print("Value of cont:", cont)
        return f1(cont=cont-1)
r = f1(cont=2)
r
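# The classic recursion sketch: factorial, with a base case that stops the
# chain of calls and a recursive case that shrinks the problem
def factorial(n):
    if n == 0:                      # base case
        return 1
    return n * factorial(n - 1)     # recursive case
print(factorial(5))  # 120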
l = []
for i in range(6):
    l.append(i)
print(l)
lc = [i for i in range(6)]
lc
def fn(k=None):
    if k: # True, 1 == 1, [1], 4
        return []
    else: # None, False, empty collection, 0
        return [i for i in range(6)]
p = fn()
print(p)
def ct(val):
    if val > 5:
        return True
    else:
        return False
lc = [x for x in range(10) if ct(val=x)]
lc
for i in range(6):
    if ct(val=i):
        print("Greater than 5")
    else:
        print("5 or less")
def ct(val, limite):
    a_retornar = False
    if val > limite:
        a_retornar = True
    return a_retornar
def main(lista, limite):
    for x in lista:
        if ct(val=x, limite=limite):
            return [i for i in range(x) if i < 5]
l = list(range(10))
limite = 8
print(main(lista=l, limite=limite))
ct(val=1000, limite=100)
def ct(val, limite):
    if val > limite:
        return True
l = ["st4", 3, [0.1, 1], (8, 6), {2:"valor"}, {9, "s"}]
lista_numerica = []
for x in l:
    if isinstance(x, (int, float)):
        lista_numerica.append(x)
    else:
        if isinstance(x, dict):
            for k, v in x.items():
                if isinstance(k, (int, float)):
                    lista_numerica.append(k)
                if isinstance(v, (int, float)):
                    lista_numerica.append(v)
        else: # x is a collection
            if isinstance(x, str):
                continue
            else: # non-str collection
                for elem in x:
                    if isinstance(elem, (int, float)):
                        lista_numerica.append(elem)
lista_numerica
l = ["st4", 3, [0.1, 1], (8, 6), {2:"valor"}, {9, "s"}]
lista_numerica = []
def check_and_add(elem, types, lista):
    if isinstance(elem, types):
        lista.append(elem)
for x in l:
    if isinstance(x, (int, float)):
        lista_numerica.append(x)
    else:
        if isinstance(x, dict):
            for k, v in x.items():
                if isinstance(k, (int, float)):
                    lista_numerica.append(k)
                if isinstance(v, (int, float)):
                    lista_numerica.append(v)
        else: # x is a collection
            if isinstance(x, str):
                continue
            else: # non-str collection
                for elem in x:
                    if isinstance(elem, (int, float)):
                        lista_numerica.append(elem)
l = ["st4", 3, [0.1, 1], (8, 6), {2:"valor"}, {9, "s"}]
lista_numerica = []
def check_and_add(elem, types, lista):
    if isinstance(elem, types):
        lista.append(elem)
for x in l:
    if isinstance(x, (int, float)):
        lista_numerica.append(x)
    else:
        if isinstance(x, dict):
            for k, v in x.items():
                check_and_add(k, (int, float), lista_numerica)
                check_and_add(v, (int, float), lista_numerica)
        else: # x is a collection
            if isinstance(x, str):
                continue
            else: # non-str collection
                for elem in x:
                    check_and_add(elem, (int, float), lista_numerica)
d = {2:"valor"}
list(d.items())
d = {2:"valor"}
for k, v in d.items():
    print(k)
    print(v)
```
### Lambda
```
def nombre_f(d, a):
    return d + 2
nombre_f(d=2, a=None)
nombre_g = lambda d, a: (d + 2)/a
nombre_g(d=2, a=5)
def nombre_f(k):
    return k
nombre_f = lambda k: k
nombre_f(k=2)
def nombre_f(k):
    if k > 0:
        return k
    else:
        return 2
# If we write a condition inside a lambda, both the if and the else are required
nombre_f = lambda k: k if k > 0 else 2
r = nombre_f(k=4)
print(r)
def nombre_f(k):
    if k > 0:
        return k
    else:
        return 2
# If we write a condition inside a lambda, both the if and the else are required
lambda_f = lambda d: print(d+2)
def f1(d):
    print(d+2)
    return d
"""
def nombre_f(k):
    if k > 0:
        return f1(d=k)
    else:
        return 2
"""
nombre_f = lambda k: f1(d=k) if k > 0 else 2
r = nombre_f(k=4)
print(r)
def is_higher_5(elem):
    if elem > 5:
        return elem
l = [i for i in range(10)]
l
l = ["st4", 3, [0.1, 1], (8, 6), {2:"valor"}, {9, "s"}]
lista_numerica = [x for x in l if isinstance(x, (int, float)) for y in l if isinstance(y, (list, tuple, set)) for x in y if isinstance(x, (int, float))]
lista_numerica
```
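A very common use of `lambda` is as the `key` argument when sorting; a small sketch with made-up data:

```python
pairs = [("ana", 3), ("luis", 1), ("eva", 2)]
# sort the tuples by their second element
by_value = sorted(pairs, key=lambda pair: pair[1])
print(by_value)  # [('luis', 1), ('eva', 2), ('ana', 3)]
```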
## Else in list comprehensions
```
l = [x if x > 3 else 2 for x in range(5)]
l
```
# Maps
## 1. Introduction
Maps are a way to present information on a (roughly) spherical earth on a flat plane, like a page or a screen. Here are two examples of common map projections. The projection is only accurate in the region where the plane touches the sphere, and is less accurate as the distance between the plane and the sphere increases.
#### Mercator

#### Lambert conformal conic

You can read more about map projections from [_Map Projections โ a Working Manual_](http://pubs.usgs.gov/pp/1395/report.pdf), the source of the images above, or, more entertainingly, from [XKCD](https://xkcd.com/977/).
We'll use `cartopy` to plot on maps. Check out the [gallery](http://scitools.org.uk/cartopy/docs/latest/gallery.html) for inspiration.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import cartopy
import cartopy.crs as ccrs # commonly used shorthand
import cartopy.feature as cfeature
```
Here we have the most basic projection: plate carrée, which is an [equirectangular projection](https://en.wikipedia.org/wiki/Equirectangular_projection), and is essentially equivalent to just plotting the longitude and latitude values without a projection. I will refer to longitude and latitude as "geographic coordinates".
We can make an axes that is plotted in geographic coordinates (or, indeed, any projection we choose) by using the `projection` keyword argument to `fig.add_subplot()`. Here we also plot the coastline and add gridlines.
```
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.coastlines(resolution='110m') # coastline resolution options are '110m', '50m', '10m'
ax.gridlines()
```
`cartopy` provides a number of projections. [Available projections](http://scitools.org.uk/cartopy/docs/latest/crs/projections.html) are:
PlateCarree
AlbersEqualArea
AzimuthalEquidistant
LambertConformal
LambertCylindrical
Mercator
Miller
Mollweide
Orthographic
Robinson
Sinusoidal
Stereographic
TransverseMercator
UTM
InterruptedGoodeHomolosine
RotatedPole
OSGB
EuroPP
Geostationary
Gnomonic
LambertAzimuthalEqualArea
NorthPolarStereo
OSNI
SouthPolarStereo
Lambert Conformal Conic is a useful projection in numerical modeling because it preserves right angles. Here we use the projection without any keyword specifications, but with the coastline plotted so that we have something to look at.
The projection that we choose in the `axes` line with `projection=` is the projection that the plot is in. Data from any projection can be plotted on this map, but we will have to tell it which projection it is in.
```
plt.figure()
ax = plt.axes(projection=ccrs.LambertConformal()) ## The map is in the Lambert Conformal projection
ax.coastlines(resolution='110m')
ax.gridlines()
```
Let's make a map of the Gulf of Mexico using the `LambertConformal` projection. Projections take in different keywords to specify properties. For this projection, we can specify the central longitude and latitude, which control the center of the projection. Our selection in the example is not far off from the default, so it looks similar to the previous plot.
```
# the central_longitude and central_latitude parameters tell the projection where to be centered for the calculation
# The map is in Lambert Conformal
ax = plt.axes(projection=ccrs.LambertConformal(central_longitude=-85.0, central_latitude=25.0))
gl = ax.gridlines(linewidth=0.2, color='gray', alpha=0.5, linestyle='-')
# we control what we actually see in the plot with this:
# We can set the extent using latitude and longitude, but then we need to tell it the projection, which is
# PlateCarree since that is equivalent
# We are choosing the bounds of the map using geographic coordinates,
# then identifying as being in PlateCarree
ax.set_extent([-100, -70, 15, 35], ccrs.PlateCarree())
# add geographic information
ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.coastlines(resolution='110m') # looks better with resolution='10m'
ax.add_feature(cartopy.feature.BORDERS, linestyle='-', lw=.5)
ax.add_feature(cartopy.feature.RIVERS)
```
> Don't forget to center your projections around your area of interest to be as accurate as possible for your purposes.
The map is plotted in a projected coordinate system, with units in meters, but the package deals with the projection behind the scenes. We can see this by looking at the limits of the two axes, which don't look like longitude/latitude at all:
```
ax.get_xlim(), ax.get_ylim()
```
This same call to a plot set up with the `PlateCarree` projection, which is in geographic coordinates (lon/lat) does give limits in longitude and latitude, because in that case we told the plot to be in those coordinates, but in this case we said to use a `LambertConformal`.
We can use whatever type of coordinates we want, including latitude and longitude, as long as we tell `cartopy` which type we are using.
As you saw above, we set the limits of the plot not with xlim and ylim, but with `extent` and the appropriate projection object.
---
### _Exercise_
> Create a map of the Gulf of Mexico using a different projection. How does it compare to the map above?
---
This is pretty good, but there are some limitations in this package currently. One is that we can't add labels to the lat/lon lines for the Lambert Conformal Conic projection. We can do this using Mercator, though:
```
plt.figure(figsize=(10, 6))
# the central_longitude parameter tells the projection where to be centered for this axes
ax = plt.axes(projection=ccrs.Mercator(central_longitude=-85.0))
gl = ax.gridlines(linewidth=0.2, color='gray', alpha=0.5, linestyle='-', draw_labels=True)
# we control what we actually see in the plot with this:
# We can set the extent using latitude and longitude, but then we need to tell it the projection, which is
# PlateCarree since that is equivalent
ax.set_extent([-100, -70, 15, 35], ccrs.PlateCarree())
# add geographic information
ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.coastlines(resolution='110m') # looks better with resolution='10m'
ax.add_feature(cartopy.feature.BORDERS, linestyle='-', lw=.1)
ax.add_feature(cartopy.feature.RIVERS)
# Now we can add on lat/lon labels:
# more info: http://scitools.org.uk/cartopy/docs/v0.13/matplotlib/gridliner.html
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.ticker as mticker
# the following two make the labels look like lat/lon format
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
gl.xlocator = mticker.FixedLocator([-105, -95, -85, -75, -65]) # control where the ticks are
gl.xlabel_style = {'size': 15, 'color': 'gray'} # control how the tick labels look
gl.ylabel_style = {'color': 'red', 'weight': 'bold'}
gl.xlabels_top = False # turn off labels where you don't want them
gl.ylabels_right = False
```
When we want to add something to the plot, we just need to tell it what projection the information is given in using the `transform` keyword argument.
If the information is in latitude/longitude, typical for the way people tend to think about information (instead of projected locations), then we give the Plate Carree projection with the `transform` keyword argument to the plot call:
> `transform=ccrs.PlateCarree()`
For example, to plot some points with a particular projection, you can type:
> `plt.plot(xpts, ypts, transform=ccrs.projection_that_xpts_and_ypts_are_given_in)`
A nice thing about the `cartopy` package is that you can directly plot data from any projection; you just tell it the projection through the `transform` keyword argument when you add to the plot.
---
### _Exercise_
> The latitude and longitude of College Station are given below. Plot the location of College Station on the map above with a red dot.
lat_cll = 30.0 + 36.0/60.0 + 5.0/3600.0
lon_cll = -(96.0 + 18.0/60.0 + 52.0/3600.0)
_What happens if you put in the wrong projection or no projection?_
---
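The degrees/minutes/seconds arithmetic in the exercise can be wrapped in a small helper; this is a sketch, and the function name is ours, not part of cartopy:

```python
def dms_to_decimal(degrees, minutes, seconds, negative=False):
    """Convert degrees/minutes/seconds to decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if negative else value

# College Station, from the exercise above
lat_cll = dms_to_decimal(30, 36, 5)
lon_cll = dms_to_decimal(96, 18, 52, negative=True)
print(lat_cll, lon_cll)
```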
---
### _Exercise_
> Data from any projection can be added to a map, the data must just be input with its projection using the `transform` keyword.
> The x, y location of Austin, TX, is given below in the Mercator projection. Plot the location of Austin in Mercator coordinates on the map above with a blue 'x'.
x, y = -10880707.173023093, 3516376.324225941
---
## Point conversion
While `cartopy` removes the need to convert points on your own between projections (instead doing it behind the scenes), you can always convert between projections if you want using the following. Or, if you want to transform more than one point, use `projection.transform_points(projection, x, y)`.
```
projection = ccrs.Mercator()
x, y = projection.transform_point(-93.0-45.0/60.0, 27.0+55.0/60.0, ccrs.PlateCarree())
print(x, y)
```
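For intuition, the Mercator mapping is simple enough to write by hand. The sketch below assumes a *spherical* Earth of radius 6378137 m; cartopy's `Mercator` defaults to the WGS84 ellipsoid, so recovered latitudes differ from the ellipsoidal values by a fraction of a degree:

```python
import math

R = 6378137.0  # equatorial radius in meters (spherical assumption)

def mercator_inverse(x, y):
    """Approximate lon/lat in degrees from spherical-Mercator x/y in meters."""
    lon = math.degrees(x / R)
    lat = math.degrees(2.0 * math.atan(math.exp(y / R)) - math.pi / 2.0)
    return lon, lat

# The Austin, TX coordinates from the earlier exercise
lon, lat = mercator_inverse(-10880707.173023093, 3516376.324225941)
print(lon, lat)
```

The longitude comes back essentially exact (Mercator x is proportional to longitude on both sphere and ellipsoid), while the spherical latitude lands within a few tenths of a degree of the true value.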
---
### _Exercise_
> Convert the Mercator coordinates given for Austin to latitude and longitude and confirm that they are correct.
---
## Other features you can add
The code we used earlier, like:
ax.add_feature(cartopy.feature.LAND)
was a convenience function wrapping more complex and capable code options. Here we explore a little more the capabilities. Note that this requires downloading data which you will see a warning about the first time you run the code.
We can set up the ability to plot with high resolution land data:
```
# this is another way to do `ax.add_feature(cartopy.feature.LAND)` but to have more control over it
# 50m: moderate resolution data
# set up for plotting land
land_50m = cfeature.NaturalEarthFeature('physical', 'land', '50m',
edgecolor='face', facecolor=cfeature.COLORS['land'])
# set up for plotting water at higher resolution
ocean_50m = cfeature.NaturalEarthFeature('physical', 'ocean', '50m',
edgecolor='face', facecolor=cfeature.COLORS['water'])
```
There are also some built-in colors, but you can use any matplotlib color available to color the land or water.
```
sorted(cfeature.COLORS.keys())
```
Using higher resolution can be pretty significantly different.
Here we will prepare the higher resolution land and ocean information for the highest resolution available, then use it in the plot.
```
land_10m = cfeature.NaturalEarthFeature('physical', 'land', '10m',
edgecolor='face',
facecolor=cfeature.COLORS['land'])
ocean_10m = cfeature.NaturalEarthFeature('physical', 'ocean', '10m',
edgecolor='face',
facecolor=cfeature.COLORS['water'])
projection=ccrs.LambertConformal(central_longitude=-95.0, central_latitude=29.0)
# Galveston Bay
fig = plt.figure(figsize=(15, 15))
# lower resolution
ax1 = fig.add_subplot(1,2,1, projection=projection)
ax1.set_extent([-96, -94, 28.5, 30], ccrs.PlateCarree())
ax1.add_feature(cartopy.feature.LAND)
ax1.add_feature(cartopy.feature.OCEAN)
# now higher resolution
ax2 = fig.add_subplot(1,2,2, projection=projection)
ax2.set_extent([-96, -94, 28.5, 30], ccrs.PlateCarree())
ax2.add_feature(ocean_10m)
ax2.add_feature(land_10m)
```
Here is a list (with reference names in some cases appended) of the [many features](http://www.naturalearthdata.com/features/) that are available through Natural Earth:
*(10, 50, 110 for high, medium, low resolution)*
**Physical Vector Data Themes:**
(`physical`)
* Coastline (10, 50, 110): `coastline`
* Land (10, 50, 110): `land`
* Ocean (10, 50, 110): `ocean`
* Minor Islands (10): `minor_islands`, `minor_islands_coastline`
* Reefs (10): `reefs`
* Physical region features (10): `geography_regions_polys`, `geography_regions_points`, `geography_regions_elevation_points`, `geography_marine_polys`
* Rivers and Lake Centerlines (10, 50, 110): `rivers_lake_centerlines`
* Lakes (10, 50, 110): `lakes`
* Glaciated areas (10, 50, 110): `glaciated_areas`
* Antarctic ice shelves (10, 50): `antarctic_ice_shelves_polys`, `antarctic_ice_shelves_lines`
* Bathymetry (10): `bathymetry_all` or choose which depth(s)
* Geographic lines (10, 50): `geographic_lines`
* Graticules (10, 50, 110): (grid lines) `graticules_all` or choose degree interval
**Raster Data Themes:**
(`raster`: land coloring)
* Cross Blended Hypsometric Tints (10, 50)
* Natural Earth 1 (10, 50)
* Natural Earth 2 (10, 50)
* Ocean Bottom (10, 50)
* Bathymetry (50)
* Shaded Relief (10, 50)
* Gray Earth (10, 50)
* Manual Shaded Relief (10, 50)
**Cultural Vector Data Themes:**
(`cultural`)
* Countries (10, 50, 110): `admin_0_countries`, `admin_0_countries_lakes`, `admin_0_boundary_lines`
* Disputed areas and breakaway regions (10, 50)
* First order admin (provinces, departments, states, etc.) (10, 50): e.g. `admin_1_states_provinces_lines`
* Populated places (10, 50, 110)
* Urban polygons (10, 50)
* Parks and protected areas (10): `parks_and_protected_lands`
* Pacific nation groupings (10, 50, 110)
* Water boundary indicators (10)
Here is an example showing state boundaries:
```
projection=ccrs.PlateCarree()
fig = plt.figure()
ax = fig.add_subplot(111, projection=projection)
ax.set_extent([-125, -70, 24, 50], ccrs.PlateCarree())
ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
# note: in newer Natural Earth releases the '_shp' suffix was dropped, so this
# name may need to be 'admin_1_states_provinces_lines' instead
states = cfeature.NaturalEarthFeature(category='cultural', scale='50m', facecolor='none',
name='admin_1_states_provinces_shp')
ax.add_feature(states, edgecolor='gray')
```
| github_jupyter |
# Extra Trees Classifier
This code template is for classification tasks using a simple ExtraTreesClassifier, based on the Extremely Randomized Trees algorithm.
### Required Packages
```
import numpy as np
import pandas as pd
import seaborn as se
import warnings
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the model's performance.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most machine learning models in the Sklearn library don't handle string categories or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values if any exist, and encode string class labels in the dataset as integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)
```
### Model
ExtraTreesClassifier is an ensemble learning method built on decision trees. Like RandomForest, it randomizes parts of tree construction to reduce overfitting; unlike RandomForest, it draws candidate split thresholds at random for each feature instead of searching for the most discriminative threshold.
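As a side-by-side illustration of the two ensembles (a hedged sketch on synthetic data, not part of this template's pipeline; the dataset and parameters here are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# small synthetic classification problem
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# compare mean cross-validated accuracy of the two tree ensembles
scores = {}
for cls in (ExtraTreesClassifier, RandomForestClassifier):
    model = cls(n_estimators=100, random_state=0)
    scores[cls.__name__] = cross_val_score(model, X, y, cv=3).mean()
print(scores)
```

On most datasets the two scores are close; extra trees usually train faster because no threshold search is performed at each split.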
#### Model Tuning Parameters
1. n_estimators: int, default=100
>The number of trees in the forest.
2. criterion: {"gini", "entropy"}, default="gini"
>The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain.
3. max_depth: int, default=None
>The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
4. max_features: {"auto", "sqrt", "log2"}, int or float, default="auto"
>The number of features to consider when looking for the best split.
```
model=ExtraTreesClassifier(n_jobs = -1,random_state = 123)
model.fit(X_train,y_train)
```
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires every label set to be predicted correctly for each sample.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,X_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are true and how many are false.
* **where**:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
    - f1-score:- harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(X_test)))
```
#### Feature Importances
Feature importance refers to techniques that assign a score to input features based on how useful they are for making predictions.
```
plt.figure(figsize=(8,6))
n_features = len(X.columns)
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), X.columns)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
```
#### Creator: Thilakraj Devadiga , Github: [Profile](https://github.com/Thilakraj1998)
| github_jupyter |
# Contest rating change prediction for user using KNN algorithm.
We will try to predict rating change based on previous contests - duration, authors, contest beginning hour, previous performances of the user and ratings.
## Imports
```
from database import *
import pandas as pd  # used below to build per-user history DataFrames
import numpy as np
from IPython.display import display, clear_output
from tqdm import tqdm
import matplotlib.pyplot as plt
```
## Helpers, variables, loading data
```
def load_database():
db = LoadDatabase()
user_history = {}
for contestId, standings in tqdm(sorted(db.standings.items(), key=lambda x: db.contests.loc[x[0]].startTime)):
for handle, row in standings.iterrows():
if not handle in user_history:
user_history[handle] = {key: [] for key in row.keys()}
user_history[handle]["contestId"] = []
for key, value in row.items():
user_history[handle][key].append(value)
user_history[handle]["contestId"].append(contestId)
for handle, history in tqdm(user_history.items()):
db.history[handle] = pd.DataFrame(history)
cols = sorted(db.history[handle].columns)
db.history[handle] = db.history[handle][cols]
return db
db = load_database()
def get_users(threshold=50, fraction=1):
all_users = [handle for handle, history in db.history.items() if len(history) >= threshold]
n = max(1, int(fraction * len(all_users)))
return np.random.choice(all_users, n, replace=False)
def get_random_user(threshold=50):
return np.random.choice(get_users(threshold=threshold))
def get_correlation(user, author):
if not author in db.history:
return 0
if not user in db.history:
return 0
user_history = db.history[user].set_index("contestId")
author_history = db.history[author].set_index("contestId")
common_contests = 0
scalar_sum = 0
for contestId in user_history.index:
if not contestId in author_history.index:
continue
common_contests += 1
scalar_sum += author_history.loc[contestId].delta * user_history.loc[contestId].delta
if common_contests == 0:
return 0
if common_contests == 1:
return scalar_sum / 5000
if common_contests == 2:
return scalar_sum / 500
return scalar_sum / (common_contests ** 0.5)
user_datas = {}
def get_user_data(handle=None, threshold=50):
if handle is None:
handle = get_random_user(threshold=threshold)
if handle in user_datas:
return user_datas[handle]
user_history = db.history[handle].iloc[1:].reset_index().drop("index", axis=1)
user_history.delta = user_history.newRating - user_history.oldRating
# user_history.delta = user_history.delta.map(lambda x: 1 if x > 0 else -1 if x < 0 else 0)
user_contests = db.contests.loc[user_history.contestId]
user_history["dayTime"] = user_contests.reset_index().dayTime
user_history["startTime"] = user_contests.reset_index().startTime
user_history["duration"] = user_contests.reset_index().duration
user_history["authors"] = user_contests.reset_index().authors
user_history["author"] = user_contests.reset_index().authors.map(lambda x: list(x)[0] if len(x) > 0 else "")
# user_history["correlation"] = user_history.author.map(lambda x: get_correlation(x, handle))
user_history["correlation"] = user_history.authors.map(lambda x: np.mean([get_correlation(a, handle) for a in x]) if len(x) > 0 else 0)
user_datas[handle] = user_history
return user_history
def get_Xy(handle=None, threshold=50):
user_data = get_user_data(handle=handle, threshold=threshold)
X_columns = ["oldRating", "dayTime", "duration", "startTime", "correlation"]
y_columns = ["delta"]
X, y = user_data[X_columns], user_data[y_columns]
std = X.std()
std[std == 0] = 1
X = (X - X.mean()) / std
return X, y
def get_train_test(handle=None, threshold=50):
X, y = get_Xy(handle=handle, threshold=threshold)
X_train, y_train = X.iloc[:-1], y.iloc[:-1]
X_test, y_test = X.iloc[-1:], y.iloc[-1:]
return X_train, X_test, y_train, y_test
```
## User contest history
```
db.history["tourist"].head()
```
## Contest data
```
db.contests.head()
```
This is what the contest data looks like.
## Data we are analyzing (normalized)
```
X_train, X_test, y_train, y_test = get_train_test()
pd.concat((X_train, y_train), axis=1).head()
```
## Prepare data
```
threshold = 50 # min number of contest for user to have
users = get_users(threshold=threshold, fraction=0.1) # users to consider
```
We will simply try to predict whether the rating delta after a contest will be positive, negative, or zero (a classification problem).
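Mapping a raw rating delta to one of the three classes is just taking its sign, as the evaluation code below does with `np.sign`; for instance (hypothetical delta values):

```python
import numpy as np

# rating deltas from four hypothetical contests
deltas = np.array([37, -12, 0, 5])

# np.sign maps each delta to its class: 1 = gain, -1 = loss, 0 = unchanged
classes = np.sign(deltas)
print(classes)  # [ 1 -1  0  1]
```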
## Random classifier
Just randomly say delta is negative, positive or the same.
```
score, total = 0, 0
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
predictions = np.random.uniform(-1, 1, size=y_test.shape)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
print("Random Acuraccy:", score / total, "score:", score, "total:", total)
```
We can see that the random classifier performs at chance level, $\approx 50\%$.
## Greedy classifiers
Say that delta is the same as the mode in the $x$ last contests.
```
import scipy.stats as sstats
go_backs = list(range(5))
greedy_errors = []
for go_back in go_backs:
score, total = 0, 0
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
deltas = np.sign(y_train[-go_back:]) # note: go_back == 0 selects the entire history, since y[-0:] is the full array
predictions = sstats.mode(deltas)[0]
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
greedy_errors.append((total - score) / total)
clear_output()
print("Best go_back:", go_backs[np.argmin(greedy_errors)], "error:", np.min(greedy_errors))
plt.figure(figsize=(15, 8))
plt.plot(go_backs, greedy_errors)
plt.title("Greedy classifier")
plt.xlabel("Number of recent contests to consider")
plt.ylabel("error rate")
plt.show()
```
Still random.
## Mean classifier
```
go_backs = list(range(5))
mean_errors = []
for go_back in go_backs:
score, total = 0, 0
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
deltas = y_train[-go_back:] # note: go_back == 0 selects the entire history
predictions = np.mean(deltas)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
mean_errors.append((total - score) / total)
clear_output()
print("Best go_back:", go_backs[np.argmin(mean_errors)], "error:", np.min(mean_errors))
plt.figure(figsize=(15, 8))
plt.plot(go_backs, mean_errors)
plt.title("Mean classifier")
plt.xlabel("Number of recent contests to consider")
plt.ylabel("error rate")
plt.show()
```
Still random.
## KNNs
We will build a model for every user and predict the outcome of their last contest, since we want to predict performance based on that user's previous performances.
### KNN (uniform weights)
Neighbors have the same weight no matter how far they are. Classifying is just computing the mean of K nearest neighbors.
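For intuition, with uniform weights the prediction is just the plain average of the neighbors' deltas (hypothetical values):

```python
import numpy as np

# rating deltas of the K = 3 nearest neighbors (hypothetical values)
neighbor_deltas = np.array([40.0, -20.0, 10.0])

# uniform weights: the prediction is simply the mean
prediction = neighbor_deltas.mean()
print(prediction)  # 10.0
```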
```
from sklearn.neighbors import KNeighborsRegressor
ks = list(range(1, 21))
uniform_errors = []
for k in ks:
score, total = 0, 0
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(k, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, weights="uniform").fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
uniform_errors.append((total - score) / total)
clear_output()
print("Best K:", ks[np.argmin(uniform_errors)], "error:", np.min(uniform_errors))
plt.figure(figsize=(15, 8))
plt.plot(ks, uniform_errors)
plt.title("uniform KNN")
plt.xlabel("K nearest neighbors")
plt.ylabel("error rate")
plt.show()
```
Although we can read off a theoretically best K, the errors look essentially random, so the result is not trustworthy.
### KNN (weights inversely proportional to distance)
The same as above but we compute weighted mean, with weights inversely proportional to the distance between two points.
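For intuition, with hypothetical neighbor deltas at distances 1, 2, and 4 from the query point, the closest neighbor dominates the weighted mean:

```python
import numpy as np

# deltas of the K = 3 nearest neighbors and their distances to the query point
neighbor_deltas = np.array([40.0, -20.0, 10.0])
distances = np.array([1.0, 2.0, 4.0])

# weights inversely proportional to distance; the weighted mean favors close neighbors
weights = 1.0 / distances
prediction = np.sum(weights * neighbor_deltas) / np.sum(weights)
print(round(prediction, 3))  # 18.571
```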
```
from sklearn.neighbors import KNeighborsRegressor
ks = list(range(1, 21))
distance_errors = []
for k in ks:
score, total = 0, 0
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(k, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, weights="distance").fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
distance_errors.append((total - score) / total)
clear_output()
print("Best K:", ks[np.argmin(distance_errors)], "error:", np.min(distance_errors))
plt.figure(figsize=(15, 8))
plt.plot(ks, distance_errors)
plt.title("distance KNN")
plt.xlabel("K nearest neighbors")
plt.ylabel("error rate")
plt.show()
```
Same as above.
### KNN (custom distance metric)
The metric will be a weighted Euclidean distance with a predefined weight for every coordinate - e.g. a difference in duration can have a different influence than a difference in author correlation.
#### Default weights
```
XX = get_train_test()[0]
DEFAULT_OLD_RATING_WEIGHT = 1
DEFAULT_DAY_TIME_WEIGHT = 1
DEFAULT_DURATION_WEIGHT = 1
DEFAULT_START_TIME_WEIGHT = 1
DEFAULT_CORRELATION_WEIGHT = 1
weights = {
"oldRating": DEFAULT_OLD_RATING_WEIGHT,
"dayTime": DEFAULT_DAY_TIME_WEIGHT,
"duration": DEFAULT_DURATION_WEIGHT,
"startTime": DEFAULT_START_TIME_WEIGHT,
"correlation": DEFAULT_CORRELATION_WEIGHT
}
# weights as vector in proper order
def get_weight_vector(weights):
return np.array([weights[col] for col in XX.columns])
w = get_weight_vector(weights)
```
#### Individual parameter influence
##### oldRating weight influence
```
old_ratings = list(range(1, 10))
old_rating_errors = []
for old_rating in old_ratings:
weights["oldRating"] = old_rating
w = get_weight_vector(weights)
custom_metric = lambda e1, e2: np.sqrt(np.sum(w * (e1 - e2) ** 2))
score, total = 0, 0 # reset the counters for each weight value
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(5, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, metric=custom_metric).fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
old_rating_errors.append((total - score) / total)
clear_output()
weights["oldRating"] = DEFAULT_OLD_RATING_WEIGHT
w = get_weight_vector(weights)
# print("Best K:", ks[np.argmin(distance_errors)], "error:", np.min(distance_errors))
plt.figure(figsize=(15, 8))
plt.plot(old_ratings, old_rating_errors)
plt.title("old_rating parameter influence")
plt.xlabel("old_rating weight")
plt.ylabel("error rate")
plt.show()
```
##### dayTime weight influence
```
day_times = list(range(1, 10))
day_time_errors = []
for day_time in day_times:
weights["dayTime"] = day_time
w = get_weight_vector(weights)
custom_metric = lambda e1, e2: np.sqrt(np.sum(w * (e1 - e2) ** 2))
score, total = 0, 0 # reset the counters for each weight value
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(5, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, metric=custom_metric).fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
day_time_errors.append((total - score) / total)
clear_output()
weights["dayTime"] = DEFAULT_DAY_TIME_WEIGHT
w = get_weight_vector(weights)
# print("Best K:", ks[np.argmin(distance_errors)], "error:", np.min(distance_errors))
plt.figure(figsize=(15, 8))
plt.plot(day_times, day_time_errors)
plt.title("day_time parameter influence")
plt.xlabel("day_time weight")
plt.ylabel("error rate")
plt.show()
```
##### duration weight influence
```
durations = list(range(1, 10))
duration_errors = []
for duration in durations:
weights["duration"] = duration
w = get_weight_vector(weights)
custom_metric = lambda e1, e2: np.sqrt(np.sum(w * (e1 - e2) ** 2))
score, total = 0, 0 # reset the counters for each weight value
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(5, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, metric=custom_metric).fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
duration_errors.append((total - score) / total)
clear_output()
weights["duration"] = DEFAULT_DURATION_WEIGHT
w = get_weight_vector(weights)
# print("Best K:", ks[np.argmin(distance_errors)], "error:", np.min(distance_errors))
plt.figure(figsize=(15, 8))
plt.plot(durations, duration_errors)
plt.title("duration parameter influence")
plt.xlabel("duration weight")
plt.ylabel("error rate")
plt.show()
```
##### startTime weight influence
```
start_times = list(range(1, 10))
start_time_errors = []
for start_time in start_times:
weights["startTime"] = start_time
w = get_weight_vector(weights)
custom_metric = lambda e1, e2: np.sqrt(np.sum(w * (e1 - e2) ** 2))
score, total = 0, 0 # reset the counters for each weight value
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(5, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, metric=custom_metric).fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
start_time_errors.append((total - score) / total)
clear_output()
weights["startTime"] = DEFAULT_START_TIME_WEIGHT
w = get_weight_vector(weights)
# print("Best K:", ks[np.argmin(distance_errors)], "error:", np.min(distance_errors))
plt.figure(figsize=(15, 8))
plt.plot(start_times, start_time_errors)
plt.title("start_time parameter influence")
plt.xlabel("start_time weight")
plt.ylabel("error rate")
plt.show()
```
##### correlation weight influence
```
correlations = list(range(1, 10))
correlation_errors = []
for correlation in correlations:
weights["correlation"] = correlation
w = get_weight_vector(weights)
custom_metric = lambda e1, e2: np.sqrt(np.sum(w * (e1 - e2) ** 2))
score, total = 0, 0 # reset the counters for each weight value
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(5, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, metric=custom_metric).fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
correlation_errors.append((total - score) / total)
clear_output()
weights["correlation"] = DEFAULT_CORRELATION_WEIGHT
w = get_weight_vector(weights)
# print("Best K:", ks[np.argmin(distance_errors)], "error:", np.min(distance_errors))
plt.figure(figsize=(15, 8))
plt.plot(correlations, correlation_errors)
plt.title("correlation parameter influence")
plt.xlabel("correlation weight")
plt.ylabel("error rate")
plt.show()
```
#### Weights "found" by individual analysis
```
weights["oldRating"] = 1
weights["dayTime"] = 5
weights["duration"] = 1
weights["startTime"] = 1
weights["correlation"] = 1
w = get_weight_vector(weights)
custom_metric = lambda e1, e2: np.sqrt(np.sum(w * (e1 - e2) ** 2))
score, total = 0, 0
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(5, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, metric=custom_metric).fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
print("Score:", score / total)
```
#### Finding optimal weights with scipy
```
from scipy.optimize import minimize
def f_error(w):
weights["oldRating"] = w[0]
weights["dayTime"] = w[1]
weights["duration"] = w[2]
weights["startTime"] = w[3]
weights["correlation"] = w[4]
w = get_weight_vector(weights)
custom_metric = lambda e1, e2: np.sqrt(np.sum(w * (e1 - e2) ** 2))
score, total = 0, 0
for handle in tqdm(users):
X_train, X_test, y_train, y_test = get_train_test(handle=handle)
n_neighbors = min(5, X_train.shape[0])
model = KNeighborsRegressor(n_neighbors=n_neighbors, metric=custom_metric).fit(X_train, y_train)
predictions = model.predict(X_test)
score += np.sum(np.sign(predictions) == np.sign(y_test))
total += y_test.shape[0]
return (total - score) / total
x0 = np.array([1, 1, 1, 1, 1])
opt = minimize(f_error, x0)
opt
x0 = np.array([1, 5, 1, 1, 1])
opt = minimize(f_error, x0)
opt
```
Unfortunately, the search ends in failure; we are back where we started.
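One likely culprit is that `minimize` defaults to a gradient-based method here, which estimates gradients by finite differences; the error rate is piecewise constant in the weights, so those gradients vanish almost everywhere and the search stalls at the starting point. A simple random search over the weight vector is a more robust (if brute-force) alternative. A hedged, self-contained sketch with a stand-in objective (the real `f_error` above needs the contest database):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for f_error: piecewise constant, so finite-difference gradients are zero
def step_error(w):
    return float(np.sum(np.floor(np.abs(np.asarray(w)) * 2) ** 2))

# keep the best of many random candidate weight vectors
best_w, best_err = np.ones(5), step_error(np.ones(5))
for _ in range(200):
    w = rng.uniform(0, 5, size=5)
    err = step_error(w)
    if err < best_err:
        best_w, best_err = w, err
print(best_err)  # never larger than the starting error, step_error(np.ones(5)) == 20.0
```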
## Conclusions
From the tests carried out, KNN is probably not the best choice for the rating change prediction task. If there is any relation in the data, it is hard to spot with the KNN algorithm. It requires a lot of data, which is hard to come by considering that even VERY active users have around 100 contests. On the other hand, pooling users jointly won't give good results, as each user's performance is individual.
| github_jupyter |
# Dynamic Content Personalization Using LinUCB
This is a reference implementation of a recommendation system that dynamically learns the mapping between users and items that maximizes the conversion rates.
### Data
Simulator, no external dependencies
### References
1. Li L., Chu W., Langford J., Schapire R. -- A Contextual-Bandit Approach to Personalized News Article Recommendation, 2010
```
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
sns.set_style("ticks")
plt.rcParams.update({'font.size': 14, 'pdf.fonttype': 'truetype'})
plt.rcParams.update({'font.family':'Candara', 'font.serif':['Candara']})
```
# Step 1: Create a Simple User Behavior Simulator
First, we develop a simple environment simulator.
We assume an online store that sells polo shirts and raincoats. The recommender system shows one recommendation, either a polo or a raincoat, to each user.
The user can either ignore the recommendation or click on it.
Users come either from Seattle or Miami. Those who come from Seattle tend to buy raincoats; those who come from Miami tend to buy polo shirts.
We assume user locations are known, and we also assume that each user fills in a short form and discloses his or her age.
The goal of the recommendation system is to map a user feature vector (location and age) to a recommendation (polo or raincoat) in a way that maximizes the average conversion rate (click rate).
We assume the cold-start scenario, so the system has no prior knowledge about user behavior.
```
#
# We assume that the click probability depends on location, age,
# and offer. More specifically, raincoats have higher conversion
# rates than polos in Seattle, but lower in Miami:
#
# location | age | recommendation | click probability
# ---------+-----+----------------+-------------------
# seattle | 20 | polo => 0.3
# seattle | 20 | raincoat => 0.4
# seattle | 60 | polo => 0.1
# seattle | 60 | raincoat => 0.5
# miami    | 20  | polo           => 0.4
# miami    | 20  | raincoat       => 0.1
# miami    | 60  | polo           => 0.6
# miami    | 60  | raincoat       => 0.2
#
# This dependency is not linear, so we
# add an interaction feature (location x item) to make
# it tractable for a linear regression model
x = [[0, 20, 1, 0],
[0, 20, 0, 0],
[0, 60, 1, 0],
[0, 60, 0, 0],
[1, 20, 1, 1],
[1, 20, 0, 0],
[1, 60, 1, 1],
[1, 60, 0, 0]]
y = [0.3, 0.4, 0.1, 0.5, 0.4, 0.1, 0.6, 0.2]
reg = LinearRegression().fit(x, y)
print(f'Actual probabilities {y}')
print(f'Predicted probabilities {reg.predict(x)}')
print(f'Regression intercept {reg.intercept_}, coefficients {reg.coef_}')
# Dimensionality of the context vector
context_size = 4
# Create a context vector from user and action (content) vectors
def context(user, action):
return np.array([user['location'], user['age'], action, user['location'] * action])
#
# Recommendation environment
#
class RecommendationEnvironment():
# we implement the linear user model specified above
def __init__(self):
self.theta = [-0.30, 0.00125, -0.25, 0.60]
self.p = lambda context: 0.4 + np.dot(self.theta, context)
# let's assume that 70% of users come from Seattle, age is uniform in [18, 80] years
def next_user(self):
return {'location': np.random.choice([0, 1], p=[0.7, 0.3]), 'age': np.random.randint(18, 80)}
def act(self, user, action):
ctx = context(user, action)
return 1 if np.random.random() < self.p(ctx) else 0
def best_action(self, user, actions):
return np.argmax([self.p(context(user, action)) for action in actions])
```
# Step 2: Create a Baseline Recommendation Agent - UCB
```
class UCBAgent():
def __init__(self):
self.actions = [0, 1] # action space. The actions here are scalars, but can be vectors as well.
self.n_actions = len(self.actions)
self.action_total_rewards = np.zeros(self.n_actions)
self.action_counters = np.ones(self.n_actions)
self.alpha = 0.5
def get_ucbs(self):
ucbs = np.zeros(self.n_actions)
for action_idx, action in enumerate(self.actions):
steps = np.sum(self.action_counters)
mean_reward_a = self.action_total_rewards[action_idx]/self.action_counters[action_idx]  # empirical mean reward of this action
bound_a = np.sqrt(self.alpha * np.log(steps) / self.action_counters[action_idx])
ucbs[action_idx] = mean_reward_a + bound_a
return ucbs
def choose_action(self, user): # user vector is not used (context-free agent)
ucbs = self.get_ucbs()
return self.actions[np.argmax(ucbs)]
def observe(self, user, reward, action):
action_idx = self.actions.index(action)
self.action_total_rewards[action_idx] += reward
self.action_counters[action_idx] += 1
```
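To make the exploration bonus concrete, here is a small standalone computation of a UCB1-style score with hypothetical click counts; the less-shown action wins even though its empirical mean is lower:

```python
import numpy as np

alpha = 0.5
total_rewards = np.array([30.0, 10.0])   # total clicks per action (hypothetical)
counters = np.array([100.0, 40.0])       # times each action was shown (hypothetical)
steps = counters.sum()

means = total_rewards / counters                    # empirical click rates: [0.30, 0.25]
bounds = np.sqrt(alpha * np.log(steps) / counters)  # exploration bonus, larger for rarer actions
ucbs = means + bounds
best = int(np.argmax(ucbs))
print(best)  # 1: the less-explored action is chosen despite its lower mean
```

The bonus term shrinks as an action accumulates observations, so the agent gradually shifts from exploration to exploitation.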
# Step 3: Create a Context-aware Agent - LinUCB
```
class LinUCBAgent(object):
def __init__(self):
self.actions = [0, 1]
self.n_actions = len(self.actions)
self.alpha = 0.5
self.A = [np.identity(context_size) for a in range(self.n_actions)]
self.b = [np.zeros((context_size, 1)) for a in range(self.n_actions)]
def get_ucbs(self, user):
ucbs = np.zeros(self.n_actions)
for action_idx, action in enumerate(self.actions):
x_a = context(user, action)
A_inv = np.linalg.inv(self.A[action_idx])
theta_a = np.dot(A_inv, self.b[action_idx])
ucb = np.dot(theta_a.T, x_a) + self.alpha * np.sqrt(np.linalg.multi_dot([x_a.T, A_inv, x_a]))
ucbs[action_idx] = ucb[0]
return ucbs
def choose_action(self, user): # user vector is used
ucbs = self.get_ucbs(user)
return self.actions[np.argmax(ucbs)]
def observe(self, user, reward, action):
action_idx = self.actions.index(action)
x = np.atleast_2d(context(user, action))
self.A[action_idx] += np.dot(x.T, x)
self.b[action_idx] += reward * np.atleast_2d(x).T
```
# Step 4: Run Agents and Compare Their Performance
```
n_simulations = 500
n_steps = 200
def simulate(agent_class):
trace = np.zeros((n_simulations, n_steps, 3))
env = RecommendationEnvironment()
for sim in range(n_simulations):
agent = agent_class()
for i in range(n_steps):
user = env.next_user()
action = agent.choose_action(user)
reward = env.act(user, action)
agent.observe(user, reward, action)
trace[sim, i, :] = [action, env.best_action(user, agent.actions), reward]
return trace
ucb_trace = simulate(UCBAgent)
linucb_trace = simulate(LinUCBAgent)
#
# let's visualize the accuracy: the fraction of recommendations generated by the agent
# that matches the best possible action for a given customer (ground truth)
#
fix, ax = plt.subplots(1, figsize=(15, 5))
ax.plot(np.mean(ucb_trace[:, :, 0]==ucb_trace[:, :, 1], axis=0), '--', label='UCB')
ax.plot(np.mean(linucb_trace[:, :, 0]==linucb_trace[:, :, 1], axis=0), label='LinUCB')
ax.set_xlabel('Time step')
ax.set_ylabel('Accuracy')
ax.grid(True)
ax.legend();
plt.savefig('ucb-vs-linucb-accuracy.pdf')
#
# let's visualize the actual conversion rates (click rates) for both agents
#
def cumulative_conversion_rate(trace):
ongoing_rate = np.cumsum(trace[:, :, 2], axis=1)/np.linspace(1, n_steps, n_steps)
return np.mean(ongoing_rate, axis=0), np.std(ongoing_rate, axis=0)
fix, ax = plt.subplots(1, figsize=(15, 5))
x = range(n_steps)
ucb_mean, ucb_std = cumulative_conversion_rate(ucb_trace)
ax.plot(x, ucb_mean, '--', label=f'UCB {ucb_mean[-1]:.3}')
ax.fill_between(x, ucb_mean + ucb_std, ucb_mean - ucb_std, alpha=0.2)
linucb_mean, linucb_std = cumulative_conversion_rate(linucb_trace)
ax.plot(x, linucb_mean, label=f'LinUCB {linucb_mean[-1]:.3}')
ax.fill_between(x, linucb_mean + linucb_std, linucb_mean - linucb_std, alpha=0.2)
ax.set_xlabel('Time step')
ax.set_ylabel('Conversion rate')
ax.grid(True)
ax.set_ylim([0.2, 0.5])
ax.legend();
plt.savefig('ucb-vs-linucb-conversion-rate.pdf')
```
# Step 5: Off-policy Agent Training and Evaluation
```
class RandomAgent():
def __init__(self):
self.actions = [0, 1] # action space
def choose_action(self, user): # user vector is not used (context-free agent)
return np.random.choice(self.actions)
def observe(self, user, reward, action):
pass
n_simulation = 500
conversion_rates_prod = np.zeros(n_simulation)
conversion_rates_linucb = np.zeros(n_simulation)
prod_action_frequences = np.zeros(2)
prod_agent_class = UCBAgent # set the production policy, either random or UCB
for sim in range(n_simulation):
#
# execute some policy in 'production' and record logs (trace) with users, actions, and rewards
#
prod_trace = []
env = RecommendationEnvironment()
agent = prod_agent_class()
for i in range(n_steps):
user = env.next_user()
action = agent.choose_action(user)
reward = env.act(user, action)
agent.observe(user, reward, action)
prod_trace.append([user, action, reward])
prod_action_frequences[action] += 1
#
# train a LinUCB policy on 'production' logs and estimate
# its performance using actions that matched UCB's actions
#
linucb_trace = []
agent = LinUCBAgent()
for i in range(n_steps):
user = prod_trace[i][0]
action = agent.choose_action(user)
if action == prod_trace[i][1]:
reward = env.act(user, action)
agent.observe(user, reward, action)
linucb_trace.append(reward)
conversion_rates_prod[sim] = np.mean([prod_trace[i][2] for i in range(n_steps)])
conversion_rates_linucb[sim] = np.mean(linucb_trace)
print(f'{prod_agent_class.__name__} action frequencies: {prod_action_frequences/np.sum(prod_action_frequences)}')
print(f'Estimated conversion rates\n\t{prod_agent_class.__name__}: ' +
f'{np.mean(conversion_rates_prod):.3}\n\tLinUCB: {np.mean(conversion_rates_linucb):.3}')
# Note that the conversion rate for LinUCB obtained using
# offline evaluation matches the conversion rate for
# environment-based simulation in the previous step.
#
# The above evaluation, however, is not perfectly accurate
# because the actions are not equiprobable under the UCB policy.
```
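The non-equiprobable-actions caveat in the final comment is usually addressed with inverse propensity scoring (IPS): each matched reward is reweighted by the inverse of the probability the logging policy assigned to the logged action. A minimal sketch, assuming a uniform-random logging policy over two actions (`ips_estimate` and the toy data below are illustrative, not part of the notebook's API):

```python
import numpy as np

def ips_estimate(trace, target_actions, logging_propensity=0.5):
    """Inverse propensity scoring value estimate of a target policy.

    trace: list of (logged_action, reward) pairs collected under the
    logging policy; logging_propensity: probability the logging policy
    assigned to each action (0.5 for a uniform-random policy over 2 actions).
    """
    total = 0.0
    for (logged_action, reward), target_action in zip(trace, target_actions):
        if logged_action == target_action:
            # Matched steps are up-weighted to undo the logging policy's bias.
            total += reward / logging_propensity
    return total / len(trace)

# Toy check: uniform logging, target policy always picks action 1,
# and action 1 always pays reward 1.0 -> the estimate should be near 1.0.
rng = np.random.default_rng(0)
actions = rng.integers(0, 2, size=10000)
trace = [(int(a), 1.0 if a == 1 else 0.0) for a in actions]
est = ips_estimate(trace, [1] * len(trace))
print(round(est, 1))  # 1.0
```

For the UCB logging policy used above, the propensities are not known in closed form, which is exactly why the naive matched-action estimate is biased.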
| github_jupyter |
```
import time
import datetime
import numpy as np
import pandas as pd
import datatable as dt
pd.set_option("display.max_columns", None, "display.max_rows", None)
import tensorflow as tf
from tensorflow import keras
import tensorflow_addons as tfa
import tensorflow_probability as tfp
from tensorflow.keras import layers, Model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.callbacks import LearningRateScheduler, ReduceLROnPlateau
from tensorflow.keras.layers import Input, GRU, Dense, Dropout, LSTM, Concatenate, Add, LeakyReLU, Reshape, Flatten, TimeDistributed
from sklearn import metrics
from sklearn.preprocessing import OneHotEncoder
import seaborn as sns
import matplotlib.pyplot as plt
%%time
train = dt.fread('../input/g-research-crypto-forecasting/train.csv').to_pandas()
def reindex(df):
df = df.reindex(range(ind[0],ind[-1]+60,60),method='nearest')
df = df.fillna(method="ffill").fillna(method="bfill")
return df
def preprocessing(df, batch_size=512):
df = df.set_index('timestamp')
df = df.sort_index()
global ind
ind = df.index.unique()
df = df.groupby('Asset_ID').apply(reindex).reset_index(0, drop=True).sort_index()
del ind
df = df.reset_index()
df = df.sort_values(by=['timestamp', 'Asset_ID'])
df['date'] = pd.to_datetime(df.timestamp, unit='s')
df.drop(['timestamp'], axis=1, inplace=True)
print("="*30, ">", "Stage 1 finished")
df['min_'] = (pd.DatetimeIndex(df['date']).minute.astype('uint8')/60).astype('float16')
df['hr_'] = (pd.DatetimeIndex(df['date']).hour.astype('uint8')/24).astype('float16')
df['day_'] = (pd.DatetimeIndex(df['date']).day.astype('uint8')/30).astype('float16')
df['day_in_week'] = pd.DatetimeIndex(df['date']).day_of_week.astype('uint8')
df['week'] = (pd.DatetimeIndex(df['date']).week.astype('uint8')/54).astype('float16')
df['is_month_start'] = pd.DatetimeIndex(df['date']).is_month_start.astype(int).astype('uint8')
df['is_month_end'] = pd.DatetimeIndex(df['date']).is_month_end.astype(int).astype('uint8')
df['is_quarter_start'] = pd.DatetimeIndex(df['date']).is_quarter_start.astype(int).astype('uint8')
df['is_quarter_end'] = pd.DatetimeIndex(df['date']).is_quarter_end.astype(int).astype('uint8')
df['is_year_start'] = pd.DatetimeIndex(df['date']).is_year_start.astype(int).astype('uint8')
df['is_year_end'] = pd.DatetimeIndex(df['date']).is_year_end.astype(int).astype('uint8')
float_cols = ['Count', 'Open', 'High', 'Low', 'Close', 'Volume', 'VWAP', 'min_', 'hr_', 'day_', 'week']
df[float_cols] = df[float_cols].astype('float16')
time.sleep(10)
print("="*30, ">", "Stage 2 finished")
ohe_asset = OneHotEncoder(handle_unknown='ignore')
ohe_asset.fit(df[['Asset_ID']])
ohe_day_in_week = OneHotEncoder(handle_unknown='ignore')
ohe_day_in_week.fit(df[['day_in_week']])
df = pd.concat([df, pd.DataFrame(ohe_asset.transform(df[['Asset_ID']]).toarray(), columns = ohe_asset.get_feature_names()).astype('uint8')], axis=1)
time.sleep(10)
df = pd.concat([df, pd.DataFrame(ohe_day_in_week.transform(df[['day_in_week']]).toarray(), columns = ohe_day_in_week.get_feature_names()).astype('uint8')], axis=1)
time.sleep(10)
df = df.drop(['Asset_ID', 'day_in_week'], axis=1)
print("="*30, ">", "Stage 3 finished")
df = df.fillna(-0.999)
time.sleep(10)
df = df.replace(-np.inf, -0.999)
time.sleep(10)
df = df.replace(np.inf, 100000)
time.sleep(10)
df.iloc[:, :7] = np.log1p(df.iloc[:, :7]).astype('float16')
print("="*30, ">", "Stage 4 finished")
train_df = df[(df['date'] < datetime.datetime(2021, 8, 15))]
valid_df = df[df['date'] >= datetime.datetime(2021, 8, 15)]
del df
time.sleep(10)
train_df.drop(['date'], axis=1, inplace=True)
valid_df.drop(['date'], axis=1, inplace=True)
print("="*30, ">", "Stage 5 finished")
y_train = train_df['Target'].values
X_train = train_df.drop('Target', axis=1)
del train_df
time.sleep(10)
y_valid = valid_df['Target'].values
X_valid = valid_df.drop('Target', axis=1)
del valid_df
time.sleep(10)
print("="*30, ">", "Stage 6 finished")
X_train = X_train.values.reshape(-1, 14, X_train.shape[-1])
y_train = y_train.reshape(-1, 14,)
X_valid = X_valid.values.reshape(-1, 14, X_valid.shape[-1])
y_valid = y_valid.reshape(-1, 14,)
print("="*30, ">", "Stage 7 finished")
replace_na = np.float16(np.log1p(-0.999))
X_train = np.nan_to_num(X_train, nan=replace_na)
time.sleep(10)
y_train = np.nan_to_num(y_train, nan=replace_na)
X_valid = np.nan_to_num(X_valid, nan=replace_na)
time.sleep(10)
y_valid = np.nan_to_num(y_valid, nan=replace_na)
print("="*30, ">", "Stage 8 finished")
AUTOTUNE = tf.data.AUTOTUNE
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).batch(batch_size).cache().prefetch(AUTOTUNE)
valid_ds = tf.data.Dataset.from_tensor_slices((X_valid, y_valid)).batch(batch_size).cache().prefetch(AUTOTUNE)
del X_train, y_train, X_valid, y_valid
print("="*30, ">", "Stage 9 finished")
return train_ds, valid_ds
BATCH_SIZE = 512
EPOCHS = 30
LR = 1e-3
train_ds, valid_ds = preprocessing(train, batch_size=BATCH_SIZE)
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)
BATCH_SIZE = tpu_strategy.num_replicas_in_sync * BATCH_SIZE
print("Running on TPU:", tpu.master())
print(f"Batch Size: {BATCH_SIZE}")
except ValueError:
strategy = tf.distribute.get_strategy()
BATCH_SIZE = BATCH_SIZE
print(f"Running on {strategy.num_replicas_in_sync} replicas")
print(f"Batch Size: {BATCH_SIZE}")
def embedding_block(inputs, list_layer_units):
x = TimeDistributed(Dense(units=list_layer_units[0]))(inputs)
x = LeakyReLU(alpha=0.3)(x)
for units in list_layer_units[1:]:
x = TimeDistributed(Dense(units=units))(x)
x = LeakyReLU(alpha=0.3)(x)
return x
def dense_block(inputs, list_layer_units):
x = Dense(units=list_layer_units[0])(inputs)
x = LeakyReLU(alpha=0.3)(x)
for units in list_layer_units[1:]:
x = Dense(units=units)(x)
x = LeakyReLU(alpha=0.3)(x)
return x
def GRU_block(inputs, list_layer_units):
x = GRU(units=list_layer_units[0], return_sequences=True)(inputs)
for units in list_layer_units[1:]:
x = GRU(units=units, return_sequences=True)(x)
return x
def my_model():
inputs = Input(shape=(14, 38))
# embedding block
embeddings = embedding_block(inputs, [256, 128, 128])
# GRUs block
grus = GRU_block(embeddings, [756, 512, 256])
# concat
concat = Concatenate()([embeddings, grus])
# Dense layers
post_embeddings = dense_block(concat, [128, 128])
# output
#dense = Dense(units=128)(post_embeddings)
outputs = Dense(units=1)(post_embeddings)
model = Model(inputs=inputs, outputs=outputs)
return model
model = my_model()
model.summary()
dot_img_file = '/tmp/model_1.png'
tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True, show_layer_names=True)
device = tf.test.gpu_device_name()
device
def optimizer():
optimizer = tf.keras.optimizers.Adam(learning_rate=LR)
return optimizer
def loss_object():
loss_object = tf.keras.losses.MeanAbsoluteError(name='mae')
return loss_object
def train_rmse_metric():
train_rmse_metric = tf.keras.metrics.RootMeanSquaredError()
return train_rmse_metric
def valid_rmse_metric():
valid_rmse_metric = tf.keras.metrics.RootMeanSquaredError()
return valid_rmse_metric
optimizer = optimizer()
loss_object = loss_object()
train_rmse_metric = train_rmse_metric()
valid_rmse_metric = valid_rmse_metric()
@tf.function
def train_step(x, y):
with tf.device(device_name=device):
with tf.GradientTape() as tape:
preds = model(x)
loss_value = loss_object(y_true=y, y_pred=preds)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_rmse_metric.update_state(y, preds)
return loss_value
@tf.function
def validation_step(x, y):
with tf.device(device_name=device):
preds = model(x)
loss_value = loss_object(y, preds)
valid_rmse_metric.update_state(y, preds)
return loss_value
import time
from tqdm import tqdm
for epoch in range(2):
print(f"Start of epoch {epoch+1}")
start_time = time.time()
train_loss_value = 0
for step, (x_batch_train, y_batch_train) in tqdm(enumerate(train_ds)):
train_loss_value += train_step(x_batch_train, y_batch_train)
train_loss_value_mean = train_loss_value/len(train_ds)
print(f"Train loss = {train_loss_value_mean}, Train RMSE = {train_rmse_metric.result()}")
train_rmse_metric.reset_states()
valid_loss_value = 0
for step, (x_batch_valid, y_batch_valid) in enumerate(valid_ds):
valid_loss_value += validation_step(x_batch_valid, y_batch_valid)
valid_loss_value_mean = valid_loss_value/len(valid_ds)
print(f"Valid loss = {valid_loss_value_mean}, Valid RMSE = {valid_rmse_metric.result()}")
valid_rmse_metric.reset_states()
print(f"Time taken = {time.time() - start_time}")
```
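The Stage 7 reshape to `(-1, 14, n_features)` silently assumes the rows are sorted by `(timestamp, Asset_ID)`, so that each consecutive block of 14 rows is one timestamp across the 14 assets (the sort in Stage 1 provides this). A toy illustration of that assumption, with a synthetic array in place of the real features:

```python
import numpy as np

n_assets, n_timestamps, n_features = 14, 3, 2
# Rows sorted by (timestamp, asset): the asset index varies fastest.
rows = np.array([[t, a] for t in range(n_timestamps) for a in range(n_assets)],
                dtype=float)

# Same reshape as Stage 7: consecutive groups of 14 rows become one timestep.
X = rows.reshape(-1, n_assets, n_features)
print(X.shape)            # (3, 14, 2)
print(X[1, 0], X[1, 13])  # one timestamp, first and last asset
```

If the sort order were broken, the reshape would silently mix assets across timestamps, so it is worth verifying before training.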
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
# For comparison
import csr2d.core2
```
# 3D CSR Potentials
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib notebook
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
#sigma_z = 40e-6
#sigma_x = 134e-6
#rho = 1538.
#gamma = 58708.
sigma_z = 10e-6
sigma_x = 10e-6
rho = 1.
gamma = 500.
beta = np.sqrt(1 - 1 / gamma ** 2)
beta2 = 1-1/gamma**2
```
# alpha
For convenience we will use the notation
$\xi \rightarrow z$
$\chi \rightarrow x$
$\zeta \rightarrow y$
Then
$z = \alpha - \frac{\beta}{2}\sqrt{x^2+y^2+4(1+x)\sin^2\alpha}$
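Given $(x, y, z)$, this relation defines $\alpha$ only implicitly, so it has to be inverted numerically. A naive bisection sketch of that inversion (for illustration only — `alpha_exact` from `csr3d.core` is the reference implementation, and the bracketing assumptions here are mine):

```python
import math

def alpha_root(x, y, z, gamma, tol=1e-12):
    """Invert z = a - (beta/2)*sqrt(x^2 + y^2 + 4*(1+x)*sin(a)^2) for a.

    Naive bisection sketch; assumes z > 0 so the root is bracketed
    by [0, hi]. The csr3d library's alpha_exact is the reference.
    """
    beta = math.sqrt(1.0 - 1.0 / gamma**2)

    def f(a):
        return a - 0.5 * beta * math.sqrt(
            x**2 + y**2 + 4.0 * (1.0 + x) * math.sin(a)**2) - z

    # Expand the upper bracket until f changes sign, then bisect.
    lo, hi = 0.0, 1e-6
    while f(hi) < 0.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Self-consistency check: the recovered alpha satisfies the relation.
gamma = 500.0
beta = math.sqrt(1.0 - 1.0 / gamma**2)
x, y, z = 2e-5, 1e-5, 3e-5
a = alpha_root(x, y, z, gamma)
residual = a - 0.5 * beta * math.sqrt(
    x**2 + y**2 + 4.0 * (1.0 + x) * math.sin(a)**2) - z
print(abs(residual) < 1e-10)  # True
```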
```
from csr3d.core import alpha_exact, alpha, old_alpha, alpha_where_z_equals_zero, psi_s
xmax = 40e-6 #1/gamma**2
xmin = -xmax
xptp = xmax-xmin
ymax = 40e-6 #1/gamma**2
ymin = -ymax
yptp = ymax-ymin
zmax = 40e-6 #1/gamma**2
zmin = -zmax
zptp = zmax-zmin
fac = 2
nx = int(32*fac)
ny = int(16*fac)
nz = int(64*fac)
dx = xptp/(nx-1)
dy = yptp/(ny-1)
dz = zptp/(nz-1)
xvec = np.linspace(xmin, xmax, nx)
yvec = np.linspace(ymin, ymax, ny)
zvec = np.linspace(zmin, zmax, nz)
X, Y, Z = np.meshgrid( xvec, yvec, zvec, indexing='ij')
xmax, ymax, zmax
```
# alpha
```
%%timeit
old_alpha(X, Y, Z, gamma)
%%timeit
alpha(X, Y, Z, gamma)
# This will be slow
A0 = alpha_exact(X, Y, Z, gamma)
plt.imshow(A0[:,1,:])
A1 = alpha(X, Y, Z, gamma)
plt.imshow(A1[:,1,:])
err = (A1-A0)/A0
np.abs(err).max()
plt.imshow(err[:,0,:])
fig, ax = plt.subplots()
ax2 = ax.twinx()
for y0 in yvec:
a0 = alpha_exact(xvec, y0, 0, gamma)
a1 = alpha_where_z_equals_zero(xvec, y0, gamma)
err = (a1-a0)/a0
ax.plot(xvec, a0)
ax.plot(xvec, a1, linestyle='--')
ax2.plot(xvec, abs(err)*1e6)
ax.set_title('alpha exact vs quadratic at z=0')
ax.set_xlabel('x')
ax.set_ylabel('alpha')
ax2.set_ylabel('relative error (10^-6)')
```
# psi_s
```
from csr3d.core import psi_s
%%timeit
# This is parallel
psi_s(X, Y, Z, gamma)
psi_s(0,0,0, gamma)
plt.plot(zvec, psi_s(0, 0, zvec, gamma))
plt.plot(xvec, psi_s(xvec, 0, 0, gamma))
plt.plot(yvec, psi_s(0, yvec+1e-6, 0, gamma))
# Compare with CSR2D
def csr2d_psi_s(*a, **k):
return csr2d.core2.psi_s(*a, **k) + csr2d.core2.psi_s_SC(*a, **k)/gamma**2
# x line
fig, ax = plt.subplots()
ax.plot(xvec, psi_s(xvec,0,0,gamma), label='CSR3D', marker='x')
#ax.plot(xvec, csr2d.core2.psi_s(0, xvec, beta), marker='.', label='CSR2D')
ax.plot(xvec,csr2d_psi_s(0, xvec, beta), marker='.', label='CSR2D')
ax.set_title('x line at y=0, z=0')
ax.set_ylabel('psi_s')
ax.set_xlabel('x')
ax.legend()
# y line
fig, ax = plt.subplots()
ax.plot(yvec, psi_s(0,yvec,0,gamma))
ax.set_title('y line at x=0, z=0')
ax.set_ylabel('psi_s')
ax.set_xlabel('y')
#ax.legend()
# z line
fig, ax = plt.subplots()
ax.plot(zvec, psi_s(0,0,zvec,gamma), marker='x', label='CSR3D')
ax.plot(zvec, csr2d.core2.psi_s(zvec, 0,beta), marker='.', label='CSR2D')
ax.set_title('z line at x=0, y=0')
ax.set_ylabel('psi_s')
ax.set_xlabel('z')
ax.legend()
```
# psi_s mesh
```
from csr3d.wake import green_mesh
%%time
G = green_mesh((nx, ny, nz), (dx, dy, dz), rho=rho, gamma=gamma, component='s')
%%time
Gs = green_mesh((nx, ny, nz), (dx, dy, dz/100), rho=rho, gamma=gamma, component='s')
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(Gs[:,ny//2,:], origin='lower', aspect='equal')
ax.set_title(r'$\psi_s$')
```
# psi_x
```
from csr3d.wake import green_mesh
from csr3d.core import psi_x, psi_x0
# Compare with CSR2D at y=0
import csr2d.core2
%%timeit
psi_x(X, Y, Z, gamma)
%%timeit
# This is parallelized
psi_x0(X, Y, Z, gamma, dx, dy, dz)
# Compare with CSR2D
def csr2d_psi_x(*a, **k):
return csr2d.core2.psi_x(*a, **k) + csr2d.core2.psi_x_SC(*a, **k)/gamma**2
# x line
fig, ax = plt.subplots()
ax.plot(xvec, psi_x(xvec,0,0,gamma), label='CSR3D', marker='x')
ax.plot(xvec, csr2d_psi_x(0, xvec,beta), marker='.', label='CSR2D')
ax.set_title('x line at y=0, z=0')
ax.set_ylabel('psi_x')
ax.set_xlabel('x')
ax.legend()
# y line
fig, ax = plt.subplots()
ax.plot(yvec, psi_x(0,yvec,0,gamma))
ax.set_title('y line at x=0, z=0')
ax.set_ylabel('psi_x')
ax.set_xlabel('y')
#ax.legend()
# z line
fig, ax = plt.subplots()
ax.plot(zvec, psi_x0(0,0,zvec,gamma,1e-6, 1e-6, 1e-6), marker='x', label='CSR3D')
ax.plot(zvec, csr2d.core2.psi_x0(zvec, 0,beta, 1e-6), marker='.', label='CSR2D')
ax.set_title('z line at x=0, y=0')
ax.set_ylabel('psi_x')
ax.set_xlabel('z')
ax.legend()
%%time
Gx = green_mesh((nx, ny, nz), (dx, dy, dz), rho=rho, gamma=gamma, component='x')
#X2, Y2, Z2 = tuple(meshes)
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(Gx[:,ny-1,:], origin='lower', aspect='equal')
ax.set_title(r'$\psi_x$ at y=0')
ax.set_xlabel('z index')
ax.set_ylabel('x index')
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(Gx[nx-1,:,:], origin='lower', aspect='equal')
ax.set_title(r'$\psi_x$ at x=0')
ax.set_xlabel('z index')
ax.set_ylabel('y index')
plt.plot(Gx[nx-1,:,nz])
plt.plot(Gx[nx-1-1,:,nz])
plt.plot(Gx[nx-1+1,:,nz])
M = Gx[:,:,nz].T
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(M, origin='lower', aspect='equal')
ax.set_title(r'$\psi_x$ at z=0')
ax.set_xlabel('x index')
ax.set_ylabel('y index')
```
# psi_y
```
from csr3d.core import psi_y, psi_y0
%%timeit
R2 = psi_y(X, Y, Z, gamma)
%%timeit
R2 = psi_y0(X, Y, Z, gamma, dx, dy, dz)
%%time
Gy = green_mesh((nx, ny, nz), (dx/10, dy/10, dz/100), rho=rho, gamma=gamma, component='y')
#X2, Y2, Z2 = tuple(meshes)
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(Gy[:,ny-1 +1,:], origin='lower', aspect='equal')
ax.set_title(r'$\psi_y$ at y=+')
ax.set_xlabel('z index')
ax.set_ylabel('x index')
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(Gy[nx-1,:,:], origin='lower', aspect='equal')
ax.set_title(r'$\psi_y$ at x=0')
ax.set_xlabel('z index')
ax.set_ylabel('y index')
M = Gy[:,:,nz].T
fig, ax = plt.subplots(figsize=(12,8))
ax.imshow(M, origin='lower', aspect='equal')
ax.set_title(r'$\psi_y$ at z=+')
ax.set_xlabel('x index')
ax.set_ylabel('y index')
# x line
fig, ax = plt.subplots()
ax.plot(xvec, psi_y(xvec,0,0,gamma), label='CSR3D', marker='x')
ax.set_title('x line at y=0, z=0')
ax.set_ylabel('psi_y')
ax.set_xlabel('x')
# y line
fig, ax = plt.subplots()
ax.plot(yvec, psi_y(0,yvec,0,gamma))
ax.set_title('y line at x=0, z=0')
ax.set_ylabel('psi_y')
ax.set_xlabel('y')
# z line
fig, ax = plt.subplots()
ax.plot(zvec, psi_y0(0,0,zvec,gamma,1e-6, 1e-6, 1e-6), marker='x', label='CSR3D')
ax.set_title('z line at x=0, y=0')
ax.set_ylabel('psi_y')
ax.set_xlabel('z')
```
| github_jupyter |
```
import baryonification as bfc
from scipy.interpolate import splrep, splev
from scipy.integrate import quad
import matplotlib.pyplot as plt
import numpy as np
def cvir_fct(mvir):
"""
    Concentrations from Dutton & Maccio (2014),
    c200 (200 times RHOC).
    Assumes a Planck cosmology.
"""
A = 1.025
B = 0.097
return 10.0**A*(mvir/1.0e12)**(-B)
def DeltaSigmas_from_density_profile(rbin,dens):
dbin = rbin
Sig_DMO = []
Sig_DMB = []
avSig_DMO = []
avSig_DMB = []
densDMO_tck = splrep(rbin,dens['DMO'])
densDMB_tck = splrep(rbin,dens['DMB'])
for i in range(len(dbin)):
itgDMO = lambda zz: splev((zz**2.0+dbin[i]**2.0)**0.5,densDMO_tck,ext=0)
Sig_DMO += [2.0*quad(itgDMO,0,max(dbin),limit=200)[0]]
itgDMB = lambda zz: splev((zz**2.0+dbin[i]**2.0)**0.5,densDMB_tck,ext=0)
Sig_DMB += [2.0*quad(itgDMB,min(dbin),max(dbin),limit=200)[0]]
Sig_DMO = np.array(Sig_DMO)
Sig_DMB = np.array(Sig_DMB)
cumSigDMO_tck = splrep(dbin, Sig_DMO)
cumSigDMB_tck = splrep(dbin, Sig_DMB)
for i in range(len(dbin)):
itgDMO = lambda dd: dd*splev(dd,cumSigDMO_tck)
avSig_DMO += [quad(itgDMO,0,dbin[i])[0]*2.0/dbin[i]**2.0]
itgDMB = lambda dd: dd*splev(dd,cumSigDMB_tck)
avSig_DMB += [quad(itgDMB,0,dbin[i])[0]*2.0/dbin[i]**2.0]
avSig_DMO = np.array(avSig_DMO)
avSig_DMB = np.array(avSig_DMB)
deltaSigmaDMO = avSig_DMO-Sig_DMO #(Msun/h) / Mpc^2
deltaSigmaDMB = avSig_DMB-Sig_DMB
return deltaSigmaDMB, deltaSigmaDMO, deltaSigmaDMB / deltaSigmaDMO
def plot_ratio(rbin, ratio, label):
plt.semilogx(rbin, ratio, label=label)
plt.axhline(1, color='k')
plt.xlabel('r [Mpc/h]')
plt.ylabel(r'$\Delta \Sigma_{baryons} / \Delta \Sigma_{DM}$')
plt.ylim([0.75,1.1])
plt.xlim([0.05,20])
par = bfc.par()
par.baryon.eta_tot = 0.32
par.baryon.eta_cga = 0.6
par.files.transfct = '/Users/fardila/Documents/GitHub/baryonification/baryonification/files/CDM_PLANCK_tk.dat'
N_rbin = 100
rbin = np.logspace(np.log10(0.001),np.log10(50),N_rbin,base=10)
#halo params
Mv=1e14
cv=cvir_fct(Mv)
#baryon params
Mc = 6.6e13
mu = 0.21
thej = 4.0
#2h term
vc_r, vc_m, vc_bias, vc_corr = bfc.cosmo(par)
bias_tck = splrep(vc_m, vc_bias, s=0)
corr_tck = splrep(vc_r, vc_corr, s=0)
cosmo_bias = splev(Mv,bias_tck)
cosmo_corr = splev(rbin,corr_tck)
```
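In symbols, `DeltaSigmas_from_density_profile` above projects each 3D density profile and takes the usual weak-lensing excess surface density (my transcription of what the code computes, with $z_\mathrm{max}$ the largest tabulated radius and units of (Msun/h)/Mpc² as in the code comment):

```latex
\Sigma(R) = 2 \int_0^{z_\mathrm{max}} \rho\!\left(\sqrt{R^2 + z^2}\,\right) dz,
\qquad
\bar{\Sigma}(<R) = \frac{2}{R^2} \int_0^{R} R'\, \Sigma(R')\, dR',
\qquad
\Delta\Sigma(R) = \bar{\Sigma}(<R) - \Sigma(R)
```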
# $\mu$
```
#baryon params
Mc = 6.6e13
# mu = 0.21
thej = 4.0
mus = [0.2,0.4,0.6]
param_name = 'mu'
labels = ['0.2','0.4','0.6']
for mu, label in zip(mus,labels):
frac, dens, mass = bfc.profiles(rbin,Mv,cv,Mc,mu,thej,cosmo_corr,cosmo_bias,par)
deltaSigmaDMB, deltaSigmaDMO, ratio = DeltaSigmas_from_density_profile(rbin,dens)
plot_ratio(rbin, ratio, '{0} = {1}'.format(param_name, label))
plt.legend()
plt.show()
```
# $M_c$
```
#baryon params
# Mc = 6.6e13
mu = 0.21
thej = 4.0
Mcs = [1e13,5e13,1e14]
param_name = 'Mc'
labels = ['1e13','5e13','1e14']
for Mc, label in zip(Mcs,labels):
frac, dens, mass = bfc.profiles(rbin,Mv,cv,Mc,mu,thej,cosmo_corr,cosmo_bias,par)
deltaSigmaDMB, deltaSigmaDMO, ratio = DeltaSigmas_from_density_profile(rbin,dens)
plot_ratio(rbin, ratio, '{0} = {1}'.format(param_name, label))
plt.legend()
plt.show()
```
# $\theta_{ej}$
```
#baryon params
Mc = 6.6e13
mu = 0.21
# thej = 4.0
thejs = [2,4,6]
param_name = r'$\theta_{ej}$'
labels = ['2','4','6']
for thej, label in zip(thejs,labels):
frac, dens, mass = bfc.profiles(rbin,Mv,cv,Mc,mu,thej,cosmo_corr,cosmo_bias,par)
deltaSigmaDMB, deltaSigmaDMO, ratio = DeltaSigmas_from_density_profile(rbin,dens)
plot_ratio(rbin, ratio, '{0} = {1}'.format(param_name, label))
plt.legend()
plt.show()
```
# $\eta_{tot}$
```
#baryon params
Mc = 6.6e13
mu = 0.21
thej = 4.0
par.baryon.eta_tot = 0.32
par.baryon.eta_cga = 0.6
eta_tots = [0.1,0.3,0.5]
param_name = r'$\eta_{tot}$'
labels = ['0.1','0.3','0.5']
for eta_tot, label in zip(eta_tots,labels):
par.baryon.eta_tot = eta_tot
frac, dens, mass = bfc.profiles(rbin,Mv,cv,Mc,mu,thej,cosmo_corr,cosmo_bias,par)
deltaSigmaDMB, deltaSigmaDMO, ratio = DeltaSigmas_from_density_profile(rbin,dens)
plot_ratio(rbin, ratio, '{0} = {1}'.format(param_name, label))
plt.legend()
plt.show()
```
# $\eta_{cga}$
```
#baryon params
Mc = 6.6e13
mu = 0.21
thej = 4.0
par.baryon.eta_tot = 0.32
par.baryon.eta_cga = 0.6
eta_cgas = [0.4,0.6,0.8]
param_name = r'$\eta_{cga}$'
labels = ['0.4','0.6','0.8']
for eta_cga, label in zip(eta_cgas,labels):
par.baryon.eta_cga = eta_cga
frac, dens, mass = bfc.profiles(rbin,Mv,cv,Mc,mu,thej,cosmo_corr,cosmo_bias,par)
deltaSigmaDMB, deltaSigmaDMO, ratio = DeltaSigmas_from_density_profile(rbin,dens)
plot_ratio(rbin, ratio, '{0} = {1}'.format(param_name, label))
plt.legend()
plt.show()
```
| github_jupyter |
# Machine Vision Applications
Examples of Machine Vision Requirements
* Visualize droplets ranging in size from 10 to 100 microns.
* Visualize a field with 1 million drops
* Classify 10 micron particles
Questions
* Are the particles in motion?
* How much time is available to capture the image?
* Do we need a CFA or could a monochrome camera with filters be used?
* Depth of field? Are the particles in a plane?
* How are the particles illuminated?
* What working distances are required?
* Any mounting or geometrical constraints?
```
import math
d = 0.05 # mm
object_area = d*d*1000000
print("Field size =", object_area, "mm**2")
print("Field width = ", math.sqrt(object_area), "mm")
```
## Lenses
### Fixed focal length lenses
* [Arducam](https://www.arducam.com/best-m12-c-cs-mount-lenses/)
* [Raspberry Pi 16mm](https://www.adafruit.com/product/4562)
A 16mm lens on a 2/3" sensor has a 30 degree angle of view and a typical minimum object distance of 20cm, which gives an object field about 11cm in diameter. This is too large for the application.

A 2/3" sensor has a diagonal of 7.85mm. An object field of 2cm corresponds to a magnification of 0.4x. A magnification range of 1.0x to 0.3x would seem about right.
### Macro and Variable Magnification Lenses
* [Edmund Infiniprobe 0-3.2x](https://www.edmundoptics.com/p/infiniprobe-s-32-video-microscope-0-32x/15606/)
### Telecentric Lenses
Tutorials and general information:
* [EO Imaging Lab 2.2: Telecentricity](https://www.youtube.com/watch?v=O-NeZcmYyJ4&ab_channel=EdmundOptics) Edmund Optics video explaining telecentricity and applications to machine vision.
* [Thorlabs Tutorial](https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=10762)
Illumination:
Sources:
* [Edmund Optical](https://www.edmundoptics.com/c/telecentric-lenses/1003/)
* [B&H](https://www.bhphotovideo.com/c/buy/telecentric-lenses/ci/34857/N/3564657627)
Sample Calculations
```
import numpy as np
import math
# Sony IMX477 sensor
pixel_size = 1.55 # microns
h_pixels = 4056.
v_pixels = 3040.
h_mm = h_pixels*pixel_size/1000.
v_mm = v_pixels*pixel_size/1000.
d_mm = round(math.sqrt(h_mm**2 + v_mm**2), 3)
print(h_mm, v_mm, d_mm)
# Field of View
h_fov = 15.0
v_fov = (v_pixels/h_pixels)*h_fov
d_fov = round(math.sqrt(h_fov**2 + v_fov**2), 3)
print(h_fov, v_fov, d_fov)
# magnification
mag = round(d_mm/d_fov, 2)
print(mag)
```
* https://www.edmundoptics.com/p/05x-23-c-mount-platinumtltrade-telecentric-lens/17562/

33.5mm tube diameter
50.0mm max external diameter
## Sensing Design
### Raspberry Pi HQ Camera (Sony IMX477)
* [Waveshare](https://www.waveshare.com/raspberry-pi-hq-camera.htm)

### 30mm Optical Cage
* [Thor Labs 30mm Cage Components](https://www.thorlabs.com/navigation.cfm?guide_id=2004)
3D Printed Mounting plates
* https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=2273#2869
* https://www.thorlabs.com/thorproduct.cfm?partnumber=CBB1
### Arducam
* [Sony IMX477](https://www.arducam.com/product-category/cameras-for-raspberrypi/raspberry-pi-camera-raspistill-raspvivid/raspberry-pi-high-quality-12mp-imx477-camera/) based camera modules.
### Picamera
| github_jupyter |
## NSE-TATAGLOBAL DATASETS
## Stock Market Prediction And Forecasting Using Stacked LSTM
# LGMVIP Task-2|| Data Science
### To build the stock price prediction model, we will use the NSE TATA GLOBAL dataset: prices for Tata Global Beverages Limited from the National Stock Exchange of India.
## Import Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import io
import requests
import datetime
```
## Import Datasets
```
url="https://raw.githubusercontent.com/mwitiderrick/stockprice/master/NSE-TATAGLOBAL.csv"
df=pd.read_csv(url)
df.head()
df1=df.reset_index()['Open']
df1
plt.plot(df1)
```
## LSTMs are sensitive to the scale of the data, so we apply a MinMax scaler
```
from sklearn.preprocessing import MinMaxScaler
scaler=MinMaxScaler(feature_range=(0,1))
df1=scaler.fit_transform(np.array(df1).reshape(-1,1))
print(df1)
```
## Splitting the dataset into train and test sets
```
train_size=int(len(df1)*0.75)
test_size=len(df1)-train_size
train_data,test_data=df1[0:train_size,:],df1[train_size:len(df1),:1]
train_size,test_size
train_data,test_data
```
## convert an array of values into a dataset matrix
```
def create_dataset(dataset, time_step=1):
train_X, train_Y = [], []
for i in range(len(dataset)-time_step-1):
        a = dataset[i:(i+time_step), 0]  # window of time_step consecutive values
train_X.append(a)
train_Y.append(dataset[i + time_step, 0])
return numpy.array(train_X), numpy.array(train_Y)
```
### reshape into X=t,t+1,t+2,t+3 and Y=t+4
```
import numpy
time_step = 100
X_train, y_train = create_dataset(train_data, time_step)
X_test, ytest = create_dataset(test_data, time_step)
print(X_train.shape), print(y_train.shape)
```
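To make the windowing concrete, here is the same sliding-window logic restated as a standalone snippet and run on a toy series:

```python
import numpy as np

def create_dataset(dataset, time_step=1):
    # Same sliding-window logic as above, restated so this demo runs alone.
    train_X, train_Y = [], []
    for i in range(len(dataset) - time_step - 1):
        train_X.append(dataset[i:(i + time_step), 0])   # the window
        train_Y.append(dataset[i + time_step, 0])       # the next value
    return np.array(train_X), np.array(train_Y)

toy = np.arange(10, dtype=float).reshape(-1, 1)  # values 0..9 as one column
X, y = create_dataset(toy, time_step=3)
print(X[0], y[0])         # [0. 1. 2.] 3.0
print(X.shape, y.shape)   # (6, 3) (6,)
```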
## reshape input to be [samples, time steps, features] which is required for LSTM
```
X_train =X_train.reshape(X_train.shape[0],X_train.shape[1] , 1)
X_test = X_test.reshape(X_test.shape[0],X_test.shape[1] , 1)
```
## Create the Stacked LSTM model
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
model=Sequential()
model.add(LSTM(50,return_sequences=True,input_shape=(100,1)))
model.add(LSTM(50,return_sequences=True))
model.add(LSTM(50))
model.add(Dense(1))
model.compile(loss='mean_squared_error',optimizer='adam')
model.summary()
model.fit(X_train,y_train,validation_data=(X_test,ytest),epochs=100,batch_size=64,verbose=1)
import tensorflow as tf
tf.__version__
```
## Lets Do the prediction and check performance metrics
```
train_predict=model.predict(X_train)
test_predict=model.predict(X_test)
train_predict=scaler.inverse_transform(train_predict)
test_predict=scaler.inverse_transform(test_predict)
```
## Calculate RMSE performance metrics
```
import math
from sklearn.metrics import mean_squared_error
math.sqrt(mean_squared_error(y_train,train_predict))
```
### Test Data RMSE
```
math.sqrt(mean_squared_error(ytest,test_predict))
```
### shift train predictions for plotting
```
look_back=100
trainPredictPlot = numpy.empty_like(df1)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(train_predict)+look_back, :] = train_predict
```
### shift test predictions for plotting
```
testPredictPlot = numpy.empty_like(df1)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(train_predict)+(look_back*2)+1:len(df1)-1, :] = test_predict
```
### plot baseline and predictions
```
plt.plot(scaler.inverse_transform(df1),color='blue')
plt.show()
plt.plot(trainPredictPlot,color='red')
plt.show()
plt.plot(testPredictPlot,color='green')
plt.show()
plt.plot(trainPredictPlot,color='red')
plt.plot(testPredictPlot,color='green')
plt.show()
plt.plot(scaler.inverse_transform(df1),color='blue')
plt.plot(trainPredictPlot,color='red')
plt.plot(testPredictPlot,color='green')
plt.show()
len(test_data)
x_input=test_data[341:].reshape(1,-1)
x_input.shape
model.save("saved_model.h5")
```
| github_jupyter |
# Neural Networks - Part 2
2016-09-16, Josh Montague
## The Plan
- Quick review of [Part 1](https://github.com/DrSkippy/Data-Science-45min-Intros/tree/master/neural-networks-101)
- The library stack (Keras, Theano, Numpy, oh my!)
- Examples!
- Classification (Iris)
- Classification (MNIST)
- Regression (housing)
## Review
Back in [Part 1, we looked at some history, motivation, and a simple (if only *mostly*-working) implementation of a neural network](https://github.com/DrSkippy/Data-Science-45min-Intros/tree/master/neural-networks-101).
<img src="img/NN-2.jpeg">
Recall the short version of how this worked:
- there is an array of input "nodes" (one per feature)
- the input nodes are "fully-connected" to an arbitrary number of nodes in the next, "hidden layer"
- the value of each of the hidden nodes is computed by taking the inner product of the previous layer with the weights matrix, and then passing that linear combination through an "activation function," $f(\omega^T x)$. We introduced the sigmoid as one possible activation (shown above)
- when there are many nodes in the hidden layer(s), the weights form a matrix; the weight connecting nodes $i$ and $j$ (in sequential layers) is matrix element $w_{ij}$
- "forward propagation" through the network is repeating this for each layer in the network until you get to your predicted output layer
- "backpropagation" is the process of updating the weight matrix elements $\omega_{ij}$ by distributing the prediction error backward through the network according to the prediction error and a chosen loss function
- forward and backward propagation are repeated a bunch of times until some convergence criteria is achieved
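The forward-propagation bullets above can be sketched in a few lines of numpy (the weights here are random placeholders rather than a trained network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(1234)

# One hidden layer: 4 input features -> 3 hidden nodes -> 1 output node.
W1 = rng.randn(4, 3)   # weight w_ij connects input node i to hidden node j
W2 = rng.randn(3, 1)

def forward(x):
    hidden = sigmoid(x @ W1)    # f(w^T x) at each hidden node
    return sigmoid(hidden @ W2) # repeat for the next layer

x = rng.randn(1, 4)             # a single 4-feature example
print(forward(x).shape)         # (1, 1)
```

Backpropagation would then adjust `W1` and `W2` using the gradient of a loss on this output, which is the part the libraries below handle for us.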
Remember that at least one of the reasons why this is an interesting set of techniques to explore is that they offer a very different way to think about features in a model. We don't have to specify all of the explicit model features in a data matrix, e.g. a column for $x$, a column for $x^2$, and $x*y, x*y^2$, and so on. We're defining a structure that allows for stacking of arbitrary, non-linear combinations of the predefined set of data matrix features; this can lead to a more expressive set of features. On the other hand, it also means many more degrees of freedom, which increases computational complexity and decreases interpretability.
Moving beyond our ``for`` loop in Part 1, we can look at some more reasonable approaches to using neural networks in practice! In particular, we'll look at [Keras](https://keras.io/), one of the active and growing libraries for building, training, and using neural networks in Python.
I think it'll be helpful to understand the stack of libraries and their roles, so hang tight while we run through that, first...
## Keras
Keras is a modular library with a ``scikit-learn``-inspired API. It lets you write readable Python code to define the structure of a neural network model, as well as (optionally) configure in detail how the model is trained and evaluated.
From the [docs](https://keras.io/):
> Keras is a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
>
> Use Keras if you need a deep learning library that:
>
> - allows for easy and fast prototyping (through total modularity, minimalism, and extensibility).
> - supports both convolutional networks and recurrent networks, as well as combinations of the two.
> - supports arbitrary connectivity schemes (including multi-input and multi-output training).
> - runs seamlessly on CPU and GPU.
There are many libraries for creating neural networks in Python. A quick google search includes:
- Keras
- TensorFlow
- PyBrain
- Blocks
- Lasagne
- Caffe
- nolearn
- PyML
- ... and I'm sure there are more
I read the docs for a few, read some reddit and StackOverflow discussions, and asked some practitioners that I know for their opinions. My takeaway: **if you're working in Python, familiar with ``scikit-learn``, and want a good on-ramp to neural networks, Keras is a good choice.**
For more discussion about the library and it's motivations, check out [the recent Quora Q&A](https://www.quora.com/session/Fran%C3%A7ois-Chollet/1) in which the lead developer gave some great insight into the design and plans for the library.
Most of this session will involve writing code with Keras. But Keras doesn't actually do the computation; it uses another library for that (in fact, more than one). For the symbolic computation portion, Keras currently supports [Theano](http://www.deeplearning.net/software/theano/) (the default) and [TensorFlow](https://www.tensorflow.org/). We'll use Theano for this notebook.
## Theano
From [the docs](http://deeplearning.net/software/theano/introduction.html#introduction):
> Theano is a Python library that lets you define, optimize, and evaluate mathematical expressions, especially ones with multi-dimensional arrays (``numpy.ndarray``). Using Theano it is possible to attain speeds rivaling hand-crafted C implementations for problems involving large amounts of data.
Essentially, by using symbolic mathematical expressions and all sorts of compiler and computational optimizations (including automatic differentiation and dynamically-generated C code!), Theano can make math happen very fast (either using the Python interpreter and ``numpy``, or going right around it to CPU/GPU instructions). An interesting feature of Theano is that executing the same code on a GPU is achieved by simply setting a shell environment variable!
One way to think about how these pieces relate to one another is (loosely):
```
scikit-learn:numpy :: keras:theano(+numpy)
```
Put another way, here's my attempt at a visual version:
<img src="img/nn-stack.png">
# Ok, enough talk, let's build something
```
from IPython.display import Image
Image(data="img/mr-t.jpg")
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
seed = 1234; np.random.seed(seed)
import seaborn as sns
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from sklearn.model_selection import train_test_split  # formerly sklearn.cross_validation, removed in sklearn 0.20
from sklearn.linear_model import LogisticRegression
%matplotlib inline
```
## Foreword: "`random`"
Many of the numerical computations that we do in Python involve sampling from distributions. We often want these to be truly random, but computers can only provide so-called "pseudo-random" numbers ([wiki](https://en.wikipedia.org/wiki/Pseudorandom_number_generator), [Python docs](https://docs.python.org/3/library/random.html)).
In many of our modeling use-cases (particularly those which are *ad hoc*), having fluctuations in some pseudo-random values is fine. However, when there are variations in, say, the initial conditions for an algorithm, it can lead to variation in the outcome which is not indicative or representative of any true variance in the underlying data. Examples include choosing the starting centroids in a k-means clustering task, or *choosing the weights of a neural network synapse matrix*.
When you want results to be reproducible (which is generally a good thing!), you have to "seed" the random number generator (RNG). In this way, when you send your code to someone else's computer, or if you run your code 10,000 times, you'll always have the same initial conditions (for parameters that are randomly generated), and you should always get the same results.
In a typical Python script that runs all in one session, the line above (`seed = 1234; np.random.seed(seed)`) can be included once, at the top\*. In an IPython notebook, however, it seems that you need to set the seed in each cell where the random parameter initialization may occur (i.e. in any cell that includes the declaration of a NN model). I'm not 100% positive about this, but this is what I gathered from my experimentation. This is the origin of the assorted calls to `np.random.seed()` you'll see below!
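The core behavior is easy to demonstrate with plain `numpy` (nothing Keras-specific here):

```python
import numpy as np

# Two independently created generators seeded with the same value
# produce identical "random" sequences
rng_a = np.random.RandomState(1234)
rng_b = np.random.RandomState(1234)

draws_a = rng_a.randn(3)
draws_b = rng_b.randn(3)

print(np.allclose(draws_a, draws_b))  # True: identical sequences
```

This is exactly why seeding right before a model declaration makes the (randomly initialized) weights reproducible.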
# 1: Classification (Iris)
To get a sense for how Keras works, we'll start with a simple example: the golden oldie, the iris data set. That way we can focus our attention on the code, not the details of the data.
Furthermore, to illustrate the parallels with `scikit-learn`, let's run through *that* demo first. Since the Keras API is ``sklearn``-like (and this team has lots of ``sklearn`` experience), hopefully that will provide some helpful conceptual hooks.
## ``sklearn``
```
# import data (from seaborn, bc it gives you a df with labels)
iris = sns.load_dataset("iris")
iris.tail()
# inspect
sns.pairplot(iris, hue='species')
# get train/test split (no preprocessing)
X = iris[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']].values
y = iris['species'].values
# take a 75/25 split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=seed)
# verify array sizes
#[x.shape for x in [X_train, X_test, y_train, y_test]]
# fit default LR model
model = LogisticRegression()
model.fit(X_train, y_train)
# score on test (should be ~80-90%)
print("Accuracy = {:.2f}".format(model.score(X_test, y_test)))
```
Not bad for less than ten lines of code!
In practice, we should be a bit more careful and consider some additional work:
- preprocess the data (scaling, normalizing)
- use cross validation techniques to build an uncertainty or confidence (e.g. k-fold cv)
- gridsearch the model parameters
- ... etc. ...
But for now, we're just trying to show the comparison between the libraries, and this will do.
Now, let's write the same kind of classification system in Keras!
## ``keras``
**Warning!** *If you have a dataset the size of the iris data (tiny!), you probably shouldn't use a neural network in practice; instead, consider a model that is more interpretable. We're using #tinydata here because it's simple and common.*
### (One-hot) encode the labels
We can start with the same train- and test-split data arrays, but we have to make a modification to the output data (labels). ``scikit-learn`` estimators transparently convert categorical labels (e.g. strings like "virginica" and "setosa") into numerical values (or arrays), but we have to do that step manually for the Keras models.
We want the model output to be a 3x1 array, where the value at each index represents the probability of that category (0, 1, or 2). The format of this training data, where the truth is 1 and all the other possible values are 0 is also known as a **one-hot encoding.**
There are a few ways to do this:
- ``pandas.get_dummies()`` (we'll use this one)
- ``scikit-learn``'s ``LabelEncoder()``
- Keras' ``np_utils.to_categorical()``
- ... or roll your own
Here's an example of how ``pd.get_dummies()`` works:
```
# create a sample array with a few of each species from the original df
species_sample = iris.groupby(by='species').head(3)['species']
species_sample
# get a one-hot-encoded frame from the pandas method
pd.get_dummies(species_sample, prefix='ohe')
```
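The "roll your own" option mentioned in the list is only a couple of lines with plain `numpy` — this illustrative sketch indexes into an identity matrix:

```python
import numpy as np

# Map each label to a row of the identity matrix (one row per category)
labels = np.array(['setosa', 'versicolor', 'virginica', 'setosa'])
categories, indices = np.unique(labels, return_inverse=True)
one_hot = np.eye(len(categories))[indices]

print(categories)  # ['setosa' 'versicolor' 'virginica'] (np.unique sorts alphabetically)
print(one_hot)     # rows: [1,0,0], [0,1,0], [0,0,1], [1,0,0]
```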
Now, instead of a single string label as our output (prediction), we have a 3x1 array, where each array item represents one of the possible species, and the non-zero binary value gives us the information we need.
``scikit-learn`` was effectively doing this same procedure for us before, but hiding all of the steps that map the labels to the prediction arrays.
Back to our original data: we can one-hot encode the y arrays that we got from our train-test split earlier, and can re-use the same X arrays.
```
# encode the full y arrays
ohe_y_train = pd.get_dummies(y_train).values
ohe_y_test = pd.get_dummies(y_test).values
```
### Define, compile the model
Time to make our neural network model!
Keras has an object-oriented syntax that starts with a ``model``, then adds ``layers`` and ``activations``.
The ``Sequential`` model is the main one we care about - it assumes that you'll tell it a series of layers (and activations) that define the network. Subsequently, we add layers and activations, and then compile the model before we can train it.
There is art and science to choosing how many hidden layers and nodes within those layers. We're not going to dive into that in this session (mostly because I don't yet know the answers!), so maintain your skepticism, but just sit with it for now.
```
# create a new model
model = Sequential()
# add layers
# - the first hidden layer must specify the dimensions of the input layer (4x1, here)
# - this adds a 10-node, fully-connected layer following the input layer
model.add(Dense(10, input_dim=4))
# add an activation to the hidden layer
model.add(Activation('sigmoid'))
```
For now, we'll stick to a 3-layer network: input, hidden, and output.
The final, output layer needs to have three nodes since we have labels that are 3x1 arrays. So, our layers and sizes are: input (4 nodes), hidden (10 nodes), and output (3 nodes).
At this point, I only have a small amount of guidance for choosing activation layers. See the notes at the end of the notebook for a longer discussion. Importantly, when we want our output values to be between 0 and 1, and to represent probabilities of our classes (summing to 1), we choose the **softmax** activation function.
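To see why softmax fits, here's a small `numpy` sketch (not Keras code): it exponentiates the raw scores and normalizes, so the outputs are positive and sum to 1.

```python
import numpy as np

def softmax(z):
    # subtracting the max is a standard numerical-stability trick;
    # it does not change the result
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw output-layer values
probs = softmax(scores)

print(probs)        # ≈ [0.659 0.242 0.099]
print(probs.sum())  # 1.0 (up to float rounding)
```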
```
# add the output layer, and a softmax activation
model.add(Dense(3))
model.add(Activation('softmax'))
```
Finally, we compile the model. This is where we can specify the optimizer, and loss function.
Since we're using multi-class classification, we'll use the ``categorical_crossentropy`` loss function. This is [the advice that I was able to find](https://keras.io/getting-started/sequential-model-guide/#compilation) most often, but I need to learn more about decision criteria for both optimizers, and loss functions. They can have a big effect on your model accuracy.
```
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=["accuracy"])
```
Finally, we ``fit()`` the compiled model using the original training data, including the one-hot-encoded labels.
The ``batch_size`` is how many observations are propagated forward before updating the weights (backpropagation). Typically, this number will be much bigger (the default value is 32), but we have a very tiny data set, so we artificially force this network to update weights with each observation (see the **Warning** above).
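The arithmetic behind that: the number of weight updates per epoch equals the number of batches. Assuming the 75/25 split of the 150 iris rows gives 112 training rows:

```python
import math

n_samples = 112  # training rows from the 75/25 split above

for batch_size in (1, 32):
    updates_per_epoch = math.ceil(n_samples / batch_size)
    print(batch_size, updates_per_epoch)  # 1 -> 112 updates, 32 -> 4 updates
```

So `batch_size=1` means 112 weight updates every epoch, versus only 4 with the default of 32.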
```
# keras uses the same .fit() convention
model.fit(X_train, ohe_y_train, batch_size=1, nb_epoch=20, verbose=1)
```
We can ``evaluate()`` our accuracy by using that method on the test data; this is equivalent to ``sklearn``'s ``score()``.
```
loss, metrics = model.evaluate(X_test, ohe_y_test, verbose=0)
# score on test (should also be ~80-90%)
print("Accuracy = {:.2f}".format(metrics))
```
Not bad!
There are also ``sklearn``-like methods that return class assignment and their probabilities.
```
classes = model.predict_classes(X_test, verbose=0)
probs = model.predict_proba(X_test, verbose=0)
print('(class) [ probabilities ]')
print('-'*40)
for x in zip(classes, probs):
print('({}) {}'.format(x[0],x[1]))
```
### Now, more compact...
We walked through that in pieces, but here we can collect all of those steps together to see just how few lines of code it required (though remember that we did have the one additional step of creating one-hot-encoded labels).
```
np.random.seed(seed)
# instantiate the model
model = Sequential()
# hidden layer
model.add(Dense(10, input_shape=(4,)))
model.add(Activation('sigmoid'))
# output layer
model.add(Dense(3))
model.add(Activation('softmax'))
# set optimizer, loss fnc, and fit parameters
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=["accuracy"])
model.fit(X_train, ohe_y_train, batch_size=1, nb_epoch=20, verbose=0)
# score on test set
loss, metrics = model.evaluate(X_test, ohe_y_test, verbose=0)
print("Accuracy = {:.2f}".format(metrics))
```
Or - even more succinctly - we can build the same model but collapse the structure definition because of Keras' flexible API...
```
np.random.seed(seed)
# move the activations into the *layer* definition
model = Sequential([
Dense(10, input_dim=4, activation='sigmoid'),
Dense(3, activation='softmax'),
])
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=["accuracy"])
model.fit(X_train, ohe_y_train, batch_size=1, nb_epoch=20, verbose=0)
loss, metrics = model.evaluate(X_test, ohe_y_test, verbose=0)
print("Accuracy = {:.2f}".format(metrics))
```
Cool! It seems to work pretty well.
### Peeking inside the model
At this point, what *is* the ``model`` we created? In addition to its network structure (layers with sizes and activation functions), we also have the weight matrices.
```
for layer in model.layers:
print('name: {}'.format(layer.name))
print('dims (in, out): ({}, {})'.format(layer.input_shape, layer.output_shape))
print('activation: {}'.format(layer.activation))
# nb: I believe the second weight array is the bias term
print('weight matrix: {}'.format(layer.get_weights()))
print()
```
### Saving the model
If you're looking to save off a trained network model, these are most of the pieces that you'd need to save to disk. [Keras uses HDF5](https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model) (sort of "named, organized arrays") to serialize trained models with a ``model.save()`` (and corresponding ``.load()``) method.
If you're looking to save the *definition* of a model, but without all of the weights, you can write it out in simple JSON or YAML representation e.g. ``model.to_json()``.
# 2: Classification (MNIST)
Let's do one more familiar classification problem - last year's 4C dataset: the MNIST image labeling task.
This time we will:
- have more data (good!)
- do a tiny bit of data normalization (smart!)
- build a bigger network (more expressive!)
```
from keras.datasets import mnist
# the data, shuffled and split between tran and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("X_train original shape", X_train.shape)
print("y_train original shape", y_train.shape)
print("y_test original shape", y_test.shape)
```
Remember that the MNIST data is an array of 28-pixel by 28-pixel "images" (brightness values), 60k in the training set, 10k in the test set.
```
plt.figure(figsize=(8,4))
for i in range(3):
plt.subplot(1,3,i+1)
plt.imshow(X_train[i], cmap='gray', interpolation='none')
plt.title("Label: {}".format(y_train[i]))
```
### Preprocessing and normalization
If you recall from the last 4C event, the first preprocessing step we mostly used with this data was to unroll the 2D arrays into a single vector.
Then, as with many other optimizations, we'll see better results (with faster convergence) if we standardize the data into a smaller range. This can be done in a number of ways, like `sklearn`'s `StandardScaler` (zero-mean, unit variance), or `Normalizer` (scale to unit norm). For now, we'll just rescale to the range 0-1.
Then, we also need to one-hot encode our labels again.
```
# unroll 2D pixel data into 1D vector
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
# convert from original range (0-255) to 0-1
X_train = X_train / X_train.max()
X_test = X_test / X_test.max()
# OHE the y arrays
ohe_y_train = pd.get_dummies(y_train).values
ohe_y_test = pd.get_dummies(y_test).values
```
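For contrast with the 0-1 rescale we just did, here is the zero-mean/unit-variance option written out by hand in `numpy` (a sketch of what `StandardScaler` computes, not the sklearn API itself):

```python
import numpy as np

x = np.array([0.0, 64.0, 128.0, 255.0])  # toy pixel values

minmax = (x - x.min()) / (x.max() - x.min())  # rescaled to [0, 1]
zscore = (x - x.mean()) / x.std()             # mean 0, std 1

print(minmax)                            # [0.  0.251 0.502 1. ] (approximately)
print(zscore.mean(), zscore.std())       # ≈ 0.0 and 1.0
```

Both transforms put the features on a comparable scale; min-max preserves the original bounded range, while the z-score centers the data.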
Now we'll build another `Sequential` model.
This time, we'll use the more commonly-used `relu` ("rectified linear unit") activation function.
```
np.random.seed(seed)
model = Sequential([
Dense(512, input_dim=784, activation='relu'),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
```
The shape of this network is now: 784 (input nodes) => 512 (hidden nodes) => 512 (hidden nodes) => 10 (output nodes). That's $784\times512 + 512\times512 + 512\times10 \approx 6.7\times10^5$ weights!
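Counting trainable parameters layer by layer (each `Dense` layer has `inputs × outputs` weights plus `outputs` bias terms):

```python
# (input_dim, output_dim) for each Dense layer in the network above
layer_dims = [(784, 512), (512, 512), (512, 10)]

counts = [n_in * n_out + n_out for n_in, n_out in layer_dims]
total = sum(counts)

print(counts)  # [401920, 262656, 5130]
print(total)   # 669706
```

This matches what Keras reports via `model.summary()` for a network of this shape.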
```
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, ohe_y_train, batch_size=128, nb_epoch=5, verbose=1)
loss, metrics = model.evaluate(X_test, ohe_y_test, verbose=1)
print()
#print('Test loss:', loss)
print('Test accuracy:', metrics)
```
If you recall the 2015 4C leaderboard, a score of 98% would have put you in the top 10% of submissions!
Speaking only for myself, the entries that I submitted in that range took **much** more time and effort than those last few notebook cells!
# 3: Regression (Housing)
Finally, let's do an example of modeling a continuous variable - a regression task. We'll use another of the canned datasets: the Boston housing price data.
This data comprises a few hundred observations of neighborhoods, each with thirteen related features. The target is the median price of the homes in that area (in thousands of dollars). So, this means that the output variable is a continuous, real (and positive) number.
You can uncomment the `print(...'DESCR')` cell for a longer description.
```
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.preprocessing import MinMaxScaler, StandardScaler
# load + inspect data
boston = load_boston()
X = boston.data
y = boston.target
labels = boston.feature_names
b_df = pd.DataFrame(X, columns=labels)
b_df.head()
# built-in information about the dataset and features
#print(boston.get("DESCR"))
```
Since the feature values span many orders of magnitude, we should standardize them for optimization efficiency. Then we can split the data into our train/test split.
It's worth noting that we could also experiment with standardizing the output variable, as well. For now, we won't.
```
# standardize the feature data (all features now 0-1)
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(X)
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=seed)
# build model
np.random.seed(seed)
model = Sequential([
# use a single hidden layer, also with 13 nodes
Dense(13, input_dim=13, activation='relu'),
Dense(1)
])
# compile + fit model
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['mse'])  # note: 'accuracy' is meaningless for regression
model.fit(X_train, y_train, batch_size=5, nb_epoch=100, verbose=0)
# evaluate on test data
loss, metrics = model.evaluate(X_test, y_test, verbose=1)
#print('Test loss:', loss)
#print('Test accuracy:', metrics)
print('MSE:', metrics)
y_pred = model.predict(X_test)
print('R^2 score:', r2_score(y_test, y_pred))
plt.figure(figsize=(8,8))
# compare the predictions to test
plt.plot(y_test, y_pred, 'o', alpha=0.75, label='model predictions')
# draw a diagonal
xy = np.linspace(min(y_test), max(y_test))
plt.plot(xy, xy, '--', label='truth = pred')
plt.title('3-layer NN')
plt.xlabel('truth ($k)')
plt.ylabel('prediction ($k)')
plt.legend(loc='best')
```
Cool!
It looks like our model struggles a bit with high-valued observations. Something worth digging into if we were to work on optimizing this model for this task.
# BUT
Just to remind you that this is a toy problem that probably *shouldn't* be solved with a neural network, let's look at the corresponding linear regression model.
We use the same data....
```
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print('R^2:', r2_score(y_test, y_pred))
```
And get similar $R^2$ values with a much more interpretable model. We can compare the prediction errors to the same chart from before...
```
plt.figure(figsize=(8,8))
# compare the predictions to test
plt.plot(y_test, y_pred, 'o', alpha=0.75, label='model predictions')
# draw the diagonal
xy = np.linspace(min(y_test), max(y_test))
plt.plot(xy, xy, '--', label='truth = pred')
plt.title('Linear Regression')
plt.xlabel('truth ($k)')
plt.ylabel('prediction ($k)')
plt.legend(loc='best')
```
And - **the reason why a linear model should often be preferred** - we can just look straight at the feature coefficients and read off how they relate to the predictions.
```
plt.figure(figsize=(8,8))
# where to position the bars/ticks
locs = range(len(model.coef_))
plt.barh(locs, model.coef_, align='center')
plt.yticks(locs, b_df.columns);
plt.title('linear regression coefficients')
plt.xlabel('value')
plt.ylabel('coefficient')
```
# Wrap Up
Hopefully between [Part 1](https://github.com/DrSkippy/Data-Science-45min-Intros/tree/master/neural-networks-101), and now this - Part 2 - you've gained a bit deeper understanding for how neural networks work, and how to use Keras to build and train them.
At this point, the only thing we haven't *really* illustrated is how to use them at the #bigdata scale (or with unconventional data types) where they have proven particularly valuable. Perhaps there will be a Part 3...
## What next?
If you want to follow up (or go deeper) on the concepts that we covered, here are some links
- [What optimizer should I use?](http://sebastianruder.com/optimizing-gradient-descent/index.html#visualizationofalgorithms)
- [What loss function should I use?](https://keras.io/getting-started/sequential-model-guide/#compilation)
- unfortunately, these are examples and not a rigorous discussion
- [Keras FAQ](https://keras.io/getting-started/faq/)
- [Keras' collection of pre-made examples](https://github.com/fchollet/keras/tree/master/examples)
- [`sklearn` Keras wrappers](https://keras.io/scikit-learn-api/)
- allow you to mix in things from `sklearn` like `Pipeline`, `GridSearch`, etc.
- [Telling Theano to use a GPU](http://deeplearning.net/software/theano/tutorial/using_gpu.html)
## Acknowledgements
In addition to the links already given, most of this notebook was cobbled together based on other examples I found online, including:
- [many MLM posts](http://machinelearningmastery.com/blog/)
- [Fastforward Labs' `keras-hello-world`](https://github.com/fastforwardlabs/keras-hello-world)
- [wxs' `keras-mnist-tutorial`](https://github.com/wxs/keras-mnist-tutorial)
- and probably others...
```
import matplotlib.pyplot as plt
import numpy as np
import os
from scipy import stats
import glob
from scipy.stats import ks_2samp, kstest
%matplotlib inline
def load_summary(filename):
dtype=[('minr', 'f8'),
('maxr', 'f8'),
('ca_ratio', 'f8'),
('ba_ratio', 'f8'),
('a', 'f8'),
('center', 'f8'),
('width', 'f8'),
('mu', 'f8')]
summary = np.loadtxt(filename, dtype=dtype)
return summary
def load_experiment(input_path="../data/mstar_selected_summary/vmax_sorted/", fixed_number=False, full_data=False):
files = glob.glob(input_path+"M31_group_*")
group_id = []
for f in files:
i = int(f.split("_")[-5])
if i not in group_id:
group_id.append(i)
print(group_id, len(group_id))
n_groups = len(group_id)
if fixed_number:
n_iter = np.arange(5)
else:
n_iter = np.arange(11,16)
fields = ['width','mu', 'a', 'ba_ratio', 'ca_ratio']
M31_all = {}
MW_all = {}
if not full_data:
for field in fields:
M31_all[field] = np.ones(n_groups)
MW_all[field] = np.ones(n_groups)
M31_all[field+'_sigma'] = np.ones(n_groups)
MW_all[field+'_sigma'] = np.ones(n_groups)
M31_all[field+'_random'] = np.ones(n_groups)
MW_all[field+'_random'] = np.ones(n_groups)
M31_all[field+'_random_sigma'] = np.ones(n_groups)
MW_all[field+'_random_sigma'] = np.ones(n_groups)
else:
for field in fields:
M31_all[field] = np.empty((0))
MW_all[field] = np.empty((0))
M31_all[field+'_random'] = np.empty((0))
MW_all[field+'_random'] = np.empty((0))
for g in range(n_groups):
MW_summary = {}
M31_summary = {}
for i in n_iter:
if fixed_number:
filename_MW = os.path.join(input_path,"MW_group_{}_nmax_{}_iter_{}.dat".format(group_id[g], 11, i))
filename_M31 = os.path.join(input_path,"M31_group_{}_nmax_{}_iter_{}.dat".format(group_id[g],11, i))
else:
filename_MW = os.path.join(input_path,"MW_group_{}_nmax_{}_iter_{}.dat".format(group_id[g], i, 0))
filename_M31 = os.path.join(input_path,"M31_group_{}_nmax_{}_iter_{}.dat".format(group_id[g], i, 0))
MW_summary[i] = load_summary(filename_MW)
M31_summary[i] = load_summary(filename_M31)
for field in fields:
a = np.empty((0))
b = np.empty((0))
a_random = np.empty((0))
b_random = np.empty((0))
for i in n_iter:
data = M31_summary[i]
a = np.append(a, data[field][0])
a_random = np.append(a_random, data[field][1:101])
data = MW_summary[i]
b = np.append(b, data[field][0])
b_random = np.append(b_random, data[field][1:101])
#print('a_random {} iter: {} {}'.format(field, i, a_random))
if not full_data:
M31_all[field][g] = np.average(a)
MW_all[field][g] = np.average(b)
M31_all[field+'_sigma'][g] = np.std(a)
MW_all[field+'_sigma'][g] = np.std(b)
M31_all[field+'_random'][g] = np.average(a_random)
MW_all[field+'_random'][g] = np.average(b_random)
M31_all[field+'_random_sigma'][g] = np.std(a_random)
MW_all[field+'_random_sigma'][g] = np.std(b_random)
else:
M31_all[field] = np.append(M31_all[field], a)
MW_all[field] = np.append(MW_all[field], b)
M31_all[field+'_random'] = np.append(M31_all[field+'_random'], a_random)
MW_all[field+'_random'] = np.append(MW_all[field+'_random'], b_random)
return M31_all, MW_all
in_path = "../data/obs_summary/vmag_sorted/"
M31_obs_vmag_sorted, MW_obs_vmag_sorted = load_experiment(input_path=in_path, fixed_number=False, full_data=False)
in_path = "../data/illustris1_mstar_selected_summary/vmax_sorted/"
M31_sim_vmax_sorted_illu, MW_sim_vmax_sorted_illu = load_experiment(input_path=in_path, fixed_number=False)
in_path = "../data/illustris1dark_mstar_selected_summary/vmax_sorted/"
M31_sim_vmax_sorted_illudm, MW_sim_vmax_sorted_illudm = load_experiment(input_path=in_path, fixed_number=False)
in_path = "../data/elvis_mstar_selected_summary/vmax_sorted/"
M31_sim_vmax_sorted_elvis, MW_sim_vmax_sorted_elvis = load_experiment(input_path=in_path, fixed_number=False)
print("M31 observations \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field, M31_obs_vmag_sorted[field][0], M31_obs_vmag_sorted[field+'_sigma'][0])
print("\nMW observations \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field, MW_obs_vmag_sorted[field][0], MW_obs_vmag_sorted[field+'_sigma'][0])
print("M31 observations \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
normed_mean = (M31_obs_vmag_sorted[field][0] - M31_obs_vmag_sorted[field+'_random'][0])/M31_obs_vmag_sorted[field+'_random_sigma'][0]
normed_sigma = M31_obs_vmag_sorted[field+'_sigma'][0]/M31_obs_vmag_sorted[field+'_random_sigma'][0]
print("Normalized units\n", field, normed_mean, normed_sigma)
print("\nMW observations \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
normed_mean = (MW_obs_vmag_sorted[field][0] - MW_obs_vmag_sorted[field+'_random'][0])/MW_obs_vmag_sorted[field+'_random_sigma'][0]
normed_sigma = MW_obs_vmag_sorted[field+'_sigma'][0]/MW_obs_vmag_sorted[field+'_random_sigma'][0]
print("Normalized units\n", field, normed_mean, normed_sigma)
print("M31 observations (spherically randomized)\n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field, M31_obs_vmag_sorted[field+'_random'][0], M31_obs_vmag_sorted[field+'_random_sigma'][0])
print("\nMW observations (spherically randomized)\n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field, MW_obs_vmag_sorted[field+'_random'][0], MW_obs_vmag_sorted[field+'_random_sigma'][0])
print("M31 illustris simulation \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field,
np.mean(M31_sim_vmax_sorted_illu[field]), np.std(M31_sim_vmax_sorted_illu[field+'_sigma']))
print("\nMW illustris simulation \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field,
np.mean(MW_sim_vmax_sorted_illu[field]), np.std(MW_sim_vmax_sorted_illu[field+'_sigma']))
print("M31 illustris simulation DM \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field,
np.mean(M31_sim_vmax_sorted_illudm[field]), np.std(M31_sim_vmax_sorted_illudm[field+'_sigma']))
print("\nMW illustris simulation DM \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field,
np.mean(MW_sim_vmax_sorted_illudm[field]), np.std(MW_sim_vmax_sorted_illudm[field+'_sigma']))
print("M31 elvis simulation \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field,
np.mean(M31_sim_vmax_sorted_elvis[field]), np.std(M31_sim_vmax_sorted_elvis[field+'_sigma']))
print("\nMW elvis simulation \n")
fields = ['width', 'ca_ratio', 'ba_ratio']
for field in fields:
print("Natural units\n", field,
np.mean(MW_sim_vmax_sorted_elvis[field]), np.std(MW_sim_vmax_sorted_elvis[field+'_sigma']))
```
<img src="figures/ampel_multi.png" width="600">
### AMPEL and the Vera Rubin Observatory
The Vera Rubin Observatory, and the LSST survey, will provide a legacy collection of real-time data. Considering the potential long term impact of any transient programs, the AMPEL analysis platform was developed to
host complex science programs with provenance requirements matching those of the observatory. In essence, this means creating scientific analysis schemas that detail all scientific/algorithmic choices being made. These schemas can be distributed with publications, and consistently applied to simulated, archived, and real-time datasets.
<img src="figures/program_design2.png" width="600">
Overview of sample AMPEL science schema. Grey circles indicate analysis units requested by the program, while the colored symbol shows where in the AMPEL processing tiers each unit belongs.
Such analysis schemas should in principle be independent of infrastructure and processing method. Both scientific analysis and processing/brokering infrastructure are bound to evolve during the coming decade. _It would be a great goal for this group to develop standards for describing real-time programs such that these can be decoupled from the computing question of where data is being processed, and e.g. whether this is archived or real-time._
```
# This notebook does not actually run any code, and it is only used to point to modules and docs.
import sys
!{sys.executable} -m pip install ampel-ztf
from ampel.ztf.ingest.ZiAlertContentIngester import ZiAlertContentIngester
from ampel.ztf.t0.ZTFAlertStreamController import ZTFAlertStreamController
from ampel.abstract.AbsT3Unit import AbsT3Unit
from ampel.ztf.t3.skyportal.SkyPortalPublisher import SkyPortalPublisher
```
### AMPEL API
A standard REST API can be used to access ZTF data through AMPEL:
[https://ampelproject.github.io/astronomy/ztf/index](https://ampelproject.github.io/astronomy/ztf/index)
A similar access will be provided to LSST data. However, as such interactions generally break provenance, the recommended development path is to design a program based on archived data then upload these to a live instance. The following sections will outline how data is added and exported from a live AMPEL channel.
### AMPEL tiers
Alert processing in AMPEL is carried out in four separate tiers:
| | |
| -------- | ----- |
 | 
The first of these (*add*) directly concerns how to ingest new data into the AMPEL system, while the last (*react*) is frequently used to export data.
### AMPEL T0 - ADD
AMPEL users can design units to ingest a wide variety of data sources, and are free to e.g. import data from external telescopes to combine with larger streams (like those from LSST, ZTF and CTA). This is done through two source-specific steps: defining a _controller_, which reads the data stream, and an _ingester_, which selects which of the stream object properties are to be added into the database (and performs any needed conversions). We exemplify here with the controller and ingester used for the ZTF alert stream.
```
ZTFAlertStreamController??
ZiAlertContentIngester??
```
Other data-streams can be accessed through a straightforward extension of these. An AMPEL channel requests which controller and ingester to use in the analysis schema:
```
...
controller:
unit: ZTFAlertStreamController
processor:
unit: AlertProcessor
config:
directives:
t0_add:
unit: ZiAlertContentIngester
...
```
### AMPEL T3 - REACT
In the third tier, an AMPEL channel works with the full collection of transients (modified since some previous time). A typical task here is to push information to other systems or for visualization. These operations are carried out by _T3Unit_s:
```
AbsT3Unit??
```
AMPEL channels typically construct science specific T3s. Constructed and available units perform tasks such as triggering follow-up observations (e.g. LCO/AEON), sending alerts (e.g. SLACK) or providing data to some visualization service (e.g. Skyportal).
```
SkyPortalPublisher.post_candidate??
```
### Next steps
The tutorial notebooks contained in the same directory provide a more direct introduction to the design of an AMPEL science channel. More information is also available at
[https://ampelproject.github.io/](https://ampelproject.github.io/)
or through any of the AMPEL developers.
# Chapter 10: RNN(Recurrent Neural Network) Application in IMDB Reviews and Sarcasm Reviews Dataset
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
def plotGraph(history):
    # Relies on the insertion order of history.history:
    # loss, accuracy, val_loss, val_accuracy
    loss, acc, val_loss, val_acc = history.history.values()
    epochs = range(1, len(loss) + 1)
    # Plot accuracy
    plt.plot(epochs, acc, 'r-^', label='Training Accuracy')
    plt.plot(epochs, val_acc, 'b-*', label='Validation Accuracy')
    plt.title('Training and Validation Accuracy')
    plt.legend()
    plt.figure()
    # Plot loss
    plt.plot(epochs, loss, 'r-^', label='Training Loss')
    plt.plot(epochs, val_loss, 'b-*', label='Validation Loss')
    plt.title('Training and Validation Loss')
    plt.legend()
    plt.figure()
def trainModel(model, num_epochs):
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    plotGraph(model.fit(training_padded, training_label, epochs=num_epochs,
                        validation_data=(testing_padded, testing_label)))
```
## Section 10.1: IMDB Dataset (Chapter 9.1)
### Model Construction:
#### Input Layer (Fixed):
* Embedding
---
#### Optional Layers:
1. GlobalMaxPooling1D()
2. LSTM(32)
3. Bidirectional(LSTM(64))
4. Bidirectional(LSTM(64)) + Bidirectional(LSTM(32))
5. Conv1D(128, 5, activation='relu') + GlobalMaxPooling1D()
6. GRU(32, dropout=0.2, recurrent_dropout=0.2)
7. GRU(32, dropout=0.1, recurrent_dropout=0.5)+GRU(32, dropout=0.1, recurrent_dropout=0.5, activation='relu')
---
#### Output Layers (Fixed):
- One relu Dense(4)
- One Sigmoid Dense(1)
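Before the code below, it may help to see what the padding step does; a minimal pure-Python sketch of what Keras's `pad_sequences(..., padding='post', truncating='post')` produces (illustrative only, not the library implementation):

```python
def pad_post(sequences, maxlen):
    """Cut or right-pad every sequence to exactly maxlen entries."""
    padded = []
    for seq in sequences:
        seq = seq[:maxlen]                              # truncating='post'
        padded.append(seq + [0] * (maxlen - len(seq)))  # padding='post'
    return padded

pad_post([[1, 2, 3], [4]], maxlen=2)  # -> [[1, 2], [4, 0]]
```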
```
imdb, info = tfds.load('imdb_reviews', with_info = True, as_supervised = True)
training_sentences, training_label = [], []
testing_sentences, testing_label = [], []
for data, label in imdb['train']:
    training_sentences.append(str(data.numpy()))
    training_label.append(label.numpy())
for data, label in imdb['test']:
    testing_sentences.append(str(data.numpy()))
    testing_label.append(label.numpy())
training_sentences, training_label = np.array(training_sentences), np.array(training_label)
testing_sentences, testing_label = np.array(testing_sentences), np.array(testing_label)
numWords = 10000
maxLen = 200
embedding_dim = 16
tokenizer = Tokenizer(num_words = numWords, oov_token = '<OOV>')
tokenizer.fit_on_texts(training_sentences)
training_sequence = tokenizer.texts_to_sequences(training_sentences)
training_padded = pad_sequences(training_sequence, maxlen= maxLen, padding = 'post', truncating = 'post')
testing_sequence = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequence, maxlen = maxLen)
# GlobalAveragePooling1D()
model = tf.keras.Sequential([
tf.keras.layers.Embedding(numWords, embedding_dim, input_length = maxLen),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(4, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid'),
])
trainModel(model, 10)
# LSTM(16)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(numWords, embedding_dim, input_length = maxLen),
tf.keras.layers.LSTM(16),
tf.keras.layers.Dense(4, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid'),
])
trainModel(model, 10)
# Bidirectional(LSTM(32))
model = tf.keras.Sequential([
tf.keras.layers.Embedding(numWords, embedding_dim, input_length = maxLen),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(6, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid'),
])
trainModel(model, 10)
# Bidirectional(LSTM(32)) + Bidirectional(LSTM(16))
model = tf.keras.Sequential([
tf.keras.layers.Embedding(numWords, embedding_dim, input_length = maxLen),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences = True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
tf.keras.layers.Dense(4, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid'),
])
trainModel(model, 10)
# Conv1D(128, 5, activation='relu') + GlobalMaxPooling1D()
model = tf.keras.Sequential([
tf.keras.layers.Embedding(numWords, embedding_dim, input_length = maxLen),
tf.keras.layers.Conv1D(128, 5, activation='relu'),
tf.keras.layers.GlobalMaxPooling1D(),
tf.keras.layers.Dense(4, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid'),
])
trainModel(model, 10)
# GRU(32)
embedding_dim = 32
model = tf.keras.Sequential([
tf.keras.layers.Embedding(numWords, embedding_dim, input_length = maxLen),
tf.keras.layers.GRU(32),
tf.keras.layers.Dense(4, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid'),
])
trainModel(model, 10)
# Bidirectional(GRU(32))
model = tf.keras.Sequential([
tf.keras.layers.Embedding(numWords, embedding_dim, input_length = maxLen),
tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),
tf.keras.layers.Dense(12, activation = 'relu'),
tf.keras.layers.Dense(1, activation = 'sigmoid'),
])
trainModel(model, 5)
```
```
%load_ext autoreload
%autoreload 2
import datetime
import os, sys
import numpy as np
import matplotlib.pyplot as plt
import casadi as cas
##### For viewing the videos in Jupyter Notebook
import io
import base64
from IPython.display import HTML
# from ..</src> import car_plotting
# from .import src.car_plotting
PROJECT_PATH = '/home/nbuckman/Dropbox (MIT)/DRL/2020_01_cooperative_mpc/mpc-multiple-vehicles/'
sys.path.append(PROJECT_PATH)
import src.MPC_Casadi as mpc
import src.car_plotting as cplot
%matplotlib inline
```
# Vehicle Dynamics $\frac{d}{dt} \vec{x} = f(\vec{x}, \vec{u})$
```
def gen_x_next(x_k, u_k, dt):
    k1 = f(x_k, u_k)
    k2 = f(x_k + dt/2*k1, u_k)
    k3 = f(x_k + dt/2*k2, u_k)
    k4 = f(x_k + dt*k3, u_k)
    x_next = x_k + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return x_next
# F = cas.Function('F',[x,u,t],[ode],)
```
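As a sanity check, the RK4 step above can be verified on a scalar ODE with a known solution; a self-contained sketch in plain Python (no CasADi), mirroring `gen_x_next`:

```python
import math

# Same classical RK4 step as gen_x_next above, for dx/dt = f(x, u).
def rk4_step(f, x_k, u_k, dt):
    k1 = f(x_k, u_k)
    k2 = f(x_k + dt / 2 * k1, u_k)
    k3 = f(x_k + dt / 2 * k2, u_k)
    k4 = f(x_k + dt * k3, u_k)
    return x_k + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Check against dx/dt = -x, whose exact solution at t=1 is exp(-1).
x, dt = 1.0, 0.1
for _ in range(10):
    x = rk4_step(lambda x_, u_: -x_, x, 0.0, dt)
# x is now very close to math.exp(-1)
```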
# States
$\vec{x} = [x, y, \phi, \delta, V, s]^T$
$\vec{u} = [\delta^u, v^u]^T$
# Discrete (integrated) dynamics $\vec{x}_{t+1} = F(\vec{x}_{t}, \vec{u}_{t})$
```
T = 10  # total time horizon
dt = 0.1
N = int(T/dt) #Number of control intervals
```
```
intg_options = {}
intg_options['tf'] = dt  # integration time, from dt
intg_options['simplify'] = True
intg_options['number_of_finite_elements'] = 6  # from 4
dae = {}  # DAE = differential-algebraic equation, CasADi's integrator problem format
dae['x'] = x
dae['p'] = u
dae['ode'] = f(x, u)
intg = cas.integrator('intg', 'rk', dae, intg_options)
res = intg(x0=x, p=u)
x_next = res['xf']
F = cas.Function('F', [x, u], [x_next], ['x', 'u'], ['x_next'])
```
# Problem Definition
### Parameterization of Desired Trajectory ($\vec{x}_d = f_d(s)$)
```
s = cas.MX.sym('s')
xd = s
yd = 0
phid = 0
des_traj = cas.vertcat(xd, yd, phid)
fd = cas.Function('fd',[s],[des_traj],['s'],['des_traj'])
#Globally true information
min_dist = 2 * (2 * .5**2)**.5
# initial_speed = 6.7
initial_speed = 20 * 0.447 # m/s
# Initial Conditions
x0 = np.array([2*min_dist, 1.2*min_dist, 0, 0, initial_speed, 0]).T
x0_2 = np.array([2*min_dist, 0, .0, 0, initial_speed, 0]).T
x0_amb = np.array([0, 0.0, 0, 0, 1.1 * initial_speed , 0]).T
LANE_WIDTH = min_dist
xd2 = s
yd2 = LANE_WIDTH
phid = 0
des_traj2 = cas.vertcat(xd2, yd2, phid)
fd2 = cas.Function('fd',[s],[des_traj2],['s'],['des_traj2'])
```
## Warm Start
### Solve it centrally just to warm start the solution
```
x1_MPC = mpc.MPC(dt)
x2_MPC = mpc.MPC(dt)
x1_MPC.k_s = -1.0
x2_MPC.k_s = -1.0
amb_MPC = mpc.MPC(dt)
amb_MPC.theta_iamb = 0.0
amb_MPC.k_u_v = 0.10
# amb_MPC.k_u_change = 1.0
amb_MPC.k_s = -1.0
amb_MPC.max_v = 40 * 0.447 # m/s
amb_MPC.max_X_dev = 5.0
x2_MPC.fd = fd
amb_MPC.fd = fd
x1_MPC.fd = fd2
x1_MPC.min_y = -1.1 * LANE_WIDTH
x2_MPC.min_y = -1.1 * LANE_WIDTH
amb_MPC.min_y = -1.1 * LANE_WIDTH
speeding_amb_u = np.zeros((2,N))
speeding_amb_u[1,:10] = np.ones((1,10)) * amb_MPC.max_v_u
u0 = np.zeros((2,N))
u0[0,:10] = np.ones((1,10)) * 0
u0[1,:10] = np.ones((1,10)) * x1_MPC.max_v_u
u1 = np.zeros((2,N))
u1[0,:10] = np.ones((1,10)) * 0
u1[1,:10] = np.ones((1,10)) * x2_MPC.max_v_u
opt = mpc.OptimizationMPC(x1_MPC, x2_MPC,amb_MPC)
opt.generate_optimization(N, min_dist, fd, T, x0, x0_2, x0_amb, 2)
# x_warm, x1_warm, xamb_warm = opt.warm_start(u0, u1, speeding_amb_u, x0, x0_2, x0_amb)
x1, u1, x1_des, x2, u2, x2_des, xamb, uamb, xamb_des = opt.get_solution()
optional_suffix = "_newsclassescentral"
subdir_name = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") + optional_suffix
folder = "results/" + subdir_name + "/"
os.makedirs(folder)
os.makedirs(folder+"imgs/")
print(folder)
cplot.plot_cars(x1, x2, xamb, folder,x1_des, x2_des, xamb_des)
CIRCLES = False
if CIRCLES:
    vid_fname = folder + subdir_name + 'circle.mp4'
else:
    vid_fname = folder + subdir_name + 'car.mp4'
if os.path.exists(vid_fname):
    os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
video = io.open(vid_fname, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
plt.plot(range(x1.shape[1]), x1[4,:])
plt.plot(x1[4,:],c='blue')
plt.plot(xamb[4,:],c='red')
plt.ylabel('Speed [m/s]')
plt.show()
```
## IBR
```
br1 = mpc.IterativeBestResponseMPC(x1_MPC, x2_MPC, amb_MPC)
br1.generate_optimization(N, dt, min_dist, fd, T, x0, x0_2, x0_amb, 2)
x1r1, u1r1, x1_desr1 = br1.get_solution(x2, u2, x2_des, xamb, uamb, xamb_des)
cplot.plot_cars(x1r1, x2, xamb, folder)
CIRCLES = False
if CIRCLES:
    vid_fname = folder + subdir_name + 'circle1.mp4'
else:
    vid_fname = folder + subdir_name + 'car1.mp4'
if os.path.exists(vid_fname):
    os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
video = io.open(vid_fname, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
br2 = mpc.IterativeBestResponseMPC(x2_MPC, x1_MPC, amb_MPC)
br2.generate_optimization(N, dt, min_dist, fd, T, x0_2, x0, x0_amb, 2)
x2r2, u2r2, x2_desr2 = br2.get_solution(x1r1, u1r1, x1_desr1, xamb, uamb, xamb_des)
cplot.plot_cars(x1r1, x2r2, xamb, folder)
CIRCLES = False
if CIRCLES:
    vid_fname = folder + subdir_name + 'circle2.mp4'
else:
    vid_fname = folder + subdir_name + 'car2.mp4'
if os.path.exists(vid_fname):
    os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
car1_v_cost = car1_s_cost
car2_v_cost = car2_s_cost
amb_v_cost = amb_s_cost
car1_sub_costs = [car1_u_delta_cost, car1_u_v_cost, k_lat1*car1_lat_cost, k_lon1*car1_lon_cost, k_phi1*car1_phi_cost, k_phid1*phid1_cost, q_v*car1_v_cost]
car1_sub_costs_labels = ['udel1', 'uv1', 'elat1', 'lon1', 'ephi1', 'v1']
plt.bar(range(len(car1_sub_costs)), [sol.value(c) for c in car1_sub_costs])
plt.xticks(range(len(car1_sub_costs)), car1_sub_costs_labels,rotation=45)
plt.title('Car 1')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
car2_sub_costs = [car2_u_delta_cost, car2_u_v_cost, 10*car2_lat_cost, 10*car2_lon_cost, k_phi2*car2_phi_cost, k_phid2*phid2_cost, q_v*car2_v_cost]
car2_sub_costs_labels = ['udel2', 'uv2', 'elat2', 'lon2', 'ephi2', 'v2']
plt.bar(range(len(car2_sub_costs)), [sol.value(c) for c in car2_sub_costs])
plt.xticks(range(len(car2_sub_costs)), car2_sub_costs_labels,rotation=45)
plt.title('Car 2')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
amb_sub_costs = [amb_u_delta_cost, amb_u_v_cost, 10*amb_lat_cost, 10*amb_lon_cost,k_phiamb*amb_phi_cost, k_phidamb*phidamb_cost, q_v*amb_v_cost]
amb_sub_costs_labels = ['udelA', 'uvA', 'elatA', 'lonA', 'ephiA', 'vA']
plt.bar(range(len(amb_sub_costs)), [sol.value(c) for c in amb_sub_costs])
plt.xticks(range(len(amb_sub_costs)), amb_sub_costs_labels,rotation=45)
plt.title('Amb')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
all_costs = [0.1*c for c in car1_sub_costs] + [0.1*c for c in car2_sub_costs] + [10*c for c in amb_sub_costs]
all_labels = car1_sub_costs_labels + car2_sub_costs_labels + amb_sub_costs_labels
plt.bar(range(len(all_costs)), [sol.value(c) for c in all_costs])
plt.xticks(range(len(all_labels)), all_labels,rotation=90)
plt.title('All Cars')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
for BR_iteration in range(20):
    opti2.set_value(x_opt2, sol.value(x_opt))
    opti2.set_value(u_opt2, sol.value(u_opt))
    opti2.set_value(xamb_opt2, sol.value(xamb_opt))
    opti2.set_value(uamb_opt2, sol.value(uamb_opt))
    opti2.set_initial(x2_opt2, sol.value(x2_opt))
    opti2.set_initial(u2_opt2, sol.value(u2_opt))
    sol2 = opti2.solve()
    opti3.set_value(x_opt3, sol2.value(x_opt2))
    opti3.set_value(u_opt3, sol2.value(u_opt2))
    opti3.set_value(x2_opt3, sol2.value(x2_opt2))
    opti3.set_value(u2_opt3, sol2.value(u2_opt2))  # was uamb_opt2, an apparent copy-paste slip
    opti3.set_initial(xamb_opt3, sol2.value(xamb_opt2))
    opti3.set_initial(uamb_opt3, sol2.value(uamb_opt2))
    sol3 = opti3.solve()
    opti.set_value(x2_opt, sol3.value(x2_opt3))
    opti.set_value(xamb_opt, sol3.value(xamb_opt3))
    opti.set_value(u2_opt, sol3.value(u2_opt3))
    opti.set_value(uamb_opt, sol3.value(uamb_opt3))
    opti.set_initial(x_opt, sol3.value(x_opt3))
    opti.set_initial(u_opt, sol3.value(u_opt3))
    sol = opti.solve()
    x_warm = sol.value(x_opt)
    u_warm = sol.value(u_opt)
    x2_warm = sol.value(x2_opt)
    u2_warm = sol.value(u2_opt)
    xamb_warm = sol.value(xamb_opt)
    uamb_warm = sol.value(uamb_opt)
    # x_des = sol/
    for k in range(N+1):
        fig, ax = ego_car.get_frame(x_warm[:,k])
        fig, ax = ego_car.get_frame(x2_warm[:,k], ax)
        fig, ax = ego_car.get_frame(xamb_warm[:,k], ax, amb=True)
        # ax.plot(x_des[0,:], x_des[1,:], '--')
        # ax.plot(x2_des[0,:], x2_des[1,:], '--')
        ax = plt.gca()
        window_width = 24
        window_height = window_width
        xmin, xmax = -1, -1 + window_width
        ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
        ax.set_ylim((ymin, ymax))
        ax.set_xlim((xmin, xmax))
        fig.savefig(folder + 'imgs/' + '{:03d}.png'.format(k))
        plt.close(fig)
    vid_fname = folder + '%02d' % BR_iteration + 'car.mp4'
    if os.path.exists(vid_fname):
        os.remove(vid_fname)
    cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
    os.system(cmd)
    print('Saving video to: {}'.format(vid_fname))
for BR_iteration in range(20):
    vid_fname = folder + '%02d' % BR_iteration + 'car.mp4'
    print('Saving video to: {}'.format(vid_fname))
```
## Best Response V2
```
x1 = sol3.value(x_opt3)
x2 = sol3.value(x2_opt3)
xamb = sol3.value(xamb_opt3)
x_des = sol3.value(xamb_desired_3)
for k in range(N+1):
    fig, ax = ego_car.get_frame(x1[:,k])
    fig, ax = ego_car.get_frame(x2[:,k], ax)
    fig, ax = ego_car.get_frame(xamb[:,k], ax, amb=True)
    ax.plot(x_des[0,:], x_des[1,:], '--')
    # ax.plot(x2_des[0,:], x2_des[1,:], '--')
    ax = plt.gca()
    window_width = 24
    window_height = window_width
    xmin, xmax = -1, -1 + window_width
    ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
    ax.set_ylim((ymin, ymax))
    ax.set_xlim((xmin, xmax))
    fig.savefig(folder + 'imgs/' + '{:03d}.png'.format(k))
    plt.close(fig)
vid_fname = folder + 'caramb.mp4'
if os.path.exists(vid_fname):
    os.remove(vid_fname)
cmd = 'ffmpeg -r 16 -f image2 -i {}imgs/%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {}'.format(folder, vid_fname)
os.system(cmd)
print('Saving video to: {}'.format(vid_fname))
car1_sub_costs = [car1_u_delta_cost, car1_u_v_cost, 10*car1_lat_cost, 10*car1_lon_cost, car1_phi_cost, phid1_cost, q_v*car1_v_cost]
car1_sub_costs_labels = ['udel1', 'uv1', 'elat1', 'lon1', 'ephi1', 'v1']
plt.bar(range(len(car1_sub_costs)), [sol.value(c) for c in car1_sub_costs])
plt.xticks(range(len(car1_sub_costs)), car1_sub_costs_labels,rotation=45)
plt.title('Car 1')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
car2_sub_costs = [car2_u_delta_cost, car2_u_v_cost, 10*car2_lat_cost, 10*car2_lon_cost, car2_phi_cost, phid2_cost, q_v*car2_v_cost]
car2_sub_costs_labels = ['udel2', 'uv2', 'elat2', 'lon2', 'ephi2', 'v2']
plt.bar(range(len(car2_sub_costs)), [sol.value(c) for c in car2_sub_costs])
plt.xticks(range(len(car2_sub_costs)), car2_sub_costs_labels,rotation=45)
plt.title('Car 2')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
amb_sub_costs = [amb_u_delta_cost, amb_u_v_cost, 10*amb_lat_cost, 10*amb_lon_cost, amb_phi_cost, phidamb_cost, q_v*amb_v_cost]
amb_sub_costs_labels = ['udelA', 'uvA', 'elatA', 'lonA', 'ephiA', 'vA']
plt.bar(range(len(amb_sub_costs)), [sol.value(c) for c in amb_sub_costs])
plt.xticks(range(len(amb_sub_costs)), amb_sub_costs_labels,rotation=45)
plt.title('Amb')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
plt.show()
all_costs = [0.1*c for c in car1_sub_costs] + [0.1*c for c in car2_sub_costs] + [10*c for c in amb_sub_costs]
all_labels = car1_sub_costs_labels + car2_sub_costs_labels + amb_sub_costs_labels
plt.bar(range(len(all_costs)), [sol.value(c) for c in all_costs])
plt.xticks(range(len(all_labels)), all_labels,rotation=90)
plt.title('All Cars')
plt.xlabel("Subcost")
plt.ylabel("Cost Value")
sol.value(x_opt)[3:5, 10:20]
dt
plt.plot(opti.debug.value(x_opt)[4,:],'o',c='b')
plt.plot(opti.debug.value(x2_opt)[4,:],'o',c='g')
plt.plot(opti.debug.value(xamb_opt)[4,:],'o',c='r')
plt.ylabel("Velocity")
plt.show()
plt.plot(opti.debug.value(u_opt)[1,:],'o',c='b')
plt.plot(opti.debug.value(u2_opt)[1,:],'o',c='g')
plt.plot(opti.debug.value(uamb_opt)[1,:],'o',c='r')
plt.ylabel("Acceleration $\delta V_u$")
plt.show()
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='b')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='g')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='b')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='r')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(xamb_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='g')
plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(xamb_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='r')
plt.hlines(min_dist,0,50)
plt.ylabel('Intervehicle Distance')
plt.ylim([-.1, 2*min_dist])
plt.plot([opti.debug.value(slack1) for k in range(opti.debug.value(x_opt).shape[1])],'.',c='b')
plt.plot([opti.debug.value(slack2) for k in range(opti.debug.value(x_opt).shape[1])],'.',c='r')
plt.plot([opti.debug.value(slack3) for k in range(opti.debug.value(x_opt).shape[1])],'.',c='g')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='b')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='g')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='b')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(x_opt[0:2,k] - xamb_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='r')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(xamb_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'o',c='g')
# plt.plot([np.sqrt(opti.debug.value(cas.sumsqr(xamb_opt[0:2,k] - x2_opt[0:2,k]))) for k in range(opti.debug.value(x_opt).shape[1])],'x',c='r')
plt.ylabel('slack')
# plt.ylim([.7,.71])
if not PLOT_LIVE:
    for k in range(N+1):
        fig, ax = ego_car.get_frame(x_mpc[:,k])
        fig, ax = ego_car.get_frame(x2_mpc[:,k], ax)
        fig, ax = ego_car.get_frame(xamb_mpc[:,k], ax, amb=True)
        ax.plot(x_des[0,:], x_des[1,:], '--')
        ax.plot(x2_des[0,:], x2_des[1,:], '--')
        ax = plt.gca()
        window_width = 24
        window_height = window_width
        xmin, xmax = -1, -1 + window_width
        ymin, ymax = -int(window_height/4.0), int(window_height/4.0)
        ax.set_ylim((ymin, ymax))
        ax.set_xlim((xmin, xmax))
        fig.savefig(folder + 'imgs/' + '{:03d}.png'.format(k))
        plt.close(fig)
```
# Extension Methods
Not all operators are loaded when `rx` is imported.
```
# Example: from_marbles
import rx
try:
    rx.Observable.from_marbles('a-b|')
except Exception as ex:
    print('error:', ex)  # shown only after ipython notebook kernel restart
# -> to see what's there don't use e.g. `dir(Observable)` but find
# 'def from_marbles' in the rx directory, to see the module,
# then import it:
'''
~/GitHub/RxPY/rx $ ack 'def from_marbl'
testing/marbles.py
90:def from_marbles(self, scheduler=None):
'''
import rx.testing.marbles
def show(x): print(x)
stream = rx.Observable.from_marbles('a-b--c|').to_blocking().subscribe(show)
```
# Async Operations
It is useful to understand, at a high level, how RxPY handles asynchronicity, and when.
For example, you might naively want to know, when notifying a value to a subscriber, which other subscribers are present.
That question makes no sense to ask (in reactive programming generally, I think), and this becomes clear from an example.
Consider the timing and thread outputs in the following:
```
# =============================
# change these (both in millis)
delay_stream, slow_emit = 0, 0
# =============================
import rx, threading, random, time
thread = threading.currentThread
def call_observer(obs):
    '''observer functions are invoked, blocking'''
    print_out(obs.__class__.__name__, hash(obs))
    for i in range(2):
        obs.on_next(1)
        if slow_emit:
            time.sleep(slow_emit/1000)
    obs.on_next(1)
stream = rx.Observable.create(call_observer).take(10)
if delay_stream:
    stream = stream.delay(delay_stream)
def print_out(*v):
    '''printout of current time, v, and current thread'''
    v_pretty = ' '.join([str(s) for s in v])
    print('%.8f - %30s - %s\n' % (time.time(), v_pretty, thread().getName()))
d = stream.subscribe(lambda x: print_out('Observer 1', x))
d = stream.subscribe(lambda x: print_out('Observer 2', x))
```
- As long as no time-related stream operator is involved, RxPY does everything *synchronously*.
- RxPY goes async only when it has to, according to the nature of the async operation declared by the user.
- It defaults to reasonable mechanics, e.g. using threading.
- You can override these defaults by picking a "scheduler" (e.g. gevent, twisted, futures)
> => In the `call_observer` function you can't know about the concurrency situation.
> It solely depends on the design of the stream operations applied.
> See `.ref_count()`, though, for published streams
Check the [`.observe_on`](./Part%20VII%20-%20Meta%20Operations.ipynb#...when-it-notifies-observers-observe_on) example to get a deeper understanding how scheduling works.
# CDSAPI request examples
## Python workshop, EMS2019
### A few tips before we begin
- Use the CDS Web download form to construct the base of your request and then build on it.
- Reanalysis ERA5 data is originally stored in GRIB format; when you download it as netCDF, the conversion will fail if there is more than one **Product type** in the request.
- Data that is originally stored as netCDF will be offered for download as a zip or compressed tar file. If you are downloading just one file (you selected exactly one choice in every category), change the extension to **.nc** (but leave the format as zip).
- For people used to the MARS archive: for keys that are the same as in a MARS request (area, time and date for daily data) you can use MARS syntax in some requests.
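The zip-vs-netCDF tip above can also be handled programmatically; a small standard-library sketch (file names here are made up for illustration) that extracts a delivered archive only when it really is a zip:

```python
import os
import zipfile

def extract_if_zip(path, out_dir='.'):
    """If the downloaded file is actually a zip archive, extract it;
    otherwise leave it alone. Returns the list of usable file paths."""
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            zf.extractall(out_dir)
            return [os.path.join(out_dir, n) for n in zf.namelist()]
    return [path]
```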
## Examples list
[Example 1 - Seasonal forecast](#example1)
[Example 2 - ERA5 Land Total precipitation data](#example2)
[Example 3 - ERA5 Temperature on pressure levels over Europe](#example3)
[Example 4 - Hourly ERA5 data in netcdf format](#example4)
[Example 5 - Hourly ERA5 Land data in grib format for small area](#example5)
[Example 6 - Satellite observation data](#example6)
[Example 7 - Satellite observation data - one file in request](#example7)
[Example 8 - Climate projections data](#example8)
[Example 9 - ERA5 Monthly data](#example9)
[Example 10 - ERA5 data on model levels from public datasets in MARS](#example10)
[Example 11 - UERRA regional reanalysis for Europe on height levels](#example11)
[Example 12 - ERA5 reanalysis - ensemble data on pressure levels](#example12)
Before we start, we need to import cds and datetime libraries
```
import cdsapi
from datetime import date, timedelta
import xarray as xr
c = cdsapi.Client()
```
<a id='example1'></a>
### Example 1 - Seasonal forecast
Get the latest seasonal forecast anomalies on single levels from ECMWF and the UK Met Office. We have to use the download form to know which system is available from which model for the period we want the forecast for.
```
c.retrieve(
'seasonal-postprocessed-single-levels',
{
'format':'grib',
'originating_centre':[
'ecmwf','ukmo'
],
'system':[
'5','14'
],
'variable':'2m_temperature_anomaly',
'product_type':'monthly_mean',
'year':'2019',
'month':'05',
'leadtime_month':[
'1','2'
]
},
'data/weather/seasonal.grib')
```
<a id='example2'></a>
### Example 2 - ERA5 Land Total precipitation data
Get the daily total precipitation data from ERA5 Land dataset for month of January 2012.
In the ERA5 Land dataset, total precipitation is archived as accumulations from the start of the period, which in this case is the start of the day. So the 23h value contains the daily precipitation.
ERA5 Land only has the high-resolution model, so there is no 'product_type' parameter.
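To recover hourly amounts from such accumulated fields, consecutive accumulations can be differenced; a minimal sketch with toy numbers (not actual ERA5 values):

```python
def hourly_from_accumulated(acc):
    """acc[h] = precipitation accumulated since 00 UTC up to hour h;
    the last value is then the daily total."""
    return [acc[0]] + [b - a for a, b in zip(acc, acc[1:])]

acc = [0.0, 0.25, 0.25, 0.5]     # toy accumulations (mm)
hourly_from_accumulated(acc)     # -> [0.0, 0.25, 0.0, 0.25]
```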
```
c.retrieve(
'reanalysis-era5-land',
{
'variable':[
'total_precipitation'
],
'year':'2012',
'month':'01',
'day':[
'01','02','03',
'04','05','06',
'07','08','09',
'10','11','12',
'13','14','15',
'16','17','18',
'19','20','21',
'22','23','24',
'25','26','27',
'28','29','30',
'31'
],
'time': '23:00',
'format':'netcdf'
},
'data/weather/total_precipitation_2012_01.nc')
```
<a id='example3'></a>
## Example 3 - ERA5 Temperature on pressure levels over Europe
Get temperature at 850 hPa and 500 hPa for days between 15th and 17th August 2008 and times at 00, 06, 12 and 18 UTC. Save the result in separate daily netCDF files.
Note that instead of giving day, month and year separately, we can just give 'date'.
Get the data for the Europe area. Just add **'area' : 'europe'**. Unfortunately this is the only shortcut word that I've found to work 🙂
```
sdate = date(2008, 8, 15) # start date
edate = date(2008, 8, 17) # end date
delta = edate - sdate # as timedelta
for i in range(delta.days + 1):
    d = sdate + timedelta(days=i)
    d = str(d)
    c.retrieve("reanalysis-era5-pressure-levels",
        {
            "variable": "temperature",
            "pressure_level": ['850','500'],
            "product_type": "reanalysis",
            "area": "europe",
            "date": d,
            "time": ['0','6','12','18'],
            "format": "netcdf"
        },
        "data/weather/ea_t_{day}.nc".format(day=d)
    )
```
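The date iteration in the request above can be factored into a small helper; a standalone sketch of the same pattern (standard library only):

```python
from datetime import date, timedelta

def date_strings(sdate, edate):
    """ISO date strings for every day from sdate to edate, inclusive."""
    delta = edate - sdate
    return [str(sdate + timedelta(days=i)) for i in range(delta.days + 1)]

date_strings(date(2008, 8, 15), date(2008, 8, 17))
# -> ['2008-08-15', '2008-08-16', '2008-08-17']
```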
<a id='example4'></a>
### Example 4 - Hourly ERA5 data in netcdf format
Get hourly data from ERA5 single level dataset for dates from 28th April to 3rd May 2018 for global area.
Get these variables:
- wind gusts
- minimum 2m temperature
- maximum 2m temperature
In previous example we had a request for each day. This time we we make one request for all the dates.
Giving the list of dates is also the only way to get the data for different days in different months.
Instead of a whole list of numbers from 0 to 23, we can just put **'all'** for time.
```
c.retrieve("reanalysis-era5-single-levels",
{
"variable":["10m_wind_gust_since_previous_post_processing",
"maximum_2m_temperature_since_previous_post_processing",
"minimum_2m_temperature_since_previous_post_processing"],
"product_type": "reanalysis",
"date": ['20180428',
'20180429',
'20180430',
'20180501',
'20180502',
'20180503'],
"time": ['all'],
"grid": [1,1],
"format": "grib"
},
"data/weather/min_max_t_and_wind_gust.grib"
)
```
<a id='example5'></a>
### Example 5 - Hourly ERA5 Land data in grib format for small area
Get hourly 2m temperature and u and v 10m wind component data for the whole year for years from 2010 to 2013. Each year is one request and saved in one grib file.
Using **'area'** keyword, get the data only for London area.
Don't worry that some combinations don't exist (e.g., April 31); the API will ignore these fields.
💡 Be careful when using ERA5 Land data. It is a very new dataset and it is structured differently than the rest of ERA5.
Note: **Even though the retrieved data is very small because we want a very small area, the requests themselves are very big (a year of hourly data with 3 parameters), so each will run for at least 20 minutes. Don't run it at the workshop. Try it out at home.**
```
years = ['2010','2011','2012','2013']
for year in years:
    file_name = 'data/weather/london_data_{y}.grib'.format(y=year)
    if use_cds:  # 'use_cds' toggle assumed defined earlier in the notebook
        c = cdsapi.Client()
        c.retrieve('reanalysis-era5-land',
{
'variable':[
'10m_u_component_of_wind',
'10m_v_component_of_wind',
'2m_temperature',
],
'year':[
year
],
'month':[
'01','02','03',
'04','05','06',
'07','08','09',
'10','11','12'
],
'day':[
'01','02','03',
'04','05','06',
'07','08','09',
'10','11','12',
'13','14','15',
'16','17','18',
'19','20','21',
'22','23','24',
'25','26','27',
'28','29','30',
'31'
],
'time':[
'00:00','01:00','02:00',
'03:00','04:00','05:00',
'06:00','07:00','08:00',
'09:00','10:00','11:00',
'12:00','13:00','14:00',
'15:00','16:00','17:00',
'18:00','19:00','20:00',
'21:00','22:00','23:00'
],
'format':'grib',
'area':[51,-1,52,1]
},
file_name)
```
<a id='example6'></a>
## Example 6 - Satellite data - Sea level anomaly
Get the Sea level gridded data for the global ocean for three days in March 2018.
```
c.retrieve(
'satellite-sea-level-global',
{
'variable':'all',
'format':'zip',
'year':'2018',
'month':'03',
'day':['03', '04', '05']
},
'data/weather/satellite.zip')
```
<a id='example7'></a>
## Example 7 - Satellite data - Soil moisture - one netCDF file
Get the Volumetric surface soil moisture day average data for 2nd June 2019.
Requests that are originally in netCDF format are available compressed into one zip file.
💡 When there is only one netCDF file in the zip archive there will be an error. Before running such requests, change the extension in the file name from .zip to .nc.
```
c.retrieve(
'satellite-soil-moisture',
{
'format':'zip',
'variable':'volumetric_surface_soil_moisture',
'type_of_sensor':'combined_passive_and_active',
'time_aggregation':'day_average',
'year':'2019',
'month':'06',
'day':'02',
'type_of_record':'icdr',
'version':'v201706.0.0'
},
'data/weather/satellite_one.nc')
```
<a id='example8'></a>
## Example 8 - Climate projections data - 2m temperature and 2m Tmax
Get 2m temperature and Maximum 2m temperature in the last 24 hours from the GFDL-CM3 (NOAA, USA) model,
RCP 8.5 experiment and ensemble member r1i1p1 for the 2021-2025 period.
💡 Always use the Web download form to check which combination of model, experiment and ensemble member is available for the period you're interested in.
```
c.retrieve(
'projections-cmip5-monthly-single-levels',
{
'ensemble_member':'r1i1p1',
'format':'zip',
'variable':[
'2m_temperature','maximum_2m_temperature_in_the_last_24_hours'
],
'model':'gfdl_cm3',
'experiment':'rcp_8_5',
'period':'202101-202512'
},
'data/weather/climate_projection.zip')
```
<a id='example9'></a>
## Example 9 - ERA5 Reanalysis - Monthly data
Get Monthly averaged 2m temperature for first 3 months of 2018 for a small area in grib format.
```
c.retrieve(
'reanalysis-era5-single-levels-monthly-means',
{
'format':'grib',
'product_type':'monthly_averaged_reanalysis',
'variable':'2m_temperature',
'area' : [42,19,17,24],
'year':'2018',
'month':['1','2','3'],
'time':'00:00'
},
'data/weather/ERA5_monthly.grib')
```
<a id='example10'></a>
### Example 10 - ERA5 data on model levels from public datasets in MARS
Get the data on model levels from the ERA5 complete dataset.
Keep in mind that this data is not stored in the CDS, but in the public part of the MARS archive, accessed through the CDS. Because of this you can expect queuing to last hours or even days.
💡 If the data you're interested in contains **accumulated fields**, which are 'type'=FC, you will need to provide a **step** too.
💡 When retrieving data from the **reanalysis-era5-complete** dataset, **always make sure you have the grid parameter** in your request; otherwise the retrieved data will be a GRIB file on the default reduced Gaussian grid.
**Don't run this example at the workshop. Try it out at home.**
```
c.retrieve('reanalysis-era5-complete',{
'class':'ea',
'date':'20180525/to/20180605',
'area':'50/5/40/27',
'expver':'1',
'levelist': '1/to/137',
'levtype':'ml',
'param':'129/130/131/132/133/152',
'stream':'oper',
'time':'00:00:00/03:00:00/06:00:00/09:00:00/12:00:00/15:00:00/18:00:00/21:00:00',
'type':'an',
'grid':"0.25/0.25",
},'data/weather/ERA5-ml.grb')
```
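The tips above about accumulated fields and the grid parameter can be made concrete. The sketch below builds a hypothetical request for an accumulated forecast field; the parameter id, dates and output path are illustrative assumptions, not taken from the workshop material, and the actual `c.retrieve` call is left commented out.

```python
# Hypothetical sketch: a reanalysis-era5-complete request for an
# accumulated field, which is forecast type ('type':'fc') and therefore
# also needs 'step'. Parameter id, dates and path are assumptions.
request = {
    'class': 'ea',
    'date': '2018-05-25',
    'expver': '1',
    'levtype': 'sfc',
    'param': '228.128',           # total precipitation (assumed MARS id)
    'stream': 'oper',
    'time': '06:00:00/18:00:00',
    'type': 'fc',                 # accumulated fields are forecast type
    'step': '1/to/12',            # required for accumulated fields
    'grid': '0.25/0.25',          # always set grid for this dataset
}
# c.retrieve('reanalysis-era5-complete', request, 'data/weather/era5_fc.grb')
```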
<a id='example11'></a>
## Example 11 - UERRA regional reanalysis for Europe on height levels
Get the wind data from the UERRA regional reanalysis on height levels.
The web download form won't let you select more than one parameter for this dataset, but putting more than one parameter in the API request works.
Regional reanalysis for the European domain is available from two systems: UERRA-HARMONIE and MESCAN-SURFEX. While surface data and data on pressure levels are available from both, data on height levels in metres is only available from UERRA-HARMONIE. That is why this dataset has no 'origin' parameter, unlike the other two UERRA datasets.
```
c.retrieve(
'reanalysis-uerra-europe-height-levels',
{
'variable':['wind_direction','relative_humidity'],
'height_level':[
'100_m','150_m','15_m',
'200_m','250_m','300_m',
'30_m','400_m','500_m',
'50_m','75_m'
],
'year':'2018',
'month':'03',
'day':'03',
'time':[
'00:00','06:00','12:00',
'18:00'
],
'format':'grib'
},
'data/weather/UERRA.grib')
```
<a id='example12'></a>
## Example 12 - ERA5 reanalysis - ensemble data on pressure levels
Download temperature and relative humidity data on the 300 and 500 hPa levels over Europe for the period from 5 to 7 May 2018. Download all ensemble members.
```
c.retrieve(
'reanalysis-era5-pressure-levels',
{
'product_type':'ensemble_members',
'variable':[
'relative_humidity','temperature'
],
'pressure_level':[
'300','500'
],
'year':'2018',
'month':[
'05'
],
'day':[
'05','06','07'
],
'time':[
'00:00','03:00','06:00',
'09:00','12:00','15:00',
'18:00','21:00'
],
'area':[35,-10,52,25],
'format':'grib'
},
'data/weather/ensemble_data_pl.grib')
```
## Exploration of GradientSHAP with binary MNIST
**Function : Exploration of GradientSHAP with binary MNIST**<br>
**Author : Team DIANNA**<br>
**Contributor :**<br>
**First Built : 2021.06.28**<br>
**Last Update : 2021.07.06**<br>
**Library : os, numpy, matplotlib, torch, captum**<br>
**Description : In this notebook we test XAI method GradientSHAP using trained binary MNIST model.**<br>
**Return Values : Shapley scores**<br>
**Note** : We use the Captum library to perform GradientSHAP. This library works only with PyTorch and is not compatible with ONNX.<br>
```
%matplotlib inline
import os
import numpy as np
# DL framework
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data
# XAI framework
from captum.attr import GradientShap
from captum.attr import IntegratedGradients
from captum.attr import visualization as viz
# for plotting
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
```
### Path to the dataset and the model
```
# please specify data path
datapath = '/mnt/d/NLeSC/DIANNA/data/mnist/binary-MNIST'
# please specify model path
model_path = '/mnt/d/NLeSC/DIANNA/codebase/dianna/example_data/model_generation/MNIST'
```
### Load data (binary MNIST)
```
# load binary MNIST from local
# load data
fd = np.load(os.path.join(datapath, 'binary-mnist.npz'))
# training set
train_X = fd['X_train']
train_y = fd['y_train']
# testing set
test_X = fd['X_test']
test_y = fd['y_test']
fd.close()
# dimensions of data
print("dimensions of mnist:")
print("dimensions of training set", train_X.shape)
print("dimensions of training set label", train_y.shape)
print("dimensions of testing set", test_X.shape)
print("dimensions of testing set label", test_y.shape)
# statistics of training set
print("statistics of training set:")
print("Digits: 0 1")
print("labels: {}".format(np.unique(train_y)))
print("Class distribution: {}".format(np.bincount(train_y)))
print("Labels of training set", train_y[:20])
```
### Prepare data as torch tensor
```
# use pytorch data loader
test_X_torch = torch.from_numpy(test_X).type(torch.FloatTensor)
test_y_torch = torch.from_numpy(test_y).type(torch.LongTensor)
# reshape the input following the definition in pytorch (batch, channel, Height, Width)
test_X_torch = test_X_torch.view(-1,1,28,28)
```
### Load model (Pytorch model trained for binary MNIST)
```
# define the model first
class MnistNet(nn.Module):
def __init__(self, kernels=[16, 32], dropout = 0.1, classes=2):
'''
Two layer CNN model with max pooling.
'''
super(MnistNet, self).__init__()
self.kernels = kernels
# 1st layer
self.layer1 = nn.Sequential(
nn.Conv2d(1, kernels[0], kernel_size=5, stride=1, padding=2),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.ReLU(),
nn.Dropout()
)
# 2nd layer
self.layer2 = nn.Sequential(
nn.Conv2d(kernels[0], kernels[1], kernel_size=5, stride=1, padding=2),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.ReLU(),
nn.Dropout()
)
self.fc1 = nn.Linear(7 * 7 * kernels[-1], kernels[-1]) # pixel 28 / maxpooling 2 * 2 = 7
self.fc2 = nn.Linear(kernels[-1], classes)
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
x = x.reshape(x.size(0), -1)
x = self.fc1(x)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
# hyper-parameters
kernels = [16, 32]
dropout = 0.5
classes = 2
# create model
model = MnistNet(kernels, dropout, classes)
# load whole model state
checkpoint = torch.load(os.path.join(model_path, 'mnistnet_training_checkpoint.pt'))
model.load_state_dict(checkpoint['model_state_dict'])
```
### Predict the class of the input image <br>
About how to use ONNX model: https://pytorch.org/docs/stable/onnx.html <br>
```
# check the prediction
model.eval()
# overall test accuracy
correct = 0
for i in range(len(test_X_torch)):
output = model(test_X_torch[i:i+1,:,:,:])
predicted = torch.max(output,1)[1]
correct += (predicted == test_y[i]).sum()
print("Test accuracy:{:.3f}% ".format(float(correct*100) / float(len(test_X_torch))))
# check one case
output = model(test_X_torch[:1,:,:,:])
predicted = torch.max(output,1)[1]
print("prediction", predicted)
print("ground truth", test_y[0])
```
### Gradient-based attribution <br>
Compute attributions using Integrated Gradients and visualize them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class (`test_y_torch` in this case) with respect to the input image pixels along the path from the black image to our input image.
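Before calling Captum, the definition itself can be sanity-checked in one dimension. The sketch below is plain NumPy, not Captum: it applies the Integrated Gradients formula to a toy function f(z) = sin(z) with a midpoint Riemann sum and verifies the completeness property, i.e. that the attribution equals f(x) - f(baseline).

```python
import numpy as np

# Toy 1-D illustration of the Integrated Gradients definition (not Captum):
# IG(x) = (x - baseline) * integral_0^1 f'(baseline + a * (x - baseline)) da
f, f_grad = np.sin, np.cos            # f(z) = sin(z) and its gradient
x, baseline = 1.2, 0.0

n_steps = 100                                   # same role as n_steps in Captum
alphas = (np.arange(n_steps) + 0.5) / n_steps   # midpoint rule on [0, 1]
ig = (x - baseline) * np.mean(f_grad(baseline + alphas * (x - baseline)))

# completeness: the attribution sums to f(x) - f(baseline)
print(ig, f(x) - f(baseline))
```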
```
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(test_X_torch[:1,:,:,:], target=test_y_torch[0], n_steps=100)
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
# Shape must be in the form (H, W, C), with channels as last dimension, see source code:
# https://github.com/pytorch/captum/blob/master/captum/attr/_utils/visualization.py
_ = viz.visualize_image_attr(np.transpose(attributions_ig[0,:,:,:].cpu().detach().numpy(), (1,2,0)),
np.transpose(test_X_torch[0,:,:,:].cpu().detach().numpy(), (1,2,0)),
method='heat_map',cmap=default_cmap,show_colorbar=True,
sign='positive',outlier_perc=1)
```
### Compute GradientShap
Compute the Shapley score based on the gradients of the model: it estimates the expectation of gradients at inputs chosen randomly between the input and a baseline, where the baseline itself is drawn randomly from a given baseline distribution. <br>
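The expectation behind GradientSHAP can be illustrated in one dimension with plain NumPy (this is not Captum code, and the Gaussian noise controlled by `stdevs` is omitted for clarity): gradients are evaluated at points drawn uniformly at random on the path between baseline and input, and their average times (input - baseline) recovers f(x) - f(baseline) for f(z) = z**2.

```python
import numpy as np

# 1-D Monte Carlo illustration of the expected-gradients idea behind
# GradientSHAP (noise term omitted; this is not Captum code).
rng = np.random.default_rng(0)
grad = lambda z: 2.0 * z              # gradient of f(z) = z**2
x, baseline = 3.0, 0.0

alphas = rng.uniform(size=50_000)     # random positions on the path
points = baseline + alphas * (x - baseline)
attribution = np.mean(grad(points) * (x - baseline))

print(attribution)   # close to f(x) - f(baseline) = 9
```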
```
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([test_X_torch[:1,:,:,:] * 0, test_X_torch[:1,:,:,:] * 1])
#rand_img_dist = test_X_torch[np.random.choice(test_X_torch.shape[0], 100, replace=False)]
# plot the attribution for a number of cases
case = 5
for i in range(case):
attributions_gs = gradient_shap.attribute(test_X_torch[i:i+1,:,:,:],
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=test_y_torch[i])
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs[0,:,:,:].cpu().detach().numpy(), (1,2,0)),
np.transpose(test_X_torch[i,:,:,:].cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],#
cmap=default_cmap,
show_colorbar=True)
```
###### Content provided under a Creative Commons Attribution license, CC-BY 4.0; code under BSD 3-Clause license. (c)2014 Lorena A. Barba, Olivier Mesnard. Thanks: NSF for support via CAREER award #1149784.
# Lift on a cylinder
Remember when we computed uniform flow past a [doublet](03_Lesson03_doublet.ipynb)? The stream-line pattern produced flow around a cylinder. When studying the pressure coefficient, we realized that the drag on the cylinder was exactly zero, leading to the _D'Alembert paradox_.
_What about lift?_ Is it possible for a perfectly circular cylinder to experience lift?
What if the cylinder is rotating? Have you heard about the Magnus effect?
You might be surprised to learn that all we need to do is add a [vortex](04_Lesson04_vortex.ipynb) in the center of the cylinder. Let's see how that looks.
First, we recall the equations for the flow of a doublet. In Cartesian coordinates, a doublet located at the origin has a stream function and velocity components given by
$$\psi\left(x,y\right) = -\frac{\kappa}{2\pi}\frac{y}{x^2+y^2}$$
$$u\left(x,y\right) = \frac{\partial\psi}{\partial y} = -\frac{\kappa}{2\pi}\frac{x^2-y^2}{\left(x^2+y^2\right)^2}$$
$$v\left(x,y\right) = -\frac{\partial\psi}{\partial x} = -\frac{\kappa}{2\pi}\frac{2xy}{\left(x^2+y^2\right)^2}$$
## Let's start computing!
We'll place a doublet of strength $\kappa=1$ at the origin, and add a free stream $U_\infty=1$ (yes, we really like the number one). We can re-use the code we have written before; this is always a good thing.
```
import math
import numpy
from matplotlib import pyplot
# embed the figures into the notebook
%matplotlib inline
N = 50 # Number of points in each direction
x_start, x_end = -2.0, 2.0 # x-direction boundaries
y_start, y_end = -1.0, 1.0 # y-direction boundaries
x = numpy.linspace(x_start, x_end, N) # computes a 1D-array for x
y = numpy.linspace(y_start, y_end, N) # computes a 1D-array for y
X, Y = numpy.meshgrid(x, y) # generates a mesh grid
kappa = 1.0 # strength of the doublet
x_doublet, y_doublet = 0.0, 0.0 # location of the doublet
u_inf = 1.0 # freestream speed
```
Here are our function definitions for the doublet:
```
def get_velocity_doublet(strength, xd, yd, X, Y):
"""
Returns the velocity field generated by a doublet.
Parameters
----------
strength: float
Strength of the doublet.
xd: float
x-coordinate of the doublet.
yd: float
y-coordinate of the doublet.
X: 2D Numpy array of floats
x-coordinate of the mesh points.
Y: 2D Numpy array of floats
y-coordinate of the mesh points.
Returns
-------
u: 2D Numpy array of floats
x-component of the velocity vector field.
v: 2D Numpy array of floats
y-component of the velocity vector field.
"""
u = (-strength / (2 * math.pi) *
((X - xd)**2 - (Y - yd)**2) /
((X - xd)**2 + (Y - yd)**2)**2)
v = (-strength / (2 * math.pi) *
2 * (X - xd) * (Y - yd) /
((X - xd)**2 + (Y - yd)**2)**2)
return u, v
def get_stream_function_doublet(strength, xd, yd, X, Y):
"""
Returns the stream-function generated by a doublet.
Parameters
----------
strength: float
Strength of the doublet.
xd: float
x-coordinate of the doublet.
yd: float
y-coordinate of the doublet.
X: 2D Numpy array of floats
x-coordinate of the mesh points.
Y: 2D Numpy array of floats
y-coordinate of the mesh points.
Returns
-------
psi: 2D Numpy array of floats
The stream-function.
"""
psi = -strength / (2 * math.pi) * (Y - yd) / ((X - xd)**2 + (Y - yd)**2)
return psi
```
And now we compute everything to get the flow around a cylinder, by adding a free stream to the doublet:
```
# compute the velocity field on the mesh grid
u_doublet, v_doublet = get_velocity_doublet(kappa, x_doublet, y_doublet, X, Y)
# compute the stream-function on the mesh grid
psi_doublet = get_stream_function_doublet(kappa, x_doublet, y_doublet, X, Y)
# freestream velocity components
u_freestream = u_inf * numpy.ones((N, N), dtype=float)
v_freestream = numpy.zeros((N, N), dtype=float)
# stream-function of the freestream flow
psi_freestream = u_inf * Y
# superposition of the doublet on the freestream flow
u = u_freestream + u_doublet
v = v_freestream + v_doublet
psi = psi_freestream + psi_doublet
```
We are ready to do a nice visualization.
```
# plot the streamlines
width = 10
height = (y_end - y_start) / (x_end - x_start) * width
pyplot.figure(figsize=(width, height))
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.streamplot(X, Y, u, v, density=2, linewidth=1, arrowsize=1, arrowstyle='->')
pyplot.scatter(x_doublet, y_doublet, color='#CD2305', s=80, marker='o')
# calculate the cylinder radius and add the cylinder to the figure
R = math.sqrt(kappa / (2 * math.pi * u_inf))
circle = pyplot.Circle((0, 0), radius=R, color='#CD2305', alpha=0.5)
pyplot.gca().add_patch(circle)
# calculate the stagnation points and add them to the figure
x_stagn1, y_stagn1 = +math.sqrt(kappa / (2 * math.pi * u_inf)), 0.0
x_stagn2, y_stagn2 = -math.sqrt(kappa / (2 * math.pi * u_inf)), 0.0
pyplot.scatter([x_stagn1, x_stagn2], [y_stagn1, y_stagn2],
color='g', s=80, marker='o');
```
Nice! We have cylinder flow.
Now, let's add a vortex located at the origin with a positive strength $\Gamma$. In Cartesian coordinates, the stream function and velocity components are given by:
$$\psi\left(x,y\right) = \frac{\Gamma}{4\pi}\ln\left(x^2+y^2\right)$$
$$u\left(x,y\right) = \frac{\Gamma}{2\pi}\frac{y}{x^2+y^2} \qquad v\left(x,y\right) = -\frac{\Gamma}{2\pi}\frac{x}{x^2+y^2}$$
Based on these equations, we define the functions `get_velocity_vortex()` and `get_stream_function_vortex()` to do ... well, what's obvious by the function names (you should always try to come up with obvious function names). Play around with the value of $\ \Gamma$ and recalculate the flow. See what happens.
```
gamma = 4.0 # strength of the vortex
x_vortex, y_vortex = 0.0, 0.0 # location of the vortex
def get_velocity_vortex(strength, xv, yv, X, Y):
"""
Returns the velocity field generated by a vortex.
Parameters
----------
strength: float
Strength of the vortex.
xv: float
x-coordinate of the vortex.
yv: float
y-coordinate of the vortex.
X: 2D Numpy array of floats
x-coordinate of the mesh points.
Y: 2D Numpy array of floats
y-coordinate of the mesh points.
Returns
-------
u: 2D Numpy array of floats
x-component of the velocity vector field.
v: 2D Numpy array of floats
y-component of the velocity vector field.
"""
u = +strength / (2 * math.pi) * (Y - yv) / ((X - xv)**2 + (Y - yv)**2)
v = -strength / (2 * math.pi) * (X - xv) / ((X - xv)**2 + (Y - yv)**2)
return u, v
def get_stream_function_vortex(strength, xv, yv, X, Y):
"""
Returns the stream-function generated by a vortex.
Parameters
----------
strength: float
Strength of the vortex.
xv: float
x-coordinate of the vortex.
yv: float
y-coordinate of the vortex.
X: 2D Numpy array of floats
x-coordinate of the mesh points.
Y: 2D Numpy array of floats
y-coordinate of the mesh points.
Returns
-------
psi: 2D Numpy array of floats
The stream-function.
"""
psi = strength / (4 * math.pi) * numpy.log((X - xv)**2 + (Y - yv)**2)
return psi
# compute the velocity field on the mesh grid
u_vortex, v_vortex = get_velocity_vortex(gamma, x_vortex, y_vortex, X, Y)
# compute the stream-function on the mesh grid
psi_vortex = get_stream_function_vortex(gamma, x_vortex, y_vortex, X, Y)
```
Now that we have all the necessary ingredients (uniform flow, doublet and vortex), we apply the principle of superposition, and then we make a nice plot.
```
# superposition of the doublet and the vortex on the freestream flow
u = u_freestream + u_doublet + u_vortex
v = v_freestream + v_doublet + v_vortex
psi = psi_freestream + psi_doublet + psi_vortex
# calculate the cylinder radius
R = math.sqrt(kappa / (2 * math.pi * u_inf))
# calculate the stagnation points
x_stagn1, y_stagn1 = (+math.sqrt(R**2 - (gamma / (4 * math.pi * u_inf))**2),
-gamma / (4 * math.pi * u_inf))
x_stagn2, y_stagn2 = (-math.sqrt(R**2 - (gamma / (4 * math.pi * u_inf))**2),
-gamma / (4 * math.pi * u_inf))
# plot the streamlines
width = 10
height = (y_end - y_start) / (x_end - x_start) * width
pyplot.figure(figsize=(width, height))
pyplot.xlabel('x', fontsize=16)
pyplot.ylabel('y', fontsize=16)
pyplot.xlim(x_start, x_end)
pyplot.ylim(y_start, y_end)
pyplot.streamplot(X, Y, u, v,
density=2, linewidth=1, arrowsize=1.5, arrowstyle='->')
circle = pyplot.Circle((0.0, 0.0), radius=R, color='#CD2305', alpha=0.5)
pyplot.gca().add_patch(circle)
pyplot.scatter(x_vortex, y_vortex, color='#CD2305', s=80, marker='o')
pyplot.scatter([x_stagn1, x_stagn2], [y_stagn1, y_stagn2],
color='g', s=80, marker='o');
```
##### Challenge task
The challenge task in the [doublet notebook](03_Lesson03_doublet.ipynb) was to calculate the radius of the cylinder created by the doublet in a uniform flow. You should have gotten
$$R = \sqrt{\frac{\kappa}{2\pi U_\infty}}$$
The new challenge is to find where the stagnation points are located on the surface of the cylinder, when there's a vortex. (You just need an expression for the angles.)
What happens if $\ \frac{\Gamma}{4\pi U_\infty R} >1$?
Go back and experiment with a value of $\Gamma$ that causes this.
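One way to sanity-check an analytical answer (this numerical sketch is not part of the original lesson, and it uses the surface tangential velocity derived later in this notebook): sample the tangential velocity on the cylinder surface, locate its sign changes, and compare the resulting angles with your expression.

```python
import numpy

# Numerical check: find the zeros of the surface tangential velocity
# u_theta(R, theta) = -2 U sin(theta) - Gamma / (2 pi R), for Gamma = 1.
u_inf, kappa, gamma = 1.0, 1.0, 1.0
R = numpy.sqrt(kappa / (2 * numpy.pi * u_inf))

theta = numpy.linspace(0.0, 2.0 * numpy.pi, 1_000_001)
u_theta = -2.0 * u_inf * numpy.sin(theta) - gamma / (2.0 * numpy.pi * R)

# indices where u_theta changes sign mark the stagnation points
crossings = numpy.where(numpy.diff(numpy.sign(u_theta)) != 0)[0]
stagnation_angles = theta[crossings]
print(numpy.sin(stagnation_angles))   # compare with your expression
```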
---
## Pressure coefficient
Let's get the pressure coefficient on the surface of the cylinder and compare with the case with no vortex.
The velocity components in polar coordinates for the combined freestream + doublet + vortex are given by
$$u_r\left(r,\theta\right) = U_\infty \cos\theta \left(1-\frac{R^2}{r^2}\right)$$
$$u_\theta\left(r,\theta\right) = -U_\infty \sin\theta \left(1+\frac{R^2}{r^2}\right) - \frac{\Gamma}{2\pi r}$$
where $R$ is the cylinder radius.
We see that the radial component vanishes on the surface of the cylinder whereas the tangential velocity is given by
$$u_\theta\left(R,\theta\right) = -2U_\infty \sin\theta - \frac{\Gamma}{2\pi R} .$$
As a note, when there is no vortex, the tangential velocity on the cylinder becomes
$$u_\theta\left(R,\theta\right) = -2U_\infty \sin\theta .$$
From the doublet notebook, we know that the pressure coefficient is defined by
$$C_p = 1-\frac{U^2}{U_\infty^2}$$
where $U^2 = u^2+v^2 = u_r^2+u_\theta^2$.
Let's plot it!
```
# calculate the surface tangential velocity on the cylinder
theta = numpy.linspace(0.0, 2 * math.pi, 100)
u_theta = -2 * u_inf * numpy.sin(theta) - gamma / (2 * math.pi * R)
# compute the surface pressure coefficient
cp = 1.0 - (u_theta / u_inf)**2
# if there was no vortex
u_theta_no_vortex = -2 * u_inf * numpy.sin(theta)
cp_no_vortex = 1.0 - (u_theta_no_vortex / u_inf)**2
# plot the surface pressure coefficient
size = 6
pyplot.figure(figsize=(size, size))
pyplot.grid(True)
pyplot.xlabel(r'$\theta$', fontsize=18)
pyplot.ylabel('$C_p$', fontsize=18)
pyplot.xlim(theta.min(), theta.max())
pyplot.plot(theta, cp,
label='with vortex', color='#CD2305', linewidth=2, linestyle='-')
pyplot.plot(theta, cp_no_vortex,
label='without vortex', color='g', linewidth=2, linestyle='-')
pyplot.legend(loc='best', prop={'size':16});
```
## Lift and Drag
The lift is the component of force perpendicular to $U_\infty$, while the drag is the component parallel to $U_\infty$. How could we get them with the information we have above?
Well, the force on the cylinder results from the pressure acting on its surface (there is no viscosity here: it's ideal flow). If you draw a free body diagram, you should see that:
$$D = -\int_0^{2\pi} p \ \cos\theta \ R \ d\theta$$
$$L = -\int_0^{2\pi} p \ \sin\theta \ R \ d\theta$$
##### Challenge Task
Using Bernoulli's equation, replace $p$ in the equations above to obtain the lift and drag.
What does this mean?
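If you want to check your algebra, the two integrals above can also be evaluated numerically. The sketch below uses the gauge pressure $\frac{1}{2}\rho\left(U_\infty^2-u_\theta^2\right)$ from Bernoulli's equation, so it verifies rather than replaces the derivation; with this sign convention the drag vanishes (D'Alembert again) while the lift comes out as $\rho U_\infty \Gamma$.

```python
import math

# Midpoint-rule evaluation of the drag and lift integrals, using the
# gauge pressure p = 0.5 * rho * (U_inf**2 - u_theta**2) from Bernoulli.
rho, u_inf, kappa, gamma = 1.0, 1.0, 1.0, 4.0
R = math.sqrt(kappa / (2 * math.pi * u_inf))

n = 100_000
dtheta = 2 * math.pi / n
drag = lift = 0.0
for i in range(n):
    theta = (i + 0.5) * dtheta
    u_theta = -2 * u_inf * math.sin(theta) - gamma / (2 * math.pi * R)
    p = 0.5 * rho * (u_inf ** 2 - u_theta ** 2)
    drag += -p * math.cos(theta) * R * dtheta
    lift += -p * math.sin(theta) * R * dtheta

print(drag, lift)   # drag ~ 0, lift ~ rho * u_inf * gamma
```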
## The Magnus effect
The force experienced by a rotating cylinder (or sphere, or any object) is known as the _Magnus effect_.
Believe it or not, someone actually tried to build an airplane with this concept: spinning cylinders as "wings." According to an article on [PilotFriend](http://www.pilotfriend.com/photo_albums/potty/2.htm), a plane called the 921-V was built in 1930 and flew "at least once" before crashing.
```
from IPython.display import Image
Image(url='http://upload.wikimedia.org/wikipedia/commons/7/78/Flettner_Rotor_Aircraft.jpg')
```
And nowadays, a handful of hobbyists build RC "rotorwings" taking advantage of the Magnus effect to collect views on YouTube ...
```
from IPython.display import YouTubeVideo
YouTubeVideo('POHre1P_E1k')
```
---
```
from IPython.core.display import HTML
def css_styling(filepath):
styles = open(filepath, 'r').read()
return HTML(styles)
css_styling('../styles/custom.css')
```
```
import os
import numpy as np
from pprint import pprint
data_folder = os.path.join(os.path.expanduser("~"), "Data", "websites", "textonly")
documents = [open(os.path.join(data_folder, filename)).read() for filename in os.listdir(data_folder)]
len(documents)
pprint([document[:100] for document in documents[:5]])
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import Pipeline
n_clusters = 10
pipeline = Pipeline([('feature_extraction', TfidfVectorizer(max_df=0.4)),
('clusterer', KMeans(n_clusters=n_clusters))
])
pipeline.fit(documents)
labels = pipeline.predict(documents)
from collections import Counter
c = Counter(labels)
for cluster_number in range(n_clusters):
print("Cluster {} contains {} samples".format(cluster_number, c[cluster_number]))
c[0]
pipeline.named_steps['clusterer'].inertia_
inertia_scores = []
n_cluster_values = list(range(2, 20))
for n_clusters in n_cluster_values:
cur_inertia_scores = []
X = TfidfVectorizer(max_df=0.4).fit_transform(documents)
for i in range(30):
km = KMeans(n_clusters=n_clusters).fit(X)
cur_inertia_scores.append(km.inertia_)
inertia_scores.append(cur_inertia_scores)
inertia_scores = np.array(inertia_scores)
%matplotlib inline
from matplotlib import pyplot as plt
inertia_means = np.mean(inertia_scores, axis=1)
inertia_stderr = np.std(inertia_scores, axis=1)
fig = plt.figure(figsize=(40,20))
plt.errorbar(n_cluster_values, inertia_means, inertia_stderr, color='green')
plt.show()
n_clusters = 6
pipeline = Pipeline([('feature_extraction', TfidfVectorizer(max_df=0.4)),
('clusterer', KMeans(n_clusters=n_clusters))
])
pipeline.fit(documents)
labels = pipeline.predict(documents)
c = Counter(labels)
terms = pipeline.named_steps['feature_extraction'].get_feature_names()
for cluster_number in range(n_clusters):
print("Cluster {} contains {} samples".format(cluster_number, c[cluster_number]))
print(" Most important terms")
centroid = pipeline.named_steps['clusterer'].cluster_centers_[cluster_number]
most_important = centroid.argsort()
for i in range(5):
term_index = most_important[-(i+1)]
print(" {0}) {1} (score: {2:.4f})".format(i+1, terms[term_index], centroid[term_index]))
print()
from sklearn.metrics import silhouette_score
X = pipeline.named_steps['feature_extraction'].transform(documents)
silhouette_score(X, labels)
len(terms)
Y = pipeline.transform(documents)
km = KMeans(n_clusters=n_clusters)
labels = km.fit_predict(Y)
c = Counter(labels)
for cluster_number in range(n_clusters):
print("Cluster {} contains {} samples".format(cluster_number, c[cluster_number]))
silhouette_score(Y, labels)
Y.shape
```
# Evidence Accumulation Clustering
```
from scipy.sparse import csr_matrix
def create_coassociation_matrix(labels):
rows = []
cols = []
unique_labels = set(labels)
for label in unique_labels:
indices = np.where(labels == label)[0]
for index1 in indices:
for index2 in indices:
rows.append(index1)
cols.append(index2)
data = np.ones((len(rows),))
return csr_matrix((data, (rows, cols)), dtype='float')
C = create_coassociation_matrix(labels)
C
C.shape, C.shape[0] * C.shape[1]
len(C.nonzero()[0]) / (C.shape[0] * C.shape[1])
from scipy.sparse.csgraph import minimum_spanning_tree
mst = minimum_spanning_tree(C)
mst
pipeline = Pipeline([('feature_extraction', TfidfVectorizer(max_df=0.4)),
('clusterer', KMeans(n_clusters=3))
])
pipeline.fit(documents)
labels2 = pipeline.predict(documents)
C2 = create_coassociation_matrix(labels2)
C_sum = (C + C2) / 2
#C_sum.data = C_sum.data
C_sum.todense()
mst = minimum_spanning_tree(-C_sum)
mst
#mst.data[mst.data < 1] = 0
mst.data[mst.data > -1] = 0
mst.eliminate_zeros()
mst
from scipy.sparse.csgraph import connected_components
number_of_clusters, labels = connected_components(mst)
from sklearn.base import BaseEstimator, ClusterMixin
class EAC(BaseEstimator, ClusterMixin):
def __init__(self, n_clusterings=10, cut_threshold=0.5, n_clusters_range=(3, 10)):
self.n_clusterings = n_clusterings
self.cut_threshold = cut_threshold
self.n_clusters_range = n_clusters_range
def fit(self, X, y=None):
C = sum((create_coassociation_matrix(self._single_clustering(X))
for i in range(self.n_clusterings)))
mst = minimum_spanning_tree(-C)
mst.data[mst.data > -self.cut_threshold] = 0
mst.eliminate_zeros()
self.n_components, self.labels_ = connected_components(mst)
return self
def _single_clustering(self, X):
n_clusters = np.random.randint(*self.n_clusters_range)
km = KMeans(n_clusters=n_clusters)
return km.fit_predict(X)
def fit_predict(self, X):
self.fit(X)
return self.labels_
pipeline = Pipeline([('feature_extraction', TfidfVectorizer(max_df=0.4)),
('clusterer', EAC())
])
pipeline.fit(documents)
labels = pipeline.named_steps['clusterer'].labels_
c = Counter(labels)
c
```
# Online Learning
```
from sklearn.cluster import MiniBatchKMeans
vec = TfidfVectorizer(max_df=0.4)
X = vec.fit_transform(documents)
mbkm = MiniBatchKMeans(random_state=14, n_clusters=3)
batch_size = 500
indices = np.arange(0, X.shape[0])
for iteration in range(100):
sample = np.random.choice(indices, size=batch_size, replace=True)
mbkm.partial_fit(X[sample[:batch_size]])
mbkm = MiniBatchKMeans(random_state=14, n_clusters=3)
batch_size = 10
for iteration in range(int(X.shape[0] / batch_size)):
start = batch_size * iteration
end = batch_size * (iteration + 1)
mbkm.partial_fit(X[start:end])
labels_mbkm = mbkm.predict(X)
mbkm.inertia_
km = KMeans(random_state=14, n_clusters=3)
labels_km = km.fit_predict(X)
km.inertia_
from sklearn.metrics import adjusted_mutual_info_score, homogeneity_score
from sklearn.metrics import mutual_info_score, v_measure_score
v_measure_score(labels_mbkm, labels_km)
X.shape
labels_mbkm
from sklearn.feature_extraction.text import HashingVectorizer
class PartialFitPipeline(Pipeline):
def partial_fit(self, X, y=None):
Xt = X
for name, transform in self.steps[:-1]:
Xt = transform.transform(Xt)
return self.steps[-1][1].partial_fit(Xt, y=y)
pipeline = PartialFitPipeline([('feature_extraction', HashingVectorizer()),
('clusterer', MiniBatchKMeans(random_state=14, n_clusters=3))
])
batch_size = 10
for iteration in range(int(len(documents) / batch_size)):
start = batch_size * iteration
end = batch_size * (iteration + 1)
pipeline.partial_fit(documents[start:end])
labels = pipeline.predict(documents)
labels
```
```
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import torch.optim as optim
import pandas as pd
import numpy as np
# own Modules
from models_sub_net_ls import LstmMse_LatentSpace, LstmMle_LatentSpace, AnalysisLayer
from data_preperator import DataPreperatorPrediction
from data_set import DataSet
from predictor import PredictorMse, PredictorMle, PredictorMseLatentSpaceAnalyser, PredictorMleLatentSpaceAnalyser
```
## Take care of these things before training:
- Select the correct data path and define `droped_feature`
- Change the model parameters
- Change the results file location
## Parameters phm data
```
param = {
"data" : {
"path" : '../../data/phm_data_challenge/01_M01_DC_prediction_1.csv',
"droped_feature" : ["stage", "Lot", "runnum", "recipe", "recipe_step",
"up time", "ongoing time",
"ETCHSOURCEUSAGE", "ETCHAUXSOURCETIMER",
"ETCHAUX2SOURCETIMER", "FIXTURESHUTTERPOSITION", "ROTATIONSPEED"
],
"features_not_to_scale": []
},
"model" : {
"path" : "../../models/MLE_model/phm_dataFold xx_InputSize13_LayerLstm1_HiddenLstm15_HiddenFc75_Seq25.pt",
"input_size" : 12,
"n_hidden_lstm" : 15,
"sequence_size" : 100,
"batch_size" : 50,
"lstm_layer" : 1,
"n_hidden_fc_pred": 75,
"n_hidden_fc_ls": 15,
"dropout_rate_lstm": 0.0,
"dropout_rate_fc": 0.2,
"K":1
},
"anomaly_detection" : {
"threshold_anomaly" : 0.3,
"smooth_rate" : 0.05,
"no_standard_deviation" : 2
},
"results": {
"path" : "../visualisation/files/prediction/MLE/phm_mle_1.csv",
}
}
```
## Parameters artifical data
```
param = {
"data" : {
"path" : "./y_hat.csv",
"droped_feature" : [
],
"features_not_to_scale": []
},
"model" : {
"path" : "../../../../models/MLE_latent_space/artifical_2_signals_InputSize2_LayerLstm1_HiddenLstm15_HiddenFc_pred75_HiddenFc_ls7_Seq25.pt",
"input_size" : 2,
"n_hidden_lstm" : 15,
"sequence_size" : 100,
"batch_size" : 50,
"lstm_layer" : 1,
"n_hidden_fc_pred": 75,
"n_hidden_fc_ls": 7,
"dropout_rate_lstm": 0.0,
"dropout_rate_fc": 0.2,
"K":1
},
"anomaly_detection" : {
"threshold_anomaly" : 0.3,
"smooth_rate" : 0.05,
"no_standard_deviation" : 2
},
"results": {
"path" : "../../../visualisation/files/prediction/MLE_LS/artifical_2_signals_ls_7_yhat.csv",
}
}
```
## Parameters cpps data
```
param = {
"data" : {
"path" : '../../data/cpps_data/cpps_data_predictive_maintenance.csv',
"droped_feature" : ["status"
],
"features_not_to_scale": []
},
"model" : {
"path" : "../../models/MLE_latent_space/LS_cpps_data_InputSize10_LayerLstm1_HiddenLstm15_HiddenFc_pred75_HiddenFc_ls7_Seq25.pt",
"input_size" : 10,
"n_hidden_lstm" : 15,
"sequence_size" : 100,
"batch_size" : 50,
"lstm_layer" : 1,
"n_hidden_fc_pred": 75,
"n_hidden_fc_ls": 7,
"dropout_rate_lstm": 0.0,
"dropout_rate_fc": 0.2,
"K":1
},
"anomaly_detection" : {
"threshold_anomaly" : 0.3,
"smooth_rate" : 0.05,
"no_standard_deviation" : 2
},
"results": {
"path" : "../../own_research/seperate_NN_for_LS_analysis?/latent_space/cpps_data_with_subnet.csv" # "../visualisation/files/prediction/MLE_LS/cpps_data.csv",
}
}
```
## Standardize Data
First we have to apply normalisation to the data, because the model works on the representation given by its input vectors, and the scale of those numbers is part of that representation.
We should apply the exact same scaling as for the training data. That means storing the scale and offset used with the training data, and using them again. <br>
__The mean and variance for each feature of the training data with which the model was trained (training share: 0.75):__
### Mean and Variance from phm Dataset
droped features="stage", "Lot", "runnum", "recipe", "recipe_step",
"up time", "ongoing time",
"ETCHSOURCEUSAGE", "ETCHAUXSOURCETIMER",
"ETCHAUX2SOURCETIMER", "FIXTURESHUTTERPOSITION"
```
mean_training_data =[ 0.21683119, 0.32121513, 0.31925213, 0.20097501, 0.45164471, 0.22914814,
0.11604865, 0.27421592, 0.24393222, -0.13974937, -0.09739598, -0.07313758,
0.18198089]
var_training_data =[0.75261122, 0.90482986, 0.91105327, 0.75504036, 1.07026701, 0.76708319,
0.35172769, 0.83004988, 0.76964675, 0.57386915, 0.45912309, 0.2955709,
1.61493449]
```
### Mean and Variance from artificial Dataset
```
mean_training_data= [-0.00393712, -0.01294209]
var_training_data = [49.18936568, 0.34270256]
```
### Mean and Variance from cpps Dataset
```
mean_training_data = [0.05330162, 0.03075699, -0.05636937, 0.0274802, 0.06536314, -0.04620979,-0.0745559,
-0.08149049, -0.05318843, 0.11105582]
var_training_data = [0.02905961, 0.04473883, 0.05254194, 0.05198144, 0.07337494, 0.0666981, 0.07593811,
0.0393896, 0.08028017, 0.0594492]
mean_training_data = [0., 0.]
var_training_data = [1., 1.]
```
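Applying these stored statistics to new data is a broadcast subtract-and-divide. The sketch below uses the artificial dataset's statistics with made-up sample values, and checks that inverting the transform recovers the raw data.

```python
import numpy as np

# Sketch: standardise new samples with statistics saved from training.
# The sample values in x_new are made up for illustration.
mean_training = np.array([-0.00393712, -0.01294209])
var_training = np.array([49.18936568, 0.34270256])

x_new = np.array([[3.5, 0.2],
                  [-1.0, -0.4]])
x_scaled = (x_new - mean_training) / np.sqrt(var_training)

# inverting the transform recovers the raw values
x_back = x_scaled * np.sqrt(var_training) + mean_training
print(np.allclose(x_back, x_new))
```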
## Create DataLoader
```
data_preperator = DataPreperatorPrediction(path=param['data']['path'],
ignored_features = param["data"]["droped_feature"],
mean_training_data=mean_training_data,
var_training_data=var_training_data,
first_order_difference=False
)
preprocessed_data = data_preperator.prepare_data()
print(preprocessed_data.shape)
dataset = DataSet(preprocessed_data,
timesteps=param["model"]["sequence_size"])
data_loader = DataLoader(dataset,
batch_size=param['model']['batch_size'],
num_workers=0,
shuffle=False,
drop_last=True)
for batch_idx, data in enumerate(data_loader):
x,y = data
print('Data of batch: {}'.format(batch_idx))
print("Size of input data: {}".format(x.size()))
print("Size of target data: {}".format(y.size()))
if batch_idx >=1: break
```
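With `drop_last=True`, the final incomplete mini-batch is discarded. The number of batches the loader yields can be sketched as:

```python
def n_batches(n_samples, batch_size, drop_last=True):
    # drop_last=True: only full batches are kept;
    # otherwise the last partial batch is also yielded (ceiling division).
    if drop_last:
        return n_samples // batch_size
    return -(-n_samples // batch_size)
```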
## Define Model and load Parameters of trained model
### Model for MSE and Latent Space Analysis
```
model = LstmMse_LatentSpace(batch_size=param['model']['batch_size'],
input_dim=param['model']['input_size'],
n_hidden_lstm=param['model']['n_hidden_lstm'],
n_layers=param['model']['lstm_layer'],
dropout_rate_lstm= param['model']['dropout_rate_lstm'],
dropout_rate_fc= param['model']['dropout_rate_fc'],
n_hidden_fc_prediction=param['model']['n_hidden_fc_pred'],
n_hidden_fc_ls_analysis=param['model']['n_hidden_fc_ls']
)
checkpoint = torch.load(param["model"]["path"])
model.load_state_dict(checkpoint['model_state_dict'])
```
### Model for MLE and Latent Space Analysis
```
model = LstmMle_LatentSpace(batch_size=param['model']['batch_size'],
input_dim=param['model']['input_size'],
n_hidden_lstm=param['model']['n_hidden_lstm'],
n_layers=param['model']['lstm_layer'],
dropout_rate_lstm= param['model']['dropout_rate_lstm'],
dropout_rate_fc= param['model']['dropout_rate_fc'],
n_hidden_fc_prediction=param['model']['n_hidden_fc_pred'],
n_hidden_fc_ls_analysis=param['model']['n_hidden_fc_ls'],
K = param['model']['K']
)
checkpoint = torch.load(param["model"]["path"])
model.load_state_dict(checkpoint['model_state_dict'])
```
## Initialize Predictor
### Predictor for MSE Model and Latent Space Analysis
```
predictor = PredictorMseLatentSpaceAnalyser(model=model,
path_data=param["data"]["path"],
columns_to_ignore=param["data"]["droped_feature"],
threshold_anomaly=param["anomaly_detection"]["threshold_anomaly"]
)
```
### Predictor for MLE Model and Latent Space Analysis
```
predictor = PredictorMleLatentSpaceAnalyser(model=model,
path_data=param["data"]["path"],
columns_to_ignore=param["data"]["droped_feature"],
threshold_anomaly=param["anomaly_detection"]["threshold_anomaly"],
no_standard_deviation=param["anomaly_detection"]["no_standard_deviation"]
)
```
## Predict
```
print("Start predicting.")
# Write header
with open(param["results"]["path"], "a+") as file:
    for column in predictor.create_column_names_result():
        file.write(column + ";")
    file.write("\n")
for batch_number, (input_data, target_data) in enumerate(data_loader):
# Predict sensor values in mini-batches
batch_results = predictor.predict(input_data, target_data)
# Write results to csv file
with open(param["results"]["path"], "a") as file:
for batch in batch_results:
# Each result component of a single prediction (ID, target, prediction, loss, latent space, ...) is stored in a list,
# so we unpack the list and separate the values with ";"
for value in batch:
file.write(str(value)+";")
file.write("\n")
# Print status
if (batch_number*param['model']['batch_size'])%5000 == 0:
print("Current status: " + str(param['model']['batch_size']*batch_number) + " samples are predicted.")
print("End of prediction.")
```
## Tag anomalous samples
```
results_prediction = pd.read_csv(param["results"]["path"], sep=";")
# Values of the column "loss" are exponentially smoothed and stored in a new column "smoothed loss"
# A new column "anomaly" is created and a sample is tagged with 1 if the smoothed loss is above the threshold (anomalous behaviour)
results = predictor.detect_anomaly(results_prediction, param["anomaly_detection"]["smooth_rate"])
results.head()
```
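A hypothetical sketch of this smoothing-and-thresholding step (the actual `detect_anomaly` implementation may differ in details such as the exact smoothing formula):

```python
import pandas as pd

def detect_anomaly_sketch(results, smooth_rate, threshold):
    # Exponentially smooth the loss, then flag samples whose
    # smoothed loss exceeds the anomaly threshold.
    results = results.copy()
    results["smoothed loss"] = results["loss"].ewm(alpha=smooth_rate).mean()
    results["anomaly"] = (results["smoothed loss"] > threshold).astype(int)
    return results
```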
## Combine prediction data with the features that were not used for inference
```
original_sensor_data = pd.read_csv(param["data"]["path"])
data_of_droped_feature = original_sensor_data.loc[:, param["data"]["droped_feature"]+["ID"]]
complete_data = results.merge(right=data_of_droped_feature, how="inner", on="ID")
```
## Save data to csv file
```
complete_data.to_csv(param["results"]["path"], sep=";", index=False)
```
# 1. Inference on Synthetic data
Author: [Marc Lelarge](https://www.di.ens.fr/~lelarge/)
Date: 04/05
In this notebook, we test our approach on synthetic data.
The problem can be described as follows: we are given a family of ODEs $y'=h_\theta(y,t)$, where the function $h$ is parametrized by the parameter $\theta$, and a trajectory $z$; the problem is to find the best value of $\theta$ such that $z'\approx h_{\theta}(z,t)$. In order to find this value of $\theta$, we follow an optimization approach using backpropagation through an ODE solver, based on the tool developed in [Neural Ordinary Differential Equations](https://arxiv.org/abs/1806.07366). Namely, for a distance function $D$ on $\mathbb{R}^d$, we define $L = D(y_\theta - z)$, where $y_\theta$ is the solution of the ODE $y'=h_{\theta}(y,t)$, and we minimize the loss $L$ with respect to $\theta$ with SGD.
Here, to test this approach, we choose a parameter $\theta$ and integrate the ODE to get the trajectory $z$. We show that based on $z$, we are able to retrieve the parameter $\theta$.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
torch.manual_seed(0);
```
## a. the IHD Model
We propose a simple version of the SIR model valid at the start of the epidemic. In this regime, essentially the whole population is susceptible.
The standard SIR model is given by the equations:
\begin{eqnarray}
\dot{S}(t) &=& -\beta S(t) I(t)\\
\dot{I}(t) &=& \beta S(t) I(t) - \gamma I(t) -\nu I(t)\\
\dot{R}(t) &=& \gamma I(t)\\
\dot{D}(t) &=& \nu I(t)
\end{eqnarray}
where $S(t)$, $I(t)$, $R(t)$ and $D(t)$ are, respectively, the fractions of susceptible, infectious, recovered and deceased individuals at time $t$. $\beta$ is the contagion rate, $\gamma$ is the recovery rate and $\nu$ is the death rate.
In the early stage of the epidemic, we make the approximation $S(t) \approx 1$, so that the second equation simplifies to $\dot{I}(t) = \beta I(t) - \gamma I(t) -\nu I(t)$.
We make two other modifications:
- the contagion rate will depend on time: $\beta(t)$
- we add a sub-category of the population, $H(t)$, the number of individuals in hospital at time $t$. We assume that all deceased individuals are first in a hospital.
We obtain the IHD model given by the equations:
\begin{eqnarray}
\dot{I}(t) &=& \beta(t) I(t) -\gamma I(t)-\nu I(t)\\
\dot{R}(t) &=& \gamma I(t)\\
\dot{H}(t) &=& \nu I(t) - \gamma H(t) - \lambda H(t)\\
\dot{D}(t) &=& \lambda H(t)
\end{eqnarray}
Note that the recovered individuals can be ignored when computing $I(t)$, $H(t)$ and $D(t)$.
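The right-hand side of this IHD system can be sketched directly from the equations above (a plain-Python illustration, omitting $R$ since it does not feed back into the other equations):

```python
def ihd_rhs(t, y, beta_t, gamma, nu, lam):
    # y = (I, H, D); R(t) is dropped because no other equation depends on it.
    # beta_t is a callable giving the time-varying contagion rate beta(t).
    I, H, D = y
    dI = beta_t(t) * I - gamma * I - nu * I
    dH = nu * I - gamma * H - lam * H
    dD = lam * H
    return (dI, dH, dD)
```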
In practice, we will parametrize the function $\beta(t)$ as follows:
$$\beta(t) = \beta_1 +\delta \sigma(t-\tau),$$
where $\sigma(.)$ is the sigmoid function.
The motivation for using this specific function is to understand the impact of the lock-down on the epidemic. We expect the contagion rate to decrease under lock-down, so $\tau$ should be interpreted as the time when social distancing is implemented, resulting in a drop in the contagion rate.
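In plain Python this parametrization reads as follows (the default parameter values are the ones chosen for the synthetic experiment below, so treat them as illustrative):

```python
import math

def beta(t, beta1=0.1, delta=-0.04, tau=30.0):
    # beta(t) = beta1 + delta * sigmoid(t - tau), sigmoid(x) = 1 / (1 + exp(-x))
    return beta1 + delta / (1.0 + math.exp(-(t - tau)))
```

Well before $\tau$, $\beta(t)\approx\beta_1$; well after, $\beta(t)\approx\beta_1+\delta$, so a negative $\delta$ models the post-lock-down drop in the contagion rate.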
```
from model_epidemio import ihd_fit
size = 101
t = torch.linspace(0., size-1, size)
true_init = torch.tensor([[0.01,0., 0.]])
name_parms = ['beta', 'delta','gamma','nu','lambda']
parms = torch.tensor([0.1,-0.04,0.05,0.015,0.02])
time = torch.tensor([30.])
ihd_synt = ihd_fit.IHD_model(parms,time)
```
$\beta_1=0.1, \delta = -0.04, \gamma = 0.05, \nu = 0.015, \lambda = 0.02$
$\tau = 30$
$I(0)=0.01, H(0)=0, D(0)=0$
```
y_synt = ihd_fit.predic_ode(ihd_synt, true_init,t)
plt.plot(t,y_synt[:,0], 'b', label= 'Infected')
plt.plot(t,y_synt[:,1], 'g', label= 'Hospital')
plt.plot(t,y_synt[:,2], 'r', label= 'Deceased')
plt.ylabel('fraction pop')
plt.xlabel('time')
plt.legend();
```
Integration of the ODE: Infected = Blue, Hospital = green, Deaths = red
These trajectories are obtained by integrating the IHD model. To a non-specialist, these curves seem plausible: we observe the typical exponential growth at the start of the epidemic, and when $R_0$ goes below one (around time 30), the number of infected starts to decrease, as does the number of individuals in hospital.
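In this linearized model, the statement "$R_0$ goes below one" has a simple form: from $\dot{I} = (\beta(t)-\gamma-\nu)I$, infections grow exactly when $\beta(t) > \gamma + \nu$, so an effective reproduction ratio can be sketched as (parameter values from the experiment above; this is a derived illustration, not code from the `ihd_fit` module):

```python
def effective_r(beta_t, gamma=0.05, nu=0.015):
    # I(t) grows iff beta(t) > gamma + nu, i.e. iff this ratio exceeds 1.
    return beta_t / (gamma + nu)
```

With $\beta_1=0.1$ the early ratio is about 1.54; after the switch, $\beta\approx0.06$ gives about 0.92, consistent with the peak near $t=\tau=30$.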
## b. Inference problem
Now given the trajectory depicted above, we try to recover the parameters of the model.
```
parms_fit = torch.tensor([0.15,-0.05,0.05,0.05,0.05])
time_fit = torch.tensor([50.])
ihd_time = ihd_fit.IHD_fit_time(parms_fit,time_fit)
optimizer_time = optim.RMSprop([{'params': [ihd_time.b1, ihd_time.b2, ihd_time.g, ihd_time.nu, ihd_time.l]},
{'params': ihd_time.time, 'lr' : 1.}], lr=1e-3)
criterion = nn.MSELoss()
best_loss, best_parms = ihd_fit.trainig(ihd_time, init=true_init, t=t, optimizer=optimizer_time,
criterion=criterion,niters=600,data=y_synt)
ihd_inf = ihd_fit.get_best_model(best_parms)
y_inf = ihd_fit.predic_ode(ihd_inf, true_init,t)
plt.plot(y_inf[:,0], 'b', label='Est. I')
plt.plot(y_synt[:,0], 'b--', label='I')
plt.plot(y_inf[:,1], 'g', label='Est. H')
plt.plot(y_synt[:,1], 'g--', label='H')
plt.plot(y_inf[:,2], 'r', label='Est. D')
plt.plot(y_synt[:,2], 'r--', label='D')
plt.ylabel('fraction pop')
plt.xlabel('time')
plt.legend();
```
Infected = Blue, Hospital = green, Deaths = red; dashed = true, plain = estimated.
We see that we have a good fit of the trajectories. Below, we look at the estimation of the parameters directly.
```
for i,p in enumerate(best_parms):
try:
print(name_parms[i],',true: ', parms[i].item(), ',evaluated: ', p.data.item())
except:
print('time',',true: ', time.item(), ',evaluated: ', p.data.item())
```
We see that the switching time is well estimated. The contagion rate and the recovery rate $\gamma$ are overestimated.
## c. Inference problem with missing data
In practice, we will not have access to the whole trajectory.
### c.1 missing hospital numbers
As an example, we consider here the case where the number of individuals in the hospital is not given. Hence, we have only access to the curves $I(t)$ and $D(t)$ but not $H(t)$.
This case fits nicely to our framework and we only need to modify the loss function in the optimization problem.
```
parms_fit = torch.tensor([0.15,-0.05,0.05,0.05,0.05])
time_fit = torch.tensor([50.])
ihd_time = ihd_fit.IHD_fit_time(parms_fit,time_fit)
optimizer_time = optim.RMSprop([{'params': [ihd_time.b1, ihd_time.b2, ihd_time.g, ihd_time.nu, ihd_time.l]},
{'params': ihd_time.time, 'lr' : 1.}], lr=1e-3)
criterion = nn.MSELoss()
best_loss_partial, best_parms_partial = ihd_fit.trainig(ihd_time, init=true_init, t=t, optimizer=optimizer_time,
criterion=criterion,niters=600,data=(y_synt[:,0],y_synt[:,2]),all_data=False)
ihd_inf = ihd_fit.get_best_model(best_parms_partial)
y_inf = ihd_fit.predic_ode(ihd_inf, true_init,t)
plt.plot(y_inf[:,0], 'b', label='Est. I')
plt.plot(y_synt[:,0], 'b--', label='I')
plt.plot(y_inf[:,1], 'g', label='Est. H')
plt.plot(y_synt[:,1], 'g--', label='H')
plt.plot(y_inf[:,2], 'r', label='Est. D')
plt.plot(y_synt[:,2], 'r--', label='D')
plt.ylabel('fraction pop')
plt.xlabel('time')
plt.legend();
```
We see that the number of individuals in hospital cannot be estimated although the number of infected and deceased individuals match very well.
```
for i,p in enumerate(best_parms_partial):
try:
print(name_parms[i], ',true: ', parms[i].item(), ',evaluated: ', p.data.item())
except:
print('time', ',true: ', time.item(), ',evaluated: ', p.data.item())
```
### c.2 missing infected
We now consider a case where the number of infected $I(t)$ is not available (for example, because the population is not tested). We only have access to the number of individuals in hospital $H(t)$ and the number of deceased individuals $D(t)$.
```
parms_fit = torch.tensor([0.15,-0.05,0.05,0.05,0.05])
time_fit = torch.tensor([50.])
ihd_time = ihd_fit.IHD_fit_time(parms_fit,time_fit)
optimizer_time = optim.RMSprop([{'params': [ihd_time.b1, ihd_time.b2, ihd_time.g, ihd_time.nu, ihd_time.l]},
{'params': ihd_time.time, 'lr' : 1.}], lr=1e-3)
criterion = nn.MSELoss()
best_loss_hosp, best_parms_hosp = ihd_fit.trainig_hosp(ihd_time, init=true_init, t=t, optimizer=optimizer_time,
criterion=criterion,niters=500,data=(y_synt[:,1],y_synt[:,2]))
optimizer_time = optim.RMSprop([{'params': [ihd_time.b1, ihd_time.b2, ihd_time.g, ihd_time.nu, ihd_time.l]},
{'params': ihd_time.time, 'lr' : 1.}], lr=1e-4)
criterion = nn.MSELoss()
best_loss_hosp, best_parms_hosp = ihd_fit.trainig_hosp(ihd_time, init=true_init, t=t, optimizer=optimizer_time,
criterion=criterion,niters=500,data=(y_synt[:,1],y_synt[:,2]))
optimizer_time = optim.RMSprop([{'params': [ihd_time.b1, ihd_time.b2, ihd_time.g, ihd_time.nu, ihd_time.l]},
{'params': ihd_time.time, 'lr' : 1.}], lr=1e-4)
criterion = nn.MSELoss()
best_loss_hosp, best_parms_hosp = ihd_fit.trainig_hosp(ihd_time, init=true_init, t=t, optimizer=optimizer_time,
criterion=criterion,niters=500,data=(y_synt[:,1],y_synt[:,2]))
ihd_inf = ihd_fit.get_best_model(best_parms_hosp)
y_inf = ihd_fit.predic_ode(ihd_inf, true_init,t)
plt.plot(y_inf[:,0], 'b', label='Est. I')
plt.plot(y_synt[:,0], 'b--', label='I')
plt.plot(y_inf[:,1], 'g', label='Est. H')
plt.plot(y_synt[:,1], 'g--', label='H')
plt.plot(y_inf[:,2], 'r', label='Est. D')
plt.plot(y_synt[:,2], 'r--', label='D')
plt.ylabel('fraction pop')
plt.xlabel('time')
plt.legend();
```
We see that we obtain much better results here. Although we do not observe the curve of infected individuals, we are able to get a rough estimate of it.
```
for i,p in enumerate(best_parms_hosp):
try:
print(name_parms[i], ',true: ', parms[i].item(), ',evaluated: ', p.data.item())
except:
print('time', ',true: ', time.item(), ',evaluated: ', p.data.item())
```
```
import pandas as pd, numpy as np
import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import *
raw_data_df = pd.read_csv('IPS_payload_200000_df.csv')
raw_data_df.columns
import os
java11_location= '/opt/homebrew/opt/openjdk@11'
os.environ['JAVA_HOME'] = java11_location
conf = pyspark.SparkConf().setAppName('prep_data').setMaster('local')
# sc = pyspark.SparkContext(conf=conf)
sc = pyspark.SparkContext.getOrCreate(conf = conf)
session = SparkSession(sc)
schema = StructType([StructField("payload", StringType(), True)\
,StructField("label", StringType(), True)
])
# Register the pandas DataFrame as a Spark DataFrame
domain_df = session.createDataFrame(raw_data_df, schema=schema)
# Inspect the current schema
domain_df.printSchema()
# Register the DataFrame as a Spark SQL temporary view named "table"
domain_df.createOrReplaceTempView("table")
query_1 = """
SELECT label,
CHAR_LENGTH(IF(ISNULL(payload) OR (LOWER(payload) IN ("", " ", "-", "null", "nan")), "", payload)) AS ips_00013_payload_length_value,
IF(CHAR_LENGTH(IF(ISNULL(payload) OR (LOWER(payload) IN ("", " ", "-", "null", "nan")), "", payload))<1, 0, LN(CHAR_LENGTH(IF(ISNULL(payload) OR (LOWER(payload) IN ("", " ", "-", "null", "nan")), "", payload)))) AS ips_00014_payload_logscaled_length_value,
IF(INSTR(LOWER(IF(ISNULL(payload), "", payload)), "manager")>0, 1, 0) AS ips_00015_payload_sys_manager_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), "", payload)), "console")>0, 1, 0) AS ips_00016_payload_sys_console_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), "", payload)), "admin")>0, 1, 0) AS ips_00017_payload_sys_admin_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), "", payload)), "setup")>0, 1, 0) AS ips_00018_payload_sys_setup_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), "", payload)), "config")>0, 1, 0) AS ips_00019_payload_sys_config_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), "", payload)), "server")>0, 1, 0) AS ips_00020_payload_sys_server_flag,
SIZE(SPLIT(IF(ISNULL(payload), "", payload), "[\']"))-1 AS ips_00021_payload_char_single_quotation_cnt
FROM table
"""
query_2 = """
SELECT
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\"]'))-1 AS ips_00022_payload_char_double_quotation_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\=]')) - SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\&]')) AS ips_00023_payload_char_equal_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\+]'))-1 AS ips_00024_payload_char_plus_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\*]'))-1 AS ips_00025_payload_char_star_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\/]'))-1 AS ips_00026_payload_char_slush_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\<]'))-1 AS ips_00027_payload_char_lt_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\@]'))-1 AS ips_00028_payload_char_at_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\(]'))-1 AS ips_00029_payload_char_parent_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\{]'))-1 AS ips_00030_payload_char_bracket_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\$]'))-1 AS ips_00031_payload_char_dollar_cnt,
SIZE(SPLIT(IF(ISNULL(payload), '', payload), '[\\.][\\.]'))-1 AS ips_00032_payload_char_double_dot_cnt,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT(ch, 'and', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00033_payload_sql_and_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT(ch, 'or', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00034_payload_sql_or_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT(ch, 'select', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00035_payload_sql_select_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT(ch, 'from', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00036_payload_sql_from_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), CONCAT('cast', CHR(40)))>0, 1, 0) AS ips_00037_payload_sql_cast_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('union', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00038_payload_sql_union_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), CONCAT('eval', CHR(40)))>0, 1, 0) AS ips_00039_payload_sql_eval_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), CONCAT('char', CHR(40)))>0, 1, 0) AS ips_00040_payload_sql_char_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), CONCAT('base64', CHR(40)))>0, 1, 0) AS ips_00041_payload_sql_base64_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('declare', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00042_payload_sql_declare_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), 'alert')>0, 1, 0) AS ips_00043_payload_xss_alert_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), 'script')>0, 1, 0) AS ips_00044_payload_xss_script_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), 'document')>0, 1, 0) AS ips_00045_payload_xss_document_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), 'onmouseover')>0, 1, 0) AS ips_00046_payload_xss_onmouseover_flag,
IF(INSTR(LOWER(IF(ISNULL(payload), '', payload)), 'onload')>0, 1, 0) AS ips_00047_payload_xss_onload_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('cmd', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00048_payload_cmd_cmd_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('run', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00049_payload_cmd_run_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('config', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00050_payload_cmd_config_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('ls', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00051_payload_cmd_ls_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('mkdir', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00052_payload_cmd_mkdir_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('netstat', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00053_payload_cmd_netstat_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('ftp', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00054_payload_cmd_ftp_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('cat', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00055_payload_cmd_cat_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('dir', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00056_payload_cmd_dir_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('wget', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00057_payload_cmd_wget_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('echo', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00058_payload_cmd_echo_flag,
IF(AGGREGATE(TRANSFORM(TRANSFORM(ARRAY(' ', CONCAT(CHR(37), '20'), CHR(43)), ch -> CONCAT('rm', ch)), word -> INT(INSTR(LOWER(IF(ISNULL(payload), '', payload)), word))), 0, (x1, x2) -> x1+x2)>0, 1, 0) AS ips_00059_payload_cmd_rm_flag
FROM table
"""
# Run the queries and store the results in DataFrames
output_df = session.sql(query_1)
output_df_2 = session.sql(query_2)
sql_result_df = output_df.toPandas()
sql_result_df_2 = output_df_2.toPandas()
sql_result_df_result = pd.concat([sql_result_df, sql_result_df_2], axis = 1)
sql_result_df_result['ips_00014_payload_logscaled_length_value'] = sql_result_df_result['ips_00014_payload_logscaled_length_value'].astype(int)
print('Preprocessed data shape: ', sql_result_df_result.shape)
print('Preprocessed data sample: ', sql_result_df_result)
sql_result_df_result.columns
sql_result_df_result.shape
# sql_result_df_result.to_csv('IPS_payload_200000_sql_result_df_result.csv', index = False, sep = ',')
```
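The `SIZE(SPLIT(...)) - 1` idiom used throughout `query_2` counts occurrences of a character, and the `IF(INSTR(...) > 0, 1, 0)` idiom builds keyword flags. Plain-Python equivalents of the two idioms (hypothetical helper names, for single-character separators):

```python
def char_count(payload, ch):
    # Spark SQL: SIZE(SPLIT(payload, ch)) - 1  ==  number of occurrences of ch
    payload = "" if payload is None else payload
    return len(payload.split(ch)) - 1

def keyword_flag(payload, word):
    # Spark SQL: IF(INSTR(LOWER(payload), word) > 0, 1, 0)
    payload = "" if payload is None else payload
    return 1 if word in payload.lower() else 0
```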
## SFC Meteorology Obs from:
- **2017 C2 (CKITAEM-2A)**
- **2017 M2 (BSM-2A)**
__pyversion__==3.6
__author__==S.Bell
```
%matplotlib inline
import datetime
print("Last run {0}".format(datetime.datetime.now()))
```
### connecting to erddap and retrieving and basic information
```
from erddapy import ERDDAP
import pandas as pd
import numpy as np
server_url = 'http://downdraft.pmel.noaa.gov:8080/erddap'
e = ERDDAP(server=server_url)
df = pd.read_csv(e.get_search_url(response='csv', search_for='a_met'))
'We have {} tabledap, {} griddap, and {} wms endpoints.'.format(
len(set(df['tabledap'].dropna())),
len(set(df['griddap'].dropna())),
len(set(df['wms'].dropna()))
)
datasets = df['Dataset ID'].values
print(datasets)
variables = [e.get_var_by_attr(dataset_id=dataset, standard_name=lambda v: v is not None) for dataset in datasets]
print(variables)
```
### getting Peggy Buoy (BSM-2A) Data
```
wdf = pd.read_csv('http://pavlof.pmel.noaa.gov/bell/ArgosMooring/TotalArgosMessage_28882_2017.csv',
parse_dates=True,index_col='sampletime')
wdf = wdf.resample('1H').mean()
```
### retrieving erddap and plotting data
```
constraints = {
'time>=': '2017-01-10T00:00:00Z',
'time<=': str(datetime.datetime.today()),
}
variables = [
'air_pressure_at_sealevel',
'wind_from_direction',
'air_temperature',
'relative_humidity',
'northward_wind',
'eastward_wind',
'wind_speed',
'latitude',
'longitude',
'time'
]
variable_dic={}
for index,row in df.iterrows():
info_url = e.get_info_url(dataset_id=row['Dataset ID'], response='csv')
info = pd.read_csv(info_url)
#print(info.head())
print('Variables in {}:'.format(row['Dataset ID']))
print(','.join(info.loc[info['Row Type'] == 'variable', 'Variable Name']))
variable_dic.update({row['Dataset ID']:list(info.loc[info['Row Type'] == 'variable', 'Variable Name'])})
from requests.exceptions import HTTPError
dfs = {}
for index,row in df.iterrows():
if row['Dataset ID'] in ['erddap_17ckitaem2a_met']:
print(row['Dataset ID'])
try:
e = ERDDAP(server=server_url,
protocol='tabledap',
response='csv',
)
e.dataset_id=row['Dataset ID']
e.constraints=constraints
e.variables=variables
except HTTPError:
print('Failed to generate url {}'.format(row['Dataset ID']))
continue
dfs.update({row['Dataset ID']: e.to_pandas(
index_col='time',
parse_dates=True,
skiprows=(1,) # units information can be dropped.
)})
```
### Take care of any preliminary QC
```
for ds, df in dfs.items():
    df.loc[df.wind_speed > 100, 'wind_speed'] = np.nan  # mask implausible speeds; .loc avoids chained assignment
```
### Plot
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.air_temperature.plot(ax=ax)
plt.ylabel('Temperature (degC)')
except:
pass
wdf.AT.plot(ax=ax)
plt.legend(['17CKITAEPR-2A','17BSM-2A'])
plt.xlim([datetime.datetime(2017,5,1),datetime.datetime(2017,10,15)])
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.air_pressure_at_sealevel.plot(ax=ax)
plt.ylabel('Pressure (mb)')
except:
pass
wdf.BP.plot(ax=ax)
plt.xlim([datetime.datetime(2017,5,1),datetime.datetime(2017,10,15)])
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.relative_humidity.plot(ax=ax)
plt.ylabel('RH (%)')
except:
pass
wdf.RH.plot(ax=ax)
plt.xlim([datetime.datetime(2017,5,1),datetime.datetime(2017,10,15)])
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.wind_speed.plot(ax=ax)
plt.ylabel('wind_speed (m/s)')
except:
pass
wdf.WS.plot(ax=ax)
plt.xlim([datetime.datetime(2017,5,1),datetime.datetime(2017,10,15)])
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.wind_from_direction.plot(style='.',markersize=3.0,ax=ax)
plt.ylabel('wind direction (degree)')
except:
pass
wdf.WD.plot(style='.',markersize=4.0,ax=ax)
plt.xlim([datetime.datetime(2017,5,1),datetime.datetime(2017,10,15)])
```
### Comments
The arms that hold the met package had become disconnected shortly after deployment. G. Lebon ratchet-strapped them together and they held throughout the deployment, but I observe an interesting shift of winds from easterly/southeasterly to westerly/southwesterly, together with a decrease in speed and a noisier temperature and RH signal. It is not yet clear whether these are reasonable (mid Sep).
Zooming in on just the C2 deployed period below:
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.air_temperature.plot(ax=ax)
plt.ylabel('Temperature (degC)')
except:
pass
wdf.AT.plot(ax=ax)
plt.legend(['17CKITAEPR-2A','17BSM-2A'])
plt.xlim([datetime.datetime(2017,7,21),datetime.datetime(2017,10,15)])
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.air_pressure_at_sealevel.plot(ax=ax)
plt.ylabel('Pressure (mb)')
except:
pass
wdf.BP.plot(ax=ax)
plt.xlim([datetime.datetime(2017,7,21),datetime.datetime(2017,10,15)])
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.relative_humidity.plot(ax=ax)
plt.ylabel('RH (%)')
except:
pass
wdf.RH.plot(ax=ax)
plt.xlim([datetime.datetime(2017,7,21),datetime.datetime(2017,10,15)])
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.wind_speed.plot(ax=ax)
plt.ylabel('wind_speed (m/s)')
except:
pass
wdf.WS.plot(ax=ax)
plt.xlim([datetime.datetime(2017,7,21),datetime.datetime(2017,10,15)])
fig, ax = plt.subplots(figsize=(12,3))
for ds, df in dfs.items():
try:
df.wind_from_direction.plot(style='.',markersize=3.0,ax=ax)
plt.ylabel('wind direction (degree)')
except:
pass
wdf.WD.plot(style='.',markersize=4.0,ax=ax)
plt.xlim([datetime.datetime(2017,7,21),datetime.datetime(2017,10,15)])
```
# 04 - Full Waveform Inversion with Devito and Dask
## Introduction
In this tutorial, we will build on the [previous](https://github.com/devitocodes/devito/blob/master/examples/seismic/tutorials/03_fwi.ipynb) FWI tutorial and implement parallel versions of both forward modeling and FWI objective functions. Furthermore, we will show how our parallel FWI function can be passed to black-box third party optimization libraries, such as SciPy's [optimize](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html) package, to access sophisticated optimization algorithms without having to implement them from scratch!
To implement parallel versions of forward modeling and FWI, we will use [Dask](https://dask.pydata.org/en/latest/#dask), a Python library for distributed computing based on parallel data structures and task-based programming. As computing multiple seismic shot records or gradients for subsequent source locations is an embarrassingly parallel process, we will use Dask to dynamically distribute our workload to a pool of available workers and afterwards collect the results.
The first part of this tutorial closely follows [tutorial 3](https://github.com/devitocodes/devito/blob/master/examples/seismic/tutorials/03_fwi.ipynb) and consists of reading the velocity model and setting up the acquisition geometry. Subsequently, we will implement serial versions of forward modeling and FWI objective functions and then show how we can use Dask to implement parallel versions of these functions. Finally, we will show how to write a wrapper that lets us pass our objective function to scipy's optimize package and how to run a small 2D FWI example using a limited-memory Quasi-Newton method.
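Before bringing in Dask, the embarrassingly parallel pattern itself can be sketched with the standard library: map a per-shot function over all source indices and gather the results. Here `model_shot` is a hypothetical stand-in for the single-shot modeling function defined later in this tutorial:

```python
from concurrent.futures import ThreadPoolExecutor

def model_shot(src_index):
    # Stand-in for forward modeling of one shot; returns (shot id, "shot record").
    return (src_index, src_index ** 2)

def forward_all(n_shots, max_workers=4):
    # Each shot is independent, so a plain parallel map suffices;
    # Dask generalizes this to a dynamically scheduled, distributed worker pool.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return sorted(pool.map(model_shot, range(n_shots)))
```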
## GPU Aware Dask
The default way to start a Dask cluster is `LocalCluster(...)`. This method enables CPU worker threads but shares a single GPU among all workers. To let Dask use multiple GPUs, one per Dask worker, start the cluster with `LocalCUDACluster` instead. This modification comes from the Rapids.ai open-source project.
Reference: https://github.com/rapidsai/dask-cuda
```
USE_GPU_AWARE_DASK = False
```
## Set up velocity models
As before, we start by reading the true (i.e. unknown) velocity model, as well as the starting model for FWI. For our example, we once again use the 2D Camembert model with a transmission acquisition setup, which involves having sources on one side of the model and receivers on the other side.
In reality, we obviously cannot know the true velocity, but here we use the true model to generate our own data (inverse crime alert!) and to compare it to our FWI result.
```
from examples.seismic import demo_model
# Set up velocity model
shape = (101, 101) # Number of grid points (nx, nz).
spacing = (10., 10.) # Grid spacing in m. The domain size is now 1km by 1km.
origin = (0, 0) # Need origin to define relative source and receiver locations.
nbl = 40
# True model
model1 = demo_model('circle-isotropic', vp_circle=3.0, vp_background=2.5,
origin=origin, shape=shape, spacing=spacing, nbl=nbl)
# Initial model
model0 = demo_model('circle-isotropic', vp_circle=2.5, vp_background=2.5,
origin=origin, shape=shape, spacing=spacing, nbl=nbl, grid = model1.grid)
```
## Acquisition geometry
For the acquisition geometry, we use the same setup as in tutorial 3 and position 5 sources on one side of the model and an array of 101 receivers on the other side. Note that our source coordinate array (`src_coordinates`) is now a 5 x 2 array, containing the shot locations of all 5 source experiments. After defining the source/receiver coordinates, we set up individual geometry objects for both the observed data (using `model1`) and the predicted data (using `model0`).
```
from examples.seismic import AcquisitionGeometry
import numpy as np
# Set up acquisition geometry
t0 = 0.
tn = 1000.
f0 = 0.010
# Set up source geometry, but define 5 sources instead of just one.
nsources = 5
src_coordinates = np.empty((nsources, 2))
src_coordinates[:, 1] = np.linspace(0, model1.domain_size[0], num=nsources)
src_coordinates[:, 0] = 20. # Source depth is 20m
# Initialize receivers for synthetic and imaging data
nreceivers = 101
rec_coordinates = np.empty((nreceivers, 2))
rec_coordinates[:, 1] = np.linspace(spacing[0], model1.domain_size[0] - spacing[0], num=nreceivers)
rec_coordinates[:, 0] = 980. # Receiver depth
# Set up geometry objects for observed and predicted data
geometry1 = AcquisitionGeometry(model1, rec_coordinates, src_coordinates, t0, tn, f0=f0, src_type='Ricker')
geometry0 = AcquisitionGeometry(model0, rec_coordinates, src_coordinates, t0, tn, f0=f0, src_type='Ricker')
```
## Forward modeling
Before diving into FWI, we will start with forward modeling and show how we can use Dask to implement a parallel wrapper around a serial modeling function to compute seismic shot records for multiple source locations in parallel.
First, we implement a forward modeling function for a single shot, which takes a geometry data structure as the only mandatory input argument. This function assumes that the geometry structure only contains a *single* source location. To solve the wave equation for the current shot location and model as specified in `geometry`, we use the `AcousticWaveSolver` from previous tutorials, which is an abstract layer built on top of (generic) Devito objects. `AcousticWaveSolver` contains Devito implementations of forward and adjoint wave equations, as well as Jacobians as specified in tutorials 1 and 2, so we don't have to re-implement these PDEs here.
```
from examples.seismic.acoustic import AcousticWaveSolver
# Serial modeling function
def forward_modeling_single_shot(model, geometry, save=False, dt=4.0):
    solver = AcousticWaveSolver(model, geometry, space_order=4)
    d_obs, u0 = solver.forward(vp=model.vp, save=save)[0:2]
    return d_obs.resample(dt), u0
```
With our modeling function for a single shot record in place, we now implement the parallel version, which consists of a loop over all source locations. As the `geometry` object in `forward_modeling_single_shot` expects only a single source location, we set up a new geometry structure for the i-th source location to pass to our modeling function. However, rather than simply calling the modeling function for single shots, we tell Dask to create a *task* for each source location and to distribute them to the available parallel workers. Dask returns a remote reference to the result on each worker called a `future`. The `wait` statement tells our function to wait for all tasks to finish their computations, after which we collect the modeled shot records from the workers.
```
# Parallel modeling function
def forward_modeling_multi_shots(model, geometry, save=False, dt=4.0):
    futures = []
    for i in range(geometry.nsrc):
        # Geometry for current shot
        geometry_i = AcquisitionGeometry(model, geometry.rec_positions, geometry.src_positions[i,:],
                                         geometry.t0, geometry.tn, f0=geometry.f0, src_type=geometry.src_type)
        # Call serial modeling function for each index
        futures.append(client.submit(forward_modeling_single_shot, model, geometry_i, save=save, dt=dt))
    # Wait for all workers to finish and collect shots
    wait(futures)
    shots = []
    for i in range(geometry.nsrc):
        shots.append(futures[i].result()[0])
    return shots
```
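The submit/wait/collect pattern above is not Dask-specific — the standard library's `concurrent.futures` exposes the same interface, so the control flow can be sketched without a cluster (using a toy stand-in for the modeling function):

```python
from concurrent.futures import ThreadPoolExecutor, wait

# Toy stand-in for forward_modeling_single_shot: each "shot" is just i * 2.
def model_shot(i):
    return i * 2

# Same pattern as the Dask version: submit one task per shot, wait, collect.
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(model_shot, i) for i in range(5)]
    wait(futures)
    shots = [f.result() for f in futures]
```

Dask's `client.submit`/`wait`/`future.result()` follow this same shape, with the tasks dispatched to remote workers instead of local threads.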
We can use this parallel modeling function to generate our own observed data set, which we will subsequently use for our FWI example. In reality, we would instead read our observed data from a SEG-Y file. To compute the data in parallel, we launch a pool of workers on our local machine and then call the parallel modeling function:
```
from distributed import Client, wait
# Start Dask cluster
if USE_GPU_AWARE_DASK:
    from dask_cuda import LocalCUDACluster
    cluster = LocalCUDACluster(threads_per_worker=1, death_timeout=600)
else:
    from distributed import LocalCluster
    cluster = LocalCluster(n_workers=nsources, death_timeout=600)
client = Client(cluster)
# Compute observed data in parallel (inverse crime). In real life we would read the SEG-Y data here.
d_obs = forward_modeling_multi_shots(model1, geometry1, save=False)
```
The variable `d_obs` is a list of the 5 shot records, one of which we can plot as follows:
```
from examples.seismic import plot_shotrecord
# Plot shot no. 3 of 5
plot_shotrecord(d_obs[2].data, model1, t0, tn)
```
## Parallel Full-Waveform Inversion
Now that we know how to use Dask to implement a parallel loop around a (serial) modeling function for a single shot, we can apply the same concept to an FWI objective function, which computes the FWI function value and gradient for a given geometry and observed shot record. This function largely follows the structure of tutorial 3 and involves computing the predicted data and backpropagating the residual to compute the gradient. As we do not want to update the velocity in the area of the absorbing boundaries, we only return the gradient on the (original) physical grid.
```
from devito import Function
from examples.seismic import Receiver
# Serial FWI objective function
def fwi_objective_single_shot(model, geometry, d_obs):
    # Devito objects for gradient and data residual
    grad = Function(name="grad", grid=model.grid)
    residual = Receiver(name='rec', grid=model.grid,
                        time_range=geometry.time_axis,
                        coordinates=geometry.rec_positions)
    solver = AcousticWaveSolver(model, geometry, space_order=4)
    # Predicted data and residual
    d_pred, u0 = solver.forward(vp=model.vp, save=True)[0:2]
    residual.data[:] = d_pred.data[:] - d_obs.resample(geometry.dt).data[:][0:d_pred.data.shape[0], :]
    # Function value and gradient
    fval = .5*np.linalg.norm(residual.data.flatten())**2
    solver.gradient(rec=residual, u=u0, vp=model.vp, grad=grad)
    # Convert to numpy array and remove absorbing boundaries
    grad_crop = np.array(grad.data[:])[model.nbl:-model.nbl, model.nbl:-model.nbl]
    return fval, grad_crop
```
As with the serial modeling function, we can call `fwi_objective_single_shot` with a geometry structure containing a single source location and a single observed shot record. To evaluate this function for multiple sources in parallel, we follow the strategy from our forward modeling example and implement a parallel loop over all shots, creating one Dask task per source location. We wait for all computations to finish via `wait(futures)` and then sum the function values and gradients from all workers.
```
# Parallel FWI objective function
def fwi_objective_multi_shots(model, geometry, d_obs):
    futures = []
    for i in range(geometry.nsrc):
        # Geometry for current shot
        geometry_i = AcquisitionGeometry(model, geometry.rec_positions, geometry.src_positions[i,:],
                                         geometry.t0, geometry.tn, f0=geometry.f0, src_type=geometry.src_type)
        # Call serial FWI objective function for each shot location
        futures.append(client.submit(fwi_objective_single_shot, model, geometry_i, d_obs[i]))
    # Wait for all workers to finish and collect function values and gradients
    wait(futures)
    fval = 0.0
    grad = np.zeros(model.shape)
    for i in range(geometry.nsrc):
        fval += futures[i].result()[0]
        grad += futures[i].result()[1]
    return fval, grad
```
We can compute a single gradient of the FWI objective function for all shots by passing the geometry structure with the initial model to the objective function, as well as the observed data we generated earlier.
```
# Compute FWI gradient for 5 shots
f, g = fwi_objective_multi_shots(model0, geometry0, d_obs)
```
The physical units of the gradient are $s^2/km^2$, which means our gradient is an update of the squared slowness, rather than of the velocity.
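The conversion between velocity and squared slowness is a simple element-wise operation — a quick sketch with toy values:

```python
import numpy as np

# With velocity v in km/s, the squared slowness m = 1/v^2 has units
# s^2/km^2 -- the quantity that the FWI gradient updates.
v = np.array([1.5, 2.5, 4.0])   # velocities in km/s
m = 1.0 / v**2                  # squared slowness in s^2/km^2
v_back = 1.0 / np.sqrt(m)       # invert back to velocity
```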
```
from examples.seismic import plot_image
# Plot g
plot_image(g.reshape(model1.shape), vmin=-6e3, vmax=6e3, cmap="cividis")
```
## FWI with SciPy's L-BFGS
With our parallel FWI objective function in place, we can in principle implement a wide range of gradient-based optimization algorithms for FWI, such as (stochastic) gradient descent or the nonlinear conjugate gradient method. However, many optimization algorithms, especially second order methods or algorithms for constrained optimization, are far from trivial to implement correctly from scratch. Luckily, many optimization libraries exist that we can adapt for our purposes.
Here, we demonstrate how to interface with SciPy's *optimize* package to run FWI with a limited-memory quasi-Newton method. The package was not specifically designed for FWI, but this does not matter, as it accepts any Python function that can be evaluated for a current model iterate `x` and returns the function value and gradient:
```
f, g = objective_function(x, args)
```
where `f` is the function value and `g` is a one-dimensional numpy array of type `float64`. Our parallel FWI function does not take the current model as an input argument, but instead expects a geometry structure and the observed data. Therefore, we write a little wrapper function called `loss`, which provides the input argument structure expected by `scipy.optimize`. The function takes the current model iterate `x` (in squared slowness) as the first input argument and overwrites the current velocity in `model` with `x`. The gradient returned to `scipy.optimize` is converted to a numpy array of the required type (`float64`).
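As a sanity check of this calling convention, here is a minimal, self-contained example — a toy quadratic, not the FWI objective — using the same `jac=True` pattern:

```python
import numpy as np
from scipy import optimize

# Toy objective mimicking the (value, gradient) signature scipy expects
# when jac=True: f(x) = ||x - 1||^2 with its analytic gradient.
def toy_loss(x):
    r = x - 1.0
    return np.sum(r**2), (2.0 * r).astype(np.float64)

result = optimize.minimize(toy_loss, np.zeros(3), method='L-BFGS-B', jac=True)
```

The optimizer drives `x` toward the minimizer at 1, exactly as it will later drive the squared-slowness model toward a minimum of the FWI objective.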
```
# Wrapper for scipy optimizer: x is current model in squared slowness [s^2/km^2]
def loss(x, model, geometry, d_obs):
    # Convert x (squared slowness) back to velocity
    v_curr = 1.0/np.sqrt(x.reshape(model.shape))
    # Overwrite current velocity in model (boundary region is not updated)
    model.update('vp', v_curr.reshape(model.shape))
    # Evaluate objective function
    fval, grad = fwi_objective_multi_shots(model, geometry, d_obs)
    return fval, grad.flatten().astype(np.float64)  # scipy expects a double-precision vector
```
The `scipy.optimize` function also takes an optional callback function as an input argument, which can be used to keep track of the model error as a function of the iteration number. The callback function takes the current model iterate `xk` as the only input argument and computes the $\ell_2$-misfit with the true model `m`:
```
# Callback to track model error
model_error = []
def fwi_callback(xk):
    vp = model1.vp.data[model1.nbl:-model1.nbl, model1.nbl:-model1.nbl]
    m = 1.0 / (vp.reshape(-1).astype(np.float64))**2
    model_error.append(np.linalg.norm((xk - m)/m))
```
The final preparation step before we can run our example is the definition of box constraints for the velocity. At each iteration, the optimizer will project the current model iterate onto a feasible set of velocities as defined by the lower and upper bounds `vmin` and `vmax`. Box constraints allow us to prevent velocities from taking negative values, or values that are too small or large for the stability criterion of our time-stepping scheme. We define the box constraints for the velocity in $km/s$ and then convert them to squared slownesses. Furthermore, we define our initial guess `m0`:
```
# Box constraints
vmin = 1.4 # do not allow velocities slower than water
vmax = 4.0
bounds = [(1.0/vmax**2, 1.0/vmin**2) for _ in range(np.prod(model0.shape))] # in [s^2/km^2]
# Initial guess
v0 = model0.vp.data[model0.nbl:-model0.nbl, model0.nbl:-model0.nbl]
m0 = 1.0 / (v0.reshape(-1).astype(np.float64))**2
```
Finally, we run our 2D FWI example by calling the `optimize.minimize` function. The first input argument is the function to be minimized, which is our `loss` function. The second input argument is the starting value, which in our case is our initial model in squared slowness. The third input argument (`args`) is the tuple of arguments passed to the loss function other than `x`. For this example we use the L-BFGS algorithm, a limited-memory quasi-Newton method that builds up an approximation of the (inverse) Hessian as we iterate. As our `loss` function returns the analytically computed gradient (as opposed to a numerically approximated one), we set the argument `jac=True`. Furthermore, we pass our callback function, box constraints and the maximum number of iterations (in this case 5) to the optimizer.
```
from scipy import optimize
# FWI with L-BFGS
ftol = 0.1
maxiter = 5
result = optimize.minimize(loss, m0, args=(model0, geometry0, d_obs), method='L-BFGS-B', jac=True,
                           callback=fwi_callback, bounds=bounds, options={'ftol':ftol, 'maxiter':maxiter, 'disp':True})
# Check termination criteria
assert np.isclose(result['fun'], ftol) or result['nit'] == maxiter
```
After either the maximum iteration number is reached or we find the minimum of the objective function within some tolerance level `ftol`, the optimizer returns a dictionary with the results and some additional information. We convert the result back to the velocity in $km/s$ and compare it to the true model:
```
# Plot FWI result
vp = 1.0/np.sqrt(result['x'].reshape(model1.shape))
plot_image(model1.vp.data[model1.nbl:-model1.nbl, model1.nbl:-model1.nbl], vmin=2.4, vmax=2.8, cmap="cividis")
plot_image(vp, vmin=2.4, vmax=2.8, cmap="cividis")
```
Looking at the model error as a function of the iteration number, we find that the error decays monotonically, as we would expect.
```
import matplotlib.pyplot as plt
# Plot model error
plt.plot(range(1, maxiter+1), model_error); plt.xlabel('Iteration number'); plt.ylabel('L2-model error')
plt.show()
```
## Next steps
In our current example, the master process keeps all shot records in memory and distributes the data to the workers in the parallel pool. This works perfectly fine for 2D and even small 3D examples, but quickly becomes infeasible for large-scale data sets. Therefore, an extension of our current code should include the following steps if we want to scale things up in the future:
- Write shot records directly to disk on each worker and return a file pointer back to the master process.
- Avoid sending the velocity model to the workers and read the model directly onto each worker.
- Include optimal checkpointing or domain-decomposition to address the memory bottleneck in the gradient computations.
For scaling Devito to industry-scale problems and working on data sets in the range of multiple terabytes, it is furthermore necessary to have a fast SEG-Y reader that can scan through large data volumes and efficiently access blocks of data such as single shot records. We also need the SEG-Y reader to interact with Devito and automatically set up `geometry` objects from the SEG-Y headers. For this purpose, please check out the [Julia Devito Inversion framework (JUDI)](https://github.com/slimgroup/JUDI.jl), an extension built on top of Devito in the Julia programming language. JUDI consists of an abstract linear algebra framework and an interface to a fast and parallel SEG-Y reader called [SEGYIO.jl](https://github.com/slimgroup/SegyIO.jl), making it possible to:
- Scan large-scale data sets and create look-up tables from which shot records can be directly accessed through their byte locations (no need to loop over traces or read full files).
- Use look-up tables to automatically set up Devito objects with source and receiver coordinates.
- Work with out-of-core data containers that only read the data into memory when it is used for computations.
You can find a full FWI example of the 3D Overthrust model using a 1.1 TB large data set on [JUDI's Github page](https://github.com/slimgroup/JUDI.jl/blob/master/examples/software_paper/examples/fwi_3D_overthrust_spg.jl).
#Fire up graphlab create
```
import graphlab
```
#Load some house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
```
sales = graphlab.SFrame('home_data.gl/')
sales
```
#Exploring the data for housing sales
The house price is correlated with the number of square feet of living space.
```
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="sqft_living", y="price")
```
#Create a simple regression model of sqft_living to price
Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
```
train_data,test_data = sales.random_split(.8,seed=0)
```
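For readers without GraphLab Create, the same kind of reproducible 80/20 split can be sketched in plain pandas (a toy frame stands in for the `sales` SFrame used in this notebook):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the sales SFrame.
df = pd.DataFrame({'sqft_living': np.arange(100),
                   'price': np.arange(100) * 300})

# Reproducible 80/20 split, analogous to SFrame.random_split(.8, seed=0):
# each row goes to the training set with probability 0.8.
rng = np.random.default_rng(0)
mask = rng.random(len(df)) < 0.8
train_data, test_data = df[mask], df[~mask]
```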
##Build the regression model using only sqft_living as a feature
```
sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'])
```
#Evaluate the simple model
```
print test_data['price'].mean()
print sqft_model.evaluate(test_data)
```
RMSE of about \$255,170!
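The RMSE reported by `evaluate` is simply the root of the mean squared prediction error — a quick sketch with toy numbers:

```python
import numpy as np

# RMSE computed by hand on toy predictions.
y_true = np.array([300000., 450000., 600000.])
y_pred = np.array([320000., 430000., 650000.])
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
```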
#Let's show what our predictions look like
Matplotlib is a Python plotting library. You can install it with:
`pip install matplotlib`
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(test_data['sqft_living'],test_data['price'],'.',
test_data['sqft_living'],sqft_model.predict(test_data),'-')
```
Above: blue dots are original data, green line is the prediction from the simple regression.
Below: we can view the learned regression coefficients.
```
sqft_model.get('coefficients')
```
#Explore other features in the data
To build a more elaborate model, we will explore using more features.
```
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
sales[my_features].show()
sales.show(view='BoxWhisker Plot', x='zipcode', y='price')
```
Pull the bar at the bottom to view more of the data.
98039 is the most expensive zip code.
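The same ranking can be computed directly with a groupby — a minimal pandas sketch with made-up prices:

```python
import pandas as pd

# Toy data: median price per zip code, then take the most expensive one.
df = pd.DataFrame({'zipcode': ['98039', '98039', '98103', '98103'],
                   'price': [2_000_000, 3_000_000, 500_000, 700_000]})
top_zip = df.groupby('zipcode')['price'].median().idxmax()
```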
#Build a regression model with more features
```
my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features)
print my_features
```
##Comparing the results of the simple model with adding more features
```
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
```
The RMSE goes down from \$255,170 to \$179,508 with more features.
#Apply learned models to predict prices of 3 houses
The first house we will use is considered an "average" house in Seattle.
```
house1 = sales[sales['id']=='5309101200']
house1
```
<img src="house-5309101200.jpg">
```
print house1['price']
print sqft_model.predict(house1)
print my_features_model.predict(house1)
```
In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.
##Prediction for a second, fancier house
We will now examine the predictions for a fancier house.
```
house2 = sales[sales['id']=='1925069082']
house2
```
<img src="house-1925069082.jpg">
```
print sqft_model.predict(house2)
print my_features_model.predict(house2)
```
In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house.
##Last house, super fancy
Our last house is a very large one owned by a famous Seattleite.
```
bill_gates = {'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]}
```
<img src="house-bill-gates.jpg">
```
print my_features_model.predict(graphlab.SFrame(bill_gates))
```
The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.)
# ART decision tree classifier attack
This notebook shows how to compute adversarial examples on decision trees (as described by Papernot et al. in https://arxiv.org/abs/1605.07277). Due to the structure of the decision tree, an adversarial example can be computed without any explicit gradients, only by traversing the learned tree structure.
Consider the following simple decision tree for four dimensional data, where we go to the left if a condition is true:
```
              F1<3
             /    \
         F2<5      F2>2
        /    \    /    \
    F4>3     C1  F3<1   C3*
    /  \         /  \
  C1    C2      C3   C1
```
Given sample [4,4,1,1], the tree outputs C3 (as indicated by the star). To misclassify the sample, we walk one node up and explore the subtree on the left. We find the leaf outputting C1 and change the two features, obtaining [4,1.9,0.9,1]. In this implementation, we change only the features with wrong values, and specify the offset in advance.
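The structures such an attack traverses are exposed directly by scikit-learn — a minimal sketch on toy data (note that sklearn's convention differs from the diagram above: the *left* child is taken when `feature <= threshold`):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Fit a tiny tree and inspect the arrays an attack would walk:
# tree_.feature[i] / tree_.threshold[i] give the split at node i;
# tree_.children_left / tree_.children_right give the subtree indices
# (-1 marks a leaf).
X = np.array([[0., 0.], [1., 1.], [2., 0.], [3., 1.]])
y = np.array([0, 0, 1, 1])
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
tree = clf.tree_
root_split = (tree.feature[0], tree.threshold[0])
```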
## Applying the attack
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_digits
from matplotlib import pyplot as plt
import numpy as np
from art.attacks.evasion import DecisionTreeAttack
from art.estimators.classification import SklearnClassifier
digits = load_digits()
X = digits.data
y = digits.target
clf = DecisionTreeClassifier()
clf.fit(X,y)
clf_art = SklearnClassifier(clf)
print(clf.predict(X[:14]))
plt.imshow(X[0].reshape(8,8))
plt.colorbar()
```
We now craft adversarial examples and plot their classification. The difference is really small, and often only one or two features are changed.
```
attack = DecisionTreeAttack(clf_art)
adv = attack.generate(X[:14])
print(clf.predict(adv))
plt.imshow(adv[0].reshape(8,8))
# plt.imshow((X[0]-adv[0]).reshape(8,8)) ##use this to plot the difference
```
The change is possibly larger if we specify which class the sample should be (mis-)classified as. To do this, we just specify a label for each attack point.
```
adv = attack.generate(X[:14],np.array([6,6,7,7,8,8,9,9,1,1,2,2,3,3]))
print(clf.predict(adv))
plt.imshow(adv[0].reshape(8,8))
```
Finally, the attack has an offset parameter which specifies how close the new feature value is to the learned threshold of the tree. The default value is very small (0.001), but it can be set larger when desired. Note, however, that a very large offset might yield adversarial examples outside the range of normal features!
```
attack = DecisionTreeAttack(clf_art,offset=20.0)
adv = attack.generate(X[:14])
print(clf.predict(adv))
plt.imshow(adv[0].reshape(8,8))
plt.colorbar()
```
<img src= "images/APEX.png">
## Introduction: What's APEX?
APEX is a portfolio trade scheduler that optimizes execution with the latest intraday risk and market impact models from Goldman Sachs' Quantitative Execution Services (QES) team.
## Modeling Pillars
<img src= "images/three_pillars.png">
## Constraints and Features
<img src= "images/apex_constraints_and_features.png">
## The APEX Trade Lifecycle
<img src= "images/how_apex_works.png">
### First, let's load a sample portfolio:
#### Import Libs and Utils:
```
from qes_utils import persistXls, plotCost, plotVar, plotBuySellNet, plotGrossRemaining, plotMultiStrategyPortfolioLevelAnalytics
from gs_quant.api.gs.assets import GsAssetApi
from gs_quant.session import GsSession, Environment
from gs_quant.common import Position
from gs_quant.target.risk import OptimizationRequest, OptimizationType
from gs_quant.api.gs.risk import GsRiskApi
import matplotlib.pyplot as plt
import pandas as pd
import datetime
import numpy as np
import copy
from matplotlib import cm
```
#### Establish GS_Quant Connection:
- Fill in client_id and client_secret
- Set up Marquee API: https://marquee.gs.com/s/developer/docs/getting-started
- Once you create the application, click on the Application page and scroll down to the "Scope" section. Request the "read_product_data" & "run_analytics" scopes for your application.
```
print('INFO: Setting up Marquee Connection')
client_id = ''      # fill in your application's client id
client_secret = ''  # fill in your application's client secret
GsSession.use(Environment.PROD, client_id=client_id, client_secret=client_secret, scopes=['read_product_data', 'run_analytics'])
```
#### Set up the portfolio:
```
print('INFO: Setting up portfolio to schedule using APEX...')
portfolio_input = pd.read_csv('trade_list_world.csv').rename(columns={'Symbol': 'sedol', 'Shares': 'qty'})
portfolio_input.dtypes
```
#### Convert Identifier (SEDOL) to marqueeids:
SEDOL access needs to be requested on Marquee with the following steps:
- Go to https://marquee.gs.com/s/developer/datasets/SEDOL
- Select an application to request access for
- Request will be auto approved
```
assets = GsAssetApi.get_many_assets(sedol=list(portfolio_input['sedol']), fields=['sedol', 'rank'], listed=[True], type='Single Stock')
identifier_to_marqueeid_map = pd.DataFrame([{'sedol': list(filter(lambda x: x.type=='SED', i.identifiers))[0].value, 'ID': i.id, 'rank': i.rank} for i in assets])\
.sort_values(['sedol', 'rank'], ascending=False).groupby('sedol').head(1)[['sedol','ID']].rename(columns={'ID': 'marqueeid'})
print(f'found {len(identifier_to_marqueeid_map)} sedol to marquee id mappings...')
```
#### Identify assets with missing marquee ids and drop them from the portfolio
```
portfolio_input = portfolio_input.merge(identifier_to_marqueeid_map, how='left', on='sedol')
missing_marqueeids = portfolio_input[portfolio_input['marqueeid'].isnull()]
if len(missing_marqueeids):
    print(f'WARNING: the following sedols are missing marqueeids:\n{missing_marqueeids}\ndropping from the optimization...')
else:
    print('INFO: all assets have been successfully converted to marquee ids')
portfolio_input = portfolio_input.dropna()
portfolio_input.head()
```
### At this point, we have a portfolio we can optimize using APEX.
### We'll run two variations:
##### 1. single optimization analysis - optimize the basket using defined parameters and investigate the cost-risk trade-off.
##### 2. trade scenario analysis - run multiple optimizations upon different risk aversion (urgency) parameters and compare the cost-risk trade-off among optimized execution strategies
### 1. APEX Optimization: run my trade list in the APEX optimizer and explore the various analytics:
#### In this section, we'll explore how to set optimization parameters and how to display multiple optimal-trajectory analytics to develop further intuition for the decisions made by APEX
We'll run an APEX-IS (arrival) risk-cost minimization optimal trade allocation, of the following form:
\begin{equation*}
Min \displaystyle \Bigg( \lambda \sum_{t=1}^T (\mbox{Risk of Residual Holdings}) + (1-\lambda) \sum_{t=1}^T (\mbox{Market Impact of Trades}) \Bigg)
\end{equation*}
\begin{equation*}s.t.\end{equation*}
\begin{equation*}Ax <= b\end{equation*}
where:
\begin{equation*}(\mbox{Risk of Residual Holdings})\end{equation*}
- Incorporates the intraday and overnight expected risk, utilizing our high-frequency intraday QES covariances. In other words: "every $ I decide to trade later is running the risk of missing the arrival price"
\begin{equation*}(\mbox{Market Impact of Trades})\end{equation*}
- Denotes the expected market impact per asset, as a function of the physical interaction with the order book. In other words: "every $ that I trade now will incur some expected market impact, based on the intraday predicted evolution of spread/volume/volatility/participation rate, and other intraday calibrated parameters"
\begin{equation*}\lambda\end{equation*}
- Risk Aversion parameter
\begin{equation*}Ax <= b\end{equation*}
- set of linear constraints (see features available at the top of the notebook)
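To build intuition for the $\lambda$ trade-off, here is a toy schedule optimization (not the APEX model): trade one unit over $T$ periods, penalizing both the residual held after each period and the squared size of each trade:

```python
import numpy as np
from scipy import optimize

T, lam = 4, 0.5  # number of periods and risk aversion (toy values)

def schedule_cost(x):
    residual = 1.0 - np.cumsum(x)   # holdings still at risk after each period
    impact = np.sum(x**2)           # quadratic stand-in for market impact
    return lam * np.sum(residual**2) + (1 - lam) * impact

# Trade exactly one unit in total, no selling within the schedule.
cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},)
res = optimize.minimize(schedule_cost, np.full(T, 1.0 / T),
                        bounds=[(0.0, 1.0)] * T, constraints=cons)
# The risk term front-loads the schedule: early trades are larger than late ones.
```

Raising `lam` (higher urgency) pushes even more of the trading into the early periods; lowering it spreads the schedule out to reduce impact.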
#### Set up the optimization constraints
| Optimisation Parameters | Description | Value Chosen |
| :- | :- | -: |
| Start Time \ End Time | APEX allowed "Day1" trade horizon, in GMT* | 11pm previous day to 11pm |
| Urgency | APEX Urgency, from VERY_LOW to VERY_HIGH | Medium |
| Target Benchmark | Currently supports 'IS', 'CLOSE' | IS |
| Imbalance | (Optional) setting dollar imbalance for the trade duration; "the net residual must be within +-5% of the residual gross to trade, throughout the entire trade duration" | 0.05 (5%) |
| Participation rate | Setting volume cap for trading | 0.075 (7.5%) |
- Note that the allowed APEX start/end times range from 23:00 of the previous day to 23:00 of the query day, in GMT. For example, if today is the 9th of October, a global APEX optimization can run from a start time of 23:00 on T-1 to 23:00 on T.
- Please also note that APEX will automatically optimize up to 5 business days, providing an optimized intraday solution with a granularity of 30 or 60 minutes.
- For a full set of parameters, please refer to the constraints & features image at the top, review the APEX api guide or contact [gs-qes-quant@gs.com](mailto:gs-qes-quant@gs.com)
```
## set optimization configuration
print('INFO: Constructing Optimization Request...')
date_today = datetime.datetime.now().strftime('%Y-%m-%d')
date_yesterday = (datetime.datetime.now() - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
apex_optimization_config = {
    'executionStartTime': date_yesterday + 'T23:00:00.000Z',  # execution start time
    'executionEndTime': date_today + 'T21:15:00.000Z',  # execution end time (day 1; can run multiday if not complete on day 1)
    'waitForResults': False,
    'parameters': {'urgency': 'MEDIUM',  # VERY_LOW, LOW, MEDIUM, HIGH, VERY_HIGH
                   'targetBenchmark': 'IS',  # or CLOSE
                   'imbalance': 0.05,  # optional: net residual stays within +-5% of residual gross to trade
                   'participationRate': 0.075  # volume cap of 7.5%
                   },
}
```
#### Send Optimization + Analytics request to Marquee
```
def sendApexRequestAndGetAnalytics(portfolio_input, apex_optimization_config):
    positions = [Position(asset_id=row.marqueeid, quantity=row.qty) for _, row in portfolio_input.iterrows()]
    print('setting up the optimization request....')
    request = OptimizationRequest(positions=positions,
                                  execution_start_time=apex_optimization_config['executionStartTime'],
                                  execution_end_time=apex_optimization_config['executionEndTime'],
                                  parameters=apex_optimization_config['parameters'],
                                  **{'type': OptimizationType.APEX})
    print('Sending the request to the marquee service...')
    opt = GsRiskApi.create_pretrade_execution_optimization(request)
    analytics_results = GsRiskApi.get_pretrade_execution_optimization(opt.get('optimizationId'))
    print('COMPLETE!')
    return analytics_results

results_dict = sendApexRequestAndGetAnalytics(portfolio_input, apex_optimization_config)
print('INFO: High Level Cost estimation and % expected Completion:')
pd.DataFrame(results_dict['analytics']['portfolioAnalyticsDaily']).set_index('tradeDayNumber')
print('missing assets:')
pd.DataFrame(results_dict['analytics']['assetsExcluded'])
```
#### Actual Optimization Parameters Used in APEX
- Although a set of optimization parameters was specified above, APEX might conclude that their joint feasible space is empty (an infeasible set).
- In that case, APEX softens, relaxes, or drops constraints in a hierarchical fashion.
```
constraints_hierarchy = pd.DataFrame(results_dict['analytics']['constraintsConsultations'])['constraints']
pd.concat([pd.DataFrame(constraints_hierarchy.values[i]).assign(iteration=i) for i in constraints_hierarchy.index]).set_index(['iteration', 'name'])['status'].unstack().T
```
#### What kind of analytics does APEX provide?
##### APEX provides a vast set of numbers that help unravel the decisions made by the optimizer:
```
results_dict['analytics'].keys()
```
#### Visualise Your Optimisation Results
```
analytics_result_analytics = results_dict['analytics']
intraday = pd.DataFrame(analytics_result_analytics['portfolioAnalyticsIntraday'])
intraday_to_plot = intraday.assign(time = lambda x: pd.to_datetime(x['time'])).set_index('time')
```
#### Four examples of visualizing your intraday analysis throughout trade date
- Gross Remaining
- Buy/Sell/Net
- Cost Contribution
- Risk Contribution
```
pd.concat([intraday_to_plot.head(5), intraday_to_plot.tail(5)])  # DataFrame.append is deprecated in recent pandas
plotGrossRemaining(intraday_to_plot)
plotBuySellNet(intraday_to_plot)
plotCost(intraday_to_plot)
plotVar(intraday_to_plot)
```
###### Sources: Goldman Sachs, Bloomberg, Reuters, Axioma
##### The possibilities around these analytics are endless; here are a couple of examples, derived from the various analytics dataframes we use for our APEX clients:
<img src= "images/apex_analytics_examples.png">
###### Sources: Goldman Sachs, Bloomberg, Reuters, Axioma
##### Save all results to Excel for further exploration:
```
xls_path = persistXls(xls_report=results_dict['analytics'],
path='',
filename='apex_optimization_detailed_analytics',
indentifier_marqueeid_map=portfolio_input[
[identifier_type, 'marqueeid']])
print('saving all analytics frames to {0}...'.format(xls_path))
```
<img src= "images/apex_excel_example.png">
### 2. APEX Optimization - Trade Scenario Analysis: run my trade list in the APEX optimizer across multiple risk aversion/urgency parameters to assess the ideal parameter set.
#### Define a function for running multiple optimizations, keeping all constraints intact and changing only the urgency:
```
def optimisationMulti(portfolio_input, apex_optimization_config, urgency_list = ['VERY_LOW', 'LOW', 'MEDIUM', 'HIGH', 'VERY_HIGH']):
results_dict_multi = {}
apex_optimization_config_temp = copy.deepcopy(apex_optimization_config)
for u in urgency_list:
apex_optimization_config_temp['parameters']['urgency'] = u
apex_optimization_config_temp['parameters']['imbalance'] = .3
apex_optimization_config_temp['parameters']['participationRate'] = .5
print('INFO Running urgency={0} optimization....'.format(u))
results_dict_multi[u] = sendApexRequestAndGetAnalytics(portfolio_input, apex_optimization_config_temp)
print('INFO: High Level Cost estimation and % expected Completion:\n{0}'\
.format(pd.DataFrame(results_dict_multi[u]['analytics']['portfolioAnalyticsDaily'])))
return results_dict_multi
```
##### Run Optimization Across Urgencies
```
urgency_list = ['VERY_LOW', 'LOW', 'MEDIUM', 'HIGH', 'VERY_HIGH']
results_dict_multi = optimisationMulti(portfolio_input = portfolio_input,\
apex_optimization_config = apex_optimization_config,\
urgency_list=urgency_list)
```
#### Compare Results from Different Urgencies on Day 1:
```
ordering = ['grey', 'sky_blue', 'black', 'cyan', 'light_blue', 'dark_green']
urgency_list = ['VERY_LOW', 'LOW', 'MEDIUM', 'HIGH', 'VERY_HIGH']
ptAnalyticsDaily_list = []
for u in urgency_list:
ptAnalyticsDaily_list.append(pd.DataFrame(results_dict_multi[u]['analytics']['portfolioAnalyticsDaily']).iloc[[0]].assign(urgency=u) )
pd.concat(ptAnalyticsDaily_list).set_index('urgency')
```
#### Visualise Optimization Results
- Plotting Trade_cum_sum, Total Cost, and Total Risk against time for the chosen urgencies
- Trade_cum_sum: the cumulative sum of the intraday trades
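For intuition on the first metric: Trade_cum_sum is just a running total of per-bucket trade percentages, reaching 100% when the order completes (illustrative numbers only, not output of the optimizer):

```python
import numpy as np

# hypothetical % of the order traded in each intraday bucket
trade_pct = np.array([10.0, 15.0, 25.0, 30.0, 20.0])
trade_cum_sum = np.cumsum(trade_pct)
print(trade_cum_sum)  # [ 10.  25.  50.  80. 100.]
```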
```
metrics_list = ['tradePercentageCumulativeSum', 'totalRiskBps', 'totalCost', 'advAveragePercentage']
title = ['Intraday Trade', 'Risk', 'Cost', 'Participation Rate']
ylabel = ['Trade Cum Sum %', 'Risk(bps) ', 'Cost(bps)', 'Prate(%)']
plotMultiStrategyPortfolioLevelAnalytics(results_dict_multi, metrics_list, title, ylabel)
```
###### Sources: Goldman Sachs, Bloomberg, Reuters, Axioma
#### Plot the optimal Efficient Frontier - the expected Market Impact vs. Residual Risk Trade-off:
```
initial_gross = pd.DataFrame(results_dict_multi['VERY_LOW']['analytics']['portfolioAnalyticsIntraday'])['gross'].iloc[0]
risk_cost_tradeoff = pd.concat( [\
pd.DataFrame(results_dict_multi[urgency]['analytics']['portfolioAnalyticsDaily'])\
[['estimatedCostBps', 'meanExpectedCostVersusBenchmark']]\
.assign(totalRiskBps = lambda x: x['estimatedCostBps'] - x['meanExpectedCostVersusBenchmark'])\
.iloc[0].rename(urgency).to_frame()
for urgency in ['VERY_LOW', 'LOW', 'MEDIUM']], axis=1).T
cmap = cm.get_cmap('Set1')
ax = risk_cost_tradeoff.plot.scatter(x='totalRiskBps', y='meanExpectedCostVersusBenchmark',\
title='The Example Basket Efficient Frontier',\
colormap=cmap, c=range(len(risk_cost_tradeoff)), s=100)
for k, v in risk_cost_tradeoff[['totalRiskBps', 'meanExpectedCostVersusBenchmark']].iterrows():
ax.annotate(k, v,
xytext=(10,-5), textcoords='offset points',
family='sans-serif', fontsize=10, color='darkslategrey')
ax.plot(risk_cost_tradeoff['totalRiskBps'].values, risk_cost_tradeoff['meanExpectedCostVersusBenchmark'].values,
color='grey', alpha=.5)
```
###### Sources: Goldman Sachs, Bloomberg, Reuters, Axioma
# And That's It! Find below a holistic view of our APEX platform in visual form:
<img src= "images/apex_box.png">
##### Disclaimers:
###### Indicative Terms/Pricing Levels: This material may contain indicative terms only, including but not limited to pricing levels. There is no representation that any transaction can or could have been effected at such terms or prices. Proposed terms and conditions are for discussion purposes only. Finalized terms and conditions are subject to further discussion and negotiation.
###### www.goldmansachs.com/disclaimer/sales-and-trading-invest-rec-disclosures.html If you are not accessing this material via Marquee ContentStream, a list of the author's investment recommendations disseminated during the preceding 12 months and the proportion of the author's recommendations that are 'buy', 'hold', 'sell' or other over the previous 12 months is available by logging into Marquee ContentStream using the link below. Alternatively, if you do not have access to Marquee ContentStream, please contact your usual GS representative who will be able to provide this information to you.
###### Please refer to https://marquee.gs.com/studio/ for price information of corporate equity securities.
###### Notice to Australian Investors: When this document is disseminated in Australia by Goldman Sachs & Co. LLC ("GSCO"), Goldman Sachs International ("GSI"), Goldman Sachs Bank Europe SE ("GSBE"), Goldman Sachs (Asia) L.L.C. ("GSALLC"), or Goldman Sachs (Singapore) Pte ("GSSP") (collectively the "GS entities"), this document, and any access to it, is intended only for a person that has first satisfied the GS entities that:
###### • the person is a Sophisticated or Professional Investor for the purposes of section 708 of the Corporations Act of Australia; and
###### • the person is a wholesale client for the purpose of section 761G of the Corporations Act of Australia.
###### To the extent that the GS entities are providing a financial service in Australia, the GS entities are each exempt from the requirement to hold an Australian financial services licence for the financial services they provide in Australia. Each of the GS entities are regulated by a foreign regulator under foreign laws which differ from Australian laws, specifically:
###### • GSCO is regulated by the US Securities and Exchange Commission under US laws;
###### • GSI is authorised by the Prudential Regulation Authority and regulated by the Financial Conduct Authority and the Prudential Regulation Authority, under UK laws;
###### • GSBE is subject to direct prudential supervision by the European Central Bank and in other respects is supervised by the German Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, BaFin) and Deutsche Bundesbank;
###### • GSALLC is regulated by the Hong Kong Securities and Futures Commission under Hong Kong laws; and
###### • GSSP is regulated by the Monetary Authority of Singapore under Singapore laws.
###### Notice to Brazilian Investors
###### Marquee is not meant for the general public in Brazil. The services or products provided by or through Marquee, at any time, may not be offered or sold to the general public in Brazil. You have received a password granting access to Marquee exclusively due to your existing relationship with a GS business located in Brazil. The selection and engagement with any of the offered services or products through Marquee, at any time, will be carried out directly by you. Before acting to implement any chosen service or products, provided by or through Marquee you should consider, at your sole discretion, whether it is suitable for your particular circumstances and, if necessary, seek professional advice. Any steps necessary in order to implement the chosen service or product, including but not limited to remittance of funds, shall be carried out at your discretion. Accordingly, such services and products have not been and will not be publicly issued, placed, distributed, offered or negotiated in the Brazilian capital markets and, as a result, they have not been and will not be registered with the Brazilian Securities and Exchange Commission (Comissão de Valores Mobiliários), nor have they been submitted to the foregoing agency for approval. Documents relating to such services or products, as well as the information contained therein, may not be supplied to the general public in Brazil, as the offering of such services or products is not a public offering in Brazil, nor used in connection with any offer for subscription or sale of securities to the general public in Brazil.
###### The offer of any securities mentioned in this message may not be made to the general public in Brazil. Accordingly, any such securities have not been nor will they be registered with the Brazilian Securities and Exchange Commission (Comissão de Valores Mobiliários) nor has any offer been submitted to the foregoing agency for approval. Documents relating to the offer, as well as the information contained therein, may not be supplied to the public in Brazil, as the offer is not a public offering of securities in Brazil. These terms will apply on every access to Marquee.
###### Ouvidoria Goldman Sachs Brasil: 0800 727 5764 e/ou ouvidoriagoldmansachs@gs.com
###### Horário de funcionamento: segunda-feira à sexta-feira (exceto feriados), das 9hs às 18hs.
###### Ombudsman Goldman Sachs Brazil: 0800 727 5764 and / or ouvidoriagoldmansachs@gs.com
###### Available Weekdays (except holidays), from 9 am to 6 pm.
###### Note to Investors in Israel: GS is not licensed to provide investment advice or investment management services under Israeli law.
###### Notice to Investors in Japan
###### Marquee is made available in Japan by Goldman Sachs Japan Co., Ltd.
###### If any credit ratings are contained in this material or any attachments, those that have been issued by Japan Credit Rating Agency, Ltd. (JCR) or Rating and Investment Information, Inc. (R&I) are credit ratings that have been issued by a credit rating agency registered in Japan (registered credit ratings). Other credit ratings are unregistered unless denoted as being registered. Before using unregistered credit ratings to make investment decisions, please carefully read "Explanation Regarding Unregistered Credit Ratings" (http://www.goldmansachs.com/disclaimer/ratings.html).
###### Notice to Mexican Investors: Information contained herein is not meant for the general public in Mexico. The services or products provided by or through Goldman Sachs Mexico, Casa de Bolsa, S.A. de C.V. (GS Mexico) may not be offered or sold to the general public in Mexico. You have received information herein exclusively due to your existing relationship with a GS Mexico or any other Goldman Sachs business. The selection and engagement with any of the offered services or products through GS Mexico will be carried out directly by you at your own risk. Before acting to implement any chosen service or product provided by or through GS Mexico you should consider, at your sole discretion, whether it is suitable for your particular circumstances and, if necessary, seek professional advice. Information contained herein related to GS Mexico services or products, as well as any other information, shall not be considered as a product coming from research, nor it contains any recommendation to invest, not to invest, hold or sell any security and may not be supplied to the general public in Mexico.
###### Notice to New Zealand Investors: When this document is disseminated in New Zealand by Goldman Sachs & Co. LLC ("GSCO") , Goldman Sachs International ("GSI"), Goldman Sachs Bank Europe SE ("GSBE"), Goldman Sachs (Asia) L.L.C. ("GSALLC") or Goldman Sachs (Singapore) Pte ("GSSP") (collectively the "GS entities"), this document, and any access to it, is intended only for a person that has first satisfied; the GS entities that the person is someone:
###### (i) who is an investment business within the meaning of clause 37 of Schedule 1 of the Financial Markets Conduct Act 2013 (New Zealand) (the "FMC Act");
###### (ii) who meets the investment activity criteria specified in clause 38 of Schedule 1 of the FMC Act;
###### (iii) who is large within the meaning of clause 39 of Schedule 1 of the FMC Act; or
###### (iv) is a government agency within the meaning of clause 40 of Schedule 1 of the FMC Act.
###### No offer to acquire the interests is being made to you in this document. Any offer will only be made in circumstances where disclosure is not required under the Financial Markets Conducts Act 2013 or the Financial Markets Conduct Regulations 2014.
###### Notice to Swiss Investors: This is marketing material for financial instruments or services. The information contained in this material is for general informational purposes only and does not constitute an offer, solicitation, invitation or recommendation to buy or sell any financial instruments or to provide any investment advice or service of any kind.
###### THE INFORMATION CONTAINED IN THIS DOCUMENT DOES NOT CONSTITUTE, AND IS NOT INTENDED TO CONSTITUTE, A PUBLIC OFFER OF SECURITIES IN THE UNITED ARAB EMIRATES IN ACCORDANCE WITH THE COMMERCIAL COMPANIES LAW (FEDERAL LAW NO. 2 OF 2015), ESCA BOARD OF DIRECTORS' DECISION NO. (9/R.M.) OF 2016, ESCA CHAIRMAN DECISION NO 3/R.M. OF 2017 CONCERNING PROMOTING AND INTRODUCING REGULATIONS OR OTHERWISE UNDER THE LAWS OF THE UNITED ARAB EMIRATES. ACCORDINGLY, THE INTERESTS IN THE SECURITIES MAY NOT BE OFFERED TO THE PUBLIC IN THE UAE (INCLUDING THE DUBAI INTERNATIONAL FINANCIAL CENTRE AND THE ABU DHABI GLOBAL MARKET). THIS DOCUMENT HAS NOT BEEN APPROVED BY, OR FILED WITH THE CENTRAL BANK OF THE UNITED ARAB EMIRATES, THE SECURITIES AND COMMODITIES AUTHORITY, THE DUBAI FINANCIAL SERVICES AUTHORITY, THE FINANCIAL SERVICES REGULATORY AUTHORITY OR ANY OTHER RELEVANT LICENSING AUTHORITIES IN THE UNITED ARAB EMIRATES. IF YOU DO NOT UNDERSTAND THE CONTENTS OF THIS DOCUMENT, YOU SHOULD CONSULT WITH A FINANCIAL ADVISOR. THIS DOCUMENT IS PROVIDED TO THE RECIPIENT ONLY AND SHOULD NOT BE PROVIDED TO OR RELIED ON BY ANY OTHER PERSON.
| github_jupyter |
# Welcome to AI for Science Bootcamp
The objective of this bootcamp is to give an introduction to the application of Artificial Intelligence (AI) algorithms in science (High Performance Computing (HPC) simulations). This bootcamp will introduce participants to the fundamentals of AI and how they can be applied to different HPC simulation domains.
The following contents will be covered during the bootcamp:
- [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)
- [Steady State Flow using Neural Networks](CFD/Start_Here.ipynb)
## Quick GPU Check
Before moving forward, let us check whether the TensorFlow backend is able to see and use a GPU.
```
# Import Necessary Libraries
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
tf.test.gpu_device_name()
```
The output of the cell above should list all the compatible GPUs available on the system. If no GPU device is listed, or you see an error, no compatible GPU was present on the system and future calls may run on the CPU, consuming more time.
## [CNN Primer and Keras 101](Intro_to_DL/Part_2.ipynb)
In this notebook, participants will be introduced to Convolutional Neural Networks and how to implement one using the Keras API. For an absolute beginner to CNNs and Keras, this notebook serves as a good starting point.
**By the end of this notebook you will:**
- Understand the Machine Learning pipeline
- Understand how a Convolution Neural Network works
- Write your own Deep Learning classifier and train it.
For in depth understanding of Deep Learning Concepts, visit [NVIDIA Deep Learning Institute](https://www.nvidia.com/en-us/deep-learning-ai/education/)
## [Steady State Flow using Neural Networks](CFD/Start_Here.ipynb)
In this notebook, participants will be introduced to how Deep Learning can be applied in the field of Fluid Dynamics.
**Contents of this notebook:**
- Understanding the problem statement
- Building a Deep Learning pipeline
  - Understanding the data and task
  - Discussing various models
  - Defining neural network parameters
- Fully Connected Networks
- Convolutional models
- Advanced networks
**By the end of the notebook participants will:**
- Understand the process of applying Deep Learning to Computational Fluid Dynamics
- Understand how residual blocks work
- Benchmark different models and compare them against one another
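The residual-block idea mentioned above can be sketched in plain NumPy (a conceptual illustration with made-up weights, not the notebook's Keras implementation): the block's output is its input plus a learned transformation of it, so the skip connection lets identity information and gradients pass straight through.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, w1, w2):
    # f(x): two small dense layers with a ReLU in between
    fx = relu(x @ w1) @ w2
    # skip connection: add the input back onto the transformed signal
    return relu(x + fx)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (1, 8) - same shape as the input, as the addition requires
```

Note that if the learned transformation is zero, the block reduces to (an activation of) the identity, which is what makes very deep stacks of such blocks trainable.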
## Licensing
This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)
| github_jupyter |
**Chapter 1 โ The Machine Learning landscape**
_This is the code used to generate some of the figures in chapter 1._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "fundamentals"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
```
# Code example 1-1
This function just merges the OECD's life satisfaction data and the IMF's GDP per capita data. It's a bit too long and boring and it's not specific to Machine Learning, which is why I left it out of the book.
```
def prepare_country_stats(oecd_bli, gdp_per_capita):
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
```
The code in the book expects the data files to be located in the current directory. I just tweaked it here to fetch the files in datasets/lifesat.
```
import os
datapath = os.path.join("datasets", "lifesat", "")
# Code example
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.linear_model
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
```
# Note: you can ignore the rest of this notebook, it just generates many of the figures in chapter 1.
# Load and prepare Life satisfaction data
If you want, you can get fresh data from the OECD's website.
Download the CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI
and save it to `datasets/lifesat/`.
```
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
oecd_bli.head(2)
oecd_bli["Life satisfaction"].head()
```
# Load and prepare GDP per capita data
Just like above, you can update the GDP per capita data if you want. Just download data from http://goo.gl/j1MSKe (=> imf.org) and save it to `datasets/lifesat/`.
```
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
gdp_per_capita.head(2)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
full_country_stats
full_country_stats[["GDP per capita", 'Life satisfaction']].loc["United States"]
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
sample_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
missing_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[remove_indices]
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
position_text = {
"Hungary": (5000, 1),
"Korea": (18000, 1.7),
"France": (29000, 2.4),
"Australia": (40000, 3.0),
"United States": (52000, 3.8),
}
for country, pos_text in position_text.items():
pos_data_x, pos_data_y = sample_data.loc[country]
country = "U.S." if country == "United States" else country
plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
plt.plot(pos_data_x, pos_data_y, "ro")
save_fig('money_happy_scatterplot')
plt.show()
sample_data.to_csv(os.path.join("datasets", "lifesat", "lifesat.csv"))
sample_data.loc[list(position_text.keys())]
import numpy as np
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, 2*X/100000, "r")
plt.text(40000, 2.7, r"$\theta_0 = 0$", fontsize=14, color="r")
plt.text(40000, 1.8, r"$\theta_1 = 2 \times 10^{-5}$", fontsize=14, color="r")
plt.plot(X, 8 - 5*X/100000, "g")
plt.text(5000, 9.1, r"$\theta_0 = 8$", fontsize=14, color="g")
plt.text(5000, 8.2, r"$\theta_1 = -5 \times 10^{-5}$", fontsize=14, color="g")
plt.plot(X, 4 + 5*X/100000, "b")
plt.text(5000, 3.5, r"$\theta_0 = 4$", fontsize=14, color="b")
plt.text(5000, 2.6, r"$\theta_1 = 5 \times 10^{-5}$", fontsize=14, color="b")
save_fig('tweaking_model_params_plot')
plt.show()
from sklearn import linear_model
lin1 = linear_model.LinearRegression()
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
lin1.fit(Xsample, ysample)
t0, t1 = lin1.intercept_[0], lin1.coef_[0][0]
t0, t1
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.text(5000, 3.1, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 2.2, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
save_fig('best_fit_model_plot')
plt.show()
cyprus_gdp_per_capita = gdp_per_capita.loc["Cyprus"]["GDP per capita"]
print(cyprus_gdp_per_capita)
cyprus_predicted_life_satisfaction = lin1.predict([[cyprus_gdp_per_capita]])[0][0]  # predict expects a 2D array
cyprus_predicted_life_satisfaction
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3), s=1)
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.axis([0, 60000, 0, 10])
plt.text(5000, 7.5, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 6.6, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
plt.plot([cyprus_gdp_per_capita, cyprus_gdp_per_capita], [0, cyprus_predicted_life_satisfaction], "r--")
plt.text(25000, 5.0, r"Prediction = 5.96", fontsize=14, color="b")
plt.plot(cyprus_gdp_per_capita, cyprus_predicted_life_satisfaction, "ro")
save_fig('cyprus_prediction_plot')
plt.show()
sample_data[7:10]
(5.1+5.7+6.5)/3
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
# Code example
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.linear_model
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
oecd_bli, gdp_per_capita = backup
missing_data
position_text2 = {
"Brazil": (1000, 9.0),
"Mexico": (11000, 9.0),
"Chile": (25000, 9.0),
"Czech Republic": (35000, 9.0),
"Norway": (60000, 3),
"Switzerland": (72000, 3.0),
"Luxembourg": (90000, 3.0),
}
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
for country, pos_text in position_text2.items():
pos_data_x, pos_data_y = missing_data.loc[country]
plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
plt.plot(pos_data_x, pos_data_y, "rs")
X=np.linspace(0, 110000, 1000)
plt.plot(X, t0 + t1*X, "b:")
lin_reg_full = linear_model.LinearRegression()
Xfull = np.c_[full_country_stats["GDP per capita"]]
yfull = np.c_[full_country_stats["Life satisfaction"]]
lin_reg_full.fit(Xfull, yfull)
t0full, t1full = lin_reg_full.intercept_[0], lin_reg_full.coef_[0][0]
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "k")
save_fig('representative_training_data_scatterplot')
plt.show()
full_country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
from sklearn import preprocessing
from sklearn import pipeline
poly = preprocessing.PolynomialFeatures(degree=60, include_bias=False)
scaler = preprocessing.StandardScaler()
lin_reg2 = linear_model.LinearRegression()
pipeline_reg = pipeline.Pipeline([('poly', poly), ('scal', scaler), ('lin', lin_reg2)])
pipeline_reg.fit(Xfull, yfull)
curve = pipeline_reg.predict(X[:, np.newaxis])
plt.plot(X, curve)
save_fig('overfitting_model_plot')
plt.show()
full_country_stats.loc[[c for c in full_country_stats.index if "W" in c.upper()]]["Life satisfaction"]
gdp_per_capita.loc[[c for c in gdp_per_capita.index if "W" in c.upper()]].head()
plt.figure(figsize=(8,3))
plt.xlabel("GDP per capita")
plt.ylabel('Life satisfaction')
plt.plot(list(sample_data["GDP per capita"]), list(sample_data["Life satisfaction"]), "bo")
plt.plot(list(missing_data["GDP per capita"]), list(missing_data["Life satisfaction"]), "rs")
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "r--", label="Linear model on all data")
plt.plot(X, t0 + t1*X, "b:", label="Linear model on partial data")
ridge = linear_model.Ridge(alpha=10**9.5)
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
ridge.fit(Xsample, ysample)
t0ridge, t1ridge = ridge.intercept_[0], ridge.coef_[0][0]
plt.plot(X, t0ridge + t1ridge * X, "b", label="Regularized linear model on partial data")
plt.legend(loc="lower right")
plt.axis([0, 110000, 0, 10])
save_fig('ridge_model_plot')
plt.show()
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
return sample_data
# Replace this linear model:
model = sklearn.linear_model.LinearRegression()
# with this k-neighbors regression model:
import sklearn.neighbors
model = sklearn.neighbors.KNeighborsRegressor(n_neighbors=3)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = np.array([[22587.0]]) # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.76666667]]
```
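The k-neighbors swap above works because `KNeighborsRegressor` predicts the average target of the k closest training points, which is exactly what the earlier `(5.1+5.7+6.5)/3` cell computed by hand. A minimal NumPy sketch of that idea, on toy data (not the notebook's dataset):

```python
import numpy as np

def knn_regress(X_train, y_train, x_new, k=3):
    # distance from the query to every training point (1-D feature here)
    dist = np.abs(X_train - x_new)
    nearest = np.argsort(dist)[:k]
    # the prediction is the mean target of the k nearest neighbors
    return y_train[nearest].mean()

gdp = np.array([9000.0, 20000.0, 22000.0, 25000.0, 50000.0])
satisfaction = np.array([4.9, 5.1, 5.7, 6.5, 7.3])
print(knn_regress(gdp, satisfaction, 22587.0, k=3))  # averages the 3 nearest neighbors
```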
| github_jupyter |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#WaveSurfer" data-toc-modified-id="WaveSurfer-1"><span class="toc-item-num">1 </span>WaveSurfer</a></span></li><li><span><a href="#Waveform_playlist" data-toc-modified-id="Waveform_playlist-2"><span class="toc-item-num">2 </span>Waveform_playlist</a></span></li><li><span><a href="#TrackSwitch" data-toc-modified-id="TrackSwitch-3"><span class="toc-item-num">3 </span>TrackSwitch</a></span></li></ul></div>
```
import IPython.display
from os.path import join
basedir = 'media'
IPython.display.Audio(join(basedir, 'bass.mp3'))
import pywebaudioplayer.notebook as pwa
dir(pwa)
```
## WaveSurfer
```
w = pwa.wavesurfer(join(basedir, 'bass.mp3'))
IPython.display.HTML(w)
```
## Waveform_playlist
```
p = pwa.waveform_playlist([{'title': 'Drums', 'path': join(basedir, 'drums.mp3')},
{'title': 'Synth', 'path': join(basedir, 'synth.mp3')},
{'title': 'Bass', 'path': join(basedir, 'bass.mp3')},
{'title': 'Violin', 'path': join(basedir, 'violins.mp3')},
], {'text_controls': True}, {'played_wave_colour': '#0000ff', 'unplayed_wave_colour': '#ff0000'})
IPython.display.HTML(p)
```
## TrackSwitch
```
tracks = [{'title': 'Drums', 'image': join(basedir, 'drums.png'), 'path': join(basedir, 'drums.mp3'), 'mimetype': 'audio/mpeg'},
{'title': 'Synth', 'image': join(basedir, 'synth.png'), 'path': join(basedir, 'synth.mp3'), 'mimetype': 'audio/mpeg'},
{'title': 'Bass', 'image': join(basedir, 'bass.png'), 'path': join(basedir, 'bass.mp3'), 'mimetype': 'audio/mpeg'},
{'title': 'Violin', 'image': join(basedir, 'violins.png'), 'path': join(basedir, 'violins.mp3'), 'mimetype': 'audio/mpeg'}]
ts1 = pwa.trackswitch(tracks,
text='Example trackswitch.js instance.', seekable_image=join(basedir, 'mix.png'), seek_margin=(4,4))
IPython.display.HTML(ts1)
```
### Passing a matplotlib Figure
```
import numpy as np
samplerate = 8000
freq = 440
duration = 3
t = np.arange(duration*samplerate)
f0 = np.sin(2*np.pi*freq*t/samplerate)
f1 = np.sin(2*np.pi*2*freq*t/samplerate)
f2 = np.sin(2*np.pi*3*freq*t/samplerate)
f3 = np.sin(2*np.pi*4*freq*t/samplerate)
f4 = np.sin(2*np.pi*5*freq*t/samplerate)
f5 = np.sin(2*np.pi*6*freq*t/samplerate)
f6 = np.sin(2*np.pi*7*freq*t/samplerate)
complex_sine = f0+f1+f2+f3+f4+f5+f6
import matplotlib.pyplot as plt
fig, ax = plt.subplots(ncols=1, figsize=(10,4), dpi=72);
ax.specgram(complex_sine, Fs=samplerate, detrend='none');
ts2 = pwa.trackswitch([{'title': 'Fundamental', 'samples': (f0, samplerate), 'path': join(basedir, 'fundamental.wav')},
{'title': 'First overtone', 'samples': (f1, samplerate), 'path': join(basedir, 'first.wav')},
{'title': 'Second overtone', 'samples': (f2, samplerate), 'path': join(basedir, 'second.wav')},
{'title': 'Third overtone', 'samples': (f3, samplerate), 'path': join(basedir, 'third.wav')},
{'title': 'Fourth overtone', 'samples': (f4, samplerate), 'path': join(basedir, 'fourth.wav')},
{'title': 'Fifth overtone', 'samples': (f5, samplerate), 'path': join(basedir, 'fifth.wav')},
{'title': 'Sixth overtone', 'samples': (f6, samplerate), 'path': join(basedir, 'sixth.wav')}],
seekable_image=(fig, join(basedir, 'spectrogram.png')), repeat=True)
IPython.display.HTML(ts2)
```
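The `samples` entries above pair a float array with a sample rate, while `path` points at a `.wav` file. If those files don't exist yet, one way to write a float signal to a 16-bit mono WAV using only the standard library (a sketch; pywebaudioplayer itself may handle this differently):

```python
import math, struct, wave

def write_wav(path, samples, samplerate):
    """Write a sequence of floats in [-1, 1] as a 16-bit mono WAV file."""
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)           # mono
        w.setsampwidth(2)           # 16-bit samples
        w.setframerate(samplerate)
        # Clip to [-1, 1] and scale to the int16 range
        frames = b''.join(struct.pack('<h', int(max(-1.0, min(1.0, s)) * 32767))
                          for s in samples)
        w.writeframes(frames)

samplerate = 8000
# One second of a 440 Hz sine, like f0 above but without numpy
tone = [math.sin(2 * math.pi * 440 * t / samplerate) for t in range(samplerate)]
write_wav('fundamental.wav', tone, samplerate)
```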
---
```
#|hide
#|skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#|default_exp data.core
#|export
from __future__ import annotations
from fastai.torch_basics import *
from fastai.data.load import *
#|hide
from nbdev.showdoc import *
```
# Data core
> Core functionality for gathering data
The classes here provide functionality for applying a list of transforms to a set of items (`TfmdLists`, `Datasets`) or a `DataLoader` (`TfmdDl`) as well as the base class used to gather the data for model training: `DataLoaders`.
## TfmdDL -
```
#|export
@typedispatch
def show_batch(
x, # Input(s) in the batch
y, # Target(s) in the batch
samples, # List of (`x`, `y`) pairs of length `max_n`
ctxs=None, # List of `ctx` objects to show data. Could be a matplotlib axis, DataFrame, etc.
max_n=9, # Maximum number of `samples` to show
**kwargs
):
"Show `max_n` input(s) and target(s) from the batch."
if ctxs is None: ctxs = Inf.nones
if hasattr(samples[0], 'show'):
ctxs = [s.show(ctx=c, **kwargs) for s,c,_ in zip(samples,ctxs,range(max_n))]
else:
for i in range_of(samples[0]):
ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
return ctxs
```
`show_batch` is a type-dispatched function that is responsible for showing decoded `samples`. `x` and `y` are the input and the target in the batch to be shown, and are passed along to dispatch on their types. There is a different implementation of `show_batch` if `x` is a `TensorImage` or a `TensorText`, for instance (see vision.core or text.data for more details). `ctxs` can be passed, but the function is responsible for creating them if necessary. `kwargs` depend on the specific implementation.
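`@typedispatch` comes from fastcore and selects an implementation based on the runtime types of the arguments. The stdlib's `functools.singledispatch` illustrates the same idea on a single argument (a simplified analogy, not fastai's actual mechanism, which dispatches on multiple arguments):

```python
from functools import singledispatch

@singledispatch
def show(x):                   # fallback for unregistered types
    return f"object: {x!r}"

@show.register
def _(x: int):                 # chosen when x is an int
    return f"int: {x}"

@show.register
def _(x: str):                 # chosen when x is a str
    return f"str: {x}"

print(show(3))        # int: 3
print(show("cat"))    # str: cat
print(show(2.5))      # object: 2.5
```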
```
#|export
@typedispatch
def show_results(
x, # Input(s) in the batch
y, # Target(s) in the batch
samples, # List of (`x`, `y`) pairs of length `max_n`
outs, # List of predicted output(s) from the model
ctxs=None, # List of `ctx` objects to show data. Could be a matplotlib axis, DataFrame, etc.
max_n=9, # Maximum number of `samples` to show
**kwargs
):
"Show `max_n` results with input(s), target(s) and prediction(s)."
if ctxs is None: ctxs = Inf.nones
for i in range(len(samples[0])):
ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
for i in range(len(outs[0])):
ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(outs.itemgot(i),ctxs,range(max_n))]
return ctxs
```
`show_results` is a type-dispatched function that is responsible for showing decoded `samples` and their corresponding `outs`. Like in `show_batch`, `x` and `y` are the input and the target in the batch to be shown, and are passed along to dispatch on their types. `ctxs` can be passed, but the function is responsible for creating them if necessary. `kwargs` depend on the specific implementation.
```
#|export
_all_ = ["show_batch", "show_results"]
#|export
_batch_tfms = ('after_item','before_batch','after_batch')
#|export
@delegates()
class TfmdDL(DataLoader):
"Transformed `DataLoader`"
def __init__(self,
dataset, # Map- or iterable-style dataset from which to load the data
bs:int=64, # Size of batch
shuffle:bool=False, # Whether to shuffle data
num_workers:int=None, # Number of CPU cores to use in parallel (default: All available up to 16)
verbose:bool=False, # Whether to print verbose logs
do_setup:bool=True, # Whether to run `setup()` for batch transform(s)
**kwargs
):
if num_workers is None: num_workers = min(16, defaults.cpus)
for nm in _batch_tfms: kwargs[nm] = Pipeline(kwargs.get(nm,None))
super().__init__(dataset, bs=bs, shuffle=shuffle, num_workers=num_workers, **kwargs)
if do_setup:
for nm in _batch_tfms:
pv(f"Setting up {nm}: {kwargs[nm]}", verbose)
kwargs[nm].setup(self)
def _one_pass(self):
b = self.do_batch([self.do_item(None)])
if self.device is not None: b = to_device(b, self.device)
its = self.after_batch(b)
self._n_inp = 1 if not isinstance(its, (list,tuple)) or len(its)==1 else len(its)-1
self._types = explode_types(its)
def _retain_dl(self,b):
if not getattr(self, '_types', None): self._one_pass()
return retain_types(b, typs=self._types)
@delegates(DataLoader.new)
def new(self,
dataset=None, # Map- or iterable-style dataset from which to load the data
cls=None, # Class of the newly created `DataLoader` object
**kwargs
):
res = super().new(dataset, cls, do_setup=False, **kwargs)
if not hasattr(self, '_n_inp') or not hasattr(self, '_types'):
try:
self._one_pass()
res._n_inp,res._types = self._n_inp,self._types
except Exception as e:
print("Could not do one pass in your dataloader, there is something wrong in it. Please see the stack trace below:")
raise
else: res._n_inp,res._types = self._n_inp,self._types
return res
def before_iter(self):
super().before_iter()
split_idx = getattr(self.dataset, 'split_idx', None)
for nm in _batch_tfms:
f = getattr(self,nm)
if isinstance(f,Pipeline): f.split_idx=split_idx
def decode(self,
b # Batch to decode
):
return to_cpu(self.after_batch.decode(self._retain_dl(b)))
def decode_batch(self,
b, # Batch to decode
max_n:int=9, # Maximum number of items to decode
full:bool=True # Whether to decode all transforms. If `False`, decode up to the point the item knows how to show itself
):
return self._decode_batch(self.decode(b), max_n, full)
def _decode_batch(self, b, max_n=9, full=True):
f = self.after_item.decode
f1 = self.before_batch.decode
f = compose(f1, f, partial(getattr(self.dataset,'decode',noop), full = full))
return L(batch_to_samples(b, max_n=max_n)).map(f)
def _pre_show_batch(self, b, max_n=9):
"Decode `b` to be ready for `show_batch`"
b = self.decode(b)
if hasattr(b, 'show'): return b,None,None
its = self._decode_batch(b, max_n, full=False)
if not is_listy(b): b,its = [b],L((o,) for o in its)
return detuplify(b[:self.n_inp]),detuplify(b[self.n_inp:]),its
def show_batch(self,
b=None, # Batch to show
max_n:int=9, # Maximum number of items to show
ctxs=None, # List of `ctx` objects to show data. Could be matplotlib axis, DataFrame etc
show:bool=True, # Whether to display data
unique:bool=False, # Whether to show only one
**kwargs
):
"Show `max_n` input(s) and target(s) from the batch."
if unique:
old_get_idxs = self.get_idxs
self.get_idxs = lambda: Inf.zeros
if b is None: b = self.one_batch()
if not show: return self._pre_show_batch(b, max_n=max_n)
show_batch(*self._pre_show_batch(b, max_n=max_n), ctxs=ctxs, max_n=max_n, **kwargs)
if unique: self.get_idxs = old_get_idxs
def show_results(self,
b, # Batch to show results for
out, # Predicted output from model for the batch
max_n:int=9, # Maximum number of items to show
ctxs=None, # List of `ctx` objects to show data. Could be matplotlib axis, DataFrame etc
show:bool=True, # Whether to display data
**kwargs
):
"Show `max_n` results with input(s), target(s) and prediction(s)."
x,y,its = self.show_batch(b, max_n=max_n, show=False)
b_out = type(b)(b[:self.n_inp] + (tuple(out) if is_listy(out) else (out,)))
x1,y1,outs = self.show_batch(b_out, max_n=max_n, show=False)
res = (x,x1,None,None) if its is None else (x, y, its, outs.itemgot(slice(self.n_inp,None)))
if not show: return res
show_results(*res, ctxs=ctxs, max_n=max_n, **kwargs)
@property
def n_inp(self) -> int:
"Number of elements in `Datasets` or `TfmdDL` tuple to be considered part of input."
if hasattr(self.dataset, 'n_inp'): return self.dataset.n_inp
if not hasattr(self, '_n_inp'): self._one_pass()
return self._n_inp
def to(self,
device # Device to put `DataLoader` and transforms
):
self.device = device
for tfm in self.after_batch.fs:
for a in L(getattr(tfm, 'parameters', None)): setattr(tfm, a, getattr(tfm, a).to(device))
return self
```
A `TfmdDL` is a `DataLoader` that creates a `Pipeline` from a list of `Transform`s for each of the callbacks `after_item`, `before_batch` and `after_batch`. As a result, it can decode or show a processed batch.
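The encode/decode pairing that makes this possible can be sketched in a few lines: each transform knows how to reverse itself, and a pipeline decodes by running the reversed transform list (a toy model of fastcore's `Pipeline`, not the real implementation):

```python
class Tfm:
    "A transform with a forward `enc` and an inverse `dec`."
    def __init__(self, enc, dec): self.enc, self.dec = enc, dec

class ToyPipeline:
    def __init__(self, tfms): self.tfms = tfms
    def __call__(self, x):              # apply encodes in order
        for t in self.tfms: x = t.enc(x)
        return x
    def decode(self, x):                # apply decodes in reverse order
        for t in reversed(self.tfms): x = t.dec(x)
        return x

pipe = ToyPipeline([Tfm(lambda x: x + 1, lambda x: x - 1),
                    Tfm(lambda x: x * 2, lambda x: x // 2)])
print(pipe(3))          # (3 + 1) * 2 = 8
print(pipe.decode(8))   # 8 // 2 - 1 = 3
```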
```
#|export
add_docs(TfmdDL,
decode="Decode `b` using `tfms`",
decode_batch="Decode `b` entirely",
new="Create a new version of self with a few changed attributes",
show_batch="Show `b` (defaults to `one_batch`), a list of lists of pipeline outputs (i.e. output of a `DataLoader`)",
show_results="Show each item of `b` and `out`",
before_iter="override",
to="Put self and its transforms state on `device`")
class _Category(int, ShowTitle): pass
#Test retain type
class NegTfm(Transform):
def encodes(self, x): return torch.neg(x)
def decodes(self, x): return torch.neg(x)
tdl = TfmdDL([(TensorImage([1]),)] * 4, after_batch=NegTfm(), bs=4, num_workers=4)
b = tdl.one_batch()
test_eq(type(b[0]), TensorImage)
b = (tensor([1.,1.,1.,1.]),)
test_eq(type(tdl.decode_batch(b)[0][0]), TensorImage)
class A(Transform):
def encodes(self, x): return x
def decodes(self, x): return TitledInt(x)
@Transform
def f(x)->None: return fastuple((x,x))
start = torch.arange(50)
test_eq_type(f(2), fastuple((2,2)))
a = A()
tdl = TfmdDL(start, after_item=lambda x: (a(x), f(x)), bs=4)
x,y = tdl.one_batch()
test_eq(type(y), fastuple)
s = tdl.decode_batch((x,y))
test_eq(type(s[0][1]), fastuple)
tdl = TfmdDL(torch.arange(0,50), after_item=A(), after_batch=NegTfm(), bs=4)
test_eq(tdl.dataset[0], start[0])
test_eq(len(tdl), (50-1)//4+1)
test_eq(tdl.bs, 4)
test_stdout(tdl.show_batch, '0\n1\n2\n3')
test_stdout(partial(tdl.show_batch, unique=True), '0\n0\n0\n0')
class B(Transform):
parameters = 'a'
def __init__(self): self.a = torch.tensor(0.)
def encodes(self, x): x
tdl = TfmdDL([(TensorImage([1]),)] * 4, after_batch=B(), bs=4)
test_eq(tdl.after_batch.fs[0].a.device, torch.device('cpu'))
tdl.to(default_device())
test_eq(tdl.after_batch.fs[0].a.device, default_device())
```
### Methods
```
show_doc(TfmdDL.one_batch)
tfm = NegTfm()
tdl = TfmdDL(start, after_batch=tfm, bs=4)
b = tdl.one_batch()
test_eq(tensor([0,-1,-2,-3]), b)
show_doc(TfmdDL.decode)
test_eq(tdl.decode(b), tensor(0,1,2,3))
show_doc(TfmdDL.decode_batch)
test_eq(tdl.decode_batch(b), [0,1,2,3])
show_doc(TfmdDL.show_batch)
show_doc(TfmdDL.to)
```
## DataLoaders -
```
#|export
@docs
class DataLoaders(GetAttr):
"Basic wrapper around several `DataLoader`s."
_default='train'
def __init__(self,
*loaders, # `DataLoader` objects to wrap
path:(str,Path)='.', # Path to store export objects
device=None # Device to put `DataLoaders`
):
self.loaders,self.path = list(loaders),Path(path)
if device is not None or hasattr(loaders[0],'to'): self.device = device
def __getitem__(self, i): return self.loaders[i]
def __len__(self): return len(self.loaders)
def new_empty(self):
loaders = [dl.new(dl.dataset.new_empty()) for dl in self.loaders]
return type(self)(*loaders, path=self.path, device=self.device)
def _set(i, self, v): self.loaders[i] = v
train ,valid = add_props(lambda i,x: x[i], _set)
train_ds,valid_ds = add_props(lambda i,x: x[i].dataset)
@property
def device(self): return self._device
@device.setter
def device(self,
d # Device to put `DataLoaders`
):
for dl in self.loaders: dl.to(d)
self._device = d
def to(self,
device # Device to put `DataLoaders`
):
self.device = device
return self
def _add_tfms(self, tfms, event, dl_idx):
"Adds `tfms` to `event` on `dl`"
if(isinstance(dl_idx,str)): dl_idx = 0 if(dl_idx=='train') else 1
dl_tfms = getattr(self[dl_idx], event)
apply(dl_tfms.add, tfms)
def add_tfms(self,
tfms, # List of `Transform`(s) or `Pipeline` to apply
event, # When to run `Transform`. Events mentioned in `TfmdDL`
loaders=None # List of `DataLoader` objects to add `tfms` to
):
        "Adds `tfms` to `event` on `loaders`"
if(loaders is None): loaders=range(len(self.loaders))
if not is_listy(loaders): loaders = listify(loaders)
for loader in loaders:
self._add_tfms(tfms,event,loader)
def cuda(self): return self.to(device=default_device())
def cpu(self): return self.to(device=torch.device('cpu'))
@classmethod
def from_dsets(cls,
*ds, # `Datasets` object(s)
path:(str,Path)='.', # Path to put in `DataLoaders`
bs:int=64, # Size of batch
device=None, # Device to put `DataLoaders`
dl_type=TfmdDL, # Type of `DataLoader`
**kwargs
):
default = (True,) + (False,) * (len(ds)-1)
defaults = {'shuffle': default, 'drop_last': default}
tfms = {k:tuple(Pipeline(kwargs[k]) for i in range_of(ds)) for k in _batch_tfms if k in kwargs}
kwargs = merge(defaults, {k: tuplify(v, match=ds) for k,v in kwargs.items() if k not in _batch_tfms}, tfms)
kwargs = [{k: v[i] for k,v in kwargs.items()} for i in range_of(ds)]
return cls(*[dl_type(d, bs=bs, **k) for d,k in zip(ds, kwargs)], path=path, device=device)
@classmethod
def from_dblock(cls,
dblock, # `DataBlock` object
source, # Source of data. Can be `Path` to files
path:(str, Path)='.', # Path to put in `DataLoaders`
bs:int=64, # Size of batch
val_bs:int=None, # Size of batch for validation `DataLoader`
shuffle:bool=True, # Whether to shuffle data
device=None, # Device to put `DataLoaders`
**kwargs
):
return dblock.dataloaders(source, path=path, bs=bs, val_bs=val_bs, shuffle=shuffle, device=device, **kwargs)
_docs=dict(__getitem__="Retrieve `DataLoader` at `i` (`0` is training, `1` is validation)",
train="Training `DataLoader`",
valid="Validation `DataLoader`",
train_ds="Training `Dataset`",
valid_ds="Validation `Dataset`",
to="Use `device`",
             add_tfms="Add `tfms` to `loaders` for `event`",
cuda="Use the gpu if available",
cpu="Use the cpu",
new_empty="Create a new empty version of `self` with the same transforms",
from_dblock="Create a dataloaders from a given `dblock`")
dls = DataLoaders(tdl,tdl)
x = dls.train.one_batch()
x2 = first(tdl)
test_eq(x,x2)
x2 = dls.one_batch()
test_eq(x,x2)
#|hide
#test assignment works
dls.train = dls.train.new(bs=4)
```
Multiple transforms can be added to multiple dataloaders using `DataLoaders.add_tfms`. You can specify the dataloaders by a list of names (`dls.add_tfms(...,'valid',...)`) or by index (`dls.add_tfms(...,1,...)`); by default, transforms are added to all dataloaders. `event` is a required argument and determines when the transform will be run; for more information on events, please refer to `TfmdDL`. `tfms` is a list of `Transform`s and is also a required argument.
```
class _TestTfm(Transform):
def encodes(self, o): return torch.ones_like(o)
def decodes(self, o): return o
tdl1,tdl2 = TfmdDL(start, bs=4),TfmdDL(start, bs=4)
dls2 = DataLoaders(tdl1,tdl2)
dls2.add_tfms([_TestTfm()],'after_batch',['valid'])
dls2.add_tfms([_TestTfm()],'after_batch',[1])
dls2.train.after_batch,dls2.valid.after_batch,
#|hide
test_eq(len(dls2.train.after_batch.fs),0)
test_eq(len(dls2.valid.after_batch.fs),2)
test_eq(next(iter(dls2.valid)),tensor([1,1,1,1]))
class _T(Transform):
def encodes(self, o): return -o
class _T2(Transform):
def encodes(self, o): return o/2
#test tfms are applied on both traind and valid dl
dls_from_ds = DataLoaders.from_dsets([1,], [5,], bs=1, after_item=_T, after_batch=_T2)
b = first(dls_from_ds.train)
test_eq(b, tensor([-.5]))
b = first(dls_from_ds.valid)
test_eq(b, tensor([-2.5]))
```
### Methods
```
show_doc(DataLoaders.__getitem__)
x2
x2 = dls[0].one_batch()
test_eq(x,x2)
show_doc(DataLoaders.train, name="DataLoaders.train")
show_doc(DataLoaders.valid, name="DataLoaders.valid")
show_doc(DataLoaders.train_ds, name="DataLoaders.train_ds")
show_doc(DataLoaders.valid_ds, name="DataLoaders.valid_ds")
```
## TfmdLists -
```
#|export
class FilteredBase:
"Base class for lists with subsets"
_dl_type,_dbunch_type = TfmdDL,DataLoaders
def __init__(self, *args, dl_type=None, **kwargs):
if dl_type is not None: self._dl_type = dl_type
self.dataloaders = delegates(self._dl_type.__init__)(self.dataloaders)
super().__init__(*args, **kwargs)
@property
def n_subsets(self): return len(self.splits)
def _new(self, items, **kwargs): return super()._new(items, splits=self.splits, **kwargs)
    def subset(self): raise NotImplementedError
def dataloaders(self,
bs:int=64, # Batch size
shuffle_train:bool=None, # (Deprecated, use `shuffle`) Shuffle training `DataLoader`
shuffle:bool=True, # Shuffle training `DataLoader`
val_shuffle:bool=False, # Shuffle validation `DataLoader`
n:int=None, # Size of `Datasets` used to create `DataLoader`
path:(str, Path)='.', # Path to put in `DataLoaders`
dl_type:TfmdDL=None, # Type of `DataLoader`
dl_kwargs:list=None, # List of kwargs to pass to individual `DataLoader`s
device:torch.device=None, # Device to put `DataLoaders`
drop_last:bool=None, # Drop last incomplete batch, defaults to `shuffle`
val_bs:int=None, # Validation batch size, defaults to `bs`
**kwargs
) -> DataLoaders:
if shuffle_train is not None:
shuffle=shuffle_train
warnings.warn('`shuffle_train` is deprecated. Use `shuffle` instead.',DeprecationWarning)
if device is None: device=default_device()
if dl_kwargs is None: dl_kwargs = [{}] * self.n_subsets
if dl_type is None: dl_type = self._dl_type
if drop_last is None: drop_last = shuffle
val_kwargs={k[4:]:v for k,v in kwargs.items() if k.startswith('val_')}
def_kwargs = {'bs':bs,'shuffle':shuffle,'drop_last':drop_last,'n':n,'device':device}
dl = dl_type(self.subset(0), **merge(kwargs,def_kwargs, dl_kwargs[0]))
def_kwargs = {'bs':bs if val_bs is None else val_bs,'shuffle':val_shuffle,'n':None,'drop_last':False}
dls = [dl] + [dl.new(self.subset(i), **merge(kwargs,def_kwargs,val_kwargs,dl_kwargs[i]))
for i in range(1, self.n_subsets)]
return self._dbunch_type(*dls, path=path, device=device)
FilteredBase.train,FilteredBase.valid = add_props(lambda i,x: x.subset(i))
show_doc(FilteredBase().dataloaders)
#|export
class TfmdLists(FilteredBase, L, GetAttr):
"A `Pipeline` of `tfms` applied to a collection of `items`"
_default='tfms'
def __init__(self,
items:list, # Items to apply `Transform`s to
tfms:(list,Pipeline), # `Transform`(s) or `Pipeline` to apply
use_list:bool=None, # Use `list` in `L`
do_setup:bool=True, # Call `setup()` for `Transform`
split_idx:int=None, # Apply `Transform`(s) to training or validation set. `0` for training set and `1` for validation set
train_setup:bool=True, # Apply `Transform`(s) only on training `DataLoader`
splits:list=None, # Indices for training and validation sets
types=None, # Types of data in `items`
verbose:bool=False, # Print verbose output
dl_type:TfmdDL=None # Type of `DataLoader`
):
super().__init__(items, use_list=use_list)
if dl_type is not None: self._dl_type = dl_type
self.splits = L([slice(None),[]] if splits is None else splits).map(mask2idxs)
if isinstance(tfms,TfmdLists): tfms = tfms.tfms
if isinstance(tfms,Pipeline): do_setup=False
self.tfms = Pipeline(tfms, split_idx=split_idx)
store_attr('types,split_idx')
if do_setup:
pv(f"Setting up {self.tfms}", verbose)
self.setup(train_setup=train_setup)
def _new(self, items, split_idx=None, **kwargs):
split_idx = ifnone(split_idx,self.split_idx)
try: return super()._new(items, tfms=self.tfms, do_setup=False, types=self.types, split_idx=split_idx, **kwargs)
except IndexError as e:
            e.args = [f"Tried to grab subset {split_idx} in the Dataset, but it contained no items.\n\t{e.args[0]}"]
raise
def subset(self, i): return self._new(self._get(self.splits[i]), split_idx=i)
def _after_item(self, o): return self.tfms(o)
def __repr__(self): return f"{self.__class__.__name__}: {self.items}\ntfms - {self.tfms.fs}"
def __iter__(self): return (self[i] for i in range(len(self)))
def show(self, o, **kwargs): return self.tfms.show(o, **kwargs)
def decode(self, o, **kwargs): return self.tfms.decode(o, **kwargs)
def __call__(self, o, **kwargs): return self.tfms.__call__(o, **kwargs)
def overlapping_splits(self): return L(Counter(self.splits.concat()).values()).filter(gt(1))
def new_empty(self): return self._new([])
def setup(self,
train_setup:bool=True # Apply `Transform`(s) only on training `DataLoader`
):
self.tfms.setup(self, train_setup)
if len(self) != 0:
x = super().__getitem__(0) if self.splits is None else super().__getitem__(self.splits[0])[0]
self.types = []
for f in self.tfms.fs:
self.types.append(getattr(f, 'input_types', type(x)))
x = f(x)
self.types.append(type(x))
types = L(t if is_listy(t) else [t] for t in self.types).concat().unique()
self.pretty_types = '\n'.join([f' - {t}' for t in types])
def infer_idx(self, x):
# TODO: check if we really need this, or can simplify
idx = 0
for t in self.types:
if isinstance(x, t): break
idx += 1
types = L(t if is_listy(t) else [t] for t in self.types).concat().unique()
pretty_types = '\n'.join([f' - {t}' for t in types])
assert idx < len(self.types), f"Expected an input of type in \n{pretty_types}\n but got {type(x)}"
return idx
def infer(self, x):
return compose_tfms(x, tfms=self.tfms.fs[self.infer_idx(x):], split_idx=self.split_idx)
def __getitem__(self, idx):
res = super().__getitem__(idx)
if self._after_item is None: return res
return self._after_item(res) if is_indexer(idx) else res.map(self._after_item)
#|export
add_docs(TfmdLists,
setup="Transform setup with self",
decode="From `Pipeline`",
show="From `Pipeline`",
         overlapping_splits="Indices that appear in more than one split",
subset="New `TfmdLists` with same tfms that only includes items in `i`th split",
infer_idx="Finds the index where `self.tfms` can be applied to `x`, depending on the type of `x`",
infer="Apply `self.tfms` to `x` starting at the right tfm depending on the type of `x`",
new_empty="A new version of `self` but with no items")
#|exports
def decode_at(o, idx):
"Decoded item at `idx`"
return o.decode(o[idx])
#|exports
def show_at(o, idx, **kwargs):
    "Show item at `idx`"
return o.show(o[idx], **kwargs)
```
A `TfmdLists` combines a collection of objects with a `Pipeline`. `tfms` can either be a `Pipeline` or a list of transforms, in which case it will wrap them in a `Pipeline`. `use_list` is passed along to `L` with the `items`, and `split_idx` is passed to each transform of the `Pipeline`. `do_setup` indicates whether the `Pipeline.setup` method should be called during initialization.
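The core trick of `TfmdLists` is that transforms are applied lazily, on indexing, rather than being materialized up front. A minimal stdlib version of that pattern (ignoring splits, setup, and decoding):

```python
class LazyTfmList:
    "Apply `fn` to items only when they are indexed."
    def __init__(self, items, fn): self.items, self.fn = list(items), fn
    def __len__(self): return len(self.items)
    def __getitem__(self, i): return self.fn(self.items[i])

tl = LazyTfmList(range(5), lambda x: -x)
print(tl[1])                             # -1: computed on access, nothing stored
print([tl[i] for i in range(len(tl))])   # [0, -1, -2, -3, -4]
```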
```
class _IntFloatTfm(Transform):
def encodes(self, o): return TitledInt(o)
def decodes(self, o): return TitledFloat(o)
int2f_tfm=_IntFloatTfm()
def _neg(o): return -o
neg_tfm = Transform(_neg, _neg)
items = L([1.,2.,3.]); tfms = [neg_tfm, int2f_tfm]
tl = TfmdLists(items, tfms=tfms)
test_eq_type(tl[0], TitledInt(-1))
test_eq_type(tl[1], TitledInt(-2))
test_eq_type(tl.decode(tl[2]), TitledFloat(3.))
test_stdout(lambda: show_at(tl, 2), '-3')
test_eq(tl.types, [float, float, TitledInt])
tl
# add splits to TfmdLists
splits = [[0,2],[1]]
tl = TfmdLists(items, tfms=tfms, splits=splits)
test_eq(tl.n_subsets, 2)
test_eq(tl.train, tl.subset(0))
test_eq(tl.valid, tl.subset(1))
test_eq(tl.train.items, items[splits[0]])
test_eq(tl.valid.items, items[splits[1]])
test_eq(tl.train.tfms.split_idx, 0)
test_eq(tl.valid.tfms.split_idx, 1)
test_eq(tl.train.new_empty().split_idx, 0)
test_eq(tl.valid.new_empty().split_idx, 1)
test_eq_type(tl.splits, L(splits))
assert not tl.overlapping_splits()
df = pd.DataFrame(dict(a=[1,2,3],b=[2,3,4]))
tl = TfmdLists(df, lambda o: o.a+1, splits=[[0],[1,2]])
test_eq(tl[1,2], [3,4])
tr = tl.subset(0)
test_eq(tr[:], [2])
val = tl.subset(1)
test_eq(val[:], [3,4])
class _B(Transform):
def __init__(self): self.m = 0
def encodes(self, o): return o+self.m
def decodes(self, o): return o-self.m
def setups(self, items):
print(items)
self.m = tensor(items).float().mean().item()
# test for setup, which updates `self.m`
tl = TfmdLists(items, _B())
test_eq(tl.m, 2)
```
Here's how we can use `TfmdLists.setup` to implement a simple category list, getting labels from a mock file list:
```
class _Cat(Transform):
order = 1
def encodes(self, o): return int(self.o2i[o])
def decodes(self, o): return TitledStr(self.vocab[o])
def setups(self, items): self.vocab,self.o2i = uniqueify(L(items), sort=True, bidir=True)
tcat = _Cat()
def _lbl(o): return TitledStr(o.split('_')[0])
# Check that tfms are sorted by `order` & `_lbl` is called first
fns = ['dog_0.jpg','cat_0.jpg','cat_2.jpg','cat_1.jpg','dog_1.jpg']
tl = TfmdLists(fns, [tcat,_lbl])
exp_voc = ['cat','dog']
test_eq(tcat.vocab, exp_voc)
test_eq(tl.tfms.vocab, exp_voc)
test_eq(tl.vocab, exp_voc)
test_eq(tl, (1,0,0,0,1))
test_eq([tl.decode(o) for o in tl], ('dog','cat','cat','cat','dog'))
#Check only the training set is taken into account for setup
tl = TfmdLists(fns, [tcat,_lbl], splits=[[0,4], [1,2,3]])
test_eq(tcat.vocab, ['dog'])
tfm = NegTfm(split_idx=1)
tds = TfmdLists(start, A())
tdl = TfmdDL(tds, after_batch=tfm, bs=4)
x = tdl.one_batch()
test_eq(x, torch.arange(4))
tds.split_idx = 1
x = tdl.one_batch()
test_eq(x, -torch.arange(4))
tds.split_idx = 0
x = tdl.one_batch()
test_eq(x, torch.arange(4))
tds = TfmdLists(start, A())
tdl = TfmdDL(tds, after_batch=NegTfm(), bs=4)
test_eq(tdl.dataset[0], start[0])
test_eq(len(tdl), (len(tds)-1)//4+1)
test_eq(tdl.bs, 4)
test_stdout(tdl.show_batch, '0\n1\n2\n3')
show_doc(TfmdLists.subset)
show_doc(TfmdLists.infer_idx)
show_doc(TfmdLists.infer)
def mult(x): return x*2
mult.order = 2
fns = ['dog_0.jpg','cat_0.jpg','cat_2.jpg','cat_1.jpg','dog_1.jpg']
tl = TfmdLists(fns, [_lbl,_Cat(),mult])
test_eq(tl.infer_idx('dog_45.jpg'), 0)
test_eq(tl.infer('dog_45.jpg'), 2)
test_eq(tl.infer_idx(4), 2)
test_eq(tl.infer(4), 8)
test_fail(lambda: tl.infer_idx(2.0))
test_fail(lambda: tl.infer(2.0))
#|hide
#Test input_types works on a Transform
cat = _Cat()
cat.input_types = (str, float)
tl = TfmdLists(fns, [_lbl,cat,mult])
test_eq(tl.infer_idx(2.0), 1)
#Test type annotations work on a function
def mult(x:(int,float)): return x*2
mult.order = 2
tl = TfmdLists(fns, [_lbl,_Cat(),mult])
test_eq(tl.infer_idx(2.0), 2)
```
## Datasets -
```
#|export
@docs
@delegates(TfmdLists)
class Datasets(FilteredBase):
"A dataset that creates a tuple from each `tfms`"
def __init__(self,
items:list=None, # List of items to create `Datasets`
tfms:(list,Pipeline)=None, # List of `Transform`(s) or `Pipeline` to apply
tls:TfmdLists=None, # If None, `self.tls` is generated from `items` and `tfms`
n_inp:int=None, # Number of elements in `Datasets` tuple that should be considered part of input
dl_type=None, # Default type of `DataLoader` used when function `FilteredBase.dataloaders` is called
**kwargs
):
super().__init__(dl_type=dl_type)
self.tls = L(tls if tls else [TfmdLists(items, t, **kwargs) for t in L(ifnone(tfms,[None]))])
self.n_inp = ifnone(n_inp, max(1, len(self.tls)-1))
def __getitem__(self, it):
res = tuple([tl[it] for tl in self.tls])
return res if is_indexer(it) else list(zip(*res))
def __getattr__(self,k): return gather_attrs(self, k, 'tls')
def __dir__(self): return super().__dir__() + gather_attr_names(self, 'tls')
def __len__(self): return len(self.tls[0])
def __iter__(self): return (self[i] for i in range(len(self)))
def __repr__(self): return coll_repr(self)
def decode(self, o, full=True): return tuple(tl.decode(o_, full=full) for o_,tl in zip(o,tuplify(self.tls, match=o)))
def subset(self, i): return type(self)(tls=L(tl.subset(i) for tl in self.tls), n_inp=self.n_inp)
def _new(self, items, *args, **kwargs): return super()._new(items, tfms=self.tfms, do_setup=False, **kwargs)
def overlapping_splits(self): return self.tls[0].overlapping_splits()
def new_empty(self): return type(self)(tls=[tl.new_empty() for tl in self.tls], n_inp=self.n_inp)
@property
def splits(self): return self.tls[0].splits
@property
def split_idx(self): return self.tls[0].tfms.split_idx
@property
def items(self): return self.tls[0].items
@items.setter
def items(self, v):
for tl in self.tls: tl.items = v
def show(self, o, ctx=None, **kwargs):
for o_,tl in zip(o,self.tls): ctx = tl.show(o_, ctx=ctx, **kwargs)
return ctx
@contextmanager
def set_split_idx(self, i):
old_split_idx = self.split_idx
for tl in self.tls: tl.tfms.split_idx = i
try: yield self
finally:
for tl in self.tls: tl.tfms.split_idx = old_split_idx
_docs=dict(
decode="Compose `decode` of all `tuple_tfms` then all `tfms` on `i`",
show="Show item `o` in `ctx`",
dataloaders="Get a `DataLoaders`",
        overlapping_splits="Indices that appear in more than one split",
        subset="New `Datasets` that only includes subset `i`",
        new_empty="Create a new empty version of `self`, keeping only the transforms",
set_split_idx="Contextmanager to use the same `Datasets` with another `split_idx`"
)
```
A `Datasets` creates a tuple from `items` (typically input, target) by applying each list of `Transform`s (or `Pipeline`) in `tfms` to them. Note that if `tfms` contains only one list of transforms, the items given by `Datasets` will be tuples of one element.
`n_inp` is the number of elements in the tuples that should be considered part of the input; it defaults to 1 if `tfms` consists of one set of transforms, and `len(tfms)-1` otherwise. In most cases the tuples returned by `Datasets` will have 2 elements (input, target), but they can have 3 (e.g. Siamese networks or tabular data), in which case we need to be able to determine where the inputs end and the targets begin.
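In practice, `n_inp` is what lets downstream code split a sample or batch tuple into inputs and targets without task-specific knowledge; the convention amounts to the following (a toy illustration, not fastai's actual code):

```python
def split_batch(b, n_inp):
    "Split a sample/batch tuple into (inputs, targets) using `n_inp`."
    return b[:n_inp], b[n_inp:]

# e.g. a Siamese-style sample: two inputs followed by one target
xb, yb = split_batch(('img1', 'img2', 'label'), n_inp=2)
print(xb)   # ('img1', 'img2')
print(yb)   # ('label',)
```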
```
items = [1,2,3,4]
dsets = Datasets(items, [[neg_tfm,int2f_tfm], [add(1)]])
t = dsets[0]
test_eq(t, (-1,2))
test_eq(dsets[0,1,2], [(-1,2),(-2,3),(-3,4)])
test_eq(dsets.n_inp, 1)
dsets.decode(t)
class Norm(Transform):
def encodes(self, o): return (o-self.m)/self.s
def decodes(self, o): return (o*self.s)+self.m
def setups(self, items):
its = tensor(items).float()
self.m,self.s = its.mean(),its.std()
items = [1,2,3,4]
nrm = Norm()
dsets = Datasets(items, [[neg_tfm,int2f_tfm], [neg_tfm,nrm]])
x,y = zip(*dsets)
test_close(tensor(y).mean(), 0)
test_close(tensor(y).std(), 1)
test_eq(x, (-1,-2,-3,-4,))
test_eq(nrm.m, -2.5)
test_stdout(lambda:show_at(dsets, 1), '-2')
test_eq(dsets.m, nrm.m)
test_eq(dsets.norm.m, nrm.m)
test_eq(dsets.train.norm.m, nrm.m)
#|hide
#Check filtering is properly applied
class B(Transform):
def encodes(self, x)->None: return int(x+1)
def decodes(self, x): return TitledInt(x-1)
add1 = B(split_idx=1)
dsets = Datasets(items, [neg_tfm, [neg_tfm,int2f_tfm,add1]], splits=[[3],[0,1,2]])
test_eq(dsets[1], [-2,-2])
test_eq(dsets.valid[1], [-2,-1])
test_eq(dsets.valid[[1,1]], [[-2,-1], [-2,-1]])
test_eq(dsets.train[0], [-4,-4])
test_fns = ['dog_0.jpg','cat_0.jpg','cat_2.jpg','cat_1.jpg','kid_1.jpg']
tcat = _Cat()
dsets = Datasets(test_fns, [[tcat,_lbl]], splits=[[0,1,2], [3,4]])
test_eq(tcat.vocab, ['cat','dog'])
test_eq(dsets.train, [(1,),(0,),(0,)])
test_eq(dsets.valid[0], (0,))
test_stdout(lambda: show_at(dsets.train, 0), "dog")
inp = [0,1,2,3,4]
dsets = Datasets(inp, tfms=[None])
test_eq(*dsets[2], 2) # Retrieve one item (subset 0 is the default)
test_eq(dsets[1,2], [(1,),(2,)]) # Retrieve two items by index
mask = [True,False,False,True,False]
test_eq(dsets[mask], [(0,),(3,)]) # Retrieve two items by mask
inp = pd.DataFrame(dict(a=[5,1,2,3,4]))
dsets = Datasets(inp, tfms=attrgetter('a')).subset(0)
test_eq(*dsets[2], 2) # Retrieve one item (subset 0 is the default)
test_eq(dsets[1,2], [(1,),(2,)]) # Retrieve two items by index
mask = [True,False,False,True,False]
test_eq(dsets[mask], [(5,),(3,)]) # Retrieve two items by mask
#test n_inp
inp = [0,1,2,3,4]
dsets = Datasets(inp, tfms=[None])
test_eq(dsets.n_inp, 1)
dsets = Datasets(inp, tfms=[[None],[None],[None]])
test_eq(dsets.n_inp, 2)
dsets = Datasets(inp, tfms=[[None],[None],[None]], n_inp=1)
test_eq(dsets.n_inp, 1)
# splits can be indices
dsets = Datasets(range(5), tfms=[None], splits=[tensor([0,2]), [1,3,4]])
test_eq(dsets.subset(0), [(0,),(2,)])
test_eq(dsets.train, [(0,),(2,)]) # Subset 0 is aliased to `train`
test_eq(dsets.subset(1), [(1,),(3,),(4,)])
test_eq(dsets.valid, [(1,),(3,),(4,)]) # Subset 1 is aliased to `valid`
test_eq(*dsets.valid[2], 4)
#assert '[(1,),(3,),(4,)]' in str(dsets) and '[(0,),(2,)]' in str(dsets)
dsets
# splits can be boolean masks (they don't have to cover all items, but must be disjoint)
splits = [[False,True,True,False,True], [True,False,False,False,False]]
dsets = Datasets(range(5), tfms=[None], splits=splits)
test_eq(dsets.train, [(1,),(2,),(4,)])
test_eq(dsets.valid, [(0,)])
# apply transforms to all items
tfm = [[lambda x: x*2,lambda x: x+1]]
splits = [[1,2],[0,3,4]]
dsets = Datasets(range(5), tfm, splits=splits)
test_eq(dsets.train,[(3,),(5,)])
test_eq(dsets.valid,[(1,),(7,),(9,)])
test_eq(dsets.train[False,True], [(5,)])
# only transform subset 1
class _Tfm(Transform):
split_idx=1
def encodes(self, x): return x*2
def decodes(self, x): return TitledStr(x//2)
dsets = Datasets(range(5), [_Tfm()], splits=[[1,2],[0,3,4]])
test_eq(dsets.train,[(1,),(2,)])
test_eq(dsets.valid,[(0,),(6,),(8,)])
test_eq(dsets.train[False,True], [(2,)])
dsets
#A context manager to change the split_idx and apply the validation transform on the training set
ds = dsets.train
with ds.set_split_idx(1):
test_eq(ds,[(2,),(4,)])
test_eq(dsets.train,[(1,),(2,)])
#|hide
#Test Datasets pickles
dsrc1 = pickle.loads(pickle.dumps(dsets))
test_eq(dsets.train, dsrc1.train)
test_eq(dsets.valid, dsrc1.valid)
dsets = Datasets(range(5), [_Tfm(),noop], splits=[[1,2],[0,3,4]])
test_eq(dsets.train,[(1,1),(2,2)])
test_eq(dsets.valid,[(0,0),(6,3),(8,4)])
start = torch.arange(0,50)
tds = Datasets(start, [A()])
tdl = TfmdDL(tds, after_item=NegTfm(), bs=4)
b = tdl.one_batch()
test_eq(tdl.decode_batch(b), ((0,),(1,),(2,),(3,)))
test_stdout(tdl.show_batch, "0\n1\n2\n3")
# only transform subset 1
class _Tfm(Transform):
split_idx=1
def encodes(self, x): return x*2
dsets = Datasets(range(8), [None], splits=[[1,2,5,7],[0,3,4,6]])
# only transform subset 1
class _Tfm(Transform):
split_idx=1
def encodes(self, x): return x*2
dsets = Datasets(range(8), [None], splits=[[1,2,5,7],[0,3,4,6]])
dls = dsets.dataloaders(bs=4, after_batch=_Tfm(), shuffle=False, device=torch.device('cpu'))
test_eq(dls.train, [(tensor([1,2,5, 7]),)])
test_eq(dls.valid, [(tensor([0,6,8,12]),)])
test_eq(dls.n_inp, 1)
```
### Methods
```
items = [1,2,3,4]
dsets = Datasets(items, [[neg_tfm,int2f_tfm]])
#|hide_input
_dsrc = Datasets([1,2])
show_doc(_dsrc.dataloaders, name="Datasets.dataloaders")
```
Used to create dataloaders. You may prepend 'val_' as in `val_shuffle` to override functionality for the validation set. `dl_kwargs` gives finer per dataloader control if you need to work with more than one dataloader.
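As an illustration of that convention (a hedged sketch, not fastai's implementation), the `val_`-prefixed keys can be resolved into per-loader keyword arguments like this:

```python
# Illustrative only: split kwargs into train/valid settings, letting
# 'val_'-prefixed keys override the validation loader's settings.
def resolve_dl_kwargs(**kwargs):
    train = {k: v for k, v in kwargs.items() if not k.startswith('val_')}
    valid = dict(train, **{k[4:]: v for k, v in kwargs.items() if k.startswith('val_')})
    return train, valid

train_kw, valid_kw = resolve_dl_kwargs(bs=64, shuffle=True, val_shuffle=False)
```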
```
show_doc(Datasets.decode)
test_eq(*dsets[0], -1)
test_eq(*dsets.decode((-1,)), 1)
show_doc(Datasets.show)
test_stdout(lambda:dsets.show(dsets[1]), '-2')
show_doc(Datasets.new_empty)
items = [1,2,3,4]
nrm = Norm()
dsets = Datasets(items, [[neg_tfm,int2f_tfm], [neg_tfm]])
empty = dsets.new_empty()
test_eq(empty.items, [])
#|hide
#test it works for dataframes too
df = pd.DataFrame({'a':[1,2,3,4,5], 'b':[6,7,8,9,10]})
dsets = Datasets(df, [[attrgetter('a')], [attrgetter('b')]])
empty = dsets.new_empty()
```
## Add test set for inference
```
# only transform subset 1
class _Tfm1(Transform):
split_idx=0
def encodes(self, x): return x*3
dsets = Datasets(range(8), [[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
test_eq(dsets.train, [(3,),(6,),(15,),(21,)])
test_eq(dsets.valid, [(0,),(6,),(8,),(12,)])
#|export
def test_set(
dsets:(Datasets, TfmdLists), # Map- or iterable-style dataset from which to load the data
test_items, # Items in test dataset
rm_tfms=None, # Start index of `Transform`(s) from validation set in `dsets` to apply
with_labels:bool=False # Whether the test items contain labels
):
"Create a test set from `test_items` using validation transforms of `dsets`"
if isinstance(dsets, Datasets):
tls = dsets.tls if with_labels else dsets.tls[:dsets.n_inp]
test_tls = [tl._new(test_items, split_idx=1) for tl in tls]
if rm_tfms is None: rm_tfms = [tl.infer_idx(get_first(test_items)) for tl in test_tls]
else: rm_tfms = tuplify(rm_tfms, match=test_tls)
for i,j in enumerate(rm_tfms): test_tls[i].tfms.fs = test_tls[i].tfms.fs[j:]
return Datasets(tls=test_tls)
elif isinstance(dsets, TfmdLists):
test_tl = dsets._new(test_items, split_idx=1)
if rm_tfms is None: rm_tfms = dsets.infer_idx(get_first(test_items))
test_tl.tfms.fs = test_tl.tfms.fs[rm_tfms:]
return test_tl
else: raise Exception(f"This method requires using the fastai library to assemble your data. Expected a `Datasets` or a `TfmdLists` but got {dsets.__class__.__name__}")
class _Tfm1(Transform):
split_idx=0
def encodes(self, x): return x*3
dsets = Datasets(range(8), [[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
test_eq(dsets.train, [(3,),(6,),(15,),(21,)])
test_eq(dsets.valid, [(0,),(6,),(8,),(12,)])
#Transforms of the validation set are applied
tst = test_set(dsets, [1,2,3])
test_eq(tst, [(2,),(4,),(6,)])
#|hide
#Test with different types
tfm = _Tfm1()
tfm.split_idx,tfm.order = None,2
dsets = Datasets(['dog', 'cat', 'cat', 'dog'], [[_Cat(),tfm]])
#With strings
test_eq(test_set(dsets, ['dog', 'cat', 'cat']), [(3,), (0,), (0,)])
#With ints
test_eq(test_set(dsets, [1,2]), [(3,), (6,)])
#|hide
#Test with various input lengths
dsets = Datasets(range(8), [[_Tfm(),_Tfm1()],[_Tfm(),_Tfm1()],[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
tst = test_set(dsets, [1,2,3])
test_eq(tst, [(2,2),(4,4),(6,6)])
dsets = Datasets(range(8), [[_Tfm(),_Tfm1()],[_Tfm(),_Tfm1()],[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]], n_inp=1)
tst = test_set(dsets, [1,2,3])
test_eq(tst, [(2,),(4,),(6,)])
#|hide
#Test with rm_tfms
dsets = Datasets(range(8), [[_Tfm(),_Tfm()]], splits=[[1,2,5,7],[0,3,4,6]])
tst = test_set(dsets, [1,2,3])
test_eq(tst, [(4,),(8,),(12,)])
dsets = Datasets(range(8), [[_Tfm(),_Tfm()]], splits=[[1,2,5,7],[0,3,4,6]])
tst = test_set(dsets, [1,2,3], rm_tfms=1)
test_eq(tst, [(2,),(4,),(6,)])
dsets = Datasets(range(8), [[_Tfm(),_Tfm()], [_Tfm(),_Tfm()]], splits=[[1,2,5,7],[0,3,4,6]], n_inp=2)
tst = test_set(dsets, [1,2,3], rm_tfms=(1,0))
test_eq(tst, [(2,4),(4,8),(6,12)])
#|export
@patch
@delegates(TfmdDL.__init__)
def test_dl(self:DataLoaders,
test_items, # Items in test dataset
rm_type_tfms=None, # Start index of `Transform`(s) from validation set in `dsets` to apply
with_labels:bool=False, # Whether the test items contain labels
**kwargs
):
"Create a test dataloader from `test_items` using validation transforms of `dls`"
test_ds = test_set(self.valid_ds, test_items, rm_tfms=rm_type_tfms, with_labels=with_labels
) if isinstance(self.valid_ds, (Datasets, TfmdLists)) else test_items
return self.valid.new(test_ds, **kwargs)
dsets = Datasets(range(8), [[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
dls = dsets.dataloaders(bs=4, device=torch.device('cpu'))
dsets = Datasets(range(8), [[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
dls = dsets.dataloaders(bs=4, device=torch.device('cpu'))
tst_dl = dls.test_dl([2,3,4,5])
test_eq(tst_dl._n_inp, 1)
test_eq(list(tst_dl), [(tensor([ 4, 6, 8, 10]),)])
#Test you can change transforms
tst_dl = dls.test_dl([2,3,4,5], after_item=add1)
test_eq(list(tst_dl), [(tensor([ 5, 7, 9, 11]),)])
```
## Export -
```
#|hide
from nbdev.export import notebook2script
notebook2script()
```
If you wish to use the `enhanced_pyspark_processor`, be sure that `from sagemaker.spark.processing import PySparkProcessor` is commented out and that you're using `from enhanced_pyspark_processor import PySparkProcessor` instead.
```
import sagemaker
from sagemaker.local import LocalSession
#from sagemaker.spark.processing import PySparkProcessor
from enhanced_pyspark_processor import PySparkProcessor
sagemaker_session = LocalSession()
sagemaker_session.config = {"local": {"local_code": True}}
# Update with your SM execution role
role_arn = ""
%%writefile processing.py
import argparse
import logging
import os
import sys
from pyspark.sql import SparkSession
from pyspark.sql.functions import (udf, col)
from pyspark.sql.types import StringType, StructField, StructType, FloatType
# Define custom handler
logger = logging.getLogger(__name__)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
def main(data_path):
spark = SparkSession.builder.appName("PySparkJob").getOrCreate()
spark.sparkContext.setLogLevel("ERROR")
schema = StructType(
[
StructField("sex", StringType(), True),
StructField("length", FloatType(), True),
StructField("diameter", FloatType(), True),
StructField("height", FloatType(), True),
StructField("whole_weight", FloatType(), True),
StructField("shucked_weight", FloatType(), True),
StructField("viscera_weight", FloatType(), True),
StructField("rings", FloatType(), True),
]
)
df = spark.read.csv(data_path, header=False, schema=schema)
return df.select("sex", "length", "diameter", "rings")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="app inputs")
parser.add_argument("--data_path", type=str, help="path to the channel data")
parser.add_argument("--output_path", type=str, help="path to the output data")
args = parser.parse_args()
df = main(args.data_path)
logger.info("Writing transformed data")
df.write.csv(os.path.join(args.output_path, "transformed.csv"), header=True, mode="overwrite")
```
Local Mode only supports an `instance_count` value of 1.
```
spark_processor = PySparkProcessor(
role= role_arn,
instance_type="local",
instance_count=1,
framework_version="2.4"
)
```
For the `enhanced_pyspark_processor`, you need to make sure you use `s3a` rather than `s3` for your S3 paths.
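If your paths come from elsewhere with the plain `s3://` scheme, a small helper (an illustrative convenience, not part of the processor) can rewrite them:

```python
# Illustrative helper: rewrite s3:// URIs to the s3a:// scheme expected here;
# URIs already using another scheme are returned unchanged.
def to_s3a(uri):
    return "s3a://" + uri[len("s3://"):] if uri.startswith("s3://") else uri
```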
```
spark_processor.run(
"processing.py",
arguments=[
"--data_path",
f"s3a://sagemaker-servicecatalog-seedcode-{sagemaker_session.boto_region_name}/dataset/abalone-dataset.csv",
"--output_path",
f"s3a://{sagemaker_session.default_bucket()}/enhanced_pyspark_processor/output/"
]
)
```
```
import pandas as pd
import numpy as np
import re
pd.set_option('display.max_rows', 1500)
pd.set_option('display.max_columns', 42)
pd.set_option('display.max_colwidth', 100)
!dir
# Raw strings avoid invalid escape sequences ("\d", "\R") in Windows paths
upload1 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20161101.csv.csv', sep='|', na_values=['nan', ' ', ' '])
upload2 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20161108.csv.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload4 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20161116.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload5 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20161122.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload6 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20161129.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload7 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20161206.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload8 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170110.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload9 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170116.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload10 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170117.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload11 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170124.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload12 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170131.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload13 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170207.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload14 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170214.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload15 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170221.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
upload16 = pd.read_csv(r'G:\datasets\Rx_Claims\Rx_BenefitPlan_20170301.csv', sep='|', na_values=['nan', ' ', ' '], low_memory=False)
data = pd.concat([upload1, upload2, upload4, upload5, upload6,
upload7, upload8, upload9, upload10, upload11,
upload12, upload13, upload14, upload15, upload16])
data.head()
data.columns
data = data.drop(axis=1, columns=['AHFSTherapeuticClassCode', 'ClaimID', 'CoInsurance',
'CompoundDrugIndicator', 'Copay', 'DAWCode', 'DateFilled', 'DaysSupply',
'Deductible', 'FillNumber',
'FormularyStatus', 'GroupNumber',
'MemberID', 'MultisourceIndicator',
'PaidOrAdjudicatedDate',
'PharmacyNPI', 'PharmacyNumber',
'PharmacyStreetAddress2',
'PharmacyTaxId', 'PrescriberFirstName', 'PrescriberID',
'PresriberLastName', 'RxNumber', 'SeqNum', 'UnitMeasure',
'punbr_grnbr'])
data.columns
data.shape
data = data.dropna(subset=['PharmacyZip', 'ClaimStatus', 'PharmacyState', 'PharmacyStreetAddress1'])
data.shape
# Exclude mail-order and miscoded pharmacy indicator values in one pass
data = data[~data['MailOrderPharmacy'].isin([3, 'Y', '3', '01', '06', '08', '05', 'U'])]
data['MailOrderPharmacy'].unique()
data['MailOrderPharmacy'].isnull().sum()
data.shape
data = data[(data['ClaimStatus'] == 'P')]
data.shape
data.isnull().sum()
def get_total(row):
    # NaN is truthy in Python, so test for missing values explicitly
    if pd.notnull(row['IngredientCost']) and pd.notnull(row['DispensingFee']):
        cost1 = float(row['IngredientCost']) + float(row['DispensingFee'])
    elif pd.notnull(row['IngredientCost']):
        cost1 = float(row['IngredientCost'])
    else:
        cost1 = 0
    cost2 = float(row['OutOfPocket']) + float(row['PaidAmount'])
    return max(cost1, cost2)
data['TotalCost'] = data.apply(lambda row: get_total(row), axis=1)
data.head()
def get_unit_cost(row):
if float(row['Quantity']) > 0:
return float(row['TotalCost'])/float(row['Quantity'])
else:
return row['TotalCost']
data['UnitCost'] = data.apply(lambda row: get_unit_cost(row), axis=1)
data.head()
data['PharmacyZip'] = data['PharmacyZip'].str.strip()
data['PharmacyZip'] = data['PharmacyZip'].str[:5]
data.head()
def add_zero(row):
    # Restore leading zeros lost on import; zfill pads any short length, not just one digit
    return str(row['PharmacyZip']).zfill(5)
data['PharmacyZip'] = data.apply(lambda row: add_zero(row), axis=1)
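# Sanity check of the zero-padding (illustrative value, not from the data):
assert str(983).zfill(5) == '00983'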
print(data.shape)
data['PharmacyZip'].nunique()
data['PharmacyStreetAddress1'] = data['PharmacyStreetAddress1'].str.strip()
data['PharmacyName'] = data['PharmacyName'].str.strip()
print(data['PharmacyStreetAddress1'].value_counts())
print(data['PharmacyName'].value_counts())
print(data['PharmacyState'].value_counts())
data['PharmacyState'].nunique()
def get_id(row):
no_spaces = str(row['PharmacyStreetAddress1'].replace(' ', ''))
return no_spaces + row['PharmacyZip']
data['PharmacyID'] = data.apply(lambda row: get_id(row), axis=1)
print(data.shape)
data.head()
data['PharmacyID'].value_counts()
grouped = data.groupby('PharmacyID').mean()
grouped.head()
data = data.drop(axis=1, columns=['ClaimStatus', 'DispensingFee',
'IngredientCost', 'MailOrderPharmacy',
'OutOfPocket', 'PaidAmount', 'Quantity'])
data.head()
data.isnull().sum()
data['DrugLabelName'] = data['DrugLabelName'].str.strip()
data['DrugLabelName'] = data['DrugLabelName'].apply(lambda drug: ' '.join(drug.split()))
data['DrugLabelName'].value_counts()
data['NDCCode'].value_counts()
data['PBMVendor'] = data['PBMVendor'].str.strip()
print(data.shape)
data.head()
# Generate a drug name with no dosage and a "Regional" Zip
data['DrugShortName'] = data.apply(lambda row: row.DrugLabelName.split()[0], axis=1)
data['PharmZip'] = data.apply(lambda row: row.PharmacyZip[:3], axis=1)
print(data.shape)
data.head()
data.columns
data.to_csv(r'G:\datasets\Rx_Claims\concise_data_v0-1.csv')
data.shape
```
# Is there a relationship between GDP per capita and PISA scores?
July 2015
Written by Susan Chen at NYU Stern with help from Professor David Backus
Contact: <jiachen2017@u.northwestern.edu>
## About PISA
Since 2000, the Programme for International Student Assessment (PISA) has been administered every three years to evaluate education systems around the world. It also gathers family and education background information through surveys. The test, which assesses 15-year-old students in reading, math, and science, is administered to a total of around 510,000 students in 65 countries. The duration of the test is two hours, and it contains a mix of open-ended and multiple-choice questions. Learn more about the test [here](http://www.oecd.org/pisa/test/).
I am interested in seeing if there is a correlation between a nation's wealth and its PISA scores. Do wealthier countries generally attain higher scores, and if so, to what extent? I use GDP per capita as the economic measure of wealth because raw GDP is sensitive to population size; GDP per capita should, in theory, allow larger countries (in terms of geography or population) to be compared with smaller ones.
## Abstract
In terms of the correlation between GDP per capita and each component of the PISA, the r-squared values for an OLS regression model, which usually reflect how well the model fits the data, are 0.57, 0.63, and 0.57 for reading, math, and science, respectively. Qatar and Vietnam, outliers, are excluded from the model.
#### Packages Imported
I use **matplotlib.pyplot** to plot scatter plots. I use **pandas**, a Python package that allows for fast data manipulation and analysis, to organize my dataset. I access World Bank data through the remote data access API for pandas, **pandas.io**. I also use **numpy**, a Python package for scientific computing, for the mathematical calculations that were needed to fit the data more appropriately. Lastly, I use **statmodels.formula.api**, a Python module used for a variety of statistical computations, for running an OLS linear regression.
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
from pandas.io import wb
```
## Creating the Dataset
PISA 2012 scores are downloaded as an excel file from the [statlink](http://dx.doi.org/10.1787/888932937035) on page 21 of the published [PISA key findings](http://www.oecd.org/pisa/keyfindings/pisa-2012-results-volume-I.pdf). I deleted the explanatory text surrounding the table. I kept only the "Mean Score in PISA 2012" column for each subject and then saved the file as a csv. Then, I read the file into pandas and renamed the columns.
```
file1 = '/users/susan/desktop/PISA/PISA2012clean.csv' # file location
df1 = pd.read_csv(file1)
#pandas remote data access API for World Bank GDP per capita data
df2 = wb.download(indicator='NY.GDP.PCAP.PP.KD', country='all', start=2012, end=2012)
df1
#drop multilevel index
df2.index = df2.index.droplevel('year')
df1.columns = ['Country','Math','Reading','Science']
df2.columns = ['GDPpc']
#combine PISA and GDP datasets based on country column
df3 = pd.merge(df1, df2, how='left', left_on = 'Country', right_index = True)
df3.columns = ['Country','Math','Reading','Science','GDPpc']
#drop rows with missing GDP per capita values
df3 = df3[pd.notnull(df3['GDPpc'])]
print (df3)
```
## Excluding Outliers
I initially plotted the data and ran the regression without excluding any outliers. The resulting r-squared values for reading, math, and science were 0.29, 0.32, and 0.27, respectively. Looking at the scatter plot, there seem to be two obvious outliers, Qatar and Vietnam. I decided to exclude the data for these two countries because the remaining countries do seem to form a trend. I found upon excluding them that the correlation between GDP per capita and scores was much higher.
Qatar is an outlier as it placed relatively low, 63rd out of the 65 countries, despite a relatively high GDP per capita of about $131,000. Qatar's GDP per capita is high for a country of just 1.8 million people, of whom only 13% are Qatari nationals. It is a high-income economy, as it contains one of the world's largest natural gas and oil reserves.
[Vietnam](http://www.economist.com/blogs/banyan/2013/12/education-vietnam) is an outlier because it placed relatively high, 17th out of the 65 countries, with a relatively low GDP per capita of about $4,900. Vietnam's high score may reflect government investment in education and the uniformity of classroom professionalism and discipline across the country. At the same time, rote learning is emphasized much more than creative thinking, and many disadvantaged students are forced to drop out, factors which may also account for the high score.
```
df3.index = df3.Country #set country column as the index
df3 = df3.drop(['Qatar', 'Vietnam']) # drop outlier
```
## Plotting the Data
I use the log of the GDP per capita to plot against each component of the PISA on a scatter plot.
```
Reading = df3.Reading
Science = df3.Science
Math = df3.Math
GDP = np.log(df3.GDPpc)
#PISA reading vs GDP per capita
plt.scatter(x = GDP, y = Reading, color = 'r')
plt.title('PISA 2012 Reading scores vs. GDP per capita')
plt.xlabel('GDP per capita (log)')
plt.ylabel('PISA Reading Score')
plt.show()
#PISA math vs GDP per capita
plt.scatter(x = GDP, y = Math, color = 'b')
plt.title('PISA 2012 Math scores vs. GDP per capita')
plt.xlabel('GDP per capita (log)')
plt.ylabel('PISA Math Score')
plt.show()
#PISA science vs GDP per capita
plt.scatter(x = GDP, y = Science, color = 'g')
plt.title('PISA 2012 Science scores vs. GDP per capita')
plt.xlabel('GDP per capita (log)')
plt.ylabel('PISA Science Score')
plt.show()
```
## Regression Analysis
The OLS regressions give an R-squared of 0.57 between reading scores and GDP per capita, 0.63 for math scores, and 0.57 for science scores.
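For a single-regressor OLS such as these, R-squared is exactly the squared Pearson correlation between the regressor and the outcome; the standalone check below illustrates this on synthetic data (not the PISA values):

```python
import numpy as np

# Synthetic data only: simple-regression R^2 equals the squared correlation r^2.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2 * x + rng.normal(size=200)

r = np.corrcoef(x, y)[0, 1]
slope, intercept = np.polyfit(x, y, 1)
r2 = 1 - np.var(y - (slope * x + intercept)) / np.var(y)
assert abs(r2 - r ** 2) < 1e-9
```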
```
lm = smf.ols(formula='Reading ~ GDP', data=df3).fit()
lm.params
lm.summary()
lm2 = smf.ols(formula='Math ~ GDP', data=df3).fit()
lm2.params
lm2.summary()
lm3 = smf.ols(formula='Science ~ GDP', data=df3).fit()
lm3.params
lm3.summary()
```
## Conclusion
The results show that countries with a higher GDP per capita tend to score higher, even though correlation does not imply causation. GDP per capita only reflects a country's potential to divert financial resources toward education, not how much is actually allocated to it. While the correlation is not weak, it is not strong enough to indicate that greater national wealth leads to a better education system. Deviations from the trend line show that countries with similar PISA performance can vary greatly in GDP per capita; the two outliers, Vietnam and Qatar, are examples of that. At the same time, high scores are not necessarily indicative of a great educational system. Many factors need to be taken into consideration when evaluating a country's educational system, such as secondary school enrollment, and this provides a great opportunity for further research.
## Data Sources
PISA 2012 scores are downloaded from the [statlink](http://dx.doi.org/10.1787/888932937035) on page 21 of the published [PISA key findings](http://www.oecd.org/pisa/keyfindings/pisa-2012-results-volume-I.pdf).
GDP per capita data is accessed through the World Bank API for Pandas. Documentation is found [here](http://pandas.pydata.org/pandas-docs/stable/remote_data.html#remote-data-wb). GDP per capita is based on PPP and is in constant 2011 international dollars (indicator: NY.GDP.PCAP.PP.KD).
# Environment setup
```
# Connect to Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
# Copy the dataset from Google Drive to local
!cp "/content/gdrive/My Drive/CBIS_DDSM.zip" .
!unzip -qq CBIS_DDSM.zip
!rm CBIS_DDSM.zip
cbis_path = 'CBIS_DDSM'
# Import libraries
%tensorflow_version 1.x
import os
import numpy as np
import matplotlib.pyplot as plt
import itertools
from sklearn.metrics import confusion_matrix, classification_report, roc_curve, auc
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop, SGD, Adam, Nadam
from tensorflow.keras.utils import plot_model
def load_training():
"""
Load the training set
"""
abnorm_patches = np.load(os.path.join(cbis_path, 'numpy data', 'train_tensor.npy'))[1::2]
base_patches = np.load(os.path.join(cbis_path, 'numpy data', 'train_tensor.npy'))[0::2]
labels = np.load(os.path.join(cbis_path, 'numpy data', 'train_labels.npy'))[1::2]
return abnorm_patches, base_patches, labels
def load_testing():
"""
Load the test set (abnormalities patches and labels, no baseline)
"""
abnorm_patches = np.load(os.path.join(cbis_path, 'numpy data', 'public_test_tensor.npy'))[1::2]
base_patches = np.load(os.path.join(cbis_path, 'numpy data', 'public_test_tensor.npy'))[0::2]
labels = np.load(os.path.join(cbis_path, 'numpy data', 'public_test_labels.npy'))[1::2]
return abnorm_patches, base_patches, labels
def remap_label(l):
"""
Remap the labels to 0->mass 1->calcification
"""
if l == 1 or l == 2:
return 0
elif l == 3 or l == 4:
return 1
else:
print("[WARN] Unrecognized label (%d)" % l)
return None
# Load training and test images (abnormalities only, no baseline)
train_abn_images, train_base_images, train_labels= load_training()
test_abn_images, test_base_images, test_labels= load_testing()
# Number of images
n_train_img = train_abn_images.shape[0]
n_test_img = test_abn_images.shape[0]
print("Train size: %d \t Test size: %d" % (n_train_img, n_test_img))
# Compute width and height of images
img_w = train_abn_images.shape[1]
img_h = train_abn_images.shape[2]
print("Image size: %dx%d" % (img_w, img_h))
# Remap labels
train_labels = np.array([remap_label(l) for l in train_labels])
test_labels = np.array([remap_label(l) for l in test_labels])
# Create a new dimension for color in the images arrays
train_abn_images = train_abn_images.reshape((n_train_img, img_w, img_h, 1))
train_base_images = train_base_images.reshape((n_train_img, img_w, img_h, 1))
test_abn_images = test_abn_images.reshape((n_test_img, img_w, img_h, 1))
test_base_images = test_base_images.reshape((n_test_img, img_w, img_h, 1))
# Rescale from 16-bit integers (0-65535) to floats in [0, 1]
train_abn_images = train_abn_images.astype('float32') / 65535
train_base_images = train_base_images.astype('float32') / 65535
test_abn_images = test_abn_images.astype('float32') / 65535
test_base_images = test_base_images.astype('float32') / 65535
# Shuffle the training set (originally sorted by label)
perm = np.random.permutation(n_train_img)
train_abn_images = train_abn_images[perm]
train_base_images = train_base_images[perm]
train_labels = train_labels[perm]
def double_generator(train_abn_images, train_base_images, train_labels, subset, batch_size=128):
gen = ImageDataGenerator(
validation_split=0.2,
rotation_range=180,
shear_range=15,
zoom_range=0.2,
horizontal_flip=True,
vertical_flip=True,
fill_mode='reflect'
)
gen.fit(train_abn_images)
gen_abn = gen.flow(train_abn_images, train_labels, batch_size=batch_size, subset=subset, seed=1)
gen_base = gen.flow(train_base_images, train_labels, batch_size=batch_size, subset=subset, seed=1)
while True:
abn_img, abn_label = gen_abn.next()
base_img, _ = gen_base.next()
yield [abn_img, base_img], abn_label
train_generator = double_generator(train_abn_images, train_base_images, train_labels, 'training', batch_size=128)
validation_generator = double_generator(train_abn_images, train_base_images, train_labels, 'validation', batch_size=128)
''' Create a Siamese network '''
def create_siamese():
# Two input channels
left_input = layers.Input(shape=(150, 150, 1))
right_input = layers.Input(shape=(150, 150, 1))
# CNN core
cnn = models.Sequential()
cnn.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 1)))
cnn.add(layers.MaxPooling2D((2, 2)))
cnn.add(layers.Conv2D(64, (3, 3), activation='relu'))
cnn.add(layers.MaxPooling2D((2, 2)))
cnn.add(layers.Conv2D(128, (3, 3), activation='relu'))
cnn.add(layers.MaxPooling2D((2, 2)))
cnn.add(layers.Flatten())
# Feed both input into the same convolutional base
left_model = cnn(left_input)
right_model = cnn(right_input)
# Compute the difference between the two
diff = layers.Subtract()([left_model, right_model])
# FC layer
fc = layers.Flatten()(diff)
fc = layers.Dense(96, activation="relu")(fc)
fc = layers.Dropout(0.5)(fc)
fc = layers.Dense(1, activation="sigmoid")(fc)
# Instantiate the model
siamese = models.Model(inputs=[left_input, right_input], outputs=fc)
return siamese
siamese_example = create_siamese()
siamese_example.summary()
plot_model(siamese_example, expand_nested=True)
# Instantiate model
siamese = create_siamese()
# Early stopping (stop training after the validation loss reaches the minimum)
earlystopping = EarlyStopping(monitor='val_loss', mode='min', patience=80, verbose=1)
# Callback for checkpointing
checkpoint = ModelCheckpoint('siamese_best.h5',
monitor='val_loss', mode='min', verbose=1,
save_best_only=True, save_freq='epoch'
)
# Compile the model
siamese.compile(optimizer=RMSprop(learning_rate=0.001, decay=1e-3), loss='binary_crossentropy', metrics=['accuracy'])
# Train
history_siamese = siamese.fit_generator(
train_generator,
steps_per_epoch=int(0.8*n_train_img) // 128,
epochs=500,
validation_data=validation_generator,
validation_steps = int(0.2*n_train_img) // 128,
callbacks=[checkpoint, earlystopping],
shuffle=True,
verbose=1,
initial_epoch=0
)
# Save
models.save_model(siamese, 'siamese_end.h5')
!cp siamese*.h5 "/content/gdrive/My Drive/models/"
# History of accuracy and loss
tra_loss = history_siamese.history['loss']
tra_acc = history_siamese.history['acc']
val_loss = history_siamese.history['val_loss']
val_acc = history_siamese.history['val_acc']
# Total number of epochs training
epochs = range(1, len(tra_acc)+1)
end_epoch = len(tra_acc)
# Epoch when reached the validation loss minimum
opt_epoch = val_loss.index(min(val_loss)) + 1
# Loss and accuracy on the validation set
end_val_loss = val_loss[-1]
end_val_acc = val_acc[-1]
opt_val_loss = val_loss[opt_epoch-1]
opt_val_acc = val_acc[opt_epoch-1]
# Loss and accuracy on the test set
opt_siamese = models.load_model('siamese_best.h5')
test_loss, test_acc = siamese.evaluate([test_abn_images, test_base_images], test_labels, verbose=False)
opt_test_loss, opt_test_acc = opt_siamese.evaluate([test_abn_images, test_base_images], test_labels, verbose=False)
print("Siamese CNN\n")
print("Epoch [end]: %d" % end_epoch)
print("Epoch [opt]: %d" % opt_epoch)
print("Valid accuracy [end]: %.4f" % end_val_acc)
print("Valid accuracy [opt]: %.4f" % opt_val_acc)
print("Test accuracy [end]: %.4f" % test_acc)
print("Test accuracy [opt]: %.4f" % opt_test_acc)
print("Valid loss [end]: %.4f" % end_val_loss)
print("Valid loss [opt]: %.4f" % opt_val_loss)
print("Test loss [end]: %.4f" % test_loss)
print("Test loss [opt]: %.4f" % opt_test_loss)
# Model accuracy
plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k')
plt.title('Siamese accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.plot(epochs, tra_acc, 'r', label='Training set')
plt.plot(epochs, val_acc, 'g', label='Validation set')
plt.plot(opt_epoch, val_acc[opt_epoch-1], 'go')
plt.vlines(opt_epoch, min(val_acc), opt_val_acc, linestyle="dashed", color='g', linewidth=1)
plt.hlines(opt_val_acc, 1, opt_epoch, linestyle="dashed", color='g', linewidth=1)
plt.legend(loc='lower right')
# Model loss
plt.figure(figsize=(7, 7), dpi=80, facecolor='w', edgecolor='k')
plt.title('Siamese loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.plot(epochs, tra_loss, 'r', label='Training set')
plt.plot(epochs, val_loss, 'g', label='Validation set')
plt.plot(opt_epoch, val_loss[opt_epoch-1], 'go')
plt.vlines(opt_epoch, min(val_loss), opt_val_loss, linestyle="dashed", color='g', linewidth=1)
plt.hlines(opt_val_loss, 1, opt_epoch, linestyle="dashed", color='g', linewidth=1)
plt.legend();
def plot_confusion_matrix(cm, classes, normalize=True, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], '.2f') if normalize else cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
pred = opt_siamese.predict([test_abn_images, test_base_images])
pred_classes = np.rint(pred)
confusion_mtx = confusion_matrix(test_labels, pred_classes)
plot_confusion_matrix(confusion_mtx, classes=range(2), normalize=False, title='Siamese CNN confusion matrix')
def plot_roc(preds, names, titles='ROC curve'):
plt.figure(figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
for pred, name in zip(preds, names):
fpr, tpr, _ = roc_curve(test_labels, pred)
auc_keras = auc(fpr, tpr)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr, tpr, label=(name +' (area = {:.3f})'.format(auc_keras)))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title(titles)
plt.legend(loc='lower right')
plt.show()
plot_roc([pred], names=['Siamese CNN'])
# Classification report
print(classification_report(test_labels, pred_classes, digits=4))
```
<h1 align="center">EQE512 MATRIX METHODS IN STRUCTURAL ANALYSIS
<br>
<br>
Week 01
<br>
<br>
Defining the solution methods in engineering calculations using matrices and development of algorithms</h1>
<h3 align="center">Dr. Ahmet Anıl Dindar (adindar@gtu.edu.tr)</h3>
<h4 align="center">2017 Fall </h4>
**What is "Structural Analysis"?**
<table style="width:100%">
<tr>
<th><img src="https://media-cdn.tripadvisor.com/media/photo-o/0d/fc/4c/03/burj-khalifa.jpg" alt="Drawing" style="width: 500px;"/> </th>
<th><img src="http://www.mooseyscountrygarden.com/willow-tree-garden/bridge-wood-willow.jpg" alt="Drawing" style="width: 500px;"/></th>
</tr>
</table>
- It is essential to analyze the system before designing and constructing (engineering) structures.
- Indeed, structural analysis is the first step in determining the internal forces that occur due to the external forces acting on a structure.
**What are the methods in Structural Analysis?**
Structures are assumed to be in static equilibrium under the external forces.
<table style="width:100%">
<tr>
<th><img src="./images/1-two_story_building.jpg" alt="Drawing" style="width: 500px;"/></th>
<th><img src="./images/2-tsunami_vehicle_on_top.jpg" alt="Drawing" style="width: 500px;"/></th>
</tr>
<tr>
<th>Static Forces</th><th>Dynamic Forces</th>
</tr>
</table>
However, a structure may undergo dynamic forces due to sudden natural disasters such as earthquakes, tsunamis, or floods, or man-made events such as blasts or accidents.
**What are the methods in Structural Analysis?**
External forces are grouped in two classes;
<img src="./images/3-loads.svg" alt="Drawing" style="width: 80%;"/>
**What are the methods in Structural Analysis?**
Structural Analysis Methods are divided into two groups;
<img src="./images/4-SA methods.svg" alt="Drawing" style="width: 80%;"/>
**Now another question arises: What is the relation between computers and structural analysis?**
To be more specific, do we need algorithms and programs running these algorithms?
**Question: Can you solve the following problem?**
<img src="./images/5-simple_beam.svg" alt="Drawing" align="middle" style="width: 50%"/>
- Calculate the support reactions.
- Calculate the max axial (N), shear (V) and bending moment (M) values on the beam.
_Can you draw the flow chart of the steps you'll follow?_
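Before sketching a flow chart, the closed-form results for a simply supported beam under a uniform load can be checked directly. A minimal sketch, assuming span `L` in m and load `q` in kN/m (the slide gives no numeric values, so the inputs below are illustrative):

```python
def simple_beam_udl(L, q):
    """Simply supported beam with uniform load q over span L."""
    R = q * L / 2.0            # each support reaction
    V_max = q * L / 2.0        # maximum shear, at the supports
    M_max = q * L ** 2 / 8.0   # maximum bending moment, at midspan
    N_max = 0.0                # no axial force under purely transverse load
    return R, V_max, M_max, N_max

# e.g. L = 4 m, q = 10 kN/m
R, V, M, N = simple_beam_udl(4.0, 10.0)
```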
**Flow-chart for the simple beam with uniform loading problem**
<img src="./images/5-simple_beam_flowchart.svg" alt="Drawing" align="middle" style="width: 100%"/>
**Flow-chart for the simple beam with uniform loading problem**
<img src="./images/5-simple_beam_solution-Hand Calculation.png" alt="Drawing" align="middle" style="height: 80%"/>
**Why do we need computers for engineering work?**
Think of the simple beam problem above. What if the "given" values vary over a range, i.e. L from 1 to 10 m in 25 cm increments and q from 1 to 100 kN/m in 1 kN/m increments? Would you still solve the problem easily by hand?
The answer is yes, because this is a simple system and the calculations are based on closed-form analytical expressions.
When the system becomes more complex (for example, an indeterminate structure) and there is more than one loading case, as is usual when load combinations are considered, computation becomes an indispensable tool.
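The parameter ranges just described can be swept in a few lines, which is exactly where a program pays off. A sketch using the step sizes stated above, tabulating the midspan moment $qL^2/8$ for every combination:

```python
# L from 1 to 10 m in 0.25 m steps, q from 1 to 100 kN/m in 1 kN/m steps
results = []
for i in range(37):                      # 37 span values: 1.0, 1.25, ..., 10.0
    L = 1.0 + 0.25 * i
    for q in range(1, 101):
        results.append((L, q, q * L ** 2 / 8.0))
# 3700 cases solved at once; the largest moment occurs at L = 10 m, q = 100 kN/m
```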
<img src="./images/6-sdof+mdof+truss.svg" alt="Drawing" style="width: 100%;"/>
How can you effectively solve the above problems? Let's learn throughout EQE512.
<img src="http://i2.cdn.cnn.com/cnnnext/dam/assets/140116120426-northridge-earthquake-09-horizontal-large-gallery.jpg" alt="Drawing" style="width: 100%;"/>
Source: http://www.cnn.com/2014/01/16/us/northridge-earthquake-things-learned/index.html
# Gaussian Processes
## Introduction
[Gaussian Processes](https://en.wikipedia.org/wiki/Gaussian_process) have been used in supervised, unsupervised, and even reinforcement learning problems and are described by an elegant mathematical theory (for an overview of the subject see [1, 4]). They are also very attractive conceptually, since they offer an intuitive way to define priors over functions. And finally, since Gaussian Processes are formulated in a Bayesian setting, they come equipped with a powerful notion of uncertainty.
Happily, Pyro offers some support for Gaussian Processes in the `pyro.contrib.gp` module. The goal of this tutorial is to give a brief introduction to Gaussian Processes (GPs) in the context of this module. We will mostly be focusing on how to use the GP interface in Pyro and refer the reader to the references for more details about GPs in general.
The model we're interested in is defined by
$$f \sim \mathcal{GP}\left(0, \mathbf{K}_f(x, x')\right)$$
and
$$y = f(x) + \epsilon,\quad \epsilon \sim \mathcal{N}\left(0, \beta^{-1}\mathbf{I}\right).$$
Here $x, x' \in\mathbf{X}$ are points in the input space and $y\in\mathbf{Y}$ is a point in the output space. $f$ is a draw from the GP prior specified by the kernel $\mathbf{K}_f$ and represents a function from $\mathbf{X}$ to $\mathbf{Y}$. Finally, $\epsilon$ represents Gaussian observation noise.
We will use the [radial basis function kernel](https://en.wikipedia.org/wiki/Radial_basis_function_kernel) (RBF kernel) as the kernel of our GP:
$$ k(x,x') = \sigma^2 \exp\left(-\frac{\|x-x'\|^2}{2l^2}\right).$$
Here $\sigma^2$ and $l$ are parameters that specify the kernel; specifically, $\sigma^2$ is a variance or amplitude squared and $l$ is a lengthscale. We'll get some intuition for these parameters below.
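To make the formula concrete, here is a minimal pure-Python version of this kernel for scalar inputs (the tutorial itself uses `gp.kernels.RBF`; this sketch is only for intuition):

```python
import math

def rbf(x, x_prime, variance=1.0, lengthscale=1.0):
    """k(x, x') = sigma^2 * exp(-|x - x'|^2 / (2 l^2)) for scalar inputs."""
    return variance * math.exp(-((x - x_prime) ** 2) / (2.0 * lengthscale ** 2))

# identical inputs give the full variance; correlation decays with distance
rbf(0.0, 0.0)   # -> 1.0
```

A larger `lengthscale` makes distant points more correlated (smoother sampled functions), while `variance` scales the vertical amplitude.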
## Imports
First, we import necessary modules.
```
import os
import matplotlib.pyplot as plt
import torch
import pyro
import pyro.contrib.gp as gp
import pyro.distributions as dist
smoke_test = ('CI' in os.environ) # ignore; used to check code integrity in the Pyro repo
assert pyro.__version__.startswith('1.3.1')
pyro.enable_validation(True) # can help with debugging
pyro.set_rng_seed(0)
```
Throughout the tutorial we'll want to visualize GPs. So we define a helper function for plotting:
```
# note that this helper function does three different things:
# (i) plots the observed data;
# (ii) plots the predictions from the learned GP after conditioning on data;
# (iii) plots samples from the GP prior (with no conditioning on observed data)
def plot(plot_observed_data=False, plot_predictions=False, n_prior_samples=0,
model=None, kernel=None, n_test=500):
plt.figure(figsize=(12, 6))
if plot_observed_data:
plt.plot(X.numpy(), y.numpy(), 'kx')
if plot_predictions:
Xtest = torch.linspace(-0.5, 5.5, n_test) # test inputs
# compute predictive mean and variance
with torch.no_grad():
if type(model) == gp.models.VariationalSparseGP:
mean, cov = model(Xtest, full_cov=True)
else:
mean, cov = model(Xtest, full_cov=True, noiseless=False)
sd = cov.diag().sqrt() # standard deviation at each input point x
plt.plot(Xtest.numpy(), mean.numpy(), 'r', lw=2) # plot the mean
plt.fill_between(Xtest.numpy(), # plot the two-sigma uncertainty about the mean
(mean - 2.0 * sd).numpy(),
(mean + 2.0 * sd).numpy(),
color='C0', alpha=0.3)
if n_prior_samples > 0: # plot samples from the GP prior
Xtest = torch.linspace(-0.5, 5.5, n_test) # test inputs
noise = (model.noise if type(model) != gp.models.VariationalSparseGP
else model.likelihood.variance)
cov = kernel.forward(Xtest) + noise.expand(n_test).diag()
samples = dist.MultivariateNormal(torch.zeros(n_test), covariance_matrix=cov)\
.sample(sample_shape=(n_prior_samples,))
plt.plot(Xtest.numpy(), samples.numpy().T, lw=2, alpha=0.4)
plt.xlim(-0.5, 5.5)
```
## Data
The data consist of $20$ points sampled from
$$ y = 0.5\sin(3x) + \epsilon, \quad \epsilon \sim \mathcal{N}(0, 0.2).$$
with $x$ sampled uniformly from the interval $[0, 5]$.
```
N = 20
X = dist.Uniform(0.0, 5.0).sample(sample_shape=(N,))
y = 0.5 * torch.sin(3*X) + dist.Normal(0.0, 0.2).sample(sample_shape=(N,))
plot(plot_observed_data=True) # let's plot the observed data
```
## Define model
First we define a RBF kernel, specifying the values of the two hyperparameters `variance` and `lengthscale`. Then we construct a `GPRegression` object. Here we feed in another hyperparameter, `noise`, that corresponds to $\epsilon$ above.
```
kernel = gp.kernels.RBF(input_dim=1, variance=torch.tensor(5.),
lengthscale=torch.tensor(10.))
gpr = gp.models.GPRegression(X, y, kernel, noise=torch.tensor(1.))
```
Let's see what samples from this GP function prior look like. Note that this is _before_ we've conditioned on the data. The shape these functions take—their smoothness, their vertical scale, etc.—is controlled by the GP kernel.
```
plot(model=gpr, kernel=kernel, n_prior_samples=2)
```
For example, if we make `variance` and `noise` smaller we will see function samples with smaller vertical amplitude:
```
kernel2 = gp.kernels.RBF(input_dim=1, variance=torch.tensor(0.1),
lengthscale=torch.tensor(10.))
gpr2 = gp.models.GPRegression(X, y, kernel2, noise=torch.tensor(0.1))
plot(model=gpr2, kernel=kernel2, n_prior_samples=2)
```
## Inference
In the above we set the kernel hyperparameters by hand. If we want to learn the hyperparameters from the data, we need to do inference. In the simplest (conjugate) case we do gradient ascent on the log marginal likelihood. In `pyro.contrib.gp`, we can use any [PyTorch optimizer](https://pytorch.org/docs/stable/optim.html) to optimize the parameters of a model. In addition, we need a loss function that takes the model and guide as inputs and returns an ELBO loss (see the [SVI Part I](svi_part_i.ipynb) tutorial).
```
optimizer = torch.optim.Adam(gpr.parameters(), lr=0.005)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = []
num_steps = 2500 if not smoke_test else 2
for i in range(num_steps):
optimizer.zero_grad()
loss = loss_fn(gpr.model, gpr.guide)
loss.backward()
optimizer.step()
losses.append(loss.item())
# let's plot the loss curve after 2500 steps of training
plt.plot(losses);
```
Let's see if we've learned anything reasonable:
```
plot(model=gpr, plot_observed_data=True, plot_predictions=True)
```
Here the thick red curve is the mean prediction and the blue band represents the 2-sigma uncertainty around the mean. It seems we learned reasonable kernel hyperparameters, as both the mean and uncertainty give a reasonable fit to the data. (Note that learning could have easily gone wrong if we e.g. chose too large of a learning rate or chose bad initial hyperparameters.)
Note that the kernel is only well-defined if `variance` and `lengthscale` are positive. Under the hood Pyro is using PyTorch constraints (see [docs](http://pytorch.org/docs/master/distributions.html#module-torch.distributions.constraints)) to ensure that hyperparameters are constrained to the appropriate domains. Let's see the constrained values we've learned.
```
gpr.kernel.variance.item()
gpr.kernel.lengthscale.item()
gpr.noise.item()
```
The period of the sinusoid that generated the data is $T = 2\pi/3 \approx 2.09$, so learning a lengthscale that's approximately equal to a quarter period makes sense.
### Fit the model using MAP
We need to define priors for the hyperparameters.
```
# Define the same model as before.
pyro.clear_param_store()
kernel = gp.kernels.RBF(input_dim=1, variance=torch.tensor(5.),
lengthscale=torch.tensor(10.))
gpr = gp.models.GPRegression(X, y, kernel, noise=torch.tensor(1.))
# note that our priors have support on the positive reals
gpr.kernel.lengthscale = pyro.nn.PyroSample(dist.LogNormal(0.0, 1.0))
gpr.kernel.variance = pyro.nn.PyroSample(dist.LogNormal(0.0, 1.0))
optimizer = torch.optim.Adam(gpr.parameters(), lr=0.005)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = []
num_steps = 2500 if not smoke_test else 2
for i in range(num_steps):
optimizer.zero_grad()
loss = loss_fn(gpr.model, gpr.guide)
loss.backward()
optimizer.step()
losses.append(loss.item())
plt.plot(losses);
plot(model=gpr, plot_observed_data=True, plot_predictions=True)
```
Let's inspect the hyperparameters we've learned:
```
# tell gpr that we want to get samples from guides
gpr.set_mode('guide')
print('variance = {}'.format(gpr.kernel.variance))
print('lengthscale = {}'.format(gpr.kernel.lengthscale))
print('noise = {}'.format(gpr.noise))
```
Note that the MAP values are different from the MLE values due to the prior.
## Sparse GPs
For large datasets computing the log marginal likelihood is costly due to the expensive matrix operations involved (e.g. see Section 2.2 of [1]). A variety of so-called 'sparse' variational methods have been developed to make GPs viable for larger datasets. This is a big area of research and we won't be going into all the details. Instead we quickly show how we can use `SparseGPRegression` in `pyro.contrib.gp` to make use of these methods.
First, we generate more data.
```
N = 1000
X = dist.Uniform(0.0, 5.0).sample(sample_shape=(N,))
y = 0.5 * torch.sin(3*X) + dist.Normal(0.0, 0.2).sample(sample_shape=(N,))
plot(plot_observed_data=True)
```
Using the sparse GP is very similar to using the basic GP used above. We just need to add an extra parameter $X_u$ (the inducing points).
```
# initialize the inducing inputs
Xu = torch.arange(20.) / 4.0
# initialize the kernel and model
pyro.clear_param_store()
kernel = gp.kernels.RBF(input_dim=1)
# we increase the jitter for better numerical stability
sgpr = gp.models.SparseGPRegression(X, y, kernel, Xu=Xu, jitter=1.0e-5)
# the way we setup inference is similar to above
optimizer = torch.optim.Adam(sgpr.parameters(), lr=0.005)
loss_fn = pyro.infer.Trace_ELBO().differentiable_loss
losses = []
num_steps = 2500 if not smoke_test else 2
for i in range(num_steps):
optimizer.zero_grad()
loss = loss_fn(sgpr.model, sgpr.guide)
loss.backward()
optimizer.step()
losses.append(loss.item())
plt.plot(losses);
# let's look at the inducing points we've learned
print("inducing points:\n{}".format(sgpr.Xu.data.numpy()))
# and plot the predictions from the sparse GP
plot(model=sgpr, plot_observed_data=True, plot_predictions=True)
```
We can see that the model learns a reasonable fit to the data. There are three different sparse approximations that are currently implemented in Pyro:
- "DTC" (Deterministic Training Conditional)
- "FITC" (Fully Independent Training Conditional)
- "VFE" (Variational Free Energy)
By default, `SparseGPRegression` will use "VFE" as the inference method. We can use other methods by passing a different `approx` flag to `SparseGPRegression`.
## More Sparse GPs
Both `GPRegression` and `SparseGPRegression` above are limited to Gaussian likelihoods. We can use other likelihoods with GPs—for example, we can use the Bernoulli likelihood for classification problems—but the inference problem becomes more difficult. In this section, we show how to use the `VariationalSparseGP` module, which can handle non-Gaussian likelihoods. So we can compare to what we've done above, we're still going to use a Gaussian likelihood. The point is that the inference that's being done under the hood can support other likelihoods.
```
# initialize the inducing inputs
Xu = torch.arange(10.) / 2.0
# initialize the kernel, likelihood, and model
pyro.clear_param_store()
kernel = gp.kernels.RBF(input_dim=1)
likelihood = gp.likelihoods.Gaussian()
# turn on "whiten" flag for more stable optimization
vsgp = gp.models.VariationalSparseGP(X, y, kernel, Xu=Xu, likelihood=likelihood, whiten=True)
# instead of defining our own training loop, we will
# use the built-in support provided by the GP module
num_steps = 1500 if not smoke_test else 2
losses = gp.util.train(vsgp, num_steps=num_steps)
plt.plot(losses);
plot(model=vsgp, plot_observed_data=True, plot_predictions=True)
```
That's all there is to it. For more details on the `pyro.contrib.gp` module see the [docs](http://docs.pyro.ai/en/dev/contrib/gp.html). And for example code that uses a GP for classification see [here](https://github.com/pyro-ppl/pyro/blob/dev/examples/contrib/gp/sv-dkl.py).
## Reference
[1] `Deep Gaussian processes and variational propagation of uncertainty`,<br />
Andreas Damianou
[2] `A unifying framework for sparse Gaussian process approximation using power expectation propagation`,<br />
Thang D. Bui, Josiah Yan, and Richard E. Turner
[3] `Scalable variational Gaussian process classification`,<br />
James Hensman, Alexander G. de G. Matthews, and Zoubin Ghahramani
[4] `Gaussian Processes for Machine Learning`,<br />
Carl E. Rasmussen, and Christopher K. I. Williams
[5] `A Unifying View of Sparse Approximate Gaussian Process Regression`,<br />
Joaquin Quinonero-Candela, and Carl E. Rasmussen
```
# from google.colab import drive
# drive.mount('/content/drive')
# path = "/content/drive/MyDrive/Research/cods_comad_plots/sdc_task/mnist/"
m = 5
desired_num = 500
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from matplotlib import pyplot as plt
import copy
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5), (0.5))])
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
classes = ('zero','one','two','three','four','five','six','seven','eight','nine')
foreground_classes = {'zero','one'}
fg_used = '01'
fg1, fg2 = 0,1
all_classes = {'zero','one','two','three','four','five','six','seven','eight','nine'}
background_classes = all_classes - foreground_classes
background_classes
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle = False)
testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle = False)
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(6000):
images, labels = next(dataiter)  # note: the .next() method was removed in newer PyTorch
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
plt.imshow(np.reshape(npimg, (28,28)))
plt.show()
foreground_data.shape, foreground_label.shape, background_data.shape, background_label.shape
val, idx = torch.max(background_data, dim=0, keepdims= True,)
# torch.abs(val)
mean_bg = torch.mean(background_data, dim=0, keepdims= True)
std_bg, _ = torch.max(background_data, dim=0, keepdims= True)  # note: despite the name, this is the per-pixel max, used as the scale
mean_bg.shape, std_bg.shape
foreground_data = (foreground_data - mean_bg) / std_bg
background_data = (background_data - mean_bg) / torch.abs(std_bg)
foreground_data.shape, foreground_label.shape, background_data.shape, background_label.shape
torch.sum(torch.isnan(foreground_data)), torch.sum(torch.isnan(background_data))
imshow(foreground_data[0])
imshow(background_data[0])
```
## generating CIN train and test data
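The cells below build each CIN training image by averaging one foreground digit with m-1 background digits. A minimal sketch of that mixing step, with images represented as flat lists of floats (an assumption for illustration; the notebook uses tensors):

```python
def make_cin_image(foreground, backgrounds):
    """Average one foreground image with len(backgrounds) background images."""
    m = len(backgrounds) + 1
    mixed = list(foreground)
    for bg in backgrounds:
        mixed = [a + b for a, b in zip(mixed, bg)]  # pixel-wise sum
    return [v / m for v in mixed]                   # divide by total count m
```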
```
np.random.seed(0)
bg_idx = np.random.randint(0,47335,m-1)
fg_idx = np.random.randint(0,12665)
bg_idx, fg_idx
# for i in background_data[bg_idx]:
# imshow(i)
imshow(torch.sum(background_data[bg_idx], axis = 0))
imshow(foreground_data[fg_idx])
tr_data = ( torch.sum(background_data[bg_idx], axis = 0) + foreground_data[fg_idx] )/m
tr_data.shape
imshow(tr_data)
foreground_label[fg_idx]
train_images =[] # list of CIN images, each the average of one foreground and m-1 background digits
train_label=[] # label of a CIN image = foreground class present in it
for i in range(desired_num):
np.random.seed(i)
bg_idx = np.random.randint(0,47335,m-1)
fg_idx = np.random.randint(0,12665)
tr_data = ( torch.sum(background_data[bg_idx], axis = 0) + foreground_data[fg_idx] ) / m
label = (foreground_label[fg_idx].item())
train_images.append(tr_data)
train_label.append(label)
train_images = torch.stack(train_images)
train_images.shape, len(train_label)
imshow(train_images[0])
test_images =[] # test images contain only the (scaled) foreground digit
test_label=[] # label = foreground class
for i in range(10000):
np.random.seed(i)
fg_idx = np.random.randint(0,12665)
tr_data = ( foreground_data[fg_idx] ) / m
label = (foreground_label[fg_idx].item())
test_images.append(tr_data)
test_label.append(label)
test_images = torch.stack(test_images)
test_images.shape, len(test_label)
imshow(test_images[0])
torch.sum(torch.isnan(train_images)), torch.sum(torch.isnan(test_images))
np.unique(train_label), np.unique(test_label)
```
## creating dataloader
```
class CIN_Dataset(Dataset):
"""CIN_Dataset dataset."""
def __init__(self, list_of_images, labels):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.image = list_of_images
self.label = labels
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.image[idx] , self.label[idx]
batch = 250
train_data = CIN_Dataset(train_images, train_label)
train_loader = DataLoader( train_data, batch_size= batch , shuffle=True)
test_data = CIN_Dataset( test_images , test_label)
test_loader = DataLoader( test_data, batch_size= batch , shuffle=False)
train_loader.dataset.image.shape, test_loader.dataset.image.shape
```
## model
```
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.fc1 = nn.Linear(28*28, 50)
self.fc2 = nn.Linear(50, 2)
torch.nn.init.xavier_normal_(self.fc1.weight)
torch.nn.init.zeros_(self.fc1.bias)
torch.nn.init.xavier_normal_(self.fc2.weight)
torch.nn.init.zeros_(self.fc2.bias)
def forward(self, x):
x = x.view(-1, 28*28)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
```
## training
```
torch.manual_seed(12)
classify = Classification().double()
classify = classify.to("cuda")
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.Adam(classify.parameters(), lr=0.001 ) #, momentum=0.9)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in train_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in test_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %f %%' % ( 10000 , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
nos_epochs = 200
tr_loss = []
for epoch in range(nos_epochs): # loop over the dataset multiple times
epoch_loss = []
cnt=0
iteration = desired_num // batch
running_loss = 0
#training data set
for i, data in enumerate(train_loader):
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
inputs = inputs.double()
# zero the parameter gradients
optimizer_classify.zero_grad()
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_classify.step()
running_loss += loss.item()
mini = 1
if cnt % mini == mini-1: # print every 40 mini-batches
# print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
tr_loss.append(np.mean(epoch_loss))
if(np.mean(epoch_loss) <= 0.001):
break;
else:
print('[Epoch : %d] loss: %.3f' %(epoch + 1, np.mean(epoch_loss) ))
print('Finished Training')
plt.plot(tr_loss)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in train_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %f %%' % ( desired_num , 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
count = 0
flag = 1
with torch.no_grad():
for data in test_loader:
inputs, labels = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = classify(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %f %%' % ( 10000 , 100 * correct / total))
print("total correct", correct)
print("total test set images", total)
```
<img src='https://assets.leetcode-cn.com/aliyun-lc-upload/uploads/2019/08/17/1336_ex1.jpeg'>
<img src='https://assets.leetcode-cn.com/aliyun-lc-upload/uploads/2019/08/17/1336_ex2.jpeg'>
```
from collections import deque
class Solution:
def maxDistance(self, grid) -> int:
N, q = len(grid), deque() # put every land cell (1) into q
for i in range(N):
for j in range(N):
if grid[i][j] == 1:
grid[i][j] = 2
q.append((i, j))
step = -1
while q:
q_len = len(q)
step += 1
for i in range(q_len):
x, y = q.popleft()
for nx, ny in (x+1, y), (x-1, y), (x, y+1), (x, y-1):
if nx >= N or ny >= N or nx < 0 or ny < 0:
continue
if grid[nx][ny] == 2:
continue
if grid[nx][ny] == 0:
grid[nx][ny] = 2
q.append((nx, ny))
return step if step != 0 else -1
from collections import deque
class Solution:
def maxDistance(self, grid) -> int:
N, q = len(grid), deque() # put every land cell (1) into q
for i in range(N):
for j in range(N):
if grid[i][j] == 1:
grid[i][j] = 2
q.append((i, j))
step = 0
ret = -1
while q:
q_len = len(q)
step += 1
for i in range(q_len):
x, y = q.popleft()
for nx, ny in (x+1, y), (x-1, y), (x, y+1), (x, y-1):
if nx >= N or ny >= N or nx < 0 or ny < 0:
continue
if grid[nx][ny] == 2:
continue
if grid[nx][ny] == 0:
ret = max(ret, step)
grid[nx][ny] = 2
q.append((nx, ny))
return ret
solution = Solution()
solution.maxDistance([[1,0,0],[0,0,0],[0,0,0]])
q = deque(([1, 2]))
q.append([1, 2])
print(q)
q.popleft()
def bfs(r, c):
if r >= N or c >= N or r < 0 or c < 0:
return False
if grid[r][c] == 1:
return True
res = -1
for n_r, n_c in (r+1, c), (r-1, c), (r, c+1), (r, c-1):
res = max(res, bfs(n_r, n_c))
return res
from collections import deque
class Solution:
def maxDistance(self, grid) -> int:
N = len(grid)
sum_val = 0
for i in range(N):
print(grid[i])
sum_val += sum(grid[i])
print(sum_val)
if sum_val == 0 or sum_val == N*N:
return -1
visited = {}
for i in range(N):
for j in range(N):
dis = -1
if grid[i][j] == 0: # found a water cell
q = deque()
q.append([i, j])
seen = set()
can_break = False
while q:
x, y = q.popleft()
if (x, y) not in seen:
seen.add((x, y))
for n_x, n_y in (x+1, y), (x-1, y), (x, y+1), (x, y-1):
if n_x >= N or n_y >= N or n_x < 0 or n_y < 0:
continue
if (n_x, n_y) in visited:
dis = max(dis, abs(n_x - i) + abs(n_y - j))
if grid[n_x][n_y] == 1:
dis = max(dis, abs(n_x - i) + abs(n_y - j))
can_break = True
break
else:
q.append((n_x, n_y))
if can_break:
break
visited[(i, j)] = dis
return max(visited.values())
from collections import deque
class Solution:
def maxDistance(self, grid) -> int:
N = len(grid)
sum_val = 0
for i in range(N):
sum_val += sum(grid[i])
if sum_val == 0 or sum_val == N*N:
return -1
dis = -1
for i in range(N):
for j in range(N):
if grid[i][j] == 0: # found a water cell
q = deque()
q.append([i, j])
grid[i][j] = 2 # mark visited cells as 2
while q:
x, y = q.popleft()
for n_x, n_y in (x+1, y), (x-1, y), (x, y+1), (x, y-1):
if n_x >= N or n_y >= N or n_x < 0 or n_y < 0:
continue
if grid[n_x][n_y] == 1:
dis += 1
grid[n_x][n_y] = 2
elif grid[n_x][n_y] == 0:
q.append((n_x, n_y))
return dis
solution = Solution()
solution.maxDistance([[1,0,0],[0,0,0],[0,0,0]])
```
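The multi-source BFS of the first solution above can also be checked stand-alone. This is a sketch of the same idea (hypothetical helper name `max_distance`): seed the queue with every land cell, expand one ring per step, and the last ring reached gives the answer.

```python
from collections import deque

def max_distance(grid):
    # multi-source BFS: all land cells start in the queue at distance 0
    n = len(grid)
    q = deque((i, j) for i in range(n) for j in range(n) if grid[i][j] == 1)
    seen = set(q)
    if not q or len(q) == n * n:
        return -1  # no land or no water
    step = 0
    while q:
        step += 1
        for _ in range(len(q)):
            x, y = q.popleft()
            for nx, ny in (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1):
                if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in seen:
                    seen.add((nx, ny))
                    q.append((nx, ny))
    return step - 1  # the final iteration added nothing, so back off by one

print(max_distance([[1, 0, 0], [0, 0, 0], [0, 0, 0]]))  # 4
```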
# Import statements
```
from google.colab import drive
drive.mount('/content/drive')
from my_ml_lib import MetricTools, PlotTools
import os
import numpy as np
import matplotlib.pyplot as plt
import pickle
import pandas as pd
from matplotlib.pyplot import figure
import json
import datetime
import copy
from PIL import Image as im
import joblib
from sklearn.model_selection import train_test_split
# import math as Math
import random
import torch.optim
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import torchvision
import cv2
```
# Saving and Loading code
```
# Saving and Loading models using joblib
def save(filename, obj):
with open(filename, 'wb') as handle:
joblib.dump(obj, handle, protocol=pickle.HIGHEST_PROTOCOL)
def load(filename):
with open(filename, 'rb') as handle:
return joblib.load(handle)
```
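As a quick sanity check of the helpers above, the same round-trip can be sketched with the standard-library `pickle` (which joblib builds on for generic Python objects); the helper names and file path here are illustrative:

```python
import os
import pickle
import tempfile

def save_pkl(filename, obj):
    # same pattern as save() above, with plain pickle
    with open(filename, 'wb') as handle:
        pickle.dump(obj, handle, protocol=pickle.HIGHEST_PROTOCOL)

def load_pkl(filename):
    with open(filename, 'rb') as handle:
        return pickle.load(handle)

path = os.path.join(tempfile.gettempdir(), "demo_obj.pkl")
save_pkl(path, {"labels": [0, 1, 2]})
print(load_pkl(path))  # {'labels': [0, 1, 2]}
```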
# Importing Dataset
```
p = "/content/drive/MyDrive/A3/"
data_path = p + "dataset/train.pkl"
x = load(data_path)
# save_path = "/content/drive/MyDrive/SEM-2/05-DL /Assignments/A3/dataset/"
# # saving the images and labels array
# save(save_path + "data_image.pkl",data_image)
# save(save_path + "data_label.pkl",data_label)
# # dict with labels as keys and lists of image arrays as values
# save(save_path + "my_dict.pkl",my_dict)
save_path = p + "dataset/"
# loading the images and labels arrays
data_image = load(save_path + "data_image.pkl")
data_label = load(save_path + "data_label.pkl")
# dict with labels as keys and lists of image arrays as values
my_dict = load(save_path + "my_dict.pkl")
len(data_image) , len(data_label), my_dict.keys()
```
# Data Class and Data Loaders and Data transforms
```
len(x['names']) ,x['names'][4999] , data_image[0].shape
```
## Splitting the data into train and val
```
X_train, X_test, y_train, y_test = train_test_split(data_image, data_label, test_size=0.10, random_state=42,stratify=data_label )
len(X_train) , len(y_train) , len(X_test) ,len(y_test)
pd.DataFrame(y_test).value_counts()
```
## Data Class
```
class myDataClass(Dataset):
"""Custom dataset class"""
def __init__(self, images, labels , transform=None):
"""
Args:
images : Array of all the images
labels : Corresponding labels of all the images
"""
self.images = images
self.labels = labels
self.transform = transform
def __len__(self):
return len(self.images)
def __getitem__(self, idx):
# converts image value between 0 and 1 and returns a tensor C,H,W
img = torchvision.transforms.functional.to_tensor(self.images[idx])
target = self.labels[idx]
if self.transform:
img = self.transform(img)
return img,target
```
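`myDataClass` follows the standard map-style `Dataset` protocol: a `DataLoader` only needs `__len__` and `__getitem__`. A torch-free illustration of that contract (the class name here is hypothetical):

```python
class ToyDataset:
    # minimal map-style dataset: indexable and sized, nothing else required
    def __init__(self, images, labels):
        self.images = images
        self.labels = labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

ds = ToyDataset([[0.1], [0.2], [0.3]], [0, 1, 0])
print(len(ds), ds[1])  # 3 ([0.2], 1)
```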
## Data Loaders
```
batch = 64
train_dataset = myDataClass(X_train, y_train)
test_dataset = myDataClass(X_test, y_test)
train_dataloader = DataLoader(train_dataset, batch_size= batch, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size= batch, shuffle=True)
# next(iter(train_dataloader))[0].shape
len(train_dataloader) , len(test_dataloader)
```
# Train and Test functions
```
def load_best(all_models,model_test):
FILE = all_models[-1]
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model_test.parameters(), lr=0)
checkpoint = torch.load(FILE)
model_test.load_state_dict(checkpoint['model_state'])
optimizer.load_state_dict(checkpoint['optim_state'])
epoch = checkpoint['epoch']
model_test.eval()
return model_test
def train(save_path,epochs,train_dataloader,model,test_dataloader,optimizer,criterion,basic_name):
model_no = 1
c = 1
all_models = []
valid_loss_min = np.Inf
train_losses = []
val_losses = []
for e in range(epochs):
train_loss = 0.0
valid_loss = 0.0
model.train()
for idx, (images,labels) in enumerate(train_dataloader):
images, labels = images.to(device) , labels.to(device)
optimizer.zero_grad()
log_ps= model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
train_loss += ((1 / (idx + 1)) * (loss.data - train_loss))
else:
accuracy = 0
correct = 0
model.eval()
with torch.no_grad():
for idx, (images,labels) in enumerate(test_dataloader):
images, labels = images.to(device) , labels.to(device)
log_ps = model(images)
_, predicted = torch.max(log_ps.data, 1)
loss = criterion(log_ps, labels)
# correct += (predicted == labels).sum().item()
equals = predicted == labels.view(*predicted.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
valid_loss += ((1 / (idx + 1)) * (loss.data - valid_loss))
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
e+1,
train_loss,
valid_loss
), "Test Accuracy: {:.3f}".format(accuracy/len(test_dataloader)))
train_losses.append(train_loss)
val_losses.append(valid_loss)
if valid_loss < valid_loss_min:
print('Saving model..' + str(model_no))
valid_loss_min = valid_loss
checkpoint = {
"epoch": e+1,
"model_state": model.state_dict(),
"optim_state": optimizer.state_dict(),
"train_losses": train_losses,
"test_losses": val_losses,
}
FILE = save_path + basic_name +"_epoch_" + str(e+1) + "_model_" + str(model_no)
all_models.append(FILE)
torch.save(checkpoint, FILE)
model_no = model_no + 1
save(save_path + basic_name + "_all_models.pkl", all_models)
return model, train_losses, val_losses, all_models
def plot(train_losses,val_losses,title='Training Validation Loss with CNN'):
plt.plot(train_losses, label='Training loss')
plt.plot(val_losses, label='Validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
_ = plt.ylim()
plt.title(title)
# plt.savefig('plots/Training Validation Loss with CNN from scratch.png')
plt.show()
def test(loader, model, criterion, device, name):
test_loss = 0.
correct = 0.
total = 0.
y = None
y_hat = None
model.eval()
for batch_idx, (images, labels) in enumerate(loader):
# move to GPU or CPU
images, labels = images.to(device) , labels.to(device)
target = labels
# forward pass: compute predicted outputs by passing inputs to the model
output = model(images)
# calculate the loss
loss = criterion(output,labels)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
if y is None:
y = target.cpu().numpy()
y_hat = pred.data.cpu().view_as(target).numpy()
else:
y = np.append(y, target.cpu().numpy())
y_hat = np.append(y_hat, pred.data.cpu().view_as(target).numpy())
correct += np.sum(pred.view_as(labels).cpu().numpy() == labels.cpu().numpy())
total = total + images.size(0)
# if batch_idx % 20 == 0:
# print("done till batch" , batch_idx+1)
print(name + ' Loss: {:.6f}\n'.format(test_loss))
print(name + ' Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
return y, y_hat
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# def train(save_path,epochs,train_dataloader,model,test_dataloader,optimizer,criterion,basic_name)
# def plot(train_losses,val_losses,title='Training Validation Loss with CNN')
# def test(loader, model, criterion, device)
```
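The update `loss_avg += (1 / (idx + 1)) * (loss - loss_avg)` used in both `train()` and `test()` above is an incremental (running) mean of the per-batch losses; a quick check with made-up numbers:

```python
losses = [0.9, 0.7, 0.5, 0.3]  # illustrative per-batch losses

avg = 0.0
for i, l in enumerate(losses):
    # same update rule as train()/test(): nudge the average toward each new value
    avg += (1 / (i + 1)) * (l - avg)

print(abs(avg - sum(losses) / len(losses)) < 1e-12)  # True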
# Relu [ X=2 Y=3 Z=1 ]
## CNN-Block-123
### model
```
cfg3 = {
'B123': [16,16,'M',32,32,32,'M',64,'M'],
}
def make_layers3(cfg, batch_norm=False):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
elif v == 'M1':
layers += [nn.MaxPool2d(kernel_size=4, stride=3)]
elif v == 'D':
layers += [nn.Dropout(p=0.5)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
return nn.Sequential(*layers)
class Model_B123(nn.Module):
'''
Model
'''
def __init__(self, features):
super(Model_B123, self).__init__()
self.features = features
self.classifier = nn.Sequential(
# nn.Linear(1600, 512),
# nn.ReLU(True),
# nn.Linear(512, 256),
# nn.ReLU(True),
# nn.Linear(256, 64),
# nn.ReLU(True),
nn.Linear(64, 10),
)
def forward(self, x):
x = self.features(x)
# print(x.shape)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
# m = Model_B123(make_layers3(cfg3['B123']))
# for i,l in train_dataloader:
# o = m(i)
model3 = Model_B123(make_layers3(cfg3['B123'])).to(device)
learning_rate = 0.01
criterion3 = nn.CrossEntropyLoss()
optimizer3 = optim.Adam(model3.parameters(), lr=learning_rate)
print(model3)
```
### train
```
# !rm '/content/drive/MyDrive/SEM-2/05-DL /Assignments/A3/models_saved_Q1/1_3/bw_blocks/Dropout(0.5)/cnn_block123/'*
# !ls '/content/drive/MyDrive/SEM-2/05-DL /Assignments/A3/models_saved_Q1/1_3/bw_blocks/Dropout(0.5)/cnn_block123/'
save_path3 = p + "models_saved_Q1/1_4/colab_notebooks /Batchnorm_and_pooling/models/"
m, train_losses, val_losses,m_all_models = train(save_path3,30,train_dataloader,model3,test_dataloader,optimizer3,criterion3,"cnn_b123_x2_y3_z1_without_BN")
```
### Tests and Plots
```
plot(train_losses,val_losses,'Training Validation Loss with CNN-block1')
all_models3 = load(save_path3 + "cnn_b123_x2_y3_z1_without_BN_all_models.pkl")
FILE = all_models3[-1]
m3 = Model_B123(make_layers3(cfg3['B123'])).to(device)
m3 = load_best(all_models3,m3)
train_y, train_y_hat = test(train_dataloader, m3, criterion3, device, "TRAIN")
cm = MetricTools.confusion_matrix(train_y, train_y_hat, nclasses=10)
PlotTools.confusion_matrix(cm, [i for i in range(10)], title='',
filename='Confusion Matrix with CNN', figsize=(6,6))
test_y, test_y_hat = test(test_dataloader, m3, criterion3, device,"TEST")
cm = MetricTools.confusion_matrix(test_y, test_y_hat, nclasses=10)
PlotTools.confusion_matrix(cm, [i for i in range(10)], title='',
filename='Confusion Matrix with CNN', figsize=(6,6))
```
| github_jupyter |
# Deep Learning with Google Earth Engine, Cloud Storage and AI Platform
This notebook is inspired by the following tutorials:
- [Getting started: Training and prediction with Keras](https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-keras)
- [Down to Earth with AI Platform](https://medium.com/google-earth/down-to-earth-with-ai-platform-7bc363abf4fa)
- [Multi-class prediction with a DNN](https://developers.google.com/earth-engine/tf_examples#multi-class-prediction-with-a-dnn)
- [Regression with an FCNN](https://developers.google.com/earth-engine/tf_examples#regression-with-an-fcnn)
- [Deploying to AI Platform](https://developers.google.com/earth-engine/tf_examples#deploying-to-ai-platform)
## Setup software libraries
```
# Import and initialize the Earth Engine library.
import ee
ee.Initialize()
ee.__version__
# Folium setup.
import folium
print(folium.__version__)
import matplotlib.pyplot as plt
import numpy as np
import functools
import json
from pprint import pprint
import env
```
## Earth Engine ImageCollection attributes
We define the different attributes that we will need for each Earth Engine ImageCollection all through the notebook.
We include them in the `ee_collection_specifics.py` file:
```
%%writefile ee_collection_specifics.py
"""
Information on Earth Engine collections stored here (e.g. bands, collection ids, etc.)
"""
import ee
def ee_collections(collection):
"""
Earth Engine image collection names
"""
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': 'COPERNICUS/S2',
'Landsat-7-Surface-Reflectance': 'LANDSAT/LE07/C01/T1_SR',
'Landsat-8-Surface-Reflectance': 'LANDSAT/LC08/C01/T1_SR',
'USDA-NASS-Cropland-Data-Layers': 'USDA/NASS/CDL',
'USGS-National-Land-Cover-Database': 'USGS/NLCD',
'Lake-Water-Quality-100m': 'projects/vizzuality/skydipper-water-quality/LWQ-100m'
}
return dic[collection]
def ee_bands(collection):
"""
Earth Engine band names
"""
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': ['B1','B2','B3','B4','B5','B6','B7','B8A','B8','B11','B12','ndvi','ndwi'],
'Landsat-7-Surface-Reflectance': ['B1','B2','B3','B4','B5','B6','B7','ndvi','ndwi'],
'Landsat-8-Surface-Reflectance': ['B1','B2','B3','B4','B5','B6','B7','B10','B11','ndvi','ndwi'],
'USDA-NASS-Cropland-Data-Layers': ['landcover', 'cropland', 'land', 'water', 'urban'],
'USGS-National-Land-Cover-Database': ['impervious'],
'Lake-Water-Quality-100m': ['turbidity_blended_mean']
}
return dic[collection]
def ee_bands_rgb(collection):
"""
Earth Engine rgb band names
"""
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': ['B4','B3','B2'],
'Landsat-7-Surface-Reflectance': ['B3','B2','B1'],
'Landsat-8-Surface-Reflectance': ['B4', 'B3', 'B2'],
'USDA-NASS-Cropland-Data-Layers': ['landcover'],
'USGS-National-Land-Cover-Database': ['impervious'],
'Lake-Water-Quality-100m': ['turbidity_blended_mean']
}
return dic[collection]
def ee_bands_normThreshold(collection):
"""
Normalization threshold percentage
"""
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': {'B1': 75,'B2': 75,'B3': 75,'B4': 75,'B5': 80,'B6': 80,'B7': 80,'B8A': 80,'B8': 80,'B11': 100,'B12': 100},
'Landsat-7-Surface-Reflectance': {'B1': 95,'B2': 95,'B3': 95,'B4': 100,'B5': 100,'B6': 100,'B7': 100},
'Landsat-8-Surface-Reflectance': {'B1': 90,'B2': 95,'B3': 95,'B4': 95,'B5': 100,'B6': 100,'B7': 100,'B10': 100,'B11': 100},
'USDA-NASS-Cropland-Data-Layers': {'landcover': 100, 'cropland': 100, 'land': 100, 'water': 100, 'urban': 100},
'USGS-National-Land-Cover-Database': {'impervious': 100},
'Lake-Water-Quality-100m': {'turbidity_blended_mean': 100}
}
return dic[collection]
def normalize(collection):
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': True,
'Landsat-7-Surface-Reflectance': True,
'Landsat-8-Surface-Reflectance': True,
'USDA-NASS-Cropland-Data-Layers': False,
'USGS-National-Land-Cover-Database': False,
'Lake-Water-Quality-100m': False
}
return dic[collection]
def vizz_params_rgb(collection):
"""
Visualization parameters
"""
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': {'min':0,'max':3000, 'bands':['B4','B3','B2']},
'Landsat-7-Surface-Reflectance': {'min':0,'max':3000, 'gamma':1.4, 'bands':['B3','B2','B1']},
'Landsat-8-Surface-Reflectance': {'min':0,'max':3000, 'gamma':1.4, 'bands':['B4','B3','B2']},
'USDA-NASS-Cropland-Data-Layers': {'min':0,'max':3, 'bands':['landcover']},
'USGS-National-Land-Cover-Database': {'min': 0, 'max': 1, 'bands':['impervious']},
'Lake-Water-Quality-100m': {'min': 0, 'max': 1, 'bands':['turbidity_blended_mean']}
}
return dic[collection]
def vizz_params(collection):
"""
Visualization parameters
"""
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': [{'min':0,'max':1, 'bands':['B4','B3','B2']},
{'min':0,'max':1, 'bands':['B1']},
{'min':0,'max':1, 'bands':['B5']},
{'min':0,'max':1, 'bands':['B6']},
{'min':0,'max':1, 'bands':['B7']},
{'min':0,'max':1, 'bands':['B8A']},
{'min':0,'max':1, 'bands':['B8']},
{'min':0,'max':1, 'bands':['B11']},
{'min':0,'max':1, 'bands':['B12']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['ndvi']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['ndwi']}],
'Landsat-7-Surface-Reflectance': [{'min':0,'max':1, 'gamma':1.4, 'bands':['B3','B2','B1']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B4']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B5']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B7']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B6']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['ndvi']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['ndwi']}],
'Landsat-8-Surface-Reflectance': [{'min':0,'max':1, 'gamma':1.4, 'bands':['B4','B3','B2']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B1']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B5']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B6']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B7']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B10']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['B11']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['ndvi']},
{'min':0,'max':1, 'gamma':1.4, 'bands':['ndwi']}],
'USDA-NASS-Cropland-Data-Layers': [{'min':0,'max':3, 'bands':['landcover']},
{'min':0,'max':1, 'bands':['cropland']},
{'min':0,'max':1, 'bands':['land']},
{'min':0,'max':1, 'bands':['water']},
{'min':0,'max':1, 'bands':['urban']}],
'USGS-National-Land-Cover-Database': [{'min': 0, 'max': 1, 'bands':['impervious']}],
'Lake-Water-Quality-100m': [{'min': 0, 'max': 1, 'bands':['turbidity_blended_mean']}],
}
return dic[collection]
## ------------------------- Filter datasets ------------------------- ##
## Lansat 7 Cloud Free Composite
def CloudMaskL7sr(image):
qa = image.select('pixel_qa')
#If the cloud bit (5) is set and the cloud confidence (7) is high
#or the cloud shadow bit is set (3), then it's a bad pixel.
cloud = qa.bitwiseAnd(1 << 5).And(qa.bitwiseAnd(1 << 7)).Or(qa.bitwiseAnd(1 << 3))
#Remove edge pixels that don't occur in all bands
mask2 = image.mask().reduce(ee.Reducer.min())
return image.updateMask(cloud.Not()).updateMask(mask2)
def CloudFreeCompositeL7(startDate, stopDate):
## Define your collection
collection = ee.ImageCollection('LANDSAT/LE07/C01/T1_SR')
## Filter
collection = collection.filterDate(startDate,stopDate).map(CloudMaskL7sr)
## Composite
composite = collection.median()
## normDiff bands
normDiff_band_names = ['ndvi', 'ndwi']
for nB, normDiff_band in enumerate([['B4','B3'], ['B4','B2']]):
image_nd = composite.normalizedDifference(normDiff_band).rename(normDiff_band_names[nB])
composite = ee.Image.cat([composite, image_nd])
return composite
## Lansat 8 Cloud Free Composite
def CloudMaskL8sr(image):
opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7']
thermalBands = ['B10', 'B11']
cloudShadowBitMask = ee.Number(2).pow(3).int()
cloudsBitMask = ee.Number(2).pow(5).int()
qa = image.select('pixel_qa')
mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(
qa.bitwiseAnd(cloudsBitMask).eq(0))
mask2 = image.mask().reduce('min')
mask3 = image.select(opticalBands).gt(0).And(
image.select(opticalBands).lt(10000)).reduce('min')
mask = mask1.And(mask2).And(mask3)
return image.updateMask(mask)
def CloudFreeCompositeL8(startDate, stopDate):
## Define your collection
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
## Filter
collection = collection.filterDate(startDate,stopDate).map(CloudMaskL8sr)
## Composite
composite = collection.median()
## normDiff bands
normDiff_band_names = ['ndvi', 'ndwi']
for nB, normDiff_band in enumerate([['B5','B4'], ['B5','B3']]):
image_nd = composite.normalizedDifference(normDiff_band).rename(normDiff_band_names[nB])
composite = ee.Image.cat([composite, image_nd])
return composite
## Sentinel 2 Cloud Free Composite
def CloudMaskS2(image):
"""
European Space Agency (ESA) clouds from 'QA60', i.e. Quality Assessment band at 60m
parsed by Nick Clinton
"""
AerosolsBands = ['B1']
VIBands = ['B2', 'B3', 'B4']
RedBands = ['B5', 'B6', 'B7', 'B8A']
NIRBands = ['B8']
SWIRBands = ['B11', 'B12']
qa = image.select('QA60')
# Bits 10 and 11 are clouds and cirrus, respectively.
cloudBitMask = int(2**10)
cirrusBitMask = int(2**11)
# Both flags set to zero indicates clear conditions.
mask = qa.bitwiseAnd(cloudBitMask).eq(0).And(\
qa.bitwiseAnd(cirrusBitMask).eq(0))
return image.updateMask(mask)
def CloudFreeCompositeS2(startDate, stopDate):
## Define your collection
collection = ee.ImageCollection('COPERNICUS/S2')
## Filter
collection = collection.filterDate(startDate,stopDate)\
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))\
.map(CloudMaskS2)
## Composite
composite = collection.median()
## normDiff bands
normDiff_band_names = ['ndvi', 'ndwi']
for nB, normDiff_band in enumerate([['B8','B4'], ['B8','B3']]):
image_nd = composite.normalizedDifference(normDiff_band).rename(normDiff_band_names[nB])
composite = ee.Image.cat([composite, image_nd])
return composite
## Cropland Data Layers
def CroplandData(startDate, stopDate):
## Define your collection
collection = ee.ImageCollection('USDA/NASS/CDL')
## Filter
collection = collection.filterDate(startDate,stopDate)
## First image
image = ee.Image(collection.first())
## Change classes
land = ['65', '131', '141', '142', '143', '152', '176', '87', '190', '195']
water = ['83', '92', '111']
urban = ['82', '121', '122', '123', '124']
classes = []
for n, i in enumerate([land,water,urban]):
a = ''
for m, j in enumerate(i):
if m < len(i)-1:
a = a + 'crop == '+ j + ' || '
else:
a = a + 'crop == '+ j
classes.append('('+a+') * '+str(n+1))
classes = ' + '.join(classes)
image = image.expression(classes, {'crop': image.select(['cropland'])})
image =image.rename('landcover')
# Split image into 1 band per class
names = ['cropland', 'land', 'water', 'urban']
mask = image
for i, name in enumerate(names):
image = ee.Image.cat([image, mask.eq(i).rename(name)])
return image
## National Land Cover Database
def ImperviousData(startDate, stopDate):
## Define your collection
collection = ee.ImageCollection('USGS/NLCD')
## Filter
collection = collection.filterDate(startDate,stopDate)
## First image
image = ee.Image(collection.first())
## Select impervious band
image = image.select('impervious')
## Normalize to 1
image = image.divide(100).float()
return image
def WaterQuality(startDate, stopDate):
## Define your collection
collection = ee.ImageCollection('projects/vizzuality/skydipper-water-quality/LWQ-100m')
## Filter
collection = collection.filterDate(startDate,stopDate)
## First image
image = ee.Image(collection.first())
## Select impervious band
image = image.select('turbidity_blended_mean')
return image
## ------------------------------------------------------------------- ##
def Composite(collection):
dic = {
'Sentinel-2-Top-of-Atmosphere-Reflectance': CloudFreeCompositeS2,
'Landsat-7-Surface-Reflectance': CloudFreeCompositeL7,
'Landsat-8-Surface-Reflectance': CloudFreeCompositeL8,
'USDA-NASS-Cropland-Data-Layers': CroplandData,
'USGS-National-Land-Cover-Database': ImperviousData,
'Lake-Water-Quality-100m': WaterQuality
}
return dic[collection]
import ee_collection_specifics
```
## Composite image
**Variables**
```
collection = 'USDA-NASS-Cropland-Data-Layers'
startDate = '2016-01-01'
stopDate = '2016-12-31'
scale = 30 #scale in meters
```
**Display composite**
```
# Define the URL format used for Earth Engine generated map tiles.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'
composite = ee_collection_specifics.Composite(collection)(startDate, stopDate)
mapid = composite.getMapId(ee_collection_specifics.vizz_params_rgb(collection))
tiles_url = EE_TILES.format(**mapid)
map = folium.Map(location=[34.093568352499986, -118.46832275390466])
folium.TileLayer(
tiles=tiles_url,
attr='Google Earth Engine',
overlay=True,
name=str(ee_collection_specifics.ee_bands_rgb(collection))).add_to(map)
map.add_child(folium.LayerControl())
map
```
***
## Geostore
We select the areas from which we will export the training data.
**Variables**
```
outCollection = 'USDA-NASS-Cropland-Data-Layers'
def polygons_to_multipoligon(polygons):
multipoligon = []
MultiPoligon = {}
for polygon in polygons.get('features'):
multipoligon.append(polygon.get('geometry').get('coordinates'))
MultiPoligon = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "MultiPolygon",
"coordinates": multipoligon
}
}
]
}
return MultiPoligon
if outCollection == 'USGS-National-Land-Cover-Database':
trainPolygons = {"type":"FeatureCollection","features":[{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-123.22265625000001,45.213003555993964],[-122.03613281249999,45.213003555993964],[-122.03613281249999,46.164614496897094],[-123.22265625000001,46.164614496897094],[-123.22265625000001,45.213003555993964]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-122.1240234375,38.16911413556086],[-120.76171875,38.16911413556086],[-120.76171875,39.13006024213511],[-122.1240234375,39.13006024213511],[-122.1240234375,38.16911413556086]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-119.70703125,34.77771580360469],[-118.3447265625,34.77771580360469],[-118.3447265625,35.92464453144099],[-119.70703125,35.92464453144099],[-119.70703125,34.77771580360469]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-115.97167968750001,35.496456056584165],[-114.521484375,35.496456056584165],[-114.521484375,36.73888412439431],[-115.97167968750001,36.73888412439431],[-115.97167968750001,35.496456056584165]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-118.21289062499999,33.797408767572485],[-116.23535156249999,33.797408767572485],[-116.23535156249999,34.379712580462204],[-118.21289062499999,34.379712580462204],[-118.21289062499999,33.797408767572485]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-112.6318359375,33.02708758002874],[-111.4013671875,33.02708758002874],[-111.4013671875,34.016241889667015],[-112.6318359375,34.016241889667015],[-112.6318359375,33.02708758002874]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-105.6005859375,39.40224434029275],[-104.5458984375,39.40224434029275],[-104.5458984375,40.44694705960048],[-105.6005859375,40.44694705960048],[-105.6005859375,39.40224434029275]]]}},{"type":"Feature","properties":{},"geometry":{"t
ype":"Polygon","coordinates":[[[-112.67578124999999,40.27952566881291],[-111.4453125,40.27952566881291],[-111.4453125,41.21172151054787],[-112.67578124999999,41.21172151054787],[-112.67578124999999,40.27952566881291]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-97.734375,32.21280106801518],[-95.9326171875,32.21280106801518],[-95.9326171875,33.32134852669881],[-97.734375,33.32134852669881],[-97.734375,32.21280106801518]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-99.36035156249999,29.036960648558267],[-97.822265625,29.036960648558267],[-97.822265625,30.031055426540206],[-99.36035156249999,30.031055426540206],[-99.36035156249999,29.036960648558267]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-95.185546875,38.61687046392973],[-93.9990234375,38.61687046392973],[-93.9990234375,39.639537564366684],[-95.185546875,39.639537564366684],[-95.185546875,38.61687046392973]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-91.2744140625,38.30718056188316],[-89.6484375,38.30718056188316],[-89.6484375,39.16414104768742],[-91.2744140625,39.16414104768742],[-91.2744140625,38.30718056188316]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-88.330078125,41.343824581185686],[-86.8798828125,41.343824581185686],[-86.8798828125,42.391008609205045],[-88.330078125,42.391008609205045],[-88.330078125,41.343824581185686]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-93.91113281249999,44.49650533109348],[-92.5048828125,44.49650533109348],[-92.5048828125,45.583289756006316],[-93.91113281249999,45.583289756006316],[-93.91113281249999,44.49650533109348]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-81.38671875,34.813803317113155],[-80.2880859375,34.813803317113155],[-80.2880859375,35.782170703266075],[-81.38671875,35.782170703266075],
[-81.38671875,34.813803317113155]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-85.0341796875,33.17434155100208],[-83.7158203125,33.17434155100208],[-83.7158203125,34.27083595165],[-85.0341796875,34.27083595165],[-85.0341796875,33.17434155100208]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-87.2314453125,35.60371874069731],[-86.17675781249999,35.60371874069731],[-86.17675781249999,36.63316209558658],[-87.2314453125,36.63316209558658],[-87.2314453125,35.60371874069731]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-87.14355468749999,32.91648534731439],[-86.2646484375,32.91648534731439],[-86.2646484375,33.97980872872457],[-87.14355468749999,33.97980872872457],[-87.14355468749999,32.91648534731439]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-81.9140625,27.566721430409707],[-81.03515625,27.566721430409707],[-81.03515625,28.844673680771795],[-81.9140625,28.844673680771795],[-81.9140625,27.566721430409707]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-84.7705078125,38.92522904714054],[-83.75976562499999,38.92522904714054],[-83.75976562499999,40.17887331434696],[-84.7705078125,40.17887331434696],[-84.7705078125,38.92522904714054]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-80.947265625,40.27952566881291],[-79.98046875,40.27952566881291],[-79.98046875,41.178653972331674],[-80.947265625,41.178653972331674],[-80.947265625,40.27952566881291]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-75.2783203125,40.613952441166596],[-73.8720703125,40.613952441166596],[-73.8720703125,41.21172151054787],[-75.2783203125,41.21172151054787],[-75.2783203125,40.613952441166596]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-78.0908203125,38.44498466889473],[-76.728515625,38.444984668894
73],[-76.728515625,39.33429742980725],[-78.0908203125,39.33429742980725],[-78.0908203125,38.44498466889473]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-112.6318359375,46.164614496897094],[-111.4453125,46.164614496897094],[-111.4453125,46.86019101567027],[-112.6318359375,46.86019101567027],[-112.6318359375,46.164614496897094]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-117.1142578125,43.229195113965005],[-115.57617187499999,43.229195113965005],[-115.57617187499999,44.08758502824516],[-117.1142578125,44.08758502824516],[-117.1142578125,43.229195113965005]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-96.328125,35.746512259918504],[-95.2734375,35.746512259918504],[-95.2734375,36.4566360115962],[-96.328125,36.4566360115962],[-96.328125,35.746512259918504]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-98.173828125,35.02999636902566],[-96.9873046875,35.02999636902566],[-96.9873046875,35.817813158696616],[-98.173828125,35.817813158696616],[-98.173828125,35.02999636902566]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-92.6806640625,34.379712580462204],[-91.7578125,34.379712580462204],[-91.7578125,35.10193405724606],[-92.6806640625,35.10193405724606],[-92.6806640625,34.379712580462204]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-90.7470703125,34.63320791137959],[-89.3408203125,34.63320791137959],[-89.3408203125,35.71083783530009],[-90.7470703125,35.71083783530009],[-90.7470703125,34.63320791137959]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-107.314453125,34.74161249883172],[-106.12792968749999,34.74161249883172],[-106.12792968749999,35.60371874069731],[-107.314453125,35.60371874069731],[-107.314453125,34.74161249883172]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates
":[[[-94.3505859375,41.1455697310095],[-92.94433593749999,41.1455697310095],[-92.94433593749999,42.19596877629178],[-94.3505859375,42.19596877629178],[-94.3505859375,41.1455697310095]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-85.869140625,40.68063802521456],[-84.5947265625,40.68063802521456],[-84.5947265625,41.64007838467894],[-85.869140625,41.64007838467894],[-85.869140625,40.68063802521456]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-87.099609375,39.30029918615029],[-85.6494140625,39.30029918615029],[-85.6494140625,40.245991504199026],[-87.099609375,40.245991504199026],[-87.099609375,39.30029918615029]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-117.7734375,47.30903424774781],[-116.103515625,47.30903424774781],[-116.103515625,48.1367666796927],[-117.7734375,48.1367666796927],[-117.7734375,47.30903424774781]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-97.91015624999999,37.3002752813443],[-96.8115234375,37.3002752813443],[-96.8115234375,38.09998264736481],[-97.91015624999999,38.09998264736481],[-97.91015624999999,37.3002752813443]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-94.06494140625,32.25926542645933],[-93.4716796875,32.25926542645933],[-93.4716796875,32.7872745269555],[-94.06494140625,32.7872745269555],[-94.06494140625,32.25926542645933]]]}}]}
trainPolys = polygons_to_multipoligon(trainPolygons)
evalPolygons = {"type":"FeatureCollection","features":[{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-95.888671875,29.38217507514529],[-95.06469726562499,29.38217507514529],[-95.06469726562499,30.12612436422458],[-95.888671875,30.12612436422458],[-95.888671875,29.38217507514529]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-83.84765625,42.374778361114195],[-82.94677734375,42.374778361114195],[-82.94677734375,42.78733853171998],[-83.84765625,42.78733853171998],[-83.84765625,42.374778361114195]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-96.88568115234375,40.69521661351714],[-95.77606201171875,40.69521661351714],[-95.77606201171875,41.393294288784865],[-96.88568115234375,41.393294288784865],[-96.88568115234375,40.69521661351714]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-105.05126953124999,38.57393751557591],[-104.490966796875,38.57393751557591],[-104.490966796875,39.0831721934762],[-105.05126953124999,39.0831721934762],[-105.05126953124999,38.57393751557591]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-122.62390136718749,46.95776134668866],[-121.84936523437499,46.95776134668866],[-121.84936523437499,48.04136507445029],[-122.62390136718749,48.04136507445029],[-122.62390136718749,46.95776134668866]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-120.157470703125,36.465471886798134],[-119.24560546875001,36.465471886798134],[-119.24560546875001,37.03763967977139],[-120.157470703125,37.03763967977139],[-120.157470703125,36.465471886798134]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-120.02563476562501,39.33854604847979],[-119.55871582031251,39.33854604847979],[-119.55871582031251,39.7240885773337],[-120.02563476562501,39.7240885773337],[-120.02563476562501,39.33854604847979]]]}},{"type":"Feature","prope
rties":{},"geometry":{"type":"Polygon","coordinates":[[[-86.30859375,37.61423141542417],[-84.9462890625,37.61423141542417],[-84.9462890625,38.65119833229951],[-86.30859375,38.65119833229951],[-86.30859375,37.61423141542417]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-78.31054687499999,36.914764288955936],[-76.86035156249999,36.914764288955936],[-76.86035156249999,38.03078569382294],[-78.31054687499999,38.03078569382294],[-78.31054687499999,36.914764288955936]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-102.87597656249999,31.541089879585808],[-101.4697265625,31.541089879585808],[-101.4697265625,32.24997445586331],[-102.87597656249999,32.24997445586331],[-102.87597656249999,31.541089879585808]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-83.5400390625,39.50404070558415],[-82.177734375,39.50404070558415],[-82.177734375,40.54720023441049],[-83.5400390625,40.54720023441049],[-83.5400390625,39.50404070558415]]]}}]}
evalPolys = polygons_to_multipoligon(evalPolygons)
if outCollection == 'USDA-NASS-Cropland-Data-Layers':
trainPolys = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "MultiPolygon",
"coordinates": [
[[[ -122.882080078125, 40.50126945841645],[ -122.1240234375, 40.50126945841645],[ -122.1240234375, 41.008920735004885],[ -122.882080078125, 41.008920735004885],[ -122.882080078125, 40.50126945841645]]],
[[[ -122.2283935546875, 39.00637903337455],[ -121.607666015625, 39.00637903337455],[ -121.607666015625, 39.46588451142044],[ -122.2283935546875, 39.46588451142044],[ -122.2283935546875, 39.00637903337455]]],
[[[ -120.355224609375, 38.77978137804918],[ -119.608154296875, 38.77978137804918],[ -119.608154296875, 39.342794408952365],[ -120.355224609375, 39.342794408952365],[ -120.355224609375, 38.77978137804918]]],
[[[ -121.90979003906249, 37.70555348721583],[ -120.9814453125, 37.70555348721583],[ -120.9814453125, 38.39764411353178],[ -121.90979003906249, 38.39764411353178],[ -121.90979003906249, 37.70555348721583]]],
[[[ -120.03662109374999, 37.45741810262938],[ -119.1851806640625, 37.45741810262938],[ -119.1851806640625, 38.08268954483802],[ -120.03662109374999, 38.08268954483802],[ -120.03662109374999, 37.45741810262938]]],
[[[ -120.03662109374999, 37.45741810262938],[ -119.1851806640625, 37.45741810262938],[ -119.1851806640625, 38.08268954483802],[ -120.03662109374999, 38.08268954483802],[ -120.03662109374999, 37.45741810262938]]],
[[[ -120.03662109374999, 37.45741810262938],[ -119.1851806640625, 37.45741810262938],[ -119.1851806640625, 38.08268954483802],[ -120.03662109374999, 38.08268954483802],[ -120.03662109374999, 37.45741810262938]]],
[[[ -112.554931640625, 33.0178760185549],[ -111.588134765625, 33.0178760185549],[ -111.588134765625, 33.78827853625996],[ -112.554931640625, 33.78827853625996],[ -112.554931640625, 33.0178760185549]]],
[[[ -112.87353515625, 40.51379915504413],[ -111.829833984375, 40.51379915504413],[ -111.829833984375, 41.28606238749825],[ -112.87353515625, 41.28606238749825],[ -112.87353515625, 40.51379915504413]]],
[[[ -108.19335937499999, 39.095962936305476],[ -107.1826171875, 39.095962936305476],[ -107.1826171875, 39.85915479295669],[ -108.19335937499999, 39.85915479295669],[ -108.19335937499999, 39.095962936305476]]],
[[[ -124.25537109375, 30.86451022625836],[ -124.25537109375, 30.86451022625836],[ -124.25537109375, 30.86451022625836],[ -124.25537109375, 30.86451022625836]]],
[[[ -106.875, 37.142803443716836],[ -105.49072265625, 37.142803443716836],[ -105.49072265625, 38.18638677411551],[ -106.875, 38.18638677411551],[ -106.875, 37.142803443716836]]],
[[[ -117.31201171875001, 43.27720532212024],[ -116.01562499999999, 43.27720532212024],[ -116.01562499999999, 44.134913443750726],[ -117.31201171875001, 44.134913443750726],[ -117.31201171875001, 43.27720532212024]]],
[[[ -115.7080078125, 44.69989765840318],[ -114.7412109375, 44.69989765840318],[ -114.7412109375, 45.36758436884978],[ -115.7080078125, 45.36758436884978],[ -115.7080078125, 44.69989765840318]]],
[[[ -120.65185546875, 47.517200697839414],[ -119.33349609375, 47.517200697839414],[ -119.33349609375, 48.32703913063476],[ -120.65185546875, 48.32703913063476],[ -120.65185546875, 47.517200697839414]]],
[[[ -119.83886718750001, 45.69083283645816],[ -118.38867187500001, 45.69083283645816],[ -118.38867187500001, 46.694667307773116],[ -119.83886718750001, 46.694667307773116],[ -119.83886718750001, 45.69083283645816]]],
[[[ -107.09472656249999, 47.45780853075031],[ -105.84228515625, 47.45780853075031],[ -105.84228515625, 48.31242790407178],[ -107.09472656249999, 48.31242790407178],[ -107.09472656249999, 47.45780853075031]]],
[[[ -101.57958984375, 46.93526088057719],[ -100.107421875, 46.93526088057719],[ -100.107421875, 47.945786463687185],[ -101.57958984375, 47.945786463687185],[ -101.57958984375, 46.93526088057719]]],
[[[ -101.162109375, 44.32384807250689],[ -99.7119140625, 44.32384807250689],[ -99.7119140625, 45.22848059584359],[ -101.162109375, 45.22848059584359],[ -101.162109375, 44.32384807250689]]],
[[[ -100.5908203125, 41.261291493919884],[ -99.25048828124999, 41.261291493919884],[ -99.25048828124999, 42.114523952464246],[ -100.5908203125, 42.114523952464246],[ -100.5908203125, 41.261291493919884]]],
[[[ -97.9541015625, 37.142803443716836],[ -96.65771484375, 37.142803443716836],[ -96.65771484375, 38.13455657705411],[ -97.9541015625, 38.13455657705411],[ -97.9541015625, 37.142803443716836]]],
[[[ -112.78564453124999, 32.91648534731439],[ -111.357421875, 32.91648534731439],[ -111.357421875, 33.925129700072],[ -112.78564453124999, 33.925129700072],[ -112.78564453124999, 32.91648534731439]]],
[[[ -106.435546875, 35.15584570226544],[ -105.22705078125, 35.15584570226544],[ -105.22705078125, 36.13787471840729],[ -106.435546875, 36.13787471840729],[ -106.435546875, 35.15584570226544]]],
[[[ -97.3828125, 32.45415593941475],[ -96.2841796875, 32.45415593941475],[ -96.2841796875, 33.22949814144951],[ -97.3828125, 33.22949814144951],[ -97.3828125, 32.45415593941475]]],
[[[ -97.97607421875, 35.04798673426734],[ -97.00927734375, 35.04798673426734],[ -97.00927734375, 35.764343479667176],[ -97.97607421875, 35.764343479667176],[ -97.97607421875, 35.04798673426734]]],
[[[ -97.97607421875, 35.04798673426734],[ -97.00927734375, 35.04798673426734],[ -97.00927734375, 35.764343479667176],[ -97.97607421875, 35.764343479667176],[ -97.97607421875, 35.04798673426734]]],
[[[ -95.4052734375, 47.62097541515849],[ -94.24072265625, 47.62097541515849],[ -94.24072265625, 48.28319289548349],[ -95.4052734375, 48.28319289548349],[ -95.4052734375, 47.62097541515849]]],
[[[ -94.19677734375, 41.27780646738183],[ -93.09814453125, 41.27780646738183],[ -93.09814453125, 42.13082130188811],[ -94.19677734375, 42.13082130188811],[ -94.19677734375, 41.27780646738183]]],
[[[ -93.71337890625, 37.75334401310656],[ -92.6806640625, 37.75334401310656],[ -92.6806640625, 38.51378825951165],[ -93.71337890625, 38.51378825951165],[ -93.71337890625, 37.75334401310656]]],
[[[ -90.63720703125, 34.615126683462194],[ -89.47265625, 34.615126683462194],[ -89.47265625, 35.69299463209881],[ -90.63720703125, 35.69299463209881],[ -90.63720703125, 34.615126683462194]]],
[[[ -93.05419921875, 30.44867367928756],[ -91.77978515625, 30.44867367928756],[ -91.77978515625, 31.57853542647338],[ -93.05419921875, 31.57853542647338],[ -93.05419921875, 30.44867367928756]]],
[[[ -90.02197265625, 44.276671273775186],[ -88.59374999999999, 44.276671273775186],[ -88.59374999999999, 44.98034238084973],[ -90.02197265625, 44.98034238084973],[ -90.02197265625, 44.276671273775186]]],
[[[ -90.63720703125, 38.41055825094609],[ -89.49462890625, 38.41055825094609],[ -89.49462890625, 39.18117526158749],[ -90.63720703125, 39.18117526158749],[ -90.63720703125, 38.41055825094609]]],
[[[ -87.56103515625, 35.62158189955968],[ -86.28662109375, 35.62158189955968],[ -86.28662109375, 36.4566360115962],[ -87.56103515625, 36.4566360115962],[ -87.56103515625, 35.62158189955968]]],
[[[ -90.63720703125, 31.93351676190369],[ -89.49462890625, 31.93351676190369],[ -89.49462890625, 32.731840896865684],[ -90.63720703125, 32.731840896865684],[ -90.63720703125, 31.93351676190369]]],
[[[ -69.54345703125, 44.68427737181225],[ -68.5107421875, 44.68427737181225],[ -68.5107421875, 45.336701909968134],[ -69.54345703125, 45.336701909968134],[ -69.54345703125, 44.68427737181225]]],
[[[ -73.212890625, 41.49212083968776],[ -72.35595703125, 41.49212083968776],[ -72.35595703125, 42.032974332441405],[ -73.212890625, 42.032974332441405],[ -73.212890625, 41.49212083968776]]],
[[[ -77.93701171875, 38.70265930723801],[ -76.97021484375, 38.70265930723801],[ -76.97021484375, 39.26628442213066],[ -77.93701171875, 39.26628442213066],[ -77.93701171875, 38.70265930723801]]],
[[[ -79.25537109375, 35.44277092585766],[ -78.15673828125, 35.44277092585766],[ -78.15673828125, 36.13787471840729],[ -79.25537109375, 36.13787471840729],[ -79.25537109375, 35.44277092585766]]],
[[[ -81.4306640625, 33.55970664841198],[ -80.44189453125, 33.55970664841198],[ -80.44189453125, 34.288991865037524],[ -81.4306640625, 34.288991865037524],[ -81.4306640625, 33.55970664841198]]],
[[[ -84.90234375, 33.394759218577995],[ -83.91357421875, 33.394759218577995],[ -83.91357421875, 34.19817309627726],[ -84.90234375, 34.19817309627726],[ -84.90234375, 33.394759218577995]]],
[[[ -82.28759765625, 28.246327971048842],[ -81.2548828125, 28.246327971048842],[ -81.2548828125, 29.209713225868185],[ -82.28759765625, 29.209713225868185],[ -82.28759765625, 28.246327971048842]]],
[[[ -109.88525390624999, 42.65012181368022],[ -108.56689453125, 42.65012181368022],[ -108.56689453125, 43.50075243569041],[ -109.88525390624999, 43.50075243569041],[ -109.88525390624999, 42.65012181368022]]],
[[[ -117.61962890624999, 39.04478604850143],[ -116.65283203124999, 39.04478604850143],[ -116.65283203124999, 39.740986355883564],[ -117.61962890624999, 39.740986355883564],[ -117.61962890624999, 39.04478604850143]]],
[[[ -102.67822265625, 31.42866311735861],[ -101.71142578125, 31.42866311735861],[ -101.71142578125, 32.26855544621476],[ -102.67822265625, 32.26855544621476],[ -102.67822265625, 31.42866311735861]]],
[[[ -119.47631835937499, 36.03133177633187],[ -118.58642578124999, 36.03133177633187],[ -118.58642578124999, 36.55377524336089],[ -119.47631835937499, 36.55377524336089],[ -119.47631835937499, 36.03133177633187]]],
[[[ -116.224365234375, 33.091541548655215],[ -115.56518554687499, 33.091541548655215],[ -115.56518554687499, 33.568861182555565],[ -116.224365234375, 33.568861182555565],[ -116.224365234375, 33.091541548655215]]]
]
}
}
]
}
evalPolys = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "MultiPolygon",
"coordinates": [
[[[-122.13208008, 41.25126946],[-121.37402344, 41.25126946],[-121.37402344, 41.75892074],[-122.13208008, 41.75892074],[-122.13208008, 41.25126946]]],
[[[-121.15979004, 38.45555349],[-120.23144531, 38.45555349],[-120.23144531, 39.14764411],[-121.15979004, 39.14764411],[-121.15979004, 38.45555349]]],
[[[-111.80493164, 33.76787602],[-110.83813477, 33.76787602],[-110.83813477, 34.53827854],[-111.80493164, 34.53827854],[-111.80493164, 33.76787602]]],
[[[-106.125 , 37.89280344],[-104.74072266, 37.89280344],[-104.74072266, 38.93638677],[-106.125 , 38.93638677],[-106.125 , 37.89280344]]],
[[[-119.08886719, 46.44083284],[-117.63867188, 46.44083284],[-117.63867188, 47.44466731],[-119.08886719, 47.44466731],[-119.08886719, 46.44083284]]],
[[[-99.84082031, 42.01129149],[-98.50048828, 42.01129149],[-98.50048828, 42.86452395],[-99.84082031, 42.86452395],[-99.84082031, 42.01129149]]],
[[[-96.6328125 , 33.20415594],[-95.53417969, 33.20415594],[-95.53417969, 33.97949814],[-96.6328125 , 33.97949814],[-96.6328125 , 33.20415594]]],
[[[-93.44677734, 42.02780647],[-92.34814453, 42.02780647],[-92.34814453, 42.8808213 ],[-93.44677734, 42.8808213 ],[-93.44677734, 42.02780647]]],
[[[-89.27197266, 45.02667127],[-87.84375 , 45.02667127],[-87.84375 , 45.73034238],[-89.27197266, 45.73034238],[-89.27197266, 45.02667127]]],
[[[-68.79345703, 45.43427737],[-67.76074219, 45.43427737],[-67.76074219, 46.08670191],[-68.79345703, 46.08670191],[-68.79345703, 45.43427737]]],
[[[-80.68066406, 34.30970665],[-79.69189453, 34.30970665],[-79.69189453, 35.03899187],[-80.68066406, 35.03899187],[-80.68066406, 34.30970665]]],
[[[-116.86962891, 39.79478605],[-115.90283203, 39.79478605],[-115.90283203, 40.49098636],[-116.86962891, 40.49098636],[-116.86962891, 39.79478605]]]
]
}
}
]
}
nTrain = len(trainPolys.get('features')[0].get('geometry').get('coordinates'))
print('Number of training polygons:', nTrain)
if evalPolys:
    nEval = len(evalPolys.get('features')[0].get('geometry').get('coordinates'))
    print('Number of evaluation polygons:', nEval)
```
**Display Polygons**
```
# Define the URL format used for Earth Engine generated map tiles.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'
composite = ee_collection_specifics.Composite(collection)(startDate, stopDate)
mapid = composite.getMapId(ee_collection_specifics.vizz_params_rgb(collection))
tiles_url = EE_TILES.format(**mapid)
map = folium.Map(location=[38., -100.], zoom_start=5)
folium.TileLayer(
tiles=tiles_url,
attr='Google Earth Engine',
overlay=True,
name=str(ee_collection_specifics.ee_bands_rgb(collection))).add_to(map)
# Convert the GeoJSONs to feature collections
trainFeatures = ee.FeatureCollection(trainPolys.get('features'))
if evalPolys:
    evalFeatures = ee.FeatureCollection(evalPolys.get('features'))
polyImage = ee.Image(0).byte().paint(trainFeatures, 1)
if evalPolys:
    polyImage = ee.Image(0).byte().paint(trainFeatures, 1).paint(evalFeatures, 2)
polyImage = polyImage.updateMask(polyImage)
mapid = polyImage.getMapId({'min': 1, 'max': 2, 'palette': ['red', 'blue']})
folium.TileLayer(
tiles=EE_TILES.format(**mapid),
attr='Google Earth Engine',
overlay=True,
name='training polygons',
).add_to(map)
map.add_child(folium.LayerControl())
map
```
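The `EE_TILES` template above uses doubled braces so that `format()` fills in `mapid` and `token` while leaving literal `{z}/{x}/{y}` placeholders for the tile scheme. A quick sketch with made-up values (the `mapid` and `token` below are hypothetical):

```python
# The doubled braces in the template survive format() as literal
# {z}/{x}/{y} tile placeholders; only {mapid} and {token} are filled in.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'
url = EE_TILES.format(mapid='abc123', token='secret')
print(url)
# https://earthengine.googleapis.com/map/abc123/{z}/{x}/{y}?token=secret
```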
***
## Data pre-processing
We normalize the composite images to have values from 0 to 1.
**Variables**
```
inCollection = 'Landsat-8-Surface-Reflectance'
outCollection = 'USDA-NASS-Cropland-Data-Layers'
collections = [inCollection, outCollection]
startDate = '2016-01-01'
stopDate = '2016-12-31'
```
**Normalize images**
```
def min_max_values(image, collection, scale, polygons=None):
    normThreshold = ee_collection_specifics.ee_bands_normThreshold(collection)
    num = 2
    lon = np.linspace(-180, 180, num)
    lat = np.linspace(-90, 90, num)
    features = []
    for i in range(len(lon)-1):
        for j in range(len(lat)-1):
            features.append(ee.Feature(ee.Geometry.Rectangle(lon[i], lat[j], lon[i+1], lat[j+1])))
    if not polygons:
        polygons = ee.FeatureCollection(features)
    regReducer = {
        'geometry': polygons,
        'reducer': ee.Reducer.minMax(),
        'maxPixels': 1e10,
        'bestEffort': True,
        'scale': scale,
        'tileScale': 10
    }
    values = image.reduceRegion(**regReducer).getInfo()
    print(values)
    # Avoid outliers by taking into account only the normThreshold% of the data points.
    regReducer = {
        'geometry': polygons,
        'reducer': ee.Reducer.histogram(),
        'maxPixels': 1e10,
        'bestEffort': True,
        'scale': scale,
        'tileScale': 10
    }
    hist = image.reduceRegion(**regReducer).getInfo()
    for band in list(normThreshold.keys()):
        if normThreshold[band] != 100:
            count = np.array(hist.get(band).get('histogram'))
            x = np.array(hist.get(band).get('bucketMeans'))
            cumulative_per = np.cumsum(count/count.sum()*100)
            values[band+'_max'] = x[np.where(cumulative_per < normThreshold[band])][-1]
    return values
def normalize_ee_images(image, collection, values):
    Bands = ee_collection_specifics.ee_bands(collection)
    # Normalize ee images to [0, 1]
    for i, band in enumerate(Bands):
        if i == 0:
            image_new = image.select(band).clamp(values[band+'_min'], values[band+'_max'])\
                .subtract(values[band+'_min'])\
                .divide(values[band+'_max']-values[band+'_min'])
        else:
            image_new = image_new.addBands(image.select(band).clamp(values[band+'_min'], values[band+'_max'])\
                .subtract(values[band+'_min'])\
                .divide(values[band+'_max']-values[band+'_min']))
    return image_new
%%time
images = []
for collection in collections:
    # Create composite
    image = ee_collection_specifics.Composite(collection)(startDate, stopDate)
    bands = ee_collection_specifics.ee_bands(collection)
    image = image.select(bands)
    if ee_collection_specifics.normalize(collection):
        # Get min/max values for each band
        values = min_max_values(image, collection, scale)
        print(values)
        # Normalize images
        image = normalize_ee_images(image, collection, values)
    else:
        values = {}
    images.append(image)
```
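The per-band normalization above can be sketched locally with NumPy on made-up values (Earth Engine's `clamp()` corresponds to `np.clip` here, and `np.histogram` stands in for `ee.Reducer.histogram()`):

```python
import numpy as np

# Hypothetical band: 99 values in [0, 1] plus one large outlier.
band = np.concatenate([np.linspace(0, 1, 99), [100.0]])

# Histogram-based maximum, mirroring the cumulative-percentage cut in
# min_max_values(): keep the last bucket mean whose cumulative share
# of pixels stays below the threshold.
count, edges = np.histogram(band, bins=10)
bucket_means = (edges[:-1] + edges[1:]) / 2
cumulative_per = np.cumsum(count / count.sum() * 100)
norm_threshold = 99.5  # analogous to normThreshold[band]
band_max = bucket_means[np.where(cumulative_per < norm_threshold)][-1]
band_min = band.min()

# Clamp then scale to [0, 1], as normalize_ee_images() does per band.
normalized = (np.clip(band, band_min, band_max) - band_min) / (band_max - band_min)
```

The outlier is clipped to the histogram-derived maximum instead of stretching the whole band, which is exactly why the threshold reducer is used in addition to `ee.Reducer.minMax()`.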
**Display composite**
```
# Define the URL format used for Earth Engine generated map tiles.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'
map = folium.Map(location=[38., -100.], zoom_start=5)
for n, collection in enumerate(collections):
    for params in ee_collection_specifics.vizz_params(collection):
        mapid = images[n].getMapId(params)
        folium.TileLayer(
            tiles=EE_TILES.format(**mapid),
            attr='Google Earth Engine',
            overlay=True,
            name=str(params['bands']),
        ).add_to(map)
# Convert the GeoJSONs to feature collections
trainFeatures = ee.FeatureCollection(trainPolys.get('features'))
if evalPolys:
    evalFeatures = ee.FeatureCollection(evalPolys.get('features'))
polyImage = ee.Image(0).byte().paint(trainFeatures, 1)
if evalPolys:
    polyImage = ee.Image(0).byte().paint(trainFeatures, 1).paint(evalFeatures, 2)
polyImage = polyImage.updateMask(polyImage)
mapid = polyImage.getMapId({'min': 1, 'max': 2, 'palette': ['red', 'blue']})
folium.TileLayer(
tiles=EE_TILES.format(**mapid),
attr='Google Earth Engine',
overlay=True,
name='training polygons',
).add_to(map)
map.add_child(folium.LayerControl())
map
```
***
## Create TFRecords for training
### Export images
**Variables**
```
inCollection = 'Landsat8_SR'
outCollection = 'NationalLandCoverDatabase'#'CroplandDataLayers'
inBands = ['B1','B2','B3','B4','B5','B6','B7','ndvi','ndwi']
outBands = ['impervious']#['cropland', 'land', 'water', 'urban']
startDate = '2016-01-01'
stopDate = '2016-12-31'
scale = 30 #scale in meters
sampleSize = 1000 # Total sample size in each polygon.
datasetName = 'Landsat8_Impervious'
```
**An array of images**
We have to stack the 2D images (the input and output images of the neural network) into a single image from which samples can be taken. The image is converted into an array image in which each pixel stores a 256x256 patch of pixels for each band. This is a key step that bears emphasis: to export training patches, convert a multi-band image to [an array image](https://developers.google.com/earth-engine/arrays_array_images#array-images) using [neighborhoodToArray()](https://developers.google.com/earth-engine/api_docs#eeimageneighborhoodtoarray), then sample the image at points.
```
def image_into_array(url, collections, bands, kernelSize, startDate, stopDate, scale):
    headers = {'Content-Type': 'application/json'}
    for i, collection in enumerate(collections):
        payload = {
            "collection": collection,
            "start": startDate,
            "end": stopDate,
            "scale": scale
        }
        output = requests.post(url, data=json.dumps(payload), headers=headers)
        if i == 0:
            image = ee.deserializer.fromJSON(output.json()['composite']).select(bands[i])
        else:
            featureStack = ee.Image.cat([image,
                ee.deserializer.fromJSON(output.json()['composite']).select(bands[i])
            ]).float()
    # Build a kernelSize x kernelSize kernel of ones
    # (renamed from `list` to avoid shadowing the built-in)
    ones = ee.List.repeat(1, kernelSize)
    weights = ee.List.repeat(ones, kernelSize)
    kernel = ee.Kernel.fixed(kernelSize, kernelSize, weights)
    arrays = featureStack.neighborhoodToArray(kernel)
    return arrays
import json
import requests

url = 'https://us-central1-skydipper-196010.cloudfunctions.net/ee_pre_processing'
collections = [inCollection, outCollection]
bands = [inBands, outBands]
kernelSize = 256
arrays = image_into_array(url, collections, bands, kernelSize, startDate, stopDate, scale)
```
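Conceptually, `neighborhoodToArray()` makes each pixel carry its full kernel-sized neighborhood, so that sampling a single point later yields a whole training patch. A minimal NumPy sketch of the idea on a toy image (this is an illustration, not the Earth Engine API):

```python
import numpy as np

# Toy single-band 10x10 image and a 5x5 kernel.
kernel_size = 5
image = np.arange(100, dtype=np.float32).reshape(10, 10)

def neighborhood_patch(img, row, col, k):
    """Return the k x k patch centered at (row, col)."""
    half = k // 2
    return img[row - half:row + half + 1, col - half:col + half + 1]

# Sampling the "array image" at one interior point yields a full patch.
patch = neighborhood_patch(image, 5, 5, kernel_size)
print(patch.shape)  # (5, 5)
```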
**Export TFRecords**
The mapped data look reasonable, so take a sample from each polygon and merge the results into a single export. The key step is sampling the array image at points to get all the pixels in a 256x256 neighborhood at each point. Note that to build the training and testing data for the FCNN, you export a single TFRecord file that contains patches of pixel values in each record; you do NOT need to export each training/testing patch to a different image. Since each record potentially contains a lot of data (especially with big patches or many input bands), some manual sharding of the computation is necessary to avoid the "computed value too large" error. Specifically, the following code takes multiple (smaller) samples within each geometry and merges the results into a single export.
```
def export_TFRecords_images(arrays, scale, nShards, sampleSize, features, polysLists, baseNames, bucket, folder, selectors):
    # Export all the training/evaluation data (in many pieces), with one task per geometry.
    filePaths = []
    for i, feature in enumerate(features):
        for g in range(feature.size().getInfo()):
            geomSample = ee.FeatureCollection([])
            for j in range(nShards):
                sample = arrays.sample(
                    region = ee.Feature(polysLists[i].get(g)).geometry(),
                    scale = scale,
                    numPixels = sampleSize / nShards, # Size of the shard.
                    seed = j,
                    tileScale = 8
                )
                geomSample = geomSample.merge(sample)
            desc = baseNames[i] + '_g' + str(g)
            filePaths.append(bucket + '/' + folder + '/' + desc)
            task = ee.batch.Export.table.toCloudStorage(
                collection = geomSample,
                description = desc,
                bucket = bucket,
                fileNamePrefix = folder + '/' + desc,
                fileFormat = 'TFRecord',
                selectors = selectors
            )
            task.start()
    return filePaths

def GeoJSONs_to_FeatureCollections(multipolygon):
    # Make a list of Features
    features = []
    for i in range(len(multipolygon.get('features')[0].get('geometry').get('coordinates'))):
        features.append(
            ee.Feature(
                ee.Geometry.Polygon(
                    multipolygon.get('features')[0].get('geometry').get('coordinates')[i]
                )
            )
        )
    # Create a FeatureCollection from the list.
    return ee.FeatureCollection(features)
# Convert the GeoJSONs to feature collections
trainFeatures = GeoJSONs_to_FeatureCollections(trainPolys)
evalFeatures = GeoJSONs_to_FeatureCollections(evalPolys)
# Convert the feature collections to lists for iteration.
trainPolysList = trainFeatures.toList(trainFeatures.size())
evalPolysList = evalFeatures.toList(evalFeatures.size())
# These numbers determined experimentally.
nShards = int(sampleSize/20) # Number of shards in each polygon.
features = [trainFeatures, evalFeatures]
polysLists = [trainPolysList, evalPolysList]
baseNames = ['training_patches', 'eval_patches']
bucket = 'skydipper_materials'
folder = 'cnn-models/'+datasetName+'/data'
selectors = inBands + outBands
# Export all the training/evaluation data (in many pieces), with one task per geometry.
#filePaths = export_TFRecords_images(arrays, scale, nShards, sampleSize, features, polysLists, baseNames, bucket, folder, selectors)
```
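The sharding above can be sketched as simple arithmetic: each request asks for only `sampleSize / nShards` pixels (with a different seed per shard), and the shards are merged back into one collection per geometry, so the full sample size is recovered without any single computation getting too large:

```python
# Sketch of the sharding arithmetic used in export_TFRecords_images():
# nShards requests of sampleSize / nShards pixels each.
sampleSize = 1000
nShards = int(sampleSize / 20)  # as in the code above
shard_sizes = [sampleSize / nShards for _ in range(nShards)]
print(nShards, shard_sizes[0])  # 50 20.0
```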
***
## Inspect data
Load the data exported from Earth Engine into a tf.data.Dataset.
**Helper functions**
```
# Tensorflow setup.
import tensorflow as tf
if tf.__version__ == '1.15.0':
    tf.enable_eager_execution()
print(tf.__version__)
```
### Inspect Images
```
def parse_function(proto):
    """The parsing function.
    Read a serialized example into the structure defined by features_dict.
    Args:
        proto: a serialized Example.
    Returns:
        A tuple of (inputs, outputs) tensors in HWC shape.
    """
    # Define your tfrecord
    features = inBands + outBands
    # Specify the size and shape of patches expected by the model.
    kernel_shape = [kernel_size, kernel_size]
    columns = [
        tf.io.FixedLenFeature(shape=kernel_shape, dtype=tf.float32) for k in features
    ]
    features_dict = dict(zip(features, columns))
    # Load one example
    parsed_features = tf.io.parse_single_example(proto, features_dict)
    # Convert a dictionary of tensors to a tuple of (inputs, outputs)
    inputsList = [parsed_features.get(key) for key in features]
    stacked = tf.stack(inputsList, axis=0)
    # Convert the channels-first stack into HWC shape
    stacked = tf.transpose(stacked, [1, 2, 0])
    return stacked[:,:,:len(inBands)], stacked[:,:,len(inBands):]

def get_dataset(glob, inBands, outBands, kernel_size, buffer_size, batch_size):
    """Get the preprocessed training dataset
    Returns:
        A tf.data.Dataset of training data.
    """
    glob = tf.compat.v1.io.gfile.glob(glob)
    dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP')
    dataset = dataset.map(parse_function, num_parallel_calls=5)
    dataset = dataset.shuffle(buffer_size).batch(batch_size).repeat()
    return dataset
```
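The channel stacking in `parse_function` can be mirrored in NumPy with toy data to make the shapes concrete (the band counts below are hypothetical):

```python
import numpy as np

# 3 input bands + 1 output band, 4x4 patches; band i is filled with value i.
in_bands, out_bands, k = 3, 1, 4
per_band = [np.full((k, k), i, dtype=np.float32) for i in range(in_bands + out_bands)]

stacked = np.stack(per_band, axis=0)        # shape (C, H, W), channels first
stacked = np.transpose(stacked, (1, 2, 0))  # shape (H, W, C), channels last

# Split into model inputs and outputs along the channel axis.
inputs, outputs = stacked[:, :, :in_bands], stacked[:, :, in_bands:]
print(inputs.shape, outputs.shape)  # (4, 4, 3) (4, 4, 1)
```

The transpose axes `(1, 2, 0)` move the stacked band axis to the end, which is the HWC layout Keras convolutional layers expect by default.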
**Variables**
```
datasetName = 'Landsat8_Impervious'#'Landsat8_Cropland'
baseNames = ['training_patches', 'eval_patches']
inBands = ['B1','B2','B3','B4','B5','B6','B7','ndvi','ndwi']
outBands = ['impervious']#['cropland', 'land', 'water', 'urban']
bucket = env.bucket_name
folder = 'cnn-models/'+datasetName+'/data1'
kernel_size = 256
buffer_size = 100
batch_size = 16
```
**Dataset**
```
glob = 'gs://' + bucket + '/' + folder + '/' + baseNames[0] + '_g0'+ '*'
dataset = get_dataset(glob, inBands, outBands, kernel_size, buffer_size, batch_size)
dataset
```
**Check the first record**
Shape
```
training_arr = iter(dataset.take(1)).next()
input_arr = training_arr[0].numpy()
print(input_arr.shape)
output_arr = training_arr[1].numpy()
print(output_arr.shape)
```
And display the channels
```
import matplotlib.pyplot as plt
def display_channels(data, nChannels, titles = False):
if nChannels == 1:
plt.figure(figsize=(5,5))
plt.imshow(data[:,:,0])
if titles:
plt.title(titles[0])
else:
fig, axs = plt.subplots(nrows=1, ncols=nChannels, figsize=(5*nChannels,5))
for i in range(nChannels):
ax = axs[i]
ax.imshow(data[:,:,i])
if titles:
ax.set_title(titles[i])
display_channels(input_arr[2,:,:,:], input_arr.shape[3], titles=inBands)
display_channels(output_arr[2,:,:,:], output_arr.shape[3], titles=outBands)
```
***
## Training the model in AI Platform
### Training code package setup
It's necessary to create a Python package to hold the training code. Here we're going to get started with that by creating a folder for the package and adding an empty `__init__.py` file.
```
ROOT_PATH = 'AI_Platform/cnn_trainer'
PACKAGE_FOLDER = '/trainer'
!rm -rf {ROOT_PATH}
!mkdir {ROOT_PATH}
!mkdir {ROOT_PATH+PACKAGE_FOLDER}
!touch {ROOT_PATH+PACKAGE_FOLDER}/__init__.py
!ls -l {ROOT_PATH+PACKAGE_FOLDER}
```
**Files**
Copy the `env.py` file into the package folder.
```
!cp env.py {ROOT_PATH+PACKAGE_FOLDER}/env.py
```
**Variables**
These variables need to be stored in a place where other code can access them. There are a variety of ways of accomplishing that, but here we'll use the `%%writefile` command to write the contents of the code cell to a file called `config.py`.
```
%%writefile {ROOT_PATH+PACKAGE_FOLDER}/config.py
import tensorflow as tf
from . import env
# Define your Google Cloud Storage bucket
bucket = env.bucket_name
# Specify names of output locations in Cloud Storage.
dataset_name = 'Landsat8_Impervious'#'Landsat8_Cropland'
job_dir = 'gs://' + bucket + '/' + 'cnn-models/'+ dataset_name +'/trainer'
model_dir = job_dir + '/model'
logs_dir = job_dir + '/logs'
# Pre-computed training and eval data.
base_names = ['training_patches', 'eval_patches']
folder = 'cnn-models/'+dataset_name+'/data'
# Specify inputs/outputs to the model
in_bands = ['B1','B2','B3','B4','B5','B6','B7','ndvi','ndwi']
out_bands = ['impervious']#['cropland', 'land', 'water', 'urban']
# Specify the size and shape of patches expected by the model.
kernel_size = 256
# Sizes of the training and evaluation datasets.
train_size = 1000*36
eval_size = 1000*11
# Specify model training parameters.
model_type = 'regression'
model_architecture = 'unet'
output_activation = 'sigmoid'
batch_size = 16
epochs = 1
shuffle_size = 2000
learning_rate = 1e-3
optimizer = tf.keras.optimizers.SGD(lr=learning_rate)
loss = 'mse'
metrics = ['RootMeanSquaredError']#['accuracy']
```
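The dataset sizes in this config imply the training-step arithmetic below (a hypothetical sketch of how `train_size` and `batch_size` would translate into Keras steps per epoch; the variable names are only illustrative):

```python
# With train_size patches and this batch size, one full pass over the
# training data takes train_size / batch_size steps.
train_size = 1000 * 36
eval_size = 1000 * 11
batch_size = 16
steps_per_epoch = int(train_size / batch_size)
print(steps_per_epoch)  # 2250
```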
**Training/evaluation data**
The following is code to load training/evaluation data. Write this into `util.py`.
```
%%writefile {ROOT_PATH+PACKAGE_FOLDER}/util.py
"""Utilities to download and preprocess the data."""
import tensorflow as tf
from . import config
def parse_function(proto):
    """The parsing function.
    Read a serialized example into the structure defined by features_dict.
    Args:
        proto: a serialized Example.
    Returns:
        A tuple of (inputs, outputs) tensors in HWC shape.
    """
    # Define your tfrecord
    features = config.in_bands + config.out_bands
    # Specify the size and shape of patches expected by the model.
    kernel_shape = [config.kernel_size, config.kernel_size]
    columns = [
        tf.io.FixedLenFeature(shape=kernel_shape, dtype=tf.float32) for k in features
    ]
    features_dict = dict(zip(features, columns))
    # Load one example
    parsed_features = tf.io.parse_single_example(proto, features_dict)
    # Convert a dictionary of tensors to a tuple of (inputs, outputs)
    inputs_list = [parsed_features.get(key) for key in features]
    stacked = tf.stack(inputs_list, axis=0)
    # Convert the channels-first stack into HWC shape
    stacked = tf.transpose(stacked, [1, 2, 0])
    return stacked[:,:,:len(config.in_bands)], stacked[:,:,len(config.in_bands):]

def get_dataset(glob):
    """Get a preprocessed dataset from the TFRecord files matching glob.
    Returns:
        A tf.data.Dataset.
    """
    glob = tf.compat.v1.io.gfile.glob(glob)
    dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP')
    dataset = dataset.map(parse_function, num_parallel_calls=5)
    return dataset

def get_training_dataset():
    """Get the preprocessed training dataset
    Returns:
        A tf.data.Dataset of training data.
    """
    glob = 'gs://' + config.bucket + '/' + config.folder + '/' + config.base_names[0] + '*'
    dataset = get_dataset(glob)
    dataset = dataset.shuffle(config.shuffle_size).batch(config.batch_size).repeat()
    return dataset

def get_evaluation_dataset():
    """Get the preprocessed evaluation dataset
    Returns:
        A tf.data.Dataset of evaluation data.
    """
    glob = 'gs://' + config.bucket + '/' + config.folder + '/' + config.base_names[1] + '*'
    dataset = get_dataset(glob)
    dataset = dataset.batch(1).repeat()
    return dataset
```
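To see what the stack-and-transpose in `parse_function` does, here is a small numpy sketch. The band names, pixel values, and kernel size below are made up for illustration; the real patches come from the parsed tfrecords.

```python
import numpy as np

# Hypothetical stand-ins for config.in_bands, config.out_bands, config.kernel_size.
in_bands = ["B1", "B2", "B3"]
out_bands = ["impervious"]
kernel = 4

# One parsed example: a dict of (kernel, kernel) patches keyed by band name.
parsed = {b: np.full((kernel, kernel), i, dtype=np.float32)
          for i, b in enumerate(in_bands + out_bands)}

# Stack bands along a new leading axis -> (C, H, W), then move channels last -> (H, W, C).
stacked = np.stack([parsed[b] for b in in_bands + out_bands], axis=0)
hwc = np.transpose(stacked, (1, 2, 0))

# Split into model inputs and targets along the channel axis.
inputs, targets = hwc[:, :, :len(in_bands)], hwc[:, :, len(in_bands):]
print(inputs.shape, targets.shape)  # (4, 4, 3) (4, 4, 1)
```

This mirrors the `tf.stack` / `tf.transpose` / slicing sequence in `parse_function`, just with numpy arrays instead of tensors.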
Verify that `util.py` is functioning as intended.
```
from AI_Platform.cnn_trainer.trainer import config
from AI_Platform.cnn_trainer.trainer import util
training_dataset = util.get_training_dataset()
training_dataset
```
**Model**
We copy the desired model (specified previously in `config.py`) into the `model.py` file.
```
from AI_Platform.cnn_trainer.trainer import config
!cp ../models/{config.model_type}/{config.model_architecture+'.py'} {ROOT_PATH+PACKAGE_FOLDER}/model.py
```
Verify that `model.py` is functioning as intended.
```
from AI_Platform.cnn_trainer.trainer import config
from AI_Platform.cnn_trainer.trainer import model
model = model.create_keras_model(inputShape = (None, None, len(config.in_bands)), nClasses = len(config.out_bands))
print(model.summary())
```
**Training task**
The following will create `task.py`, which will get the training and evaluation data, train the model and save it when it's done in a Cloud Storage bucket.
```
%%writefile {ROOT_PATH+PACKAGE_FOLDER}/task.py
"""Trains a Keras model"""
import os
import time
import tensorflow as tf
from . import config
from . import util
from . import model
def train_and_evaluate():
"""Trains and evaluates the Keras model.
Uses the Keras model defined in model.py and trains on data loaded and
preprocessed in util.py. Saves the trained model in TensorFlow SavedModel
format to the path defined in part by the --job-dir argument.
"""
# Create the Keras Model
if not config.output_activation:
keras_model = model.create_keras_model(inputShape = (None, None, len(config.in_bands)), nClasses = len(config.out_bands))
else:
keras_model = model.create_keras_model(inputShape = (None, None, len(config.in_bands)), nClasses = len(config.out_bands), output_activation = config.output_activation)
# Compile Keras model
keras_model.compile(loss=config.loss, optimizer=config.optimizer, metrics=config.metrics)
# Pass a tfrecord
training_dataset = util.get_training_dataset()
evaluation_dataset = util.get_evaluation_dataset()
# Setup TensorBoard callback.
tensorboard_cb = tf.keras.callbacks.TensorBoard(config.logs_dir)
# Train model
keras_model.fit(
x=training_dataset,
steps_per_epoch=int(config.train_size / config.batch_size),
epochs=config.epochs,
validation_data=evaluation_dataset,
validation_steps=int(config.eval_size / config.batch_size),
verbose=1,
callbacks=[tensorboard_cb])
tf.contrib.saved_model.save_keras_model(keras_model, os.path.join(config.model_dir, str(int(time.time()))))
if __name__ == '__main__':
tf.logging.set_verbosity('INFO')
train_and_evaluate()
```
**Using GPUs**
AI Platform lets you run any TensorFlow training application on a GPU-enabled machine. Learn more about [using GPUs for training models in the cloud](https://cloud.google.com/ml-engine/docs/tensorflow/using-gpus#submit-job).
We define a `config.yaml` file that describes the GPU options we want.
```
%%writefile {ROOT_PATH}/config.yaml
trainingInput:
scaleTier: CUSTOM
# A single NVIDIA Tesla V100 GPU
masterType: large_model_v100
```
### Submit the package to AI Platform for training
```
import time
# INSERT YOUR PROJECT HERE!
PROJECT_ID = env.project_id
REGION = "us-central1"
JOB_NAME = 'job_v' + str(int(time.time()))
TRAINER_PACKAGE_PATH = 'AI_Platform/cnn_trainer/trainer/'
MAIN_TRAINER_MODULE = 'trainer.task'
```
**Set up your GCP project**
Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
```
! gcloud config set project $PROJECT_ID
```
**Authenticate your GCP account**
Enter the path to your service account key as the
`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
%env GOOGLE_APPLICATION_CREDENTIALS 'privatekey.json'
```
**Submit a training job to AI Platform**
```
!gcloud ai-platform jobs submit training {JOB_NAME} \
--job-dir {config.job_dir} \
--package-path {TRAINER_PACKAGE_PATH} \
--module-name {MAIN_TRAINER_MODULE} \
--region {REGION} \
--config {ROOT_PATH}/config.yaml \
--runtime-version 1.14 \
--python-version 3.5
```
**Monitor the training job**
```
desc = !gcloud ai-platform jobs describe {JOB_NAME} --project {PROJECT_ID}
state = desc.grep('state:')[0].split(':')[1].strip()
print(state)
```
***
## Prepare the model for making predictions in Earth Engine
Before we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the inputs and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predict#binary_data_in_prediction_input) for details.)
**`earthengine model prepare`**
The EEification process is handled for you using the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the name of the input and output nodes in the TensorFlow computation graph. We can do all that programmatically:
```
inBands = ['B1','B2','B3','B4','B5','B6','B7','ndvi','ndwi']
outBands = ['impervious']#['cropland', 'land', 'water', 'urban']
# Specify names of input locations in Cloud Storage.
dataset_name = 'Landsat8_Impervious'#'Landsat8_Cropland'
job_dir = 'gs://' + env.bucket_name + '/' + 'cnn-models/' + dataset_name + '/trainer'
model_dir = job_dir + '/model'
PROJECT_ID = env.project_id
# Pick the directory with the latest timestamp, in case you've trained multiple times
exported_model_dirs = ! gsutil ls {model_dir}
saved_model_path = exported_model_dirs[-1]
folder_name = saved_model_path.split('/')[-2]
from tensorflow.python.tools import saved_model_utils
meta_graph_def = saved_model_utils.get_meta_graph_def(saved_model_path, 'serve')
inputs = meta_graph_def.signature_def['serving_default'].inputs
outputs = meta_graph_def.signature_def['serving_default'].outputs
# Just get the first thing(s) from the serving signature def. i.e. this
# model only has a single input and a single output.
input_name = None
for k,v in inputs.items():
input_name = v.name
break
output_name = None
for k,v in outputs.items():
output_name = v.name
break
# Make a dictionary that maps Earth Engine outputs and inputs to
# AI Platform inputs and outputs, respectively.
import json
input_dict = "'" + json.dumps({input_name: "array"}) + "'"
output_dict = "'" + json.dumps({output_name: "prediction"}) + "'"
# Put the EEified model next to the trained model directory.
EEIFIED_DIR = job_dir + '/eeified/' + folder_name
# You need to set the project before using the model prepare command.
!earthengine set_project {PROJECT_ID}
!earthengine model prepare --source_dir {saved_model_path} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict}
```
**Deploy the model to AI Platform**
Before it's possible to get predictions from the trained and EEified model, it needs to be deployed on AI Platform. The first step is to create the model. The second step is to create a version. See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) for details. Note that models and versions can be monitored from the [AI Platform models page](http://console.cloud.google.com/ai-platform/models) of the Cloud Console.
To ensure that the model is ready for predictions without having to warm up nodes, you can use a configuration yaml file to set this version's scaling type to autoScaling and specify a minimum number of nodes. This ensures there are always nodes on standby; however, you will be charged for as long as they are running. For this example, we'll set minNodes to 10, meaning at least 10 nodes are always up and waiting for predictions. The number of nodes will also scale up automatically if needed.
```
%%writefile config.yaml
autoScaling:
  minNodes: 10
```
```
import time
from AI_Platform.cnn_trainer.trainer import config
REGION = "us-central1"
MODEL_NAME = config.model_architecture+'_'+dataset_name
VERSION_NAME = 'v' + folder_name
print('Creating version: ' + VERSION_NAME)
!gcloud ai-platform models create {MODEL_NAME}
!gcloud ai-platform versions create {VERSION_NAME} \
--model {MODEL_NAME} \
--origin {EEIFIED_DIR} \
--runtime-version=1.14 \
--framework "TENSORFLOW" \
--python-version=3.5
```
***
## Predict in Earth Engine
**Variables**
```
from AI_Platform.cnn_trainer.trainer import config
inCollection = 'Landsat8_SR'
inBands = ['B1','B2','B3','B4','B5','B6','B7'] #['B1','B2','B3','B4','B5','B6','B7','ndvi','ndwi']
outBands = ['impervious']#['cropland', 'land', 'water', 'urban']
startDate = '2016-01-01'
stopDate = '2016-12-31'
scale = 30 #scale in meters
# Model variables
PROJECT_ID = env.project_id
MODEL_NAME = 'deepvel_Landsat8_Impervious' #config.model_architecture+'_'+dataset_name
VERSION_NAME = 'v1578585185'
model_type = config.model_type
# Polygon where we want to display the predictions
geometry = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
-3.9990234375,
40.17887331434696
],
[
-3.3343505859375,
40.17887331434696
],
[
-3.3343505859375,
40.57849862511043
],
[
-3.9990234375,
40.57849862511043
],
[
-3.9990234375,
40.17887331434696
]
]
]
}
}
]
}
```
**`ee.Model.fromAiPlatformPredictor`**
There is now a trained model, prepared for serving to Earth Engine, hosted and versioned on AI Platform.
We can now connect Earth Engine directly to the trained model for inference with the `ee.Model.fromAiPlatformPredictor` command.
For this command to work, we need to know a lot about the model: at a minimum, its name and version.
**Inputs**
You need to be able to recreate the imagery on which it was trained in order to perform inference. Specifically, you need to create an array-valued input from the scaled data and use that for input. (Recall that the new input node is named `array`, which is convenient because the array image has one band, named `array` by default.) The inputs will be provided as 144x144 patches (`inputTileSize`), at 30-meter resolution (`proj`), but 8 pixels will be thrown out (`inputOverlapSize`) to minimize boundary effects.
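The tile arithmetic is worth spelling out: trimming `inputOverlapSize` pixels from each edge of a 144×144 patch leaves a 128×128 block of retained predictions per request.

```python
# Effective output tile after Earth Engine discards the overlap border.
input_tile = 144   # inputTileSize
overlap = 8        # inputOverlapSize: pixels dropped from each edge

effective = input_tile - 2 * overlap
print(effective)  # 128
```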
```
import requests
payload = {
"collection": inCollection,
"start": startDate,
"end": stopDate,
"scale": scale
}
url = f'https://us-central1-skydipper-196010.cloudfunctions.net/ee_pre_processing'
headers = {'Content-Type': 'application/json'}
output_pre_processing = requests.post(url, data=json.dumps(payload), headers=headers)
output_pre_processing.json()
image = ee.deserializer.fromJSON(output_pre_processing.json()['composite'])
```
Select bands and convert them into float
```
image = image.select(inBands).float()
image.getInfo()
```
**Outputs**
You also need to know the name, type, and dimensionality of the output band.
```
# Load the trained model and use it for prediction.
model = ee.Model.fromAiPlatformPredictor(
projectName = PROJECT_ID,
modelName = MODEL_NAME,
version = VERSION_NAME,
inputTileSize = [144, 144],
inputOverlapSize = [8, 8],
proj = ee.Projection('EPSG:4326').atScale(scale),
fixInputProj = True,
outputBands = {'prediction': {
'type': ee.PixelType.float(),
'dimensions': 1,
}
}
)
predictions = model.predictImage(image.toArray()).arrayFlatten([outBands])
predictions.getInfo()
model.getInfo()
```
Clip the prediction area with the polygon
```
# Clip the prediction area with the polygon
polygon = ee.Geometry.Polygon(geometry.get('features')[0].get('geometry').get('coordinates'))
predictions = predictions.clip(polygon)
# Get centroid
centroid = polygon.centroid().getInfo().get('coordinates')[::-1]
```
Segment the image:
```
if model_type == 'segmentation':
maxValues = predictions.reduce(ee.Reducer.max())
predictions = predictions.addBands(maxValues)
expression = ""
for n, band in enumerate(outBands):
expression = expression + f"(b('{band}') == b('max')) ? {str(n+1)} : "
expression = expression + f"0"
segmentation = predictions.expression(expression)
predictions = predictions.addBands(segmentation.mask(segmentation).select(['constant'], ['categories']))
```
**Display**
Use folium to visualize the input imagery and the predictions.
```
# Define the URL format used for Earth Engine generated map tiles.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'
mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 1})
map = folium.Map(location=centroid, zoom_start=12)
folium.TileLayer(
tiles=EE_TILES.format(**mapid),
attr='Google Earth Engine',
overlay=True,
name='median composite',
).add_to(map)
for band in outBands:
mapid = predictions.getMapId({'bands': [band], 'min': 0, 'max': 0.5})
folium.TileLayer(
tiles=EE_TILES.format(**mapid),
attr='Google Earth Engine',
overlay=True,
name=band,
).add_to(map)
if model_type == 'segmentation':
mapid = predictions.getMapId({'bands': ['categories'], 'min': 1, 'max': len(outBands)})
folium.TileLayer(
tiles=EE_TILES.format(**mapid),
attr='Google Earth Engine',
overlay=True,
name='categories',
).add_to(map)
map.add_child(folium.LayerControl())
map
```
***
## Predict in AI Platform
**Variables**
```
inCollection = 'Landsat8_SR'
inBands = ['B1','B2','B3','B4','B5','B6','B7']
outBands = ['cropland', 'land', 'water', 'urban']
startDate = '2016-01-01'
stopDate = '2016-12-31'
scale = 30 #scale in meters
# Model variables
PROJECT_ID = env.project_id
MODEL_NAME = 'segnet_Landsat8_Cropland'
VERSION_NAME = VERSION_NAME
# Specify names of input locations in Cloud Storage.
dataset_name = 'Landsat8_Cropland'
job_dir = 'gs://' + env.bucket_name + '/' + 'cnn-models/' + dataset_name + '/trainer'
model_dir = job_dir + '/model'
exported_model_dirs = ! gsutil ls {model_dir}
# Pick the directory with the latest timestamp, in case you've trained multiple times
saved_model_path = exported_model_dirs[-1]
import time
MODEL_NAME = 'segnet_Landsat8_Cropland'
VERSION_NAME = VERSION_NAME
print('Creating version: ' + VERSION_NAME)
!gcloud ai-platform models create {MODEL_NAME}
# Create model version based on that SavedModel directory
! gcloud ai-platform versions create {VERSION_NAME} \
--model {MODEL_NAME} \
--runtime-version 1.13 \
--python-version 3.5 \
--framework tensorflow \
--origin {saved_model_path}
datasetName = 'Landsat8_Cropland'
baseNames = ['training_patches', 'eval_patches']
inBands = ['B1','B2','B3','B4','B5','B6','B7']
outBands = ['cropland', 'land', 'water', 'urban']
folder = 'cnn-models/'+datasetName+'/data'
kernel_size = 256
buffer_size = 100
batch_size = 4
glob = 'gs://' + env.bucket_name + '/' + folder + '/' + baseNames[0] + '_g0'+ '*'
dataset = get_dataset(glob, inBands, outBands, kernel_size, buffer_size, batch_size)
dataset
display_channels(input_arr[2,:80,:80,:], input_arr.shape[3], titles=inBands)
```
**Formatting input data for online prediction**
```
data = input_arr[2,:80,:80,:]
data = np.around(data,2).tolist()
instance = {"image" : data}
```
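Rounding to two decimals before serializing is purely a payload-size optimization: each float serializes to far fewer JSON characters. A sketch with a hypothetical patch shows the effect on the request body:

```python
import json
import random

random.seed(0)
# A hypothetical 8x8x3 patch of floats standing in for input_arr[2, :80, :80, :].
patch = [[[random.random() for _ in range(3)] for _ in range(8)] for _ in range(8)]

full = json.dumps({"image": patch})
rounded = json.dumps({"image": [[[round(v, 2) for v in px] for px in row] for row in patch]})
print(len(rounded) < len(full))  # rounding shrinks the request body
```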
**Requesting predictions**
```
from google.oauth2 import service_account
import googleapiclient
def predict_json(project, model, instances, privatekey_path, version=None):
"""Send json data to a deployed model for prediction.
Args:
project (str): project where the AI Platform Model is deployed.
model (str): model name.
instances ([Mapping[str: Any]]): Keys should be the names of Tensors
your deployed model expects as inputs. Values should be datatypes
convertible to Tensors, or (potentially nested) lists of datatypes
convertible to tensors.
version: str, version of the model to target.
Returns:
Mapping[str: any]: dictionary of prediction results defined by the
model.
"""
# To authenticate set the GOOGLE_APPLICATION_CREDENTIALS
credentials = service_account.Credentials.from_service_account_file(privatekey_path)
# Create the AI Platform service object.
service = googleapiclient.discovery.build('ml', 'v1', credentials=credentials)
name = 'projects/{}/models/{}'.format(project, model)
if version is not None:
name += '/versions/{}'.format(version)
response = service.projects().predict(
name=name,
body={'instances': instances}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
return response['predictions']
```
**Submit the online prediction request**
```
privatekey_path = 'privatekey.json'
response = predict_json(project=PROJECT_ID, model=MODEL_NAME, instances=instance, privatekey_path=privatekey_path, version=VERSION_NAME)
output = np.array(response[0].get('output'))
output.shape
display_channels(output, output.shape[2], titles=['cropland', 'land', 'water', 'urban'])
```
***
## Data post-processing
**Variables**
<b>The Relative Frequency</b> of any value of a random variable is the number of times that value occurs divided by the total number of observations.
The Relative Frequency is calculated as:<br>
\begin{equation}
\text{Relative Frequency} = \frac{\text{Frequency}}{\text{Total number of observations}}
\end{equation}<br>
E.g., suppose our sample is { 5,7,11,19,23,5,18,7,18,23 }. The relative frequency of each unique value is:<br>
${R.F of (5)} = {\frac {2}{10}}$<br>
${R.F of (7)} = {\frac {2}{10}}$<br>
${R.F of (11)} = {\frac {1}{10}}$<br>
${R.F of (18)} = {\frac {2}{10}}$<br>
${R.F of (19)} = {\frac {1}{10}}$<br>
${R.F of (23)} = {\frac {2}{10}}$<br>
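The relative frequencies above can be computed directly with the standard library:

```python
from collections import Counter

sample = [5, 7, 11, 19, 23, 5, 18, 7, 18, 23]
counts = Counter(sample)

# Relative frequency = count of each value / total number of observations.
rel_freq = {v: c / len(sample) for v, c in counts.items()}
print(rel_freq[5], rel_freq[11])  # 0.2 0.1
```

Note that the relative frequencies always sum to 1, just like probabilities.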
<b>Probability</b>, on the other hand, is the limiting case of the <b>Relative Frequency</b> as the sample approaches the population.
We are going to demonstrate that probability is the limiting case of relative frequency as the sample slowly approaches the population.
Let's first take the example of a binomial random variable, where we take a sample of observations (responses) arising from repeated conduction of a binomial experiment. This means we perform some experiment N times, where each trial results in one of two possible responses. Let's say our binomial trial is the toss of a coin, where each toss gives us one of two responses: either heads or tails.
Since we are working in the statistical domain, we will evaluate the relative frequency of heads as:
\begin{equation}
r = \frac{h}{N}
\end{equation}
Where, r = Relative Frequency.
h = Number of Binomial Experiments in which heads was the outcome.
N = Total number of times binomial experiment has been conducted.
```
# Importing the Numpy library alias np and matplotlib library alias plt
import numpy as np
import matplotlib.pyplot as plt
```
We are going to conduct a binomial experiment (with two results, either heads or tails) N=10 times, assuming the coin used is unbiased, i.e. p=q=0.5.
```
N = 10
psuccess = 0.5
qfailure = 1-psuccess
ExperimentOutcomes = np.random.binomial(N,psuccess)
ExperimentOutcomes
```
We tossed an unbiased coin 10 times and got only 4 heads. So, the relative frequency is given by:
\begin{equation}
r = \frac{h}{N} = \frac{4}{10} = 0.4
\end{equation}
Let's toss an unbiased coin 100 times and see what happens.
```
N = 100
ExperimentOutcomes = np.random.binomial(N,psuccess)
ExperimentOutcomes
```
We tossed a coin 100 times and got 54 heads. So, the relative frequency is given by:
\begin{equation}
r = \frac{h}{N} = \frac{54}{100} = 0.54
\end{equation}
Let's toss an unbiased coin 10000 times and see what happens.
```
N = 10000
ExperimentOutcomes = np.random.binomial(N,psuccess)
ExperimentOutcomes
```
We tossed a coin 10000 times and got 4972 heads. So, the relative frequency is given by:
\begin{equation}
r = \frac{h}{N} = \frac{4972}{10000} = 0.4972
\end{equation}
Let's toss a coin 100000 times and see what happens.
```
N = 100000
ExperimentOutcomes = np.random.binomial(N,psuccess)
ExperimentOutcomes
```
We tossed a coin 100000 times and got 49991 heads. So, the relative frequency is given by:
\begin{equation}
r = \frac{h}{N} = \frac{49991}{100000} = 0.49991
\end{equation}
The theoretical answer to the probability of getting heads in a single toss is 0.5 and we can observe that as we are increasing the number of tosses, we are approaching the theoretical value of probability in terms of relative frequency.
This phenomenon can also be shown by plotting the relative frequencies of heads for increasing sample sizes, which slowly approach the population.
```
#we are trying to get the values from the lower limit=10, higher limit=500
#in the interval of 10.
Ns = np.arange(10,500,10)
Ns
```
We have generated different sample sizes in steps of 10 tosses.
```
ExperimentOutcomes = np.random.binomial(Ns,psuccess)
```
We have tossed an unbiased coin for different values of total tosses (N) and recorded the number of heads (h) for each.
```
RelativeFrequencies = ExperimentOutcomes/Ns
```
We have now calculated the relative frequencies by dividing the number of heads (h) by the corresponding N.
```
plt.stem(Ns,RelativeFrequencies)
```
As we can observe from the above plot, as we toss the coin more times, the relative frequency of heads approaches psuccess = 0.5.
Now, let's change psuccess to 0.65 and plot a similar graph once again.
```
psuccess = 0.65
ExperimentOutcomes = np.random.binomial(Ns,psuccess)
RelativeFrequencies = ExperimentOutcomes/Ns
plt.stem(Ns,RelativeFrequencies)
```
As we can observe from the above plot, as we toss the coin more times, the relative frequency of heads approaches psuccess = 0.65.
Now, let's change psuccess to 0.35 and plot a similar graph once again.
```
psuccess = 0.35
ExperimentOutcomes = np.random.binomial(Ns,psuccess)
RelativeFrequencies = ExperimentOutcomes/Ns
plt.stem(Ns,RelativeFrequencies)
```
As we can observe from the above plot, as we toss the coin more times, the relative frequency of heads approaches psuccess = 0.35.
# Data Wrangling with Spark SQL Quiz
This quiz uses the same dataset and most of the same questions from the earlier "Quiz - Data Wrangling with Data Frames Jupyter Notebook." For this quiz, however, use Spark SQL instead of Spark Data Frames.
```
from pyspark.sql import SparkSession
# TODOS:
# 1) import any other libraries you might need
# 2) instantiate a Spark session
# 3) read in the data set located at the path "data/sparkify_log_small.json"
# 4) create a view to use with your SQL queries
# 5) write code to answer the quiz questions
import numpy as np
import pandas as pd
spark = SparkSession.builder.appName("Data Wrangling").getOrCreate()
user_log = spark.read.json('data/sparkify_log_small.json')
user_log.createOrReplaceTempView('user_log_table')
```
# Question 1
Which page did user id ""(empty string) NOT visit?
```
# TODO: write your code to answer question 1
spark.sql("""
SELECT DISTINCT page
FROM user_log_table
WHERE page
NOT IN
(SELECT DISTINCT page
FROM user_log_table
WHERE userId = "")
""").show()
```
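Conceptually, the `NOT IN` subquery is a set difference between all distinct pages and the pages visited by the empty-string user. A plain-Python sketch with made-up page names:

```python
# Hypothetical page sets standing in for the two DISTINCT-page queries.
all_pages = {"Home", "NextSong", "Login", "Help", "Downgrade"}
empty_user_pages = {"Home", "Login", "Help"}  # pages visited by userId ""

# Pages the empty-string user did NOT visit.
not_visited = all_pages - empty_user_pages
print(sorted(not_visited))  # ['Downgrade', 'NextSong']
```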
# Question 2 - Reflect
Why might you prefer to use SQL over data frames? Why might you prefer data frames over SQL?
You might prefer SQL over data frames because the syntax is clearer, especially for people already experienced in SQL.
Spark data frames give us more control. You can break down your queries into smaller steps, which can make debugging easier. You can also cache intermediate results or repartition intermediate results.
# Question 3
How many female users do we have in the data set?
```
# TODO: write your code to answer question 3
spark.sql("""
SELECT gender, COUNT(DISTINCT userId) AS count
FROM user_log_table
WHERE gender = 'F'
GROUP BY gender
""").show()
```
# Question 4
How many songs were played from the most played artist?
```
# TODO: write your code to answer question 4
spark.sql("""
SELECT artist, COUNT(song) AS plays_count
FROM user_log_table
WHERE page = 'NextSong'
GROUP BY artist
ORDER BY plays_count DESC
LIMIT 1
""").show()
```
# Question 5 (challenge)
How many songs do users listen to on average between visiting our home page? Please round your answer to the closest integer.
```
# TODO: write your code to answer question 5
is_home = spark.sql("SELECT userID, page, ts, CASE WHEN page = 'Home' THEN 1 ELSE 0 END AS is_home \
FROM user_log_table WHERE (page = 'NextSong') or (page = 'Home')")
is_home.createOrReplaceTempView("is_home_table")
cumulative_sum = spark.sql("SELECT *, SUM(is_home) OVER \
(PARTITION BY userID ORDER BY ts DESC ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS period \
FROM is_home_table")
cumulative_sum.createOrReplaceTempView("period_table")
spark.sql("SELECT AVG(count_results) FROM \
(SELECT COUNT(*) AS count_results FROM period_table \
GROUP BY userID, period, page HAVING page = 'NextSong') AS counts").show()
```
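The window-function trick above can be sketched in plain Python: walking one user's events from newest to oldest (mirroring `ORDER BY ts DESC`), a running count of 'Home' rows assigns every song to a "period" between home-page visits, and the answer is the average songs per period. The toy event list below is made up.

```python
# Toy event log for one user, newest first.
events = ["NextSong", "NextSong", "Home", "NextSong", "NextSong", "NextSong", "Home"]

periods = {}
period = 0
for page in events:
    period += (page == "Home")          # cumulative is_home, as in the SQL window
    if page == "NextSong":
        periods[period] = periods.get(period, 0) + 1

avg = sum(periods.values()) / len(periods)
print(periods, avg)  # {0: 2, 1: 3} 2.5
```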
```
from __future__ import division
import pandas as pd
import numpy as np
from sklearn import cluster, datasets, mixture
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from math import pi
from tqdm import tqdm
from torch.distributions.multivariate_normal import MultivariateNormal
from torch.distributions import LogisticNormal
# read DATA
data = pd.read_csv("C:/Users/bened/Documents/UNIVERSITY/SchoenStats/PyTorch Working Directory/Undergrad Stats Project/wave_data_gen_ugstprj.csv")
data.info()
# random sample from dataset
# DO NOT RUN this block if testing on the same data,
# or else the previous sample will be lost and a new one drawn
X = data.sample(2000).reset_index(drop=True)
X_train = torch.tensor(X.values)
X = data.sample(2000).reset_index(drop=True)
X_test = torch.tensor(X.values)
#n_samples = 2000
#noisy_moons = datasets.make_moons(n_samples=n_samples, noise=.05)
X = np.array(X_train)
y = np.array(X_test)
# normalize
X = StandardScaler().fit_transform(X)
y = StandardScaler().fit_transform(y)
# replicate the statistics used by StandardScaler().fit_transform(.)
x_mu = np.mean(np.array(X_train[:,0]))
x_sd = np.std(np.array(X_train[:,0]))
y_mu = np.mean(np.array(X_train[:,1]))
y_sd = np.std(np.array(X_train[:,1]))
# test if scaling works, should produce identical plots superimposed
scale_x = (X_train[:, 0] - x_mu) / x_sd
scale_y = (X_train[:, 1] - y_mu) / y_sd
plt.scatter(scale_x, scale_y, alpha=0.2)
plt.scatter(X[:, 0], X[:, 1], alpha=0.1)
```
# R-NVP
```
base_mu, base_cov = torch.zeros(2), torch.eye(2) * 2
base_dist = MultivariateNormal(base_mu, base_cov)
Z = base_dist.rsample(sample_shape=(2000,))
plt.scatter(Z[:, 0], Z[:, 1], alpha=0.3, s=10)
plt.show()
class R_NVP(nn.Module):
def __init__(self, d, k, hidden):
super().__init__()
self.d, self.k = d, k
self.sig_net = nn.Sequential(
nn.Linear(k, hidden),
nn.LeakyReLU(),
nn.Linear(hidden, d - k))
self.mu_net = nn.Sequential(
nn.Linear(k, hidden),
nn.LeakyReLU(),
nn.Linear(hidden, d - k))
def forward(self, x, flip=False):
x1, x2 = x[:, :self.k], x[:, self.k:]
if flip:
x2, x1 = x1, x2
# forward
sig = self.sig_net(x1)
z1, z2 = x1, x2 * torch.exp(sig) + self.mu_net(x1)
if flip:
z2, z1 = z1, z2
z_hat = torch.cat([z1, z2], dim=-1)
log_pz = base_dist.log_prob(z_hat)
log_jacob = sig.sum(-1)
return z_hat, log_pz, log_jacob
def inverse(self, Z, flip=False):
z1, z2 = Z[:, :self.k], Z[:, self.k:]
if flip:
z2, z1 = z1, z2
x1 = z1
x2 = (z2 - self.mu_net(z1)) * torch.exp(-self.sig_net(z1))
if flip:
x2, x1 = x1, x2
return torch.cat([x1, x2], -1)
class stacked_NVP(nn.Module):
def __init__(self, d, k, hidden, n):
super().__init__()
self.bijectors = nn.ModuleList([
R_NVP(d, k, hidden=hidden) for _ in range(n)
])
self.flips = [True if i%2 else False for i in range(n)]
def forward(self, x):
log_jacobs = []
for bijector, f in zip(self.bijectors, self.flips):
x, log_pz, lj = bijector(x, flip=f)
log_jacobs.append(lj)
return x, log_pz, sum(log_jacobs)
def inverse(self, z):
for bijector, f in zip(reversed(self.bijectors), reversed(self.flips)):
z = bijector.inverse(z, flip=f)
return z
```
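The affine coupling in `R_NVP` is invertible by construction: half the coordinates pass through unchanged, and the other half are scaled and shifted by functions of the first half, so the inverse only needs those same functions. A numpy round-trip with arbitrary stand-in networks (any functions of `x1` would do):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for sig_net / mu_net.
sig = lambda x1: np.tanh(x1)
mu = lambda x1: 0.5 * x1

x = rng.normal(size=(5, 2))
x1, x2 = x[:, :1], x[:, 1:]

# Forward: z1 = x1, z2 = x2 * exp(sig(x1)) + mu(x1)
z1, z2 = x1, x2 * np.exp(sig(x1)) + mu(x1)
log_det = sig(x1).sum(-1)  # log|det J| is just the sum of the log-scales

# Inverse: recover x2 exactly, as in R_NVP.inverse
x2_rec = (z2 - mu(z1)) * np.exp(-sig(z1))
print(np.allclose(x2_rec, x2))  # True
```

Alternating `flip` across stacked layers, as `stacked_NVP` does, ensures both halves of the input eventually get transformed.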
## Training Step Function
```
def train(model, epochs, batch_size, optim, scheduler):
losses = []
for _ in tqdm(range(epochs)):
        # "batch" here is actually the full training set; batch_size is unused
        X = X_train
X = torch.from_numpy(StandardScaler().fit_transform(X)).float()
optim.zero_grad()
z, log_pz, log_jacob = model(X)
loss = (-log_pz - log_jacob).mean()
        losses.append(loss.item())  # store a plain float for plotting
loss.backward()
optim.step()
scheduler.step()
return losses
def view(model, losses, y):
plt.plot(losses)
plt.title("Model Loss vs Epoch")
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()
X_hat = model.inverse(Z).detach().numpy()
plt.scatter(X_hat[:, 0], X_hat[:, 1], alpha=0.1, s=10)
#plt.scatter(y[:, 0], y[:, 1], alpha=0.1, s=10)
plt.title("Inverse of Normal Samples Z: X = F^-1(Z)")
plt.ylim((-2.5, 2.5))
plt.xlim((-2.5, 2.5))
plt.show()
n_samples = 2000
#X = X_train
X = X_test
X = torch.from_numpy(StandardScaler().fit_transform(X)).float()
z, _, _ = model(X)
z = z.detach().numpy()
plt.scatter(z[:, 0], z[:, 1], alpha=0.2, s=20)
plt.title("Transformation of Data Samples X: Z = F(X)")
plt.show()
return(z, X_hat)
```
## Model Params
```
d = 2
k = 1
```
# Single Layer R_NVP
```
model = R_NVP(d, k, hidden=512)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optim, 0.999)
n_samples = 512
# training loop
losses = train(model, 1000, n_samples, optim, scheduler)
z_res, x_hat_res = view(model, losses, y)
```
# 3 Layer R_NVP
```
model = stacked_NVP(d, k, hidden=512, n=3)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optim, 0.999)
n_samples = 512
# training loop
losses = train(model, 1000, n_samples, optim, scheduler)
z_res, x_hat_res = view(model, losses, y)
```
# 5 Layer R_NVP
```
model = stacked_NVP(d, k, hidden=512, n=5)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optim, 0.999)
n_samples = 512
# training loop
losses = train(model, 1000, n_samples, optim, scheduler)
z_res, x_hat_res = view(model, losses, y)
```
# 10 Layer R_NVP
```
model = stacked_NVP(d, k, hidden=512, n=10)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optim, 0.999)
n_samples = 512
# training loop
losses = train(model, 2000, n_samples, optim, scheduler)
z_res, x_hat_res = view(model, losses, y)
```
# 15 Layer R_NVP
```
model = stacked_NVP(d, k, hidden=512, n=15)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optim, 0.999)
n_samples = 512
# training loop
losses = train(model, 3000, n_samples, optim, scheduler)
z_res, x_hat_res = view(model, losses, y)
```
## 20 layer R_NVP
```
model = stacked_NVP(d, k, hidden=512, n=20)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optim, 0.999)
n_samples = 512
# training loop
losses = train(model, 2000, n_samples, optim, scheduler)
z_res, x_hat_res = view(model, losses, y)
```
# Model Tests; generate Test Data
```
def gen_sine_wave_sample(n=2000, s=1, amp=1, wl=1,
ns_mean=0, ns_var=1, lim_y=True, lim_x=True, x_type='rand',
x_range=(0,1), y_range=(0,1), set_input=None,
r_rad=0, swap_axis=False, formula=2):
# n - number of points to generate
# s - slope gradient of wave
# amp - amplitude of wave
# wl - wavelength equivalent parameter
    # ns_mean, ns_var - noise mean and scale (used as a standard deviation)
# set how the domain is generated
if x_type == 'rand':
x = np.random.rand(n)
elif x_type == 'lin' :
x = np.linspace(0, 1, n, endpoint=True)
elif x_type == 'self' :
x = set_input
#set x limits
if lim_x:
x = np.interp(x, (x.min(), x.max()), x_range)
epsilon = ns_mean + ns_var * np.random.randn(n)
# apply to formula [y = s*x + mag*sin(wl*x)]
if formula==0:
y = s*x + amp*x*np.sin(wl*x) + epsilon
elif formula==1:
y = s*x + amp*x*np.sin(wl*x) + epsilon*x
elif formula==2:
y = s*x + amp*x*np.sin(wl*x)
else:
y = s*x + amp*np.sin(wl*x) + epsilon
# create rotation matrix
rM = np.array((np.cos(r_rad), -np.sin(r_rad), np.sin(r_rad), np.cos(r_rad))).reshape((-1, 2))
t = np.array((x, y))
# set y limits
if lim_y:
t = t[:, t[1] < y_range[1]]
t = t[:, t[1] > y_range[0]]
# swap x and y axis
if (swap_axis==True):
t = t[[1, 0], :]
# rotation matrix
output = np.matmul(rM, t)
    # note: after the y-limit filtering above, the number of output points may be less than n
    return output
# scaling x_hat_res to return to original
pred_scaled_x = (x_hat_res[:, 0] * x_sd) + x_mu
pred_scaled_y = (x_hat_res[:, 1] * y_sd) + y_mu
# once scaled, we can input the data into the actual toy data generator
t = gen_sine_wave_sample(s=0.9, amp=0.25, wl=13, formula=1,
lim_x=False, lim_y=False, ns_var=0.09,
swap_axis=False,
x_type='self', set_input=pred_scaled_x)
# limit x
# - this may be turned off to see predictions outside of training range
results_scaled = np.array((pred_scaled_x, pred_scaled_y, t[1,:]))
results_scaled = results_scaled[:, results_scaled[0, :] <= 1]
results_scaled = results_scaled[:, results_scaled[0, :] >= 0]
plt.scatter(results_scaled[0,:], results_scaled[2,:], alpha=0.2, s=10, label='target')
plt.scatter(results_scaled[0,:], results_scaled[1,:], alpha=0.2, s=10, label='prediction')
plt.xlim((0, 1))
plt.ylim((0, 1))
plt.legend()
```
# Root Mean Squared
```
t_pred = results_scaled[1,:]
t_true = results_scaled[2,:]
diff = (t_pred - t_true)
L2_diff = np.square(diff)
# root-mean-square error: the square root of the mean squared difference
RMS = np.sqrt(np.mean(L2_diff))
print("RMS: {}".format(np.around(RMS, decimals=6)))
plt.scatter(results_scaled[0,:], L2_diff, s=10, alpha=0.2)
plt.ylim((0,0.2))
plt.xlim((0,1))
```
# Log-likelihood Test
```
# log likelihood function for toy dataset
def toy_data_log_prob_adjust_x(t, x, var=0.09):
    lp_list = []
    for i in tqdm(range(len(t))):
        # torch's Normal takes the standard deviation, so take the square root of the variance
        sigma = float(np.sqrt(var * (x[i] ** 2)))
        f = torch.distributions.normal.Normal(0, sigma)
        lp_list.append(f.log_prob(t[i]))
    return lp_list
# calculation of loglikelihood
LLp = sum(toy_data_log_prob_adjust_x(diff, pred_scaled_x))
print("Log-likelihood of predicted targets: {}".format(LLp))
```
<h1 align="center">TensorFlow Neural Network Lab</h1>
In this lab we apply the TensorFlow basics to build a model that recognizes letters of the alphabet.
It is our first hands-on exercise with TensorFlow, so there is a lot to look forward to.
The data we will use is <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, a collection of the letters A-J rendered in a variety of typefaces.
<img src="image/notmnist.png">
The image above shows some of the letters we will train on. After training, we will also compare against the test data.
Today's goal is to predict the test set with at least 80% accuracy.
So, let's jump in!
First, we import the required modules.
If the message ``All modules imported.`` is printed, the imports succeeded.
```
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
```
The notMNIST dataset consists of about 500,000 images, which is too large to handle at once.
We therefore work with a subset of 15,000 images per label (A-J).
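Per-label subsampling of this kind can be sketched in numpy (array sizes here are hypothetical stand-ins — the lab itself simply downloads a pre-made subset):

```python
import numpy as np

# Hypothetical stand-ins: 1,000 images, exactly 100 per label (the real lab uses 15,000 per label)
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 100)
features = rng.random((1000, 784))

per_label = 50
# for each label, draw a fixed number of row indices without replacement
keep = np.concatenate([
    rng.choice(np.flatnonzero(labels == c), size=per_label, replace=False)
    for c in range(10)
])
subset_features, subset_labels = features[keep], labels[keep]
print(subset_features.shape)  # (500, 784)
```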
```
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
```
<img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
## Problem 1
The first task is to normalize the training/test data features to make learning easier.
The original notMNIST image data is [grayscale](https://en.wikipedia.org/wiki/Grayscale), so each pixel takes a value between 0 and 255.
Implement `normalize_grayscale()` to apply Min-Max scaling to the range `a=0.1`, `b=0.9`.
After scaling, every pixel value of the input data lies between 0.1 and 0.9.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
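With grayscale pixels, $X_{\min}=0$ and $X_{\max}=255$, so the formula reduces to $X' = 0.1 + 0.8X/255$. A small numpy sketch of the general formula (function and parameter names are illustrative):

```python
import numpy as np

def min_max_scale(x, a=0.1, b=0.9, x_min=0.0, x_max=255.0):
    # X' = a + (X - X_min) * (b - a) / (X_max - X_min)
    return a + (x - x_min) * (b - a) / (x_max - x_min)

print(min_max_scale(np.array([0.0, 127.5, 255.0])))  # [0.1 0.5 0.9]
```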
```
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
    Normalize the image data to the range [0.1, 0.9] using Min-Max scaling.
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
return 0.1 + (((image_data)*0.8) / 255)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
```
# Checkpoint
All results so far have been saved to a pickle file.
You can resume from this point the next time you start working.
Run the code below to reload all the data and modules.
```
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
```
## Problem 2
Now let's use TensorFlow in earnest.
We will build a simple model consisting of an input layer and an output layer.
The diagram below gives a sense of the model we need to build.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
The input images are flattened into feature vectors of $28 \times 28 = 784$ values.
The output is ten 0-or-1 values, one for each label.
You could add hidden layers if you wanted, but to keep this project simple we stick to a single-layer network.
For the neural network that learns the data, we need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- `features`
- Placeholder tensor for feature data (`train_features`/`valid_features`/`test_features`)
- `labels`
- Placeholder tensor for label data (`train_labels`/`valid_labels`/`test_labels`)
- `weights`
- Variable Tensor with random numbers from a truncated normal distribution.
  - See the <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">`tf.truncated_normal()` documentation</a> for more information.
- `biases`
- Variable Tensor with all zeros.
  - See the <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros">`tf.zeros()` documentation</a> for more information.
```
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
# Softmax
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
```
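The softmax and cross-entropy steps above can be sketched in plain numpy (a numerically-stabilized version, for illustration only — the lab itself uses the TensorFlow ops):

```python
import numpy as np

def softmax(logits):
    # subtract the row-wise max so exp() cannot overflow
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, one_hot):
    # mean over the batch of -sum(y * log(p))
    return float(-np.mean(np.sum(one_hot * np.log(probs), axis=1)))

logits = np.array([[2.0, 1.0, 0.1]])
one_hot = np.array([[1.0, 0.0, 0.0]])
p = softmax(logits)
loss = cross_entropy(p, one_hot)
```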
<img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
## Problem 3
Now let's tune two important training hyperparameters: **Epochs** and **Learning Rate**.
Each configuration below offers several options for these parameters.
Among the settings, choose the one that gives the best accuracy.
Parameter configurations:
Configuration 1
* **Epochs:** 1
* **Learning Rate:**
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* **Epochs:**
* 1
* 2
* 3
* 4
* 5
* **Learning Rate:** 0.2
The code below plots the loss and accuracy graphs,
so you can see how well the neural network performs.
```
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
```
## Test
Now let's test the model.
Testing shows how well the model would perform in the real world.
Let's reach an accuracy of at least 80%!
```
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
```
# Multiple layers
Good job! You built a one layer TensorFlow network! However, you might want to build more than one layer. This is deep learning after all! In the next section, you will start to satisfy your need for more layers.
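As a rough sketch of what an extra hidden layer adds — an affine transform plus a ReLU before the output layer — here is the forward pass in plain numpy (sizes are illustrative, not the lab's TensorFlow code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((32, 784))             # a batch of flattened images

# hidden layer: affine transform followed by ReLU
W1, b1 = rng.standard_normal((784, 128)) * 0.01, np.zeros(128)
h = np.maximum(0.0, x @ W1 + b1)

# output layer: affine transform producing one logit per class
W2, b2 = rng.standard_normal((128, 10)) * 0.01, np.zeros(10)
logits = h @ W2 + b2
print(logits.shape)  # (32, 10)
```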
## Figure3_Geographical distribution of data in the United States
```
import os
import pickle
import time
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
#importing basemap
import mpl_toolkits
mpl_toolkits.__path__.append('C:/Users/hp/Anaconda3/pkgs/basemap-1.2.0-py37h0354792_4/Lib/site-packages/mpl_toolkits/')
os.environ["PROJ_LIB"] = "C:\\Users\\hp\\Anaconda3\\pkgs\\proj4-5.2.0-ha925a31_1\\Library\\share";
from mpl_toolkits.basemap import Basemap
df_2013 = pd.read_pickle(os.path.join('C:\\Users\\hp\\Desktop\\FINAL YEAR PROJECT\\DATASET','df_2013_features.df'))
df_2014 = pd.read_pickle(os.path.join('C:\\Users\\hp\\Desktop\\FINAL YEAR PROJECT\\DATASET','df_2014_features.df'))
fig, ax = plt.subplots(2,1,figsize=(14, 20))
mymap = Basemap(llcrnrlon=-119, llcrnrlat=22, urcrnrlon=-80,
urcrnrlat=49, projection='lcc', lat_1=33, lat_2=45,
lon_0=-105, area_thresh=10000,
resolution = 'l',ax=ax[0])
for idx,df in enumerate([df_2013,df_2014]):
mymap = Basemap(llcrnrlon=-119, llcrnrlat=22, urcrnrlon=-80,
urcrnrlat=49, projection='lcc', lat_1=33, lat_2=45,
lon_0=-105, area_thresh=10000,
resolution = 'l',ax=ax[idx])
lng = df['longitude'].tolist()
lat = df['latitude'].tolist()
yld = df['yield'].tolist()
x,y = mymap(lng, lat)
im1 = mymap.scatter(x, y, c=yld, vmin=8, vmax=80, cmap=mpl.cm.get_cmap('rainbow'), zorder=2)
mymap.drawparallels(np.arange(25,65,20))
mymap.drawmeridians(np.arange(-120,-40,20))
mymap.drawcoastlines()
label = ['2013','2014']
ax[idx].annotate(label[idx], xy=(0.02, 1.02), fontsize=12,xycoords='axes fraction')
cax1 = fig.add_axes( [0.92, 0.3, 0.03, 0.4])
cbar = plt.colorbar(im1,cax=cax1, orientation='vertical', extend='both', use_gridspec=True)
cbar.set_label('Records in dataset')
```
## Figure4_Scatter matrix for the sample features
```
df_2013 = pd.read_csv('wheat.csv')
df_2013.head()
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(8, 8))
ax = sns.pairplot(df_2013)
plt.show()
```
## Figure5_Correlation matrix
```
# Plot correlation matrix - Spearman
# Adjust labels to include all column names
fig, ax = plt.subplots(figsize=(10, 10))
# condition on which columns to carry over
corr = df_2013.corr(method='spearman')
plt.imshow(corr,interpolation='none')
# Add colorbar
plt.colorbar()
plt.locator_params(axis='x', nticks=9)
labels = corr.columns
# Set custom tick labels - these two lines are where the deprecation warning originates
ax.set_xticklabels([''] + labels,rotation=90)
ax.set_yticklabels([''] + labels)
# Move x-axis labels to top
ax.xaxis.set_ticks_position('top')
# Set axis label size
ax.tick_params(labelsize=18)
# Adjust tick position x-axis
start, end = ax.get_xlim()
stepsize = 1.0
ax.set_xticks(np.arange(start+0.5,end+0.5, stepsize))
ax.set_xlim(start,end)
# Adjust tick position yaxis
start, end = ax.get_ylim()
stepsize = 1.0
ax.set_yticks(np.arange(end+0.5,start+0.5, stepsize))
ax.set_ylim(start,end)
# Add title
ax.set_title('Correlation matrix', fontsize=22,y=-0.1)
# Save figure
# plt.savefig(os.path.join(supp_dir,'correlation_matrix.png'), bbox_inches='tight')
```
## Figure6_Learning curve of stacked regressor vs random forest
```
# http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
#RANDOM FOREST
#SCORING FUNCTION
from sklearn.model_selection import cross_val_score
cv_k = 5
cv_scoring = 'r2'
from sklearn.model_selection import KFold
kf = KFold(n_splits=cv_k, shuffle=True)
X, y = X_train, y_train
title = "Learning Curve (Random Forest)"
est = RandomForestRegressor(n_estimators=300, n_jobs=-1, max_features= 'sqrt',
min_samples_split=10, min_samples_leaf = 2, max_depth=7 )
plot_learning_curve(est, title, X, y, ylim=(0, 1.0), cv=kf, n_jobs=-1)
plt.show()
X, y = X_train, y_train
title = "Learning Curve (Stacked Regressor)"
estimators = [
('Random Forest', RandomForestRegressor(n_estimators=300, n_jobs=-1, max_features= 'sqrt',min_samples_split=10, min_samples_leaf = 5, max_depth=8)),
('Gradient boosted Regression1' , GradientBoostingRegressor(n_estimators=100, learning_rate=0.01, max_depth=5, random_state=0, loss='ls')),
('Gradient boosted Regression2' , GradientBoostingRegressor(n_estimators=100, learning_rate=0.01, max_depth=5, random_state=0, loss='ls')),
('Gradient boosted Regression' , GradientBoostingRegressor(n_estimators=100, learning_rate=0.01, max_depth=5, random_state=0, loss='ls'))
]
#Stack
reg = StackingRegressor(estimators=estimators,final_estimator=RandomForestRegressor(n_estimators=10,random_state=42))
plot_learning_curve(reg, title, X, y, ylim=(0, 1.0), cv=kf, n_jobs=-1)
plt.show()
```
## Figure7_Performance of stacked generalization regressor
```
X, y = X_train, y_train
estimators = [
('Random Forest', RandomForestRegressor(n_estimators=300, n_jobs=-1, max_features= 'sqrt',min_samples_split=10, min_samples_leaf = 5, max_depth=8)),
('Gradient boosted Regression1' , GradientBoostingRegressor(n_estimators=100, learning_rate=0.01, max_depth=5, random_state=0, loss='ls')),
('Gradient boosted Regression2' , GradientBoostingRegressor(n_estimators=100, learning_rate=0.01, max_depth=5, random_state=0, loss='ls')),
('Gradient boosted Regression' , GradientBoostingRegressor(n_estimators=100, learning_rate=0.01, max_depth=5, random_state=0, loss='ls'))
]
# Stack the base estimators, fit on the training data, then predict on the held-out test set
reg = StackingRegressor(estimators=estimators, final_estimator=RandomForestRegressor(n_estimators=10, random_state=42))
reg.fit(X, y)
y_pred = reg.predict(X_test)
fig, ax = plt.subplots(figsize=(10, 10))
plt.scatter(y_test,y_pred, color='green')
plt.plot([0,80],[0,80],'--', color='k')
ax.set_xlim(0,80)
ax.set_ylim(0,80)
ax.set_title('Model performance',fontsize=18)
# ax.xlabels().set_fontsize(20)
ax.set_xlabel('Observed yield', fontsize=18)
ax.set_ylabel('Predicted yield', fontsize=18)
ax.tick_params(labelsize=18)
ratio_gbt = y_test/y_pred
```
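Alongside the observed-vs-predicted scatter plot, summary metrics make the comparison concrete. A small sketch computing RMSE and R^2 from their definitions (the arrays here are hypothetical):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    # RMSE and R^2 computed directly from their definitions
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return rmse, 1.0 - ss_res / ss_tot

y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([12.0, 18.0, 33.0, 38.0])
rmse, r2 = regression_metrics(y_true, y_pred)
print(round(rmse, 3), round(r2, 3))  # 2.291 0.958
```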
# ---------- Declare Constants ----------
```
# Hongjun Wu
# 20180723
# A script that recommends stock based on data and conditions given.
# Import Statement
from selenium import webdriver
from bs4 import BeautifulSoup
from decimal import Decimal
from selenium.common.exceptions import ElementNotVisibleException
import turicreate as tc
import datetime
from zhon.hanzi import punctuation
import os
# Natural language processing framework (LTP)
import pyltp
from pyltp import SentenceSplitter
from pyltp import Segmentor
from pyltp import Postagger
from pyltp import NamedEntityRecognizer
from pyltp import Parser
from pyltp import SementicRoleLabeller
LTP_DATA_DIR = '../../../LTP_data_v3.4.0/'  # path to the LTP model directory
pos_model_path = os.path.join(LTP_DATA_DIR, 'pos.model')  # part-of-speech tagging model, `pos.model`
cws_model_path = os.path.join(LTP_DATA_DIR, 'cws.model')  # word segmentation model, `cws.model`
ner_model_path = os.path.join(LTP_DATA_DIR, 'ner.model')  # named entity recognition model, `ner.model`
par_model_path = os.path.join(LTP_DATA_DIR, 'parser.model')  # dependency parsing model, `parser.model`
srl_model_path = os.path.join(LTP_DATA_DIR, 'srl')  # semantic role labelling model directory `srl` (a directory, not a single file)
# root directory for saved data files
fileRoot = './SelectedData'
# holds the individual bound variables
var_list = []
# dictionary mapping board codes to their URLs
code_dict = {'a': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04471',
'b': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04931',
'c': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK05231',
'd': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK06991',
'e': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07271',
'f': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04741',
'g': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK05381',
'h': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07311',
'i': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04781',
'j': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04791',
'k': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04561',
'l': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07331',
'm': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04241',
'n': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07321',
'o': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07361',
'p': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04801',
'q': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04761',
'r': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07381',
's': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04381',
't': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK05381',
'u': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04241',
'v': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07361',
'w': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04561',
'x': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK09101',
'y': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07401',
'z': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04221',
'aa': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04541',
'bb': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07391',
'cc': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04571',
'dd': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04641',
'ee': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK05451',
'ff': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07281',
'gg': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04851',
'hh': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07291',
'ii': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07351',
'jj': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04511',
'kk': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04751',
'll': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04811',
'mm': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07251',
'nn': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07391',
'oo': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07261',
'pp': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04731',
'qq': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04501',
'rr': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04281',
'ss': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04701',
'tt': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04571',
'uu': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07311',
'vv': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04291',
'ww': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07301',
'xx': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK05391',
'yy': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK05371',
'zz': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04861',
'aaa': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04841',
'bbb': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07371',
'ccc': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04471',
'ddd': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04591',
'eee': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04651',
'fff': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK07331',
'ggg': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04331',
'hhh': 'http://quote.eastmoney.com/center/boardlist.html#boards-BK04771'}
name_dict = {'็ตๅญไฟกๆฏ': 'a',
'ๆฐ่ฝๆบ': 'b',
'ๆฐๆๆ': 'c',
'ๅจๆฏๆๆฏ': 'd',
'ๅป็่กไธ': 'e',
'ไฟ้ฉ': 'f',
'ๅๅทฅ่กไธ': 'g',
'ๅ่ฅ่กไธ': 'h',
'ๆ่ฒ้ๅฑ': 'i',
'้ข้่กไธ': 'j',
'ๅฎถ็ต่กไธ': 'k',
'ๅ่ฃๆๆ': 'l',
'ๆฐดๆณฅๅปบๆ': 'm',
'่ดต้ๅฑ': 'n',
'็ตไฟก่ฟ่ฅ': 'o',
'่ชๅคฉ่ช็ฉบ': 'p',
'ๆจไธๅฎถๅท': 'q',
'ๅคๅ้่': 'r',
'้ฃๅ้ฅฎๆ': 's',
'ๅๅทฅ่กไธ': 't',
'ๆฐดๆณฅๅปบๆ': 'u',
'็ตไฟก่ฟ่ฅ': 'v',
'ๅฎถ็ต่กไธ': 'w',
'ไธ็จ่ฎพๅค': 'x',
'ๆๆไผ้ฒ': 'y',
'ไบค่ฟ็ฉๆต': 'z',
'ๅก่ถๅถๅ': 'aa',
'้ๅฑๅถๅ': 'bb',
'่พ้็ตๆฐ': 'cc',
'็ณๆฒน่กไธ': 'dd',
'ๆบๆขฐ่กไธ': 'ee',
'็ฏไฟๅทฅ็จ': 'ff',
'ๆๆธธ้ๅบ': 'gg',
'่น่ถๅถ้ ': 'hh',
'ๅฎ้ฒ่ฎพๅค': 'ii',
'ๆฟๅฐไบง': 'jj',
'้ถ่ก': 'kk',
'ๆฑฝ่ฝฆ่กไธ': 'll',
'่ฃไฟฎ่ฃ้ฅฐ': 'mm',
'้ๅฑๅถๅ': 'nn',
'ๅญๆๅทฅ็จ': 'oo',
'ๅธๅไฟกๆ': 'pp',
'ๆธฏๅฃๆฐด่ฟ': 'qq',
'็ตๅ่กไธ': 'rr',
'้ ็บธๅฐๅท': 'ss',
'่พ้็ตๆฐ': 'tt',
'ๅ่ฅ่กไธ': 'uu',
'ไบค่ฟ่ฎพๅค': 'vv',
'ๅ่ฏๅฝ่ฏ': 'ww',
'็ปผๅ่กไธ': 'xx',
'ๆๆ่กไธ': 'yy',
'ๆๅไผ ๅช': 'zz',
'ๅฝ้่ดธๆ': 'aaa',
'่ฝฏไปถๆๅก': 'bbb',
'็ตๅญไฟกๆฏ': 'ccc',
'็ตๅญๅไปถ': 'ddd',
'ๅป่ฏๅถ้ ': 'eee',
'ๅ่ฃๆๆ': 'fff',
'ๅ็ง้ฅฒๆธ': 'ggg',
'้ฟ้่กไธ': 'hhh'}
codename_dict = {'a': '็ตๅญไฟกๆฏ',
'b': 'ๆฐ่ฝๆบ',
'c': 'ๆฐๆๆ',
'd': 'ๅ
จๆฏๆๆฏ',
'e': 'ๅป็่กไธ',
'f': 'ไฟ้ฉ',
'g': 'ๅๅทฅ่กไธ',
'h': 'ๅ่ฅ่กไธ',
'i': 'ๆ่ฒ้ๅฑ',
'j': '้ข้่กไธ',
'k': 'ๅฎถ็ต่กไธ',
'l': 'ๅ
่ฃ
ๆๆ',
'm': 'ๆฐดๆณฅๅปบๆ',
'n': '่ดต้ๅฑ',
'o': '็ตไฟก่ฟ่ฅ',
'p': '่ชๅคฉ่ช็ฉบ',
'q': 'ๆจไธๅฎถๅ
ท',
'r': 'ๅคๅ
้่',
's': '้ฃๅ้ฅฎๆ',
't':'ๅๅทฅ่กไธ',
'u':'ๆฐดๆณฅๅปบๆ',
'v':'็ตไฟก่ฟ่ฅ',
'w':'ๅฎถ็ต่กไธ',
'x':'ไธ็จ่ฎพๅค',
'y':'ๆๆไผ้ฒ',
'z':'ไบค่ฟ็ฉๆต',
'aa':'ๅก่ถๅถๅ',
'bb':'้ๅฑๅถๅ',
'cc':'่พ้
็ตๆฐ',
'dd':'็ณๆฒน่กไธ',
'ee':'ๆบๆขฐ่กไธ',
'ff':'็ฏไฟๅทฅ็จ',
'gg':'ๆ
ๆธธ้
ๅบ',
'hh':'่น่ถๅถ้ ',
'ii':'ๅฎ้ฒ่ฎพๅค',
'jj':'ๆฟๅฐไบง',
'kk':'้ถ่ก',
'll':'ๆฑฝ่ฝฆ่กไธ',
'mm':'่ฃไฟฎ่ฃ้ฅฐ',
'nn':'้ๅฑๅถๅ',
'oo':'ๅญๆๅทฅ็จ',
'pp':'ๅธๅไฟกๆ',
'qq':'ๆธฏๅฃๆฐด่ฟ',
'rr':'็ตๅ่กไธ',
'ss':'้ ็บธๅฐๅท',
'tt':'่พ้็ตๆฐ',
'uu':'ๅ่ฅ่กไธ',
'vv':'ไบค่ฟ่ฎพๅค',
'ww':'ๅ่ฏๅฝ่ฏ',
'xx':'็ปผๅ่กไธ',
'yy':'ๆๆ่กไธ',
'zz':'ๆๅไผ ๅช',
'aaa':'ๅฝ้่ดธๆ',
'bbb':'่ฝฏไปถๆๅก',
'ccc':'็ตๅญไฟกๆฏ',
'ddd':'็ตๅญๅไปถ',
'eee':'ๅป่ฏๅถ้ ',
'fff':'ๅ่ฃๆๆ',
'ggg':'ๅ็ง้ฅฒๆธ',
'hhh':'้ฟ้่กไธ'}
bankuai_weight_dict = {'่ฝฏไปถๆๅก': 10,
'่ชๅคฉ่ช็ฉบ': 10, # ๅ่ฎฎ
'ๆฐ่ฝๆบ': 9, # ๅ่ฎฎ
'ๆฐๆๆ': 9,
'ๆๆ่กไธ': 9, # ๅ่ฎฎ
'็ตๅญไฟกๆฏ': 8,
'ๆ่ฒ้ๅฑ': 4,
'?': 3, # ่ฟๅฅ
'้ข้่กไธ': 2}
```
# ----------ๅ่ฝๆงๅฝๆฐ----------
```
# ๅ่ฝๆงๅฝๆฐ
# ๆฌข่ฟ็้ข
def greetings():
print('+-------------------ๆฌข่ฟ็้ข-------------------+')
print('| ่ฏท้ๆฉๆจ่ฆๆง่ก็ๆไฝ: |')
print('| 1. ่พๅฅaๆฅๅๆๆๅฎๆฟๅ |')
print('| 2. ่พๅฅsๆฅๆๅฏปๆฟๅไปฃ็ |')
print('| 3. ่พๅฅlๆฅๆพ็คบๆๆไปฃ็ |')
print('| 4. ่พๅฅxๆฅๆพ็คบๆๆ้่ก |')
print('| 5. ่พๅฅjๆฅๅ ่ฝฝไปฅๅๅๆๅฎๆ่ก็ฅจๆฐๆฎ |')
print('| 6. ่พๅฅrๆฅๅ ่ฝฝๅๆๅ่ก็ฅจๆๅๆฐๆๅบ |')
print('| 7. ่พๅฅnๆฅๅ ่ฝฝไปฅๅๅๆๅฎๆๆฐ้ปๆฐๆฎ |')
print('| ๏ผ่พๅฅq้ๅบ๏ผ |')
print('+---------------------------------------------+')
choice = str(input('ๅฝไปค:'))
return choice
# ๆฑๅญ่ฝฌๆฐๅญ
def smartMultiply(string):
if string[len(string) - 1:len(string)] == 'ไธ':
string = Decimal(string[0:len(string) - 1])
string = float(string) * 10000
elif string[len(string) - 1:len(string)] == 'ไบฟ':
string = Decimal(string[0:len(string) - 1])
string = float(string) * 100000000
elif string[len(string) - 1:len(string)] == '%':
string = Decimal(string[0:len(string) - 1])
string = float(string) * 0.01
else:
string = float(string)
return string
# ๆฐๅญ่ฝฌๆฑๅญ
def smartDivide(number, isPercent):
if isPercent:
return str(1.0 * number / 0.01) + '%'
elif number > 100000000:
return str(1.0 * number / 100000000) + 'ไบฟ'
elif number > 10000:
return str(1.0 * number / 10000) + 'ไธ'
else:
return str(number)
# searchCode() ็จไบๆๅฏป่ก็ฅจไปฃ็
def searchCode():
isDone = False
print('+--------------่พๅฅๆณ่ฆๆฅๆพ็ๆฟๅๅ--------------+')
while isDone == False:
user_input = str(input('ๆ็ดขๆฟๅๅ:'))
if user_input == 'quit':
print('็จๆทๅทฒๅๆถๆไฝ๏ผ')
break
try:
print(user_input + 'ไปฃ็ ๆฏ๏ผ' + name_dict[user_input])
isDone = True
except KeyError:
print('ๆฟๅไธๅญๅจ!')
# ้ๅฏUI (ๅพ่ฎฎ)
user_interface(SELECTED_DATA)
# printAllBankuai()
def printAllBankuai():
print('----------------ไปฅไธๆฏๆๆๆฟๅไปฃ็ ----------------')
element_list = [(k, codename_dict[k]) for k in sorted(codename_dict.keys())]
counter = 0
while counter < len(element_list):
print(element_list[counter][0] + ' - ' + element_list[counter][1])
counter += 1
print('-----------------------------------------------')
# ้ๅฏUI
user_interface(SELECTED_DATA)
# ๆพ็คบๆๆ็้่ก
def showAllSelected(xuangu):
# print(SELECTED_DATA)
if xuangu[0]['name'] != 'ๆฐๆฎไธๅญๅจ':
# xuangu.show()
print(xuangu)
else:
print('ๆฒกๆๅๆๅฎๆ็ๆฐๆฎ๏ผ')
user_interface(xuangu)
return xuangu # ่ฟไธช้ๅธธไธ็กฎๅฎ
# ็จไบๅ ่ฝฝSFrame (ไน่ฎธ้่ฆๅไธไธชreturn ๅพ่ฎฎ)
def parseSFrame():
fileName = input('่พๅฅๆไปถๅ๏ผ')
filePath = './SelectedData/' + fileName + '/'
loaded_data = tc.SFrame(data=filePath)
answer = input('่ฆๆพ็คบ่ฝฝๅฅ็SFrameๅ๏ผ(y/n)')
if answer == 'y':
# loaded_data.materialize()
print(loaded_data)
# loaded_data.show()
return loaded_data
# Literally show SFrame๏ผๅคงๆฆๆฒกๅฅ็จ๏ผ
def showSFrame(SFrame):
SFrame.show()
# ๅๆญฅ็ญ้ๅๆ็จๅบ
def analyze_stock(SFrame):
SFrame = analysis_turnover_rate(SFrame, var_list[0])
SFrame = analysis_volume_rate(SFrame, var_list[1])
return SFrame
# ่ฟๅๆๆๆขๆ็ๅคงไบ5%็่ก
def analysis_turnover_rate(SFrame, turnover_rate):
return SFrame[SFrame['turnover_rate'] > turnover_rate]
# ่ฟๅๆๆ้ๆฏๅคงไบ30%็่ก
def analysis_volume_rate(SFrame, volume_rate):
return SFrame[ SFrame['volume_rate'] > volume_rate]
# ไธไธชไป้กต้ข่ทๅ้กตๆฐ็ๅฝๆฐ
def getPageNumber(bs):
all_buttons = bs.findAll(class_="paginate_button")
if len(all_buttons) == 2:
return 1 # ๅค็ๅชๆไธ้กต็ๆๅต
else:
return len(all_buttons) - 2 # ไธไธ้กตๅGoๆ้ฎ
# ๆๆฐๆฎไธญ็ - ๆนไธบ 0
def noSlash(str):
if str == '-':
return '0'
else:
return str
# ็จไบไฟๅญSFrame
def saveSFrame(SFrame, fileRoot):
# ไฟๅญๆฐๆฎ
date = '20' + str(datetime.datetime.now().strftime("%y%m%d%p"))
filepath = fileRoot + '/' + str(date) + '/'
SFrame.save(filepath)
print('ๆๅไฟๅญๆฐๆฎๆไปถ๏ผๆฐๆฎ่ทฏๅพ๏ผ' + filepath)
# ๆๅฐๆถ้ดๆณ
print('็จๅบ่ฟ่กๆถ้ดๆณ๏ผ20'
+ str(datetime.datetime.now().strftime("%y")) + 'ๅนด'
+ str(datetime.datetime.now().strftime("%m")) + 'ๆ'
+ str(datetime.datetime.now().strftime("%d")) + 'ๆฅ'
+ str(datetime.datetime.now().strftime("%H")) + 'ๆถ'
+ str(datetime.datetime.now().strftime("%M")) + 'ๅ'
+ str(datetime.datetime.now().strftime("%S")) + '็ง')
# ็จไบไฟ็xไฝๅฐๆฐ
def cutDecimal(x, string):
return string[0:x+2]
def balanceSArray(number, SArray_sum):
return number / SArray_sum
# ็จไบๆง่กnormalize
def normalizeColumns(SFrame):
waitingList = ['PERation', 'amplitude', 'change', 'close',
'income_increase', 'profit_increase', 'percent_chg',
'turn_volume', 'turnover_rate', 'volume', 'volume_rate']
for element in waitingList:
sumSArray = sum(SFrame[element])
# print('sumSArray', sumSArray)
newSArrayName = 'balanced_' + element
if sumSArray != 0:
SFrame[newSArrayName] = SFrame[element].apply(lambda x: x / sumSArray)
else:
SFrame[newSArrayName] = SFrame[element].apply(lambda x: x * 0)
return SFrame
# ็จไบ่ฟๅๆๅฎๆฟๅ็ๅ ๅ้กน็ฎ
def bankuaiScore(bankuai):
if bankuai in bankuai_weight_dict.keys():
return bankuai_weight_dict[bankuai]
else:
return 1
# ็จไบ่ฎก็ฎๆฏๅช่ก็ฅจ็score
def makeSCore(SFrameRow):
score = 0
score += SFrameRow['balanced_profit_increase'] * 0.2 # ๅๅฉๆถฆ
score += SFrameRow['balanced_income_increase'] * 0.1 # ๆถๅฅ
score += SFrameRow['balanced_turnover_rate'] * 0.1 # ๆขๆ็
score += SFrameRow['balanced_amplitude'] * 0.1 # ้ๆฏ
score += SFrameRow['balanced_PERation'] * 0.1 # ๅธ็็
score += bankuaiScore(SFrameRow['bankuai']) * 0.1 # ่กไธ
return score * 100
# ็จไบๆพ็คบๆๅๅๅๆฐ
def sortSelectStocks(SFrame, columnString):
if columnString == '':
columnString = 'name,code,score'
columnList = []
index = 0
firstIndex = 0
while index < len(columnString):
if columnString[index] == ',':
columnList.append(columnString[firstIndex: index])
firstIndex = index + 1
elif index == len(columnString) - 1:
columnList.append(columnString[firstIndex: index + 1])
index += 1
# print(columnList)
visual_SFrame = SFrame[columnList]
return visual_SFrame
```
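The index-walking loop inside `sortSelectStocks` re-implements comma splitting by hand. Python's built-in `str.split` produces the same column list in one call; the sketch below reproduces only the manual parser (the SFrame column selection itself is unchanged) and checks the two against each other:

```python
def parse_columns(column_string):
    """Hand-rolled comma splitter, mirroring the loop in sortSelectStocks."""
    if column_string == '':
        column_string = 'name,code,score'
    column_list = []
    index = 0
    first_index = 0
    while index < len(column_string):
        if column_string[index] == ',':
            column_list.append(column_string[first_index:index])
            first_index = index + 1
        elif index == len(column_string) - 1:
            column_list.append(column_string[first_index:index + 1])
        index += 1
    return column_list

# The built-in produces the same list for well-formed input:
assert parse_columns('name,code,score') == 'name,code,score'.split(',')
assert parse_columns('') == ['name', 'code', 'score']
```

Using `columnString.split(',')` directly would shorten `sortSelectStocks` without changing its behavior for non-empty, well-formed input.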
# ----------ๆง่กๅๅฝๆฐ----------
```
def user_interface(xuangu):
# print('ๆๆญฃๅจ่ฟๅฅuser_interface', len(xuangu), xuangu['name'])
choice = greetings()
if choice == 'a':
done_xuangu = inputCode(xuangu)
done_xuangu = done_xuangu[1:len(done_xuangu)]
saveSFrame(done_xuangu, fileRoot)
# print('ๆๅจuser_interface', done_xuangu)
return done_xuangu
elif choice == 's':
searchCode()
elif choice == 'l':
printAllBankuai()
elif choice == 'x':
xuangu = showAllSelected(xuangu)
elif choice == 'j':
xuangu = parseSFrame()
user_interface(xuangu)
elif choice == 'r':
selected_stocks = sortSelectStocks(SELECTED_DATA, input('่พๅฅ้่ฆๆพ็คบ็ๅๅ็จ,้ๅผไธ่ฆ็ฉบๆ ผ'))
print(selected_stocks)
elif choice == 'q':
exit()
else:
user_interface(xuangu) # ่ฟ่พน่ฟไธชSELECTED_DATAๆฏ็็SELECTED_DATAๅ ๆ่ฎคไธบไธๆฏ
return xuangu
# ๅๆๆฟๅไธป็จๅบ
def inputCode(xuangu):
# print('ๆๆญฃๅจ่ฟๅฅinputCode', len(xuangu), xuangu['name'])
print('+-------------------ๅๆๆฟๅ--------------------+')
print('ๅจไธ้ข่พๅฅๆงๅถๅ้๏ผ ')
turnover_rate = input('ๆขๆ็ๅคงไบ๏ผ้ป่ฎค0.05ๅฏนๅบ5%๏ผ๏ผ')
if turnover_rate == '':
print('ๆขๆ็่ฎพไธบ้ป่ฎค0.05๏ผ')
turnover_rate = 0.05
var_list.append(float(turnover_rate))
volume_rate= input('้ๆฏๅคงไบ๏ผ้ป่ฎค0.3ๅฏนๅบ30%๏ผ๏ผ')
if volume_rate == '':
print('้ๆฏ่ฎพไธบ้ป่ฎค0.3๏ผ')
volume_rate = 0.3
var_list.append(float(volume_rate))
income_rate = input('่ฅไธๆถๅฅๅคงไบ๏ผ้ป่ฎค0.3ๅฏนๅบ30%๏ผ๏ผ')
if income_rate == '':
print('่ฅไธๆถๅฅ่ฎพไธบ้ป่ฎค0.3๏ผ')
income_rate = 0.3
var_list.append(float(income_rate))
benefit_rate = input('ๅๅฉๆถฆๅคงไบ๏ผ้ป่ฎค0.3ๅฏนๅบ30%๏ผ:')
if benefit_rate == '':
print('ๅๅฉๆถฆ่ฎพไธบ้ป่ฎค0.3๏ผ')
benefit_rate = 0.3
var_list.append(float(benefit_rate))
print('+----------------------------------------------+')
print('|่พๅฅๆฟๅไปฃๅท๏ผ่พๅฅquit้ๅบ่ณไธป้กต้ข, allๅๆๆๆๆฟๅ๏ผ|')
print('-----------------------------------------------')
code_name = str(input('ไปฃๅท:'))
if code_name == 'all':
for each_bk in code_dict:
bk_name = codename_dict[each_bk]
# xuangu = xuangu.append(makeRecommend(xuangu, code_dict[each_bk], bk_name))
xuangu = makeRecommend(xuangu, code_dict[each_bk], bk_name)
elif code_name == 'quit':
user_interface(xuangu)
else:
xuangu = makeRecommend(xuangu, code_dict[code_name], codename_dict[code_name])
# print(xuangu)
# showSFrame(xuangu)
# print('ๆๆญฃๅจ็ฆปๅผinputCode', len(xuangu), xuangu['name'])
return xuangu
# ็จไบๅฎๆ็ญ้ไปฃ็ ็ๅจ่ฟ็จ
def makeRecommend(xuangu, url, bk_name):
# print('ๆๆญฃๅจ่ฟๅฅmakeRecommend', len(xuangu), xuangu['name'])
# ๅๅปบๅไธช็ฉบSFrame๏ผไปฅๅ ไฝ่กๅผๅคด
all_data = tc.SFrame({'code': ['000000'], 'name': ['ๅๅฉๅๅฉ'],
'close': [0.0], 'percent_chg': [0.0],
'change': [0.0], 'volume': [0.0], 'turn_volume': [0.0], 'amplitude': [0.0],
'high': [0.0], 'low': [0.0],
'now_open': [0.0], 'previous_close': [0.0], 'volume_rate': [0.0],
'turnover_rate': [0.0], 'report_url': ['http://www.bilibili.com'], 'PERation': [0.0]})
# ่ทๅไฟกๆฏ
all_data = makeData(url, all_data)
# ๅๆญฅ็ญ้
analyze_data = analyze_stock(all_data)
# ๆ็ปๆจ่
print('---------------------' + bk_name + '---------------------')
xuangu = recommendStock(xuangu, analyze_data, bk_name)
# print('ๆๆญฃๅจ็ฆปๅผmakeRecommend', len(xuangu), xuangu['name'])
return xuangu
# ่ชๅจๅค็็ฒๆฐๆฎ็ไธป็จๅบ
def makeData(url, SFrame):
browser = webdriver.Chrome() # Get local session of chrome
# url = search_area[topic] # Example: '็ตๅญไฟกๆฏ'
browser.get(url) # Load page
browser.implicitly_wait(20) # ๆบ่ฝ็ญๅพ20็ง
# ็ฌฌไธๆฌก่ฎฟ้ฎๆถๅคๅฎ่ๅๆฐ้ๆฅๅณๅฎๆต่งๅคๅฐๆฌก่กจๆ ผ
bs = BeautifulSoup(browser.page_source, "lxml")
page_number = getPageNumber(bs)
# ๅพช็ฏๆต่ง้กต้ข็ดๅฐๆ้ๅฎๆฏๆๆtable
counter = 0
while counter < page_number:
SFrame = grabData(bs, SFrame)
try:
browser.find_element_by_id('main-table_next').click()
except ElementNotVisibleException:
print('Warning: ๆ ๆณ่ทๅพๆไบ็ ดๆ็ๆฐๆฎ.')
bs = BeautifulSoup(browser.page_source, "lxml")
counter += 1
SFrame = SFrame[1:len(SFrame)] # ๅ ๆๅ ไฝ็ฌฆ
SFrame = SFrame.unique()
return SFrame
# ไปไธไธช้ๆBeautifulSoup้กต้ข่งฃๆ่กจๆ ผๅนถๅญๅจ่ฟSFrame
def grabData(bs, SFrame):
# ่งฃๅบ่กจๆ ผ
table = bs.findAll(role='row')
table = table[7: len(table) - 1]
# ๅๆๆฏไธช่กจๆ ผ
counter = 0
while counter < len(table):
row_sframe = tc.SFrame({'code': [str(table[counter].find(class_=' listview-col-Code').string)],
'name': [str(table[counter].find(class_=' listview-col-Name').string)],
'close': [smartMultiply(noSlash(table[counter].find(class_=' listview-col-Close').string))],
'percent_chg': [smartMultiply(noSlash(table[counter].find(class_='listview-col-ChangePercent sorting_1').string))],
'change': [smartMultiply(noSlash(table[counter].find(class_=' listview-col-Change').string))],
'volume': [smartMultiply(noSlash(table[counter].find(class_=' listview-col-Volume').string))],
'turn_volume': [
smartMultiply(noSlash(table[counter].find(class_=' listview-col-Amount').string))],
'amplitude': [
smartMultiply(noSlash(table[counter].find(class_=' listview-col-Amplitude').string))],
'high': [smartMultiply(noSlash(table[counter].find(class_=' listview-col-High').string))],
'low': [smartMultiply(noSlash(table[counter].find(class_=' listview-col-Low').string))],
'now_open': [smartMultiply(noSlash(table[counter].find(class_=' listview-col-Open').string))],
'previous_close': [
smartMultiply(noSlash(table[counter].find(class_=' listview-col-PreviousClose').string))],
'volume_rate': [
smartMultiply(noSlash(table[counter].find(class_=' listview-col-VolumeRate').string))],
'turnover_rate': [
smartMultiply(noSlash(table[counter].find(class_=' listview-col-TurnoverRate').string))],
'report_url': [
'http://emweb.securities.eastmoney.com/f10_v2/FinanceAnalysis.aspx?type=web&code=sz' +
table[counter].find(class_=' listview-col-Code').string + '#lrb-0'],
'PERation':[
smartMultiply(noSlash(table[counter].find(class_=' listview-col-PERation').string))]
})
counter += 1
SFrame = SFrame.append(row_sframe)
return SFrame
# ๆๅฐ้ไธญ็่ก็ฅจ
def recommendStock(xuangu, SFrame, bankuai):
# print('ๆๆญฃๅจ่ฟๅฅrecommendStock', len(xuangu), xuangu['name'])
income_limit = var_list[2]
profit_limit = var_list[3]
counter = 0
while counter < len(SFrame):
result_list = getReport(SFrame[counter], bankuai, SFrame[counter], income_limit, profit_limit)
if result_list[0]:
print('่ก็ฅจๅ็งฐ๏ผ' + SFrame[counter]['name'])
print('่ก็ฅจไปฃ็ ๏ผ' + SFrame[counter]['code'])
print('ๆไบค้๏ผ' + smartDivide(SFrame[counter]['volume'], False))
print('ๆไบค้ข๏ผ' + smartDivide(SFrame[counter]['turn_volume'], False))
print('ๆไบค้ๆฏๅขๅน๏ผ' + smartDivide(float(cutDecimal(4, str(SFrame[counter]['amplitude']))), True))
print('ๆขๆ็๏ผ' + smartDivide(float(cutDecimal(4, str(SFrame[counter]['turnover_rate']))), True))
print('่ฅไธๆปๆถๅฅๅข้ฟ', smartDivide(float(cutDecimal(4, str(result_list[2]))), True))
print('ๅๅฉๆถฆๅข้ฟ', smartDivide(float(cutDecimal(4, str(result_list[3]))), True))
print('-------------------------------------------------')
# print(result_list[1])
xuangu = xuangu.append(result_list[1])
counter += 1
# print('ๆๆญฃๅจ็ฆปๅผrecommendStock', len(xuangu), xuangu['name'])
return xuangu
# ๆฅๆพๆฅ่กจ
def getReport(SFrame, bankuai, row, income_limit, profit_limit):
# ็ฌๅ็ฝ็ซๆบไปฃ็
url = row['report_url']
browser = webdriver.Chrome() # Get local session of chrome
browser.get(url) # Load page
soup = BeautifulSoup(browser.page_source, "lxml")
browser.close()
# ๆฃๆฅๆฏๅฆๆฅ่กจๆจกๆฟ้่ฏฏ
check_report = soup.findAll(id='stock_full_name123')
if len(check_report) != 0:
keyword = check_report[0]['value']
if keyword == '- - - -': # ็บ ๆญฃSFrame
temp_row = row # ่งฃๅณๆ ๆณๆพ็คบไธ่ฏ่ก็ฅจๆฅ่กจ็้ฎ้ข
temp_row['report_url'] = temp_row['report_url'][0:80] + 'sh' + temp_row['report_url'][82:94]
return getReport(SFrame, bankuai, temp_row, income_limit, profit_limit)
# ็ฒๅ ๅทฅๆฐๆฎ
ulist = []
trs = soup.find_all('tr')
for tr in trs:
ui = []
for td in tr:
ui.append(td.string)
ulist.append(ui)
# ๆๅ่ฅไธ้ขๅๅๅฉๆถฆ
income_increase = 0
profit_increase = 0
for element in ulist:
if ('่ฅไธๆปๆถๅฅ' in element):
income_data_list = element
now_data = smartMultiply(income_data_list[3])
past_data = smartMultiply(income_data_list[11])
income_increase = (now_data - past_data) / past_data
elif ('ๅๅฉๆถฆ' in element):
profit_data_list = element
now_data = smartMultiply(profit_data_list[3])
past_data = smartMultiply(profit_data_list[11])
profit_increase = (now_data - past_data) / past_data
# if income_increase > income_limit and profit_increase > profit_limit:
new_row = tc.SFrame({'code': [row['code']], 'name': [row['name']], 'bankuai': [bankuai],
'close': [row['close']], 'percent_chg': [row['percent_chg']], 'change': [row['change']],
'volume': [row['volume']], 'turn_volume': [row['turn_volume']],
'amplitude': [row['amplitude']], 'volume_rate': [row['volume_rate']], 'turnover_rate': [row['turnover_rate']],
'news_url': [''], 'income_increase': [income_increase],
'profit_increase': [profit_increase],'PERation': [row['PERation']]})
result = [income_increase > income_limit and profit_increase > profit_limit, new_row, income_increase, profit_increase]
# print(bankuai,row['name'], result) # Debug
return result
```
# ----------ๆง่กๅฝไปค----------
```
# ๆฐๅปบ็ฒพ้ๆฐๆฎๅบ
SELECTED_DATA = tc.SFrame({'code': ['000000'], 'name': ['ๆฐๆฎไธๅญๅจ'],'bankuai': ['ไบๆฌกๅ'],
'close': [0.0], 'percent_chg': [0.0],'change': [0.0],
'volume': [0.0], 'turn_volume': [0.0],
'amplitude': [0.0],'volume_rate': [0.0],
'turnover_rate': [0.0],
'news_url': ['http://www.bilibili.com'],
'income_increase': [0.0], 'profit_increase': [0.0],
'PERation': [0.0]})
SELECTED_DATA = user_interface(SELECTED_DATA)
SELECTED_DATA = SELECTED_DATA.unique() # ๆฅ้
SELECTED_DATA = normalizeColumns(SELECTED_DATA)
SELECTED_DATA['score'] = SELECTED_DATA.apply(makeSCore)
SELECTED_DATA = SELECTED_DATA.sort('score', ascending = False)
SELECTED_DATA
```
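The pipeline above sum-normalizes every numeric column (`normalizeColumns`) before `makeSCore` combines them with fixed weights. A plain-Python sketch of that normalization step, independent of SFrame, for illustration:

```python
def sum_normalize(values):
    """Scale values so they sum to 1; leave an all-zero column untouched,
    mirroring the sumSArray != 0 branch in normalizeColumns."""
    total = sum(values)
    if total == 0:
        return [0.0 for _ in values]
    return [v / total for v in values]

# Example: a small turnover-rate column
turnover = [0.02, 0.05, 0.03]
balanced = sum_normalize(turnover)
assert abs(sum(balanced) - 1.0) < 1e-12
assert sum_normalize([0, 0]) == [0.0, 0.0]
```

Each `balanced_*` column produced this way sums to 1, so the weighted combination in `makeSCore` compares stocks on a common scale.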
# ----------ๆฐ้ป็ฌ่ซ----------
```
# ๅฎไน็งป้คๆ ็น็ฌฆๅทๅฝๆฐ
def removePunctuation(contents):
return re.sub("[\s+\.\!\/_,$%^*(+\"\']+|[+โโ๏ผ๏ผใ๏ผใ~@#๏ฟฅ%โฆโฆ&*๏ผ๏ผ]+", "", contents)
# ๅ่ฏๅฝๆฐ
def splitWords(contents):
segmentor = Segmentor() # ๅๅงๅๅฎไพ
segmentor.load(cws_model_path) # ๅ ่ฝฝๆจกๅ
words = segmentor.segment(contents) # ๅ่ฏ
segmentor.release() # ้ๆพๆจกๅ
return words
def tagWords(contents_split):
postagger = Postagger() # ๅๅงๅๅฎไพ
postagger.load(pos_model_path) # ๅ ่ฝฝๆจกๅ
words = contents_split # ๅ่ฏ็ปๆ
postags = postagger.postag(words) # ่ฏๆงๆ ๆณจ
postagger.release() # ้ๆพๆจกๅ
return postags
def getNews(stockName):
hexStockName = str(stockName.encode('utf-8'))
hexStockName = hexStockName.replace('\\', '%')
hexStockName = hexStockName.replace('b\'', '')
hexStockName = hexStockName.replace('\'', '')
hexStockName = hexStockName.upper()
hexStockName = hexStockName.replace('X', '')
print(hexStockName)
url = 'http://so.eastmoney.com/news/s?keyword=' + hexStockName
# ็ฌๅ็ฝ็ซๆบไปฃ็
browser = webdriver.Chrome() # Get local session of chrome
browser.get(url) # Load page
soup = BeautifulSoup(browser.page_source, "lxml")
# browser.close()
# ๅค็ๆ็ดขundefined็ๆๅต
if len(soup.findAll(class_ = 'still')) != 0:
print(soup.findAll(target = 'module module-empty'))
#url = soup.findAll(target = '_self')['href']
browser = webdriver.Chrome() # Get local session of chrome
browser.get(url) # Load page
soup = BeautifulSoup(browser.page_source, "lxml")
news_data = tc.SFrame({'code': ['000000'], 'name': ['ๅๅฉๅๅฉ'],
'url': ['http://www.bilibili.com'], 'title': ['News Title'],
'contents': ['news contents']})
# ่ทๅๅจ้จๆฐ้ป็url
news_url_list = []
for tag in soup.findAll('a', href = True):
if tag['href'][0:32] == 'http://stock.eastmoney.com/news/':
news_url_list.append(tag['href'])
print(news_url_list) # Debug
"""
# ่ฟ่ก็งป้คๅฝๆฐๅนถๅๅปบไธไธชๅไธบ'contents_nopunc'็ๅ
news_data['contents_nopunc'] = news_data['contents'].apply(removePunctuation)
# ๆง่กๅ่ฏๅฝๆฐ
news_data['content_split_words'] = news_data['contents_nopunc'].apply(splitWords)
# ๆง่กๆ ๆณจๅฝๆฐ
news_data['content_,tag_words'] = news_data['content_split_words'].apply(tagWords)
"""
return news_data
# print(getNews('ๆถฆ็ฆพๆๆ'))
getNews('ๆถฆ็ฆพๆๆ')
news_list = []
for stock_name in SELECTED_DATA['name']:
news_list.append(getNews(stock_name))
browser = webdriver.Chrome() # Get local session of chrome
browser.get('http://so.eastmoney.com/news/s?keyword=%E6%B6%A6%E7%A6%BE%E6%9D%90%E6%96%99') # Load page
# soup = BeautifulSoup(browser.page_source, "lxml")
soup = BeautifulSoup(browser.page_source, "html.parser")
# print(soup.find_all(class_ = 'still'))
# print(soup.find(class_ = 'still')['href'])
# print(soup.findAll(class_ = 'still',))
still = soup.find(class_ = 'still')
print(type(still('a')))
```
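The byte juggling in `getNews` — encode the keyword to UTF-8, turn the `\x` escapes into `%`, uppercase — is hand-rolled percent-encoding, which `urllib.parse.quote` already provides. A sketch checking the two against each other (the keyword here is an arbitrary CJK string used only for illustration):

```python
from urllib.parse import quote

def hex_keyword(name):
    """Reproduce getNews's manual percent-encoding of a UTF-8 keyword."""
    s = str(name.encode('utf-8'))                    # e.g. "b'\\xe6\\xb6\\xa6'"
    s = s.replace('\\', '%').replace("b'", '').replace("'", '')
    return s.upper().replace('X', '')                # '%XE6...' -> '%E6...'

keyword = '润禾'  # arbitrary CJK sample; any non-ASCII string behaves the same
assert hex_keyword(keyword) == quote(keyword).upper()
```

Replacing the manual manipulation with `quote(stockName)` would also handle keywords that contain ASCII bytes, which the string-of-bytes trick above does not.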
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/system-design-primer).
# Design a deck of cards
## Constraints and assumptions
* Is this a generic deck of cards for games like poker and black jack?
* Yes, design a generic deck then extend it to black jack
* Can we assume the deck has 52 cards (2-10, Jack, Queen, King, Ace) and 4 suits?
* Yes
* Can we assume inputs are valid or do we have to validate them?
* Assume they're valid
## Solution
```
%%writefile deck_of_cards.py
from abc import ABCMeta, abstractmethod
from enum import Enum
import sys
class Suit(Enum):
HEART = 0
DIAMOND = 1
CLUBS = 2
SPADE = 3
class Card(metaclass=ABCMeta):
def __init__(self, value, suit):
self.value = value
self.suit = suit
self.is_available = True
@property
@abstractmethod
def value(self):
pass
@value.setter
@abstractmethod
def value(self, other):
pass
class BlackJackCard(Card):
def __init__(self, value, suit):
super(BlackJackCard, self).__init__(value, suit)
def is_ace(self):
return self._value == 1
def is_face_card(self):
"""Jack = 11, Queen = 12, King = 13"""
return 10 < self._value <= 13
@property
def value(self):
if self.is_ace():
return 1
elif self.is_face_card():
return 10
else:
return self._value
@value.setter
def value(self, new_value):
if 1 <= new_value <= 13:
self._value = new_value
else:
raise ValueError('Invalid card value: {}'.format(new_value))
class Hand(object):
def __init__(self, cards):
self.cards = cards
def add_card(self, card):
self.cards.append(card)
def score(self):
total_value = 0
for card in self.cards:
total_value += card.value
return total_value
class BlackJackHand(Hand):
BLACKJACK = 21
def __init__(self, cards):
super(BlackJackHand, self).__init__(cards)
def score(self):
min_over = sys.maxsize
max_under = -sys.maxsize
for score in self.possible_scores():
if self.BLACKJACK < score < min_over:
min_over = score
elif max_under < score <= self.BLACKJACK:
max_under = score
return max_under if max_under != -sys.maxsize else min_over
def possible_scores(self):
"""Return a list of possible scores, taking Aces into account."""
# ...
class Deck(object):
def __init__(self, cards):
self.cards = cards
self.deal_index = 0
def remaining_cards(self):
return len(self.cards) - self.deal_index
def deal_card(self):
try:
card = self.cards[self.deal_index]
card.is_available = False
self.deal_index += 1
except IndexError:
return None
return card
def shuffle(self): # ...
```
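`possible_scores()` is left unimplemented above. One way to write it is to let each Ace count as either 1 or 11 and collect every reachable total; the sketch below works on plain card values rather than the `Card` classes so the idea can be tested in isolation:

```python
def possible_scores(values):
    """All distinct hand totals, counting each ace (value 1) as 1 or 11."""
    totals = {0}
    for v in values:
        options = (1, 11) if v == 1 else (v,)
        totals = {t + o for t in totals for o in options}
    return sorted(totals)

assert possible_scores([1, 10]) == [11, 21]        # blackjack
assert possible_scores([1, 1, 9]) == [11, 21, 31]  # two aces
assert possible_scores([5, 7]) == [12]             # no aces: single total
```

`BlackJackHand.score()` then picks the best of these totals, as coded above.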
## Universidade Federal do Rio Grande do Sul (UFRGS)
Programa de Pรณs-Graduaรงรฃo em Engenharia Civil (PPGEC)
# PEC00025: Introduction to Vibration Theory
### Class 14 - Vibration of beams
[1. The vibrating beam equation](#section_1)
[2. Free vibration solution](#section_2)
[3. Vibration modes and frequencies](#section_3)
[4. Solution by approximation](#section_4)
[5. Assignment](#section_5)
---
_Prof. Marcelo M. Rocha, Dr.techn._ [(ORCID)](https://orcid.org/0000-0001-5640-1020)
_Porto Alegre, RS, Brazil_
```
# Importing Python modules required for this notebook
# (this cell must be executed with "shift+enter" before any other Python cell)
import numpy as np
import matplotlib.pyplot as plt
```
## 1. The vibrating beam equation <a name="section_1"></a>
The static analysis of a beam under Bernoulli's hypothesis (plane sections remain plane after beam
deformation) leads to the well known differential equations:
\begin{align*}
\frac{dQ}{dx} &= -q(x) \\
\frac{dM}{dx} &= Q(x) \\
EI \psi^\prime &= M(x)
\end{align*}
where $q(x)$ is the distributed transversal loading, $Q(x)$ is the shear, $M(x)$ is the bending
moment, $\psi(x)$ is the section rotation, and $EI$ is the flexural stiffness (regarded
hereinafter as constant along the beam length).
<img src="images/dynamic_beam.png" alt="Dynamic beam equilibrium" width="360px"/>
Disregarding shear strains, $\gamma(x) = 0$, implies that the section rotation is approximated as:
$$ \psi \approx -w^\prime(x) $$
what implies that:
\begin{align*}
EI w^{\prime\prime} &\approx -M(x) \\
EI w^{\prime\prime\prime\prime} &\approx q(x)
\end{align*}
where $w(x)$ is the beam transversal displacement, also called _elastic line_. The solution of this
last differential equation is straightforward once the load $q(x)$ and the boundary conditions (two
for each beam extremity) are specified.
We shall now include the inertial forces in these equations, as well as regard now the section mean shear strain,
$$ \gamma(x) = \psi(x) + w^\prime(x) = \frac{Q(x)}{GA_{\rm s}}$$
as relevant, where $GA_{\rm s}$ is the shear stiffness (also regarded hereinafter as constant along
the beam length). Although $\gamma(x)$ is indeed negligible for actual slender beams,
the following analysis may also be applied to other slender structures, like
tall buildings, trusses, etc., for which we may define _equivalent stiffnesses_,
$\left(EI\right)_{\rm eq}$ and $\left(GA_{\rm s}\right)_{\rm eq}$.
The dynamic equilibrium equations now become:
\begin{align*}
Q^\prime &= -q + \mu \ddot{ w} \\
M^\prime &= Q - i_\mu \ddot{\psi}
\end{align*}
where $\mu$ is the beam mass per unit length and $i_\mu$ is the cross section rotational inertia
per unit length. Derivating the equation for $\gamma$ and replacing the solution
$EI \psi^{\prime} = M(x)$ gives the elastic line equation accounting for shear strains:
$$ w^{\prime\prime} = -\frac{M}{EI} + \frac{Q^\prime}{GA_{\rm s}} $$
Now we replace the shear (with inertial loads):
$$ w^{\prime\prime} = -\frac{M}{EI} + \frac{-q + \mu \ddot{ w}}{GA_{\rm s}} $$
and derivate the whole equation replacing for $M^\prime$:
$$ w^{\prime\prime\prime} = \frac{i_\mu \ddot{\psi} - Q}{EI} + \frac{\mu \ddot{w}^\prime - q^\prime}{GA_{\rm s}} $$
The angular acceleration, $\ddot{\psi}$, may be safely disregarded, for the rotations are usually very small.
Derivating the equation a last time and replacing for $Q^\prime$ finally gives:
$$ EI w^{\prime\prime\prime\prime} = q - \mu \ddot{ w} +
\frac{EI}{GA_{\rm s}} \, \left(\mu \ddot{ w}^{\prime\prime} - q^{\prime\prime} \right) $$
which is the dynamic elastic line equation for a constant section beam under forced vibration due to
dynamic load $q(x,t)$, with shear deformation accounted for (although plane section hypothesis still kept along).
The last term may be disregarded whenever the shear stiffness is much larger that the bending stiffness.
## 2. Free vibration solution <a name="section_2"></a>
In this section, we take the vibrating beam equation derived above, disregard the shear deformation and look for
free vibration solution, which implies that $q(x, t) = 0$. The beam equation becomes simply:
$$ EI w^{\prime\prime\prime\prime} = - \mu \ddot{w} $$
To solve this equation we separate the time and space independent variables through the hypothesis:
$$ w(x,t) = \varphi(x)\, \sin \omega t $$
which is pretty much alike we have previously done for multiple degrees of freedom systems.
The free vibration equilibrium equation then become:
$$ \varphi^{\prime\prime\prime\prime} - p^4 \varphi = 0 $$
where we have defined:
$$ p^4 = \left(\frac{\mu}{EI}\right) \omega^2 $$
It can be shown that, in the general case, the space dependent function $\varphi(x)$ has the form:
$$ \varphi(x) = C_1 \left(\cos px + \cosh px \right) +
C_2 \left(\cos px - \cosh px \right) +
C_3 \left(\sin px + \sinh px \right) +
C_4 \left(\sin px - \sinh px \right) $$
The corresponding space derivatives will be required to apply the boundary conditions:
\begin{align*}
\varphi^\prime(x) = p^1&\left[C_1 \left(-\sin px + \sinh px \right) +
C_2 \left(-\sin px - \sinh px \right) +
C_3 \left( \cos px + \cosh px \right) +
C_4 \left( \cos px - \cosh px \right)\right] \\
\varphi^{\prime\prime}(x) = p^2&\left[C_1 \left(-\cos px + \cosh px \right) +
C_2 \left(-\cos px - \cosh px \right) +
C_3 \left(-\sin px + \sinh px \right) +
C_4 \left(-\sin px - \sinh px \right)\right] \\
\varphi^{\prime\prime\prime}(x) = p^3&\left[C_1 \left( \sin px + \sinh px \right) +
C_2 \left( \sin px - \sinh px \right) +
C_3 \left(-\cos px + \cosh px \right) +
C_4 \left(-\cos px - \cosh px \right)\right] \\
\varphi^{\prime\prime\prime\prime}(x) = p^4&\left[C_1 \left( \cos px + \cosh px \right) +
C_2 \left( \cos px - \cosh px \right) +
C_3 \left( \sin px + \sinh px \right) +
C_4 \left( \sin px - \sinh px \right)\right]
\end{align*}
The last equation above proves that the assumed general solution is correct.
Now, to have a particular solution for the vibrating beam the kinematic boundary conditions must be applied.
Let us assume a cantilever beam, fixed at the left end ($x = 0$) and free at the right end ($x = L$).
<img src="images/cantilever_beam.png" alt="Cantilever beam" width="360px"/>
The corresponding boundary conditions are:
\begin{align*}
\varphi(0) &= 0 \\
\varphi^ \prime(0) &= 0 \\
\varphi^{\prime\prime}(L) &= 0 \\
\varphi^{\prime\prime\prime}(L) &= 0
\end{align*}
The two last conditions implies that bending moment and shear force are zero at the right end, respectively.
Applying these conditions at the corresponding derivatives:
\begin{align*}
\varphi(0) &= C_1 \left( 1 + 1 \right) +
C_2 \left( 1 - 1 \right) +
C_3 \left( 0 + 0 \right) +
C_4 \left( 0 - 0 \right) = 0\\
\varphi^\prime(0) &= p \left[C_1 \left(-0 + 0 \right) +
C_2 \left(-0 - 0 \right) +
C_3 \left( 1 + 1 \right) +
C_4 \left( 1 - 1 \right)\right] = 0
\end{align*}
which implies that $C_1 = 0$ and $C_3 = 0$. The other two conditions become:
\begin{align*}
\varphi^{\prime\prime}(L) &= p^2 \left[C_2 \left(-\cos pL - \cosh pL \right) +
C_4 \left(-\sin pL - \sinh pL \right)\right] = 0\\
\varphi^{\prime\prime\prime}(L) &= p^3 \left[C_2 \left( \sin pL - \sinh pL \right) +
C_4 \left(-\cos pL - \cosh pL \right)\right] = 0
\end{align*}
These two equations can be put into matrix form as:
$$ \left[ \begin{array}{cc}
\left(-\cos pL - \cosh pL \right) & \left(-\sin pL - \sinh pL \right) \\
\left( \sin pL - \sinh pL \right) & \left(-\cos pL - \cosh pL \right)
\end{array} \right]
\left[ \begin{array}{c}
C_2 \\
C_4
\end{array} \right] =
\left[ \begin{array}{c}
0 \\
0
\end{array} \right] $$
In order to obtain a non trivial (non zero) solution for the unknown coefficients $C_2$ and $C_4$,
the determinant of the coefficients matrix must be zero. This condition yields a nonlinear equation
to be solved for $pL$. We can use the HP Prime for this purpose, as shown in the figure below.
<table>
<tr>
<td><img src="images/det_cantilever_1.jpg" alt="HP Prime determinant" width="320px"/></td>
<td><img src="images/det_cantilever_2.jpg" alt="HP Prime cantilever" width="320px"/></td>
</tr>
</table>
There will be infinite solutions $\alpha_k = \left( pL \right)_k$, $k = 1, 2, \dots, \infty$,
each one associated to a vibration frequency and a modal shape,
$\left[ \omega_k, \varphi_k(x) \right]$. The natural vibration frequencies are obtained
by recalling the definition of $p$, what finally gives:
$$ \omega_k = \left( \frac{\alpha_k}{L} \right) ^2 \sqrt{\frac{EI}{\mu}}$$
For instance, the fundamental vibration frequency, $f_{\rm n}$, is given by:
$$ f_{\rm n} \approx \frac{1}{2\pi} \left( \frac{1.8751}{L} \right) ^2 \sqrt{\frac{EI}{\mu}}$$
which is a very useful formula for estimating the fundamental frequency of slender piles and towers
with constant cross section.
```
x = np.linspace(0, 10, 1000)   # grid of points along the beam axis for evaluating mode shapes
```
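With the constants found above ($C_1 = C_3 = 0$, and $C_4$ written in terms of $C_2$ from the first bending-moment condition), the first cantilever mode can be evaluated on a grid like the one just created. In the sketch below the beam properties $L$, $EI$ and $\mu$ are assumed example values, used only to illustrate the frequency formula:

```python
import numpy as np

a1 = 1.8751040687            # first root of cos(a)*cosh(a) + 1 = 0
L  = 10.0                    # assumed beam length (example value)
x  = np.linspace(0, L, 1000)

# First mode shape with C2 = 1 and C4 = -sigma*C2
p     = a1/L
sigma = (np.cos(a1) + np.cosh(a1))/(np.sin(a1) + np.sinh(a1))
phi   = (np.cos(p*x) - np.cosh(p*x)) - sigma*(np.sin(p*x) - np.sinh(p*x))
phi  /= phi[-1]              # normalize to unit tip displacement

# Fundamental frequency for assumed example properties
EI, mu = 2.0e6, 500.0        # N·m², kg/m (illustrative values only)
fn = (a1/L)**2 * np.sqrt(EI/mu)/(2*np.pi)
print(fn)
```

The shape satisfies the fixed-end conditions by construction: both $\varphi(0)$ and $\varphi^\prime(0)$ vanish identically.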
## 3. Vibration modes and frequencies <a name="section_3"></a>
The table below was taken from a _Sonderausdruck_ (special edition) of the german _Betonkalender_
(concrete almanac), 1988. It summarizes the solutions for some other support conditions of slender beams.
If more accuracy is desired for the $\alpha_k$ constants, one can solve the so-called
_characteristic equation_ with the help of a calculator like the HP Prime or the HP 50G.
<img src="images/beams.png" alt="Beam solutions" width="640px"/>
The characteristic equations are the determinants of respective coefficients matrix,
which can also be solved with the ``fsolve()`` method from ``scipy``, as shown below.
```
def char_eq(x):
x = x[0]
A = np.array([[-np.cos(x)-np.cosh(x), -np.sin(x)-np.sinh(x)],
[ np.sin(x)-np.sinh(x), -np.cos(x)-np.cosh(x)]])
return np.linalg.det(A) # from coefficientes matrix
# return np.cos(x)*np.cosh(x) + 1 # from characteristic equation
#-----------------------------------------------------------------------
from scipy.optimize import fsolve
ak = fsolve(char_eq, 2)
print('Cantilever beam frequency parameter: {0}'.format(ak[0]))
```
Observe that the result is exactly the same (within the required precision) as previously obtained with the
HP Prime. One can either use the characteristic equation directly, or program the determinant calculation
with ``np.linalg.det()`` in the user function ``char_eq`` above.
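`fsolve` returns only the root nearest its initial guess, so the higher constants $\alpha_k$ follow from supplying one guess per root (they lie close to $(2k-1)\,\pi/2$). A sketch using the characteristic equation directly:

```python
import numpy as np
from scipy.optimize import fsolve

def char_eq(a):
    # Cantilever characteristic equation: cos(a)·cosh(a) + 1 = 0
    return np.cos(a)*np.cosh(a) + 1.0

guesses = [2.0, 5.0, 8.0]               # near (2k-1)·π/2 for k = 1, 2, 3
roots = [fsolve(char_eq, g)[0] for g in guesses]
print(roots)
```

The first three constants come out close to 1.875, 4.694 and 7.855, matching the tabulated cantilever values.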
## 4. Solution by approximation <a name="section_4"></a>
The solutions for the beam vibration frequencies presented above may also be calculated by means of
Rayleigh quotient, as long as an educated guess for the $\varphi(x)$ function is assumed.
As an example, let us take a simply supported beam, for which we assume:
$$ \varphi(x) = 4x \left( \frac{L - x}{L^2} \right) $$
with $\varphi(L/2) = 1$, which obviously _is not_ the correct modal shape.
<img src="images/simply_supported.png" alt="simply supported beam" width="400px"/>
We also assume that the beam is subjected to a constant distributed load, $q$, corresponding
to its self weight. The maximum displacement at the beam center is known to be:
$$ w_{\rm max} = \frac{5 q L^4}{384EI}$$
In the following we shall estimate both the fundamental vibration frequency and the
maximum displacement with the assumed modal shape. The reference kinetic energy is given by:
$$ T_{\rm ref} = \frac{1}{2} \int_0^L {\mu \varphi ^2(x) \, dx} $$
while for the elastic potential energy we need the curvature function, $\varphi^{\prime\prime}(x)$:
$$ V = \frac{1}{2} \int_0^L {EI \left[ \varphi^{\prime\prime}(x) \right] ^2 \, dx}
= \frac{1}{2} \int_0^L {q w(x) \, dx}$$
On the other hand, the modal properties are evaluated with a continuous version of the same formula
presented on [Class 11](https://nbviewer.jupyter.org/github/mmaiarocha/PEC00025/blob/master/Class_11_FreeVibrationMDOF.ipynb?flushcache=true). In particular, the modal mass and the modal load are:
\begin{align*}
\vec\phi_k^{\intercal}{\mathbf M} \vec\phi_k &\implies M_k = \int_0^L{\mu \, \varphi^2(x) \, dx} \\
\vec\phi_k^{\intercal} \vec{F} &\implies F_k = \int_0^L{ q \, \varphi(x) \, dx}
\end{align*}
The static modal response can be calculated as:
$$ u_k = F_k/K_k$$
where $K_k = \omega_k^2 M_k$.
Let us apply this approach to the beam example above.
For the assumed modal shape we have the curvature:
$$ \varphi^{\prime\prime}(x) = -\frac{8}{L^2}$$
Hence:
\begin{align*}
T_{\rm ref} &= \frac{1}{2} \int_0^L {\mu \left[ \frac{4x(L - x)}{L^2} \right]^2 \, dx} = \frac{ 4}{15 }\mu L \\
V &= \frac{1}{2} \int_0^L { EI \left[ -\frac{8} {L^2} \right]^2 \, dx} = \frac{32}{L^3} EI
\end{align*}
The Rayleigh quotient results:
$$ \omega_k^2 = \frac{V}{T_{\rm ref}} = \frac{32EI}{L^3} \frac{15}{4\mu L}
= \frac{ 120}{L^4} \left( \frac{EI}{\mu} \right)$$
which gives a fundamental frequency that compares to the exact solution as:
$$ \omega_k \approx \left( \frac{3.31}{L} \right)^2 \sqrt{\frac{EI}{\mu}}
\approx \left( \frac{ \pi}{L} \right)^2 \sqrt{\frac{EI}{\mu}} $$
with an error of approximately 11%. The modal shape approximation may also be used to estimate
the displacement at the beam center, for which we calculate the modal mass and the modal load as:
\begin{align*}
M_k &= \int_0^L{\mu \, \left[ \frac{4x(L - x)}{L^2} \right]^2 \, dx} = \frac{8}{15}\mu L\\
F_k &= \int_0^L{ q \, \left[ \frac{4x(L - x)}{L^2} \right] \, dx} = \frac{2}{ 3} q L
\end{align*}
The modal stiffness is then:
$$ K_k = \frac{ 120}{L^4} \left( \frac{EI}{\mu} \right) \cdot \frac{8}{15}\mu L = \frac{64EI}{L^3}$$
and the modal displacement is:
$$ u_k = \frac{2}{3} qL \cdot \frac{L^3}{64EI} = \frac{4 q L^4}{384EI} \approx \frac{5 q L^4}{384EI} $$
The modal displacement is already the displacement at the beam center, for $\varphi(L/2) = 1$.
The implied error is hence 20%, not bad for an arbitrary elastic line shape.
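The hand integrals above can also be cross-checked numerically; this is a small sketch (assuming `numpy`; the unit values for $L$, $EI$ and $\mu$ are arbitrary, since they cancel out of the quotient):

```python
import numpy as np

L, EI, mu = 1.0, 1.0, 1.0          # arbitrary unit values
x = np.linspace(0, L, 10001)
phi = 4*x*(L - x)/L**2             # assumed modal shape
curv = -8/L**2*np.ones_like(x)     # its constant curvature

Tref = np.trapz(mu*phi**2, x)/2    # should give (4/15) mu L
V = np.trapz(EI*curv**2, x)/2      # should give 32 EI / L**3
wk2 = V/Tref                       # Rayleigh quotient, should give 120 EI/(mu L**4)
print(Tref, V, wk2)
```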
The following scripts show how to numerically accomplish these calculations.
```
L = 1 # bar length (m)
EI = 2.6 # bending stiffness (Nm2)
mu = 0.260 # mass per unity length (kg/m)
q = mu*9.81 # distributed load is self weight (N/m)
# Proposed modal shape for first mode (second order polynomial)
x = np.linspace(0, L, 200)
qk = 4*x*(L - x)/L/L # guessed modal shape
q0 = np.sin(np.pi*x/L) # exact modal shape!!!
plt.figure(1, figsize=(8,2))
plt.plot(x, qk, 'b', x, q0, 'r')
plt.xlim( 0.0, L ); plt.xlabel('x');
plt.ylim(-0.5, 1.5); plt.ylabel('phi(x)');
plt.title('Proposed modal shape')
plt.grid(True)
```
The same calculation could be carried out with the correct modal frequency, with much more accurate results:
```
wk = ((np.pi/L)**2)*np.sqrt(EI/mu) # exact fundamental frequency
fk = wk/(2*np.pi)
Mk = np.trapz(mu*qk*qk, x) # modal mass from guessed modal shape
Kk = wk*wk*Mk # modal stiffness from exact frequency
print('Available fundamental vibration frequency: {0:5.2f} Hz'.format(fk))
print('Modal mass (integrated over bar length): {0:5.3f} kg'.format(Mk))
print('Modal stiffness (from mass and frequency): {0:5.0f} N/m'.format(Kk))
Fk = np.trapz(q*qk, x) # modal force
uk = Fk/Kk # modal displacement
wp = np.max(uk*qk) # approximated elastic line
w0 = (5*q*L**4)/(384*EI) # exact elastic line
print('Maximum displacement approximation: {0:6.2f}mm'.format(1000*wp))
print('Theoretical maximum displacement: {0:6.2f}mm'.format(1000*w0))
w = uk*qk
V = np.trapz(q*w, x)/2 # potential energy calculated with external work
Tref = np.trapz(mu*w*w, x)/2 # reference kinetic energy
wk = np.sqrt(V/Tref)
fk = wk/(2*np.pi)
print('Fundamental frequency from Rayleigh quotient: {0:5.2f} Hz'.format(fk))
```
## 5. Assignments <a name="section_5"></a>
1. Apply a load to the individual discrete model that produces a deflected shape similar to the first mode shape, and recalculate the fundamental frequency with the Rayleigh quotient.
2. Keeping the average mass per unit length, determine an EI (equivalent bending stiffness) that yields a fundamental frequency close to the correct one when the structure is represented as an equivalent continuous bar.
Assignment T8 <br>
Due: 08/06/2020.
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Defining and plotting triangulated surfaces
#### with Plotly `Mesh3d`
A triangulation of a compact surface is a finite collection of triangles that cover the surface in such a way that every point on the surface is in a triangle, and the intersection of any two triangles is either empty, a common edge or a common vertex. A triangulated surface is called a tri-surface.
The triangulation of a surface defined as the graph of a continuous function, $z=f(x,y), (x,y)\in D\subset\mathbb{R}^2$ or in a parametric form:
$$x=x(u,v), y=y(u,v), z=z(u,v), (u,v)\in U\subset\mathbb{R}^2,$$
is the image through $f$, respectively through the parameterization, of the Delaunay triangulation or of a user-defined triangulation of the planar domain $D$, respectively $U$.
The Delaunay triangulation of a planar region is defined and illustrated in a Python Plotly tutorial posted [here](https://plotly.com/python/alpha-shapes/).
If the planar region $D$ ($U$) is rectangular, then one defines a meshgrid on it, and the points
of the grid are the input points for the `scipy.spatial.Delaunay` function that defines the planar triangulation of $D$, respectively $U$.
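As a toy illustration of the `scipy.spatial.Delaunay` call used below, triangulating four points in convex position yields exactly two triangles:

```python
import numpy as np
from scipy.spatial import Delaunay

# four points in convex position (corners of a quadrilateral)
points = np.array([[0, 0], [1, 0], [1.5, 1.5], [0, 1]])
tri = Delaunay(points)
print(tri.simplices)        # each row holds the three point indices of one triangle
print(tri.simplices.shape)  # (2, 3)
```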
### Triangulation of the Moebius band ###
The Moebius band is parameterized by:
$$\begin{align*}
x(u,v)&=(1+0.5 v\cos(u/2))\cos(u)\\
y(u,v)&=(1+0.5 v\cos(u/2))\sin(u)\quad\quad u\in[0,2\pi],\: v\in[-1,1]\\
z(u,v)&=0.5 v\sin(u/2)
\end{align*}
$$
Define a meshgrid on the rectangle $U=[0,2\pi]\times[-1,1]$:
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
import matplotlib.cm as cm
from scipy.spatial import Delaunay
u=np.linspace(0,2*np.pi, 24)
v=np.linspace(-1,1, 8)
u,v=np.meshgrid(u,v)
u=u.flatten()
v=v.flatten()
#evaluate the parameterization at the flattened u and v
tp=1+0.5*v*np.cos(u/2.)
x=tp*np.cos(u)
y=tp*np.sin(u)
z=0.5*v*np.sin(u/2.)
#define 2D points, as input data for the Delaunay triangulation of U
points2D=np.vstack([u,v]).T
tri = Delaunay(points2D)#triangulate the rectangle U
```
`tri.simplices` is a `np.array` of integers, of shape (`ntri`,3), where `ntri` is the number of triangles generated by `scipy.spatial.Delaunay`.
Each row in this array contains three indices, i, j, k, such that `points2D[i,:]`, `points2D[j,:]`, `points2D[k,:]` are the vertices of a triangle in the Delaunay triangulation of the rectangle $U$.
```
print(tri.simplices.shape, '\n', tri.simplices[0])
```
The images of the `points2D` through the surface parameterization are 3D points. The same simplices define the triangles on the surface.
Setting a combination of keys in `Mesh3d` leads to generating and plotting of a tri-surface, in the same way as `plot_trisurf` in matplotlib or `trisurf` in Matlab does.
We note that `Mesh3d` with different combination of keys can generate [alpha-shapes](https://plotly.com/python/alpha-shapes/).
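The vertex/index encoding expected by `Mesh3d` is easy to see on a tetrahedron. The following plotting-free sketch (with hypothetical coordinates) builds the i, j, k lists for its four faces — the same kind of lists that the helper function below derives from Delaunay simplices:

```python
# four vertices of a tetrahedron
x = [0.0, 1.0, 0.0, 0.0]
y = [0.0, 0.0, 1.0, 0.0]
z = [0.0, 0.0, 0.0, 1.0]

# each face is an (i, j, k) triple of vertex indices
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
i, j, k = (list(t) for t in zip(*faces))
print(i, j, k)  # [0, 0, 0, 1] [1, 1, 2, 2] [2, 3, 3, 3]
```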
In order to plot a tri-surface, we choose a colormap and associate to each triangle the colormap color corresponding to the normalized mean value of the z-coordinates of its vertices.
Define a function that maps a mean z-value to a matplotlib color, converted to a Plotly color:
```
def map_z2color(zval, colormap, vmin, vmax):
    # map the normalized value zval to a corresponding color in the colormap
    if vmin > vmax:
        raise ValueError('incorrect relation between vmin and vmax')
    t = (zval - vmin)/float(vmax - vmin)  # normalize zval
    R, G, B, alpha = colormap(t)
    return 'rgb({:d},{:d},{:d})'.format(int(R*255 + 0.5),
                                        int(G*255 + 0.5),
                                        int(B*255 + 0.5))
```
To plot the triangles on a surface, we set in Plotly `Mesh3d` the lists of x, y, respectively z- coordinates of the vertices, and the lists of indices, i, j, k, for x, y, z coordinates of all vertices:
```
def tri_indices(simplices):
    # simplices is a numpy array defining the simplices of the triangulation
    # returns the lists of indices i, j, k
    return ([triplet[c] for triplet in simplices] for c in range(3))

def plotly_trisurf(x, y, z, simplices, colormap=cm.RdBu, plot_edges=None):
    # x, y, z are lists of coordinates of the triangle vertices
    # simplices are the simplices that define the triangulation;
    # simplices is a numpy array of shape (no_triangles, 3)
    # insert here the type check for input data
    points3D = np.vstack((x, y, z)).T
    # vertices of the surface triangles; built as a list (not a map iterator)
    # so that it can be traversed twice in Python 3
    tri_vertices = [points3D[index] for index in simplices]
    zmean = [np.mean(tri[:, 2]) for tri in tri_vertices]  # mean z-coordinate of each triangle
    min_zmean = np.min(zmean)
    max_zmean = np.max(zmean)
    facecolor = [map_z2color(zz, colormap, min_zmean, max_zmean) for zz in zmean]
    I, J, K = tri_indices(simplices)
    triangles = go.Mesh3d(x=x,
                          y=y,
                          z=z,
                          facecolor=facecolor,
                          i=I,
                          j=J,
                          k=K,
                          name=''
                          )
    if plot_edges is None:  # the triangle sides are not plotted
        return [triangles]
    else:
        # define the lists Xe, Ye, Ze of x, y, resp. z coordinates of the edge end
        # points of each triangle; None separates data of two consecutive triangles
        lists_coord = [[[T[k % 3][c] for k in range(4)] + [None]
                        for T in tri_vertices] for c in range(3)]
        from functools import reduce  # required in Python 3
        Xe, Ye, Ze = [reduce(lambda x, y: x + y, lists_coord[k]) for k in range(3)]
        # define the lines to be plotted
        lines = go.Scatter3d(x=Xe,
                             y=Ye,
                             z=Ze,
                             mode='lines',
                             line=dict(color='rgb(50,50,50)', width=1.5)
                             )
        return [triangles, lines]
```
Call this function for data associated to Moebius band:
```
data1=plotly_trisurf(x,y,z, tri.simplices, colormap=cm.RdBu, plot_edges=True)
```
Set the layout of the plot:
```
axis = dict(
showbackground=True,
backgroundcolor="rgb(230, 230,230)",
gridcolor="rgb(255, 255, 255)",
zerolinecolor="rgb(255, 255, 255)",
)
layout = go.Layout(
title='Moebius band triangulation',
width=800,
height=800,
scene=dict(
xaxis=dict(axis),
yaxis=dict(axis),
zaxis=dict(axis),
aspectratio=dict(
x=1,
y=1,
z=0.5
),
)
)
fig1 = go.Figure(data=data1, layout=layout)
py.iplot(fig1, filename='Moebius-band-trisurf')
```
### Triangulation of the surface $z=\sin(-xy)$, defined over a disk ###
We consider polar coordinates on the disk, $D(0, 1)$, centered at origin and of radius 1, and define
a meshgrid on the set of points $(r, \theta)$, with $r\in[0,1]$ and $\theta\in[0,2\pi]$:
```
n=12 # number of radii
h=1.0/(n-1)
r = np.linspace(h, 1.0, n)
theta= np.linspace(0, 2*np.pi, 36)
r,theta=np.meshgrid(r,theta)
r=r.flatten()
theta=theta.flatten()
#Convert polar coordinates to cartesian coordinates (x,y)
x=r*np.cos(theta)
y=r*np.sin(theta)
x = np.append(x, 0) # a trick to include the center of the disk, which was excluded
y = np.append(y, 0) # when we defined r = np.linspace(h, 1.0, n)
z = np.sin(-x*y)
points2D=np.vstack([x,y]).T
tri=Delaunay(points2D)
```
Plot the surface with a modified layout:
```
data2=plotly_trisurf(x,y,z, tri.simplices, colormap=cm.cubehelix, plot_edges=None)
fig2 = go.Figure(data=data2, layout=layout)
fig2['layout'].update(dict(title='Triangulated surface',
scene=dict(camera=dict(eye=dict(x=1.75,
y=-0.7,
z= 0.75)
)
)))
py.iplot(fig2, filename='trisurf-cubehx')
```
This example is also given as a demo for matplotlib [`plot_trisurf`](http://matplotlib.org/examples/mplot3d/trisurf3d_demo.html).
### Plotting tri-surfaces from data stored in ply-files ###
PLY (Polygon File Format, also known as the Stanford Triangle Format) is a format for storing graphical objects
that are represented by a triangulation of an object, usually resulting from scanning that object. A PLY file contains the coordinates of the vertices, the codes for the faces (triangles) and other elements, as well as the color for faces or the normal direction to faces.
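To make the layout concrete, here is a tiny hand-written ASCII ply file parsed by hand (a sketch for illustration only — the `plyfile` package used below handles the format properly, including binary variants):

```python
toy_ply = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
element face 1
property list uchar int vertex_indices
end_header
0 0 0
1 0 0
0 1 0
3 0 1 2
"""

lines = toy_ply.strip().split('\n')
n_header = lines.index('end_header') + 1
body = lines[n_header:]
vertices = [tuple(map(float, ln.split())) for ln in body[:3]]
faces = [list(map(int, ln.split()))[1:] for ln in body[3:]]  # first number is the vertex count
print(vertices)  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(faces)     # [[0, 1, 2]]
```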
In the following we show how we can read a ply file via the Python package, `plyfile`. This package can be installed with `pip`.
We choose a ply file from a list provided [here](http://people.sc.fsu.edu/~jburkardt/data/ply/ply.html).
```
!pip install plyfile
from plyfile import PlyData, PlyElement
from urllib.request import urlopen  # Python 3 replacement for urllib2
f = urlopen('http://people.sc.fsu.edu/~jburkardt/data/ply/chopper.ply')
plydata = PlyData.read(f)
```
Read the file header:
```
for element in plydata.elements:
    print(element)
nr_points=plydata.elements[0].count
nr_faces=plydata.elements[1].count
```
Read the vertex coordinates:
```
points=np.array([plydata['vertex'][k] for k in range(nr_points)])
points[0]
x,y,z=zip(*points)
faces=[plydata['face'][k][0] for k in range(nr_faces)]
faces[0]
```
Now we can get data for a Plotly plot of the graphical object read from the ply file:
```
data3=plotly_trisurf(x,y,z, faces, colormap=cm.RdBu, plot_edges=None)
title="Trisurf from a PLY file<br>"+\
"Data Source:<a href='http://people.sc.fsu.edu/~jburkardt/data/ply/airplane.ply'> [1]</a>"
noaxis=dict(showbackground=False,
showline=False,
zeroline=False,
showgrid=False,
showticklabels=False,
title=''
)
fig3 = go.Figure(data=data3, layout=layout)
fig3['layout'].update(dict(title=title,
width=1000,
height=1000,
scene=dict(xaxis=noaxis,
yaxis=noaxis,
zaxis=noaxis,
aspectratio=dict(x=1, y=1, z=0.4),
camera=dict(eye=dict(x=1.25, y=1.25, z= 1.25)
)
)
))
py.iplot(fig3, filename='Chopper-Ply-cls')
```
This is a version of the same object, plotted along with the triangle edges:
```
from IPython.display import HTML
HTML('<iframe src=https://plotly.com/~empet/13734/trisurf-from-a-ply-file-data-source-1/ \
width=800 height=800></iframe>')
```
#### Reference
See https://plotly.com/python/reference/ for more information and chart attribute options!
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'triangulation.ipynb', 'python/surface-triangulation/', 'Surface Triangulation',
'How to make Tri-Surf plots in Python with Plotly.',
title = 'Python Surface Triangulation | plotly',
name = 'Surface Triangulation',
has_thumbnail='true', thumbnail='thumbnail/trisurf.jpg',
language='python',
display_as='3d_charts', order=11,
ipynb= '~notebook_demo/71')
```
# [Optional] Data Preparation
---
This section is **optional**. For the purpose of making this lab as efficient as possible, data sets have already been prepared for you in MXNet [RecordIO format](https://mxnet.incubator.apache.org/versions/master/faq/recordio.html), which has various benefits including performance enhancement. The following are steps that were taken to produce training and validation samples in RecordIO format. Take note of the utility functions that MXNet provides for format conversion as well as the native data loaders. These are great features that reduce data wrangling work, and aren't provided by most frameworks.
## Download and unpack the dataset
---
In this exercise, we'll use the [Caltech Birds (CUB 200 2011)](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset, which contains 11,788 images across 200 bird species (the original technical report can be found [here](http://www.vision.caltech.edu/visipedia/papers/CUB_200_2011.pdf)). Each species comes with around 60 images, with a typical size of about 350 pixels by 500 pixels. Bounding boxes are provided, as are annotations of bird parts. A recommended train/test split is given, but image size data is not.

The dataset can be downloaded [here](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html).
Here we download the birds dataset from CalTech.
## Setup
Before preparing the data, there are some initial steps required for setup.
This notebook requires two additional Python packages:
* **OpenCV** is required for gathering image sizes and flipping of images horizontally.
* The **MXNet** runtime is required for using the im2rec tool.
```
import sys
!{sys.executable} -m pip install --upgrade pip
!{sys.executable} -m pip install opencv-python
!{sys.executable} -m pip install mxnet
import os
import urllib.request
def download(url):
filename = url.split('/')[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
%%time
download('http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz')
```
Now we unpack the dataset into its own directory structure.
```
%%time
# Clean up prior version of the downloaded dataset if you are running this again
!rm -rf CUB_200_2011
# Unpack and then remove the downloaded compressed tar file
!gunzip -c ./CUB_200_2011.tgz | tar xopf -
!rm CUB_200_2011.tgz
```
# Understand the dataset
Here we define a few parameters that affect the sampling of the CalTech Birds dataset. For example, `SAMPLE_ONLY` defaults to `True`, which forces the notebook to train on only a handful of species. Setting it to `False` makes the notebook work with the entire dataset of 200 bird species; this makes training a more difficult challenge, and you will need many more epochs to complete it.
The file parameters define names and locations of metadata files for the dataset.
The RecordIO dataset that has been pre-processed for you utilize the default parameters laid out in the cell below.
```
import pandas as pd
import cv2
import boto3
import json
runtime = boto3.client(service_name='runtime.sagemaker')
import matplotlib.pyplot as plt
%matplotlib inline
RANDOM_SPLIT = False
SAMPLE_ONLY = True
FLIP = False
# To speed up training and experimenting, you can use a small handful of species.
# To see the full list of the classes available, look at the content of CLASSES_FILE.
CLASSES = [17, 36, 47, 68, 73]
# Otherwise, you can use the full set of species
if (not SAMPLE_ONLY):
CLASSES = []
for c in range(200):
CLASSES += [c + 1]
RESIZE_SIZE = 256
BASE_DIR = 'CUB_200_2011/'
IMAGES_DIR = BASE_DIR + 'images/'
CLASSES_FILE = BASE_DIR + 'classes.txt'
BBOX_FILE = BASE_DIR + 'bounding_boxes.txt'
IMAGE_FILE = BASE_DIR + 'images.txt'
LABEL_FILE = BASE_DIR + 'image_class_labels.txt'
SIZE_FILE = BASE_DIR + 'sizes.txt'
SPLIT_FILE = BASE_DIR + 'train_test_split.txt'
TRAIN_LST_FILE = 'birds_ssd_train.lst'
VAL_LST_FILE = 'birds_ssd_val.lst'
if (SAMPLE_ONLY):
TRAIN_LST_FILE = 'birds_ssd_sample_train.lst'
VAL_LST_FILE = 'birds_ssd_sample_val.lst'
TRAIN_RATIO = 0.8
CLASS_COLS = ['class_number','class_id']
IM2REC_SSD_COLS = ['header_cols', 'label_width', 'zero_based_id', 'xmin', 'ymin', 'xmax', 'ymax', 'image_file_name']
```
## Explore the dataset images
For each species, there are dozens of images of various shapes and sizes. By dividing the entire dataset into individual named (numbered) folders, the images are in effect labelled for supervised learning using image classification and object detection algorithms.
The following function displays a grid of thumbnail images for all the image files for a given species.
```
def show_species(species_id):
_im_list = !ls $IMAGES_DIR/$species_id
NUM_COLS = 6
IM_COUNT = len(_im_list)
print('Species ' + species_id + ' has ' + str(IM_COUNT) + ' images.')
NUM_ROWS = int(IM_COUNT / NUM_COLS)
if ((IM_COUNT % NUM_COLS) > 0):
NUM_ROWS += 1
fig, axarr = plt.subplots(NUM_ROWS, NUM_COLS)
fig.set_size_inches(8.0, 16.0, forward=True)
curr_row = 0
for curr_img in range(IM_COUNT):
# fetch the url as a file type object, then read the image
f = IMAGES_DIR + species_id + '/' + _im_list[curr_img]
a = plt.imread(f)
        # find the column by taking the current index modulo the number of rows
        col = curr_img % NUM_ROWS
# plot on relevant subplot
axarr[col, curr_row].imshow(a)
if col == (NUM_ROWS - 1):
# we have finished the current row, so increment row counter
curr_row += 1
fig.tight_layout()
plt.show()
# Clean up
plt.clf()
plt.cla()
plt.close()
```
Show the list of bird species or dataset classes.
```
classes_df = pd.read_csv(CLASSES_FILE, sep=' ', names=CLASS_COLS, header=None)
criteria = classes_df['class_number'].isin(CLASSES)
classes_df = classes_df[criteria]
print(classes_df.to_csv(columns=['class_id'], sep='\t', index=False, header=False))
```
Now for any given species, display thumbnail images of each of the images provided for training and testing.
```
show_species('017.Cardinal')
```
# Generate RecordIO files
In this section we continue with the **optional** data preparation exercise. This section serves as a good example of how to convert raw images into optimized RecordIO files.
## Step 1. Gather image sizes
For this particular dataset, bounding box annotations are specified in absolute terms. RecordIO format requires them to be defined in terms relative to the image size. The following code visits each image, extracts the height and width, and saves this information into a file for subsequent use. Some other publicly available datasets provide such a file for exactly this purpose.
```
%%time
SIZE_COLS = ['idx','width','height']
def gen_image_size_file():
print('Generating a file containing image sizes...')
images_df = pd.read_csv(IMAGE_FILE, sep=' ',
names=['image_pretty_name', 'image_file_name'],
header=None)
rows_list = []
idx = 0
for i in images_df['image_file_name']:
# TODO: add progress bar
idx += 1
img = cv2.imread(IMAGES_DIR + i)
dimensions = img.shape
height = img.shape[0]
width = img.shape[1]
image_dict = {'idx': idx, 'width': width, 'height': height}
rows_list.append(image_dict)
sizes_df = pd.DataFrame(rows_list)
print('Image sizes:\n' + str(sizes_df.head()))
sizes_df[SIZE_COLS].to_csv(SIZE_FILE, sep=' ', index=False, header=None)
gen_image_size_file()
```
## Step 2. Generate list files for producing RecordIO files
[RecordIO](https://mxnet.incubator.apache.org/architecture/note_data_loading.html) files can be created using the [im2rec tool](https://mxnet.incubator.apache.org/faq/recordio.html) (images to RecordIO), which takes as input a pair of list files, one for training images and the other for validation images. Each list file has one row for each image. For object detection, each row must contain bounding box data and a class label.
For the CalTech birds dataset, we need to convert the absolute bounding box dimensions to relative dimensions based on image size. We also need to adjust the class ids to be zero-based (instead of 1 to 200, they need to be 0 to 199). This dataset comes with recommended train/test split information (the "is_training_image" flag). This notebook is built flexibly to either leverage this suggestion, or to create a random train/test split with a specific train/test ratio. The `RANDOM_SPLIT` variable defined earlier controls whether or not the split happens randomly.
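As a small worked example of both adjustments (the image size, bounding box and class id below are hypothetical, chosen only for illustration):

```python
# one hypothetical CUB-style record: absolute bbox and a 1-based class id
width, height = 500, 350                   # image size in pixels
x_abs, y_abs, bb_w, bb_h = 60.0, 27.0, 325.0, 304.0
class_id = 17                              # 1-based, as in image_class_labels.txt

# relative corner coordinates, as im2rec expects
xmin, ymin = x_abs/width, y_abs/height
xmax, ymax = (x_abs + bb_w)/width, (y_abs + bb_h)/height

# zero-based ids for the sampled classes
# (the notebook itself stores these as floats for the .lst file)
CLASSES = [17, 36, 47, 68, 73]
id_to_zero = {c: i for i, c in enumerate(sorted(CLASSES))}

print(round(xmin, 4), round(ymin, 4), round(xmax, 4), round(ymax, 4))  # 0.12 0.0771 0.77 0.9457
print(id_to_zero[class_id])  # 0
```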
```
def split_to_train_test(df, label_column, train_frac=0.8):
train_df, test_df = pd.DataFrame(), pd.DataFrame()
labels = df[label_column].unique()
for lbl in labels:
lbl_df = df[df[label_column] == lbl]
lbl_train_df = lbl_df.sample(frac=train_frac)
lbl_test_df = lbl_df.drop(lbl_train_df.index)
print('\n{}:\n---------\ntotal:{}\ntrain_df:{}\ntest_df:{}'.format(lbl, len(lbl_df), len(lbl_train_df), len(lbl_test_df)))
train_df = train_df.append(lbl_train_df)
test_df = test_df.append(lbl_test_df)
return train_df, test_df
def gen_list_files():
# use generated sizes file
sizes_df = pd.read_csv(SIZE_FILE, sep=' ',
names=['image_pretty_name', 'width', 'height'],
header=None)
bboxes_df = pd.read_csv(BBOX_FILE, sep=' ',
names=['image_pretty_name', 'x_abs', 'y_abs', 'bbox_width', 'bbox_height'],
header=None)
split_df = pd.read_csv(SPLIT_FILE, sep=' ',
names=['image_pretty_name', 'is_training_image'],
header=None)
print(IMAGE_FILE)
images_df = pd.read_csv(IMAGE_FILE, sep=' ',
names=['image_pretty_name', 'image_file_name'],
header=None)
print('num images total: ' + str(images_df.shape[0]))
image_class_labels_df = pd.read_csv(LABEL_FILE, sep=' ',
names=['image_pretty_name', 'class_id'], header=None)
# Merge the metadata into a single flat dataframe for easier processing
full_df = pd.DataFrame(images_df)
full_df.reset_index(inplace=True)
full_df = pd.merge(full_df, image_class_labels_df, on='image_pretty_name')
full_df = pd.merge(full_df, sizes_df, on='image_pretty_name')
full_df = pd.merge(full_df, bboxes_df, on='image_pretty_name')
full_df = pd.merge(full_df, split_df, on='image_pretty_name')
full_df.sort_values(by=['index'], inplace=True)
# Define the bounding boxes in the format required by SageMaker's built in Object Detection algorithm.
# the xmin/ymin/xmax/ymax parameters are specified as ratios to the total image pixel size
full_df['header_cols'] = 2 # one col for the number of header cols, one for the label width
full_df['label_width'] = 5 # number of cols for each label: class, xmin, ymin, xmax, ymax
full_df['xmin'] = full_df['x_abs'] / full_df['width']
full_df['xmax'] = (full_df['x_abs'] + full_df['bbox_width']) / full_df['width']
full_df['ymin'] = full_df['y_abs'] / full_df['height']
full_df['ymax'] = (full_df['y_abs'] + full_df['bbox_height']) / full_df['height']
# object detection class id's must be zero based. map from
# class_id's given by CUB to zero-based (1 is 0, and 200 is 199).
if SAMPLE_ONLY:
# grab a small subset of species for testing
criteria = full_df['class_id'].isin(CLASSES)
full_df = full_df[criteria]
unique_classes = full_df['class_id'].drop_duplicates()
sorted_unique_classes = sorted(unique_classes)
id_to_zero = {}
i = 0.0
for c in sorted_unique_classes:
id_to_zero[c] = i
i += 1.0
full_df['zero_based_id'] = full_df['class_id'].map(id_to_zero)
full_df.reset_index(inplace=True)
# use 4 decimal places, as it seems to be required by the Object Detection algorithm
pd.set_option("display.precision", 4)
train_df = []
val_df = []
if (RANDOM_SPLIT):
# split into training and validation sets
train_df, val_df = split_to_train_test(full_df, 'class_id', TRAIN_RATIO)
train_df[IM2REC_SSD_COLS].to_csv(TRAIN_LST_FILE, sep='\t',
float_format='%.4f', header=None)
val_df[IM2REC_SSD_COLS].to_csv( VAL_LST_FILE, sep='\t',
float_format='%.4f', header=None)
else:
train_df = full_df[(full_df.is_training_image == 1)]
train_df[IM2REC_SSD_COLS].to_csv(TRAIN_LST_FILE, sep='\t',
float_format='%.4f', header=None)
val_df = full_df[(full_df.is_training_image == 0)]
val_df[IM2REC_SSD_COLS].to_csv( VAL_LST_FILE, sep='\t',
float_format='%.4f', header=None)
print('num train: ' + str(train_df.shape[0]))
print('num val: ' + str(val_df.shape[0]))
return train_df, val_df
train_df, val_df = gen_list_files()
```
Here we take a look at a few records from the training list file to understand better what is being fed to the RecordIO files.
The first column is the image number or index. The second column gives the number of header columns (two: this one and the label width). The third column specifies the label width of a single object; in our case, the value 5 indicates that each object is described by 5 numbers: the class index and the 4 bounding box coordinates. If there were multiple objects within one image, all of the label information would be listed in one line; our dataset contains only one bounding box per image.
The fourth column is the class label, identifying the bird species by a zero-based class id. Columns 5 through 8 give the bounding box for where the bird is found in the image.
The classes should be labelled with successive numbers starting at 0. The bounding box coordinates are the ratios of its top-left (xmin, ymin) and bottom-right (xmax, ymax) corner coordinates to the overall image size, with the top-left corner of the entire image as the origin (0, 0). The last column specifies the relative path of the image file within the images directory.
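Decoding one such row makes the layout explicit (the row below is hypothetical, with made-up values in the format just described):

```python
row = "42\t2\t5\t3.0000\t0.1200\t0.0771\t0.7700\t0.9457\t017.Cardinal/Cardinal_0001.jpg"
fields = row.split('\t')

index = int(fields[0])                  # image index
header_cols = int(fields[1])            # number of header columns
label_width = int(fields[2])            # numbers per object: class + 4 bbox coords
class_id = float(fields[3])             # zero-based class id
bbox = [float(v) for v in fields[4:8]]  # xmin, ymin, xmax, ymax (relative)
image_path = fields[8]                  # path within the images directory

print(class_id, bbox)  # 3.0 [0.12, 0.0771, 0.77, 0.9457]
```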
```
!tail -3 $TRAIN_LST_FILE
```
## Step 3. Convert data into RecordIO format
Now we create im2rec databases (.rec files) for training and validation based on the list files created earlier.
```
!python tools/im2rec.py --resize $RESIZE_SIZE --pack-label birds_ssd_sample $BASE_DIR/images/
```
## Step 4. Upload RecordIO files to S3
Upload the training and validation data to the S3 bucket. We do this in multiple channels. Channels are simply directories in the bucket that differentiate the types of data provided to the algorithm. For the object detection algorithm, we call these directories `train` and `validation`.
```
# set the BUCKET to the name of your S3 bucket
#BUCKET = 'dtong-ml-datasets'
BUCKET = '<<REPLACE YOUR BUCKET NAME>>'
S3_PREFIX = 'datasets/caltech-birds/recordio'
import sagemaker
import boto3
import time
role = sagemaker.get_execution_role()
sess = sagemaker.Session()
# Upload the RecordIO files to train and validation channels
train_channel = S3_PREFIX + '/train'
validation_channel = S3_PREFIX + '/validation'
start = time.time()
sess.upload_data(path='birds_ssd_sample_train.rec', bucket=BUCKET, key_prefix=train_channel)
sess.upload_data(path='birds_ssd_sample_val.rec', bucket=BUCKET, key_prefix=validation_channel)
end = time.time()
s3_train_data = 's3://{}/{}'.format(BUCKET, train_channel)
s3_validation_data = 's3://{}/{}'.format(BUCKET, validation_channel)
print('RecordIO files uploaded in {} seconds'.format(end-start))
print('Training dataset location: {}'.format(s3_train_data))
print('Validation dataset location: {}'.format(s3_validation_data))
```
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
# Eventually, for Anaconda warnings.
# Can be commented out.
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
%load_ext autoreload
%autoreload 2
# Load basic libraries
import seaborn; seaborn.set()
import pickle, copy, json
import numpy as np
import scipy.stats
from sklearn.metrics import make_scorer, confusion_matrix
from sklearn.model_selection import cross_val_score, RandomizedSearchCV, train_test_split
from sklearn.externals import joblib
from sklearn_crfsuite import scorers, metrics
import sklearn_crfsuite
from multiprocessing import Pool
# remember to save this dataset from before!
data = pickle.load(open("dataset/data.p", "rb"))
print(len(data))
# Generic Tagged BE Tag consolidation
correspondances = {
'b-primary-full': 'b-primary',
'i-primary-full': 'i-primary',
'e-primary-full': 'e-primary',
'b-primary-partial': 'b-primary',
'i-primary-partial': 'i-primary',
'e-primary-partial': 'e-primary',
'b-meta-annotation': 'b-meta-annotation',
'i-meta-annotation': 'i-meta-annotation',
'e-meta-annotation': 'e-meta-annotation',
'b-secondary-full': 'b-secondary',
'i-secondary-full': 'i-secondary',
'e-secondary-full': 'e-secondary',
'b-secondary-partial': 'b-secondary',
'i-secondary-partial': 'i-secondary',
'e-secondary-partial': 'e-secondary',
'o': 'o',
}
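# Illustrative sanity check (not part of the original pipeline): the map above
# simply strips the '-full'/'-partial' suffixes; an equivalent rule-based
# version makes that explicit.
def consolidate_tag(tag):
    # drop a trailing '-full' or '-partial' qualifier, keep everything else
    for suffix in ('-full', '-partial'):
        if tag.endswith(suffix):
            return tag[: -len(suffix)]
    return tag
print(consolidate_tag('b-primary-full'), consolidate_tag('i-secondary-partial'), consolidate_tag('o'))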
# define supporting functions
window = 2
from code.feature_extraction_words import word2features, generate_featuresLight
def text2features(text):
return [word2features(text, i, window = window) for i in range(len(text))]
def text2featuresL(text):
return [word2features(text, i, window = window, feature_function=generate_featuresLight) for i in range(len(text))]
# With extra specific tags. Adding specific tags improves performance
def text2featuresEX(text, extra_labels):
return [word2features(text, i, extra_labels, window = window) for i in range(len(text))]
def text2featuresLEX(text, extra_labels):
return [word2features(text, i, extra_labels, window = window, feature_function=generate_featuresLight) for i in range(len(text))]
# create generic tags Y
def text2labelsG(text):
return [correspondances[token[2][0]] for token in text]
# create beginend tags Y
def text2labelsBE(text):
return [token[2][2] for token in text]
# create tagged-beginend tags Y
def text2labelsTBE(text):
return [correspondances[token[2][3]] for token in text]
# create specific tags Y
def text2labelsS(text):
return [correspondances[token[2][1]] for token in text]
# prepare data for CRF
annotated_data = list()
annotated_labels = list()
for doc in data:
ar_data_ann = list()
ar_labels_ann = list()
for page in doc["pages"].values():
if page["is_annotated"]:
ar_data_ann.extend(page["offsets"])
ar_labels_ann.extend(page["specific_tags"])
if len(ar_data_ann) > 0:
annotated_data.append(ar_data_ann)
annotated_labels.append(ar_labels_ann)
print(len(annotated_data))
print(len(data))
# Define train and test sets for experiments
%%time
d = [text2featuresEX(text, lab) for text, lab in zip(annotated_data, annotated_labels)]
l = [text2labelsTBE(text) for text in annotated_data]
# Clean tag space
labels_to_keep = sorted(list(set([x for y in l for x in y])))
# VALIDATION set
X_rest, X_valid, y_rest, y_valid = train_test_split(d, l, test_size=0.1)
# TRAIN/TEST
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.25)
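# Rough arithmetic on the two-stage split above (illustrative): 10% is held out
# for validation first, then 25% of the remainder becomes the test set.
valid_frac = 0.10
test_frac = 0.25 * (1 - valid_frac)        # 0.225 of all documents
train_frac = (1 - valid_frac) - test_frac  # 0.675 of all documents
print(valid_frac, test_frac, train_frac)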
# Count labels
counts = {x:0 for x in labels_to_keep}
for c in counts.keys():
counts[c] = len([x for y in l for x in y if x==c])
print(counts)
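# The counting loop above can also be written with collections.Counter
# (illustrative, on a toy label list rather than the real `l`):
from collections import Counter
toy_labels = [['b-primary', 'o', 'o'], ['o', 'e-primary']]
print(Counter(tag for seq in toy_labels for tag in seq))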
# An example use of CRFs
%%time
crf = sklearn_crfsuite.CRF(
algorithm='lbfgs',
c1=0.1,
c2=0.1,
max_iterations=100,
all_possible_transitions=False
)
crf.fit(X_train, y_train)
y_pred = crf.predict(X_test)
print(metrics.flat_classification_report(
y_test, y_pred, labels=labels_to_keep, digits=3
))
# Parameters search
%%time
crf = sklearn_crfsuite.CRF(
max_iterations=100,
algorithm = 'lbfgs',
all_possible_transitions=False
)
params_space = {
'c1': scipy.stats.expon(scale=0.5),
'c2': scipy.stats.expon(scale=0.05)
}
scorer = make_scorer(metrics.flat_f1_score,
average='weighted', labels=labels_to_keep)
# search
rs = RandomizedSearchCV(crf, params_space,
cv=3,
verbose=1,
n_jobs=-15,
n_iter=5,
scoring=scorer)
rs.fit(X_train, y_train)
print('best params:', rs.best_params_)
print('best CV score:', rs.best_score_)
# classification report
crf = rs.best_estimator_
y_pred = crf.predict(X_test)
print(metrics.flat_classification_report(
y_test, y_pred, labels=labels_to_keep, digits=3
))
# Confusion matrices
from sklearn.metrics import confusion_matrix
from code.support_functions import flatten_predictions
print(confusion_matrix(flatten_predictions(y_test), flatten_predictions(y_pred), labels=labels_to_keep))
plt.imshow(np.log(confusion_matrix(flatten_predictions(y_test), flatten_predictions(y_pred), labels=labels_to_keep)),
cmap='Blues', interpolation='nearest')
plt.grid(False)
plt.ylabel('Ground truth', fontsize=16)
plt.xlabel('Predicted', fontsize=16)
plt.xticks(np.arange(0, len(labels_to_keep), 1))
plt.yticks(np.arange(0, len(labels_to_keep), 1))
plt.title("Confusion Matrix Model 2", fontsize=16)
# K-fold validation
scorer = make_scorer(metrics.flat_f1_score,
average='weighted', labels=labels_to_keep)
# OR rs.best_params_
crf = sklearn_crfsuite.CRF(
algorithm='lbfgs',
c2= 0.093645710804034776, c1= 0.44740028179508301,
max_iterations=200,
all_possible_transitions=True
)
k = 5
cv = cross_val_score(crf, X_rest, y_rest, cv=k, scoring=scorer, n_jobs=-2)
print("%d-fold validation mean: "%k,cv.mean())
# Learning curves
from code.support_functions import plot_learning_curve
# Slices of data for learning curves
train_sizes=np.linspace(0.1, 1.0, 10)
title = "Learning Curves for Model 2"
message = "M2"
# Cross validation scheme with 80-20 splits and 5 iterations per train data size (to evaluate variance)
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(test_size=0.2, random_state=0)
estimator = sklearn_crfsuite.CRF(
algorithm='lbfgs',
c2= 0.093645710804034776, c1= 0.44740028179508301,
max_iterations=200,
all_possible_transitions=True
)
plot_learning_curve(estimator, title, X_rest, y_rest, labels_to_keep, cv=cv, train_sizes=train_sizes, n_jobs=-2, message=message)
# VALIDATION
%%time
crf = sklearn_crfsuite.CRF(
algorithm='lbfgs',
c2= 0.093645710804034776, c1= 0.44740028179508301,
max_iterations=500,
all_possible_transitions=True
)
crf.fit(X_rest, y_rest)
y_pred = crf.predict(X_valid)
print(metrics.flat_classification_report(
y_valid, y_pred, labels=labels_to_keep, digits=3
))
# Train final models for task 1
crf = sklearn_crfsuite.CRF(
algorithm='lbfgs',
c2= 0.093645710804034776, c1= 0.44740028179508301,
max_iterations=500,
all_possible_transitions=True
)
crf.fit(d, l)
# save model
#joblib.dump(crf,'models/modelM2_ALL_L.pkl')
# load model
crf1 = joblib.load('models/modelM2_ALL_L.pkl')
def process_document(doc):
for page in doc["pages"].values():
if not page["is_annotated"]:
data_to_tag = [text2featuresEX(page["offsets"],page["specific_tags"])]
page_lab = crf.predict(data_to_tag)
assert len(page_lab[0]) == len(page["offsets"])
page.update({"BET_tags":page_lab[0]})
else:
page.update({"BET_tags":text2labelsTBE(page["offsets"])})
return doc
threads = Pool(45)
# parse all
data2 = list()
for ar in threads.imap_unordered(process_document, data):
data2.append(ar)
#pickle.dump(data2, open("data/data.p", "wb"))
# parse the references into a more json-like format
from code.support_functions import json_outputter
_, refs, _ = json_outputter(data2, 40)
print(refs[10])
```
```
import itertools
from pathlib import Path
import re
import sys
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.io
from tqdm import tqdm
sns.set_style("whitegrid")
%load_ext autoreload
%autoreload 2
sys.path.append("../src")
import nearest_neighbors
import util
decoder_path = Path("../models/decoders")
bert_encoding_path = Path("../models/bert")
sentences_path = Path("../data/sentences/stimuli_384sentences.txt")
brains_path = Path("../data/brains")
brains_cache_path = brains_path / "all_encodings.npz"
bert_base_model = "uncased_L-12_H-768_A-12"
finetune_desc = "finetune-250"
bert_models = ["MNLI", "QQP", "SST", "SQuAD", "LM", "LM_scrambled", "LM_scrambled_para", "LM_contentonly", "LM_lmonly", "LM_pos", "LM_randommask"]
subjects = ["M02", "M04", "M07", "M08", "M09", "M14", "M15", "P01"]
target_runs = [1, 2, 3, 4, 5, 6, 7, 8]
sentences = util.load_sentences(sentences_path)
steps = list(range(5, 255, 5))
```
## Model encoding evaluation
First evaluate pairwise distance between sentences in model encodings (and subject brain activations).
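As a rough, dependency-free sketch of the condensed pairwise representation that a routine like `nearest_neighbors.eval_quant` presumably produces (one cosine distance per unordered sentence pair, in `itertools.combinations` order):

```python
import itertools, math

# five toy "sentence encodings"
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.0], [0.0, 3.0]]

def cosine_dist(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

# condensed form: one distance per unordered sentence pair
dists_condensed = [cosine_dist(u, v) for u, v in itertools.combinations(X, 2)]
print(len(dists_condensed))  # 10 pairs for 5 sentences
```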
```
PCA_DIM = 256
sims, sim_stats = {}, []
SIM_METRIC = "cosine"
# Load subject images and calculate sentence distances.
if brains_cache_path.exists():
brain_encodings = np.load(brains_cache_path)
else:
brain_encodings = {}
for subject in tqdm(subjects):
brain_encodings[subject] = util.load_brain_data(brains_path / subject / "examples_384sentences.mat", project=PCA_DIM)
np.savez(brains_cache_path, **brain_encodings)
for subject in tqdm(subjects):
    sims[subject, 0] = nearest_neighbors.eval_quant(brain_encodings[subject], metric=SIM_METRIC)
    sim_stats.append((subject, 0, steps[-1], sims[subject, 0].mean(), sims[subject, 0].std()))
# Load distance data for model encodings.
for encoding_path in tqdm(list(bert_encoding_path.glob("encodings.%s*.npy" % finetune_desc)),
desc="Preparing model encodings"):
model, run, step = re.findall(r"\.(\w+)-run(\d+)-(\d+)\.npy", encoding_path.name)[0]
run, step = int(run), int(step)
if model not in bert_models or run not in target_runs: continue
if step != steps[-1]: continue
    try:
        encoding = util.load_encodings([encoding_path], project=PCA_DIM)
    except Exception:
        continue
    sims_e = nearest_neighbors.eval_quant(encoding, metric=SIM_METRIC)
    if step == steps[-1]:
        sims[model, run] = sims_e
    sim_stats.append((model, run, step, sims_e.mean(), sims_e.std()))
# Also add distance data for base model.
encoding = util.load_encodings([bert_encoding_path / ("encodings.%s.npy" % bert_base_model)], project=PCA_DIM)
sims_e = nearest_neighbors.eval_quant(encoding, metric=SIM_METRIC)
sims["_", 0] = sims_e
sim_stats.append(("_", 0, steps[-1], sims_e.mean(), sims_e.std()))
df = pd.DataFrame(sim_stats, columns=["model", "run", "step", "avg_sim", "std_sim"]).set_index(["model", "run", "step"])
df
```
## Pairwise distance: global metrics
```
f, ax = plt.subplots(figsize=(15, 10))
final_dists = df.xs(steps[-1], level="step")
order = final_dists.avg_sim.argsort()
sns.barplot(data=final_dists.reset_index(), x="model", y="avg_sim", ax=ax)#, order=order.index[order])
```
## Pairwise distance: local evaluation
```
def argsmallest_n(a, n):
ret = np.argpartition(a, n)[:n]
b = np.take(a, ret)
return np.take(ret, np.argsort(b))
tu = np.triu_indices(int(np.ceil(np.sqrt(2 * len(next(iter(sims.values())))))), 1)
def nearest_neighbor_sentences(measure, n=10, reverse=True):
closest = argsmallest_n(measure if not reverse else -measure, n)
pairs = np.column_stack((np.take(tu[0], closest),
np.take(tu[1], closest))) + 1
ret = []
for (s1_id, s2_id), sim_id in zip(pairs, closest):
ret.append((measure[sim_id], sentences[s1_id], sentences[s2_id]))
return ret
nearest_neighbor_sentences(sims["P01", 0])
nearest_neighbor_sentences(sims["LM_scrambled", 2])
nearest_neighbor_sentences(sims["LM_scrambled_para", 2])
nearest_neighbor_sentences(sims["LM_pos", 2])
nearest_neighbor_sentences(sims["MNLI", 1])
nearest_neighbor_sentences(sims["QQP", 1])
nearest_neighbor_sentences(sims["SQuAD", 1])
nearest_neighbor_sentences(sims["SST", 1])
sims_df = pd.DataFrame({"LM_pos": sims["LM_pos", 2], "LM_scrambled_para": sims["LM_scrambled_para", 2]})
sims_df["pos_vs_scrambled_para"] = sims_df.LM_pos - sims_df.LM_scrambled_para
sims_df["abs_pos_vs_scrambled_para"] = np.abs(sims_df.pos_vs_scrambled_para)
sims_df = sims_df.sort_values("abs_pos_vs_scrambled_para", ascending=False)
sims_df.head(20)
import math
def calc_row_idx(k, n):
return int(math.ceil((1/2.) * (- (-8*k + 4 *n**2 -4*n - 7)**0.5 + 2*n -1) - 1))
def elem_in_i_rows(i, n):
return i * (n - 1 - i) + (i*(i + 1))/2
def calc_col_idx(k, i, n):
return int(n - elem_in_i_rows(i + 1, n) + k)
def condensed_to_square(k, n):
i = calc_row_idx(k, n)
j = calc_col_idx(k, i, n)
return i, j
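# Quick cross-check (illustrative): for n = 4 sentences, the condensed pair
# order that condensed_to_square should reproduce is the same as
# itertools.combinations(range(4), 2).
import itertools
expected_pairs = list(itertools.combinations(range(4), 2))
print(expected_pairs)  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]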
[(sentences[condensed_to_square(idx, 384)[0]], sentences[condensed_to_square(idx, 384)[1]], row.LM_pos, row.LM_scrambled_para)
for idx, row in sims_df.sort_values("pos_vs_scrambled_para").head(20).iterrows()]
[(sentences[condensed_to_square(idx, 384)[0]], sentences[condensed_to_square(idx, 384)[1]], row.LM_pos, row.LM_scrambled_para)
for idx, row in sims_df.sort_values("pos_vs_scrambled_para").tail(20).iterrows()]
brain_nearest_neighbors = {}
for subject in subjects:
    brain_nearest_neighbors[subject] = nearest_neighbor_sentences(sims[subject, 0], n=30)
# Check Jaccard measure for nearest-neighbor predictions between subjects.
neighbor_jaccards = []
for s1, s2 in itertools.combinations(subjects, 2):
s1_neighbors = set((sent1, sent2) for _, sent1, sent2 in brain_nearest_neighbors[s1])
s2_neighbors = set((sent1, sent2) for _, sent1, sent2 in brain_nearest_neighbors[s2])
neighbor_jaccards.append((s1, s2, len(s1_neighbors & s2_neighbors) / len(s1_neighbors | s2_neighbors)))
neighbor_jaccards = pd.DataFrame(neighbor_jaccards, columns=["s1", "s2", "jaccard"]).set_index(["s1", "s2"])
neighbor_jaccards.sort_values("jaccard", ascending=False)
```
## Pairwise distance: collapse analysis
Under the collapse theory, we should find that some pairs of sentences which are well separated in brain representations are not well separated in model activations (case I) or that pairs well separated in model activations are well separated in brain representations (case II). We can measure this by computing, for each sentence pair $(s_1, s_2)$,
$$q(s_1, s_2) = \frac{dist(m(s_1), m(s_2))}{dist(b(s_1),b(s_2))}$$
for a model representation $m$ and brain representation $b$.
If $q$ is large, then the model distinguishes $s_1, s_2$ along the major axis in a way not captured by the brain. If $q$ is small, then the brain distinguishes $s_1, s_2$ along the major axis in a way not captured by the model.
```
q_measures = {}
for subject in subjects:
    subject_dists = sims[subject, 0].copy()
subject_dists -= subject_dists.min()
subject_dists /= subject_dists.max() - subject_dists.min()
for model, run in zip(bert_models, target_runs):
        model_dists = sims[model, run].copy()
model_dists -= model_dists.min()
model_dists /= model_dists.max() - model_dists.min()
q_measures[model, run, subject] = pd.Series(model_dists / (subject_dists + 1e-5))
q_measures = pd.DataFrame(pd.concat(q_measures, names=["model", "run", "subject", "pair"]))
q_measures.head()
q_means = q_measures.reset_index().groupby(["model", "pair"])[0].mean()
q_means.hist(by="model", bins=30, figsize=(15,10), sharex=True, range=(1, 5))
q_means.groupby("model").mean().sort_values(0).plot.bar()
```
## Rank change analysis
```
encoding_preds = {}
for encoding in bert_models:
for run in target_runs:
for subject in subjects:
try:
encoding_preds[encoding, steps[-1], subject] = \
pd.read_csv(decoder_path / ("encodings.%s.%s.%s-run%i-250-%s.pred.csv" % (finetune_desc, bert_base_model, encoding, run, subject)),
index_col=[0, 1])
except:
continue
encoding_preds = pd.concat(encoding_preds, names=["model", "step", "subject"])
encoding_preds.head()
rank_changes = encoding_preds.reset_index().set_index(["step", "idx"]).groupby(["model", "subject"]) \
.apply(lambda xs: xs.loc[steps[-1]] - xs.loc[steps[0]]).rename(columns={"rank": "rank_change"})
rank_changes.head()
# Average across subjects.
avg_rank_changes = rank_changes.mean(level=["model", "idx"])
avg_rank_changes.head()
sns.barplot(data=avg_rank_changes.mean(level="model").reset_index(), x="model", y="rank_change")
plt.title("Average sentence rank change")
n = 20
topn = {}
bottomn = {}
for (model, subject), rank_changes_m in rank_changes.reset_index().groupby(["model", "subject"]):
print("\n\n========\n %s // %s" % (model, subject))
rank_changes_m = rank_changes_m.set_index("idx")
top_sentences = rank_changes_m.index[rank_changes_m.rank_change.argsort()[::-1]]
topn[model, subject] = top_sentences[:n]
bottomn[model, subject] = top_sentences[::-1][:n]
for sent_id in topn[model, subject][:5]:
print(rank_changes_m.loc[sent_id].rank_change, sentences[sent_id])
print()
for sent_id in bottomn[model, subject][:5]:
print(rank_changes_m.loc[sent_id].rank_change, sentences[sent_id])
```
Compute pairwise subject Jaccard cofficients by comparing maximum-rank-change sets for each model.
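The Jaccard coefficient used here is just intersection size over union size of the two sets; a minimal illustration:

```python
# Jaccard coefficient of two small index sets (illustrative)
a = {1, 2, 3, 4}
b = {3, 4, 5}
jaccard = len(a & b) / len(a | b)
print(jaccard)  # 2 shared of 5 total -> 0.4
```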
```
jaccards = []
for model in bert_models:
    for s1, s2 in itertools.combinations(subjects, 2):
topn_s1, topn_s2 = set(topn[model, s1]), set(topn[model, s2])
bottomn_s1, bottomn_s2 = set(bottomn[model, s1]), set(bottomn[model, s2])
jaccard_positive = len(topn_s1 & topn_s2) / len(topn_s1 | topn_s2)
jaccard_negative = len(bottomn_s1 & bottomn_s2) / len(bottomn_s1 | bottomn_s2)
jaccards.append((model, s1, s2, jaccard_positive, jaccard_negative))
jaccards = pd.DataFrame(jaccards, columns=["model", "s1", "s2", "jaccard_positive", "jaccard_negative"]).set_index(["model", "s1", "s2"]).sort_index()
jaccards.head()
jaccards.sort_values("jaccard_positive", ascending=False).head(20)
```
### Per-subject rank changes
Let's go deeper and see how some of the sentences with most highly unstable rank are behaving within-subject.
```
std_rank_changes = rank_changes.std(level=["model", "idx"]).rename(columns={"rank_change": "rank_change_std"})
std_rank_changes = std_rank_changes.sort_values("rank_change_std", ascending=False)
std_rank_changes.loc["QQP"][:10]
for idx, std in std_rank_changes.loc["QQP"][:10].iterrows():
print(sentences[idx], std.rank_change_std)
print(rank_changes.loc["QQP", :, idx].sort_values("rank_change"))
f, ax = plt.subplots(figsize=(20, 8))
sns.barplot(data=rank_changes.loc["MNLI", :, std_rank_changes.loc["MNLI"][:15].index].reset_index(),
x="idx", y="rank_change", hue="subject", ax=ax)
```
## Pair separation analysis
For each model, find sentence pairs whose distance changes maximally between the base model and the end of training.
```
for model in bert_models:
    # compare each model's similarities against the LM baseline, for a fixed run (run 1)
    dist_start = sims["LM", 1]
    dist_end = sims[model, 1]
dist_changes = dist_end - dist_start
print(model)
for score, sent1, sent2 in nearest_neighbor_sentences(dist_changes):
print(score, sent1, "//", sent2)
for score, sent1, sent2 in nearest_neighbor_sentences(dist_changes, reverse=True):
print(score, sent1, "//", sent2)
print()
```
Everything is a network. [Assortativity](http://arxiv.org/pdf/cond-mat/0205405v1.pdf) is an interesting property of networks. It is the tendency of nodes in a network to be attached to other nodes that are similar in some way. In social networks, this is sometimes called "homophily."
One kind of assortativity that is particularly descriptive of network topology is *degree assortativity*. This is what it sounds like: the *assortativity* (tendency of nodes to attach to other nodes that are similar) of *degree* (the number of edges a node has).
A suggestive observation by [Newman (2002)](http://arxiv.org/pdf/cond-mat/0205405v1.pdf) is that *social* networks such as academic coauthorship networks and film collaborations tend to have positive degree assortativity, while *technical* and *biological* networks tend to have negative degree assortativity. Another way of saying this is that they are *disassortatively mixed*. This has implications for the ways we model these networks forming as well as the robustness of these networks to the removal of nodes.
Looking at open source software collaboration as a *sociotechnical* system, we can ask whether and to what extent the networks of activity are assortatively mixed. Are these networks more like social networks or technical networks? Or are they something in between?
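As a minimal illustration of what degree assortativity measures (a hand-rolled Pearson correlation over edge endpoints, not the exact routine used later), a star graph is maximally disassortative: every edge joins the high-degree hub to a degree-1 leaf:

```python
# degree assortativity of a 4-leaf star, computed by hand (illustrative)
edges = [(0, i) for i in range(1, 5)]          # hub 0 connected to leaves 1..4
deg = {0: 4, 1: 1, 2: 1, 3: 1, 4: 1}
# each edge contributes both (deg_u, deg_v) and (deg_v, deg_u)
xs = [deg[u] for u, v in edges] + [deg[v] for u, v in edges]
ys = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
print(cov / (sd_x * sd_y))  # -1.0: perfectly disassortative
```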
### Email reply networks
One kind of network that we can extract from open source project data are networks of email replies from public mailing lists. [Mailing lists and discussion forums](http://producingoss.com/en/message-forums.html) are often the first point of contact for new community members and can be the site of non-technical social processes that are necessary for the maintenance of the community. Of all the communications media used in coordinating the cooperative work of open source development, mailing lists are the most "social".
We are going to look at the mailing lists associated with a number of open source and on-line collaborative projects. We will construct for each list a network for which nodes are email senders (identified by their email address) and edges are the number of times a sender has replied directly to another participant on the list. Keep in mind that these are public discussions and that in a sense every reply is sent to everybody.
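A sketch of that edge-weight construction on hypothetical sender/reply pairs (the real extraction is done by `bigbang` below):

```python
from collections import Counter

# (sender, replied_to) pairs extracted from a hypothetical thread
replies = [("alice@x", "bob@x"), ("alice@x", "bob@x"), ("carol@x", "alice@x")]

# edge weight = number of direct replies from one sender to another
edge_weights = Counter(replies)
print(edge_weights[("alice@x", "bob@x")])  # 2
```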
```
from bigbang.archive import Archive
urls = [#"analytics",
"conferences",
"design",
"education",
"gendergap",
"historic",
"hot",
"ietf-privacy",
"ipython-dev",
"ipython-user",
"languages",
"maps-l",
"numpy-discussion",
"playground",
"potlatch-dev",
"python-committers",
"python-dev",
"scipy-dev",
"scipy-user",
"social-media",
"spambayes",
#"wikien-l",
"wikimedia-l"]
archives= [(url,Archive(url,archive_dir="../archives")) for url in urls]
archives = dict(archives)
```
The above code reads in preprocessed email archive data. These mailing lists are from a variety of different sources:
|List name | Project | Description |
|---|---|---|
|analytics| Wikimedia | |
|conferences| Python | |
|design| Wikimedia | |
|education| Wikimedia | |
|gendergap| Wikimedia | |
|historic| OpenStreetMap | |
|hot| OpenStreetMap | Humanitarian OpenStreetMap Team |
|ietf-privacy| IETF | |
|ipython-dev| IPython | Developer's list |
|ipython-user| IPython | User's list |
|languages| Wikimedia | |
|maps-l| Wikimedia | |
|numpy-discussion| Numpy | |
|playground| Python | |
|potlatch-dev| OpenStreetMap | |
|python-committers| Python | |
|python-dev| Python | |
|scipy-dev| SciPy | Developer's list|
|scipy-user| SciPy | User's list |
|social-media| Wikimedia | |
|spambayes| Python | |
|wikien-l| Wikimedia | English language Wikipedia |
|wikimedia-l| Wikimedia | |
```
import bigbang.graph as graph
igs = dict([(k,graph.messages_to_interaction_graph(v.data)) for (k,v) in list(archives.items())])
igs
```
Now we have processed the mailing lists into interaction graphs based on replies. This is what those graphs look like:
```
import networkx as nx
def draw_interaction_graph(ig):
    # graphviz_layout lives under nx.nx_agraph in newer networkx releases
    pos = nx.nx_agraph.graphviz_layout(ig, prog='neato')
node_size = [data['sent'] * 4 for name,data in ig.nodes(data=True)]
nx.draw(ig,
pos,
node_size = node_size,
node_color = 'b',
alpha = 0.4,
font_size=18,
font_weight='bold'
)
# edge width is proportional to replies sent
edgewidth=[d['weight'] for (u,v,d) in ig.edges(data=True)]
#overlay edges with width based on weight
nx.draw_networkx_edges(ig,pos,alpha=0.5,width=edgewidth,edge_color='r')
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(12.5, 7.5))
for i, (ln, ig) in enumerate(igs.items(), start=1):
    print(ln)
    try:
        plt.subplot(5, 5, i)
#print nx.degree_assortativity_coefficient(ig)
draw_interaction_graph(ig)
except:
print('plotting failure')
plt.show()
```
Well, that didn't work out so well...
I guess I should just go on to compute the assortativity directly.
This is every mailing list, with the total number of nodes and its degree assortativity computed.
```
for ln,ig in list(igs.items()):
print(ln, len(ig.nodes()), nx.degree_assortativity_coefficient(ig,weight='weight'))
```
Maybe it will be helpful to compare these values to those in the Newman, 2002 paper:
<img src="assortativity-values.png">
On the whole, with a few exceptions, these reply networks wind up looking much more like technical or biological networks than the social networks of coauthorship and collaboration. Why is this the case?
One explanation is that the mechanism at work in creating these kinds of "interaction" networks over time is very different from the mechanism for creating collaboration or coauthorship networks. These networks are derived from real communications over time in projects actively geared towards encouraging new members and getting the most out of collaborations. Perhaps these kinds of assortativity numbers are typical in projects with leaders who have inclusivity as a priority.
Another possible explanation is that these interaction networks are mirroring the structures of the technical systems that these communities are built around. There is a theory of [institutional isomorphism](http://www.jstor.org/discover/10.2307/2095101?sid=21105865961831&uid=2&uid=70&uid=2129&uid=3739560&uid=3739256&uid=4) that can be tested in this case, where social and technical institutions are paired.
### Directions for future work
Look at each project domain (IPython, Wikimedia, OSM, etc.) separately but include multiple lists from each and look at assortativity within list as well as across list. This would get at how the cyberinfrastructure topology affects the social topology of the communities that use it.
Use a more systematic sampling of email lists to get a typology of those lists with high and low assortativity. Figure out qualitatively what the differences in structure might mean (can always go in and *read the emails*).
Build a generative graph model that with high probability creates networks with this kind of structure (apparently the existing models don't do this well). Test its fit across many interaction graphs, declare victory for a science of modeling on-line collaboration.
### References
http://producingoss.com/en/message-forums.html
http://arxiv.org/abs/cond-mat/0205405
http://arxiv.org/pdf/cond-mat/0205405v1.pdf
http://arxiv.org/abs/cond-mat/0209450
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2005302
http://www.jstor.org/discover/10.2307/2095101?sid=21105865961831&uid=2&uid=70&uid=2129&uid=3739560&uid=3739256&uid=4
# Selection statements
## Boolean types, values, and expressions
![](../Photo/33.png)
- Note: the equality comparison operator is two equals signs (`==`); a single equals sign (`=`) means assignment
- In Python, the integer 0 can stand for False, and any other number stands for True
- We will come back to the use of `is` in conditional statements later
```
a = id(1)
b = id(1)
print(a,b)
# because a and b are not the same object
a is b
a = id(1)
b = a
a is b
a = True
b = False
id(True)
a == b
a is b
```
## String comparison uses ASCII values
```
a = "jokar"
b = "jokar"
a > b
```
## Markdown
- https://github.com/younghz/Markdown
- It is often combined with data and formulas, for example:
$\sum_{j=1}^{N}x_{j}$
## EP:
- <img src="../Photo/34.png"></img>
- Input a number and determine whether it is odd or even
## Generating random numbers
- The function random.randint(a, b) produces a random integer between a and b, including both a and b
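A quick check of that inclusivity (illustrative; the seed is arbitrary):

```python
import random

random.seed(0)  # arbitrary seed, just to make the draw repeatable
draws = {random.randint(1, 3) for _ in range(100)}
print(draws)  # both endpoints can occur
```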
```
import random
random.randint(0,1)
if condition:
    do something
else:
other
for iter_ in xxx:
do something
age = 10
Joker = input('Name')  # input() already returns a string; eval is not needed here
print(Joker)
```
Generate a random number and have the user keep guessing: if the input is larger than the random number, say it is too big; if smaller, too small; keep prompting until the guess is correct
```
number = random.randint(0, 5)
for i in range(5):
    input_ = eval(input('>>'))
    if input_ > number:
        print('Too big')
    if input_ < number:
        print('Too small')
    if number == input_:
        print('Correct')
        break
for i in range(5):
    print(i)
```
## Other random methods
- random.random returns a random float in the half-open interval [0.0, 1.0)
- random.randrange(a, b) also uses a half-open interval [a, b)
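The half-open behaviour can be checked the same way (illustrative; the seed is arbitrary):

```python
import random

random.seed(1)  # arbitrary seed for repeatability
vals = {random.randrange(0, 2) for _ in range(50)}
print(vals)  # the endpoint 2 itself is never returned
```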
```
random.random()
import matplotlib.pyplot as plt
image=plt.imread('/Users/huwang/Downloads/cat.jpeg')
print(image*random.random())
plt.imshow(image)
```
## EP:
- Generate two random integers number1 and number2, show them to the user, have the user input their sum, and check whether it is correct
- Advanced: write a random roll-call program
```
number_1 = random.randrange(0,10)
number_2 = random.randrange(0,10)
while 1:
sum_ = eval(input('>>'))
if sum_ == (number_1 + number_2):
print('Congratulations! Correct~')
else:
        print('Sorry, wrong answer.')
```
## if statements
- A one-way if statement runs its body only when the condition is true
- Python has several kinds of selection statements:
> - one-way if
- one-way if-else
- nested if
- multi-way if-elif-else
- Note: when a statement contains sub-statements, the sub-statements must be indented by at least one level
- Never mix tabs and spaces for indentation; use only tabs or only spaces
- If output should appear regardless of whether the if condition is true, align that statement with the if
```
input_ = eval(input('>>'))
if input_ > number:
    print('Too big')
if input_ < number:
    print('Too small')
if number == input_:
    print('Correct')
print("Don't lose heart")
```
A blind-date test, as a decision flow:
- Age: young?
- Handsome?
- Not handsome: think it over
- Wife: none, get married right away; has one, "Temptation to Go Home"
If you can't write this as code, break up on the spot, and society gains one more scumbag.
```
age = input('Are you young? [y/n]')
if age == 'y':
    handsome = input('Are you handsome? [y/n]')
    if handsome == 'y':
        wife = input('Do you have a wife? [y/n]')
        if wife == 'y':
            print('"Temptation to Go Home"')
        else:
            print('Get married right away')
    else:
        print('Let me think it over')
else:
    print('Bye~')
```
## EP:
- Have the user input a number and determine whether it is odd or even
- Advanced: take a look at the case study in Section 4.5
## One-way if-else statements
- If the condition is true, execute the if body; otherwise execute the else body
## EP:
- Generate two random integers number1 and number2, show them to the user, have the user input a number, and check whether it is correct; if correct print "You're correct", otherwise print the correct answer
## Nested if and multi-way if-elif-else
![](../Photo/35.png)
```
if score >= 90:
    grade = 'A'
elif score >= 80:
    grade = 'B'
```
## EP:
- Prompt the user to enter a year, then display the zodiac animal for that year
![](../Photo/36.png)
- A program that computes the Body Mass Index (BMI)
- BMI = weight in kilograms divided by the square of height in meters
![](../Photo/37.png)
```
tizhong = eval(input('Weight (kg): '))
shengao = eval(input('Height (m): '))
BMI = tizhong / shengao ** 2
if BMI < 18.5:
    print('Underweight')
elif 18.5 <= BMI < 25:
    print('Normal')
elif 25 <= BMI < 30:
    print('Overweight')
else:
    print('Obese')
```
## Logical operators
![](../Photo/38.png)
![](../Photo/39.png)
![](../Photo/40.png)
## EP:
- Leap year test: a year is a leap year if it is divisible by 4 but not by 100, or if it is divisible by 400
- Prompt the user to enter a year and report whether it is a leap year
- Prompt the user to enter a number and determine whether it is a narcissistic (Armstrong) number
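That leap-year rule as a small function (a sketch; the homework below implements it inline):

```python
def is_leap(year):
    # divisible by 4 but not by 100, or divisible by 400
    return year % 400 == 0 or (year % 4 == 0 and year % 100 != 0)

print(is_leap(2000), is_leap(1900), is_leap(2024))  # True False True
```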
## Case study: lottery
![](../Photo/41.png)
```
import random
number = random.randint(10,99)
print(number)
N = input('>>')
number_shi = number // 10
number_ge = number % 10
if N[0] == '0':
N_shi = 0
else:
N_shi = int(N) // 10
N_ge = int(N) % 10
if number == int(N):
print('10000')
# elif (number_shi == N_shi or number_shi==N_ge) and (number_ge == N_shi or number_ge==N_ge):
elif number_shi + number_ge == N_shi + N_ge:
print('3000')
elif (number_ge ==N_ge or number_ge == N_shi) or (number_shi == N_ge or number_shi == N_shi):
print('1000')
a = "05"
a[0]
05 // 10
Number = eval(input('>>'))
bai = Number // 100
shi = Number // 10 % 10
ge = Number % 10
if bai**3 + shi **3 + ge **3 == Number:
print('ๆฐดไป่ฑ')
else:
print('ไธๆฏๆฐดไป่ฑ')
223 // 10
```
# Homework
- 1

```
a,b,c = eval(input("Enter a,b,c:"))
import math
def judge(a,b,c):
"""
ๅ่ฝ๏ผ่ฟๅๅคๆญๅผ็ปๆ
ๅๆฐ๏ผ็จๆท่พๅ
ฅ็ๆฐๆฎ
"""
result = 0
if((b*b)-(4*a*c)>0):
result=1
elif((b*b)-(4*a*c)==0):
result=0
else:
result=-1
return result
if judge(a,b,c)==-1:
print("The equation has no real roots")
elif judge(a,b,c)==0:
r1 = (-b+(math.sqrt((b*b)-(4*a*c))))/(2*a)
print("The roots are %.0f"%r1)
elif judge(a,b,c)==1:
r1 = (-b+(math.sqrt((b*b)-(4*a*c))))/(2*a)
r2 = (-b-(math.sqrt((b*b)-(4*a*c))))/(2*a)
print("The roots are %.6f and %.6f"%(r1,r2))
```
- 2

```
import random
num1 = random.randrange(0,100)
num2 = random.randrange(0,100)
sum_ = float(input("%d + %d == " % (num1, num2)))
if(sum_ == num1 + num2):
    print("Correct")
else:
    print("Wrong")
```
- 3

```
today = int(input("Enter today's day:"))
future = int(input("Enter the number of days elapsed since today:"))
week = ['Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday']
futureweek = today + future
ok=0
ok = futureweek % 7
print("Today is %s and the future day is %s"%(week[today],week[ok]))
```
- 4

```
a = int(input("First number: "))
b = int(input("Second number: "))
c = int(input("Third number: "))
t=0
arr = [a,b,c]
for j in range(3):
for i in range(2):
if(arr[i]>arr[i+1]):
t=arr[i]
arr[i]=arr[i+1]
arr[i+1]=t
print(arr)
```
- 5

```
First1,First2 = eval(input("Enter weight and price for package 1:"))
Second1,Second2 = eval(input("Enter weight and price for package 2:"))
price1 = First1 / First2
price2 = Second1 / Second2
if(price1>price2):
print("Package 1 has the better price.")
else:
print("Package 2 has the better price.")
```
- 6

```
month = input("Enter the month: ")
year = int(input("Enter the year: "))
if (year % 400 == 0) or (year % 4 == 0 and year % 100 != 0):
    if month == '2':
        print("Month %s of %d has 29 days" % (month, year))
    elif month in ['1', '3', '5', '7', '8', '10', '12']:
        print("Month %s of %d has 31 days" % (month, year))
    elif month in ['4', '6', '9', '11']:
        print("Month %s of %d has 30 days" % (month, year))
else:
    if month == '2':
        print("Month %s of %d has 28 days" % (month, year))
    elif month in ['1', '3', '5', '7', '8', '10', '12']:
        print("Month %s of %d has 31 days" % (month, year))
    elif month in ['4', '6', '9', '11']:
        print("Month %s of %d has 30 days" % (month, year))
```
- 7

```
guess = input("Enter your guess [heads/tails]: ")
import numpy as np
res = np.random.choice(['heads','tails'])
if (guess == res):
    print("Correct guess")
else:
    print("Wrong guess")
```
- 8

```
import numpy as np

hand = input("scissors (0), rock (1), paper (2): ")
res = np.random.choice(['0', '1', '2'])
arr = ['scissors', 'rock', 'paper']
if hand == res:
    print("The computer is %s. You are %s. It is a draw." % (arr[int(res)], arr[int(hand)]))
elif (hand, res) in (('1', '0'), ('2', '1'), ('0', '2')):
    # rock beats scissors, paper beats rock, scissors beats paper
    print("The computer is %s. You are %s. You won." % (arr[int(res)], arr[int(hand)]))
else:
    print("The computer is %s. You are %s. You lost." % (arr[int(res)], arr[int(hand)]))
```
- 9

```
import math

year = int(input("Enter year (e.g. 2008): "))
month = int(input("Enter month: 1-12: "))
day = int(input("Enter the day of the month: 1-31: "))
# Zeller's congruence treats January and February as months 13 and 14
# of the previous year
if month <= 2:
    month += 12
    year -= 1
h1 = day + math.floor((26 * (month + 1)) / 10)
h2 = (year % 100) + math.floor((year % 100) / 4) + math.floor(math.floor(year / 100) / 4) + 5 * math.floor(year / 100)
h = (h1 + h2) % 7
week = ['Saturday', 'Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
print("Day of week is %s" % week[h])
```
- 10

```
import numpy as np

rank = np.random.choice(['Ace', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'Jack', 'Queen', 'King'])
suit = np.random.choice(['Clubs', 'Hearts', 'Diamonds', 'Spades'])
print("The card you picked is the %s of %s" % (rank, suit))
```
- 11

```
palindrome = int(input("Enter a three-digit integer: "))
hundreds = palindrome // 100
units = palindrome % 10
if hundreds == units:
    print("%d is a palindrome" % palindrome)
else:
    print("%d is not a palindrome" % palindrome)
```
- 12

```
edges1, edges2, edges3 = eval(input("Enter three edges: "))
# triangle inequality: each pair of edges must sum to more than the third
if (edges1 + edges2 > edges3) and (edges1 + edges3 > edges2) and (edges2 + edges3 > edges1):
    print("The perimeter is %d" % (edges1 + edges2 + edges3))
else:
    print("This input is illegal")
```
| github_jupyter |
# bioimageio.core usage examples
```
import os
import hashlib
import bioimageio.core
import imageio
# we use napari for visualising images; you can install it via `pip install napari` or `conda install napari`
import napari
import numpy as np
import xarray as xr
from bioimageio.core.prediction_pipeline import create_prediction_pipeline
# helper function for showing multiple images in napari
def show_images(*images, names=None):
    v = napari.Viewer()
    for i, im in enumerate(images):
        name = None if names is None else names[i]
        if isinstance(im, str):
            im = imageio.imread(im)
        v.add_image(im, name=name)
```
## Loading a model
We will use a model that predicts foreground and boundaries in images of nuclei from the [Kaggle nucleus segmentation challenge](https://www.kaggle.com/c/data-science-bowl-2018).
Find the model on bioimage.io here: https://bioimage.io/#/?id=10.5072%2Fzenodo.881940
First, we will use `bioimageio.core.load_resource_description` to load the model and inspect the obtained model resource.
```
# the model can be loaded using different representations:
# the doi of the zenodo entry corresponding to the model
rdf_doi = "10.5281/zenodo.6287342"
# the url of the yaml file containing the model resource description
rdf_url = "https://zenodo.org/record/6287342/files/rdf.yaml"
# filepath to the downloaded model (either zipped package or yaml)
# to download it from the website:
# - go to https://bioimage.io/#/?id=10.5281%2Fzenodo.5764892%2F5764893
# - click the download icon
# - select "ilastik" weight format
rdf_path = "/home/pape/Downloads/nuclei-segmentation-boundarymodel_pytorch_state_dict.zip"
# load model from link to rdf.yaml
model_resource = bioimageio.core.load_resource_description(rdf_url)
# load model from doi
model_resource = bioimageio.core.load_resource_description(rdf_doi)
# load model from path to the zipped model files
model_resource = bioimageio.core.load_resource_description(rdf_path)
# the "model_resource" instance returned by load_resource_description
# contains the information stored in the resource description (see https://github.com/bioimage-io/spec-bioimage-io/blob/gh-pages/model_spec_latest.md)
# we can e.g. check what weight formats are available in the model (pytorch_state_dict for the model used here)
print("Available weight formats for this model:", model_resource.weights.keys())
# or where the (downloaded) weight files are stored
print("Pytorch state dict weights are stored at:", model_resource.weights["pytorch_state_dict"].source)
print()
# or what inputs the model expects
print("The model requires as inputs:")
for inp in model_resource.inputs:
    print("Input with axes:", inp.axes, "and shape", inp.shape)
print()
# and what the model outputs are
print("The model returns the following outputs:")
for out in model_resource.outputs:
    print("Output with axes:", out.axes, "and shape", out.shape)
# the function 'test_model' from 'bioimageio.core.resource_tests' can be used to fully test the model,
# including running prediction for the test input(s) and checking that they agree with the test output(s)
# before using a model, it is recommended to check that it properly works with this function
# 'test_model' returns a dict with 'status'='passed'/'failed' and more detailed information
from bioimageio.core.resource_tests import test_model
test_result = test_model(model_resource)
if test_result["status"] == "failed":
    print("model test:", test_result["name"])
    print("The model test failed with:", test_result["error"])
    print("with the traceback:")
    print("".join(test_result["traceback"]))
else:
    print("The model passed all tests")
```
## Prediction with the model
`bioimageio.core` implements functionality to run predictions with a model in bioimage.io format.
This includes functions to run prediction with numpy arrays (more precisely xarray DataArrays) and convenience functions to run predictions for inputs stored on disc.
```
# load the example image for this model, which is stored in numpy file format
input_image = np.load(model_resource.test_inputs[0])
# define a function to run prediction on a numpy input
# "devices" can be used to run prediction on a gpu instead of the cpu
# "weight_format" to specify which weight format to use in case the model contains different weight formats
def predict_numpy(model, input_, devices=None, weight_format=None):
    # the prediction pipeline combines preprocessing, prediction and postprocessing.
    # it should always be used for prediction with a bioimageio model
    pred_pipeline = create_prediction_pipeline(
        bioimageio_model=model, devices=devices, weight_format=weight_format
    )
    # the prediction pipeline expects inputs as xarray.DataArrays.
    # these are similar to numpy arrays, but allow for named dimensions (the dims keyword argument)
    # in bioimage.io the dims have to agree with the input axes required by the model
    axes = tuple(model.inputs[0].axes)
    input_tensor = xr.DataArray(input_, dims=axes)
    # the prediction pipeline call expects the same number of inputs as the number of inputs required by the model
    # in the case here, the model just expects a single input. in the case of multiple inputs use
    # prediction = pred_pipeline(input1, input2, ...)
    # or, if you have the inputs in a list or tuple
    # prediction = pred_pipeline(*inputs)
    # the call returns a list of output tensors, corresponding to the output tensors of the model
    # (in this case, we just have a single output)
    prediction = pred_pipeline(input_tensor)[0]
    return prediction
# run prediction for the test input and show the result
prediction = predict_numpy(model_resource, input_image)
show_images(input_image, prediction, names=["image", "prediction"])
# the utility function `predict_image` can be used to run prediction with an image stored on disc
from bioimageio.core.prediction import predict_image
# the filepath where the output should be stored; supports most common image formats as well as the npy file format
outputs = ["prediction.tif"]
predict_image(
    model_resource, model_resource.test_inputs, outputs
)
# the output tensor contains 2 channels, which is not supported by normal tif.
# thus, these 2 channels are stored as 2 separate images
fg_pred = imageio.imread("prediction-c0.tif")
bd_pred = imageio.imread("prediction-c1.tif")
show_images(input_image, fg_pred, bd_pred,
            names=["image", "foreground-prediction", "boundary-prediction"])
# the utility function `predict_images` can be used to run prediction for a batch of images stored on disc
# note: this only works for models which have a single input and output!
from bioimageio.core.prediction import predict_images
# here, we use a subset of the dsb challenge data for prediction from the stardist (https://github.com/stardist/stardist)
# you can obtain it from: https://github.com/stardist/stardist/releases/download/0.1.0/dsb2018.zip
# select all images in the "test" subfolder
from glob import glob
folder = "/home/pape/Downloads/dsb2018(1)/dsb2018/test"
inputs = glob(os.path.join(folder, "images", "*.tif"))
# create an output folder and specify the output path for each image
output_folder = os.path.join(folder, "predictions")
os.makedirs(output_folder, exist_ok=True)
outputs = [os.path.join(output_folder, os.path.split(inp)[1]) for inp in inputs]
print(len(inputs), "images for prediction were found")
# the model at hand can only predict images which have a xy-size that is
# a multiple of 16. To run with arbitrary size images, we pass the `padding`
# argument to `predict_images` and specify that the input is padded to the next bigger
# size that is divisible by 16 (mode: dynamic)
# as an alternative `"mode": "fixed"` will pad to a fixed shape, e.g.
# `{"x": 512, "y": 512, "mode": "fixed"}` will always pad to a size of 512x512
# the padding is cropped again after the prediction
padding = {"x": 16, "y": 16, "mode": "dynamic"}
predict_images(
    model_resource, inputs, outputs, padding=padding, verbose=True
)
# check the first input/output
show_images(inputs[0], outputs[0].replace(".tif", "-c0.tif"), outputs[0].replace(".tif", "-c1.tif"))
# instead of padding, we can also use tiling.
# here, we specify a tile size of 224 and a halo (= extension of tile on both sides)
# size of 16, which results in an effective tile shape of 256 = 224 + 2*16
tiling = {
    "tile": {"x": 224, "y": 224},
    "halo": {"x": 16, "y": 16},
}
predict_images(
    model_resource, inputs, outputs, tiling=tiling, verbose=True
)
# check the first input/output
show_images(inputs[0], outputs[0].replace(".tif", "-c0.tif"), outputs[0].replace(".tif", "-c1.tif"))
```
## Create a bioimage.io model package
`bioimageio.core` also implements functionality to create a model package compatible with the [bioimageio model spec](https://github.com/bioimage-io/spec-bioimage-io/blob/gh-pages/model_spec_latest.md) ready to be shared via
the [bioimage.io model zoo](https://bioimage.io/#/).
Here, we will use this functionality to create two models, one that adds thresholding as post-processing to the outputs and another one that also adds weights in torchscript format.
```
# get the python file defining the architecture.
# this is only required for models with pytorch_state_dict weights
def get_architecture_source(rdf):
    # here, we need the raw resource, which contains the information from the resource description
    # before evaluation, e.g. the file and name of the python file with the model architecture
    raw_resource = bioimageio.core.load_raw_resource_description(rdf)
    # the python file defining the architecture for the pytorch weights
    model_source = raw_resource.weights["pytorch_state_dict"].architecture
    # download the source file if necessary
    source_file = bioimageio.core.resource_io.utils.resolve_source(
        model_source.source_file
    )
    # if the source file path does not exist, try combining it with the root path of the model
    if not os.path.exists(source_file):
        source_file = os.path.join(raw_resource.root_path, os.path.split(source_file)[1])
    assert os.path.exists(source_file), source_file
    class_name = model_source.callable_name
    return f"{source_file}:{class_name}"
# first new model: add thresholding of outputs as post-processing
# the convenience function `build_model` creates a bioimageio model spec compatible package (=zipped folder)
from bioimageio.core.build_spec import build_model
# create a subfolder to store the files for the new model
model_root = "./new_model"
os.makedirs(model_root, exist_ok=True)
# create the expected output tensor (= outputs thresholded at 0.5)
threshold = 0.5
new_output = prediction > threshold
new_output_path = f"{model_root}/new_test_output.npy"
np.save(new_output_path, new_output)
# add thresholding as post-processing procedure to our model
preprocessing = [[{"name": prep.name, "kwargs": prep.kwargs} for prep in inp.preprocessing] for inp in model_resource.inputs]
postprocessing = [[{"name": "binarize", "kwargs": {"threshold": threshold}}]]
# get the model architecture
# note that this is only necessary for pytorch state dict models
model_source = get_architecture_source(rdf_doi)
# we use the `parent` field to indicate that the new model is created based on
# the nucleus segmentation model we have obtained from bioimage.io
# this field is optional and only needs to be given for models that are created based on other models from bioimage.io
# the parent is specified via its doi and the hash of its rdf file
model_root_folder = os.path.split(model_resource.weights["pytorch_state_dict"].source)[0]
rdf_file = os.path.join(model_root_folder, "rdf.yaml")
with open(rdf_file, "rb") as f:
    rdf_hash = hashlib.sha256(f.read()).hexdigest()
parent = {"uri": rdf_doi, "sha256": rdf_hash}
# the name of the new model and where to save the zipped model package
name = "new-model1"
zip_path = os.path.join(model_root, f"{name}.zip")
# `build_model` needs some additional information about the model, like citation information
# all this additional information is passed as plain python types and will be converted into the bioimageio representation internally
# for more information, check out the function signature
# https://github.com/bioimage-io/core-bioimage-io-python/blob/main/bioimageio/core/build_spec/build_model.py#L252
cite = [{"text": cite_entry.text, "url": cite_entry.url} for cite_entry in model_resource.cite]
# the training data used for the model can also be specified by linking to a dataset available on bioimage.io
training_data = {"id": "ilastik/stradist_dsb_training_data"}
# the axes descriptions for the inputs / outputs
input_axes = ["bcyx"]
output_axes = ["bcyx"]
# the pytorch_state_dict weight file
weight_file = model_resource.weights["pytorch_state_dict"].source
# build the model! it will be saved to 'zip_path'
new_model_raw = build_model(
    weight_uri=weight_file,
    test_inputs=model_resource.test_inputs,
    test_outputs=[new_output_path],
    input_axes=input_axes,
    output_axes=output_axes,
    output_path=zip_path,
    name=name,
    description="nucleus segmentation model with thresholding",
    authors=[{"name": "Jane Doe"}],
    license="CC-BY-4.0",
    documentation=model_resource.documentation,
    covers=[str(cover) for cover in model_resource.covers],
    tags=["nucleus-segmentation"],
    cite=cite,
    parent=parent,
    architecture=model_source,
    model_kwargs=model_resource.weights["pytorch_state_dict"].kwargs,
    preprocessing=preprocessing,
    postprocessing=postprocessing,
    training_data=training_data,
)
# load the new model from the zipped package, run prediction and check the result
new_model = bioimageio.core.load_resource_description(zip_path)
prediction = predict_numpy(new_model, input_image)
show_images(input_image, prediction, names=["input", "binarized-prediction"])
```
## Add different weight format and package model with new weights
```
# `convert_weights_to_torchscript` creates torchscript weights based on the weights loaded from pytorch_state_dict
from bioimageio.core.weight_converter.torch import convert_weights_to_torchscript
# `add_weights` adds new weights to the model specification
from bioimageio.core.build_spec import add_weights
# the path to save the newly created torchscript weights
weight_path = os.path.join(model_root, "weights.torchscript")
convert_weights_to_torchscript(new_model, weight_path)
# the path to save the new model with torchscript weights
zip_path = f"{model_root}/new_model2.zip"
new_model2_raw = add_weights(new_model_raw, weight_path, weight_type="torchscript", output_path=zip_path)
# load the new model from the zipped package, run prediction and check the result
new_model = bioimageio.core.load_resource_description(zip_path)
prediction = predict_numpy(new_model, input_image, weight_format="torchscript")
show_images(input_image, prediction, names=["input", "binarized-prediction"])
# models in the bioimageio.core format can also directly be exported as zipped packages
# using `bioimageio.core.export_resource_package`
bioimageio.core.export_resource_package(new_model2_raw, output_path="another_model.zip")
```
| github_jupyter |
# Software Design for Scientific Computing
----
## Unit 3: NO-SQL
### Agenda for Unit 3
---
#### Class 1
- Reading and writing files.
- Persisting binaries in Python (pickle).
- INI/CFG, CSV, JSON, XML and YAML files
#### Class 2
- Relational databases and SQL.
#### Class 3
- **Brief review of non-relational databases.**
- The HDF5 format.
### Non-relational databases
<small>Sources
<a href="https://www.thoughtworks.com/insights/blog/nosql-databases-overview">https://www.thoughtworks.com/insights/blog/nosql-databases-overview</a>,
<a href="https://www.scylladb.com/resources/nosql-vs-sql/">https://www.scylladb.com/resources/nosql-vs-sql/</a>
</small>
- Relational databases have dominated the software industry, providing mechanisms to store data persistently, concurrency control, transactions, and a standard API and mechanisms to integrate application data.
- In recent years, a new type of database has emerged, known as **NoSQL** databases, which challenges the dominance of relational databases.

### What NoSQL means
NoSQL was a hashtag (`#nosql`) chosen for a meetup to discuss these new databases. The most important outcome of the rise of NoSQL is Polyglot Persistence. NoSQL has no prescriptive definition, but we can make a set of common observations, such as:
- Does not use the relational model
- Runs well on clusters
- Mostly open source
- Built for the current state of the web
- Schema-less (not explicit)
### Why NoSQL databases?
- They mainly address the frustration of the impedance mismatch between relational data structures and the application's in-memory data structures.
- Using NoSQL databases lets developers work without having to convert in-memory structures into relational ones.

### Why NoSQL databases?
- There is also a move away from using databases as integration points, in favor of encapsulating databases within applications and integrating through services.
- The rise of the web as a platform also created a vital shift in data storage: the need to support large volumes of data running on clusters.
- Relational databases were not designed to run efficiently on clusters.
- The data storage needs of an ERP application are very different from those of a Facebook or an Etsy, for example.

### Big Data

I also like **V**olatility
### Distribution models
NoSQL databases tend to be useful for aggregate-oriented systems.
Aggregate-oriented databases make it easy to distribute data, since the distribution mechanism moves the aggregate around and does not worry about related data. There are two styles of data distribution:
- **Sharding:** sharding distributes different data across multiple servers, so each server acts as the single source for a subset of the data.
- **Replication:** replication copies data across multiple servers, so each piece of data can be found in several places. Replication comes in two forms:
  - **Master-slave** makes one node the authoritative copy, handling writes, while slaves synchronize with the master and can handle reads.
  - **Peer-to-peer** allows writes to any node; the nodes coordinate to synchronize their copies of the data.
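The sharding style can be sketched as a key-to-server router. This is a toy illustration only: the server names and the hash-modulo scheme are assumptions for the example, not the behavior of any particular product.

```python
import hashlib

# Hypothetical shard router: each key is deterministically assigned to one
# server, so every server owns a disjoint subset of the data.
SHARDS = ["server-a", "server-b", "server-c"]

def shard_for(key: str) -> str:
    # a stable hash keeps a given key on the same shard across processes
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user:42"))
```

Real systems typically use consistent hashing instead of a plain modulo, so that adding a shard does not reshuffle most keys.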
### CAP theorem
The CAP theorem explains why a distributed database cannot guarantee both consistency and availability in the face of network partitions.
The theorem says that an application can guarantee only two of the following three properties at the same time:
- **Consistency:** everyone gets the same answer
- **Availability:** access continues, even during a partial system failure
- **Partition tolerance:** operations remain intact, even if some nodes cannot communicate
A useful compromise is to allow **eventual consistency**.
Determining whether your application's data is a suitable candidate for eventual consistency is a business decision.
### ACID vs BASE
Relational databases, and some NoSQL databases (those that support strong consistency), that meet these four goals are considered ACID compliant. This means the data is consistent after transactions complete.
However, other non-relational NoSQL databases turn to the BASE model to obtain benefits such as scale and resilience. BASE stands for:
- **Basic Availability:** data is available most of the time, even during a failure that disrupts part of the database. Data is replicated and spread across many different storage systems.
- **Soft state:** replicas are not consistent all of the time.
- **Eventual consistency:** the data will be consistent at some point, with no guarantee of when.
## Database types: Document databases

- The database stores and retrieves documents, which can be XML, JSON, BSON, etc.
- These documents are self-describing, hierarchical tree data structures that can consist of maps, collections and scalar values.
- The stored documents are similar to each other, but do not have to be exactly alike.
- Keys are part of the documents.
- Examples: MongoDB, CouchDB, Terrastore, OrientDB or RavenDB.
### Database types: Document databases - An example with MongoDB
```python
>>> import datetime
>>> import pymongo  # pip install pymongo
>>> client = pymongo.MongoClient('mongodb://localhost:27017/')
>>> db = client['test_database']  # select a database (name is illustrative)
>>> posts = db['posts']
>>> post = {"author": "Mike",
...         "text": "My first blog post!",
...         "tags": ["mongodb", "python", "pymongo"],
...         "date": datetime.datetime.utcnow()}
>>> post_id = posts.insert_one(post).inserted_id
>>> post_id
ObjectId('...')
```
### Database types: Document databases - An example with MongoDB
```python
>>> posts.find_one({"author": "Mike"})
{u'_id': ObjectId('...'),
u'author': u'Mike',
u'date': datetime.datetime(...),
u'tags': [u'mongodb', u'python', u'pymongo'],
u'text': u'My first blog post!'}
```
### Database types: Document databases - An example with MongoDB + ODM
```python
>>> import mongokit as mk  # pip install mongokit
>>> connection = mk.Connection()
>>> @connection.register
... class BlogPost(mk.Document):
...     structure = {
...         'title': unicode, 'body': unicode,
...         'author': unicode, 'rank': int,
...         'date_creation': datetime.datetime}
...     required_fields = ['title', 'author', 'date_creation']
...     default_values = {'rank': 0, 'date_creation': datetime.datetime.utcnow}
```
### Database types: Document databases - An example with MongoDB + ODM
```python
>>> blogpost = connection.test.example.BlogPost()
>>> blogpost['title'] = u'my title'
>>> blogpost['body'] = u'a body'
>>> blogpost['author'] = u'me'
>>> blogpost
{'body': u'a body', 'title': u'my title', 'date_creation': datetime.datetime(...), ...}
>>> blogpost.save()
```
### Database types: Column databases

- These databases store rows that have an arbitrary number of columns associated with a row key.
- Columns are groups of related data that are **often** accessed together.
- For a Customer, we often access their Profile information at the same time, but not their Orders.
- The difference is that the rows do not all have to have the same columns, and columns can be added to any row at any time without having to add them to the other rows.
- Examples: Cassandra, HBase, Hypertable and SciDB.
### Database types: Column databases - An example with SciDB
```python
>>> import numpy
>>> from scidbpy import connect  # pip install scidb-py
>>> db = connect('http://localhost:8080')
>>> db.arrays.foo[:]
i x
0 0 0.0
1 1 1.0
2 2 2.0
>>> ar = db.input(
... upload_data=numpy.arange(3)).store("ar")
>>> db.join(ar, 'foo').apply('j', ar.i + 1)[:]
i x x_1 j
0 0 0 0.0 1
1 1 1 1.0 2
2 2 2 2.0 3
```
## Database types: Graph databases

- They allow storing entities and the relationships between those entities.
- Entities are also known as nodes, which have properties. Think of a node as an instance of an object in the application.
- Relationships are known as edges, which can also have properties.
## Database types: Graph databases

- Edges have directional meaning.
- Nodes are organized by relationships, which let you find interesting patterns among the nodes.
- The graph organization allows the data to be stored once and then interpreted in different ways depending on the relationships.
## Database types: Graph databases

- The graph organization allows the data to be stored once and then interpreted in different ways depending on the relationships.
- Usually, when we store a graph-like structure in an RDBMS, it is for a single type of relationship ("who is my manager" is a common example); more relationships mean schema changes.
- Likewise, in relational databases we model the graph up front.
- Examples: Neo4J, Infinite Graph and OrientDB.
### Database types: Graph databases - An example with Neo4J
```python
>>> import neomodel as nm  # pip install neomodel
>>> nm.config.DATABASE_URL = 'bolt://neo4j:password@localhost:7687'
>>> class Country(nm.StructuredNode):
...     code = nm.StringProperty(unique_index=True, required=True)
>>> class Person(nm.StructuredNode):
...     uid = nm.UniqueIdProperty()
...     name = nm.StringProperty(unique_index=True)
...     country = nm.RelationshipTo(Country, 'IS_FROM')
```
### Database types: Graph databases - An example with Neo4J
```python
>>> jim = Person(name='jim').save()  # Create
>>> jim.name = "Jim"
>>> jim.save()  # Update (with validation)
>>> jim.id  # neo4j internal id
```
```python
>>> all_nodes = Person.nodes.all()
>>> jim = Person.nodes.get(name='Jim')
```
```python
>>> germany = Country(code='DE').save()
>>> jim.country.connect(germany)
>>> if jim.country.is_connected(germany):
...     print("Jim's from Germany")
```
## Database types: Key-Value databases
- They are the simplest NoSQL data stores to use from an API perspective.
- The client can get the value for a key, put a value for a key, or delete a key from the data store.
- The value is a BLOB that the data store just stores, without caring what is inside.
- Since key-value stores always use primary-key access, they generally have great performance and can be scaled easily.
## Database types: Key-Value databases

- Some popular databases of this type are Riak, Redis (often called a data structure server) or Memcached and its flavors.
- They are not all alike; there are big differences between these products.
- For example: Memcached data is not persistent, while in Riak it is; such characteristics matter when implementing certain solutions.
### Database types: Key-Value databases - An example with Redis
```python
>>> import redis  # pip install redis
>>> redis_db = redis.StrictRedis(host="localhost", port=6379, db=0)
>>> redis_db.keys()  # see what keys are in Redis
[]
>>> redis_db.set('full stack', 'python')
True
>>> redis_db.keys()
[b'full stack']
>>> redis_db.get('full stack')
b'python'
```
```
import pickle
from collections.abc import MutableMapping

import attr

@attr.s()
class KVDatabase(MutableMapping):

    path = attr.ib()

    def _get_data(self):
        try:
            with open(self.path, "rb") as fp:
                return pickle.load(fp)
        except (FileNotFoundError, EOFError):
            return {}

    def _set_data(self, d):
        with open(self.path, "wb") as fp:
            pickle.dump(d, fp)

    def __setitem__(self, k, v):
        data = self._get_data()
        data[k] = v
        self._set_data(data)

    def __getitem__(self, k):
        data = self._get_data()
        return data[k]

    def __delitem__(self, k):
        data = self._get_data()
        del data[k]
        self._set_data(data)

    def __len__(self):
        return len(self._get_data())

    def __iter__(self):
        return iter(self._get_data())

db = KVDatabase("file.pkl")
db["l"] = 1
```
## Database types: Key-Value databases
```
db = KVDatabase("file.pkl")
db["algo"]
```
- Obviously, this is rubbish.
- We could think about adding transactions.
- We could store the data structure in something like a B-Tree.
- We could think about keeping indexes.
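The first improvement on that list, transactions, can be sketched with a staged-commit wrapper. This is a toy stand-in backed by a plain dict (not the pickle-based class above); the class and method names are invented for illustration.

```python
import contextlib

# Toy sketch of transactions: stage writes on a private copy and swap them
# in atomically on success; a failure discards the staged copy and leaves
# the store untouched.
class TransactionalKV(dict):
    @contextlib.contextmanager
    def transaction(self):
        staged = dict(self)  # work on a private copy
        yield staged
        # only reached if the with-block did not raise: commit atomically
        self.clear()
        self.update(staged)

db = TransactionalKV()
with db.transaction() as txn:
    txn["a"] = 1
    txn["b"] = 2
print(dict(db))  # both writes commit together
```

If the body of the `with` block raises, the commit lines are never reached, so the store keeps its previous state, which is the all-or-nothing behavior transactions promise.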
# ZODB - Zope Object Database
<small>
<b>Sources:</b>
<a href="https://www.slideshare.net/jace/zodb-the-zope-object-database">https://www.slideshare.net/jace/zodb-the-zope-object-database</a>,
<a href="http://www.zodb.org">http://www.zodb.org</a>
</small>
- An object-oriented database stores objects instead of database records
- Objects are usually organized hierarchically
- Most object databases are tied to a single language (except db4objects, which works for Java and C#)
## ZODB
- High performance
- Transparent operation and caching
- Transactional: unlimited undo
- Multithreaded
- Storage plugins
- Requires Python
# ZODB - Zope Object Database
- It is not a relational database (in fact it is key-value)
- No SQL support
- No security model
- No query interface:
  - Objects must be accessed through their container
  - A separate search engine is available
- All classes must derive from the "Persistent" base class provided by ZODB
- The code does not need to be ZODB-aware
# ZODB - Examples
```
import persistent
import persistent.list
import ZODB, ZODB.FileStorage
import BTrees.OOBTree

class Account(persistent.Persistent):

    def __init__(self):
        self.movimientos = persistent.list.PersistentList()

    @property
    def balance(self):
        return sum(self.movimientos)

    def deposit(self, amount):
        self.movimientos.append(amount)

storage = ZODB.FileStorage.FileStorage('zodb/mydata.fs')
db = ZODB.DB(storage)

with db.transaction() as connection:
    root = connection.root
    root.accounts = BTrees.OOBTree.BTree()
    root.accounts['account-1'] = Account()
```
# ZODB - Examples
```
with db.transaction() as connection:
    root = connection.root
    account = root.accounts["account-1"]
    account.deposit(11.)
    print(account.balance)
    print(account.movimientos)

with db.transaction() as connection:
    root = connection.root
    account = root.accounts["account-1"]
    account.deposit(11.5)
    print(account.balance)
    print(account.movimientos)

with db.transaction() as connection:
    root = connection.root
    account = root.accounts["account-1"]
    print(account.movimientos)
```
## About BTrees
- Besides `PersistentList` and `PersistentMapping`, the *BTrees* package provides general persistent data structures, most notably the `BTree` and `TreeSet` objects.
- `BTree` and `TreeSet` objects are scalable and can easily hold millions of objects.

## Closing: How to choose a NoSQL database
Some general guidelines:
- **Key-value:** useful for storing session information, user profiles, preferences, shopping-cart data. Avoid them when you need to query by the data, maintain relationships between the stored items, or operate on multiple keys at once.
- **Document:** useful for content management systems, blogging platforms, web analytics, real-time analytics, e-commerce applications. Avoid them for systems with transactions spanning multiple operations, or for queries against varying aggregate structures.
- **Column:** useful for content management systems, blogging platforms, maintaining counters, expiring usage data, and write-heavy workloads such as log aggregation. Avoid them for systems in early development, where query patterns are still changing.
- **Graph:** well suited to problem spaces with connected data, such as social networks, spatial data, routing information for goods and money, and recommendation engines.
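As a toy illustration of the key-value trade-off above (a hypothetical sketch, not any particular product's API): lookups by key are trivial, but querying by the value's contents forces a scan of every entry.

```
class KVStore:
    """Minimal in-memory key-value store (illustrative only)."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # O(1): ideal for sessions, profiles, carts keyed by a user/session id.
        self._data[key] = value

    def get(self, key, default=None):
        # O(1) lookup by key -- the operation K-V stores are built for.
        return self._data.get(key, default)

    def find_by_value(self, predicate):
        # Querying *by the data* means scanning every entry: the weakness
        # noted above, and why you'd avoid K-V stores for such workloads.
        return [k for k, v in self._data.items() if predicate(v)]
```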
## Closing 2: The "schema-less" lie
- All NoSQL databases claim to be schema-less, meaning the database itself enforces no schema.
- NoSQL databases still have an implicit schema.
- Databases with strong schemas, such as relational databases, can be migrated by saving each schema change, plus its data migration, in a version-controlled sequence. Schema-less databases still need careful migration because of the implicit schema embedded in any code that accesses the data.
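The "implicit schema" point can be made concrete with a small sketch (hypothetical documents and field names):

```
# The reader code below *is* the schema: it assumes every customer
# document has a 'billing' sub-document with a 'city'.
def customer_city(doc):
    return doc['billing']['city']

# A newer writer renames the field; the old reader now breaks, so a
# "schema-less" store still needs a careful migration strategy.
def customer_city_migrated(doc):
    # Reader-side migration: tolerate both generations of documents.
    billing = doc.get('billing') or doc.get('address') or {}
    return billing.get('city', 'unknown')

old_doc = {'name': 'Ana', 'billing': {'city': 'Madrid'}}
new_doc = {'name': 'Luis', 'address': {'city': 'Sevilla'}}
```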
| github_jupyter |
# Getting started with Perceptual Adversarial Robustness
This notebook contains examples of how to load a pretrained model, measure LPIPS distance, and construct perceptual and non-perceptual attacks.
If you are running this notebook in Google Colab, it is recommended to use a GPU. You can enable GPU acceleration by going to **Runtime** > **Change runtime type** and selecting **GPU** from the dropdown.
First, make sure you have installed the `perceptual_advex` package, either from GitHub or PyPI:
```
try:
    import perceptual_advex
except ImportError:
    !pip install perceptual-advex
```
## Loading a pretrained model
First, let's load the CIFAR-10 dataset along with a pretrained model. The following code will download a model checkpoint and load it, but you can change the `checkpoint_name` parameter to load a different checkpoint. The checkpoint we're downloading here is trained against $L_2$ adversarial attacks with bound $\epsilon = 1$.
```
import subprocess
import os

if not os.path.exists('data/checkpoints/cifar_pgd_l2_1.pt'):
    !mkdir -p data/checkpoints
    !curl -o data/checkpoints/cifar_pgd_l2_1.pt https://perceptual-advex.s3.us-east-2.amazonaws.com/cifar_pgd_l2_1_cpu.pt

from perceptual_advex.utilities import get_dataset_model

dataset, model = get_dataset_model(
    dataset='cifar',
    arch='resnet50',
    checkpoint_fname='data/checkpoints/cifar_pgd_l2_1.pt',
)
```
If you want to experiment with ImageNet-100 instead, just change the above to:

    dataset, model = get_dataset_model(
        dataset='imagenet100',
        # Change this to where ImageNet is downloaded.
        dataset_path='/path/to/imagenet',
        arch='resnet50',
        # Change this to a pretrained checkpoint path.
        checkpoint_fname='/path/to/checkpoint',
    )
## Viewing images in the dataset
Now that we have a dataset and model loaded, we can view some images in the dataset.
```
import torchvision
import numpy as np
import matplotlib.pyplot as plt
# We'll use this helper function to show images in the Jupyter notebook.
%matplotlib inline
def show(img):
    if len(img.size()) == 4:
        img = torchvision.utils.make_grid(img, nrow=10, padding=0)
    npimg = img.detach().cpu().numpy()
    plt.figure(figsize=(18,16), dpi=80, facecolor='w', edgecolor='k')
    plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest')
import torch
# Create a validation set loader.
batch_size = 10
_, val_loader = dataset.make_loaders(1, batch_size, only_val=True)
# Get a batch from the validation set.
inputs, labels = next(iter(val_loader))
# If we have a GPU, let's convert everything to CUDA so it's quicker.
if torch.cuda.is_available():
    inputs = inputs.cuda()
    labels = labels.cuda()
    model.cuda()
# Show the batch!
show(inputs)
```
We can also test the accuracy of the model on this set of inputs by comparing the model output to the ground-truth labels.
```
pred_labels = model(inputs).argmax(1)
print('Natural accuracy is', (labels == pred_labels).float().mean().item())
```
If the natural accuracy is very low on this batch of images, you might want to load a new set by re-running the two cells above.
## Generating perceptual adversarial examples
Next, let's generate some perceptual adversarial examples using Lagrange perceptual attack (LPA) with AlexNet bound $\epsilon = 0.5$. Other perceptual attacks (PPGD and Fast-LPA) are also found in the `perceptual_advex.perceptual_attacks` module, and they mostly share the same options.
```
from perceptual_advex.perceptual_attacks import LagrangePerceptualAttack
attack = LagrangePerceptualAttack(
    model,
    num_iterations=10,
    # The LPIPS distance bound on the adversarial examples.
    bound=0.5,
    # The model used to calculate LPIPS; here we use AlexNet.
    # You can also use 'self' to perform a self-bounded attack.
    lpips_model='alexnet_cifar',
)
adv_inputs = attack(inputs, labels)
# Show the adversarial examples.
show(adv_inputs)
# Show the magnified difference between the adversarial examples and unperturbed inputs.
show((adv_inputs - inputs) * 5 + 0.5)
```
Note that while the perturbations are sometimes large, the adversarial examples are still recognizable as the original image and do not appear too different perceptually.
We can calculate the accuracy of the classifier on the adversarial examples:
```
adv_pred_labels = model(adv_inputs).argmax(1)
print('Adversarial accuracy is', (labels == adv_pred_labels).float().mean().item())
```
Even though this network has been trained to be robust to $L_2$ perturbations, there are still imperceptible perturbations found using LPA that fool it almost every time!
## Calculating LPIPS distance
Next, let's calculate the LPIPS distance between the adversarial examples we generated and the original inputs:
```
from perceptual_advex.distances import LPIPSDistance
from perceptual_advex.perceptual_attacks import get_lpips_model
# LPIPS is based on the activations of a classifier, so we need to first
# load the classifier we'll use.
lpips_model = get_lpips_model('alexnet_cifar')
if torch.cuda.is_available():
    lpips_model.cuda()
# Now we can define a distance based on the model we loaded.
# We could also do LPIPSDistance(model) for self-bounded LPIPS.
lpips_distance = LPIPSDistance(lpips_model)
# Finally, let's calculate the distance between the inputs and adversarial examples.
print(lpips_distance(inputs, adv_inputs))
```
Note that all the distances are within the bound of 0.5! At this bound, the adversarial perturbations should all have a similar level of perceptibility to the human eye.
Other distance measures between images are also defined in the `perceptual_advex.distances` package, including $L_\infty$, $L_2$, and SSIM.
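For reference, the plain pixel-space norms mentioned above are easy to compute directly. The following is a NumPy sketch of the definitions, not the package's `perceptual_advex.distances` implementations:

```
import numpy as np

def l2_distance(a, b):
    # Per-image L2 distance: flatten each image, take the Euclidean norm.
    diff = (a - b).reshape(a.shape[0], -1)
    return np.sqrt((diff ** 2).sum(axis=1))

def linf_distance(a, b):
    # Per-image L-infinity distance: the largest absolute pixel change.
    return np.abs(a - b).reshape(a.shape[0], -1).max(axis=1)
```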
## Generating non-perceptual adversarial examples
The `perceptual_advex` package also includes code to perform attacks based on other, narrower threat models like $L_\infty$ or $L_2$ distance and spatial transformations. The non-perceptual attacks are all in the `perceptual_advex.attacks` module. First, let's try an $L_2$ attack:
```
from perceptual_advex.attacks import L2Attack
attack = L2Attack(
    model,
    'cifar',
    # The bound is divided by 255, so this is equivalent to eps=1.
    bound=255,
)
l2_adv_inputs = attack(inputs, labels)
show(l2_adv_inputs)
show((l2_adv_inputs - inputs) * 5 + 0.5)
l2_adv_pred_labels = model(l2_adv_inputs).argmax(1)
print('L2 adversarial accuracy is', (labels == l2_adv_pred_labels).float().mean().item())
```
Here's an example of a spatial attack (StAdv):
```
from perceptual_advex.attacks import StAdvAttack
attack = StAdvAttack(
    model,
    bound=0.02,
)
spatial_adv_inputs = attack(inputs, labels)
show(spatial_adv_inputs)
show((spatial_adv_inputs - inputs) * 5 + 0.5)
spatial_adv_pred_labels = model(spatial_adv_inputs).argmax(1)
print('Spatial adversarial accuracy is', (labels == spatial_adv_pred_labels).float().mean().item())
```
## Conclusion
That's pretty much it for how to use the package! As a final note, here is an overview of what each module contains:
* `perceptual_advex.attacks`: non-perceptual attacks (e.g. $L_2$, $L_\infty$, spatial, recoloring, JPEG, etc.)
* `perceptual_advex.datasets`: datasets (e.g. ImageNet-100, CIFAR-10, etc.)
* `perceptual_advex.distances`: distance measures between images (e.g. LPIPS, SSIM, $L_2$)
* `perceptual_advex.evaluation`: functions used for evaluating a trained model against attacks
* `perceptual_advex.models`: classifier architectures (e.g. ResNet, AlexNet, etc.)
* `perceptual_advex.perceptual_attacks`: perceptual attacks (e.g. LPA, PPGD, Fast-LPA)
* `perceptual_advex.trades_wrn`: classifier architecture used by the TRADES defense (Zhang et al.)
* `perceptual_advex.utilities`: various utilities, including the `get_dataset_model` function to load a dataset and model
| github_jupyter |
# Self study 1
In this self study you should work through the code examples below together with the associated questions. The notebook illustrates a basic neural network implementation, where we implement most of the relevant functions from scratch, except for the calculation of gradients, for which we rely on the functionality provided by PyTorch.
The code illustrates the key concepts involved in learning a neural network. Go carefully through the code before starting to answer the questions at the end.
First we import the modules used in this self study.
```
import torch
from torchvision import datasets, transforms
from matplotlib import pyplot
import matplotlib.pyplot as plt
import numpy as np
```
Through torch load the MNIST data set, which we will use in this self study. The MNIST database consists of grey scale images of handwritten digits. Each image is of size $28\times 28$; see figure below for an illustration. The data set is divided into a training set consisting of $60000$ images and a test set with $10000$ images; in both
data sets the images are labeled with the correct digits. If interested, you can find more information about the MNIST data set at http://yann.lecun.com/exdb/mnist/, including accuracy results for various machine learning methods.

Using the data loader provided by torch we have an easy way of loading in data in batches (here of size 64). We can also make various other transformation of the data, such as normalization.
```
batch_size = 64
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
```
Each batch is a list of two elements. The first element contains the images and has dimensions [64,1,28,28] (the images are greyscale with a single channel rather than three RGB channels, hence the '1'), and the second element contains the class/label information.
```
batch = next(iter(train_loader))
print(f"Batch dimension (digit): {batch[0].shape}")
print(f"Batch dimension (target): {batch[1].shape}")
digit_batch = batch[0]
img = digit_batch[0,:]
pyplot.imshow(img.reshape((28, 28)), cmap="gray")
print(f"Target: {batch[1][0]} with shape {batch[1][0].shape}")
```
With PyTorch we can specify that the tensors require gradients. This will make PyTorch record all operations performed on the tensors, so that we can afterwards calculate the gradients automatically using back propagation. See also the code example from the last lecture.
For the first part of this self study we will specify a neural network, which will encode a softmax function. For this we need a (randomly initialized) weight matrix and a bias, and for both of them we need their gradients wrt. our error function (yet to be defined) in order to perform learning. Note that to facilitate matrix multiplication we will flatten our image from $28\times 28$ to $784$.
```
weights = torch.randn(784, 10) / np.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
```
Our model specification:
```
def softmax(x):
    return x.exp() / x.exp().sum(-1).unsqueeze(-1)

def model(xb):
    return softmax(xb @ weights + bias)
```
Let's test our model (with our randomly initialized weights)
```
# We flatten the digit representation so that it is consistent with the weight matrix
xb = digit_batch.flatten(start_dim=1)
print(f"Batch shape: {xb.shape}")
preds = model(xb)
print(f"Prediction on first image {preds[0]}")
print(f"Corresponding classification: {preds[0].argmax()}")
```
Next we define our loss function, in this case the log-loss (or negative log-likelihood):
```
def nll(input, target):
    return (-input[range(target.shape[0]), target].log()).mean()
loss_func = nll
# Make a test calculation
yb = batch[1]
print(loss_func(preds,yb))
```
In the end, we are interested in the accuracy of our model
```
def accuracy(out, yb):
    preds = torch.argmax(out, dim=1)
    return (preds == yb).float().mean()
print(f"Accuracy of model on batch (with random weights): {accuracy(preds, yb)}")
```
Now we are ready to combine it all and perform learning
```
epochs = 4 # how many epochs to train for
lr = 0.01 # learning rate
train_losses = []
for epoch in range(epochs):
    for batch_idx, (xb, yb) in enumerate(train_loader):
        xb = xb.squeeze().flatten(start_dim=1)
        pred = model(xb)
        loss = loss_func(pred, yb)
        loss.backward()
        with torch.no_grad():
            weights -= weights.grad * lr
            bias -= bias.grad * lr
            weights.grad.zero_()
            bias.grad.zero_()
        if batch_idx % 50 == 0:
            with torch.no_grad():
                train_loss = np.mean([loss_func(model(txb.squeeze().flatten(start_dim=1)), tyb).item() for txb, tyb in train_loader])
            print(f"Epoch: {epoch}, B-idx: {batch_idx}, Training loss: {train_loss}")
            train_losses.append(train_loss)
```
Plot the evolution of the training loss
```
plt.plot(range(len(train_losses)), train_losses)
```
__Exercise:__
1. Experiment with different variations of the gradient descent implementation; try varying the learning rate and the batch size. Assuming that you have a fixed time budget (say 2 minutes for learning), what can we then say about the effect of changing the parameters?
2. Implement momentum in the learning algorithm. How does it affect the results?
3. Try with different initialization schemes for the parameters (e.g. allowing for larger values). How does it affect the behavior of the algorithm?
4. Analyze the behavior of the algorithm on the test set and implement a method for evaluating the accuracy over the entire training/test set (for inspiration, see Line 21 above).
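For exercise 2, one common formulation of the momentum update is sketched below in NumPy for clarity; in the training loop above you would apply the same idea to `weights.grad` and `bias.grad`. This is just one variant (classical momentum), not the only correct answer.

```
import numpy as np

def momentum_step(param, grad, velocity, lr=0.01, beta=0.9):
    # Classical momentum: the velocity is a decaying sum of past gradients,
    # and the parameter moves along the velocity instead of the raw gradient.
    velocity = beta * velocity + grad
    param = param - lr * velocity
    return param, velocity
```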
| github_jupyter |
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Version Check
Plotly's Python API is updated frequently. Run `pip install plotly --upgrade` to update your Plotly version.
```
import plotly
plotly.__version__
```
#### Simple Candlestick with Pandas
```
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
from datetime import datetime
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')
trace = go.Ohlc(x=df['Date'],
open=df['AAPL.Open'],
high=df['AAPL.High'],
low=df['AAPL.Low'],
close=df['AAPL.Close'])
data = [trace]
py.iplot(data, filename='simple_candlestick')
```
#### Candlestick without Rangeslider
```
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
from datetime import datetime
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')
trace = go.Ohlc(x=df['Date'],
open=df['AAPL.Open'],
high=df['AAPL.High'],
low=df['AAPL.Low'],
close=df['AAPL.Close'])
layout = go.Layout(
xaxis = dict(
rangeslider = dict(
visible = False
)
)
)
data = [trace]
fig = go.Figure(data=data,layout=layout)
py.iplot(fig, filename='simple_candlestick')
```
#### Adding Customized Text and Annotations
```
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
from datetime import datetime
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')
trace = go.Ohlc(x=df['Date'],
open=df['AAPL.Open'],
high=df['AAPL.High'],
low=df['AAPL.Low'],
close=df['AAPL.Close'])
data = [trace]
layout = {
'title': 'The Great Recession',
'yaxis': {'title': 'AAPL Stock'},
'shapes': [{
'x0': '2016-12-09', 'x1': '2016-12-09',
'y0': 0, 'y1': 1, 'xref': 'x', 'yref': 'paper',
'line': {'color': 'rgb(30,30,30)', 'width': 1}
}],
'annotations': [{
'x': '2016-12-09', 'y': 0.05, 'xref': 'x', 'yref': 'paper',
'showarrow': False, 'xanchor': 'left',
'text': 'Increase Period Begins'
}]
}
fig = dict(data=data, layout=layout)
py.iplot(fig, filename='aapl-recession-candlestick')
```
#### Custom Candlestick Colors
```
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
from datetime import datetime
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')
trace = go.Ohlc(x=df['Date'],
open=df['AAPL.Open'],
high=df['AAPL.High'],
low=df['AAPL.Low'],
close=df['AAPL.Close'],
increasing=dict(line=dict(color= '#17BECF')),
decreasing=dict(line=dict(color= '#7F7F7F')))
data = [trace]
py.iplot(data, filename='styled_candlestick')
```
#### Simple Example with `datetime` Objects
```
import plotly.plotly as py
import plotly.graph_objs as go
from datetime import datetime
open_data = [33.0, 33.3, 33.5, 33.0, 34.1]
high_data = [33.1, 33.3, 33.6, 33.2, 34.8]
low_data = [32.7, 32.7, 32.8, 32.6, 32.8]
close_data = [33.0, 32.9, 33.3, 33.1, 33.1]
dates = [datetime(year=2013, month=10, day=10),
datetime(year=2013, month=11, day=10),
datetime(year=2013, month=12, day=10),
datetime(year=2014, month=1, day=10),
datetime(year=2014, month=2, day=10)]
trace = go.Candlestick(x=dates,
open=open_data,
high=high_data,
low=low_data,
close=close_data)
data = [trace]
py.iplot(data, filename='candlestick_datetime')
```
#### Reference
For more information on candlestick attributes, see: https://plot.ly/python/reference/#candlestick
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
!pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'candlestick-charts.ipynb', 'python/candlestick-charts/', 'Candlestick Charts',
'How to make interactive candlestick charts in Python with Plotly. '
'Six examples of candlestick charts with Pandas, time series, and yahoo finance data.',
title = 'Python Candlestick Charts | plotly',
thumbnail='thumbnail/candlestick.jpg', language='python',
page_type='example_index', has_thumbnail='true', display_as='financial', order=2,
ipynb= '~notebook_demo/152')
```
| github_jupyter |
```
text = """เคฌเฅเคเคฏเคพเคฒเฅ เคฎเฅเค เคญเฅ เคคเฅเคฐเคพ เคนเฅ เคเคฏเคพเคฒ เคเค
"เคเฅเคฏเฅเค เคฌเคฟเคเฅเคจเคพ เคนเฅ เฅเคฐเฅเคฐเฅ?" เคฏเฅ เคธเคตเคพเคฒ เคเค
เคคเฅเคฐเฅ เคจเฅเคฆเฅเคเคฟเคฏเฅเค เคเฅ เฅเฅเคถเฅ เคฌเฅเคนเคฟเคธเคพเคฌ เคฅเฅ
เคนเคฟเคธเฅเคธเฅ เคฎเฅเค เคซเคผเคพเคธเคฒเฅ เคญเฅ เคคเฅเคฐเฅ เคฌเฅเคฎเคฟเคธเคพเคฒ เคเค
เคฎเฅเค เคเฅ เคคเฅเคฎเคธเฅ เคฆเฅเคฐ เคนเฅเค, เคเฅเคฏเฅเค เคฆเฅเคฐ เคฎเฅเค เคฐเคนเฅเค?
เคคเฅเคฐเคพ เคเฅเคฐเฅเคฐ เคนเฅเค
เค เคคเฅ เคซเคผเคพเคธเคฒเคพ เคฎเคฟเคเคพ, เคคเฅ เคเฅเคตเคพเคฌ เคธเคพ เคฎเคฟเคฒเคพ
เคเฅเคฏเฅเค เคเฅเคตเคพเคฌ เคคเฅเฅ เคฆเฅเค?
เคฌเฅเคเคฏเคพเคฒเฅ เคฎเฅเค เคญเฅ เคคเฅเคฐเคพ เคนเฅ เคเคฏเคพเคฒ เคเค
"เคเฅเคฏเฅเค เคเฅเคฆเคพเค เคฆเฅ เคเคฏเคพ เคคเฅ?" เคฏเฅ เคธเคตเคพเคฒ เคเค
เคฅเฅเฅเคพ เคธเคพ เคฎเฅเค เคเคซเคผเคพ เคนเฅ เคเคฏเคพ เค
เคชเคจเฅ เคเคช เคธเฅ
เคฅเฅเฅเคพ เคธเคพ เคคเฅเคเคชเฅ เคญเฅ เคฌเฅเคตเคเคน เคนเฅ เคฎเคฒเคพเคฒ เคเค
เคนเฅ เคฏเฅ เคคเฅเคชเคจ, เคนเฅ เคฏเฅ เคเคฒเคเคจ
เคเฅเคธเฅ เคเฅ เคฒเฅเค เคฌเคฟเคจเคพ เคคเฅเคฐเฅ?
เคฎเฅเคฐเฅ เค
เคฌ เคธเคฌ เคธเฅ เคนเฅ เค
เคจเคฌเคจ
เคฌเคจเคคเฅ เคเฅเคฏเฅเค เคฏเฅ เคเฅเคฆเคพ เคฎเฅเคฐเฅ?
เคฏเฅ เคเฅ เคฒเฅเค-เคฌเคพเค เคนเฅเค, เคเคเคเคฒ เคเฅ เคเค เคนเฅเค
เคเฅเคฏเฅเค เคเค เคฎเฅเค เคเคฒเฅเค?
เคฏเฅ เคจเคพเคเคพเคฎ เคชเฅเคฏเคพเคฐ เคฎเฅเค, เคเฅเคถ เคนเฅเค เคฏเฅ เคนเคพเคฐ เคฎเฅเค
เคเคจ เคเฅเคธเคพ เคเฅเคฏเฅเค เคฌเคจเฅเค?
เคฐเคพเคคเฅเค เคฆเฅเคเคเฅ เคฌเคคเคพ, เคจเฅเคฆเฅเค เคฎเฅเค เคคเฅเคฐเฅ เคนเฅ เคฌเคพเคค เคนเฅ
เคญเฅเคฒเฅเค เคเฅเคธเฅ เคคเฅเคเฅ? เคคเฅ เคคเฅ เคเฅเคฏเคพเคฒเฅเค เคฎเฅเค เคธเคพเคฅ เคนเฅ
เคฌเฅเคเคฏเคพเคฒเฅ เคฎเฅเค เคญเฅ เคคเฅเคฐเคพ เคนเฅ เคเคฏเคพเคฒ เคเค
"เคเฅเคฏเฅเค เคฌเคฟเคเฅเคจเคพโฆ
"""
print(text)
print(len(text))
def load_file(path):
    with open(path, encoding="utf-8") as f:
        data = f.read()
    return data
data = load_file("./kabir singh B.txt")
print(data)
def generateTable(data, k=4):
    T = {}
    for i in range(len(data)-k):
        X = data[i:i+k]
        Y = data[i+k]
        #print("X %s and Y %s "%(X,Y))
        if T.get(X) is None:
            T[X] = {}
            T[X][Y] = 1
        else:
            if T[X].get(Y) is None:
                T[X][Y] = 1
            else:
                T[X][Y] += 1
    return T
T = generateTable(text)
print(T)
def convertFreqIntoProb(T):
    for kx in T.keys():
        s = float(sum(T[kx].values()))
        for k in T[kx].keys():
            T[kx][k] = T[kx][k]/s
    return T
T = convertFreqIntoProb(T)
print(T)
print(data[:1000])
```
# Train Our Markov Chain
```
def trainMarkovChain(text, k=4):
    T = generateTable(text, k)
    T = convertFreqIntoProb(T)
    return T
model = trainMarkovChain(data)
print(model)
import numpy as np
import random
def sample_next(ctx, T, k):
    ctx = ctx[-k:]
    if T.get(ctx) is None:
        return " "
    possible_Chars = list(T[ctx].keys())
    possible_values = list(T[ctx].values())
    #print(possible_Chars)
    #print(possible_values)
    return np.random.choice(possible_Chars, p=possible_values)
sample_next("เคฌเฅเคเคฏเคพเคฒเฅ",model,6)
def generateText(starting_sent, k=4, maxLen=1000):
    sentence = starting_sent
    ctx = starting_sent[-k:]
    for ix in range(maxLen):
        next_prediction = sample_next(ctx, model, k)
        sentence += next_prediction
        ctx = sentence[-k:]
    return sentence
```
# Random Lyrics Generation
```
text = generateText("เคฌเฅเคเคฏเคพเคฒเฅ",k=4,maxLen=200000)
print(text)
```
| github_jupyter |
# Machine Learning Models for SCOPE: Passive Aggressive Classifier
Models will be coded here, but the official write up will be in the RMarkdown document.
```
# load the data files
import pandas as pd
import numpy as np
from pymodelutils import utils
logs = pd.read_csv("data/metis_logs.csv")
logs.head()
# filter down to show the average opinion (0 means no alert, 1 means alert)
logs['run_date'] = logs['run_date'].astype('datetime64[ns]')
logs['is_alert'] = (np.where(logs['is_alert'] == 'f', 0, 1))
logs = logs.groupby(['series', 'kpi', 'run_date']).mean().round(0).reset_index()
logs['is_campaign'] = np.where(logs['campaign_id'] > 0, 1, 0)
logs = logs.drop(columns=['client_id', 'partner_id', 'campaign_id'])
logs['is_alert'].describe()
AS_data = pd.read_csv("data/python_AS.csv")
AS_data.head()
TS_data = pd.read_csv("data/python_TS.csv")
TS_data.head()
RexT_data = pd.read_csv("data/python_RexT.csv")
RexT_data.head()
```
## Data Prep
R has already filtered the data down to the days we are going to use and marked what is disqualified. We still have to handle the feature selection and one-hot encoding of select columns, though. We also need to normalize the data, since our KPIs behave quite differently.
```
# add column for AS to tell if it is campaign level or not
AS_data['is_campaign'] = np.where(AS_data['campaign_id'] > 0, 1, 0)
# drop the data we don't need for the model or for matching back to the logs
AS_keep_columns = ['series', 'day', 'run_date', 'kpi', 'value', 'disqualified', 'is_campaign']
TS_keep_columns = ['series', 'day', 'run_date', 'site_type', 'event_name',
'kpi', 'value', 'disqualified']
RexT_drop_columns = ['ranking',
'day_of_week',
'day_of_month',
'month_of_year',
'day_of_year',
'week_of_year']
AS_data = AS_data[AS_keep_columns]
TS_data = TS_data[TS_keep_columns]
RexT_data = RexT_data.drop(columns=RexT_drop_columns)
AS_data.head()
TS_data.head()
RexT_data.head()
# add a new column to determine how many days before the run_date the day column entry is
# this will enable us to pivot that data into separate columns for the features of our model
utils.prep_dates(AS_data)
utils.prep_dates(TS_data)
utils.prep_dates(RexT_data)
# inner joins to logs
AS_data = pd.merge(AS_data, logs, on=['series', 'run_date', 'kpi', 'is_campaign'], how='inner')
TS_data = pd.merge(TS_data, logs, on=['series', 'run_date', 'kpi'], how='inner')
RexT_data = pd.merge(RexT_data, logs, on=['series', 'run_date', 'kpi'], how='inner')
# filter out the disqualified data (AS and TS data only)
AS_disqualified = AS_data[AS_data.disqualified]
TS_disqualified = TS_data[TS_data.disqualified]
# valid for model (AS and TS data only)
valid_AS_raw = AS_data[~(AS_data.disqualified)]
valid_TS_raw = TS_data[~(TS_data.disqualified)]
# keep a copy of the raw RexT data
RexT_data_raw = RexT_data.copy(deep=True)
# final preparations to the data shape for use in the model
valid_AS = utils.data_prep_pipeline(AS_data.copy(),
indices=['series', 'run_date', 'kpi', 'is_campaign', 'is_alert'],
cols=['kpi'],
scaling_method=['standardize', 'min_max', 'percent_of_mean'])
valid_TS = utils.data_prep_pipeline(TS_data.copy(),
indices=['series', 'run_date', 'site_type', 'event_name', 'is_alert'],
cols=['site_type', 'event_name'],
scaling_method=['standardize', 'min_max', 'percent_of_mean'])
valid_RexT = utils.data_prep_pipeline(utils.clean_regions(RexT_data),
indices=['isCountry', 'isSubregion', 'isRegion',
'series', 'run_date', 'is_alert'],
cols=['series'],
scaling_method=['standardize', 'min_max', 'percent_of_mean'])
# for the TS data we need to drop event_name_SITE LEVEL because it will always be the same as site_type_SITE LEVEL
valid_TS = {key : value.drop(columns='event_name_SITE LEVEL') for key, value in valid_TS.items()}
valid_AS['min_max'].head()
valid_TS['percent_of_mean'].head()
valid_RexT['standardize'].head()
```
## Modelling
Now that all the data is prepped, we can start building some classifiers to test. We also need to split our data into a test and train set, being careful that we have an equal proportion of anomalies in each (because anomalies are very few, we have to make sure neither set ends up with all of them while the other gets none).
### Split Data into Train and Test Sets
```
from sklearn.model_selection import train_test_split
# scaling method to test
AS_scaler = 'min_max'
TS_scaler = 'min_max'
RexT_scaler = 'min_max'
# separate out data into feature matrices and target arrays
AS_features = valid_AS[AS_scaler][[col for col in valid_AS[AS_scaler].columns
if col not in ['series', 'run_date', 'is_alert']]] # this needs to be the model features
AS_targets = valid_AS[AS_scaler]['is_alert'] # this needs to be the results from the logs (only)
TS_features = valid_TS[TS_scaler][[col for col in valid_TS[TS_scaler].columns
if col not in ['series', 'run_date', 'is_alert']]]
TS_targets = valid_TS[TS_scaler]['is_alert']
RexT_features = valid_RexT[RexT_scaler][[col for col in valid_RexT[RexT_scaler].columns
if col not in ['run_date', 'is_alert']]]
RexT_targets = valid_RexT[RexT_scaler]['is_alert']
test_RexT_features = RexT_features.drop(columns=[col for col in RexT_features.columns
if 'series' in col
or col in ['isCountry', 'isSubregion', 'isRegion']])
# split into a train and test set
AS_X_train, AS_X_test, AS_y_train, AS_y_test = train_test_split(AS_features[[col for col in AS_features.columns
if 'diff' not in col]],
AS_targets,
test_size=0.2,
random_state=25)
TS_X_train, TS_X_test, TS_y_train, TS_y_test = train_test_split(TS_features[[col for col in TS_features.columns
if 'diff' not in col]],
TS_targets,
test_size=0.2,
random_state=25)
RexT_X_train, RexT_X_test, RexT_y_train, RexT_y_test = train_test_split(test_RexT_features[[col for col in
test_RexT_features.columns
if 'diff' not in col]],
RexT_targets,
test_size=0.5,
random_state=25)
```
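The split above relies on a fixed `random_state` to keep similar anomaly ratios; `train_test_split` also accepts a `stratify=` argument that guarantees equal class proportions. A minimal NumPy sketch of what stratification does (a hypothetical helper, for illustration only):

```
import numpy as np

def stratified_split_indices(y, test_size=0.2, seed=25):
    # Split indices so each class appears in the test set in (roughly)
    # the same proportion as in the full data set.
    rng = np.random.RandomState(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        rng.shuffle(idx)
        n_test = int(round(len(idx) * test_size))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)
```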
Let's make sure that we have similar percentage of anomalies in our test and train sets.
```
# AS
print('Total alerts in training set: ' + str(AS_y_train.sum()))
print('Total alerts in test set: ' + str(AS_y_test.sum()))
pd.DataFrame({'train' : AS_y_train.value_counts(normalize=True),
'test' : AS_y_test.value_counts(normalize=True)})
# TS
print('Total alerts in training set: ' + str(TS_y_train.sum()))
print('Total alerts in test set: ' + str(TS_y_test.sum()))
pd.DataFrame({'train' : TS_y_train.value_counts(normalize=True),
'test' : TS_y_test.value_counts(normalize=True)})
# RexT
print('Total alerts in training set: ' + str(RexT_y_train.sum()))
print('Total alerts in test set: ' + str(RexT_y_test.sum()))
pd.DataFrame({'train' : RexT_y_train.value_counts(normalize=True),
'test' : RexT_y_test.value_counts(normalize=True)})
```
### Passive Aggressive Classifier without Differences
**Rough idea**:
- Passive: if the example is classified correctly, keep the model unchanged.
- Aggressive: if the example is misclassified, update the model just enough to fix that example.
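The update rule behind this can be sketched in a few lines. This follows the PA-I variant of Crammer et al., as a simplified sketch rather than scikit-learn's exact internals:

```
import numpy as np

def pa_update(w, x, y, C=1.0):
    # Hinge loss on this example; y must be +1 or -1.
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss == 0.0:
        return w                       # passive: already correct with margin
    tau = min(C, loss / np.dot(x, x))  # aggressive: smallest fixing step, capped by C
    return w + tau * y * x
```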
```
%%capture
# ^ don't print errors about precision, recall, F1-score being 0
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, f1_score, precision_score, recall_score, fbeta_score, zero_one_loss
pac = PassiveAggressiveClassifier(random_state=42, n_jobs=-1, average=True)
parameters = {'C':[i/10 for i in range(11) if i > 0],
'fit_intercept' : [True, False],
'class_weight' : ['balanced', None]}
scoring = {'auc': 'roc_auc',
# only reports on alerts flagged
'precision_binary' : make_scorer(precision_score, average='binary'),
# global count of everything like confusion matrix
'precision_micro' : make_scorer(precision_score, average='micro'),
# metrics calculated for each label and averaged evenly (regardless of class size)
# 'precision_macro' : make_scorer(precision_score, average='macro'),
# same as macro but with a weighted average to account for imbalance of the classes
'precision_weighted' : make_scorer(precision_score, average='weighted'),
'recall_weighted' : make_scorer(recall_score, average='weighted'),
'recall_binary' : make_scorer(recall_score, average='binary'),
'recall_micro' : make_scorer(recall_score, average='micro'),
# 'recall_macro' : make_scorer(recall_score, average='macro'),
'f1_score_weighted' : make_scorer(f1_score, average='weighted'),
'f1_score_binary' : make_scorer(f1_score, average='binary'),
'f1_score_micro' : make_scorer(f1_score, average='micro'),
# 'f1_score_macro' : make_scorer(f1_score, average='macro'),
# emphasize recall more
'f2_score_weighted' : make_scorer(fbeta_score, beta=2, average='weighted'),
'f2_score_binary' : make_scorer(fbeta_score, beta=2, average='binary'),
'f2_score_micro' : make_scorer(fbeta_score, beta=2, average='micro'),
# 'f2_score_macro' : make_scorer(fbeta_score, beta=2, average='macro'),
# emphasize precision more
'f0.5_score_weighted' : make_scorer(fbeta_score, beta=0.5, average='weighted'),
'f0.5_score_binary' : make_scorer(fbeta_score, beta=0.5, average='binary'),
'f0.5_score_micro' : make_scorer(fbeta_score, beta=0.5, average='micro'),
# 'f0.5_score_macro' : make_scorer(fbeta_score, beta=0.5, average='macro'),
# percent of misclassifications
'zero_one_loss_normalized' : make_scorer(zero_one_loss, greater_is_better=False),
# number of misclassifications
'zero_one_loss_count' : make_scorer(zero_one_loss, greater_is_better=False,
normalize=False),
'accuracy' : 'accuracy'}
# pick '_weighted' if you want to be right on each class proportionally
# ('macro' isn't really appropriate due to the class imbalance)
# pick '_binary' if you want to perform best on the alert class
# pick '_micro' to count globally all TP, FP, TN, FN (like confusion matrix)
# so for our purposes 'f1_score' with one of the above is likely to be the best
refit_AS = 'f1_score_weighted'
refit_TS = 'f1_score_weighted'
refit_RexT = 'f1_score_weighted'
AS_pac_grid = GridSearchCV(estimator=pac, param_grid=parameters,
scoring=scoring, refit=refit_AS, return_train_score=True)
TS_pac_grid = GridSearchCV(estimator=pac, param_grid=parameters,
scoring=scoring, refit=refit_TS, return_train_score=True)
RexT_pac_grid = GridSearchCV(estimator=pac, param_grid=parameters,
scoring=scoring, refit=refit_RexT, return_train_score=True)
# fit the models to find the best version
AS_pac_model = AS_pac_grid.fit(AS_X_train, AS_y_train)
TS_pac_model = TS_pac_grid.fit(TS_X_train, TS_y_train)
RexT_pac_model = RexT_pac_grid.fit(RexT_X_train, RexT_y_train)
```
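The averaging choices commented above behave quite differently on imbalanced data. A quick check on hypothetical labels (illustrative only, not the alert data) makes the distinction concrete:

```python
import numpy as np
from sklearn.metrics import precision_score

# Hypothetical imbalanced labels: 8 negatives, 2 positives (for illustration only)
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 0])

# 'binary': precision on the positive (alert) class only -> TP=1, FP=2 -> 1/3
p_binary = precision_score(y_true, y_pred, average='binary')

# 'micro': pool TP/FP globally across both classes -> 7 correct out of 10 predictions
p_micro = precision_score(y_true, y_pred, average='micro')

# 'weighted': per-class precision averaged by class support, so the majority class dominates
p_weighted = precision_score(y_true, y_pred, average='weighted')

print(p_binary, p_micro, p_weighted)
```

On this toy split, 'binary' is dragged down by the two false alerts while 'weighted' is propped up by the easy majority class, which is why 'binary' is the one to watch for performance on the alert class specifically.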
#### Best Passive Aggressive Classifiers
##### AS
The model seems to pick up that the most important features are whether or not it is a campaign and the 5 most recent dates (time deltas 1-5). TAC, margin, and RexT euro have very negative coefficients, most likely because we see few alerts there, so the log odds are small and their exponential is near 0. Clicks, displays, and spend make an alert more likely.
```
print(AS_pac_model.best_estimator_)
print(refit_AS + ' (Mean cross-validated score of the best_estimator): ' + \
str(AS_pac_model.best_score_))
for col, coef in zip(AS_X_train.columns, AS_pac_model.best_estimator_.coef_[0]):
print(col + '\t' + str(round(coef, 2)))
```
##### TS
```
print(TS_pac_model.best_estimator_)
print(refit_TS + ' (Mean cross-validated score of the best_estimator): ' + \
str(TS_pac_model.best_score_))
for col, coef in zip(TS_X_train.columns, TS_pac_model.best_estimator_.coef_[0]):
print(col + '\t' + str(round(coef, 2)))
```
##### RexT
```
print(RexT_pac_model.best_estimator_)
print(refit_RexT + ' (Mean cross-validated score of the best_estimator): ' + \
str(RexT_pac_model.best_score_))
for col, coef in zip(RexT_X_train.columns, RexT_pac_model.best_estimator_.coef_[0]):
print(col + '\t' + str(round(coef, 2)))
```
#### Model Evaluation
##### ROC Curve
```
from sklearn.metrics import roc_curve, auc
AS_y_prob_fit = AS_pac_model.decision_function(AS_X_test)
TS_y_prob_fit = TS_pac_model.decision_function(TS_X_test)
RexT_y_prob_fit = RexT_pac_model.decision_function(RexT_X_test)
AS_pac_roc_curve = roc_curve(AS_y_test, AS_y_prob_fit, pos_label=1) # returns tuple: fpr, tpr, thresholds
AS_pac_roc_curve_AUC = auc(AS_pac_roc_curve[0],
AS_pac_roc_curve[1]) # needs fpr, tpr
TS_pac_roc_curve = roc_curve(TS_y_test, TS_y_prob_fit, pos_label=1)
TS_pac_roc_curve_AUC = auc(TS_pac_roc_curve[0],
TS_pac_roc_curve[1])
RexT_pac_roc_curve = roc_curve(RexT_y_test, RexT_y_prob_fit, pos_label=1)
RexT_pac_roc_curve_AUC = auc(RexT_pac_roc_curve[0],
RexT_pac_roc_curve[1])
# ROC Curve without 1 period differences
utils.model_roc_curves(roc_data_dict={'AS' : AS_pac_roc_curve,
'TS' : TS_pac_roc_curve,
'RexT' : RexT_pac_roc_curve},
auc_dict={'AS' : AS_pac_roc_curve_AUC,
'TS' : TS_pac_roc_curve_AUC,
'RexT' : RexT_pac_roc_curve_AUC},
method_name='Passive Aggressive Classifier')
```
##### Check Alert Counts at Certain Threshold
##### Confusion Matrix
```
AS_threshold = 0.05
utils.confusion_matrix_visual(AS_y_test[AS_X_test.is_campaign == 0],
AS_pac_model.decision_function(AS_X_test[AS_X_test.is_campaign == 0]) >= AS_threshold,
'AS Client-level')
utils.confusion_matrix_visual(AS_y_test[AS_X_test.is_campaign == 1],
AS_pac_model.decision_function(AS_X_test[AS_X_test.is_campaign == 1]) >= AS_threshold,
'AS Campaign-level')
utils.confusion_matrix_visual(AS_y_test,
AS_pac_model.decision_function(AS_X_test) >= AS_threshold, 'AS Overall')
TS_threshold = 0.16
utils.confusion_matrix_visual(TS_y_test,
TS_pac_model.decision_function(TS_X_test) >= TS_threshold,
'TS')
RexT_threshold = 0.1
utils.confusion_matrix_visual(RexT_y_test,
RexT_pac_model.decision_function(RexT_X_test) >= RexT_threshold,
'RexT')
```
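The thresholds above (0.05, 0.16, 0.1) convert `decision_function` scores into hard labels. A toy sketch with made-up scores (not from the fitted models) shows the trade-off of moving the cut-off:

```python
import numpy as np

# Hypothetical decision-function scores and ground truth, for illustration only
scores = np.array([-0.8, -0.3, 0.02, 0.07, 0.2, 0.6])
y_true = np.array([0, 0, 0, 1, 1, 1])

for threshold in (0.0, 0.05, 0.1):
    y_pred = scores >= threshold
    fp = int(np.sum(y_pred & (y_true == 0)))   # false alerts
    fn = int(np.sum(~y_pred & (y_true == 1)))  # missed alerts
    print(f'threshold={threshold}: FP={fp}, FN={fn}')
```

Raising the threshold trades false positives for false negatives, which is exactly the knob being tuned per model above.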
#### Metrics
```
utils.classification_report_all(y_test_dict={'AS' : AS_y_test,
'TS' : TS_y_test,
'RexT' : RexT_y_test},
y_pred_dict={'AS' :
AS_pac_model.decision_function(AS_X_test) >= AS_threshold,
'TS' :
TS_pac_model.decision_function(TS_X_test) >= TS_threshold,
'RexT' :
RexT_pac_model.decision_function(RexT_X_test) >= RexT_threshold})
```
# LABXX: What-if Tool: Model Interpretability Using Mortgage Data
**Learning Objectives**
1. Create a What-if Tool visualization
2. What-if Tool exploration using the XGBoost Model
## Introduction
This notebook shows how to use the [What-if Tool (WIT)](https://pair-code.github.io/what-if-tool/) on a deployed [Cloud AI Platform](https://cloud.google.com/ai-platform/) model. The What-If Tool provides an easy-to-use interface for expanding understanding of black-box classification and regression ML models. With the plugin, you can perform inference on a large set of examples and immediately visualize the results in a variety of ways. Additionally, examples can be edited manually or programmatically and re-run through the model in order to see the results of the changes. It contains tooling for investigating model performance and fairness over subsets of a dataset. The purpose of the tool is to give people a simple, intuitive, and powerful way to explore and investigate trained ML models through a visual interface with absolutely no code required.
[Extreme Gradient Boosting (XGBoost)](https://xgboost.ai/) is a decision-tree-based ensemble Machine Learning algorithm that uses a gradient boosting framework. In prediction problems involving unstructured data (images, text, etc.) artificial neural networks tend to outperform all other algorithms or frameworks. However, when it comes to small-to-medium structured/tabular data, decision tree based algorithms are considered best-in-class right now. Please see the chart below for the evolution of tree-based algorithms over the years.
*You don't need your own cloud project* to run this notebook.
** UPDATE LINK BEFORE PRODUCTION **: Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/gwendolyn-dev/courses/machine_learning/deepdive2/ml_on_gc/what_if_mortgage.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
## Set up environment variables and load necessary libraries
We will start by importing the necessary libraries for this lab.
```
import sys
python_version = sys.version_info[0]
print("Python Version: ", python_version)
!pip3 install witwidget
import pandas as pd
import numpy as np
import witwidget
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder
```
## Loading the mortgage test dataset
The model we'll be exploring here is a binary classification model built with XGBoost and trained on a [mortgage dataset](https://www.ffiec.gov/hmda/hmdaflat.htm). It predicts whether or not a mortgage application will be approved. In this section we'll:
* Download some test data from Cloud Storage and load it into a numpy array + Pandas DataFrame
* Preview the features for our model in Pandas
```
# Download our Pandas dataframe and our test features and labels
!gsutil cp gs://mortgage_dataset_files/data.pkl .
!gsutil cp gs://mortgage_dataset_files/x_test.npy .
!gsutil cp gs://mortgage_dataset_files/y_test.npy .
```
## Preview the Features
Preview the features from our model as a pandas DataFrame
```
features = pd.read_pickle('data.pkl')
features.head()
features.info()
```
## Load the test features and labels into numpy arrays
Developing machine learning models in Python often requires the use of NumPy arrays. Recall that NumPy, which stands for Numerical Python, is a library consisting of multidimensional array objects and a collection of routines for processing those arrays. NumPy arrays are efficient data structures for working with data in Python; machine learning models like those in scikit-learn, and deep learning models like those in Keras, expect input data as NumPy arrays and make predictions as NumPy arrays. As such, it is common to need to save NumPy arrays to file. Note that the data info reveals the following dtypes: float64(8), int16(1), int8(1), uint8(34), and no strings or "objects". So, let's now load the features and labels into NumPy arrays.
```
x_test = np.load('x_test.npy')
y_test = np.load('y_test.npy')
```
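The `.npy` files loaded above were produced with `np.save`; a minimal round trip (written to a temporary location, not the lab's files) looks like this:

```python
import os
import tempfile
import numpy as np

arr = np.arange(6, dtype=np.float64).reshape(2, 3)
path = os.path.join(tempfile.mkdtemp(), 'example.npy')

np.save(path, arr)        # writes NumPy's binary .npy format
restored = np.load(path)  # dtype and shape come back intact

assert restored.shape == (2, 3)
assert np.array_equal(arr, restored)
```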
Let's take a look at the contents of the 'x_test.npy' file. You can see the "array" structure.
```
print(x_test)
```
## Combine the features and labels into one array for the What-if Tool
Note that the `numpy.hstack()` function stacks a sequence of input arrays horizontally (i.e., column-wise) to make a single array. In the following example, the flat label vector is converted into a single-column matrix with `.reshape((-1, 1))` so it can be stacked next to the feature columns.
```
test_examples = np.hstack((x_test,y_test.reshape(-1,1)))
```
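On toy arrays (not the mortgage data), the reshape-then-stack step works like this:

```python
import numpy as np

x = np.arange(8).reshape(4, 2)  # 4 examples, 2 features each
y = np.array([0, 1, 1, 0])      # 4 labels, shape (4,)

# reshape((-1, 1)) turns the flat label vector into a single-column matrix,
# so hstack can append it as one extra column per example
combined = np.hstack((x, y.reshape(-1, 1)))
print(combined.shape)  # (4, 3): each row is [feature_1, feature_2, label]
```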
## Using the What-if Tool to interpret our model
With our test examples ready, we can now connect our model to the What-if Tool using the `WitWidget`. To use the What-if Tool with Cloud AI Platform, we need to send it:
* A Python list of our test features + ground truth labels
* Optionally, the names of our columns
* Our Cloud project, model, and version name (we've created a public one for you to play around with)
See the next cell for some exploration ideas in the What-if Tool.
## Create a What-if Tool visualization
This prediction adjustment function is needed as this xgboost model's prediction returns just a score for the positive class of the binary classification, whereas the What-If Tool expects a list of scores for each class (in this case, both the negative class and the positive class).
**NOTE:** The WIT may take a minute to load. While it is loading, review the parameters defined in the next cell, but do NOT run it; it is simply for reference.
```
# ******** DO NOT RUN THIS CELL ********
# TODO 1
PROJECT_ID = 'YOUR_PROJECT_ID'
MODEL_NAME = 'YOUR_MODEL_NAME'
VERSION_NAME = 'YOUR_VERSION_NAME'
TARGET_FEATURE = 'mortgage_status'
LABEL_VOCAB = ['denied', 'approved']
# TODO 1a
config_builder = (WitConfigBuilder(test_examples.tolist(), features.columns.tolist() + ['mortgage_status'])
.set_ai_platform_model(PROJECT_ID, MODEL_NAME, VERSION_NAME, adjust_prediction=adjust_prediction)
.set_target_feature(TARGET_FEATURE)
.set_label_vocab(LABEL_VOCAB))
```
Run this cell to load the WIT config builder. **NOTE:** The WIT may take a minute to load.
```
# TODO 1b
def adjust_prediction(pred):
return [1 - pred, pred]
config_builder = (WitConfigBuilder(test_examples.tolist(), features.columns.tolist() + ['mortgage_status'])
.set_ai_platform_model('wit-caip-demos', 'xgb_mortgage', 'v1', adjust_prediction=adjust_prediction)
.set_target_feature('mortgage_status')
.set_label_vocab(['denied', 'approved']))
WitWidget(config_builder, height=800)
```
## What-if Tool exploration using the XGBoost Model
#### TODO 2
* **Individual data points**: The default graph shows all data points from the test set, colored by their ground truth label (approved or denied)
* Try selecting data points close to the middle and tweaking some of their feature values. Then run inference again to see if the model prediction changes
* Select a data point and then move the "Show nearest counterfactual datapoint" slider to the right. This will highlight a data point with feature values closest to your original one, but with a different prediction
#### TODO 2a
* **Binning data**: Create separate graphs for individual features
* From the "Binning - X axis" dropdown, try selecting one of the agency codes, for example "Department of Housing and Urban Development (HUD)". This will create 2 separate graphs, one for loan applications from the HUD (graph labeled 1), and one for all other agencies (graph labeled 0). This shows us that loans from this agency are more likely to be denied
#### TODO 2b
* **Exploring overall performance**: Click on the "Performance & Fairness" tab to view overall performance statistics on the model's results on the provided dataset, including confusion matrices, PR curves, and ROC curves.
* Experiment with the threshold slider, raising and lowering the positive classification score the model needs to return before it decides to predict "approved" for the loan, and see how it changes accuracy, false positives, and false negatives.
* On the left side "Slice by" menu, select "loan_purpose_Home purchase". You'll now see performance on the two subsets of your data: the "0" slice shows when the loan is not for a home purchase, and the "1" slice is for when the loan is for a home purchase. Notice that the model's false positive rate is much higher on loans for home purchases. If you expand the rows to look at the confusion matrices, you can see that the model predicts "approved" more often for home purchase loans.
* You can use the optimization buttons on the left side to have the tool auto-select different positive classification thresholds for each slice in order to achieve different goals. If you select the "Demographic parity" button, then the two thresholds will be adjusted so that the model predicts "approved" for a similar percentage of applicants in both slices. What does this do to the accuracy, false positives and false negatives for each slice?
Copyright 2020 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Simulation Experiments
We simulate a greybox fuzzing campaign to understand the behavior of discovery probability as more inputs are generated. In contrast to a blackbox fuzzer, a greybox fuzzer adds inputs to the corpus that discover a new species (e.g., that cover a new program branch).
We simulate `Trials = 30` greybox campaigns of length `Length = 50000` generated test inputs, where the subject has `S = 50` species. We take data points in exponentially increasing intervals (`NRange`).
Seeds are identified by their species (species are identified by a number between 0 and `S`). In order to derive the local species distribution for a seed, we assume that there exists some template distribution `Weights`. This captures that, irrespective of the current seed, some species are always abundant, while some are always rare. The `Gravity` parameter determines how close the local distribution of a seed is to that template distribution.
`Weights` is defined such that there are very few highly abundant species and a lot of very rare species. Specifically, species `i` is `Lambda = 1.3` times more likely than species `i + 1`.
The initial corpus contains a single seed of species 5 (`Corpus = c(5)`), and a very rare species is marked as "buggy" (`Bug = 40`).
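For readers following along outside R, the geometric template distribution can be sketched in Python as well (an illustrative translation, not part of the notebook itself):

```python
S = 50        # number of species
LAMBDA = 1.3  # probability decrease factor

# Each species is LAMBDA times more likely than the next; normalize to a distribution
weights = [LAMBDA ** -i for i in range(S)]
total = sum(weights)
weights = [w / total for w in weights]
```

A handful of species near index 0 carry most of the probability mass, while the tail species are vanishingly rare, which is exactly what makes the "buggy" species hard to reach.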
## Initialize
This Jupyter notebook allows you to simulate black- and greybox fuzzing campaigns to generate results presented in https://github.com/Adaptive-Bias/fse21_paper270.
We executed this notebook with the following versions.
```
✔ ggplot2 3.1.1   ✔ scales 1.0.0
✔ dplyr 1.0.6
```
To install a missing package use, e.g.,
```
install.packages("ggplot2",dependencies=TRUE)
```
To install a specific version of a package use, e.g.,
```
require(devtools)
install_version("ggplot2", version = "3.1.1", repos = "http://cran.us.r-project.org")
```
Required packages
```
library(ggplot2)
library(scales)
library(grid)
library(dplyr)
theme_set(theme_bw())
options(warn=-1)
```
## Simulation Configuration
```
# Configuration
S = 50 # number of species
Length = 50000 # number of test cases
Trials = 30 # number of experiment repetitions
NRange = c(1.2^seq(0,log2(Length)/log2(1.2)), Length) # data points
Lambda = 1.3 # probability decrease factor
Gravity = 100 # Impact of Weight on local distribution
Corpus = c(5) # We start with one initial seed that is neither too likely nor too unlikely if sampled from 1:S according to Weights.
Bug = 40 # id of buggy species (higher id means more difficult to find)
# Weights will be used to derive local distributions
# As Gravity approaches infty, a local distribution approaches Weights.
Weights = rep(0,S)
for (i in 1:S) {
if (i == 1)
Weights[i] = 0.5
else Weights[i] = Weights[i - 1] / Lambda
}
Weights = Weights / sum(Weights)
# Just some stats
print("10 smallest probabilities:")
print(tail(Weights, n=10))
print(paste("10 samples between 1 and ", S, ":", sep=""))
print(sample(1:S, replace=TRUE, 10, prob=Weights))
```
## How we derive the local distribution for a seed.
```
local_distribution = function(seed) {
local_weights = rep(0,S)
local_weights[seed] = 0.5
# Decrease probability on the left and right of seed.
left = seed - 1
while (left >= 1) {
local_weights[left] = local_weights[left + 1] / Lambda
left = left - 1
}
right = seed + 1
while (right <= S) {
local_weights[right] = local_weights[right - 1] / Lambda
right = right + 1
}
# Add to Weights. The key idea is that some species (statements/branches)
# are always very likely or very unlikely, irrespective of the seed.
local_weights = local_weights / Gravity + Weights
# Normalize
return(local_weights / sum(local_weights))
}
```
## Power Schedule
We start with a power schedule that chooses all seeds uniformly.
```
power_schedule = function(corpus) {
# Uniform distribution
return (rep(1/length(corpus), length(corpus)))
# Prefer later seeds
#weights = seq(1:length(corpus))
#return (weights / sum(weights))
}
print(power_schedule(Corpus))
print(power_schedule(c(1,2)))
```
Let's Simulate
```
data = data.frame("Run"=character(),"Fuzzer"=character(), "n"=integer(), "factor"=character(), "value"=numeric())
for (N in NRange) {
# Assuming there is only one initial seed
pi = local_distribution(Corpus[1])
data = rbind(data,
data.frame(Run=1, # We only need one run as there is no sampling
Fuzzer="Blackbox", n=N, factor="Discovery probability", value=sum(pi * (1-pi)^N)),
data.frame(Run=1, Fuzzer="Blackbox", n=N, factor="#Species discovered", value=sum(1-(1-pi)^N)),
data.frame(Run=1, Fuzzer="Blackbox", n=N, factor="Residual risk", value=pi[Bug]))
}
time_to_error = c()
for (run in 1:Trials) {
timestamps = list()
# Construct corpus
current_C = Corpus
for (N in 1:Length) {
seed = sample(1:length(current_C), 1, prob=power_schedule(current_C))
species = sample(1:S, 1, prob=local_distribution(seed))
#print(current_C)
if (! species %in% current_C) {
current_C = c(current_C, species)
timestamps[[toString(N)]] = current_C
}
}
# Derive discovery probability values at time stamps
for (N in NRange) {
# Find current corpus at time N
current_C = Corpus
for (n in names(timestamps)) {
if (as.integer(n) >= N)
break
else
current_C = timestamps[[n]]
}
if (Bug %in% current_C) {
time_to_error = c(time_to_error, as.integer(n))
}
# Derive global distribution pi from local distributions and power schedule
pi = rep(0,S)
qt = power_schedule(current_C)
for (t in 1:length(current_C)) {
pi = pi + qt[t] * local_distribution(current_C[t])
}
# Compute discovery probability
discovery_probability = sum(unlist(lapply(sample(1:S, N*2, replace=TRUE, prob=pi), function(x) ifelse(x %in% current_C,0,1)))) / (N * 2)
# Append to data frame
data = rbind(data,
data.frame(Run=run, Fuzzer="Greybox", n=N, factor="Discovery probability",value=discovery_probability),
data.frame(Run=run, Fuzzer="Greybox", n=N, factor="#Species discovered", value=length(current_C)),
data.frame(Run=run, Fuzzer="Greybox", n=N, factor="Bug Probability", value=pi[Bug]))
}
}
print(paste("[Blackbox] Avg. time to error: ", 1/local_distribution(Corpus[1])[Bug]))
if (length(time_to_error) == 0) {
print("[Greybox] Error not found.")
} else {
print(paste("[Greybox] Avg. time to error: ", mean(time_to_error)))
}
summary(subset(data,factor=="Discovery probability"))
```
Compute the average residual risk and number of discovered species over all runs.
```
mean_data = data %>%
group_by(Fuzzer, factor, n) %>%
summarize(value = mean(value, na.rm = TRUE))
summary(mean_data)
```
## Results
```
ggplot(subset(data, factor=="Discovery probability"), aes(n, value)) +
geom_point(aes(shape=Fuzzer),color="gray") +
geom_line(data=subset(mean_data, factor=="Discovery probability"), aes(n, value, linetype=Fuzzer)) +
#geom_smooth(aes(linetype=Fuzzer), color ="black") +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_log10("Discovery prob. Delta(n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
#facet_wrap(~ factor, ncol=1, scale="free") +
theme(axis.text.x = element_text(colour="black"), axis.text.y = element_text(colour="black"), legend.position = c(0.85, 0.75))
ggsave("../outputs/residual.pdf",scale=0.4)
ggplot(subset(data, factor=="#Species discovered"), aes(n, value / S)) +
geom_point(aes(shape=Fuzzer), color="gray") +
#geom_smooth(aes(linetype=Fuzzer), color ="black") +
geom_line(data=subset(mean_data, factor=="#Species discovered"), aes(n, value / S, linetype=Fuzzer)) +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_continuous("Species coverage S(n)/S",
labels=percent) +
theme(axis.text.x = element_text(colour="black"), axis.text.y = element_text(colour="black"), legend.position = c(0.85, 0.25))
ggsave("../outputs/species.pdf",scale=0.4)
```
## Goodness-Of-Fit of an Extrapolation by Orders of Magnitude
```
summary(lm(formula = log(subset(data, Fuzzer=="Blackbox" & factor=="Discovery probability" & value>0)$n) ~ log(subset(data, Fuzzer=="Blackbox" & factor=="Discovery probability" & value>0)$value)))
summary(lm(formula = log(subset(data, Fuzzer=="Greybox" & factor=="Discovery probability" & value>0)$n) ~ log(subset(data, Fuzzer=="Greybox" & factor=="Discovery probability" & value>0)$value)))
logbias = log10(subset(data, Fuzzer == "Greybox" & factor=="Discovery probability" )$value) - log10(subset(data, Fuzzer == "Blackbox"& factor=="Discovery probability")$value)
logbias_avg = log10(subset(mean_data, Fuzzer == "Greybox" & factor=="Discovery probability" )$value) - log10(subset(mean_data, Fuzzer == "Blackbox"& factor=="Discovery probability")$value)
p1 = ggplot() +
geom_point(aes(x=rep(NRange,Trials), y=logbias), color="gray") +
geom_hline(yintercept=0, color="black", linetype="dashed") +
geom_line(aes(x=NRange, y=logbias_avg), color="black") +
#geom_smooth(aes(x=rep(NRange,1), y=bias), color="black") +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
    scale_y_continuous("Adaptive bias (log10(GB) - log10(BB))") +
#facet_wrap(~c("Log-Difference (log(GB)-log(BB)"),strip.position="right") +
theme(axis.text.x = element_text(colour="black"), axis.text.y = element_text(colour="black"))
#ggsave("bias.pdf",scale=0.4)
bias = (subset(data, Fuzzer == "Greybox" & factor=="Discovery probability" )$value) - (subset(data, Fuzzer == "Blackbox"& factor=="Discovery probability")$value)
bias_avg = (subset(mean_data, Fuzzer == "Greybox" & factor=="Discovery probability" )$value) - (subset(mean_data, Fuzzer == "Blackbox"& factor=="Discovery probability")$value)
p2 = ggplot() +
geom_point(aes(x=rep(NRange,Trials), y=bias), color="gray") +
geom_hline(yintercept=0, color="black", linetype="dashed") +
geom_line(aes(x=NRange, y=bias_avg), color="black") +
#geom_smooth(aes(x=rep(NRange,1), y=bias), color="black") +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
    scale_y_continuous("Adaptive bias (GB - BB)") +
#facet_wrap(~c("Difference (GB - BB)"),strip.position="right") +
theme(axis.text.x = element_blank(), axis.title.x = element_blank(), axis.ticks=element_blank(),
plot.margin = unit(c(1,1,0,1), "cm"))
grid.newpage()
grid.draw(rbind(ggplotGrob(p2), ggplotGrob(p1), size = "last"))
pdf("../outputs/bias.pdf", width=5, height=5)
grid.newpage()
grid.draw(rbind(ggplotGrob(p2), ggplotGrob(p1), size = "last"))
dev.off()
ggplot(subset(data, Fuzzer == "Greybox" & factor %in% c("Bug Probability","Discovery probability")), aes(n, value)) +
geom_point(aes(shape=factor),color="gray") +
#geom_smooth(aes(linetype=factor), color ="black") +
geom_line(data=subset(mean_data, Fuzzer == "Greybox" & factor %in% c("Bug Probability","Discovery probability")), aes(linetype=factor),color="black") +
geom_vline(aes(xintercept=mean(time_to_error)),linetype="dashed") +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_log10("Probability",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
facet_wrap(~Fuzzer) +
theme(axis.text.x = element_text(colour="black"), axis.text.y = element_text(colour="black"), legend.position = "bottom")
ggsave("../outputs/risk.pdf",scale=0.4, height=8)
```
# Random Forest
The aim of this part of the workshop is to give you initial experience in using *random forests*, which is a popular ensemble method that was presented earlier in the lectures. A particular emphasis is given to the *out-of-bag* error (sometimes called out-of-sample error) that can be used to select random forest model complexity.
As a first step, setup the ipython notebook environment to include numpy, scipy, matplotlib etc.
```
%pylab inline
import numpy as np
import matplotlib.pyplot as plt
```
In this tutorial, we are going to use synthetic data. You can also repeat the steps of this worksheet with the Cats dataset from the previous worksheet.
We are going to generate data using the function defined below. This function produces an S-shaped dataset which is mostly separable, but not necessarily linearly separable; we can control the degree of separability. The resulting dataset is two-dimensional (so that we can plot it) with a binary label. That is, the dataset is an $N\times2$ array of instances coupled with an $N\times1$ array of labels. The classes are encoded as $-1$ and $1$.
Since the dataset is a tuple of two arrays, we are going to use a special data structure called *named tuple* from a Python module called *collections*.
```
import collections
def generate_s_shaped_data(gap=3):
x = np.random.randn(80, 2)
x[10:20] += np.array([3, 4])
x[20:30] += np.array([0, 8])
x[30:40] += np.array([3, 12])
x[40:50] += np.array([gap, 0])
x[50:60] += np.array([3 + gap, 4])
x[60:70] += np.array([gap, 8])
x[70:80] += np.array([3 + gap, 12])
t = np.hstack([-np.ones(40), np.ones(40)])
d = collections.namedtuple('Dataset', ['x', 't'])
d.x = x
d.t = t
return d
```
We start with generating training data. A random forest is a non-linear model, so it's fine to generate data that is not linearly separable.
```
d = generate_s_shaped_data(2)
x = d.x
t = d.t
plt.plot(x[t==-1,0], x[t==-1,1], "o")
plt.plot(x[t==1,0], x[t==1,1], "o")
```
Next, we need to import modules that implement random forests. The current implementation yields a lot of annoying deprecation warnings that we don't need to care about in this tutorial. Therefore, we suppress the warnings.
```
from sklearn.ensemble import RandomForestClassifier
import warnings
warnings.filterwarnings('ignore')
```
Key parameters for training a random forest are the number of trees (*n_estimators*) and the number of features used in each iteration (*max_features*). First, we are going to train a random forest with only a few trees. Moreover, in this exercise we are working with simple 2D data, so there is not much room for feature subset selection, and we are going to use both features. Note that *max_features=None* instructs the function to use all features.
```
model = RandomForestClassifier(n_estimators=5, max_features=None)
model.fit(x, t)
```
We now generate test data using the same setup as for the training data. In essence, this mimics the holdout validation approach where the dataset is randomly split in halves for training and testing. The test dataset should look similar, but not identical, to the training data.
```
d = generate_s_shaped_data(2)
x_heldout = d.x
t_heldout = d.t
plt.plot(x_heldout[t_heldout==-1,0], x_heldout[t_heldout==-1,1], "o")
plt.plot(x_heldout[t_heldout==1,0], x_heldout[t_heldout==1,1], "o")
```
Let's see what the trained forest predicts for the test data. We will use an auxiliary variable $r$ as an indicator of whether the prediction was correct. We will plot correctly classified points in blue and orange for the two classes, and misclassified points in black. Recall that this is the same test data as above, but instead of color-coding labels, we show whether a point was misclassified or not.
```
y = model.predict(x_heldout)
r = y+t_heldout
plt.plot(x_heldout[r==-2,0], x_heldout[r==-2,1], "o")
plt.plot(x_heldout[r==2,0], x_heldout[r==2,1], "o")
plt.plot(x_heldout[r==0,0], x_heldout[r==0,1], "ok")
print('Proportion misclassified:')
print(1 - np.sum(y == t_heldout) / float(t_heldout.shape[0]))
```
It looks like there are quite a few mistakes. Perhaps this is because we are using too few trees. What do you expect to happen if we increase the number of trees? Try different values to see if you can achieve a perfect classification.
One way of choosing the number of trees would be to measure the error on the test set (heldout set) for different numbers of trees. However, recall from the lectures that the out-of-bag (aka out-of-sample) error can be used to estimate the test error. The *error* here means the proportion of misclassified cases. We are going to use the out-of-bag error for choosing the number of trees. Note the *oob_score* parameter that instructs the function to remember the out-of-bag errors in each training iteration.
```
list_num_trees = [1, 2, 4, 8, 16, 32, 64]
num_cases = len(list_num_trees)
oob_error = np.zeros(num_cases)
test_error = np.zeros(num_cases)
for i in range(num_cases):
model= RandomForestClassifier(n_estimators=list_num_trees[i], max_features=None, oob_score=True)
model.fit(x, t)
oob_error[i] = 1 - model.oob_score_
y = model.predict(x_heldout)
test_error[i] = 1 - np.sum(y == t_heldout) / float(t_heldout.shape[0])
plot(list_num_trees, oob_error, 'b')
plot(list_num_trees, test_error, 'g')
```
We can see that increasing the number of trees helps, but performance eventually stabilizes. Note that we only use training data for computing the out-of-bag error. As an exercise, implement the heldout validation approach and compare the two errors as a function of the number of trees.
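A minimal sketch of that comparison, using synthetic stand-in data from `make_classification` rather than the worksheet's generator:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in 2D data; the worksheet would use generate_s_shaped_data instead
X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

for n_trees in (16, 32, 64):
    model = RandomForestClassifier(n_estimators=n_trees, max_features=None,
                                   oob_score=True, random_state=0)
    model.fit(X_train, y_train)
    oob_error = 1 - model.oob_score_          # estimated from training data only
    heldout_error = 1 - model.score(X_test, y_test)
    print(f'{n_trees:2d} trees: oob={oob_error:.3f}, heldout={heldout_error:.3f}')
```

Both errors should track each other reasonably well, which is the point: the out-of-bag estimate gives a test-error proxy without sacrificing any data to a holdout split.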
# Visual Tracking Practice with MDNet
- **Instructor**: Jongwoo Lim / Jiun Bae
- **Email**: [jlim@hanyang.ac.kr](mailto:jlim@hanyang.ac.kr) / [jiun.maydev@gmail.com](mailto:jiun.maydev@gmail.com)
```
from pathlib import Path
import yaml
import numpy as np
import pandas as pd
import torch
from models.mdnet import MDNet, BCELoss, Precision
from models.extractor import SampleGenerator, RegionDataset
```
## Dataset
Download the OTB-50 and OTB-100 datasets from [CVLab](http://cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html).
```
class Dataset:
def __init__(self, root: str, options):
self.sequences, self.images, self.ground_truths = map(list, zip(*[(
str(seq.stem),
list(map(str, sorted(seq.glob('img/*.jpg')))),
pd.read_csv(str(seq.joinpath('groundtruth_rect.txt')), header=None, sep=r'\,|\t|\ ', engine='python').values,
) for seq in filter(lambda p: p.is_dir(), Path(root).iterdir())]))
        # keep each image list and its ground-truth annotations the same length
for i, _ in enumerate(self.sequences):
if len(self.images[i]) != np.size(self.ground_truths[i], 0):
self.images[i] = self.images[i][:self.ground_truths[i].shape[0]]
self.regions = [RegionDataset(i, g, options) for i, g in zip(self.images, self.ground_truths)]
def __len__(self):
return len(self.sequences)
def __getitem__(self, idx):
return self.regions[idx]
def __iter__(self):
yield from self.regions
```
## Prepare Environments
Download pre-trained imagenet-vgg weights from [link](http://www.vlfeat.org/matconvnet/models/imagenet-vgg-m.mat)
```
!wget "http://www.vlfeat.org/matconvnet/models/imagenet-vgg-m.mat"  # leading "!" runs the shell command from the notebook
```
```
opts = yaml.safe_load(open('options.yaml','r'))
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dataset = Dataset('/home/jiun/workspace/deeplearning_practice/datasets', opts)
model = MDNet(opts['init_model_path'], len(dataset)).to(device)
model.set_learnable_params(opts['ft_layers'])
criterion = BCELoss()
evaluator = Precision()
optimizer = model.optimizer(opts['lr'], opts['lr_mult'])
```
## Train model
```
for b in range(opts['n_cycles']):
model.train()
prec = np.zeros(len(dataset))
permute = np.random.permutation(len(dataset))
for i, j in enumerate(permute):
pos, neg = dataset[j].next()
pos_loss = model(pos.to(device), j)
neg_loss = model(neg.to(device), j)
loss = criterion(pos_loss, neg_loss)
model.zero_grad()
loss.backward()
if 'grad_clip' in opts:
torch.nn.utils.clip_grad_norm_(model.parameters(), opts['grad_clip'])
optimizer.step()
prec[j] = evaluator(pos_loss, neg_loss)
if not i % 10:
print(f'Iter {i:2d} (Domain {j:2d}), Loss {loss.item():.3f}, Precision {prec[j]:.3f}')
print(f'Batch {b:2d}: Mean Precision: {prec.mean():.3f}')
torch.save({
'shared_layers': model.cpu().layers.state_dict()
}, opts['model_path'])
model = model.to(device)
```
## Inference
```
from PIL import Image
import cv2
import matplotlib.pyplot as plt
import torch.optim as optim
from utils import Options, overlap_ratio
from models.extractor import RegionExtractor
from models.regressor import BBRegressor
def forward_samples(model, image, samples, opts, out_layer='conv3'):
model.eval()
extractor = RegionExtractor(image, samples, opts.img_size, opts.padding, opts.batch_test)
for i, regions in enumerate(extractor):
if opts.use_gpu:
regions = regions.cuda()
with torch.no_grad():
feat = model(regions, out_layer=out_layer)
feats = torch.cat((feats, feat.detach().clone()), 0) if i else feat.detach().clone()
return feats
def train(model, criterion, optimizer,
pos_feats, neg_feats, maxiter, opts,
in_layer='fc4'):
model.train()
batch_pos = opts.batch_pos
batch_neg = opts.batch_neg
batch_test = opts.batch_test
batch_neg_cand = max(opts.batch_neg_cand, batch_neg)
pos_idx = np.random.permutation(pos_feats.size(0))
neg_idx = np.random.permutation(neg_feats.size(0))
while len(pos_idx) < batch_pos * maxiter:
pos_idx = np.concatenate([pos_idx, np.random.permutation(pos_feats.size(0))])
while len(neg_idx) < batch_neg_cand * maxiter:
neg_idx = np.concatenate([neg_idx, np.random.permutation(neg_feats.size(0))])
pos_pointer = 0
neg_pointer = 0
for _ in range(maxiter):
# select pos idx
pos_next = pos_pointer + batch_pos
pos_cur_idx = pos_idx[pos_pointer:pos_next]
pos_cur_idx = pos_feats.new(pos_cur_idx).long()
pos_pointer = pos_next
# select neg idx
neg_next = neg_pointer + batch_neg_cand
neg_cur_idx = neg_idx[neg_pointer:neg_next]
neg_cur_idx = neg_feats.new(neg_cur_idx).long()
neg_pointer = neg_next
# create batch
batch_pos_feats = pos_feats[pos_cur_idx]
batch_neg_feats = neg_feats[neg_cur_idx]
# hard negative mining
if batch_neg_cand > batch_neg:
model.eval()
for start in range(0, batch_neg_cand, batch_test):
end = min(start + batch_test, batch_neg_cand)
with torch.no_grad():
score = model(batch_neg_feats[start:end], in_layer=in_layer)
if start == 0:
neg_cand_score = score.detach()[:, 1].clone()
else:
neg_cand_score = torch.cat((neg_cand_score, score.detach()[:, 1].clone()), 0)
_, top_idx = neg_cand_score.topk(batch_neg)
batch_neg_feats = batch_neg_feats[top_idx]
model.train()
# forward
pos_score = model(batch_pos_feats, in_layer=in_layer)
neg_score = model(batch_neg_feats, in_layer=in_layer)
# optimize
loss = criterion(pos_score, neg_score)
model.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), opts.grad_clip)
optimizer.step()
def main(images, init_bbox, ground_truths, opts):
device = ('cuda' if opts.use_gpu else 'cpu')
model = MDNet(opts.model_path).to(device)
criterion = BCELoss()
# Set learnable parameters
for k, p in model.params.items():
p.requires_grad = any([k.startswith(l) for l in opts.ft_layers])
# Set optimizer states
def set_optimizer(lr_base, lr_mult, momentum=0.9, w_decay=0.0005):
param_list = []
for k, p in filter(lambda kp: kp[1].requires_grad, model.params.items()):
lr = lr_base
for l, m in lr_mult.items():
if k.startswith(l):
lr = lr_base * m
param_list.append({'params': [p], 'lr': lr})
return optim.SGD(param_list, lr=lr, momentum=momentum, weight_decay=w_decay)
init_optimizer = set_optimizer(opts.lr_init, opts.lr_mult)
update_optimizer = set_optimizer(opts.lr_update, opts.lr_mult)
# Load first image
image = Image.open(images[0]).convert('RGB')
# Draw pos/neg samples
pos_examples = SampleGenerator('gaussian', image.size, opts.trans_pos, opts.scale_pos)(
init_bbox, opts.n_pos_init, opts.overlap_pos_init)
neg_examples = np.concatenate([
SampleGenerator('uniform', image.size, opts.trans_neg_init, opts.scale_neg_init)(
init_bbox, int(opts.n_neg_init * 0.5), opts.overlap_neg_init),
SampleGenerator('whole', image.size)(
init_bbox, int(opts.n_neg_init * 0.5), opts.overlap_neg_init)])
neg_examples = np.random.permutation(neg_examples)
# Extract pos/neg features
pos_feats = forward_samples(model, image, pos_examples, opts)
neg_feats = forward_samples(model, image, neg_examples, opts)
# Initial training
train(model, criterion, init_optimizer, pos_feats, neg_feats, opts.maxiter_init, opts)
del init_optimizer, neg_feats
torch.cuda.empty_cache()
# Train bbox regressor
bbreg_examples = SampleGenerator('uniform', image.size, opts.trans_bbreg, opts.scale_bbreg, opts.aspect_bbreg)\
(init_bbox, opts.n_bbreg, opts.overlap_bbreg)
bbreg_feats = forward_samples(model, image, bbreg_examples, opts)
bbreg = BBRegressor(image.size)
bbreg.train(bbreg_feats, bbreg_examples, init_bbox)
del bbreg_feats
torch.cuda.empty_cache()
# Init sample generators for update
sample_generator = SampleGenerator('gaussian', image.size, opts.trans, opts.scale)
pos_generator = SampleGenerator('gaussian', image.size, opts.trans_pos, opts.scale_pos)
neg_generator = SampleGenerator('uniform', image.size, opts.trans_neg, opts.scale_neg)
# Init pos/neg features for update
neg_examples = neg_generator(init_bbox, opts.n_neg_update, opts.overlap_neg_init)
neg_feats = forward_samples(model, image, neg_examples, opts)
pos_feats_all = [pos_feats]
neg_feats_all = [neg_feats]
# Main loop
for i, image in enumerate(images[1:], 1):
image = Image.open(image).convert('RGB')
# Estimate target bbox
samples = sample_generator(init_bbox, opts.n_samples)
sample_scores = forward_samples(model, image, samples, opts, out_layer='fc6')
top_scores, top_idx = sample_scores[:, 1].topk(5)
top_idx = top_idx.cpu()
target_score = top_scores.mean()
init_bbox = samples[top_idx]
if top_idx.shape[0] > 1:
init_bbox = init_bbox.mean(axis=0)
success = target_score > 0
# Expand search area at failure
sample_generator.trans = opts.trans if success else min(sample_generator.trans * 1.1, opts.trans_limit)
# Bbox regression
if success:
bbreg_samples = samples[top_idx]
if top_idx.shape[0] == 1:
bbreg_samples = bbreg_samples[None, :]
bbreg_feats = forward_samples(model, image, bbreg_samples, opts)
bbreg_samples = bbreg.predict(bbreg_feats, bbreg_samples)
bbreg_bbox = bbreg_samples.mean(axis=0)
else:
bbreg_bbox = init_bbox
yield init_bbox, bbreg_bbox, overlap_ratio(ground_truths[i], bbreg_bbox)[0], target_score
# Data collect
if success:
pos_examples = pos_generator(init_bbox, opts.n_pos_update, opts.overlap_pos_update)
pos_feats = forward_samples(model, image, pos_examples, opts)
pos_feats_all.append(pos_feats)
if len(pos_feats_all) > opts.n_frames_long:
del pos_feats_all[0]
neg_examples = neg_generator(init_bbox, opts.n_neg_update, opts.overlap_neg_update)
neg_feats = forward_samples(model, image, neg_examples, opts)
neg_feats_all.append(neg_feats)
if len(neg_feats_all) > opts.n_frames_short:
del neg_feats_all[0]
# Short term update
if not success:
nframes = min(opts.n_frames_short, len(pos_feats_all))
pos_data = torch.cat(pos_feats_all[-nframes:], 0)
neg_data = torch.cat(neg_feats_all, 0)
train(model, criterion, update_optimizer, pos_data, neg_data, opts.maxiter_update, opts)
# Long term update
elif i % opts.long_interval == 0:
pos_data = torch.cat(pos_feats_all, 0)
neg_data = torch.cat(neg_feats_all, 0)
train(model, criterion, update_optimizer, pos_data, neg_data, opts.maxiter_update, opts)
torch.cuda.empty_cache()
```
### (Optional)
Refresh image output in IPython
```
from IPython.display import clear_output
%matplotlib inline
```
## Showcase
```
options = Options()
dataset = Path('./datasets/DragonBaby')
images = list(sorted(dataset.joinpath('img').glob('*.jpg')))
ground_truths = pd.read_csv(str(dataset.joinpath('groundtruth_rect.txt')), header=None).values
iou, success = 0, 0
# Run tracker
for i, (result, (x, y, w, h), overlap, score) in \
enumerate(main(images, ground_truths[0], ground_truths, options), 1):
clear_output(wait=True)
image = np.asarray(Image.open(images[i]).convert('RGB'))
gx, gy, gw, gh = ground_truths[i]
cv2.rectangle(image, (int(gx), int(gy)), (int(gx+gw), int(gy+gh)), (0, 255, 0), 2)
cv2.rectangle(image, (int(x), int(y)), (int(x+w), int(y+h)), (255, 0, 0), 2)
iou += overlap
success += overlap > .5
plt.imshow(image)
plt.pause(.1)
plt.title(f'#{i}/{len(images)-1}, Overlap {overlap:.3f}, Score {score:.3f}')
plt.draw()
iou /= len(images) - 1
print(f'Mean IOU: {iou:.3f}, Success: {success} / {len(images)-1}')
```
# MNIST using RNN
## Getting started
* Import the libraries we will use.
* This example uses TensorFlow.
* The dataset is the MNIST dataset provided with TensorFlow.
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
import os
import time
tf.reset_default_graph()
%matplotlib inline
```
## Loading the dataset
* Load the dataset we will use.
* images: grayscale (1-channel) images of handwritten digits 0 through 9.
* label: represented in one-hot form.
    - 0 = [1 0 0 0 0 0 0 0 0 0]
    - 1 = [0 1 0 0 0 0 0 0 0 0]
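The one-hot scheme itself is easy to reproduce with plain NumPy. This is an illustrative sketch, separate from the TensorFlow pipeline used below:

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Return a (len(labels), num_classes) one-hot matrix."""
    return np.eye(num_classes, dtype=int)[np.asarray(labels)]

print(one_hot([0, 1]))
# → [[1 0 0 0 0 0 0 0 0 0]
#    [0 1 0 0 0 0 0 0 0 0]]
```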
```
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
print("Number of train data is %d" % (mnist.train.num_examples))
print("Number of test data is %d" % (mnist.test.num_examples))
```
## Checking the dataset
Check a few images and their labels from the training data.
```
nsample = 3
rand_idx = np.random.randint(mnist.train.images.shape[0], size=nsample)
for i in rand_idx:
curr_img = np.reshape(mnist.train.images[i, :], (28,28))
curr_lbl = np.argmax(mnist.train.labels[i, :])
plt.matshow(curr_img, cmap=plt.get_cmap('gray'))
plt.title(""+str(i)+"th training image "
+ "(label: " + str(curr_lbl) + ")")
plt.show()
```
## Building an RNN for image classification
Each 28x28 image is arranged for RNN input as follows:
* input_vec_size: one row (28 pixels) of the image is fed into the RNN at each step.
* time_step_size: feeding a whole 28x28 image takes 28 rows, so the number of time steps is 28.
* lstm_size: the number of hidden units in the RNN cell.
```
# configuration
# O * W + b -> 10 labels for each image, O[? 28], W[28 10], B[10]
# ^ (O: output 28 vec from 28 vec input)
# |
# +-+ +-+ +--+
# |1|->|2|-> ... |28| time_step_size = 28
# +-+ +-+ +--+
# ^ ^ ... ^
# | | |
# img1:[28] [28] ... [28]
# img2:[28] [28] ... [28]
# img3:[28] [28] ... [28]
# ...
# img128 or img256 (batch_size or test_size 256)
# each input size = input_vec_size=lstm_size=28
input_vec_size = lstm_size = 28
time_step_size = 28
batch_size = 128
test_size = 256
```
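The transpose → reshape → split pipeline used by the model below can be previewed with plain NumPy before involving TensorFlow. This is a shape-only sketch; the batch size of 4 is arbitrary:

```python
import numpy as np

batch_size, time_steps, input_vec = 4, 28, 28
X = np.zeros((batch_size, time_steps, input_vec))

XT = X.transpose(1, 0, 2)            # (time_steps, batch, input_vec)
XR = XT.reshape(-1, input_vec)       # (time_steps * batch, input_vec)
X_split = np.split(XR, time_steps)   # 28 arrays, each (batch, input_vec)

print(len(X_split), X_split[0].shape)
# → 28 (4, 28)
```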
## Defining the RNN model
```
def init_weights(shape):
return tf.Variable(tf.random_normal(shape, stddev=0.01))
def model(X, W, B, lstm_size):
# X, input shape: (batch_size, time_step_size, input_vec_size)
XT = tf.transpose(X, [1, 0, 2]) # permute time_step_size and batch_size
# XT shape: (time_step_size, batch_size, input_vec_size)
XR = tf.reshape(XT, [-1, lstm_size]) # each row has input for each lstm cell (lstm_size=input_vec_size)
# XR shape: (time_step_size * batch_size, input_vec_size)
    X_split = tf.split(XR, time_step_size, 0) # split them to time_step_size (28 arrays); TF 1.x argument order is (value, num, axis)
# Each array shape: (batch_size, input_vec_size)
# Make lstm with lstm_size (each input vector size)
lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size, forget_bias=1.0, state_is_tuple=True)
# Get lstm cell output, time_step_size (28) arrays with lstm_size output: (batch_size, lstm_size)
    outputs, _states = tf.nn.static_rnn(lstm, X_split, dtype=tf.float32)  # tf.nn.rnn was renamed to static_rnn in TF 1.x
# Linear activation
# Get the last output
    return tf.matmul(outputs[-1], W) + B, lstm.state_size # state size to initialize the state
```
## Transforming the data
* Reshape the training and test data from 1-D vectors into 2-D (28x28) form.
* Declare placeholders for the input and the output.
```
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
trX = trX.reshape(-1, 28, 28)
teX = teX.reshape(-1, 28, 28)
X = tf.placeholder("float", [None, 28, 28])
Y = tf.placeholder("float", [None, 10])
```
## Declaring the model
* Create weights sized to the hidden units and the output units.
* Build the model.
```
# get lstm_size and output 10 labels
W = init_weights([lstm_size, 10])
B = init_weights([10])
py_x, state_size = model(X, W, B, lstm_size)
```
## Declaring the loss function and optimizer
```
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))  # keyword arguments are required in TF 1.x
train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
predict_op = tf.argmax(py_x, 1)
```
## Creating a session and measuring accuracy during training
```
# Launch the graph in a session
with tf.Session() as sess:
# you need to initialize all variables
    tf.global_variables_initializer().run()  # tf.initialize_all_variables is deprecated in TF 1.x
for i in range(100):
for start, end in zip(range(0, len(trX), batch_size), range(batch_size, len(trX)+1, batch_size)):
sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})
test_indices = np.arange(len(teX)) # Get A Test Batch
np.random.shuffle(test_indices)
test_indices = test_indices[0:test_size]
print(i, np.mean(np.argmax(teY[test_indices], axis=1) ==
sess.run(predict_op, feed_dict={X: teX[test_indices],
Y: teY[test_indices]})))
```
<a href="https://colab.research.google.com/github/pyGuru123/Data-Analysis-and-Visualization/blob/main/Tracking%20Bird%20Migration/bird_migration.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
One fascinating area of research uses GPS to track the movements of animals. It is now possible to manufacture a small GPS device that is solar charged, so you don't need to change batteries, and to use it to track the flight patterns of birds.
# GPS Tracking Gulls
The data for this case study comes from the LifeWatch INBO project. Since 1999, the Research Institute for Nature and Forest (INBO) has studied the postnuptial migration, and the mate and site fidelity, of large gulls using observer sightings of colour-ringed individuals. In the framework of the Flemish contribution to the LifeWatch infrastructure, a high-tech sensor network was installed (starting June 2013) to better monitor the habitat use and migration patterns of large birds, such as the European Herring Gull (Larus argentatus Pontoppidan). Several data sets have been released as part of this project. We will use a small data set that consists of migration data for three gulls named Eric, Nico, and Sanne.

Method steps
1. Researcher captures bird, takes biometrics, attaches GPS tracker, and
releases bird.
2. Researcher sets a measurement scheme, which can be updated anytime.
GPS tracker records data.
3. GPS tracker automatically receives new measurement settings and transmits recorded data when a connection can be established with the base station at the colony.
4. Recorded data are automatically harvested, post-processed, and stored in a central PostgreSQL database at UvA-BiTS.
5. Tracking data specific to LifeWatch Flanders are exported, cleaned, and enhanced monthly with a bird tracking ETL.
6. LifeWatch INBO team periodically (re)publishes data as a Darwin Core Archive, registered with GBIF.
7. Data stream stops when bird no longer returns to colony or if GPS tracker no longer functions (typical tracker lifespan: 2-3 years)
# Aim and requirements
**Aim** : Track the movement of three gulls namely โ Eric, Nico & Sanne
**Dataset** : [official_datasets](https://inbo.carto.com/u/lifewatch/datasets) ; used dataset โ [csv](https://d37djvu3ytnwxt.cloudfront.net/assets/courseware/v1/c72498a54a4513c2eb4ec005adc0010c/asset-v1:HarvardX+PH526x+3T2016+type@asset+block/bird_tracking.csv)
**Dependencies** : Requests, Matplotlib, Pandas, Numpy, Cartopy, Shapely
The csv file contains eight columns and includes variables like latitude, longitude, altitude, and time stamps. In this case study, we will first load the data, visualize some simple flight trajectories, track flight speed, learn about daytime and much, much more.
The case study is divided into six parts:
1. Visualizing longitude and latitude data of the gulls.
2. Visualize the variation of the speed of the gulls.
3. Visualize the time required by the gulls to cover equal distances over the journey.
4. Visualize the daily mean speed of the gulls.
5. Visualize the daily average altitude gulls fly at.
6. Cartographic view of the journey of the gulls.
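Before diving in, the shape of the data can be sketched with a toy frame. The column names below match those used later in this notebook; the values themselves are made up for illustration:

```python
import pandas as pd

toy = pd.DataFrame({
    'bird_name': ['Eric', 'Eric', 'Nico', 'Sanne'],
    'date_time': ['2013-08-15 00:18:08+00', '2013-08-15 00:48:07+00',
                  '2013-08-15 00:17:58+00', '2013-08-15 00:19:42+00'],
    'latitude':  [49.42, 49.42, 51.33, 51.34],
    'longitude': [2.12, 2.12, 3.18, 3.18],
    'altitude':  [71, 68, 10, 12],
    'speed_2d':  [0.15, 2.44, 0.44, 0.20],
})

# per-bird row counts, mirroring the loop used later in the notebook
print(toy.groupby('bird_name').size())
```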
Importing required libraries
```
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
Installing Cartopy in colab
```
!apt-get install libproj-dev proj-data proj-bin
!apt-get install libgeos-dev
!pip install cython
!pip install cartopy
```
Installing shapely in colab
```
!pip uninstall -y shapely  # -y skips the confirmation prompt
!pip install shapely --no-binary shapely
```
# Downloading and reading dataset
Downloading dataset
```
url = 'https://d37djvu3ytnwxt.cloudfront.net/assets/courseware/v1/c72498a54a4513c2eb4ec005adc0010c/asset-v1:HarvardX+PH526x+3T2016+type@asset+block/bird_tracking.csv'
response = requests.get(url)
data = response.content
with open('bird_tracking.csv', 'wb') as file:
file.write(data)
```
Reading csv data
```
df = pd.read_csv('bird_tracking.csv')
df.info()
```
printing dataframe
```
df
```
Getting bird names
```
bird_names = pd.unique(df.bird_name)
print(bird_names)
```
Number of row entries of each bird
```
for bird_name in bird_names:
print(bird_name, len(df.bird_name[df.bird_name == bird_name]))
```
# Visualizing longitude and latitude data of the gulls
Visualization of the location data for the bird Eric
```
ix = df.bird_name == 'Eric'
x, y = df.longitude[ix], df.latitude[ix]
plt.figure(figsize=(7, 7))
plt.plot(x,y, 'b.')
plt.title('location of bird Eric')
plt.show()
```
Visualization of all three birds as subplots
```
plt.style.use('fivethirtyeight')
colors = ['blue', 'orange', 'green']
fig, subplot = plt.subplots(1, 3, figsize=(15,6))
for index, bird_name in enumerate(bird_names):
ix = df.bird_name == bird_name
x, y = df.longitude[ix], df.latitude[ix]
subplot[index].plot(x,y, colors[index] )
subplot[index].title.set_text(bird_name)
plt.show()
```
Visualization of all three birds in 1 plot
```
plt.style.use('default')
plt.figure(figsize = (7,7))
for bird_name in bird_names:
ix = df.bird_name == bird_name
x, y = df.longitude[ix], df.latitude[ix]
plt.plot(x,y , '.', label=bird_name)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.legend(loc="lower right")
plt.show()
```
# Visualize the variation of the speed of the gulls
We are going to visualize 2D speed vs frequency for the gull named "Sanne".
```
ix = df.bird_name == 'Sanne'
speed = df.speed_2d[ix]
plt.figure(figsize = (8,4))
ind = np.isnan(speed)
plt.hist(speed[~ind], bins = np.linspace(0,30,20))
plt.title('2D Speed vs Frequency for Sanne')
plt.xlabel(" 2D speed (m/s) ")
plt.ylabel(" Frequency ")
plt.show()
```
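The `np.isnan` masking used above is worth noting: NaN entries would otherwise distort the histogram bins. A minimal sketch with made-up speeds:

```python
import numpy as np

speed = np.array([1.2, np.nan, 3.4, np.nan, 0.5])
mask = np.isnan(speed)
clean = speed[~mask]   # keep only the finite entries
print(clean)
# → [1.2 3.4 0.5]
```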
Speed vs frequency distribution for each bird
```
plt.figure(figsize = (8,4))
for bird_name in bird_names:
ix = df.bird_name == bird_name
speed = df.speed_2d[ix]
ind = np.isnan(speed)
plt.hist(speed[~ind], bins = np.linspace(0,30,30), label=bird_name ,stacked=True)
plt.title('2D Speed vs Frequency for Each bird')
plt.xlabel(" 2D speed (m/s) ")
plt.ylabel(" Frequency ")
plt.legend()
plt.show()
```
# Visualize the time required by the gulls to cover equal distances over the journey
```
print(len(df))
df
```
Extracting the date-time column and converting it to a readable form
```
import datetime
timestamps = []
for i in range(len(df)):
dt = datetime.datetime.strptime(df.date_time.iloc[i][:-3], "%Y-%m-%d %H:%M:%S")
timestamps.append(dt)
```
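The parsing step above can be checked in isolation. The trailing `[:-3]` strips a UTC-offset suffix such as `+00` from the raw strings; the sample string below is illustrative:

```python
import datetime

raw = '2013-08-15 00:18:08+00'
dt = datetime.datetime.strptime(raw[:-3], "%Y-%m-%d %H:%M:%S")
print(dt.year, dt.month, dt.hour)
# → 2013 8 0

# differences between timestamps are timedelta objects,
# which divide cleanly into fractional days
later = dt + datetime.timedelta(hours=36)
print((later - dt) / datetime.timedelta(days=1))
# → 1.5
```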
Adding above extracted list as a new column in dataframe
```
df['timestamps'] = pd.Series(timestamps, index = df.index)
df
```
Extracting time rows for Eric
```
times = df.timestamps[df.bird_name == 'Eric']
print(times[:3])
elapsed_time = [time-times[0] for time in times]
print(elapsed_time[:3])
elapsed_time[-1]
plt.plot(np.array(elapsed_time)/datetime.timedelta(days=1))
plt.xlabel(" Observation ")
plt.ylabel(" Elapsed time (days) ")
plt.show()
plt.style.use('seaborn')
colors = ['blue', 'orange', 'green']
rows_index = [0, 19795, 40916]
fig, subplot = plt.subplots(1, 3, figsize=(14,7), sharex=True, sharey=True)
for index, bird_name in enumerate(bird_names):
times = df.timestamps[df.bird_name == bird_name]
elapsed_time = [time- times[rows_index[index]] for time in times]
subplot[index].plot(np.array(elapsed_time)/datetime.timedelta(days=1))
subplot[index].title.set_text(bird_name)
fig.text(0.5, 0.04, 'Observation', ha='center')
fig.text(0.04, 0.5, ' Elapsed time (days) ', va='center', rotation='vertical')
plt.show()
```
# Visualize the daily mean speed of the gulls
```
df
```
Daily mean speed of Eric
```
data = df[df.bird_name == "Eric"]
times = data.timestamps
elapsed_time = [(time - times[0]) for time in times]
elapsed_days = np.array(elapsed_time)/datetime.timedelta(days=1)
elapsed_days
next_day = 1
inds = []
daily_mean_speed = []
for (i,t) in enumerate(elapsed_days):
if t < next_day:
inds.append(i)
else:
daily_mean_speed.append(np.mean(data.speed_2d[inds]))
next_day += 1
inds = []
plt.style.use('default')
plt.figure(figsize = (8,6))
plt.plot(daily_mean_speed, "rs-")
plt.xlabel(" Day ")
plt.ylabel(" Mean Speed (m/s) ");
plt.show()
```
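The explicit loop above works, but the same day-by-day aggregation can be expressed more compactly with pandas. This is a hedged alternative sketch on toy stand-ins for the real `elapsed_days` and `speed_2d` columns:

```python
import numpy as np
import pandas as pd

# toy stand-ins for the real columns
elapsed_days = np.array([0.1, 0.5, 0.9, 1.2, 1.8, 2.3])
speed_2d = np.array([2.0, 4.0, 6.0, 1.0, 3.0, 5.0])

# group observations by whole elapsed day and take the mean speed
daily = pd.Series(speed_2d).groupby(np.floor(elapsed_days).astype(int)).mean()
print(daily.tolist())
# → [4.0, 2.0, 5.0]
```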
# Visualize the daily average altitude gulls fly at
```
df
def get_daily_mean_altitude(dataframe, index):
    times = dataframe.timestamps
    elapsed_time = [(time - times[index]) for time in times]
    elapsed_days = np.array(elapsed_time)/datetime.timedelta(days=1)
next_day = 1
inds = []
daily_mean_altitude = []
for (i,t) in enumerate(elapsed_days):
if t <= next_day:
inds.append(i)
else:
            daily_mean_altitude.append(np.mean(dataframe.altitude[inds]))
next_day += 1
inds = []
return daily_mean_altitude
data = df[df.bird_name == "Eric"]
daily_mean_altitude = get_daily_mean_altitude(data, 0)
daily_mean_altitude = pd.Series(daily_mean_altitude)
overall_mean = daily_mean_altitude.mean()
days = [i for i in range(len(daily_mean_altitude))]
overall_average = [overall_mean for i in range(len(daily_mean_altitude))]
plt.style.use('default')
plt.figure(figsize = (8,6))
plt.plot(daily_mean_altitude, "gs-")
plt.plot(overall_average, color='red')
plt.title('Average altitude at which Eric flies')
plt.xlabel(" Day ")
plt.ylabel(" Mean Altitude (m) ");
plt.show()
```
# Cartographic view of the journey of the gulls
```
import cartopy.crs as ccrs
import cartopy.feature as cfeature
proj = ccrs.Mercator()
colors = ['red', 'blue', 'green']
plt.style.use('ggplot')
plt.figure(figsize=(10,10))
ax = plt.axes(projection=proj)
ax.set_extent((-25.0, 20.0, 52.0, 10.0))
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linestyle=':')
for index, name in enumerate(bird_names):
ix = df['bird_name'] == name
x,y = df.longitude[ix], df.latitude[ix]
ax.plot(x,y,'.', transform=ccrs.Geodetic(), label=name, color=colors[index])
plt.legend(loc="upper left")
plt.show()
```
That's it
```
# default_exp metrics
```
# metrics
> API details.
```
#export
#hide
import numpy as np
import scipy.stats
import torch
from scipy.stats import chi2 as Chi2Dist
import matplotlib.pyplot as plt
from sklearn.metrics import auc
from fastcore.test import *
from fastai.metrics import rmse
import dcor
#export
def crps_for_quantiles(probabilistic_forecasts, measurements, quantiles=np.linspace(0.1, 0.9, 9)):
""" Computes the CRPS score with quantile representation.
    This is the variant proposed in Hersbach H. Decomposition of the Continuous Ranked Probability Score for
Ensemble Prediction Systems. Weather Forecast. 2000;15(5):559-570.
Parameters
----------
probabilistic_forecasts: array_like
Either list of "M" scipy.stats.rv_continuous distributions
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html
OR
2D-numpy array with quantile forecasts with dimensionality M x Q,
where "Q" is number of quantiles.
measurements: array_like
List or numpy array with "M" measurements / observations.
quantiles: array_like
List of "Q" values of the quantiles to be evaluated.
Returns
-------
    mean_crps: float
        The mean CRPS over all probabilistic_forecast - measurement pairs.
    single_crps: array, shape (M,)
        CRPS value for each probabilistic_forecast - measurement pair.
"""
quantile_forecasts = np.array(probabilistic_forecasts)
measurements = np.atleast_2d(measurements).T
alpha_mat = np.diff(np.hstack([quantile_forecasts, measurements]))
alpha_mat = np.maximum(0, alpha_mat)
alpha_mat = np.minimum(alpha_mat, np.maximum(0, np.repeat(measurements, quantile_forecasts.shape[1],
axis=1) - quantile_forecasts))
beta_mat = np.diff(np.hstack([measurements, quantile_forecasts]))
beta_mat = np.maximum(0, beta_mat)
beta_mat = np.minimum(beta_mat,
np.maximum(0,
quantile_forecasts - np.repeat(measurements, quantile_forecasts.shape[1], axis=1)))
single_crps = np.matmul(alpha_mat, np.power(quantiles, 2)) + np.matmul(beta_mat, np.power(quantiles - 1, 2))
return np.mean(single_crps), single_crps
quantiles = np.array([0.25,0.5, 0.75])
probabilistic_forecasts = np.array([[1,2,3],[4,5,6],[7,8,9],])
measurements = np.array([2,5,8])
mean_crps, crps_per_observation = crps_for_quantiles(probabilistic_forecasts, measurements, quantiles)
mean_crps
#export
def rmse_nll(preds, targs, pos_mean=0):
"""RMSE for negative log likelihood forecast, where we have e.g. the mean and the variance as prediction."""
return rmse(preds[:, pos_mean], targs)
#export
def normalized_sum_of_squared_residuals_np(stochastic_preds, targets):
"""
    Calculates the normalized sum of squared residuals according to Eq. 50-52 in https://arxiv.org/pdf/2007.06823.pdf
stochastic samples of shape [n_observation, n_targets, n_samples]
targets of shape [n_observation, n_targets]
"""
    nruntests = stochastic_preds.shape[-1]  # number of stochastic samples per observation
predictions = stochastic_preds.mean(axis=-1, keepdims=True)
errors = targets - predictions.squeeze(-1)
covs = stochastic_preds-predictions
covs = (covs @ covs.swapaxes(1,2)) / (nruntests-1.)
weights = np.linalg.inv(covs)
nssr = np.matmul(errors[:,np.newaxis,:], np.matmul(weights, errors[:,:,np.newaxis]))
nssr = np.sort(nssr.flatten())
p_prediction = Chi2Dist.cdf(nssr, targets.shape[1]);
p_expected = np.linspace(1./nssr.size, 1.0, nssr.size)
return p_prediction, p_expected
stochasic_preds = np.array([[1.01,2.05,3.04], [3.05,2.02,1.01]]).reshape(3,1,2)
targets = np.array([2.1,2.05,2.2]).reshape(3,1,1)
stochasic_preds.shape, targets.shape
p_prediction_np, p_expected_np = normalized_sum_of_squared_residuals_np(stochasic_preds, targets)
plt.figure("Calibration curve for sparse measure model")
plt.plot(p_prediction_np, p_expected_np, label='Calibration curve')
plt.plot([0,1],[0,1], 'k--', alpha=0.5, label='Ideal curve')
plt.xlabel('Predicted probability')
plt.ylabel('Observed probability')
plt.axis('equal')
plt.xlim([0,1])
plt.ylim([0,1])
plt.legend()
plt.show()
```
The result can be utilized to calculate the area under the curve
```
auc(p_expected_np, p_prediction_np)
```
Or the distance to the ideal curve
```
#export
def distance_ideal_curve(p_prediction, p_expected):
return np.trapz((p_prediction-p_expected)**2)**0.5
distance_ideal_curve(p_expected_np, p_prediction_np)
#export
def normalized_sum_of_squared_residuals_torch(stochastic_preds, targets):
"""
    Calculates the normalized sum of squared residuals according to Eq. 50-52 in https://arxiv.org/pdf/2007.06823.pdf
stochastic samples of shape [n_observation, n_targets, n_samples]
targets of shape [n_observation, n_targets]
"""
predictions = stochastic_preds.mean(axis=-1, keepdim=True)
errors = targets - predictions.squeeze(dim=-1)
    nruntests = stochastic_preds.shape[-1]  # number of stochastic samples per observation
predictions = stochastic_preds.mean(axis=-1, keepdims=True)
covs = stochastic_preds - predictions
covs = torch.matmul(covs, torch.transpose(covs, 1, 2))/(nruntests-1.)
    weights = torch.linalg.inv(covs)
nssr = torch.matmul(errors[:,np.newaxis,:].float(),
torch.matmul(weights.float(), errors[:,:,np.newaxis].float()).float() )
nssr = nssr.cpu().numpy().flatten()
nssr = np.sort(nssr)
p_prediction = Chi2Dist.cdf(nssr, targets.shape[1]);
p_expected = np.linspace(1./nssr.size, 1.0, nssr.size)
return p_prediction, p_expected
p_prediction_torch, p_expected_torch = normalized_sum_of_squared_residuals_torch(torch.tensor(stochasic_preds),torch.tensor( targets))
plt.figure("Calibration curve for sparse measure model")
plt.plot(p_prediction_torch, p_expected_torch, label='Calibration curve')
plt.plot([0,1],[0,1], 'k--', alpha=0.5, label='Ideal curve')
plt.xlabel('Predicted probability')
plt.ylabel('Observed probability')
plt.axis('equal')
plt.xlim([0,1])
plt.ylim([0,1])
plt.legend()
plt.show()
#hide
test_close(p_expected_np, p_expected_torch)
test_close(p_prediction_np, p_prediction_torch)
auc(p_expected_torch, p_prediction_torch)
distance_ideal_curve(p_expected_torch, p_prediction_torch)
#export
def energy_distance(samples, targets):
"""
samples tensor (e.g. from a Bayesian Model) of shape [n_observation, n_samples]
target tensor observatrion fo shape [n_observation, 1]
"""
return dcor.energy_distance(samples.reshape(samples.shape[0], samples.shape[-1]).T, targets.T)
#hide
from nbdev.export import notebook2script
notebook2script()
```
# Statistical Estimation
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats
%precision 3
%matplotlib inline
df = pd.read_csv('../data/ch4_scores400.csv')
scores = np.array(df['点数'])  # '点数' = test score column
p_mean = np.mean(scores)
p_var = np.var(scores)
p_mean, p_var
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
xs = np.arange(101)
rv = stats.norm(p_mean, np.sqrt(p_var))
ax.plot(xs, rv.pdf(xs), color='gray')
ax.hist(scores, bins=100, range=(0, 100), density=True)
plt.show()
np.random.seed(0)
n = 20
sample = np.random.choice(scores, n)
sample
np.random.seed(1111)
n_samples = 10000
samples = np.random.choice(scores, (n_samples, n))
```
## Point estimation
### Point estimation of the population mean
```
for i in range(5):
s_mean = np.mean(samples[i])
    print(f'Sample mean of draw {i+1}: {s_mean:.3f}')
sample_means = np.mean(samples, axis=1)
np.mean(sample_means)
np.mean(np.random.choice(scores, int(1e6)))
s_mean = np.mean(sample)
s_mean
```
### Point estimation of the population variance
```
for i in range(5):
s_var = np.var(samples[i])
    print(f'Sample variance of draw {i+1}: {s_var:.3f}')
sample_vars = np.var(samples, axis=1)
np.mean(sample_vars)
sample_u_vars = np.var(samples, axis=1, ddof=1)
np.mean(sample_u_vars)
np.var(np.random.choice(scores, int(1e6)), ddof=1)
u_var = np.var(sample, ddof=1)
u_var
```
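The bias correction above (dividing by n-1 via `ddof=1` instead of n) can be verified on a tiny example with illustrative numbers:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
biased = np.var(x)            # divides by n   → 1.25
unbiased = np.var(x, ddof=1)  # divides by n-1 → 5/3
print(biased, unbiased)
```

On repeated samples from a population, the `ddof=1` estimator's expectation matches the population variance, which is why it is used for `u_var` above.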
## Interval estimation
### Interval estimation of the population mean of a normal distribution (variance known)
```
rv = stats.norm()
lcl = s_mean - rv.isf(0.025) * np.sqrt(p_var/n)
ucl = s_mean - rv.isf(0.975) * np.sqrt(p_var/n)
lcl, ucl
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
rv = stats.norm()
n_samples = 20
ax.vlines(p_mean, 0, 21)
for i in range(n_samples):
sample_ = samples[i]
s_mean_ = np.mean(sample_)
lcl = s_mean_ - rv.isf(0.025) * np.sqrt(p_var/n)
ucl = s_mean_ - rv.isf(0.975) * np.sqrt(p_var/n)
if lcl <= p_mean <= ucl:
ax.scatter(s_mean_, n_samples-i, color='gray')
ax.hlines(n_samples-i, lcl, ucl, color='gray')
else:
ax.scatter(s_mean_, n_samples-i, color='b')
ax.hlines(n_samples-i, lcl, ucl, color='b')
ax.set_xticks([p_mean])
ax.set_xticklabels(['ๆฏๅนณๅ'])
plt.show()
rv = stats.norm()
cnt = 0
for sample_ in samples:
s_mean_ = np.mean(sample_)
lcl = s_mean_ - rv.isf(0.025) * np.sqrt(p_var/n)
ucl = s_mean_ - rv.isf(0.975) * np.sqrt(p_var/n)
if lcl <= p_mean <= ucl:
cnt += 1
cnt / len(samples)
```
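The `isf`-based limits computed above match what `scipy.stats.norm.interval` returns in one call; a minimal sketch with made-up numbers (sample mean 70, known variance 64, n = 20):

```python
import numpy as np
from scipy import stats

s_mean, p_var, n = 70.0, 64.0, 20
rv = stats.norm()

lcl = s_mean - rv.isf(0.025) * np.sqrt(p_var / n)
ucl = s_mean - rv.isf(0.975) * np.sqrt(p_var / n)

# norm.interval gives the same 95% interval directly
lo, hi = stats.norm.interval(0.95, loc=s_mean, scale=np.sqrt(p_var / n))
assert np.isclose(lcl, lo) and np.isclose(ucl, hi)
print(lcl, ucl)
```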
### ๆญฃ่ฆๅๅธใฎๆฏๅๆฃ(ๅนณๅๆช็ฅ)ใฎๅบ้ๆจๅฎ
```
sample_y = sample_u_vars * (n-1) / p_var
sample_y
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
xs = np.linspace(0, 40, 100)
rv = stats.chi2(df=n-1)
ax.plot(xs, rv.pdf(xs), color='gray')
hist, _, _ = ax.hist(sample_y, bins=100,
range=(0, 40), density=True)
plt.show()
rv = stats.chi2(df=n-1)
lcl = (n-1) * u_var / rv.isf(0.025)
ucl = (n-1) * u_var / rv.isf(0.975)
lcl, ucl
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
rv = stats.chi2(df=n-1)
n_samples = 20
ax.vlines(p_var, 0, 21)
for i in range(n_samples):
sample_ = samples[i]
u_var_ = np.var(sample_, ddof=1)
lcl = (n-1) * u_var_ / rv.isf(0.025)
ucl = (n-1) * u_var_ / rv.isf(0.975)
if lcl <= p_var <= ucl:
ax.scatter(u_var_, n_samples-i, color='gray')
ax.hlines(n_samples-i, lcl, ucl, 'gray')
else:
ax.scatter(u_var_, n_samples-i, color='b')
ax.hlines(n_samples-i, lcl, ucl, 'b')
ax.set_xticks([p_var])
ax.set_xticklabels(['ๆฏๅๆฃ'])
plt.show()
rv = stats.chi2(df=n-1)
cnt = 0
for sample_ in samples:
u_var_ = np.var(sample_, ddof=1)
lcl = (n-1) * u_var_ / rv.isf(0.025)
ucl = (n-1) * u_var_ / rv.isf(0.975)
if lcl <= p_var <= ucl:
cnt += 1
cnt / len(samples)
```
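Unlike the normal case, the chi-square interval above is not symmetric around the point estimate, because the chi-square distribution is skewed. A minimal sketch with assumed values (unbiased variance 60, n = 20):

```python
import numpy as np
from scipy import stats

n, u_var = 20, 60.0
rv = stats.chi2(df=n - 1)

lcl = (n - 1) * u_var / rv.isf(0.025)
ucl = (n - 1) * u_var / rv.isf(0.975)

# the interval contains the point estimate but is not centred on it
assert lcl < u_var < ucl
assert not np.isclose(u_var - lcl, ucl - u_var)
print(lcl, ucl)
```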
### ๆญฃ่ฆๅๅธใฎๆฏๅนณๅ(ๆฏๅๆฃๆช็ฅ)ใฎๅบ้ๆจๅฎ
```
rv = stats.t(df=n-1)
lcl = s_mean - rv.isf(0.025) * np.sqrt(u_var/n)
ucl = s_mean - rv.isf(0.975) * np.sqrt(u_var/n)
lcl, ucl
```
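Because the t distribution has heavier tails than the standard normal, the interval above is always wider than the known-variance interval for the same numbers. A minimal sketch with made-up values:

```python
import numpy as np
from scipy import stats

# made-up numbers: sample mean 70, unbiased variance 64, n = 20
s_mean, u_var, n = 70.0, 64.0, 20
se = np.sqrt(u_var / n)

t_half = stats.t(df=n - 1).isf(0.025)   # t critical value, 19 degrees of freedom
z_half = stats.norm().isf(0.025)        # z critical value for comparison

# the t interval is strictly wider than the z interval
assert t_half > z_half
print(s_mean - t_half * se, s_mean + t_half * se)
```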
### ใใซใใผใคๅๅธใฎๆฏๅนณๅใฎๅบ้ๆจๅฎ
```
enquete_df = pd.read_csv('../data/ch10_enquete.csv')
enquete = np.array(enquete_df['็ฅใฃใฆใใ'])
n = len(enquete)
enquete[:10]
s_mean = enquete.mean()
s_mean
rv = stats.norm()
lcl = s_mean - rv.isf(0.025) * np.sqrt(s_mean*(1-s_mean)/n)
ucl = s_mean - rv.isf(0.975) * np.sqrt(s_mean*(1-s_mean)/n)
lcl, ucl
```
### ใใขใฝใณๅๅธใฎๆฏๅนณๅใฎๅบ้ๆจๅฎ
```
n_access_df = pd.read_csv('../data/ch10_access.csv')
n_access = np.array(n_access_df['ใขใฏใปในๆฐ'])
n = len(n_access)
n_access[:10]
s_mean = n_access.mean()
s_mean
rv = stats.norm()
lcl = s_mean - rv.isf(0.025) * np.sqrt(s_mean/n)
ucl = s_mean - rv.isf(0.975) * np.sqrt(s_mean/n)
lcl, ucl
```
```
import numpy as np
import pandas as pd
np.seterr(divide='ignore', invalid='ignore')
data = pd.io.parsers.read_csv('data/final-new-ratings.csv',
names=['user_id', 'movie_id', 'rating', 'time'],
engine='python', delimiter=';')
movie_data = pd.io.parsers.read_csv('data/final-new-movies.csv',
names=['movie_id', 'title', 'genre'],
engine='python', delimiter=';')
# use np.zeros rather than np.ndarray so unrated entries are 0 rather than uninitialised memory
ratings_mat = np.zeros(
    shape=(np.max(data.movie_id.values), np.max(data.user_id.values)),
    dtype=np.uint8)
ratings_mat[data.movie_id.values - 1, data.user_id.values - 1] = data.rating.values
normalised_mat = ratings_mat - np.matrix(np.mean(ratings_mat, 1)).T
cov_mat = np.cov(normalised_mat)
evals, evecs = np.linalg.eig(cov_mat)
def top_cosine_similarity(data, movie_id, top_n=10):
index = movie_id - 1 # Movie id starts from 1
movie_row = data[index, :]
magnitude = np.sqrt(np.einsum('ij, ij -> i', data, data))
similarity = np.dot(movie_row, data.T) / (magnitude[index] * magnitude)
sort_indexes = np.argsort(-similarity)
return (sort_indexes[:top_n], similarity)
# "movie id used here": TMDB movie id
lookup = {
"71": 862,
"66": 197,
"67": 278,
"68": 3049,
"69": 8587,
"62": 78,
"63": 771,
"3": 114,
"64": 627,
"65": 238,
"70": 872,
"4": 567,
"5": 770,
"6": 62,
"7": 88,
"8": 601,
"9": 85,
"10": 348,
"11": 703,
"12": 694,
"13": 914,
"14": 621,
"15": 578,
"2": 816,
"16": 18,
"17": 597,
"18": 1725,
"19": 11252,
"20": 8741,
"21": 11167,
"22": 603,
"23": 509,
"1": 2105,
"24": 550,
"25": 10784,
"26": 392,
"27": 77,
"28": 808,
"29": 676,
"30": 585,
"31": 120,
"32": 453,
"33": 855,
"34": 425,
"35": 672,
"36": 423,
"37": 12,
"38": 22,
"39": 24,
"40": 11846,
"41": 38,
"42": 11036,
"43": 6947,
"44": 9806,
"45": 477433,
"46": 591,
"47": 920,
"48": 350,
"49": 1858,
"50": 7326,
"51": 155,
"52": 8966,
"53": 13223,
"54": 19995,
"55": 50014,
"56": 84892,
"57": 157336,
"58": 207703,
"59": 140607,
"60": 286217,
"61": 259693,
}
# Helper function to print top N similar movies
def get_similar_movies(movie_data, movie_id, top_indexes):
print('Recommendations for {0}: \n'.format(
movie_data[movie_data.movie_id == movie_id].title.values[0]))
result = []
for id in top_indexes[0] + 1:
result.append({
"tmdb_id": lookup[str(movie_data[movie_data.movie_id == id].movie_id.values[0])],
"similarity": top_indexes[1][id - 1],
"title": movie_data[movie_data.movie_id == id].title.values[0]
})
return {"result": result}
k = 25
# how many movies to compute similarity for
top_n = 70
sliced = evecs[:, :k] # representative data
def getData(movie_id):
movie_id = int(movie_id)
top_indexes = top_cosine_similarity(sliced, movie_id, top_n)
return get_similar_movies(movie_data, movie_id, top_indexes)
# change the id to choose which movie similarities are searched for
print(getData("54"))
```
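The cosine-similarity scoring used in `top_cosine_similarity` can be checked on a tiny hand-made matrix, independent of the ratings data:

```python
import numpy as np

# three "movies" in a 2-dimensional latent space
data = np.array([[1.0, 0.0],
                 [2.0, 0.0],    # same direction as movie 0 -> similarity 1
                 [0.0, 1.0]])   # orthogonal to movie 0 -> similarity 0

magnitude = np.sqrt(np.einsum('ij, ij -> i', data, data))
similarity = data @ data[0] / (magnitude[0] * magnitude)

assert np.isclose(similarity[0], 1.0)  # a movie is maximally similar to itself
assert np.isclose(similarity[1], 1.0)  # scale does not matter, only direction
assert np.isclose(similarity[2], 0.0)
print(similarity)
```

Cosine similarity only compares directions, which is why a twice-as-strong version of the same latent profile still scores 1.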
# Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem** if the training dataset is not big enough. Sure, it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
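The three-term sum described above can be sketched with tiny made-up weight matrices (the shapes here are arbitrary, not the assignment's):

```python
import numpy as np

m, lambd = 4, 0.1
W1 = np.array([[1.0, -2.0], [0.5, 0.0]])
W2 = np.array([[3.0, 1.0]])
W3 = np.array([[-1.0]])

# sum of squared entries of every weight matrix, scaled by lambda / (2m)
l2_cost = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in (W1, W2, W3))

# squared entries: 1 + 4 + 0.25 + 0 + 9 + 1 + 1 = 16.25, times 0.1 / 8
assert np.isclose(l2_cost, 16.25 * 0.1 / 8)
print(l2_cost)
```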
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m)) *(np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
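That derivative can be sanity-checked numerically: perturbing a weight changes the penalty $\frac{\lambda}{2m}W^2$ at rate $\frac{\lambda}{m}W$ (a minimal sketch, not part of the graded code):

```python
import numpy as np

lambd, m, w = 0.7, 5, 2.0
penalty = lambda w: lambd / (2 * m) * w ** 2

# central finite difference vs. the analytic gradient lambda/m * w
eps = 1e-6
numeric_grad = (penalty(w + eps) - penalty(w - eps)) / (2 * eps)
analytic_grad = lambd / m * w

assert np.isclose(numeric_grad, analytic_grad)
print(numeric_grad, analytic_grad)
```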
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
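The four steps above, applied to a tiny made-up activation matrix (a sketch, not the graded function):

```python
import numpy as np

np.random.seed(0)
A1 = np.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0]])
keep_prob = 0.5

D1 = np.random.rand(A1.shape[0], A1.shape[1])  # Step 1: random matrix, same shape as A1
D1 = D1 < keep_prob                            # Step 2: boolean mask, True with prob keep_prob
A1 = A1 * D1                                   # Step 3: shut down the masked neurons
A1 = A1 / keep_prob                            # Step 4: inverted-dropout scaling

# with this seed only the entry 5.0 survives, scaled by 1 / keep_prob to 10.0
assert np.allclose(A1, [[0.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
print(A1)
```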
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = np.multiply(A1,D1) # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = np.multiply(A2,D2) # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
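The two steps, sketched with a made-up upstream gradient and a mask as it would come out of the forward cache:

```python
import numpy as np

keep_prob = 0.8
D1 = np.array([[True, False, True]])   # mask saved during the forward pass
dA1 = np.array([[0.4, 0.5, -0.2]])     # made-up upstream gradient

dA1 = dA1 * D1         # Step 1: zero out gradients of the dropped neurons
dA1 = dA1 / keep_prob  # Step 2: same 1/keep_prob scaling as the forward pass

assert np.allclose(dA1, [[0.5, 0.0, -0.25]])
print(dA1)
```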
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (โ 2 lines of code)
dA2 = np.multiply(dA2,D2) # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (โ 2 lines of code)
dA1 = np.multiply(dA1,D1) # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (`keep_prob = 0.86`). This means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
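That expected-value claim is easy to verify empirically (a sketch averaging over many random masks):

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.5
a = 3.0          # a single activation value
n_trials = 100000

masks = np.random.rand(n_trials) < keep_prob
dropped = a * masks / keep_prob   # inverted dropout applied n_trials times

# averaged over many masks, the output recovers the original activation
assert abs(dropped.mean() - a) < 0.05
print(dropped.mean())
```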
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
# Title
**Class imbalance: Random Forest vs SMOTE Classification**
# Description
The goal of this exercise is to investigate the performance of Random Forest with and without class balancing techniques on a dataset with class imbalance.
The comparison will look a little something like this:
<img src="../img/image2.png" style="width: 500px;">
# Instructions:
- Read the dataset `diabetes.csv` as a pandas dataframe.
- Take a quick look at the dataset.
- Quantify the class imbalance of your response variable.
- Assign the response variable as `Outcome` and everything else as a predictor.
- Split the data into train and validation sets.
- Fit a `RandomForestClassifier()` on the training data, without any consideration for class imbalance.
- Predict on the validation set and compute the `f1_score` and the `auc_score` and save them to appropriate variables.
- Fit a `RandomForestClassifier()` on the training data, but this time account for class imbalance by setting `class_weight='balanced_subsample'`.
- Predict on the validation set and compute the f1_score and the auc_score for this model and save them to appropriate variables.
- Fit a `RandomForestClassifier()` with `class_weight='balanced_subsample'` on training data that has been rebalanced using <a href="https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.SMOTE.html" target="_blank">SMOTE</a>.
- Predict on the validation set and compute the `f1_score` and the `auc_score` for this model and save them to appropriate variables.
- Finally, use the helper code to tabulate your results and compare the performance of each model.
# Hints:
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" target="_blank">RandomForestClassifier()</a> : A random forest classifier.
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html" target="_blank">f1_score()</a> : Compute the F1 score, also known as balanced F-score or F-measure
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html" target="_blank">roc_auc_score()</a> : Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
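As a quick, self-contained illustration of the two metric functions from the hints (the labels and probabilities below are made-up toy values, unrelated to the diabetes data):

```python
from sklearn.metrics import f1_score, roc_auc_score

y_true = [0, 0, 0, 0, 1, 1]                  # imbalanced toy labels
y_pred = [0, 0, 0, 1, 1, 0]                  # hard class predictions -> f1_score
y_prob = [0.1, 0.2, 0.3, 0.6, 0.9, 0.4]      # predicted P(class = 1) -> roc_auc_score

print(f1_score(y_true, y_pred))              # 0.5  (precision = recall = 0.5)
print(roc_auc_score(y_true, y_prob))         # 0.875
```

Note that `f1_score` takes hard class labels, while `roc_auc_score` takes scores or probabilities for the positive class.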
```
# Import the main packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE
from prettytable import PrettyTable
%matplotlib inline
# Read the dataset and take a quick look
df = pd.read_csv('diabetes.csv')
df.head()
# On checking the response variable ('Outcome') value counts, you will notice that the number of diabetics is smaller than the number of non-diabetics
df['Outcome'].value_counts()
### edTest(test_imbalance) ###
# To estimate the amount of data imbalance, find the ratio of class 1 (diabetics) to the size of the dataset.
imbalance_ratio = ___
print(f'The percentage of diabetics in the dataset is only {(imbalance_ratio)*100:.2f}%')
# Assign the predictor and response variables.
# Outcome is the response and all the other columns are the predictors
X = ___
y = ___
# Fix a random_state and split the data into train and validation sets
random_state = 22
X_train, X_val, y_train,y_val = train_test_split(X,y,train_size = 0.8,random_state =random_state)
# We fix the max_depth variable to 20 for all trees; you can come back and change this to investigate RandomForest performance
max_depth = 20
```
## Strategy 1 - Vanilla Random Forest
- No correction for imbalance
```
# Define a Random Forest classifier with random_state as above
# Set the maximum depth to be max_depth and use 10 estimators
random_forest = ___
# Fit the model on the training set
random_forest.___
# We make predictions on the validation set
predictions = ___
# We also compute two metrics that better represent misclassification of minority classes i.e `f1 score` and `AUC`
# Compute the f1-score and assign it to variable score1
score1 = ___
# Compute the `auc` and assign it to variable auc1
auc1 = ___
```
## Strategy 2 - Random Forest with class weighting
- Balancing the class imbalance in each bootstrap
```
# Define a Random Forest classifier with random_state as above
# Set the maximum depth to be max_depth and use 10 estimators
# Specify `class_weight='balanced_subsample'`
random_forest = ___
# Fit the model on the training data
random_forest.___
# We make predictions on the validation set
predictions = ___
# Again we also compute two metrics that better represent misclassification of minority classes i.e `f1 score` and `AUC`
# Compute the f1-score and assign it to variable score2
score2 = ___
# Compute the `auc` and assign it to variable auc2
auc2 = ___
```
## Strategy 3 - RandomForest with SMOTE
- We can use SMOTE along with the previous method to further improve our metrics.
- Read more about imblearn's SMOTE [here](https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.SMOTE.html).
```
# Run this cell below to use SMOTE to balance our dataset
sm = SMOTE(random_state=3)
X_train_res, y_train_res = sm.fit_resample(X_train, y_train.ravel())  # fit_resample replaced fit_sample in newer imblearn versions
# If you check the shape, you will see that X_train_res has more points than X_train
print(f'Number of points in balanced dataset is {X_train_res.shape[0]}')
# Again Define a Random Forest classifier with random_state as above
# Set the maximum depth to be max_depth and use 10 estimators
# Again specify `class_weight='balanced_subsample'`
random_forest = ___
# Fit the model on the new training data created above with SMOTE
random_forest.___
### edTest(test_smote) ###
# We make predictions on the validation set
predictions = ___
# Again we also compute two metrics that better represent misclassification of minority classes i.e `f1 score` and `AUC`
# Compute the f1-score and assign it to variable score3
score3 = ___
# Compute the `auc` and assign it to variable auc3
auc3 = ___
# Finally, we compare the results from the three implementations above
# Just run the cells below to see your results
pt = PrettyTable()
pt.field_names = ["Strategy","F1 Score","AUC score"]
pt.add_row(["Random Forest - no correction",score1,auc1])
pt.add_row(["Random Forest - class weighting",score2,auc2])
pt.add_row(["Random Forest - SMOTE upsampling",score3,auc3])
print(pt)
```
## Mindchow
- Go back and change the `max_depth` parameter and `n_estimators` for the Random Forest. Do you see an improvement in results?
*Your answer here*
# API Demo: Search, Order, and Download Capella's Rotterdam Data Set
Capella has collected two AOIs of 10 x 5 km each. Each AOI will be covered from 8am to 8pm over two days,
6 times each day, to simulate a roughly two-hour revisit at very high resolution.
The main features in the AOIs are: the port, ships (tankers and commercial vessels), shipping containers, cars,
a parking lot, a stadium, train rails, floating oil tanks, downtown, and vegetated areas.
Imagery is generated using backprojection in map coordinates.
```
# Required libraries:
# requests
# json
# urllib
```
Your username and password must be saved in a .json file named 'credentials.json' and formatted as follows.
{"username": "yourusername","password": "xxxxxxxxx"}
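For example, the file could be created once with a short snippet like the following (a sketch; `yourusername` and the password are placeholders you must replace with your own credentials):

```python
import json

# Placeholder values - replace with your real Capella username and password
credentials = {"username": "yourusername", "password": "xxxxxxxxx"}

with open('credentials.json', 'w') as f:
    json.dump(credentials, f)
```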
### Import required libraries, build a print utility function, assign API endpoints and load Credentials
```
import requests
import json
# JSON utility function
def p(data):
print(json.dumps(data, indent=2))
# Capella API endpoints
URL = 'https://api.capellaspace.com'
token = '/token'
collections = '/catalog/collections'
catsearch = '/catalog/search'
orders = '/orders/'
# Load my username and password
with open('credentials.json') as f:
data = json.load(f)
username = data['username']
password = data['password']
```
### Get and Print Access Token
```
# Get the token
r = requests.post(URL + token,
headers = {'Content-Type': 'application/x-www-form-urlencoded'}, auth=(username,password))
accesstoken = r.json()["accessToken"]
# Print the token
print("Access Token: " + accesstoken)
headers = {'Authorization':'Bearer ' + accesstoken}
```
### Print Available Collections
```
# See what collections are available
r = requests.get(URL + collections, headers=headers)
# Print results
#p(r.json())
```
### Post Search Filters for Rotterdam data set, Print the Results
```
# Filters
filters = {
"datetime": "2019-08-01T00:00:00Z/2019-08-31T12:31:12Z",
"limit": 50, # overwrite the default pagination limit of 10, adjust as necessary
"collections": ["capella-aerial"], # specify the desired collection
"query": {"sar:product_type": {"eq":"GEO"},
"sar:polarizations": {"eq":"HH"}},
"sortby": "properties.datetime",
}
headers = {'Content-Type': 'application/json',
'Accept': 'application/geo+json', 'Authorization':'Bearer ' + accesstoken}
r = requests.post(URL + catsearch, json=filters, headers=headers)
# Print the results
p(r.json())
```
### Make and Post an Order
```
# Build the Order
features = r.json()["features"]
granulelist = []
# Loop over all the features from the response and add to an array for an order
for f in features:
item = {"CollectionId": f["collection"], "GranuleId": f["id"]}
granulelist.append(item)
myorder = {"Items": granulelist}
# Post the order and inspect the result
r = requests.post(URL + orders, json=myorder, headers=headers)
p(r.json())
```
### Get the STAC records with the signed URLs using the /download endpoint, Print the Result
```
myorderid = r.json()["orderId"]
r = requests.get(URL + orders + myorderid + '/download', headers=headers)
p(r.json())
```
### Download the Results
```
features = r.json()
# Local directory to save data; Update to preference
basefp = 'C:/data/rotterdam/'
for feature in features:
filepath = feature["assets"]["HH"]["href"]
filename = filepath[filepath.rfind("/")+1:]
sep = "?"
truncname = filename.split(sep, 1)[0]
outfp = basefp + truncname
with requests.get(filepath, stream=True) as result:
result.raise_for_status()
with open(outfp, 'wb') as f:
for chunk in result.iter_content(chunk_size=10000000):
f.write(chunk)
```
# 19 - Exploring Well Log Data using the Welly Python Library
The welly library was developed by Agile Geoscience to help with loading, processing and analysing well log data from a single well or multiple wells. The library allows exploration of the meta data found within the headers of las files and also contains a plotting function to display a typical well log. Additionally, the welly library contains tools for identifying and handling data quality issues.
The Welly library can be found at the Agile Geoscience GitHub at https://github.com/agile-geoscience/welly
In this short tutorial we will see how to load a well from the Volve field and explore some of the functionality available within this library.
## The Dataset
The dataset we are using comes from the publicly available Equinor Volve Field dataset released in 2018. The file used in this tutorial is from well 15/19-F1B.
Details on the Volve Dataset can be found [here](https://www.equinor.com/en/what-we-do/norwegian-continental-shelf-platforms/volve.html)
## Importing Libraries and Data
The first step in this tutorial will be to load in the required modules, Well and Curve, from the Welly library. These modules are used to work with well log data and with individual curves.
```
from welly import Well
from welly import Curve
import matplotlib.pyplot as plt
```
Our LAS file can be loaded in using the `Well.from_las()` method. This will create a new well object.
```
well = Well.from_las('Data/15_19_F1B_WLC_PETRO_COMPUTED_INPUT_1.LAS')
```
# Data Exploration
## File and Well Information
Now that our data has been loaded in we can begin exploring the contents and meta data for the selected well.
If we call upon our `well` object we will be presented with a summary table which contains the well name, location, co-ordinates, and a list of curve mnemonics.
```
well
```
We can also call upon specific functions to access the required information.
The first is the `header` which will return key header information, including the well name, Unique Well Identifier (UWI), the field name and company name.
```
well.header
```
Let's now have a look at the location information for this well. To do so we can call upon the `.location` method for our data object.
```
well.location
```
This returns a location object in the form of a dictionary. The file we are using does not contain much information about the location of the well, but we do have information about the latitude and longitude. These can be extracted by appending `.latitude` and `.longitude` to the location method and put into an easier to read format.
Using the print function for these methods provides a nicer output to read.
```
lati = well.location.latitude
long = well.location.longitude
print(lati)
print(long)
```
## Exploring the Data
We saw in the previous section when looking at the well header that we had a number of curves. We can get an idea of how many by calling upon the `count_curves()` function.
```
well.count_curves()
```
This returns a total count of 22 curves.
We can also obtain a list of the curve mnemonics within the las file using the method `_get_curve_mnemonics()`.
```
well._get_curve_mnemonics()
```
Another way to view all of the curves is by calling upon `.data`. This returns a dictionary object keyed by curve name, along with the first 3 and the last 3 values for each curve.
As seen in the example below, many of the first and last values are listed as nan, which stands for Not a Number.
```
well.data
```
We can delve a little deeper into each of the curves within the las file by passing in the name of the curve like so:
```
well.data['GR']
```
This provides us with some summary statistics of the curve, such as:
- what the null value is
- the curve units
- the curve data range
- the step value of the data
- the total number of samples
- the total number of missing values (NaNs)
- Min, Max and Mean of the curve
- Curve description
- A list of the first 3 and last 3 values
# Data QC
Checking the quality of well log data is an important part of the petrophysics workflow.
The borehole environment can be a hostile place with high temperatures, high pressures, irregular borehole shapes etc all of which can have an impact on the logging measurements. This can result in numerous issues such as missing values, outliers, constant values and erroneous values.
The welly library comes with a number of quality control checks which will allow us to check all of the data or specific curves for issues.
The quality control checks include:
- Checking for gaps / missing values : `.no_nans(curve)`
- Checking if the entire curve is empty or not : `.not_empty(curve)`
- Checking if the curve contains constant values : `.no_flat(curve)`
- Checking units: `check_units(list_of_units)`
- Checking if values are all positive : `all_positive(curve)`
- Checking if curve is within a range : `all_between(min_value, max_value)`
The full list of methods can be found within the Welly help documents at: https://welly.readthedocs.io
Before we start we will need to import the quality module like so:
```
import welly.quality as wq
```
Before we run any quality checks we first need to create a list of what tests we want to run and on what data we want to run those tests.
To do this we can build up a dictionary, with the key being the curve(s) we want to run the checks on. If we want to run it on all of the curves we need to use the key `Each`.
For every curve we will check if there are any flatline values, any gaps and making sure the curve is not empty.
For the gamma ray (GR) and bulk density (RHOB) curves we are going to check that all of the values are positive, that they are between standard ranges and that the units are what we expect them to be.
```
tests = {'Each': [wq.no_flat,
wq.no_gaps,
wq.not_empty],
'GR': [
wq.all_positive,
wq.all_between(0, 250),
wq.check_units(['API', 'GAPI']),
],
'RHOB': [
wq.all_positive,
wq.all_between(1.5, 3),
wq.check_units(['G/CC', 'g/cm3']),
]}
```
We could run the tests as they are; however, the output is not easy to read. To make it easier to read, we will use the HTML function from IPython.display to make a pretty table.
Once the module is imported we can create a variable called `data_qc_table` to store the information in. Assigned to this variable will be `data.qc_table_html(tests)` which generates the table from the `tests` dictionary we created above.
```
from IPython.display import HTML
data_qc_table = well.qc_table_html(tests)
HTML(data_qc_table)
```
After running the tests we can see that we have a coloured HTML table returned. Anything highlighted in green is True and anything in red is False.
From the table we can see that the BS (BitSize) curve failed on one of the three tests. Under the `no_flat` column we have a False value flagged which suggests that this curve contains constant/flat values. This has been correctly flagged as the bitsize curve measures the drill bit diameter, which will be constant for a given run or series of runs.
We can also see that a number of curves have been flagged as containing gaps.
The tests that were run just for GR and RHOB can also be seen in the table. When we run specific tests on specific curves, the remainder of the results will be greyed out.
We can run another test to identify the fraction of the data that is not NaN. For this we set up a new test and apply it to all curves using `Each`.
```
tests_nans = {'Each': [wq.fraction_not_nans]}
data_nans_qc_table = well.qc_table_html(tests_nans)
HTML(data_nans_qc_table)
```
Once we run these tests we are presented with a table similar to the one above. In the last column we have the fraction of values for each curve that is not a NaN. These values are in decimal, with a value of 1.0 representing 100% completeness. The Score column contains a rounded version of this number.
We can write a short loop and print the percentage values out for each curve. This provides a cleaner table to get an idea of missing data percentage for each curve.
```
print((f'Curve \t % Complete').expandtabs(10))
print((f'----- \t ----------').expandtabs(10))
for k,v in well.qc_data(tests_nans).items():
for i,j in v.items():
values = round(j*100, 2)
print((f'{k} \t {values}%').expandtabs(10))
```
From the results we can see that a number of curves have a high percentage of missing values. This could be attributable to some of the measurements not starting until deeper in the well. We will be able to determine this in the next section with plots.
# Data Plotting
Visualising well log data is at the heart of petrophysics, with log plots being one of the most common display formats.
The welly library allows fast and easy generation of well log plots.
First we generate a list of data that we want to display in each track. If we want to display more than one curve in a track we can embed another list e.g. `['MD', ['DT', 'DTS']]`. The curves within the inner list will be plotted on the same track and on the same scale.
Next, we can call upon the plot function and pass in the tracks list.
```
tracks = ['MD', 'GR', 'RHOB', 'NPHI', ['DT', 'DTS']]
well.plot(tracks=tracks)
```
As discussed in the data quality section, some of the logging curves do not extend all the way to the top of the well, which the plot now confirms. This is very common practice and avoids the need for, and the cost of, running tools from the top of the well to the bottom.
Let's zoom in a little bit closer on the lower interval. To do this we can use a regular matplotlib function to set the y-axis limits. Note that we do need to reverse the numbers so that the deeper value is first, and the shallower one second.
```
tracks = ['MD', 'BS', 'CALI', 'GR', 'RHOB', 'NPHI', ['DT', 'DTS']]
well.plot(tracks=tracks)
plt.ylim(3500, 3000)
```
We can see from the result that we now have a nice looking plot with very little effort.
However, control over the plot appearance is limited: the current implementation does not allow granular control over aspects such as colours and scales, or the display of curves with reversed scales (e.g. neutron & density curves).
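If you need that extra control, one option is to export the curves to pandas (as shown in the next section) and build the plot directly with matplotlib. The sketch below uses randomly generated stand-in curves; the `RHOB`/`NPHI` column names and scale ranges are assumptions based on common log-display conventions, not welly's API:

```python
import matplotlib
matplotlib.use('Agg')        # headless backend for scripts; not needed in a notebook
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Stand-in data: in practice this could come from df = well.df()
np.random.seed(42)
depth = np.arange(3000, 3500, 0.5)
df = pd.DataFrame({'RHOB': np.random.uniform(1.95, 2.95, len(depth)),
                   'NPHI': np.random.uniform(0.05, 0.45, len(depth))},
                  index=depth)

fig, ax1 = plt.subplots(figsize=(3, 8))
ax1.plot(df['RHOB'], df.index, color='red', lw=0.5)
ax1.set_xlim(1.95, 2.95)     # a conventional density scale
ax1.set_xlabel('RHOB (g/cc)', color='red')

ax2 = ax1.twiny()            # second x-axis sharing the same depth track
ax2.plot(df['NPHI'], df.index, color='blue', lw=0.5)
ax2.set_xlim(0.45, -0.15)    # neutron porosity plotted on a reversed scale
ax2.set_xlabel('NPHI (v/v)', color='blue')

ax1.set_ylim(3500, 3000)     # depth increases downwards
fig.savefig('custom_log_plot.png')
```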
# Well Log Data to Pandas Dataframe
In this final section, we will look at exporting the well log data from welly to pandas. Pandas is one of the most popular libraries for storing, analysing and manipulating data.
The conversion is a simple process and can be achieved by calling `.df()` on our well object.
```
df = well.df()
```
We can confirm the data has been converted by calling upon the `.describe()` method from pandas to view the summary statistics of the data.
```
df.describe()
```
# Summary
The welly library, developed by Agile-Geoscience is a great tool for working with and exploring well log files. In this example we have seen how to load a single las file, explore the meta information about the well and the curve contents, and display the log data on a log plot.
Welly has significantly more functionality, including handling multiple wells and creating synthetic seismograms from well data.
You can find and explore the welly repository [here](https://github.com/agile-geoscience/welly).
***Thanks for reading!***
*If you have found this article useful, please feel free to check out my other articles looking at various aspects of Python and well log data. You can also find my code used in this article and others at [GitHub](https://github.com/andymcdgeo).*
*If you want to get in touch you can find me on [LinkedIn](https://www.linkedin.com/in/andymcdonaldgeo/) or at my [website](http://andymcdonald.scot/).*
*Interested in learning more about python and well log data or petrophysics? Follow me on [Medium](https://medium.com/@andymcdonaldgeo).*
If you have enjoyed this article or any others and want to show your appreciation, you are welcome to [Buy Me a Coffee](https://www.buymeacoffee.com/andymcdonaldgeo)
```
import pandas as pd
from bs4 import BeautifulSoup
df = pd.read_csv('train.csv', sep=';')
texts = df['text']
from pymystem3 import Mystem
morph = Mystem()
import re
import nltk
nltk.download('punkt')
def text_to_sent(t):
text = BeautifulSoup(t, 'html.parser').text.lower()
tokenizer = nltk.data.load('tokenizers/punkt/russian.pickle')
raw_sentences = tokenizer.tokenize(text.strip())
raw_sentences = [x.split(';') for x in raw_sentences]
raw_sentences = sum(raw_sentences, [])
sentences = [morph.lemmatize(x) for x in raw_sentences]
return [[y for y in x if re.match('[а-яёa-z0-9]', y)]
for x in sentences]
text_to_sent(texts[0])
from tqdm import tqdm
sentences = []
for text in tqdm(texts):
sentences += text_to_sent(text)
for text in tqdm(pd.read_csv('test.csv', sep=';')['text']):
sentences += text_to_sent(text)
len(sentences)
from gensim.models.word2vec import Word2Vec
num_features = 300 # final dimensionality of each word's vector
min_word_count = 5 # minimum frequency for a word to be included in the model
num_workers = 8 # number of CPU cores, so training can run in several threads
context = 10 # context window size
downsampling = 1e-3 # downsampling threshold for frequent words
model = Word2Vec(sentences, workers=num_workers, size=num_features,
min_count=min_word_count, window=context, sample=downsampling)
model.init_sims(replace=True)
model.wv.most_similar(['сбербанк'])
import numpy as np
index2word_set = set(model.wv.index2word)
def text_to_vec(words):
text_vec = np.zeros((300,), dtype="float32")
n_words = 0
for word in words:
if word in index2word_set:
n_words = n_words + 1
text_vec = np.add(text_vec, model.wv[word])
if n_words != 0:
text_vec /= n_words
return text_vec
texts_vecs = np.zeros((len(df['text']), 300), dtype="float32")
for i, text in enumerate(df['text']):
s = text_to_sent(text)
s = sum(s, [])
texts_vecs[i] = text_to_vec(s)
from sklearn.metrics.pairwise import cosine_distances
m = cosine_distances(texts_vecs)
m[0][1:].argmin()
m[0][10639+1]
print(df['text'][0])
print(df['text'][10639+1])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
texts_vecs, df['target'],
test_size=0.33, random_state=42)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train, y_train)
pred = lr.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, pred)
from sklearn.cluster import KMeans
word_vectors = model.wv.vectors  # formerly model.wv.syn0 in older gensim versions
word_vectors.shape
num_clusters = 8000
kmeans_clustering = KMeans(n_clusters=num_clusters, n_jobs=-1)
idx = kmeans_clustering.fit_predict(word_vectors)
word_centroid_map = dict(zip( model.wv.index2word, idx ))
for cluster in range(0,30):
print("\nCluster %d" % cluster)
words = []
for i in range(0,len(word_centroid_map.values())):
if( list(word_centroid_map.values())[i] == cluster ):
words.append(list(word_centroid_map.keys())[i])
print(words)
```
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex SDK: Custom training tabular regression model for online prediction with explainability using get_metadata
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain_get_metadata.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain_get_metadata.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain_get_metadata.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
## Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for online prediction with explanation.
### Dataset
The dataset used for this tutorial is the [Boston Housing Prices dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html). The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
### Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a prediction with explanations on the deployed model by sending data. You can alternatively create custom models using `gcloud` command-line tool or online using Cloud Console.
The steps performed include:
- Create a Vertex custom job for training a model.
- Train a TensorFlow model.
- Retrieve and load the model artifacts.
- View the model evaluation.
- Set explanation parameters.
- Upload the model as a Vertex `Model` resource.
- Deploy the `Model` resource to a serving `Endpoint` resource.
- Make a prediction with explanation.
- Undeploy the `Model` resource.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
- The Cloud Storage SDK
- Git
- Python 3
- virtualenv
- Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).
2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.
4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.
5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
6. Open this notebook in the Jupyter Notebook Dashboard.
## Installation
Install the latest version of Vertex SDK for Python.
```
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
```
### Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
This tutorial does not require a GPU runtime.
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)
4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
**Click Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
```
import google.cloud.aiplatform as aip
```
## Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
```
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
```
#### Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables `TRAIN_GPU`/`TRAIN_NGPU` and `DEPLOY_GPU`/`DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
`(aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 4)`
Otherwise specify `(None, None)` to use a container image to run on a CPU.
Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).
*Note*: TF releases before 2.3 with GPU support fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
```
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
```
#### Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).
For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
```
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
```
#### Set machine type
Next, set the machine type to use for training and prediction.
- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction.
- `machine type`
- `n1-standard`: 3.75 GB of memory per vCPU
- `n1-highmem`: 6.5 GB of memory per vCPU
- `n1-highcpu`: 0.9 GB of memory per vCPU
- `vCPUs`: one of \[2, 4, 8, 16, 32, 64, 96\]
*Note: The following is not supported for training:*
- `standard`: 2 vCPUs
- `highcpu`: 2, 4 and 8 vCPUs
*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
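As a quick sanity check on the figures above, the total memory of an N1 machine type can be estimated from the per-vCPU values (a sketch only; the `machine_memory_gb` helper and the numbers are taken from the notes above, not from any SDK):

```python
# Approximate memory per vCPU (GB) for the N1 families listed above.
MEM_PER_VCPU_GB = {"n1-standard": 3.75, "n1-highmem": 6.5, "n1-highcpu": 0.9}

def machine_memory_gb(machine_type: str) -> float:
    """Estimate total memory for a machine type string such as 'n1-standard-4'."""
    family, vcpus = machine_type.rsplit("-", 1)
    return MEM_PER_VCPU_GB[family] * int(vcpus)

print(machine_memory_gb("n1-standard-4"))  # 15.0
```

So the `n1-standard-4` used below comes with roughly 15 GB of memory.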
```
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
```
# Tutorial
Now you are ready to create and train your own custom model for Boston Housing.
### Examine the training package
#### Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
- PKG-INFO
- README.md
- setup.cfg
- setup.py
- trainer
- \_\_init\_\_.py
- task.py
The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.
The file `trainer/task.py` is the Python script for executing the custom training job. *Note*: when it is referred to in the worker pool specification, the directory slash is replaced with a dot (`trainer.task`) and the `.py` file suffix is dropped.
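The slash-to-dot conversion described above can be sketched as a tiny helper (`script_to_module` is illustrative only; the Vertex SDK handles this for you):

```python
def script_to_module(script_path: str) -> str:
    """Convert a package-relative script path (e.g. 'trainer/task.py')
    to the dotted module path used in a worker pool specification."""
    if script_path.endswith(".py"):
        script_path = script_path[: -len(".py")]  # drop the file suffix
    return script_path.replace("/", ".")          # slash becomes dot

print(script_to_module("trainer/task.py"))  # trainer.task
```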
#### Package Assembly
In the following cells, you will assemble the training package.
```
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
```
#### Task.py contents
In the next cell, you write the contents of the training script `task.py`. We won't go into detail; it's there for you to browse. In summary, the script:
- Gets the directory in which to save the model artifacts from the command line (`--model-dir`), falling back to the environment variable `AIP_MODEL_DIR` if not specified.
- Loads the Boston Housing dataset from the TF.Keras built-in datasets.
- Builds a simple deep neural network model using the TF.Keras model API.
- Compiles the model (`compile()`).
- Sets a training distribution strategy according to the argument `args.distribute`.
- Trains the model (`fit()`) for the number of epochs specified by `args.epochs`.
- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
- Saves the maximum value of each feature (`f.write(str(params))`) to the specified parameters file.
```
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
    def scale(feature):
        feature_max = np.max(feature)  # avoid shadowing the builtin max()
        feature = (feature / feature_max).astype(np.float32)  # np.float is deprecated
        return feature, feature_max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
    # Scale each of the 13 feature columns and record its max
    for col in range(13):
        x_train[:, col], col_max = scale(x_train[:, col])
        x_test[:, col], _ = scale(x_test[:, col])
        params.append(col_max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
```
#### Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
```
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
```
### Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
#### Create custom training job
A custom training job is created with the `CustomTrainingJob` class, with the following parameters:
- `display_name`: The human readable name for the custom training job.
- `container_uri`: The training container image.
- `requirements`: Package requirements for the training container image (e.g., pandas).
- `script_path`: The relative path to the training script.
```
job = aip.CustomTrainingJob(
display_name="boston_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
```
### Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
- `args`: The command-line arguments to pass to the executable that is set as the entry point into the container.
- `--model-dir` : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or
- indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification.
- `"--epochs=" + EPOCHS`: The number of epochs for training.
- `"--steps=" + STEPS`: The number of steps per epoch.
```
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
```
#### Run the custom training job
Next, you start the training by invoking the custom training job's `run` method, with the following parameters:
- `args`: The command-line arguments to pass to the training script.
- `replica_count`: The number of compute instances for training (replica_count = 1 is single node training).
- `machine_type`: The machine type for the compute instances.
- `accelerator_type`: The hardware accelerator type.
- `accelerator_count`: The number of accelerators to attach to a worker replica.
- `base_output_dir`: The Cloud Storage location to write the model artifacts to.
- `sync`: Whether to block until completion of the job.
```
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
```
## Load the saved model
Your model is stored in TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can evaluate the model and make predictions.
To load, you use the TF.Keras `tf.keras.models.load_model()` method, passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`.
```
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
```
## Evaluate the model
Now let's find out how good the model is.
### Load evaluation data
You load the Boston Housing test (holdout) data from `tf.keras.datasets` using the method `load_data()`. This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is itself a tuple of two elements: the feature data and the corresponding labels (median value of owner-occupied homes).
You don't need the training data, which is why it is loaded as `(_, _)`.
Before you can run the data through evaluation, you need to preprocess it:
`x_test`:
1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. This replaces each single value with a 32-bit floating point number between 0 and 1.
```
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
    feature_max = np.max(feature)  # avoid shadowing the builtin max()
    feature = (feature / feature_max).astype(np.float32)
    return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
# Scale each of the 13 feature columns (not rows)
for col in range(13):
    x_test[:, col] = scale(x_test[:, col])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
```
### Perform the model evaluation
Now evaluate how well the model in the custom job did.
```
local_model.evaluate(x_test, y_test)
```
## Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
```
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
```
### Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set the corresponding settings when you upload your custom model to a Vertex `Model` resource. These settings are referred to as the explanation metadata, which consists of:
- `parameters`: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:
- Shapley -- *Note*: not recommended for image data, as it can be very long-running
- XRAI
- Integrated Gradients
- `metadata`: This is the specification for how the algorithm is applied to your custom model.
#### Explanation Parameters
Let's first dive deeper into the settings for the explainability algorithm.
#### Shapley
Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.
Use Cases:
- Classification and regression on tabular data.
Parameters:
- `path_count`: This is the number of paths over the features that will be processed by the algorithm. Computing exact Shapley values requires M! paths, where M is the number of features -- for the 13 Boston Housing features, that is already over six billion paths.
For any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * `path_count`.
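To get a feel for the savings, compare M! with M * `path_count` for the 13 Boston Housing features used in this tutorial (illustrative arithmetic only; `path_count` of 10 matches the value set in the code cell below):

```python
import math

M = 13           # number of Boston Housing features
path_count = 10  # sampled paths per feature

exact_paths = math.factorial(M)  # paths needed for exact Shapley values
sampled_paths = M * path_count   # paths processed by the sampled approximation

print(exact_paths)    # 6227020800
print(sampled_paths)  # 130
```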
#### Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.
Use Cases:
- Classification and regression on tabular data.
- Classification on image data.
Parameters:
- `step_count`: This is the number of steps used to approximate the path integral. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase the step count, so does the compute time.
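The role of `step_count` can be illustrated with a one-dimensional sketch of the integrated gradients Riemann sum (a toy illustration, not the service's implementation): by the completeness axiom, the attribution for f(x) = x^2 from a baseline of 0 to x = 3 should equal f(3) - f(0) = 9.

```python
import numpy as np

def integrated_gradients_1d(f_grad, x, baseline=0.0, step_count=50):
    """Midpoint Riemann-sum approximation of integrated gradients for a
    scalar input: (x - baseline) * average gradient along the path."""
    alphas = (np.arange(1, step_count + 1) - 0.5) / step_count
    points = baseline + alphas * (x - baseline)
    return (x - baseline) * np.mean(f_grad(points))

# f(x) = x**2 has gradient 2x; the approximation lands very close to 9.0.
ig = integrated_gradients_1d(lambda p: 2.0 * p, x=3.0, step_count=50)
print(float(ig))
```

Increasing `step_count` tightens the approximation for less well-behaved gradients, at the cost of more gradient evaluations.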
#### XRAI
Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.
Use Cases:
- Classification on image data.
Parameters:
- `step_count`: This is the number of steps used to approximate the path integral. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase the step count, so does the compute time.
In the next code cell, set the variable `XAI` to the explainability algorithm you will use on your custom model.
```
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
```
#### Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of:
- `outputs`: A scalar value in the output to attribute -- what to explain.
- `inputs`: The features for attribution -- how they contributed to the output.
You can either customize your metadata -- what to explain and what to attribute, or automatically generate the metadata using the method `get_metadata_protobuf()`. This method will construct metadata for explaining all outputs and attributing all inputs.
```
from google.cloud.aiplatform.explain.metadata.tf.v2 import \
saved_model_metadata_builder
builder = saved_model_metadata_builder.SavedModelMetadataBuilder(MODEL_DIR)
metadata = builder.get_metadata_protobuf()
print(metadata)
```
## Upload the model
Next, upload your model to a `Model` resource using `Model.upload()` method, with the following parameters:
- `display_name`: The human readable name for the `Model` resource.
- `artifact`: The Cloud Storage location of the trained model artifacts.
- `serving_container_image_uri`: The serving container image.
- `sync`: Whether to execute the upload asynchronously or synchronously.
- `explanation_parameters`: Parameters to configure explaining for `Model`'s predictions.
- `explanation_metadata`: Metadata describing the `Model`'s input and output for explanation.
If the `upload()` method is run asynchronously, you can subsequently block until completion with the `wait()` method.
```
model = aip.Model.upload(
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
```
## Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the `deploy` method, with the following parameters:
- `deployed_model_display_name`: A human readable name for the deployed model.
- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use `model_id` to specify as { "0": percent, model_id: percent, ... }, where `model_id` is the model ID of an existing model deployed to the endpoint. The percentages must add up to 100.
- `machine_type`: The type of machine to use for serving predictions.
- `accelerator_type`: The hardware accelerator type.
- `accelerator_count`: The number of accelerators to attach to a worker replica.
- `starting_replica_count`: The number of compute instances to initially provision.
- `max_replica_count`: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
```
DEPLOYED_NAME = "boston-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
```
### Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
```
test_item = x_test[0]
test_label = y_test[0]
print(test_item.shape)
```
### Make the prediction with explanation
Now that your `Model` resource is deployed to an `Endpoint` resource, you can get online explanations by sending prediction requests to the `Endpoint` resource.
#### Request
The format of each instance is:
[feature_list]
Since the `explain()` method can take multiple items (instances), send your single test item as a list of one test item.
#### Response
The response from the `explain()` call is a Python dictionary with the following entries:
- `ids`: The internal assigned unique identifiers for each prediction request.
- `predictions`: The prediction per instance.
- `deployed_model_id`: The Vertex AI identifier for the deployed `Model` resource which did the predictions.
- `explanations`: The feature attributions
```
instances_list = [test_item.tolist()]
prediction = endpoint.explain(instances_list)
print(prediction)
```
### Understanding the explanations response
First, you will look at what your model predicted and compare it to the actual value.
```
value = prediction[0][0][0]
print("Predicted Value:", value)
```
### Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
```
from tabulate import tabulate
feature_names = [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
]
attributions = (
prediction.explanations[0].attributions[0].feature_attributions[serving_input]
)
rows = []
for i, val in enumerate(feature_names):
rows.append([val, test_item[i], attributions[i]])
print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"]))
```
### Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the `baseline_score` returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the `sanity_check_explanations` method.
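With hypothetical numbers, the relationship described above looks like this (the real `baseline_score`, attribution values, and prediction all come from the `explain()` response):

```python
# Hypothetical regression example: baseline plus the per-feature
# attributions should approximately reconstruct the predicted value.
baseline_score = 22.5
attributions = [0.8, -0.3, 1.1, 0.0, -0.5]
predicted_value = 23.6

reconstructed = baseline_score + sum(attributions)
print(abs(reconstructed - predicted_value) < 0.05)  # True
```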
#### Get explanations
```
# Prepare 10 test examples to your model for prediction
instances = []
for i in range(10):
instances.append(x_test[i].tolist())
response = endpoint.explain(instances)
```
#### Sanity check
In the function below you perform a sanity check on the explanations.
```
import numpy as np
def sanity_check_explanations(
explanation, prediction, mean_tgt_value=None, variance_tgt_value=None
):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
baseline_score = explanation.attributions[0].baseline_output_value
print("baseline:", baseline_score)
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(prediction - baseline_score) <= 0.05:
print("Warning: example score and baseline score are too close.")
print("You might not get attributions.")
else:
passed_test += 1
print("Sanity Check 1: Passed")
print(passed_test, " out of ", total_test, " sanity checks passed.")
i = 0
for explanation in response.explanations:
try:
prediction = np.max(response.predictions[i]["scores"])
except TypeError:
prediction = np.max(response.predictions[i])
sanity_check_explanations(explanation, prediction)
i += 1
```
## Undeploy the model
When you are done doing predictions, you undeploy the model from the `Endpoint` resource. This deprovisions all compute resources and ends billing for the deployed model.
```
endpoint.undeploy_all()
```
# Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- AutoML Training Job
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
```
# Examples
---
* Let's work through a few fun exercises with dictionaries.
**<font color="red"> Question 1 </font>**
---
Let's say you are building a program to identify $CO_{2}$ (carbon dioxide) levels at certain sites in order to prevent potential accidents. At each of these sites your company has 5 sensors that measure the site's $CO_2$ level.
Normal CO2 levels average around 350. A site's CO2 level is given by the average of the readings from its 5 sensors.
If the level is higher than 450, the people responsible for the plants must be notified.
The sensor readings are monitored frequently and are passed to the system as a dictionary:
```python
niveis_co2 = {
'AC': [325,405,429,486,402],
'AL': [492,495,310,407,388],
'AP': [507,503,368,338,400],
'AM': [429,456,352,377,363],
'BA': [321,508,372,490,412],
'CE': [424,328,425,516,480],
'ES': [449,506,461,337,336],
'GO': [425,460,385,485,460],
'MA': [361,310,344,425,490],
'MT': [358,402,425,386,379],
'MS': [324,357,441,405,427],
'MG': [345,367,391,427,516],
'PA': [479,514,392,493,329],
'PB': [418,499,317,302,476],
'PR': [420,508,419,396,327],
'PE': [404,444,495,320,343],
'PI': [513,513,304,377,475],
'RJ': [502,481,492,502,506],
'RN': [446,437,519,356,317],
'RS': [427,518,459,317,321],
'RO': [517,466,512,326,458],
'RR': [466,495,469,495,310],
'SC': [495,436,382,483,479],
'SP': [495,407,362,389,317],
'SE': [508,351,334,389,418],
'TO': [339,490,304,488,419],
'DF': [376,516,320,310,518],
}
```
```
niveis_co2 = {
'AC': [325,405,429,486,402],
'AL': [492,495,310,407,388],
'AP': [507,503,368,338,400],
'AM': [429,456,352,377,363],
'BA': [321,508,372,490,412],
'CE': [424,328,425,516,480],
'ES': [449,506,461,337,336],
'GO': [425,460,385,485,460],
'MA': [361,310,344,425,490],
'MT': [358,402,425,386,379],
'MS': [324,357,441,405,427],
'MG': [345,367,391,427,516],
'PA': [479,514,392,493,329],
'PB': [418,499,317,302,476],
'PR': [420,508,419,396,327],
'PE': [404,444,495,320,343],
'PI': [513,513,304,377,475],
'RJ': [502,481,492,502,506],
'RN': [446,437,519,356,317],
'RS': [427,518,459,317,321],
'RO': [517,466,512,326,458],
'RR': [466,495,469,495,310],
'SC': [495,436,382,483,479],
'SP': [495,407,362,389,317],
'SE': [508,351,334,389,418],
'TO': [339,490,304,488,419],
'DF': [376,516,320,310,518],
}
for estado in niveis_co2:
    media_estado = sum(niveis_co2[estado]) / len(niveis_co2[estado])
    if media_estado > 450:
        print(f'CO2 levels in {estado} are on alert!')
```
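For comparison, the same check collapses into a list comprehension. `states_on_alert` is a hypothetical helper name introduced here for illustration, not part of the exercise:

```python
from statistics import mean

# Hypothetical helper: return the states whose average CO2 reading
# exceeds the alert threshold (450, as in the exercise).
def states_on_alert(levels, threshold=450):
    return [state for state, readings in levels.items()
            if mean(readings) > threshold]

sample = {'RJ': [502, 481, 492, 502, 506], 'AC': [325, 405, 429, 486, 402]}
states_on_alert(sample)  # -> ['RJ']
```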
**<font color="red"> Question 2 </font>**
---
From this dictionary, extract the links it contains!
```python
a = {'uri': '/videos/465407533', 'name': '15 Atalhos no Excel para Ficar Mais Produtivo', 'download': [{'quality': 'source', 'type': 'source', 'width': 1920, 'height': 1080, 'expires': '2020-10-07T04:00:55+00:00', 'link': 'https://player.vimeo.com/play/2064518513?s=465407533_1602043255_5f2f93dd00b66eba66d481f913383b4f&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivosource.mp4', 'created_time': '2020-10-06T14:26:17+00:00', 'fps': 30, 'size': 402678442, 'md5': 'af09508ceceed4994554f04e8b931e22', 'public_name': 'Original', 'size_short': '384.02MB'}, {'quality': 'hd', 'type': 'video/mp4', 'width': 1920, 'height': 1080, 'expires': '2020-10-07T04:00:55+00:00', 'link': 'https://player.vimeo.com/play/2064523157?s=465407533_1602043255_ab7b8353c59b5048032396ec5d95a276&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivo175.mp4', 'created_time': '2020-10-06T14:29:06+00:00', 'fps': 30, 'size': 173556205, 'md5': '3c05e1e69bd6b13eb1464451033907d2', 'public_name': 'HD 1080p', 'size_short': '165.52MB'}, {'quality': 'sd', 'type': 'video/mp4', 'width': 960, 'height': 540, 'expires': '2020-10-07T04:00:55+00:00', 'link': 'https://player.vimeo.com/play/2064523153?s=465407533_1602043255_f5ac38009ec5c0a13b30600c631446a3&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivo165.mp4', 'created_time': '2020-10-06T14:29:06+00:00', 'fps': 30, 'size': 89881848, 'md5': '4a5c5c96cdf18202ed20ca534fd88007', 'public_name': 'SD 540p', 'size_short': '85.72MB'}, {'quality': 'sd', 'type': 'video/mp4', 'width': 426, 'height': 240, 'expires': '2020-10-07T04:00:55+00:00', 'link': 
'https://player.vimeo.com/play/2064522788?s=465407533_1602043255_16c69872e2c4e92cc949d0b772242959&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivo139.mp4', 'created_time': '2020-10-06T14:28:31+00:00', 'fps': 30, 'size': 27401450, 'md5': '91cc0229087ec94bf67f64b01ad8768d', 'public_name': 'SD 240p', 'size_short': '26.13MB'}, {'quality': 'sd', 'type': 'video/mp4', 'width': 640, 'height': 360, 'expires': '2020-10-07T04:00:55+00:00', 'link': 'https://player.vimeo.com/play/2064522787?s=465407533_1602043255_310b087e2fc8c5e1154ce7a33d10d60e&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivo164.mp4', 'created_time': '2020-10-06T14:28:31+00:00', 'fps': 30, 'size': 48627155, 'md5': '548640bf79ce1552a3401726bb0e4224', 'public_name': 'SD 360p', 'size_short': '46.37MB'}]}
```
```
a = {'uri': '/videos/465407533', 'name': '15 Atalhos no Excel para Ficar Mais Produtivo', 'download': [{'quality': 'source', 'type': 'source', 'width': 1920, 'height': 1080, 'expires': '2020-10-07T04:00:55+00:00', 'link': 'https://player.vimeo.com/play/2064518513?s=465407533_1602043255_5f2f93dd00b66eba66d481f913383b4f&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivosource.mp4', 'created_time': '2020-10-06T14:26:17+00:00', 'fps': 30, 'size': 402678442, 'md5': 'af09508ceceed4994554f04e8b931e22', 'public_name': 'Original', 'size_short': '384.02MB'}, {'quality': 'hd', 'type': 'video/mp4', 'width': 1920, 'height': 1080, 'expires': '2020-10-07T04:00:55+00:00', 'link': 'https://player.vimeo.com/play/2064523157?s=465407533_1602043255_ab7b8353c59b5048032396ec5d95a276&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivo175.mp4', 'created_time': '2020-10-06T14:29:06+00:00', 'fps': 30, 'size': 173556205, 'md5': '3c05e1e69bd6b13eb1464451033907d2', 'public_name': 'HD 1080p', 'size_short': '165.52MB'}, {'quality': 'sd', 'type': 'video/mp4', 'width': 960, 'height': 540, 'expires': '2020-10-07T04:00:55+00:00', 'link': 'https://player.vimeo.com/play/2064523153?s=465407533_1602043255_f5ac38009ec5c0a13b30600c631446a3&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivo165.mp4', 'created_time': '2020-10-06T14:29:06+00:00', 'fps': 30, 'size': 89881848, 'md5': '4a5c5c96cdf18202ed20ca534fd88007', 'public_name': 'SD 540p', 'size_short': '85.72MB'}, {'quality': 'sd', 'type': 'video/mp4', 'width': 426, 'height': 240, 'expires': '2020-10-07T04:00:55+00:00', 'link': 
'https://player.vimeo.com/play/2064522788?s=465407533_1602043255_16c69872e2c4e92cc949d0b772242959&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivo139.mp4', 'created_time': '2020-10-06T14:28:31+00:00', 'fps': 30, 'size': 27401450, 'md5': '91cc0229087ec94bf67f64b01ad8768d', 'public_name': 'SD 240p', 'size_short': '26.13MB'}, {'quality': 'sd', 'type': 'video/mp4', 'width': 640, 'height': 360, 'expires': '2020-10-07T04:00:55+00:00', 'link': 'https://player.vimeo.com/play/2064522787?s=465407533_1602043255_310b087e2fc8c5e1154ce7a33d10d60e&loc=external&context=Vimeo%5CController%5CApi%5CResources%5CUser%5CVideosController.&download=1&filename=15%2BAtalhos%2Bno%2BExcel%2Bpara%2BFicar%2BMais%2BProdutivo164.mp4', 'created_time': '2020-10-06T14:28:31+00:00', 'fps': 30, 'size': 48627155, 'md5': '548640bf79ce1552a3401726bb0e4224', 'public_name': 'SD 360p', 'size_short': '46.37MB'}]}
for i in a['download']:
    print(i['link'])

# A more explicit version that walks the dictionary key by key:
for nome in a:
    if 'download' == nome:
        for dicionario in a[nome]:
            for atributo in dicionario:
                if 'link' == atributo:
                    print(dicionario[atributo])
                    print()
```
**<font color="red"> Question 3 </font>**
---
Given any text, build a system that receives the text and counts the characters that appear in it.
```
texto = '''
Johnson disse que os sinais apontam para o plano de invasão da Ucrânia e a inteligência do país indica que a Rússia
'''
texto = texto.lower()
letras = {}
for letra in texto:
    if letra in letras:
        letras[letra] += 1
    else:
        letras[letra] = 1
sorted(letras.items())
```
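The standard library's `collections.Counter` performs the same counting in one step and keeps each count attached to its character (whereas `sorted` over a dict alone only lists the keys) — a minimal sketch:

```python
from collections import Counter

counts = Counter("banana")
# most_common() returns (character, count) pairs, highest count first
counts.most_common(2)  # -> [('a', 3), ('n', 2)]
```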
**<font color="red"> Question 4 </font>**
---
Process a list of sale operations and compute the total sale price, also updating the stock.
```python
estoque = {
"tomate":[1000, 2.30],
"alface":[500, 0.45],
    "batata":[2000, 1.20],
"feijao":[100, 1.50],
}
```
```python
venda = [["tomate", 5], ["batata", 10], ["alface", 5]]
```
```
estoque = {
"tomate":[1000, 2.30],
"alface":[500, 0.45],
"batata":[2000, 1.20],
"feijao":[100, 1.50],
}
venda = [["tomate", 5], ["batata", 10], ["alface", 5]]
total = 0
print('Sales:\n')
for operacao in venda:
    produto, quantidade = operacao
    preco = estoque[produto][-1]
    custo = preco * quantidade
    print(f'{produto}: {quantidade}x{preco}={custo:.2f} R$')
    estoque[produto][0] -= quantidade
    total += custo
print(f'Total cost: {total:.2f} R$')

print('Stock:\n')
for chave, dados in estoque.items():
    print('Description: ', chave)
    print('Quantity: ', dados[0])
    print('Price: {:.2f}'.format(dados[-1]))
```
# Identifying the diffusion equation from a random walk
Samuel Rudy, 2016
Here we take various lengths of a random walk where $x_{j+1} \sim \mathcal{N}(x_j, dt)$ and see if we can identify the diffusion equation. As expected, it works better for longer series.
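As a quick standalone sketch of that walk (using numpy's newer `Generator` API, an assumption on my part — the notebook below uses the legacy `np.random` interface):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
# x_{j+1} ~ N(x_j, dt): each increment is sqrt(dt) times a standard normal
steps = np.sqrt(dt) * rng.standard_normal(10_000)
walk = np.cumsum(steps)
# For Brownian motion the increment variance should be close to dt
print(abs(np.var(steps) / dt - 1.0) < 0.1)
```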
```
%pylab inline
pylab.rcParams['figure.figsize'] = (12,6)
import numpy as np
from PDE_FIND import *
# set seed = 0
np.random.seed(0)

N = 24
lengths = [int(10**j) for j in 2 + np.arange(N+1)*4.0/N]
err = {}
for l in range(N+1): err[l] = []
sparsity_err = np.zeros(len(lengths))
trials = 10
w_true = np.zeros((16,1))
w_true[8] = 0.5

for trial in range(trials):
    print("Trial:", trial + 1, "of", trials)

    # generate a new time series
    dt = 0.01
    advection = 0  # it has some trouble with advection
    pos = np.cumsum(np.sqrt(dt)*np.random.randn(lengths[-1])) + dt*advection*np.arange(lengths[-1])

    # fit various lengths of it to a pde
    for l in range(len(lengths)):
        length = lengths[l]
        P = {}
        M = 0
        m = 5
        # More bins for longer time series. We just need to make sure there aren't too many bins for how many points we have
        n = int(20*log(length))
        for j in range(m): P[j] = []
        for i in range(length-m):
            # center
            y = pos[i+1:i+m+1] - pos[i]
            M = max([M, max(abs(y))])
            # add to distribution
            for j in range(m):
                P[j].append(y[j])

        bins = np.linspace(-M,M,n+1)
        x = linspace(M*(1/n-1),M*(1-1/n),n)
        dx = x[2]-x[1]
        U = np.zeros((n,m))
        for i in range(m):
            U[:,i] = np.histogram(P[i],bins)[0]/float(dx*(len(pos)-m))

        Ut,R,rhs_des = build_linear_system(U, dt, dx, D=3, P=3, time_diff = 'FD', deg_x = 5, width_x = np.max([10, n//10]))
        lam = 10**-3*length
        d_tol_2 = 0.001/dx
        d_tol_0 = 0.001/dx

        # Try two different normalizations and see which one performs better. They should get the same answer for most of
        # the longer runs.
        split = 0.8
        N = len(Ut)
        train = np.random.choice(N, int(N*split), replace = False)
        test = [i for i in np.arange(N) if i not in train]
        w2 = TrainSTRidge(R[train,:], Ut[train], lam, d_tol_2, normalize = 2)
        w0 = TrainSTRidge(R[train,:], Ut[train], lam, d_tol_0, normalize = 0)
        err2 = np.linalg.norm(Ut[test] - R[test,:].dot(w2)) + 0.01*np.linalg.norm(Ut[test], 2)*np.count_nonzero(w2)
        err0 = np.linalg.norm(Ut[test] - R[test,:].dot(w0)) + 0.01*np.linalg.norm(Ut[test], 2)*np.count_nonzero(w0)
        w = [w0,w2]
        error = [err0,err2]
        method = argmin(error)
        w_r = w[method]
        err[l].append(np.linalg.norm(w_r - w_true, 1))

        if trial == 0:
            print("Length of time series used: ", length)
            # print("Condition Number: ", cond(R))
            # print("Regularization: ", ['unregularized','2 norm'][method])
            print_pde(w_r, rhs_des)

        w_r = np.array(w_r)
        sparsity_err[l] += float(len(np.where(w_r[0:5] != 0)[0]) + len(np.where(w_r[7:8] != 0)[0]) + int(w_r[6] == 0))/trials
# print err
# print sparsity_err
# pylab.rcParams['figure.figsize'] = (3.5,1.7)
err2 = [np.mean(j) for _,j in err.items()]
min_len = np.max(np.where(np.array(err2) >= 0.5))+1
loglog(lengths[0:min_len], err2[0:min_len], 'x', color = 'r', mew=2, ms=5)
loglog(lengths[min_len:], err2[min_len:], 'o', color = 'b', ms = 5)
yticks([10**-3,10**-1,10**1,10**3,10**5, 10**7, 10**9], fontsize = 12)
xticks([10**3,10**5], fontsize = 12)
pareto_front = lengths[min_len]/10**(1.0/12)
axvspan(100,pareto_front, alpha=0.3, color='gray')
xlabel('Length of Time series', fontsize = 16)
ylabel(r'Average $\ell^1$ parameter error', fontsize = 16)
title('Parameter error for diffusion equation', fontsize = 20)
```
<div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="https://cocl.us/topNotebooksPython101Coursera">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
</a>
</div>
<a href="https://cognitiveclass.ai/">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
</a>
<h1>Tuples in Python</h1>
<p><strong>Welcome!</strong> This notebook will teach you about tuples in the Python programming language. By the end of this lab, you'll know the basic tuple operations in Python, including indexing, slicing and sorting.</p>
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li>
<a href="#dataset">About the Dataset</a>
</li>
<li>
<a href="#tuple">Tuples</a>
<ul>
<li><a href="#index">Indexing</a></li>
<li><a href="#slice">Slicing</a></li>
<li><a href="#sort">Sorting</a></li>
</ul>
</li>
<li>
<a href="#quiz">Quiz on Tuples</a>
</li>
</ul>
<p>
Estimated time needed: <strong>15 min</strong>
</p>
</div>
<hr>
<h2 id="dataset">About the Dataset</h2>
Imagine you received album recommendations from your friends and compiled all of the recommendations into a table, with specific information about each album.
The table has one row for each album and several columns:
- **artist** - Name of the artist
- **album** - Name of the album
- **released_year** - Year the album was released
- **length_min_sec** - Length of the album (hours,minutes,seconds)
- **genre** - Genre of the album
- **music_recording_sales_millions** - Music recording sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)
- **claimed_sales_millions** - Album's claimed sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)
- **date_released** - Date on which the album was released
- **soundtrack** - Indicates if the album is the movie soundtrack (Y) or (N)
- **rating_of_friends** - Indicates the rating from your friends from 1 to 10
<br>
<br>
The dataset can be seen below:
<font size="1">
<table font-size:xx-small style="width:25%">
<tr>
<th>Artist</th>
<th>Album</th>
<th>Released</th>
<th>Length</th>
<th>Genre</th>
<th>Music recording sales (millions)</th>
<th>Claimed sales (millions)</th>
<th>Released</th>
<th>Soundtrack</th>
<th>Rating (friends)</th>
</tr>
<tr>
<td>Michael Jackson</td>
<td>Thriller</td>
<td>1982</td>
<td>00:42:19</td>
<td>Pop, rock, R&B</td>
<td>46</td>
<td>65</td>
<td>30-Nov-82</td>
<td></td>
<td>10.0</td>
</tr>
<tr>
<td>AC/DC</td>
<td>Back in Black</td>
<td>1980</td>
<td>00:42:11</td>
<td>Hard rock</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td></td>
<td>8.5</td>
</tr>
<tr>
<td>Pink Floyd</td>
<td>The Dark Side of the Moon</td>
<td>1973</td>
<td>00:42:49</td>
<td>Progressive rock</td>
<td>24.2</td>
<td>45</td>
<td>01-Mar-73</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Whitney Houston</td>
<td>The Bodyguard</td>
<td>1992</td>
<td>00:57:44</td>
<td>Soundtrack/R&B, soul, pop</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td>Y</td>
<td>7.0</td>
</tr>
<tr>
<td>Meat Loaf</td>
<td>Bat Out of Hell</td>
<td>1977</td>
<td>00:46:33</td>
<td>Hard rock, progressive rock</td>
<td>20.6</td>
<td>43</td>
<td>21-Oct-77</td>
<td></td>
<td>7.0</td>
</tr>
<tr>
<td>Eagles</td>
<td>Their Greatest Hits (1971-1975)</td>
<td>1976</td>
<td>00:43:08</td>
<td>Rock, soft rock, folk rock</td>
<td>32.2</td>
<td>42</td>
<td>17-Feb-76</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Bee Gees</td>
<td>Saturday Night Fever</td>
<td>1977</td>
<td>1:15:54</td>
<td>Disco</td>
<td>20.6</td>
<td>40</td>
<td>15-Nov-77</td>
<td>Y</td>
<td>9.0</td>
</tr>
<tr>
<td>Fleetwood Mac</td>
<td>Rumours</td>
<td>1977</td>
<td>00:40:01</td>
<td>Soft rock</td>
<td>27.9</td>
<td>40</td>
<td>04-Feb-77</td>
<td></td>
<td>9.5</td>
</tr>
</table></font>
<hr>
<h2 id="tuple">Tuples</h2>
In Python, there are different data types: string, integer and float. These data types can all be contained in a tuple as follows:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesType.png" width="750" align="center" />
Now, let us create your first tuple with string, integer and float.
```
# Create your first tuple
tuple1 = ("disco",10,1.2 )
tuple1
```
The type of the variable is a **tuple**.
```
# Print the type of the tuple you created
type(tuple1)
```
<h3 id="index">Indexing</h3>
Each element of a tuple can be accessed via an index. The following table represents the relationship between the index and the items in the tuple. Each element can be obtained by the name of the tuple followed by a square bracket with the index number:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesIndex.gif" width="750" align="center">
We can print out each value in the tuple:
```
# Print the variable on each index
print(tuple1[0])
print(tuple1[1])
print(tuple1[2])
```
We can print out the **type** of each value in the tuple:
```
# Print the type of value on each index
print(type(tuple1[0]))
print(type(tuple1[1]))
print(type(tuple1[2]))
```
We can also use negative indexing. We use the same table above with corresponding negative values:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNeg.png" width="750" align="center">
We can obtain the last element as follows (this time we will not use the print statement to display the values):
```
# Use negative index to get the value of the last element
tuple1[-1]
```
We can display the next two elements as follows:
```
# Use negative index to get the value of the second last element
tuple1[-2]
# Use negative index to get the value of the third last element
tuple1[-3]
```
<h3 id="concate">Concatenate Tuples</h3>
We can concatenate or combine tuples by using the **+** sign:
```
# Concatenate two tuples
tuple2 = tuple1 + ("hard rock", 10)
tuple2
```
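Concatenation always builds a new tuple, because tuples are immutable — assigning to an index raises a <code>TypeError</code>. A quick sketch:

```python
tuple1 = ("disco", 10, 1.2)
try:
    tuple1[0] = "hard rock"     # tuples do not support item assignment
except TypeError as err:
    print("Cannot modify a tuple:", err)
# To "change" an element, build a new tuple instead:
new_tuple = ("hard rock",) + tuple1[1:]
new_tuple  # -> ('hard rock', 10, 1.2)
```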
We can slice tuples obtaining multiple values as demonstrated by the figure below:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesSlice.gif" width="750" align="center">
<h3 id="slice">Slicing</h3>
We can slice tuples, obtaining new tuples with the corresponding elements:
```
# Slice from index 0 to index 2
tuple2[0:3]
```
We can obtain the last two elements of the tuple:
```
# Slice from index 3 to index 4
tuple2[3:5]
```
We can obtain the length of a tuple using the <code>len</code> function:
```
# Get the length of tuple
len(tuple2)
```
This figure shows the number of elements:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesElement.png" width="750" align="center">
<h3 id="sort">Sorting</h3>
Consider the following tuple:
```
# A sample tuple
Ratings = (0, 9, 6, 5, 10, 8, 9, 6, 2)
```
We can sort the values in a tuple; note that <code>sorted</code> returns the result as a new list:
```
# Sort the tuple
RatingsSorted = sorted(Ratings)
RatingsSorted
```
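Because <code>sorted</code> always returns a list, even when its input is a tuple, wrap the result in <code>tuple()</code> if you need a tuple back:

```python
Ratings = (0, 9, 6, 5, 10, 8, 9, 6, 2)
RatingsTuple = tuple(sorted(Ratings))   # convert the sorted list back to a tuple
RatingsTuple  # -> (0, 2, 5, 6, 6, 8, 9, 9, 10)
```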
<h3 id="nest">Nested Tuple</h3>
A tuple can contain another tuple as well as other more complex data types. This process is called 'nesting'. Consider the following tuple with several elements:
```
# Create a nested tuple
NestedT =(1, 2, ("pop", "rock") ,(3,4),("disco",(1,2)))
```
Each element in the tuple including other tuples can be obtained via an index as shown in the figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNestOne.png" width="750" align="center">
```
# Print element on each index
print("Element 0 of Tuple: ", NestedT[0])
print("Element 1 of Tuple: ", NestedT[1])
print("Element 2 of Tuple: ", NestedT[2])
print("Element 3 of Tuple: ", NestedT[3])
print("Element 4 of Tuple: ", NestedT[4])
```
We can use the second index to access other tuples as demonstrated in the figure:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNestTwo.png" width="750" align="center">
We can access the nested tuples :
```
# Print element on each index, including nest indexes
print("Element 2, 0 of Tuple: ", NestedT[2][0])
print("Element 2, 1 of Tuple: ", NestedT[2][1])
print("Element 3, 0 of Tuple: ", NestedT[3][0])
print("Element 3, 1 of Tuple: ", NestedT[3][1])
print("Element 4, 0 of Tuple: ", NestedT[4][0])
print("Element 4, 1 of Tuple: ", NestedT[4][1])
```
We can access strings in the second nested tuples using a third index:
```
# Print the first element in the second nested tuples
NestedT[2][1][0]
# Print the second element in the second nested tuples
NestedT[2][1][1]
```
We can use a tree to visualise the process. Each new index corresponds to a deeper level in the tree:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNestThree.gif" width="750" align="center">
Similarly, we can access elements nested deeper in the tree with a fourth index:
```
# Print the first element of the tuple nested inside the fifth element
NestedT[4][1][0]
# Print the second element of the tuple nested inside the fifth element
NestedT[4][1][1]
```
The following figure shows the relationship of the tree and the element <code>NestedT[4][1][1]</code>:
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesNestFour.gif" width="750" align="center">
<h2 id="quiz">Quiz on Tuples</h2>
Consider the following tuple:
```
# sample tuple
genres_tuple = ("pop", "rock", "soul", "hard rock", "soft rock", \
"R&B", "progressive rock", "disco")
genres_tuple
```
Find the length of the tuple, <code>genres_tuple</code>:
```
# Write your code below and press Shift+Enter to execute
```
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/TuplesQuiz.png" width="1100" align="center">
Double-click __here__ for the solution.
<!-- Your answer is below:
len(genres_tuple)
-->
Access the element, with respect to index 3:
```
# Write your code below and press Shift+Enter to execute
```
Double-click __here__ for the solution.
<!-- Your answer is below:
genres_tuple[3]
-->
Use slicing to obtain indexes 3, 4 and 5:
```
# Write your code below and press Shift+Enter to execute
```
Double-click __here__ for the solution.
<!-- Your answer is below:
genres_tuple[3:6]
-->
Find the first two elements of the tuple <code>genres_tuple</code>:
```
# Write your code below and press Shift+Enter to execute
```
Double-click __here__ for the solution.
<!-- Your answer is below:
genres_tuple[0:2]
-->
Find the first index of <code>"disco"</code>:
```
# Write your code below and press Shift+Enter to execute
```
Double-click __here__ for the solution.
<!-- Your answer is below:
genres_tuple.index("disco")
-->
Generate a sorted List from the Tuple <code>C_tuple=(-5, 1, -3)</code>:
```
# Write your code below and press Shift+Enter to execute
```
Double-click __here__ for the solution.
<!-- Your answer is below:
C_tuple = (-5, 1, -3)
C_list = sorted(C_tuple)
C_list
-->
<hr>
<h2>The last exercise!</h2>
<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.</p>
<hr>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<h2>Get IBM Watson Studio free of charge!</h2>
<p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p>
</div>
<h3>About the Authors:</h3>
<p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
## Topological Data Analysis - Part 5 - Persistent Homology
This is Part 5 in a series on topological data analysis.
See <a href="TDApart1.html">Part 1</a> | <a href="TDApart2.html">Part 2</a> | <a href="TDApart3.html">Part 3</a> | <a href="TDApart4.html">Part 4</a>
<a href="https://github.com/outlace/OpenTDA/PersistentHomology.py">Download the code</a> | <a href="https://github.com/outlace/outlace.github.io/ipython-notebooks/TDA/TDApart5.ipynb">Download this notebook</a>
In this part we finally utilize all we've learned to compute the persistent homology groups and draw persistence diagrams to summarize the information graphically.
Let's summarize what we know so far.
We know...
1. how to generate a simplicial complex from point-cloud data using an arbitrary $\epsilon$ distance parameter
2. how to calculate homology groups of a simplicial complex
3. how to compute Betti numbers of a simplicial complex
The jump from what we know to persistent homology is small conceptually. We just need to calculate Betti numbers for a set of simplicial complexes generated by continuously varying $\epsilon: 0 \rightarrow \infty$. Then we can see which topological features persist significantly longer than others, and declare those to be signal not noise.
>Note: I'm ignoring an objective definition of "significantly longer" since that is really a statistical question that is outside the scope of this exposition. For all the examples we consider here, it will be obvious which features persist significantly longer just by visual inspection.
Unfortunately, while the conceptual jump is small, the technical jump is more formidable, especially because we also want to be able to ask which data points in the original data set lie on some particular topological feature.
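Conceptually, the sweep just loops over $\epsilon$ values and records the Betti numbers at each scale. In this hedged sketch, `betti_numbers` is a hypothetical callback standing in for the Betti-number computation from Part 3; `buildGraph` and `rips` are the `SimplicialComplex` functions used throughout this notebook:

```python
def persistence_sweep(data, epsilons, betti_numbers):
    """Record the Betti numbers of the Rips complex at each scale epsilon."""
    import SimplicialComplex
    results = {}
    for eps in epsilons:
        graph = SimplicialComplex.buildGraph(raw_data=data, epsilon=eps)
        cplx = SimplicialComplex.rips(graph=graph, k=3)
        results[eps] = betti_numbers(cplx)   # e.g. a tuple (b0, b1, b2)
    return results
```

Features whose Betti numbers survive across a long run of consecutive $\epsilon$ values are the persistent ones.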
Let's revisit the code we used to sample points (with some intentional randomness added) from a circle and build a simplicial complex.
```
import numpy as np
import matplotlib.pyplot as plt
n = 30 #number of points to generate
#generate space of parameter
theta = np.linspace(0, 2.0*np.pi, n)
a, b, r = 0.0, 0.0, 5.0
x = a + r*np.cos(theta)
y = b + r*np.sin(theta)
#code to plot the circle for visualization
plt.plot(x, y)
plt.show()
x2 = np.random.uniform(-0.75,0.75,n) + x #add some "jitteriness" to the points
y2 = np.random.uniform(-0.75,0.75,n) + y
fig, ax = plt.subplots()
ax.scatter(x2,y2)
plt.show()
newData = np.array(list(zip(x2,y2)))
import SimplicialComplex
graph = SimplicialComplex.buildGraph(raw_data=newData, epsilon=3.0) #Notice the epsilon parameter is 3.0
ripsComplex = SimplicialComplex.rips(graph=graph, k=3)
SimplicialComplex.drawComplex(origData=newData, ripsComplex=ripsComplex)
```
As you can see, setting $\epsilon = 3.0$ produces a nice looking simplicial complex that captures the single 1-dimensional "hole" in the original data.
However, let's play around with $\epsilon$ to see how it changes our complex.
```
graph = SimplicialComplex.buildGraph(raw_data=newData, epsilon=2.0)
ripsComplex = SimplicialComplex.rips(graph=graph, k=3)
SimplicialComplex.drawComplex(origData=newData, ripsComplex=ripsComplex)
```
We decreased $\epsilon$ to $2.0$ and now we have a "break" in our circle. If we calculate the homology and Betti numbers of this complex, we will no longer have a 1-dimensional cycle present. We will only see a single connected component.
Let's decrease it a little bit more to 1.9
```
newData = np.array(list(zip(x2,y2)))
graph = SimplicialComplex.buildGraph(raw_data=newData, epsilon=1.9)
ripsComplex = SimplicialComplex.rips(graph=graph, k=3)
SimplicialComplex.drawComplex(origData=newData, ripsComplex=ripsComplex)
```
Now we have three connected components and no cycles/holes in the complex. Ok, let's go the other direction and increase $\epsilon$ to 4.0
```
newData = np.array(list(zip(x2,y2)))
graph = SimplicialComplex.buildGraph(raw_data=newData, epsilon=4.0)
ripsComplex = SimplicialComplex.rips(graph=graph, k=3)
SimplicialComplex.drawComplex(origData=newData, ripsComplex=ripsComplex)
```
Unlike when we decreased $\epsilon$ by 1, increasing it to 4.0 hasn't changed anything about our homology groups. We still have a single connected component and a single 1-dimensional cycle.
Let's make an even bigger jump and set $\epsilon = 7.0$, an increase of 3.
```
graph = SimplicialComplex.buildGraph(raw_data=newData, epsilon=7.0)
ripsComplex = SimplicialComplex.rips(graph=graph, k=3)
SimplicialComplex.drawComplex(origData=newData, ripsComplex=ripsComplex)
```
Remarkably, even though we've gone up by 4 units from our original nice value of 3.0, we still get a complex with the same topological features: a single connected component and a 1-dimensional cycle.
This is the primary insight of __persistence__ in persistent homology. These features are persistent over a wide range of $\epsilon$ scale parameters and thus are likely to be true features of the underlying data rather than noise.
We can diagram our findings in two major styles: a barcode or a persistence diagram (the latter not shown here).
Here's what our barcode might look like for the above example:
<img src="images/TDAimages/barcode_example.png" width="500px" />
>NOTE: I've prepared this barcode "by hand," i.e. it is not the precise computed barcode. I've highlighted the "true" topological features amongst the noise. $H_0, H_1, H_2$ refer to the respective homology groups and Betti numbers.
Importantly, it is possible that two different true topological features exist at different scales and thus can only be captured with persistent homology; they would be missed by a simplicial complex built at any single fixed scale. For example, if our data contains a large circle next to a small circle, it is possible that at a small $\epsilon$ value only the small circle is connected, giving rise to a single 1-dimensional hole; then at a larger $\epsilon$ the big circle becomes connected while the small circle gets "filled in." So at no single $\epsilon$ value will both circles be revealed.
#### Filtrations
It turns out there is a relatively straightforward way to extend our previous work on calculating Betti numbers with boundary matrices to the setting of persistent homology where we're dealing with collections of ever expanding complexes.
We define a _filtration complex_ as the sequence of simplicial complexes generated by continuously increasing the scale parameter $\epsilon$.
But rather than building multiple simplicial complexes at various $\epsilon$ parameters and then combining them into a sequence, we can just build a single simplicial complex over our data using a large (maximal) $\epsilon$ value. We will, however, keep track of the distances between all pairs of points (we already do this with the algorithm we wrote), so we know at what $\epsilon$ scale each pair of points forms an edge. Thus "hidden" in any simplicial complex at some $\epsilon$ value is a filtration (a sequence of nested complexes) up to that value of $\epsilon$.
Here's a really simple example:
<img src="images/TDAimages/simplicialComplex9a.png" />
So if we take the maximum scale, $\epsilon = 4$, our simplicial complex is:
$$ S = \{ \{0\}, \{1\}, \{2\}, \{0,1\}, \{2,0\}, \{1,2\}, \{0,1,2\} \} $$
But if we keep track of the pair-wise distances between points (i.e. the length/weight of all the edges), then we already have the information necessary for a filtration.
Here are the weights (lengths) of each edge (1-simplex) in this simplicial complex (the vertical bars indicate weight/length):
$$ |\{0,1\}| = 1.4 \\
|\{2,0\}| = 2.2 \\
|\{1,2\}| = 3
$$
And this is how we would use that information to build a filtration:
$$
S_0 \subseteq S_1 \subseteq S_2 \\
S_0 = \{ \{0\}, \{1\}, \{2\} \} \\
S_1 = \{ \{0\}, \{1\}, \{2\}, \{0,1\} \} \\
S_2 = \{ \{0\}, \{1\}, \{2\}, \{0,1\}, \{2,0\}, \{1,2\}, \{0,1,2\} \} \\
$$
Basically, each simplex in a subcomplex of the filtration appears as soon as its longest edge appears. So the 2-simplex {0,1,2} appears only once the edge {1,2} does, since that edge is its longest and doesn't show up until $\epsilon \geq 3$.
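To make this rule concrete, here is a small sketch (the function name `appearance` is mine) that computes a simplex's appearance time from the edge weights above:

```python
from itertools import combinations

# edge weights from the example above
weights = {frozenset({0, 1}): 1.4, frozenset({2, 0}): 2.2, frozenset({1, 2}): 3.0}

def appearance(simplex):
    """Epsilon at which a simplex enters the filtration: the weight of its longest edge."""
    if len(simplex) == 1:
        return 0.0  # vertices are present from the start
    return max(weights[frozenset(e)] for e in combinations(simplex, 2))

print(appearance({0, 1}))     # 1.4
print(appearance({0, 1, 2}))  # 3.0 -- the 2-simplex waits for its longest edge {1,2}
```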
For it to be a filtration that we can use in our (future) algorithm, it needs to have a __total order__. A total order is an ordering of the simplices in our filtration such that there is a valid "less than" relationship between any two simplices (i.e. no two simplices are equal in "value"). The most famous example of a totally ordered set is the natural numbers {0,1,2,3,4,...}: since no two numbers are equal, we can always say one number is greater than or less than another.
How do we determine the "value" (henceforth: filter value) of a simplex in a filtration (and thus the ordering of the filtration)? Well, I already said part of it: the filter value of a simplex is partly determined by the length of its longest edge. But sometimes two distinct simplices have longest edges of the same length, so we have to define a hierarchy of rules for determining the value (the ordering) of our simplices.
For any two simplices, $\sigma_1, \sigma_2$...
1. 0-simplices must be less than 1-simplices must be less than 2-simplices, etc. This implies that any face of a simplex (i.e. $f \subset \sigma$) is automatically less than (comes before in the ordering) of the simplex. I.e. if $dim(\sigma_1) < dim(\sigma_2) \implies \sigma_1 < \sigma_2$ (dim = dimension, the symbol $\implies$ means "implies").
<br /><br />
2. If $\sigma_1, \sigma_2$ are of an equal dimension (and hence one is not the face of the other), then the value of each simplex is determined by its longest (highest weight) 1-simplex (edge). In our example above, $\{0,1\} \lt \{2,0\} \lt \{1,2\}$ due to the weights of each of those. To compare higher-dimensional simplices, you still just compare them by the value of their greatest edge. I.e. if $dim(\sigma_1) = dim(\sigma_2)$ then $max\_edge(\sigma_1) < max\_edge(\sigma_2) \implies \sigma_1 < \sigma_2$
<br /><br />
3. If $\sigma_1,\sigma_2$ are of an equal dimension AND their longest edges are of equal value (i.e. their maximum weight edges enter the filtration at the same $\epsilon$ value), then $max\_vertex(\sigma_1) < max\_vertex(\sigma_2) \implies \sigma_1 < \sigma_2$. What is a maximum vertex? We just place an arbitrary ordering over the vertices, even though they all appear at the same time.
>Just as an aside, what we've described is a _total order_. The counterpart to that idea is a _partial order_, where "less than" relationships are defined between some but not all elements, and some elements may be equal to others.
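The three rules above can be bundled into a single sort key of (dimension, longest-edge weight, largest vertex). A minimal sketch (function names are mine, not the library's) that sorts the example filtration:

```python
from itertools import combinations

# edge weights from the example above
weights = {frozenset({0, 1}): 1.4, frozenset({2, 0}): 2.2, frozenset({1, 2}): 3.0}

def simplex_key(simplex):
    """Total-order key: rule 1 (dimension), then rule 2 (longest edge), then rule 3 (largest vertex)."""
    dim = len(simplex) - 1
    longest = 0.0 if dim == 0 else max(weights[frozenset(e)] for e in combinations(simplex, 2))
    return (dim, longest, max(simplex))

simplices = [{0, 1, 2}, {1, 2}, {0}, {0, 1}, {1}, {2, 0}, {2}]
ordered = sorted(simplices, key=simplex_key)
print(ordered)  # [{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}]
```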
Remember from part 3 how we setup the boundary matrices by setting the columns to represent the n-simplices in the n-chain group and the rows to represent the (n-1)-simplices in the (n-1)-chain group? Well we can extend this procedure to calculate Betti numbers across an entire filtration complex in the following way.
Let's use the filtration from above:
$$
S_0 \subseteq S_1 \subseteq S_2 \\
S_0 = [ \{0\}, \{1\}, \{2\} ] \\
S_1 = [ \{0\}, \{1\}, \{2\}, \{0,1\} ] \\
S_2 = S = [ \{0\}, \{1\}, \{2\}, \{0,1\}, \{2,0\}, \{1,2\}, \{0,1,2\} ] \\
$$
Notice I already have the simplices in each subcomplex of the filtration in order (I've imposed a total order on the set of simplices) indicated by the square brackets rather than curly braces (although I may abuse this notation).
So we'll build a boundary matrix for the full filtration in the same way we built individual boundary matrices for each homology group before. We'll make a square matrix where the columns (label: $j$) and rows (label: $i$) are the simplices in the filtration in their proper (total) order.
Then, as before, we set each cell $[i,j] = 1$ if $\sigma_i$ is a face of $\sigma_j$ ($\sigma$ meaning simplex). All other cells are $0$.
Here's what it looks like in our very small filtration from above:
$$
\partial_{filtration} =
\begin{array}{c|ccccccc}
\partial & \{0\} & \{1\} & \{2\} & \{0,1\} & \{2,0\} & \{1,2\} & \{0,1,2\} \\
\hline
\{0\} & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
\{1\} & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\
\{2\} & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\
\{0,1\} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\{2,0\} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\{1,2\} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\{0,1,2\} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}
$$
As before, we will apply an algorithm to change the form of this matrix. However, unlike before, we are not going to convert this boundary matrix into Smith normal form; we are going to change it into something else called column-echelon form. This conversion process is called a __matrix reduction__, implying we're reducing the matrix into a simpler form.
##### Matrix reduction
Now here's where I have to apologize for a mistake I made in the last post: I never explained _why_ we had to convert our boundary matrix into Smith normal form, I just told you _how_ to do it.
So here's the deal: our boundary matrices from before gave us a linear map from an n-chain group down to the (n-1)-chain group. We could just multiply the boundary matrix by any element of the n-chain group, and the result would be the corresponding (mapped) element in the (n-1)-chain group. When we reduced the matrix to Smith normal form, we altered the boundary matrix such that we could no longer just multiply by it to map elements in that way. What we actually did was apply other linear maps to our boundary matrix, the result being the Smith normal form.
More formally, the Smith normal form $R$ of a matrix $A$ is the matrix product: $R = SAT$ where $S,T$ are other matrices. Hence we have a composition of linear maps that forms $R$, and we can in principle decompose $R$ into the individual linear maps (matrices) that compose it.
So the algorithm for reducing to Smith normal form is essentially finding two other matrices $S,T$ such that $SAT$ produces a matrix with 1s along the diagonal (at least partially).
But why do we do that? Well, remember that a matrix being a linear map means it maps one vector space to another. If we have a matrix $M: V_1 \rightarrow V_2$, then it maps the basis vectors of $V_1$ to basis vectors of $V_2$. So when we reduce a matrix, we're essentially redefining the basis vectors in each vector space. It just so happens that Smith normal form finds the bases that form cycles and boundaries. There are many different types of reduced matrix forms with useful interpretations and properties. I'm not going to get into any more of the mathematics here; I just wanted to give a little more explanation of this voodoo matrix reduction we're doing.
When we reduce a filtration boundary matrix into column-echelon form via an algorithm, it tells us the information about when certain topological features at each dimension are formed or "die" (by being subsumed into a larger feature) at various stages in the filtration (i.e. at increasing values of $\epsilon$, via our total order implied on the filtration). Hence, once we reduce the boundary matrix, all we need to do is read off the information as intervals when features are born and die, and then we can graph those intervals as a barcode plot.
The column-echelon form $C$ is likewise a composition of linear maps, $C = BV$, where $B$ is a filtration boundary matrix and $V$ is the matrix that records the column operations (column additions correspond to right-multiplication). We will keep a copy of $V$ once we're done reducing $B$, because $V$ records the information necessary to determine which data points lie on interesting topological features.
The general algorithm for reducing a matrix to column-echelon form is a type of <a href="https://en.wikipedia.org/wiki/Gaussian_elimination">Gaussian elimination</a>:
```python
for j = 1 to n
    while there exists i < j with low(i) = low(j)
        add column i to column j
    end while
end for
```
The function `low` accepts a column index `j` and returns the row index of the lowest $1$ in that column. For example, given the matrix column:
$ j = \begin{pmatrix} 1 \\ 0 \\1 \\1 \\ 0 \\0 \\0 \end{pmatrix} $<br />
Then `low(j) = 3` (with indexing starting from 0) since the lowest $1$ in the column is in the fourth row (which is index 3).
So the algorithm scans each column of the matrix from left to right. If we're currently at column `j`, the algorithm looks for any column `i` before `j` such that `low(i) == low(j)`, and if it finds such a column `i`, it adds that column to `j`. We keep a log, in the form of another matrix, of every time we add one column to another. If a column is all zeros, then `low(j) = -1` (meaning undefined).
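A minimal numpy version of `low` (my own sketch, separate from the library code that comes later) reproduces the worked example:

```python
import numpy as np

def low(j, matrix):
    """Row index of the lowest 1 in column j, or -1 if the column is all zeros."""
    rows = np.nonzero(matrix[:, j])[0]  # row indices of all nonzero entries in column j
    return int(rows[-1]) if len(rows) else -1

col = np.array([[1], [0], [1], [1], [0], [0], [0]])
print(low(0, col))  # 3 -- the lowest 1 sits in the fourth row (index 3)
```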
Let's try out the algorithm by hand on our boundary matrix from above. I've removed the column/row labels to be more concise:
$$
\partial_{filtration} =
\begin{Bmatrix}
0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{Bmatrix}
$$
So remember, columns have index `j` and rows index `i`. We scan from left to right. The first 3 columns are all zeros, so `low(j)` is undefined and we don't do anything. When we get to column 4 (index `j=3`), all the prior columns are zero, so there's still nothing to do. When we get to column 5 (index `j=4`), `low(4) = 2` and `low(3) = 1`; since `low(4) != low(3)`, we don't do anything and move on. It isn't until we get to column 6 (index `j=5`) that there is a column `i < j` (namely `i=4`) such that `low(4) = low(5)`. So we add column 5 to column 6. Since these are binary columns (we're working over the field $\mathbb Z_2$), $1+1=0$. The result of adding column 5 to column 6 is shown below:
$$
\partial_{filtration} =
\begin{Bmatrix}
0 & 0 & 0 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{Bmatrix}
$$
Now we continue on to the end, and the last column's lowest 1 is in a unique row so we don't do anything. Now we start again from the beginning on the left. We get to column 6 (index `j=5`) and we find that column 4 has the same lowest 1, `low(3) = low(5)`, so we add column 4 to 6. The result is shown below:
$$
\partial_{filtration} =
\begin{Bmatrix}
0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{Bmatrix}
$$
Look, we now have a new column of all zeros! What does this mean? It means that column is a new topological feature: either a connected component or some n-dimensional cycle. In this case it represents a 1-dimensional cycle, the cycle formed from the three 1-simplices.
Notice the matrix is now fully reduced to column-echelon form, since all the lowest $1$s are in unique rows, so our algorithm halts in satisfaction. Now that the boundary matrix is reduced, it is no longer the case that each column represents a single simplex in the filtration. Since we've been adding columns together, each column may represent multiple simplices. In this case, we only added columns together twice, and both times we were adding to column 6 (index `j = 5`), so column 6 now represents its own simplex {1,2} together with the simplices from columns 4 and 5 (namely {0,1} and {2,0}). So column 6 is the group of simplices $\{0,1\}, \{2,0\}, \{1,2\}$, and if you refer back to the graphical depiction of the complex, those 1-simplices form a 1-dimensional cycle (albeit one immediately killed off by the 2-simplex {0,1,2}).
It is important to keep track of what the algorithm does so we can find out what each column represents when the algorithm is done. We do this by setting up another matrix that I call the _memory matrix_. It starts off just being the identity matrix with the same dimensions as the boundary matrix.
$$
M_{memory} =
\begin{Bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{Bmatrix}
$$
But every time we add a column `i` to a column `j` in our reduction algorithm, we record the change in the memory matrix by putting a `1` in the cell `[i,j]`. In our case, we recorded the events of adding columns 4 and 5 to column 6, so we put a 1 in the cells `[3,5]` and `[4,5]` (using indices). This is shown below:
$$
M_{memory} =
\begin{Bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{Bmatrix}
$$
Once the algorithm is done running, we can always refer to this memory matrix to remember what the algorithm actually did and figure out what the columns in the reduced boundary matrix represent.
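As a sanity check, the reduced matrix really is the product of the boundary matrix and the memory matrix over $\mathbb Z_2$: each column addition is a right-multiplication by an elementary matrix, and the memory matrix accumulates them all. A quick numpy check (a sketch, with the matrix entries copied from the example above):

```python
import numpy as np

# boundary matrix of the example filtration: column -> row indices of its faces
B = np.zeros((7, 7), dtype=int)
for col, faces in {3: [0, 1], 4: [0, 2], 5: [1, 2], 6: [3, 4, 5]}.items():
    B[faces, col] = 1

# memory matrix: identity plus a 1 at [i, j] for every "add column i to column j"
V = np.identity(7, dtype=int)
V[3, 5] = 1  # column 4 was added to column 6
V[4, 5] = 1  # column 5 was added to column 6

C = (B @ V) % 2  # column-echelon form as the product B·V over Z2
print(C[:, 5])   # [0 0 0 0 0 0 0] -- the new all-zero column marking the 1-cycle
```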
Let's refer back to our _reduced_ (column-echelon form) boundary matrix of the filtration:
$$
\partial_{reduced} =
\begin{Bmatrix}
0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{Bmatrix}
$$
To record the intervals of birth and death of topological features, we simply scan each column from left to right. If column `j` has all zeros (i.e. `low(j) = -1`) then we record this as the birth of a new feature (being whatever column `j` represents, maybe a single simplex, maybe a group of simplices).
Otherwise, if a column is not all zeros but has some 1s in it, then we say that the column with index equal to `low(j)` dies at `j`, and hence is the end point of the interval for that feature.
So in our case, all three vertices (the first three columns) are new features that are born (their columns are all zeros, `low(j) = -1`) so we record 3 new intervals with starting points being their column indices. Since we're scanning sequentially from left to right, we don't yet know if or when these features will die, so we'll just tentatively set the end point as `-1` to indicate the end or infinity. Here are the first three intervals:
```python
#Remember the start and end points are column indices
[0,-1], [1,-1], [2,-1]
```
Then we keep scanning left to right and hit column 4 (index `j=3`) and calculate `low(3) = 1`. This means the feature that was born in column `j=1` (column 2) just died at `j=3`. Now we can go back and update the tentative end point for that interval, our updated intervals being:
```python
#updating intervals...
[0,-1], [1,3], [2,-1]
```
So we just continue this process until the last column and we get all our intervals:
```python
#The final set of intervals
[0,-1], [1,3], [2,4], [5,6]
```
The first three features are 0-simplices and since they are dimension 0, they represent the connected components of the filtration. The 4th feature is the 1-dimensional cycle since its interval indices refer to a group of 1-simplices.
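The whole by-hand computation, reduction plus interval reading, fits in a short self-contained script (function names are mine, not the library's):

```python
import numpy as np

def low(j, M):
    """Row index of the lowest 1 in column j, or -1 if the column is all zeros."""
    rows = np.nonzero(M[:, j])[0]
    return int(rows[-1]) if len(rows) else -1

def reduce_columns(B):
    """Left-to-right column reduction over Z2, as in the pseudocode above."""
    R = B.copy()
    for j in range(R.shape[1]):
        while low(j, R) != -1 and any(low(i, R) == low(j, R) for i in range(j)):
            i = next(i for i in range(j) if low(i, R) == low(j, R))
            R[:, j] = (R[:, i] + R[:, j]) % 2  # add column i to column j mod 2
    return R

def read_intervals(R):
    """Zero columns are births; a column j with low(j) = i kills the feature born at i."""
    intervals = []
    for j in range(R.shape[1]):
        i = low(j, R)
        if i == -1:
            intervals.append([j, -1])  # tentatively lives forever
        else:
            k = next(n for n, iv in enumerate(intervals) if iv[0] == i and iv[1] == -1)
            intervals[k][1] = j
    return intervals

# boundary matrix of the example filtration: column -> row indices of its faces
B = np.zeros((7, 7), dtype=int)
for col, faces in {3: [0, 1], 4: [0, 2], 5: [1, 2], 6: [3, 4, 5]}.items():
    B[faces, col] = 1

print(read_intervals(reduce_columns(B)))  # [[0, -1], [1, 3], [2, 4], [5, 6]]
```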
Believe it or not, we've just done persistent homology. That's all there is to it. Once we have the intervals, all we need to do is graph them as a barcode. We should convert the start/end points of these intervals to $\epsilon$ values by referring back to our set of edge weights and assigning to each simplex the $\epsilon$ value at which it forms in the filtration. Here's the barcode:
<img src="images/TDAimages/barcode_example2.png" width="300px" />
>I drew a dot in the $H_1$ group to indicate that the 1-dimensional cycle is born and dies at the same point (as soon as it forms, the 2-simplex subsumes it). Most real barcode plots do not include those dots; we don't care about such ephemeral features.
Notice how we have a bar in the $H_0$ group that is significantly longer than the other two. This suggests our data has only 1 connected component. Groups $H_1, H_2$ don't really have any bars, so our data doesn't have any true holes/cycles. Of course, with a more realistic data set we would expect to find some cycles.
#### Let's write some code
Alright, so we've covered the conceptual framework for computing persistent homology. Let's write some code to compute it on (somewhat) realistic data. I'm not going to spend too much effort explaining every line, since I'm more concerned with the abstract ideas so that you can go write your own algorithms, but I've tried to add inline comments that should help. Also keep in mind that since this is educational, these algorithms and data structures will _not_ be very efficient, but they will be simple. I hope to write a follow-up post at some point demonstrating how to make efficient versions of them.
Let's start by constructing a simple simplicial complex using the code we wrote in part 4.
```
data = np.array([[1,4],[1,1],[6,1],[6,4]])
#for example... this is with a small epsilon, to illustrate the presence of a 1-dimensional cycle
graph = SimplicialComplex.buildGraph(raw_data=data, epsilon=5.1)
ripsComplex = SimplicialComplex.rips(nodes=graph[0], edges=graph[1], k=3)
SimplicialComplex.drawComplex(origData=data, ripsComplex=ripsComplex, axes=[0,7,0,5])
```
So our simplicial complex is just a box. It obviously has 1 connected component and a 1-dimensional cycle. If we keep increasing $\epsilon$ then the box will "fill in" and we'll get a maximal simplex with all four points forming a 3-dimensional simplex (tetrahedron).
>Note, I have modified the `SimplicialComplex` library a bit (mostly cosmetic/stylistic changes) since <a href="http://outlace.com/Topological+Data+Analysis+Tutorial+-+Part+4/">part 4</a>. Refer to the <a href="https://github.com/outlace/outlace.github.io">GitHub project</a> for changes.
Next we're going to modify the functions from the original `SimplicialComplex` library from part 4 so that it works well with a filtration complex rather than ordinary simplicial complexes.
So I'm just going to drop a block of code on you now and describe what each function does. The `buildGraph` function is the same as before, but we have several new functions: `ripsFiltration`, `getFilterValue`, `compare` and `sortComplex`.
The `ripsFiltration` function accepts the graph object from `buildGraph` and a maximal dimension `k` (i.e. the highest dimension of simplices we will bother computing) and returns a simplicial complex object sorted by filter values. The filter values are determined as described above. We also have a `sortComplex` function that takes a complex and its filter values and returns the sorted complex.
So the only difference between our previous simplicial complex function and the `ripsFiltration` function is that the latter also generates filter values for each simplex in the complex and imposes a total order on the simplices in the filtration.
```
import itertools
import functools

def euclidianDist(a, b): #the default metric, but you can pass in any distance function you want
    return np.linalg.norm(a - b) #Euclidean distance metric

#Build the neighborhood graph
def buildGraph(raw_data, epsilon = 3.1, metric=euclidianDist): #raw_data is a numpy array
    nodes = [x for x in range(raw_data.shape[0])] #initialize node set, referencing indices from the original data array
    edges = [] #initialize empty edge array
    weights = [] #initialize weight array; stores the weight (which in this case is the distance) for each edge
    for i in range(raw_data.shape[0]): #iterate through each data point
        for j in range(raw_data.shape[0]-i): #inner loop to calculate pairwise point distances
            a = raw_data[i]
            b = raw_data[j+i] #each simplex is a set (no order), hence [0,1] = [1,0]; so only store one
            if (i != j+i):
                dist = metric(a, b)
                if dist <= epsilon:
                    edges.append({i, j+i}) #add an edge if the distance between points is <= epsilon
                    weights.append(dist)
    return nodes, edges, weights

def lower_nbrs(nodeSet, edgeSet, node): #lower neighbors based on the arbitrary ordering of the vertices
    return {x for x in nodeSet if {x, node} in edgeSet and node > x}

def ripsFiltration(graph, k): #k is the maximal dimension we want to compute (minimum is 1, edges)
    nodes, edges, weights = graph
    VRcomplex = [{n} for n in nodes]
    filter_values = [0 for j in VRcomplex] #vertices have a filter value of 0
    for i in range(len(edges)): #add 1-simplices (edges) and their associated filter values
        VRcomplex.append(edges[i])
        filter_values.append(weights[i])
    if k > 1:
        for i in range(k):
            for simplex in [x for x in VRcomplex if len(x)==i+2]: #iterate over simplices of dimension i+1
                #find the lower neighbors shared by every vertex in the simplex
                nbrs = set.intersection(*[lower_nbrs(nodes, edges, z) for z in simplex])
                for nbr in nbrs:
                    newSimplex = set.union(simplex, {nbr})
                    VRcomplex.append(newSimplex)
                    filter_values.append(getFilterValue(newSimplex, VRcomplex, filter_values))
    return sortComplex(VRcomplex, filter_values) #sort the simplices according to filter values

def getFilterValue(simplex, edges, weights): #filter value is the maximum weight of an edge in the simplex
    #(here `edges` receives the whole complex and `weights` its filter values; only 1-simplices get looked up)
    oneSimplices = list(itertools.combinations(simplex, 2)) #get the set of 1-simplices in the simplex
    max_weight = 0
    for oneSimplex in oneSimplices:
        filter_value = weights[edges.index(set(oneSimplex))]
        if filter_value > max_weight: max_weight = filter_value
    return max_weight

def compare(item1, item2):
    #comparison function that provides the basis for our total order on the simplices
    #each item represents a simplex, bundled as a list [simplex, filter value], e.g. [{0,1}, 4]
    if len(item1[0]) == len(item2[0]):
        if item1[1] == item2[1]: #if both items have the same filter value
            if sum(item1[0]) > sum(item2[0]):
                return 1
            else:
                return -1
        else:
            if item1[1] > item2[1]:
                return 1
            else:
                return -1
    else:
        if len(item1[0]) > len(item2[0]):
            return 1
        else:
            return -1

def sortComplex(filterComplex, filterValues): #the simplices in the filtration need a total order
    #sort the simplices in the filtration by their filter values
    pairedList = zip(filterComplex, filterValues)
    #Python 3 no longer supports custom compare functions in sorted(), so we convert with functools.cmp_to_key
    sortedComplex = sorted(pairedList, key=functools.cmp_to_key(compare))
    sortedComplex = [list(t) for t in zip(*sortedComplex)]
    return sortedComplex

graph2 = buildGraph(raw_data=data, epsilon=7) #epsilon = 9 would build a "maximal complex"
ripsComplex2 = ripsFiltration(graph2, k=3)
SimplicialComplex.drawComplex(origData=data, ripsComplex=ripsComplex2[0], axes=[0,7,0,5])
ripsComplex2

#return the n-simplices and weights in a complex
def nSimplices(n, filterComplex):
    nchain = []
    nfilters = []
    for i in range(len(filterComplex[0])):
        simplex = filterComplex[0][i]
        if len(simplex) == (n+1):
            nchain.append(simplex)
            nfilters.append(filterComplex[1][i])
    if (nchain == []): nchain = [0]
    return nchain, nfilters

#check if a simplex is a face of another simplex
def checkFace(face, simplex):
    if simplex == 0:
        return 1
    elif (set(face) < set(simplex) and ( len(face) == (len(simplex)-1) )): #if face is an (n-1)-subset of simplex
        return 1
    else:
        return 0

#build the boundary matrix for the whole filtration
def filterBoundaryMatrix(filterComplex):
    bmatrix = np.zeros((len(filterComplex[0]), len(filterComplex[0])), dtype='>i8')
    i = 0
    for colSimplex in filterComplex[0]:
        j = 0
        for rowSimplex in filterComplex[0]:
            bmatrix[j,i] = checkFace(rowSimplex, colSimplex)
            j += 1
        i += 1
    return bmatrix

bm = filterBoundaryMatrix(ripsComplex2)
bm #Here is the (non-reduced) boundary matrix
```
The following functions are for reducing the boundary matrix as described above (when we did it by hand).
```
#returns the row index of the lowest "1" in column i of the boundary matrix
def low(i, matrix):
    col = matrix[:,i]
    col_len = len(col)
    for k in range((col_len-1), -1, -1): #scan the column from the bottom until we find the first 1
        if col[k] == 1: return k
    return -1 #if there is no lowest 1 (e.g. a column of all zeros), return -1 for 'undefined'

#checks if the boundary matrix is fully reduced
def isReduced(matrix):
    for j in range(matrix.shape[1]): #iterate through columns
        for i in range(j): #iterate through columns before column j
            low_j = low(j, matrix)
            low_i = low(i, matrix)
            if (low_j == low_i and low_j != -1):
                return i,j #return column i to add to column j
    return [0,0]

#the main function to iteratively reduce the boundary matrix
def reduceBoundaryMatrix(matrix):
    reduced_matrix = matrix.copy()
    matrix_shape = reduced_matrix.shape
    memory = np.identity(matrix_shape[1], dtype='>i8') #this matrix will store the column additions we make
    r = isReduced(reduced_matrix)
    while (r != [0,0]):
        i = r[0]
        j = r[1]
        col_j = reduced_matrix[:,j]
        col_i = reduced_matrix[:,i]
        #print("Mod: add col %s to %s \n" % (i+1,j+1)) #uncomment to see which modifications are made
        reduced_matrix[:,j] = np.bitwise_xor(col_i,col_j) #add column i to column j (arithmetic mod 2)
        memory[i,j] = 1
        r = isReduced(reduced_matrix)
    return reduced_matrix, memory

z = reduceBoundaryMatrix(bm)
z
```
So the `reduceBoundaryMatrix` function returns two matrices: the reduced boundary matrix and a _memory_ matrix that records all the actions of the reduction algorithm. The memory matrix is necessary so we can look up what each column in the reduced boundary matrix actually refers to; once it's reduced, a column is not necessarily a single simplex but possibly a group of simplices, such as an n-dimensional cycle.
The following functions use the reduced matrix to read off the intervals over which each feature is born and dies throughout the filtration.
```
def readIntervals(reduced_matrix, filterValues): #reduced_matrix includes the reduced boundary matrix AND the memory matrix
    #store intervals as a list of 2-element lists, e.g. [2,4] = start at "time" point 2, end at "time" point 4
    #note the "time" points are just simplex index numbers for now; we will convert them to epsilon values later
    intervals = []
    #loop through each column j:
    #if low(j) = -1 (undefined, all zeros), then j signifies the birth of a new feature j
    #if low(j) = i (defined), then j signifies the death of feature i
    for j in range(reduced_matrix[0].shape[1]): #for each column (it's a square matrix, so rows would work too)
        low_j = low(j, reduced_matrix[0])
        if low_j == -1:
            interval_start = [j, -1]
            intervals.append(interval_start) # -1 is a temporary placeholder until we update with the death time
            #if there is no death time, -1 signifies the feature has no end (start -> infinity)
            #-1 is also convenient because in Python, x[-1] returns the last element of a list,
            #so leaving the end point of an interval as -1 effectively says the feature lasts until the very end
        else: #death of a feature
            feature = intervals.index([low_j, -1]) #find the feature [start, end] so we can update the end point
            intervals[feature][1] = j #j is the death point
            #if the interval's start and end points are the same, the feature is born and dies instantly,
            #so it is a useless interval and we don't want to waste memory keeping it
            epsilon_start = filterValues[intervals[feature][0]]
            epsilon_end = filterValues[j]
            if epsilon_start == epsilon_end: intervals.remove(intervals[feature])
    return intervals

def readPersistence(intervals, filterComplex):
    #converts intervals into epsilon format and figures out which homology group each interval belongs to
    persistence = []
    for interval in intervals:
        start = interval[0]
        end = interval[1]
        homology_group = (len(filterComplex[0][start]) - 1) #filterComplex is a list of lists [complex, filter values]
        epsilon_start = filterComplex[1][start]
        epsilon_end = filterComplex[1][end]
        persistence.append([homology_group, [epsilon_start, epsilon_end]])
    return persistence

intervals = readIntervals(z, ripsComplex2[1])
intervals
```
So those are all the intervals for the features that arise and die. The `readPersistence` function will just convert the start/end points from being indices in the boundary matrix to their corresponding $\epsilon$ value. It will also figure out to which homology group (i.e. which Betti number dimension) each interval belongs.
```
persist1 = readPersistence(intervals, ripsComplex2)
persist1
```
This function will just graph the persistence barcode for individual dimensions.
```
import matplotlib.pyplot as plt
def graph_barcode(persistence, homology_group = 0):
#this function just produces the barcode graph for each homology group
xstart = [s[1][0] for s in persistence if s[0] == homology_group]
xstop = [s[1][1] for s in persistence if s[0] == homology_group]
y = [0.1 * x + 0.1 for x in range(len(xstart))]
plt.hlines(y, xstart, xstop, color='b', lw=4)
#Setup the plot
ax = plt.gca()
plt.ylim(0,max(y)+0.1)
ax.yaxis.set_major_formatter(plt.NullFormatter())
plt.xlabel('epsilon')
plt.ylabel("Betti dim %s" % (homology_group,))
plt.show()
graph_barcode(persist1, 0)
graph_barcode(persist1, 1)
```
Schweeeeet! Persistent homology, at last!
So we've graphed the barcode diagrams for the first two Betti numbers. The first barcode is a little underwhelming, since what we want to see is some bars that are significantly longer than others, indicating a true feature. In this case, the Betti 0 barcode has a longest bar, which represents the single connected component formed with the box, but it's not _that_ much longer than the next longest bar. That's mostly an artifact of the example being so simple; if I had added a few more points, we would see a more significant longest bar.
The Betti 1 barcode is in much better shape. We clearly have just a single long bar indicating the 1-dimensional cycle that exists up until the box "fills in" at $\epsilon = 5.8$.
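A quick way to quantify "significantly longer" is to rank the bars of one homology group by their lifetime. A sketch (the helper name is mine, not part of this tutorial's code), assuming the `[group, [start, end]]` format produced by `readPersistence`:

```python
def longest_features(persistence, homology_group=0, top=3):
    # Rank the bars of one homology group by epsilon lifetime, longest first.
    bars = [p[1] for p in persistence if p[0] == homology_group]
    return sorted(bars, key=lambda se: se[1] - se[0], reverse=True)[:top]
```

Bars that dwarf the rest of this ranking are the candidates for "true" topological features.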
An important feature of persistent homology is being able to find the data points that lie on some interesting topological feature. If all persistent homology could do was give us barcodes and tell us how many connected components and cycles then that would be useful but wanting.
What we really want to be able to do is say, "hey look, the barcode shows there's a statistically significant 1-dimensional cycle, I wonder which data points form that cycle?"
To test out this procedure, let's modify our simple "box" simplicial complex a bit and add another edge (giving us another connected component).
```
data_b = np.array([[1,4],[1,1],[6,1],[6,4],[12,3.5],[12,1.5]])
graph2b = buildGraph(raw_data=data_b, epsilon=8) #epsilon is set to a high value to create a maximal complex
rips2b = ripsFiltration(graph2b, k=3)
SimplicialComplex.drawComplex(origData=data_b, ripsComplex=rips2b[0], axes=[0,14,0,6])
```
The depiction shows the maximal complex since we set $\epsilon$ to be a high value. But I tried to design the data so the "true" features are a box (which is a 1-dim cycle) and an edge off to the right, for a total of two "true" connected components.
Alright, let's run persistent homology on this data.
```
bm2b = filterBoundaryMatrix(rips2b)
rbm2b = reduceBoundaryMatrix(bm2b)
intervals2b = readIntervals(rbm2b, rips2b[1])
persist2b = readPersistence(intervals2b, rips2b)
graph_barcode(persist2b, 0)
graph_barcode(persist2b, 1)
```
We can see the two connected components (the two longest bars) in `Betti dim 0`, and we see two bars in `Betti dim 1`, but one is clearly almost twice as long as the other. The shorter bar is from when the edge on the right forms a cycle with the two left-most vertices of the box on the left.
So at this point we're thinking we have one significant 1-dim cycle, but (pretending we can't just plot our data) we don't know which points form this cycle so that we can further analyze that subset of the data if we wish.
In order to figure that out, we just need to use the _memory_ matrix that our reduction algorithm also returns to us. First we find the interval we want from the `intervals2b` list, in this case it is the first element, then we get the start point (since that indicates the birth of the feature). The start point is an index value in the boundary array, so we'll just find that column in the memory array and look for the 1s in that column. The rows with 1s in that column are the other simplices in the group (including the column itself).
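That lookup can be wrapped in a small helper (my naming, a sketch over the structures used in this tutorial — the memory matrix as a numpy array and an interval as `[birth, death]`):

```python
import numpy as np

def featureSimplexIndices(interval, memory_matrix):
    # Rows carrying a 1 in the birth column of the memory matrix are the
    # simplices making up the feature born at index interval[0].
    birth = interval[0]
    col = memory_matrix[:, birth]
    return [i for i in range(len(col)) if col[i] == 1]
```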
```
persist2b
```
First, look at the intervals in homology group 1; we want the interval that spans the epsilon range from 5.0 to 5.83. That's index 6 in the persistence list, and likewise index 6 in the intervals list. The intervals list stores index values rather than epsilon start and end points, so we can look up the simplices in the memory matrix.
```
cycle1 = intervals2b[6]
cycle1
#So birth index is 10
column10 = rbm2b[1][:,10]
column10
```
This is the column with index 10 in the memory matrix, so we know that the simplex at index 10 is part of the cycle, along with the simplices whose rows have 1s in this column.
```
ptsOnCycle = [i for i in range(len(column10)) if column10[i] == 1]
ptsOnCycle
#so the simplices with indices 7,8,9,10 lie on our 1-dimensional cycle, let's find what those simplices are
rips2b[0][7:11] #range [start:stop], but stop is non-inclusive, so put 11 instead of 10
```
Exactly! Now this is the list of 1-simplices that form the 1-dimensional cycle we saw in our barcode. It should be trivial to go from this list to the raw data points so I won't bore you with those details here.
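For completeness, a sketch of that step, assuming each simplex is an iterable of vertex indices into the raw data array (as in this tutorial):

```python
import numpy as np

def pointsFromSimplices(simplices, raw_data):
    # Gather the unique vertex indices over all simplices and pull out
    # the corresponding raw data points.
    idxs = sorted({v for simplex in simplices for v in simplex})
    return np.array([raw_data[i] for i in idxs])
```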
Alright. Let's try this with slightly more realistic data. We'll use data sampled from a circle, as we did at the beginning of this section. For this example, I've set the parameter `k=2` in the `ripsFiltration` function so it will only generate simplices up to 2-simplices. This is just to reduce the memory needed. If you have a fast computer with a lot of memory, you're welcome to set `k` to 3 or so, but I wouldn't make it much greater than that. Usually we're mostly interested in connected components and 1- or 2-dimensional cycles. The utility of topological features in dimensions higher than that seems to be a diminishing return, and the price in memory and running time is generally not worth it.
>__NOTE__: The following may take a while to run, perhaps several minutes. This is because the code written in these tutorials is optimized for clarity and ease, NOT for efficiency or speed. There are a lot of performance optimizations that can and should be made if we wanted to make this anywhere close to a production-ready TDA library. I plan to write a follow-up post at some point about the most reasonable algorithm and data-structure optimizations, because I hope to develop a reasonably efficient open-source TDA library in Python in the future and would appreciate any help I can get.
```
n = 30 #number of points to generate
#generate space of parameter
theta = np.linspace(0, 2.0*np.pi, n)
a, b, r = 0.0, 0.0, 5.0
x = a + r*np.cos(theta)
y = b + r*np.sin(theta)
#code to plot the circle for visualization
plt.plot(x, y)
plt.show()
xc = np.random.uniform(-0.25,0.25,n) + x #add some "jitteriness" to the points (but less than before, reduces memory)
yc = np.random.uniform(-0.25,0.25,n) + y
fig, ax = plt.subplots()
ax.scatter(xc,yc)
plt.show()
circleData = np.array(list(zip(xc,yc)))
graph4 = buildGraph(raw_data=circleData, epsilon=3.0)
rips4 = ripsFiltration(graph4, k=2)
SimplicialComplex.drawComplex(origData=circleData, ripsComplex=rips4[0], axes=[-6,6,-6,6])
```
Clearly, persistent homology should tell us we have 1 connected component and a single 1-dimensional cycle.
```
len(rips4[0])
#On my laptop, a rips filtration with more than about 250 simplices will take >10 mins to compute persistent homology
#anything < ~220 only takes a few minutes or less
%%time
bm4 = filterBoundaryMatrix(rips4)
rbm4 = reduceBoundaryMatrix(bm4)
intervals4 = readIntervals(rbm4, rips4[1])
persist4 = readPersistence(intervals4, rips4)
graph_barcode(persist4, 0)
graph_barcode(persist4, 1)
```
We can clearly see that there is a _significantly_ longer bar than the others in the `Betti dim 0` barcode, indicating we have only one significant connected component. This fits clearly with the circular data we plotted.
The `Betti dim 1` barcode is even easier to read: it shows only a single bar, so we clearly have one significant feature, a 1-dimensional cycle.
Okay well, as usual, let's make things a little bit tougher to test our algorithms.
We're going to sample points from a shape called a __lemniscate__, more commonly known as a figure eight, since it looks like the number 8 on its side. As you can tell, it should have one connected component and two 1-dimensional cycles.
```
n = 50
t = np.linspace(0, 2*np.pi, num=n)
#equations for lemniscate
x = np.cos(t) / (np.sin(t)**2 + 1)
y = np.cos(t) * np.sin(t) / (np.sin(t)**2 + 1)
plt.plot(x, y)
plt.show()
x2 = np.random.uniform(-0.03, 0.03, n) + x #add some "jitteriness" to the points
y2 = np.random.uniform(-0.03, 0.03, n) + y
fig, ax = plt.subplots()
ax.scatter(x2,y2)
plt.show()
figure8Data = np.array(list(zip(x2,y2)))
graph5 = buildGraph(raw_data=figure8Data, epsilon=0.2)
rips5 = ripsFiltration(graph5, k=2)
SimplicialComplex.drawComplex(origData=figure8Data, ripsComplex=rips5[0], axes=[-1.5,1.5,-1, 1])
%%time
bm5 = filterBoundaryMatrix(rips5)
rbm5 = reduceBoundaryMatrix(bm5)
intervals5 = readIntervals(rbm5, rips5[1])
persist5 = readPersistence(intervals5, rips5)
```
Yeah... that took 17 minutes. Good thing I still had enough CPU/RAM to watch YouTube.
```
graph_barcode(persist5, 0)
graph_barcode(persist5, 1)
```
:-) Just as we expected. `Betti dim 0` shows one significantly longer bar than the others and `Betti dim 1` shows us two long bars, our two 1-dim cycles.
Let's add in another component. In this example, I've just added a small circle to the data, so we should have two connected components and three 1-dimensional cycles.
```
theta = np.linspace(0, 2.0*np.pi, 10)
a, b, r = 1.6, 0.5, 0.2
x3 = a + r*np.cos(theta)
y3 = b + r*np.sin(theta)
x4 = np.append(x, x3)
y4 = np.append(y, y3)
fig, ax = plt.subplots()
ax.scatter(x4,y4)
plt.show()
figure8Data2 = np.array(list(zip(x4,y4)))
# I didn't add "jitteriness" this time since that increases the complexity of the subsequent simplicial complex,
# which makes the memory and computation requirements much greater
graph6 = buildGraph(raw_data=figure8Data2, epsilon=0.19)
rips6 = ripsFiltration(graph6, k=2)
SimplicialComplex.drawComplex(origData=figure8Data2, ripsComplex=rips6[0], axes=[-1.5,2.5,-1, 1])
len(rips6[0]) #reasonable size
%%time
bm6 = filterBoundaryMatrix(rips6)
rbm6 = reduceBoundaryMatrix(bm6)
intervals6 = readIntervals(rbm6, rips6[1]) #pass the filter values (rips6[1]), not the complex (rips6[0])
persist6 = readPersistence(intervals6, rips6)
graph_barcode(persist6, 0)
graph_barcode(persist6, 1)
```
Excellent. I think by now I don't need to tell you how to interpret the barcodes.
### The End... What's next?
Well that's it folks. Part 5 is the end of this sub-series on persistent homology. You now should have all the knowledge necessary to understand and use existing persistent homology software tools, or even build your own if you want.
Next, we will turn our attention to the other major tool in topological data analysis, __mapper__. Mapper is an algorithm that allows us to create visualizable graphs from arbitrarily high-dimensional data. In this way, we are able to see global and local topological features. It is very useful for exploratory data analysis and hypothesis generation. Fortunately, the concepts and math behind it are a lot easier than persistent homology.
```
import numpy as np
import json
import pandas as pd
import matplotlib.pyplot as plt
import linear_regression as clf
from learning_rate import *
import model
from data_process import load_data
import seaborn as sns
import time
%matplotlib inline
%load_ext autoreload
%autoreload 1
```
# Task 1
```
with open("data/reddit.json") as fp:
data = json.load(fp)
sz_tr=10000; sz_val=1000; sz_test=1000
train = data[:sz_tr]
val = data[sz_tr:sz_tr+sz_val]
test = data[sz_tr+sz_val:]
train = pd.DataFrame.from_records(train)
train.describe()
corr = train.corr()
corr.sort_values(["popularity_score"], ascending = False, inplace = True)
print(corr.popularity_score)
train['popularity_score'].kurtosis()
corrmat = train.corr()
sns.heatmap(corrmat, vmax=.8, square=True)
# Xtr, ytr, Xval, yval, Xtest, ytest = load_data('test')
# sentiments = pd.DataFrame(data=Xtr)
# # sentiments.columns = ['children'] + ['text' + str(i) for i in range(10)] + ['len_text', 'len_sentence', 'sentiment_neg', 'sentiment_neu', 'sentiment_pos',
# # 'sentiment_compound']
# sentiments['popularity'] = ytr
# fig, ax = plt.subplots(figsize=(10,10))
# corrmat_sent = sentiments.corr()
# sns.heatmap(corrmat_sent, vmax=.8, square=True)
# corrmat_sent.sort_values(["popularity"], ascending = False, inplace = True)
# print(corrmat_sent.popularity[:10])
# print(sentiments.head(3))
# plt.plot(sentiments['popularity'], sentiments[6])
from scipy import stats
from scipy.stats import norm # needed for the fit= argument below
sns.distplot(train['popularity_score'], fit=norm);
fig = plt.figure()
res = stats.probplot(train['popularity_score'], plot=plt)
idx = np.argsort(train['controversiality'].values)
x = train[train['controversiality']>0]
x['is_root'].value_counts().plot.bar()
x['len'] = x['text'].apply(len)
idx = np.argsort(x['children'])
plt.plot(x['children'].iloc[idx].values, x['len'].iloc[idx].values)
```
Most comments have no children, so there might be a correlation between the number of children and popularity.
```
train['children'][train['children']>4].value_counts().plot.bar()
```
**Yes, there is a correlation.** Most comments with more than 10 replies are quite popular.
```
x = train['children']
o = np.argsort(x)
plt.plot(x[o], train['popularity_score'][o], '-')
```
We are using only the top 160 words. That is not many, so removing stop words or some other preprocessing might be crucial.
```
x = train['text'].apply(len)
o = np.argsort(x)
plt.plot(x[o], train['popularity_score'][o], '-')
plt.plot((160, 160), (-6, 8), 'k-')
train.iloc[o.tail(-1)].values # Those big comments might be outliers.
```
# Task 2
There are two modules for the implementation of linear regression:
1. `linear_regression`: implementations of closed-form and gradient-descent linear regression.
2. `learning_rate`: implementations of different learning-rate schedulers, such as constant, decay, and momentum.
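The `learning_rate` module itself isn't listed in this notebook. Schedulers along these lines would behave as described below (a sketch with hypothetical class signatures — the project's actual API may differ):

```python
class LearningRate:
    """Constant learning rate: the update is always lr * gradient."""
    def __init__(self, lr=1e-3):
        self.lr = lr
    def update(self, step, grad):
        return self.lr * grad

class Decay(LearningRate):
    """Learning rate shrinking as 1 / (1 + decay * step)."""
    def __init__(self, lr=1e-3, decay=1e-2):
        super().__init__(lr)
        self.decay = decay
    def update(self, step, grad):
        return self.lr / (1.0 + self.decay * step) * grad

class Momentum(LearningRate):
    """Exponential moving average of past gradients."""
    def __init__(self, lr=1e-3, beta=0.9):
        super().__init__(lr)
        self.beta = beta
        self.velocity = 0.0
    def update(self, step, grad):
        self.velocity = self.beta * self.velocity + (1.0 - self.beta) * grad
        return self.lr * self.velocity
```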
The following code analyses some toy data for which we know the exact weights, in order to find the scheduler that converges fastest. As the graph shows, two of the schedulers converge quickly — in 155 and 185 iterations — whereas a constant learning rate takes a lot of steps to converge. Note that we are using the default configurations of the implemented algorithms; better results could be obtained with parameter tuning, but the purpose here is just to check that the algorithms are implemented correctly.
```
x = np.array([.86, .09, -.85, .87, -.44, -.43, -1.1, .40, -.96, .17])
y = np.array([2.49, .83, -.25, 3.1, .87, .02, -.12, 1.81, -.83, .43])
lse = clf.LinearRegressionMSE()
lse.fit(x, y)
exact = lse.w
lr = [LearningRate(),Decay(),Momentum()]
name = ['Constant', 'Decay', 'Momentum']
errors = []
for i, lr in enumerate(lr):
regressor = clf.LinearRegressionGD(lr)
regressor.fit(x, y)
errors.append(regressor.errors)
print("{} (steps={}, error={})".format(name[i], regressor.step, regressor.error))
def plot_error(errors):
fig = plt.figure(figsize=(15,4))
for i, error in enumerate(errors):
plt.subplot(1, 3, i+1)
it = list(range(len(error)))
y = error
plt.plot(it, y)
plt.xlabel('Iteration')
plt.ylabel('Error')
plt.title(label=name[i] + " ({})".format( str(len(error))))
plt.suptitle("Error vs Iteration")
plot_error(errors)
```
# Task 3
##### 3.1 Compare the runtime, stability, and performance of the closed-form linear regression and gradient descent approaches. For these experiments, it is fine to ignore the text features and only use the 3 simple features we provide. For gradient descent make sure you try out different learning rates and initializations! (And note that the learning rate might need to be very small...)
##### 3.2 Using either the closed-form approach or gradient descent, compare a model with no text features, a model that uses only the top-60 words, and a model that uses the full 160 word occurrence features. Are any of these models underfitting or overfitting?
##### 3.3 Using either the closed-form approach or gradient descent, demonstrate that the two new features you proposed improve performance on the validation set.
##### 3.4 Run your best-performing model on the test set
1. runtime: table of time/iterations
2. stability: singular matrix / Cross validation
3. performance: TODO: _train on test set / validation_
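Item 3.1 compares the closed-form approach with gradient descent. For reference, the closed-form approach solves the normal equations $(X^\top X)w = X^\top y$ directly; a minimal self-contained sketch of the underlying math (not the project's `LinearRegressionMSE` implementation):

```python
import numpy as np

def closed_form_fit(X, y):
    # Least-squares weights via the normal equations (X^T X) w = X^T y,
    # with a bias column appended to X.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
```

This is where stability enters the comparison: if `Xb.T @ Xb` is singular or ill-conditioned, the solve fails or amplifies noise, while gradient descent still produces a (slowly converging) answer.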
### Default hyperparameters
```
init = [True]#, False]
learning_rates = [10e-3, 1e-3, 5e-4,10e-5, 10e-7, 10e-8]
momentum = [.7, 0.8, 0.9, 0.999]
decay = [10e-2, 10e-3, 1e-4, 5e-4]
```
### Models
#### Model 1
Training with features:
1. is_root
2. controversiality
3. children
```
data_model_1 = 'default_notext'
best_model_1, results_model_1 = model.train_model(data_model_1, init, learning_rates, momentum, decay)
```
#### Model 2
Training with features:
1. is_root
2. controversiality
3. children
4. text: top 60
```
data_model_2 = 'default_top60'
best_model_2, results_model_2 = model.train_model(data_model_2, init, learning_rates, momentum, decay)
```
#### Model 3
Training with features:
1. is_root
2. controversiality
3. children
4. text: top 160
```
data_model_3 = 'default_top160'
best_model_3, results_model_3 = model.train_model(data_model_3, init, learning_rates, momentum, decay)
```
#### Model 4
Training with features:
1. children
2. $children^2$
3. text: top 160
```
data_model_4 = 'square'
best_model_4, results_model_4 = model.train_model(data_model_4, init, learning_rates, momentum, decay)
```
#### Model 5
Training with features:
1. children
2. text: top 160
```
data_model_5 = 'only_children_text'
best_model_5, results_model_5 = model.train_model(data_model_5, init, learning_rates, momentum, decay)
```
#### Model 6
Training with features:
1. is_root
2. children
3. text: top 160
```
data_model_6 = 'default_noroot'
best_model_6, results_model_6 = model.train_model(data_model_6, init, learning_rates, momentum, decay)
```
#### Model 7
Training with features:
1. is_root
2. controversiality
3. children
4. text: tfidf, top 160
```
data_model_7 = 'default_tfidf'
best_model_7, results_model_7 = model.train_model(data_model_7, init, learning_rates, momentum, decay)
```
#### Model 8
Training with features:
1. is_root
2. controversiality
3. children
4. text: feelings, top 160
```
data_model_8 = 'default_feeling'
best_model_8, results_model_8 = model.train_model(data_model_8, init, learning_rates, momentum, decay)
```
#### Model 9
Training with features:
1. children
```
data_model_9 = 'only_children'
best_model_9, results_model_9 = model.train_model(data_model_9, init, learning_rates, momentum, decay)
```
#### Model 10
Training with features:
1. children
2. $children^2$
```
data_model_10 = 'only_children_square'
best_model_10, results_model_10 = model.train_model(data_model_10, init, learning_rates, momentum, decay)
```
#### Model 11
Training with features:
1. children
2. $children^2$
3. $children^3$
```
data_model_11 = 'only_cube'
best_model_11, results_model_11 = model.train_model(data_model_11, init, learning_rates, momentum, decay)
```
#### Model 12
Training with features:
1. children
2. $children^2$
3. $children^3$
4. $children^4$
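Powers of the children count like these can be stacked into a feature matrix; a sketch (hypothetical helper — the project's `data_process` module presumably does the equivalent when building these datasets):

```python
import numpy as np

def children_powers(children, degree):
    # Build a feature matrix [c, c^2, ..., c^degree] from a 1-D array.
    c = np.asarray(children, dtype=float)
    return np.column_stack([c ** d for d in range(1, degree + 1)])
```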
```
data_model_12 = 'only_fourth'
best_model_12, results_model_12 = model.train_model(data_model_12, init, learning_rates, momentum, decay)
```
##### $children^3$ vs $children^4$
```
X_train, y_train, X_val, y_val, X_test, y_test = load_data('only_cube')
plt.plot(X_train[:, 0], y_train, 'o')
f = lambda x1, x2, x3: best_model_11.w[0] * x1 + best_model_11.w[1] * x2 + best_model_11.w[2] * x3 + best_model_11.w[3]
plt.plot(X_train[:, 0], f(X_train[:, 0], X_train[:, 1], X_train[:, 2]), 'ro')
X_train, y_train, X_val, y_val, X_test, y_test = load_data('only_fourth')
f = lambda x1, x2, x3, x4: best_model_12.w[0] * x1 + best_model_12.w[1] * x2 + best_model_12.w[2]*x3+ best_model_12.w[3]*x4 + best_model_12.w[4]
plt.plot(X_train[:, 0], f(X_train[:, 0], X_train[:, 1], X_train[:, 2], X_train[:, 3]), 'go')
```
#### Model 13
Training with features:
1. children
2. text: stop words, top 120
```
data_model_13 = 'stopwords_children_120'
best_model_13, results_model_13 = model.train_model(data_model_13, init, learning_rates, momentum, decay)
```
#### Model 14
Training with features:
1. children
2. text: stopwords, top 50
```
data_model_14 = 'stopwords_children_50'
best_model_14, results_model_14 = model.train_model(data_model_14, init, learning_rates, momentum, decay)
```
#### Model 15
Training with features:
1. children
2. text: stopwords, top 50
3. len text
```
data_model_15 = 'stopwords_len_top50'
best_model_15, results_model_15 = model.train_model(data_model_15, init, learning_rates, momentum, decay)
```
#### Model 16
Training with features:
1. children
2. text: top 50
3. len text
```
data_model_16 = 'len_top50'
best_model_16, results_model_16 = model.train_model(data_model_16, init, learning_rates, momentum, decay)
```
#### Model 17
Training with features:
1. children
2. text: top 10
3. len text
4. len sentence
5. sentiment positive
6. sentiment neutral
7. sentiment negative
8. sentiment compound
```
data_model_17 = 'children_sentiment_top10'
best_model_17, results_model_17 = model.train_model(data_model_17, init, learning_rates, momentum, decay)
```
#### Model 18
Training with features:
1. children
2. len text
3. len sentence
4. sentiment positive
5. sentiment neutral
6. sentiment negative
7. sentiment compound
```
data_model_18 = 'children_sentiment'
best_model_18, results_model_18 = model.train_model(data_model_18, init, learning_rates, momentum, decay)
```
#### Model 19
Training with features:
1. children
2. $children^2$
3. len_text
4. sentiment_neg
5. sentiment_neu
6. sentiment_pos
7. text: top57
```
data_model_19 = 'most_important_features'
best_model_19, results_model_19 = model.train_model(data_model_19, init, learning_rates, momentum, decay)
```
#### Best models found
features:
##### model with feature 1
1. text: top 62
2. is_root
3. controversiality
4. children
5. len_text
##### model with feature 2
1. text: top 57
2. is_root
3. controversiality
4. children
5. $children^2$
##### model with feature 1 and 2
1. text: top 57, 60, 62
2. is_root
3. controversiality
4. children
5. $children^2$
6. len_text
```
import metric # provides metric.mse, used below
best_validation_mse_found = 0.9895357003238057
best_models = pd.DataFrame(
columns=['algorithm', 'mse train', 'mse val', 'init zero', 'iterations', 'time', 'lr method', 'lr', 'b',
'model'])
best_models_found = ["lenght_text", 'children_all', 'best_combination', 'best_combination1', 'best_combination2']
for m in best_models_found:
X_train, y_train, X_val, y_val, X_test, y_test = load_data(m)
test_model_closed = clf.LinearRegressionMSE()
test_model_closed.fit(X_train, y_train)
y_pred = test_model_closed.pred(X_train)
y_pred_val = test_model_closed.pred(X_val)
y_pred_test = test_model_closed.pred(X_test)
mse_train = metric.mse(y_train, y_pred)
mse_val = metric.mse(y_val, y_pred_val)
    print("mse train:", mse_train, "mse validation:", mse_val, "improvement:", best_validation_mse_found - mse_val,
          (best_validation_mse_found - mse_val) > .005)
best_models = best_models.append(
{'algorithm': 'closed-form', 'mse train': mse_train, 'mse val': mse_val,
'time': 'NA', 'model': m}, ignore_index=True)
```
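`clf.LinearRegressionMSE` above is the project's own class, whose source is not reproduced in this notebook. As a reference point only, a minimal closed-form least-squares fit (a sketch under the assumption that the class solves the normal equations; the class name and synthetic data below are illustrative, not the project's) can look like this:

```python
import numpy as np

class ClosedFormLinearRegression:
    """Sketch of a closed-form MSE fit: solve the normal equations X^T X w = X^T y."""

    def fit(self, X, y):
        # np.linalg.lstsq is numerically safer than inverting X^T X explicitly
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return self

    def pred(self, X):
        return X @ self.w

# synthetic check: recover known weights from noisy data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.01, size=200)

model = ClosedFormLinearRegression().fit(X, y)
mse = np.mean((y - model.pred(X)) ** 2)  # should be close to the noise variance
```

Unlike gradient descent, this has no learning rate or iteration count to tune, which is why it serves as the baseline in the timing comparison later in the notebook.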
### Results
Save results for comparison
```
result_models_all = [results_model_1, results_model_2, results_model_3, results_model_4, results_model_5,
                     results_model_6, results_model_7, results_model_8, results_model_9, results_model_10,
                     results_model_11, results_model_12, results_model_13, results_model_14,
                     results_model_15, results_model_16, results_model_17, results_model_18, results_model_19]
result_models_ass = [results_model_1, results_model_2, results_model_3]
result_all = pd.concat(result_models_all).sort_values(['mse val'])
result_ass = pd.concat(result_models_ass).sort_values(['mse val'])
result_all.to_csv('result/all_data.csv')
result_ass.to_csv('result/main_data.csv')
result_all.head(25)
result_ass.head(10)
best_models.sort_values(by=['mse val'])
```
### Run on test data
Select the three top models
```
import metric
# file = ['default_top60', 'default_top160', 'most_important_features']
# file = ["lenght_text", 'children_all', 'best_combination', 'best_combination1',
# 'best_combination2', 'most_important_features']
file = ['children_all', 'best_combination1']
for f in file:
    X_train, y_train, X_val, y_val, X_test, y_test = load_data(f)
    # train model with selected hyperparameters
    # test_model = clf.LinearRegressionGD(Momentum(0.01, 0.999))
    test_model = clf.LinearRegressionMSE()
    # test_model.fit(X_train, y_train, verbose=False, max_iter=10000000, tol=5e-6)
    test_model.fit(X_train, y_train)
    # evaluate model on the validation and test sets
    y_pred_train = test_model.pred(X_train)
    y_pred_val = test_model.pred(X_val)
    y_pred_test = test_model.pred(X_test)
    # evaluate
    mse_train = metric.mse(y_train, y_pred_train)
    mse_val = metric.mse(y_val, y_pred_val)
    mse_test = metric.mse(y_test, y_pred_test)
    print("%s: mse train: %s, mse val: %s, mse test: %s" % (f, mse_train, mse_val, mse_test))
    # test_model.plot_error()
    # print(test_model.step)
```
#### Analyze gradient descent and closed form
```
import time

X_train, y_train, X_val, y_val, X_test, y_test = load_data('default_notext')
test_model_closed = clf.LinearRegressionMSE()
start = time.time()
test_model_closed.fit(X_train, y_train)
end = time.time()
time_closed_form = end - start
rates = [.5, .3, .2, .1, .08, .05, .02, .01, .008, .004, .001, .0005, .0001, .00001]
error = []
times = []
iterations = []
for i in rates:
    test_model = clf.LinearRegressionGD(Momentum(i, 0.9))
    start = time.time()
    test_model.fit(X_train, y_train, verbose=False, max_iter=10000000, tol=1e-7)
    end = time.time()
    iterations.append(test_model.step)
    times.append(end - start)
    error.append(test_model.w - test_model_closed.w)
# iterations_norm = (np.array(iterations) - np.min(iterations)) / (np.max(iterations) - np.min(iterations))
# # error_norm = (np.array(error) - np.min(error)) / (np.max(error) - np.min(error))
error_mean = [e.mean() for e in error]
# error_norm = (np.array(error_mean) - np.min(error_mean)) / (np.max(error_mean) - np.min(error_mean))
# time_norm = (np.array(times) - np.min(times)) / (np.max(times) - np.min(times))
# # plt.plot(rates[2:10], iterations_norm[2:10] * 70)
# plt.plot(rates[2:10], error_norm[2:10] * 10)
# plt.plot(rates[2:10], time_norm[2:10])
# plt.plot([0, .2], [time_closed_form, time_closed_form])
# plt.xlabel('learning rate')
# plt.legend(['gd error', 'gd time', 'closed-form time'])
fig = plt.figure(figsize=(15,4))
plt.subplot(1,2,1)
plt.plot(rates[2:10], times[2:10])
plt.plot([0, .2], [time_closed_form, time_closed_form])
plt.xlabel('learning rate')
plt.ylabel('time (s)')
plt.legend(['gradient descent', 'closed-form'])
plt.subplot(1,2,2)
plt.plot(rates[2:10], error_mean[2:10])
plt.xlabel('learning rate')
plt.ylabel('precision (1e-7)')
import metric
X_train, y_train, X_val, y_val, X_test, y_test = load_data('most_important_features')
test_model_closed = clf.LinearRegressionMSE()
test_model_closed.fit(X_train, y_train)
y_pred = test_model_closed.pred(X_train)
y_pred_val = test_model_closed.pred(X_val)
y_pred_test = test_model_closed.pred(X_test)
mse_train = metric.mse(y_train, y_pred)
mse_val = metric.mse(y_val, y_pred_val)
mse_test = metric.mse(y_test, y_pred_test)
print(mse_train,mse_val,mse_test)
# test_model_closed.w
```
##### See residuals
```
plt.plot(y_test, y_pred_test, 'o')
plt.plot(y_train, y_pred, 'o')
```
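The plot above compares predictions with targets. A complementary diagnostic is to inspect the residuals $y - \hat{y}$ themselves: a structureless residual cloud suggests the linear model is adequate. The sketch below uses synthetic data, since the notebook's `y_test` and `y_pred_test` come from earlier cells and the project's own classes:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.normal(size=300)])  # intercept column + one feature
y = X @ np.array([2.0, 0.5]) + rng.normal(scale=0.3, size=300)

# closed-form least-squares fit, then residuals
w, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ w

# with an intercept term, least-squares residuals sum to (numerically) zero,
# and their spread estimates the noise level
print(residuals.mean(), residuals.std())
# plt.plot(X[:, 1], residuals, 'o')  # residuals vs. feature, if plotting is available
```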
---
<a href="https://colab.research.google.com/github/john-s-butler-dit/Basic-Introduction-to-Python/blob/master/W1T3%20The%20Psychometric%20Function.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# The Psychometric Function Week 1, Tutorial 3
In this notebook we will show some of the basics of plotting and accessing elements of a vector (array) of numbers, using the psychometric function, also known as a cumulative Gaussian.
### Libraries
```
# LIBRARY
import numpy as np # vector manipulation
from scipy.stats import norm # Psychometric Function
# THIS IS FOR PLOTTING
%matplotlib inline
import matplotlib.pyplot as plt # side-stepping mpl backend
import warnings
warnings.filterwarnings("ignore")
```
## A Single Psychometric Function
The code below will plot a psychometric function for temporal discrimination from 0 to 100 ms interstimulus interval, with a Point of Subjective Equality (PSE, mean $\mu$) of 50 and a Just Noticeable Difference (JND, standard deviation $\sigma$) of 10.
The value 0 indicates the participant saw two synchronous lights; a value of 1 indicates the participant saw two asynchronous lights (Butler et al. 2015).
Now we define a range of x values starting at 0 and ending at 100 in unit steps. To do this we use the __numpy__ library function __arange__.
```
ISI=np.arange(0,101,1) # INTERSTIMULUS INTERVAL (ms)
print(ISI)
```
To print the first element of the ISI range, use the command print(ISI[0]).
```
print(ISI[0])
```
To plot the psychometric function we use the function __norm.cdf__ from the __scipy.stats__ library.
```
PSE=50
JND=10
TDT_fun= norm.cdf(ISI,PSE,JND)
print(TDT_fun)
```
To plot the result we use __plt.plot__ from the __matplotlib__ library.
```
fig = plt.figure(figsize=(6,4)) # This sets up the size of the figure
plt.plot(ISI,TDT_fun,'-',color='black')
plt.show() # This plots the figure
```
To plot the Gaussian (probability density) underlying the psychometric function we use the function __norm.pdf__ from the __scipy.stats__ library.
```
gaussian= norm.pdf(ISI,PSE,JND)
fig = plt.figure(figsize=(6,4)) # This sets up the size of the figure
plt.plot(ISI,gaussian,'-',color='black')
plt.show() # This plots the figure
```
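A quick numerical check (not part of the original tutorial) makes the pdf/cdf relationship concrete: cumulatively integrating __norm.pdf__ recovers __norm.cdf__ up to discretisation error, which is why the psychometric function is called a cumulative Gaussian.

```python
import numpy as np
from scipy.stats import norm

ISI = np.arange(0, 101, 1)
PSE, JND = 50, 10

pdf = norm.pdf(ISI, PSE, JND)
cdf = norm.cdf(ISI, PSE, JND)

# trapezoidal cumulative integral of the pdf, starting from cdf(0) (essentially 0 here)
approx_cdf = cdf[0] + np.concatenate([[0.0], np.cumsum((pdf[1:] + pdf[:-1]) / 2)])
max_err = np.abs(approx_cdf - cdf).max()  # small: the integrated pdf matches the cdf
```

Note also that the cdf passes through 0.5 exactly at the PSE, which is what makes the PSE the point of subjective equality.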
## Problem 1
Re-do the plot of the psychometric function with a PSE of 40 and JND of 30.
```
##############################################################################
## INSERT: Re-do the plot of the psychometric function with a PSE of 40 and JND of 30.
##############################################################################
#######################ANSWER#################################
PSE =40
JND=30
ISI=np.arange(0,101,1) # INTERSTIMULUS INTERVAL (ms)
#######################ANSWER#################################
psychometric= norm.cdf(ISI,PSE,JND)
fig = plt.figure(figsize=(6,6))
plt.plot(ISI,psychometric,'-',color='red')
plt.show()
```
## Problem 2
Re-do the plot but with a different coloured line.
```
fig = plt.figure(figsize=(6,6))
##############################################################################
## INSERT: change the plot function to plot a different coloured line.
##############################################################################
#######################ANSWER#################################
plt.plot(ISI,psychometric,'-',color='indigo')
#######################ANSWER#################################
plt.show()
```
## Problem 3
What is the probability that someone will see the two flashing lights as asynchronous if the interstimulus interval is 60 ms, given a PSE of 45 and a JND of 15?
```
PSE=45
JND=15
psychometric= norm.cdf(ISI,PSE,JND)
fig = plt.figure(figsize=(6,6))
plt.plot(ISI,psychometric,'-',color='blue')
plt.xlabel('Interstimulus Interval (ms)')
plt.ylabel('Proportion of Different Responses')
plt.show()
ISI==60.0 # FIND THE ISI VALUE THAT EQUALS 60
print(psychometric[ISI==60.0])
```
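As a sanity check (not in the original tutorial), the same probability can be computed without building the whole curve, by evaluating __norm.cdf__ at 60 directly:

```python
from scipy.stats import norm

PSE = 45
JND = 15
# probability of a "different" response at a 60 ms interstimulus interval
p_different = norm.cdf(60, PSE, JND)
print(p_different)  # about 0.84: 60 ms is exactly one JND (standard deviation) above the PSE
```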
---
# Summary
In this tutorial, we learned:
* To plot a psychometric (cumulative Gaussian) function.
* To read a value off a function using boolean indexing.
## Reference
Butler, John S., et al. "Non-parametric bootstrapping method for measuring the temporal discrimination threshold for movement disorders." Journal of neural engineering 12.4 (2015): 046026.
---
Parallel Map on Files
------------------------
For each of a set of filenames, we parse JSON data contents, load that data into a Pandas DataFrame, and then output the result to another file with a nicer format, HDF5.
We find that parsing JSON is slow and so we parallelize the process using the [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html) module to do this work in multiple processes.
### Objectives
* Profile code to find bottleneck
* Use `concurrent.futures` to `map` a function across many inputs in parallel
### Requirements
* Pandas
* concurrent.futures (standard in Python 3, `pip install futures` in Python 2)
* snakeviz (for profile visualization, `pip install snakeviz`)
### Extra Exercise
Try out alternative binary formats. Perhaps try [feather](https://github.com/wesm/feather).
## Before we start
We need to get some data to work with.
We generate some fake stock data by adding a bunch of points between real stock data points. This will take a few minutes the first time we run it. If you already ran `python prep.py` when going through the README then you can skip this step.
```
%run ../prep.py
```
## Sequential Execution
```
%load_ext snakeviz
from glob import glob
import json
import pandas as pd
import os
filenames = sorted(glob(os.path.join('..', 'data', 'json', '*.json'))) # ../data/json/*.json
filenames[:5]
%%snakeviz
for fn in filenames:
    print(fn)
    with open(fn) as f:
        data = [json.loads(line) for line in f]
    df = pd.DataFrame(data)
    out_filename = fn[:-5] + '.h5'
    df.to_hdf(out_filename, '/data')
```
Parallel Execution
--------------------
We can process each file independently and in parallel. To accomplish this we'll transform the body of our for loop into a function and then use the [concurrent.futures.ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#executor-objects) to apply that function across all of the filenames in parallel using multiple processes.
### Before
Whenever we have code like the following:
```python
results = []
for x in L:
    results.append(f(x))
```
or the following:
```python
results = [f(x) for x in L]
```
or the following:
```python
results = list(map(f, L))
```
### After
We can instead write it as the following:
```python
from concurrent.futures import ProcessPoolExecutor
e = ProcessPoolExecutor()
results = list(e.map(f, L))
```
### Example
```
%%time
### Sequential code
import time
results = []
for i in range(8):
    time.sleep(1)
    results.append(i + 1)
%%time
### Parallel code
# from concurrent.futures import ProcessPoolExecutor
from loky import ProcessPoolExecutor # for Windows users
e = ProcessPoolExecutor()
def slowinc(x):
    time.sleep(1)
    return x + 1
results = list(e.map(slowinc, range(8)))
```
### Exercise: Convert JSON data to HDF5 in parallel using `concurrent.futures.Executor.map`
```
import json
%%time
### Sequential code
for fn in filenames:
    with open(fn) as f:
        data = [json.loads(line) for line in f]
    df = pd.DataFrame(data)
    out_filename = fn[:-5] + '.h5'
    df.to_hdf(out_filename, '/data')
%%time
# try replacing %%time with %%snakeviz when everything's working
# to get a profile
### Parallel code (the solution %load-ed at the end of this cell is the answer)
def json2hdf5(fn):
    with open(fn) as f:
        data = [json.loads(line) for line in f]
    df = pd.DataFrame(data)
    out_filename = fn[:-5] + '.h5'
    df.to_hdf(out_filename, '/data')

e = ProcessPoolExecutor()
list(e.map(json2hdf5, filenames))  # consume the iterator so all work (and any errors) completes inside %%time
%load solutions/map-1.py
```
Try visualizing your parallel version with `%%snakeviz`. Where does it look like it's spending all its time?
Parallelism isn't everything
--------------------------------
We get a moderate increase in performance when using multiple processes. However parallelism isn't the only way to accelerate this computation. Recall that the bulk of the cost comes from the `json.loads` function. A quick internet search on "fast json parsing in python" yields the [ujson](https://pypi.python.org/pypi/ujson) library as the top hit.
Knowing about and importing the optimized `ujson` library is just as effective as multi-core execution.
```
import ujson
%%time
filenames = sorted(glob(os.path.join('..', 'data', 'json', '*.json')))
for fn in filenames:
    with open(fn) as f:
        data = [ujson.loads(line) for line in f]
    df = pd.DataFrame(data)
    out_filename = fn[:-5] + '.h5'
    df.to_hdf(out_filename, '/data')
```
History: multiprocessing.Pool
--------------------------------
Previously people have done multi-processing computations with the `multiprocessing.Pool` object, which behaves more or less identically.
However, today most library designers are coordinating around the `concurrent.futures` interface, so it's wise to move over.
```
%%time
from multiprocessing import Pool

p = Pool()
# load_parse_store is presumably defined by the solution loaded earlier (solutions/map-1.py)
list(p.map(load_parse_store, filenames))
```
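If you do use `multiprocessing.Pool`, the context-manager form (a small sketch, unrelated to the JSON task above) ensures the worker processes are closed and joined automatically. On Windows and macOS, which use the spawn start method, pool creation must additionally live under an `if __name__ == "__main__":` guard.

```python
import math
from multiprocessing import Pool

# the with-block tears down the worker processes when it exits
with Pool(processes=2) as p:
    results = list(p.map(math.sqrt, [1, 4, 9]))
print(results)
```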
Conclusion
-----------
* Used `snakeviz` to profile code
* Used `concurrent.futures.ProcessPoolExecutor` for simple parallelism across many files
* Gained some speed boost (but not as much as expected)
* Lost ability to diagnose performance within parallel code
* Describing each task as a function call helps use tools like map for parallelism
* Saw that other options than parallelism exist to speed up code, including the `ujson` library.
* Making your tasks fast is often at least as important as parallelizing your tasks.