# Demo 2:
The objectives of this demo are as follows:
- Simulate a single stochastic trajectory of the Ricker model going through a Flip bifurcation
- Compute bootstrapped versions of segments of the time-series over a rolling window
- Compute EWS of the bootstrapped time-series
- Compute and display confidence intervals of the ensemble of EWS
- Run time < 3min
## Import the standard Python libraries and ewstools
```
# We will require the following standard Python packages for this analysis
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# This is the package we use to compute the early warning signals
import ewstools.core as ewstools
```
## Simulate the Ricker model
Here we simulate a single trajectory of the Ricker model going through a Flip bifurcation. We will use this data to demonstrate the process of computing EWS. Alternatively, you could import your own data here. The important thing is that we end up with a pandas DataFrame indexed by time.
**Set simulation parameters**
```
dt = 1 # time-step (using 1 since discrete-time system)
t0 = 0 # starting time
tmax = 1000 # end time
tburn = 100 # burn-in period preceding start-time
seed = 0 # random number generation seed (set for reproducibility)
```
**Define model**
We use the Ricker model with a Holling Type II harvesting term and additive white noise. It is given by
$$ N_{t+1} = N_t e^{r(1-N_t/K) + \sigma\epsilon_t} - F\frac{N_t^2}{N_t^2 + h^2}$$
where $N_t$ is the population size at time $t$, $r$ is the intrinsic growth rate, $K$ is the carrying capacity, $F$ is the maximum rate of harvesting, $h$ is the half saturation constant of the harvesting term, $\sigma$ is the noise amplitude, and $\epsilon_t$ is a normal random variable with zero mean and unit variance.
```
# Define the model
def de_fun(x, r, k, f, h, xi):
    return x*np.exp(r*(1-x/k) + xi) - f*x**2/(x**2 + h**2)
```
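As a quick sanity check on the deterministic skeleton of this map (noise and harvesting switched off, i.e. $\sigma = 0$ and $F = 0$), the fixed point $N^* = K$ loses stability in a period-doubling (Flip) bifurcation at $r = 2$, consistent with the bifurcation point used below. A minimal sketch, independent of the simulation code in this notebook:

```python
import numpy as np

def ricker(x, r, k=10):
    """Deterministic Ricker map (noise and harvesting switched off)."""
    return x * np.exp(r * (1 - x / k))

def attractor(r, n_transient=5000, n_keep=8):
    """Iterate past transients; return the distinct values visited."""
    x = 0.8
    for _ in range(n_transient):
        x = ricker(x, r)
    orbit = set()
    for _ in range(n_keep):
        x = ricker(x, r)
        orbit.add(round(x, 4))
    return sorted(orbit)

print(attractor(1.8))  # one value: the stable fixed point N* = K = 10
print(attractor(2.2))  # two values: a period-2 cycle beyond the flip at r = 2
```

Below the bifurcation the orbit settles on $N^* = K$; just above it, the orbit alternates between two values.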
**Set model parameters**
```
f = 0 # harvesting rate
k = 10 # carrying capacity
h = 0.75 # half-saturation constant of harvesting function
bl = 0.5 # bifurcation parameter (growth rate) low
bh = 2.3 # bifurcation parameter (growth rate) high
bcrit = 2 # bifurcation point (computed using XPPAUT)
sigma = 0.02 # noise intensity
x0 = 0.8 # initial condition
```
**Initialisation**
```
# Initialise arrays for time and state values
t = np.arange(t0, tmax, dt)
x = np.zeros(len(t))
# Set the random seed for reproducibility
np.random.seed(seed)
# Bifurcation parameter values (increasing linearly in time)
b = pd.Series(np.linspace(bl, bh, len(t)), index=t)
# Compute the time at which the bifurcation is first crossed
tcrit = b[b > bcrit].index[0]
# Arrays of noise values (normal random variables with variance sigma^2 dt)
dW_burn = np.random.normal(loc=0, scale=sigma*np.sqrt(dt), size=int(tburn/dt))  # burn-in period
dW = np.random.normal(loc=0, scale=sigma*np.sqrt(dt), size=len(t))  # monitored period
```
**Run simulation**
```
# Run burn-in period starting from initial condition x0
for i in range(int(tburn/dt)):
    x0 = de_fun(x0, bl, k, f, h, dW_burn[i])
# State value post burn-in period. Set as starting value.
x[0] = x0
# Run simulation using recursion
for i in range(len(t)-1):
    x[i+1] = de_fun(x[i], b.iloc[i], k, f, h, dW[i])
    # Make sure that the state variable stays >= 0
    if x[i+1] < 0:
        x[i+1] = 0
# Store array data in a DataFrame indexed by time
sim_data = {'Time': t, 'x': x}
df_traj = pd.DataFrame(sim_data)
df_traj.set_index('Time', inplace=True)
```
We now have a DataFrame df_traj, with our trajectory, indexed by time. We can check it out with a simple plot, using the command
```
df_traj.plot();
```
## Bootstrap the time-series over a rolling window
To obtain a more reliable estimate of the statistical metrics that constitute EWS in this system, we bootstrap the detrended time-series within each position of the rolling window. Specifically, we use a block-bootstrapping method in which blocks of points are sampled randomly with replacement. The size of each block is drawn from an exponential distribution with a chosen mean. The block sizes should be large enough to retain the significant temporal correlations in the time-series.
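The idea can be sketched in a few lines: blocks start at uniformly random positions and block lengths are drawn from a geometric (discretised exponential) distribution with the chosen mean, so each resample preserves short-range correlation. This is an illustrative standalone implementation, not the *ewstools* internals:

```python
import numpy as np

def stationary_bootstrap(series, mean_block_size=20, seed=None):
    """Draw one stationary-bootstrap resample of a 1-D series.

    Blocks start at uniformly random positions and block lengths are
    geometric with mean `mean_block_size`, so short-range temporal
    correlation inside each block is preserved.
    """
    rng = np.random.default_rng(seed)
    n = len(series)
    out = np.empty(n)
    i = 0
    while i < n:
        start = rng.integers(n)                        # random block start
        length = rng.geometric(1.0 / mean_block_size)  # random block length
        for j in range(length):
            if i == n:
                break
            out[i] = series[(start + j) % n]           # wrap around the end
            i += 1
    return out

# Resample a toy correlated series
x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(0).standard_normal(500)
sample = stationary_bootstrap(x, mean_block_size=20, seed=1)
print(sample.shape)
```

Every value in the resample is a value of the original series, but the ordering is shuffled at the block level.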
**Set bootstrapping parameters**
```
rw = 0.4 # rolling window
span = 0.5 # Lowess span
block_size = 20 # characteristic size of blocks used to resample time-series
bs_type = 'Stationary' # type of bootstrapping
n_samples = 3 # number of bootstrapping samples to take
roll_offset = 20 # rolling window offset
```
**Compute block-bootstrapped samples**
We now construct a DataFrame of bootstrapped samples of the time-series, using the function *roll_bootstrap* within the *ewstools* package. Note that documentation of each function can be obtained using help(*function_name*).
```
df_samples = ewstools.roll_bootstrap(df_traj['x'],
                                     span=span,
                                     roll_window=rw,
                                     roll_offset=roll_offset,
                                     upto=tcrit,
                                     n_samples=n_samples,
                                     bs_type=bs_type,
                                     block_size=block_size)
```
For illustration, here are the 3 bootstrapped samples from the time-series within the rolling window at $t=459$.
```
df_samples.loc[459].loc[1:3]['x'].unstack(level=0).plot();
```
## Compute EWS of the ensemble of bootstrap time-series
Now we send each bootstrapped time-series through *ews_compute*. Note that detrending and extracting segments of the time-series has already been done, so there is no need to smooth the bootstrapped data, or use a rolling window.
**EWS parameters**
```
ews = ['var','ac','smax','aic']
lags = [1,2,3] # autocorrelation lag times
ham_length = 40 # number of data points in Hamming window
ham_offset = 0.5 # proportion of Hamming window to offset by upon each iteration
pspec_roll_offset = 20 # offset for rolling window when doing spectrum metrics
sweep = False # whether to sweep over optimisation parameters (boolean)
```
**Initialisation**
```
# List to store EWS DataFrames
list_df_ews = []
# List to store power spectra
list_pspec = []
# Extract time and sample values to loop over
# Time values
tVals = np.array(df_samples.index.levels[0])
# Sample values
sampleVals = np.array(df_samples.index.levels[1])
```
**Run ews_compute for each bootstrapped sample (takes a few minutes)**
```
# Loop over time (at end of rolling window)
for t in tVals:
    # Loop over samples
    for sample in sampleVals:
        # Extract series for this time and sample number
        series_temp = df_samples.loc[t].loc[sample]['x']
        ews_dic = ewstools.ews_compute(series_temp,
                                       roll_window=1,  # effectively no rolling window
                                       band_width=1,   # effectively no detrending
                                       ews=ews,
                                       lag_times=lags,
                                       upto='Full',
                                       ham_length=ham_length,
                                       ham_offset=ham_offset,
                                       sweep=sweep)
        # The DataFrame of EWS
        df_ews_temp = ews_dic['EWS metrics']
        # Include columns for sample value and realtime
        df_ews_temp['Sample'] = sample
        df_ews_temp['Time'] = t
        # Drop NaN values
        df_ews_temp = df_ews_temp.dropna()
        # Append to list_df_ews
        list_df_ews.append(df_ews_temp)
        # Output the power spectrum for just one of the samples (otherwise large file size)
        df_pspec_temp = ews_dic['Power spectrum'][['Empirical']].dropna()
        list_pspec.append(df_pspec_temp)
    # Print update
    print('EWS for t=%.2f complete' % t)
# Concatenate EWS DataFrames. Index [Realtime, Sample]
df_ews_boot = pd.concat(list_df_ews).reset_index(drop=True).set_index(['Time','Sample'])
df_pspec_boot = pd.concat(list_pspec)
```
## Plot EWS with 95% confidence intervals
We use the Seaborn package here to make plots of the ensemble EWS as mean values with 95% confidence intervals.
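Seaborn's `relplot(kind='line')` aggregates the repeated observations at each time point into a mean with a 95% confidence interval by default. Equivalently, summary bands can be computed directly with pandas; a self-contained sketch with toy data standing in for `df_ews_boot` (note Seaborn's default is a bootstrap CI of the mean, so percentile bands like these will differ slightly):

```python
import numpy as np
import pandas as pd

# Toy stand-in for df_ews_boot: 3 bootstrap samples at each of 4 time points
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    'Time': np.repeat([100, 200, 300, 400], 3),
    'Sample': np.tile([1, 2, 3], 4),
    'Variance': rng.random(12),
})

# Mean and 2.5%/97.5% percentiles of the ensemble at each time point
bands = toy.groupby('Time')['Variance'].agg(
    mean='mean',
    lower=lambda s: s.quantile(0.025),
    upper=lambda s: s.quantile(0.975),
)
print(bands)
```

The resulting `bands` frame has one row per time point and can be plotted with `fill_between` if you prefer full control over the band.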
**Variance**
```
sns.relplot(x='Time', y='Variance',
            data=df_ews_boot.reset_index()[['Time', 'Variance']],
            kind='line',
            height=3,
            aspect=2);
```
**Autocorrelation**
```
# Structure the data for Seaborn
plot_data = df_ews_boot.reset_index()[['Time', 'Lag-1 AC', 'Lag-2 AC', 'Lag-3 AC']].melt(
    id_vars='Time',
    value_vars=('Lag-1 AC', 'Lag-2 AC', 'Lag-3 AC'),
    var_name='EWS',
    value_name='Magnitude')
sns.relplot(x='Time',
            y='Magnitude',
            hue='EWS',
            data=plot_data,
            kind='line',
            height=3,
            aspect=2);
```
**Smax**
```
sns.relplot(x='Time', y='Smax',
            data=df_ews_boot.reset_index()[['Time', 'Smax']],
            kind='line',
            height=3,
            aspect=2);
```
**AIC weights**
```
# Structure the data for Seaborn
plot_data = df_ews_boot.reset_index()[['Time', 'AIC fold', 'AIC hopf', 'AIC null']].melt(
    id_vars='Time',
    value_vars=('AIC fold', 'AIC hopf', 'AIC null'),
    var_name='EWS',
    value_name='Magnitude')
sns.relplot(x='Time',
            y='Magnitude',
            hue='EWS',
            data=plot_data,
            kind='line',
            height=3,
            aspect=2);
```
```
import os
import sys
import datetime as dt
#os.chdir("../Automated/")
#from DataGathering import RedditScraper
#from ChangePointAnalysis import ChangePointAnalysis
from NeuralNets import CreateNeuralNets
from matplotlib import pyplot as plt
import numpy as np
subreddit = "WallStreetBets"
```
# Data Scraping
For analyzing wallstreetbets data, we recommend downloading full.csv from [url] and putting it in ../Data/subreddit_wallstreetbets.
If you want to scrape a different subreddit, you can use the following cell. You will need an API.env file with appropriate credentials in /Automated/.
```
start = dt.datetime(2020, 1, 1)
end = dt.datetime(2020, 1, 30)
if not os.path.exists(f"../Data/subreddit_{subreddit}/full.csv"):
    print("Did not find scraped data, scraping.")
    RedditScraper.scrape_data(subreddits=[subreddit], start=start, end=end)
```
# Change Point Analysis
The next cell will open full.csv, compute the words that are among the top `daily_words` most popular words on any day, and then run the change-point analysis model on each of them.
The first time this is run, a cleaned-up version of the dataframe will be created for ease of processing.
```
up_to = 1 # Only calculate change points for up_to of the popular words. Set to None to do all of them.
daily_words = 2 # Get the daily_words most popular posts on each day.
# Compute the changepoints
ChangePointAnalysis.changepointanalysis([subreddit], up_to = up_to, daily_words = daily_words)
```
After running, these files will be in ../Data/subreddit_subreddit/Changepoints/Metropolis_30000Draws_5000Tune
(The final folder corresponds to the parameters of the Markov chain used by pymc3 for the inference.)
For instance:

### Brief explanation of how this works:
The Bayesian model is as follows:
1. A coin that lands heads with probability p is flipped.
2. If the coin comes up heads, then there is a change point. Otherwise, there is no change point.
3. It is assumed that the frequency random variable consists of independent draws from a beta distribution. If the coin decided there would be no change point, it is the same beta distribution at all times. Otherwise, it is a different beta on either side of the change point.

The posterior distribution of p is the model's confidence that there is a change point, and the posterior distribution of tau represents its guess about when it occurred.
Of course, this is not a realistic picture of the process; the independence of the different draws from the betas is especially unlike the data. However, it appears to be good enough to discover change points.
As currently written, the model only handles a single change point, though this could be improved.
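The generative story above can be simulated directly with numpy (no pymc3 needed). The beta parameters below are illustrative, not the ones used in the analysis:

```python
import numpy as np

def simulate_frequencies(n_days=100, p_change=0.5, rng=None):
    """Simulate one word's daily frequency series under the change-point model."""
    rng = np.random.default_rng(rng)
    has_change = rng.random() < p_change         # step 1: flip the coin
    if not has_change:
        # step 3a: the same beta distribution at all times
        return rng.beta(2, 50, size=n_days), None
    tau = rng.integers(1, n_days)                # step 2: change-point location
    before = rng.beta(2, 50, size=tau)           # step 3b: different betas on
    after = rng.beta(10, 50, size=n_days - tau)  # either side of tau
    return np.concatenate([before, after]), tau

freqs, tau = simulate_frequencies(rng=42)
print(len(freqs), tau)
```

Inference then inverts this process: given an observed frequency series, the sampler explores which coin outcome, tau, and beta parameters best explain it.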
# Neural Nets
The following code will train a neural net that predicts, given a submission's title text and time of posting, whether that submission's score will be above the median.
We use pre-trained GloVe word embeddings in order to convert the title text into a vector that can be used in the neural net. These word embeddings are tuned along with the model parameters as the model is being trained.
This technique and the neural net's architecture are taken from a blog post of Max Woolf, https://minimaxir.com/2017/06/reddit-deep-learning/.
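The embedding step can be sketched as follows: each title token indexes a row of an embedding matrix (in the real model initialised from GloVe and fine-tuned with the rest of the network), and the resulting vectors feed the downstream layers. The vocabulary and dimensions here are toy values, not the `CreateNeuralNets` implementation:

```python
import numpy as np

# Toy vocabulary and a 4-dimensional embedding matrix standing in for GloVe
vocab = {'buy': 1, 'tsla': 2, 'gme': 3, 'moon': 4}  # index 0 reserved for padding
embeddings = np.random.default_rng(0).standard_normal((len(vocab) + 1, 4))
embeddings[0] = 0.0  # padding / unknown words embed to zeros

def encode(title, max_len=6):
    """Map a title to a fixed-length sequence of vocabulary indices."""
    ids = [vocab.get(w, 0) for w in title.lower().split()][:max_len]
    return ids + [0] * (max_len - len(ids))  # pad to max_len

def embed(title):
    """Look up one vector per token; the net consumes this (max_len, dim) array."""
    return embeddings[encode(title)]

vecs = embed('Buy TSLA to the moon')
print(vecs.shape)
```

Because the embedding matrix is an ordinary trainable weight, backpropagation adjusts the word vectors along with everything else during training.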
```
model, accuracies, word_tokenizer, df = CreateNeuralNets.buildnets(['wallstreetbets'])[0]
```
## Predicted popularity as a time series
We now show how the predicted popularity of a post depends on the day on which it was posted.
We plot the prediction for the same title, "GME GME GME GME GME GME", as if it were posted at noon each day.
It is interesting to note that the variance seems to decrease after the GameStop short squeeze of early 2021.
```
text = "GME GME GME GME GME GME"
CreateNeuralNets.timeseries(df, text, model, word_tokenizer)
```
This will produce a picture like the following:

## Workshopping example
Here we start with a potential title (to be posted at noon on April 1, 2021) and attempt to improve it based on the model's prediction.
```
# This is the date information for April 1, 2021.
# Note we normalize so that the earliest year in our data set (2020)
# and the earliest day of the year correspond to the number 0
input_hour = np.array([12])
input_dayofweek = np.array([3])
input_minute = np.array([0])
input_dayofyear = np.array([91])
input_year = np.array([0])
input_info = [input_hour, input_dayofweek, input_minute, input_dayofyear, input_year]

# Given a list of potential titles, predict the success of each one
def CheckPopularity(potential_titles):
    for title in potential_titles:
        print(model.predict([CreateNeuralNets.encode_text(title, word_tokenizer)] + input_info)[0][0][0])
potential_titles = ["Buy TSLA", "Buy TSLA! I like the stock", "Buy TSLA! Elon likes the stock",
                    "TSLA is the next GME. Elon likes the stock",
                    "TSLA is the next GME. To the moon! Elon likes the stock"]
CheckPopularity(potential_titles)
potential_titles = ["trump", "Buy TSLA! I like the stock", "Buy TSLA! Elon likes the stock",
                    "TSLA is the next GME. Elon likes the stock",
                    "TSLA is the next GME. To the moon! Elon likes the stock"]
for title in potential_titles:
    print(model.predict([CreateNeuralNets.encode_text(title, word_tokenizer)] + input_info)[0][0][0])

def GetPop(title):
    return model.predict([CreateNeuralNets.encode_text(title, word_tokenizer)] + input_info)[0][0][0]

word_dict = {}
for word in list(word_tokenizer.word_index)[:200]:
    word_dict[word] = GetPop(word)
word_dict
df.title
CheckPopularity(["Got help stick it to them!"])
df2 = df[:200]
df2 = df2.copy()
df2['predicted_pop'] = df2.title.apply(lambda x: GetPop(x))
min(df2.predicted_pop)
```
# Deep Learning with PyTorch
Author: [Anand Saha](http://teleported.in/)
### 5. Autoencoder: denoising images
Removing noise from images.

```
import torch
import torch.cuda as cuda
import torch.nn as nn
import matplotlib.pyplot as plt
import numpy as np
from torch.autograd import Variable
# Torchvision module contains various utilities, classes, models and datasets
# used towards computer vision usecases
from torchvision import datasets
from torchvision import transforms
from torch.nn import functional as F
```
**Create the datasets**
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
https://www.cs.toronto.edu/~kriz/cifar.html

```
transform=transforms.Compose([transforms.ToTensor()])
cifar10_train = datasets.CIFAR10('./data.cifar10', train=True, download=True, transform=transform)
cifar10_valid = datasets.CIFAR10('./data.cifar10', train=False, download=True, transform=transform)
```
**Utility to display the original, noisy and denoised image**
```
def show_img(orig, noisy, denoised):
    fig = plt.figure()
    # Move the channel axis to the end: (C, H, W) -> (H, W, C)
    orig = orig.swapaxes(0, 1).swapaxes(1, 2)
    noisy = noisy.swapaxes(0, 1).swapaxes(1, 2)
    denoised = denoised.swapaxes(0, 1).swapaxes(1, 2)
    # Normalize for display purposes
    orig = (orig - orig.min()) / (orig.max() - orig.min())
    noisy = (noisy - noisy.min()) / (noisy.max() - noisy.min())
    denoised = (denoised - denoised.min()) / (denoised.max() - denoised.min())
    fig.add_subplot(1, 3, 1, title='Original')
    plt.imshow(orig)
    fig.add_subplot(1, 3, 2, title='Noisy')
    plt.imshow(noisy)
    fig.add_subplot(1, 3, 3, title='Denoised')
    plt.imshow(denoised)
    fig.subplots_adjust(wspace=0.5)
    plt.show()
# To test
# show_img(cifar10_train[0][0].numpy(), cifar10_train[1][0].numpy(), cifar10_train[2][0].numpy())
```
**Some hyper parameters**
```
batch_size = 250 # Reduce this if you get out-of-memory error
learning_rate = 0.001
noise_level = 0.1
```
**Create the dataloader**
```
cifar10_train_loader = torch.utils.data.DataLoader(cifar10_train, batch_size=batch_size, shuffle=True, num_workers=1)
cifar10_valid_loader = torch.utils.data.DataLoader(cifar10_valid, batch_size=batch_size, shuffle=True, num_workers=1)
```
**The Denoising Autoencoder**
```
class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super(DenoisingAutoencoder, self).__init__()
        # Encoder. With kernel size 3 and padding 2, each conv grows the
        # spatial size by 2, starting from the 32 x 32 x 3 input.
        self.conv1e = nn.Conv2d(3, 24, 3, padding=2)     # 34 x 34 x 24
        self.conv2e = nn.Conv2d(24, 48, 3, padding=2)    # 36 x 36 x 48
        self.conv3e = nn.Conv2d(48, 96, 3, padding=2)    # 38 x 38 x 96
        self.conv4e = nn.Conv2d(96, 128, 3, padding=2)   # 40 x 40 x 128
        self.conv5e = nn.Conv2d(128, 256, 3, padding=2)  # 42 x 42 x 256
        self.mp1e = nn.MaxPool2d(2, return_indices=True) # 21 x 21 x 256
        # Decoder. Unpooling restores 42 x 42, then each transposed conv
        # shrinks the spatial size by 2, returning to 32 x 32 x 3.
        self.mp1d = nn.MaxUnpool2d(2)
        self.conv5d = nn.ConvTranspose2d(256, 128, 3, padding=2)
        self.conv4d = nn.ConvTranspose2d(128, 96, 3, padding=2)
        self.conv3d = nn.ConvTranspose2d(96, 48, 3, padding=2)
        self.conv2d = nn.ConvTranspose2d(48, 24, 3, padding=2)
        self.conv1d = nn.ConvTranspose2d(24, 3, 3, padding=2)

    def forward(self, x):
        # Encoder
        x = F.relu(self.conv1e(x))
        x = F.relu(self.conv2e(x))
        x = F.relu(self.conv3e(x))
        x = F.relu(self.conv4e(x))
        x = F.relu(self.conv5e(x))
        x, i = self.mp1e(x)
        # Decoder (uses the max-pool indices for unpooling)
        x = self.mp1d(x, i)
        x = F.relu(self.conv5d(x))
        x = F.relu(self.conv4d(x))
        x = F.relu(self.conv3d(x))
        x = F.relu(self.conv2d(x))
        x = F.relu(self.conv1d(x))
        return x
autoencoder = DenoisingAutoencoder().cuda()
parameters = list(autoencoder.parameters())
loss_func = nn.MSELoss()
optimizer = torch.optim.Adam(parameters, lr=learning_rate)

train_loss = []
valid_loss = []
for i in range(10):
    # Train the model
    total_loss = 0.0
    total_iter = 0
    autoencoder.train()
    for image, label in cifar10_train_loader:
        noise = torch.randn(image.shape[0], 3, 32, 32) * noise_level
        image_n = torch.add(image, noise)
        image = Variable(image).cuda()
        image_n = Variable(image_n).cuda()
        optimizer.zero_grad()
        output = autoencoder(image_n)
        loss = loss_func(output, image)
        loss.backward()
        optimizer.step()
        total_iter += 1
        total_loss += loss.data[0]
    # Record the validation loss
    total_val_loss = 0.0
    total_val_iter = 0
    autoencoder.eval()
    for image, label in cifar10_valid_loader:
        noise = torch.randn(image.shape[0], 3, 32, 32) * noise_level
        image_n = torch.add(image, noise)
        image = Variable(image).cuda()
        image_n = Variable(image_n).cuda()
        output = autoencoder(image_n)
        loss = loss_func(output, image)
        total_val_iter += 1
        total_val_loss += loss.data[0]
    # Visualize the first image of the last batch in our validation set
    orig = image[0].cpu().data.numpy()
    noisy = image_n[0].cpu().data.numpy()
    denoised = output[0].cpu().data.numpy()
    print("Iteration ", i+1)
    show_img(orig, noisy, denoised)
    train_loss.append(total_loss / total_iter)
    valid_loss.append(total_val_loss / total_val_iter)

# Save the model
torch.save(autoencoder.state_dict(), "./5.autoencoder.pth")

fig = plt.figure(figsize=(10, 7))
plt.plot(train_loss, label='Train loss')
plt.plot(valid_loss, label='Validation loss')
plt.legend()
plt.show()
```
### Let's do standalone inference
```
import random
img, _ = random.choice(cifar10_valid)
img = img.resize_((1, 3, 32, 32))
noise = torch.randn((1, 3, 32, 32)) * noise_level
img_n = torch.add(img, noise)
img_n = Variable(img_n).cuda()
denoised = autoencoder(img_n)
show_img(img[0].numpy(), img_n[0].data.cpu().numpy(), denoised[0].data.cpu().numpy())
```
### Homework
* Use a different dataset
* Make the autoencoder act as a vanilla stacked autoencoder
* Play with the layers of the autoencoder - can you simplify it?
```
import os
from os.path import isfile, join
import numpy as np
import imageio as img
from tqdm import tqdm
from pprint import pprint
import plotly as py
import plotly.graph_objects as go
if not os.path.exists('images'):
    os.mkdir('images')
images_dir = os.getcwd() + '\\images'
print(f'Saving images to dir: {images_dir}')
os.chdir('../')
if not os.path.exists('Visualisations'):
    os.mkdir('Visualisations')
visualisations_dir = os.getcwd() + '\\Visualisations'
from Logistic_Map_Generator import LogisticMapGenerator
def get_labels(axis_num, tick_counts):
    increments = np.linspace(0.0, 1.0, num=tick_counts+1, endpoint=True)\
        if axis_num != 1\
        else np.linspace(0.0, 1.0, num=tick_counts+1, endpoint=True)[::-1]
    labels = np.array([[increments[idx], increments[-(1+idx)]]
                       for idx in range(tick_counts+1)])
    labels = np.round(np.insert(labels, axis_num, 0.0, axis=1), 1)[:-1]
    labels = [str(tuple(label)) for label in labels]
    return labels
def make_axis(title, axis_num, tick_angle, tick_counts):
    return {'title': title,
            'titlefont': {'size': 10},
            'tickangle': tick_angle,
            'tickfont': {'size': 6},
            'tickcolor': 'rgba(0,0,0,0)',
            'tickmode': 'array',
            'tickvals': [idx/tick_counts for idx in range(tick_counts)],
            'ticktext': get_labels(axis_num=axis_num, tick_counts=tick_counts),
            'ticklen': 50,
            'showline': True,
            'showgrid': True}
keys = ['x @ t = 0', 'x @ t = +1', 'x @ t = +2']
total_itts = 333
for idx, r_val in tqdm(enumerate(np.linspace(start=3.0, stop=4.0, num=total_itts, endpoint=False)), total=total_itts):
    # Generate a new map with a random start (x_0) and the new r_val
    x = np.random.rand()
    map_gen = LogisticMapGenerator(x=x, r=r_val, alphabet='ABCD', depth=6, ret_type='ternary', ret_history=3)
    # Extract data for 50000 points
    # TODO: store data in a dictionary to be able to look up periodicity (key = ternary values)
    rawData = [{key: val for key, val in zip(keys, next(map_gen))} for _ in range(50000)]
    # Create Figure
    fig = go.Figure(go.Scatterternary({'mode': 'markers',
                                       'a': [i for i in map(lambda x: x['x @ t = 0'], rawData)],
                                       'b': [i for i in map(lambda x: x['x @ t = +1'], rawData)],
                                       'c': [i for i in map(lambda x: x['x @ t = +2'], rawData)],
                                       'marker': {'symbol': 100,
                                                  'color': '#DB7365',
                                                  'size': 4,
                                                  'line': {'width': 1},
                                                  'opacity': 0.625}}))
    # Add Equilateral Triangle Centroid -> Ternary [1/3, 1/3, 1/3] point
    fig.add_trace(go.Scatterternary({'mode': 'markers',
                                     'a': [1/3.],
                                     'b': [1/3.],
                                     'c': [1/3.],
                                     'marker': {'symbol': 'triangle-up-open',
                                                'color': '#DB7365',
                                                'size': 10,
                                                'line': {'width': 2}}}))
    fig.update_layout({'ternary': {'sum': 1,
                                   'aaxis': make_axis(title='x@ t = 0', axis_num=2, tick_angle=60, tick_counts=10),
                                   'baxis': make_axis(title='x@<br>t = +1', axis_num=0, tick_angle=-60, tick_counts=10),
                                   'caxis': make_axis(title='x@<br>t = +2', axis_num=1, tick_angle=0, tick_counts=10)},
                       'annotations': [{'showarrow': False,
                                        'text': f'Ternary Plot f([t=0, t=+1, t=+2]) of Logistic Map for <br>r = {r_val:.4f}',
                                        'x': 0.5,
                                        'y': 0.98,
                                        'font': {'size': 14}}],
                       'autosize': False,
                       'showlegend': False,
                       'width': 768,
                       'height': 768,
                       'margin': go.layout.Margin(l=80, r=80, b=0, t=0, pad=0),
                       'paper_bgcolor': 'rgba(0.95,0.95,0.95,1.0)',
                       'plot_bgcolor': 'rgba(0.95,0.95,0.95,1.0)'})
    # Save file
    r_val_str = str(round(r_val, 5)).replace('.', '_')
    fig.write_image(images_dir + f'\\{idx} - r_value = {r_val_str}.jpeg')
# Get file_names
file_names = [file_name for file_name in os.listdir(images_dir)
              if isfile(join(images_dir, file_name))]
# Sort file_names (by frame number) to guarantee order
file_names = {int(file_name.split(' - ')[0]): join(images_dir, file_name)
              for file_name in file_names}
# Get ordered file_names
file_names = [file_names[idx] for idx in range(len(file_names))]
# Set output file (.gif)
output_file = visualisations_dir + '\\Ternary Plot [t=0, t=+1, t=+2] of Logistic Map.gif'
# Create gif with imageio
with img.get_writer(output_file, mode='I') as writer:
    for file_name in file_names:
        image = img.imread(file_name)
        writer.append_data(image)
```
```
import tensorflow as tf
import pandas as pd
import numpy as np
import scipy.sparse as sp
import script_config as sc
import csv
hops = sc._config_hops
max_list_size = sc._config_relation_list_size_neighborhood

# Location of the data folder
data_folder = "/nvme/drive_1/NTDS_Final/"
users = pd.read_csv(data_folder+"usersdata.csv", delimiter='\t', header=None).values
filtered_users = pd.read_csv(data_folder+"filtered_users.csv", delimiter=',').values
user_idx_dict = {}
filtered_user_idx_dict = {}
for i, user in enumerate(users):
    user_idx_dict[user[0]] = i
for i, user in enumerate(filtered_users):
    filtered_user_idx_dict[user[0]] = user_idx_dict[user[0]]

def check_symmetric(a, tol=1e-8):
    return np.allclose(a, a.T, atol=tol)
```
**I. Extract Neighbors**
```
def load_adj_as_matrix(feature_type):
    desired_indices = np.array(list(filtered_user_idx_dict.values()))
    adj = sp.load_npz(data_folder+"filtering/adjacency_"+str(feature_type)+".npz")
    #reduced_adj = adj[desired_indices,:][:,desired_indices]
    return adj.todense()

# BFS to find the neighborhood of radius "hops"
def find_neighborhood(adjacency, user_idx):
    # Look for an element in a numpy array (faster than a list)
    def element_in_narray(narray, value):
        return len(np.where(narray == value)[0]) != 0

    # Data structures
    queue = np.ndarray((max_list_size, 2), dtype='i4')  # columns: [node id, hops]
    queue[0] = [int(user_idx), 0]
    queue_head = 0  # index of queue head
    queue_tail = 1  # index of next free spot in queue
    # Loop until the queue is empty
    while queue_head != queue_tail:
        current_id, current_hops = queue[queue_head]
        queue_head += 1
        # Cutoff condition
        if current_hops + 1 < hops:
            neigh_ids = np.where(adjacency[current_id, :] == 1)[1]
            for neigh_id in neigh_ids:
                # Check that the node has not been visited
                # and has not been marked to be visited
                # (compare against the id column only)
                if not element_in_narray(queue[:queue_tail, 0], int(neigh_id)):
                    if queue_tail == max_list_size:
                        raise MemoryError("Increase _config_list_size_neighborhood_creation "
                                          "from config.py")
                    # Mark the node to be visited
                    queue[queue_tail] = [int(neigh_id), current_hops+1]
                    queue_tail += 1
    return queue[:queue_tail, 0]
neighborhoods = [{i} for i in range(len(filtered_users))]
for feature in [1, 2, 3, 4, 5, 6, 7]:
    adj = load_adj_as_matrix(feature)
    for i in range(len(filtered_users)):
        neighborhoods[i] = neighborhoods[i].union(find_neighborhood(adj, i))
        if i % 1000 == 0:
            print("\r"+str(i)+" users processed for feature "+str(feature), sep=' ', end='', flush=True)

# Make sure that the central node has index 0
# in the local neighbor list
for i in range(len(filtered_users)):
    neighborhoods[i] = list(neighborhoods[i])
    for idx, neighboor in enumerate(neighborhoods[i]):
        if neighboor == i:
            temp = neighborhoods[i][0]
            neighborhoods[i][0] = neighborhoods[i][idx]
            neighborhoods[i][idx] = temp
            break
    assert(neighborhoods[i][0] == i)
    if i % 1000 == 0:
        print("\r"+str(i)+" users validated", sep=' ', end='', flush=True)
```
**II. Create Local Adjacencies from the List of Neighbors**
```
'''
for i, hood in enumerate(neighborhoods):
    num_neighbors = len(hood)
    local_adj = np.zeros((num_neighbors*7, num_neighbors*7))
    np.save(data_folder+"local/adjacency_"+str(i)+".npy", local_adj)
    if i % 100 == 0:
        print("\r"+str(i)+" matrices initialized", sep=' ', end='', flush=True)

for feature in [1, 2, 3, 4, 5, 6, 7]:
    adj = load_adj_as_matrix(feature)
    for i, neighboor_list in enumerate(neighborhoods):
        local_adj = np.load(data_folder+"local/adjacency_"+str(i)+".npy")
        for idx_1, neighboor_1 in enumerate(neighboor_list):
            for idx_2, neighboor_2 in enumerate(neighboor_list[idx_1:], start=idx_1):
                val = adj[neighboor_1, neighboor_2]
                local_adj[idx_1*7+feature-1, idx_2*7+feature-1] = val
                local_adj[idx_2*7+feature-1, idx_1*7+feature-1] = val
        np.save(data_folder+"local/adjacency_"+str(i)+".npy", local_adj)
        if i % 100 == 0:
            print("\r"+str(i)+" users processed for feature "+str(feature), sep=' ', end='', flush=True)
'''
for i, neighboor_list in enumerate(neighborhoods):
    local_adj = np.load(data_folder+"local/adjacency_"+str(i)+".npy")
    for idx, neighboor in enumerate(neighboor_list):
        for feature_1 in [1, 2, 3, 4, 5, 6, 7]:
            for feature_2 in [1, 2, 3, 4, 5, 6, 7]:
                if feature_1 != feature_2:
                    local_adj[idx*7+feature_1-1, idx*7+feature_2-1] = 1
                    local_adj[idx*7+feature_2-1, idx*7+feature_1-1] = 1
    #assert(check_symmetric(local_adj))
    np.save(data_folder+"local/adjacency_"+str(i)+".npy", local_adj)
    if i % 100 == 0:
        print("\r"+str(i)+" adjacency matrices verified", sep=' ', end='', flush=True)
```
# Author Expectations
Watch a [short tutorial video](https://greatexpectations.io/videos/getting_started/create_expectations?utm_source=notebook&utm_medium=create_expectations) or read [the written tutorial](https://docs.greatexpectations.io/en/latest/tutorials/create_expectations.html?utm_source=notebook&utm_medium=create_expectations)
We'd love it if you **reach out for help on** the [**Great Expectations Slack Channel**](https://greatexpectations.io/slack)
```
import json
import os
import great_expectations as ge
import great_expectations.jupyter_ux
import pandas as pd
```
## 1. Get a DataContext
This represents your **project** that you just created using `great_expectations init`. [Read more in the tutorial](https://docs.greatexpectations.io/en/latest/tutorials/create_expectations.html?utm_source=notebook&utm_medium=create_expectations#get-a-datacontext-object)
```
context = ge.data_context.DataContext()
```
## 2. List the tables in your database
The `DataContext` will now introspect your `Datasource` and list the tables it finds. [Read more in the tutorial](https://docs.greatexpectations.io/en/latest/tutorials/create_expectations.html?utm_source=notebook&utm_medium=create_expectations#list-data-assets)
```
great_expectations.jupyter_ux.list_available_data_asset_names(context)
```
## 3. Pick a table and set the expectation suite name
Internally, Great Expectations represents CSVs and dataframes as `DataAsset`s and uses this notion to link them to `Expectation Suites`. [Read more in the tutorial](https://docs.greatexpectations.io/en/latest/tutorials/create_expectations.html?utm_source=notebook&utm_medium=create_expectations#pick-a-data-asset-and-set-the-expectation-suite-name)
```
data_asset_name = "YOUR_TABLE_NAME_LISTED_ABOVE" # TODO: replace with your value!
normalized_data_asset_name = context.normalize_data_asset_name(data_asset_name)
normalized_data_asset_name
```
We recommend naming your first expectation suite for a table `warning`. Later, as you identify some of the expectations that you add to this suite as critical, you can move these expectations into another suite and call it `failure`.
```
expectation_suite_name = "warning" # TODO: replace with your value!
```
## 4. Create a new empty expectation suite
```
context.create_expectation_suite(data_asset_name=data_asset_name, expectation_suite_name=expectation_suite_name, overwrite_existing=True)
```
## 5. Load a batch of data you want to use to create `Expectations`
To learn more about batches and `get_batch`, see [this tutorial](https://docs.greatexpectations.io/en/latest/tutorials/create_expectations.html?utm_source=notebook&utm_medium=create_expectations#load-a-batch-of-data-to-create-expectations)
```
# If you would like to validate an entire table or view in your database's default schema:
batch_kwargs = {'table': "YOUR_TABLE"}
# If you would like to validate an entire table or view from a non-default schema in your database:
# batch_kwargs = {'table': "YOUR_TABLE", "schema": "YOUR_SCHEMA"}
# If you would like to validate using a query to construct a temporary table:
# batch_kwargs = {'query': 'SELECT YOUR_ROWS FROM YOUR_TABLE'}
batch = context.get_batch(normalized_data_asset_name, expectation_suite_name, batch_kwargs)
batch.head()
```
Load a batch of data and take a peek at the first few rows.
```
batch = context.get_batch(data_asset_name, expectation_suite_name, batch_kwargs)
batch.head()
```
#### Optionally, customize and review batch options
`BatchKwargs` are extremely flexible - to learn more [read the tutorial](https://docs.greatexpectations.io/en/latest/tutorials/create_expectations.html?utm_source=notebook&utm_medium=create_expectations#load-a-batch-of-data-to-create-expectations)
Here are the batch kwargs used to load your batch
```
batch.batch_kwargs
# The datasource can add and store additional identifying information to ensure you can track a batch through
# your pipeline
batch.batch_id
```
## 6. Author Expectations
With a batch, you can add expectations by calling specific expectation methods. They all begin with `.expect_` which makes autocompleting easy.
See available expectations in the [expectation glossary](https://docs.greatexpectations.io/en/latest/glossary.html?utm_source=notebook&utm_medium=create_expectations).
You can also see available expectations by hovering over data elements in the HTML page generated by profiling your dataset.
Below is an example expectation that checks that the values in the batch's first column are not null.
[Read more in the tutorial](https://docs.greatexpectations.io/en/latest/tutorials/create_expectations.html?utm_source=notebook&utm_medium=create_expectations#author-expectations)
```
column_name = batch.get_table_columns()[0]
batch.expect_column_values_to_not_be_null(column_name)
```
Add more expectations here. **Hint** start with `batch.expect_` and hit tab for Jupyter's autocomplete to see all the expectations!
## 7. Review and save your Expectations
Expectations that are `True` on this data batch are added automatically. Let's view all the expectations you created in machine-readable JSON.
```
batch.get_expectation_suite()
```
If you decide not to save some expectations that you created, use the [remove_expectation method](https://docs.greatexpectations.io/en/latest/module_docs/data_asset_module.html?highlight=remove_expectation&utm_source=notebook&utm_medium=create_expectations#great_expectations.data_asset.data_asset.DataAsset.remove_expectation). You can also choose to keep (rather than filter out) expectations that were `False` on this batch.
The following method will save the expectation suite as a JSON file in the `great_expectations/expectations` directory of your project:
```
batch.save_expectation_suite()
```
## 8. View the Expectations in Data Docs
Let's now build and look at your Data Docs. These will now include an **Expectation Suite Overview** built from the expectations you just created that helps you communicate about your data with both machines and humans.
```
context.build_data_docs()
context.open_data_docs()
```
## Congratulations! You created and saved Expectations
## Next steps:
### 1. Play with Validation
Validation is the process of checking if new batches of this data meet your expectations before they are processed by your pipeline. Go to [validation_playground.ipynb](validation_playground.ipynb) to see how!
### 2. Explore the documentation & community
You are now among the elite data professionals who know how to build robust descriptions of your data and protections for pipelines and machine learning models. Join the [**Great Expectations Slack Channel**](https://greatexpectations.io/slack) to see how others are wielding these superpowers.
```
from resources.workspace import *
```
$
% START OF MACRO DEF
% DO NOT EDIT IN INDIVIDUAL NOTEBOOKS, BUT IN macros.py
%
\newcommand{\Reals}{\mathbb{R}}
\newcommand{\Expect}[0]{\mathbb{E}}
\newcommand{\NormDist}{\mathcal{N}}
%
\newcommand{\DynMod}[0]{\mathscr{M}}
\newcommand{\ObsMod}[0]{\mathscr{H}}
%
\newcommand{\mat}[1]{{\mathbf{{#1}}}}
%\newcommand{\mat}[1]{{\pmb{\mathsf{#1}}}}
\newcommand{\bvec}[1]{{\mathbf{#1}}}
%
\newcommand{\trsign}{{\mathsf{T}}}
\newcommand{\tr}{^{\trsign}}
\newcommand{\tn}[1]{#1}
\newcommand{\ceq}[0]{\mathrel{≔}}
%
\newcommand{\I}[0]{\mat{I}}
\newcommand{\K}[0]{\mat{K}}
\newcommand{\bP}[0]{\mat{P}}
\newcommand{\bH}[0]{\mat{H}}
\newcommand{\bF}[0]{\mat{F}}
\newcommand{\R}[0]{\mat{R}}
\newcommand{\Q}[0]{\mat{Q}}
\newcommand{\B}[0]{\mat{B}}
\newcommand{\C}[0]{\mat{C}}
\newcommand{\Ri}[0]{\R^{-1}}
\newcommand{\Bi}[0]{\B^{-1}}
\newcommand{\X}[0]{\mat{X}}
\newcommand{\A}[0]{\mat{A}}
\newcommand{\Y}[0]{\mat{Y}}
\newcommand{\E}[0]{\mat{E}}
\newcommand{\U}[0]{\mat{U}}
\newcommand{\V}[0]{\mat{V}}
%
\newcommand{\x}[0]{\bvec{x}}
\newcommand{\y}[0]{\bvec{y}}
\newcommand{\z}[0]{\bvec{z}}
\newcommand{\q}[0]{\bvec{q}}
\newcommand{\br}[0]{\bvec{r}}
\newcommand{\bb}[0]{\bvec{b}}
%
\newcommand{\bx}[0]{\bvec{\bar{x}}}
\newcommand{\by}[0]{\bvec{\bar{y}}}
\newcommand{\barB}[0]{\mat{\bar{B}}}
\newcommand{\barP}[0]{\mat{\bar{P}}}
\newcommand{\barC}[0]{\mat{\bar{C}}}
\newcommand{\barK}[0]{\mat{\bar{K}}}
%
\newcommand{\D}[0]{\mat{D}}
\newcommand{\Dobs}[0]{\mat{D}_{\text{obs}}}
\newcommand{\Dmod}[0]{\mat{D}_{\text{mod}}}
%
\newcommand{\ones}[0]{\bvec{1}}
\newcommand{\AN}[0]{\big( \I_N - \ones \ones\tr / N \big)}
%
% END OF MACRO DEF
$
In this tutorial we're going to code an EnKF implementation using numpy.
# The EnKF algorithm
As with the KF, the EnKF consists of the recursive application of
a forecast step and an analysis step.
This presentation follows the traditional template, presenting the EnKF as "the Monte Carlo version of the KF
where the state covariance is estimated by the ensemble covariance".
It is not obvious that this postulated method should work;
indeed, it is only justified upon inspection of its properties,
deferred to later.
<mark><font size="-1">
<b>NB:</b>
Since we're going to focus on a single filtering cycle (at a time),
the subscript $k$ is dropped. Moreover, <br>
The superscript $f$ indicates that $\{\x_n^f\}_{n=1..N}$ is the forecast (prior) ensemble.<br>
The superscript $a$ indicates that $\{\x_n^a\}_{n=1..N}$ is the analysis (posterior) ensemble.
</font></mark>
### The forecast step
Suppose $\{\x_n^a\}_{n=1..N}$ is an iid. sample from $p(\x_{k-1} \mid \y_1,\ldots, \y_{k-1})$, which may or may not be Gaussian.
The forecast step of the EnKF consists of a Monte Carlo simulation
of the forecast dynamics for each $\x_n^a$:
$$
\forall n, \quad \x^f_n = \DynMod(\x_n^a) + \q_n \, , \\
$$
where $\{\q_n\}_{n=1..N}$ are sampled iid. from $\NormDist(\bvec{0},\Q)$,
or whatever noise model is assumed,
and $\DynMod$ is the model dynamics.
The dynamics could consist of *any* function, i.e. the EnKF can be applied with nonlinear models.
The ensemble, $\{\x_n^f\}_{n=1..N}$, is then an iid. sample from the forecast pdf,
$p(\x_k \mid \y_1,\ldots,\y_{k-1})$. This follows from the definition of the latter, so it is a relatively trivial idea and way to obtain this pdf. However, before Monte-Carlo methods were computationally feasible, the computation of the forecast pdf required computing the [Chapman-Kolmogorov equation](https://en.wikipedia.org/wiki/Chapman%E2%80%93Kolmogorov_equation), which constituted a major hurdle for filtering methods.
### The analysis update step
of the ensemble is given by:
$$\begin{align}
\forall n, \quad \x^\tn{a}_n &= \x_n^\tn{f} + \barK \left\{\y - \br_n - \ObsMod(\x_n^\tn{f}) \right\}
\, , \\
\text{or,}\quad
\E^\tn{a} &= \E^\tn{f} + \barK \left\{\y\ones\tr - \Dobs - \ObsMod(\E^\tn{f}) \right\} \, ,
\tag{4}
\end{align}
$$
where the "observation perturbations", $\br_n$, are sampled iid. from the observation noise model, e.g. $\NormDist(\bvec{0},\R)$,
and form the columns of $\Dobs$,
and the observation operator (again, any type of function) $\ObsMod$ is applied column-wise to $\E^\tn{f}$.
The gain $\barK$ is defined by inserting the ensemble estimates for
* (i) $\B \bH\tr$: the cross-covariance between $\x^\tn{f}$ and $\ObsMod(\x^\tn{f})$, and
* (ii) $\bH \B \bH\tr$: the covariance matrix of $\ObsMod(\x^\tn{f})$,
in the formula for $\K$, namely eqn. (K1) of [T4](T4%20-%20Multivariate%20Kalman.ipynb).
Using the estimators from [T7](T7%20-%20Ensemble%20representation.ipynb) yields
$$\begin{align}
\barK &= \X \Y\tr ( \Y \Y\tr + (N{-}1) \R )^{-1} \, , \tag{5a}
\end{align}
$$
where $\Y \in \Reals^{P \times N}$
is the centered, *observed* ensemble
$\Y \ceq
\begin{bmatrix}
\y_1 -\by, & \ldots & \y_n -\by, & \ldots & \y_N -\by
\end{bmatrix} \, ,$ where $\y_n = \ObsMod(\x_n^\tn{f})$.
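In code, eqn. (5a) amounts to a handful of numpy operations. This sketch assumes a linear observation operator for concreteness; the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, N = 3, 2, 50                         # state dim, obs dim, ensemble size

E_f = rng.standard_normal((M, N))          # forecast ensemble
H = np.eye(P, M)                           # an assumed linear obs operator
Eo = H @ E_f                               # observed ensemble, y_n = H x_n^f
R = 2.0 * np.eye(P)

X = E_f - E_f.mean(axis=1, keepdims=True)  # centered forecast ensemble
Y = Eo - Eo.mean(axis=1, keepdims=True)    # centered observed ensemble

# eqn (5a): K = X Y^T (Y Y^T + (N-1) R)^{-1}
K_bar = X @ Y.T @ np.linalg.inv(Y @ Y.T + (N - 1) * R)
```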
The EnKF is summarized in the animation below.
```
EnKF_animation
```
#### Exc 2:
(a) Use the Woodbury identity (C2) of [T4](T4%20-%20Multivariate%20Kalman.ipynb) to show that eqn. (5a) can also be written
$$\begin{align}
\barK &= \X ( \Y\tr \Ri \Y + (N{-}1)\I_N )^{-1} \Y\tr \Ri \, . \tag{5b}
\end{align}
$$
(b) What is the potential benefit?
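This is not a substitute for the algebraic derivation asked for in (a), but the equivalence of (5a) and (5b) can be checked numerically on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
M, P, N = 3, 2, 10
X = rng.standard_normal((M, N))
Y = rng.standard_normal((P, N))
R = 2.0 * np.eye(P)
Ri = np.linalg.inv(R)

# eqn (5a): invert a P-by-P matrix
K_5a = X @ Y.T @ np.linalg.inv(Y @ Y.T + (N - 1) * R)
# eqn (5b): invert an N-by-N matrix
K_5b = X @ np.linalg.inv(Y.T @ Ri @ Y + (N - 1) * np.eye(N)) @ Y.T @ Ri

assert np.allclose(K_5a, K_5b)
```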
#### Exc 4:
The above animation assumed that the observation operator is just the identity matrix, $\I$, rather than a general observation operator, $\ObsMod()$. Meanwhile, the Kalman gain used by the EnKF, eqn. (5a), is applicable for any $\ObsMod()$. On the other hand, the formula (5a) consists solely of linear algebra. Therefore it cannot perfectly represent any general (nonlinear) $\ObsMod()$. So how does it actually treat the observation operator? What meaning can we assign to the resulting updates?
*Hint*: consider the limit of $\R \rightarrow 0$.
#### Exc 6 (a):
Consider the ensemble averages,
- $\bx^\tn{a} = \frac{1}{N}\sum_{n=1}^N \x^\tn{a}_n$, and
- $\bx^\tn{f} = \frac{1}{N}\sum_{n=1}^N \x^\tn{f}_n$,
and recall that the analysis step, eqn. (4), defines $\x^\tn{a}_n$ from $\x^\tn{f}_n$.
(a) Show that, in case $\ObsMod$ is linear (the matrix $\bH$),
$$\begin{align}
\Expect \bx^\tn{a} &= \bx^\tn{f} + \barK \left\{\y - \bH\bx^\tn{f} \right\} \, , \tag{6}
\end{align}
$$
where the expectation, $\Expect$, is taken with respect to $\Dobs$ only (i.e. not the sampling of the forecast ensemble, $\E^\tn{f}$ itself).
What does this mean?
```
#show_answer("EnKF_nobias_a")
```
#### Exc 6 (b)*:
Consider the ensemble covariance matrices:
$$\begin{align}
\barB &= \frac{1}{N-1} \X{\X}\tr \, , \tag{7a} \\\
\barP &= \frac{1}{N-1} \X^a{\X^a}\tr \, . \tag{7b}
\end{align}$$
Now, denote the centralized observation perturbations:
$$\begin{align}
\D &= \Dobs - \bar{\br}\ones\tr \\\
&= \Dobs\AN \, . \tag{8}
\end{align}$$
Note that $\D \ones = \bvec{0}$ and, with expectation over $\Dobs$,
$$
\begin{align}
\label{eqn:R_sample_cov_of_D}
\frac{1}{N-1}\D \D\tr = \R \, , \tag{9a} \\\
\label{eqn:zero_AD_cov}
\X \D\tr = \bvec{0} \, . \tag{9b}
\end{align}
$$
Assuming eqns (8) and (9) hold true, show that
$$\begin{align}
\barP &= [\I_M - \barK \bH]\barB \, . \tag{10}
\end{align}$$
```
#show_answer("EnKF_nobias_b")
```
#### Exc 6 (c)*:
Show that, if no observation perturbations are used in eqn. (4), then $\barP$ would be too small.
```
#show_answer("EnKF_without_perturbations")
```
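A quick numerical illustration of this (not a proof, and using the simplifying assumptions of an identity observation operator and a large ensemble):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 2, 100_000
B_true = np.diag([1.0, 2.0])
R = np.eye(M)
y = np.zeros(M)                        # the observation value does not affect the spread

E_f = np.linalg.cholesky(B_true) @ rng.standard_normal((M, N))
X = E_f - E_f.mean(1, keepdims=True)
B_bar = X @ X.T / (N - 1)
K = B_bar @ np.linalg.inv(B_bar + R)   # H = I for simplicity

D = np.linalg.cholesky(R) @ rng.standard_normal((M, N))  # obs perturbations

def ens_cov(E):
    A = E - E.mean(1, keepdims=True)
    return A @ A.T / (N - 1)

E_a_pert = E_f + K @ (y[:, None] - D - E_f)    # eqn (4), with perturbations
E_a_nopert = E_f + K @ (y[:, None] - E_f)      # same update, perturbations omitted

P_pert = ens_cov(E_a_pert)       # approx. (I - K) B, the correct posterior cov
P_nopert = ens_cov(E_a_nopert)   # approx. (I - K) B (I - K)^T, which is too small

assert np.trace(P_nopert) < np.trace(P_pert)
```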
## Experimental setup
Before making the EnKF, we'll set up an experiment to test it with, so that you can check if you've implemented a working method or not.
To that end, we'll use the Lorenz-63 model, from [T6](T6%20-%20Dynamical%20systems,%20chaos,%20Lorenz.ipynb). The coupled ODEs are recalled here, but with some of the parameters fixed.
```
M = 3 # ndim
def dxdt(x):
    sig = 10.0
    rho = 28.0
    beta = 8.0/3
    x, y, z = x
    d = np.zeros(3)
    d[0] = sig*(y - x)
    d[1] = rho*x - y - x*z
    d[2] = x*y - beta*z
    return d
```
Next, we make the forecast model $\DynMod$ out of $\frac{d \x}{dt}$ such that $\x(t+dt) = \DynMod(\x(t),t,dt)$. We'll make use of the "4th order Runge-Kutta" integrator `rk4`.
```
def Dyn(E, t0, dt):
    def step(x0):
        return rk4(lambda t, x: dxdt(x), x0, t0, dt)
    if E.ndim == 1:
        # Truth (single state vector) case
        E = step(E)
    else:
        # Ensemble case
        for n in range(E.shape[1]):
            E[:,n] = step(E[:,n])
    return E
Q_chol = zeros((M,M))
Q = Q_chol @ Q_chol.T
```
Notice the loop over each ensemble member. For better performance, this should be vectorized, if possible. Or, if the forecast model is computationally demanding (as is typically the case in real applications), the loop should be parallelized: i.e. the forecast simulations should be distributed to separate computers.
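For instance, when the right-hand side broadcasts over ensemble columns (the `dxdt` above would need minor rewriting for that; here an assumed toy linear model is used instead), the member loop collapses to a single array operation:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # illustrative linear dynamics
dxdt_v = lambda x: A @ x                   # works for a vector or an (M, N) ensemble

def rk4_step(f, x, dt):
    k1 = f(x)
    k2 = f(x + dt / 2 * k1)
    k3 = f(x + dt / 2 * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
E = rng.standard_normal((2, 30))

# member-by-member loop vs. one vectorized call: same result
E_loop = np.column_stack([rk4_step(dxdt_v, E[:, n], 0.01) for n in range(E.shape[1])])
E_vec = rk4_step(dxdt_v, E, 0.01)
assert np.allclose(E_loop, E_vec)
```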
The following are the time settings that we will use
```
dt = 0.01 # integrational time step
dkObs = 25 # number of steps between observations
dtObs = dkObs*dt # time between observations
KObs = 60 # total number of observations
K = dkObs*(KObs+1) # total number of time steps
```
Initial conditions
```
mu0 = array([1.509, -1.531, 25.46])
P0_chol = eye(3)
P0 = P0_chol @ P0_chol.T
```
Observation model settings
```
p = 3 # ndim obs
def Obs(E, t):
    if E.ndim == 1: return E[:p]
    else: return E[:p,:]
R_chol = sqrt(2)*eye(p)
R = R_chol @ R_chol.T
```
Generate synthetic truth (`xx`) and observations (`yy`)
```
# Init
xx = zeros((K+1, M))
yy = zeros((KObs+1, p))
xx[0] = mu0 + P0_chol @ randn(M)
# Loop
for k in range(1,K+1):
    xx[k] = Dyn(xx[k-1],(k-1)*dt,dt)
    xx[k] += Q_chol @ randn(M)
    if k%dkObs == 0:
        kObs = k//dkObs-1
        yy[kObs] = Obs(xx[k],nan) + R_chol @ randn(p)
```
## EnKF implementation
We will make use of `estimate_mean_and_cov` and `estimate_cross_cov` from the previous section. Paste them in below.
```
# def estimate_mean_and_cov ...
```
**Exc 8:** Complete the code below
```
xxhat = zeros((K+1,M))
# Useful linear algebra: compute B/A
def divide_1st_by_2nd(B,A):
    return nla.solve(A.T,B.T).T

def my_EnKF(N):
    # Init ensemble
    ...
    for k in range(1,K+1):
        # Forecast
        t = k*dt
        # use model
        E = ...
        # add noise
        E += ...
        if k%dkObs == 0:
            # Analysis
            y = yy[k//dkObs-1] # current observation
            Eo = Obs(E,t)      # observed ensemble
            # Compute ensemble moments
            BH = ...
            HBH = ...
            # Compute Kalman Gain
            KG = ...
            # Generate perturbations
            Perturb = ...
            # Update ensemble with KG
            E += ...
        # Save statistics
        xxhat[k] = mean(E,axis=1)
```
Notice that we only store some stats (`xxhat`). This is because in large systems, keeping the entire ensemble in memory is probably too much.
```
#show_answer('EnKF v1')
```
Now let's try out its capabilities
```
# Run assimilation
my_EnKF(10)
# Plot
fig, axs = plt.subplots(3, 1, sharex=True)
for m in range(3):
    axs[m].plot(dt*arange(K+1), xx[:,m],    'k', label="Truth")
    axs[m].plot(dt*arange(K+1), xxhat[:,m], 'b', label="Estimate")
    if m<p:
        axs[m].plot(dtObs*arange(1,KObs+2), yy[:,m], 'g*')
    axs[m].set_ylabel("Dim %d"%m)
axs[0].legend()
plt.xlabel("Time (t)")
```
**Exc 10:** The visuals of the plots are nice. But it would be good to have a summary statistic of the accuracy performance of the filter. Make a function `average_rmse(xx,xxhat)` that computes $ \frac{1}{K+1} \sum_{k=0}^K \sqrt{\frac{1}{M} \| \bx_k - \x_k \|_2^2} \, .$
```
def average_rmse(xx,xxhat):
    ### INSERT ANSWER ###
    return average
# Test
average_rmse(xx,xxhat)
#show_answer('rmse')
```
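A direct transcription of the formula above, as one possible sketch (written independently of the notebook's official answer):

```python
import numpy as np

def average_rmse_sketch(xx, xxhat):
    # ||x_k - xhat_k||^2 / M for each time k, then sqrt, then average over time
    err2 = np.mean((xx - xxhat) ** 2, axis=1)
    return np.mean(np.sqrt(err2))

# tiny sanity check: a constant offset of 1 in every component gives RMSE 1
assert np.isclose(average_rmse_sketch(np.zeros((5, 3)), np.ones((5, 3))), 1.0)
```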
**Exc 12:**
* (a). Repeat the above experiment, but now observing only the first (0th) component of the state.
```
#show_answer('Repeat experiment a')
```
* (b). Put a `seed()` command in the right place so as to be able to recreate exactly the same results from an experiment.
```
#show_answer('Repeat experiment b')
```
* (c). Use $N=5$, and repeat the experiments. This is quite a small ensemble size, and quite often it will yield divergence: the EnKF "definitely loses track" of the truth, typically because of strong nonlinearity in the forecast models, and underestimation (by $\barP$) of the actual errors. Repeat the experiment with different seeds until you observe in the plots that divergence has happened.
* (d). Implement "multiplicative inflation" to remedy the situation; this is a factor that should spread the ensemble further apart; a simple version is to inflate the perturbations. Implement it, and tune its value to try to avoid divergence.
```
#show_answer('Repeat experiment cd')
```
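For reference, a common form of multiplicative inflation simply scales the ensemble anomalies about the mean, e.g. just after the forecast step. A minimal sketch (the factor 1.02 is an arbitrary illustrative value that would need tuning):

```python
import numpy as np

def inflate(E, factor):
    """Multiplicative inflation: scale anomalies about the ensemble mean."""
    mu = E.mean(axis=1, keepdims=True)
    return mu + factor * (E - mu)

rng = np.random.default_rng(0)
E = rng.standard_normal((3, 20))
E_infl = inflate(E, 1.02)

# the ensemble mean is unchanged; the spread grows by the factor
assert np.allclose(E_infl.mean(axis=1), E.mean(axis=1))
```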
### Next: [DAPPER example scripts](https://github.com/nansencenter/DAPPER/tree/master/examples)
We hope you have enjoyed these exercises. For your next steps, we suggest you try out the example notebooks that come with DAPPER. In particular, the first example contains exercises for the willing.
# CV: Feature Dropout
Try eliminating a random subset of features to check for possible overfitting.
## Imports
```
from pygoose import *
import gc
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold
```
## Config
```
project = kg.Project.discover()
```
Model-specific parameters.
```
NUM_FOLDS = 5
```
Search-specific parameters.
```
DROPOUT_FEATURE_FRACTION = 0.2
NUM_IMPORTANCE_BINS = 5
NUM_SEARCH_ITERATIONS = 50
```
Make subsequent runs consistent and reproducible.
```
RANDOM_SEED = 100500
np.random.seed(RANDOM_SEED)
```
## Read Data
```
feature_lists = [
    'simple_summaries',
    'jaccard_ngrams',
    'fuzzy',
    'tfidf',
    'lda',
    'nlp_tags',
    'wordnet_similarity',
    'phrase_embedding',
    'wmd',
    'wm_intersect',
    '3rdparty_abhishek',
    '3rdparty_dasolmar_whq',
    '3rdparty_mephistopheies',
    '3rdparty_image_similarity',
    'magic_pagerank',
    'magic_frequencies',
    'magic_cooccurrence_matrix',
    'oofp_nn_mlp_with_magic',
    'oofp_nn_cnn_with_magic',
    'oofp_nn_bi_lstm_with_magic',
    'oofp_nn_siamese_lstm_attention',
]
df_train, df_test, _ = project.load_feature_lists(feature_lists)
y_train = kg.io.load(project.features_dir + 'y_train.pickle')
```
## Compute dropout probabilities
```
gbm_importances = {
    # Place prior feature importances here
}
imps = pd.DataFrame(
    [[feature, importance] for feature, importance in gbm_importances.items()],
    columns=['feature', 'importance'],
)
imps['importance_bin'] = pd.cut(imps['importance'], NUM_IMPORTANCE_BINS, labels=list(range(1, NUM_IMPORTANCE_BINS + 1)))
importance_bin = dict(zip(imps['feature'], imps['importance_bin']))
dropout_probs = np.array([
    1 / importance_bin.get(feature_name, NUM_IMPORTANCE_BINS // 2 + 1)
    for feature_name in df_train.columns.tolist()
])
```
Normalize so that the vector sums up to 1
```
dropout_probs *= (1 / np.sum(dropout_probs))
```
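To see why this scheme favors dropping unimportant features, here is a tiny self-contained illustration with toy importances (not the real ones): lower-importance features land in lower bins and thus get a larger raw dropout weight `1/bin`.

```python
import numpy as np
import pandas as pd

# toy importances: 'a' and 'b' are weak features, 'c' and 'd' are strong
toy = pd.DataFrame({'feature': ['a', 'b', 'c', 'd'],
                    'importance': [1.0, 2.0, 50.0, 100.0]})
toy['bin'] = pd.cut(toy['importance'], 2, labels=[1, 2])

probs = np.array([1 / int(b) for b in toy['bin']])
probs = probs / probs.sum()          # normalize so probabilities sum to 1

assert probs[0] > probs[3]           # weak 'a' is dropped more often than strong 'd'
```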
## Run the search
```
def run_experiment(dropout_feature_list):
    X_train = df_train.drop(dropout_feature_list, axis=1).values
    kfold = StratifiedKFold(
        n_splits=NUM_FOLDS,
        shuffle=True,
        random_state=RANDOM_SEED
    )
    experiment_scores = []
    for fold_num, (ix_train, ix_val) in enumerate(kfold.split(X_train, y_train)):
        X_fold_train = X_train[ix_train]
        X_fold_val = X_train[ix_val]
        y_fold_train = y_train[ix_train]
        y_fold_val = y_train[ix_val]
        lgb_params = {
            'objective': 'binary',
            'metric': 'binary_logloss',
            'boosting': 'gbdt',
            'device': 'cpu',
            'feature_fraction': 0.5,
            'num_leaves': 64,
            'learning_rate': 0.03,
            'num_boost_round': 3000,
            'early_stopping_rounds': 5,
            'verbose': 1,
            'bagging_fraction_seed': RANDOM_SEED,
            'feature_fraction_seed': RANDOM_SEED,
        }
        lgb_data_train = lgb.Dataset(X_fold_train, y_fold_train)
        lgb_data_val = lgb.Dataset(X_fold_val, y_fold_val)
        evals_result = {}
        model = lgb.train(
            lgb_params,
            lgb_data_train,
            valid_sets=[lgb_data_train, lgb_data_val],
            evals_result=evals_result,
            num_boost_round=lgb_params['num_boost_round'],
            early_stopping_rounds=lgb_params['early_stopping_rounds'],
            verbose_eval=False,
        )
        fold_train_scores = evals_result['training'][lgb_params['metric']]
        fold_val_scores = evals_result['valid_1'][lgb_params['metric']]
        experiment_scores.append([
            fold_train_scores[-1],
            fold_val_scores[-1],
        ])
    # Compute final scores.
    final_experiment_score = np.mean(np.array(experiment_scores), axis=0)
    # Clean up.
    del X_train
    del model
    gc.collect()
    return [
        dropout_feature_list,
        final_experiment_score[0],
        final_experiment_score[1],
    ]
all_experiments_log = []
for i in range(NUM_SEARCH_ITERATIONS):
    print(f'Iteration {i + 1} of {NUM_SEARCH_ITERATIONS}')
    dropout_list = np.random.choice(
        df_train.columns,
        size=int(len(df_train.columns) * DROPOUT_FEATURE_FRACTION),
        replace=False,
        p=dropout_probs,
    )
    print(f'Removing {dropout_list}')
    experiment_result = run_experiment(dropout_list)
    _, result_train, result_val = experiment_result
    print(f'Train: {result_train:.6f} Val: {result_val:.6f} Diff: {result_val - result_train:.6f}')
    all_experiments_log.append(experiment_result)
    pd.DataFrame(all_experiments_log).to_csv(project.temp_dir + 'dropout_experiments.log', index=False)
    print()
```
Copyright 2021 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/cropnet_on_device"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/cropnet_on_device.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/cropnet_on_device.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/cropnet_on_device.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/s?module-type=image-feature-vector&q=cropnet"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
```
# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# Fine tuning models for plant disease detection
This notebook shows you how to **fine-tune CropNet models from TensorFlow Hub** on a dataset from TFDS or your own crop disease detection dataset.
You will:
- Load the TFDS cassava dataset or your own data
- Enrich the data with unknown (negative) examples to get a more robust model
- Apply image augmentations to the data
- Load and fine tune a [CropNet model](https://tfhub.dev/s?module-type=image-feature-vector&q=cropnet) from TF Hub
- Export a TFLite model, ready to be deployed on your app with [Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/image_classifier), [MLKit](https://developers.google.com/ml-kit/vision/image-labeling/custom-models/android) or [TFLite](https://www.tensorflow.org/lite/guide/inference) directly
## Imports and Dependencies
Before starting, you'll need to install some of the dependencies that will be needed like [Model Maker](https://www.tensorflow.org/lite/guide/model_maker) and the latest version of TensorFlow Datasets.
```
!pip install -q tflite-model-maker
!pip install -q -U tensorflow-datasets
import matplotlib.pyplot as plt
import os
import seaborn as sns
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow_examples.lite.model_maker.core.export_format import ExportFormat
from tensorflow_examples.lite.model_maker.core.task import image_preprocessing
from tflite_model_maker import image_classifier
from tflite_model_maker import ImageClassifierDataLoader
from tflite_model_maker.image_classifier import ModelSpec
```
## Load a TFDS dataset to fine-tune on
Let's use the publicly available [Cassava Leaf Disease dataset](https://www.tensorflow.org/datasets/catalog/cassava) from TFDS.
```
tfds_name = 'cassava'
(ds_train, ds_validation, ds_test), ds_info = tfds.load(
name=tfds_name,
split=['train', 'validation', 'test'],
with_info=True,
as_supervised=True)
TFLITE_NAME_PREFIX = tfds_name
```
## Or alternatively load your own data to fine-tune on
Instead of using a TFDS dataset, you can also train on your own data. This code snippet shows how to load your own custom dataset. See [this](https://www.tensorflow.org/datasets/api_docs/python/tfds/folder_dataset/ImageFolder) link for the supported structure of the data. An example is provided here using the publicly available [Cassava Leaf Disease dataset](https://www.tensorflow.org/datasets/catalog/cassava).
```
# data_root_dir = tf.keras.utils.get_file(
# 'cassavaleafdata.zip',
# 'https://storage.googleapis.com/emcassavadata/cassavaleafdata.zip',
# extract=True)
# data_root_dir = os.path.splitext(data_root_dir)[0] # Remove the .zip extension
# builder = tfds.ImageFolder(data_root_dir)
# ds_info = builder.info
# ds_train = builder.as_dataset(split='train', as_supervised=True)
# ds_validation = builder.as_dataset(split='validation', as_supervised=True)
# ds_test = builder.as_dataset(split='test', as_supervised=True)
```
## Visualize samples from train split
Let's take a look at some examples from the dataset including the class id and the class name for the image samples and their labels.
```
_ = tfds.show_examples(ds_train, ds_info)
```
## Add images to be used as Unknown examples from TFDS datasets
Add additional unknown (negative) examples to the training dataset and assign a new unknown class label number to them. The goal is to have a model that, when used in practice (e.g. in the field), has the option of predicting "Unknown" when it sees something unexpected.
Below you can see a list of datasets that will be used to sample the additional unknown imagery. It includes 3 completely different datasets to increase diversity. One of them is a beans leaf disease dataset, so that the model has exposure to diseased plants other than cassava.
```
UNKNOWN_TFDS_DATASETS = [{
    'tfds_name': 'imagenet_v2/matched-frequency',
    'train_split': 'test[:80%]',
    'test_split': 'test[80%:]',
    'num_examples_ratio_to_normal': 1.0,
}, {
    'tfds_name': 'oxford_flowers102',
    'train_split': 'train',
    'test_split': 'test',
    'num_examples_ratio_to_normal': 1.0,
}, {
    'tfds_name': 'beans',
    'train_split': 'train',
    'test_split': 'test',
    'num_examples_ratio_to_normal': 1.0,
}]
```
The UNKNOWN datasets are also loaded from TFDS.
```
# Load unknown datasets.
weights = [
spec['num_examples_ratio_to_normal'] for spec in UNKNOWN_TFDS_DATASETS
]
num_unknown_train_examples = sum(
int(w * ds_train.cardinality().numpy()) for w in weights)
ds_unknown_train = tf.data.experimental.sample_from_datasets([
tfds.load(
name=spec['tfds_name'], split=spec['train_split'],
as_supervised=True).repeat(-1) for spec in UNKNOWN_TFDS_DATASETS
], weights).take(num_unknown_train_examples)
ds_unknown_train = ds_unknown_train.apply(
tf.data.experimental.assert_cardinality(num_unknown_train_examples))
ds_unknown_tests = [
tfds.load(
name=spec['tfds_name'], split=spec['test_split'], as_supervised=True)
for spec in UNKNOWN_TFDS_DATASETS
]
ds_unknown_test = ds_unknown_tests[0]
for ds in ds_unknown_tests[1:]:
    ds_unknown_test = ds_unknown_test.concatenate(ds)
# All examples from the unknown datasets will get a new class label number.
num_normal_classes = len(ds_info.features['label'].names)
unknown_label_value = tf.convert_to_tensor(num_normal_classes, tf.int64)
ds_unknown_train = ds_unknown_train.map(lambda image, _:
(image, unknown_label_value))
ds_unknown_test = ds_unknown_test.map(lambda image, _:
(image, unknown_label_value))
# Merge the normal train dataset with the unknown train dataset.
weights = [
ds_train.cardinality().numpy(),
ds_unknown_train.cardinality().numpy()
]
ds_train_with_unknown = tf.data.experimental.sample_from_datasets(
[ds_train, ds_unknown_train], [float(w) for w in weights])
ds_train_with_unknown = ds_train_with_unknown.apply(
tf.data.experimental.assert_cardinality(sum(weights)))
print((f"Added {ds_unknown_train.cardinality().numpy()} negative examples. "
       f"Training dataset now has {ds_train_with_unknown.cardinality().numpy()}"
       ' examples in total.'))
```
## Apply augmentations
For all the images, to make them more diverse, you'll apply some augmentation, like changes in:
- Brightness
- Contrast
- Saturation
- Hue
- Crop
These types of augmentations help make the model more robust to variations in image inputs.
```
def random_crop_and_random_augmentations_fn(image):
    # preprocess_for_train does random crop and resize internally.
    image = image_preprocessing.preprocess_for_train(image)
    image = tf.image.random_brightness(image, 0.2)
    image = tf.image.random_contrast(image, 0.5, 2.0)
    image = tf.image.random_saturation(image, 0.75, 1.25)
    image = tf.image.random_hue(image, 0.1)
    return image

def random_crop_fn(image):
    # preprocess_for_train does random crop and resize internally.
    image = image_preprocessing.preprocess_for_train(image)
    return image

def resize_and_center_crop_fn(image):
    image = tf.image.resize(image, (256, 256))
    image = image[16:240, 16:240]
    return image
no_augment_fn = lambda image: image
train_augment_fn = lambda image, label: (
random_crop_and_random_augmentations_fn(image), label)
eval_augment_fn = lambda image, label: (resize_and_center_crop_fn(image), label)
```
To apply the augmentations, use the `map` method of the Dataset class.
```
ds_train_with_unknown = ds_train_with_unknown.map(train_augment_fn)
ds_validation = ds_validation.map(eval_augment_fn)
ds_test = ds_test.map(eval_augment_fn)
ds_unknown_test = ds_unknown_test.map(eval_augment_fn)
```
## Wrap the data into Model Maker friendly format
To use these datasets with Model Maker, they need to be wrapped in an ImageClassifierDataLoader class.
```
label_names = ds_info.features['label'].names + ['UNKNOWN']
train_data = ImageClassifierDataLoader(ds_train_with_unknown,
ds_train_with_unknown.cardinality(),
label_names)
validation_data = ImageClassifierDataLoader(ds_validation,
ds_validation.cardinality(),
label_names)
test_data = ImageClassifierDataLoader(ds_test, ds_test.cardinality(),
label_names)
unknown_test_data = ImageClassifierDataLoader(ds_unknown_test,
ds_unknown_test.cardinality(),
label_names)
```
## Run training
[TensorFlow Hub](https://tfhub.dev) has multiple models available for Transfer Learning.
Here you can choose one and you can also keep experimenting with other ones to try to get better results.
If you want even more models to try, you can add them from this [collection](https://tfhub.dev/google/collections/image/1).
```
#@title Choose a base model
model_name = 'mobilenet_v3_large_100_224' #@param ['cropnet_cassava', 'cropnet_concat', 'cropnet_imagenet', 'mobilenet_v3_large_100_224']
map_model_name = {
'cropnet_cassava':
'https://tfhub.dev/google/cropnet/feature_vector/cassava_disease_V1/1',
'cropnet_concat':
'https://tfhub.dev/google/cropnet/feature_vector/concat/1',
'cropnet_imagenet':
'https://tfhub.dev/google/cropnet/feature_vector/imagenet/1',
'mobilenet_v3_large_100_224':
'https://tfhub.dev/google/imagenet/mobilenet_v3_large_100_224/feature_vector/5',
}
model_handle = map_model_name[model_name]
```
To fine-tune the model, you will use Model Maker. This makes the overall solution easier since, after training, it also converts the model to TFLite.
Model Maker makes this conversion as good as possible and includes all the necessary information to easily deploy the model on-device later.
The model spec is how you tell Model Maker which base model you'd like to use.
```
image_model_spec = ModelSpec(uri=model_handle)
```
One important detail here is setting `train_whole_model`, which fine-tunes the base model during training. This makes the process slower, but the final model has a higher accuracy. Setting `shuffle` makes sure the model sees the data in a randomly shuffled order, which is a best practice for model learning.
```
model = image_classifier.create(
train_data,
model_spec=image_model_spec,
batch_size=128,
learning_rate=0.03,
epochs=5,
shuffle=True,
train_whole_model=True,
validation_data=validation_data)
```
## Evaluate model on test split
```
model.evaluate(test_data)
```
To have an even better understanding of the fine tuned model, it's good to analyse the confusion matrix. This will show how often one class is predicted as another.
```
def predict_class_label_number(dataset):
    """Runs inference and returns predictions as class label numbers."""
    rev_label_names = {l: i for i, l in enumerate(label_names)}
    return [
        rev_label_names[o[0][0]]
        for o in model.predict_top_k(dataset, batch_size=128)
    ]

def show_confusion_matrix(cm, labels):
    plt.figure(figsize=(10, 8))
    sns.heatmap(cm, xticklabels=labels, yticklabels=labels,
                annot=True, fmt='g')
    plt.xlabel('Prediction')
    plt.ylabel('Label')
    plt.show()
confusion_mtx = tf.math.confusion_matrix(
    list(ds_test.map(lambda x, y: y)),
    predict_class_label_number(test_data),
    num_classes=len(label_names))
show_confusion_matrix(confusion_mtx, label_names)
```
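In plain Python, the matrix being plotted is just a table of counts: entry (i, j) is the number of samples whose true class is i and whose predicted class is j. A minimal sketch on toy labels (not the crop data):

```python
def confusion_matrix(labels, preds, num_classes):
    # cm[i][j] counts samples with true class i predicted as class j
    cm = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(labels, preds):
        cm[t][p] += 1
    return cm

cm = confusion_matrix([0, 0, 1, 2], [0, 1, 1, 2], num_classes=3)
print(cm)  # [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
```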
## Evaluate model on unknown test data
In this evaluation we expect the model to have an accuracy of almost 1. None of the images in this test set belong to the regular dataset, so we expect the model to predict the "Unknown" class label for all of them.
```
model.evaluate(unknown_test_data)
```
Print the confusion matrix.
```
unknown_confusion_mtx = tf.math.confusion_matrix(
    list(ds_unknown_test.map(lambda x, y: y)),
    predict_class_label_number(unknown_test_data),
    num_classes=len(label_names))
show_confusion_matrix(unknown_confusion_mtx, label_names)
```
## Export the model as TFLite and SavedModel
Now we can export the trained model in TFLite and SavedModel formats, for deploying on-device and for running inference in TensorFlow.
```
tflite_filename = f'{TFLITE_NAME_PREFIX}_model_{model_name}.tflite'
model.export(export_dir='.', tflite_filename=tflite_filename)
# Export saved model version.
model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL)
```
## Next steps
The model that you've just trained can be used on mobile devices and even deployed in the field!
**To download the model, click the folder icon for the Files menu on the left side of the colab, and choose the download option.**
The same technique used here could be applied to other plant disease tasks that might be more suitable for your use case, or to any other type of image classification task. If you want to follow up and deploy to an Android app, you can continue with this [Android quickstart guide](https://www.tensorflow.org/lite/guide/android).
```
#export
from fastai.basics import *
#hide
from nbdev.showdoc import *
#default_exp callback.hook
```
# Model hooks
> Callback and helper function to add hooks in models
```
from fastai.test_utils import *
```
## What are hooks?
Hooks are functions you can attach to a particular layer in your model that will be executed in the forward pass (for forward hooks) or backward pass (for backward hooks). Here we begin with an introduction to hooks, but you should jump to `HookCallback` if you want to quickly implement one (and read the following example `ActivationStats`).
Forward hooks are functions that take three arguments: the layer it's applied to, the input of that layer and the output of that layer.
```
tst_model = nn.Linear(5,3)
def example_forward_hook(m,i,o): print(m,i,o)
x = torch.randn(4,5)
hook = tst_model.register_forward_hook(example_forward_hook)
y = tst_model(x)
hook.remove()
```
Backward hooks are functions that take three arguments: the layer it's applied to, the gradients of the loss with respect to the input, and the gradients with respect to the output.
```
def example_backward_hook(m,gi,go): print(m,gi,go)
hook = tst_model.register_backward_hook(example_backward_hook)
x = torch.randn(4,5)
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
hook.remove()
```
Hooks can change the input/output of a layer, or the gradients, and print values or shapes. If you want to store something related to these inputs/outputs, it's best to associate your hook with a class, so that it can store the data in the state of an instance of that class.
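As a sketch of that pattern in plain PyTorch (the `OutputStore` name is made up for this example), a small class instance acts as the hook and accumulates what it sees:

```python
import torch
import torch.nn as nn

class OutputStore:
    "Minimal class-based forward hook that keeps every output it sees."
    def __init__(self): self.outputs = []
    def __call__(self, module, inputs, output):
        # detach so the stored tensors don't keep the autograd graph alive
        self.outputs.append(output.detach())

layer = nn.Linear(5, 3)
store = OutputStore()
handle = layer.register_forward_hook(store)
for _ in range(2): layer(torch.randn(4, 5))
handle.remove()
print(len(store.outputs), store.outputs[0].shape)
```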
## Hook -
```
#export
@docs
class Hook():
"Create a hook on `m` with `hook_func`."
def __init__(self, m, hook_func, is_forward=True, detach=True, cpu=False, gather=False):
store_attr('hook_func,detach,cpu,gather')
f = m.register_forward_hook if is_forward else m.register_backward_hook
self.hook = f(self.hook_fn)
self.stored,self.removed = None,False
def hook_fn(self, module, input, output):
"Applies `hook_func` to `module`, `input`, `output`."
if self.detach:
input,output = to_detach(input, cpu=self.cpu, gather=self.gather),to_detach(output, cpu=self.cpu, gather=self.gather)
self.stored = self.hook_func(module, input, output)
def remove(self):
"Remove the hook from the model."
if not self.removed:
self.hook.remove()
self.removed=True
def __enter__(self, *args): return self
def __exit__(self, *args): self.remove()
_docs = dict(__enter__="Register the hook",
__exit__="Remove the hook")
```
This will be called during the forward pass if `is_forward=True`, the backward pass otherwise, and will optionally `detach`, `gather` and put on the `cpu` the (gradient of the) input/output of the model before passing them to `hook_func`. The result of `hook_func` will be stored in the `stored` attribute of the `Hook`.
```
tst_model = nn.Linear(5,3)
hook = Hook(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hook.stored, y)
show_doc(Hook.hook_fn)
show_doc(Hook.remove)
```
> Note: It's important to properly remove your hooks from your model when you're done, to avoid them being called again the next time your model is applied to some inputs, and to free the memory that goes with their state.
```
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
hook = Hook(tst_model, example_forward_hook)
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
hook.remove()
test_stdout(lambda: tst_model(x), "")
```
### Context Manager
Since it's very important to remove your `Hook` even if your code is interrupted by some bug, `Hook` can be used as a context manager.
```
show_doc(Hook.__enter__)
show_doc(Hook.__exit__)
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
with Hook(tst_model, example_forward_hook) as h:
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
test_stdout(lambda: tst_model(x), "")
#export
def _hook_inner(m,i,o): return o if isinstance(o,Tensor) or is_listy(o) else list(o)
def hook_output(module, detach=True, cpu=False, grad=False):
"Return a `Hook` that stores activations of `module` in `self.stored`"
return Hook(module, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
```
The activations stored are the gradients if `grad=True`, otherwise the output of `module`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU.
```
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
with hook_output(tst_model) as h:
y = tst_model(x)
test_eq(y, h.stored)
assert not h.stored.requires_grad
with hook_output(tst_model, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
test_close(2*y / y.numel(), h.stored[0])
#cuda
with hook_output(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
test_eq(h.stored.device, torch.device('cpu'))
```
## Hooks -
```
#export
@docs
class Hooks():
"Create several hooks on the modules in `ms` with `hook_func`."
def __init__(self, ms, hook_func, is_forward=True, detach=True, cpu=False):
self.hooks = [Hook(m, hook_func, is_forward, detach, cpu) for m in ms]
def __getitem__(self,i): return self.hooks[i]
def __len__(self): return len(self.hooks)
def __iter__(self): return iter(self.hooks)
@property
def stored(self): return L(o.stored for o in self)
def remove(self):
"Remove the hooks from the model."
for h in self.hooks: h.remove()
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
_docs = dict(stored = "The states saved in each hook.",
__enter__="Register the hooks",
__exit__="Remove the hooks")
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
hooks = Hooks(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hooks.stored[0], layers[0](x))
test_eq(hooks.stored[1], F.relu(layers[0](x)))
test_eq(hooks.stored[2], y)
hooks.remove()
show_doc(Hooks.stored, name='Hooks.stored')
show_doc(Hooks.remove)
```
### Context Manager
Like `Hook`, you can use `Hooks` as a context manager.
```
show_doc(Hooks.__enter__)
show_doc(Hooks.__exit__)
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
with Hooks(layers, lambda m,i,o: o) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
#export
def hook_outputs(modules, detach=True, cpu=False, grad=False):
"Return `Hooks` that store activations of all `modules` in `self.stored`"
return Hooks(modules, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
```
The activations stored are the gradients if `grad=True`, otherwise the output of `modules`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU.
```
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
x = torch.randn(4,5)
with hook_outputs(layers) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
for s in h.stored: assert not s.requires_grad
with hook_outputs(layers, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
g = 2*y / y.numel()
test_close(g, h.stored[2][0])
g = g @ layers[2].weight.data
test_close(g, h.stored[1][0])
g = g * (layers[0](x) > 0).float()
test_close(g, h.stored[0][0])
#cuda
with hook_outputs(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
for s in h.stored: test_eq(s.device, torch.device('cpu'))
#export
def dummy_eval(m, size=(64,64)):
"Evaluate `m` on a dummy input of a certain `size`"
ch_in = in_channels(m)
x = one_param(m).new(1, ch_in, *size).requires_grad_(False).uniform_(-1.,1.)
with torch.no_grad(): return m.eval()(x)
#export
def model_sizes(m, size=(64,64)):
"Pass a dummy input through the model `m` to get the various sizes of activations."
with hook_outputs(m) as hooks:
_ = dummy_eval(m, size=size)
return [o.stored.shape for o in hooks]
m = nn.Sequential(ConvLayer(3, 16), ConvLayer(16, 32, stride=2), ConvLayer(32, 32))
test_eq(model_sizes(m), [[1, 16, 64, 64], [1, 32, 32, 32], [1, 32, 32, 32]])
#export
def num_features_model(m):
"Return the number of output features for `m`."
sz,ch_in = 32,in_channels(m)
while True:
#Trying for a few sizes in case the model requires a big input size.
try:
return model_sizes(m, (sz,sz))[-1][1]
except Exception as e:
sz *= 2
if sz > 2048: raise e
m = nn.Sequential(nn.Conv2d(5,4,3), nn.Conv2d(4,3,3))
test_eq(num_features_model(m), 3)
m = nn.Sequential(ConvLayer(3, 16), ConvLayer(16, 32, stride=2), ConvLayer(32, 32))
test_eq(num_features_model(m), 32)
```
## HookCallback -
To make hooks easy to use, we wrapped a version in a Callback where you just have to implement a `hook` function (plus any element you might need).
```
#export
def has_params(m):
"Check if `m` has at least one parameter"
return len(list(m.parameters())) > 0
assert has_params(nn.Linear(3,4))
assert has_params(nn.LSTM(4,5,2))
assert not has_params(nn.ReLU())
#export
@funcs_kwargs
class HookCallback(Callback):
"`Callback` that can be used to register hooks on `modules`"
_methods = ["hook"]
hook = noops
def __init__(self, modules=None, every=None, remove_end=True, is_forward=True, detach=True, cpu=True, **kwargs):
store_attr('modules,every,remove_end,is_forward,detach,cpu')
assert not kwargs
def before_fit(self):
"Register the `Hooks` on `self.modules`."
if self.modules is None: self.modules = [m for m in flatten_model(self.model) if has_params(m)]
if self.every is None: self._register()
def before_batch(self):
if self.every is None: return
if self.training and self.train_iter%self.every==0: self._register()
def after_batch(self):
if self.every is None: return
if self.training and self.train_iter%self.every==0: self._remove()
def after_fit(self):
"Remove the `Hooks`."
if self.remove_end: self._remove()
def _register(self): self.hooks = Hooks(self.modules, self.hook, self.is_forward, self.detach, self.cpu)
def _remove(self):
if getattr(self, 'hooks', None): self.hooks.remove()
def __del__(self): self._remove()
```
You can either subclass and implement a `hook` function (along with any event you want) or pass a `hook` function when initializing. Such a function needs to take three arguments: a layer, its input and its output (for a backward hook, input means the gradient with respect to the inputs, and output the gradient with respect to the output), and can either modify them or update some state according to them.
If not provided, `modules` will default to the layers of `self.model` that have a `weight` attribute. Depending on `remove_end`, the hooks will be properly removed at the end of training (or in case of error). `is_forward`, `detach` and `cpu` are passed to `Hooks`.
The function called at each forward (or backward) pass is `self.hook` and must be implemented when subclassing this callback.
```
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def after_batch(self): test_eq(self.hooks.stored[0], self.pred)
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
class TstCallback(HookCallback):
def __init__(self, modules=None, remove_end=True, detach=True, cpu=False):
super().__init__(modules, None, remove_end, False, detach, cpu)
def hook(self, m, i, o): return o
def after_batch(self):
if self.training:
test_eq(self.hooks.stored[0][0], 2*(self.pred-self.y)/self.pred.shape[0])
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
show_doc(HookCallback.before_fit)
show_doc(HookCallback.after_fit)
```
## Model summary
```
#export
def total_params(m):
"Give the number of parameters of a module and if it's trainable or not"
params = sum([p.numel() for p in m.parameters()])
trains = [p.requires_grad for p in m.parameters()]
return params, (False if len(trains)==0 else trains[0])
test_eq(total_params(nn.Linear(10,32)), (32*10+32,True))
test_eq(total_params(nn.Linear(10,32, bias=False)), (32*10,True))
test_eq(total_params(nn.BatchNorm2d(20)), (20*2, True))
test_eq(total_params(nn.BatchNorm2d(20, affine=False)), (0,False))
test_eq(total_params(nn.Conv2d(16, 32, 3)), (16*32*3*3 + 32, True))
test_eq(total_params(nn.Conv2d(16, 32, 3, bias=False)), (16*32*3*3, True))
#First ih layer 20--10, all else 10--10. *4 for the four gates
test_eq(total_params(nn.LSTM(20, 10, 2)), (4 * (20*10 + 10) + 3 * 4 * (10*10 + 10), True))
# export
def layer_info(learn, *xb):
"Return layer infos of `model` on `xb` (only support batch first inputs)"
def _track(m, i, o): return (m.__class__.__name__,)+total_params(m)+(apply(lambda x:x.shape, o),)
with Hooks(flatten_model(learn.model), _track) as h:
batch = apply(lambda o:o[:1], xb)
with learn: r = learn.get_preds(dl=[batch], inner=True, reorder=False)
return h.stored
def _m(): return nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
sample_input = torch.randn((16, 1))
test_eq(layer_info(synth_learner(model=_m()), sample_input), [
('Linear', 100, True, [1, 50]),
('ReLU', 0, False, [1, 50]),
('BatchNorm1d', 100, True, [1, 50]),
('Linear', 51, True, [1, 1])
])
#hide
# Test for multiple inputs model
class _2InpModel(Module):
def __init__(self):
super().__init__()
self.seq = nn.Sequential(nn.Linear(2,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
def forward(self, *inps):
outputs = torch.cat(inps, dim=-1)
return self.seq(outputs)
sample_inputs = (torch.randn(16, 1), torch.randn(16, 1))
learn = synth_learner(model=_2InpModel())
learn.dls.n_inp = 2
test_eq(layer_info(learn, *sample_inputs), [
('Linear', 150, True, [1, 50]),
('ReLU', 0, False, [1, 50]),
('BatchNorm1d', 100, True, [1, 50]),
('Linear', 51, True, [1, 1])
])
#export
def _print_shapes(o, bs):
if isinstance(o, torch.Size): return ' x '.join([str(bs)] + [str(t) for t in o[1:]])
else: return str([_print_shapes(x, bs) for x in o])
# export
def module_summary(learn, *xb):
"Print a summary of `model` using `xb`"
#Individual parameters wrapped in ParameterModule aren't called through the hooks in `layer_info`,
# thus are not counted inside the summary
#TODO: find a way to have them counted in param number somehow
infos = layer_info(learn, *xb)
n,bs = 64,find_bs(xb)
inp_sz = _print_shapes(apply(lambda x:x.shape, xb), bs)
res = f"{learn.model.__class__.__name__} (Input shape: {inp_sz})\n"
res += "=" * n + "\n"
res += f"{'Layer (type)':<20} {'Output Shape':<20} {'Param #':<10} {'Trainable':<10}\n"
res += "=" * n + "\n"
ps,trn_ps = 0,0
infos = [o for o in infos if o is not None] #see comment in previous cell
for typ,np,trn,sz in infos:
if sz is None: continue
ps += np
if trn: trn_ps += np
res += f"{typ:<20} {_print_shapes(sz, bs)[:19]:<20} {np:<10,} {str(trn):<10}\n"
res += "_" * n + "\n"
res += f"\nTotal params: {ps:,}\n"
res += f"Total trainable params: {trn_ps:,}\n"
res += f"Total non-trainable params: {ps - trn_ps:,}\n\n"
return PrettyString(res)
#export
@patch
def summary(self:Learner):
"Print a summary of the model, optimizer and loss function."
xb = self.dls.train.one_batch()[:self.dls.train.n_inp]
res = module_summary(self, *xb)
res += f"Optimizer used: {self.opt_func}\nLoss function: {self.loss_func}\n\n"
if self.opt is not None:
res += f"Model " + ("unfrozen\n\n" if self.opt.frozen_idx==0 else f"frozen up to parameter group #{self.opt.frozen_idx}\n\n")
res += "Callbacks:\n" + '\n'.join(f" - {cb}" for cb in sort_by_run(self.cbs))
return PrettyString(res)
learn = synth_learner(model=_m())
learn.summary()
#hide
#cuda
learn = synth_learner(model=_m(), cuda=True)
learn.summary()
#hide
# Test for multiple output
class _NOutModel(Module):
def __init__(self): self.lin = nn.Linear(5, 6)
def forward(self, x1):
x = torch.randn((10, 5))
return x,self.lin(x)
learn = synth_learner(model = _NOutModel())
learn.summary() # Output Shape should be (50, 16, 256), (1, 16, 256)
```
## Activation graphs
This is an example of a `HookCallback` that stores the means, stds and histograms of the activations that go through the network.
```
#exports
@delegates()
class ActivationStats(HookCallback):
"Callback that record the mean and std of activations."
run_before=TrainEvalCallback
def __init__(self, with_hist=False, **kwargs):
super().__init__(**kwargs)
self.with_hist = with_hist
def before_fit(self):
"Initialize stats."
super().before_fit()
self.stats = L()
def hook(self, m, i, o):
o = o.float()
res = {'mean': o.mean().item(), 'std': o.std().item(),
'near_zero': (o<=0.05).long().sum().item()/o.numel()}
if self.with_hist: res['hist'] = o.histc(40,0,10)
return res
def after_batch(self):
"Take the stored results and puts it in `self.stats`"
if self.training and (self.every is None or self.train_iter%self.every == 0):
self.stats.append(self.hooks.stored)
super().after_batch()
def layer_stats(self, idx):
lstats = self.stats.itemgot(idx)
return L(lstats.itemgot(o) for o in ('mean','std','near_zero'))
def hist(self, idx):
res = self.stats.itemgot(idx).itemgot('hist')
return torch.stack(tuple(res)).t().float().log1p()
def color_dim(self, idx, figsize=(10,5), ax=None):
"The 'colorful dimension' plot"
res = self.hist(idx)
if ax is None: ax = subplots(figsize=figsize)[1][0]
ax.imshow(res, origin='lower')
ax.axis('off')
def plot_layer_stats(self, idx):
_,axs = subplots(1, 3, figsize=(12,3))
for o,ax,title in zip(self.layer_stats(idx),axs,('mean','std','% near zero')):
ax.plot(o)
ax.set_title(title)
learn = synth_learner(n_trn=5, cbs = ActivationStats(every=4))
learn.fit(1)
learn.activation_stats.stats
```
The first line contains the means of the outputs of the model for each batch in the training set, the second line their standard deviations.
```
import math
def test_every(n_tr, every):
"create a learner, fit, then check number of stats collected"
learn = synth_learner(n_trn=n_tr, cbs=ActivationStats(every=every))
learn.fit(1)
expected_stats_len = math.ceil(n_tr / every)
test_eq(expected_stats_len, len(learn.activation_stats.stats))
for n_tr in [11, 12, 13]:
test_every(n_tr, 4)
test_every(n_tr, 1)
#hide
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def before_fit(self):
super().before_fit()
self.means,self.stds = [],[]
def after_batch(self):
if self.training:
self.means.append(self.hooks.stored[0].mean().item())
self.stds.append (self.hooks.stored[0].std() .item())
learn = synth_learner(n_trn=5, cbs = [TstCallback(), ActivationStats()])
learn.fit(1)
test_eq(learn.activation_stats.stats.itemgot(0).itemgot("mean"), learn.tst.means)
test_eq(learn.activation_stats.stats.itemgot(0).itemgot("std"), learn.tst.stds)
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
## 105 - Training Regressions
This example notebook is similar to
[Notebook 102](102 - Regression Example with Flight Delay Dataset.ipynb).
In this example, we will demonstrate the use of `DataConversion()` in two
ways. First, to convert the data type of several columns after the dataset
has been read in to the Spark DataFrame instead of specifying the data types
as the file is read in. Second, to convert columns to categorical columns
instead of iterating over the columns and applying the `StringIndexer`.
This sample demonstrates how to use the following APIs:
- [`TrainRegressor`
](http://mmlspark.azureedge.net/docs/pyspark/TrainRegressor.html)
- [`ComputePerInstanceStatistics`
](http://mmlspark.azureedge.net/docs/pyspark/ComputePerInstanceStatistics.html)
- [`DataConversion`
](http://mmlspark.azureedge.net/docs/pyspark/DataConversion.html)
First, import the pandas package
```
import pandas as pd
```
Next, import the CSV dataset: retrieve the file if needed, save it locally,
read the data into a pandas dataframe via `read_csv()`, then convert it to
a Spark dataframe.
Print the schema of the dataframe, and note the columns that are `long`.
```
dataFile = "On_Time_Performance_2012_9.csv"
import os, urllib
if not os.path.isfile(dataFile):
urllib.request.urlretrieve("https://mmlspark.azureedge.net/datasets/"+dataFile, dataFile)
flightDelay = spark.createDataFrame(pd.read_csv(dataFile))
# print some basic info
print("records read: " + str(flightDelay.count()))
print("Schema: ")
flightDelay.printSchema()
flightDelay.limit(10).toPandas()
```
Use the `DataConversion` transform API to convert the columns listed to
double.
The `DataConversion` API accepts the following types for the `convertTo`
parameter:
* `boolean`
* `byte`
* `short`
* `integer`
* `long`
* `float`
* `double`
* `string`
* `toCategorical`
* `clearCategorical`
* `date` -- converts a string or long to a date of the format
"yyyy-MM-dd HH:mm:ss" unless another format is specified by
the `dateTimeFormat` parameter.
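For intuition, the first two conversions used here have direct pandas analogs; the tiny frame below is hypothetical, not the flight-delay data:

```python
import pandas as pd

# hypothetical stand-in for a couple of flight-delay columns
df = pd.DataFrame({"Quarter": [3, 3, 4], "Carrier": ["AA", "DL", "AA"]})

# analog of DataConversion(convertTo="double")
df["Quarter"] = df["Quarter"].astype("float64")

# analog of DataConversion(convertTo="toCategorical")
df["Carrier"] = df["Carrier"].astype("category")

print(df.dtypes)
```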
Again, print the schema and note that the columns are now `double`
instead of long.
```
from mmlspark import DataConversion
flightDelay = DataConversion(cols=["Quarter","Month","DayofMonth","DayOfWeek",
"OriginAirportID","DestAirportID",
"CRSDepTime","CRSArrTime"],
convertTo="double") \
.transform(flightDelay)
flightDelay.printSchema()
flightDelay.limit(10).toPandas()
```
Split the dataset into train and test sets.
```
train, test = flightDelay.randomSplit([0.75, 0.25])
```
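For reference, the same 75/25 split can be sketched with pandas on a hypothetical frame (Spark's `randomSplit` additionally works lazily across partitions):

```python
import pandas as pd

df = pd.DataFrame({"x": range(100)})
train = df.sample(frac=0.75, random_state=0)  # 75% of rows, reproducible
test = df.drop(train.index)                   # the remaining 25%
print(len(train), len(test))  # 75 25
```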
Create a regressor model and train it on the dataset.
First, use `DataConversion` to convert the columns `Carrier`, `DepTimeBlk`,
and `ArrTimeBlk` to categorical data. Recall that in Notebook 102, this
was accomplished by iterating over the columns and converting the strings
to index values using the `StringIndexer` API. The `DataConversion` API
simplifies the task by allowing you to specify all columns that will have
the same end type in a single command.
Create a LinearRegression model using the Limited-memory BFGS solver
(`l-bfgs`), an `ElasticNet` mixing parameter of `0.3`, and a regularization
parameter of `0.1`.
Train the model with the `TrainRegressor` API fit on the training dataset.
```
from mmlspark import TrainRegressor, TrainedRegressorModel
from pyspark.ml.regression import LinearRegression
trainCat = DataConversion(cols=["Carrier","DepTimeBlk","ArrTimeBlk"],
convertTo="toCategorical") \
.transform(train)
testCat = DataConversion(cols=["Carrier","DepTimeBlk","ArrTimeBlk"],
convertTo="toCategorical") \
.transform(test)
lr = LinearRegression().setSolver("l-bfgs").setRegParam(0.1) \
.setElasticNetParam(0.3)
model = TrainRegressor(model=lr, labelCol="ArrDelay").fit(trainCat)
model.write().overwrite().save("flightDelayModel.mml")
```
Score the regressor on the test data.
```
flightDelayModel = TrainedRegressorModel.load("flightDelayModel.mml")
scoredData = flightDelayModel.transform(testCat)
scoredData.limit(10).toPandas()
```
Compute model metrics against the entire scored dataset
```
from mmlspark import ComputeModelStatistics
metrics = ComputeModelStatistics().transform(scoredData)
metrics.toPandas()
```
Finally, compute and show statistics on individual predictions in the test
dataset, demonstrating the usage of `ComputePerInstanceStatistics`
```
from mmlspark import ComputePerInstanceStatistics
evalPerInstance = ComputePerInstanceStatistics().transform(scoredData)
evalPerInstance.select("ArrDelay", "Scores", "L1_loss", "L2_loss") \
.limit(10).toPandas()
```
<a href="https://colab.research.google.com/github/luislauriano/data_science/blob/master/Analisando_dados_Airbnb_Rio_de_Janeiro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#**Airbnb Data Analysis - Rio de Janeiro**
Airbnb is often considered the **largest hotel company of our time**. With such growth it must own many hotels, right? No: the company owns **no hotels at all**. It works by connecting people looking for a place to stay on a trip with people who can offer accommodation in exchange for a nightly rate.
By the end of 2018, the startup, founded 10 years earlier, had already **hosted more than 300 million** people around the world, challenging traditional hotel chains.
One of Airbnb's initiatives is to make site data available for some of the world's main cities. For us in Brazil, only data for the city of Rio de Janeiro is provided. Through the [Inside Airbnb](http://insideairbnb.com/get-the-data.html) portal, it is possible to download a large amount of data for developing Data Science projects and solutions.
>During carnival and at the end of the year, Rio's hotel network sees strong growth; for 2020 the projection is that 100% of the city's hotel beds will be occupied during the event. Current data from Hotéis Rio
indicate that hotels in Barra da Tijuca and São Conrado report 84% occupancy, followed by Ipanema and Leblon with 80%, Leme and Copacabana with 78%, Botafogo and Flamengo with 89%, and downtown Rio reaching 83%.
**In this *notebook*, we will analyze the data for the city of Rio de Janeiro and see what insights can be extracted from the raw data.**
#**Data Acquisition**
```
# import the packages required for this analysis
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sea
%matplotlib inline
# import the listings.csv file into a DataFrame
df = pd.read_csv("http://data.insideairbnb.com/brazil/rj/rio-de-janeiro/2019-07-15/visualisations/listings.csv")
```
#**Data Analysis**
This step aims to build an initial situational awareness and allow an understanding of how the data is structured.
**Variable dictionary**
>* id - Number generated as the property's identifier
>* name - Name of the listed property
>* host_id - Identifier of the host
>* neighbourhood_group - Column with no valid values
>* neighbourhood - Name of the neighborhood
>* latitude - Latitude coordinate of the property
>* longitude - Longitude coordinate of the property
>* room_type - Type of room being offered
>* price - Price to rent the property
>* minimum_nights - Minimum number of nights required to book the property
>* number_of_reviews - Number of reviews the property has
>* last_review - Date of the last review
>* reviews_per_month - Number of reviews per month
>* calculated_host_listings_count - Number of properties the host owns
>* availability_365 - Number of days in the year (out of 365) the property is available
Before starting any analysis, let's see what our dataset looks like by examining the first 5 entries.
```
# show the first 5 entries (for more entries, put the desired number inside the parentheses)
df.head()
```
# **Q1. How many attributes (variables) and how many entries does our dataset have? What are the variable types?**
Let's move on, identify how many entries our dataset has, and check the type of each column.
>The dataset we downloaded is the "summary" version from Airbnb. On the same page where we downloaded the listings.csv file there is a more complete version, with 35847 entries and 106 variables (listings.csv.gz).
```
# identify the volume of data in the DataFrame
print(f'Entries/Rows: {df.shape[0]}')
print(f'Variables/Columns: {df.shape[1]}')
# check the type of each variable in the dataset
display(df.dtypes)
```
#**Q2. What percentage of values are missing in the dataset?**
The quality of a dataset is directly related to the number of missing values. It is important to understand early on whether these null values are significant relative to the total number of entries.
```
# sort the variables by their share of missing values, in descending order
# the sort_values function orders the result in ascending or descending order
(df.isnull().sum()/df.shape[0]).sort_values(ascending=False)
```
>* Note that the **neighbourhood_group** column is 100% missing.
>* The **reviews_per_month** and **last_review** variables are null in almost half of the rows.
>* The **name** variable has approximately 0.1% null values.
>* The **host_name** variable has approximately 0.005% null values.
# **Q3. How are the variables distributed?**
To check the distribution of the variables, I will plot a histogram for each one, since in this case histograms make the distributions easier to see and understand.
```
# plot histograms of the numeric variables
df.hist(bins=10, figsize=(15,10));
```
# **Q4. Are there outliers present?**
From the distribution shown in some of the histograms it is possible to see that there are outliers. Look, for example, at the variables **price**, **minimum_nights** and **calculated_host_listings_count**.
Notice that their values do not follow the overall distribution and distort the whole graphical representation.
>To confirm this, there are two quick ways that help detect outliers:
>* A statistical summary via the describe() method
>* A boxplot of the variable
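A third quick check is the classic 1.5 IQR rule; a minimal sketch on a hypothetical price series (not the Airbnb data itself):

```python
import pandas as pd

prices = pd.Series([100, 120, 90, 110, 95, 40000])  # hypothetical, with one extreme value
q1, q3 = prices.quantile([0.25, 0.75])
iqr = q3 - q1
outliers = prices[(prices < q1 - 1.5 * iqr) | (prices > q3 + 1.5 * iqr)]
print(outliers.tolist())  # [40000]
```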
```
# statistical summary of the numeric variables
df[['price', 'minimum_nights', 'number_of_reviews', 'reviews_per_month',
'calculated_host_listings_count', 'availability_365']].describe()
```
Looking at the statistical summary above, we can confirm some hypotheses, such as:
>* The price variable has 75% of its values below 599, yet its maximum value is 40000.
>* The minimum number of nights (minimum_nights) exceeds the real limit of 365 days in a year.
**Boxplot for minimum_nights**
```
# minimum_nights
df.minimum_nights.plot(kind='box', vert=False, figsize=(15, 3))
plt.show()
# count how many minimum_nights values exceed 30 days
print("minimum_nights: values above 30:")
print("{} entries".format(len(df[df.minimum_nights > 30])))
print("{:.4f}%".format((len(df[df.minimum_nights > 30]) / df.shape[0])*100))
```
**Boxplot for price**
```
# price
df.price.plot(kind='box', vert=False, figsize=(15, 3),)
plt.show()
# count how many price values exceed 1500
print("\nprice: values above 1500")
print("{} entries".format(len(df[df.price > 1500])))
print("{:.4f}%".format((len(df[df.price > 1500]) / df.shape[0])*100))
# df.price.plot(kind='box', vert=False, xlim=(0,1300), figsize=(15,3));
```
**Histograms without outliers**
Since we identified outliers in the **price** and **minimum_nights** variables, let's now clean the DataFrame of them and plot the histograms again.
```
# remove the outliers in a new DataFrame
df_clean = df.copy()
df_clean.drop(df_clean[df_clean.price > 1200].index, axis=0, inplace = True)
df_clean.drop(df_clean[df_clean.minimum_nights > 30].index, axis=0, inplace = True)
# remove the neighbourhood_group column, since it is empty
df_clean.drop('neighbourhood_group', axis=1, inplace = True)
# plot the histograms again, now on the cleaned data
df_clean.hist(bins=15, figsize=(15,10));
```
# **Q5. What is the correlation between the variables?**
Correlation means that a relationship exists between two or more things. In our context, we are looking for a relationship between two variables.
That relationship can be measured, and it is the coefficient's job to establish its strength.
>We can do this by:
>* Creating a correlation matrix.
* Generating a heatmap from that matrix, using the seaborn library.
```
# create a correlation matrix from the selected variables
corr = df_clean[['price', 'minimum_nights', 'number_of_reviews', 'reviews_per_month',
                 'calculated_host_listings_count', 'availability_365']].corr()
display(corr)
# plot a heatmap from the matrix generated above
sns.heatmap(corr, cmap='RdBu', fmt='.2f', square=True, linecolor='white', annot=True);
```
# **Q6. What is the most rented property type on Airbnb?**
The **room_type** column indicates the type of listing advertised on Airbnb. On the site, the options are entire homes/apartments, a private room, or even a room shared with other people.
Let's now analyse which property type is most rented, using the **room_type** column together with the **value_counts()** method, which tells us how many times each value occurs.
```
# show the count of each available property type
print(df_clean.room_type.value_counts())
# show the percentage of each available property type
df_clean.room_type.value_counts() / df_clean.shape[0]
```
# **Q7. What is the most expensive neighbourhood in Rio?**
One way to inspect one variable as a function of another is with groupby().
>Here, we want to compare the neighbourhoods **(neighbourhood)** by their average rental price **(price)**.
```
df_clean.groupby(['neighbourhood']).price.mean().sort_values(ascending=False)[:10]
```
# **What is the cheapest neighbourhood in Rio de Janeiro?**
```
df_clean.groupby(['neighbourhood']).price.mean().sort_values(ascending = True)[:10]
```
We can see that the Vaz Lobo neighbourhood leads the list of most expensive neighbourhoods, with an average of 967 reais per day, ahead of places like Barra da Tijuca and Leblon. Note also that Abolição sits in a higher price range than more upmarket neighbourhoods such as Leblon.
>Someone who did not know Rio might present these results without questioning them.
On the other hand, Coelho Neto leads the neighbourhoods with the lowest rental prices, at 48.5 reais per day, a difference of more than 900 reais compared to Vaz Lobo.
Whether we look at the most expensive or the cheapest neighbourhoods, both are located far from the centre of the city of Rio de Janeiro, taking Christ the Redeemer as a reference point. Vaz Lobo, with the highest rental price, is roughly 35 km from Christ the Redeemer, while Coelho Neto, with the lowest, is roughly 36 km away.
This distance between the listings and the city centre becomes clearer in a scatter plot.
>Since we were given longitude and latitude, we can draw a scatter plot from these variables, putting longitude on the x-axis and latitude on the y-axis:
```
# plot the listings by latitude-longitude
df_clean.plot(kind="scatter", x='longitude', y='latitude', alpha=0.4, c=df_clean['price'], s=8,
cmap=plt.get_cmap('jet'), figsize=(12,8));
```
# **Conclusion**
>The goal was a surface-level analysis of the Airbnb data for the state of Rio de Janeiro. Even so, this analysis already reveals outliers in some of the variables, which prevent a more precise statistical analysis.
>We conclude that, whether at the highest or the lowest prices, most listings are concentrated far from the centre of Rio and from famous tourist spots such as the Maracanã and Christ the Redeemer. So it is always worth paying attention to the location of the property you intend to rent and to your personal goal: if the trip is for tourism, there is the extra cost of travel, in both time and money. We could also see that less famous neighbourhoods have more listings than neighbourhoods such as Leblon.
>Finally, remember that the dataset used here is a summary and offers only a surface-level view of the situation; an analysis with a more comprehensive database is recommended.
# 3. FE_XGBClassifier_GSCV1
Kaggle score:
Abstract:
- Data for dates 7, 8, and 9, with fewer features
## Run name
```
import time
project_name = 'TalkingdataAFD2018'
step_name = 'FE_XGBClassifier_GSCV1'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = '%s_%s_%s' % (project_name, step_name, time_str)
print('run_name: %s' % run_name)
t0 = time.time()
```
## Important params
```
feature_run_name = 'TalkingdataAFD2018_FeatureExtraction_20180501_185800'
date = 100
print('date: ', date)
is_debug = False
print('is_debug: %s' % is_debug)
# epoch = 3
# batch_size = 2000 * 10000
# skip_data_len = (epoch - 1) * batch_size
# data_len = batch_size
# print('Echo: %s, Data rows: [%s, %s]' % (epoch, skip_data_len, skip_data_len + data_len))
# epoch = 2
# batch_size = 4000 * 10000
# skip_data_len = 59633310 - batch_size
# data_len = batch_size
# print('batch_size: %s' % batch_size)
# print('epoch: %s, data rows: [%s, %s]' % (epoch, skip_data_len, skip_data_len + data_len))
# run_name = '%s_date%s%s' % (run_name, date, epoch)
run_name = '%s_date%s' % (run_name, date)
print(run_name)
if is_debug:
test_n_rows = 1 * 10000
else:
test_n_rows = None
# test_n_rows = 18790469
day_rows = {
0: {
'n_skiprows': 1,
'n_rows': 1 * 10000
},
1: {
'n_skiprows': 1 * 10000,
'n_rows': 2 * 10000
},
6: {
'n_skiprows': 1,
'n_rows': 9308568,
'file_name': ''
},
7: {
'n_skiprows': 1 + 9308568,
'n_rows': 59633310,
'file_name': ''
},
8: {
'n_skiprows': 1 + 9308568 + 59633310,
'n_rows': 62945075,
'file_name': ''
},
9: {
'n_skiprows': 1 + 9308568 + 59633310 + 62945075,
'n_rows': 53016937,
'file_name': ''
}
}
# n_skiprows = day_rows[date]['n_skiprows']
# n_rows = day_rows[date]['n_rows']
```
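The `day_rows` table above is intended to drive pandas' `skiprows`/`nrows` arguments so that only one day's rows are read from `train.csv`. A sketch of that pattern, using an in-memory CSV rather than the real file (the column names and row counts here are made up):

```python
import io
import pandas as pd

# Stand-in for train.csv; the real file has one row per click
csv_text = "ip,is_attributed\n" + "\n".join(f"{i},{i % 2}" for i in range(10))

day = {'n_skiprows': 4, 'n_rows': 3}  # analogous to day_rows[date]

# Passing skiprows=range(1, n_skiprows + 1) keeps the header row intact
df = pd.read_csv(io.StringIO(csv_text),
                 skiprows=range(1, day['n_skiprows'] + 1),
                 nrows=day['n_rows'])
print(df['ip'].tolist())  # → [4, 5, 6]
```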
## Import PKGs
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from IPython.display import display
import os
import sys
import gc
import time
import random
import zipfile
import h5py
import pickle
import math
from PIL import Image
import shutil
from tqdm import tqdm
import multiprocessing
from multiprocessing import cpu_count
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score
random_num = np.random.randint(10000)
cpu_amount = cpu_count()
print('cpu_amount: %s' % (cpu_amount))
print('random_num: %s' % random_num)
```
## Project folders
```
cwd = os.getcwd()
input_folder = os.path.join(cwd, 'input')
output_folder = os.path.join(cwd, 'output')
model_folder = os.path.join(cwd, 'model')
feature_folder = os.path.join(cwd, 'feature')
log_folder = os.path.join(cwd, 'log')
print('input_folder: \t\t\t%s' % input_folder)
print('output_folder: \t\t\t%s' % output_folder)
print('model_folder: \t\t\t%s' % model_folder)
print('feature_folder: \t\t%s' % feature_folder)
print('log_folder: \t\t\t%s' % log_folder)
train_csv_file = os.path.join(input_folder, 'train.csv')
train_sample_csv_file = os.path.join(input_folder, 'train_sample.csv')
test_csv_file = os.path.join(input_folder, 'test.csv')
sample_submission_csv_file = os.path.join(input_folder, 'sample_submission.csv')
print('\ntrain_csv_file: \t\t%s' % train_csv_file)
print('train_sample_csv_file: \t\t%s' % train_sample_csv_file)
print('test_csv_file: \t\t\t%s' % test_csv_file)
print('sample_submission_csv_file: \t%s' % sample_submission_csv_file)
```
## Load feature
```
sample_submission_csv = pd.read_csv(sample_submission_csv_file)
print('sample_submission_csv.shape: \t', sample_submission_csv.shape)
display(sample_submission_csv.head(2))
print('train_csv: %.2f Mb' % (sys.getsizeof(sample_submission_csv)/1024./1024.))
def save_feature(x_data, y_data, file_name):
print(y_data[:5])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: %s' % file_name)
    with h5py.File(file_name, 'w') as h:
h.create_dataset('x_data', data=x_data)
h.create_dataset('y_data', data=y_data)
print('File saved: %s' % file_name)
def load_feature(file_name):
with h5py.File(file_name, 'r') as h:
x_data = np.array(h['x_data'])
y_data = np.array(h['y_data'])
print('File loaded: %s' % file_name)
print(y_data[:5])
return x_data, y_data
def save_test_feature(x_test, click_ids, file_name):
print(click_ids[:5])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: %s' % file_name)
    with h5py.File(file_name, 'w') as h:
h.create_dataset('x_test', data=x_test)
h.create_dataset('click_ids', data=click_ids)
print('File saved: %s' % file_name)
def load_test_feature(file_name):
with h5py.File(file_name, 'r') as h:
x_test = np.array(h['x_test'])
click_ids = np.array(h['click_ids'])
print('File loaded: %s' % file_name)
print(click_ids[:5])
return x_test, click_ids
def save_feature_map(feature_map, file_name):
print(feature_map[:5])
feature_map_encode = []
for item in feature_map:
feature_name_encode = item[1].encode('UTF-8')
feature_map_encode.append((item[0], feature_name_encode))
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: \t%s' % file_name)
    with h5py.File(file_name, 'w') as h:
h.create_dataset('feature_map', data=feature_map_encode)
print('File saved: \t%s' % file_name)
def load_feature_map(file_name):
with h5py.File(file_name, 'r') as h:
feature_map_encode = np.array(h['feature_map'])
print('File loaded: \t%s' % file_name)
feature_map = []
for item in feature_map_encode:
feature_name = item[1].decode('UTF-8')
feature_map.append((int(item[0]), feature_name))
print(feature_map[:5])
return feature_map
def describe(data):
if isinstance(data, list):
print(len(data), '\t\t%.2f Mb' % (sys.getsizeof(data)/1024./1024.))
else:
print(data.shape, '\t%.2f Mb' % (sys.getsizeof(data)/1024./1024.))
test_np = np.ones((5000, 10))
describe(test_np)
describe(list(range(5000)))
%%time
feature_files = []
x_train = []
y_train = []
if date == 100:
for key in [7, 8, 9]:
y_proba_file = os.path.join(feature_folder, 'feature_%s_date%s.p' % (feature_run_name, key))
feature_files.append(y_proba_file)
x_train_date, y_train_date = load_feature(y_proba_file)
x_train.append(x_train_date)
y_train.append(y_train_date)
x_train = np.concatenate(x_train, axis=0)
y_train = np.concatenate(y_train, axis=0)
else:
y_proba_file = os.path.join(feature_folder, 'feature_%s_date%s.p' % (feature_run_name, date))
feature_files.append(y_proba_file)
x_train, y_train = load_feature(y_proba_file)
# Use date 6 as validation dataset
y_proba_file = os.path.join(feature_folder, 'feature_%s_date%s.p' % (feature_run_name, 6))
feature_files.append(y_proba_file)
x_val, y_val = load_feature(y_proba_file)
y_proba_file = os.path.join(feature_folder, 'feature_%s_test.p' % feature_run_name)
feature_files.append(y_proba_file)
x_test, click_ids = load_test_feature(y_proba_file)
print('*' * 80)
describe(x_train)
describe(y_train)
describe(x_val)
describe(y_val)
describe(x_test)
describe(click_ids)
# feature_files = []
# y_proba_file = os.path.join(feature_folder, 'feature_%s_date%s.p' % (feature_run_name, 6))
# feature_files.append(y_proba_file)
# x_train, y_train = load_feature(y_proba_file)
# y_proba_file = os.path.join(feature_folder, 'feature_%s_test.p' % feature_run_name)
# feature_files.append(y_proba_file)
# x_test, click_ids = load_test_feature(y_proba_file)
# print('*' * 80)
# describe(x_train)
# describe(y_train)
# describe(x_val)
# describe(y_val)
# describe(x_test)
# describe(click_ids)
# from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
# x_train, x_val, y_train, y_val = train_test_split(x_data[skip_data_len: data_len], y_data[skip_data_len: data_len], test_size=0.1, random_state=random_num, shuffle=True)
# x_train, x_val, y_train, y_val = train_test_split(x_data, y_data, test_size=0.2, random_state=random_num, shuffle=False)
# x_train = x_train[skip_data_len: skip_data_len + data_len]
# y_train = y_train[skip_data_len: skip_data_len + data_len]
x_train, y_train = shuffle(x_train, y_train, random_state=random_num)
describe(x_train)
describe(y_train)
describe(x_val)
describe(y_val)
gc.collect()
```
## Train
```
%%time
import warnings
warnings.filterwarnings('ignore')
import xgboost as xgb
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score
clf = xgb.XGBClassifier(
max_depth=3,
learning_rate=0.1,
n_estimators=1000,
silent=False,
objective='gpu:binary:logistic',
booster='gbtree',
n_jobs=cpu_amount,
nthread=None,
gamma=0,
min_child_weight=1,
max_delta_step=0,
subsample=0.7,
colsample_bytree=1,
colsample_bylevel=1,
reg_alpha=1,
reg_lambda=2,
scale_pos_weight=97,
base_score=0.5,
random_state=random_num,
seed=None,
missing=None,
# booster params
num_boost_round=50,
early_stopping_rounds=30,
tree_method='gpu_hist',
predictor='gpu_predictor',
eval_metric=['auc'],
n_gpus=8,
)
parameters = {
# 'max_depth': [3, 5],
# 'n_estimators': [1000, 2000]
# 'subsample': [0.5, 1],
# 'colsample_bytree': [0.5, 1],
# 'reg_alpha':[0, 1, 5],
# 'reg_lambda':[1, 2, 8],
# 'scale_pos_weight': [1, 10, 80, 100, 120, 200]
}
grid_search = GridSearchCV(clf, parameters, verbose=2, cv=3, scoring='roc_auc')
# grid_search.fit(x_train[:100*10000], y_train[:100*10000])
grid_search.fit(x_train, y_train)
%%time
print('*' * 80)
y_train_proba = grid_search.predict_proba(x_train)
print(y_train.shape)
print(y_train_proba.shape)
print(y_train_proba[:10])
y_train_pred = (y_train_proba[:, 1]>=0.5).astype(int)
acc_train = accuracy_score(y_train, y_train_pred)
roc_train = roc_auc_score(y_train, y_train_proba[:, 1])
print('acc_train: %.4f \t roc_train: %.4f' % (acc_train, roc_train))
# y_train_pred = grid_search.predict(x_train)
# acc_train = accuracy_score(y_train, y_train_pred)
# roc_train = roc_auc_score(y_train, y_train_proba[:, 1])
# print('acc_train: %.4f \t roc_train: %.4f' % (acc_train, roc_train))
y_val_proba = grid_search.predict_proba(x_val)
# force both classes to appear in y_val so the accuracy and ROC AUC below are defined
y_val[0] = 0
y_val[1] = 1
print(y_val.shape)
print(y_val_proba.shape)
print(y_val_proba[:10])
y_val_pred = (y_val_proba[:, 1]>=0.5).astype(int)
acc_val = accuracy_score(y_val, y_val_pred)
roc_val = roc_auc_score(y_val, y_val_proba[:, 1])
print('acc_val: %.4f \t roc_val: %.4f' % (acc_val, roc_val))
print(grid_search.cv_results_)
print('*' * 60)
# grid_scores_ was removed in scikit-learn 0.20; cv_results_ (printed above) replaces it
print(grid_search.best_estimator_)
print(grid_search.best_score_)
print(grid_search.best_params_)
print(grid_search.scorer_)
print('*' * 60)
print(type(grid_search.best_estimator_))
print(dir(grid_search.best_estimator_))
cv_results = pd.DataFrame(grid_search.cv_results_)
display(cv_results)
from xgboost import plot_importance
fig, ax = plt.subplots(figsize=(10,int(x_train.shape[1]/2)))
xgb.plot_importance(grid_search.best_estimator_, height=0.5, ax=ax, max_num_features=300)
plt.show()
feature_map_file_name = os.path.join(feature_folder, 'feature_map_TalkingdataAFD2018_FeatureExtraction_20180501_053830_date7.p')
feature_map = load_feature_map(feature_map_file_name)
print(len(feature_map))
print(feature_map[:5])
feature_dict = {}
for item in feature_map:
feature_dict[item[0]] = item[1]
print(list(feature_dict.keys())[:5])
print(list(feature_dict.values())[:5])
# print(dir(grid_search.best_estimator_.get_booster()))
importance_score = grid_search.best_estimator_.get_booster().get_fscore()
sorted_score = []
for key in importance_score:
indx = int(key[1:])
sorted_score.append((importance_score[key], key, indx, feature_dict[indx]))
dtype = [('importance_score', int), ('key', 'S50'), ('indx', int), ('name', 'S50')]
importance_table = np.array(sorted_score, dtype=dtype)
display(importance_table[:2])
importance_table = np.sort(importance_table, axis=0, order=['importance_score'])
display(importance_table)
# del x_train; gc.collect()
# del x_val; gc.collect()
```
## Predict
```
run_name_acc = run_name + '_' + str(int(roc_val*10000)).zfill(4)
print(run_name_acc)
from sklearn.model_selection import KFold
kf = KFold(n_splits=10)
for train_index, test_index in kf.split(np.arange(105)):
    print(test_index)
kf = KFold(n_splits=10)
y_test_proba = []
for train_index, test_index in kf.split(x_test):
    y_test_proba_fold = grid_search.predict_proba(x_test[test_index])
    y_test_proba.append(y_test_proba_fold)
    print(y_test_proba_fold.shape)
y_test_proba = np.concatenate(y_test_proba, axis=0)
print(y_test_proba.shape)
print(y_test_proba[:20])
def save_proba(y_val_proba, y_val, y_test_proba, click_ids, file_name):
print(click_ids[:5])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: %s' % file_name)
with h5py.File(file_name) as h:
h.create_dataset('y_val_proba', data=y_val_proba)
h.create_dataset('y_val', data=y_val)
h.create_dataset('y_test_proba', data=y_test_proba)
h.create_dataset('click_ids', data=click_ids)
print('File saved: %s' % file_name)
def load_proba(file_name):
with h5py.File(file_name, 'r') as h:
y_val_proba = np.array(h['y_val_proba'])
y_val = np.array(h['y_val'])
y_test_proba = np.array(h['y_test_proba'])
click_ids = np.array(h['click_ids'])
print('File loaded: %s' % file_name)
print(click_ids[:5])
return y_val_proba, y_val, y_test_proba, click_ids
y_proba_file = os.path.join(model_folder, 'proba_%s.p' % run_name_acc)
save_proba(
y_val_proba,
y_val,
y_test_proba,
np.array(sample_submission_csv['click_id']),
y_proba_file
)
y_val_proba_true, y_val, y_test_proba_true, click_ids = load_proba(y_proba_file)
print(y_val_proba_true.shape)
print(y_val.shape)
print(y_test_proba_true.shape)
print(len(click_ids))
# %%time
submission_csv_file = os.path.join(output_folder, 'pred_%s.csv' % run_name_acc)
print(submission_csv_file)
submission_csv = pd.DataFrame({ 'click_id': click_ids , 'is_attributed': y_test_proba_true[:, 1] })
submission_csv.to_csv(submission_csv_file, index = False)
display(submission_csv.head())
print('Time cost: %.2f s' % (time.time() - t0))
print('random_num: ', random_num)
print('date: ', date)
print(run_name_acc)
print('Done!')
```
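Incidentally, the KFold loop in the cell above is only being used to split `x_test` into memory-sized chunks for `predict_proba`. `np.array_split` expresses the same idea more directly; this sketch uses a stand-in model rather than the notebook's `grid_search`:

```python
import numpy as np

class StubModel:
    """Stand-in for grid_search; real code would call its predict_proba."""
    def predict_proba(self, x):
        p = np.full(len(x), 0.5)
        return np.column_stack([1 - p, p])

model = StubModel()
x_test = np.zeros((105, 3))

# Predict in 10 chunks and stitch the results back together in order
chunks = [model.predict_proba(part) for part in np.array_split(x_test, 10)]
y_test_proba = np.concatenate(chunks, axis=0)
print(y_test_proba.shape)  # → (105, 2)
```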
```
from bayes_opt import BayesianOptimization
from bayes_opt import UtilityFunction
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
%matplotlib inline
```
# Target Function
Let's create a 1-D target function with multiple local maxima to test and visualize how the [BayesianOptimization](https://github.com/fmfn/BayesianOptimization) package works. The target function we will try to maximize is the following:
$$f(x) = e^{-(x - 2)^2} + e^{-\frac{(x - 6)^2}{10}} + \frac{1}{x^2 + 1}, $$ its maximum is at $x = 2$ and we will restrict the interval of interest to $x \in (-2, 10)$.
Notice that, in practice, this function is unknown; the only information we have is obtained by sequentially probing it at different points. Bayesian optimization works by constructing a posterior distribution over functions that best fits the observed data, and choosing the next probing point by balancing exploration and exploitation.
```
def target(x):
return np.exp(-(x - 2)**2) + np.exp(-(x - 6)**2/10) + 1/ (x**2 + 1)
x = np.linspace(-2, 10, 10000).reshape(-1, 1)
y = target(x)
plt.plot(x, y);
```
# Create a BayesianOptimization Object
Enter the target function to be maximized, its variable(s), and their corresponding ranges. A minimum of 2 initial guesses is necessary to kick-start the algorithm; these can be either random or user defined.
```
optimizer = BayesianOptimization(target, {'x': (-2, 10)}, random_state=27)
```
In this example we will use the Upper Confidence Bound (UCB) as our utility function. It has a free parameter $\kappa$ that controls the balance between exploration and exploitation; we will set $\kappa=5$, which, in this case, makes the algorithm quite bold.
```
optimizer.maximize(init_points=2, n_iter=0, kappa=5)
```
# Plotting and visualizing the algorithm at each step
### Let's first define a couple functions to make plotting easier
```
def posterior(optimizer, x_obs, y_obs, grid):
optimizer._gp.fit(x_obs, y_obs)
mu, sigma = optimizer._gp.predict(grid, return_std=True)
return mu, sigma
def plot_gp(optimizer, x, y):
fig = plt.figure(figsize=(16, 10))
steps = len(optimizer.space)
fig.suptitle(
'Gaussian Process and Utility Function After {} Steps'.format(steps),
fontdict={'size':30}
)
gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1])
axis = plt.subplot(gs[0])
acq = plt.subplot(gs[1])
x_obs = np.array([[res["params"]["x"]] for res in optimizer.res])
y_obs = np.array([res["target"] for res in optimizer.res])
mu, sigma = posterior(optimizer, x_obs, y_obs, x)
axis.plot(x, y, linewidth=3, label='Target')
axis.plot(x_obs.flatten(), y_obs, 'D', markersize=8, label=u'Observations', color='r')
axis.plot(x, mu, '--', color='k', label='Prediction')
axis.fill(np.concatenate([x, x[::-1]]),
np.concatenate([mu - 1.9600 * sigma, (mu + 1.9600 * sigma)[::-1]]),
alpha=.6, fc='c', ec='None', label='95% confidence interval')
axis.set_xlim((-2, 10))
axis.set_ylim((None, None))
axis.set_ylabel('f(x)', fontdict={'size':20})
axis.set_xlabel('x', fontdict={'size':20})
utility_function = UtilityFunction(kind="ucb", kappa=5, xi=0)
utility = utility_function.utility(x, optimizer._gp, 0)
acq.plot(x, utility, label='Utility Function', color='purple')
acq.plot(x[np.argmax(utility)], np.max(utility), '*', markersize=15,
label=u'Next Best Guess', markerfacecolor='gold', markeredgecolor='k', markeredgewidth=1)
acq.set_xlim((-2, 10))
acq.set_ylim((0, np.max(utility) + 0.5))
acq.set_ylabel('Utility', fontdict={'size':20})
acq.set_xlabel('x', fontdict={'size':20})
axis.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
acq.legend(loc=2, bbox_to_anchor=(1.01, 1), borderaxespad=0.)
```
### Two random points
After we probe two points at random, we can fit a Gaussian Process and start the Bayesian optimization procedure. Two points should give us an uneventful posterior, with the uncertainty growing as we move further from the observations.
```
plot_gp(optimizer, x, y)
```
### After one step of GP (and two random points)
```
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
```
### After two steps of GP (and two random points)
```
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
```
### After three steps of GP (and two random points)
```
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
```
### After four steps of GP (and two random points)
```
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
```
### After five steps of GP (and two random points)
```
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
```
### After six steps of GP (and two random points)
```
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
```
### After seven steps of GP (and two random points)
```
optimizer.maximize(init_points=0, n_iter=1, kappa=5)
plot_gp(optimizer, x, y)
```
# Stopping
After just a few points the algorithm was able to get pretty close to the true maximum. It is important to notice that the trade-off between exploration (exploring the parameter space) and exploitation (probing points near the current known maximum) is fundamental to a successful Bayesian optimization procedure. The utility function used here (Upper Confidence Bound - UCB) has a free parameter $\kappa$ that allows the user to make the algorithm more or less conservative. Additionally, the larger the initial set of random points explored, the less likely the algorithm is to get stuck in a local optimum by being too conservative.
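The UCB acquisition described above is simply the GP posterior mean plus $\kappa$ posterior standard deviations, so larger $\kappa$ favours uncertain regions. A minimal sketch (the numbers are made up):

```python
import numpy as np

def ucb(mu, sigma, kappa=5.0):
    """Upper Confidence Bound: posterior mean plus kappa standard deviations."""
    return mu + kappa * sigma

mu = np.array([0.2, 0.5, 0.4])      # hypothetical GP means at three candidates
sigma = np.array([0.30, 0.01, 0.20])  # hypothetical GP standard deviations

# With a bold kappa, the uncertain first point beats the higher-mean second one
print(np.argmax(ucb(mu, sigma)))      # → 0
print(np.argmax(ucb(mu, sigma, 0)))   # → 1 (pure exploitation picks the best mean)
```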
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os
os.chdir('..')
import pandas as pd
import numpy as np
import trading.start
import trading.portfolio as portfolio
import config.settings
from time import sleep
from core.utility import *
from trading.accountcurve import *
import data.db_mongo as db
import config.portfolios
from pylab import rcParams
rcParams['figure.figsize'] = 15, 10
p = portfolio.Portfolio(instruments=config.portfolios.p_trade)
i = p.instruments
from trading.bootstrap import bootstrap
from IPython.display import FileLink, FileLinks
```
## How our System Works
There are three main objects that are important to us:
* Instrument
* Portfolio
* AccountCurve
## Instrument
An instrument in our system refers to a particular futures market, such as Corn or Gold.
Instruments are defined in [config/instruments.py](file/../../config/instruments.py).
When our system starts up, we instantiate an [Instrument](file/../../core/instrument.py) object for each instrument.
The instrument object:
* Holds all the properties and settings which are used by the system for calculation and rolling
* Holds all the details which are used by Interactive Brokers for trading
* Holds methods for:
* Calculating position
* Calculating panama prices
* Downloading historical data
* Bootstrapping weights for what rules work best on this instrument, based on the historical data in the system.
## Volatility or Standard Deviation
There are many different ways of calculating volatility. In our simple case, we assume daily returns are Gaussian distributed, and we calculate volatility as being the standard deviation of daily returns. Under this assumption, volatility and standard deviation are the same thing.
It is worth pointing out that the assumption of Gaussian returns is a dangerous one, and is only suitable for a simple system like this. There are plenty of examples when a single day move in prices has been several standard deviations.
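Under that Gaussian assumption, the calculation is a one-liner in pandas. A sketch with synthetic prices, not the system's actual code path:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical price series: a random walk over one trading year
prices = pd.Series(100 + rng.normal(0, 1, 252).cumsum())

daily_returns = prices.diff()
daily_vol = daily_returns.std()           # daily standard deviation of returns
annual_vol = daily_vol * np.sqrt(252)     # annualised, assuming i.i.d. Gaussian days
print(round(float(annual_vol), 2))
```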
## Panama Prices
We have a problem when we try to see trends on futures contracts. In order to apply our moving averages, we need a continuous series of prices. The contracts are separate though, with different prices, which means there will be big jumps in the prices on the days we roll:
```
i['corn'].plot_contracts('2000','2002', panama=False)
```
We aren't interested in the absolute price, only the relative price (our moving averages effectively work on the first derivative; they are trying to find the trend). So what we can do is just move each contract up or down, so that it joins where the previous contract left off.
```
i['corn'].panama_prices().loc['2000':'2002'].plot(title='Panama prices for Corn')
```
Looking at this, we can see the absolute price on the left no longer makes any sense (it's negative). But this doesn't concern us; once we apply the moving average to this, it becomes irrelevant. In this way, we can now measure the trend across contracts.
There are various more advanced time-weighted versions that can preserve the price, but we don't need these. The simple method we use here is known as **Panama stitching**.
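The stitching itself can be sketched in a few lines: shift the earlier contract by the price gap at the roll date so the two series join. The prices below are hypothetical; the real implementation lives in `Instrument.panama_prices()`:

```python
import pandas as pd

# Hypothetical settlement prices for two contracts around a roll date
old = pd.Series([250.0, 252.0, 251.0],
                index=pd.to_datetime(['2000-01-03', '2000-01-04', '2000-01-05']))
new = pd.Series([258.0, 259.0, 260.0],
                index=pd.to_datetime(['2000-01-05', '2000-01-06', '2000-01-07']))

# Gap between the contracts on the roll date
gap = new.loc['2000-01-05'] - old.loc['2000-01-05']  # 7.0

# Shift the earlier contract up so it joins where the new one starts
stitched = pd.concat([old[:-1] + gap, new])
print(stitched.tolist())  # → [257.0, 259.0, 258.0, 259.0, 260.0]
```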
## Forecasts
We apply our forecasts to this data using `Instrument.forecasts()`.
For Corn, we can see we have several variations of the EWMAC rule, and another rule, carry_next (to be discussed in a later chapter).
```
i['corn'].forecasts()
```
We need to find a way to combine these forecasts into a single forecast that we can use to make a position. In our case, we will use a weighted mean.
To determine the weights, we use bootstrapping.
## Bootstrapping rule weights
In order to find the best weights for the rules, we take samples, with replacement, of contiguous sections of the price history, and see which combination of rules works best for each sample (we optimize the Sharpe ratio, discussed later). We repeat this several hundred times, until the weights are stable.
```
i['corn'].bootstrap()
```
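The resampling idea can be sketched as follows: repeatedly draw a contiguous block of rule returns, find the weights that maximise the in-sample Sharpe ratio for that block (here by a crude random search over the simplex), and average the winning weights across draws. Everything below is illustrative, not the code in `trading/bootstrap.py`:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily returns of three trading rules
rule_returns = rng.normal(0.01, 1.0, size=(1000, 3))

def sharpe(r):
    return r.mean() / r.std()

weights = []
for _ in range(50):                              # bootstrap draws
    start = rng.integers(0, 1000 - 250)
    block = rule_returns[start:start + 250]      # contiguous sample, with replacement
    # Crude search over random weight vectors on the simplex
    candidates = rng.dirichlet(np.ones(3), size=200)
    scores = [sharpe(block @ w) for w in candidates]
    weights.append(candidates[int(np.argmax(scores))])

print(np.round(np.mean(weights, axis=0), 2))     # averaged rule weights
```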
#### Things to note:
* The faster EWMAC variations seem to get much lower weights. This makes sense, as they cost more to trade, and our system accounts for trading costs.
* We really want uncorrelated rules. Having two very similar rules misses the point.
In reality, we would **never** use these weights, as they've been fitted to a single instrument. Instead, we would run this process on many different instruments, spanning hundreds of years of history, and then test the generated weights on a set of test data that wasn't used for the original bootstrapping, to check we haven't overfitted.
Let's use our systems predefined weights for the next step.
```
config.strategy.rule_weights
```
Forecast * weights =
```
i['corn'].weighted_forecast()
```
Putting this all together, we can calculate the position. This is based on an account size of £500,000; Corn is priced in USD. In `position()`, we convert our instrument return volatility to GBP, so that our daily volatility target is also in GBP.
Note that the contract numbers are integers. We round them because we cannot own fractional contracts.
```
i['corn'].position()
```
The other key feature of Instrument is the roll progression, which tells us which contract we want to be in at what time. Our rolling system is not smart; it works like clockwork. Using settings from the instruments.py file, we generate a sequence which says which contract we want to be in.
```
i['corn'].roll_progression()
```
## Account Curve
AccountCurve is an object that takes a list of positions from the instruments, and calculates the returns from owning those positions.
How it works:
1. Get the positions, either as a parameter, or from a portfolio that's been submitted.
1. Multiply the positions by the return for that day (`prices.diff()`) and remove:
* **Commissions** - fixed charges per contract charged by IB
* **Spreads** - the difference between the bid and ask prices. When we buy a contract, we pay the ask price, and when we sell, we receive the bid price; the difference is the 'spread' (paying it is sometimes called 'crossing the spread'). When we buy a contract, it shows an immediate loss, because we value it at the price we could sell it for. Paying the spread is a cost of trading.
* **Slippage** - this is the difference between our expected fill price for an order, based on our model, and the actual fill price. For example, we use yesterday's settlement price for Corn to make a trade today. However, we are unlikely to be able to buy Corn at exactly yesterday's settlement price. The difference is called 'slippage'. In our code, there is a factor called `slippage_multiplier`, which is set to `0.5`. This means we expect the price we transact at to be halfway between yesterday's settlement price and today's settlement price. It's a very simple estimate, but it's better than nothing.
1. Add all these together:
* **Position returns** - the returns from holding the portfolio that we had yesterday
* **Transaction returns** - the returns from just today's trades, minus expenses
The final result is a DataFrame at `accountCurve.returns()`, which shows how much money we would have made on a daily basis.
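In vectorised form, the daily return calculation looks roughly like this (the prices, positions, and the 0.5 cost per contract are assumptions for illustration, not the system's settings):

```python
import pandas as pd

price = pd.Series([100.0, 101.0, 99.5, 100.5])
position = pd.Series([2, 2, 3, 3])          # contracts held each day

# Position returns: yesterday's position times today's price move
position_returns = position.shift(1) * price.diff()

# Transaction costs: assumed commission plus half-spread per contract traded
cost_per_contract = 0.5
trades = position.diff().abs()
transaction_costs = trades * cost_per_contract

returns = position_returns.fillna(0) - transaction_costs.fillna(0)
print(returns.tolist())  # → [0.0, 2.0, -3.5, 3.0]
```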
### Important - fixed capital
AccountCurve uses fixed capital. For example, if the capital was £500,000, it will show the daily return on £500,000. It does not reinvest the returns, and it does not accumulate losses. Every day, it imagines £500,000 is invested. In reality, if there were to be a large drawdown, that could bust an account.
We do it this way because:
* As we are using leverage, we do not need £500,000 to buy £500,000 of contracts. So the money in your account has nothing to do with what you can/cannot buy.
* We do not need to 'feed forward' an account balance, we can vectorize the calculation (rather than using a loop), which makes calculating the backtest hundreds of times faster.
```
i['corn'].curve().returns().cumsum().plot(title='Cumulative returns for Corn')
```
## How the account curve calculation works on a portfolio
1. Call `Instrument.positions()` for each instrument in the portfolio, and put together in a DataFrame.
1. Calculate the volatility on that portfolio. Calculate the following normalising factor:
$$\text{Vol Norm Factor} = \frac{\text{Daily Volatility Target}}{\text{Volatility of unnormalised instruments}}$$
The purpose of this is to make the volatility of the whole portfolio hit our volatility target. Remember that $\text{Daily volatility target} = \text{Annual volatility target} \div \sqrt{252}$.
1. Multiply the `Instrument.positions()` DataFrame by the volatility normalising factor.
1. Finally, apply the `chunk_trades` function, which is designed to only change the position when there is a change in volatility of >10% on a particular instrument. We do this to try and reduce trading costs.
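A minimal sketch of the normalising step described above; the variable names and numbers are illustrative, not the actual API:

```python
import numpy as np

# Daily volatility target derived from the annual target
annual_vol_target = 0.25
daily_vol_target = annual_vol_target / np.sqrt(252)

# Standard deviation of the unnormalised portfolio's daily returns (illustrative)
unnormalised_daily_vol = 0.02

# Scale factor applied to the positions so the portfolio hits the target
vol_norm_factor = daily_vol_target / unnormalised_daily_vol
```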
```
p.curve().returns().sum(axis=1).cumsum().plot(title='Cumulative returns for a diversified portfolio')
```
### The Sharpe Ratio
The accountCurve produces a number of statistics based on the returns series. The most important one is the Sharpe ratio, which is our measure of performance. It is the return we achieved, given the volatility we endured.
$$
\text{Sharpe}
=
\frac{\text{Total Returns}}{\text{Std dev of Returns}}
$$
In other words, it measures the smoothness of returns. Smooth, predictable returns are what we seek most, because we can use leverage to multiply them.
A reasonable Sharpe ratio for a trend following strategy is about 0.3 on a single instrument, and about 0.7 on a diversified portfolio.
We control the portfolio's volatility, so the expected return is:
$$
\begin{eqnarray}
\text{Expected annual return}
&=&
\text{Sharpe ratio } \times \text{Annual volatility target}\\
&=& 0.7 \times 25\% \\
&=& 17.5\% \pm 25\%
\end{eqnarray}
$$
So this means we can expect to have some very good years, some years with a loss, and on average, return about 17.5% a year. In this regard, performance of trend following is **mean-reverting**.
```
p.curve()
```
In the results for the Sharpe ratio above, we can see `gross_sharpe` and `sharpe`. `gross_sharpe` does not consider the impact of commissions (but does include spreads), so comparing it with `sharpe` shows the impact of our trading costs on overall performance.
### How to improve Sharpe ratio
The process for improving Sharpe ratio would be roughly:
* Add more uncorrelated instruments. A diversified portfolio increases Sharpe and reduces volatility.
* Add more uncorrelated rules/strategies. The more varied and profitable rules we have, the better.
* Model costs better, and trade faster. Faster strategies usually have a higher Sharpe ratio, however, costs play a much larger role, so need to be accounted for carefully.
### Limitations of Sharpe
One problem with using the Sharpe ratio is that it penalises 'good' volatility. If our portfolio is flat, and then makes a sudden jump up, this is great for us, but it produces a worse Sharpe ratio. One way to fix this is the Sortino ratio, which only considers downside volatility. This is something to investigate further.
## Portfolio
A Portfolio is a collection of Instruments, defined by [config/portfolios.py](file/../../config/portfolios.py).
Portfolio allows us to:
* Calculate AccountCurve for all instruments combined
* Work out ideal positions for all the instruments combined
* Run bootstrapping of rules across an entire portfolio at once
* Calculate the **frontier** - the ideal position we want to be in, and what we send to Interactive Brokers.
```
# we assume that the pycnn module is in your path.
# we also assume that LD_LIBRARY_PATH includes a pointer to where libcnn_shared.so is.
from pycnn import *
```
# Working with the pyCNN package
The pyCNN package is intended for neural-network processing on the CPU, and is particularly suited for NLP applications. It is a Python wrapper for the CNN package written by Chris Dyer.
There are two modes of operation:
* __Static networks__, in which a network is built and then being fed with different inputs/outputs. Most NN packages work this way.
* __Dynamic networks__, in which a new network is built for each training example (sharing parameters with the networks of other training examples). This approach is what makes pyCNN unique, and where most of its power comes from.
We will describe both of these modes.
## Package Fundamentals
The main piece of pyCNN is the `ComputationGraph`, which is what essentially defines a neural network.
The `ComputationGraph` is composed of expressions, which relate to the inputs and outputs of the network,
as well as the `Parameters` of the network. The parameters are the things in the network that are optimized over time, and all of the parameters sit inside a `Model`. There are `trainers` (for example `SimpleSGDTrainer`) that are in charge of setting the parameter values.
We will not be using the `ComputationGraph` directly, but it is there in the background, as a singleton object.
When `pycnn` is imported, a new `ComputationGraph` is created. We can then reset the computation graph to a new state
by calling `renew_cg()`.
## Static Networks
The life-cycle of a pyCNN program is:
1. Create a `Model`, and populate it with `Parameters`.
2. Renew the computation graph, and create `Expression` representing the network
(the network will include the `Expression`s for the `Parameters` defined in the model).
3. Optimize the model for the objective of the network.
As an example, consider a model for solving the "xor" problem. The network has two inputs, which can be 0 or 1, and a single output which should be the xor of the two inputs.
We will model this as a multi-layer perceptron with a single hidden layer.
Let $x = x_1, x_2$ be our input. We will have a hidden layer of 8 nodes, and an output layer of a single node. The activation on the hidden layer will be a $\tanh$. Our network will then be:
$\sigma(V(\tanh(Wx+b)))$
Where $W$ is an $8 \times 2$ matrix, $V$ is a $1 \times 8$ matrix, and $b$ is an 8-dim vector.
We want the output to be either 0 or 1, so we take the output layer to be the logistic-sigmoid function, $\sigma(x)$, that takes values between $-\infty$ and $+\infty$ and returns numbers in $[0,1]$.
We will begin by defining the model and the computation graph.
```
# create a model and add the parameters.
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
renew_cg() # new computation graph. not strictly needed here, but good practice.
# associate the parameters with cg Expressions
W = parameter(m["W"])
V = parameter(m["V"])
b = parameter(m["b"])
#b[1:-1].value()
b.value()
```
The first block creates a model and populates it with parameters.
The second block creates a computation graph and adds the parameters to it, transforming them into `Expression`s.
The need to distinguish model parameters from "expressions" will become clearer later.
We now make use of the W and V expressions, in order to create the complete expression for the network.
```
x = vecInput(2) # an input vector of size 2. Also an expression.
output = logistic(V*(tanh((W*x)+b)))
# we can now query our network
x.set([0,0])
output.value()
# we want to be able to define a loss, so we need an input expression to work against.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
x.set([1,0])
y.set(0)
print loss.value()
y.set(1)
print loss.value()
```
## Training
We now want to set the parameter weights such that the loss is minimized.
For this, we will use a trainer object. A trainer is constructed with respect to the parameters of a given model.
```
trainer = SimpleSGDTrainer(m)
```
To use the trainer, we need to:
* **call the `forward_scalar`** method of `ComputationGraph`. This will run a forward pass through the network, calculating all the intermediate values until the last one (`loss`, in our case), and then convert the value to a scalar. The final output of our network **must** be a single scalar value. However, if we do not care about the value, we can just use `cg.forward()` instead of `cg.forward_scalar()`.
* **call the `backward`** method of `ComputationGraph`. This will run a backward pass from the last node, calculating the gradients with respect to minimizing the last expression (in our case we want to minimize the loss). The gradients are stored in the model, and we can now let the `trainer` take care of the optimization step.
* **call `trainer.update()`** to optimize the values with respect to the latest gradients.
```
x.set([1,0])
y.set(1)
loss_value = loss.value() # this performs a forward through the network.
print "the loss before step is:",loss_value
# now do an optimization step
loss.backward() # compute the gradients
trainer.update()
# see how it affected the loss:
loss_value = loss.value(recalculate=True) # recalculate=True means "don't use precomputed value"
print "the loss after step is:",loss_value
```
The optimization step indeed made the loss decrease. We now need to run this in a loop.
To this end, we will create a `training set`, and iterate over it.
For the xor problem, the training instances are easy to create.
```
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
```
We now feed each question / answer pair to the network, and try to minimize the loss.
```
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
```
Our network is now trained. Let's verify that it indeed learned the xor function:
```
x.set([0,1])
print "0,1",output.value()
x.set([1,0])
print "1,0",output.value()
x.set([0,0])
print "0,0",output.value()
x.set([1,1])
print "1,1",output.value()
```
In case we are curious about the parameter values, we can query them:
```
W.value()
V.value()
b.value()
```
## To summarize
Here is a complete program:
```
# define the parameters
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
# renew the computation graph
renew_cg()
# add the parameters to the graph
W = parameter(m["W"])
V = parameter(m["V"])
b = parameter(m["b"])
# create the network
x = vecInput(2) # an input vector of size 2.
output = logistic(V*(tanh((W*x)+b)))
# define the loss with respect to an output y.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
# create training instances
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# train the network
trainer = SimpleSGDTrainer(m)
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
```
## Dynamic Networks
Dynamic networks are very similar to static ones, but instead of creating the network once and then calling "set" in each training example to change the inputs, we just create a new network for each training example.
We present an example below. While the value of this may not be clear in the `xor` example, the dynamic approach
is very convenient for networks for which the structure is not fixed, such as recurrent or recursive networks.
```
# create training instances, as before
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# create a network for the xor problem given input and output
def create_xor_network(model, inputs, expected_answer):
renew_cg()
W = parameter(model["W"])
V = parameter(model["V"])
b = parameter(model["b"])
x = vecInput(len(inputs))
x.set(inputs)
y = scalarInput(expected_answer)
output = logistic(V*(tanh((W*x)+b)))
loss = binary_log_loss(output, y)
return loss
m = Model()
m.add_parameters("W", (8,2))
m.add_parameters("V", (1,8))
m.add_parameters("b", (8))
trainer = SimpleSGDTrainer(m)
seen_instances = 0
total_loss = 0
for question, answer in zip(questions, answers):
loss = create_xor_network(m, question, answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
```
# Working with structured data in Python
*Before we begin, let's first take a quick [survey on the Inaugural assignment](https://forms.office.com/Pages/ResponsePage.aspx?id=kX-So6HNlkaviYyfHO_6kckJrnVYqJlJgGf8Jm3FvY9UMEZTODYyVjJWSFBPNTVRMzBMQzFYOE5JQiQlQCN0PWcu)*
By and large, handling data sets in Python means working with **Pandas**.
Pandas is a standard element in the Anaconda package, so you'll have it automatically.
The fact that Python is a general purpose language *and* has a good way of handling data sets through pandas has helped it become such a popular language for scientific and general purposes.
Today, you will learn about
1. the pandas **data frame** object and the **pandas series**.
2. how to **load and save data** both to and from offline sources (e.g. CSV or Excel).
3. and how to clean, rename, structure and index your data.
**Links:**
1. Official [tutorials](https://pandas.pydata.org/pandas-docs/stable/getting_started/tutorials.html)
2. DataCamp's [pandas' cheat sheet](https://www.datacamp.com/community/blog/python-pandas-cheat-sheet)
3. DataCamp has additional courses on pandas like [Writing efficient code with pandas](https://app.datacamp.com/learn/courses/writing-efficient-code-with-pandas).
4. About the [pandas project](https://pandas.pydata.org/about/)
```
import pandas as pd
from IPython.display import display
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
```
# Pandas dataframes
In Pandas, the fundamental object of interest is a **pandas dataframe**.
A pandas data frame is superficially like the data frames you know from Stata and SAS: it is 2-dimensional, and each column has a name.
The *data type* of a column in a pandas data frame is a **pandas series**.
A pandas series is **a lot like a numpy array** and they can be used in much the same way.
A pandas data frame can be thought of as a **dictionary of pandas series**. (Keys are column names)
To create a DataFrame:
```
ids = pd.Series([1, 2, 3])
incs = pd.Series([11.7, 13.9, 14.6])
names = pd.Series(['Vitus', 'Maximilian', 'Bo-bob'])
# Use data frame definition
X = pd.DataFrame({'id': ids, 'inc':incs, 'name': names})
display(X)
```
When creating a DataFrame, you can also rely on python to recast the variables into pandas series at creation.
```
# Variables are cast into pandas series as the DataFrame is created
X = pd.DataFrame({'id': [1, 2, 3],
'inc': [11.7, 13.9, 14.6],
'name': ['Vitus', 'Maximilian', 'Bo-bob']})
type(X['id'])
```
You can also pass in data as a list of lists and provide column names as argument
```
X = pd.DataFrame(data = [[1,11.7,'Vitus'],
[2,13.9,'Maximilian'],
[3,14.6,'Bo-Bob']],
columns=['id','inc','name'])
display(X)
```
**A dataframe is essentially a matrix.**
* rows = observations
* columns = variables
* the index = keeps track of the rows' locations
**General information:**
```
X.info()
```
**What does `object` mean?** In practice it is a `str` but it can give rise to difficulties.
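For example (hypothetical data): a column of numeric-looking strings is reported as `object`, and needs an explicit conversion before arithmetic works as expected:

```python
import pandas as pd

df = pd.DataFrame({'name': ['Vitus', 'Maximilian'],
                   'inc': ['11.7', '13.9']})  # note: strings, not floats

print(df.dtypes)  # both columns are reported as object

df['inc'] = df['inc'].astype('float64')  # explicit conversion
print(df['inc'].mean())
```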
**Note:** You can also show a dataframe in the middle of some code.
```
print('before')
display(X)
print('after')
```
## Indexing ("subsetting")
**Choosing a subset of the rows and/or columns of a dataframe is known as "indexing"**.
Recall the stuff about ***slicing*** and ***logical indices*** from previous lectures. Since pandas is built on NumPy, we can do the same here.
All pandas dataframes are born with the methods `.loc[]` and `.iloc[]`:
1. `.iloc[]` is for **numeric indexing**
2. `.loc[]` for **logical** and **name-based** indexing.
Examples
* `df.iloc[0:3,1]` selects rows 0, 1, 2 and the second column (position 1).
* `df.loc[:, ['year']]` selects all rows (indicated by `:`) but only the column (variable) `year`.
* `df.loc[df['year'] == 2002, :]` selects the rows where the variable `year` is equal to 2002 and all columns (indicated by `:`)
* `df.loc[df['year'] == 2002, ['name']]` selects the variable `name` and shows the rows where `year` is equal to 2002.
*You cannot write*:
`df.iloc[0:2, ['year']]`
*You should not write*
`df.loc[0:2, ['year']]`
*It will only work with a numerical index, and note that the slice intervals are then **closed instead of half-open***
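The closed vs. half-open difference is easy to demonstrate on a small dataframe with a numeric index:

```python
import pandas as pd

df = pd.DataFrame({'year': [2000, 2001, 2002, 2003]})

print(len(df.loc[0:2]))   # 3 rows: .loc includes both endpoints
print(len(df.iloc[0:2]))  # 2 rows: .iloc excludes the stop position
```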
In general, the **syntax** is `df.loc[CONDITION, [VARLIST]]`, where `CONDITION` is a vector of logical statements with the same length as the number of rows in the dataframe, and `VARLIST` is a list over variables.
```
# Use logical indexing to subset from variable name based on id
X.loc[X['id'] > 1, ['name']]
```
Subset all variables:
```
X.loc[X['id'] > 1]
```
**Alternatives:**
Create a boolean series
```
I = X['id'] > 1
print(I)
X.loc[I, ['name']]
```
Use `.VARIABLE` notation
```
X.loc[(X.id > 1) & (X.inc > 14), ['id','name']]
```
Why do you think the `.VARIABLE` notation works at all? What does it make you suspect a variable is to the DataFrame?
Subsetting with numerical indexing works the same way as lists and arrays.
**Syntax:** `df.iloc[ROW INDICES, [COLUMN INDICES]]`
```
display(X.iloc[0:2,[0,2]])
```
Remember the **half-open** intervals!
## Adding a variable
Variables are added with `df['newvar'] = SOMETHING`. *The length of the RHS must match the number of rows, or the RHS must be a scalar (which is then broadcast)*.
```
X['year'] = [2003, 2005, 2010]
X['zone'] = 7
X
```
**Note:** You canNOT write `df.newvar = SOMETHING`. Some of you will forget. I promise.
**Also:** note that you could add the year-variable even though it does not have an explicit row dimension.
The *something* can be an **expression based on other variables**.
```
X['inc_adj'] = X.inc - X.inc.mean() + 0.1
X
```
## Assignments to a subset of rows
**LHS:** Selected using logical statement.<br>
**RHS:** Must either be:
1. a **single value** (all rows are set to this)
2. a **list of values** with same length as the number of selected rows
**Multiple rows, one value:**
```
# Create copy of X to avoid overwriting it.
Y = X.iloc[:,0:4].copy()
Y.loc[Y.id > 1, ['name']] = 'no name'
print('Y After change in names:')
Y
```
**Multiple rows, multiple values:**
```
print('Original df:')
Y = X.iloc[:,0:4].copy()
display(Y)
# Subset the rows where name is Vitus or year is 2010. LHS is incidentally only 2 rows, which matches the RHS!
I = (Y.name == 'Vitus') | (Y.year == 2010)
# Print LHS
print('Subset of Y, LHS in assignment:')
display(Y.loc[I,:])
# Assignment
Y.loc[I, ['name']] = ['Bib', 'Peter']
print('Final Y:')
Y
```
## Copies vs. views
Remember the stuff about references to objects from L02 and how making changes in a reference also causes changes in the "original" object? Pandas sort of shields you from that trap.
Here is how:
When **looking** at the data it is natural to just avoid the `.loc` (as in most other languages):
```
# Here I'm NOT using the .loc function
Z = Y[['id','name']]
Z
```
You can even make subsets without it:
```
I = Y['id'] > 1
Z[I]
```
Importantly, this **does not work with assignment**.
**Case 1:** It does not work with views, as they are references.
```
display(X)
Y = X.copy() # Create Y as a new instance by copying
I = Y['id'] > 2 # Boolean index
Z1 = Y[['id','name']] # returns a VIEW through chained assignment
# We CANNOT change Z1 as it is a view of Y
Z1.loc[I, ['name']] = 'test'
# But it works with Z2
Z2 = Y.loc[:, ['id','name']]
Z2.loc[I, ['name']] = 'test'
display(Z2)
# Importantly, we did not change names in Y
display(Y)
```
**Case 2:** Sometimes it works, but not how you want it to..
```
#display(X)
Y = X.copy()
I = Y['id'] > 1
Z = Y['name'] # returns a view of the column (same with Y.name)
Z[I] = 'test' # Reassigning values to the view of name in Y
## WOOPS:
display(Y)
display(Z)
```
**Solution:** Do the assignment in one step.
```
I = Y['id'] > 1
Y.loc[I, ['name']] = 'test'
Y
```
## The index
The **leftmost column** in the displayed dataset is the `index` of the dataframe (it is not a regular variable).<br>
**Baseline:** If you haven't done anything, it is just `[0, 1, 2, ....]`.
```
X = pd.DataFrame({'id': [1, 2, 3],
'inc': [11.7, 13.9, 14.6],
'name': ['Vitus', 'Maximilian', 'Bo-bob'],
'year': [2010, 2010, 2019]})
# See the indices of X
print(X.index.values)
```
**Custom:** You can actually use any **unique** identifier. It does not have to be numbers. For example, you can assign the name column to be the index instead.
```
Y = X.set_index('name') # returns a copy
Y # notice name is now below the other variables
```
We could also have specified an index at creation of X
```
X = pd.DataFrame({'id': [1, 2, 3],
'inc': [11.7, 13.9, 14.6],
'year': [2010, 2010, 2019]},
index= ['Vitus', 'Maximilian', 'Bo-bob'])
X
# Use index of rows:
Y.loc['Vitus']
# See the indices of Y
print(Y.index.values)
```
Lets have a [**quizz**](https://forms.office.com/Pages/ResponsePage.aspx?id=kX-So6HNlkaviYyfHO_6kckJrnVYqJlJgGf8Jm3FvY9UNDdSQTgzRU1XMlc3MzJEQUo5UjNCRURDSCQlQCN0PWcu) on subsetting.
## Series and numpy arrays
When you select an individual variable, it has the data type `Series`. Some functions work on a pandas series (e.g. most numpy functions), but it is sometimes nice to extract the underlying numpy objects:
* `df`: **pandas dataframe**
* `df['variable']`: **pandas series**
* `df['variabe'].values` (or `.to_numpy()`): **numpy array**
```
# One way to do it
X.inc.to_numpy()
# Another way
display(X.inc.values)
type(X.inc.values)
# Get a list instead
display([*X['id'].values]) # unpacking the array creates a new list
display(type([*X['id'].values]))
```
## Calling functions on a DataFrame
**Row-by-row**
Create function that takes row as an argument, and then **apply** the action of the function along the row dimension (axis=1).
```
Y
Y = pd.DataFrame({'id': [1, 2, 3],
'inc': [11.7, 13.9, 14.6],
'year': [2010, 2010, 2019],
'name': ['Vitus', 'Maximilian', 'Bo-bob']})
# Notice that row is an input argument here
def conc_row_wise(row):
return str(row['year']) + ' - ' + row['name']
# The fact that row is an input argument in the conc_row_wise function is implicitly understood by .apply()
Y['year_name'] = Y.apply(conc_row_wise, axis=1) # axis=1 applies the function to each row (working across columns)
Y
```
**Function for numpy arrays:**
Use the fact that a Pandas df is based on Numpy arrays to create a function that operate on the rows.
This may involve broadcasting (see L03).
```
def all_at_once(inc, year):
return inc * year.max() # Notice that the values of a pd DataFrame column is Numpy, so it has a .max() method.
Y['inc_adj_year'] = all_at_once(Y.inc.values, Y.year.values)
Y
```
**Using the assign method of DataFrames**
Apply the `assign` method coupled with a lambda function (exploiting the underlying numpy arrays) to add the column in a single chained expression:
```
Y = Y.assign(inc_adj_inplace = lambda x: x.inc * x.year.max())
Y
```
# Reading and writing data
**Check:** We make sure that we have the **data/** subfolder, and that it has the datasets we need.
```
import os
# Using assert to check that paths exist on computer. See L05 for details.
assert os.path.isdir('data/')
assert os.path.isfile('data/RAS200.xlsx')
assert os.path.isfile('data/INDKP107.xlsx')
# Print everything in data
os.listdir('data/')
```
## Reading in data
Pandas offers a lot of facilities for **reading and writing to different formats**. The functions have logical names:
* CSV: `pd.read_csv()`
* SAS: `pd.read_sas()`
* Excel: `pd.read_excel()`
* Stata: `pd.read_stata()`
* Parquet: `pd.read_parquet()`
**Inspecting:**
* `df.head(10)` is used to inspect the first 10 rows
* `df.sample(10)` is used to look at 10 random rows
**Example:** Raw download from DST
Clearly not quite right!
```
filename = 'data/RAS200.xlsx' # open the file and have a look at it
pd.read_excel(filename).head(5)
```
We need to clean this **mess** up.
### Getting the right columns and rows
**Skipping rows:** Clearly, we should **skip** the first two rows and drop the first four columns
```
empl = pd.read_excel(filename, skiprows=2)
empl.head(5)
```
**Dropping columns:** The first couple of columns are not needed and contain only missing values (denoted by `NaN` (not-a-number)), so we will drop those.
**Note:** `df.drop()` is a function that the data frame object applies to itself. Hence, no return value is used.
```
# These columns have to go: 'Unnamed: 0' 'Unnamed: 1' 'Unnamed: 2' 'Unnamed: 3'
drop_these = ['Unnamed: ' + str(num) for num in range(4)] # use list comprehension to create list of columns
print(drop_these)
empl.drop(drop_these, axis=1, inplace=True) # axis = 1 -> columns, inplace=True -> changed, no copy made
empl.head(5)
```
> **Alternative:** Use `del empl['Unnamed: 0'], empl['Unnamed: 1']..`.
**But!** that borders on code repetition. It would give you 4 places to make code changes, rather than the single place you have with the list comprehension above, in case the data changed.
### Renaming variables
We are not happy with the column comprising regions, which is currently called `Unnamed: 4`.
We rename using `df.rename(columns=dict)`, where dict must be a Python *dictionary*. Why a dictionary? It is simply the most practical solution if you are renaming several columns at once.
```
empl.rename(columns = {'Unnamed: 4':'municipality'}, inplace=True)
empl.head(5)
```
**Rename all year columns:** We also see that the employment rate in 2008 has been named `2008`.
This is allowed in Python, but having a **variable named as a number** can cause **problems** with some functions (and many other programming languages do not even allow it), so let us change their names.
To change all columns, we need to create a dictionary that maps each of the years {2008, ..., 2020} to {empl2008, ..., empl2020}.
```
col_dict = {}
for i in range(2008, 2020+1): # range goes from 2008 up to and including 2020
col_dict[str(i)] = f'empl{i}'
col_dict
empl.rename(columns = col_dict, inplace=True)
empl.head(10)
```
**A big NO-NO!!** is to put *white spaces* in column names. You can theoretically have a column such as `empl['empl 2017']` in a pandas df, but this is *very likely* to get messy. And you can no longer use `.` notation.
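If a data source does hand you column names with white spaces, one way to clean them all at once (hypothetical column names) is:

```python
import pandas as pd

df = pd.DataFrame({'empl 2017': [0.70], 'empl 2018': [0.72]})

# Strip the spaces from every column name in one go
df.columns = df.columns.str.replace(' ', '')
print(df.columns.tolist())  # ['empl2017', 'empl2018']
```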
**Extract:** Now we can find the employment rate in the municipality where Christian grew up:
```
empl.loc[empl.municipality == 'Hillerød']
```
### Dropping observations that are not actually municipalities
The dataset contains observations like "Region Hovedstaden", which is not a municipality, so we want to drop such rows. To do this, we can use the `df['var'].str` functionalities. These are all sorts of functions that work with strings, in particular searching for instances of specific content by `df['var'].str.contains('PATTERN')`.
```
# Build up a logical index I
I = empl.municipality.str.contains('Region')
I |= empl.municipality.str.contains('Province')
I |= empl.municipality.str.contains('All Denmark')
empl.loc[I, :]
```
**Delete these rows:**
```
empl = empl.loc[~I] # keep everything else (~ negates the boolean index)
empl.head(10)
```
Very important: **reset index**
```
empl.reset_index(inplace = True, drop = True) # Drop old index too
empl.iloc[0:4,:]
```
### Summary statistics
To get an overview of employments across municipalities we can use the function `df.describe()`.
```
empl.describe()
```
**Single descriptive statistic:** We can also just get the mean for each year:
```
empl.iloc[:,1:].mean()
```
## Long vs. wide datasets: `pd.wide_to_long()`
Often in economic applications, it can be useful to switch between *wide* and *long* formats (long is sometimes referred to as *tall*, e.g. in Stata). Going from wide to long is done by `pd.wide_to_long()`; the reverse can be done with e.g. `df.unstack()` or `df.pivot()` (there is no `pd.long_to_wide()`). Many types of analysis are easier to do in one format than in another, so it is extremely useful to be able to switch comfortably between formats.
**Common:** Think of a dataset as having an `ID` and a `PERIOD` variable. In our dataset `empl`, the `ID` variable is `municipality`, and the `PERIOD` variable is `year`.
**Wide dataset:** The default from Statistics Denmark: 1 row in data per `ID` and a variable for each `PERIOD`. If there is more than one variable that varies by period, a new block of period-wise columns must be created.
**Long dataset:** There is one row for each combination of (`ID`, `PERIOD`). Vertical blocks of periods.
A **long dataset** is often easier to work with if you have more than one time-varying variable in the data set.
In general, pandas will assume that the variables in the *wide* format have a particular structure: namely, they are of the form `XPERIOD`, where `X` is called the "stub". In our case, the variable names are e.g. `empl2011`, so the stub is `empl` and the period (for that variable) is `2011`. You'll want to clean out the variable names if there is anything after the `period` part.
```
empl
empl_long = pd.wide_to_long(empl, stubnames='empl', i='municipality', j='year')
empl_long.head(10)
```
**Note:** The variables `municipality` and `year` are now in the index!! We see that because they are "below" `empl` in the `head` overview.
```
# The index variable now consists of tuples.
print(empl_long.index.values[0:8])
```
We can **select a specific municipality** using ``.xs``:
```
empl_long.xs('Roskilde',level='municipality')
```
Or ``.loc[]`` in a special way:
```
empl_long.loc[empl_long.index.get_level_values('municipality') == 'Roskilde', :]
```
**Alternative:** Reset the index, and use `.loc` as normal.
```
empl_long = empl_long.reset_index()
empl_long.loc[empl_long.municipality == 'Roskilde', :]
```
### Plotting interactively
A pandas DataFrame has built-in functions for plotting. Works a bit differently from matplotlib.
Example:
```
# Data frame with roskilde
empl_roskilde = empl_long.loc[empl_long['municipality'] == 'Roskilde', :]
# Plot the content of the data frame
empl_roskilde.plot(x='year',y='empl',legend=False);
```
We can even do it **interactively**:
```
import ipywidgets as widgets
def plot_e(df, municipality):
I = df['municipality'] == municipality
ax=df.loc[I,:].plot(x='year', y='empl', style='-o', legend=False)
widgets.interact(plot_e,
df = widgets.fixed(empl_long),
municipality = widgets.Dropdown(description='Municipality',
options=empl_long.municipality.unique(),
value='Roskilde')
);
```
## Income
Next, we will read in the avg. disposable income for highly educated in each municipality. Here we do the cleaning, renaming and structuring in a few condensed lines.
```
# a. load
inc = pd.read_excel('data/INDKP107.xlsx', skiprows=2)
# b. clean and rename
inc.drop([f'Unnamed: {i}' for i in range(4)], axis=1, inplace=True) # using list comprehension
inc.rename(columns = {'Unnamed: 4':'municipality'}, inplace=True)
inc.rename(columns = {str(i): f'inc{i}' for i in range(2004,2020+1)}, inplace=True) # using dictionary comprehension
# c. drop rows with missing values. Denoted na
inc.dropna(inplace=True)
# d. remove non-municipalities. Notice how to avoid code repetition!
for val in ['Region','Province', 'All Denmark']:
I = inc.municipality.str.contains(val)
inc.drop(inc[I].index, inplace=True) # .index -> get the indexes of the series
inc.head(5)
```
**Convert** wide -> long:
```
inc_long = pd.wide_to_long(df=inc, stubnames='inc', i='municipality', j='year')
inc_long.reset_index(inplace=True)
inc_long.head(5)
```
## Municipal area
Finally, let's read in a dataset on municipality areas in km$^2$.
```
# a. load
area = pd.read_excel('data/areal.xlsx', skiprows=2)
# b. clean and rename
area.rename(columns = {'Unnamed: 0':'municipality','2019':'km2'}, inplace=True)
# c. drop rows with missing
area.dropna(inplace=True)
# d. remove non-municipalities
for val in ['Region','Province', 'All Denmark']:
I = area.municipality.str.contains(val)
area.drop(area[I].index, inplace=True)
area.head(5)
```
## Writing data
As with reading data, pandas has corresponding **writing** methods, defined on the DataFrame itself:
* CSV: `df.to_csv()`
* Excel: `df.to_excel()`
* Stata: `df.to_stata()`
* Parquet: `df.to_parquet()`
(Note: pandas can *read* SAS files with `pd.read_sas()`, but it does not provide a SAS writer.)
Let's **save our dataset in CSV form**. We set `index=False` to avoid saving the index (which carries no information here, although in other contexts it can matter).
```
empl_long.to_csv('data/RAS200_long.csv', index=False)
inc_long.to_csv('data/INDKP107_long.csv', index=False)
area.to_csv('data/area.csv', index=False)
```
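A quick sanity check after writing is to read a file back and compare; a self-contained sketch using a temporary directory (the toy frame here is made up):

```python
import pandas as pd
import tempfile, os

df = pd.DataFrame({'municipality': ['A', 'B'], 'empl': [1.0, 2.0]})

# Write without the index, as above, then read the file back
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'check.csv')
    df.to_csv(path, index=False)
    df_back = pd.read_csv(path)

# The round trip should preserve the data exactly
assert df_back.equals(df)
```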
## Be cautious
Code for cleaning data tends to get long and repetitive. But remember **DRY** (Don't Repeat Yourself)! Errors creep into data cleaning when you copy blocks of code around. Avoid repetition at all costs.
# Summary
**This lecture**: We have discussed
1. The general pandas framework (indexing, assignment, copies vs. views, functions)
2. Loading and saving data
3. Basic data cleaning (renaming, dropping etc.)
4. Wide $\leftrightarrow$ long transformations
**Your work:** Before solving Problem Set 3 read through this notebook and play around with the code.
**Next lecture:** Basic data analysis.
**Data exploration?:** Try out [dtale](https://github.com/man-group/dtale).
| github_jupyter |
#Create the environment
```
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My Drive/ESoWC
import pandas as pd
import xarray as xr
import numpy as np
#Our class
from create_dataset.make_dataset import CustomDataset
```
I apply the following changes to all three datasets that I load:
* `cut_region`: the raw dataset covers a larger region than the one we are analyzing, so we crop away the excess area
* `rescale`: we change the resolution of the dataset from 0.75 degrees to 0.25 degrees
* `resample`: we create a time series with an hourly frequency
Extremes of the region that we are analyzing:
```
lat_s = 43.0
lat_n = 51.0
lon_e = 4.0
lon_w = 12.0
```
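The `resample` step described above maps onto the standard pandas/xarray resampling API. As a rough, self-contained illustration in pandas (whether `CustomDataset` interpolates or forward-fills the new hourly steps is an assumption):

```python
import pandas as pd
import numpy as np

# A 3-hourly series over part of a day
idx = pd.date_range('2019-07-01', periods=8, freq='3h')
s = pd.Series(np.arange(8.0), index=idx)

# Up-sample to hourly frequency, filling new steps by linear interpolation
hourly = s.resample('1h').interpolate('linear')
print(len(hourly))  # hourly steps from 00:00 to 21:00
```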
#05_2019
This file already exists; it is called 05_2019_weather.nc
#07_2019
##Load datasets
###Wind_vegetation_pressure_cloud
```
wvpc_instance_07_19 = CustomDataset('Data/07_2019_wind_vegetation_pressure_cloud.nc')
wvpc_instance_07_19.cut_region(lat_n, lat_s, lon_e, lon_w)
wvpc_instance_07_19.rescale()
wvpc_instance_07_19.resample("1H")
wvpc_Dataset_07_19 = wvpc_instance_07_19.get_dataset()
```
###Spechum_temp
```
spechum_temp_instance_07_19 = CustomDataset('Data/07_2019_spechum_temp.nc')
spechum_temp_instance_07_19.cut_region(lat_n, lat_s, lon_e, lon_w)
spechum_temp_instance_07_19.rescale()
spechum_temp_instance_07_19.resample("1H")
```
Rename the variables
```
spechum_temp_instance_07_19.rename_var({'t':'tmp',
'q':'sp_hum'})
spechum_temp_Dataset_07_19 = spechum_temp_instance_07_19.get_dataset()
```
###Relhum
```
relhum_instance_07_19 = CustomDataset('Data/07_2019_relhum.nc')
relhum_instance_07_19.cut_region(lat_n, lat_s, lon_e, lon_w)
relhum_instance_07_19.rescale()
relhum_instance_07_19.resample("1H")
```
Rename the variables and drop the variables that already appear in the other datasets
```
relhum_instance_07_19.rename_var({'r':'rel_hum'})
relhum_Dataset_07_19 = relhum_instance_07_19.get_dataset()
relhum_Dataset_07_19 = relhum_Dataset_07_19.drop_vars('t')
relhum_Dataset_07_19 = relhum_Dataset_07_19.drop_vars('q')
```
###Water
```
water_instance_07_19 = CustomDataset('Data/07_2019_water.nc')
water_instance_07_19.cut_region(lat_n, lat_s, lon_e, lon_w)
water_instance_07_19.rescale()
water_instance_07_19.resample("1H")
water_Dataset_07_19 = water_instance_07_19.get_dataset()
```
##Put the datasets together
```
weather_dataset_07_19 = wvpc_Dataset_07_19.merge(spechum_temp_Dataset_07_19)
weather_dataset_07_19 = weather_dataset_07_19.merge(relhum_Dataset_07_19)
weather_dataset_07_19 = weather_dataset_07_19.merge(water_Dataset_07_19)
```
##New feature
I want to add a new feature: the total wind speed, i.e. the magnitude of the two horizontal wind components
```
weather_dataset_07_19['tot_wind']= np.sqrt(np.square(weather_dataset_07_19['u10']) + np.square(weather_dataset_07_19['v10']))
weather_dataset_07_19
```
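On plain NumPy arrays, this feature is the Euclidean magnitude of the two wind components; `np.hypot` is an equivalent, slightly more numerically robust spelling:

```python
import numpy as np

# Toy wind components (the dataset's u10/v10 fields are gridded versions of these)
u10 = np.array([3.0, -4.0, 0.0])
v10 = np.array([4.0, 3.0, 5.0])

# Magnitude of the horizontal wind vector, as computed for the dataset feature
tot_wind = np.sqrt(np.square(u10) + np.square(v10))

# np.hypot computes the same quantity
assert np.allclose(tot_wind, np.hypot(u10, v10))
print(tot_wind)  # [5. 5. 5.]
```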
##Save dataset
```
weather_dataset_07_19.to_netcdf('Data/07_2019_weather.nc', 'w', 'NETCDF4')
```
#05_2020
##Load datasets
###Wind_vegetation_pressure_cloud
```
wvpc_instance_05_20 = CustomDataset('Data/05_2020_wind_vegetation_pressure_cloud.nc')
wvpc_instance_05_20.cut_region(lat_n, lat_s, lon_e, lon_w)
wvpc_instance_05_20.rescale()
wvpc_instance_05_20.resample("1H")
wvpc_Dataset_05_20 = wvpc_instance_05_20.get_dataset()
```
###Spechum_temp
```
spechum_temp_instance_05_20 = CustomDataset('Data/05_2020_spechum_temp.nc')
spechum_temp_instance_05_20.cut_region(lat_n, lat_s, lon_e, lon_w)
spechum_temp_instance_05_20.rescale()
spechum_temp_instance_05_20.resample("1H")
```
Rename the variables
```
spechum_temp_instance_05_20.rename_var({'t':'tmp',
'q':'sp_hum'})
spechum_temp_Dataset_05_20 = spechum_temp_instance_05_20.get_dataset()
```
###Relhum
```
relhum_instance_05_20 = CustomDataset('Data/05_2020_relhum.nc')
relhum_instance_05_20.cut_region(lat_n, lat_s, lon_e, lon_w)
relhum_instance_05_20.rescale()
relhum_instance_05_20.resample("1H")
```
Rename the variables and drop the variables that already appear in the other datasets
```
relhum_instance_05_20.rename_var({'r':'rel_hum'})
relhum_Dataset_05_20 = relhum_instance_05_20.get_dataset()
relhum_Dataset_05_20 = relhum_Dataset_05_20.drop_vars('t')
relhum_Dataset_05_20 = relhum_Dataset_05_20.drop_vars('q')
```
###Water
```
water_instance_05_20 = CustomDataset('Data/05_2020_water.nc')
water_instance_05_20.cut_region(lat_n, lat_s, lon_e, lon_w)
water_instance_05_20.rescale()
water_instance_05_20.resample("1H")
water_Dataset_05_20 = water_instance_05_20.get_dataset()
```
##Put the datasets together
```
weather_dataset_05_20 = wvpc_Dataset_05_20.merge(spechum_temp_Dataset_05_20)
weather_dataset_05_20 = weather_dataset_05_20.merge(relhum_Dataset_05_20)
weather_dataset_05_20 = weather_dataset_05_20.merge(water_Dataset_05_20)
```
##New feature
I want to add a new feature: the total wind speed, i.e. the magnitude of the two horizontal wind components
```
weather_dataset_05_20['tot_wind']= np.sqrt(np.square(weather_dataset_05_20['u10']) + np.square(weather_dataset_05_20['v10']))
weather_dataset_05_20
```
##Save dataset
```
weather_dataset_05_20.to_netcdf('Data/05_2020_weather.nc', 'w', 'NETCDF4')
```
```
from __future__ import print_function
import ipywidgets as widgets
from bqplot import pyplot as plt
from bqplot import topo_load
from bqplot.interacts import panzoom
import numpy as np
import pandas as pd
import datetime as dt
# initializing data to be plotted
np.random.seed(0)
size = 100
y_data = np.cumsum(np.random.randn(size) * 100.0)
y_data_2 = np.cumsum(np.random.randn(size))
y_data_3 = np.cumsum(np.random.randn(size) * 100.0)
x = np.linspace(0.0, 10.0, size)
price_data = pd.DataFrame(
np.cumsum(np.random.randn(150, 2).dot([[0.5, 0.8], [0.8, 1.0]]), axis=0) + 100,
columns=["Security 1", "Security 2"],
index=pd.date_range(start="01-01-2007", periods=150),
)
symbol = "Security 1"
dates_all = price_data.index.values
final_prices = price_data[symbol].values.flatten()
price_data.index.names = ["date"]
```
## Simple Plots
### Line Chart
```
plt.figure()
plt.plot(x, y_data)
plt.xlabel("Time")
plt.show()
_ = plt.ylabel("Stock Price")
# Setting the title for the current figure
plt.title("Brownian Increments")
plt.figure()
plt.plot("Security 1", data=price_data)
plt.show()
```
### Scatter Plot
```
plt.figure(title="Scatter Plot with colors")
plt.scatter(y_data_2, y_data_3, color=y_data, stroke="black")
plt.show()
```
### Horizontal and Vertical Lines
```
## adding a horizontal line at y=0
plt.hline(0)
plt.show()
## adding a vertical line at x=4 with stroke_width and colors being passed.
plt.vline(4.0, stroke_width=2, colors=["orangered"])
plt.show()
plt.figure()
plt.scatter(
"Security 1",
"Security 2",
color="date",
data=price_data.reset_index(),
stroke="black",
)
plt.show()
```
### Histogram
```
plt.figure()
plt.hist(y_data, colors=["OrangeRed"])
plt.show()
plt.figure()
plt.hist("Security 1", data=price_data, colors=["MediumSeaGreen"])
plt.xlabel("Hello")
plt.show()
```
### Bar Chart
```
plt.figure()
bar_x = [
"A",
"B",
"C",
"D",
"E",
"F",
"G",
"H",
"I",
"J",
"K",
"L",
"M",
"N",
"P",
"Q",
"R",
"S",
"T",
"U",
]
plt.bar(bar_x, y_data_3)
plt.show()
plt.figure()
plt.bar("date", "Security 2", data=price_data.reset_index()[:10])
plt.show()
```
### Pie Chart
```
plt.figure()
d = abs(y_data_2[:5])
plt.pie(d)
plt.show()
plt.figure()
plt.pie("Security 2", color="Security 1", data=price_data[:4])
plt.show()
```
### OHLC
```
dates = np.arange(
dt.datetime(2014, 1, 2), dt.datetime(2014, 1, 30), dt.timedelta(days=1)
)
prices = np.array(
[
[187.21, 187.4, 185.2, 185.53],
[185.83, 187.35, 185.3, 186.64],
[187.15, 187.355, 185.3, 186.0],
[186.39, 190.35, 186.38, 189.71],
[189.33, 189.4175, 187.26, 187.97],
[189.02, 189.5, 186.55, 187.38],
[188.31, 188.57, 186.28, 187.26],
[186.26, 186.95, 183.86, 184.16],
[185.06, 186.428, 183.8818, 185.92],
[185.82, 188.65, 185.49, 187.74],
[187.53, 188.99, 186.8, 188.76],
[188.04, 190.81, 187.86, 190.09],
[190.23, 190.39, 186.79, 188.43],
[181.28, 183.5, 179.67, 182.25],
[181.43, 183.72, 180.71, 182.73],
[181.25, 182.8141, 179.64, 179.64],
[179.605, 179.65, 177.66, 177.9],
[178.05, 178.45, 176.16, 176.85],
[175.98, 178.53, 175.89, 176.4],
[177.17, 177.86, 176.36, 177.36],
]
)
plt.figure()
plt.ohlc(dates, prices)
plt.show()
```
### Boxplot
```
plt.figure()
plt.boxplot(np.arange(10), np.random.randn(10, 100))
plt.show()
```
### Map
```
plt.figure()
plt.geo(map_data="WorldMap")
plt.show()
```
### Heatmap
```
plt.figure(padding_y=0)
plt.heatmap(x * x[:, np.newaxis])
plt.show()
```
### GridHeatMap
```
plt.figure(padding_y=0)
plt.gridheatmap(x[:10] * x[:10, np.newaxis])
plt.show()
```
### Plotting Dates
```
plt.figure()
plt.plot(dates_all, final_prices)
plt.show()
```
### Editing existing axes properties
```
## adding grid lines and changing the side of the axis in the figure above
_ = plt.axes(
options={
"x": {"grid_lines": "solid"},
"y": {"side": "right", "grid_lines": "dashed"},
}
)
```
## Advanced Usage
### Multiple Marks on the same Figure
```
plt.figure()
plt.plot(x, y_data_3, colors=["orangered"])
plt.scatter(x, y_data, stroke="black")
plt.show()
```
### Using marker strings in Line Chart
```
mark_x = np.arange(10)
plt.figure(title="Using Marker Strings")
plt.plot(
mark_x, 3 * mark_x + 5, "g-.s"
) # color=green, line_style=dash_dotted, marker=square
plt.plot(mark_x ** 2, "m:d") # color=magenta, line_style=None, marker=diamond
plt.show()
```
### Partially changing the scales
```
plt.figure()
plt.plot(x, y_data)
## preserving the x scale and changing the y scale
plt.scales(scales={"x": plt.Keep})
plt.plot(
x,
y_data_2,
colors=["orange"],
axes_options={"y": {"side": "right", "color": "orange", "grid_lines": "none"}},
)
plt.show()
```
### Adding a label to the chart
```
plt.figure()
line = plt.plot(dates_all, final_prices)
plt.show()
## adds the label to the figure created above
_ = plt.label(
["Pie Day"],
x=[np.datetime64("2007-03-14")],
y=[final_prices.mean()],
scales=line.scales,
colors=["orangered"],
)
```
### Changing context figure
```
plt.figure(1)
plt.plot(x, y_data_3)
plt.show()
plt.figure(2)
plt.plot(x[:20], y_data_3[:20])
plt.show()
```
### Re-editing first figure
```
## adds the new line to the first figure
fig = plt.figure(1, title="New title")
plt.plot(x, y_data, colors=["orangered"])
fig
```
### Viewing the properties of the figure
```
marks = plt.current_figure().marks
marks[0].get_state()
```
### Showing a second view of the first figure
```
plt.show()
```
### Clearing the figure
```
### Clearing the figure above
plt.clear()
```
### Deleting a figure and all its views.
```
plt.show(2)
plt.close(2)
```
## Interactions in Pyplot
### Brush Selector
```
fig = plt.figure()
plt.scatter(y_data_2, y_data_3, colors=["orange"], stroke="black")
label = widgets.Label()
def callback(name, value):
label.value = str(value)
## click and drag on the figure to see the selector
plt.brush_selector(callback)
widgets.VBox([fig, label])
```
### Fast Interval Selector
```
fig = plt.figure()
n = 100
plt.plot(np.arange(n), y_data_3)
label = widgets.Label()
def callback(name, value):
label.value = str(value)
## click on the figure to activate the selector
plt.int_selector(callback)
widgets.VBox([fig, label])
```
### Brush Interval Selector with call back on brushing
```
# click and drag on chart to make a selection
def callback(name, value):
label.value = "Brushing: " + str(value)
_ = plt.brush_int_selector(callback, "brushing")
```
```
import pandas as pd
import numpy as np
import json
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.metrics import r2_score
import descriptors.preprocessing as pp
import descriptors.dft_featurisation as dft_ft
import descriptors.rdkit_featurisation as rdkit_ft
from analysis import analysis_train_set_size, random_split, stratified_split
from rdkit import Chem
estimators = [('predictor', RandomForestRegressor())]
pipe = Pipeline(estimators)
metric = r2_score
df_dft = pd.read_csv("data/NiCOlit.csv", sep = ',')
df_dft = pp.preprocess(df_dft)
indexes_kept_dft = np.array(df_dft.index)
X_dft, y_dft, DOI_dft, mechanisms_dft, origins_dft, sub_dft, lig_dft = dft_ft.process_dataframe_dft(df_dft, data_path="data/utils/", origin=False)
# TODO: clean
r2 = []
length = []
for sub in np.unique(sub_dft):
indexes = np.where(sub_dft==sub)[0]
values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_dft[indexes, :], y_dft[indexes], origins_dft[indexes], sub_dft[indexes], n_iterations=100)
print(sub)
print(len(indexes))
print(round(r2_score(values, model_values), 3))
r2.append(round(r2_score(values, model_values), 3))
length.append(len(indexes))
values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_dft, y_dft, origins_dft, sub_dft, n_iterations=100)
R2_scores_full = []
for sub in np.unique(sub_dft):
indexes = np.where(np.array(additional_stratification_values)==sub)[0]
print(sub)
print(round(r2_score(np.array(values)[indexes], np.array(model_values)[indexes]),3))
R2_scores_full.append(round(r2_score(np.array(values)[indexes], np.array(model_values)[indexes]),3))
# resulting in 14100 values for 50 iterations
print(np.unique(sub_dft))
print(r2)
print(R2_scores_full)
df_dft['substrate_class'] = sub_dft
for sub in np.unique(sub_dft):
indexes = np.where(sub_dft==sub)[0]
print(sub, ":\n","DOIs:", len(np.unique(DOI_dft[indexes])),
"\n A-X:", len(np.unique(mechanisms_dft[indexes])),
":\n","n_reactions:", len(np.where(df_dft.substrate_class==sub)[0]))
for sub in np.unique(sub_dft):
indexes = np.where(sub_dft==sub)[0]
np.where(df_dft.substrate_class==sub)
df_dft.columns
np.unique(sub_dft)
sub = 'OSi(C)(C)C'
indexes = np.where(sub_dft==sub)[0]
smis = np.unique(df_dft.substrate[indexes])
print(len(smis))
from rdkit.Chem import Draw
from rdkit import Chem
Draw.MolsToGridImage([Chem.MolFromSmiles(smi) for smi in smis], maxMols=500)
DOIs = np.unique(df_dft.DOI[indexes])
DOIs
doi = 'https://doi.org/10.1021/jo1024464'
dois_indexes = np.where(df_dft.DOI == doi)[0]
smis = np.unique(df_dft.substrate[dois_indexes])
Draw.MolsToGridImage([Chem.MolFromSmiles(smi) for smi in smis])
```
# Implement a stack using an array
In this notebook, we'll look at one way to implement a stack. First, check out the walkthrough for an overview, and then you'll get some practice implementing it for yourself.
<span class="graffiti-highlight graffiti-id_e0wuf6a-id_1ldgv9h"><i></i><button>Walkthrough</button></span>

Below we'll go through the implementation step by step. Each step has a walkthrough and also a solution. We recommend that you first watch the walkthrough, and then try to write the code on your own.
When you first try to remember and write out the code for yourself, this effort helps you understand and remember the ideas better. At the same time, it's normal to get stuck and need a refresher—so don't hesitate to use the *Show Solution* buttons when you need them.
## Functionality
Our goal will be to implement a `Stack` class that has the following behaviors:
1. `push` - adds an item to the top of the stack
2. `pop` - removes an item from the top of the stack (and returns the value of that item)
3. `size` - returns the size of the stack
4. `top` - returns the value of the item at the top of stack (without removing that item)
5. `is_empty` - returns `True` if the stack is empty and `False` otherwise
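Before building this from scratch, note that a plain Python list already exhibits all of these behaviors; the exercise reimplements them on a fixed-size array to expose the mechanics:

```python
# Stack behavior using a built-in list, for comparison
stack = []
stack.append('a')            # push
stack.append('b')
assert stack[-1] == 'b'      # top: peek without removing
assert len(stack) == 2       # size
assert stack.pop() == 'b'    # pop removes and returns the top item
assert stack == ['a']
assert len(stack) > 0        # not empty
```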
## 1. Create and initialize the `Stack` class
First, have a look at the walkthrough:
<span class="graffiti-highlight graffiti-id_21k0jl7-id_1axklhx"><i></i><button>Walkthrough</button></span>
In the cell below:
* Define a class named `Stack` and add the `__init__` method
* Initialize the `arr` attribute with an array containing 10 elements, like this: `[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]`
* Initialize the `next_index` attribute
* Initialize the `num_elements` attribute
```
class Stack:
def __init__(self, initial_size = 10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.num_elements = 0
```
Let's check that the array is being initialized correctly. We can create a `Stack` object and access the `arr` attribute, and we should see our ten-element array:
```
foo = Stack()
print(foo.arr)
print("Pass" if foo.arr == [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] else "Fail")
```
<span class="graffiti-highlight graffiti-id_eltfzj4-id_cufd3qq"><i></i><button>Show Solution</button></span>
## 2. Add the `push` method
Next, we need to define our `push` method, so that we have a way of adding elements to the top of the stack.
<span class="graffiti-highlight graffiti-id_syp82vq-id_06cws2s"><i></i><button>Walkthrough</button></span>
Now give it a try for yourself. Here's are the key things to include:
* The method will need to have a parameter for the value that you want to push
* Remember that `next_index` will have the index for where the value should be added
* Once you've added the value, you'll want to increment both `next_index` and `num_elements`
```
class Stack:
def __init__(self, initial_size = 10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.num_elements = 0
def push(self, data):
self.arr[self.next_index] = data
self.next_index += 1
self.num_elements += 1
```
Let's test it by creating a stack object and pushing an item onto the stack:
```
foo = Stack()
foo.push("Test!")
print(foo.arr)
print("Pass" if foo.arr[0] == "Test!" else "Fail")
```
<span class="graffiti-highlight graffiti-id_l646osb-id_14qzzul"><i></i><button>Show Solution</button></span>
## 3. Handle full capacity
Great, the `push` method seems to be working fine! But we know that it's not done yet. If we keep pushing items onto the stack, eventually we will run out of room in the array. Currently, that will cause an `Index out of range` error. In order to avoid a stack overflow, we need to check the capacity of the array before pushing an item to the stack. And if the array is full, we need to increase the array size before pushing the new element.
<span class="graffiti-highlight graffiti-id_vhjw9tp-id_iv1zb1u"><i></i><button>Walkthrough</button></span>
First, define the `_handle_stack_capacity_full` method:
* Define an `old_arr` variable and assign it the current (full) array
* Create a new (larger) array and assign it to `arr`.
* Iterate over the values in the old array and copy them to the new array.
Then, in the `push` method:
* Add a conditional to check if the array is full; if it is, call the `_handle_stack_capacity_full`
```
class Stack:
def __init__(self, initial_size = 10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.num_elements = 0
    def push(self, data):
        # TODO: Add a conditional to check for full capacity.
        # Grow the array *before* pushing, so the new item is never lost.
        if self.next_index == len(self.arr):
            print("Out of space! Increasing array capacity ...")
            self._handle_stack_capacity_full()
        self.arr[self.next_index] = data
        self.next_index += 1
        self.num_elements += 1
# TODO: Add the _handle_stack_capacity_full method
def _handle_stack_capacity_full(self):
old_arr = self.arr
self.arr = [0 for _ in range(2 * len(old_arr))]
for index, element in enumerate(old_arr):
self.arr[index] = element
```
We can test this by pushing items onto the stack until we exceed the original capacity. Let's try it and see if we get an error, or if the array size gets increased like we want it to.
```
foo = Stack()
foo.push(1)
foo.push(2)
foo.push(3)
foo.push(4)
foo.push(5)
foo.push(6)
foo.push(7)
foo.push(8)
foo.push(9)
foo.push(10) # The array is now at capacity!
foo.push(11) # This one should cause the array to increase in size
print(foo.arr) # Let's see what the array looks like now!
print("Pass" if len(foo.arr) == 20 else "Fail") # If we successfully doubled the array size, it should now be 20.
```
<span class="graffiti-highlight graffiti-id_ntkl64o-id_174d370"><i></i><button>Show Solution</button></span>
## 4. Add the `size` and `is_empty` methods
Next, we need to add a couple of simple methods:
* Add a `size` method that returns the current size of the stack
* Add an `is_empty` method that returns `True` if the stack is empty and `False` otherwise
(This one is pretty straightforward, so there's no walkthrough—but there's still solution code below if you should need it.)
```
class Stack:
def __init__(self, initial_size = 10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.num_elements = 0
def push(self, data):
if self.next_index == len(self.arr):
print("Out of space! Increasing array capacity ...")
self._handle_stack_capacity_full()
self.arr[self.next_index] = data
self.next_index += 1
self.num_elements += 1
# TODO: Add the size method
def size(self):
return self.num_elements
# TODO: Add the is_empty method
def is_empty(self):
return self.num_elements == 0
def _handle_stack_capacity_full(self):
old_arr = self.arr
self.arr = [0 for _ in range( 2* len(old_arr))]
for index, value in enumerate(old_arr):
self.arr[index] = value
```
Let's test the new methods:
```
foo = Stack()
print(foo.size()) # Should return 0
print(foo.is_empty()) # Should return True
foo.push("Test") # Let's push an item onto the stack and check again
print(foo.size()) # Should return 1
print(foo.is_empty()) # Should return False
```
<span class="graffiti-highlight graffiti-id_gmxq7fd-id_ptatjto"><i></i><button>Show Solution</button></span>
## 5. Add the `pop` method
The last thing we need to do is add the `pop` method.
<span class="graffiti-highlight graffiti-id_33sdd37-id_a5x7quf"><i></i><button>Walkthrough</button></span>
The method needs to:
* Check if the stack is empty and, if it is, return `None`
* Decrement `next_index` and `num_elements`
* Return the item that is being "popped"
```
class Stack:
def __init__(self, initial_size = 10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.num_elements = 0
def push(self, data):
if self.next_index == len(self.arr):
print("Out of space! Increasing array capacity ...")
self._handle_stack_capacity_full()
self.arr[self.next_index] = data
self.next_index += 1
self.num_elements += 1
# TODO: Add the pop method
def pop(self):
if self.is_empty():
self.next_index = 0
return None
self.next_index -= 1
self.num_elements -= 1
return self.arr[self.next_index]
def size(self):
return self.num_elements
def is_empty(self):
return self.num_elements == 0
def _handle_stack_capacity_full(self):
old_arr = self.arr
self.arr = [0 for _ in range( 2* len(old_arr))]
for index, value in enumerate(old_arr):
self.arr[index] = value
```
Let's test the `pop` method:
```
foo = Stack()
foo.push("Test") # We first have to push an item so that we'll have something to pop
print(foo.pop()) # Should return the popped item, which is "Test"
print(foo.pop()) # Should return None, since there's nothing left in the stack
```
<span class="graffiti-highlight graffiti-id_aydi240-id_2ed7qdm"><i></i><button>Show Solution</button></span>
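One loose end: the functionality list at the top also mentions a `top` method, which the walkthrough never implements. A minimal sketch, following the same attribute names, condensed into a self-contained class:

```python
class Stack:
    def __init__(self, initial_size=10):
        self.arr = [0 for _ in range(initial_size)]
        self.next_index = 0
        self.num_elements = 0

    def push(self, data):
        if self.next_index == len(self.arr):
            self._handle_stack_capacity_full()
        self.arr[self.next_index] = data
        self.next_index += 1
        self.num_elements += 1

    def is_empty(self):
        return self.num_elements == 0

    def top(self):
        # Peek at the top value without removing it; None if empty
        if self.is_empty():
            return None
        return self.arr[self.next_index - 1]

    def _handle_stack_capacity_full(self):
        old_arr = self.arr
        self.arr = [0 for _ in range(2 * len(old_arr))]
        for index, value in enumerate(old_arr):
            self.arr[index] = value

foo = Stack()
print(foo.top())   # None, since the stack is empty
foo.push("Test")
print(foo.top())   # "Test", and the item stays on the stack
```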
Done!
# Time Series Forecasting Assignment
For Part 1, Choose _either_ Option A or Option B <br>
For Part 2, Find a time series (either univariate, or multivariate) and apply the time series methods from Part 1 to analyze it.
## Part 1, Option A: Software Engineering (1.5 to 2 hours max)
Write a `ForecastingToolkit` class
that packages up the workflow of time series forecasting, that we learned from today's Lecture Notebook. Add any desired "bells and whistles" to make it even better!
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from pandas import read_csv
import math
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Activation, Dropout
from tensorflow.keras import regularizers
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import time  # helper library
class ForecastingToolkit(object):
def __init__(self, df = None, model = None):
        """
        Variables that we previously passed into standalone functions are now
        stored as instance attributes.
        """
# here are a few to get you started
# store data here
self.df = df
# store your forecasting model here
self.model = model
# store feature scalers here
self.scaler_dict = None
# store the training results of your model here
self.history = None
def load_transform_data(self):
pass
def scale_data(self):
pass
def invert_scaling(self):
pass
def create_dataset(self):
pass
    def create_train_test_split(self):
pass
def build_model(self):
pass
def fit_model(self):
pass
def predict(self):
pass
def plot_model_loss_metrics(self):
pass
```
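As one example of how a method might be filled in, `create_dataset` in this kind of workflow typically slices the series into overlapping look-back windows and next-step targets. A sketch (the exact window shape is an assumption):

```python
import numpy as np

def create_dataset(series, look_back=3):
    """Turn a 1-D series into (samples, look_back) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])   # input window
        y.append(series[i + look_back])     # value immediately after the window
    return np.array(X), np.array(y)

series = np.arange(10.0)
X, y = create_dataset(series, look_back=3)
print(X.shape, y.shape)  # (7, 3) (7,)
print(X[0], y[0])        # [0. 1. 2.] 3.0
```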
----
```
# once you've completed your class, you'll be able to perform a many operations with just a few lines of code!
tstk = ForecastingToolkit()
tstk.load_transform_data()
tstk.scale_data()
tstk.build_model()
tstk.fit_model()
tstk.plot_model_loss_metrics()
```
## Part 1, Option B: A Deeper Dive in Time-Series Forecasting (1.5 to 2 hours max)
Work through this notebook [time_series_forecasting](https://drive.google.com/file/d/1RgyaO9zuZ90vWEzQWo1iVip1Me7oiHiO/view?usp=sharing), which compares a number of forecasting methods and in the end finds that 1 Dimensional Convolutional Neural Networks is even better than LSTMs!
## Part 2 Time series forecasting on a real data set (2 hours max)
Use one or more time series forecasting methods (from either Part 1A or Part 1B) to make forecasts on a real time series data set.<br> If time permits, perform hyperparameter tuning to make the forecasts as good as possible. <br>Report the MAE (mean absolute error) of your forecast, and compare to a naive baseline model. <br>Are you getting good forecasts? Why or why not?
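For the baseline comparison, a persistence forecast (predict the previous observation) is the usual naive model; a toy sketch with made-up numbers:

```python
import numpy as np

actual = np.array([10.0, 12.0, 13.0, 12.0, 15.0])
model_forecast = np.array([11.0, 12.5, 12.0, 13.0, 14.0])

# Naive (persistence) baseline: predict the previous observation
naive_forecast = actual[:-1]                        # forecasts for t = 1..4
mae_naive = np.mean(np.abs(actual[1:] - naive_forecast))
mae_model = np.mean(np.abs(actual[1:] - model_forecast[1:]))

print(f"naive MAE: {mae_naive:.3f}, model MAE: {mae_model:.3f}")
```

A model that cannot beat the persistence MAE is not adding forecasting value.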
## Time Series Data Sets
Here are a half-dozen or so datasets you could choose from: [7 Time Series Datasets for Machine Learning](https://machinelearningmastery.com/time-series-datasets-for-machine-learning/)<br>
OR: Freely available time series data is plentiful on the WWW.
Feel free to choose any time series data set of interest!
## Your Time Series Forecasting Code
Describe the time series forecasting problem you will solve, and which method you will use.
```
## YOUR CODE HERE
```
# Solving Knapsack Problem with Amazon SageMaker RL
Knapsack is a canonical operations research problem. We start with a bag and a set of items. We choose which items to put in the bag. Our objective is to maximize the value of the items in the bag; but we cannot put all the items in as the bag capacity is limited. The problem is hard because the items have different values and weights, and there are many combinations to consider.
In the classic version of the problem, we pick the items in one shot. But in this baseline, we instead consider the items one at a time over a fixed time horizon.
## Problem Statement
We start with an empty bag and an item. We need to either put the item in the bag or throw it away. If we put it in the bag, we get a reward equal to the value of the item. If we throw the item away, we get a fixed penalty. In case the bag is too full to accommodate the item, we are forced to throw it away.
In the next step, another item appears and we need to decide again if we want to put it in the bag or throw it away. This process repeats for a fixed number of steps.
Since we do not know the value and weight of items that will come in the future, and the bag can only hold so many items, it is not obvious what is the right thing to do.
At each time step, our agent is aware of the following information:
- Weight capacity of the bag
- Volume capacity of the bag
- Sum of item weight in the bag
- Sum of item volume in the bag
- Sum of item value in the bag
- Current item weight
- Current item volume
- Current item value
- Time remaining
At each time step, our agent can take one of the following actions:
- Put the item in the bag
- Throw the item away
At each time step, our agent gets the following reward depending on their action:
- Item value if you put it in the bag and bag does not overflow
- A penalty if you throw the item away or if the item does not fit in the bag
The time horizon is 20 steps. You can see the specifics in the `KnapSackMediumEnv` class in `knapsack_env.py`. There are a couple of other classes that provide an easier (`KnapSackEnv`) and a more difficult version (`KnapSackHardEnv`) of this problem.
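The step-reward rule above can be sketched as a small function. This is a simplified illustration restricted to the weight constraint, and the penalty value and names are assumptions; the actual numbers live in `knapsack_env.py`:

```python
def knapsack_reward(action, item_value, item_weight, bag_weight,
                    weight_capacity, penalty=-1.0):
    """One step's reward: item value if the item is taken and fits,
    otherwise a fixed penalty (thrown away, or does not fit)."""
    if action == "put" and bag_weight + item_weight <= weight_capacity:
        return item_value, bag_weight + item_weight
    return penalty, bag_weight

# Item fits: reward is its value, bag weight grows
reward, new_weight = knapsack_reward("put", item_value=5.0, item_weight=2.0,
                                     bag_weight=8.0, weight_capacity=10.0)
print(reward, new_weight)  # 5.0 10.0
```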
## Using Amazon SageMaker RL
Amazon SageMaker RL allows you to train your RL agents in cloud machines using docker containers. You do not have to worry about setting up your machines with the RL toolkits and deep learning frameworks. You can easily switch between many different machines setup for you, including powerful GPU machines that give a big speedup. You can also choose to use multiple machines in a cluster to further speedup training, often necessary for production level loads.
### Pre-requisites
#### Imports
To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
```
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import HTML
import time
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
```
#### Settings
You can run this notebook from your local host or from a SageMaker notebook instance. In both of these scenarios, you can run the following in either `local` or `SageMaker` modes. The `local` mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set `local_mode = True`.
```
# run in local mode?
local_mode = False
# create unique job name
job_name_prefix = "rl-knapsack"
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
print("Using s3 bucket %s" % s3_bucket) # create this bucket if it doesn't exist
s3_output_path = "s3://{}/".format(s3_bucket) # SDK appends the job name and output folder
```
#### Install docker for `local` mode
In order to work in `local` mode, you need to have docker installed. When running from your local instance, please make sure that you have docker or docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script
Note, you can only run a single local notebook at one time.
```
if local_mode:
!/bin/bash ./common/setup.sh
```
#### Create an IAM role
Either get the execution role when running from a SageMaker notebook with `role = sagemaker.get_execution_role()` or, when running locally, set it to an IAM role with `AmazonSageMakerFullAccess` and `CloudWatchFullAccess` permissions.
```
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role()
print("Using IAM role arn: {}".format(role))
```
#### Setup the environment
The environment is defined in a Python file called `knapsack_env.py` in the `./src` directory. It implements the `init()`, `step()`, `reset()` and `render()` functions that describe how the environment behaves. This is consistent with the OpenAI Gym interface for defining an environment.
- `init()` - initialize the environment in a pre-defined state
- `step()` - take an action on the environment
- `reset()` - restart the environment on a new episode
- `render()` - get a rendered image of the environment in its current state
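As a rough illustration, an environment exposing that interface could be sketched as a self-contained toy class. Note this is only a sketch under stated assumptions — the item counts, weights, values, and reward scheme below are illustrative, not the actual contents of `knapsack_env.py`:

```
# Toy Gym-style knapsack environment (illustrative assumptions throughout).
import random

class SimpleKnapsackEnv:
    def __init__(self, n_items=5, capacity=10, seed=0):
        self.n_items = n_items
        self.capacity = capacity
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # Start a new episode: draw fresh item weights/values, empty knapsack.
        self.weights = [self.rng.randint(1, 5) for _ in range(self.n_items)]
        self.values = [self.rng.randint(1, 10) for _ in range(self.n_items)]
        self.remaining = self.capacity
        self.step_idx = 0
        return self._obs()

    def _obs(self):
        return (self.weights, self.values, self.remaining, self.step_idx)

    def step(self, action):
        # action: 1 = take the current item, 0 = skip it.
        # Reward is the item's value if it is taken and fits, else 0.
        reward = 0.0
        if action == 1 and self.weights[self.step_idx] <= self.remaining:
            self.remaining -= self.weights[self.step_idx]
            reward = float(self.values[self.step_idx])
        self.step_idx += 1
        done = self.step_idx >= self.n_items
        return self._obs(), reward, done, {}

    def render(self):
        # Text rendering stands in for the image rendering described above.
        return "capacity left: {}".format(self.remaining)
```

An agent would then loop over `env.step(action)` until `done` is `True`, exactly as the training code does against the real environment.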
#### Configure the presets for RL algorithm
The presets that configure the RL training jobs are defined in `preset-knapsack-clippedppo.py` in the `./src` directory. Using the preset file, you can define agent parameters to select the specific agent algorithm. You can also set the environment parameters, define the schedule and visualization parameters, and define the graph manager. The schedule presets define the number of heat-up steps, periodic evaluation steps, and training steps between evaluations.
These presets can be overridden at runtime by specifying the `RLCOACH_PRESET` hyperparameter. Additionally, hyperparameters can be used to define custom parameters.
```
!pygmentize src/preset-knapsack-clippedppo.py
```
#### Write the Training Code
The training code is in the file `train-coach.py`, which is also in the `./src` directory.
```
!pygmentize src/train-coach.py
```
### Train the model using Python SDK/ script mode
If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs.
- Specify the source directory from which the environment, presets, and training code are uploaded.
- Specify the entry point as the training code
- Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.
- Define the training parameters such as the instance count, base job name, and S3 path for output.
- Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET can be used to specify the RL agent algorithm you want to use.
- Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
```
if local_mode:
instance_type = "local"
else:
instance_type = "ml.m4.4xlarge"
estimator = RLEstimator(
entry_point="train-coach.py",
source_dir="src",
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version="1.0.0",
framework=RLFramework.TENSORFLOW,
role=role,
instance_type=instance_type,
instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
hyperparameters={
"RLCOACH_PRESET": "preset-knapsack-clippedppo",
"rl.agent_params.algorithm.discount": 0.9,
"rl.evaluation_steps:EnvironmentEpisodes": 8,
},
)
estimator.fit(wait=local_mode)
```
### Store intermediate training output and model checkpoints
The output from the training job above is stored on S3. The intermediate folder contains GIFs and metadata of the training run.
```
job_name = estimator._current_job_name
print("Job name: {}".format(job_name))
s3_url = "s3://{}/{}".format(s3_bucket, job_name)
if local_mode:
output_tar_key = "{}/output.tar.gz".format(job_name)
else:
output_tar_key = "{}/output/output.tar.gz".format(job_name)
intermediate_folder_key = "{}/output/intermediate".format(job_name)
output_url = "s3://{}/{}".format(s3_bucket, output_tar_key)
intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key)
print("S3 job path: {}".format(s3_url))
print("Output.tar.gz location: {}".format(output_url))
print("Intermediate folder path: {}".format(intermediate_url))
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
```
### Visualization
#### Plot metrics for training job
We can pull the reward metric of the training and plot it to see the performance of the model over time.
```
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=["Training Reward"])
x_axis = "Episode #"
y_axis = "Training Reward"
ax = df.plot(x=x_axis, y=y_axis, figsize=(12, 5), legend=True, style="b-")
ax.set_ylabel(y_axis)
ax.set_xlabel(x_axis);
```
#### Visualize the rendered gifs
The latest GIF file found in the gifs directory is displayed. You can change the selected index below to visualize other generated files.
```
key = intermediate_folder_key + "/gifs"
wait_for_s3_object(s3_bucket, key, tmp_dir)
print("Copied gifs files to {}".format(tmp_dir))
glob_pattern = os.path.join("{}/*.gif".format(tmp_dir))
gifs = [file for file in glob.iglob(glob_pattern, recursive=True)]
extract_episode = lambda string: int(
    re.search(r".*episode-(\d*)_.*", string, re.IGNORECASE).group(1)
)
gifs.sort(key=extract_episode)
print("GIFs found:\n{}".format("\n".join([os.path.basename(gif) for gif in gifs])))
# visualize a specific episode
gif_index = -1 # since we want last gif
gif_filepath = gifs[gif_index]
gif_filename = os.path.basename(gif_filepath)
print("Selected GIF: {}".format(gif_filename))
os.system(
"mkdir -p ./src/tmp_render/ && cp {} ./src/tmp_render/{}.gif".format(gif_filepath, gif_filename)
)
HTML('<img src="./src/tmp_render/{}.gif">'.format(gif_filename))
```
### Evaluation of RL models
We use the last checkpointed model to run evaluation for the RL Agent.
#### Load checkpointed model
Checkpointed data from the previously trained models will be passed on for evaluation / inference in the checkpoint channel. In local mode, we can simply use the local directory, whereas in the SageMaker mode, it needs to be moved to S3 first.
```
wait_for_s3_object(s3_bucket, output_tar_key, tmp_dir, timeout=1800)
if not os.path.isfile("{}/output.tar.gz".format(tmp_dir)):
raise FileNotFoundError("File output.tar.gz not found")
os.system("tar -xvzf {}/output.tar.gz -C {}".format(tmp_dir, tmp_dir))
if local_mode:
checkpoint_dir = "{}/data/checkpoint".format(tmp_dir)
else:
checkpoint_dir = "{}/checkpoint".format(tmp_dir)
print("Checkpoint directory {}".format(checkpoint_dir))
if local_mode:
checkpoint_path = "file://{}".format(checkpoint_dir)
print("Local checkpoint file path: {}".format(checkpoint_path))
else:
checkpoint_path = "s3://{}/{}/checkpoint/".format(s3_bucket, job_name)
if not os.listdir(checkpoint_dir):
raise FileNotFoundError("Checkpoint files not found under the path")
os.system("aws s3 cp --recursive {} {}".format(checkpoint_dir, checkpoint_path))
print("S3 checkpoint file path: {}".format(checkpoint_path))
```
#### Run the evaluation step
Use the checkpointed model to run the evaluation step.
```
estimator_eval = RLEstimator(
role=role,
source_dir="src/",
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version="1.0.0",
framework=RLFramework.TENSORFLOW,
entry_point="evaluate-coach.py",
instance_count=1,
instance_type=instance_type,
base_job_name=job_name_prefix + "-evaluation",
hyperparameters={
"RLCOACH_PRESET": "preset-knapsack-clippedppo",
"evaluate_steps": 250, # 5 episodes
},
)
estimator_eval.fit({"checkpoint": checkpoint_path})
```
### Visualize the output
Optionally, you can run the steps defined earlier to visualize the output
Before you begin, make sure everything runs as expected. First, **restart the kernel** (in the menu bar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menu bar, select Cell$\rightarrow$Run All).
Make sure you fill in every place that says `TU CÓDIGO AQUI` or `TU RESPUESTA AQUÍ`.
```
ANONYMOUS_ID="student"
```
## Solving problems with loops
### Counting the elements of a list
```
lista = [9,13,2,7,124,-5]
print(len(lista))
```
How would you do it without the `len` function?
```
#TU RESPUESTA AQUÍ
```
<details>
<summary>Click here to see the solution</summary>
<pre><code>
cuenta = 0
print(f'we initialize the count to: {cuenta}')
for elemento in lista:
    cuenta = cuenta + 1
    print(f'the current value is: {cuenta}')
print(f'the final result is: {cuenta}')
</code></pre>
</details>
### Summing the values of the elements
```
lista = [9,13,2,7,124,-5]
print(sum(lista))
```
How would you do it without the `sum` function?
```
#TU RESPUESTA AQUÍ
```
<details>
<summary>Click here to see the solution</summary>
<pre><code>
suma = 0
print(f'the initial value is: {suma}')
for elemento in lista:
    suma = suma + elemento
    print(f'the running total is: {suma}')
print(f'the sum of the elements is: {suma}')
</code></pre>
</details>
### Computing the mean of the element values
```
lista = [9,13,2,7,124,-5]
print(sum(lista)/len(lista))
```
How would you do it without `sum` or `len`?
```
#TU RESPUESTA AQUÍ
```
<details>
<summary>Click here to see the solution</summary>
<pre><code>
suma = 0
cuenta = 0
print(f'the initial value is: {suma}')
for elemento in lista:
    suma = suma + elemento
    cuenta = cuenta + 1
    print(f'the running sum is: {suma}')
    print(f'the running count is: {cuenta}')
print(f'the mean of the elements is: {suma/cuenta}')
</code></pre>
</details>
What if you wanted the mean of only the positive values in the list?
```
#TU RESPUESTA AQUÍ
```
<details>
<summary>Click here to see the solution</summary>
<pre><code>
# only accumulate elements that are strictly positive
suma = 0
cuenta = 0
for elemento in lista:
    if elemento > 0:
        suma = suma + elemento
        cuenta = cuenta + 1
        print(f'the running sum is: {suma}')
        print(f'the running count is: {cuenta}')
print(f'the mean of the positive elements is: {suma/cuenta}')
</code></pre>
</details>
### Checking whether an element exists
```
lista = [9,13,2,7,124,-5]
print(7 in lista)
```
How would you do it without using `in`?
```
#TU RESPUESTA AQUÍ
```
<details>
<summary>Click here to see the solution</summary>
<pre><code>
existe = False
elementoBuscado = 7
for item in lista:
    if item == elementoBuscado:
        existe = True
        break
if existe:
    print(f"The element {elementoBuscado} is in the list")
else:
    print(f"We could not find {elementoBuscado} in the list")
</code></pre>
</details>
### Finding the largest/smallest element of the list
```
lista = [9,13,2,7,124,-5]
print(max(lista))
print(min(lista))
```
How would you do it without using `max`/`min`?
```
#TU RESPUESTA AQUÍ
```
<details>
<summary>Click here to see the solution</summary>
<pre><code>
_max = None
_min = None
for item in lista:
    if _max is None and _min is None:
        _max = item
        _min = item
    else:
        if _max < item:
            _max = item
        if _min > item:
            _min = item
print(f'the largest element of the list is {_max}')
print(f'the smallest element of the list is {_min}')
</code></pre>
</details>
```
from google.colab import drive
drive.mount('/content/gdrive')
cd /content/gdrive/'My Drive'/'directory-with-ssl-resnet'
#dataset preparation
!unzip /content/gdrive/'My Drive'/NSFW_Classification/data.zip -d /content/
import argparse
import os
import random
import shutil
import time
import warnings
import numpy as np
from imutils import paths
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim
import torch.utils.data
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
from torchvision import transforms
from torch.utils.data import DataLoader
from resnet_wider import resnet50x1, resnet50x2, resnet50x4
# create model
model = resnet50x1()
sd = 'resnet50-1x.pth'
'''
elif args.arch == 'resnet50-2x':
model = resnet50x2()
sd = 'resnet50-2x.pth'
elif args.arch == 'resnet50-4x':
model = resnet50x4()
sd = 'resnet50-4x.pth'
else:
raise NotImplementedError
'''
sd = torch.load(sd, map_location='cpu')
model.load_state_dict(sd['state_dict'])
model.fc = nn.Identity()
model = (model).to('cuda')
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss()
cudnn.benchmark = True
print("Model successfully loaded")
class TransformsSimCLR:
    def __init__(self, size):
        self.test_transform = torchvision.transforms.Compose(
            [
                torchvision.transforms.Resize(size=size),
                torchvision.transforms.ToTensor(),
            ]
        )
    def __call__(self, x):
        # return two views of the same image; the original referenced an
        # undefined train_transform, so use the transform defined above
        return self.test_transform(x), self.test_transform(x)
import os
os.mkdir('models')
os.mkdir('history')
size = 224
image_transforms = {
'train':transforms.Compose([
transforms.Resize(size=size),
transforms.ToTensor()
])
}
# Set the train, test and validation directory
train_directory = '/content/data/train'
test_directory = '/content/data/test'
valid_directory = '/content/data/valid'
# Setting batch size for training
batch_size=128
#Number of classes for the data
num_classes = 5
#Loading the data from the folders into the variable 'data'
data = {
'train': datasets.ImageFolder(root=train_directory, transform=image_transforms['train']),
'valid': datasets.ImageFolder(root=valid_directory, transform=image_transforms['train']),
'test': datasets.ImageFolder(root=test_directory, transform=image_transforms['train'])
}
#Find out the size of the data
train_data_size = len(data['train'])
test_data_size = len(data['test'])
val_data_size = len(data['valid'])
# Create iterators for the Data loaded using DataLoader module
train_loader = DataLoader(data['train'],batch_size=batch_size,shuffle=True)
test_loader = DataLoader(data['test'],batch_size=batch_size,shuffle=True)
val_loader = DataLoader(data['valid'],batch_size=batch_size,shuffle=True)
#Printing the sizes of the sets
print(train_data_size,test_data_size,val_data_size)
# For one image and text
for step, (x,y) in enumerate(train_loader):
print(x.shape)
x = x.to('cuda')
x = model(x)
print(x.shape)
break
def inference(loader, simclr_model, device):
feature_vector = []
labels_vector = []
for step, (x, y) in enumerate(loader):
x = x.to(device)
# get encoding
with torch.no_grad():
h = simclr_model(x)
h = h.detach()
feature_vector.extend(h.cpu().detach().numpy())
labels_vector.extend(y.numpy())
if step % 20 == 0:
print(f"Step [{step}/{len(loader)}]\t Computing features...")
feature_vector = np.array(feature_vector)
labels_vector = np.array(labels_vector)
print("Features shape {}".format(feature_vector.shape))
return feature_vector, labels_vector
def get_features(context_model, train_loader, test_loader, val_loader, device):
train_X, train_y = inference(train_loader, context_model, device)
print("Computed train features")
test_X, test_y = inference(test_loader, context_model, device)
print("Computed test features")
val_x, val_y = inference(val_loader, context_model, device)
return train_X, train_y, test_X, test_y, val_x, val_y
train_x, train_y, test_x, test_y, val_x, val_y = get_features(model,train_loader,test_loader,val_loader,'cuda')
#create dataloader from those features
def create_data_loaders_from_arrays(X_train, y_train, X_test, y_test, X_val, y_val, batch_size):
train = torch.utils.data.TensorDataset(
torch.from_numpy(X_train), torch.from_numpy(y_train)
)
train_loader = torch.utils.data.DataLoader(
train, batch_size=batch_size, shuffle=False
)
print("Trainloader successfully made")
val = torch.utils.data.TensorDataset(
torch.from_numpy(X_val), torch.from_numpy(y_val)
)
val_loader = torch.utils.data.DataLoader(
val, batch_size=batch_size, shuffle=False
)
print("Valloader successfully made")
test = torch.utils.data.TensorDataset(
torch.from_numpy(X_test), torch.from_numpy(y_test)
)
test_loader = torch.utils.data.DataLoader(
test, batch_size=batch_size, shuffle=False
)
print("Testloader successfully made")
return train_loader, test_loader,val_loader
train_loader, test_loader, val_loader = create_data_loaders_from_arrays(train_x, train_y, test_x, test_y, val_x, val_y, 128)
class LinearEvaluation(nn.Module):
def __init__(self,num_classes):
super(LinearEvaluation,self).__init__()
self.Linear = nn.Sequential(
nn.Linear(2048,num_classes,bias=False),
nn.Softmax(dim=1)
)
def forward(self,x):
x = self.Linear(x)
return x
linmodel = LinearEvaluation(5).to('cuda')
print(linmodel)
# For one image and label
for step, (x,y) in enumerate(train_loader):
print(x.shape)
print(y)
x = x.to('cuda')
x = linmodel(x)
print(x.shape)
break
#define train
def train(device, loader, model, criterion, optimizer,val_loader):
train_loss_epoch = 0
train_accuracy_epoch = 0
val_loss_epoch = 0
val_accuracy_epoch = 0
for step, (x, y) in enumerate(loader):
optimizer.zero_grad()
x = x.to(device)
y = y.to(device)
output = model(x)
loss = criterion(output, y)
predicted = output.argmax(1)
acc = (predicted == y).sum().item() / y.size(0)
train_accuracy_epoch += acc
loss.backward()
optimizer.step()
train_loss_epoch += loss.item()
# if step % 100 == 0:
# print(
# f"Step [{step}/{len(loader)}]\t Loss: {loss.item()}\t Accuracy: {acc}"
# )
with torch.no_grad():
model.eval()
for step, (x, y) in enumerate(val_loader):
model.zero_grad()
x = x.to(device)
y = y.to(device)
output = model(x)
loss = criterion(output, y)
predicted = output.argmax(1)
acc = (predicted == y).sum().item() / y.size(0)
val_accuracy_epoch += acc
val_loss_epoch += loss.item()
return train_loss_epoch, train_accuracy_epoch, val_loss_epoch, val_accuracy_epoch
#define test
def test(device, loader, model, criterion, optimizer):
loss_epoch = 0
accuracy_epoch = 0
model.eval()
for step, (x, y) in enumerate(loader):
model.zero_grad()
x = x.to(device)
y = y.to(device)
output = model(x)
loss = criterion(output, y)
predicted = output.argmax(1)
acc = (predicted == y).sum().item() / y.size(0)
accuracy_epoch += acc
loss_epoch += loss.item()
return loss_epoch, accuracy_epoch
# Redefine the evaluation head without the final Softmax:
# nn.CrossEntropyLoss expects raw logits (it applies log-softmax internally)
class LinearEvaluation(nn.Module):
    def __init__(self, num_classes):
        super(LinearEvaluation, self).__init__()
        self.Linear = nn.Sequential(
            nn.Linear(2048, num_classes, bias=False),
        )
def forward(self,x):
x = self.Linear(x)
return x
linmodel = LinearEvaluation(5).to('cuda')
print(linmodel)
optimizer = torch.optim.Adam(linmodel.parameters(), lr=0.0001)
criterion = torch.nn.CrossEntropyLoss()
epochs = 100
device = 'cuda'
for epoch in range(epochs):
train_loss_epoch, train_accuracy_epoch, val_loss_epoch, val_accuracy_epoch = train(device, train_loader, linmodel, criterion, optimizer,val_loader)
if epoch % 10 == 0:
print(f"Epoch [{epoch}/{epochs}]\t Train Loss: {train_loss_epoch / len(train_loader)}\t Train Accuracy: {train_accuracy_epoch / len(train_loader)}\t Val Loss: {val_loss_epoch / len(val_loader)}\t Accuracy: {val_accuracy_epoch / len(val_loader)}")
print("Training successfully completed")
# final testing
loss_epoch, accuracy_epoch = test(
device, test_loader,linmodel, criterion, optimizer
)
print(
f"[FINAL]\t Loss: {loss_epoch / len(test_loader)}\t Accuracy: {accuracy_epoch / len(test_loader)}"
)
```
```
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.WARN)
import pickle
import numpy as np
import os
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
import os
from tensorflow.python.client import device_lib
from collections import Counter
f = open('../Glove/word_embedding_glove', 'rb')
word_embedding = pickle.load(f)
f.close()
word_embedding = word_embedding[: len(word_embedding)-1]
f = open('../Glove/vocab_glove', 'rb')
vocab = pickle.load(f)
f.close()
word2id = dict((w, i) for i,w in enumerate(vocab))
id2word = dict((i, w) for i,w in enumerate(vocab))
unknown_token = "UNKNOWN_TOKEN"
f = open("train.pickle", 'rb')
full_data = pickle.load(f)
f.close()
# Model Description
sense_word = 'hard'
model_name = 'model-3'
model_dir = 'output/' + sense_word + '/' + model_name
save_dir = os.path.join(model_dir, "save/")
log_dir = os.path.join(model_dir, "log")
if not os.path.exists(model_dir):
os.mkdir(model_dir)
if not os.path.exists(save_dir):
os.mkdir(save_dir)
if not os.path.exists(log_dir):
os.mkdir(log_dir)
sense_counts = Counter(full_data[sense_word][1])
print(sense_counts)
total_count = len(full_data[sense_word][1])
sort_sense_counts = sense_counts.most_common()
vocab_sense = [k for k,v in sort_sense_counts]
freq_sense = [v for k,v in sort_sense_counts]
weights = np.multiply(6, [1 - count/total_count for count in freq_sense])
weights = weights.astype(np.float32)
print(weights)
# Parameters
mode = 'train'
num_senses = 3
batch_size = 64
vocab_size = len(vocab)
unk_vocab_size = 1
word_emb_size = len(word_embedding[0])
max_sent_size = 200
hidden_size = 100
keep_prob = 0.5
l2_lambda = 0.002
init_lr = 0.005
decay_steps = 500
decay_rate = 0.96
clip_norm = 1
clipping = True
# MODEL
x = tf.placeholder('int32', [batch_size, max_sent_size], name="x")
y = tf.placeholder('int32', [batch_size], name="y")
x_mask = tf.placeholder('bool', [batch_size, max_sent_size], name='x_mask')
is_train = tf.placeholder('bool', [], name='is_train')
word_emb_mat = tf.placeholder('float', [None, word_emb_size], name='emb_mat')
input_keep_prob = tf.cond(is_train,lambda:keep_prob, lambda:tf.constant(1.0))
x_len = tf.reduce_sum(tf.cast(x_mask, 'int32'), 1)
with tf.name_scope("word_embedding"):
if mode == 'train':
unk_word_emb_mat = tf.get_variable("word_emb_mat", dtype='float', shape=[unk_vocab_size, word_emb_size], initializer=tf.contrib.layers.xavier_initializer(uniform=True, seed=0, dtype=tf.float32))
else:
unk_word_emb_mat = tf.get_variable("word_emb_mat", shape=[unk_vocab_size, word_emb_size], dtype='float')
final_word_emb_mat = tf.concat([word_emb_mat, unk_word_emb_mat], 0)
Wx = tf.nn.embedding_lookup(final_word_emb_mat, x)
with tf.variable_scope("lstm1"):
cell_fw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True)
cell_bw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True)
d_cell_fw1 = tf.contrib.rnn.DropoutWrapper(cell_fw1, input_keep_prob=input_keep_prob)
d_cell_bw1 = tf.contrib.rnn.DropoutWrapper(cell_bw1, input_keep_prob=input_keep_prob)
(fw_h1, bw_h1), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw1, d_cell_bw1, Wx, sequence_length=x_len, dtype='float', scope='lstm1')
h1 = tf.concat([fw_h1, bw_h1], 2)
with tf.variable_scope("lstm2"):
cell_fw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True)
cell_bw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True)
d_cell_fw2 = tf.contrib.rnn.DropoutWrapper(cell_fw2, input_keep_prob=input_keep_prob)
d_cell_bw2 = tf.contrib.rnn.DropoutWrapper(cell_bw2, input_keep_prob=input_keep_prob)
(fw_h2, bw_h2), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw2, d_cell_bw2, h1, sequence_length=x_len, dtype='float', scope='lstm2')
h = tf.concat([fw_h2, bw_h2], 2)
def attention(input_x, input_mask, W_att):
h_masked = tf.boolean_mask(input_x, input_mask)
h_tanh = tf.tanh(h_masked)
u = tf.matmul(h_tanh, W_att)
a = tf.nn.softmax(u)
c = tf.reduce_sum(tf.multiply(h_tanh, a), 0)
return c
with tf.variable_scope("attention"):
W_att = tf.Variable(tf.truncated_normal([2*hidden_size, 1], mean=0.0, stddev=0.1, seed=0), name="W_att")
c = tf.expand_dims(attention(h[0], x_mask[0], W_att), 0)
for i in range(1, batch_size):
c = tf.concat([c, tf.expand_dims(attention(h[i], x_mask[i], W_att), 0)], 0)
with tf.variable_scope("softmax_layer"):
W = tf.Variable(tf.truncated_normal([2*hidden_size, num_senses], mean=0.0, stddev=0.1, seed=0), name="W")
b = tf.Variable(tf.zeros([num_senses]), name="b")
drop_c = tf.nn.dropout(c, input_keep_prob)
logits = tf.matmul(drop_c, W) + b
predictions = tf.argmax(logits, 1)
class_weight = tf.constant(weights)
weighted_logits = logits * class_weight
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=weighted_logits, labels=y))
global_step = tf.Variable(0, trainable=False, name="global_step")
learning_rate = tf.train.exponential_decay(init_lr, global_step, decay_steps, decay_rate, staircase=True)
tv_all = tf.trainable_variables()
tv_regu =[]
for t in tv_all:
if t.name.find('b:')==-1:
tv_regu.append(t)
# l2 Loss
l2_loss = l2_lambda * tf.reduce_sum([ tf.nn.l2_loss(v) for v in tv_regu ])
total_loss = loss + l2_loss
# Optimizer for loss
optimizer = tf.train.AdamOptimizer(learning_rate)
# Gradients and Variables for Loss
grads_vars = optimizer.compute_gradients(total_loss)
# Clipping of Gradients
clipped_grads = grads_vars
if(clipping == True):
clipped_grads = [(tf.clip_by_norm(grad, clip_norm), var) for grad, var in clipped_grads]
# Training Optimizer for Total Loss
train_op = optimizer.apply_gradients(clipped_grads, global_step=global_step)
# Summaries
var_summaries = []
for v in tv_all:
var_summary = tf.summary.histogram("{}/var".format(v.name), v)
var_summaries.append(var_summary)
var_summaries_merged = tf.summary.merge(var_summaries)
loss_summary = tf.summary.scalar("loss", loss)
total_loss_summary = tf.summary.scalar("total_loss", total_loss)
summary = tf.summary.merge_all()
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="1"
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer()) # For initializing all the variables
saver = tf.train.Saver() # For Saving the model
summary_writer = tf.summary.FileWriter(log_dir, sess.graph) # For writing Summaries
# Splitting
data_x = full_data[sense_word][0]
data_y = full_data[sense_word][2]
x_train, x_test, y_train, y_test = train_test_split(data_x, data_y, train_size=0.8, shuffle=True, stratify=data_y, random_state=0)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, train_size=0.9, shuffle=True, stratify=y_train, random_state=0)
def data_prepare(x):
num_examples = len(x)
xx = np.zeros([num_examples, max_sent_size], dtype=int)
xx_mask = np.zeros([num_examples, max_sent_size], dtype=bool)
for j in range(num_examples):
for i in range(max_sent_size):
if(i>=len(x[j])):
break
w = x[j][i]
xx[j][i] = word2id[w] if w in word2id else word2id['UNKNOWN_TOKEN']
xx_mask[j][i] = True
return xx, xx_mask
def eval_score(yy, pred):
num_batches = int(len(yy)/batch_size)
f1 = f1_score(yy[:batch_size*num_batches], pred, average='macro')
accu = accuracy_score(yy[:batch_size*num_batches], pred)
return f1*100, accu*100
def model(xx, yy, mask, train_cond=True):
num_batches = int(len(xx)/batch_size)
losses = 0
preds = []
for j in range(num_batches):
s = j * batch_size
e = (j+1) * batch_size
feed_dict = {x:xx[s:e], y:yy[s:e], x_mask:mask[s:e], is_train:train_cond, input_keep_prob:keep_prob, word_emb_mat:word_embedding}
if(train_cond==True):
_, _loss, step, _summary = sess.run([train_op, total_loss, global_step, summary], feed_dict)
summary_writer.add_summary(_summary, step)
# print("Steps:{}".format(step), ", Loss: {}".format(_loss))
else:
_loss, pred = sess.run([total_loss, predictions], feed_dict)
preds.append(pred)
losses +=_loss
if(train_cond==False):
y_pred = []
for i in range(num_batches):
for pred in preds[i]:
y_pred.append(pred)
return losses/num_batches, y_pred
return losses/num_batches, step
x_id_train, mask_train = data_prepare(x_train)
x_id_val, mask_val = data_prepare(x_val)
x_id_test, mask_test = data_prepare(x_test)
y_train = np.array(y_train)
num_epochs = 50
for i in range(num_epochs):
random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
x_id_train = x_id_train[random]
y_train = y_train[random]
mask_train = mask_train[random]
losses, step = model(x_id_train, y_train, mask_train)
print("Epoch:", i+1,"Step:", step, "loss:",losses)
if((i+1)%5==0):
saver.save(sess, save_path=save_dir)
print("Saved Model Complete")
train_loss, train_pred = model(x_id_train, y_train, mask_train, train_cond=False)
f1_, accu_ = eval_score(y_train, train_pred)
print("Train: F1 Score: ", f1_, "Accuracy: ", accu_, "Loss: ", train_loss)
val_loss, val_pred = model(x_id_val, y_val, mask_val, train_cond=False)
f1_, accu_ = eval_score(y_val, val_pred)
print("Val: F1 Score: ", f1_, "Accuracy: ", accu_, "Loss: ", val_loss)
test_loss, test_pred = model(x_id_test, y_test, mask_test, train_cond=False)
f1_, accu_ = eval_score(y_test, test_pred)
print("Test: F1 Score: ", f1_, "Accuracy: ", accu_, "Loss: ", test_loss)
saver.restore(sess, save_dir)
```
# CatBoost
**Tutorial:** https://catboost.ai/docs/concepts/tutorials.html
```
import os
import sys
import pandas as pd
import numpy as np
import importlib
import itertools
from pandas.io.json import json_normalize
import sklearn.metrics as metrics
import random
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
from matplotlib import rcParams
import json
import math
%matplotlib inline
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
import catboost
# Global variables
meta_data_path = "../../data-campaigns/meta-data/"
legs = "all_legs_merged_no_outlier_0.01.pkl"
input_path = "../../2019-12-16.out/"
out_path = "../../2019-12-16.out/WI_results/"
# Graphical parameters
rcParams["axes.titlepad"] = 45
rcParams["font.size"] = 16
rcParams["figure.figsize"] = 12, 8
sns.set_style("whitegrid")
```
**READ DATA**
```
#### all_legs ####
all_legs = pd.read_pickle(input_path + legs)
# remove "unknown" as transport category (?)
all_legs = all_legs[all_legs.transp_category != "Unknown"]
# select only useful wasted time
all_legs = all_legs[(all_legs.wastedTime > 0) & (all_legs.wastedTime < 6)]
# convert to int
all_legs["wastedTime"] = all_legs["wastedTime"].apply(lambda x: np.round(x))
# country - assign 'CHE' to the class Other (AAA)
all_legs["onCampaigns"] = all_legs["onCampaigns"].apply(
lambda x: "AAA" if x == "CHE" else x
)
top10 = list(all_legs.onCampaigns.unique())
#### values_from_trip ####
values_from_trip = pd.read_pickle(input_path + "values_from_trip.pkl")
# add info
values_from_trip = values_from_trip.merge(
all_legs[
[
"legid",
"wastedTime",
"userid",
"gender",
"onCampaigns",
"age",
"transp_category",
]
],
on="legid",
).drop_duplicates()
values_from_trip.head()
```
### Model 1. wt ~ E + P + F
```
tmp = values_from_trip[["legid", "value", "valueFromTrip"]].drop_duplicates()
values_from_trip_pivot = pd.pivot(
data=tmp, index="legid", columns="valueFromTrip", values="value"
).reset_index()
# add transport category and userid
values_from_trip_pivot = values_from_trip_pivot.merge(
all_legs[["legid", "userid", "transp_category", "wastedTime"]], on="legid"
).drop_duplicates()
# Merge Paid_work and Personal_tasks into Productivity taking the **maximum** value
values_from_trip_pivot["Productivity"] = values_from_trip_pivot[
["Paid_work", "Personal_tasks"]
].max(axis=1)
values_from_trip_pivot.drop(["Paid_work", "Personal_tasks"], axis=1, inplace=True)
## select columns
values_from_trip_pivot = values_from_trip_pivot[
["Enjoyment", "Productivity", "Fitness", "wastedTime"]
] # , 'transp_category']]
# remove legs with missing values in E+P+F
values_from_trip_pivot = values_from_trip_pivot[
~(
(values_from_trip_pivot.Enjoyment.isnull())
& (values_from_trip_pivot.Productivity.isnull())
& (values_from_trip_pivot.Fitness.isnull())
)
]
# remove legs with null tc
# values_from_trip_pivot = values_from_trip_pivot[~ values_from_trip_pivot.transp_category.isnull()]
# convert E P F into int values
values_from_trip_pivot["Enjoyment"] = values_from_trip_pivot["Enjoyment"].astype(np.int)
values_from_trip_pivot["Productivity"] = values_from_trip_pivot["Productivity"].astype(
np.int
)
values_from_trip_pivot["Fitness"] = values_from_trip_pivot["Fitness"].astype(np.int)
values_from_trip_pivot.head()
# values_from_trip_pivot.to_csv('values_from_trip_pivot.csv', index=False)
values_from_trip_pivot[
(values_from_trip_pivot.Enjoyment == 0)
& (values_from_trip_pivot.Productivity == 0)
& (values_from_trip_pivot.Fitness == 0)
].groupby("wastedTime").size().reset_index(name="nlegs")
values_from_trip_pivot.groupby("wastedTime").size().reset_index(name="count")
```
**Train - Test split**
```
from catboost import CatBoostClassifier, Pool, cv
from sklearn.metrics import accuracy_score
random.seed(123)
th = 0.8
nlegs_train = np.int64(values_from_trip_pivot.shape[0] * th)
nlegs_train_lst = random.sample(list(values_from_trip_pivot.index), nlegs_train)
train_df = values_from_trip_pivot[values_from_trip_pivot.index.isin(nlegs_train_lst)]
test_df = values_from_trip_pivot[~values_from_trip_pivot.index.isin(nlegs_train_lst)]
y = train_df.wastedTime
X = train_df.drop("wastedTime", axis=1)
from catboost.utils import create_cd
feature_names = dict()
for column, name in enumerate(train_df):
if column == 0:
continue
feature_names[column - 1] = name
create_cd(
label=0,
cat_features=list(range(1, train_df.columns.shape[0])),
feature_names=feature_names,
# output_path=os.path.join(dataset_dir, 'train.cd')
)
!cat train.cd
cat_features = [0, 1, 2]  # Enjoyment, Productivity, Fitness are treated as categorical
pool1 = Pool(data=X, label=y, cat_features=cat_features)
### Train and Validation set
from sklearn.model_selection import train_test_split
th = 0.8
X_train, X_validation, y_train, y_validation = train_test_split(
X, y, train_size=th, random_state=42
)
X_test = test_df
model = CatBoostClassifier(
iterations=1000,
loss_function="MultiClass",
# learning_rate=0.1,
custom_loss="Accuracy",
)
model.fit(
X_train,
y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=100,
plot=True,
)
print("Model is fitted: " + str(model.is_fitted()))
print("Model params:")
print(model.get_params())
## OVERFITTING: if the test error increases over the course of the iterations,
# and the optimum is reached in the first few iterations,
## the model automatically cuts off the trees added after the overfitting point
print("Tree count: " + str(model.tree_count_))
predictions = model.predict(X_test)
predictions_probs = model.predict_proba(X_test)
print(predictions[:10])
print(predictions_probs[:10])
unique, counts = np.unique(predictions, return_counts=True)
dict(zip(unique, counts))
```
**CROSS VALIDATION**
```
from catboost import cv
params = {}
params["loss_function"] = "MultiClass"
params["iterations"] = 80
params["custom_loss"] = "Accuracy"
params["random_seed"] = 63
params["learning_rate"] = 0.5
cv_data = cv(
params=params,
pool=Pool(X, label=y, cat_features=cat_features),
fold_count=5,
shuffle=True,
partition_random_seed=0,
plot=True,
# stratified=False,
verbose=False,
)
best_value = np.min(cv_data["test-MultiClass-mean"])
best_iter = np.argmin(cv_data["test-MultiClass-mean"])
print(
"Best validation Logloss score, not stratified: {:.4f}±{:.4f} on step {}".format(
best_value, cv_data["test-MultiClass-std"][best_iter], best_iter
)
)
```
**Overfitting detector**
```
model_with_early_stop = CatBoostClassifier(
iterations=200,
random_seed=63,
learning_rate=0.5,
early_stopping_rounds=20, # stop when there is no improvement after 20 iterations
)
model_with_early_stop.fit(
X_train,
y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=False,
plot=True,
)
```
### Multiclass
For multiclass problems with many classes, it is sometimes better to solve the classification problem using ranking. To do that, we build a dataset with groups: every group represents one object from the initial dataset, but carries one additional categorical feature, a candidate class value. The target is 1 if the candidate class equals the correct class, and 0 otherwise. Thus each group has exactly one 1 among its labels and zeros elsewhere. You can put all possible class values in a group, or include only hard negatives if there are too many labels. We demonstrate this approach on the example below.
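As a concrete illustration of the group construction (plain Python with hypothetical feature values), a single object with true class 3 and candidate labels 1–5 expands into one group of five rows, exactly one of which gets target 1:

```
# Hypothetical object: feature vector [0.5, 1.2], true class 3.
features = [0.5, 1.2]
true_class = 3
label_values = [1, 2, 3, 4, 5]

group_rows = []     # features + candidate label (the extra categorical feature)
group_targets = []  # 1.0 where the candidate label matches the true class
for label in label_values:
    group_rows.append(features + [label])
    group_targets.append(float(true_class == label))

print(group_targets)  # -> [0.0, 0.0, 1.0, 0.0, 0.0]
```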
```
from copy import deepcopy

from catboost import CatBoost  # CatBoost (not CatBoostClassifier) is used for the ranking model below
def build_multiclass_ranking_dataset(
X, y, cat_features, label_values=[0, 1], start_group_id=0
):
ranking_matrix = []
ranking_labels = []
group_ids = []
X_train_matrix = X.values
y_train_vector = y.values
for obj_idx in range(X.shape[0]):
obj = list(X_train_matrix[obj_idx])
for label in label_values:
obj_of_given_class = deepcopy(obj)
obj_of_given_class.append(label)
ranking_matrix.append(obj_of_given_class)
ranking_labels.append(float(y_train_vector[obj_idx] == label))
group_ids.append(start_group_id + obj_idx)
final_cat_features = deepcopy(cat_features)
final_cat_features.append(
X.shape[1]
) # new feature that we are adding should be categorical.
return Pool(
ranking_matrix,
ranking_labels,
cat_features=final_cat_features,
group_id=group_ids,
)
groupwise_train_pool = build_multiclass_ranking_dataset(
X_train, y_train, cat_features, [1, 2, 3, 4, 5]
)
groupwise_eval_pool = build_multiclass_ranking_dataset(
X_validation, y_validation, cat_features, [1, 2, 3, 4, 5], X_train.shape[0]
)
params = {"iterations": 100, "learning_rate": 0.01, "loss_function": "QuerySoftMax"}
model = CatBoost(params)
model.fit(
X=groupwise_train_pool, verbose=False, eval_set=groupwise_eval_pool, plot=True
)
import math
obj = list(X_validation.values[0])
ratings = []
for label in [1, 2, 3, 4, 5]:
obj_with_label = deepcopy(obj)
obj_with_label.append(label)
rating = model.predict([obj_with_label])[0]
ratings.append(rating)
print("Raw values:", np.array(ratings))
def soft_max(values):
    # shift by the max for numerical stability, and compute the normalizer once
    exps = [math.exp(val - max(values)) for val in values]
    total = sum(exps)
    return [e / total for e in exps]
print("Probabilities", np.array(soft_max(ratings)))
```
### Cleaned dataset
Remove from the data all the legs with Enjoyment, Productivity, and Fitness equal to 0 and wastedTime of 3, 4, or 5 (i.e. `wastedTime >= 3`)
```
cleaned_df = values_from_trip_pivot[
~(
(values_from_trip_pivot.Enjoyment == 0)
& (values_from_trip_pivot.Fitness == 0)
& (values_from_trip_pivot.Productivity == 0)
& (values_from_trip_pivot.wastedTime >= 3)
)
]
cleaned_df.groupby("wastedTime").size().reset_index(name="count")
```
**save each TC df**
```
tmp = values_from_trip[["legid", "value", "valueFromTrip"]].drop_duplicates()
tmp = tmp[tmp.valueFromTrip != "Unknown"]
values_from_trip_pivot = pd.pivot(
data=tmp, index="legid", columns="valueFromTrip", values="value"
).reset_index()
# add transport category and userid
values_from_trip_pivot = values_from_trip_pivot.merge(
all_legs[["legid", "userid", "transp_category", "wastedTime"]], on="legid"
).drop_duplicates()
values_from_trip_pivot = values_from_trip_pivot[
~values_from_trip_pivot.transp_category.isnull()
]
# Merge Paid_work and Personal_tasks into Productivity taking the **maximum** value
values_from_trip_pivot["Productivity"] = values_from_trip_pivot[
["Paid_work", "Personal_tasks"]
].max(axis=1)
values_from_trip_pivot.drop(["Paid_work", "Personal_tasks"], axis=1, inplace=True)
# select columns
values_from_trip_pivot = values_from_trip_pivot[
["Enjoyment", "Productivity", "Fitness", "wastedTime", "transp_category"]
]
cleaned_df = values_from_trip_pivot[
~(
(values_from_trip_pivot.Enjoyment == 0)
& (values_from_trip_pivot.Fitness == 0)
& (values_from_trip_pivot.Productivity == 0)
& (values_from_trip_pivot.wastedTime >= 3)
)
]
for i in list(cleaned_df.transp_category.unique()):
print(i)
tc_df = cleaned_df[cleaned_df.transp_category == i]
tc_df = tc_df.iloc[:, :-1]
# save
tc_df.to_csv(out_path + "OLR_results/" + i + ".csv", index=False)
random.seed(123)
th = 0.8
nlegs_train = np.int64(cleaned_df.shape[0] * th)
nlegs_train_lst = random.sample(list(cleaned_df.index), nlegs_train)
train_df = cleaned_df[cleaned_df.index.isin(nlegs_train_lst)]
test_df = cleaned_df[~cleaned_df.index.isin(nlegs_train_lst)]
y = train_df.wastedTime
X = train_df.drop("wastedTime", axis=1)
from catboost.utils import create_cd
feature_names = dict()
for column, name in enumerate(train_df):
if column == 0:
continue
feature_names[column - 1] = name
create_cd(
label=0,
cat_features=list(range(1, train_df.columns.shape[0])),
feature_names=feature_names,
# output_path=os.path.join(dataset_dir, 'train.cd')
)
!cat train.cd
cat_features = [0, 1, 2]
pool1 = Pool(data=X, label=y, cat_features=cat_features)
### Train and Validation set
from sklearn.model_selection import train_test_split
th = 0.8
X_train, X_validation, y_train, y_validation = train_test_split(
X, y, train_size=th, random_state=42
)
X_test = test_df
model = CatBoostClassifier(
iterations=1000,
loss_function="MultiClass",
# learning_rate=0.1,
custom_loss="AUC",
)
model.fit(
X_train,
y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=100,
plot=True,
)
print("Model is fitted: " + str(model.is_fitted()))
print("Model params:")
print(model.get_params())
predictions = model.predict(X_test)
predictions_probs = model.predict_proba(X_test)
print(predictions[:10])
print(predictions_probs[:10])
unique, counts = np.unique(predictions, return_counts=True)
dict(zip(unique, counts))
X_test.groupby("wastedTime").size().reset_index()
```
Example of running GROMACS with Lithops.
# GROMACS Computations
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
In this notebook, GROMACS is run with parameters over cloud functions via a shell command. We demonstrate how complex computations and software packages like this can be handled using Lithops and cloud resources.
## Installing the dependencies
```
import lithops
import os
import zipfile
import time
import wget
import json
```
## Execution
This notebook executes a single function over cloud functions. The function downloads the benchMEM benchmark set, runs the GROMACS software with the given parameters via a shell command, and uploads the results to IBM COS.
```
temp_dir = '/tmp'
iterdata = [1]
def sh_cmd_executor(x, param1, ibm_cos):
lithops_config = json.loads(os.environ['LITHOPS_CONFIG'])
bucket = lithops_config['lithops']['storage_bucket']
print (bucket)
print (param1)
filename = 'benchMEM.zip'
outfile = os.path.join(temp_dir, filename)
if not os.path.isfile(filename):
filename = wget.download('https://www.mpibpc.mpg.de/15101317/benchMEM.zip', out=outfile)
print(filename, "was downloaded")
with zipfile.ZipFile(outfile, 'r') as zip_ref:
print('Extracting file to %s' % temp_dir)
zip_ref.extractall(temp_dir)
else:
print(filename, " already exists")
os.chdir(temp_dir)
cmd = "/usr/local/gromacs/bin/gmx mdrun -nt 4 -s benchMEM.tpr -nsteps 1000 -resethway"
st = time.time()
import subprocess
subprocess.call(cmd, shell=True)
run_time = time.time() - st
# upload results to IBM COS
res = ['confout.gro', 'ener.edr', 'md.log', 'state.cpt']
for name in res:
f = open(os.path.join(temp_dir, name), "rb")
ibm_cos.put_object(Bucket=bucket, Key=os.path.join('gmx-mem', name), Body=f)
with open('md.log', 'r') as file:
data = file.read()
return {'run_time': run_time, 'md_log': data}
```
We use Lithops and the pre-built runtime to run GROMACS with the given parameters. **Currently, the runtime cactusone/lithops-gromacs:1.0.2 uses Python 3.8, so you must run the application with Python 3.8.** We retrieve the results from GROMACS at the end of the process.
```
if __name__ == '__main__':
# Example of using bechMEM from https://www.mpibpc.mpg.de/grubmueller/bench
param1 = 'param1 example'
total_start = time.time()
fexec = lithops.FunctionExecutor(runtime='cactusone/lithops-gromacs:1.0.2', runtime_memory=2048)
fexec.map(sh_cmd_executor, iterdata, extra_args=(param1,))
res = fexec.get_result()
fexec.clean()
print ("GROMACS execution time {}".format(res[0]['run_time']))
print ("Total execution time {}".format(time.time()-total_start))
print (res[0]['md_log'])
```
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex client library: AutoML image object detection model for export to edge
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_object_detection_export_edge.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_object_detection_export_edge.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
## Overview
This tutorial demonstrates how to use the Vertex client library for Python to create image object detection models to export as an Edge model using Google Cloud's AutoML.
### Dataset
The dataset used for this tutorial is the Salads category of the [OpenImages dataset](https://www.tensorflow.org/datasets/catalog/open_images_v4) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese.
### Objective
In this tutorial, you create an AutoML image object detection model from a Python script using the Vertex client library, and then export the model as an Edge model in TFLite format. You can alternatively create models with AutoML using the `gcloud` command-line tool or online using the Google Cloud Console.
The steps performed include:
- Create a Vertex `Dataset` resource.
- Train the model.
- Export the `Edge` model from the `Model` resource to Cloud Storage.
- Download the model locally.
- Make a local prediction.
### Costs
This tutorial uses billable components of Google Cloud (GCP):
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
## Installation
Install the latest version of Vertex client library.
```
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
```
### Restart the kernel
Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
**Click Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for exporting the trained model. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
#### Import Vertex client library
Import the Vertex client library into our Python environment.
```
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
```
#### Vertex constants
Setup up the following constants for Vertex:
- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
```
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
```
#### AutoML constants
Set constants unique to AutoML datasets and training:
- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
```
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_bounding_box_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_object_detection_1.0.0.yaml"
```
# Tutorial
Now you are ready to start creating your own AutoML image object detection model.
## Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
- Dataset Service for `Dataset` resources.
- Model Service for `Model` resources.
- Pipeline Service for training.
```
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
for client in clients.items():
print(client)
```
## Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
### Create `Dataset` resource instance
Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:
1. Uses the dataset client service.
2. Creates a Vertex `Dataset` resource (`aip.Dataset`), with the following parameters:
- `display_name`: The human-readable name you choose to give it.
- `metadata_schema_uri`: The schema for the dataset type.
3. Calls the client dataset service method `create_dataset`, with the following parameters:
- `parent`: The Vertex location root path for your `Database`, `Model` and `Endpoint` resources.
- `dataset`: The Vertex dataset object instance you created.
4. The method returns an `operation` object.
An `operation` object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
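As a minimal sketch of the polling pattern these methods enable (the `DummyOperation` class below is a stand-in for illustration, not part of the Vertex API):

```
import time

class DummyOperation:
    """Stand-in for a Vertex long-running operation object (illustration only)."""

    def __init__(self, finish_after=3):
        self._polls = 0
        self._finish_after = finish_after

    def done(self):
        # Each poll moves the fake operation closer to completion.
        self._polls += 1
        return self._polls >= self._finish_after

    def result(self):
        # Block until the operation reports completion, then return a result.
        while not self.done():
            time.sleep(0.01)
        return {"state": "PIPELINE_STATE_SUCCEEDED"}

op = DummyOperation()
print(op.result())  # -> {'state': 'PIPELINE_STATE_SUCCEEDED'}
```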
```
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=timeout)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("salads-" + TIMESTAMP, DATA_SCHEMA)
```
Now save the unique dataset identifier for the `Dataset` resource instance you created.
```
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
```
### Data preparation
The Vertex `Dataset` resource for images has some requirements for your data:
- Images must be stored in a Cloud Storage bucket.
- Each image file must be in an image format (PNG, JPEG, BMP, ...).
- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.
- The index file must be either CSV or JSONL.
#### CSV
For image object detection, the CSV index file has the requirements:
- No heading.
- First column is the Cloud Storage path to the image.
- Second column is the label.
- Third/Fourth columns are the upper left corner of bounding box. Coordinates are normalized, between 0 and 1.
- Fifth/Sixth/Seventh columns are not used and should be 0.
- Eighth/Ninth columns are the lower right corner of the bounding box.
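For illustration, a row of such an index file might look like this (hypothetical bucket path, label, and normalized coordinates):

```
gs://my-bucket/salads/img_001.jpg,Tomato,0.1,0.2,0,0,0,0.4,0.5
```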
#### Location of Cloud Storage training data.
Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
```
IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv"
```
#### Quick peek at your data
You will use a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
```
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
```
### Import data
Now, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following:
- Uses the `Dataset` client.
- Calls the client method `import_data`, with the following parameters:
- `name`: The human readable name you give to the `Dataset` resource (e.g., salads).
- `import_configs`: The import configuration.
- `import_configs`: A Python list containing a dictionary, with the key/value entries:
- `gcs_sources`: A list of URIs to the paths of the one or more index files.
- `import_schema_uri`: The schema identifying the labeling type.
The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
```
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
```
## Train the model
Now train an AutoML image object detection model using your Vertex `Dataset` resource. To train the model, do the following steps:
1. Create a Vertex training pipeline for the `Dataset` resource.
2. Execute the pipeline to start the training.
### Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
1. Being reusable for subsequent training jobs.
2. Can be containerized and ran as a batch job.
3. Can be distributed.
4. All the steps are associated with the same pipeline job for tracking progress.
Use this helper function `create_pipeline`, which takes the following parameters:
- `pipeline_name`: A human readable name for the pipeline job.
- `model_name`: A human readable name for the model.
- `dataset`: The Vertex fully qualified dataset identifier.
- `schema`: The dataset labeling (annotation) training schema.
- `task`: A dictionary describing the requirements for the training job.
The helper function calls the `Pipeline` client service's method `create_pipeline`, which takes the following parameters:
- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `training_pipeline`: the full specification for the pipeline training job.
Let's now look deeper into the *minimal* requirements for constructing a `training_pipeline` specification:
- `display_name`: A human readable name for the pipeline job.
- `training_task_definition`: The dataset labeling (annotation) training schema.
- `training_task_inputs`: A dictionary describing the requirements for the training job.
- `model_to_upload`: A human readable name for the model.
- `input_data_config`: The dataset specification.
- `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
- `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
```
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
```
### Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.
The minimal fields you need to specify are:
- `budget_milli_node_hours`: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. For image object detection, the budget must be a minimum of 20 hours.
- `model_type`: The type of deployed model:
- `CLOUD_HIGH_ACCURACY_1`: For deploying to Google Cloud and optimizing for accuracy.
- `CLOUD_LOW_LATENCY_1`: For deploying to Google Cloud and optimizing for latency (response time),
- `MOBILE_TF_HIGH_ACCURACY_1`: For deploying to the edge and optimizing for accuracy.
- `MOBILE_TF_LOW_LATENCY_1`: For deploying to the edge and optimizing for latency (response time).
- `MOBILE_TF_VERSATILE_1`: For deploying to the edge and optimizing for a trade off between latency and accuracy.
- `disable_early_stopping`: If `True`, train for the entire budget; if `False`, AutoML may use its judgement to stop training early.
Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
```
PIPE_NAME = "salads_pipe-" + TIMESTAMP
MODEL_NAME = "salads_model-" + TIMESTAMP
task = json_format.ParseDict(
{
"budget_milli_node_hours": 20000,
"model_type": "MOBILE_TF_LOW_LATENCY_1",
"disable_early_stopping": False,
},
Value(),
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
```
Now save the unique identifier of the training pipeline you created.
```
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
```
### Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function retrieves it by calling the pipeline client service's `get_training_pipeline` method, with the following parameter:
- `name`: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
```
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
```
# Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting `start_time` from `end_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance in the field `model_to_upload.name`.
```
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
```
## Model information
Now that your model is trained, you can get some information on your model.
## Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
### List evaluations for all slices
Use this helper function `list_model_evaluations`, which takes the following parameter:
- `name`: The Vertex fully qualified model identifier for the `Model` resource.
This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you will probably have just one), print all the key names for each metric in the evaluation, and for a small subset (`evaluatedBoundingBoxCount` and `boundingBoxMeanAveragePrecision`) print the value.
```
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("evaluatedBoundingBoxCount", metrics["evaluatedBoundingBoxCount"])
print(
"boundingBoxMeanAveragePrecision",
metrics["boundingBoxMeanAveragePrecision"],
)
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
```
## Export as Edge model
You can export an AutoML image object detection model as an Edge model, which you can then custom deploy to an edge device, such as a mobile phone or IoT device, or download locally. Use the helper function `export_model` to export the model to Cloud Storage; it takes the following parameters:
- `name`: The Vertex fully qualified identifier for the `Model` resource.
- `format`: The format in which to export the model.
- `gcs_dest`: The Cloud Storage location in which to store the exported model artifacts.
This function calls the `Model` client service's method `export_model`, with the following parameters:
- `name`: The Vertex fully qualified identifier for the `Model` resource.
- `output_config`: The destination information for the exported model.
- `artifact_destination.output_uri_prefix`: The Cloud Storage location in which to store the exported model artifacts.
- `export_format_id`: The format in which to export the model. For AutoML image object detection:
- `tf-saved-model`: TensorFlow SavedFormat for deployment to a container.
- `tflite`: TensorFlow Lite for deployment to an edge or mobile device.
- `edgetpu-tflite`: TensorFlow Lite for TPU
- `tf-js`: TensorFlow for web client
- `coral-ml`: for Coral devices
The method returns a long running operation `response`. We will wait synchronously for the operation to complete by calling `response.result()`, which will block until the model is exported.
```
MODEL_DIR = BUCKET_NAME + "/" + "salads"
def export_model(name, format, gcs_dest):
output_config = {
"artifact_destination": {"output_uri_prefix": gcs_dest},
"export_format_id": format,
}
response = clients["model"].export_model(name=name, output_config=output_config)
print("Long running operation:", response.operation.name)
result = response.result(timeout=1800)
metadata = response.operation.metadata
artifact_uri = str(metadata.value).split("\\")[-1][4:-1]
print("Artifact Uri", artifact_uri)
return artifact_uri
model_package = export_model(model_to_deploy_id, "tflite", MODEL_DIR)
```
#### Download the TFLite model artifacts
Now that you have an exported TFLite version of your model, you can test the exported model locally after first downloading it from Cloud Storage.
```
! gsutil ls $model_package
# Download the model artifacts
! gsutil cp -r $model_package tflite
tflite_path = "tflite/model.tflite"
```
#### Instantiate a TFLite interpreter
The TFLite version of the model is not in the TensorFlow SavedModel format, so you cannot directly use methods like `predict()`. Instead, you use the TFLite interpreter. Set up the interpreter for the TFLite model as follows:
- Instantiate a TFLite interpreter for the TFLite model.
- Instruct the interpreter to allocate input and output tensors for the model.
- Get detailed information about the model's input and output tensors, which you will need for prediction.
```
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]["shape"]
print("input tensor shape", input_shape)
```
### Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
```
test_items = ! gsutil cat $IMPORT_FILE | head -n1
test_item = test_items[0].split(",")[0]
with tf.io.gfile.GFile(test_item, "rb") as f:
content = f.read()
test_image = tf.io.decode_jpeg(content)
print("test image shape", test_image.shape)
test_image = tf.image.resize(test_image, (224, 224))
print("test image shape", test_image.shape, test_image.dtype)
test_image = tf.cast(test_image, dtype=tf.uint8).numpy()
```
#### Make a prediction with TFLite model
Finally, you do a prediction using your TFLite model, as follows:
- Convert the test image into a batch of a single image (`np.expand_dims`)
- Set the input tensor for the interpreter to your batch of a single image (`data`).
- Invoke the interpreter.
- Retrieve the softmax probabilities for the prediction (`get_tensor`).
- Determine which label had the highest probability (`np.argmax`).
```
import numpy as np
data = np.expand_dims(test_image, axis=0)
interpreter.set_tensor(input_details[0]["index"], data)
interpreter.invoke()
softmax = interpreter.get_tensor(output_details[0]["index"])
label = np.argmax(softmax)
print(label)
```
# Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
```
#### This code generates a large dataframe containing multiple timeseries
```
%matplotlib inline
import numpy as np
import pandas as pd
import random
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_curve, auc
from sklearn import tree
from sklearn.metrics import roc_curve, auc
#from sklearn.preprocessing import LabelBinarizer
from pandas.tseries.offsets import *
import time
#from graphviz import Source
```
#### parameters to set
```
n_series = 6
start_date = '2017-08-01 00:00:00'
end_date = '2017-08-07 23:59:59'
# regular behaviour
max_noise_amplitude = 0.05 # all the timeseries will have values between 0 and 1
# anomalies
p_anomaly = 10E-6
max_anomaly_duration = 4*3600 # 4 h
# tuning parameters
cut = 0.55
window = 24
```
#### generate normal data
```
dti = pd.date_range(start=start_date, end=end_date, freq='s')  # pd.DatetimeIndex(start=..., end=...) was removed from pandas
n_timesteps = len(dti)
df = pd.DataFrame()
for s in range(n_series):
v = np.random.normal(random.random()/2, max_noise_amplitude/random.randint(1, 8), n_timesteps)
df['link '+str(s)] = pd.Series(v)
df['Flag']=0
df['auc_score']=0.5
df.index = dti
df.head()
```
#### generate anomalies
```
to_generate = int(n_timesteps * p_anomaly)
for a in range(to_generate):
affects = random.sample(range(n_series), random.randint(1, n_series))
duration = int(max_anomaly_duration * random.random())
start = int(n_timesteps * random.random())
end = min(start+duration, n_timesteps)
print('affected:', affects, df.iloc[start].name, df.iloc[end].name)
for s in affects:
df.iloc[start:end,s] = df.iloc[start:end,s] + random.random() * 0.2
df.iloc[start:end,n_series]=1
```
#### enforce range
```
df[df<0] = 0
df[df>1] = 1
```
#### plot timeseries
```
#df.plot(figsize=(20,7))
fig = plt.figure(figsize=(20,7))
plt.plot(df)
fig.suptitle('Simulated Data', fontsize=20)
plt.xlabel('Time', fontsize=18)
plt.ylabel('Variance in Data', fontsize=16)
plt.legend(df)
#plt.figure(figsize=[16, 17])
#gs = gridspec.GridSpec(4, 1)
#ax2.plot(df)
#ax2.set_xlabel('time')
#ax2.set_ylabel('variance in data')
#ax2.legend()
#plt.show()
print(df['auc_score'])
print(df)
```
#### functions
```
def check_for_anomaly(ref, sub):
y_ref = pd.Series([0] * ref.shape[0])
X_ref = ref
#print("y_ref before: ", y_ref)
#print("x_ref before: ", x_ref)
del X_ref['Flag']
del X_ref['auc_score']
#print("x_ref after: ", X_ref)
y_sub = pd.Series([1] * sub.shape[0])
X_sub=sub
#print("y_sub before: ", y_sub)
#print("x_sub before: ", X_sub)
del X_sub['Flag']
del X_sub['auc_score']
#print("X_sub after: ", X_sub)
# separate Reference and Subject into Train and Test
X_ref_train, X_ref_test, y_ref_train, y_ref_test = train_test_split(X_ref, y_ref, test_size=0.3, random_state=42)
X_sub_train, X_sub_test, y_sub_train, y_sub_test = train_test_split(X_sub, y_sub, test_size=0.3, random_state=42)
# combine training ref and sub samples
X_train = pd.concat([X_ref_train, X_sub_train])
y_train = pd.concat([y_ref_train, y_sub_train])
# combine testing ref and sub samples
X_test = pd.concat([X_ref_test, X_sub_test])
y_test = pd.concat([y_ref_test, y_sub_test])
# dtc=DecisionTreeClassifier()
clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=6)) #dtc
# clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),algorithm="SAMME",n_estimators=200)
#train an AdaBoost model to be able to tell the difference between the reference and subject data
clf.fit(X_train, y_train)
#Predict using the combined test data
y_predict = clf.predict(X_test)
# scores = cross_val_score(clf, X, y)
# print(scores)
fpr, tpr, thresholds = roc_curve(y_test, y_predict) # calculate the false positive rate and true positive rate
auc_score = auc(fpr, tpr) #calculate the AUC score
print ("auc_score = ", auc_score, "\tfeature importances:", clf.feature_importances_)
if auc_score > cut:
plot_roc(fpr, tpr, auc_score)
#filename='tree_'+sub.index.min().strftime("%Y-%m-%d_%H")
#tree.export_graphviz(clf.estimators_[0] , out_file=filename +'_1.dot')
#tree.export_graphviz(clf.estimators_[1] , out_file=filename +'_2.dot')
return auc_score
def plot_roc(fpr,tpr, roc_auc):
plt.figure()
plt.plot(fpr, tpr, color='darkorange', label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.plot([0, 1], [0, 1], linestyle='--', color='r',label='Luck', alpha=.8)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
start = df.index.min()
end = df.index.max()
#print(df)
print(df.index)
```
#### Looping over time intervals
```
start_time = time.time()
#find min and max timestamps
start = df.index.min()
end = df.index.max()
# round start down to the hour (Timestamps are immutable, so reassign instead of setting attributes)
start = start.replace(minute=0, second=0)
ref = window * Hour()
sub = 1 * Hour()
# loop over them
ti=start+ref+sub
count=0
while ti < end + 1 * Minute():
ref_start = ti-ref-sub
ref_end = ti-sub
ref_df = df[(df.index >= ref_start) & (df.index < ref_end)]
#print("In while loop: ref_df: ", ref_df)
sub_df = df[(df.index >= ref_end) & (df.index < ti)]
#print("In while loop: sub_df: ", sub_df)
auc_score = check_for_anomaly(ref_df, sub_df)
df.loc[(df.index>=ref_end) & (df.index<=ti),['auc_score']] = auc_score
#print(ti,"\trefes:" , ref_df.shape[0], "\tsubjects:", sub_df.shape[0], '\tauc:', auc_score)
ti = ti + sub
count=count+1
#if count>2: break
print("--- %s seconds ---" % (time.time() - start_time))
print(start)
print(end)
#df.plot(figsize=(20,7))
df.plot(figsize=(20,7))
plt.xlabel('Time', fontsize=18)
plt.ylabel('Variance in Data', fontsize=16)
plt.legend(df)
fig, ax = plt.subplots(figsize=(20,7))
df.loc[:,'Detected'] = 0
df.loc[df.auc_score>0.55,'Detected']=1
df.head()
ax.plot(df.Flag, 'r')
ax.plot(df.auc_score,'g')
ax.fill( df.Detected, 'b', alpha=0.3)
ax.legend(loc='upper left')
plt.xlabel('Time', fontsize=18)
plt.ylabel('AUC score', fontsize=16)
plt.show()
```
## Train a character-level GPT on some smiles
The inputs here are simple SMILES strings, which we chop up into individual characters and then train GPT on. So you could say this is a char-transformer instead of a char-rnn. Doesn't quite roll off the tongue as well. In this example we will feed it SMILES from the MOSES dataset, which it will learn to predict at the character level (i.e. each token is a single character of the SMILES string).
```
# set up logging
import logging
import pandas as pd
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
# make deterministic
from mingpt.utils import set_seed
set_seed(42)
import math
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.utils.data import Dataset
class CharDataset(Dataset):
def __init__(self, data, content):
chars = sorted(list(set(content)))
data_size, vocab_size = len(data), len(chars)
print('data has %d smiles, %d unique characters.' % (data_size, vocab_size))
self.stoi = { ch:i for i,ch in enumerate(chars) }
self.itos = { i:ch for i,ch in enumerate(chars) }
self.block_size = block_size
self.vocab_size = vocab_size
self.data = data
def __len__(self):
return math.ceil(len(self.data) / (self.block_size + 1))
def __getitem__(self, idx):
smiles = self.data[idx]
len_smiles = len(smiles)
dix = [self.stoi[s] for s in smiles]
x = torch.tensor(dix[:-1], dtype=torch.long)
y = torch.tensor(dix[1:], dtype=torch.long)
return x, y
# you can download this moses file here https://media.githubusercontent.com/media/molecularsets/moses/master/data/dataset_v1.csv
smiles = pd.read_csv('moses.csv')['SMILES']
# some preprocessing: pad every SMILES string to the max length with '<' (for us '<' is an end token)
lens = [len(i) for i in smiles]
max_len = max(lens)
smiles = [ i + str('<')*(max_len - len(i)) for i in smiles]
content = ' '.join(smiles)
block_size = max_len
train_dataset = CharDataset(smiles, content)
from mingpt.model import GPT, GPTConfig
mconf = GPTConfig(train_dataset.vocab_size, train_dataset.block_size,
n_layer=8, n_head=8, n_embd=256)
model = GPT(mconf)
from mingpt.trainer import Trainer, TrainerConfig
import math
# initialize a trainer instance and kick off training
tconf = TrainerConfig(max_epochs=50, batch_size=128, learning_rate=6e-4,
lr_decay=True, warmup_tokens=32*20, final_tokens=200*len(train_dataset)*block_size,
num_workers=10)
trainer = Trainer(model, train_dataset, None, tconf)
trainer.train()
# alright, let's sample some molecules and draw them using rdkit
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from IPython.core.display import HTML
from rdkit.Chem.QED import qed
from rdkit.Chem import PandasTools
from mingpt.utils import sample
import seaborn as sns
def show(df):
return HTML(df.to_html(notebook=True))
PandasTools.RenderImagesInAllDataFrames(images=True)
molecules = []
context = "C"
for i in range(100):
x = torch.tensor([train_dataset.stoi[s] for s in context], dtype=torch.long)[None,...].to(trainer.device)
y = sample(model, x, block_size, temperature=0.9, sample=True, top_k=5)[0]
completion = ''.join([train_dataset.itos[int(i)] for i in y])
completion = completion.replace('<', '')
mol = Chem.MolFromSmiles(completion)
if mol:
molecules.append(mol)
"Valid molecules % = {}".format(len(molecules))
mol_dict = []
for i in molecules:
mol_dict.append({'molecule' : i, 'qed': qed(i), 'smiles': Chem.MolToSmiles(i)})
results = pd.DataFrame(mol_dict)
sns.kdeplot(results['qed'].values)
show(results)
from rdkit.DataStructs import TanimotoSimilarity
from rdkit.Chem import AllChem
fp_list = []
for molecule in molecules:
fp = AllChem.GetMorganFingerprintAsBitVect(molecule, 2, nBits=1024)
fp_list.append(fp)
diversity = []
for i in range(len(fp_list)):
for j in range(i+1, len(fp_list)):
current_diverity = 1 - float(TanimotoSimilarity(fp_list[i], fp_list[j]))
diversity.append(current_diverity)
"Diversity of molecules % = {}".format(np.mean(diversity))
```
# Variational inference for Bayesian neural networks
This article demonstrates how to implement and train a Bayesian neural network with Keras following the approach described in [Weight Uncertainty in Neural Networks](https://arxiv.org/abs/1505.05424) (*Bayes by Backprop*). The implementation is kept simple for illustration purposes and uses Keras 2.2.4 and Tensorflow 1.12.0. For more advanced implementations of Bayesian methods for neural networks consider using [Tensorflow Probability](https://www.tensorflow.org/probability), for example.
Bayesian neural networks differ from plain neural networks in that their weights are assigned a probability distribution instead of a single value or point estimate. These probability distributions describe the uncertainty in weights and can be used to estimate uncertainty in predictions. Training a Bayesian neural network via variational inference learns the parameters of these distributions instead of the weights directly.
## Probabilistic model
A neural network can be viewed as probabilistic model $p(y \lvert \mathbf{x},\mathbf{w})$. For classification, $y$ is a set of classes and $p(y \lvert \mathbf{x},\mathbf{w})$ is a categorical distribution. For regression, $y$ is a continuous variable and $p(y \lvert \mathbf{x},\mathbf{w})$ is a Gaussian distribution.
Given a training dataset $\mathcal{D} = \left\{\mathbf{x}^{(i)}, y^{(i)}\right\}$ we can construct the likelihood function $p(\mathcal{D} \lvert \mathbf{w}) = \prod_i p(y^{(i)} \lvert \mathbf{x}^{(i)}, \mathbf{w})$ which is a function of parameters $\mathbf{w}$. Maximizing the likelihood function gives the maximum likelihood estimate (MLE) of $\mathbf{w}$. The usual optimization objective during training is the negative log likelihood. For a categorical distribution this is the *cross entropy* error function, for a Gaussian distribution this is proportional to the *sum of squares* error function. MLE can lead to severe overfitting though.
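To make the Gaussian case concrete: writing $f(\mathbf{x}, \mathbf{w})$ for the network output (notation introduced here only) and assuming a fixed noise variance $\sigma^2$, the negative log likelihood is

$$
-\log p(\mathcal{D} \lvert \mathbf{w}) = \frac{1}{2\sigma^2} \sum_i \left(y^{(i)} - f(\mathbf{x}^{(i)}, \mathbf{w})\right)^2 + \mathrm{const}
$$

so minimizing it is equivalent to minimizing the sum of squares error.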
Multiplying the likelihood with a prior distribution $p(\mathbf{w})$ is, by Bayes theorem, proportional to the posterior distribution $p(\mathbf{w} \lvert \mathcal{D}) \propto p(\mathcal{D} \lvert \mathbf{w}) p(\mathbf{w})$. Maximizing $p(\mathcal{D} \lvert \mathbf{w}) p(\mathbf{w})$ gives the maximum a posteriori (MAP) estimate of $\mathbf{w}$. Computing the MAP estimate has a regularizing effect and can prevent overfitting. The optimization objectives here are the same as for MLE plus a regularization term coming from the log prior.
Both MLE and MAP give point estimates of parameters. If we instead had a full posterior distribution over parameters we could make predictions that take weight uncertainty into account. This is covered by the posterior predictive distribution $p(y \lvert \mathbf{x},\mathcal{D}) = \int p(y \lvert \mathbf{x}, \mathbf{w}) p(\mathbf{w} \lvert \mathcal{D}) d\mathbf{w}$ in which the parameters have been marginalized out. This is equivalent to averaging predictions from an ensemble of neural networks weighted by the posterior probabilities of their parameters $\mathbf{w}$.
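As a toy numerical sketch (not the article's model), the ensemble-averaging view of the posterior predictive can be illustrated with a single-weight linear "network" $y = w x$ and an assumed posterior $p(w \lvert \mathcal{D}) = \mathcal{N}(1.0, 0.2^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior over the single weight w (assumed, for illustration only)
posterior_samples = rng.normal(1.0, 0.2, size=5000)

x = 2.0
preds = posterior_samples * x   # prediction of each ensemble member, y = w * x

y_mean = preds.mean()           # posterior predictive mean
y_std = preds.std()             # predictive spread induced by weight uncertainty
print(y_mean, y_std)
```

The spread of `preds` comes entirely from the weight uncertainty, which is exactly the effect the marginalization integral captures.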
## Variational inference
Unfortunately, an analytical solution for the posterior $p(\mathbf{w} \lvert \mathcal{D})$ in neural networks is intractable. We therefore have to approximate the true posterior with a variational distribution $q(\mathbf{w} \lvert \boldsymbol{\theta})$ of known functional form whose parameters we want to estimate. This can be done by minimizing the [Kullback-Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) between $q(\mathbf{w} \lvert \boldsymbol{\theta})$ and the true posterior $p(\mathbf{w} \lvert \mathcal{D})$ w.r.t. $\boldsymbol{\theta}$. It can be shown that the corresponding optimization objective or cost function can be written as
$$
\mathcal{F}(\mathcal{D},\boldsymbol{\theta}) =
\mathrm{KL}(q(\mathbf{w} \lvert \boldsymbol{\theta}) \mid\mid p(\mathbf{w})) -
\mathbb{E}_{q(\mathbf{w} \lvert \boldsymbol{\theta})} \log p(\mathcal{D} \lvert \mathbf{w})
\tag{1}
$$
This is known as the *variational free energy*. The first term is the Kullback-Leibler divergence between the variational distribution $q(\mathbf{w} \lvert \boldsymbol{\theta})$ and the prior $p(\mathbf{w})$ and is called the *complexity cost*. The second term is the expected value of the likelihood w.r.t. the variational distribution and is called the *likelihood cost*. By re-arranging the KL term, the cost function can also be written as
$$
\mathcal{F}(\mathcal{D},\boldsymbol{\theta}) =
\mathbb{E}_{q(\mathbf{w} \lvert \boldsymbol{\theta})} \log q(\mathbf{w} \lvert \boldsymbol{\theta}) -
\mathbb{E}_{q(\mathbf{w} \lvert \boldsymbol{\theta})} \log p(\mathbf{w}) -
\mathbb{E}_{q(\mathbf{w} \lvert \boldsymbol{\theta})} \log p(\mathcal{D} \lvert \mathbf{w})
\tag{2}
$$
We see that all three terms in equation $2$ are expectations w.r.t. the variational distribution $q(\mathbf{w} \lvert \boldsymbol{\theta})$. The cost function can therefore be approximated by drawing [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) samples $\mathbf{w}^{(i)}$ from $q(\mathbf{w} \lvert \boldsymbol{\theta})$.
$$
\mathcal{F}(\mathcal{D},\boldsymbol{\theta}) \approx {1 \over N} \sum_{i=1}^N \left[
\log q(\mathbf{w}^{(i)} \lvert \boldsymbol{\theta}) -
\log p(\mathbf{w}^{(i)}) -
\log p(\mathcal{D} \lvert \mathbf{w}^{(i)})\right]
\tag{3}
$$
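As a standalone sketch of equation $3$ (toy distributions assumed here, not the network built below): for a single scalar weight with $q = \mathcal{N}(\mu, \sigma^2)$, prior $p(w) = \mathcal{N}(0, 1)$ and likelihood $p(\mathcal{D} \lvert w) = \prod_i \mathcal{N}(y^{(i)} \lvert w x^{(i)}, 1)$, the Monte Carlo estimate is:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal_pdf(x, mean, std):
    # log density of N(x | mean, std^2)
    return -0.5 * np.log(2 * np.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

mu, sigma = 0.5, 0.3                       # variational parameters theta (assumed values)
x_data = np.array([1.0, 2.0])              # toy dataset (assumed values)
y_data = np.array([0.9, 2.1])

n = 10_000
w = rng.normal(mu, sigma, size=n)          # w^(i) ~ q(w | theta)

log_q = log_normal_pdf(w, mu, sigma)       # log q(w^(i) | theta)
log_prior = log_normal_pdf(w, 0.0, 1.0)    # log p(w^(i))
log_lik = log_normal_pdf(y_data[None, :],  # log p(D | w^(i)), summed over data points
                         w[:, None] * x_data[None, :], 1.0).sum(axis=1)

F = np.mean(log_q - log_prior - log_lik)   # equation (3)
print(F)
```

The estimate is $\mathrm{KL}(q \mid\mid p)$ plus the expected negative log likelihood, roughly $3.6$ for these toy numbers.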
In the following example, we'll use a Gaussian distribution for the variational posterior, parameterized by $\boldsymbol{\theta} = (\boldsymbol{\mu}, \boldsymbol{\sigma})$ where $\boldsymbol{\mu}$ is the mean vector of the distribution and $\boldsymbol{\sigma}$ the standard deviation vector. The elements of $\boldsymbol{\sigma}$ are the elements of a diagonal covariance matrix which means that weights are assumed to be uncorrelated. Instead of parameterizing the neural network with weights $\mathbf{w}$ directly we parameterize it with $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ and therefore double the number of parameters compared to a plain neural network.
## Network training
A training iteration consists of a forward-pass and a backward-pass. During a forward pass a single sample is drawn from the variational posterior distribution. It is used to evaluate the approximate cost function defined by equation $3$. The first two terms of the cost function are data-independent and can be evaluated layer-wise, the last term is data-dependent and is evaluated at the end of the forward-pass. During a backward-pass, gradients of $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ are calculated via backpropagation so that their values can be updated by an optimizer.
Since a forward pass involves a stochastic sampling step we have to apply the so-called *re-parameterization trick* for backpropagation to work. The trick is to sample from a parameter-free distribution and then transform the sampled $\boldsymbol{\epsilon}$ with a deterministic function $t(\boldsymbol{\mu}, \boldsymbol{\sigma}, \boldsymbol{\epsilon})$ for which a gradient can be defined. Here, $\boldsymbol{\epsilon}$ is drawn from a standard normal distribution i.e. $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and function $t(\boldsymbol{\mu}, \boldsymbol{\sigma}, \boldsymbol{\epsilon}) = \boldsymbol{\mu} + \boldsymbol{\sigma} \odot \boldsymbol{\epsilon}$ shifts the sample by mean $\boldsymbol{\mu}$ and scales it with $\boldsymbol{\sigma}$ where $\odot$ is element-wise multiplication.
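A minimal numpy sketch of the trick (parameter values assumed for illustration): gradients w.r.t. `mu` and `sigma` can flow through the deterministic transform, while the randomness lives entirely in `eps`:

```python
import numpy as np

rng = np.random.default_rng(42)

mu = np.array([0.0, 1.0])       # variational means (assumed values)
sigma = np.array([0.5, 2.0])    # variational standard deviations (assumed values)

eps = rng.standard_normal(mu.shape)   # eps ~ N(0, I): a parameter-free distribution
w = mu + sigma * eps                  # t(mu, sigma, eps): deterministic and differentiable
print(w)
```

Drawing many such samples reproduces $\mathcal{N}(\boldsymbol{\mu}, \mathrm{diag}(\boldsymbol{\sigma}^2))$ exactly, which is what the sampling step in the layer's `call` method below relies on.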
For numeric stability we will parameterize the network with $\boldsymbol{\rho}$ instead of $\boldsymbol{\sigma}$ directly and transform $\boldsymbol{\rho}$ with the softplus function to obtain $\boldsymbol{\sigma} = \log(1 + \exp(\boldsymbol{\rho}))$. This ensures that $\boldsymbol{\sigma}$ is always positive. As prior, a scale mixture of two Gaussians is used $p(\mathbf{w}) = \pi \mathcal{N}(\mathbf{w} \lvert 0,\sigma_1^2) + (1 - \pi) \mathcal{N}(\mathbf{w} \lvert 0,\sigma_2^2)$ where $\sigma_1$, $\sigma_2$ and $\pi$ are shared parameters. Their values are learned during training (which is in contrast to the paper where a fixed prior is used).
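The softplus transform is easy to sanity-check in isolation (a tiny sketch with assumed values, separate from the Keras code below):

```python
import numpy as np

rho = np.array([-5.0, 0.0, 5.0])   # unconstrained parameters (assumed values)
sigma = np.log1p(np.exp(rho))      # softplus: sigma = log(1 + exp(rho)) > 0
print(sigma)
```

Any real-valued `rho` maps to a strictly positive `sigma`, so the optimizer never has to enforce a positivity constraint explicitly.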
## Uncertainty characterization
Uncertainty in predictions that arise from the uncertainty in weights is called [epistemic uncertainty](https://en.wikipedia.org/wiki/Uncertainty_quantification). This kind of uncertainty can be reduced if we get more data. Consequently, epistemic uncertainty is higher in regions of no or little training data and lower in regions of more training data. Epistemic uncertainty is covered by the variational posterior distribution. Uncertainty coming from the inherent noise in training data is an example of [aleatoric uncertainty](https://en.wikipedia.org/wiki/Uncertainty_quantification). It cannot be reduced if we get more data. Aleatoric uncertainty is covered by the probability distribution used to define the likelihood function.
## Implementation example
Variational inference of neural network parameters is now demonstrated on a simple regression problem. We therefore use a Gaussian distribution for $p(y \lvert \mathbf{x},\mathbf{w})$. The training dataset consists of 32 noisy samples `X`, `y` drawn from a sinusoidal function.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def f(x, sigma):
epsilon = np.random.randn(*x.shape) * sigma
return 10 * np.sin(2 * np.pi * (x)) + epsilon
train_size = 32
noise = 1.0
X = np.linspace(-0.5, 0.5, train_size).reshape(-1, 1)
y = f(X, sigma=noise)
y_true = f(X, sigma=0.0)
plt.scatter(X, y, marker='+', label='Training data')
plt.plot(X, y_true, label='Truth')
plt.title('Noisy training data and ground truth')
plt.legend();
```
The noise in training data gives rise to aleatoric uncertainty. To cover epistemic uncertainty we implement the variational inference logic in a custom `DenseVariational` Keras layer. The learnable parameters of the mixture prior, $\sigma_1$, $\sigma_2$ and $\pi$, are shared across layers. The complexity cost (`kl_loss`) is computed layer-wise and added to the total loss with the `add_loss` method. Implementations of `build` and `call` directly follow the equations defined above.
```
from keras import backend as K
from keras import activations, initializers
from keras.layers import Layer
import tensorflow as tf
def mixture_prior_params(sigma_1, sigma_2, pi):
    params = K.variable([sigma_1, sigma_2, pi], name='mixture_prior_params')
    sigma = np.sqrt(pi * sigma_1 ** 2 + (1 - pi) * sigma_2 ** 2)
    return params, sigma
def log_mixture_prior_prob(w):
comp_1_dist = tf.distributions.Normal(0.0, prior_params[0])
comp_2_dist = tf.distributions.Normal(0.0, prior_params[1])
comp_1_weight = prior_params[2]
return K.log(comp_1_weight * comp_1_dist.prob(w) + (1 - comp_1_weight) * comp_2_dist.prob(w))
# Mixture prior parameters shared across DenseVariational layer instances
prior_params, prior_sigma = mixture_prior_params(sigma_1=1.0, sigma_2=0.1, pi=0.2)
class DenseVariational(Layer):
def __init__(self, output_dim, kl_loss_weight, activation=None, **kwargs):
self.output_dim = output_dim
self.kl_loss_weight = kl_loss_weight
self.activation = activations.get(activation)
super().__init__(**kwargs)
def build(self, input_shape):
self._trainable_weights.append(prior_params)
self.kernel_mu = self.add_weight(name='kernel_mu',
shape=(input_shape[1], self.output_dim),
initializer=initializers.normal(stddev=prior_sigma),
trainable=True)
self.bias_mu = self.add_weight(name='bias_mu',
shape=(self.output_dim,),
initializer=initializers.normal(stddev=prior_sigma),
trainable=True)
self.kernel_rho = self.add_weight(name='kernel_rho',
shape=(input_shape[1], self.output_dim),
initializer=initializers.constant(0.0),
trainable=True)
self.bias_rho = self.add_weight(name='bias_rho',
shape=(self.output_dim,),
initializer=initializers.constant(0.0),
trainable=True)
super().build(input_shape)
def call(self, x):
kernel_sigma = tf.math.softplus(self.kernel_rho)
kernel = self.kernel_mu + kernel_sigma * tf.random.normal(self.kernel_mu.shape)
bias_sigma = tf.math.softplus(self.bias_rho)
bias = self.bias_mu + bias_sigma * tf.random.normal(self.bias_mu.shape)
self.add_loss(self.kl_loss(kernel, self.kernel_mu, kernel_sigma) +
self.kl_loss(bias, self.bias_mu, bias_sigma))
return self.activation(K.dot(x, kernel) + bias)
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
def kl_loss(self, w, mu, sigma):
variational_dist = tf.distributions.Normal(mu, sigma)
        return self.kl_loss_weight * K.sum(variational_dist.log_prob(w) - log_mixture_prior_prob(w))
```
Our model is a neural network with two `DenseVariational` hidden layers, each having 20 units, and one `DenseVariational` output layer with one unit. Instead of modeling a full probability distribution $p(y \lvert \mathbf{x},\mathbf{w})$ as output, the network simply outputs the mean of the corresponding Gaussian distribution. In other words, we do not model aleatoric uncertainty here and assume it is known. We only model epistemic uncertainty via the `DenseVariational` layers.
Since the training dataset has only 32 examples we train the network with all 32 examples per epoch so that the number of batches per epoch is 1. For other configurations, the complexity cost (`kl_loss`) must be weighted by $1/M$ as described in section 3.4 of the paper where $M$ is the number of mini-batches per epoch.
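As a sketch of that weighting (the batch size of 8 is a hypothetical alternative to the full-batch setting used below): with $M$ mini-batches per epoch, each batch carries $1/M$ of the complexity cost, so the full KL divergence is counted exactly once per epoch.

```python
train_size = 32
batch_size = 8                           # hypothetical smaller batch
num_batches = train_size // batch_size   # M = 4 mini-batches per epoch
kl_loss_weight = 1.0 / num_batches       # each batch carries 1/M of the KL term

# Summed over all M batches, the KL divergence is counted exactly once:
total_kl_weight_per_epoch = num_batches * kl_loss_weight
```

With the full-batch setting used below (`batch_size = train_size`), this reduces to `kl_loss_weight = 1.0`.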
```
from keras.layers import Input
from keras.models import Model
batch_size = train_size
num_batches = train_size / batch_size
kl_loss_weight = 1.0 / num_batches
x_in = Input(shape=(1,))
x = DenseVariational(20, kl_loss_weight=kl_loss_weight, activation='relu')(x_in)
x = DenseVariational(20, kl_loss_weight=kl_loss_weight, activation='relu')(x)
x = DenseVariational(1, kl_loss_weight=kl_loss_weight)(x)
model = Model(x_in, x)
```
The network can now be trained with a Gaussian negative log likelihood function (`neg_log_likelihood`) as loss function assuming a fixed standard deviation (`noise`). This corresponds to the *likelihood cost*, the last term in equation $3$.
```
from keras import callbacks, optimizers
def neg_log_likelihood(y_true, y_pred, sigma=noise):
dist = tf.distributions.Normal(loc=y_pred, scale=sigma)
return K.sum(-dist.log_prob(y_true))
model.compile(loss=neg_log_likelihood, optimizer=optimizers.Adam(lr=0.03), metrics=['mse'])
model.fit(X, y, batch_size=batch_size, epochs=1500, verbose=0);
```
When calling `model.predict` we draw a random sample from the variational posterior distribution and use it to compute the output value of the network. This is equivalent to obtaining the output from a single member of a hypothetical ensemble of neural networks. Drawing 500 samples means that we get predictions from 500 ensemble members. From these predictions we can compute statistics such as the mean and standard deviation. In our example, the standard deviation is a measure of epistemic uncertainty.
```
import tqdm
X_test = np.linspace(-1.5, 1.5, 1000).reshape(-1, 1)
y_pred_list = []
for i in tqdm.tqdm(range(500)):
y_pred = model.predict(X_test)
y_pred_list.append(y_pred)
y_preds = np.concatenate(y_pred_list, axis=1)
y_mean = np.mean(y_preds, axis=1)
y_sigma = np.std(y_preds, axis=1)
plt.plot(X_test, y_mean, 'r-', label='Predictive mean');
plt.scatter(X, y, marker='+', label='Training data')
plt.fill_between(X_test.ravel(),
y_mean + 2 * y_sigma,
y_mean - 2 * y_sigma,
alpha=0.5, label='Epistemic uncertainty')
plt.title('Prediction')
plt.legend();
```
We can clearly see that epistemic uncertainty is much higher in regions of no training data than it is in regions of existing training data. The predictive mean could also have been obtained with a single forward pass, i.e. a single `model.predict` call, by using only the mean of the variational posterior distribution, which is equivalent to sampling from the variational posterior with $\boldsymbol{\sigma}$ set to $\mathbf{0}$. The corresponding implementation is omitted here but is trivial to add.
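A minimal numpy sketch of such a mean-only pass, assuming the `kernel_mu` and `bias_mu` arrays have already been extracted from the trained layers (the extraction itself is omitted; the weight list below is hypothetical):

```python
import numpy as np

def mean_forward(x, mu_weights):
    """Forward pass through the posterior means only (sigma set to 0).

    mu_weights: list of (kernel_mu, bias_mu) array pairs, e.g. extracted from
    the trained DenseVariational layers. ReLU is applied on hidden layers
    only, matching the architecture above.
    """
    for i, (kernel_mu, bias_mu) in enumerate(mu_weights):
        x = x @ kernel_mu + bias_mu
        if i < len(mu_weights) - 1:
            x = np.maximum(x, 0.0)   # ReLU on all but the output layer
    return x
```

Because no sampling occurs, repeated calls with the same input are deterministic, unlike `model.predict` on the variational model.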
For an example of how to model both epistemic and aleatoric uncertainty I recommend reading [Regression with Probabilistic Layers in TensorFlow Probability](https://medium.com/tensorflow/regression-with-probabilistic-layers-in-tensorflow-probability-e46ff5d37baf), which uses probabilistic Keras layers from the upcoming TensorFlow Probability 0.7.0 release. Their approach to variational inference is similar to the approach described here but differs in some details: for example, they compute the complexity cost analytically instead of estimating it from Monte Carlo samples.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/JavaScripts/Arrays/QualityMosaic.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Arrays/QualityMosaic.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/Arrays/QualityMosaic.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except ImportError:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Array-based quality mosaic.
# Returns a mosaic built by sorting each stack of pixels by the first band
# in descending order, and taking the highest quality pixel.
def qualityMosaic(bands):
# Convert to an array, and declare names for the axes and indices along the
# band axis.
array = bands.toArray()
imageAxis = 0
bandAxis = 1
qualityIndex = 0
valuesIndex = 1
# Slice the quality and values off the main array, and sort the values by the
# quality in descending order.
quality = array.arraySlice(bandAxis, qualityIndex, qualityIndex + 1)
values = array.arraySlice(bandAxis, valuesIndex)
valuesByQuality = values.arraySort(quality.multiply(-1))
# Get an image where each pixel is the array of band values where the quality
# band is greatest. Note that while the array is 2-D, the first axis is
# length one.
best = valuesByQuality.arraySlice(imageAxis, 0, 1)
# Project the best 2D array down to a single dimension, and convert it back
# to a regular scalar image by naming each position along the axis. Note we
# provide the original band names, but slice off the first band since the
# quality band is not part of the result. Also note to get at the band names,
# we have to do some kind of reduction, but it won't really calculate pixels
# if we only access the band names.
bandNames = bands.min().bandNames().slice(1)
return best.arrayProject([bandAxis]).arrayFlatten([bandNames])
# Load the l7_l1t collection for the year 2000, and make sure the first band
# is our quality measure, in this case the normalized difference values.
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1') \
.filterDate('2000-01-01', '2001-01-01')
def func_ned(image):
return image.normalizedDifference(['B4', 'B3']).addBands(image)
withNd = l7.map(func_ned)
# Build a mosaic using the NDVI of bands 4 and 3, essentially showing the
# greenest pixels from the year 2000.
greenest = qualityMosaic(withNd)
# Select out the color bands to visualize. An interesting artifact of this
# approach is that clouds are greener than water. So all the water is white.
rgb = greenest.select(['B3', 'B2', 'B1'])
Map.addLayer(rgb, {'gain': [1.4, 1.4, 1.1]}, 'Greenest')
Map.setCenter(-90.08789, 16.38339, 11)
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
<img src="../../../../../images/qiskit_header.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" align="middle">
# _*Qiskit Finance: Pricing Fixed-Income Assets*_
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-iqx-tutorials.
***
### Contributors
Stefan Woerner<sup>[1]</sup>, Daniel Egger<sup>[1]</sup>, Shaohan Hu<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>
### Affiliation
- <sup>[1]</sup>IBMQ
### Introduction
We seek to price a fixed-income asset knowing the distributions describing the relevant interest rates. The cash flows $c_t$ of the asset and the dates at which they occur are known. The total value $V$ of the asset is thus the expectation value of:
$$V = \sum_{t=1}^T \frac{c_t}{(1+r_t)^t}$$
Each cash flow is treated as a zero coupon bond with a corresponding interest rate $r_t$ that depends on its maturity. The user must specify the distribution modeling the uncertainty in each $r_t$ (possibly correlated) as well as the number of qubits they wish to use to sample each distribution. In this example we expand the value of the asset to first order in the interest rates $r_t$. This corresponds to studying the asset in terms of its duration.
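As a quick classical sanity check of the value formula and its first-order (duration) approximation, using the cash flows and the mean interest rates that appear later in this notebook:

```python
cf = [1.0, 2.0]      # cash flows at periods t = 1, 2
r = [0.12, 0.24]     # interest rates per maturity (the means used below)

# Exact discounted value: V = sum_t cf_t / (1 + r_t)^t
V = sum(c / (1 + rt)**t for t, (c, rt) in enumerate(zip(cf, r), start=1))

# First-order expansion around r_t = 0: 1/(1 + r_t)^t ≈ 1 - t * r_t
V_lin = sum(c * (1 - t * rt) for t, (c, rt) in enumerate(zip(cf, r), start=1))
```

The linear approximation undervalues the asset here because the discount factor is convex in the rate; the quantum algorithm below estimates the expectation of this linearized value.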
<br>
<br>
The approximation of the objective function follows the following paper:<br>
<a href="https://arxiv.org/abs/1806.06893">Quantum Risk Analysis. Woerner, Egger. 2018.</a>
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import BasicAer
from qiskit.aqua.algorithms.single_sample.amplitude_estimation.ae import AmplitudeEstimation
from qiskit.aqua.components.uncertainty_models import MultivariateNormalDistribution
from qiskit.aqua.components.uncertainty_problems import FixedIncomeExpectedValue
backend = BasicAer.get_backend('statevector_simulator')
```
### Uncertainty Model
We construct a circuit factory to load a multivariate normal random distribution in $d$ dimensions into a quantum state.
The distribution is truncated to a given box $\otimes_{i=1}^d [low_i, high_i]$ and discretized using $2^{n_i}$ grid points, where $n_i$ denotes the number of qubits used for dimension $i = 1,\ldots, d$.
The unitary operator corresponding to the circuit factory implements the following:
$$\big|0\rangle_{n_1}\ldots\big|0\rangle_{n_d} \mapsto \big|\psi\rangle = \sum_{i_1=0}^{2^{n_1}-1}\ldots\sum_{i_d=0}^{2^{n_d}-1} \sqrt{p_{i_1,...,i_d}}\big|i_1\rangle_{n_1}\ldots\big|i_d\rangle_{n_d},$$
where $p_{i_1, ..., i_d}$ denote the probabilities corresponding to the truncated and discretized distribution and where $i_j$ is mapped to the right interval $[low_j, high_j]$ using the affine map:
$$ \{0, \ldots, 2^{n_{j}}-1\} \ni i_j \mapsto \frac{high_j - low_j}{2^{n_j} - 1} * i_j + low_j \in [low_j, high_j].$$
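As a quick sanity check, this affine map can be evaluated directly; the sketch below uses one dimension, with bounds matching the first dimension used later in this notebook:

```python
def grid_to_value(i, n, low, high):
    # Map grid index i in {0, ..., 2**n - 1} affinely onto [low, high].
    return (high - low) / (2**n - 1) * i + low

# With n = 2 qubits, the 2**2 = 4 grid points span [0, 0.12] uniformly:
points = [grid_to_value(i, n=2, low=0.0, high=0.12) for i in range(4)]
```

Index 0 always maps to `low` and index `2**n - 1` to `high`, so the endpoints of the truncation box are included in the grid.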
In addition to the uncertainty model, we can also apply an affine map, e.g. resulting from a principal component analysis. The interest rates used are then given by:
$$ \vec{r} = A * \vec{x} + b,$$
where $\vec{x} \in \otimes_{i=1}^d [low_i, high_i]$ follows the given random distribution.
```
# can be used in case a principal component analysis has been done to derive the uncertainty model, ignored in this example.
A = np.eye(2)
b = np.zeros(2)
# specify the number of qubits that are used to represent the different dimensions of the uncertainty model
num_qubits = [2, 2]
# specify the lower and upper bounds for the different dimensions
low = [0, 0]
high = [0.12, 0.24]
mu = [0.12, 0.24]
sigma = 0.01*np.eye(2)
# construct corresponding distribution
u = MultivariateNormalDistribution(num_qubits, low, high, mu, sigma)
# plot contour of probability density function
x = np.linspace(low[0], high[0], 2**num_qubits[0])
y = np.linspace(low[1], high[1], 2**num_qubits[1])
z = u.probabilities.reshape(2**num_qubits[0], 2**num_qubits[1])
plt.contourf(x, y, z)
plt.xticks(x, size=15)
plt.yticks(y, size=15)
plt.grid()
plt.xlabel('$r_1$ (%)', size=15)
plt.ylabel('$r_2$ (%)', size=15)
plt.colorbar()
plt.show()
```
### Cash flow, payoff function, and exact expected value
In the following we define the cash flow per period, the resulting payoff function and evaluate the exact expected value.
For the payoff function we first use a first order approximation and then apply the same approximation technique as for the linear part of the payoff function of the [European Call Option](european_call_option_pricing.ipynb).
```
# specify cash flow
cf = [1.0, 2.0]
periods = range(1, len(cf)+1)
# plot cash flow
plt.bar(periods, cf)
plt.xticks(periods, size=15)
plt.yticks(size=15)
plt.grid()
plt.xlabel('periods', size=15)
plt.ylabel('cashflow ($)', size=15)
plt.show()
# estimate real value
cnt = 0
exact_value = 0.0
for x1 in np.linspace(low[0], high[0], pow(2, num_qubits[0])):
for x2 in np.linspace(low[1], high[1], pow(2, num_qubits[1])):
prob = u.probabilities[cnt]
for t in range(len(cf)):
# evaluate linear approximation of real value w.r.t. interest rates
exact_value += prob * (cf[t]/pow(1 + b[t], t+1) - (t+1)*cf[t]*np.dot(A[:, t], np.asarray([x1, x2]))/pow(1 + b[t], t+2))
cnt += 1
print('Exact value: \t%.4f' % exact_value)
# specify approximation factor
c_approx = 0.125
# get fixed income circuit appfactory
fixed_income = FixedIncomeExpectedValue(u, A, b, cf, c_approx)
# set number of evaluation qubits (samples)
m = 5
# construct amplitude estimation
ae = AmplitudeEstimation(m, fixed_income)
# result = ae.run(quantum_instance=LegacySimulators.get_backend('qasm_simulator'), shots=100)
result = ae.run(quantum_instance=backend)
print('Exact value: \t%.4f' % exact_value)
print('Estimated value:\t%.4f' % result['estimation'])
print('Probability: \t%.4f' % result['max_probability'])
# plot estimated values for "a" (direct result of amplitude estimation, not rescaled yet)
plt.bar(result['values'], result['probabilities'], width=0.5/len(result['probabilities']))
plt.xticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.yticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.title('"a" Value', size=15)
plt.ylabel('Probability', size=15)
plt.xlim((0,1))
plt.ylim((0,1))
plt.grid()
plt.show()
# plot estimated values for fixed-income asset (after re-scaling and reversing the c_approx-transformation)
plt.bar(result['mapped_values'], result['probabilities'], width=3/len(result['probabilities']))
plt.plot([exact_value, exact_value], [0,1], 'r--', linewidth=2)
plt.xticks(size=15)
plt.yticks([0, 0.25, 0.5, 0.75, 1], size=15)
plt.title('Estimated Option Price', size=15)
plt.ylabel('Probability', size=15)
plt.ylim((0,1))
plt.grid()
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# SEIRS+ Network Model Demo
**This notebook provides a demonstration of the core functionality of the SEIRS+ Network Model and offers a sandbox for easily changing simulation parameters and scenarios.**
For a more thorough walkthrough of the model and use of this package, refer to the README.
### Installing and Importing the model code
All of the code needed to run the model is imported from the ```models``` module of this package.
#### Install the package using ```pip```
The package can be installed on your machine by entering this in the command line:
```sudo pip install seirsplus```
Then, the ```models``` module can be imported into your scripts as shown here:
```
from seirsplus.models import *
import networkx
```
#### *Alternatively, manually copy the code to your machine*
*You can use the model code without installing a package by copying the ```models.py``` module file to a directory on your machine. In this case, the easiest way to use the module is to place your scripts in the same directory as the module, and import the module as shown here:*
```python
from models import *
```
### Generating interaction networks
This model simulates SEIRS epidemic dynamics for populations with a structured interaction network (as opposed to standard deterministic SIR/SEIR/SEIRS models, which assume uniform mixing of the population). As such, a graph specifying the interaction network for the population must be specified, where each node represents an individual in the population and edges connect individuals who have regular interactions.
The interaction network can be specified by a ```networkx``` Graph object or a 2D numpy array representing the adjacency matrix, either of which can be defined and generated by any method.
*Here, we use a ```custom_exponential_graph()``` generation function included in this package, which generates power-law graphs that have degree distributions with two exponential tails. For more information on this custom graph type and its generation, see the README.*
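As a toy illustration of the adjacency-matrix alternative mentioned above (the networks actually used below are generated graphs with thousands of nodes), a small interaction network can be written down directly:

```python
import numpy as np

# Toy 4-node interaction network as an adjacency matrix: an entry of 1 means
# the two individuals interact regularly. The network is undirected, so the
# matrix is symmetric with a zero diagonal.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (0, 3)]:
    A[i, j] = A[j, i] = 1

degrees = A.sum(axis=1)   # number of regular contacts per individual
```

Either this kind of array or an equivalent `networkx` Graph object can be passed as the network argument.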
**_Note:_** *Simulation time increases with network size. Small networks simulate quickly, but have more stochastic volatility. Networks with ~10,000 nodes are large enough to produce per-capita population dynamics that are generally consistent with those of larger networks, but small enough to simulate quickly. We recommend using networks with ~10,000 nodes for prototyping parameters and scenarios, which can then be run on larger networks if more precision is required (for more on this, see README).*
```
numNodes = 10000
baseGraph = networkx.barabasi_albert_graph(n=numNodes, m=9)
# Baseline normal interactions:
G_normal = custom_exponential_graph(baseGraph, scale=100)
plot_degree_distn(G_normal, max_degree=40)
```
Epidemic scenarios of interest often involve interaction networks that change in time. Multiple interaction networks can be defined and used at different times in the model simulation, as will be shown below.
*Here we generate a graph representing interactions during a period of social distancing, where each individual drops some portion of their normal interactions with others. Again, we use the ```custom_exponential_graph()``` function to generate this graph; for more information, see the README.*
```
# Social distancing interactions:
G_distancing = custom_exponential_graph(baseGraph, scale=10)
plot_degree_distn(G_distancing, max_degree=40)
```
This SEIRS+ model features dynamics corresponding to testing individuals for the disease and moving individuals with detected infection into a state where their rates of recovery, mortality, etc. may be different. In addition, given that this model considers individuals in an interaction network, a separate graph defining the interactions for individuals with detected cases can be specified.
*Here we generate a graph representing the interactions that individuals have when they test positive for the disease. In this case, a significant portion of each individual's normal interaction edges are removed from the graph, as if the individual is quarantined upon detection of infection. Again, we use the ```custom_exponential_graph()``` to generate this graph; for more information, see the README.*
For more information on how testing, contact tracing, and detected cases are handled in this model, see the README.
```
# Quarantine interactions:
G_quarantine = custom_exponential_graph(baseGraph, scale=5)
plot_degree_distn(G_quarantine, max_degree=40)
```
### Initializing the model parameters
All model parameter values, including the normal and quarantine interaction networks, are set in the call to the ```SEIRSNetworkModel``` constructor. The normal interaction network ```G``` and the basic SEIR parameters ```beta```, ```sigma```, and ```gamma``` are the only required arguments. All other arguments represent parameters for optional extended model dynamics; these optional parameters take default values that turn off their corresponding dynamics when not provided in the constructor. For clarity and ease of customization in this notebook, all available model parameters are listed below.
For more information on parameter meanings, see the README.
*The parameter values shown correspond to rough estimates of parameter values for the COVID-19 epidemic.*
```
model = SEIRSNetworkModel(G =G_normal,
beta =0.155,
sigma =1/5.2,
gamma =1/12.39,
mu_I =0.0004,
mu_0 =0,
nu =0,
xi =0,
p =0.5,
Q =G_quarantine,
beta_D =0.155,
sigma_D =1/5.2,
gamma_D =1/12.39,
mu_D =0.0004,
theta_E =0,
theta_I =0,
phi_E =0,
phi_I =0,
psi_E =1.0,
psi_I =1.0,
q =0.5,
initI =numNodes/100,
initE =0,
initD_E =0,
initD_I =0,
initR =0,
initF =0)
```
### Checkpoints
Model parameters can be easily changed during a simulation run using checkpoints. A dictionary holds a list of checkpoint times (```checkpoints['t']```) and lists of new values to assign to various model parameters at each checkpoint time. Any model parameter listed in the model constructor can be updated in this way. Only model parameters that are included in the checkpoints dictionary have their values updated at the checkpoint times; all other parameters keep their pre-existing values.
*The checkpoints shown here correspond to starting social distancing and testing at time ```t=20``` (the graph ```G``` is updated to ```G_distancing``` and locality parameter ```p``` is decreased to ```0.1```; testing params ```theta_E```, ```theta_I```, ```phi_E```, and ```phi_I``` are set to non-zero values) and then stopping social distancing at time ```t=100``` (```G``` and ```p``` changed back to their "normal" values; testing params remain non-zero).*
```
checkpoints = {'t': [20, 100],
'G': [G_distancing, G_normal],
'p': [0.1, 0.5],
'theta_E': [0.02, 0.02],
'theta_I': [0.02, 0.02],
'phi_E': [0.2, 0.2],
'phi_I': [0.2, 0.2]}
```
### Running the simulation
```
model.run(T=300, checkpoints=checkpoints)
```
### Visualizing the results
The ```SEIRSNetworkModel``` class has a ```plot()``` convenience function for plotting simulation results on a matplotlib axis. This function generates a line plot of the frequency of each model state in the population by default, but there are many optional arguments that can be used to customize the plot.
The ```SEIRSNetworkModel``` class also has convenience functions for generating a full figure out of model simulation results (optional arguments can be provided to customize the plots generated by these functions).
- ```figure_basic()``` calls the ```plot()``` function with default parameters to generate a line plot of the frequency of each state in the population.
- ```figure_infections()``` calls the ```plot()``` function with default parameters to generate a stacked area plot of the frequency of only the infection states ($E$, $I$, $D_E$, $D_I$) in the population.
For more information on the built-in plotting functions, see the README.
```
model.figure_infections(vlines=checkpoints['t'], ylim=0.15)
```
#### Reference simulation visualizations
We can also visualize the results of other simulations as a reference for comparison with our main simulation.
Here we simulate a model where no distancing or testing takes place, so that we can compare the effects of these interventions:
```
ref_model = SEIRSNetworkModel(G=G_normal, beta=0.155, sigma=1/5.2, gamma=1/12.39, mu_I=0.0004, p=0.5,
Q=G_quarantine, beta_D=0.155, sigma_D=1/5.2, gamma_D=1/12.39, mu_D=0.0004,
theta_E=0, theta_I=0, phi_E=0, phi_I=0, psi_E=1.0, psi_I=1.0, q=0.5,
initI=numNodes/100)
ref_model.run(T=300)
```
Now we can visualize our main simulation together with this reference simulation by passing the model object of the reference simulation to the appropriate figure function argument (note: a second reference simulation could also be visualized by passing it to the ```dashed_reference_results``` argument):
```
model.figure_infections(vlines=checkpoints['t'], ylim=0.2, shaded_reference_results=ref_model)
```
As further demonstration, we might also wish to compare the results of these network model simulations to a deterministic model simulation of the same SEIRS parameters (with no interventions in this case):
```
ref_model_determ = SEIRSModel(beta=0.147, sigma=1/5.2, gamma=1/12.39, mu_I=0.0004, initI=100, initN=10000)
ref_model_determ.run(T=300)
model.figure_infections(vlines=checkpoints['t'], ylim=0.2,
shaded_reference_results=ref_model, shaded_reference_label='network: no interventions',
dashed_reference_results=ref_model_determ, dashed_reference_label='deterministic: no interventions')
```
```
import xlnet
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import model_utils
import pickle
import json
pad_sequences = tf.keras.preprocessing.sequence.pad_sequences
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
import sentencepiece as spm
from prepro_utils import preprocess_text, encode_ids
sp_model = spm.SentencePieceProcessor()
sp_model.Load('alxlnet-base/sp10m.cased.v5.model')
def tokenize_fn(text):
text = preprocess_text(text, lower= False)
return encode_ids(sp_model, text)
with open('../bert/session-pos.pkl', 'rb') as fopen:
data = pickle.load(fopen)
data.keys()
train_X = data['train_X']
test_X = data['test_X']
train_Y = data['train_Y']
test_Y = data['test_Y']
with open('../bert/dictionary-pos.json') as fopen:
dictionary = json.load(fopen)
dictionary.keys()
word2idx = dictionary['word2idx']
idx2word = {int(k): v for k, v in dictionary['idx2word'].items()}
tag2idx = dictionary['tag2idx']
idx2tag = {int(k): v for k, v in dictionary['idx2tag'].items()}
char2idx = dictionary['char2idx']
from tqdm import tqdm
SEG_ID_A = 0
SEG_ID_B = 1
SEG_ID_CLS = 2
SEG_ID_SEP = 3
SEG_ID_PAD = 4
special_symbols = {
"<unk>" : 0,
"<s>" : 1,
"</s>" : 2,
"<cls>" : 3,
"<sep>" : 4,
"<pad>" : 5,
"<mask>" : 6,
"<eod>" : 7,
"<eop>" : 8,
}
VOCAB_SIZE = 32000
UNK_ID = special_symbols["<unk>"]
CLS_ID = special_symbols["<cls>"]
SEP_ID = special_symbols["<sep>"]
MASK_ID = special_symbols["<mask>"]
EOD_ID = special_symbols["<eod>"]
def XY(left_train, right_train):
X, Y, segments, masks = [], [], [], []
for i in tqdm(range(len(left_train))):
left = [idx2word[d] for d in left_train[i]]
right = [idx2tag[d] for d in right_train[i]]
bert_tokens = []
y = []
for no, orig_token in enumerate(left):
t = tokenize_fn(orig_token)
bert_tokens.extend(t)
if len(t):
y.append(right[no])
y.extend(['X'] * (len(t) - 1))
bert_tokens.extend([4, 3])
segment = [0] * (len(bert_tokens) - 1) + [SEG_ID_CLS]
input_mask = [0] * len(segment)
y.extend(['PAD', 'PAD'])
y = [tag2idx[i] for i in y]
if len(bert_tokens) != len(y):
print(i)
X.append(bert_tokens)
Y.append(y)
segments.append(segment)
masks.append(input_mask)
return X, Y, segments, masks
train_X, train_Y, train_segments, train_masks = XY(train_X, train_Y)
test_X, test_Y, test_segments, test_masks = XY(test_X, test_Y)
kwargs = dict(
is_training=True,
use_tpu=False,
use_bfloat16=False,
dropout=0.0,
dropatt=0.0,
init='normal',
init_range=0.1,
init_std=0.05,
clamp_len=-1)
xlnet_parameters = xlnet.RunConfig(**kwargs)
xlnet_config = xlnet.XLNetConfig(json_path='alxlnet-base/config.json')
epoch = 5
batch_size = 32
warmup_proportion = 0.1
num_train_steps = int(len(train_X) / batch_size * epoch)
num_warmup_steps = int(num_train_steps * warmup_proportion)
print(num_train_steps, num_warmup_steps)
training_parameters = dict(
decay_method = 'poly',
train_steps = num_train_steps,
learning_rate = 2e-5,
warmup_steps = num_warmup_steps,
min_lr_ratio = 0.0,
weight_decay = 0.00,
adam_epsilon = 1e-8,
num_core_per_host = 1,
lr_layer_decay_rate = 1,
use_tpu=False,
use_bfloat16=False,
dropout=0.0,
dropatt=0.0,
init='normal',
init_range=0.1,
init_std=0.02,
clip = 1.0,
clamp_len=-1,)
class Parameter:
def __init__(self, decay_method, warmup_steps, weight_decay, adam_epsilon,
num_core_per_host, lr_layer_decay_rate, use_tpu, learning_rate, train_steps,
min_lr_ratio, clip, **kwargs):
self.decay_method = decay_method
self.warmup_steps = warmup_steps
self.weight_decay = weight_decay
self.adam_epsilon = adam_epsilon
self.num_core_per_host = num_core_per_host
self.lr_layer_decay_rate = lr_layer_decay_rate
self.use_tpu = use_tpu
self.learning_rate = learning_rate
self.train_steps = train_steps
self.min_lr_ratio = min_lr_ratio
self.clip = clip
training_parameters = Parameter(**training_parameters)
class Model:
def __init__(
self,
dimension_output,
learning_rate = 2e-5,
):
self.X = tf.placeholder(tf.int32, [None, None])
self.segment_ids = tf.placeholder(tf.int32, [None, None])
self.input_masks = tf.placeholder(tf.float32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.lengths = tf.count_nonzero(self.X, 1)
self.maxlen = tf.shape(self.X)[1]
xlnet_model = xlnet.XLNetModel(
xlnet_config=xlnet_config,
run_config=xlnet_parameters,
input_ids=tf.transpose(self.X, [1, 0]),
seg_ids=tf.transpose(self.segment_ids, [1, 0]),
input_mask=tf.transpose(self.input_masks, [1, 0]))
output_layer = xlnet_model.get_sequence_output()
output_layer = tf.transpose(output_layer, [1, 0, 2])
logits = tf.layers.dense(output_layer, dimension_output)
y_t = self.Y
log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood(
logits, y_t, self.lengths
)
self.cost = tf.reduce_mean(-log_likelihood)
self.optimizer = tf.train.AdamOptimizer(
learning_rate = learning_rate
).minimize(self.cost)
mask = tf.sequence_mask(self.lengths, maxlen = self.maxlen)
self.tags_seq, tags_score = tf.contrib.crf.crf_decode(
logits, transition_params, self.lengths
)
self.tags_seq = tf.identity(self.tags_seq, name = 'logits')
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(self.tags_seq, mask)
mask_label = tf.boolean_mask(y_t, mask)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
dimension_output = len(tag2idx)
learning_rate = 2e-5
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(
dimension_output,
learning_rate
)
sess.run(tf.global_variables_initializer())
import collections
import re
def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
"""Compute the union of the current variables and checkpoint variables."""
assignment_map = {}
initialized_variable_names = {}
name_to_variable = collections.OrderedDict()
for var in tvars:
name = var.name
m = re.match('^(.*):\\d+$', name)
if m is not None:
name = m.group(1)
name_to_variable[name] = var
init_vars = tf.train.list_variables(init_checkpoint)
assignment_map = collections.OrderedDict()
for x in init_vars:
(name, var) = (x[0], x[1])
if name not in name_to_variable:
continue
assignment_map[name] = name_to_variable[name]
initialized_variable_names[name] = 1
initialized_variable_names[name + ':0'] = 1
return (assignment_map, initialized_variable_names)
tvars = tf.trainable_variables()
checkpoint = 'alxlnet-base/model.ckpt'
assignment_map, initialized_variable_names = get_assignment_map_from_checkpoint(tvars,
checkpoint)
saver = tf.train.Saver(var_list = assignment_map)
saver.restore(sess, checkpoint)
def merge_sentencepiece_tokens_tagging(x, y):
new_paired_tokens = []
n_tokens = len(x)
rejected = ['<cls>', '<sep>']
i = 0
while i < n_tokens:
current_token, current_label = x[i], y[i]
if not current_token.startswith('▁') and current_token not in rejected:
previous_token, previous_label = new_paired_tokens.pop()
merged_token = previous_token
merged_label = [previous_label]
while (
not current_token.startswith('▁')
and current_token not in rejected
):
merged_token = merged_token + current_token.replace('▁', '')
merged_label.append(current_label)
i = i + 1
current_token, current_label = x[i], y[i]
merged_label = merged_label[0]
new_paired_tokens.append((merged_token, merged_label))
else:
new_paired_tokens.append((current_token, current_label))
i = i + 1
words = [
i[0].replace('▁', '')
for i in new_paired_tokens
if i[0] not in ['<cls>', '<sep>']
]
labels = [i[1] for i in new_paired_tokens if i[0] not in ['<cls>', '<sep>']]
return words, labels
string = 'KUALA LUMPUR: Sempena sambutan Aidilfitri minggu depan, Perdana Menteri Tun Dr Mahathir Mohamad dan Menteri Pengangkutan Anthony Loke Siew Fook menitipkan pesanan khas kepada orang ramai yang mahu pulang ke kampung halaman masing-masing. Dalam video pendek terbitan Jabatan Keselamatan Jalan Raya (JKJR) itu, Dr Mahathir menasihati mereka supaya berhenti berehat dan tidur sebentar sekiranya mengantuk ketika memandu.'
import re
def entities_textcleaning(string, lowering = False):
"""
use by entities recognition, pos recognition and dependency parsing
"""
string = re.sub('[^A-Za-z0-9\-\/() ]+', ' ', string)
string = re.sub(r'[ ]+', ' ', string).strip()
original_string = string.split()
if lowering:
string = string.lower()
string = [
(original_string[no], word.title() if word.isupper() else word)
for no, word in enumerate(string.split())
if len(word)
]
return [s[0] for s in string], [s[1] for s in string]
def parse_X(left):
bert_tokens = []
for no, orig_token in enumerate(left):
t = tokenize_fn(orig_token)
bert_tokens.extend(t)
bert_tokens.extend([4, 3])
segment = [0] * (len(bert_tokens) - 1) + [SEG_ID_CLS]
input_mask = [0] * len(segment)
s_tokens = [sp_model.IdToPiece(i) for i in bert_tokens]
return bert_tokens, segment, input_mask, s_tokens
sequence = entities_textcleaning(string)[1]
parsed_sequence, segment_sequence, mask_sequence, xlnet_sequence = parse_X(sequence)
predicted = sess.run(model.tags_seq,
feed_dict = {
model.X: [parsed_sequence],
model.segment_ids: [segment_sequence],
model.input_masks: [mask_sequence],
},
)[0]
merged = merge_sentencepiece_tokens_tagging(xlnet_sequence, [idx2tag[d] for d in predicted])
list(zip(merged[0], merged[1]))
import time
for e in range(5):
lasttime = time.time()
train_acc, train_loss, test_acc, test_loss = [], [], [], []
pbar = tqdm(
range(0, len(train_X), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(train_X))
batch_x = train_X[i : index]
batch_y = train_Y[i : index]
batch_masks = train_masks[i : index]
batch_segments = train_segments[i : index]
batch_x = pad_sequences(batch_x, padding='post')
batch_y = pad_sequences(batch_y, padding='post')
batch_segments = pad_sequences(batch_segments, padding='post', value = 4)
batch_masks = pad_sequences(batch_masks, padding='post', value = 1)
acc, cost, _ = sess.run(
[model.accuracy, model.cost, model.optimizer],
feed_dict = {
model.X: batch_x,
model.Y: batch_y,
model.segment_ids: batch_segments,
model.input_masks: batch_masks,
},
)
assert not np.isnan(cost)
train_loss.append(cost)
train_acc.append(acc)
pbar.set_postfix(cost = cost, accuracy = acc)
pbar = tqdm(
range(0, len(test_X), batch_size), desc = 'test minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x = test_X[i : index]
batch_y = test_Y[i : index]
batch_masks = test_masks[i : index]
batch_segments = test_segments[i : index]
batch_x = pad_sequences(batch_x, padding='post')
batch_y = pad_sequences(batch_y, padding='post')
batch_segments = pad_sequences(batch_segments, padding='post', value = 4)
batch_masks = pad_sequences(batch_masks, padding='post', value = 1)
acc, cost = sess.run(
[model.accuracy, model.cost],
feed_dict = {
model.X: batch_x,
model.Y: batch_y,
model.segment_ids: batch_segments,
model.input_masks: batch_masks,
},
)
assert not np.isnan(cost)
test_loss.append(cost)
test_acc.append(acc)
pbar.set_postfix(cost = cost, accuracy = acc)
train_loss = np.mean(train_loss)
train_acc = np.mean(train_acc)
test_loss = np.mean(test_loss)
test_acc = np.mean(test_acc)
print('time taken:', time.time() - lasttime)
print(
'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'
% (e, train_loss, train_acc, test_loss, test_acc)
)
predicted = sess.run(model.tags_seq,
feed_dict = {
model.X: [parsed_sequence],
model.segment_ids: [segment_sequence],
model.input_masks: [mask_sequence],
},
)[0]
merged = merge_sentencepiece_tokens_tagging(xlnet_sequence, [idx2tag[d] for d in predicted])
print(list(zip(merged[0], merged[1])))
def pred2label(pred):
out = []
for pred_i in pred:
out_i = []
for p in pred_i:
out_i.append(idx2tag[p])
out.append(out_i)
return out
real_Y, predict_Y = [], []
pbar = tqdm(
range(0, len(test_X), batch_size), desc = 'validation minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x = test_X[i : index]
batch_y = test_Y[i : index]
batch_masks = test_masks[i : index]
batch_segments = test_segments[i : index]
batch_x = pad_sequences(batch_x, padding='post')
batch_y = pad_sequences(batch_y, padding='post')
batch_segments = pad_sequences(batch_segments, padding='post', value = 4)
batch_masks = pad_sequences(batch_masks, padding='post', value = 1)
predicted = pred2label(sess.run(
model.tags_seq,
feed_dict = {
model.X: batch_x,
model.segment_ids: batch_segments,
model.input_masks: batch_masks,
},
))
real = pred2label(batch_y)
predict_Y.extend(predicted)
real_Y.extend(real)
temp_real_Y = []
for r in real_Y:
temp_real_Y.extend(r)
temp_predict_Y = []
for r in predict_Y:
temp_predict_Y.extend(r)
from sklearn.metrics import classification_report
print(classification_report(temp_real_Y, temp_predict_Y, digits = 5))
```
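The tokenization loop near the top of this notebook aligns word-level tags to subword pieces by keeping the real tag on a word's first piece and padding continuations with the filler tag `'X'`. A self-contained sketch of that scheme, using a mock tokenizer and hypothetical names (`align`, `tok` are not part of the original notebook):

```python
# Sketch of the subword/label alignment used above: real tag on the first
# piece of each word, filler tag 'X' on every continuation piece.
def align(tokens, labels, tokenize_fn):
    pieces, y = [], []
    for token, label in zip(tokens, labels):
        t = tokenize_fn(token)
        pieces.extend(t)
        if len(t):
            y.append(label)
            y.extend(['X'] * (len(t) - 1))
    return pieces, y

# Mock subword tokenizer: split every 3 characters.
tok = lambda w: [w[i:i + 3] for i in range(0, len(w), 3)]
pieces, y = align(['Kuala', 'Lumpur'], ['location', 'location'], tok)
print(pieces)  # ['Kua', 'la', 'Lum', 'pur']
print(y)       # ['location', 'X', 'location', 'X']
```

Because the two lists grow in lockstep, `len(pieces) == len(y)` always holds, which is exactly the invariant the `XY` function checks before appending a sentence.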
```
# Description: Plot Figure 4 (Monochromatic 2D topography experiments, varying topographic height).
# Author: André Palóczy
# E-mail: paloczy@gmail.com
# Date: March/2022
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft as npfft
from numpy.fft import fft2 as npfft2
from numpy.fft import ifft2 as npifft2
from numpy.fft import fftshift
from hdf5storage import loadmat
from glob import glob
from cmocean.cm import balance
from matplotlib.ticker import FuncFormatter
def near(x, x0, npts=1, return_index=False):
"""
USAGE
-----
xnear = near(x, x0, npts=1, return_index=False)
Finds 'npts' points (defaults to 1) in array 'x'
that are closest to a specified 'x0' point.
    If 'return_index' is True (defaults to False),
then the indices of the closest points are
returned. The indices are ordered in order of
closeness.
"""
x = list(x)
xnear = []
xidxs = []
for n in range(npts):
idx = np.nanargmin(np.abs(np.array(x)-x0))
xnear.append(x.pop(idx))
if return_index:
xidxs.append(idx)
if return_index: # Sort indices according to the proximity of wanted points.
xidxs = [xidxs[i] for i in np.argsort(xnear).tolist()]
xnear.sort()
if npts==1:
xnear = xnear[0]
if return_index:
xidxs = xidxs[0]
else:
xnear = np.array(xnear)
if return_index:
return xidxs
else:
return xnear
def normalize_psi(psi1, psi2):
maxp = 0
for k in psi1.keys():
maxpk = np.maximum(np.abs(psi1[k]).max(), np.abs(psi2[k]).max())
if maxpk>maxp:
maxp = maxpk
for k in psi1.keys():
psi1[k] = psi1[k]/maxp
psi2[k] = psi2[k]/maxp
return psi1, psi2
def get_max_psi(psi1, psi2):
maxp = 0
for k in psi1.keys():
maxpk = np.maximum(np.abs(psi1[k]).max(), np.abs(psi2[k]).max())
if maxpk>maxp:
maxp = maxpk
return maxp
def cbfmt(x, pos):
a, b = '{:.0e}'.format(x).split('e')
b = int(b)
if b==0:
return r'$0$'
else:
return r'${}\times10^{{{}}}$'.format(a, b)
def sech(x):
return 1/np.cosh(x)
# Baroclinic, upper-layer lateral and topographic energy transfer terms.
def Ethicknessh(psi1, psi2, U1, k, F, dy):
psi1hx, psi2hx = npfft(psi1, axis=1), npfft(psi2, axis=1)
eh = 1j*k*F*U1*(psi1hx*psi2hx.conj() - psi1hx.conj()*psi2hx)
return eh.sum(axis=0)*dy # On LHS.
def Emom1h(psi1, U1, k, l, delta1, dy):
u1y = npifft2(l**2*npfft2(psi1)).real
u1yhx = npfft(u1y, axis=1)
psi1hx = npfft(psi1, axis=1)
eh = 1j*k*delta1*U1*(psi1hx.conj()*u1yhx - psi1hx*u1yhx.conj())
return eh.sum(axis=0)*dy # On LHS.
def Etopogh(psi2, h, k, l, delta2, dy):
psi2h, psi2hx = npfft2(psi2), npfft(psi2, axis=1)
u = npifft2(-1j*l*psi2h).real
v = npifft2(+1j*k*psi2h).real
uhx = npfft(u, axis=1)
uhhx, vhhx = npfft(u*h, axis=1), npfft(v*h, axis=1)
eh = -delta2*(uhx.conj()*vhhx + uhx*vhhx.conj() + 1j*k*(psi2hx.conj()*uhhx - psi2hx*uhhx.conj()))
return eh.sum(axis=0)*dy # On LHS.
plt.close("all")
F1 = 75  # alternatives: 25, 100
ttyp = "cosi"
# ttyp = "hrand256Km2tk10filtnx32"
# ttyp = "ridg"
NORMALIZE_PSI = True
N = 256
tk = 10
L = 2
U0 = 1
Lj = L/10
dx = L/N
dy = dx
x = np.arange(0, N)
x, y = np.meshgrid(x*dx, x*dx - L/2)
dk = 2*np.pi/L
kp = np.concatenate((np.arange(0, N/2+1), np.arange(-N/2+1, 0)))*dk
k, l = np.meshgrid(kp, kp)
U1 = U0*sech(y/Lj)**2 # Bickley jet
Lh = L/2
# Gather fields from experiments with different topographic heights.
htdimfac = 31.8 # [m]
hts = [1, 5, 10]
htsdim = list(np.array(hts)*htdimfac)
psi1, psi2 = dict(), dict()
Ethh, Em1h, Eth = dict(), dict(), dict()
for ht in hts:
fname = "../../simulations/lin_N%d_ht%d_F1%d_%s%d.npz"%(N, ht, F1, ttyp, tk)
if "rand" in ttyp:
fname_hrand = "../../code_simulations/hrand256Km2tk10filtnx32.mat"
fname = "../../simulations/lin_N%d_ht%d_F1%d_%s.npz"%(N, ht, F1, ttyp)
d = np.load(fname)
p1, p2 = d["p1"], d["p2"]
psi1.update({"ht-"+str(ht):p1})
psi2.update({"ht-"+str(ht):p2})
# Calculate momentum flux and thickness flux spectra.
d12 = d["d12"].flatten()[0]
d1 = d12/(1 + d12)
d2 = 1/(1 + d12)
F = d1*F1
# Spectral fluxes on the RHS.
Ethhi = -Ethicknessh(p1, p2, U1, k, F, dy).real
Em1hi = -Emom1h(p1, U1, k, l, d1, dy).real
Ethhi = fftshift(Ethhi); Ethhi = Ethhi[N//2:]
Em1hi = fftshift(Em1hi); Em1hi = Em1hi[N//2:]
Ethh.update({"ht-"+str(ht):Ethhi})
Em1h.update({"ht-"+str(ht):Em1hi})
if ttyp=='cosi':
lt = tk*np.pi # cosine topography
kt = lt
h = ht*np.cos(kt*x)*np.cos(lt*y)
hy = -lt*ht*np.cos(kt*x)*np.sin(lt*y)
hx = -kt*ht*np.sin(kt*x)*np.cos(lt*y)
elif 'rand' in ttyp: # random topography
h = loadmat(fname_hrand)["h"]
h = h*ht*0.5 # ht is rms in random topography. The factor 1/2 is the rms of sin(x)sin(y).
# hy, hx = np.gradient(h, dx)
hh = npfft2(h)
hx = npifft2(1j*k*hh).real
hy = npifft2(1j*l*hh).real
elif ttyp=='ridg':
lt = tk*np.pi # ridge topography
kt = 0
h = ht*np.cos(lt*y)
hy = -lt*ht*np.sin(lt*y)
hx = 0
Ethi = -Etopogh(p2, h, k, l, d2, dy).real
Ethi = fftshift(Ethi); Ethi = Ethi[N//2:]
Eth.update({"ht-"+str(ht):Ethi})
kp = fftshift(k[0, :]); kp = kp[N//2:]
if NORMALIZE_PSI:
psi1, psi2 = normalize_psi(psi1, psi2) # Normalize psi1 and psi2.
maxpsi = 1
else:
maxpsi = get_max_psi(psi1, psi2)
kwclim = dict(vmin=-maxpsi, vmax=maxpsi)
cmap = balance
# Plot top and bottom layer streamfunctions for different topographic heights.
fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(8, 8))
ax1, ax4, ax7 = ax[0]
ax2, ax5, ax8 = ax[1]
ax3, ax6, ax9 = ax[2]
ax1.xaxis.tick_top(); ax1.xaxis.set_label_position("top")
ax4.xaxis.tick_top(); ax4.xaxis.set_label_position("top")
ax7.xaxis.tick_top(); ax7.xaxis.set_label_position("top")
fig.subplots_adjust(hspace=0.05, wspace=0.05)
ax2.tick_params(labelbottom=False, labelleft=False)
ax4.tick_params(labeltop=False, labelleft=False); ax7.tick_params(labeltop=False, labelleft=False)
ax5.tick_params(labelbottom=False, labelleft=False); ax8.tick_params(labelbottom=False, labelleft=False)
ax3.tick_params(labelbottom=False); ax6.tick_params(labelleft=False); ax9.tick_params(labelbottom=False, labelleft=False)
nlevs = 20
kwcc = dict(levels=nlevs, **kwclim, cmap=cmap)
carr = np.linspace(-maxpsi, maxpsi, num=nlevs)
cs1 = ax1.contourf(x, y, psi1["ht-%s"%hts[0]], **kwcc); ax1.axis("square"); cs1.set_array(carr)
cs2 = ax2.contourf(x, y, psi2["ht-%s"%hts[0]], **kwcc); ax2.axis("square"); cs2.set_array(carr)
cs4 = ax4.contourf(x, y, psi1["ht-%s"%hts[1]], **kwcc); ax4.axis("square"); cs4.set_array(carr)
cs5 = ax5.contourf(x, y, psi2["ht-%s"%hts[1]], **kwcc); ax5.axis("square"); cs5.set_array(carr)
cs7 = ax7.contourf(x, y, psi1["ht-%s"%hts[2]], **kwcc); ax7.axis("square"); cs7.set_array(carr)
cs8 = ax8.contourf(x, y, psi2["ht-%s"%hts[2]], **kwcc); ax8.axis("square"); cs8.set_array(carr)
maxp = []
css = [cs1, cs2, cs4, cs5, cs7, cs8]
for csi in css:
maxp.append(np.abs(csi.get_array()).max())
cs = css[near(maxp, maxpsi, return_index=True)]
caxx, caxy = 0.0, 1.05
cbaxes = ax7.inset_axes([caxx, caxy, 1.0, 0.05])
if NORMALIZE_PSI:
cb = fig.colorbar(mappable=cs, cax=cbaxes, orientation="horizontal")
cblabel = r"Normalized $\psi_1$, $\psi_2$"
else:
cb = fig.colorbar(mappable=cs, cax=cbaxes, orientation='horizontal', extend='both', format=FuncFormatter(cbfmt))
cblabel = r"$\psi_1$, $\psi_2$"
cb.ax.xaxis.set_ticks_position("top"); cb.ax.xaxis.set_label_position("top")
cb.set_ticks([-maxpsi, 0, maxpsi])
cb.set_label(cblabel, fontsize=14, fontweight="normal")
xt, yt = 0.015, 0.90
xt2, yt2 = 0.85, 0.05
kwtxt, kwtxt2 = dict(fontsize=12, fontweight="normal"), dict(fontsize=18, fontweight="normal")
ax1.text(xt, yt, r"(a) $h_0 = %d$"%hts[0], transform=ax1.transAxes, **kwtxt)
ax1.text(xt2, yt2, r"$\psi_1$", transform=ax1.transAxes, **kwtxt2)
ax2.text(xt, yt, r"(d) $h_0 = %d$"%hts[0], transform=ax2.transAxes, **kwtxt)
ax2.text(xt2, yt2, r"$\psi_2$", transform=ax2.transAxes, **kwtxt2)
ax3.text(xt, yt, r"(g)", transform=ax3.transAxes, **kwtxt)
ax4.text(xt, yt, r"(b) $h_0 = %d$"%hts[1], transform=ax4.transAxes, **kwtxt)
ax4.text(xt2, yt2, r"$\psi_1$", transform=ax4.transAxes, **kwtxt2)
ax5.text(xt, yt, r"(e) $h_0 = %d$"%hts[1], transform=ax5.transAxes, **kwtxt)
ax5.text(xt2, yt2, r"$\psi_2$", transform=ax5.transAxes, **kwtxt2)
ax6.text(xt, yt, r"(h)", transform=ax6.transAxes, **kwtxt)
ax7.text(xt, yt, r"(c) $h_0 = %d$"%hts[2], transform=ax7.transAxes, **kwtxt)
ax7.text(xt2, yt2, r"$\psi_1$", transform=ax7.transAxes, **kwtxt2)
ax8.text(xt, yt, r"(f) $h_0 = %d$"%hts[2], transform=ax8.transAxes, **kwtxt)
ax8.text(xt2, yt2, r"$\psi_2$", transform=ax8.transAxes, **kwtxt2)
ax9.text(xt, yt, r"(i)", transform=ax9.transAxes, **kwtxt)
eps = kp[1] - kp[0]
kupp = 2e2
ymin, ymax = -3e1, 3e2
# Plot spectra of thickness flux, momentum flux and topographic flux in the bottom row.
ax3.plot(kp, Ethh["ht-%s"%hts[0]], "b", label=r"$\hat{C}_T$")
ax3.plot(kp, Em1h["ht-%s"%hts[0]], "r", label=r"$\hat{C}_{M_1}$")
ax3.set_xscale("log"); ax3.set_yscale("symlog")
ax3.set_xlim(kp[0]+eps, kupp); ax3.set_ylim(ymin, ymax)
ax3.axhline(linestyle="--", color="gray")
ax6.plot(kp, Ethh["ht-%s"%hts[1]], "b")
ax6.plot(kp, Em1h["ht-%s"%hts[1]], "r")
ax6.set_xscale("log"); ax6.set_yscale("symlog")
ax6.set_xlim(kp[0]+eps, kupp); ax6.set_ylim(ymin, ymax)
ax6.axhline(linestyle="--", color="gray")
ax9.plot(kp, Ethh["ht-%s"%hts[2]], "b")
ax9.plot(kp, Em1h["ht-%s"%hts[2]], "r")
ax9.set_xscale("log"); ax9.set_yscale("symlog")
ax9.set_xlim(kp[0]+eps, kupp); ax9.set_ylim(ymin, ymax)
ax9.axhline(linestyle="--", color="gray")
if ttyp!="ridg":
ax3.plot(kp, Eth["ht-%s"%hts[0]], "g", label=r"$\hat{C}_{topo}$")
ax6.plot(kp, Eth["ht-%s"%hts[1]], "g")
ax9.plot(kp, Eth["ht-%s"%hts[2]], "g")
ax1.set_xlabel(r"$x$", fontsize=15, fontweight="black")
ax1.set_ylabel(r"$y$", fontsize=15, fontweight="black")
ax6.set_xlabel(r"Zonal wavenumber, $k$", fontsize=13, fontweight="black")
ax3.set_ylabel(r"Spectral density", fontsize=13, fontweight="black")
axsaux = (ax1, ax2, ax4, ax5, ax7, ax8)
for axaux in axsaux:
axaux.set_xlim(0, 2)
axaux.set_ylim(-1, 1)
axaux.tick_params(bottom=True, top=True, left=True, right=True)
LjonLd1 = np.sqrt(F1)/5
if LjonLd1==1:
ttl = r"$L_j = L_{d1}$"
elif LjonLd1/int(LjonLd1)!=1:
ttl = r"$L_j = $%.1f$L_{d1}$"%LjonLd1
else:
ttl = r"$L_j = $%d$L_{d1}$"%LjonLd1
ax4.set_title(ttl, fontsize=22, y=1.13)
ax3.legend(loc="best", frameon=False)
fig.savefig("fig04.png", bbox_inches="tight")
plt.show()
```
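The `near` helper defined at the top of this script boils down to repeated argmins over absolute distances: the closest point minimizes |x - x0|, and popping it and repeating yields the `npts` closest points. A self-contained sketch of that core idea (`nearest` is a hypothetical name, not part of the script):

```python
import numpy as np

# Core idea of near(): repeatedly take the point minimizing |x - x0|.
def nearest(x, x0, npts=1):
    x = list(x)
    out = []
    for _ in range(npts):
        idx = int(np.nanargmin(np.abs(np.array(x) - x0)))
        out.append(x.pop(idx))   # remove it so the next argmin finds the runner-up
    return out

print(nearest([0.0, 1.0, 2.5, 4.0], 2.2, npts=2))  # [2.5, 1.0]
```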
```
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from collections import Counter
from sklearn.metrics import classification_report, confusion_matrix
class LogReg:
def __init__(self,X,y):
self.X = X
self.y = y
self.m = len(y)
self.bgd = False
def sigmoid(self,z):
return 1/ (1 + np.exp(-z))
def cost_function(self,X,y):
h = self.sigmoid(X.dot(self.thetas.T))
m = len(y)
        J = (1/m) * (-y.dot(np.log(h)) - (1-y).dot(np.log(1-h)))
return J
def add_intercept_term(self,X):
X = np.insert(X,0,np.ones(X.shape[0:1]),axis=1).copy()
return X
def feature_scale(self,X):
X = (X - X.mean()) / (X.std())
return X
def initialise_thetas(self):
np.random.seed(42)
self.thetas = np.random.rand(self.X.shape[1])
def batch_gradient_descent(self,alpha,n_iterations):
self.cost_history = [0] * (n_iterations)
self.n_iterations = n_iterations
for i in range(n_iterations):
h = self.sigmoid(np.dot(self.X,self.thetas.T))
gradient = alpha * (1/self.m) * (h - self.y).dot(self.X)
self.thetas = self.thetas - gradient
self.cost_history[i] = self.cost_function(self.X,self.y)
return self.thetas
def fit(self,alpha=0.4,n_iterations=2000):
self.X = self.feature_scale(self.X)
# self.add_intercept_term(self.X)
self.initialise_thetas()
self.thetas = self.batch_gradient_descent(alpha,n_iterations)
def plot_cost_function(self):
plt.plot(range((self.n_iterations)),self.cost_history)
plt.xlabel('No. of iterations')
plt.ylabel('Cost Function')
plt.title('Gradient Descent Cost Function Line Plot')
plt.show()
def predict(self,X_test):
self.X_test = X_test.copy()
self.X_test = self.feature_scale(self.X_test)
h = self.sigmoid(np.dot(self.X_test,self.thetas.T))
        # Return raw probabilities; callers take an argmax across the one-vs-rest models
        return h
whole_data = pd.read_csv("Data/trac2_eng_train.csv")
le = LabelEncoder()
whole_data['Y'] = le.fit_transform(whole_data["Sub-task A"])
df = pd.read_csv("Cache/Embeddings/lda_vecs_en.csv")
df["Y"] = whole_data["Y"]
print(df.shape, whole_data.shape)
df = pd.get_dummies(df, columns=['Y'])
df.head()
X_train, X_test, Y_train, Y_test = train_test_split(df.iloc[:, :-3], df.iloc[:, -3:], random_state = 0, test_size = 0.3)
X_train, Y_train = df.iloc[:, :-3], df.iloc[:, -3:]
model0 = LogReg(X_train.values, Y_train.iloc[:, 0].values)
model1 = LogReg(X_train.values, Y_train.iloc[:, 1].values)
model2 = LogReg(X_train.values, Y_train.iloc[:, 2].values)
model0.fit(alpha=0.05, n_iterations=2000)
model1.fit(alpha=0.05, n_iterations=2000)
model2.fit(alpha=0.05,n_iterations=2000)
pred0 = model0.predict(X_train)
pred1 = model1.predict(X_train)
pred2 = model2.predict(X_train)
pred = np.array(pred0)
for i in range(Y_train.shape[0]):
m = max(pred0[i], pred1[i], pred2[i])
if(pred0[i]==m): pred[i] = 0
if(pred1[i]==m): pred[i] = 1
if(pred2[i]==m): pred[i] = 2
print(classification_report(whole_data["Y"].values, pred))
print(confusion_matrix(whole_data["Y"].values, pred))
model0.plot_cost_function()
```
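As a self-contained sanity check of the batch-gradient-descent update used above (synthetic data and hypothetical names, not the notebook's CSV inputs), the same update rule can be verified to drive the cost down and fit a linearly separable problem:

```python
import numpy as np

# Synthetic, linearly separable data with a boundary through the origin,
# so no intercept term is needed.
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

sigmoid = lambda z: 1 / (1 + np.exp(-z))
theta = np.zeros(2)
costs = []
for _ in range(500):
    h = sigmoid(X.dot(theta))
    theta -= 0.4 * (1 / len(y)) * (h - y).dot(X)   # same update as batch_gradient_descent
    # Cross-entropy cost, with a small epsilon to avoid log(0)
    costs.append(-(y.dot(np.log(h + 1e-12)) + (1 - y).dot(np.log(1 - h + 1e-12))) / len(y))

acc = ((sigmoid(X.dot(theta)) >= 0.5).astype(int) == y).mean()
print(costs[0] > costs[-1], acc > 0.9)
```

The cost should decrease monotonically for a small enough learning rate, and the learned direction should separate the two classes almost perfectly on this toy problem.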
# IWI-131 Programming
## Loops
Loops allow a set of instructions to be repeated.
There are two kinds of loops:
- Those that execute a certain number of times, known in advance.
- Those that execute while a condition holds.
_Note: The second kind can actually be used to model the first._
### The while statement
A `while` statement executes a sequence of instructions as long as a condition is **true**.
The instructions that are repeated are the ones indented to the right, inside the `while` statement.
#### Syntax
```python
while condition:
    instructions executed while the condition is true
```
#### Additional notes:
- One execution of the instructions that make up the loop is called an **iteration**.
- The condition of a `while` is evaluated before each iteration.
- Because of this, a `while` may execute **no** iterations at all, if the condition is `False` at the start.
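The last note can be checked directly; in this snippet the loop body never executes because the condition is `False` on the very first check:

```python
n = 0
iterations = 0
while n > 0:       # False from the start, so zero iterations
    n -= 1
    iterations += 1
print(iterations)  # 0
```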
#### Do we understand `while`?
* What does the following example do?
```
m = int(input("m: "))
n = int(input("n: "))
k = 0
while m > 0:
    m = m - 1
    k = k + n
print("The product of m and n is:", k)
```
- Do you understand how the product is computed?
- Why can we be sure the loop will terminate at some point?
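To answer the first question: the loop adds `n` to `k` exactly `m` times, so `k` ends up equal to `m*n`. A fixed-input version of the same computation:

```python
m, n = 4, 7
k = 0
while m > 0:
    m = m - 1
    k = k + n   # n is added once for each original unit of m
print(k)        # 28
```

The loop terminates because `m` decreases by 1 on every iteration, so it eventually reaches 0 and the condition `m > 0` becomes false.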
## Using Loops
**1a.** Write a program that computes the average of the Exam 1 grades for one section of IWI131. (Note that sections generally do not have the same number of students.)
```
cant = int(input("Enter the number of students: "))
cont = 0   # counter of students/grades entered
suma = 0   # accumulated grades
while cont < cant:
    nota = int(input("Enter a grade: "))
    suma = suma + nota
    cont = cont + 1
    print(cont)
print("The section average is:", int(round(suma/cont)))
```
**1b.** Write a program that computes the average of several students' grades. Grades are entered until the value $-1$ is entered.
```
cont = 0   # counter of grades entered
suma = 0   # accumulated grades
nota = int(input("Enter a grade: "))
while nota != -1:
    suma = suma + nota
    cont += 1   # cont = cont + 1
    nota = int(input("Enter a grade: "))
print("The average is:", int(round(suma/cont)))
```
**1c.** Write a program that computes the average of several students' grades. Grades are entered until the value $-1$ is entered.
```
flag = True   # flag indicating whether the condition holds
cont = 0      # counter of grades entered
suma = 0      # accumulated grades
while flag:   # flag == True
    nota = int(input("Enter a grade: "))
    if nota == -1:
        flag = False
    else:
        suma += nota   # suma = suma + nota
        cont += 1      # cont = cont + 1
print("The average is:", int(round(suma/cont)))
```
## Common Patterns
A pattern is a solution that can be applied to different problems or situations, with slight changes.
1. **Accumulation-by-sum pattern.**
2. **Accumulation-by-multiplication pattern.**
3. **Counting pattern.**
4. **Finding-the-largest (maximum) pattern.**
5. **Finding-the-smallest (minimum) pattern.**
### 1. Accumulation-by-sum pattern.
Write a program that takes an integer as input. The program must print the sum of the squares of the numbers from $1$ up to the value entered.
$$
1^2+2^2+\ldots+(n-1)^2+n^2
$$
```
n = int(input("Enter n: "))
suma = 0
cont = 1
while cont <= n:
    d = cont**2
    suma = suma + d
    cont += 1
print("The sum of the squared numbers is:", suma)
```
#### Structure of the accumulation-by-sum pattern:
```Python
suma = 0
while (loop_condition):
    n = ...  # compute what you want to accumulate
    suma = suma + n
```
What are the elements of this pattern?
### 2. Accumulation-by-multiplication pattern
Write a program that computes the factorial of a number $n$ given as input:
$3! = 1\cdot 2 \cdot 3$
```
n = int(input("Enter n: "))
if n < 0:
    print("The factorial is only defined for natural numbers greater than or equal to 0.")
else:
    prod = 1
    cont = 1
    while cont <= n:
        prod = prod * cont   # prod *= cont
        cont += 1            # cont = cont + 1
    print("The factorial of", n, "is:", prod)
```
#### Structure of the accumulation-by-multiplication pattern:
```Python
producto = 1
while (loop_condition):
    n = ...  # compute what you want to accumulate
    producto = producto * n
```
What are the elements of this pattern?
Do you see the differences from the accumulation-by-sum pattern?
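One difference worth highlighting: the accumulator starts at the identity of its operation, 0 for addition and 1 for multiplication. A sketch running both patterns over the numbers 1 through 5 in the same loop:

```python
n = 5
suma = 0       # identity of addition
producto = 1   # identity of multiplication
cont = 1
while cont <= n:
    suma = suma + cont
    producto = producto * cont
    cont += 1
print(suma, producto)  # 15 120
```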
### 3. Counting pattern
Write a program that asks the user to enter $n$ numbers and then reports how many even numbers were entered.
```
n = int(input("Enter n: "))
pares = 0   # counter of even numbers
cont = 0    # counter of numbers
while cont < n:
    num = int(input("Enter a number: "))
    if num % 2 == 0:   # if divisible by 2
        pares += 1     # pares = pares + 1
    print("pares =", pares)
    cont += 1          # cont = cont + 1
print("The number of even values entered is:", pares)
```
#### Structure of the counting pattern:
```Python
contador = 0
while (loop_condition):
    if (condition_to_count):
        contador = contador + 1
```
What are the elements of this pattern?
### 4. Finding-the-largest (maximum) pattern
Write a program that asks for n numbers and then prints the largest number entered.
**Option 1:** using a very small number as the initial comparison value.
```
n = int(input("Enter n: "))
i = 1
mayor = float("-inf")
while i <= n:
    mensaje = "Enter a number: "
    numero = float(input(mensaje))
    if mayor < numero:
        mayor = numero
    print("current max =", mayor)
    i += 1
print("The largest number is:", mayor)
```
**Option 2:** without using a very small number for the comparison.
```
n = int(input("Enter n: "))
i = 1
while i <= n:
    mensaje = "Enter a number: "
    numero = float(input(mensaje))
    if i == 1:
        mayor = numero
    elif mayor < numero:
        mayor = numero
    i += 1
print("The largest number is", mayor)
```
#### **Structure of the finding-the-largest pattern:**
```Python
mayor = very_small_number
while (loop_condition):
    n = ...  # determine what you want to compare
    if n > mayor:
        mayor = n
```
What are the elements of this pattern?
### 5. Finding-the-smallest (minimum) pattern
How does the previous pattern change if we now want to find the minimum?
```
n = int(input("Enter n: "))
i = 1
menor = float("inf")
while i <= n:
    mensaje = "Enter a number: "
    numero = float(input(mensaje))
    if menor > numero:
        menor = numero
    i += 1
print("The smallest number is:", menor)
```
#### Structure of the finding-the-smallest pattern:
```Python
menor = very_large_number
while (loop_condition):
    n = ...  # determine what you want to compare
    if n < menor:
        menor = n
```
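Both patterns can run inside the same loop; a sketch over a fixed list (instead of `input()`) that finds the maximum and the minimum in one pass:

```python
numeros = [3.5, -1.0, 7.2, 0.0]
mayor = float("-inf")
menor = float("inf")
i = 0
while i < len(numeros):
    if numeros[i] > mayor:
        mayor = numeros[i]
    if numeros[i] < menor:
        menor = numeros[i]
    i += 1
print(mayor, menor)  # 7.2 -1.0
```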
### Nested loops
Any kind of instruction can be placed inside a loop, including other loops.
When a loop sits inside another loop, the inner loop runs to completion on each iteration of the outer loop.
#### Example 1:
Write a program that prints a rectangle of asterisks like the following, with 4 rows of 10 asterisks each:
```
**********
**********
**********
**********
```
```
fila = 0
while fila < 4:
    columna = 0
    linea = ""
    while columna < 10:
        linea = linea + "*"
        columna += 1
    print(linea)
    fila += 1
```
The inner loop runs its 10 iterations for each iteration of the outer loop.
So, how many times in total does the instruction `linea = linea + "*"` execute?
What are all the values the variable columna takes during the program's execution?
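A sketch that instruments the nested loop to answer the first question: the inner body runs 4 × 10 = 40 times in total.

```python
total = 0
fila = 0
while fila < 4:
    columna = 0
    while columna < 10:
        total += 1    # counts executions of the inner loop body
        columna += 1
    fila += 1
print(total)  # 40
```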
#### Exercise:
Modify the previous program so that the numbers of rows and columns are inputs, making it possible to draw rectangles of different dimensions.
```
cantidad_filas = int(input("Enter the number of rows: "))
cantidad_columnas = int(input("Enter the number of columns: "))
fila = 0
while fila < cantidad_filas:
    columna = 0
    linea = ""
    while columna < cantidad_columnas:
        linea = linea + "*"
        columna += 1
    print(linea)
    fila += 1
```
#### Example 2:
Write a program that shows all possible outcomes of rolling $2$ dice.
```
i = 1
while i <= 6:
    j = 1
    while j <= 6:
        print(i, j)
        j += 1
    i += 1
```
## Exercises
### Exercise 1: Ulam's conjecture
Starting from any number (the input), it is possible to build a sequence of numbers that ends in $1$.
- If the number is even, it must be divided by $2$.
- If the number is odd, it must be multiplied by $3$ and have $1$ added.
This yields the next number of the sequence, to which the same operations are applied. The sequence ends when the number obtained through these operations is $1$.
```
t = int(input('Start: '))
while t != 1:
    print(int(t))
    if t % 2 == 0:
        t = t/2
    else:
        t = (3*t) + 1
print(int(t))
```
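The same sequence without `input()`, collected into a list and using integer division so the values stay integers; starting from 6:

```python
t = 6
seq = [t]
while t != 1:
    if t % 2 == 0:
        t = t // 2   # integer division keeps t an int
    else:
        t = 3*t + 1
    seq.append(t)
print(seq)  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```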
### Exercise 2
Write a program that determines whether a number is **prime** or **composite**.
```
n = int(input("n: "))
es_primo = True
d = 2
while d < n and es_primo:
    if n % d == 0:
        es_primo = False
    d = d + 1
if es_primo:
    print("It is prime")
else:
    print("It is composite")
```
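The same logic wrapped in a function (`es_primo_fn` is a hypothetical name, used here only so the test can run over a range of values):

```python
def es_primo_fn(n):
    es_primo = True
    d = 2
    while d < n and es_primo:
        if n % d == 0:
            es_primo = False
        d = d + 1
    return es_primo

print([m for m in range(2, 12) if es_primo_fn(m)])  # [2, 3, 5, 7, 11]
```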
### Exercise 3
Using the program designed above to determine whether a number is prime or composite, perform a trace for the following values:
* 5
<table style="font-size: 1em; float:center;">
<thead>
<td style="border-right: 1px solid black;"><b>n</b></td>
<td style="border-right: 1px solid black;"><b>es_primo</b></td>
<td><b>d</b></td>
</thead>
<tr>
<td style="border-right: 1px solid black;">5</td>
<td style="border-right: 1px solid black;"> </td>
<td> </td>
</tr>
<tr>
<td style="border-right: 1px solid black;"> </td>
<td style="border-right: 1px solid black;">True</td>
<td> </td>
</tr>
<tr>
<td style="border-right: 1px solid black;"> </td>
<td style="border-right: 1px solid black;"> </td>
<td>2</td>
</tr>
<tr>
<td style="border-right: 1px solid black;"> </td>
<td style="border-right: 1px solid black;"> </td>
<td>3</td>
</tr>
<tr>
<td style="border-right: 1px solid black;"> </td>
<td style="border-right: 1px solid black;"> </td>
<td>4</td>
</tr>
<tr>
<td style="border-right: 1px solid black;"> </td>
<td style="border-right: 1px solid black;"> </td>
<td>5</td>
</tr>
</table>
<table style="font-size: 1em; float:center;">
<tr>
<td style="text-align:left" align="left"><b>Pantalla:</b></td>
</tr>
<tr>
<td style="text-align:left">Es primo<br/> <br/></td>
</tr>
</table>
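The desk-check above can also be generated programmatically (a sketch; the variable names follow Exercise 2):

```python
n = 5
es_primo = True
d = 2
trace = [('n', n), ('es_primo', es_primo), ('d', d)]
while d < n and es_primo:
    if n % d == 0:
        es_primo = False
    d = d + 1
    trace.append(('d', d))

print(trace)  # d takes the values 2, 3, 4, 5, matching the table
```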
### Prepare maps for analysis
#### Hypotheses:
Parametric effect of gain:
1. Positive effect in ventromedial PFC - for the equal indifference group
2. Positive effect in ventromedial PFC - for the equal range group
3. Positive effect in ventral striatum - for the equal indifference group
4. Positive effect in ventral striatum - for the equal range group
Parametric effect of loss:
- 5: Negative effect in VMPFC - for the equal indifference group
- 6: Negative effect in VMPFC - for the equal range group
- 7: Positive effect in amygdala - for the equal indifference group
- 8: Positive effect in amygdala - for the equal range group
Equal range vs. equal indifference:
- 9: Greater positive response to losses in amygdala for equal range condition vs. equal indifference condition.
#### Notes on data
Data and hypothesis decisions were obtained from 70 teams. Of these, the maps for 16 teams were excluded from further analysis due to the following problems:
##### Bad normalization/resampling:
- SVXXBBHN_98BT: rejected due to very bad normalization
- UGXECSGG_L1A8: resampled image much smaller than template brain
- VDOVGCPL_4SZ2: resampled image offset from template brain
- WBTVHMSS_4TQ6: resampled image offset and too large compared to template
##### Missing thresholded images:
- WIYTWEEA_2T7P: missing thresholded images
##### Used surface-based analysis (only provided data for cortical ribbon):
- DRVQPPNO_1K0E
- UMSEIVRB_X1Z4
##### Missing data:
- YWGVKXBZ_6FH5: missing much of the central brain
- ACMKXFTG_P5F3: rejected due to large amounts of missing data across brain
- NNUNPWLT_0H5E: rejected due to large amount of missing brain in center
- CMJIFMMR_L3V8: rejected due to large amount of missing brain in center
##### Bad histograms:
Visual examination of histograms of the unthresholded images showed a number that were clearly not distributed as z/t statistics:
- RIIVGRDK_E3B6: very long tail, with substantial inflation at a value just below zero
- JAWCZRDS_V55J: very small values
- EPBGXICO_I07H: bimodal, with second distribution centered around 2.5
- UXJVRRRR_0C7Q: appears to be a p-value distribution, with slight excursions below and above zero
- OXLAIRNK_Q58J: bimodal, zero-inflated with a second distribution centered around 5
```
# all of the functionality is now embedded in the Narps() class within narps.py
import os,sys,glob,warnings
import matplotlib.pyplot as plt
import numpy,pandas
import nilearn.input_data
import nilearn.plotting  # used below for the map overlay figures
from narps import Narps
from utils import get_masked_data,get_merged_metadata_decisions
overwrite = False
make_reports = False # turn on to generate individual maps - this takes a long time!
# set an environment variable called NARPS_BASEDIR with location of base directory
if 'NARPS_BASEDIR' in os.environ:
basedir = os.environ['NARPS_BASEDIR']
else:
basedir = '/data'
assert os.path.exists(basedir)
metadata_dir = os.path.join(basedir,'metadata')
if not os.path.exists(metadata_dir):
os.mkdir(metadata_dir)
# instantiate main Narps class, which loads data
# to rerun everything, set overwrite to True
narps = Narps(basedir,overwrite=overwrite)
output_dir = narps.dirs.dirs['output']
mask_img = narps.dirs.MNI_mask
# uncomment to load cached data
# narps.load_data()
```
### Create binarized thresholded masks
Load the thresholded maps and binarize their absolute value (to detect either positive or negative signals). Save the output to the *thresh_mask_orig* directory (in original space).
```
narps.get_binarized_thresh_masks()
```
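The binarization step can be illustrated on a plain array (a sketch of the operation described above, not the `narps` implementation, which operates on NIfTI images):

```python
import numpy as np

def binarize_thresh(data):
    # nonzero in either direction (positive or negative signal) becomes 1
    return (np.abs(data) > 0).astype(np.uint8)

binarize_thresh(np.array([-2.0, 0.0, 3.5]))  # -> array([1, 0, 1], dtype=uint8)
```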
### Resample images
The images were supposed to be submitted in MNI space. Here we use nilearn.image.resample_to_img() to resample the images into the FSL MNI space (91x109x91, 2mm isotropic). For unthresholded maps, we use continuous interpolation; for thresholded maps, we use linear interpolation and then threshold at 0.5. This avoids some errors that occurred using nearest-neighbor interpolation, where there were empty voxels in some slices. Save to the *resampled* directory.
```
narps.get_resampled_images()
```
### Check image values
Compute number of NA and zero voxels for each image, and save to image_metadata_df.csv.
```
image_value_df = narps.check_image_values()
```
### Combine resampled images for thresholded data
Combine all images into a single concatenated file for each hypothesis. Save to *thresh\_concat\_resampled*.
```
_ = narps.create_concat_images(datatype='resampled',imgtypes = ['thresh'],overwrite=overwrite)
```
### Create overlap images for thresholded maps
```
narps.create_thresh_overlap_images()
```
### Rectify maps
Rectify the unthresholded maps if they used positive values for negative activations - so that all unthresholded maps reflect positive values for the hypothesis in question. This is based on metadata provided by each team. Save to *rectified*
```
narps.create_rectified_images()
```
### Create concatenated rectified unthresholded images
```
# concat_rectified = narps.create_concat_images(datatype='rectified',imgtypes = ['unthresh'],overwrite=True)
```
### Convert all unthresholded images to Z stat images
Use metadata reported by teams (contained in NARPS_analysis_teams_members.csv) which specifies what kind of unthresholded statistical images were submitted. For those who reported T values, convert those to Z values using Hughett's transform (based on https://github.com/vsoch/TtoZ/blob/master/TtoZ/scripts.py)
```
narps.convert_to_zscores()
```
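The core of the T-to-Z conversion can be sketched with scipy (an illustration of the transform, not the `narps` implementation; using the survival function on the positive side preserves numerical precision for large |t|):

```python
import numpy as np
from scipy import stats

def t_to_z(t_values, df):
    # map each t statistic to the z statistic with the same tail probability
    t_values = np.asarray(t_values, dtype=float)
    z = np.empty_like(t_values)
    pos = t_values >= 0
    # survival functions on the positive side avoid the catastrophic
    # cancellation that norm.ppf(t.cdf(...)) would suffer for large t
    z[pos] = stats.norm.isf(stats.t.sf(t_values[pos], df))
    z[~pos] = stats.norm.ppf(stats.t.cdf(t_values[~pos], df))
    return z
```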
### Create concatenated zstat images
```
concat_zstat = narps.create_concat_images(datatype='zstat',imgtypes = ['unthresh'],overwrite=True)
```
### Compute image statistics
Compute range and standard deviation maps across teams for zstat maps.
```
narps.compute_image_stats(overwrite=True)
```
### Estimate smoothness
Use FSL's SmoothEstimate (via nipype) to estimate smoothness of each Z map.
```
smoothness_df = narps.estimate_smoothness(overwrite=True)
```
### Save outputs
Serialize the important information from the class and save to *narps_prepare_maps.pkl*
```
output = narps.write_data()
```
## Make reports
Create some diagnostic reports
### Make report showing all orig maps with threshold overlays
This report includes all maps for which data were available, including those that were excluded
```
if make_reports:
cut_coords = [-24,-10,4,18,32,52,64]
bins = numpy.linspace(-5,5)
figdir = os.path.join(basedir,'figures/orig_map_overlays')
if not os.path.exists(figdir):
os.mkdir(figdir)
for hyp in [1,2,5,6,7,8,9]:
outfile = os.path.join(figdir,'hyp%d_orig_map_overlays.pdf'%hyp)
if not os.path.exists(outfile) or overwrite:
print('making map overlay figure for hyp',hyp)
# find all maps
hmaps = glob.glob(os.path.join(output_dir,'orig/*_*'))
collection_ids = [os.path.basename(i) for i in hmaps]
collection_ids.sort()
fig, ax = plt.subplots(len(collection_ids),2,figsize=(len(collection_ids),140),gridspec_kw={'width_ratios': [2, 1]})
ctr=0
for collection_id in collection_ids:
teamID = collection_id.split('_')[1]
unthresh_img=os.path.join(output_dir,'orig/%s/hypo%d_unthresh.nii.gz'%(collection_id,hyp))
thresh_img=os.path.join(output_dir,'thresh_mask_orig/%s/hypo%d_thresh.nii.gz'%(collection_id,hyp))
if not (os.path.exists(thresh_img) or os.path.exists(unthresh_img)):
print('skipping',teamID)
continue
if not teamID in narps.complete_image_sets:
imagetitle = '%s (excluded)'%teamID
else:
imagetitle = teamID
display = nilearn.plotting.plot_stat_map(unthresh_img, display_mode="z",
colorbar=True, title=imagetitle,
cut_coords = cut_coords,axes = ax[ctr,0],cmap='gray')
with warnings.catch_warnings(): # ignore levels warning
warnings.simplefilter("ignore")
display.add_contours(thresh_img, filled=False, alpha=0.7, levels=[0.5], colors='b')
masker=nilearn.input_data.NiftiMasker(mask_img=thresh_img)
maskdata = masker.fit_transform(unthresh_img)
if numpy.sum(maskdata)>0: # check for empty mask
hist_result=ax[ctr,1].hist(maskdata,bins=bins)
ctr+=1
plt.savefig(outfile)
plt.close(fig)
```
#### Create histograms for in-mask values in unthresholded images
These are only created for the images that were successfully registered and rectified.
```
if make_reports:
figdir = os.path.join(basedir,'figures/unthresh_histograms')
if not os.path.exists(figdir):
os.mkdir(figdir)
for hyp in [1,2,5,6,7,8,9]:
outfile = os.path.join(figdir,'hyp%d_unthresh_histogram.pdf'%hyp)
if not os.path.exists(outfile) or overwrite:
print('making figure for hyp',hyp)
unthresh_data,labels=get_masked_data(hyp,mask_img,output_dir,imgtype='unthresh',dataset='rectified')
fig, ax = plt.subplots(int(numpy.ceil(len(labels)/3)),3,figsize=(16,50))
# make three columns - these are row and column counters
ctr_x=0
ctr_y=0
for i,l in enumerate(labels):
ax[ctr_x,ctr_y].hist(unthresh_data[i,:],100)
ax[ctr_x,ctr_y].set_title(l)
ctr_y+=1
if ctr_y>2:
ctr_y=0
ctr_x+=1
plt.tight_layout()
plt.savefig(outfile)
```
### Wide & Deep Model
Can be used for both classification and regression (Google uses it, for example, as the recommendation algorithm in the Google Play app).
### 1 sparse feature vector
Sparse representation is used when feature vectors are expected to have a large percentage of zeros in them, as opposed to dense vectors. <br>
**For Example:** <br>
Subject = {computer_science, culture, math, etc} <br>
**One-Hot:** <br>
Subject = [ 1, 0, 0, 0 ] one represents computer_science<br>
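In code, the one-hot encoding above looks like this (a minimal sketch):

```python
import numpy as np

subjects = ['computer_science', 'culture', 'math', 'etc']

def one_hot(subject):
    # sparse representation: all zeros except a single 1
    vec = np.zeros(len(subjects))
    vec[subjects.index(subject)] = 1.0
    return vec

one_hot('computer_science')  # -> array([1., 0., 0., 0.])
```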
### 2 Feature Multiplication
You can combine such feature vectors with other information and represent them together as a matrix. <br>
**+** very efficient <br>
**-** the feature crosses must be designed manually <br>
**-** prone to overfitting: every feature is multiplied with every other feature, so the model effectively "memorizes" all of the information
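A feature cross of two one-hot vectors can be sketched with an outer product; the number of combined features grows multiplicatively, which is why crosses must be designed by hand and can overfit:

```python
import numpy as np

a = np.array([1, 0])     # one-hot feature with 2 values (hypothetical)
b = np.array([0, 1, 0])  # one-hot feature with 3 values (hypothetical)

# every pairwise product becomes a new sparse feature: 2 * 3 = 6 of them
cross = np.outer(a, b).ravel()
cross  # -> array([0, 1, 0, 0, 0, 0])
```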
### 3 Dense feature vector
For the same example, Subject = {computer_science, culture, math, etc}, you can also represent the information as dense feature vectors, which makes it possible to measure the distance between them:
<br>
computer_science = [ 0.3, 0.25, 0.1, 0.4 ]
<br>
culture = [ 0.5, 0.2, 0.2, 0.1 ]
<br>
math = [ 0.33, 0.35, 0.1, 0.2 ]
<br>
etc = [ 0.4, 0.15, 0.7, 0.4 ]
<br> <br>
**3.1 Word2Vec**
<br>
Word2Vec uses exactly this approach to calculate the similarity of words. As a result we get:
<br>
man - woman ≈ king - queen
<br><br>
**+** it also takes the meaning of words into consideration <br>
**+** generalizes to information that did not appear in the training phase <br>
**+** less manual work <br>
**-** underfitting: for example, it may recommend something you don't really want
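With the dense vectors above, similarity is directly computable (a sketch using cosine similarity; the numbers are just the illustrative values from the text):

```python
import numpy as np

vecs = {
    'computer_science': np.array([0.3, 0.25, 0.1, 0.4]),
    'culture':          np.array([0.5, 0.2, 0.2, 0.1]),
    'math':             np.array([0.33, 0.35, 0.1, 0.2]),
}

def cosine(u, v):
    # cosine similarity: 1.0 for identical directions, smaller when further apart
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# with these values, computer_science turns out closer to math than to culture
cosine(vecs['computer_science'], vecs['math'])
```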
### 4 Wide Deep Model

### 5 Use the wide & deep model to predict California housing prices
```
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import os
import sys
import time
from tensorflow import keras
from tensorflow.python.keras.callbacks import History
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.datasets import fetch_california_housing
# Loading datasets
housing = fetch_california_housing()
# Split data into train, validation and test sets
# by default train_test_split splits the data 3:1 into train and test (test_size = 0.25)
x_train_all, x_test, y_train_all, y_test = train_test_split(housing.data, housing.target, random_state = 7)
x_train, x_valid, y_train, y_valid = train_test_split(x_train_all, y_train_all, random_state = 11)
print(x_train.shape, y_train.shape)
print(x_valid.shape, y_valid.shape)
print(x_test.shape, y_test.shape)
# Data normalization
# before normalization
print(np.max(x_train), np.min(x_train))
# perform normalization
scaler = StandardScaler()
# standardize each feature to zero mean and unit variance;
# fit on the training data only, then apply the same transform
# to the validation and test sets
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_valid)
x_test_scaled = scaler.transform(x_test)
# after normalization
# print(np.max(x_train_scaled), np.min(x_train_scaled))
# a wide & deep model usually takes multiple inputs. For a simple example we
# split one dataset into two overlapping subsets:
# the wide part takes the first 5 columns, the deep part the last 6
input_wide = keras.layers.Input(shape=[5]) # wide part
input_deep = keras.layers.Input(shape=[6]) # begin building the deep part
hidden1 = keras.layers.Dense(30, activation='relu')(input_deep)
hidden2 = keras.layers.Dense(30, activation='relu')(hidden1)
# concatenate the two parts
concat = keras.layers.concatenate([input_wide, hidden2])
output = keras.layers.Dense(1)(concat)
# the same as multi-input, it is also possible to have a multi-output net
'''
output2 = keras.layers.Dense(1)(hidden2)
model = keras.models.Model(inputs = [input_wide, input_deep],
outputs = [output, output2])
'''
model = keras.models.Model(inputs = [input_wide, input_deep],
outputs = output)
model.summary()
# mean_squared_error makes this a regression model
# (note: "accuracy" is not a meaningful metric for regression)
model.compile(loss = "mean_squared_error", optimizer = "sgd", metrics = ["accuracy"])
callbacks = [
keras.callbacks.EarlyStopping(patience = 5, min_delta = 1e-2)
]
# simulate input data as wide and deep
x_train_scaled_wide = x_train_scaled[:, :5]
x_train_scaled_deep = x_train_scaled[:, 2:]
x_test_scaled_wide = x_test_scaled[:, :5]
x_test_scaled_deep = x_test_scaled[:, 2:]
x_valid_scaled_wide = x_valid_scaled[:, :5]
x_valid_scaled_deep = x_valid_scaled[:, 2:]
history = model.fit([x_train_scaled_wide, x_train_scaled_deep],
y_train,
validation_data=([x_valid_scaled_wide, x_valid_scaled_deep], y_valid),
epochs = 100,
callbacks = callbacks)
def plot_learning_curves(history: History):
pd.DataFrame(history.history).plot(figsize = (8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
plot_learning_curves(history)
test_loss, test_acc = model.evaluate([x_test_scaled_wide, x_test_scaled_deep], y_test)
# predicted house values (regression outputs)
predictions = model.predict([x_test_scaled_wide, x_test_scaled_deep])
index = 40
for indx in range(index):
print(y_test[indx], predictions[indx])
```
# Using `pybind11`
The `pybind11` package provides an elegant way to wrap C++ code for Python, including automatic conversions for `numpy` arrays and the C++ `Eigen` linear algebra library. Used with the `cppimport` package, this provides a very nice workflow for integrating C++ and Python:
- Edit C++ code
- Run Python code
```bash
! pip install pybind11
! pip install cppimport
```
Clone the Eigen library if necessary - no installation is required, as Eigen is a header-only library.
```bash
! git clone https://github.com/RLovelett/eigen.git
```
## Resources
- [`pybind11`](http://pybind11.readthedocs.io/en/latest/)
- [`cppimport`](https://github.com/tbenthompson/cppimport)
- [`Eigen`](http://eigen.tuxfamily.org)
## Example 1 - Basic usage
```
%%file ex1.cpp
<%
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
namespace py = pybind11;
PYBIND11_MODULE(ex1, m) {
m.def("add", [](int a, int b) { return a + b; });
m.def("mult", [](int a, int b) { return a * b; });
}
import cppimport
ex1 = cppimport.imp("ex1")
ex1.add(3,4)
from ex1 import mult
mult(3,4)
ls ex1*so
```
## Example 2 - Adding doc and named/default arguments
```
%%file ex2.cpp
<%
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
namespace py = pybind11;
using namespace pybind11::literals;
PYBIND11_MODULE(ex2, m) {
m.def("add",
[](int a, int b) { return a + b; },
"Add two integers.",
py::arg("a") = 3,
py::arg("b") = 4);
m.def("mult",
[](int a, int b) { return a * b; },
"Multiply two integers.",
"a"_a=3,
"b"_a=4);
}
import cppimport
ex2 = cppimport.imp("ex2")
help(ex1.add)
help(ex2.add)
```
## Example 3 - Split into separate compilation units for efficient compilation
```
%%file funcs1.cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;
int add(int a, int b) {
return a + b;
}
void init_f1(py::module &m) {
m.def("add", &add);
}
%%file funcs2.cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;
int mult(int a, int b) {
    return a * b;
}
void init_f2(py::module &m) {
m.def("mult", &mult);
}
%%file ex3.cpp
<%
setup_pybind11(cfg)
cfg['sources'] = ['funcs1.cpp', 'funcs2.cpp']
%>
#include <pybind11/pybind11.h>
namespace py = pybind11;
void init_f1(py::module &m);
void init_f2(py::module &m);
PYBIND11_MODULE(ex3, m) {
init_f1(m);
init_f2(m);
}
import cppimport
ex3 = cppimport.imp("ex3")
ex3.add(3,4), ex3.mult(3, 4)
```
## Example 4 - Using setup.py to create shared libraries
```
%%file funcs.hpp
#pragma once
int add(int a, int b);
int mult(int a, int b);
%%file funcs.cpp
#include "funcs.hpp"
int add(int a, int b) {
return a + b;
}
int mult(int a, int b) {
return a * b;
}
%%file ex4.cpp
#include "funcs.hpp"
#include <pybind11/pybind11.h>
namespace py = pybind11;
PYBIND11_MODULE(ex4, m) {
m.def("add", &add);
m.def("mult", &mult);
}
%%file setup.py
import os, sys
from distutils.core import setup, Extension
from distutils import sysconfig
cpp_args = ['-std=c++14']
ext_modules = [
Extension(
'ex4',
['funcs.cpp', 'ex4.cpp'],
include_dirs=['pybind11/include'],
language='c++',
extra_compile_args = cpp_args,
),
]
setup(
name='ex4',
version='0.0.1',
author='Cliburn Chan',
author_email='cliburn.chan@duke.edu',
description='Example',
ext_modules=ext_modules,
)
%%bash
python3 setup.py build_ext -i
import ex4
ex4.add(3,4), ex4.mult(3,4)
```
## Example 5 - Using STL containers
```
%%file ex5.cpp
<%
setup_pybind11(cfg)
%>
#include "funcs.hpp"
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <vector>
namespace py = pybind11;
double vsum(const std::vector<double>& vs) {
double res = 0;
for (const auto& i: vs) {
res += i;
}
return res;
}
std::vector<int> range(int start, int stop, int step) {
std::vector<int> res;
for (int i=start; i<stop; i+=step) {
res.push_back(i);
}
return res;
}
PYBIND11_MODULE(ex5, m) {
m.def("vsum", &vsum);
m.def("range", &range);
}
import cppimport
ex5 = cppimport.imp("ex5")
ex5.vsum(range(10))
ex5.range(1, 10, 2)
```
## Using `cppimport`
The `cppimport` package allows you to specify several options. See [Github page](https://github.com/tbenthompson/cppimport)
### Use of `cppimport.imp`
Note that `cppimport.imp` only needs to be called to build the shared library. Once it is called, the shared library is created and can be used directly. Any updates to the C++ files will be detected by `cppimport`, which will automatically trigger a rebuild.
## Example 6: Vectorizing functions for use with `numpy` arrays
Example showing how to vectorize a `square` function. Note that from here on, we don't bother to use separate header and implementation files for these code snippets, and just write them together with the wrapping code in a `code.cpp` file. This means that with `cppimport`, there are only two files that we actually code for, a C++ `code.cpp` file and a python test file.
```
%%file ex6.cpp
<%
cfg['compiler_args'] = ['-std=c++14']
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
double square(double x) {
return x * x;
}
PYBIND11_MODULE(ex6, m) {
m.doc() = "pybind11 example plugin";
m.def("square", py::vectorize(square), "A vectroized square function.");
}
import cppimport
ex6 = cppimport.imp("ex6")
ex6.square([1,2,3])
import ex6
ex6.square([2,4,6])
```
## Example 7: Using `numpy` arrays as function arguments and return values
Example showing how to pass `numpy` arrays in and out of functions. These `numpy` array arguments can either be generic `py:array` or typed `py:array_t<double>`. The properties of the `numpy` array can be obtained by calling its `request` method. This returns a `struct` of the following form:
```c++
struct buffer_info {
void *ptr;
size_t itemsize;
std::string format;
int ndim;
std::vector<size_t> shape;
std::vector<size_t> strides;
};
```
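The same metadata is visible on the Python side through numpy's array attributes, which can help when debugging shape/stride mismatches (assuming a default C-contiguous float64 array):

```python
import numpy as np

xs = np.arange(12, dtype=np.float64).reshape(3, 4)
# these correspond field-by-field to pybind11's buffer_info:
print(xs.itemsize)  # 8  (bytes per double)
print(xs.ndim)      # 2
print(xs.shape)     # (3, 4)
print(xs.strides)   # (32, 8)  bytes to step along each dimension
```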
Here is C++ code for two functions - the function `twice` shows how to change a passed in `numpy` array in-place using pointers; the function `sum` shows how to sum the elements of a `numpy` array. By taking advantage of the information in `buffer_info`, the code will work for arbitrary `n-d` arrays.
```
%%file ex7.cpp
<%
cfg['compiler_args'] = ['-std=c++11']
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
// Passing in an array of doubles
void twice(py::array_t<double> xs) {
py::buffer_info info = xs.request();
auto ptr = static_cast<double *>(info.ptr);
int n = 1;
for (auto r: info.shape) {
n *= r;
}
for (int i = 0; i <n; i++) {
*ptr++ *= 2;
}
}
// Passing in a generic array
double sum(py::array xs) {
py::buffer_info info = xs.request();
auto ptr = static_cast<double *>(info.ptr);
int n = 1;
for (auto r: info.shape) {
n *= r;
}
double s = 0.0;
for (int i = 0; i <n; i++) {
s += *ptr++;
}
return s;
}
PYBIND11_MODULE(ex7, m) {
m.doc() = "auto-compiled c++ extension";
m.def("sum", &sum);
m.def("twice", &twice);
}
%%file test_code.py
import cppimport
import numpy as np
ex7 = cppimport.imp("ex7")
if __name__ == '__main__':
xs = np.arange(12).reshape(3,4).astype('float')
print(xs)
print("np :", xs.sum())
print("cpp:", ex7.sum(xs))
print()
ex7.twice(xs)
print(xs)
%%bash
python test_code.py
```
## Example 8: More on working with `numpy` arrays (optional)
This example shows how to use array access for `numpy` arrays within the C++ function. It is taken from the `pybind11` documentation, but fixes a small bug in the official version. As noted in the documentation, the function would be more easily coded using `py::vectorize`.
```
%%file ex8.cpp
<%
cfg['compiler_args'] = ['-std=c++11']
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
py::array_t<double> add_arrays(py::array_t<double> input1, py::array_t<double> input2) {
auto buf1 = input1.request(), buf2 = input2.request();
if (buf1.ndim != 1 || buf2.ndim != 1)
throw std::runtime_error("Number of dimensions must be one");
if (buf1.shape[0] != buf2.shape[0])
throw std::runtime_error("Input shapes must match");
auto result = py::array(py::buffer_info(
nullptr, /* Pointer to data (nullptr -> ask NumPy to allocate!) */
sizeof(double), /* Size of one item */
py::format_descriptor<double>::value, /* Buffer format */
buf1.ndim, /* How many dimensions? */
{ buf1.shape[0] }, /* Number of elements for each dimension */
{ sizeof(double) } /* Strides for each dimension */
));
auto buf3 = result.request();
double *ptr1 = (double *) buf1.ptr,
*ptr2 = (double *) buf2.ptr,
*ptr3 = (double *) buf3.ptr;
for (size_t idx = 0; idx < buf1.shape[0]; idx++)
ptr3[idx] = ptr1[idx] + ptr2[idx];
return result;
}
PYBIND11_MODULE(ex8, m) {
m.def("add_arrays", &add_arrays, "Add two NumPy arrays");
}
import cppimport
import numpy as np
code = cppimport.imp("ex8")
xs = np.arange(12)
print(xs)
print(code.add_arrays(xs, xs))
```
## Example 9: Using the C++ `eigen` library to calculate matrix inverse and determinant
Example showing how `Eigen` vectors and matrices can be passed in and out of C++ functions. Note that `Eigen` arrays are automatically converted to/from `numpy` arrays simply by including the `pybind/eigen.h` header. Because of this, it is probably simplest in most cases to work with `Eigen` vectors and matrices rather than `py::buffer` or `py::array` where `py::vectorize` is insufficient.
**Note**: When working with matrices, you can make code using `eigen` more efficient by ensuring that the eigen Matrix and numpy array have the same data types and storage layout, and using the Eigen::Ref class to pass in arguments. By default, numpy stores data in row major format while Eigen stores data in column major format, and this incompatibility triggers a copy which can be expensive for large matrices. There are basically 3 ways to make pass by reference work:
1. Use Eigen reference with arbitrary storage order
`Eigen::Ref<MatrixType, 0, Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic>>`
2. Use Eigen row order matrices
`Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>`
3. Create numpy arrays with column order
`np.array(data, order='F')`
This is an advanced topic that you can explore in the [docs](https://pybind11.readthedocs.io/en/stable/advanced/cast/eigen.html?highlight=eigen#pass-by-reference).
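The third option can be checked from Python (a small sketch; `order='F'` produces the column-major layout that Eigen expects by default, so no conversion copy is needed):

```python
import numpy as np

data = [[1.0, 2.0], [3.0, 4.0]]
a_c = np.array(data)             # row-major (C order), numpy's default
a_f = np.array(data, order='F')  # column-major, matching Eigen's default

# same values, different memory layout
print(a_c.flags['C_CONTIGUOUS'], a_f.flags['F_CONTIGUOUS'])  # True True
```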
```
%%file ex9.cpp
<%
cfg['compiler_args'] = ['-std=c++11']
cfg['include_dirs'] = ['eigen']
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/eigen.h>
#include <Eigen/LU>
namespace py = pybind11;
// convenient matrix indexing comes for free
double get(Eigen::MatrixXd xs, int i, int j) {
return xs(i, j);
}
// takes numpy array as input and returns double
double det(Eigen::MatrixXd xs) {
return xs.determinant();
}
// takes numpy array as input and returns another numpy array
Eigen::MatrixXd inv(Eigen::MatrixXd xs) {
return xs.inverse();
}
PYBIND11_MODULE(ex9, m) {
m.doc() = "auto-compiled c++ extension";
m.def("inv", &inv);
m.def("det", &det);
}
import cppimport
import numpy as np
code = cppimport.imp("ex9")
A = np.array([[1,2,1],
[2,1,0],
[-1,1,2]])
print(A)
print(code.det(A))
print(code.inv(A))
```
## Example 10: Using `pybind11` with `openmp` (optional)
Here is an example of using OpenMP to integrate the value of $\pi$ written using `pybind11`.
```
%%file ex10.cpp
/*
<%
cfg['compiler_args'] = ['-std=c++11', '-fopenmp']
cfg['linker_args'] = ['-lgomp']
setup_pybind11(cfg)
%>
*/
#include <cmath>
#include <omp.h>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
// Passing in an array of doubles
void twice(py::array_t<double> xs) {
py::gil_scoped_acquire acquire;
py::buffer_info info = xs.request();
auto ptr = static_cast<double *>(info.ptr);
int n = 1;
for (auto r: info.shape) {
n *= r;
}
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        ptr[i] *= 2;  // index directly; incrementing a shared pointer would be a data race
    }
}
PYBIND11_MODULE(ex10, m) {
m.doc() = "auto-compiled c++ extension";
m.def("twice", [](py::array_t<double> xs) {
/* Release GIL before calling into C++ code */
py::gil_scoped_release release;
return twice(xs);
});
}
import cppimport
import numpy as np
code = cppimport.imp("ex10")
xs = np.arange(10).astype('double')
code.twice(xs)
xs
```
```
# default_exp models.TSiTPlus
```
# TSiT
> This is a PyTorch implementation created by Ignacio Oguiza (timeseriesAI@gmail.com) based on ViT (Vision Transformer)
Reference:
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2020).
An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
This implementation is a modified version of Vision Transformer that is part of the great timm library
(https://github.com/rwightman/pytorch-image-models/blob/72b227dcf57c0c62291673b96bdc06576bb90457/timm/models/vision_transformer.py)
```
#export
from tsai.imports import *
from tsai.models.utils import *
from tsai.models.layers import *
from typing import Callable
#export
class _TSiTEncoderLayer(nn.Module):
def __init__(self, d_model:int, n_heads:int, attn_dropout:float=0., dropout:float=0, drop_path_rate:float=0.,
mlp_ratio:int=1, qkv_bias:bool=True, act:str='relu', pre_norm:bool=False):
super().__init__()
self.mha = MultiheadAttention(d_model, n_heads, attn_dropout=attn_dropout, proj_dropout=dropout, qkv_bias=qkv_bias)
self.attn_norm = nn.LayerNorm(d_model)
self.pwff = PositionwiseFeedForward(d_model, dropout=dropout, act=act, mlp_ratio=mlp_ratio)
self.ff_norm = nn.LayerNorm(d_model)
self.drop_path = DropPath(drop_path_rate) if drop_path_rate != 0 else nn.Identity()
self.pre_norm = pre_norm
def forward(self, x):
if self.pre_norm:
x = self.drop_path(self.mha(self.attn_norm(x))[0]) + x
x = self.drop_path(self.pwff(self.ff_norm(x))) + x
else:
x = self.attn_norm(self.drop_path(self.mha(x)[0]) + x)
x = self.ff_norm(self.drop_path(self.pwff(x)) + x)
return x
# export
class _TSiTEncoder(nn.Module):
def __init__(self, d_model, n_heads, depth:int=6, attn_dropout:float=0., dropout:float=0, drop_path_rate:float=0.,
mlp_ratio:int=1, qkv_bias:bool=True, act:str='relu', pre_norm:bool=False):
super().__init__()
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)]
layers = []
for i in range(depth):
layer = _TSiTEncoderLayer(d_model, n_heads, attn_dropout=attn_dropout, dropout=dropout, drop_path_rate=dpr[i],
mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, act=act, pre_norm=pre_norm)
layers.append(layer)
self.encoder = nn.Sequential(*layers)
self.norm = nn.LayerNorm(d_model) if pre_norm else nn.Identity()
def forward(self, x):
x = self.encoder(x)
x = self.norm(x)
return x
# export
class _TSiTBackbone(Module):
def __init__(self, c_in:int, seq_len:int, depth:int=6, d_model:int=128, n_heads:int=16, d_head:Optional[int]=None, act:str='relu', d_ff:int=256,
qkv_bias:bool=True, attn_dropout:float=0., dropout:float=0., drop_path_rate:float=0., mlp_ratio:int=1,
pre_norm:bool=False, use_token:bool=True, use_pe:bool=True, n_embeds:Optional[list]=None, embed_dims:Optional[list]=None,
padding_idxs:Optional[list]=None, cat_pos:Optional[list]=None, feature_extractor:Optional[Callable]=None):
# Categorical embeddings
if n_embeds is not None:
n_embeds = listify(n_embeds)
if embed_dims is None:
embed_dims = [emb_sz_rule(s) for s in n_embeds]
self.to_cat_embed = MultiEmbedding(c_in, n_embeds, embed_dims=embed_dims, padding_idxs=padding_idxs, cat_pos=cat_pos)
c_in = c_in + sum(embed_dims) - len(n_embeds)
else:
self.to_cat_embed = nn.Identity()
# Feature extractor
if feature_extractor:
if isinstance(feature_extractor, nn.Module): self.feature_extractor = feature_extractor
else: self.feature_extractor = feature_extractor(c_in, d_model)
c_in, seq_len = output_size_calculator(self.feature_extractor, c_in, seq_len)
else:
self.feature_extractor = nn.Conv1d(c_in, d_model, 1)
self.transpose = Transpose(1,2)
# Position embedding & token
if use_pe:
self.pos_embed = nn.Parameter(torch.zeros(1, seq_len, d_model))
self.use_pe = use_pe
self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
self.use_token = use_token
self.emb_dropout = nn.Dropout(dropout)
# Encoder
self.encoder = _TSiTEncoder(d_model, n_heads, depth=depth, qkv_bias=qkv_bias, dropout=dropout,
mlp_ratio=mlp_ratio, drop_path_rate=drop_path_rate, act=act, pre_norm=pre_norm)
def forward(self, x):
# Categorical embeddings
x = self.to_cat_embed(x)
# Feature extractor
x = self.feature_extractor(x)
# Position embedding & token
x = self.transpose(x)
if self.use_pe:
x = x + self.pos_embed
        if self.use_token: # token is concatenated after the position embedding so that the embedding can be learned with self-supervised learning
x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1)
x = self.emb_dropout(x)
# Encoder
x = self.encoder(x)
# Output
x = x.transpose(1,2)
return x
#exports
class TSiTPlus(nn.Sequential):
r"""Time series transformer model based on ViT (Vision Transformer):
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2020).
An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
    This implementation is a modified version of Vision Transformer that is part of the great timm library
(https://github.com/rwightman/pytorch-image-models/blob/72b227dcf57c0c62291673b96bdc06576bb90457/timm/models/vision_transformer.py)
Args:
c_in: the number of features (aka variables, dimensions, channels) in the time series dataset.
c_out: the number of target classes.
seq_len: number of time steps in the time series.
d_model: total dimension of the model (number of features created by the model).
depth: number of blocks in the encoder.
        n_heads: number of parallel attention heads. Default: 16 (typical range: 8-16).
d_head: size of the learned linear projection of queries, keys and values in the MHA.
Default: None -> (d_model/n_heads) = 32.
act: the activation function of positionwise feedforward layer.
d_ff: the dimension of the feedforward network model.
attn_dropout: dropout rate applied to the attention sublayer.
dropout: dropout applied to the embedded sequence steps after position embeddings have been added and
to the mlp sublayer in the encoder.
drop_path_rate: stochastic depth rate.
mlp_ratio: ratio of mlp hidden dim to embedding dim.
qkv_bias: determines whether bias is applied to the Linear projections of queries, keys and values in the MultiheadAttention
pre_norm: if True normalization will be applied as the first step in the sublayers. Defaults to False.
use_token: if True, the output will come from the transformed token. This is meant to be used in classification tasks.
use_pe: flag to indicate if positional embedding is used.
n_embeds: list with the sizes of the dictionaries of embeddings (int).
embed_dims: list with the sizes of each embedding vector (int).
padding_idxs: If specified, the entries at padding_idxs do not contribute to the gradient; therefore, the embedding vectors at padding_idxs
are not updated during training. Use 0 for those categorical embeddings that may have #na# values. Otherwise, leave them as None.
You can enter a combination for different embeddings (for example, [0, None, None]).
cat_pos: list with the position of the categorical variables in the input.
feature_extractor: an nn.Module or optional callable that will be used to preprocess the time series before
the embedding step. It is useful to extract features or resample the time series.
flatten: flag to indicate if the 3d logits will be flattened to 2d in the model's head if use_token is set to False.
If use_token is False and flatten is False, the model will apply a pooling layer.
concat_pool: if True, the head begins with fastai's AdaptiveConcatPool1d; otherwise, it uses traditional average pooling.
fc_dropout: dropout applied to the final fully connected layer.
use_bn: flag that indicates if batchnorm will be applied to the head.
bias_init: values used to initialize the output layer.
y_range: range of possible y values (used in regression tasks).
custom_head: custom head that will be applied to the network. It must contain all kwargs (pass a partial function)
verbose: flag to control verbosity of the model.
Input:
x: bs (batch size) x nvars (aka features, variables, dimensions, channels) x seq_len (aka time steps)
"""
def __init__(self, c_in:int, c_out:int, seq_len:int, d_model:int=128, depth:int=6, n_heads:int=16, d_head:Optional[int]=None, act:str='relu',
d_ff:int=256, attn_dropout:float=0., dropout:float=0., drop_path_rate:float=0., mlp_ratio:int=1, qkv_bias:bool=True, pre_norm:bool=False,
use_token:bool=True, use_pe:bool=True, n_embeds:Optional[list]=None, embed_dims:Optional[list]=None, padding_idxs:Optional[list]=None,
cat_pos:Optional[list]=None, feature_extractor:Optional[Callable]=None, flatten:bool=False, concat_pool:bool=True, fc_dropout:float=0.,
use_bn:bool=False, bias_init:Optional[Union[float, list]]=None, y_range:Optional[tuple]=None, custom_head:Optional[Callable]=None,
verbose:bool=True):
if use_token and c_out == 1:
use_token = False
pv("use_token set to False as c_out == 1", verbose)
backbone = _TSiTBackbone(c_in, seq_len, depth=depth, d_model=d_model, n_heads=n_heads, d_head=d_head, act=act,
d_ff=d_ff, attn_dropout=attn_dropout, dropout=dropout, drop_path_rate=drop_path_rate,
pre_norm=pre_norm, mlp_ratio=mlp_ratio, use_pe=use_pe, use_token=use_token,
n_embeds=n_embeds, embed_dims=embed_dims, padding_idxs=padding_idxs, cat_pos=cat_pos,
feature_extractor=feature_extractor)
self.head_nf = d_model
self.c_out = c_out
self.seq_len = seq_len
# Head
if custom_head:
if isinstance(custom_head, nn.Module): head = custom_head
else: head = custom_head(self.head_nf, c_out, seq_len)
else:
nf = d_model
layers = []
if use_token:
layers += [TokenLayer()]
elif flatten:
layers += [Reshape(-1)]
nf = nf * seq_len
else:
if concat_pool: nf *= 2
layers = [GACP1d(1) if concat_pool else GAP1d(1)]
if use_bn: layers += [nn.BatchNorm1d(nf)]
if fc_dropout: layers += [nn.Dropout(fc_dropout)]
# Last layer
linear = nn.Linear(nf, c_out)
if bias_init is not None:
if isinstance(bias_init, float): nn.init.constant_(linear.bias, bias_init)
else: linear.bias = nn.Parameter(torch.as_tensor(bias_init, dtype=torch.float32))
layers += [linear]
if y_range: layers += [SigmoidRange(*y_range)]
head = nn.Sequential(*layers)
super().__init__(OrderedDict([('backbone', backbone), ('head', head)]))
TSiT = TSiTPlus
bs = 16
nvars = 4
seq_len = 50
c_out = 2
xb = torch.rand(bs, nvars, seq_len)
model = TSiTPlus(nvars, c_out, seq_len, attn_dropout=.1, dropout=.1, use_token=True)
test_eq(model(xb).shape, (bs, c_out))
model = TSiTPlus(nvars, c_out, seq_len, attn_dropout=.1, dropout=.1, use_token=False)
test_eq(model(xb).shape, (bs, c_out))
bs = 16
nvars = 4
seq_len = 50
c_out = 2
xb = torch.rand(bs, nvars, seq_len)
bias_init = np.array([0.8, .2])
model = TSiTPlus(nvars, c_out, seq_len, bias_init=bias_init)
test_eq(model(xb).shape, (bs, c_out))
test_eq(model.head[1].bias.data, tensor(bias_init))
bs = 16
nvars = 4
seq_len = 50
c_out = 1
xb = torch.rand(bs, nvars, seq_len)
bias_init = 8.5
model = TSiTPlus(nvars, c_out, seq_len, bias_init=bias_init)
test_eq(model(xb).shape, (bs, c_out))
test_eq(model.head[1].bias.data, tensor([bias_init]))
```
## Feature extractor
Transformers do not scale well to long sequences, because the memory required by self-attention grows quadratically with sequence length. To avoid this, we have included a way to subsample the sequence to generate a more manageable input.
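A back-of-envelope estimate makes the problem concrete. This sketch (batch size, head count, and sequence length are illustrative assumptions, not values from this notebook's defaults) counts the float32 attention scores a vanilla transformer materializes:

```
# Rough sketch: self-attention builds a (seq_len x seq_len) score matrix
# per head per batch element, so memory grows quadratically with seq_len.
bs, n_heads, seq_len = 8, 16, 5000   # hypothetical values
n_scores = bs * n_heads * seq_len ** 2
print(f"{n_scores * 4 / 1e9:.1f} GB of float32 attention scores")  # 12.8 GB
```

Even before activations and gradients, the score matrices alone can exhaust GPU memory, which is why subsampling the sequence first helps.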
```
from tsai.data.validation import get_splits
from tsai.data.core import get_ts_dls
X = np.zeros((10, 3, 5000))
y = np.random.randint(0,2,X.shape[0])
splits = get_splits(y)
dls = get_ts_dls(X, y, splits=splits)
xb, yb = dls.train.one_batch()
xb
```
If you try to use TSiTPlus, it's likely you'll get an 'out-of-memory' error.
To avoid this, you can subsample the sequence to reduce the input's length. This can be done in multiple ways. Here are a few examples:
```
# Separable convolution (to avoid mixing channels)
feature_extractor = Conv1d(xb.shape[1], xb.shape[1], ks=100, stride=50, padding=0, groups=xb.shape[1]).to(default_device())
feature_extractor.to(xb.device)(xb).shape
# Convolution (if you want to mix channels or change number of channels)
feature_extractor=MultiConv1d(xb.shape[1], 64, kss=[1,3,5,7,9], keep_original=True).to(default_device())
test_eq(feature_extractor.to(xb.device)(xb).shape, (xb.shape[0], 64, xb.shape[-1]))
# MaxPool
feature_extractor = nn.Sequential(Pad1d((0, 50), 0), nn.MaxPool1d(kernel_size=100, stride=50)).to(default_device())
feature_extractor.to(xb.device)(xb).shape
# AvgPool
feature_extractor = nn.Sequential(Pad1d((0, 50), 0), nn.AvgPool1d(kernel_size=100, stride=50)).to(default_device())
feature_extractor.to(xb.device)(xb).shape
```
Once you decide what type of transform you want to apply, you just need to pass the layer as the feature_extractor attribute:
```
bs = 16
nvars = 4
seq_len = 1000
c_out = 2
d_model = 128
xb = torch.rand(bs, nvars, seq_len)
feature_extractor = partial(Conv1d, ks=5, stride=3, padding=0, groups=xb.shape[1])
model = TSiTPlus(nvars, c_out, seq_len, d_model=d_model, feature_extractor=feature_extractor)
test_eq(model.to(xb.device)(xb).shape, (bs, c_out))
```
## Categorical variables
```
from tsai.utils import alphabet, ALPHABET
a = alphabet[np.random.randint(0,3,40)]
b = ALPHABET[np.random.randint(6,10,40)]
c = np.random.rand(40).reshape(4,1,10)
map_a = {k:v for v,k in enumerate(np.unique(a))}
map_b = {k:v for v,k in enumerate(np.unique(b))}
n_embeds = [len(m.keys()) for m in [map_a, map_b]]
szs = [emb_sz_rule(n) for n in n_embeds]
a = np.asarray(a.map(map_a)).reshape(4,1,10)
b = np.asarray(b.map(map_b)).reshape(4,1,10)
inp = torch.from_numpy(np.concatenate((c,a,b), 1)).float()
feature_extractor = partial(Conv1d, ks=3, padding='same')
model = TSiTPlus(3, 2, 10, d_model=64, cat_pos=[1,2], feature_extractor=feature_extractor)
test_eq(model(inp).shape, (4,2))
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
```
## One Hot Encoding
When independent variables represent categories, they must be transformed into numerical values so that a model can interpret them. One way to do this is to assign a unique number to each category, although this can lead a model to interpret the categories according to their magnitude. For example, if _n_ categories are encoded with values from 1 to _n_, category _n_ may be interpreted as the best, even though the categories have no particular order. One Hot Encoding is an alternative way to encode categories using only binary values: each variable is replaced by _n_ binary variables (one per category of that variable), with a 1 in the variable representing the category and a 0 in all the others. This way, the categories can be identified without associating a magnitude or cost with them.
References:
- <https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-categorical-features>
- <https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html>
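Before working with the Titanic data, here is a minimal sketch of the idea on toy data (the `colors` column is made up for illustration; `OneHotEncoder` sorts categories alphabetically):

```
import numpy as np
from sklearn.preprocessing import OneHotEncoder

colors = np.array([['red'], ['green'], ['blue'], ['green']])
enc = OneHotEncoder()
onehot = enc.fit_transform(colors).toarray()  # columns: blue, green, red
print(onehot)
# [[0. 0. 1.]
#  [0. 1. 0.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```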
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## 2) Titanic
```
titanic = pd.read_csv('Titanic.csv')
titanic
# Passengers who did not embark (rows with missing values)
titanic[titanic.isnull().any(axis=1)]
titanic = titanic.dropna().reset_index(drop=True)
labels = titanic['Survived']
features = titanic.drop('Survived', axis=1)
features
from sklearn.preprocessing import OneHotEncoder
# one hot encoding
def encode_cols(df, cols, encoder):
encoded_arr = encoder.transform(df[cols]).toarray()
encoded_df = pd.DataFrame(encoded_arr, columns=encoder.get_feature_names())
dropped_encoded = df.drop(columns=cols)
return pd.concat([encoded_df, dropped_encoded], axis=1)
encoder = OneHotEncoder()
categorical_features = ['Pclass', 'Sex', 'Embarked']
encoder.fit(features[categorical_features])
enc_features = encode_cols(features, categorical_features, encoder)
enc_features
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(np.array(enc_features), np.reshape(np.array(labels), (-1, 1)), test_size=0.20, random_state=0)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
sample = pd.DataFrame([[1, 'female', 0, 0, 7.5, 'C']], columns=features.columns)
enc_sample = encode_cols(sample, categorical_features, encoder)
sample_pred = model.predict(np.array(enc_sample))
pred = pd.concat([sample, pd.DataFrame(sample_pred, columns=['Survived'])], axis=1)
pred.style.set_caption('Prediction')
```
## 3) q1 dataset
```
q1 = pd.read_csv('q1_data.csv', header=None)
q1
zeros = q1[q1[2] == 0]
ones = q1[q1[2] == 1]
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.scatter(zeros[0], zeros[1], label="0's")
ax.scatter(ones[0], ones[1], label="1's")
ax.legend(loc='upper right')
plt.show()
x = np.array(q1[[0,1]])
y = np.array(q1[2])
y = np.reshape(y, (-1, 1))
X_train,X_test,y_train,y_test = train_test_split(x, y, test_size=0.20, random_state=0)
model = LogisticRegression()
model.fit(X_train, y_train)
pred_y = model.predict(X_test)
df_t = pd.DataFrame(X_test)
df_t[2] = y_test
df_t[3] = pred_y
real_zeros = df_t[df_t[2] == 0]
pred_zeros = df_t[df_t[3] == 0]
real_ones = df_t[df_t[2] == 1]
pred_ones = df_t[df_t[3] == 1]
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.set_title('Test data')
ax1.scatter(real_zeros[0], real_zeros[1], label="0's")
ax1.scatter(real_ones[0], real_ones[1], label="1's")
ax1.legend()
ax1.set(adjustable='box', aspect='equal')
ax2.set_title('Predictions')
ax2.scatter(pred_zeros[0], pred_zeros[1], label="0's")
ax2.scatter(pred_ones[0], pred_ones[1], label="1's")
ax2.legend()
ax2.set(adjustable='box', aspect='equal')
plt.show()
```
```
!nvidia-smi
!pip install transformers
!pip install tensorboardX
!pip install ujson
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
!unzip -q "/content/gdrive/My Drive/Yelp_Sentiment_Analysis/dataset.zip" -d '/content'
```
**Fetch data and Preprocess**
```
import random
import numpy as np
import json
from collections import Counter
from tqdm import tqdm
import re
from transformers import BertTokenizer, BertModel
# does data cleaning for inital text
def process_file(filename, data_type):
print(f"Pre-processing {data_type} examples...")
examples = {}
total = 0
with open(filename, "r") as fh:
source = json.load(fh)
total = len(source)
for idx in tqdm(range(total)):
text = source[str(idx)]["text"]
# basic text preprocessing
text = text.replace("''", '" ').replace("``", '" ') # replace the quotes
text = text.replace("`", "'") # backticks typo
text = text.replace("\"", "") # replace quotes
text = text.replace("...", " ").replace(". . .", " ").replace('..', ' ') # replace dots
text = text.replace("\n", " ") # replace new line chars
text = re.sub(r'(?:http:|https:).*?(?=\s)', '', text) # remove url and website
text = re.sub(r'www.*?(?=\s)', '', text) # remove url and website
list_to_replace = [':(', '=)', ':)', ':P', '-', ',,', ':', ';', '/', '+', '~', '_', '*', '(', ')', '&', '='] #replace the punctuations which are messy with empty
for elem in list_to_replace:
text = text.replace(elem, '')
text = re.sub(r'\!{2,}', '!', text) # collapse duplicate punctuation
text = re.sub(r'\?{2,}', '?', text) # collapse duplicate punctuation
text = text.replace('?!', '?').replace('!?', '?') # replace mixed '?!' punctuation with a question mark
text = re.sub(r'\s(?:\.|\,)', '', text) # replace spaces before punctuation
text = re.sub(r'([a-zA-Z?!])\1\1+', r'\1', text) # removes repeated characters (Ex: Veryyyyy -> very)
text = re.sub(r'\s{2,}', ' ', text) # replace multiple spaces
text = text.strip() # strips spaces
text = text.lower() # lower text
examples[str(idx)] = {"user_id": source[str(idx)]["user_id"],
"business_id": source[str(idx)]["business_id"],
"text": text,
"rating": source[str(idx)]["rating"]}
return examples
train_examples = process_file('dataset/train.json', "train")
test_examples = process_file('dataset/test.json', "test")
test_examples_small = process_file('dataset/test_small.json', "test")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Key Tokens
print('[CLS] token: ', tokenizer.convert_tokens_to_ids("[CLS]"))
print('[SEP] token: ', tokenizer.convert_tokens_to_ids("[SEP]"))
print('[PAD] token: ', tokenizer.convert_tokens_to_ids("[PAD]"))
def pad_sent(dataset, max_text_len = 350): #token number 0 is [PAD]
text = []
attnmask = []
seg_id = []
rating = []
uniq_ids = []
for idx in tqdm(dataset):
uniq_ids.append(int(idx)) # append unique ids
curr_text = dataset[str(idx)]["text"] # extract text
curr_text = "[CLS] " + curr_text # add starting cls token
tokenized_text = tokenizer.tokenize(curr_text) # tokenize
tokenized_ids = tokenizer.convert_tokens_to_ids(tokenized_text) # convert to ids
tokenized_ids = tokenized_ids[:max_text_len - 1] # trim the reviews
tokenized_ids.append(102) # add special token for [SEP]
curr_sent_len = len(tokenized_ids)
remaining = max_text_len - curr_sent_len # words remaining for padding
# pad the input token
tokenized_ids.extend([0] * remaining) # pad the text to max_text_len
# add to storage
text.append(tokenized_ids)
# create attention and segmented mask
curr_attn = [1] * curr_sent_len
curr_attn.extend([0] * remaining)
curr_seg_id = [0] * max_text_len
# append to storage
attnmask.append(curr_attn)
seg_id.append(curr_seg_id)
rating.append(dataset[str(idx)]["rating"])
return text, attnmask, seg_id, rating, uniq_ids
text, attnmask, seg_id, rating, uniq_ids = pad_sent(train_examples)
np.savez('/content/gdrive/My Drive/Yelp_Sentiment_Analysis/train.npz',
text = np.array(text),
attnmask = np.array(attnmask),
seg_id = np.array(seg_id),
rating = np.array(rating),
ids = np.array(uniq_ids)
)
text, attnmask, seg_id, rating, uniq_ids = pad_sent(test_examples)
np.savez('/content/gdrive/My Drive/Yelp_Sentiment_Analysis/test.npz',
text = np.array(text),
attnmask = np.array(attnmask),
seg_id = np.array(seg_id),
rating = np.array(rating),
ids = np.array(uniq_ids)
)
text, attnmask, seg_id, rating, uniq_ids = pad_sent(test_examples_small)
np.savez('/content/gdrive/My Drive/Yelp_Sentiment_Analysis/test_small.npz',
text = np.array(text),
attnmask = np.array(attnmask),
seg_id = np.array(seg_id),
rating = np.array(rating),
ids = np.array(uniq_ids)
)
```
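The trim/pad layout `pad_sent` produces can be illustrated without loading BERT. This toy sketch uses BERT's special token ids (`[CLS]`=101, `[SEP]`=102, `[PAD]`=0) with a hypothetical max length of 8 instead of 350; the word-piece ids are stand-ins, not real tokenizer output:

```
CLS, SEP, PAD = 101, 102, 0
max_len = 8
ids = [7592, 2088, 999]                  # stand-in word-piece ids
ids = [CLS] + ids[:max_len - 2] + [SEP]  # trim, then wrap in special tokens
attn = [1] * len(ids) + [0] * (max_len - len(ids))  # 1 = real token, 0 = pad
ids = ids + [PAD] * (max_len - len(ids))
print(ids)   # [101, 7592, 2088, 999, 102, 0, 0, 0]
print(attn)  # [1, 1, 1, 1, 1, 0, 0, 0]
```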
**Data Loader**
```
import torch
import torch.utils.data as data
import numpy as np
class YelpDataset(data.Dataset):
"""Yelp Dataset.
Each item in the dataset is a tuple with the following entries (in order):
text = np.array(text),
attnmask = np.array(attnmask),
seg_id = np.array(seg_id),
rating = np.array(rating),
ids = np.array(uniq_ids)
Args:
data_path (str): Path to .npz file containing pre-processed dataset.
"""
def __init__(self, data_path):
super(YelpDataset, self).__init__()
dataset = np.load(data_path)
self.text = torch.from_numpy(dataset['text']).long()
self.attnmask = torch.from_numpy(dataset['attnmask']).long()
self.seg_id = torch.from_numpy(dataset['seg_id']).long()
self.rating = torch.from_numpy(dataset['rating']).long()
self.ids = torch.from_numpy(dataset['ids']).long()
# index
self.valid_idxs = [idx for idx in range(len(self.ids))]
def __getitem__(self, idx):
idx = self.valid_idxs[idx]
example = (self.text[idx],
self.attnmask[idx],
self.seg_id[idx],
self.rating[idx],
self.ids[idx])
return example
def __len__(self):
return len(self.valid_idxs)
train_dataset = YelpDataset('gdrive/My Drive/Yelp_Sentiment_Analysis/train.npz')
train_loader = data.DataLoader(train_dataset,
batch_size=8,
shuffle=True,
num_workers=4,
)
dev_dataset = YelpDataset('gdrive/My Drive/Yelp_Sentiment_Analysis/test.npz')
dev_loader = data.DataLoader(dev_dataset,
batch_size=8,
shuffle=False,
num_workers=4,
)
```
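For a quick sense of what one mini-batch yields, here is a sketch using synthetic tensors in place of the saved `.npz` files (shapes match the preprocessing above; the values themselves are hypothetical):

```
import torch
import torch.utils.data as data

N, max_len = 32, 350
fake = data.TensorDataset(
    torch.zeros(N, max_len, dtype=torch.long),  # token ids
    torch.ones(N, max_len, dtype=torch.long),   # attention mask
    torch.zeros(N, max_len, dtype=torch.long),  # segment ids
    torch.randint(1, 6, (N,)),                  # ratings 1-5
    torch.arange(N),                            # unique ids
)
loader = data.DataLoader(fake, batch_size=8, shuffle=True)
text, attnmask, seg_id, rating, ids = next(iter(loader))
print(text.shape, rating.shape)  # torch.Size([8, 350]) torch.Size([8])
```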
**BERT Finetune**
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.utils.data as data
#import torch.optim.lr_scheduler as sched
from torch.optim.lr_scheduler import StepLR
from transformers import BertTokenizer, BertModel, BertConfig
import numpy as np
import tqdm
from ujson import load as json_load
from collections import OrderedDict
from json import dumps
import random
import os
import logging
import queue
import shutil
import string
import re
random.seed(224)
np.random.seed(224)
torch.manual_seed(224)
torch.cuda.manual_seed_all(224)
from tensorboardX import SummaryWriter
class AverageMeter:
"""Keep track of average values over time.
Adapted from:
> https://github.com/pytorch/examples/blob/master/imagenet/main.py
"""
def __init__(self):
self.avg = 0
self.sum = 0
self.count = 0
def reset(self):
"""Reset meter."""
self.__init__()
def update(self, val, num_samples=1):
"""Update meter with new value `val`, the average of `num` samples.
Args:
val (float): Average value to update the meter with.
num_samples (int): Number of samples that were averaged to
produce `val`.
"""
self.count += num_samples
self.sum += val * num_samples
self.avg = self.sum / self.count
class EMA:
"""Exponential moving average of model parameters.
Args:
model (torch.nn.Module): Model with parameters whose EMA will be kept.
decay (float): Decay rate for exponential moving average.
"""
def __init__(self, model, decay):
self.decay = decay
self.shadow = {}
self.original = {}
# Register model parameters
for name, param in model.named_parameters():
if param.requires_grad:
self.shadow[name] = param.data.clone()
def __call__(self, model, num_updates):
decay = min(self.decay, (1.0 + num_updates) / (10.0 + num_updates))
for name, param in model.named_parameters():
if param.requires_grad:
assert name in self.shadow
new_average = \
(1.0 - decay) * param.data + decay * self.shadow[name]
self.shadow[name] = new_average.clone()
def assign(self, model):
"""Assign exponential moving average of parameter values to the
respective parameters.
Args:
model (torch.nn.Module): Model to assign parameter values.
"""
for name, param in model.named_parameters():
if param.requires_grad:
assert name in self.shadow
self.original[name] = param.data.clone()
param.data = self.shadow[name]
def resume(self, model):
"""Restore original parameters to a model. That is, put back
the values that were in each parameter at the last call to `assign`.
Args:
model (torch.nn.Module): Model to assign parameter values.
"""
for name, param in model.named_parameters():
if param.requires_grad:
assert name in self.shadow
param.data = self.original[name]
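# Quick numeric check of the EMA warm-up schedule used in __call__ above
# (plain Python, hypothetical update counts): early in training the effective
# decay starts at 0.1 and ramps up toward the configured decay (e.g. 0.999).
for n in (0, 10, 100, 10000):
    print(n, min(0.999, (1.0 + n) / (10.0 + n)))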
class CheckpointSaver:
"""Class to save and load model checkpoints.
Save the best checkpoints as measured by a metric value passed into the
`save` method. Overwrite checkpoints with better checkpoints once
`max_checkpoints` have been saved.
Args:
save_dir (str): Directory to save checkpoints.
max_checkpoints (int): Maximum number of checkpoints to keep before
overwriting old ones.
metric_name (str): Name of metric used to determine best model.
maximize_metric (bool): If true, best checkpoint is that which maximizes
the metric value passed in via `save`. Otherwise, best checkpoint
minimizes the metric.
log (logging.Logger): Optional logger for printing information.
"""
def __init__(self, save_dir, max_checkpoints, metric_name,
maximize_metric=False, log=None):
super(CheckpointSaver, self).__init__()
self.save_dir = save_dir
self.max_checkpoints = max_checkpoints
self.metric_name = metric_name
self.maximize_metric = maximize_metric
self.best_val = None
self.ckpt_paths = queue.PriorityQueue()
self.log = log
self._print(f"Saver will {'max' if maximize_metric else 'min'}imize {metric_name}...")
def is_best(self, metric_val):
"""Check whether `metric_val` is the best seen so far.
Args:
metric_val (float): Metric value to compare to prior checkpoints.
"""
if metric_val is None:
# No metric reported
return False
if self.best_val is None:
# No checkpoint saved yet
return True
return ((self.maximize_metric and self.best_val < metric_val)
or (not self.maximize_metric and self.best_val > metric_val))
def _print(self, message):
"""Print a message if logging is enabled."""
if self.log is not None:
self.log.info(message)
def save(self, step, model, metric_val, device):
"""Save model parameters to disk.
Args:
step (int): Total number of examples seen during training so far.
model (torch.nn.DataParallel): Model to save.
metric_val (float): Determines whether checkpoint is best so far.
device (torch.device): Device where model resides.
"""
ckpt_dict = {
'model_name': model.__class__.__name__,
'model_state': model.cpu().state_dict(),
'step': step
}
model.to(device)
checkpoint_path = os.path.join(self.save_dir,
f'step_{step}.pth.tar')
torch.save(ckpt_dict, checkpoint_path)
self._print(f'Saved checkpoint: {checkpoint_path}')
if self.is_best(metric_val):
# Save the best model
self.best_val = metric_val
best_path = os.path.join(self.save_dir, 'best.pth.tar')
shutil.copy(checkpoint_path, best_path)
self._print(f'New best checkpoint at step {step}...')
# Add checkpoint path to priority queue (lowest priority removed first)
if self.maximize_metric:
priority_order = metric_val
else:
priority_order = -metric_val
self.ckpt_paths.put((priority_order, checkpoint_path))
# Remove a checkpoint if more than max_checkpoints have been saved
if self.ckpt_paths.qsize() > self.max_checkpoints:
_, worst_ckpt = self.ckpt_paths.get()
try:
os.remove(worst_ckpt)
self._print(f'Removed checkpoint: {worst_ckpt}')
except OSError:
# Avoid crashing if checkpoint has been removed or protected
pass
def load_model(model, checkpoint_path, gpu_ids, return_step=True):
"""Load model parameters from disk.
Args:
model (torch.nn.DataParallel): Load parameters into this model.
checkpoint_path (str): Path to checkpoint to load.
gpu_ids (list): GPU IDs for DataParallel.
return_step (bool): Also return the step at which checkpoint was saved.
Returns:
model (torch.nn.DataParallel): Model loaded from checkpoint.
step (int): Step at which checkpoint was saved. Only if `return_step`.
"""
device = f"cuda:{gpu_ids[0] if gpu_ids else 'cpu'}"
ckpt_dict = torch.load(checkpoint_path, map_location=device)
# Build model, load parameters
model.load_state_dict(ckpt_dict['model_state'])
if return_step:
step = ckpt_dict['step']
return model, step
return model
def get_logger(log_dir, name):
"""Get a `logging.Logger` instance that prints to the console
and an auxiliary file.
Args:
log_dir (str): Directory in which to create the log file.
name (str): Name to identify the logs.
Returns:
logger (logging.Logger): Logger instance for logging events.
"""
class StreamHandlerWithTQDM(logging.Handler):
"""Let `logging` print without breaking `tqdm` progress bars.
See Also:
> https://stackoverflow.com/questions/38543506
"""
def emit(self, record):
try:
msg = self.format(record)
tqdm.tqdm.write(msg)
self.flush()
except (KeyboardInterrupt, SystemExit):
raise
except:
self.handleError(record)
# Create logger
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
# Log everything (i.e., DEBUG level and above) to a file
log_path = os.path.join(log_dir, 'log.txt')
file_handler = logging.FileHandler(log_path)
file_handler.setLevel(logging.DEBUG)
# Log everything except DEBUG level (i.e., INFO level and above) to console
console_handler = StreamHandlerWithTQDM()
console_handler.setLevel(logging.INFO)
# Create format for the logs
file_formatter = logging.Formatter('[%(asctime)s] %(message)s',
datefmt='%m.%d.%y %H:%M:%S')
file_handler.setFormatter(file_formatter)
console_formatter = logging.Formatter('[%(asctime)s] %(message)s',
datefmt='%m.%d.%y %H:%M:%S')
console_handler.setFormatter(console_formatter)
# add the handlers to the logger
logger.addHandler(file_handler)
logger.addHandler(console_handler)
return logger
class BertFineTune(nn.Module):
def __init__(self):
super(BertFineTune, self).__init__()
self.embed_model = BertModel.from_pretrained('bert-base-uncased')
self.fc_weight = nn.Parameter(torch.zeros(768, 5), requires_grad = True) # 5 represents the number of classes
self.fc_bias = nn.Parameter(torch.zeros(1,5), requires_grad = True)
nn.init.xavier_uniform_(self.fc_weight) # FC layer initialization
self.softmax_layer = nn.LogSoftmax(dim = 1) # uses logsoftmax for NLL loss instead of normal softmax
def forward(self, x, seg_id_tensor, attnmask_tensor):
original = torch.zeros([x.size()[0], 350, 768], dtype = torch.float32, device = 'cuda:0') #, device=device) # batchsize x max_text_len x hidden_size
mask = torch.ones([x.size()[0], 350], dtype = torch.float32, device = 'cuda:0') #, device=device) # batchsize x max_text_len
sep_idxs = (x == 102).nonzero().reshape(x.size()[0], -1) # [idx, pos of [SEP] token]
for i in range(x.size()[0]):
# get pre-contextual word embedding from BERT
curr_embed = torch.zeros([1,350,768], dtype=torch.float32)
# take the BERT embedding
# last_hidden_states = last_layer_embedding (batch_size, seq_len, hidden_size)
# pooler output = last_layer_embedding further preprocessed by linear and Tanh layers
# hidden_states = tuple of length 13 (one for output of embedding layer and 12 output for each layer in the transformer)
last_hidden_states, pooler_output, hidden_states = self.embed_model(x[i:i+1], token_type_ids = seg_id_tensor[i:i+1],
attention_mask = attnmask_tensor[i:i+1],
output_hidden_states = True)
# do masking and filter pad sequences
# set embedding after the [SEP] token to 0, and also create the masking
curr_embed = last_hidden_states
end_text = sep_idxs[i,1].item()
curr_embed[0,end_text+1:,:] = torch.zeros([350 - (end_text+1), 768], dtype = torch.float32) # set everything else to 0 after [SEP] token
mask[i,end_text+1:] = torch.zeros([350 - (end_text+1)], dtype=torch.float32) # set everything else to 0 after [SEP] token
original[i] = curr_embed
# Take [CLS] token embedding which is at index 0
cls_embed = original[:,0,:] # size = batch_size x 768
# attach FC layer after the CLS word embeddings
dotted = torch.matmul(cls_embed, self.fc_weight) # size = batch_size x 5, 5 = Number of classes
dotted = dotted.add(self.fc_bias) # add the bias (batch_size x 5)
# perform log softmax to normalize
softmax_output = self.softmax_layer(dotted)
return softmax_output
def get_prediction(review_ids, log_softmax_score):
"""
review_ids (int): Tensor of Review example IDs.
log_softmax_score (list): tensor of log likehood scores (take max to get prediction)
"""
maxs = torch.argmax(log_softmax_score, dim = 1)
pred_dict = {}
for id, max_val in zip(review_ids, maxs):
pred_dict[id.item()] = max_val.item()
return pred_dict
def evaluate_dict(gold_dict, pred_dict):
sum_acc = 0
total = 0
for key, value in pred_dict.items():
total += 1
ground_truths = gold_dict[key]
prediction = value
if ground_truths == prediction:
sum_acc += 1
eval_dict = {'acc': 100. * sum_acc / total}
return eval_dict
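# Inline sanity check of the accuracy metric evaluate_dict computes, with
# hypothetical labels: 2 of 3 predictions match, so acc should be ~66.67.
_gold = {0: 4, 1: 2, 2: 0}
_pred = {0: 4, 1: 1, 2: 0}
_acc = 100. * sum(_pred[k] == _gold[k] for k in _pred) / len(_pred)
print(round(_acc, 2))  # 66.67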
def evaluate(model, data_loader, eval_file, device):
nll_meter = AverageMeter()
model.eval()
pred_dict = {}
# get all true labels for ratings
test_dataset = np.load(eval_file)
true_labels = torch.from_numpy(test_dataset['rating']).long()
uniq_ids = torch.from_numpy(test_dataset['ids']).long()
true_labels = true_labels - 1 ## Need to subtract 1 because classes need to be 0 to n_classes - 1
gold_dict = {}
for true, ids in zip(true_labels, uniq_ids):
gold_dict[ids.item()] = true.item()
with torch.no_grad(), tqdm.tqdm(total=len(data_loader.dataset), position=1, leave=True) as progress_bar:
for text, attnmask, seg_id, rating, ids in data_loader:
# Setup for forward
text = text.to(device)
attnmask = attnmask.to(device)
seg_id = seg_id.to(device)
batch_size = text.size(0)
ids = ids.to(device)
# Forward
log_softmax_scores = model(text, seg_id, attnmask)
rating = rating.to(device)
# rating needs to be 0 to num_classes - 1
rating = rating - 1
loss = F.nll_loss(log_softmax_scores, rating)
nll_meter.update(loss.item(), batch_size)
# Get maximum prediction for prediction
preds = get_prediction(ids, log_softmax_scores)
pred_dict.update(preds)
# Log info
progress_bar.update(batch_size)
progress_bar.set_postfix(NLL=nll_meter.avg)
model.train()
results = evaluate_dict(gold_dict, pred_dict)
results_list = [('NLL', nll_meter.avg),
('acc', results['acc'])]
results = OrderedDict(results_list)
return results, pred_dict
def visualize(tbx, pred_dict, eval_path, step, split, num_visuals):
"""Visualize text examples to TensorBoard.
Args:
tbx (tensorboardX.SummaryWriter): Summary writer.
pred_dict (dict): dict of predictions of the form id -> pred.
eval_path (str): Path to eval JSON file.
step (int): Number of examples seen so far during training.
split (str): Name of data split being visualized.
num_visuals (int): Number of visuals to select at random from preds.
"""
if num_visuals <= 0:
return
if num_visuals > len(pred_dict):
num_visuals = len(pred_dict)
visual_ids = np.random.choice(list(pred_dict), size=num_visuals, replace=False)
with open(eval_path, 'r') as eval_file:
eval_dict = json.load(eval_file)
for i, id_ in enumerate(visual_ids):
pred = pred_dict[id_] + 1 # NEED POST PROCESSING BECAUSE WE SUBTRACTED TO COMPLY WITH NLL LOSS LABELS (0, n_classes - 1)
example = eval_dict[str(id_)]
user_id = example['user_id']
business_id = example['business_id']
text = example['text']
gold = int(example['rating'])
tbl_fmt = (f'- **Reviews:** {text}\n'
+ f'- **Answer:** {gold}\n'
+ f'- **Prediction:** {pred}')
tbx.add_text(tag=f'{split}/{i+1}_of_{num_visuals}',
text_string=tbl_fmt,
global_step=step)
save_dir = 'gdrive/My Drive/Yelp_Sentiment_Analysis/save/train/baseline-01'
if not os.path.exists(save_dir):
os.makedirs(save_dir)
log = get_logger(save_dir, 'baseline')
tbx = SummaryWriter(save_dir)
log.info(f'Using random seed 224 ...')
saver = CheckpointSaver(save_dir,
max_checkpoints=5,
metric_name='acc',
maximize_metric=True,
log=log)
model = BertFineTune()
model = model.to('cuda:0')
model.train()
optimizer = optim.Adam(model.parameters(), lr = 0.00005)
#scheduler = sched.LambdaLR(optimizer, lambda s: 1.) # Constant LR
# LEARNING RATE SCHEDULER
# step_size: at how many multiples of epoch you decay
# step_size = 1, after every 1 epoch, new_lr = lr*gamma
# step_size = 2, after every 2 epoch, new_lr = lr*gamma
# gamma = decaying factor
#scheduler = StepLR(optimizer, step_size=1, gamma=0.9)
ema = EMA(model, 0.999)
epoch = 0
step = 0
steps_till_eval = 30000 # evaluate after every 30000 training examples (step counts examples, not batches)
device = 'cuda:0'
while epoch != 3: # Num Epochs to train on
epoch += 1
#scheduler.step() # Decay Learning Rate
log.info(f'Starting epoch {epoch}...')
with torch.enable_grad(), tqdm.tqdm(total=len(train_loader.dataset), position=0, leave=True) as progress_bar:
for text, attnmask, seg_id, rating, ids in train_loader:
# Setup for forward
text = text.to(device)
attnmask = attnmask.to(device)
seg_id = seg_id.to(device)
batch_size = text.size(0)
optimizer.zero_grad()
# Forward
log_softmax_score = model(text, seg_id, attnmask)
rating = rating.to(device)
# need ratings to be 0 to n_classes - 1 (can later transform it)
rating = rating - 1
loss = F.nll_loss(log_softmax_score, rating) #NLL loss for correct position
loss_val = loss.item()
# Backward
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), 3) # clip gradient norm to a max of 3
optimizer.step()
ema(model, step // batch_size)
# Log info
step += batch_size
progress_bar.update(batch_size)
progress_bar.set_postfix(epoch=epoch, NLL=loss_val)
tbx.add_scalar('train/NLL', loss_val, step)
tbx.add_scalar('train/LR',
optimizer.param_groups[0]['lr'],
step)
steps_till_eval -= batch_size
if steps_till_eval <= 0:
steps_till_eval = 30000
# Evaluate and save checkpoint
log.info(f'Evaluating at step {step}...')
ema.assign(model)
results, pred_dict = evaluate(model, dev_loader, 'gdrive/My Drive/Yelp_Sentiment_Analysis/test.npz', device)
saver.save(step, model, results['acc'], device)
ema.resume(model)
# Log to console
results_str = ', '.join(f'{k}: {v:05.2f}' for k, v in results.items())
log.info(f'Dev {results_str}')
# Log to TensorBoard
log.info('Visualizing in TensorBoard...')
for k, v in results.items():
tbx.add_scalar(f'dev/{k}', v, step)
visualize(tbx,
pred_dict=pred_dict,
eval_path='dataset/test.json',
step=step,
split='dev',
num_visuals=30)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, output_hidden_states = True)
last_hidden_states, pooler_output, hidden_states = outputs # The last hidden-state is the first element of the output tuple
inputs['input_ids'].size()
last_hidden_states.size()
hidden_states
hidden_states[1]
len(hidden_states)
last_hidden_states.size()
hidden_states[-1].size()
model.config
```
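The `EMA` helper used in the training loop above maintains exponentially weighted shadow copies of the model parameters (`assign` swaps them in for evaluation, `resume` restores the trained weights). A minimal sketch of the underlying update rule on plain floats — a hypothetical reimplementation for illustration, not the class used above:

```python
class ScalarEMA:
    """Exponential moving average of a single value (decay close to 1
    means a slowly moving, smoothed estimate)."""

    def __init__(self, decay=0.999):
        self.decay = decay
        self.shadow = None

    def update(self, value):
        if self.shadow is None:
            self.shadow = value  # initialize from the first observation
        else:
            # shadow <- decay * shadow + (1 - decay) * value
            self.shadow = self.decay * self.shadow + (1 - self.decay) * value
        return self.shadow

ema = ScalarEMA(decay=0.5)
print(ema.update(10.0))  # 10.0
print(ema.update(20.0))  # 0.5*10 + 0.5*20 = 15.0
```

With `decay=0.999`, as above, the shadow weights track a long-run average of recent parameter values, which typically evaluates slightly better than the raw weights.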
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Automated Machine Learning
_**Classification of fraudulent credit card transactions on remote compute**_
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Train](#Train)
1. [Results](#Results)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)
## Introduction
In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.
This notebook is using remote compute to train the model.
If you are using an Azure Machine Learning [Notebook VM](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup), you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace.
In this notebook you will learn how to:
1. Create an experiment using an existing workspace.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using remote compute.
4. Explore the results.
5. Test the fitted model.
## Setup
As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
```
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# choose a name for experiment
experiment_name = 'automl-classification-ccard-remote'
experiment=Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas versions
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
```
## Create or Attach existing AmlCompute
A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.
#### Creation of AmlCompute takes approximately 5 minutes.
If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your AmlCompute cluster.
amlcompute_cluster_name = "cpu-cluster-1"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_DS12_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
print('Checking cluster status...')
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
```
# Data
### Load Data
Load the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model.
```
data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv"
dataset = Dataset.Tabular.from_delimited_files(data)
training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)
label_column_name = 'Class'
```
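`random_split` is part of the AzureML `TabularDataset` API; conceptually it shuffles row positions with the given seed and partitions them at the requested percentage. A rough pandas/NumPy sketch of the same idea — not AzureML's actual implementation:

```python
import numpy as np
import pandas as pd

def random_split_frame(df, percentage=0.8, seed=223):
    """Split a DataFrame into two disjoint parts at the given percentage."""
    rng = np.random.RandomState(seed)
    perm = rng.permutation(len(df))   # shuffled row positions
    cut = int(len(df) * percentage)   # boundary index
    return df.iloc[perm[:cut]], df.iloc[perm[cut:]]

df = pd.DataFrame({'x': range(10), 'Class': [0, 1] * 5})
train, valid = random_split_frame(df, 0.8, seed=223)
print(len(train), len(valid))  # 8 2
```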
## Train
Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.
|Property|Description|
|-|-|
|**task**|classification or regression|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|
|**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|
|**n_cross_validations**|Number of cross validation splits.|
|**training_data**|Input dataset, containing both features and label column.|
|**label_column_name**|The name of the label column.|
**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
```
automl_settings = {
"n_cross_validations": 3,
"primary_metric": 'average_precision_score_weighted',
"preprocess": True,
"enable_early_stopping": True,
"max_concurrent_iterations": 2, # This is a limit for testing purpose, please increase it as per cluster size
"experiment_timeout_minutes": 10, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ablity to find the best model possible
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
compute_target = compute_target,
training_data = training_data,
label_column_name = label_column_name,
**automl_settings
)
```
Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.
In this example, we specify `show_output = True` to print currently running iterations to the console.
```
remote_run = experiment.submit(automl_config, show_output = False)
# If you need to retrieve a run that already started, use the following code
#from azureml.train.automl.run import AutoMLRun
#remote_run = AutoMLRun(experiment = experiment, run_id = '<replace with your run id>')
remote_run
```
## Results
#### Widget for Monitoring Runs
The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.
**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details
```
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
remote_run.wait_for_completion(show_output=False)
```
#### Explain model
Automated ML models can be explained and visualized using the SDK Explainability library. [Learn how to use the explainer](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/model-explanation-remote-amlcompute/auto-ml-model-explanations-remote-compute.ipynb).
## Analyze results
### Retrieve the Best Model
Below we select the best pipeline from our iterations. The `get_output` method on `remote_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
```
best_run, fitted_model = remote_run.get_output()
fitted_model
```
#### Print the properties of the model
The fitted_model is a python object and you can read the different properties of the object.
See *Print the properties of the model* section in [this sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification/auto-ml-classification.ipynb).
### Deploy
To deploy the model into a web service endpoint, see _Deploy_ section in [this sample notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-with-deployment/auto-ml-classification-with-deployment.ipynb)
## Test the fitted model
Now that the model is trained, split the data in the same way it was split for training (the difference here is that the split is done locally), then run the test data through the trained model to get the predicted values.
```
# convert the test data to dataframe
X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe()
y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe()
# call the predict functions on the model
y_pred = fitted_model.predict(X_test_df)
y_pred
```
### Calculate metrics for the prediction
Now visualize the results as a confusion matrix to compare the truth (actual) values with the predicted values from the trained model.
```
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
cf =confusion_matrix(y_test_df.values,y_pred)
plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')
plt.colorbar()
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
class_labels = ['False','True']
tick_marks = np.arange(len(class_labels))
plt.xticks(tick_marks,class_labels)
plt.yticks([-0.5,0,1,1.5],['','False','True',''])
# plotting text value inside cells
thresh = cf.max() / 2.
for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])):
plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')
plt.show()
```
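For reference, the matrix drawn above follows the usual convention: cell `[i, j]` counts examples whose actual class is `i` and whose predicted class is `j`. The same counts can be reproduced with a few lines of NumPy — a sketch with hypothetical labels, not the notebook's data:

```python
import numpy as np

def confusion(y_true, y_pred, n_classes=2):
    """Build a confusion matrix: rows = actual class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
print(confusion(y_true, y_pred))
# [[2 1]
#  [1 2]]
```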
## Acknowledgements
This Credit Card fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/ and is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud
The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project
Please cite the following works:
• Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015
• Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective, Expert systems with applications,41,10,4915-4928,2014, Pergamon
• Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE
• Dal Pozzolo, Andrea. Adaptive Machine learning for credit card fraud detection, ULB MLG PhD thesis (supervised by G. Bontempi)
• Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-Aël; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information fusion,41, 182-194,2018,Elsevier
• Carcillo, Fabrizio; Le Borgne, Yann-Aël; Caelen, Olivier; Bontempi, Gianluca. Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing
# Time Construction Categories
Beginning with their phrase types, I will analyze the kinds of time constructions found in the corpus.
```
from setup_tools import * # get all of the tools for the analyses
```
### Get Timephrases
```
tp = A.search('''
phrase function=Time
/with/
word language=Hebrew
/-/
''')
```
### Phrase Types Reflected in Constructions
`PP` is prepositional phrase, `NP` is noun phrase, and `AdvP` is adverb phrase, as might be expected.
```
cx_types = collections.Counter()
for cx in F.label.s('timephrase'):
firstphrase = L.d(cx, 'phrase')[0]
cx_types[F.typ.v(firstphrase)] += 1
cx_types = convert2pandas(cx_types)
cx_types
cx_types.to_excel(firstyear+'phrase_types.xlsx')
countBarplot(cx_types, title='Phrase Types in Time Construction Set', xlabel='Phrase Types', save=f'{firstyear}phrase_types.png')
```
Proportion of prepositional phrases...
```
cx_types.loc['PP']['Total'] / cx_types.sum()[0]
```
There is a large % difference between the counts of NP and those of PP:
```
(cx_types.loc['PP']['Total'] - cx_types.loc['NP']['Total']) / cx_types.loc['NP']['Total']
```
The preposition is the most influential form within time constructions.
### Compare with unprocessed Time Phrases
```
tp_types = collections.Counter()
for ph in tp:
tp_types[F.typ.v(ph[0])] += 1
tp_types = convert2pandas(tp_types)
display(tp_types)
countBarplot(tp_types)
tp_types.loc['AdvP'] - cx_types.loc['AdvP']
```
### Compare with Location
This includes `Loca` phrases as well as complement phrases with a semantic head that has high association with location phrases.
```
# get words attracted to location
loca_lexs = set(F.lex.v(res[1]) for res in A.search('''
phrase function=Loca
<nhead- word funct_assoc>2
'''))
locations = []
cmpl2loca = []
locaonly = []
# find location phrases
# if complement phrase, check for a locational lexeme at the head
for phrase in F.otype.s('phrase'):
function = F.function.v(phrase)
# check complement heads for locas
if function == 'Cmpl' and E.nhead.t(phrase):
head_lexs = set(F.lex.v(h) for h in E.nhead.t(phrase))
if head_lexs & loca_lexs:
locations.append(phrase)
cmpl2loca.append(phrase)
# log location phrases
elif function == 'Loca':
locaonly.append(phrase)
locations.append(phrase)
print(len(locations), 'total locations found...')
print(len(cmpl2loca), 'complements logged as locas...')
print(len(locaonly), 'default location phrases found...')
loca_types = collections.Counter()
for ph in locations:
loca_types[F.typ.v(ph)] += 1
loca_types = convert2pandas(loca_types)
loca_types.index = ['PP', 'AdvP', 'NP', 'PrNP\n(proper noun phrase)']
display(loca_types)
countBarplot(loca_types, save=firstyear+'loca_types.png', xlabel='Phrase Functions')
loca_types.to_excel(firstyear+'loca_types.xlsx')
```
Compare percentage of prepositions...
```
loca_types.loc['PP'][0] / loca_types.sum()[0]
```
### See if Differences Between Loca and Time are Statistically Significant
```
loca_types
cx_types
time_vs_loca = pd.concat([cx_types, loca_types], axis=1, sort=False).fillna(0)
time_vs_loca.columns = ['Time', 'Loca']
time_vs_loca
```
Apply Fisher's test for significance...
```
time_vs_loca_fish = apply_fishers(time_vs_loca)
time_vs_loca_fish
```
### Preposition & Time Associations
I want to see whether certain prepositions are particularly associated with certain time nouns. A version of this analysis was done [SBH_time_expressions](https://nbviewer.jupyter.org/github/CambridgeSemiticsLab/BH_time_collocations/blob/master/analysis/SBH_time_expressions.ipynb) for Genesis-Kings. Here we do the analysis for the entire Hebrew Bible.
The association measure is the Fisher's exact test.
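Concretely, for each preposition–noun pair one builds a 2×2 contingency table (the pair's co-occurrences vs. everything else) and applies Fisher's exact test; the plots later in this notebook report signed log10 scores, with 1.3 (≈ −log10 0.05) as the significance threshold. A one-sided version can be computed from the hypergeometric distribution with only the standard library — a sketch for illustration, not the `apply_fishers` helper used below:

```python
from math import comb, log10

def fisher_one_sided(a, b, c, d):
    """P(observing >= a co-occurrences) for the 2x2 table [[a, b], [c, d]]."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    total = comb(n, col1)
    # sum the hypergeometric tail from the observed count upward
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / total

# hypothetical counts: prep+noun together 8x, prep alone 2x, noun alone 1x, neither 5x
p = fisher_one_sided(8, 2, 1, 5)
score = -log10(p)  # attraction score; > 1.3 means p < 0.05
print(round(p, 4), round(score, 2))  # 0.0245 1.61
```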
```
prep_obj_counts = collections.defaultdict(lambda: collections.Counter())
prep2obj2res = collections.defaultdict(lambda: collections.defaultdict(list))
allpreps = collections.Counter()
for cx in F.label.s('timephrase'):
ph = L.d(cx, 'phrase')[0] # get first phrase
if F.typ.v(ph) != 'PP':
continue
prep_chunk = next(obj for obj in L.d(cx, 'chunk') if F.label.v(obj) == 'prep') # get prep chunk
prep_obj = E.obj_prep.t(L.d(prep_chunk, 'word')[-1])
prep_text = '.'.join(F.lex_utf8.v(w) for w in L.d(prep_chunk, 'word'))
allpreps[prep_text] += 1
if prep_obj:
obj_text = F.lex_utf8.v(prep_obj[0])
prep_obj_counts[prep_text][obj_text] += 1
prep2obj2res[prep_text][obj_text].append(L.d(cx, 'phrase'))
prep_obj_counts = pd.DataFrame(prep_obj_counts).fillna(0)
allpreps = convert2pandas(allpreps)
```
### Show Preposition Counts
```
allpreps.to_excel(firstyear+'prep_counts.xlsx')
```
Count בְּ's share...
```
allpreps.loc['ב'].sum() / allpreps.sum()[0]
search = A.search('''
phrase function=Time typ=PP
<nhead- word lex=>RK=/
''')
#A.show(search)
formatPassages(search, api)
```
### Apply the association test below. This will take some time...
```
po_assoc = apply_fishers(prep_obj_counts)
```
#### Attraction Plots
```
def assign_hue(iterable_data, p=1.3, maxvalue=10, minvalue=-10):
'''
Function to assign heat-map hues based
on a p-value midpoint and max/min attraction
values.
The following rules are used for making
the colors:
p = pvalue, i.e. significance level
upper grey = p
lower grey = -p
starting red = p+0.1
starting blue = -p-0.4
max_red = max(dataset) if > p = hotmax
max_blue = min(dataset) if < p = coldmax
--output--
1. a dataframe with values mapped to a unique color code
2. a list of rgba colors that are aligned with the
indices of the data
'''
maxvalue = int(maxvalue) # for max red
minvalue = int(minvalue) # for max blue
# assign ranges based on p values and red/blue/grey
red_range = len(range(int(p), maxvalue+1))
blue_range = len(range(int(p), abs(minvalue-1)))
blues = sns.light_palette('blue', blue_range)
reds = sns.light_palette('red', red_range)
grey = sns.color_palette('Greys')[0]
# assign colors based on p-value
data = list()
colorCount = collections.Counter()
rgbs = list()
for point in iterable_data:
if point > p:
rgb = reds[int(point)-1]
color = 'red'
elif point < -p:
rgb = blues[abs(int(point))-1]
color = 'blue'
else:
rgb = grey
color = 'grey'
color_count = colorCount.get(color, 0)
colorCount[color] += 1
data.append([point, f'{color}{color_count}'])
rgbs.append(rgb)
data = pd.DataFrame(data, columns=('value', 'color'))
return data, rgbs
# values for uniform hue assignment:
maxattraction = float(po_assoc.max().max())
minattraction = float(po_assoc.min().min())
pvalue = 1.3
def plot_attraction(prep, size=(15, 5), save=''):
# get plot data and generate hues
colexs = po_assoc[prep].sort_values()
colex_data, colors = assign_hue(colexs.values, p=pvalue, maxvalue=maxattraction, minvalue=minattraction)
# plot the figure
plt.figure(figsize=size)
dummyY = ['']*colexs.shape[0] # needed due to bug with Y & hue
ax = sns.swarmplot(x=colex_data['value'], y=dummyY, hue=colex_data['color'], size=15, palette=colors)
ax.legend_.remove()
# offset annotation text from dot for readability
offsetX, offsetY = np.array(ax.collections[0].get_offsets()).T
plt.xlabel('log10 Fisher\'s Scores (attraction)')
# annotate lexemes for those with significant values
for i, colex in enumerate(colexs.index):
annotateX = offsetX[i]
annotateY = offsetY[i] - 0.06
colex_text = reverse_hb(colex).replace('/','').replace('=','')
if colexs[colex] > pvalue:
ax.annotate(colex_text, (annotateX, annotateY), size=20, fontname='Times New Roman')
elif colexs[colex] < -pvalue:
ax.annotate(colex_text, (annotateX, annotateY), size=20, fontname='Times New Roman')
if save:
plt.savefig(f'{firstyear}{prep}_assocs.png', dpi=300, bbox_inches='tight')
plt.title(f'Time Attractions to {reverse_hb(prep)}')
plt.show()
```
Let's look at everything up to כ by setting a count limit of > 20.
```
for prep in prep_obj_counts.columns[(prep_obj_counts.sum() > 20)]:
top_attractions = pd.DataFrame(po_assoc[prep].sort_values(ascending=False))
top_attractions.columns = ['Fisher\'s Score']
top_attractions['Raw Counts'] = prep_obj_counts[prep].loc[top_attractions.index]
top_attractions.round(2).to_excel(firstyear+f'{prep}_top_assocs.xlsx')
plot_attraction(prep, size=(18, 5), save=True)
display(top_attractions.head(10))
```
Look at ממחרת...
```
min_mxrt = A.search('''
verse
clause
phrase function=Time
<head- word lex=MN
<obj_prep- word lex=MXRT/
''')
'; '.join(['{} {}:{}'.format(*T.sectionFromNode(res[0])) for res in min_mxrt if F.txt.v(res[1]) in {'N', '?N'}])
'; '.join(['{} {}:{}'.format(*T.sectionFromNode(res[0])) for res in min_mxrt if F.txt.v(res[1]) not in {'N', '?N'}])
```
Compare with מתמול
```
min_tmwl = A.search('''
verse
clause
phrase function=Time
<head- word lex=MN
<obj_prep- word lex=TMWL/
''')
'; '.join(['{} {}:{}'.format(*T.sectionFromNode(res[0])) for res in min_tmwl])
T.text(min_tmwl[0][0])
A.show(A.search('''
phrase function=Time
<head- word lex=MN
<obj_prep- word lex=RXM/
'''))
```
I can see that times which are attracted to ב are primarily calendrical times like "day", "year", "month", "morning", but also עת "time". The attraction between יום and ב is quite strong.
The ל preposition, as well as עד, prefers more deictic, adverbial kinds of indicators like לעולם, לנצח, לפני, מחר. Indeed עד has nearly identical preferences. The association between ל and עולם is the strongest in the dataset:
```
print('top 5 association scores in dataset by their prep')
po_assoc.max().sort_values(ascending=False).head(5)
print('top 5 associations to ל')
po_assoc['ל'].sort_values(ascending=False).head(5)
```
This very strong score suggests the possibility that ל and עולם together constitute a strongly entrenched unit. Note also that the association between ל and נצח is likewise quite strong, as is the association with פנה. These smaller associations can be interpreted through the entrenched combination of ל+עולם.
The preposition אחר has a distinct preference for nouns that are not necessarily associated with time, such as proper names and nouns representing events.
כ is attracted to עת, which is a notable similarity with ב. This is consistent with observations that these two prepositions have similar meanings. The use with תמול and אתמול are worth investigating.
Finally, מן is primarily attracted to מחרת.
### Time Constructions, Raw Forms (without accents)
```
letter_inventory = set(l for w in F.otype.s('word') for l in F.voc_lex_utf8.v(w))
raw_surfaces = collections.Counter()
for cx in F.label.s('timephrase'):
surface = ''
for w in L.d(cx, 'word'):
for let in F.g_word_utf8.v(w):
if let in letter_inventory:
surface += let
if F.trailer_utf8.v(w) in letter_inventory:
surface += F.trailer_utf8.v(w)
raw_surfaces[surface] += 1
raw_surfaces = convert2pandas(raw_surfaces)
raw_surfaces.head(20)
raw_surfaces.head(20).to_excel(firstyear+'raw_surfaces.xlsx')
```
### Time Constructions, Clustered on Raw Surface Forms without Vocalization (tokens)
In this section, I break down time constructions by clustering them based on surface forms and various surface form filters. This is a rough form of clustering, by which two time constructions are grouped together if their tokenized strings match.
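The clustering step itself is simple: each phrase is reduced to a dot-separated token string, and phrases with identical strings fall into the same cluster. A standard-library sketch with a handful of token strings like those counted below (not drawn from the corpus itself):

```python
from collections import Counter

def cluster_by_token(token_strings):
    """Group items whose tokenized surface forms match exactly."""
    return Counter(token_strings)

phrases = [
    'ב.ה.יום.ה.הוא',   # "on that day"
    'ב.ה.יום.ה.הוא',
    'עד.ה.יום.ה.זה',   # "until this day"
    'ב.ה.לילה.ה.הוא',  # "on that night"
]
clusters = cluster_by_token(phrases)
print(clusters.most_common(1))  # [('ב.ה.יום.ה.הוא', 2)]
```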
```
def surfaceToken(phrasenode):
'''
Return a surface token of a phrase node.
The words are dot-separated and heh consonants
are added if they are present in vocalized form.
'''
subtokens = []
for w in L.d(phrasenode, 'word'):
if F.lex.v(w) == 'H':
subtokens.append('ה')
else:
subtokens.append(F.g_cons_utf8.v(w))
return '.'.join(subtokens)
freq_surface = collections.Counter()
for cx in F.label.s('timephrase'):
freq_surface[surfaceToken(cx)] += 1
freq_surface = convert2pandas(freq_surface)
freq_surface.head(20)
freq_surface.to_excel(firstyear+'raw_tokens.xlsx')
freq_surface.head(50).sum()[0]
freq_surface.head(50).sum()[0] / len(list(F.label.s('timephrase')))
```
ב.ה.יום.ה.הוא is a dominant pattern. But there are other patterns that are similar to it, such as עד.ה.יום.ה.זה or ב.ה.עת.ה.היא. Other similarities include ל.ֹעולם and עד.עולם. Taking a broader definition of similarity to include a role within the phrase, we can see similarities between the preposition + object constructions such as: ל.עולם, ב.יום, ל.נצח.
```
cases = '''
ב.ה.יום.ה.הוא
עד.ה.יום.ה.זה
ב.ה.עת.ה.היא
ב.ה.ימים.ה.הם
ב.ה.עת.ה.הוא
ב.ה.לילה.ה.הוא
ב.עצם.ה.יום.ה.זה
'''.split('\n')
demos = [c.strip() for c in cases if c]
freq_surface.loc[demos].sum()[0]
freq_surface.loc[demos].sum()[0] / len(list(F.label.s('timephrase')))
defi = '''
ה.יום
ב.ה.בקר
עד.ה.ערב
ה.לילה
ב.ה.ערב
ב.ה.לילה'''.split('\n')
defis = [c.strip() for c in defi if c]
freq_surface.loc[defis].sum()[0]
freq_surface.loc[defis].sum()[0] / len(list(F.label.s('timephrase')))
```
### Count Semantic Head Lexemes
```
sem_heads = collections.Counter()
for cx in F.label.s('timephrase'):
firstphrase = L.d(cx, 'phrase')[0]
semhead = E.nhead.t(firstphrase)[0]
sem_heads[F.voc_lex_utf8.v(semhead)] += 1
sem_heads = convert2pandas(sem_heads)
sem_heads.head(25)
sem_heads.head(50).to_excel(firstyear+'semantic_heads.xlsx')
```
Headed by מלכות
```
# A.show(A.search('''
# construction
# =: phrase
# /with/
# <nhead- word lex=MLKWT/
# /-/
# '''))
```
Headed by ראשׁ
```
A.show(A.search('''
chunk label=timephrase
=: phrase
/with/
<nhead- word lex=<WD/
/-/
'''))
```
### Time Constructions, Clustered on Parts of Speech and Chunks
Based on the kinds of resemblances mentioned above, I wanted to obtain a clustering that better reflected word types and sub-constructions within the time constructions. A "sub-construction", what I have called a "chunk", consists of either chained prepositional phrases, e.g. מקץ "from the end of...", or quantified noun phrases, which can consist of chained cardinal numbers such as שׁבעים ושׁשׁ שׁנה. These chunks were processed in [chunking](https://nbviewer.jupyter.org/github/CambridgeSemiticsLab/BH_time_collocations/blob/master/analysis/preprocessing/chunking.ipynb) and then further refined into complete tags in [time constructions [part 1]](https://nbviewer.jupyter.org/github/CambridgeSemiticsLab/BH_time_collocations/blob/master/analysis/time_constructions1.ipynb).
The result is a tokenization strategy which produces larger, more useful clusters. In fact, the top 11 of these clusters account for 76% of the entire dataset, as I show.
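Tag-based clustering can be pictured as mapping each token to a coarse category and joining the categories, so that, for example, ב.יום and ל.עולם fall into the same `prep.time` cluster. A hypothetical toy tagger for illustration — not the `Time` class used below:

```python
# hypothetical category lookup; unknown tokens fall through to 'other'
CATEGORY = {'ב': 'prep', 'ל': 'prep', 'עד': 'prep', 'מן': 'prep',
            'יום': 'time', 'עולם': 'time', 'נצח': 'time', 'ה': 'art'}

def tag(token_string):
    """Reduce a dot-separated token string to a dot-separated tag string."""
    return '.'.join(CATEGORY.get(t, 'other') for t in token_string.split('.'))

print(tag('ב.יום'))    # prep.time
print(tag('ל.עולם'))   # prep.time
print(tag('ב.ה.יום'))  # prep.art.time
```

Because many distinct surface forms collapse to the same tag, this strategy yields the larger clusters described above.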
```
freq_times = collections.Counter()
for cx in F.label.s('timephrase'):
time = Time(cx, api)
freq_times[time.tag] += 1
freq_times = convert2pandas(freq_times)
freq_times.head(20)
```
Top 11 account for a large % of the dataset:
```
freq_times.head(11)['Total'].sum() / freq_times['Total'].sum()
```
The top 20 accounts for even more:
```
freq_times.head(20)['Total'].sum() / freq_times['Total'].sum()
```
It is my hunch that the remaining 25% / 17% of the data most often consists of some combination of the major types reflected in the top 75% group. Thus by describing and understanding these major types, we can obtain even better clustering parameters.
From this point forward I will focus on accounting for the subgroups found amongst these major clusters.
## `prep.time`
What kind of time nouns most often appear in the `time` slot?
```
pt_time = collections.Counter()
pt_prep = collections.Counter()
pt_cx = collections.Counter()
tag2res = collections.defaultdict(list)
for cx in F.label.s('timephrase'):
time_dat = Time(cx, api)
# skip non prep.time cxs
if time_dat.tag != 'PPtime':
continue
time = time_dat.times[0]
prep = time_dat.preps[0]
time_text = F.lex_utf8.v(time)
prep_text = '.'.join(F.lex_utf8.v(w) for w in L.d(prep, 'word'))
cx_text = '.'.join(F.g_cons_utf8.v(w) for w in L.d(cx, 'word'))
pt_time[time_text] += 1
pt_prep[prep_text] += 1
pt_cx[cx_text] += 1
tag2res[cx_text].append(L.d(cx, 'phrase'))
tag2res[time_text].append(L.d(cx, 'phrase'))
tag2res[prep_text].append(L.d(cx, 'phrase'))
pt_time = convert2pandas(pt_time)
pt_prep = convert2pandas(pt_prep)
pt_cx = convert2pandas(pt_cx)
```
#### Top Raw Surface Form Counts
```
pt_cx.head(20)
```
#### Top Times
```
pt_time.head(20)
```
#### Top Preposition Counts
```
pt_prep
```
Does עולם ever have additional modifications? I know from previous analysis of time constructions that they often have various morphological modifications or additional specifications. I would expect this to be different with עולם, and I would also expect this situation to resemble other words that are being used adverbially. If there is indeed a strict separation between patterns with and without these kinds of modifications, I may have good reason to define this as an "adverb construction," i.e. a construction with deictic sense that carries its temporal modifications internally.
Practically, it makes more sense to first define what I mean by "modifications," especially in terms of database querying. In order to do that, I move on to the next most common item in the list, יום. I know from the previous analysis that יום *does* in fact attract these modifications. By defining them here, I might have a way to identify other cases that have such modifications. Then I can define those without modifications as the inverse of these search parameters.
**Below are a few examples of יום as used with a preposition.** The examples are shown in the context of a sentence, because infinitival modifiers of יום will occur as clauses embedded in the same sentence. These cases in particular are marked with a clause relation of `RgRc` (Regens/rectum connection). Note that I have collapsed the cases with `end=1`. Modify this to see all the other examples.
```
A.show(tag2res['יום'], condenseType='sentence', extraFeatures='st vt', end=1) # <- NB: modify end= to see more examples
random.shuffle(tag2res['יום'])
jwm = tag2res['יום']
for ph in jwm[:5]:
print('{} {}:{}'.format(*T.sectionFromNode(ph[0])))
print(T.text(L.u(ph[0], 'sentence')[0]))
print()
```
After reviewing several dozen cases, I see 4 specific patterns that follow the construction ב+יום:
* \+ [CONSTRUCT] + [VERBAL CLAUSE rela=RgRc] (often with infinitive but occasionally with qatal)
* \+ [PLURAL ENDING]
* \+ [PRONOMINAL SUFFIX]
* \+ [אשׁר in VERBAL CLAUSE rela=Attr]
Let's see how much of the יום pattern this accounts for. The individual cases are stored under `tag2res['יום']`. We define a few search patterns to account for the cases above. The phrases stored in `tag2res` are fed in as sets so that only those cases are queried.
```
# yom_phrases = set(phrase for res in tag2res['יום'] for phrase in res)
# found_yom = set()
# print(len(yom_phrases), 'total יום phrases')
# # + CONSTRUCT + VERBAL CLAUSE
# verbal_construct = set(res[1] for res in A.search('''
# sentence
# yomphrase
# word lex=JWM/
# /with/
# <mother- clause rela=RgRc kind=VC
# /or/
# y1:yomphrase
# ..
# c1:clause rela=Attr
# y1 <mother- c1
# /-/
# ''', sets={'yomphrase': yom_phrases}, silent=True))
# found_yom |= (verbal_construct)
# print(f'verbal construct cases found: {len(verbal_construct)}')
# # + PLURAL
# pluralday = set(res[1] for res in A.search('''
# sentence
# yomphrase
# word lex=JWM/ nu=pl
# ''', sets={'yomphrase': yom_phrases}, silent=True))
# found_yom |= (pluralday)
# print(f'plural cases found: {len(pluralday)}')
# # + PRONOMINAL
# pronominalday = set(res[1] for res in A.search('''
# sentence
# yomphrase
# word lex=JWM/ prs#absent
# ''', sets={'yomphrase': yom_phrases}, silent=True))
# found_yom |= (pronominalday)
# print(f'pronominal suffix cases found: {len(pronominalday)}')
# # + אשׁר/relative + VERBAL CLAUSE
# asher_day = set(res[1] for res in A.search('''
# sentence
# yomphrase
# word lex=JWM/
# /with/
# sentence
# ..
# <: clause rela=Attr
# =: phrase function=Rela
# /-/
# ''', sets={'yomphrase': yom_phrases}, silent=True))
# found_yom |= (asher_day)
# print(f'relative attributive cases found: {len(asher_day)}')
# print(f'remaining cases: {len(yom_phrases-found_yom)}')
```
Let's look at the 5 remaining cases...
```
# rare_jwm = [(case,) for case in yom_phrases-found_yom]
# A.show(rare_jwm, extraFeatures='st nu', condenseType='sentence') # uncomment to see cases
# A.show(A.search('''
# phrase function=Time
# <head- word lex=MN
# <obj_prep- word lex=<WLM/
# '''))
```
The remaining cases are all interesting, especially Psalm 138:3 and Ruth 4:5. These may be true cases of non-modification, **construing יום as an adverb.** The case of Ezra 3:4 does not look like an adverbial use of the time construction. I will consider removing it from the samples moving forward.
There is one important case that I did not account for at first: the dual ending. I added that below.
```
# # + PLURAL
# dualday = set(res[1] for res in A.search('''
# sentence
# yomphrase
# word lex=JWM/ nu=du
# ''', sets={'yomphrase': yom_phrases}, silent=True))
# found_yom |= (dualday)
# print(f'dual cases found: {len(dualday)}')
```
Below, this final case is added to the others, bringing the total number of construction forms to 5:
* \+ [CONSTRUCT] + [VERBAL CLAUSE rela=RgRc | occasionally Attr in BHSA] (often with infinitive but occasionally with qatal)
* \+ [PLURAL ENDING]
* \+ [PRONOMINAL SUFFIX]
* \+ [אשׁר in VERBAL CLAUSE rela=Attr]
* \+ [DUAL ENDING]
**Based on these features, I propose to attempt a two-way subdivision of all constructions in the `prep.time` construction: 1) those that appear with specification, and 2) those that appear without specification.** I will test the efficacy of this division below with a handcoded version of the templates from above.
The specified times will go into `cx_specified` mapped to the form that they were found in.
NB: I have moved the plural to the bottom. The plural can often co-occur with other specifications. Yet it seems that the other specifications have the "final say," so to speak, in the sense that they are still able to function as they do. The effect of the plural is the same as it is elsewhere: to extend the time over a duration through quantification.
```
def tagSpecs(cx):
'''
A function that queries for
specifications on a time noun
or phrase within a construction
marked for time function.
output - string
'''
phrase = L.d(cx, 'phrase')[0]
time = next(role[0] for role in E.role.t(cx) if role[1]=='time')
time_mother = [cl for cl in E.mother.t(time) if F.rela.v(cl) == 'RgRc']
phrase_mother = [cl for cl in E.mother.t(phrase) if F.rela.v(cl) == 'Attr']
result = (phrase, time)
tag = []
# isolate construct + verbal clauses
if time_mother:
tag.append('construct + VC')
elif F.st.v(time) == 'c':
tag.append('construct + NP?')
# isolate pronominal suffixes
if F.prs.v(time) not in {'absent', 'n/a'}:
tag.append('pronominal suffix')
# isolate relative clauses | attributives
if phrase_mother:
if 'Rela' in set(F.function.v(ph) for ph in L.d(phrase_mother[0], 'phrase')):
tag.append('RELA + VC')
else:
tag.append('+ VC')
# isolate plural endings
if F.nu.v(time) == 'pl' and F.pdp.v(time) not in {'prde'}: # exclude plural forms inherent to the word
tag.append('plural')
# isolate dual endings
if F.nu.v(time) == 'du':
tag.append('dual')
return ' & '.join(tag), result, time
cx_specified = collections.defaultdict(list)
lex2tag2result = collections.defaultdict(lambda: collections.defaultdict(list)) # keep a mapping from time lexemes to their specific results
for cx in set(F.otype.s('construction')) & set(F.label.s('prep.time')):
tag, result, time = tagSpecs(cx)
if tag:
cx_specified[tag].append(result)
lex2tag2result[F.lex_utf8.v(time)][tag].append(result)
cx_specified_all = set(res for tag in cx_specified for res in cx_specified[tag])
found = len(set(res[0] for res in cx_specified_all))
print(f'number found {found} ({found / pt_cx["Total"].sum()})')
for tag, results in cx_specified.items():
print('{:<30} {}'.format(tag, len(results)))
```
These patterns thus account for 40% of the cases in this construction. That is a good discrimination rate.
```
#A.show(cx_specified['construct + NP?'], condenseType='sentence', extraFeatures='st')
```
Let's see what lexemes those accounted for...
```
lex_count = collections.Counter()
for phrase, time in cx_specified_all:
lex_count[F.lex_utf8.v(time)] += 1
lex_count = convert2pandas(lex_count)
lex_count
lex2tag2result['פנה']
#A.show(lex2tag2result['יום']['pronominal suffix'])
```
Many of these lexemes are quite similar to יום in terms of being calendrical or having similar prepositional preferences.
Below are lexemes that were not found.
```
not_specified = []
nsresults = collections.defaultdict(list)
spec_set = set(res[0] for res in cx_specified_all)
for cx in set(F.otype.s('construction')) & set(F.label.s('prep.time')):
phrase = L.d(cx, 'phrase')[0]
if phrase not in spec_set:
time = next(role[0] for role in E.role.t(cx) if role[1]=='time')
result = (phrase, time)
not_specified.append(result)
nsresults[F.lex_utf8.v(time)].append(result)
print(len(not_specified))
lex_count2 = collections.Counter()
for phrase, time in not_specified:
lex_count2[F.lex_utf8.v(time)] += 1
lex_count2 = convert2pandas(lex_count2)
lex_count2
```
This is a strong list of adverbial forms. There are also several nouns mixed in. Note the appearance of יומם as well, which is a great example of a nominal form that is slotted as an adverbial—in this case that is obvious because of the adverbial ending that is appended to it.
**The cases above, as they are not modified, are anchored either to discourse context or to the time of speech.** Others, such as בטן, have rather inferred anchor points. It would be interesting to isolate when the reference is discourse-anchored. In the case of proper names, for example, this would be relatively easy to ascertain. However, most of these seem to be anchored to speech time.
```
A.show(tag2res['זאת'], extraFeatures='prs')
```
This case ^ is a good example, though, of a discourse-anchored form.
Below, I randomize the non-specified list and manually inspect many examples to make sure there are no specifications I've missed.
```
random.shuffle(not_specified)
#A.show(not_specified, condensed=False, condenseType='sentence')
```
## The Role of Specifications
After reflecting on the specifications that I've isolated in the `prep.time` group, I am wondering what they have in common and where they differ. The + verbal clause specifications and pronominal suffixes anchor the time references to specific participants or events in the discourse. The pronominal suffixes also have a commonality with the verbal clauses since both contain markers of person, often identically so as the infinitive accepts the pronominal suffix. The other two specifications, that of the plural and the dual, then seemingly have a quite different role to play. They do not anchor the time, but they modify it by extending it in quantity, which metaphorically indicates a duration of time. In the case of the dual this duration is specified. **Furthermore, the plural differs from the other specifications in that it is compatible with them—in all other cases the specifications are mutually exclusive.**
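The claim that the plural combines with other specifications while the rest are mutually exclusive can be checked mechanically over the `' & '`-joined tag strings produced by `tagSpecs`. A minimal sketch with invented counts (not the real corpus figures):

```python
import collections

# Toy stand-in for the tag strings in cx_specified; counts are invented.
tag_strings = {
    'construct + VC': 40,
    'pronominal suffix': 25,
    'plural': 30,
    'pronominal suffix & plural': 6,
    'dual': 4,
}

# Count how often each pair of atomic specifications co-occurs in one tag.
cooccur = collections.Counter()
for tags, n in tag_strings.items():
    atoms = tags.split(' & ')
    for i, a in enumerate(atoms):
        for b in atoms[i + 1:]:
            cooccur[frozenset((a, b))] += n

# Which atomic specs ever participate in a combination?
combined_atoms = set().union(*cooccur) if cooccur else set()
print(combined_atoms)  # only the plural and its partner appear here
```

In this toy sample only the plural enters combinations; running the same check over the real `cx_specified` keys would test the claim against the corpus.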
The time of יום accepts all of the major roles, revealing its multi-purpose utility as the generic time marker, as seen below:
```
for spec, results in lex2tag2result['יום'].items():
print('{:<30} {}'.format(spec, len(results)))
```
עת, as a seeming near synonym of יום, also accepts a variety of specifications:
```
for spec, results in lex2tag2result['עת'].items():
print('{:<30} {}'.format(spec, len(results)))
```
The next most common term, פנה, appears in all cases in the plural, but in 6 cases with an additional specification of the suffix.
```
for spec, results in lex2tag2result['פנה'].items():
print('{:<30} {}'.format(spec, len(results)))
```
מות only occurs with the pronominal suffix though:
```
for spec, results in lex2tag2result['מות'].items():
print('{:<20} {}'.format(spec, len(results)))
```
After more data has been gathered, it would be a good idea to see whether there are any statistical associations between certain terms and specification. For instance, it is clear that עולם has a strong association with non-specification, while some terms are used both with and without it.
**Looking through the list of other major clusters besides `prep.time`, it seems that the other clusters are likewise defined by different methods of specification:**
```
freq_times.head(10)
```
**I propose the possibility that specification is the means by which a time is anchored to discourse.** That is self-evident in the prominence of the form `prep.H.time.H.dem`, i.e. the demonstrative plays a front-and-center role, anchoring the time noun to a point forward or backward relative to the discourse. The same is evident with the cluster `prep.H.time.H.ordn`, with the ordinal number anchoring the time to a day or month on the calendar.
Other clusters potentially resemble the unspecified times found above, such as `quantNP`, a quantified noun phrase. These times technically have the specification of quantification, but it is likely, as we have seen with the plural, that this form can variously be combined with or without additional specifications.
## Co-specifications with `prep.H.time.H.dem`?
This is the next most frequent cluster in the set. With the demonstrative already in place, it seems likely that this construction deflects additional specifications. Let's write a query to see if this is so. The query will use similar parameters to those used above to separate specified from non-specified cases.
```
prephdem_specs = collections.Counter()
phdtag2res = collections.defaultdict(list)
for cx in set(F.otype.s('construction')) & set(F.label.s('prep.H.time.H.dem')):
tag, result, time = tagSpecs(cx)
tag = tag or 'no further specification'
prephdem_specs[tag] += 1
phdtag2res[tag].append(result)
convert2pandas(prephdem_specs)
```
We have 4 cases of additional specification. Let's look closer...
```
A.show(phdtag2res['RELA + VC'], condenseType='sentence')
A.show(phdtag2res['RELA + VC & plural'], condenseType='sentence')
```
These specifications *may* reveal a difference between the attributive specification characterized by relative particles and the specification characterized by the construct. The attributive spec can describe an anchored time. But the construct spec, if it plays an anchoring role itself, may resist being combined with additional anchors.
The majority of cases, though, seem to disprefer specification. I will examine some random selections from the `no further specification` set to make sure.
```
random.shuffle(phdtag2res['no further specification'])
#A.show(phdtag2res['no further specification'], condensed=False, condenseType='sentence')
```
## Specifications with `quantNP`
I will apply the same query method with `quantNP`, to see how much of the data is accounted for and whether any new specifications are missed. The function has to be modified a bit to interact with the `quantNP` chunk.
```
def tagSpecsQuant(cx):
'''
A function that queries for
specifications on a time noun
or phrase within a construction
marked for time function.
output - string
Note on Quantifier Constructions:
The quantNP can be a complex construction.
It is built of smaller quantNP chunks,
perhaps a single chunk or perhaps more.
The "quantified" edge value identifies a word as the
time noun being quantified. But this is only stored on
the lowest level chunks. A few extra steps are needed
to isolate these nouns and check them for specifications.
'''
phrase = L.d(cx, 'phrase')[0]
phrase_mother = [cl for cl in E.mother.t(phrase) if F.rela.v(cl) == 'Attr'] # look for attr rela on phrase
# isolate component quantNP chunks
atomic_chunks = [chunk for chunk in L.d(cx, 'chunk')
if L.u(chunk, 'chunk') # either is not top level chunk
or len(L.d(cx, 'chunk')) == 1 # or has no embedded chunks
]
# get list of quantified time noun(s)
times = [noun[0] for chunk in atomic_chunks for noun in E.role.t(chunk) if noun[1] == 'quantified']
time_mothers = [cl for time in times for cl in E.mother.t(time) if F.rela.v(cl) == 'RgRc']
result = [phrase] + times
tag = []
# isolate construct + verbal clauses
if time_mothers:
tag.append('construct + VC')
elif set(t for t in times if F.st.v(t) == 'c'):
tag.append('construct + ??')
# isolate pronominal suffixes
if set(t for t in times if F.prs.v(t) not in {'absent', 'n/a'}):
tag.append('pronominal suffix')
# isolate relative clauses | attributives
if phrase_mother:
if 'Rela' in set(F.function.v(ph) for ph in L.d(phrase_mother[0], 'phrase')):
tag.append('RELA + VC')
else:
tag.append('+ VC')
# isolate plural endings
if set(t for t in times if F.nu.v(t) == 'pl' and F.pdp.v(t) not in {'prde'}): # exclude plural forms inherent to the word
tag.append('plural')
# isolate dual endings
if set(t for t in times if F.nu.v(t) == 'du'):
tag.append('dual')
return ' & '.join(tag), result
quantnp_specs = collections.Counter()
qnptag2res = collections.defaultdict(list)
for cx in set(F.otype.s('construction')) & set(F.label.s('quantNP')):
tag, result = tagSpecsQuant(cx)
tag = tag or 'no known spec'
quantnp_specs[tag] += 1
qnptag2res[tag].append(result)
convert2pandas(quantnp_specs)
```
The plural specs are expected with the quantifier NP. As above, the relative attributive spec has appeared. But the quantified NP has resisted any construct relations.
```
A.show(qnptag2res['RELA + VC & plural'], condenseType='sentence')
```
I will inspect randomized cases of `no known spec` below.
```
random.shuffle(qnptag2res['no known spec'])
#A.show(qnptag2res['no known spec'], condensed=False, condenseType='sentence')
```
## TODO: MERGE SEVERAL OF THESE KINDS OF PHRASES
```
# A.show(A.search('''
# phrase function=Time
# /with/
# clause
# ..
# <: phrase function=Modi
# /or/
# clause
# phrase function=Modi
# <: ..
# /-/
# '''), condenseType='sentence')
```
## `prep.H.time`
```
prephtime = collections.Counter()
phttag2res = collections.defaultdict(list)
for cx in set(F.otype.s('construction')) & set(F.label.s('prep.H.time')):
tag, result, time = tagSpecs(cx)
tag = tag or 'no known spec'
prephtime[tag] += 1
phttag2res[tag].append(result)
convert2pandas(prephtime)
freq_times.loc['prep.time.adju']
#freq_times.head(50)
# A.show(A.search('''
# phrase function=Time
# word lex=JWM/ st=c
# <: word pdp=subs
# '''))
```
# Basic Tensorflow
This notebook will familiarize you with the **basic concepts**
of Tensorflow. Each of these concepts could be extended into
its own notebook(s) but because we want to do some actual
machine learning later on, we only briefly touch on each of
the concepts.
Table of Contents:
- [ 1 The Graph](#1-The-Graph)
- [ 2 The Session](#2-The-Session)
- [ 3 The Shapes](#3-The-Shapes)
- [ 4 Variables – bonus!](#4-Variables-%E2%80%93-bonus!)
```
import tensorflow as tf
# Always make sure you are running the expected version.
# There are considerable differences between versions...
# We tested this with version 1.4.X
tf.__version__
```
# 1 The Graph
The most important concept in Tensorflow: there is a Graph to which
tensors are attached. This graph is never specified explicitly but
has important consequences for the tensors that are attached to it
(e.g. you cannot connect two tensors that are in different graphs).
The python variable "tensor" is simply a reference to the actual
tensor in the Graph. More precisely, it is a reference to an operation
that will produce a tensor (in the Tensorflow Graph, the nodes are
actually operations and the tensors "flow" on the edges between
the nodes...)
Important note: there is a new simplification of the execution scheme
presented in this notebook:
[Tensorflow Eager](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/1_basics.ipynb)
-- But since this feature is currently in alpha state and most code
still uses the graph/session paradigm, we won't use the new execution
style (in any case, if you can use the old style, the new simplified
style should be a welcome and straightforward simplification...)
```
# There is always a "graph" even if you haven't defined one.
tf.get_default_graph()
# Store the default graph in a variable for exploration.
graph = tf.get_default_graph()
# Ok let's try to get all "operations" that are currently defined in this
# default graph.
# Remember : Placing the caret at the end of the line and typing <tab> will
# show an auto-completed list of methods...
graph.get_operations() #graph.get_
# Let's create a separate graph:
graph2 = tf.Graph()
# Try to predict what these statements will output.
print tf.get_default_graph() == graph
print tf.get_default_graph() == graph2
with graph2.as_default():
print tf.get_default_graph() == graph
print tf.get_default_graph() == graph2
# We define our first TENSOR. Fill in your favourite numbers
# You can find documentation to this function here:
# https://www.tensorflow.org/versions/master/api_docs/python/tf/constant
# Try to change data type and shape of the tensor...
favorite_numbers = tf.constant([13, 22, 83])
print favorite_numbers
# (Note that this only prints the "properties" of the tensor
# and not its actual value -- more about this strange behavior
# in the section "The Session".)
# Remember that graph that is always in the background? All the
# tensors that you defined above have been duefully attached to the
# graph by Tensorflow -- check this out:
# (Also note how the operations are named by default)
graph.get_operations() # Show graph operations.
# Note that above are the OPERATIONS that are the nodes in the
# graph (in our case the "Const" operation creates a constant
# tensor). The tensors themselves are the EDGES between the nodes,
# and their name is usually the operation's name + ":0".
favorite_numbers.name
# Let's say we want to clean up our experimental mess...
# Search on Tensorflow homepage for a command to "reset" the graph:
# https://www.tensorflow.org/api_docs/
### YOUR ACTION REQUIRED:
# Find the right Tensorflow command to reset the graph.
tf.reset_default_graph() #tf.
tf.get_default_graph().get_operations()
# Important note: "resetting" didn't clear our original graph but
# rather replaced it with a new graph:
tf.get_default_graph() == graph
# Because we cannot define operations across graphs, we need to
# redefine our favorite numbers in the context of the new
# graph:
favorite_numbers = tf.constant([13, 22, 83])
# Now let's do some computations. Actually we don't really execute
# any computation yet (see next section "The Session" for that), but
# rather define how we intend to do computation later on...
# We first multiply our favorite numbers with our favorite multiplier:
favorite_multiplier = tf.constant(7)
# Do you have an idea how to write below multiplication more succinctly?
# Try it! (Hint: operator overloading)
favorite_products = tf.multiply(favorite_multiplier, favorite_numbers)
print 'favorite_products.shape=', favorite_products.shape
# Now we want to add up all the favorite products to a single scalar
# (0-dim tensor).
# There is a Tensorflow function for this. It starts with "reduce"...
# (Use <tab> auto-completion and/or tensorflow documentation)
### YOUR ACTION REQUIRED:
# Find the correct Tensorflow command to sum up the numbers.
favorite_sum = tf.reduce_sum(favorite_products) #favorite_sum = tf.
print 'favorite_sum.shape=', favorite_sum.shape
# Because we really like our "first" favorite number we add this number
# again to the sum:
favorite_sum_enhanced = favorite_sum + favorite_numbers[0]
# See how we used Python's overloaded "+" and "[]" operators?
# You could also define the same computation using Tensorflow
# functions only:
# favorite_sum_enhanced = tf.add(favorite_sum, tf.slice(favorite_numbers, [0], [1]))
# Of course, it's good practice to avoid a global invisible graph, and
# you can use a Python "with" block to explicitly specify the graph for
# a codeblock:
with tf.Graph().as_default():
within_with = tf.constant([1, 2, 3], name='within_with')
print 'within with:'
print tf.get_default_graph()
print within_with
print tf.get_default_graph().get_operations()
print '\noutside with:'
print tf.get_default_graph()
print within_with
print tf.get_default_graph().get_operations()
# You can execute this cell multiple times without messing up any graph.
# Note that you won't be able to connect the tensor to other tensors
# because we didn't store a reference to the graph of the with statement.
%%writefile _derived/2_visualize_graph.py
# (Written into separate file for sharing between notebooks.)
# Let's visualize our graph!
# Tip: to make your graph more readable you can add a
# name="..." parameter to the individual Ops.
# src: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
import numpy as np
import tensorflow as tf
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# (Load code from previous cell -- make sure to have executed above cell first.)
%run -i _derived/2_visualize_graph.py
show_graph(tf.get_default_graph())
```
# 2 The Session
So far we have only setup our computational Graph -- If you want to actually
*do* any computations, you need to attach the graph to a Session.
```
# The only difference from a "normal" session is that the interactive
# session registers itself as default so .eval() and .run() methods
# know which session to use...
interactive_session = tf.InteractiveSession()
# Hooray -- try printing other tensors from above to see the intermediate
# steps. What are their types and shapes?
print favorite_sum.eval()
# Note that the session is also connected to a Graph, and if no Graph
# is specified then it will connect to the default Graph. Try to fix
# the following code snippet:
graph2 = tf.Graph()
with graph2.as_default():
graph2_tensor = tf.constant([1])
with tf.Session(graph=graph2) as sess: #with tf.Session() as sess:
print graph2_tensor.eval()
# Providing input to the graph: The value of any tensor can be overwritten
# by the "feed_dict" parameter provided to Session's run() method:
a = tf.constant(1)
b = tf.constant(2)
a_plus_b = tf.add(a, b)
print interactive_session.run(a_plus_b)
print interactive_session.run(a_plus_b, feed_dict={a: 123000, b:456})
# It's good practice not to override just any tensor in the graph, but
# rather to use "tf.placeholder", which indicates that this tensor must be
# provided through the feed_dict:
placeholder = tf.placeholder(tf.int32)
placeholder_double = 2 * placeholder
### YOUR ACTION REQUIRED:
# Modify below command to make it work.
print placeholder_double.eval(feed_dict={placeholder:21}) #print placeholder_double.eval()
```
# 3 The Shapes
Another basic skill with Tensorflow is the handling of shapes. This
sounds pretty simple, but you will be surprised by how much of your
Tensorflow coding time you will spend on massaging tensors into the
right form...
Here we go with a couple of exercises of increasing difficulty...
Please refer to the Tensorflow documentation
[Tensor Transformations](https://www.tensorflow.org/versions/master/api_guides/python/array_ops#Shapes_and_Shaping)
for useful functions.
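As a warm-up, the same shape-massaging operations exist in NumPy, which can serve as a mental model for the exercises below (a side sketch assuming NumPy is available, not part of the original exercises):

```python
import numpy as np

t = np.arange(1, 13)          # 12 elements, shape (12,)
t_3d = t.reshape(3, 2, 2, 1)  # explicit reshape, shape (3, 2, 2, 1)

# "-1" means "as many as needed" in exactly one dimension.
pairs = t.reshape(-1, 2)
print(pairs.shape)            # (6, 2)

# Squeezing drops all size-1 dimensions.
print(np.squeeze(t_3d).shape) # (3, 2, 2)
```

`tf.reshape` and `tf.squeeze` follow the same semantics, except that with TF the shapes are only fully known at graph execution time.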
```
tensor12 = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
print tensor12
batch = tf.placeholder(tf.int32, shape=[None, 3])
print batch
# Tensors must be of the same datatype. Try to change the datatype
# of one of the tensors to fix the ValueError...
multiplier = tf.constant(1.5)
### YOUR ACTION REQUIRED:
# Fix error below.
tf.cast(tensor12, tf.float32) * multiplier #tensor12 * multiplier
# What does tf.squeeze() do? Try it out on tensor12_3!
tensor12_3 = tf.reshape(tensor12, [3, 2, 2, 1])
### YOUR ACTION REQUIRED:
# Checkout the effects of tf.squeeze()
print tf.squeeze(tensor12_3).shape, tensor12_3.shape #print tensor12_3.shape
# This cell is about accessing individual elements of a 2D tensor:
batch = tf.constant([[1, 2, 3, 0, 0],
[2, 4, 6, 8, 0],
[3, 6, 0, 0, 0]])
# Note that individual elements have lengths < batch.shape[1] but
# are zero padded.
lengths = tf.constant([3, 4, 2])
# The FIRST elements can be accessed by using Python's
# overloaded bracket indexing OR the related tf.slice():
print 'first elements:'
print batch[:, 0].eval()
print tf.slice(batch, [0, 0], [3, 1]).eval()
# Accessing the LAST (non-padded) element within every sequence is
# somewhat more involved -- you need to specify both the indices in
# the first and the second dimension and then use tf.gather_nd():
### YOUR ACTION REQUIRED:
# Provide the correct expression for indices_0 and indices_1.
indices_0 = range(3) #indices_0 =
indices_1 = lengths - 1 #indices_1 =
print 'last elements:'
print tf.gather_nd(batch, tf.transpose([indices_0, indices_1])).eval()
# Below you have an integer tensor and then an expression that is set True
# for all elements that are odd. Try to print those elements using the
# operations tf.where() and tf.gather()
numbers = tf.range(1, 11)
odd_condition = tf.logical_not(tf.equal(0, tf.mod(numbers, 2)))
### YOUR ACTION REQUIRED:
# Provide the correct expression for odd_indices and odd_numbers.
odd_indices = tf.where(odd_condition) #odd_indices =
odd_numbers = tf.gather(numbers, odd_indices) #odd_numbers =
print odd_numbers.eval()
# "Dynamic shapes" : This feature is mainly used for variable size batches.
# "Dynamic" means that one (or multiple) dimensions are not specified
# before graph execution time (when running the graph with a session).
batch_of_pairs = tf.placeholder(dtype=tf.int32, shape=(None, 2))
# Note how the "unknown" dimension displays as a "?".
print batch_of_pairs
# So we want to reshape the batch of pairs into a batch of quadruples.
# Since we don't know the batch size at runtime we will use the special
# value "-1" (meaning "as many as needed") for the first dimension.
# (Note that this wouldn't work for batch_of_triplets.)
### YOUR ACTION REQUIRED:
# Complete next line.
batch_of_quadruples = tf.reshape(batch_of_pairs, [-1, 4]) #batch_of_quadruples = tf.reshape(batch_of_pairs,
# Test run our batch of quadruples:
print batch_of_quadruples.eval(feed_dict={
batch_of_pairs: [[1,2], [3,4], [5,6], [7,8]]})
# Dynamic shapes cannot be accessed at graph construction time;
# accessing the ".shape" attribute (which is equivalent to the
# .get_shape() method) will return a "TensorShape" with "Dimension(None)".
batch_of_pairs.shape
# i.e. .shape is a property of every tensor that can contain
# values that are not specified -- Dimension(None)
# i.e. first dimension is dynamic and only known at runtime
batch_of_pairs.shape[0].value == None
# The actual dimensions can only be determined at runtime
# by calling tf.shape() -- the output of the tf.shape() Op
# is a tensor like any other tensor whose value is only known
# at runtime (when also all dynamic shapes are known).
batch_of_pairs_shape = tf.shape(batch_of_pairs)
batch_of_pairs_shape.eval(feed_dict={
batch_of_pairs: [[1, 2]]
})
# i.e. tf.shape() is an Op that takes a tensor (that might have
# a dynamic shape or not) as input and outputs another tensor
# that fully specifies the shape of the input tensor.
# So you think shapes are easy, right?
# Well... Then here we go with a real-world shape challenge!
#
# (You probably won't have time to finish this challenge during
# the workshop; come back to this later and don't feel bad about
# consulting the solution...)
#
# Imagine you have a recurrent neural network that outputs a "sequence"
# tensor with dimension [?, max_len, ?], where
# - the first (dynamic) dimension is the number of elements in the batch
# - the second dimension is the maximum sequence length
# - the third (dynamic) dimension is the number of numbers per element
#
# The actual length of every sequence in the batch (<= max_len) is also
# specified in the tensor "lens" (length=number of elements in batch).
#
# The task at hand is to extract the last (non-padded) element of every sequence.
# The resulting tensor "last_elements" should have the shape [?, ?],
# matching the first and third dimension of tensor "sequence".
#
# Hint: The idea is to reshape the "sequence" to "partially_flattened"
# and then construct a "idxs" tensor (within this partially flattened
# tensor) that returns the requested elements.
#
# Handy functions:
# tf.gather()
# tf.range()
# tf.reshape()
# tf.shape()
lens = tf.placeholder(dtype=tf.int32, shape=(None,))
max_len = 5
sequences = tf.placeholder(dtype=tf.int32, shape=(None, max_len, None))
### YOUR ACTION REQUIRED:
# Find the correct expression for below tensors.
batch_size = tf.shape(sequences)[0]
hidden_state_size = tf.shape(sequences)[2]
idxs = tf.range(0, batch_size) * max_len + (lens - 1)
partially_flattened = tf.reshape(sequences, [-1, hidden_state_size])
last_elements = tf.gather(partially_flattened, idxs)
sequences_data = [
[[1,1], [1,1], [2,2], [0,0], [0,0]],
[[1,1], [1,1], [1,1], [3,3], [0,0]],
[[1,1], [1,1], [1,1], [1,1], [4,4]],
]
lens_data = [3, 4, 5]
# Should output [[2,2], [3,3], [4,4]]
last_elements.eval(feed_dict={sequences: sequences_data, lens: lens_data})
```
# 4 Variables – bonus!
So far all our computations have been purely stateless. Obviously,
programming becomes much more fun once we add some state to our code...
Tensorflow's **variables** encode state that persists between calls to
`Session.run()`.
The confusion around Tensorflow variables comes from the fact that we
usually "execute" the graph from within Python by running some nodes of
the graph -- via `Session.run()` -- and that variable assignments are also
encoded as nodes in the graph that only get executed if we ask for the
value of one of their descendants (see explanatory code below).
Tensorflow's overview of
[variable related functions](https://www.tensorflow.org/versions/r1.0/api_guides/python/state_ops#Variables),
the
[variable HOWTO](https://www.tensorflow.org/versions/r1.0/programmers_guide/variables),
and the
[variable guide](https://www.tensorflow.org/programmers_guide/variables).
And finally some notes on [sharing variables](https://www.tensorflow.org/api_guides/python/state_ops#Sharing_Variables).
```
counter = tf.Variable(0)
increment_counter = tf.assign_add(counter, 1)
with tf.Session() as sess:
    # Something is missing here...
    # -> Search the world wide web for the error message...
    ### YOUR ACTION REQUIRED:
    # Add a statement that fixes the error.
    sess.run([tf.global_variables_initializer()])
    print(increment_counter.eval())
    print(increment_counter.eval())
    print(increment_counter.eval())
# Same conditions apply when we use our global interactive session...
interactive_session.run([tf.global_variables_initializer()])
print(increment_counter.eval())
# Execute this cell multiple times and note how our global interactive
# session keeps state between cell executions.
print(increment_counter.eval())
# Usually you would create variables with tf.get_variable() which makes
# it possible to "look up" variables later on.
# For a change let's not try to fix a code snippet but rather to make it
# fail:
# 1. What happens if the block is not wrapped in a tf.Graph()?
# 2. What happens if reuse= is not set?
# 3. What happens if dtype= is not set?
with tf.Graph().as_default():
    with tf.variable_scope('counters'):
        counter1 = tf.get_variable('counter1', initializer=1)
        counter2 = tf.get_variable('counter2', initializer=2)
        counter3 = tf.get_variable('counter3', initializer=3)
    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer()])
        print(counter1.eval())
        with tf.variable_scope('counters', reuse=True):
            print(tf.get_variable('counter2', dtype=tf.int32).eval())
```
```
import geopandas as gpd
import pandas as pd
import rasterio
from rasterio import mask
import shapely
from shapely.geometry import Polygon, Point
from shapely.ops import cascaded_union
import shapely.speedups
shapely.speedups.enable()
df_district_boundaries = gpd.read_file('gadm40_IND_shp/gadm40_IND_2.shp')
df_district_boundaries['NAME_2'].nunique()
india_worldpop_raster_2020 = rasterio.open('ind_ppp_2020_1km_Aggregated_UNadj.tif')
print('No. of bands:',(india_worldpop_raster_2020.count))
# Reading the first band, filtering negative raster values and visualise data with matplotlib
india_worldpop_raster_2020_tot = india_worldpop_raster_2020.read(1)
india_worldpop_raster_2020_tot[india_worldpop_raster_2020_tot<0] = np.nan  # mask negative (nodata) values
india_worldpop_raster_2020_nonzero = india_worldpop_raster_2020_tot[india_worldpop_raster_2020_tot>0]
population_worldpop = india_worldpop_raster_2020_nonzero[india_worldpop_raster_2020_nonzero > 0].sum()
print('Total population - India (2020): ',round(population_worldpop/1000000000,2),'billion')
def get_population_count(vector_polygon,raster_layer):
    gtraster, bound = rasterio.mask.mask(raster_layer, [vector_polygon], crop=True)
    pop_estimate = gtraster[0][gtraster[0]>0].sum()
    return (pop_estimate.round(2))
%%time
df_district_boundaries['population_count_wp'] = df_district_boundaries['geometry'].apply(get_population_count,raster_layer=india_worldpop_raster_2020)
district_population = df_district_boundaries.groupby(['NAME_2','NAME_1'])['population_count_wp'].sum().round().reset_index().sort_values(by='population_count_wp')
relative_wealth_data = pd.read_csv('ind_pak_relative_wealth_index.csv')
def convert_Point(facebook_relative_wealth):
    return Point(facebook_relative_wealth['longitude'],facebook_relative_wealth['latitude'])
relative_wealth_data['geometry'] = relative_wealth_data[['latitude','longitude']].apply(convert_Point,axis=1)
relative_wealth_data = gpd.GeoDataFrame(relative_wealth_data)
relative_wealth_data.head(2)
def get_rwi_mean(vector_polygon,vector_layer):
    pip_mask = vector_layer.within(vector_polygon)
    pip_data = vector_layer.loc[pip_mask]
    mean_val = round(pip_data['rwi'].mean(),2)
    return(mean_val)
def get_rwi_median(vector_polygon,vector_layer):
    pip_mask = vector_layer.within(vector_polygon)
    pip_data = vector_layer.loc[pip_mask]
    median_val = round(pip_data['rwi'].median(),2)
    return(median_val)
df_district_boundaries['rwi_mean'] = df_district_boundaries['geometry'].apply(get_rwi_mean,
vector_layer=relative_wealth_data)
district_average_rwi = df_district_boundaries.groupby(['NAME_2','NAME_1'])['rwi_mean'].mean().reset_index().sort_values(by='rwi_mean')
district_population.head(2)
df_combined = pd.merge(district_average_rwi,district_population,on=['NAME_2','NAME_1'])
df_combined['weighted'] = df_combined['population_count_wp']*df_combined['rwi_mean']
df_combined.sort_values(by='weighted').to_excel('rwi_average.xlsx')
df_district_boundaries['rwi_median'] = df_district_boundaries['geometry'].apply(get_rwi_median,
vector_layer=relative_wealth_data)
district_median_rwi = df_district_boundaries.groupby(['NAME_2','NAME_1'])['rwi_median'].mean().reset_index().sort_values(by='rwi_median')
df_combined = pd.merge(district_median_rwi,district_population,on=['NAME_2','NAME_1'])
df_combined['weighted'] = df_combined['population_count_wp']*df_combined['rwi_median']
df_combined.sort_values(by='weighted').to_excel('rwi_median.xlsx')
```
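Note that the `weighted` column above is just the product `population_count_wp * rwi_mean`; to turn it into a single population-weighted average RWI, the summed product must be divided by the total population. A minimal sketch with made-up toy numbers (only the column names are taken from the notebook's `df_combined`):

```python
import pandas as pd

# Toy stand-in for df_combined; real values come from the raster/RWI pipeline above
df = pd.DataFrame({
    'NAME_2': ['A', 'B'],
    'population_count_wp': [1_000_000, 3_000_000],
    'rwi_mean': [0.5, -0.1],
})

# Population-weighted average RWI: sum(pop * rwi) / sum(pop)
weighted_avg = (df['population_count_wp'] * df['rwi_mean']).sum() / df['population_count_wp'].sum()
print(round(weighted_avg, 3))  # 0.05
```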
# SUYASH PRATAP SINGH
# NUMPY OPERATIONS
```
# Import Numpy Library
import numpy as np
import warnings
warnings.filterwarnings("ignore")
from IPython.display import Image
```
# Numpy Array Creation
```
list1 = [10,20,30,40,50,60]
list1
# Display the type of an object
type(list1)
#Convert list to Numpy Array
arr1 = np.array(list1)
arr1
#Memory address of an array object
arr1.data
# Display type of an object
type(arr1)
#Datatype of array
arr1.dtype
# Convert Integer Array to FLOAT
arr1.astype(float)
# Generate evenly spaced numbers (space =1) between 0 to 10
np.arange(0,10)
# Generate numbers between 0 to 100 with a space of 10
np.arange(0,100,10)
# Generate numbers between 10 to 100 with a space of 10 in descending order
np.arange(100, 10, -10)
#Shape of Array
arr3 = np.arange(0,10)
arr3.shape
arr3
# Size of array
arr3.size
# Dimension
arr3.ndim
# Datatype of object
arr3.dtype
# Bytes consumed by one element of an array object
arr3.itemsize
# Bytes consumed by an array object
arr3.nbytes
# Length of array
len(arr3)
# Generate an array of zeros
np.zeros(10)
# Generate an array of ones with given shape
np.ones(10)
# Repeat 10 five times in an array
np.repeat(10,5)
# Repeat each element in array 'a' thrice
a= np.array([10,20,30])
np.repeat(a,3)
# Array of 10's
np.full(5,10)
# Generate array of Odd numbers
ar1 = np.arange(1,20)
ar1[ar1%2 ==1]
# Generate array of even numbers
ar1 = np.arange(1,20)
ar1[ar1%2 == 0]
# Generate evenly spaced 4 numbers between 10 to 20.
np.linspace(10,20,4)
# Generate evenly spaced 11 numbers between 10 to 20.
np.linspace(10,20,11)
# Create an array of random values
np.random.random(4)
# Generate an array of Random Integer numbers
np.random.randint(0,500,5)
# Generate an array of Random Integer numbers
np.random.randint(0,500,10)
# Using the same random seed reproduces the same sequence of random numbers
np.random.seed(123)
np.random.randint(0,100,10)
# Using the same random seed reproduces the same sequence of random numbers
np.random.seed(123)
np.random.randint(0,100,10)
# Using the same random seed reproduces the same sequence of random numbers
np.random.seed(101)
np.random.randint(0,100,10)
# Using the same random seed reproduces the same sequence of random numbers
np.random.seed(101)
np.random.randint(0,100,10)
# Generate array of Random float numbers
f1 = np.random.uniform(5,10, size=(10))
f1
# Extract Integer part
np.floor(f1)
# Truncate decimal part
np.trunc(f1)
# Convert Float Array to Integer array
f1.astype(int)
# Normal distribution (mean=0 and variance=1)
b2 =np.random.randn(10)
b2
arr1
# Enumerate for Numpy Arrays
for index, value in np.ndenumerate(arr1):
    print(index, value)
```
# Operations on an Array
```
arr2 = np.arange(1,20)
arr2
# Sum of all elements in an array
arr2.sum()
# Cumulative Sum
np.cumsum(arr2)
# Find Minimum number in an array
arr2.min()
# Find MAX number in an array
arr2.max()
# Find INDEX of Minimum number in an array
arr2.argmin()
# Find INDEX of MAX number in an array
arr2.argmax()
# Find mean of all numbers in an array
arr2.mean()
# Find median of all numbers present in arr2
np.median(arr2)
# Variance
np.var(arr2)
# Standard deviation
np.std(arr2)
# Calculating percentiles
np.percentile(arr2,70)
# 10th & 70th percentile
np.percentile(arr2,[10,70])
```
# Operations on a 2D Array
```
A = np.array([[1,2,3,0] , [5,6,7,22] , [10 , 11 , 1 ,13] , [14,15,16,3]])
A
# SUM of all numbers in a 2D array
A.sum()
# MAX number in a 2D array
A.max()
# Minimum
A.min()
# Column wise minimum value
np.amin(A, axis=0)
# Row wise minimum value
np.amin(A, axis=1)
# Mean of all numbers in a 2D array
A.mean()
# Mean
np.mean(A)
# Median
np.median(A)
# 50 percentile = Median
np.percentile(A,50)
np.var(A)
np.std(A)
np.percentile(arr2,70)
# Enumerate for Numpy 2D Arrays
for index, value in np.ndenumerate(A):
    print(index, value)
```
# Reading elements of an array
```
a = np.array([7,5,3,9,0,2])
# Access first element of the array
a[0]
# Access all elements of Array except first one.
a[1:]
# Fetch 2nd , 3rd & 4th value from the Array
a[1:4]
# Get last element of the array
a[-1]
a[-3]
a[-6]
a[-3:-1]
```
# Replace elements in array
```
ar = np.arange(1,20)
ar
# Replace EVEN numbers with ZERO
rep1 = np.where(ar % 2 == 0, 0 , ar)
print(rep1)
ar2 = np.array([10, 20 , 30 , 10 ,10 ,20, 20])
ar2
# Replace 10 with value 99
rep2 = np.where(ar2 == 10, 99 , ar2)
print(rep2)
p2 = np.arange(0,100,10)
p2
# Replace values at INDEX loc 0,3,5 with 33,55,99
np.put(p2, [0, 3 , 5], [33, 55, 99])
p2
```
# Missing Values in an array
```
a = np.array([10 ,np.nan,20,30,60,np.nan,90,np.inf])
a
# Search for missing values and return as a boolean array
np.isnan(a)
# Index of missing values in an array
np.where(np.isnan(a))
# Replace all missing values with 99
a[np.isnan(a)] = 99
a
# Check if array has any NULL value
np.isnan(a).any()
A = np.array([[1,2,np.nan,4] , [np.nan,6,7,8] , [10 , np.nan , 12 ,13] , [14,15,16,17]])
A
# Search for missing values and return as a boolean array
np.isnan(A)
# Index of missing values in an array
np.where(np.isnan(A))
```
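The same boolean-mask replacement works for the 2D array, and NumPy also offers NaN-aware aggregations (`np.nanmean`, `np.nansum`) that skip missing values instead of propagating them. A small self-contained sketch:

```python
import numpy as np

A = np.array([[1, 2, np.nan, 4],
              [np.nan, 6, 7, 8]])

# Replace all missing values with 0 (the boolean mask works for any shape)
A_filled = np.where(np.isnan(A), 0, A)
print(A_filled)

# NaN-aware aggregations skip missing values instead of propagating them
print(np.nanmean(A))          # mean of the 6 non-NaN values
print(np.nansum(A, axis=0))   # column-wise sums ignoring NaN
```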
# Stack Arrays Vertically
```
a = np.zeros(20).reshape(2,-1)
b = np.repeat(1, 20).reshape(2,-1)
a
b
np.vstack([a,b])
a1 = np.array([[1], [2], [3]])
b1 = np.array([[4], [5], [6]])
a1
b1
np.vstack([a1,b1])
```
# Stack Arrays Horizontally
```
np.hstack([a,b])
np.hstack([a1,b1])
### hstack & vstack
arr1 = np.array([[7,13,14],[18,10,17],[11,12,19]])
arr2= np.array([16,6,1])
arr3= np.array([[5,8,4,3]])
np.hstack((np.vstack((arr1,arr2)),np.transpose(arr3)))
```
# Common items between two Arrays
```
c1 = np.array([10,20,30,40,50,60])
c2 = np.array([12,20,33,40,55,60])
np.intersect1d(c1,c2)
```
# Remove Common Elements
```
# Remove common elements of C1 & C2 array from C1
np.setdiff1d(c1,c2)
```
# Process Elements on Conditions
```
a = np.array([1,2,3,6,8])
b = np.array([10,2,30,60,8])
np.where(a == b) # returns the indices of elements in an input array where the given condition is satisfied.
# Return an array where condition is satisfied
a[np.where(a == b)]
# Return all numbers betweeen 20 & 35
a1 = np.arange(0,60)
a1[np.where ((a1>20) & (a1<35))]
# Return all numbers betweeen 20 & 35 OR numbers divisible by 10
a1 = np.arange(0,60)
a1[np.where (((a1>20) & (a1<35)) | (a1 % 10 ==0)) ]
# Return all numbers betweeen 20 & 35 using np.logical_and
a1[np.where(np.logical_and(a1>20, a1<35))]
```
# Check for elements in an Array using isin()
```
a = np.array([10,20,30,40,50,60,70])
a
# Check whether number 11 & 20 are present in an array
np.isin(a, [11,20])
#Display the matching numbers
a[np.isin(a,20)]
# Check whether number 33 is present in an array
np.isin(a, 33)
a[np.isin(a, 33)]
b = np.array([10,20,30,40,10,10,70,80,70,90])
b
# Check whether number 10 & 70 are present in an array
np.isin(b, [10,70])
# Display the indices where match occurred
np.where(np.isin(b, [10,70]))
# Display the matching values
b[np.where(np.isin(b, [10,70]))]
# Display the matching values
b[np.isin(b, [10,70])]
```
# Reverse Array
```
a4 = np.arange(10,30)
a4
# Reverse the array
a4[::-1]
# Reverse the array
np.flip(a4)
a3 = np.array([[3,2,8,1] , [70,50,10,67] , [45,25,75,15] , [12,9,77,4]])
a3
# Reverse ROW positions
a3[::-1,]
# Reverse COLUMN positions
a3[:,::-1]
# Reverse both ROW & COLUMN positions
a3[::-1,::-1]
```
# Sorting Array
```
a = np.array([10,5,2,22,12,92,17,33])
# Sort array in ascending order
np.sort(a)
a3 = np.array([[3,2,8,1] , [70,50,10,67] , [45,25,75,15]])
a3
# Sort along rows
np.sort(a3)
# Sort along rows
np.sort(a3,axis =1)
# Sort along columns
np.sort(a3,axis =0)
# Sort in descending order
b = np.sort(a)
b = b[::-1]
b
# Sort in descending order
c = np.sort(a)
np.flip(c)
# Sort in descending order
a[::-1].sort()
a
```
# "N" Largest & Smallest Numbers in an Array
```
p = np.arange(0,50)
p
np.random.shuffle(p)
p
# Return "n" largest numbers in an Array
n = 4
p[np.argsort(p)[-n:]]
# Return "n" largest numbers in an Array
p[np.argpartition(-p,n)[:n]]
# Return "n" smallest numbers in an Array
p[np.argsort(-p)[-n:]]
# Return "n" smallest numbers in an Array
p[np.argpartition(p,n)[:n]]
```
# Repeating Sequences
```
a5 = [10,20,30]
a5
# Repeat whole array twice
np.tile(a5, 2)
# Repeat each element in an array thrice
np.repeat(a5, 3)
```
# Compare Arrays
```
d1 = np.arange(0,10)
d1
d2 = np.arange(0,10)
d2
d3 = np.arange(10,20)
d3
d4 = d1[::-1]
d4
# Compare arrays using "allclose" function. If this function returns True then Arrays are equal
res1 = np.allclose(d1,d2)
res1
# Compare arrays using "allclose" function. If this function returns False then Arrays are not equal
res2 = np.allclose(d1,d3)
res2
# Compare arrays using "allclose" function.
res3 = np.allclose(d1,d4)
res3
```
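Keep in mind that `allclose` compares within floating-point tolerances (default `rtol=1e-5`, `atol=1e-8`), so it can return True for arrays that are not exactly equal; `np.array_equal` checks exact element-wise equality. A small illustration:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = a + 1e-9   # tiny floating-point difference

# allclose compares within tolerances (default rtol=1e-5, atol=1e-8)
print(np.allclose(a, b))       # True
# array_equal demands exact element-wise equality
print(np.array_equal(a, b))    # False
```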
# Frequent Values in an Array
```
# unique numbers in an array
b = np.array([10,10,10,20,30,20,30,30,20,10,10,30,10])
np.unique(b)
# unique numbers in an array along with their counts. E.g. value 10 occurs most often (6 times) in array "b"
val , count = np.unique(b,return_counts=True)
val,count
# 10 is the most frequent value
np.bincount(b).argmax()
```
# Read-Only Array
```
d5 = np.arange(10,100,10)
d5
# Make the array immutable (read-only)
d5.flags.writeable = False
# These assignments now raise "ValueError: assignment destination is read-only"
d5[0] = 99
d5[2] = 11
```
# Load & Save
```
# Load data from a text file using loadtext
p4 = np.loadtxt('sample.txt',
                dtype = int  # Decides the datatype of the resulting array
                )
p4
# Load data from a text file using genfromtxt
p5 = np.genfromtxt('sample0.txt',dtype='str')
p5
# Accessing specific rows
p5[0]
# Accessing specific columns
p5[:,0]
p6 = np.genfromtxt('sample2.txt',
delimiter=' ',
dtype=None,
names=('Name', 'ID', 'Age')
)
p6
# Skip header using "skiprows" parameter
p6 = np.loadtxt('sample2.txt',
                delimiter=' ',
                dtype=[('Name', 'U50'), ('ID', int), ('Age', int)],
                skiprows=1
                )
p6
# Return only first & third column using "usecols" parameter
np.loadtxt('sample.txt', delimiter =' ', usecols =(0, 2))
# Return only three rows using "max_rows" parameter
p6 = np.loadtxt('sample2.txt',
                delimiter=' ',
                dtype=[('Name', 'U50'), ('ID', int), ('Age', int)],
                skiprows=1,
                max_rows = 3
                )
p6
# Skip header using "skip_header" parameter
p6 = np.genfromtxt('sample2.txt',
                   delimiter=' ',
                   dtype=[('Name', 'U50'), ('ID', int), ('Age', float)],
                   names=('Name', 'ID', 'Age'),
                   skip_header=1
                   )
p6
p7 = np.arange(10,200,11)
p7
np.savetxt('test3.csv', p7, delimiter=',')
p8 = np.arange(0,121).reshape(11,11)
p8
np.save('test4.npy', p8)
p9 = np.load('test4.npy')
p9
np.save('numpyfile', p8)
p10 = np.load('numpyfile.npy')
p10
p11 = np.arange(0,1000000).reshape(1000,1000)
p11
# Save Numpy array to a compressed file
np.savez_compressed('test6.npz', p11)
# Save Numpy array to a npy file
np.save('test7.npy', p11)
# Compressed file size is much lesser than normal npy file
Image(filename='load_save.PNG')
```
# Printing Options
```
# Display values up to 4 decimal places
np.set_printoptions(precision=4)
a = np.array([12.654398765 , 90.7864098354674])
a
# Display values up to 2 decimal places
np.set_printoptions(precision=2)
a = np.array([12.654398765 , 90.7864098354674])
a
# Array Summarization
np.set_printoptions(threshold=3)
np.arange(200)
# Reset Formatter
np.set_printoptions(precision=8,suppress=False, threshold=1000, formatter=None)
a = np.array([12.654398765 , 90.7864098354674])
a
np.arange(1,1100)
# Display all values
np.set_printoptions(threshold=np.inf)
np.arange(1,1100)
```
# Vector Addition
```
v1 = np.array([1,2])
v2 = np.array([3,4])
v3 = v1+v2
v3 = np.add(v1,v2)
print('V3 =' ,v3)
```
# Multiplication of vectors
```
a1 = [5 , 6 ,8]
a2 = [4, 7 , 9]
print(np.multiply(a1,a2))
```
# Dot Product
```
a1 = np.array([1,2,3])
a2 = np.array([4,5,6])
dotp = a1@a2
print(" Dot product - ",dotp)
dotp = np.dot(a1,a2)
print(" Dot product using np.dot",dotp)
dotp = np.inner(a1,a2)
print(" Dot product using np.inner", dotp)
dotp = sum(np.multiply(a1,a2))
print(" Dot product using np.multiply & sum",dotp)
dotp = np.matmul(a1,a2)
print(" Dot product using np.matmul",dotp)
dotp = 0
for i in range(len(a1)):
    dotp = dotp + a1[i]*a2[i]
print(" Dot product using for loop" , dotp)
```
# Length of Vector
```
v3 = np.array([1,2,3,4,5,6])
length = np.sqrt(np.dot(v3,v3))
length
v3 = np.array([1,2,3,4,5,6])
length = np.sqrt(sum(np.multiply(v3,v3)))
length
v3 = np.array([1,2,3,4,5,6])
length = np.sqrt(np.matmul(v3,v3))
length
```
# Normalized Vector
```
#First Method
v1 = [2,3]
length_v1 = np.sqrt(np.dot(v1,v1))
norm_v1 = v1/length_v1
length_v1 , norm_v1
#Second Method
v1 = [2,3]
norm_v1 = v1/np.linalg.norm(v1)
norm_v1
```
# Angle between vectors
```
#First Method
v1 = np.array([8,4])
v2 = np.array([-4,8])
ang = np.rad2deg(np.arccos( np.dot(v1,v2) / (np.linalg.norm(v1)*np.linalg.norm(v2))))
ang
#Second Method
v1 = np.array([4,3])
v2 = np.array([-3,4])
lengthV1 = np.sqrt(np.dot(v1,v1))
lengthV2 = np.sqrt(np.dot(v2,v2))
ang = np.rad2deg(np.arccos( np.dot(v1,v2) / (lengthV1 * lengthV2)))
print('Angle between Vectors - %s' %ang)
```
# Inner & outer products
```
v1 = np.array([1,2,3])
v2 = np.array([4,5,6])
np.inner(v1,v2)
print("\n Inner Product ==> \n", np.inner(v1,v2))
print("\n Outer Product ==> \n", np.outer(v1,v2))
```
# Vector Cross Product
```
v1 = np.array([1,2,3])
v2 = np.array([4,5,6])
print("\nVector Cross Product ==> \n", np.cross(v1,v2))
```
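A handy sanity check: the cross product is perpendicular to both input vectors, so its dot product with each of them is zero.

```python
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([4, 5, 6])
c = np.cross(v1, v2)

# The cross product is orthogonal to both inputs,
# so its dot product with each of them is zero
print(c)                              # [-3  6 -3]
print(np.dot(c, v1), np.dot(c, v2))   # 0 0
```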
# Matrix Creation
```
# Create a 4x4 matrix
A = np.array([[1,2,3,4] , [5,6,7,8] , [10 , 11 , 12 ,13] , [14,15,16,17]])
A
# Datatype of Matrix
A.dtype
B = np.array([[1.5,2.07,3,4] , [5,6,7,8] , [10 , 11 , 12 ,13] , [14,15,16,17]])
B
# Datatype of Matrix
B.dtype
# Shape of Matrix
A.shape
# Generate a 4x4 zero matrix
np.zeros((4,4))
#Shape of Matrix
z1 = np.zeros((4,4))
z1.shape
# Generate a 5x5 matrix filled with ones
np.ones((5,5))
# Return 10x10 matrix of random integer numbers between 0 to 500
np.random.randint(0,500, (10,10))
arr2
arr2.reshape(5,4)
mat1 = np.random.randint(0,1000,100).reshape(10,10)
mat1
mat1[0,0]
mat1[mat1 > 500]
# Identity Matrix : https://en.wikipedia.org/wiki/Identity_matrix
I = np.eye(9)
I
# Diagonal Matrix : https://en.wikipedia.org/wiki/Diagonal_matrix
D = np.diag([1,2,3,4,5,6,7,8])
D
# Triangular Matrices (lower & upper triangular matrix) : https://en.wikipedia.org/wiki/Triangular_matrix
M = np.random.randn(5,5)
U = np.triu(M)
L = np.tril(M)
print("Original matrix - \n" , M)
print("\n")
print("Lower triangular matrix - \n" , L)
print("\n")
print("Upper triangular matrix - \n" , U)
# Generate a 5X5 matrix with a given fill value of 8
np.full((5,5) , 8)
# Generate 5X5 matrix of Random float numbers between 10 to 20
np.random.uniform(10,20, size=(5,5))
A
# Collapse Matrix into one dimension array
A.flatten()
# Collapse Matrix into one dimension array
A.ravel()
```
# Reading elements of a Matrix
```
A
# Fetch first row of matrix
A[0,]
# Fetch first column of matrix
A[:,0]
# Fetch first element of the matrix
A[0,0]
A[1:3 , 1:3]
```
# Reverse Rows / Columns of a Matrix
```
arr = np.arange(16).reshape(4,4)
arr
# Reverse rows
arr[::-1]
#Reverse Columns
arr[:, ::-1]
```
# SWAP Rows & Columns
```
m1 = np.arange(0,16).reshape(4,4)
m1
# SWAP rows 0 & 1
m1[[0,1]] = m1[[1,0]]
m1
# SWAP rows 2 & 3
m1[[3,2]] = m1[[2,3]]
m1
m2 = np.arange(0,36).reshape(6,6)
m2
# Swap columns 0 & 1
m2[:,[0, 1]] = m2[:,[1, 0]]
m2
# Swap columns 2 & 3
m2[:,[2, 3]] = m2[:,[3, 2]]
m2
```
# Concatenate Matrices
Matrix Concatenation : https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html
```
A = np.array([[1,2] , [3,4] ,[5,6]])
B = np.array([[1,1] , [1,1]])
C = np.concatenate((A,B))
C
```
# Matrix Addition
```
#********************************************************#
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
C = M+N
print("\n Matrix Addition (M+N) ==> \n", C)
# OR
C = np.add(M,N,dtype = np.float64)
print("\n Matrix Addition using np.add ==> \n", C)
#********************************************************#
```
# Matrix subtraction
```
#********************************************************#
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
C = M-N
print("\n Matrix Subtraction (M-N) ==> \n", C)
# OR
C = np.subtract(M,N,dtype = np.float64)
print("\n Matrix Subtraction using np.subtract ==> \n", C)
#********************************************************#
```
# Matrices Scalar Multiplication
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
C = 10
print("\n Matrix (M) ==> \n", M)
print("\nMatrices Scalar Multiplication ==> \n", C*M)
# OR
print("\nMatrices Scalar Multiplication ==> \n", np.multiply(C,M))
```
# Transpose of a matrix
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nTranspose of M ==> \n", np.transpose(M))
# OR
print("\nTranspose of M ==> \n", M.T)
```
# Determinant of a matrix
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nDeterminant of M ==> ", np.linalg.det(M))
```
# Rank of a matrix
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nRank of M ==> ", np.linalg.matrix_rank(M))
```
# Trace of matrix
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nTrace of M ==> ", np.trace(M))
```
# Inverse of matrix A
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
print("\n Matrix (M) ==> \n", M)
print("\nInverse of M ==> \n", np.linalg.inv(M))
```
# Matrix Element-Wise (Pointwise) Multiplication
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
print("\n Point-Wise Multiplication of M & N ==> \n", M*N)
# OR
print("\n Point-Wise Multiplication of M & N ==> \n", np.multiply(M,N))
```
# Matrix dot product
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
print("\n Matrix Dot Product ==> \n", M@N)
# OR
print("\n Matrix Dot Product using np.matmul ==> \n", np.matmul(M,N))
# OR
print("\n Matrix Dot Product using np.dot ==> \n", np.dot(M,N))
```
# Matrix Division
```
M = np.array([[1,2,3],[4,-3,6],[7,8,0]])
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n First Matrix (M) ==> \n", M)
print("\n Second Matrix (N) ==> \n", N)
print("\n Matrix Division (M/N) ==> \n", M/N)
# OR
print("\n Matrix Division (M/N) ==> \n", np.divide(M,N))
```
# Sum of all elements in a matrix
```
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n Matrix (N) ==> \n", N)
print ("Sum of all elements in a Matrix ==>")
print (np.sum(N))
```
# Column-Wise Addition
```
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n Matrix (N) ==> \n", N)
print ("Column-Wise summation ==> ")
print (np.sum(N,axis=0))
```
# Row-Wise Addition
```
N = np.array([[1,1,1],[2,2,2],[3,3,3]])
print("\n Matrix (N) ==> \n", N)
print ("Row-Wise summation ==>")
print (np.sum(N,axis=1))
```
# Kronecker Product of matrices
```
M1 = np.array([[1,2,3] , [4,5,6]])
M1
M2 = np.array([[10,10,10],[10,10,10]])
M2
np.kron(M1,M2)
```
# Matrix Powers
```
M1 = np.array([[1,2],[4,5]])
M1
#Matrix to the power 3
M1@M1@M1
#Matrix to the power 3
np.linalg.matrix_power(M1,3)
```
# Tensor
```
# Create Tensor
T1 = np.array([
[[1,2,3], [4,5,6], [7,8,9]],
[[10,20,30], [40,50,60], [70,80,90]],
[[100,200,300], [400,500,600], [700,800,900]],
])
T1
T2 = np.array([
[[0,0,0] , [0,0,0] , [0,0,0]],
[[1,1,1] , [1,1,1] , [1,1,1]],
[[2,2,2] , [2,2,2] , [2,2,2]]
])
T2
```
# Tensor Addition
```
A = T1+T2
A
np.add(T1,T2)
```
# Tensor Subtraction
```
S = T1-T2
S
np.subtract(T1,T2)
```
# Tensor Element-Wise Product
```
P = T1*T2
P
np.multiply(T1,T2)
```
# Tensor Element-Wise Division
```
# Note: T2 contains zeros, so this division produces inf values (and a RuntimeWarning) in the first slice
D = T1/T2
D
np.divide(T1,T2)
```
# Tensor Dot Product
```
T1
T2
np.tensordot(T1,T2)
```
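The `axes` argument of `np.tensordot` controls which dimensions are contracted; the default `axes=2` used above sums over the last two axes of the first tensor and the first two axes of the second. A small illustration:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(6).reshape(3, 2)

# axes=1 contracts the last axis of A with the first axis of B,
# which for 2-D arrays is ordinary matrix multiplication
print(np.array_equal(np.tensordot(A, B, axes=1), A @ B))  # True

# The default axes=2 contracts the last TWO axes of the first tensor
# with the first TWO axes of the second one
T = np.ones((3, 3, 3))
print(np.tensordot(T, T).shape)  # (3, 3)
```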
# Solving Equations $$AX = B$$
```
# A must be non-singular; the original [[1,2,3],[4,5,6],[7,8,9]] has determinant 0, so inv/solve would fail
A = np.array([[1,2,3] , [4,5,6] , [7,8,10]])
A
B = np.random.random((3,1))
B
# Ist Method
X = np.dot(np.linalg.inv(A) , B)
X
# 2nd Method
X = np.matmul(np.linalg.inv(A) , B)
X
# 3rd Method
X = np.linalg.inv(A)@B
X
# 4th Method
X = np.linalg.solve(A,B)
X
```
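Whichever method is used, the solution can be verified by substituting `X` back into the equation. A self-contained check with a small (hypothetical) non-singular system:

```python
import numpy as np

# A must be non-singular (det != 0) for inv/solve to work
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[5.0], [10.0]])

X = np.linalg.solve(A, B)

# Substituting back: A @ X should reproduce B (up to floating-point error)
print(np.allclose(A @ X, B))  # True
```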
# THANK YOU
```
!pip install sentence-transformers keybert spacy spacycake
doc = """Best Dentists in NYC
209 NYC Dental is the oldest continuing dental practice in New York State.
Established in 1887 our dental office has been providing quality dental care to New York City patients for over a century.
This legacy of treatment comes with responsibility. A responsibility to treat people with respect, excellence, and compassion.
209 NYC Dental team is a wonderfully eclectic group of top rated dentists, hygienists, and staff.
We have great clinical and people skills. We the very best, high quality of dental care.
Having all dental specialties at 209 NYC Dental, we can comprehensively serve all your dental needs,
from routine cleanings to dental implants, from whitening to more advanced cosmetic dentistry.
Dental Care at 209 NYC Dental
Our 209 NYC Dentists and staff understand the challenges that NYC patients face. Our patients have stayed with us for decades.
They travel from all over the US, Europe and distant parts of the world to see us.
Whether you have Dental Insurance or not, take advantage of a Free Consultation!
NYC Smile Design.
NYC Cosmetic Dentists Serving Manhattan and New York, NY.
COVID-19 Message to Our Patients, Future Patients, and Friends.
At NYC Smile Design, experienced New York City cosmetic dentists Dr. Elisa Mello and Dr. Ramin Tabib collaborate to provide you with comprehensive dental care and outstanding cosmetic dentistry results. Together they founded NYC Smile Design in 1994 and have been dedicated to providing life-changing dentistry ever since. To schedule a consultation, please call us at 212-452-3344.
As partners in marriage as well as business, Dr. Mello and Dr. Tabib have a warmth and commitment to excellence that influences all aspects of their lives. Their collaboration at NYC Smile Design means your care is approached in a multi-disciplinary, comprehensive manner. The dentists and entire staff share a belief in the importance of being a complete person: adhering to the highest professional standards while continuing to grow at the personal level.
Dr. Mello and Dr. Tabib are committed to understanding your perspective and respecting that you are entrusting them with your smile and dental health. Our dentists are also active in giving back to the community and have worked with programs including "Smiles for Success," restoring the smiles of battered women to build their confidence as they begin anew. Dr. Mello and Dr. Tabib have also provided services to participants of the Doe Fund, an organization that helps homeless men attain housing and employment.
"""
from sklearn.feature_extraction.text import CountVectorizer
n_gram_range = (1, 2)
stop_words = "english"
# Extract candidate words/phrases
count = CountVectorizer(ngram_range=n_gram_range, stop_words=stop_words).fit([doc])
candidates = count.get_feature_names_out()  # use get_feature_names() on scikit-learn < 1.0
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('distilbert-base-nli-mean-tokens')
doc_embedding = model.encode([doc])
candidate_embeddings = model.encode(candidates)
from sklearn.metrics.pairwise import cosine_similarity
top_n = 5
distances = cosine_similarity(doc_embedding, candidate_embeddings)
keywords = [candidates[index] for index in distances.argsort()[0][-top_n:]]
keywords
import numpy as np
import itertools
def max_sum_sim(doc_embedding, word_embeddings, words, top_n, nr_candidates):
    # Calculate distances between the document and every candidate word
    distances = cosine_similarity(doc_embedding, word_embeddings)
    distances_candidates = cosine_similarity(word_embeddings,
                                             word_embeddings)
    # Get top_n words as candidates based on cosine similarity
    words_idx = list(distances.argsort()[0][-nr_candidates:])
    words_vals = [words[index] for index in words_idx]
    distances_candidates = distances_candidates[np.ix_(words_idx, words_idx)]
    # Calculate the combination of words that are the least similar to each other
    min_sim = np.inf
    candidate = None
    for combination in itertools.combinations(range(len(words_idx)), top_n):
        sim = sum([distances_candidates[i][j] for i in combination for j in combination if i != j])
        if sim < min_sim:
            candidate = combination
            min_sim = sim
    return [words_vals[idx] for idx in candidate]
max_sum_sim(doc_embedding, candidate_embeddings, candidates, top_n=5, nr_candidates=10)
from keybert import KeyBERT
model = KeyBERT('distilbert-base-nli-mean-tokens')
keywords = model.extract_keywords(doc, keyphrase_length=1, stop_words=None)
keywords
model.extract_keywords(doc, keyphrase_length=3, stop_words='english', use_maxsum=True, nr_candidates=20, top_n=5)
model.extract_keywords(doc, keyphrase_length=3, stop_words='english', use_mmr=True, diversity=0.7)
model.extract_keywords(doc, keyphrase_length=3, stop_words='english', use_mmr=True, diversity=0.2)
import spacy
from spacycake import BertKeyphraseExtraction as bake
nlp = spacy.load('en')  # on spaCy >= 3, use spacy.load('en_core_web_sm')
cake = bake(nlp, from_pretrained='bert-base-uncased', top_k=5)
nlp.add_pipe(cake, last=True)
print(nlp(doc)._.extracted_phrases)
```
```
import sys
sys.path.append('../DeepMimic')
from util.logger import Logger
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
output_folder = './output/dog3d_drl_finetuning_heading_control/'
def read_log_files(filenames):
log = None
Iter_Step = 200
for i, filename in enumerate(filenames):
log_part = np.genfromtxt(filename, dtype=None, names=True, encoding='utf-8')
if log is None:
log = log_part
else:
log_intermediate = dict()
for j, key in enumerate(list(log_part.dtype.names)):
if 'Iteration' in key:
log_intermediate[key] = np.concatenate((log[key], log_part[key] + log[key][-1] + Iter_Step), axis=0)
elif 'Wall_Time' in key:
log_intermediate[key] = np.concatenate((log[key], log_part[key] + log[key][-1]), axis=0)
else:
log_intermediate[key] = np.concatenate((log[key], log_part[key]), axis=0)
log = log_intermediate
return log
A = read_log_files([output_folder+'agent0_log.txt'])
print('Wall_Time: %.1f hours' % (A['Wall_Time'][-1]))
print('Samples: %d' % (A['Samples'][-1]))
x_label = 'Samples'
y_label = 'Critic_Loss'
fig = plt.figure()
plt.plot(A[x_label], A[y_label])
plt.ylabel(y_label)
plt.xlabel(x_label)
y_label = 'Actor_Loss'
fig = plt.figure()
plt.plot(A[x_label], A[y_label])
plt.ylabel(y_label)
plt.xlabel(x_label)
y_label = 'Return'
fig = plt.figure()
plt.plot(A[x_label], A['Train_Return'], label='Training')
plt.plot(A[x_label], A['Test_Return'], label='Testing')
# plt.plot(A[x_label], np.ones(len(A['Test_Return'])) * 600, linestyle='--', color='black', alpha=0.5, label='Maximum Return')
plt.legend()
plt.grid(alpha=0.4)
plt.ylabel(y_label)
plt.xlabel(x_label)
# fig.savefig(output_folder+'training_result.png', format='png', dpi=200, bbox_inches='tight')
y_label = 'Exp_Rate'
fig = plt.figure()
plt.plot(A[x_label], A['Exp_Rate'], label='Train')
plt.legend()
plt.grid(alpha=0.4)
plt.ylabel(y_label)
plt.xlabel(x_label)
max_episode_lengths = [0] * len(A['Samples'])
time_lim = 0.5
time_end_lim = 20
anneal_samples = 100000000
for i, samples in enumerate(A['Samples']):
if samples > anneal_samples:
max_episode_lengths[i] = time_end_lim
else:
t = samples / anneal_samples
max_episode_lengths[i] = t * time_end_lim + (1-t) * time_lim
y_label = 'Train_Path_Length'
fig = plt.figure()
plt.plot(A['Samples'], A['Train_Path_Length'], label='Path Length')
plt.plot(A['Samples'], max_episode_lengths, label='Max Episode Length')
plt.show()
import numpy as np
goals = np.load('output/dog3d_gan_control_adapter_speed_control/goals.npy')
print(goals.shape)
import glob
def _parse_record_data(filename):
trajectories = []
with open(filename) as fin:
lines = fin.readlines()
i = 0
records = []
while i < len(lines):
if lines[i] == '\n':
trajectories += [records]
records = []
i += 1
else:
state = np.fromstring(lines[i], dtype=np.float32, sep=' ')
goal = np.fromstring(lines[i + 1], dtype=np.float32, sep=' ')
weight = np.fromstring(lines[i + 2], dtype=np.float32, sep=' ')
record = [state, goal, weight]
records += [record]
i += 3
return trajectories
trajectories = []
record_files = glob.glob('output/dog3d_gan_control_adapter_speed_control/intermediate/agent0_records/*.txt')
for record_file in record_files:
print('Read file ' + record_file)
_trajectories = _parse_record_data(record_file)
trajectories += _trajectories
print(len(trajectories))
for i, record_data in enumerate(trajectories):
if i >= 1:
break
    print([record[2] for record in record_data])  # the weight of each record
```
```
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from IPython.display import display
```
## Exercise 1
The [Pima Indians dataset](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes) is a well-known dataset distributed by UCI and originally collected from the National Institute of Diabetes and Digestive and Kidney Diseases. It contains data from clinical exams for women aged 21 and above of Pima Indian origin. The objective is to predict, based on diagnostic measurements, whether a patient has diabetes.
It has the following features:
- Pregnancies: Number of times pregnant
- Glucose: Plasma glucose concentration at 2 hours in an oral glucose tolerance test
- BloodPressure: Diastolic blood pressure (mm Hg)
- SkinThickness: Triceps skin fold thickness (mm)
- Insulin: 2-Hour serum insulin (mu U/ml)
- BMI: Body mass index (weight in kg/(height in m)^2)
- DiabetesPedigreeFunction: Diabetes pedigree function
- Age: Age (years)
The last column is the outcome, a binary variable.
In this first exercise we will explore it through the following steps:
1. Load the `../data/diabetes.csv` dataset and use pandas to explore the range of each feature
- For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure.
- Explore correlations of features with the outcome column. You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.
- Do features need standardization? If so, what standardization technique will you use? MinMax? Standard?
- Prepare your final `X` and `y` variables to be used by a ML model. Make sure you define your target variable well. Will you need dummy columns?
### 1. Load the `../data/diabetes.csv` dataset and use pandas to explore the range of each feature
```
df = pd.read_csv('diabetes.csv')
display(df.info())
display(df.head())
display(df.describe())
# Labels
display(df['Outcome'].value_counts())
```
### 2. For each feature draw a histogram. Bonus points if you draw all the histograms in the same figure
```
df.hist(figsize=(12, 12));
```
### 3. Explore correlations of features with the outcome column. You can do this in several ways, for example using the `sns.pairplot` we used above or drawing a heatmap of the correlations.
```
sns.pairplot(df, hue="Outcome");
# Features overlap a lot
sns.heatmap(df.corr(), annot = True);
```
### 4. Do features need standardization? If so, what standardization technique will you use? MinMax? Standard?
- Scale `X` with `StandardScaler` (every feature gets zero mean and unit variance)
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit_transform(df.drop('Outcome', axis=1))
X.shape
```
- One-hot encoding y
```
from keras.utils import to_categorical
y = df['Outcome'].values
y_cat = to_categorical(y)
y_cat.shape
```
## Exercise 2
1. Split your data in a train/test with a test size of 20% and a `random_state = 22`
- define a sequential model with at least one inner layer. You will have to make choices for the following things:
- what is the size of the input?
- how many nodes will you use in each layer?
- what is the size of the output?
- what activation functions will you use in the inner layers?
- what activation function will you use at output?
- what loss function will you use?
- what optimizer will you use?
- fit your model on the training set, using a validation_split of 0.1
- test your trained model on the test data from the train/test split
- check the accuracy score, the confusion matrix and the classification report
### 1. Split your data in a train/test with a test size of 20% and a random_state = 22
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y_cat,
random_state=22, test_size=0.2)
```
### 2. Define a sequential model with at least one inner layer.
```
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
model = Sequential()
model.add(Dense(32, input_shape=(8,), activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(Adam(lr=0.05),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
```
### 3. Fit your model on the training set, using a validation_split of 0.1
```
result = model.fit(
X_train, y_train,
epochs=30, verbose=2, validation_split=0.1)
```
### 4. Test your trained model on the test data from the train/test split
```
plt.plot(result.history['acc'])
plt.plot(result.history['val_acc'])
plt.legend(['Training', 'Validation'])
plt.title('Accuracy')
plt.xlabel('Epochs')
from sklearn.metrics import accuracy_score
# Training set
y_train_pred = model.predict(X_train)
y_train_pred_label = [np.argmax(y, axis=None, out=None) for y in y_train_pred] # reverse One-hot encoding
y_train_label = [np.argmax(y, axis=None, out=None) for y in y_train] # reverse One-hot encoding
print("The Accuracy score on the Train set is:\t{:0.3f}".format(
accuracy_score(
y_train_label,
y_train_pred_label)))
```
### 5. Check the accuracy score, the confusion matrix and the classification report
```
# Test set
y_test_pred = model.predict(X_test)
y_test_pred_label = [np.argmax(y, axis=None, out=None) for y in y_test_pred] # reverse One-hot encoding
y_test_label = [np.argmax(y, axis=None, out=None) for y in y_test] # reverse One-hot encoding
from sklearn.metrics import accuracy_score
print("The Accuracy score on the Test set is:\t{:0.3f}".format(
accuracy_score(
y_test_label,
y_test_pred_label)))
from sklearn.metrics import classification_report
print(classification_report(
y_test_label,
y_test_pred_label))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(
y_test_label,
y_test_pred_label))
```
## Exercise 3
Compare your work with the results presented in [this notebook](https://www.kaggle.com/futurist/d/uciml/pima-indians-diabetes-database/pima-data-visualisation-and-machine-learning). Are your Neural Network results better or worse than the results obtained by traditional Machine Learning techniques?
- Try training a Support Vector Machine or a Random Forest model on the exact same train/test split. Is the performance better or worse?
- Try restricting your features to only 4 features like in the suggested notebook. How does model performance change?
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
for model in [RandomForestClassifier(), SVC(), GaussianNB()]:
model.fit(X_train, y_train[:, 1])
y_pred = model.predict(X_test)
print("="*80)
print(model)
print("-"*80)
print("Accuracy score: {:0.3}".format(accuracy_score(y_test_label,
y_pred)))
print("Confusion Matrix:")
print(confusion_matrix(y_test_label, y_pred))
print()
```
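The second bullet of Exercise 3 (restricting to only 4 features) is not answered above. Here is a minimal sketch; note that the specific four columns below are an assumption on my part, not necessarily the ones used in the referenced Kaggle notebook, so swap in whichever subset you prefer.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def fit_on_features(df, features, target='Outcome'):
    """Standardize the chosen features, refit on the same style of
    train/test split as above, and return the test accuracy."""
    X = StandardScaler().fit_transform(df[features])
    y = df[target].values
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, random_state=22, test_size=0.2)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# In this notebook (column names are an assumed subset):
# acc4 = fit_on_features(df, ['Glucose', 'BMI', 'Age',
#                             'DiabetesPedigreeFunction'])
```

Comparing `acc4` against the full-feature accuracy above shows how much predictive signal the dropped columns carried.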
## Exercise 4
[Tensorflow playground](http://playground.tensorflow.org/) is a web-based neural network demo. It is really useful for developing an intuition about what happens when you change the architecture, activation function, or other parameters. Try playing with it for a few minutes. You don't need to understand the meaning of every knob and button on the page, just get a sense for what happens if you change something. In the next chapter we'll explore these things in more detail.
```
#import dask
#dask.config.config
```
## Running Dask on Summit via IPython Terminal
You will need 2 terminals and a browser for this lab
___
#### In terminal 1
1. login to summit
2. activate the conda environment
   `module load ibm-wml-ce/1.7.0-1`
   `conda activate wmlce17-ornl`
3. launch ipython
   `ipython`
#### In terminal 2
1. forward ssh ports from the login node to your laptop. Here XXXX should be an unused port on the system (e.g. 7777). Pay attention to forwarding from the same login node (`loginYY`) you are logged into.
   `ssh -N -L XXXX:loginYY.summit.olcf.ornl.gov:XXXX userid@summit.olcf.ornl.gov`
   e.g.
   `ssh -N -L 3761:login4.summit.olcf.ornl.gov:3761 vanstee@summit.olcf.ornl.gov`
## Dask on Summit
```
# This library enables interoperability with clusters (like LSF)
import sys
from dask_jobqueue import LSFCluster
# Per node specification
dask_worker_prefix = "jsrun -n1 -a1 -g0 -c2"
cluster = LSFCluster(
scheduler_options={"dashboard_address": ":3761"},
cores=8,
processes=1,
memory="4 GB",
project="VEN201",
walltime="00:30",
job_extra=["-nnodes 1"], # <--- new!
header_skip=["-R", "-n ", "-M"], # <--- new!
interface='ib0',
use_stdin=False,
python= f"{dask_worker_prefix} {sys.executable}"
)
```
## Let's see what is sent to LSF
```
print(cluster.job_script())
from dask.distributed import Client
client = Client(cluster)
client
# Open another terminal here and run bjobs ..
cluster.scale(4)
# takes a couple of mins potentially ....
client
#In [11]: client
#Out[11]: <Client: 'tcp://10.41.0.34:37579' processes=2 threads=16, memory=8.00 GB>
# In another terminal: watch bjobs
```
## A simple NumPy example
```
import dask.array as da
# 25M-element array, 400 chunks of 250x250
x = da.random.random([5000,5000], chunks=[250,250])
cluster.scale(8)
x = x.persist()
x
y = x.T ** x - x.mean()
# Note if you run y.compute() the result is not saved ...
# each request triggers computation..
print(y.compute())
print(y.compute())
# Now let's pin it to memory (persist returns a new collection, so rebind) and re-run
y = y.persist()
print(y.compute())
# Persist vs Compute https://distributed.dask.org/en/latest/memory.html
# use compute when the return value is small and you want to feed result into other analyses.
# use persist (similar to cache in spark) to trigger computation and pin results to memory.
# Follow actions build task graphs, but only up to this point as it will use the value calculated by persist.
```
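The persist-vs-compute comments above can be made concrete without a cluster. Below is a toy, pure-Python stand-in (the `Lazy` class is my own, not part of Dask) that mimics the semantics: `compute()` re-executes the task each call, while `persist()` runs it once and pins the result in memory.

```python
# Toy stand-in for Dask's compute()/persist() semantics (not Dask code):
# compute() re-runs the work every call; persist() runs once and caches.
class Lazy:
    def __init__(self, fn):
        self.fn = fn
        self.n_runs = 0      # how many times the work actually executed
        self._cached = None  # set by persist()

    def compute(self):
        if self._cached is not None:
            return self._cached      # persisted: served from memory
        self.n_runs += 1
        return self.fn()

    def persist(self):
        self.n_runs += 1
        self._cached = self.fn()     # trigger computation, pin the result
        return self

y = Lazy(lambda: sum(i * i for i in range(1000)))
y.compute(); y.compute()   # two full executions (n_runs == 2)
y = y.persist()            # third and final execution
y.compute(); y.compute()   # both served from the pinned result
```

As with real Dask, the key point is that `persist` returns a collection backed by in-memory results, so later calls build on the cached value rather than re-running the whole graph.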
## Simple Pandas Example with our lending club data ...
```
dtype={'acc_now_delinq': 'float64',
'acc_open_past_24mths': 'float64',
'all_util': 'float64',
'avg_cur_bal': 'float64',
'chargeoff_within_12_mths': 'float64',
'collections_12_mths_ex_med': 'float64',
'delinq_2yrs': 'float64',
'delinq_amnt': 'float64',
'desc': 'object',
'fico_range_high': 'float64',
'fico_range_low': 'float64',
'funded_amnt': 'float64',
'funded_amnt_inv': 'float64',
'id': 'object',
'inq_fi': 'float64',
'inq_last_12m': 'float64',
'inq_last_6mths': 'float64',
'last_fico_range_high': 'float64',
'last_fico_range_low': 'float64',
'loan_amnt': 'float64',
'max_bal_bc': 'float64',
'mo_sin_old_rev_tl_op': 'float64',
'mo_sin_rcnt_rev_tl_op': 'float64',
'mo_sin_rcnt_tl': 'float64',
'mort_acc': 'float64',
'num_accts_ever_120_pd': 'float64',
'num_actv_bc_tl': 'float64',
'num_actv_rev_tl': 'float64',
'num_bc_sats': 'float64',
'num_bc_tl': 'float64',
'num_il_tl': 'float64',
'num_op_rev_tl': 'float64',
'num_rev_accts': 'float64',
'num_rev_tl_bal_gt_0': 'float64',
'num_sats': 'float64',
'num_tl_30dpd': 'float64',
'num_tl_90g_dpd_24m': 'float64',
'num_tl_op_past_12m': 'float64',
'open_acc': 'float64',
'open_acc_6m': 'float64',
'open_act_il': 'float64',
'open_il_12m': 'float64',
'open_il_24m': 'float64',
'open_rv_12m': 'float64',
'open_rv_24m': 'float64',
'policy_code': 'float64',
'pub_rec': 'float64',
'pub_rec_bankruptcies': 'float64',
'revol_bal': 'float64',
'tax_liens': 'float64',
'tot_coll_amt': 'float64',
'tot_cur_bal': 'float64',
'tot_hi_cred_lim': 'float64',
'total_acc': 'float64',
'total_bal_ex_mort': 'float64',
'total_bal_il': 'float64',
'total_bc_limit': 'float64',
'total_cu_tl': 'float64',
'total_il_high_credit_limit': 'float64',
'total_rev_hi_lim': 'float64'}
# dummy data for demo...
!cp ../Tabular/ldata2016.csv.gz ./
!gunzip ./ldata2016.csv.gz
# import dask
import dask.dataframe as dd
ddf = dd.read_csv("./ldata2016.csv", blocksize=15e6, dtype=dtype)  # path matches the gunzipped file above; or read the .gz directly with compression="gzip"
#
#ddf = ddf.repartition(npartitions=5)
ddf
# Standard operations example
filtered_df = ddf[ddf["loan_amnt"] > 15000]
answer = filtered_df.compute()
#compare
len(answer)
len(ddf)
print(ddf.columns)
# ok, let's count NaNs ..
ddf.isna().sum().compute()
# dask doesn't handle NaNs well here, so let's keep just a few columns ..
ddf_small = ddf[['id', 'loan_amnt', 'funded_amnt', 'revol_bal', 'dti']]
# Check NaNs
ddf_small.isna().sum().compute()
ddf_small.describe().compute()
# correlation
ddf.corr().compute()
# Do one join (a self-join on id)
dd.merge(ddf_small, ddf_small, on='id')  # [['loan_amnt', 'funded_amnt']]
```
```
import sys
import time
import os.path
from glob import glob
from datetime import datetime, timedelta
# data tools
import h5py
import numpy as np
# custom tools
sys.path.insert(0, '/glade/u/home/ksha/WORKSPACE/utils/')
sys.path.insert(0, '/glade/u/home/ksha/WORKSPACE/Analog_BC/')
sys.path.insert(0, '/glade/u/home/ksha/WORKSPACE/Analog_BC/utils/')
import data_utils as du
import graph_utils as gu
from namelist import *
import warnings
warnings.filterwarnings("ignore")
# graph tools
import cmaps
import cartopy.crs as ccrs
import cartopy.mpl.geoaxes
import cartopy.feature as cfeature
from cartopy.io.shapereader import Reader
from cartopy.feature import ShapelyFeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
import matplotlib.colors as colors
import matplotlib.patches as patches
from matplotlib.collections import PatchCollection
from matplotlib import ticker
import matplotlib.ticker as mticker
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
%matplotlib inline
need_publish = False
# True: publication quality figures
# False: low resolution figures in the notebook
if need_publish:
dpi_ = fig_keys['dpi']
else:
dpi_ = 75
# def bs_3c(pred, frac, o, hist):
# '''
# bs three components decompose
# '''
# L = len(pred)
# f = np.empty((L,))
# r = np.empty((L,))
# for i in range(L):
# f[i] = np.nansum(((pred[i, :] - frac[i, :])**2)*hist[i, :])/np.sum(hist[i, :])
# r[i] = np.nansum(((frac[i, :] - o[i])**2)*hist[i, :])/np.sum(hist[i, :])
# return f, r
# with h5py.File(save_dir+'D_SCNN_Calib_loc0.hdf', 'r') as h5io:
# SCNN_frac_loc0 = h5io['pos_frac'][...]
# SCNN_pred_loc0 = h5io['pred_value'][...]
# SCNN_use_loc0 = h5io['use'][...]
# o_bar_loc0 = h5io['o_bar'][...]
# with h5py.File(save_dir+'BCSD_SCNN_Calib_loc0.hdf', 'r') as h5io:
# BCSD_SCNN_frac_loc0 = h5io['pos_frac'][...]
# BCSD_SCNN_pred_loc0 = h5io['pred_value'][...]
# BCSD_SCNN_use_loc0 = h5io['use'][...]
# with h5py.File(save_dir+'DCNN_SCNN_Calib_loc0.hdf', 'r') as h5io:
# DCNN_SCNN_frac_loc0 = h5io['pos_frac'][...]
# DCNN_SCNN_pred_loc0 = h5io['pred_value'][...]
# DCNN_SCNN_use_loc0 = h5io['use'][...]
# with h5py.File(save_dir+'D_SCNN_Calib_loc1.hdf', 'r') as h5io:
# SCNN_frac_loc1 = h5io['pos_frac'][...]
# SCNN_pred_loc1 = h5io['pred_value'][...]
# SCNN_use_loc1 = h5io['use'][...]
# o_bar_loc1 = h5io['o_bar'][...]
# with h5py.File(save_dir+'BCSD_SCNN_Calib_loc1.hdf', 'r') as h5io:
# BCSD_SCNN_frac_loc1 = h5io['pos_frac'][...]
# BCSD_SCNN_pred_loc1 = h5io['pred_value'][...]
# BCSD_SCNN_use_loc1 = h5io['use'][...]
# with h5py.File(save_dir+'DCNN_SCNN_Calib_loc1.hdf', 'r') as h5io:
# DCNN_SCNN_frac_loc1 = h5io['pos_frac'][...]
# DCNN_SCNN_pred_loc1 = h5io['pred_value'][...]
# DCNN_SCNN_use_loc1 = h5io['use'][...]
# with h5py.File(save_dir+'D_SCNN_Calib_loc2.hdf', 'r') as h5io:
# SCNN_frac_loc2 = h5io['pos_frac'][...]
# SCNN_pred_loc2 = h5io['pred_value'][...]
# SCNN_use_loc2 = h5io['use'][...]
# o_bar_loc2 = h5io['o_bar'][...]
# with h5py.File(save_dir+'BCSD_SCNN_Calib_loc2.hdf', 'r') as h5io:
# BCSD_SCNN_frac_loc2 = h5io['pos_frac'][...]
# BCSD_SCNN_pred_loc2 = h5io['pred_value'][...]
# BCSD_SCNN_use_loc2 = h5io['use'][...]
# with h5py.File(save_dir+'DCNN_SCNN_Calib_loc2.hdf', 'r') as h5io:
# DCNN_SCNN_frac_loc2 = h5io['pos_frac'][...]
# DCNN_SCNN_pred_loc2 = h5io['pred_value'][...]
# DCNN_SCNN_use_loc2 = h5io['use'][...]
# PLOT_loc0 = {}
# PLOT_loc0['scnn_fm'] = np.mean(SCNN_frac_loc0, axis=2)
# PLOT_loc0['scnn_pm'] = np.mean(SCNN_pred_loc0, axis=2)
# # 95 CI
# PLOT_loc0['scnn_fs'] = np.quantile(np.abs(SCNN_frac_loc0-PLOT_loc0['scnn_fm'][..., None]), 0.95, axis=2)
# PLOT_loc0['bcsd_fm'] = np.mean(BCSD_SCNN_frac_loc0, axis=2)
# PLOT_loc0['bcsd_pm'] = np.mean(BCSD_SCNN_pred_loc0, axis=2)
# PLOT_loc0['bcsd_fs'] = np.quantile(np.abs(BCSD_SCNN_frac_loc0-PLOT_loc0['bcsd_fm'][..., None]), 0.95, axis=2)
# PLOT_loc0['dcnn_fm'] = np.mean(DCNN_SCNN_frac_loc0, axis=2)
# PLOT_loc0['dcnn_pm'] = np.mean(DCNN_SCNN_pred_loc0, axis=2)
# PLOT_loc0['dcnn_fs'] = np.quantile(np.abs(DCNN_SCNN_frac_loc0-PLOT_loc0['dcnn_fm'][..., None]), 0.95, axis=2)
# PLOT_loc1 = {}
# PLOT_loc1['scnn_fm'] = np.mean(SCNN_frac_loc1, axis=2)
# PLOT_loc1['scnn_pm'] = np.mean(SCNN_pred_loc1, axis=2)
# PLOT_loc1['scnn_fs'] = np.quantile(np.abs(SCNN_frac_loc1-PLOT_loc1['scnn_fm'][..., None]), 0.95, axis=2)
# PLOT_loc1['bcsd_fm'] = np.mean(BCSD_SCNN_frac_loc1, axis=2)
# PLOT_loc1['bcsd_pm'] = np.mean(BCSD_SCNN_pred_loc1, axis=2)
# PLOT_loc1['bcsd_fs'] = np.quantile(np.abs(BCSD_SCNN_frac_loc1-PLOT_loc1['bcsd_fm'][..., None]), 0.95, axis=2)
# PLOT_loc1['dcnn_fm'] = np.mean(DCNN_SCNN_frac_loc1, axis=2)
# PLOT_loc1['dcnn_pm'] = np.mean(DCNN_SCNN_pred_loc1, axis=2)
# PLOT_loc1['dcnn_fs'] = np.quantile(np.abs(DCNN_SCNN_frac_loc1-PLOT_loc1['dcnn_fm'][..., None]), 0.95, axis=2)
# PLOT_loc2 = {}
# PLOT_loc2['scnn_fm'] = np.mean(SCNN_frac_loc2, axis=2)
# PLOT_loc2['scnn_pm'] = np.mean(SCNN_pred_loc2, axis=2)
# PLOT_loc2['scnn_fs'] = np.quantile(np.abs(SCNN_frac_loc2-PLOT_loc2['scnn_fm'][..., None]), 0.95, axis=2)
# PLOT_loc2['bcsd_fm'] = np.mean(BCSD_SCNN_frac_loc2, axis=2)
# PLOT_loc2['bcsd_pm'] = np.mean(BCSD_SCNN_pred_loc2, axis=2)
# PLOT_loc2['bcsd_fs'] = np.quantile(np.abs(BCSD_SCNN_frac_loc2-PLOT_loc2['bcsd_fm'][..., None]), 0.95, axis=2)
# PLOT_loc2['dcnn_fm'] = np.mean(DCNN_SCNN_frac_loc2, axis=2)
# PLOT_loc2['dcnn_pm'] = np.mean(DCNN_SCNN_pred_loc2, axis=2)
# PLOT_loc2['dcnn_fs'] = np.quantile(np.abs(DCNN_SCNN_frac_loc2-PLOT_loc2['dcnn_fm'][..., None]), 0.95, axis=2)
# USE_loc0 = {}
# USE_loc0['scnn'] = SCNN_use_loc0/np.sum(SCNN_use_loc0)
# USE_loc0['dcnn'] = DCNN_SCNN_use_loc0/np.sum(DCNN_SCNN_use_loc0)
# USE_loc0['bcsd'] = BCSD_SCNN_use_loc0/np.sum(BCSD_SCNN_use_loc0)
# USE_loc1 = {}
# USE_loc1['scnn'] = SCNN_use_loc1/np.sum(SCNN_use_loc1)
# USE_loc1['dcnn'] = DCNN_SCNN_use_loc1/np.sum(DCNN_SCNN_use_loc1)
# USE_loc1['bcsd'] = BCSD_SCNN_use_loc1/np.sum(BCSD_SCNN_use_loc1)
# USE_loc2 = {}
# USE_loc2['scnn'] = SCNN_use_loc2/np.sum(SCNN_use_loc2)
# USE_loc2['dcnn'] = DCNN_SCNN_use_loc2/np.sum(DCNN_SCNN_use_loc2)
# USE_loc2['bcsd'] = BCSD_SCNN_use_loc2/np.sum(BCSD_SCNN_use_loc2)
# o_bar = {}
# o_bar['loc0'] = o_bar_loc0
# o_bar['loc1'] = o_bar_loc1
# o_bar['loc2'] = o_bar_loc2
# REL_loc0 = {}
# RES_loc0 = {}
# BS_3c_loc0 = {}
# rel_, res_ = bs_3c(PLOT_loc0['scnn_pm'], PLOT_loc0['scnn_fm'], o_bar['loc0'], USE_loc0['scnn'])
# REL_loc0['scnn'] = rel_
# RES_loc0['scnn'] = res_
# BS_3c_loc0['scnn'] = o_bar_loc0*(1-o_bar_loc0) + rel_ - res_
# rel_, res_ = bs_3c(PLOT_loc0['bcsd_pm'], PLOT_loc0['bcsd_fm'], o_bar['loc0'], USE_loc0['bcsd'])
# REL_loc0['bcsd'] = rel_
# RES_loc0['bcsd'] = res_
# BS_3c_loc0['bcsd'] = o_bar_loc0*(1-o_bar_loc0) + rel_ - res_
# rel_, res_ = bs_3c(PLOT_loc0['dcnn_pm'], PLOT_loc0['dcnn_fm'], o_bar['loc0'], USE_loc0['dcnn'])
# REL_loc0['dcnn'] = rel_
# RES_loc0['dcnn'] = res_
# BS_3c_loc0['dcnn'] = o_bar_loc0*(1-o_bar_loc0) + rel_ - res_
# REL_loc1 = {}
# RES_loc1 = {}
# BS_3c_loc1 = {}
# rel_, res_ = bs_3c(PLOT_loc1['scnn_pm'], PLOT_loc1['scnn_fm'], o_bar['loc1'], USE_loc1['scnn'])
# REL_loc1['scnn'] = rel_
# RES_loc1['scnn'] = res_
# BS_3c_loc1['scnn'] = o_bar_loc1*(1-o_bar_loc1) + rel_ - res_
# rel_, res_ = bs_3c(PLOT_loc1['bcsd_pm'], PLOT_loc1['bcsd_fm'], o_bar['loc1'], USE_loc1['bcsd'])
# REL_loc1['bcsd'] = rel_
# RES_loc1['bcsd'] = res_
# BS_3c_loc1['bcsd'] = o_bar_loc1*(1-o_bar_loc1) + rel_ - res_
# rel_, res_ = bs_3c(PLOT_loc1['dcnn_pm'], PLOT_loc1['dcnn_fm'], o_bar['loc1'], USE_loc1['dcnn'])
# REL_loc1['dcnn'] = rel_
# RES_loc1['dcnn'] = res_
# BS_3c_loc1['dcnn'] = o_bar_loc1*(1-o_bar_loc1) + rel_ - res_
# REL_loc2 = {}
# RES_loc2 = {}
# BS_3c_loc2 = {}
# rel_, res_ = bs_3c(PLOT_loc2['scnn_pm'], PLOT_loc2['scnn_fm'], o_bar['loc2'], USE_loc2['scnn'])
# REL_loc2['scnn'] = rel_
# RES_loc2['scnn'] = res_
# BS_3c_loc2['scnn'] = o_bar_loc2*(1-o_bar_loc2) + rel_ - res_
# rel_, res_ = bs_3c(PLOT_loc2['bcsd_pm'], PLOT_loc2['bcsd_fm'], o_bar['loc2'], USE_loc2['bcsd'])
# REL_loc2['bcsd'] = rel_
# RES_loc2['bcsd'] = res_
# BS_3c_loc2['bcsd'] = o_bar_loc2*(1-o_bar_loc2) + rel_ - res_
# rel_, res_ = bs_3c(PLOT_loc2['dcnn_pm'], PLOT_loc2['dcnn_fm'], o_bar['loc2'], USE_loc2['dcnn'])
# REL_loc2['dcnn'] = rel_
# RES_loc2['dcnn'] = res_
# BS_3c_loc2['dcnn'] = o_bar_loc2*(1-o_bar_loc2) + rel_ - res_
# save_dict = {'PLOT':PLOT_loc0, 'USE':USE_loc0, 'o_bar':o_bar['loc0'], 'REL':REL_loc0, 'RES':RES_loc0, 'BS_3c':BS_3c_loc0}
# np.save(save_dir+'DSCALE_Calib_BCH_loc0.npy', save_dict)
# save_dict = {'PLOT':PLOT_loc1, 'USE':USE_loc1, 'o_bar':o_bar['loc1'], 'REL':REL_loc1, 'RES':RES_loc1, 'BS_3c':BS_3c_loc1}
# np.save(save_dir+'DSCALE_Calib_BCH_loc1.npy', save_dict)
# save_dict = {'PLOT':PLOT_loc2, 'USE':USE_loc2, 'o_bar':o_bar['loc2'], 'REL':REL_loc2, 'RES':RES_loc2, 'BS_3c':BS_3c_loc2}
# np.save(save_dir+'DSCALE_Calib_BCH_loc2.npy', save_dict)
```
# Figures
```
gray = '0.5'
C = [gray, red, blue]
LS = ['-', '-', '-',]
M = ['v', 'o', '<', 's', '>', 'd', 'o']
KW = {}
KW['scnn'] = {'linestyle': '-', 'color': gray, 'ecolor':red, 'linewidth':2.5, 'elinewidth':2.5, 'barsabove':False}
KW['bcsd'] = {'linestyle': '-', 'color': red, 'ecolor':red, 'linewidth':2.5, 'elinewidth':2.5, 'barsabove':False}
KW['dcnn'] = {'linestyle': '-', 'color': blue, 'ecolor':blue, 'linewidth':2.5, 'elinewidth':2.5, 'barsabove':False}
kw_lines = {}
kw_lines['scnn'] = {'linestyle': '-', 'color': gray, 'linewidth':2.5}
kw_lines['bcsd'] = {'linestyle': '-', 'color': red, 'linewidth':2.5}
kw_lines['dcnn'] = {'linestyle': '-', 'color': blue, 'linewidth':2.5}
def filter_single_point(use, plot):
methods = ['scnn', 'bcsd', 'dcnn']
for i, m in enumerate(methods):
for j in range(3):
flag_single_point = use[m][j, :] < 1e-5
if np.sum(flag_single_point) > 0:
use[m][j, flag_single_point] = np.nan
plot['{}_fm'.format(m)][j, flag_single_point] = np.nan
return use, plot
```
## South Coast
```
calib_dict = np.load(save_dir+'DSCALE_Calib_BCH_loc0.npy', allow_pickle=True)[()]
PLOT = calib_dict['PLOT']
BS = calib_dict['BS_3c']
BS_3c = BS
USE = calib_dict['USE']
o_bar = calib_dict['o_bar']
REL = calib_dict['REL']
RES = calib_dict['RES']
USE, PLOT = filter_single_point(USE, PLOT)
print('obar: {}, {}, {}'.format(o_bar[0], o_bar[1], o_bar[2]))
fig = plt.figure(figsize=(12, 12/(3.2)*2.65), dpi=dpi_)
handle_title = []
handle_lines = []
handle_marker = []
gs = gridspec.GridSpec(7, 5, height_ratios=[0.45, 0.15, 1, 0.1, 0.5, 0.15, 0.3], width_ratios=[1, 0.1, 1, 0.1, 1])
YLIM = [7.5e-5, 1e0]
YLAB = [1e-4, 1e-3, 1e-2, 1e-1]
# no skill line
fake_x = np.linspace(0, 1, 100)
fake_y = [0.5*fake_x + 0.5*o_bar[0], 0.5*fake_x + 0.5*o_bar[1], 0.5*fake_x + 0.5*o_bar[2]]
AX_bs = []
AX_re = [] # reliability curve axis
AX_hi = [] # freq of use axis
AX_da = [] # data axis
for j in [0, 2, 4]:
AX_bs.append(plt.subplot(gs[0, j]))
AX_re.append(plt.subplot(gs[2, j]))
AX_hi.append(plt.subplot(gs[4, j]))
AX_da.append(plt.subplot(gs[6, j]))
plt.subplots_adjust(0, 0, 1, 1, hspace=0, wspace=0)
for i, ax in enumerate(AX_re):
ax = gu.ax_decorate_box(ax)
ax.plot(fake_x, fake_x, linewidth=1.5, linestyle='--', color='0.5')
ax.plot(fake_x, fake_y[i], linewidth=1.5, linestyle='--', color='r')
ax.tick_params(axis="both", which="both", labelbottom=True)
ax.set_xlim([0, 1])
ax.set_xticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_xticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_ylim([0, 1])
ax.set_yticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_yticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_aspect('equal')
for i, ax in enumerate(AX_da):
ax.set_axis_off()
for i, ax in enumerate(AX_hi):
ax = gu.ax_decorate_box(ax)
ax.tick_params(axis="both", which="both", labelbottom=True)
ax.set_xlim([0, 1])
ax.set_xticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_xticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_yscale('log')
ax.set_ylim(YLIM)
ax.set_yticks(YLAB)
ax.text(0.05, 0.05, 'Forecast Probability', ha='left', va='bottom', fontsize=14, transform=ax.transAxes)
AX_re[0].set_ylabel('Observed relative frequency', fontsize=14)
AX_hi[0].set_ylabel('Frequency of usage', fontsize=14)
AX_re[0].tick_params(axis="both", which="both", labelleft=True)
AX_hi[0].tick_params(axis="both", which="both", labelleft=True)
methods = ['scnn', 'bcsd', 'dcnn']
Z = [5, 4, 5, 4, 4]
labels = ['Interp-SL', 'BCSD-SL', 'DCNN-SL']
label_ = [' ',
' ',
' ']
bar_gap = 1/4
text_gap = 1/3
YLIM_bs = [-0.2, 1.0]
YTICKS_bs = np.array([0.0, 0.3, 0.6])
titles_bs = ['(a) Day-1 forecast. BSS',
'(b) Day-3 forecast. BSS',
'(c) Day-5 forecast. BSS']
titles_re = ['(d) Day-1 forecast. Reliability diagram',
'(e) Day-3 forecast. Reliability diagram',
'(f) Day-5 forecast. Reliability diagram']
for i in range(3):
ax_bs = AX_bs[i]
ax_bs.text(0.5, 0.975, titles_bs[i], ha='center', va='top', fontsize=14, transform=ax_bs.transAxes)
ax_bs = gu.ax_decorate(ax_bs, left_flag=False, bottom_flag=False)
ax_bs.grid(linestyle=":", linewidth=1.5)
ax_bs.xaxis.grid(False)
ax_bs.spines["left"].set_visible(False)
ax_bs.set_xlim([0, 1])
ax_bs.set_ylim(YLIM_bs)
ax_bs.set_yticks(YTICKS_bs)
d_bs = 0.125*(YLIM_bs[1] - YLIM_bs[0])
for m, method in enumerate(['scnn', 'bcsd', 'dcnn']):
x_ = np.array([bar_gap+bar_gap*m,])
x_text = 0.5*text_gap + text_gap*m
bs_ = 1-10*np.array([BS_3c['{}'.format(method)][i],])
marker_p, stem_p, base_p = ax_bs.stem(x_, bs_, bottom=0.0, use_line_collection=True)
plt.setp(marker_p, marker='s', ms=15, mew=2.5, mec='k', mfc=KW[method]['color'], zorder=4)
plt.setp(stem_p, linewidth=2.5, color='k')
ax_bs.text(x_, bs_+d_bs, '{:.3f}'.format(bs_[0]), va='center', ha='center', fontsize=14)
ax_bs.axhline(y=0.0, xmin=0, xmax=1, linewidth=2.5, color='k')
ax_bs.text(x_text, -0.2, labels[m], ha='center', va='bottom', color=C[m],
fontsize=13.5, fontweight='bold')
for i, ind in enumerate([0, 1, 2]):
AX_re[i].text(0.5, 1.025, titles_re[i], fontsize=14, ha='center', va='bottom', transform=AX_re[i].transAxes)
AX_re[i].axvline(x=o_bar[ind], ymin=0, ymax=1, linewidth=1.5, linestyle='--', color='0.5')
AX_re[i].axhline(y=o_bar[ind], xmin=0, xmax=1, linewidth=1.5, linestyle='--', color='0.5')
for m, method in enumerate(methods):
temp_p = PLOT['{}_pm'.format(method)][ind, :]
temp_f = PLOT['{}_fm'.format(method)][ind, :]
temp_95th = PLOT['{}_fs'.format(method)][ind, :]
temp_ux = temp_p
temp_uy = USE['{}'.format(method)][ind, :]
AX_re[i].errorbar(temp_p, temp_f, yerr=temp_95th/8.0, **KW[method], zorder=Z[m])
handle_lines += AX_hi[i].plot(temp_ux, temp_uy, color=C[m], linestyle=LS[m], linewidth=1.5,
marker='o', ms=7, mew=0, mfc=C[m], zorder=Z[m]-2, label=label_[m])
AX_re[0].text(0.125, 0.75, '$\overline{o}$ = 0.0756', ha='left', va='bottom', fontsize=14)
AX_re[1].text(0.125, 0.75, '$\overline{o}$ = 0.0792', ha='left', va='bottom', fontsize=14)
AX_re[2].text(0.125, 0.75, '$\overline{o}$ = 0.0792', ha='left', va='bottom', fontsize=14)
ax_t1 = fig.add_axes([0.5*(3.2-3.0)/3.2, 1.0, (3.0/3.2), 0.025])
ax_t1.set_axis_off()
handle_title += gu.string_partial_format(fig, ax_t1, 0, 1.0, 'left', 'top',
['Verifying downscaled ', 'daily',
' accumulated precipitation > monthly ',
'90-th', ' events. 2017-2019. ', 'South Coast', ' stations'],
['k',]*7, [14,]*7, ['normal', 'bold', 'normal', 'bold',
'normal', 'bold', 'normal'])
AX_bs[0].text(0.85, 1.0, '[*]', ha='right', va='top', fontsize=10, transform=AX_bs[0].transAxes);
AX_re[0].text(0.98, 1.085, '[*]', ha='right', va='top', fontsize=10, transform=AX_re[0].transAxes);
for handle in handle_title:
handle.set_bbox(dict(facecolor='w', pad=0, edgecolor='none', zorder=2))
loc_y = [0.708, 0.53, 0.352, 0.173, 0.,]
table_heads = ['Brier', 'REL', 'RES']
locx_heads = [0.25, 0.5, 0.75]
for i, ind in enumerate([0, 1, 2]):
AX_da[i].text(1.035, 0.5-0.035, r'($10^{-2}$)', ha='right', va='center', fontsize=14, transform=AX_da[i].transAxes)
for m, method in enumerate(methods):
for j in range(3):
AX_da[i].text(locx_heads[j], 0.93, table_heads[j], ha='center', va='bottom',
fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[0], loc_y[m], '{:.3f}'.format(1e2*BS['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[1], loc_y[m], '{:.3f}'.format(1e2*REL['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[2], loc_y[m], '{:.3f}'.format(1e2*RES['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
for m, method in enumerate(methods):
AX_da[0].text(-0.2, loc_y[m], labels[m], ha='left', va='bottom', color=C[m],
fontsize=14, fontweight='bold', transform=AX_da[0].transAxes)
handle_errbar = []
handle_errbar.append(mlines.Line2D([], [], label=label_[0], **kw_lines['scnn']))
handle_errbar.append(mlines.Line2D([], [], label=label_[1], **kw_lines['bcsd']))
handle_errbar.append(mlines.Line2D([], [], label=label_[2], **kw_lines['dcnn']))
ax_box = fig.add_axes([-0.055, -0.085+0.04, 0.75, 0.0625])
ax_box.set_axis_off()
ax_lw1 = inset_axes(ax_box, height='50%', width='35%', borderpad=0, loc=2)
ax_lw2 = inset_axes(ax_box, height='50%', width='35%', borderpad=0, loc=3)
ax_lg1 = inset_axes(ax_box, height='50%', width='65%', borderpad=0, loc=1)
ax_lg2 = inset_axes(ax_box, height='50%', width='65%', borderpad=0, loc=4)
ax_lw1.text(1, 0.5, 'Calibration curve : ', ha='right', va='center', fontsize=14, transform=ax_lw1.transAxes);
ax_lw2.text(1, 0.5, 'Frequency of usage (log-scale): ', ha='right', va='center', fontsize=14, transform=ax_lw2.transAxes);
ax_lw1.text(0.95, 0.55, '[**]', ha='right', va='center', fontsize=10, transform=ax_lw1.transAxes);
LG = ax_lg1.legend(handles=handle_errbar, bbox_to_anchor=(1, 1.5), ncol=5, prop={'size':14}, fancybox=False);
LG.get_frame().set_facecolor('none')
LG.get_frame().set_linewidth(0)
LG.get_frame().set_alpha(1.0)
LG1 = ax_lg2.legend(handles=handle_lines[0:3], bbox_to_anchor=(1, 1.5), ncol=5, prop={'size':14}, fancybox=False);
LG1.get_frame().set_facecolor('none')
LG1.get_frame().set_linewidth(0)
LG1.get_frame().set_alpha(1.0)
ax_lg1.text(0.1, 0.9, 'Interp-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['scnn']['color'], transform=ax_lg1.transAxes)
ax_lg1.text(0.435, 0.9, 'BCSD-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['bcsd']['color'], transform=ax_lg1.transAxes)
ax_lg1.text(0.78, 0.9, 'DCNN-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['dcnn']['color'], transform=ax_lg1.transAxes)
ax_lg2.text(0.1, 0.9, 'Interp-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['scnn']['color'], transform=ax_lg2.transAxes)
ax_lg2.text(0.435, 0.9, 'BCSD-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['bcsd']['color'], transform=ax_lg2.transAxes)
ax_lg2.text(0.78, 0.9, 'DCNN-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['dcnn']['color'], transform=ax_lg2.transAxes)
ax_lw1.set_axis_off()
ax_lg1.set_axis_off()
ax_lw2.set_axis_off()
ax_lg2.set_axis_off()
ax_w1 = fig.add_axes([-0.055, -0.135+0.045, 1.065, 0.025])
ax_w1.set_axis_off()
ax_w1.text(0, 1, '* Reliability diagrams and BS components are calculated relative to the 2000-2014 ERA5 monthly CDFs.',
ha='left', va='top', fontsize=14, transform=ax_w1.transAxes);
ax_w2 = fig.add_axes([-0.055, -0.2+0.045, 1.065, 0.05])
ax_w2.set_axis_off()
ax_w2.text(0, 1, '** Calibration curves are averaged over 100 bootstrap replicates.\n Error bars represent the 95% CI.',
ha='left', va='top', fontsize=14, transform=ax_w2.transAxes);
if need_publish:
# Save figure
fig.savefig(fig_dir+'DSCALE_Calib_SouthCoast.png', format='png', **fig_keys)
```
## Southern Interior
```
calib_dict = np.load(save_dir+'DSCALE_Calib_BCH_loc1.npy', allow_pickle=True)[()]
PLOT = calib_dict['PLOT']
BS = calib_dict['BS_3c']
BS_3c = calib_dict['BS_3c']
USE = calib_dict['USE']
o_bar = calib_dict['o_bar']
REL = calib_dict['REL']
RES = calib_dict['RES']
USE, PLOT = filter_single_point(USE, PLOT)
print('obar: {}, {}, {}'.format(o_bar[0], o_bar[1], o_bar[2]))
fig = plt.figure(figsize=(12, 12/(3.2)*2.65), dpi=dpi_)
handle_title = []
handle_lines = []
handle_marker = []
gs = gridspec.GridSpec(7, 5, height_ratios=[0.45, 0.15, 1, 0.1, 0.5, 0.15, 0.3], width_ratios=[1, 0.1, 1, 0.1, 1])
YLIM = [7.5e-5, 1e0]
YLAB = [1e-4, 1e-3, 1e-2, 1e-1]
# no skill line
fake_x = np.linspace(0, 1, 100)
fake_y = [0.5*fake_x + 0.5*o_bar[0], 0.5*fake_x + 0.5*o_bar[1], 0.5*fake_x + 0.5*o_bar[2]]
AX_bs = []
AX_re = [] # reliability curve axis
AX_hi = [] # freq of use axis
AX_da = [] # data axis
for j in [0, 2, 4]:
AX_bs.append(plt.subplot(gs[0, j]))
AX_re.append(plt.subplot(gs[2, j]))
AX_hi.append(plt.subplot(gs[4, j]))
AX_da.append(plt.subplot(gs[6, j]))
plt.subplots_adjust(0, 0, 1, 1, hspace=0, wspace=0)
for i, ax in enumerate(AX_re):
ax = gu.ax_decorate_box(ax)
ax.plot(fake_x, fake_x, linewidth=1.5, linestyle='--', color='0.5')
ax.plot(fake_x, fake_y[i], linewidth=1.5, linestyle='--', color='r')
ax.tick_params(axis="both", which="both", labelbottom=True)
ax.set_xlim([0, 1])
ax.set_xticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_xticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_ylim([0, 1])
ax.set_yticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_yticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_aspect('equal')
for i, ax in enumerate(AX_da):
ax.set_axis_off()
for i, ax in enumerate(AX_hi):
ax = gu.ax_decorate_box(ax)
ax.tick_params(axis="both", which="both", labelbottom=True)
ax.set_xlim([0, 1])
ax.set_xticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_xticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_yscale('log')
ax.set_ylim(YLIM)
ax.set_yticks(YLAB)
ax.text(0.05, 0.05, 'Forecast Probability', ha='left', va='bottom', fontsize=14, transform=ax.transAxes)
AX_re[0].set_ylabel('Observed relative frequency', fontsize=14)
AX_hi[0].set_ylabel('Frequency of usage', fontsize=14)
AX_re[0].tick_params(axis="both", which="both", labelleft=True)
AX_hi[0].tick_params(axis="both", which="both", labelleft=True)
methods = ['scnn', 'bcsd', 'dcnn']
Z = [5, 4, 5, 4, 4]
labels = ['Interp-SL', 'BCSD-SL', 'DCNN-SL']
label_ = [' ',
' ',
' ']
bar_gap = 1/4
text_gap = 1/3
YLIM_bs = [-0.2, 1.0]
YTICKS_bs = np.array([0.0, 0.3, 0.6])
titles_bs = ['(a) Day-1 forecast. BSS',
'(b) Day-3 forecast. BSS',
'(c) Day-5 forecast. BSS']
titles_re = ['(d) Day-1 forecast. Reliability diagram',
'(e) Day-3 forecast. Reliability diagram',
'(f) Day-5 forecast. Reliability diagram']
for i in range(3):
ax_bs = AX_bs[i]
ax_bs.text(0.5, 0.975, titles_bs[i], ha='center', va='top', fontsize=14, transform=ax_bs.transAxes)
ax_bs = gu.ax_decorate(ax_bs, left_flag=False, bottom_flag=False)
ax_bs.grid(linestyle=":", linewidth=1.5)
ax_bs.xaxis.grid(False)
ax_bs.spines["left"].set_visible(False)
ax_bs.set_xlim([0, 1])
ax_bs.set_ylim(YLIM_bs)
ax_bs.set_yticks(YTICKS_bs)
d_bs = 0.125*(YLIM_bs[1] - YLIM_bs[0])
for m, method in enumerate(['scnn', 'bcsd', 'dcnn']):
x_ = np.array([bar_gap+bar_gap*m,])
x_text = 0.5*text_gap + text_gap*m
bs_ = 1-10*np.array([BS_3c['{}'.format(method)][i],])
marker_p, stem_p, base_p = ax_bs.stem(x_, bs_, bottom=0.0, use_line_collection=True)
plt.setp(marker_p, marker='s', ms=15, mew=2.5, mec='k', mfc=KW[method]['color'], zorder=4)
plt.setp(stem_p, linewidth=2.5, color='k')
ax_bs.text(x_, bs_+d_bs, '{:.3f}'.format(bs_[0]), va='center', ha='center', fontsize=14)
ax_bs.axhline(y=0.0, xmin=0, xmax=1, linewidth=2.5, color='k')
ax_bs.text(x_text, -0.2, labels[m], ha='center', va='bottom', color=C[m],
fontsize=13.5, fontweight='bold')
for i, ind in enumerate([0, 1, 2]):
AX_re[i].text(0.5, 1.025, titles_re[i], fontsize=14, ha='center', va='bottom', transform=AX_re[i].transAxes)
AX_re[i].axvline(x=o_bar[ind], ymin=0, ymax=1, linewidth=1.5, linestyle='--', color='0.5')
AX_re[i].axhline(y=o_bar[ind], xmin=0, xmax=1, linewidth=1.5, linestyle='--', color='0.5')
for m, method in enumerate(methods):
temp_p = PLOT['{}_pm'.format(method)][ind, :]
temp_f = PLOT['{}_fm'.format(method)][ind, :]
temp_95th = PLOT['{}_fs'.format(method)][ind, :]
temp_ux = temp_p
temp_uy = USE['{}'.format(method)][ind, :]
AX_re[i].errorbar(temp_p, temp_f, yerr=temp_95th/8.0, **KW[method], zorder=Z[m])
handle_lines += AX_hi[i].plot(temp_ux, temp_uy, color=C[m], linestyle=LS[m], linewidth=1.5,
marker='o', ms=7, mew=0, mfc=C[m], zorder=Z[m]-2, label=label_[m])
AX_re[0].text(0.125, 0.75, r'$\overline{o}$ = 0.1031', ha='left', va='bottom', fontsize=14)
AX_re[1].text(0.125, 0.75, r'$\overline{o}$ = 0.1036', ha='left', va='bottom', fontsize=14)
AX_re[2].text(0.125, 0.75, r'$\overline{o}$ = 0.1012', ha='left', va='bottom', fontsize=14)
ax_t1 = fig.add_axes([0.5*(3.2-3.17)/3.2, 1.0, (3.17/3.2), 0.025])
ax_t1.set_axis_off()
handle_title += gu.string_partial_format(fig, ax_t1, 0, 1.0, 'left', 'top',
['Verifying downscaled ', 'daily',
' accumulated precipitation > monthly ',
'90-th', ' events. 2017-2019. ', 'Southern Interior', ' stations'],
['k',]*7, [14,]*7, ['normal', 'bold', 'normal', 'bold',
'normal', 'bold', 'normal'])
AX_bs[0].text(0.85, 1.0, '[*]', ha='right', va='top', fontsize=10, transform=AX_bs[0].transAxes);
AX_re[0].text(0.98, 1.085, '[*]', ha='right', va='top', fontsize=10, transform=AX_re[0].transAxes);
for handle in handle_title:
handle.set_bbox(dict(facecolor='w', pad=0, edgecolor='none', zorder=2))
loc_y = [0.708, 0.53, 0.352, 0.173, 0.,]
table_heads = ['Brier', 'REL', 'RES']
locx_heads = [0.25, 0.5, 0.75]
for i, ind in enumerate([0, 1, 2]):
AX_da[i].text(1.035, 0.5-0.035, r'($10^{-2}$)', ha='right', va='center', fontsize=14, transform=AX_da[i].transAxes)
for m, method in enumerate(methods):
for j in range(3):
AX_da[i].text(locx_heads[j], 0.93, table_heads[j], ha='center', va='bottom',
fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[0], loc_y[m], '{:.3f}'.format(1e2*BS['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[1], loc_y[m], '{:.3f}'.format(1e2*REL['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[2], loc_y[m], '{:.3f}'.format(1e2*RES['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
for m, method in enumerate(methods):
AX_da[0].text(-0.2, loc_y[m], labels[m], ha='left', va='bottom', color=C[m],
fontsize=14, fontweight='bold', transform=AX_da[0].transAxes)
handle_errbar = []
handle_errbar.append(mlines.Line2D([], [], label=label_[0], **kw_lines['scnn']))
handle_errbar.append(mlines.Line2D([], [], label=label_[1], **kw_lines['bcsd']))
handle_errbar.append(mlines.Line2D([], [], label=label_[2], **kw_lines['dcnn']))
ax_box = fig.add_axes([-0.055, -0.085+0.04, 0.75, 0.0625])
ax_box.set_axis_off()
ax_lw1 = inset_axes(ax_box, height='50%', width='35%', borderpad=0, loc=2)
ax_lw2 = inset_axes(ax_box, height='50%', width='35%', borderpad=0, loc=3)
ax_lg1 = inset_axes(ax_box, height='50%', width='65%', borderpad=0, loc=1)
ax_lg2 = inset_axes(ax_box, height='50%', width='65%', borderpad=0, loc=4)
ax_lw1.text(1, 0.5, 'Calibration curve : ', ha='right', va='center', fontsize=14, transform=ax_lw1.transAxes);
ax_lw2.text(1, 0.5, 'Frequency of usage (log-scale): ', ha='right', va='center', fontsize=14, transform=ax_lw2.transAxes);
ax_lw1.text(0.95, 0.55, '[**]', ha='right', va='center', fontsize=10, transform=ax_lw1.transAxes);
LG = ax_lg1.legend(handles=handle_errbar, bbox_to_anchor=(1, 1.5), ncol=5, prop={'size':14}, fancybox=False);
LG.get_frame().set_facecolor('none')
LG.get_frame().set_linewidth(0)
LG.get_frame().set_alpha(1.0)
LG1 = ax_lg2.legend(handles=handle_lines[0:3], bbox_to_anchor=(1, 1.5), ncol=5, prop={'size':14}, fancybox=False);
LG1.get_frame().set_facecolor('none')
LG1.get_frame().set_linewidth(0)
LG1.get_frame().set_alpha(1.0)
ax_lg1.text(0.1, 0.9, 'Interp-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['scnn']['color'], transform=ax_lg1.transAxes)
ax_lg1.text(0.435, 0.9, 'BCSD-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['bcsd']['color'], transform=ax_lg1.transAxes)
ax_lg1.text(0.78, 0.9, 'DCNN-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['dcnn']['color'], transform=ax_lg1.transAxes)
ax_lg2.text(0.1, 0.9, 'Interp-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['scnn']['color'], transform=ax_lg2.transAxes)
ax_lg2.text(0.435, 0.9, 'BCSD-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['bcsd']['color'], transform=ax_lg2.transAxes)
ax_lg2.text(0.78, 0.9, 'DCNN-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['dcnn']['color'], transform=ax_lg2.transAxes)
ax_lw1.set_axis_off()
ax_lg1.set_axis_off()
ax_lw2.set_axis_off()
ax_lg2.set_axis_off()
ax_w1 = fig.add_axes([-0.055, -0.135+0.045, 1.065, 0.025])
ax_w1.set_axis_off()
ax_w1.text(0, 1, '* Reliability diagrams and BS components are calculated relative to the 2000-2014 ERA5 monthly CDFs.',
ha='left', va='top', fontsize=14, transform=ax_w1.transAxes);
ax_w2 = fig.add_axes([-0.055, -0.2+0.045, 1.065, 0.05])
ax_w2.set_axis_off()
ax_w2.text(0, 1, '** Calibration curves are averaged over 100 bootstrap replicates.\n Error bars represent the 95% CI.',
ha='left', va='top', fontsize=14, transform=ax_w2.transAxes);
if need_publish:
# Save figure
fig.savefig(fig_dir+'DSCALE_Calib_southern.png', format='png', **fig_keys)
```
## Northeast BC
```
calib_dict = np.load(save_dir+'DSCALE_Calib_BCH_loc2.npy', allow_pickle=True)[()]
PLOT = calib_dict['PLOT']
BS = calib_dict['BS_3c']
BS_3c = calib_dict['BS_3c']
USE = calib_dict['USE']
o_bar = calib_dict['o_bar']
REL = calib_dict['REL']
RES = calib_dict['RES']
USE, PLOT = filter_single_point(USE, PLOT)
print('obar: {}, {}, {}'.format(o_bar[0], o_bar[1], o_bar[2]))
fig = plt.figure(figsize=(12, 12/(3.2)*2.65), dpi=dpi_)
handle_title = []
handle_lines = []
handle_marker = []
gs = gridspec.GridSpec(7, 5, height_ratios=[0.45, 0.15, 1, 0.1, 0.5, 0.15, 0.3], width_ratios=[1, 0.1, 1, 0.1, 1])
YLIM = [7.5e-5, 1e0]
YLAB = [1e-4, 1e-3, 1e-2, 1e-1]
# no skill line
fake_x = np.linspace(0, 1, 100)
fake_y = [0.5*fake_x + 0.5*o_bar[0], 0.5*fake_x + 0.5*o_bar[1], 0.5*fake_x + 0.5*o_bar[2]]
AX_bs = []
AX_re = [] # reliability curve axis
AX_hi = [] # freq of use axis
AX_da = [] # data axis
for j in [0, 2, 4]:
AX_bs.append(plt.subplot(gs[0, j]))
AX_re.append(plt.subplot(gs[2, j]))
AX_hi.append(plt.subplot(gs[4, j]))
AX_da.append(plt.subplot(gs[6, j]))
plt.subplots_adjust(0, 0, 1, 1, hspace=0, wspace=0)
for i, ax in enumerate(AX_re):
ax = gu.ax_decorate_box(ax)
ax.plot(fake_x, fake_x, linewidth=1.5, linestyle='--', color='0.5')
ax.plot(fake_x, fake_y[i], linewidth=1.5, linestyle='--', color='r')
ax.tick_params(axis="both", which="both", labelbottom=True)
ax.set_xlim([0, 1])
ax.set_xticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_xticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_ylim([0, 1])
ax.set_yticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_yticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_aspect('equal')
for i, ax in enumerate(AX_da):
ax.set_axis_off()
for i, ax in enumerate(AX_hi):
ax = gu.ax_decorate_box(ax)
ax.tick_params(axis="both", which="both", labelbottom=True)
ax.set_xlim([0, 1])
ax.set_xticks([0, 0.25, 0.5, 0.75, 1.0])
ax.set_xticklabels([0, 0.25, 0.5, 0.75, 1.0])
ax.set_yscale('log')
ax.set_ylim(YLIM)
ax.set_yticks(YLAB)
if i < 2:
ax.text(0.05, 0.05, 'Forecast Probability', ha='left', va='bottom', fontsize=14, transform=ax.transAxes)
AX_re[0].set_ylabel('Observed relative frequency', fontsize=14)
AX_hi[0].set_ylabel('Frequency of usage', fontsize=14)
AX_re[0].tick_params(axis="both", which="both", labelleft=True)
AX_hi[0].tick_params(axis="both", which="both", labelleft=True)
methods = ['scnn', 'bcsd', 'dcnn']
Z = [5, 4, 5, 4, 4]
labels = ['Interp-SL', 'BCSD-SL', 'DCNN-SL']
label_ = [' ',
' ',
' ']
bar_gap = 1/4
text_gap = 1/3
YLIM_bs = [-0.2, 1.0]
YTICKS_bs = np.array([0.0, 0.3, 0.6])
titles_bs = ['(a) Day-1 forecast. BSS',
'(b) Day-3 forecast. BSS',
'(c) Day-5 forecast. BSS']
titles_re = ['(d) Day-1 forecast. Reliability diagram',
'(e) Day-3 forecast. Reliability diagram',
'(f) Day-5 forecast. Reliability diagram']
for i in range(3):
ax_bs = AX_bs[i]
ax_bs.text(0.5, 0.975, titles_bs[i], ha='center', va='top', fontsize=14, transform=ax_bs.transAxes)
ax_bs = gu.ax_decorate(ax_bs, left_flag=False, bottom_flag=False)
ax_bs.grid(linestyle=":", linewidth=1.5)
ax_bs.xaxis.grid(False)
ax_bs.spines["left"].set_visible(False)
ax_bs.set_xlim([0, 1])
ax_bs.set_ylim(YLIM_bs)
ax_bs.set_yticks(YTICKS_bs)
d_bs = 0.125*(YLIM_bs[1] - YLIM_bs[0])
for m, method in enumerate(['scnn', 'bcsd', 'dcnn']):
x_ = np.array([bar_gap+bar_gap*m,])
x_text = 0.5*text_gap + text_gap*m
bs_ = 1-10*np.array([BS_3c['{}'.format(method)][i],])
marker_p, stem_p, base_p = ax_bs.stem(x_, bs_, bottom=0.0, use_line_collection=True)
plt.setp(marker_p, marker='s', ms=15, mew=2.5, mec='k', mfc=KW[method]['color'], zorder=4)
plt.setp(stem_p, linewidth=2.5, color='k')
ax_bs.text(x_, bs_+d_bs, '{:.3f}'.format(bs_[0]), va='center', ha='center', fontsize=14)
ax_bs.axhline(y=0.0, xmin=0, xmax=1, linewidth=2.5, color='k')
ax_bs.text(x_text, -0.2, labels[m], ha='center', va='bottom', color=C[m],
fontsize=13.5, fontweight='bold')
for i, ind in enumerate([0, 1, 2]):
AX_re[i].text(0.5, 1.025, titles_re[i], fontsize=14, ha='center', va='bottom', transform=AX_re[i].transAxes)
AX_re[i].axvline(x=o_bar[ind], ymin=0, ymax=1, linewidth=1.5, linestyle='--', color='0.5')
AX_re[i].axhline(y=o_bar[ind], xmin=0, xmax=1, linewidth=1.5, linestyle='--', color='0.5')
for m, method in enumerate(methods):
temp_p = PLOT['{}_pm'.format(method)][ind, :]
temp_f = PLOT['{}_fm'.format(method)][ind, :]
temp_95th = PLOT['{}_fs'.format(method)][ind, :]
temp_ux = temp_p
temp_uy = USE['{}'.format(method)][ind, :]
AX_re[i].errorbar(temp_p, temp_f, yerr=temp_95th/8.0, **KW[method], zorder=Z[m])
handle_lines += AX_hi[i].plot(temp_ux, temp_uy, color=C[m], linestyle=LS[m], linewidth=1.5,
marker='o', ms=7, mew=0, mfc=C[m], zorder=Z[m]-2, label=label_[m])
AX_re[0].text(0.125, 0.75, r'$\overline{o}$ = 0.1066', ha='left', va='bottom', fontsize=14)
AX_re[1].text(0.125, 0.75, r'$\overline{o}$ = 0.1023', ha='left', va='bottom', fontsize=14)
AX_re[2].text(0.125, 0.75, r'$\overline{o}$ = 0.1041', ha='left', va='bottom', fontsize=14)
ax_t1 = fig.add_axes([0.5*(3.2-3.17)/3.2, 1.0, (3.17/3.2), 0.025])
ax_t1.set_axis_off()
handle_title += gu.string_partial_format(fig, ax_t1, 0, 1.0, 'left', 'top',
['Verifying downscaled ', 'daily',
' accumulated precipitation > monthly ',
'90-th', ' events. 2017-2019. ', 'Northeast', ' stations'],
['k',]*7, [14,]*7, ['normal', 'bold', 'normal', 'bold',
'normal', 'bold', 'normal'])
AX_bs[0].text(0.85, 1.0, '[*]', ha='right', va='top', fontsize=10, transform=AX_bs[0].transAxes);
AX_re[0].text(0.98, 1.085, '[*]', ha='right', va='top', fontsize=10, transform=AX_re[0].transAxes);
for handle in handle_title:
handle.set_bbox(dict(facecolor='w', pad=0, edgecolor='none', zorder=2))
loc_y = [0.708, 0.53, 0.352, 0.173, 0.,]
table_heads = ['Brier', 'REL', 'RES']
locx_heads = [0.25, 0.5, 0.75]
for i, ind in enumerate([0, 1, 2]):
AX_da[i].text(1.035, 0.5-0.035, r'($10^{-2}$)', ha='right', va='center', fontsize=14, transform=AX_da[i].transAxes)
for m, method in enumerate(methods):
for j in range(3):
AX_da[i].text(locx_heads[j], 0.93, table_heads[j], ha='center', va='bottom',
fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[0], loc_y[m], '{:.3f}'.format(1e2*BS['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[1], loc_y[m], '{:.3f}'.format(1e2*REL['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
AX_da[i].text(locx_heads[2], loc_y[m], '{:.3f}'.format(1e2*RES['{}'.format(method)][ind]),
ha='center', va='bottom', color='k', fontsize=14, transform=AX_da[i].transAxes)
for m, method in enumerate(methods):
AX_da[0].text(-0.2, loc_y[m], labels[m], ha='left', va='bottom', color=C[m],
fontsize=14, fontweight='bold', transform=AX_da[0].transAxes)
handle_errbar = []
handle_errbar.append(mlines.Line2D([], [], label=label_[0], **kw_lines['scnn']))
handle_errbar.append(mlines.Line2D([], [], label=label_[1], **kw_lines['bcsd']))
handle_errbar.append(mlines.Line2D([], [], label=label_[2], **kw_lines['dcnn']))
ax_box = fig.add_axes([-0.055, -0.085+0.04, 0.75, 0.0625])
ax_box.set_axis_off()
ax_lw1 = inset_axes(ax_box, height='50%', width='35%', borderpad=0, loc=2)
ax_lw2 = inset_axes(ax_box, height='50%', width='35%', borderpad=0, loc=3)
ax_lg1 = inset_axes(ax_box, height='50%', width='65%', borderpad=0, loc=1)
ax_lg2 = inset_axes(ax_box, height='50%', width='65%', borderpad=0, loc=4)
ax_lw1.text(1, 0.5, 'Calibration curve : ', ha='right', va='center', fontsize=14, transform=ax_lw1.transAxes);
ax_lw2.text(1, 0.5, 'Frequency of usage (log-scale): ', ha='right', va='center', fontsize=14, transform=ax_lw2.transAxes);
ax_lw1.text(0.95, 0.55, '[**]', ha='right', va='center', fontsize=10, transform=ax_lw1.transAxes);
LG = ax_lg1.legend(handles=handle_errbar, bbox_to_anchor=(1, 1.5), ncol=5, prop={'size':14}, fancybox=False);
LG.get_frame().set_facecolor('none')
LG.get_frame().set_linewidth(0)
LG.get_frame().set_alpha(1.0)
LG1 = ax_lg2.legend(handles=handle_lines[0:3], bbox_to_anchor=(1, 1.5), ncol=5, prop={'size':14}, fancybox=False);
LG1.get_frame().set_facecolor('none')
LG1.get_frame().set_linewidth(0)
LG1.get_frame().set_alpha(1.0)
ax_lg1.text(0.1, 0.9, 'Interp-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['scnn']['color'], transform=ax_lg1.transAxes)
ax_lg1.text(0.435, 0.9, 'BCSD-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['bcsd']['color'], transform=ax_lg1.transAxes)
ax_lg1.text(0.78, 0.9, 'DCNN-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['dcnn']['color'], transform=ax_lg1.transAxes)
ax_lg2.text(0.1, 0.9, 'Interp-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['scnn']['color'], transform=ax_lg2.transAxes)
ax_lg2.text(0.435, 0.9, 'BCSD-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['bcsd']['color'], transform=ax_lg2.transAxes)
ax_lg2.text(0.78, 0.9, 'DCNN-SL', ha='left', va='top', fontsize=14, fontweight='bold',
color=KW['dcnn']['color'], transform=ax_lg2.transAxes)
ax_lw1.set_axis_off()
ax_lg1.set_axis_off()
ax_lw2.set_axis_off()
ax_lg2.set_axis_off()
ax_w1 = fig.add_axes([-0.055, -0.135+0.045, 1.065, 0.025])
ax_w1.set_axis_off()
ax_w1.text(0, 1, '* Reliability diagrams and BS components are calculated relative to the 2000-2014 ERA5 monthly CDFs.',
ha='left', va='top', fontsize=14, transform=ax_w1.transAxes);
ax_w2 = fig.add_axes([-0.055, -0.2+0.045, 1.065, 0.05])
ax_w2.set_axis_off()
ax_w2.text(0, 1, '** Calibration curves are averaged over 100 bootstrap replicates.\n Error bars represent the 95% CI.',
ha='left', va='top', fontsize=14, transform=ax_w2.transAxes);
if need_publish:
# Save figure
fig.savefig(fig_dir+'DSCALE_Calib_Northeast.png', format='png', **fig_keys)
```
**This notebook is an exercise in the [Pandas](https://www.kaggle.com/learn/pandas) course. You can reference the tutorial at [this link](https://www.kaggle.com/residentmario/grouping-and-sorting).**
---
# Introduction
In these exercises we'll apply groupwise analysis to our dataset.
Run the code cell below to load the data before running the exercises.
```
import pandas as pd
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
#pd.set_option("display.max_rows", 5)
from learntools.core import binder; binder.bind(globals())
from learntools.pandas.grouping_and_sorting import *
print("Setup complete.")
```
# Exercises
## 1.
Who are the most common wine reviewers in the dataset? Create a `Series` whose index is the `taster_twitter_handle` category from the dataset, and whose values count how many reviews each person wrote.
```
# Your code here
reviews_written = reviews.groupby('taster_twitter_handle').taster_twitter_handle.count()
# Check your answer
q1.check()
#q1.hint()
q1.solution()
```
## 2.
What is the best wine I can buy for a given amount of money? Create a `Series` whose index is wine prices and whose values are the maximum number of points a wine costing that much was given in a review. Sort the values by price, ascending (so that `4.0` dollars is at the top and `3300.0` dollars is at the bottom).
```
best_rating_per_price = reviews.groupby('price')['points'].max().sort_index()
# Check your answer
q2.check()
#q2.hint()
q2.solution()
```
## 3.
What are the minimum and maximum prices for each `variety` of wine? Create a `DataFrame` whose index is the `variety` category from the dataset and whose values are the `min` and `max` values thereof.
```
price_extremes = reviews.groupby('variety').price.agg([min, max])
# Check your answer
q3.check()
#q3.hint()
q3.solution()
```
## 4.
What are the most expensive wine varieties? Create a variable `sorted_varieties` containing a copy of the dataframe from the previous question where varieties are sorted in descending order based on minimum price, then on maximum price (to break ties).
```
sorted_varieties = price_extremes.sort_values(by=['min', 'max'], ascending=False).copy()
# Check your answer
q4.check()
#q4.hint()
q4.solution()
```
## 5.
Create a `Series` whose index is reviewers and whose values are the average review score given out by that reviewer. Hint: you will need the `taster_name` and `points` columns.
```
reviewer_mean_ratings = reviews.groupby('taster_name').points.mean()
# Check your answer
q5.check()
#q5.hint()
q5.solution()
```
Are there significant differences in the average scores assigned by the various reviewers? Run the cell below to use the `describe()` method to see a summary of the range of values.
```
reviewer_mean_ratings.describe()
```
## 6.
What combination of countries and varieties are most common? Create a `Series` whose index is a `MultiIndex` of `{country, variety}` pairs. For example, a pinot noir produced in the US should map to `{"US", "Pinot Noir"}`. Sort the values in the `Series` in descending order based on wine count.
```
country_variety_counts = reviews.groupby(['country', 'variety']).size().sort_values(ascending=False)
# Check your answer
q6.check()
#q6.hint()
q6.solution()
```
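Outside the course checker, the two-key `groupby(...).size()` pattern from this exercise can be sanity-checked on a tiny hand-made frame (the rows below are invented for illustration):

```python
import pandas as pd

# Hypothetical mini-dataset, just to illustrate the two-key groupby pattern
df = pd.DataFrame({
    'country': ['US', 'US', 'Italy', 'US'],
    'variety': ['Pinot Noir', 'Pinot Noir', 'Red Blend', 'Riesling'],
})

# Count rows per (country, variety) pair, most common first
counts = df.groupby(['country', 'variety']).size().sort_values(ascending=False)
print(counts.loc[('US', 'Pinot Noir')])  # 2
```

The result is a `Series` indexed by a `MultiIndex`, so individual pairs can be looked up with a tuple, exactly as in the full wine dataset.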
# Keep going
Move on to the [**data types and missing data**](https://www.kaggle.com/residentmario/data-types-and-missing-values).
---
*Have questions or comments? Visit the [course discussion forum](https://www.kaggle.com/learn/pandas/discussion) to chat with other learners.*
```
!pip install autokeras
import tensorflow as tf
import autokeras as ak
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(gpus[-1], 'GPU')
```
### Tuning MLP for structured-data regression (Normalization + DenseBlock)
```
input_node = ak.StructuredDataInput()
output_node = ak.Normalization()(input_node)
output_node = ak.DenseBlock(use_batchnorm=False, dropout=0.0)(output_node)
output_node = ak.RegressionHead(dropout=0.0)(output_node)
auto_model = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=10, overwrite=True, seed=42)
from sklearn.datasets import fetch_california_housing
house_dataset = fetch_california_housing()
# Import pandas package to format the data
import pandas as pd
# Extract features with their names into a DataFrame
data = pd.DataFrame(house_dataset.data, columns=house_dataset.feature_names)
# Extract target with their names into a pd.Series object with name MEDV
target = pd.Series(house_dataset.target, name = 'MEDV')
from sklearn.model_selection import train_test_split
train_data, test_data, train_targets, test_targets = train_test_split(data, target, test_size=0.2, random_state=42)
auto_model.fit(train_data, train_targets, batch_size=1024, epochs=150)
```
### Visualize the best pipeline
```
best_model = auto_model.export_model()
tf.keras.utils.plot_model(best_model, show_shapes=True, expand_nested=True) # rankdir='LR'
```
### Evaluate best pipeline
```
test_loss, test_metric = auto_model.evaluate(test_data, test_targets, verbose=0)
print('Test metric: ', test_metric)
```
### Show best trial
```
auto_model.tuner.results_summary(num_trials=1)
best_model = auto_model.export_model()
tf.keras.utils.plot_model(best_model, show_shapes=True, expand_nested=True)
from tensorflow import keras
best_model.save('saved_model')
best_model = keras.models.load_model('saved_model')
```
### Customize the search space for tuning MLP
```
from kerastuner.engine import hyperparameters as hp
input_node = ak.StructuredDataInput()
# output_node = ak.CategoricalToNumerical()(input_node)
output_node = ak.Normalization()(input_node)
output_node = ak.DenseBlock(num_layers=1,
num_units=hp.Choice("num_units", [128, 256, 512, 1024]),
use_batchnorm=False,
dropout=0.0)(output_node)
output_node = ak.DenseBlock(num_layers=1,
num_units=hp.Choice("num_units", [16, 32, 64]),
use_batchnorm=False,
dropout=0.0)(output_node)
output_node = ak.RegressionHead()(output_node)
auto_model = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=10, overwrite=True, seed=42)
auto_model.fit(train_data, train_targets, batch_size=1024, epochs=150)
```
### Display the best pipeline
```
best_model = auto_model.export_model()
tf.keras.utils.plot_model(best_model, show_shapes=True, expand_nested=True) # rankdir='LR'
test_loss, test_metric = auto_model.evaluate(test_data, test_targets, verbose=0)
print('Test metric: ', test_metric)
auto_model.tuner.results_summary(num_trials=1)
best_model.summary()
```
# AquaCrop-OSPy: Bridging the gap between research and practice in crop-water modelling
<a href="https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
This series of notebooks provides users with an introduction to AquaCrop-OSPy, an open-source Python implementation of the U.N. Food and Agriculture Organization (FAO) AquaCrop model. AquaCrop-OSPy is accompanied by a series of Jupyter notebooks, which guide users interactively through a range of common applications of the model. Only basic Python experience is required, and the notebooks can easily be extended and adapted by users for their own applications and needs.
This notebook series consists of four parts:
1. <a href=https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_1.ipynb>Running an AquaCrop-OSPy model</a>
2. <a href=https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_2.ipynb>Estimation of irrigation water demands</a>
3. <a href=https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_3.ipynb>Optimisation of irrigation management strategies</a>
4. <a href=https://colab.research.google.com/github/thomasdkelly/aquacrop/blob/master/tutorials/AquaCrop_OSPy_Notebook_4.ipynb>Projection of climate change impacts</a>
# Notebook 3: Developing and optimizing irrigation strategies
In the previous notebook, we looked at how to simulate yields and water use for different pre-specified irrigation management practices or rules. However, what if you wanted to know which strategy would give you the maximum yield for a given amount of irrigation water use? In this notebook, we look at how optimal irrigation schedules can be identified by linking AquaCrop-OSPy with one of the many optimization modules available in the Python ecosystem.
Our specific example focuses on optimizing soil-moisture thresholds which are commonly used both in practice and literature on optimizing irrigation decisions. During the growing season, if the soil-moisture content drops below the threshold, irrigation is applied to refill the soil profile back to field capacity subject to a maximum irrigation depth. AquaCrop-OSPy allows you to define four thresholds corresponding to four main growing periods (emergence, canopy growth, max canopy and senescence). Changing the threshold depending on crop growth stage reflects the fact that crop water requirements and drought stress responses vary over the course of the season.
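The threshold rule just described can be sketched in a few lines of plain Python. This is an illustrative stand-in only — the function name, the maximum depth, and the water-balance bookkeeping are hypothetical, not the AquaCrop-OSPy implementation:

```python
def irrigation_depth(taw, depletion, smt_pct, max_depth=25):
    """Depth (mm) to apply under a soil-moisture-threshold rule.

    taw: total available water in the root zone (mm)
    depletion: current root-zone depletion (mm)
    smt_pct: threshold, as % of TAW remaining
    """
    remaining_pct = 100 * (taw - depletion) / taw
    if remaining_pct < smt_pct:
        # refill the profile toward field capacity, capped at max_depth
        return min(depletion, max_depth)
    return 0.0

print(irrigation_depth(taw=100, depletion=40, smt_pct=70))  # 25 (refill, capped)
print(irrigation_depth(taw=100, depletion=20, smt_pct=70))  # 0.0 (above threshold)
```

In the real model the threshold is allowed to differ across the four growth stages, which is why a four-element list of thresholds is optimized later in this notebook.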
Using the optimization library `scipy.optimize`, we will find sets of soil-moisture thresholds that maximize yields for a maize crop located in Champion, Nebraska. The optimization will be repeated for different water supply constraints (the maximum amount of water that can be applied in a given season). The simulation will take place over 3 years (2016-2018).
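To make the threshold rule concrete, here is a minimal pure-Python sketch of the decision logic described above. This is an illustration only, not AquaCrop-OSPy's internal implementation; the function name, the moisture-to-depth conversion, and all numbers are hypothetical.

```python
def irrigation_depth(soil_moisture_pct, growth_stage, smts, max_depth=25.0):
    """Illustrative threshold rule: if soil moisture (in %) is below the
    threshold for the current growth stage, irrigate towards field
    capacity, capped at max_depth (mm)."""
    if soil_moisture_pct < smts[growth_stage]:
        deficit_mm = (100.0 - soil_moisture_pct) * 0.5  # hypothetical conversion
        return min(deficit_mm, max_depth)
    return 0.0

# with a 70% threshold in every stage, 60% moisture triggers irrigation
print(irrigation_depth(60.0, 1, smts=[70, 70, 70, 70]))  # -> 20.0
print(irrigation_depth(80.0, 1, smts=[70, 70, 70, 70]))  # -> 0.0
```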
Install and import AquaCrop-OSPy.
```
# !pip install aquacrop==0.2
# from aquacrop.classes import *
# from aquacrop.core import *
# from google.colab import output
# output.clear()
# only used for local development
# import sys
# _=[sys.path.append(i) for i in ['.', '..']]
from aquacrop.classes import *
from aquacrop.core import *
path = get_filepath('champion_climate.txt')
wdf = prepare_weather(path)
```
Define a function called `run_model` that creates and runs an AquaCrop model (just like in the previous notebooks), and returns the final output.
```
def run_model(smts,max_irr_season,year1,year2):
"""
Function to run the model and return results for a given set of soil-moisture targets.
"""
maize = CropClass('Maize',PlantingDate='05/01') # define crop
loam = SoilClass('ClayLoam') # define soil
init_wc = InitWCClass(wc_type='Pct',value=[70]) # define initial soil water conditions
irrmngt = IrrMngtClass(IrrMethod=1,SMT=smts,MaxIrrSeason=max_irr_season) # define irrigation management
# create and run model
model = AquaCropModel(f'{year1}/05/01',f'{year2}/10/31',wdf,loam,maize,
IrrMngt=irrmngt,InitWC=init_wc)
model.initialize()
model.step(till_termination=True)
return model.Outputs.Final
run_model([70]*4,300,2018,2018)
```
Define `evaluate`, which will act as the reward function for the optimization library. Inside this function we run the model and return the reward (in this case the average yield).
```
import numpy as np # import numpy library
def evaluate(smts,max_irr_season,test=False):
"""
Function to run the model and calculate the reward (yield) for a given set of soil-moisture targets.
"""
# run model
out = run_model(smts,max_irr_season,year1=2016,year2=2018)
# get yields and total irrigation
yld = out['Yield (tonne/ha)'].mean()
tirr = out['Seasonal irrigation (mm)'].mean()
reward=yld
# return either the negative reward (for the optimization)
# or the yield and total irrigation (for analysis)
if test:
return yld,tirr,reward
else:
return -reward
evaluate([70]*4,300)
```
Define `get_starting_point`, which chooses a set of random irrigation strategies and evaluates them to give us a good starting point for our optimization. (Since we are only using a local minimization function, this helps it reach a good result.)
```
def get_starting_point(num_smts,max_irr_season,num_searches):
"""
find good starting threshold(s) for optimization
"""
# get random SMT's
x0list = np.random.rand(num_searches,num_smts)*100
rlist=[]
# evaluate random SMT's
for xtest in x0list:
r = evaluate(xtest,max_irr_season,)
rlist.append(r)
# save best SMT
x0=x0list[np.argmin(rlist)]
return x0
get_starting_point(4,300,10)
```
Define `optimize`, which uses the `scipy.optimize.fmin` optimization routine to find yield-maximizing irrigation strategies for a given maximum seasonal irrigation limit.
```
from scipy.optimize import fmin
def optimize(num_smts,max_irr_season,num_searches=100):
"""
optimize thresholds to be yield maximising
"""
# get starting optimization strategy
x0=get_starting_point(num_smts,max_irr_season,num_searches)
# run optimization
res = fmin(evaluate, x0,disp=0,args=(max_irr_season,))
# reshape array
smts= res.squeeze()
# evaluate optimal strategy
return smts
smts=optimize(4,300)
evaluate(smts,300,True)
```
For a range of maximum seasonal irrigation limits (0-450mm), find the yield maximizing irrigation schedule.
```
from tqdm.notebook import tqdm # progress bar
opt_smts=[]
yld_list=[]
tirr_list=[]
for max_irr in tqdm(range(0,500,50)):
# find optimal thresholds and save to list
smts=optimize(4,max_irr)
opt_smts.append(smts)
# save the optimal yield and total irrigation
yld,tirr,_=evaluate(smts,max_irr,True)
yld_list.append(yld)
tirr_list.append(tirr)
```
Visualize the optimal yield and total irrigation, creating a crop-water production function.
```
# import plotting library
import matplotlib.pyplot as plt
# create plot
fig,ax=plt.subplots(1,1,figsize=(13,8))
# plot results
ax.scatter(tirr_list,yld_list)
ax.plot(tirr_list,yld_list)
# labels
ax.set_xlabel('Total Irrigation (ha-mm)',fontsize=18)
ax.set_ylabel('Yield (tonne/ha)',fontsize=18)
ax.set_xlim([-20,600])
ax.set_ylim([2,15.5])
# annotate with optimal thresholds
bbox = dict(boxstyle="round",fc="1")
offset = [15,15,15, 15,15,-125,-100, -5, 10,10]
yoffset= [0,-5,-10,-15, -15, 0, 10,15, -20,10]
for i,smt in enumerate(opt_smts):
smt=smt.clip(0,100)
ax.annotate('(%.0f, %.0f, %.0f, %.0f)'%(smt[0],smt[1],smt[2],smt[3]),
(tirr_list[i], yld_list[i]), xytext=(offset[i], yoffset[i]), textcoords='offset points',
bbox=bbox,fontsize=12)
```
Note that `fmin` is a local optimizer, so the optimal soil-moisture thresholds it finds will vary over multiple repetitions.
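To see why a good starting point matters, here is a small standalone sketch (assuming SciPy is installed) showing `fmin` converging to different local minima of a double-well function depending on where it starts:

```python
from scipy.optimize import fmin

def double_well(x):
    # two local minima, at x = -1 and x = +1
    return (x[0] ** 2 - 1) ** 2

left = fmin(double_well, x0=[-0.5], disp=0)   # converges to approx -1
right = fmin(double_well, x0=[0.5], disp=0)   # converges to approx +1
print(left[0], right[0])
```

This is why `get_starting_point` samples many random threshold sets first: it steers the local search towards a good basin of attraction.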
# Appendix: Parallelization
We can also speed things up with a parallel approach. Colab notebooks only provide 2 CPUs, so we are not expecting a massive speed-up, but this kind of approach can be very useful when more CPUs are available, either locally or in cloud computing infrastructure.
```
# import multiprocessing library
from multiprocessing import Pool
# time library so we can check the speed up
from time import time
# define function to parallelize
def func(max_irr):
# find optimal smts
smts=optimize(4,max_irr)
# return the optimal yield, total irrigation and thresholds
yld,tirr,_=evaluate(smts,max_irr,True)
print(f"finished max_irr = {max_irr} at {round(time()-start)} seconds")
return yld,tirr,smts
```
Multiprocessing in Python can be done using the `Pool` object. The code below creates a `Pool` object, passing in the number of CPU cores that you want to parallelize over, then uses `p.map` to evaluate the function `func` for each input given in the list.
```
start = time() # save start time
with Pool(2) as p:
results = p.map(func, list(range(0,500,50)))
```
In Colab this approach does not give us a massive speed-up; however, it can be a big help if more CPU cores are available. Next, combine the results for visualization.
```
parr_opt_smts=[]
parr_yld_list=[]
parr_tirr_list=[]
for i in range(len(results)):
parr_yld_list.append(results[i][0])
parr_tirr_list.append(results[i][1])
parr_opt_smts.append(results[i][2])
```
Plot crop-water production function.
```
fig,ax=plt.subplots(1,1,figsize=(10,7))
ax.scatter(parr_tirr_list,parr_yld_list)
ax.plot(parr_tirr_list,parr_yld_list)
ax.set_xlabel('Total Irrigation (ha-mm)')
ax.set_ylabel('Yield (tonne/ha)',fontsize=18)
```
```
GPU_device_id = str(0)
model_id_save_as = 'learningcurve-cnn-easy-final-reluupdate'
architecture_id = '../hyperparameter_search/hyperparameter-search-results/CNN-kfoldseasy-final-1-reluupdate_33'
model_class_id = 'CNN1D'
testing_dataset_id = '../../source-interdiction/dataset_generation/validation_dataset_200keV_log10time_100.npy'
training_dataset_id = '../../source-interdiction/dataset_generation/training_dataset_200keV_log10time_10000.npy'
difficulty_setting = 'easy'
train_sizes = [10000, 50, 100, 500, 1000, 5000, 15000, 20000, ]
earlystop_patience = 10
num_epochs = 2000
import matplotlib.pyplot as plt
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = GPU_device_id
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, LabelBinarizer
from sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit
import tensorflow as tf
import pickle
import numpy as np
import pandas as pd
from random import choice
from numpy.random import seed
seed(5)
from tensorflow import set_random_seed
set_random_seed(5)
```
#### Import model, training function
```
from annsa.model_classes import build_cnn_model, compile_model, f1
from annsa.load_dataset import load_easy, load_full, dataset_to_spectrakeys
from annsa.load_pretrained_network import load_features
```
## Training Data Construction
```
training_dataset = np.load(training_dataset_id)
training_spectra, training_keys = dataset_to_spectrakeys(training_dataset)
```
## Load testing dataset
```
testing_dataset = np.load(testing_dataset_id)
testing_spectra, testing_keys = dataset_to_spectrakeys(testing_dataset)
```
## Load Model
```
model_features = load_features(architecture_id)
model_features.output_function = tf.nn.softmax
model_features.cnn_kernels = model_features.cnn_kernel
model_features.pool_sizes = model_features.pool_size
model_features.loss = tf.keras.losses.categorical_crossentropy
model_features.optimizer = tf.keras.optimizers.Adam
model_features.metrics = [f1]
model_features.dropout_rate = model_features.dropout_probability
model_features.output_function = tf.nn.softmax
model_features.input_dim = 1024
model = compile_model(
build_cnn_model,
model_features)
```
## Train network
# Scale input data
```
training_spectra_scaled = model_features.scaler.transform(training_spectra)
testing_spectra_scaled = model_features.scaler.transform(testing_spectra)
earlystop_callback = tf.keras.callbacks.EarlyStopping(
monitor='val_f1',
patience=earlystop_patience,
mode='max',
min_delta=0.01,
restore_best_weights=True)
mlb=LabelBinarizer()
for train_size in train_sizes:
print('\n\nRunning through training size '+str(train_size))
k_folds_errors = []
sss = StratifiedShuffleSplit(n_splits=5, train_size=train_size)
k = 0
for train_index, _ in sss.split(training_spectra, training_keys):
print('Running through fold '+str(k))
training_keys_binarized = mlb.fit_transform(training_keys.reshape([training_keys.shape[0],1]))
testing_keys_binarized = mlb.transform(testing_keys)
model = compile_model(
build_cnn_model,
model_features)
csv_logger = tf.keras.callbacks.CSVLogger('./final-models-keras/'+model_id_save_as+'_trainsize'+str(train_size)+'_fold'+str(k)+'.log')
output = model.fit(
x=training_spectra_scaled[train_index],
y=training_keys_binarized[train_index],
epochs=num_epochs,
verbose=1,
validation_data=(testing_spectra_scaled,
testing_keys_binarized),
shuffle=True,
callbacks=[earlystop_callback, csv_logger],
)
model.save('./final-models-keras/'+model_id_save_as+'_trainsize'+str(train_size)+'_fold'+str(k)+'.hdf5')
k += 1
```
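The early-stopping rule configured in the `EarlyStopping` callback above (monitor a validation metric in `'max'` mode, with a patience and a minimum improvement) can be sketched independently of Keras. The helper below is an illustration of the idea, not Keras's exact implementation:

```python
def early_stop_epoch(val_metric_history, patience=10, min_delta=0.01):
    """Return the (0-based) epoch at which training would stop, or None.
    An epoch 'improves' if it beats the best value seen so far by at least
    min_delta; training stops after `patience` consecutive non-improving epochs."""
    best = float('-inf')
    wait = 0
    for epoch, metric in enumerate(val_metric_history):
        if metric >= best + min_delta:
            best = metric
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

history = [0.50, 0.60, 0.61, 0.61, 0.61, 0.61]
print(early_stop_epoch(history, patience=3))  # -> 5
```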
```
# (1) Import the required Python dependencies
import findspark
findspark.init()
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
from pyspark.sql.functions import *
from pyspark.sql.types import StructType, StructField
from pyspark.sql.types import LongType, DoubleType, IntegerType, StringType, BooleanType
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import Tokenizer
from pyspark.ml.feature import StopWordsRemover
from pyspark.ml.feature import HashingTF, IDF
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.mllib.evaluation import MulticlassMetrics
from sparknlp.base import *
from sparknlp.annotator import Tokenizer as NLPTokenizer
from sparknlp.annotator import Stemmer, Normalizer
# (2) Instantiate a Spark Context
conf = SparkConf().setMaster("spark://192.168.56.10:7077") \
.set("spark.jars", '/opt/anaconda3/lib/python3.6/site-packages/sparknlp/lib/sparknlp.jar') \
.setAppName("Natural Language Processing - Sentiment Analysis")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
sc.getConf().getAll()
# (3) Load the labelled Airline Tweet Corpus
schema = StructType([
StructField("unit_id", LongType()),
StructField("golden", BooleanType()),
StructField("unit_state", StringType()),
StructField("trusted_judgments", IntegerType()),
StructField("last_judgment_at", StringType()),
StructField("airline_sentiment", StringType()),
StructField("airline_sentiment_confidence", DoubleType()),
StructField("negative_reason", StringType()),
StructField("negative_reason_confidence", DoubleType()),
StructField("airline", StringType()),
StructField("airline_sentiment_gold", StringType()),
StructField("name", StringType()),
StructField("negative_reason_gold", StringType()),
StructField("retweet_count", IntegerType()),
StructField("text", StringType()),
StructField("tweet_coordinates", StringType()),
StructField("tweet_created", StringType()),
StructField("tweet_id", StringType()),
StructField("tweet_location", StringType()),
StructField("user_timezone", StringType())
])
airline_tweets_df = sqlContext.read.format('com.databricks.spark.csv').schema(schema) \
.options(header = 'true', inferschema = 'false') \
.load('/data/workspaces/jillur.quddus/jupyter/notebooks/Machine-Learning-with-Apache-Spark-QuickStart-Guide/chapter06/data/twitter-data/airline-tweets-labelled-corpus.csv')
airline_tweets_df.show(2)
# (4) Since we are only interested in detecting tweets with negative sentiment, generate a new label
# whereby if the sentiment is negative, the label is TRUE (Positive Outcome) otherwise FALSE
airline_tweets_with_labels_df = airline_tweets_df.withColumn("negative_sentiment_label",
when(col("airline_sentiment") == "negative", lit("true")).otherwise(lit("false"))) \
.select("unit_id", "text", "negative_sentiment_label")
airline_tweets_with_labels_df.show(10, False)
# (5) Pre-Process the tweets using the Feature Transformers NATIVE to Spark MLlib
# (5.1) Remove any tweets with null textual content
# (5.2) Tokenize the textual content using the Tokenizer Feature Transformer
# (5.3) Remove Stop Words from the sequence of tokens using the StopWordsRemover Feature Transformer
# (5.4) Concatenate the filtered sequence of tokens into a single string for 3rd party pre-processing (-> spark-nlp)
filtered_df = airline_tweets_with_labels_df.filter("text is not null")
tokenizer = Tokenizer(inputCol="text", outputCol="tokens_1")
tokenized_df = tokenizer.transform(filtered_df)
remover = StopWordsRemover(inputCol="tokens_1", outputCol="filtered_tokens")
preprocessed_part_1_df = remover.transform(tokenized_df)
preprocessed_part_1_df = preprocessed_part_1_df.withColumn("concatenated_filtered_tokens",
concat_ws(" ", col("filtered_tokens")))
preprocessed_part_1_df.show(3, False)
# (6) Define a NLP pipeline to pre-process the tweets using the spark-nlp 3rd party library
# (6.1) Annotate the string containing the concatenated filtered tokens using the DocumentAssembler Transformer
# (6.2) Re-tokenize the document using the Tokenizer Annotator
# (6.3) Apply Stemming to the Tokens using the Stemmer Annotator
# (6.4) Clean and lowercase all the Tokens using the Normalizer Annotator
document_assembler = DocumentAssembler().setInputCol("concatenated_filtered_tokens")
tokenizer = NLPTokenizer().setInputCols(["document"]).setOutputCol("tokens_2")
stemmer = Stemmer().setInputCols(["tokens_2"]).setOutputCol("stems")
normalizer = Normalizer().setInputCols(["stems"]).setOutputCol("normalised_stems")
pipeline = Pipeline(stages=[document_assembler, tokenizer, stemmer, normalizer])
pipeline_model = pipeline.fit(preprocessed_part_1_df)
preprocessed_df = pipeline_model.transform(preprocessed_part_1_df)
preprocessed_df.select("unit_id", "text", "negative_sentiment_label", "normalised_stems").show(3, False)
preprocessed_df.dtypes
# (7) We could proceed to use the 3rd party annotators available in spark-nlp to train a sentiment model, such as
# SentimentDetector and ViveknSentimentDetector respectively
# However in this case study, we will use the Feature Extractors native to Spark MLlib to generate feature vectors
# to train our subsequent machine learning model. In this case, we will use MLlib's TF-IDF Feature Extractor.
# (7.1) Extract the normalised stems from the spark-nlp Annotator Array Structure
exploded_df = preprocessed_df.withColumn("stems", explode("normalised_stems")) \
.withColumn("stems", col("stems").getItem("result")) \
.select("unit_id", "negative_sentiment_label", "text", "stems")
exploded_df.show(10, False)
# (7.2) Group by Unit ID and aggregate the normalised stems into a sequence of tokens
aggregated_df = exploded_df.groupBy("unit_id").agg(concat_ws(" ", collect_list(col("stems"))),
first("text"), first("negative_sentiment_label")) \
.toDF("unit_id", "tokens", "text", "negative_sentiment_label") \
.withColumn("tokens", split(col("tokens"), " ").cast("array<string>"))
aggregated_df.show(10, False)
# (8) Generate Term Frequency Feature Vectors by passing the sequence of tokens to the HashingTF Transformer.
# Then fit an IDF Estimator to the Featurized Dataset to generate the IDFModel.
# Finally pass the TF Feature Vectors to the IDFModel to scale based on frequency across the corpus
hashingTF = HashingTF(inputCol="tokens", outputCol="raw_features", numFeatures=280)
features_df = hashingTF.transform(aggregated_df)
idf = IDF(inputCol="raw_features", outputCol="features")
idf_model = idf.fit(features_df)
scaled_features_df = idf_model.transform(features_df)
scaled_features_df.cache()
scaled_features_df.show(8, False)
# (9) Index the label column using StringIndexer
# Now a label of 1.0 = FALSE (Not Negative Sentiment) and a label of 0.0 = TRUE (Negative Sentiment)
indexer = StringIndexer(inputCol = "negative_sentiment_label", outputCol = "label").fit(scaled_features_df)
scaled_features_indexed_label_df = indexer.transform(scaled_features_df)
# (10) Split the index-labelled Scaled Feature Vectors into Training and Test DataFrames
train_df, test_df = scaled_features_indexed_label_df.randomSplit([0.9, 0.1], seed=12345)
train_df.count(), test_df.count()
# (11) Train a Classification Tree Model on the Training DataFrame
decision_tree = DecisionTreeClassifier(featuresCol = 'features', labelCol = 'label')
decision_tree_model = decision_tree.fit(train_df)
# (12) Apply the Trained Classification Tree Model to the Test DataFrame to make predictions
test_decision_tree_predictions_df = decision_tree_model.transform(test_df)
print("TEST DATASET PREDICTIONS AGAINST ACTUAL LABEL: ")
test_decision_tree_predictions_df.select("prediction", "label", "text").show(10, False)
# (13) Compute the Confusion Matrix for our Decision Tree Classifier on the Test DataFrame
predictions_and_label = test_decision_tree_predictions_df.select("prediction", "label").rdd
metrics = MulticlassMetrics(predictions_and_label)
print("N = %g" % test_decision_tree_predictions_df.count())
print(metrics.confusionMatrix())
# (14) For completeness let us train a Decision Tree Classifier using Feature Vectors derived from the Bag of Words algorithm
# Note that we have already computed these Feature Vectors when applying the HashingTF Transformer in Cell #8 above
# (14.1) Create Training and Test DataFrames based on the Bag of Words Feature Vectors
bow_indexer = StringIndexer(inputCol = "negative_sentiment_label", outputCol = "label").fit(features_df)
bow_features_indexed_label_df = bow_indexer.transform(features_df).withColumnRenamed("raw_features", "features")
bow_train_df, bow_test_df = bow_features_indexed_label_df.randomSplit([0.9, 0.1], seed=12345)
# (14.2) Train a Decision Tree Classifier using the Bag of Words Feature Vectors
bow_decision_tree = DecisionTreeClassifier(featuresCol = 'features', labelCol = 'label')
bow_decision_tree_model = bow_decision_tree.fit(bow_train_df)
# (14.3) Apply the Bag of Words Decision Tree Classifier to the Test DataFrame and generate the Confusion Matrix
bow_test_decision_tree_predictions_df = bow_decision_tree_model.transform(bow_test_df)
bow_predictions_and_label = bow_test_decision_tree_predictions_df.select("prediction", "label").rdd
bow_metrics = MulticlassMetrics(bow_predictions_and_label)
print("N = %g" % bow_test_decision_tree_predictions_df.count())
print(bow_metrics.confusionMatrix())
# (15) Persist the trained Decision Tree Classifier to disk for later use
bow_decision_tree_model.save('/data/workspaces/jillur.quddus/jupyter/notebooks/Machine-Learning-with-Apache-Spark-QuickStart-Guide/chapter06/models/airline-sentiment-analysis-decision-tree-classifier')
# (16) Stop the Spark Context
sc.stop()
```
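The confusion matrices printed by `MulticlassMetrics` above can be reproduced conceptually with plain NumPy. This standalone sketch (with made-up labels and predictions, independent of Spark) shows how the rows and columns relate to actual versus predicted classes:

```python
import numpy as np

def confusion_matrix(labels, predictions, n_classes=2):
    """cm[i, j] counts examples whose actual label is i and predicted label is j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for actual, predicted in zip(labels, predictions):
        cm[int(actual), int(predicted)] += 1
    return cm

labels      = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
predictions = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]
print(confusion_matrix(labels, predictions))
# [[2 1]
#  [1 2]]
```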
## Rotations
This part of the package is by far the most costly operation. It carries out the following steps a large number of times:
1. Rotate all data points
2. Discretize the rotated data
3. Estimate the transition matrix
4. Compute pathways
Unlike obtaining a bootstrap sample, in this case every step of the algorithm has to be run for each rotation, making it very costly.
As this is a very good way to use the package, an example is given at the end showing how to turn the output into a DataFrame for easy export.
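Step 1, rotating all data points, can be illustrated with plain NumPy. This is a generic sketch rather than `driftmlp.rotations.random_ll_rot` itself: points are mapped to 3-D unit vectors and a random orthogonal matrix (a rotation, possibly composed with a reflection) is applied, which preserves all pairwise great-circle angles; this is exactly why rotated replicates yield comparable networks.

```python
import numpy as np

def lonlat_to_xyz(lon, lat):
    """Map a lon/lat point (degrees) to a unit vector on the sphere."""
    lon, lat = np.radians(lon), np.radians(lat)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

rng = np.random.default_rng(0)
# random orthogonal matrix via QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

a = lonlat_to_xyz(-90.90, 23.88)  # Gulf of Mexico point used later
b = lonlat_to_xyz(-9.88, 35.80)   # Strait of Gibraltar point used later

# great-circle angle between the two points, before and after transforming both
before = np.arccos(np.clip(a @ b, -1.0, 1.0))
after = np.arccos(np.clip((Q @ a) @ (Q @ b), -1.0, 1.0))
print(np.isclose(before, after))  # -> True
```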
### Making the networks
The below is the costly part, hence we use multiprocessing to speed it up.
**NOTE:** In this example the `remove_undesired` function is not run on the transition matrix network. We show the consequences of this later, in the Artificial Connections section.
```
import os
import pickle
import numpy as np
from multiprocessing import Pool
import driftmlp
from driftmlp.rotations import random_ll_rot
from driftmlp.drifter_indexing.discrete_system import h3_default
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
rotations = [random_ll_rot() for i in range(8)] #
DFILE = os.environ['DRIFTFILE']
def get_network(rot):
#Discretizers can be troublesome to pickle,
#so apply rotation directly using the lon_lat_transform kwarg
#By specifying variables, we can read the data slightly quicker.
drift_kwargs = {'variables': ['position', 'drogue', 'datetime'],
'drop_na': False,
'drogue': None, # drogue: None gives all drifters, True for drogued only, False for undrogued
'lon_lat_transform': rot}
net = driftmlp.driftfile_to_network(DFILE, drift_kwargs=drift_kwargs)
return net
p = Pool(3)
networks = p.map(get_network, rotations)
p.close()
del p
discretizers = [h3_default(res=3, rot=rotation) for rotation in rotations]
plotting_dfs = [driftmlp.plotting.make_gpd.full_multipolygon_df(discretizer=discretizer) for discretizer in discretizers]
```
The below functions and locations are from the bootstrap notebook example
```
#Copy sample locations from basic_pathways page
from_loc = [-90.90, 23.88]
to_loc =[-9.88, 35.80]
locs = {
"Gulf of Mexico": from_loc,
"Strait of Gibraltar": to_loc,
"North Atlantic" : [-41, 34],
"South Atlantic" : [-14, -27],
"North Pacific" : [-170, 30]
}
N_locs = len(locs)
def make_heatmap(arr, ax):
ax.set_xticks(np.arange(N_locs))
ax.set_yticks(np.arange(N_locs))
# ... and label them with the respective list entries
ax.set_xticklabels(locs.keys())
ax.set_yticklabels(locs.keys())
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(N_locs):
for j in range(N_locs):
text = ax.text(j, i, np.round(arr[i, j]/365,2) ,
ha="center", va="center", color="w")
ax.imshow(arr)
def extract_travel_time_matrix(list_of_lists):
"""
Turn a list of lists of network paths into an array
Prints a warning if there is an invalid path
"""
tt = np.array([[path.travel_time for path in list_of_paths] for list_of_paths in list_of_lists])
if np.any(tt<0):
print("WARNING: There's an invalid path")
return tt
```
### Obtaining pathways and travel times
Note this time each network was created with a different discretizer so we must use the correct discretizer for each network.
```
def rotation_pathways(network, discretizer):
"""
Takes in network and the discretizer used to create the network.
Outputs all pairwise network_path objects between the global dict locs.
"""
h3_ids = discretizer.return_inds(locs.values())
# Get all pathways
return driftmlp.shortest_path.AllPairwisePaths(network, h3_ids)
paths = [rotation_pathways(T, discretizer) for T, discretizer in zip(networks, discretizers)]
travel_times_arr = np.stack([extract_travel_time_matrix(paths_list_of_lists) for paths_list_of_lists in paths])
```
#### Single origin, destination pair
The geopandas DataFrame used for plotting the geometries also depends on the discretizer, so a different one must be used for each transition matrix. Each of these was created at the top of this page.
```
from driftmlp.plotting import h3_cartopy
def plot_od_pair(from_idx, to_idx, networks, plotting_dfs, paths):
fig = plt.figure()
ax = plt.subplot(projection = ccrs.PlateCarree())
for i in range(len(networks)):
# Plot bootstrapped lines with a bit of uncertainty
pathway = paths[i][from_idx][to_idx]
h3_cartopy.plot_line(plotting_dfs[i], pathway.h3id, ax=ax, alpha=0.1, color='red', centroid_col='centroid_col')
ax.coastlines()
plot_od_pair(0,1,networks, plotting_dfs, paths)
```
The results for this particular pathway are very similar to the bootstrap. All pathways strongly agree up until the mid-Atlantic, where the pathways then find different ways to go.
#### Travel time matrix
Similar to the bootstrap example, one can estimate the rotation mean and variance.
```
fig, ax = plt.subplots(1,2, figsize=(8,4))
make_heatmap(travel_times_arr.mean(axis=0), ax=ax[0])
make_heatmap(travel_times_arr.std(axis=0, ddof=1), ax=ax[1])
ax[0].set_title("Mean over rotations (years)")
ax[1].set_title("Standard error over rotations (years)")
fig.tight_layout()
```
Strangely, a few of these pathways have massive standard errors; this is linked to not calling `driftmlp.helpers.remove_undesired`. See the Artificial Connections section at the end of this page.
### Output
A quick example of turning the mean travel times into a pandas DataFrame. Use `.to_csv` or a similar pandas output function to save the result.
```
import pandas as pd
def matrix_to_pd(locs, arr):
names = list(locs.keys())
print(names)
N = len(locs)
origin = [name for _ in names for name in names]
destination = [name for name in names for _ in names]
mean_arr = arr.mean(axis=0)
std_arr = arr.std(axis=0, ddof=1)
mean_tt = [mean_arr[i,j] for j in range(N) for i in range(N)]
std_tt = [std_arr[i,j] for j in range(N) for i in range(N)]
out_df = pd.DataFrame({"origin": origin, "destination":destination, "mean_rotation": mean_tt, "std_tt": std_tt})
out_df = out_df[out_df['mean_rotation']!=0].reset_index()
return out_df
tt_df = matrix_to_pd(locs, travel_times_arr)
tt_df.head()
```
## Artificial Connections
A side benefit of the rotation functionality is that it can highlight artificial connections. For example, we saw a very high standard error going from the North Pacific over to the Gulf of Mexico. We can investigate this by plotting the pathways.
```
plot_od_pair(0,4,networks, plotting_dfs, paths)
plot_od_pair(4,0,networks, plotting_dfs, paths)
```
Both of these pathways occasionally just hop over the Panama Isthmus, a crossing which has never been observed by a drifter and is in fact an artefact of the grid system: a single cell covers both sides of the isthmus.
The package provides functionality to remove the connections over the Strait of Gibraltar and the Panama Isthmus using the `driftmlp.helpers.remove_undesired` function. It takes the points listed in the dictionary below, finds their discrete indices, and automatically deletes those states and the corresponding transitions from the transition matrix.
Note this also results in an invalid transition matrix; however this does not hugely affect the algorithm.
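To illustrate what deleting states means, here is a minimal sketch using a plain dict-of-dicts transition network rather than driftmlp's actual graph internals; the state names are hypothetical.

```python
def remove_states(transitions, states_to_remove):
    """Drop the given states and every transition into or out of them.
    `transitions` maps state -> {neighbour: transition probability}."""
    remove = set(states_to_remove)
    return {
        state: {nbr: p for nbr, p in nbrs.items() if nbr not in remove}
        for state, nbrs in transitions.items()
        if state not in remove
    }

net = {
    'atlantic_cell': {'panama_cell': 0.1, 'atlantic_cell': 0.9},
    'panama_cell':   {'pacific_cell': 1.0},
    'pacific_cell':  {'pacific_cell': 1.0},
}
print(remove_states(net, ['panama_cell']))
# {'atlantic_cell': {'atlantic_cell': 0.9}, 'pacific_cell': {'pacific_cell': 1.0}}
```

After pruning, the outgoing probabilities of `atlantic_cell` no longer sum to one, which is precisely the "invalid transition matrix" effect noted above.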
```
driftmlp.helpers.RM_DICT
```
Let's run the function on all the networks and observe how the results change.
```
for i in range(len(networks)):
driftmlp.helpers.remove_undesired(networks[i], discretizer=discretizers[i])
paths = [rotation_pathways(T, discretizer) for T, discretizer in zip(networks, discretizers)]
travel_times_arr = np.stack([extract_travel_time_matrix(paths_list_of_lists) for paths_list_of_lists in paths])
fig, ax = plt.subplots(1,2, figsize=(8,4))
make_heatmap(travel_times_arr.mean(axis=0), ax=ax[0])
make_heatmap(travel_times_arr.std(axis=0, ddof=1), ax=ax[1])
ax[0].set_title("Mean over rotations (years)")
ax[1].set_title("Standard error over rotations (years)")
fig.tight_layout()
plot_od_pair(0,4,networks, plotting_dfs, paths)
plot_od_pair(4,0,networks, plotting_dfs, paths)
```
Now the standard errors seem less unusual. The North Pacific to Gulf of Mexico travel-time standard error still seems high, but taken relative to the large travel time it is more reasonable. The corresponding pathway also no longer crosses any land masses.
# REINFORCE
---
In this notebook, we will train REINFORCE with OpenAI Gym's CartPole environment.
### 1. Import the Necessary Packages
```
import gym
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.random.set_seed(0)
```
### 2. Define the Architecture of the Policy
```
env = gym.make('CartPole-v0')
env.seed(0)
print('observation space:', env.observation_space)
print('action space:', env.action_space)
class Network(tf.keras.Model):
def __init__(self, h_size=16, a_size=2):
super(Network, self).__init__()
self.fc1 = tf.keras.layers.Dense(h_size)
self.fc2 = tf.keras.layers.Dense(a_size)
def call(self, x):
x = tf.nn.relu(self.fc1(x))
x = self.fc2(x)
return x
class Policy:
def __init__(self, s_size=4, h_size=16, a_size=2):
self.network = Network(h_size, a_size)
@tf.function
def act(self, state):
policy_logits = self.network(state)
sample_action = tf.squeeze(tf.random.categorical(policy_logits, 1), 1)
return sample_action
def neglogprobs(self, policy_logits, actions):
return tf.nn.sparse_softmax_cross_entropy_with_logits(labels=actions,logits=policy_logits)
@tf.function(input_signature=[tf.TensorSpec(shape=(None,None), dtype=tf.float32),
tf.TensorSpec(shape=None, dtype=tf.int32),
tf.TensorSpec(shape=None, dtype=tf.float32)],)
def optimize(self, states, actions, adv):
with tf.GradientTape() as tape:
logits = self.network(states)
neglogprobs = self.neglogprobs(logits, actions)
loss = tf.reduce_sum(neglogprobs * adv)
grads = tape.gradient(loss, self.network.trainable_variables)
optimizer.apply_gradients(zip(grads, self.network.trainable_variables))
def sample_trajectory(env, policy, max_t):
states = []
rewards = []
actions = []
state = env.reset()
for t in range(max_t):
action = policy.act(np.array(state)[None]).numpy()[0]
states.append(state)
actions.append(action)
state, reward, done, _ = env.step(action)
rewards.append(reward)
if done:
break
return np.array(states, dtype=np.float32), np.array(actions, dtype=np.int32), np.array(rewards, dtype=np.float32)
```
### 3. Train the Agent with REINFORCE
```
policy = Policy()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
def reinforce(n_episodes=1000, max_t=1000, gamma=1.0, print_every=100):
scores_deque = deque(maxlen=100)
scores = []
for i_episode in range(1, n_episodes+1):
states, actions, rewards = sample_trajectory(env, policy, max_t)
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
discounts = [gamma**i for i in range(len(rewards)+1)]
R = (sum([a*b for a,b in zip(discounts, rewards)])).astype(np.float32)
states = tf.convert_to_tensor(states)
actions = tf.convert_to_tensor(actions)
tot_reward = tf.convert_to_tensor(R)
policy.optimize(states, actions, tot_reward)
if i_episode % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if np.mean(scores_deque)>=195.0:
print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))
break
return scores
scores = reinforce()
```
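The discounted-return calculation inside `reinforce` (the `discounts`/`R` lines above) can be checked in isolation. The helper below is a small sketch of the same computation, R = sum over t of gamma^t * r_t, on a toy reward sequence:

```python
def discounted_return(rewards, gamma):
    """Total discounted return R = sum over t of gamma**t * rewards[t]."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # -> 1.75
print(discounted_return([1.0, 1.0, 1.0], gamma=1.0))  # -> 3.0
```

With `gamma=1.0` (the default used above for CartPole), the return is simply the undiscounted episode score.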
### 4. Plot the Scores
```
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
### 5. Watch a Smart Agent!
```
env = gym.make('CartPole-v0')
env = gym.wrappers.Monitor(env, 'videos/', force=True)
state = env.reset()
for t in range(1000):
action = policy.act(np.array(state)[None]).numpy()[0]
#env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
# Prerequisites:
Install and import the libraries we need, and import the data.
```
#import sys
#!pip install keras tensorflow sklearn
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from subprocess import check_output
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from sklearn.model_selection import train_test_split
import time #helper libraries
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from numpy import newaxis
import matplotlib.dates as mdates
import datetime as dt
# import historical stock data:
prices_dataset = pd.read_csv('./^GDAXI_1y.csv', header=0)
prices_dataset[-2:]
```
# Data preparation:
In this part, the data is reshaped and normalized so that the model can use it later.
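The core of this preparation is min-max scaling, which maps the raw prices linearly onto [0, 1]. A minimal hand-rolled sketch of the formula that `MinMaxScaler` applies below (the price values are made up):

```python
import numpy as np

def min_max_scale(x):
    """Map values linearly onto [0, 1]: (x - min) / (max - min)."""
    x = np.asarray(x, dtype=np.float64)
    return (x - x.min()) / (x.max() - x.min())

prices = np.array([11000.0, 11500.0, 12000.0])  # made-up closing values
print(min_max_scale(prices))  # -> 0.0, 0.5, 1.0
```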
```
DAX_closing_prices_raw = prices_dataset.Close.values.astype('float32')
# turn list of closing values into list of lists of length 1 containing one closing value each
# so that MinMaxScaler below works:
DAX_closing_prices_raw = DAX_closing_prices_raw.reshape(len(DAX_closing_prices_raw), 1)
time_stamps = [dt.datetime.strptime(d,'%Y-%m-%d').date() for d in prices_dataset.Date]
fig, ax = plt.subplots()
ax.set_xlabel('Date')
ax.set_ylabel('Closing value')
ax.set_title(r'Historic DAX® chart')
# Tweak spacing to prevent clipping of ylabel
fig.tight_layout()
plt.plot(time_stamps,DAX_closing_prices_raw)
ax.set_ylim(ymin=11000)
plt.show()
# normalize data max=1 min=0
scaler = MinMaxScaler(feature_range=(0, 1))
DAX_closing_prices = scaler.fit_transform(DAX_closing_prices_raw)
# define which part of the data is the training set:
train_size = int(len(DAX_closing_prices) * 0.80)
test_size = len(DAX_closing_prices) - train_size
# train: training set, test: test set:
train, test = DAX_closing_prices[0:train_size,:], DAX_closing_prices[train_size:len(DAX_closing_prices),:]
fig, ax = plt.subplots()
ax.set_xlabel('Date')
ax.set_ylabel('Closing value')
ax.set_title(r'Historic DAX® chart')
# Tweak spacing to prevent clipping of ylabel
fig.tight_layout()
plt.plot(time_stamps[0:train_size],DAX_closing_prices_raw[0:train_size],label="training set")
plt.plot(time_stamps[train_size:],DAX_closing_prices_raw[train_size:],label="validation set")
ax.legend()
ax.set_ylim(ymin=11000)
plt.show()
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back), 0]
dataX.append(a)
dataY.append(dataset[i + look_back, 0])
return np.array(dataX), np.array(dataY)
# reshape into X=[DAX_t, DAX_t+1, ..., DAX_t+look_back-1] and Y=DAX_t+look_back
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
# note: this is short — create_dataset yields len(train) - look_back - 1 samples
trainY.shape
```
At this point, `trainX` is a list of DAX closing values, one entry per trading day, i.e., `trainX[10]` is the closing value on the 11th trading day in our training data set.
`trainY`, on the other hand, is the value one day later, i.e., `trainY[10]` is the closing value on the _12th_ trading day in our training data set.
NOTE: if `look_back` is set to anything larger than 1, then `trainX[10]` is a list of length `look_back` that contains the closing values of all trading days from the 11th to the (11+`look_back`-1)th.
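The windowing can be checked on a toy series. This sketch repeats the `create_dataset` logic defined earlier and applies it to a short made-up sequence so the X/Y offset is visible:

```python
import numpy as np

def create_dataset(dataset, look_back=1):
    # same logic as above: each X row holds `look_back` consecutive values,
    # each Y entry is the value immediately after that window
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        dataX.append(dataset[i:(i + look_back), 0])
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)

toy = np.arange(10, dtype=np.float32).reshape(-1, 1)  # 0, 1, ..., 9 as a column
X, Y = create_dataset(toy, look_back=3)
print(X[0], Y[0])  # window [0, 1, 2] predicts 3
```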
```
# reshape input to be [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
```
# Build the LSTM model:
In this step we use Keras-supplied functions to piece together a model to predict the DAX values on the next day based on one or more values before a given day.
```
model = Sequential()
model.add(LSTM(
input_shape=(1, look_back),
units=50,
return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(
100,
return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(
units=1))
model.add(Activation('linear'))
start = time.time()
model.compile(loss='mse', optimizer='rmsprop')
print ('compilation time : ', time.time() - start)
```
# Train the model:
We supply the model with the training set and optimize it. This results in a function that maps the input data to the output data.
```
model.fit(
trainX,
trainY,
batch_size=128,
epochs=100,
validation_split=0.05)
with open("model.json", "w") as json_file:
json_file.write(model.to_json())
```
# Test the prediction:
We apply the correlation function that results from training the model to the validation (or "test") set. This gives an indication of how the model performs.
As a first step, let's look at what the model's prediction based on the training set looks like.
In this case, the model "knows" both the input and the expected output.
It returns its best approximation of the expected output (this is essentially the training step we did above).
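Because the model was trained on scaled data, its output lives in [0, 1] and must be mapped back with `scaler.inverse_transform` before plotting. A small round-trip sketch on made-up values, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

raw = np.array([[11000.0], [11500.0], [12000.0]])  # made-up closing values
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(raw)           # values in [0, 1]
restored = scaler.inverse_transform(scaled)  # back to the original price scale
print(np.allclose(restored, raw))  # True
```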
```
trainY_predicted_scaled = model.predict(trainX)
trainY_predicted = scaler.inverse_transform(trainY_predicted_scaled)
fig, ax = plt.subplots()
ax.set_xlabel('Days')
ax.set_ylabel('Closing value')
ax.set_title(r'Predicted DAX® chart')
plt.plot(scaler.inverse_transform(trainY.reshape(-1,1)), label="training set")
plt.plot(trainY_predicted, label="predicted curve")
ax.legend()
plt.show()
```
Now we provide the model with the test set.
If our model describes the underlying mechanisms well enough, then it should return a good approximation of the actual behaviour of the test data.
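One quick way to quantify "good enough" is the root-mean-square error between the predicted and actual curves. A minimal numpy sketch (the arrays here are made up; in the notebook they would be `testY_predicted` and the inverse-transformed `testY`):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between two equally shaped arrays."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

actual = np.array([12000.0, 12100.0, 12050.0])      # made-up values
predicted = np.array([11990.0, 12110.0, 12040.0])
print(rmse(actual, predicted))  # 10.0
```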
```
testY_predicted_normalized = model.predict(testX)
testY_predicted = scaler.inverse_transform(testY_predicted_normalized)
fig, ax = plt.subplots()
ax.set_xlabel('Days (last day = yesterday)')
ax.set_ylabel('Closing value')
ax.set_title(r'Predicted DAX® chart')
plt.plot(scaler.inverse_transform(testY.reshape(-1,1)), label="actual curve (validation set)")
plt.plot(testY_predicted, label="predicted curve")
ax.legend(loc=3)
plt.show()
```
# For later reference:
```
def plot_results_multiple(predicted_data, true_data, length):
fig, ax = plt.subplots()
ax.set_xlabel('Days')
ax.set_ylabel('Closing value')
ax.set_title(r'Predicted DAX® chart')
# Tweak spacing to prevent clipping of ylabel
fig.tight_layout()
plt.plot(range(1,length+1),scaler.inverse_transform(true_data.reshape(-1, 1))[:length], label='test data')
plt.plot(range(1,length+1),scaler.inverse_transform(np.array(predicted_data).reshape(-1, 1))[:length], label='prediction')
ax.set_ylim(ymin=10000)
plt.xticks(range(1,length+1))
ax.legend()
plt.show()
#predict length consecutive values from a real one
def predict_sequences_multiple(model, firstValue,length):
prediction_seqs = []
curr_frame = firstValue
for i in range(length):
print(i)
predicted = []
print("current frame BEGIN: {}".format(curr_frame))
model_input = curr_frame[newaxis,:,:]
#if i==1:
# model_input=np.array([[[0.8438635 ]],[[0.82055986]]])
print("model input: {}".format(model_input))
model_output = model.predict(model_input)
predicted.append(model_output[0,0])
#print("predicted values: {}".format(predicted))
print("last predicted value: {}".format(predicted[-1]))
curr_frame = curr_frame[0:]
curr_frame = np.insert(curr_frame, i+1, predicted[-1], axis=0)
print("current frame AFTER: {}".format(curr_frame))
prediction_seqs.append(predicted[-1])
return prediction_seqs
#predict_length = 5
#predictions = predict_sequences_multiple(model, testX[0], predict_length)
#plot_results_multiple(predictions, testY, predict_length)
```
```
from SimPEG import *
import simpegEM as EM
from simpegem1d import Utils1D
%pylab inline
import matplotlib
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
matplotlib.rcParams['savefig.dpi'] = 100
mesh3D = Utils.meshutils.readUBCTensorMesh('mesh.msh')
sigma3D = Utils.meshutils.readUBCTensorModel('sigma_realistic.con', mesh3D)
!pwd
x1 = np.arange(30)*10 - 300.
y1 = np.arange(30)*10 - 150.
xyz1 = Utils.ndgrid(x1, y1, np.r_[0.])
xc1 = -150
yc1 = 0.
r1 = np.sqrt((xyz1[:,0]-xc1)**2+(xyz1[:,1]-yc1)**2)
x2 = np.arange(30)*10 + 10.
y2 = np.arange(30)*10 - 150.
xyz2 = Utils.ndgrid(x2, y2, np.r_[0.])
xc2 = 150
yc2 = 0.
r2 = np.sqrt((xyz2[:,0]-xc2)**2+(xyz2[:,1]-yc2)**2)
dobs = np.load('bzobs_realistic.npy')
Dobs = dobs.reshape((900, 31, 2), order='F')
Dobs1 = Dobs[:,:,0]
Dobs2 = Dobs[:,:,1]
fig, ax = plt.subplots(1,2, figsize = (12, 5))
ax[0].contourf(Dobs1)
ax[1].contourf(Dobs2)
meshType = 'CYL'
cs, ncx, ncz, npad = 20., 25, 30, 12
hx = [(cs,ncx), (cs,npad,1.3)]
hz = [(cs,npad,-1.4), (cs,ncz), (cs,npad,1.4)]
mesh = Mesh.CylMesh([hx,1,hz], '00C')
active = mesh.vectorCCz<0.
layer1 = (mesh.vectorCCz<0.) & (mesh.vectorCCz>=-60.)
layer2 = (mesh.vectorCCz<-60) & (mesh.vectorCCz>=-100.)
layer3 = (mesh.vectorCCz<-100) & (mesh.vectorCCz>=-200.)
actMap = Maps.ActiveCells(mesh, active, np.log(1e-8), nC=mesh.nCz)
mapping = Maps.ExpMap(mesh) * Maps.Vertical1DMap(mesh) * actMap
sig_half = 1e-3
sig_air = 1e-8
sig_layer1 = 1./300
sig_layer2 = 1./100
sig_layer3 = 1./10
sigma = np.ones(mesh.nCz)*sig_air
sigma[active] = sig_half
sigma[layer1] = sig_layer1
sigma[layer2] = sig_layer2
sigma[layer3] = sig_layer3
mtrue = np.log(sigma[active])
xc = -100
yc = 100.
dind = np.argmin(abs( xyz1[:,0]-xc)+abs( xyz1[:,1]-yc))
def circfun(xc, yc, r, npoint):
theta = np.linspace(np.pi, -np.pi, npoint)
x = r*np.cos(theta)
y = r*np.sin(theta)
return x+xc, y+yc
xcirc1, ycirc1 = circfun(-150., 0., 250., 60)
xcirc2, ycirc2 = circfun(150., 0., 250., 60)
ind = np.argwhere(xyz1[:,1]==0.)
fig, ax = plt.subplots(1,1, figsize=(7,3))
indz = 20
print(mesh.vectorCCz[indz])
mesh3D.plotSlice(np.log10(sigma3D), ind = indz, ax = ax, clim=(-3, -0.5))
ax.plot(xyz1[:,0], xyz1[:,1], 'r.')
ax.plot(xyz1[ind,0], xyz1[ind,1], 'k.', ms=10)
# ax.plot(xyz2[:,0], xyz2[:,1], 'b.')
ax.plot(xcirc1, ycirc1, 'r-')
# ax.plot(xcirc2, ycirc2, 'b-')
ax.set_xlim(-500, 500)
ax.set_ylim(-300, 300)
sig_test = (sigma[active])
# plt.pcolor(X, Z, np.log10(Sig_test))
# plt.ylim()
fig, ax = plt.subplots(1,1, figsize = (3, 6))
Utils1D.plotLayer(sig_test, mesh.vectorCCz[active], showlayers=True, ax = ax)
ax.set_ylim(-300., 0.)
from pymatsolver import MumpsSolver
prb = EM.TDEM.ProblemTDEM_b(mesh, mapping=mapping, verbose=False)
prb.Solver = MumpsSolver
prb.timeSteps = [(1e-4/10, 15), (1e-3/10, 15), (1e-2/10, 15), (1e-1/10, 15)]
# Mopt = []
# Dest=[]
# # for i in range(Dobs1[ind,:].shape[0]):
# for i in range(1):
# rxoffset=r1[ind][i]
# time = np.logspace(-4, -2, 31)
# rx = EM.TDEM.RxTDEM(np.array([[rxoffset, 0., 0.]]), time, 'bz')
# tx = EM.TDEM.SrcTDEM_CircularLoop_MVP([rx], np.array([0., 0., 0.]), radius = 250.)
# survey = EM.TDEM.SurveyTDEM([tx])
# if prb.ispaired:
# prb.unpair()
# if survey.ispaired:
# survey.unpair()
# prb.pair(survey)
# std = 0.2
# survey.dobs = Utils.mkvc(Dobs1[ind,:][i,:])
# survey.std = survey.dobs*0 + std
# dmisfit = DataMisfit.l2_DataMisfit(survey)
# dmisfit.Wd = 1/(abs(survey.dobs)*std)
# regMesh = Mesh.TensorMesh([mesh.hz[mapping.maps[-1].indActive]])
# reg = Regularization.Tikhonov(regMesh)
# opt = Optimization.InexactGaussNewton(maxIter = 5)
# invProb = InvProblem.BaseInvProblem(dmisfit, reg, opt)
# # Create an inversion object
# beta = Directives.BetaSchedule(coolingFactor=5, coolingRate=2)
# betaest = Directives.BetaEstimate_ByEig(beta0_ratio=1e0)
# inv = Inversion.BaseInversion(invProb, directiveList=[beta,betaest])
# m0 = np.log(np.ones(mtrue.size)*2e-3)
# reg.alpha_s = 1e-2
# reg.alpha_x = 1.
# prb.counter = opt.counter = Utils.Counter()
# opt.LSshorten = 0.5
# opt.remember('xc')
# mopt = inv.run(m0)
# Mopt.append(mopt)
# Dest.append(invProb.dpred)
# fig, ax = plt.subplots(1,1, figsize = (3, 6))
# Utils1D.plotLayer(np.exp(mopt), mesh.vectorCCz[active], showlayers=True, ax = ax)
# ax.set_ylim(-500., 0.)
# np.save('Mopt1_realistic', Mopt)
# np.save('Dest1_realistic', Dest)
Mopt1 = np.load('Mopt1_realistic.npy')
Dest1 = np.load('Dest1_realistic.npy')
Mopt2 = np.load('Mopt2.npy')
Dest2 = np.load('Dest2.npy')
Mopt1_FD = np.load('Mopt1_realistic_FD.npy')
Dest1_FD = np.load('Dest1_realistic_FD.npy')
Mopt2_FD = np.load('Mopt2_realistic_FD.npy')
Dest2_FD = np.load('Dest2_realistic_FD.npy')
sigma2D = np.load('./inv2D_realistic_line/invTEM2D.npy')
sigma2DFD = np.load('./inv2D_FD_realistic_line/invTEM2D.npy')
Sig_test = (sig_test.reshape([1,-1])).repeat(10, axis=0)
x = np.r_[xyz1[ind,0], xyz2[ind,0]]
z = mesh.vectorCCz[active]
Z, X = np.meshgrid(z, x)
time = np.logspace(-4, -2, 31)
z = mesh.vectorCCz[active]
Time, Xtime = np.meshgrid(time, x)
SigMat = np.exp(np.vstack([Mopt1, Mopt2]))
DpreMat = np.vstack([Dest1, Dest2])
SigMat_FD = np.exp(np.vstack([Mopt1_FD, Mopt2_FD]))
DpreMat_FD = np.vstack([Dest1_FD, Dest2_FD])
DobsMat = Utils.mkvc(np.vstack((Dobs1[ind, :], Dobs2[ind, :]))).reshape((60, 31), order='F')
DobsMatFD = np.load('./inv2D_FD_realistic_line/bzobs_FD_realistic_line.npy')
sigest3D = Utils.meshutils.readUBCTensorModel('sigest3D_realistic.con',mesh3D)
# sigest3D = np.load('inv3D_realistic/model_15.npy')
sigest3DFD = Utils.meshutils.readUBCTensorModel('sigest3D_realisticFD.con',mesh3D)
fig, ax = plt.subplots(1,2, figsize = (12, 5))
vmin = np.log10(Utils.mkvc(SigMat).min())
vmax = np.log10(Utils.mkvc(SigMat).max())
indy = 21
indz = 21
dat = mesh3D.plotSlice(np.log10(sigma3D), ind = indz, normal='Z', ax = ax[0], clim=(-3, -0.5))
dat = mesh3D.plotSlice(np.log10(sigma3D), ind = indy, normal='Y', ax = ax[1], clim=(-3, -0.5))
for i in range(2):
if i==0:
ax[i].set_xlabel('Easting (m)', fontsize = 16)
ax[i].set_ylabel('Northing (m)', fontsize = 16)
ax[i].set_ylim(-150., 150.)
ax[i].set_xlim(-300., 300.)
ax[i].set_title(('Depth at %5.2f m')%(mesh3D.vectorCCz[indz]), fontsize = 16)
elif i==1:
ax[i].set_xlabel('Easting (m)', fontsize = 16)
ax[i].set_ylabel('Depth (m)', fontsize = 16)
ax[i].set_ylim(-600., -10.)
ax[i].set_xlim(-300., 300.)
ax[i].set_title(('Northing at %5.2f m')%(mesh3D.vectorCCy[indy]), fontsize = 16)
cb = plt.colorbar(dat[0], ax=ax[i], orientation = 'horizontal', ticks = [np.arange(6)*0.5-3])
cb.set_label('Log10 conductivity (S/m)', fontsize = 14)
fig.savefig('./figures/sigtrue.png', dpi = 200)
fig, ax = plt.subplots(1,2, figsize = (12, 5))
# vmin = np.log10(Utils.mkvc(SigMat).min())
# vmax = np.log10(Utils.mkvc(SigMat).max())
dat = mesh3D.plotSlice(np.log10(sigest3D), ind = indz, normal='Z', ax = ax[0], clim=(-3, -0.5))
dat = mesh3D.plotSlice(np.log10(sigest3D), ind = indy, normal='Y', ax = ax[1], clim=(-3, -0.5))
for i in range(2):
if i==0:
ax[i].set_xlabel('Easting (m)', fontsize = 16)
ax[i].set_ylabel('Northing (m)', fontsize = 16)
ax[i].set_ylim(-150., 150.)
ax[i].set_xlim(-300., 300.)
ax[i].set_title(('Depth at %5.2f m')%(mesh3D.vectorCCz[indz]), fontsize = 16)
elif i==1:
ax[i].set_xlabel('Easting (m)', fontsize = 16)
ax[i].set_ylabel('Depth (m)', fontsize = 16)
ax[i].set_ylim(-600., -10.)
ax[i].set_xlim(-300., 300.)
ax[i].set_title(('Northing at %5.2f m')%(mesh3D.vectorCCy[indy]), fontsize = 16)
cb = plt.colorbar(dat[0], ax=ax[i], orientation = 'horizontal', ticks = [np.arange(6)*0.5-3])
cb.set_label('Log10 conductivity (S/m)', fontsize = 14)
fig.savefig('./figures/sigestTD.png', dpi = 200)
fig, ax = plt.subplots(1,2, figsize = (12, 5))
# vmin = np.log10(Utils.mkvc(SigMat).min())
# vmax = np.log10(Utils.mkvc(SigMat).max())
dat = mesh3D.plotSlice(np.log10(sigest3DFD), ind = indz, normal='Z', ax = ax[0], clim=(-3, -0.5))
dat = mesh3D.plotSlice(np.log10(sigest3DFD), ind = indy, normal='Y', ax = ax[1], clim=(-3, -0.5))
for i in range(2):
if i==0:
ax[i].set_xlabel('Easting (m)', fontsize = 16)
ax[i].set_ylabel('Northing (m)', fontsize = 16)
ax[i].set_ylim(-150., 150.)
ax[i].set_xlim(-300., 300.)
ax[i].set_title(('Depth at %5.2f m')%(mesh3D.vectorCCz[indz]), fontsize = 16)
elif i==1:
ax[i].set_xlabel('Easting (m)', fontsize = 16)
ax[i].set_ylabel('Depth (m)', fontsize = 16)
ax[i].set_ylim(-600., -10.)
ax[i].set_xlim(-300., 300.)
ax[i].set_title(('Northing at %5.2f m')%(mesh3D.vectorCCy[indy]), fontsize = 16)
cb = plt.colorbar(dat[0], ax=ax[i], orientation = 'horizontal', ticks = [np.arange(6)*0.5-3])
cb.set_label('Log10 conductivity (S/m)', fontsize = 14)
fig.savefig('./figures/sigestFD.png', dpi = 200)
fig, ax = plt.subplots(1,2, figsize = (12, 5))
vmin = np.log10(Utils.mkvc(SigMat).min())
vmax = np.log10(Utils.mkvc(SigMat).max())
ax[0].contourf(X, Z, np.log10(SigMat), 100, vmin = -3, vmax = -0.5)
dat = mesh3D.plotSlice(np.log10(sigma2D), ind = 21, normal='Y', ax = ax[1], clim=(-3, -0.5))
for i in range(2):
ax[i].set_ylim(-600., -10.)
ax[i].set_xlim(-300., 300.)
ax[i].set_xlabel('Easting (m)', fontsize = 16)
ax[i].set_ylabel('Depth (m)', fontsize = 16)
if i==0:
ax[i].set_title('1D stitched inversion model (TEM)', fontsize = 16)
elif i==1:
ax[i].set_title(('2D inversion model (TEM)'), fontsize = 16)
cb = plt.colorbar(dat[0], ax=ax[i], orientation = 'horizontal', ticks = [np.arange(6)*0.5-3])
cb.set_label('Log10 conductivity (S/m)', fontsize = 14)
fig.savefig('./figures/1DinvTD.png', dpi = 200)
fig, ax = plt.subplots(1,2, figsize = (12, 5))
vmin = np.log10(Utils.mkvc(SigMat_FD).min())
vmax = np.log10(Utils.mkvc(SigMat_FD).max())
ax[0].contourf(X, Z, np.log10(SigMat_FD), 100, vmin = -3, vmax = -0.5)
dat = mesh3D.plotSlice(np.log10(sigma2DFD), ind = 21, normal='Y', ax = ax[1], clim=(-3, -0.5))
for i in range(2):
ax[i].set_ylim(-600., -10.)
ax[i].set_xlim(-300., 300.)
ax[i].set_xlabel('Easting (m)', fontsize = 16)
ax[i].set_ylabel('Depth (m)', fontsize = 16)
if i==0:
ax[i].set_title('1D stitched inversion for FEM', fontsize = 16)
elif i==1:
ax[i].set_title(('2.5D inversion model (FEM)'), fontsize = 16)
cb = plt.colorbar(dat[0], ax=ax[i], orientation = 'horizontal', ticks = [np.arange(5)*0.5-3])
cb.set_label('Log10 conductivity (S/m)', fontsize = 14)
fig.savefig('./figures/1DinvFD.png', dpi = 200)
# fig, ax = plt.subplots(1,2, figsize = (12, 5))
# vmin = np.log10(Utils.mkvc(DpreMat).min())
# vmax = np.log10(Utils.mkvc(DpreMat).max())
# ax[1].contourf(Xtime, np.log10(Time), np.log10(DpreMat), 31, vmin = vmin, vmax = vmax)
# ax[0].contourf(Xtime, np.log10(Time), np.log10(DobsMat), 31, vmin = vmin, vmax = vmax)
# Itime = [0, 5, 10, 15, 20, 25, 30]
Itime = [0, 15, 30]
color = ['k', 'b', 'r', 'g', 'c', 'm', 'y']
legendobs1 = [('Obs %3.1f ms')%(time[itime]*1e3) for itime in Itime ]
legendobs2 = [('Pred1D %3.1f ms')%(time[itime]*1e3) for itime in Itime ]
legendobs3 = [('Pred2D %3.1f ms')%(time[itime]*1e3) for itime in Itime ]
legendobs = np.r_[legendobs1, legendobs2, legendobs3]
dpredline = np.load('./inv2D_realistic_line/dpred_13.npy')
dpredline = dpredline.reshape((30, 31, 2), order='F')
fig, ax = plt.subplots(1,1, figsize = (6, 4))
for icount, itime in enumerate(Itime):
plt.semilogy(np.r_[xyz1[ind,0], xyz2[ind,0]], Utils.mkvc(DpreMat[:,itime]), color[icount])
for icount, itime in enumerate(Itime):
plt.semilogy(np.r_[xyz1[ind,0], xyz2[ind,0]], Utils.mkvc(DobsMat[:,itime]), color[icount]+'o', ms = 3)
for icount, itime in enumerate(Itime):
plt.semilogy(np.r_[xyz1[ind,0], xyz2[ind,0]], Utils.mkvc(dpredline[:,itime,:]), color[icount]+'--', ms = 3)
ax.legend(legendobs,bbox_to_anchor=(1.4, 1.00), fontsize = 10)
ax.set_xlabel('Easting (m)', fontsize = 14)
ax.set_ylabel('bz (T)', fontsize = 14)
fig.savefig('./figures/1dinvobspred_TD.png', dpi = 200)
frequency = np.r_[1, 10., 100.]
dobs = np.load('bzobs_FD_realistic_line.npy')
dest = np.load('inv2D_FD_realistic_line/dpred_14.npy')
Dpred = abs(dobs.reshape((30, 2, frequency.size, 2), order='F'))
Dest = abs(dest.reshape((30, 2, frequency.size, 2), order='F'))
DpreMat_FD.shape
legendobs1 = [('Obs %3.0f Hz')%(freq) for freq in frequency ]
legendobs2 = [('Pred1D %3.0f Hz')%(freq) for freq in frequency ]
legendobs3 = [('Pred2.5D %3.0f Hz')%(freq) for freq in frequency ]
legendobs = np.r_[legendobs1, legendobs2, legendobs3]
ind = xyz1[:,1] == 0.
absFD = lambda x, y: np.sqrt(x**2+y**2)
mradFD = lambda x, y: np.angle(x+1j*y)*1e3
realind = [0, 2, 4]
imagind = [1, 3, 5]
fig, ax = plt.subplots(1,1, figsize = (6, 4))
for itime in range(3):
abs1 = absFD(Utils.mkvc(Dpred[:,0,itime,0]), Utils.mkvc(Dpred[:,1,itime,1]))
abs2 = absFD(Utils.mkvc(Dpred[:,0,itime,0]), Utils.mkvc(Dpred[:,1,itime,1]))
ax.semilogy(np.r_[xyz1[ind,0], xyz2[ind,0]], np.r_[abs1, abs2] , color[itime])
for itime in range(3):
abs3 = absFD(Utils.mkvc(DpreMat_FD[:,realind[itime]]), Utils.mkvc(DpreMat_FD[:,imagind[itime]]) )
ax.semilogy(np.r_[xyz1[ind,0], xyz2[ind,0]], abs3, color[itime]+'--', ms = 3)
for itime in range(3):
abs1 = absFD(Utils.mkvc(Dest[:,0,itime,0]), Utils.mkvc(Dest[:,1,itime,1]))
abs2 = absFD(Utils.mkvc(Dest[:,0,itime,0]), Utils.mkvc(Dest[:,1,itime,1]))
ax.semilogy(np.r_[xyz1[ind,0], xyz2[ind,0]], np.r_[abs1, abs2], color[itime]+'o', ms = 3)
ax.legend(legendobs,bbox_to_anchor=(1.4, 1.00), fontsize = 10)
ax.set_xlabel('Easting (m)', fontsize = 14)
ax.set_ylabel('Amplitude of Bz (T)', fontsize = 14)
fig.savefig('./figures/1dinvobspred_amp_FD.png', dpi = 200)
ind = xyz1[:,1] == 0.
absFD = lambda x, y: np.sqrt(x**2+y**2)
mradFD = lambda x, y: np.angle(x+1j*y)*1e3
fig, ax = plt.subplots(1,1, figsize = (6, 4))
for itime in range(3):
phase1 = mradFD(Utils.mkvc(Dpred[:,0,itime,0]), Utils.mkvc(Dpred[:,1,itime,1]))
phase2 = mradFD(Utils.mkvc(Dpred[:,0,itime,0]), Utils.mkvc(Dpred[:,1,itime,1]))
ax.plot(np.r_[xyz1[ind,0], xyz2[ind,0]], np.r_[phase1, phase2], color[itime])
for itime in range(3):
phase3 = mradFD(Utils.mkvc(DpreMat_FD[:,realind[itime]]), Utils.mkvc(DpreMat_FD[:,imagind[itime]]) )
ax.plot(np.r_[xyz1[ind,0], xyz2[ind,0]], phase3+np.pi*1e3, color[itime]+'--', ms = 3)
for itime in range(3):
phase1 = mradFD(Utils.mkvc(Dest[:,0,itime,0]), Utils.mkvc(Dest[:,1,itime,1]))
phase2 = mradFD(Utils.mkvc(Dest[:,0,itime,0]), Utils.mkvc(Dest[:,1,itime,1]))
ax.plot(np.r_[xyz1[ind,0], xyz2[ind,0]], np.r_[phase1, phase2], color[itime]+'o', ms = 3)
ax.legend(legendobs,bbox_to_anchor=(1.4, 1.00), fontsize = 10)
ax.set_xlabel('Easting (m)', fontsize = 14)
ax.set_ylabel('Phase of Bz (mrad)', fontsize = 14)
fig.savefig('./figures/1dinvobspred_pahse_FD.png', dpi = 200)
```
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from tqdm import tqdm
%matplotlib inline
transform = transforms.Compose(
[transforms.CenterCrop((28,28)),transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# transform = transforms.Compose(
# [transforms.ToTensor(),transforms.CenterCrop(28,28)])
cifar_trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
cifar_testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
corruption_percentage = 0.1
cifar_trainset_random = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
np.random.seed(1234)
mask = np.random.uniform(0,1,50000) < corruption_percentage
a = np.array(cifar_trainset_random.targets)
print("true",a[mask])
a[mask] = np.random.randint(0,10,sum(mask))
print("randomized",a[mask])
cifar_trainset_random.targets = list(a)
cifar_trainset_random.targets
# cifar_trainset_random.targets[:50000] = np.random.randint(low=0,high=9,size=50000)
trainloader_random = torch.utils.data.DataLoader(cifar_trainset_random,batch_size=256,shuffle=False,num_workers=2)
np.unique(cifar_trainset.targets),sum(mask)
trainloader = torch.utils.data.DataLoader(cifar_trainset, batch_size=256,
shuffle=False, num_workers=2)
testloader = torch.utils.data.DataLoader(cifar_testset, batch_size=256,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
class Conv_module(nn.Module):
def __init__(self,inp_ch,f,s,k,pad):
super(Conv_module,self).__init__()
self.inp_ch = inp_ch
self.f = f
self.s = s
self.k = k
self.pad = pad
self.conv = nn.Conv2d(self.inp_ch,self.f,k,stride=s,padding=self.pad)
self.bn = nn.BatchNorm2d(self.f)
self.act = nn.ReLU()
def forward(self,x):
x = self.conv(x)
x = self.bn(x)
x = self.act(x)
return x
conv = Conv_module(3,64,1,3,1)
# conv.forward(images).shape
class inception_module(nn.Module):
def __init__(self,inp_ch,f0,f1):
super(inception_module, self).__init__()
self.inp_ch = inp_ch
self.f0 = f0
self.f1 = f1
self.conv1 = Conv_module(self.inp_ch,self.f0,1,1,pad=0)
self.conv3 = Conv_module(self.inp_ch,self.f1,1,3,pad=1)
#self.conv1 = nn.Conv2d(3,self.f0,1)
#self.conv3 = nn.Conv2d(3,self.f1,3,padding=1)
def forward(self,x):
x1 = self.conv1.forward(x)
x3 = self.conv3.forward(x)
#print(x1.shape,x3.shape)
x = torch.cat((x1,x3),dim=1)
return x
inc_module = inception_module(96,32,32)
conv_module = Conv_module(3,96,1,1,0)
class downsample_module(nn.Module):
def __init__(self,inp_ch,f):
super(downsample_module,self).__init__()
self.inp_ch = inp_ch
self.f = f
self.conv = Conv_module(self.inp_ch,self.f,2,3,pad=0)
self.pool = nn.MaxPool2d(3,stride=2,padding=0)
def forward(self,x):
x1 = self.conv(x)
#print(x1.shape)
x2 = self.pool(x)
#print(x2.shape)
x = torch.cat((x1,x2),dim=1)
return x,x1
class inception_net(nn.Module):
def __init__(self):
super(inception_net,self).__init__()
self.conv1 = Conv_module(3,96,1,3,0)
self.incept1 = inception_module(96,32,32)
self.incept2 = inception_module(64,32,48)
self.downsample1 = downsample_module(80,80)
self.incept3 = inception_module(160,112,48)
self.incept4 = inception_module(160,96,64)
self.incept5 = inception_module(160,80,80)
self.incept6 = inception_module(160,48,96)
self.downsample2 = downsample_module(144,96)
self.incept7 = inception_module(240,176,60)
self.incept8 = inception_module(236,176,60)
self.pool = nn.AvgPool2d(7)
self.linear = nn.Linear(236,10)
def forward(self,x):
x = self.conv1.forward(x)
#act1 = x
x = self.incept1.forward(x)
#act2 = x
x = self.incept2.forward(x)
#act3 = x
x,act4 = self.downsample1.forward(x)
x = self.incept3.forward(x)
#act5 = x
x = self.incept4.forward(x)
#act6 = x
x = self.incept5.forward(x)
#act7 = x
x = self.incept6.forward(x)
#act8 = x
x,act9 = self.downsample2.forward(x)
x = self.incept7.forward(x)
#act10 = x
x = self.incept8.forward(x)
#act11 = x
x = self.pool(x)
x = x.view(-1,1*1*236)
x = self.linear(x)
#activatn = {"act1":act1,"act2":act2,"act3":act3,"act4":act4,"act5":act5,"act6":act6,
# "act7":act7,"act8":act8,"act9":act9,"act10":act10,"act11":act11}
return x
inc_net_random_labels = inception_net()
inc_net_random_labels = inc_net_random_labels.to("cuda")
criterion_inc_rand = nn.CrossEntropyLoss()
optimizer_inc_rand = optim.SGD(inc_net_random_labels.parameters(), lr=0.01, momentum=0.9)
# actri = []
# lossr_curi = []
# inc_net_random_labels.train()
# for epoch in range(100): # loop over the dataset multiple times
# ep_lossri = []
# running_loss = 0.0
# for i, data in enumerate(trainloader_random, 0):
# # get the inputs
# inputs, labels = data
# inputs,labels = inputs.to("cuda"),labels.to("cuda")
# # zero the parameter gradients
# optimizer_inc_rand.zero_grad()
# # forward + backward + optimize
# outputs = inc_net_random_labels(inputs)
# loss = criterion_inc_rand(outputs, labels)
# loss.backward()
# optimizer_inc_rand.step()
# # print statistics
# running_loss += loss.item()
# if i % 50 == 49: # print every 50 mini-batches
# print('[%d, %5d] loss: %.3f' %
# (epoch + 1, i + 1, running_loss / 50))
# ep_lossri.append(running_loss)
# running_loss = 0.0
# lossr_curi.append(np.mean(ep_lossri)) #loss per epoch
# # if (epoch%5 == 0):
# # _,actirs= inc(inputs)
# # actri.append(actirs)
# print('Finished Training')
inc_net_random_labels.load_state_dict(torch.load("inception10_lr_random_01.pt"))
correct = 0
total = 0
inc_net_random_labels.eval()
with torch.no_grad():
for data in trainloader_random:
images, labels = data
images,labels = images.to("cuda"),labels.to("cuda")
outputs = inc_net_random_labels(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 50000 train images: %d %%' % (
100 * correct / total))
correct = 0
total = 0
loss = 0
batch = 0
inc_net_random_labels.eval()
with torch.no_grad():
for data in testloader:
images, labels = data
images,labels = images.to("cuda"),labels.to("cuda")
outputs = inc_net_random_labels(images)
loss += criterion_inc_rand(outputs, labels)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
batch+=1
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
print("loss", loss/batch)
def accuracy(a, b):
    # fraction of matching entries; a.shape[0] (not a.shape, which is a tuple)
    length = a.shape[0]
    correct = a == b
    return sum(correct) / length
correct = 0
total = 0
train_loss = 0
true = []
pred=[]
out =[]
inc_net_random_labels.eval()
with torch.no_grad():
for data in trainloader_random:
images, labels = data
images,labels = images.to("cuda"),labels.to("cuda")
true.append(labels.cpu().numpy())
outputs = inc_net_random_labels(images)
out.append(outputs.cpu())
loss = criterion_inc_rand(outputs, labels)
train_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 50000 train images: %d %%' % (
100 * correct / total))
true_targets = np.concatenate(true,axis=0)
predicted_targets = np.concatenate(pred,axis =0)
print("---"*20)
print("Train accuracy on corrupt data",accuracy(true_targets[mask], predicted_targets[mask]))
print("Train accuracy on un-corrupt data",accuracy(true_targets[~mask], predicted_targets[~mask]))
print("Train accuracy on full data", accuracy(true_targets, predicted_targets))
# print( sum (predicted_targets == np.argmax(out, axis =1)))
l= np.where(mask ==True)
p = np.where(mask == False)
out = torch.cat(out, dim =0)
print("Train cross entropy loss on corrupt data", criterion_inc_rand(out[l], torch.Tensor(true_targets[l]).type(torch.LongTensor)))
print("Train cross entropy loss on un-corrupt data",criterion_inc_rand(out[p], torch.Tensor(true_targets[p]).type(torch.LongTensor)))
print("Train cross entropy loss on full data",criterion_inc_rand(out, torch.Tensor(true_targets).type(torch.LongTensor)))
print("---"*20)
testset=cifar_testset
testset_len = len(testset.targets)
np.random.seed(1234)
mask1 = np.random.uniform(0,1,testset_len) < corruption_percentage
a = np.array(testset.targets)
print("true",a[mask1])
print(np.sum(mask1))
a[mask1] = np.random.randint(0,10,sum(mask1))
print("randomized",a[mask1])
testset.targets = list(a)
correct = 0
total = 0
test_loss = 0
true = []
pred=[]
out =[]
inc_net_random_labels.eval()
with torch.no_grad():
for data in testloader:
images, labels = data
images,labels = images.to("cuda"),labels.to("cuda")
true.append(labels.cpu().numpy())
outputs = inc_net_random_labels(images)
out.append(outputs.cpu())
loss = criterion_inc_rand(outputs, labels)
test_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
true_targets = np.concatenate(true,axis=0)
predicted_targets = np.concatenate(pred,axis =0)
print("---"*20)
print("Test accuracy on corrupt data",accuracy(true_targets[mask1], predicted_targets[mask1]))
print("Test accuracy on un-corrupt data",accuracy(true_targets[~mask1], predicted_targets[~mask1]))
print("Test accuracy on full data", accuracy(true_targets, predicted_targets))
l = np.where(mask1 ==True)
p = np.where(mask1 == False)
out = torch.cat(out, dim =0)
print("Test cross entropy loss on corrupt data", criterion_inc_rand(out[l], torch.Tensor(true_targets[l]).type(torch.LongTensor)))
print("Test cross entropy loss on un-corrupt data",criterion_inc_rand(out[p], torch.Tensor(true_targets[p]).type(torch.LongTensor)))
print("Test cross entropy loss on full data",criterion_inc_rand(out, torch.Tensor(true_targets).type(torch.LongTensor)))
print("---"*20)
```
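The label-corruption trick used above — a Bernoulli mask followed by random relabeling — is easier to see on a toy label vector. This is a minimal sketch of the same mechanism (the array size and corruption rate are made up):

```python
import numpy as np

np.random.seed(1234)
corruption_percentage = 0.3
labels = np.arange(10)  # toy labels 0..9

# each label is independently selected for corruption with probability ~0.3
mask = np.random.uniform(0, 1, labels.size) < corruption_percentage
corrupted = labels.copy()
corrupted[mask] = np.random.randint(0, 10, mask.sum())

# only masked positions can differ from the originals
print((corrupted[~mask] == labels[~mask]).all())  # True
```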
# Centralized Learning Simulator
Simulating the `Centralized Learning` paradigm.
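The hard-coded `normMean`/`normStd` values below come from the commented-out `get_norm` step. A hedged numpy sketch of what such a computation does — per-channel mean and std over an image batch of shape (N, H, W, C) — using made-up random data rather than CIFAR-10:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std for a batch shaped (N, H, W, C)."""
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2))
    return mean, std

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32, 3))  # made-up stand-in for CIFAR images in [0, 1]
mean, std = channel_stats(batch)
print(mean.shape, std.shape)  # (3,) (3,)
```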
```
import import_ipynb
import nn.ml as ml
import nn.nets as nets
if __name__ == "__main__":
import os
from copy import deepcopy
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader # TODO: DistributedDataParallel
"""Hyperparams"""
numWorkers = 4
cuda = True
base_path = './simul_centralized'
os.makedirs(base_path, exist_ok=True)
trainFile = open(os.path.join(base_path, 'train.csv'), 'w')
testFile = open(os.path.join(base_path, 'test.csv'), 'w')
epochs = 3000
batchSz = 64
"""Datasets"""
# # gets mean and std
# transform = transforms.Compose([transforms.ToTensor()])
# dataset = dset.CIFAR10(root='cifar', train=True, download=True, transform=transform)
# normMean, normStd = dist.get_norm(dataset)
normMean = [0.49139968, 0.48215841, 0.44653091]
normStd = [0.24703223, 0.24348513, 0.26158784]
normTransform = transforms.Normalize(normMean, normStd)
trainTransform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normTransform
])
testTransform = transforms.Compose([
transforms.ToTensor(),
normTransform
])
trainset = dset.CIFAR10(root='cifar', train=True, download=True, transform=trainTransform)
testset = dset.CIFAR10(root='cifar', train=False, download=True, transform=testTransform)
# num_workers: number of CPU cores to use for data loading
# pin_memory: pinning host memory speeds up host-to-device transfers
kwargs = {'num_workers': numWorkers, 'pin_memory': cuda}
# loaders
trainLoader = DataLoader(trainset, batch_size=batchSz, shuffle=True, **kwargs)
testLoader = DataLoader(testset, batch_size=batchSz, shuffle=True, **kwargs)
"""Nets"""
num_classes = 10
resnet = nets.resnet18(num_classes=num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(resnet.parameters(), lr=1e-1, momentum=0.9)
if cuda:
# if multi-gpus
if torch.cuda.device_count() > 1:
resnet = nn.DataParallel(resnet)
# use cuda
resnet.cuda()
"""Train & Test models"""
for epoch in range(epochs):
ml.train(
resnet, criterion, optimizer, trainLoader,
epoch=epoch, cuda=cuda, log=True, log_file=trainFile
# alpha=0.9, temperature=4
)
ml.test(
resnet, criterion, testLoader,
epoch=epoch, cuda=cuda, log=True, log_file=testFile
)
```
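The `nn.ml` helpers imported above are defined in companion notebooks and not shown here. Purely as an illustrative sketch (not the project's actual API — `train_epoch` and its signature are our own invention), a `train` function of this kind typically loops over the loader, computes the loss, backpropagates, and steps the optimizer:

```python
import torch
import torch.nn as nn

def train_epoch(model, criterion, optimizer, loader, cuda=False):
    """One training epoch; returns the average loss (illustrative sketch only)."""
    model.train()
    total_loss, batches = 0.0, 0
    for images, labels in loader:
        if cuda:
            images, labels = images.cuda(), labels.cuda()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # forward pass
        loss.backward()                          # backward pass
        optimizer.step()                         # parameter update
        total_loss += loss.item()
        batches += 1
    return total_loss / max(batches, 1)

# Tiny smoke test on random data instead of CIFAR-10
model = nn.Linear(8, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(4, 8), torch.randint(0, 10, (4,))) for _ in range(3)]
avg = train_epoch(model, nn.CrossEntropyLoss(), opt, loader)
print(f"avg loss: {avg:.3f}")
```

The real helpers presumably also handle logging to the CSV files opened above.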
<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/PyViz_logo_wm_line.png" />
<div style="float:right; vertical-align:text-bottom;"><h2>Strata NY 2018 Tutorial Index</h2></div>
## Welcome to the PyViz tutorial at Strata New York 2018!
This tutorial will take you through all of the steps involved in exploring data of many different types and sizes, building simple and complex figures, working with billions of data points, adding interactive behavior, widgets and controls, and deploying full dashboards and applications.
The tutorial outlined here is given as a half-day course led by trained instructors. For self-paced usage, you should consult the main [tutorial index](index.ipynb).
We'll be using a wide range of open-source Python libraries, including the [Anaconda](http://anaconda.com)-supported tools
[HoloViews](http://holoviews.org),
[GeoViews](http://geo.holoviews.org),
[Bokeh](http://bokeh.pydata.org),
[Datashader](https://datashader.org), and
[Param](http://ioam.github.io/param):
<img height="148" width="800" src="../assets/hv_gv_bk_ds_pa.png"/>
These libraries have been carefully designed to work together to address a very wide range of data-analysis and visualization tasks, making it simple to discover, understand, and communicate the important properties of your data. For more background on these tools and why and how they are integrated, see [PyViz.org/background](http://pyviz.org/background.html).
<img align="center" src="../assets/tutorial_app.gif"></img>
This notebook serves as the homepage of the tutorial, including a table of contents listing each tutorial section. Here, timings listed in brackets ("*[2 min]*") indicate material that will be skimmed in the live tutorial, but which can be examined later in more detail as self-paced material.
## Index and Schedule
- *Overview*
* **10 min** [0 - Setup](./00_Setup.ipynb): Setting up the environment and data files.
* **40 min** [1 - Workflow Introduction](./01_Workflow_Introduction.ipynb): Overview of solving a simple but complete data-science task, using each of the main PyViz tools.
* **5 min** *Break*<br><br>
- *Making data visualizable*
* **30 min** [2 - Annotating Data](./02_Annotating_Data.ipynb): Using HoloViews Elements to make your data instantly visualizable
* **20 min** [3 - Customizing Visual Appearance](./03_Customizing_Visual_Appearance.ipynb): How to change the appearance and output format of elements.
* **10 min** [*Exercise 1*](../exercises/Exercise-1-making-data-visualizable.ipynb)
* **10 min** *Break*<br><br>
- *Datasets and collections of data*
* **30 min** [4 - Working with Tabular Data](./04_Working_with_Tabular_Data.ipynb): Exploring tabular/columnar data.
* *[2 min]* [5 - Working with Gridded Data](./05_Working_with_Gridded_Data.ipynb): Exploring a gridded (n-dimensional) dataset.
* **10 min** [*Exercise 2*](../exercises/Exercise-2-datasets-and-collections-of-data.ipynb)
* *[2 min]* [6 - Network Graphs](./06_Network_Graphs.ipynb): Exploring network graph data.
* *[2 min]* [7 - Geographic Data](./07_Geographic_Data.ipynb): Plotting data in geographic coordinates.
    * *[omit]* [*Exercise 3*](../exercises/Exercise-3-networks-and-geoviews.ipynb)<br><br>
- *Dynamic interactions*
* **25 min** [8 - Custom Interactivity](./08_Custom_Interactivity.ipynb): Using HoloViews "streams" to add interactivity to your visualizations.
* *[2 min]* [9 - Operations and Pipelines](./09_Operations_and_Pipelines.ipynb): Dynamically transforming your data as needed
* **20 min** [10 - Working with Large Datasets](./10_Working_with_Large_Datasets.ipynb): Using datasets too large to feed directly to your browser.
* *[2 min]* [11 - Streaming Data](./11_Streaming_Data.ipynb): Live plots of dynamically updated data sources.
* **15 min** [*Exercise 4*](../exercises/Exercise-4-dynamic-interactions.ipynb)<br><br>
- *Apps and dashboards*
* *[2 min]* [12 - Parameters and Widgets](./12_Parameters_and_Widgets.ipynb): Declarative custom controls
* *[2 min]* [13 - Deploying Bokeh Apps](./13_Deploying_Bokeh_Apps.ipynb): Deploying your visualizations using Bokeh server.
* *[2 min]* [A1 - Exploration with Containers](./A1_Exploration_with_Containers.ipynb): Containers that let you explore complex datasets.
* *[2 min]* [A2 - Dashboard Workflow](./A2_Dashboard_Workflow.ipynb): PyViz intro for people focusing on dashboards.
* *[omit]* [*Exercise 5*](../exercises/Exercise-5-exporting-and-deploying-apps.ipynb)
## Related links
You will find extensive support material on the websites for each package. You may find these links particularly useful during the tutorial:
* [HoloViews reference gallery](http://holoviews.org/reference/index.html): Visual reference of all elements and containers, along with some other components
* [HoloViews getting-started guide](http://holoviews.org/getting_started/index.html): Covers some of the same topics as this tutorial, but without exercises
# Software Design for Scientific Computing
----
## Unit 1: High-level languages - Python
### Unit 1 agenda
---
- Class 1:
- Differences between high and low level.
- Dynamic and static languages.
- Limbo:
- **Introduction to the Python language.**
- Scientific computing libraries.
- Classes Limbo + 1 and Limbo + 2:
- Object orientation, decorators.
### Python language review
----
- We will go over the basic structures and a couple of libraries in the next few minutes.
- We will use **Python 3.7**.
- It comes with some historical perspective.
#### Sources
- https://es.wikipedia.org/wiki/Python
- https://en.wikipedia.org/wiki/Python
### About Python
---
- Python was created in the late 1980s by Guido van Rossum at the Centrum Wiskunde & Informatica (CWI) in the Netherlands, as a successor to the ABC programming language.
- It is managed by the Python Software Foundation. It has an open-source license, the *Python Software Foundation License*, which is compatible with the GNU General Public License from version 2.1.1 onwards, and incompatible with certain earlier versions.
- The language evolves through documents known as **PEPs** (Python Enhancement Proposals).
- The PEP process itself is described in [PEP 001](https://www.python.org/dev/peps/pep-0001/)

### About Python
----
- Python is an $interpreted^*$ programming language whose philosophy emphasizes a syntax that favors readable code.
- It uses **strong, dynamic typing**.
- It is multi-paradigm: it supports object orientation, structured programming, metaprogramming and, to a lesser extent, functional programming.
- Many other paradigms are supported via extensions, such as logic programming and contract-oriented programming.
- The language's philosophy is summarized in the *Zen of Python* ([PEP 020](https://www.python.org/dev/peps/pep-0020/)).
- The code style expected in Python is described in [PEP 008](https://www.python.org/dev/peps/pep-0008/).
#### Note
All code evaluated in this course must be reasoned against PEP 20 and written according to the rules of PEP 8.
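The Zen of Python mentioned above ships with the interpreter itself; you can print it at any time with the `this` module:

```python
import this  # printing the Zen of Python is the module's import side effect
```

Running the cell prints the nineteen aphorisms of PEP 20, starting with "Beautiful is better than ugly."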
### About Python
---
#### Implementations
- **CPython** is what we use day to day; it is the reference implementation.
- **Jython** compiles to Java bytecode, so it can run on the Java virtual machine and interact with any program written for that VM.
- **IronPython** compiles Python to the .NET/Mono Common Language Runtime.
- **RPython** (Restricted Python) compiles to C, Java bytecode, or .NET/Mono. It is what the **PyPy** project is built on.
- **Cython** compiles Python to C/C++ (we will cover this one).
- **Numba** compiles Python to machine code (we will cover this one too).
- Google's **Grumpy** compiles to Go.
- **MyHDL** compiles to VHDL.
- **Nuitka** compiles to C++.
### Python Built-in Types
----
<table style="width:80%">
<thead>
<tr class="header">
<th><p>Type</p></th>
<th>Mutable?</th>
<th><p>Description</p></th>
<th><p>Syntax example</p></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p><code>bool</code></p></td>
<td><p>No</p></td>
<td><p>Boolean value.</p></td>
<td><p><code>True</code><br />
<code>False</code></p></td>
</tr>
<tr class="odd">
<td><p><code>bytes</code></p></td>
<td><p>No</p></td>
<td><p>Sequence of bytes.</p></td>
<td><p><code>b'Some ASCII'</code><br />
<code>b"Some ASCII"</code><br />
<code>bytes([119, 105, 107, 105])</code></p></td>
</tr>
<tr class="even">
<td><p><code>complex</code></p></td>
<td><p>No</p></td>
<td><p>Complex number.</p></td>
<td><p><code>3+2.7j</code></p></td>
</tr>
<tr class="odd">
<td><p><code>dict</code></p></td>
<td><p>Yes</p></td>
<td><p>Associative array of key-value pairs.</p></td>
<td><p><code>{'key1': 1.0, 3: False}</code></p></td>
</tr>
<tr class="odd">
<td><p><code>float</code></p></td>
<td><p>No</p></td>
<td><p>Floating-point number.</p></td>
<td><p><code>3.1415927</code></p></td>
</tr>
<tr class="even">
<td><p><code>frozenset</code></p></td>
<td><p>No</p></td>
<td><p>Immutable (unordered) set.</p></td>
<td><p><code>frozenset([4.0, 'string', True])</code></p></td>
</tr>
<tr class="odd">
<td><p><code>int</code></p></td>
<td><p>No</p></td>
<td><p>Integer.</p></td>
<td><p><code>42</code></p></td>
</tr>
<tr class="even">
<td><p><code>list</code></p></td>
<td><p>Yes</p></td>
<td><p>Mutable ordered list.</p></td>
<td><p><code>[4.0, 'string', True]</code></p></td>
</tr>
<tr class="odd">
<td><p><code>NoneType</code></p></td>
<td><p>No</p></td>
<td><p>Absence of a value.</p></td>
<td><p><code>None</code></p></td>
</tr>
<tr class="odd">
<td><p><code>set</code></p></td>
<td><p>Yes</p></td>
<td><p>Mutable (unordered) set.</p></td>
<td><p><code>{4.0, 'string', True}</code></p></td>
</tr>
<tr class="even">
<td><p><code>str</code></p></td>
<td><p>No</p></td>
<td><p>String of characters.</p></td>
<td><p><code>'Wikipedia'</code><br />
<code>"Wikipedia"</code><br />
<code>"""Spanning</code><br />
<code>multiple</code><br />
<code>lines"""</code></p></td>
</tr>
<tr class="odd">
<td><p><code>tuple</code></p></td>
<td><p>No</p></td>
<td><p>Immutable ordered list.</p></td>
<td><p><code>(4.0, 'string', True)</code></p></td>
</tr>
</tbody>
</table>
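A quick demonstration of the "Mutable?" column above (the variable names are ours, chosen for illustration):

```python
# Mutable: lists can be changed in place
nums = [4.0, 'string', True]
nums[0] = 5.0
nums.append(42)
print(nums)  # [5.0, 'string', True, 42]

# Immutable: tuples raise TypeError on item assignment
point = (4.0, 'string', True)
try:
    point[0] = 5.0
except TypeError as e:
    print("tuples are immutable:", e)

# frozenset is the immutable counterpart of set
s = {4.0, 'string', True}
s.add('new')            # fine: sets are mutable
fs = frozenset(s)       # fs has no .add method at all
print(hasattr(fs, 'add'))  # False
```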
### Python Keywords
----
These are the reserved words that Python will not let you use as variable names
```python
False await else import pass
None break except in raise
True class finally is return
and continue for lambda try
as def from nonlocal while
assert del global not with
async elif if or yield
```
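The same list is available programmatically through the standard `keyword` module:

```python
import keyword

# the full list of reserved words for the running interpreter
print(keyword.kwlist[:5])
print(keyword.iskeyword("lambda"))  # True
print(keyword.iskeyword("foo"))     # False
```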
From here on, every review topic goes straight to the more complex examples
### Python functions
------
```
def func(a, b, c, *args, **kwargs):
    """This is the function's docstring"""
    partes = [
        "Passed parameters",
        f"a={a} b={b} c={c} args={args} kwargs={kwargs}"]
    return " -- ".join(partes)
func(1, 2, 3)
func(b=1, c=2, a=3)
func(1, 4, c=2, b=4)
func(b=1, 2, 3)  # SyntaxError: positional argument follows keyword argument
```
### Python functions
------
```python
def func(a, b, c, *args, **kwargs):
    """This is the function's docstring"""
    partes = [
        "Passed parameters",
        f"a={a} b={b} c={c} args={args} kwargs={kwargs}"]
    return " -- ".join(partes)
```
```
func(1, 2, 3, 4, 5)
func(b=3, a=4, c=8, z=9)
func(1, 2, 3, 4, x=89)
```
### Conditionals
--------------
```
def que_hago(genero, edad):
    """Depending on your gender and age: do you work or retire?
    """
    if genero not in ("m", "f"):
        print(f"Gender must be 'm' or 'f'. Got {genero}")
    elif edad < 18:
        print("Age must be >= 18")
    elif edad < 60 or (edad < 65 and genero == "m"):
        genero_vagancia = "vago" if genero == "m" else "vaga"  # gendered Spanish for "slacker"
        print(f"Go to work, {genero_vagancia}!")
    elif (genero == "m" and edad >= 65) or (genero == "f" and edad >= 60):
        print("Retire")
    else:
        print("Something went very wrong")
que_hago("m", 18)
que_hago("k", 11)
que_hago("f", 59)
```
### For loop
----
```
def imprimir_lista(lista_de, lista):
    print("This is a list of {}".format(lista_de.title()))
    total = len(lista)
    for idx, elemento in enumerate(lista):
        if not elemento:
            break
        print(f"[{idx}/{total}] {elemento}")
    else:
        print("-----")
imprimir_lista("Aguas frescas", "naranja manzana tamarindo".split())
imprimir_lista("Super Heroes", ["Batman", "Flash", ""])
```
### While loop
----
```
def saludar_hasta(ultimo, no_saludar):
    """Greets people until the last one arrives.
    - The function is case-insensitive.
    - People on the 'no_saludar' list are ignored.
    """
    # list(...) so the lowered names survive repeated membership tests
    ultimo, no_saludar = ultimo.lower(), list(map(str.lower, no_saludar))
    saludados = []
    while not saludados or saludados[-1] != ultimo:
        saludar_a = input("Who do you want to greet? ").lower()
        if saludar_a in no_saludar:
            print("Nope!")
            continue
        print(f"Hello {saludar_a.title()}")
        saludados.append(saludar_a)
    else:
        print("EVERYONE IS HERE!!!!")
    print("-" * 10)
    for s in saludados:
        print(f"Bye {s.title()}!")
```
### While loop
----
```
saludar_hasta("juan", no_saludar=["Cristina", "Mauricio"])
```
### List Comprehensions
----
```
def only_even(*numbers):
    """Returns only the even numbers"""
    evens = [n for n in numbers if n % 2 == 0]
    return evens
only_even(1, 2, 3, 4, 5, 6, 100, 101)
```
### Dict Comprehensions
----
```
def positions_of_vowels(word):
    """Returns a dict with the positions of the vowels as keys
    and the letters as values
    """
    return {idx: l.lower() for idx, l in enumerate(word) if l.lower() in "aeiou"}
positions_of_vowels("Hola perinola")
```
### Set comprehensions
------
```
{s for s in [1, 2, 3, 2, 4] if not s % 2}
```
### Generators
----
A generator is a promise of values to be produced in the future
```
def fib(n):
    values = [0, 1]
    while values[-2] < n:
        values.append(values[-2] + values[-1])
    return values
print(fib(300))
def ifib(n):
    a, b = 0, 1
    yield a
    while a < n:
        a, b = b, a + b
        yield a
gen = ifib(300)
gen
print(
    next(gen),
    next(gen),
    next(gen))
```
### Generators
----
They can be used in for loops
```
[n for n in ifib(300)]
```
`range` returns a lazy range object, not a list (similar in spirit to a generator)
```
range(100)
```
### Generators
----
You can also build generators with comprehension syntax
```
def potencia(n):
    print("computing the square of", n)
    return n ** 2
gen = (potencia(n) for n in [1, 2, 3])
gen
for n in gen:
    print(n)
```
### Generators
#### A more realistic case
----
The real point of generators is to avoid doing work needlessly. For example:
```
import time
def leer_archivo(*args, **kwargs):
    """Waits 2 seconds to simulate a slow read"""
    time.sleep(2)
intentos = 0
def procesar(*args, **kwargs):
    """This function fails on every third call"""
    global intentos
    if intentos == 2:
        intentos = 0
        return 1
    intentos += 1
    return 0
```
### Generators
#### A more realistic case
----
Version with a list
```
%%time
# read the files
archivos = [leer_archivo(n) for n in range(4)]
# now simulate processing them
for a in archivos:
    print("Processing...")
    if procesar(a) == 1:
        print("Everything blew up!")
        break
```
### Generators
#### A more realistic case
----
Generator version
```
%%time
# read the files (the only change is the parentheses)
archivos = (leer_archivo(n) for n in range(4))
# now simulate processing them
for a in archivos:
    print("Processing...")
    if procesar(a) == 1:
        print("Everything blew up!")
        break
```
### Contexts
---
Some Python functions/objects can be used in `with` contexts, which
typically take care of cleanup (closing resources, freeing memory).
The most common example is opening and closing files.
#### Without a context
```python
fp = open("archivo")
src = fp.read()
# ALWAYS CLOSE THE FILE
fp.close()  # this line may never run, leaving the file open
```
#### With a context
```python
with open("archivo") as fp:  # this ALWAYS closes the file on exit
    src = fp.read()
```
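Beyond built-ins like `open`, the standard `contextlib` module lets you write your own context managers. A minimal sketch (the `timed` helper below is our own illustrative example, not part of the standard library):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print the wall-clock time spent inside the with block."""
    start = time.perf_counter()
    try:
        yield  # the body of the with block runs here
    finally:
        # runs even if the body raises, just like file closing
        print(f"{label}: {time.perf_counter() - start:.4f}s")

with timed("sleep"):
    time.sleep(0.1)
```

Everything before the `yield` plays the role of `open`, everything after it the role of `close`.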
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/FeatureCollection/add_area_column.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/add_area_column.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/add_area_column.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
fromFT = ee.FeatureCollection("users/wqs/Pipestem/Pipestem_HUC10")
# This function computes the feature's geometry area and adds it as a property.
def addArea(feature):
return feature.set({'areaHa': feature.geometry().area().divide(100 * 100)})
# Map the area getting function over the FeatureCollection.
areaAdded = fromFT.map(addArea)
# Print the first feature from the collection with the added property.
first = areaAdded.first()
print('First feature: ', first.getInfo())
print("areaHa: ", first.get("areaHa").getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# Morphology 1
Here we will get acquainted with two morphological analyzers: pymorphy and mystem.
```
sample_text = 'Гло́кая ку́здра ште́ко будлану́ла бо́кра и курдя́чит бокрёнка'
```
### 1. MyStem
```
# install the module if it is not installed yet
!pip install pymystem3
from pymystem3 import Mystem
# create the analyzer itself
mystem_analyzer = Mystem(entire_input=False, disambiguation=False)
# entire_input - keep the entire input (e.g. whitespace)
# disambiguation - resolve homonymy
```
Mystem has two main functions:
- performing morphological analysis
- producing the base form (lemma) of each word in the text
```
mystem_result = mystem_analyzer.analyze(sample_text)
mystem_lemmas = mystem_analyzer.lemmatize(sample_text)
# Let's look at the lemmatization output
print(sample_text)
print(' '.join(mystem_lemmas))
# And the morphological analysis result:
# all possible parses are printed, to show the scale
for word in mystem_result:
    print(word['text'])
    for res in word['analysis']:
        print('\t', res)
```
Now let's create an analyzer that resolves homonymy
```
mystem_analyzer2 = Mystem(entire_input=False, disambiguation=True)
mystem_result2 = mystem_analyzer2.analyze(sample_text)
mystem_lemmas2 = mystem_analyzer2.lemmatize(sample_text)
print(sample_text)
for word, word2 in zip(mystem_lemmas, mystem_lemmas2):
    print(word, word2)
for word in mystem_result2:
    print(word['text'])
    for res in word['analysis']:
        print('\t', res)
```
Problems with MyStem
```
disambiguations = ['Александра Иванова пошла в кино',
                   'Александра Иванова видели в кино с кем-то',
                   'Воробьев сегодня встал не с той ноги']
disambiguation_results = []
for dis in disambiguations:
    disambiguation_results.append(mystem_analyzer2.lemmatize(dis))
for res in disambiguation_results:
    print(' '.join(res))
```
#### Exercise
To get some hands-on practice with MyStem, write a method that finds the top n lexemes.
```
def get_top_words(text, n):
    '''
    :param text: input text in russian
    :param n: number of most common words
    :return: list of most common lexemes
    '''
    pass
```
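One possible sketch of a solution, using `collections.Counter`. Only the counting logic is shown; the `lemmatize` argument is our own addition so the function can be tested without MyStem installed — in practice you would pass `mystem_analyzer.lemmatize` (`str.split` is just a stand-in default):

```python
from collections import Counter

def get_top_words(text, n, lemmatize=str.split):
    """Return the n most common lexemes in text.

    lemmatize: callable mapping a text to a list of lemmas;
    pass mystem_analyzer.lemmatize to use MyStem.
    """
    counts = Counter(lemma.lower() for lemma in lemmatize(text))
    return [lemma for lemma, _ in counts.most_common(n)]

print(get_top_words("a b b c c c", 2))  # ['c', 'b']
```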
### 2. Pymorphy
```
# install the module and dictionaries
!pip install pymorphy2
!pip install -U pymorphy2-dicts-ru
# create the analyzer
import pymorphy2
morph = pymorphy2.MorphAnalyzer()
# sample_text = 'Глокая куздра штеко будланула бокра и кудрячит бокренка'
# unlike mystem, it works word by word
pymorphy_results = [morph.parse(x) for x in sample_text.split()]
# collect the results and print them
for word_result in pymorphy_results:
    print(word_result[0].word)
    for res in word_result:
        print('\t', res.normal_form, res.tag, res.score)
```
Unlike mystem, you can obtain the whole lexeme and inflect words
```
bokr = morph.parse('градус')[0]
for form in bokr.lexeme:
    print(form.word, form.tag.POS)
print(bokr.inflect({'loct'}).word,
      bokr.make_agree_with_number(1).word,
      bokr.make_agree_with_number(2).word,
      bokr.make_agree_with_number(5).word)
```
#### Exercise
Using pymorphy, compute over a text:
- the distribution of parts of speech
- for a given part of speech, the top n lexemes
```
def get_pos_distribution(text, lexemas=None):
    '''
    :param text: input text in russian
    :param lexemas: list of POS tags of interest; if None, all are of interest
    :return: dict of pos - probability
    '''
    pass
def get_top_pos_words(text, pos, n):
    '''
    :param text: input text in russian
    :param pos: part of speech
    :param n: number of most common words
    :return: list of most common lexemes with the selected pos
    '''
    pass
```
# Ecuadorian Personal Identity Codes
## Introduction
The function `clean_ec_ci()` cleans a column containing Ecuadorian personal identity code (CI) strings, and standardizes them in a given format. The function `validate_ec_ci()` validates either a single CI string, a column of CI strings, or a DataFrame of CI strings, returning `True` if the value is valid, and `False` otherwise.
CI strings can be converted to the following formats via the `output_format` parameter:
* `compact`: only number strings without any separators or whitespace, like "1714307103"
* `standard`: CI strings with proper whitespace in the proper places. Note that in the case of CI, the compact format is the same as the standard one.
Invalid parsing is handled with the `errors` parameter:
* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception
The following sections demonstrate the functionality of `clean_ec_ci()` and `validate_ec_ci()`.
### An example dataset containing CI strings
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"ci": [
'171430710-3',
'BE431150351',
'BE 428759497',
'BE431150351',
"002 724 334",
"hello",
np.nan,
"NULL",
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"1111 S Figueroa St, Los Angeles, CA 90015",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
```
## 1. Default `clean_ec_ci`
By default, `clean_ec_ci` will clean ci strings and output them in the standard format with proper separators.
```
from dataprep.clean import clean_ec_ci
clean_ec_ci(df, column = "ci")
```
## 2. Output formats
This section demonstrates the output parameter.
### `standard` (default)
```
clean_ec_ci(df, column = "ci", output_format="standard")
```
### `compact`
```
clean_ec_ci(df, column = "ci", output_format="compact")
```
## 3. `inplace` parameter
This deletes the given column from the returned DataFrame.
A new column containing cleaned CI strings is added with a title in the format `"{original title}_clean"`.
```
clean_ec_ci(df, column="ci", inplace=True)
```
## 4. `errors` parameter
### `coerce` (default)
```
clean_ec_ci(df, "ci", errors="coerce")
```
### `ignore`
```
clean_ec_ci(df, "ci", errors="ignore")
```
## 5. `validate_ec_ci()`
`validate_ec_ci()` returns `True` when the input is a valid CI. Otherwise it returns `False`.
The input of `validate_ec_ci()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame, or a Dask DataFrame.
When the input is a string, a pandas Series, or a Dask Series, there is no need to specify a column name to be validated.
When the input is a pandas DataFrame or a Dask DataFrame, the user may optionally specify a column name. If a column name is given, `validate_ec_ci()` returns the validation result for that column only; otherwise it returns the validation result for the whole DataFrame.
```
from dataprep.clean import validate_ec_ci
print(validate_ec_ci("171430710-3"))
print(validate_ec_ci("BE431150351"))
print(validate_ec_ci('BE 428759497'))
print(validate_ec_ci('BE431150351'))
print(validate_ec_ci("004085616"))
print(validate_ec_ci("hello"))
print(validate_ec_ci(np.nan))
print(validate_ec_ci("NULL"))
```
### Series
```
validate_ec_ci(df["ci"])
```
### DataFrame + Specify Column
```
validate_ec_ci(df, column="ci")
```
### Only DataFrame
```
validate_ec_ci(df)
```
```
from cuda_code.final.test_functions import *
import numpy as np
from cuda_code.final.benchmark import *
from cuda_code.final.monolithic import GSO as PGSO
import matplotlib.pyplot as plt
import seaborn as sns
```
# Time profile: PSO vs. GSO (non-parallel) vs. parallel GSO
Construct the particle class
```
#dependencies
import random
import math
import copy # for array copying
import sys
class Particle:
    def __init__(self, x0, num_dimensions):
        self.position_i = []   # particle position
        self.velocity_i = []   # particle velocity
        self.pos_best_i = []   # best individual position
        self.err_best_i = -1   # best individual error
        self.err_i = -1        # individual error
        self.num_dimensions = num_dimensions
        for i in range(0, self.num_dimensions):
            self.velocity_i.append(random.uniform(-1, 1))
            self.position_i.append(x0[i])
    # evaluate current fitness
    def evaluate(self, costFunc):
        self.err_i = costFunc(self.position_i)
        # check to see if the current position is an individual best
        if self.err_i < self.err_best_i or self.err_best_i == -1:
            self.pos_best_i = copy.copy(self.position_i)  # copy, not alias
            self.err_best_i = self.err_i
    # update new particle velocity
    def update_velocity(self, pos_best_g):
        w = 0.5  # constant inertia weight (how much to weigh the previous velocity)
        c1 = 1   # cognitive constant
        c2 = 2   # social constant
        for i in range(0, self.num_dimensions):
            r1 = random.random()
            r2 = random.random()
            vel_cognitive = c1 * r1 * (self.pos_best_i[i] - self.position_i[i])
            vel_social = c2 * r2 * (pos_best_g[i] - self.position_i[i])
            self.velocity_i[i] = w * self.velocity_i[i] + vel_cognitive + vel_social
    # update the particle position based on the new velocity
    def update_position(self, bounds):
        for i in range(0, self.num_dimensions):
            self.position_i[i] = self.position_i[i] + self.velocity_i[i]
            # clamp to the maximum position if necessary
            if self.position_i[i] > bounds[i][1]:
                self.position_i[i] = bounds[i][1]
            # clamp to the minimum position if necessary
            if self.position_i[i] < bounds[i][0]:
                self.position_i[i] = bounds[i][0]
```
# PSO
```
def PSO(costFunc, bounds, maxiter, swarm_init):
    global num_dimensions
    num_dimensions = len(swarm_init[0])
    err_best_g = -1   # best error for group
    pos_best_g = []   # best position for group
    num_particles = len(swarm_init)
    # establish the swarm
    swarm = [Particle(position, num_dimensions) for position in swarm_init]
    # begin optimization loop
    i = 0
    while i < maxiter:
        # cycle through particles in swarm and evaluate fitness
        for j in range(0, num_particles):
            swarm[j].evaluate(costFunc)
            # determine if current particle is the best (globally)
            if swarm[j].err_i < err_best_g or err_best_g == -1:
                pos_best_g = list(swarm[j].position_i)
                err_best_g = float(swarm[j].err_i)
        # cycle through swarm and update velocities and positions
        for j in range(0, num_particles):
            swarm[j].update_velocity(pos_best_g)
            swarm[j].update_position(bounds)
        i += 1
    return pos_best_g, err_best_g
# GLOBAL SETTINGS
initial=[5,5] # initial starting location [x1,x2...]
bounds = [[-1000, 1000], [-1000, 1000]] # input bounds [(x1_min,x1_max),(x2_min,x2_max)...]
num_particles = 75
max_iter = 1500
def test_PSO(M, bounds, num_particles, max_iter, costfunc):
subswarm_bests = []
for i in range(M):
swarm_init = [np.random.uniform(-10,10, 2) for _ in range(num_particles)]
subswarm_best,_ = PSO(costfunc,bounds,max_iter, swarm_init=swarm_init)
subswarm_bests.append(subswarm_best)
best_position, best_error = PSO(costfunc, bounds, max_iter, swarm_init=subswarm_bests)
return best_position, best_error
%timeit test_PSO(5, bounds, num_particles, max_iter, rastrigin)
```
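As a quick sanity check of the update equations above, here is a self-contained sketch that re-implements the same PSO loop (identical inertia/cognitive/social structure and constants) on a simple convex `sphere` cost function. This is an illustrative stand-in, not the notebook's benchmark setup:

```python
import random

def sphere(x):
    # Simple convex benchmark: global minimum 0 at the origin
    return sum(xi ** 2 for xi in x)

def mini_pso(cost, bounds, num_particles=30, max_iter=200, seed=0):
    rng = random.Random(seed)
    dims = len(bounds)
    # Initialise positions and velocities, as in Particle.__init__
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(num_particles)]
    vel = [[rng.uniform(-1, 1) for _ in range(dims)] for _ in range(num_particles)]
    pbest = [p[:] for p in pos]
    pbest_err = [cost(p) for p in pos]
    g = min(range(num_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]
    w, c1, c2 = 0.5, 1.0, 2.0  # inertia, cognitive, social weights (same constants as above)
    for _ in range(max_iter):
        for i in range(num_particles):
            for d in range(dims):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the new position to the search bounds, as in update_position
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            err = cost(pos[i])
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], err
                if err < gbest_err:
                    gbest, gbest_err = pos[i][:], err
    return gbest, gbest_err

best_pos, best_err = mini_pso(sphere, [[-10, 10], [-10, 10]])
```

On a convex function like this the swarm homes in on the origin; on multi-modal functions such as Rastrigin, the subswarm restarts used later in the notebook help escape local minima.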
# Plot CPU usage graph for PSO
Y-axis is CPU usage in %
```
cpu_percent, time_at, top_prcnt = monitor(test_PSO, bounds, num_particles, max_iter, rastrigin, 5)
multiple_cpu_plot(top_prcnt, time_at)
```
# GSO
Galactic Swarm Optimization, without parallelization
```
def GSO(M, bounds, num_particles, max_iter, costfunc):
subswarm_bests = []
dims = len(bounds)
lb = bounds[0][0]
ub = bounds[0][1]
for i in range(M):
#initial= np.random.uniform(-10,10, 2) # initial starting location [x1,x2...]
swarm_init = [np.random.uniform(lb, ub, dims) for _ in range(num_particles)]
subswarm_best,_ = PSO(costfunc,bounds,max_iter, swarm_init=swarm_init)
subswarm_bests.append(subswarm_best)
best_position, best_error = PSO(costfunc, bounds, max_iter, swarm_init=subswarm_bests)
return best_position, best_error
%timeit GSO(5, bounds, num_particles, max_iter, rastrigin)
```
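The two-phase data flow of GSO (independent subswarms whose champions seed a final super-swarm) can be illustrated in isolation. In this sketch a trivial `best_of` search stands in for `PSO`, purely to show how `subswarm_bests` feeds the second phase:

```python
import random

def best_of(cost, init_points):
    # Stand-in for PSO: simply return the best of the initial points
    best = min(init_points, key=cost)
    return best, cost(best)

def gso_pattern(cost, lb, ub, dims, M=5, num_particles=10, seed=0):
    rng = random.Random(seed)
    subswarm_bests = []
    # Phase 1: M independent subswarms explore the space
    for _ in range(M):
        init = [[rng.uniform(lb, ub) for _ in range(dims)] for _ in range(num_particles)]
        pos, _ = best_of(cost, init)
        subswarm_bests.append(pos)
    # Phase 2: the subswarm champions form the initial super-swarm
    return best_of(cost, subswarm_bests)

pos, err = gso_pattern(lambda x: sum(v * v for v in x), -10, 10, 2)
```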
# Plot CPU usage graph for GSO
```
cpu_percent, time_at, top_prcnt = monitor(GSO, bounds, num_particles, max_iter, rastrigin, 5)
multiple_cpu_plot(top_prcnt, time_at)
```
# Parallelized GSO
```
%timeit PGSO(1, bounds, num_particles, max_iter, rastrigin)
# 1 cpu test
cpu_percents, time_at, top_prcnt = monitor(PGSO, bounds, num_particles, max_iter, rastrigin, 1)
multiple_cpu_plot(top_prcnt, time_at)
```
# Testing all benchmark functions with PGSO spawning 8 processes
```
list_of_benchmark_functions = [sphere, rosen, rastrigin, griewank, zakharov, nonContinuousRastrigin]
string_list = ["sphere", "rosen", "rastrigin", "griewank", "zakharov", "nonContinuousRastrigin"]
bounds = [[-100, 100], [-100, 100]] # input bounds [(x1_min,x1_max),(x2_min,x2_max)...]
num_particles = 20
max_iter = 500
for costfun, stre in zip(list_of_benchmark_functions, string_list):
best_pos, best_err = PGSO(8, bounds, num_particles, max_iter, costfun)
print(stre," best position: ", best_pos, " best error: ", best_err)
%load_ext line_profiler
%lprun -f PGSO PGSO(8, bounds, num_particles, max_iter, costfun)
```
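The benchmark functions used here (`sphere`, `rastrigin`, `rosen`, ...) are defined earlier in the notebook. For reference, minimal stdlib sketches of two of them, assuming the standard formulations (both have their global minimum of 0 at the origin):

```python
import math

def sphere(x):
    # f(x) = sum(x_i^2)
    return sum(xi ** 2 for xi in x)

def rastrigin(x):
    # f(x) = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i))
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)
```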
# Testing all benchmark functions with GSO (non-parallel)
```
for costfun, stre in zip(list_of_benchmark_functions, string_list):
if stre == 'rosen':
continue
best_pos, best_err = GSO(5, bounds, num_particles, max_iter, costfun)
print(stre," best position: ", best_pos, " best error: ", best_err)
```
# Testing all benchmark functions with PSO
```
for costfun, stre in zip(list_of_benchmark_functions, string_list):
if stre == 'rosen':
continue
best_pos, best_err = test_PSO(5, bounds, num_particles, max_iter, costfun)
print(stre," best position: ", best_pos, " best error: ", best_err)
!numba --annotate-html gail.html ./cuda_code/final/monolithic.py
bounds = [[-1000, 1000], [-1000, 1000]] # input bounds [(x1_min,x1_max),(x2_min,x2_max)...]
num_particles = 50
max_iter = 1000
GSO_time = %timeit -o GSO(5, bounds, num_particles, max_iter, rastrigin)
PGSO_time = %timeit -o PGSO(1, bounds, num_particles, max_iter, rastrigin)
TS_GSO = GSO_time.average
TP = list()
TP.append(PGSO_time.average)
speedup = list()
speedup.append(TS_GSO/TP[0])
PGSO_time_2 = %timeit -o PGSO(2, bounds, num_particles, max_iter, rastrigin)
TP_2 = PGSO_time_2.average
TP.append(TP_2)
speedup.append(TS_GSO/TP_2)
PGSO_time_4 = %timeit -o PGSO(4, bounds, num_particles, max_iter, rastrigin)
TP_4 = PGSO_time_4.average
TP.append(TP_4)
speedup.append(TS_GSO/TP_4)
PGSO_time_6 = %timeit -o PGSO(6, bounds, num_particles, max_iter, rastrigin)
TP_6 = PGSO_time_6.average
TP.append(TP_6)
speedup.append(TS_GSO/TP_6)
PGSO_time_8 = %timeit -o PGSO(8, bounds, num_particles, max_iter, rastrigin)
TP_8 = PGSO_time_8.average
TP.append(TP_8)
speedup.append(TS_GSO/TP_8)
```
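The repetitive timing cells above can be condensed into a loop. A sketch of the same `TP`/`speedup` bookkeeping, using hypothetical placeholder timings instead of `%timeit` results (the magics only exist inside IPython):

```python
# Hypothetical measured average runtimes in seconds: serial GSO, then PGSO with 1/2/4/6/8 processes
TS_GSO = 40.0
pgso_times = {1: 38.0, 2: 21.0, 4: 12.0, 6: 9.5, 8: 8.0}

TP = []
speedup = []
for n_procs in sorted(pgso_times):
    TP.append(pgso_times[n_procs])
    # speedup = serial time / parallel time
    speedup.append(TS_GSO / pgso_times[n_procs])
```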
[HTML](file:///gail.html)
```
%matplotlib inline
plt.figure(figsize=(16,9))
ax = plt.gca()
ax.set_xlabel('number of processors')
ax.set_ylabel('speedup')
sns.lineplot(x=[1,2,4,6,8], y=speedup)
```
This is an exercise based on a question by Stephen Ku:
>I'm searching for direct objects within an `Ellp` clause whose `Pred` is not in the immediately preceding clause.
>My logic is that if there is already a verb in the immediately preceding clause, then any verb found in a clause prior to the immediately preceding clause is probably not for the direct object in the `Ellp`.
> The query below searches for two `Pred`s in two clauses preceding an `Ellp` in which the direct object is found.
>The assumption is that in this scenario, the first `Pred` does not go with the direct object, but this assumption is not always true.
>Is there a way to search more accurately for a `Pred`/`Objc` pair in which the `Pred` and `Objc` can be several clauses apart (even when there is an intervening `Pred`) and the results are always correct? Can `mother` be used somehow for this?
```
import collections
from tf.fabric import Fabric
query1 = """
sentence
c1:clause
phrase function=Pred
word pdp=verb
c2:clause
phrase function=Pred
c3:clause typ=Ellp
phrase function=Objc
word pdp=subs|nmpr|prps|prde|prin
c1 << c2
c2 << c3
"""
```
Let's see what is happening here.
# Load data
We load some features of the
[BHSA](https://github.com/etcbc/bhsa) data.
See the [feature documentation](https://etcbc.github.io/bhsa/features/hebrew/2017/0_home.html) for more info.
```
BHSA = "BHSA/tf/2017"
TF = Fabric(locations="~/github/etcbc", modules=BHSA)
api = TF.load(
"""
function
mother
"""
)
api.makeAvailableIn(globals())
results = list(S.search(query1))
len(results)
for r in results[0:10]:
print(S.glean(r))
```
# Mothers and phrases
Just to make sure: are there mother edges that arrive at an `Objc` phrase or at a `Pred` phrase?
Let's explore the distribution of mother edges.
First we need to see between what node types they occur, and then we see between what
phrase functions they occur.
We start with a generic function that shows the distribution of a data set.
Think of the data set as a mapping from nodes to sets of nodes.
If we have a property of nodes, we want to see how many times nodes with property value 1
are mapped to nodes with property value 2.
The next function takes `data`, a mapping from nodes to sets of nodes, and `dataKey`, a *function* that gives a value for each node.
```
def showDist(data, dataKey):
dataDist = collections.Counter()
for (p, ms) in data.items():
for m in ms:
dataDist[(dataKey(p), dataKey(m))] += 1
for (combi, amount) in sorted(
dataDist.items(),
key=lambda x: (-x[1], x[0]),
):
print(f'{amount:>3} x {" => ".join(str(s) for s in combi)}')
```
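To make the output format concrete, here is a toy run of `showDist` on a hand-made mapping, with a plain dict standing in for the `F.otype.v` lookup:

```python
import collections

def showDist(data, dataKey):
    # Same helper as above: count (value(p), value(m)) pairs over the mapping
    dataDist = collections.Counter()
    for (p, ms) in data.items():
        for m in ms:
            dataDist[(dataKey(p), dataKey(m))] += 1
    for (combi, amount) in sorted(
        dataDist.items(),
        key=lambda x: (-x[1], x[0]),
    ):
        print(f'{amount:>3} x {" => ".join(str(s) for s in combi)}')

# Toy data: nodes 1-3 are "phrases", 10-11 are "clauses"
otype = {1: "phrase", 2: "phrase", 3: "phrase", 10: "clause", 11: "clause"}
toy_mothers = {1: {10}, 2: {10}, 3: {11, 1}}
showDist(toy_mothers, otype.get)
```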
First we get the distribution of node types between which mother edges may occur.
```
allMothers = {}
for n in N():
allMothers[n] = set(E.mother.f(n))
showDist(allMothers, F.otype.v)
```
Given that there are more than 250,000 phrases, it is clear that the mother relation is only used very sparsely among phrases. We are going to show how many times there are mothers between phrases with specified `function`s.
Probably, we should look at phrase_atoms.
```
phraseMothers = collections.defaultdict(set)
for p in F.otype.s("phrase_atom"):
mothers = E.mother.f(p)
for m in mothers:
if F.otype.v(m) == "phrase_atom":
phraseMothers[p].add(m)
len(phraseMothers)
showDist(phraseMothers, F.function.v)
```
Ah, phrase_atoms do not have functions!
We just take the function of the phrase they are contained in.
```
def getFunction(pa):
return F.function.v(L.u(pa, otype="phrase")[0])
showDist(phraseMothers, getFunction)
```
That's a pity. It seems that the mother edges between phrase_atoms only link phrase_atoms in the same phrase.
Let's have a look at the mothers between phrases after all.
```
phraseMothers = collections.defaultdict(set)
for p in F.otype.s("phrase"):
mothers = E.mother.f(p)
for m in mothers:
if F.otype.v(m) == "phrase":
phraseMothers[p].add(m)
showDist(phraseMothers, F.function.v)
```
All cases have to do with `Frnt` and `PreAd`. So the mother is definitely not helping out with Stephen's original question.
# Amazon SageMaker Workshop
### _**Pipelines**_
---
In this part of the workshop we will bring together all our previous work from the labs and automate the whole ML workflow. This makes the process more robust, and any updates to the data preparation, modeling, evaluation, inference, and monitoring will be put into production faster and more reliably.
---
## Contents
a. [Background](#background) - Getting the work from previous labs.
b. [Create the training pipeline](#Create_pipeline) - featuring [SageMaker Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines.html)
1. [Creating data preparation step](#dataprep_step)
2. [Creating training step](#train_step)
3. [Creating evaluation step](#eval_step)
4. [Creating approve and register model steps](#appr_model_reg_step)
5. [Finish the pipeline](#end_creation_pipe)
c. [Create the end-to-end solution automatically](#SM_Projects) - Create end-to-end ML solutions with CI/CD (featuring [SageMaker Projects](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects.html))
1. customize the project with our pipeline and code
2. trigger training pipeline
3. trigger deployment pipeline
---
<a id='background'></a>
## Background
In the previous labs we created multiple resources to prepare the data (_2-DataPrep_), train the model (_3-Modeling_), evaluate model performance (_4-Evaluation_), deploy and customize inference logic (_4-Deployment/RealTime_) and monitor the deployed model (_5-Monitoring_).
Now it's time to **bring everything together**!
We will create a pipeline with 5 steps:
1. Data preparation
2. Training
3. Evaluation
4. Approve model
5. Save to model registry step
We will build up our pipeline iteratively, little by little.
---
### - if you _skipped_ some/all of the previous labs, follow instructions:
- **run this [notebook](./config/pre_setup.ipynb)**
---
Load all variables (and modules) for this lab:
```
%store -r bucket
%store -r prefix
%store -r region
%store -r docker_image_name
bucket, prefix, region, docker_image_name
# Suppress default INFO logging
import logging
logger = logging.getLogger()
logger.setLevel(logging.ERROR)
import sagemaker
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.session.Session()
```
---
<a id="Create_pipeline"></a>
# Create the training pipeline with SageMaker Pipelines
<a id="dataprep_step"></a>
## 1. Create data preparation step
Get the raw data location and the S3 URI where our code for data preparation was stored:
```
%store -r s3uri_raw
%store -r s3_dataprep_code_uri
s3uri_raw, s3_dataprep_code_uri
from sagemaker.workflow.steps import (
ProcessingStep,
TrainingStep,
)
from sagemaker.processing import (
ProcessingInput,
ProcessingOutput,
ScriptProcessor,
)
```
This first step will receive some inputs:
```
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
# Parameters for data preparation step
input_data = ParameterString(
name="InputDataUrl",
default_value=s3uri_raw # S3 URI where we stored the raw data
)
processing_instance_count = ParameterInteger(
name="ProcessingInstanceCount", default_value=1
)
processing_instance_type = ParameterString(
name="ProcessingInstanceType", default_value="ml.m5.xlarge"
)
from my_labs_solutions.dataprep_solution import get_dataprep_processor
sklearn_processor = get_dataprep_processor(processing_instance_type, processing_instance_count, role)
sklearn_processor
# Processing step for feature engineering
step_process = ProcessingStep(
name="CustomerChurnProcess", # choose any name
processor=sklearn_processor,
outputs=[
ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
ProcessingOutput(
output_name="validation", source="/opt/ml/processing/validation"
),
ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
],
code=s3_dataprep_code_uri,
job_arguments=["--input-data", input_data],
)
```
## Create the first iteration of the Pipeline
We will create a simple pipeline that receives some inputs and has just one data preparation step:
```
from time import strftime, gmtime
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_experiment_config import PipelineExperimentConfig
from sagemaker.workflow.step_collections import RegisterModel
```
You can associate SageMaker Experiments with Pipelines to help track multiple moving pieces (ML hyperparameters, data, artifacts, plots, metrics, etc. - a.k.a. [ML lineage tracking](https://docs.aws.amazon.com/sagemaker/latest/dg/lineage-tracking.html))
```
# Experiment configs
create_date = lambda: strftime("%Y-%m-%d-%H-%M-%S", gmtime())
experiment_name=f"pipeline-customer-churn-prediction-xgboost-{create_date()}"
trial_name=f"pipeline-framework-trial-{create_date()}"
pipeline_name = f"ChurnMLPipeline"
pipeline_experiment_config = PipelineExperimentConfig(
experiment_name = experiment_name,
trial_name = trial_name
)
# Pipeline with just input parameters and 1 step for data prep
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
processing_instance_type,
processing_instance_count,
],
steps=[step_process],
sagemaker_session=sagemaker_session,
)
# Validate that pipeline was configured correctly and load its definition
import json
json.loads(pipeline.definition())
```
#### Ok, looks good. Let's create the pipeline:
```
pipeline.upsert(role_arn=role)
```
1. Go to the pipeline and see its DAG:
<img src="./media/sm-pipeline.png" width="50%">
2. Right-click the ChurnMLPipeline -> `Open pipeline details`.
Check its DAG (with just the data prep step):
<img src="./media/sm-pipe-iter-1.png" width="50%">
3. Click on `Parameters` to see the default parameter inputs for an execution:
<img src="./media/sm-pipe-iter-1-params.png" width="100%">
> Remember that we set the inputs as:
```python
# Parameters for data preparation step
input_data = ParameterString(
name="InputDataUrl",
default_value=s3uri_raw # S3 URI where we stored the raw data
)
processing_instance_count = ParameterInteger(
name="ProcessingInstanceCount", default_value=1
)
processing_instance_type = ParameterString(
name="ProcessingInstanceType", default_value="ml.m5.xlarge"
)
```
**Let's programmatically execute the pipeline with defaults:**
```
execution = pipeline.start()
execution.describe()
execution.list_steps()
# If we wanted to wait for execution to end:
# execution.wait()
```
4. Right-click the `Executions` tab:
<img src="./media/sm-pipe-iter-1-exec-list.png" width="100%">
5. Select the only execution (should be in status "Executing") and double click on it:
<img src="./media/sm-pipe-iter-1-exec.png" width="60%">
6. Wait for a few minutes (for the data preparation step and the SageMaker Processing Job under the hood to finish):
<img src="./media/sm-pipe-iter-1-exec-succ.png" width="60%">
7. If you go to the `Experiments and trials` tab, you will see that SageMaker Pipelines created an experiment called `churnmlpipeline`.
Also, if we select our data prep Processing job, we can see that it correctly created 3 datasets as output: `train`, `validation` and `test`:
<img src="./media/sm-pipe-iter-1-exec-outs.png" width="100%">
---
<a id="train_step"></a>
# 2. Create modeling step
```
%store -r s3_modeling_code_uri
%store -r train_script_name
s3_modeling_code_uri, train_script_name
from my_labs_solutions.modeling_solution import get_modeling_estimator
xgb_train = get_modeling_estimator(bucket,
prefix,
s3_modeling_code_uri,
docker_image_name,
role,
entry_point_script = train_script_name)
xgb_train
from sagemaker.inputs import TrainingInput
step_train = TrainingStep(
name="CustomerChurnTrain",
estimator=xgb_train,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"train"
].S3Output.S3Uri,
content_type="text/csv"
),
"validation": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"validation"
].S3Output.S3Uri,
content_type="text/csv"
)
}
)
```
Notice that we can link one step's output to other steps input by accessing the properties:
```python
# Get output from processing step with key `train`
step_process.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri
```
## Create the second iteration of the Pipeline (updating the definition)
We will update the pipeline, adding an input parameter for the training step and also the training step itself, resulting in a pipeline with 2 steps:
```
# Add an input parameter to define the training instance type
training_instance_type = ParameterString(
name="TrainingInstanceType", default_value="ml.m5.xlarge"
)
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
processing_instance_type,
processing_instance_count,
training_instance_type,
],
steps=[step_process, step_train],
sagemaker_session=sagemaker_session,
)
# Update the pipeline
pipeline.upsert(role_arn=role)
```
1. If we go to the pipeline and click on the refresh button, we see now its 2 steps and the new input parameter:
<img src="./media/sm-pipe-iter-2.png" width="70%">
<img src="./media/sm-pipe-iter-2-params.png" width="100%">
2. Now, let's execute the new pipeline in the Studio UI. Click on `Start an execution`:
<img src="./media/sm-pipe-iter-2-exec.png" width="80%">
3. The default input configurations should appear in the Studio UI. Click on `Start`:
<img src="./media/sm-pipe-iter-2-exec-man.png" width="80%">
4. Refreshing the executions we should see:
<img src="./media/sm-pipe-iter-2-exec-man2.png" width="80%">
5. Click on `View details` or select the execution in the list (status "Executing") and double click:
<img src="./media/sm-pipe-iter-2-exec-man3.png" width="80%">
6. Wait a few minutes to the data prep Processing job and the training job finish. You should see this:
<img src="./media/sm-pipe-iter-2-exec-succ.png" width="80%">
If you click on the training step and select `Outputs` you will also be able to see the final training and validation log losses.
---
<a id="eval_step"></a>
# 3. Create evaluation step
```
from my_labs_solutions.evaluation_solution import get_evaluation_processor
script_eval = get_evaluation_processor(docker_image_name, role)
script_eval
from sagemaker.workflow.properties import PropertyFile
evaluation_report = PropertyFile(
name="EvaluationReport",
output_name="evaluation",
path="evaluation.json",
)
%store -r s3_evaluation_code_uri
s3_evaluation_code_uri
# Processing step for evaluation
step_eval = ProcessingStep(
name="CustomerChurnEval",
processor=script_eval,
inputs=[
ProcessingInput(
source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
destination="/opt/ml/processing/model",
),
ProcessingInput(
source=step_process.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri,
destination="/opt/ml/processing/test",
),
],
outputs=[
ProcessingOutput(
output_name="evaluation", source="/opt/ml/processing/evaluation"
),
],
code=s3_evaluation_code_uri,
property_files=[evaluation_report],
)
```
Now, notice that we get the model from the training step and also the test dataset from the data preparation step:
```python
# Get output model artifact from training step
step_train.properties.ModelArtifacts.S3ModelArtifacts
# Get the test dataset - the output of data preparation step with key `test`
step_process.properties.ProcessingOutputConfig.Outputs["test"].S3Output.S3Uri
```
## Create the third iteration of the Pipeline (updating the definition)
We will update the pipeline, adding the evaluation step, resulting in a pipeline with 3 steps: data prep, training, and evaluation.
```
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
processing_instance_type,
processing_instance_count,
training_instance_type,
],
steps=[step_process, step_train, step_eval],
sagemaker_session=sagemaker_session,
)
# Update the pipeline
pipeline.upsert(role_arn=role)
```
1. If we go to the pipeline and click on the refresh button again:
<img src="./media/sm-pipe-iter-3.png" width="70%">
2. Now, _**let's execute the new pipeline programmatically here...**_
> ### Wait! You must be wondering...
> Do I have to keep re-running everything from beginning every time?!
No, you don't.
**SageMaker Pipelines can cache results from previous steps.** Hence, executions will be a lot faster and you won't keep spending money on steps that would generate the exact same outputs!
Let's run this 3rd iteration of the pipeline caching both data preparation and training steps:
```
from sagemaker.workflow.steps import CacheConfig
# Cache for 30 minutes
cache_config = CacheConfig(enable_caching=True, expire_after="T30m")
# Minor change in data preparation steps
step_process = ProcessingStep(
name="CustomerChurnProcess", # choose any name
processor=sklearn_processor,
outputs=[
ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
ProcessingOutput(
output_name="validation", source="/opt/ml/processing/validation"
),
ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
],
code=s3_dataprep_code_uri,
job_arguments=["--input-data", input_data],
cache_config=cache_config
)
# Minor change in data training steps
step_train = TrainingStep(
name="CustomerChurnTrain",
estimator=xgb_train,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"train"
].S3Output.S3Uri,
content_type="text/csv"
),
"validation": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"validation"
].S3Output.S3Uri,
content_type="text/csv"
)
},
cache_config=cache_config
)
# Processing step for evaluation
step_eval = ProcessingStep(
name="CustomerChurnEval",
processor=script_eval,
inputs=[
ProcessingInput(
source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
destination="/opt/ml/processing/model",
),
ProcessingInput(
source=step_process.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri,
destination="/opt/ml/processing/test",
),
],
outputs=[
ProcessingOutput(
output_name="evaluation", source="/opt/ml/processing/evaluation"
),
],
code=s3_evaluation_code_uri,
property_files=[evaluation_report],
cache_config=cache_config
)
```
Update the pipeline definition with the cache configuration of 30 min for all 3 steps:
```
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
processing_instance_type,
processing_instance_count,
training_instance_type,
],
steps=[step_process, step_train, step_eval],
sagemaker_session=sagemaker_session,
)
# Update the pipeline
pipeline.upsert(role_arn=role)
execution = pipeline.start()
```
Ok, so you should see:
<img src="./media/sm-pipe-iter-3-exec.png" width="80%">
This is the first execution with the cache configuration (we will have to wait one more time).
3. Wait a few minutes to the data prep step, the training step and evaluation step finish:
<img src="./media/sm-pipe-iter-3-exec-succ.png" width="80%">
---
<a id="appr_model_reg_step"></a>
# 4. Create approve model and save to model registry steps
```
from sagemaker.workflow.conditions import (
ConditionGreaterThanOrEqualTo,
)
from sagemaker.workflow.condition_step import (
ConditionStep,
JsonGet,
)
from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.model_metrics import (
MetricsSource,
ModelMetrics,
)
```
Add new input parameter for the model registration step:
```
model_approval_status = ParameterString(
name="ModelApprovalStatus",
default_value="PendingManualApproval", # ModelApprovalStatus can be set to a default of "Approved" if you don't want manual approval.
)
```
Create register model step:
```
# Model metrics that will be associated with RegisterModel step
model_metrics = ModelMetrics(
model_statistics=MetricsSource(
s3_uri="{}/evaluation.json".format(
step_eval.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"][
"S3Uri"
]
),
content_type="application/json",
)
)
model_package_group_name="CustomerChurnPackageGroup"
# Register model step that will be conditionally executed
step_register = RegisterModel(
name="CustomerChurnRegisterModel",
estimator=xgb_train,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
content_types=["text/csv"],
response_types=["text/csv"],
inference_instances=["ml.t2.medium", "ml.m5.large"],
transform_instances=["ml.m5.large"],
model_package_group_name=model_package_group_name,
approval_status=model_approval_status,
model_metrics=model_metrics,
)
```
Create condition step for **accuracy above 0.8**:
```
# Condition step for evaluating model quality and branching execution
cond_gte = ConditionGreaterThanOrEqualTo( # You can change the condition here
    left=JsonGet(
        step=step_eval,
        property_file=evaluation_report,
        json_path="binary_classification_metrics.accuracy.value", # This should follow the structure of your report_dict defined in the evaluate.py file.
    ),
    right=0.8, # You can change the threshold here
)
step_cond = ConditionStep(
    name="CustomerChurnAccuracyCond",
    conditions=[cond_gte],
    if_steps=[step_register],
    else_steps=[],
)
```
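The `JsonGet` above pulls the accuracy out of the `evaluation.json` property file written by the evaluation step. A plain-Python sketch of the comparison the condition performs, assuming a report shaped like the `report_dict` from `evaluate.py` (the exact keys are an assumption here):

```python
import json

# Hypothetical evaluation report, shaped like the report_dict written by the evaluation step
evaluation_json = json.dumps({
    "binary_classification_metrics": {
        "accuracy": {"value": 0.95, "standard_deviation": "NaN"},
    }
})

report = json.loads(evaluation_json)
# Follows JsonGet's json_path: binary_classification_metrics.accuracy.value
accuracy = report["binary_classification_metrics"]["accuracy"]["value"]
register_model = accuracy >= 0.8  # the ConditionGreaterThanOrEqualTo check
```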
## Create the fourth and final iteration of the Pipeline (updating the definition)
We will update the pipeline with the final approve model (condition) and save model steps, resulting in a pipeline with 5 steps: data prep, training, evaluation, approval, and save to registry.
```
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
processing_instance_type,
processing_instance_count,
training_instance_type,
model_approval_status,
],
steps=[step_process, step_train, step_eval, step_cond],
sagemaker_session=sagemaker_session,
)
pipeline.upsert(role_arn=role)
```
Let's start final execution:
```
execution = pipeline.start()
```
<a id='end_creation_pipe'></a>
# 5. End of pipeline creation
With the caches, everything should be faster now.
Let's get the final result of the pipeline. Read evaluation report:
```
evaluation_json = sagemaker.s3.S3Downloader.read_file("{}/evaluation.json".format(
step_eval.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"]["S3Uri"]
))
json.loads(evaluation_json)
```
Ok, so you should see in the execution:
<img src="./media/sm-pipe-iter-final-exec-succ.png" width="80%">
If we go to the register model step we can see that it was approved (accuracy was 0.95 > 0.8):
<img src="./media/sm-pipe-iter-final-exec-succ2.png" width="80%">
Since the model was approved it was saved in the **SageMaker Model Registry**:
<img src="./media/sm-pipe-iter-final-exec-succ3.png" width="40%">
Select our model group:
<img src="./media/sm-pipe-iter-final-exec-succ4.png" width="40%">
Open its details:
<img src="./media/sm-pipe-iter-final-exec-succ5.png" width="40%">
Select the model that was saved in the Model Registry:
<img src="./media/sm-pipe-iter-final-exec-succ6.png" width="100%">
Here we can see the model details (if you click on the metrics you can visualize the auc and accuracy metrics).
<img src="./media/sm-pipe-iter-final-exec-succ7.png" width="100%">
Now, we can manually approve the model:
<img src="./media/sm-pipe-iter-final-exec-succ-approve.png" width="40%">
After selecting approve and click on `Update status`, the model will be updated:
<img src="./media/sm-pipe-iter-final-exec-succ8.png" width="100%">
We also can see the metrics to this model we just approved:
<img src="./media/sm-pipe-iter-final-exec-succ9.png" width="100%">
and its details:
<img src="./media/sm-pipe-iter-final-exec-succ10.png" width="100%">
#### Run again with caches and changing input parameters:
```
# Obs.: If we want to override the input parameters with other ones:
execution = pipeline.start(
parameters=dict(
ProcessingInstanceType="ml.c5.xlarge",
ModelApprovalStatus="Approved", # Would approve automatically
)
)
```
Now with the cache everything will be even faster (check the `Elapsed time`):
<img src="./media/sm-pipe-iter-final-exec-override.png" width="80%">
Since we overrode the input `ModelApprovalStatus` to "Approved", this time model will be approved automatically and saved to the Model Registry:
<img src="./media/sm-pipe-iter-final-exec-override2.png" width="80%">
Let's compare the models. Just select both, right-click and then choose `Compare models`:
<img src="./media/sm-pipe-iter-final-exec-override3.png" width="80%">
Obviously both executions were identical and the 2 models have the same metrics:
<img src="./media/sm-pipe-iter-final-exec-override4.png" width="80%">
```
# # Obs.: if we wanted to stop pipeline execution:
# execution.stop()
# # Obs.: if we wanted to delete the whole pipeline:
# pipeline.delete()
```
Let's put the whole pipeline code into a python script:
```
%%writefile my_labs_solutions/pipeline_definition.py
import os
import json
from time import strftime, gmtime
import sagemaker
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import (
ProcessingStep,
TrainingStep,
)
from sagemaker.processing import (
ProcessingInput,
ProcessingOutput,
ScriptProcessor,
)
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_experiment_config import PipelineExperimentConfig
from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.steps import CacheConfig
from sagemaker.workflow.conditions import (
ConditionGreaterThanOrEqualTo,
)
from sagemaker.workflow.condition_step import (
ConditionStep,
JsonGet,
)
from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.model_metrics import (
MetricsSource,
ModelMetrics,
)
from .dataprep_solution import get_dataprep_processor
from .modeling_solution import get_modeling_estimator
from .evaluation_solution import get_evaluation_processor
BASE_DIR = os.path.dirname(os.path.realpath(__file__))
def get_my_solutions_vars():
vars_path = os.path.join(".", "pipelines", "my_labs_solutions", "my-solution-vars.json")
with open(vars_path, "rb") as f:
my_vars = json.loads(f.read())
return my_vars
def get_pipeline(region,
role=None,
default_bucket=None,
model_package_group_name="MLOpsCustomerChurnPackageGroup", # Choose any name
pipeline_name="MLOpsFinalChurnMLPipeline", # You can find your pipeline name in the Studio UI (project -> Pipelines -> name)
base_job_prefix="CustomerChurn", # Choose any name
) -> Pipeline:
# Get config vars
my_vars = get_my_solutions_vars()
bucket = my_vars["bucket"]
prefix = my_vars["prefix"]
region = my_vars["region"]
docker_image_name = my_vars["docker_image_name"]
s3uri_raw = my_vars["s3uri_raw"]
s3_dataprep_code_uri = my_vars["s3_dataprep_code_uri"]
s3_modeling_code_uri = my_vars["s3_modeling_code_uri"]
train_script_name = my_vars["train_script_name"]
s3_evaluation_code_uri = my_vars["s3_evaluation_code_uri"]
role = my_vars["role"]
sagemaker_session = sagemaker.session.Session()
# Parameters for data preparation step
input_data = ParameterString(
name="InputDataUrl",
default_value=s3uri_raw # S3 URI where we stored the raw data
)
processing_instance_count = ParameterInteger(
name="ProcessingInstanceCount", default_value=1
)
processing_instance_type = ParameterString(
name="ProcessingInstanceType", default_value="ml.m5.xlarge"
)
# Add an input parameter to define the training instance type
training_instance_type = ParameterString(
name="TrainingInstanceType", default_value="ml.m5.xlarge"
)
model_approval_status = ParameterString(
name="ModelApprovalStatus",
default_value="PendingManualApproval", # ModelApprovalStatus can be set to a default of "Approved" if you don't want manual approval.
)
# Cache for 30 minutes
cache_config = CacheConfig(enable_caching=True, expire_after="T30m")
sklearn_processor = get_dataprep_processor(processing_instance_type, processing_instance_count, role)
# Processing step for feature engineering
step_process = ProcessingStep(
name="CustomerChurnProcess", # choose any name
processor=sklearn_processor,
outputs=[
ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
ProcessingOutput(
output_name="validation", source="/opt/ml/processing/validation"
),
ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
],
code=s3_dataprep_code_uri,
job_arguments=["--input-data", input_data],
cache_config=cache_config
)
xgb_train = get_modeling_estimator(bucket,
prefix,
s3_modeling_code_uri,
docker_image_name,
role,
entry_point_script = train_script_name)
step_train = TrainingStep(
name="CustomerChurnTrain",
estimator=xgb_train,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"train"
].S3Output.S3Uri,
content_type="text/csv"
),
"validation": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"validation"
].S3Output.S3Uri,
content_type="text/csv"
)
},
cache_config=cache_config
)
evaluation_report = PropertyFile(
name="EvaluationReport",
output_name="evaluation",
path="evaluation.json",
)
script_eval = get_evaluation_processor(docker_image_name, role)
# Processing step for evaluation
step_eval = ProcessingStep(
name="CustomerChurnEval",
processor=script_eval,
inputs=[
ProcessingInput(
source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
destination="/opt/ml/processing/model",
),
ProcessingInput(
source=step_process.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri,
destination="/opt/ml/processing/test",
),
],
outputs=[
ProcessingOutput(
output_name="evaluation", source="/opt/ml/processing/evaluation"
),
],
code=s3_evaluation_code_uri,
property_files=[evaluation_report],
cache_config=cache_config
)
# Model metrics that will be associated with RegisterModel step
model_metrics = ModelMetrics(
model_statistics=MetricsSource(
s3_uri="{}/evaluation.json".format(
step_eval.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"][
"S3Uri"
]
),
content_type="application/json",
)
)
model_package_group_name="CustomerChurnPackageGroup"
# Register model step that will be conditionally executed
step_register = RegisterModel(
name="CustomerChurnRegisterModel",
estimator=xgb_train,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
content_types=["text/csv"],
response_types=["text/csv"],
inference_instances=["ml.t2.medium", "ml.m5.large"],
transform_instances=["ml.m5.large"],
model_package_group_name=model_package_group_name,
approval_status=model_approval_status,
model_metrics=model_metrics,
)
# Condition step for evaluating model quality and branching execution
cond_gte = ConditionGreaterThanOrEqualTo( # You can change the condition here
left=JsonGet(
step=step_eval,
property_file=evaluation_report,
json_path="binary_classification_metrics.accuracy.value", # This should follow the structure of your report_dict defined in the evaluate.py file.
),
right=0.8, # You can change the threshold here
)
step_cond = ConditionStep(
name="CustomerChurnAccuracyCond",
conditions=[cond_gte],
if_steps=[step_register],
else_steps=[],
)
# Experiment configs
create_date = lambda: strftime("%Y-%m-%d-%H-%M-%S", gmtime())
experiment_name=f"pipeline-customer-churn-prediction-xgboost-{create_date()}"
trial_name=f"pipeline-framework-trial-{create_date()}"
pipeline_experiment_config = PipelineExperimentConfig(
experiment_name = experiment_name,
trial_name = trial_name
)
pipeline = Pipeline(
name=pipeline_name,
parameters=[
input_data,
processing_instance_type,
processing_instance_count,
training_instance_type,
model_approval_status,
],
steps=[step_process, step_train, step_eval, step_cond],
sagemaker_session=sagemaker_session,
)
return pipeline
```
<a id='SM_Projects'></a>
# Customizing the Build/Train/Deploy MLOps Project Template
SageMaker Projects introduce MLOps templates that automatically provision the underlying resources needed to enable
CI/CD capabilities for your Machine Learning Development Lifecycle (MLDC). Customers can use a number of built-in
templates or create their own custom templates.
In this workshop we will use one of the **pre-built MLOps templates** to bootstrap your ML project and establish a CI/CD
pattern from seed code.
### MLOps Template for Build, Train, and Deploy
> Imagine now that you are a data scientist that just joined the company. You need to get access to the ML resources.
To get started with SageMaker Projects, [they must be first enabled in the SageMaker Studio console](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects-studio-updates.html).
This can be done for existing users or while creating new ones:
<img src="media/enable_projects.png">
Within Amazon SageMaker Studio, you can now select “Projects” from a drop-down menu on the “Components and registries”
tab as shown below:
<img src="media/select_projects.png">
From the projects page you’ll have the option to launch a pre-configured SageMaker MLOps template. Click on `Create project` and we'll select the build, train and deploy template:
<img src="media/create_project.png">
Name the project `ChurnProject`.
> NOTE: Launching this template will kick off a model building pipeline by default and will train a regression model. This will incur a small cost.
Once the project is created from the MLOps template, the following architecture will be deployed:
<img src="media/deep_dive.png">
## Modifying the Seed Code for Custom Use Case
After your project has been created the architecture shown above will be deployed and the visualization of the
Pipeline will be available in the “Pipelines” drop down menu within SageMaker Studio.
In order to modify the seed code from this launched template, we’ll first need to clone the AWS CodeCommit
repositories to our local SageMaker Studio instance. From the list of projects, select the one that was just
created. Under the “Repositories” tab you can select the hyperlinks to locally clone the AWS CodeCommit repos:
<img src="media/clone_repos.png">
### Clone the `...modelbuild` repo (click on `clone repo...`)
The SageMaker project template will create these repositories.
In the `...-modelbuild` repository there's the code for preprocessing, training, and evaluating the model. This pre-built template includes another example for a regression model related to the [UCI Abalone dataset](https://archive.ics.uci.edu/ml/datasets/abalone):
<img src="media/repo_directory.png">
**In our case we want to create a pipeline for predicting Churn (previous labs).** We can modify these files in order to solve our own customer churn use-case.
---
### Modifying the code for the Churn problem
This is the sample structure of the Project (Abalone):
<img src="media/repo_directory.png" width="40%">
#### Let's use everything we just built:
In the `...modelbuild` repo:
1. replace `codebuild-buildspec.yml` in your current Studio project (Abalone) with the one found in [modelbuild/codebuild-buildspec.yml](modelbuild/codebuild-buildspec.yml) (Churn)
The final `codebuild-buildspec.yml` should be this one (note the comment on the first line):
<img src="media/buildspec.png" width="60%">
2. go to `pipelines`. Delete the `abalone` directory.
<img src="media/dir_del.png" width="40%">
3. Cut the `my_labs_solutions` directory and paste it into the `pipelines` directory of the `...modelbuild` repo.
<img src="media/dir_cut.png" width="40%">
<img src="media/dir_paste.png" width="40%">
In the end the `...modelbuild` repo should look like this:
<img src="media/dir_structure.png" width="40%">
## Trigger a new training Pipeline Execution through git commit
By committing these changes to the AWS CodeCommit repository (easily done in SageMaker Studio source control tab), a
new Pipeline execution will be triggered since there is an EventBridge monitoring for commits. After a few moments,
we can monitor the execution by selecting your Pipeline inside of the SageMaker Project.
Go to the directory of the `...modelbuild/pipelines` repo. Click on the git symbol:
<img src="media/git_push.png">
This triggers the pipelines for training. Go to our `“Pipelines”` tab inside of the SageMaker Project. Click on our only pipeline. And you'll see:
<img src="media/execute_pipeline.png">
Select the most recent execution:
<img src="media/dag.png">
## Trigger the ModelDeploy Pipeline
Once the train pipeline is completed, we can go to our `“Model groups”` tab inside of the SageMaker Project and inspect the metadata attached to the model artifacts. If everything looks good, we can manually approve the model:
<img src="media/model_metrics.png">
<img src="media/approve_model.png">
This approval will trigger the ModelDeploy pipeline (in CodePipeline):
<img src="media/execute_pipeline_deploy.png">
After we deploy to a staging environment and run some tests, we will have to **approve the deployment to production** by approving in the `ApproveDeployment` stage:
<img src="media/approve_deploy_prod.png">
Finally, if we go back to Studio, we will see the Production endpoint for real time inference.
<img src="media/endpoints.png">
---
<!-- Autogenerated by `scripts/make_examples.py` -->
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/voxel51/fiftyone-examples/blob/master/examples/image_uniqueness.ipynb">
<img src="https://user-images.githubusercontent.com/25985824/104791629-6e618700-5769-11eb-857f-d176b37d2496.png" height="32" width="32">
Try in Google Colab
</a>
</td>
<td>
<a target="_blank" href="https://nbviewer.jupyter.org/github/voxel51/fiftyone-examples/blob/master/examples/image_uniqueness.ipynb">
<img src="https://user-images.githubusercontent.com/25985824/104791634-6efa1d80-5769-11eb-8a4c-71d6cb53ccf0.png" height="32" width="32">
Share via nbviewer
</a>
</td>
<td>
<a target="_blank" href="https://github.com/voxel51/fiftyone-examples/blob/master/examples/image_uniqueness.ipynb">
<img src="https://user-images.githubusercontent.com/25985824/104791633-6efa1d80-5769-11eb-8ee3-4b2123fe4b66.png" height="32" width="32">
View on GitHub
</a>
</td>
<td>
<a href="https://github.com/voxel51/fiftyone-examples/raw/master/examples/image_uniqueness.ipynb" download>
<img src="https://user-images.githubusercontent.com/25985824/104792428-60f9cc00-576c-11eb-95a4-5709d803023a.png" height="32" width="32">
Download notebook
</a>
</td>
</table>
# Exploring Image Uniqueness
This example provides a brief overview of using FiftyOne's [image uniqueness method](https://voxel51.com/docs/fiftyone/user_guide/brain.html#image-uniqueness) to analyze and extract insights from unlabeled datasets.
For more details, check out the in-depth [image uniqueness tutorial](https://voxel51.com/docs/fiftyone/tutorials/uniqueness.html).
## Load dataset
We'll work with the test split of the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), which is
conveniently available in the [FiftyOne Dataset Zoo](https://voxel51.com/docs/fiftyone/user_guide/dataset_creation/zoo.html):
```
import fiftyone as fo
import fiftyone.zoo as foz
# Load the CIFAR-10 test split
# This will download the dataset from the web, if necessary
dataset = foz.load_zoo_dataset("cifar10", split="test")
dataset.name = "image-uniqueness-example"
print(dataset)
```
## Index by visual uniqueness
Next we'll index the dataset by visual uniqueness using a
[builtin method](https://voxel51.com/docs/fiftyone/user_guide/brain.html#image-uniqueness)
from the FiftyOne Brain:
```
import fiftyone.brain as fob
fob.compute_uniqueness(dataset)
print(dataset)
```
Note that the dataset now has a `uniqueness` field that contains a numeric measure of the visual uniqueness of each sample:
```
# View a sample from the dataset
print(dataset.first())
```
## Visualize near-duplicate samples in the App
Let's open the dataset in the App:
```
# View dataset in the App
session = fo.launch_app(dataset)
```

From the App, we can show the most visually similar images in the dataset by creating a `SortBy("uniqueness", reverse=False)` stage in the [view bar](https://voxel51.com/docs/fiftyone/user_guide/app.html#using-the-view-bar).
Alternatively, this same operation can be performed programmatically via Python:
```
# Show least unique images first
least_unique_view = dataset.sort_by("uniqueness", reverse=False)
# Open view in App
session.view = least_unique_view
```

## Omit near-duplicate samples from the dataset
Next, we'll show how to omit visually similar samples from a dataset.
First, use the App to select visually similar samples.

Assuming the visually similar samples are currently selected in the App, we can easily add a `duplicate` tag to these samples via Python:
```
# Get currently selected images from App
dup_ids = session.selected
print(dup_ids)
# Get view containing selected samples
dups_view = dataset.select(dup_ids)
# Mark as duplicates
for sample in dups_view:
sample.tags.append("duplicate")
sample.save()
```
We can, for example, then use the `MatchTag("duplicate")` stage in the [view bar](https://voxel51.com/docs/fiftyone/user_guide/app.html#using-the-view-bar) to re-isolate the duplicate samples.
Alternatively, this same operation can be performed programmatically via Python:
```
# Select samples with `duplicate` tag
dups_tag_view = dataset.match_tags("duplicate")
# Open view in App
session.view = dups_tag_view
```

## Export de-duplicated dataset
Now let's [create a view](https://voxel51.com/docs/fiftyone/user_guide/using_views.html#filtering)
that omits samples with the `duplicate` tag, and then export them to disk as an [image classification directory tree](https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#imageclassificationdirectorytree):
```
from fiftyone import ViewField as F
# Get samples that do not have the `duplicate` tag
no_dups_view = dataset.match(~F("tags").contains("duplicate"))
# Export dataset to disk as a classification directory tree
no_dups_view.export(
"/tmp/fiftyone-examples/cifar10-no-dups",
fo.types.ImageClassificationDirectoryTree
)
```
Let's list the contents of the exported dataset on disk to verify the export:
```
# Check the top-level directory structure
!ls -lah /tmp/fiftyone-examples/cifar10-no-dups
# View the contents of a class directory
!ls -lah /tmp/fiftyone-examples/cifar10-no-dups/airplane | head
```
# Partially Observable Markov decision processes (POMDPs)
This Jupyter notebook acts as supporting material for POMDPs, covered in **Chapter 17 Making Complex Decisions** of the book *Artificial Intelligence: A Modern Approach*. We make use of the implementations of POMDPs in the mdp.py module. This notebook has been separated from the notebook `mdp.ipynb` as the topics are considerably more advanced.
**Note that it is essential to work through and understand the mdp.ipynb notebook before diving into this one.**
Let us import everything from the mdp module to get started.
```
from mdp import *
from notebook import psource, pseudocode
```
## CONTENTS
1. Overview of MDPs
2. POMDPs - a conceptual outline
3. POMDPs - a rigorous outline
4. Value Iteration
- Value Iteration Visualization
## 1. OVERVIEW
We first review the Markov property and MDPs as in [Section 17.1] of the book.
- A stochastic process is said to have the **Markov property**, or to have a **Markovian transition model** if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only on the present state, not on the sequence of events that preceded it.
-- (Source: [Wikipedia](https://en.wikipedia.org/wiki/Markov_property))
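As a minimal sketch (using a hypothetical 3-state chain chosen only for illustration), the Markov property means a simulator needs nothing but the current state to sample the next one:

```python
import numpy as np

# Hypothetical 3-state Markov chain: row s of P holds P(next state | current state s)
P = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
])

rng = np.random.default_rng(0)

def step(state):
    # The next-state distribution depends only on the current state,
    # not on how the chain arrived there -- the Markov property.
    return rng.choice(3, p=P[state])

state = 0
trajectory = [state]
for _ in range(10):
    state = step(state)
    trajectory.append(state)
print(trajectory)
```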
A Markov decision process or MDP is defined as:
- a sequential decision problem for a fully observable, stochastic environment with a Markovian transition model and additive rewards.
An MDP consists of a set of states (with an initial state $s_0$); a set $A(s)$ of actions
in each state; a transition model $P(s' | s, a)$; and a reward function $R(s)$.
The MDP seeks to make sequential decisions to occupy states so as to maximise some combination of the reward function $R(s)$.
The characteristic problem of the MDP is hence to identify the optimal policy function $\pi^*(s)$ that provides the _utility-maximising_ action $a$ to be taken when the current state is $s$.
### Belief vector
**Note**: The book refers to the _belief vector_ as the _belief state_. We use the former terminology here to retain our ability to refer to the belief vector as a _probability distribution over states_.
The solution of an MDP is subject to certain properties of the problem which are assumed and justified in [Section 17.1]. One critical assumption is that the agent is **fully aware of its current state at all times**.
A tedious (but rewarding, as we will see) way of expressing this is in terms of the **belief vector** $b$ of the agent. The belief vector is a function mapping states to probabilities or certainties of being in those states.
Consider an agent that is fully aware that it is in state $s_i$ in the statespace $(s_1, s_2, ... s_n)$ at the current time.
Its belief vector is the vector $(b(s_1), b(s_2), ... b(s_n))$ given by the function $b(s)$:
\begin{align*}
b(s) &= 0 \quad \text{if }s \neq s_i \\ &= 1 \quad \text{if } s = s_i
\end{align*}
Note that $b(s)$ is a probability distribution that necessarily sums to $1$ over all $s$.
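For instance (taking a hypothetical 5-state statespace purely for illustration), the belief vector of a fully certain agent is a one-hot probability distribution:

```python
import numpy as np

# Belief vector for an agent certain it is in state s_i (here i = 2)
# of a hypothetical 5-state statespace
n_states, i = 5, 2
b = np.zeros(n_states)
b[i] = 1.0

print(b)  # the degenerate distribution concentrated on s_i
```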
## 2. POMDPs - a conceptual outline
The POMDP really has only two modifications to the **problem formulation** compared to the MDP.
- **Belief state** - In the real world, the current state of an agent is often not known with complete certainty. This makes the concept of a belief vector extremely relevant. It allows the agent to represent different degrees of certainty with which it _believes_ it is in each state.
- **Evidence percepts** - In the real world, agents often have certain kinds of evidence, collected from sensors. They can use the probability distribution of observed evidence, conditional on state, to consolidate their information. This is a known distribution $P(e\ |\ s)$ - $e$ being an evidence, and $s$ being the state it is conditional on.
Consider the world we used for the MDP.

#### Using the belief vector
An agent beginning at $(1, 1)$ may not be certain that it is indeed in $(1, 1)$. Consider a belief vector $b$ such that:
\begin{align*}
b((1,1)) &= 0.8 \\
b((2,1)) &= 0.1 \\
b((1,2)) &= 0.1 \\
b(s) &= 0 \quad \quad \forall \text{ other } s
\end{align*}
By horizontally concatenating each row, we can represent this as an 11-dimensional vector (omitting $(2, 2)$).
Thus, taking $s_1 = (1, 1)$, $s_2 = (1, 2)$, ... $s_{11} = (4,3)$, we have $b$:
$b = (0.8, 0.1, 0, 0, 0.1, 0, 0, 0, 0, 0, 0)$
This fully represents the certainty to which the agent is aware of its state.
#### Using evidence
The evidence observed here could be the number of adjacent 'walls' or 'dead ends' observed by the agent. We assume that the agent cannot 'orient' the walls - only count them.
In this case, $e$ can take only two values, 1 and 2. This gives $P(e\ |\ s)$ as:
\begin{align*}
P(e=2\ |\ s) &= \frac{1}{7} \quad \forall \quad s \in \{s_1, s_2, s_4, s_5, s_8, s_9, s_{11}\}\\
P(e=1\ |\ s) &= \frac{1}{4} \quad \forall \quad s \in \{s_3, s_6, s_7, s_{10}\} \\
P(e\ |\ s) &= 0 \quad \forall \quad \text{ other } s, e
\end{align*}
Note that the implications of the evidence on the state must be known **a priori** to the agent. Ways of reliably learning this distribution from percepts are beyond the scope of this notebook.
## 3. POMDPs - a rigorous outline
A POMDP is thus a sequential decision problem for a *partially* observable, stochastic environment with a Markovian transition model, a known 'sensor model' for inferring state from observation, and additive rewards.
Practically, a POMDP has the following, which an MDP also has:
- a set of states, each denoted by $s$
- a set of actions available in each state, $A(s)$
- a reward accrued on attaining some state, $R(s)$
- a transition probability $P(s'\ |\ s, a)$ of action $a$ changing the state from $s$ to $s'$
And the following, which an MDP does not:
- a sensor model $P(e\ |\ s)$ on evidence conditional on states
Additionally, the POMDP is now uncertain of its current state hence has:
- a belief vector $b$ representing the certainty of being in each state (as a probability distribution)
#### New uncertainties
It is useful to intuitively appreciate the new uncertainties that have arisen in the agent's awareness of its own state.
- At any point, the agent has belief vector $b$, the distribution of its believed likelihood of being in each state $s$.
- For each of these states $s$ that the agent may **actually** be in, it has some set of actions given by $A(s)$.
- Each of these actions may transport it to some other state $s'$, assuming an initial state $s$, with probability $P(s'\ |\ s, a)$
- Once the action is performed, the agent receives a percept $e$. $P(e\ |\ s)$ now tells it the chances of having perceived $e$ for each state $s$. The agent must use this information to update its new belief state appropriately.
#### Evolution of the belief vector - the `FORWARD` function
The new belief vector $b'(s')$ after an action $a$ on the belief vector $b(s)$ and the noting of evidence $e$ is:
$$ b'(s') = \alpha P(e\ |\ s') \sum_s P(s'\ | s, a) b(s)$$
where $\alpha$ is a normalising constant (to retain the interpretation of $b$ as a probability distribution).
This equation just sums the likelihoods of reaching a state $s'$ from every possible state $s$, weighted by the initial likelihood of being in each $s$. This is then multiplied by the likelihood that the known evidence actually implies the new state $s'$.
This function is represented as `b' = FORWARD(b, a, e)`
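A minimal numpy sketch of this update (the matrices `T` and `O` below are hypothetical, chosen only for illustration):

```python
import numpy as np

def forward(b, T, O, e):
    """b' = FORWARD(b, a, e) for a fixed action a.

    b : belief vector over the n states
    T : n x n transition matrix for action a, T[s, s2] = P(s2 | s, a)
    O : m x n sensor model, O[e, s] = P(e | s)
    e : index of the observed evidence
    """
    predicted = T.T @ b                       # sum_s P(s' | s, a) b(s), one entry per s'
    unnormalised = O[e] * predicted           # multiply by P(e | s')
    return unnormalised / unnormalised.sum()  # alpha normalises to a distribution

# Hypothetical 2-state example
T = np.array([[0.7, 0.3],
              [0.4, 0.6]])
O = np.array([[0.9, 0.2],   # P(e=0 | s)
              [0.1, 0.8]])  # P(e=1 | s)
b = np.array([0.5, 0.5])

b_new = forward(b, T, O, e=1)
print(b_new)
```

Note how observing evidence 1, which is much more likely in the second state, shifts the belief mass toward that state.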
#### Probability distribution of the evolving belief vector
The goal here is to find $P(b'\ |\ b, a)$ - the probability that action $a$ transforms belief vector $b$ into belief vector $b'$. The following steps illustrate this -
The probability of observing evidence $e$ when action $a$ is enacted on belief vector $b$ can be distributed over each possible new state $s'$ resulting from it:
\begin{align*}
P(e\ |\ b, a) &= \sum_{s'} P(e\ |\ b, a, s') P(s'\ |\ b, a) \\
&= \sum_{s'} P(e\ |\ s') P(s'\ |\ b, a) \\
&= \sum_{s'} P(e\ |\ s') \sum_s P(s'\ |\ s, a) b(s)
\end{align*}
The probability of getting belief vector $b'$ from $b$ by application of action $a$ can thus be summed over all possible evidences $e$:
\begin{align*}
P(b'\ |\ b, a) &= \sum_{e} P(b'\ |\ b, a, e) P(e\ |\ b, a) \\
&= \sum_{e} P(b'\ |\ b, a, e) \sum_{s'} P(e\ |\ s') \sum_s P(s'\ |\ s, a) b(s)
\end{align*}
where $P(b'\ |\ b, a, e) = 1$ if $b' = $ `FORWARD(b, a, e)` and $= 0$ otherwise.
Given initial and final belief states $b$ and $b'$, the transition probabilities still depend on the action $a$ and observed evidence $e$. Some belief states may be achievable by certain actions, but have non-zero probabilities for states prohibited by the evidence $e$. The above condition thus ensures that only valid combinations of $(b', b, a, e)$ are considered.
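The inner quantity $P(e\ |\ b, a)$ can be computed directly from its sum (again with hypothetical `T`, `O`, and `b`, chosen only for illustration):

```python
import numpy as np

# P(e | b, a) = sum_{s'} P(e | s') sum_s P(s' | s, a) b(s)
# Hypothetical 2-state, 2-evidence example
T = np.array([[0.7, 0.3],    # T[s, s'] = P(s' | s, a) for a fixed action a
              [0.4, 0.6]])
O = np.array([[0.9, 0.2],    # O[e, s'] = P(e | s')
              [0.1, 0.8]])
b = np.array([0.5, 0.5])

predicted = T.T @ b   # distribution over next states s'
p_e = O @ predicted   # one entry per evidence e
print(p_e)
```

Because the columns of `O` each sum to one, the resulting evidence probabilities form a valid distribution.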
#### A modified rewardspace
For MDPs, the reward space was simple - one reward per available state. However, for a belief vector $b(s)$, the expected reward is now:
$$\rho(b) = \sum_s b(s) R(s)$$
Thus, as the belief vector can take infinite values of the distribution over states, so can the reward for each belief vector vary over a hyperplane in the belief space, or space of states (planes in an $N$-dimensional space are formed by a linear combination of the axes).
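As a quick sketch with hypothetical values of $R$ and $b$, the expected reward is just the dot product of the two vectors:

```python
import numpy as np

# rho(b) = sum_s b(s) R(s): the expected reward is linear in the belief vector
R = np.array([-0.04, -0.04, 1.0, -1.0])  # hypothetical per-state rewards
b = np.array([0.7, 0.2, 0.1, 0.0])       # hypothetical belief vector
rho = b @ R
print(rho)
```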
# <div style="text-align: center">Probability of Earthquake: EDA, FE, +5 Models </div>
<img src='http://s8.picofile.com/file/8355280718/pro.png' width=400 height=400>
<div style="text-align:center"> last update: <b>25/03/2019</b></div>
You can Fork code and Follow me on:
> ###### [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)
> ###### [Kaggle](https://www.kaggle.com/mjbahmani/)
-------------------------------------------------------------------------------------------------------------
<b> I hope you find this kernel helpful and some <font color='red'>UPVOTES</font> would be very much appreciated.</b>
-----------
<a id="top"></a> <br>
## Notebook Content
1. [Introduction](#1)
1. [Load packages](#2)
1. [Import](#21)
1. [Setup](#22)
1. [Version](#23)
1. [Problem Definition](#3)
1. [Problem Feature](#31)
1. [Aim](#32)
1. [Variables](#33)
1. [Evaluation](#34)
1. [Exploratory Data Analysis(EDA)](#4)
1. [Data Collection](#41)
1. [Visualization](#42)
1. [Hist](#421)
1. [Time to failure histogram](#422)
1. [Distplot](#423)
1. [kdeplot](#424)
1. [Data Preprocessing](#43)
1. [Create new feature](#431)
1. [ML Explainability](#44)
1. [Permutation Importance](#441)
1. [Partial Dependence Plots](#442)
1. [SHAP Values](#443)
1. [Model Development](#5)
1. [SVM](#51)
1. [LGBM](#52)
1. [Catboost](#53)
1. [Submission](#6)
1. [Blending](#61)
1. [References](#7)
1. [References](#71)
<a id="1"></a> <br>
## 1- Introduction
**Forecasting earthquakes** is one of the most important problems in **Earth science**. As you may agree, earthquake forecasting is closely related to the concepts of **probability**. In this kernel, I try to look at the prediction of earthquakes with the **help** of the concepts of **probability**.
<img src='https://www.preventionweb.net/files/52472_largeImage.jpg' width=600 height=600 >
For anyone taking first steps in data science, **probability** is a must-know concept. Concepts of probability theory are the backbone of many important ideas in data science, from inferential statistics to Bayesian networks. It would not be wrong to say that the journey of mastering statistics begins with **probability**.
Before starting, I have to point out that I used the following great kernel:
[https://www.kaggle.com/inversion/basic-feature-benchmark](https://www.kaggle.com/inversion/basic-feature-benchmark)
<a id="2"></a> <br>
## 2- Load packages
<a id="21"></a> <br>
## 2-1 Import
```
from sklearn.model_selection import StratifiedKFold, KFold, RepeatedKFold
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error
from eli5.sklearn import PermutationImportance
from sklearn.tree import DecisionTreeRegressor
from catboost import CatBoostRegressor,Pool
import matplotlib.patches as patch
from scipy.stats import kurtosis
import matplotlib.pyplot as plt
from sklearn.svm import NuSVR
from scipy.stats import skew
from scipy.stats import norm
from scipy import linalg
from sklearn import tree
from sklearn import svm
import lightgbm as lgb
import xgboost as xgb
from tqdm import tqdm
import seaborn as sns
import pandas as pd
import numpy as np
import graphviz
import warnings
import random
import eli5
import shap # package used to calculate Shap values
import time
import glob
import sys
import os
```
<a id="22"></a> <br>
## 2-2 Setup
```
%matplotlib inline
%precision 4
warnings.filterwarnings('ignore')
plt.style.use('ggplot')
np.set_printoptions(suppress=True)
pd.set_option("display.precision", 15)
```
<a id="23"></a> <br>
## 2-3 Version
```
print('pandas: {}'.format(pd.__version__))
print('numpy: {}'.format(np.__version__))
print('Python: {}'.format(sys.version))
```
<a id="3"></a>
<br>
## 3- Problem Definition
I think one of the important things when you start a new machine learning project is defining your problem: that means you should understand the business problem (**Problem Formalization**).
Problem Definition has four steps that have illustrated in the picture below:
<img src="http://s8.picofile.com/file/8344103134/Problem_Definition2.png" width=400 height=400>
> <font color='red'>Note</font> : **Current scientific studies related to earthquake forecasting focus on three key points:**
1. when the event will occur
1. where it will occur
1. how large it will be.
<a id="31"></a>
### 3-1 Problem Feature
1. Train.csv - A single, continuous training segment of experimental data.
1. Test - A folder containing many small segments of test data.
1. sample_submission.csv - A sample submission file in the correct format.
<a id="32"></a>
### 3-2 Aim
In this competition, you will address <font color='red'><b>WHEN</b></font> the earthquake will take place.
<a id="33"></a>
### 3-3 Variables
1. **acoustic_data** - the seismic signal [int16]
1. **time_to_failure** - the time (in seconds) until the next laboratory earthquake [float64]
1. **seg_id**- the test segment ids for which predictions should be made (one prediction per segment)
<a id="34"></a>
### 3-4 Evaluation
Submissions are evaluated using the [**mean absolute error**](https://en.wikipedia.org/wiki/Mean_absolute_error) between the predicted time remaining before the next lab earthquake and the actual remaining time.
<img src='https://wikimedia.org/api/rest_v1/media/math/render/svg/3ef87b78a9af65e308cf4aa9acf6f203efbdeded'>
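As a quick sanity check with made-up numbers, the metric is just the mean of the absolute prediction errors (`sklearn.metrics.mean_absolute_error`, imported earlier, computes the same value):

```python
import numpy as np

# Hypothetical predicted vs. actual times to failure (in seconds)
y_true = np.array([1.2, 3.5, 0.8, 2.0])
y_pred = np.array([1.0, 3.0, 1.0, 2.5])

mae = np.mean(np.abs(y_true - y_pred))
print(mae)
```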
<a id="4"></a>
## 4- Exploratory Data Analysis(EDA)
In this section, we'll see how to use graphical and numerical techniques to begin uncovering the structure of the data.
1. Which variables suggest interesting relationships?
1. Which observations are unusual?
1. Analysis of the features!
By the end of the section, you'll be able to answer these questions and more, while generating graphics that are both insightful and beautiful. then We will review analytical and statistical operations:
1. Data Collection
1. Visualization
1. Data Preprocessing
1. Data Cleaning
<a id="41"></a> <br>
## 4-1 Data Collection
Let's see what we have in the input!
```
print(os.listdir("../input/"))
%%time
train = pd.read_csv('../input/train.csv' , dtype={'acoustic_data': np.int16, 'time_to_failure': np.float32})
print("Train has: rows:{} cols:{}".format(train.shape[0], train.shape[1]))
submission = pd.read_csv('../input/sample_submission.csv', index_col='seg_id')
submission.head()
print("submission has: rows:{} cols:{}".format(submission.shape[0], submission.shape[1]))
```
### There are 2624 files in test.zip.
```
len(os.listdir(os.path.join('../input/', 'test')))
```
Also we have **2624** rows in the submission, the same as the number of test files, so it is clear that we should predict **time_to_failure** for each of the test files.
## Memory usage: 3.5 GB
```
train.info()
#train.describe()
train.shape
```
### Wow! So large (rows: 629,145,480, columns: 2) to play with; let's select just 150,000 rows!
```
train.head()
```
<a id="42"></a> <br>
## 4-2 Visualization
Because the size of the dataset is very large, for the visualization section we select only a small subset of the data.
```
# Load only the first 150,000 rows (a tiny fraction of the dataset) to play with
mini_train= pd.read_csv("../input/train.csv",nrows=150000)
mini_train.describe()
mini_train.shape
mini_train.isna().sum()
type(mini_train)
```
<a id="421"></a>
### 4-2-1 Hist
```
#acoustic_data means signal
mini_train["acoustic_data"].hist();
```
<a id="422"></a>
### 4-2-2 Time to failure histogram
```
plt.plot(mini_train["time_to_failure"], mini_train["acoustic_data"])
plt.title("time_to_failure histogram")
```
<a id="423"></a>
### 4-2-3 Distplot
```
sns.distplot(mini_train["acoustic_data"])
```
<a id="424"></a>
### 4-2-4 kdeplot
```
sns.kdeplot(mini_train["acoustic_data"] )
```
<a id="43"></a> <br>
## 4-3 Data Preprocessing
Because we have only one feature (**acoustic_data**), and the size of the training set is very large (more than 600 million rows), it is a good idea to reduce the size of the training set by building new segments, and also to increase the number of attributes by using **statistical attributes**.
```
# based on : https://www.kaggle.com/inversion/basic-feature-benchmark
rows = 150_000
segments = int(np.floor(train.shape[0] / rows))
segments
X_train = pd.DataFrame(index=range(segments), dtype=np.float64,
columns=['ave', 'std', 'max', 'min','sum','skew','kurt'])
y_train = pd.DataFrame(index=range(segments), dtype=np.float64,
columns=['time_to_failure'])
```
### y_train is our target for prediction
```
y_train.head()
```
### Our training set, with 7 new statistical features
```
X_train.head()
```
<a id="431"></a> <br>
### 4-3-1 Create New Features
> <font color='red'>Note:</font>
**tqdm** means "progress" in Arabic (taqadum, تقدّم) and is an abbreviation for "I love you so much" in Spanish (te quiero demasiado). Instantly make your loops show a smart progress meter - just wrap any iterable with tqdm(iterable), and you're done! [https://tqdm.github.io/](https://tqdm.github.io/)
```
%%time
for segment in tqdm(range(segments)):
seg = train.iloc[segment*rows:segment*rows+rows]
x = seg['acoustic_data'].values
y = seg['time_to_failure'].values[-1]
y_train.loc[segment, 'time_to_failure'] = y
X_train.loc[segment, 'ave'] = x.mean()
X_train.loc[segment, 'std'] = x.std()
X_train.loc[segment, 'max'] = x.max()
X_train.loc[segment, 'min'] = x.min()
X_train.loc[segment, 'sum'] = x.sum()
    X_train.loc[segment, 'skew'] = skew(x)
    X_train.loc[segment, 'kurt'] = kurtosis(x)
X_train.head()
y_train.head()
```
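The same windowing pattern, stripped down to a self-contained NumPy sketch on a synthetic signal (the 150,000-row window mirrors the length of a test file; the array and variable names here are illustrative only):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
signal = rng.normal(size=600_000)  # synthetic stand-in for acoustic_data

rows = 150_000
n_segments = len(signal) // rows   # floor division drops the ragged tail

features = []
for s in range(n_segments):
    x = signal[s * rows:(s + 1) * rows]
    # one row of summary statistics per segment
    features.append([x.mean(), x.std(), x.max(), x.min(),
                     x.sum(), skew(x), kurtosis(x)])
features = np.array(features)
print(features.shape)  # (n_segments, 7)
```

This is exactly what the loop above does with the real `train` DataFrame, just without the pandas bookkeeping.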
### Checking for missing data
```
def check_missing_data(df):
flag=df.isna().sum().any()
if flag==True:
total = df.isnull().sum()
        percent = (df.isnull().sum() / df.isnull().count()) * 100
output = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
data_type = []
# written by MJ Bahmani
for col in df.columns:
dtype = str(df[col].dtype)
data_type.append(dtype)
output['Types'] = data_type
return(np.transpose(output))
else:
return(False)
check_missing_data(X_train)
```
### Now we must create our X_test for submission
```
X_test = pd.DataFrame(columns=X_train.columns, dtype=np.float64, index=submission.index)
X_test.head()
%%time
for seg_id in tqdm(X_test.index):
seg = pd.read_csv('../input/test/' + seg_id + '.csv')
x = seg['acoustic_data'].values
X_test.loc[seg_id, 'ave'] = x.mean()
X_test.loc[seg_id, 'std'] = x.std()
X_test.loc[seg_id, 'max'] = x.max()
X_test.loc[seg_id, 'min'] = x.min()
X_test.loc[seg_id, 'sum'] = x.sum()
    X_test.loc[seg_id, 'skew'] = skew(x)
    X_test.loc[seg_id, 'kurt'] = kurtosis(x)
X_test.shape
```
Now we have all of the data frames needed to apply ML algorithms; we just add some feature scaling.
```
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
X=X_train.copy()
y=y_train.copy()
```
<a id="44"></a> <br>
## 4-4 ML Explainability
In this section, I try to extract insights from the models, with the help of this excellent [Course](https://www.kaggle.com/learn/machine-learning-explainability) on Kaggle. The goals of ML explainability for the earthquake problem are to:
1. Extract insights from models.
1. Find the most important features in the models.
1. Understand the effect of each feature on the models' predictions.
<a id="441"></a> <br>
### 4-4-1 Permutation Importance
In this section we will answer the following questions:
1. What features have the biggest impact on predictions?
1. How can we extract insights from models?
```
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
rfc_model = RandomForestRegressor(random_state=0).fit(train_X, train_y)
```
Here is how to calculate and show importances with the [eli5](https://eli5.readthedocs.io/en/latest/) library:
```
perm = PermutationImportance(rfc_model, random_state=1).fit(val_X, val_y)
eli5.show_weights(perm, feature_names = val_X.columns.tolist(), top=7)
```
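Under the hood, permutation importance simply measures how much the validation score degrades when one feature column is shuffled, breaking its link with the target. A from-scratch sketch on synthetic data (all names here are illustrative, not from the kernel):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=500)  # only column 0 matters

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
base_mae = mean_absolute_error(y_val, model.predict(X_val))

importances = []
for col in range(X_val.shape[1]):
    X_shuffled = X_val.copy()
    rng.shuffle(X_shuffled[:, col])          # break the feature/target link
    mae = mean_absolute_error(y_val, model.predict(X_shuffled))
    importances.append(mae - base_mae)       # score drop = importance

print(importances)  # column 0 dominates
```

`eli5.PermutationImportance` does the same thing, repeated over several shuffles with the standard deviation reported alongside the mean drop.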
<a id="442"></a> <br>
### 4-4-2 Partial Dependence Plots
While feature importance shows which variables most affect predictions, partial dependence plots show *how* a feature affects predictions. Partial dependence plots are calculated after a model has been fit. See [partial-plots](https://www.kaggle.com/dansbecker/partial-plots).
```
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
tree_model = DecisionTreeRegressor(random_state=0, max_depth=5, min_samples_split=5).fit(train_X, train_y)
```
To keep the explanation simple, I use a decision tree, which you can visualize below.
```
tree_graph = tree.export_graphviz(tree_model, out_file=None, feature_names=X.columns)
graphviz.Source(tree_graph)
```
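Partial dependence itself is just an averaging procedure: pin a feature at each grid value across every row, predict, and average. A minimal from-scratch sketch on synthetic data (all names here are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(400, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=400)  # target depends on feature 0

model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid):
    """Average prediction with `feature` pinned to each grid value."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v        # pin the feature everywhere
        pd_values.append(model.predict(X_mod).mean())
    return np.array(pd_values)

grid = np.linspace(-1, 1, 5)
pd0 = partial_dependence(model, X, 0, grid)
print(pd0)  # roughly U-shaped, following x**2
```

`pdp.pdp_isolate` used later in section 4-4-4 computes the same quantity, with extra support for plotting and confidence bands.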
<a id="443"></a> <br>
### 4-4-3 SHAP Values
SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see the SHAP NIPS paper for details).
```
row_to_show = 5
data_for_prediction = val_X.iloc[row_to_show] # use 1 row of data here. Could use multiple rows if desired
data_for_prediction_array = data_for_prediction.values.reshape(1, -1)
tree_model.predict(data_for_prediction_array);
# Create object that can calculate shap values
explainer = shap.TreeExplainer(tree_model)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
```
>**Note**: SHAP can answer the question: how does the model work for an individual prediction?
```
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
```
### 4-4-4 pdpbox
```
from matplotlib import pyplot as plt
from pdpbox import pdp, get_dataset, info_plots
# Create the data that we will plot
pdp_goals = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=X.columns, feature='std')
# plot it
pdp.pdp_plot(pdp_goals, 'std')
plt.show()
# Create the data that we will plot
pdp_goals = pdp.pdp_isolate(model=rfc_model, dataset=val_X, model_features=X.columns, feature='kurt')
# plot it
pdp.pdp_plot(pdp_goals, 'kurt')
plt.show()
```
<a id="5"></a> <br>
# 5- Model Development
<a id="51"></a> <br>
# 5-1 SVM
```
svm = NuSVR()
svm.fit(X_train_scaled, y_train.values.flatten())
y_pred_svm = svm.predict(X_train_scaled)
score = mean_absolute_error(y_train.values.flatten(), y_pred_svm)
print(f'Score: {score:0.3f}')
```
<a id="52"></a> <br>
# 5-2 LGBM
Defining folds for cross-validation
```
folds = KFold(n_splits=5, shuffle=True, random_state=42)
```
LGBM params
```
params = {'objective' : "regression",
'boosting':"gbdt",
'metric':"mae",
'boost_from_average':"false",
'num_threads':8,
'learning_rate' : 0.001,
'num_leaves' : 52,
'max_depth':-1,
'tree_learner' : "serial",
'feature_fraction' : 0.85,
'bagging_freq' : 1,
'bagging_fraction' : 0.85,
'min_data_in_leaf' : 10,
'min_sum_hessian_in_leaf' : 10.0,
          'verbosity' : -1}
```
```
%%time
y_pred_lgb = np.zeros(len(X_test_scaled))
for fold_n, (train_index, valid_index) in tqdm(enumerate(folds.split(X))):
print('Fold', fold_n, 'started at', time.ctime())
X_train, X_valid = X.iloc[train_index], X.iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
model = lgb.LGBMRegressor(**params, n_estimators = 22000, n_jobs = -1)
model.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_valid, y_valid)], eval_metric='mae',
verbose=1000, early_stopping_rounds=200)
y_pred_valid = model.predict(X_valid)
    y_pred_lgb += model.predict(X_test, num_iteration=model.best_iteration_) / folds.n_splits  # predict on unscaled features, matching the training data
```
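The fold-averaging pattern above (train on each fold, add `prediction / n_splits` to the running test forecast) works with any regressor. A self-contained sketch with scikit-learn alone, on synthetic data (names here are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(42)
w_true = np.array([1.0, -2.0, 0.5, 0.0])
X = rng.normal(size=(200, 4))
y = X @ w_true + 0.1 * rng.normal(size=200)
X_test = rng.normal(size=(50, 4))

folds = KFold(n_splits=5, shuffle=True, random_state=42)
y_pred_test = np.zeros(len(X_test))

for train_idx, valid_idx in folds.split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    # each fold's model contributes an equal share of the final test prediction
    y_pred_test += model.predict(X_test) / folds.n_splits

print(y_pred_test[:3])
```

Averaging across folds reduces the variance that comes from any single train/validation split.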
<a id="53"></a> <br>
# 5-3 Catboost
```
train_pool = Pool(X, y)
cat_model = CatBoostRegressor(
    iterations=3000,  # lower this (e.g. to 25) for a quick test run
learning_rate=0.03,
eval_metric='MAE',
)
cat_model.fit(X,y,silent=True)
y_pred_cat = cat_model.predict(X_test)
```
<a id="6"></a> <br>
# 6- submission
### submission for svm
```
y_pred_svm = svm.predict(X_test_scaled)
submission['time_to_failure'] = y_pred_svm
submission.to_csv('submission_svm.csv')
```
### Submission for LGBM
```
submission['time_to_failure'] = y_pred_lgb
submission.to_csv('submission_lgb.csv')
```
### Submission for Catboost
```
submission['time_to_failure'] = y_pred_cat
submission.to_csv('submission_cat.csv')
```
### Submission for Randomforest
```
y_pred_rf=rfc_model.predict(X_test_scaled)
submission['time_to_failure'] = y_pred_rf
submission.to_csv('submission_rf.csv')
```
<a id="61"></a> <br>
# 6-1 Blending
```
blending = y_pred_svm*0.5 + y_pred_lgb*0.5
submission['time_to_failure']=blending
submission.to_csv('submission_lgb_svm.csv')
blending = y_pred_svm*0.5 + y_pred_cat*0.5
submission['time_to_failure']=blending
submission.to_csv('submission_cat_svm.csv')
```
you can follow me on:
> ###### [ GitHub](https://github.com/mjbahmani/)
> ###### [Kaggle](https://www.kaggle.com/mjbahmani/)
<a id="7"></a> <br>
# 7-References
1. [Basic Probability Data Science with examples](https://www.analyticsvidhya.com/blog/2017/02/basic-probability-data-science-with-examples/)
1. [How to self learn statistics of data science](https://medium.com/ml-research-lab/how-to-self-learn-statistics-of-data-science-c05db1f7cfc3)
1. [Probability statistics for data science- series](https://towardsdatascience.com/probability-statistics-for-data-science-series-83b94353ca48)
1. [basic-statistics-in-python-probability](https://www.dataquest.io/blog/basic-statistics-in-python-probability/)
1. [tutorialspoint](https://www.tutorialspoint.com/python/python_poisson_distribution.htm)
## 7-1 Kaggle Kernels
Finally, I want to thank the authors of all the kernels I've used in this notebook:
1. [https://www.kaggle.com/inversion/basic-feature-benchmark](https://www.kaggle.com/inversion/basic-feature-benchmark)
1. [https://www.kaggle.com/dansbecker/permutation-importance](https://www.kaggle.com/dansbecker/permutation-importance)
Go to first step: [Course Home Page](https://www.kaggle.com/mjbahmani/10-steps-to-become-a-data-scientist)
Go to next step : [Titanic](https://www.kaggle.com/mjbahmani/a-comprehensive-ml-workflow-with-python)
### Not Completed yet!!!
```
from gurobipy import *
import numpy as np
try:
## Define ranges
age = 5 # define i
years = 5 #define j
## Package ranges into one variable
dim = [age,years]
## Create my model
m = Model('prob1')
m.params.outputFlag = 0
## Create my variables
x = m.addVars(age,years, lb=0.0, vtype= GRB.BINARY, name="x") # machine of age i is operated in year j
    y = m.addVars(years, lb=0.0, vtype= GRB.BINARY, name="y") # y[j] = 1 if the machine is replaced (old one sold, new one bought) in year j
## Define data
costs = [2,4,5,9,12] #(in range j)
gains = [0,7,6,2,1] #(in range j)
## Create objective function expression
init_cost = 12
maint_costs = quicksum(costs[j]*x[i,j] for i in range(age) for j in range(years))
buy_sell_costs = quicksum((12+costs[j]-gains[j])*y[j] for j in range(years))
expression = init_cost + maint_costs + buy_sell_costs
## CONSTRAINTS
#No machines can be sold when they are of age 0
m.addConstr(y[0] == 0)
#One machine must be running at every time j
m.addConstrs(
(quicksum(x[i,j] for i in range(age))
== 1
for j in range(years)))
# Sets constraint st sum of y must be equal to number of new machines
m.addConstr(
(quicksum(y[j] for j in range(years)))
==
(quicksum(x[0,j] for j in range(1,years)))
)
# Zero out x's if i>j
m.addConstrs(x[i,j] == 0
for i in range(age)
for j in range(years)
if i > j )
# sum of old i's must be greater than sum of next i's
# m.addConstr(quicksum(x[0,j] for j in range(years)) >= quicksum(x[1,j] for j in range(years)))
# m.addConstr(quicksum(x[1,j] for j in range(years)) >= quicksum(x[2,j] for j in range(years)))
# m.addConstr((quicksum(x[2,j] for j in range(years))) >= (quicksum(x[3,j] for j in range(years))))
# m.addConstr((quicksum(x[3,j] for j in range(years))) >= (quicksum(x[4,j] for j in range(years))))
# Set an objective function
m.setObjective(expression, GRB.MINIMIZE)
# Optimize model
m.optimize()
for v in m.getVars():
print('%s %g' % (v.varName, v.x))
print('Obj: %g' % m.objVal)
except GurobiError as e:
print('Error code ' + str(e.errno) + ": " + str(e))
except AttributeError:
print('Encountered an attribute error')
vars = m.getVars()
for index in range(age*years):
print(vars[index])
# for v in m.getVars():
# print(v)
vars[0]
subset = [ (i,j) for i in range(age) for j in range(years) if i <= j ]
print(subset)
range(dim[1])
```
# Self-Driving Car Engineer Nanodegree
## Deep Learning
## Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project.
The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
---
## Step 0: Load The Data
```
# Imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Load pickled data
import pickle
import csv
# TODO: Fill this in based on where you saved the training and testing data
training_file = 'data/train.p'
validation_file= 'data/valid.p'
testing_file = 'data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, Y_train = train['features'], train['labels']
X_valid, Y_valid = valid['features'], valid['labels']
X_test, Y_test = test['features'], test['labels']
#Create a dict that maps labels to actual signnames
sign_name = {}
with open('signnames.csv') as csvfile:
reader = csv.reader(csvfile)
next(reader)
for row in reader:
sign_name[int(row[0])] = row[1]
print(sign_name)
```
---
## Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.
### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
```
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = X_train.shape[0]
# TODO: Number of validation examples
n_validation = X_valid.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(train['labels']))
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```
### Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
```
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
# Visualizations will be shown in the notebook.
%matplotlib inline
#displays a random set of traffic signs from training data
fig, ax = plt.subplots(2, 5, figsize=(15,6))
fig.subplots_adjust(hspace = .5, wspace=.001)
ax = ax.ravel()
for i in range(10):
img_index = random.randint(0, len(X_train))
img = X_train[img_index]
ax[i].imshow(img)
ax[i].set_title(sign_name[Y_train[img_index]])
#Bar graph of label distribution in training set
labels, counts = np.unique(Y_train, return_counts= True)
plt.bar(labels, counts)
plt.show()
```
----
## Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
- Neural network architecture (is the network over or underfitting?)
- Play around preprocessing techniques (normalization, rgb to grayscale, etc)
- Number of examples per label (some have more than others).
- Generate fake data.
Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
### Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
```
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
def pre_process(image):
#Convert images to grayscale
gray = np.sum(image/3, axis = 3, keepdims=True)
#Normalize grayscaled images
normalized = (gray - 128) / 128
return normalized
```
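A quick sanity check of the mapping: `(pixel - 128)/128` sends 0 to -1 and 255 to about 0.99, so the grayscaled data ends up roughly in [-1, 1]. Re-stating `pre_process` here only so the check runs standalone, on random uint8 images:

```python
import numpy as np

def pre_process(image):
    # grayscale by averaging the 3 channels, keeping a trailing channel dim
    gray = np.sum(image / 3, axis=3, keepdims=True)
    # approximate zero-mean / unit-scale normalization
    return (gray - 128) / 128

batch = np.random.randint(0, 256, size=(10, 32, 32, 3), dtype=np.uint8)
out = pre_process(batch)
print(out.shape, out.min(), out.max())  # (10, 32, 32, 1), values in [-1, ~0.99]
```

Note that the output has a single channel, which is why the placeholder `x` below is declared with shape `(None, 32, 32, 1)`.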
### Model Architecture
```
import tensorflow as tf
from sklearn.utils import shuffle
epochs = 50
lr = 0.001
batch_size = 64
dropout = 0.5
X_train, Y_train = shuffle(X_train, Y_train)
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
#labels need to be one-hot encoded
one_hot_y = tf.one_hot(y, 43)
keep_prob = tf.placeholder(tf.float32)
### Define your architecture here.
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
#Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
#Layer 1: Activation.
conv1 = tf.nn.relu(conv1)
#Layer 1: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
#Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
#Layer 2: Activation.
conv2 = tf.nn.relu(conv2)
#Layer 2: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
#Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
#Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
#Layer 3: Activation.
fc1 = tf.nn.relu(fc1)
#Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
#Layer 4: Activation.
fc2 = tf.nn.relu(fc2)
#Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
### Define your architecture here.
from tensorflow.contrib.layers import flatten
def LeNet_dropout(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
#Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
#Layer 1: Activation.
conv1 = tf.nn.relu(conv1)
#Layer 1: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
#Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
#Layer 2: Activation.
conv2 = tf.nn.relu(conv2)
#Layer 2: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
#Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
fc0 = tf.nn.dropout(fc0, keep_prob)
#Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
#Layer 3: Activation.
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, keep_prob)
#Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
#Layer 4: Activation.
fc2 = tf.nn.relu(fc2)
fc2 = tf.nn.dropout(fc2, keep_prob)
#Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
```
### Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
```
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
```
## Training Pipeline
```
lr = 0.001
logits = LeNet_dropout(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss = tf.reduce_mean(cross_entropy)
#Intiliasing the optimizer i.e. setting the type and learning rate
optimizer = tf.train.AdamOptimizer(learning_rate=lr)
#Attaching the optimizer to the dataflow graph
training_operation = optimizer.minimize(loss)
```
## Evaluation/Validation Pipeline
```
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.to_float(correct_pred))
saver = tf.train.Saver()
def evaluate(X_data, Y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, batch_size):
batch_x, batch_y = X_data[offset:offset+batch_size], Y_data[offset:offset+batch_size]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
X_train = pre_process(X_train)
X_valid = pre_process(X_valid)
X_test = pre_process(X_test)
```
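The bookkeeping inside `evaluate` is a weighted average: each batch's accuracy is weighted by its size, which handles a ragged final batch correctly. The same logic in plain NumPy, independent of TensorFlow (the function name is illustrative):

```python
import numpy as np

def weighted_accuracy(correct, batch_size):
    """correct: boolean array of per-example correctness."""
    n = len(correct)
    total = 0.0
    for offset in range(0, n, batch_size):
        batch = correct[offset:offset + batch_size]
        total += batch.mean() * len(batch)  # weight by (possibly ragged) size
    return total / n

correct = np.array([True] * 7 + [False] * 3)
print(weighted_accuracy(correct, batch_size=4))  # equals the plain mean, 0.7
```

Averaging per-batch accuracies without these weights would over-count the last, smaller batch.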
## Training Loop
```
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print('Training..')
print()
for i in range(epochs):
X_train, Y_train = shuffle(X_train, Y_train)
for offset in range(0, num_examples, batch_size):
end = offset + batch_size
batch_x, batch_y = X_train[offset:end], Y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: dropout})
train_acc = evaluate(X_train, Y_train)
validation_acc = evaluate(X_valid, Y_valid)
print("Epoch {} ..".format(i+1))
print("Training accuracy = {:.3f}".format(train_acc))
print("Validation accuracy = {:.3f}".format(validation_acc))
print()
saver.save(sess, './model2/lenet')
print("Model saved")
```
## Evaluation loop
```
with tf.Session() as sess:
saver.restore(sess, "./model2/lenet")
test_accuracy = evaluate(X_test, Y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
```
---
## Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
### Load and Output the Images
```
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import numpy as np
import glob
import os
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img_folder = './traffic_signs/'
images = glob.glob(img_folder + '*.jpg')
print(images)
web_X = []
web_Y = []
for image in images:
img = plt.imread(image)
web_X.append(np.array(img))
    label = image.split('/')[2]
    web_Y.append(int(label.split('.')[0]))  # filename stem encodes the class id; cast to int for the int32 label placeholder
web_X = np.array(web_X)
print(type(web_X))
print(web_X.shape)
print(web_Y)
fig, ax = plt.subplots(2, 4, figsize=(15,6))
fig.subplots_adjust(hspace = .5, wspace=.001)
ax = ax.ravel()
for i in range(8):
img = web_X[i]
ax[i].imshow(img)
ax[i].set_title(sign_name[int(web_Y[i])])
#Normalize images
web_X_preproc = pre_process((web_X))
print(web_X.shape)
```
## Predict the Sign Type for Each Image
```
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
```
### Analyze Performance
```
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess:
saver.restore(sess, "./model2/lenet")
web_accuracy = evaluate(web_X_preproc, web_Y)
print("Web test-set accuracy = {:.3f}".format(web_accuracy))
```
### Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:
```
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:
```
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
```
Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
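The same top-k selection can be reproduced in NumPy with `argsort`, which is handy when you want to inspect predictions without opening a TensorFlow session:

```python
import numpy as np

# first two rows of the example array above
a = np.array([[0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202],
              [0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337]])

k = 3
# argsort ascending, take the last k columns, reverse to descending order
indices = np.argsort(a, axis=1)[:, -k:][:, ::-1]
values = np.take_along_axis(a, indices, axis=1)
print(indices[0])  # [3 0 5], matching tf.nn.top_k
print(values[0])   # [0.34763842 0.24879643 0.12789202]
```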
```
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
softmax_out = tf.nn.softmax(logits)
topK = tf.nn.top_k(softmax_out, k=5)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, './model2/lenet')
    # One run returns both the full softmax output and the top-5 values/indices
    new_softmax, new_topK = sess.run([softmax_out, topK],
                                     feed_dict={x: web_X_preproc, keep_prob: 1.0})

fig, ax = plt.subplots(8, 6, figsize=(15, 10))
fig.subplots_adjust(hspace=.4, wspace=.2)
ax = ax.ravel()
labels = ['Top', '2nd', '3rd', '4th', '5th']
for idx, image in enumerate(web_X):
    ax[6*idx].axis('off')
    ax[6*idx].imshow(image)
    ax[6*idx].set_title('Input: {}'.format(web_Y[idx]))
    for rank in range(5):
        guess = new_topK[1][idx][rank]              # predicted class id
        prob = new_topK[0][idx][rank]               # its softmax probability
        example = np.argwhere(Y_train == guess)[0]  # a training example of that class
        ax[6*idx + rank + 1].axis('off')
        ax[6*idx + rank + 1].imshow(X_train[example].squeeze())
        ax[6*idx + rank + 1].set_title('{} guess: {} ({:.0f}%)'.format(labels[rank], guess, 100*prob))
```
### Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the IPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
---
## Step 4 (Optional): Visualize the Neural Network's State with Test Images
This section is not required to complete, but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, either one used during training or a new one you provide, and the tensorflow variable name that represents the layer's state during the training process. For instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
```
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1, plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error that tf_activation is not defined, it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess, feed_dict={x: image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15, 15))
    for featuremap in range(featuremaps):
        plt.subplot(6, 8, featuremap+1)  # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap))  # displays the feature map number
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmin=activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min != -1:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0, :, :, featuremap], interpolation="nearest", cmap="gray")
```
# MNIST example with a small convolutional network
This example demonstrates the usage of `FastaiLRFinder` with a two-convolutional-layer network on the MNIST dataset.
```
%matplotlib inline
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
```
## Loading MNIST
```
mnist_pwd = "data"
batch_size = 256
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
trainset = MNIST(mnist_pwd, train=True, download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=4)
testset = MNIST(mnist_pwd, train=False, download=True, transform=transform)
testloader = DataLoader(testset, batch_size=batch_size * 2, shuffle=False, num_workers=0)
```
## Model
```
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
from ignite.engine import create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Loss, Accuracy
from ignite.contrib.handlers import FastaiLRFinder, ProgressBar
```
## Training loss (fastai)
This learning rate range test follows the same procedure used by fastai. The model is trained for `num_iter` iterations while the learning rate is increased from its initial value (as set in the optimizer) to `end_lr`. The increase can be linear (`step_mode="linear"`) or exponential (`step_mode="exp"`); linear works well for small ranges, while exponential is recommended for larger ranges.
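The two step modes amount to additive versus multiplicative learning-rate updates. A minimal sketch of the schedules themselves (an illustration, not Ignite's actual implementation):

```python
def lr_schedule(start_lr, end_lr, num_iter, step_mode="exp"):
    """Learning rates visited during a range test (iterations 0..num_iter)."""
    if step_mode == "linear":
        # equal additive steps from start_lr to end_lr
        return [start_lr + (end_lr - start_lr) * i / num_iter for i in range(num_iter + 1)]
    # "exp": equal multiplicative steps, so small learning rates are sampled densely
    ratio = (end_lr / start_lr) ** (1.0 / num_iter)
    return [start_lr * ratio ** i for i in range(num_iter + 1)]

lrs = lr_schedule(3e-4, 10.0, num_iter=100)
# starts at the optimizer's lr (3e-4) and ends, up to float error, at end_lr (10.0)
```

On a log-scale loss-vs-lr plot, the exponential schedule yields evenly spaced points, which is why it is preferred for wide ranges.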
```
device = "cuda" if torch.cuda.is_available() else "cpu"
criterion = nn.NLLLoss()
model = Net()
model.to(device) # Move model before creating optimizer
optimizer = optim.SGD(model.parameters(), lr=3e-4, momentum=0.9)
trainer = create_supervised_trainer(model, optimizer, criterion, device=device)
ProgressBar(persist=True).attach(trainer, output_transform=lambda x: {"batch loss": x})
lr_finder = FastaiLRFinder()
to_save={'model': model, 'optimizer': optimizer}
with lr_finder.attach(trainer, to_save, diverge_th=1.5) as trainer_with_lr_finder:
    trainer_with_lr_finder.run(trainloader)

trainer.run(trainloader, max_epochs=10)
evaluator = create_supervised_evaluator(model, metrics={"acc": Accuracy(), "loss": Loss(nn.NLLLoss())}, device=device)
evaluator.run(testloader)
print(evaluator.state.metrics)
lr_finder.plot()
lr_finder.lr_suggestion()
```
Let's now set the suggested learning rate on the optimizer and train the model with it.
*Note that the model and optimizer were restored to their initial states when `FastaiLRFinder` finished.*
```
optimizer.param_groups[0]['lr'] = lr_finder.lr_suggestion()
trainer.run(trainloader, max_epochs=10)
evaluator.run(testloader)
print(evaluator.state.metrics)
```
```
from esper.prelude import *
from esper.identity import *
from esper.topics import *
from esper.plot_util import *
from esper import embed_google_images
```
# Name
Please add the person's name and their expected gender below (Male/Female).
```
name = 'Tashfeen Malik'
gender = 'Female'
```
# Search
## Load Cached Results
Reads the cached identity model from local disk. Run this if the person has been labelled before and you only wish to regenerate the graphs. Otherwise, if you have never created a model for this person, see the next section.
```
assert name != ''
results = FaceIdentityModel.load(name=name)
imshow(tile_images([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(results)
```
## Build Model From Google Images
Run this section if you do not have a cached model and precision curve estimates. This section will grab images using Google Image Search and score each of the faces in the dataset. We will interactively build the precision vs score curve.
It is important that the images that you select are accurate. If you make a mistake, rerun the cell below.
```
assert name != ''
# Grab face images from Google
img_dir = embed_google_images.fetch_images(name)
# If the images returned are not satisfactory, rerun the above with extra params:
# query_extras='' # additional keywords to add to search
# force=True # ignore cached images
face_imgs = load_and_select_faces_from_images(img_dir)
face_embs = embed_google_images.embed_images(face_imgs)
assert(len(face_embs) == len(face_imgs))
reference_imgs = tile_imgs([cv2.resize(x[0], (200, 200)) for x in face_imgs if x], cols=10)
def show_reference_imgs():
    print('User selected reference images for {}.'.format(name))
    imshow(reference_imgs)
    plt.show()

show_reference_imgs()
# Score all of the faces in the dataset (this can take a minute)
face_ids_by_bucket, face_ids_to_score = face_search_by_embeddings(face_embs)
precision_model = PrecisionModel(face_ids_by_bucket)
```
Now we will validate which of the images in the dataset are of the target identity.
__Hover over with mouse and press S to select a face. Press F to expand the frame.__
```
show_reference_imgs()
print(('Mark all images that ARE NOT {}. Thumbnails are ordered by DESCENDING distance '
'to your selected images. (The first page is more likely to have non "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
lower_widget = precision_model.get_lower_widget()
lower_widget
show_reference_imgs()
print(('Mark all images that ARE {}. Thumbnails are ordered by ASCENDING distance '
'to your selected images. (The first page is more likely to have "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
upper_widget = precision_model.get_upper_widget()
upper_widget
```
Run the following cell after labelling to compute the precision curve. Do not forget to re-enable jupyter shortcuts.
```
# Compute the precision from the selections
lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected)
upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected)
precision_by_bucket = {**lower_precision, **upper_precision}
results = FaceIdentityModel(
name=name,
face_ids_by_bucket=face_ids_by_bucket,
face_ids_to_score=face_ids_to_score,
precision_by_bucket=precision_by_bucket,
model_params={
'images': list(zip(face_embs, face_imgs))
}
)
plot_precision_and_cdf(results)
```
The next cell persists the model locally.
```
results.save()
```
# Analysis
## Gender cross validation
Situations where the identity model disagrees with the gender classifier may be cause for alarm. As a sanity check, we verify that instances of the person have the expected gender. This section shows the breakdown of the identity instances and their labels from the gender classifier.
```
gender_breakdown = compute_gender_breakdown(results)
print('Expected counts by gender:')
for k, v in gender_breakdown.items():
    print('  {} : {}'.format(k, int(v)))
print()

print('Percentage by gender:')
denominator = sum(v for v in gender_breakdown.values())
for k, v in gender_breakdown.items():
    print('  {} : {:0.1f}%'.format(k, 100 * v / denominator))
print()
```
Situations where the identity detector returns high confidence, but where the gender is not the expected gender indicate either an error on the part of the identity detector or the gender detector. The following visualization shows randomly sampled images, where the identity detector returns high confidence, grouped by the gender label.
```
high_probability_threshold = 0.8
show_gender_examples(results, high_probability_threshold)
```
## Face Sizes
Faces shown on-screen vary in size. A person such as a host may be shown in a full body shot or as a face in a box. Faces in the background or those part of side graphics might be smaller than the rest. When calculating screen time for a person, we would like to know whether the results represent the time the person was featured, as opposed to merely appearing in the background or as a tiny thumbnail in some graphic.
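For intuition, the quantity being histogrammed is the fraction of the frame's area covered by each face's bounding box. A hypothetical sketch (`face_area_fractions` is illustrative only; the real computation lives inside `plot_histogram_of_face_sizes`):

```python
import numpy as np

def face_area_fractions(boxes, frame_w, frame_h):
    """boxes: array of (x1, y1, x2, y2) pixel coordinates; returns each face's
    bounding-box area as a fraction of the full frame area."""
    boxes = np.asarray(boxes, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return areas / (frame_w * frame_h)

fracs = face_area_fractions([(0, 0, 160, 120), (500, 300, 520, 330)], 640, 480)
# a prominent face covering ~6% of the frame vs. a tiny background face (~0.2%)
```

Faces covering only a tiny fraction of the frame are more likely to be background faces or graphics.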
The next cell plots the distribution of face sizes. Possible anomalies include only very small faces or only very large faces.
```
plot_histogram_of_face_sizes(results)
```
The histogram above shows the distribution of face sizes, but not how those sizes occur in the dataset. For instance, one might ask why some faces are so large or whether the small faces are actually errors. The following cell groups example faces, which are of the target identity with high probability, by their sizes in terms of screen area.
```
high_probability_threshold = 0.8
show_faces_by_size(results, high_probability_threshold, n=10)
```
## Screen Time Across All Shows
One question that we might ask about a person is whether they received a significantly different amount of screen time on different shows. The following section visualizes the amount of screen time by show, in total minutes and also as a proportion of the show's total time. For a celebrity or political figure such as Donald Trump, we would expect significant screen time on many shows. For a show host such as Wolf Blitzer, we expect the screen time to be high on the shows that he hosts.
```
screen_time_by_show = get_screen_time_by_show(results)
plot_screen_time_by_show(name, screen_time_by_show)
```
We might also wish to validate these findings by checking whether the person's name is mentioned in the subtitles. This can help determine whether unusually high or low screen time for a person is due to a show's aesthetic choices. The following plots compare the screen time with the number of caption mentions.
```
caption_mentions_by_show = get_caption_mentions_by_show([name.upper()])
plot_screen_time_and_other_by_show(name, screen_time_by_show, caption_mentions_by_show,
'Number of caption mentions', 'Count')
```
## Appearances on a Single Show
For people such as hosts, we would like to examine in greater detail the screen time allotted for a single show. First, fill in a show below.
```
show_name = 'FOX and Friends'
# Compute the screen time for each video of the show
screen_time_by_video_id = compute_screen_time_by_video(results, show_name)
```
One question we might ask about a host is how long they are shown on screen in an episode. Likewise, we might ask for how many episodes the host is not present, due to being on vacation or on assignment elsewhere. The following cell plots a histogram of the distribution of the length of the person's appearances in videos of the chosen show.
```
plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id)
```
For a host, we expect screen time over time to be consistent as long as the person remains a host. For figures such as Hillary Clinton, we expect the screen time to track events in the real world, such as the lead-up to the 2016 election, and then to drop afterwards. The following cell plots a time series of the person's screen time. Each dot is a video of the chosen show. Red Xs are videos for which the face detector did not run.
```
plot_screentime_over_time(name, show_name, screen_time_by_video_id)
```
We hypothesized that a host is more likely to appear at the beginning of a video and then also appear throughout the video. The following plot visualizes the distribution of shot beginning times for videos of the show.
```
plot_distribution_of_appearance_times_by_video(results, show_name)
```
In Section 3.3, we saw that some shows may have much larger variance in the screen time estimates than others. This may be because a host or frequent guest appears similar to the target identity. Alternatively, the images of the identity may be consistently low quality, leading to lower scores. The next cell plots a histogram of the probabilities for faces in a show.
```
plot_distribution_of_identity_probabilities(results, show_name)
```
## Other People Who Are On Screen
For some people, we are interested in who they are often shown on screen with. For instance, the White House press secretary might routinely be shown with the same group of political pundits. A host of a show might be expected to be on screen with their co-host most of the time. The next cell takes an identity model with high probability faces and displays clusters of faces that are on screen with the target person.
```
get_other_people_who_are_on_screen(results, k=25, precision_thresh=0.8)
```
# Persist to Cloud
The remaining code in this notebook uploads the built identity model to Google Cloud Storage and adds the FaceIdentity labels to the database.
## Save Model to Google Cloud Storage
```
gcs_model_path = results.save_to_gcs()
```
To ensure that the model stored to Google Cloud is valid, we load it and print the precision and cdf curve below.
```
gcs_results = FaceIdentityModel.load_from_gcs(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in gcs_results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(gcs_results)
```
## Save Labels to DB
If you are satisfied with the model, we can commit the labels to the database.
```
from django.core.exceptions import ObjectDoesNotExist
def standardize_name(name):
    return name.lower()

person_type = ThingType.objects.get(name='person')

try:
    person = Thing.objects.get(name=standardize_name(name), type=person_type)
    print('Found person:', person.name)
except ObjectDoesNotExist:
    person = Thing(name=standardize_name(name), type=person_type)
    print('Creating person:', person.name)

labeler = Labeler(name='face-identity:{}'.format(person.name), data_path=gcs_model_path)
```
### Commit the person and labeler
The labeler and person have been created but not yet saved to the database. If a person was created, please make sure that the name is correct before saving.
```
person.save()
labeler.save()
```
### Commit the FaceIdentity labels
Now, we are ready to add the labels to the database. We will create a FaceIdentity for each face whose probability exceeds the minimum threshold.
```
commit_face_identities_to_db(results, person, labeler, min_threshold=0.001)
print('Committed {} labels to the db'.format(FaceIdentity.objects.filter(labeler=labeler).count()))
```
```
import sys
import numpy as np
import xarray as xr
import pyproj
import cmocean as cmo
import warnings
import matplotlib as mpl
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
sys.path.append("..")
warnings.filterwarnings("ignore")
np.seterr(all='ignore')
%matplotlib inline
%config InlineBackend.print_figure_kwargs={'bbox_inches':None}
%load_ext autoreload
%autoreload 2
from PICO import PicoModel
from Plume import PlumeModel
from PICOP import PicoPlumeModel
from Simple import SimpleModels
from forcing import Forcing
from plotfunctions import *
from real_geometry import RealGeometry
mpl.rcParams['figure.subplot.bottom'] = .15
mpl.rcParams['figure.subplot.top'] = .99
mpl.rcParams['figure.subplot.left'] = .01
mpl.rcParams['figure.subplot.right'] = .99
mpl.rcParams['figure.subplot.wspace'] = .02
mpl.rcParams['figure.subplot.hspace'] = .01
"""Boundaries MITgcm"""
y1,y2 = {},{}
x1,x2 = {},{}
x1['CrossDots'] = 245
x2['CrossDots'] = 251
y1['CrossDots'] = -75.45
y2['CrossDots'] = -74.05
x1['Thwaites_e'] = 251
x2['Thwaites_e'] = 257
y1['Thwaites_e'] = -75.45
y2['Thwaites_e'] = -74.05
x1['PineIsland'] = 257
x2['PineIsland'] = 262
y1['PineIsland'] = -75.45
y2['PineIsland'] = -74.05
x1['Getz'] = 225
x2['Getz'] = 245.3
y1['Getz'] = -75.45
y2['Getz'] = -73.2
"""Locations for text"""
plon,plat = {},{}
plon['CrossDots'] = 250.5
plat['CrossDots'] = -74.4
plon['Thwaites_e'] = 253.5
plat['Thwaites_e'] = -74.6
plon['PineIsland'] = 257
plat['PineIsland'] = -74.8
plon['Getz'] = 231
plat['Getz'] = -74
spinup = {}
spinup['PineIsland'] = 180
spinup['Thwaites_e'] = 60
spinup['CrossDots'] = 180
spinup['Getz'] = 60
date = '2021_02_25/'
panelheight = .2
step = .125
figheight = 11
proj = ccrs.SouthPolarStereo(true_scale_latitude=-75,central_longitude=245-360)
def addaxis(n, title):
    pos = 1-panelheight-n*step
    ax = fig.add_axes([.01, pos, .98, panelheight], projection=proj)
    ds = xr.open_dataset('../../data/BedMachine/BedMachineAntarctica_2020-07-15_v02.nc')
    ds = ds.isel(x=slice(3000,4500), y=slice(7700,9500))
    ds = add_lonlat(ds)
    mask = xr.where(ds.mask==0, 4, ds.mask)
    ax.contour(ds.lon, ds.lat, mask, [2.5], colors='k', linewidths=.2, transform=ccrs.PlateCarree())
    ds = xr.open_dataset('../../data/BedMachine/BedMachineAntarctica_2020-07-15_v02.nc')
    ds = ds.isel(x=slice(3300,4000), y=slice(6000,7700))
    ds = add_lonlat(ds)
    mask = xr.where(ds.mask==0, 4, ds.mask)
    ax.contour(ds.lon, ds.lat, mask, [2.5], colors='k', linewidths=.2, transform=ccrs.PlateCarree())
    ax.set_extent([227,259,-75.2,-73.1], crs=ccrs.PlateCarree())
    ax.set_title(title, y=.5, loc='left')
    ax.axis('off')
    return ax

def printvals(ax, geom, melt):
    ax.text(plon[geom], plat[geom], f'{np.nanmean(melt):.1f}', transform=ccrs.PlateCarree(), c='k', ha='center')

%%time
for quant in ['average', 'sensitivity']:
    if quant=='average':
        timep = slice("1989-1-1","2018-12-31")
    elif quant=='sensitivity':
        timep_w = slice("2006-1-1","2012-1-1")
        timep_c = slice("2013-1-1","2017-1-1")
    fig = plt.figure(figsize=(8,figheight))

    """Geometry"""
    ax = addaxis(0, 'a) Geometry')
    ds = xr.open_dataset('../../data/BedMachine/BedMachineAntarctica_2020-07-15_v02.nc')
    ds = ds.isel(x=slice(3000,4500), y=slice(7700,9500))
    ds = add_lonlat(ds)
    draft = xr.where(ds.mask==3, (ds.surface-ds.thickness).astype('float64'), np.nan)
    draft = xr.where((ds.lon>-105) & (ds.lat>-73.7), np.nan, draft)  # blank out the corner region
    IM = plt.pcolormesh(ds.lon, ds.lat, draft, vmin=-1000, vmax=0, cmap=plt.get_cmap('cmo.rain_r'), transform=ccrs.PlateCarree(), shading='auto')
    ds = xr.open_dataset('../../data/BedMachine/BedMachineAntarctica_2020-07-15_v02.nc')
    ds = ds.isel(x=slice(3300,4000), y=slice(6000,7700))
    ds = add_lonlat(ds)
    draft = xr.where(ds.mask==3, (ds.surface-ds.thickness).astype('float64'), np.nan)
    draft = xr.where((ds.lon>-105) & (ds.lat>-73.7), np.nan, draft)  # blank out the corner region
    IM = plt.pcolormesh(ds.lon, ds.lat, draft, vmin=-1000, vmax=0, cmap=plt.get_cmap('cmo.rain_r'), transform=ccrs.PlateCarree(), shading='auto')

    '''Colorbar'''
    cax = fig.add_axes([.1,.05,.35,.015])
    cbar = plt.colorbar(IM, cax=cax, extend='min', orientation='horizontal')
    cbar.set_label('Draft depth [m]', labelpad=0)

    for geom in ['PineIsland','Thwaites_e','CrossDots','Getz']:
        if geom in ['PineIsland','Thwaites']:
            Tdeep, ztclw, ztclc = 1.2, -400, -600
        elif geom in ['CrossDots','Getz']:
            Tdeep, ztclw, ztclc = 0.7, -400, -700
        dp = RealGeometry(geom, n=5).create()
        dp = add_lonlat(dp)
        dw = Forcing(dp).tanh2(ztcl=ztclw, Tdeep=Tdeep)
        dc = Forcing(dp).tanh2(ztcl=ztclc, Tdeep=Tdeep)
        for j, model in enumerate(['Simple','Plume','PICO','PICOP','Layer','MITgcm']):
            title = [r'b) M$_+$','c) Plume','d) PICO','e) PICOP','f) Layer','g) MITgcm'][j]
            ax = addaxis(j+1, title)
            if model=='Simple':
                ds = SimpleModels(dw).compute()
                melt_w = ds['Mp']
                melt_c = SimpleModels(dc).compute()['Mp']
            elif model=='Plume':
                ds = PlumeModel(dw).compute_plume()
                melt_w = ds['m']
                melt_c = PlumeModel(dc).compute_plume()['m']
            elif model=='PICO':
                ds = PicoModel(ds=dw).compute_pico()[1]
                melt_w = ds['melt']
                melt_c = PicoModel(ds=dc).compute_pico()[1]['melt']
            elif model=='PICOP':
                ds = PicoPlumeModel(ds=dw).compute_picop()[2]
                melt_w = ds['m']
                melt_c = PicoPlumeModel(ds=dc).compute_picop()[2]['m']
            elif model=='Layer':
                ds = xr.open_dataset(f'../../results/Layer/{date}{geom}_tanh2_Tdeep{Tdeep}_ztcl{ztclw}_{spinup[geom]:.3f}.nc')
                ds = add_lonlat(ds)
                melt_w = np.where(ds.mask==3, ds.melt, np.nan)
                ds = xr.open_dataset(f'../../results/Layer/{date}{geom}_tanh2_Tdeep{Tdeep}_ztcl{ztclc}_{spinup[geom]:.3f}.nc')
                ds = add_lonlat(ds)
                melt_c = np.where(ds.mask==3, ds.melt, np.nan)
            elif model=='MITgcm':
                if quant=='average':
                    ds = xr.open_dataset('../../data/paulholland/melt.nc')
                    ds = ds.sel(LONGITUDE=slice(x1[geom],x2[geom]), LATITUDE=slice(y1[geom],y2[geom]), TIME=timep)
                    ds = ds.mean(dim='TIME')
                    melt = xr.where(ds.melt==0, np.nan, ds.melt)
                    melt_w, melt_c = melt, melt  # to make it compatible with the other output
                elif quant=='sensitivity':
                    ds = xr.open_dataset('../../data/paulholland/melt.nc')
                    ds = ds.sel(LONGITUDE=slice(x1[geom],x2[geom]), LATITUDE=slice(y1[geom],y2[geom]), TIME=timep_w)
                    ds = ds.mean(dim='TIME')
                    melt_w = xr.where(ds.melt==0, np.nan, ds.melt)
                    ds = xr.open_dataset('../../data/paulholland/melt.nc')
                    ds = ds.sel(LONGITUDE=slice(x1[geom],x2[geom]), LATITUDE=slice(y1[geom],y2[geom]), TIME=timep_c)
                    ds = ds.mean(dim='TIME')
                    melt_c = xr.where(ds.melt==0, np.nan, ds.melt)
                ds['lon'] = ds.LONGITUDE
                ds['lat'] = ds.LATITUDE-.05
            if quant=='average':
                melt = (melt_w+melt_c)/2
            elif quant=='sensitivity':
                melt = melt_w-melt_c
            printvals(ax, geom, melt)
            melt = np.where(melt<1, 1, melt)
            if quant=='average':
                IM = plotmelt(ax, ds.lon, ds.lat, melt)
            elif quant=='sensitivity':
                IM = plotdiffmelt(ax, ds.lon, ds.lat, melt)

    '''Colorbar'''
    cax = fig.add_axes([.5,.05,.35,.015])
    cbar = plt.colorbar(IM, cax=cax, extend='both', orientation='horizontal')
    if quant=='average':
        cbar.set_ticks([1,10,100])
        cbar.set_ticklabels([1,10,100])
        cbar.set_label('Melt rate [m/yr]', labelpad=0)
    elif quant=='sensitivity':
        cbar.set_label('Melt rate difference [m/yr]', labelpad=0)

    """Save figure"""
    plt.savefig(f"../../figures/Amundsen_{quant}.png", dpi=300)
```
<img src="../../../images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
## _*Topological Quantum Walks on IBM Q*_
This notebook is based on the paper of Radhakrishnan Balu, Daniel Castillo, and George Siopsis, "Physical realization of topological quantum walks on IBM-Q and beyond" arXiv:1710.03615 \[quant-ph\](2017).
***
### Contributors
Keita Takeuchi (Univ. of Tokyo) and Rudy Raymond (IBM Research - Tokyo)
***
### Qiskit Package Versions
```
import qiskit
qiskit.__qiskit_version__
```
## Introduction: challenges in implementing topological walk
In this section, we introduce a model of quantum walk called the *split-step topological quantum walk*.
We define the Hilbert spaces of the quantum walker states and of the coin states as
$\mathcal{H}_{\mathcal{w}}=\{\vert x \rangle, x\in\mathbb{Z}_N\}, \mathcal{H}_{\mathcal{c}}=\{\vert 0 \rangle, \vert 1 \rangle\}$, respectively. Then, the step operators are defined as
$$
S^+ := \vert 0 \rangle_c \langle 0 \vert \otimes L^+ + \vert 1 \rangle_c \langle 1 \vert \otimes \mathbb{I}\\
S^- := \vert 0 \rangle_c \langle 0 \vert \otimes \mathbb{I} + \vert 1 \rangle_c \langle 1 \vert \otimes L^-,
$$
where
$$
L^{\pm}\vert x \rangle_{\mathcal w} := \vert (x\pm1)\ \rm{mod}\ N \rangle_{\mathcal w}
$$
is the shift operator; note the periodic boundary condition.
Also, we define the coin operator as
$$
T(\theta):=e^{-i\theta Y} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.
$$
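As a quick sanity check (an addition, not part of the original notebook), we can verify numerically that the matrix exponential $e^{-i\theta Y}$ equals the rotation matrix above:

```python
import numpy as np
from scipy.linalg import expm

Y = np.array([[0, -1j], [1j, 0]])   # Pauli-Y matrix
theta = 0.7                          # an arbitrary test angle

# e^{-i theta Y} via the matrix exponential
T_exp = expm(-1j * theta * Y)

# the rotation matrix given in the text
T_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

print(np.allclose(T_exp, T_rot))  # True
```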
One step of the quantum walk is the unitary operator defined below, which uses two coin angles, $\theta_1$ and $\theta_2$:
$$
W := S^- T(\theta_2)S^+ T(\theta_1).
$$
Intuitively speaking, the walk consists of flipping the coin states and, based on the outcome of the coins, applying the shift operators to determine the next position of the walker.
Next, we consider a walk with two phases that depend on the current position:
$$
(\theta_1,\theta_2) = \begin{cases}
(\theta_{1}^{-},\ \theta_{2}^{-}) & 0 \leq x < \frac{N}{2} \\
(\theta_{1}^{+},\ \theta_{2}^{+}) & \frac{N}{2} \leq x < N.
\end{cases}
$$
Then, two coin operators are rewritten as
$$
\mathcal T_i = \sum^{N-1}_{x=0}e^{-i\theta_i(x) Y_c}\otimes \vert x \rangle_w \langle x \vert,\ i=1,2.
$$
By using this, one step of quantum walk is equal to
$$
W = S^- \mathcal T_2 S^+ \mathcal T_1.
$$
In principle, we can execute the quantum walk by multiplying by $W$ many times, but that requires many circuit elements. This is not feasible on current noisy quantum computers because of the errors accumulated with each additional circuit element (gate).
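To make the operators above concrete, here is a small NumPy sketch (an addition, not from the original notebook) that builds one walk step $W$ on an $N=4$ lattice with a single, position-independent coin angle per coin operator, and checks that it is unitary:

```python
import numpy as np

N = 4
I_N = np.eye(N)
L_plus = np.roll(np.eye(N), 1, axis=0)   # L+|x> = |x+1 mod N>
L_minus = L_plus.T                       # L-|x> = |x-1 mod N>
P0 = np.array([[1, 0], [0, 0]])          # |0><0| on the coin
P1 = np.array([[0, 0], [0, 1]])          # |1><1| on the coin

def T(theta):
    """Coin rotation e^{-i theta Y} acting on the coin, identity on position."""
    c, s = np.cos(theta), np.sin(theta)
    return np.kron(np.array([[c, -s], [s, c]]), I_N)

# step operators on the coin (x) position space
S_plus = np.kron(P0, L_plus) + np.kron(P1, I_N)
S_minus = np.kron(P0, I_N) + np.kron(P1, L_minus)

theta1, theta2 = 0.0, np.pi / 2
W = S_minus @ T(theta2) @ S_plus @ T(theta1)   # one walk step

print(np.allclose(W @ W.conj().T, np.eye(2 * N)))  # True: W is unitary
```

For the two-phase walks studied below, the coin angle would additionally depend on the position $x$, as in the operator $\mathcal{T}_i$ defined next.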
## Hamiltonian of topological walk
Alternatively, we can think of the continuous-time evolution of the states. The Hamiltonian $H$ is defined through $e^{-iHt}=\lim_{s \to \infty}W^s$ (see below for further details).
For example, when $(\theta_1,\ \theta_2) = (0,\ \pi/2)$, the Schrödinger equation is
$$
i\frac{d}{dt}\vert \Psi \rangle = H_{\rm I} \vert \Psi \rangle,\ H_{\rm I} = -Y\otimes [2\mathbb I+L^+ + L^-].
$$
If the Hamiltonian is time independent, the solution of the Schrödinger equation is
$$
\vert \Psi(t) \rangle = e^{-iHt} \vert \Psi(0) \rangle,
$$
so we can get the final state at an arbitrary time $t$ at once, without applying $W$ step by step, provided we know the corresponding Hamiltonian.
The Hamiltonian can be computed as below.
Set $(\theta_1,\ \theta_2) = (\epsilon,\ \pi/2+\epsilon)$, and take $\epsilon\to 0$ and the number of steps $s\to \infty$
while keeping $s\epsilon=t/2$ finite. Then,
\begin{align*}
e^{-iH_{\rm I}t}&=\lim_{\substack{s\to \infty\\ \epsilon\to 0}}W^s\\
\rm{(LHS)} &= \mathbb{I}-iH_{\rm I}t+O(t^2)\\
\rm{(RHS)} &= \lim_{\substack{s\to \infty\\ \epsilon\to 0}}(W^4)^{s/4}=
\lim_{\substack{s\to \infty\\ \epsilon\to0}}(\mathbb{I}+O(\epsilon))^{s/4}\\
&\simeq \lim_{\substack{s\to \infty\\ \epsilon\to 0}}\mathbb{I}+\frac{s}{4}O(\epsilon)\\
&= \lim_{\epsilon\to 0}\mathbb{I}+iY\otimes [2\mathbb I+L^+ + L^-]t+O(\epsilon).
\end{align*}
Therefore,
$$H_{\rm I} = -Y\otimes [2\mathbb I+L^+ + L^-].$$
## Computation model
In order to check the correctness of the quantum-walk implementation on IBM Q, we investigate two models with different coin-phase configurations. Let the number of positions on the line be $N=4$.
- $\rm I / \rm II:\ (\theta_1,\theta_2) = \begin{cases}
(0,\ -\pi/2) & 0 \leq x < 2 \\
(0,\ \pi/2) & 2 \leq x < 4
\end{cases}$
- $\rm I:\ (\theta_1,\theta_2)=(0,\ \pi/2),\ 0 \leq x < 4$
That is, the former is a quantum walk on a line with two phases of coins, while the latter is that with only one phase of coins.
<img src="../images/q_walk_lattice_2phase.png" width="30%" height="30%">
<div style="text-align: center;">
Figure 1. Quantum Walk on a line with two phases
</div>
The Hamiltonian operators for each of the walk on the line are, respectively,
$$
H_{\rm I/II} = Y \otimes \mathbb I \otimes \frac{\mathbb I + Z}{2}\\
H_{\rm I} = Y\otimes (2\mathbb I\otimes \mathbb I + \mathbb I\otimes X + X \otimes X).
$$
Then, we want to implement the above Hamiltonian operators as products of two-qubit gates (CNOTs and CZs) and single-qubit rotation gates. Notice that the CNOT and CZ gates are
\begin{align*}
\rm{CNOT_{ct}}&=\left |0\right\rangle_c\left\langle0\right | \otimes I_t + \left |1\right\rangle_c\left\langle1\right | \otimes X_t\\
\rm{CZ_{ct}}&=\left |0\right\rangle_c\left\langle0\right | \otimes I_t + \left |1\right\rangle_c\left\langle1\right | \otimes Z_t.
\end{align*}
Below is the reference of converting Hamiltonian into unitary operators useful for the topological quantum walk.
<br><br>
<div style="text-align: center;">
Table 1. Relation between the unitary operator and product of elementary gates
</div>
|unitary operator|product of circuit elements|
|:-:|:-:|
|$e^{-i\theta X_c X_j}$|$\rm{CNOT_{cj}}\cdot e^{-i\theta X_c}\cdot \rm{CNOT_{cj}}$|
|$e^{-i\theta X_c Z_j}$|$\rm{CZ_{cj}}\cdot e^{-i\theta X_c}\cdot \rm{CZ_{cj}}$|
|$e^{-i\theta Y_c X_j}$|$\rm{CNOT_{cj}}\cdot e^{-i\theta Y_c}\cdot \rm{CNOT_{cj}}$|
|$e^{-i\theta Y_c Z_j}$|$\rm{CNOT_{jc}}\cdot e^{-i\theta Y_c}\cdot \rm{CNOT_{jc}}$|
|$e^{-i\theta Z_c X_j}$|$\rm{CZ_{cj}}\cdot e^{-i\theta X_j}\cdot \rm{CZ_{cj}}$|
|$e^{-i\theta Z_c Z_j}$|$\rm{CNOT_{jc}}\cdot e^{-i\theta Z_c}\cdot \rm{CNOT_{jc}}$|
By using these formulas, the unitary operators can be represented with only CNOT, CZ, and rotation gates, so we can implement them on IBM Q, as below.
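For instance, the first row of Table 1 can be checked numerically (a sketch added here, not part of the original notebook): conjugating $e^{-i\theta X_c}$ by $\rm{CNOT_{cj}}$ yields $e^{-i\theta X_c X_j}$.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
# CNOT with control = first qubit (c), target = second qubit (j)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

theta = 0.3
lhs = expm(-1j * theta * np.kron(X, X))                 # e^{-i theta Xc Xj}
rhs = CNOT @ expm(-1j * theta * np.kron(X, I2)) @ CNOT  # CNOT . e^{-i theta Xc} . CNOT

print(np.allclose(lhs, rhs))  # True
```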
### Phase I/II:<br><br>
\begin{align*}
e^{-iH_{I/II}t}=~&e^{-itY_c \otimes \mathbb I_0 \otimes \frac{\mathbb I_1 + Z_1}{2}}\\
=~& e^{-iY_c t}e^{-itY_c\otimes Z_1}\\
=~& e^{-iY_c t}\cdot\rm{CNOT_{1c}}\cdot e^{-i Y_c t}\cdot\rm{CNOT_{1c}}
\end{align*}
<img src="../images/c12.png" width="50%" height="60%">
<div style="text-align: center;">
Figure 2. Phase I/II on $N=4$ lattice$(t=8)$ - $q[0]:2^0,\ q[1]:coin,\ q[2]:2^1$
</div>
<br><br>
### Phase I:<br><br>
\begin{align*}
e^{-iH_I t}=~&e^{-itY_c\otimes (2\mathbb I_0\otimes \mathbb I_1 + \mathbb I_0\otimes X_1 + X_0 \otimes X_1)}\\
=~&e^{-2itY_c}e^{-itY_c\otimes X_1}e^{-itY_c\otimes X_0 \otimes X_1}\\
=~&e^{-2iY_c t}\cdot\rm{CNOT_{c1}}\cdot\rm{CNOT_{c0}}\cdot e^{-iY_c t}\cdot\rm{CNOT_{c0}}\cdot e^{-iY_c t}\cdot\rm{CNOT_{c1}}
\end{align*}
<img src="../images/c1.png" width="70%" height="70%">
<div style="text-align: center;">
Figure 3. Phase I on $N=4$ lattice$(t=8)$ - $q[0]:2^0,\ q[1]:2^1,\ q[2]:coin$
</div>
## Implementation
```
#initialization
import sys
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# importing QISKit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, IBMQ, execute
from qiskit.tools.monitor import job_monitor
from qiskit.providers.ibmq import least_busy
from qiskit.tools.visualization import plot_histogram
IBMQ.load_accounts()
sim_backend = Aer.get_backend('qasm_simulator')
device_backend = least_busy(IBMQ.backends(operational=True, simulator=False))
```
**Quantum walk, phase I/II on $N=4$ lattice$(t=8)$**
```
t=8 #time
q1_2 = QuantumRegister(3)
c1_2 = ClassicalRegister(3)
qw1_2 = QuantumCircuit(q1_2, c1_2)
qw1_2.x(q1_2[2])
qw1_2.u3(t, 0, 0, q1_2[1])
qw1_2.cx(q1_2[2], q1_2[1])
qw1_2.u3(t, 0, 0, q1_2[1])
qw1_2.cx(q1_2[2], q1_2[1])
qw1_2.measure(q1_2[0], c1_2[0])
qw1_2.measure(q1_2[1], c1_2[2])
qw1_2.measure(q1_2[2], c1_2[1])
print(qw1_2.qasm())
qw1_2.draw(output='mpl')
```
Below is the result when executing the circuit on the simulator.
```
job = execute(qw1_2, sim_backend, shots=1000)
result = job.result()
plot_histogram(result.get_counts())
```
And below is the result when executing the circuit on the real device.
```
job = execute(qw1_2, backend=device_backend, shots=100)
job_monitor(job)
result = job.result()
plot_histogram(result.get_counts())
```
**Conclusion**: When the quantum walk on the line has two phases, the walker stays bound at its initial position, which lies on the boundary between the two phases.
**Quantum walk, phase I on $N=4$ lattice$(t=8)$**
```
t=8 #time
q1 = QuantumRegister(3)
c1 = ClassicalRegister(3)
qw1 = QuantumCircuit(q1, c1)
qw1.x(q1[1])
qw1.cx(q1[2], q1[1])
qw1.u3(t, 0, 0, q1[2])
qw1.cx(q1[2], q1[0])
qw1.u3(t, 0, 0, q1[2])
qw1.cx(q1[2], q1[0])
qw1.cx(q1[2], q1[1])
qw1.u3(2*t, 0, 0, q1[2])
qw1.measure(q1[0], c1[0])
qw1.measure(q1[1], c1[1])
qw1.measure(q1[2], c1[2])
print(qw1.qasm())
qw1.draw(output='mpl')
```
Below is the result when executing the circuit on the simulator.
```
job = execute(qw1, sim_backend, shots=1000)
result = job.result()
plot_histogram(result.get_counts())
```
And below is the result when executing the circuit on the real device.
```
job = execute(qw1, backend=device_backend, shots=100)
job_monitor(job)
result = job.result()
plot_histogram(result.get_counts())
```
**Conclusion**: The walker is unbounded when the quantum walk on the line has one phase.
We can see that the results from the simulator match those from the real device. This hints that IBM Q systems can be used to experiment with topological quantum walks.
**Appendix D – Autodiff**
_This notebook explains how various automatic differentiation (autodiff) techniques work, using simple examples._
# Setup
First, let's make sure this notebook works well in both Python 2 and Python 3:
```
# To support both Python 2 and Python 3
from __future__ import absolute_import, division, print_function, unicode_literals
```
# Introduction
Suppose we want to compute the gradients of the function $f(x,y)=x^2y + y + 2$ with regard to the parameters x and y:
```
def f(x,y):
return x*x*y + y + 2
```
One approach is to solve this analytically:
$\dfrac{\partial f}{\partial x} = 2xy$
$\dfrac{\partial f}{\partial y} = x^2 + 1$
```
def df(x,y):
return 2*x*y, x*x + 1
```
For example, $\dfrac{\partial f}{\partial x}(3,4) = 24$ and $\dfrac{\partial f}{\partial y}(3,4) = 10$.
```
df(3, 4)
```
Perfect! We can also find the equations for the second-order partial derivatives (also called the Hessian):
$\dfrac{\partial^2 f}{\partial x \partial x} = \dfrac{\partial (2xy)}{\partial x} = 2y$
$\dfrac{\partial^2 f}{\partial x \partial y} = \dfrac{\partial (2xy)}{\partial y} = 2x$
$\dfrac{\partial^2 f}{\partial y \partial x} = \dfrac{\partial (x^2 + 1)}{\partial x} = 2x$
$\dfrac{\partial^2 f}{\partial y \partial y} = \dfrac{\partial (x^2 + 1)}{\partial y} = 0$
At x=3 and y=4, these Hessian values are respectively 8, 6, 6, 0. Let's use the equations above to compute them:
```
def d2f(x, y):
return [2*y, 2*x], [2*x, 0]
d2f(3, 4)
```
Perfect, but this requires some mathematical work. It is not too hard in this case, but for a deep neural network it is practically impossible to compute the derivatives this way. Let's look at various ways to automate this!
# Numerical differentiation
Here, we compute an approximation of the gradients using the equation: $\dfrac{\partial f}{\partial x} = \displaystyle{\lim_{\epsilon \to 0}}\dfrac{f(x+\epsilon, y) - f(x, y)}{\epsilon}$ (and similarly for $\dfrac{\partial f}{\partial y}$).
```
def gradients(func, vars_list, eps=0.0001):
partial_derivatives = []
base_func_eval = func(*vars_list)
for idx in range(len(vars_list)):
tweaked_vars = vars_list[:]
tweaked_vars[idx] += eps
tweaked_func_eval = func(*tweaked_vars)
derivative = (tweaked_func_eval - base_func_eval) / eps
partial_derivatives.append(derivative)
return partial_derivatives
def df(x, y):
return gradients(f, [x, y])
df(3, 4)
```
It works well!
One nice thing about this approach is that it makes the Hessian easy to compute. First let's create functions that compute the first-order partial derivatives (also called the Jacobian):
```
def dfdx(x, y):
return gradients(f, [x,y])[0]
def dfdy(x, y):
return gradients(f, [x,y])[1]
dfdx(3., 4.), dfdy(3., 4.)
```
Now we can simply apply the `gradients()` function to these functions:
```
def d2f(x, y):
return [gradients(dfdx, [3., 4.]), gradients(dfdy, [3., 4.])]
d2f(3, 4)
```
Everything works well, but the result is approximate. Moreover, computing the gradients of a function with regard to $n$ variables requires calling that function $n$ times. In deep neural nets, there are often thousands of parameters to tweak using gradient descent (which requires computing the gradients of the loss function with regard to each of these parameters), so this approach would be much too slow.
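As an aside (not in the original notebook), the accuracy of the approximation can be improved by using a central difference, $\big(f(x+\epsilon,y)-f(x-\epsilon,y)\big)/2\epsilon$, whose error shrinks like $\epsilon^2$ instead of $\epsilon$:

```python
def f(x, y):
    return x*x*y + y + 2

def forward_diff(func, x, y, eps=1e-4):
    # one-sided difference, O(eps) error
    return (func(x + eps, y) - func(x, y)) / eps

def central_diff(func, x, y, eps=1e-4):
    # central difference, O(eps^2) error
    return (func(x + eps, y) - func(x - eps, y)) / (2 * eps)

exact = 2 * 3 * 4  # analytic df/dx at (3, 4) = 24
print(abs(forward_diff(f, 3, 4) - exact))  # roughly 4e-4
print(abs(central_diff(f, 3, 4) - exact))  # far smaller (f is quadratic in x)
```

The cost is one extra function evaluation per variable, so the overall complexity is unchanged.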
## Implementing a simple computation graph
Instead of this numerical approach, let's implement some symbolic differentiation techniques. For this, we will define classes to represent constants, variables and operations.
```
class Const(object):
def __init__(self, value):
self.value = value
def evaluate(self):
return self.value
def __str__(self):
return str(self.value)
class Var(object):
def __init__(self, name, init_value=0):
self.value = init_value
self.name = name
def evaluate(self):
return self.value
def __str__(self):
return self.name
class BinaryOperator(object):
def __init__(self, a, b):
self.a = a
self.b = b
class Add(BinaryOperator):
def evaluate(self):
return self.a.evaluate() + self.b.evaluate()
def __str__(self):
return "{} + {}".format(self.a, self.b)
class Mul(BinaryOperator):
def evaluate(self):
return self.a.evaluate() * self.b.evaluate()
def __str__(self):
return "({}) * ({})".format(self.a, self.b)
```
Good, now we can build a computation graph to represent the function $f$:
```
x = Var("x")
y = Var("y")
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
```
And we can run this graph to compute $f$ at any point. For example, $f(3, 4)$:
```
x.value = 3
y.value = 4
f.evaluate()
```
Perfect, we found the right answer.
## Computing gradients
The autodiff methods presented below are all based on the *chain rule*.
Suppose we have two functions $u$ and $v$, and we apply them sequentially to some input $x$ to get the result $z$. So we have $z = v(u(x))$, which we can rewrite as $z = v(s)$ and $s = u(x)$. Applying the chain rule gives us the partial derivative of the output $z$ with regard to the input $x$:
$ \dfrac{\partial z}{\partial x} = \dfrac{\partial s}{\partial x} \cdot \dfrac{\partial z}{\partial s}$
Now if $z$ is the output of a sequence of functions with intermediate outputs $s_1, s_2, ..., s_n$, the chain rule still applies:
$ \dfrac{\partial z}{\partial x} = \dfrac{\partial s_1}{\partial x} \cdot \dfrac{\partial s_2}{\partial s_1} \cdot \dfrac{\partial s_3}{\partial s_2} \cdot \dots \cdot \dfrac{\partial s_{n-1}}{\partial s_{n-2}} \cdot \dfrac{\partial s_n}{\partial s_{n-1}} \cdot \dfrac{\partial z}{\partial s_n}$
In forward-mode autodiff, the algorithm computes these terms "forward" (i.e., in the same order as the computations needed to compute the output $z$), that is, left to right: first $\dfrac{\partial s_1}{\partial x}$, then $\dfrac{\partial s_2}{\partial s_1}$, and so on. In reverse-mode autodiff, the algorithm computes these terms "backwards", from right to left: first $\dfrac{\partial z}{\partial s_n}$, then $\dfrac{\partial s_n}{\partial s_{n-1}}$, and so on.
For example, suppose you want to compute the derivative of the function $z(x)=\sin(x^2)$ at x=3, using forward-mode autodiff. The algorithm would first compute the partial derivative $\dfrac{\partial s_1}{\partial x}=\dfrac{\partial x^2}{\partial x}=2x=6$. Next, it would compute $\dfrac{\partial z}{\partial x}=\dfrac{\partial s_1}{\partial x}\cdot\dfrac{\partial z}{\partial s_1}= 6 \cdot \dfrac{\partial \sin(s_1)}{\partial s_1}=6 \cdot \cos(s_1)=6 \cdot \cos(3^2)\approx-5.46$.
Let's verify this result using the `gradients()` function defined earlier:
```
from math import sin
def z(x):
return sin(x**2)
gradients(z, [3])
```
Looks good. Now let's do the same thing using reverse-mode autodiff. This time the algorithm starts from the right-hand side, so it computes $\dfrac{\partial z}{\partial s_1} = \dfrac{\partial \sin(s_1)}{\partial s_1}=\cos(s_1)=\cos(3^2)\approx -0.91$. Next it computes $\dfrac{\partial z}{\partial x}=\dfrac{\partial s_1}{\partial x}\cdot\dfrac{\partial z}{\partial s_1} \approx \dfrac{\partial s_1}{\partial x} \cdot -0.91 = \dfrac{\partial x^2}{\partial x} \cdot -0.91=2x \cdot -0.91 = 6\cdot-0.91=-5.46$.
The two methods obviously give the same result (except for rounding errors), and with a single input and output they involve roughly the same number of computations. But when there are several inputs or outputs, their performance can differ dramatically. If there are many inputs, the right-most terms are needed to compute the partial derivatives with regard to every input, so it is best to compute the right-most terms first: this is reverse-mode autodiff, where the right-most term is computed once and reused for all the partial derivatives. Conversely, if there are many outputs, forward mode is preferable, since the left-most term can be computed once and reused for the partial derivatives of all the outputs. In deep learning there are typically thousands of model parameters, so there are many inputs but few outputs; in fact, during training there is usually a single output, the loss. This is why TensorFlow and the major deep learning libraries use reverse-mode autodiff.
Reverse-mode autodiff adds one complication: the value of $s_i$ is generally needed when computing $\dfrac{\partial s_{i+1}}{\partial s_i}$, and computing $s_i$ requires first computing $s_{i-1}$, which in turn requires $s_{i-2}$, and so on. So essentially we need a first forward pass through the network to compute $s_1$, $s_2$, $s_3$, $\dots$, $s_{n-1}$ and $s_n$; only then can the algorithm compute the partial derivatives from right to left. Storing all the intermediate values $s_i$ in RAM is sometimes a problem, especially when handling images and when using GPUs, which often have limited RAM. To mitigate this, you can reduce the number of layers in the network, or configure TensorFlow to swap intermediate values from GPU RAM to CPU RAM. Another option is to cache only every other intermediate value, $s_1$, $s_3$, $s_5$, $\dots$, $s_{n-4}$, $s_{n-2}$ and $s_n$: when the algorithm needs a missing value $s_i$, it recomputes it from the previous cached value $s_{i-1}$. This trades CPU for RAM (if you are interested, check out [this paper](https://pdfs.semanticscholar.org/f61e/9fd5a4878e1493f7a6b03774a61c17b7e9a4.pdf)).
### Forward-mode autodiff
```
Const.gradient = lambda self, var: Const(0)
Var.gradient = lambda self, var: Const(1) if self is var else Const(0)
Add.gradient = lambda self, var: Add(self.a.gradient(var), self.b.gradient(var))
Mul.gradient = lambda self, var: Add(Mul(self.a, self.b.gradient(var)), Mul(self.a.gradient(var), self.b))
x = Var(name="x", init_value=3.)
y = Var(name="y", init_value=4.)
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
dfdx = f.gradient(x) # 2xy
dfdy = f.gradient(y) # x² + 1
dfdx.evaluate(), dfdy.evaluate()
```
Since the output of the `gradient()` method is fully symbolic, we are not limited to first-order derivatives; we can also compute second-order derivatives:
```
d2fdxdx = dfdx.gradient(x) # 2y
d2fdxdy = dfdx.gradient(y) # 2x
d2fdydx = dfdy.gradient(x) # 2x
d2fdydy = dfdy.gradient(y) # 0
[[d2fdxdx.evaluate(), d2fdxdy.evaluate()],
[d2fdydx.evaluate(), d2fdydy.evaluate()]]
```
The results are now exact, not approximate (up to the limit of the computer's float precision, of course).
### Forward-mode autodiff using dual numbers
A nice way to apply forward-mode autodiff is to use [dual numbers](https://ko.wikipedia.org/wiki/%EC%9D%B4%EC%9B%90%EC%88%98_(%EC%88%98%ED%95%99)). In short, a dual number $z$ has the form $z = a + b\epsilon$, where $a$ and $b$ are real numbers, and $\epsilon$ is an infinitesimal number, positive but smaller than every positive real number, so that $\epsilon^2=0$. Since we can write $f(x + \epsilon) = f(x) + \dfrac{\partial f}{\partial x}\epsilon$, simply computing $f(x + \epsilon)$ gives us both $f(x)$ and the partial derivative of $f$ with regard to $x$.
Dual numbers have their own arithmetic rules, which are generally quite intuitive. For example:
**Addition**
$(a_1 + b_1\epsilon) + (a_2 + b_2\epsilon) = (a_1 + a_2) + (b_1 + b_2)\epsilon$
**Subtraction**
$(a_1 + b_1\epsilon) - (a_2 + b_2\epsilon) = (a_1 - a_2) + (b_1 - b_2)\epsilon$
**Multiplication**
$(a_1 + b_1\epsilon) \times (a_2 + b_2\epsilon) = (a_1 a_2) + (a_1 b_2 + a_2 b_1)\epsilon + b_1 b_2\epsilon^2 = (a_1 a_2) + (a_1b_2 + a_2b_1)\epsilon$
**Division**
$\dfrac{a_1 + b_1\epsilon}{a_2 + b_2\epsilon} = \dfrac{a_1 + b_1\epsilon}{a_2 + b_2\epsilon} \cdot \dfrac{a_2 - b_2\epsilon}{a_2 - b_2\epsilon} = \dfrac{a_1 a_2 + (b_1 a_2 - a_1 b_2)\epsilon - b_1 b_2\epsilon^2}{{a_2}^2 + (a_2 b_2 - a_2 b_2)\epsilon - {b_2}^2\epsilon^2} = \dfrac{a_1}{a_2} + \dfrac{b_1 a_2 - a_1 b_2}{{a_2}^2}\epsilon$
**Power**
$(a + b\epsilon)^n = a^n + (n a^{n-1}b)\epsilon$
etc.
Let's create a class to represent dual numbers, and implement a few operations (addition and multiplication). You can add more if you want.
```
class DualNumber(object):
def __init__(self, value=0.0, eps=0.0):
self.value = value
self.eps = eps
def __add__(self, b):
return DualNumber(self.value + self.to_dual(b).value,
self.eps + self.to_dual(b).eps)
def __radd__(self, a):
return self.to_dual(a).__add__(self)
def __mul__(self, b):
return DualNumber(self.value * self.to_dual(b).value,
self.eps * self.to_dual(b).value + self.value * self.to_dual(b).eps)
def __rmul__(self, a):
return self.to_dual(a).__mul__(self)
def __str__(self):
if self.eps:
return "{:.1f} + {:.1f}ε".format(self.value, self.eps)
else:
return "{:.1f}".format(self.value)
def __repr__(self):
return str(self)
@classmethod
def to_dual(cls, n):
if hasattr(n, "value"):
return n
else:
return cls(n)
```
$3 + (3 + 4 \epsilon) = 6 + 4\epsilon$
```
3 + DualNumber(3, 4)
```
$(3 + 4ε)\times(5 + 7ε)$ = $3 \times 5 + 3 \times 7ε + 4ε \times 5 + 4ε \times 7ε$ = $15 + 21ε + 20ε + 28ε^2$ = $15 + 41ε + 28 \times 0$ = $15 + 41ε$
```
DualNumber(3, 4) * DualNumber(5, 7)
```
Now let's check that dual numbers work with our computation framework:
```
x.value = DualNumber(3.0)
y.value = DualNumber(4.0)
f.evaluate()
```
Yep, it works. Now let's use this to compute the partial derivatives of $f$ with regard to $x$ and $y$ at x=3 and y=4:
```
x.value = DualNumber(3.0, 1.0) # 3 + ε
y.value = DualNumber(4.0) # 4
dfdx = f.evaluate().eps
x.value = DualNumber(3.0) # 3
y.value = DualNumber(4.0, 1.0) # 4 + ε
dfdy = f.evaluate().eps
dfdx
dfdy
```
Great! However, this implementation is limited to first-order derivatives. Now let's look at reverse mode.
### Reverse-mode autodiff
Let's rewrite our simple framework to add reverse-mode autodiff:
```
class Const(object):
def __init__(self, value):
self.value = value
def evaluate(self):
return self.value
def backpropagate(self, gradient):
pass
def __str__(self):
return str(self.value)
class Var(object):
def __init__(self, name, init_value=0):
self.value = init_value
self.name = name
self.gradient = 0
def evaluate(self):
return self.value
def backpropagate(self, gradient):
self.gradient += gradient
def __str__(self):
return self.name
class BinaryOperator(object):
def __init__(self, a, b):
self.a = a
self.b = b
class Add(BinaryOperator):
def evaluate(self):
self.value = self.a.evaluate() + self.b.evaluate()
return self.value
def backpropagate(self, gradient):
self.a.backpropagate(gradient)
self.b.backpropagate(gradient)
def __str__(self):
return "{} + {}".format(self.a, self.b)
class Mul(BinaryOperator):
def evaluate(self):
self.value = self.a.evaluate() * self.b.evaluate()
return self.value
def backpropagate(self, gradient):
self.a.backpropagate(gradient * self.b.value)
self.b.backpropagate(gradient * self.a.value)
def __str__(self):
return "({}) * ({})".format(self.a, self.b)
x = Var("x", init_value=3)
y = Var("y", init_value=4)
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
result = f.evaluate()
f.backpropagate(1.0)
print(f)
result
x.gradient
y.gradient
```
Again, in this implementation the outputs are just numbers, not symbolic expressions, so we are limited to first-order derivatives. However, we could have made the `backpropagate()` methods return symbolic expressions instead of values, which would make it possible to compute second-order gradients (and beyond). This is what TensorFlow does, as do all the major libraries that implement autodiff.
### Reverse-mode autodiff using TensorFlow
```
import tensorflow as tf
tf.reset_default_graph()
x = tf.Variable(3., name="x")
y = tf.Variable(4., name="y")
f = x*x*y + y + 2
jacobians = tf.gradients(f, [x, y])
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
f_val, jacobians_val = sess.run([f, jacobians])
f_val, jacobians_val
```
Since everything is symbolic, we can compute second-order derivatives and beyond. However, when computing the derivative of a tensor with regard to a variable it does not depend on, the `gradients()` function returns None instead of 0.0, and None cannot be evaluated with `sess.run()`. So beware of `None` values; here we simply replace them with zero tensors.
```
hessians_x = tf.gradients(jacobians[0], [x, y])
hessians_y = tf.gradients(jacobians[1], [x, y])
def replace_none_with_zero(tensors):
return [tensor if tensor is not None else tf.constant(0.)
for tensor in tensors]
hessians_x = replace_none_with_zero(hessians_x)
hessians_y = replace_none_with_zero(hessians_y)
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
hessians_x_val, hessians_y_val = sess.run([hessians_x, hessians_y])
hessians_x_val, hessians_y_val
```
That's all, folks! Hope you enjoyed this notebook.
# Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
```
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
```
First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
```
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
```
Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.
```
text[:100]
```
And we can see the characters encoded as integers.
```
encoded[:100]
```
Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
```
len(vocab)
```
## Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in `encoded`. Let's create a function that will give us an iterator for our batches. I like using [generator functions](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/) to do this. Then we can pass `encoded` into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array `arr`, you divide the length of `arr` by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split `arr` into $N$ sequences. You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (`n_seqs` below), let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by `n_steps`. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
```python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
```
where `x` is the input batch and `y` is the target batch.
The way I like to do this window is use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of steps in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `n_steps` wide.
```
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
```
Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
```
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
```
If you implemented `get_batches` correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
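To convince yourself of the shift programmatically, here is a small self-contained check (an addition to the notebook; it re-defines `get_batches` exactly as above and runs it on a toy array):

```python
import numpy as np

def get_batches(arr, n_seqs, n_steps):
    # same logic as the function defined above
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr) // characters_per_batch
    arr = arr[:n_batches * characters_per_batch].reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        x = arr[:, n:n+n_steps]
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

arr = np.arange(40)
x, y = next(get_batches(arr, n_seqs=2, n_steps=5))

# within each window, y is x shifted left by one, wrapping around inside the window
print((y[:, :-1] == x[:, 1:]).all())  # True
print((y[:, -1] == x[:, 0]).all())    # True
```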
## Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
### Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called `keep_prob`.
```
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
```
### LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
```python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```
where `num_units` is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
```python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
```
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with [`tf.contrib.rnn.MultiRNNCell`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/rnn/MultiRNNCell). With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
```python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
```
This might look a little weird if you know Python well because this will create a list of the same `cell` object. However, TensorFlow will create different weight matrices for all `cell` objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
```python
initial_state = cell.zero_state(batch_size, tf.float32)
```
Below, we implement the `build_lstm` function to create these LSTM cells and the initial state.
```
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
```
### RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
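The reshape described above can be sketched with NumPy (the sizes below are arbitrary, for illustration only):

```python
import numpy as np

N, M, L = 4, 5, 8             # batch size, steps, LSTM units (illustrative)
lstm_output = np.random.rand(N, M, L)

# Flatten batch and step dimensions into rows; each row is one cell output
flat = lstm_output.reshape(-1, L)
print(flat.shape)             # (20, 8), i.e. (N * M, L)

# Row ordering is row-major: sequence 0 steps 0..M-1, then sequence 1, etc.
print(np.allclose(flat[M], lstm_output[1, 0]))   # True
```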
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with `tf.variable_scope(scope_name)` because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
```
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
        lstm_output: Input tensor (the outputs from the LSTM cells)
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
```
### Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets; we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.
Then we run the logits and targets through `tf.nn.softmax_cross_entropy_with_logits` and find the mean to get the loss.
```
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
```
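The one-hot encoding and reshape performed above can be sketched in NumPy (a stand-in for `tf.one_hot` and `tf.reshape`, using toy sizes):

```python
import numpy as np

num_classes = 4
targets = np.array([[0, 2], [3, 1]])          # shape (batch, steps)

# One-hot encode: each integer becomes a length-C indicator vector
y_one_hot = np.eye(num_classes)[targets]      # shape (batch, steps, C)

# Reshape to match the 2D logits: one row per batch element per step
y_reshaped = y_one_hot.reshape(-1, num_classes)
print(y_reshaped.shape)                        # (4, 4)
print(y_reshaped[1])                           # one-hot vector for class 2
```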
### Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold: if the combined norm of the gradients exceeds the threshold, we scale them down to it. This ensures the gradients never grow overly large. Then we use an `AdamOptimizer` for the learning step.
```
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
        learning_rate: Learning rate for optimizer
        grad_clip: Threshold at which to clip gradients (by global norm)
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
```
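What `tf.clip_by_global_norm` computes can be sketched in NumPy: the norm of all gradients taken together is measured, and if it exceeds the threshold, every gradient is scaled down by the same factor:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Global norm: treat all gradients as one long concatenated vector
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm <= clip_norm:
        return grads, global_norm
    scale = clip_norm / global_norm
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global norm = 13
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
print(norm)                                        # 13.0
# After clipping, the global norm of the gradients is the threshold, ≈ 5.0
```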
### Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/nn/dynamic_rnn). This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as `final_state` so we can pass it to the first LSTM cell in the next mini-batch run. For `tf.nn.dynamic_rnn`, we pass in the cell and initial state we get from `build_lstm`, as well as our input sequences. Also, we need to one-hot encode the inputs before they go into the RNN.
```
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
```
## Hyperparameters
Here I'm defining the hyperparameters for the network.
* `batch_size` - Number of sequences running through the network in one pass.
* `num_steps` - Number of characters in the sequence the network is trained on. Larger is typically better: the network can learn longer-range dependencies, but it also takes longer to train. 100 is usually a good number here.
* `lstm_size` - The number of units in the hidden layers.
* `num_layers` - Number of hidden LSTM layers to use
* `learning_rate` - Learning rate for training
* `keep_prob` - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).
> ## Tips and Tricks
>### Monitoring Validation Loss vs. Training Loss
>If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)
> ### Approximate number of parameters
> The two most important parameters that control the model are `lstm_size` and `num_layers`. I would advise that you always use `num_layers` of either 2/3. The `lstm_size` can be adjusted based on how much data you have. The two important quantities to keep track of here are:
> - The number of parameters in your model. This is printed when you start training.
> - The size of your dataset. 1MB file is approximately 1 million characters.
>These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `lstm_size` larger.
> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
> ### Best models strategy
>The winning strategy to obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
>By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
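The parameter count Karpathy mentions can be estimated by hand. Assuming a standard LSTM cell (four gates, each with input weights, recurrent weights, and a bias) and an illustrative vocabulary of 83 characters, the settings used below (`lstm_size = 512`, two layers) work out to roughly 3.4 million parameters:

```python
def lstm_params(input_dim, hidden):
    # Four gates, each with (input + recurrent) weight matrices plus a bias
    return 4 * ((input_dim + hidden) * hidden + hidden)

C = 83             # illustrative vocabulary size
lstm_size = 512
layer1 = lstm_params(C, lstm_size)           # first layer sees the one-hot input
layer2 = lstm_params(lstm_size, lstm_size)   # deeper layers see the hidden state
softmax = lstm_size * C + C                  # fully connected output layer
total = layer1 + layer2 + softmax
print(total)       # 3362387 — about 3.4 million parameters
```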
```
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
```
## Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I save a checkpoint.
Here I'm saving checkpoints with the format
`i{iteration number}_l{# hidden layer units}.ckpt`
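For example, the format string fills in the template like so (a quick sketch):

```python
iteration, lstm_size = 200, 512
name = "i{}_l{}.ckpt".format(iteration, lstm_size)
print(name)   # i200_l512.ckpt
```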
```
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
```
#### Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
```
tf.train.get_checkpoint_state('checkpoints')
```
## Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next one. We then feed that new character back in to predict the one after it, and keep going to generate arbitrarily long text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
```
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
```
Here, pass in the path to a checkpoint and sample from the network.
```
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
# Table of Contents
- 1&nbsp;&nbsp;Functions in Python
  - 1.1&nbsp;&nbsp;Syntax for defining functions
  - 1.2&nbsp;&nbsp;Function parameters
  - 1.3&nbsp;&nbsp;References
# Functions in Python
## Syntax for defining functions
Functions in Python use the keyword `def` followed by the function name and its parameters in parentheses, ending with a colon, as in the following example, where the function `soma` is defined to return the sum of its two parameters. Note that the body of the function is indented relative to its definition:
```
def soma( x, y):
s = x + y
return s
```
To call the function `soma`, simply use it by name, passing values as arguments. See the example below:
```
r = soma(50, 20)
print (r)
```
## Function parameters
There are two kinds of parameters: positional and keyword. Positional parameters are identified by the order in which they appear in the function's parameter list. Keyword parameters are identified by `name=`. Keyword parameters can also be passed positionally, but they have the advantage that, if omitted, they take on the stated default value. See the example below:
```
def soma( x, y, squared=False):
if squared:
s = (x + y)**2
else:
s = (x + y)
return s
```
Note that the parameters `x` and `y` are positional and will be the first two arguments of the call. The third parameter is a keyword parameter and therefore optional; it can be used either positionally or explicitly with its keyword. The big advantage of this scheme is that a function can have a large number of keyword parameters while each call explicitly lists only the ones it needs.
See the examples:
```
print ('soma(2, 3):', soma(2, 3))
print ('soma(2, 3, False):', soma(2, 3, False))
print ('soma(2, 3, True):', soma(2, 3, True))
print ('soma(2, 3, squared= True):', soma(2, 3, squared= True))
```
## References
The references below are from the official Python documentation. In this course we will use very little of the standard library, so there is no need (nor time) to study all of these references. Our focus will be on NumPy, and we will provide these materials gradually.
- [Python Language Reference](https://docs.python.org/2/reference/)
- [Python Standard Library](https://docs.python.org/2/library/index.html)
# Workflows
Although it would be possible to write analysis scripts using just Nipype [Interfaces](basic_interfaces.ipynb), and this may provide some advantages over directly making command-line calls, the main benefits of Nipype are the workflows.
A workflow controls the setup and the execution of individual interfaces. Let's assume you want to run multiple interfaces in a specific order, where some have to wait for others to finish while others can be executed in parallel. The nice thing about a Nipype workflow is that it takes care of the input and output of each interface and arranges the execution of the interfaces in the most efficient way.
A workflow therefore consists of multiple [Nodes](basic_nodes.ipynb), each representing a specific [Interface](basic_interfaces.ipynb), and directed connections between those nodes. Those connections specify which output of which node should be used as an input for another node. To better understand why this is so great, let's look at an example.
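The core idea — nodes with directed connections, executed in dependency order — can be sketched in a few lines of plain Python (a toy illustration only, not how Nipype is actually implemented):

```python
# Toy workflow: each node names the nodes whose outputs it depends on
graph = {
    "skullstrip": [],
    "smooth": [],
    "mask": ["skullstrip", "smooth"],   # mask needs both outputs
}

def execution_order(graph):
    """Topological sort: a node runs only after all of its dependencies."""
    order, done = [], set()
    def visit(node):
        if node in done:
            return
        for dep in graph[node]:
            visit(dep)
        done.add(node)
        order.append(node)
    for node in graph:
        visit(node)
    return order

print(execution_order(graph))   # ['skullstrip', 'smooth', 'mask'] — mask runs last
```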
## Interfaces vs. Workflows
Interfaces are the building blocks that solve well-defined tasks. We solve more complex tasks by combining interfaces with workflows:
<table style="width: 100%; font-size: 14px;">
<thead>
<th style="text-align:left">Interfaces</th>
<th style="text-align:left">Workflows</th>
</thead>
<tbody>
<tr>
<td style="text-align:left">Wrap *unitary* tasks</td>
<td style="text-align:left">Wrap *meta*-tasks
<li style="text-align:left">implemented with nipype interfaces wrapped inside ``Node`` objects</li>
<li style="text-align:left">subworkflows can also be added to a workflow without any wrapping</li>
</td>
</tr>
<tr>
<td style="text-align:left">Keep track of the inputs and outputs, and check their expected types</td>
<td style="text-align:left">Do not have inputs/outputs, but expose them from the interfaces wrapped inside</td>
</tr>
<tr>
<td style="text-align:left">Do not cache results (unless you use [interface caching](advanced_interfaces_caching.ipynb))</td>
<td style="text-align:left">Cache results</td>
</tr>
<tr>
<td style="text-align:left">Run by a nipype plugin</td>
<td style="text-align:left">Run by a nipype plugin</td>
</tr>
</tbody>
</table>
## Preparation
Before we can start, let's first load some helper functions:
```
import numpy as np
import nibabel as nb
import matplotlib.pyplot as plt
# Let's create a short helper function to plot 3D NIfTI images
def plot_slice(fname):
# Load the image
img = nb.load(fname)
data = img.get_fdata()
# Cut in the middle of the brain
cut = int(data.shape[-1]/2) + 10
# Plot the data
plt.imshow(np.rot90(data[..., cut]), cmap="gray")
plt.gca().set_axis_off()
```
# Example 1 - ``Command-line`` execution
Let's take a look at a small preprocessing analysis where we would like to perform the following steps of processing:
- Skullstrip an image to obtain a mask
- Smooth the original image
- Mask the smoothed image
This could all very well be done with the following shell script:
```
%%bash
ANAT_NAME=sub-01_ses-test_T1w
ANAT=/data/ds000114/sub-01/ses-test/anat/${ANAT_NAME}
bet ${ANAT} /output/${ANAT_NAME}_brain -m -f 0.3
fslmaths ${ANAT} -s 2 /output/${ANAT_NAME}_smooth
fslmaths /output/${ANAT_NAME}_smooth -mas /output/${ANAT_NAME}_brain_mask /output/${ANAT_NAME}_smooth_mask
```
This is simple and straightforward. We can see that this does exactly what we wanted by plotting the four steps of processing.
```
f = plt.figure(figsize=(12, 4))
for i, img in enumerate(["T1w", "T1w_smooth",
"T1w_brain_mask", "T1w_smooth_mask"]):
f.add_subplot(1, 4, i + 1)
if i == 0:
plot_slice("/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_%s.nii.gz" % img)
else:
plot_slice("/output/sub-01_ses-test_%s.nii.gz" % img)
plt.title(img)
```
# Example 2 - ``Interface`` execution
Now let's see what this would look like if we used Nipype, but only the Interfaces functionality. It's simple enough to write a basic procedural script, this time in Python, to do the same thing as above:
```
from nipype.interfaces import fsl
# Skullstrip process
skullstrip = fsl.BET(
in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
out_file="/output/sub-01_T1w_brain.nii.gz",
mask=True)
skullstrip.run()
# Smoothing process
smooth = fsl.IsotropicSmooth(
in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
out_file="/output/sub-01_T1w_smooth.nii.gz",
fwhm=4)
smooth.run()
# Masking process
mask = fsl.ApplyMask(
in_file="/output/sub-01_T1w_smooth.nii.gz",
out_file="/output/sub-01_T1w_smooth_mask.nii.gz",
mask_file="/output/sub-01_T1w_brain_mask.nii.gz")
mask.run()
f = plt.figure(figsize=(12, 4))
for i, img in enumerate(["T1w", "T1w_smooth",
"T1w_brain_mask", "T1w_smooth_mask"]):
f.add_subplot(1, 4, i + 1)
if i == 0:
plot_slice("/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_%s.nii.gz" % img)
else:
plot_slice("/output/sub-01_%s.nii.gz" % img)
plt.title(img)
```
This is more verbose, although it does have its advantages. There's the automated input validation we saw previously, some of the options are named more meaningfully, and you don't need to remember, for example, that fslmaths' smoothing kernel is set in sigma instead of FWHM -- Nipype does that conversion behind the scenes.
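That conversion is simple to check by hand: for a Gaussian kernel, FWHM = 2·√(2·ln 2)·σ ≈ 2.355·σ, so a sketch of the conversion looks like:

```python
import math

def fwhm_to_sigma(fwhm):
    # FWHM = 2 * sqrt(2 * ln 2) * sigma for a Gaussian kernel
    return fwhm / (2 * math.sqrt(2 * math.log(2)))

print(round(fwhm_to_sigma(4), 4))   # 1.6986 — the sigma for a 4 mm FWHM kernel
```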
### Can't we optimize that a bit?
As we can see above, the inputs for the **``mask``** routine ``in_file`` and ``mask_file`` are actually the output of **``skullstrip``** and **``smooth``**. We therefore somehow want to connect them. This can be accomplished by saving the executed routines under a given object and then using the output of those objects as input for other routines.
```
from nipype.interfaces import fsl
# Skullstrip process
skullstrip = fsl.BET(
in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz", mask=True)
bet_result = skullstrip.run() # skullstrip object
# Smooth process
smooth = fsl.IsotropicSmooth(
in_file="/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz", fwhm=4)
smooth_result = smooth.run() # smooth object
# Mask process
mask = fsl.ApplyMask(in_file=smooth_result.outputs.out_file,
mask_file=bet_result.outputs.mask_file)
mask_result = mask.run()
f = plt.figure(figsize=(12, 4))
for i, img in enumerate([skullstrip.inputs.in_file, smooth_result.outputs.out_file,
bet_result.outputs.mask_file, mask_result.outputs.out_file]):
f.add_subplot(1, 4, i + 1)
plot_slice(img)
plt.title(img.split('/')[-1].split('.')[0].split('test_')[-1])
```
Here we didn't need to name the intermediate files; Nipype did that behind the scenes, and then we passed the result object (which knows those names) onto the next step in the processing stream. This is somewhat more concise than the example above, but it's still a procedural script. And the dependency relationship between the stages of processing is not particularly obvious. To address these issues, and to provide solutions to problems we might not know we have yet, Nipype offers **Workflows.**
# Example 3 - ``Workflow`` execution
What we've implicitly done above is to encode our processing stream as a directed acyclic graphs: each stage of processing is a node in this graph, and some nodes are unidirectionally dependent on others. In this case, there is one input file and several output files, but there are no cycles -- there's a clear line of directionality to the processing. What the Node and Workflow classes do is make these relationships more explicit.
The basic architecture is that the Node provides a light wrapper around an Interface. It exposes the inputs and outputs of the Interface as its own, but it adds some additional functionality that allows you to connect Nodes into a Workflow.
Let's rewrite the above script with these tools:
```
# Import Node and Workflow object and FSL interface
from nipype import Node, Workflow
from nipype.interfaces import fsl
# For reasons that will later become clear, it's important to
# pass filenames to Nodes as absolute paths
from os.path import abspath
in_file = abspath("/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz")
# Skullstrip process
skullstrip = Node(fsl.BET(in_file=in_file, mask=True), name="skullstrip")
# Smooth process
smooth = Node(fsl.IsotropicSmooth(in_file=in_file, fwhm=4), name="smooth")
# Mask process
mask = Node(fsl.ApplyMask(), name="mask")
```
This looks mostly similar to what we did above, but we've left out the two crucial inputs to the ApplyMask step. We'll set those up by defining a Workflow object and then making *connections* among the Nodes.
```
# Initiation of a workflow
wf = Workflow(name="smoothflow", base_dir="/output/working_dir")
```
The Workflow object has a method called ``connect`` that is going to do most of the work here. This routine also checks if inputs and outputs are actually provided by the nodes that are being connected.
There are two different ways to call ``connect``:
    connect(source, "source_output", dest, "dest_input")

    connect([(source, dest, [("source_output1", "dest_input1"),
                             ("source_output2", "dest_input2")])])
With the first approach, you can establish one connection at a time. With the second you can establish multiple connects between two nodes at once. In either case, you're providing it with four pieces of information to define the connection:
- The source node object
- The name of the output field from the source node
- The destination node object
- The name of the input field from the destination node
We'll illustrate each method in the following cell:
```
# First the "simple", but more restricted method
wf.connect(skullstrip, "mask_file", mask, "mask_file")
# Now the more complicated method
wf.connect([(smooth, mask, [("out_file", "in_file")])])
```
Now the workflow is complete!
Above, we mentioned that the workflow can be thought of as a directed acyclic graph. In fact, that's literally how it's represented behind the scenes, and we can use that to explore the workflow visually:
```
wf.write_graph("workflow_graph.dot")
from IPython.display import Image
Image(filename="/output/working_dir/smoothflow/workflow_graph.png")
```
This representation makes the dependency structure of the workflow obvious. (By the way, the names of the nodes in this graph are the names we gave our Node objects above, so pick something meaningful for those!)
Certain graph types also allow you to further inspect the individual connections between the nodes. For example:
```
wf.write_graph(graph2use='flat')
from IPython.display import Image
Image(filename="/output/working_dir/smoothflow/graph_detailed.png")
```
Here you see very clearly, that the output ``mask_file`` of the ``skullstrip`` node is used as the input ``mask_file`` of the ``mask`` node. For more information on graph visualization, see the [Graph Visualization](./basic_graph_visualization.ipynb) section.
But let's come back to our example. At this point, all we've done is define the workflow. We haven't executed any code yet. Much like Interface objects, the Workflow object has a ``run`` method that we can call so that it executes. Let's do that and then examine the results.
```
# Specify the base directory for the working directory
wf.base_dir = "/output/working_dir"
# Execute the workflow
wf.run()
```
**The specification of ``base_dir`` is very important (and is why we needed to use absolute paths above) because otherwise all the outputs would be saved somewhere in the temporary files.** Unlike interfaces, which by default write results to the local directory, the Workflow engine executes things in its own directory hierarchy.
Let's take a look at the resulting images to convince ourselves we've done the same thing as before:
```
f = plt.figure(figsize=(12, 4))
for i, img in enumerate(["/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz",
"/output/working_dir/smoothflow/smooth/sub-01_ses-test_T1w_smooth.nii.gz",
"/output/working_dir/smoothflow/skullstrip/sub-01_ses-test_T1w_brain_mask.nii.gz",
"/output/working_dir/smoothflow/mask/sub-01_ses-test_T1w_smooth_masked.nii.gz"]):
f.add_subplot(1, 4, i + 1)
plot_slice(img)
```
Perfect!
Let's also have a closer look at the working directory:
```
!tree /output/working_dir/smoothflow/ -I '*js|*json|*html|*pklz|_report'
```
As you can see, the name of the working directory is the ``base_dir`` we gave the workflow, and the name of the folder within is the name of the workflow object, ``smoothflow``. Each node of the workflow has its own subfolder in the ``smoothflow`` folder, and each of those subfolders contains the output of the node as well as some additional files.
# The #1 gotcha of nipype Workflows
Nipype workflows are just DAGs (Directed Acyclic Graphs) that the runner ``Plugin`` takes in and uses to compose an ordered list of nodes for execution. As a matter of fact, running a workflow will return a graph object. That's why you often see something like `<networkx.classes.digraph.DiGraph at 0x7f83542f1550>` at the end of execution stream when running a workflow.
The principal implication is that ``Workflow``s *don't have inputs and outputs* of their own; you can only access them through the ``Node``s they contain.
In practical terms, this has one clear consequence: from the resulting object of the workflow execution, you don't generally have access to the value of the outputs of the interfaces. This is particularly true for Plugins with an asynchronous execution.
# A workflow inside a workflow
When you start writing full-fledged analysis workflows, things can get quite complicated. Some aspects of neuroimaging analysis can be thought of as a coherent step at a level more abstract than the execution of a single command line binary. For instance, in the standard FEAT script in FSL, several calls are made in the process of using `susan` to perform nonlinear smoothing on an image. In Nipype, you can write **nested workflows**, where a sub-workflow can take the place of a Node in a given script.
Let's use the prepackaged `susan` workflow that ships with Nipype to replace our Gaussian filtering node and demonstrate how this works.
```
from niflow.nipype1.workflows.fmri.fsl import create_susan_smooth
```
Calling this function will return a pre-written `Workflow` object:
```
susan = create_susan_smooth(separate_masks=False)
```
Let's display the graph to see what happens here.
```
susan.write_graph("susan_workflow.dot")
from IPython.display import Image
Image(filename="susan_workflow.png")
```
We see that the workflow has an `inputnode` and an `outputnode`. While not strictly necessary, this is standard practice for workflows (especially those that are intended to be used as nested workflows in the context of a longer analysis graph) and makes it more clear how to connect inputs and outputs from this workflow.
Let's take a look at what those inputs and outputs are. Like Nodes, Workflows have `inputs` and `outputs` attributes that take a second sub-attribute corresponding to the specific node we want to make connections to.
```
print("Inputs:\n", susan.inputs.inputnode)
print("Outputs:\n", susan.outputs.outputnode)
```
Note that `inputnode` and `outputnode` are just conventions, and the Workflow object exposes connections to all of its component nodes:
```
susan.inputs
```
Let's see how we would write a new workflow that uses this nested smoothing step.
The susan workflow actually expects to receive and output a list of files (it's intended to be executed on each of several runs of fMRI data). We'll cover exactly how that works in later tutorials, but for the moment we need to add an additional ``Function`` node to deal with the fact that ``susan`` is outputting a list. We can use a simple `lambda` function to do this:
```
from nipype import Function
extract_func = lambda list_out: list_out[0]
list_extract = Node(Function(input_names=["list_out"],
output_names=["out_file"],
function=extract_func),
name="list_extract")
```
Now let's create a new workflow ``susanflow`` that contains the ``susan`` workflow as a sub-node. To be sure, let's also recreate the ``skullstrip`` and the ``mask`` node from the examples above.
```
# Initiate workflow with name and base directory
wf2 = Workflow(name="susanflow", base_dir="/output/working_dir")
# Create new skullstrip and mask nodes
skullstrip2 = Node(fsl.BET(in_file=in_file, mask=True), name="skullstrip")
mask2 = Node(fsl.ApplyMask(), name="mask")
# Connect the nodes to each other and to the susan workflow
wf2.connect([(skullstrip2, mask2, [("mask_file", "mask_file")]),
(skullstrip2, susan, [("mask_file", "inputnode.mask_file")]),
(susan, list_extract, [("outputnode.smoothed_files",
"list_out")]),
(list_extract, mask2, [("out_file", "in_file")])
])
# Specify the remaining input variables for the susan workflow
susan.inputs.inputnode.in_files = abspath(
"/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz")
susan.inputs.inputnode.fwhm = 4
```
First, let's see what this new processing graph looks like.
```
wf2.write_graph(dotfilename='/output/working_dir/full_susanflow.dot', graph2use='colored')
from IPython.display import Image
Image(filename="/output/working_dir/full_susanflow.png")
```
We can see how there is a nested smoothing workflow (blue) in the place of our previous `smooth` node. This provides a very detailed view, but what if you just wanted to give a higher-level summary of the processing steps? After all, that is the purpose of encapsulating smaller streams in a nested workflow. That, fortunately, is an option when writing out the graph:
```
wf2.write_graph(dotfilename='/output/working_dir/full_susanflow_toplevel.dot', graph2use='orig')
from IPython.display import Image
Image(filename="/output/working_dir/full_susanflow_toplevel.png")
```
That's much more manageable. Now let's execute the workflow:
```
wf2.run()
```
As a final step, let's look at the input and the output. It's exactly what we wanted.
```
f = plt.figure(figsize=(12, 4))
for i, e in enumerate([["/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz", 'input'],
["/output/working_dir//susanflow/mask/sub-01_ses-test_T1w_smooth_masked.nii.gz",
'output']]):
f.add_subplot(1, 2, i + 1)
plot_slice(e[0])
plt.title(e[1])
```
# So, why are workflows so great?
So far, we've seen that you can build up rather complex analysis workflows. But at the moment, it's not clear why this is worth the extra trouble compared to writing a simple procedural script. To demonstrate the first added benefit of Nipype, let's just rerun the ``susanflow`` workflow from above and measure the execution times.
```
%time wf2.run()
```
That happened quickly! **Workflows (actually, this is handled by the Node code) are smart and know if their inputs have changed from the last time they were run. If they have not, they don't recompute; they just turn around and pass out the resulting files from the previous run.** This caching is done on a node-by-node basis.
Let's go back to the first workflow example. What happens if we just tweak one thing:
```
wf.inputs.smooth.fwhm = 1
wf.run()
```
By changing an input value of the ``smooth`` node, this node will be re-executed. This triggers a cascade such that any node depending on the ``smooth`` node (in this case, the ``mask`` node) is also recomputed. However, the ``skullstrip`` node hasn't changed since the first time it ran, so it just coughed up its original files.
That's one of the main benefits of using Workflows: **efficient recomputing**.
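A minimal sketch of this idea (hypothetical `run_node` helper; Nipype's real implementation hashes input files and interface parameters, not a JSON dict):

```python
import hashlib, json

_cache = {}

def run_node(name, inputs, compute):
    """Re-run `compute` only when `inputs` changed since the last call."""
    digest = hashlib.sha1(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    key = (name, digest)
    if key not in _cache:
        _cache[key] = compute(**inputs)  # recompute only on a cache miss
    return _cache[key]

calls = []
def smooth(fwhm):
    calls.append(fwhm)          # record every real computation
    return f"smoothed@{fwhm}mm"

run_node("smooth", {"fwhm": 4}, smooth)
run_node("smooth", {"fwhm": 4}, smooth)  # cache hit: not recomputed
run_node("smooth", {"fwhm": 1}, smooth)  # input changed: recomputed
print(calls)  # [4, 1]
```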
Another benefit of Workflows is parallel execution, which is covered under [Plugins and Distributed Computing](./basic_plugins.ipynb). With Nipype it is very easy to scale a workflow up to an extremely parallel cluster computing environment.
In this case, that just means that the `skullstrip` and `smooth` Nodes execute together, but when you scale up to Workflows with many subjects and many runs per subject, each can run together, such that (in the case of unlimited computing resources), you could process 50 subjects with 10 runs of functional data in essentially the time it would take to process a single run.
To emphasize the contribution of Nipype here, you can write and test your workflow on one subject computing on your local CPU, where it is easier to debug. Then, with the change of a single function parameter, you can scale your processing up to a 1000+ node SGE cluster.
### Exercise 1
Create a workflow that connects three nodes for:
- skipping the first 4 dummy scans using ``fsl.ExtractROI``
- applying motion correction using ``fsl.MCFLIRT`` (register to the mean volume, use NIFTI as output type)
- correcting for slice-wise acquisition using ``fsl.SliceTimer`` (assume the slices were acquired in interleaved order with a repetition time of 2.5 s; use NIFTI as output type)
```
# write your solution here
# importing Node and Workflow
from nipype import Workflow, Node
# importing all interfaces
from nipype.interfaces.fsl import ExtractROI, MCFLIRT, SliceTimer
```
Defining all nodes
```
# extracting all time levels but not the first four
extract = Node(ExtractROI(t_min=4, t_size=-1, output_type='NIFTI'),
name="extract")
# using MCFLIRT for motion correction to the mean volume
mcflirt = Node(MCFLIRT(mean_vol=True,
output_type='NIFTI'),
name="mcflirt")
# correcting for slice wise acquisition (acquired with interleaved order and time repetition was 2.5)
slicetimer = Node(SliceTimer(interleaved=True,
output_type='NIFTI',
time_repetition=2.5),
name="slicetimer")
```
Creating a workflow
```
# Initiation of a workflow
wf_ex1 = Workflow(name="exercise1", base_dir="/output/working_dir")
# connect nodes with each other
wf_ex1.connect([(extract, mcflirt, [('roi_file', 'in_file')]),
(mcflirt, slicetimer, [('out_file', 'in_file')])])
# providing an input file for the first extract node
extract.inputs.in_file = "/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz"
```
### Exercise 2
Visualize and run the workflow
```
# write your solution here
```
We learned two methods of plotting graphs:
```
wf_ex1.write_graph("workflow_graph.dot")
from IPython.display import Image
Image(filename="/output/working_dir/exercise1/workflow_graph.png")
```
And more detailed graph:
```
wf_ex1.write_graph(graph2use='flat')
from IPython.display import Image
Image(filename="/output/working_dir/exercise1/graph_detailed.png")
```
If everything works well, we're ready to run the workflow:
```
wf_ex1.run()
```
We can now check the output:
```
! ls -lh /output/working_dir/exercise1
```
<a href="https://colab.research.google.com/github/elliotgunn/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/LS_DS_121_Join_and_Reshape_Data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science_
# Join and Reshape datasets
Objectives
- concatenate data with pandas
- merge data with pandas
- understand tidy data formatting
- melt and pivot data with pandas
Links
- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data)
- Combine Data Sets: Standard Joins
- Tidy Data
- Reshaping Data
- Python Data Science Handbook
- [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append
- [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join
- [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping
- [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables
Reference
- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)
- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
## Download data
We’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!
```
!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
%cd instacart_2017_05_01
!ls -lh *.csv
```
# Join Datasets
## Goal: Reproduce this example
The first two orders for user id 1:
```
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png'
example = Image(url=url, width=600)
display(example)
```
## Load data
Here's a list of all six CSV filenames
```
!ls -lh *.csv
```
For each CSV
- Load it with pandas
- Look at the dataframe's shape
- Look at its head (first rows)
- `display(example)`
- Which columns does it have in common with the example we want to reproduce?
### aisles
```
import pandas as pd
!head aisles.csv
!wc aisles.csv
aisles = pd.read_csv('aisles.csv')
print(aisles.shape)
aisles.head()
aisles.isnull().sum()
display(example)
```
`aisles` doesn't have any data that we need.
### departments
```
departments = pd.read_csv('departments.csv')
print(departments.shape)
departments.head()
display(example)
```
### order_products__prior
```
order_products__prior = pd.read_csv('order_products__prior.csv')
print(order_products__prior.shape)
order_products__prior.head()
display(example)
```
We need:
order id
product id
add to cart order
### order_products__train
```
order_products__train = pd.read_csv('order_products__train.csv')
print(order_products__train.shape)
order_products__train.head()
```
We need:
order id
product id
add to cart order
### orders
```
orders = pd.read_csv('orders.csv')
print(orders.shape)
orders.head()
display(example)
```
We need:
user id
order id
order number
order dow
order hour of day
### products
```
products = pd.read_csv('products.csv')
print(products.shape)
products.head()
display(example)
```
We need:
product id
product name
## Concatenate order_products__prior and order_products__train
```
order_products = pd.concat([order_products__prior, order_products__train])
print(order_products.shape)
order_products.head()
order_products.shape, order_products__prior.shape, order_products__train.shape
assert len(order_products) == len(order_products__prior) + len(order_products__train)
```
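One thing to watch with `pd.concat`: the row indices of the two frames are kept as-is, so the result can contain duplicate index labels. A small illustration:

```python
import pandas as pd

prior = pd.DataFrame({'order_id': [1, 2], 'product_id': [10, 20]})
train = pd.DataFrame({'order_id': [3], 'product_id': [30]})

stacked = pd.concat([prior, train])                       # index: 0, 1, 0
reindexed = pd.concat([prior, train], ignore_index=True)  # index: 0, 1, 2

print(stacked.index.tolist())    # [0, 1, 0]
print(reindexed.index.tolist())  # [0, 1, 2]
```

Passing `ignore_index=True` gives the combined frame a fresh `RangeIndex`, which avoids surprises later with `.loc` lookups on duplicated labels.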
## Get a subset of orders — the first two orders for user id 1
From `orders` dataframe:
- user_id
- order_id
- order_number
- order_dow
- order_hour_of_day
```
display(example)
condition = order_products['order_id'] == 2539329
order_products[condition]
condition = (orders['user_id'] == 1) & (orders['order_number'] <=2)
columns = ['order_id', 'user_id', 'order_number', 'order_dow',
'order_hour_of_day']
subset = orders.loc[condition, columns]
subset
```
## Merge dataframes
Merge the subset from `orders` with columns from `order_products`
```
# order_id
# product_id
# add_to_cart_order
# order_products_only_columns_i_want
columns = ['order_id', 'product_id', 'add_to_cart_order']
merged = pd.merge(subset, order_products[columns], how='inner', on='order_id')
merged
display(example)
```
Merge with columns from `products`
```
# columns = ['product_id', 'product_name']
final = pd.merge(merged, products[['product_id', 'product_name']], how='inner', on='product_id')
final
final.columns
```
# Reshape Datasets
## Why reshape data?
#### Some libraries prefer data in different formats
For example, the Seaborn data visualization library prefers data in "Tidy" format often (but not always).
> "[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.html#organizing-datasets) This format is alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:
> - Each variable is a column
> - Each observation is a row
> A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot."
#### Data science is often about putting square pegs in round holes
Here's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling!
## Hadley Wickham's Examples
From his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
```
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan, 2],
[16, 11],
[3, 1]],
index=['John Smith', 'Jane Doe', 'Mary Johnson'],
columns=['treatmenta', 'treatmentb'])
table2 = table1.T
```
"Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild.
The table has two columns and three rows, and both rows and columns are labelled."
```
table1
```
"There are many ways to structure the same underlying data.
Table 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different."
```
table2
```
"Table 3 reorganises Table 1 to make the values, variables and observations more clear.
Table 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable."
| name | trt | result |
|--------------|-----|--------|
| John Smith | a | - |
| Jane Doe | a | 16 |
| Mary Johnson | a | 3 |
| John Smith | b | 2 |
| Jane Doe | b | 11 |
| Mary Johnson | b | 1 |
## Table 1 --> Tidy
We can use the pandas `melt` function to reshape Table 1 into Tidy format.
```
table1
table1 = table1.reset_index()
table1
tidy = table1.melt(id_vars='index')
tidy.columns = ['name', 'trt', 'result']
tidy
# table1['index'].value_counts().reset_index()
```
## Table 2 --> Tidy
```
table2
# so we need to transpose back?
table2 = table2.reset_index()
table2
tidy2 = table2.melt(id_vars='index')
tidy2.columns = ['name', 'trt', 'result']
tidy2.head()
```
## Tidy --> Table 1
The `pivot_table` function is the inverse of `melt`.
```
table1
tidy
tidy.pivot_table(index='name', columns='trt', values='result')
```
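We can check the inverse relationship end to end on the same small frame. Note that `pivot_table` silently drops the `NaN` observation, so the round trip recovers every non-missing value:

```python
import numpy as np
import pandas as pd

wide_in = pd.DataFrame(
    [[np.nan, 2], [16, 11], [3, 1]],
    index=['John Smith', 'Jane Doe', 'Mary Johnson'],
    columns=['treatmenta', 'treatmentb'])

# wide -> tidy -> wide again
long = wide_in.reset_index().melt(id_vars='index')
long.columns = ['name', 'trt', 'result']
wide_out = long.pivot_table(index='name', columns='trt', values='result')

print(wide_out.loc['Jane Doe'].tolist())  # [16.0, 11.0]
```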
## Tidy --> Table 2
```
tidy
tidy.pivot_table(index='trt', columns='name', values='result')
```
# Seaborn example
The rules can be simply stated:
- Each variable is a column
- Each observation is a row
A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot.
```
sns.catplot(x='trt', y='result', col='name',
kind='bar', data=tidy, height=2);
```
## Now with Instacart data
```
products = pd.read_csv('products.csv')
order_products = pd.concat([pd.read_csv('order_products__prior.csv'),
pd.read_csv('order_products__train.csv')])
orders = pd.read_csv('orders.csv')
```
## Goal: Reproduce part of this example
Instead of a plot with 50 products, we'll just do two — the first products from each list
- Half And Half Ultra Pasteurized
- Half Baked Frozen Yogurt
```
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'
example = Image(url=url, width=600)
display(example)
```
So, given a `product_name` we need to calculate its `order_hour_of_day` pattern.
## Subset and Merge
One challenge of performing a merge on this data is that the `products` and `orders` tables do not share any columns to merge on. Because of this, we use the `order_products` table as a bridge: it shares `product_id` with `products` and `order_id` with `orders`.
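The shape of that two-step merge can be seen on toy frames (hypothetical values, same column names as the Instacart tables):

```python
import pandas as pd

# Toy stand-ins for the three Instacart tables (hypothetical values)
products = pd.DataFrame({'product_id': [1, 2],
                         'product_name': ['Milk', 'Eggs']})
order_products = pd.DataFrame({'order_id': [100, 100, 101],
                               'product_id': [1, 2, 1]})
orders = pd.DataFrame({'order_id': [100, 101],
                       'order_hour_of_day': [9, 17]})

# `order_products` bridges the two tables that share no column directly
merged = (products
          .merge(order_products, on='product_id')
          .merge(orders, on='order_id'))
print(merged[['product_name', 'order_hour_of_day']].values.tolist())
```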
```
product_names = ['Half Baked Frozen Yogurt', 'Half And Half Ultra Pasteurized']
products.columns.tolist()
orders.columns.tolist()
order_products.columns.tolist()
merged = (products[['product_id', 'product_name']]
.merge(order_products[['order_id', 'product_id']])
.merge(orders[['order_id', 'order_hour_of_day']]))
merged.head()
products.shape, order_products.shape, orders.shape, merged.shape
# What condition will filter `merged` to just the 2 products
# that we care about?
# This is equivalent ...
condition = ((merged['product_name']=='Half Baked Frozen Yogurt') |
(merged['product_name']=='Half And Half Ultra Pasteurized'))
merged = merged[condition]
# ... to this:
product_names = ['Half Baked Frozen Yogurt', 'Half And Half Ultra Pasteurized']
condition = merged['product_name'].isin(product_names)
subset = merged[condition]
subset
```
## 4 ways to reshape and plot
### 1. value_counts
```
froyo = subset[subset['product_name']=='Half Baked Frozen Yogurt']
cream = subset[subset['product_name']=='Half And Half Ultra Pasteurized']
(cream['order_hour_of_day']
.value_counts(normalize=True)
.sort_index()
.plot())
(froyo['order_hour_of_day']
.value_counts(normalize=True)
.sort_index()
.plot());
```
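Why `normalize=True` here? The two products have very different order volumes, so raw counts aren't directly comparable; normalizing turns each series into fractions of that product's orders. On a toy series (hypothetical hours):

```python
import pandas as pd

hours = pd.Series([9, 9, 10, 10, 10, 17])  # hypothetical order hours
dist = hours.value_counts(normalize=True).sort_index()
print(dist.tolist())  # fractions: 9 -> 1/3, 10 -> 1/2, 17 -> 1/6
```

The fractions sum to 1, so the resulting curves show each product's hourly *shape* rather than its overall popularity.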
### 2. crosstab
```
(pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize='columns') * 100).plot();
```
### 3. Pivot Table
```
subset.pivot_table(index='order_hour_of_day',
columns='product_name',
values='order_id',
aggfunc=len).plot();
```
### 4. melt
```
table = pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize=True)
melted = (table
.reset_index()
.melt(id_vars='order_hour_of_day')
.rename(columns={
'order_hour_of_day': 'Hour of Day Ordered',
'product_name': 'Product',
'value': 'Percent of Orders by Product'
}))
sns.relplot(x='Hour of Day Ordered',
y='Percent of Orders by Product',
hue='Product',
data=melted,
kind='line');
```
# Assignment
## Join Data Section
These are the top 10 most frequently ordered products. How many times was each ordered?
1. Banana
2. Bag of Organic Bananas
3. Organic Strawberries
4. Organic Baby Spinach
5. Organic Hass Avocado
6. Organic Avocado
7. Large Lemon
8. Strawberries
9. Limes
10. Organic Whole Milk
First, write down which columns you need and which dataframes have them.
Next, merge these into a single dataframe.
Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.
## Reshape Data Section
- Replicate the lesson code
- Complete the code cells we skipped near the beginning of the notebook
- Table 2 --> Tidy
- Tidy --> Table 2
- Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
```
orders = pd.read_csv('orders.csv')
order_products__prior = pd.read_csv('order_products__prior.csv')
order_products__train = pd.read_csv('order_products__train.csv')
order_products__test = pd.read_csv('order_products__train.csv')
orders.shape, order_products__prior.shape, order_products__train.shape, order_products__test.shape
# what we want: product_name, product_id
# df
# under orders: 'eval_set': which evaluation set this order belongs
# so we need to add product_id to the orders df via the 'eval_set'
# first we need to concat files
order_products = pd.concat([order_products__prior, order_products__train])
print(order_products.shape)
order_products.head()
# order_id
# product_id
# add_to_cart_order
columns = ['order_id', 'product_id', 'add_to_cart_order']
merged = pd.merge(orders, order_products[columns], how='inner', on='order_id')
merged.head()
# now we have to merge merged and products via product_id
# we want to add product_name
products = pd.read_csv('products.csv')
final = pd.merge(merged, products[['product_id', 'product_name']], how='inner', on='product_id')
final.head()
# now tally the top orders
# Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.
final['product_name'].value_counts().head(10)
```
**Reshape Data**
```
flights = sns.load_dataset('flights')
flights.head()
flights.pivot_table(index='year', columns='month', values='passengers')
```
## Join Data Stretch Challenge
The [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of "**Popular products** purchased earliest in the day (green) and latest in the day (red)."
The post says,
> "We can also see the time of day that users purchase specific products.
> Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening.
> **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**"
Your challenge is to reproduce the list of the top 25 latest ordered popular products.
We'll define "popular products" as products with more than 2,900 orders.
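One way to start (a sketch with toy data; the real threshold is 2,900 orders) is to count orders per product, keep the popular ones, and rank them by mean order hour:

```python
import pandas as pd

# Toy data: product "A" is popular and ordered late, "B" is not popular
df = pd.DataFrame({'product_name': ['A'] * 4 + ['B'] * 2,
                   'order_hour_of_day': [20, 21, 22, 23, 9, 10]})

counts = df['product_name'].value_counts()
popular = counts[counts > 3].index          # toy threshold instead of 2,900
mean_hour = (df[df['product_name'].isin(popular)]
             .groupby('product_name')['order_hour_of_day'].mean())
print(mean_hour.tolist())  # [21.5]
```

Sorting that mean-hour series descending and taking the first 25 rows would give the "latest ordered" ranking on the full dataset.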
## Reshape Data Stretch Challenge
_Try whatever sounds most interesting to you!_
- Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product"
- Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases"
- Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)
- Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
```
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
%matplotlib notebook
import numpy as np
import math
import matplotlib.pyplot as plt
from scipy import signal
import ipywidgets as widgets
import control as c
import sympy as sym
from IPython.display import Latex, display, Markdown # For displaying Markdown and LaTeX code
from fractions import Fraction
import matplotlib.patches as patches
```
## Dominant pole approximation
When studying the behavior of systems, they are often approximated by a single dominant pole or by a pair of dominant complex poles.
The second-order system presented here is defined by the following transfer function:
\begin{equation}
G(s)=\frac{\alpha\beta}{(s+\alpha)(s+\beta)}=\frac{1}{(\frac{1}{\alpha}s+1)(\frac{1}{\beta}s+1)},
\end{equation}
where $\beta=1$ and $\alpha$ is variable.
The third-order system is instead defined by the following transfer function:
\begin{equation}
G(s)=\frac{\alpha{\omega_0}^2}{\big(s+\alpha\big)\big(s^2+2\zeta\omega_0s+\omega_0^2\big)}=\frac{1}{(\frac{1}{\alpha}s+1)(\frac{1}{\omega_0^2}s^2+\frac{2\zeta}{\omega_0}s+1)},
\end{equation}
where $\omega_0=4.1$ and $\zeta=0.24$, with $\alpha$ variable.
---
### How to use this notebook
Toggle between the second- and third-order system, and move the slider to change the position of the movable pole $\alpha$.
<sub>This notebook is based on the following [tutorial](https://lpsa.swarthmore.edu/PZXferStepBode/DomPole.html "The Dominant Pole Approximation") by Prof. Erik Cheever.</sub>
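As a quick numeric check of the idea (a sketch, assuming $a=20$ and $b=1$): for widely separated real poles, the exact step response of $G(s)=ab/((s+a)(s+b))$ stays close to the dominant first-order response $1-e^{-bt}$, with the error bounded by $b/(a-b)$:

```python
from math import exp

# Step response of G(s) = a*b / ((s+a)(s+b)) with b = 1 (unit DC gain),
# versus the dominant-pole approximation 1/(s+1).  Hypothetical a = 20.
a, b = 20.0, 1.0

def y_exact(t):
    return 1 - (a * exp(-b * t) - b * exp(-a * t)) / (a - b)

def y_dominant(t):
    return 1 - exp(-b * t)

ts = [i / 10 for i in range(1, 101)]
max_err = max(abs(y_exact(t) - y_dominant(t)) for t in ts)
# error = (e^{-a t} - e^{-t}) / (a - 1), so it is bounded by 1/(a - 1)
assert max_err < 1 / (a - b)
print(f"max deviation: {max_err:.4f}")
```

The farther the fast pole $-a$ moves into the left half-plane, the smaller this bound becomes, which is exactly the behavior the interactive slider below lets you explore.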
```
# System selector buttons
style = {'description_width': 'initial','button_width': '200px'}
typeSelect = widgets.ToggleButtons(
options=[('Second-order system', 0), ('Third-order system', 1),],
description='Select: ',style=style)
display(typeSelect)
continuous_update=False
# set up plot
fig, ax = plt.subplots(2,1,figsize=[9.8,7],num='Dominant pole approximation')
plt.subplots_adjust(hspace=0.35)
ax[0].grid(True)
ax[1].grid(True)
# ax[2].grid(which='both', axis='both', color='lightgray')
ax[0].axhline(y=0,color='k',lw=.8)
ax[1].axhline(y=0,color='k',lw=.8)
ax[0].axvline(x=0,color='k',lw=.8)
ax[1].axvline(x=0,color='k',lw=.8)
ax[0].set_xlabel('Re')
ax[0].set_ylabel('Im')
ax[0].set_xlim([-10,0.5])
ax[1].set_xlim([-0.5,20])
ax[1].set_xlabel('$t$ [s]')
ax[1].set_ylabel('input, output')
ax[0].set_title('Pole-zero map')
ax[1].set_title('Response')
plotzero, = ax[0].plot([], [])
response, = ax[1].plot([], [])
responseAdom, = ax[1].plot([], [])
responseBdom, = ax[1].plot([], [])
ax[1].step([0,50],[0,1],color='C0',label='input')
# generate x values
def response_func(a,index):
global plotzero, response, responseAdom, responseBdom
# global bodePlot, bodePlotAdom, bodePlotBdom
t = np.linspace(0, 50, 1000)
if index==0:
b=1
num=a*b
den=([1,a+b,a*b])
tf_sys=c.TransferFunction(num,den)
poles_sys,zeros_sys=c.pzmap(tf_sys, Plot=False)
tout, yout = c.step_response(tf_sys,t)
den1=([1,a])
tf_sys1=c.TransferFunction(a,den1)
toutA, youtA = c.step_response(tf_sys1,t)
den2=([1,b])
tf_sys2=c.TransferFunction(b,den2)
toutB, youtB = c.step_response(tf_sys2,t)
mag, phase, omega = c.bode_plot(tf_sys, Plot=False) # Bode-plot
magA, phase, omegaA = c.bode_plot(tf_sys1, Plot=False) # Bode-plot
magB, phase, omegaB = c.bode_plot(tf_sys2, Plot=False) # Bode-plot
s=sym.Symbol('s')
eq=(a*b/((s+a)*(s+b)))
eq1=1/(((1/a)*s+1)*((1/b)*s+1))
display(Markdown('The variable pole (purple curve) $\\alpha$ equals %.1f and the fixed pole (red curve) $b$ equals %i; the transfer function is:'%(a,1)))
display(eq),display(Markdown('or')),display(eq1)
elif index==1:
omega0=4.1
zeta=0.24
num=a*omega0**2
den=([1,2*zeta*omega0+a,omega0**2+2*zeta*omega0*a,a*omega0**2])
tf_sys=c.TransferFunction(num,den)
poles_sys,zeros_sys=c.pzmap(tf_sys, Plot=False)
tout, yout = c.step_response(tf_sys,t)
den1=([1,a])
tf_sys1=c.TransferFunction(a,den1)
toutA, youtA = c.step_response(tf_sys1,t)
den2=([1,2*zeta*omega0,omega0**2])
tf_sys2=c.TransferFunction(omega0**2,den2)
toutB, youtB = c.step_response(tf_sys2,t)
mag, phase, omega = c.bode_plot(tf_sys, Plot=False) # Bode-plot
magA, phase, omegaA = c.bode_plot(tf_sys1, Plot=False) # Bode-plot
magB, phase, omegaB = c.bode_plot(tf_sys2, Plot=False) # Bode-plot
s=sym.Symbol('s')
eq=(a*omega0**2/((s+a)*(s**2+2*zeta*omega0*s+omega0*omega0)))
eq1=1/(((1/a)*s+1)*((1/(omega0*omega0))*s*s+(2*zeta*a/omega0)*s+1))
display(Markdown('The variable pole (purple curve) $\\alpha$ equals %.1f; the fixed poles (red curve) are approximately $-1\pm4j$ ($\omega_0 = 4.1$, $\zeta=0.24$). The transfer function is:'%(a)))
display(eq),display(Markdown('or')),display(eq1)
ax[0].lines.remove(plotzero)
ax[1].lines.remove(response)
ax[1].lines.remove(responseAdom)
ax[1].lines.remove(responseBdom)
plotzero, = ax[0].plot(np.real(poles_sys), np.imag(poles_sys), 'xg', markersize=10, label = 'pole')
response, = ax[1].plot(tout,yout,color='C1',label='response',lw=3)
responseAdom, = ax[1].plot(toutA,youtA,color='C4',label='response due to the variable pole alone')
responseBdom, = ax[1].plot(toutB,youtB,color='C3',label='response due to the fixed pole (or pair) alone')
ax[0].legend()
ax[1].legend()
a_slider=widgets.FloatSlider(value=0.1, min=0.1, max=10, step=.1,
description='$\\alpha$:',disabled=False,continuous_update=False,
orientation='horizontal',readout=True,readout_format='.2f',)
input_data=widgets.interactive_output(response_func,{'a':a_slider,'index':typeSelect})
def update_slider(index):
global a_slider
aval=[0.1,0.1]
a_slider.value=aval[index]
input_data2=widgets.interactive_output(update_slider,{'index':typeSelect})
display(a_slider,input_data)
```
```
#default_exp models
#export
import torch, kornia
from fastai.basics import *
from fastai.vision.all import *
```
# Custom models
> Transformer net.
```
#export
def transformer_net(): return TransformerNet()
#export
class TransformerNet(Module):
def __init__(self):
# Initial convolution layers
self.conv1 = _ConvLayer(3, 32, kernel_size=9, stride=1)
self.in1 = torch.nn.InstanceNorm2d(32, affine=True)
self.conv2 = _ConvLayer(32, 64, kernel_size=3, stride=2)
self.in2 = torch.nn.InstanceNorm2d(64, affine=True)
self.conv3 = _ConvLayer(64, 128, kernel_size=3, stride=2)
self.in3 = torch.nn.InstanceNorm2d(128, affine=True)
# Residual layers
self.res1 = _ResidualBlock(128)
self.res2 = _ResidualBlock(128)
self.res3 = _ResidualBlock(128)
self.res4 = _ResidualBlock(128)
self.res5 = _ResidualBlock(128)
# Upsampling Layers
self.deconv1 = PixelShuffle_ICNR(128, 64, norm_type=None)
self.in4 = torch.nn.InstanceNorm2d(64, affine=True)
self.deconv2 = PixelShuffle_ICNR(64, 32, norm_type=None)
self.in5 = torch.nn.InstanceNorm2d(32, affine=True)
self.deconv3 = _ConvLayer(32, 3, kernel_size=9, stride=1)
# Non-linearities
self.relu = torch.nn.ReLU()
self.sigm = torch.nn.Sigmoid()
def forward(self, X):
y = self.relu(self.in1(self.conv1(X)))
y = self.relu(self.in2(self.conv2(y)))
y = self.relu(self.in3(self.conv3(y)))
y = self.res1(y)
y = self.res2(y)
y = self.res3(y)
y = self.res4(y)
y = self.res5(y)
y = self.relu(self.in4(self.deconv1(y)))
y = self.relu(self.in5(self.deconv2(y)))
y = self.deconv3(y)
y = self.sigm(y)
return y
#export
class TransformerNet2(Module):
def __init__(self, n):
# Initial convolution layers
# self.w_conv = torch.nn.Conv2d(n, 32, kernel_size=1, stride=1)
self.w_dense = torch.nn.Linear(n, 32)
self.conv1 = _ConvLayer(3, 32, kernel_size=9, stride=1)
self.in1 = torch.nn.InstanceNorm2d(32, affine=True)
self.conv2 = _ConvLayer(32, 64, kernel_size=3, stride=2)
self.in2 = torch.nn.InstanceNorm2d(64, affine=True)
self.conv3 = _ConvLayer(64, 128, kernel_size=3, stride=2)
self.in3 = torch.nn.InstanceNorm2d(128, affine=True)
# Residual layers
nf = 128 + n
self.res1 = _ResidualBlock(nf)
self.res2 = _ResidualBlock(nf)
self.res3 = _ResidualBlock(nf)
self.res4 = _ResidualBlock(nf)
self.res5 = _ResidualBlock(nf)
# Upsampling Layers
self.deconv1 = PixelShuffle_ICNR(nf, 64, norm_type=None)
self.in4 = torch.nn.InstanceNorm2d(64, affine=True)
self.deconv2 = PixelShuffle_ICNR(64, 32, norm_type=None)
self.in5 = torch.nn.InstanceNorm2d(32, affine=True)
self.deconv3 = _ConvLayer(32, 3, kernel_size=9, stride=1)
# Non-linearities
self.relu = torch.nn.ReLU()
self.sigm = torch.nn.Sigmoid()
def forward(self, X):
y = self.relu(self.in1(self.conv1(X.im)))
y = self.relu(self.in2(self.conv2(y)))
y = self.relu(self.in3(self.conv3(y)))
y2 = X.ws[..., None, None].expand(-1,-1,*y.shape[2:])
y = torch.cat((y,y2), axis=1)
y = self.res1(y)
y = self.res2(y)
y = self.res3(y)
y = self.res4(y)
y = self.res5(y)
y = self.relu(self.in4(self.deconv1(y)))
y = self.relu(self.in5(self.deconv2(y)))
y = self.deconv3(y)
y = self.sigm(y)
return y
#export
class _ConvLayer(Module):
def __init__(self, in_channels, out_channels, kernel_size, stride):
reflection_padding = kernel_size // 2
self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
def forward(self, x):
out = self.reflection_pad(x)
out = self.conv2d(out)
return out
#export
class _ResidualBlock(Module):
"""_ResidualBlock
introduced in: https://arxiv.org/abs/1512.03385
recommended architecture: http://torch.ch/blog/2016/02/04/resnets.html
"""
def __init__(self, channels):
super(_ResidualBlock, self).__init__()
self.conv1 = _ConvLayer(channels, channels, kernel_size=3, stride=1)
self.in1 = torch.nn.InstanceNorm2d(channels, affine=True)
self.conv2 = _ConvLayer(channels, channels, kernel_size=3, stride=1)
self.in2 = torch.nn.InstanceNorm2d(channels, affine=True)
self.relu = torch.nn.ReLU()
def forward(self, x):
residual = x
out = self.relu(self.in1(self.conv1(x)))
out = self.in2(self.conv2(out))
out = out + residual
return out
```
## Export -
```
#hide
from nbdev.export import *
notebook2script()
```
```
import numpy as np
import random
from math import *
import time
import copy
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR, MultiStepLR
torch.cuda.set_device(2)
torch.set_default_tensor_type('torch.DoubleTensor')
# activation function
def activation(x):
return x * torch.sigmoid(x)
# build neural network
class Net(torch.nn.Module):
def __init__(self,input_width,layer_width):
super(Net,self).__init__()
self.layer_in = torch.nn.Linear(input_width, layer_width)
self.layer1 = torch.nn.Linear(layer_width, layer_width)
self.layer2 = torch.nn.Linear(layer_width, layer_width)
self.layer_out = torch.nn.Linear(layer_width, 1)
def forward(self,x):
y = self.layer_in(x)
y = activation(self.layer2(activation(self.layer1(y))))
output = self.layer_out(y)
return output
dimension = 1
input_width,layer_width = dimension, 3
net = Net(input_width,layer_width).cuda() # network for u on gpu
# definition of the exact solution
def u_ex(x):
temp = 1.0
for i in range(dimension):
temp = temp * torch.sin(pi*x[:, i])
u_temp = 1.0 * temp
return u_temp.reshape([x.size()[0], 1])
# definition of f(x)
def f(x):
temp = 1.0
for i in range(dimension):
temp = temp * torch.sin(pi*x[:, i])
u_temp = 1.0 * temp
f_temp = dimension * pi**2 * u_temp
return f_temp.reshape([x.size()[0],1])
# generate points by random
def generate_sample(data_size):
sample_temp = torch.rand(data_size, dimension)
return sample_temp.cuda()
def model(x):
x_temp = x.cuda()
D_x_0 = torch.prod(x_temp, axis = 1).reshape([x.size()[0], 1])
D_x_1 = torch.prod(1.0 - x_temp, axis = 1).reshape([x.size()[0], 1])
model_u_temp = D_x_0 * D_x_1 * net(x)
return model_u_temp.reshape([x.size()[0], 1])
# loss function for DGM via automatic differentiation
def loss_function(x):
# x = generate_sample(data_size).cuda()
# x.requires_grad = True
u_hat = model(x)
grad_u_hat = torch.autograd.grad(outputs = u_hat, inputs = x, grad_outputs = torch.ones(u_hat.shape).cuda(), create_graph = True)
laplace_u = torch.zeros([len(grad_u_hat[0]), 1]).cuda()
for index in range(dimension):
p_temp = grad_u_hat[0][:, index].reshape([len(grad_u_hat[0]), 1])
temp = torch.autograd.grad(outputs = p_temp, inputs = x, grad_outputs = torch.ones(p_temp.shape).cuda(), create_graph = True, allow_unused = True)[0]
laplace_u = temp[:, index].reshape([len(grad_u_hat[0]), 1]) + laplace_u
part_2 = torch.sum((-laplace_u - f(x))**2) / len(x)
return part_2
data_size = 1000
x = generate_sample(data_size).cuda()
x.requires_grad = True
def get_weights(net):
""" Extract parameters from net, and return a list of tensors"""
return [p.data for p in net.parameters()]
def set_weights(net, weights, directions=None, step=None):
"""
Overwrite the network's weights with a specified list of tensors
or change weights along directions with a step size.
"""
if directions is None:
# You cannot specify a step length without a direction.
for (p, w) in zip(net.parameters(), weights):
p.data.copy_(w.type(type(p.data)))
else:
assert step is not None, 'If a direction is specified then step must be specified as well'
if len(directions) == 2:
dx = directions[0]
dy = directions[1]
changes = [d0*step[0] + d1*step[1] for (d0, d1) in zip(dx, dy)]
else:
changes = [d*step for d in directions[0]]
for (p, w, d) in zip(net.parameters(), weights, changes):
p.data = w + torch.Tensor(d).type(type(w))
def set_states(net, states, directions=None, step=None):
"""
Overwrite the network's state_dict or change it along directions with a step size.
"""
if directions is None:
net.load_state_dict(states)
else:
assert step is not None, 'If direction is provided then the step must be specified as well'
if len(directions) == 2:
dx = directions[0]
dy = directions[1]
changes = [d0*step[0] + d1*step[1] for (d0, d1) in zip(dx, dy)]
else:
changes = [d*step for d in directions[0]]
new_states = copy.deepcopy(states)
assert (len(new_states) == len(changes))
for (k, v), d in zip(new_states.items(), changes):
d = torch.tensor(d)
v.add_(d.type(v.type()))
net.load_state_dict(new_states)
def get_random_weights(weights):
"""
Produce a random direction that is a list of random Gaussian tensors
with the same shape as the network's weights, so one direction entry per weight.
"""
return [torch.randn(w.size()) for w in weights]
def get_random_states(states):
"""
Produce a random direction that is a list of random Gaussian tensors
with the same shape as the network's state_dict(), so one direction entry
per weight, including BN's running_mean/var.
"""
return [torch.randn(w.size()) for k, w in states.items()]
def get_diff_weights(weights, weights2):
""" Produce a direction from 'weights' to 'weights2'."""
return [w2 - w for (w, w2) in zip(weights, weights2)]
def get_diff_states(states, states2):
""" Produce a direction from 'states' to 'states2'."""
return [v2 - v for (k, v), (k2, v2) in zip(states.items(), states2.items())]
def normalize_direction(direction, weights, norm='filter'):
"""
Rescale the direction so that it has similar norm as their corresponding
model in different levels.
Args:
direction: a variables of the random direction for one layer
weights: a variable of the original model for one layer
norm: normalization method, 'filter' | 'layer' | 'weight'
"""
if norm == 'filter':
# Rescale the filters (weights in group) in 'direction' so that each
# filter has the same norm as its corresponding filter in 'weights'.
for d, w in zip(direction, weights):
d.mul_(w.norm()/(d.norm() + 1e-10))
elif norm == 'layer':
# Rescale the layer variables in the direction so that each layer has
# the same norm as the layer variables in weights.
direction.mul_(weights.norm()/direction.norm())
elif norm == 'weight':
# Rescale the entries in the direction so that each entry has the same
# scale as the corresponding weight.
direction.mul_(weights)
elif norm == 'dfilter':
# Rescale the entries in the direction so that each filter direction
# has the unit norm.
for d in direction:
d.div_(d.norm() + 1e-10)
elif norm == 'dlayer':
# Rescale the entries in the direction so that each layer direction has
# the unit norm.
direction.div_(direction.norm())
def normalize_directions_for_weights(direction, weights, norm='filter', ignore='biasbn'):
"""
The normalization scales the direction entries according to the entries of weights.
"""
assert(len(direction) == len(weights))
for d, w in zip(direction, weights):
if d.dim() <= 1:
if ignore == 'biasbn':
d.fill_(0) # ignore directions for weights with 1 dimension
else:
d.copy_(w) # keep directions for weights/bias that are only 1 per node
else:
normalize_direction(d, w, norm)
def normalize_directions_for_states(direction, states, norm='filter', ignore='ignore'):
assert(len(direction) == len(states))
for d, (k, w) in zip(direction, states.items()):
if d.dim() <= 1:
if ignore == 'biasbn':
d.fill_(0) # ignore directions for weights with 1 dimension
else:
d.copy_(w) # keep directions for weights/bias that are only 1 per node
else:
normalize_direction(d, w, norm)
def ignore_biasbn(directions):
""" Set bias and bn parameters in directions to zero """
for d in directions:
if d.dim() <= 1:
d.fill_(0)
def create_random_direction(net, dir_type='weights', ignore='biasbn', norm='filter'):
"""
Setup a random (normalized) direction with the same dimension as
the weights or states.
Args:
net: the given trained model
dir_type: 'weights' or 'states', type of directions.
ignore: 'biasbn', ignore biases and BN parameters.
norm: direction normalization method, including
'filter" | 'layer' | 'weight' | 'dlayer' | 'dfilter'
Returns:
direction: a random direction with the same dimension as weights or states.
"""
# random direction
if dir_type == 'weights':
weights = get_weights(net) # a list of parameters.
direction = get_random_weights(weights)
normalize_directions_for_weights(direction, weights, norm, ignore)
elif dir_type == 'states':
states = net.state_dict() # a dict of parameters, including BN's running mean/var.
direction = get_random_states(states)
normalize_directions_for_states(direction, states, norm, ignore)
return direction
def tvd(m, l_i):
# load model parameters
pretrained_dict = torch.load('net_params_DGM.pkl')
# get state_dict
net_state_dict = net.state_dict()
# remove keys that do not belong to net_state_dict
pretrained_dict_1 = {k: v for k, v in pretrained_dict.items() if k in net_state_dict}
# update dict
net_state_dict.update(pretrained_dict_1)
# set new dict back to net
net.load_state_dict(net_state_dict)
weights_temp = get_weights(net)
states_temp = net.state_dict()
step_size = 2 * l_i / m
grid = np.arange(-l_i, l_i + step_size, step_size)
num_direction = 1
loss_matrix = torch.zeros((num_direction, len(grid)))
for temp in range(num_direction):
weights = weights_temp
states = states_temp
direction_temp = create_random_direction(net, dir_type='weights', ignore='biasbn', norm='filter')
normalize_directions_for_states(direction_temp, states, norm='filter', ignore='ignore')
directions = [direction_temp]
for dx in grid:
itemindex_1 = np.argwhere(grid == dx)
step = dx
set_states(net, states, directions, step)
loss_temp = loss_function(x)
loss_matrix[temp, itemindex_1[0]] = loss_temp
# clear memory
torch.cuda.empty_cache()
# get state_dict
net_state_dict = net.state_dict()
# remove keys that do not belong to net_state_dict
pretrained_dict_1 = {k: v for k, v in pretrained_dict.items() if k in net_state_dict}
# update dict
net_state_dict.update(pretrained_dict_1)
# set new dict back to net
net.load_state_dict(net_state_dict)
weights_temp = get_weights(net)
states_temp = net.state_dict()
interval_length = grid[-1] - grid[0]
TVD = 0.0
for temp in range(num_direction):
for index in range(loss_matrix.size()[1] - 1):
TVD = TVD + np.abs(float(loss_matrix[temp, index] - loss_matrix[temp, index + 1]))
Max = np.max(loss_matrix.detach().numpy())
Min = np.min(loss_matrix.detach().numpy())
TVD = TVD / interval_length / num_direction / (Max - Min)
return TVD, Max, Min
M = 100
m = 100
l_i = 0.02
TVD_DGM = 0.0
time_start = time.time()
Max = []
Min = []
Result = []
for count in range(M):
TVD_temp, Max_temp, Min_temp = tvd(m, l_i)
# print(Max_temp, Min_temp)
Max.append(Max_temp)
Min.append(Min_temp)
Result.append(TVD_temp)
print('Current direction TVD of DGM is: ', TVD_temp)
TVD_DGM = TVD_DGM + TVD_temp
print((count + 1) / M * 100, '% finished.')
# print('Max of all is: ', np.max(Max))
# print('Min of all is: ', np.min(Min))
TVD_DGM = TVD_DGM / M # / (np.max(Max) - np.min(Min))
print('All directions average TVD of DGM is: ', TVD_DGM)
# print('Variance TVD of DGM is: ', np.sqrt(np.var(Result / (np.max(Max) - np.min(Min)), ddof = 1)))
print('Variance TVD of DGM is: ', np.sqrt(np.var(Result, ddof = 1)))
time_end = time.time()
print('Total time costs: ', time_end - time_start, 'seconds')
```
# Block A1.4 of Module A1
## A1.4.1 IF - ELSE
Besides arithmetic operators, _Python_ also has conditional statements: they let a program perform an action only when a certain condition holds.
The conditional statement in _Python_ is written as `if`. Let's work through an example of how it operates.
Suppose we have a variable `a` containing a number; we want to check whether the number is even and print the corresponding message.
( _The % operator gives the remainder of a division_ )
```
a = 4
if a % 2 == 0:
print('Число чётное.')
```
In this example our program tells the computer: if the remainder of dividing `a` by `2` equals zero, print _«Число чётное»_ ("The number is even").
**Important!**
Note the `==` (double equals) operator we used here. Unlike the familiar `=` (single equals), it is used not for assigning values but **for comparing** two objects.
**Comparison operators**
Operator and its meaning:
* `>` greater than
* `>=` greater than or equal to
* `<` less than
* `<=` less than or equal to
* `!=` not equal to
* `==` equal to
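Each of the operators above evaluates to a boolean value, which can be printed or checked directly. A minimal illustration:

```python
a, b = 4, 7

print(a == b)   # False: 4 is not equal to 7
print(a != b)   # True
print(a < b)    # True
print(a <= 4)   # True: <= also allows equality
```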
In the example above our code acts only when the condition is met. What if we also want to do something when the condition is not met, for instance print that the number is not even?
When there are only two possible situations (condition met or not met), it is enough to extend the code with an `else` clause.
```
a = 3
if a % 2 == 0:
print('Число чётное.')
else:
print('Число нечётное.')
```
**Important!**
The `else` keyword also requires a colon after it and an indented block on the next line.
Now the program reads: if the remainder of dividing `a` by `2` equals zero, print "_Число чётное_", otherwise print "_Число нечётное_".
**A1.4.1.1 Exercise 1**
Write the following conditional expressions in Python, without spaces:
___
The value of variable `a` equals the value of variable `b`
Answer: `a==b`
___
The value of variable `a` is greater than 2000
Answer: `a>2000`
___
The value of variable `name` equals `'Саша'`
Answer: `name=='Саша'`
___
The value of variable `name` is not equal to `'Паша'`
Answer: `name!='Паша'`
**A1.4.1.2 Exercise 2**
What errors are there in the following block of code?
```
a = int(input())
b = int(input())
if a = b
print('Введены одинаковые числа')
else
print('Введены разные числа')
```
* Extra indentation before `if` and `else` __[correct]__
* No indentation before the variable declarations
* No indentation before `print` __[correct]__
* Missing colon after `else` and `if` __[correct]__
* The equality test uses the wrong operator (`=` instead of `==`) __[correct]__
**A1.4.1.3 Exercise 3**
Imagine you need to write a program that takes a word from the user (variable `word`), determines whether it is short or long, and prints that information. Words with **`strictly more than seven`** letters count as long, all others as short. The program's output should be the phrase **`"Это слово короткое."`** ("This word is short.") or **`"Это слово длинное."`** ("This word is long."). Write this program.
For testing use **`word = 'ОченьДлинноеСлово'`**.
```
word = 'ОченьДлинноеСлово'
if len(word)>7:
print('Это слово длинное.')
else:
print('Это слово короткое.')
```
**A1.4.1.4 Exercise 4**
The letter **`"ф"`** is said to occur more rarely in Russian than any other letter. You decided to write a program that lets users enter words and checks whether a word can be considered rare. We will call a word rare if it contains the letter **`"ф"`**. The program should print one of two phrases: **`"Ого! Вы ввели редкое слово!"`** ("Wow! You entered a rare word!") if the word contains **`"ф"`**, or **`"Эх, это не очень редкое слово..."`** ("Eh, this is not a very rare word...") if it does not.
**Note**: you can check whether a letter occurs in a word with the `in` operator. Example:
```
if 'г' in 'голубь':
print('Буква "г" в слове есть.')
```
an equivalent of this construct:
```
if ('г' in 'голубь') == True:
print('Буква "г" в слове есть.')
```
To test the program, use **`word = 'ОченьДлинноеСлово'`**.
```
word = 'ОченьДлинноеСлово'
if('ф' in word)==True:
print('Ого! Вы ввели редкое слово!')
else:
print('Эх, это не очень редкое слово...')
```
**A1.4.1.5 Exercise 5**
Modify the previous program so that it checks for the presence of any letter chosen by the user rather than the specific letter **`"ф"`**. The program should print the phrase **`"Выбранной буквы нет в введённом слове"`** ("The chosen letter is not in the entered word") or **`"Выбранная буква есть в введённом слове"`** ("The chosen letter is in the entered word").
To test the program, use **`word = 'ОченьДлинноеСлово'`**, **`letter = 'и'`**.
```
word = 'ОченьДлинноеСлово'
letter = 'и'
if(letter in word)==False:
print('Выбранной буквы нет в введённом слове')
else:
print('Выбранная буква есть в введённом слове')
```
**A1.4.1.6 Exercise 6**
Write a program that checks whether the square root of a given positive integer **`number`** is itself an integer.
If taking the square root yields an integer, print the result of that operation; if not, print the phrase **`"Квадратный корень из ... - не целое число."`** ("The square root of ... is not an integer."), with the value of **`number`** in place of the ellipsis.
To test the program, use **`number = 169`**.
**Hint**: note that the square root must be printed as an integer, i.e., without a decimal separator or fractional part.
```
number = 169
if number**(1/2)%1==0.0:
print(int(number**(1/2)))
else:
print('Квадратный корень из', number, '- не целое число.')
```
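For large inputs, `number**(1/2)` goes through floating point and can misjudge perfect squares due to rounding. A more robust sketch (not part of the original course material) uses the standard-library `math.isqrt`, available since Python 3.8:

```python
import math

number = 169

root = math.isqrt(number)      # integer square root, no floating point involved
if root * root == number:
    print(root)                # prints 13
else:
    print('Квадратный корень из', number, '- не целое число.')
```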
## A1.4.2 IF ... ELIF ... ELSE
Of course, our tasks do not always have just two possible outcomes; in fact, exactly two is rare.
When several conditions must be checked in sequence, another useful operator comes to the rescue: `elif`.
It sits between `if` and `else` and can be repeated as many times as needed.
Suppose we need to compare two given numbers. There are three possible outcomes: the numbers are equal, the first is less than the second, or the first is greater than the second. Such a simple check in _Python_ would look like this:
```
a = 4
b = 5
if a == b:
print('Числа равны.')
elif a < b:
print('Первое число меньше второго.')
else:
print('Первое число больше второго.')
```
The syntax of `elif` mirrors that of `if`: the keyword is followed by the condition to check, then a colon, an indented new line, and the action to take when the condition holds.
What does the computer do in this code?
First it checks the condition after `if`; if it holds, it executes the statement after the colon. If not, it checks the condition after `elif`; if that holds, it executes its statement; otherwise it executes the statement after `else`.
As mentioned, unlike `if` and `else`, `elif` can appear many times when many conditions need checking.
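The order just described matters: conditions are tested top to bottom, and only the first matching branch runs, even if later conditions would also be true. A small illustration:

```python
a = 10

# Both conditions below hold for a = 10,
# but only the first matching branch runs
if a > 5:
    print('a > 5')     # this branch executes
elif a > 2:
    print('a > 2')     # skipped, even though a > 2 is also true
```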
**A1.4.2.1 Exercise 1**
Write a program that takes a number from `1` to `7` from the user (variable `number`) and prints the corresponding day of the week (in Russian, lowercase). For example, entering **`1`** prints **`понедельник`** ("Monday").
For testing use **`number = 5`**.
```
#number = int(input())
number=5
if number==1:
print('понедельник')
elif number==2:
print('вторник')
elif number==3:
print('среда')
elif number==4:
print('четверг')
elif number==5:
print('пятница')
elif number==6:
print('суббота')
elif number==7:
print('воскресенье')
else:
print('Введите число от 1 до 7')
```
**A1.4.2.2 Exercise 2**
Body mass index (BMI) is a simple yet important indicator of a person's health. It is calculated as the person's weight (in kilograms) divided by the square of their height (in meters). A BMI within the range **`18.5 to 24.99`** inclusive is considered a normal weight-to-height ratio. Values below this range signal underweight; values above it, overweight.
Help doctors automate BMI-based decisions. Write a program where a person can enter their weight (**`weight`**) and height (**`height`**) and then read the verdict: **`"Недостаточная масса тела"`** (underweight), **`"Норма"`** (normal), or **`"Избыточная масса тела"`** (overweight).
For testing use **`weight = 80`** and **`height = 1.8`**.
```
#weight = float(input())
#height = float(input())
weight=80
height=1.8
IMT=weight/(height**2)
if IMT<18.5:
print('Недостаточная масса тела')
elif IMT<=24.99:
print('Норма')
else:
print('Избыточная масса тела')
```
**A1.4.2.3 Exercise 3**
When entering grades in the register, an instructor must not only record the student's grade but also spell it out. The university uses the following scale: **below 4** is **`'неудовлетворительно'`** (fail), **4-5** is **`'удовлетворительно'`** (satisfactory), **6-7** is **`'хорошо'`** (good), **8 and above** is **`'отлично'`** (excellent). Write a program that takes the grade (`mark`) and spells it out accordingly.
**The entered grade is an integer from 1 to 10.**
For testing use **`mark = 7`**.
```
mark = 7
if mark < 1 or mark > 10:
    print('Введите число от 1 до 10')
elif mark < 4:
    print('неудовлетворительно')
elif mark <= 5:
    print('удовлетворительно')
elif mark <= 7:
    print('хорошо')
else:
    print('отлично')
```
**A1.4.2.4 Exercise 4**
Sometimes a person wants to go to a restaurant. They can afford it if they have **`more than 5 thousand rubles`** left before payday (**`balance`**). With **`5 thousand rubles or less`** but **`2.5 thousand rubles or more`**, only fast food is an option. With **`less than 2.5 thousand rubles, they will have to hold out until payday`**. Write a program that takes the amount of money remaining (in rubles) and says whether the person can go to a restaurant (the program's possible answers are **`"Сегодня твой выбор - ресторан!"`**, **`"Эх, только фастфуд."`**, and **`"Придётся потерпеть!"`**).
For testing use **`balance = 455`**.
```
#balance = float(input())
balance =455
if balance<2500:
print('Придётся потерпеть!')
elif balance<=5000:
print('Эх, только фастфуд.')
else:
print('Сегодня твой выбор - ресторан!')
```
**A1.4.2.5 Exercise 5**
Write a program that takes an integer (`number`) and checks its divisibility by `2`, `3`, and `5`. If it is **`divisible by 2 with no remainder`**, print `"Число делится на 2 без остатка."`; otherwise, if **`divisible by 3`**, print `"Число делится на 3 без остатка."`; otherwise, if **`divisible by 5`**, print `"Число делится на 5 без остатка."`; and otherwise print `"Число не делится ни на 2, ни на 3, ни на 5 без остатка!"`.
For testing use **`number = 452`**.
```
number=452
if number % 2 == 0:
    print('Число делится на 2 без остатка.')
elif number % 3 == 0:
    print('Число делится на 3 без остатка.')
elif number % 5 == 0:
    print('Число делится на 5 без остатка.')
else:
print('Число не делится ни на 2, ни на 3, ни на 5 без остатка!')
```
## A1.4.3 Simple and compound conditions
Sometimes there are several conditions, but they need to be checked simultaneously rather than in sequence.
The **`and`** and **`or`** operators solve this problem. With **`and`**, the expression counts as true when both conditions, to the left and to the right of the operator, are met. With **`or`**, the expression counts as true if at least one of the two conditions around the operator is met.
Consider an example. We will call a word pleasant if it contains any of the sonorant consonants **`л`**, **`м`**, **`н`**. We will call a word unpleasant if it contains none of those sonorants but does contain any of the sibilants **`ч`**, **`ш`**, **`щ`**. In all other cases we will call the word neutral.
What would the code determining a given word's group look like?
**P.S.** **`in`** checks whether the substring to the left of **`in`** occurs in the string to the right of **`in`**.
```
word='солнышко'
if 'л' in word or 'м' in word or 'н' in word:
print('Это приятное слово.')
elif 'ч' in word or 'ш' in word or 'щ' in word:
print('Это не приятное слово.')
else:
print('Это нейтральное слово.')
```
First the computer checks, using the **`or`** operator, whether the given word contains at least one sonorant consonant. If none is present, it performs the next check: whether the word contains any of the sibilants. When that condition also fails, it moves on to the **`else`** branch and performs the final action. For the word «солнышко», the action after the very first condition fires.
You can combine different conditions quite flexibly and build genuinely complex ones when necessary.
**`AND`**
* `True` and `True` => `True`
* `True` and `False` => `False`
* `False` and `True` => `False`
* `False` and `False` => `False`
**`OR`**
* `True` or `True` => `True`
* `True` or `False` => `True`
* `False` or `True` => `True`
* `False` or `False` => `False`
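The truth tables above can be verified directly in the interpreter; each `print` below corresponds to one row:

```python
# and is True only when both operands are True
print(True and True)     # True
print(True and False)    # False

# or is False only when both operands are False
print(True or False)     # True
print(False or False)    # False
```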
**A1.4.3.1 Exercise 1**
For each of the following expressions, decide whether it is true (**`True`**) or false (**`False`**):
**`5 < 3 or 6 > 7 or 4 > 9`**
* True
* False **[correct]**
___
**`x = 5`**
**`y = 10`**
**`y > x * x or y >= 2 * x and x < y`**
* True **[correct]**
* False
___
**`a = True`**
**`b = False`**
**`a and b or not a and not b`**
* True
* False **[correct]**
___
**`"239" < "30" and 239 > 30`**
* True **[correct]**
* False
___
**`"5a"<"5b" and 123/2 > 30 or True`**
* True **[correct]**
* False
___
**`False and True or 12<5**2 and "Python" > "Ruby"`**
* True
* False **[correct]**
___
**`"Android" > "Apple" or "Apple" > "Android"`**
* True **[correct]**
* False
___
**`x = 10`**
**`y = 15`**
`y > x**y - 1000 or "test" > "push"`
* True **[correct]**
* False
___
**A1.4.3.2 Exercise 2**
Anya loves flowers very much, but far from all of them: only blue or white roses. Write a program that takes the color of the flowers ( _`color`_ ) and their name ( _`flower`_ ) in base form and says whether Anya will like them. (Possible answers: **`"Ане понравятся эти цветы"`**, meaning "Anya will like these flowers", and **`"Аня не фанат таких цветов"`**, meaning "Anya is not a fan of such flowers").
For testing use **`flower = 'роза'`** and **`color = 'фиолетовый'`**.
```
flower='роза'
color='фиолетовый'
if (color=='синий' or color=='белый') and flower=='роза':
print('Ане понравятся эти цветы')
else:
print('Аня не фанат таких цветов')
```
**A1.4.3.3 Exercise 3**
You work at a dating agency. A certain Alla has approached you. She is looking for a man who is **`taller than 170 cm`**, weighs **`less than 80 kg`**, and whose **`favorite color is red`**. Write a program that determines whether a given candidate suits Alla.
The program should print **`"Ваша половинка нашлась!"`** ("Your other half has been found!") if he does, and **`"Попробуем поискать ещё..."`** ("Let's keep looking...") if not.
For testing use **`height = 180`**, **`weight = 92`**, and **`color = 'синий'`**.
```
height=180
weight=92
color='синий'
if height>170 and weight<80 and color=='красный':
print('Ваша половинка нашлась!')
else:
print('Попробуем поискать ещё...')
```
**A1.4.3.4 Exercise 4**
To do his math homework, a boy named Vova needs a program that determines whether any of the following numbers are among the divisors of a given number: **`2`**, **`5`**, **`173`**, or **`821`**. Write a program that takes an integer (**`number`**) and answers Vova with `"Вова, это нужное число"` ("Vova, this is the right number") if at least one of **`2`**, **`5`**, **`173`**, **`821`** divides it, or `"Вова, в этот раз ты не попал"` ("Vova, you missed this time") if not.
For testing use **`number = 346`**.
```
number=346
if (number % 2 == 0) or (number % 5 == 0) or (number % 173 == 0) or (number % 821 == 0):
print('Вова, это нужное число')
else:
print('Вова, в этот раз ты не попал')
```
**A1.4.3.5 Exercise 5**
You decided to build a system that picks a programming language for people based on their favorite word. Write a program that can help you with this. The program takes a word (**`fav_word`**); if it is **`"рептилия"`**, **`"питон"`**, or **`"змея"`**, it prints " _`Python`_ "; if it is **`"плюс"`** or **`"плюсы"`**, it prints " _`C++`_ "; if it is **`"рубин"`** or **`"кристалл"`**, it prints " _`Ruby`_ ". And if it is anything else, it also prints " _`Python`_ ".
For testing use **`fav_word = 'Аппликация'`**
```
fav_word='Аппликация'
if fav_word=="плюс" or fav_word=="плюсы":
print('C++')
elif fav_word=="рубин" or fav_word=="кристалл":
print('Ruby')
else:
print('Python')
```
**A1.4.3.6 Exercise 6**
A lecturer teaches classes **`from 10:30 to 12:00`**, **`from 13:40 to 15:00`**, and **`from 18:00 to 19:30`**. He arrives at the university **`at 10 a.m.`** and leaves **`at 8 p.m.`** Any time at the university free of classes he devotes to advising students. Write a program to help students fit into the lecturer's free time: they enter the desired time (first one number for the hour /**`hour`**/, then a second for the minutes /**`minute`**/), and the program reports whether the lecturer is free at that time (printing **`"Преподаватель свободен."`**, "The lecturer is free.", or **`"Преподаватель занят."`**, "The lecturer is busy.", respectively).
For testing use **`hour = 18`** and **`minute = 47`**.
```
hour=18
minute=47
if (hour==10 and minute<30) or (hour==12 and minute>0) or (hour==13 and minute<40) or (hour==15 and minute>0) or (16<=hour<=17) or (hour==19 and minute>30):
print('Преподаватель свободен.')
else:
print('Преподаватель занят.')
```
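An alternative way to structure this answer (a sketch, not part of the original course material) is to convert everything to minutes since midnight and test the busy intervals directly. Here the class intervals are treated as half-open, so the minute a class ends counts as free; the exercise statement leaves that boundary choice open.

```python
hour = 18
minute = 47

t = hour * 60 + minute                 # minutes since midnight
# class intervals as (start, end) in minutes
classes = [(10*60 + 30, 12*60), (13*60 + 40, 15*60), (18*60, 19*60 + 30)]

at_university = 10*60 <= t <= 20*60
busy = any(start <= t < end for start, end in classes)

if at_university and not busy:
    print('Преподаватель свободен.')
else:
    print('Преподаватель занят.')      # prints this for 18:47
```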
Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
```
from __future__ import print_function
import keras
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.summary()
epochs = 200
callbacks = [EarlyStopping(monitor='val_loss', patience=10, verbose=1),
ModelCheckpoint(filepath='weights.h5', verbose=1, save_best_only=True)]
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test),
callbacks=callbacks)
model.save_weights('weights.h5')
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
# OpenVINO example with Squeezenet Model
This notebook illustrates how you can serve [OpenVINO](https://software.intel.com/en-us/openvino-toolkit) optimized models for Imagenet with Seldon Core.

To run all of the notebook successfully you will need to start it with
```
jupyter notebook --NotebookApp.iopub_data_rate_limit=100000000
```
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](../../seldon_core_setup.ipynb#Setup-Cluster) with [Ambassador Ingress](../../seldon_core_setup.ipynb#Ambassador) and [Install Seldon Core](../../seldon_core_setup.ipynb#Install-Seldon-Core). Instructions [also online](./seldon_core_setup.html).
```
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
```
## Deploy Seldon Intel OpenVINO Graph
```
!helm install openvino-squeezenet ../../../helm-charts/seldon-openvino \
--set openvino.model.src=gs://seldon-models/openvino/squeezenet \
--set openvino.model.path=/opt/ml/squeezenet \
--set openvino.model.name=squeezenet1.1 \
--set openvino.model.input=data \
--set openvino.model.output=prob
!helm template openvino-squeezenet ../../../helm-charts/seldon-openvino \
--set openvino.model.src=gs://seldon-models/openvino/squeezenet \
--set openvino.model.path=/opt/ml/squeezenet \
--set openvino.model.name=squeezenet1.1 \
--set openvino.model.input=data \
--set openvino.model.output=prob | pygmentize -l json
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=openvino-model -o jsonpath='{.items[0].metadata.name}')
```
## Test
```
%matplotlib inline
import numpy as np
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
from keras.preprocessing import image
import sys
import json
import matplotlib.pyplot as plt
from seldon_core.seldon_client import SeldonClient
def getImage(path):
    img = image.load_img(path, target_size=(227, 227))
    x = image.img_to_array(img)
    plt.imshow(x/255.)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return x
X = getImage("car.png")
X = X.transpose((0,3,1,2))
print(X.shape)
sc = SeldonClient(deployment_name="openvino-model",namespace="seldon")
response = sc.predict(gateway="ambassador",transport="grpc",data=X, client_return_type="proto")
result = response.response.data.tensor.values
result = np.array(result)
result = result.reshape(1,1000)
with open('imagenet_classes.json') as f:
cnames = eval(f.read())
for i in range(result.shape[0]):
    single_result = result[[i], ...]
    ma = np.argmax(single_result)
    print("\t", i, cnames[ma])
    assert(cnames[ma] == "sports car, sport car")
!helm delete openvino-squeezenet
```
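Under the hood, the class lookup above is just an argmax over the probability vector returned by the model. A toy numpy sketch with a synthetic 5-class score vector standing in for the 1000-class ImageNet output (the names and scores below are made up):

```python
import numpy as np

# Hypothetical class names and scores, not real model output.
class_names = ["cat", "dog", "car", "boat", "plane"]
scores = np.array([0.05, 0.10, 0.70, 0.05, 0.10])

# Top-1 prediction: index of the highest-scoring class.
top1 = int(np.argmax(scores))
print(class_names[top1])
```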
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
print('Modules are imported.')
corona_dataset_csv = pd.read_csv('covid19_Confirmed_dataset.csv')
corona_dataset_csv.head(10)
#Let's check the shape of dataframe
corona_dataset_csv.shape
#Basic Text and Legend Functions
fig = plt.figure()
plt.plot(4, 2, 'o')
plt.xlim([0, 10])
plt.ylim([0, 10])
fig.suptitle('Suptitle', fontsize=10, fontweight='bold')
ax = plt.gca()
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.text(4, 6, 'Text in Data Coords', bbox={'facecolor': 'yellow', 'alpha':0.5, 'pad':10})
ax.annotate('Example of Annotate', xy=(4,2), xytext=(8,4), arrowprops=dict(facecolor='green', shrink=0.05))
plt.show()
corona_dataset_aggregated = corona_dataset_csv.groupby("Country/Region").sum()
corona_dataset_aggregated.head() #Aggregating the rows by the country
corona_dataset_aggregated.shape #Let's check the dataframe
corona_dataset_aggregated.loc["China"].plot()
corona_dataset_aggregated.loc["Spain"].plot()
corona_dataset_aggregated.loc["Italy"].plot() # Visualizing data related to a country for example China
plt.legend()
corona_dataset_aggregated.loc["China"][:3].plot() #Calculating a good measure
corona_dataset_aggregated.loc["China"].diff().plot() #caculating the first derivative of the curve
corona_dataset_aggregated.loc["China"].diff().max() #find maxmimum infection rate for China
corona_dataset_aggregated.loc["Italy"].diff().max()
corona_dataset_aggregated.loc["Spain"].diff().max()
countries = list(corona_dataset_aggregated.index)
max_infection_rates =[] # find maximum infection rate for all of the countries
for c in countries:
    max_infection_rates.append(corona_dataset_aggregated.loc[c].diff().max())
corona_dataset_aggregated["max_infection_rates"]= max_infection_rates
corona_dataset_aggregated.head()
countries = list(corona_dataset_aggregated.index)
max_infection_rates =[] # find maximum infection rate for all of the countries
for c in countries:
    max_infection_rates.append(corona_dataset_aggregated.loc[c].diff().max())
max_infection_rates
corona_dataset_aggregated.loc["Spain"].diff().plot()
happiness_report_csv=pd.read_csv("worldwide_happiness_report.csv") #Importing dataset
happiness_report_csv.head()
useless_cols=["Overall rank","Score","Generosity"] # let's drop the useless columns
happiness_report_csv.drop(useless_cols,axis=1,inplace=True)
happiness_report_csv.head()
happiness_report_csv.set_index("Country or region",inplace=True) #changing the indices of the dataframe
happiness_report_csv.head()
corona_dataset_aggregated.loc["India"].diff().plot()
corona_dataset_aggregated.loc["Italy"].diff().plot()
```
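The notebook stops after preparing the two dataframes, but the point of loading the happiness report is to join it with the infection-rate data on country and look for a relationship. A sketch on toy data (the tiny dataframes below are made up; with the real data you would join `corona_dataset_aggregated` and `happiness_report_csv` the same way, since both are indexed by country):

```python
import pandas as pd

# Toy stand-ins for the two real dataframes, indexed by country.
covid = pd.DataFrame({"max_infection_rates": [100.0, 50.0, 10.0]},
                     index=["A", "B", "C"])
happiness = pd.DataFrame({"GDP per capita": [1.2, 0.9, 0.3]},
                         index=["A", "B", "C"])

# Inner join on the shared country index, then correlate the columns.
data = covid.join(happiness, how="inner")
corr = data.corr()
print(corr.loc["GDP per capita", "max_infection_rates"])
```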
# SUSA CX Kaggle Capstone Project
## Part 4: Deep Learning in Keras and Submitting to Kaggle
### Table Of Contents
* [Introduction](#section1)
* [Initial Setup](#section2)
* [Deep Learning](#section3)
* [Final Kaggle Evaluation](#section4)
* [Conclusion](#conclusion)
* [Additional Reading](#reading)
### Hosted by and maintained by the [Statistics Undergraduate Students Association (SUSA)](https://susa.berkeley.edu). Originally authored by [Patrick Chao](mailto:prc@berkeley.edu) & [Arun Ramamurthy](mailto:contact@arun.run).
<a id='section1'></a>
# SUSA CX Kaggle Capstone Project
Woohoo! You've made it to the end of the CX Kaggle Capstone Project! Congratulations on all of your hard work so far. We hope you've enjoyed this opportunity to learn new modeling techniques, some underlying mathematics, and even make new friends within CX. At this point, we've covered the entirety of the Data Science Workflow, linear regression, feature engineering, PCA, shrinkage, hyperparameter tuning, decision trees and even ensemble models. This week, we're going to finish off this whirlwind tour with a revisit to our old friend, Deep Learning. While the MNIST digit dataset was really interesting to look at as a cool toy example of the powers of DL, this time you're going to apply neural networks to your housing dataset for some hands-on practice using Keras.
> ### CX Kaggle Competition & Final Kaggle Evaluation
After you get some practice with deep learning, this week we will be asking you and your team to select and finalize your best model, giving you the codespace to write up your finalized model and evaluate it by officially submitting your results to Kaggle. The winners of this friendly collab-etition will be honored at the SUSA Banquet next Friday, including prizes for the winning team! We also want to encourage and facilitate discussion between teams on why different models performed differently, and give you a chance to chat with other teams about their own experiences with the CX Kaggle Capstone.
## Logistics
Most of the logistics are the same as last week, but we are repeating them here for your convenience. Please let us know if you or your teammates are feeling nervous about the pace of this project - remember that we are not grading you on your project, and we really try to make the notebooks relatively easy and fast to code through. If for any reason you are feeling overwhelmed or frustrated, please DM us or talk to us in person. We want all of you to have a productive, healthy, and fun time learning data science! If you have any suggestions or recommendations on how to improve, please do not hesitate to reach out!
### Mandatory Office Hours
Because this is such a large project, you and your team will surely have to work on it outside of meetings. To encourage you to seek help with this project, we are making it **mandatory** for you and your group to attend **two (2)** SUSA Office Hours over the next 4 weeks. This will allow questions to be answered outside of the regular meetings and will help promote collaboration with more experienced SUSA members.
The schedule of SUSA office hours are below:
https://susa.berkeley.edu/calendar#officehours-table
We understand that most of you will end up going to Arun or Patrick's office hours, but we highly encourage you to go to other people's office hours as well. There are many qualified SUSA mentors who can help and this could be an opportunity for you to meet them.
<a id='section2'></a>
# Initial Setup
To begin we will import all the necessary libraries and functions.
```
# Import statements
from sklearn import tree # There are lots of other models from this module you can try!
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge, Lasso, LinearRegression
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import GridSearchCV  # formerly sklearn.grid_search
from sklearn.metrics import mean_squared_error, make_scorer
from io import StringIO  # sklearn.externals.six has been removed from recent scikit-learn
from IPython.display import Image
import tensorflow as tf
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv2D, MaxPool2D, Flatten
from keras.layers.advanced_activations import LeakyReLU, PReLU
from keras.optimizers import SGD,Adam
from keras.layers.normalization import BatchNormalization
from keras.layers import Activation
from keras import backend as K
sqrt=np.sqrt
```
We also define a few familiar functions that should be helpful to you later.
```
def get_features(data, col_list, y_name):
    """
    Function to return numpy matrices of pandas dataframe features, given k column names and a single y column.
    Outputs X, an n x k dimensional numpy matrix, and Y, an n x 1 dimensional numpy matrix.
    This is not a smart function - although it does drop rows with NA values, it might break.
    data (DataFrame): e.g. train, clean
    col_list (list): list of columns to extract data from
    y_name (string): name of the column you want to treat as the y column
    Ideally returns one np.array of shape (len(data), len(col_list)) and one of shape (len(data),)
    """
    # Shuffle the rows and keep only the numpy values.
    feature_matrix = data[col_list + [y_name]].dropna().values
    np.random.shuffle(feature_matrix)
    return feature_matrix[:, :-1], feature_matrix[:, -1]
def get_loss(model, X, Y_true):
    """Returns square root of L2 loss (RMSE) from a model, X value input, and true y values
    model (Model object): model we use to predict values
    X: numpy matrix of x values
    Y_true: numpy matrix of true y values
    """
    Y_hat = model.predict(X)
    return get_RMSE(Y_hat, Y_true)
def get_RMSE(Y_hat, Y_true):
    """Returns square root of L2 loss (RMSE) between Y_hat and the true values
    Y_hat: numpy matrix of predicted y values
    Y_true: numpy matrix of true y values
    """
    return np.sqrt(np.mean((Y_true - Y_hat)**2))
def root_mean_squared_error(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
def get_train_and_val(X, Y):
    """Given the X and Y data, return the training and validation sets based on the default split
    X: numpy matrix of x values
    Y: numpy matrix of y values
    """
    Y = Y.reshape(Y.shape[0],)
    train_index, _ = get_train_val_indices(X, Y)
    y_train = Y[:train_index]
    x_train = X[:train_index, :]
    x_val = X[train_index:, :]
    y_val = Y[train_index:]
    return (x_train, y_train), (x_val, y_val)
def get_train_val_indices(X, Y=None, split=0.7):
    train_index = int(X.shape[0] * split)
    test_index = X.shape[0] - 1
    return train_index, test_index
def select_columns_except(dframe, non_examples):
    """Returns all columns in dframe except those in non_examples."""
    all_cols = dframe.select_dtypes(include=[np.number]).columns.tolist()
    cond = lambda x: sum([x == col for col in non_examples]) >= 1
    return [x for x in all_cols if not cond(x)]
```
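As a quick sanity check of the RMSE helper on numbers you can verify by hand (the formula is restated here so this cell is self-contained):

```python
import numpy as np

def get_RMSE(Y_hat, Y_true):
    # Same formula as the helper above: square root of the mean squared error.
    return np.sqrt(np.mean((Y_true - Y_hat) ** 2))

# Errors are (0, 0, 2), so MSE = 4/3 and RMSE = sqrt(4/3) ~ 1.1547.
rmse = get_RMSE(np.array([1.0, 2.0, 5.0]), np.array([1.0, 2.0, 3.0]))
print(rmse)
```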
## Data Loading
First, we need to load and clean the data. Although you may have cleaned data from `kaggle1`, we provide our solution for the cleaned housing data for your convenience. If you would like to view the completed data cleaning procedure, it has been updated in [kaggle1.ipynb](kaggle1.ipynb).
```
train = pd.read_csv('DATA/house-prices/train_cleaned.csv')
test = pd.read_csv('DATA/house-prices/test_cleaned.csv')
train = train.drop('Unnamed: 0',axis=1)
test = test.drop('Unnamed: 0',axis=1)
train.head()
```
Same as before, we need to preprocess the data into `numpy` matrices and separate the `SalePrice` as the response variable.
```
feature_cols = select_columns_except(train, ['Id','SalePrice'])
X, Y = get_features(train, feature_cols, 'SalePrice')
(x_train,y_train),(x_val,y_val) = get_train_and_val(X,Y)
x_test = test.loc[:, test.columns != 'Id'].values
test_ids = test['Id'].values
```
We provide a function `model_prediction` that takes in a model and a set of features from the test set, and outputs the predictions into a vector. This should work with the `keras` neural networks, `sklearn` decision trees and random forests.
```
def model_prediction(model, test=x_test):
    prediction = model.predict(test)
    return prediction.reshape(prediction.shape[0],)
```
## Model Example 1 : Random Forest
To help you get started, we supply a couple of naive models. Recall the three steps to modeling: model selection, training, and evaluation (validation or testing). Use the optimal parameters you found from grid search last week to tune the following random forest model.
```
####################
### MODEL DESIGN ###
####################
max_depth = 10
min_samples_leaf = 1
min_samples_split = 4
n_estimators = 40
model_rf = RandomForestRegressor(max_depth=max_depth,
                                 min_samples_leaf=min_samples_leaf, min_samples_split=min_samples_split,
                                 n_estimators=n_estimators, random_state=0, bootstrap=True)
################
### TRAINING ###
################
model_rf = model_rf.fit(x_train, y_train)
def model_prediction(model, x_test=x_test):
    prediction = model.predict(x_test)
    return prediction.reshape(prediction.shape[0],)
##################
### EVALUATION ###
##################
loss = get_loss(model_rf, x_val,y_val)
print("Root Mean Squared Error loss on the Validation Set for our RF model: {:.2f}".format(loss))
```
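The hyperparameters above presumably came out of last week's grid search. As a self-contained reminder of what that search looks like, here is a sketch on synthetic data (the toy dataset and the two-value grid are made up for illustration, and we assume scikit-learn's `sklearn.model_selection.GridSearchCV`):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic regression problem standing in for the housing data.
rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 5))
y = X @ np.array([3.0, -2.0, 0.5, 0.0, 1.0]) + 0.1 * rng.normal(size=200)

# A deliberately tiny grid; the real search would sweep more values.
param_grid = {"max_depth": [3, 10], "n_estimators": [10, 40]}
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid, cv=3,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```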
<a id='section3'></a>
# Deep Learning
From `kaggle3`, a strange tale:
>We may imagine hyperparameters as a bunch of individual knobs we can turn to change our model. Consider that you are visiting your friend and staying at her place. However, you did not realize that she is actually an alien and her house is filled with very strange objects. When you head to bed, you attempt to use her shower, but see that her shower is has a dozen of knobs that control the temperature of the water coming out! We only have a single output to work off of, but many different knobs or *parameters* to adjust. If the water is too hot, we can turn random knobs until it becomes cold, and learn a bit about our environment. We may determine that some knobs are more or less sensitive, just like hyperparameters. Each knob in the shower is equivalent to a hyperparameter we can tune in a model.
## Model Example 2 : Neural Networks
Here is a very simple example of a neural network in Keras. Its performance is not fantastic to start with, but mess around with the tuning parameters, add or remove layers, and see what you can come up with!
```
####################
### MODEL DESIGN ###
####################
model_nn = Sequential()
model_nn.add(Dense(30, activation='relu', input_shape=(x_train.shape[1],)))
model_nn.add(Dense(1, activation='relu'))
model_nn.compile(optimizer=Adam(), loss=root_mean_squared_error,
                 metrics=[root_mean_squared_error])
################
### TRAINING ###
################
batch_size = 20
epochs = 50
learning_rate = 0.01
history = model_nn.fit(x_train, y_train,
                       batch_size=batch_size,
                       epochs=epochs,
                       verbose=1,
                       validation_data=(x_val, y_val))
##################
### EVALUATION ###
##################
score = model_nn.evaluate(x_val, y_val, verbose=0)
print("Root Mean Squared Error loss on the Validation Set for our neural net model: {:.2f}".format(score[0]))
```
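One easy win when tuning the network above: neural nets train far better on standardized inputs than on raw dollar-scale features. A minimal numpy sketch (the `standardize` helper is ours; you would fit the statistics on `x_train` only and reuse them on `x_val`):

```python
import numpy as np

def standardize(train, other):
    """Zero-mean, unit-variance scaling fit on train, applied to both sets."""
    mu = train.mean(axis=0)
    sigma = train.std(axis=0) + 1e-8  # guard against constant columns
    return (train - mu) / sigma, (other - mu) / sigma

x_tr = np.array([[1.0, 100.0], [3.0, 300.0], [5.0, 500.0]])
x_va = np.array([[2.0, 200.0]])
x_tr_s, x_va_s = standardize(x_tr, x_va)
print(x_tr_s.mean(axis=0), x_tr_s.std(axis=0))
```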
<a id='section4'></a>
# Final Kaggle Evaluation
Congrats on finishing the last of the models we planned on teaching you about during the CX Kaggle Capstone!
You have now covered five distinct models and a several related techniques to add to your data science bag-of-tricks:
- Linear Models
- Multivariate Linear Regression
- Polynomial Regression
- Shrinkage / Biased Regression / Regularization (i.e. Ridge, LASSO)
- Decision Trees
- Random Forests
- Deep Learning
- Sequential Neural Networks
- Auxiliary Techniques
- The Data Science Workflow
- Data Cleaning
- Interpreting EDA Graphs
- Feature Engineering
- Principal Component Analysis
- Hyperparameter Tuning (i.e. grid search)
- Ensemble Learning (i.e. bagging, boosting)
Wow, that's a lot! We are really proud of you all for exploring these techniques, which constitute some of Berkeley's toughest machine learning and statistics classes. As always, if you want to learn more about any of these topics, or are hungry to learn about even more techniques, feel free to reach out to any one of the SUSA Mentors.
With the help of the above listing and your own team's preferences, choose a model and a couple of techniques to implement for your final model. We will provide you with a preamble and some space to construct and train your model, as well as a helper function to turn your output into an official Kaggle submission file.
A huge part of the modeling process is to mess around with different models, approaches, and hyperparameters! Don't be afraid to get your hands dirty and explore!
```
################
### PREAMBLE ###
################
train = x_train
labels = y_train
test = x_test
####################
### MODEL DESIGN ###
####################
model = model_rf
# ^^ REPLACE THIS LINE ^^
################
### TRAINING ###
################
model = model.fit(train, labels)
# ^^ REPLACE THIS LINE ^^
##################
### EVALUATION ###
##################
test_predictions = model_prediction(model)
##################
### SUBMISSION ###
##################
def generate_kaggle_submission(predictions, test=test):
    '''
    This function accepts your 1459-dimensional vector of predicted SalePrice values for the test dataset,
    and writes a CSV named kaggle_submission.csv containing your vector in a form suitable for
    submission onto the Kaggle leaderboard.
    '''
    pd.DataFrame({'Id': test_ids, 'SalePrice': predictions}) \
        .to_csv('kaggle_submission.csv', index=False)
generate_kaggle_submission(test_predictions)
```
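The submission file itself is nothing more than two columns. A toy in-memory sketch of what `generate_kaggle_submission` writes (the IDs and prices below are made up):

```python
import io
import pandas as pd

# Hypothetical IDs and predicted sale prices.
submission = pd.DataFrame({"Id": [1461, 1462], "SalePrice": [120000.0, 185000.0]})

# Write to an in-memory buffer instead of a file, just to inspect the format.
buffer = io.StringIO()
submission.to_csv(buffer, index=False)
print(buffer.getvalue())
```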
As you might have noticed in the code block above, we had to write a simple CSV file containing row IDs and predicted values for the 1459 houses in the test dataset. This submission file is your ticket to getting onto the official Kaggle leaderboard and seeing how you did as compared to the rest of the world!
Take a look at your `kaggle_submission.csv` file to ensure its content matches your expectations. When you and your team are ready, follow these instructions to upload your predictions to Kaggle and receive an official Kaggle score:
> 1. First, choose one person on your team (perhaps the rprincess) to submit your team's predictions under their name. This person will need to visit the [Kaggle website](https://www.kaggle.com) to create an account.
> 2. Go to the [Competition Submission Page](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/submit) for the House Prices competition.
> 3. Upload and submit `kaggle_submission.csv`
> 4. Wait for your submission to be scored! This should only take a few seconds to submit. You may have to refresh the page to see your final score.
> 5. To each member: record your team's score in [this Google form](https://goo.gl/forms/VteqJ5Di84t54TEZ2), which also contains a section for feedback on your Kaggle experience this semester. Congratulations on your first Kaggle submission!
For initial comparison, the naive random forest in this notebook achieves a score of 0.15-0.2 - you can think of this as a baseline score to beat.
This is Patrick's score after some minor edits to the models above:
<img src="GRAPHICS/kaggle.png" width="80%">
Can you beat Patrick's Kaggle score?
> "As an incentive, if you get a score under 0.13, I will personally take your team out to lunch!" - Patrick Chao
If you get some great models and great kaggle scores, feel free to screenshot them and post them in the slack!
We will post a leaderboard of all of the CX Kaggle teams, and the winning team will be honored at banquet, so stay tuned for that!
In the meantime, take some time to talk to other finished teams and explore the differences in your modeling approach, design, and insights! Happy Kaggling :)
# Conclusion
This brings us to an end to the CX Kaggle Capstone Project, as well as the Spring 2018 semester of SUSA Career Exploration. Congratulations on graduating from the SUSA Career Exploration committee! It's been a wonderful experience teaching you all, and we hope you got as much out of CX as we did this semester. This semester brought several new pilot programs to CX, such as the crash courses, workbooks, a revamped curriculum, and the CX Kaggle Capstone Project. You all have been great sources of feedback, and we want to make next semester's CX curriculum even better for the new generation of CX!
We're going to ask you for feedback one last time, to give us insight into how we can improve the CX Kaggle Capstone experience for future CX members. Please fill out [this feedback form](https://goo.gl/forms/VteqJ5Di84t54TEZ2) and let us know how we could have done better. Thank you again for a wonderful semester, and we will see you again in the Fall!
As always, please email [Arun Ramamurthy](mailto:contact@arun.run), [Patrick Chao](mailto:prc@berkeley.edu), or [Noah Gundotra](mailto:noah.gundotra@berkeley.edu) with any questions or concerns whatsoever. Have a great summer, and we hope to see you as a returning member in the Fall! Go SUSA!!!
**Signed with geom_love,
Lucas, Arun, Patrick, Noah, and the rest of the SUSA Board**
# Tutorial 8 : Comparison of various DMD algorithms
In this tutorial, we perform a thorough comparison of various DMD algorithms available in PyDMD, namely:
- the original DMD algorithm proposed by [Schmid (*J. Fluid Mech.*, 2010)](https://www.cambridge.org/core/journals/journal-of-fluid-mechanics/article/dynamic-mode-decomposition-of-numerical-and-experimental-data/AA4C763B525515AD4521A6CC5E10DBD4), see `DMD`
- the optimal closed-form solution given by [Héas & Herzet (*arXiv*, 2016)](https://arxiv.org/abs/1610.02962), see `OptDMD`.
For that purpose, different test cases are considered in order to assess the accuracy, the computational efficiency and the generalization capabilities of each method. The system we'll consider throughout this notebook is that of a chain of slightly damped 1D harmonic oscillators with nearest-neighbours coupling. Defining the state-vector as
$$
\mathbf{x} = \begin{bmatrix} \mathbf{q} & \mathbf{p} \end{bmatrix}^T,
$$
where $\mathbf{q}$ is the position and $\mathbf{p}$ is the momentum, our linear system can be written as
$$
\displaystyle \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \mathbf{q} \\ \mathbf{p} \end{bmatrix} = \begin{bmatrix} \mathbf{0} & \mathbf{I} \\ -\mathbf{K} & -\mathbf{G} \end{bmatrix} \begin{bmatrix} \mathbf{q} \\ \mathbf{p} \end{bmatrix},
$$
with $\mathbf{K}$ and $\mathbf{G}$ being the stiffness and friction matrices, respectively. It must be emphasized that, because we consider $N=50$ identical oscillators, this system does not exhibit low-rank dynamics. It will nonetheless enable us to further highlight the benefits of using the optimal solution proposed by [Héas & Herzet (arXiv, 2016)](https://arxiv.org/abs/1610.02962) as opposed to the other algorithms previously available in PyDMD.
Three different test cases will be considered :
- fitting a DMD model using a single long time-series,
- fitting a DMD model using a short burst,
- fitting a DMD model using an ensemble of short bursts.
For each case, the reconstruction error on the training dataset used to fit the model will be reported along with the error on an ensemble of testing datasets so as to assess the generalization capabilities of these various DMD models. The following two cells build the discrete linear time-invariant (LTI) state-space model for our system. It is this particular system, hereafter denoted `dsys`, that will be used throughout this notebook to generate both the training and testing datasets. For more details about SciPy's implementation of LTI systems, interested readers are referred to the [`scipy.signal`](https://docs.scipy.org/doc/scipy/reference/signal.html#discrete-time-linear-systems) module.
```
# Import standard functions from numpy.
import numpy as np
from numpy.random import normal
# Import matplotlib and set related parameters.
import matplotlib.pyplot as plt
fig_width = 12
# Import SciPy utility functions for linear dynamical systems.
from scipy.signal import lti
from scipy.signal import dlti, dlsim
# Import standard linear algebra functions from SciPy.
from scipy.linalg import norm
# Import various DMD algorithms available in PyDMD.
from pydmd import DMD, OptDMD
def harmonic_oscillators(N=10, omega=0.1, alpha=0.2, gamma=0.05, dt=1.0):
    """
    This function builds the discrete-time model of a chain of N coupled
    weakly damped harmonic oscillators. All oscillators are identical to
    one another and have a nearest-neighbour coupling.

    Parameters
    ----------
    N : integer
        The number of oscillators forming the chain (default 10).
    omega : float
        The natural frequency of the base oscillator (default 0.1).
    alpha : float
        The nearest-neighbour coupling strength (default 0.2).
    gamma : float
        The damping parameter (default 0.05).
    dt : float
        The sampling period for the continuous-to-discrete time conversion (default 1.0).

    Returns
    -------
    dsys : scipy.signal.dlti
        The corresponding discrete-time state-space model.
    """
    # Miscellaneous imports.
    from scipy.sparse import diags, identity, bmat, block_diag

    # Build the stiffness matrix.
    K_ii = np.ones((N,)) * (omega**2/2.0 + alpha)  # Self-coupling.
    K_ij = np.ones((N-1,)) * (-alpha/2.0)          # Nearest-neighbor coupling.
    K = diags([K_ij, K_ii, K_ij], offsets=[-1, 0, 1])  # Assembles the stiffness matrix.

    # Build the friction matrix.
    G = gamma * identity(N)

    # Build the dynamic matrix.
    A = bmat([[None, identity(N)], [-K, -G]])

    # Build the control matrix.
    B = bmat([[0*identity(N)], [identity(N)]])

    # Build the observation matrix.
    C = identity(2*N)

    # Build the feedthrough matrix.
    D = bmat([[0*identity(N)], [0*identity(N)]])

    # SciPy continuous-time LTI object.
    sys = lti(A.toarray(), B.toarray(), C.toarray(), D.toarray())

    # Return the discrete-time equivalent.
    return sys.to_discrete(dt)
# Get the discrete-time LTI model.
N = 50 # Number of oscillators (each has 2 degrees of freedom so the total size of the system is 2N).
dsys = harmonic_oscillators(N=N) # Build the model.
```
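Because every mode of this chain is underdamped, each continuous-time eigenvalue has real part $-\gamma/2$, so the spectral radius of the discretized dynamic matrix sits just below 1 — weakly damped, as claimed. A self-contained check on a small chain (same matrices as `harmonic_oscillators`, but built densely and discretized with the matrix exponential rather than SciPy's LTI conversion):

```python
import numpy as np
from scipy.linalg import expm

N, omega, alpha, gamma, dt = 5, 0.1, 0.2, 0.05, 1.0

# Stiffness matrix: tridiagonal nearest-neighbour coupling.
K = np.diag(np.full(N, omega**2 / 2 + alpha))
K += np.diag(np.full(N - 1, -alpha / 2), k=1)
K += np.diag(np.full(N - 1, -alpha / 2), k=-1)
G = gamma * np.eye(N)  # friction matrix

# Continuous-time dynamic matrix and its exact zero-input discretization.
A = np.block([[np.zeros((N, N)), np.eye(N)], [-K, -G]])
Ad = expm(A * dt)

# Spectral radius < 1 means the discrete-time chain is (weakly) stable.
spectral_radius = np.max(np.abs(np.linalg.eigvals(Ad)))
print(spectral_radius)
```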
## Case 1 : Using a single long time-series to fit a DMD model
```
# Training initial condition.
x0_train = normal( loc=0.0, scale=1.0, size=(dsys.A.shape[1]) )
# Run simulation to generate dataset.
t, _, x_train = dlsim(dsys, np.zeros((2000, dsys.inputs)), x0=x0_train)
def plot_training_dataset(t, x_train):
    """
    This is a simple utility function to plot the time-series forming our training dataset.

    Parameters
    ----------
    t : array-like, shape (n_samples,)
        The time instants.
    x_train : array-like, shape (n_samples, n_dof)
        The time-series of our system.
    """
    # Setup the figure.
    fig, axes = plt.subplots(1, 2, sharex=True, figsize=(fig_width, fig_width/6))

    # Plot the oscillators' positions.
    axes[0].plot(t, x_train[:, :dsys.inputs], alpha=0.5)
    # Add decorators.
    axes[0].set_ylabel(r"$q_i[k]$")

    # Plot the oscillators' velocities.
    axes[1].plot(t, x_train[:, dsys.inputs:], alpha=0.5)
    # Add decorators.
    axes[1].set(xlim=(t.min(), t.max()), xlabel=r"$k$", ylabel=r"$p_i[k]$")

    return
plot_training_dataset(t, x_train)
```
Let us now fit our models, namely vanilla DMD (Schmid, *J. Fluid Mech.*, 2010) and the closed-form solution DMD (Héas & Herzet, *arXiv*, 2016).
```
def rank_sensitvity(dsys, x_train, n_test=100):
    """
    This function uses the generated training dataset to fit DMD and OptDMD models of increasing rank.
    It also computes the test error on an ensemble of testing datasets to get a better estimate of the
    generalization capabilities of the fitted models.

    Parameters
    ----------
    dsys : scipy.signal.dlti
        The discrete LTI system considered.
    x_train : array-like, shape (n_features, n_samples)
        The training dataset.
        NOTE: It is transposed compared to the output of dsys.
    n_test : int
        The number of testing datasets to be generated.

    Returns
    -------
    dmd_train_error : array-like, shape (n_ranks,)
        The reconstruction error of the DMD model on the training data.
    dmd_test_error : array-like, shape (n_ranks, n_test)
        The reconstruction error of the DMD model on the various testing datasets.
    optdmd_train_error : array-like, shape (n_ranks,)
        The reconstruction error of the OptDMD model on the training data.
    optdmd_test_error : array-like, shape (n_ranks, n_test)
        The reconstruction error of the OptDMD model on the various testing datasets.
    """
    dmd_train_error, optdmd_train_error = list(), list()
    dmd_test_error, optdmd_test_error = list(), list()

    # Split the training data into input/output snapshots.
    y_train, X_train = x_train[:, 1:], x_train[:, :-1]

    for rank in range(1, dsys.A.shape[0]+1):
        # Fit the DMD model (Schmid's algorithm).
        dmd = DMD(svd_rank=rank).fit(x_train)
        # Fit the DMD model (optimal closed-form solution).
        optdmd = OptDMD(svd_rank=rank, factorization="svd").fit(x_train)

        # One-step ahead prediction using both DMD models.
        y_predict_dmd = dmd.predict(X_train)
        y_predict_opt = optdmd.predict(X_train)

        # Compute the one-step ahead prediction error.
        dmd_train_error.append(norm(y_predict_dmd-y_train)/norm(y_train))
        optdmd_train_error.append(norm(y_predict_opt-y_train)/norm(y_train))

        # Evaluate the error on test data.
        dmd_error, optdmd_error = list(), list()
        for _ in range(n_test):
            # Test initial condition.
            x0_test = normal(loc=0.0, scale=1.0, size=(dsys.A.shape[1]))
            # Run simulation to generate dataset.
            t, _, x_test = dlsim(dsys, np.zeros((250, dsys.inputs)), x0=x0_test)

            # Split the testing data into input/output snapshots.
            y_test, X_test = x_test.T[:, 1:], x_test.T[:, :-1]

            # One-step ahead prediction using both DMD models.
            y_predict_dmd = dmd.predict(X_test)
            y_predict_opt = optdmd.predict(X_test)

            # Compute the one-step ahead prediction error.
            dmd_error.append(norm(y_predict_dmd-y_test)/norm(y_test))
            optdmd_error.append(norm(y_predict_opt-y_test)/norm(y_test))

        # Store the errors for the rank-i DMD models.
        dmd_test_error.append(np.asarray(dmd_error))
        optdmd_test_error.append(np.asarray(optdmd_error))

    # Complete rank-sensitivity.
    dmd_test_error = np.asarray(dmd_test_error)
    optdmd_test_error = np.asarray(optdmd_test_error)
    dmd_train_error = np.asarray(dmd_train_error)
    optdmd_train_error = np.asarray(optdmd_train_error)

    return dmd_train_error, dmd_test_error, optdmd_train_error, optdmd_test_error
def plot_rank_sensitivity(dmd_train_error, dmd_test_error, optdmd_train_error, optdmd_test_error):
    """
    Simple utility function to plot the results from the rank sensitivity analysis.

    Parameters
    ----------
    dmd_train_error : array-like, shape (n_ranks,)
        The reconstruction error of the DMD model on the training data.
    dmd_test_error : array-like, shape (n_ranks, n_test)
        The reconstruction error of the DMD model on the various testing datasets.
    optdmd_train_error : array-like, shape (n_ranks,)
        The reconstruction error of the OptDMD model on the training data.
    optdmd_test_error : array-like, shape (n_ranks, n_test)
        The reconstruction error of the OptDMD model on the various testing datasets.
    """
    # Generate figure.
    fig, axes = plt.subplots(1, 2, figsize=(fig_width, fig_width/4), sharex=True, sharey=True)

    # Misc.
    rank = np.arange(1, dmd_test_error.shape[0]+1)

    #####
    ##### TRAINING ERROR
    #####

    # Plot the vanilla DMD error.
    axes[0].plot(rank, dmd_train_error)
    # Plot the OptDMD error.
    axes[0].plot(rank, optdmd_train_error, ls="--")
    # Add decorators.
    axes[0].set(
        xlabel=r"Rank of the DMD model", ylabel=r"Normalized error", title=r"Training dataset"
    )
    axes[0].grid(True)

    #####
    ##### TESTING ERROR
    #####

    # Plot the vanilla DMD error.
    axes[1].plot(rank, np.mean(dmd_test_error, axis=1), label=r"Regular DMD")
    axes[1].fill_between(
        rank,
        np.mean(dmd_test_error, axis=1) + np.std(dmd_test_error, axis=1),
        np.mean(dmd_test_error, axis=1) - np.std(dmd_test_error, axis=1),
        alpha=0.25,
    )
    # Plot the OptDMD error.
    axes[1].plot(rank, np.mean(optdmd_test_error, axis=1), ls="--", label=r"Optimal DMD")
    axes[1].fill_between(
        rank,
        np.mean(optdmd_test_error, axis=1) + np.std(optdmd_test_error, axis=1),
        np.mean(optdmd_test_error, axis=1) - np.std(optdmd_test_error, axis=1),
        alpha=0.25,
    )
    # Add decorators.
    axes[1].set(
        xlim=(0, rank.max()), xlabel=r"Rank of the DMD model",
        ylim=(0, 1),
        title=r"Testing dataset"
    )
    axes[1].grid(True)
    axes[1].legend(loc=0)

    return
# Run the rank-sensitivity analysis.
output = rank_sensitvity(dsys, x_train.T)
# Keep for later use.
long_time_series_optdmd_train, long_time_series_optdmd_test = output[2], output[3]
# Plot the results.
plot_rank_sensitivity(*output)
```
## Case 2 : Using a short burst to fit a DMD model
```
# Training initial condition.
x0_train = normal( loc=0.0, scale=1.0, size=(dsys.A.shape[1]) )
# Run simulation to generate dataset.
t, _, x_train = dlsim(dsys, np.zeros((100, dsys.inputs)), x0=x0_train)
# Plot the corresponding training data.
plot_training_dataset(t, x_train)
# Run the rank-sensitivity analysis.
output = rank_sensitvity(dsys, x_train.T)
# Keep for later use.
short_time_series_optdmd_train, short_time_series_optdmd_test = output[2], output[3]
# Plot the results.
plot_rank_sensitivity(*output)
```
## Case 3 : Fitting a DMD model using an ensemble of trajectories
```
def generate_ensemble_time_series(dsys, n_traj, len_traj):
"""
Utility function to generate a training dataset formed by an ensemble of time-series.
Parameters
-----------
dsys : scipy.signal.dlti
The discrete LTI system considered.
n_traj : int
The number of trajectories forming our ensemble.
len_traj : int
The length of each time-series.
Returns
-------
X : array-like, shape (n_features, n_samples)
    The input snapshots (states at time k).
Y : array-like, shape (n_features, n_samples)
    The output snapshots (states at time k+1).
"""
for i in range(n_traj):
# Training initial condition.
x0_train = normal(loc=0.0,scale=1.0,size=(dsys.A.shape[1]))
# Run simulation to generate dataset.
t, _, x = dlsim(dsys, np.zeros((len_traj, dsys.inputs)), x0=x0_train)
# Store the data.
if i == 0:
X, Y = x.T[:, :-1], x.T[:, 1:]
else:
X, Y = np.c_[X, x.T[:, :-1]], np.c_[Y, x.T[:, 1:]]
return X, Y
def rank_sensitvity_bis(dsys, X, Y, n_test=100):
"""
Same as before but for the ensemble training. Note that no DMD model is fitted, only OptDMD.
"""
optdmd_train_error, optdmd_test_error = list(), list()
# Fit a DMD model for each possible rank.
for rank in range(1, dsys.A.shape[0]+1):
# Fit the DMD model (optimal closed-form solution)
optdmd = OptDMD(svd_rank=rank, factorization="svd").fit(X, Y)
# One-step ahead prediction using both DMD models.
y_predict_opt = optdmd.predict(X)
# Compute the one-step ahead prediction error.
optdmd_train_error.append( norm(y_predict_opt-Y)/norm(Y) )
# Evaluate the error on test data.
optdmd_error = list()
for _ in range(n_test):
# Test initial condition.
x0_test = normal(loc=0.0,scale=1.0,size=(dsys.A.shape[1]))
# Run simulation to generate dataset.
t, _, x_test = dlsim(dsys, np.zeros((250, dsys.inputs)), x0=x0_test)
# Split the training data into input/output snapshots.
y_test, X_test = x_test.T[:, 1:], x_test.T[:, :-1]
# One-step ahead prediction using both DMD models.
y_predict_opt = optdmd.predict(X_test)
# Compute the one-step ahead prediction error.
optdmd_error.append( norm(y_predict_opt-y_test)/norm(y_test) )
# Store the error for rank i DMD.
optdmd_test_error.append( np.asarray(optdmd_error) )
# Complete rank-sensitivity.
optdmd_test_error = np.asarray(optdmd_test_error)
optdmd_train_error = np.asarray(optdmd_train_error)
return optdmd_train_error, optdmd_test_error
def plot_rank_sensitivity_bis(
short_time_series_optdmd_train, short_time_series_optdmd_test,
long_time_series_optdmd_train, long_time_series_optdmd_test,
optdmd_train_error, optdmd_test_error
):
"""
Same as before for this second rank sensitivity analysis.
"""
# Generate figure.
fig, axes = plt.subplots( 1, 2, figsize=(fig_width, fig_width/4), sharey=True, sharex=True )
# Misc.
rank = np.arange(1, optdmd_train_error.shape[0]+1)
#####
##### TRAINING ERROR
#####
# Training error using a short time-series to fit the model.
axes[0].plot( rank, short_time_series_optdmd_train )
# Training error using a long time-series to fit the model.
axes[0].plot( rank, long_time_series_optdmd_train )
# Training error using an ensemble of short time-series to fit the model.
axes[0].plot( rank, optdmd_train_error )
# Add decorators.
axes[0].set(
xlim=(0, rank.max()), ylim=(0, 1),
xlabel=r"Rank of the DMD model", ylabel=r"Normalized error",
title=r"Training dataset"
)
axes[0].grid(True)
#####
##### TESTING ERROR
#####
# Testing error for the model fitted with a short time-series.
axes[1].plot( rank, np.mean(short_time_series_optdmd_test, axis=1), label=r"Short time-series" )
axes[1].fill_between(
rank,
np.mean(short_time_series_optdmd_test, axis=1) + np.std(short_time_series_optdmd_test, axis=1),
np.mean(short_time_series_optdmd_test, axis=1) - np.std(short_time_series_optdmd_test, axis=1),
alpha=0.25,
)
# Testing error for the model fitted with a long time-series.
axes[1].plot( rank, np.mean(long_time_series_optdmd_test, axis=1), label=r"Long time-series" )
axes[1].fill_between(
rank,
np.mean(long_time_series_optdmd_test, axis=1) + np.std(long_time_series_optdmd_test, axis=1),
np.mean(long_time_series_optdmd_test, axis=1) - np.std(long_time_series_optdmd_test, axis=1),
alpha=0.25,
)
# Testing error for the model fitted using an ensemble of trajectories.
axes[1].plot( rank, np.mean(optdmd_test_error, axis=1), label=r"Ensemble" )
axes[1].fill_between(
rank,
np.mean(optdmd_test_error, axis=1) + np.std(optdmd_test_error, axis=1),
np.mean(optdmd_test_error, axis=1) - np.std(optdmd_test_error, axis=1),
alpha=0.25,
)
# Add decorators.
axes[1].set(
xlim=(0, rank.max()), ylim=(0, 1),
xlabel=r"Rank of the DMD model", title=r"Testing dataset"
)
axes[1].grid(True)
axes[1].legend(loc=0)
return
```
### Case 3.1 : Small ensemble
```
# Number of trajectories and length.
n_traj, len_traj = 10, 10
X, Y = generate_ensemble_time_series(dsys, n_traj, len_traj)
# Run the rank-sensitivity analysis.
optdmd_train_error, optdmd_test_error = rank_sensitvity_bis(dsys, X, Y)
plot_rank_sensitivity_bis(
short_time_series_optdmd_train, short_time_series_optdmd_test,
long_time_series_optdmd_train, long_time_series_optdmd_test,
optdmd_train_error, optdmd_test_error
)
```
### Case 3.2 : Large ensemble
```
# Number of trajectories and length.
n_traj, len_traj = 200, 10
X, Y = generate_ensemble_time_series(dsys, n_traj, len_traj)
# Run the rank-sensitivity analysis.
optdmd_train_error, optdmd_test_error = rank_sensitvity_bis(dsys, X, Y)
plot_rank_sensitivity_bis(
short_time_series_optdmd_train, short_time_series_optdmd_test,
long_time_series_optdmd_train, long_time_series_optdmd_test,
optdmd_train_error, optdmd_test_error
)
```
## Conclusion
These various tests show that:
- Models fitted with the new `OptDMD` consistently achieve smaller errors on both the training and testing datasets than the regular `DMD`, regardless of how much data is available.
- When a single time-series is used, both `DMD` and `OptDMD` models tend to overfit the training data. Indeed, a zero reconstruction error can be achieved on the training data with a low-rank model, yet the same models generalize poorly to new testing data.
- Using an ensemble of bursts (i.e. short time-series) to fit the model appears to be an effective way to prevent this overfitting. In that case, the model's error on the testing dataset is of the same order as its error on the training one. Such an ensemble of trajectories is moreover data-efficient: for a given reconstruction error, `OptDMD` models fitted on an ensemble of trajectories actually require less data than models fitted on a single time-series.
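The ensemble construction can be sketched on a toy example (a hypothetical stable linear system, not the notebook's `dsys`): snapshot pairs from several short trajectories are stacked into a single pair of matrices, and the one-shot least-squares DMD operator recovers the dynamics exactly.

```python
import numpy as np

# Toy sketch of the ensemble idea: stack snapshot pairs (x_k, x_{k+1}) from
# several short trajectories, then solve the least-squares problem Y = A X.
# The system below is an assumed stand-in, not the notebook's `dsys`.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2, 0.0],
              [-0.2, 0.9, 0.0],
              [0.0, 0.0, 0.5]])          # stable linear dynamics x_{k+1} = A x_k
trajectories = []
for _ in range(5):
    x = rng.normal(size=3)
    traj = [x]
    for _ in range(10):
        x = A @ x
        traj.append(x)
    trajectories.append(np.array(traj).T)  # shape (n_features, n_samples)

# Concatenate input/output snapshots across the whole ensemble.
X = np.hstack([traj[:, :-1] for traj in trajectories])
Y = np.hstack([traj[:, 1:] for traj in trajectories])
A_dmd = Y @ np.linalg.pinv(X)            # least-squares DMD operator
print(np.allclose(A_dmd, A))             # True: the operator is recovered
```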
# Milestone Project 1: Walk-through Steps Workbook
Below is a set of steps for you to follow to try to create the Tic Tac Toe Milestone Project game!
```
# For using the same code in either Python 2 or 3
from __future__ import print_function
## Note: Python 2 users, use raw_input() to get player input. Python 3 users, use input()
```
**Step 1: Write a function that can print out a board. Set up your board as a list, where each index 1-9 corresponds with a number on a number pad, so you get a 3 by 3 board representation.**
```
from IPython.display import clear_output
def display_board(board):
    clear_output()
    # the tutorial's implementation is really janky; print the 1-9 number-pad layout instead
    for row in [(7, 8, 9), (4, 5, 6), (1, 2, 3)]:
        print(board[row[0]] + '|' + board[row[1]] + '|' + board[row[2]])
```
**Step 2: Write a function that can take in a player input and assign their marker as 'X' or 'O'. Think about using *while* loops to continually ask until you get a correct answer.**
```
def player_input():
marker = ''
while not (marker == 'O' or marker == 'X'):
marker = raw_input('Player 1: Do you want to be O or X? ').upper()
if marker == 'X':
return ('X', 'O')
else:
return ('O','X')
player_input()
```
**Step 3: Write a function that takes in the board list object, a marker ('X' or 'O'), and a desired position (number 1-9) and assigns it to the board.**
```
def place_marker(board, marker, position):
board[position] = marker
```
**Step 4: Write a function that takes in a board and a mark (X or O) and then checks to see if that mark has won.**
```
def win_check(board, mark):
    # lots of manual checks in tut; here the eight winning lines are listed once
    lines = [(7,8,9), (4,5,6), (1,2,3), (7,4,1), (8,5,2), (9,6,3), (7,5,3), (9,5,1)]
    return any(board[a] == board[b] == board[c] == mark for a, b, c in lines)
```
**Step 5: Write a function that uses the random module to randomly decide which player goes first. You may want to look up random.randint(). Return a string of which player went first.**
```
import random
def choose_first(): # janky
if random.randint(0, 1) == 0:
return 'Player 1'
else:
return 'Player 2'
```
**Step 6: Write a function that returns a boolean indicating whether a space on the board is freely available.**
```
def space_check(board, position):
return board[position] == ' ' # janky
```
**Step 7: Write a function that checks if the board is full and returns a boolean value. True if full, False otherwise.**
```
def full_board_check(board):
# janky implement using a 1D array
for i in range(1, 10):
if space_check(board, i):
return False
return True
```
**Step 8: Write a function that asks for a player's next position (as a number 1-9) and then uses the function from step 6 to check if it's a free position. If it is, then return the position for later use.**
```
def player_choice(board):
    position = ''
    # space_check takes the board and the position as two arguments
    while position not in '1 2 3 4 5 6 7 8 9'.split() or not space_check(board, int(position)):
        position = raw_input('Choose your next position: (1-9) ')
    return int(position)  # return an int so it can index the board later
```
**Step 9: Write a function that asks the player if they want to play again and returns a boolean True if they do want to play again.**
```
def replay():
# what's interesting is the str func chaining
return raw_input('Do you want to play again? Enter Yes or No').lower().startswith('y')
```
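The chained call in `replay` works because each string method returns a new string, so the next method can be called directly on the result. A quick illustration:

```python
answer = '  Yes, please  '
# strip() -> 'Yes, please', lower() -> 'yes, please', startswith('y') -> True
print(answer.strip().lower().startswith('y'))
```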
**Step 10: Here comes the hard part! Use while loops and the functions you've made to run the game!**
```
print('Welcome to Tic Tac Toe!')
while True:
# Set the game up here
theBoard = [' '] * 10
player1_marker, player2_marker = player_input() #tuple unpacking
turn = choose_first()
print(turn + ' will go first!')
game_on = True
while game_on:
#Player 1 Turn
if turn == 'Player 1': # janky
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player1_marker, position)
if win_check(theBoard, player1_marker):
display_board(theBoard)
print('Congrats, {pl} has won the game!'.format(pl=turn))
game_on = False
else:
turn = 'Player 2' # janky
# Player2's turn.
if turn == 'Player 2':
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player2_marker, position)
if win_check(theBoard, player2_marker):
display_board(theBoard)
print('Congrats, {pl} has won the game!'.format(pl=turn))
game_on = False
else:
turn = 'Player 1' # janky
if not replay():
break
```
## Good Job!
Airline Delay Dataset
=====================
### [Neil D. Lawrence](http://inverseprobability.com), University of
Cambridge
### 2015-10-04
<!-- Do not edit this file locally. -->
Setup
-----
First we download some helper files and libraries to support the notebook.
```
import urllib.request
urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/mlai.py','mlai.py')
urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/teaching_plots.py','teaching_plots.py')
urllib.request.urlretrieve('https://raw.githubusercontent.com/lawrennd/talks/gh-pages/gp_tutorial.py','gp_tutorial.py')
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 22})
```
<!--setupplotcode{import seaborn as sns
sns.set_style('darkgrid')
sns.set_context('paper')
sns.set_palette('colorblind')}-->
pods
----
In Sheffield we created a suite of software tools for ‘Open Data
Science’. Open data science is an approach to sharing code, models and
data that should make it easier for companies, health professionals and
scientists to gain access to data science techniques.
You can also check this blog post on [Open Data
Science](http://inverseprobability.com/2014/07/01/open-data-science).
The software can be installed using
```
%pip install --upgrade git+https://github.com/sods/ods
```
from the command prompt where you can access your python installation.
The code is also available on github:
<a href="https://github.com/sods/ods" class="uri">https://github.com/sods/ods</a>
Once `pods` is installed, it can be imported in the usual manner.
```
import pods
```
Data on Airline Delays
----------------------
Flight arrival and departure times for every commercial flight in the
USA from January 2008 to April 2008. This dataset contains extensive
information about almost 2 million flights, including the delay (in
minutes) in reaching the desitnation.
```
import pods
data = pods.datasets.airline_delay()
```
The data dictionary contains the standard keys ‘X’ and ‘Y’, which
contain 700,000 randomly sub-sampled training points.
```
data['X'].shape
data['Y'].shape
```
Additionally there are keys `Xtest` and `Ytest` which provide test data.
The number of points considered to be *training data* is controlled by
the `num_train` argument, which defaults to 700,000. This
number is chosen as it matches that used in the Gaussian Processes for
Big Data paper.
```
data['Xtest'].shape
data['Ytest'].shape
```
The data was compiled by Nicolò Fusi for the paper Hensman et al.
(n.d.).
```
print(data['citation'])
```
And extra information about the data is included, as standard, under the
keys `info` and `details`.
```
print(data['info'])
print()
print(data['details'])
```
References
----------
Hensman, J., Fusi, N., Lawrence, N.D., n.d. Gaussian processes for big data.
# Lecture 8 - Variable scopes, Recursion, Debugging.
*Wednesday, June 24th 2020*
*Rahul Dani*
In this lecture, we will cover scopes for variables, the concept of recursion, and simple debugging techniques.
## Topic 1 : Variable scopes
In the previous lecture, we talked about function scopes. Here is a quick recap task:
**Task 1 :** What is the result of this code?
```
def sub(x, y):
    print(x - y)

def mult(x, y):
    z = x * y
    print(z)
    sub(z, y)

def add(x, y):
    z = x + y
    mult(z, x)
    print(z)

add(10, 3)
```
You need to understand function scopes to some extent to be able to grasp variable scopes.
**Task 2 :** What is the result of this code?
```
x = 0
while (x < 5):
    y = 0
    print('The value of x is:', str(x))
    print('The value of y is:', str(y))
    x += 1
    y += 1
```
As you can see, where a variable is placed makes a difference to the values it takes.
If you want to count the number of times something happens, you need to define the counter variable outside the loop:
**Task 3 :** Make a program to count the number of times carrot appears in this veggies list : ['celery', 'cabbage', 'spinach', 'carrot', 'lettuce', 'potato', 'carrot', 'cucumber', 'celery', 'carrot', 'broccoli' ]
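One possible solution (not the only one) keeps the counter outside the loop, exactly as described above:

```python
veggies = ['celery', 'cabbage', 'spinach', 'carrot', 'lettuce', 'potato',
           'carrot', 'cucumber', 'celery', 'carrot', 'broccoli']
count = 0          # defined outside the loop so it survives every iteration
for veggie in veggies:
    if veggie == 'carrot':
        count += 1
print('carrot appears', count, 'times')  # carrot appears 3 times
```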
**Task 4:** What is the result of this code (value of x and y) ?
```
def fun1(x):
    x = 10
    return x

def fun2(x):
    x = 20
    fun1(x)
    return x

x = 5
y = fun2(x)
print(x)
print(y)
```
## Topic 2 : Recursion
Recursion is a process in which we call a function within the body of the same function. **It calls itself**.
There is a commonly used math function known as **factorial** (!). It multiplies a number by every positive integer below it, all the way down to 1.
Recursion has two steps to it:
* The base case (stopping point)
* The recursive case
Example:
5! = 5*4*3*2*1 = 120
3! = 3*2*1 = 6
In order to code a function like this, we use recursion:
```
def factorial(x):
    if x == 1:
        return x
    else:
        return x * factorial(x-1)
```
The way the function calls expand for an example like x = 5 looks like this:
```
factorial(5)
5*factorial(4)
5*4*factorial(3)
5*4*3*factorial(2)
5*4*3*2*factorial(1)
5*4*3*2*1
```
**Task 5 :** Make recursive function called "add" to add all the numbers from x to 1 where x is any positive number > 1. For example: if x = 10 -> 10+9+8+..+3+2+1
<!-- ```
def add(x):
if(x == 0):
return x
else:
return x + add(x-1)
print(add(10))
``` -->
We have seen this question before! It is the Gauss sum problem! The formula approach is more efficient than recursion, but this is another way of solving that initial problem.
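For comparison, the formula approach mentioned above is a single arithmetic expression, with no chain of function calls:

```python
def gauss_add(n):
    # Closed-form Gauss sum: n + (n-1) + ... + 1 == n * (n + 1) / 2
    return n * (n + 1) // 2

print(gauss_add(10))  # 55, the same answer as the recursive version
```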
There is a famous math sequence known as the **fibonacci sequence**.
The sequence:

This is a shape it takes:
![Fibonacci spiral](https://public-media.si-cdn.com/filer/3a/70/3a70f58d-dabc-4d54-ba16-1d1548594720/2560px-fibonaccispiralsvg.jpg)
These are some mind-blowing examples of fibonacci sequence: [Fibonacci Sequence](https://io9.gizmodo.com/15-uncanny-examples-of-the-golden-ratio-in-nature-5985588)
Also this egg :

So the code for this mind-blowing sequence looks like:
```
def fibonacci(n):
    if n == 0 or n == 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)

print(fibonacci(6))
```
The way the functions are called for an example like fibonacci(6) looks like this:
```
fibonacci(6) = fibonacci(5)+fibonacci(4)
fibonacci(5) = fibonacci(4)+fibonacci(3)
fibonacci(4) = fibonacci(3)+fibonacci(2)
fibonacci(3) = fibonacci(2)+fibonacci(1)
fibonacci(2) = fibonacci(1)+fibonacci(0)
fibonacci(1) = 1
fibonacci(0) = 0
fibonacci(2) = 1 + 0 = 1
fibonacci(3) = 1 + 1 = 2
fibonacci(4) = 2 + 1 = 3
fibonacci(5) = 3 + 2 = 5
fibonacci(6) = 5 + 3 = 8
```
Therefore, the 6th item in the fibonacci sequence is **8**.
This might be very confusing to you guys and it is supposed to be quite a challenging concept, so ask me any questions you have!
Just remember that recursion is used when you call a function within itself. If you continue programming in the future, this is surely an idea you will come across!
## Topic 3 : Debugging
Python is a special language in the sense that it does not break until an error line is found. It will run everything prior to the error.
For example:
```
print('Hello') # valid
print(24)      # valid
print(Hello)   # invalid because the Hello variable doesn't exist
```
In this case, the program shows the first 2 lines but crashes at the 3rd line.
Some programming languages won't even run if there is an error in any of the lines; this is where Python is unique.
I will now demonstrate how to read error messages and fix them.
```
print('Hello') # valid
print(24) # valid
print(Hello) # invalid because Hello variable doesn't exist
# Shift + Enter
```
**Task 6 :** What is the error in the following code? Explain...
```
sodas = ['Coke', 'Pepsi', 'Dr.Pepper', 'Fanta', 'Sprite', 'Mountain Dew']
for soda in soda:
    print(soda)
```
**Task 7 :** What is the error in the following code? Explain...
```
sodas = ['Coke', 'Pepsi', 'Dr.Pepper', 'Fanta', 'Sprite', 'Mountain Dew']
print(sodas[10])
```
There is a concept in programming known as **edge cases**. It refers to inputs or situations that cause errors one does not typically anticipate.
**Task 8 :** What is the edge case in the following code? Explain...
```
x = input('Enter the first number: ')
x = int(x)
y = input('Enter the second number: ')
y = int(y)
print(x/y)
```
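One way to handle both edge cases — division by zero and non-numeric input — is a `try`/`except` block (wrapped in a function here so it is easy to test; the lecture code reads from `input()` directly):

```python
def safe_divide(x, y):
    # x and y arrive as strings, just like the values returned by input()
    try:
        return int(x) / int(y)
    except ZeroDivisionError:
        return 'Cannot divide by zero!'
    except ValueError:
        return 'Please enter whole numbers only.'

print(safe_divide('10', '2'))   # 5.0
print(safe_divide('10', '0'))   # Cannot divide by zero!
print(safe_divide('ten', '2'))  # Please enter whole numbers only.
```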
Practice Problems from previous lectures to work on after class:
Lecture 1 : https://colab.research.google.com/drive/1RlMF5WD6YvUf7sbGs0XkpYyWvOLyT30b?usp=sharing
Lecture 2 : https://colab.research.google.com/drive/14B7NaXdTWmfFhb6wRldo8nTdPMHNX7fZ?usp=sharing
Lecture 3 : https://colab.research.google.com/drive/1WLEpSIk2eDn9YORxIuBJKAIBtvQ0E50Q?usp=sharing
Lecture 4 : https://colab.research.google.com/drive/1eZ3Xzojdu8QYzLD3AfkImMgSf9xUMsHD?usp=sharing
Lecture 6 : https://colab.research.google.com/drive/1fbE_v3kYROSSHDjPQEzmtSp263SRDNiI?usp=sharing