<img src="images/dask_horizontal.svg" align="right" width="30%">
# Data Storage
<img src="images/hdd.jpg" width="20%" align="right">
Efficient storage can dramatically improve performance, particularly when operating repeatedly from disk.
Decompressing text and parsing CSV files is expensive. One of the most effective strategies with medium data is to use a binary storage format like HDF5. Often the performance gains from doing this are sufficient that you can switch back to using Pandas instead of `dask.dataframe`.
In this section we'll learn how to efficiently arrange and store your datasets in on-disk binary formats. We'll use the following:
1. [Pandas `HDFStore`](http://pandas.pydata.org/pandas-docs/stable/io.html#io-hdf5) format on top of `HDF5`
2. Categoricals for storing text data numerically
**Main Take-aways**
1. Storage formats affect performance by an order of magnitude
2. Text data will keep even a fast format like HDF5 slow
3. A combination of binary formats, column storage, and partitioned data turns one-second wait times into 80ms wait times
## Setup
Create data if we don't have any
```
from prep import accounts_csvs
accounts_csvs(3, 1000000, 500)
```
## Read CSV
First we read our csv data as before.
CSV and other text-based file formats are the most common storage for data from many sources, because they require minimal pre-processing, can be written line-by-line, and are human-readable. Since Pandas' `read_csv` is well-optimized, CSVs are a reasonable input, but far from optimal, since reading requires extensive text parsing.
```
import os
filename = os.path.join('data', 'accounts.*.csv')
filename
import dask.dataframe as dd
df_csv = dd.read_csv(filename)
df_csv.head()
```
### Write to HDF5
HDF5 and netCDF are binary array formats very commonly used in the scientific realm.
Pandas provides a specialized HDF5 interface, `HDFStore`. The ``dd.DataFrame.to_hdf`` method works exactly like the ``pd.DataFrame.to_hdf`` method.
```
target = os.path.join('data', 'accounts.h5')
target
# convert to binary format, takes some time up-front
%time df_csv.to_hdf(target, '/data')
# same data as before
df_hdf = dd.read_hdf(target, '/data')
df_hdf.head()
```
### Compare CSV to HDF5 speeds
We do a simple computation that requires reading a column of our dataset and compare performance between CSV files and our newly created HDF5 file. Which do you expect to be faster?
```
%time df_csv.amount.sum().compute()
%time df_hdf.amount.sum().compute()
```
Sadly, they are about the same speed, or perhaps even slower.
The culprit here is the `names` column, which is of `object` dtype and thus hard to store efficiently. There are two problems here:
1. How do we store text data like `names` efficiently on disk?
2. Why did we have to read the `names` column when all we wanted was `amount`?
### 1. Store text efficiently with categoricals
We can use Pandas categoricals to replace our object dtypes with a numerical representation. This takes a bit more time up front, but results in better performance.
More on categoricals at the [pandas docs](http://pandas.pydata.org/pandas-docs/stable/categorical.html) and [this blogpost](http://matthewrocklin.com/blog/work/2015/06/18/Categoricals).
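As a quick illustration of why this helps (using a tiny synthetic Series, not the accounts data), converting an `object` column to `category` stores one small integer code per row plus each unique string exactly once:

```
import pandas as pd

# A repetitive text column stored as Python objects vs. as a categorical
s_obj = pd.Series(["Alice", "Bob", "Charlie"] * 10000)
s_cat = s_obj.astype("category")

# The categorical version keeps a small lookup table of unique strings
# and integer codes, so it uses far less memory than the object version.
obj_bytes = s_obj.memory_usage(deep=True)
cat_bytes = s_cat.memory_usage(deep=True)
print(obj_bytes > cat_bytes)   # True
print(list(s_cat.cat.categories))
```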
```
# Categorize data, then store in HDFStore
%time df_hdf.categorize(columns=['names']).to_hdf(target, '/data2')
# It looks the same
df_hdf = dd.read_hdf(target, '/data2')
df_hdf.head()
# But loads more quickly
%time df_hdf.amount.sum().compute()
```
This is now definitely faster than before. This tells us that it's not only the file type that we use but also how we represent our variables that influences storage performance.
How does the performance of reading depend on the scheduler we use? You can try this with the threaded, multiprocessing, and distributed schedulers.
However, this can still be improved. We had to read all of the columns (`names` and `amount`) in order to compute the sum of one (`amount`). We'll improve further on this with `parquet`, an on-disk column-store. First, though, we'll learn how to set an index in a Dask dataframe.
### Exercise
`fastparquet` is a library for interacting with parquet-format files. Parquet is a very common format in the Big Data ecosystem, used by tools such as Hadoop, Spark and Impala.
```
target = os.path.join('data', 'accounts.parquet')
df_csv.categorize(columns=['names']).to_parquet(target, has_nulls=False)
```
Investigate the file structure in the resultant new directory - what do you suppose those files are for?
`to_parquet` comes with many options, such as compression, whether to explicitly write NULLs information (not necessary in this case), and how to encode strings. You can experiment with these, to see what effect they have on the file size and the processing times, below.
```
ls -l data/accounts.parquet/
df_p = dd.read_parquet(target)
# note that column names shows the type of the values - we could
# choose to load as a categorical column or not.
df_p.dtypes
```
Rerun the sum computation above for this version of the data, and time how long it takes. You may want to try this more than once - it is common for many libraries to do various setup work when called for the first time.
```
%time df_p.amount.sum().compute()
```
When archiving data, it is common to sort and partition by a column with unique identifiers, to facilitate fast look-ups later. For this data, that column is `id`. Time how long it takes to retrieve the rows corresponding to `id==100` from the raw CSV, from HDF5 and parquet versions, and finally from a new parquet version written after applying `set_index('id')`.
```
# df_p.set_index('id').to_parquet(...)
```
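A minimal sketch of the idea with plain Pandas and synthetic data (not the accounts files): once the data is sorted by `id`, the index is monotonic, so row retrieval can use a binary search instead of scanning every row.

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "id": rng.integers(0, 500, size=100_000),
    "amount": rng.normal(size=100_000),
})

# Sorting by the lookup column gives a monotonic index,
# so .loc can binary-search instead of scanning all rows.
indexed = df.set_index("id").sort_index()
rows = indexed.loc[100]

print(indexed.index.is_monotonic_increasing)   # True
```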
## Remote files
Dask can access various cloud- and cluster-oriented data storage services such as Amazon S3 or HDFS.
Advantages:
* scalable, secure storage
Disadvantages:
* network speed becomes bottleneck
The way to set up dataframes (and other collections) remains very similar to before. Note that the data here is available anonymously, but in general an extra parameter `storage_options=` can be passed with further details about how to interact with the remote storage.
```python
taxi = dd.read_csv('s3://nyc-tlc/trip data/yellow_tripdata_2015-*.csv',
storage_options={'anon': True})
```
**Warning**: operations over the Internet can take a long time to run. Such operations work really well in a cloud clustered set-up, e.g., Amazon EC2 machines reading from S3 or Google Compute Engine machines reading from GCS.
# 11. Hyperparameter Optimization
We have just introduced a lot of new hyperparameters, and this is only going to get worse as our neural networks get more and more complex. This leads us in to a topic called **hyperparameter optimization**. So, what are the hyperparameters that we have learned about so far?
>* **Learning rate**, or if it is adaptive, **initial learning rate** and **decay rate**
* **Momentum**
* **Regularization weight**
* **Hidden layer size**
* **Number of hidden layers**
So, what are some approaches to choosing these parameters?
## 1.1 K-Fold Cross-Validation
Let's go over **K-Fold Cross-Validation** to review. The idea is simple; we do the following:
> 1. Split data into K parts
2. Do a loop that iterates through K times
3. Each time we take out one part, and use that as the validation set, and the rest of the data as the training set
So in the first iteration of the loop, we validate with the first part, and train on the rest.
<img src="https://drive.google.com/uc?id=1n7uJj20zfiJ0Xo_0yfEYjn0JpwjcJmB2" width="500">
Here is some pseudocode that can do this:
```
import numpy as np
from sklearn.utils import shuffle

def crossValidation(model, X, Y, K=5):
    X, Y = shuffle(X, Y)
    sz = len(Y) // K  # fold size (integer division)
    scores = []
    for k in range(K):
        # hold out fold k as the validation set, train on the rest
        xtr = np.concatenate([X[:k*sz, :], X[(k*sz + sz):, :]])
        ytr = np.concatenate([Y[:k*sz], Y[(k*sz + sz):]])
        xte = X[k*sz:(k*sz + sz), :]
        yte = Y[k*sz:(k*sz + sz)]
        model.fit(xtr, ytr)
        score = model.score(xte, yte)
        scores.append(score)
    return np.mean(scores), np.std(scores)
```
Now, we can see that this algorithm produces **K** different scores. We can simply use the mean of these scores as a measurement of how good this particular hyperparameter setting is. Another thing we could do is a statistical test to determine whether one hyperparameter setting is statistically significantly better than another.
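As a sketch of that statistical-test idea (the per-fold accuracies below are made up for illustration), a paired t-test on the K fold scores of two settings tells us whether the difference is significant:

```
import numpy as np
from scipy import stats

# Hypothetical per-fold accuracies for two hyperparameter settings,
# evaluated on the same K=5 folds
scores_a = np.array([0.81, 0.79, 0.83, 0.80, 0.82])
scores_b = np.array([0.75, 0.74, 0.77, 0.73, 0.76])

# Paired test, because both settings were scored on the same folds
t, p = stats.ttest_rel(scores_a, scores_b)
print(t > 0, p < 0.05)   # here, setting A is significantly better
```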
## 1.2 Sci-Kit Learn K-Folds
Sci-Kit learn has its own K-folds implementation that is great to use:
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, X, Y, cv=K)
```
Note that the SKLearn implementation does require you to conform to certain aspects of the SKLearn API. For example, you must provide a class with at least the 3 methods `fit()`, `predict()`, and `score()`.
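For illustration, a minimal (hypothetical) model class satisfying that contract might look like this; any object with these three methods can be dropped into the cross-validation loop above:

```
import numpy as np

class MajorityClassifier:
    """Toy model exposing the fit/predict/score interface."""
    def fit(self, X, Y):
        # remember the most frequent class in the training labels
        self.majority_ = np.bincount(Y).argmax()
        return self
    def predict(self, X):
        return np.full(len(X), self.majority_)
    def score(self, X, Y):
        return np.mean(self.predict(X) == Y)

X = np.zeros((100, 2))
Y = np.array([0] * 70 + [1] * 30)
model = MajorityClassifier().fit(X, Y)
print(model.score(X, Y))   # 0.7
```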
## 1.3 Leave-One-Out Cross-Validation
One special variation of K-Folds cross-validation is where we set K = N. We will talk more about this later. But the basic idea is:
> 1. We do a loop N times
2. Every iteration of the loop we train on everything but one point
3. We test on the one point that was left out
4. We do this N times for all points
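The steps above can be sketched directly; here with a tiny 1-nearest-neighbour model written inline (both the model and the data are just for illustration):

```
import numpy as np

class OneNN:
    """Minimal 1-nearest-neighbour classifier."""
    def fit(self, X, Y):
        self.X_, self.Y_ = X, Y
        return self
    def predict(self, X):
        # distance from every query point to every training point
        d = np.linalg.norm(self.X_[None, :, :] - X[:, None, :], axis=-1)
        return self.Y_[d.argmin(axis=1)]

def leave_one_out_score(make_model, X, Y):
    N = len(Y)  # K = N: each point is held out exactly once
    hits = 0
    for i in range(N):
        mask = np.arange(N) != i            # everything but point i
        model = make_model().fit(X[mask], Y[mask])
        hits += int(model.predict(X[i:i + 1])[0] == Y[i])
    return hits / N

X = np.array([[0.0], [0.1], [5.0], [5.1]])
Y = np.array([0, 0, 1, 1])
print(leave_one_out_score(OneNN, X, Y))   # 1.0
```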
Now with all of that discussed, what are some of the different approaches to hyperparameter optimization?
---
<br>
## 1.4 Grid Search
Grid search is an exhaustive search. This means that you can choose a set of learning rates that you want to try, choose a set of momentums you want to try, and choose a set of regularizations that you want to try, at which point you try every combination of them. In code that may look like:
```
learning_rates = [0.1, 0.01, 0.001, 0.0001, 0.00001]
momentums = [1, 0.1, 0.01, 0.001]
regularizations = [1, 0.1, 0.01]
for lr in learning_rates:
for mu in momentums:
for reg in regularizations:
score = cross_validation(lr, mu, reg, data)
```
As you can imagine, this is **very** slow! But, since each model is independent of the others, there is a great opportunity here for **parallelization**! Frameworks like **Hadoop** and **Spark** are ideal for this type of problem.
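The nested loops enumerate the Cartesian product of the settings, which `itertools.product` expresses directly; a flat list of combinations is also convenient to hand out to parallel workers (a sketch, not tied to any particular framework):

```
import itertools

learning_rates = [0.1, 0.01, 0.001, 0.0001, 0.00001]
momentums = [1, 0.1, 0.01, 0.001]
regularizations = [1, 0.1, 0.01]

# Every combination of the three lists: 5 * 4 * 3 = 60 settings,
# each of which could be scored on a separate worker
grid = list(itertools.product(learning_rates, momentums, regularizations))
print(len(grid))   # 60
```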
---
<br>
## 1.5 Random Search
On the other hand we have **random search**, which instead of looking at every possibility, just moves in random directions until the score is improved. A basic algorithm could look like:
```
theta = random position in hyperparameter space
score1 = cross_validation(theta, data)
for i in range(max_iterations):
next_theta = sample from hypersphere around theta
score2 = cross_validation(next_theta, data)
if score2 is better than score1:
theta = next_theta
```
---
<br>
# 2. Sampling Logarithmically
Let's now talk about how to sample random numbers when performing random search. It is not quite as straightforward as you may assume.
## 2.1 Main Problem
Suppose we want to randomly sample the learning rate. We know that the difference between 0.001 and 0.0011 is not that significant. In general, we want to try different numbers on a log scale, such as $10^{-2}$, $10^{-3}$, $10^{-4}$, etc. So if you sample uniformly between $10^{-7}$ and $10^{-1}$, what is going to happen?
Well, if we look at the image below, we can see that most of what we would sample is on the same scale as $10^{-1}$, while everything else is underrepresented!
<img src="https://drive.google.com/uc?id=12uteto-AdT7CE2uvtW-uezNEuat8dELu">
So, how can we fix this problem? Well, we will sample on a **log scale**! That way we will get an even distribution between every 10th power, which is exactly what we want! Algorithmically this may look like:
```
R ~ U(-7, -1)        # sample the exponent uniformly (or whatever limits you want)
learning_rate = 10^R
```
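A runnable version of this sampling scheme (the limits are just the ones from the text):

```
import numpy as np

rng = np.random.default_rng(0)

# Sample the exponent uniformly, then exponentiate: each decade
# between 1e-7 and 1e-1 receives equal probability mass.
R = rng.uniform(-7, -1, size=100_000)
learning_rates = 10.0 ** R

# Roughly one sixth of the samples land in any given decade
frac = np.mean((learning_rates >= 1e-4) & (learning_rates < 1e-3))
print(frac)
```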
This can also be used for hyperparameters like decay, where we want to try numbers like 0.9, 0.99, 0.999, etc!
It may not seem intuitive that these numbers are still on a log scale, but if we rewrite those numbers as:
$$0.9 = 1 - 10^{-1}$$
$$0.99 = 1 - 10^{-2}$$
$$0.999 = 1 - 10^{-3}$$
We can see that they are indeed still on a log scale! These will give very different results, so being able to sample them effectively is very important. Algorithmically it may look like:
```
R ~ U(lower, upper)
Decay Rate = 1 - 10^R
```
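And a runnable version of the decay-rate scheme (the bounds here are chosen just for illustration):

```
import numpy as np

rng = np.random.default_rng(1)

# R ~ U(-3, -1)  =>  decay = 1 - 10^R lies between 0.9 and 0.999
R = rng.uniform(-3, -1, size=1000)
decay_rates = 1 - 10.0 ** R
print(decay_rates.min(), decay_rates.max())
```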
---
<br>
# 3. Grid Search in Code
The dataset that we are going to look at is the spiral dataset. It is a set of arms extending out from the origin, and each arm is a different class than the one next to it. Why are we using this dataset and not something like MNIST? Well, images take a long time to train on. If we wanted to test out 4 different hyperparameters with 3 different settings each, that is 81 different combinations; if each combination takes 1 hour to train, that is 81 hours! A bit too long for our liking.
So, inside of our `get_spiral` function, we are going to create the spirals using polar coordinates. We want the radius to stretch outwards from 0 to some maximum, and we want the angle to change with the radius. So, as the radius goes from 0 to max, the angle goes from its start angle to its start angle plus $\frac{\pi}{2}$. Because $\frac{\pi}{2}$ is just 90 degrees, each spiral is going to travel 90 degrees in the angular direction as the radius increases. We will have 6 spiral arms, which is why we split up the start angles by $\frac{2\pi}{6}$ or $\frac{\pi}{3}$, aka 60 degrees.
Next we convert the polar coordinates into cartesian coordinates. The formula is:
$$x_1 = r\cos(\theta)$$
$$x_2 = r\sin(\theta)$$
We are referring to the variables as $x_1$ and $x_2$ because we are going to use the letter $y$ for the targets.
The next step is to add $x_1$ and $x_2$ to the $X$ matrix that is going to represent the input data. $x_1$ and $x_2$ begin as arrays of size (6 x 100), but we need to flatten them to size 600, since there are 600 data points: 100 for each of the 6 spiral arms. Next we add some noise to the data so it isn't a perfect set of spirals.
We then create the targets, which is just 100 zeros followed by 100 ones, and so on. This is because we want each spiral arm to be a different class than the arms beside it.
```
"""Function to get the spiral dataset"""
def get_spiral():
radius = np.linspace(1, 10, 100)
thetas = np.empty((6, 100))
for i in range(6):
start_angle = np.pi*i / 3.0
end_angle = start_angle + np.pi / 2
points = np.linspace(start_angle, end_angle, 100)
thetas[i] = points
# convert into cartesian coordinates
x1 = np.empty((6, 100))
x2 = np.empty((6, 100))
for i in range(6):
x1[i] = radius * np.cos(thetas[i])
x2[i] = radius * np.sin(thetas[i])
# inputs
X = np.empty((600, 2))
X[:,0] = x1.flatten()
X[:,1] = x2.flatten()
# add noise
X += np.random.randn(600, 2)*0.5
# targets
Y = np.array([0]*100 + [1]*100 + [0]*100 + [1]*100 + [0]*100 + [1]*100)
return X, Y
import theano.tensor as T
from theano_ann import ANN
from util import get_spiral, get_clouds
from sklearn.utils import shuffle
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# Seaborn Plot Styling
sns.set(style="white", palette="husl")
sns.set_context("poster")
sns.set_style("ticks")
```
Now let's just take a quick look at our dataset.
```
"""Visualize Dataset"""
X, Y = get_spiral()
fig, ax = plt.subplots(figsize=(12,8))
plt.scatter(X[:,0], X[:,1], s=100, c=Y, alpha=0.5, cmap="viridis")
plt.show()
```
And now we can finally get back to grid search! Note that we have imported `theano_ann`, which makes use of a custom class that we have not gone over yet. However, it is fully encapsulated, so it is just like using Scikit-Learn! You don't need to know how it is written, just how to use its API.
Something else to note is that while it is standard practice in machine learning to use cross-validation and a test set, we will not do so here. It is not very common in deep learning, because you generally have so much data that a single validation set is fine. Plus, if training takes 1 hour and you want to do 5-fold cross-validation, it is going to take 5 hours.
```
"""Grid Search Function"""
def grid_search():
X, Y = get_spiral() # Get the data
X, Y = shuffle(X, Y)
    Ntrain = int(0.7 * len(X))  # split data into train & test
    Xtrain, Ytrain = X[:Ntrain], Y[:Ntrain]
Xtest, Ytest = X[Ntrain:], Y[Ntrain:]
# Create arrays of hyperparameters to try
hidden_layer_sizes = [
[300],
[100,100],
[50,50,50],
]
learning_rates = [1e-4, 1e-3, 1e-2]
l2_penalties = [0., 0.1, 1.0]
# Loop through all possible hyperparameter settings
best_validation_rate = 0
best_hls = None
best_lr = None
best_l2 = None
for hls in hidden_layer_sizes: # Nested for loop through all settings
for lr in learning_rates:
for l2 in l2_penalties:
model = ANN(hls) # Create an ANN
model.fit(Xtrain, Ytrain, learning_rate=lr, reg=l2,
mu=0.99, epochs=3000, show_fig=False) # Fit with chosen hyperparams
validation_accuracy = model.score(Xtest, Ytest) # Get validation accuracy
train_accuracy = model.score(Xtrain, Ytrain) # Get train accuracy
print(
"Validation_accuracy: %.3f, train_accuracy: %.3f, settings: %s, %s, %s" %
(validation_accuracy, train_accuracy, hls, lr, l2)
)
# If best accuracy, keep the current settings
if validation_accuracy > best_validation_rate:
best_validation_rate = validation_accuracy
best_hls = hls
best_lr = lr
best_l2 = l2
print("Best validation_accuracy:", best_validation_rate)
print("Best settings:")
print("hidden_layer_sizes:", best_hls)
print("learning_rate:", best_lr)
print("l2:", best_l2)
grid_search()
```
One thing to be mindful of with grid search is that people are usually looking for ways to automate choosing hyperparameters, because it is kind of unscientific to choose them yourself. Well, grid search doesn't really solve that problem. You still need to choose which hyperparameter settings to try. If you choose the wrong ones, you are still going to arrive at a poor result. So, unfortunately, there is no way of getting around this trial-and-error method.
---
<br>
# 4. Modifying Grid Search
We are now going to talk about the limitations of grid search, and what you might want to do instead. Let's go over the main problem with grid search. Say we have two hyperparameters that we are trying to optimize: the **learning rate** and the **RMSProp epsilon**. However, let's say that **epsilon** doesn't really have any effect on performance. Since we are still going to try several different values for epsilon even though it has no impact, we are effectively doing five times as much work as necessary to achieve the same result.
For instance, the results along the first column will all be the same (since the learning rate is held constant and changing epsilon has no effect!).
<img src="https://drive.google.com/uc?id=1P1f0m2dyUZUU1Rgd6un4aEkFMm7yXqbs" width="350">
<br>
The same thing would happen for the second column, and so on.
<img src="https://drive.google.com/uc?id=1HYXBS10gn-xSgjNp3n6kL-szVWAQY7KB" width="350">
What is the solution to this? Instead of evenly dividing a grid into equally spaced points, just randomly choose points within the grid. You may get strange learning rates like 0.91324, but this is okay, and it greatly reduces the chance that you will duplicate work unnecessarily.
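A sketch of the difference: with 25 evaluations, a 5x5 grid gives the relevant parameter only 5 distinct values when the other parameter is irrelevant, while 25 random points give 25 distinct values.

```
import numpy as np

rng = np.random.default_rng(0)

# Grid: 5 x 5 = 25 evaluations, but only 5 distinct log learning rates
grid_lr = np.repeat(np.linspace(-5, -1, 5), 5)

# Random: 25 evaluations, 25 distinct log learning rates
random_lr = rng.uniform(-5, -1, size=25)

print(len(np.unique(grid_lr)), len(np.unique(random_lr)))   # 5 25
```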
---
<br>
# 5. Random Search in Code
We will now implement random search in code. It will be very similar to the grid search that we did earlier, but instead of selecting the hyperparameters from a grid, they will be selected randomly.
We can start with our imports.
```
import theano.tensor as T
from theano_ann import ANN
from util import get_spiral, get_clouds
from sklearn.utils import shuffle
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
```
And now we can write our `random_search` function. A few things to note:
> * For the **learning rate** and **L2 penalty** we are actually going to search through the log of these hyperparameters. There are two reasons for this. The first is that the learning rate and L2 penalty have to be positive, so if we work in the log space and then later exponentiate these values, we can guarantee that they will always be positive. The second reason is because this is the scale that we usually test hyperparameters such as learning rate and regularization. Typically if you have tried a learning rate of 0.1, you won't then try a learning rate of 0.099 or 0.098. Instead you would try 0.1, 0.01, 0.001 and so on- this is a log scale.
* When we randomly select new hyperparameters, we can think of each hyperparameter as a dimension, so the complete set is like a vector. We are essentially searching within a hypersphere around this vector: in simple terms, we go plus or minus a certain amount in each direction, i.e., for each parameter. For example, when we select a new number of hidden layers, the new number will be the current number minus 1, plus 0, or plus 1. Of course, the number of hidden layers cannot be less than 1, so we take the max of that and 1. We do the same thing for M, the number of hidden units, but we search in multiples of 10.
```
"""Function to perform a random search of hyperparameters"""
def random_search():
X, Y = get_spiral() # Get the data
X, Y = shuffle(X, Y)
    Ntrain = int(0.7 * len(X))  # split data into train & test
    Xtrain, Ytrain = X[:Ntrain], Y[:Ntrain]
Xtest, Ytest = X[Ntrain:], Y[Ntrain:]
# Choose starting set of hyperparameters
M = 20 # Number hidden units
nHidden = 2 # Number hidden layers
log_lr = -4 # Log of learning rate
log_l2 = -2 # Log of L2 penalty
max_tries = 30 # Number of diff hyperparam settings to try
    # Initialize best-so-far records
    best_validation_rate = 0
    best_M = None
    best_nHidden = None
    best_lr = None
    best_l2 = None
# Loop through all possible hyperparameter settings
for _ in range(max_tries):
model = ANN([M]*nHidden) # Create model with current hiddenlayer size
model.fit(
Xtrain, Ytrain,
learning_rate=10**log_lr, reg=10**log_l2,
mu=0.99, epochs=3000, show_fig=False
) # Fit model with current l-rate and l2 pen
validation_accuracy = model.score(Xtest, Ytest)
train_accuracy = model.score(Xtrain, Ytrain)
print(
"validation_accuracy: %.3f, train_accuracy: %.3f, settings: %s, %s, %s" %
(validation_accuracy, train_accuracy, [M]*nHidden, log_lr, log_l2)
)
# If validation accuracy is better than the best so far, set these as new hyperparms
if validation_accuracy > best_validation_rate:
best_validation_rate = validation_accuracy
best_M = M
best_nHidden = nHidden
best_lr = log_lr
best_l2 = log_l2
# Randomly select new hyperparameters (this is different from grid search)
nHidden = best_nHidden + np.random.randint(-1, 2) # -1, 0, or 1
nHidden = max(1, nHidden)
M = best_M + np.random.randint(-1, 2)*10
M = max(10, M)
log_lr = best_lr + np.random.randint(-1, 2)
log_l2 = best_l2 + np.random.randint(-1, 2)
print("Best validation_accuracy:", best_validation_rate)
print("Best settings:")
print("best_M:", best_M)
print("best_nHidden:", best_nHidden)
print("learning_rate:", best_lr)
print("l2:", best_l2)
if __name__ == '__main__':
random_search()
```
# Rate–distortion experiments with toy sources
This notebook contains code to train VECVQ and NTC models using stochastic rate–distortion optimization.
The Laplace and Banana sources are described in:
> "Nonlinear Transform Coding"<br />
> J. Ballé, P. A. Chou, D. Minnen, S. Singh, N. Johnston, E. Agustsson, S. J. Hwang, G. Toderici<br />
> https://arxiv.org/abs/2007.03034
The Sawbridge process is described in:
> "Neural Networks Optimally Compress the Sawbridge"<br />
> A. B. Wagner, J. Ballé<br />
> https://arxiv.org/abs/2011.05065
This notebook requires TFC v2 (`pip install tensorflow-compression==2.*`).
```
#@title Dependencies for Colab
# Run this cell to install the necessary dependencies when running the notebook
# directly in a Colaboratory hosted runtime from Github.
!pip uninstall -y tensorflow
!pip install tf-nightly==2.5.0.dev20210312 tensorflow-compression==2.1
![[ -e /tfc ]] || git clone https://github.com/tensorflow/compression /tfc
%cd /tfc/models
#@title Imports
from absl import logging
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
import tensorflow_compression as tfc
tfm = tf.math
tfkl = tf.keras.layers
tfpb = tfp.bijectors
tfpd = tfp.distributions
#@title Matplotlib configuration
import cycler
import matplotlib as mpl
import matplotlib.pyplot as plt
_colors = [
"#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd",
"#8c564b", "#e377c2", "#7f7f7f", "#bcbd22", "#17becf",
]
plt.rc("axes", facecolor="white", labelsize="large",
prop_cycle=cycler.cycler(color=_colors))
plt.rc("grid", color="black", alpha=.1)
plt.rc("legend", frameon=True, framealpha=.9, borderpad=.5, handleheight=1,
fontsize="large")
plt.rc("image", cmap="viridis", interpolation="nearest")
plt.rc("figure", figsize=(16, 8))
#@title Source definitions
from toy_sources import sawbridge
def _rotation_2d(degrees):
phi = tf.convert_to_tensor(degrees / 180 * np.pi, dtype=tf.float32)
rotation = [[tfm.cos(phi), -tfm.sin(phi)], [tfm.sin(phi), tfm.cos(phi)]]
rotation = tf.linalg.LinearOperatorFullMatrix(
rotation, is_non_singular=True, is_square=True)
return rotation
def get_laplace(loc=0, scale=1):
return tfpd.Independent(
tfpd.Laplace(loc=[loc], scale=[scale]),
reinterpreted_batch_ndims=1,
)
def get_banana():
return tfpd.TransformedDistribution(
tfpd.Independent(tfpd.Normal(loc=[0, 0], scale=[3, .5]), 1),
tfpb.Invert(tfpb.Chain([
tfpb.RealNVP(
num_masked=1,
shift_and_log_scale_fn=lambda x, _: (.1 * x ** 2, None)),
tfpb.ScaleMatvecLinearOperator(_rotation_2d(240)),
tfpb.Shift([1, 1]),
])),
)
def get_sawbridge(order=1, stationary=False, num_points=1024):
index_points = tf.linspace(0., 1., num_points)
return sawbridge.Sawbridge(
index_points, stationary=stationary, order=order)
#@title Model definitions
import pywt
import scipy
from toy_sources import ntc
from toy_sources import vecvq
def _get_activation(activation, dtype):
if not activation:
return None
if activation == "gdn":
return tfc.GDN(dtype=dtype)
elif activation == "igdn":
return tfc.GDN(inverse=True, dtype=dtype)
else:
return getattr(tf.nn, activation)
def _make_nlp(units, activation, name, input_shape, dtype):
kwargs = [dict( # pylint:disable=g-complex-comprehension
units=u, use_bias=True, activation=activation,
name=f"{name}_{i}", dtype=dtype,
) for i, u in enumerate(units)]
kwargs[0].update(input_shape=input_shape)
kwargs[-1].update(activation=None)
return tf.keras.Sequential(
[tf.keras.layers.Dense(**k) for k in kwargs], name=name)
def get_ntc_mlp_model(analysis_filters, synthesis_filters,
analysis_activation, synthesis_activation,
latent_dims, source, dtype=tf.float32, **kwargs):
"""NTC with MLP transforms."""
source_dims, = source.event_shape
analysis = _make_nlp(
analysis_filters + [latent_dims],
_get_activation(analysis_activation, dtype),
"analysis",
[source_dims],
dtype,
)
synthesis = _make_nlp(
synthesis_filters + [source_dims],
_get_activation(synthesis_activation, dtype),
"synthesis",
[latent_dims],
dtype,
)
return ntc.NTCModel(
analysis=analysis, synthesis=synthesis, source=source, dtype=dtype,
**kwargs)
def get_ltc_model(latent_dims, source, dtype=tf.float32, **kwargs):
"""LTC."""
source_dims, = source.event_shape
analysis = tf.keras.Sequential([
tf.keras.layers.Dense(
latent_dims, use_bias=True, activation=None, name="analysis",
input_shape=[source_dims], dtype=dtype),
], name="analysis")
synthesis = tf.keras.Sequential([
tf.keras.layers.Dense(
source_dims, use_bias=True, activation=None, name="synthesis",
input_shape=[latent_dims], dtype=dtype),
], name="synthesis")
return ntc.NTCModel(
analysis=analysis, synthesis=synthesis, source=source, dtype=dtype,
**kwargs)
@tf.function
def estimate_klt(source, num_samples, latent_dims):
"""Estimates KLT."""
dims = source.event_shape[0]
def energy(samples):
c = tf.linalg.matmul(samples, samples, transpose_a=True)
return c / tf.cast(num_samples[1], tf.float32)
# Estimate mean.
mean = tf.zeros([dims])
for _ in range(num_samples[0]):
samples = source.sample(num_samples[1])
mean += tf.reduce_mean(samples, axis=0)
mean /= tf.cast(num_samples[0], tf.float32)
# Estimate covariance.
covariance = tf.zeros([dims, dims])
for _ in range(num_samples[0]):
samples = source.sample(num_samples[1])
covariance += energy(samples - mean)
covariance /= tf.cast(num_samples[0], tf.float32)
variance = tf.reduce_sum(tf.linalg.diag_part(covariance))
tf.print("SOURCE VARIANCE:", variance)
# Compute first latent_dims eigenvalues in descending order.
eig, eigv = tf.linalg.eigh(covariance)
eig = eig[::-1]
eigv = eigv[:, ::-1]
eig = eig[:latent_dims]
eigv = eigv[:, :latent_dims]
tf.print("SOURCE EIGENVALUES:", eig)
# Estimate covariance again after whitening.
whitened = tf.zeros([latent_dims, latent_dims])
for _ in range(num_samples[0]):
samples = source.sample(num_samples[1])
whitened += energy(tf.linalg.matmul(samples - mean, eigv))
whitened /= tf.cast(num_samples[0], tf.float32)
whitened_var = tf.linalg.diag_part(whitened)
whitened /= tf.sqrt(
whitened_var[:, None] * whitened_var[None, :]) + 1e-20
error = tf.linalg.set_diag(abs(whitened), tf.zeros(latent_dims))
error = tf.reduce_max(error)
tf.print("MAX. CORRELATION COEFFICIENT:", error)
return eigv, error
class ScaleAndBias(tf.keras.layers.Layer):
"""Multiplies each channel by a learned scaling factor and adds a bias."""
def __init__(self, scale_first, init_scale=1, **kwargs):
super().__init__(**kwargs)
self.scale_first = bool(scale_first)
self.init_scale = float(init_scale)
def build(self, input_shape):
input_shape = tf.TensorShape(input_shape)
channels = int(input_shape[-1])
self._log_factors = self.add_weight(
name="log_factors", shape=[channels],
initializer=tf.keras.initializers.Constant(
tf.math.log(self.init_scale)))
self.bias = self.add_weight(
name="bias", shape=[channels],
initializer=tf.keras.initializers.Zeros())
super().build(input_shape)
@property
def factors(self):
return tf.math.exp(self._log_factors)
def call(self, inputs):
if self.scale_first:
return inputs * self.factors + self.bias
else:
return (inputs + self.bias) * self.factors
def get_ltc_klt_model(latent_dims, source, num_samples, tolerance,
dtype=tf.float32, **kwargs):
"""LTC constrained to KLT."""
source_dims, = source.event_shape
# Estimate KLT from samples.
eigv, error = estimate_klt(
source, tf.constant(num_samples), tf.constant(latent_dims))
assert error < tolerance, error.numpy()
eigv = tf.cast(eigv, dtype)
analysis = tf.keras.Sequential([
tf.keras.layers.Dense(
latent_dims, use_bias=False, activation=None, name="klt",
kernel_initializer=lambda *a, **k: eigv,
trainable=False, input_shape=[source_dims], dtype=dtype),
ScaleAndBias(
scale_first=True, name="klt_scaling", dtype=dtype),
], name="analysis")
synthesis = tf.keras.Sequential([
ScaleAndBias(
scale_first=False, name="iklt_scaling",
input_shape=[latent_dims], dtype=dtype),
tf.keras.layers.Dense(
source_dims, use_bias=False, activation=None, name="iklt",
kernel_initializer=lambda *a, **k: tf.transpose(eigv),
trainable=False, dtype=dtype),
], name="synthesis")
return ntc.NTCModel(
analysis=analysis, synthesis=synthesis, source=source, dtype=dtype,
**kwargs)
def get_ltc_ortho_model(latent_dims, source, transform, dtype=tf.float32,
**kwargs):
"""LTC constrained to fixed orthonormal transforms."""
source_dims, = source.event_shape
if transform == "dct":
basis = scipy.fftpack.dct(np.eye(source_dims), norm="ortho")
else:
num_levels = int(round(np.log2(source_dims)))
assert 2 ** num_levels == source_dims
basis = []
for impulse in np.eye(source_dims):
levels = pywt.wavedec(
impulse, transform, mode="periodization", level=num_levels)
basis.append(np.concatenate(levels))
basis = np.array(basis)
# `basis` must have IO format, so DC should be in first column.
assert np.allclose(basis[:, 0], basis[0, 0])
assert not np.allclose(basis[0, :], basis[0, 0])
# `basis` should be orthonormal.
assert np.allclose(np.dot(basis, basis.T), np.eye(source_dims))
# Only take the first `latent_dims` basis functions.
basis = tf.constant(basis[:, :latent_dims], dtype=dtype)
analysis = tf.keras.Sequential([
tf.keras.layers.Dense(
latent_dims, use_bias=False, activation=None, name=transform,
kernel_initializer=lambda *a, **k: basis,
trainable=False, input_shape=[source_dims], dtype=dtype),
ScaleAndBias(
scale_first=True, name=f"{transform}_scaling", dtype=dtype),
], name="analysis")
synthesis = tf.keras.Sequential([
ScaleAndBias(
scale_first=False, name=f"i{transform}_scaling",
input_shape=[latent_dims], dtype=dtype),
tf.keras.layers.Dense(
source_dims, use_bias=False, activation=None, name=f"i{transform}",
kernel_initializer=lambda *a, **k: tf.transpose(basis),
trainable=False, dtype=dtype),
], name="synthesis")
return ntc.NTCModel(
analysis=analysis, synthesis=synthesis, source=source, dtype=dtype,
**kwargs)
def get_vecvq_model(**kwargs):
return vecvq.VECVQModel(**kwargs)
#@title Learning schedule definitions
def get_lr_scheduler(learning_rate, epochs, warmup_epochs=0):
"""Returns a learning rate scheduler function for the given configuration."""
def scheduler(epoch, lr):
del lr # unused
if epoch < warmup_epochs:
return learning_rate * 10. ** (epoch - warmup_epochs)
if epoch < 1/2 * epochs:
return learning_rate
if epoch < 3/4 * epochs:
return learning_rate * 1e-1
if epoch < 7/8 * epochs:
return learning_rate * 1e-2
return learning_rate * 1e-3
return scheduler
class AlphaScheduler(tf.keras.callbacks.Callback):
"""Alpha parameter scheduler."""
def __init__(self, schedule, verbose=0):
super().__init__()
self.schedule = schedule
self.verbose = verbose
def on_epoch_begin(self, epoch, logs=None):
if not hasattr(self.model, "alpha"):
# Silently ignore models that don't have an alpha parameter.
return
self.model.force_alpha = self.schedule(epoch)
def on_epoch_end(self, epoch, logs=None):
if not hasattr(self.model, "alpha"):
# Silently ignore models that don't have an alpha parameter.
return
if not hasattr(self.model, "soft_round") or not any(self.model.soft_round):
# Silently ignore models that don't use soft rounding.
return
logs["alpha"] = self.model.alpha
def get_alpha_scheduler(epochs):
"""Returns an alpha scheduler function for the given configuration."""
def scheduler(epoch):
if epoch < 1/4 * epochs:
return 3. * (epoch + 1) / (epochs/4 + 1)
return None
return scheduler
#@title Tensorboard logging callback
class LogCallback(tf.keras.callbacks.Callback):
"""Logs metrics to TensorBoard."""
def __init__(self, log_path):
super().__init__()
self.log_path = log_path
self._train_graph = None
self._test_graph = None
def on_train_begin(self, logs=None):
del logs # unused
if not hasattr(self, "train_writer"):
self.train_writer = tf.summary.create_file_writer(
self.log_path + "/train")
self.log_variables()
def on_test_begin(self, logs=None):
del logs # unused
if not hasattr(self, "test_writer"):
self.test_writer = tf.summary.create_file_writer(
self.log_path + "/val")
def on_test_end(self, logs=None):
# Log test metrics.
self.log_tensorboard(
self.test_writer, {"metrics/" + l: v for l, v in logs.items()})
self.test_writer.flush()
def on_epoch_begin(self, epoch, logs=None):
del logs # unused
self.model.epoch.assign(epoch)
def on_epoch_end(self, epoch, logs=None):
logs = dict(logs)
lr = logs.pop("lr")
alpha = logs.pop("alpha", None)
# Log training metrics.
logs = {l: v for l, v in logs.items() if not l.startswith("val_")}
self.log_tensorboard(
self.train_writer, {"metrics/" + l: v for l, v in logs.items()})
# Log learning rate.
logs = {"learning rate": lr}
if alpha is not None:
logs["alpha"] = alpha
self.log_tensorboard(self.train_writer, logs)
self.train_writer.flush()
def on_train_batch_begin(self, batch, logs=None):
del logs # unused
if batch == 0 and not self._train_graph:
with self.train_writer.as_default():
tf.summary.trace_on(graph=True, profiler=False)
self._train_graph = "tracing"
def on_train_batch_end(self, batch, logs=None):
del batch, logs # unused
if self._train_graph == "tracing":
with self.train_writer.as_default():
tf.summary.trace_export("step", step=self.model.epoch.value())
self._train_graph = "traced"
def on_test_batch_begin(self, batch, logs=None):
del logs # unused
if batch == 0 and not self._test_graph:
with self.test_writer.as_default():
tf.summary.trace_on(graph=True, profiler=False)
self._test_graph = "tracing"
def on_test_batch_end(self, batch, logs=None):
del batch, logs # unused
if self._test_graph == "tracing":
with self.test_writer.as_default():
tf.summary.trace_export("step", step=self.model.epoch.value())
self._test_graph = "traced"
def log_tensorboard(self, writer, logs):
"""Logs the values in `logs` to the summary writer."""
with writer.as_default():
for label, value in logs.items():
tf.summary.scalar(label, value, step=self.model.epoch.value())
def log_variables(self):
"""Logs shape and dtypes of variable collections."""
model = self.model
var_format = lambda v: f"{v.name} {v.dtype} {v.shape}"
logging.info(
"TRAINABLE VARIABLES:\n%s\n",
"\n".join(var_format(v) for v in model.trainable_variables))
```
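As an aside (not part of the original notebook), the piecewise learning-rate schedule defined above is easy to sanity-check without TensorFlow. The sketch below mirrors `get_lr_scheduler` in plain Python so its breakpoints can be inspected; the epoch counts are made up for brevity:

```python
def lr_schedule(learning_rate, epochs, warmup_epochs=0):
    """Pure-Python mirror of `get_lr_scheduler` above, for inspection only."""
    def scheduler(epoch):
        if epoch < warmup_epochs:
            # Exponential warm-up toward the base learning rate.
            return learning_rate * 10. ** (epoch - warmup_epochs)
        if epoch < 1/2 * epochs:
            return learning_rate
        if epoch < 3/4 * epochs:
            return learning_rate * 1e-1
        if epoch < 7/8 * epochs:
            return learning_rate * 1e-2
        return learning_rate * 1e-3
    return scheduler

# With 8 epochs and 1 warm-up epoch: one warm-up step, then three staircase drops.
sched = lr_schedule(1e-3, epochs=8, warmup_epochs=1)
print([sched(e) for e in range(8)])
```

The first half of training runs at the base rate, then the rate decays by a factor of ten at the 1/2, 3/4, and 7/8 marks.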
# Laplace source
```
#@title VECVQ
work_path = "/tmp/toy_sources/laplace/vecvq"
epochs = 50
steps_per_epoch = 1000
batch_size = 1024
validation_size = 10000000
validation_batch_size = 65536
learning_rate = 1e-3
codebook_size = 128
lmbda = 1.
# tf.debugging.enable_check_numerics()
source = get_laplace()
optimizer = tf.keras.optimizers.Adam()
model = get_vecvq_model(
codebook_size=codebook_size, initialize="uniform-40",
source=source, lmbda=lmbda, distortion_loss="sse")
model.compile(optimizer=optimizer)
# Add an epoch counter for keeping track in checkpoints.
model.epoch = tf.Variable(0, trainable=False, dtype=tf.int64)
lr_scheduler = get_lr_scheduler(learning_rate, epochs)
alpha_scheduler = get_alpha_scheduler(epochs)
callback_list = [
tf.keras.callbacks.ModelCheckpoint(
work_path + "/checkpoints/ckpt-{epoch:04d}",
save_weights_only=True),
tf.keras.callbacks.experimental.BackupAndRestore(
work_path + "/backup"),
tf.keras.callbacks.LearningRateScheduler(lr_scheduler),
AlphaScheduler(alpha_scheduler),
LogCallback(work_path),
]
model.fit(
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
validation_size=validation_size,
validation_batch_size=validation_batch_size,
verbose=2,
callbacks=tf.keras.callbacks.CallbackList(callback_list, model=model),
)
#@title NTC
work_path = "/tmp/toy_sources/laplace/ntc"
epochs = 50
steps_per_epoch = 1000
batch_size = 1024
validation_size = 10000000
validation_batch_size = 65536
learning_rate = 1e-3
latent_dims = 1
analysis_filters = [50, 50]
analysis_activation = "softplus"
synthesis_filters = [50, 50]
synthesis_activation = "softplus"
prior_type = "deep"
dither = (1, 1, 0, 0)
soft_round = (1, 0)
guess_offset = False
lmbda = 1.
# tf.debugging.enable_check_numerics()
source = get_laplace()
optimizer = tf.keras.optimizers.Adam()
model = get_ntc_mlp_model(
latent_dims=latent_dims,
analysis_filters=analysis_filters,
analysis_activation=analysis_activation,
synthesis_filters=synthesis_filters,
synthesis_activation=synthesis_activation,
prior_type=prior_type,
dither=dither,
soft_round=soft_round,
guess_offset=guess_offset,
source=source, lmbda=lmbda, distortion_loss="sse")
model.compile(optimizer=optimizer)
# Add an epoch counter for keeping track in checkpoints.
model.epoch = tf.Variable(0, trainable=False, dtype=tf.int64)
lr_scheduler = get_lr_scheduler(learning_rate, epochs)
alpha_scheduler = get_alpha_scheduler(epochs)
callback_list = [
tf.keras.callbacks.ModelCheckpoint(
work_path + "/checkpoints/ckpt-{epoch:04d}",
save_weights_only=True),
tf.keras.callbacks.experimental.BackupAndRestore(
work_path + "/backup"),
tf.keras.callbacks.LearningRateScheduler(lr_scheduler),
AlphaScheduler(alpha_scheduler),
LogCallback(work_path),
]
model.fit(
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
validation_size=validation_size,
validation_batch_size=validation_batch_size,
verbose=2,
callbacks=tf.keras.callbacks.CallbackList(callback_list, model=model),
)
model.plot_quantization([(-5, 5, 1000)])
model.plot_transfer([(-5, 5, 1000)])
```
# Banana source
```
#@title VECVQ
work_path = "/tmp/toy_sources/banana/vecvq"
epochs = 100
steps_per_epoch = 1000
batch_size = 1024
validation_size = 10000000
validation_batch_size = 65536
learning_rate = 1e-3
codebook_size = 256
lmbda = 1.
# tf.debugging.enable_check_numerics()
source = get_banana()
optimizer = tf.keras.optimizers.Adam()
model = get_vecvq_model(
codebook_size=codebook_size, initialize="sample",
source=source, lmbda=lmbda, distortion_loss="sse")
model.compile(optimizer=optimizer)
# Add an epoch counter for keeping track in checkpoints.
model.epoch = tf.Variable(0, trainable=False, dtype=tf.int64)
lr_scheduler = get_lr_scheduler(learning_rate, epochs)
alpha_scheduler = get_alpha_scheduler(epochs)
callback_list = [
tf.keras.callbacks.ModelCheckpoint(
work_path + "/checkpoints/ckpt-{epoch:04d}",
save_weights_only=True),
tf.keras.callbacks.experimental.BackupAndRestore(
work_path + "/backup"),
tf.keras.callbacks.LearningRateScheduler(lr_scheduler),
AlphaScheduler(alpha_scheduler),
LogCallback(work_path),
]
model.fit(
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
validation_size=validation_size,
validation_batch_size=validation_batch_size,
verbose=2,
callbacks=tf.keras.callbacks.CallbackList(callback_list, model=model),
)
#@title NTC
work_path = "/tmp/toy_sources/banana/ntc"
epochs = 100
steps_per_epoch = 1000
batch_size = 1024
validation_size = 10000000
validation_batch_size = 65536
learning_rate = 1e-3
latent_dims = 2
analysis_filters = [100, 100]
analysis_activation = "softplus"
synthesis_filters = [100, 100]
synthesis_activation = "softplus"
prior_type = "deep"
dither = (1, 1, 0, 0)
soft_round = (1, 0)
guess_offset = False
lmbda = 1.
# tf.debugging.enable_check_numerics()
source = get_banana()
optimizer = tf.keras.optimizers.Adam()
model = get_ntc_mlp_model(
latent_dims=latent_dims,
analysis_filters=analysis_filters,
analysis_activation=analysis_activation,
synthesis_filters=synthesis_filters,
synthesis_activation=synthesis_activation,
prior_type=prior_type,
dither=dither,
soft_round=soft_round,
guess_offset=guess_offset,
source=source, lmbda=lmbda, distortion_loss="sse")
model.compile(optimizer=optimizer)
# Add an epoch counter for keeping track in checkpoints.
model.epoch = tf.Variable(0, trainable=False, dtype=tf.int64)
lr_scheduler = get_lr_scheduler(learning_rate, epochs)
alpha_scheduler = get_alpha_scheduler(epochs)
callback_list = [
tf.keras.callbacks.ModelCheckpoint(
work_path + "/checkpoints/ckpt-{epoch:04d}",
save_weights_only=True),
tf.keras.callbacks.experimental.BackupAndRestore(
work_path + "/backup"),
tf.keras.callbacks.LearningRateScheduler(lr_scheduler),
AlphaScheduler(alpha_scheduler),
LogCallback(work_path),
]
model.fit(
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
validation_size=validation_size,
validation_batch_size=validation_batch_size,
verbose=2,
callbacks=tf.keras.callbacks.CallbackList(callback_list, model=model),
)
model.plot_quantization(2 * [(-5, 5, 1000)])
```
# Sawbridge source
```
#@title VECVQ
work_path = "/tmp/toy_sources/sawbridge/vecvq"
epochs = 200
steps_per_epoch = 1000
batch_size = 1024
validation_size = 10000000
validation_batch_size = 4096
learning_rate = 1e-3
codebook_size = 50
lmbda = 1.
# tf.debugging.enable_check_numerics()
source = get_sawbridge()
optimizer = tf.keras.optimizers.Adam()
model = get_vecvq_model(
codebook_size=codebook_size, initialize="sample-.1",
source=source, lmbda=lmbda, distortion_loss="mse")
model.compile(optimizer=optimizer)
# Add an epoch counter for keeping track in checkpoints.
model.epoch = tf.Variable(0, trainable=False, dtype=tf.int64)
lr_scheduler = get_lr_scheduler(learning_rate, epochs)
alpha_scheduler = get_alpha_scheduler(epochs)
callback_list = [
tf.keras.callbacks.ModelCheckpoint(
work_path + "/checkpoints/ckpt-{epoch:04d}",
save_weights_only=True),
tf.keras.callbacks.experimental.BackupAndRestore(
work_path + "/backup"),
tf.keras.callbacks.LearningRateScheduler(lr_scheduler),
AlphaScheduler(alpha_scheduler),
LogCallback(work_path),
]
model.fit(
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
validation_size=validation_size,
validation_batch_size=validation_batch_size,
verbose=2,
callbacks=tf.keras.callbacks.CallbackList(callback_list, model=model),
)
#@title NTC
work_path = "/tmp/toy_sources/sawbridge/ntc"
epochs = 200
steps_per_epoch = 1000
batch_size = 1024
validation_size = 10000000
validation_batch_size = 4096
learning_rate = 1e-3
latent_dims = 10
analysis_filters = [100, 100]
analysis_activation = "leaky_relu"
synthesis_filters = [100, 100]
synthesis_activation = "leaky_relu"
prior_type = "deep"
dither = (1, 1, 0, 0)
soft_round = (1, 0)
guess_offset = False
lmbda = 1.
# tf.debugging.enable_check_numerics()
source = get_sawbridge()
optimizer = tf.keras.optimizers.Adam()
model = get_ntc_mlp_model(
latent_dims=latent_dims,
analysis_filters=analysis_filters,
analysis_activation=analysis_activation,
synthesis_filters=synthesis_filters,
synthesis_activation=synthesis_activation,
prior_type=prior_type,
dither=dither,
soft_round=soft_round,
guess_offset=guess_offset,
source=source, lmbda=lmbda, distortion_loss="mse")
model.compile(optimizer=optimizer)
# Add an epoch counter for keeping track in checkpoints.
model.epoch = tf.Variable(0, trainable=False, dtype=tf.int64)
lr_scheduler = get_lr_scheduler(learning_rate, epochs)
alpha_scheduler = get_alpha_scheduler(epochs)
callback_list = [
tf.keras.callbacks.ModelCheckpoint(
work_path + "/checkpoints/ckpt-{epoch:04d}",
save_weights_only=True),
tf.keras.callbacks.experimental.BackupAndRestore(
work_path + "/backup"),
tf.keras.callbacks.LearningRateScheduler(lr_scheduler),
AlphaScheduler(alpha_scheduler),
LogCallback(work_path),
]
model.fit(
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
validation_size=validation_size,
validation_batch_size=validation_batch_size,
verbose=2,
callbacks=tf.keras.callbacks.CallbackList(callback_list, model=model),
)
#@title KLT (dither)
work_path = "/tmp/toy_sources/sawbridge/klt_dither"
epochs = 200
steps_per_epoch = 1000
batch_size = 1024
validation_size = 10000000
validation_batch_size = 4096
learning_rate = 1e-3
num_samples = (1000, 10000)
tolerance = 1e-2
latent_dims = 50
prior_type = "deep"
dither = (1, 1, 1, 1)
soft_round = (0, 0)
guess_offset = False
lmbda = 1.
# tf.debugging.enable_check_numerics()
source = get_sawbridge()
optimizer = tf.keras.optimizers.Adam()
model = get_ltc_klt_model(
num_samples=num_samples,
tolerance=tolerance,
latent_dims=latent_dims,
prior_type=prior_type,
dither=dither,
soft_round=soft_round,
guess_offset=guess_offset,
source=source, lmbda=lmbda, distortion_loss="mse")
model.compile(optimizer=optimizer)
# Add an epoch counter for keeping track in checkpoints.
model.epoch = tf.Variable(0, trainable=False, dtype=tf.int64)
lr_scheduler = get_lr_scheduler(learning_rate, epochs)
alpha_scheduler = get_alpha_scheduler(epochs)
callback_list = [
tf.keras.callbacks.ModelCheckpoint(
work_path + "/checkpoints/ckpt-{epoch:04d}",
save_weights_only=True),
tf.keras.callbacks.experimental.BackupAndRestore(
work_path + "/backup"),
tf.keras.callbacks.LearningRateScheduler(lr_scheduler),
AlphaScheduler(alpha_scheduler),
LogCallback(work_path),
]
model.fit(
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
validation_size=validation_size,
validation_batch_size=validation_batch_size,
verbose=2,
callbacks=tf.keras.callbacks.CallbackList(callback_list, model=model),
)
#@title Daubechies 4-tap
work_path = "/tmp/toy_sources/sawbridge/daub4"
epochs = 200
steps_per_epoch = 1000
batch_size = 1024
validation_size = 10000000
validation_batch_size = 4096
learning_rate = 1e-3
transform = "db4"
latent_dims = 50
prior_type = "deep"
dither = (1, 1, 1, 1)
soft_round = (0, 0)
guess_offset = False
lmbda = 1.
# tf.debugging.enable_check_numerics()
source = get_sawbridge()
optimizer = tf.keras.optimizers.Adam()
model = get_ltc_ortho_model(
transform=transform,
latent_dims=latent_dims,
prior_type=prior_type,
dither=dither,
soft_round=soft_round,
guess_offset=guess_offset,
source=source, lmbda=lmbda, distortion_loss="mse")
model.compile(optimizer=optimizer)
# Add an epoch counter for keeping track in checkpoints.
model.epoch = tf.Variable(0, trainable=False, dtype=tf.int64)
lr_scheduler = get_lr_scheduler(learning_rate, epochs)
alpha_scheduler = get_alpha_scheduler(epochs)
callback_list = [
tf.keras.callbacks.ModelCheckpoint(
work_path + "/checkpoints/ckpt-{epoch:04d}",
save_weights_only=True),
tf.keras.callbacks.experimental.BackupAndRestore(
work_path + "/backup"),
tf.keras.callbacks.LearningRateScheduler(lr_scheduler),
AlphaScheduler(alpha_scheduler),
LogCallback(work_path),
]
model.fit(
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
validation_size=validation_size,
validation_batch_size=validation_batch_size,
verbose=2,
callbacks=tf.keras.callbacks.CallbackList(callback_list, model=model),
)
```
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/pipelines).**
---
In this exercise, you will use **pipelines** to improve the efficiency of your machine learning code.
# Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
```
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex4 import *
print("Setup Complete")
```
You will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course).

Run the next code cell without changes to load the training and validation sets in `X_train`, `X_valid`, `y_train`, and `y_valid`. The test set is loaded in `X_test`.
```
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X_full = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
X_full.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = X_full.SalePrice
X_full.drop(['SalePrice'], axis=1, inplace=True)
# Break off validation set from training data
X_train_full, X_valid_full, y_train, y_valid = train_test_split(X_full, y,
train_size=0.8, test_size=0.2,
random_state=0)
# "Cardinality" means the number of unique values in a column
# Select categorical columns with relatively low cardinality (convenient but arbitrary)
categorical_cols = [cname for cname in X_train_full.columns if
X_train_full[cname].nunique() < 10 and
X_train_full[cname].dtype == "object"]
# Select numerical columns
numerical_cols = [cname for cname in X_train_full.columns if
X_train_full[cname].dtype in ['int64', 'float64']]
# Keep selected columns only
my_cols = categorical_cols + numerical_cols
X_train = X_train_full[my_cols].copy()
X_valid = X_valid_full[my_cols].copy()
X_test = X_test_full[my_cols].copy()
X_train.head()
```
The next code cell uses code from the tutorial to preprocess the data and train a model. Run this code without changes.
```
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Preprocessing for numerical data
numerical_transformer = SimpleImputer(strategy='constant')
# Preprocessing for categorical data
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
# Bundle preprocessing for numerical and categorical data
preprocessor = ColumnTransformer(
transformers=[
('num', numerical_transformer, numerical_cols),
('cat', categorical_transformer, categorical_cols)
])
# Define model
model = RandomForestRegressor(n_estimators=100, random_state=0)
# Bundle preprocessing and modeling code in a pipeline
clf = Pipeline(steps=[('preprocessor', preprocessor),
('model', model)
])
# Preprocessing of training data, fit model
clf.fit(X_train, y_train)
# Preprocessing of validation data, get predictions
preds = clf.predict(X_valid)
print('MAE:', mean_absolute_error(y_valid, preds))
```
The code yields a value around 17862 for the mean absolute error (MAE). In the next step, you will amend the code to do better.
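For reference, MAE is simply the mean of the absolute prediction errors; a minimal plain-Python sketch (the numbers are made up, not from the competition data):

```python
def mae(y_true, y_pred):
    """Average absolute difference between true and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Errors of 10000 and 1000 average out to 5500.0.
print(mae([200000, 180000], [190000, 181000]))  # → 5500.0
```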
# Step 1: Improve the performance
### Part A
Now, it's your turn! In the code cell below, define your own preprocessing steps and random forest model. Fill in values for the following variables:
- `numerical_transformer`
- `categorical_transformer`
- `model`
To pass this part of the exercise, you need only define valid preprocessing steps and a random forest model.
```
# Preprocessing for numerical data
numerical_transformer = SimpleImputer(strategy='mean') # Your code here
# Preprocessing for categorical data
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
]) # Your code here
# Bundle preprocessing for numerical and categorical data
preprocessor = ColumnTransformer(
transformers=[
('num', numerical_transformer, numerical_cols),
('cat', categorical_transformer, categorical_cols)
])
# Define model
model = RandomForestRegressor(n_estimators=100, random_state=0) # Your code here
# Check your answer
step_1.a.check()
# Lines below will give you a hint or solution code
#step_1.a.hint()
#step_1.a.solution()
```
### Part B
Run the code cell below without changes.
To pass this step, you need to have defined a pipeline in **Part A** that achieves lower MAE than the code above. You're encouraged to take your time here and try out many different approaches, to see how low you can get the MAE! (_If your code does not pass, please amend the preprocessing steps and model in Part A._)
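One hedged direction to explore (an illustration, not the reference solution): swap the numerical imputation strategy and grow a larger forest. The sketch below is self-contained on toy data, so the column names and values are invented:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy frame with one numeric and one categorical column (values are made up).
X = pd.DataFrame({"LotArea": [8450.0, np.nan, 11250.0, 9550.0],
                  "Street": ["Pave", "Grvl", "Pave", np.nan]})
y = [208500, 181500, 223500, 140000]

preprocessor = ColumnTransformer(transformers=[
    # Median imputation instead of the tutorial's constant fill.
    ("num", SimpleImputer(strategy="median"), ["LotArea"]),
    ("cat", Pipeline(steps=[
        ("imputer", SimpleImputer(strategy="most_frequent")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),
    ]), ["Street"]),
])
pipe = Pipeline(steps=[("preprocessor", preprocessor),
                       ("model", RandomForestRegressor(n_estimators=200,
                                                       random_state=0))])
pipe.fit(X, y)
preds = pipe.predict(X)
```

On the real data, compare the resulting validation MAE against the baseline before settling on a configuration.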
```
# Bundle preprocessing and modeling code in a pipeline
my_pipeline = Pipeline(steps=[('preprocessor', preprocessor),
('model', model)
])
# Preprocessing of training data, fit model
my_pipeline.fit(X_train, y_train)
# Preprocessing of validation data, get predictions
preds = my_pipeline.predict(X_valid)
# Evaluate the model
score = mean_absolute_error(y_valid, preds)
print('MAE:', score)
# Check your answer
step_1.b.check()
# Line below will give you a hint
#step_1.b.hint()
```
# Step 2: Generate test predictions
Now, you'll use your trained model to generate predictions with the test data.
```
# Preprocessing of test data, get predictions
preds_test = my_pipeline.predict(X_test) # Your code here
# Check your answer
step_2.check()
# Lines below will give you a hint or solution code
#step_2.hint()
#step_2.solution()
```
Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition.
```
# Save test predictions to file
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False)
```
# Submit your results
Once you have successfully completed Step 2, you're ready to submit your results to the leaderboard! If you choose to do so, make sure that you have already joined the competition by clicking on the **Join Competition** button at [this link](https://www.kaggle.com/c/home-data-for-ml-course).
1. Begin by clicking on the blue **Save Version** button in the top right corner of the window. This will generate a pop-up window.
2. Ensure that the **Save and Run All** option is selected, and then click on the blue **Save** button.
3. This generates a window in the bottom left corner of the notebook. After it has finished running, click on the number to the right of the **Save Version** button. This pulls up a list of versions on the right of the screen. Click on the ellipsis **(...)** to the right of the most recent version, and select **Open in Viewer**. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions.
4. Click on the **Output** tab on the right of the screen. Then, click on the file you would like to submit, and click on the blue **Submit** button to submit your results to the leaderboard.
You have now successfully submitted to the competition!
If you want to keep working to improve your performance, select the blue **Edit** button in the top right of the screen. Then you can change your code and repeat the process. There's a lot of room to improve, and you will climb up the leaderboard as you work.
# Keep going
Move on to learn about [**cross-validation**](https://www.kaggle.com/alexisbcook/cross-validation), a technique you can use to obtain more accurate estimates of model performance!
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161289) to chat with other Learners.*
```
# this generate_subject_files differs in that you can specify a custom number of real & sentinel images for tutorials
# previously, it was assumed the tutorial would have either equal numbers of real & sentinel images, or only real images
import generate_codecharts
import os
import string
import random
import json
import matplotlib.pyplot as plt
import numpy as np
import base64
import glob
# TODO: run generate_subject_files/check_bad_img.py
# PARAMETERS for generating subject files
num_subject_files = 100
num_images_per_sf = 50
num_imgs_per_tutorial = 3
num_sentinels_per_tutorial = 3
num_sentinels_per_sf = 5 # excluding the tutorial
add_sentinels_to_tutorial = True
ncodecharts = 2000 # number of codecharts to generate in total
sentinel_images_per_bucket = 50
# params for generating sentinels
target_type = "img" # one of fix_cross, red_dot, or img
target_imdir = "sentinel_target_images"
# set these parameters
num_buckets = 10
start_bucket_at = 0
which_buckets = range(10) # can make this a list of specific buckets e.g., [4,5,6]
rootdir = './task_data'
real_image_dir = os.path.join(rootdir,'real_images')
real_CC_dir = os.path.join(rootdir,'real_CC')
sentinel_image_dir = os.path.join(rootdir,'sentinel_images')
sentinel_CC_dir = os.path.join(rootdir,'sentinel_CC')
sentinel_targetim_dir = os.path.join(rootdir, 'sentinel_target')
import create_padded_image_dir
all_image_dir = os.path.join(rootdir,'all_images')
sourcedir = '../../attention-code-charts-eyetracking-datasets/Selection/SALICON_subset' # take images from here
if not os.path.exists(all_image_dir):
print('Creating directory %s'%(all_image_dir))
os.makedirs(all_image_dir)
allfiles = []
for ext in ('*.jpeg', '*.png', '*.jpg'):
allfiles.extend(glob.glob(os.path.join(sourcedir, ext)))
print("len allfiles", len(allfiles))
image_width,image_height = create_padded_image_dir.save_padded_images(all_image_dir,allfiles)
from generate_central_fixation_cross import save_fixation_cross
save_fixation_cross(rootdir,image_width,image_height);
from distribute_image_files_by_buckets import distribute_images
distribute_images(all_image_dir,real_image_dir,num_buckets,start_bucket_at)
from collections import Counter
# sanity check that buckets have different images:
temp = []
for bnum in range(num_buckets):
bucketpath = os.path.join(real_image_dir,'bucket%d'%(bnum))
temp.extend(glob.glob(os.path.join(bucketpath,'*.jpg')))
print('Number unique files: %d'%(len(Counter(temp).keys())))
from create_codecharts_dir import create_codecharts
create_codecharts(real_CC_dir,ncodecharts,image_width,image_height)
import generate_sentinels
border_padding = 100 # used to guarantee that chosen sentinel location is not too close to border to be hard to spot
generate_sentinels.generate_sentinels(sentinel_image_dir,sentinel_CC_dir,num_buckets,start_bucket_at,sentinel_images_per_bucket,\
image_width,image_height,border_padding,target_type, target_imdir)
from generate_tutorials import generate_tutorials
# inherit border_padding and fixcross styles from above cell
border_padding = 100
tutorial_source_dir = './tutorial_images' # where tutorial images are stored
tutorial_image_dir = os.path.join(rootdir,'tutorial_images') # where processed tutorial images will be saved
if not os.path.exists(tutorial_image_dir):
print('Creating directory %s'%(tutorial_image_dir))
os.makedirs(tutorial_image_dir)
allfiles = []
for ext in ('*.jpeg', '*.png', '*.jpg'):
allfiles.extend(glob.glob(os.path.join(tutorial_source_dir, ext)))
print("len allfiles", len(allfiles))
create_padded_image_dir.save_padded_images(tutorial_image_dir,allfiles,toplot=False,maxwidth=image_width,maxheight=image_height)
# TODO: or pick a random set of images to serve as tutorial images
N = 6 # number of images to use for tutorials (these will be sampled from to generate subject files below)
# note: make this larger than num_imgs_per_tutorial so not all subject files have the same tutorials
N_sent = 100 # number of sentinels to use for tutorials
# note: if equal to num_sentinels_per_tutorial, all subject files will have the same tutorial sentinels
generate_tutorials(tutorial_image_dir,rootdir,image_width,image_height,border_padding,N,target_type,target_imdir,N_sent)
start_subjects_at = 0 # where to start naming subject files at (if had already created files)
if os.path.exists(os.path.join(rootdir,'subject_files/bucket0')):
subjfiles = glob.glob(os.path.join(rootdir,'subject_files/bucket0/*.json'))
start_subjects_at = len(subjfiles)
#real_codecharts = os.listdir(real_CC_dir)
real_codecharts = glob.glob(os.path.join(real_CC_dir,'*.jpg'))
sentinel_codecharts = glob.glob(os.path.join(sentinel_CC_dir,'*.jpg'))
with open(os.path.join(real_CC_dir,'CC_codes_full.json')) as f:
real_codes_data = json.load(f) # contains mapping of image path to valid codes
## GENERATING SUBJECT FILES
subjdir = os.path.join(rootdir,'subject_files')
if not os.path.exists(subjdir):
os.makedirs(subjdir)
os.makedirs(os.path.join(rootdir,'full_subject_files'))
with open(os.path.join(rootdir,'tutorial_full.json')) as f:
tutorial_data = json.load(f)
tutorial_real_filenames = [fn for fn in tutorial_data.keys() if tutorial_data[fn]['flag']=='tutorial_real']
tutorial_sentinel_filenames = [fn for fn in tutorial_data.keys() if tutorial_data[fn]['flag']=='tutorial_sentinel']
# iterate over all buckets
for b in range(len(which_buckets)):
bucket = 'bucket%d'%(which_buckets[b])
img_bucket_dir = os.path.join(real_image_dir,bucket)
#img_files = os.listdir(img_bucket_dir)
#img_files = glob.glob(os.path.join(img_bucket_dir,'*.jpg'))
img_files = []
for ext in ('*.jpeg', '*.png', '*.jpg'):
img_files.extend(glob.glob(os.path.join(img_bucket_dir, ext)))
sentinel_bucket_dir = os.path.join(sentinel_image_dir,bucket)
#sentinel_files = os.listdir(sentinel_bucket_dir)
sentinel_files = glob.glob(os.path.join(sentinel_bucket_dir,'*.jpg'))
with open(os.path.join(sentinel_bucket_dir,'sentinel_codes_full.json')) as f:
sentinel_codes_data = json.load(f) # contains mapping of image path to valid codes
subjdir = os.path.join(rootdir,'subject_files',bucket)
if not os.path.exists(subjdir):
os.makedirs(subjdir)
os.makedirs(os.path.join(rootdir,'full_subject_files',bucket))
# for each bucket, generate subject files
for i in range(num_subject_files):
random.shuffle(img_files)
random.shuffle(sentinel_files)
random.shuffle(real_codecharts)
# for each subject files, add real images
sf_data = []
full_sf_data = []
# ADDING TUTORIALS
random.shuffle(tutorial_real_filenames)
random.shuffle(tutorial_sentinel_filenames)
# initialize temporary arrays, because will shuffle real & sentinel tutorial images before adding to
# final subject files
sf_data_temp = []
full_sf_data_temp = []
for j in range(num_imgs_per_tutorial):
image_data = {}
fn = tutorial_real_filenames[j]
image_data["image"] = fn
image_data["codechart"] = tutorial_data[fn]['codechart_file'] # stores codechart path
image_data["codes"] = tutorial_data[fn]['valid_codes'] # stores valid codes
image_data["flag"] = 'tutorial_real' # stores flag of whether we have real or sentinel image
full_image_data = image_data.copy() # identical to image_data but includes a key for coordinates
full_image_data["coordinates"] = tutorial_data[fn]['coordinates'] # store (x, y) coordinate of each triplet
sf_data_temp.append(image_data)
full_sf_data_temp.append(full_image_data)
if add_sentinels_to_tutorial and num_sentinels_per_tutorial>0:
for j in range(num_sentinels_per_tutorial):
image_data2 = {}
fn = tutorial_sentinel_filenames[j]
image_data2["image"] = fn
image_data2["codechart"] = tutorial_data[fn]['codechart_file'] # stores codechart path
image_data2["correct_code"] = tutorial_data[fn]['correct_code']
image_data2["correct_codes"] = tutorial_data[fn]['correct_codes']
image_data2["codes"] = tutorial_data[fn]['valid_codes'] # stores valid codes
image_data2["flag"] = 'tutorial_sentinel' # stores flag of whether we have real or sentinel image
full_image_data2 = image_data2.copy() # identical to image_data but includes a key for coordinates
full_image_data2["coordinate"] = tutorial_data[fn]['coordinate'] # stores coordinate for correct code
full_image_data2["codes"] = tutorial_data[fn]['valid_codes'] # stores valid codes
full_image_data2["coordinates"] = tutorial_data[fn]['coordinates'] # store (x, y) coordinate of each triplet
sf_data_temp.append(image_data2)
full_sf_data_temp.append(full_image_data2)
# up to here, have sequentially added real images and then sentinel images to tutorial
# now want to shuffle them
# perm = np.random.permutation(len(sf_data_temp))
# modification to the above: shuffle all but the first image, because don't want to start with a sentinel
perm = list(range(1, len(sf_data_temp))) # ignore index 0 (range must be a list to shuffle in Python 3)
random.shuffle(perm)
perm.insert(0, 0) # keep index 0 at position 0, so the tutorial never starts with a sentinel
# ------------
for j in range(len(perm)): # note need to make sure sf_data and full_sf_data correspond
sf_data.append(sf_data_temp[perm[j]])
full_sf_data.append(full_sf_data_temp[perm[j]])
# ADDING REAL IMAGES
for j in range(num_images_per_sf):
image_data = {}
image_data["image"] = img_files[j] # stores image path
# select a code chart
pathname = real_codecharts[j] # since shuffled, will pick up first set of random codecharts
image_data["codechart"] = pathname # stores codechart path
image_data["codes"] = real_codes_data[pathname]['valid_codes'] # stores valid codes
image_data["flag"] = 'real' # stores flag of whether we have real or sentinel image
full_image_data = image_data.copy() # identical to image_data but includes a key for coordinates
full_image_data["coordinates"] = real_codes_data[pathname]['coordinates'] # store locations - (x, y) coordinate of each triplet
sf_data.append(image_data)
full_sf_data.append(full_image_data)
## ADDING SENTINEL IMAGES
sentinel_spacing = int(num_images_per_sf/float(num_sentinels_per_sf))
insertat = num_imgs_per_tutorial + num_sentinels_per_tutorial + 1 # don't insert before all the tutorial images are done
for j in range(num_sentinels_per_sf):
sentinel_image_data = {}
sentinel_pathname = sentinel_files[j]
sentinel_image_data["image"] = sentinel_pathname # stores image path
sentinel_image_data["codechart"] = sentinel_codes_data[sentinel_pathname]['codechart_file']
sentinel_image_data["correct_code"] = sentinel_codes_data[sentinel_pathname]['correct_code']
sentinel_image_data["correct_codes"] = sentinel_codes_data[sentinel_pathname]['correct_codes']
sentinel_image_data["codes"] = sentinel_codes_data[sentinel_pathname]["valid_codes"]
sentinel_image_data["flag"] = 'sentinel' # stores flag of whether we have real or sentinel image
# for analysis, save other attributes too
full_sentinel_image_data = sentinel_image_data.copy() # identical to sentinel_image_data but includes coordinate key
full_sentinel_image_data["coordinate"] = sentinel_codes_data[sentinel_pathname]["coordinate"] # stores the coordinate of the correct code
full_sentinel_image_data["codes"] = sentinel_codes_data[sentinel_pathname]["valid_codes"] # stores other valid codes
full_sentinel_image_data["coordinates"] = sentinel_codes_data[sentinel_pathname]["coordinates"] # stores the coordinate of the valid code
insertat = insertat + random.choice(range(sentinel_spacing-1,sentinel_spacing+2))
insertat = min(insertat,len(sf_data)-1)
sf_data.insert(insertat, sentinel_image_data)
full_sf_data.insert(insertat, full_sentinel_image_data)
# Add an image_id to each subject file entry
image_id = 0 # represents the index of the image in the subject file
for d in range(len(sf_data)):
sf_data[d]['index'] = image_id
full_sf_data[d]['index'] = image_id
image_id+=1
subj_num = start_subjects_at+i
with open(os.path.join(rootdir,'subject_files',bucket,'subject_file_%d.json'%(subj_num)), 'w') as outfile:
print(outfile.name)
json.dump(sf_data, outfile)
with open(os.path.join(rootdir,'full_subject_files',bucket,'subject_file_%d.json'%(subj_num)), 'w') as outfile:
json.dump(full_sf_data, outfile)
import math
# source: https://www.rapidtables.com/web/dev/screen-resolution-statistics.html
dist = 40 # recommended = 30 to 50
s_diag = 14
hres = 1366
vres = 768
imageHeight = 700
# imageWidth = 1000
factor = s_diag/math.sqrt(hres**2 + vres**2) # screen inches per pixel (note: ^ is XOR in Python, not power)
s1 = hres*factor # physical screen width
s2 = vres*factor # physical screen height
#temp = imageWidth*s1/hres
temp = imageHeight*s2/float(vres) # image height in physical units
ndegs = math.degrees(2*math.atan2(temp, 2*dist))
print('Image height subtends %2.2f degrees of visual angle'%ndegs)
numDegs = 1
vert_1deg = (vres/s2)*dist*math.tan(math.radians(numDegs))
print('%g deg. of vertical visual angle: %2.2f pixels'%(numDegs,vert_1deg))
```
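The cell above is an instance of the standard visual-angle formula: an object of physical height $h$ viewed from distance $d$ subtends

$$\theta = 2\arctan\!\left(\frac{h}{2d}\right)$$

degrees of visual angle, where the image height is first converted from pixels to physical units via the screen's pixel pitch ($s_2/\mathrm{vres}$ units per pixel). Inverting the formula for a fixed angle of one degree gives the pixels-per-degree figure printed at the end.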
# Neural Machine Translation with Attention Model
In this notebook, we will build a Neural Machine Translation (NMT) model to translate human readable dates ("28th of September, 2009") into machine readable dates ("2009-09-28"). We will do this with an attention model, which is used to learn sophisticated mappings from one sequence to another.
Let's load all the packages.
```
#import matplotlib
#matplotlib.use('Agg')
from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline
```
## Translating human readable dates into machine readable dates
The attention model can be used to translate from one language to another, but that task would require a large training dataset and a lot of GPU training time. Hence, in order to explore the functionality of this model, we will instead use it to translate dates written in a variety of formats into machine readable dates.
For example,
- $"the 29th of August 1958"$ to $"1958-08-29"$
- $"03/30/1968"$ to $"1968-03-30"$
- $"24 JUNE 1987"$ to $"1987-06-24"$.
### Dataset
For training, we will use a dataset of 10,000 dates in different formats and their equivalent machine readable dates.
Let's print some examples.
```
m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
# lets load some sample dates and their corresponding machine readable dates
dataset[:10]
```
The data we have loaded has,
- `dataset`: a list of tuples of (human readable date, machine readable date)
- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index
- `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index.
- `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters.
Let's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since "YYYY-MM-DD" is 10 characters long).
```
Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
```
We now have:
- `X`: where each character in the human readable dates is replaced by an index mapped to the character via `human_vocab`. Each date is further padded to $T_x = 30$ values with a special character (< pad >). `X.shape = (m, Tx)`
- `Y`: where each character in machine readable dates is replaced by the index it is mapped to in `machine_vocab`. `Y.shape = (m, Ty)`.
- `Xoh`: one-hot version of `X`,
- `Yoh`: one-hot version of `Y`.
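As a quick sanity check on the one-hot layout described above (using toy indices, not the real vocabulary):

```python
import numpy as np

idx = np.array([2, 0, 3])    # toy character indices, as they would appear in X or Y
one_hot = np.eye(4)[idx]     # each row of the identity matrix is a one-hot vector
print(one_hot.shape)         # (3, 4): one row per character, one column per vocab entry
```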
```
index = 9
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
```
## Neural machine translation with attention
Let's take a look at our model.
The attention mechanism tells a Neural Machine Translation model where it should pay attention at each step. When we translate a sentence from one language to another, especially a long sentence or a paragraph, we don't read the whole paragraph and then translate it; we pay attention to a few words at a time, such as negations or conjugations, and translate the paragraph in pieces. That is what an attention model does.
### Attention mechanism
Here is a figure of how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$).
<table>
<td>
<img src="images/attn_model.png" style="width:500;height:500px;"> <br>
</td>
<td>
<img src="images/attn_mechanism.png" style="width:500;height:500px;"> <br>
</td>
</table>
<caption><center> **Source**: Coursera.org, Deep Learning Specialization.</center></caption>
Here are some properties of the model that you may notice:
- The LSTM at the bottom is a Bi-directional LSTM that comes before the attention mechanism, and the one at the top is a uni-directional LSTM. The pre-attention LSTM goes through $T_x$ time steps while the post-attention LSTM goes through $T_y$ time steps. We use two LSTMs here because $T_x$ is not necessarily equal to $T_y$.
- In the figure on the right, where we implement one step of the attention model for a given time step ($t$), we take as input all the hidden states of the Bi-LSTM and the previous hidden state ($t-1$) of the post-attention LSTM. The output gives us a weighted sum of all the hidden states. The weights decide how much each character in the sentence (or a date, in this case) influences the character at time step $t$. The model will learn this influence and will try to replicate it when we test it on new data.
**1) `one_step_attention()`**: At step $t$, given all the hidden states of the Bi-LSTM ($[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$) and the previous hidden state of the second LSTM ($s^{<t-1>}$), `one_step_attention()` will compute the attention weights ($[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$) and output the context vector (see Figure 1 (right) for details):
$$context^{<t>} = \sum_{t' = 0}^{T_x} \alpha^{<t,t'>}a^{<t'>}$$
**2) `model()`**: Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$. Then, it calls `one_step_attention()` $T_y$ times (`for` loop). At each iteration of this loop, it gives the computed context vector $c^{<t>}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{<t>}$.
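The heart of `one_step_attention()` is a softmax over the energies followed by a weighted sum of the hidden states. A toy numpy sketch of that computation (shapes chosen for illustration, not the model's actual values):

```python
import numpy as np

Tx, n_a = 5, 4                    # toy sizes: 5 input time steps, hidden size 4
a = np.random.rand(Tx, 2 * n_a)   # Bi-LSTM hidden states for one example
energies = np.random.rand(Tx, 1)  # "energies" from the small dense network

# softmax over the Tx axis gives the attention weights alpha^{<t, t'>}
alphas = np.exp(energies) / np.sum(np.exp(energies), axis=0)

# context^{<t>} = sum over t' of alpha^{<t, t'>} * a^{<t'>}
context = np.sum(alphas * a, axis=0)

print(alphas.sum())   # the attention weights sum to 1
print(context.shape)  # (8,), i.e. (2 * n_a,)
```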
```
# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights')
dotor = Dot(axes = 1)
def one_step_attention(a, s_prev):
"""
Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
"alphas" and the hidden states "a" of the Bi-LSTM.
Arguments:
a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
Returns:
context -- context vector, input of the next (post-attention) LSTM cell
"""
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) to
#concatenate it with all hidden states "a"
s_prev = repeator(s_prev)
# print(s_prev.shape)
# Use concatenator to concatenate a and s_prev on the last axis
concat = concatenator([a , s_prev])
#print(concat.shape)
# Use densor1 to propagate concat through a small fully-connected
# neural network to compute the "intermediate energies" variable e.
e = densor1(concat)
#print(e.shape)
# Use densor2 to propagate e through a small fully-connected neural
# network to compute the "energies" variable energies.
energies = densor2(e)
#print(energies.shape)
# Use "activator" on "energies" to compute the attention weights "alphas"
alphas = activator(energies)
#print(alphas.shape)
# Use dotor together with "alphas" and "a" to compute the context vector
# to be given to the next (post-attention) LSTM-cell
context = dotor([alphas,a])
#print(context.shape)
#print(context.shape)
return context
# Define global layers that will share weights to be used in 'model()'
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
"""
Arguments:
Tx -- length of the input sequence
Ty -- length of the output sequence
n_a -- hidden state size of the Bi-LSTM
n_s -- hidden state size of the post-attention LSTM
human_vocab_size -- size of the python dictionary "human_vocab"
machine_vocab_size -- size of the python dictionary "machine_vocab"
Returns:
model -- Keras model instance
"""
# Define the inputs of the model with a shape (Tx,)
# Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
X = Input(shape=(Tx, human_vocab_size))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
# Step 1: Define pre-attention Bi-LSTM..
a = Bidirectional( LSTM(n_a, return_sequences=True ))(X)
#print(a.shape)
# Step 2: Iterate for Ty steps
for t in range(Ty):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t
context = one_step_attention(a , s)
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# initial_state = [hidden state, cell state]
s, _, c = post_activation_LSTM_cell(context, initial_state = [s,c])
# Step 2.C: Applying Dense layer to the hidden state output of the post-attention LSTM
out = output_layer(s)
# Step 2.D: Append "out" to the "outputs" list
outputs.append(out)
# Step 3: Create model instance taking three inputs and returning the list of outputs.
model = Model(inputs = [X,s0,c0], outputs = outputs)
return model
modelb = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
#model_use = modela
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999) # decay=0.01)
modelb.compile(opt, loss='categorical_crossentropy', metrics=['accuracy'] )
#modelb.summary()
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))
modelb.fit([Xoh, s0, c0], outputs, epochs=10, batch_size=100, verbose = 2)
```
## Test on few examples
```
from numpy import newaxis
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
#print(source)
#print(human_vocab)
source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source)))#.swapaxes(0,1)
#print(source.shape)
source = source[newaxis,:,:]
#print(source.shape)
prediction = modelb.predict([source,s0, c0])
prediction = np.argmax(prediction, axis = -1)
output = [inv_machine_vocab[int(i)] for i in prediction]
print("source:", example)
print("output:", ''.join(output))
```
### Getting the activations from the network
Let's now visualize the attention values in our network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$.
To figure out where the attention values are located, let's start by printing a summary of the model.
```
modelb.summary()
def plot_attn_map(model, input_vocabulary, inv_output_vocabulary, text, n_s = 64, num = 7 , Tx = 30, Ty = 10):
"""
Plot the attention map.
"""
attention_map = np.zeros((10, 30))
Ty, Tx = attention_map.shape
s0 = np.zeros((1, n_s))
c0 = np.zeros((1, n_s))
layer = model.layers[num]
encoded = np.array(string_to_int(text, Tx, input_vocabulary)).reshape((1, 30))
encoded = np.array(list(map(lambda x: to_categorical(x, num_classes=len(input_vocabulary)), encoded)))
f = K.function(model.inputs, [layer.get_output_at(t) for t in range(Ty)])
r = f([encoded, s0, c0])
for t in range(Ty):
for t_prime in range(Tx):
attention_map[t][t_prime] = r[t][0,t_prime,0]
# Normalize attention map
# row_max = attention_map.max(axis=1)
# attention_map = attention_map / row_max[:, None]
prediction = model.predict([encoded, s0, c0])
predicted_text = []
for i in range(len(prediction)):
predicted_text.append(int(np.argmax(prediction[i], axis=1)))
predicted_text = list(predicted_text)
predicted_text = int_to_string(predicted_text, inv_output_vocabulary)
text_ = list(text)
# get the lengths of the string
input_length = len(text)
output_length = Ty
# Plot the attention_map
plt.clf()
f = plt.figure()
f.set_figwidth(18)
f.set_figheight(8.5)
ax = f.add_subplot(1, 1, 1)
# add image
i = ax.imshow(attention_map, interpolation='nearest', cmap='gray')
# add colorbar
cbaxes = f.add_axes([0.2, 0.0, 0.6, 0.03])
cbar = f.colorbar(i, cax=cbaxes, orientation='horizontal')
cbar.ax.set_xlabel('Alpha value (Probability output of the "softmax")', labelpad=2)
# add labels
ax.set_yticks(range(output_length))
ax.set_yticklabels(predicted_text[:output_length])
ax.set_xticks(range(input_length))
ax.set_xticklabels(text_[:input_length], rotation=45)
ax.set_xlabel('Input Sequence')
ax.set_ylabel('Output Sequence')
# add grid and legend
ax.grid()
#f.show()
return attention_map
```
On the generated plot we can observe the values of the attention weights for each character of the predicted output.
In the date translation application, we will observe that most of the time attention helps predict the day and the month, and has little impact on predicting the year.
```
attention_map = plot_attn_map(modelb, human_vocab, inv_machine_vocab, "Friday Oct 19 2018", num = 7, n_s = 64)
attention_map = plot_attn_map(modelb, human_vocab, inv_machine_vocab, "Saturday May 9 2018", num = 7, n_s = 64)
```
The ideas in this notebook are based on Andrew Ng's Deep Learning Specialization on Coursera.org.
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow Addons Optimizers: ConditionalGradient
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/addons/tutorials/optimizers_conditionalgradient"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png"> Download notebook</a></td>
</table>
# Overview
This notebook demonstrates how to use the Conditional Gradient optimizer from the Addons package.
# ConditionalGradient
> Constraining the parameters of a neural network has been shown to be beneficial in training because of its underlying regularization effects. Parameters are often constrained via a soft penalty (which never guarantees constraint satisfaction) or via a projection operation (which is computationally expensive), but the Conditional Gradient (CG) optimizer enforces the constraints strictly without requiring an expensive projection step. It works by minimizing a linear approximation of the objective within the constraint set. In this notebook, we demonstrate the application of the Frobenius norm constraint via the CG optimizer on the MNIST dataset. CG is now available as a tensorflow API. More details of the optimizer are available at https://arxiv.org/pdf/1803.06453.pdf
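To make the idea concrete, here is a rough numpy sketch of a Frank-Wolfe style update under a Frobenius-norm ball constraint. This is an illustration of the general technique, not the exact update rule implemented by `tfa.optimizers.ConditionalGradient`; the names `cg_step`, `lr`, and `lam` are our own.

```python
import numpy as np

def cg_step(w, grad, lr=0.9, lam=2.0, eps=1e-7):
    """One conditional-gradient (Frank-Wolfe) step under ||w||_F <= lam.

    The minimizer of the linear approximation <grad, v> over the Frobenius
    ball of radius lam is v = -lam * grad / ||grad||_F; the new iterate is a
    convex combination of w and v, so it always stays inside the ball.
    """
    v = -lam * grad / (np.linalg.norm(grad) + eps)
    return lr * w + (1.0 - lr) * v

w = np.zeros((3, 3))
for _ in range(100):
    grad = 2 * (w - 1.0)              # gradient of ||w - 1||_F^2
    w = cg_step(w, grad)

print(np.linalg.norm(w))              # stays within the constraint radius lam = 2
```

Note that, unlike a soft penalty, no iterate ever leaves the feasible set, which is the property the CG optimizer exploits.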
## Setup
```
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
from matplotlib import pyplot as plt
# Hyperparameters
batch_size=64
epochs=10
```
# Build the Model
```
model_1 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
```
# Prepare the Data
```
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
```
# Define a Custom Callback Function
```
def frobenius_norm(m):
"""This function is to calculate the frobenius norm of the matrix of all
layer's weight.
Args:
m: is a list of weights param for each layers.
"""
total_reduce_sum = 0
for i in range(len(m)):
total_reduce_sum = total_reduce_sum + tf.math.reduce_sum(m[i]**2)
norm = total_reduce_sum**0.5
return norm
CG_frobenius_norm_of_weight = []
CG_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: CG_frobenius_norm_of_weight.append(
frobenius_norm(model_1.trainable_weights).numpy()))
```
# Train and Evaluate: Using CG as the Optimizer
Simply replace a typical keras optimizer with the new tfa optimizer.
```
# Compile the model
model_1.compile(
optimizer=tfa.optimizers.ConditionalGradient(
learning_rate=0.99949, lambda_=203), # Utilize TFA optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_cg = model_1.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[CG_get_weight_norm])
```
# Train and Evaluate: Using SGD as the Optimizer
```
model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
SGD_frobenius_norm_of_weight = []
SGD_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: SGD_frobenius_norm_of_weight.append(
frobenius_norm(model_2.trainable_weights).numpy()))
# Compile the model
model_2.compile(
optimizer=tf.keras.optimizers.SGD(0.01), # Utilize SGD optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_sgd = model_2.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[SGD_get_weight_norm])
```
# Frobenius Norm of Weights: CG vs. SGD
The current implementation of the CG optimizer is based on the Frobenius norm, treating the Frobenius norm as a regularizer in the target function. Here, we compare the regularizing effect of the CG optimizer against the SGD optimizer, which has no Frobenius norm regularizer.
```
plt.plot(
CG_frobenius_norm_of_weight,
color='r',
label='CG_frobenius_norm_of_weights')
plt.plot(
SGD_frobenius_norm_of_weight,
color='b',
label='SGD_frobenius_norm_of_weights')
plt.xlabel('Epoch')
plt.ylabel('Frobenius norm of weights')
plt.legend(loc=1)
```
# Training and Validation Accuracy: CG vs. SGD
```
plt.plot(history_cg.history['accuracy'], color='r', label='CG_train')
plt.plot(history_cg.history['val_accuracy'], color='g', label='CG_test')
plt.plot(history_sgd.history['accuracy'], color='pink', label='SGD_train')
plt.plot(history_sgd.history['val_accuracy'], color='b', label='SGD_test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc=4)
```
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import sklearn
sklearn.set_config(print_changed_only=True)
plt.rcParams['savefig.dpi'] = 300
plt.rcParams['savefig.bbox'] = 'tight'
from sklearn.linear_model import Ridge, LinearRegression, Lasso, RidgeCV, LassoCV
from sklearn.model_selection import cross_val_score
ames = pd.read_excel("AmesHousing.xls")
# These seem to be crazy outliers
ames = ames.loc[ames['Gr Liv Area'] <= 4000]
ames.columns
from sklearn.model_selection import train_test_split
from sklearn.datasets import fetch_openml
#X_train, X_test, y_train, y_test = train_test_split(ames.drop('SalePrice', axis=1), ames.SalePrice, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(ames.drop(['SalePrice', 'Order', 'PID'], axis=1), ames.SalePrice, random_state=2)
X_train.shape
categorical = X_train.dtypes == object
X_train_cont = X_train.loc[:, ~categorical]
X_train_cont.shape
y_train.hist(bins='auto')
plt.xlabel('Sale Price')
plt.savefig("images/ames_housing_price_hist.png")
np.log(y_train).hist(bins='auto')
plt.xlabel('Log Sale Price')
plt.savefig("images/ames_housing_price_hist_log.png")
fig, axes = plt.subplots(6, 6, figsize=(20, 10))
for i, ax in enumerate(axes.ravel()):
ax.plot(X_train_cont.iloc[:, i], np.log(y_train), 'o', alpha=.5)
ax.set_title("{}: {}".format(i, X_train_cont.columns[i]))
ax.set_ylabel("Log Price")
plt.tight_layout()
plt.savefig("images/ames_housing_scatter.png")
print(X_train.shape)
print(y_train.shape)
plt.figure(figsize=(10, 5))
X_train_cont.boxplot(rot=90)
plt.yscale('log')
plt.savefig("images/ames_scaling.png")
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import make_column_transformer, make_column_selector
from sklearn.pipeline import make_pipeline
cat_preprocessing = make_pipeline(SimpleImputer(strategy='constant', fill_value='NA'),
OneHotEncoder(handle_unknown='ignore'))
cont_preprocessing = make_pipeline(SimpleImputer(), StandardScaler())
preprocess = make_column_transformer((cat_preprocessing, make_column_selector(dtype_include='object')),
remainder=cont_preprocessing)
preprocess
cross_val_score(make_pipeline(preprocess, LinearRegression()), X_train, y_train, cv=5)
from sklearn.model_selection import cross_val_predict
y_pred = cross_val_predict(make_pipeline(preprocess, LinearRegression()), X_train, y_train, cv=5)
from sklearn.compose import TransformedTargetRegressor
log_regressor = TransformedTargetRegressor(LinearRegression(), func=np.log, inverse_func=np.exp)
cross_val_score(make_pipeline(preprocess, log_regressor), X_train, y_train, cv=5)
plt.plot([0, 600000], [0, 600000], color='k')
plt.scatter(y_pred, y_train, alpha=.2, s=4)
plt.figure(figsize=(10, 10))
plt.hexbin(y_pred, y_train, extent=[100000, 300000, 100000, 300000])
plt.plot([0, 600000], [0, 600000], color='k')
plt.gca().set_aspect('equal')
cross_val_score(make_pipeline(SimpleImputer(), TransformedTargetRegressor(LinearRegression(), func=np.log, inverse_func=np.exp)), X_train_cont, y_train, cv=5)
cross_val_score(make_pipeline(preprocess, TransformedTargetRegressor(Ridge(), func=np.log, inverse_func=np.exp)), X_train, y_train, cv=5)
log_ridge = TransformedTargetRegressor(
Ridge(), func=np.log, inverse_func=np.exp)
cross_val_score(make_pipeline(preprocess, log_ridge),
X_train, y_train, cv=5)
cross_val_score(make_pipeline(preprocess, TransformedTargetRegressor(Ridge(), func=np.log, inverse_func=np.exp)), X_train, y_train, cv=5)
np.set_printoptions(suppress=True, precision=3)
from sklearn.model_selection import GridSearchCV, RepeatedKFold
from sklearn.pipeline import Pipeline
pipe = Pipeline([('preprocessing', preprocess),
('ridge', log_ridge)])
param_grid = {'ridge__regressor__alpha': np.logspace(-3, 3, 13)}
print(param_grid)
grid = GridSearchCV(pipe, param_grid, cv=RepeatedKFold(10, 5),
return_train_score=True)
grid.fit(X_train, y_train)
import pandas as pd
results = pd.DataFrame(grid.cv_results_)
results.plot('param_ridge__regressor__alpha', 'mean_train_score')
results.plot('param_ridge__regressor__alpha', 'mean_test_score', ax=plt.gca())
plt.fill_between(results.param_ridge__regressor__alpha.astype(float),
results['mean_train_score'] + results['std_train_score'],
results['mean_train_score'] - results['std_train_score'], alpha=0.2)
plt.fill_between(results.param_ridge__regressor__alpha.astype(float),
results['mean_test_score'] + results['std_test_score'],
results['mean_test_score'] - results['std_test_score'], alpha=0.2)
plt.legend()
plt.xscale("log")
#plt.yscale("log")
plt.savefig("images/ridge_alpha_search.png")
a = results.plot(c='b', x='param_ridge__regressor__alpha', y=[f'split{i}_train_score' for i in range(50)], alpha=.1)
ax = plt.gca()
a = results.plot(c='r', x='param_ridge__regressor__alpha', y=[f'split{i}_test_score' for i in range(50)], alpha=.1, ax=ax)
plt.legend((ax.get_children()[0], ax.get_children()[10]), ('training scores', 'test_scores'))
plt.xscale("log")
plt.savefig("images/ridge_alpha_search_cv_runs.png")
preprocess
X_train_pre = preprocess.fit_transform(X_train, y_train)
log_regressor.fit(X_train_pre, y_train)
```
```
# note: feature_names is not defined above; on recent scikit-learn it could be
# obtained with preprocess.get_feature_names_out()
feature_names.shape
log_regressor.regressor_.coef_.max()
largest = np.abs(log_regressor.regressor_.coef_).argsort()[-10:]
triazines = fetch_openml('triazines')
pd.Series(triazines.target).hist()
plt.savefig('images/triazine_bar.png')
triazines.data.shape
X_train, X_test, y_train, y_test = train_test_split(triazines.data, triazines.target, random_state=0)
cross_val_score(LinearRegression(), X_train, y_train, cv=5)
cross_val_score(Ridge(), X_train, y_train, cv=5)
from sklearn.model_selection import GridSearchCV, RepeatedKFold
from sklearn.pipeline import Pipeline
param_grid = {'alpha': np.logspace(-3, 3, 13)}
grid = GridSearchCV(Ridge(), param_grid, cv=RepeatedKFold(10, 5),
return_train_score=True)
grid.fit(X_train, y_train)
cross_val_score(LinearRegression(), X_train, y_train, cv=5)
cross_val_score(Ridge(), X_train, y_train, cv=5)
import pandas as pd
results = pd.DataFrame(grid.cv_results_)
results.plot('param_alpha', 'mean_train_score')
results.plot('param_alpha', 'mean_test_score', ax=plt.gca())
plt.fill_between(results.param_alpha.astype(float),
results['mean_train_score'] + results['std_train_score'],
results['mean_train_score'] - results['std_train_score'], alpha=0.2)
plt.fill_between(results.param_alpha.astype(float),
results['mean_test_score'] + results['std_test_score'],
results['mean_test_score'] - results['std_test_score'], alpha=0.2)
plt.legend()
plt.xscale("log")
#plt.yscale("log")
plt.savefig("images/ridge_alpha_triazine.png")
lr = LinearRegression().fit(X_train, y_train)
plt.scatter(range(X_train.shape[1]), lr.coef_, c=np.sign(lr.coef_), cmap='bwr_r')
plt.xlabel('feature index')
plt.ylabel('regression coefficient')
plt.title("Linear Regression on Triazine")
plt.savefig("images/lr_coefficients_large.png")
ridge = grid.best_estimator_
plt.scatter(range(X_train.shape[1]), ridge.coef_, c=np.sign(ridge.coef_), cmap='bwr_r')
plt.xlabel('feature index')
plt.ylabel('regression coefficient')
plt.title("Ridge Regression on Triazine")
plt.savefig("images/ridge_coefficients.png")
ridge100 = Ridge(alpha=100).fit(X_train, y_train)
ridge1 = Ridge(alpha=1).fit(X_train, y_train)
plt.figure(figsize=(8, 4))
plt.plot(ridge1.coef_, 'o', label="alpha=1")
plt.plot(ridge.coef_, 'o', label=f"alpha={ridge.alpha:.2f}")
plt.plot(ridge100.coef_, 'o', label="alpha=100")
plt.legend()
plt.savefig("images/ridge_coefficients_alpha.png")
from sklearn.model_selection import learning_curve
def plot_learning_curve(est, name):
train_set_size, train_scores, test_scores = learning_curve(est, triazines.data, triazines.target, cv=10, train_sizes=np.linspace(0, 1, 20)[1:])
test_mean = test_scores.mean(axis=1)
train_mean = train_scores.mean(axis=1)
line, = plt.plot(train_set_size, train_mean, linestyle="--", label="train score {}".format(name))
plt.plot(train_set_size, test_mean, label="test score {}".format(name),
c=line.get_color())
plot_learning_curve(Ridge(alpha=.01), "alpha=.01")
plot_learning_curve(Ridge(alpha=ridge.alpha), f"alpha={ridge.alpha:.2f}")
plot_learning_curve(Ridge(alpha=100), "alpha=100")
#plot_learning_curve(LinearRegression(), "lr")
plt.legend(loc=(1, 0))
plt.xlabel("training set size")
plt.ylabel("R^2")
plt.ylim(-1, 1)
plt.savefig("images/ridge_learning_curve.png")
```
# Lasso
```
def nonzero(est, X, y):
return np.sum(est.coef_ != 0)
param_grid = {'alpha': np.logspace(-5, 0, 10)}
grid = GridSearchCV(Lasso(max_iter=10000), param_grid, cv=10, return_train_score=True,
scoring={'r2': 'r2', 'num_nonzero': nonzero}, refit='r2')
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.best_score_)
import pandas as pd
results = pd.DataFrame(grid.cv_results_)
a = results.plot('param_alpha', 'mean_train_r2', legend=False)
b = results.plot('param_alpha', 'mean_test_r2', ax=plt.gca(), legend=False)
plt.fill_between(results.param_alpha.astype(float),
results['mean_train_r2'] + results['std_train_r2'],
results['mean_train_r2'] - results['std_train_r2'], alpha=0.2)
plt.fill_between(results.param_alpha.astype(float),
results['mean_test_r2'] + results['std_test_r2'],
results['mean_test_r2'] - results['std_test_r2'], alpha=0.2)
ax1 = plt.gca()
ax2 = ax1.twinx()
c = results.plot('param_alpha', 'mean_train_num_nonzero', ax=ax2, c='k', legend=False)
plt.legend(ax1.get_children()[2:4] + [c.get_children()[0]], ('train r2', 'test r2', '#nonzero'))
plt.xscale("log")
ax1.set_ylabel('r2')
ax2.set_ylabel('# nonzero coefficients')
plt.savefig("images/lasso_alpha_triazine.png")
line = np.linspace(-2, 2, 1001)
plt.plot(line, line ** 2, label="L2")
plt.plot(line, np.abs(line), label="L1")
plt.plot(line, line!=0, label="L0")
plt.legend()
plt.savefig("images/l2_l1_l0.png")
line = np.linspace(-1, 1, 1001)
alpha_l1 = 5
alpha_l2 = 5
f_x = (2 * line - 1) ** 2
f_x_l2 = f_x + alpha_l2 * line ** 2
f_x_l1 = f_x + alpha_l1 * np.abs(line)
plt.plot(line, f_x, label="f(x)")
plt.plot(line, f_x_l2, label="f(x) + L2")
plt.plot(line, f_x_l1, label="f(x) + L1")
plt.legend()
plt.savefig("images/l1_kink.png")
xline = np.linspace(-2., 2, 1001)
yline = np.linspace(-1.5, 1.5, 1001)
xx, yy = np.meshgrid(xline, yline)
l2 = xx ** 2 + yy ** 2
l1 = np.abs(xx) + np.abs(yy)
plt.figure(figsize=(10, 10))
l2_contour = plt.contour(xx, yy, l2, levels=[1], colors='k')
l1_contour = plt.contour(xx, yy, l1, levels=[1], colors='k')
ax = plt.gca()
ax.set_aspect("equal")
ax.spines['left'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('center')
ax.spines['top'].set_color('none')
plt.clabel(l2_contour, inline=1, fontsize=10, fmt={1.0: 'L2'}, manual=[(-1, -1)])
plt.clabel(l1_contour, inline=1, fontsize=10, fmt={1.0: 'L1'}, manual=[(-1, -1)])
#ax.set_xticks(np.arange(-2, 2, .5))
#ax.set_yticks([.5, 1, 2])
plt.savefig("images/l1l2ball.png")
xline = np.linspace(-2., 2, 1001)
yline = np.linspace(-1.5, 1.5, 1001)
xx, yy = np.meshgrid(xline, yline)
l2 = xx ** 2 + yy ** 2
l1 = np.abs(xx) + np.abs(yy)
quadratic = np.sqrt(2 * (xx - 2.) ** 2 + (yy - 1.7) ** 2 + xx * yy)
plt.figure(figsize=(10, 10))
l2_contour = plt.contour(xx, yy, l2, levels=[1], colors='k')
l1_contour = plt.contour(xx, yy, l1, levels=[1], colors='k')
quadratic_contour = plt.contour(xx, yy, quadratic, levels=[1.6, 1.7, 1.8, 1.9, 2, 2.1, 2.2, 2.3])
ax = plt.gca()
ax.set_aspect("equal")
ax.spines['left'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('center')
ax.spines['top'].set_color('none')
plt.clabel(l2_contour, inline=1, fontsize=10, fmt={1.0: 'L2'}, manual=[(-1, -1)])
plt.clabel(l1_contour, inline=1, fontsize=10, fmt={1.0: 'L1'}, manual=[(-1, -1)])
#plt.clabel(quadratic_contour, inline=1, fontsize=10)
#ax.set_xticks(np.arange(-2, 2, .5))
#ax.set_yticks([.5, 1, 2])
plt.savefig("images/l1l2ball_intersect.png")
param_grid = {'alpha': np.logspace(-3, 0, 13)}
print(param_grid)
grid = GridSearchCV(Lasso(normalize=True, max_iter=1_000_000), param_grid, cv=10, return_train_score=True)
grid.fit(X_train, y_train)
results = pd.DataFrame(grid.cv_results_)
results.plot('param_alpha', 'mean_train_score')
results.plot('param_alpha', 'mean_test_score', ax=plt.gca())
plt.fill_between(results.param_alpha.astype(float),
results['mean_train_score'] + results['std_train_score'],
results['mean_train_score'] - results['std_train_score'], alpha=0.2)
plt.fill_between(results.param_alpha.astype(float),
results['mean_test_score'] + results['std_test_score'],
results['mean_test_score'] - results['std_test_score'], alpha=0.2)
plt.legend()
plt.xscale("log")
plt.savefig("images/lasso_alpha_search.png")
print(grid.best_params_)
print(grid.best_score_)
grid.score(X_test, y_test)
lasso = grid.best_estimator_
plt.scatter(range(X_train.shape[1]), lasso.coef_, c=np.sign(lasso.coef_), cmap="bwr_r", edgecolor='k')
plt.savefig("images/lasso_coefficients.png")
print(X_train.shape)
np.sum(lasso.coef_ != 0)
from sklearn.linear_model import lars_path
# lars_path computes the exact regularization path which is piecewise linear.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
alphas, active, coefs = lars_path(X_train, y_train, eps=0.00001, method="lasso")
plt.plot(alphas, coefs.T, alpha=.5)
plt.xscale("log")
plt.savefig("images/lars_path.png")
```
# Elastic Net
```
xline = np.linspace(-2., 2, 1001)
yline = np.linspace(-1.5, 1.5, 1001)
xx, yy = np.meshgrid(xline, yline)
l2 = xx ** 2 + yy ** 2
l1 = np.abs(xx) + np.abs(yy)
elastic = .5 * l2 + .5 * l1
plt.figure(figsize=(10, 10))
l2_contour = plt.contour(xx, yy, l2, levels=[1], colors='m')
l1_contour = plt.contour(xx, yy, l1, levels=[1], colors='g')
elasticnet_contour = plt.contour(xx, yy, elastic, levels=[1], colors='b')
ax = plt.gca()
ax.set_aspect("equal")
ax.spines['left'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['bottom'].set_position('center')
ax.spines['top'].set_color('none')
plt.clabel(l2_contour, inline=1, fontsize=10, fmt={1.0: 'L2'}, manual=[(-1, -1)])
plt.clabel(l1_contour, inline=1, fontsize=10, fmt={1.0: 'L1'}, manual=[(-1, -1)])
plt.clabel(elasticnet_contour, inline=1, fontsize=10, fmt={1.0: 'elastic'}, manual=[(-1, -1)])
#ax.set_xticks(np.arange(-2, 2, .5))
#ax.set_yticks([.5, 1, 2])
plt.savefig("images/l1l2_elasticnet.png")
X_train, X_test, y_train, y_test = train_test_split(triazines.data, triazines.target, random_state=42)
param_grid = {'alpha': np.logspace(-4, -1, 10), 'l1_ratio': [0.01, .1, .5, .8, .9, .95, .98, 1]}
print(param_grid)
from sklearn.linear_model import ElasticNet
grid = GridSearchCV(ElasticNet(normalize=True, max_iter=1_000_000), param_grid, cv=RepeatedKFold(10, 10), return_train_score=True)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.best_score_)
grid.score(X_test, y_test)
pd.DataFrame(grid.cv_results_).columns
import pandas as pd
res = pd.pivot_table(pd.DataFrame(grid.cv_results_), values='mean_test_score', index='param_alpha', columns='param_l1_ratio')
pd.set_option("display.precision", 3)
res = res.set_index(res.index.values.round(4))
res
import seaborn as sns
sns.heatmap(res, annot=True, fmt=".3g")
plt.savefig("images/elasticnet_search.png")
plt.figure(dpi=100)
plt.imshow(res) #, vmin=.70, vmax=.825)
plt.colorbar()
alphas = param_grid['alpha']
l1_ratio = np.array(param_grid['l1_ratio'])
plt.xlabel("l1_ratio")
plt.ylabel("alpha")
plt.yticks(range(len(alphas)), ["{:.4f}".format(a) for a in alphas])
plt.xticks(range(len(l1_ratio)), l1_ratio);
(grid.best_estimator_.coef_!= 0).sum()
alphas.shape
param_grid = {'alpha': np.logspace(-4, -1, 10), 'l1_ratio': [.98]}
print(param_grid)
grid = GridSearchCV(ElasticNet(normalize=True, max_iter=1_000_000), param_grid, cv=10, return_train_score=True)
grid.fit(X_train, y_train)
(grid.best_estimator_.coef_!= 0).sum()
```
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
### Imports
This tutorial imports [Plotly](https://plot.ly/python/getting-started/) and [Numpy](http://www.numpy.org/).
```
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
```
#### Create an Array
Similar to a Python list, a NumPy `array` holds a sequence of values; unlike a list, it is a fixed-size, homogeneously typed buffer that can be reshaped efficiently. The data can be laid out in the memory order of other programming languages (e.g. C or Fortran) and instantiated as all zeros or left uninitialized.
```
import plotly.plotly as py
import plotly.graph_objs as go
x = np.array([1, 2, 3])
y = np.array([4, 7, 2])
trace = go.Scatter(x=x, y=y)
py.iplot([trace], filename='numpy-array-ex1')
```
#### Create an N-D Array
`np.ndarray` allocates an array of a given `shape` without initializing it, so it initially contains arbitrary (garbage) values until you assign to it.
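To make the "garbage values" point concrete, here is a small standalone sketch (plain NumPy, not part of the Plotly example below) contrasting uninitialized allocation with `np.zeros`; `np.empty` behaves like instantiating `np.ndarray` directly:

```python
import numpy as np

uninitialized = np.empty((2, 3))   # contents are arbitrary until assigned
zeros = np.zeros((2, 3))           # contents are guaranteed to be 0.0

uninitialized[:] = 7.0             # always assign before reading np.empty arrays
print(zeros.sum(), uninitialized.sum())  # 0.0 42.0
```

In practice, prefer `np.zeros`, `np.ones`, or `np.empty` over calling `np.ndarray` directly.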
```
import plotly.plotly as py
import plotly.graph_objs as go
nd_array = np.ndarray(shape=(2,3), dtype=float)
nd_array[0] = x
nd_array[1] = y
trace = go.Scatter(x=nd_array[0], y=nd_array[1])
py.iplot([trace], filename='numpy-ndarray-ex2')
help(np.array)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Array.ipynb', 'numpy/array/', 'Array | plotly',
'A NumPy array is like a Python list and is handled very similarly.',
title = 'Numpy Array | plotly',
name = 'Array',
has_thumbnail='true', thumbnail='thumbnail/numpy_array.jpg',
language='numpy', page_type='example_index',
display_as='numpy-array', order=1)
```
```
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!pip install --upgrade https://github.com/Theano/Theano/archive/master.zip
!pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week05_explore/bayes.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week05_explore/action_rewards.npy
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week05_explore/all_states.npy
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
from abc import ABCMeta, abstractmethod, abstractproperty
import enum
import numpy as np
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
import pandas
import matplotlib.pyplot as plt
%matplotlib inline
```
## Contents
* [1. Bernoulli Bandit](#Part-1.-Bernoulli-Bandit)
* [Bonus 1.1. Gittins index (5 points)](#Bonus-1.1.-Gittins-index-%285-points%29.)
* [HW 1.1. Nonstationary Bernoulli bandit](#HW-1.1.-Nonstationary-Bernoulli-bandit)
* [2. Contextual bandit](#Part-2.-Contextual-bandit)
* [2.1 Building a BNN agent](#2.1-Building-a-BNN-agent)
* [2.2 Training the agent](#2.2-Training-the-agent)
* [HW 2.1 Better exploration](#HW-2.1-Better-exploration)
* [3. Exploration in MDP](#Part-3.-Exploration-in-MDP)
* [Bonus 3.1 Posterior sampling RL (3 points)](#Bonus-3.1-Posterior-sampling-RL-%283-points%29)
* [Bonus 3.2 Bootstrapped DQN (10 points)](#Bonus-3.2-Bootstrapped-DQN-%2810-points%29)
## Part 1. Bernoulli Bandit
We are going to implement several exploration strategies for the simplest setting: the Bernoulli bandit.
The bandit has $K$ actions. Action $k$ produces a reward of 1.0 with probability $0 \le \theta_k \le 1$, which is unknown to the agent but fixed over time. The agent's objective is to minimize regret over a fixed number $T$ of action selections:
$$\rho = T\theta^* - \sum_{t=1}^T r_t$$
where $\theta^* = \max_k\{\theta_k\}$
**Real-world analogy:**
Clinical trials: we have $K$ pills and $T$ ill patients. After taking pill $k$, a patient is cured with probability $\theta_k$. The task is to find the most effective pill.
Research on bandits for clinical trials: https://arxiv.org/pdf/1507.08025.pdf
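To make the regret definition concrete, here is a minimal, self-contained sketch (independent of the classes defined below; the arm probabilities are made up) that estimates the regret of a uniformly random policy:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.2, 0.5, 0.9])  # hidden success probabilities (assumed values)
T = 10_000

# Uniformly random policy: pick each arm with equal probability.
actions = rng.integers(0, len(theta), size=T)
rewards = rng.random(T) < theta[actions]

# rho = T * theta_star - sum of collected rewards
regret = T * theta.max() - rewards.sum()
print(regret)
```

A random policy earns the average arm's reward, so its regret grows linearly in $T$; the strategies below aim for sublinear regret.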
```
class BernoulliBandit:
def __init__(self, n_actions=5):
self._probs = np.random.random(n_actions)
@property
def action_count(self):
return len(self._probs)
def pull(self, action):
if np.random.random() > self._probs[action]:
return 0.0
return 1.0
def optimal_reward(self):
""" Used for regret calculation
"""
return np.max(self._probs)
def step(self):
""" Used in nonstationary version
"""
pass
def reset(self):
""" Used in nonstationary version
"""
class AbstractAgent(metaclass=ABCMeta):
def init_actions(self, n_actions):
self._successes = np.zeros(n_actions)
self._failures = np.zeros(n_actions)
self._total_pulls = 0
@abstractmethod
def get_action(self):
"""
Get current best action
:rtype: int
"""
pass
def update(self, action, reward):
"""
Observe reward from action and update agent's internal parameters
:type action: int
:type reward: int
"""
self._total_pulls += 1
if reward == 1:
self._successes[action] += 1
else:
self._failures[action] += 1
@property
def name(self):
return self.__class__.__name__
class RandomAgent(AbstractAgent):
def get_action(self):
return np.random.randint(0, len(self._successes))
```
### Epsilon-greedy agent
**for** $t = 1,2,...$ **do**
**for** $k = 1,...,K$ **do**
$\hat\theta_k \leftarrow \alpha_k / (\alpha_k + \beta_k)$
**end for**
$x_t \leftarrow argmax_{k}\hat\theta$ with probability $1 - \epsilon$ or random action with probability $\epsilon$
Apply $x_t$ and observe $r_t$
$(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
**end for**
Implement the algorithm above in the cell below:
```
class EpsilonGreedyAgent(AbstractAgent):
def __init__(self, epsilon=0.01):
self._epsilon = epsilon
def get_action(self):
<YOUR CODE>
@property
def name(self):
return self.__class__.__name__ + "(epsilon={})".format(self._epsilon)
```
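For reference, one possible implementation of the epsilon-greedy choice, shown as a standalone NumPy sketch rather than a fill-in for the class above. The `+1` smoothing (a uniform Beta(1, 1) prior, so untried arms estimate 0.5) is an assumption of this sketch, not part of the pseudocode:

```python
import numpy as np

def epsilon_greedy_action(successes, failures, epsilon=0.01, rng=None):
    """Return argmax of theta_hat with prob. 1 - epsilon, else a random arm."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(0, len(successes)))
    # +1 smoothing so untried arms get estimate 0.5 instead of 0/0.
    theta_hat = (successes + 1.0) / (successes + failures + 2.0)
    return int(np.argmax(theta_hat))

successes = np.array([10.0, 50.0, 2.0])
failures = np.array([90.0, 50.0, 1.0])
print(epsilon_greedy_action(successes, failures, epsilon=0.0))  # → 2
```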
### UCB Agent
The epsilon-greedy strategy explores with no preference among actions. It would be better to select among actions that are uncertain or have the potential to be optimal. One can define an index for each action that captures optimality and uncertainty at the same time. An efficient way to do this is the UCB1 algorithm:
**for** $t = 1,2,...$ **do**
**for** $k = 1,...,K$ **do**
$w_k \leftarrow \alpha_k / (\alpha_k + \beta_k) + \sqrt{2log\ t \ / \ (\alpha_k + \beta_k)}$
**end for**
$x_t \leftarrow argmax_{k}w$
Apply $x_t$ and observe $r_t$
$(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
**end for**
__Note:__ in practice, one can multiply $\sqrt{2log\ t \ / \ (\alpha_k + \beta_k)}$ by some tunable parameter to regulate the agent's optimism and willingness to abandon non-promising actions.
More versions and an optimality analysis: https://homes.di.unimi.it/~cesabian/Pubblicazioni/ml-02.pdf
```
class UCBAgent(AbstractAgent):
def get_action(self):
<YOUR CODE>
```
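A possible standalone sketch of the UCB1 index (not a fill-in for the class above). The rule of pulling each untried arm once before using the index is an assumption to avoid dividing by zero:

```python
import numpy as np

def ucb1_action(successes, failures, t):
    """UCB1: empirical mean plus sqrt(2 log t / n_k); untried arms go first."""
    n = successes + failures
    if np.any(n == 0):          # pull each arm at least once
        return int(np.argmax(n == 0))
    w = successes / n + np.sqrt(2.0 * np.log(t) / n)
    return int(np.argmax(w))

# Arm 1 has fewer pulls, so its exploration bonus dominates.
print(ucb1_action(np.array([5.0, 1.0]), np.array([5.0, 0.0]), t=11))  # → 1
```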
### Thompson sampling
The UCB1 algorithm does not take the actual distribution of rewards into account. If we know the distribution family, we can do much better by using Thompson sampling:
**for** $t = 1,2,...$ **do**
**for** $k = 1,...,K$ **do**
Sample $\hat\theta_k \sim beta(\alpha_k, \beta_k)$
**end for**
$x_t \leftarrow argmax_{k}\hat\theta$
Apply $x_t$ and observe $r_t$
$(\alpha_{x_t}, \beta_{x_t}) \leftarrow (\alpha_{x_t}, \beta_{x_t}) + (r_t, 1-r_t)$
**end for**
More on Thompson Sampling:
https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf
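A possible standalone sketch of the sampling step (not a fill-in for the class below). The `+1.0` terms encode a uniform Beta(1, 1) prior, an assumption of this sketch:

```python
import numpy as np

def thompson_action(successes, failures, rng=None):
    """Sample theta_k ~ Beta(alpha_k + 1, beta_k + 1), act greedily on the sample."""
    rng = rng or np.random.default_rng()
    samples = rng.beta(successes + 1.0, failures + 1.0)
    return int(np.argmax(samples))

rng = np.random.default_rng(0)
# Arm 0 has a far better posterior, so it is chosen almost surely.
print(thompson_action(np.array([100.0, 1.0]), np.array([1.0, 100.0]), rng=rng))  # → 0
```

Because actions are chosen by sampling, uncertain arms still get tried occasionally, which is exactly the exploration mechanism.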
```
class ThompsonSamplingAgent(AbstractAgent):
def get_action(self):
<YOUR CODE>
from collections import OrderedDict
def get_regret(env, agents, n_steps=5000, n_trials=50):
scores = OrderedDict({
agent.name: [0.0 for step in range(n_steps)] for agent in agents
})
for trial in range(n_trials):
env.reset()
for a in agents:
a.init_actions(env.action_count)
for i in range(n_steps):
optimal_reward = env.optimal_reward()
for agent in agents:
action = agent.get_action()
reward = env.pull(action)
agent.update(action, reward)
scores[agent.name][i] += optimal_reward - reward
env.step() # change bandit's state if it is unstationary
for agent in agents:
scores[agent.name] = np.cumsum(scores[agent.name]) / n_trials
return scores
def plot_regret(agents, scores):
for agent in agents:
plt.plot(scores[agent.name])
plt.legend([agent.name for agent in agents])
plt.ylabel("regret")
plt.xlabel("steps")
plt.show()
# Uncomment agents
agents = [
# EpsilonGreedyAgent(),
# UCBAgent(),
# ThompsonSamplingAgent()
]
regret = get_regret(BernoulliBandit(), agents, n_steps=10000, n_trials=10)
plot_regret(agents, regret)
```
# Bonus 1.1. Gittins index (5 points).
The Bernoulli bandit problem has an optimal solution: the Gittins index algorithm. Implement a finite-horizon version of the algorithm and demonstrate its performance with experiments. Some articles:
- Wikipedia article - https://en.wikipedia.org/wiki/Gittins_index
- Different algorithms for index computation - http://www.ece.mcgill.ca/~amahaj1/projects/bandits/book/2013-bandit-computations.pdf (see "Bernoulli" section)
# HW 1.1. Nonstationary Bernoulli bandit
What if the success probabilities change over time? Here is an example of such a bandit:
```
class DriftingBandit(BernoulliBandit):
def __init__(self, n_actions=5, gamma=0.01):
"""
Idea from https://github.com/iosband/ts_tutorial
"""
super().__init__(n_actions)
self._gamma = gamma
self._successes = None
self._failures = None
self._steps = 0
self.reset()
def reset(self):
self._successes = np.zeros(self.action_count) + 1.0
self._failures = np.zeros(self.action_count) + 1.0
self._steps = 0
def step(self):
action = np.random.randint(self.action_count)
reward = self.pull(action)
self._step(action, reward)
def _step(self, action, reward):
self._successes = self._successes * (1 - self._gamma) + self._gamma
self._failures = self._failures * (1 - self._gamma) + self._gamma
self._steps += 1
self._successes[action] += reward
self._failures[action] += 1.0 - reward
self._probs = np.random.beta(self._successes, self._failures)
```
And a picture of how its reward probabilities change over time:
```
drifting_env = DriftingBandit(n_actions=5)
drifting_probs = []
for i in range(20000):
drifting_env.step()
drifting_probs.append(drifting_env._probs)
plt.figure(figsize=(17, 8))
plt.plot(pandas.DataFrame(drifting_probs).rolling(window=20).mean())
plt.xlabel("steps")
plt.ylabel("Success probability")
plt.title("Reward probabilities over time")
plt.legend(["Action {}".format(i) for i in range(drifting_env.action_count)])
plt.show()
```
Your task is to invent an agent that achieves better regret than the stationary agents from above.
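One common idea (an assumption here, not the only valid answer) is to exponentially discount old observations so the posterior can track the drifting probabilities. A minimal sketch of such bookkeeping, which could back a discounted Thompson sampling agent:

```python
import numpy as np

class DiscountedCounts:
    """Exponentially discounted success/failure counts for a drifting bandit."""
    def __init__(self, n_actions, gamma=0.99):
        self.gamma = gamma
        self.successes = np.zeros(n_actions)
        self.failures = np.zeros(n_actions)

    def update(self, action, reward):
        # Decay all counts, then add the new observation.
        self.successes *= self.gamma
        self.failures *= self.gamma
        self.successes[action] += reward
        self.failures[action] += 1.0 - reward

counts = DiscountedCounts(n_actions=2, gamma=0.5)
counts.update(0, 1.0)
counts.update(0, 1.0)
print(counts.successes[0])  # 1.0 * 0.5 + 1.0 = 1.5
```

Discounting keeps the effective sample size bounded, so the agent never becomes so certain that it stops exploring.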
```
# YOUR AGENT HERE SECTION
drifting_agents = [
ThompsonSamplingAgent(),
EpsilonGreedyAgent(),
UCBAgent(),
YourAgent()
]
regret = get_regret(DriftingBandit(), drifting_agents, n_steps=20000, n_trials=10)
plot_regret(drifting_agents, regret)
```
## Part 2. Contextual bandit
Now we will solve a much more complex problem: the reward will depend on the bandit's state.
**Real-world analogy:**
> Contextual advertising. We have a lot of banners and a lot of different users. Users can have different features: age, gender, search requests. We want to show the banner with the highest click probability.
If we want to use the strategies from above, we need to somehow store reward distributions conditioned on both the action and the bandit's state.
One way to do this is to use Bayesian neural networks: instead of giving pointwise estimates of the target, they maintain probability distributions over it.
<img src="bnn.png">
Picture from https://arxiv.org/pdf/1505.05424.pdf
More material:
* A post on the matter - [url](http://twiecki.github.io/blog/2016/07/05/bayesian-deep-learning/)
* Theano+PyMC3 for more serious stuff - [url](http://pymc-devs.github.io/pymc3/notebooks/bayesian_neural_network_advi.html)
* Same stuff in tensorflow - [url](http://edwardlib.org/tutorials/bayesian-neural-network)
Let's load our dataset:
```
all_states = np.load("all_states.npy")
action_rewards = np.load("action_rewards.npy")
state_size = all_states.shape[1]
n_actions = action_rewards.shape[1]
print("State size: %i, actions: %i" % (state_size, n_actions))
import theano
import theano.tensor as T
import lasagne
from lasagne import init
from lasagne.layers import *
import bayes
as_bayesian = bayes.bbpwrap(bayes.NormalApproximation(std=0.1))
BayesDenseLayer = as_bayesian(DenseLayer)
```
## 2.1 Building a BNN agent
Let's implement an epsilon-greedy BNN agent.
```
class BNNAgent:
"""a bandit with bayesian neural net"""
def __init__(self, state_size, n_actions):
input_states = T.matrix("states")
target_actions = T.ivector("actions taken")
target_rewards = T.vector("rewards")
self.total_samples_seen = theano.shared(
np.int32(0), "number of training samples seen so far")
batch_size = target_actions.shape[0]  # why?
# Network
inp = InputLayer((None, state_size), name='input')
# YOUR NETWORK HERE
out = <YOUR CODE: Your network>
# Prediction
prediction_all_actions = get_output(out, inputs=input_states)
self.predict_sample_rewards = theano.function(
[input_states], prediction_all_actions)
# Training
# select prediction for target action
prediction_target_actions = prediction_all_actions[T.arange(
batch_size), target_actions]
# loss = negative log-likelihood (mse) + KL
negative_llh = T.sum((prediction_target_actions - target_rewards)**2)
kl = bayes.get_var_cost(out) / (self.total_samples_seen+batch_size)
loss = (negative_llh + kl)/batch_size
self.weights = get_all_params(out, trainable=True)
self.out = out
# gradient descent
updates = lasagne.updates.adam(loss, self.weights)
# update counts
updates[self.total_samples_seen] = self.total_samples_seen + \
batch_size.astype('int32')
self.train_step = theano.function([input_states, target_actions, target_rewards],
[negative_llh, kl],
updates=updates,
allow_input_downcast=True)
def sample_prediction(self, states, n_samples=1):
"""Samples n_samples predictions for rewards,
:returns: tensor [n_samples, state_i, action_i]
"""
assert states.ndim == 2, "states must be 2-dimensional"
return np.stack([self.predict_sample_rewards(states) for _ in range(n_samples)])
epsilon = 0.25
def get_action(self, states):
"""
Picks action by
- with p=1-epsilon, taking argmax of average rewards
- with p=epsilon, taking random action
This is exactly e-greedy policy.
"""
reward_samples = self.sample_prediction(states, n_samples=100)
# ^-- samples for rewards, shape = [n_samples,n_states,n_actions]
best_actions = reward_samples.mean(axis=0).argmax(axis=-1)
# ^-- we take mean over samples to compute expectation, then pick best action with argmax
<YOUR CODE>
chosen_actions = <YOUR CODE: implement epsilon-greedy strategy>
return chosen_actions
def train(self, states, actions, rewards, n_iters=10):
"""
trains to predict rewards for chosen actions in given states
"""
loss_sum = kl_sum = 0
for _ in range(n_iters):
loss, kl = self.train_step(states, actions, rewards)
loss_sum += loss
kl_sum += kl
return loss_sum / n_iters, kl_sum / n_iters
@property
def name(self):
return self.__class__.__name__
```
## 2.2 Training the agent
```
N_ITERS = 100
def get_new_samples(states, action_rewards, batch_size=10):
"""samples random minibatch, emulating new users"""
batch_ix = np.random.randint(0, len(states), batch_size)
return states[batch_ix], action_rewards[batch_ix]
from IPython.display import clear_output
from pandas import DataFrame
moving_average = lambda x, **kw: DataFrame(
{'x': np.asarray(x)}).x.ewm(**kw).mean().values
def train_contextual_agent(agent, batch_size=10, n_iters=100):
rewards_history = []
for i in range(n_iters):
b_states, b_action_rewards = get_new_samples(
all_states, action_rewards, batch_size)
b_actions = agent.get_action(b_states)
b_rewards = b_action_rewards[
np.arange(batch_size), b_actions
]
mse, kl = agent.train(b_states, b_actions, b_rewards, n_iters=100)
rewards_history.append(b_rewards.mean())
if i % 10 == 0:
clear_output(True)
print("iteration #%i\tmean reward=%.3f\tmse=%.3f\tkl=%.3f" %
(i, np.mean(rewards_history[-10:]), mse, kl))
plt.plot(rewards_history)
plt.plot(moving_average(np.array(rewards_history), alpha=0.1))
plt.title("Reward per episode")
plt.xlabel("Episode")
plt.ylabel("Reward")
plt.show()
samples = agent.sample_prediction(
b_states[:1], n_samples=100).T[:, 0, :]
for i in range(len(samples)):
plt.hist(samples[i], alpha=0.25, label=str(i))
plt.legend(loc='best')
print('Q(s,a) std:', ';'.join(
list(map('{:.3f}'.format, np.std(samples, axis=1)))))
print('correct', b_action_rewards[0].argmax())
plt.title("p(Q(s, a))")
plt.show()
return moving_average(np.array(rewards_history), alpha=0.1)
bnn_agent = BNNAgent(state_size=state_size, n_actions=n_actions)
greedy_agent_rewards = train_contextual_agent(
bnn_agent, batch_size=10, n_iters=N_ITERS)
```
## HW 2.1 Better exploration
Use the strategies from the first part to gain more reward in the contextual setting.
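As a hint for the Bayes-UCB variant, here is a standalone NumPy sketch (the function name and the toy Gaussian samples are made up for illustration) of picking, per state, the action whose sampled reward distribution has the highest q-th percentile:

```python
import numpy as np

def bayes_ucb_actions(reward_samples, q=90):
    """reward_samples: [n_samples, n_states, n_actions] sampled Q-values.
    Pick, per state, the action with the highest q-th percentile."""
    upper = np.percentile(reward_samples, q, axis=0)  # [n_states, n_actions]
    return upper.argmax(axis=-1)

rng = np.random.default_rng(0)
# Two states, two actions: action 1 has a higher upper tail in both states.
samples = np.stack([rng.normal([[0.0, 0.5], [0.0, 0.5]], 0.1) for _ in range(100)])
print(bayes_ucb_actions(samples))  # → [1 1]
```

In the agent itself, `reward_samples` would come from `self.sample_prediction(states, n_samples=...)`.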
```
class ThompsonBNNAgent(BNNAgent):
def get_action(self, states):
"""
Picks an action by taking _one_ sample from the BNN and choosing the action with the highest sampled reward (yes, it is that simple).
This is exactly Thompson sampling.
"""
<YOUR CODE>
thompson_agent_rewards = train_contextual_agent(ThompsonBNNAgent(state_size=state_size, n_actions=n_actions),
batch_size=10, n_iters=N_ITERS)
class BayesUCBBNNAgent(BNNAgent):
q = 90
def get_action(self, states):
"""
Compute q-th percentile of rewards P(r|s,a) for all actions
Take actions that have highest percentiles.
This implements bayesian UCB strategy
"""
<YOUR CODE>
ucb_agent_rewards = train_contextual_agent(BayesUCBBNNAgent(state_size=state_size, n_actions=n_actions),
batch_size=10, n_iters=N_ITERS)
plt.figure(figsize=(17, 8))
plt.plot(greedy_agent_rewards)
plt.plot(thompson_agent_rewards)
plt.plot(ucb_agent_rewards)
plt.legend([
"Greedy BNN",
"Thompson sampling BNN",
"UCB BNN"
])
plt.show()
```
## Part 3. Exploration in MDP
The following problem, called "river swim", illustrates the importance of exploration in the context of MDPs.
<img src="river_swim.png">
Picture from https://arxiv.org/abs/1306.0940
Rewards and transition probabilities are unknown to the agent. The optimal policy is to swim against the current (to the right), while the easiest way to gain a small reward is to go left.
```
class RiverSwimEnv:
LEFT_REWARD = 5.0 / 1000
RIGHT_REWARD = 1.0
def __init__(self, intermediate_states_count=4, max_steps=16):
self._max_steps = max_steps
self._current_state = None
self._steps = None
self._interm_states = intermediate_states_count
self.reset()
def reset(self):
self._steps = 0
self._current_state = 1
return self._current_state, 0.0, False
@property
def n_actions(self):
return 2
@property
def n_states(self):
return 2 + self._interm_states
def _get_transition_probs(self, action):
if action == 0:
if self._current_state == 0:
return [0, 1.0, 0]
else:
return [1.0, 0, 0]
elif action == 1:
if self._current_state == 0:
return [0, .4, .6]
if self._current_state == self.n_states - 1:
return [.4, .6, 0]
else:
return [.05, .6, .35]
else:
raise RuntimeError(
"Unknown action {}. Max action is {}".format(action, self.n_actions))
def step(self, action):
"""
:param action:
:type action: int
:return: observation, reward, is_done
:rtype: (int, float, bool)
"""
reward = 0.0
if self._steps >= self._max_steps:
return self._current_state, reward, True
transition = np.random.choice(
range(3), p=self._get_transition_probs(action))
if transition == 0:
self._current_state -= 1
elif transition == 1:
pass
else:
self._current_state += 1
if self._current_state == 0:
reward = self.LEFT_REWARD
elif self._current_state == self.n_states - 1:
reward = self.RIGHT_REWARD
self._steps += 1
return self._current_state, reward, False
```
Let's implement a Q-learning agent with an epsilon-greedy exploration strategy and see how it performs.
```
class QLearningAgent:
def __init__(self, n_states, n_actions, lr=0.2, gamma=0.95, epsilon=0.1):
self._gamma = gamma
self._epsilon = epsilon
self._q_matrix = np.zeros((n_states, n_actions))
self._lr = lr
def get_action(self, state):
if np.random.random() < self._epsilon:
return np.random.randint(0, self._q_matrix.shape[1])
else:
return np.argmax(self._q_matrix[state])
def get_q_matrix(self):
""" Used for policy visualization
"""
return self._q_matrix
def start_episode(self):
""" Used in PSRL agent
"""
pass
def update(self, state, action, reward, next_state):
<YOUR CODE>
# Finish implementation of q-learnig agent
def train_mdp_agent(agent, env, n_episodes):
episode_rewards = []
for ep in range(n_episodes):
state, ep_reward, is_done = env.reset()
agent.start_episode()
while not is_done:
action = agent.get_action(state)
next_state, reward, is_done = env.step(action)
agent.update(state, action, reward, next_state)
state = next_state
ep_reward += reward
episode_rewards.append(ep_reward)
return episode_rewards
env = RiverSwimEnv()
agent = QLearningAgent(env.n_states, env.n_actions)
rews = train_mdp_agent(agent, env, 1000)
plt.figure(figsize=(15, 8))
plt.plot(moving_average(np.array(rews), alpha=.1))
plt.xlabel("Episode count")
plt.ylabel("Reward")
plt.show()
```
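For reference, the standard Q-learning update that the `update` method is meant to implement, shown as a standalone sketch over a plain array rather than a drop-in for the class above:

```python
import numpy as np

def q_learning_update(q, state, action, reward, next_state, lr=0.2, gamma=0.95):
    """Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(q[next_state])
    q[state, action] += lr * (td_target - q[state, action])
    return q

q = np.zeros((3, 2))
q = q_learning_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[0, 1])  # 0.2 * (1.0 + 0.95 * 0 - 0) = 0.2
```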
Let's visualize our policy:
```
def plot_policy(agent):
fig = plt.figure(figsize=(15, 8))
ax = fig.add_subplot(111)
ax.matshow(agent.get_q_matrix().T)
ax.set_yticklabels(['', 'left', 'right'])
plt.xlabel("State")
plt.ylabel("Action")
plt.title("Values of state-action pairs")
plt.show()
plot_policy(agent)
```
As you can see, the agent follows the suboptimal policy of going left and does not explore the rightmost state.
## Bonus 3.1 Posterior sampling RL (3 points)
Now we will implement Thompson Sampling for MDP!
General algorithm:
>**for** episode $k = 1,2,...$ **do**
>> sample $M_k \sim f(\bullet\ |\ H_k)$
>> compute policy $\mu_k$ for $M_k$
>> **for** time $t = 1, 2,...$ **do**
>>> take action $a_t$ from $\mu_k$
>>> observe $r_t$ and $s_{t+1}$
>>> update $H_k$
>> **end for**
>**end for**
In our case we will model $M_k$ with two matrices: transitions and rewards. The transition matrix is sampled from a Dirichlet distribution, and the reward matrix from a Normal-Gamma distribution.
The distributions are updated with Bayes' rule - see the continuous-distribution section at https://en.wikipedia.org/wiki/Conjugate_prior
Article on PSRL - https://arxiv.org/abs/1306.0940
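As a sketch of the conjugate updates for a single observation (following the Wikipedia parameterization with n = 1; the helper names are mine and the bookkeeping inside the class's `update` may differ):

```python
import numpy as np

def normal_gamma_update(mu0, lmbd, alpha, beta, r):
    """Posterior Normal-Gamma parameters after observing one reward r."""
    mu_n = (lmbd * mu0 + r) / (lmbd + 1.0)
    lmbd_n = lmbd + 1.0
    alpha_n = alpha + 0.5
    beta_n = beta + lmbd * (r - mu0) ** 2 / (2.0 * (lmbd + 1.0))
    return mu_n, lmbd_n, alpha_n, beta_n

def dirichlet_update(transition_counts, s, a, s_next):
    """Dirichlet posterior: increment the observed transition's count."""
    transition_counts[s, s_next, a] += 1.0
    return transition_counts

# Starting from the (1, 1, 1, 1) prior used in the class, observe reward 0.
print(normal_gamma_update(1.0, 1.0, 1.0, 1.0, r=0.0))  # (0.5, 2.0, 1.5, 1.25)
```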
```
def sample_normal_gamma(mu, lmbd, alpha, beta):
""" https://en.wikipedia.org/wiki/Normal-gamma_distribution
"""
    tau = np.random.gamma(alpha, 1.0 / beta)  # numpy's gamma takes shape and *scale*, so invert the rate beta
mu = np.random.normal(mu, 1.0 / np.sqrt(lmbd * tau))
return mu, tau
class PsrlAgent:
def __init__(self, n_states, n_actions, horizon=10):
self._n_states = n_states
self._n_actions = n_actions
self._horizon = horizon
# params for transition sampling - Dirichlet distribution
self._transition_counts = np.zeros(
(n_states, n_states, n_actions)) + 1.0
# params for reward sampling - Normal-gamma distribution
self._mu_matrix = np.zeros((n_states, n_actions)) + 1.0
self._state_action_counts = np.zeros(
(n_states, n_actions)) + 1.0 # lambda
self._alpha_matrix = np.zeros((n_states, n_actions)) + 1.0
self._beta_matrix = np.zeros((n_states, n_actions)) + 1.0
def _value_iteration(self, transitions, rewards):
        # YOUR CODE HERE
state_values = <YOUR CODE: find action values with value iteration>
return state_values
def start_episode(self):
# sample new mdp
self._sampled_transitions = np.apply_along_axis(
np.random.dirichlet, 1, self._transition_counts)
sampled_reward_mus, sampled_reward_stds = sample_normal_gamma(
self._mu_matrix,
self._state_action_counts,
self._alpha_matrix,
self._beta_matrix
)
self._sampled_rewards = sampled_reward_mus
self._current_value_function = self._value_iteration(
self._sampled_transitions, self._sampled_rewards)
def get_action(self, state):
return np.argmax(self._sampled_rewards[state] +
self._current_value_function.dot(self._sampled_transitions[state]))
def update(self, state, action, reward, next_state):
<YOUR CODE>
# update rules - https://en.wikipedia.org/wiki/Conjugate_prior
def get_q_matrix(self):
return self._sampled_rewards + self._current_value_function.dot(self._sampled_transitions)
from pandas import DataFrame
moving_average = lambda x, **kw: DataFrame(
{'x': np.asarray(x)}).x.ewm(**kw).mean().values
horizon = 20
env = RiverSwimEnv(max_steps=horizon)
agent = PsrlAgent(env.n_states, env.n_actions, horizon=horizon)
rews = train_mdp_agent(agent, env, 1000)
plt.figure(figsize=(15, 8))
plt.plot(moving_average(np.array(rews), alpha=0.1))
plt.xlabel("Episode count")
plt.ylabel("Reward")
plt.show()
plot_policy(agent)
```
## Bonus 3.2 Bootstrapped DQN (10 points)
Implement the Bootstrapped DQN algorithm and compare its performance with an ordinary DQN on the BeamRider Atari game. Links:
- https://arxiv.org/abs/1602.04621
# RE19-classification: interpretable ML via Skope Rules
## 0. Set up (optional)
Run the following install commands if you are running Jupyter in a cloud environment like Colaboratory, which does not allow you to install the libraries permanently on your local machine
```
#!git clone https://github.com/rulematrix/rule-matrix-py.git
#!pip install rule-matrix-py/.
#!pip install mdlp-discretization
#!pip install pysbrl==0.4.2rc0
#!pip install fim
!pip install cython numpy
!pip install skope-rules
```
## 1. Import libraries
```
#import rulematrix
#from rulematrix.surrogate import rule_surrogate
from skrules import SkopeRules
from sklearn.neural_network import MLPClassifier
from sklearn import svm
from sklearn.svm import SVC
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_validate, StratifiedKFold
from sklearn.metrics import recall_score, precision_score, f1_score, roc_curve, precision_recall_curve, auc, confusion_matrix, accuracy_score
from imblearn.over_sampling import ADASYN
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer, load_iris
import matplotlib.pyplot as plt
from matplotlib import cm
from scipy import interp
import numpy as np
import pandas as pd
# Set the ipython display in such a way that helps the visualization of the rulematrix outputs.
from IPython.display import display, HTML
display(HTML(data="""
<style>
div#notebook-container { width: 95%; }
div#menubar-container { width: 65%; }
div#maintoolbar-container { width: 99%; }
</style>
"""))
def split_tr_te(dataset, target, to_drop, tsize=0.25):
return train_test_split(dataset.drop(to_drop, axis=1), dataset[target], test_size=tsize, random_state=42)
def compute_y_pred_from_query(X, rule):
score = np.zeros(X.shape[0])
X = X.reset_index(drop=True)
score[list(X.query(rule).index)] = 1
return(score)
def compute_performances_from_y_pred(y_true, y_pred, index_name='default_index'):
df = pd.DataFrame(data=
{
'precision':[sum(y_true * y_pred)/sum(y_pred)],
'recall':[sum(y_true * y_pred)/sum(y_true)]
},
index=[index_name],
columns=['precision', 'recall']
)
return(df)
def compute_train_test_query_performances(X_train, y_train, X_test, y_test, rule):
y_train_pred = compute_y_pred_from_query(X_train, rule)
y_test_pred = compute_y_pred_from_query(X_test, rule)
performances = None
performances = pd.concat([
performances,
compute_performances_from_y_pred(y_train, y_train_pred, 'train_set')],
axis=0)
performances = pd.concat([
performances,
compute_performances_from_y_pred(y_test, y_test_pred, 'test_set')],
axis=0)
return(performances)
def drop_descriptive_columns(dataset):
for c in dataset.columns:
if c in ['RequirementText', 'Class', 'ProjectID', 'Length','det', 'punct', 'TreeHeight', 'DTreeHeight', 'SubTrees']:
dataset = dataset.drop(c, axis = 1)
return dataset
```
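The rule strings that Skope Rules produces are plain pandas-query expressions, which is what `compute_y_pred_from_query` above relies on. A minimal sketch with a made-up frame and rule (the feature names are hypothetical):

```python
import numpy as np
import pandas as pd

# Toy frame and rule, purely to show how a rule string selects rows
X = pd.DataFrame({"length": [3, 12, 7], "n_verbs": [0, 4, 1]})
rule = "length > 5 and n_verbs >= 1"
score = np.zeros(X.shape[0])
score[list(X.query(rule).index)] = 1
# score is now [0., 1., 1.]: rows 1 and 2 satisfy the rule
```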
## 2. Skope rules
```
folder_datasets = './'
filenames = ['promise-reclass', 'INDcombined', '8combined']
targets = ['IsFunctional', 'IsQuality', 'OnlyFunctional', 'OnlyQuality']
feature_sets = ['allext', 'sd', 'sdsb8sel02ext']
for target in targets:
for feature_set in feature_sets:
for filename in filenames:
print('======== Target '+target +' Feature set '+feature_set+' Dataset '+filename+ ' ========')
appendix='-ling-'+feature_set
# INPUT 1: File name
data = pd.read_csv(folder_datasets+filename + appendix + '.csv', engine='python')
data.rename(columns=lambda x: x.replace('+', '___'), inplace=True)
data = drop_descriptive_columns(data)
data = data.drop(data.columns[0], axis=1)
to_drop = ['IsFunctional', 'IsQuality']
if target == 'OnlyQuality':
data['IsQuality'] = ~data['IsFunctional'] & data['IsQuality']
target = 'IsQuality'
if target == 'OnlyFunctional':
data['IsFunctional'] = data['IsFunctional'] & ~data['IsQuality']
target = 'IsFunctional'
X_train, X_test, y_train, y_test = split_tr_te(data, target, to_drop)
feature_names = X_train.columns
# INPUT 2: Pick a trade-off between precision and recall!!!
skope_rules_clf = SkopeRules(feature_names=feature_names, random_state=42, n_estimators=30,
# recall_min=0.05, precision_min=0.8,
recall_min=0.1, precision_min=0.7,
max_samples=0.7,
max_depth_duplication= 4, max_depth = 5)
skope_rules_clf.fit(X_train, y_train)
# Compute prediction scores
classifier = svm.SVC(kernel='linear', probability=True, random_state=0)
classifier.fit(X_train, y_train)
baseline_svm = classifier.predict_proba(X_test)[:, 1]
skope_rules_scoring = skope_rules_clf.score_top_rules(X_test)
# Get number of survival rules created
print(str(len(skope_rules_clf.rules_)) + ' rules have been built with ' +
'SkopeRules.\n')
nrules = len(skope_rules_clf.rules_)
# print('The most performing rules are the following one:\n')
# for i_rule, rule in enumerate(skope_rules_clf.rules_):
# print("R" + str(i_rule + 1) + ". " + rule[0])
# #print('-> '+rules_explanations[i_rule]+ '\n')
for i in range(nrules):
print('Rule '+str(i+1)+':',skope_rules_clf.rules_[i][0])
display(compute_train_test_query_performances(X_train, y_train,
X_test, y_test,
skope_rules_clf.rules_[i][0])
)
y_pred_tr = skope_rules_clf.predict_top_rules(X_train, nrules)
y_pred_test = skope_rules_clf.predict_top_rules(X_test, nrules)
print('The performances reached with '+str(nrules)+' discovered rules are the following:')
print(compute_performances_from_y_pred(y_train, y_pred_tr, 'train_set'))
print(compute_performances_from_y_pred(y_test, y_pred_test, 'test_set'))
print('========================================================================================')
def plot_scores(y_true, scores_with_line=[], scores_with_points=[],
labels_with_line=['SVM'],
labels_with_points=['skope-rules']):
gradient = np.linspace(0, 1, 10)
color_list = [ cm.tab10(x) for x in gradient ]
fig, axes = plt.subplots(1, 2, figsize=(12, 5),
sharex=True, sharey=True)
ax = axes[0]
n_line = 0
for i_score, score in enumerate(scores_with_line):
n_line = n_line + 1
fpr, tpr, _ = roc_curve(y_true, score)
ax.plot(fpr, tpr, linestyle='-.', c=color_list[i_score], lw=1, label=labels_with_line[i_score])
for i_score, score in enumerate(scores_with_points):
fpr, tpr, _ = roc_curve(y_true, score)
ax.scatter(fpr[:-1], tpr[:-1], c=color_list[n_line + i_score], s=10, label=labels_with_points[i_score])
ax.set_title("ROC", fontsize=20)
ax.set_xlabel('False Positive Rate', fontsize=18)
ax.set_ylabel('True Positive Rate (Recall)', fontsize=18)
ax.legend(loc='lower center', fontsize=8)
ax = axes[1]
n_line = 0
for i_score, score in enumerate(scores_with_line):
n_line = n_line + 1
precision, recall, _ = precision_recall_curve(y_true, score)
ax.step(recall, precision, linestyle='-.', c=color_list[i_score], lw=1, where='post', label=labels_with_line[i_score])
for i_score, score in enumerate(scores_with_points):
precision, recall, _ = precision_recall_curve(y_true, score)
ax.scatter(recall, precision, c=color_list[n_line + i_score], s=10, label=labels_with_points[i_score])
ax.set_title("Precision-Recall", fontsize=20)
ax.set_xlabel('Recall (True Positive Rate)', fontsize=18)
ax.set_ylabel('Precision', fontsize=18)
ax.legend(loc='lower center', fontsize=8)
plt.show()
plot_scores(y_test,
scores_with_line=[baseline_svm],
scores_with_points=[skope_rules_scoring]
)
y_pred = skope_rules_clf.predict_top_rules(X_test, nrules)
y_pred2 = skope_rules_clf.predict_top_rules(X_train, nrules)
print('The performances reached with '+str(nrules)+' discovered rules are the following:')
print(compute_performances_from_y_pred(y_train, y_pred2, 'train_set'))
print(compute_performances_from_y_pred(y_test, y_pred, 'test_set'))
```
# Bag Of Words --> Transferlearning
by Niels Helsø
git = https://github.com/slein89/BOW_transferlearning
linkedin = https://www.linkedin.com/in/nielshelsoe/
# Purpose
- 3 models for NLP
- Compare them
- Every model has a tutorial
- IT IS ALL ON **GIT HUB**
# Agenda
1. Introduction
2. Classifier with BoW (Bag Of Words)
3. Classifier with Word embeddings
4. Classifier with Transferlearning
## Code, articles, tutorials and stuff
Git url = https://github.com/slein89/BOW_transferlearning
# Introduction
- Who am I?
- Name = Niels
- Age = 29
- Education = cand. it
- have coded in python for 2 years time
- Love NLP
- Favorite guilty pleasure = paagen gifler
# The Danish Business Authority
- Government agency
- 550 people
- Goal: The best conditions and framework for Danish companies
- Three locations: Copenhagen, Silkeborg, Nykøbing Falster
## Data science in Danish Business Authority
- Data-Science team of 5 (WE ARE HIRING!)
- Use Data science
- Profiles:
- fine arts
- economics
- engineer
- phd
- it
- information studies
- Using Python and libs such as:
- sklearn
- pandas
- keras
- numpy
- and so on
# why this talk?
1. Give an overview of where natural language processing (NLP) is going and where it has been
2. Pros and cons of three different methods
3. Demystify NLP
# PLEASE ASK QUESTIONS
There is no such thing as a dumb question.
# Disclaimer
Use this at your own risk
# Data
thanks to Prayson Wilfred Daniel
- trustpilot reviews
- 254,464 reviews (after making an equal distribution of the two classes)
- task: build a sentiment classifier
```
from src.data.load_data import load_trustpilot_data
from src.preprocessing.text_pre import clean_text
df_trust = load_trustpilot_data()
df_trust = clean_text(df_trust, 'text')
print('Total value of classes amount to')
print('-----------------')
print(df_trust['y'].value_counts())
print('')
print('Examples of text')
print('-----------------')
for index,row in df_trust[:5].iterrows():
print(row['text'])
# %load src/preprocessing/text_pre.py
import re
import string
def clean_text (df, row_name):
#lower all text
df[row_name] = df[row_name].str.lower()
#remove all numbers
df[row_name] = df[row_name].apply(lambda x: x.translate(str.maketrans('','','0123456789')))
    #remove all special characters
df[row_name] = df[row_name].apply(lambda x: re.sub('[\W]+', ' ', x))
    #make a translation table where punctuation is removed and apply it to the text
table = str.maketrans({key: None for key in string.punctuation})
df[row_name] = df[row_name].apply(lambda x: x.translate(table))
return df
```
# Metrics:
- Roc auc
- Confusion Matrix
- Classification report (recall, precision, F1 score)
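As a refresher, these report metrics can be computed by hand from the confusion-matrix counts. A minimal sketch for binary labels (assumes at least one predicted and one actual positive):

```python
def precision_recall_f1(y_true, y_pred):
    # Count true positives, false positives and false negatives
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

precision_recall_f1([1, 1, 0, 0], [1, 0, 1, 0])  # (0.5, 0.5, 0.5)
```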
# BoW (Bag Of Words)
- one of the first NLP approaches
- converts text to a vector
## Libs for BoW in python
- Gensim
- Nltk
- Spacy
- Sklearn
```
corpus=['Pydatacopenhagen is the best place to be tonight, yes tonight',
'At Pydatacopenhagen every one is welcome']
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
#the vector representation of the words (13)
print( vectorizer.fit_transform(corpus).todense() )
#index of each unique word in the corpus
print( vectorizer.vocabulary_ )
```
## ngrams
- help preserve some of the sentence's word order (and hence meaning)
```
corpus=['Pydatacopenhagen is the best place to be tonight, yes tonight',
'At Pydatacopenhagen everyone is welcome']
from nltk import ngrams
grams = ngrams(corpus[0].split(),2)
for gram in grams:
print (gram)
from sklearn.model_selection import train_test_split
from src.preprocessing.metrics import get_metrics
X_train,X_test,y_train,y_test = train_test_split(df_trust['text'], df_trust['y'], random_state=42 )
```
### Performance of BoW
Let's take a look at a tutorial and see how BoW performs
```
from joblib import load
model_bow = load('data/models/bow_model.pkl')
get_metrics(model_bow, X_test, y_test)
```
## Some issues
- n-grams get us some of the way with the sentiment
- just one vector per sentence (document)
# Word embeddings
- converts each word into a vector
- maps it into a vector space
- allows words with similar meaning to have similar representations
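"Similar representation" is usually measured with cosine similarity between word vectors. A toy sketch with made-up 3-dimensional embeddings (real embeddings have hundreds of dimensions):

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of the norms
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up vectors purely for illustration
vec = {"king": [0.9, 0.8, 0.1], "queen": [0.85, 0.75, 0.2], "banana": [0.1, 0.2, 0.9]}
cosine(vec["king"], vec["queen"]) > cosine(vec["king"], vec["banana"])  # True
```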
## Word2vec
Word2Vec has two methods:
1. CBOW
2. Skip-gram
```
from IPython.display import IFrame
IFrame("pictures/skipgra, and cbow.png", width=600, height=400)
```
Let's load a trained model and have a look at how good it is at finding word similarity
```
import gensim
word2vec_model = gensim.models.Word2Vec.load("data/models/word2vec.model")
w1 = 'nokia'
word2vec_model.wv.most_similar(positive=w1)
```
## FastText
- Framework with word2vec's CBOW implemented
- ngrams (both on words and characters)
- size (size of the context window)
- dim (size of the vector)
- Each word is represented as a bag of character n-grams
- ngrams = 3
- where = <wh, whe, her, ere, re>
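The character n-gram decomposition in the bullet above can be sketched directly:

```python
def char_ngrams(word, n=3):
    # FastText pads each word with boundary markers before slicing
    padded = "<" + word + ">"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

char_ngrams("where")  # ['<wh', 'whe', 'her', 'ere', 're>']
```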
```
%%time
from src.model.fasttext import fasttext_pipeline
X_train,X_test,y_train,y_test = train_test_split(df_trust[['text']], df_trust['y'], random_state=42 )
model_fasttext = fasttext_pipeline(X_train, y_train)
get_metrics(model_fasttext, X_test, y_test)
```
# Transferlearning
- let's take something we have learned in one domain and apply it to a new domain
- why this?
- we do not always have a ton of data
- The model does not need to learn from scratch
- Thus we save time
- We make (re)use of models built by very talented people
- the approach has proven itself in computer vision
## Universal Language Model Fine-tuning for Text Classification (ULMFiT)
- fast.ai
- 15 may 2018
- Make use of Language model approach
- A model which predicts the next word in a sentence
- e.g. "jeg elsker pågen" -> "gifler"
- has high understanding of semantics
- e.g. "så det der kommer ..." != "jeg så en flot sol opgang"
- has a high understanding of grammar
- Data = wikipedia
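The language-model idea — predict the next word from what came before — can be illustrated with a tiny bigram counter. The corpus below is made up, reusing the talk's running joke:

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    # Count word -> next-word transitions across all sentences
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for w, nxt in zip(words, words[1:]):
            counts[w][nxt] += 1
    return counts

def predict_next(counts, word):
    # Most frequent continuation of the given word
    return counts[word].most_common(1)[0][0]

lm = train_bigram_lm(["jeg elsker pågen gifler",
                      "jeg elsker kaffe",
                      "jeg elsker pågen gifler"])
predict_next(lm, "pågen")  # 'gifler'
```

A real ULMFiT language model conditions on much longer contexts with a recurrent network, but the training objective is the same.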
```
from IPython.display import IFrame
IFrame("pictures/ulmfit.png", width=600, height=200)
```
## Status on a Danish ulmfit model
- trained for 24 hours on Danish wikipedia
- 20000 + wikipedia articles
- 60000 + tokens
Let's take a look at the code!
# models:
- BOW
- 90% roc auc
- Word2vec
- 94,5% roc auc
- Transferlearning
- 81 % roc auc
# Recap
BoW
- pros
- Robust towards its specific task
- High transparency
- Quick to code
- easy to install
- cons
- "only" counts words
- loses semantics
- slow to train
Word2vec(FastText)
- pros
- take semantics into account
- is fast
- easy to install
- cons
- transparency is a bit of a blur
- needs a good portion of data
Transferlearning (ULMFIT)
- pros
- state of the art performance (they say)
- have the potential to save a lot of time
- cons
- hard to code and install
- initial training takes a long time
- Black Box
# KISS
***Keep It Simple Stupid*** can be quite clever
Questions?
Thank you for your time.
# Example 02: SoSQL Queries
Constructing custom queries to conserve bandwidth and computational resources
## Setup
```
import os
# Note: we don't need Pandas
# Filters allow you to accomplish many basic operations automatically
from sodapy import Socrata
```
## Find Some Data
As in the first example, I'm using the Santa Fe political contribution dataset.
`https://opendata.socrata.com/dataset/Santa-Fe-Contributors/f92i-ik66.json`
```
socrata_domain = 'opendata.socrata.com'
socrata_dataset_identifier = 'f92i-ik66'
# If you choose to use a token, run the following command on the terminal (or add it to your .bashrc)
# $ export SODAPY_APPTOKEN=<token>
socrata_token = os.environ.get("SODAPY_APPTOKEN")
client = Socrata(socrata_domain, socrata_token)
```
## Use Metadata to Plan Your Query
You've probably looked through the column names and descriptions in the web UI,
but it can be nice to have them right in your workspace as well.
```
metadata = client.get_metadata(socrata_dataset_identifier)
[x['name'] for x in metadata['columns']]
meta_amount = [x for x in metadata['columns'] if x['name'] == 'AMOUNT'][0]
meta_amount
```
## Efficiently Query for Data
### Restrict rows to above-average donations
```
# Get the average from the metadata. Note that it's a string by default
meta_amount['cachedContents']['average']
# Use the 'where' argument to filter the data before downloading it
results = client.get(socrata_dataset_identifier, where="amount >= 2433")
print("Total number of non-null results: {}".format(meta_amount['cachedContents']['non_null']))
print("Number of results downloaded: {}".format(len(results)))
results[:3]
```
### Restrict columns and order rows
Often, you know which columns you want, so you can further simplify the download.
It can also be valuable to have results in order, so that you can quickly grab the
largest or smallest.
```
results = client.get(socrata_dataset_identifier,
where="amount < 2433",
select="amount, job",
order="amount ASC")
results[:3]
```
### Perform basic operations
You can even accomplish some basic analytics operations like finding sums.
If you're planning on doing further processing, note that the numeric outputs
are strings by default.
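Since the aggregated values come back as strings, cast them before doing any further arithmetic. A sketch with hypothetical rows shaped like the response above:

```python
# Hypothetical rows mimicking the string-typed Socrata response
rows = [{"sum_amount": "5000.0", "recipient": "A"},
        {"sum_amount": "1250.5", "recipient": "B"}]
totals = {r["recipient"]: float(r["sum_amount"]) for r in rows}
# totals == {"A": 5000.0, "B": 1250.5}
```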
```
results = client.get(socrata_dataset_identifier,
group="recipient",
select="sum(amount), recipient",
order="sum(amount) DESC")
results
```
### Break download into manageable chunks
Sometimes you do want all the data, but it would be too big for one download.
By default, all queries have a limit of 1000 rows, but you can manually set it
higher or lower. If you want to loop through results, just use `offset`
```
results = client.get(socrata_dataset_identifier, limit=6, select="name, amount")
results
loop_size = 3
num_loops = 2
for i in range(num_loops):
results = client.get(socrata_dataset_identifier,
select="name, amount",
limit=loop_size,
offset=loop_size * i)
print("\n> Loop number: {}".format(i))
# This simply formats the output nicely
for result in results:
print(result)
```
### Query strings
All of the queries above were made with method parameters,
but you could also pass all the parameters at once in a
SQL-like format
```
query = """
select
name,
amount
where
amount > 1000
and amount < 2000
limit
5
"""
results = client.get(socrata_dataset_identifier, query=query)
results
```
### Free text search
My brother just got a dog named Slider, so we were curious about how many other New York City dogs had that name.
Searches with `q` match anywhere in the row, which allows you to quickly search through data with several free text columns of interest.
```
nyc_dogs_domain = 'data.cityofnewyork.us'
nyc_dogs_dataset_identifier = 'nu7n-tubp'
nyc_dogs_client = Socrata(nyc_dogs_domain, socrata_token)
results = nyc_dogs_client.get(nyc_dogs_dataset_identifier,
q="Slider",
select="animalname, breedname")
results
```
# Going Further
There's plenty more to do! Check out [Queries using SODA](https://dev.socrata.com/docs/queries/) for additional functionality
# NASA Common Research Model
## Drag Prediction Workshop 5
### Matched Lift Coefficient (0.50) with Drag Components
### References
https://aiaa-dpw.larc.nasa.gov/
https://aiaa-dpw.larc.nasa.gov/Workshop5/presentations/DPW5_Presentation_Files/14_DPW5%20Summary-Draft_V7.pdf
```
CASE='dpw5_L1'
DATA_DIR='.'
REF_DATA_DIR='.'
```
## Define Data Location
For remote data, ssh is used to interact with the data securely<br/>
This uses the reverse connection capability in paraview so that the paraview server can be submitted to a job scheduler<br/>
Note: The default paraview server connection will use port 11111
```
import os
from zutil import analysis
analysis.data_init(default_data_dir=DATA_DIR,
default_ref_data_dir=REF_DATA_DIR)
```
### Initialise Environment
```
from zutil.plot import *
from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
import math
```
### Get status file
```
from zutil.post import get_status_dict
status=get_status_dict(CASE)
num_procs = str(status['num processor'])
```
### Define test conditions
```
alpha = 2.21 # degrees
reference_area = 594720.0 # inches^2
reference_length = 275.8 # inches, mean chord.
reference_span = 1156.75 # inches
from zutil.post import cp_profile_wall_from_file
def plot_cp_profile(ax,file_root,span_loc, label):
force_data = cp_profile_wall_from_file(file_root,
[0.0,1.0,0.0],
[0, span_loc*reference_span, 0],
func=plot_array,
axis=ax,
span_loc=span_loc,
alpha=alpha,
label=label)
def plot_array(data_array,pts_array,**kwargs):
ax = kwargs['axis']
span_loc = kwargs['span_loc']
cp_array = data_array.GetPointData()['cp']
chord_array = data_array.GetPointData()['chord']
ax.plot(chord_array, cp_array , '.',label=kwargs['label'])
```
### Comparison Data
```
# Reproduce plots from DPW5 presentation, page 45
# Pressure data points (reference semi-span: 1156.75)
# Station Type WBL ETA Chord
# 1 CFD Cut Only 121.459 0.1050 466.5
# 2 CFD Cut Only 133.026 0.1150 459.6
# 3 CFD Cut Only 144.594 0.1250 452.7
# 4 Pressure Tap 151.074 0.1306 448.8
# 5 Pressure Tap 232.444 0.2009 400.7
# 6 Pressure Tap 327.074 0.2828 345.0
# 7 CFD Cut Only 396.765 0.3430 304.1
# 8 CFD Cut Only 427.998 0.3700 285.8
# 9 Pressure Tap 459.370 0.3971 278.1
# 10 Pressure Tap 581.148 0.5024 248.3
# 11 Pressure Tap 697.333 0.6028 219.9
# 12 Pressure Tap 840.704 0.7268 184.8
# 13 Pressure Tap 978.148 0.8456 151.2
# 14 Pressure Tap 1098.126 0.9500 121.7
# 15 CFD Cut Only 1122.048 0.9700 116.0
# 16 CFD Cut Only 1145.183 0.9900 110.5
#eta_values = [0.1306, 0.2828, 0.3971, 0.5024, 0.7268, 0.9500] # stations 4, 6, 9, 10, 12, 14
from collections import OrderedDict
station_values = OrderedDict([("S04" , 0.1306), ("S06" , 0.2828), ("S09" , 0.3971),
("S10" , 0.5024), ("S12" , 0.7268), ("S14" , 0.9500)])
sources = [["Edge SST","r"], ["CFD++ SST","g"], ["FUN3D SA", "m"], ["MFlow SA", "y"]]
dpw5_comparative_data = eval(open(os.path.join(analysis.data.ref_data_dir, 'DPW5_Comparative_Data.py'), 'r').read())
```
## Force and Residual Convergence
```
from zutil.post import get_csv_data
def plot_residual(ax, csv_data):
eqns = ['rho', 'rhoV[0]', 'rhoV[1]', 'rhoV[2]', 'rhoE', 'rhok', 'rhoOmega']
cycles = csv_data['Cycle'].tolist()
ax.set_yscale('log')
for eqn in eqns:
ax.plot(cycles, csv_data[eqn], label=eqn)
fig_list = {}
C_L = {}
C_D = {}
C_D_P = {}
C_D_F = {}
plots = ('Drag', 'Lift', 'Residuals DPW L1')
for plot in plots:
fig_list[plot] = get_figure(plt)
fig = fig_list[plot]
ax = fig.gca()
set_title(ax, plot+' vs Cycle\n')
x_label(ax, 'Cycle')
if plot.startswith('Res'):
y_label(ax, 'Residual')
else:
y_label(ax, plot)
csv_file = os.path.join(analysis.data.data_dir, CASE+"_report.csv")
csv_data = get_csv_data(csv_file, delim='\s+', header=True)
drag = np.asarray(csv_data['wall_Ftx'].tolist())
lift = np.asarray(csv_data['wall_Ftz'].tolist())
friction = np.asarray(csv_data['wall_Ftfx'].tolist())
pressure = np.asarray(csv_data['wall_Ftpx'].tolist())
cycles = csv_data['Cycle'].tolist()
C_L[CASE] = np.mean(lift[-100:])
C_D[CASE] = np.mean(drag[-100:])
C_D_P[CASE] = np.mean(pressure[-100:])
C_D_F[CASE] = np.mean(friction[-100:])
for plot in plots:
fig = fig_list[plot]
ax = fig.gca()
if plot == 'Drag':
ax.axis([0, 10000, 0.005, 0.03])
ax.plot(cycles, drag, label='$\mathbf{C_{d}}$ %s' % CASE)
ax.plot(cycles, friction, label='$\mathbf{C_{df}}$ %s' % CASE)
ax.plot(cycles, pressure, label='$\mathbf{C_{dp}}$ %s' % CASE)
elif plot == 'Lift':
ax.axis([0, 10000, 0.49, 0.51])
ax.plot(cycles, lift, label='$\mathbf{C_l}$ %s' % CASE)
ax.axhline(0.5, ls='dashed')
elif plot.endswith('L1') and CASE == 'dpw5_L1':
plot_residual(ax, csv_data)
elif plot.endswith('L2') and CASE == 'dpw5_L2':
plot_residual(ax, csv_data)
for plot in plots:
fig = fig_list[plot]
ax = fig.gca()
set_legend(ax, 'best')
#fig.savefig(os.path.join("output", "DPW5_"+plot+"_convergence.png"))
```
## Cp Profile
```
from zutil.post import get_case_root
from zutil.post import calc_force_wall
from zutil.post import ProgressBar
import os
pbar = ProgressBar()
fig_list = {}
for station in station_values:
fig_list[station] = get_figure(plt)
fig = fig_list[station]
span_loc = station_values[station]
ax = fig.gca()
set_title(ax,'$\mathbf{C_p}$ span='+str(span_loc*100)+'% \n')
ax.grid(True)
x_label(ax,'$\mathbf{x/c}$')
y_label(ax,'$\mathbf{C_p}$')
ax.axis([0.0,1.0,1.0,-1.2])
set_ticks(ax)
for source, colour in sources:
plotlist_x = []
plotlist_y = []
for key, value in dpw5_comparative_data["L3"][source][station]['X/C'].iteritems():
plotlist_x.append(value)
for key, value in dpw5_comparative_data["L3"][source][station]['CP'].iteritems():
plotlist_y.append(value)
ax.plot(plotlist_x, plotlist_y, colour+'.', label=source)
case_name = CASE
label = 'zCFD SST L1'
status=get_status_dict(case_name)
num_procs = str(status['num processor'])
plot = 1
pbar+=5
for station in station_values:
span_loc = station_values[station]
fig = fig_list[station]
ax = fig.gca()
plot_cp_profile(ax,os.path.join(analysis.data.data_dir,get_case_root(case_name,num_procs)),
span_loc, label=label)
plot += 1
pbar += 5
for station in station_values:
fig = fig_list[station]
ax = fig.gca()
set_legend(ax,'best')
#fig.subplots_adjust(hspace=0.3)
fig.savefig(os.path.join("output", "DPW5_cp_profile_"+str(station)+".png"))
pbar.complete()
plt.show()
from zutil.post import get_num_procs
def save_image(file_root, label):
renderView1 = CreateView('RenderView')
renderView1.ViewSize = [1080, 634]
renderView1.InteractionMode = '2D'
renderView1.AxesGrid = 'GridAxes3DActor'
renderView1.OrientationAxesVisibility = 0
renderView1.CenterOfRotation = [1327.6915283203125, 0.0, 217.05277633666992]
renderView1.StereoType = 0
renderView1.CameraPosition = [1327.6915283203125, 0.0, 6781.762593092753]
renderView1.CameraFocalPoint = [1327.6915283203125, 0.0, 217.05277633666992]
renderView1.CameraParallelScale = 1404.19167450244
renderView1.Background = [0.0, 0.0, 0.0]
dpw5_L1_wallpvd = PVDReader(FileName=file_root+'_wall.pvd')
cleantoGrid1 = CleantoGrid(Input=dpw5_L1_wallpvd)
cellDatatoPointData1 = CellDatatoPointData(Input=cleantoGrid1)
reflect1 = Reflect(Input=cellDatatoPointData1)
reflect1.Plane = 'Y Min'
cpLUT = GetColorTransferFunction('cp')
cpLUT.RGBPoints = [-1.0959302186965942, 0.231373, 0.298039, 0.752941, 0.07523328065872192, 0.865003, 0.865003, 0.865003, 1.246396780014038, 0.705882, 0.0156863, 0.14902]
cpLUT.ScalarRangeInitialized = 1.0
cpPWF = GetOpacityTransferFunction('cp')
cpPWF.Points = [-1.0959302186965942, 0.0, 0.5, 0.0, 1.246396780014038, 1.0, 0.5, 0.0]
cpPWF.ScalarRangeInitialized = 1
reflect1Display = Show(reflect1, renderView1)
reflect1Display.ColorArrayName = ['POINTS', 'cp']
reflect1Display.LookupTable = cpLUT
reflect1Display.ScalarOpacityUnitDistance = 113.80220009822894
Render()
WriteImage(os.path.join("output", label+'.png'))
from IPython.display import Image, display
case_name = CASE
label_ = 'NASA CRM zCFD L1'
num_procs = get_num_procs(case_name)
save_image(os.path.join(analysis.data.data_dir,get_case_root(case_name,str(num_procs))),label_)
display(Image(filename=os.path.join("output", '%s.png' % label_), width=800, height=500, unconfined=True))
```
# Datacamp: Data exploration using Pandas
```
%matplotlib inline
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
In short, we want to make a beautiful map to report the results of a referendum, depicting the results with something similar to the following example:
<img align="left" width=50% src="./image/example_map.png">
#### For a couple of hints (if you wish), see below:
* Open the `data/referendum.csv` file and you can scan the 5 first rows to understand the structure of the data.
```
# %load solutions/24_solutions.py
df_referendum = pd.read_csv("data/referendum.csv", sep=';')
# %load solutions/25_solutions.py
df_referendum.head()
```
* There is no information about regions in this file. Then, load the information related to the regions from the file `data/regions.csv`. Show the 5 first rows.
```
# %load solutions/27_solutions.py
df_regions = pd.read_csv(os.path.join('data', 'regions.csv'))
df_regions.head()
```
* Load the information related to the departments from the file `data/departments.csv`. Show the 5 first rows.
```
# %load solutions/28_solutions.py
df_departments = pd.read_csv(os.path.join('data', 'departments.csv'))
df_departments.head()
df_referendum["Department code"] = df_referendum["Department code"].apply(
lambda x: "0" + x if len(x) == 1 else x
)
```
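The `apply` above left-pads single-character department codes with a zero so they match the departments table; `str.zfill` is an equivalent built-in that only pads codes shorter than the target width:

```python
codes = ["1", "2A", "75"]
padded = [c.zfill(2) for c in codes]  # zfill pads on the left with '0'
# padded == ["01", "2A", "75"]
```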
* Find the column in the departments dataframe which corresponds to the `code` column of the regions dataframe. Merge both dataframes on this information. Show the 5 first rows of the resulting dataframe.
```
# %load solutions/29_solutions.py
df_reg_dep = df_departments.merge(df_regions, how='inner', left_on='region_code', right_on='code')
df_reg_dep.head()
```
* The previous dataframe has a column linked to the department code, which can be merged with our referendum data as we did previously. Since it already carries the region information, a new merge gives us the referendum data with regions attached. Show the 5 first rows of the merged dataframe.
```
# %load solutions/30_solutions.py
df = df_referendum.merge(df_reg_dep, how='inner', left_on='Department code', right_on='code_x')
df.head()
```
* Group the votes by region, sum them, and show the resulting dataframe.
```
# %load solutions/31_solutions.py
regions_vote = df.groupby(['code_y', 'name_y']).sum()
regions_vote
```
* Following the previous example, plot the vote for "choice A" and "choice B". Use the file `regions.geojson`, which contains geographical information about the regions. You can use `geopandas`, which will load this information into a dataframe that you can merge with any other dataframe.
* Once the data are merged, you can plot the absolute count and ratio of votes per region.
```
import geopandas as gpd
# %load solutions/32_solutions.py
gdf_regions = gpd.read_file(os.path.join('data', 'regions.geojson'))
gdf_regions = gdf_regions.merge(regions_vote, how='inner', left_on='code', right_on='code_y')
gdf_normalized = gdf_regions.copy()
gdf_normalized['Choice A'] /= gdf_regions[['Choice A', 'Choice B']].sum(axis=1)
gdf_normalized['Choice B'] /= gdf_regions[['Choice A', 'Choice B']].sum(axis=1)
# %load solutions/33_solutions.py
gdf_normalized.plot(column='Choice B')
# %load solutions/34_solutions.py
gdf_normalized.plot(column='Choice A')
```
# Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
# Fire up graphlab create
```
import graphlab
```
# Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function `polynomial_sframe` from Week 3:
```
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
tmp = feature.apply(lambda x: x**power)
poly_sframe[name] = tmp
return poly_sframe
```
Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
```
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
```
As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
```
sales = sales.sort(['sqft_living','price'])
```
Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using `polynomial_sframe()` and fit a model with these features. When fitting the model, use an L2 penalty of `1e-5`:
```
l2_small_penalty = 1e-5
l2_penalty=1e5
```
Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (`l2_penalty=1e-5`) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling `graphlab.linear_regression.create()`. Also, make sure GraphLab Create doesn't create its own validation set by using the option `validation_set=None` in this call.
```
poly15_data = polynomial_sframe(sales['sqft_living'],15)
my_features = poly15_data.column_names()
poly15_data['price'] = sales['price']
model15 = graphlab.linear_regression.create(poly15_data,target='price',features=my_features,l2_penalty=l2_small_penalty,validation_set=None)
model15.get('coefficients')
```
***QUIZ QUESTION: What's the learned value for the coefficient of feature `power_1`?*** 103.090951289
# Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a *high variance*. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the sales data into four subsets of roughly equal size and call them `set_1`, `set_2`, `set_3`, and `set_4`. Use the `.random_split` function and make sure you set `seed=0`.
```
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
```
Next, fit a 15th degree polynomial on `set_1`, `set_2`, `set_3`, and `set_4`, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling `graphlab.linear_regression.create()`, use the same L2 penalty as before (i.e. `l2_small_penalty`). Also, make sure GraphLab Create doesn't create its own validation set by using the option `validation_set = None` in this call.
```
poly01_data = polynomial_sframe(set_1['sqft_living'], 15)
my_features = poly01_data.column_names() # get the name of the features
poly01_data['price'] = set_1['price'] # add price to the data since it's the target
model01 = graphlab.linear_regression.create(poly01_data, target = 'price', features = my_features, l2_penalty=l2_small_penalty,validation_set = None)
model01.get("coefficients").print_rows(num_rows = 16)
plt.plot(poly01_data['power_1'],poly01_data['price'],'.',
poly01_data['power_1'], model01.predict(poly01_data),'-')
poly02_data = polynomial_sframe(set_2['sqft_living'], 15)
my_features = poly02_data.column_names() # get the name of the features
poly02_data['price'] = set_2['price'] # add price to the data since it's the target
model02 = graphlab.linear_regression.create(poly02_data, target = 'price', features = my_features,l2_penalty=l2_small_penalty, validation_set = None)
model02.get("coefficients").print_rows(num_rows = 16)
plt.plot(poly02_data['power_1'],poly02_data['price'],'.',
poly02_data['power_1'], model02.predict(poly02_data),'-')
poly03_data = polynomial_sframe(set_3['sqft_living'], 15)
my_features = poly03_data.column_names() # get the name of the features
poly03_data['price'] = set_3['price'] # add price to the data since it's the target
model03 = graphlab.linear_regression.create(poly03_data, target = 'price', features = my_features, l2_penalty=l2_small_penalty,validation_set = None)
model03.get("coefficients").print_rows(num_rows = 16)
plt.plot(poly03_data['power_1'],poly03_data['price'],'.',
poly03_data['power_1'], model03.predict(poly03_data),'-')
poly04_data = polynomial_sframe(set_4['sqft_living'], 15)
my_features = poly04_data.column_names() # get the name of the features
poly04_data['price'] = set_4['price'] # add price to the data since it's the target
model04 = graphlab.linear_regression.create(poly04_data, target = 'price', features = my_features, l2_penalty=l2_small_penalty,validation_set = None)
model04.get("coefficients").print_rows(num_rows = 16)
plt.plot(poly04_data['power_1'],poly04_data['price'],'.',
poly04_data['power_1'], model04.predict(poly04_data),'-')
```
The four curves should differ from one another a lot, as should the coefficients you learned.
***QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature `power_1`?*** (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.) 585.86581347 783.493802508 -759.251842854 1247.59035088
# Ridge regression comes to the rescue
Generally, whenever we see weights change so much in response to changes in the data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (The weights of `model15` looked quite small, but they are not that small because the 'sqft_living' input is on the order of thousands.)
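Concretely, ridge regression modifies the least-squares objective by adding an L2 penalty on the coefficients. As a sketch (exact scaling conventions, and whether the intercept is penalized, vary by implementation):

```latex
\mathrm{cost}(w) \;=\; \sum_{i=1}^{n} \bigl(y_i - h(x_i)^\top w\bigr)^2 \;+\; \lambda \sum_{j=1}^{d} w_j^2
```

A larger `l2_penalty` (the $\lambda$ above) shrinks the weights toward zero, trading a little bias for a potentially large reduction in variance.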
With the argument `l2_penalty=1e5`, fit a 15th-order polynomial model on `set_1`, `set_2`, `set_3`, and `set_4`. Other than the change in the `l2_penalty` parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option `validation_set = None` in this call.
```
poly01_data = polynomial_sframe(set_1['sqft_living'], 15)
my_features = poly01_data.column_names() # get the name of the features
poly01_data['price'] = set_1['price'] # add price to the data since it's the target
model01 = graphlab.linear_regression.create(poly01_data, target = 'price', features = my_features, l2_penalty=l2_penalty,validation_set = None)
model01.get("coefficients").print_rows(num_rows = 16)
plt.plot(poly01_data['power_1'],poly01_data['price'],'.',
poly01_data['power_1'], model01.predict(poly01_data),'-')
poly02_data = polynomial_sframe(set_2['sqft_living'], 15)
my_features = poly02_data.column_names() # get the name of the features
poly02_data['price'] = set_2['price'] # add price to the data since it's the target
model02 = graphlab.linear_regression.create(poly02_data, target = 'price', features = my_features,l2_penalty=l2_penalty, validation_set = None)
model02.get("coefficients").print_rows(num_rows = 16)
plt.plot(poly02_data['power_1'],poly02_data['price'],'.',
poly02_data['power_1'], model02.predict(poly02_data),'-')
poly03_data = polynomial_sframe(set_3['sqft_living'], 15)
my_features = poly03_data.column_names() # get the name of the features
poly03_data['price'] = set_3['price'] # add price to the data since it's the target
model03 = graphlab.linear_regression.create(poly03_data, target = 'price', features = my_features, l2_penalty=l2_penalty,validation_set = None)
model03.get("coefficients").print_rows(num_rows = 16)
plt.plot(poly03_data['power_1'],poly03_data['price'],'.',
poly03_data['power_1'], model03.predict(poly03_data),'-')
poly04_data = polynomial_sframe(set_4['sqft_living'], 15)
my_features = poly04_data.column_names() # get the name of the features
poly04_data['price'] = set_4['price'] # add price to the data since it's the target
model04 = graphlab.linear_regression.create(poly04_data, target = 'price', features = my_features,l2_penalty=l2_penalty, validation_set = None)
model04.get("coefficients").print_rows(num_rows = 16)
plt.plot(poly04_data['power_1'],poly04_data['price'],'.',
poly04_data['power_1'], model04.predict(poly04_data),'-')
```
These curves should vary a lot less, now that you applied a high degree of regularization.
***QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature `power_1`?*** (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.) 2.58738875673 2.04470474182 2.26890421877 1.91040938244
# Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. **Cross-validation** seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called **k-fold cross-validation**. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
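Written out, the cross-validation estimate is simply the mean of the per-fold validation errors:

```latex
\mathrm{CV\ error} \;=\; \frac{1}{k} \sum_{i=0}^{k-1} \mathrm{error}^{(i)}_{\mathrm{validation}}
```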
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use `seed=1` to get a consistent answer.)
```
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
```
Once the data is shuffled, we divide it into equal segments. Each segment should receive `n/k` elements, where `n` is the number of observations in the training set and `k` is the number of segments. Since segment 0 starts at index 0 and contains `n/k` elements, it ends at index `(n/k)-1`. Segment 1 starts where segment 0 left off, at index `(n/k)`. With `n/k` elements, segment 1 ends at index `(n*2/k)-1`. Continuing in this fashion, we deduce that segment `i` starts at index `(n*i/k)` and ends at `(n*(i+1)/k)-1`.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
```
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
```
Let us familiarize ourselves with array slicing in SFrame. To extract a contiguous slice from an SFrame, use a colon in square brackets. For instance, the following cell extracts rows 0 to 9 of `train_valid_shuffled`. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
```
train_valid_shuffled[0:10] # rows 0 to 9
```
Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the `train_valid_shuffled` dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called `validation4`.
```
validation4 = train_valid_shuffled[5818:7758]
```
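The hardcoded slice bounds above follow from the start/end formula introduced earlier. As a quick sanity check, the helper below reproduces them under a hypothetical row count (`n = 19396` is an assumption for illustration; the actual length of `train_valid_shuffled` depends on the random splits):

```python
def segment_bounds(n, k, i):
    # Start index and inclusive end index of validation segment i out of k
    start = (n * i) // k
    end = (n * (i + 1)) // k - 1
    return start, end

n = 19396  # hypothetical length of train_valid_shuffled
start, end = segment_bounds(n, k=10, i=3)
print(start, end + 1)  # slice bounds for segment 3
```

With this `n`, the printed bounds are `5818 7758`, matching the `[5818:7758]` slice used above.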
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to the nearest whole number, the average should be $536,234.
```
print int(round(validation4['price'].mean(), 0))
```
After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has an `append()` method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the `train_valid_shuffled` dataframe.
```
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
```
Extract the remainder of the data after *excluding* fourth segment (segment 3) and assign the subset to `train4`.
```
train4 = train_valid_shuffled[0:5818].append(train_valid_shuffled[7758:n])
```
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with the fourth segment excluded. When rounded to the nearest whole number, the average should be $539,450.
```
print int(round(train4['price'].mean(), 0))
```
Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) `k`, (ii) `l2_penalty`, (iii) dataframe, (iv) name of output column (e.g. `price`) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
* For each i in [0, 1, ..., k-1]:
* Compute starting and ending indices of segment i and call 'start' and 'end'
* Form validation set by taking a slice (start:end+1) from the data.
* Form training set by appending slice (end+1:n) to the end of slice (0:start).
* Train a linear model using training set just formed, with a given l2_penalty
* Compute validation error using validation set just formed
```
import numpy as np
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
empty_vector = np.empty(k)
n = len(data)
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
#print i, (start, end)
validation_set = data[start:end+1]
train_set = data[0:start].append(data[end+1:n])
model = graphlab.linear_regression.create(train_set,target=output_name, features=features_list,l2_penalty=l2_penalty,validation_set=None)
predict = model.predict(validation_set)
errors = validation_set[output_name] - predict
square_errors = errors ** 2
RSS = square_errors.sum()
empty_vector[i] = RSS
    return empty_vector.mean()
```
Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the `sqft_living` input
* For `l2_penalty` in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: `np.logspace(1, 7, num=13)`.)
* Run 10-fold cross-validation with `l2_penalty`
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use `train_valid_shuffled` when generating polynomial features!
```
poly_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features = poly_data.column_names() # get the name of the features
poly_data['price'] = train_valid_shuffled['price'] # add price to the data since it's the target
a = np.logspace(1, 7, num=13)
nn = len(a)
error_vector = np.empty(13)
for i in xrange(nn):
#print 'l2_penalty: ' + str(l2_penalty)
error_vector[i] = k_fold_cross_validation(10, a[i], poly_data, 'price', my_features)
#print 'error_vector: ' + str(error_vector)
print error_vector
```
***QUIZ QUESTION: What is the best value for the L2 penalty according to 10-fold cross-validation?*** 10^3
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
```
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.plot(a,error_vector,'k-')
plt.xlabel(r'$\ell_2$ penalty')
plt.ylabel('cross validation error')
plt.xscale('log')
plt.yscale('log')
print a
print error_vector
```
Once you have found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of `l2_penalty`. This way, your final model will be trained on the entire dataset.
```
l2_penalty = 1e3
data = polynomial_sframe(train_valid['sqft_living'], 15)
my_features = data.column_names() # get the name of the features
data['price'] = train_valid['price'] # add price to the data since it's the target
final_model = graphlab.linear_regression.create(data, target = 'price', features = my_features,l2_penalty=l2_penalty, validation_set = None)
predict = final_model.predict(test)
errors = test['price'] - predict
square_errors = errors ** 2
RSS = square_errors.sum()
print RSS
```
***QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?***
# Exercise 5 ECN222 - Estimation of a Keynesian consumption equation
* Joakim Blix Prestmo
* 08.11.2018
```
import pandas as pd
#import matplotlib as plt
import matplotlib.pyplot as plt
import datetime
import numpy as np
df = pd.read_excel("../MakroData.xlsx", sheet_name="Ark5")
# Source: Statistics Norway and National Accounts
df_macro = pd.read_excel("http://www.ssb.no/statbank/sq/10010628/", skiprows=3, nrows=162)
# Source: KVARTS database, Statistics Norways macroeconomic model
df.head()
# Fix indexing
df_macro['Unnamed: 0'] = df_macro['Unnamed: 0'].str.replace('K','Q')
df_macro = df_macro.drop(['Unnamed: 0'], axis=1)
df_macro.index = pd.Index(pd.period_range('1978-01', periods=210, freq='Q'))
df.index = pd.Index(pd.period_range('1978-01', periods=156, freq='Q'))
```
# Figures
```
# Figure for household consumption
df_macro['Konsum i husholdninger og ideelle organisasjoner'].plot()
plt.title("Figure 1.1: Consumption")
plt.legend()
plt.xlabel('Date', fontdict=None, labelpad=None)
plt.ylabel('MNOK')
plt.show()
# Figure for mortgage loan rates
df['RENPF300BO'].plot()
plt.title("Figure 2.1: Mortgage loan rate")
plt.legend()
plt.xlabel('Date', fontdict=None, labelpad=None)
plt.ylabel('')
plt.show()
# Merge datasets to get a complete set of data (note: not the same price set, but for our use case this is not a big issue)
df_merge=df.merge(df_macro, how='left', on=None, left_on=None, left_index=True, right_index=True)
```
## Features engineering
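A brief aside on the transformations below: taking the four-quarter difference of a log-transformed quarterly series gives, to a good approximation, the year-over-year growth rate, since log(x_t) - log(x_{t-4}) = log(x_t / x_{t-4}), which is approximately the percentage change. A toy illustration (synthetic series, not the Norwegian data):

```python
import numpy as np
import pandas as pd

# Hypothetical quarterly series growing 1% per quarter (for illustration only)
s = pd.Series([100 * 1.01 ** t for t in range(12)])

yoy = np.log(s).diff(4)        # four-quarter log difference
print(round(yoy.iloc[-1], 4))  # equals 4 * log(1.01), i.e. about 4% year-over-year
```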
```
# Create growth rates:
# df[log_C] = ...
df_merge['C'] = (df_merge['Konsum i husholdninger og ideelle organisasjoner'])
df_merge['c'] = np.log(df_merge['Konsum i husholdninger og ideelle organisasjoner'])
df_merge['Dc'] = np.log(df_merge['Konsum i husholdninger og ideelle organisasjoner']).diff(4)
df_merge['YD'] = (df_merge['RD300']-df_merge['RAM300'])/df_merge['KPI']
df_merge['yd'] = np.log(df_merge['YD'])
df_merge['Dyd'] = np.log(df_merge['YD']).diff(4)
df_merge['yd_1'] = np.log(df_merge['YD']).shift(1)
df_merge['c_1'] = np.log(df_merge['C']).shift(1)
df_merge['Dkpi'] = np.log(df_merge['KPI']).diff(4)
df_merge['i'] = df_merge['RENPF300BO']
df_merge['r'] = df_merge['RENPF300BO']-df_merge['Dkpi']
df_merge['Dc_1']=(df_merge['Dc']).shift(1)
df_merge['Dyd_1']=(df_merge['Dyd']).shift(1)
df_merge['Dc_2']=(df_merge['Dc_1']).shift(1)
df_merge['Dyd_2']=(df_merge['Dyd_1']).shift(1)
df_merge['Dc'].plot()
df_merge['Dyd'].plot()
df_merge['r'].plot()
plt.title("Figure 1.1: Growth in consumption, interest rate and growth in real disposable income")
plt.legend()
plt.xlabel('Date', fontdict=None, labelpad=None)
plt.ylabel('MNOK')
plt.show()
df_merge['C'].plot()
df_merge['YD'].plot()
plt.title("Figure 1.1: Consumption and real disposable income")
plt.legend()
plt.xlabel('Date', fontdict=None, labelpad=None)
plt.ylabel('MNOK')
plt.show()
```
## Build regression model
```
import statsmodels.api as sm # import statsmodels
df_merge.head()
df_merge['Dc'].shape, df_merge['Dyd'].shape
# Because of the lags we want to delete the first rows
df_model = df_merge.drop(df_merge.index[[0,1,2,3,4,5]])
df_model['Dc'].shape, df_model['Dyd'].shape
#df_model[['Dyd_1', 'i']]
# Set up 4 models with an increasing number of explanatory variables
modell1 = ["Dyd"]
modell2 = ["r","Dyd"]
modell3 = ["r","Dyd","yd_1", 'c_1']
modell4 = ["r","Dc_1","Dyd","Dyd_1","yd_1", 'c_1']
y = df_model[["Dc"]]  ## y is the dependent variable
X = df_model[modell4] ## X is the explanatory variables
X = sm.add_constant(X) ## a constant (or intercept)
y.shape, X.shape
#Regression model: fit the regression line
model = sm.OLS(y,X).fit()
# Print the estimation result / model summary
model.summary()
# %pylab inline
# # We pick 100 hundred points equally spaced from the min to the max //create a test-set within the range of our min-max values
# X_prime = np.linspace(y.y.min(), y.y.max(), 100)[:, np.newaxis]
# X_prime = sm.add_constant(X_prime) # add constant as we did before
# # Now we calculate the predicted values
# y_hat = model.predict(X_prime)
# plt.scatter(df_merge['DC'], df_merge['DRD300'] , alpha=0.3) # Plot the raw data
# plt.xlabel("DC")
# plt.ylabel("DRD300")
# plt.plot(X_prime[:, 1], y_hat, 'r', alpha=0.9) # Add the regression line, colored in red
```
# Export data
```
datalist = ['C', 'c', 'c_1', 'Dc', 'Dc_1', 'r', 'i', 'YD','yd', 'yd_1', 'Dyd_1']
df_short = df_model[datalist]
df_short.to_excel("../MakroDataKort.xlsx")
df_model.to_excel("../MakroDataModell.xlsx")
df_short.head()
```
```
import sys
from datetime import date
from io import BytesIO
from IPython import display
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import plot_roc_curve, plot_confusion_matrix
import base64
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import uuid
import random
# parent directory to work with dev
sys.path.append("..")
import verifyml.model_card_toolkit as mctlib
cancer = load_breast_cancer()
X = pd.DataFrame(cancer.data, columns=cancer.feature_names)
X['age'] = X['mean radius'].apply(lambda x: '>46' if random.random() < 1/2 else ('25-45' if random.random() > 1/2 else '<25'))
X = pd.get_dummies(X, columns=['age'])
y = pd.Series(cancer.target)
X_train, X_test, y_train, y_test = train_test_split(X, y)
# Utility function that will export a plot to a base-64 encoded string that the model card will accept.
def plot_to_str():
img = BytesIO()
plt.savefig(img, format='png', bbox_inches="tight")
return base64.encodebytes(img.getvalue()).decode('utf-8')
# Plot the mean radius feature for both the train and test sets
sns.displot(x=X_train['mean radius'], hue=y_train)
mean_radius_train = plot_to_str()
sns.displot(x=X_test['mean radius'], hue=y_test)
mean_radius_test = plot_to_str()
# Plot the mean texture feature for both the train and test sets
sns.displot(x=X_train['mean texture'], hue=y_train)
mean_texture_train = plot_to_str()
sns.displot(x=X_test['mean texture'], hue=y_test)
mean_texture_test = plot_to_str()
# Create a classifier and fit the training data
clf = GradientBoostingClassifier().fit(X_train, y_train)
# Plot a ROC curve
plot_roc_curve(clf, X_test, y_test)
roc_curve = plot_to_str()
# Plot a confusion matrix
plot_confusion_matrix(clf, X_test, y_test)
confusion_matrix = plot_to_str()
feature_importances = pd.Series(clf.feature_importances_, index=X.columns)
feature_importances.sort_values(ascending=False).head().plot.barh()
top5_features = plot_to_str()
mc = mctlib.ModelCard()
mct = mctlib.ModelCardToolkit(output_dir="model_card_output", file_name="breast_cancer_diagnostic_model_card")
model_card = mct.scaffold_assets()
model_card.model_details.name = 'Breast Cancer Wisconsin (Diagnostic) Dataset'
model_card.model_details.overview = (
'This model predicts whether breast cancer is benign or malignant based on '
'image measurements.')
model_card.model_details.owners = [
mctlib.Owner(name= 'Model Cards Team', contact='model-cards@google.com', role="auditor")
]
model_card.model_details.references = [
mctlib.Reference(reference='https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)'),
mctlib.Reference(reference='https://minds.wisconsin.edu/bitstream/handle/1793/59692/TR1131.pdf')
]
model_card.model_details.version.name = str(uuid.uuid4())
model_card.model_details.version.date = str(date.today())
model_card.considerations.ethical_considerations = [mctlib.Risk(
name=('Manual selection of image sections to digitize could create '
'selection bias'),
mitigation_strategy='Automate the selection process'
)]
model_card.considerations.limitations = [mctlib.Limitation(description='Breast cancer diagnosis')]
model_card.considerations.use_cases = [mctlib.UseCase(description='Breast cancer diagnosis')]
model_card.considerations.users = [mctlib.User(description='Medical professionals'), mctlib.User(description='ML researchers')]
model_card.model_parameters.data.append(mctlib.Dataset())
model_card.model_parameters.data[0].description = (
'Training data distribution of the selected attributes of cell nuclei, with respect to diagnosis outcome')
model_card.model_parameters.data[0].graphics.collection = [
mctlib.Graphic(image=mean_radius_train),
mctlib.Graphic(image=mean_texture_train)
]
model_card.model_parameters.data.append(mctlib.Dataset())
model_card.model_parameters.data[1].description = (
'Test data distribution of the selected attributes of cell nuclei, with respect to diagnosis outcome')
model_card.model_parameters.data[1].graphics.collection = [
mctlib.Graphic(image=mean_radius_test),
mctlib.Graphic(image=mean_texture_test)
]
model_card.quantitative_analysis.performance_metrics.append(mctlib.PerformanceMetric())
model_card.quantitative_analysis.performance_metrics[0].type = "accuracy"
model_card.quantitative_analysis.performance_metrics[0].value = str((49 + 89) / (49 + 89 + 2 + 3))
model_card.quantitative_analysis.performance_metrics[0].slice = "training"
model_card.quantitative_analysis.performance_metrics[0].graphics.description = (
'ROC curve and confusion matrix')
model_card.quantitative_analysis.performance_metrics[0].graphics.collection = [
mctlib.Graphic(image=roc_curve), mctlib.Graphic(image=confusion_matrix)
]
model_card.explainability_analysis.explainability_reports = [
mctlib.ExplainabilityReport(
type="Feature Importance",
slice="training",
description="top 5 features",
graphics = mctlib.GraphicsCollection(collection = [
mctlib.Graphic(image=top5_features)
]),
)
]
# Mock fairness test results
model_card.fairness_analysis.fairness_reports = [
mctlib.FairnessReport(
type="fairness parity test",
slice="test data",
segment="Age",
description="TPR should be above the specified test threshold",
tests= [
mctlib.Test(name="Age group: <25", threshold=str(0.6), result=str(0.7), passed=True),
mctlib.Test(name="Age group: 25-45", threshold=str(0.6), result=str(0.65), passed=True),
mctlib.Test(name="Age group: >46", threshold=str(0.6), result=str(0.9), passed=True),
]
)
]
mct.update_model_card(model_card)
md = mct.export_format(output_file="breast_cancer_diagnostic_model_card.md", template_path="model_card_output/template/md/default_template.md.jinja")
html = mct.export_format(output_file="breast_cancer_diagnostic_model_card.html")
display.display(display.HTML(html))
```
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os
import copy
import numpy as np
from astropy.io import fits
from astropy.table import Table, Column
import matplotlib.pyplot as plt
from kungpao.display import display_single
```
## Learn about the CCD "raw" data of the DECaLS survey
```
data_dir = '/Users/song/Downloads/merian/'
decals_dir = os.path.join(data_dir, 'decals')
brick_test = '1933m005'
ccd_data = os.path.join(decals_dir, brick_test, 'ccd')
band = 'r'
ccd_test = os.path.join(ccd_data, band, 'decam-425341-S29-r_img.fits')
_ = display_single(fits.open(ccd_test)[1].data)
fits.open(ccd_test)[0].header
import pandas as pd
import pingouin as pg
import scipy
import scipy.stats
import math, numpy as np
test_sample = 1000
Data_A = np.random.randint(100,600, size=test_sample) + 20 * np.random.random(size=test_sample)
Data_B = Data_A + 100 * np.random.random(size=test_sample)
Data_C = np.random.randint(100,200, size=test_sample) + 20 * np.random.random(size=test_sample)
print('A and B should be strongly correlated, C should be independent of A or B')
print('pearsonr between A and B: r=%.2f, p=%.2f' %scipy.stats.pearsonr(Data_A, Data_B))
print('pearsonr between A and C: r=%.2f, p=%.2f' %scipy.stats.pearsonr(Data_A, Data_C))
print('pearsonr between C and B: r=%.2f, p=%.2f' %scipy.stats.pearsonr(Data_C, Data_B))
r12, _ = scipy.stats.pearsonr(Data_A, Data_B)
r13, _ = scipy.stats.pearsonr(Data_A, Data_C)
r23, _ = scipy.stats.pearsonr(Data_B, Data_C)
# partial corr
r12_3 = (r12-r13*r23)/math.sqrt((1.0-r13**2)*(1.0-r23**2))
print('controlling data_c, partial corr coeff of A, B=%.2f' %r12_3)
# partial corr
r23_1 = (r23-r12*r13)/math.sqrt((1.0-r13**2)*(1.0-r12**2))
print('controlling data_A, partial corr coeff of B, C=%.2f' %r23_1)
# partial corr
r13_2 = (r13-r23*r12)/math.sqrt((1.0-r12**2)*(1.0-r23**2))
print('controlling data_B, partial corr coeff of A, C=%.2f' %r13_2)
df = pd.DataFrame(np.vstack((Data_A, Data_B, Data_C)).T,
columns=['A', 'B', 'C'])
#partial_corr(df)
pg.partial_corr(data=df, x='A', y='B', covar='C')
pg.partial_corr(data=df, x='A', y='C', covar='B')
pg.partial_corr(data=df, x='B', y='C', covar='A')
# Generating some datasets where:
# 1) A and D are independent random variables
# 2) B is strongly (negatively) correlated with A
# 3) C is correlated with B, and mildly correlated with A
# 4) E is strongly correlated with D
test_sample = 1000
Data_A = np.random.randint(100,600, size=test_sample) + 20 * np.random.random(size=test_sample)
Data_B = np.random.randint(100,600, size=test_sample) - 10 * Data_A + 20 * np.random.random(size=test_sample)
Data_C = Data_A * 5 - Data_B + np.random.randint(100,600, size=test_sample) + 20 * np.random.random(size=test_sample)
Data_D = np.random.randint(100,200, size=test_sample) + 20 * np.random.random(size=test_sample)
Data_E = Data_D + 100 * np.random.random(size=test_sample)
## This doesn't need to be a pandas DF, it can just be an array.
# You just need to comment out the line inside the p_corr module to use the array-only version
df = pd.DataFrame(np.vstack((Data_A, Data_B, Data_C, Data_D, Data_E)).T,
columns=['A', 'B', 'C', 'D', 'E'])
#partial_corr(df)
## Reading off each column and row, we can see that by design the correlation matrix reflects the input.
pg.partial_corr(data=df, x='A', y='B', covar=['C', 'D', 'E'])
```
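The pairwise formulas above generalize: the full matrix of partial correlations, where each pair controls for all remaining variables, can be read off the inverse covariance (precision) matrix. A minimal sketch assuming only NumPy (the function `partial_corr_matrix` is our own illustrative name, not part of pingouin):

```python
import numpy as np

def partial_corr_matrix(data):
    """data: (n_samples, n_vars) array; returns the matrix of partial
    correlations, each entry controlling for all remaining variables."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))   # precision matrix
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)                     # r_ij.rest = -P_ij / sqrt(P_ii * P_jj)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = a + 0.1 * rng.normal(size=1000)   # strongly correlated with a
c = rng.normal(size=1000)             # independent of both
pc = partial_corr_matrix(np.column_stack([a, b, c]))
```

For three variables this reproduces the pairwise formula used above; pingouin's `partial_corr` additionally reports p-values and confidence intervals.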
<table align="left" width="100%"> <tr>
<td style="background-color:#ffffff;"><a href="https://qsoftware.lu.lv/index.php/qworld/" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"></a></td>
<td align="right" style="background-color:#ffffff;vertical-align:bottom;horizontal-align:right">
prepared by Özlem Salehi (<a href="http://qworld.lu.lv/index.php/qturkey/" target="_blank">QTurkey</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$\newcommand{\Mod}[1]{\ (\mathrm{mod}\ #1)}$
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
<h1> Order Finding Algorithm</h1>
For positive integers $ x $ and $ N $ with $x<N$ and no common factors, the order of $x$ modulo $N$ is the least positive integer $ r $ such that $x^r = 1\Mod{N}$. In the order-finding algorithm, given $ x $ and $ N $, our goal is to find $ r $.
<h3>Task 1</h3>
Let $x=5$ and $N=21$. Plot $x^ i \Mod{N}$ for $i$ values between $1$ and $50$ and find the order of $x$.
```
```
<a href="D04_Order_Finding_Algorithm_Solutions.ipynb#task1">click for our solution</a>
No classical algorithm is known which solves this problem using resources polynomial in $ L $, where
$ L= \big \lceil \log N \big \rceil $ is the number of bits needed to specify $ N $. We will solve this problem efficiently using the phase estimation algorithm.
<h2>Idea</h2>
Let $ x<N $ be given. The idea is to apply phase estimation to the operator $U_x$ which maps $ \ket{y} \rightarrow \ket{xy {\Mod{N}}}$ where $y \in \{ 0, 1\}^L$ and $ 0 \leq y\leq N-1 $. We assume that $U_x \ket{y} = \ket{y}$ if $N \leq y \leq {2^L} - 1$.
<h3>Task 2 (on paper)</h3>
Let $\ket{\psi_0}=\ket{1 \Mod{N}}+\ket{x\Mod{N}}+\ket{x^2\Mod{N}}+ \cdots + \ket{x^{r-1}\Mod{N}}$.
What is $U_x \ket{\psi_0}$? What can you conclude about $\ket{\psi_0}$?
Repeat the same task for $\ket{\psi_1}=\ket{1}+ \omega^{-1}\ket{x\Mod{N}}+\omega^{-2}\ket{x^2\Mod{N}}+ \cdots + \omega^{-(r-1)} \ket{x^{r-1}\Mod{N}}$ where $\omega=e^{-\frac{2{\pi}i}{r}}$.
<a href="D04_Order_Finding_Algorithm_Solutions.ipynb#task2">click for our solution</a>
Now let's prove that $ \ket{u_s}= \displaystyle \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1}e^{-\frac{2{\pi}i s k}{r}} \ket{x^k \Mod{N}} $ for $ 0 \leq s \leq r-1 $ are eigenvectors for $ U_x $ with the corresponding eigenvalues $ e^{\frac{2{\pi}i s}{r}}$.
$$
\begin{align*}
U_x\ket{u_s} &= \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1}e^{\frac{-2{\pi}i s k}{r} }\ket{x^{k+1} \Mod{N}}\\
&= \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1}e^{\frac{-2{\pi}i s (k+1)}{r}}e^{\frac{2{\pi}i s}{r}}\ket{x^{k+1} \Mod{N}}~~~\mbox{ since $ x^r=1 \Mod{N}$} \\
&=e^{\frac{2{\pi}i s}{r} }\ket{u_s}
\end{align*}
$$
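The eigen-relation derived above can be checked numerically for a small case. A sketch assuming only NumPy, using $x=5$, $N=21$ (whose order is $r=6$) and $L=5$ qubits:

```python
import numpy as np

x, N, L = 5, 21, 5          # 2**L >= N; the order of 5 mod 21 is r = 6
r = 6
dim = 2**L

# Build U_x as a permutation matrix: U|y> = |x*y mod N> for y < N, identity otherwise
U = np.zeros((dim, dim))
for y in range(dim):
    U[(x * y) % N if y < N else y, y] = 1.0

# Each |u_s> is an eigenvector of U with eigenvalue exp(2*pi*i*s/r)
for s in range(r):
    u_s = np.zeros(dim, dtype=complex)
    for k in range(r):
        u_s[pow(x, k, N)] += np.exp(-2j * np.pi * s * k / r)
    u_s /= np.sqrt(r)
    assert np.allclose(U @ u_s, np.exp(2j * np.pi * s / r) * u_s)
```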
Information about $ r $ is encoded in the eigenvalues of the operator $ U_x $, and we will use the phase estimation algorithm to estimate $ \frac{s}{r} $. To apply phase estimation, we need to prepare the second register in the state $ \ket{u_s} $, but there is a problem: the eigenvector involves the variable $ r $, which is exactly what we aim to find. How can we prepare the eigenvector?
Instead, we will prepare a superposition of the eigenvectors $\displaystyle \frac{1}{\sqrt{r}}\sum_{s=0}^{r-1}\ket{u_s}$.
<h3>Task 3 (on paper)</h3>
Show that $\frac{1}{\sqrt{r}}\sum_{s=0}^{r-1}\ket{u_s}= \ket{1}$.
<a href="D04_Order_Finding_Algorithm_Solutions.ipynb#task3">click for our solution</a>
Suppose that the phase estimation algorithm takes the state $\ket{0}\ket{u}$ to the state $\ket{\tilde{\phi_u}}\ket{u}$. Then, given the input $\ket{0} \sum_{u} c_u \ket{u}$, the algorithm outputs $\sum_{u} c_u\ket{\tilde{\phi_u}}\ket{u}$.
If $ t $ is chosen as previously, then it can be proven that the probability of measuring $\phi_u$ accurate to $ n $ bits is at least $|c_u|^2(1- \epsilon)$.
Hence, combining this with Task 3, we can prepare the second register in the state $\ket{1}$ at the beginning of the algorithm.
<h2>Procedure </h2>
We use two registers: the first register has $ t $ qubits and the second register has $ L $ qubits. Let $ t = 2L + 1 + \big \lceil \log \big(2 + \frac{1}{2\epsilon}\big) \big \rceil $. The choice of $ t $ will become clear later on.
- Initialize the registers as
$\displaystyle \ket{\psi_0} = \frac{1}{\sqrt{r}}\sum_{s=0}^{r-1}\ket{0}\ket{u_s} =\ket{0}\ket{1} .$
Note that here by $ \ket{0} $ we denote $ \ket{0}^{\otimes t} $, and by $ \ket{1} $ we denote $ \ket{0}^{\otimes (L-1)}\ket{1} $.
- Apply $ H $ and $ CU^{2^j} $ gates in the phase estimation algorithm.
$ \displaystyle
\ket{\psi_1}=\frac{1}{\sqrt{r}}\sum_{s=0}^{r-1}\frac{1}{{2^{t/2}}}\sum_{k=0}^{{2^t}-1}e^{\frac{2{\pi}i s k}{r}}\ket{k}\ket{u_s}
$
- Apply Inverse QFT to the first register.
$\displaystyle
\frac{1}{\sqrt{r}}\sum_{s=0}^{r-1}\ket{\tilde{\phi}}\ket{u_s}
$
At the end of this procedure, for each $ s $ in the range $ 0, \dots, r-1 $, we obtain an estimate of the phase $\tilde{\phi} \approx \frac{s}{r}$ accurate to $ 2L+1 $ bits with probability at least $ \frac{1-\epsilon}{r} $.
Note that if $r$ is not a power of 2, then $\frac{s}{r}$ cannot be expressed exactly in the form $\frac{x}{2^t}$ for any integer $ x $.
Now the question is: how do we find $ r $ from the estimate of $ s/r $? The answer is to use continued fractions.
<h2>Continued Fractions</h2>
A continued fraction represents a real number by a (possibly infinite) sequence of integers, using expressions of the form:
$
[a_0,...a_n] = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{...}}}.
$
The rationals $a_0,\; a_0+\frac{1}{a_1},\; a_0+\frac{1}{a_1+\frac{1}{a_2}},\dots$ are called the convergents. All convergents of $ \frac{m}{n} $ can be found using $O(L^3)$ operations, where $ L $ is the number of bits needed to express $ m $ and $ n $.
### Example
$\frac{25}{11}$ can be expressed as $\frac{25}{11}=2+\frac{3}{11}=2+\frac{1}{\frac{11}{3}}$. Continuing like this,
\begin{align*}
=2+\frac{1}{3+\frac{2}{3}} = 2+\frac{1}{3+\frac{1}{\frac{3}{2}}}
\end{align*}
The resulting expression will be
\begin{align*}
2+\frac{1}{3+\frac{1}{1 + \frac{1}{2}}}
\end{align*}
with the continued fraction expression $[2,3,1,2]$.
The convergents are $c_1=2$, $c_2=2 + \frac{1}{3} = \frac{7}{3} $, $c_3 = 2 + \frac{1}{3 + \frac{1}{1}} = \frac{9}{4}$, $c_4 = 2+ \frac{ 1}{3 + \frac{1}{1 + \frac{1}{2}}} = \frac{25}{11}$
We defined two functions to calculate continued fractions expression and the convergents, which will be useful in the following tasks.
- <i>contFrac</i> takes a real number as parameter and returns its continued fractions expression as a list
- <i>convergents</i> takes as parameter a continued fractions expression and returns the list of convergents
Run the following cell to load the functions.
```
%run ../include/helpers.py
```
Below you see example usage of <i>contFrac</i> and <i>convergents</i> methods.
```
cf = contFrac(25/11)
print(cf)
cv = convergents(cf)
print(cv)
cv = convergents([1,4,2,1])
print(cv)
```
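If <i>helpers.py</i> is not available, minimal equivalents can be sketched as follows. These are our own illustrative implementations (names `cont_frac` and `converg` are assumptions), not necessarily identical to the course helpers:

```python
from fractions import Fraction

def cont_frac(x, max_terms=20):
    """Continued-fraction expansion of x (a Fraction, int, or float)."""
    frac = Fraction(x).limit_denominator(10**12)
    terms = []
    for _ in range(max_terms):
        a = frac.numerator // frac.denominator   # integer part becomes the next term
        terms.append(a)
        frac -= a
        if frac == 0:
            break
        frac = 1 / frac                          # recurse on the reciprocal of the remainder
    return terms

def converg(cf):
    """Convergents (as Fractions) of a continued-fraction expansion cf."""
    out = []
    for k in range(1, len(cf) + 1):
        v = Fraction(cf[k - 1])
        for a in reversed(cf[:k - 1]):           # fold the prefix back into a nested fraction
            v = a + 1 / v
        out.append(v)
    return out

print(cont_frac(Fraction(25, 11)))   # [2, 3, 1, 2]
print(converg([2, 3, 1, 2]))         # convergents 2, 7/3, 9/4, 25/11
```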
<h3>Task 4</h3>
Find the continued fractions expression for $\frac{31}{13}$ and the convergents first using pen and paper and then using the functions defined above.
```
#Your code here
```
<a href="D04_Order_Finding_Algorithm_Solutions.ipynb#task4">click for our solution</a>
### Choice of t (optional)
The following theorem guarantees that the continued fractions algorithm yields a good estimate for $\phi$.
<b>Theorem:</b> Suppose $\frac{s}{r}$ is a rational number so that $\displaystyle\left |\frac{s}{r}-\phi \right | \leq \frac{1}{2r^2}$. Then $\displaystyle \frac{s}{r}$ is a convergent of the continued fraction for $\phi$, and thus can be computed in $O(L^3)$ operations, using continued fraction algorithm.
Remember that $\phi$ is an approximation to $\frac{s}{r}$ accurate to $2L+1$ bits due to our choice of $t$. Since $r\leq N \leq 2^L$, we get $
\left |\frac{s}{r}-\phi \right | \leq \frac{1}{2^{2L+1}} \leq \frac{1}{2r^2}. $
Now according to the theorem, $\frac{s}{r}$ is a convergent of the continued fraction for $\phi$.
Computing the convergents gives us candidate values for $s$ and $r$, and we can then test whether $x^r=1 \Mod{N}$. (Remember that there is more than one convergent; we test each of them, but not all will give the correct $r$ value, so the procedure may fail in some cases.)
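To make the recovery step concrete, here is a hedged sketch (the function name and structure are our own). `Fraction.limit_denominator` plays the role of the continued-fractions step above: it returns the best rational approximation with denominator at most $N$, which by the convergent guarantee is the candidate $\frac{s}{r}$:

```python
from fractions import Fraction

def recover_order(measured, t, x, N):
    """Given an integer measurement outcome from a t-qubit phase register,
    interpret it as the phase measured/2**t, take the best rational
    approximation with denominator <= N, and test its denominator as the order."""
    phi = Fraction(measured, 2**t)
    r = phi.limit_denominator(N).denominator
    return r if pow(x, r, N) == 1 else None

# e.g. x = 7, N = 15 has order 4 (7**4 % 15 == 1); measuring the phase
# s/r = 1/4 with t = 9 qubits gives the outcome 128 = 2**9 / 4:
print(recover_order(128, 9, 7, 15))   # 4
```

When $s$ shares a factor with $r$, the recovered denominator is a divisor of $r$ and the test fails, which is why the procedure is repeated in practice.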
<hr>
<h2> Modular Exponentiation</h2>
In the phase estimation procedure, we assumed that we were given the operators $U$ and $CU$ and their powers as black-box functions. In reality, the $ CU^{2^j} $ operators must be implemented efficiently to obtain a speedup over the classical algorithm.
Note that to compute $x^{2^j}$ you don't need to perform $2^j$ multiplications: once you obtain $x^2$, you can square repeatedly to get $x^4$, $x^8$, and so on, so only $j$ multiplications are needed. Nevertheless, multiplication itself involves implementing addition and carries.
The important point is that the whole procedure can be performed using $ O(L^3) $ gates via a technique known as modular exponentiation. Since this is rather technical, we will instead implement the $ CU^{2^j} $ operators with Cirq's built-in functions. More details about modular exponentiation can be found at https://arxiv.org/pdf/1207.0511.pdf.
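Classically, the repeated-squaring idea can be sketched in a few lines. This is equivalent to Python's built-in three-argument `pow`, and is shown only to make the $O(\log e)$ multiplication count concrete:

```python
def mod_exp(x, e, N):
    """Square-and-multiply: computes x**e % N with O(log e) modular multiplications."""
    result, base = 1, x % N
    while e > 0:
        if e & 1:                      # include the current power of two when its bit is set
            result = (result * base) % N
        base = (base * base) % N       # repeated squaring: x, x^2, x^4, x^8, ...
        e >>= 1
    return result

print(mod_exp(5, 2**6, 21), pow(5, 2**6, 21))   # both 16
```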
<hr>
<h3>Task 5</h3>
You are given a function named <i>Ux</i> which implements $ U_x: \ket{y} \rightarrow \ket{xy {\Mod{N}}}$ and returns its controlled version. Run the following cell to load the function.
```
%run operator.py
```
In order to use the function, you should pass $x$ and $N$ as parameters.
<pre>CU=Ux(x,N)</pre>
Let $x=3$ and $N=20$. Use the phase estimation procedure to find estimates for $\frac{s}{r}$. Pick appropriate values for $t$ and $L$. You can use the <i>qpe</i> function you have already implemented. Plot your results as a histogram. Where do the peaks occur?
```
%load qpe.py
%load iqft.py
import matplotlib
import cirq
#Your code here
# Print a histogram of results
results= samples.histogram(key='result')
print(results)
import matplotlib.pyplot as plt
plt.bar(results.keys(), results.values())
```
<a href="D04_Order_Finding_Algorithm_Solutions.ipynb#task5">click for our solution</a>
<h3>Task 6</h3>
For each one of the possible outcomes in Task 5, try to find out the value of $r$ using continued fractions algorithm. You can use the functions defined above.
```
#Your code here
```
<a href="D04_Order_Finding_Algorithm_Solutions.ipynb#task6">click for our solution</a>
<h3>Task 7</h3>
Repeat Task 5 and Task 6 with $x=5$ and $N=42$.
```
%run operator.py
%load qpe.py
%load iqft.py
import matplotlib
import cirq
#Your code here
# Print a histogram of results
results= samples.histogram(key='result')
print(results)
for key in results:
results[key]=results[key]/1000
import matplotlib.pyplot as plt
plt.bar(results.keys(), results.values())
```
<a href="D04_Order_Finding_Algorithm_Solutions.ipynb#task7">click for our solution</a>
<h2>Remarks about the algorithm</h2>
- The algorithm might produce a bad estimate of $\frac{s}{r}$, which occurs with probability at most $ \epsilon $. This can be improved, but at the cost of increasing the size of the circuit.
- As we have seen in Task 6 and Task 7, $s$ and $r$ may have a common factor, in which case we obtain $r'$ (a factor of $r$) instead of $r$. Nevertheless, the number of primes less than $r$ is at least $\frac{r}{2\log r}$, so with a constant number of repetitions of the algorithm one can obtain $s$ and $r$ that are relatively prime with high probability.
- Overall, we have an algorithm which uses $O(L^3)$ gates, $ O(L) $ qubits and constant repetitions.
- Hadamard operation at the beginning requires $ O(L) $ gates
- $ O(L^2) $ gates are required by $QFT^\dagger $
- $ O(L^3) $ gates are needed for modular exponentiation
- Continued fraction algorithm requires $ O(L^3) $ classical processing
The best known classical algorithm for order finding requires super-polynomial time, whereas we have a polynomial-size quantum circuit for order finding.
# Force Index
https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:force_index
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance (since renamed to yfinance) is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2016-01-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
n = 13
df['FI_1'] = (df['Adj Close'] - df['Adj Close'].shift())*df['Volume']
df['FI_13'] = df['FI_1'].ewm(ignore_na=False,span=n,min_periods=n,adjust=True).mean()
df.head(20)
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(3, 1, 1)
ax1.plot(df['Adj Close'])
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax2 = plt.subplot(3, 1, 2)
ax2.plot(df['FI_1'], label='1-Period Force Index', color='black')
ax2.axhline(y=0, color='blue', linestyle='--')
ax2.grid()
ax2.set_ylabel('1-Period Force Index')
ax2.legend(loc='best')
ax3 = plt.subplot(3, 1, 3)
ax3.plot(df['FI_13'], label='13-Period Force Index', color='black')
ax3.axhline(y=0, color='blue', linestyle='--')
ax3.fill_between(df.index, df['FI_13'], where=df['FI_13']>0, color='green')
ax3.fill_between(df.index, df['FI_13'], where=df['FI_13']<0, color='red')
ax3.grid()
ax3.set_ylabel('13-Period Force Index')
ax3.set_xlabel('Date')
ax3.legend(loc='best')
```
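The Force Index itself is just price change times volume, smoothed by an EMA: $FI(1) = (\text{Close}_t - \text{Close}_{t-1}) \times \text{Volume}_t$, and $FI(13)$ is its 13-period exponential moving average. A self-contained sketch on synthetic data, so no download is required (the series here are random, not AAPL):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 100)))        # synthetic closing prices
volume = pd.Series(rng.integers(1_000, 5_000, 100).astype(float))  # synthetic volume

fi_1 = close.diff() * volume                      # 1-period Force Index
fi_13 = fi_1.ewm(span=13, min_periods=13).mean()  # 13-period EMA smoothing
```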
## Candlestick with Force Index
```
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date))
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(3, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax2 = plt.subplot(3, 1, 2)
ax2.plot(df['FI_1'], label='1-Period Force Index', color='black')
ax2.axhline(y=0, color='blue', linestyle='--')
ax2.grid()
ax2.set_ylabel('1-Period Force Index')
ax2.legend(loc='best')
ax3 = plt.subplot(3, 1, 3)
ax3.plot(df['FI_13'], label='13-Period Force Index', color='black')
ax3.axhline(y=0, color='blue', linestyle='--')
ax3.fill_between(df.index, df['FI_13'], where=df['FI_13']>0, color='green')
ax3.fill_between(df.index, df['FI_13'], where=df['FI_13']<0, color='red')
ax3.grid()
ax3.set_ylabel('13-Period Force Index')
ax3.set_xlabel('Date')
ax3.legend(loc='best')
```
# Decision Trees
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/hg-mldl/blob/master/5-1.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
## Classifying Wine with Logistic Regression
```
import pandas as pd
wine = pd.read_csv('https://bit.ly/wine_csv_data')
wine.head()
wine.info()
wine.describe()
data = wine[['alcohol', 'sugar', 'pH']].to_numpy()
target = wine['class'].to_numpy()
from sklearn.model_selection import train_test_split
train_input, test_input, train_target, test_target = train_test_split(
data, target, test_size=0.2, random_state=42)
print(train_input.shape, test_input.shape)
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
ss.fit(train_input)
train_scaled = ss.transform(train_input)
test_scaled = ss.transform(test_input)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(train_scaled, train_target)
print(lr.score(train_scaled, train_target))
print(lr.score(test_scaled, test_target))
```
### Models That Are Easy (and Hard) to Explain
```
print(lr.coef_, lr.intercept_)
```
## Decision Trees
```
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier(random_state=42)
dt.fit(train_scaled, train_target)
print(dt.score(train_scaled, train_target))
print(dt.score(test_scaled, test_target))
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree
plt.figure(figsize=(10,7))
plot_tree(dt)
plt.show()
plt.figure(figsize=(10,7))
plot_tree(dt, max_depth=1, filled=True, feature_names=['alcohol', 'sugar', 'pH'])
plt.show()
```
### Pruning
```
dt = DecisionTreeClassifier(max_depth=3, random_state=42)
dt.fit(train_scaled, train_target)
print(dt.score(train_scaled, train_target))
print(dt.score(test_scaled, test_target))
plt.figure(figsize=(20,15))
plot_tree(dt, filled=True, feature_names=['alcohol', 'sugar', 'pH'])
plt.show()
dt = DecisionTreeClassifier(max_depth=3, random_state=42)
dt.fit(train_input, train_target)
print(dt.score(train_input, train_target))
print(dt.score(test_input, test_target))
plt.figure(figsize=(20,15))
plot_tree(dt, filled=True, feature_names=['alcohol', 'sugar', 'pH'])
plt.show()
print(dt.feature_importances_)
```
## Exercises
```
dt = DecisionTreeClassifier(min_impurity_decrease=0.0005, random_state=42)
dt.fit(train_input, train_target)
print(dt.score(train_input, train_target))
print(dt.score(test_input, test_target))
plt.figure(figsize=(20,15))
plot_tree(dt, filled=True, feature_names=['alcohol', 'sugar', 'pH'])
plt.show()
```
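How far to prune can also be chosen by cross-validation rather than by eye. A hedged sketch on synthetic data (standing in for the wine dataset, which requires a download):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-feature binary classification task
X, y = make_classification(n_samples=1000, n_features=3, n_informative=3,
                           n_redundant=0, random_state=42)

# Compare pruning levels by 5-fold cross-validated accuracy
for depth in (1, 3, 7, None):
    dt = DecisionTreeClassifier(max_depth=depth, random_state=42)
    score = cross_val_score(dt, X, y, cv=5).mean()
    print(f"max_depth={depth}: CV accuracy {score:.3f}")
```

The same loop could sweep `min_impurity_decrease` instead of `max_depth`; the point is that the pruning strength is just another hyperparameter to validate.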
# Model Optimization with an Image Classification Example
1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)
3. [Train the model](#Train-the-model)
4. [Optimize trained model using SageMaker Neo and Deploy](#Optimize-trained-model-using-SageMaker-Neo-and-Deploy)
5. [Request Inference](#Request-Inference)
6. [Delete the Endpoints](#Delete-the-Endpoints)
## Introduction
***
Welcome to our model optimization example for image classification. In this demo, we will use the Amazon SageMaker Image Classification algorithm to train on the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/) and then we will demonstrate Amazon SageMaker Neo's ability to optimize models.
## Prerequisites and Preprocessing
***
### Setup
To get started, we need to define a few variables and obtain certain permissions that will be needed later in the example. These are:
* A SageMaker session
* IAM role to give learning, storage & hosting access to your data
* An S3 bucket, a folder & sub folders that will be used to store data and artifacts
* SageMaker's specific Image Classification training image which should not be changed
We also need to upgrade the [SageMaker SDK for Python](https://sagemaker.readthedocs.io/en/stable/v2.html) to v2.33.0 or greater and restart the kernel.
```
!~/anaconda3/envs/mxnet_p36/bin/pip install --upgrade "sagemaker>=2.33.0"
import sagemaker
from sagemaker import session, get_execution_role
role = get_execution_role()
sagemaker_session = session.Session()
# S3 bucket and folders for saving code and model artifacts.
# Feel free to specify different bucket/folders here if you wish.
bucket = sagemaker_session.default_bucket()
folder = "DEMO-ImageClassification"
model_with_custom_code_sub_folder = folder + "/model-with-custom-code"
validation_data_sub_folder = folder + "/validation-data"
training_data_sub_folder = folder + "/training-data"
training_output_sub_folder = folder + "/training-output"
compilation_output_sub_folder = folder + "/compilation-output"
from sagemaker import session, get_execution_role
# S3 Location to save the model artifact after training
s3_training_output_location = "s3://{}/{}".format(bucket, training_output_sub_folder)
# S3 Location to save the model artifact after compilation
s3_compilation_output_location = "s3://{}/{}".format(bucket, compilation_output_sub_folder)
# S3 Location to save your custom code in tar.gz format
s3_model_with_custom_code_location = "s3://{}/{}".format(bucket, model_with_custom_code_sub_folder)
from sagemaker.image_uris import retrieve
aws_region = sagemaker_session.boto_region_name
training_image = retrieve(
framework="image-classification", region=aws_region, image_scope="training"
)
```
### Data preparation
In this demo, we are using the [Caltech-256](http://www.vision.caltech.edu/Image_Datasets/Caltech256/) dataset, pre-converted into `RecordIO` format using MXNet's [im2rec](https://mxnet.apache.org/versions/1.7/api/faq/recordio) tool. The Caltech-256 dataset contains 30608 images of 256 object categories. For the training and validation data, the splitting scheme is governed by this [MXNet example](https://github.com/apache/incubator-mxnet/blob/8ecdc49cf99ccec40b1e342db1ac6791aa97865d/example/image-classification/data/caltech256.sh): it randomly selects 60 images per class for training and uses the remaining data for validation. It takes around 50 seconds to convert the entire Caltech-256 dataset (~1.2GB) into `RecordIO` format on a p2.xlarge instance. SageMaker's training algorithm takes `RecordIO` files as input. For this demo, we will download the `RecordIO` files and upload them to S3. We then store the 257 class labels (256 object categories plus a clutter class) in a variable.
```
import os
import urllib.request
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
# Download caltech-256 data files from MXNet's website
download("http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec")
download("http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec")
# Upload the file to S3
s3_training_data_location = sagemaker_session.upload_data(
"caltech-256-60-train.rec", bucket, training_data_sub_folder
)
s3_validation_data_location = sagemaker_session.upload_data(
"caltech-256-60-val.rec", bucket, validation_data_sub_folder
)
class_labels = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
```
## Train the model
***
Now that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator is required to launch the training job.
We specify the following parameters while creating the estimator:
* ``image_uri``: This is set to the training_image uri we defined previously. Once set, this image will be used later while running the training job.
* ``role``: This is the IAM role which we defined previously.
* ``instance_count``: This is the number of instances on which to run the training. When the number of instances is greater than one, then the image classification algorithm will run in distributed settings.
* ``instance_type``: This indicates the type of machine on which to run the training. For this example we will use `ml.p3.8xlarge`.
* ``volume_size``: This is the size in GB of the EBS volume to use for storing input data during training. It must be large enough to store the training data, since File mode is used.
* ``max_run``: This is the timeout value in seconds for training. After this amount of time SageMaker terminates the job regardless of its current status.
* ``input_mode``: This is set to `File` in this example. SageMaker copies the training dataset from the S3 location to a local directory.
* ``output_path``: This is the S3 path in which the training output is stored. We are assigning it to `s3_training_output_location` defined previously.
```
ic_estimator = sagemaker.estimator.Estimator(
image_uri=training_image,
role=role,
instance_count=1,
instance_type="ml.p3.8xlarge",
volume_size=50,
max_run=360000,
input_mode="File",
output_path=s3_training_output_location,
base_job_name="img-classification-training",
)
```
Following are certain hyperparameters that are specific to the algorithm which are also set:
* ``num_layers``: The number of layers (depth) for the network. We use 18 in this example, but other values such as 50 or 152 can be used.
* ``image_shape``: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size, and the number of channels should match that of the actual images.
* ``num_classes``: This is the number of output classes for the new dataset. Imagenet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.
* ``num_training_samples``: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.
* ``mini_batch_size``: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.
* ``epochs``: Number of training epochs.
* ``learning_rate``: Learning rate for training.
* ``top_k``: Report the top-k accuracy during training.
* ``precision_dtype``: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode.
```
ic_estimator.set_hyperparameters(
num_layers=18,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=5,
learning_rate=0.01,
top_k=2,
use_pretrained_model=1,
precision_dtype="float32",
)
```
Next, we set up the input ``data_channels`` to be used later for training.
```
train_data = sagemaker.inputs.TrainingInput(
s3_training_data_location, content_type="application/x-recordio", s3_data_type="S3Prefix"
)
validation_data = sagemaker.inputs.TrainingInput(
s3_validation_data_location, content_type="application/x-recordio", s3_data_type="S3Prefix"
)
data_channels = {"train": train_data, "validation": validation_data}
```
After we've created the estimator object, we can train the model using the ``fit()`` API.
```
ic_estimator.fit(inputs=data_channels, logs=True)
```
## Optimize trained model using SageMaker Neo and Deploy
***
We will use SageMaker Neo's ``compile_model()`` API, specifying ``MXNet`` as the framework and the framework version to optimize the model. When calling this API, we also specify the target instance family, the correct input shapes for the model, and the S3 location in which the compiled model artifacts will be stored. For this example, we choose ``ml_c5`` as the target instance family.
```
optimized_ic = ic_estimator.compile_model(
target_instance_family="ml_c5",
input_shape={"data": [1, 3, 224, 224]},
output_path=s3_compilation_output_location,
framework="mxnet",
framework_version="1.8",
)
```
After the compiled artifacts are generated and we have a ``sagemaker.model.Model`` object, we create a ``sagemaker.mxnet.model.MXNetModel`` object, specifying the following parameters:
* ``model_data``: S3 location where the compiled model artifact is stored
* ``image_uri``: Neo's inference image URI for MXNet
* ``framework_version``: set to MXNet v1.8.0
* ``role`` & ``sagemaker_session``: IAM role and SageMaker session which we defined in the setup
* ``entry_point``: points to the entry_point script. In our example, the script implements SageMaker's hosting functions
* ``py_version``: We are required to set this to Python 3
* ``env``: A dict to specify the environment variables. We are required to set MMS_DEFAULT_RESPONSE_TIMEOUT to 500
* ``code_location``: S3 location where the repacked model.tar.gz is stored. The repacked tar file consists of the compiled model artifacts and the entry_point script
```
from sagemaker.mxnet.model import MXNetModel
optimized_ic_model = MXNetModel(
model_data=optimized_ic.model_data,
image_uri=optimized_ic.image_uri,
framework_version="1.8.0",
role=role,
sagemaker_session=sagemaker_session,
entry_point="inference.py",
py_version="py37",
env={"MMS_DEFAULT_RESPONSE_TIMEOUT": "500"},
code_location=s3_model_with_custom_code_location,
)
```
We can now deploy this ``sagemaker.mxnet.model.MXNetModel`` using the ``deploy()`` API, for which we need an instance_type belonging to the target_instance_family we used for compilation. For this example, we choose an ``ml.c5.4xlarge`` instance because we compiled for ``ml_c5``. The API also allows us to set the initial_instance_count that will be used for the endpoint. By default the API uses ``JSONSerializer()`` and ``JSONDeserializer()`` for ``sagemaker.mxnet.model.MXNetModel``, whose ``CONTENT_TYPE`` is ``application/json``. The API creates a SageMaker endpoint that we can use to perform inference.
**Note**: If you compiled the model for a GPU `target_instance_family`, make sure to deploy to an `instance_type` from that same family below, and make the necessary changes in the entry point script `inference.py`.
```
optimized_ic_classifier = optimized_ic_model.deploy(
initial_instance_count=1, instance_type="ml.c5.4xlarge"
)
```
## Request Inference
***
Once the endpoint is in the ``InService`` state, we can send a test image, ``test.jpg``, and get the prediction result from the endpoint using SageMaker's ``predict()`` API. Instead of sending the raw image to the endpoint, we prepare and send a payload in a form the API accepts. Upon receiving the prediction result, we print the class label and probability.
```
import PIL.Image
import numpy as np
from IPython.display import Image
test_file = "test.jpg"
test_image = PIL.Image.open(test_file)
payload = np.asarray(test_image.resize((224, 224)))
Image(test_file)
%%time
result = optimized_ic_classifier.predict(payload)
index = np.argmax(result)
print("Result: label - " + class_labels[index] + ", probability - " + str(result[index]))
```
## Delete the Endpoint
***
A running endpoint incurs costs. Therefore, as an optional clean-up step, you can delete it.
```
print("Endpoint name: " + optimized_ic_classifier.endpoint_name)
optimized_ic_classifier.delete_endpoint()
```
# Exploratory Data Analysis
```
%%HTML
<style type="text/css">
table.dataframe td, table.dataframe th {
border: 1px black solid !important;
color: black !important;
}
</style>
# Install the missingno library
! pip install missingno
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("datasets/titanic.csv")
df.head(3)
df.shape
df.info()
```
### Handling missing values
```
df.isnull().sum()
import missingno as ms
ms.bar(df, color="orange", inline=True)
```
* ### PassengerId: Just a serial number
* ### Survived: 0 = No, 1 = Yes
* ### pclass: Ticket class 1 = 1st, 2 = 2nd, 3 = 3rd
* ### sibsp: # of siblings / spouses aboard the Titanic
* ### parch: # of parents / children aboard the Titanic
* ### ticket: Ticket number
* ### cabin: Cabin number
* ### embarked: Port of Embarkation C = Cherbourg, Q = Queenstown, S = Southampton
```
df.groupby(["Pclass", "Sex"])["Age"].mean()
df.groupby(["Pclass", "Sex"])["Age"].median()
# Fill in the Age values for Pclass = 1 for male and female
df.loc[df.Age.isnull() & (df.Sex == "male") & (df.Pclass == 1), "Age"] = 37
df.loc[df.Age.isnull() & (df.Sex == "female") & (df.Pclass == 1), "Age"] = 35.5
# Fill in the Age values for Pclass = 2 for male and female
df.loc[df.Age.isnull() & (df.Sex == "male") & (df.Pclass == 2), "Age"] = 29.0
df.loc[df.Age.isnull() & (df.Sex == "female") & (df.Pclass == 2), "Age"] = 28.5
# Fill in the Age values for Pclass = 3 for male and female
df.loc[df.Age.isnull() & (df.Sex == "male") & (df.Pclass == 3), "Age"] = 25
df.loc[df.Age.isnull() & (df.Sex == "female") & (df.Pclass == 3), "Age"] = 22
df.drop(columns="Cabin", inplace=True)
import missingno as ms
ms.bar(df, color="orange", inline=True)
```
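The per-group fills above hard-code the group medians. As a sketch of a more reusable alternative (assuming the same `Pclass`, `Sex`, and `Age` columns), `groupby(...).transform` computes each group's median from the data itself:

```python
import pandas as pd

def fill_age_by_group(df):
    """Fill missing Age values with the median age of each (Pclass, Sex) group."""
    df = df.copy()
    df["Age"] = df.groupby(["Pclass", "Sex"])["Age"].transform(
        lambda s: s.fillna(s.median())
    )
    return df
```

Because the medians are recomputed from the data, this stays correct if the dataset or its split changes.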
## Types of variables
```
sns.countplot(x="Survived", data=df)
sns.set_style("whitegrid")
sns.countplot(x="Embarked", data=df)
sns.countplot(x="Sex", data=df)
sns.countplot(x="SibSp", data=df)
```
# Bivariate Analysis
### Scatter plot
```
sns.set_style("darkgrid")
sns.relplot(x="Age", y="Fare", data=df)
sns.relplot(x="Age", y="Fare", kind="line", data=df)
corr = df.corr()
sns.heatmap(corr, annot=True)
sns.barplot(x="Pclass", y="Survived", data=df, ci=None)
```
# Multivariate Analysis
```
sns.barplot(x="Sex", y="Survived", hue="Pclass", data=df, ci=None)
sns.set_style("darkgrid")
sns.relplot(x="Age", y="Fare", hue="Survived", data=df)
```
## Handling outliers
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("datasets/pizza_prices.csv")
df.head()
len(df)
sns.boxplot(x="price", data=df)
np.mean(df.price)
df.drop(df[df.price == 160.32].index, inplace=True)
df.drop(df[df.price == 63.43].index, inplace=True)
df.drop(df[df.price == 158.38].index, inplace=True)
np.mean(df.price)
```
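Dropping the three prices by hand works for this dataset, but a sketch of a reusable IQR-based filter (assuming the column name is passed in) generalizes the same idea:

```python
import pandas as pd

def drop_iqr_outliers(df, column, k=1.5):
    """Remove rows whose value in `column` lies outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1 = df[column].quantile(0.25)
    q3 = df[column].quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return df[df[column].between(lower, upper)]
```

With the default `k=1.5` this is the same fence a boxplot draws its whiskers at, so the rows it drops are exactly the points plotted as outliers above.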
## Feature Selection
```
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
df = pd.read_csv("datasets/titanic.csv")
df.head(3)
df_main = df.drop(columns=["PassengerId", "Cabin", "Name", "Ticket"])
df_main.head()
df_main["Sex"].value_counts()
df_main["Embarked"].value_counts()
df_main.loc[df_main["Sex"] == "male", "Sex"] = 1
df_main.loc[df_main["Sex"] == "female", "Sex"] = 0
df_main.loc[df_main["Embarked"] == "S", "Embarked"] = 0
df_main.loc[df_main["Embarked"] == "Q", "Embarked"] = 1
df_main.loc[df_main["Embarked"] == "C", "Embarked"] = 2
df_main.head()
X = df_main.drop(columns="Survived")
X.head()
y = df["Survived"]
y.head()
```
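The `loc`-based replacements above can also be written with `Series.map`, which makes the encoding dictionaries explicit (a sketch assuming the same column values):

```python
import pandas as pd

def encode_features(df):
    """Map Sex and Embarked to integer codes, mirroring the manual loc assignments."""
    df = df.copy()
    df["Sex"] = df["Sex"].map({"male": 1, "female": 0})
    df["Embarked"] = df["Embarked"].map({"S": 0, "Q": 1, "C": 2})
    return df
```

One caveat: `map` turns any value missing from the dictionary into `NaN`, whereas the `loc` version silently leaves it unchanged.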
# Face Generation
In this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible!
The project will be broken down into a series of tasks from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise.
### Get the Data
You'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.
This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training.
### Pre-processed Data
Since the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is shown below.
<img src='assets/processed_face_data.png' width=60% />
> If you are working locally, you can download this data [by clicking here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip)
This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`
```
# can comment out after executing
#!unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
```
## Visualize the CelebA Data
The [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with [3 color channels (RGB)](https://en.wikipedia.org/wiki/Channel_(digital_image)#RGB_Images) each.
### Pre-process and Load the Data
Since the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.
> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**.
#### Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements:
* Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension.
* Your function should return a DataLoader that shuffles and batches these Tensor images.
#### ImageFolder
To create a dataset given a directory of images, it's recommended that you use PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.html#imagefolder) wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.
```
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
# Define image augmentation methods
data_transforms = transforms.Compose([transforms.Resize(image_size),
transforms.ToTensor()])
# Load the dataset with ImageFolder
dataset = datasets.ImageFolder(data_dir, transform=data_transforms)
# Define the dataloader
data_loader = torch.utils.data.DataLoader(dataset, batch_size, shuffle=True)
return data_loader
```
## Create a DataLoader
#### Exercise: Create a DataLoader `celeba_train_loader` with appropriate hyperparameters.
Call the above function and create a dataloader to view images.
* You can decide on any reasonable `batch_size` parameter
* Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
```
# Define function hyperparameters
batch_size = 28
img_size = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
```
Next, you can view some images! You should see square images of somewhat-centered faces.
Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image, suggested `imshow` code is below, but it may not be perfect.
```
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
```
#### Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1
You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
```
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
max_scale = max(feature_range)
min_scale = min(feature_range)
# Scale to feature range
x = x * (max_scale-min_scale) + min_scale
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
```
---
# Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
## Discriminator
Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful.
#### Exercise: Complete the Discriminator class
* The inputs to the discriminator are 32x32x3 tensor images
* The output should be a single value that will indicate whether a given image is real or fake
```
import torch.nn as nn
import torch.nn.functional as F
# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
"""Creates a convolutional layer, with optional batch normalization.
"""
layers = []
conv_layer = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
layers.append(conv_layer)
if batch_norm:
layers.append(nn.BatchNorm2d(out_channels))
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
# define all convolutional layers
self.conv1 = conv(3, conv_dim, 4, batch_norm=False)
self.conv2 = conv(conv_dim, conv_dim*2, 4)
self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
self.conv4 = conv(conv_dim*4, conv_dim*8, 4)
# define last classification layer
self.conv5 = conv(conv_dim*8, 1, 4, stride=1, batch_norm=False)
# define activation function
self.leakyrelu = nn.LeakyReLU()
# define dropout layer
self.dropout = nn.Dropout(0.3)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
## define feedforward behavior
# all convolutional layers
x = self.leakyrelu(self.conv1(x))
x = self.leakyrelu(self.conv2(x))
x = self.leakyrelu(self.conv3(x))
x = self.dropout(x)
x = self.leakyrelu(self.conv4(x))
x = self.dropout(x)
# last classification layer
out = self.conv5(x).view(-1, 1)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
```
## Generator
The generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs.
#### Exercise: Complete the Generator class
* The inputs to the generator are vectors of some length `z_size`
* The output should be a image of shape `32x32x3`
```
# helper deconv function
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
"""Creates a transpose convolutional layer, with optional batch normalization.
"""
layers = []
# append transpose conv layer
layers.append(nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride, padding, bias=False))
# optional batch norm layer
if batch_norm:
layers.append(nn.BatchNorm2d(out_channels))
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
self.conv_dim = conv_dim
## Define input fully connected layer
self.fcin = nn.Linear(z_size, 4*4*conv_dim*4)
## Define transpose convolutional layers
self.t_conv1 = deconv(conv_dim*4, conv_dim*2, 4, batch_norm=True)
self.t_conv2 = deconv(conv_dim*2, conv_dim, 4, batch_norm=True)
self.t_conv3 = deconv(conv_dim, 3, 4, batch_norm=False)
## Define activation function
self.relu = nn.ReLU()
self.tanh = nn.Tanh()
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
## input fully connected layer
x = self.fcin(x)
        ## reshape the tensor for the transpose convolutional layer
x = x.view(-1, self.conv_dim*4, 4, 4)
## convolutional layers
x = self.relu(self.t_conv1(x))
x = self.relu(self.t_conv2(x))
## last convolutional layer with tanh act function
out = self.tanh(self.t_conv3(x))
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
```
## Initialize the weights of your networks
To help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:
> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.
So, your next task will be to define a weight initialization function that does just this!
You can refer back to the lesson on weight initialization or even consult existing model code, such as that from [the `networks.py` file in CycleGAN Github repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py) to help you complete this function.
#### Exercise: Complete the weight initialization function
* This should initialize only **convolutional** and **linear** layers
* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.
* The bias terms, if they exist, may be left alone or set to 0.
```
from torch.nn import init
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
classname = m.__class__.__name__
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
## Initialize conv and linear layer
init.normal_(m.weight.data, 0.0, 0.02)
```
## Build complete network
Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
```
#### Exercise: Define model hyperparameters
```
# Define model hyperparams
d_conv_dim = 32
g_conv_dim = 64
z_size = 256
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
```
### Training on GPU
Check if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that
>* Models,
* Model inputs, and
* Loss function arguments
Are moved to GPU, where appropriate.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
```
---
## Discriminator and Generator Losses
Now we need to calculate the losses for both types of adversarial networks.
### Discriminator Losses
> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`.
* Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
### Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*.
#### Exercise: Complete real and fake loss functions
**You may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**
```
def real_loss(D_out):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
    loss = torch.mean((D_out - 1)**2)
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
loss = torch.mean(D_out**2)
return loss
```
## Optimizers
#### Exercise: Define optimizers for your Discriminator (D) and Generator (G)
Define optimizers for your models with appropriate hyperparameters.
```
import torch.optim as optim
lr = 0.0002
beta1 = 0.3
beta2 = 0.999
# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])
```
---
## Training
Training will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses.
* You should train the discriminator by alternating on real and fake images
* Then the generator, which tries to trick the discriminator and should have an opposing loss function
#### Saving Samples
You've been given some code to print out some loss statistics and save some generated "fake" samples.
#### Exercise: Complete the training function
Keep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
```
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ==================================================
# 1. Train the discriminator on real and fake images
# ==================================================
d_optimizer.zero_grad()
## Compute real loss on discriminator
# move real images to GPU if available
if train_on_gpu:
real_images = real_images.cuda()
# pass real images to descriminator
D_real = D(real_images)
# compute real loss of descriminator
d_real_loss = real_loss(D_real)
## Compute fake loss on discriminator
# Compute latent vector z
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
# generate fake images with generator
fake_images = G(z)
# pass fake images to descriminator
D_fake = D(fake_images)
# compute fake loss of descrimintor
d_fake_loss = fake_loss(D_fake)
## Compute total descriminator loss
d_loss = d_real_loss + d_fake_loss
## Perform backprop to the descriminator
d_loss.backward()
d_optimizer.step()
# ===============================================
# 2. Train the generator with an adversarial loss
# ===============================================
g_optimizer.zero_grad()
# generate latent vector z
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
# generate fake images from latent vector z
fake_images = G(z)
# compute generator loss
D_fake = D(fake_images)
g_loss = real_loss(D_fake)
# perform backprop on generator
g_loss.backward()
g_optimizer.step()
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
```
Set your number of training epochs and train your GAN!
```
import signal
from contextlib import contextmanager
import requests
DELAY = INTERVAL = 4 * 60 # interval time in seconds
MIN_DELAY = MIN_INTERVAL = 2 * 60
KEEPALIVE_URL = "https://nebula.udacity.com/api/v1/remote/keep-alive"
TOKEN_URL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token"
TOKEN_HEADERS = {"Metadata-Flavor":"Google"}
def _request_handler(headers):
def _handler(signum, frame):
requests.request("POST", KEEPALIVE_URL, headers=headers)
return _handler
@contextmanager
def active_session(delay=DELAY, interval=INTERVAL):
"""
Example:
    from workspace_utils import active_session
with active_session():
# do long-running work here
"""
token = requests.request("GET", TOKEN_URL, headers=TOKEN_HEADERS).text
headers = {'Authorization': "STAR " + token}
delay = max(delay, MIN_DELAY)
interval = max(interval, MIN_INTERVAL)
original_handler = signal.getsignal(signal.SIGALRM)
try:
signal.signal(signal.SIGALRM, _request_handler(headers))
signal.setitimer(signal.ITIMER_REAL, delay, interval)
yield
finally:
signal.signal(signal.SIGALRM, original_handler)
signal.setitimer(signal.ITIMER_REAL, 0)
def keep_awake(iterable, delay=DELAY, interval=INTERVAL):
"""
Example:
from workspace_utils import keep_awake
for i in keep_awake(range(5)):
# do iteration with lots of work here
"""
with active_session(delay, interval): yield from iterable
# set number of epochs
n_epochs = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
with active_session():
losses = train(D, G, n_epochs=n_epochs)
```
## Training loss
Plot the training losses for the generator and discriminator, recorded after each epoch.
```
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
```
## Generator samples from training
View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
```
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
```
### Question: What do you notice about your generated samples and how might you improve this model?
When you answer this question, consider the following factors:
* The dataset is biased; it is made of "celebrity" faces that are mostly white
* Model size; larger models have the opportunity to learn more features in a data feature space
* Optimization strategy; optimizers and number of epochs affect your final result
**Answer:**
Here are the problems, along with potential solutions:
1. **Problem:** Most generated faces are white, so the model is biased toward white faces.
**Solution:** Balance the training set across races.
2. **Problem:** Some facial features, such as the mouth, eyes, and nose, are flawed.
**Solution:** Make the network larger so it can recognize a wider range of features, and train for more epochs.
3. **Problem:** The images have low resolution.
**Solution:** Train on higher-resolution images, and potentially use a GAN architecture designed for high resolution.
### Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "problem_unittests.py" files in your submission.
# Multiple Qubits & Entangled States
Single qubits are interesting, but individually they offer no computational advantage. We will now look at how we represent multiple qubits, and how these qubits can interact with each other. We have seen how we can represent the state of a qubit using a 2D-vector, now we will see how we can represent the state of multiple qubits.
## Contents
1. [Representing Multi-Qubit States](#represent)
1.1 [Exercises](#ex1)
2. [Single Qubit Gates on Multi-Qubit Statevectors](#single-qubit-gates)
2.1 [Exercises](#ex2)
3. [Multi-Qubit Gates](#multi-qubit-gates)
3.1 [The CNOT-gate](#cnot)
3.2 [Entangled States](#entangled)
3.3 [Visualizing Entangled States](#visual)
3.4 [Exercises](#ex3)
## 1. Representing Multi-Qubit States <a id="represent"></a>
We saw that a single bit has two possible states, and a qubit state has two complex amplitudes. Similarly, two bits have four possible states:
`00` `01` `10` `11`
And to describe the state of two qubits requires four complex amplitudes. We store these amplitudes in a 4D-vector like so:
$$ |a\rangle = a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix} $$
The rules of measurement still work in the same way:
$$ p(|00\rangle) = |\langle 00 | a \rangle |^2 = |a_{00}|^2$$
And the same implications hold, such as the normalisation condition:
$$ |a_{00}|^2 + |a_{01}|^2 + |a_{10}|^2 + |a_{11}|^2 = 1$$
If we have two separated qubits, we can describe their collective state using the tensor product:
$$ |a\rangle = \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}, \quad |b\rangle = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} $$
$$
|ba\rangle = |b\rangle \otimes |a\rangle = \begin{bmatrix} b_0 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \\ b_1 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} b_0 a_0 \\ b_0 a_1 \\ b_1 a_0 \\ b_1 a_1 \end{bmatrix}
$$
And following the same rules, we can use the tensor product to describe the collective state of any number of qubits. Here is an example with three qubits:
$$
|cba\rangle = \begin{bmatrix} c_0 b_0 a_0 \\ c_0 b_0 a_1 \\ c_0 b_1 a_0 \\ c_0 b_1 a_1 \\
c_1 b_0 a_0 \\ c_1 b_0 a_1 \\ c_1 b_1 a_0 \\ c_1 b_1 a_1 \\
\end{bmatrix}
$$
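As a quick numerical check, the tensor-product rule above can be reproduced with NumPy's `kron` (a plain-NumPy sketch separate from the Qiskit workflow below; the names `zero`, `one` and `plus` are our own):

```python
import numpy as np

zero = np.array([1, 0])
one = np.array([0, 1])
plus = np.array([1, 1]) / np.sqrt(2)

# |ba> = |b> (x) |a>: amplitudes stack as [b0*a0, b0*a1, b1*a0, b1*a1]
print(np.kron(one, zero))  # |10> -> [0 0 1 0]

# Three-qubit |+++>: eight equal amplitudes, still normalised
state = np.kron(plus, np.kron(plus, plus))
print(np.sum(np.abs(state)**2))  # 1.0
```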
If we have $n$ qubits, we will need to keep track of $2^n$ complex amplitudes. As we can see, these vectors grow exponentially with the number of qubits. This is the reason quantum computers with large numbers of qubits are so difficult to simulate. A modern laptop can easily simulate a general quantum state of around 20 qubits, but simulating 100 qubits is too difficult for the largest supercomputers.
Let's look at an example circuit:
```
from qiskit import QuantumCircuit, Aer, assemble
import numpy as np
from qiskit.visualization import plot_histogram, plot_bloch_multivector
qc = QuantumCircuit(3)
# Apply H-gate to each qubit:
for qubit in range(3):
    qc.h(qubit)
# See the circuit:
qc.draw()
```
Each qubit is in the state $|+\rangle$, so we should see the vector:
$$
|{+++}\rangle = \frac{1}{\sqrt{8}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\
1 \\ 1 \\ 1 \\ 1 \\
\end{bmatrix}
$$
```
# Let's see the result
svsim = Aer.get_backend('aer_simulator')
qc.save_statevector()
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(final_state) instead.
from qiskit.visualization import array_to_latex
array_to_latex(final_state, prefix="\\text{Statevector} = ")
```
And we have our expected result.
### 1.1 Quick Exercises: <a id="ex1"></a>
1. Write down the tensor product of the qubits:
a) $|0\rangle|1\rangle$
b) $|0\rangle|+\rangle$
c) $|+\rangle|1\rangle$
d) $|-\rangle|+\rangle$
2. Write the state:
$|\psi\rangle = \tfrac{1}{\sqrt{2}}|00\rangle + \tfrac{i}{\sqrt{2}}|01\rangle $
as two separate qubits.
## 2. Single Qubit Gates on Multi-Qubit Statevectors <a id="single-qubit-gates"></a>
We have seen that an X-gate is represented by the matrix:
$$
X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
$$
And that it acts on the state $|0\rangle$ as so:
$$
X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1\end{bmatrix}
$$
but it may not be clear how an X-gate would act on a qubit in a multi-qubit vector. Fortunately, the rule is quite simple; just as we used the tensor product to calculate multi-qubit statevectors, we use the tensor product to calculate matrices that act on these statevectors. For example, in the circuit below:
```
qc = QuantumCircuit(2)
qc.h(0)
qc.x(1)
qc.draw()
```
we can represent the simultaneous operations (H & X) using their tensor product:
$$
X|q_1\rangle \otimes H|q_0\rangle = (X\otimes H)|q_1 q_0\rangle
$$
The operation looks like this:
$$
X\otimes H = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \otimes \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
$$
$$
= \frac{1}{\sqrt{2}}
\begin{bmatrix} 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
& 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
\\
1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
& 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
\end{bmatrix}
$$
$$
= \frac{1}{\sqrt{2}}
\begin{bmatrix} 0 & 0 & 1 & 1 \\
0 & 0 & 1 & -1 \\
1 & 1 & 0 & 0 \\
1 & -1 & 0 & 0 \\
\end{bmatrix}
$$
Which we can then apply to our 4D statevector $|q_1 q_0\rangle$. This can become quite messy, so you will often see the clearer notation:
$$
X\otimes H =
\begin{bmatrix} 0 & H \\
H & 0\\
\end{bmatrix}
$$
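Before handing this off to the simulator, the tensor-product derivation above is easy to verify directly with NumPy's `kron` (our own sketch, not part of the Qiskit flow):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

XH = np.kron(X, H)  # should match the 4x4 matrix derived above
expected = np.array([[0, 0, 1, 1],
                     [0, 0, 1, -1],
                     [1, 1, 0, 0],
                     [1, -1, 0, 0]]) / np.sqrt(2)
print(np.allclose(XH, expected))  # True
```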
Instead of calculating this by hand, we can use Qiskit’s `aer_simulator` to calculate this for us. The Aer simulator multiplies all the gates in our circuit together to compile a single unitary matrix that performs the whole quantum circuit:
```
usim = Aer.get_backend('aer_simulator')
qc.save_unitary()
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
```
and view the results:
```
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(unitary) instead.
from qiskit.visualization import array_to_latex
array_to_latex(unitary, prefix="\\text{Circuit = }\n")
```
If we want to apply a gate to only one qubit at a time (such as in the circuit below), we describe this using tensor product with the identity matrix, e.g.:
$$ X \otimes I $$
```
qc = QuantumCircuit(2)
qc.x(1)
qc.draw()
# Simulate the unitary
usim = Aer.get_backend('aer_simulator')
qc.save_unitary()
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
# Display the results:
array_to_latex(unitary, prefix="\\text{Circuit = } ")
```
We can see Qiskit has performed the tensor product:
$$
X \otimes I =
\begin{bmatrix} 0 & I \\
I & 0\\
\end{bmatrix} =
\begin{bmatrix} 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
\end{bmatrix}
$$
### 2.1 Quick Exercises: <a id="ex2"></a>
1. Calculate the single qubit unitary ($U$) created by the sequence of gates: $U = XZH$. Use Qiskit's Aer simulator to check your results.
2. Try changing the gates in the circuit above. Calculate their tensor product, and then check your answer using the Aer simulator.
**Note:** Different books, software packages and websites order their qubits differently. This means the tensor product of the same circuit can look very different. Try to bear this in mind when consulting other sources.
## 3. Multi-Qubit Gates <a id="multi-qubit-gates"></a>
Now that we know how to represent the state of multiple qubits, we are ready to learn how qubits interact with each other. An important two-qubit gate is the CNOT-gate.
### 3.1 The CNOT-Gate <a id="cnot"></a>
You have come across this gate before in _[The Atoms of Computation](../ch-states/atoms-computation.html)._ This gate is a conditional gate that performs an X-gate on the second qubit (target), if the state of the first qubit (control) is $|1\rangle$. The gate is drawn on a circuit like this, with `q0` as the control and `q1` as the target:
```
qc = QuantumCircuit(2)
# Apply CNOT
qc.cx(0,1)
# See the circuit:
qc.draw()
```
When our qubits are not in superposition of $|0\rangle$ or $|1\rangle$ (behaving as classical bits), this gate is very simple and intuitive to understand. We can use the classical truth table:
| Input (t,c) | Output (t,c) |
|:-----------:|:------------:|
| 00 | 00 |
| 01 | 11 |
| 10 | 10 |
| 11 | 01 |
And acting on our 4D-statevector, it has one of the two matrices:
$$
\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
\end{bmatrix}, \quad
\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{bmatrix}
$$
depending on which qubit is the control and which is the target. Different books, simulators and papers order their qubits differently. In our case, the left matrix corresponds to the CNOT in the circuit above. This matrix swaps the amplitudes of $|01\rangle$ and $|11\rangle$ in our statevector:
$$
|a\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix}, \quad \text{CNOT}|a\rangle = \begin{bmatrix} a_{00} \\ a_{11} \\ a_{10} \\ a_{01} \end{bmatrix} \begin{matrix} \\ \leftarrow \\ \\ \leftarrow \end{matrix}
$$
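To see this amplitude swap concretely, here is a small NumPy sketch using toy (unnormalised) amplitudes chosen only to make the swap visible:

```python
import numpy as np

# CNOT with q0 as control (the left matrix above): swaps a01 and a11
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]])

a = np.array([0.1, 0.2, 0.3, 0.4])  # toy [a00, a01, a10, a11]
print(CNOT @ a)  # [0.1 0.4 0.3 0.2] -- a01 and a11 swapped
```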
We have seen how this acts on classical states, but let’s now see how it acts on a qubit in superposition. We will put one qubit in the state $|+\rangle$:
```
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
qc.draw()
# Let's see the result:
svsim = Aer.get_backend('aer_simulator')
qc.save_statevector()
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# Print the statevector neatly:
array_to_latex(final_state, prefix="\\text{Statevector = }")
```
As expected, this produces the state $|0\rangle \otimes |{+}\rangle = |0{+}\rangle$:
$$
|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |01\rangle)
$$
And let’s see what happens when we apply the CNOT gate:
```
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
# Apply a CNOT:
qc.cx(0,1)
qc.draw()
# Let's get the result:
qc.save_statevector()
qobj = assemble(qc)
result = svsim.run(qobj).result()
# Print the statevector neatly:
final_state = result.get_statevector()
array_to_latex(final_state, prefix="\\text{Statevector = }")
```
We see we have the state:
$$
\text{CNOT}|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)
$$
This state is very interesting to us, because it is _entangled._ This leads us neatly on to the next section.
### 3.2 Entangled States <a id="entangled"></a>
We saw in the previous section we could create the state:
$$
\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)
$$
This is known as a _Bell_ state. We can see that this state has 50% probability of being measured in the state $|00\rangle$, and 50% chance of being measured in the state $|11\rangle$. Most interestingly, it has a **0%** chance of being measured in the states $|01\rangle$ or $|10\rangle$. We can see this in Qiskit:
```
plot_histogram(result.get_counts())
```
This combined state cannot be written as two separate qubit states, which has interesting implications. Although our qubits are in superposition, measuring one will tell us the state of the other and collapse its superposition. For example, if we measured the top qubit and got the state $|1\rangle$, the collective state of our qubits changes like so:
$$
\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \quad \xrightarrow[]{\text{measure}} \quad |11\rangle
$$
Even if we separated these qubits light-years away, measuring one qubit collapses the superposition and appears to have an immediate effect on the other. This is the [‘spooky action at a distance’](https://en.wikipedia.org/wiki/Quantum_nonlocality) that upset so many physicists in the early 20th century.
It’s important to note that the measurement result is random, and the measurement statistics of one qubit are **not** affected by any operation on the other qubit. Because of this, there is **no way** to use shared quantum states to communicate. This is known as the no-communication theorem.[1]
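A small numerical illustration of this point (our own sketch): acting with any unitary on one qubit of the Bell state leaves the other qubit's measurement probabilities untouched.

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def probs_q0(state):
    """Marginal P(q0=0), P(q0=1) from a 2-qubit statevector |q1 q0>."""
    p = np.abs(state)**2
    return np.array([p[0] + p[2], p[1] + p[3]])

# Act on q1 only (the left factor in |q1 q0>): q0's statistics do not change
after = np.kron(H, I) @ bell
print(probs_q0(bell), probs_q0(after))  # both [0.5 0.5]
```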
### 3.3 Visualizing Entangled States<a id="visual"></a>
We have seen that this state cannot be written as two separate qubit states; this also means we lose information when we try to plot our state on separate Bloch spheres:
```
plot_bloch_multivector(final_state)
```
Given how we defined the Bloch sphere in the earlier chapters, it may not be clear how Qiskit even calculates the Bloch vectors with entangled qubits like this. In the single-qubit case, the position of the Bloch vector along an axis nicely corresponds to the expectation value of measuring in that basis. If we take this as _the_ rule of plotting Bloch vectors, we arrive at this conclusion above. This shows us there is _no_ single-qubit measurement basis for which a specific measurement is guaranteed. This contrasts with our single qubit states, in which we could always pick a single-qubit basis. Looking at the individual qubits in this way, we miss the important effect of correlation between the qubits. We cannot distinguish between different entangled states. For example, the two states:
$$\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle) \quad \text{and} \quad \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$
will both look the same on these separate Bloch spheres, despite being very different states with different measurement outcomes.
How else could we visualize this statevector? This statevector is simply a collection of four amplitudes (complex numbers), and there are endless ways we can map this to an image. One such visualization is the _Q-sphere,_ where each amplitude is represented by a blob on the surface of a sphere. The size of the blob is proportional to the magnitude of the amplitude, and the colour corresponds to the phase of the amplitude. The amplitudes for $|00\rangle$ and $|11\rangle$ are equal, and all other amplitudes are 0:
```
from qiskit.visualization import plot_state_qsphere
plot_state_qsphere(final_state)
```
Here we can clearly see the correlation between the qubits. The Q-sphere's shape has no significance; it is simply a nice way of arranging our blobs. The number of `0`s in a state determines its position on the Z-axis, so here we can see the amplitude of $|00\rangle$ is at the top pole of the sphere, and the amplitude of $|11\rangle$ is at the bottom pole of the sphere.
### 3.4 Exercise: <a id="ex3"></a>
1. Create a quantum circuit that produces the Bell state: $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$.
Use the statevector simulator to verify your result.
2. The circuit you created in question 1 transforms the state $|00\rangle$ to $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$, calculate the unitary of this circuit using Qiskit's simulator. Verify this unitary does in fact perform the correct transformation.
3. Think about other ways you could represent a statevector visually. Can you design an interesting visualization from which you can read the magnitude and phase of each amplitude?
## 4. References
[1] Asher Peres, Daniel R. Terno, _Quantum Information and Relativity Theory,_ 2004, https://arxiv.org/abs/quant-ph/0212023
```
import qiskit.tools.jupyter
%qiskit_version_table
```
```
%matplotlib inline
```
Advanced: Making Dynamic Decisions and the Bi-LSTM CRF
======================================================
Dynamic versus Static Deep Learning Toolkits
--------------------------------------------
Pytorch is a *dynamic* neural network kit. Another example of a dynamic
kit is `Dynet <https://github.com/clab/dynet>`__ (I mention this because
working with Pytorch and Dynet is similar. If you see an example in
Dynet, it will probably help you implement it in Pytorch). The opposite
is the *static* tool kit, which includes Theano, Keras, TensorFlow, etc.
The core difference is the following:
* In a static toolkit, you define a computation graph once, compile it, and then stream instances to it.
* In a dynamic toolkit, you define a computation graph *for each instance*. It is never compiled and is executed on-the-fly.
Without a lot of experience, it is difficult to appreciate the
difference. One example is to suppose we want to build a deep
constituent parser. Suppose our model involves roughly the following
steps:
* We build the tree bottom up
* Tag the root nodes (the words of the sentence)
* From there, use a neural network and the embeddings
of the words to find combinations that form constituents. Whenever you
form a new constituent, use some sort of technique to get an embedding
of the constituent. In this case, our network architecture will depend
completely on the input sentence. In the sentence "The green cat
scratched the wall", at some point in the model, we will want to combine
the span $(i,j,r) = (1, 3, \text{NP})$ (that is, an NP constituent
spans word 1 to word 3, in this case "The green cat").
However, another sentence might be "Somewhere, the big fat cat scratched
the wall". In this sentence, we will want to form the constituent
$(2, 4, NP)$ at some point. The constituents we will want to form
will depend on the instance. If we just compile the computation graph
once, as in a static toolkit, it will be exceptionally difficult or
impossible to program this logic. In a dynamic toolkit though, there
isn't just 1 pre-defined computation graph. There can be a new
computation graph for each instance, so this problem goes away.
Dynamic toolkits also have the advantage of being easier to debug and
the code more closely resembling the host language (by that I mean that
Pytorch and Dynet look more like actual Python code than Keras or
Theano).
Bi-LSTM Conditional Random Field Discussion
-------------------------------------------
For this section, we will see a full, complicated example of a Bi-LSTM
Conditional Random Field for named-entity recognition. The LSTM tagger
above is typically sufficient for part-of-speech tagging, but a sequence
model like the CRF is really essential for strong performance on NER.
Familiarity with CRFs is assumed. Although the name sounds scary, the
model is simply a CRF in which an LSTM provides the features. This is
an advanced model though, far more complicated than any earlier model in
this tutorial. If you want to skip it, that is fine. To see if you're
ready, see if you can:
- Write the recurrence for the viterbi variable at step i for tag k.
- Modify the above recurrence to compute the forward variables instead.
- Modify again the above recurrence to compute the forward variables in
log-space (hint: log-sum-exp)
If you can do those three things, you should be able to understand the
code below. Recall that the CRF computes a conditional probability. Let
$y$ be a tag sequence and $x$ an input sequence of words.
Then we compute
\begin{align}P(y|x) = \frac{\exp{(\text{Score}(x, y)})}{\sum_{y'} \exp{(\text{Score}(x, y')})}\end{align}
Where the score is determined by defining some log potentials
$\log \psi_i(x,y)$ such that
\begin{align}\text{Score}(x,y) = \sum_i \log \psi_i(x,y)\end{align}
To make the partition function tractable, the potentials must look only
at local features.
In the Bi-LSTM CRF, we define two kinds of potentials: emission and
transition. The emission potential for the word at index $i$ comes
from the hidden state of the Bi-LSTM at timestep $i$. The
transition scores are stored in a $|T| \times |T|$ matrix
$\textbf{P}$, where $T$ is the tag set. In my
implementation, $\textbf{P}_{j,k}$ is the score of transitioning
to tag $j$ from tag $k$. So:
\begin{align}\text{Score}(x,y) = \sum_i \log \psi_\text{EMIT}(y_i \rightarrow x_i) + \log \psi_\text{TRANS}(y_{i-1} \rightarrow y_i)\end{align}
\begin{align}= \sum_i h_i[y_i] + \textbf{P}_{y_i, y_{i-1}}\end{align}
where in this second expression, we think of the tags as being assigned
unique non-negative indices.
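To make the score concrete, here is a toy NumPy check (our own sketch, with made-up emission and transition values) that normalising by the brute-force partition function really does turn scores into a probability distribution over tag sequences:

```python
import numpy as np
from itertools import product

T, L = 3, 4                    # toy sizes: 3 tags, sequence length 4
rng = np.random.default_rng(0)
h = rng.normal(size=(L, T))    # emission scores h_i[y_i]
P = rng.normal(size=(T, T))    # P[j, k]: score of moving to tag j from tag k

def score(y):
    """Score(x, y) = sum of emission scores plus transition scores."""
    s = sum(h[i, y[i]] for i in range(L))
    s += sum(P[y[i], y[i - 1]] for i in range(1, L))
    return s

# Brute-force partition function over all T**L tag sequences
logZ = np.log(sum(np.exp(score(y)) for y in product(range(T), repeat=L)))
probs = [np.exp(score(y) - logZ) for y in product(range(T), repeat=L)]
print(np.isclose(sum(probs), 1.0))  # True: P(y|x) sums to one
```

The forward algorithm in the model below computes this same $\log Z$ in $O(L \cdot T^2)$ time instead of enumerating all $T^L$ sequences.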
If the above discussion was too brief, you can check out
`this <http://www.cs.columbia.edu/%7Emcollins/crf.pdf>`__ write up from
Michael Collins on CRFs.
Implementation Notes
--------------------
The example below implements the forward algorithm in log space to
compute the partition function, and the viterbi algorithm to decode.
Backpropagation will compute the gradients automatically for us. We
don't have to do anything by hand.
The implementation is not optimized. If you understand what is going on,
you'll probably quickly see that iterating over the next tag in the
forward algorithm could instead be done in one big vectorized operation. I
wanted the code to be more readable. If you make the relevant change,
you could probably use this tagger for real tasks.
```
# Author: Robert Guthrie
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.optim as optim
torch.manual_seed(1)
```
Helper functions to make the code more readable.
```
def argmax(vec):
    # return the argmax as a python int
    _, idx = torch.max(vec, 1)
    return idx.item()


def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)


# Compute log sum exp in a numerically stable way for the forward algorithm
def log_sum_exp(vec):
    max_score = vec[0, argmax(vec)]
    max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])
    return max_score + \
        torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))
```
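The max-subtraction trick in `log_sum_exp` is what keeps the forward algorithm stable when scores are large. A quick plain-NumPy mirror of the same trick (our own sketch; a naive `log(sum(exp(v)))` would overflow here):

```python
import numpy as np

def log_sum_exp_np(vec):
    """Numerically stable log(sum(exp(vec))), mirroring the torch helper above."""
    m = vec.max()
    return m + np.log(np.sum(np.exp(vec - m)))

v = np.array([1000.0, 1000.0])  # naive np.log(np.sum(np.exp(v))) overflows
print(log_sum_exp_np(v))        # 1000 + log(2), about 1000.693
```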
Create model
```
class BiLSTM_CRF(nn.Module):

    def __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim):
        super(BiLSTM_CRF, self).__init__()
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.tag_to_ix = tag_to_ix
        self.tagset_size = len(tag_to_ix)

        self.word_embeds = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2,
                            num_layers=1, bidirectional=True)

        # Maps the output of the LSTM into tag space.
        self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size)

        # Matrix of transition parameters. Entry i,j is the score of
        # transitioning *to* i *from* j.
        self.transitions = nn.Parameter(
            torch.randn(self.tagset_size, self.tagset_size))

        # These two statements enforce the constraint that we never transfer
        # to the start tag and we never transfer from the stop tag
        self.transitions.data[tag_to_ix[START_TAG], :] = -10000
        self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000

        self.hidden = self.init_hidden()

    def init_hidden(self):
        return (torch.randn(2, 1, self.hidden_dim // 2),
                torch.randn(2, 1, self.hidden_dim // 2))

    def _forward_alg(self, feats):
        # Do the forward algorithm to compute the partition function
        init_alphas = torch.full((1, self.tagset_size), -10000.)
        # START_TAG has all of the score.
        init_alphas[0][self.tag_to_ix[START_TAG]] = 0.

        # Wrap in a variable so that we will get automatic backprop
        forward_var = init_alphas

        # Iterate through the sentence
        for feat in feats:
            alphas_t = []  # The forward tensors at this timestep
            for next_tag in range(self.tagset_size):
                # broadcast the emission score: it is the same regardless of
                # the previous tag
                emit_score = feat[next_tag].view(
                    1, -1).expand(1, self.tagset_size)
                # the ith entry of trans_score is the score of transitioning to
                # next_tag from i
                trans_score = self.transitions[next_tag].view(1, -1)
                # The ith entry of next_tag_var is the value for the
                # edge (i -> next_tag) before we do log-sum-exp
                next_tag_var = forward_var + trans_score + emit_score
                # The forward variable for this tag is log-sum-exp of all the
                # scores.
                alphas_t.append(log_sum_exp(next_tag_var).view(1))
            forward_var = torch.cat(alphas_t).view(1, -1)
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        alpha = log_sum_exp(terminal_var)
        return alpha

    def _get_lstm_features(self, sentence):
        self.hidden = self.init_hidden()
        embeds = self.word_embeds(sentence).view(len(sentence), 1, -1)
        lstm_out, self.hidden = self.lstm(embeds, self.hidden)
        lstm_out = lstm_out.view(len(sentence), self.hidden_dim)
        lstm_feats = self.hidden2tag(lstm_out)
        return lstm_feats

    def _score_sentence(self, feats, tags):
        # Gives the score of a provided tag sequence
        score = torch.zeros(1)
        tags = torch.cat([torch.tensor([self.tag_to_ix[START_TAG]], dtype=torch.long), tags])
        for i, feat in enumerate(feats):
            score = score + \
                self.transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]]
        score = score + self.transitions[self.tag_to_ix[STOP_TAG], tags[-1]]
        return score

    def _viterbi_decode(self, feats):
        backpointers = []

        # Initialize the viterbi variables in log space
        init_vvars = torch.full((1, self.tagset_size), -10000.)
        init_vvars[0][self.tag_to_ix[START_TAG]] = 0

        # forward_var at step i holds the viterbi variables for step i-1
        forward_var = init_vvars
        for feat in feats:
            bptrs_t = []  # holds the backpointers for this step
            viterbivars_t = []  # holds the viterbi variables for this step

            for next_tag in range(self.tagset_size):
                # next_tag_var[i] holds the viterbi variable for tag i at the
                # previous step, plus the score of transitioning
                # from tag i to next_tag.
                # We don't include the emission scores here because the max
                # does not depend on them (we add them in below)
                next_tag_var = forward_var + self.transitions[next_tag]
                best_tag_id = argmax(next_tag_var)
                bptrs_t.append(best_tag_id)
                viterbivars_t.append(next_tag_var[0][best_tag_id].view(1))
            # Now add in the emission scores, and assign forward_var to the set
            # of viterbi variables we just computed
            forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1)
            backpointers.append(bptrs_t)

        # Transition to STOP_TAG
        terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
        best_tag_id = argmax(terminal_var)
        path_score = terminal_var[0][best_tag_id]

        # Follow the back pointers to decode the best path.
        best_path = [best_tag_id]
        for bptrs_t in reversed(backpointers):
            best_tag_id = bptrs_t[best_tag_id]
            best_path.append(best_tag_id)
        # Pop off the start tag (we don't want to return that to the caller)
        start = best_path.pop()
        assert start == self.tag_to_ix[START_TAG]  # Sanity check
        best_path.reverse()
        return path_score, best_path

    def neg_log_likelihood(self, sentence, tags):
        feats = self._get_lstm_features(sentence)
        forward_score = self._forward_alg(feats)
        gold_score = self._score_sentence(feats, tags)
        return forward_score - gold_score

    def forward(self, sentence):  # don't confuse this with _forward_alg above.
        # Get the emission scores from the BiLSTM
        lstm_feats = self._get_lstm_features(sentence)

        # Find the best path, given the features.
        score, tag_seq = self._viterbi_decode(lstm_feats)
        return score, tag_seq
```
Run training
```
START_TAG = "<START>"
STOP_TAG = "<STOP>"
EMBEDDING_DIM = 5
HIDDEN_DIM = 4

# Make up some training data
training_data = [(
    "the wall street journal reported today that apple corporation made money".split(),
    "B I I I O O O B I O O".split()
), (
    "georgia tech is a university in georgia".split(),
    "B I O O O O B".split()
)]

word_to_ix = {}
for sentence, tags in training_data:
    for word in sentence:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)

tag_to_ix = {"B": 0, "I": 1, "O": 2, START_TAG: 3, STOP_TAG: 4}

model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# Check predictions before training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    precheck_tags = torch.tensor([tag_to_ix[t] for t in training_data[0][1]], dtype=torch.long)
    print(model(precheck_sent))

# Make sure prepare_sequence from earlier in the LSTM section is loaded
for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Step 2. Get our inputs ready for the network, that is,
        # turn them into Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = torch.tensor([tag_to_ix[t] for t in tags], dtype=torch.long)

        # Step 3. Run our forward pass.
        loss = model.neg_log_likelihood(sentence_in, targets)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss.backward()
        optimizer.step()

# Check predictions after training
with torch.no_grad():
    precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)
    print(model(precheck_sent))
# We got it!
```
Exercise: A new loss function for discriminative tagging
--------------------------------------------------------
It wasn't really necessary for us to create a computation graph when
doing decoding, since we do not backpropagate from the viterbi path
score. Since we have it anyway, try training the tagger where the loss
function is the difference between the Viterbi path score and the score
of the gold-standard path. It should be clear that this function is
non-negative and 0 when the predicted tag sequence is the correct tag
sequence. This is essentially *structured perceptron*.
This modification should be short, since Viterbi and score\_sentence are
already implemented. This is an example of the shape of the computation
graph *depending on the training instance*. Although I haven't tried
implementing this in a static toolkit, I imagine that it is possible but
much less straightforward.
Pick up some real data and do a comparison!
# Facial Expression Recognition Project
## Library Installations and Imports
```
!pip install -U -q PyDrive
!apt-get -qq install -y graphviz && pip install -q pydot
!pip install -q keras
from google.colab import files
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import pydot
import tensorflow as tf
from tensorflow.python.client import device_lib
from keras.models import Sequential
from keras.layers import Conv2D, LocallyConnected2D, MaxPooling2D, Dense
from keras.layers import Activation, Dropout, Flatten
from keras.callbacks import EarlyStopping
from keras.utils import plot_model, to_categorical
from keras import backend as K
```
### Confirm Tensorflow and GPU Support
```
K.tensorflow_backend._get_available_gpus()
device_lib.list_local_devices()
tf.test.gpu_device_name()
```
## Helper Functions
```
def uploadFiles():
    uploaded = files.upload()
    for fn in uploaded.keys():
        print('User uploaded file "{name}" with length {length} bytes'.format(
            name=fn, length=len(uploaded[fn])))
    filenames = list(uploaded.keys())
    for f in filenames:
        data = str(uploaded[f], 'utf-8')
        file = open(f, 'w')
        file.write(data)
        file.close()

def pullImage(frame, index: int):
    """
    Takes in a pandas data frame object and an index and returns the 48 x 48 pixel
    matrix as well as the label for the type of emotion.
    """
    img = frame.loc[index]['pixels'].split(' ')
    img = np.array([int(i) for i in img])  # use the builtin int; np.int is deprecated
    img.resize(48, 48)
    label = np.uint8(frame.loc[index]['emotion'])
    return img, label

def splitImage_Labels(frame):
    """
    Takes in a pandas data frame object filled with pixel field and label field
    and returns two numpy arrays; one for images and one for labels.
    """
    labels = np.empty(len(frame))
    images = np.empty((len(frame), 48, 48, 1))  # using channel-last notation.
    for i in range(len(frame)):
        img, lbl = pullImage(frame, i)
        img = np.reshape(img, (48, 48, 1))
        images[i], labels[i] = img, lbl
    return images.astype(np.uint8), to_categorical(labels, 7).astype(np.uint8)
```
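As a quick self-contained check of the parsing logic above, the same steps can be run on a synthetic one-row frame (the pixel string and label here are made up for the test):

```python
import numpy as np
import pandas as pd

# One fake FER-style row: 48*48 space-separated pixel values, emotion label 3
row = {'emotion': 3, 'pixels': ' '.join(['0'] * (48 * 48 - 1) + ['255'])}
frame = pd.DataFrame([row])

img = np.array([int(p) for p in frame.loc[0]['pixels'].split(' ')])
img.resize(48, 48)
label = np.uint8(frame.loc[0]['emotion'])
print(img.shape, img[47, 47], label)  # (48, 48) 255 3
```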
## Import FER2013 Dataset and Other Files
```
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
fer2013 = drive.CreateFile({'id':'1Xdlvej7eXaVcfCf3CsQ1LcSFAiNx_63c'})
fer2013.GetContentFile('fer2013file.csv')
```
Save file as a pandas dataframe.
```
df = pd.read_csv('fer2013file.csv')
```
## Parse Data
Each image is a 48 x 48 grayscale photo.
The contents of each pixel string are space-separated pixel values in row-major order.
Emotion label convention:
* 0 = Angry
* 1 = Disgust
* 2 = Fear
* 3 = Happy
* 4 = Sad
* 5 = Surprise
* 6 = Neutral
```
df_Training = df[df.Usage == 'Training']
df_Testing = df[df.Usage == 'PrivateTest'].reset_index(drop = True)
img_train, lbl_train = splitImage_Labels(df_Training)
img_test, lbl_test = splitImage_Labels(df_Testing)
print('Type and Shape of Image Datasets: ' + '\n\tTraining: ' + '\t' +
      str(type(img_train[0][0][0][0])) + '\t' + str(img_train.shape) +
      '\n\tTesting: ' + '\t' + str(type(img_test[0][0][0][0])) + '\t' +
      str(img_test.shape))
print('Type and Shape of Label Datasets: ' + '\n\tTraining: ' + '\t' +
      str(type(lbl_train[0][0])) + '\t' + str(lbl_train.shape) +
      '\n\tTesting: ' + '\t' + str(type(lbl_test[0][0])) + '\t' +
      str(lbl_test.shape))
```
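The `pixels` parsing above can be checked in isolation; a standalone sketch using a synthetic pixel string (the real CSV rows are assumed to hold 2304 space-separated values):

```python
import numpy as np

# synthetic stand-in for one row's 'pixels' field (2304 = 48 * 48 values)
pixel_str = ' '.join(str(i % 256) for i in range(48 * 48))

img = np.array(pixel_str.split(' '), dtype=np.uint8)
img = img.reshape(48, 48)  # row-major, matching the dataset convention

print(img.shape)   # (48, 48)
print(img[0, 1])   # 1: second value fills the second column of the first row
```

Because the values are row-major, index 48 of the flat list lands at position `[1, 0]`, which is exactly what `img.resize(48, 48)` in `pullImage` produces.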
### Save Data to .npy Files
```
#np.save('img_train.npy', img_train)
#np.save('lbl_train.npy', lbl_train)
#np.save('img_test.npy', img_test)
#np.save('lbl_test.npy', lbl_test)
```
### Verify Image Import
```
plt.imshow(np.reshape(img_train[0], (48,48)))
plt.title('Training Image 1 (with label ' + str(lbl_train[0]) + ')')
plt.show()
plt.imshow(np.reshape(img_test[0], (48,48)))
plt.title('Testing Image 1 (with label ' + str(lbl_test[0]) + ')')
plt.show()
```
## Build Convolutional Neural Network Model
```
model = Sequential()
```
### Phase 1
- Locally-Connected Convolutional Filtering Phase.
- The locally-connected layer works similarly to the traditional 2D convolutional layer, except that weights are unshared; that is, a different set of filters is applied at each patch of the input.
- **Output Filters: 8**
- **Kernel Size: 4x4**
- **Stride: 1 (default)**
- **Non-Active Padding ('valid')**
```
outputFilters = 8
kernelSize = 4
model.add(LocallyConnected2D(outputFilters, kernelSize, padding='valid',
activation='relu', input_shape=img_train[0].shape))
model.summary()
```
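The unshared weights show up directly in the parameter count reported by `model.summary()`; a back-of-the-envelope check for this configuration (assuming one bias per filter per output location, Keras's default for `LocallyConnected2D` with `use_bias=True`):

```python
# Parameter counts for a 48x48x1 input, 4x4 kernel, 8 filters, 'valid' padding, stride 1.
in_h = in_w = 48
in_ch, filters, k = 1, 8, 4
out_h = out_w = in_h - k + 1  # 45 with 'valid' padding and stride 1

per_patch = k * k * in_ch * filters + filters   # weights + biases at one output location
conv2d_params = per_patch                       # Conv2D shares one filter set everywhere
local_params = out_h * out_w * per_patch        # LocallyConnected2D: one set per location

print(conv2d_params)  # 136
print(local_params)   # 275400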
### Phase 2
- Convolutional and Max Pooling Phase.
- **Kernel Size: 3x3**
- **Output Filters: 32**
- **Stride: 1 (default)**
- **Active Padding ('same')**
```
outputFilters = 32
kernelSize = 3
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.summary()
```
### Phase 3
- Convolutional and Max Pooling Phase.
- **Kernel Size: 3x3**
- **Output Filters: 64**
- **Stride: 1 (default)**
- **Active Padding ('same')**
```
outputFilters = 64
kernelSize = 3
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.summary()
```
### Phase 4
- Convolutional and Max Pooling Phase.
- **Kernel Size: 3x3**
- **Output Filters: 128**
- **Stride: 1 (default)**
- **Active Padding ('same')**
```
outputFilters = 128
kernelSize = 3
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(Conv2D(outputFilters, kernelSize, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.summary()
```
### Dense Layers
```
layerSize = 64
dropoutRate = 0.5
model.add(Flatten())
model.add(Dense(layerSize, activation='relu'))
model.add(Dropout(dropoutRate))
model.add(Dense(layerSize, activation='relu'))
model.add(Dropout(dropoutRate))
model.add(Dense(7, activation='softmax'))
model.summary()
```
### Show Model Structure
```
plot_model(model, to_file='model.png', show_shapes=True)
from IPython.display import Image
Image(filename='model.png')
```
## Compile, Train, and Evaluate the Model
```
batchSize = 128
trainingEpochs = 50
model.compile(optimizer='adam', loss='categorical_crossentropy',
metrics=['accuracy'])
#early_stopping = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
trainingHistory = model.fit(img_train, lbl_train, batch_size=batchSize,
epochs=trainingEpochs,
validation_split=0.3,
#callbacks=[early_stopping],
shuffle=True,)
trainingAccuracy = trainingHistory.history['acc']  # key is 'accuracy' in newer Keras versions
validationAccuracy = trainingHistory.history['val_acc']  # 'val_accuracy' in newer Keras versions
print("Done Training: ")
print('Final Training Accuracy: ', trainingAccuracy[-1])
print('Final Validation Accuracy: ', validationAccuracy[-1])
print('Overfit Ratio: ', validationAccuracy[-1]/trainingAccuracy[-1])
metrics = model.evaluate(img_test, lbl_test, batch_size=batchSize, verbose=1)
print('Evaluation Loss: ', metrics[0])
print('Evaluation Accuracy: ', metrics[1])
```
```
import param
import panel as pn
pn.extension('katex')
```
The [Param user guide](Param.ipynb) described how to set up classes that declare parameters and link them to some computation or visualization. In this section we will discover how to connect multiple such panels into a ``Pipeline`` to express complex multi-page workflows where the output of one stage feeds into the next stage.
To start using a ``Pipeline``, let us declare an empty one by instantiating the class:
```
pipeline = pn.pipeline.Pipeline()
```
Having set up a Pipeline it is now possible to start populating it with one or more stages. We have previously seen how a ``Parameterized`` class can have parameters controlling some visualization or computation on a method, with that linkage declared using the ``param.depends`` decorator. To use such classes as a pipeline stage, the `Parameterized` class will also need to designate at least one of the methods as an "output", and it should also provide a visual representation for that pipeline stage.
To declare the output for a stage, decorate one of its methods with ``param.output``. A ``Pipeline`` will use this information to determine what outputs are available to be fed into the next stage of the workflow. In the example below, the ``Stage1`` class has two parameters (``a`` and ``b``) and one output (``c``). The signature of the decorator allows a number of different ways of declaring the outputs:
* ``param.output()``: Declaring an output without arguments will declare that the method returns an output that will inherit the name of the method and does not make any specific type declarations.
* ``param.output(param.Number)``: Declaring an output with a specific ``Parameter`` or Python type also declares an output with the name of the method but declares that the output will be of a specific type.
* ``param.output(c=param.Number)``: Declaring an output using a keyword argument allows overriding the method name as the name of the output and declares the type.
It is also possible to declare multiple outputs, either as keywords (Python >= 3.6 required) or tuples:
* ``param.output(c=param.Number, d=param.String)`` or
* ``param.output(('c', param.Number), ('d', param.String))``
The example below takes two inputs (``a`` and ``b``) and produces two outputs (``c``, computed by multiplying the inputs, and ``d``, computed by raising ``a`` to the power ``b``). To use the class as a pipeline stage, we also have to implement a ``panel`` method, which returns a Panel object providing a visual representation of the stage. Here we help implement the ``panel`` method using an additional ``view`` method that returns a ``LaTeX`` pane, which will render the equation using LaTeX.
In addition to passing along the outputs, the Pipeline will also pass along the values of any input parameters whose names match input parameters on the next stage (unless ``inherit_params`` is set to `False`).
Let's start by displaying this stage on its own:
```
class Stage1(param.Parameterized):
a = param.Number(default=5, bounds=(0, 10))
b = param.Number(default=5, bounds=(0, 10))
ready = param.Boolean(default=False, precedence=-1)
@param.output(('c', param.Number), ('d', param.Number))
def output(self):
return self.a * self.b, self.a ** self.b
@param.depends('a', 'b')
def view(self):
c, d = self.output()
return pn.pane.LaTeX('${a} * {b} = {c}$\n${a}^{{{b}}} = {d}$'.format(
a=self.a, b=self.b, c=c, d=d), style={'font-size': '2em'})
def panel(self):
return pn.Row(self.param, self.view)
stage1 = Stage1()
stage1.panel()
```
To summarize, we have followed several conventions when setting up this stage of our ``Pipeline``:
1. Declare a Parameterized class with some input parameters.
2. Declare one or more methods decorated with the `param.output` decorator.
3. Declare a ``panel`` method that returns a view of the object that the ``Pipeline`` can render.
Now that the object has been instantiated we can also query it for its outputs:
```
stage1.param.outputs()
```
We can see that ``Stage1`` declares outputs named ``c`` and ``d`` of type ``param.Number`` that can be accessed by calling the ``output`` method on the object. Now let us add this stage to our ``Pipeline`` using the ``add_stage`` method:
```
pipeline.add_stage('Stage 1', stage1)
```
The ``add_stage`` method takes the name of the stage as its first argument, the stage class or instance as the second parameter, and any additional keyword arguments if you want to override default behavior.
A ``Pipeline`` with only a single stage is not much of a ``Pipeline``, of course! So let's set up a second stage, consuming the output of the first. Recall that ``Stage1`` declares an output named ``c``. If output ``c`` from ``Stage1`` should flow to ``Stage2``, the latter should declare a ``Parameter`` named ``c`` to consume the output of the first stage. ``Stage2`` does not have to consume all parameters, and here we will ignore output ``d``.
In the second stage we will define parameters ``c`` and ``exp``. To make sure that we don't get a widget for ``c`` (as it will be set by the previous stage, not the user), we'll set its precedence to be negative (which tells Panel to skip creating a widget for it). In other respects this class is very similar to the first one; it declares both a ``view`` method that depends on the parameters of the class, and a ``panel`` method that returns a view of the object.
```
class Stage2(param.Parameterized):
c = param.Number(default=5, bounds=(0, None))
exp = param.Number(default=0.1, bounds=(0, 3))
@param.depends('c', 'exp')
def view(self):
return pn.pane.LaTeX('${%s}^{%s}={%.3f}$' % (self.c, self.exp, self.c**self.exp),
style={'font-size': '2em'})
def panel(self):
return pn.Row(self.param, self.view)
stage2 = Stage2(c=stage1.output()[0])
stage2.panel()
```
Now that we have declared the second stage of the pipeline, let us add it to the ``Pipeline`` object:
```
pipeline.add_stage('Stage 2', stage2)
```
And that's it; we have now declared a two-stage pipeline, where the output ``c`` flows from the first stage into the second stage. To begin with we can `print` the pipeline to see the stages:
```
print(pipeline)
```
To display the `pipeline` we simply let it render itself:
```
pipeline = pn.pipeline.Pipeline(debug=True)
pipeline.add_stage('Stage 1', Stage1())
pipeline.add_stage('Stage 2', Stage2)
pipeline
```
As you can see the ``Pipeline`` renders a little diagram displaying the available stages in the workflow along with previous and next buttons to move between each stage. This allows setting up complex workflows with multiple stages, where each component is a self-contained unit, with minimal declarations about stage outputs (using the ``param.output`` decorator) and how to render the stage (by declaring a ``panel`` method). Note also when progressing to Stage 2, the `c` parameter widget is not rendered because its value has been provided by the previous stage.
Above we created the ``Pipeline`` as we went along, which makes some sense in a notebook to allow debugging and development of each stage. When deploying the Pipeline as a server app, or when there's no reason to instantiate each stage separately, we can instead declare the stages as part of the constructor:
```
pipeline = pn.pipeline.Pipeline([('Stage 1', Stage1), ('Stage 2', Stage2)])
pipeline
```
Pipeline stages may be either ``Parameterized`` instances or ``Parameterized`` classes. With classes, the class will not be instantiated until that stage is reached, which lets you postpone allocating memory, reading files, querying databases, and other expensive actions that might be in the constructor until the parameters for that stage have been collected from previous stages. You can also use an instantiated ``Parameterized`` object instance, e.g. if you want to set some parameters to non-default values before putting the stage into the pipeline, but in that case you will need to ensure that updating the parameters of the instantiated object is sufficient to update the full current state of that stage.
## Non-linear pipelines
Pipelines are not limited to simple linear UI workflows like the ones listed above. They support any arbitrary branching structures, i.e., an acyclic graph. A simple example might be a workflow with two alternative stages that rejoin at a later point. In the very simple example below we declare four stages: an `Input`, `Multiply`, `Add`, and `Result`.
```
class Input(param.Parameterized):
value1 = param.Integer(default=2)
value2 = param.Integer(default=3)
def panel(self):
return pn.Row(self.param.value1, self.param.value2)
class Multiply(Input):
def panel(self):
return '%.3f * %.3f' % (self.value1, self.value2)
@param.output('result')
def output(self):
return self.value1 * self.value2
class Add(Input):
def panel(self):
return '%d + %d' % (self.value1, self.value2)
@param.output('result')
def output(self):
return self.value1 + self.value2
class Result(Input):
result = param.Number(default=0)
def panel(self):
return self.result
dag = pn.pipeline.Pipeline()
dag.add_stage('Input', Input)
dag.add_stage('Multiply', Multiply)
dag.add_stage('Add', Add)
dag.add_stage('Result', Result)
```
After adding all the stages we have to express the relationship between these stages. To declare the graph we can use the ``define_graph`` method and provide an adjacency map, which declares which stage feeds into which other stages. In this case the `Input` feeds into both ``Multiply`` and ``Add`` and both those stages feed into the ``Result``:
```
dag.define_graph({'Input': ('Multiply', 'Add'), 'Multiply': 'Result', 'Add': 'Result'})
```
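Panel expects this adjacency map to describe an acyclic graph; a standalone sketch of how such a map could be validated with Kahn's algorithm (the `is_acyclic` helper is hypothetical, not part of Panel's API):

```python
def is_acyclic(adjacency):
    """Return True if the stage graph has no cycles (Kahn's algorithm)."""
    # Normalize: the map may give a single stage name or a tuple of them.
    graph = {k: (v,) if isinstance(v, str) else tuple(v) for k, v in adjacency.items()}
    nodes = set(graph) | {n for targets in graph.values() for n in targets}
    indegree = {n: 0 for n in nodes}
    for targets in graph.values():
        for n in targets:
            indegree[n] += 1
    queue = [n for n in nodes if indegree[n] == 0]
    visited = 0
    while queue:
        node = queue.pop()
        visited += 1
        for n in graph.get(node, ()):
            indegree[n] -= 1
            if indegree[n] == 0:
                queue.append(n)
    return visited == len(nodes)  # every node reached iff no cycle

print(is_acyclic({'Input': ('Multiply', 'Add'), 'Multiply': 'Result', 'Add': 'Result'}))  # True
print(is_acyclic({'A': 'B', 'B': 'A'}))  # False
```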
This is of course a very simple example but it demonstrates the ability to express arbitrary workflows with branching and converging steps:
```
dag
```
## Custom layout
For a `Pipeline` object `p`, `p.layout` is a Panel layout with the following hierarchically arranged components:
* `layout`: The overall layout of the header and stage.
  - `header`: The navigation components and network diagram.
    * `title`: The name of the current stage.
    * `network`: A network diagram representing the pipeline.
    * `buttons`: All navigation buttons and selectors.
      - `prev_button`: The button to go to the previous stage.
      - `prev_selector`: The selector widget to select between previous branching stages.
      - `next_button`: The button to go to the next stage.
      - `next_selector`: The selector widget to select the next branching stages.
  - `stage`: The contents of the current pipeline stage.
You can pick and choose any combination of these components to display in any configuration, e.g. just `pn.Column(p.title,p.network,p.stage)` if you don't want to show any buttons for a pipeline `p`.
For instance, let's rearrange our `dag` pipeline to fit into a smaller horizontal space:
```
pn.Column(
pn.Row(dag.title, pn.layout.HSpacer(), dag.buttons),
dag.network,
dag.stage
)
```
## Programmatic flow control
By default, controlling the flow between different stages is done using the "Previous" and "Next" buttons. However, we often want to control the UI flow programmatically from within a stage. A `Pipeline` allows programmatic control by declaring a `ready_parameter` either per stage or globally on the Pipeline, which can block or unblock the buttons depending on the information obtained so far, as well as advancing automatically when combined with the ``auto_advance`` parameter. In this way we can control the workflow programmatically from inside the stages.
In the example below we create a version of the previous workflow that can be used without the buttons by declaring ``ready`` parameters for each of the stages, which we can toggle with a custom button or simply set to `True` by default to automatically skip the stage.
Lastly, we can also control which branching stage to switch to from within a stage. To do so we declare a parameter which will hold the name of the next stage to switch to, in this case selecting between 'Add' and 'Multiply'. Later we will point the pipeline to this parameter using the `next_parameter` argument.
```
class AutoInput(Input):
operator = param.Selector(default='Add', objects=['Multiply', 'Add'])
ready = param.Boolean(default=False)
def panel(self):
button = pn.widgets.Button(name='Go', button_type='success')
button.on_click(lambda event: setattr(self, 'ready', True))
widgets = pn.Row(self.param.value1, self.param.operator, self.param.value2)
for w in widgets:
w.width = 85
return pn.Column(widgets, button)
class AutoMultiply(Multiply):
ready = param.Boolean(default=True)
class AutoAdd(Add):
ready = param.Boolean(default=True)
```
Now that we have declared these stages let us set up the pipeline, ensuring that we declare the `ready_parameter`, `next_parameter`, and `auto_advance` settings appropriately:
```
dag = pn.pipeline.Pipeline() # could instead set ready_parameter='ready' and auto_advance=True globally here
dag.add_stage('Input', AutoInput, ready_parameter='ready', auto_advance=True, next_parameter='operator')
dag.add_stage('Multiply', AutoMultiply, ready_parameter='ready', auto_advance=True)
dag.add_stage('Add', AutoAdd, ready_parameter='ready', auto_advance=True)
dag.add_stage('Result', Result)
dag.define_graph({'Input': ('Multiply', 'Add'), 'Multiply': 'Result', 'Add': 'Result'})
```
Finally we display the pipeline without the buttons, which is appropriate because all the flow control is now handled from within the stages:
```
pn.Column(
dag.title,
dag.network,
dag.stage
)
```
As you can see, a panel Pipeline can be used to set up complex workflows when needed, with each stage controlled either manually or from within the stage, without having to define complex callbacks or other GUI logic.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def distance_measure(x, mu, measure='Euclidean'):
# x = np.atleast_2d(x)
# mu = np.atleast_2d(mu)
    if measure == 'Euclidean':
        return np.sum((x[:, np.newaxis]-mu)**2, axis=-1)  # squared distance; monotonic, so fine for argmin
if measure == 'Manhattan':
return np.sum(np.abs(x[:, np.newaxis]-mu), axis=-1)
if measure == 'Cosine_Similarity':
x_norm = np.sqrt(np.sum(x**2, keepdims=True, axis=-1))
mu_norm = np.sqrt(np.sum(mu**2, keepdims=True, axis=-1))
return np.dot(x, mu.T) / np.dot(x_norm, mu_norm.T)
"""
1) Implement the k-means clustering algorithm with Euclidean distance to cluster the instances
into k clusters.
"""
def my_kmeans(x, k=4, measure='Euclidean'):
"""
k: the number of centres
"""
    # n: number of samples, m: number of features per sample
n, m = x.shape
old_clusters_dict, clusters_dict = {}, {}
# 1. Set k instances from the dataset randomly. (initial cluster centers)
centres = np.random.permutation(np.arange(n))[:k]
# get the centres
mu = x[centres]
maxiter_num = 100
iter_num = 0
while (clusters_dict == {} or old_clusters_dict != clusters_dict) and iter_num<maxiter_num:
old_clusters_dict = clusters_dict
clusters_dict = {}
        # calculate the distance, distance shape: n x k
distance = distance_measure(x, mu, measure)
if measure == 'Cosine_Similarity':
clusters = np.argmax(distance, axis=-1)
else:
clusters = np.argmin(distance, axis=-1)
# 2. Assign all other instances to the closest cluster centre.
for i in range(k):
clusters_dict[i] = []
for (idx, cluster) in enumerate(clusters):
clusters_dict[cluster].append(idx)
# 3. Compute the mean of each cluster
for cluster in range(k):
mu[cluster] = np.mean(x[clusters_dict[cluster]], axis=0)
iter_num += 1
# 4. until convergence repeat between steps 2 and 3
return clusters_dict
#tp, fn, fp, tn, tp_fp, tn_fn = 6 * [np.zeros(10, dtype='int')]  # buggy: list multiplication would make all six names alias one array
def get_PRF(x, xclass, measure='Euclidean'):
tp = np.zeros(10)
fn = np.zeros(10)
fp = np.zeros(10)
tn = np.zeros(10)
tp_fp = np.zeros(10)
tn_fn = np.zeros(10)
num_class = 4
for k in range(1, 11):
cluster_class = np.zeros((k, num_class))
xcluster = my_kmeans(x, k, measure=measure)
for id_cluster in range(k):
for item in xcluster[id_cluster]:
id_class = xclass[item]
cluster_class[id_cluster][id_class] += 1
# print(cluster_class)
same_cluster = np.sum(cluster_class, axis=-1)
tp_fp[k-1] = np.sum((same_cluster - 1) * same_cluster / 2)
tp[k-1] = np.sum((cluster_class - 1)* cluster_class / 2)
fp[k-1] = tp_fp[k-1] - tp[k-1]
        # tn + fn: all pairs of points that fall in different clusters
        for id_cluster1 in range(k):
            for id_cluster2 in range(id_cluster1 + 1, k):
                tn_fn[k-1] += same_cluster[id_cluster1] * same_cluster[id_cluster2]
        # fn: pairs of same-class points split across different clusters
        for id_class in range(num_class):
            for id_cluster1 in range(k):
                for id_cluster2 in range(id_cluster1 + 1, k):
                    fn[k-1] += cluster_class[id_cluster1][id_class] * cluster_class[id_cluster2][id_class]
        tn[k-1] = tn_fn[k-1] - fn[k-1]
precision = tp / (tp + fp)
recall = tp / (tp + fn)
fscore = (2 * precision * recall) / (precision + recall)
return precision, recall, fscore
```
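The pair-counting bookkeeping in `get_PRF` can be sanity-checked by brute force over all point pairs; a standalone sketch on a toy assignment (assuming the usual pair-based definitions of TP/FP/FN/TN for clustering evaluation):

```python
from itertools import combinations

# toy assignment: each point has a predicted cluster and a true class
clusters = [0, 0, 0, 1, 1, 1]
classes  = [0, 0, 1, 1, 1, 0]

tp = fp = fn = tn = 0
for i, j in combinations(range(len(clusters)), 2):
    same_cluster = clusters[i] == clusters[j]
    same_class = classes[i] == classes[j]
    if same_cluster and same_class:
        tp += 1          # pair correctly grouped together
    elif same_cluster:
        fp += 1          # pair wrongly grouped together
    elif same_class:
        fn += 1          # pair wrongly split apart
    else:
        tn += 1          # pair correctly split apart

print(tp, fp, fn, tn)    # 2 4 4 5, over all 15 pairs
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)
```

The four counts must sum to the total number of pairs, `n * (n - 1) / 2`, which is a useful invariant to assert against the contingency-table formulas used above.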
# Normalizing Flows - Introduction (Part 1)
This tutorial introduces Pyro's normalizing flow library. It is independent of much of Pyro, but users may want to read about distribution shapes in the [Tensor Shapes Tutorial](http://pyro.ai/examples/tensor_shapes.html).
## Introduction
In standard probabilistic modeling practice, we represent our beliefs over unknown continuous quantities with simple parametric distributions like the normal, exponential, and Laplacian distributions. However, using such simple forms, which are commonly symmetric and unimodal (or have a fixed number of modes when we take a mixture of them), restricts the performance and flexibility of our methods. For instance, standard variational inference in the Variational Autoencoder uses independent univariate normal distributions to represent the variational family. The true posterior is neither independent nor normally distributed, which results in suboptimal inference and simplifies the model that is learnt. In other scenarios, we are likewise restricted by not being able to model multimodal distributions and heavy or light tails.
Normalizing Flows \[1-4\] are a family of methods for constructing flexible learnable probability distributions, often with neural networks, which allow us to surpass the limitations of simple parametric forms. Pyro contains state-of-the-art normalizing flow implementations, and this tutorial explains how you can use this library for learning complex models and performing flexible variational inference. We introduce the main idea of Normalizing Flows (NFs) and demonstrate learning simple univariate distributions with element-wise, multivariate, and conditional flows.
## Univariate Distributions
### Background
Normalizing Flows are a family of methods for constructing flexible distributions. Let's first restrict our attention to representing univariate distributions. The basic idea is that a simple source of noise, for example a variable with a standard normal distribution, $X\sim\mathcal{N}(0,1)$, is passed through a bijective (i.e. invertible) function, $g(\cdot)$ to produce a more complex transformed variable $Y=g(X)$.
For a given random variable, we typically want to perform two operations: sampling and scoring. Sampling $Y$ is trivial. First, we sample $X=x$, then calculate $y=g(x)$. Scoring $Y$, or rather, evaluating the log-density $\log(p_Y(y))$, is more involved. How does the density of $Y$ relate to the density of $X$? We can use the substitution rule of integral calculus to answer this. Suppose we want to evaluate the expectation of some function of $X$. Then,
$$
\begin{align}
\mathbb{E}_{p_X(\cdot)}\left[f(X)\right] &= \int_{\text{supp}(X)}f(x)p_X(x)dx\\
&= \int_{\text{supp}(Y)}f(g^{-1}(y))p_X(g^{-1}(y))\left|\frac{dx}{dy}\right|dy\\
&= \mathbb{E}_{p_Y(\cdot)}\left[f(g^{-1}(Y))\right],
\end{align}
$$
where $\text{supp}(X)$ denotes the support of $X$, which in this case is $(-\infty,\infty)$. Crucially, we used the fact that $g$ is bijective to apply the substitution rule in going from the first to the second line. Equating the last two lines we get,
$$
\begin{align}
\log(p_Y(y)) &= \log(p_X(g^{-1}(y)))+\log\left(\left|\frac{dx}{dy}\right|\right)\\
&= \log(p_X(g^{-1}(y)))-\log\left(\left|\frac{dy}{dx}\right|\right).
\end{align}
$$
Intuitively, this equation says that the density of $Y$ is equal to the density at the corresponding point in $X$ plus a term that corrects for the warp in volume around an infinitesimally small length around $Y$ caused by the transformation.
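As a quick numerical sanity check of this formula, take $g(x)=\exp(x)$, for which $Y$ is standard log-normal; a numpy-only sketch comparing the change-of-variables density with the closed-form log-normal density:

```python
import numpy as np

def log_p_x(x):
    """Standard normal log-density."""
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

y = np.linspace(0.1, 5.0, 50)

# change of variables: log p_Y(y) = log p_X(g^{-1}(y)) - log|dg/dx|, with x = log(y)
log_p_y = log_p_x(np.log(y)) - np.log(y)   # for g = exp, log|dg/dx| = x = log(y)

# closed-form standard log-normal log-density
log_p_y_direct = -0.5 * np.log(y)**2 - 0.5 * np.log(2 * np.pi) - np.log(y)

print(np.allclose(log_p_y, log_p_y_direct))  # True
```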
If $g$ is cleverly constructed (and we will see several examples shortly), we can produce distributions that are more complex than standard normal noise and yet have easy sampling and computationally tractable scoring. Moreover, we can compose such bijective transformations to produce even more complex distributions. By an inductive argument, if we have $L$ transforms $g_{(0)}, g_{(1)},\ldots,g_{(L-1)}$, then the log-density of the transformed variable $Y=(g_{(0)}\circ g_{(1)}\circ\cdots\circ g_{(L-1)})(X)$ is
$$
\begin{align}
\log(p_Y(y)) &= \log\left(p_X\left(\left(g_{(L-1)}^{-1}\circ\cdots\circ g_{(0)}^{-1}\right)\left(y\right)\right)\right)+\sum^{L-1}_{l=0}\log\left(\left|\frac{dg^{-1}_{(l)}(y_{(l)})}{dy'}\right|\right),
\end{align}
$$
where we've defined $y_{(0)}=x$, $y_{(L-1)}=y$ for convenience of notation.
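The same check works for a composed chain; a numpy-only sketch for $Y=\exp(\mu+\sigma X)$, summing the two log-Jacobian terms and comparing against the closed-form $\text{LogNormal}(\mu,\sigma^2)$ density:

```python
import numpy as np

mu, sigma = 3.0, 0.5
x = np.linspace(-2, 2, 50)
z = mu + sigma * x   # first transform: affine
y = np.exp(z)        # second transform: exp

log_p_x = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)  # standard normal log-density

# per-transform log-Jacobian terms of the sum in the formula above
log_jac_affine = np.log(sigma) * np.ones_like(x)  # |dz/dx| = sigma
log_jac_exp = z                                   # |dy/dz| = exp(z), so its log is z

log_p_y = log_p_x - (log_jac_affine + log_jac_exp)

# closed-form LogNormal(mu, sigma) log-density
log_p_direct = (-0.5 * ((np.log(y) - mu) / sigma) ** 2
                - 0.5 * np.log(2 * np.pi) - np.log(sigma) - np.log(y))

print(np.allclose(log_p_y, log_p_direct))  # True
```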
In a later section, we will see how to generalize this method to multivariate $X$. The field of Normalizing Flows aims to construct such $g$ for multivariate $X$ to transform simple i.i.d. standard normal noise into complex, learnable, high-dimensional distributions. The methods have been applied to such diverse applications as image modeling, text-to-speech, unsupervised language induction, data compression, and modeling molecular structures. As probability distributions are the most fundamental component of probabilistic modeling we will likely see many more exciting state-of-the-art applications in the near future.
### Fixed Univariate Transforms in Pyro
PyTorch contains classes for representing *fixed* univariate bijective transformations, and sampling/scoring from transformed distributions derived from these. Pyro extends this with a comprehensive library of *learnable* univariate and multivariate transformations using the latest developments in the field. As Pyro imports all of PyTorch's distributions and transformations, we will work solely with Pyro. We also note that the NF components in Pyro can be used independently of the probabilistic programming functionality of Pyro, which is what we will be doing in the first two tutorials.
Let us begin by showing how to represent and manipulate a simple transformed distribution,
$$
\begin{align}
X &\sim \mathcal{N}(0,1)\\
Y &= \text{exp}(X).
\end{align}
$$
You may have recognized that this is by definition, $Y\sim\text{LogNormal}(0,1)$.
We begin by importing the relevant libraries:
```
import torch
import pyro
import pyro.distributions as dist
import pyro.distributions.transforms as T
import matplotlib.pyplot as plt
import seaborn as sns
import os
smoke_test = ('CI' in os.environ)
```
A variety of bijective transformations live in the [pyro.distributions.transforms](http://docs.pyro.ai/en/stable/distributions.html#transforms) module, and the classes to define transformed distributions live in [pyro.distributions](http://docs.pyro.ai/en/stable/distributions.html). We first create the base distribution of $X$ and the class encapsulating the transform $\text{exp}(\cdot)$:
```
dist_x = dist.Normal(torch.zeros(1), torch.ones(1))
exp_transform = T.ExpTransform()
```
The class [ExpTransform](https://pytorch.org/docs/master/distributions.html#torch.distributions.transforms.ExpTransform) derives from [Transform](https://pytorch.org/docs/master/distributions.html#torch.distributions.transforms.Transform) and defines the forward, inverse, and log-absolute-derivative operations for this transform,
$$
\begin{align}
g(x) &= \exp(x)\\
g^{-1}(y) &= \log(y)\\
\log\left(\left|\frac{dg}{dx}\right|\right) &= x.
\end{align}
$$
In general, a transform class defines these three operations, from which it is sufficient to perform sampling and scoring.
The class [TransformedDistribution](https://pytorch.org/docs/master/distributions.html#torch.distributions.transformed_distribution.TransformedDistribution) takes a base distribution of simple noise and a list of transforms, and encapsulates the distribution formed by applying these transformations in sequence. We use it as:
```
dist_y = dist.TransformedDistribution(dist_x, [exp_transform])
```
Now, plotting samples from both to verify that we have produced the log-normal distribution:
```
plt.subplot(1, 2, 1)
plt.hist(dist_x.sample([1000]).numpy(), bins=50)
plt.title('Standard Normal')
plt.subplot(1, 2, 2)
plt.hist(dist_y.sample([1000]).numpy(), bins=50)
plt.title('Standard Log-Normal')
plt.show()
```
Our example uses a single transform. However, we can compose transforms to produce more expressive distributions. For instance, if we apply an affine transformation we can produce the general log-normal distribution,
$$
\begin{align}
X &\sim \mathcal{N}(0,1)\\
Y &= \text{exp}(\mu+\sigma X).
\end{align}
$$
or rather, $Y\sim\text{LogNormal}(\mu,\sigma^2)$. In Pyro this is accomplished, e.g. for $\mu=3, \sigma=0.5$, as follows:
```
dist_x = dist.Normal(torch.zeros(1), torch.ones(1))
affine_transform = T.AffineTransform(loc=3, scale=0.5)
exp_transform = T.ExpTransform()
dist_y = dist.TransformedDistribution(dist_x, [affine_transform, exp_transform])
plt.subplot(1, 2, 1)
plt.hist(dist_x.sample([1000]).numpy(), bins=50)
plt.title('Standard Normal')
plt.subplot(1, 2, 2)
plt.hist(dist_y.sample([1000]).numpy(), bins=50)
plt.title('Log-Normal')
plt.show()
```
For the forward operation, transformations are applied in the order of the list that is the second argument to [TransformedDistribution](https://pytorch.org/docs/master/distributions.html#torch.distributions.transformed_distribution.TransformedDistribution). In this case, first [AffineTransform](https://pytorch.org/docs/master/distributions.html#torch.distributions.transforms.AffineTransform) is applied to the base distribution and then [ExpTransform](https://pytorch.org/docs/master/distributions.html#torch.distributions.transforms.ExpTransform).
### Learnable Univariate Distributions in Pyro
Having introduced the interface for invertible transforms and transformed distributions, we now show how to represent *learnable* transforms and use them for density estimation. Our dataset in this section and the next will comprise samples along two concentric circles. Examining the joint and marginal distributions:
```
import numpy as np
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
n_samples = 1000
X, y = datasets.make_circles(n_samples=n_samples, factor=0.5, noise=0.05)
X = StandardScaler().fit_transform(X)
plt.title(r'Samples from $p(x_1,x_2)$')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], alpha=0.5)
plt.show()
plt.subplot(1, 2, 1)
sns.distplot(X[:,0], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2})
plt.title(r'$p(x_1)$')
plt.subplot(1, 2, 2)
sns.distplot(X[:,1], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2})
plt.title(r'$p(x_2)$')
plt.show()
```
Standard transforms derive from the [Transform](https://pytorch.org/docs/master/distributions.html#torch.distributions.transforms.Transform) class and are not designed to contain learnable parameters. Learnable transforms, on the other hand, derive from [TransformModule](http://docs.pyro.ai/en/stable/distributions.html#pyro.distributions.TransformModule), which is a [torch.nn.Module](https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module) and registers parameters with the object.
We will learn the marginals of the above distribution using such a transform, [Spline](http://docs.pyro.ai/en/stable/distributions.html#pyro.distributions.transforms.Spline) \[5,6\], defined on a two-dimensional input:
```
base_dist = dist.Normal(torch.zeros(2), torch.ones(2))
spline_transform = T.Spline(2, count_bins=16)
flow_dist = dist.TransformedDistribution(base_dist, [spline_transform])
```
This transform passes each dimension of its input through a *separate* monotonically increasing function known as a spline. At a high level, a spline is a parametrizable curve defined by specific points, known as knots, that it passes through, together with its derivatives at those knots. The knots and their derivatives are parameters that can be learnt, e.g., through stochastic gradient descent on a maximum likelihood objective, as we now demonstrate:
```
%%time
steps = 1 if smoke_test else 1001
dataset = torch.tensor(X, dtype=torch.float)
optimizer = torch.optim.Adam(spline_transform.parameters(), lr=1e-2)
for step in range(steps):
    optimizer.zero_grad()
    loss = -flow_dist.log_prob(dataset).mean()
    loss.backward()
    optimizer.step()
    flow_dist.clear_cache()

    if step % 200 == 0:
        print('step: {}, loss: {}'.format(step, loss.item()))
```
Note that we call `flow_dist.clear_cache()` after each optimization step to clear the transform's forward-inverse cache. This is required because `flow_dist`'s `spline_transform` is a stateful [TransformModule](http://docs.pyro.ai/en/stable/distributions.html#pyro.distributions.TransformModule) rather than a purely stateless [Transform](https://pytorch.org/docs/stable/distributions.html#torch.distributions.transforms.Transform) object. Purely functional Pyro code typically creates `Transform` objects on each model execution and discards them after `.backward()`, effectively clearing the transform caches. By contrast, in this tutorial we create stateful module objects and need to clear their caches manually after each update.
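To see the cache in action at the `torch` level (a small sketch, assuming the standard `torch.distributions` caching behaviour): with `cache_size=1` a transform remembers its last $(x, y)$ pair, so inverting the cached output returns the original tensor object rather than recomputing $\log y$.
```
import torch
from torch.distributions.transforms import ExpTransform

t = ExpTransform(cache_size=1)
x = torch.tensor(1.5)
y = t(x)
# the cached input is returned by identity, not recomputed
assert t.inv(y) is x
```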
Plotting samples drawn from the transformed distribution after learning:
```
X_flow = flow_dist.sample(torch.Size([1000,])).detach().numpy()
plt.title(r'Joint Distribution')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)
plt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)
plt.legend()
plt.show()
plt.subplot(1, 2, 1)
sns.distplot(X[:,0], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,0], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_1)$')
plt.subplot(1, 2, 2)
sns.distplot(X[:,1], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,1], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_2)$')
plt.show()
```
As we can see, we have learnt close approximations to the marginal distributions, $p(x_1),p(x_2)$. It would have been challenging to fit the irregularly shaped marginals with standard methods, e.g., a mixture of normal distributions. As expected, since there is a dependency between the two dimensions, we do not learn a good representation of the joint, $p(x_1,x_2)$. In the next section, we explain how to learn multivariate distributions whose dimensions are not independent.
## Multivariate Distributions
### Background
The fundamental idea of normalizing flows also applies to multivariate random variables, and this is where its value is clearly seen - *representing complex high-dimensional distributions*. In this case, a simple multivariate source of noise, for example a standard i.i.d. normal distribution, $X\sim\mathcal{N}(\mathbf{0},I_{D\times D})$, is passed through a vector-valued bijection, $g:\mathbb{R}^D\rightarrow\mathbb{R}^D$, to produce the more complex transformed variable $Y=g(X)$.
Sampling $Y$ is again trivial and involves evaluation of the forward pass of $g$. We can score $Y$ using the multivariate substitution rule of integral calculus,
$$
\begin{align}
\mathbb{E}_{p_X(\cdot)}\left[f(X)\right] &= \int_{\text{supp}(X)}f(\mathbf{x})p_X(\mathbf{x})d\mathbf{x}\\
&= \int_{\text{supp}(Y)}f(g^{-1}(\mathbf{y}))p_X(g^{-1}(\mathbf{y}))\det\left|\frac{d\mathbf{x}}{d\mathbf{y}}\right|d\mathbf{y}\\
&= \mathbb{E}_{p_Y(\cdot)}\left[f(g^{-1}(Y))\right],
\end{align}
$$
where $d\mathbf{x}/d\mathbf{y}$ denotes the Jacobian matrix of $g^{-1}(\mathbf{y})$. Equating the last two lines we get,
$$
\begin{align}
\log(p_Y(y)) &= \log(p_X(g^{-1}(y)))+\log\left(\det\left|\frac{d\mathbf{x}}{d\mathbf{y}}\right|\right)\\
&= \log(p_X(g^{-1}(y)))-\log\left(\det\left|\frac{d\mathbf{y}}{d\mathbf{x}}\right|\right).
\end{align}
$$
Intuitively, this equation says that the log-density of $Y$ equals the log-density at the corresponding point in $X$ plus a term that corrects for the change in volume, around an infinitesimally small neighbourhood of $Y$, caused by the transformation. For instance, in two dimensions, the absolute value of the determinant of a Jacobian has a geometric interpretation as the area of the parallelogram whose edges are the columns of the Jacobian. In $n$ dimensions, it represents the hyper-volume of the parallelepiped with $n$ edges defined by the columns of the Jacobian (see a calculus reference such as \[7\] for more details).
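The volume interpretation can be checked numerically. In the sketch below (our own illustration, not part of the Pyro API), a linear map is its own Jacobian, so the area of the image of the unit square should equal the absolute determinant:
```
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# corners of the unit square, pushed through the linear map x -> A x
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
image = square @ A.T

def shoelace_area(pts):
    # shoelace formula for the area of a simple polygon
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# the image parallelogram's area equals |det A| = 5.5
assert np.isclose(shoelace_area(image), abs(np.linalg.det(A)))
```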
Similar to the univariate case, we can compose such bijective transformations to produce even more complex distributions. By an inductive argument, if we have $L$ transforms $g_{(0)}, g_{(1)},\ldots,g_{(L-1)}$, then the log-density of the transformed variable $Y=(g_{(0)}\circ g_{(1)}\circ\cdots\circ g_{(L-1)})(X)$ is
$$
\begin{align}
\log(p_Y(y)) &= \log\left(p_X\left(\left(g_{(L-1)}^{-1}\circ\cdots\circ g_{(0)}^{-1}\right)\left(y\right)\right)\right)+\sum^{L-1}_{l=0}\log\left(\left|\frac{dg^{-1}_{(l)}(y_{(l)})}{dy'}\right|\right),
\end{align}
$$
where we've defined $y_{(0)}=x$, $y_{(L-1)}=y$ for convenience of notation.
The main challenge is in designing parametrizable multivariate bijections that have closed form expressions for both $g$ and $g^{-1}$, a tractable Jacobian whose calculation scales with $O(D)$ rather than $O(D^3)$, and can express a flexible class of functions.
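One standard way to meet the tractability requirement is to design $g$ so that its Jacobian is triangular: the determinant is then the product of the diagonal, so the log-abs-determinant costs $O(D)$ instead of the $O(D^3)$ of a general dense determinant. A quick NumPy sketch of this linear-algebra fact:
```
import numpy as np

rng = np.random.default_rng(0)
D = 5
J = np.tril(rng.normal(size=(D, D)))           # a triangular "Jacobian"
np.fill_diagonal(J, np.abs(np.diag(J)) + 0.1)  # keep the diagonal positive

logdet_fast = np.sum(np.log(np.abs(np.diag(J))))  # O(D)
sign, logdet_dense = np.linalg.slogdet(J)         # O(D^3) in general
assert np.isclose(logdet_fast, logdet_dense)
```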
### Multivariate Transforms in Pyro
Up to this point we have used element-wise transforms in Pyro. These are indicated by having the property `transform.event_dim == 0` set on the transform object. Such element-wise transforms can only be used to represent univariate distributions and multivariate distributions whose dimensions are independent (known in variational inference as the mean-field approximation).
The power of normalizing flows, however, is most apparent in their ability to model complex high-dimensional distributions with neural networks, and Pyro contains several such flows for accomplishing this. Transforms that operate on vectors have the property `transform.event_dim == 1`, transforms on matrices have `transform.event_dim == 2`, and so on. In general, the `event_dim` property of a transform indicates how many dependent dimensions there are in the output of a transform.
In this section, we show how to use [SplineCoupling](http://docs.pyro.ai/en/stable/distributions.html#pyro.distributions.transforms.SplineCoupling) to learn the bivariate toy distribution from our running example. A coupling transform \[8, 9\] divides the input variable into two parts and applies an element-wise bijection to the second half, whose parameters are a function of the first half. Optionally, an element-wise bijection is also applied to the first half. Dividing the inputs at $d$, the transform is,
$$
\begin{align}
\mathbf{y}_{1:d} &= g_\theta(\mathbf{x}_{1:d})\\
\mathbf{y}_{(d+1):D} &= h_\phi(\mathbf{x}_{(d+1):D};\mathbf{x}_{1:d}),
\end{align}
$$
where $\mathbf{x}_{1:d}$ represents the first $d$ elements of the inputs, $g_\theta$ is either the identity function or an element-wise bijection with parameters $\theta$, and $h_\phi$ is an element-wise bijection whose parameters are a function of $\mathbf{x}_{1:d}$.
This type of transform is easily invertible. We invert the first half, $\mathbf{y}_{1:d}$, then use the resulting $\mathbf{x}_{1:d}$ to evaluate $\phi$ and invert the second half,
$$
\begin{align}
\mathbf{x}_{1:d} &= g_\theta^{-1}(\mathbf{y}_{1:d})\\
\mathbf{x}_{(d+1):D} &= h_\phi^{-1}(\mathbf{y}_{(d+1):D};\mathbf{x}_{1:d}).
\end{align}
$$
Different choices for $g$ and $h$ produce different types of coupling transforms. When both are monotonic rational splines, the transform is the spline coupling layer of Neural Spline Flow \[5,6\], which is represented in Pyro by the [SplineCoupling](http://docs.pyro.ai/en/stable/distributions.html#pyro.distributions.transforms.SplineCoupling) class. As shown in the references, when we combine a sequence of coupling layers sandwiched between random permutations, so that dependencies are introduced between all dimensions, we can model complex multivariate distributions.
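To make the coupling mechanics concrete, here is a minimal *affine* coupling layer in plain PyTorch — a sketch of the general idea rather than Pyro's `SplineCoupling`, with an illustrative `hypernet` of our own choosing. Note that inverting it never requires inverting the hypernetwork, only the element-wise map:
```
import torch

torch.manual_seed(0)
d, D = 1, 2
# hypernetwork: maps the first half to (log_scale, shift) for the second half
hypernet = torch.nn.Linear(d, 2 * (D - d))

def forward(x):
    x1, x2 = x[..., :d], x[..., d:]
    log_s, t = hypernet(x1).chunk(2, dim=-1)
    return torch.cat([x1, x2 * log_s.exp() + t], dim=-1)

def inverse(y):
    y1, y2 = y[..., :d], y[..., d:]
    log_s, t = hypernet(y1).chunk(2, dim=-1)  # y1 == x1, so parameters match
    return torch.cat([y1, (y2 - t) * (-log_s).exp()], dim=-1)

x = torch.randn(4, D)
assert torch.allclose(inverse(forward(x)), x, atol=1e-6)
```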
Most of the learnable transforms in Pyro have a corresponding helper function that takes care of constructing a neural network for the transform with the correct output shape. This neural network outputs the parameters of the transform and is known as a [hypernetwork](https://arxiv.org/abs/1609.09106) \[10\]. The helper functions are lower-case versions of the corresponding class name, and usually take, at the very least, the input dimension or shape of the distribution to model. For instance, the helper function corresponding to [SplineCoupling](http://docs.pyro.ai/en/stable/distributions.html#pyro.distributions.transforms.SplineCoupling) is [spline_coupling](http://docs.pyro.ai/en/stable/distributions.html#spline-coupling). We create a bivariate flow with a single spline coupling layer as follows:
```
base_dist = dist.Normal(torch.zeros(2), torch.ones(2))
spline_transform = T.spline_coupling(2, count_bins=16)
flow_dist = dist.TransformedDistribution(base_dist, [spline_transform])
```
Similarly to before, we train this distribution on the toy dataset and plot the results:
```
%%time
steps = 1 if smoke_test else 5001
dataset = torch.tensor(X, dtype=torch.float)
optimizer = torch.optim.Adam(spline_transform.parameters(), lr=5e-3)
for step in range(steps):
    optimizer.zero_grad()
    loss = -flow_dist.log_prob(dataset).mean()
    loss.backward()
    optimizer.step()
    flow_dist.clear_cache()

    if step % 500 == 0:
        print('step: {}, loss: {}'.format(step, loss.item()))
X_flow = flow_dist.sample(torch.Size([1000,])).detach().numpy()
plt.title(r'Joint Distribution')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)
plt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)
plt.legend()
plt.show()
plt.subplot(1, 2, 1)
sns.distplot(X[:,0], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,0], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_1)$')
plt.subplot(1, 2, 2)
sns.distplot(X[:,1], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,1], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_2)$')
plt.show()
```
We see from the output that this normalizing flow has successfully learnt both the univariate marginals *and* the bivariate distribution.
## Conditional versus Joint Distributions
### Background
In many cases, we wish to represent conditional rather than joint distributions. For instance, in performing variational inference, the variational family is a class of conditional distributions,
$$
\begin{align}
\{q_\psi(\mathbf{z}\mid\mathbf{x})\mid\psi\in\Psi\},
\end{align}
$$
where $\mathbf{z}$ is the latent variable and $\mathbf{x}$ the observed one, that hopefully contains a member close to the true posterior of the model, $p(\mathbf{z}\mid\mathbf{x})$. In other cases, we may wish to learn to generate an object $\mathbf{x}$ conditioned on some context $\mathbf{c}$ using $p_\theta(\mathbf{x}\mid\mathbf{c})$ and observations $\{(\mathbf{x}_n,\mathbf{c}_n)\}^N_{n=1}$. For instance, $\mathbf{x}$ may be a spoken sentence and $\mathbf{c}$ a number of speech features.
The theory of Normalizing Flows is easily generalized to conditional distributions. We denote the variable to condition on by $C=\mathbf{c}\in\mathbb{R}^M$. A simple multivariate source of noise, for example a standard i.i.d. normal distribution, $X\sim\mathcal{N}(\mathbf{0},I_{D\times D})$, is passed through a vector-valued bijection that also conditions on C, $g:\mathbb{R}^D\times\mathbb{R}^M\rightarrow\mathbb{R}^D$, to produce the more complex transformed variable $Y=g(X;C=\mathbf{c})$. In practice, this is usually accomplished by making the parameters for a known normalizing flow bijection $g$ the output of a hypernet neural network that inputs $\mathbf{c}$.
Sampling of conditional transforms simply involves evaluating $Y=g(X; C=\mathbf{c})$. Conditioning the bijections on $\mathbf{c}$, the same formula holds for scoring as for the joint multivariate case.
### Conditional Transforms in Pyro
In Pyro, most learnable transforms have a corresponding conditional version that derives from [ConditionalTransformModule](http://docs.pyro.ai/en/stable/distributions.html#conditionaltransformmodule). For instance, the conditional version of the spline transform is [ConditionalSpline](http://docs.pyro.ai/en/stable/distributions.html#conditionalspline) with helper function [conditional_spline](http://docs.pyro.ai/en/stable/distributions.html#conditional-spline).
In this section, we will show how we can learn our toy dataset as the decomposition of the product of a conditional and a univariate distribution,
$$
\begin{align}
p(x_1,x_2) &= p(x_2\mid x_1)p(x_1).
\end{align}
$$
First, we create the univariate distribution for $x_1$ as shown previously,
```
dist_base = dist.Normal(torch.zeros(1), torch.ones(1))
x1_transform = T.spline(1)
dist_x1 = dist.TransformedDistribution(dist_base, [x1_transform])
```
A conditional transformed distribution is created by passing the base distribution and list of conditional and non-conditional transforms to the [ConditionalTransformedDistribution](http://docs.pyro.ai/en/stable/distributions.html#conditionaltransformeddistribution) class:
```
x2_transform = T.conditional_spline(1, context_dim=1)
dist_x2_given_x1 = dist.ConditionalTransformedDistribution(dist_base, [x2_transform])
```
You will notice that we pass the dimension of the context variable, $M=1$, to the conditional spline helper function.
Until we condition on a value of $x_1$, the [ConditionalTransformedDistribution](http://docs.pyro.ai/en/stable/distributions.html#conditionaltransformeddistribution) object is merely a placeholder and cannot be used for sampling or scoring. By calling its [.condition(context)](http://docs.pyro.ai/en/stable/distributions.html#pyro.distributions.ConditionalDistribution.condition) method, we obtain a [TransformedDistribution](https://pytorch.org/docs/master/distributions.html#transformeddistribution) for which all its conditional transforms have been conditioned on `context`.
For example, to draw a sample from $x_2\mid x_1=1$:
```
x1 = torch.ones(1)
print(dist_x2_given_x1.condition(x1).sample())
```
In general, the context variable may have batch dimensions and these dimensions must broadcast over the batch dimensions of the input variable.
Now, combining the two distributions and training it on the toy dataset:
```
%%time
steps = 1 if smoke_test else 5001
modules = torch.nn.ModuleList([x1_transform, x2_transform])
optimizer = torch.optim.Adam(modules.parameters(), lr=3e-3)
x1 = dataset[:,0][:,None]
x2 = dataset[:,1][:,None]
for step in range(steps):
    optimizer.zero_grad()
    ln_p_x1 = dist_x1.log_prob(x1)
    ln_p_x2_given_x1 = dist_x2_given_x1.condition(x1.detach()).log_prob(x2.detach())
    loss = -(ln_p_x1 + ln_p_x2_given_x1).mean()
    loss.backward()
    optimizer.step()
    dist_x1.clear_cache()
    dist_x2_given_x1.clear_cache()

    if step % 500 == 0:
        print('step: {}, loss: {}'.format(step, loss.item()))
X = torch.cat((x1, x2), dim=-1)
x1_flow = dist_x1.sample(torch.Size([1000,]))
x2_flow = dist_x2_given_x1.condition(x1_flow).sample(torch.Size([1000,]))
X_flow = torch.cat((x1_flow, x2_flow), dim=-1)
plt.title(r'Joint Distribution')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)
plt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)
plt.legend()
plt.show()
plt.subplot(1, 2, 1)
sns.distplot(X[:,0], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,0], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_1)$')
plt.subplot(1, 2, 2)
sns.distplot(X[:,1], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,1], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_2)$')
plt.show()
```
### Conclusions
In this tutorial, we have explained the basic idea behind normalizing flows and the Pyro interface to create flows to represent univariate, multivariate, and conditional distributions. It is useful to think of flows as a powerful general-purpose tool in your probabilistic modelling toolkit, and you can replace any existing distribution in your model with one to increase its flexibility and performance. We hope you have fun exploring the power of normalizing flows!
### References
1. E.G. Tabak, Christina Turner. [*A Family of Nonparametric Density Estimation Algorithms*](https://www.math.nyu.edu/faculty/tabak/publications/Tabak-Turner.pdf). Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.
2. Danilo Jimenez Rezende, Shakir Mohamed. [*Variational Inference with Normalizing Flows*](http://proceedings.mlr.press/v37/rezende15.pdf). ICML 2015.
3. Ivan Kobyzev, Simon J.D. Prince, and Marcus A. Brubaker. [*Normalizing Flows: An Introduction and Review of Current Methods*](https://arxiv.org/abs/1908.09257). \[arXiv:1908.09257\] 2019.
4. George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan. [*Normalizing Flows for Probabilistic Modeling and Inference*](https://arxiv.org/abs/1912.02762). \[arXiv:1912.02762\] 2019.
5. Conor Durkan, Artur Bekasov, Iain Murray, George Papamakarios. [*Neural Spline Flows*](https://arxiv.org/abs/1906.04032). NeurIPS 2019.
6. Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie. [*Invertible Generative Modeling using Linear Rational Splines*](https://arxiv.org/abs/2001.05168). AISTATS 2020.
7. James Stewart. [*Calculus*](https://www.stewartcalculus.com/). Cengage Learning. 9th Edition 2020.
8. Laurent Dinh, David Krueger, Yoshua Bengio. [*NICE: Non-linear Independent Components Estimation*](https://arxiv.org/abs/1410.8516). Workshop contribution at ICLR 2015.
9. Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio. [*Density estimation using Real-NVP*](https://arxiv.org/abs/1605.08803). Conference paper at ICLR 2017.
10. David Ha, Andrew Dai, Quoc V. Le. [*HyperNetworks*](https://arxiv.org/abs/1609.09106). Workshop contribution at ICLR 2017.
| github_jupyter |
# Training embcompr
```
# import pdb; pdb.set_trace()
CLR = {
'blue': ['#e0f3ff', '#aadeff', '#2bb1ff', '#15587f', '#0b2c40'],
'gold': ['#fff3dc', '#ffebc7', '#ffddab', '#b59d79', '#5C4938'],
'red': ['#ffd8e8', '#ff9db6', '#ff3e72', '#6B404C', '#521424'],
}
import h5py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as ppt
from tabulate import tabulate
from tqdm import tqdm_notebook as tqdm
import pathlib
def _rolling_mean(a: np.ndarray, window: int):
    assert len(a.shape) == 1
    a_prev = np.repeat(a[0], window // 2)
    a_post = np.repeat(a[-1], window // 2 - 1)
    x = np.concatenate((a_prev, a, a_post))
    v = np.ones((window, )) / window
    return np.convolve(x, v, mode='valid')
def load_loss(path: pathlib.Path, skip: int):
    try:
        with h5py.File(str(path), mode='r') as fd:
            train_shape, valid_shape = fd['train'].shape, fd['valid'].shape
            _, batches = fd['train'].shape

            # early stopping (TODO: may be solved more elegantly)
            epochs = len(np.where(fd['train'][:].sum(axis=1) > 0)[0])
            epochs -= skip

            data_train = fd['train'][skip:epochs]
            data_valid = fd['valid'][skip:epochs]

            if not len(data_train) or not len(data_valid):
                print('skipping {} - no data available'.format(str(path)))
                return

    except OSError as e:
        # print(str(e))
        print('skipping {} - currently in use'.format(str(path)))
        return

    return data_train, data_valid, epochs
```
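As a quick standalone sanity check of the edge-padded rolling mean defined above (reimplemented here so this cell runs on its own): padding with `window // 2` copies of the first value and `window // 2 - 1` copies of the last keeps the `'valid'` convolution at exactly the input length, and constant signals pass through unchanged.
```
import numpy as np

def rolling_mean(a, window):
    a_prev = np.repeat(a[0], window // 2)
    a_post = np.repeat(a[-1], window // 2 - 1)
    x = np.concatenate((a_prev, a, a_post))
    v = np.ones((window,)) / window
    return np.convolve(x, v, mode='valid')

out = rolling_mean(np.arange(10, dtype=float), 4)
assert out.shape == (10,)                             # length preserved
assert np.allclose(rolling_mean(np.ones(7), 4), 1.0)  # constants unchanged
```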
## Summary
```
def summarize(selection: str, display: bool, save: bool, skip: int = 20):
    def argmin(x: np.ndarray, data: np.ndarray):
        y_mean = data.mean(axis=1)
        idx_min = y_mean.argmin()
        return x[idx_min], y_mean[idx_min]

    rows = []
    for glob in pathlib.Path('..').glob(selection + '/losses.h5'):
        data = load_loss(glob, skip)
        if data is None:
            continue

        data_train, data_valid, epochs = data
        x = np.arange(skip, epochs + skip)
        exp = glob.parts[-2]
        row = (exp, )

        # add loss minima to data
        row = row + argmin(x, data_train)
        row = row + argmin(x, data_valid)
        best_epoch = row[-2]

        # find code entropy
        try:
            codes_fd = h5py.File(str(glob.parents[0] / 'codes.h5'), mode='r')
            epochs = [int(key) for key in codes_fd['valid'].keys()]
            s = epochs[np.argmin([abs(best_epoch - e) for e in epochs])]
            row = row + (s, codes_fd['valid'][str(s)]['entropy'].attrs['mean'])
        except OSError as e:
            print('{} no codes.h5 found'.format(str(glob.parent.name)))
            row = row + ('-', '-')
        except KeyError:
            print('{} no "valid" dataset found in codes.h5'.format(str(glob)))
            row = row + ('-', '-')

        rows.append(row)

    headers = ['exp', 't epoch', 't loss', 'v epoch', 'v loss', 'entropy epoch', 'entropy']
    rows.sort(key=lambda t: t[4])
    assert len(rows), 'no data found'

    if display:
        print()
        print(tabulate(rows, headers=headers))
        print()

    if save:
        path = glob.parents[1] / 'summary.txt'
        print('writing', str(path))
        with path.open('w') as fd:
            fd.write(tabulate(rows, headers=headers, tablefmt='orgtbl'))
```
## Loss
```
def _loss_line_plot(ax, x, data: np.ndarray, window: int, label_fmt: str,
                    color: str = None, show_bounds: bool = True,
                    marker=2, baseline: float = None):
    y_min = _rolling_mean(data.min(axis=1), window)
    y_max = _rolling_mean(data.max(axis=1), window)

    if show_bounds:
        ax.plot(x, y_min, color=CLR[color][3], alpha=0.5, lw=0.6)
        ax.plot(x, y_max, color=CLR[color][3], alpha=0.5, lw=0.6)
        ax.fill_between(x, y_min, y_max, color=CLR[color][0], alpha=0.2)

    y_mean = data.mean(axis=1)
    ax.plot(x, y_mean, color=CLR[color][2])

    # marker and lines
    line_style = dict(linestyle='dashed', lw=1, alpha=0.5, color=CLR[color][3])

    patches = []
    if baseline is not None:
        ax.axhline(baseline, 0, 1, **line_style)
        patches.append(ppt.Patch(
            color='black', label='Baseline: {}'.format(baseline)))

    if show_bounds:
        ax.vlines(x[0], y_min[0] - 1, y_max[0] + 1, **line_style)
        ax.vlines(x[-1], y_min[-1] - 1, y_max[-1] + 1, **line_style)

    idx_min = y_mean.argmin()
    ax.scatter(x[idx_min], y_mean[idx_min], marker=marker, s=100, color=CLR[color][2])

    # legend
    patches.insert(0, ppt.Patch(
        color=CLR[color][2],
        label=label_fmt.format(y_mean[idx_min], x[idx_min])))

    return patches
def plot_loss(path: pathlib.Path, display: bool, save: bool,
              skip: int = 0, window: int = 50, baseline: float = None):
    name_exp = '/'.join(path.parts[-3:-1])
    out_dir = path.parents[0] / 'images'
    out_dir.mkdir(exist_ok=True)

    # data aggregation
    data = load_loss(path, skip)
    if data is None:
        return

    data_train, data_valid, epochs = data
    if epochs < 50:
        print('not enough data:', epochs)
        return

    data_train = data_train[:epochs]
    data_valid = data_valid[:epochs]
    x = np.arange(skip, epochs + skip)

    # clip above baseline
    data_train[data_train > baseline] = baseline
    data_valid[data_valid > baseline] = baseline

    def fig_before(title: str):
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.set_title(title)
        ax.set_xlabel('Epochs')
        ax.set_ylabel('Distance Loss')
        return fig, ax

    def fig_after(fig, ax, patches, fname):
        ax.legend(handles=patches)
        if display:
            plt.show(fig)
        if save:
            for out_file in [str(out_dir / fname) + s for s in ('.png', '.svg')]:
                # print('saving to', out_file)
                fig.savefig(out_file)
        plt.close(fig)

    # plot three plots
    # fig, ax = fig_before('Training Loss ({})'.format(name_exp))
    # patch = _loss_line_plot(
    #     ax, x, data_train, window, 'Min. Training: {:2.3f} (Epoch {})', color='blue', baseline=baseline)
    # fig_after(fig, ax, patch, 'loss-training')

    # fig, ax = fig_before('Validation Loss ({})'.format(name_exp))
    # patch = _loss_line_plot(
    #     ax, x, data_valid, window, 'Min. Validation: {:2.3f} (Epoch {})', color='red', baseline=baseline)
    # fig_after(fig, ax, patch, 'loss-validation')

    fig, ax = fig_before('Loss ({})'.format(name_exp))
    patch1 = _loss_line_plot(
        ax, x, data_train, window,
        'Min. Training: {:2.3f} (Epoch {})', color='blue',
        show_bounds=False, marker=3)
    patch2 = _loss_line_plot(
        ax, x, data_valid, window,
        'Min. Validation: {:2.3f} (Epoch {})', color='red',
        show_bounds=False, baseline=baseline)
    fig_after(fig, ax, patch1 + patch2, 'loss')
def plot_losses(selection, display: bool = True, save: bool = True, baseline: float = None):
    for glob in pathlib.Path('..').glob(selection + 'losses.h5'):
        print('plot loss', str(glob))
        plot_loss(glob, display, save, baseline=baseline)
```
## Encoder Activations
### Training
```
def plot_activation_train(codefile: pathlib.Path, display: bool, save: bool):
    path = codefile.parents[0]
    fd = h5py.File(str(codefile), mode='r')
    name_exp = '/'.join(path.parts[-2:])

    try:
        group = fd['train']
    except KeyError:
        print('no "train" datagroup: only "{}"'.format(str(list(fd.keys()))))
        return

    out_dir = path / 'images'
    out_dir.mkdir(exist_ok=True)

    def fig_before(title: str):
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.set_title(title)
        return fig, ax

    def fig_after(fig, ax, fname):
        if display:
            plt.show(fig)
        if save:
            for out_file in [str(out_dir / fname) + s for s in ('.png', '.svg')]:
                # print('saving to', out_file)
                fig.savefig(out_file)
        plt.close(fig)

    for epoch in tqdm(sorted([int(key) for key in group.keys()])):
        hist = group[str(epoch)]['histogram']
        bin_edges = hist.attrs['bin_edges']

        x = [(a + b) / 2 for a, b in zip(bin_edges, bin_edges[1:])]
        y = hist[:]

        # pie plot
        fig, ax = fig_before('Encoder Activations (Epoch {}) \n ({})\n'.format(
            epoch, name_exp))

        l0 = 'x < {:2.2f}'.format(bin_edges[2])
        l1 = 'other'  # middle bins between bin_edges[1] and bin_edges[-3]
        l2 = '{:2.2f} < x'.format(bin_edges[-2])

        sizes = y[0], sum(y[1:-1]), y[-1]
        colors = CLR['blue'], CLR['red'], CLR['gold']

        wp_outer = dict(wedgeprops=dict(width=0.3, edgecolor='w'))
        wp_inner = dict(wedgeprops=dict(width=0.1, edgecolor='w'))
        pct_style = dict(pctdistance=0.4, autopct='%1.1f%%')

        kw_outer = {**wp_outer, **pct_style, 'colors': [c[2] for c in colors]}
        kw_inner = {**wp_inner, 'colors': [c[0] for c in colors]}

        ax.pie(sizes, labels=(l0, l1, l2), startangle=0, **kw_outer)
        ax.pie(sizes, radius=0.7, startangle=0, **kw_inner)

        # circle = plt.Circle((0,0), 0.8, color='w', fc=CLR['blue'][0], lw=1)
        # ax.add_artist(circle)

        plt.axis('equal')
        fig_after(fig, ax, 'encoder-activation-train_{}'.format(epoch))
def plot_activations_train(selection, display: bool = True, save: bool = True):
    for glob in pathlib.Path('..').glob(selection + 'codes.h5'):
        print('training activations', str(glob))
        plot_activation_train(glob, display, save)
```
### Validation
```
def _plot_activation_bar(fig, ax, arr):
    def color_gen(switch: int = 1, kind: int = 2):
        on = False
        count = 1
        while True:
            if on:
                yield CLR['blue'][kind]
            else:
                yield CLR['red'][kind]
            if count % switch == 0:
                on = not on
            count += 1

    def color_map(amount: int, switch: int):
        gen = color_gen(switch=switch)
        return [next(gen) for _ in range(amount)]

    M, K = arr.shape
    fig.set_size_inches(40, 10)

    # draw codebook separators
    line_style = dict(color='black', ls='dashed', lw=1)
    for i, bg_color in zip(range(M), color_gen(kind=0)):
        ax.axvline(x=i * (K + 1) - 1, **line_style)
        begin = i * (K + 1) - 1
        end = begin + K + 1
        ax.axvspan(begin, end, color=bg_color, alpha=0.2)
    ax.axvline(x=M * (K + 1) - 1, **line_style)

    # arr shape: (n, M, K)
    # retrieve selection per codebook (along dim=1)
    # adding [0] as separator element between codebooks
    sums = np.array([np.concatenate((arr[i], [0])) for i in range(M)])
    ax.bar(range(len(sums.flat)), sums.flat, color=color_map(M * (K + 1), K + 1), align='edge')
def plot_activation_valid(codefile: pathlib.Path, display: bool, save: bool):
    path = codefile.parents[0]

    try:
        fd = h5py.File(str(codefile), mode='r')
        group = fd['valid']
    except OSError:
        print('skipping {}, currently in use'.format(str(path)))
        return
    except KeyError:
        print('no "valid" datagroup: only "{}"'.format(str(list(fd.keys()))))
        return

    out_dir = path / 'images'
    out_dir.mkdir(exist_ok=True)

    def fig_before(title: str):
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.set_title(title)
        return fig, ax

    def fig_after(fig, ax, fname):
        if display:
            plt.show(fig)
        if save:
            for out_file in [str(out_dir / fname) + s for s in ('.png', '.svg')]:
                # print('saving to', out_file)
                fig.savefig(out_file)
        fig.clear()
        plt.close(fig)

    # stupid thing to mitigate the memory leak...
    # process manually chunk-wise if end < len(epochs)
    epochs = sorted([int(key) for key in group.keys()])
    start, end = 0, 500
    if end - start < len(epochs):
        print('WARNING: only using data subset!')
        print('current epoch range: ', start, end)
        epochs = epochs[start:end]

    for i, epoch in tqdm(enumerate(epochs), total=len(epochs)):
        counts = group[str(epoch)]['counts'][:]
        fig, ax = fig_before('Encoder Activations (Epoch {})'.format(epoch))
        _plot_activation_bar(fig, ax, counts)
        fig_after(fig, ax, 'encoder-activation-valid_{}'.format(epoch))
        if i == end:
            break
def plot_activations_valid(selection, display: bool = True, save: bool = True):
    for glob in pathlib.Path('..').glob(selection + 'codes.h5'):
        print('code bars', str(glob))
        plot_activation_valid(glob, display, save)
```
## Play the organ
```
# note: there is a small memory leak becoming obvious when plotting many figures.
# I have no idea why that is, but it seems to come from pyplot.
# It does not help, that it also only occurs sometimes and not consistently...
experiment = 'experiments/enwiki'
# --- RAW
# embedding = 'glove', 20.17
# embedding = 'fasttext.en', 12.11
# embedding = 'fasttext.de', 11.47
# --- BOV
embedding = '**', 2
options = dict(display=True, save=True)
selection = f'opt/{experiment}/{embedding[0]}/'
summarize(selection, **options)
plot_losses(selection, baseline=embedding[1], **options)
# plot_activations_train(selection, **options)
# plot_activations_valid(selection, **options)
print('done')
```
| github_jupyter |
```
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import Image
Image('graph.png')
```
We are going to use networkx package to construct the graph and find the shortest paths. Refer to the [NetworkX documentation](https://networkx.github.io/documentation/stable/).
```
#type in the edges and edgecost as a list of 3-tuples
edges = [(0,1,2),(0,2, 1.5),(0,3, 2.5),(1,4, 1.5),(2,5, 0.5),(4,8, 1),
(2,6, 2.5),(3,7, 2),(7,9, 1.25),(5,10, 2.75),(6,10, 3.25),
(9,10, 1.5),(8,10, 3.5)]
#Define an empty graph
G = nx.Graph()
#populate the edges and the cost in graph G
G.add_weighted_edges_from(edges, weight='cost')
#Find the shortest path from Node 0 to Node 10
print(nx.shortest_path(G, 0, 10, 'cost'))
#Find the cost of the shortest path from Node 0 to Node 10
print(nx.shortest_path_length(G, 0, 10, 'cost'))
```
Let us now move on to a grid which represents the robot's operating environment. First convert the grid to a graph. Then we will use A* from NetworkX to find the shortest path.
```
# write the Euclidean function that takes in two
# nodes' (x, y) coordinates and computes the distance
def euclidean(node1, node2):
x1, y1 = node1
x2, y2 = node2
return np.sqrt((x1-x2)**2 + (y1-y2)**2)
# use np.load to load a grid of 1s and 0s
# 1 - occupied, 0 - free
grid = np.load("astar_grid.npy")
print(grid.shape)
# you can define your own start/ end
start = (0, 0)
goal = (0, 19)
# visualize the start/ end and the robot's environment
fig, ax = plt.subplots(figsize=(12,12))
ax.imshow(grid, cmap=plt.cm.Dark2)
ax.scatter(start[1],start[0], marker = "+", color = "yellow", s = 200)
ax.scatter(goal[1],goal[0], marker = "+", color = "red", s = 200)
plt.show()
```
Convert this grid array into a graph. You have to follow these steps
1. Find the dimensions of grid. Use grid_2d_graph() to initialize a grid graph of corresponding dimensions
2. Use remove_node() to remove nodes and edges of all cells that are occupied
```
#initialize graph
grid_size = grid.shape
G = nx.grid_2d_graph(*grid_size)
num_removed = 0 # counter to keep track of removed nodes
#nested loop to remove nodes that correspond to occupied cells
#free cell => grid[i, j] = 0
#occupied cell => grid[i, j] = 1
for i in range(grid_size[0]):
for j in range(grid_size[1]):
if grid[i, j] == 1:
G.remove_node((i, j))
num_removed += 1
print(f"removed {num_removed} nodes")
print(f"number of occupied cells in grid {np.sum(grid)}")
```
Visualize the resulting graph using nx.draw(). Note that pos argument for nx.draw() has been given below. The graph is too dense. Try changing the node_size and node_color. You can correlate this graph with the grid's occupied cells
```
pos = {(x,y):(y,-x) for x,y in G.nodes()}
nx.draw(G, pos=pos, node_color='red', node_size=10)
```
We are 2 more steps away from finding the path!
1. Set edge attribute. Use set_edge_attributes(). Remember we have to provide a dictionary input: Edge is the key and cost is the value. We can set every move to a neighbor to have unit cost.
2. Use astar_path() to find the path. Set heuristic to be euclidean distance. weight to be the attribute you assigned in step 1
```
nx.set_edge_attributes(G, {e: 1 for e in G.edges()}, "cost")
astar_path = nx.astar_path(G, start, goal, heuristic=euclidean, weight="cost")
astar_path
```
Visualize the path you have computed!
```
fig, ax = plt.subplots(figsize=(12,12))
ax.imshow(grid, cmap=plt.cm.Dark2)
ax.scatter(start[1],start[0], marker = "+", color = "yellow", s = 200)
ax.scatter(goal[1],goal[0], marker = "+", color = "red", s = 200)
for s in astar_path[1:]:
ax.plot(s[1], s[0],'r+')
```
Cool! Now you can read arbitrary environments and find the shortest path between two robot positions. Pick a game environment from the [MovingAI DAO benchmarks](https://www.movingai.com/benchmarks/dao/index.html) and repeat the exercise.
```
```
# Operations on word vectors
Welcome to your first assignment of this week!
Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.
**After this assignment you will be able to:**
- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias
Let's get started! Run the following cell to load the packages you will need.
```
import numpy as np
from w2v_utils import *
```
Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.
```
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
```
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.
You've seen that one-hot vectors do not do a good job capturing what words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.
# 1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:
$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$
where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
<img src="images/cosine_sim.png" style="width:800px;height:250px;">
<caption><center> **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are</center></caption>
**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.
**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
```
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similariy between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u,v)
# Compute the L2 norm of u (≈1 line)
from numpy import linalg as LA
norm_u = LA.norm(u)
# Compute the L2 norm of v (≈1 line)
norm_v = LA.norm(v)
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot/(norm_u*norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
cosine_similarity(ball,rome)
```
**Expected Output**:
<table>
<tr>
<td>
**cosine_similarity(father, mother)** =
</td>
<td>
0.890903844289
</td>
</tr>
<tr>
<td>
**cosine_similarity(ball, crocodile)** =
</td>
<td>
0.274392462614
</td>
</tr>
<tr>
<td>
**cosine_similarity(france - paris, rome - italy)** =
</td>
<td>
-0.675147930817
</td>
</tr>
</table>
After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
## 2 - Word analogy task
In the word analogy task, we complete the sentence <font color='brown'>"*a* is to *b* as *c* is to **____**"</font>. An example is <font color='brown'> '*man* is to *woman* as *king* is to *queen*' </font>. In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
**Exercise**: Complete the code below to be able to perform word analogies!
```
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a],word_to_vec_map[word_b],word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c]:
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b - e_a , word_to_vec_map[w]- e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
```
Run the cell below to test your code; this may take 1-2 minutes.
```
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large'), ('man','doctor','women')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
```
**Expected Output**:
<table>
<tr>
<td>
**italy -> italian** ::
</td>
<td>
spain -> spanish
</td>
</tr>
<tr>
<td>
**india -> delhi** ::
</td>
<td>
japan -> tokyo
</td>
</tr>
<tr>
<td>
**man -> woman ** ::
</td>
<td>
boy -> girl
</td>
</tr>
<tr>
<td>
**small -> smaller ** ::
</td>
<td>
large -> larger
</td>
</tr>
</table>
Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?.
### Congratulations!
You've come to the end of this assignment. Here are the main points you should remember:
- Cosine similarity is a good way to compare similarity between pairs of word vectors. (Though L2 distance works too.)
- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
Even though you have finished the graded portions, we recommend you take a look too at the rest of this notebook.
Congratulations on finishing the graded portions of this notebook!
## 3 - Debiasing word vectors (OPTIONAL/UNGRADED)
In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.
Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ corresponds to the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
```
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
```
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
```
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
But let's try with some other words.
```
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
```
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.
### 3.1 - Neutralize bias for non-gender specific words
The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$.
Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.
<img src="images/neutral.png" style="width:800px;height:300px;">
<caption><center> **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. </center></caption>
**Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:
$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$
If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.
<!--
**Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:
$$u = u_B + u_{\perp}$$
where : $u_B = $ and $ u_{\perp} = u - u_B $
!-->
```
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
from numpy import linalg as LA
norm_g=LA.norm(g)
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = word_to_vec_map[word]
# Compute e_biascomponent using the formula given above. (≈ 1 line)
e_biascomponent = np.dot(e, g) * g / (norm_g * norm_g)
# Neutralize e by substracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
```
**Expected Output**: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$).
<table>
<tr>
<td>
**cosine similarity between receptionist and g, before neutralizing:** :
</td>
<td>
0.330779417506
</td>
</tr>
<tr>
<td>
**cosine similarity between receptionist and g, after neutralizing:** :
</td>
<td>
-3.26732746085e-17
</td>
</tr>
</table>
### 3.2 - Equalization algorithm for gender-specific words
Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.
The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized embeddings are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:
<img src="images/equalize10.png" style="width:800px;height:400px;">
The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:
$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$
$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{5}$$
$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$
$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{7}$$
$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}
\tag{8}$$
$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||_2} \tag{9}$$
$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||_2} \tag{10}$$
$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$
$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$
**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
```
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = pair
e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = (e_w1 + e_w2) / 2
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = np.dot(mu, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
mu_orth = mu - mu_B
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = np.dot(e_w1, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
e_w2B = np.dot(e_w2, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w1B - mu_B) / np.linalg.norm((e_w1 - mu_orth) - mu_B)
corrected_e_w2B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w2B - mu_B) / np.linalg.norm((e_w2 - mu_orth) - mu_B)
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
```
**Expected Output**:
cosine similarities before equalizing:
<table>
<tr>
<td>
**cosine_similarity(word_to_vec_map["man"], gender)** =
</td>
<td>
-0.117110957653
</td>
</tr>
<tr>
<td>
**cosine_similarity(word_to_vec_map["woman"], gender)** =
</td>
<td>
0.356666188463
</td>
</tr>
</table>
cosine similarities after equalizing:
<table>
<tr>
<td>
**cosine_similarity(u1, gender)** =
</td>
<td>
-0.700436428931
</td>
</tr>
<tr>
<td>
**cosine_similarity(u2, gender)** =
</td>
<td>
0.700436428931
</td>
</tr>
</table>
Please feel free to play with the input words in the cell above, to apply equalization to other pairs of words.
These debiasing algorithms are very helpful for reducing bias, but are not perfect and do not eliminate all traces of bias. For example, one weakness of this implementation was that the bias direction $g$ was defined using only the pair of words _woman_ and _man_. As discussed earlier, if $g$ were defined by computing $g_1 = e_{woman} - e_{man}$; $g_2 = e_{mother} - e_{father}$; $g_3 = e_{girl} - e_{boy}$; and so on and averaging over them, you would obtain a better estimate of the "gender" dimension in the 50 dimensional word embedding space. Feel free to play with such variants as well.
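The averaging idea described above can be sketched directly. The helper name `gender_direction` and the example pairs are our own illustration, not part of the assignment's utilities:

```python
import numpy as np

def gender_direction(pairs, word_to_vec_map):
    """Estimate a bias direction by averaging the difference vectors
    over several (female, male) word pairs."""
    diffs = [word_to_vec_map[f] - word_to_vec_map[m] for f, m in pairs]
    return np.mean(diffs, axis=0)

# hypothetical usage with the GloVe map loaded earlier:
# g_avg = gender_direction([('woman', 'man'), ('mother', 'father'),
#                           ('girl', 'boy')], word_to_vec_map)
```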
### Congratulations
You have come to the end of this notebook, and have seen a lot of the ways that word vectors can be used as well as modified.
Congratulations on finishing this notebook!
**References**:
- The debiasing algorithm is from Bolukbasi et al., 2016, [Man is to Computer Programmer as Woman is to
Homemaker? Debiasing Word Embeddings](https://papers.nips.cc/paper/6228-man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf)
- The GloVe word embeddings were due to Jeffrey Pennington, Richard Socher, and Christopher D. Manning. (https://nlp.stanford.edu/projects/glove/)
```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
$('div.output_prompt').css('opacity', 0); // do not show output prompt
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to hide/unhide code."></form>''')
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import pandas as pd
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv(r'your_file')
df.head()
```
### All Associates:
```
criteria1 = df.ANNUAL_RT > 0
criteria2 = df.SAL_ADMIN_PLAN.isin(['ENG','EXC'])
criteria3 = df.COMPANY == 'HAM'
engineers = df[criteria1 & criteria2 & criteria3]
fig, axes = plt.subplots(figsize=(10,6))
sns.boxplot(x='ANNUAL_RT', y='JOBTITLE', data=engineers, ax=axes)
sns.despine()
plt.title('Annual Salary based on 2013 Data')
plt.xlabel('')
plt.ylabel('')
plt.show()
```
### Let's Limit to Just Engineering or Executive Staff Members and Control Sort Order:
```
criteria1 = df.JOBTITLE.isin(['Engineering Coordinator', 'Staff Engineer', 'Engineering Staff',
'Senior Staff Engineer', 'Associate Chief Engineer', 'Chief Engineer'])
criteria2 = df.ANNUAL_RT > 0
criteria3 = df.SAL_ADMIN_PLAN.isin(['ENG','EXC'])
criteria4 = df.COMPANY == 'HAM'
engineers = df[criteria1 & criteria2 & criteria3 & criteria4]
engineers['JOBTITLE'] = engineers['JOBTITLE'].astype('category')
engineers.JOBTITLE.cat.set_categories(['Chief Engineer','Associate Chief Engineer','Senior Staff Engineer',
'Staff Engineer','Engineering Coordinator','Engineering Staff'], inplace=True)
fig, axes = plt.subplots(figsize=(10,6))
sns.boxplot(x='ANNUAL_RT', y='JOBTITLE', data=engineers, ax=axes)
sns.despine()
plt.title('Engineering Track Annual Salary based on 2012 Data')
plt.xlabel('')
plt.ylabel('')
plt.show()
```
### Add Grid Lines and Re-Format X-Axis Labels:
```
fig, axes = plt.subplots(figsize=(12, 6))
axes.xaxis.set_major_locator(ticker.MultipleLocator(10000))
# Create boxplot and make the facecolor more transparent
sns.boxplot(x="ANNUAL_RT", y="JOBTITLE", data=engineers, ax=axes)
for patch in axes.artists:
r, g, b, a = patch.get_facecolor()
patch.set_facecolor((r, g, b, 0.5))
# Add strip/jitter plot with semi-transparent data circles
sns.stripplot(x="ANNUAL_RT", y="JOBTITLE", data=engineers,
size=5, jitter=0.2, edgecolor="white", alpha=0.5)
sns.despine(offset=10, trim=True)
plt.title('Engineering Track - Annual Salary based on 2012 Data', fontsize=14)
plt.grid(True)
plt.xlabel('')
plt.ylabel('')
fig.tight_layout()
labels = [str(int(int(item.get_text())/1000))+"K" for item in axes.get_xticklabels()]
axes.set_xticklabels(labels)
plt.show()
```
### Let's Look at Salaries by Sex:
```
criteria1 = df.JOBTITLE.isin(['Engineering Coordinator', 'Staff Engineer', 'Engineering Staff',
'Senior Staff Engineer', 'Associate Chief Engineer', 'Chief Engineer'])
criteria2 = df.ANNUAL_RT > 0
criteria3 = df.SAL_ADMIN_PLAN.isin(['ENG','EXC'])
criteria4 = df.COMPANY == 'HAM'
engineers = df[criteria1 & criteria2 & criteria3 & criteria4]
engineers['JOBTITLE'] = engineers['JOBTITLE'].astype('category')
engineers.JOBTITLE.cat.set_categories(['Chief Engineer','Associate Chief Engineer','Senior Staff Engineer',
'Staff Engineer','Engineering Coordinator','Engineering Staff'], inplace=True)
fig, axes = plt.subplots(figsize=(10,6))
sns.boxplot(x='ANNUAL_RT', y='JOBTITLE', data=engineers, hue='SEX', ax=axes)
sns.despine()
plt.title('Engineering Track - Annual Salary based on 2012 Data')
plt.xlabel('')
plt.ylabel('')
plt.show()
median = pd.pivot_table(engineers, values='ANNUAL_RT', index='JOBTITLE', aggfunc='median')
median
engineers.groupby(by='JOBTITLE').describe()
```
# LAB 03: Basic Feature Engineering in Keras
**Learning Objectives**
1. Create an input pipeline using tf.data
2. Engineer features to create categorical, crossed, and numerical feature columns
## Introduction
In this lab, we utilize feature engineering to improve the prediction of housing prices using a Keras Sequential Model.
Each learning objective will correspond to a __#TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/feature_engineering/solutions/3_keras_basic_feat_eng.ipynb) for reference.
Start by importing the necessary libraries for this lab.
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Install Sklearn
!python3 -m pip install --user sklearn
import os
import tensorflow.keras
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
#from keras.utils import plot_model
print("TensorFlow version: ",tf.version.VERSION)
```
Many of the Google Machine Learning Courses Programming Exercises use the [California Housing Dataset](https://developers.google.com/machine-learning/crash-course/california-housing-data-description), which contains data drawn from the 1990 U.S. Census. Our lab dataset has been pre-processed so that there are no missing values.
First, let's download the raw .csv data by copying the data from a cloud storage bucket.
```
if not os.path.isdir("../data"):
os.makedirs("../data")
!gsutil cp gs://cloud-training-demos/feat_eng/housing/housing_pre-proc.csv ../data
!ls -l ../data/
```
Now, let's read in the dataset just copied from the cloud storage bucket and create a Pandas dataframe.
```
housing_df = pd.read_csv('../data/housing_pre-proc.csv', error_bad_lines=False)
housing_df.head()
```
We can use .describe() to see some summary statistics for the numeric fields in our dataframe. Note, for example, the count row and corresponding columns. The count shows 20433.000000 for all feature columns. Thus, there are no missing values.
```
housing_df.describe()
```
#### Split the dataset for ML
The dataset we loaded was a single CSV file. We will split this into train, validation, and test sets.
```
train, test = train_test_split(housing_df, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
```
Now, we need to output the split files. We will specifically need the test.csv later for testing. You should see the files appear in the home directory.
```
train.to_csv('../data/housing-train.csv', encoding='utf-8', index=False)
val.to_csv('../data/housing-val.csv', encoding='utf-8', index=False)
test.to_csv('../data/housing-test.csv', encoding='utf-8', index=False)
!head ../data/housing*.csv
```
## Lab Task 1: Create an input pipeline using tf.data
Next, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model.
Here, we create an input pipeline using tf.data. This function is missing two lines. Correct and run the cell.
```
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
# TODO 1a -- Your code here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
```
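One possible completion of the two missing lines, assuming the label column is `median_house_value` (as in this dataset); refer to the linked solution notebook for the official version:

```python
import pandas as pd
import tensorflow as tf

def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    dataframe = dataframe.copy()
    labels = dataframe.pop('median_house_value')  # assumed label column
    # build a dataset of ({feature_name: value}, label) pairs
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    ds = ds.batch(batch_size)
    return ds
```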
Next we initialize the training and validation datasets.
```
batch_size = 32
train_ds = df_to_dataset(train)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
```
Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
```
# TODO 1b -- Your code here
```
We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
#### Numeric columns
The output of a feature column becomes the input to the model. A numeric column is the simplest type of column. It is used to represent real-valued features. When using this column, your model will receive the column value from the dataframe unchanged.
In the California housing prices dataset, most columns from the dataframe are numeric. Let's create a variable called **numeric_cols** to hold only the numerical feature columns.
```
# TODO 1c -- Your code here
```
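One way to collect those names programmatically (the helper `numeric_feature_names` and the default label name are our assumptions matching this dataset, not part of the lab's starter code):

```python
import pandas as pd

def numeric_feature_names(df, label='median_house_value'):
    """Return the names of numeric columns, excluding the label column."""
    cols = df.select_dtypes(include='number').columns.tolist()
    return [c for c in cols if c != label]

# hypothetical usage: numeric_cols = numeric_feature_names(housing_df)
```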
#### Scaler function
It is very important to scale numerical variables before they are "fed" into the neural network. Here we use min-max scaling: we create a function named `get_scal` that takes a list of numerical features and returns a `minmax` function, which is passed to `tf.feature_column.numeric_column()` as the `normalizer_fn` parameter. The `minmax` function itself takes a numerical value from a particular feature and returns the scaled value of that number.
Next, we scale the numerical feature columns that we assigned to the variable "numeric cols".
```
# Scaler: def get_scal(feature):
# TODO 1d -- Your code here
# TODO 1e -- Your code here
```
Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
```
print('Total number of feature columns: ', len(feature_columns))
```
### Using the Keras Sequential Model
Next, we will run this cell to compile and fit the Keras Sequential model.
```
# Model create
feature_layer = tf.keras.layers.DenseFeatures(feature_columns, dtype='float64')
model = tf.keras.Sequential([
feature_layer,
layers.Dense(12, input_dim=8, activation='relu'),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='linear', name='median_house_value')
])
# Model compile
model.compile(optimizer='adam',
loss='mse',
metrics=['mse'])
# Model Fit
history = model.fit(train_ds,
validation_data=val_ds,
epochs=32)
```
Next, we show the loss as mean squared error (MSE). Remember that MSE is the most commonly used regression loss function: it is the mean of the squared distances between our target variable (median house value) and the predicted values.
```
loss, mse = model.evaluate(train_ds)
print("Mean Squared Error", mse)
```
#### Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
```
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
plot_curves(history, ['loss', 'mse'])
```
### Load test data
Next, we read in the test.csv file and validate that there are no null values.
Again, we can use .describe() to see some summary statistics for the numeric fields in our dataframe. The count shows 4087.000000 for all feature columns. Thus, there are no missing values.
```
test_data = pd.read_csv('../data/housing-test.csv')
test_data.describe()
```
Now that we have created an input pipeline using tf.data and compiled a Keras Sequential Model, we now create the input function for the test data and to initialize the test_predict variable.
```
# TODO 1f -- Your code here
test_predict = test_input_fn(dict(test_data))
```
#### Prediction: Linear Regression
Before we begin to feature engineer our feature columns, we should predict the median house value. By predicting the median house value now, we can then compare it with the median house value after feature engineering.
To predict with Keras, you simply call [model.predict()](https://keras.io/models/model/#predict) and pass in the housing features you want to predict the median_house_value for. Note: We are predicting the model locally.
```
predicted_median_house_value = model.predict(test_predict)
```
Next, we run two predictions in separate cells: one where `ocean_proximity` is `INLAND` and one where it is `NEAR OCEAN`.
```
# Ocean_proximity is INLAND
model.predict({
'longitude': tf.convert_to_tensor([-121.86]),
'latitude': tf.convert_to_tensor([39.78]),
'housing_median_age': tf.convert_to_tensor([12.0]),
'total_rooms': tf.convert_to_tensor([7653.0]),
'total_bedrooms': tf.convert_to_tensor([1578.0]),
'population': tf.convert_to_tensor([3628.0]),
'households': tf.convert_to_tensor([1494.0]),
'median_income': tf.convert_to_tensor([3.0905]),
'ocean_proximity': tf.convert_to_tensor(['INLAND'])
}, steps=1)
# Ocean_proximity is NEAR OCEAN
model.predict({
'longitude': tf.convert_to_tensor([-122.43]),
'latitude': tf.convert_to_tensor([37.63]),
'housing_median_age': tf.convert_to_tensor([34.0]),
'total_rooms': tf.convert_to_tensor([4135.0]),
'total_bedrooms': tf.convert_to_tensor([687.0]),
'population': tf.convert_to_tensor([2154.0]),
'households': tf.convert_to_tensor([742.0]),
'median_income': tf.convert_to_tensor([4.9732]),
'ocean_proximity': tf.convert_to_tensor(['NEAR OCEAN'])
}, steps=1)
```
Each array returns a predicted value. What do these numbers mean? Let's compare them to the test set.
Go back to the test.csv you read in a few cells up. Locate the first line and find the median_house_value, which should be 249,000 dollars near the ocean. What value did your model predict for the median_house_value? Was it solid model performance? Let's see if we can improve it a bit with feature engineering!
## Lab Task 2: Engineer features to create categorical and numerical features
Now we create a cell that indicates which features will be used in the model.
Note: Be sure to bucketize 'housing_median_age' and ensure that 'ocean_proximity' is one-hot encoded. And, don't forget your numeric values!
```
# TODO 2a -- Your code here
```
Next, we scale the numerical, bucketized, and categorical feature columns that we assigned to the variables in the preceding cell.
```
# Scaler function
def get_scal(feature):
def minmax(x):
mini = train[feature].min()
maxi = train[feature].max()
return (x - mini)/(maxi-mini)
return(minmax)
# All numerical features - scaling
feature_columns = []
for header in numeric_cols:
scal_input_fn = get_scal(header)
feature_columns.append(fc.numeric_column(header,
normalizer_fn=scal_input_fn))
```
### Categorical Feature
In this dataset, 'ocean_proximity' is represented as a string. We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector.
Next, we create a categorical feature using 'ocean_proximity'.
```
# TODO 2b -- Your code here
```
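Under the hood, a categorical vocabulary column maps each string to a one-hot vector over a fixed vocabulary. A minimal sketch of that mapping (the vocabulary below is assumed from the dataset's known categories):

```python
def one_hot(value, vocabulary):
    # Map a string category to a one-hot vector over a fixed vocabulary
    vec = [0] * len(vocabulary)
    vec[vocabulary.index(value)] = 1
    return vec

vocab = ['<1H OCEAN', 'INLAND', 'NEAR BAY', 'NEAR OCEAN', 'ISLAND']
print(one_hot('INLAND', vocab))  # [0, 1, 0, 0, 0]
```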
### Bucketized Feature
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider our raw data that represents a home's age. Instead of representing the house age as a numeric column, we could split the home age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
Next we create a bucketized column using 'housing_median_age'
```
# TODO 2c -- Your code here
```
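A bucketized column is essentially a threshold lookup: values below the first boundary fall into bucket 0, and each boundary starts a new bucket. A small sketch using Python's `bisect` (the boundary values are illustrative, not necessarily the lab's exact choice):

```python
import bisect

def bucketize(value, boundaries):
    # Index of the bucket `value` falls into; len(boundaries) + 1 buckets total,
    # mirroring tf.feature_column.bucketized_column semantics
    return bisect.bisect_right(boundaries, value)

boundaries = [10, 20, 30, 40, 50]  # assumed housing_median_age boundaries
print(bucketize(12.0, boundaries))  # 1
print(bucketize(34.0, boundaries))  # 3
```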
### Feature Cross
Combining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/#feature_cross), enables a model to learn separate weights for each combination of features.
Next, we create a feature cross of 'housing_median_age' and 'ocean_proximity'.
```
# TODO 2d -- Your code here
```
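Conceptually, a crossed column assigns one id to each *combination* of its inputs, then hashes that id into a fixed number of buckets. A sketch of the idea (the hash-bucket size is an assumed illustration; TensorFlow's real `crossed_column` uses a string-fingerprint hash rather than this arithmetic):

```python
def cross(age_bucket, ocean_idx, n_ocean, hash_bucket_size=1000):
    # Combine two categorical ids into a single id, then fold into fixed buckets
    combined = age_bucket * n_ocean + ocean_idx
    return combined % hash_bucket_size

# e.g. age bucket 3 crossed with ocean_proximity category 1 (of 5 categories)
print(cross(3, 1, 5))  # 16
```

Because every (age bucket, proximity) pair gets its own id, the model can learn a separate weight for, say, "old house near the ocean" versus "old house inland".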
Next, we should validate the total number of feature columns. Compare this number to the number of numeric features you input earlier.
```
print('Total number of feature columns: ', len(feature_columns))
```
Next, we will run this cell to compile and fit the Keras Sequential model. This is the same model we ran earlier.
```
# Model create
feature_layer = tf.keras.layers.DenseFeatures(feature_columns,
dtype='float64')
model = tf.keras.Sequential([
feature_layer,
layers.Dense(12, input_dim=8, activation='relu'),
layers.Dense(8, activation='relu'),
layers.Dense(1, activation='linear', name='median_house_value')
])
# Model compile
model.compile(optimizer='adam',
loss='mse',
metrics=['mse'])
# Model Fit
history = model.fit(train_ds,
validation_data=val_ds,
epochs=32)
```
Next, we show loss and mean squared error then plot the model.
```
loss, mse = model.evaluate(train_ds)
print("Mean Squared Error", mse)
plot_curves(history, ['loss', 'mse'])
```
Next, we create a prediction model. Note: You may use the same values from the previous prediction.
```
# TODO 2e -- Your code here
```
### Analysis
The array returns a predicted value. Compare this value to the test set you ran earlier. Your predicted value may be a bit better.
Now that you have your "feature engineering template" setup, you can experiment by creating additional features. For example, you can create derived features, such as households per population, and see how they impact the model. You can also experiment with replacing the features you used to create the feature cross.
Copyright 2020 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Spark Learning Note - Recommendation
Jia Geng | gjia0214@gmail.com
<a id='directory'></a>
## Directory
- [Data Source](https://github.com/databricks/Spark-The-Definitive-Guide/tree/master/data/)
- [1. Alternating Least Squares and Collaborative Filtering](#sec1)
- [2. Model Params](#sec2)
- [3. Evaluator and Metrics](#sec3)
- [3.1 Regression Metrics](#sec3-1)
- [3.2 Ranking Metrics](#sec3-2)
```
from pyspark.sql.session import SparkSession
spark = SparkSession.builder.appName('MLexample').getOrCreate()
spark
data_path = '/home/jgeng/Documents/Git/SparkLearning/book_data/sample_movielens_ratings.txt'
data = spark.read.text(data_path)
data.show(1)
# how to convert strings into array of strings
data = data.selectExpr("split(value, '::') as col")
data.show(1, False)
# how to convert array of strings into columns
data = data.selectExpr('cast(col[0] as int) as userID',
'cast(col[1] as int) as movieID',
'cast(col[2] as float) as rating',
'cast(col[3] as long) as timestamp')
data.show(1)
train, test = data.randomSplit([0.8, 0.2])
train.cache()
test.cache()
print(train.count(), test.count())
print(train.where('rating is null').count())
```
## 1. Alternating Least Squares and Collaborative Filtering <a id='sec1'></a>
Spark has an implementation of Alternating Least Squares (ALS) for collaborative filtering. ALS finds a low-dimensional feature vector for each user and item such that the dot product of each user's feature vector with each item's feature vector approximates the user's rating for that item. The dataset should include existing ratings between user-item pairs:
- a user ID column (need to be int)
- an item ID column (need to be int)
- a rating column (need to be a float)
- the rating can be explicit: a numerical rating that the system should predict directly
- or implicit: rating represents the strength of interactions between a user and item (e.g. number of visits to a particular page)
The goal of the recommendation system is that, given an input data frame, the model will produce feature vectors that can be used to predict users' ratings for items they have not yet rated.
Some potential problem of such system - **cold start problems**:
- when introducing a new product that no user has expressed a preference for, the algorithm is not going to recommend it to many people.
- if a new user is onboarding onto the platform, they might not have many ratings yet, so the algorithm won't know what to recommend to them.
MLlib can scale the algorithm to millions of users, millions of items, and billions of ratings.
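To make the alternation concrete, here is a tiny NumPy sketch of the ALS updates on a fully observed ratings matrix. Spark's implementation handles missing entries, data blocks, and scale; this toy version only shows how fixing one factor matrix turns the other's update into a regularized least squares solve:

```python
import numpy as np

# Tiny ratings matrix: rows = users, cols = items (fully observed for simplicity)
R = np.array([[5.0, 3.0, 1.0],
              [4.0, 2.0, 1.0],
              [1.0, 1.0, 5.0],
              [1.0, 2.0, 4.0]])

rank, reg, iters = 2, 0.1, 20
rng = np.random.default_rng(0)
U = rng.normal(size=(R.shape[0], rank))  # user factors
V = rng.normal(size=(R.shape[1], rank))  # item factors

def solve(fixed, ratings, reg):
    # Regularized least squares: (F^T F + reg*I) x = F^T r, solved for all rows at once
    A = fixed.T @ fixed + reg * np.eye(fixed.shape[1])
    return np.linalg.solve(A, fixed.T @ ratings.T).T

for _ in range(iters):
    U = solve(V, R, reg)    # fix item factors, solve for user factors
    V = solve(U, R.T, reg)  # fix user factors, solve for item factors

err = np.linalg.norm(R - U @ V.T)  # reconstruction error shrinks as ALS converges
```

After a handful of iterations, `U @ V.T` closely approximates `R`, and the dot product of a user row with an item row serves as the predicted rating.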
[back to top](#directory)
## 2. Model Params <a id='sec2'></a>
**Hyperparams**
|Name|Input|Notes|
|-|-|-|
|rank|int|the dimension of the feature vectors learned for users and items. **Controls the bias and variance trade off.** Default is 10.
|alpha|float|when training on implicit feedback, alpha sets a baseline confidence for preference. default is 1.0
|regParam|float|default is 0.1
|implicitPrefs|bool|whether to train on implicit or explicit feedback. default is explicit
|nonnegative|bool|whether to place non-negative (feature) constraints on the least squares problem. default is False.
**Training Params**
|Name|Input|Notes|
|-|-|-|
|numUserBlocks|int|how many blocks to split the user into. default is 10|
|numItemBlocks|int|how many blocks to split the items into. default is 10|
|maxIter|int|total number of iterations over the data before stopping. default is 10
|checkpointInterval|int|allow saving the checkpoints during training
|seed|int|random seed for replicating results
Rule of thumb to decide how much data to be put in each block:
- one to five millions ratings per block
- if data is less than that in each block, more blocks will not improve the algorithm performance.
**Prediction Params**
|Name|Input|Notes|
|-|-|-|
|coldStartStrategy|'nan', 'drop'| strategy for dealing with unknown or new users/items at prediction time. This may be useful in cross-validation or production scenarios, for handling user/item ids the model has not seen in the training data.
By default, Spark assigns NaN prediction values when encountering a user that is not present in the model. However, if this happens during evaluation (for example, in cross-validation), the NaN values will ruin the evaluator's ability to properly measure the success of the model.
Set to drop will drop any rows that contains NaN prediction so that the rest of the data will become valid for evaluation.
[back to top](#directory)
```
from pyspark.ml.recommendation import ALS
print(ALS().explainParams())
train.printSchema()
als = ALS().setMaxIter(5)\
.setRegParam(0.01)\
.setUserCol('userID')\
.setItemCol('movieID')\
.setRatingCol('rating')
alsclf = als.fit(train)
predictions = alsclf.transform(test)
predictions.show(20, False)
predictions.count()
```
## 3. Evaluator and Metrics <a id='sec3'></a>
[back to top](#directory)
### 3.1 Regression Metrics <a id='sec3-1'></a>
We can use a regression evaluator to measure the rmse of the prediction on the rating and the actual rating.
[back to top](#directory)
```
from pyspark.mllib.evaluation import RegressionMetrics
# get the rdd data
input_data = predictions.select('rating', 'prediction').rdd.map(lambda x: (x[0], x[1]))
input_data
# compute the regression metrics
metrics = RegressionMetrics(input_data)
metrics.rootMeanSquaredError
```
### 3.2 Ranking Metrics <a id='sec3-2'></a>
There is another tool provided in Spark, `RankingMetrics`, that provides a more sophisticated way of measuring the performance of a recommendation system. Ranking metrics do not focus on the value of the rating; they focus on whether the algorithm recommends an already highly-rated item to a user again.
For example, if there is a movie that a person gave a very high rating, will the system recommend this movie to that person?
From a high-level point of view, we can:
- predict the person's rating on every movie in the dataset
- rank the movie by predicted ratings
- check whether the high-rated movie is associate with a high rank
In spark, we do:
- train model and make predictions on the testing set
- set up a ranking threshold to represent the 'high ranking'
- filter the ground truth ==> aggregate on user to put the rated items into a list (DF A)
- filter the predicted ranking ==> aggregate on user to put the rated items into a list (DF B)
- join A and B on the users
- call the ranking metrics on the joined DF's RDD data (Only take top k from the prediction columns as recommendations)
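The metric computed at the end of that recipe can be sketched in plain Python: precision at $k$ is, per user, the fraction of the top-$k$ recommendations that appear in that user's ground-truth list, averaged over users:

```python
def precision_at_k(truths, preds, k):
    # Mean over users of (relevant items in the top-k recommendations) / k
    scores = []
    for truth, pred in zip(truths, preds):
        top_k = pred[:k]
        scores.append(len(set(top_k) & set(truth)) / k)
    return sum(scores) / len(scores)

truths = [[1, 2, 3], [4, 5]]   # each user's highly-rated items
preds = [[2, 9, 1], [5, 6, 7]]  # each user's ranked recommendations
print(precision_at_k(truths, preds, 3))  # (2/3 + 1/3) / 2 = 0.5
```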
[back to top](#directory)
```
from pyspark.sql.functions import expr
# get the all movies with high actual rating (>2.5 for this case)
filtered_truth = predictions.where('rating > 2.5')
# use collect set to aggregate the high rating movies for each user
agg_truth = filtered_truth.groupBy('userID').agg(expr('collect_set(movieID) as truths'))
agg_truth.show(3)
# get all movies with high predicted rating
filtered_pred = predictions.where('prediction > 2.5')
agg_pred = filtered_pred.groupBy('userID').agg(expr('collect_set(movieID) as predictions'))
agg_pred.show(3)
# join the two DF
joined = agg_truth.join(agg_pred, 'userID')
joined.show(3)
from pyspark.mllib.evaluation import RankingMetrics
k = 15 # recommend top k from predictions
rdds = joined.rdd.map(lambda row: (row[1], row[2][:k]))
metrics = RankingMetrics(rdds)
metrics.meanAveragePrecision
metrics.precisionAt(5)
```
```
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import RandomUnderSampler
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn import metrics
from sklearn.utils import class_weight
```
# Imbalanced Data Preprocessing
Strategies for balancing highly imbalanced datasets:
* Oversample
- Oversample the minority class to balance the dataset
- Can create synthetic data based on the minority class
* Undersample
- Remove majority class data (not preferred)
* Weight Classes
- Use class weights to make minority class data more prominent
Let's use the red wine dataset to start with to demonstrate a highly imbalanced data set with very few high and low quality wine ratings.
```
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv', sep=';')
df.quality.value_counts()
```
Set the features to use for our prediction
```
features = df[['volatile acidity', 'citric acid', 'sulphates', 'alcohol']]
#features = df.drop(columns='quality')
```
Set the target value for our prediction
```
target = df['quality']
```
Split the dataset into a training and test dataset.
```
xtrain, xtest, ytrain, ytrue = train_test_split(features, target)
```
Visualize the imbalanced nature of the dataset's outcome classes.
```
count = ytrue.value_counts()
count.plot.bar()
plt.ylabel('Number of records')
plt.xlabel('Target Class')
plt.show()
```
## Base Model - Imbalanced Data
Using a simple Decision Tree Classifier to demonstrate the changes in prediction quality based on using different techniques to deal with imbalanced data.
```
model = DecisionTreeClassifier()
model.fit(xtrain, ytrain)
y_pred = model.predict(xtest)
print(f'Accuracy Score: {metrics.accuracy_score(ytrue, y_pred)}')
print(f'Precision Score: {metrics.precision_score(ytrue, y_pred, average="macro")}')
print(f'Recall Score: {metrics.recall_score(ytrue, y_pred, average="macro")}')
print(f'F1 Score: {metrics.f1_score(ytrue, y_pred, average="macro")}')
```
## Oversampling
Using the Imbalanced-Learn module, which is built on top of scikit learn, there are a number of options for oversampling (and undersampling) your training data. The most basic is the `RandomOverSampler()` function, which has a couple of different options:
* `'auto'` (default: `'not majority'`)
* `'minority'`
* `'not majority'`
* `'not minority'`
* `'all'`
There are also a host of other possibilities to create synthetic data (e.g., SMOTE)
https://imbalanced-learn.org/stable/over_sampling.html#
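What `RandomOverSampler()` does under its default `'not majority'` strategy can be sketched with NumPy: duplicate samples from every non-majority class, with replacement, until all classes match the majority count. This is a simplified sketch, not imbalanced-learn's actual implementation:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    # Resample each minority class (with replacement) up to the majority count
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for cls, count in zip(classes, counts):
        cls_idx = np.flatnonzero(y == cls)
        idx.append(cls_idx)                  # keep all original samples
        if count < target:
            # draw duplicates at random to fill the gap
            idx.append(rng.choice(cls_idx, size=target - count, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

X = np.arange(10).reshape(5, 2)
y = np.array([0, 0, 0, 1, 1])
X_res, y_res = random_oversample(X, y)
print(np.bincount(y_res))  # [3 3]
```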
```
ros = RandomOverSampler()
X_resampled, y_resampled = ros.fit_resample(xtrain, ytrain)
value_counts = np.unique(y_resampled, return_counts=True)
for val, count in zip(value_counts[0], value_counts[1]):
print(val, count)
```
Let's look at the resampled data to confirm that we now have a balanced dataset.
```
plt.bar(value_counts[0], value_counts[1])
plt.ylabel('Number of records')
plt.xlabel('Target Class')
plt.show()
```
Now let's try our prediction with the oversampled data
```
model = DecisionTreeClassifier()
model.fit(X_resampled, y_resampled)
y_pred = model.predict(xtest)
print(f'Accuracy Score: {metrics.accuracy_score(ytrue, y_pred)}')
print(f'Precision Score: {metrics.precision_score(ytrue, y_pred, average="macro")}')
print(f'Recall Score: {metrics.recall_score(ytrue, y_pred, average="macro")}')
print(f'F1 Score: {metrics.f1_score(ytrue, y_pred, average="macro")}')
```
So from this, we were able to improve the accuracy, precision, and recall of our model!
## Weighting
Determining weights is a balance of different factors and is partially affected by the size of the imbalance. scikit-learn has a function to help compute weights that produce balanced classes, called `compute_class_weight`, from the `class_weight` portion of the module.
To get the balanced weights use:
`class_weight='balanced'`
and the model automatically assigns the class weights inversely proportional to their respective frequencies.
If the classes are too imbalanced, you might find better success by assigning weights to each class using a dictionary.
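scikit-learn's `'balanced'` heuristic has a simple closed form — `n_samples / (n_classes * count(class))` — which we can reproduce directly to see why rare classes get large weights:

```python
import numpy as np

def balanced_weights(y):
    # scikit-learn's 'balanced' heuristic: n_samples / (n_classes * class_count)
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return dict(zip(classes, weights))

y = np.array([0] * 90 + [1] * 10)
print(balanced_weights(y))  # class 1 is 9x rarer, so its weight is 9x larger
```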
```
classes = np.unique(ytrain)
cw = class_weight.compute_class_weight('balanced', classes=classes, y=ytrain)
weights = dict(zip(classes, cw))
print(weights)
```
Now let's use our Decision Tree Model with the class weights calculated above.
```
model = DecisionTreeClassifier(class_weight=weights)
model.fit(xtrain, ytrain)
y_pred = model.predict(xtest)
print(f'Accuracy Score: {metrics.accuracy_score(ytrue, y_pred)}')
print(f'Precision Score: {metrics.precision_score(ytrue, y_pred, average="macro")}')
print(f'Recall Score: {metrics.recall_score(ytrue, y_pred, average="macro")}')
print(f'F1 Score: {metrics.f1_score(ytrue, y_pred, average="macro")}')
```
So improved over our initial model, but not as much as the oversampled model in this case.
## Credit Card Fraud - Logistic Regression
```
# load the data set
data = pd.read_csv('http://bergeron.valpo.edu/creditcard.csv')
# normalise the amount column
data['normAmount'] = StandardScaler().fit_transform(np.array(data['Amount']).reshape(-1, 1))
# drop Time and Amount columns as they are not relevant for prediction purpose
data = data.drop(['Time', 'Amount'], axis = 1)
# as you can see there are 492 fraud transactions.
print(data['Class'].value_counts())
plt.figure(figsize=(8, 8))
plt.bar([0, 1], data['Class'].value_counts(), tick_label=['Not Fraud', 'Fraud'])
plt.text(0, 286000, data['Class'].value_counts()[0], ha='center', fontsize=16)
plt.text(1, 10000, data['Class'].value_counts()[1], ha='center', fontsize=16)
plt.show()
X = data.drop(columns=['Class'])
y = data['Class']
# split into 70:30 ration
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)
# describes info about train and test set
print("Number transactions X_train dataset: ", X_train.shape)
print("Number transactions y_train dataset: ", y_train.shape)
print("Number transactions X_test dataset: ", X_test.shape)
print("Number transactions y_test dataset: ", y_test.shape)
```
## Base Model - Imbalanced Data
```
# logistic regression object
lr = LogisticRegression()
# train the model on train set
lr.fit(X_train, y_train.ravel())
predictions = lr.predict(X_test)
# print classification report
print(metrics.classification_report(y_test, predictions))
```
So our prediction leaves a lot to be desired as we have a very low recall of the fraud cases.
Let's try our hand at creating some synthetic data for resampling the minority class using SMOTE (Synthetic Minority Oversampling Technique)
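SMOTE's core move is simple: pick a minority sample, pick one of its nearest minority neighbors, and place a synthetic point a random fraction of the way between them. A minimal NumPy sketch of that interpolation (imbalanced-learn's version adds proper k-NN machinery and sampling strategies):

```python
import numpy as np

def smote_sketch(X_min, n_new, k=2, seed=0):
    # For each synthetic point: pick a minority sample, pick one of its k nearest
    # minority neighbors, and interpolate a random fraction of the way between them
    rng = np.random.default_rng(seed)
    new_points = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]  # skip the sample itself at position 0
        j = rng.choice(neighbors)
        gap = rng.random()                      # how far along the segment to go
        new_points.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(new_points)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy minority class
synthetic = smote_sketch(X_min, n_new=4)
print(synthetic.shape)  # (4, 2)
```

Every synthetic point lies on a segment between two real minority samples, which is why SMOTE tends to fill in the minority region rather than just duplicating points.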
```
sm = SMOTE(sampling_strategy='minority', random_state = 2)
X_train_res, y_train_res = sm.fit_resample(X_train, y_train)
lr1 = LogisticRegression()
lr1.fit(X_train_res, y_train_res)
predictions = lr1.predict(X_test)
# print classification report
print(metrics.classification_report(y_test, predictions))
```
Our model's recall of fraud cases has improved greatly from our original model and our non-fraud recall has not suffered much at all.
We can also use a different threshold for predicting the fraud case. Instead of the standard >0.5 threshold, we could set 0.6 or 0.7 to improve the precision without harming the recall too much.
```
predictions = (lr1.predict_proba(X_test)[:,1]>=0.7).astype(int)
# print classification report
print(metrics.classification_report(y_test, predictions))
```
<img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
© Copyright Quantopian Inc.<br>
© Modifications Copyright QuantRocket LLC<br>
Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
<a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# Introduction to pandas
by Maxwell Margenot
pandas is a Python library that provides a collection of powerful data structures to better help you manage data. In this lecture, we will cover how to use the `Series` and `DataFrame` objects to handle data. These objects have a strong integration with NumPy, allowing us to easily do the necessary statistical and mathematical calculations that we need for finance.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
With pandas, it is easy to store, visualize, and perform calculations on your data. With only a few lines of code we can modify our data and present it in an easily-understandable way. Here we simulate some returns in NumPy, put them into a pandas `DataFrame`, and perform calculations to turn them into prices and plot them, all only using a few lines of code.
```
returns = pd.DataFrame(np.random.normal(1.0, 0.03, (100, 10)))
prices = returns.cumprod()
prices.plot()
plt.title('Randomly-generated Prices')
plt.xlabel('Time')
plt.ylabel('Price')
plt.legend(loc=0);
```
So let's have a look at how we actually build up to this point!
## pandas Data Structures
### `Series`
A pandas `Series` is a 1-dimensional array with labels that can contain any data type. We primarily use them for handling time series data. Creating a `Series` is as easy as calling `pandas.Series()` on a Python list or NumPy array.
```
s = pd.Series([1, 2, np.nan, 4, 5])
print(s)
```
Every `Series` has a name. We can give the series a name as a parameter or we can define it afterwards by directly accessing the name attribute. In this case, we have given our time series no name so the attribute should be empty.
```
print(s.name)
```
This name can be directly modified with no repercussions.
```
s.name = "Toy Series"
print(s.name)
```
We call the collected axis labels of a `Series` its index. An index can either be passed to a `Series` as a parameter or added later, similarly to its name. In the absence of an index, a `Series` will simply contain an index composed of integers, starting at $0$, as in the case of our "Toy Series".
```
print(s.index)
```
pandas has a built-in function specifically for creating date indices, `date_range()`. We use the function here to create a new index for `s`.
```
new_index = pd.date_range("2016-01-01", periods=len(s), freq="D")
print(new_index)
```
An index must be exactly the same length as the `Series` itself. Each index must match one-to-one with each element of the `Series`. Once this is satisfied, we can directly modify the `Series` index, as with the name, to use our new and more informative index (relatively speaking).
```
s.index = new_index
print(s.index)
```
The index of the `Series` is crucial for handling time series, which we will get into a little later.
#### Accessing `Series` Elements
`Series` are typically accessed using the `iloc[]` and `loc[]` methods. We use `iloc[]` to access elements by integer index and we use `loc[]` to access the index of the Series.
```
print("First element of the series:", s.iloc[0])
print("Last element of the series:", s.iloc[len(s)-1])
```
We can slice a `Series` similarly to our favorite collections, Python lists and NumPy arrays. We use the colon operator to indicate the slice.
```
s.iloc[:2]
```
When creating a slice, we have the options of specifying a beginning, an end, and a step. The slice will begin at the start index, and take steps of size `step` until it passes the end index, not including the end.
```
start = 0
end = len(s) - 1
step = 1
s.iloc[start:end:step]
```
We can even reverse a `Series` by specifying a negative step size. Similarly, we can index the start and end with a negative integer value.
```
s.iloc[::-1]
```
This returns a slice of the series that starts from the second to last element and ends at the third to last element (because the fourth to last is not included, taking steps of size $1$).
```
s.iloc[-2:-4:-1]
```
We can also access a series by using the values of its index. Since we indexed `s` with a collection of dates (`Timestamp` objects) we can look at the value contained in `s` for a particular date.
```
s.loc['2016-01-01']
```
Or even for a range of dates!
```
s.loc['2016-01-02':'2016-01-04']
```
With `Series`, we *can* just use the brackets (`[]`) to access elements, but this is not best practice. The brackets are ambiguous because they can be used to access `Series` (and `DataFrames`) using both index and integer values and the results will change based on context (especially with `DataFrames`).
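To see why brackets are ambiguous, consider a `Series` whose index labels are themselves integers — positional and label-based access then disagree, and `iloc[]`/`loc[]` make the intent explicit:

```python
import pandas as pd

# A Series with a non-default integer index makes the ambiguity concrete
s2 = pd.Series([10, 20, 30], index=[2, 1, 0])
print(s2.iloc[0])  # 10 -- positional: the first element
print(s2.loc[0])   # 30 -- label-based: the element whose index label is 0
```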
#### Boolean Indexing
In addition to the above-mentioned access methods, you can filter `Series` using boolean arrays. `Series` are compatible with your standard comparators. Once compared with whatever condition you like, you get back yet another `Series`, this time filled with boolean values.
```
print(s < 3)
```
We can pass *this* `Series` back into the original `Series` to filter out only the elements for which our condition is `True`.
```
print(s.loc[s < 3])
```
If we so desire, we can group multiple conditions together using the logical operators `&`, `|`, and `~` (and, or, and not, respectively).
```
print(s.loc[(s < 3) & (s > 1)])
```
This is very convenient for getting only elements of a `Series` that fulfill specific criteria that we need. It gets even more convenient when we are handling `DataFrames`.
#### Indexing and Time Series
Since we use `Series` for handling time series, it's worth covering a little bit of how we handle the time component. For our purposes we use pandas `Timestamp` objects. Let's pull a full time series, complete with all the appropriate labels, by using our `get_prices()` function. All data pulled with `get_prices()` will be in `DataFrame` format. We can modify this index however we like.
```
from quantrocket.master import get_securities
securities = get_securities(symbols='XOM', fields=['Sid','Symbol','Exchange'], vendors='usstock')
securities
from quantrocket import get_prices
XOM = securities.index[0]
start = "2012-01-01"
end = "2016-01-01"
prices = get_prices("usstock-free-1min", data_frequency="daily", sids=XOM, start_date=start, end_date=end, fields="Close")
prices = prices.loc["Close"][XOM]
```
We can display the first few elements of our series by using the `head()` method and specifying the number of elements that we want. The analogous method for the last few elements is `tail()`.
```
print(type(prices))
prices.head(5)
```
As with our toy example, we can specify a name for our time series, if only to clarify the name that `get_prices()` provides us.
```
print('Old name:', prices.name)
prices.name = "XOM"
print('New name:', prices.name)
```
Let's take a closer look at the `DatetimeIndex` of our `prices` time series.
```
print(prices.index)
print("tz:", prices.index.tz)
```
Notice that this `DatetimeIndex` has a collection of associated information. In particular it has an associated frequency (`freq`) and an associated timezone (`tz`). The frequency indicates whether the data is daily vs monthly vs some other period while the timezone indicates what locale this index is relative to. We can modify all of this extra information!
If we resample our `Series`, we can adjust the frequency of our data. We currently have daily data (excluding weekends). Let's downsample from this daily data to monthly data using the `resample()` method.
```
monthly_prices = prices.resample('M').last()
monthly_prices.head(10)
```
In the above example we use the last value of the lower level data to create the higher level data. We can specify how else we might want the down-sampling to be calculated, for example using the median.
```
monthly_prices_med = prices.resample('M').median()
monthly_prices_med.head(10)
```
We can even specify how we want the calculation of the new period to be done. Here we create a `custom_resampler()` function that will return the first value of the period. In our specific case, this will return a `Series` where the monthly value is the first value of that month.
```
def custom_resampler(array_like):
""" Returns the first value of the period """
return array_like[0]
first_of_month_prices = prices.resample('M').apply(custom_resampler)
first_of_month_prices.head(10)
```
We can also adjust the timezone of a `Series` to match the local time of real-world data. In our case, our time series isn't localized to a timezone, but let's say that we want to localize the time to 'America/New_York'. In this case we use the `tz_localize()` method, since the time isn't already localized.
```
eastern_prices = prices.tz_localize('America/New_York')
eastern_prices.head(10)
```
In addition to the capacity for timezone and frequency management, each time series has a built-in `reindex()` method that we can use to realign the existing data according to a new set of index labels. If data does not exist for a particular label, the data will be filled with a placeholder value. This is typically `np.nan`, though we can provide a fill method.
The data that we get from `get_prices()` only includes market days. But what if we want prices for every single calendar day? This will include holidays and weekends, times when you normally cannot trade equities. First let's create a new `DatetimeIndex` that contains all that we want.
```
calendar_dates = pd.date_range(start=start, end=end, freq='D')
print(calendar_dates)
```
Now let's use this new set of dates to reindex our time series. We tell the function that the fill method that we want is `ffill`. This denotes "forward fill". Any `NaN` values will be filled by the *last value* listed. So the price on the weekend or on a holiday will be listed as the price on the last market day that we know about.
```
calendar_prices = prices.reindex(calendar_dates, method='ffill')
calendar_prices.head(15)
```
You'll notice that we still have a couple of `NaN` values right at the beginning of our time series. This is because the first of January in 2012 was a Sunday and the second was a market holiday! Because these are the earliest data points and we don't have any information from before them, they cannot be forward-filled. We will take care of these `NaN` values in the next section, when we deal with missing data.
#### Missing Data
Whenever we deal with real data, there is a very real possibility of encountering missing values. Real data is riddled with holes, and resampling or reindexing can create additional `NaN` values. Fortunately, pandas provides us with ways to handle them. We have two primary means of coping with missing data. The first of these is filling in the missing data with `fillna()`. For example, say that we want to fill in the missing days with the mean price of all days.
```
meanfilled_prices = calendar_prices.fillna(calendar_prices.mean())
meanfilled_prices.head(10)
```
Using `fillna()` is fairly easy. It is just a matter of indicating the value that you want to fill the spaces with. Unfortunately, this particular case doesn't make a whole lot of sense, for reasons discussed in the lecture on stationarity in the Lecture series. We could simply fill them with $0$, but that's similarly uninformative.
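For completeness, here is a quick sketch of that scalar fill on a toy series (hypothetical values, standing in for `calendar_prices`):

```python
import numpy as np
import pandas as pd

# Toy stand-in for calendar_prices (hypothetical values)
s = pd.Series([np.nan, 10.0, np.nan, 12.0],
              index=pd.date_range("2012-01-01", periods=4))

# fillna with a scalar: every NaN becomes that value
zero_filled = s.fillna(0)
print(zero_filled.tolist())  # [0.0, 10.0, 0.0, 12.0]
```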
Rather than filling in specific values, we can use the `method` parameter. We could use "backward fill", where `NaN`s are filled with the *next* filled value (instead of forward fill's *last* filled value) like so:
```
bfilled_prices = calendar_prices.fillna(method='bfill')
bfilled_prices.head(10)
```
But again, this is a bad idea for the same reasons as the previous option. Both of these so-called solutions take into account *future data* that was not available at the time of the data points that we are trying to fill. In the case of using the mean or the median, these summary statistics are calculated by taking into account the entire time series. Backward filling is equivalent to saying that the price of a particular security today, right now, is tomorrow's price. This also makes no sense. These two options are both examples of look-ahead bias, using data that would be unknown or unavailable at the desired time, and should be avoided.
Our next option is significantly more appealing. We could simply drop the missing data using the `dropna()` method. This is a much better alternative than filling `NaN` values in with arbitrary numbers.
```
dropped_prices = calendar_prices.dropna()
dropped_prices.head(10)
```
Now our time series is cleaned for the calendar year, with all of our `NaN` values properly handled. It is time to talk about how to actually do time series analysis with pandas data structures.
#### Time Series Analysis with pandas
Let's do some basic time series analysis on our original prices. Each pandas `Series` has a built-in plotting method.
```
prices.plot();
# We still need to add the axis labels and title ourselves
plt.title("XOM Prices")
plt.ylabel("Price")
plt.xlabel("Date");
```
As well as some built-in descriptive statistics. We can either calculate these individually or using the `describe()` method.
```
print("Mean:", prices.mean())
print("Standard deviation:", prices.std())
print("Summary Statistics")
print(prices.describe())
```
We can easily modify `Series` with scalars using our basic mathematical operators.
```
modified_prices = prices * 2 - 10
modified_prices.head(5)
```
And we can create linear combinations of `Series` themselves using the basic mathematical operators. pandas will group up matching indices and perform the calculations elementwise to produce a new `Series`.
```
noisy_prices = prices + 5 * pd.Series(np.random.normal(0, 5, len(prices)), index=prices.index) + 20
noisy_prices.head(5)
```
If there are no matching indices, however, the result will be a `Series` filled entirely with `NaN` values.
```
empty_series = prices + pd.Series(np.random.normal(0, 1, len(prices)))
empty_series.head(5)
```
Rather than looking at a time series itself, we may want to look at its first-order differences or percent change (in order to get additive or multiplicative returns, in our particular case). Both of these are built-in methods.
```
add_returns = prices.diff()[1:]
mult_returns = prices.pct_change()[1:]
plt.title("Multiplicative returns of XOM")
plt.xlabel("Date")
plt.ylabel("Percent Returns")
mult_returns.plot();
```
pandas has convenient functions for calculating rolling means and standard deviations, as well!
```
rolling_mean = prices.rolling(30).mean()
rolling_mean.name = "30-day rolling mean"
prices.plot()
rolling_mean.plot()
plt.title("XOM Price")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend();
rolling_std = prices.rolling(30).std()
rolling_std.name = "30-day rolling volatility"
rolling_std.plot()
plt.title(rolling_std.name);
plt.xlabel("Date")
plt.ylabel("Standard Deviation");
```
Many NumPy functions will work on `Series` the same way that they work on 1-dimensional NumPy arrays.
```
print(np.median(mult_returns))
```
The majority of these functions, however, are already implemented directly as `Series` and `DataFrame` methods.
```
print(mult_returns.median())
```
In every case, using the built-in pandas method will be better than using the NumPy function on a pandas data structure due to improvements in performance. Make sure to check out the `Series` [documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html) before resorting to other calculations of common functions.
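As a small sanity check (a sketch on synthetic data, not a benchmark), the two approaches agree, and the `Series` method additionally skips `NaN` values by default:

```python
import numpy as np
import pandas as pd

np.random.seed(0)
s = pd.Series(np.random.normal(0, 1, 10000))

# Same answer either way on clean data
assert np.isclose(np.median(s), s.median())

# With a NaN present, the pandas method skips it while np.median returns nan
s_with_nan = pd.Series([1.0, np.nan, 3.0])
print(s_with_nan.median())    # 2.0
print(np.median(s_with_nan))  # nan
```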
### `DataFrames`
Many of the aspects of working with `Series` carry over into `DataFrames`. pandas `DataFrames` allow us to easily manage our data with their intuitive structure.
Like `Series`, `DataFrames` can hold multiple types of data, but `DataFrames` are 2-dimensional objects, unlike `Series`. Each `DataFrame` has an index and a columns attribute, which we will cover more in-depth when we start actually playing with an object. The index attribute is like the index of a `Series`, though indices in pandas have some extra features that we will unfortunately not be able to cover here. If you are interested in this, check out the [pandas documentation](https://pandas.pydata.org/docs/user_guide/advanced.html) on advanced indexing. The columns attribute is what provides the second dimension of our `DataFrames`, allowing us to combine named columns (all `Series`), into a cohesive object with the index lined-up.
We can create a `DataFrame` by calling `pandas.DataFrame()` on a dictionary or NumPy `ndarray`. We can also concatenate a group of pandas `Series` into a `DataFrame` using `pandas.concat()`.
```
dict_data = {
'a' : [1, 2, 3, 4, 5],
'b' : ['L', 'K', 'J', 'M', 'Z'],
'c' : np.random.normal(0, 1, 5)
}
print(dict_data)
```
Each `DataFrame` has a few key attributes that we need to keep in mind. The first of these is the index attribute. We can easily include an index of `Timestamp` objects like we did with `Series`.
```
frame_data = pd.DataFrame(dict_data, index=pd.date_range('2016-01-01', periods=5))
print(frame_data)
```
As mentioned above, we can combine `Series` into `DataFrames`. Concatenating `Series` like this will match elements up based on their corresponding index. As the following `Series` do not have an index assigned, they each default to an integer index.
```
s_1 = pd.Series([2, 4, 6, 8, 10], name='Evens')
s_2 = pd.Series([1, 3, 5, 7, 9], name="Odds")
numbers = pd.concat([s_1, s_2], axis=1)
print(numbers)
```
We will use `pandas.concat()` again later to combine multiple `DataFrame`s into one.
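As a preview of that later use, stacking two `DataFrame`s with `pandas.concat()` might look like this (toy frames, not the price data):

```python
import pandas as pd

df_a = pd.DataFrame({'x': [1, 2]}, index=['r1', 'r2'])
df_b = pd.DataFrame({'x': [3, 4]}, index=['r3', 'r4'])

# axis=0 stacks rows; matching column names are lined up
stacked = pd.concat([df_a, df_b], axis=0)
print(stacked['x'].tolist())  # [1, 2, 3, 4]
```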
Each `DataFrame` also has a `columns` attribute. These can either be assigned when we call `pandas.DataFrame` or they can be modified directly like the index. Note that when we concatenated the two `Series` above, the column names were the names of those `Series`.
```
print(numbers.columns)
```
To modify the columns after object creation, we need only do the following:
```
numbers.columns = ['Shmevens', 'Shmodds']
print(numbers)
```
In the same vein, the index of a `DataFrame` can be changed after the fact.
```
print(numbers.index)
numbers.index = pd.date_range("2016-01-01", periods=len(numbers))
print(numbers)
```
Separate from the columns and index of a `DataFrame`, we can also directly access the values they contain by looking at the values attribute.
```
numbers.values
```
This returns a NumPy array.
```
type(numbers.values)
```
#### Accessing `DataFrame` elements
Again we see a lot of carryover from `Series` in how we access the elements of `DataFrames`. The key sticking point here is that everything has to take into account multiple dimensions now. The main way that this happens is through the access of the columns of a `DataFrame`, either individually or in groups. We can do this either by directly accessing the attributes or by using the methods we already are familiar with.
Let's start by loading price data for several securities:
```
securities = get_securities(symbols=['XOM', 'JNJ', 'MON', 'KKD'], vendors='usstock')
securities
```
Since `get_securities` returns sids in the index, we can call the index's `tolist()` method to pass a list of sids to `get_prices`:
```
start = "2012-01-01"
end = "2017-01-01"
prices = get_prices("usstock-free-1min", data_frequency="daily", sids=securities.index.tolist(), start_date=start, end_date=end, fields="Close")
prices = prices.loc["Close"]
prices.head()
```
For the purpose of this tutorial, it will be more convenient to reference the data by symbol instead of sid. To do this, we can create a Python dictionary mapping sid to symbol, and use the dictionary to rename the columns, using the DataFrame's `rename` method:
```
sids_to_symbols = securities.Symbol.to_dict()
prices = prices.rename(columns=sids_to_symbols)
prices.head()
```
Here we directly access the `XOM` column. Note that this style of access will only work if your column name has no spaces or unfriendly characters in it.
```
prices.XOM.head()
```
We can also access the column using the column name in brackets:
```
prices["XOM"].head()
```
We can also use `loc[]` to access an individual column like so.
```
prices.loc[:, 'XOM'].head()
```
Accessing an individual column will return a `Series`, regardless of how we get it.
```
print(type(prices.XOM))
print(type(prices.loc[:, 'XOM']))
```
Notice how we pass a tuple into the `loc[]` method? This is a key difference between accessing a `Series` and accessing a `DataFrame`, grounded in the fact that a `DataFrame` has multiple dimensions. When you pass a two-element tuple into a `DataFrame`'s `loc[]`, the first element of the tuple is applied to the rows and the second is applied to the columns. So, to break it down, the above line of code tells the `DataFrame` to return every single row of the column with label `'XOM'`. Lists of columns are also supported.
```
prices.loc[:, ['XOM', 'JNJ']].head()
```
We can also simply access the `DataFrame` by index value using `loc[]`, as with `Series`.
```
prices.loc['2015-12-15':'2015-12-22']
```
This plays nicely with lists of columns, too.
```
prices.loc['2015-12-15':'2015-12-22', ['XOM', 'JNJ']]
```
Using `iloc[]` also works similarly, allowing you to access parts of the `DataFrame` by integer index.
```
prices.iloc[0:2, 1]
# Access prices with integer index in
# [1, 3, 5, 7, 9, 11, 13, ..., 99]
# and in column 0 or 2
prices.iloc[[1, 3, 5] + list(range(7, 100, 2)), [0, 2]].head(20)
```
#### Boolean indexing
As with `Series`, sometimes we want to filter a `DataFrame` according to a set of criteria. We do this by indexing our `DataFrame` with boolean values.
```
prices.loc[prices.MON > prices.JNJ].head()
```
We can add multiple boolean conditions by using the logical operators `&`, `|`, and `~` (and, or, and not, respectively) again!
```
prices.loc[(prices.MON > prices.JNJ) & ~(prices.XOM > 66)].head()
```
#### Adding, Removing Columns, Combining `DataFrames`/`Series`
It is all well and good when you already have a `DataFrame` filled with data, but it is also important to be able to add to the data that you have.
We add a new column simply by assigning data to a column that does not already exist. Here we use the `.loc[:, 'COL_NAME']` notation and store the output of `get_prices()` (sliced down to a pandas `Series` for a single security) there. This is the method that we would use to add a `Series` to an existing `DataFrame`.
```
securities = get_securities(symbols="AAPL", vendors='usstock')
securities
AAPL = securities.index[0]
s_1 = get_prices("usstock-free-1min", data_frequency="daily", sids=AAPL, start_date=start, end_date=end, fields='Close').loc["Close"][AAPL]
prices.loc[:, AAPL] = s_1
prices.head(5)
```
It is also just as easy to remove a column.
```
prices = prices.drop(AAPL, axis=1)
prices.head(5)
```
#### Time Series Analysis with pandas
Using the built-in statistics methods for `DataFrames`, we can perform calculations on multiple time series at once! The code to perform calculations on `DataFrames` here is almost exactly the same as the methods used for `Series` above, so don't worry about re-learning everything.
The `plot()` method makes another appearance here, this time with a built-in legend that corresponds to the names of the columns that you are plotting.
```
prices.plot()
plt.title("Collected Stock Prices")
plt.ylabel("Price")
plt.xlabel("Date");
```
The same statistical functions from our interactions with `Series` resurface here with the addition of the `axis` parameter. By specifying the `axis`, we tell pandas to calculate the desired function along either the rows (`axis=0`) or the columns (`axis=1`). We can easily calculate the mean of each column like so:
```
prices.mean(axis=0)
```
As well as the standard deviation:
```
prices.std(axis=0)
```
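To make the `axis` distinction concrete, here is a toy sketch (the real `prices` frame behaves the same way):

```python
import pandas as pd

toy = pd.DataFrame({'A': [1.0, 3.0], 'B': [2.0, 4.0]})

col_means = toy.mean(axis=0)  # one value per column (down the rows)
row_means = toy.mean(axis=1)  # one value per row (across the columns)
print(col_means.tolist())  # [2.0, 3.0]
print(row_means.tolist())  # [1.5, 3.5]
```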
Again, the `describe()` function will provide us with summary statistics of our data if we would rather have all of our typical statistics in a convenient visual instead of calculating them individually.
```
prices.describe()
```
We can scale and add scalars to our `DataFrame`, as you might suspect after dealing with `Series`. This again works element-wise.
```
(2 * prices - 50).head(5)
```
Here we use the `pct_change()` method to get a `DataFrame` of the multiplicative returns of the securities that we are looking at.
```
mult_returns = prices.pct_change()[1:]
mult_returns.head()
```
If we use our statistics methods to standardize the returns, a common procedure when examining data, then we can get a better idea of how they all move relative to each other on the same scale.
```
norm_returns = (mult_returns - mult_returns.mean(axis=0))/mult_returns.std(axis=0)
norm_returns.loc['2014-01-01':'2015-01-01'].plot();
```
This makes it easier to compare the motion of the different time series contained in our example.
Rolling means and standard deviations also work with `DataFrames`.
```
rolling_mean = prices.rolling(30).mean()
rolling_mean.columns = prices.columns
rolling_mean.plot()
plt.title("Rolling Mean of Prices")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend();
```
For a complete list of all the methods that are built into `DataFrame`s, check out the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).
# Next Steps
Managing data gets a lot easier when you deal with pandas, though this has been a very general introduction. There are many more tools within the package which you may discover while trying to get your data to do precisely what you want. If you would rather read more on the additional capabilities of pandas, check out the [documentation](http://pandas.pydata.org/pandas-docs/stable/).
---
**Next Lecture:** [Plotting Data](Lecture05-Plotting-Data.ipynb)
[Back to Introduction](Introduction.ipynb)
---
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
# Trying out features
**Learning Objectives:**
* Improve the accuracy of a model by adding new features with the appropriate representation
The data is based on 1990 census data from California. This data is at the city-block level, so features reflect, for example, the total number of rooms in a block or the total number of people who live in a block.
## Set Up
In this first cell, we'll load the necessary libraries.
```
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
print(tf.__version__)
tf.logging.set_verbosity(tf.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
```
Next, we'll load our data set.
```
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
```
## Examine and split the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
```
df.head()
df.describe()
```
Now, split the data into two parts -- training and evaluation.
```
np.random.seed(seed=1) #makes result reproducible
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
```
## Training and Evaluation
In this exercise, we'll be trying to predict **median_house_value**. It will be our label (sometimes also called a target).
We'll modify the feature columns and input function to represent the features we want to use.
We divide **total_rooms** by **households** to get **avg_rooms_per_house** which we expect to positively correlate with **median_house_value**.
We also divide **population** by **total_rooms** to get **avg_persons_per_room** which we expect to negatively correlate with **median_house_value**.
```
def add_more_features(df):
df['avg_rooms_per_house'] = df['total_rooms'] / df['households'] #expect positive correlation
df['avg_persons_per_room'] = df['population'] / df['total_rooms'] #expect negative correlation
return df
# Create pandas input function
def make_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = add_more_features(df),
y = df['median_house_value'] / 100000, # will talk about why later in the course
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
# Define your feature columns
def create_feature_cols():
return [
tf.feature_column.numeric_column('housing_median_age'),
tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'), boundaries = np.arange(32.0, 42, 1).tolist()),
tf.feature_column.numeric_column('avg_rooms_per_house'),
tf.feature_column.numeric_column('avg_persons_per_room'),
tf.feature_column.numeric_column('median_income')
]
# Create estimator train and evaluate function
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(model_dir = output_dir, feature_columns = create_feature_cols())
train_spec = tf.estimator.TrainSpec(input_fn = make_input_fn(traindf, None),
max_steps = num_train_steps)
eval_spec = tf.estimator.EvalSpec(input_fn = make_input_fn(evaldf, 1),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds,
throttle_secs = 5) # evaluate every N seconds
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Output directory for the trained model
OUTDIR = './trained_model'
# Run the model
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR, 2000)
```
# LinearSVC with RobustScaler & Power Transformer
This Code template is for the Classification task using LinearSVC using RobustScaler with pipeline and Power Transformer Feature Transformation.
## Required Packages
```
import numpy as np
import pandas as pd
import seaborn as se
import warnings
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from imblearn.over_sampling import RandomOverSampler
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder, PowerTransformer, RobustScaler
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
## Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training
```
#x_values
features=[]
```
Target feature for prediction
```
#y_value
target=''
```
## Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file via its storage path, and we use the `head` function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
## Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
## Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below has functions that remove null values, if any exist, and convert string class data in the dataset by encoding it to integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
## Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
## Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
## Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
## Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
## Data Rescaling
It scales features using statistics that are robust to outliers. This method removes the median and scales the data according to the interquartile range, i.e. the range between the 1st quartile (25th percentile) and the 3rd quartile (75th percentile).
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html">More about Robust Scaler</a>
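A small standalone sketch of the scaler on toy data with one outlier (in this template the scaler is applied inside the pipeline further down):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

# One feature; median = 3, IQR = Q3 - Q1 = 4 - 2 = 2
x = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])

# Each value becomes (value - median) / IQR, so the outlier
# stretches the scale far less than with standardization
scaled = RobustScaler().fit_transform(x)
print(scaled.ravel().tolist())  # [-1.0, -0.5, 0.0, 0.5, 48.5]
```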
## Feature Transformation
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html">More about Power Transformer module</a>
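And a quick sketch of the transformer on right-skewed toy data (again, the template applies it inside the pipeline):

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.RandomState(0)
skewed = rng.lognormal(size=(1000, 1))  # heavily right-skewed feature

# 'yeo-johnson' is the default method; standardize=True (also the
# default) gives the output zero mean and unit variance
pt = PowerTransformer()
transformed = pt.fit_transform(skewed)

print(abs(float(transformed.mean())) < 1e-6)     # True: mean ~ 0
print(abs(float(transformed.std()) - 1) < 1e-6)  # True: std ~ 1
```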
## Model
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other terms, for given known/labelled data points, the SVM outputs an appropriate hyperplane that classifies new cases based on that hyperplane. In 2-dimensional space, this hyperplane is a line separating the plane into two segments, with each class or group on either side.
LinearSVC is similar to SVC with kernel=’linear’. It has more flexibility in the choice of tuning parameters and is suited for large samples.
### Tuning parameters
**penalty:** Specifies the norm used in the penalization. The ‘l2’ penalty is the standard used in SVC. The ‘l1’ leads to coef_ vectors that are sparse.
**Loss:** Specifies the loss function. ‘hinge’ is the standard SVM loss (used e.g. by the SVC class) while ‘squared_hinge’ is the square of the hinge loss. The combination of penalty='l1' and loss='hinge' is not supported.
**C:** Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
**tolerance:** Tolerance for stopping criteria.
**dual:** Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.
For More Info: [API](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html)
```
model = make_pipeline(RobustScaler(),PowerTransformer(),LinearSVC())
model.fit(x_train, y_train)
```
## Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
## Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
## Classification Report
A Classification report is used to measure the quality of predictions from a classification algorithm. How many predictions are True, how many are False.
where:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
- f1-score:- Harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
## Creator: Abhishek Garg, Github: <a href="https://github.com/abhishek-252">Profile</a>
```
# Make the code below compatible with both Python 2 and Python 3
from __future__ import division, print_function, unicode_literals
# Check that the Python version is 3.5 or above
import sys
assert sys.version_info >= (3, 5)
# Check that the scikit-learn version is 0.20 or above
import sklearn
assert sklearn.__version__ >= "0.20"
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os
# Make each run produce the same results as this notebook
np.random.seed(42)
# Make matplotlib figures look nicer
%matplotlib inline
import matplotlib as mpl
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Set the path for saving figures
PROJECT_ROOT_DIR = "."
IMAGE_PATH = os.path.join(PROJECT_ROOT_DIR, "images")
os.makedirs(IMAGE_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True):
'''
Run this to save the figure automatically
:param fig_id: the figure's file name
'''
path = os.path.join(PROJECT_ROOT_DIR, "images", fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
# Ignore useless warnings (Scipy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", category=FutureWarning, module='sklearn', lineno=196)
# Read the dataset
df = pd.read_excel('data.xlsx', engine="openpyxl")
df.head()
# Check whether the dataset has null values, to see if imputation is needed
df.info()
'''
# Imputation
df.fillna(0, inplace=True)
# Or refer to the imputation approach used earlier in polynomial regression
'''
# Separate the true class labels from the features
data = df.drop('TRUE VALUE', axis=1)
labels = df['TRUE VALUE'].copy()
np.unique(labels)
labels
# Get the number of samples and the number of features
n_samples, n_features = data.shape
# Get the number of class labels
n_labels = len(np.unique(labels))
np.unique(labels)
labels.value_counts()
```
# KMeans Clustering
```
from sklearn import metrics
def get_marks(estimator, data, name=None, kmeans=None, af=None):
    """
    Print evaluation scores: five of the metrics require the true labels of the dataset and three do not; see readme.txt.
    For KMeans, the silhouette score and the inertia are usually enough.
    :param estimator: the model
    :param name: the initialization method
    :param data: the feature dataset
    """
estimator.fit(data)
print(20 * '*', name, 20 * '*')
if kmeans:
print("Mean Inertia Score: ", estimator.inertia_)
elif af:
cluster_centers_indices = estimator.cluster_centers_indices_
print("The estimated number of clusters: ", len(cluster_centers_indices))
print("Homogeneity Score: ", metrics.homogeneity_score(labels, estimator.labels_))
print("Completeness Score: ", metrics.completeness_score(labels, estimator.labels_))
print("V Measure Score: ", metrics.v_measure_score(labels, estimator.labels_))
print("Adjusted Rand Score: ", metrics.adjusted_rand_score(labels, estimator.labels_))
print("Adjusted Mutual Info Score: ", metrics.adjusted_mutual_info_score(labels, estimator.labels_))
print("Calinski Harabasz Score: ", metrics.calinski_harabasz_score(data, estimator.labels_))
print("Silhouette Score: ", metrics.silhouette_score(data, estimator.labels_))
from sklearn.cluster import KMeans
# Cluster with k-means, n_clusters=2, with two different initialization methods ('k-means++' and 'random')
km1 = KMeans(init='k-means++', n_clusters=2, n_init=10, random_state=42)
km2 = KMeans(init='random', n_clusters=2, n_init=10, random_state=42)
print("n_labels: %d \t n_samples: %d \t n_features: %d" % (n_labels, n_samples, n_features))
get_marks(km1, data, name="k-means++", kmeans=True)
get_marks(km2, data, name="random", kmeans=True)
# Cluster assignment of each sample after clustering
km1.labels_
# Distinct cluster labels
np.unique(km1.labels_)
# Write the clustering result back into the original table
df['km_clustering_label'] = km1.labels_
# Export the original table as CSV
#df.to_csv('result.csv')
# Unlike data, df is the original dataset
df.head()
from sklearn.model_selection import GridSearchCV
# Use GridSearchCV to search for the best parameters automatically; here KMeans is used as a classification model rather than for clustering
params = {'init':('k-means++', 'random'), 'n_clusters':[2, 3, 4, 5, 6], 'n_init':[5, 10, 15]}
cluster = KMeans(random_state=42)
# Use the adjusted Rand index (adjusted_rand_score) for scoring; see readme.txt for details
km_best_model = GridSearchCV(cluster, params, cv=3, scoring='adjusted_rand_score',
                             verbose=1, n_jobs=-1)
# Since an external evaluation metric is used, the true labels of the original dataset are required
km_best_model.fit(data, labels)
# Parameters of the best model
km_best_model.best_params_
# Score of the best model
km_best_model.best_score_
# The best model found
km3 = km_best_model.best_estimator_
km3
# Get the 8 evaluation scores of the best model; see readme.txt for their meaning
get_marks(km3, data, name="k-means++", kmeans=True)
from sklearn.metrics import silhouette_score
from sklearn.metrics import calinski_harabasz_score
from matplotlib import pyplot as plt
def plot_scores(init, max_k, data, labels):
    '''Plot three evaluation scores for different KMeans initialization methods.
    :param init: initialization method, 'k-means++' or 'random'
    :param max_k: maximum number of cluster centers
    :param data: the feature dataset
    :param labels: the true-label dataset
    '''
i = []
inertia_scores = []
y_silhouette_scores = []
y_calinski_harabaz_scores = []
for k in range(2, max_k):
kmeans_model = KMeans(n_clusters=k, random_state=1, init=init, n_init=10)
pred = kmeans_model.fit_predict(data)
i.append(k)
inertia_scores.append(kmeans_model.inertia_)
y_silhouette_scores.append(silhouette_score(data, pred))
y_calinski_harabaz_scores.append(calinski_harabasz_score(data, pred))
new = [inertia_scores, y_silhouette_scores, y_calinski_harabaz_scores]
for j in range(len(new)):
plt.figure(j+1)
plt.plot(i, new[j], 'bo-')
plt.xlabel('n_clusters')
if j == 0:
name = 'inertia'
elif j == 1:
name = 'silhouette'
else:
name = 'calinski_harabasz'
plt.ylabel('{}_scores'.format(name))
plt.title('{}_scores with {} init'.format(name, init))
save_fig('{} with {}'.format(name, init))
plot_scores('k-means++', 18, data, labels)
plot_scores('random', 10, data, labels)
from sklearn.metrics import silhouette_samples, silhouette_score
from matplotlib.ticker import FixedLocator, FixedFormatter
def plot_silhouette_diagram(clusterer, X, show_xlabels=True,
show_ylabels=True, show_title=True):
    """
    Plot a silhouette diagram.
    :param clusterer: a fitted clustering model (here one whose number of clusters is set in advance; the code can be adapted slightly for models where it is not)
    :param X: feature-only dataset
    :param show_xlabels: if true, label the x axis
    :param show_ylabels: if true, label the y axis
    :param show_title: if true, add a title
    """
y_pred = clusterer.labels_
silhouette_coefficients = silhouette_samples(X, y_pred)
silhouette_average = silhouette_score(X, y_pred)
padding = len(X) // 30
pos = padding
ticks = []
for i in range(clusterer.n_clusters):
coeffs = silhouette_coefficients[y_pred == i]
coeffs.sort()
color = mpl.cm.Spectral(i / clusterer.n_clusters)
plt.fill_betweenx(np.arange(pos, pos + len(coeffs)), 0, coeffs,
facecolor=color, edgecolor=color, alpha=0.7)
ticks.append(pos + len(coeffs) // 2)
pos += len(coeffs) + padding
plt.axvline(x=silhouette_average, color="red", linestyle="--")
plt.gca().yaxis.set_major_locator(FixedLocator(ticks))
plt.gca().yaxis.set_major_formatter(FixedFormatter(range(clusterer.n_clusters)))
if show_xlabels:
plt.gca().set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
plt.xlabel("Silhouette Coefficient")
else:
plt.tick_params(labelbottom=False)
if show_ylabels:
plt.ylabel("Cluster")
if show_title:
plt.title("init:{} n_cluster:{}".format(clusterer.init, clusterer.n_clusters))
plt.figure(figsize=(15, 4))
plt.subplot(121)
plot_silhouette_diagram(km1, data)
plt.subplot(122)
plot_silhouette_diagram(km3, data, show_ylabels=False)
save_fig("silhouette_diagram")
```
# MiniBatch KMeans
```
from sklearn.cluster import MiniBatchKMeans
# Time the KMeans algorithm
%timeit KMeans(n_clusters=3).fit(data)
# Time the MiniBatchKMeans algorithm
%timeit MiniBatchKMeans(n_clusters=5).fit(data)
from timeit import timeit
times = np.empty((100, 2))
inertias = np.empty((100, 2))
for k in range(1, 101):
kmeans = KMeans(n_clusters=k, random_state=42)
minibatch_kmeans = MiniBatchKMeans(n_clusters=k, random_state=42)
print("\r Training: {}/{}".format(k, 100), end="")
times[k-1, 0] = timeit("kmeans.fit(data)", number=10, globals=globals())
times[k-1, 1] = timeit("minibatch_kmeans.fit(data)", number=10, globals=globals())
inertias[k-1, 0] = kmeans.inertia_
inertias[k-1, 1] = minibatch_kmeans.inertia_
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.plot(range(1, 101), inertias[:, 0], "r--", label="K-Means")
plt.plot(range(1, 101), inertias[:, 1], "b.-", label="Mini-batch K-Means")
plt.xlabel("$k$", fontsize=16)
plt.ylabel("Inertia", fontsize=14)
plt.legend(fontsize=14)
plt.subplot(122)
plt.plot(range(1, 101), times[:, 0], "r--", label="K-Means")
plt.plot(range(1, 101), times[:, 1], "b.-", label="Mini-batch K-Means")
plt.xlabel("$k$", fontsize=16)
plt.ylabel("Training time (seconds)", fontsize=14)
plt.axis([1, 100, 0, 6])
plt.legend(fontsize=14)
save_fig("minibatch_kmeans_vs_kmeans")
plt.show()
```
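MiniBatchKMeans gains its speed by updating the centroids from small random batches rather than from the full dataset at every step. When the data does not fit in memory at once, the same idea can also be applied explicitly with `partial_fit`, feeding the model one batch at a time. A minimal sketch on synthetic data (not part of the original analysis):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.RandomState(42)
# Three well-separated "true" centers for the synthetic stream
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])

mbk = MiniBatchKMeans(n_clusters=3, random_state=42, n_init=3)
for _ in range(10):
    # 100 points drawn around randomly chosen true centers
    batch = centers[rng.randint(0, 3, 100)] + rng.randn(100, 2) * 0.5
    mbk.partial_fit(batch)  # centroid update from this batch only

print(mbk.cluster_centers_)  # should lie close to the three true centers
```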
# Clustering after Dimensionality Reduction
```
from sklearn.decomposition import PCA
# Use plain PCA for dimensionality reduction, from 11 features down to n_labels dimensions
pca1 = PCA(n_components=n_labels)
reduced_data1 = pca1.fit_transform(data)
km4 = KMeans(init='k-means++', n_clusters=n_labels, n_init=10)
get_marks(km4, reduced_data1, name="PCA-based KMeans", kmeans=True)
# Check the dimensionality of the training set after reduction
len(pca1.components_)
# Use plain PCA to reduce the features to 2 dimensions for visualization in the plane
pca2 = PCA(n_components=2)
reduced_data2 = pca2.fit_transform(data)
# Cluster with k-means, n_clusters=3, once with 'k-means++' and once with 'random' initialization
kmeans1 = KMeans(init="k-means++", n_clusters=3, n_init=3)
kmeans2 = KMeans(init="random", n_clusters=3, n_init=3)
kmeans1.fit(reduced_data2)
kmeans2.fit(reduced_data2)
# The feature dimensionality of the training set is reduced to 2
len(pca2.components_)
# The 2D feature values (after reduction)
reduced_data2
# Coordinates of the 3 cluster centers
kmeans1.cluster_centers_
from matplotlib.colors import ListedColormap
def plot_data(X, real_tag=None):
    """
    Scatter plot.
    :param X: feature-only dataset
    :param real_tag: if given, color the points of the different classes
    """
try:
if not real_tag:
plt.plot(X[:, 0], X[:, 1], 'k.', markersize=2)
except ValueError:
types = list(np.unique(real_tag))
for i in range(len(types)):
plt.plot(X[:, 0][real_tag==types[i]], X[:, 1][real_tag==types[i]],
'.', label="{}".format(types[i]), markersize=3)
plt.legend()
def plot_centroids(centroids, circle_color='w', cross_color='k'):
    """
    Plot the cluster centers.
    :param centroids: cluster-center coordinates
    :param circle_color: color of the circles
    :param cross_color: color of the crosses
    """
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='o', s=30, zorder=10, linewidths=8,
color=circle_color, alpha=0.9)
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=50, zorder=11, linewidths=3,
color=cross_color, alpha=1)
def plot_centroids_labels(clusterer):
labels = np.unique(clusterer.labels_)
centroids = clusterer.cluster_centers_
for i in range(centroids.shape[0]):
t = str(labels[i])
plt.text(centroids[i, 0]-1, centroids[i, 1]-1, t, fontsize=25,
zorder=10, bbox=dict(boxstyle='round', fc='yellow', alpha=0.5))
def plot_decision_boundaried(clusterer, X, tag=None, resolution=1000,
show_centroids=True, show_xlabels=True,
show_ylabels=True, show_title=True,
show_centroids_labels=False):
    """
    Plot the decision boundaries and fill the regions with color.
    :param clusterer: a fitted clustering model (with or without a preset number of cluster centers)
    :param X: feature-only dataset
    :param tag: dataset with the true class labels; if given, color the points
    :param resolution: similar to an image resolution; the smallest unit to color
    :param show_centroids: if true, plot the cluster centers
    :param show_centroids_labels: if true, annotate each cluster center with its label
    """
mins = X.min(axis=0) - 0.1
maxs = X.max(axis=0) + 0.1
xx, yy = np.meshgrid(np.linspace(mins[0], maxs[0], resolution),
np.linspace(mins[1], maxs[1], resolution))
Z = clusterer.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
    # Fill colors can also be customized via a color code or a custom colormap
# custom_cmap = ListedColormap(["#fafab0", "#9898ff", "#a0faa0"])
plt.contourf(xx, yy, Z, extent=(mins[0], maxs[0], mins[1], maxs[1]),
cmap="Pastel2")
plt.contour(xx, yy, Z, extent=(mins[0], maxs[0], mins[1], maxs[1]),
colors='k')
try:
if not tag:
plot_data(X)
except ValueError:
plot_data(X, real_tag=tag)
if show_centroids:
plot_centroids(clusterer.cluster_centers_)
if show_centroids_labels:
plot_centroids_labels(clusterer)
if show_xlabels:
plt.xlabel(r"$x_1$", fontsize=14)
else:
plt.tick_params(labelbottom=False)
if show_ylabels:
plt.ylabel(r"$x_2$", fontsize=14, rotation=0)
else:
plt.tick_params(labelleft=False)
if show_title:
plt.title("init:{} n_cluster:{}".format(clusterer.init, clusterer.n_clusters))
plt.figure(figsize=(15, 4))
plt.subplot(121)
plot_decision_boundaried(kmeans1, reduced_data2, tag=labels)
plt.subplot(122)
plot_decision_boundaried(kmeans2, reduced_data2, show_centroids_labels=True)
save_fig("real_tag_vs_non")
plt.show()
kmeans3 = KMeans(init="k-means++", n_clusters=3, n_init=3)
kmeans4 = KMeans(init="k-means++", n_clusters=4, n_init=3)
kmeans5 = KMeans(init="k-means++", n_clusters=5, n_init=3)
kmeans6 = KMeans(init="k-means++", n_clusters=6, n_init=3)
kmeans3.fit(reduced_data2)
kmeans4.fit(reduced_data2)
kmeans5.fit(reduced_data2)
kmeans6.fit(reduced_data2)
plt.figure(figsize=(15, 8))
plt.subplot(221)
plot_decision_boundaried(kmeans3, reduced_data2, show_xlabels=False, show_centroids_labels=True)
plt.subplot(222)
plot_decision_boundaried(kmeans4, reduced_data2, show_ylabels=False, show_xlabels=False)
plt.subplot(223)
plot_decision_boundaried(kmeans5, reduced_data2, show_centroids_labels=True)
plt.subplot(224)
plot_decision_boundaried(kmeans6, reduced_data2, show_ylabels=False)
save_fig("reduced_and_cluster")
plt.show()
```
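The number of components used above (2 or 3) is chosen by hand; `explained_variance_ratio_` offers a more principled way to pick `n_components`. A sketch on random stand-in data, since the original `data.xlsx` is not available here:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(42)
X = rng.randn(200, 11)  # stand-in for the 11-feature dataset above

pca = PCA().fit(X)  # keep all components to inspect the variance spectrum
ratios = pca.explained_variance_ratio_
print(ratios.cumsum())  # cumulative variance retained by the first k components

# Or let PCA pick the number of components that keeps 95% of the variance
pca95 = PCA(n_components=0.95).fit(X)
print(pca95.n_components_)
```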
# Affinity Propagation Clustering
```
from sklearn.cluster import AffinityPropagation
# Use the Affinity Propagation clustering algorithm
af = AffinityPropagation(preference=-500, damping=0.8)
af.fit(data)
# Get the indices of the cluster exemplars
cluster_centers_indices = af.cluster_centers_indices_
cluster_centers_indices
# Get the number of clusters found
af_labels = af.labels_
np.unique(af_labels)
get_marks(af, data=data, af=True, name="Affinity Propagation")
# Write the AP clustering result into the original table
df['ap_clustering_label'] = af.labels_
# Export the original table as CSV
df.to_csv('data_after_clustering.csv')
# The last two columns hold the labels from the two clustering algorithms
df.head()
from sklearn.model_selection import GridSearchCV
# from sklearn.model_selection import RandomizedSearchCV
# Use GridSearchCV to find the best parameters automatically. If it takes too long (about 4.7 min), use randomized search instead; here AP is used for classification work
params = {'preference':[-50, -100, -150, -200], 'damping':[0.5, 0.6, 0.7, 0.8, 0.9]}
cluster = AffinityPropagation()
af_best_model = GridSearchCV(cluster, params, cv=5, scoring='adjusted_rand_score', verbose=1, n_jobs=-1)
af_best_model.fit(data, labels)
# Parameter settings of the best model
af_best_model.best_params_
# Score of the best model, using the adjusted Rand index (adjusted_rand_score)
af_best_model.best_score_
# Get the best model
af1 = af_best_model.best_estimator_
af1
# Scores of the best model
get_marks(af1, data=data, af=True, name="Affinity Propagation")
"""
from sklearn.externals import joblib
# Save the best model in pkl format
joblib.dump(af1, "af1.pkl")
"""
"""
# Load the best model back from the pkl file
my_model_loaded = joblib.load("af1.pkl")
"""
"""
my_model_loaded
"""
from sklearn.decomposition import PCA
# Use plain PCA for dimensionality reduction, from 11 features down to n_labels dimensions
pca3 = PCA(n_components=n_labels)
reduced_data3 = pca3.fit_transform(data)
af2 = AffinityPropagation(preference=-200, damping=0.8)
get_marks(af2, reduced_data3, name="PCA-based AF", af=True)
```
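Unlike KMeans, Affinity Propagation is not given the number of clusters directly; the `preference` parameter (together with `damping`) controls it indirectly, with lower (more negative) values generally yielding fewer exemplars. A quick sketch on synthetic blobs (the values are illustrative, not tuned for the dataset above):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

# Three well-separated synthetic clusters
X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=42)

# Sweep preference and watch the number of exemplars change
for pref in (-5, -50, -500):
    af = AffinityPropagation(preference=pref, damping=0.8, random_state=42).fit(X)
    print(f"preference={pref}: {len(af.cluster_centers_indices_)} clusters")
```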
# Stratified Sampling Based on the Clustering Results
```
# data2 is the dataset with the true labels removed (but containing the clustering results)
data2 = df.drop("TRUE VALUE", axis=1)
data2.head()
# Inspect the label values from the kmeans clustering; two classes
data2['km_clustering_label'].hist()
from sklearn.model_selection import StratifiedShuffleSplit
# Stratified sampling based on the kmeans clustering result
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(data2, data2["km_clustering_label"]):
strat_train_set = data2.loc[train_index]
strat_test_set = data2.loc[test_index]
def clustering_result_propotions(data):
    """
    Proportions of the different cluster labels in a training or test set after sampling.
    :param data: a training or test set, from purely random or stratified sampling
    """
return data["km_clustering_label"].value_counts() / len(data)
# Label proportions in the stratified test set
clustering_result_propotions(strat_test_set)
# Label proportions in the stratified training set
clustering_result_propotions(strat_train_set)
# Label proportions in the full dataset
clustering_result_propotions(data2)
from sklearn.model_selection import train_test_split
# Purely random sampling
random_train_set, random_test_set = train_test_split(data2, test_size=0.2, random_state=42)
# Label proportions in the full dataset, the stratified test set, and the random test set
compare_props = pd.DataFrame({
    "Overall": clustering_result_propotions(data2),
    "Stratified": clustering_result_propotions(strat_test_set),
    "Random": clustering_result_propotions(random_test_set),
}).sort_index()
# Error of the label proportions in the stratified and random test sets relative to the full dataset
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
compare_props
from sklearn.metrics import f1_score
def get_classification_marks(model, data, labels, train_index, test_index):
    """
    Get the F1 score of a classification model (binary or multiclass).
    :param data: feature-only dataset
    :param labels: label-only dataset
    :param train_index: indices of the training set obtained by stratified sampling
    :param test_index: indices of the test set obtained by stratified sampling
    :return: the F1 score
    """
m = model(random_state=42)
m.fit(data.loc[train_index], labels.loc[train_index])
test_labels_predict = m.predict(data.loc[test_index])
score = f1_score(labels.loc[test_index], test_labels_predict, average="weighted")
return score
from sklearn.linear_model import LogisticRegression
# Score of the classifier trained on the stratified training set
strat_marks = get_classification_marks(LogisticRegression, data, labels, strat_train_set.index, strat_test_set.index)
strat_marks
# Score of the classifier trained on the randomly sampled training set
random_marks = get_classification_marks(LogisticRegression, data, labels, random_train_set.index, random_test_set.index)
random_marks
import numpy as np
from sklearn.metrics import f1_score, r2_score
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone, BaseEstimator, TransformerMixin
class stratified_cross_val_score(BaseEstimator, TransformerMixin):
    """k-fold cross-validation based on stratified sampling"""
    def __init__(self, model, random_state=0, cv=5, pattern="classification"):
        """
        :model: the model to train (regression or classification)
        :random_state: the model's random seed
        :cv: the number of cross-validation folds
        :pattern: either "classification" or "regression"
        """
self.model = model
self.random_state = random_state
self.cv = cv
self.pattern = pattern
self.scores_ = []
self.best_score_ = []
self.estimators_ = []
self.best_estimator_ = []
self.i = 0
def fit(self, X, y, layer_tag):
        """
        :param X: the complete feature-only dataset
        :param y: the complete label-only dataset
        :param layer_tag: the complete dataset of the stratification key (here the KMeans clustering result)
        """
skfolds = StratifiedKFold(n_splits=self.cv, random_state=self.random_state, shuffle=True)
for train_index, test_index in skfolds.split(X, layer_tag):
            # Clone the model to train (classification or regression)
            clone_model = clone(self.model)
            strat_X_train_folds, strat_X_test_fold = X.iloc[train_index], X.iloc[test_index]
            strat_y_train_folds, strat_y_test_fold = y.iloc[train_index], y.iloc[test_index]
            # Train the model
            clone_model.fit(strat_X_train_folds, strat_y_train_folds)
            # Keep the trained model
            self.estimators_.append(clone_model)
            # Predicted values (here, the classifier's predicted labels)
            test_labels_pred = clone_model.predict(strat_X_test_fold)
            if self.pattern == "classification":
                # Classification models use the F1 score
                score_fold = f1_score(y.iloc[test_index], test_labels_pred, average="weighted")
            elif self.pattern == "regression":
                # Regression models use the R^2 score
                score_fold = r2_score(y.iloc[test_index], test_labels_pred)
            # Avoid appending the same values to the list twice
            if self.i < self.cv:
                self.scores_.append(score_fold)
            else:
                None
            self.i += 1
        # Index of the model with the highest score
argmax = np.argmax(self.scores_)
self.best_score_ = self.scores_[argmax]
self.best_estimator_ = self.estimators_[argmax]
def transform(self, X, y=None):
return self
    def mean(self):
        """Return the mean of the cross-validation scores"""
        return np.array(self.scores_).mean()
    def std(self):
        """Return the standard deviation of the cross-validation scores"""
        return np.array(self.scores_).std()
from sklearn.linear_model import SGDClassifier
# The classification model
clf_model = SGDClassifier(max_iter=5, tol=-np.infty, random_state=42)
# Cross-validation based on stratified sampling; data is the complete feature-only dataset, labels the complete label-only dataset
clf_cross_val = stratified_cross_val_score(clf_model, cv=5, random_state=42, pattern="classification")
clf_cross_val_score = clf_cross_val.fit(data, labels, df['km_clustering_label'])
# Score of each cross-validation fold
clf_cross_val.scores_
# Best score among the five folds
clf_cross_val.best_score_
# Mean of the cross-validation scores
clf_cross_val.mean()
# Standard deviation of the cross-validation scores
clf_cross_val.std()
# All models from the five-fold cross-validation
clf_cross_val.estimators_
# The best model from the five-fold cross-validation
best_model = clf_cross_val.best_estimator_
```
| github_jupyter |
# Lecture Notebook: Modeling Data and Knowledge
## Making Choices about Data Representation and Processing using LinkedIn
This module explores concepts in:
* Designing data representations to capture important relationships
* Reasoning over graphs
* Exploring and traversing graphs
It sets the stage for a deeper understanding of issues related to performance, and cloud/cluster-compute data processing.
We'll use MongoDB on the cloud as a sample NoSQL database.
```
!pip install pymongo[tls,srv]
!pip install lxml
!pip install matplotlib
!pip install pandas
!pip install numpy
import pandas as pd
import numpy as np
# JSON parsing
import json
import urllib
# SQLite RDBMS
import sqlite3
# Time conversions
import time
# NoSQL DB
from pymongo import MongoClient
from pymongo.errors import DuplicateKeyError, OperationFailure
import zipfile
import os
# HTML parsing
from lxml import etree
```
## Our Example Dataset
The example dataset is a crawl of LinkedIn, stored as a sequence of JSON objects (one per line).
**Notice:** You need to load the data correctly before running this notebook. You can either fetch the data from a URL or open/mount it locally. See the **instructor notes** or **README** for details.
**Note:** When fetching the data from a URL below, substitute the location of this dataset for X (see Instructor Notes). `urllib.request` will place the result in a file called 'local.zip'. When the data is instead mounted or located in the local directory, use the `open()` function.
```
url = 'https://github.com/odpi/OpenDS4All/raw/master/assets/data/test_data_10000.zip'
filehandle, _ = urllib.request.urlretrieve(url,filename='local.zip')
def fetch_file(fname):
zip_file_object = zipfile.ZipFile(filehandle, 'r')
for file in zip_file_object.namelist():
file = zip_file_object.open(file)
if file.name == fname: return file
return None
linked_in = fetch_file('test_data_10000.json')
%%time
# 10K records from LinkedIn
# If fetching the zipped data from a URL, use the 'fetch_file' helper:
# linked_in = fetch_file('linkedin_small.json')
# If the data is available locally (on Colab or your own machine), open() it instead.
people = []
for line in linked_in:
# print(line)
person = json.loads(line)
people.append(person)
people_df = pd.DataFrame(people)
print ("%d records"%len(people_df))
people_df
```
## NoSQL storage
For this part you need to access MongoDB (see the Instructor Notes). One option is to sign up at:
https://www.mongodb.com/cloud
Click on "Get started", sign up, agree to terms of service, and create a new cluster on AWS free tier (Northern Virginia). Use this location as 'Y' in the client creation below.
Eventually you'll need to tell MongoDB to add your IP address (so you can talk to the machine) and you'll need to create a database called 'linkedin'.
```
# Store in MongoDB some number of records (here limit=10k) and in an in-memory list
START = 0
LIMIT = 10000
# TODO: you need to replace "Y" with the URL corresponding to your
# remote MongoDB cloud instance
client = MongoClient('mongodb+srv://Y')
linkedin_db = client['linkedin']
# Build a list of the JSON elements
list_for_comparison = []
people = 0
for line in linked_in:
person = json.loads(line)
if people >= START:
try:
list_for_comparison.append(person)
linkedin_db.posts.insert_one(person)
except DuplicateKeyError:
pass
except OperationFailure:
# If the above still uses our cluster, you'll get this error in
# attempting to write to our MongoDB client
pass
people = people + 1
# if(people % 1000 == 0):
# print (people)
if people > LIMIT:
break
# Two ways of looking up skills, one based on an in-memory
# list, one based on MongoDB queries
def find_skills_in_list(skill):
for post in list_for_comparison:
if 'skills' in post:
skills = post['skills']
for this_skill in skills:
if this_skill == skill:
return post
return None
def find_skills_in_mongodb(skill):
return linkedin_db.posts.find_one({'skills': skill})
%%time
find_skills_in_list('Marketing')
%%time
find_skills_in_mongodb('Marketing')
```
## Designing a relational schema from hierarchical data
Given that we already have a predefined set of fields / attributes / features, we don't need to spend a lot of time defining our table *schemas*, except that we need to unnest data.
* Nested relationships can be captured by creating a second table, which has a **foreign key** pointing to the identifier (key) for the main (parent) table.
* Ordered lists can be captured by encoding an index number or row number.
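The two rules above can be illustrated with a toy record set (hypothetical data, not the LinkedIn schema): a nested list of skills becomes its own table whose `person` column is a **foreign key** to the parent's `_id`, and a `pos` column preserves the list order.

```python
import pandas as pd

records = [
    {"_id": "p1", "name": "Ada", "skills": ["SQL", "Python"]},
    {"_id": "p2", "name": "Bob", "skills": ["Marketing"]},
]

people_rows, skill_rows = [], []
for rec in records:
    rec = dict(rec)
    # Unnest: move the list into a child table with a foreign key and a position
    for pos, skill in enumerate(rec.pop("skills")):
        skill_rows.append({"person": rec["_id"], "pos": pos, "value": skill})
    people_rows.append(rec)

people_df = pd.DataFrame(people_rows)   # parent table: _id, name
skills_df = pd.DataFrame(skill_rows)    # child table: person (FK), pos, value
print(people_df)
print(skills_df)
```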
```
'''
Simple code to pull out data from JSON and load into SQLite
'''
# linked_in = urllib.request.urlopen('X')
# linked_in = fetch_file('linkedin_small.json')
linked_in = open('/content/drive/My Drive/Colab Notebooks/test_data_10000.json')
START = 0
LIMIT = 10000 # Limit the max number of records to be 10K.
def get_df(rel):
ret = pd.DataFrame(rel).fillna('')
for k in ret.keys():
ret[k] = ret[k].astype(str)
return ret
def extract_relation(rel, name):
'''
Pull out a nested list that has a key, and return it as a list
of dictionaries suitable for treating as a relation / dataframe
'''
# We'll return a list
ret = []
if name in rel:
ret2 = rel.pop(name)
try:
# Try to parse the string as a dictionary
ret2 = json.loads(ret2.replace('\'','\"'))
except:
# If we get an error in parsing, we'll leave as a string
pass
# If it's a dictionary, add it to our return results after
# adding a key to the parent
if isinstance(ret2, dict):
item = ret2
item['person'] = rel['_id']
ret.append(item)
else:
# If it's a list, iterate over each item
index = 0
for r in ret2:
item = r
if not isinstance(item, dict):
item = {'person': rel['_id'], 'value': item}
else:
item['person'] = rel['_id']
# A fix to a typo in the data
if 'affilition' in item:
item['affiliation'] = item.pop('affilition')
item['pos'] = index
index = index + 1
ret.append(item)
return ret
names = []
people = []
groups = []
education = []
skills = []
experience = []
honors = []
also_view = []
events = []
conn = sqlite3.connect('linkedin.db')
lines = []
i = 1
for line in linked_in:
if i > START + LIMIT:
break
elif i >= START:
person = json.loads(line)
# By inspection, all of these are nested dictionary or list content
nam = extract_relation(person, 'name')
edu = extract_relation(person, 'education')
grp = extract_relation(person, 'group')
skl = extract_relation(person, 'skills')
exp = extract_relation(person, 'experience')
hon = extract_relation(person, 'honors')
als = extract_relation(person, 'also_view')
eve = extract_relation(person, 'events')
# This doesn't seem relevant and it's the only
# non-string field that's sometimes null
if 'interval' in person:
person.pop('interval')
lines.append(person)
names = names + nam
education = education + edu
groups = groups + grp
skills = skills + skl
experience = experience + exp
honors = honors + hon
also_view = also_view + als
events = events + eve
i = i + 1
people_df = get_df(pd.DataFrame(lines))
names_df = get_df(pd.DataFrame(names))
education_df = get_df(pd.DataFrame(education))
groups_df = get_df(pd.DataFrame(groups))
skills_df = get_df(pd.DataFrame(skills))
experience_df = get_df(pd.DataFrame(experience))
honors_df = get_df(pd.DataFrame(honors))
also_view_df = get_df(pd.DataFrame(also_view))
events_df = get_df(pd.DataFrame(events))
people_df
# Save these to the SQLite database
people_df.to_sql('people', conn, if_exists='replace', index=False)
names_df.to_sql('names', conn, if_exists='replace', index=False)
education_df.to_sql('education', conn, if_exists='replace', index=False)
groups_df.to_sql('groups', conn, if_exists='replace', index=False)
skills_df.to_sql('skills', conn, if_exists='replace', index=False)
experience_df.to_sql('experience', conn, if_exists='replace', index=False)
honors_df.to_sql('honors', conn, if_exists='replace', index=False)
also_view_df.to_sql('also_view', conn, if_exists='replace', index=False)
events_df.to_sql('events', conn, if_exists='replace', index=False)
groups_df
pd.read_sql_query('select _id, org from people join experience on _id=person', conn)
pd.read_sql_query("select _id, group_concat(org) as experience " +\
" from people left join experience on _id=person group by _id", conn)
```
## Views
Since we may want to see all the experiences of a person in one place rather than in separate rows, we will create a view in which they are listed as a string (column named experience). The following code creates this view within the context of a transaction (the code between "begin" and "commit" or "rollback"). If the view already exists, it removes it and creates a new one.
```
conn.execute('begin transaction')
conn.execute('drop view if exists people_experience')
conn.execute("create view people_experience as select _id, group_concat(org) as experience " +\
" from people left join experience on _id=person group by _id")
conn.execute('commit')
# Treat the view as a table, see what's there
pd.read_sql_query('select * from people_experience', conn)
```
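Because a view is recomputed at query time, it automatically reflects later changes to the base tables. A self-contained sketch with an in-memory SQLite database (toy tables mirroring the schema above):

```python
import sqlite3

conn2 = sqlite3.connect(':memory:')
conn2.execute('create table people (_id text)')
conn2.execute('create table experience (person text, org text)')
conn2.execute("insert into people values ('p1')")
conn2.execute("insert into experience values ('p1', 'Acme')")
conn2.execute("create view people_experience as select _id, group_concat(org) as experience "
              "from people left join experience on _id=person group by _id")

print(conn2.execute('select * from people_experience').fetchall())  # [('p1', 'Acme')]

# A later insert into a base table is visible through the view immediately
conn2.execute("insert into experience values ('p1', 'Initech')")
print(conn2.execute('select * from people_experience').fetchall())  # now lists both orgs
```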
| github_jupyter |
https://pythonprogramming.net/q-learning-reinforcement-learning-python-tutorial/

https://en.wikipedia.org/wiki/Q-learning

```
# objective is to get the cart to the flag.
# for now, let's just move randomly:

import gym
import numpy as np

env = gym.make("MountainCar-v0")

LEARNING_RATE = 0.1

DISCOUNT = 0.95
EPISODES = 25000
SHOW_EVERY = 3000

DISCRETE_OS_SIZE = [20, 20]
discrete_os_win_size = (env.observation_space.high - env.observation_space.low)/DISCRETE_OS_SIZE

# Exploration settings
epsilon = 1  # not a constant, going to be decayed
START_EPSILON_DECAYING = 1
END_EPSILON_DECAYING = EPISODES//2
epsilon_decay_value = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)


q_table = np.random.uniform(low=-2, high=0, size=(DISCRETE_OS_SIZE + [env.action_space.n]))


def get_discrete_state(state):
    discrete_state = (state - env.observation_space.low)/discrete_os_win_size
    return tuple(discrete_state.astype(np.int))  # we use this tuple to look up the 3 Q values for the available actions in the q-table


for episode in range(EPISODES):
    discrete_state = get_discrete_state(env.reset())
    done = False

    if episode % SHOW_EVERY == 0:
        render = True
        print(episode)
    else:
        render = False

    while not done:

        if np.random.random() > epsilon:
            # Get action from Q table
            action = np.argmax(q_table[discrete_state])
        else:
            # Get random action
            action = np.random.randint(0, env.action_space.n)


        new_state, reward, done, _ = env.step(action)

        new_discrete_state = get_discrete_state(new_state)

        if episode % SHOW_EVERY == 0:
            env.render()
        #new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * (reward + DISCOUNT * max_future_q)

        # If simulation did not end yet after last step - update Q table
        if not done:

            # Maximum possible Q value in next step (for new state)
            max_future_q = np.max(q_table[new_discrete_state])

            # Current Q value (for current state and performed action)
            current_q = q_table[discrete_state + (action,)]

            # And here's our equation for a new Q value for current state and action
            new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * (reward + DISCOUNT * max_future_q)

            # Update Q table with new Q value
            q_table[discrete_state + (action,)] = new_q


        # Simulation ended (for any reason) - if goal position is achieved - update Q value with reward directly
        elif new_state[0] >= env.goal_position:
            #q_table[discrete_state + (action,)] = reward
            q_table[discrete_state + (action,)] = 0

        discrete_state = new_discrete_state

    # Decaying is being done every episode if episode number is within decaying range
    if END_EPSILON_DECAYING >= episode >= START_EPSILON_DECAYING:
        epsilon -= epsilon_decay_value


env.close()
```

| github_jupyter |
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/22.CPT_Entity_Resolver.ipynb)
# CPT Entity Resolvers with sBert
```
import json, os
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# Adding license key-value pairs to environment variables
os.environ.update(license_keys)
# Installing pyspark and spark-nlp
! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
# Installing Spark NLP Display Library for visualization
! pip install -q spark-nlp-display
import json
import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession
import sparknlp
import sparknlp_jsl
import sys, os, time
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp.util import *
from sparknlp_jsl.annotator import *
from sparknlp.pretrained import ResourceDownloader
from pyspark.sql import functions as F
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print (sparknlp.version())
print (sparknlp_jsl.version())
```
## Named Entity Recognition
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
    .setOutputCol("token")
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
ner_pipeline = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
])
data_ner = spark.createDataFrame([['']]).toDF("text")
ner_model = ner_pipeline.fit(data_ner)
ner_light_pipeline = LightPipeline(ner_model)
clinical_note = (
'A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years '
'prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior '
'episode of HTG-induced pancreatitis three years prior to presentation, associated '
'with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2, '
'presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting. '
'Two weeks prior to presentation, she was treated with a five-day course of amoxicillin '
'for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin '
'for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months '
'at the time of presentation. Physical examination on presentation was significant for dry oral mucosa; '
'significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent '
'laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20, '
'creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c) '
'10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed '
'as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for '
'starvation ketosis, as she reported poor oral intake for three days prior to admission. However, '
'serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap '
'was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and '
'lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - '
'the original sample was centrifuged and the chylomicron layer removed prior to analysis due to '
'interference from turbidity caused by lipemia again. The patient was treated with an insulin drip '
'for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within '
'24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting '
'of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on '
'40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg '
'two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She '
'had close follow-up with endocrinology post discharge.'
)
from sparknlp_display import NerVisualizer
visualiser = NerVisualizer()
# Change color of an entity label
visualiser.set_label_colors({'PROBLEM':'#008080', 'TEST':'#800080', 'TREATMENT':'#806080'})
# Set label filter
#visualiser.display(ppres, label_col='ner_chunk', labels=['PER'])
visualiser.display(ner_light_pipeline.fullAnnotate(clinical_note)[0], label_col='ner_chunk', document_col='document')
```
## CPT Resolver
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_jsl", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['Test','Procedure'])
c2doc = Chunk2Doc()\
.setInputCols("ner_chunk")\
.setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings\
.pretrained("sbiobert_base_cased_mli",'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sbert_embeddings")
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
sbert_pipeline_cpt = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
c2doc,
sbert_embedder,
cpt_resolver])
text = '''
EXAM: Left heart cath, selective coronary angiogram, right common femoral angiogram, and StarClose closure of right common femoral artery.
REASON FOR EXAM: Abnormal stress test and episode of shortness of breath.
PROCEDURE: Right common femoral artery, 6-French sheath, JL4, JR4, and pigtail catheters were used.
FINDINGS:
1. Left main is a large-caliber vessel. It is angiographically free of disease,
2. LAD is a large-caliber vessel. It gives rise to two diagonals and septal perforator. It erupts around the apex. LAD shows an area of 60% to 70% stenosis probably in its mid portion. The lesion is a type A finishing before the takeoff of diagonal 1. The rest of the vessel is angiographically free of disease.
3. Diagonal 1 and diagonal 2 are angiographically free of disease.
4. Left circumflex is a small-to-moderate caliber vessel, gives rise to 1 OM. It is angiographically free of disease.
5. OM-1 is angiographically free of disease.
6. RCA is a large, dominant vessel, gives rise to conus, RV marginal, PDA and one PL. RCA has a tortuous course and it has a 30% to 40% stenosis in its proximal portion.
7. LVEDP is measured 40 mmHg.
8. No gradient between LV and aorta is noted.
Due to contrast concern due to renal function, no LV gram was performed.
Following this, right common femoral angiogram was performed followed by StarClose closure of the right common femoral artery.
'''
data_ner = spark.createDataFrame([[text]]).toDF("text")
sbert_models = sbert_pipeline_cpt.fit(data_ner)
sbert_outputs = sbert_models.transform(data_ner)
from pyspark.sql import functions as F
cpt_sdf = sbert_outputs.select(F.explode(F.arrays_zip("ner_chunk.result","ner_chunk.metadata","cpt_code.result","cpt_code.metadata","ner_chunk.begin","ner_chunk.end")).alias("cpt_code")) \
.select(F.expr("cpt_code['0']").alias("chunk"),
F.expr("cpt_code['4']").alias("begin"),
F.expr("cpt_code['5']").alias("end"),
F.expr("cpt_code['1'].entity").alias("entity"),
F.expr("cpt_code['2']").alias("code"),
F.expr("cpt_code['3'].confidence").alias("confidence"),
F.expr("cpt_code['3'].all_k_resolutions").alias("all_k_resolutions"),
F.expr("cpt_code['3'].all_k_results").alias("all_k_codes"))
cpt_sdf.show(10, truncate=100)
import pandas as pd
def get_codes(light_model, code, text):
full_light_result = light_model.fullAnnotate(text)
chunks = []
terms = []
begin = []
end = []
resolutions=[]
entity=[]
all_codes=[]
for chunk, term in zip(full_light_result[0]['ner_chunk'], full_light_result[0][code]):
begin.append(chunk.begin)
end.append(chunk.end)
chunks.append(chunk.result)
terms.append(term.result)
entity.append(chunk.metadata['entity'])
resolutions.append(term.metadata['all_k_resolutions'])
all_codes.append(term.metadata['all_k_results'])
df = pd.DataFrame({'chunks':chunks, 'begin': begin, 'end':end, 'entity':entity,
'code':terms,'resolutions':resolutions,'all_codes':all_codes})
return df
text='''
REASON FOR EXAM: Evaluate for retroperitoneal hematoma on the right side of pelvis, the patient has been following, is currently on Coumadin.
In CT abdomen, there is no evidence for a retroperitoneal hematoma, but there is an metastases on the right kidney.
The liver, spleen, adrenal glands, and pancreas are unremarkable. Within the superior pole of the left kidney, there is a 3.9 cm cystic lesion. A 3.3 cm cystic lesion is also seen within the inferior pole of the left kidney. No calcifications are noted. The kidneys are small bilaterally.
In CT pelvis, evaluation of the bladder is limited due to the presence of a Foley catheter, the bladder is nondistended. The large and small bowels are normal in course and caliber. There is no obstruction.
'''
cpt_light_pipeline = LightPipeline(sbert_models)
get_codes(cpt_light_pipeline, 'cpt_code', text)
from sparknlp_display import EntityResolverVisualizer
vis = EntityResolverVisualizer()
# Change color of an entity label
vis.set_label_colors({'Procedure':'#008080', 'Test':'#800080'})
light_data_cpt = cpt_light_pipeline.fullAnnotate(text)
vis.display(light_data_cpt[0], 'ner_chunk', 'cpt_code')
text='''1. The left ventricular cavity size and wall thickness appear normal. The wall motion and left ventricular systolic function appears hyperdynamic with estimated ejection fraction of 70% to 75%. There is near-cavity obliteration seen. There also appears to be increased left ventricular outflow tract gradient at the mid cavity level consistent with hyperdynamic left ventricular systolic function. There is abnormal left ventricular relaxation pattern seen as well as elevated left atrial pressures seen by Doppler examination.
2. The left atrium appears mildly dilated.
3. The right atrium and right ventricle appear normal.
4. The aortic root appears normal.
5. The aortic valve appears calcified with mild aortic valve stenosis, calculated aortic valve area is 1.3 cm square with a maximum instantaneous gradient of 34 and a mean gradient of 19 mm.
6. There is mitral annular calcification extending to leaflets and supportive structures with thickening of mitral valve leaflets with mild mitral regurgitation.
7. The tricuspid valve appears normal with trace tricuspid regurgitation with moderate pulmonary artery hypertension. Estimated pulmonary artery systolic pressure is 49 mmHg. Estimated right atrial pressure of 10 mmHg.
8. The pulmonary valve appears normal with trace pulmonary insufficiency.
9. There is no pericardial effusion or intracardiac mass seen.
10. There is a color Doppler suggestive of a patent foramen ovale with lipomatous hypertrophy of the interatrial septum.
11. The study was somewhat technically limited and hence subtle abnormalities could be missed from the study.
'''
df = get_codes(cpt_light_pipeline, 'cpt_code', text)
df
text='''
CC: Left hand numbness on presentation; then developed lethargy later that day.
HX: On the day of presentation, this 72 y/o RHM suddenly developed generalized weakness and lightheadedness, and could not rise from a chair. Four hours later he experienced sudden left hand numbness lasting two hours. There were no other associated symptoms except for the generalized weakness and lightheadedness. He denied vertigo.
He had been experiencing falling spells without associated LOC up to several times a month for the past year.
MEDS: procardia SR, Lasix, Ecotrin, KCL, Digoxin, Colace, Coumadin.
PMH: 1)8/92 evaluation for presyncope (Echocardiogram showed: AV fibrosis/calcification, AV stenosis/insufficiency, MV stenosis with annular calcification and regurgitation, moderate TR, Decreased LV systolic function, severe LAE. MRI brain: focal areas of increased T2 signal in the left cerebellum and in the brainstem probably representing microvascular ischemic disease.
'''
df = get_codes(cpt_light_pipeline, 'cpt_code', text)
df
```
# PyPick: seismic event picking using Matplotlib and machine learning
*by [Alan Richardson](mailto:alan@ausargeo.com) ([Ausar Geophysical](http://www.ausargeo.com))*
Picking, either arrivals in data or horizons in migrated images, is a common task in seismic processing. I wanted to try doing it using Matplotlib so that I could pick data on remote servers using a Jupyter notebook and also easily have access to the picks in Python for further processing. It additionally seemed like a task that might be a fun potential application of machine learning.
The result is rather slow and a bit fragile, but it seems to work moderately well. It is just a toy project at the moment, so I don't know whether it would work on realistically-sized datasets.
## Method
### Picking with Matplotlib
One really nice feature of Matplotlib is that it can use several different backends. These allow it to run in different situations, such as on different operating systems, and, notably, in web browsers using Jupyter notebooks. Writing a picking utility using Matplotlib thus means that it can run locally on a computer, and also on a remote server accessed via a web browser.
A second feature of Matplotlib that is critical for this application is that it supports event triggers. Clicking on the plot can trigger an event that passes the location of the click to a function I have written. This is understandably difficult for Matplotlib to do across multiple backends, but it worked for me both locally (using the Qt4 backend) and in a Jupyter notebook, although it is much slower than I would like and is sometimes a bit temperamental. I initially considered using [`pyplot.ginput`](https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.ginput.html), but this wasn't supported by the Jupyter notebook backend. The [`mpl_connect`](https://matplotlib.org/users/event_handling.html) function did work, though, and supports both mouse clicks and button presses. To enable interactive plots in Jupyter notebooks, you need to use the `%matplotlib notebook` magic function, rather than the more common `%matplotlib inline` (which renders the plots as static PNG images).
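The event-handling approach described above can be sketched in a few lines. This is a minimal stand-alone example of `mpl_connect`, not PyPick's actual code:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 0])
picks = []

def on_click(event):
    # event.xdata/ydata are None when the click lands outside the axes
    if event.xdata is not None:
        picks.append((event.xdata, event.ydata))

def on_key(event):
    if event.key == 'q':
        # e.g. save the picks and stop listening for clicks
        fig.canvas.mpl_disconnect(cid)

cid = fig.canvas.mpl_connect('button_press_event', on_click)
kid = fig.canvas.mpl_connect('key_press_event', on_key)
plt.show()
```

The same two connections work unchanged under the Qt and Jupyter notebook backends, which is what makes this portable between local and remote use.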
### Predicting picks using machine learning
We don't want to have to pick the event in every frame of data. In the case of seismic data, we would like to be able to pick the event on a few gathers spread over the volume, and then have the computer predict the picks for the rest of the data. This is often accomplished with a combination of interpolation and peak/trough detection, but I decided to try applying machine learning tools to it.
I split the task into two parts. The first consists of predicting the approximate location of a trace's pick. For this I use regression methods from [Scikit-learn](http://scikit-learn.org/), such as [linear regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) or [support vector machines](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html). I allow the user to specify which method to use, so the parameters can be chosen, or the regression method could even be wrapped in a [grid search](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). This approximate pick is used to isolate a window of data that hopefully contains the true pick location, which is then passed to the second prediction step.
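This first step is essentially an ordinary scikit-learn regression. The sketch below is illustrative only — the synthetic data, array shapes, and choice of `C` are my assumptions, not PyPick's internals:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Fake trace parameters: offset, |offset|, receiver x, receiver elevation
trace_params = rng.normal(size=(200, 4))
# Fake "true" picks that depend mostly on absolute offset
pick_times = 100 + 5 * np.abs(trace_params[:, 1]) + rng.normal(scale=0.5, size=200)

reg = SVR(C=10.0)
reg.fit(trace_params[:50], pick_times[:50])      # 50 "hand-picked" traces
approx_picks = reg.predict(trace_params[50:])    # rough picks for the rest
```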
The second part of the prediction uses a single-layer convolutional neural network to identify the pick location within the window provided by the first part. It can also use a user-supplied function instead. Using a neural network rather than peak/trough detection means that, if desired, you do not need to pick along a peak or trough. It also creates the possibility that the network could detect more complicated waveforms than simple peak or trough detection, potentially improving its ability to identify the correct event. For this neural network part, I use [TensorFlow](https://www.tensorflow.org/).
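This second step amounts to scoring every sample in the window and taking the argmax. In the sketch below a single fixed NumPy filter stands in for the network — the kernel values and window are invented for illustration, while PyPick's actual TensorFlow model learns its weights:

```python
import numpy as np

window = np.sin(np.linspace(0, 3 * np.pi, 50))   # data window from step 1
kernel = np.array([-0.5, 1.0, -0.5])             # stand-in for learned weights

# Cross-correlate the filter with the window and take the best-scoring sample
scores = np.convolve(window, kernel[::-1], mode='same')
fine_pick = int(np.argmax(scores))
```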
## Example
To test the code, I will try to pick the first break in the [2D Alaska 31-81 dataset](http://wiki.seg.org/wiki/Alaska_2D_land_line_31-81). I used [Karl Schleicher's processing flow](http://s3.amazonaws.com/open.source.geoscience/open_data/alaska/alaska31-81.tar.gz) to remove auxiliary traces and add geometry to the dataset. I then used my [netcdf_segy](https://github.com/ar4/netcdf_segy) tool to convert the data to NetCDF format, which I load below.
Note that I use the `%matplotlib notebook` magic function to specify that I want to use the interactive backend.
```
import pypick.pypick
import pypick.predict_picks
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
data = xr.open_dataset('allshots.nc')
```
The pypick tool requires that you pass arrays containing the frame parameters (such as shot number for shot gathers), and trace parameters (such as receiver coordinates for shot gathers), so I form those in the following cell. I use signed offset, absolute offset, receiver X position, and receiver elevation for the trace parameters.
The frame parameters array must contain at least two columns. The first two columns will be used as coordinates for making a scatter plot that allows the user to choose which frames to pick. Since this is a 2D dataset (a single line of sources), I just want the frames to be arranged in a line, so I set the second column of the frame parameters array to be all zeros.
```
num_frames, frame_len, trace_len = data.Samples.shape
frame_params = np.zeros([num_frames, 2], dtype=np.float32)
frame_params[:, 0] = data.FieldRecord
trace_params = np.concatenate([data.offset.values[:,:,np.newaxis],
np.abs(data.offset.values[:,:,np.newaxis]),
data.GroupX.values[:,:,np.newaxis],
data.ReceiverGroupElevation.values[:,:,np.newaxis]],
axis=2)
print(frame_params.shape, trace_params.shape)
data.Samples.shape
```
To make the data easier to pick, I crudely balance the trace amplitude by dividing each trace by its maximum amplitude. Applying AGC would also have been a good choice.
```
data['normalised_Samples'] = data.Samples/np.max(np.abs(data.Samples), axis=2)
```
To get started, we need to create a `Pypicks` object. We pass the data, frame and trace parameters, the methods for doing the approximate (step 1) and fine (step 2) pick prediction, and the length of the window that we extract around the approximate picks. If you have already done some picking on this dataset, you can also pass your existing picks.
Because pick prediction takes time, you may want to turn it off while you are doing picking by passing `perform_prediction=False`. You can turn it on again later by setting the `perform_prediction` attribute of your `Pypicks` object to `True`, or you could recreate a new `Pypicks` object with `perform_prediction=True` (the default) and `picks` equal to the `picks` attribute from the previous object (so your existing picks get copied to the new one).
```
picks = pypick.pypick.Pypicks(data.normalised_Samples.values, frame_params, trace_params,
perform_prediction=False)
```
Next, we do some picking by calling `pypick`. This launches a PyPlot figure that initially shows a scatter plot with each point representing a frame of the data. Clicking on one of the points will open that frame so that you can pick it. The image below shows this screen after picking several frames (they change to yellow when you have picked them).
In the picking screen, clicking on a point while pressing the 'a' key will update the pick at that location. Pressing the 'q' key saves the picks and exits the picking screen (returning to the screen that allows you to choose a frame). Pressing the 'x' key exits the picking screen without saving the picks from the current frame.
Once you have picked at least one frame, the tool will try to predict the picks on subsequent frames that are opened (unless you have turned this off, as I mentioned above). For the first few of these, the predictions will probably not be good, and so you may wish to delete many or all of them so that you can more easily make large adjustments. You can do this by holding down the 'd' key. Clicking and dragging while 'd' is pressed will create a rectangular selector that deletes all points within it. I have found that when doing this, I have to ensure that I am not in zoom or pan mode for it to work.
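Matplotlib provides a `RectangleSelector` widget that supports exactly this kind of drag-to-select deletion; the stand-alone sketch below is my own illustration, not the code PyPick uses:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import RectangleSelector

fig, ax = plt.subplots()
pts = np.random.rand(20, 2)
sc = ax.scatter(pts[:, 0], pts[:, 1])

def on_select(eclick, erelease):
    """Drop every scatter point inside the dragged rectangle."""
    global pts
    x0, x1 = sorted([eclick.xdata, erelease.xdata])
    y0, y1 = sorted([eclick.ydata, erelease.ydata])
    inside = ((pts[:, 0] >= x0) & (pts[:, 0] <= x1)
              & (pts[:, 1] >= y0) & (pts[:, 1] <= y1))
    pts = pts[~inside]
    sc.set_offsets(pts)
    fig.canvas.draw_idle()

selector = RectangleSelector(ax, on_select, useblit=True)
plt.show()
```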
```
picks.pypick()
```
In the following cell I run `pypick` again, this time turning off interactive mode (which converts the current view into a static PNG) to show you what the picking screen looks like. I have zoomed in (another nice feature of interactive mode) to make accurate picking easier.
```
picks.pypick()
```
All of the saved picks are stored in the `picks` array of the `Pypicks` object.
```
picks.picks[0] # picks for the first frame
```
Let's save our picks to make sure that we don't lose them.
```
ppicks = picks.picks
import pickle
pickle.dump(ppicks, open('picks.pickle', 'wb'))
# I am finished with interactive plotting now
%matplotlib inline
```
To predict the picks for a frame, use the `predict` method of the `Pypicks` object, specifying the frame index.
In the cells below I compare the true picks for the first frame with the predicted picks for the same frame. These will not necessarily be the same, although we would like them to be as similar as possible. The accuracy is affected by the estimators that are used and their parameters. One interesting way to choose estimator parameters would be using [ipywidgets](http://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html) to create sliders to adjust the estimator parameters until you like the results. Another would be to use GridSearchCV, as I mentioned previously. I have decided to use the latter on both steps of the prediction.
```
picks = pypick.pypick.Pypicks(data.normalised_Samples.values, frame_params, trace_params,
approx_reg=GridSearchCV(SVR(), {'C': 10**np.arange(0, 4)}),
fine_reg=GridSearchCV(pypick.predict_picks.Fine_reg_tensorflow(batch_size=25,
num_steps=5000),
[{'box_size': [10], 'layer_len': [3, 5]},
{'box_size': [15], 'layer_len': [3, 5, 10]},
{'box_size': [20], 'layer_len': [3, 5, 10, 15]}]),
picks=ppicks)
plt.figure()
plt.plot(ppicks[0], label='true')
plt.plot(picks.predict(0), label='predicted')
plt.legend();
```
We can check which parameters gave the best results in the grid search.
```
print(picks.approx_reg.best_params_, picks.fine_reg.best_params_)
```
To predict picks for all of the frames, we call predict without an argument.
```
all_predict = picks.predict()
```
## Application: Tomo2D
Now that we have the first break picks for every shot, we can use a tool such as [Tomo2D](http://people.earth.yale.edu/software/jun-korenaga) to invert for the velocity model. The maximum offset in this dataset is only about 1.6 km, so first break tomography will only allow us to update the very near surface.
```
np.max(np.abs(data.offset.values)) * 0.305e-3 # maximum offset converted from ft to km
plt.figure(figsize=(12,12))
plt.scatter(np.arange(55).repeat(96), data.GroupX.values.reshape(-1), s=0.1, label='receivers')
plt.scatter(np.arange(55), data.SourceX[:,0].values.reshape(-1), s=5, label='shots')
plt.xlabel('shot index');
plt.ylabel('x');
plt.legend();
```
The coordinates in the dataset are in feet, but Tomo2D expects kilometers, so let's convert them and save the results as new variables.
```
data['SourceX_km'] = data.SourceX * 0.305e-3
data['GroupX_km'] = data.GroupX * 0.305e-3
data['SourceSurfaceElevation_km'] = data.SourceSurfaceElevation * 0.305e-3
data['ReceiverGroupElevation_km'] = data.ReceiverGroupElevation * 0.305e-3
```
To run travel time inversion with Tomo2D, a few files are required. The first of these, which I will call "picks.txt", contains the travel time picks. The format is described in the Tomo2D documentation. It mainly consists of providing the source and receiver coordinates, and the associated travel time. As the depth axis is assumed to point downward, I use the negative of the elevations as the depth of the sources and receivers. To ensure that they are in the earth, rather than being affected by the air velocity, I shift the elevations slightly by `srcrecdepth_km`.
It was difficult to pick the first breaks for the receivers at the end of the line for the last few sources, so I exclude those from the inversion.
```
picks_file = open('picks.txt', 'w')
num_src = 55
num_rcv = 96 * np.ones(55, dtype=int)
# for shots after 45, only use the first 75 receivers as picks
num_rcv[45:] = 75
minx = np.min(data.GroupX_km.values)
srcrecdepth_km = 0.0001
picks_file.write('%d\n' % num_src)
for src in range(num_src):
picks_file.write('s %f %f %d\n' % (data.SourceX_km[src, 0] - minx,
-(data.SourceSurfaceElevation_km[src, 0] - srcrecdepth_km),
num_rcv[src]))
for rcv in range(num_rcv[src]):
pick_time = (all_predict[src*96 + rcv] - 10) * 0.002
picks_file.write('r %f %f 0 %f 0.02\n' % (data.GroupX_km[src, rcv] - minx,
-(data.ReceiverGroupElevation_km[src, rcv] - srcrecdepth_km),
pick_time))
picks_file.close()
```
You can also specify the topography. There isn't a large variation in elevation over the survey area, but extracting topography from the dataset is a nice demonstration of how easy it is to work with seismic data in Python.
I begin by extracting the `x` and `z` coordinates of the sources and receivers. Since the same source will occur in many traces, and similarly for each receiver, I only use the first occurrence of each to avoid duplicates. I then do a 1D interpolation so that I can estimate the topography at each `x` in my inversion model grid. The result is plotted a few cells down.
```
rcv_group = data.groupby(data.GroupX_km).first()
shot_group = data.groupby(data.SourceX_km).first()
elevations = np.concatenate([rcv_group.ReceiverGroupElevation_km.values,
shot_group.SourceSurfaceElevation_km.values])
x_pos = np.concatenate([rcv_group.GroupX_km.values, shot_group.SourceX_km.values])
import scipy.interpolate
topo_func = scipy.interpolate.interp1d(x_pos, elevations, fill_value='extrapolate')
print(np.min(x_pos), np.max(x_pos), np.max(x_pos) - np.min(x_pos))
x_new = np.arange(0, 10.6, 0.02) + np.min(x_pos)
topo_interp = topo_func(x_new)
plt.plot(x_new, topo_interp)
plt.xlabel('x (km)');
plt.ylabel('elevation (km)');
```
I now write the range of `x` and `z` grid coordinates that I want the model grid to cover to files, along with the topography at the chosen `x` coordinates (again negated as the depth axis points down).
```
vfile_x = x_new - np.min(x_pos)
vfile_z = np.arange(0, 0.3, 0.005)
vfile_t = -topo_interp
vfile_x.tofile('x.txt', sep='\n')
vfile_z.tofile('z.txt', sep='\n')
vfile_t.tofile('t.txt', sep='\n')
```
I use Tomo2D's `gen_smesh` tool to convert these files into a starting model, with a velocity increasing from 4km/s at the surface by 0.5km/s per km depth.
```
!./gen_smesh -A4.0 -B0.5 -Xx.txt -Zz.txt -Tt.txt > v.in
```
Now it's time for the actual inversion. The cell below runs Tomo2D's `tt_inverse` tool for 30 iterations. I have chosen to apply quite a bit of smoothing to the result. I created the `vcorrt.in` file by hand. It specifies spatial correlation lengths (which I set to 0.5 km in `x` and `z`).
```
!./tt_inverse -Mv.in -Gpicks.txt -I30 -SV1000 -CVvcorrt.in
```
In the cells below, I open and plot the inverted model. The result indicates that my guess of 4km/s at the surface might have been a bit too high, as the inversion has slowed it down to around 3.7km/s.
```
v1 = np.fromfile('inv_out.smesh.30.1', sep=" ")
nx = 530
nz = 60
v1 = v1[4 + 2 * nx + nz:].reshape(nx, nz)
plt.figure(figsize=(12, 6))
plt.imshow(v1.T, aspect='auto')
plt.colorbar()
plt.xlabel('x (cells)')
plt.ylabel('z (cells)');
## Conclusion
This picking tool is currently quite slow and fragile, but the convenience of being able to run it locally or remotely using a Jupyter notebook, and of having easy access to the results in Python, may compensate for this. Although it is only intended for demonstration, applying it to larger datasets may be possible. The main obstacle would probably be a shortage of memory. The prediction step, in particular, loads large parts of the dataset into memory. Running the prediction on batches of frames (`picks.predict(frame_ids)`) rather than on the whole dataset at once, might overcome that problem.
Using machine learning for picking seems to work quite well, although I certainly think there is room for improvement. One feature that may be desirable is an enforcement of continuity. At the moment, there is no penalty for predicting picks with large jumps between neighbouring traces. Another current shortcoming is the restriction of only having one pick per trace. This is reasonable when picking arrivals in seismic data (such as the first break picking example above), but may be problematic for picking seismic images where a fault could cause the same layer to occur at two depths in the same depth trace.
If you have suggestions for improvement, bug reports, or other comments, please [let me know](mailto:alan@ausargeo.com) or submit an issue or pull request on Github.
```
# %load_ext nb_black
from sklearn.manifold import TSNE
import seaborn as sns
from IPython.core.display import display, HTML
from torch.nn.utils.rnn import pad_sequence
import pytorch_lightning as pl
from torch.nn import CrossEntropyLoss
from transformers import BertForSequenceClassification, BertTokenizer
import numpy as np
import json
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.dummy import DummyClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import (
precision_recall_curve,
roc_auc_score,
average_precision_score,
confusion_matrix,
)
from transformers import TapasModel, TapasTokenizer
%load_ext nb_black
import sys
sys.path.append("../src/")
from rule_based_triple_extraction import (
rule_based_triple_extraction,
_set_index,
_collapse_multiindex,
is_compound_series_ner,
known_compounds_foodb,
known_foods_foodb,
is_entity_series,
)
from tqdm import tqdm
import torch
import spacy
from transformers import BertTokenizer, BertModel
from autogoal.kb import VectorContinuous, Categorical, Supervised
def load_data(annotations_file):
annotation_results = json.load(open(annotations_file))
labels = []
table_htmls = []
for result in annotation_results:
# assert len(result["annotations"]) == 1
label = result["annotations"][0]["result"][0]["value"]["choices"][0]
labels.append(label)
table_htmls.append(result["data"]["table_html"])
data = pd.DataFrame({"table_html": table_htmls, "labels": labels})
data = data.drop_duplicates()
labels = data["labels"]
return data, labels
data, labels = load_data(
"../data/labeling_output/table_relevance/exported_labels_is_compositional_Apr1121.json"
)
data_orig = data.copy()
data = data.drop_duplicates(subset="table_html")
len(data)
labels = labels[data_orig.index.isin(data.index)]
data.labels.value_counts() / len(data.labels)
```
About 70/30 neg/pos split
Some tables with no content (probably used for layout) throw an error when we attempt to parse them with Pandas, so we filter them out.
```
def filter_no_content_tables(data, labels):
pd_tables = []
for table in data["table_html"]:
try:
tbl_df = pd.read_html(table, index_col=0)[0]
pd_tables.append(tbl_df)
except ValueError:
pd_tables.append(None)
data = pd.DataFrame({"pd_tables": pd_tables, "labels": labels}).dropna(how="any")
labels = data["labels"]
return data, labels
data, labels = filter_no_content_tables(data, labels)
data["labels"] = data["labels"].map({"Y": 1, "N": 0})
import spacy
nlp = spacy.load("en_core_sci_lg")
from torch.utils.data import TensorDataset, DataLoader
from torch.nn import Conv2d, ReLU, MaxPool2d, Sigmoid, BCELoss, Linear
def table2tensor(df: pd.DataFrame):
df = df.T.reset_index().T
if df.index.name in df.columns:
df.index.name = None
df = df.reset_index()
df_vectors = df.applymap(lambda x: list(nlp(str(x)).vector))
vecs = torch.tensor(np.array(df_vectors.values.tolist()))
vecs = vecs.permute([2, 0, 1]) # put number of channels first
return vecs
X = []
y = []
error_df = None
for idx, row in tqdm(data.iterrows(), total=len(data)):
try:
vecs = table2tensor(row["pd_tables"])
label = torch.tensor(float(row["labels"]))
X.append(vecs)
y.append(label)
except Exception:
    error_df = row["pd_tables"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=42, test_size=0.995
)
train_dataset = TensorDataset(torch.stack(X_train), torch.stack(y_train))
val_dataset = TensorDataset(torch.stack(X_test), torch.stack(y_test))
train_dataloader = DataLoader(
train_dataset,
)
val_dataloader = DataLoader(
val_dataset,
)
class TCN(pl.LightningModule):
def __init__(
self,
cell_vector_dim=200,
vector_dim_reduced=20,
out_channels_1=10,
out_channels_2=3,
kernel_size=3,
n_class=2,
):
super().__init__()
# self.linear = Linear(cell_vector_dim, vector_dim_reduced)
self.conv1 = Conv2d(
cell_vector_dim,
out_channels_1,
kernel_size,
1,
padding=2,
)
self.relu1 = ReLU()
self.conv2 = Conv2d(
out_channels_1,
out_channels_2,
kernel_size,
1,
padding=2,
)
self.relu2 = ReLU()
self.cell_scores = Conv2d(out_channels_2, 1, kernel_size)
self.sigmoid = Sigmoid()
self.loss = BCELoss()
def forward(self, x):
# print(x.shape)
# x = self.linear(x)
# print(x.shape)
out = self.relu1(self.conv1(x))
out = self.relu2(self.conv2(out))
cell_scores = self.cell_scores(out)
out = self.sigmoid(torch.amax(cell_scores, dim=(1, 2, 3)))  # torch.max takes a single dim; torch.amax reduces over several
return out
def training_step(self, batch, batch_idx):
x, y = batch
out = self.forward(x)
loss = self.loss(out.squeeze(), y.squeeze())
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
out = self.forward(x)
val_loss = self.loss(out.squeeze(), y.squeeze())
self.log("val_loss", val_loss)
return val_loss
def configure_optimizers(self):
optimizer = torch.optim.RMSprop(self.parameters(), lr=1e-3)
return optimizer
linear = Linear(
200,
100,
)
tcn = TCN()
trainer = pl.Trainer(max_epochs=1000)
trainer.fit(tcn, train_dataloader, val_dataloader)
```
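The forward pass above pools the cell scores with one global max over the channel and spatial dimensions at once. Since `torch.max` accepts only a single `dim`, a multi-dimension reduction needs `torch.amax`; a minimal sketch with a toy tensor:

```python
import torch

# (batch, channel, rows, cols) toy cell-score tensor
scores = torch.arange(24.0).reshape(2, 1, 3, 4)
pooled = torch.amax(scores, dim=(1, 2, 3))  # one global max per batch element
print(pooled)  # tensor([11., 23.])
```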
# Model can overfit a single example
```
def process_data(data, labels):
indices = [" ".join(df.index.astype(str).str.lower()) for df in data["pd_tables"]]
columns = []
for df in data["pd_tables"]:
if isinstance(df.columns, pd.MultiIndex):
cols = [" ".join(x) for x in df.columns.to_flat_index()]
else:
cols = df.columns
columns.append(" ".join(pd.Series(cols).astype(str).str.lower()))
cell_contents = [
" ".join(x.astype(str).values.flatten()) for x in data["pd_tables"]
]
processed_data = pd.DataFrame(
{
"pd_tables": data["pd_tables"],
"indices": indices,
"columns": columns,
"cell_contents": cell_contents,
"labels": labels,
}
)
return processed_data
processed_data = process_data(data, labels)
```
# TFIDF featurization
```
def featurize_tfidf(processed_data):
vec1 = TfidfVectorizer(ngram_range=(1, 3), analyzer="word")
vec2 = TfidfVectorizer(ngram_range=(1, 3), analyzer="word")
vec3 = TfidfVectorizer()
vectorized_indices = vec1.fit_transform(processed_data["indices"]).toarray()
vectorized_columns = vec2.fit_transform(processed_data["columns"]).toarray()
vectorized_cells = vec3.fit_transform(processed_data["cell_contents"]).toarray()
features = np.concatenate([vectorized_indices, vectorized_columns], axis=1)
features = pd.DataFrame(
features,
columns=list(vec1.get_feature_names_out()) + list(vec2.get_feature_names_out())
# + list(vec3.get_feature_names_out()),
)
X = features
y = processed_data.labels.map({"Y": 1, "N": 0}).values
return X, y
X, y = featurize_tfidf(processed_data)
data.columns.isin(["pd_tables"])
def split_and_train(X, y, filter_features=False, heldout_test=False):
if heldout_test:
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=42, test_size=0.3
)
else:
X_train = X
y_train = y
X_test = None
if filter_features:
correlation = X_train.corrwith(pd.Series(y_train))
good_features = correlation.sort_values(ascending=False).head(1000).index
X_train = X_train[good_features]
if X_test is not None:
X_test = X_test[good_features]
rfc = RandomForestClassifier(max_depth=20)
mlp = MLPClassifier(
hidden_layer_sizes=(
300,
150,
50,
),
max_iter=2000,
early_stopping=False,
n_iter_no_change=500,
random_state=42,
)
dummy = DummyClassifier(strategy="most_frequent")
models = [dummy, rfc, mlp]
model_scores = {}
confusion_matrices = {}
pr_curves = {}
for model in models:
scores = cross_validate(
model,
X_train,
y_train,
scoring=[
"accuracy",
"precision",
"recall",
"f1",
"roc_auc",
"average_precision",
],
cv=StratifiedKFold(5),
return_train_score=True,
)
y_pred = cross_val_predict(
model,
X_train,
y_train,
cv=StratifiedKFold(
5,
),
)
y_pred_proba = cross_val_predict(
model,
X_train,
y_train,
cv=StratifiedKFold(
5,
),
method="predict_proba",
)
pr_curve = precision_recall_curve(y_train, y_pred_proba[:, 1])
cm = confusion_matrix(y_train, y_pred)
confusion_matrices[model.__class__.__name__] = cm
pr_curves[model.__class__.__name__] = pr_curve
model_scores[model.__class__.__name__] = scores
return model_scores, confusion_matrices, pr_curves
model_scores, confusion_matrices, pr_curves = split_and_train(X, y, filter_features=True)
# with a list of scorers, cross_validate returns per-metric keys such as "test_accuracy"
model_scores["DummyClassifier"]["test_accuracy"].mean()
model_scores["RandomForestClassifier"]["test_accuracy"].mean()
model_scores["MLPClassifier"]["test_accuracy"].mean()
```
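Note that when `cross_validate` is given a list of scorers, its result dict has one key per metric (`test_accuracy`, `test_f1`, …) rather than a single `test_score`. A minimal sketch on toy data (array sizes are illustrative):

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_validate

X_toy = np.random.rand(20, 3)
y_toy = np.array([0, 1] * 10)
scores = cross_validate(
    DummyClassifier(strategy="most_frequent"), X_toy, y_toy,
    scoring=["accuracy", "f1"], cv=2,
)
print(sorted(scores))  # ['fit_time', 'score_time', 'test_accuracy', 'test_f1']
```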
# SciSpacy featurization
```
import spacy
nlp = spacy.load("en_core_sci_lg")
def featurize_scispacy(processed_data):
indices_vectors = np.array([nlp(x).vector for x in processed_data["indices"]])
columns_vectors = np.array([nlp(x).vector for x in processed_data["columns"]])
features = np.concatenate([indices_vectors, columns_vectors], axis=1)
X = features
y = processed_data.labels.map({"Y": 1, "N": 0}).values
return X, y
X, y = featurize_scispacy(processed_data)
model_scores, confusion_matrices, pr_curves = split_and_train(X, y)
print(model_scores["DummyClassifier"]["test_accuracy"].mean())
print(model_scores["RandomForestClassifier"]["test_accuracy"].mean())
print(model_scores["MLPClassifier"]["test_accuracy"].mean())
```
# Tapas Featurization
```
from unidecode import unidecode
def unidecode_df(df):
df = df.astype(str)
df.columns = [unidecode(x) for x in df.columns.astype(str)]
df.index = [unidecode(x) for x in df.index.astype(str)]
df = df.applymap(unidecode)
return df
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
model = TapasModel.from_pretrained("google/tapas-base")
def encode_table_avg_tapas_hidden_state(table: pd.DataFrame):
inputs = tokenizer(
table=table,
queries=["Does this table contain food compositional measurements?"],
)
with torch.no_grad():
hidden = model(
input_ids=torch.tensor(inputs["input_ids"]),
attention_mask=torch.tensor(inputs["attention_mask"]),
token_type_ids=torch.tensor(inputs["token_type_ids"]),
)
vec = hidden["last_hidden_state"].mean(dim=1).flatten().detach().numpy()
return vec
def featurize_tapas(processed_data):
hiddens = []
ys = []
i = 0
for df, label in tqdm(
list(
zip(
processed_data["pd_tables"],
processed_data.labels.map({"Y": 1, "N": 0}).values,
)
)
):
df = _collapse_multiindex(df.astype(str))
# if df.index.dtype == "object":
df = df.reset_index()
df = df.fillna("-")
df = df.astype(str)
df = unidecode_df(df)
df.index = df.index.astype(int)
try:
hidden = encode_table_avg_tapas_hidden_state(df)
hiddens.append(hidden)
ys.append(label)
except Exception as e:
print(e)
print(df)
i += 1
X = np.array(hiddens)
y = np.array(ys)
return X, y
X, y = featurize_tapas(processed_data)
np.savetxt('X_tapas.csv',X,delimiter=',')
np.savetxt("y_tapas.csv", y, delimiter=",")
```
Model scores are not any better with the average Tapas encoding; we may need some sequential information.
```
model_scores, confusion_matrices, pr_curves = split_and_train(X, y)
print(model_scores["DummyClassifier"]["test_accuracy"].mean())
print(model_scores["RandomForestClassifier"]["test_accuracy"].mean())
print(model_scores["MLPClassifier"]["test_accuracy"].mean())
```
# SciBERT featurization
```
tokenizer = BertTokenizer.from_pretrained('allenai/scibert_scivocab_uncased')
model = BertModel.from_pretrained('allenai/scibert_scivocab_uncased')
list(processed_data[["indices", "columns"]].iterrows())
def featurize_scibert(processed_data):
indices_vectors = []
columns_vectors = []
labels = []
for idx, row in tqdm(processed_data.iterrows()):
try:
indices_inputs = tokenizer(row["indices"])
# truncate the token lists to BERT's 512-token limit *before* adding the batch dimension;
# slicing after unsqueeze(0) would slice the (size-1) batch dimension and do nothing
indices_hidden = model(
input_ids=torch.tensor(indices_inputs["input_ids"][:512]).unsqueeze(0),
attention_mask=torch.tensor(indices_inputs["attention_mask"][:512]).unsqueeze(0),
token_type_ids=torch.tensor(indices_inputs["token_type_ids"][:512]).unsqueeze(0),
)["last_hidden_state"][:, 0, :].flatten()
columns_inputs = tokenizer(row["columns"])
columns_hidden = model(
input_ids=torch.tensor(columns_inputs["input_ids"][:512]).unsqueeze(0),
attention_mask=torch.tensor(columns_inputs["attention_mask"][:512]).unsqueeze(0),
token_type_ids=torch.tensor(columns_inputs["token_type_ids"][:512]).unsqueeze(0),
)["last_hidden_state"][:, 0, :].flatten()
indices_vectors.append(indices_hidden.detach().numpy())
columns_vectors.append(columns_hidden.detach().numpy())
labels.append(row["labels"])
except Exception as e:
print(e)
print(row["indices"], row["columns"])
indices_vectors = np.array(indices_vectors)
columns_vectors = np.array(columns_vectors)
features = np.concatenate([indices_vectors, columns_vectors], axis=1)
X = features
y = pd.Series(labels).map({"Y": 1, "N": 0}).values
return X, y
X, y = featurize_scibert(processed_data)
```
Performance is about the same, maybe a couple percent higher than with the SciSpacy embeddings.
```
model_scores, confusion_matrices, pr_curves = split_and_train(X, y)
print(model_scores["DummyClassifier"]["test_accuracy"].mean())
print(model_scores["RandomForestClassifier"]["test_accuracy"].mean())
print(model_scores["MLPClassifier"]["test_accuracy"].mean())
processed_data["indices"][0]
```
# TSNE
```
tsne = TSNE()
X_tsne = pd.DataFrame(tsne.fit_transform(X), columns=["TSNE1", "TSNE2"])
sns.set_style("whitegrid")
X_tsne.plot.scatter(x="TSNE1", y="TSNE2", c=y, cmap="Paired")
idx = X_tsne["TSNE2"].abs() > 25
outliers = data.loc[idx.values, "pd_tables"]
outliers.values[2]
len(outliers)
```
## SciBERT fine-tuning
```
import torch
from datasets import Dataset
from torch.utils.data import DataLoader, TensorDataset
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
indices_toks = [tokenizer.encode(x) for x in processed_data["indices"]]
columns_toks = [tokenizer.encode(x) for x in processed_data["columns"]]
indices_toks = [torch.tensor(x) for x in indices_toks]
columns_toks = [torch.tensor(x) for x in columns_toks]
padded_indices_toks = pad_sequence(
indices_toks,
)
padded_columns_toks = pad_sequence(columns_toks)
padded_indices_toks = padded_indices_toks[:200, :]
```
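The slice `padded_indices_toks[:200, :]` above truncates along the sequence axis because `pad_sequence` defaults to `batch_first=False`, producing a `(max_len, batch)` tensor; a minimal sketch:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

seqs = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]
padded = pad_sequence(seqs)  # default batch_first=False -> (max_len, batch)
print(padded.shape)          # torch.Size([3, 2])
print(padded[:2, :].shape)   # truncation acts on the sequence dimension
```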
Distribution of the number of index tokens:
```
# pd.Series(indices_toks).apply(len).plot.hist()
```
Distribution of the number of column tokens:
```
# pd.Series(columns_toks).apply(len).plot.hist()
input_features = torch.cat((padded_indices_toks, padded_columns_toks)).permute(1, 0)
X_train, X_test, y_train, y_test = train_test_split(
input_features, y, random_state=42, test_size=0.3
)
train_dataset = TensorDataset(X_train, torch.tensor(y_train))
val_dataset = TensorDataset(X_test, torch.tensor(y_test))
train_dataloader = DataLoader(train_dataset, batch_size=8)
val_dataloader = DataLoader(val_dataset)
x, y = next(iter(train_dataloader))
class SciBertClassification(pl.LightningModule):
def __init__(self):
super().__init__()
self.scibert = BertForSequenceClassification.from_pretrained(
"allenai/scibert_scivocab_uncased"
)
self.loss = CrossEntropyLoss()
def forward(self, input_ids):
return self.scibert(input_ids)
def training_step(self, batch, batch_idx):
input_ids, labels = batch
out = self.scibert(input_ids)
logits = out.logits
loss = self.loss(logits, labels)
print(logits, labels)
self.log("train_loss", loss)
return loss
def configure_optimizers(self):
# optimizer = torch.optim.Adam(self.parameters(), lr=1e-5)
optimizer = torch.optim.Adam(
[
param[1]
for param in self.named_parameters()
if "scibert.classifier" in param[0]
],
lr=1e-5,
)
return optimizer
def validation_step(self, batch, batch_idx):
input_ids, labels = batch
out = self.scibert(input_ids)
logits = out.logits
loss = self.loss(logits, labels)
self.log("val_loss", loss)
import wandb
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
# wandb.finish()
model = SciBertClassification()
trainer = pl.Trainer(
max_epochs=5,
checkpoint_callback=False,
callbacks=[EarlyStopping(monitor="val_loss")],
logger=WandbLogger(),
log_every_n_steps=1,
)
trainer.fit(model, train_dataloader, val_dataloader)
```
# Experiments reported in "[Domain Conditional Predictors for Domain Adaptation](http://preregister.science/papers_20neurips/45_paper.pdf)"
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Preamble
```
#@test {"skip": true}
!pip install dm-sonnet==2.0.0 --quiet
!pip install tensorflow_addons==0.12 --quiet
#@test {"output": "ignore"}
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_addons as tfa
try:
import sonnet.v2 as snt
except ModuleNotFoundError:
import sonnet as snt
```
#### Colab tested with:
```
TensorFlow version: 2.4.1
Sonnet version: 2.0.0
TensorFlow Addons version: 0.12.0
```
```
#@test {"skip": true}
print(" TensorFlow version: {}".format(tf.__version__))
print(" Sonnet version: {}".format(snt.__version__))
print("TensorFlow Addons version: {}".format(tfa.__version__))
```
# Data preparation
Define 4 domains by transforming the data on the fly. Current transformations are rotation, blurring, flipping colors between background and digits, and horizontal flip.
```
#@test {"output": "ignore"}
batch_size = 100
NUM_DOMAINS = 4
def process_batch_train(images, labels):
images = tf.image.grayscale_to_rgb(images)
images = tf.cast(images, dtype=tf.float32)
images = images / 255.
domain_index_candidates = tf.convert_to_tensor(list(range(NUM_DOMAINS)))
samples = tf.random.categorical(tf.math.log([[1/NUM_DOMAINS for i in range(NUM_DOMAINS)]]), 1) # note log-prob
domain_index=domain_index_candidates[tf.cast(samples[0][0], dtype=tf.int64)]
if tf.math.equal(domain_index, tf.constant(0)):
images = tfa.image.rotate(images, np.pi/3)
elif tf.math.equal(domain_index, tf.constant(1)):
images = tfa.image.gaussian_filter2d(images, filter_shape=[8,8])
elif tf.math.equal(domain_index, tf.constant(2)):
images = tf.ones_like(images) - images
elif tf.math.equal(domain_index, tf.constant(3)):
images = tf.image.flip_left_right(images)
domain_label = tf.cast(domain_index, tf.int64)
return images, labels, domain_label
def process_batch_test(images, labels):
images = tf.image.grayscale_to_rgb(images)
images = tf.cast(images, dtype=tf.float32)
images = images / 255.
return images, labels
def mnist(split, multi_domain_test=False):
dataset = tfds.load("mnist", split=split, as_supervised=True)
if split == "train":
process_batch = process_batch_train
else:
if multi_domain_test:
process_batch = process_batch_train
else:
process_batch = process_batch_test
dataset = dataset.map(process_batch)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
dataset = dataset.cache()
return dataset
mnist_train = mnist("train").shuffle(1000)
mnist_test = mnist("test")
mnist_test_multidomain = mnist("test", multi_domain_test=True)
```
Look at samples from the training domains. Domain labels are assigned as follows: Rotation → 0, Blurring → 1, Color flipping → 2, Horizontal flip → 3.
```
#@test {"skip": true}
import matplotlib.pyplot as plt
images, label, domain_label = next(iter(mnist_train))
print(label[0], domain_label[0])
plt.imshow(images[0]);
```
# Baseline 1: Unconditional model
A baseline model is defined below and referred to as unconditional since it does not take domain labels into account in any way.
```
class M_unconditional(snt.Module):
def __init__(self):
super(M_unconditional, self).__init__()
self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1")
self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2")
self.flatten = snt.Flatten()
self.logits = snt.Linear(10, name="logits")
def __call__(self, images):
output = tf.nn.relu(self.hidden1(images))
output = tf.nn.relu(self.hidden2(output))
output = self.flatten(output)
output = self.logits(output)
return output
m_unconditional = M_unconditional()
```
Training of the baseline:
```
#@test {"output": "ignore"}
opt_unconditional = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
loss_log_unconditional = []
def step(images, labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
logits_unconditional = m_unconditional(images)
loss_unconditional = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_unconditional,
labels=labels)
loss_unconditional = tf.reduce_mean(loss_unconditional)
params_unconditional = m_unconditional.trainable_variables
grads_unconditional = tape.gradient(loss_unconditional, params_unconditional)
opt_unconditional.apply(grads_unconditional, params_unconditional)
return loss_unconditional
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
loss_unconditional = step(images, labels)
loss_log_unconditional.append(loss_unconditional.numpy())
print("\n\nFinal loss: {}".format(loss_log_unconditional[-1]))
REDUCTION_FACTOR = 0.2 ## Factor in [0,1] used to check whether the training loss reduces during training
## Checks whether the training loss reduces
assert loss_log_unconditional[-1] < REDUCTION_FACTOR*loss_log_unconditional[0]
```
# Baseline 2: Domain invariant representations
A DANN-like model in which the adversarial domain discriminator is replaced by a domain classifier, with an entropy term used to induce invariance across the training domains.
```
#@test {"skip": true}
class DANN_task(snt.Module):
def __init__(self):
super(DANN_task, self).__init__()
self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1")
self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2")
self.flatten = snt.Flatten()
self.logits = snt.Linear(10, name="logits")
def __call__(self, images):
output = tf.nn.relu(self.hidden1(images))
output = tf.nn.relu(self.hidden2(output))
z = self.flatten(output)
output = self.logits(z)
return output, z
#@test {"skip": true}
class DANN_domain(snt.Module):
def __init__(self):
super(DANN_domain, self).__init__()
self.logits = snt.Linear(NUM_DOMAINS, name="logits")
def __call__(self, z):
output = self.logits(z)
return output
#@test {"skip": true}
m_DANN_task = DANN_task()
m_DANN_domain = DANN_domain()
```
Training of the DANN baseline
```
#@test {"skip": true}
opt_task = snt.optimizers.SGD(learning_rate=0.01)
opt_domain = snt.optimizers.SGD(learning_rate=0.01)
domain_loss_weight = 0.2 ## Hyperparameter - factor to be multiplied by the domain entropy term when training the task classifier
num_epochs = 20 ## Doubled the number of epochs to train the task classifier for as many iterations as the other methods since we have alternate updates
loss_log_dann = {'task_loss':[],'domain_loss':[]}
number_of_iterations = 0
def step(images, labels, domain_labels, iteration_count):
"""Performs one optimizer step on a single mini-batch."""
if iteration_count%2==0: ## Alternate between training the task classifier and the domain classifier
with tf.GradientTape() as tape:
logits_DANN_task, z_DANN = m_DANN_task(images)
logist_DANN_domain = m_DANN_domain(z_DANN)
loss_DANN_task = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_DANN_task,
labels=labels)
loss_DANN_domain = tf.nn.softmax_cross_entropy_with_logits(logits=logist_DANN_domain,
labels=1/NUM_DOMAINS*tf.ones_like(logist_DANN_domain)) ## Negative entropy of P(Y|X) measured as the cross-entropy against the uniform dist.
loss_DANN = tf.reduce_mean(loss_DANN_task + domain_loss_weight*loss_DANN_domain)
params_DANN = m_DANN_task.trainable_variables
grads_DANN = tape.gradient(loss_DANN, params_DANN)
opt_task.apply(grads_DANN, params_DANN)
return 'task_loss', loss_DANN
else:
with tf.GradientTape() as tape:
_, z_DANN = m_DANN_task(images)
logist_DANN_domain_classifier = m_DANN_domain(z_DANN)
loss_DANN_domain_classifier = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logist_DANN_domain_classifier,
labels=domain_labels)
loss_DANN_domain_classifier = tf.reduce_mean(loss_DANN_domain_classifier)
params_DANN_domain_classifier = m_DANN_domain.trainable_variables
grads_DANN_domain_classifier = tape.gradient(loss_DANN_domain_classifier, params_DANN_domain_classifier)
opt_domain.apply(grads_DANN_domain_classifier, params_DANN_domain_classifier)
return 'domain_loss', loss_DANN_domain_classifier
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
number_of_iterations += 1
loss_tag, loss_dann = step(images, labels, domain_labels, number_of_iterations)
loss_log_dann[loss_tag].append(loss_dann.numpy())
print("\n\nFinal losses: {} - {}, {} - {}".format('task_loss', loss_log_dann['task_loss'][-1], 'domain_loss', loss_log_dann['domain_loss'][-1]))
```
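The entropy term above can be checked numerically: the cross-entropy of the predicted softmax against the uniform distribution is minimized (at log K) exactly when the prediction is uniform, so minimizing it pushes the features toward being domain-uninformative. A minimal NumPy sketch (the helper name `ce_against_uniform` is ours):

```python
import numpy as np

def ce_against_uniform(logits):
    # -sum_k (1/K) * log softmax(logits)_k, mirroring
    # softmax_cross_entropy_with_logits with uniform labels
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -(np.log(p) / len(logits)).sum()

print(ce_against_uniform(np.zeros(4)))                     # uniform prediction: log 4, the minimum
print(ce_against_uniform(np.array([5.0, 0.0, 0.0, 0.0])))  # confident prediction: larger
```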
# Definition of our models
The models for our proposed setting are defined below.
* The FiLM layer simply projects z onto 2 tensors (independent dense layers for each projection) matching the shape of the features. Each such tensor is used for element-wise multiplication and addition with the input features.
* m_domain corresponds to a domain classifier. It outputs the flattened features of its conv. layer to be used as z, as well as a set of logits over the set of train domains.
* m_task is the main classifier and it contains FiLM layers that take z as input. Its output corresponds to the set of logits over the labels.
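The FiLM modulation described above amounts to `W * features + B` with `W` and `B` predicted from z; a shape-level NumPy sketch (the dimensions are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 28, 28, 10))  # (batch, H, W, channels)
z = rng.normal(size=(4, 64))                 # conditioning vector per example

# two independent linear projections of z, reshaped to the feature-map shape
proj_W = rng.normal(size=(64, 28 * 28 * 10))
proj_B = rng.normal(size=(64, 28 * 28 * 10))
W = (z @ proj_W).reshape(4, 28, 28, 10)
B = (z @ proj_B).reshape(4, 28, 28, 10)

modulated = W * features + B                 # element-wise FiLM modulation
print(modulated.shape)  # (4, 28, 28, 10)
```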
```
#@test {"skip": true}
class FiLM(snt.Module):
def __init__(self, features_shape):
super(FiLM, self).__init__()
self.features_shape = features_shape
target_dimension = np.prod(features_shape)
self.hidden_W = snt.Linear(output_size=target_dimension)
self.hidden_B = snt.Linear(output_size=target_dimension)
def __call__(self, features, z):
W = snt.reshape(self.hidden_W(z), output_shape=self.features_shape)
B = snt.reshape(self.hidden_B(z), output_shape=self.features_shape)
output = W*features+B
return output
#@test {"skip": true}
class M_task(snt.Module):
def __init__(self):
super(M_task, self).__init__()
self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1")
self.film1 = FiLM(features_shape=[28,28,10])
self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2")
self.film2 = FiLM(features_shape=[28,28,20])
self.flatten = snt.Flatten()
self.logits = snt.Linear(10, name="logits")
def __call__(self, images, z):
output = tf.nn.relu(self.hidden1(images))
output = self.film1(output,z)
output = tf.nn.relu(self.hidden2(output))
output = self.film2(output,z)
output = self.flatten(output)
output = self.logits(output)
return output
#@test {"skip": true}
class M_domain(snt.Module):
def __init__(self):
super(M_domain, self).__init__()
self.hidden = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden")
self.flatten = snt.Flatten()
self.logits = snt.Linear(NUM_DOMAINS, name="logits")
def __call__(self, images):
output = tf.nn.relu(self.hidden(images))
z = self.flatten(output)
output = self.logits(z)
return output, z
#@test {"skip": true}
m_task = M_task()
m_domain = M_domain()
#@test {"skip": true}
images, labels = next(iter(mnist_test))
domain_logits, z = m_domain(images)
logits = m_task(images, z)
prediction = tf.argmax(logits[0]).numpy()
actual = labels[0].numpy()
print("Predicted class: {} actual class: {}".format(prediction, actual))
plt.imshow(images[0])
```
# Training of the proposed model
```
#@test {"skip": true}
from tqdm import tqdm
# MNIST training set has 60k images.
num_images = 60000
def progress_bar(generator):
return tqdm(
generator,
unit='images',
unit_scale=batch_size,
total=(num_images // batch_size) * num_epochs)
#@test {"skip": true}
opt = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
loss_log = {'total_loss':[], 'task_loss':[], 'domain_loss':[]}
def step(images, labels, domain_labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
domain_logits, z = m_domain(images)
logits = m_task(images, z)
loss_task = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
labels=labels)
loss_domain = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=domain_logits,
labels=domain_labels)
loss = loss_task + loss_domain
loss = tf.reduce_mean(loss)
loss_task = tf.reduce_mean(loss_task)
loss_domain = tf.reduce_mean(loss_domain)
params = m_task.trainable_variables + m_domain.trainable_variables
grads = tape.gradient(loss, params)
opt.apply(grads, params)
return loss, loss_task, loss_domain
for images, labels, domain_labels in progress_bar(mnist_train.repeat(num_epochs)):
loss, loss_task, loss_domain = step(images, labels, domain_labels)
loss_log['total_loss'].append(loss.numpy())
loss_log['task_loss'].append(loss_task.numpy())
loss_log['domain_loss'].append(loss_domain.numpy())
print("\n\nFinal total loss: {}".format(loss.numpy()))
print("\n\nFinal task loss: {}".format(loss_task.numpy()))
print("\n\nFinal domain loss: {}".format(loss_domain.numpy()))
```
# Ablation 1: Learned domain-wise context variable z
Here we consider a case where the context variables z used for conditioning are learned directly from data, and the domain predictor is discarded. **This only allows for in-domain prediction though**.
```
#@test {"skip": true}
class M_learned_z(snt.Module):
def __init__(self):
super(M_learned_z, self).__init__()
self.context = snt.Embed(vocab_size=NUM_DOMAINS, embed_dim=128)
self.hidden1 = snt.Conv2D(output_channels=10, kernel_shape=5, name="hidden1")
self.film1 = FiLM(features_shape=[28,28,10])
self.hidden2 = snt.Conv2D(output_channels=20, kernel_shape=5, name="hidden2")
self.film2 = FiLM(features_shape=[28,28,20])
self.flatten = snt.Flatten()
self.logits = snt.Linear(10, name="logits")
def __call__(self, images, domain_labels):
z = self.context(domain_labels)
output = tf.nn.relu(self.hidden1(images))
output = self.film1(output,z)
output = tf.nn.relu(self.hidden2(output))
output = self.film2(output,z)
output = self.flatten(output)
output = self.logits(output)
return output
#@test {"skip": true}
m_learned_z = M_learned_z()
#@test {"skip": true}
opt_learned_z = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
loss_log_learned_z = []
def step(images, labels, domain_labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
logits_learned_z = m_learned_z(images, domain_labels)
loss_learned_z = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_learned_z,
labels=labels)
loss_learned_z = tf.reduce_mean(loss_learned_z)
params_learned_z = m_learned_z.trainable_variables
grads_learned_z = tape.gradient(loss_learned_z, params_learned_z)
opt_learned_z.apply(grads_learned_z, params_learned_z)
return loss_learned_z
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
loss_learned_z = step(images, labels, domain_labels)
loss_log_learned_z.append(loss_learned_z.numpy())
print("\n\nFinal loss: {}".format(loss_log_learned_z[-1]))
```
# Ablation 2: Dropping the domain classification term of the loss
We consider an ablation where the same models as in our conditional predictor are used, but training is carried out with the classification loss only. This gives us a model with the same capacity as ours but no explicit mechanism to account for domain variations in train data. The goal of this ablation is to understand whether the improvement might be simply coming from the added capacity rather than the conditional modeling.
```
#@test {"skip": true}
m_task_ablation = M_task()
m_domain_ablation = M_domain()
m_DANN_ablation = DANN_domain() ## Used for evaluating how domain dependent the representations are
#@test {"skip": true}
opt_ablation = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
loss_log_ablation = []
def step(images, labels, domain_labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
domain_logits_ablation, z = m_domain_ablation(images)
logits_ablation = m_task_ablation(images, z)
loss_ablation = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_ablation,
labels=labels)
loss_ablation = tf.reduce_mean(loss_ablation)
params_ablation = m_task_ablation.trainable_variables + m_domain_ablation.trainable_variables
grads_ablation = tape.gradient(loss_ablation, params_ablation)
opt_ablation.apply(grads_ablation, params_ablation)
return loss_ablation
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
loss_ablation = step(images, labels, domain_labels)
loss_log_ablation.append(loss_ablation.numpy())
print("\n\nFinal task loss: {}".format(loss_ablation.numpy()))
#@test {"skip": true}
opt_ablation_domain_classifier = snt.optimizers.SGD(learning_rate=0.01)
num_epochs = 10
log_loss_ablation_domain_classification = []
def step(images, labels, domain_labels):
"""Performs one optimizer step on a single mini-batch."""
with tf.GradientTape() as tape:
_, z = m_domain_ablation(images)
logits_ablation_domain_classification = m_DANN_ablation(z)
loss_ablation_domain_classification = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_ablation_domain_classification,
labels=domain_labels)
loss_ablation_domain_classification = tf.reduce_mean(loss_ablation_domain_classification)
params_ablation_domain_classification = m_DANN_ablation.trainable_variables
grads_ablation_domain_classification = tape.gradient(loss_ablation_domain_classification, params_ablation_domain_classification)
opt_ablation_domain_classifier.apply(grads_ablation_domain_classification, params_ablation_domain_classification)
return loss_ablation_domain_classification
for images, labels, domain_labels in mnist_train.repeat(num_epochs):
loss_ablation_domain_classifier = step(images, labels, domain_labels)
log_loss_ablation_domain_classification.append(loss_ablation_domain_classifier.numpy())
print("\n\nFinal domain classification loss: {}".format(loss_ablation_domain_classifier.numpy()))
```
# Results
## Plots of training losses
```
#@test {"skip": true}
f = plt.figure(figsize=(32,8))
ax = f.add_subplot(1,3,1)
ax.plot(loss_log['total_loss'])
ax.set_title('Total Loss')
ax = f.add_subplot(1,3,2)
ax.plot(loss_log['task_loss'])
ax.set_title('Task loss')
ax = f.add_subplot(1,3,3)
ax.plot(loss_log['domain_loss'])
ax.set_title('Domain loss')
#@test {"skip": true}
f = plt.figure(figsize=(8,8))
ax = f.add_axes([1,1,1,1])
ax.plot(loss_log_unconditional)
ax.set_title('Unconditional baseline - Train Loss')
#@test {"skip": true}
f = plt.figure(figsize=(16,8))
ax = f.add_subplot(1,2,1)
ax.plot(loss_log_dann['task_loss'])
ax.set_title('Domain invariant baseline - Task loss (Class. + -Entropy)')
ax = f.add_subplot(1,2,2)
ax.plot(loss_log_dann['domain_loss'])
ax.set_title('Domain invariant baseline - Domain classification loss')
#@test {"skip": true}
f = plt.figure(figsize=(16,8))
ax = f.add_subplot(1,2,1)
ax.plot(loss_log_learned_z)
ax.set_title('Ablation 1 - Task loss')
ax = f.add_subplot(1,2,2)
ax.plot(loss_log_ablation)
ax.set_title('Ablation 2 - Task loss')
#@test {"skip": true}
f = plt.figure(figsize=(8,8))
ax = f.add_axes([1,1,1,1])
ax.plot(log_loss_ablation_domain_classification)
ax.set_title('Ablation 2: Domain classification - Train Loss')
```
## Out-of-domain evaluations
The original MNIST test set, without any transformations, is used here.
```
#@test {"skip": true}
total = 0
correct = 0
correct_unconditional = 0
correct_adversarial = 0
correct_ablation2 = 0 ## The model corresponding to ablation 1 can only be used with in-domain data (with domain labels)
for images, labels in mnist_test:
domain_logits, z = m_domain(images)
logits = m_task(images, z)
logits_unconditional = m_unconditional(images)
logits_adversarial, _ = m_DANN_task(images)
domain_logits_ablation, z_ablation = m_domain_ablation(images)
logits_ablation2 = m_task_ablation(images, z_ablation)
predictions = tf.argmax(logits, axis=1)
predictions_unconditional = tf.argmax(logits_unconditional, axis=1)
predictions_adversarial = tf.argmax(logits_adversarial, axis=1)
predictions_ablation2 = tf.argmax(logits_ablation2, axis=1)
correct += tf.math.count_nonzero(tf.equal(predictions, labels))
correct_unconditional += tf.math.count_nonzero(tf.equal(predictions_unconditional, labels))
correct_adversarial += tf.math.count_nonzero(tf.equal(predictions_adversarial, labels))
correct_ablation2 += tf.math.count_nonzero(tf.equal(predictions_ablation2, labels))
total += images.shape[0]
print("Got %d/%d (%.02f%%) correct" % (correct, total, correct / total * 100.))
print("Unconditional baseline perf.: %d/%d (%.02f%%) correct" % (correct_unconditional, total, correct_unconditional / total * 100.))
print("Adversarial baseline perf.: %d/%d (%.02f%%) correct" % (correct_adversarial, total, correct_adversarial / total * 100.))
print("Ablation 2 perf.: %d/%d (%.02f%%) correct" % (correct_ablation2, total, correct_ablation2 / total * 100.))
```
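The accuracy bookkeeping in the loop above (`tf.equal` plus `tf.math.count_nonzero`) reduces to a simple count; a NumPy sketch of the same arithmetic on made-up predictions:

```python
import numpy as np

# Toy predictions/labels standing in for the argmax outputs above.
predictions = np.array([1, 2, 3, 3])
labels = np.array([1, 2, 0, 3])

# Same counting as tf.math.count_nonzero(tf.equal(predictions, labels)).
correct = np.count_nonzero(predictions == labels)
accuracy = correct / len(labels) * 100.
print("Got %d/%d (%.02f%%) correct" % (correct, len(labels), accuracy))
```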
## In-domain evaluations and domain prediction
The same transformations applied to the training data are applied at test time
```
#@test {"skip": true}
n_repetitions = 10 ## Going over the test set multiple times to account for multiple transformations
total = 0
correct_class = 0
correct_unconditional = 0
correct_adversarial = 0
correct_ablation1 = 0
correct_ablation2 = 0
correct_domain = 0
correct_domain_adversarial = 0
correct_domain_ablation = 0
for images, labels, domain_labels in mnist_test_multidomain.repeat(n_repetitions):
domain_logits, z = m_domain(images)
class_logits = m_task(images, z)
logits_unconditional = m_unconditional(images)
logits_adversarial, z_adversarial = m_DANN_task(images)
domain_logits_adversarial = m_DANN_domain(z_adversarial)
logits_ablation1 = m_learned_z(images, domain_labels)
_, z_ablation = m_domain_ablation(images)
domain_logits_ablation = m_DANN_ablation(z_ablation)
logits_ablation2 = m_task_ablation(images, z_ablation)
predictions_class = tf.argmax(class_logits, axis=1)
predictions_unconditional = tf.argmax(logits_unconditional, axis=1)
predictions_adversarial = tf.argmax(logits_adversarial, axis=1)
predictions_ablation1 = tf.argmax(logits_ablation1, axis=1)
predictions_ablation2 = tf.argmax(logits_ablation2, axis=1)
predictions_domain = tf.argmax(domain_logits, axis=1)
predictions_domain_adversarial = tf.argmax(domain_logits_adversarial, axis=1)
predictions_domain_ablation = tf.argmax(domain_logits_ablation, axis=1)
correct_class += tf.math.count_nonzero(tf.equal(predictions_class, labels))
correct_unconditional += tf.math.count_nonzero(tf.equal(predictions_unconditional, labels))
correct_adversarial += tf.math.count_nonzero(tf.equal(predictions_adversarial, labels))
correct_ablation1 += tf.math.count_nonzero(tf.equal(predictions_ablation1, labels))
correct_ablation2 += tf.math.count_nonzero(tf.equal(predictions_ablation2, labels))
correct_domain += tf.math.count_nonzero(tf.equal(predictions_domain, domain_labels))
correct_domain_adversarial += tf.math.count_nonzero(tf.equal(predictions_domain_adversarial, domain_labels))
correct_domain_ablation += tf.math.count_nonzero(tf.equal(predictions_domain_ablation, domain_labels))
total += images.shape[0]
print("In domain unconditional baseline perf.: %d/%d (%.02f%%) correct" % (correct_unconditional, total, correct_unconditional / total * 100.))
print("In domain adversarial baseline perf.: %d/%d (%.02f%%) correct" % (correct_adversarial, total, correct_adversarial / total * 100.))
print("In domain ablation 1: %d/%d (%.02f%%) correct" % (correct_ablation1, total, correct_ablation1 / total * 100.))
print("In domain ablation 2: %d/%d (%.02f%%) correct" % (correct_ablation2, total, correct_ablation2 / total * 100.))
print("In domain class predictions: Got %d/%d (%.02f%%) correct" % (correct_class, total, correct_class / total * 100.))
print("\n\nDomain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain, total, correct_domain / total * 100.))
print("Adversarial baseline domain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain_adversarial, total, correct_domain_adversarial / total * 100.))
print("Ablation 2 domain predictions: Got %d/%d (%.02f%%) correct" % (correct_domain_ablation, total, correct_domain_ablation / total * 100.))
#@test {"skip": true}
def sample(correct, rows, cols):
n = 0
f, ax = plt.subplots(rows, cols)
if rows > 1:
ax = tf.nest.flatten([tuple(ax[i]) for i in range(rows)])
f.set_figwidth(14)
f.set_figheight(4 * rows)
for images, labels in mnist_test:
domain_logits, z = m_domain(images)
logits = m_task(images, z)
predictions = tf.argmax(logits, axis=1)
eq = tf.equal(predictions, labels)
for i, x in enumerate(eq):
if x.numpy() == correct:
label = labels[i]
prediction = predictions[i]
image = images[i]
ax[n].imshow(image)
ax[n].set_title("Prediction:{}\nActual:{}".format(prediction, label))
n += 1
if n == (rows * cols):
break
if n == (rows * cols):
break
```
## Samples and corresponding predictions
```
#@test {"skip": true}
sample(correct=True, rows=1, cols=5)
#@test {"skip": true}
sample(correct=False, rows=2, cols=5)
```
| github_jupyter |
# Collecting hotel key data
```
from bs4 import BeautifulSoup
from selenium import webdriver
from tqdm import tqdm_notebook
import time
global options
options = webdriver.ChromeOptions()
options.add_argument('headless')
options.add_argument('window-size=1920x1080')
options.add_argument('disable-gpu')
options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36")
# Map each region name to its Naver hotel list URL
local_url={
'서울':'https://hotel.naver.com/hotels/list?destination=place:Seoul',
'부산':'https://hotel.naver.com/hotels/list?destination=place:Busan_Province',
'속초':'https://hotel.naver.com/hotels/list?destination=place:Sokcho',
'경주':'https://hotel.naver.com/hotels/list?destination=place:Gyeongju',
'강릉':'https://hotel.naver.com/hotels/list?destination=place:Gangneung',
'여수':'https://hotel.naver.com/hotels/list?destination=place:Yeosu',
'수원':'https://hotel.naver.com/hotels/list?destination=place:Suwon',
'제주':'https://hotel.naver.com/hotels/list?destination=place:Jeju_Province',
'인천':'https://hotel.naver.com/hotels/list?destination=place:Incheon_Metropolitan_City',
'대구':'https://hotel.naver.com/hotels/list?destination=place:Daegu_Metropolitan_City',
'전주':'https://hotel.naver.com/hotels/list?destination=place:Jeonju',
'광주':'https://hotel.naver.com/hotels/list?destination=place:Gwangju_Metropolitan_City',
'울산':'https://hotel.naver.com/hotels/list?destination=place:Ulsan_Metropolitan_City',
'평창':'https://hotel.naver.com/hotels/list?destination=place:Pyeongchang',
'군산':'https://hotel.naver.com/hotels/list?destination=place:Gunsan',
'양양':'https://hotel.naver.com/hotels/list?destination=place:Yangyang',
'춘천':'https://hotel.naver.com/hotels/list?destination=place:Chuncheon',
'대전':'https://hotel.naver.com/hotels/list?destination=place:Daejeon_Metropolitan_City',
'천안':'https://hotel.naver.com/hotels/list?destination=place:Cheonan',
'세종':'https://hotel.naver.com/hotels/list?destination=place:Sejong',
'청주':'https://hotel.naver.com/hotels/list?destination=place:Cheongju'
}
def get_hotel_data(url):
    # Open the Chrome driver
driver= webdriver.Chrome('chromedriver.exe', options=options)
driver.get(url)
req= driver.page_source
soup=BeautifulSoup(req,'html.parser')
result=[]
    driver.implicitly_wait(5) # wait up to 5 seconds
detail_keys=soup.select('ul.lst_hotel > li.ng-scope')
result.extend([ key['id'] for key in detail_keys])
    # Click the 'next' button.
driver.find_element_by_xpath('/html/body/div/div/div[1]/div[2]/div[6]/div[2]/a[2]').click()
    # Close the browser window.
driver.quit()
return result
hotel_keys=[]
for local in tqdm_notebook(local_url.keys()):
hotel_keys.extend(get_hotel_data(local_url[local]))
hotel_keys
len(hotel_keys)
```
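The CSS selector used in `get_hotel_data` can be exercised offline against a static HTML snippet, without Selenium; a minimal sketch (the markup below is a made-up stand-in for the Naver hotel list page):

```python
from bs4 import BeautifulSoup

html = '''
<ul class="lst_hotel">
  <li class="ng-scope" id="hotel:Example_A">A</li>
  <li class="ng-scope" id="hotel:Example_B">B</li>
</ul>'''
soup = BeautifulSoup(html, 'html.parser')

# Same selector as the notebook: direct <li class="ng-scope"> children
# of <ul class="lst_hotel">, keeping each element's id attribute.
keys = [li['id'] for li in soup.select('ul.lst_hotel > li.ng-scope')]
print(keys)  # -> ['hotel:Example_A', 'hotel:Example_B']
```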
# Collecting hotel data
```
from bs4 import BeautifulSoup
from selenium import webdriver
from tqdm import tqdm_notebook
import getHotelKeys
import time
global local, hotels
localCode = {
'강원도': 1,
'경기도': 2,
'경상남도': 3,
'경남': 3,
'경상북도': 4,
'경북': 4,
'광주': 5,
'광주광역시': 5,
'대구': 6,
'대전': 7,
'부산': 8,
'부산광역시': 8,
'서울': 9,
'서울특별시': 9,
'세종': 10,
'세종특별시': 10,
'세종특별자치시': 10,
'인천': 11,
'인천광역시': 11,
'울산': 12,
'울산광역시': 12,
'전라남도': 13,
'전남': 13,
'전라북도': 14,
'전북': 14,
'제주도': 15,
'제주': 15,
'제주특별자치도': 15,
'충청남도': 16,
'충남': 16,
'충청북도': 17,
'충북': 17
}
def get_hotel_info(key):
try:
one_hotel_info = {}
driver = webdriver.Chrome('chromedriver.exe', options=options)
        # Hotel detail page URL
url = 'https://hotel.naver.com/hotels/item?hotelId='+key
driver.get(url)
        time.sleep(1) # wait 1 second for the page to load
req = driver.page_source
soup = BeautifulSoup(req, 'html.parser')
        # Hotel name
hotel_name = soup.select_one('div.info_wrap > strong.hotel_name').text
one_hotel_info['BO_TITLE'] = hotel_name
        # Hotel star rating
hotel_rank = soup.find('span', class_='grade').text
if hotel_rank in ['1성급', '2성급', '3성급', '4성급', '5성급']:
one_hotel_info['HOTEL_RANK']=int(hotel_rank[0])
else:
one_hotel_info['HOTEL_RANK']=None
        # Hotel address
hotel_addr_list= [addr for addr in soup.select_one('p.addr').text.split(', ')]
one_hotel_info['HOTEL_ADDRESS']=hotel_addr_list[0]
one_hotel_info['HOTEL_LOCAL_CODE']=localCode[hotel_addr_list[-2]]
        # Hotel description
        driver.find_element_by_class_name('more').click() # Click the 'more' button
        time.sleep(1) # wait 1 second for the expanded content to load
hotel_content = soup.select_one('div.desc_wrap.ng-scope > p.txt.ng-binding')
one_hotel_info['BO_CONTENT'] = hotel_content.text
        # Hotel options
hotel_options = [option.text for option in soup.select('i.feature')]
one_hotel_info['HOTEL_REAL_OPTIONS'] = hotel_options
driver.quit()
return one_hotel_info
except Exception:
driver.quit()
pass
hotels=[]
for key in tqdm_notebook( hotel_keys):
hotels.append(get_hotel_info(key))
hotels
len(hotels)
```
# Insert 10 records into the Oracle table
```
hotels[3]
hotel_keys[3]
hotels[44]
hotel_keys[44]
```
https://hotel.naver.com/hotels/item?hotelId=hotel:Marinabay_Sokcho
```
hotels[55]
hotel_keys[55]
```
https://hotel.naver.com/hotels/item?hotelId=hotel:Hyundai_Soo_Resort_Sokcho
```
hotels[120]
hotel_keys[120]
```
https://hotel.naver.com/hotels/item?hotelId=hotel:Ramada_Plaza_Suwon_Hotel
```
hotels[221]
hotel_keys[221]
```
https://hotel.naver.com/hotels/item?hotelId=hotel:Holiday_Inn_Gwangju
| github_jupyter |
```
# Spark Imports
from pyspark.sql import SparkSession
import pyspark.sql.functions as f
from pyspark.sql.types import (StructField,StringType,
IntegerType,StructType)
# Regular Python imports.
from datetime import datetime
import random
# Initialize the Random Number Generator from the current time.
# (random.seed requires an int/float/str, so pass a timestamp rather than a datetime.)
random.seed(datetime.now().timestamp())
# Start the Spark session.
spark = SparkSession.builder.appName('aggs').getOrCreate()
# We can infer the schema/types (only in CSV), and header=True tells Spark
# that the first row contains the column names.
df = spark.read.csv('Data/sales_info.csv',
inferSchema=True, header=True)
# Number of rows we read.
df.count()
# Inspect the inferred schema.
df.printSchema()
df.show()
# This returns a GroupedData object.
grouped = df.groupBy("Company")
# You can call several functions on grouped data, but mathematical ones
# will only work on numerical columns. You can't call .show() directly
# on data that has been grouped, you have to run an aggregate function.
grouped.mean().show()
# For grouped data you can call max, min, count, mean, avg, etc.
grouped.count().show()
random_udf = f.udf(lambda value: random.randint(0, value), IntegerType())
# In each row lets put an extra column where the value is a random
# integer value between 0 and the value of the 'Sales' column.
new_grouped = (df.withColumn('Random', random_udf(f.col('Sales')))
.groupBy("Company"))
# Multiple aggregator functions can be applied in a single call on
# different numerical columns!
new_grouped.agg({'Sales' : 'sum', 'Random' : 'avg'}).show()
# The dataframe has to be persisted with 'cache()' because otherwise,
# every time we perform a computation on it, the plan is re-executed
# and the 'Random' column will be re-rolled.
x = df.withColumn('Random', random_udf(f.col('Sales'))).cache()
x.show()
# In the select you can pass some of the predefined functions,
# which there's PLENTY of in the pyspark.sql.functions module.
df.select(f.countDistinct('Sales')).show()
# You can alias columns to different names for clarity!
# `stddev_samp` is an ugly name. But you still have a
# hard to read number with a bunch of significant digits.
sales_std = df.select(f.stddev('Sales').alias('STDDEV'))
# format a column to two significant digits!
sales_std = sales_std.select(f.format_number('STDDEV', 2))
sales_std.show()
# You can order by one (or multiple) keys, the first keys take precedence
# and then in case of a tie it takes into account the following keys and
# so on. By default it will sort ascending and receiving only the column
# name(s), you have to manually specify the column object and .desc() if
# you want to use descending.
df.orderBy('Company', f.col('Sales').desc(), df['Person'].asc()).show()
```
| github_jupyter |
## **Titanic Dataset: Who Most Likely Survived?**
---
By Cohort B Team 6: (Sylar)Jiajian Guo, Lequn Yu, Qiqi Tang, Scott McCoy, Tiam Moradi
## **Proposal**
Our goal for the project is to use python to further explore the Titanic dataset and the relationship between passenger features and their chances of survival. We plan to use this improved understanding to optimize our Machine Learning model to make better predictions.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# formatting options cell
sns.set_context('notebook')
sns.set_style('darkgrid')
df = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl', na_values='?')
df.head()
```
## **Exploratory Data Analysis**
The Titanic dataset we have here includes 14 columns. The dependent variable is 'survived' and the remaining 13 columns are independent variables. There are 1309 data entries, which means we have 1309 people in this dataset. Only 4 columns are missing a lot of data: 'cabin', 'boat', 'body', and 'home.dest'; the other variables have mostly non-null data. Most of the columns are categorical.
```
df.info()
```
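The per-column null counts summarized by `info()` can also be computed directly; a sketch on a tiny stand-in frame (the notebook itself would call this on its own `df`):

```python
import numpy as np
import pandas as pd

# Hypothetical miniature of the Titanic frame, with some missing values.
df_demo = pd.DataFrame({
    'age': [22.0, np.nan, 38.0],
    'cabin': [np.nan, np.nan, 'C85'],
    'survived': [0, 1, 1],
})

# Number of nulls per column.
null_counts = df_demo.isnull().sum()
print(null_counts)
```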
We can see some useful statistics listed below with the describe method. The total survival rate on the Titanic was 38.2%, which gives us a general sense of how serious the disaster was. Another useful piece of information is that most passengers were middle-aged, with an interquartile range of 21-39 years old. Lastly, we can guess from these statistics that the distribution of 'fare' is highly skewed, with outliers.
```
df.describe()
```
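The skewness claim about 'fare' can be quantified with `Series.skew()`; the notebook would simply call `df['fare'].skew()`. A sketch on synthetic log-normal data (a common stand-in for right-skewed prices):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
fares = pd.Series(rng.lognormal(mean=3.0, sigma=1.0, size=1000))

# For a right-skewed distribution, skew > 0 and the mean sits above the median.
print(fares.skew(), fares.mean(), fares.median())
```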
### **Examining Survival Based on Each Feature**
Which passenger features were most associated with surviving the shipwreck?
The titanic dataset has a number of features about each passenger that could be used as indicators affecting their chances of surviving. We analyzed each feature independently and found that the most significant indicators of survival were sex, passenger class, and age. Our analysis focuses on these features, but also discusses other relevant features that have potential predictive capability.
#### **Sex**
After calculating survival rates based on sex, we found that female passengers had a significantly higher survival rate, at 72.7%, compared to 19.1% for male passengers.
```
df.pivot_table(values='survived',index='sex')
```
The following two charts illustrate the striking divide between survival rates of men and women. Even though a majority of passengers were male, women made up more than two thirds of survivors.
```
# fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True);
sns.countplot(data = df, x='sex');
plt.title('Distribution of passengers by sex');
sns.countplot(data = df, x=df['survived'].map({1:'Yes', 0:'No'}), hue = 'sex');
plt.title('Distribution of Survivors by Sex');
```
#### **Passenger Class**
When looking into survival rate based on passenger class, we see that class 1 had the highest survival rate at 61.9%, followed by class 2 with 43%, and finally class 3 with 25.5%. One thing to notice is that there are more passengers in class 3 than in classes 1 and 2 combined. Our hypothesis for the large disparity in survival rates is the location of the passenger rooms on the Titanic for the different passenger classes.
```
df.pivot_table(values='survived',index='pclass')
df.pivot_table(values='survived',index='pclass').plot(kind = 'bar');
plt.title('Survival Percentage by Passenger Class');
plt.ylabel('Survival Rate');
plt.xlabel('Passenger Class');
plt.xticks(rotation=0);
sns.countplot(data = df, x = 'pclass');
plt.title('Passenger Class Distribution');
plt.ylabel('Count');
plt.xlabel('Passenger Class');
```
#### **Age**
To compare the effect of age on survival, we divided our passengers into several distinct age categories. We define a child as a person under 18, an adult as a person between 18 and 50, and an elder as a person over 50. Children had the highest survival rate, followed by adults and the elderly. We also see that a fairly large 'None' group was created, which means there were null values in the age column of our dataset.
```
dfage = df.copy()
group_names = ['Child', 'Adult', 'Elderly']
ranges = [0,17,49, np.inf]
dfage['age_group'] = pd.cut(dfage['age'], bins = ranges, labels = group_names)
dfage.pivot_table(values='survived', index = 'age_group')
dfage.pivot_table(values='survived', index = 'age_group').plot(kind = 'bar');
plt.title('Survival Percentage by Age Group');
plt.ylabel('Survival Rate');
plt.xlabel('Age Group');
plt.xticks(rotation=0);
```
The survival rate for children was higher than for any other age group, but the effect is less than we expected. Later we will show that the correlation between age and passenger class somewhat masks the predictive effect of age, and that when viewed independently from passenger class, age is a powerful predictor of survival.
#### **Fare**
Here we have defined the fare to be cheap if the price was under 10 dollars, mid if the fare was between 10 and 30 dollars, and expensive if the fare was over 30 dollars. There was also a portion of the passengers with missing values; however, for the data that we do have, expensive fares had the highest survival rate, followed by mid and cheap fares.
> Survival Rate by Fare Group:
```
group_names = ['Cheap', 'Mid', 'Expensive']
ranges = [0,10,30, np.inf]
df['fare_group'] = pd.cut(df['fare'], bins = ranges, labels = group_names)
df.pivot_table(values='survived', index = 'fare_group')
df.pivot_table(values='survived', index = 'fare_group').plot(kind = 'bar');
plt.title('Survival Percentage by Fare Group');
plt.xticks(rotation=0)
plt.ylabel('Survival Rate');
```
One possible explanation for the high survival rate of passengers with expensive tickets is that first class passengers (the class with the highest survival percentage) paid significantly more for their tickets on average. Later we will show that even after controlling for passenger class, higher fares are still associated with higher survival rates.
> Average Fare and Survival Percentage by Passenger Class
```
dff = df.pivot_table(index = 'pclass', values = ['fare', 'survived'])
dff.columns = ['Average Fare', 'Survival Percentage']
dff
```
#### **Port Embarked**
Data shows us that Cherbourg, France is the port with the highest survival rate.
> Survival Percentage by Port of Embarkation
```
dfpe = df.pivot_table(values='survived',index='embarked')
dfpe.columns = ['Survival Percentage']
dfpe['Number of Passengers'] = df.groupby('embarked').count()['survived']
dfpe['Number of Survivors'] = df[df['survived'] == 1].groupby('embarked').count()['survived']
dfpe[['Number of Passengers', 'Number of Survivors', 'Survival Percentage']]
sns.catplot(data = df, kind = 'bar', x = df['embarked'].map({'C': 'Cherbourg', 'S':'Southampton', 'Q': 'Queenstown'}), y = 'survived', ci = None);
plt.title('Survival Percentage by Port of Embarkation');
plt.ylabel('Survival Rate');
plt.xlabel('Port');
```
We also see that a higher proportion of passengers embarking from Cherbourg were first class compared to the other two ports. This raises the question if the high survival rate can actually be attributed to the port itself, or whether it has more to do with correlation between port and passenger class. Later we will show that passengers embarking from Cherbourg had a higher survival rate regardless of their class.
Distribution of Passengers by Port and Passenger Class
```
df1 = df.pivot_table(values='survived',index=['embarked', 'pclass'], aggfunc = 'count')
df1.columns = ['Count']
df1
df.groupby(['pclass', 'embarked']).count()['survived'].unstack()
```
#### **Familial Relationships (sibsp / parch)**
The first aspect to note is that 891 passengers did not have a sibling or a spouse aboard, and this demographic makes up over 68 percent of the population. It is also interesting that the group with the highest survival rate had at least one spouse or sibling on board the Titanic. Once we look at passengers with 2 or more siblings, we see a decline in the survival rate.
> Survival Percentage by Number of Companions
```
group_names = ['Alone', '1-2 companions', 'more than 2 companions']
ranges = [-1,0,2, np.inf]
df['sibsp_group'] = pd.cut(df['sibsp'], bins = ranges, labels = group_names)
df.pivot_table(values='survived', index = 'sibsp_group')
df.pivot_table(values='survived', index = 'sibsp').plot(kind = 'bar');
plt.title('Survival Percentage by Number of Siblings/Spouses');
plt.xticks(rotation=0)
plt.xlabel('Number of Siblings/Spouses');
plt.ylabel('Survival Rate');
# sns.set_style("darkgrid")
```
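The 891-passenger / 68% figure quoted above comes from counting zero values of `sibsp`; a sketch of that computation on a toy Series (the notebook would use `df['sibsp']`):

```python
import pandas as pd

# Made-up sibsp values; zeros are passengers travelling alone.
sibsp = pd.Series([0, 0, 0, 1, 2, 0, 0, 1, 0, 0])
alone = int((sibsp == 0).sum())
share = alone / len(sibsp) * 100
print(alone, share)  # -> 7 70.0
```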
### **Correlation Between Features**
> Can we make meaningful observations about the relationships between our features, and use this to improve our analysis or classification model?
```
co = df.copy()
co['sex'] = co['sex'].map({'male':0, 'female':1})
co['pclass'] = co['pclass'].map({1:3, 2:2, 3:1}) #switched numbering of passenger classes for correlation calc
corr = co.corr()
corr
sns.set_style('white')
mask = np.triu(np.ones_like(corr, dtype=bool))
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(230, 20, as_cmap=True)
sns.heatmap(corr, mask=mask,cmap=cmap, vmax=.6, center=0,
square=True, linewidths=.55, cbar_kws={"shrink": .5});
plt.title('Correlation Matrix')
plt.show()
```
Very high positive correlation between:
* Sex and survived - Shows how important sex feature is for prediction.
* Age and pclass (1st class passengers are more likely to be older) - Might explain why age didn't have the strong predictive value we expected.
* Fare and pclass - Makes sense that higher class tickets cost more. We need to further explore this relationship to determine the predictive power of fare.
#### Age vs Passenger Class
> Average age by passenger class:
```
df.pivot_table(values = 'age', index = 'pclass')
```
First class passengers are on average almost 10 years older than second class passengers, and 15 years older than third class passengers.
Since the older first class had the highest survival percentage, are we underestimating the predictive value of age when we just look at survival rates of passengers as a whole?
We broke out the distribution of age by passenger class to get a better understanding.
```
sns.set_style('darkgrid')
g = sns.catplot(data = df, y = 'age', x = 'pclass', kind = 'swarm', hue = 'survived',height=7.5, aspect=7/8);
plt.axhline(18, color="k", linestyle="--", alpha=1);
g.fig.suptitle('Distribution of Age by Passenger Class')
plt.show()
```
> Survival by passenger class and age group:
```
df_p = df[df['age'] < 18].pivot_table(values = 'survived', index = 'pclass')
df_p.columns = ['Child']
df_p['Adult'] = df[df['age'] >= 18].pivot_table(values = 'survived', index = 'pclass')
df_p['Difference'] = df_p['Child'] - df_p['Adult']
df_p
```
Here we see that for each passenger class, being under the age of 18 results in a higher chance of survival.
Our visualization also shows that a majority of passengers under the age of 18 were in 3rd class, which would drag down the survival rate of the age group as a whole.
> Distribution of passenger class for child passengers:
```
df[df['age'] < 18 ]['pclass'].value_counts(normalize=True) * 100
```
When viewed independently from passenger class, we see that age is a powerful predictor of survival.
#### Fare vs Passenger Class
We previously found that a higher fare was associated with a higher rate of survival. But was that effect more reflective of 1st class passengers paying higher fares?
```
sns.set_style('dark')
sns.histplot(data = df[ df['pclass'] == 1], x = 'fare', hue = 'survived');
plt.title('First class passenger survival by fare');
```
Here we see that among all first class passengers, higher fares are associated with greater chance of survival.
```
sns.histplot(data = df[ df['pclass'] == 2], x = 'fare', hue = 'survived');
plt.title('Second class passenger survival by fare');
sns.histplot(data = df[ df['pclass'] == 3], x = 'fare', hue = 'survived', bins = 10);
plt.title('Third class passenger survival by fare');
```
For second and third class passengers, the effect is not as profound, but we can still see that passengers paying higher fares are more likely to survive regardless of their passenger class.
#### Port of Embarkation vs Passenger Class
```
sns.set_style('darkgrid')
g= sns.catplot(data = df, x = 'pclass', kind = 'count', col = 'embarked', col_order = ['C', 'S', 'Q']);
g.fig.suptitle('Distributon of Passenger Class by Port of Embarkation', y = 1.1);
g.set(xlabel = 'Passenger Class', ylabel = 'Count');
```
We saw earlier that passengers embarking from Cherbourg had the highest rate of survival. However this chart also shows that these passengers were much more likely to be first class than either of the other ports. Does this skew the survival rate by port? To find out, we broke out the survival rates by passenger class below.
> Survival Percentage by Port and Passenger Class:
```
dfch = df[df['embarked'] == 'C'].pivot_table(values='survived',index='pclass')
dfch['Other Port'] = df[df['embarked'].isin(['S', 'Q'])].pivot_table(values = 'survived', index = 'pclass')
dfch.columns = ['Cherbourg Survival Rate', 'Other Ports Survival Rate']
dfch['Difference'] = dfch['Cherbourg Survival Rate'] - dfch['Other Ports Survival Rate']
dfch
```
This table shows that passengers who embarked from Cherbourg had a higher survival percentage than the other two ports, regardless of their class.
```
import scipy as scipy
# 1st class C vs non-C embark survival diff significance
scipy.stats.ttest_ind(pd.array(df.loc[(df['embarked'] == 'C')&(df['pclass']==1)]['survived']),pd.array(df.loc[(df['embarked'] != 'C')&(df['pclass']==1)]['survived']))
#2nd class
scipy.stats.ttest_ind(pd.array(df.loc[(df['embarked'] == 'C')&(df['pclass']==2)]['survived']),pd.array(df.loc[(df['embarked'] != 'C')&(df['pclass']==2)]['survived']))
#3rd class
scipy.stats.ttest_ind(pd.array(df.loc[(df['embarked'] == 'C')&(df['pclass']==3)]['survived']),pd.array(df.loc[(df['embarked'] != 'C')&(df['pclass']==3)]['survived']))
```
## **Machine Learning**
For this iteration of the machine learning section, we decided to make a class called project, which handles the preprocessing steps and trains two different algorithms: Logistic Regression and XGBoost (Extreme Gradient Boosting Trees).
### **Preprocessing**
For preprocessing, the steps depend on the type of model the user chooses. If linear_model == True, the preprocessing is similar to our work in BigQuery: we One Hot Encoded the pclass, sex, and embarked columns and kept the numerical features on their original scale. One thing we tried differently was creating a family feature that is the sum of the Sibsp and Parch features. All other features were dropped.
If linear_model == False, we took a slightly different approach. We still created the family feature and dropped irrelevant columns, but we also tried to make use of the cabin column, and we did not One Hot Encode the categorical features. The reason we can work with the cabin feature at all is that XGBoost can handle null values. First we converted the cabin feature to labels: we took out the deck letter in the cabin value and mapped it to an integer. Null values received a label of 0, while non-null values received a label between 1 and 7. We decided against One Hot Encoding because tree-based methods split on nodes based on the labels directly; this helps them perform better with rarer categories (the non-null cabin values, for instance) and lets them make fewer splits.
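A minimal sketch of the cabin labeling described above (deck letter mapped to 1-7, nulls to 0); the sample cabin strings here are made up:

```python
import numpy as np
import pandas as pd

cabin = pd.Series(['B45', np.nan, 'C22 C26', np.nan, 'E12'])
deck_map = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7}

# First character is the deck letter; unmapped/missing values become 0.
encoded = cabin.str[0].map(deck_map).fillna(0).astype(int)
print(encoded.tolist())  # -> [2, 0, 3, 0, 5]
```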
### **Performance**
After creating and evaluating our model, how can we tune and re-train it to increase the accuracy of predictions?
For the LogisticRegression, we still used L1 regularization, but we also used GridSearchCV for hyperparameter tuning, in order to find the combination of hyperparameters with which the model performs best. Finally, unlike the BigQuery model, we used the stratify parameter of the train_test_split function to keep the class distribution in the test set the same as in the training set; in BigQuery, the data was split at random and class distribution was not taken into account. \
With this iteration of the LogisticRegression model, here are the metrics:
accuracy: **79.8%** \
precision: **77%** \
recall: **67.2%** \
f1_score: **71.8%** \
roc_auc: **77.4%** \
We performed very similarly to the model produced in BigQuery; our recall improved by 6 percent, while our precision dropped by 6 percent and our roc_auc dropped by 7 percent. Interestingly, our F1-score remained the same. Therefore, even though the model did not generate features that were more likely to separate the data, it is still just as good as the model trained in the cloud.
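A sketch of the tuning setup described above (L1-regularized logistic regression, GridSearchCV, stratified split), shown on synthetic data rather than the Titanic frame; the parameter grid here is an illustrative assumption:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# stratify=y keeps the class distribution identical across the splits.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

param_grid = {'C': [0.01, 0.1, 1.0, 10.0]}
grid = GridSearchCV(
    LogisticRegression(penalty='l1', solver='liblinear'),  # L1 regularization
    param_grid, cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```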
Now here is the performance of the XGBoost model:
accuracy: **82%** \
precision: **83.7%** \
recall: **65.6%** \
f1_score: **73.5%** \
roc_auc: **78.9%** \
In comparison to both Logistic Regression models, this model performs slightly better, with a higher f1-score. It was able to reach around the same precision and accuracy as the BigQuery model, while having a higher AUC of the ROC curve in comparison to the Logistic Regression trained in Python. The one interesting note with these metrics is that XGBoost had a hard time with passengers who did not survive; this could have been because of the inclusion of the cabin feature, and one could check feature importance or use SHAP values to see how big a role cabin played.
### **Cross Validation Results: How do the models compare?**
As stated, with GridSearch we are able to find the combination of hyperparameters that achieved the best results on each of the validation sets, using 5-fold validation. The cells in the Cross Validation scores section show the best-performing hyperparameters for our Logistic Regression and XGBoost models. The first thing to notice is that the Logistic Regression model had the highest overall test score across the cross validation sets, but the XGBoost model was able to consistently achieve over 80 percent on 4 out of the 5 splits; this is also seen in the standard deviation of its test scores being lower than that of the Logistic Regression model.
Some more information we can see from the following series is that for the Logistic Regression model there were 30 combinations of hyperparameters that achieved the best average test score, whereas the XGBoost model had only 2. That alone tells us a couple of things. First, Logistic Regression models with hyperparameters within a certain range are all capable of performing the same and converging to the best possible scores. On the other hand, hyperparameter tuning significantly improved the XGBoost model: the limited number of rank-1 combinations means that even slight tuning changed the performance of the model.
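The per-fold comparison discussed above can be reproduced with `cross_val_score`; a sketch on synthetic data, using sklearn's `GradientBoostingClassifier` (the boosted-tree classifier this notebook's imports actually provide) as the tree model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Compare mean and spread of 5-fold scores, as in the discussion above.
results = {}
for name, model in [('logreg', LogisticRegression(max_iter=1000)),
                    ('boosted_trees', GradientBoostingClassifier(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5)
    results[name] = (scores.mean(), scores.std())
    print(name, scores.mean().round(3), scores.std().round(3))
```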
Overall between the two models, in a real world scenario, we would most likely use the XGBoost model, as it has a higher test accuracy in comparison to the Logistic Regression, for both the validation sets as well as the test set.
### **"Project" Python Class**
```
!pip install shap
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import MinMaxScaler as minmax
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
import shap
class project():
def __init__(self,**kwargs):
assert isinstance(kwargs['path'],str)
assert isinstance(kwargs['linear_model'],bool)
assert isinstance(kwargs['seed'],int)
self.path = kwargs['path']
self.linear_model = kwargs['linear_model']
self.seed = kwargs['seed']
def preprocessing(self):
"""
Here we are giving the path of dataset, prepare the dataset to be processed in a machine learning model
Input:
path: str of the path to the dataset
split_labels: bool that either returns the whole dataframe or the features and the labels seperately
linear_model: bool that preprocesses the data differently depending on if the model we use is either
a linear classifier or a tree based method
baseline: Here we are going to be making a baseline model for either method
Output:
df or X , y if split_labels = True
"""
df = pd.read_csv(self.path,na_values='?')
# create categorical vars
if self.linear_model:
print('We are turning categorical features into ohc and dropping some unhelpful columns...')
df = pd.get_dummies(df,columns=['pclass','sex','embarked'])
df['family_size'] = df['sibsp'] + df['parch']
df.drop(['name','boat','home.dest','sex_male','body','cabin','ticket','sibsp','parch','pclass_3','embarked_S'],axis=1,inplace=True)
# df.dropna(axis=0,inplace=True)
df['age'].fillna(df['age'].median(),inplace=True)
df['fare'].fillna(df['fare'].median(),inplace=True)
else:
df = pd.get_dummies(df,columns=['pclass','sex','embarked'])
df['family_size'] = df['sibsp'] + df['parch']
df['cabin'] = df[df.cabin.notnull()].cabin.apply(lambda cabin: cabin[0]).map({'A':1,'B':2,'C':3,'D':4,'E':5,'F':6,'G':7})
df.drop(['name','boat','home.dest','sex_male','body','ticket','sibsp','parch','pclass_3','embarked_S'],axis=1,inplace=True)
df.fillna(0,inplace=True)
# print(df.info())
label = df['survived']
df.drop(['survived'],axis=1,inplace=True)
self.df = df
self.label = label
    def demistify_model(self,model,X_test):
        ## Depending on the model, return either the coefficients (linear
        ## model) or a SHAP summary plot (tree-based model)
        if self.linear_model:
            return pd.DataFrame(model.coef_,columns=self.df.columns)
        else:
            ## Plot the SHAP values of the tree-based model
            print('Printing the SHAP values of the model')
            shap_values_t1 = shap.TreeExplainer(model).shap_values(X_test)
            return shap.summary_plot(shap_values_t1,X_test)
    @staticmethod
    def cv_scores(model):
# Here we are going to see the CV scores from GridSearch
return model.cv_results_
def run_ml(self):
# Here we are going to be running the machine learning model
# make sure that the params
print('Training the model.....',end='\n')
X , y = self.df, self.label
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=.25,random_state=self.seed,stratify=y)
if self.linear_model:
lr = LogisticRegression()
clf = GridSearchCV(estimator=lr,param_grid={'penalty':['l1'],
'solver':['liblinear'],
'C':list(np.arange(0.005,1,.015)),
'max_iter':list(range(50,500,15)),
'random_state':[self.seed]},cv=5)
else:
xgboost = GradientBoostingClassifier()
clf = GridSearchCV(estimator=xgboost,param_grid={'loss':['deviance','exponential'],
'learning_rate':list(np.arange(0.005,1,.015)),
'n_estimators': [100],
'max_depth':list(range(3,10,2)),
'random_state':[self.seed]},cv=5)
clf.fit(X_train,y_train)
print('We are done training. Now we are applying the model to the test set...',end='\n')
y_pred = clf.predict(X_test)
print('Now we are constructing the metrics dataframe...')
accuracy = accuracy_score(y_test,y_pred)
precision = precision_score(y_test,y_pred)
recall = recall_score(y_test,y_pred)
f1 = f1_score(y_test,y_pred)
roc_auc = roc_auc_score(y_test,y_pred)
metrics = pd.DataFrame({'accuracy':accuracy,
'precision':precision,
'recall':recall,
'f1_score':f1,
'roc_auc':roc_auc},index=[0])
return X_test,y_test,y_pred, clf,metrics
```
### **Fitting and Running Model**
```
project_= project(path='https://www.openml.org/data/get_csv/16826755/phpMYEkMl',
split_labels=True,
linear_model=True,
seed=833)
project_.preprocessing()
print(project_.df.head())
_, base_truth_array, predicted_array, model, metrics_df = project_.run_ml()
metrics_df
# This is going to run a Gradient Boosting Model
project_xg = project(path = 'https://www.openml.org/data/get_csv/16826755/phpMYEkMl',
split_labels=True,
linear_model=False,
seed=833)
project_xg.preprocessing()
print(project_xg.df.head())
X_test, base_truth_array_xg, predicted_array_xg, model_, metrics_xg_df = project_xg.run_ml()
metrics_xg_df
```
### **Cross Validation Scores**
```
# Best Performing Logistic Regression results
pd.DataFrame(model.cv_results_).sort_values('rank_test_score').iloc[0,:]
# Best Performing XGBoost results
pd.DataFrame(model_.cv_results_).sort_values('rank_test_score').iloc[0,:]
```
### **Coefficients and SHAP values**
Here we can see the coefficients of the best performing Logistic Regression. Similarly to when we used Logistic Regression on the cloud, our model also uses l1 regularization and zeroed out irrelevant features for prediction, deeming the following features significant: sex_female, pclass_1, pclass_2, embarked_C. Unlike with BigQuery ML, we were not able to set the lambda parameter directly; instead we adjust a parameter called C, the inverse of the regularization parameter: \\[C = \frac{1}{\lambda} \\]
As for the XGBoost, we utilize another method that gives visual insight into how the model discriminates between variables: SHAP values, or Shapley Additive Explanations. This method can demystify black-box models that are non-linear. On the graph, the x-axis represents the impact on the model output, which can be read as a pull toward the negative or positive class (negative meaning the passenger did not survive, positive meaning the passenger survived). The y-axis lists the features of the dataset, ordered by descending importance. Feature importance is calculated from the average marginal contribution a feature makes toward predictions across all partitions of the feature space. For this plot, we use the test set to visualize how the model decides its predictions. For example, sex_female was the most important feature: when a passenger was male, the model leans toward the passenger not surviving, and when a passenger was female, toward surviving. pclass_1 follows as the next most important feature, but this is where the similarities between the Logistic Regression and the XGBoost end. The XGBoost ranks the remaining features in the following order: fare, cabin, age, family_size, and the embarkation features. For fare, age, and family_size, we can see some scattered points that could have contributed to misclassification.
From our understanding of the models, we can see that the two models are capable of solving the same task but arrive at their predictions in different ways.
```
# Coefficients for our Logistic Regression
project_.demistify_model(model.best_estimator_,None).T
# SHAP Summary Plot for our XGBoost model
project_xg.demistify_model(model_.best_estimator_,X_test)
```
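The relationship between C and λ can also be checked empirically: under l1 regularization, a smaller C (i.e. a larger λ) should zero out more coefficients. A standalone sketch on synthetic data, not the project's actual dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two features actually determine the label; the rest are noise.
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def n_zero_coefs(C):
    # liblinear supports the l1 penalty, as in the notebook's grid search
    clf = LogisticRegression(penalty='l1', solver='liblinear', C=C,
                             random_state=0).fit(X, y)
    return int((clf.coef_ == 0).sum())

zeros_strong = n_zero_coefs(C=0.05)  # strong regularization (large lambda)
zeros_weak = n_zero_coefs(C=10.0)    # weak regularization (small lambda)
print(zeros_strong, zeros_weak)
```

With strong regularization, the noise features are driven to exactly zero, which is the same "zeroing out of irrelevant features" observed in the coefficient table above.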
### Examining Predictions
We create a copy of the test set and add columns for the model's predicted value, and whether or not the prediction was correct.
```
compar = pd.DataFrame(base_truth_array)
compar['predicted'] = predicted_array
compar['correct_pred'] = compar['survived'] == compar['predicted']
compar = project_.df.merge(compar, how = 'inner', left_index=True, right_index=True)
compar
compar_xg = pd.DataFrame(base_truth_array_xg)
compar_xg['predicted'] = predicted_array_xg
compar_xg['correct_pred'] = compar_xg['survived'] == compar_xg['predicted']
compar_xg = project_xg.df.merge(compar_xg, how = 'inner', left_index=True, right_index=True)
compar_xg
```
We can use a confusion matrix to visualize the predictions.
```
from sklearn.metrics import confusion_matrix
conf = confusion_matrix(compar_xg['correct_pred'], compar_xg['predicted'])
# conf_norm = confusion_matrix(compar['correct_pred'], compar['predicted'], normalize='true')
conf
plt.figure(figsize=(10,7));
sns.heatmap(conf, cmap = 'Blues', annot = True, fmt = 'g',xticklabels = ['Not Survived', 'Survived'], yticklabels=['Incorrect', 'Correct']);
plt.xlabel('Predicted Result');
plt.ylabel('Prediction');
plt.title('Confusion Matrix');
```
> Metrics:
```
metrics_xg_df
CP = conf[1].sum()
TotP = conf.flatten().sum()
AP = CP / TotP
print('Correct Predictions: {} \nTotal Predictions: {} \nAccuracy: {}\n\n'.format(CP, TotP, round(AP,3)))
TP = conf[1][1]
FP = conf[0][1]
Pr = TP / (TP + FP)
print('True Positives: {} \nFalse Positives: {} \nPrecision: {}\n'.format(TP, FP, round(Pr,3)))
TP = conf[1][1]
FN = conf[0][0]
Rc = TP / (TP + FN)
print('True Positives: {} \nFalse Negatives: {}\nRecall: {}\n'.format(TP,FN,round(Rc, 3)))
# TP = conf[1][1]
# TotPos = conf[0][0] + conf[1][1]
# Sen = TP / TotPos
# print('True Positives: {} \nTotal Positives: {} \nSensitivity: {} \n'.format(TP, TotPos, round(Sen, 3)))
TN = conf[1][0]
N = conf[1][0] + conf[0][1]
Sp = TN / N
print('True Negatives: {} \nTotal Negatives: {} \nSpecificity: {} \n'.format(TN, N, round(Sp, 3)))
```
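Note that the matrix above is built from correctness vs. prediction, so its cells need the custom bookkeeping shown. As a cross-check, scikit-learn's standard convention (rows = true labels, columns = predicted labels) lets the four cells be unpacked directly; a minimal sketch with toy labels:

```python
from sklearn.metrics import confusion_matrix

# Toy ground truth and predictions (0 = not survived, 1 = survived).
y_true = [0, 0, 0, 1, 1, 1, 1, 0]
y_pred = [0, 1, 0, 1, 0, 1, 1, 0]

# ravel() unpacks the 2x2 matrix in the fixed order: TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = tp / (tp + fp)
recall = tp / (tp + fn)        # a.k.a. sensitivity
specificity = tn / (tn + fp)
print(tn, fp, fn, tp)
```

With real data, `confusion_matrix(compar_xg['survived'], compar_xg['predicted'])` would follow this convention and make the metric formulas line up with textbook definitions.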
Out of 328 records we had:
* 43 false negatives (predicted a passenger would not survive when they did)
* 16 false positives (predicted a passenger would survive when they did not)
We create a dataframe of only incorrect predictions so we can examine the characteristics of the passengers our model misclassified.
```
wrong_pred = compar_xg[compar_xg['correct_pred'] == False]
wrong_pred.head()
correct_pred = compar_xg[compar_xg['correct_pred'] == True]
correct_pred.head()
```
**Visualising Incorrect Predictions**
False Negatives
```
sns.countplot(data = wrong_pred[wrong_pred['survived_x'] == 1], x = 'sex', hue = 'pclass');
# plt.ylim(0, 100);
plt.title('Distribution of False Negatives by Sex and Passenger Class');
```
False Positives
```
sns.countplot(data = wrong_pred[wrong_pred['survived_x'] == 0], x = 'sex', hue = 'pclass');
# plt.ylim(0, 35);
plt.title('Distribution of False Positives by Sex and Passenger Class');
```
These two countplots show that there are patterns in the way our model misclassified data.
* Male passengers more likely to be misclassified as not surviving when they did
* Female passengers more likely to be misclassified as surviving when they did not
* Almost all false positives were 3rd class female passengers
> 90% of passengers the model predicted to survive were female:
```
compar_xg[(compar_xg['predicted'] == 1) & (compar_xg['sex'] == 'female')].shape[0] / compar_xg[(compar_xg['predicted'] == 1)].shape[0]
```
> 89% of passengers the model predicted to not survive were male:
```
compar_xg[(compar_xg['predicted'] == 0) & (compar_xg['sex'] == 'male')].shape[0] / compar_xg[(compar_xg['predicted'] == 0)].shape[0]
```
**Visualising Correct Predictions**
```
sns.countplot(data = correct_pred[correct_pred['survived_x'] == 1], x = 'sex', hue = 'pclass');
plt.title('Distribution of True Positives by Sex and Passenger Class');
sns.countplot(data = correct_pred[correct_pred['survived_x'] == 0], x = 'sex', hue = 'pclass');
plt.title('Distribution of True Negatives by Sex and Passenger Class');
```
# 775 Notebook
## Titanic Dataset: Who Most Likely Survived?
---
#### By Cohort B Team 6: (Sylar)Jiajian Guo, Lequn Yu, Qiqi Tang, Scott McCoy, Tiam Moradi
### Import Packages and Titanic Dataset
```
%%bigquery
SELECT *
FROM `ba775-team-6b.Project.passengers`
```
### Preview of the Dataset
```
%%bigquery
SELECT *
FROM `ba775-team-6b.Project.passengers`
LIMIT 5
```
### How many people survived the Titanic shipwreck?
---
The first question we asked was how many people in the dataset survived the accident, and what the overall survival rate was. With 500 of the 1309 passengers surviving, only 38 percent of people survived.
The Titanic dataset has a number of features about each passenger that could be used as indicators affecting their chances of surviving. We analyzed each feature independently and found that the most significant indicators of survival were sex, passenger class, and age. Our analysis focuses on these features, but also discusses other relevant features with potential predictive capability.
```
%%bigquery
SELECT count(CASE WHEN survived = 1 THEN 1 ELSE NULL END) AS number_of_survivors
FROM `ba775-team-6b.Project.passengers`
%%bigquery
SELECT count(*) AS number_of_passengers
FROM `ba775-team-6b.Project.passengers`
%%bigquery
SELECT (count(CASE WHEN survived = 1 THEN 1 ELSE NULL END) / count(*)) * 100 AS passenger_survival_percentage
FROM `ba775-team-6b.Project.passengers`
```
### Survival by Sex
After calculating survival rates by sex, we found that female passengers had a significantly higher survival rate, at 72 percent, compared to male passengers at 19 percent.
```
%%bigquery
SELECT sex, count(*) Number_Passengers, sum(survived) Number_Survivors, (sum(survived) / count(*)) Survival_Percentage
FROM `ba775-team-6b.Project.passengers`
GROUP BY sex
#comparing survival statistics of male and female passengers
```
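The same aggregation can be sketched in pandas with `groupby`; a toy example with hypothetical rows, mirroring the columns the query touches:

```python
import pandas as pd

# Hypothetical passenger rows (not the real dataset).
toy = pd.DataFrame({
    'sex': ['female', 'female', 'male', 'male', 'male'],
    'survived': [1, 0, 0, 1, 0],
})

# Count passengers, sum survivors, and compute the survival rate per sex,
# just like COUNT(*), SUM(survived), SUM(survived)/COUNT(*) with GROUP BY sex.
summary = toy.groupby('sex')['survived'].agg(
    Number_Passengers='count',
    Number_Survivors='sum',
    Survival_Percentage='mean',
)
print(summary)
```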
### Survival by Passenger Class
When looking into survival rate by passenger class, we see that class 1 had the highest survival rate at 61 percent, followed by class 2 with 42 percent, and finally class 3 with 25 percent. Notice also that there are more passengers in class 3 than in classes 1 and 2 combined. Our hypothesis for the big disparity in survival rates is the location of the rooms on the Titanic for the different passenger classes.
```
%%bigquery
SELECT pclass, count(*) Number_Passengers, sum(survived) Number_Survivors, (sum(survived) / count(*)) Survival_Percentage
FROM `ba775-team-6b.Project.passengers`
GROUP BY pclass
ORDER BY pclass
#comparing survival rate between passenger classes
```
### Survival by Age Group
Following the theme of analyzing survival rates by different factors, we compared survival rates by age. We define a child as a person under 18, an adult as a person between the ages of 18 and 50, and an elder as a person over 50 years old. Children had the highest survival rate, followed by adults and elders. We also see a fairly large group called None, which means we had null values in the age column of our dataset. The elders' survival rate is not very meaningful due to its small sample size.
```
%%bigquery
WITH AGEGROUP AS
(SELECT *,
CASE WHEN age > 0 AND age < 18 THEN 'Child'
WHEN age >= 18 AND age < 50 THEN 'Adult'
WHEN age >= 50 THEN 'Elder'
ELSE NULL END AS age_group
FROM `ba775-team-6b.Project.passengers`
)
SELECT age_group, count(*) Number_Passengers, sum(survived) Number_Survivors, (sum(survived) / count(*)) Survival_Percentage
FROM AGEGROUP
GROUP BY age_group
#comparing survival rate between age groups
```
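The same bucketing can be sketched in pandas with `pd.cut`, which makes the bin boundaries explicit and avoids overlapping CASE branches; a small example with hypothetical ages:

```python
import pandas as pd

# Hypothetical ages, including a missing value like the dataset's nulls.
ages = pd.Series([2, 17, 18, 35, 50, 71, None])

# Right-open bins: [0, 18) -> Child, [18, 50) -> Adult, [50, inf) -> Elder.
groups = pd.cut(ages, bins=[0, 18, 50, float('inf')],
                labels=['Child', 'Adult', 'Elder'], right=False)
print(list(groups))
```

Missing ages come out as NaN, which is the analogue of the SQL query's None group.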
### Survival by Passenger's Number of Siblings / Spouses
The first aspect to note is that 891 passengers did not have a sibling or spouse aboard, yet this demographic made up over 68 percent of the population. It is also interesting that the group with the highest survival rate had at least one spouse or sibling on board the Titanic. Once we look at passengers with 2 or more siblings, we see a decline in the survival rate.
```
%%bigquery
SELECT SibSp, count(*) Number_Passengers, sum(survived) Number_Survivors, (sum(survived) / count(*)) Survival_Percentage
FROM `ba775-team-6b.Project.passengers`
GROUP BY SibSp
ORDER BY SibSp
#comparing survival rate between passenger numbers of siblings / spouses aboard
```
### Survival by Port Embarked
Here we are able to get a sense of where the passengers embarked from.
```
%%bigquery
SELECT
CASE
WHEN embarked = 'C' THEN 'Cherbourg, France'
WHEN embarked = 'Q' THEN 'Queenstown, Ireland'
WHEN embarked = 'S' THEN 'Southampton, England'
END AS port_of_embarkation, count(*) Number_Passengers, sum(survived) Number_Survivors, (sum(survived) / count(*)) Survival_Percentage
FROM `ba775-team-6b.Project.passengers`
WHERE embarked IS NOT NULL
GROUP BY embarked
ORDER BY embarked
#comparing survival rate between passenger with different ports embarked
```
Further analysis shows that Cherbourg, the port with the highest survival percentage, also had the highest share of first class passengers.
```
%%bigquery
SELECT
CASE
WHEN embarked = 'C' THEN 'Cherbourg, France'
WHEN embarked = 'Q' THEN 'Queenstown, Ireland'
WHEN embarked = 'S' THEN 'Southampton, UK'
END
AS port_of_embarkation,
pclass,
COUNT(*) Number_Passengers
FROM
`ba775-team-6b.Project.passengers`
WHERE
embarked IS NOT NULL
GROUP BY
embarked,
pclass
ORDER BY
embarked,
pclass
```
### Survival by Fare Group
Here we define the fare as cheap if the price was under 10 dollars, mid if it was at least 10 but under 30 dollars, and expensive if it was 30 dollars or more. A sizable portion of the passengers had missing values; for the data we do have, mid-level fares had the highest survival rate, followed by expensive and cheap fares. The expensive group's survival rate is less meaningful here due to the small amount of data. Our hypothesis is that the more passengers paid, the higher their rate of survival.
```
%%bigquery
WITH FAREGROUP AS
(SELECT *,
CASE WHEN fare > 0 AND fare < 10 THEN 'Cheap'
WHEN fare >= 10 AND fare < 30 THEN 'Mid'
WHEN fare >= 30 THEN 'Expensive'
ELSE NULL END AS fare_group
FROM `ba775-team-6b.Project.passengers`
)
SELECT fare_group, count(*) Number_Passengers, sum(survived) Number_Survivors, (sum(survived) / count(*)) Survival_Percentage
FROM FAREGROUP
GROUP BY fare_group
#comparing survival rate between passenger with different fare paid
```
After performing our initial exploratory data analysis we found that sex, passenger class, and age were most associated with an increased chance of survival. We also saw some correlation between our features that might distort the predictive value of any variable. Our next step would be to develop and tune a machine learning model to best make predictions about survival based on data.
### Who Most Likely Survived: Machine Learning
Our team attempted to predict whether passengers would survive the Titanic accident. Since our dataset has only two labels, survived and not survived, we are solving a binary classification problem. Our algorithm of choice is Logistic Regression. Below is how we structured our steps leading up to evaluating the test set.
#### Preprocessing the Dataset
The first cell shows the preprocessed features we are going to use for our classification model.
Since the majority of our data is text, we one-hot encoded features such as sex, Embarked, and Pclass. With Logistic Regression, one-hot encoded data improves the model's performance because the features are transformed from label-encoded to numerical features.
We also standardized the age and fare columns so that gradient descent takes the best trajectory toward the global minimum. If we kept age and fare at their original scales, those two features would dominate the predictions and the direction of gradient descent, as their parameters and gradients would be the largest values. Finally, we did use one label-encoded feature called Name title, generated by labelling the most common titles on board along with a group of rare titles. We created this feature to see if the model could discover any patterns between titles and survival.
We also used the RAND() function to assign random values across the dataset, and then used a WHERE condition to filter the data into a training and test set. Since the numbers are randomly generated, we can consider this a random shuffle of the data.
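In pandas terms, the split amounts to drawing one uniform random number per row and partitioning on a single threshold (comparing several independent RAND() calls would give a different, unintended distribution). A minimal sketch with a hypothetical 70/30 split:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(833)  # seed assumed here, for reproducibility
df = pd.DataFrame({'passenger_id': range(100)})

# One random draw per row, reused for both sides of the split.
df['r'] = rng.random(len(df))
train = df[df['r'] < 0.7]
test = df[df['r'] >= 0.7]

print(len(train), len(test))
```

Because both conditions compare the same stored value, every row lands in exactly one partition.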
#### Model Performance
We can see that the model performs reasonably well. When evaluating the test set, our metrics are as follows:
- **Accuracy: 80%**
- **Area Under the ROC curve: 84.6%**
- **Precision: 83.6%**
- **Recall: 61.8%**
- **F1 score: 71%**
Overall, this is a good baseline, but much can be improved. Although the accuracy is 80 percent, the model's positive predictions are reliable while its coverage of survivors is poor. We know this because precision is significantly higher than recall: precision indicates how trustworthy the model's positive predictions are, while recall indicates how many of the actual positives the model captures. The low recall means many passengers who actually survived were predicted not to, which could suggest that the distribution of that class is more scattered and thus produces more
false negatives.
Finally, we also applied L1 regularization to the model to reduce overfitting, though it did not appear to help much here. One option is to increase the value of lambda, which controls how severe the regularization is; one must be careful, though, as a lambda value that is too large would cause the model
to suffer from high bias and underfit.
### Feature Importance
After the model has trained, we are able to inspect the learned weights via ML.WEIGHTS to see which features contribute most to the predictions.
#### Limitations
One limitation is the amount of data we have to train the model. There were roughly 1300 data points, which naturally makes it difficult to generalize well to the test set. In addition, because this dataset is of the Titanic accident, there is no way to curate additional information about the passengers.
Another limitation was that certain features had null values. While we were able to impute these values, it is difficult to tell whether they truly represent the missing passengers' ages. With models such as XGBoost or CatBoost, these null values would not be as much of an issue.
```
%%bigquery
CREATE OR REPLACE TABLE `ba775-team-6b.Project.preprocessed_data` as (SELECT Survived,
CASE WHEN Sex = 'male' Then 1 Else 0 END as isMale,
CASE WHEN Sex = 'female' Then 1 Else 0 End as isFemale,
CASE WHEN Pclass = 1 THEN 1 ELSE 0 END as Pclass1,
CASE WHEN Pclass = 2 THEN 1 ELSE 0 END as Pclass2,
CASE WHEN Pclass = 3 THEN 1 ELSE 0 END as Pclass3,
SibSp,
Parch,
Age,
Fare,
CASE WHEN Embarked = 'S' THEN 1 Else 0 END as Embarked_S,
CASE WHEN Embarked = 'Q' THEN 1 ELSE 0 END as Embarked_Q,
CASE WHEN Embarked = 'C' THEN 1 ELSE 0 END as Embarked_C,
# a single RAND() per row avoids drawing a new random number for each comparison
CASE WHEN RAND() < .7 THEN 'Training'
ELSE 'Test'
END as data_split
FROM `ba775-team-6b.Project.passengers`)
```
In the next two cells, we use the data_split feature to partition the table into a training and test set to feed into the model.
```
%%bigquery
CREATE OR REPLACE table `ba775-team-6b.Project.preprocessed_data_train` as (
SELECT * FROM `ba775-team-6b.Project.preprocessed_data`
WHERE data_split = 'Training')
%%bigquery
CREATE OR REPLACE table `ba775-team-6b.Project.preprocessed_data_test` as (
SELECT * FROM `ba775-team-6b.Project.preprocessed_data`
WHERE data_split = 'Test')
```
The following cells are going to train and evaluate the model
```
%%bigquery
CREATE OR REPLACE MODEL `ba775-team-6b.Project.classification_model`
OPTIONS(model_type='logistic_reg',L1_REG=5,input_label_cols = ['survived'])
AS
SELECT * EXCEPT(data_split) FROM `ba775-team-6b.Project.preprocessed_data_train`
%%bigquery
SELECT *
FROM ML.EVALUATE
(
MODEL `ba775-team-6b.Project.classification_model`,
(SELECT * EXCEPT(data_split) FROM `ba775-team-6b.Project.preprocessed_data_test`)
)
%%bigquery
Select processed_input,weight FROM
ML.WEIGHTS(MODEL `ba775-team-6b.Project.classification_model`)
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# Common Imports:
import pandas as pd
import numpy as np
import os
# To Plot Figures:
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
%matplotlib inline
# allowing for any single variable to print out without using the print statement:
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# To allow markdowns in Python Cells:
from IPython.display import display, Markdown
```
### First, we will load our images from the Kaggle dataset and create our train and test X and Y values.
```
# Path to access images
from pathlib import Path
# in-built keras image pre-processing library
from keras.preprocessing import image
# Path to folders with training data
parasitized_path = Path('../input/cell-images-for-detecting-malaria/cell_images') / 'Parasitized'
not_parasitized_path = Path('../input/cell-images-for-detecting-malaria/cell_images') / 'Uninfected'
# making sure the directories exist
parasitized_path.is_dir()
not_parasitized_path.is_dir()
# initializing the lists of images (X) and labels (Y)
images = []
labels = []
# import library to resize images:
from skimage import transform
# setting the new shape of image:
new_shape = (50, 50, 3)
```
###### Let's import all the non-infected images
```
import warnings;
warnings.filterwarnings('ignore');
# Load all the non-malaria images and setting their Y label as 0
for img in not_parasitized_path.glob("*.png"):
# Load the image from disk
img = image.load_img(img)
# Convert the image to a numpy array
image_array = image.img_to_array(img)
# resize the image (must be done after it has turned into a np array):
image_array = transform.resize(image_array, new_shape, anti_aliasing=True)
# scaling the image data to fall between 0-1 since images have 255 brightness values:
image_array /= 255
# Add the image to the list of images
images.append(image_array)
# For each 'not parasitized' image, the expected value should be 0
labels.append(0)
plt.imshow(images[1])
plt.title('Sample Uninfected Cell')
"Dimensions of image:"
images[1].shape
"Images / Labels we have imported thus far:"
len(images)
len(labels)
```
###### Let's import all the infected images
```
# Load all the malaria images and setting their Y label as 1
for img in parasitized_path.glob("*.png"):
# Load the image from disk
img = image.load_img(img)
# Convert the image to a numpy array
image_array = image.img_to_array(img)
# resize the image (must be done after it has turned into a np array):
image_array = transform.resize(image_array, new_shape, anti_aliasing=True)
# scaling the image data to fall between 0-1 since images have 255 brightness values:
image_array /= 255
# Add the image to the list of images
images.append(image_array)
# For each 'parasitized' image, the expected value should be 1
labels.append(1)
```
Let's take a look at an infected cell:
```
plt.imshow(images[-1])
plt.title('Sample Infected Cell')
"Dimensions of image:"
images[-1].shape
"Images / Labels we have imported thus far:"
len(images)
len(labels)
```
Here we free up memory; further below we save the processed arrays so we don't need to run through all the pre-processing from scratch next time.
```
# memory dump
import gc
gc.collect()
```
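To actually persist the arrays between sessions, one lightweight option is NumPy's `.npy` format (the notebook later uses `h5py` for the same purpose). A sketch with a toy array:

```python
import os
import tempfile

import numpy as np

# Toy stand-in for the preprocessed image array.
arr = np.arange(12, dtype='float32').reshape(3, 4)

# Save once, then reload in a later session without redoing preprocessing.
path = os.path.join(tempfile.mkdtemp(), 'x_data.npy')
np.save(path, arr)
restored = np.load(path)

print(restored.shape)
```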
Now, we randomly shuffle the images and labels (keeping them aligned with each other, of course) before we split into training and testing sets:
```
from sklearn.utils import shuffle
images, labels = shuffle(images, labels)
# checking to make sure that the order is still in place:
plt.imshow(images[-7])
"1 means it is infected:"
labels[-7]
# Create a single numpy array with all the images we loaded (list to np array)
x_data = np.array(images)
# Also convert the labels to a numpy array from a list
y_data = np.array(labels)
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x_data, y_data, test_size = 0.2, random_state = 0)
# type convert the test and training data:
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
f'X_train shape: {X_train.shape}'
f'X_test.shape: {X_test.shape}'
f'Y_train shape: {y_train.shape}'
f'Y_test.shape: {y_test.shape}'
y_train[0:3]
# one hot encoding Y:
from keras.utils import to_categorical
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
y_train[0:3]
import h5py
with h5py.File('X_train.hdf5', 'w') as f:
dset = f.create_dataset("default", data=X_train)
```
```
#IMPORT ALL LIBRARIES HERE
#IMPORT PANDAS LIBRARY
import pandas as pd
#IMPORT POSTGRESQL LIBRARY
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
#IMPORT CHART LIBRARY
from matplotlib import pyplot as plt
from matplotlib import style
#IMPORT PDF LIBRARY
from fpdf import FPDF
#IMPORT BASEPATH LIBRARY
import io
#IMPORT BASE64 IMG LIBRARY
import base64
#IMPORT NUMPY LIBRARY
import numpy as np
#IMPORT EXCEL LIBRARY
import xlsxwriter
#IMPORT SIMILARITY LIBRARY
import n0similarities as n0
#FUNCTION TO UPLOAD DATA FROM CSV TO POSTGRESQL
def uploadToPSQL(host, username, password, database, port, table, judul, filePath, name, subjudul, dataheader, databody):
    #TEST THE DATABASE CONNECTION
    try:
        for t in range(0, len(table)):
            #TURN THE DATA INTO A LIST
            rawstr = [tuple(x) for x in zip(dataheader, databody[t])]
            #CONNECT TO THE DATABASE
            connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=database)
            cursor = connection.cursor()
            connection.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT);
            #CHECK WHETHER THE TABLE EXISTS
            cursor.execute("SELECT * FROM information_schema.tables where table_name=%s", (table[t],))
            exist = bool(cursor.rowcount)
            #IF IT EXISTS, DROP IT FIRST, THEN RECREATE IT
            if exist == True:
                cursor.execute("DROP TABLE "+ table[t] + " CASCADE")
                cursor.execute("CREATE TABLE "+table[t]+" (index SERIAL, tanggal date, total varchar);")
            #IF IT DOES NOT EXIST, CREATE THE TABLE
            else:
                cursor.execute("CREATE TABLE "+table[t]+" (index SERIAL, tanggal date, total varchar);")
            #INSERT THE DATA INTO THE TABLE JUST CREATED
            cursor.execute('INSERT INTO '+table[t]+'(tanggal, total) values ' +str(rawstr)[1:-1])
        #IF EVERYTHING SUCCEEDS, RETURN TRUE
        return True
    #IF THE CONNECTION FAILS, RETURN THE ERROR
    except (Exception, psycopg2.Error) as error:
        return error
    #CLOSE THE CONNECTION
    finally:
        if(connection):
            cursor.close()
            connection.close()
#FUNGSI UNTUK MEMBUAT CHART, DATA YANG DIAMBIL DARI DATABASE DENGAN MENGGUNAKAN ORDER DARI TANGGAL DAN JUGA LIMIT
#DISINI JUGA MEMANGGIL FUNGSI MAKEEXCEL DAN MAKEPDF
def makeChart(host, username, password, db, port, table, judul, filePath, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, limitdata, wilayah, tabledata, basePath):
    try:
        datarowsend = []
        for t in range(0, len(table)):
            # Connect to the database
            connection = psycopg2.connect(user=username, password=password, host=host, port=port, database=db)
            cursor = connection.cursor()
            # Fetch data from the database using the limit passed in below
            postgreSQL_select_Query = "SELECT * FROM "+table[t]+" ORDER BY tanggal DESC LIMIT " + str(limitdata)
            cursor.execute(postgreSQL_select_Query)
            mobile_records = cursor.fetchall()
            uid = []
            lengthx = []
            lengthy = []
            # Store the database rows in local variables
            for row in mobile_records:
                uid.append(row[0])
                lengthx.append(row[1])
                lengthy.append(row[2])
            datarowsend.append(mobile_records)
            # Chart title
            judulgraf = A2 + " " + wilayah[t]
            # Bar chart
            style.use('ggplot')
            fig, ax = plt.subplots()
            # Chart data goes here
            ax.bar(uid, lengthy, align='center')
            # Chart title and axis labels
            ax.set_title(judulgraf)
            ax.set_ylabel('Total')
            ax.set_xlabel('Tanggal')
            ax.set_xticks(uid)
            ax.set_xticklabels((lengthx))
            b = io.BytesIO()
            # Render the chart as PNG
            plt.savefig(b, format='png', bbox_inches="tight")
            # Encode the chart as base64
            barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
            plt.show()
            # Line chart
            # Chart data goes here
            plt.plot(lengthx, lengthy)
            plt.xlabel('Tanggal')
            plt.ylabel('Total')
            # Chart title
            plt.title(judulgraf)
            plt.grid(True)
            l = io.BytesIO()
            # Render the chart as an image
            plt.savefig(l, format='png', bbox_inches="tight")
            # Encode the image as base64
            lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
            plt.show()
            # Pie chart
            # Chart title
            plt.title(judulgraf)
            # Chart data goes here
            plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
                    shadow=True, startangle=180)
            plt.plot(legend=None)
            plt.axis('equal')
            p = io.BytesIO()
            # Render the chart as an image
            plt.savefig(p, format='png', bbox_inches="tight")
            # Encode the chart as base64
            pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
            plt.show()
            # Save the charts to the image directory as PNG files
            # Bar chart
            bardata = base64.b64decode(barChart)
            barname = basePath+'jupyter/CEIC/26. Pemilihan Umum/img/'+name+''+table[t]+'-bar.png'
            with open(barname, 'wb') as f:
                f.write(bardata)
            # Line chart
            linedata = base64.b64decode(lineChart)
            linename = basePath+'jupyter/CEIC/26. Pemilihan Umum/img/'+name+''+table[t]+'-line.png'
            with open(linename, 'wb') as f:
                f.write(linedata)
            # Pie chart
            piedata = base64.b64decode(pieChart)
            piename = basePath+'jupyter/CEIC/26. Pemilihan Umum/img/'+name+''+table[t]+'-pie.png'
            with open(piename, 'wb') as f:
                f.write(piedata)
        # Call the Excel function
        makeExcel(datarowsend, A2, B2, C2, D2, E2, F2, G2, H2, I2, J2, K2, name, limitdata, table, wilayah, basePath)
        # Call the PDF function
        makePDF(datarowsend, judul, barChart, lineChart, pieChart, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2, I2, J2, K2, limitdata, table, wilayah, basePath)
    # If the connection fails
    except (Exception, psycopg2.Error) as error:
        print(error)
    # Close the connection
    finally:
        if(connection):
            cursor.close()
            connection.close()
# Function to build a PDF (including the F2 data table) from the database data
# Uses the FPDF library
def makePDF(datarow, judul, bar, line, pie, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, lengthPDF, table, wilayah, basePath):
    # Set up the PDF with A4 page size in landscape orientation
    pdf = FPDF('L', 'mm', [210,297])
    # Add a page to the PDF
    pdf.add_page()
    # Set font and padding
    pdf.set_font('helvetica', 'B', 20.0)
    pdf.set_xy(145.0, 15.0)
    # Show the PDF title
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
    # Set font and padding
    pdf.set_font('arial', '', 14.0)
    pdf.set_xy(145.0, 25.0)
    # Show the PDF subtitle
    pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
    # Draw a line under the subtitle
    pdf.line(10.0, 30.0, 287.0, 30.0)
    pdf.set_font('times', '', 10.0)
    pdf.set_xy(17.0, 37.0)
    pdf.set_font('Times','B',11.0)
    pdf.ln(0.5)
    th1 = pdf.font_size
    # Build the metadata table in the PDF
pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
pdf.cell(177, 2*th1, A2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Region", border=1, align='C')
pdf.cell(177, 2*th1, B2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Frekuensi", border=1, align='C')
pdf.cell(177, 2*th1, C2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Unit", border=1, align='C')
pdf.cell(177, 2*th1, D2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Sumber", border=1, align='C')
pdf.cell(177, 2*th1, E2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Status", border=1, align='C')
pdf.cell(177, 2*th1, F2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "ID Seri", border=1, align='C')
pdf.cell(177, 2*th1, G2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Kode SR", border=1, align='C')
pdf.cell(177, 2*th1, H2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Tanggal Obs. Pertama", border=1, align='C')
pdf.cell(177, 2*th1, str(I2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Tanggal Obs. Terakhir ", border=1, align='C')
pdf.cell(177, 2*th1, str(J2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Waktu pembaruan terakhir", border=1, align='C')
pdf.cell(177, 2*th1, str(K2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.set_xy(17.0, 125.0)
pdf.set_font('Times','B',11.0)
epw = pdf.w - 2*pdf.l_margin
col_width = epw/(lengthPDF+1)
pdf.ln(0.5)
th = pdf.font_size
    # Header of the F2 data table
    pdf.cell(col_width, 2*th, str("Wilayah"), border=1, align='C')
    # Loop over the dates for the header
    for row in datarow[0]:
        pdf.cell(col_width, 2*th, str(row[1]), border=1, align='C')
    pdf.ln(2*th)
    # Body of the F2 table
    for w in range(0, len(table)):
        data = list(datarow[w])
        pdf.set_font('Times','B',10.0)
        pdf.set_font('Arial','',9)
        pdf.cell(col_width, 2*th, wilayah[w], border=1, align='C')
        # Data by date
        for row in data:
            pdf.cell(col_width, 2*th, str(row[2]), border=1, align='C')
        pdf.ln(2*th)
    # Place the chart images
    for s in range(0, len(table)):
        col = pdf.w - 2*pdf.l_margin
        pdf.ln(2*th)
        widthcol = col/3
        # Add a page
        pdf.add_page()
        # Load the chart images from the directory written above
        pdf.image(basePath+'jupyter/CEIC/26. Pemilihan Umum/img/'+name+''+table[s]+'-bar.png', link='', type='', x=8, y=80, w=widthcol)
        pdf.set_xy(17.0, 144.0)
        col = pdf.w - 2*pdf.l_margin
        pdf.image(basePath+'jupyter/CEIC/26. Pemilihan Umum/img/'+name+''+table[s]+'-line.png', link='', type='', x=103, y=80, w=widthcol)
        pdf.set_xy(17.0, 144.0)
        col = pdf.w - 2*pdf.l_margin
        pdf.image(basePath+'jupyter/CEIC/26. Pemilihan Umum/img/'+name+''+table[s]+'-pie.png', link='', type='', x=195, y=80, w=widthcol)
        pdf.ln(4*th)
    # Write the PDF to disk
pdf.output(basePath+'jupyter/CEIC/26. Pemilihan Umum/pdf/'+A2+'.pdf', 'F')
# makeExcel exports the database data to an Excel file (the F2 table)
# Uses the xlsxwriter library
def makeExcel(datarow, A2, B2, C2, D2, E2, F2, G2, H2, I2, J2, K2, name, limit, table, wilayah, basePath):
    # Create the Excel file
    workbook = xlsxwriter.Workbook(basePath+'jupyter/CEIC/26. Pemilihan Umum/excel/'+A2+'.xlsx')
    # Create the worksheet
    worksheet = workbook.add_worksheet('sheet1')
    # Formats for borders and bold fonts
    row1 = workbook.add_format({'border': 2, 'bold': 1})
    row2 = workbook.add_format({'border': 2})
    # Header of the F2 Excel table
    header = ["Wilayah", "Kategori","Region","Frekuensi","Unit","Sumber","Status","ID Seri","Kode SR","Tanggal Obs. Pertama","Tanggal Obs. Terakhir ","Waktu pembaruan terakhir"]
    # Append the dates to the header
    for rowhead2 in datarow[0]:
        header.append(str(rowhead2[1]))
    # Write the header row cell by cell
    for col_num, data in enumerate(header):
        worksheet.write(0, col_num, data, row1)
    # Write the body of the F2 table
    for w in range(0, len(table)):
        data = list(datarow[w])
        body = [wilayah[w], A2, B2, C2, D2, E2, F2, G2, H2, str(I2.date()), str(J2.date()), str(K2.date())]
        for rowbody2 in data:
            body.append(str(rowbody2[2]))
        for col_num, data in enumerate(body):
            worksheet.write(w+1, col_num, data, row2)
    # Close the Excel file
workbook.close()
# This is where variables are defined before being passed to the functions.
# uploadToPSQL is called first; if it succeeds, makeChart is called,
# which in turn calls makeExcel and makePDF.
# Base path used when creating or loading files
basePath = 'C:/Users/ASUS/Documents/bappenas/'
# Region similarity file
filePathwilayah = basePath+'data mentah/CEIC/allwilayah.xlsx'
# Read the Excel file with pandas
readexcelwilayah = pd.read_excel(filePathwilayah)
dfwilayah = list(readexcelwilayah.values)
readexcelwilayah.fillna(0)
allwilayah = []
# Choose the region type: province (prov), regency/city (kabkot), district (kec) or village (kel)
tipewilayah = 'prov'
if tipewilayah == 'prov':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][1])
elif tipewilayah=='kabkot':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][3])
elif tipewilayah == 'kec':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][5])
elif tipewilayah == 'kel':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][7])
semuawilayah = list(set(allwilayah))
# Database settings and the data to send to the functions
name = "02. Pemilihan Umum Presiden Hasil Perolehan Suara (GEB001-GEB036)"
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "ceic"
judul = "Produk Domestik Bruto (AA001-AA007)"
subjudul = "Badan Perencanaan Pembangunan Nasional"
filePath = basePath+'data mentah/CEIC/26. Pemilihan Umum/'+name+'.xlsx'
limitdata = int(8)
readexcel = pd.read_excel(filePath)
tabledata = []
wilayah = []
databody = []
# Read the Excel data with pandas
df = list(readexcel.values)
head = list(readexcel)
body = list(df[0])
readexcel.fillna(0)
# Choose the range of rows to display
rangeawal = 106
rangeakhir = 107
rowrange = range(rangeawal, rangeakhir)
# This decides whether the selected data is matched against region names.
# Set jenisdata to 'Wilayah' to enable region similarity matching;
# any other value skips it.
jenisdata = "Indonesia"
# Rows are looped over to find the most similar region name.
# This branch runs when jenisdata is 'Wilayah'.
if jenisdata == 'Wilayah':
for x in rowrange:
rethasil = 0
big_w = 0
for w in range(0, len(semuawilayah)):
namawilayah = semuawilayah[w].lower().strip()
nama_wilayah_len = len(namawilayah)
hasil = n0.get_levenshtein_similarity(df[x][0].lower().strip()[nama_wilayah_len*-1:], namawilayah)
if hasil > rethasil:
rethasil = hasil
big_w = w
wilayah.append(semuawilayah[big_w].capitalize())
tabledata.append('produkdomestikbruto_'+semuawilayah[big_w].lower().replace(" ", "") + "" + str(x))
testbody = []
for listbody in df[x][11:]:
if ~np.isnan(listbody) == False:
testbody.append(str('0'))
else:
testbody.append(str(listbody))
databody.append(testbody)
#OTHERWISE (NOT REGION-BASED) THIS BRANCH RUNS
else:
for x in rowrange:
wilayah.append(jenisdata.capitalize())
tabledata.append('produkdomestikbruto_'+jenisdata.lower().replace(" ", "") + "" + str(x))
testbody = []
for listbody in df[x][11:]:
if ~np.isnan(listbody) == False:
testbody.append(str('0'))
else:
testbody.append(str(listbody))
databody.append(testbody)
#HEADER VALUES FOR THE PDF AND EXCEL OUTPUT
A2 = "Data Migas"
B2 = df[rangeawal][1]
C2 = df[rangeawal][2]
D2 = df[rangeawal][3]
E2 = df[rangeawal][4]
F2 = df[rangeawal][5]
G2 = df[rangeawal][6]
H2 = df[rangeawal][7]
I2 = df[rangeawal][8]
J2 = df[rangeawal][9]
K2 = df[rangeawal][10]
#HEADER DATES FOR THE F2 DATA TABLE
dataheader = []
for listhead in head[11:]:
dataheader.append(str(listhead))
#UPLOAD THE DATA TO SQL; IF IT SUCCEEDS, CALL THE CHART FUNCTION
sql = uploadToPSQL(host, username, password, database, port, tabledata, judul, filePath, name, subjudul, dataheader, databody)
if sql == True:
makeChart(host, username, password, database, port, tabledata, judul, filePath, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, limitdata, wilayah, tabledata, basePath)
else:
print(sql)
```
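The region-matching step above relies on a Levenshtein-style similarity from the external `n0similarities` helper. A rough stand-in (an assumption about its behaviour, for illustration only) can be built on the standard library's `difflib`:

```python
import difflib

def levenshtein_similarity(a, b):
    # Ratio in [0, 1]; 1.0 means the strings match exactly.
    return difflib.SequenceMatcher(None, a, b).ratio()

# Pick the region whose name best matches the end of a data label,
# mirroring how the script compares a suffix of df[x][0] to each region name.
regions = ['jawa barat', 'jawa timur', 'bali']
label = 'Produk Domestik Bruto Jawa Barat'.lower().strip()
best = max(regions, key=lambda r: levenshtein_similarity(label[-len(r):], r))
print(best)  # 'jawa barat'
```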
```
#default_exp rnn
#hide
from nbdev.showdoc import *
```
# Recurrent Neural Networks
> Summary: Recurrent Neural Networks, RNN, LSTM, Long Short-term Memory, seq2seq
## Implementation
```
import numpy as np
from tqdm import tqdm
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, TimeDistributed, Dense, RepeatVector
#export
def generate_data(training_size=10):
X = []
y = []
duplicates = set()
p_bar = tqdm(total=training_size)
while len(X) < training_size:
a = int(''.join(np.random.choice(list('0123456789')) for i in range(np.random.randint(1, DIGITS + 1))))
b = int(''.join(np.random.choice(list('0123456789')) for i in range(np.random.randint(1, DIGITS + 1))))
pair = tuple(sorted((a, b)))
if pair in duplicates:
continue
duplicates.add(pair)
pair_str = '{}+{}'.format(a,b)
pair_str = ' ' * (MAXLEN - len(pair_str)) + pair_str
ans = str(a + b)
ans = ' ' * ((DIGITS + 1) - len(ans)) + ans
X.append(pair_str)
y.append(ans)
p_bar.update(1)
return X,y
#export
def encode(questions, answers, alphabet):
char_to_index = dict((c, i) for i, c in enumerate(alphabet))
x = np.zeros((len(questions), MAXLEN, len(alphabet)))
y = np.zeros((len(questions), DIGITS + 1, len(alphabet)))
for q_counter, pair in enumerate(questions):
encoded_pair = np.zeros((MAXLEN, len(alphabet)))
for i, c in enumerate(pair):
encoded_pair[i, char_to_index[c]] = 1
x[q_counter] = encoded_pair
for a_counter, ans in enumerate(answers):
encoded_ans = np.zeros((DIGITS + 1, len(alphabet)))
for i, c in enumerate(ans):
encoded_ans[i, char_to_index[c]] = 1
y[a_counter] = encoded_ans
return x, y
#export
def decode(seq, alphabet, calc_argmax=True):
index_to_char = dict((i, c) for i, c in enumerate(alphabet))
if calc_argmax:
seq = np.argmax(seq, axis=-1)
return ''.join(index_to_char[c] for c in seq)
```
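As a quick illustration of the one-hot scheme used by `encode` and `decode` above (a standalone sketch, not part of the notebook's pipeline): decoding the argmax of a one-hot matrix recovers the original padded string.

```python
import numpy as np

alphabet = list('0123456789+ ')
char_to_index = {c: i for i, c in enumerate(alphabet)}

# One-hot encode the padded question "  12+7"
s = '  12+7'
one_hot = np.zeros((len(s), len(alphabet)))
for i, c in enumerate(s):
    one_hot[i, char_to_index[c]] = 1

# Decoding via argmax recovers the original characters
decoded = ''.join(alphabet[i] for i in np.argmax(one_hot, axis=-1))
print(decoded == s)  # True
```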
## Let's generate some data
```
DIGITS = 3
MAXLEN = DIGITS + DIGITS + 1
n_training_examples = 1000
print('Generating data...', end=' ')
pairs,ans = generate_data(n_training_examples)
print('done!')
print('Size of Training set: ' , len(pairs))
alphabet = list('0123456789+ ')
x,y = encode(pairs, ans, alphabet)
```
## Split the data
We split the data into training and testing sets.
```
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1)
print('x_train shape = ' , x_train.shape)
print('y_train shape = ', y_train.shape)
```
## Build the Model
Now it's time to build an RNN with LSTM cells.
```
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape=(MAXLEN, len(alphabet))))
model.add(RepeatVector(DIGITS + 1))
model.add(LSTM(128, return_sequences=True))
model.add(TimeDistributed(Dense(len(alphabet), activation='softmax')))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```
## Train the Model
After building and compiling the model, we train it.
```
EPOCHS = 2
BATCH_SIZE = 32
class colors:
ok = '\033[92m'
fail = '\033[91m'
close = '\033[0m'
for epoch in range(1, EPOCHS + 1):
print('Iteration ', epoch)
model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=1, validation_data=(x_test, y_test), verbose=1)
# Select 10 samples from test set and visualize errors
for i in range(10):
index = np.random.randint(0, len(x_test))
q = x_test[np.array([index])]
ans = y_test[np.array([index])]
preds = np.argmax(model.predict(q),axis=-1)
question = decode(q[0],alphabet)
actual = decode(ans[0],alphabet)
guessed = decode(preds[0], alphabet, calc_argmax=False)
print('Q:', question, end=' ')
print(' Actual:', actual, end=' ')
if actual == guessed:
print(colors.ok + ' ☑' + colors.close, end=' ')
else:
print(colors.fail + ' ☒' + colors.close, end=' ')
print('Guessed:', guessed)
```
```
import numpy as np
from scipy.special import expit
from scipy.optimize import minimize
import matplotlib.pyplot as plt
from scipy.stats import norm
import math
import csv
```
# Part a
```
def gass_hermite_quad(f, degree):
    '''
    Approximate the Gauss-Hermite integral of f against the weight exp(-x^2) numerically.
    :param f: target function; takes an array x = [x0, x1, ..., xn] and returns an array of function values f(x) = [f(x0), f(x1), ..., f(xn)]
    :param degree: integer, >= 1, number of quadrature points
    :return: the weighted sum approximating the integral
    '''
points, weights = np.polynomial.hermite.hermgauss(degree)
#function values at given points
f_x = f(points)
#weighted sum of function values
F = np.sum( f_x * weights)
return F
def p(x):
return(np.exp(-x*x)*expit(10*x + 3))
def p_sigmoid(x):
return(expit(10*x + 3))
print("=====================================================================\n")
print("Gauss-Hermite Quadrature approximation : ")
degree = 100
x = np.linspace(-5, 5, 500)
F = gass_hermite_quad(p_sigmoid, degree=degree)
y = p(x)/F
print("Normalisation Constant: ", F)
plt.plot(x,y,label="Gauss Hermite")
plt.legend()
plt.show()
```
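As a quick sanity check on the quadrature helper above (a standalone snippet, not part of the assignment): with f(x) = 1 the weighted sum reduces to the sum of the Hermite weights, which equals the integral of exp(-x^2) over the real line, i.e. sqrt(pi).

```python
import numpy as np

# Gauss-Hermite nodes and weights of degree 10
points, weights = np.polynomial.hermite.hermgauss(10)

# With f(x) = 1 the weighted sum is just the sum of the weights,
# which equals the integral of exp(-x**2) over the real line: sqrt(pi)
approx = np.sum(np.ones_like(points) * weights)
print(approx)
```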
# Part b
```
def neg_log(x):
y = -x*x + np.log(expit(10*x+3))
return(-y)
def gaussian(mean, std, x):
return(norm.pdf(x, loc = mean, scale=std))
def laplace_approx(x):
res = minimize(neg_log, np.array(0))
mean = (res.x)
sigmoid = expit(10*mean+3)
var = 1/(2 + 100 * sigmoid*(1-sigmoid))
y = gaussian(mean, math.sqrt(var), x)
print(("Mean : {}, variance : {}").format(mean, var))
return(y)
x = np.linspace(-5, 5, 500)
y_laplace = laplace_approx(x)
print("Normalisation Constant: ", F)
plt.plot(x,y_laplace,label="Laplace Approximation")
plt.plot(x,y,label="Gauss Hermite")
plt.legend()
plt.show()
```
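The variance used in `laplace_approx` comes from inverting the second derivative of the negative log density at the mode; the expression `1/(2 + 100 * sigmoid * (1 - sigmoid))` can be checked against a finite-difference Hessian (a standalone sketch, checking the derivative at an arbitrary point rather than the fitted mode):

```python
import math

def neg_log_p(x, a=10.0, b=3.0):
    # -log of exp(-x**2) * sigmoid(a*x + b), up to an additive constant
    sig = 1.0 / (1.0 + math.exp(-(a * x + b)))
    return x * x - math.log(sig)

def second_derivative(f, x, h=1e-4):
    # Central finite-difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# Analytic second derivative of neg_log_p: 2 + a**2 * sigmoid * (1 - sigmoid)
x0 = 0.1
sig = 1.0 / (1.0 + math.exp(-(10.0 * x0 + 3.0)))
analytic = 2.0 + 100.0 * sig * (1.0 - sig)
print(abs(second_derivative(neg_log_p, x0) - analytic) < 1e-3)  # True
```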
# Part c
```
def get_lambda(xi):
lamb = -(1/(2*(xi*10+3)))*(expit(10*xi+3) - 0.5)
return lamb
def compute_var_local_inference(x):
xi = 0
degree = 100
Z1 = gass_hermite_quad(p_sigmoid, degree)
def get_sigmod_y(x):
lamb = get_lambda(xi)
sigmoid_y = expit(10*xi+3) * np.exp(5 * (x - xi) + lamb * np.multiply(10*(x-xi), 10*(x+xi)+6))
return sigmoid_y
for i in range(100):
qx = get_sigmod_y(x)*np.exp(-x*x)/Z1
xi = x[np.argmax(qx)]
return qx
x = np.linspace(-5, 5, 500)
y_inf = compute_var_local_inference(x)
plt.plot(x,y_laplace,label="Laplace Approximation")
plt.plot(x,y,label="Gauss Hermite")
plt.plot(x,y_inf,label="Variantional Inference")
plt.legend()
plt.show()
```
```
from sqlalchemy import create_engine
import api_keys
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import sys
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.linear_model import LinearRegression
24 * 60 / 5  # number of 5-minute intervals in a day (288)
DB_USER = api_keys.DB_USER
DB_PASS = api_keys.DB_PASS
DB_URL = api_keys.DB_URL
engine = create_engine("mysql+pymysql://{0}:{1}@{2}".format(DB_USER, DB_PASS, DB_URL), echo=True)
connection = engine.connect()
statement = """SELECT * FROM dublin_bikes.availability, dublin_bikes.weather_current
where availability.number = 2 && weather_current.station_number = 2 && timestampdiff(MINUTE,availability.time_queried, weather_current.time_queried) < 5 && timestampdiff(MINUTE,availability.time_queried, weather_current.time_queried) > 0
""" # create select statement for stations table
df = pd.read_sql_query(statement, engine) # https://stackoverflow.com/questions/29525808/sqlalchemy-orm-conversion-to-pandas-dataframe
# The following notebook is based on material presented in Data Analytics module COMP47350, labs 7 and 9
print("Mb:", sys.getsizeof(df)/ (2 ** 20))
df.head(5)
df.dtypes
categorical_columns = df[['weather_description','weather_main','station_number','number','station_status']].columns
# Convert data type to category for these columns
for column in categorical_columns:
df[column] = df[column].astype('category')
continuous_columns = df.select_dtypes(['int64']).columns
datetime_columns = df.select_dtypes(['datetime64[ns]']).columns
df.dtypes
df["humidity"] = df["humidity"].fillna(0)
df.corr()
plt.plot(df["time_queried"],df["available_bikes"])
plt.plot(df["time_queried"],df["temp"])
weather_dummies = pd.get_dummies(df['weather_main'], prefix='weather_main', drop_first=True)
df = pd.get_dummies(df, drop_first=True)
df.head(5)
# Prepare the descriptive features
#print(df.head(10))
cont_features = ['temp', 'wind_speed', 'pressure', 'humidity']
categ_features = weather_dummies.columns.values.tolist()
features = cont_features + categ_features
X = df[features]
y = df.available_bikes
print("\nDescriptive features in X:\n", X)
print("\nTarget feature in y:\n", y)
linreg = LinearRegression().fit(X, y)
# Use more features for training
# Train aka fit, a model using all continuous features.
linreg = LinearRegression().fit(X[features], y)
# Print the weights learned for each feature.
print("Features: \n", features)
print("Coefficients: \n", linreg.coef_)
print("\nIntercept: \n", linreg.intercept_)
linreg_predictions = linreg.predict(X[features])
print("\nPredictions with linear regression: \n")
actual_vs_predicted_linreg = pd.concat([y, pd.DataFrame(linreg_predictions, columns=['Predicted'], index=y.index)], axis=1)
print(actual_vs_predicted_linreg)
#This function is used repeatedly to compute all metrics
def printMetrics(testActualVal, predictions):
#classification evaluation measures
print('\n==============================================================================')
print("MAE: ", metrics.mean_absolute_error(testActualVal, predictions))
#print("MSE: ", metrics.mean_squared_error(testActualVal, predictions))
print("RMSE: ", metrics.mean_squared_error(testActualVal, predictions)**0.5)
print("R2: ", metrics.r2_score(testActualVal, predictions))
printMetrics(y, linreg_predictions)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print("Training data:\n", pd.concat([X_train, y_train], axis=1))
print("\nTest data:\n", pd.concat([X_test, y_test], axis=1))
# Train on the training sample and test on the test sample.
linreg = LinearRegression().fit(X_train, y_train)
# Print the weights learned for each feature.
#print(linreg_train.coef_)
print("Features and coefficients:", list(zip(features, linreg.coef_)))
X_train
# Predicted price on training set
train_predictions = linreg.predict(X_train)
print("Actual vs predicted on training:\n", pd.concat([y_train, pd.DataFrame(train_predictions, columns=['Predicted'], index=y_train.index)], axis=1))
printMetrics(y_train, train_predictions)
# Predicted price on test set
test_predictions = linreg.predict(X_test)
print("Actual vs predicted on test:\n", pd.concat([y_test, pd.DataFrame(test_predictions, columns=['Predicted'], index=y_test.index)], axis=1))
printMetrics(y_test, test_predictions)
```
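As a cross-check on the sklearn metrics reported by `printMetrics`, MAE and RMSE can be computed directly from their definitions (a standalone sketch with made-up numbers):

```python
import math

actual = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]

# MAE: mean of the absolute errors
mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
# RMSE: square root of the mean squared error
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
print(mae, rmse)
```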
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# AI Platform SDK: Train & deploy a TensorFlow model with hosted runtimes (aka pre-built containers)
## Installation
Install the Google *cloud-storage* library as well.
```
! pip3 install google-cloud-storage
```
### Restart the Kernel
Once you've installed the AI Platform (Unified) SDK and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU run-time
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**
### Set up your GCP project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a GCP project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the AutoML APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)
4. [Google Cloud SDK](https://cloud.google.com/sdk) is already installed in AutoML Notebooks.
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
```
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are the regions supported for AutoML. We recommend, when possible, choosing the region closest to you.
Currently project resources must be in the `us-central1` region to use this API.
```
REGION = 'us-central1' #@param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your GCP account
**If you are using AutoML Notebooks**, your environment is already
authenticated. Skip this step.
*Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.*
```
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AutoML, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
```
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION gs://$BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al gs://$BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
```
import json
import time
from googleapiclient import discovery, errors
from google.protobuf.json_format import MessageToJson
from google.protobuf.struct_pb2 import Value
```
#### AutoML constants
Setup up the following constants for AutoML:
- `PARENT`: The AutoML location root path for dataset, model and endpoint resources.
```
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID
```
## Clients
We use the Google APIs Client Library for Python to call the AI Platform Training and Prediction API without manually constructing HTTP requests.
```
cloudml = discovery.build("ml", "v1")
```
## Prepare a trainer script
### Package assembly
```
! rm -rf cifar
! mkdir cifar
! touch cifar/README.md
setup_cfg = "[egg_info]\n\
tag_build =\n\
tag_date = 0"
! echo "$setup_cfg" > cifar/setup.cfg
setup_py = "import setuptools\n\
# Requires TensorFlow Datasets\n\
setuptools.setup(\n\
install_requires=[\n\
'tensorflow_datasets==1.3.0',\n\
],\n\
packages=setuptools.find_packages())"
! echo "$setup_py" > cifar/setup.py
pkg_info = "Metadata-Version: 1.0\n\
Name: Custom Training CIFAR-10\n\
Version: 0.0.0\n\
Summary: Demonstration training script\n\
Home-page: www.google.com\n\
Author: Google\n\
Author-email: aferlitsch@google.com\n\
License: Public\n\
Description: Demo\n\
Platform: AI Platform (Unified)"
! echo "$pkg_info" > cifar/PKG-INFO
! mkdir cifar/trainer
! touch cifar/trainer/__init__.py
```
### Task.py contents
```
%%writefile cifar/trainer/task.py
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default='/tmp/saved_model', type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
NUM_WORKERS = strategy.num_replicas_in_sync
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
```
### Store training script on your Cloud Storage bucket
```
! rm -f cifar.tar cifar.tar.gz
! tar cvf cifar.tar cifar
! gzip cifar.tar
! gsutil cp cifar.tar.gz gs://$BUCKET_NAME/trainer_cifar.tar.gz
```
## Train a model
### [projects.jobs.create](https://cloud.google.com/ai-platform/training/docs/reference/rest/v1/projects.jobs/create)
#### Request
```
JOB_NAME = "custom_job_TF_" + TIMESTAMP
TRAINING_INPUTS = {
"scaleTier": "CUSTOM",
"masterType": "n1-standard-4",
"masterConfig": {
"acceleratorConfig": {
"count": "1",
"type": "NVIDIA_TESLA_K80"
}
},
"packageUris": [
"gs://" + BUCKET_NAME + "/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=" + 'gs://{}/{}'.format(BUCKET_NAME, JOB_NAME),
"--epochs=" + str(20),
"--steps=" + str(100),
"--distribute=" + "single"
],
"region": REGION,
"runtimeVersion": "2.1",
"pythonVersion": "3.7"
}
body = {"jobId": JOB_NAME, "trainingInput": TRAINING_INPUTS}
request = cloudml.projects().jobs().create(
parent=PARENT
)
request.body = json.loads(json.dumps(body, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().jobs().create(
parent=PARENT,
body=body
)
```
*Example output*:
```
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/jobs?alt=json",
"method": "POST",
"body": {
"jobId": "custom_job_TF_20210325211532",
"trainingInput": {
"scaleTier": "CUSTOM",
"masterType": "n1-standard-4",
"masterConfig": {
"acceleratorConfig": {
"count": "1",
"type": "NVIDIA_TESLA_K80"
}
},
"packageUris": [
"gs://migration-ucaip-trainingaip-20210325211532/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"--epochs=20",
"--steps=100",
"--distribute=single"
],
"region": "us-central1",
"runtimeVersion": "2.1",
"pythonVersion": "3.7"
}
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.jobs.create",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
```
#### Call
```
response = request.execute()
```
#### Response
```
print(json.dumps(response, indent=2))
```
*Example output*:
```
{
"jobId": "custom_job_TF_20210325211532",
"trainingInput": {
"scaleTier": "CUSTOM",
"masterType": "n1-standard-4",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210325211532/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"--epochs=20",
"--steps=100",
"--distribute=single"
],
"region": "us-central1",
"runtimeVersion": "2.1",
"pythonVersion": "3.7",
"masterConfig": {
"acceleratorConfig": {
"count": "1",
"type": "NVIDIA_TESLA_K80"
}
}
},
"createTime": "2021-03-25T21:15:40Z",
"state": "QUEUED",
"trainingOutput": {},
"etag": "dH4whflp8Fg="
}
```
```
# The full unique ID for the custom training job
custom_training_id = f'{PARENT}/jobs/{response["jobId"]}'
# The short numeric ID for the custom training job
custom_training_short_id = response["jobId"]
print(custom_training_id)
```
### [projects.jobs.get](https://cloud.google.com/ai-platform/training/docs/reference/rest/v1/projects.jobs/get)
#### Call
```
request = cloudml.projects().jobs().get(
name=custom_training_id
)
response = request.execute()
```
#### Response
```
print(json.dumps(response, indent=2))
```
*Example output*:
```
{
"jobId": "custom_job_TF_20210325211532",
"trainingInput": {
"scaleTier": "CUSTOM",
"masterType": "n1-standard-4",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210325211532/trainer_cifar.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"--epochs=20",
"--steps=100",
"--distribute=single"
],
"region": "us-central1",
"runtimeVersion": "2.1",
"pythonVersion": "3.7",
"masterConfig": {
"acceleratorConfig": {
"count": "1",
"type": "NVIDIA_TESLA_K80"
}
}
},
"createTime": "2021-03-25T21:15:40Z",
"state": "PREPARING",
"trainingOutput": {},
"etag": "eLnYfClHtKU="
}
```
```
while True:
response = cloudml.projects().jobs().get(name=custom_training_id).execute()
if response["state"] != "SUCCEEDED":
print("Training job has not completed:", response["state"])
if response["state"] == "FAILED":
break
else:
break
time.sleep(20)
# model artifact output directory on Google Cloud Storage
model_artifact_dir = response["trainingInput"]["args"][0].split("=")[-1]
print("artifact location " + model_artifact_dir)
```
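The polling pattern above (fetch the job state, check for a terminal value, sleep, repeat) can be factored into a small helper. This is a minimal sketch with hypothetical names (`wait_for_state`, `fetch_state`); in the notebook, `fetch_state` would wrap `cloudml.projects().jobs().get(...).execute()["state"]`:

```python
import time

def wait_for_state(fetch_state, terminal=("SUCCEEDED", "FAILED"),
                   poll_interval=0.0, max_polls=100):
    # Poll fetch_state() until it reports a terminal state, sleeping
    # between attempts; raise if the job never finishes.
    for _ in range(max_polls):
        state = fetch_state()
        if state in terminal:
            return state
        time.sleep(poll_interval)
    raise TimeoutError("job never reached a terminal state")

# Simulated job: queued, then preparing, then done.
states = iter(["QUEUED", "PREPARING", "SUCCEEDED"])
print(wait_for_state(lambda: next(states)))  # SUCCEEDED
```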
### Serving function for trained model (image data)
```
import tensorflow as tf
model = tf.keras.models.load_model(model_artifact_dir)
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
    decoded = tf.io.decode_jpeg(bytes_input, channels=3)
    # convert_image_dtype already rescales uint8 [0, 255] pixels to float32
    # [0, 1], matching the scale() used at training time, so no further
    # division by 255 is needed
    decoded = tf.image.convert_image_dtype(decoded, tf.float32)
    resized = tf.image.resize(decoded, size=(32, 32))
    return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False)
return {CONCRETE_INPUT: decoded_images} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
@tf.function(input_signature=[tf.TensorSpec([None], tf.string), tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs, key):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return {"prediction": prob, "key":key}
tf.saved_model.save(
model,
model_artifact_dir,
signatures={
'serving_default': serving_fn,
}
)
loaded = tf.saved_model.load(model_artifact_dir)
tensors_specs = list(loaded.signatures['serving_default'].structured_input_signature)
print('Tensors specs:', tensors_specs)
input_name = [ v for k, v in tensors_specs[1].items() if k != "key"][0].name
print('Bytes input tensor name:', input_name)
```
*Example output*:
```
Tensors specs: [(), {'bytes_inputs': TensorSpec(shape=(None,), dtype=tf.string, name='bytes_inputs'), 'key': TensorSpec(shape=(None,), dtype=tf.string, name='key')}]
Bytes input tensor name: bytes_inputs
```
## Make batch predictions
### Prepare files for batch prediction
```
import base64
import cv2
import json
import numpy as np
import tensorflow as tf
(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
test_image_1, test_label_1 = x_test[0], y_test[0]
test_image_2, test_label_2 = x_test[1], y_test[1]
# x_test images are already uint8 in [0, 255], so write them out unscaled
cv2.imwrite('tmp1.jpg', test_image_1)
cv2.imwrite('tmp2.jpg', test_image_2)
gcs_input_uri = "gs://" + BUCKET_NAME + "/" + "test.json"
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
for img in ["tmp1.jpg", "tmp2.jpg"]:
bytes = tf.io.read_file(img)
b64str = base64.b64encode(bytes.numpy()).decode('utf-8')
f.write(json.dumps({"key": img, input_name: {"b64": b64str}}) + '\n')
! gsutil cat $gcs_input_uri
```
*Example output*:
```
{"key": "tmp1.jpg", "bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD6E1zw/qemaZY669mkdtqsTPZTpMH85Y3KMcKeOR36444NZGj2/ibWPHaaPeHSLXRbq3jSw1O7u3V3u9zb0ZAh+QIFO4EkliCBjnwv9lfxtrviTxBbW974le/0nQ/h5ohms7m4b92bhVlkEfPIDuwJ6gyADgCuWh1fxP8As6/tGad8H5PiRrHjW6tNd1O/iXUr5Z7mx0uSZlinHODiRQCqrgGTGPmwPyqfClGlnM6Em3TSi/N3Wtnto015H6y+MK08kp14QSqScle6tFxel0+6aZ9d6/rvhXwH4407wWtq+uSXth9pa5jcwKUBIbyxzkL0Ock8nHQV2x0NtN0Gw8a6PDOunXc3liO5GZIGxwG6YBxx1x0zkV4L8Xfij4k8X/Gr4V+HdJtDpdgui3GoajJBAXlkuGvNoUEDcD5MYyuN3zEnpX0B4Q+Iunafdap8OPFCG/sL+PzLkGNgbQB1O7Jxh1JOCOvHXNfUYrh/LPqMo0oKDgvdl10117nzGD4izR5hGdWcp8zs4+umisflx8DNXi/Z/wDHviPTfiP4g+x2WieFtV03U5r9miLw2ilonTIySWijZCB6Yr2X4R/tQT/tC/s56f8AGn4C/AvxTrXiq7jksW1G78NxRlNiRxIrzO5EwiVHAePAfeoO1lIrqv2pf2Xz+1t+z3feC9E1GLSvE2paQtraa1cISXiEqu9tKVydrbMZ5Kkg8jIr234a/Bq7+EngjQPAng3wzB/ZOl6ZFa2tpp/yeWiqFB2Hq2ASeuTz15r9ixHBa+vSp1JXpxXuy6vyfpbXuz8jocUyWCVSirTb1j09V95e+E3hnwXr8dn8QPjLaSWZBguP+EcudKSW6gnSMfLHOrcQh2djCSAxY5BxkzfEDx1H4n8ZyvpEC2WnMAwighMe8hvl3gZyQCB15K5xWNq3iKbVNVk8MW91NZzxLllkt9jL2z0I/DrXCeG47T4seNL3wN4c1nULKPTY2GoX8YYNcSkfKisxwis2ASMnk9AK7f8AiHuQ47CulWlKzfM7S5W+vRfgZQ47zvA4qNako3irK8eZLpfVn//Z"}}
{"key": "tmp2.jpg", "bytes_inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD8V5UubRQlxlSvDAtyD6dadbW91fK8lrFI6o6KzrnALHCj8cH8jX3J+1V+wR8adOjsrDxR8EPhzohsg13qKfD+zddRWBF2u0sR42AnIzjJAAzzXmnwh/Yk+D3jX4Q6h478cftgaX4Al/tR4f8AhHdf0eRruVI+IpdkbFiWLsAqgnrXZLBVFWcI6/gc0MVSlSU2eZaX+zdr954Nv/EEt7FNeWyrJHZ2moRn93tYsTuwcg7OBz19q8sa7AUEMf8AvqvoHwX+yz8Vb74gXtn4M+Euq/EbSYpV+y6vf2txptrMOAz+XIysR0xu9M4qf9pn9mf4jJoNprJ+BGgeCn0mHZfQ2OqRl793fAZUDkkAbcd8k1pUw1OUE6e/bf8AEVOs1JqT3P19/aT/AOCMf7RH7Qfx5134zeNf2z7S18Q+PkSWWDSb6406BrSMFYrWNCCAsakDbnOSSeTXg+sf8G3viHwt49ez1jxdY6zqds1veTwT+MzBdqJWnWCYb0DhXe3n2sOGMD4J2HH7IfD3xnc/EPwl4Y8R6t458M28y+EL1NRh1nS3vGXV3a1+w3S4mjCwxxpdCaFSjTNLGRImwk+A6f8AAL9oH4gaX4+tf+Ckn7Vfw4+I2k3fiW6m+HOneFNPn0WDw9piTLLbuUiYGWZsCNYp/tMtqiSbL+b7RMrqvWxVDKamZ89BOg03Q9+deupOpBRotU1CM4OMak/aSUIxkouTbUjmllc0qic60XrGNldX/dtNr/n2+aS5r3XI3ytKz+Jof+CN2r6LYHU/ibqOo2iQzFmmn8eXLfugMbDhwMcdeprg/iV+zX+zx8O9Mu9f8NaRplw9oSr6g0sl0BgdBNMzZ+i9K+svi9P+yv8ADAnRfhl4MfxNdhSDe63fzS2sJHdYpHbfjtu/KvhL9ub4tarruhy2JvJMsdjJFGFj28gKqrgKo9B6VhlvEGMzfDxm8M6N+kpRlJeT5dE/mwoZDiMO+evVb8j/2Q=="}}
```
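The input file above is JSON Lines: one instance per line, with binary fields base64-encoded and wrapped in a `{"b64": ...}` object. A minimal, self-contained sketch of how one such line is built (the bytes here are a stand-in for a real JPEG read with `tf.io.read_file`):

```python
import base64
import json

# Hypothetical raw bytes standing in for an actual JPEG payload.
image_bytes = b"\xff\xd8\xff\xe0fake-jpeg-payload"

# Binary fields must be base64-encoded and wrapped in {"b64": ...};
# batch prediction then expects one JSON instance per line.
b64str = base64.b64encode(image_bytes).decode("utf-8")
line = json.dumps({"key": "tmp1.jpg", "bytes_inputs": {"b64": b64str}})
print(line)

# The service decodes the field back to the original bytes server-side.
assert base64.b64decode(json.loads(line)["bytes_inputs"]["b64"]) == image_bytes
```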
### [projects.jobs.create](https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects.jobs/create)
#### Request
```
body = {
'jobId': "custom_job_TF_pred_" + TIMESTAMP,
"prediction_input": {
"input_paths": gcs_input_uri,
"output_path": "gs://" + f"{BUCKET_NAME}/batch_output/",
"data_format":"JSON",
"runtime_version": "2.1",
"uri": model_artifact_dir,
"region":"us-central1"
}
}
request = cloudml.projects().jobs().create(
parent=PARENT,
)
request.body = json.loads(json.dumps(body, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().jobs().create(
parent=PARENT,
body=body
)
```
*Example output*:
```
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/jobs?alt=json",
"method": "POST",
"body": {
"jobId": "custom_job_TF_pred_20210325211532",
"prediction_input": {
"input_paths": "gs://migration-ucaip-trainingaip-20210325211532/test.json",
"output_path": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/",
"data_format": "JSON",
"runtime_version": "2.1",
"uri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"region": "us-central1"
}
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.jobs.create",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
```
#### Call
```
response = request.execute()
```
#### Response
```
print(json.dumps(response, indent=2))
```
*Example output*:
```
{
"jobId": "custom_job_TF_pred_20210325211532",
"predictionInput": {
"dataFormat": "JSON",
"inputPaths": [
"gs://migration-ucaip-trainingaip-20210325211532/test.json"
],
"outputPath": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/",
"region": "us-central1",
"runtimeVersion": "2.1",
"uri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"framework": "TENSORFLOW"
},
"createTime": "2021-03-25T21:34:56Z",
"state": "QUEUED",
"predictionOutput": {
"outputPath": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/"
},
"etag": "QwNOFOfoKdY="
}
```
```
# The full unique ID for the batch prediction job
batch_job_id = PARENT + "/jobs/" + response["jobId"]
print(batch_job_id)
```
### [projects.jobs.get](https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects.jobs/get)
#### Call
```
request = cloudml.projects().jobs().get(
name=batch_job_id
)
response = request.execute()
```
#### Response
```
print(json.dumps(response, indent=2))
```
*Example output*:
```
{
"jobId": "custom_job_TF_pred_20210325211532",
"predictionInput": {
"dataFormat": "JSON",
"inputPaths": [
"gs://migration-ucaip-trainingaip-20210325211532/test.json"
],
"outputPath": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/",
"region": "us-central1",
"runtimeVersion": "2.1",
"uri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"framework": "TENSORFLOW"
},
"createTime": "2021-03-25T21:34:56Z",
"state": "QUEUED",
"predictionOutput": {
"outputPath": "gs://migration-ucaip-trainingaip-20210325211532/batch_output/"
},
"etag": "NSbtn4XnbbU="
}
```
```
while True:
    response = cloudml.projects().jobs().get(name=batch_job_id).execute()
if response["state"] != "SUCCEEDED":
print("The job has not completed:", response["state"])
if response["state"] == "FAILED":
break
else:
folder = response["predictionInput"]["outputPath"][:-1]
! gsutil ls $folder/prediction*
! gsutil cat $folder/prediction*
break
time.sleep(60)
```
*Example output*:
```
gs://migration-ucaip-trainingaip-20210325211532/batch_output/prediction.errors_stats-00000-of-00001
gs://migration-ucaip-trainingaip-20210325211532/batch_output/prediction.results-00000-of-00001
{"prediction": [0.033321816474199295, 0.052459586411714554, 0.1548144668340683, 0.11401787400245667, 0.17382358014583588, 0.09015274047851562, 0.19865882396697998, 0.10446511209011078, 0.029874442145228386, 0.048411525785923004], "key": "tmp1.jpg"}
{"prediction": [0.03346974775195122, 0.05255022272467613, 0.15449963510036469, 0.11388237029314041, 0.17408262193202972, 0.08989296853542328, 0.19814379513263702, 0.10520868003368378, 0.02989153563976288, 0.04837837815284729], "key": "tmp2.jpg"}
```
## Make online predictions
### Deploy the model
### [projects.models.create](https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects.models/create)
#### Request
```
request = cloudml.projects().models().create(
parent=PARENT
)
request.body = json.loads(
json.dumps(
{"name": "custom_job_TF_" + TIMESTAMP},
indent=2
)
)
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().models().create(
parent=PARENT,
body={"name": "custom_job_TF_" + TIMESTAMP}
)
```
*Example output*:
```
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models?alt=json",
"method": "POST",
"body": {
"name": "custom_job_TF_20210325211532"
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.models.create",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
```
#### Call
```
response = request.execute()
```
#### Response
```
print(json.dumps(response, indent=2))
```
*Example output*:
```
{
"name": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532",
"regions": [
"us-central1"
],
"etag": "fFH1QQbH3tA="
}
```
```
# The full unique ID for the training pipeline
model_id = response["name"]
# The short numeric ID for the training pipeline
model_short_name = model_id.split("/")[-1]
print(model_id)
```
### [projects.models.versions.create](https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects.models.versions/create)
#### Request
```
version = {
"name": "custom_job_TF_" + TIMESTAMP,
"deploymentUri": model_artifact_dir,
"runtimeVersion": "2.1",
"framework": "TENSORFLOW",
"pythonVersion": "3.7",
"machineType": "mls1-c1-m2"
}
request = cloudml.projects().models().versions().create(
parent=model_id,
)
request.body = json.loads(json.dumps(version, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().models().versions().create(
parent=model_id,
body=version
)
```
*Example output*:
```
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models/custom_job_TF_20210325211532/versions?alt=json",
"method": "POST",
"body": {
"name": "custom_job_TF_20210325211532",
"deploymentUri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"runtimeVersion": "2.1",
"framework": "TENSORFLOW",
"pythonVersion": "3.7",
"machineType": "mls1-c1-m2"
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.models.versions.create",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
```
#### Call
```
response = request.execute()
```
#### Response
```
print(json.dumps(response, indent=2))
```
*Example output*:
```
{
"name": "projects/migration-ucaip-training/operations/create_custom_job_TF_20210325211532_custom_job_TF_20210325211532-1616708521927",
"metadata": {
"@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata",
"createTime": "2021-03-25T21:42:02Z",
"operationType": "CREATE_VERSION",
"modelName": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532",
"version": {
"name": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532/versions/custom_job_TF_20210325211532",
"deploymentUri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"createTime": "2021-03-25T21:42:01Z",
"runtimeVersion": "2.1",
"etag": "3vf44xGDtdw=",
"framework": "TENSORFLOW",
"machineType": "mls1-c1-m2",
"pythonVersion": "3.7"
}
}
}
```
```
# The full unique ID for the model version
model_version_name = response["metadata"]["version"]["name"]
print(model_version_name)
while True:
response = cloudml.projects().models().versions().get(
name=model_version_name
).execute()
if response["state"] == "READY":
print("Model version created.")
break
time.sleep(60)
```
### Prepare input for online prediction
```
import base64
import cv2
import json
import tensorflow as tf
(_, _), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
test_image_1, test_label_1 = x_test[0], y_test[0]
test_image_2, test_label_2 = x_test[1], y_test[1]
# x_test images are already uint8 in [0, 255], so write them out unscaled
cv2.imwrite('tmp1.jpg', test_image_1)
cv2.imwrite('tmp2.jpg', test_image_2)
```
### [projects.predict](https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects/predict)
#### Request
```
instances_list = []
for img in ["tmp1.jpg", "tmp2.jpg"]:
bytes = tf.io.read_file(img)
b64str = base64.b64encode(bytes.numpy()).decode('utf-8')
instances_list.append({
'key': img,
input_name: {
'b64': b64str
}
})
request = cloudml.projects().predict(name=model_version_name)
request.body = json.loads(json.dumps({'instances': instances_list}, indent=2))
print(json.dumps(json.loads(request.to_json()), indent=2))
request = cloudml.projects().predict(
name=model_version_name,
body={'instances': instances_list}
)
```
*Example output*:
```
{
"uri": "https://ml.googleapis.com/v1/projects/migration-ucaip-training/models/custom_job_TF_20210325211532/versions/custom_job_TF_20210325211532:predict?alt=json",
"method": "POST",
"body": {
"instances": [
{
"key": "tmp1.jpg",
"bytes_inputs": {
"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD6E1zw/qemaZY669mkdtqsTPZTpMH85Y3KMcKeOR36444NZGj2/ibWPHaaPeHSLXRbq3jSw1O7u3V3u9zb0ZAh+QIFO4EkliCBjnwv9lfxtrviTxBbW974le/0nQ/h5ohms7m4b92bhVlkEfPIDuwJ6gyADgCuWh1fxP8As6/tGad8H5PiRrHjW6tNd1O/iXUr5Z7mx0uSZlinHODiRQCqrgGTGPmwPyqfClGlnM6Em3TSi/N3Wtnto015H6y+MK08kp14QSqScle6tFxel0+6aZ9d6/rvhXwH4407wWtq+uSXth9pa5jcwKUBIbyxzkL0Ock8nHQV2x0NtN0Gw8a6PDOunXc3liO5GZIGxwG6YBxx1x0zkV4L8Xfij4k8X/Gr4V+HdJtDpdgui3GoajJBAXlkuGvNoUEDcD5MYyuN3zEnpX0B4Q+Iunafdap8OPFCG/sL+PzLkGNgbQB1O7Jxh1JOCOvHXNfUYrh/LPqMo0oKDgvdl10117nzGD4izR5hGdWcp8zs4+umisflx8DNXi/Z/wDHviPTfiP4g+x2WieFtV03U5r9miLw2ilonTIySWijZCB6Yr2X4R/tQT/tC/s56f8AGn4C/AvxTrXiq7jksW1G78NxRlNiRxIrzO5EwiVHAePAfeoO1lIrqv2pf2Xz+1t+z3feC9E1GLSvE2paQtraa1cISXiEqu9tKVydrbMZ5Kkg8jIr234a/Bq7+EngjQPAng3wzB/ZOl6ZFa2tpp/yeWiqFB2Hq2ASeuTz15r9ixHBa+vSp1JXpxXuy6vyfpbXuz8jocUyWCVSirTb1j09V95e+E3hnwXr8dn8QPjLaSWZBguP+EcudKSW6gnSMfLHOrcQh2djCSAxY5BxkzfEDx1H4n8ZyvpEC2WnMAwighMe8hvl3gZyQCB15K5xWNq3iKbVNVk8MW91NZzxLllkt9jL2z0I/DrXCeG47T4seNL3wN4c1nULKPTY2GoX8YYNcSkfKisxwis2ASMnk9AK7f8AiHuQ47CulWlKzfM7S5W+vRfgZQ47zvA4qNako3irK8eZLpfVn//Z"
}
},
{
"key": "tmp2.jpg",
"bytes_inputs": {
"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD8V5UubRQlxlSvDAtyD6dadbW91fK8lrFI6o6KzrnALHCj8cH8jX3J+1V+wR8adOjsrDxR8EPhzohsg13qKfD+zddRWBF2u0sR42AnIzjJAAzzXmnwh/Yk+D3jX4Q6h478cftgaX4Al/tR4f8AhHdf0eRruVI+IpdkbFiWLsAqgnrXZLBVFWcI6/gc0MVSlSU2eZaX+zdr954Nv/EEt7FNeWyrJHZ2moRn93tYsTuwcg7OBz19q8sa7AUEMf8AvqvoHwX+yz8Vb74gXtn4M+Euq/EbSYpV+y6vf2txptrMOAz+XIysR0xu9M4qf9pn9mf4jJoNprJ+BGgeCn0mHZfQ2OqRl793fAZUDkkAbcd8k1pUw1OUE6e/bf8AEVOs1JqT3P19/aT/AOCMf7RH7Qfx5134zeNf2z7S18Q+PkSWWDSb6406BrSMFYrWNCCAsakDbnOSSeTXg+sf8G3viHwt49ez1jxdY6zqds1veTwT+MzBdqJWnWCYb0DhXe3n2sOGMD4J2HH7IfD3xnc/EPwl4Y8R6t458M28y+EL1NRh1nS3vGXV3a1+w3S4mjCwxxpdCaFSjTNLGRImwk+A6f8AAL9oH4gaX4+tf+Ckn7Vfw4+I2k3fiW6m+HOneFNPn0WDw9piTLLbuUiYGWZsCNYp/tMtqiSbL+b7RMrqvWxVDKamZ89BOg03Q9+deupOpBRotU1CM4OMak/aSUIxkouTbUjmllc0qic60XrGNldX/dtNr/n2+aS5r3XI3ytKz+Jof+CN2r6LYHU/ibqOo2iQzFmmn8eXLfugMbDhwMcdeprg/iV+zX+zx8O9Mu9f8NaRplw9oSr6g0sl0BgdBNMzZ+i9K+svi9P+yv8ADAnRfhl4MfxNdhSDe63fzS2sJHdYpHbfjtu/KvhL9ub4tarruhy2JvJMsdjJFGFj28gKqrgKo9B6VhlvEGMzfDxm8M6N+kpRlJeT5dE/mwoZDiMO+evVb8j/2Q=="
}
}
]
},
"headers": {
"accept": "application/json",
"accept-encoding": "gzip, deflate",
"user-agent": "(gzip)",
"x-goog-api-client": "gdcl/1.12.8 gl-python/3.7.8"
},
"methodId": "ml.projects.predict",
"resumable": null,
"response_callbacks": [],
"_in_error_state": false,
"body_size": 0,
"resumable_uri": null,
"resumable_progress": 0
}
```
#### Call
```
response = request.execute()
```
#### Response
```
print(json.dumps(response, indent=2))
```
*Example output*:
```
{
"predictions": [
{
"key": "tmp1.jpg",
"prediction": [
0.033321816474199295,
0.052459586411714554,
0.1548144668340683,
0.11401788890361786,
0.17382356524467468,
0.09015275537967682,
0.19865882396697998,
0.10446509718894958,
0.02987445704638958,
0.048411525785923004
]
},
{
"key": "tmp2.jpg",
"prediction": [
0.03346974775195122,
0.052550218999385834,
0.15449965000152588,
0.11388237029314041,
0.17408263683319092,
0.08989296108484268,
0.19814379513263702,
0.10520866513252258,
0.02989153563976288,
0.04837837815284729
]
}
]
}
```
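Each `prediction` vector holds one softmax probability per CIFAR-10 class, so the predicted label is the index of the largest entry. A small sketch using the (abbreviated) probabilities from the first instance above and the standard CIFAR-10 class order:

```python
# Standard CIFAR-10 label order, index 0 through 9.
cifar10_labels = ["airplane", "automobile", "bird", "cat", "deer",
                  "dog", "frog", "horse", "ship", "truck"]

# Abbreviated softmax output from the first prediction above.
prediction = [0.0333, 0.0525, 0.1548, 0.1140, 0.1738,
              0.0902, 0.1987, 0.1045, 0.0299, 0.0484]

# argmax over the probabilities gives the predicted class index.
best = max(range(len(prediction)), key=prediction.__getitem__)
print(best, cifar10_labels[best])  # 6 frog
```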
### [projects.models.versions.delete](https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects.models.versions/delete)
#### Call
```
request = cloudml.projects().models().versions().delete(
name=model_version_name
)
response = request.execute()
```
#### Response
```
print(json.dumps(response, indent=2))
```
*Example output*:
```
{
"name": "projects/migration-ucaip-training/operations/delete_custom_job_TF_20210325211532_custom_job_TF_20210325211532-1616708584436",
"metadata": {
"@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata",
"createTime": "2021-03-25T21:43:04Z",
"operationType": "DELETE_VERSION",
"modelName": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532",
"version": {
"name": "projects/migration-ucaip-training/models/custom_job_TF_20210325211532/versions/custom_job_TF_20210325211532",
"deploymentUri": "gs://migration-ucaip-trainingaip-20210325211532/custom_job_TF_20210325211532",
"createTime": "2021-03-25T21:42:01Z",
"runtimeVersion": "2.1",
"state": "READY",
"etag": "Nu2QJaCl6vw=",
"framework": "TENSORFLOW",
"machineType": "mls1-c1-m2",
"pythonVersion": "3.7"
}
}
}
```
# Cleanup
```
delete_model = True
delete_bucket = True
# Delete the model using the AI Platform (Unified) fully qualified identifier for the model
try:
if delete_model:
        cloudml.projects().models().delete(
            name=model_id
        ).execute()
except Exception as e:
print(e)
if delete_bucket and 'BUCKET_NAME' in globals():
! gsutil rm -r gs://$BUCKET_NAME
```
## Overfitting Exercise
In this exercise, we'll build a model that, as you'll see, dramatically overfits the training data. This will allow you to see what overfitting can "look like" in practice.
```
import os
import pandas as pd
import numpy as np
import math
import matplotlib.pyplot as plt
```
For this exercise, we'll use gradient boosted trees. In order to implement this model, we'll use the XGBoost package.
```
! pip install xgboost
import xgboost as xgb
```
Here, we define a few helper functions.
```
# number of rows in a dataframe
def nrow(df):
return(len(df.index))
# number of columns in a dataframe
def ncol(df):
return(len(df.columns))
# flatten nested lists/arrays
flatten = lambda l: [item for sublist in l for item in sublist]
# combine multiple arrays into a single list
def c(*args):
return(flatten([item for item in args]))
```
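A quick usage sketch of these helpers: `c` mimics R's `c()` by concatenating several sequences into one flat list (the helpers are redefined here so the sketch is self-contained):

```python
# flatten nested lists/arrays, as defined above
flatten = lambda l: [item for sublist in l for item in sublist]

# combine multiple arrays into a single list, R-style
def c(*args):
    return flatten([item for item in args])

print(c([1, 2], [3], [4, 5]))  # [1, 2, 3, 4, 5]
print(flatten([[1], [2, 3]]))  # [1, 2, 3]
```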
In this exercise, we're going to try to predict the returns of the S&P 500 ETF. This may be a futile endeavor, since many experts consider the S&P 500 to be essentially unpredictable, but it will serve well for the purpose of this exercise. The following cell loads the data.
```
df = pd.read_csv("SPYZ.csv")
```
As you can see, the data file has four columns, `Date`, `Close`, `Volume` and `Return`.
```
df.head()
n = nrow(df)
```
Next, we'll form our predictors/features. In the cells below, we create four types of features. We also use a parameter, `K`, to set the number of each type of feature to build. With a `K` of 25, 100 features will be created. This should already seem like a lot of features, and alert you to the potential that the model will be overfit.
```
predictors = []
# we'll create a new DataFrame to hold the data that we'll use to train the model
# we'll create it from the `Return` column in the original DataFrame, but rename that column `y`
model_df = pd.DataFrame(data = df['Return']).rename(columns = {"Return" : "y"})
# IMPORTANT: this sets how many of each of the following four predictors to create
K = 25
```
Now, you write the code to create the four types of predictors.
```
for L in range(1,K+1):
# this predictor is just the return L days ago, where L goes from 1 to K
# these predictors will be named `R1`, `R2`, etc.
pR = "".join(["R",str(L)])
predictors.append(pR)
for i in range(K+1,n):
# TODO: fill in the code to assign the return from L days before to the ith row of this predictor in `model_df`
model_df.loc[i, pR] = df.loc[i-L,'Return']
# this predictor is the return L days ago, squared, where L goes from 1 to K
# these predictors will be named `Rsq1`, `Rsq2`, etc.
pR2 = "".join(["Rsq",str(L)])
predictors.append(pR2)
for i in range(K+1,n):
# TODO: fill in the code to assign the squared return from L days before to the ith row of this predictor
# in `model_df`
model_df.loc[i, pR2] = (df.loc[i-L,'Return']) ** 2
# this predictor is the log volume L days ago, where L goes from 1 to K
# these predictors will be named `V1`, `V2`, etc.
pV = "".join(["V",str(L)])
predictors.append(pV)
for i in range(K+1,n):
# TODO: fill in the code to assign the log of the volume from L days before to the ith row of this predictor
# in `model_df`
# Add 1 to the volume before taking the log
model_df.loc[i, pV] = math.log(1.0 + df.loc[i-L,'Volume'])
# this predictor is the product of the return and the log volume from L days ago, where L goes from 1 to K
# these predictors will be named `RV1`, `RV2`, etc.
pRV = "".join(["RV",str(L)])
predictors.append(pRV)
for i in range(K+1,n):
# TODO: fill in the code to assign the product of the return and the log volume from L days before to the
# ith row of this predictor in `model_df`
model_df.loc[i, pRV] = model_df.loc[i, pR] * model_df.loc[i, pV]
```
Let's take a look at the predictors we've created.
```
model_df.iloc[100:105,:]
```
Next, we create a DataFrame that holds the recent volatility of the ETF's returns, as measured by the standard deviation of a sliding window of the past 20 days' returns.
```
vol_df = pd.DataFrame(data = df[['Return']])
for i in range(K+1,n):
# TODO: create the code to assign the standard deviation of the return from the time period starting
# 20 days before day i, up to the day before day i, to the ith row of `vol_df`
vol_df.loc[i, 'vol'] = np.std(vol_df.loc[(i-20):(i-1),'Return'])
```
Let's take a quick look at the result.
```
vol_df.iloc[100:105,:]
```
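Assuming the same windowing convention, here is a standard-library sketch of the rolling volatility calculation on toy data (`statistics.pstdev` matches `np.std`'s default population formula, ddof=0):

```python
import statistics

# Toy daily returns; in the notebook these come from df['Return'].
returns = [0.01, -0.02, 0.015, 0.0, -0.01, 0.02, 0.005, -0.005]
window = 3  # the notebook uses a 20-day window

# Population standard deviation over the `window` returns ending the day
# before day i -- mirroring np.std over vol_df.loc[(i-20):(i-1), 'Return'].
vols = [statistics.pstdev(returns[i - window:i])
        for i in range(window, len(returns))]
print([round(v, 5) for v in vols])
```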
Now that we have our data, we can start thinking about training a model.
```
# for training, we'll use all the data except for the first K+1 days, for which the predictors' values are NaNs
model = model_df.iloc[(K+1):n, :]
```
In the cell below, first split the data into train and test sets, and then split off the targets from the predictors.
```
# Split data into train and test sets
train_size = 2.0/3.0
breakpoint = round(len(model) * train_size)
# TODO: fill in the code to split off the chunk of data up to the breakpoint as the training set, and
# assign the rest as the test set.
training_data = model.iloc[:breakpoint, :]
test_data = model.iloc[breakpoint:, :]
# TODO: Split training data and test data into targets (Y) and predictors (X), for the training set and the test set
X_train = training_data.iloc[:, 1:]
Y_train = training_data.iloc[:, 0]
X_test = test_data.iloc[:, 1:]
Y_test = test_data.iloc[:, 0]
```
Great, now that we have our data, let's train the model.
```
# DMatrix is an internal data structure used by XGBoost, optimized for both
# memory efficiency and training speed.
dtrain = xgb.DMatrix(X_train, Y_train)
# Train the XGBoost model
param = { 'max_depth':20, 'silent':1 }
num_round = 20
xgModel = xgb.train(param, dtrain, num_round)
```
Now let's predict the returns for the S&P 500 ETF in both the train and test periods. If the model is successful, what should the train and test accuracies look like? What would be a key sign that the model has overfit the training data?
TODO: Before you run the next cell, write down what you expect to see if the model is overfit.
An overfit model will have low error on the training set, but high error on the testing set.
```
# Make the predictions on the test data
preds_train = xgModel.predict(xgb.DMatrix(X_train))
preds_test = xgModel.predict(xgb.DMatrix(X_test))
```
Let's quickly look at the mean squared error of the predictions on the training and testing sets.
```
# TODO: Calculate the mean squared error on the training set
msetrain = sum((preds_train-Y_train)**2)/len(preds_train)
msetrain
# TODO: Calculate the mean squared error on the test set
msetest = sum((preds_test-Y_test)**2)/len(preds_test)
msetest
```
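The same computation, wrapped in a small reusable helper equivalent to the cells above:

```python
import numpy as np

def mse(preds, actual):
    """Mean squared error between predictions and observed values."""
    preds = np.asarray(preds, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.mean((preds - actual) ** 2)
```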
Looks like the mean squared error on the test set is an order of magnitude greater than on the training set. Not a good sign. Now let's do some quick calculations to gauge how this would translate into performance.
```
# combine prediction arrays into a single list
predictions = np.concatenate([preds_train, preds_test])
responses = np.concatenate([Y_train, Y_test])
# as a holding size, we'll take predicted return divided by return variance
# this is mean-variance optimization with a single asset
vols = vol_df.loc[K:n,'vol']
position_size = predictions / vols ** 2
# TODO: Calculate pnl. Pnl in each time period is holding * realized return.
performance = position_size * responses
# plot simulated performance
plt.plot(np.cumsum(performance))
plt.ylabel('Simulated Performance')
plt.axvline(x=breakpoint, c = 'r')
plt.show()
```
Our simulated returns accumulate throughout the training period, but they are absolutely flat in the testing period. The model has no predictive power whatsoever in the out-of-sample period.
Can you think of a few reasons our simulation of performance is unrealistic?
```
# TODO: Answer the above question.
```
1. We left out any accounting of trading costs. Had we included them, the performance in the out-of-sample period would trend downward.
2. We didn't allow any time for trading. It would be most conservative to assume that we place trades on the day following our calculation of position size, and realize returns the day after that, so that there is a two-day delay between the holding-size calculation and the realized return.
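As a sketch of point 1, trading costs can be charged against the change in holdings each period. The `cost_per_unit` value below is an illustrative assumption, and the toy arrays stand in for the notebook's `position_size` and `responses`:

```python
import numpy as np

def pnl_with_costs(position_size, returns, cost_per_unit=0.001):
    """Per-period pnl with a linear trading cost (cost_per_unit is an assumed value)."""
    position_size = np.asarray(position_size, dtype=float)
    returns = np.asarray(returns, dtype=float)
    # units traded each period = change in holding, starting from a flat position
    traded = np.abs(np.diff(position_size, prepend=0.0))
    return position_size * returns - cost_per_unit * traded
```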
```
name = "2019-10-08-speedy-python"
title = "Speeding up python using tools from the standard library"
tags = "basics, optimisation, numpy, multiprocessing"
author = "Callum Rollo"
from nb_tools import connect_notebook_to_post
from IPython.core.display import HTML
html = connect_notebook_to_post(name, title, tags, author)
```
To demonstrate Python's performance, we'll use a short function
```
import math
import numpy as np
from datetime import datetime
def cart2pol(x, y):
r = np.sqrt(x**2 + y**2)
phi = np.arctan2(y, x)
return(r, phi)
```
As the name suggests, **cart2pol** converts a pair of Cartesian coordinates [x, y] to polar coordinates [r, phi]
```
from IPython.core.display import Image
Image(url='https://upload.wikimedia.org/wikipedia/commons/thumb/7/78/Polar_to_cartesian.svg/1024px-Polar_to_cartesian.svg.png',width=400)
x = 3
y = 4
r, phi = cart2pol(x,y)
print(r,phi)
```
All well and good. However, what if we want to convert a list of cartesian coordinates to polar coordinates?
We could **loop** through both lists and perform the conversion for each x-y pair:
```
def cart2pol_list(list_x, list_y):
# Prepare empty lists for r and phi values
r = np.empty(len(list_x))
phi = np.empty(len(list_x))
# Loop through the lists of x and y, calculating the r and phi values
for i in range(len(list_x)):
r[i] = np.sqrt(list_x[i]**2 + list_y[i]**2)
phi[i] = np.arctan2(list_y[i], list_x[i])
return(r, phi)
x_list = np.sin(np.arange(0,2*np.pi,0.1))
y_list = np.cos(np.arange(0,2*np.pi,0.1))
```
These coordinates make a circle centered at [0,0]
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(6,6))
ax.scatter(x_list,y_list)
r_list, phi_list = cart2pol_list(x_list,y_list)
print(r_list)
print(phi_list)
```
This is a bit time-consuming to type out though; surely there is a better way to make our functions work for lists of inputs?
Step forward **vectorise**
```
cart2pol_vec = np.vectorize(cart2pol)
r_list_vec, phi_list_vec = cart2pol_vec(x_list, y_list)
```
Like magic! We can assure ourselves that these two methods produce the same answers
```
print(r_list == r_list_vec)
print(phi_list == phi_list_vec)
```
But how do they perform?
We can use Python's magic **%timeit** function to test this
```
%timeit cart2pol_list(x_list, y_list)
%timeit cart2pol_vec(x_list, y_list)
```
It is significantly faster, both for code writing and at runtime, to use **vectorise** rather than manually looping through lists
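It's worth noting that `np.vectorize` is essentially a Python loop under the hood; NumPy's documentation describes it as a convenience, not a performance tool. Because `cart2pol` is built from the ufuncs `np.sqrt` and `np.arctan2`, the fastest option of all is simply to pass the arrays straight in:

```python
import numpy as np

def cart2pol(x, y):
    r = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)
    return (r, phi)

x_list = np.sin(np.arange(0, 2 * np.pi, 0.1))
y_list = np.cos(np.arange(0, 2 * np.pi, 0.1))

# np.sqrt and np.arctan2 operate elementwise on whole arrays,
# so no loop and no np.vectorize are needed
r_direct, phi_direct = cart2pol(x_list, y_list)
```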
-----------
### Multiprocessing
Another important consideration when code becomes computationally intensive is **multiprocessing**. Python normally runs on one core, so you won't feel the full benefit of your quad-core or greater machine; you can see this in your system monitor while a section of code runs. To demonstrate the effect of multiprocessing we'll need some heftier maths:
```
def do_maths(start=0, num=10):
pos = start
big = 1000 * 1000
ave = 0
while pos < num:
pos += 1
val = math.sqrt((pos - big) * (pos - big))
ave += val / num
return int(ave)
t0 = datetime.now()
do_maths(num=30000000)
dt = datetime.now() - t0
print("Done in {:,.2f} sec.".format(dt.total_seconds()))
import multiprocessing
t0 = datetime.now()
pool = multiprocessing.Pool()
processor_count = multiprocessing.cpu_count()
# processor_count = 2 # we can tell Python to use a specific number of cores if desired
print(f"Computing with {processor_count} processor(s)")
tasks = []
for n in range(1, processor_count + 1):
task = pool.apply_async(do_maths, (30000000 * (n - 1) / processor_count,
30000000 * n / processor_count))
tasks.append(task)
pool.close()
pool.join()
dt = datetime.now() - t0
print("Done in {:,.2f} sec.".format(dt.total_seconds()))
```
Note that you can recover results stored in the task list with `get()`. The list will be in the same order in which you spawned the processes.
```
for t in tasks:
print(t.get())
```
The structure of a multiprocess call is:
```
pool = multiprocessing.Pool()  # make a pool ready to receive tasks
results = []  # empty list for results
for n in range(1, processor_count + 1):  # loop to assign a number of tasks
    result = pool.apply_async(function, (arguments))  # make a task by passing it a function and arguments
    results.append(result)  # append the result(s) of this task to the list
pool.close()  # tell the pool there are no more tasks coming
pool.join()  # wait for all the tasks to finish
for t in results:
    t.get()  # retrieve your results; you could print or assign each result to a variable
```
### Why can't we multithread in Python?
If you have experience of other programming languages, you may wonder why we can't assign tasks to multiple threads to speed up execution. We are prevented from doing this by the Global Interpreter Lock (GIL). This is a lock on the interpreter which ensures that only one thread can be in a state of execution at any one time. This is essential to protect Python's reference system that keeps track of all of the objects in memory.
To get around this lock we spawn several processes which each have their own instance of the interpreter and allocated memory so cannot block one another or cause mischief with references. There's a great summary of the GIL on the real Python website [here](https://realpython.com/python-gil/)
**tl;dr** multithreading won't speed up your compute-heavy calculations, as only one thread can execute at any one time. Use multiprocessing instead.
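A small sketch of the distinction: the GIL prevents parallel *CPU* work, but it is released while a thread blocks, so plain threads still overlap time spent waiting (I/O, sleeps). Here two half-second waits finish in roughly half a second rather than a full second:

```python
import threading
import time

def wait_a_bit():
    # stands in for any blocking call (network, disk, ...)
    time.sleep(0.5)

start = time.perf_counter()
threads = [threading.Thread(target=wait_a_bit) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# elapsed is ~0.5 s, not ~1.0 s: the sleeps overlapped
```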
Multiprocessing example adapted from [Talk Python To Me Training: async techniques](https://training.talkpython.fm/courses/details/async-in-python-with-threading-and-multiprocessing)
```
HTML(html)
```
## Portfolio Exercise: Starbucks
<br>
<img src="https://opj.ca/wp-content/uploads/2018/02/New-Starbucks-Logo-1200x969.jpg" width="200" height="200">
<br>
<br>
#### Background Information
The dataset you will be provided in this portfolio exercise was originally used as a take-home assignment provided by Starbucks for their job candidates. The data for this exercise consists of about 120,000 data points split in a 2:1 ratio among training and test files. In the experiment simulated by the data, an advertising promotion was tested to see if it would bring more customers to purchase a specific product priced at $10. Since it costs the company $0.15 to send out each promotion, it would be best to limit that promotion only to those that are most receptive to it. Each data point includes one column indicating whether or not an individual was sent a promotion for the product, and one column indicating whether or not that individual eventually purchased that product. Each individual also has seven additional features associated with them, which are provided abstractly as V1-V7.
#### Optimization Strategy
Your task is to use the training data to understand which patterns in V1-V7 indicate that a promotion should be provided to a user. Specifically, your goal is to maximize the following metrics:
* **Incremental Response Rate (IRR)**
IRR depicts how many more customers purchased the product with the promotion, as compared to if they didn't receive the promotion. Mathematically, it's the ratio of the number of purchasers in the promotion group to the total number of customers in the promotion group (_treatment_) minus the ratio of the number of purchasers in the non-promotion group to the total number of customers in the non-promotion group (_control_).
$$ IRR = \frac{purch_{treat}}{cust_{treat}} - \frac{purch_{ctrl}}{cust_{ctrl}} $$
* **Net Incremental Revenue (NIR)**
NIR depicts how much is made (or lost) by sending out the promotion. Mathematically, this is 10 times the total number of purchasers that received the promotion minus 0.15 times the number of promotions sent out, minus 10 times the number of purchasers who were not given the promotion.
$$ NIR = (10\cdot purch_{treat} - 0.15 \cdot cust_{treat}) - 10 \cdot purch_{ctrl}$$
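As a quick numeric sanity check of both formulas, here is a sketch with made-up counts (these numbers are assumptions for illustration, not taken from the Starbucks data):

```python
# toy, assumed counts -- not from the dataset
purch_treat, cust_treat = 30, 1000   # purchasers / customers, promotion group
purch_ctrl, cust_ctrl = 20, 1000     # purchasers / customers, control group

irr = purch_treat / cust_treat - purch_ctrl / cust_ctrl   # ~0.01
nir = (10 * purch_treat - 0.15 * cust_treat) - 10 * purch_ctrl

print(nir)  # -50.0: the extra sales don't cover the $150 promotion cost
```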
For a full description of what Starbucks provides to candidates see the [instructions available here](https://drive.google.com/open?id=18klca9Sef1Rs6q8DW4l7o349r8B70qXM).
Below you can find the training data provided. Explore the data and different optimization strategies.
#### How To Test Your Strategy?
When you feel like you have an optimization strategy, complete the `promotion_strategy` function to pass to the `test_results` function.
From past data, we know there are four possible outcomes:
Table of actual promotion vs. predicted promotion customers:
<table>
<tr><th></th><th colspan = '2'>Actual</th></tr>
<tr><th>Predicted</th><th>Yes</th><th>No</th></tr>
<tr><th>Yes</th><td>I</td><td>II</td></tr>
<tr><th>No</th><td>III</td><td>IV</td></tr>
</table>
The metrics are only being compared for the individuals we predict should obtain the promotion – that is, quadrants I and II. Since the first set of individuals that receive the promotion (in the training set) receive it randomly, we can expect that quadrants I and II will have approximately equivalent participants.
Comparing quadrant I to II then gives an idea of how well your promotion strategy will work in the future.
Get started by reading in the data below. See how each variable or combination of variables along with a promotion influences the chance of purchasing. When you feel like you have a strategy for who should receive a promotion, test your strategy against the test dataset used in the final `test_results` function.
```
# load in packages
from itertools import combinations
from test_results import test_results, score
import numpy as np
import pandas as pd
import scipy as sp
import sklearn as sk
from sklearn.model_selection import train_test_split
from imblearn.ensemble import EasyEnsembleClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
# load in the data
train_data = pd.read_csv('./training.csv')
train_data.head()
```
## 1. Inspect the data
```
train_data.shape
train_data.dtypes
train_data.describe()
#checking for missing values
train_data.isnull().sum()
#Create a DF with clients that received the promotion
promoted_users = train_data[train_data['Promotion'] == 'Yes']
promoted_users.head()
```
## 2. Distribution of promotion
```
#Group the data by promotion and purchase
train_data.groupby(by=['Promotion','purchase'], as_index = False)['ID'].count()
```
How many clients do we have in each group?
```
train_data['Promotion'].value_counts()
```
## Analyzing Data
### Invariant metric
We'll check that the invariant metric (number of participants) doesn't show a statistically significant difference, meaning that the number of customers assigned to each group is similar.
It's important to check this as a prerequisite so that the following inferences on the evaluation metrics are founded on solid ground.
$$ H_0: \text{participants}_{ctrl} - \text{participants}_{exp} = 0 $$
$$ H_1: \text{participants}_{ctrl} - \text{participants}_{exp} \neq 0 $$
$$ \alpha = 0.05 $$
**Analytical approach:**
```
#Calculate P-value with and analytical approach
n = train_data['Promotion'].value_counts().sum() #n data points
sd = np.sqrt(0.5 * (1 - 0.5) * n) #calculate standard deviation
z = ((42170 + 0.5) - 0.5 * n) / sd #calculate z
p_value = 2 * stats.norm.cdf(z) #calculate p-value
print('p_value:', p_value)
```
**Bootstrapping**
```
def calculate_diff(df):
"""Calculate the difference between the two groups (Control and Experimental)
INPUT: df → pandas DataFrame
Output: invariant metric: difference between groups - int """
return (df.query("Promotion == 'No'").shape[0] - df.query("Promotion == 'Yes'").shape[0])
n_trials = 1000
n_points = train_data.shape[0]
sample_diffs = []
# For each trial...
for _ in range(n_trials):
# draw a random sample from the data with replacement...
sample = train_data.sample(n_points, replace = True)
# compute the desired metric...
sample_diff = calculate_diff(sample)
# and add the value to the list of sampled metrics
sample_diffs.append(sample_diff)
# Compute the confidence interval bounds
lower_limit = np.percentile(sample_diffs, (1 - .95)/2 * 100)
upper_limit = np.percentile(sample_diffs, (1 + .95)/2 * 100)
print("Confidence intervals (Type I error rate = 0.05):\n"
"Lower Limit:", lower_limit,"\n"
"Upper Limit:", upper_limit)
# Difference between groups - distribution of our null hypothesis
null_values = np.random.normal(0, np.std(sample_diffs), 10000)
plt.hist(null_values, bins=60)
plt.title("Normal Distribution under the Null Hypothesis")
plt.axvline(calculate_diff(train_data), color='r', label="Observed Metric")
plt.xlabel('Difference between groups')
plt.legend();
#Calculate P-value
z = (calculate_diff(train_data) - 0) / np.std(sample_diffs)
p_value = 2 * stats.norm.cdf(z)
print("P-value:", p_value)
```
We **fail to reject the null hypothesis**: the difference in the invariant metric isn't statistically significant.
We can continue on to the evaluation metrics.
### Calculate Evaluation Metrics
First we need to calculate the metrics
$$Incremental Response Rate$$
$$IRR = \frac{purch_{treat}}{cust_{treat}} - \frac{purch_{ctrl}}{cust_{ctrl}} $$
```
def calculate_irr(dataframe):
""" Calculate IRR.
INPUT: dataframe = pandas dataframe
OUTPUT: IRR - float - Incremental Response Rate"""
purch_treat = dataframe.query("Promotion == 'Yes'").query('purchase == 1').shape[0]
purch_ctrl = dataframe.query("Promotion == 'No'").query('purchase == 1').shape[0]
cust_treat = dataframe.query("Promotion == 'Yes'").shape[0]
cust_ctrl = dataframe.query("Promotion == 'No'").shape[0]
IRR = (purch_treat / cust_treat) - (purch_ctrl / cust_ctrl)
return IRR
#Calculate observed IRR
IRR = calculate_irr(train_data)
print(IRR)
```
$$Net Incremental Revenue$$
$$ NIR = (10 * purch_{treat} - 0.15 * cust_{treat}) - 10 * purch_{ctrl}$$
```
def calculate_nir(dataframe):
""" Calculate NIR.
INPUT: dataframe = pandas dataframe
OUTPUT: NIR - float - Net Incremental Revenue"""
purch_treat = dataframe.query("Promotion == 'Yes'").query('purchase == 1').shape[0]
purch_ctrl = dataframe.query("Promotion == 'No'").query('purchase == 1').shape[0]
cust_treat = dataframe.query("Promotion == 'Yes'").shape[0]
cust_ctrl = dataframe.query("Promotion == 'No'").shape[0]
NIR = (10 * purch_treat - 0.15 * cust_treat) - 10 * purch_ctrl
return NIR
#Calculate observed NIR
NIR = calculate_nir(train_data)
print('NIR:', NIR)
```
## Perform a Hypothesis test for the IRR and NIR value
We want to perform a hypothesis test to see if the experiment has shown statistical significance results.
We'll analyze the statistical significance of the $IRR$ and $NIR$ values to determine whether the experiment had a positive effect on these metrics.
For the further hypothesis test we'll use an $\alpha_{overall} = 0.05$, as we are using more than one variable metric, we'll make a correction on the individual Type I Error Rate to maintain the overall rate.
Our alpha with Bonferroni correction:
$$ \text{Bonferroni correction: } \frac{\alpha_{overall}}{\text{number of metrics}} = \frac{0.05}{2} = 0.025 $$
### Incremental Response Rate (IRR)
Hypothesis:
$$H_0: IRR = 0$$
$$H_1: IRR > 0$$
We'll apply bootstrapping to our data to estimate the sampling distribution
```
n_trials = 1000
n_points = train_data.shape[0]
sample_irrs = []
# For each trial...
for _ in range(n_trials):
# draw a random sample from the data with replacement...
sample = train_data.sample(n_points, replace = True)
# compute the desired metric (IRR)...
sample_irr = calculate_irr(sample)
# and add the value to the list of sampled IRR
sample_irrs.append(sample_irr)
# Compute the confidence interval bounds
lower_limit = np.percentile(sample_irrs, (1 - .975)/2 * 100)
upper_limit = np.percentile(sample_irrs, (1 + .975)/2 * 100)
print("Confidence intervals (Overall Type I error rate = 0.025):\n"
"Lower Limit:", lower_limit,"\n"
"Upper Limit:", upper_limit)
# distribution of our null hypothesis
null_IRRs = np.random.normal(0, np.std(sample_irrs), 10000)
plt.hist(null_IRRs, bins=60)
plt.title("IRR Normal Distribution under the Null Hypothesis")
plt.xlabel('IRR')
plt.axvline(lower_limit, color='r')
plt.axvline(upper_limit, color='r')
plt.text(lower_limit+0.0002, 300, 'Confidence \nInterval \n(Sample IRR)', ma='center');
#Calculate P-value
p_value = 1 - stats.norm.cdf(IRR, 0, np.std(sample_irrs))
print('P-Value:', p_value)
```
### IRR conclusions
The P-value is below our $\alpha$, so we **reject the null hypothesis**.
It means that the experiment has shown statistically significant results for the IRR metric.
Now we'll analyze the other metric: NIR
### Net Incremental Revenue (NIR)
Hypothesis:
$$H_0: NIR \le 0$$
$$H_1: NIR > 0$$
We'll apply bootstrapping to our data to estimate the sampling distribution
```
n_trials = 1000
n_points = train_data.shape[0]
sample_nirs = []
# For each trial...
for _ in range(n_trials):
# draw a random sample from the data with replacement...
sample = train_data.sample(n_points, replace = True)
# compute the desired metric (NIR)...
sample_nir = calculate_nir(sample)
# and add the value to the list of sampled NIR
sample_nirs.append(sample_nir)
#Compute the confidence interval bounds
lower_limit = np.percentile(sample_nirs, (1 - .975)/2 * 100)
upper_limit = np.percentile(sample_nirs, (1 + .975)/2 * 100)
print("Confidence intervals (Type I error rate = 0.025):\n"
"Lower Limit:", lower_limit,"\n"
"Upper Limit:", upper_limit)
#plot distribution of our null hypothesis
null_NIRs = np.random.normal(0, np.std(sample_nirs), 10000)
plt.hist(null_NIRs, bins=60)
plt.title("NIR Normal Distribution under the Null Hypothesis")
plt.xlabel('NIR', fontsize=13)
plt.axvline(lower_limit, color='r')
plt.axvline(upper_limit, color='r')
plt.text(lower_limit+250, 250, 'Confidence \nInterval \n(Sample NIR)', ma='center');
#Calculate p-value
p_value = 1 - stats.norm.cdf(NIR, 0, np.std(sample_nirs))
print("P-Value:", p_value)
```
### NIR conclusions
The P-value is well above our $\alpha$, so we **fail to reject the null hypothesis**.
This means that the experiment doesn't show good results for our NIR metric.
We saw before that the promotion actually improves the response rate. Maybe if, instead of carrying out the promotional campaign at random, we select the appropriate segment of clients, we could improve our NIR indicator.
We can simulate this scenario by building a model that predicts the users who will show the best response to the promotion and what happens if we target the campaign at them.
## Model Building
The goal of our model is to predict what clients are the best to target in our promotion campaign.
For that we'll try two models:
* Random Forest.
* Easy Ensemble.
```
#Select features and target variable
X = train_data.drop(columns = ['ID','purchase','Promotion'])
y = train_data['purchase']
#Split the data into train and test subsets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= .33)
```
Note: with the models we are using we don't need to scale the data!
```
def labeling(y):
promotion_df = []
for i in y:
if i == 1:
promotion_df.append('Yes')
if i == 0:
promotion_df.append('No')
return np.asarray(promotion_df)
```
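For reference, a vectorised one-liner equivalent to `labeling`, sketched with `np.where` (NumPy is already imported in the notebook, but the block below is self-contained):

```python
import numpy as np

def labeling_vec(y):
    # map 1 -> 'Yes' and 0 -> 'No' in a single vectorised step
    return np.where(np.asarray(y) == 1, 'Yes', 'No')
```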
### Easy Ensemble Classifier
```
EEC = EasyEnsembleClassifier(sampling_strategy='all', replacement=True)
EEC.fit(X_train, y_train)
y_hat = EEC.predict(X_test)
print("F1-Score:",f1_score(y_test, y_hat, average='weighted'))
def promotion_strategy(df):
'''
INPUT
df - a dataframe with *only* the columns V1 - V7 (same as train_data)
OUTPUT
promotion_df - np.array with the values
'Yes' or 'No' related to whether or not an
    individual should receive a promotion
should be the length of df.shape[0]
Ex:
INPUT: df
V1 V2 V3 V4 V5 V6 V7
2 30 -1.1 1 1 3 2
3 32 -0.6 2 3 2 2
2 30 0.13 1 1 4 2
OUTPUT: promotion
array(['Yes', 'Yes', 'No'])
    indicating the first two users would receive the promotion and
the last should not.
'''
y_hat = EEC.predict(df)
promotion = labeling(y_hat)
return promotion
# This will test your results, and provide you back some information
# on how well your promotion_strategy will work in practice
test_results(promotion_strategy)
```
### Random Forest Classifier
```
#Instantiate the model
clf = RandomForestClassifier(max_depth=5, random_state=0,class_weight='balanced_subsample')
#Train the model
clf.fit(X = X_train, y = y_train)
y_hat = clf.predict(X_test)
np.unique(y_hat)
print("F1-Score:",f1_score(y_test, y_hat, average='weighted'))
def promotion_strategy(df):
'''
INPUT
df - a dataframe with *only* the columns V1 - V7 (same as train_data)
OUTPUT
promotion_df - np.array with the values
'Yes' or 'No' related to whether or not an
    individual should receive a promotion
should be the length of df.shape[0]
Ex:
INPUT: df
V1 V2 V3 V4 V5 V6 V7
2 30 -1.1 1 1 3 2
3 32 -0.6 2 3 2 2
2 30 0.13 1 1 4 2
OUTPUT: promotion
array(['Yes', 'Yes', 'No'])
    indicating the first two users would receive the promotion and
the last should not.
'''
y_hat = clf.predict(df)
promotion = labeling(y_hat)
return promotion
# This will test your results, and provide you back some information
# on how well your promotion_strategy will work in practice
test_results(promotion_strategy)
```
## Conclusions
We can conclude that, by using a machine learning model to choose which clients to target, we can improve the NIR metric.
It went from a negative value to a positive one, which means the promotion now brings earnings to the company.
That said, we worked with a very imbalanced data set; we could likely improve performance even further with better data.
# Fitting censored data
Experimental measurements are sometimes censored such that we only know partial information about a particular data point. For example, in measuring the lifespan of mice, a portion of them might live through the duration of the study, in which case we only know the lower bound.
One of the ways we can deal with this is to use Maximum Likelihood Estimation ([MLE](http://en.wikipedia.org/wiki/Maximum_likelihood)). However, censoring often makes analytical solutions difficult even for well-known distributions.
We can overcome this challenge by converting the MLE into a convex optimization problem and solving it using [CVXPY](http://www.cvxpy.org/en/latest/).
This example is adapted from a homework problem from Boyd's [CVX 101: Convex Optimization Course](https://class.stanford.edu/courses/Engineering/CVX101/Winter2014/info).
## Setup
We will use similar notation here. Suppose we have a linear model:
$$ y^{(i)} = c^Tx^{(i)} +\epsilon^{(i)} $$
where $y^{(i)} \in \mathbf{R}$, $c \in \mathbf{R}^n$, $x^{(i)} \in \mathbf{R}^n$, and $\epsilon^{(i)}$ is the error and has a normal distribution $N(0, \sigma^2)$ for $ i = 1,\ldots,K$.
Then the MLE estimator $c$ is the vector that minimizes the sum of squares of the errors $\epsilon^{(i)}$, namely:
$$
\begin{array}{ll}
\underset{c}{\mbox{minimize}} & \sum_{i=1}^K (y^{(i)} - c^T x^{(i)})^2
\end{array}
$$
In the case of right censored data, only $M$ observations are fully observed and all that is known for the remaining observations is that $y^{(i)} \geq D$ for $i=\mbox{M+1},\ldots,K$ and some constant $D$.
Now let's see how this would work in practice.
## Data Generation
```
import numpy as np
n = 30 # number of variables
M = 50 # number of censored observations
K = 200 # total number of observations
np.random.seed(n*M*K)
X = np.random.randn(K*n).reshape(K, n)
c_true = np.random.rand(n)
# generating the y variable
y = X.dot(c_true) + .3*np.sqrt(n)*np.random.randn(K)
# ordering them based on y
order = np.argsort(y)
y_ordered = y[order]
X_ordered = X[order,:]
#finding boundary
D = (y_ordered[M-1] + y_ordered[M])/2.
# applying censoring
y_censored = np.concatenate((y_ordered[:M], np.ones(K-M)*D))
import matplotlib.pyplot as plt
# Show plot inline in ipython.
%matplotlib inline
def plot_fit(fit, fit_label):
plt.figure(figsize=(10,6))
plt.grid()
plt.plot(y_censored, 'bo', label = 'censored data')
plt.plot(y_ordered, 'co', label = 'uncensored data')
plt.plot(fit, 'ro', label=fit_label)
plt.ylabel('y')
plt.legend(loc=0)
plt.xlabel('observations');
```
## Regular OLS
Let's see what the OLS result looks like. We'll use the `np.linalg.lstsq` function to solve for our coefficients.
```
c_ols = np.linalg.lstsq(X_ordered, y_censored, rcond=None)[0]
fit_ols = X_ordered.dot(c_ols)
plot_fit(fit_ols, 'OLS fit')
```
We can see that we are systematically overestimating low values of $y$ and vice versa (red vs. cyan). This is caused by our use of censored (blue) observations, which are exerting a lot of leverage and pulling down the trendline to reduce the error between the red and blue points.
## OLS using uncensored data
A simple way to deal with this while maintaining analytical tractability is to simply ignore all censored observations.
$$
\begin{array}{ll}
\underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2
\end{array}
$$
Given that our $M$ is much smaller than $K$, we are throwing away the majority of the dataset in order to accomplish this. Let's see how this new regression does.
```
c_ols_uncensored = np.linalg.lstsq(X_ordered[:M], y_censored[:M], rcond=None)[0]
fit_ols_uncensored = X_ordered.dot(c_ols_uncensored)
plot_fit(fit_ols_uncensored, 'OLS fit with uncensored data only')
bad_predictions = (fit_ols_uncensored<=D) & (np.arange(K)>=M)
plt.plot(np.arange(K)[bad_predictions], fit_ols_uncensored[bad_predictions], color='orange', marker='o', lw=0);
```
We can see that the fit for the uncensored portion is now vastly improved. Even the fit for the censored data is now relatively unbiased, i.e. the fitted values (red points) are now centered around the uncensored observations (cyan points).
The one glaring issue with this arrangement is that we are now predicting many observations to be _below_ $D$ (orange) even though we are well aware that this is not the case. Let's try to fix this.
## Using constraints to take censored data into account
Instead of throwing away all censored observations, lets leverage these observations to enforce the additional information that we know, namely that $y$ is bounded from below. We can do this by setting additional constraints:
$$
\begin{array}{ll}
\underset{c}{\mbox{minimize}} & \sum_{i=1}^M (y^{(i)} - c^T x^{(i)})^2 \\
\mbox{subject to} & c^T x^{(i)} \geq D\\
& \mbox{for } i=\mbox{M+1},\ldots,K
\end{array}
$$
```
import cvxpy as cp
X_uncensored = X_ordered[:M, :]
c = cp.Variable(shape=n)
objective = cp.Minimize(cp.sum_squares(X_uncensored @ c - y_ordered[:M]))
constraints = [X_ordered[M:,:] @ c >= D]
prob = cp.Problem(objective, constraints)
result = prob.solve()
c_cvx = np.array(c.value).flatten()
fit_cvx = X_ordered.dot(c_cvx)
plot_fit(fit_cvx, 'CVX fit')
```
Qualitatively, this already looks better than before as it no longer predicts inconsistent values with respect to the censored portion of the data. But does it do a good job of actually finding coefficients $c$ that are close to our original data?
We'll use a simple Euclidean distance $\|c_\mbox{true} - c\|_2$ to compare:
```
print("norm(c_true - c_cvx): {:.2f}".format(np.linalg.norm((c_true - c_cvx))))
print("norm(c_true - c_ols_uncensored): {:.2f}".format(np.linalg.norm((c_true - c_ols_uncensored))))
```
## Conclusion
Fitting censored data to a parametric distribution can be challenging as the MLE solution is often not analytically tractable. However, many MLEs can be converted into a convex optimization problems as show above. With the advent of simple-to-use and robust numerical packages, we can now solve these problems easily while taking into account the entirety of our information set by enforcing consistency conditions on various portions of the data.
## Compile Figure 2 - Training curves for three VAE architectures
```
suppressPackageStartupMessages(library(dplyr))
suppressPackageStartupMessages(library(ggplot2))
suppressPackageStartupMessages(library(cowplot))
file_extensions <- c(".png", ".pdf")
plot_theme <- theme(
axis.title.y = element_text(size = 6),
axis.title.x = element_text(size = 9),
legend.text = element_text(size = 7),
legend.title = element_text(size = 9),
legend.key.size = unit(0.5, 'cm')
)
# Load training curve data
data_file <- file.path("training_curve_summary_data_L1000.csv")
training_col_types <- readr::cols(
epoch = readr::col_double(),
model = readr::col_character(),
shuffled = readr::col_character(),
loss_type = readr::col_character(),
loss_value = readr::col_double()
)
training_df <- readr::read_csv(data_file, col_types = training_col_types)
# Process training data for plot input
training_df$loss_type <- dplyr::recode(
training_df$loss_type, loss = "Training", val_loss = "Validation"
)
training_df$shuffled <- dplyr::recode(
training_df$shuffled, real = "Real", shuffled = "Shuffled"
)
# Rename columns
training_df <- training_df %>% dplyr::rename(`Input` = shuffled, `Data split` = loss_type)
print(dim(training_df))
head(training_df)
table(training_df$model)
split_colors <- c("Training" = "#708A8C", "Validation" = "#FD3A4A")
# Figure panel A
plot_subset_df <- training_df %>%
dplyr::filter(model == "vanilla")
panel_a_gg <- (
ggplot(plot_subset_df, aes(x = epoch, y = loss_value))
+ geom_line(aes(color = `Data split`, linetype = `Input`))
+ scale_color_manual(values = split_colors)
+ theme_bw()
+ xlab("Epoch")
+ ylab("Vanilla VAE loss\n(MSE + KL divergence)")
+ plot_theme
)
# Figure panel B
plot_subset_df <- training_df %>%
dplyr::filter(model == "beta")
panel_b_gg <- (
ggplot(plot_subset_df, aes(x = epoch, y = loss_value))
+ geom_line(aes(color = `Data split`, linetype = `Input`))
+ scale_color_manual(values = split_colors)
+ theme_bw()
+ xlab("Epoch")
+ ylab("Beta VAE loss\n(MSE + (beta * KL divergence))")
+ plot_theme
)
# Figure panel C
plot_subset_df <- training_df %>%
dplyr::filter(model == "mmd")
panel_c_gg <- (
ggplot(plot_subset_df, aes(x = epoch, y = loss_value))
+ geom_line(aes(color = `Data split`, linetype = `Input`))
+ scale_color_manual(values = split_colors)
+ theme_bw()
+ xlab("Epoch")
+ ylab("MMD VAE loss\n(MSE + Maximum Mean Discrepancy)")
+ plot_theme
)
# Get legend
figure_2_legend <- cowplot::get_legend(panel_a_gg)
# Combine figure together
figure_2_gg <- (
cowplot::plot_grid(
cowplot::plot_grid(
panel_a_gg + theme(legend.position = "none"),
panel_b_gg + theme(legend.position = "none"),
panel_c_gg + theme(legend.position = "none"),
labels = c("a", "b", "c"),
nrow = 3
),
figure_2_legend,
rel_widths = c(1, 0.2),
ncol = 2
)
)
figure_2_gg
# Save figure
output_file_base <- file.path("output", "figure2_training_curves_L1000")
for (file_extension in file_extensions) {
output_file <- paste0(output_file_base, file_extension)
cowplot::save_plot(output_file, figure_2_gg, dpi = 500, base_width = 6, base_height = 6)
}
```
<h3> Collision Handling in HashTables </h3>
```
class HashTable:
def __init__(self):
self.MAX = 10
self.arr = [None for i in range(self.MAX)]
def get_hash(self, key):
hash = 0
for char in key:
hash += ord(char)
return hash % self.MAX
    def __getitem__(self, key):  # built-in (dunder) method: t[key] returns the value stored for that key
        h = self.get_hash(key)
        return self.arr[h]
    def __setitem__(self, key, val):  # built-in (dunder) method: t[key] = val stores a value for the key
        h = self.get_hash(key)
        self.arr[h] = val
    def __delitem__(self, key):  # built-in (dunder) method: del t[key] removes the value stored for the key
        h = self.get_hash(key)
        self.arr[h] = None
t = HashTable()
t.get_hash("march 6")
t.get_hash("march 17")
```
<p style="font-size:15px">In the above example you can see that both "march 17" and "march 6" return the same hash value of 9. This is where a <strong>collision</strong> happens: two keys map to the same index. Now let us see what happens if we try to store a value for each of these keys.</p>
```
t["march 6"] = 120 #now we will assign some values to each key and see what does it show
t["march 8"] = 67
t["march 9"] = 4
t["march 17"] = 459
t["march 6"]
```
<p> Here you can clearly see that the value for <strong>"march 6"</strong> is returned as 459: <strong>"march 17"</strong> hashes to the same index, so assigning it overwrote the value stored for <strong>"march 6"</strong>. </p>
<p>To handle such situations we have two approaches to collision handling:</p>
<ul>
<li> Chaining method </li>
<li> Linear Probing </li>
</ul>
<h3>Chaining Method</h3>
<p> This method of collision handling keeps a list (or linked list) at each index, so that multiple keys can be stored at one index when they share the same hash value.</p>
```
class HashTable:
def __init__(self):
self.MAX = 10
        self.arr = [[] for i in range(self.MAX)]  # the only change: each slot starts as an empty list instead of None
def get_hash(self, key): #similar hashing helper function
hash = 0
for char in key:
hash += ord(char)
return hash % self.MAX
    def __getitem__(self, key):  # we pass the key, not an index; a slot holds a list of (key, value) pairs, so we search it
        arr_index = self.get_hash(key)
        for kv in self.arr[arr_index]:  # kv is a (key, value) tuple
            if kv[0] == key:  # if the key matches, return the value attached to it
                return kv[1]
def __setitem__(self, key, val): #update function which handle the collision
h = self.get_hash(key)
found = False # variable to check whether key exists or not
for idx, element in enumerate(self.arr[h]): #now we have a linked list of items at each index value so we need to iterate through it
if len(element)==2 and element[0] == key:
self.arr[h][idx] = (key,val) #update the list and store the value in the form of tuple as tuples are immutable
found = True #change the value of found to true
if not found: #if not found then as usual
self.arr[h].append((key,val))
def __delitem__(self, key):
arr_index = self.get_hash(key)
for index, kv in enumerate(self.arr[arr_index]):
if kv[0] == key:
print("del",index)
del self.arr[arr_index][index]
from IPython.display import Image
Image(filename='C:/Users/prakhar/Desktop/Python_Data_Structures_and_Algorithms_Implementations/Images/Chaining_method.png',width=800, height=400)
#save the images from github to your local machine and then give the absolute path of the image
```
<p>Pictorial representation of the chaining method</p>
```
t = HashTable()
t["march 6"] = 310
t["march 7"] = 420
t["march 8"] = 67
t["march 17"] = 63457
t["march 6"] #see here the value is not over written because we have managed the collision
t["march 17"]
t.arr #see here now we are able to store mutiple values on one index
t["march 6"] = 11
t.arr
t["march 6"]
del t["march 6"]
t.arr #deleting a particular key
```
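The second approach listed earlier, linear probing, stores a colliding key in the next free slot rather than in a per-slot list. A minimal sketch (not part of the original notebook; it omits deletion handling and table resizing):

```python
class ProbingHashTable:
    def __init__(self):
        self.MAX = 10
        self.arr = [None] * self.MAX

    def get_hash(self, key):
        return sum(ord(c) for c in key) % self.MAX

    def __setitem__(self, key, val):
        h = self.get_hash(key)
        for i in range(self.MAX):              # probe at most MAX slots
            idx = (h + i) % self.MAX           # wrap around the end of the array
            if self.arr[idx] is None or self.arr[idx][0] == key:
                self.arr[idx] = (key, val)     # insert new key or update existing one
                return
        raise Exception("hash table is full")

    def __getitem__(self, key):
        h = self.get_hash(key)
        for i in range(self.MAX):
            idx = (h + i) % self.MAX
            if self.arr[idx] is not None and self.arr[idx][0] == key:
                return self.arr[idx][1]
        return None

t = ProbingHashTable()
t["march 6"] = 120
t["march 17"] = 459   # collides with "march 6" at index 9, lands in the next free slot
print(t["march 6"], t["march 17"])  # both values survive the collision
```

Lookups must probe in the same order as insertions, which is why deletion needs special "tombstone" markers in a full implementation.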
https://www.kaggle.com/c/word2vec-nlp-tutorial#part-2-word-vectors

```
import pandas as pd

# Read data from files
train = pd.read_csv("./data/labeledTrainData.tsv", header=0,
                    delimiter="\t", quoting=3)
test = pd.read_csv("./data/testData.tsv", header=0, delimiter="\t", quoting=3)
unlabeled_train = pd.read_csv("./data/unlabeledTrainData.tsv", header=0,
                              delimiter="\t", quoting=3)

# Verify the number of reviews that were read (100,000 in total)
print("Read %d labeled train reviews, %d labeled test reviews, "
      "and %d unlabeled reviews\n" % (train["review"].size,
      test["review"].size, unlabeled_train["review"].size))

# Import various modules for string cleaning
from bs4 import BeautifulSoup
import re
from nltk.corpus import stopwords

def review_to_wordlist(review, remove_stopwords=False):
    # Function to convert a document to a sequence of words,
    # optionally removing stop words. Returns a list of words.
    #
    # 1. Remove HTML
    review_text = BeautifulSoup(review).get_text()
    #
    # 2. Remove non-letters
    review_text = re.sub("[^a-zA-Z]", " ", review_text)
    #
    # 3. Convert words to lower case and split them
    words = review_text.lower().split()
    #
    # 4. Optionally remove stop words (false by default)
    if remove_stopwords:
        stops = set(stopwords.words("english"))
        words = [w for w in words if not w in stops]
    #
    # 5. Return a list of words
    return words

# Download the punkt tokenizer for sentence splitting
import nltk.data
# nltk.download()

# Load the punkt tokenizer
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

# Define a function to split a review into parsed sentences
def review_to_sentences(review, tokenizer, remove_stopwords=False):
    # Function to split a review into parsed sentences. Returns a
    # list of sentences, where each sentence is a list of words
    #
    # 1. Use the NLTK tokenizer to split the paragraph into sentences
    raw_sentences = tokenizer.tokenize(review.strip())
    #
    # 2. Loop over each sentence
    sentences = []
    for raw_sentence in raw_sentences:
        # If a sentence is empty, skip it
        if len(raw_sentence) > 0:
            # Otherwise, call review_to_wordlist to get a list of words
            sentences.append(review_to_wordlist(raw_sentence,
                                                remove_stopwords))
    #
    # Return the list of sentences (each sentence is a list of words,
    # so this returns a list of lists)
    return sentences

sentences = []  # Initialize an empty list of sentences

print("Parsing sentences from training set")
for review in train["review"]:
    sentences += review_to_sentences(review, tokenizer)

print("Parsing sentences from unlabeled set")
for review in unlabeled_train["review"]:
    sentences += review_to_sentences(review, tokenizer)

# Check how many sentences we have in total - should be around 850,000+
len(sentences)
sentences[0]

# Import the built-in logging module and configure it so that Word2Vec
# creates nice output messages
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',
                    level=logging.INFO)

# Set values for various parameters
num_features = 300    # Word vector dimensionality
min_word_count = 40   # Minimum word count
num_workers = 4       # Number of threads to run in parallel
context = 10          # Context window size
downsampling = 1e-3   # Downsample setting for frequent words

# Initialize and train the model (this will take some time)
from gensim.models import word2vec
print("Training model...")
model = word2vec.Word2Vec(sentences, workers=num_workers,
                          size=num_features, min_count=min_word_count,
                          window=context, sample=downsampling)

# If you don't plan to train the model any further, calling
# init_sims will make the model much more memory-efficient.
model.init_sims(replace=True)

# It can be helpful to create a meaningful model name and
# save the model for later use. You can load it later using Word2Vec.load()
model_name = "300features_40minwords_10context"
model.save(model_name)
```
# Model Interpretation for Pretrained ResNet Model
This notebook demonstrates how to apply model interpretability algorithms on pretrained ResNet model using a handpicked image and visualizes the attributions for each pixel by overlaying them on the image.
The interpretation algorithms that we use in this notebook are `Integrated Gradients` (w/ and w/o noise tunnel), `GradientShap`, and `Occlusion`. A noise tunnel smooths the attributions by adding Gaussian noise to each input sample and averaging the results.
**Note:** Before running this tutorial, please install the torchvision, PIL, and matplotlib packages.
```
import torch
import torch.nn.functional as F
from PIL import Image
import os
import json
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
import torchvision
from torchvision import models
from torchvision import transforms
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Occlusion
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
```
## 1- Loading the model and the dataset
Loads the pretrained ResNet model and sets it to eval mode
```
model = models.resnet18(pretrained=True)
model = model.eval()
```
Downloads the list of classes/labels for the ImageNet dataset and reads them into memory
```
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv("HOME") + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
```
Defines the transforms and normalization function for the image.
It also loads an image from the `img/resnet/` folder that will be used for interpretation purposes.
```
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor()
])
transform_normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
img = Image.open('img/resnet/swan-3299528_1280.jpg')
transformed_img = transform(img)
input = transform_normalize(transformed_img)
input = input.unsqueeze(0)
```
Predict the class of the input image
```
output = model(input)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.item())][1]
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
```
## 2- Gradient-based attribution
Let's compute attributions using Integrated Gradients and visualize them on the image. Integrated gradients computes the integral of the gradients of the output of the model for the predicted class `pred_label_idx` with respect to the input image pixels along the path from the black image to our input image.
```
print('Predicted:', predicted_label, '(', prediction_score.squeeze().item(), ')')
integrated_gradients = IntegratedGradients(model)
attributions_ig = integrated_gradients.attribute(input, target=pred_label_idx, n_steps=200)
```
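As a side note, the path integral described above can be approximated with a simple Riemann sum. The following self-contained NumPy sketch (a toy quadratic function standing in for the network and its gradient, not Captum's implementation) illustrates the idea, including the completeness property that attributions sum to the difference in model output between input and baseline:

```python
import numpy as np

def toy_model(x):
    # A toy differentiable "model": f(x) = sum(x**2)
    return np.sum(x ** 2)

def toy_grad(x):
    return 2 * x  # analytic gradient of the toy model

def integrated_gradients(x, baseline, grad_fn, n_steps=200):
    # Midpoint Riemann-sum approximation of the path integral from baseline to x
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    grads = np.mean([grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * grads

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline, toy_grad)
# Completeness check: attributions sum to f(x) - f(baseline)
print(attr.sum(), toy_model(x) - toy_model(baseline))
```

Captum performs the same kind of averaging over interpolated inputs, but uses autograd on the real network instead of an analytic gradient.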
Let's visualize the image and corresponding attributions by overlaying the latter on the image.
```
default_cmap = LinearSegmentedColormap.from_list('custom blue',
[(0, '#ffffff'),
(0.25, '#000000'),
(1, '#000000')], N=256)
_ = viz.visualize_image_attr(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
method='heat_map',
cmap=default_cmap,
show_colorbar=True,
sign='positive',
outlier_perc=1)
```
Let us compute attributions using Integrated Gradients and smooth them across multiple images generated by a <em>noise tunnel</em>. The latter adds Gaussian noise with a standard deviation of one to the input, 10 times (`n_samples=10`). Ultimately, the noise tunnel smooths the attributions across the `n_samples` noisy samples using the `smoothgrad_sq` technique, which is the mean of the squared attributions across the `n_samples` samples.
```
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
cmap=default_cmap,
show_colorbar=True)
```
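The `smoothgrad_sq` reduction itself is simple to state: attribute several noisy copies of the input, square the attributions, and average. A toy NumPy illustration (the `attribute` stand-in here is a hypothetical placeholder, not Captum's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def attribute(x):
    # Hypothetical stand-in attribution function, e.g. the gradient of sum(x**2)
    return 2 * x

x = np.array([1.0, -2.0, 3.0])
n_samples, stdev = 10, 1.0

# Add Gaussian noise to each copy of the input, attribute, square, then average
noisy = x + rng.normal(0.0, stdev, size=(n_samples, x.size))
attr_sq = np.mean(attribute(noisy) ** 2, axis=0)  # the smoothgrad_sq reduction
print(attr_sq)
```

Squaring before averaging makes the result non-negative, which is why the visualization above shows only attribution magnitude.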
Finally, let us use `GradientShap`, a linear explanation model which uses a distribution of reference samples (in this case two images) to explain predictions of the model. It computes the expectation of gradients for an input chosen randomly between the input and a baseline. The baseline is also chosen randomly from the given baseline distribution.
```
torch.manual_seed(0)
np.random.seed(0)
gradient_shap = GradientShap(model)
# Defining baseline distribution of images
rand_img_dist = torch.cat([input * 0, input * 1])
attributions_gs = gradient_shap.attribute(input,
n_samples=50,
stdevs=0.0001,
baselines=rand_img_dist,
target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_gs.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "absolute_value"],
cmap=default_cmap,
show_colorbar=True)
```
## 3- Occlusion-based attribution
Now let us try a different approach to attribution. We can estimate which areas of the image are critical for the classifier's decision by occluding them and quantifying how the decision changes.
We run a sliding window of size 15x15 (defined via `sliding_window_shapes`) with a stride of 8 along both image dimensions (defined via `strides`). At each location, we occlude the image with a baseline value of 0, which corresponds to a gray patch (defined via `baselines`).
**Note:** this computation might take more than one minute to complete, as the model is evaluated at every position of the sliding window.
```
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 8, 8),
target=pred_label_idx,
sliding_window_shapes=(3,15, 15),
baselines=0)
```
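The occlusion procedure itself is easy to sketch: slide a patch over the input, replace it with the baseline value, and record how much the target score drops. A toy NumPy version (with a hand-made score function, not the Captum implementation):

```python
import numpy as np

def score(img):
    # Toy "model" score: the sum of the top-left quadrant (the "important" region)
    return img[:2, :2].sum()

img = np.ones((4, 4))
base_score = score(img)

patch, stride, baseline = 2, 2, 0.0
attr = np.zeros_like(img)
for i in range(0, img.shape[0] - patch + 1, stride):
    for j in range(0, img.shape[1] - patch + 1, stride):
        occluded = img.copy()
        occluded[i:i+patch, j:j+patch] = baseline       # occlude one window
        attr[i:i+patch, j:j+patch] = base_score - score(occluded)
print(attr)  # high values where occlusion hurts the score
```

Only the top-left patch receives non-zero attribution, because only occluding it changes the score; this is exactly the intuition behind the heat maps above.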
Let us visualize the attribution, focusing on the areas with positive attribution (those that are critical for the classifier's decision):
```
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
```
The upper part of the goose, especially the beak, seems to be the most critical for the model to predict this class.
We can verify this further by occluding the image using a larger sliding window:
```
occlusion = Occlusion(model)
attributions_occ = occlusion.attribute(input,
strides = (3, 50, 50),
target=pred_label_idx,
sliding_window_shapes=(3,60, 60),
baselines=0)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
["original_image", "heat_map"],
["all", "positive"],
show_colorbar=True,
outlier_perc=2,
)
```
# CNP_T1_validation_age
```
import pandas as pd
import numpy as np
import nibabel as nib
from sklearn.svm import LinearSVR
# from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_regression
import os
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_absolute_error, r2_score
from nilearn.input_data import NiftiMasker
import random
import matplotlib.pyplot as plt
import datetime
# Config
toy = False
run_all = False
#AWS cloud
# path = '/home/ubuntu/fmriprep/'
# output_dir = '/output/'
#For Daniel's computer
input_dir = 'data/CNP_clinical/T1/'
output_dir = 'data/CNP_clinical/T1/'
n_jobs = 20 #amount of cores
cv=4
description = 'CNP_clinical_func_validation_age_'
log_file = description+datetime.datetime.now().strftime("%y-%m-%d-%H-%M-%S")
# Load healthy control data
input_dir = '/Users/danielmlow/Dropbox/neurohack/brain_age/data/CNP_T1/'
loaded = np.load(input_dir+'train_set.npz')
X_train, y_train, random_subj_train = loaded['a'], loaded['b'], loaded['c']
# Load clinical population
input_dir = '/Users/danielmlow/Dropbox/neurohack/brain_age/data/CNP_clinical/T1/'
disorder = 'adhd_'
# the clinical group was split 50-50 in validation (called "train") and test set (called "test")
loaded = np.load(input_dir+'CNP_T1_adhd_train_set.npz')
X_validation_adhd, y_validation_adhd, random_subj_train_adhd = loaded['a'], loaded['b'], loaded['c']
disorder = 'bipolar_'
# the clinical group was split 50-50 in validation (called "train") and test set (called "test")
loaded = np.load(input_dir+'CNP_T1_bipolar_train_set.npz')
X_validation_bipolar, y_validation_bipolar, random_subj_train_bipolar = loaded['a'], loaded['b'], loaded['c']
disorder = 'schz_'
# the clinical group was split 50-50 in validation (called "train") and test set (called "test")
loaded = np.load(input_dir+'CNP_T1_schz_train_set.npz')
X_validation_schz, y_validation_schz, random_subj_train_schz = loaded['a'], loaded['b'], loaded['c']
X_train.shape
print(y_train.shape)
print(X_train.shape)
print(X_train.shape)
# def reshape_3_to_2D(X):
# return np.reshape(X, (X.shape[0], X.shape[2]))
# # X_train = reshape_3_to_2D(X_train)
# X_validation_adhd = reshape_3_to_2D(X_validation_adhd)
# X_validation_bipolar = reshape_3_to_2D(X_validation_bipolar)
# X_validation_schz = reshape_3_to_2D(X_validation_schz)
X_validation_adhd.shape
df = pd.DataFrame(X_train)
df.iloc[:,1890000]
# df2 = np.reshape(df, (df.shape[0],1890000))
def reduce_dim(X):
df = pd.DataFrame(X).iloc[:,1890000]
return np.array(df)
X_train1 = reduce_dim(X_train)
X_validation1 = reduce_dim(X_validation)
X_validation_adhd1 = reduce_dim(X_validation_adhd)
X_validation_bipolar1 = reduce_dim(X_validation_bipolar)
X_validation_schz1 = reduce_dim(X_validation_schz)
X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.30, random_state=42)
```
## Validation
Here we take the validation set of the clinical population and "diagnose" their brain age.
```
from sklearn.pipeline import Pipeline
if toy:
X_train = X_train[:8]
y_train = y_train[:8]
X_validation = X_validation[:8]
y_validation = y_validation[:8]
# Remove features with too low between-subject variance
# Here we use a classical univariate feature selection based on F-test,
# namely Anova.
# We have our predictor (SVR), our feature selection (SelectKBest), and now,
# we can plug them together in a *pipeline* that performs the two operations
# successively:
anova_svr = Pipeline([
('anova', SelectKBest(f_regression, k=1900)),
('svr', LinearSVR(C=.01))])
# parameters = [{'anova__k': [2000],
# 'svr__C': [0.1,1]}]
anova_svr.fit(X_train, y_train)
# clf = SVR(kernel='linear', C=0.01)
# clf.fit(X_train, y_train)
# predictions = clf.predict(X_validation)
# r2 = r2_score(y_validation, predictions)
# mean_scores = np.array(grid.cv_results_['neg_mean_absolute_error'])
# mean_scores = mean_scores.reshape(len(C_OPTIONS), -1, len(N_FEATURES_OPTIONS))
# # select score for best C
# mean_scores = mean_scores.max(axis=0)
# bar_offsets = (np.arange(len(N_FEATURES_OPTIONS)) *
# (len(reducer_labels) + 1) + .5)
print(X_train.shape)
print(X_validation_bipolar.shape)
print(X_validation_adhd.shape)
print(X_validation_schz.shape)
def mean_error(predictions, true_ages):
errors = []
for pred, true in zip(predictions, true_ages):
errors.append(pred-true)
return np.mean(errors), np.std(errors)
'''
pred true
40 45 -5 brain age is five years younger that true age
30 25 5
'''
predictions_healthy = anova_svr.predict(X_validation)
predictions_adhd = anova_svr.predict(X_validation_adhd)
predictions_bipolar = anova_svr.predict(X_validation_bipolar)
predictions_schz = anova_svr.predict(X_validation_schz)
mean_error_healthy, std_healthy = mean_error(predictions_healthy, y_validation)
mean_error_adhd, std_adhd = mean_error(predictions_adhd, y_validation_adhd)
mean_error_bipolar, std_bipolar = mean_error(predictions_bipolar, y_validation_bipolar)
mean_error_schz, std_schz = mean_error(predictions_schz, y_validation_schz)
print('healthy: ', mean_error_healthy.round(2),std_healthy.round(2))
print('adhd: ', mean_error_adhd, std_adhd.round(2))
print('bipolar: ', mean_error_bipolar, std_bipolar.round(2))
print('schz: ', mean_error_schz, std_schz.round(2))
print('mean age:')
print(np.mean(y_train))
print(np.mean(y_validation_adhd))
print(np.mean(y_validation_bipolar))
print(np.mean(y_validation_schz))
# Diagnose brain age
print('adhd: ', mean_error_adhd)
print('bipolar: ', mean_error_bipolar)
print('schz: ', mean_error_schz)
with open(output_dir+'log_'+log_file+'.txt', 'a') as f:
    f.write('healthy: {} {}\n'.format(mean_error_healthy.round(2), std_healthy.round(2)))
    f.write('adhd: {} {}\n'.format(mean_error_adhd, std_adhd.round(2)))
    f.write('bipolar: {} {}\n'.format(mean_error_bipolar, std_bipolar.round(2)))
    f.write('schz: {} {}\n'.format(mean_error_schz, std_schz.round(2)))
    f.write('mean age:\n')
    f.write(str(np.mean(y_train)) + '\n')
    f.write(str(np.mean(y_validation_adhd)) + '\n')
    f.write(str(np.mean(y_validation_bipolar)) + '\n')
    f.write(str(np.mean(y_validation_schz)) + '\n')
output_dir+'log_'+log_file+'.txt'
# age_pred = grid.predict(gm_maps_masked)
# np.mean(np.abs(age_pred-y_age))
# y_age
# r2 score
```
# Chapter 2: Our First Model
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
```
## Setting up DataLoaders
We'll use the built-in dataset of `torchvision.datasets.ImageFolder` to quickly set up some dataloaders of downloaded cat and fish images.
`check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
```
def check_image(path):
try:
im = Image.open(path)
return True
    except Exception:  # PIL could not open the file
        return False
```
Set up the transforms for every image:
* Resize to 64x64
* Convert to tensor
* Normalize using ImageNet mean & std
```
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
```
## Our First Model, SimpleNet
SimpleNet is a very simple combination of three Linear layers and ReLu activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
```
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
```
## Create an optimizer
Here, we're just using Adam as our optimizer with a learning rate of 0.001.
```
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
```
## Copy the model to GPU
Copy the model to the GPU if available.
```
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
```
## Training
Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network and performing validation for each epoch.
```
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(epochs):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
            correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets).view(-1)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
```
## Making predictions
Labels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
```
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
```
## Saving Models
We can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
```
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
```
```
import Bio
from Bio.KEGG import REST
from Bio.KEGG import Enzyme
import gzip
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import pubchempy as pc
enzyme_fields = [method for method in dir(Enzyme.Record()) if not method.startswith('_')]
data_matrix = []
with gzip.open('../datasets/KEGG_enzymes_all_data.gz', 'rt') as file:
for record in Enzyme.parse(file):
data_matrix.append([getattr(record, field) for field in enzyme_fields])
enzyme_df = pd.DataFrame(data_matrix, columns=enzyme_fields)
enzyme_df.head()
# example enzyme df search
enzyme_df[enzyme_df.entry == '1.1.1.153']['reaction']
enzyme_df['reaction'][153]
# get promiscuous dataframe and make it compact
promiscuous_df = enzyme_df[[True if len(rxn) > 1 else False for rxn in enzyme_df['reaction']]]
compact_promiscuous_df = promiscuous_df[['entry','reaction','product','substrate']]
```
#### check for reversible reactions
```
def get_reaction_list(df_with_reaction_column):
"""get the list of reaction from a dataframe that contains reaction column"""
reaction_list = []
for index,row in df_with_reaction_column.iterrows():
for reaction in row['reaction']:
reaction_split = reaction.split("[RN:")[-1]
if reaction_split.startswith("R") and not reaction_split.startswith("RN"):
for i in reaction_split[:-1].split(" "):
reaction_list.append(i)
return reaction_list
promiscuous_reaction_list = get_reaction_list(compact_promiscuous_df)
len(promiscuous_reaction_list)
def query_reversible_reaction(list_with_reaction):
    """get the list of reversible reactions"""
    reversible_reaction = []
    for reaction in list_with_reaction:
reaction_file = REST.kegg_get(reaction).read()
for i in reaction_file.rstrip().split("\n"):
if i.startswith("EQUATION") and "<=>" in i:
reversible_reaction.append(reaction)
return reversible_reaction
#check whether query_reversible_reaction function works.
reaction_file = REST.kegg_get("R00709").read()
for line in reaction_file.rstrip().split("\n"):
if line.startswith("EQUATION") and "<=>" in line:
print ("R00709")
print (line)
#will take forever to run
#reversible_reaction = query_reversible_reaction(promiscuous_reaction_list)
# it seem like all the reactions are reversible
#len(reversible_reaction)
```
### append substrate molecules to product column
```
# difficult to use iterrows because of inconsistent index
compact_promiscuous_df.head(10)
rowindex = np.arange(0,len(compact_promiscuous_df))
compact_promiscuous_df_index = compact_promiscuous_df.set_index(rowindex)
def combine_substrate_product(df_with_ordered_index):
"""append substrates to product column. should not be run multiple times.
it will append substrates multiple times"""
newdf = df_with_ordered_index
for index,row in df_with_ordered_index.iterrows():
productlist = row['product']
substratelist = row['substrate']
newdf.iloc[index,2] = productlist + substratelist
return newdf
# do not run this multiple times!
combined_df = combine_substrate_product(compact_promiscuous_df_index)
# check whether it is added multiple times
# if appended multiple times, need to rerun cells from the very beginning
combined_df.iloc[0,2]
compact_combined_df = combined_df[['entry','product']]
compact_combined_df.head(10)
# save substrate and product combined dataframe to csv
# might remove this dataframe from the git repo soon
# substrate_to_product_promiscuous_df.to_csv("../datasets/substrate_product_combined_promiscuous.csv")
```
### cofactor removal
```
len(compact_combined_df)
# test text splicing
test='aldehyde [CPD:C00071]'
test[-7:-1]
def get_cofactor_list(cofactor_df,CPDcolumn):
cofactor_list = [cofactor[4:10] for cofactor in cofactor_df[CPDcolumn]]
return cofactor_list
cofactor_df=pd.read_csv("../datasets/cofactor_list.csv")
cofactor_df.head(10)
cofactor_list = get_cofactor_list(cofactor_df,"CPD")
cofactor_list
def get_cpd(compound_full):
"when full name of compound inserted, return cpd id"
cpd = compound_full[-7:-1]
return cpd
def rm_cofactor_only_cpd(df,compound_columnname,cofactor_list):
newdf = df.drop(["product"],axis=1)
cleaned_compound_column = []
for index,row in df.iterrows():
cpd_compound_list =[]
        for compound in row[compound_columnname]:
            if "CPD" in compound:
                onlycpd = get_cpd(compound)
                if onlycpd not in cofactor_list:
                    cpd_compound_list.append(onlycpd)
if len(cpd_compound_list)==0:
cleaned_compound_column.append("NA")
else:
cleaned_compound_column.append(cpd_compound_list)
newdf['product'] = cleaned_compound_column
return newdf
cleaned_df_productinList = rm_cofactor_only_cpd(compact_combined_df,'product',cofactor_list)
#cleaned_promiscuous_enzyme_df.to_csv("../datasets/cleaned_promiscous_enzyme_df.csv", header=['entry','product'])
#remove enzymes with no products
noNAenzyme = cleaned_df_productinList.loc[cleaned_df_productinList['product']!='NA'].copy()  # .copy() avoids SettingWithCopyWarning on the rename below
len(noNAenzyme)
```
### format the dataframe for PubChem ID and SMILES string searches
```
noNAenzyme.rename(columns={'product':'products'}, inplace=True)
noNAenzyme
def itemlist_eachrow(df,oldcolumnname,newcolumnname,enzymecolumn):
newdf = df[oldcolumnname].\
apply(pd.Series).\
merge(df, left_index=True, right_index=True).\
drop([oldcolumnname],axis=1).\
melt(id_vars=[enzymecolumn],value_name=newcolumnname).\
sort_values(by=[enzymecolumn]).\
dropna().\
drop(columns=["variable"])
return newdf
expanded_noNAenzyme = itemlist_eachrow(noNAenzyme,"products","product","entry")
#dropped duplicates within product column
expanded_noNAenzyme.drop_duplicates(['entry','product'],keep='first',inplace=True)
expanded_noNAenzyme
len(expanded_noNAenzyme)
```
### pubchemID search
```
import gzip
import re
from Bio.KEGG import Compound
def compound_records_to_df(file_path):
"""
Input should be a filepath string pointing to a gzipped text file of KEGG enzyme records.
Function parses all records using Biopython.Bio.KEGG.Compound parser, and returns a pandas dataframe.
"""
compound_fields = [method for method in dir(Compound.Record()) if not method.startswith('_')]
data_matrix = []
with gzip.open(file_path, 'rt') as file:
for record in Compound.parse(file):
data_matrix.append([getattr(record, field) for field in compound_fields])
compound_df = pd.DataFrame(data_matrix, columns=compound_fields)
return compound_df
compound_df = compound_records_to_df('../datasets/KEGG_compound_db_entries.gz')
def extract_PubChem_id(field):
"""
This function uses regular expressions to extract the PubChem compound IDs from a field in a record
"""
regex = "'PubChem', \[\'(\d+)\'\]\)" # matches "'PubChem', ['" characters exactly, then captures any number of digits (\d+), before another literal "']" character match
ids = re.findall(regex, str(field), re.IGNORECASE)
if len(ids) > 0:
pubchem_id = ids[0]
else:
pubchem_id = ''
return pubchem_id
PubChemID_list = []
for _, row in compound_df.iterrows():
pubchem_id = extract_PubChem_id(row['dblinks'])
PubChemID_list.append(pubchem_id)
compound_df['PubChem'] = PubChemID_list
compound_df.head(10)
joint_enzyme_compound_df = expanded_noNAenzyme.merge(compound_df, left_on='product', right_on='entry')
joint_enzyme_compound_df.head(10)
compact_joint_enzyme_compound_df = joint_enzyme_compound_df[['entry_x','product','PubChem']].\
sort_values(by=['entry_x'])
compact_joint_enzyme_compound_df.head(10)
print (len(compact_joint_enzyme_compound_df))
#rename column names
compact_joint_enzyme_compound_df.rename(columns={'entry_x':'entry','product':'KEGG'},inplace=True)
compact_joint_enzyme_compound_df = compact_joint_enzyme_compound_df.loc[compact_joint_enzyme_compound_df['PubChem']!='']
len(compact_joint_enzyme_compound_df)
compact_joint_enzyme_compound_df.columns
shortened_df = compact_joint_enzyme_compound_df.copy()
short_50 = shortened_df.head(50)
```
```
import pubchempy as pc  # required for the PubChem lookups below

def sid_to_smiles(sid):
    """Take an SID and return the associated isomeric SMILES string."""
    substance = pc.Substance.from_sid(sid)
    cid = substance.standardized_cid
    compound = pc.get_compounds(cid)[0]
    return compound.isomeric_smiles

def kegg_df_to_smiles(kegg_df):
    """Take a pandas dataframe that includes a column of SIDs, look up the
    isomeric SMILES for each SID, and add the results as a SMILES column."""
    res = []
    unsuccessful_list = []
    for i in range(len(kegg_df)):
        sid = kegg_df.iloc[i, 2]  # column index 2 holds the SID
        try:
            res.append(sid_to_smiles(sid))
        except Exception:
            res.append('none')
            unsuccessful_list.append(sid)
    kegg_df.insert(3, column='SMILES', value=res)
    kegg_df.to_csv(r'../datasets/cleaned_kegg_with_smiles')
    return kegg_df, unsuccessful_list
```
The cell below redefines these functions so that they also return the CID:
```
def sid_to_smiles(sid):
"""Takes an SID and prints the associated SMILES string."""
substance = pc.Substance.from_sid(sid)
cid = substance.standardized_cid
compound = pc.get_compounds(cid)[0]
return compound.isomeric_smiles, cid
def kegg_df_to_smiles(kegg_df):
"""Takes a pandas dataframe that includes a column of SIDs, gets the isomeric SMILES for each SID, stores them as a list, then adds a SMILES column."""
res = []
cid_list = []
unsuccessful_list = []
for i in range(len(kegg_df)):
        sid = kegg_df.iloc[i, 2]  # column index 2 holds the SID
try:
smile_result = sid_to_smiles(sid)[0]
res.append(smile_result)
cid_result = sid_to_smiles(sid)[1]
cid_list.append(cid_result)
        except Exception:
res.append('none')
cid_list.append('none')
unsuccessful_list.append(sid)
pass
kegg_df.insert(3, column='CID', value=cid_list)
    kegg_df.insert(4, column='SMILES', value=res)  # SMILES becomes column 4
kegg_df.to_csv(r'../datasets/df_cleaned_kegg_with_smiles.csv')
return kegg_df, unsuccessful_list
kegg_df_to_smiles(compact_joint_enzyme_compound_df)
ugh = ['3371',
'3526',
'4627',
'4764',
'17396586',
'17396584',
'96023258',
'96023257',
'96023254',
'96023253',
'96023256',
'96023255',
'135626278',
'135626279',
'7881',
'7880',
'8711',
'17396499',
'17396498',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'5930',
'6046',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'135626312',
'17396588',
'172232403',
'5930',
'6046',
'350078274',
'350078276',
'350078273',
'5930',
'6046',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'5930',
'6046',
'5930',
'6046',
'5930',
'6046',
'5930',
'6046',
'6046',
'5930',
'5930',
'6046',
'3936',
'3931',
'3931',
'3936',
'3439',
'3438',
'3931',
'3936',
'3439',
'3438',
'3438',
'3439',
'3439',
'3438',
'3439',
'3438',
'3439',
'3438',
'3438',
'3439',
'3439',
'3438',
'3931',
'3936',
'3439',
'3438',
'4242',
'4245',
'336445168',
'336445170',
'336445167',
'4245',
'336445169',
'4242',
'14292',
'4245',
'4242',
'254741478',
'254741479',
'4242',
'4245',
'4245',
'4242',
'4245',
'4242',
'4245',
'4242',
'4242',
'336445167',
'4245',
'4245',
'4242',
'336445169',
'3438',
'3439',
'336445167',
'3439',
'3438',
'4242',
'4245',
'4245',
'4242',
'4242',
'4245',
'4245',
'336445169',
'336445167',
'4242',
'5930',
'6046',
'6046',
'5930',
'4242',
'4245',
'6046',
'5930',
'3438',
'3439',
'4245',
'4242',
'3438',
'3439',
'7734',
'7735',
'340125640',
'328082950',
'340125643',
'96024377',
'3426',
'3425',
'17396594',
'17396595',
'3438',
'3439',
'6918',
'7171',
'3438',
'3439',
'3426',
'3425',
'3425',
'3426',
'3636',
'5929',
'3635',
'6627',
'5741',
'3887',
'315431311',
'6828',
'315431312',
'3887',
'6828',
'350078249',
'350078250',
'350078252',
'350078251',
'3341',
'6549',
'6915',
'5703',
'4730',
'96023244',
'7070',
'7272',
'6622',
'315431316',
'315431317',
'6902',
'5130',
'340125666',
'3342',
'340125665',
'340125667',
'329728080',
'329728081',
'329728079',
'4794',
'5136',
'329728078',
'96023263',
'4794',
'5136',
'328082930',
'328082931',
'328082929',
'254741191',
'7462',
'7362',
'7362',
'135626269',
'135626270',
'135626271',
'5260',
'6747',
'254741326',
'8311',
'340125667',
'340125665',
'340125666',
'5260',
'135626270',
'333497482',
'333497484',
'333497483',
'5930',
'6046',
'9490',
'47205174',
'9491',
'3341',
'5330',
'3501',
'124489752',
'124489757',
'5279',
'124489750',
'3914',
'3643',
'3465',
'3635',
'3636',
'5703',
'329728063',
'47205136',
'47205137',
'376219005',
'3426',
'3425',
'4998',
'3712',
'3360',
'3465',
'135626269',
'135626270',
'5260',
'47205322',
'3887',
'3371',
'7130',
'7224',
'4416',
'3462',
'4266',
'3360',
'8030',
'254741203',
'8029',
'5259',
'5703',
'3887',
'5813',
'3360',
'3740',
'4348',
'3501',
'17396557',
'3746',
'5502',
'5185',
'17396556',
'5502',
'3746',
'17396557',
'5185',
'17396556',
'5813',
'3445',
'3348',
'3911',
'47205545',
'47205548',
'3341',
'3341',
'3348',
'3341',
'3341',
'3348',
'7736']
len(ugh) # invalid SIDs because of KEGG
myset = set(ugh)
len(myset) # unique invalid SIDs because of KEGG
pd.read_csv('../datasets/playground_df_cleaned_kegg_with_smiles.csv')
```
# Deep Learning for Natural Language Processing: Exercise 03
Moritz Eck (14-715-296)<br/>
University of Zurich
Please see the section right at the bottom of this notebook for the discussion of the results as well as the answers to the exercise questions.
### Mount Google Drive (Please do this step first => only needs to be done once!)
This mounts the user's Google Drive directly.
On my personal machine, the input files are stored inside the Google Drive folder at:<br/>
**~/Google Drive/Colab Notebooks/ex03/**
Below I have defined the default filepath as **default_fp = 'drive/Colab Notebooks/ex03/'**.<br/>
Please change the filepath to the location where you have the input file and the GloVe embeddings saved.
```
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
```
**Mount Google Drive**
```
!mkdir -p drive
!google-drive-ocamlfuse drive
```
### Install the required packages
```
!pip install pandas
!pip install numpy
!pip install tensorflow
```
### Check that the GPU is used
```
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
### Helper Functions
```
# encoding: UTF-8
# Copyright 2017 Google.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import glob
import sys
# size of the alphabet that we work with
ALPHASIZE = 98
# Specification of the supported alphabet (subset of ASCII-7)
# 10 line feed LF
# 32-64 numbers and punctuation
# 65-90 upper-case letters
# 91-96 more punctuation
# 97-122 lower-case letters
# 123-126 more punctuation
def convert_from_alphabet(a):
"""Encode a character
:param a: one character
:return: the encoded value
"""
if a == 9:
return 1
if a == 10:
return 127 - 30 # LF
elif 32 <= a <= 126:
return a - 30
else:
return 0 # unknown
# encoded values:
# unknown = 0
# tab = 1
# space = 2
# all chars from 32 to 126 = c-30
# LF mapped to 127-30
def convert_to_alphabet(c, avoid_tab_and_lf=False):
"""Decode a code point
:param c: code point
:param avoid_tab_and_lf: if True, tab and line feed characters are replaced by '\'
:return: decoded character
"""
if c == 1:
return 32 if avoid_tab_and_lf else 9 # space instead of TAB
if c == 127 - 30:
return 92 if avoid_tab_and_lf else 10 # \ instead of LF
if 32 <= c + 30 <= 126:
return c + 30
else:
return 0 # unknown
def encode_text(s):
"""Encode a string.
:param s: a text string
:return: encoded list of code points
"""
return list(map(lambda a: convert_from_alphabet(ord(a)), s))
def decode_to_text(c, avoid_tab_and_lf=False):
"""Decode an encoded string.
:param c: encoded list of code points
:param avoid_tab_and_lf: if True, tab and line feed characters are replaced by '\'
:return:
"""
return "".join(map(lambda a: chr(convert_to_alphabet(a, avoid_tab_and_lf)), c))
def sample_from_probabilities(probabilities, topn=ALPHASIZE):
"""Roll the dice to produce a random integer in the [0..ALPHASIZE] range,
according to the provided probabilities. If topn is specified, only the
topn highest probabilities are taken into account.
:param probabilities: a list of size ALPHASIZE with individual probabilities
:param topn: the number of highest probabilities to consider. Defaults to all of them.
:return: a random integer
"""
p = np.squeeze(probabilities)
p[np.argsort(p)[:-topn]] = 0
p = p / np.sum(p)
return np.random.choice(ALPHASIZE, 1, p=p)[0]
def rnn_minibatch_sequencer(raw_data, batch_size, sequence_size, nb_epochs):
"""
Divides the data into batches of sequences so that all the sequences in one batch
continue in the next batch. This is a generator that will keep returning batches
until the input data has been seen nb_epochs times. Sequences are continued even
between epochs, apart from one, the one corresponding to the end of raw_data.
    The remainder at the end of raw_data that does not fit in a full batch is ignored.
:param raw_data: the training text
:param batch_size: the size of a training minibatch
:param sequence_size: the unroll size of the RNN
:param nb_epochs: number of epochs to train on
:return:
x: one batch of training sequences
        y: one batch of target sequences, i.e. training sequences shifted by 1
epoch: the current epoch number (starting at 0)
"""
data = np.array(raw_data)
data_len = data.shape[0]
# using (data_len-1) because we must provide for the sequence shifted by 1 too
nb_batches = (data_len - 1) // (batch_size * sequence_size)
assert nb_batches > 0, "Not enough data, even for a single batch. Try using a smaller batch_size."
rounded_data_len = nb_batches * batch_size * sequence_size
xdata = np.reshape(data[0:rounded_data_len], [batch_size, nb_batches * sequence_size])
ydata = np.reshape(data[1:rounded_data_len + 1], [batch_size, nb_batches * sequence_size])
for epoch in range(nb_epochs):
for batch in range(nb_batches):
x = xdata[:, batch * sequence_size:(batch + 1) * sequence_size]
y = ydata[:, batch * sequence_size:(batch + 1) * sequence_size]
x = np.roll(x, -epoch, axis=0) # to continue the text from epoch to epoch (do not reset rnn state!)
y = np.roll(y, -epoch, axis=0)
yield x, y, epoch
def find_book(index, bookranges):
return next(
book["name"] for book in bookranges if (book["start"] <= index < book["end"]))
def find_book_index(index, bookranges):
return next(
i for i, book in enumerate(bookranges) if (book["start"] <= index < book["end"]))
def print_learning_learned_comparison(X, Y, losses, bookranges, batch_loss, batch_accuracy, epoch_size, index, epoch):
"""Display utility for printing learning statistics"""
print()
# epoch_size in number of batches
batch_size = X.shape[0] # batch_size in number of sequences
sequence_len = X.shape[1] # sequence_len in number of characters
start_index_in_epoch = index % (epoch_size * batch_size * sequence_len)
for k in range(1):
index_in_epoch = index % (epoch_size * batch_size * sequence_len)
decx = decode_to_text(X[k], avoid_tab_and_lf=True)
decy = decode_to_text(Y[k], avoid_tab_and_lf=True)
bookname = find_book(index_in_epoch, bookranges)
formatted_bookname = "{: <10.40}".format(bookname) # min 10 and max 40 chars
epoch_string = "{:4d}".format(index) + " (epoch {}) ".format(epoch)
loss_string = "loss: {:.5f}".format(losses[k])
print_string = epoch_string + formatted_bookname + " │ {} │ {} │ {}"
print(print_string.format(decx, decy, loss_string))
index += sequence_len
# box formatting characters:
# │ \u2502
# ─ \u2500
# └ \u2514
# ┘ \u2518
# ┴ \u2534
# ┌ \u250C
# ┐ \u2510
format_string = "└{:─^" + str(len(epoch_string)) + "}"
format_string += "{:─^" + str(len(formatted_bookname)) + "}"
format_string += "┴{:─^" + str(len(decx) + 2) + "}"
format_string += "┴{:─^" + str(len(decy) + 2) + "}"
format_string += "┴{:─^" + str(len(loss_string)) + "}┘"
footer = format_string.format('INDEX', 'BOOK NAME', 'TRAINING SEQUENCE', 'PREDICTED SEQUENCE', 'LOSS')
print(footer)
# print statistics
batch_index = start_index_in_epoch // (batch_size * sequence_len)
batch_string = "batch {}/{} in epoch {},".format(batch_index, epoch_size, epoch)
stats = "{: <28} batch loss: {:.5f}, batch accuracy: {:.5f}".format(batch_string, batch_loss, batch_accuracy)
print()
print("TRAINING STATS: {}".format(stats))
class Progress:
"""Text mode progress bar.
Usage:
p = Progress(30)
p.step()
p.step()
    p.step(reset=True) # to restart from 0%
The progress bar displays a new header at each restart."""
def __init__(self, maxi, size=100, msg=""):
"""
:param maxi: the number of steps required to reach 100%
:param size: the number of characters taken on the screen by the progress bar
        :param msg: the message displayed in the header of the progress bar
"""
self.maxi = maxi
self.p = self.__start_progress(maxi)() # () to get the iterator from the generator
self.header_printed = False
self.msg = msg
self.size = size
def step(self, reset=False):
if reset:
self.__init__(self.maxi, self.size, self.msg)
if not self.header_printed:
self.__print_header()
next(self.p)
def __print_header(self):
print()
format_string = "0%{: ^" + str(self.size - 6) + "}100%"
print(format_string.format(self.msg))
self.header_printed = True
def __start_progress(self, maxi):
def print_progress():
# Bresenham's algorithm. Yields the number of dots printed.
# This will always print 100 dots in max invocations.
dx = maxi
dy = self.size
d = dy - dx
for x in range(maxi):
k = 0
while d >= 0:
print('=', end="", flush=True)
k += 1
d -= dx
d += dy
yield k
return print_progress
def read_data_files(directory, validation=True):
"""Read data files according to the specified glob pattern
Optionnaly set aside the last file as validation data.
No validation data is returned if there are 5 files or less.
:param directory: for example "data/*.txt"
:param validation: if True (default), sets the last file aside as validation data
:return: training data, validation data, list of loaded file names with ranges
If validation is
"""
codetext = []
bookranges = []
shakelist = glob.glob(directory, recursive=True)
for shakefile in shakelist:
shaketext = open(shakefile, "r")
print("Loading file " + shakefile)
start = len(codetext)
codetext.extend(encode_text(shaketext.read()))
end = len(codetext)
bookranges.append({"start": start, "end": end, "name": shakefile.rsplit("/", 1)[-1]})
shaketext.close()
if len(bookranges) == 0:
sys.exit("No training data has been found. Aborting.")
# For validation, use roughly 90K of text,
# but no more than 10% of the entire text
# and no more than 1 book in 5 => no validation at all for 5 files or fewer.
# 10% of the text is how many files ?
total_len = len(codetext)
validation_len = 0
nb_books1 = 0
for book in reversed(bookranges):
validation_len += book["end"]-book["start"]
nb_books1 += 1
if validation_len > total_len // 10:
break
# 90K of text is how many books ?
validation_len = 0
nb_books2 = 0
for book in reversed(bookranges):
validation_len += book["end"]-book["start"]
nb_books2 += 1
if validation_len > 90*1024:
break
# 20% of the books is how many books ?
nb_books3 = len(bookranges) // 5
# pick the smallest
nb_books = min(nb_books1, nb_books2, nb_books3)
if nb_books == 0 or not validation:
cutoff = len(codetext)
else:
cutoff = bookranges[-nb_books]["start"]
valitext = codetext[cutoff:]
codetext = codetext[:cutoff]
return codetext, valitext, bookranges
def print_data_stats(datalen, valilen, epoch_size):
datalen_mb = datalen/1024.0/1024.0
valilen_kb = valilen/1024.0
print("Training text size is {:.2f}MB with {:.2f}KB set aside for validation.".format(datalen_mb, valilen_kb)
+ " There will be {} batches per epoch".format(epoch_size))
def print_validation_header(validation_start, bookranges):
bookindex = find_book_index(validation_start, bookranges)
books = ''
for i in range(bookindex, len(bookranges)):
books += bookranges[i]["name"]
if i < len(bookranges)-1:
books += ", "
print("{: <60}".format("Validating on " + books), flush=True)
def print_validation_stats(loss, accuracy):
print("VALIDATION STATS: loss: {:.5f}, accuracy: {:.5f}".format(loss,
accuracy))
def print_text_generation_header():
print()
print("┌{:─^111}┐".format('Generating random text from learned state'))
def print_text_generation_footer():
print()
print("└{:─^111}┘".format('End of generation'))
def frequency_limiter(n, multiple=1, modulo=0):
def limit(i):
return i % (multiple * n) == modulo*multiple
return limit
```
### Training the Model
```
# encoding: UTF-8
# Copyright 2017 Google.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib.util
import tensorflow as tf
from tensorflow.contrib import layers
from tensorflow.contrib import rnn # rnn stuff temporarily in contrib, moving back to code in TF 1.1
import os
import time
import math
import numpy as np
tf.set_random_seed(0)
# model parameters
#
# Usage:
# Training only:
# Leave all the parameters as they are
# Disable validation to run a bit faster (set validation=False below)
# You can follow progress in Tensorboard: tensorboard --log-dir=log
# Training and experimentation (default):
# Keep validation enabled
# You can now play with the parameters and follow the effects in Tensorboard
# A good choice of parameters ensures that the testing and validation curves stay close
# To see the curves drift apart ("overfitting") try to use an insufficient amount of
# training data (shakedir = "shakespeare/t*.txt" for example)
#
SEQLEN = 30
BATCHSIZE = 200
ALPHASIZE = ALPHASIZE
INTERNALSIZE = 512
NLAYERS = 3
learning_rate = 0.001 # fixed learning rate
dropout_pkeep = 0.8 # some dropout
# load data, either shakespeare, or the Python source of Tensorflow itself
default_fp = 'drive/Colab Notebooks/ex03/'
shakedir = default_fp + "/shakespeare/*.txt"
#shakedir = "../tensorflow/**/*.py"
codetext, valitext, bookranges = read_data_files(shakedir, validation=True)
# display some stats on the data
epoch_size = len(codetext) // (BATCHSIZE * SEQLEN)
print_data_stats(len(codetext), len(valitext), epoch_size)
#
# the model (see FAQ in README.md)
#
lr = tf.placeholder(tf.float32, name='lr') # learning rate
pkeep = tf.placeholder(tf.float32, name='pkeep') # dropout parameter
batchsize = tf.placeholder(tf.int32, name='batchsize')
# inputs
X = tf.placeholder(tf.uint8, [None, None], name='X') # [ BATCHSIZE, SEQLEN ]
Xo = tf.one_hot(X, ALPHASIZE, 1.0, 0.0) # [ BATCHSIZE, SEQLEN, ALPHASIZE ]
# expected outputs = same sequence shifted by 1 since we are trying to predict the next character
Y_ = tf.placeholder(tf.uint8, [None, None], name='Y_') # [ BATCHSIZE, SEQLEN ]
Yo_ = tf.one_hot(Y_, ALPHASIZE, 1.0, 0.0) # [ BATCHSIZE, SEQLEN, ALPHASIZE ]
# input state
Hin = tf.placeholder(tf.float32, [None, INTERNALSIZE*NLAYERS], name='Hin') # [ BATCHSIZE, INTERNALSIZE * NLAYERS]
# using a NLAYERS=3 layers of GRU cells, unrolled SEQLEN=30 times
# dynamic_rnn infers SEQLEN from the size of the inputs Xo
# How to properly apply dropout in RNNs: see README.md
cells = [rnn.GRUCell(INTERNALSIZE) for _ in range(NLAYERS)]
# "naive dropout" implementation
dropcells = [rnn.DropoutWrapper(cell,input_keep_prob=pkeep) for cell in cells]
multicell = rnn.MultiRNNCell(dropcells, state_is_tuple=False)
multicell = rnn.DropoutWrapper(multicell, output_keep_prob=pkeep) # dropout for the softmax layer
Yr, H = tf.nn.dynamic_rnn(multicell, Xo, dtype=tf.float32, initial_state=Hin)
# Yr: [ BATCHSIZE, SEQLEN, INTERNALSIZE ]
# H: [ BATCHSIZE, INTERNALSIZE*NLAYERS ] # this is the last state in the sequence
H = tf.identity(H, name='H') # just to give it a name
# Softmax layer implementation:
# Flatten the first two dimension of the output [ BATCHSIZE, SEQLEN, ALPHASIZE ] => [ BATCHSIZE x SEQLEN, ALPHASIZE ]
# then apply softmax readout layer. This way, the weights and biases are shared across unrolled time steps.
# From the readout point of view, a value coming from a sequence time step or a minibatch item is the same thing.
Yflat = tf.reshape(Yr, [-1, INTERNALSIZE]) # [ BATCHSIZE x SEQLEN, INTERNALSIZE ]
Ylogits = layers.linear(Yflat, ALPHASIZE) # [ BATCHSIZE x SEQLEN, ALPHASIZE ]
Yflat_ = tf.reshape(Yo_, [-1, ALPHASIZE]) # [ BATCHSIZE x SEQLEN, ALPHASIZE ]
loss = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Yflat_) # [ BATCHSIZE x SEQLEN ]
loss = tf.reshape(loss, [batchsize, -1]) # [ BATCHSIZE, SEQLEN ]
Yo = tf.nn.softmax(Ylogits, name='Yo') # [ BATCHSIZE x SEQLEN, ALPHASIZE ]
Y = tf.argmax(Yo, 1) # [ BATCHSIZE x SEQLEN ]
Y = tf.reshape(Y, [batchsize, -1], name="Y") # [ BATCHSIZE, SEQLEN ]
# choose the optimizer
# train_step = tf.train.AdamOptimizer(lr)
train_step = tf.train.RMSPropOptimizer(lr, momentum=0.9)
# train_step = tf.train.GradientDescentOptimizer(lr)
name = train_step.get_name()
train_step = train_step.minimize(loss)
# stats for display
seqloss = tf.reduce_mean(loss, 1)
batchloss = tf.reduce_mean(seqloss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(Y_, tf.cast(Y, tf.uint8)), tf.float32))
loss_summary = tf.summary.scalar("batch_loss", batchloss)
acc_summary = tf.summary.scalar("batch_accuracy", accuracy)
summaries = tf.summary.merge([loss_summary, acc_summary])
# Init Tensorboard stuff. This will save Tensorboard information into a different
# folder at each run named 'log/<timestamp>/'. Two sets of data are saved so that
# you can compare training and validation curves visually in Tensorboard.
timestamp = str(math.trunc(time.time()))
summary_writer = tf.summary.FileWriter(default_fp + "/log/" + timestamp + '-{}'.format(name) + "-training", flush_secs=15)
validation_writer = tf.summary.FileWriter(default_fp + "/log/" + timestamp + '-{}'.format(name) + "-validation", flush_secs=15)
# Init for saving models. They will be saved into a directory named 'checkpoints'.
# Only the last checkpoint is kept.
if not os.path.exists("checkpoints"):
os.mkdir("checkpoints")
saver = tf.train.Saver(max_to_keep=1000)
# for display: init the progress bar
DISPLAY_FREQ = 50
_50_BATCHES = DISPLAY_FREQ * BATCHSIZE * SEQLEN
progress = Progress(DISPLAY_FREQ, size=111+2, msg="Training on next "+str(DISPLAY_FREQ)+" batches")
# init
istate = np.zeros([BATCHSIZE, INTERNALSIZE*NLAYERS]) # initial zero input state
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
step = 0
# training loop
for x, y_, epoch in rnn_minibatch_sequencer(codetext, BATCHSIZE, SEQLEN, nb_epochs=4):
# train on one minibatch
feed_dict = {X: x, Y_: y_, Hin: istate, lr: learning_rate, pkeep: dropout_pkeep, batchsize: BATCHSIZE}
_, y, ostate = sess.run([train_step, Y, H], feed_dict=feed_dict)
# log training data for Tensorboard display a mini-batch of sequences (every 50 batches)
if step % _50_BATCHES == 0:
feed_dict = {X: x, Y_: y_, Hin: istate, pkeep: 1.0, batchsize: BATCHSIZE} # no dropout for validation
y, l, bl, acc, smm = sess.run([Y, seqloss, batchloss, accuracy, summaries], feed_dict=feed_dict)
print_learning_learned_comparison(x, y, l, bookranges, bl, acc, epoch_size, step, epoch)
summary_writer.add_summary(smm, step)
summary_writer.flush()
# run a validation step every 50 batches
# The validation text should be a single sequence but that's too slow (1s per 1024 chars!),
# so we cut it up and batch the pieces (slightly inaccurate)
# tested: validating with 5K sequences instead of 1K is only slightly more accurate, but a lot slower.
if step % _50_BATCHES == 0 and len(valitext) > 0:
VALI_SEQLEN = 1*1024 # Sequence length for validation. State will be wrong at the start of each sequence.
bsize = len(valitext) // VALI_SEQLEN
print_validation_header(len(codetext), bookranges)
vali_x, vali_y, _ = next(rnn_minibatch_sequencer(valitext, bsize, VALI_SEQLEN, 1)) # all data in 1 batch
vali_nullstate = np.zeros([bsize, INTERNALSIZE*NLAYERS])
feed_dict = {X: vali_x, Y_: vali_y, Hin: vali_nullstate, pkeep: 1.0, # no dropout for validation
batchsize: bsize}
ls, acc, smm = sess.run([batchloss, accuracy, summaries], feed_dict=feed_dict)
print_validation_stats(ls, acc)
# save validation data for Tensorboard
validation_writer.add_summary(smm, step)
validation_writer.flush()
# # display a short text generated with the current weights and biases (every 500 batches)
# if step // 10 % _50_BATCHES == 0:
# print_text_generation_header()
# ry = np.array([[convert_from_alphabet(ord("K"))]])
# rh = np.zeros([1, INTERNALSIZE * NLAYERS])
# for k in range(1000):
# ryo, rh = sess.run([Yo, H], feed_dict={X: ry, pkeep: 1.0, Hin: rh, batchsize: 1})
# rc = sample_from_probabilities(ryo, topn=10 if epoch <= 1 else 2)
# print(chr(convert_to_alphabet(rc)), end="")
# ry = np.array([[rc]])
# print_text_generation_footer()
# # save a checkpoint (every 500 batches)
# if step // 10 % _50_BATCHES == 0:
# saved_file = saver.save(sess, default_fp + '/checkpoints/rnn_train_' + timestamp, global_step=step)
# print("Saved file: " + saved_file)
# display progress bar
progress.step(reset=step % _50_BATCHES == 0)
# loop state around
istate = ostate
step += BATCHSIZE * SEQLEN
# all runs: SEQLEN = 30, BATCHSIZE = 100, ALPHASIZE = 98, INTERNALSIZE = 512, NLAYERS = 3
# run 1477669632 decaying learning rate 0.001-0.0001-1e7 dropout 0.5: not good
# run 1477670023 lr=0.001 no dropout: very good
# Tensorflow runs:
# 1485434262
# trained on shakespeare/t*.txt only. Validation on 1K sequences
# validation loss goes up from step 5M (overfitting because of small dataset)
# 1485436038
# trained on shakespeare/t*.txt only. Validation on 5K sequences
# On 5K sequences validation accuracy is slightly higher and loss slightly lower
# => sequence breaks do introduce inaccuracies but the effect is small
# 1485437956
# Trained on shakespeare/*.txt. Validation on 1K sequences
# On this much larger dataset, validation loss still decreasing after 6 epochs (step 35M)
# 1495447371
# Trained on shakespeare/*.txt no dropout, 30 epochs
# Validation loss starts going up after 10 epochs (overfitting)
# 1495440473
# Trained on shakespeare/*.txt "naive dropout" pkeep=0.8, 30 epochs
# Dropout brings the validation loss under control, preventing it from
# going up but the effect is small.
```
## Step 0: Latent Dirichlet Allocation ##
LDA is used to classify the text in a document under a particular topic. It builds a topic-per-document model and a words-per-topic model, both modeled as Dirichlet distributions.
* Each document is modeled as a multinomial distribution of topics, and each topic is modeled as a multinomial distribution of words.
* LDA assumes that every chunk of text we feed into it contains words that are somehow related. Therefore, choosing the right corpus of data is crucial.
* It also assumes documents are produced from a mixture of topics. Those topics then generate words based on their probability distributions.
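The generative assumptions above can be sketched in a few lines of NumPy. This is an illustrative toy, not part of the tutorial pipeline; the topic count, vocabulary size, and concentration parameters below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, vocab_size = 3, 8

# Per-document topic mixture and per-topic word distributions, both Dirichlet draws
theta = rng.dirichlet(alpha=[0.5] * n_topics)                  # document-topic mixture
phi = rng.dirichlet(alpha=[0.5] * vocab_size, size=n_topics)   # one word distribution per topic

# Generate a toy "document" of 10 word ids: pick a topic, then a word from it
doc = []
for _ in range(10):
    z = rng.choice(n_topics, p=theta)       # topic assignment for this word
    w = rng.choice(vocab_size, p=phi[z])    # word drawn from that topic's distribution
    doc.append(int(w))

print(doc)
```

Each row of `phi` and the vector `theta` are valid probability distributions (they sum to 1), which is exactly what the multinomial-of-topics / multinomial-of-words picture requires.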
## Step 1: Load the dataset
The dataset we'll use is a list of over one million news headlines published over a period of 15 years. We'll start by loading it from the `abcnews-date-text.csv` file.
```
'''
Load the dataset from the CSV and save it to 'data_text'
'''
import pandas as pd
data = pd.read_csv('abcnews-date-text.csv', error_bad_lines=False);
# We only need the Headlines text column from the data
data_text = data[:300000][['headline_text']];
data_text['index'] = data_text.index
documents = data_text
```
Let's glance at the dataset:
```
'''
Get the total number of documents
'''
print(len(documents))
documents[:5]
```
## Step 2: Data Preprocessing ##
We will perform the following steps:
* **Tokenization**: Split the text into sentences and the sentences into words. Lowercase the words and remove punctuation.
* Words with 3 or fewer characters are removed.
* All **stopwords** are removed.
* Words are **lemmatized** - words in third person are changed to first person and verbs in past and future tenses are changed into present.
* Words are **stemmed** - words are reduced to their root form.
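Before using the real gensim/nltk pipeline below, here is a pure-Python sketch of the first three rules (tokenize and lowercase, drop short words, drop stopwords). The stopword list is a tiny stand-in, and lemmatization/stemming are omitted:

```python
# Hypothetical mini stopword list, standing in for gensim's STOPWORDS
STOPWORDS = {"the", "a", "in", "on", "of"}

def preprocess_sketch(text):
    tokens = text.lower().split()                         # tokenize + lowercase
    tokens = [t.strip(".,!?") for t in tokens]            # strip punctuation
    tokens = [t for t in tokens if len(t) > 3]            # drop words of 3 or fewer characters
    tokens = [t for t in tokens if t not in STOPWORDS]    # drop stopwords
    return tokens

print(preprocess_sketch("The Rain in Spain falls mainly on the plain."))
# → ['rain', 'spain', 'falls', 'mainly', 'plain']
```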
```
'''
Loading Gensim and nltk libraries
'''
# pip install gensim
import gensim
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from nltk.stem import WordNetLemmatizer, SnowballStemmer
from nltk.stem.porter import *
import numpy as np
np.random.seed(400)
import nltk
nltk.download('wordnet')
```
### Lemmatizer Example
Before preprocessing our dataset, let's first look at a lemmatization example. What is the output if we lemmatize the word 'went'?
```
print(WordNetLemmatizer().lemmatize('went', pos = 'v')) # past tense to present tense
```
### Stemmer Example
Let's also look at a stemming example. Let's throw a number of words at the stemmer and see how it deals with each one:
```
stemmer = SnowballStemmer("english")
original_words = ['caresses', 'flies', 'dies', 'mules', 'denied','died', 'agreed', 'owned',
'humbled', 'sized','meeting', 'stating', 'siezing', 'itemization','sensational',
'traditional', 'reference', 'colonizer','plotted']
singles = [stemmer.stem(plural) for plural in original_words]
pd.DataFrame(data={'original word':original_words, 'stemmed':singles })
'''
Write a function to perform the pre processing steps on the entire dataset
'''
def lemmatize_stemming(text):
return stemmer.stem(WordNetLemmatizer().lemmatize(text, pos='v'))
# Tokenize and lemmatize
def preprocess(text):
result=[]
for token in gensim.utils.simple_preprocess(text) :
if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
# TODO: Apply lemmatize_stemming on the token, then add to the results list
result.append(lemmatize_stemming(token))
return result
'''
Preview a document after preprocessing
'''
document_num = 4310
doc_sample = documents[documents['index'] == document_num].values[0][0]
print("Original document: ")
words = []
for word in doc_sample.split(' '):
words.append(word)
print(words)
print("\n\nTokenized and lemmatized document: ")
print(preprocess(doc_sample))
documents
```
Let's now preprocess all the news headlines we have. To do that, let's use the [map](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html) function from pandas to apply `preprocess()` to the `headline_text` column
**Note**: This may take a few minutes
```
# TODO: preprocess all the headlines, saving the list of results as 'processed_docs'
processed_docs = documents['headline_text'].map(preprocess)
'''
Preview 'processed_docs'
'''
processed_docs[:10]
```
## Step 3.1: Bag of words on the dataset
Now let's create a dictionary from 'processed_docs' containing the number of times a word appears in the training set. To do that, let's pass `processed_docs` to [`gensim.corpora.Dictionary()`](https://radimrehurek.com/gensim/corpora/dictionary.html) and call it '`dictionary`'.
```
'''
Create a dictionary from 'processed_docs' containing the number of times a word appears
in the training set using gensim.corpora.Dictionary and call it 'dictionary'
'''
dictionary = gensim.corpora.Dictionary(processed_docs)
'''
Checking dictionary created
'''
count = 0
for k, v in dictionary.iteritems():
print(k, v)
count += 1
if count > 10:
break
```
**Gensim filter_extremes**
[`filter_extremes(no_below=5, no_above=0.5, keep_n=100000)`](https://radimrehurek.com/gensim/corpora/dictionary.html#gensim.corpora.dictionary.Dictionary.filter_extremes)
Filter out tokens that appear in
* fewer than `no_below` documents (absolute number), or
* more than `no_above` documents (fraction of total corpus size, not an absolute number);
* after (1) and (2), keep only the first `keep_n` most frequent tokens (or keep all if `None`).
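The effect of `filter_extremes` can be illustrated with a tiny pure-Python sketch of document-frequency filtering (a toy stand-in, not gensim's implementation):

```python
from collections import Counter

# Toy corpus: 4 "documents", each a list of tokens
docs = [["apple", "banana"], ["apple", "cherry"], ["apple", "banana"], ["apple", "date"]]

# Document frequency: in how many documents does each token appear?
df = Counter(tok for doc in docs for tok in set(doc))

no_below, no_above = 2, 0.75   # keep tokens in >= 2 docs and <= 75% of docs
n_docs = len(docs)
kept = {tok for tok, freq in df.items() if freq >= no_below and freq / n_docs <= no_above}
print(sorted(kept))  # → ['banana']
```

Here 'apple' is dropped for being too common (appears in 100% of documents), while 'cherry' and 'date' are dropped for being too rare.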
```
'''
OPTIONAL STEP
Remove very rare and very common words:
- words appearing less than 15 times
- words appearing in more than 10% of all documents
'''
# TODO: apply dictionary.filter_extremes() with the parameters mentioned above
dictionary.filter_extremes(no_below=15, no_above=0.1, keep_n=100000)
```
**Gensim doc2bow**
[`doc2bow(document)`](https://radimrehurek.com/gensim/corpora/dictionary.html#gensim.corpora.dictionary.Dictionary.doc2bow)
* Convert document (a list of words) into the bag-of-words format = list of (token_id, token_count) 2-tuples. Each word is assumed to be a tokenized and normalized string (either unicode or utf8-encoded). No further preprocessing is done on the words in document; apply tokenization, stemming etc. before calling this method.
```
'''
Create the Bag-of-words model for each document i.e for each document we create a dictionary reporting how many
words and how many times those words appear. Save this to 'bow_corpus'
'''
bow_corpus = [dictionary.doc2bow(doc) for doc in processed_docs]
'''
Checking Bag of Words corpus for our sample document --> (token_id, token_count)
'''
bow_corpus[document_num]
'''
Preview BOW for our sample preprocessed document
'''
# Here document_num is document number 4310 which we have checked in Step 2
bow_doc_4310 = bow_corpus[document_num]
for i in range(len(bow_doc_4310)):
print("Word {} (\"{}\") appears {} time.".format(bow_doc_4310[i][0],
dictionary[bow_doc_4310[i][0]],
bow_doc_4310[i][1]))
```
## Step 3.2: TF-IDF on our document set ##
While performing TF-IDF on the corpus is not necessary for an LDA implementation with gensim, it is recommended. TF-IDF expects a bag-of-words (integer values) training corpus during initialization. During transformation, it takes a vector and returns another vector of the same dimensionality.
*Please note: the gensim author recommends using the bag-of-words model as the standard input for LDA.*
**TF-IDF stands for "Term Frequency, Inverse Document Frequency".**
* It is a way to score the importance of words (or "terms") in a document based on how frequently they appear across multiple documents.
* If a word appears frequently in a document, it's important. Give the word a high score. But if a word appears in many documents, it's not a unique identifier. Give the word a low score.
* Therefore, common words like "the" and "for", which appear in many documents, will be scaled down. Words that appear frequently in a single document will be scaled up.
In other words:
* TF(w) = `(Number of times term w appears in a document) / (Total number of terms in the document)`.
* IDF(w) = `log(Total number of documents / Number of documents with term w in it)`. (The worked example below uses a base-10 logarithm; other bases, such as the natural log, are also common.)
**For example:**
* Consider a document containing `100` words wherein the word 'tiger' appears 3 times.
* The term frequency (i.e., tf) for 'tiger' is then:
- `TF = (3 / 100) = 0.03`.
* Now, assume we have `10 million` documents and the word 'tiger' appears in `1000` of these. Then, the inverse document frequency (i.e., idf) is calculated as:
- `IDF = log(10,000,000 / 1,000) = 4`.
* Thus, the Tf-idf weight is the product of these quantities:
- `TF-IDF = 0.03 * 4 = 0.12`.
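The arithmetic above can be checked directly; note that matching the worked numbers requires a base-10 logarithm:

```python
import math

# Reproduce the 'tiger' example: 3 occurrences in a 100-word document,
# appearing in 1,000 of 10 million documents
tf = 3 / 100                           # term frequency: 0.03
idf = math.log10(10_000_000 / 1_000)   # inverse document frequency: 4.0
tfidf = tf * idf                       # 0.12
print(tf, idf, tfidf)
```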
```
'''
Create tf-idf model object using models.TfidfModel on 'bow_corpus' and save it to 'tfidf'
'''
from gensim import corpora, models
#tfidf = # TODO
tfidf = models.TfidfModel(bow_corpus)
'''
Apply transformation to the entire corpus and call it 'corpus_tfidf'
'''
#corpus_tfidf = # TODO
corpus_tfidf = tfidf[bow_corpus]
'''
Preview TF-IDF scores for our first document --> (token_id, tfidf score)
'''
from pprint import pprint
for doc in corpus_tfidf:
pprint(doc)
break
```
## Step 4.1: Running LDA using Bag of Words ##
We are going for 10 topics in the document corpus.
**We will be running LDA using all CPU cores to parallelize and speed up model training.**
Some of the parameters we will be tweaking are:
* **num_topics** is the number of requested latent topics to be extracted from the training corpus.
* **id2word** is a mapping from word ids (integers) to words (strings). It is used to determine the vocabulary size, as well as for debugging and topic printing.
* **workers** is the number of extra processes to use for parallelization. Uses all available cores by default.
* **alpha** and **eta** are hyperparameters that affect the sparsity of the document-topic (theta) and topic-word (lambda) distributions. We will leave these at their default values for now (the default is `1/num_topics`).
- Alpha is the per-document topic distribution.
* High alpha: every document has a mixture of all topics (documents appear similar to each other).
* Low alpha: every document has a mixture of very few topics.
- Eta is the per-topic word distribution.
* High eta: each topic has a mixture of most words (topics appear similar to each other).
* Low eta: each topic has a mixture of few words.
* **passes** is the number of training passes through the corpus. For example, if the training corpus has 50,000 documents, chunksize is 10,000, and passes is 2, then online training is done in 10 updates:
* `#1 documents 0-9,999 `
* `#2 documents 10,000-19,999 `
* `#3 documents 20,000-29,999 `
* `#4 documents 30,000-39,999 `
* `#5 documents 40,000-49,999 `
* `#6 documents 0-9,999 `
* `#7 documents 10,000-19,999 `
* `#8 documents 20,000-29,999 `
* `#9 documents 30,000-39,999 `
* `#10 documents 40,000-49,999`
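The update schedule above can be computed with a short helper (an illustrative sketch, not a gensim API):

```python
# For each pass, the corpus is walked chunk by chunk; each chunk is one online update.
def update_schedule(n_docs, chunksize, passes):
    updates = []
    for _ in range(passes):
        for start in range(0, n_docs, chunksize):
            updates.append((start, min(start + chunksize, n_docs) - 1))
    return updates

schedule = update_schedule(50_000, 10_000, 2)
print(len(schedule), schedule[0], schedule[5])
# → 10 (0, 9999) (0, 9999)   -- update #6 restarts at document 0 for the second pass
```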
```
# LDA mono-core
# lda_model = gensim.models.LdaModel(bow_corpus,
# num_topics = 10,
# id2word = dictionary,
# passes = 50)
# LDA multicore
'''
Train your lda model using gensim.models.LdaMulticore and save it to 'lda_model'
'''
# TODO
lda_model = gensim.models.LdaMulticore(bow_corpus,
num_topics=10,
id2word = dictionary,
passes = 2,
workers=2)
'''
For each topic, we will explore the words occurring in that topic and their relative weights
'''
for idx, topic in lda_model.print_topics(-1):
print("Topic: {} \nWords: {}".format(idx, topic))
print("\n")
```
### Classification of the topics ###
Using the words in each topic and their corresponding weights, what categories were you able to infer?
* 0:
* 1:
* 2:
* 3:
* 4:
* 5:
* 6:
* 7:
* 8:
* 9:
## Step 4.2: Running LDA using TF-IDF ##
```
'''
Define lda model using corpus_tfidf
'''
# TODO
lda_model_tfidf = gensim.models.LdaMulticore(corpus_tfidf,
num_topics=10,
id2word = dictionary,
passes = 2,
workers=4)
'''
For each topic, we will explore the words occurring in that topic and their relative weights
'''
for idx, topic in lda_model_tfidf.print_topics(-1):
print("Topic: {} Word: {}".format(idx, topic))
print("\n")
```
### Classification of the topics ###
As we can see, when using TF-IDF, heavier weights are given to words that are less frequent, which results in nouns being weighted more heavily. That makes it harder to infer the categories, as nouns can be hard to categorize. This goes to show that the choice of model depends on the type of text corpus we are dealing with.
Using the words in each topic and their corresponding weights, what categories could you find?
* 0:
* 1:
* 2:
* 3:
* 4:
* 5:
* 6:
* 7:
* 8:
* 9:
## Step 5.1: Performance evaluation by classifying a sample document using the LDA Bag of Words model ##
We will check to see where our test document would be classified.
```
'''
Text of sample document 4310
'''
processed_docs[4310]
'''
Check which topic our test document belongs to using the LDA Bag of Words model.
'''
# Our test document is document number 4310
for index, score in sorted(lda_model[bow_corpus[document_num]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model.print_topic(index, 10)))
```
### It has the highest probability (`x`) to be part of the topic that we assigned as X, which is the accurate classification. ###
## Step 5.2: Performance evaluation by classifying a sample document using the LDA TF-IDF model ##
```
'''
Check which topic our test document belongs to using the LDA TF-IDF model.
'''
for index, score in sorted(lda_model_tfidf[bow_corpus[document_num]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 10)))
```
### It has the highest probability (`x%`) to be part of the topic that we assigned as X. ###
## Step 6: Testing model on unseen document ##
```
unseen_document = "My favorite sports activities are running and swimming."
# Data preprocessing step for the unseen document
bow_vector = dictionary.doc2bow(preprocess(unseen_document))
for index, score in sorted(lda_model[bow_vector], key=lambda tup: -1*tup[1]):
print("Score: {}\t Topic: {}".format(score, lda_model.print_topic(index, 5)))
```
The model correctly classifies the unseen document with 'x'% probability to the X category.
```
%matplotlib inline
```
`Learn the Basics <intro.html>`_ ||
`Quickstart <quickstart_tutorial.html>`_ ||
`Tensors <tensorqs_tutorial.html>`_ ||
`Datasets & DataLoaders <data_tutorial.html>`_ ||
`Transforms <transforms_tutorial.html>`_ ||
**Build the Neural Network** ||
`Autograd <autogradqs_tutorial.html>`_ ||
`Optimization <optimization_tutorial.html>`_ ||
`Save & Load the Model <saveloadrun_tutorial.html>`_
Build the Neural Network
==========================================================================
A neural network is made up of layers/modules that perform operations on data.
The `torch.nn <https://pytorch.org/docs/stable/nn.html>`_ namespace provides all the building blocks you need to construct a neural network.
Every module in PyTorch is a subclass of `nn.Module <https://pytorch.org/docs/stable/generated/torch.nn.Module.html>`_.
A neural network is itself a module composed of other modules (layers). This nested structure makes it easy to build and manage complex architectures.
In the following sections, we will build a neural network that classifies images from the FashionMNIST dataset.
```
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
```
Get a Device for Training
------------------------------------------------------------------------------------------
We want to be able to train our model on a hardware accelerator such as a GPU, if one is available.
We check whether `torch.cuda <https://pytorch.org/docs/stable/notes/cuda.html>`_ is available;
otherwise, we continue to use the CPU.
```
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
```
Define the Class
------------------------------------------------------------------------------------------
We define our neural network by subclassing ``nn.Module`` and initialize the neural network layers in ``__init__``.
Every class that inherits from ``nn.Module`` implements the operations on input data in the ``forward`` method.
```
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
```
We create an instance of ``NeuralNetwork``, move it to the ``device``,
and print its structure.
```
model = NeuralNetwork().to(device)
print(model)
```
To use the model, we pass it the input data. This executes the model's ``forward``, along with some
`background operations <https://github.com/pytorch/pytorch/blob/270111b7b611d174967ed204776985cefca9c144/torch/nn/modules/module.py#L866>`_.
Do not call ``model.forward()`` directly!
Calling the model on an input returns a 10-dimensional tensor with a raw predicted value for each class.
We obtain the prediction probabilities by passing it through an instance of the ``nn.Softmax`` module.
```
X = torch.rand(1, 28, 28, device=device)
logits = model(X)
pred_probab = nn.Softmax(dim=1)(logits)
y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")
```
------------------------------------------------------------------------------------------
Model Layers
------------------------------------------------------------------------------------------
Let's break down the layers of the FashionMNIST model. To illustrate, we take a sample minibatch
of 3 images of size 28x28 and see what happens to it as it passes through the network.
```
input_image = torch.rand(3,28,28)
print(input_image.size())
```
nn.Flatten
^^^^^^^^^^^^^^^^^^^^^^
We initialize the `nn.Flatten <https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html>`_ layer
to convert each 28x28 2D image into a contiguous array of 784 pixel values (the minibatch dimension at dim=0 is maintained).
```
flatten = nn.Flatten()
flat_image = flatten(input_image)
print(flat_image.size())
```
nn.Linear
^^^^^^^^^^^^^^^^^^^^^^
The `linear layer <https://pytorch.org/docs/stable/generated/torch.nn.Linear.html>`_ is a module that applies
a linear transformation to its input using its stored weights and biases.
```
layer1 = nn.Linear(in_features=28*28, out_features=20)
hidden1 = layer1(flat_image)
print(hidden1.size())
```
nn.ReLU
^^^^^^^^^^^^^^^^^^^^^^
Non-linear activations create the complex mappings between a model's inputs and outputs.
They are applied after linear transformations to introduce *nonlinearity*, helping neural networks learn a wide variety of phenomena.
In this model we use `nn.ReLU <https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html>`_ between the linear layers,
but there are other activations you can use to introduce non-linearity into your model.
```
print(f"Before ReLU: {hidden1}\n\n")
hidden1 = nn.ReLU()(hidden1)
print(f"After ReLU: {hidden1}")
```
nn.Sequential
^^^^^^^^^^^^^^^^^^^^^^
`nn.Sequential <https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html>`_ is an ordered
container of modules. Data is passed through all the modules in the order they are defined. You can use a
sequential container to put together a quick network like ``seq_modules`` below.
```
seq_modules = nn.Sequential(
flatten,
layer1,
nn.ReLU(),
nn.Linear(20, 10)
)
input_image = torch.rand(3,28,28)
logits = seq_modules(input_image)
```
nn.Softmax
^^^^^^^^^^^^^^^^^^^^^^
The last linear layer of the neural network returns `logits` (raw values in [-\infty, \infty]), which are passed to the
`nn.Softmax <https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html>`_ module. The logits are scaled to values in
[0, 1] representing the model's predicted probabilities for each class. The ``dim`` parameter indicates the dimension along which the values must sum to 1.
```
softmax = nn.Softmax(dim=1)
pred_probab = softmax(logits)
```
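As a sanity check on what ``nn.Softmax(dim=1)`` computes, here is a small NumPy sketch (an illustration, not part of the tutorial's model): exponentiate the logits, then normalize each row so the class probabilities sum to 1.

```python
import numpy as np

def softmax(logits, axis=1):
    # Subtract the row max for numerical stability; this does not change the result
    shifted = logits - logits.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=axis, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1],
                   [0.0, 0.0, 0.0]])
probs = softmax(logits)
print(probs.sum(axis=1))  # each row sums to 1
```

Note that equal logits (the second row) map to a uniform distribution over the classes.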
Model Parameters
------------------------------------------------------------------------------------------
Many layers inside a neural network are *parameterized*, i.e. they have associated weights and biases that are optimized during training.
Subclassing ``nn.Module`` automatically tracks all fields defined inside your model object, and makes all parameters
accessible using your model's ``parameters()`` or ``named_parameters()`` methods.
In this example, we iterate over each parameter and print its size and a preview of its values.
```
print(f"Model structure: {model}\n\n")
for name, param in model.named_parameters():
print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")
```
------------------------------------------------------------------------------------------
Further Reading
------------------------------------------------------------------------------------------
- `torch.nn API <https://pytorch.org/docs/stable/nn.html>`_
```
from nornir.core import InitNornir
nr = InitNornir(config_file="config.yaml")
```
# Executing tasks
Now that you know how to initialize nornir and work with the inventory let's see how we can leverage it to run tasks on groups of hosts.
Nornir ships a bunch of tasks you can use directly without having to code them yourself. You can check them out [here](../../plugins/tasks/index.rst).
Let's start by executing the `ls -la /tmp` command on all the devices in `cmh` of type `host`:
```
from nornir.plugins.tasks import commands
from nornir.plugins.functions.text import print_result
cmh_hosts = nr.filter(site="cmh", role="host")
result = cmh_hosts.run(task=commands.remote_command,
command="ls -la /tmp")
print_result(result, vars=["stdout"])
```
So what have we done here? First, we imported the `commands` and `text` modules. Then we narrowed nornir down to the hosts we want to operate on. Once we had selected the devices, we ran two things:
1. The task `commands.remote_command` which runs the specified `command` in the remote device.
2. The function `print_result` which just prints on screen the result of an executed task or group of tasks.
Let's try with another example:
```
from nornir.plugins.tasks import networking
cmh_spines = nr.filter(site="bma", role="spine")
result = cmh_spines.run(task=networking.napalm_get,
getters=["facts"])
print_result(result)
```
Pretty much the same pattern, just different task on different devices.
## What is a task
Let's take a look at what a task is. In its simplest form, a task is a function that takes at least a [Task](../../ref/api/task.rst#nornir.core.task.Task) object as an argument. For instance:
```
def hi(task):
print(f"hi! My name is {task.host.name} and I live in {task.host['site']}")
nr.run(task=hi, num_workers=1)
```
The task object has access to `nornir`, `host` and `dry_run` attributes.
You can call other tasks from within a task:
```
def available_resources(task):
task.run(task=commands.remote_command,
name="Available disk",
command="df -h")
task.run(task=commands.remote_command,
name="Available memory",
command="free -m")
result = cmh_hosts.run(task=available_resources)
print_result(result, vars=["stdout"])
```
You probably noticed in the previous example that you can name your tasks.
Your task can also accept any extra arguments you may need:
```
def count(task, to):
print(f"{task.host.name}: {list(range(0, to))}")
cmh_hosts.run(task=count,
num_workers=1,
to=10)
cmh_hosts.run(task=count,
num_workers=1,
to=20)
```
## Tasks vs Functions
You probably noticed we introduced the concept of a `function` when we talked about `print_result`. The difference between tasks and functions is that tasks are meant to be run per host while functions are helper functions meant to be run globally.
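The distinction can be sketched in plain Python (a toy illustration, not nornir's actual machinery — real tasks receive a `Task` object and run over the inventory; the host names here are made up):

```python
hosts = ["host1.cmh", "host2.cmh"]

def run(task, **kwargs):
    # A "task" is invoked once per host; results are collected per host
    return {host: task(host, **kwargs) for host in hosts}

def greet(host, to):
    # A task: per-host work, with extra arguments
    return f"{host} counts to {to}"

def print_result(result):
    # A function: a global helper that operates over the aggregated results
    for host, output in sorted(result.items()):
        print(f"{host}: {output}")

print_result(run(greet, to=3))
```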
```
#Part1
with open('input_D11.txt','r') as file:
(x,y)=(0,0)
canvas= dict()
file_content=file.readline().strip()
while file_content:
#print(file_content)
for num in file_content:
canvas[(x,y)] = int(num)
x+=1
y+=1
x=0
file_content=file.readline().strip()
#print(canvas)
def neighbours(p):
(x,y) = p
matrix=[-1,0,1]
return [((x+dx),(y+dy))
for dx in matrix
for dy in matrix
if not(dx ==0 and dy ==0)]
#print(neighbours((p)))
def print_canvas(canvas):
for y in range(10):
for x in range(10):
if (x,y) in canvas:
print(canvas[(x,y)], end='')
print()
def new_layout(canvas,count_flashes):
canvas_new= canvas
#1step increase by one
for key in canvas:
canvas_new[key]+=1
#print_canvas(canvas_new)
updated = True
while updated:
updated = False
for key, value in canvas_new.items():
if value>9:
canvas_new[key]=0
updated = True
count_flashes+=1
for i in neighbours(key):
if i in canvas_new and canvas_new[i]!=0:
canvas_new[i]+=1
return canvas_new, count_flashes
count_flashes=0
for cycle in range(100):
(canvas,count_flashes)=new_layout(canvas,count_flashes)
#print_canvas(canvas)
print(count_flashes)
len(canvas)
#Part 2
with open('input_D11.txt','r') as file:
(x,y)=(0,0)
canvas= dict()
file_content=file.readline().strip()
while file_content:
#print(file_content)
for num in file_content:
canvas[(x,y)] = int(num)
x+=1
y+=1
x=0
file_content=file.readline().strip()
#print(canvas)
def neighbours(p):
(x,y) = p
matrix=[-1,0,1]
return [((x+dx),(y+dy))
for dx in matrix
for dy in matrix
if not(dx ==0 and dy ==0)]
#print(neighbours((p)))
def print_canvas(canvas):
for y in range(10):
for x in range(10):
if (x,y) in canvas:
print(canvas[(x,y)], end='')
print()
def new_layout(canvas,count_flashes):
canvas_new= canvas
#1step increase by one
for key in canvas:
canvas_new[key]+=1
#print_canvas(canvas_new)
updated = True
while updated:
updated = False
for key, value in canvas_new.items():
if value>9:
canvas_new[key]=0
updated = True
count_flashes+=1
for i in neighbours(key):
if i in canvas_new and canvas_new[i]!=0:
canvas_new[i]+=1
return canvas_new, count_flashes
count_flashes=0
step=0
count=0   # number of octopuses that flashed in the current step; 100 means all flashed
while count!=100:
(canvas,count_flashes)=new_layout(canvas,count_flashes)
step+=1
count=0
for keys, value in canvas.items():
if value==0:
count+=1
print(step)
#print_canvas(canvas)
def neighbours(p):
(x,y) = p
matrix=[-1,0,1]
return [((x+dx),(y+dy))
for dx in matrix
for dy in matrix
if not(dx ==0 and dy ==0)]
print(neighbours((5,5)))
```
<a href="https://colab.research.google.com/github/mohameddhameem/TensorflowCertification/blob/main/Sequences%2C%20Time%20Series%20and%20Prediction/Week%203/S%2BP_Week_3_Exercise_Question.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tf-nightly-2.0-preview
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(False)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 6 * np.pi),
2 / np.exp(9 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(10 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.005
noise_level = 3
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=51)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
plot_series(time, series)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x : tf.expand_dims(x, axis=-1), input_shape=[None]),# YOUR CODE HERE
# YOUR CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x : x * 100.0)# YOUR CODE HERE
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule],verbose=0)
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
# FROM THIS PICK A LEARNING RATE
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x : tf.expand_dims(x, axis=-1), input_shape=[None]),# YOUR CODE HERE
# YOUR CODE HERE
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32,return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x : x * 100.0)# YOUR CODE HERE
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9),metrics=["mae"])# PUT YOUR LEARNING RATE HERE#
history = model.fit(dataset,epochs=500,verbose=1)
# FIND A MODEL AND A LR THAT TRAINS TO AN MAE < 3
forecast = []
results = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
# YOUR RESULT HERE SHOULD BE LESS THAN 4
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
```
<img src="https://raw.githubusercontent.com/Qiskit/qiskit-tutorials/master/images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
## _*Comparing Classical and Quantum Finite Automata (QFA)*_
Finite Automaton has been a mathematical model for computation since its invention in the 1940s. The purpose of a Finite State Machine is to recognize patterns within an input taken from some character set and accept or reject the input based on whether the pattern defined by the machine occurs in the input. The machine requires a list of states, the initial state, and the conditions for each transition from state to state. Such classical examples are vending machines, coin-operated turnstiles, elevators, traffic lights, etc.
In the classical algorithm, the machine begins in the start state and makes a transition only if the next character in the input string matches the label on the transition from the current state to the next. The machine continues making transitions on each input character until no move is possible. The string is accepted if the machine halts in an accept state and rejected otherwise.
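As a concrete sketch of the classical model, a divisibility-by-$p$ DFA over the one-letter alphabet {a} needs only the $p$ residue states (a hypothetical helper, not part of the original notebook):

```python
def dfa_divisible(string, p):
    """Classical DFA accepting a^i iff i is divisible by p.

    The states are the residues 0..p-1; reading an 'a' moves from
    state s to state (s + 1) % p. State 0 is both the start state
    and the only accept state.
    """
    state = 0
    for ch in string:
        if ch != 'a':
            return False         # symbol outside the alphabet: reject
        state = (state + 1) % p  # the only transition
    return state == 0            # accept iff we halt in the accept state

print(dfa_divisible('a' * 6, 3))  # 6 is divisible by 3 -> True
print(dfa_divisible('a' * 7, 3))  # 7 is not -> False
```

Storing the current residue state requires roughly $\log_2(p)$ bits, which is the classical cost quoted below.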
As for Quantum Finite Automata (QFA), the machine works by accepting a finite-length string of letters from a finite alphabet and utilizing quantum properties such as superposition to assign the string a probability of being in either the accept or reject state.
***
### Contributors
Kaitlin Gili, Rudy Raymond
### Qiskit Package Versions
```
import qiskit
qiskit.__qiskit_version__
```
## Prime Divisibility Algorithm
Let's say that we have a string with $ a^i $ letters and we want to know whether the string is in the language $ L $ where $ L $ = {$ a^i $ | $ i $ is divisible by $ p $} and $ p $ is a prime number. If $ i $ is divisible by $ p $, we want to accept the string into the language, and if not, we want to reject it.
$|0\rangle $ and $ |1\rangle $ serve as our accept and reject states.
Classically, this algorithm requires a minimum of $ log(p) $ bits to store the information, whereas the quantum algorithm only requires $ log(log(p)) $ qubits. For example, using the largest known prime (at the time this notebook was written), the classical algorithm requires **a minimum of 77,232,917 bits**, whereas the quantum algorithm **only requires 27 qubits**.
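These counts can be reproduced with quick arithmetic, assuming the prime in question is the Mersenne prime $2^{77,232,917}-1$, so that $\log_2(p)$ is essentially its exponent:

```python
import math

exponent = 77_232_917            # 2**77_232_917 - 1 is the Mersenne prime
classical_bits = exponent        # ~log2(p) bits for the classical machine
quantum_qubits = math.ceil(math.log2(exponent))  # ~log2(log2(p)) qubits

print(classical_bits)   # 77232917
print(quantum_qubits)   # 27
```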
## Introduction <a id='introduction'></a>
The algorithm in this notebook follows that in [Ambainis et al. 1998](https://arxiv.org/pdf/quant-ph/9802062.pdf). We assume that we are given a string and a prime integer. If the user does not input a prime number, a `ValueError` is raised. First, we demonstrate a simpler version of the quantum algorithm that uses $ log(p) $ qubits to store the information. Then, we can use this to more easily understand the quantum algorithm that requires only $ log(log(p)) $ qubits.
## The Algorithm for Log(p) Qubits
The algorithm is quite simple as follows.
1. Prepare quantum and classical registers for $ log(p) $ qubits initialized to zero.
$$ |0\ldots 0\rangle $$
2. Prepare $ log(p) $ random numbers $ k $ in the range {$ 1 $... $ p-1 $}. These numbers will be used to decrease the probability of a string being accepted when $ p $ does not divide $ i $.
3. Perform $ i $ Y-rotations on each qubit, where $ \theta $ is initially zero and $ \Phi $ is the angle of rotation for each unitary. $$ \Phi = \frac{2 \pi k}{p} $$
4. In the final state:
$$ \cos \theta |0\rangle + \sin \theta |1\rangle $$
$$ \theta = \frac{2 \pi k i}{p} $$
5. Measure each of the qubits into the classical register. If $ p $ divides $ i $, $ \cos \theta $ will be one for every qubit and the state will collapse to $ |0\rangle $, demonstrating an accept state with probability one. Otherwise, there will be a small probability of accepting the string into the language and a higher probability of rejecting it.
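The statistics in step 5 can be sanity-checked classically: after $i$ rotations by $\Phi = 2\pi k/p$, the accept probability of a single qubit is $\cos^2(2\pi k i/p)$. A small sketch (hypothetical helper name):

```python
import math

def qfa_accept_prob(i, p, k):
    """Probability of measuring |0> for a single QFA qubit after reading a^i."""
    theta = 2 * math.pi * k * i / p
    return math.cos(theta) ** 2

p, k = 11, 5
print(qfa_accept_prob(22, p, k))  # 22 is divisible by 11: probability ~1
print(qfa_accept_prob(23, p, k))  # 23 is not: noticeably below 1
```

Using several qubits, each with its own random $k$, multiplies these per-qubit accept probabilities together, driving the false-accept probability down.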
## The Circuit <a id="circuit"></a>
We now implement the QFA Prime Divisibility algorithm with QISKit by first preparing the environment.
```
# useful additional packages
import random
import math
from sympy.ntheory import isprime
# importing QISKit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, IBMQ, execute
from qiskit.tools.monitor import job_monitor
from qiskit.providers.ibmq import least_busy
from qiskit.tools.visualization import plot_histogram
IBMQ.load_accounts()
sim_backend = Aer.get_backend('qasm_simulator')
device_backend = least_busy(IBMQ.backends(operational=True, simulator=False))
```
We then use QISKit to program the algorithm.
```
#Function that takes in a prime number and a string of letters and returns a quantum circuit
def qfa_algorithm(string, prime):
if isprime(prime) == False:
raise ValueError("This number is not a prime") #Raises a ValueError if the input prime number is not prime
else:
n = math.ceil((math.log(prime))) #Rounds up to the next integer of the log(prime)
qr = QuantumRegister(n) #Creates a quantum register of length log(prime) for log(prime) qubits
cr = ClassicalRegister(n) #Creates a classical register for measurement
qfaCircuit = QuantumCircuit(qr, cr) #Defining the circuit to take in the values of qr and cr
for x in range(n): #For each qubit, we want to apply a series of unitary operations with a random int
random_value = random.randint(1,prime - 1) #Generates the random int for each qubit from {1, prime -1}
for letter in string: #For each letter in the string, we want to apply the same unitary operation to each qubit
qfaCircuit.ry((2*math.pi*random_value) / prime, qr[x]) #Applies the Y-Rotation to each qubit
qfaCircuit.measure(qr[x], cr[x]) #Measures each qubit
return qfaCircuit #Returns the created quantum circuit
```
The qfa_algorithm function returns the Quantum Circuit qfaCircuit.
## Experiment with Simulators
We can run the above circuit on the simulator.
```
#A function that returns a string saying if the string is accepted into the language or rejected
def accept(parameter):
states = list(result.get_counts(parameter))
for s in states:
for integer in s:
if integer == "1":
return "Reject: the string is not accepted into the language"
return "Accept: the string is accepted into the language"
```
Insert your own parameters and try even larger prime numbers.
```
range_lower = 0
range_higher = 36
prime_number = 11
for length in range(range_lower,range_higher):
params = qfa_algorithm("a"* length, prime_number)
job = execute(params, sim_backend, shots=1000)
result = job.result()
print(accept(params), "\n", "Length:",length," " ,result.get_counts(params))
```
### Drawing the circuit of the QFA
Below is a snapshot of the QFA for reading the bitstring of length $3$. It can be seen that there are independent QFAs, each of which performs the $Y$ rotation $3$ times.
```
qfa_algorithm("a"* 3, prime_number).draw(output='mpl')
```
## The Algorithm for Log(Log(p)) Qubits
The algorithm is quite simple as follows.
1. Prepare a quantum register for $ log(log(p)) + 1 $ qubits initialized to zero. The $ log(log(p))$ qubits will act as your control bits and the 1 extra will act as your target bit. Also prepare a classical register for 1 bit to measure the target.
$$ |0\ldots 0\rangle |0\rangle $$
2. Hadamard the control bits to put them in a superposition so that we can perform multiple QFA's at the same time.
3. For each of $s $ states in the superposition, we can perform an individual QFA with the control qubits acting as the random integer $ k $ from the previous algorithm. Thus, we need $ n $ values from $ 1... log(p)$ for $ k $. For each letter $ i $ in the string, we perform a controlled y-rotation on the target qubit, where $ \theta $ is initially zero and $ \Phi $ is the angle of rotation for each unitary. $$ \Phi = \frac{2 \pi k_{s}}{p} $$
4. The target qubit in the final state:
$$ \cos \theta |0\rangle + \sin \theta |1\rangle $$
$$ \theta = \sum_{s=0}^{n} \frac{2 \pi k_{s} i}{p} $$
5. Measure the target qubit into the classical register. If $ p $ divides $ i $, $ \cos \theta $ will be one for every QFA and the state of the target will collapse to $ |0\rangle $, demonstrating an accept state with probability one. Otherwise, there will be a small probability of accepting the string into the language and a higher probability of rejecting it.
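The target-qubit statistics can likewise be checked classically: with the $n$ control qubits in a uniform superposition, control state $s$ rotates the target by a total of $\theta_s = 2\pi s i/p$, so the probability of measuring the target in $|0\rangle$ is the average of $\cos^2\theta_s$ over all $2^n$ control states. A hypothetical numeric check (not a circuit simulation):

```python
import math

def target_accept_prob(i, p, n):
    """P(target measured as |0>) averaged over all 2**n control states."""
    total = 0.0
    for s in range(2 ** n):
        theta = 2 * math.pi * s * i / p  # total rotation for control state s
        total += math.cos(theta) ** 2
    return total / 2 ** n

p, n = 11, 4
print(target_accept_prob(22, p, n))  # divisible by 11: probability ~1
print(target_accept_prob(23, p, n))  # not divisible: clearly below 1
```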
## The Circuit <a id="circuit"></a>
We then use QISKit to program the algorithm.
```
#Function that takes in a prime number and a string of letters and returns a quantum circuit
def qfa_controlled_algorithm(string, prime):
if isprime(prime) == False:
raise ValueError("This number is not a prime") #Raises a ValueError if the input prime number is not prime
else:
n = math.ceil((math.log(math.log(prime,2),2))) #Represents log(log(p)) control qubits
states = 2 ** (n) #Number of states that the qubits can represent/Number of QFA's to be performed
qr = QuantumRegister(n+1) #Creates a quantum register of log(log(prime)) control qubits + 1 target qubit
        cr = ClassicalRegister(1) #Creates a classical register of 1 bit to measure the target qubit
control_qfaCircuit = QuantumCircuit(qr, cr) #Defining the circuit to take in the values of qr and cr
for q in range(n): #We want to take each control qubit and put them in a superposition by applying a Hadamard Gate
control_qfaCircuit.h(qr[q])
for letter in string: #For each letter in the string, we want to apply a series of Controlled Y-rotations
for q in range(n):
control_qfaCircuit.cu3(2*math.pi*(2**q)/prime, 0, 0, qr[q], qr[n]) #Controlled Y on Target qubit
control_qfaCircuit.measure(qr[n], cr[0]) #Measure the target qubit
return control_qfaCircuit #Returns the created quantum circuit
```
The qfa_controlled_algorithm function returns the Quantum Circuit control_qfaCircuit.
## Experiment with Simulators
We can run the above circuit on the simulator.
```
for length in range(range_lower,range_higher):
params = qfa_controlled_algorithm("a"* length, prime_number)
job = execute(params, sim_backend, shots=1000)
result = job.result()
print(accept(params), "\n", "Length:",length," " ,result.get_counts(params))
```
### Drawing the circuit of the QFA
Below is the snapshot of the QFA for reading the bitstring of length $3$. It can be seen that there is a superposition of QFAs instead of independent QFAs.
```
qfa_controlled_algorithm("a"* 3, prime_number).draw(output='mpl')
```
## Experimenting with Real Devices
Real-device backends are noisy; when the above QFAs are executed on them, strings that should have been accepted may be rejected. Let us see how well the real-device backends can realize the QFAs.
Let us look at an example where the QFA should reject the bitstring because the length of the bitstring is not divisible by the prime number.
```
prime_number = 3
length = 2 # set the length so that it is not divisible by the prime_number
print("The length of a is", length, " while the prime number is", prime_number)
qfa1 = qfa_controlled_algorithm("a"* length, prime_number)
job = execute(qfa1, backend=device_backend, shots=100)
job_monitor(job)
result = job.result()
plot_histogram(result.get_counts())
```
In the above, we can see that the probability of observing "1" is quite significant. Let us see what the circuit looks like.
```
qfa1.draw(output='mpl')
```
Now, let us see what happens when the QFAs should accept the input string.
```
length = 3 # set the length so that it is divisible by the prime_number
print("The length of a is", length, " while the prime number is", prime_number)
qfa2 = qfa_controlled_algorithm("a"* length, prime_number)
job = execute(qfa2, backend=device_backend, shots=100)
job_monitor(job)
result = job.result()
plot_histogram(result.get_counts())
```
The error of rejecting the bitstring is equal to the probability of observing "1", which can be checked from the histogram above. We can see that the noise of real-device backends prevents us from obtaining the correct answer. How to mitigate backend errors in the QFA models is left as future work.
```
qfa2.draw(output='mpl')
```
# DOF and quadrature point plotter
This notebook helps visualize the locations of MFEM's DOFs and quadrature points for various element types and quadrature rules.
To use this, your Python environment must have:
* PyMFEM (https://github.com/mfem/PyMFEM)
* Matplotlib (https://matplotlib.org)
## Preamble: import any necessary libs
```
import mfem.ser as mfem
import numpy as np
import os
import math
%matplotlib inline
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ['svg']
# Create some output directories for images
for d in ["figs", "figs/dofs", "figs/qpts",
"figs/dofs/svg", "figs/dofs/pdf", "figs/dofs/png",
"figs/qpts/svg", "figs/qpts/pdf", "figs/qpts/png"]:
if not os.path.exists(d):
os.makedirs(d)
```
## Setup mesh and options
Create an mfem mesh and define some parameters
We currently support a simple Cartesian 2D mesh.
This can be easily extended to load a mesh from file.
```
# Options for Cartesian mesh constructor are:
# 0D: POINT
# 1D: SEGMENT
# 2D: TRIANGLE, QUADRILATERAL
# 3D: TETRAHEDRON, HEXAHEDRON, WEDGE
mesh = mfem.Mesh(1,1,"QUADRILATERAL")
mesh.EnsureNodes()
# Define some parameters for the generated figures
dim = mesh.Dimension()
max_order = 6
# Polynomial order of the FECollection
orders = [i for i in range(max_order)]
# Basis types of the FiniteElementCollections
b_types = [ mfem.BasisType.GaussLobatto,
mfem.BasisType.GaussLegendre,
mfem.BasisType.Positive ]
# Quadrature types for the integration rules
q_types = [{'q': mfem.Quadrature1D.GaussLegendre, 'name': 'Gauss-Legendre'},
{'q': mfem.Quadrature1D.GaussLobatto, 'name': 'Gauss-Lobatto'},
{'q': mfem.Quadrature1D.OpenUniform, 'name': 'Open-Uniform'},
{'q': mfem.Quadrature1D.OpenHalfUniform, 'name': 'Open-Half-Uniform'},
{'q': mfem.Quadrature1D.ClosedGL, 'name': 'Closed-Gauss-Legendre'},
{'q': mfem.Quadrature1D.ClosedUniform, 'name': 'Closed-Uniform'}]
# Finite Element collection types
fec_types = [ "H1", "L2" ]
```
## Functions to extract integration points and map them to physical space
```
def getQptPositions(mesh, eid, ir):
"""Returns list of dictionaries of points in physical space for an element and integration rule.
The dictionaries have entries for physical space position ('x', 'y')
and reference space position and weight ('ix', 'iy', 'w').
"""
t = mesh.GetElementTransformation(eid)
pts = []
for i in range(ir.GetNPoints()):
ip = ir.IntPoint(i)
v = t.Transform(ip)
d = {'x' : v[0],
'y' : v[1],
'ix': ip.x,
'iy': ip.y,
'w' : ip.weight}
#print(d)
pts.append(d)
return pts
def getDofPositions(fespace, eid):
"""Returns a list of dictionaries containing the DOF positions
in physical and reference space"""
mesh = fespace.GetMesh()
fe = fespace.GetFE(eid)
ir = fe.GetNodes()
return getQptPositions(mesh, eid, ir)
```
## Functions to generate plots for DOFs and quadrature points
```
def plotDofPositions(name, pts, wscale = .1):
"""Creates a matplotlib plot for the DOFs in pts
Note: Currently assumes all points are in a unit square.
TODO: Extend this to draw curved elements of the mesh.
"""
plt.clf()
fig, ax = plt.subplots(figsize=(6,6))
ax.set_xlim((-.1, 1.1))
ax.set_ylim((-.1, 1.1))
rect = plt.Rectangle((0, 0), 1, 1, linewidth=3, edgecolor='k', facecolor='none')
ax.add_patch(rect)
for p in pts:
circle=plt.Circle((p['x'],p['y']), wscale, facecolor='#c0504d', edgecolor='#366092')
ax.add_patch(circle)
plt.axis('off')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
print(F"Saving DOFs for '{name}'")
fig.savefig(F'figs/dofs/svg/{name}.svg') # bbox_inches=extent ?
fig.savefig(F'figs/dofs/png/{name}.png')
fig.savefig(F'figs/dofs/pdf/{name}.pdf')
plt.show()
def plotQptPositions(name, pts, wscale = .1, use_weights = True, min_size = None):
"""Creates a matplotlib plot for the quadrature points in pts
Note: Currently assumes all points are in a unit square.
TODO: Extend this to draw curved elements of the mesh.
Parameters:
name The name for the output figures
pts A collection of dictionaries of point data
wscale Used to scale the quadrature weights
use_weights When true, use the quadrature weights to scale the quadrature points
min_size Optionally sets a lower bound on the size of the quadrature points
"""
plt.clf()
fig, ax = plt.subplots(figsize=(6,6))
ax.set_xlim((-.1, 1.1))
ax.set_ylim((-.1, 1.1))
rect = plt.Rectangle((0, 0), 1, 1, linewidth=1.5, edgecolor='#969696', facecolor='none')
ax.add_patch(rect)
for p in pts:
# Color depends on sign
facecolor = '#0871b7C0' if (p['w'] >= 0) else '#B74E08C0'
# Scale w/ weights proportional to area
sc = wscale * math.sqrt(abs(p['w'])) if use_weights else wscale
# Apply threshold to size, if applicable
sc = min_size if (min_size and sc < min_size) else sc
circle=plt.Circle((p['x'],p['y']), sc, facecolor=facecolor, edgecolor='#231f20')
ax.add_patch(circle)
plt.axis('off')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
print(F"Saving qpts for '{name}'")
fig.savefig(F'figs/qpts/svg/{name}.svg') # bbox_inches=extent ?
fig.savefig(F'figs/qpts/png/{name}.png')
fig.savefig(F'figs/qpts/pdf/{name}.pdf')
plt.show()
```
## Plot the DOFs
```
# Create a grid function for each FE type, order and basis type and plot figures
# Skip the invalid combinations
for f in fec_types:
for b in b_types:
for p in orders:
try:
if "H1" in f:
fec = mfem.H1_FECollection(p, dim, b)
elif "L2" in f:
fec = mfem.L2_FECollection(p, dim, b)
fespace = mfem.FiniteElementSpace(mesh, fec)
bname = mfem.BasisType.Name(b).split(" ")[0]
print(F"Working on {fec.Name()} -- dim {dim} -- basis type {bname} -- fec type {f}" )
pts = getDofPositions(fespace, 0)
plotDofPositions(F'{fec.Name()}_{bname}', pts, 0.05)
except:
#print(F"\tFEC {fec.Name()} did not work: dim {dim} -- basis type {b} -- fec type {f}" )
pass
```
## Plot the quadrature points
```
# Create a chart for each quadrature type and order defined above
# Skip the invalid combinations
# Note: Rules for 2*order and 2*order+1 are the same, so only plot the even ones
# Currently hard-coded for squares
g_type = mfem.Geometry.SQUARE
g_name = "square"
wscale=.125
for q in q_types:
for o in orders:
try:
intrules = mfem.IntegrationRules(0, q['q'])
ir = intrules.Get(g_type, 2*o)
pts = getQptPositions(mesh, 0, ir)
pts = sorted(pts, key = lambda p: (p['x']-.5)**2 + (p['y']-.5)**2, reverse=True)
name = F"qpts_{g_name}_{q['name']}_{dim}D_P{2*o}"
#for p in pts:
# print(F"{name}: P{2*o} {p['w']}")
plotQptPositions(name, pts, wscale, True, min_size=None)
except:
pass
```
# Logic, Control Flow and Filtering
**Boolean logic is the foundation of decision-making in Python programs. Learn about different comparison operators, how to combine them with Boolean operators, and how to use the Boolean outcomes in control structures. You'll also learn to filter data in pandas DataFrames using logic.**
## Compare arrays
Out of the box, you can also use comparison operators with Numpy arrays.
Remember `areas`, the list of area measurements for different rooms in your house from Introduction to Python? This time there are two Numpy arrays: `my_house` and `your_house`. They both contain the areas for the kitchen, living room, bedroom and bathroom in the same order, so you can compare them.
Using comparison operators, generate boolean arrays that answer the following questions:
- Which areas in `my_house` are greater than or equal to `18`?
- You can also compare two Numpy arrays element-wise. Which areas in `my_house` are smaller than the ones in `your_house`?
- Make sure to wrap both commands in a `print()` statement so that you can inspect the output!
```
# Create arrays
import numpy as np
my_house = np.array([18.0, 20.0, 10.75, 9.50])
your_house = np.array([14.0, 24.0, 14.25, 9.0])
# my_house greater than or equal to 18
print(my_house >= 18)
# my_house less than your_house
print(my_house < your_house)
```
## Boolean operators with Numpy
Before, the relational operators like `<` and `>=` worked with Numpy arrays out of the box. Unfortunately, this is not true for the boolean operators `and`, `or`, and `not`.
To use these operators with Numpy, you will need `np.logical_and()`, `np.logical_or()` and `np.logical_not()`. Here's an example on the `my_house` and `your_house` arrays from before to give you an idea:
```python
np.logical_and(my_house > 13,
your_house < 15)
```
- Generate boolean arrays that answer the following questions:
- Which areas in `my_house` are greater than `18.5` or smaller than `10`?
- Which areas are smaller than `11` in both `my_house` and `your_house`? Make sure to wrap both commands in a `print()` statement, so that you can inspect the output.
```
# Create arrays
import numpy as np
my_house = np.array([18.0, 20.0, 10.75, 9.50])
your_house = np.array([14.0, 24.0, 14.25, 9.0])
# my_house greater than 18.5 or smaller than 10
print(np.logical_or(my_house > 18.5, my_house < 10))
# Both my_house and your_house smaller than 11
print(np.logical_and(my_house < 11, your_house < 11))
```
## Driving right (1)
Remember that `cars` dataset, containing the cars per 1000 people (`cars_per_cap`) and whether people drive right (`drives_right`) for different countries (`country`)?
Let's start simple and try to find all observations in `cars` where `drives_right` is `True`.
`drives_right` is a boolean column, so you'll have to extract it as a Series and then use this boolean Series to select observations from `cars`.
- Extract the `drives_right` column as a Pandas Series and store it as `dr`.
- Use `dr`, a boolean Series, to subset the cars DataFrame. Store the resulting selection in `sel`.
- Print `sel`, and assert that `drives_right` is `True` for all observations.
```
# Import cars data
import pandas as pd
cars = pd.read_csv('cars.csv', index_col = 0)
# Extract drives_right column as Series: dr
dr = cars['drives_right']
# Use dr to subset cars: sel
sel = cars[dr]
# Print sel
print(sel)
```
## Driving right (2)
The code in the previous example worked fine, but you actually unnecessarily created a new variable `dr`. You can achieve the same result without this intermediate variable. Put the code that computes `dr` straight into the square brackets that select observations from `cars`.
- Convert the code to a one-liner that calculates the variable `sel` as before.
```
# Convert code to a one-liner
sel = cars[cars['drives_right']]
# Print sel
print(sel)
```
## Cars per capita (1)
Let's stick to the `cars` data some more. This time you want to find out which countries have a high cars per capita figure. In other words, in which countries do many people have a car, or maybe multiple cars.
Similar to the previous example, you'll want to build up a boolean Series, that you can then use to subset the `cars` DataFrame to select certain observations. If you want to do this in a one-liner, that's perfectly fine!
- Select the `cars_per_cap` column from `cars` as a Pandas Series and store it as `cpc`.
- Use `cpc` in combination with a comparison operator and `500`. You want to end up with a boolean Series that's `True` if the corresponding country has a `cars_per_cap` of more than `500` and `False` otherwise. Store this boolean Series as `many_cars`.
- Use `many_cars` to subset `cars`, similar to what you did before. Store the result as `car_maniac`.
- Print out `car_maniac` to see if you got it right.
```
# Create car_maniac: observations that have a cars_per_cap over 500
cpc = cars['cars_per_cap']
many_cars = cpc > 500
car_maniac = cars[many_cars]
# Print car_maniac
print(car_maniac)
```
## Cars per capita (2)
Remember about `np.logical_and()`, `np.logical_or()` and `np.logical_not()`, the Numpy variants of the `and`, `or` and `not` operators? You can also use them on Pandas Series to do more advanced filtering operations.
Take this example that selects the observations that have a `cars_per_cap` between 10 and 80. Try out these lines of code step by step to see what's happening.
```python
cpc = cars['cars_per_cap']
between = np.logical_and(cpc > 10, cpc < 80)
medium = cars[between]
```
- Use the code sample provided to create a DataFrame `medium`, that includes all the observations of `cars` that have a `cars_per_cap` between `100` and `500`.
- Print out `medium`.
```
# Create medium: observations with cars_per_cap between 100 and 500
cpc = cars['cars_per_cap']
between = np.logical_and(cpc > 100, cpc < 500)
medium = cars[between]
# Print medium
print(medium)
```
---
# Loops
**There are several techniques you can use to repeatedly execute Python code. While loops are like repeated if statements, the for loop iterates over all kinds of data structures. Learn all about them in this chapter.**
## Loop over list of lists
Remember the `house` variable from the Intro to Python course? Have a look at its definition in the script. It's basically a list of lists, where each sublist contains the name and area of a room in your house.
It's up to you to build a `for` loop from scratch this time!
- Write a `for` loop that goes through each sublist of `house` and prints out `the x is y sqm`, where x is the name of the room and y is the area of the room.
```
# house list of lists
house = [["hallway", 11.25],
["kitchen", 18.0],
["living room", 20.0],
["bedroom", 10.75],
["bathroom", 9.50]]
# Build a for loop from scratch
for item in house:
    print('the ' + item[0] + ' is ' + str(item[1]) + ' sqm')
```
## Loop over dictionary
In Python 3, you need the `items()` method to loop over a dictionary:
```python
world = { "afghanistan":30.55,
"albania":2.77,
"algeria":39.21 }
for key, value in world.items() :
print(key + " -- " + str(value))
```
Remember the `europe` dictionary that contained the names of some European countries as key and their capitals as corresponding value? Go ahead and write a loop to iterate over it!
- Write a `for` loop that goes through each key:value pair of `europe`. On each iteration, `"the capital of x is y"` should be printed out, where x is the key and y is the value of the pair.
```
# Definition of dictionary
europe = {'spain':'madrid', 'france':'paris', 'germany':'berlin',
'norway':'oslo', 'italy':'rome', 'poland':'warsaw', 'austria':'vienna' }
# Iterate over europe
for key, value in europe.items():
print('the capital of ' + key + ' is ' + value)
```
## Loop over DataFrame (1)
Iterating over a Pandas DataFrame is typically done with the `iterrows()` method. Used in a `for` loop, every observation is iterated over and on every iteration the row label and actual row contents are available:
```python
for lab, row in brics.iterrows() :
...
```
In this and the following exercises you will be working on the `cars` DataFrame. It contains information on the cars per capita and whether people drive right or left for seven countries in the world.
- Write a `for` loop that iterates over the rows of `cars` and on each iteration performs two `print()` calls: one to print out the row label and one to print out all of the row's contents.
```
# Iterate over rows of cars
for lab, row in cars.iterrows():
print(lab)
print(row)
```
## Loop over DataFrame (2)
The row data that's generated by `iterrows()` on every run is a Pandas Series. This format is not very convenient to print out. Luckily, you can easily select variables from the Pandas Series using square brackets:
```python
for lab, row in brics.iterrows() :
print(row['country'])
```
- Using the iterators `lab` and `row`, adapt the code in the for loop such that the first iteration prints out `"US: 809"`, the second iteration `"AUS: 731"`, and so on.
- The output should be in the form `"country: cars_per_cap"`. Make sure to print out this exact string (with the correct spacing).
- *You can use `str()` to convert your integer data to a string so that you can print it in conjunction with the country label.*
```
# Adapt for loop
for lab, row in cars.iterrows():
    print(lab + ': ' + str(row['cars_per_cap']))
```
## Add column (1)
```python
for lab, row in brics.iterrows() :
brics.loc[lab, "name_length"] = len(row["country"])
```
You can do similar things on the cars DataFrame.
- Use a `for` loop to add a new column, named `COUNTRY`, that contains an uppercase version of the country names in the `"country"` column. You can use the string method `upper()` for this.
- To see if your code worked, print out `cars`. Don't indent this code, so that it's not part of the `for` loop.
```
# Import cars data
import pandas as pd
cars = pd.read_csv('cars.csv', index_col = 0)
# Code for loop that adds COUNTRY column
for lab, row in cars.iterrows():
cars.loc[lab, 'COUNTRY'] = row['country'].upper()
# Print cars
print(cars)
```
## Add column (2)
Using `iterrows()` to iterate over every observation of a Pandas DataFrame is easy to understand, but not very efficient. On every iteration, you're creating a new Pandas Series.
If you want to add a column to a DataFrame by calling a function on another column, the `iterrows()` method in combination with a `for` loop is not the preferred way to go. Instead, you'll want to use `apply()`.
Compare the `iterrows()` version with the `apply()` version to get the same result in the `brics` DataFrame:
```python
for lab, row in brics.iterrows() :
brics.loc[lab, "name_length"] = len(row["country"])
brics["name_length"] = brics["country"].apply(len)
```
We can do a similar thing to call the `upper()` method on every name in the `country` column. However, `upper()` is a **method**, so we'll need a slightly different approach:
- Replace the `for` loop with a one-liner that uses `.apply(str.upper)`. The call should give the same result: a column `COUNTRY` should be added to `cars`, containing an uppercase version of the country names.
- As usual, print out `cars` to see the fruits of your hard labor.
```
# Import cars data
import pandas as pd
cars = pd.read_csv('cars.csv', index_col = 0)
# Use .apply(str.upper)
cars['COUNTRY'] = cars['country'].apply(str.upper)
print(cars)
```
# Customer Assignment Problem
Level: Intermediate
## Objective and Prerequisites
In this example, you will learn how to:
- Select facility locations based on their proximity to customers.
- Leverage machine learning to handle large datasets. For this, the reader should be familiar with clustering techniques such as the k-means algorithm.
This modeling example is at the intermediate level, where we assume that you know Python and are familiar with the Gurobi Python API. In addition, you should have some knowledge about building mathematical optimization models.
**Note:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=CommercialDataScience) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=AcademicDataScience) as an *academic user*.
---
## Motivation
Many companies in various industries must, at some point, make strategic decisions about where to build facilities to support their operations. For example:
- Producers of goods need to decide how to design their supply chains – which encompass factories, distribution centers, warehouses, and retail stores.
- Healthcare providers need to determine where to build hospitals to maximize their population coverage.
These are strategic decisions that are difficult to implement and costly to change because they entail long-term investments. Furthermore, these decisions have a significant impact, both in terms of customer satisfaction and cost management. One of the critical factors to consider in this process is the location of the customers that the company is planning to serve.
---
## Problem Description
The Customer Assignment Problem is closely related to the Facility Location Problem, which is concerned with the optimal placement of facilities (from a set of candidate locations) in order to minimize the distance between the company's facilities and the customers. When the facilities have unlimited capacity, customers are assumed to be served by the closest facility.
In cases where the number of customers considered is too large, the customers can be grouped into clusters. The cluster centers can then be used in lieu of the individual customer locations. This pre-processing assumes that all customers belonging to a given cluster will be served by the facility assigned to that cluster. The k-means algorithm can be used for this task; it partitions $n$ objects into $k$ distinct, non-overlapping clusters.
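This clustering step can be sketched in plain Python as a minimal Lloyd's-algorithm toy with hypothetical data; in practice one would use a library implementation such as scikit-learn's `KMeans`:

```python
import random

def kmeans(points, k, iters=20):
    """Minimal Lloyd's algorithm: returns the centroids and cluster sizes."""
    # Deterministic initialization: spread the starting centroids over the data.
    centroids = [points[len(points) * c // k] for c in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            # Assign each customer to its nearest centroid.
            j = min(range(k), key=lambda c: (x - centroids[c][0]) ** 2
                                            + (y - centroids[c][1]) ** 2)
            clusters[j].append((x, y))
        # Recompute each centroid as the mean of its cluster.
        centroids = [(sum(p[0] for p in cl) / len(cl),
                      sum(p[1] for p in cl) / len(cl)) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, [len(cl) for cl in clusters]

# Two well-separated "customer" blobs around hypothetical population centers.
random.seed(42)
pts = ([(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50)]
       + [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(50)])
centers, sizes = kmeans(pts, 2)
print(sorted(sizes))  # both blobs are recovered: [50, 50]
```

The cluster sizes become the customer weights $\text{weight}_i$ in the optimization model below.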
---
## Solution Approach
Mathematical optimization is a declarative approach where the modeler formulates a mathematical optimization model that captures the key aspects of a complex decision problem. The Gurobi Optimizer solves such models using state-of-the-art mathematics and computer science.
A mathematical optimization model has five components, namely:
- Sets and indices.
- Parameters.
- Decision variables.
- Objective function(s).
- Constraints.
We now present a Binary Integer Programming (BIP) formulation:
### Sets and Indices
$i \in I$: Set of customer clusters.
$j \in J$: Set of potential facility locations.
$\text{Pairings}= \{(i,j) \in I \times J: \text{dist}_{i,j} \leq \text{threshold}\}$: Set of allowed pairings.
### Parameters
$\text{threshold} \in \mathbb{R}^+$: Maximum distance for a cluster-facility pairing to be considered.
$\text{max_facilities} \in \mathbb{N}$: Maximum number of facilities to be opened.
$\text{weight}_i \in \mathbb{N}$: Number of customers in cluster $i$.
$\text{dist}_{i,j} \in \mathbb{R}^+$: Distance from cluster $i$ to facility location $j$.
### Decision Variables
$\text{select}_j \in \{0,1\}$: 1 if facility location $j$ is selected; 0 otherwise.
$\text{assign}_{i,j} \in \{0,1\}$: 1 if cluster $i$ is assigned to facility location $j$; 0 otherwise.
### Objective Function
- **Total distance**: Minimize the total distance from clusters to their assigned facility:
\begin{equation}
\text{Min} \quad Z = \sum_{(i,j) \in \text{Pairings}}\text{weight}_i \cdot \text{dist}_{i,j} \cdot \text{assign}_{i,j}
\tag{0}
\end{equation}
### Constraints
- **Facility limit**: The number of facilities built cannot exceed the limit:
\begin{equation}
\sum_{j}\text{select}_j \leq \text{max_facilities}
\tag{1}
\end{equation}
- **Open to assign**: Cluster $i$ can only be assigned to facility $j$ if we decide to build that facility:
\begin{equation}
\text{assign}_{i,j} \leq \text{select}_{j} \quad \forall (i,j) \in \text{Pairings}
\tag{2}
\end{equation}
- **Closest store**: Cluster $i$ must be assigned to exactly one facility:
\begin{equation}
\sum_{j:(i,j) \in \text{Pairings}}\text{assign}_{i,j} = 1 \quad \forall i \in I
\tag{3}
\end{equation}
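To see the formulation in action before introducing Gurobi, a tiny instance can be solved by brute force: enumerate every admissible set of open facilities (constraints 1 and 2) and assign each cluster to its cheapest open facility (constraint 3). All numbers below are toy values for illustration:

```python
import itertools
import numpy as np

# Toy instance: 4 clusters, 3 candidate facilities, at most 2 may be opened.
toy_dist = np.array([[1.0, 4.0, 9.0],
                     [2.0, 3.0, 8.0],
                     [9.0, 2.0, 1.0],
                     [8.0, 3.0, 2.0]])   # toy_dist[i, j]: cluster i to facility j
toy_weight = np.array([10, 20, 30, 40])  # customers per cluster
toy_max = 2

best_cost, best_open = float('inf'), None
for k in range(1, toy_max + 1):                       # constraint (1)
    for open_set in itertools.combinations(range(3), k):
        # Constraints (2)-(3): each cluster goes to its cheapest open facility.
        cost = (toy_weight * toy_dist[:, list(open_set)].min(axis=1)).sum()
        if cost < best_cost:
            best_cost, best_open = cost, open_set

print(best_open, best_cost)  # -> (0, 2) 160.0
```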
---
## Python Implementation
### Dataset Generation
In this simple example, we choose random locations for customers and facility candidates. Customers are distributed using Gaussian distributions around a few randomly chosen population centers, whereas facility locations are uniformly distributed.
```
%matplotlib inline
import random
import gurobipy as gp
import matplotlib.pyplot as plt
import numpy as np
from gurobipy import GRB
from sklearn.cluster import MiniBatchKMeans
# Tested with Gurobi v9.0.0 and Python 3.7.0
seed = 10101
num_customers = 50000
num_candidates = 50
max_facilities = 8
num_clusters = 1000
num_gaussians = 10
threshold = 0.99
random.seed(seed)
customers_per_gaussian = np.random.multinomial(num_customers,
[1/num_gaussians]*num_gaussians)
customer_locs = []
for i in range(num_gaussians):
# each center coordinate in [-0.5, 0.5]
center = (random.random()-0.5, random.random()-0.5)
customer_locs += [(random.gauss(0,.1) + center[0], random.gauss(0,.1) + center[1])
for i in range(customers_per_gaussian[i])]
# each candidate coordinate in [-0.5, 0.5]
facility_locs = [(random.random()-0.5, random.random()-0.5)
for i in range(num_candidates)]
print('First customer location:', customer_locs[0])
```
### Preprocessing
**Clustering**
To limit the size of the optimization model, we group individual customers into clusters and optimize on these clusters. Clusters are computed using the K-means algorithm, as implemented in the scikit-learn package.
```
kmeans = MiniBatchKMeans(n_clusters=num_clusters, init_size=3*num_clusters,
random_state=seed).fit(customer_locs)
memberships = list(kmeans.labels_)
centroids = list(kmeans.cluster_centers_) # Center point for each cluster
weights = list(np.histogram(memberships, bins=num_clusters)[0]) # Number of customers in each cluster
print('First cluster center:', centroids[0])
print('Weights for first 10 clusters:', weights[:10])
```
**Viable Customer-Store Pairings**
Some facilities are simply too far from a cluster center to be relevant, so we heuristically discard every pairing whose distance exceeds a given `threshold`:
```
def dist(loc1, loc2):
return np.linalg.norm(loc1-loc2, ord=2) # Euclidean distance
pairings = {}
for facility in range(num_candidates):
    for cluster in range(num_clusters):
        # Compute each facility-cluster distance once and keep only viable pairings.
        d = dist(facility_locs[facility], centroids[cluster])
        if d < threshold:
            pairings[facility, cluster] = d
print("Number of viable pairings: {0}".format(len(pairings.keys())))
```
### Model Deployment
Build facilities from among candidate locations to minimize total distance to cluster centers:
```
m = gp.Model("Facility location")
# Decision variables: select facility locations
select = m.addVars(range(num_candidates), vtype=GRB.BINARY, name='select')
# Decision variables: assign customer clusters to a facility location
assign = m.addVars(pairings.keys(), vtype=GRB.BINARY, name='assign')
# Deploy Objective Function
# 0. Total distance
obj = gp.quicksum(weights[cluster]
*pairings[facility, cluster]
*assign[facility, cluster]
for facility, cluster in pairings.keys())
m.setObjective(obj, GRB.MINIMIZE)
# 1. Facility limit
m.addConstr(select.sum() <= max_facilities, name="Facility_limit")
# 2. Open to assign
m.addConstrs((assign[facility, cluster] <= select[facility]
for facility, cluster in pairings.keys()),
name="Open2assign")
# 3. Closest store
m.addConstrs((assign.sum('*', cluster) == 1
for cluster in range(num_clusters)),
name="Closest_store")
# Find the optimal solution
m.optimize()
```
## Analysis
Let's plot a map with:
- Customer locations represented as small pink dots.
- Customer cluster centroids represented as large red dots.
- Facility location candidates represented as green dots. Notice that selected locations have black lines emanating from them towards each cluster that is likely to be served by that facility.
```
plt.figure(figsize=(8,8), dpi=150)
plt.scatter(*zip(*customer_locs), c='Pink', s=0.5)
plt.scatter(*zip(*centroids), c='Red', s=10)
plt.scatter(*zip(*facility_locs), c='Green', s=10)
assignments = [p for p in pairings if assign[p].x > 0.5]
for p in assignments:
pts = [facility_locs[p[0]], centroids[p[1]]]
plt.plot(*zip(*pts), c='Black', linewidth=0.1)
```
---
## Conclusions
We learned how mathematical optimization can be used to solve the Customer Assignment Problem. Moreover, we showed how machine learning can be used in pre-processing to reduce the computational burden of large datasets. Of course, this comes at a cost: using fewer clusters results in a coarser approximation of the globally optimal solution.
---
## References
1. Drezner, Z., & Hamacher, H. W. (Eds.). (2001). Facility location: applications and theory. Springer Science & Business Media.
2. James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning. New York: springer.
3. Klose, A., & Drexl, A. (2005). Facility location models for distribution system design. European journal of operational research, 162(1), 4-29.
Copyright © 2020 Gurobi Optimization, LLC
<a href="https://colab.research.google.com/github/intel-analytics/BigDL/blob/branch-2.0/python/chronos/colab-notebook/chronos_experimental_autots_nyc_taxi.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

##### Copyright 2016 The BigDL Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
```
## **Environment Preparation**
**Install bigdl-chronos**
You can install the latest pre-release version with automl support using `pip install --pre --upgrade bigdl-chronos[all]`.
```
# Install latest pre-release version of bigdl-chronos
# Installing bigdl-chronos from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-chronos[all]
!pip uninstall -y torchtext # uninstall torchtext to avoid version conflict
exit() # restart the runtime to refresh installed pkg
```
## **Distributed automl for time series forecasting using Chronos AutoTS**
In this guide we will demonstrate how to use Chronos AutoTS for automated time series forecasting in five simple steps.
## **Step 0: Prepare dataset**
We use the NYC taxi passengers dataset in [Numenta Anomaly Benchmark (NAB)](https://github.com/numenta/NAB) for this demo. It contains 10320 records, each indicating the total number of taxi passengers in NYC at a corresponding time spot.
```
# download the dataset
!wget https://raw.githubusercontent.com/numenta/NAB/v1.0/data/realKnownCause/nyc_taxi.csv
# load the dataset. The downloaded dataframe contains two columns, "timestamp" and "value".
import pandas as pd
df = pd.read_csv("nyc_taxi.csv", parse_dates=["timestamp"])
```
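Before modeling, it is worth checking that the series is evenly spaced and gap-free. A minimal sketch on a stand-in frame with the same two-column schema (named `sample_df` so it does not clobber the `df` just loaded):

```python
import pandas as pd

# Stand-in frame mimicking nyc_taxi.csv: evenly spaced timestamps, one value per spot.
sample_df = pd.DataFrame({
    "timestamp": pd.date_range("2014-07-01", periods=4, freq="30min"),
    "value": [10844, 8127, 6210, 4656],
})

# An evenly spaced series has exactly one distinct timestamp gap and no NaNs.
gaps = sample_df["timestamp"].diff().dropna().unique()
print(len(gaps), sample_df["value"].isna().sum())  # -> 1 0
```

The same two checks apply verbatim to the real `df`.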
### **Step 1: Init Orca Context**
```
# import necesary libraries and modules
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca import OrcaContext
```
This is the only place where you need to specify local or distributed mode. View [Orca Context](https://analytics-zoo.readthedocs.io/en/latest/doc/Orca/Overview/orca-context.html) for more details. Note that the argument `init_ray_on_spark` must be `True` for Chronos.
```
# recommended to set it to True when running bigdl-chronos in Jupyter notebook
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
init_orca_context(cluster_mode="local", cores=4, init_ray_on_spark=True)
```
### **Step 2: Data transformation and feature engineering using Chronos TSDataset**
[TSDataset](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/Chronos/tsdataset.html) is our abstraction of a time series dataset for data transformation and feature engineering. Here we use it to preprocess the data.
```
from bigdl.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
tsdata_train, tsdata_val, tsdata_test = TSDataset.from_pandas(df, # the dataframe to load
dt_col="timestamp", # the column name specifying datetime
target_col="value", # the column name to predict
with_split=True, # split the dataset into 3 parts
val_ratio=0.1, # validation set ratio
test_ratio=0.1) # test set ratio
# for each tsdataset, we
# 1. generate datetime feature columns.
# 2. impute the dataset with the last observed value.
# 3. scale the dataset with standard scaler, fit = true for train data.
standard_scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_val, tsdata_test]:
tsdata.gen_dt_feature()\
.impute(mode="last")\
.scale(standard_scaler, fit=(tsdata is tsdata_train))
```
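Conceptually, the three steps above amount to the following plain-pandas sketch (an illustration of the idea only, not `TSDataset`'s actual implementation; `toy_df` is a made-up frame):

```python
import numpy as np
import pandas as pd

# Made-up frame with the same shape of problem: timestamps plus a target with gaps.
toy_df = pd.DataFrame({
    "timestamp": pd.date_range("2014-07-01", periods=6, freq="30min"),
    "value": [10.0, np.nan, 12.0, 11.0, np.nan, 9.0],
})

# 1. Generate datetime feature columns.
toy_df["hour"] = toy_df["timestamp"].dt.hour
toy_df["dayofweek"] = toy_df["timestamp"].dt.dayofweek

# 2. Impute each gap with the last observed value (cf. mode="last").
toy_df["value"] = toy_df["value"].ffill()

# 3. Standard-scale the target, as a StandardScaler fitted on this data would.
toy_df["value"] = (toy_df["value"] - toy_df["value"].mean()) / toy_df["value"].std(ddof=0)

print(toy_df["value"].isna().sum())  # -> 0
```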
### **Step 3: Create an AutoTSEstimator**
[AutoTSEstimator](https://bigdl.readthedocs.io/en/latest/doc/PythonAPI/Chronos/autotsestimator.html) is our Automated TimeSeries Estimator for time series forecasting tasks.
```
import bigdl.orca.automl.hp as hp
from bigdl.chronos.autots import AutoTSEstimator
auto_estimator = AutoTSEstimator(model='lstm', # the model name used for training
search_space='normal', # a default hyper parameter search space
past_seq_len=hp.randint(1, 10)) # hp sampling function of past_seq_len for auto-tuning
```
### **Step 4: Fit with AutoTSEstimator**
```
# fit with AutoTSEstimator for a returned TSPipeline
ts_pipeline = auto_estimator.fit(data=tsdata_train, # train dataset
validation_data=tsdata_val, # validation dataset
epochs=5) # number of epochs to train in each trial
```
### **Step 5: Further deployment with TSPipeline**
[TSPipeline](https://analytics-zoo.readthedocs.io/en/latest/doc/PythonAPI/Chronos/autotsestimator.html#tspipeline-experimental) is our end-to-end (E2E) solution for time series forecasting tasks.
```
# predict with the best trial
y_pred = ts_pipeline.predict(tsdata_test)
# evaluate the result pipeline
mse, smape = ts_pipeline.evaluate(tsdata_test, metrics=["mse", "smape"])
print("Evaluate: the mean square error is", mse)
print("Evaluate: the smape value is", smape)
# plot the result
import matplotlib.pyplot as plt
lookback = auto_estimator.get_best_config()['past_seq_len']
groundtruth_unscale = tsdata_test.unscale().to_pandas()[lookback - 1:]
plt.figure(figsize=(16,6))
plt.plot(groundtruth_unscale["timestamp"], y_pred[:,0,0])
plt.plot(groundtruth_unscale["timestamp"], groundtruth_unscale["value"])
plt.legend(["prediction", "ground truth"])
# save the pipeline
my_ppl_file_path = "/tmp/saved_pipeline"
ts_pipeline.save(my_ppl_file_path)
# restore the pipeline for further deployment
from bigdl.chronos.autots import TSPipeline
loaded_ppl = TSPipeline.load(my_ppl_file_path)
# Stop orca context when your program finishes
stop_orca_context()
# show a tensorboard view
%load_ext tensorboard
%tensorboard --logdir /tmp/autots_estimator/autots_estimator_leaderboard/
```
# Multi-Fidelity Deep Gaussian process benchmark
This notebook replicates the benchmark experiments from the paper:
[Deep Gaussian Processes for Multi-fidelity Modeling (Kurt Cutajar, Mark Pullin, Andreas Damianou, Neil Lawrence, Javier González)](https://arxiv.org/abs/1903.07320)
Note that the code for one of the benchmark models in the paper, "Deep Multi-fidelity Gaussian process", is not publicly available and so does not appear in this notebook.
```
from prettytable import PrettyTable
import numpy as np
import scipy.stats
from sklearn.metrics import mean_squared_error, r2_score
import emukit.examples.multi_fidelity_dgp
from emukit.examples.multi_fidelity_dgp.baseline_model_wrappers import LinearAutoRegressiveModel, NonLinearAutoRegressiveModel, HighFidelityGp
from emukit.core import ContinuousParameter
from emukit.experimental_design import LatinDesign
from emukit.examples.multi_fidelity_dgp.multi_fidelity_deep_gp import MultiFidelityDeepGP
from emukit.test_functions.multi_fidelity import (multi_fidelity_borehole_function, multi_fidelity_branin_function,
multi_fidelity_park_function, multi_fidelity_hartmann_3d,
multi_fidelity_currin_function)
```
# Parameters for different benchmark functions
```
from collections import namedtuple
Function = namedtuple('Function', ['name', 'y_scale', 'noise_level', 'do_x_scaling', 'num_data', 'fcn'])
borehole = Function(name='borehole', y_scale=100, noise_level=[0.05, 0.1], do_x_scaling=True, num_data=[60, 5],
fcn=multi_fidelity_borehole_function)
branin = Function(name='branin', y_scale=1, noise_level=[0., 0., 0.], do_x_scaling=False, num_data=[80, 30, 10],
fcn=multi_fidelity_branin_function)
currin = Function(name='currin', y_scale=1, noise_level=[0., 0.], do_x_scaling=False, num_data=[12, 5],
fcn=multi_fidelity_currin_function)
park = Function(name='park', y_scale=1, noise_level=[0., 0.], do_x_scaling=False, num_data=[30, 5],
fcn=multi_fidelity_park_function)
hartmann_3d = Function(name='hartmann', y_scale=1, noise_level=[0., 0., 0.], do_x_scaling=False, num_data=[80, 40, 20],
fcn=multi_fidelity_hartmann_3d)
# Function to repeat test across different random seeds.
def do_benchmark(fcn_tuple):
metrics = dict()
# Some random seeds to use
seeds = [123, 184, 202, 289, 732]
for i, seed in enumerate(seeds):
run_name = str(seed) + str(fcn_tuple.num_data)
metrics[run_name] = test_function(fcn_tuple, seed)
print('After ' + str(i+1) + ' runs of ' + fcn_tuple.name)
print_metrics(metrics)
return metrics
# Print metrics as table
def print_metrics(metrics):
model_names = list(list(metrics.values())[0].keys())
metric_names = ['r2', 'mnll', 'rmse']
table = PrettyTable(['model'] + metric_names)
for name in model_names:
mean = []
for metric_name in metric_names:
mean.append(np.mean([metric[name][metric_name] for metric in metrics.values()]))
table.add_row([name] + mean)
print(table)
def test_function(fcn, seed):
np.random.seed(seed)
    x_test, y_test, X, Y = generate_data(fcn, 1000)
mf_dgp_fix_lf_mean = MultiFidelityDeepGP(X, Y, n_iter=5000)
mf_dgp_fix_lf_mean.name = 'mf_dgp_fix_lf_mean'
models = [HighFidelityGp(X, Y), LinearAutoRegressiveModel(X, Y), NonLinearAutoRegressiveModel(X, Y), mf_dgp_fix_lf_mean]
return benchmark_models(models, x_test, y_test)
def benchmark_models(models, x_test, y_test):
metrics = dict()
for model in models:
model.optimize()
y_mean, y_var = model.predict(x_test)
metrics[model.name] = calculate_metrics(y_test, y_mean, y_var)
print('+ ######################## +')
print(model.name, 'r2', metrics[model.name]['r2'])
print('+ ######################## + ')
return metrics
def generate_data(fcn_tuple, n_test_points):
"""
    Generates train and test data for the given benchmark function.
"""
do_x_scaling = fcn_tuple.do_x_scaling
fcn, space = fcn_tuple.fcn()
# Generate training data
latin = LatinDesign(space)
X = [latin.get_samples(n) for n in fcn_tuple.num_data]
# Scale X if required
if do_x_scaling:
scalings = X[0].std(axis=0)
else:
scalings = np.ones(X[0].shape[1])
for x in X:
x /= scalings
Y = []
for i, x in enumerate(X):
Y.append(fcn.f[i](x * scalings))
y_scale = fcn_tuple.y_scale
# scale y and add noise if required
noise_levels = fcn_tuple.noise_level
    for y, std_noise in zip(Y, noise_levels):
        y /= y_scale  # scale y
        y += std_noise * np.random.randn(y.shape[0], 1)  # add noise if required
# Generate test data
x_test = latin.get_samples(n_test_points)
x_test /= scalings
y_test = fcn.f[-1](x_test * scalings)
y_test /= y_scale
i_highest_fidelity = (len(fcn_tuple.num_data) - 1) * np.ones((x_test.shape[0], 1))
x_test = np.concatenate([x_test, i_highest_fidelity], axis=1)
print(X[1].shape)
return x_test, y_test, X, Y
def calculate_metrics(y_test, y_mean_prediction, y_var_prediction):
# R2
r2 = r2_score(y_test, y_mean_prediction)
# RMSE
rmse = np.sqrt(mean_squared_error(y_test, y_mean_prediction))
# Test log likelihood
mnll = -np.sum(scipy.stats.norm.logpdf(y_test, loc=y_mean_prediction, scale=np.sqrt(y_var_prediction)))/len(y_test)
return {'r2': r2, 'rmse': rmse, 'mnll': mnll}
metrics = []
metrics.append(do_benchmark(currin))
for fcn, metric in zip([currin], metrics):
    print(fcn.name)
    print_metrics(metric)
```
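For reference, the three quantities returned by `calculate_metrics` are, for $N$ test targets $y_n$ with predictive means $\mu_n$ and variances $\sigma_n^2$:

$$R^2 = 1 - \frac{\sum_n (y_n - \mu_n)^2}{\sum_n (y_n - \bar{y})^2}, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_n (y_n - \mu_n)^2}, \qquad \mathrm{MNLL} = -\frac{1}{N}\sum_n \log \mathcal{N}\!\left(y_n \mid \mu_n, \sigma_n^2\right).$$

Higher $R^2$ is better, while lower RMSE and MNLL are better; MNLL additionally penalizes over-confident predictive variances, not just inaccurate means.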
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
mu1 = np.array([3,3,3,3,0])
sigma1 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu2 = np.array([4,4,4,4,0])
sigma2 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu3 = np.array([10,5,5,10,0])
sigma3 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu4 = np.array([-10,-10,-10,-10,0])
sigma4 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu5 = np.array([-21,4,4,-21,0])
sigma5 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu6 = np.array([-10,18,18,-10,0])
sigma6 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu7 = np.array([4,20,4,20,0])
sigma7 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu8 = np.array([4,-20,-20,4,0])
sigma8 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu9 = np.array([20,20,20,20,0])
sigma9 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
mu10 = np.array([20,-10,-10,20,0])
sigma10 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
sample1 = np.random.multivariate_normal(mean=mu1,cov= sigma1,size=500)
sample2 = np.random.multivariate_normal(mean=mu2,cov= sigma2,size=500)
sample3 = np.random.multivariate_normal(mean=mu3,cov= sigma3,size=500)
sample4 = np.random.multivariate_normal(mean=mu4,cov= sigma4,size=500)
sample5 = np.random.multivariate_normal(mean=mu5,cov= sigma5,size=500)
sample6 = np.random.multivariate_normal(mean=mu6,cov= sigma6,size=500)
sample7 = np.random.multivariate_normal(mean=mu7,cov= sigma7,size=500)
sample8 = np.random.multivariate_normal(mean=mu8,cov= sigma8,size=500)
sample9 = np.random.multivariate_normal(mean=mu9,cov= sigma9,size=500)
sample10 = np.random.multivariate_normal(mean=mu10,cov= sigma10,size=500)
# mu1 = np.array([3,3,0,0,0])
# sigma1 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu2 = np.array([4,4,0,0,0])
# sigma2 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu3 = np.array([10,5,0,0,0])
# sigma3 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu4 = np.array([-10,-10,0,0,0])
# sigma4 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu5 = np.array([-21,4,0,0,0])
# sigma5 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu6 = np.array([-10,18,0,0,0])
# sigma6 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu7 = np.array([4,20,0,0,0])
# sigma7 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu8 = np.array([4,-20,0,0,0])
# sigma8 = np.array([[16,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu9 = np.array([20,20,0,0,0])
# sigma9 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# mu10 = np.array([20,-10,0,0,0])
# sigma10 = np.array([[1,1,1,1,1],[1,16,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]])
# sample1 = np.random.multivariate_normal(mean=mu1,cov= sigma1,size=500)
# sample2 = np.random.multivariate_normal(mean=mu2,cov= sigma2,size=500)
# sample3 = np.random.multivariate_normal(mean=mu3,cov= sigma3,size=500)
# sample4 = np.random.multivariate_normal(mean=mu4,cov= sigma4,size=500)
# sample5 = np.random.multivariate_normal(mean=mu5,cov= sigma5,size=500)
# sample6 = np.random.multivariate_normal(mean=mu6,cov= sigma6,size=500)
# sample7 = np.random.multivariate_normal(mean=mu7,cov= sigma7,size=500)
# sample8 = np.random.multivariate_normal(mean=mu8,cov= sigma8,size=500)
# sample9 = np.random.multivariate_normal(mean=mu9,cov= sigma9,size=500)
# sample10 = np.random.multivariate_normal(mean=mu10,cov= sigma10,size=500)
X = np.concatenate((sample1,sample2,sample3,sample4,sample5,sample6,sample7,sample8,sample9,sample10),axis=0)
Y = np.concatenate((np.zeros((500,1)),np.ones((500,1)),2*np.ones((500,1)),3*np.ones((500,1)),4*np.ones((500,1)),
5*np.ones((500,1)),6*np.ones((500,1)),7*np.ones((500,1)),8*np.ones((500,1)),9*np.ones((500,1))),axis=0).astype(int)
print(X.shape,Y.shape)
# plt.scatter(sample1[:,0],sample1[:,1],label="class_0")
# plt.scatter(sample2[:,0],sample2[:,1],label="class_1")
# plt.scatter(sample3[:,0],sample3[:,1],label="class_2")
# plt.scatter(sample4[:,0],sample4[:,1],label="class_3")
# plt.scatter(sample5[:,0],sample5[:,1],label="class_4")
# plt.scatter(sample6[:,0],sample6[:,1],label="class_5")
# plt.scatter(sample7[:,0],sample7[:,1],label="class_6")
# plt.scatter(sample8[:,0],sample8[:,1],label="class_7")
# plt.scatter(sample9[:,0],sample9[:,1],label="class_8")
# plt.scatter(sample10[:,0],sample10[:,1],label="class_9")
# plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
class SyntheticDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, x, y):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.x = x
self.y = y
#self.fore_idx = fore_idx
def __len__(self):
return len(self.y)
def __getitem__(self, idx):
return self.x[idx] , self.y[idx] #, self.fore_idx[idx]
trainset = SyntheticDataset(X,Y)
# testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
classes = ('zero','one','two','three','four','five','six','seven','eight','nine')
foreground_classes = {'zero','one','two'}
fg_used = '012'
fg1, fg2, fg3 = 0,1,2
all_classes = {'zero','one','two','three','four','five','six','seven','eight','nine'}
background_classes = all_classes - foreground_classes
background_classes
trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True)
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=100
for i in range(50):
    images, labels = next(dataiter)
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label)
def create_mosaic_img(bg_idx,fg_idx,fg):
"""
bg_idx : list of indexes of background_data[] to be used as background images in mosaic
fg_idx : index of image to be used as foreground image from foreground data
fg : at what position/index foreground image has to be stored out of 0-8
"""
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]])
j+=1
else:
image_list.append(foreground_data[fg_idx])
label = foreground_label[fg_idx] - fg1 # minus fg1 because our fore ground classes are fg1,fg2,fg3 but we have to store it as 0,1,2
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 3000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
list_set_labels = []
for i in range(desired_num):
set_idx = set()
np.random.seed(i)
bg_idx = np.random.randint(0,3500,8)
set_idx = set(background_label[bg_idx].tolist())
fg_idx = np.random.randint(0,1500)
set_idx.add(foreground_label[fg_idx].item())
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
list_set_labels.append(set_idx)
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number):
"""
mosaic_dataset : mosaic_dataset contains 9 images 32 x 32 each as 1 data point
labels : mosaic_dataset labels
foreground_index : contains list of indexes where foreground image is present so that using this we can take weighted average
dataset_number : will help us to tell what ratio of foreground image to be taken. for eg: if it is "j" then fg_image_ratio = j/9 , bg_image_ratio = (9-j)/8*9
"""
avg_image_dataset = []
for i in range(len(mosaic_dataset)):
img = torch.zeros([5], dtype=torch.float64)
for j in range(9):
if j == foreground_index[i]:
img = img + mosaic_dataset[i][j]*dataset_number/9
else :
img = img + mosaic_dataset[i][j]*(9-dataset_number)/(8*9)
avg_image_dataset.append(img)
return torch.stack(avg_image_dataset) , torch.stack(labels) , foreground_index
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
    return r_loss/(i+1)  # average loss per batch
class MosaicDataset1(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
```
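In `create_avg_image_from_mosaic_dataset` above, the dataset number $j$ controls the foreground-to-background ratio: each averaged point is the convex combination

$$\bar{x} = \frac{j}{9}\, x_{\text{fg}} + \sum_{k=1}^{8} \frac{9-j}{8 \cdot 9}\, x_{\text{bg},k}, \qquad \frac{j}{9} + 8 \cdot \frac{9-j}{8 \cdot 9} = 1,$$

so the nine weights always sum to one, and larger $j$ makes the foreground element dominate the average.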
**Focus Net**
```
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50) #,self.output)
self.linear2 = nn.Linear(50,self.output)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,self.d], dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
for i in range(self.K):
x[:,i] = self.helper(z[:,i] )[:,0] # self.d*i:self.d*i+self.d
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],z[:,i]) # self.d*i:self.d*i+self.d
return y , x
def helper(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
```
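`Focus_deep` above scores each of the $K$ elemental points, softmax-normalizes the scores into attention weights $\alpha$, and returns the $\alpha$-weighted average. A minimal numpy sketch of that mechanism with toy scores (the real network learns the scores via `helper`):

```python
import numpy as np

# Toy input: K=3 elemental points of dimension d=2, with one raw score per point.
z = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])  # shape (K, d)
scores = np.array([0.1, 0.2, 3.0])

# Softmax over the K scores gives the attention weights alpha (they sum to 1).
alpha = np.exp(scores) / np.exp(scores).sum()

# The focused representation is the alpha-weighted average of the K points.
y_avg = (alpha[:, None] * z).sum(axis=0)

print(alpha.round(3))  # the third point dominates
print(y_avg)
```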
**Classification Net**
```
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,200)
self.linear2 = nn.Linear(200,self.output)
def forward(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
```
```
where = Focus_deep(5,1,9,5).double()
what = Classification_deep(5,3).double()
where = where.to("cuda")
what = what.to("cuda")
def calculate_attn_loss(dataloader,what,where,criter):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
mx,_ = torch.max(alpha,1)
entropy = np.mean(-np.log2(mx.cpu().detach().numpy()))
# print("entropy of batch", entropy)
loss = criter(outputs, labels) + 0.1*entropy
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
    return r_loss/(i+1), analysis  # average loss per batch
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
print("--"*40)
criterion = nn.CrossEntropyLoss()
optimizer_where = optim.Adam(where.parameters(),lr =0.001)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)
acti = []
loss_curi = []
analysis_data = []
epochs = 1000
running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
mx,_ = torch.max(alpha,1)
# compute the entropy as a tensor (not a detached NumPy value) so the
# regularizer actually contributes to the gradients in loss.backward()
entropy = torch.mean(-torch.log2(mx))
loss = criterion(outputs, labels) + 0.1*entropy
# loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
analysis_data = np.array(analysis_data)
plt.figure(figsize=(6,6))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig("trends_synthetic_300_300.png",bbox_inches="tight")
plt.savefig("trends_synthetic_300_300.pdf",bbox_inches="tight")
analysis_data[-1,:2]/3000
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
print(running_loss, anls_data)
what.eval()
where.eval()
alphas = []
max_alpha =[]
alpha_ftpt=[]
alpha_ffpt=[]
alpha_ftpf=[]
alpha_ffpf=[]
argmax_more_than_half=0
argmax_less_than_half=0
cnt =0
with torch.no_grad():
for i, data in enumerate(train_loader, 0):
inputs, labels, fidx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg, alphas = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
batch = len(predicted)
mx,_ = torch.max(alphas,1)
max_alpha.append(mx.cpu().detach().numpy())
for j in range (batch):
cnt+=1
focus = torch.argmax(alphas[j]).item()
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if (focus == fidx[j].item() and predicted[j].item() == labels[j].item()):
alpha_ftpt.append(alphas[j][focus].item())
# print(focus, fore_idx[j].item(), predicted[j].item() , labels[j].item() )
elif (focus != fidx[j].item() and predicted[j].item() == labels[j].item()):
alpha_ffpt.append(alphas[j][focus].item())
elif (focus == fidx[j].item() and predicted[j].item() != labels[j].item()):
alpha_ftpf.append(alphas[j][focus].item())
elif (focus != fidx[j].item() and predicted[j].item() != labels[j].item()):
alpha_ffpf.append(alphas[j][focus].item())
np.mean(-np.log2(mx.cpu().detach().numpy()))
a = np.array([0.8,0.9])
-np.log2(a)
np.mean(-np.log2(a))
max_alpha = np.concatenate(max_alpha,axis=0)
print(max_alpha.shape, cnt)
np.array(alpha_ftpt).size, np.array(alpha_ffpt).size, np.array(alpha_ftpf).size, np.array(alpha_ffpf).size
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(max_alpha,bins=50,color ="c")
plt.title("alpha values histogram")
plt.savefig("attention_model_2_hist")
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(np.array(alpha_ftpt),bins=50,color ="c")
plt.title("alpha values in ftpt")
plt.savefig("attention_model_2_hist_ftpt")  # distinct name so the previous histogram isn't overwritten
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(np.array(alpha_ffpt),bins=50,color ="c")
plt.title("alpha values in ffpt")
plt.savefig("attention_model_2_hist_ffpt")  # distinct name so the previous histogram isn't overwritten
```
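The four counters returned by `analyse_data` above (`ftpt`/`ffpt`/`ftpf`/`ffpf`: focus-true/false crossed with prediction-true/false) can be sanity-checked on toy inputs. Below is a minimal self-contained sketch: the function is restated so the cell runs on its own, and the batch of four samples is made up to hit each category exactly once.

```python
import numpy as np

def analyse_data(alphas, lbls, predicted, f_idx):
    """Restated from above: count (focus correct?, prediction correct?) combinations."""
    amth, alth, ftpt, ffpt, ftpf, ffpf = 0, 0, 0, 0, 0, 0
    for j in range(len(predicted)):
        focus = np.argmax(alphas[j])
        if alphas[j][focus] >= 0.5:
            amth += 1          # attention mass concentrated (>= 0.5) on one patch
        else:
            alth += 1
        if focus == f_idx[j] and predicted[j] == lbls[j]:
            ftpt += 1          # focus true, prediction true
        elif focus != f_idx[j] and predicted[j] == lbls[j]:
            ffpt += 1          # focus false, prediction true
        elif focus == f_idx[j] and predicted[j] != lbls[j]:
            ftpf += 1          # focus true, prediction false
        else:
            ffpf += 1          # focus false, prediction false
    return [ftpt, ffpt, ftpf, ffpf, amth, alth]

# Made-up batch of 4 samples, true foreground index 0 for every sample
alphas = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.6, 0.3],
                   [0.4, 0.3, 0.3],
                   [0.2, 0.2, 0.6]])
lbls = np.array([1, 0, 1, 0])
predicted = np.array([1, 0, 0, 1])
f_idx = np.array([0, 0, 0, 0])
print(analyse_data(alphas, lbls, predicted, f_idx))  # [1, 1, 1, 1, 3, 1]
```

Since the four categories are exhaustive, the first four counters always sum to the batch size.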
# Per Relation TACRED Breakdown
This notebook visualizes the difference in $F_1$ performance between supervised QA models and RE-Flex.
```
import pickle
import json
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
with open('../reflextacred.json') as rf, open('fixed_reflextacred.json', 'w') as wf:
# the file might be formatted wrong
for l in rf.readlines():
l = l.strip()
if ':' in l:
left, right = l.split(':')
left = left.strip()
right = right.strip()
wf.write(f'"{left}":{right}')
continue
wf.write(l)
with open('fixed_reflextacred.json') as wf:
obj = json.load(wf)
reflex_per_relation = obj['per_relation_metrics']
with open('../BERTSQUADTACRED.pkl', 'rb') as rf:
squad_per_relation = pickle.load(rf)
with open('../BERTZSRETACRED.pkl', 'rb') as rf:
zsre_per_relation = pickle.load(rf)
def get_rel_name(dataset, rel):
with open(f'../data/{dataset}_relations.jsonl') as rf:
for l in rf:
l = l.strip()
obj = json.loads(l)
if obj['relation'] == rel:
return obj['label']
s_dict = []
z_dict = []
for rel in reflex_per_relation:
rf1 = reflex_per_relation[rel]['f1'] * 100
sf1 = squad_per_relation[rel]['f1'] * 100
zf1 = zsre_per_relation[rel]['f1'] * 100
delta_s = rf1 - sf1
delta_z = rf1 - zf1
if delta_s > 0:
rname = get_rel_name('tacred', rel)
print(rname)
# if delta_z > 0:
# rname = get_rel_name('tacred', rel)
# print(rname)
s_dict.append(delta_s)
z_dict.append(delta_z)
ser = pd.Series(s_dict)
zer = pd.Series(z_dict)
plt.figure(figsize=(20,6))
sns.set(style='whitegrid', font_scale=2.5)
#order = ['No overlap', 'Longer by 1', 'Longer by 2', 'Longer by 3', 'Longer by 4 or more']
ax = sns.distplot(ser, bins=[-100, -80, -60, -40, -20, 0, 20, 40, 60], kde=False, axlabel='$\Delta F_1$', hist_kws=dict(edgecolor="k", linewidth=3))
ax.set_title('Per relation differences in $F_1$ scores on TACRED, RE-Flex vs B-Squad', pad=25)
ax.set(ylabel='# of Relations')
#ax.axlabel('test')
ax.yaxis.labelpad = 25
ax.xaxis.labelpad = 25
#plt.setp(ax.get_legend().get_texts(), fontsize='20')
#plt.setp(ax.get_legend().get_title(), fontsize='20') # for legend title
plt.savefig('reflex_bsquad_tacred.png', bbox_inches='tight')
plt.figure(figsize=(20,6))
sns.set(style='whitegrid', font_scale=2.5)
#order = ['No overlap', 'Longer by 1', 'Longer by 2', 'Longer by 3', 'Longer by 4 or more']
ax = sns.distplot(zer, bins=[-100, -80, -60, -40, -20, 0, 20, 40, 60], kde=False, axlabel='$\Delta F_1$', hist_kws=dict(edgecolor="k", linewidth=3))
ax.set_title('Per relation differences in $F_1$ scores on TACRED, RE-Flex vs B-ZSRE', pad=25)
ax.set(ylabel='# of Relations')
#ax.axlabel('test')
ax.yaxis.labelpad = 25
ax.xaxis.labelpad = 25
#plt.setp(ax.get_legend().get_texts(), fontsize='20')
#plt.setp(ax.get_legend().get_title(), fontsize='20') # for legend title
plt.savefig('reflex_bzsre_tacred.png', bbox_inches='tight')
h = np.histogram(zer, bins=[-100, -80, -60, -40, -20, 0, 20, 40, 60])
h
k = np.histogram(ser, bins=[-100, -80, -60, -40, -20, 0, 20, 40, 60])
k
better_than_z = 16/41
better_than_z
better_than_s = 8/41
better_than_s
within_20_z = 14/120
within_20_z
within_20_s = 31 / 120
within_20_s
```
# How to use backpropagation to train a feedforward NN
This notebook implements a simple single-hidden-layer architecture and the forward-propagation computations using matrix algebra and NumPy, Python's workhorse for linear algebra.
Please follow the installation [instructions](../installation.md).
## Imports & Settings
```
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
from pathlib import Path
from copy import deepcopy
import numpy as np
import pandas as pd
import sklearn
from sklearn.datasets import make_circles
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from mpl_toolkits.mplot3d import Axes3D # 3D plots
import seaborn as sns
# plotting style
sns.set_style('white')
# for reproducibility
np.random.seed(seed=42)
results_path = Path('results')
if not results_path.exists():
results_path.mkdir()
```
## Input Data
### Generate random data
The target `y` represents two classes generated by two circular distributions that are not linearly separable because class 0 surrounds class 1.
We will generate 50,000 random samples in the form of two concentric circles with different radii using scikit-learn's `make_circles` function.
```
# dataset params
N = 50000
factor = 0.1
noise = 0.1
n_iterations = 50000
learning_rate = 0.0001
momentum_factor = .5
# generate data
X, y = make_circles(
n_samples=N,
shuffle=True,
factor=factor,
noise=noise)
# define outcome matrix
Y = np.zeros((N, 2))
for c in [0, 1]:
Y[y == c, c] = 1
```
$$X =\begin{bmatrix}
x_{11} & x_{12} \\
\vdots & \vdots \\
x_{N1} & x_{N2}
\end{bmatrix}
\quad\quad
Y = \begin{bmatrix}
y_{11} & y_{12}\\
\vdots & \vdots \\
y_{N1} & y_{N2}
\end{bmatrix}$$
```
f'Shape of: X: {X.shape} | Y: {Y.shape} | y: {y.shape}'
```
### Visualize Data
```
ax = sns.scatterplot(x=X[:, 0],
y=X[:, 1],
hue=y,
style=y,
markers=['_', '+'])
ax.set_title('Synthetic Classification Data')
sns.despine()
plt.tight_layout()
plt.savefig(results_path / 'ffnn_data', dpi=300);
```
## Neural Network Architecture
### Hidden Layer Activations
The hidden layer $h$ projects the 2D input into a 3D space. To this end, the hidden layer weights are a $2\times3$ matrix $\mathbf{W}^h$, and the hidden layer bias vector $\mathbf{b}^h$ is a 3-dimensional vector:
\begin{align*}
\underset{\scriptscriptstyle 2 \times 3}{\mathbf{W}^h} =
\begin{bmatrix}
w^h_{11} & w^h_{12} & w^h_{13} \\
w^h_{21} & w^h_{22} & w^h_{23}
\end{bmatrix}
&& \underset{\scriptscriptstyle 1 \times 3}{\mathbf{b}^h} =
\begin{bmatrix}
b^h_1 & b^h_2 & b^h_3
\end{bmatrix}
\end{align*}
The output layer values $\mathbf{Z}^h$ result from the dot product of the $N\times\ 2$ input data $\mathbf{X}$ and the $2\times3$ weight matrix $\mathbf{W}^h$ and the addition of the $1\times3$ hidden layer bias vector $\mathbf{b}^h$:
$$\underset{\scriptscriptstyle N \times 3}{\mathbf{Z}^h} = \underset{\scriptscriptstyle N \times 2}{\vphantom{\mathbf{W}^o}\mathbf{X}}\cdot\underset{\scriptscriptstyle 2 \times 3}{\mathbf{W}^h} + \underset{\scriptscriptstyle 1 \times 3}{\mathbf{b}^h}$$
The logistic sigmoid function $\sigma$ applies a non-linear transformation to $\mathbf{Z}^h$ to yield the hidden layer activations as an $N\times3$ matrix:
$$\underset{\scriptscriptstyle N \times 3}{\mathbf{H}} = \sigma(\mathbf{X} \cdot \mathbf{W}^h + \mathbf{b}^h) = \frac{1}{1+e^{-(\mathbf{X} \cdot \mathbf{W}^h + \mathbf{b}^h)}} = \begin{bmatrix}
h_{11} & h_{12} & h_{13} \\
\vdots & \vdots & \vdots \\
h_{N1} & h_{N2} & h_{N3}
\end{bmatrix}$$
```
def logistic(z):
"""Logistic function."""
return 1 / (1 + np.exp(-z))
def hidden_layer(input_data, weights, bias):
"""Compute hidden activations"""
return logistic(input_data @ weights + bias)
```
### Output Activations
The values $\mathbf{Z}^o$ for the output layer $o$ form an $N\times2$ matrix that results from the dot product of the $\underset{\scriptscriptstyle N \times 3}{\mathbf{H}}$ hidden layer activation matrix with the $3\times2$ output weight matrix $\mathbf{W}^o$ and the addition of the $1\times2$ output bias vector $\mathbf{b}^o$:
$$\underset{\scriptscriptstyle N \times 2}{\mathbf{Z}^o} = \underset{\scriptscriptstyle N \times 3}{\vphantom{\mathbf{W}^o}\mathbf{H}}\cdot\underset{\scriptscriptstyle 3 \times 2}{\mathbf{W}^o} + \underset{\scriptscriptstyle 1 \times 2}{\mathbf{b}^o}$$
The Softmax function $\varsigma$ squashes the unnormalized probabilities predicted for each class to lie within $[0, 1]$ and sum to 1. The result is a $N\times2$ matrix with one column for each output class.
$$\underset{\scriptscriptstyle N \times 2}{\mathbf{\hat{Y}}}
= \varsigma(\mathbf{H} \cdot \mathbf{W}^o + \mathbf{b}^o)
= \frac{e^{Z^o}}{\sum_{c=1}^C e^{\mathbf{z}^o_c}}
= \frac{e^{H \cdot W^o + \mathbf{b}^o}}{\sum_{c=1}^C e^{H \cdot \mathbf{w}^o_c + b^o_c}}
= \begin{bmatrix}
\hat{y}_{11} & \hat{y}_{12}\\
\vdots & \vdots \\
\hat{y}_{n1} & \hat{y}_{n2}
\end{bmatrix}$$
```
def softmax(z):
"""Softmax function (row max subtracted for numerical stability)"""
e = np.exp(z - np.max(z, axis=1, keepdims=True))
return e / np.sum(e, axis=1, keepdims=True)
def output_layer(hidden_activations, weights, bias):
"""Compute the output y_hat"""
return softmax(hidden_activations @ weights + bias)
```
### Forward Propagation
The `forward_prop` function combines the previous operations to yield the output activations from the input data as a function of weights and biases. The `predict` function produces the binary class predictions given weights, biases, and input data.
```
def forward_prop(data, hidden_weights, hidden_bias, output_weights, output_bias):
"""Neural network as function."""
hidden_activations = hidden_layer(data, hidden_weights, hidden_bias)
return output_layer(hidden_activations, output_weights, output_bias)
def predict(data, hidden_weights, hidden_bias, output_weights, output_bias):
"""Predicts class 0 or 1"""
y_pred_proba = forward_prop(data,
hidden_weights,
hidden_bias,
output_weights,
output_bias)
return np.around(y_pred_proba)
```
### Cross-Entropy Loss
The cost function $J$ uses the cross-entropy loss $\xi$, which sums, over all samples $i=1,\dots,N$ and classes $c=1,\dots,C$, the deviations of the predictions $\hat{y}_{ic}$ from the actual outcomes $y_{ic}$.
$$J(\mathbf{Y},\mathbf{\hat{Y}}) = \sum_{i=1}^N \xi(\mathbf{y}_i,\mathbf{\hat{y}}_i) = - \sum_{i=1}^N \sum_{c=1}^{C} y_{ic} \cdot \log(\hat{y}_{ic})$$
```
def loss(y_hat, y_true):
"""Cross-entropy"""
return - (y_true * np.log(y_hat)).sum()
```
## Backpropagation
Backpropagation updates parameters values based on the partial derivative of the loss with respect to that parameter, computed using the chain rule.
### Loss Function Gradient
The derivative of the loss function $J$ with respect to each output layer activation $\varsigma(\mathbf{Z}^o_i), i=1,...,N$, is a very simple expression:
$$\frac{\partial J}{\partial z^o_i} = \delta^o_i = \hat{y}_i-y_i$$
See [here](https://math.stackexchange.com/questions/945871/derivative-of-softmax-loss-function) and [here](https://deepnotes.io/softmax-crossentropy) for details on derivation.
```
def loss_gradient(y_hat, y_true):
"""output layer gradient"""
return y_hat - y_true
```
### Output Layer Gradients
#### Output Weight Gradients
To propagate the updates back to the output layer weights, we take the partial derivative of the loss function with respect to the weight matrix:
$$
\frac{\partial J}{\partial \mathbf{W}^o} = H^T \cdot (\mathbf{\hat{Y}}-\mathbf{Y}) = H^T \cdot \delta^{o}
$$
```
def output_weight_gradient(H, loss_grad):
"""Gradients for the output layer weights"""
return H.T @ loss_grad
```
#### Output Bias Update
To update the output layer bias values, we similarly apply the chain rule to obtain the partial derivative of the loss function with respect to the bias vector:
$$\frac{\partial J}{\partial \mathbf{b}_{o}}
= \frac{\partial \xi}{\partial \mathbf{\hat{Y}}} \frac{\partial \mathbf{\hat{Y}}}{\partial \mathbf{Z}^o} \frac{\partial \mathbf{Z}^{o}}{\partial \mathbf{b}^o}
= \sum_{i=1}^N 1 \cdot (\mathbf{\hat{y}}_i - \mathbf{y}_i)
= \sum_{i=1}^N \delta_{oi}$$
```
def output_bias_gradient(loss_grad):
"""Gradients for the output layer bias"""
return np.sum(loss_grad, axis=0, keepdims=True)
```
### Hidden layer gradients
$$\delta_{h}
= \frac{\partial J}{\partial \mathbf{Z}^h}
= \frac{\partial J}{\partial \mathbf{H}} \frac{\partial \mathbf{H}}{\partial \mathbf{Z}^h}
= \frac{\partial J}{\partial \mathbf{Z}^o} \frac{\partial \mathbf{Z}^o}{\partial H} \frac{\partial H}{\partial \mathbf{Z}^h}$$
```
def hidden_layer_gradient(H, out_weights, loss_grad):
"""Error at the hidden layer.
H * (1-H) * (E . Wo^T)"""
return H * (1 - H) * (loss_grad @ out_weights.T)
```
#### Hidden Weight Gradient
$$
\frac{\partial J}{\partial \mathbf{W}^h} = \mathbf{X}^T \cdot \delta^{h}
$$
```
def hidden_weight_gradient(X, hidden_layer_grad):
"""Gradient for the weight parameters at the hidden layer"""
return X.T @ hidden_layer_grad
```
#### Hidden Bias Gradient
$$
\frac{\partial \xi}{\partial \mathbf{b}_{h}}
= \frac{\partial \xi}{\partial H} \frac{\partial H}{\partial Z_{h}} \frac{\partial Z_{h}}{\partial \mathbf{b}_{h}}
= \sum_{j=1}^N \delta_{hj}
$$
```
def hidden_bias_gradient(hidden_layer_grad):
"""Gradient for the bias parameters at the hidden layer"""
return np.sum(hidden_layer_grad, axis=0, keepdims=True)
```
## Initialize Weights
```
def initialize_weights():
"""Initialize hidden and output weights and biases"""
# Initialize hidden layer parameters
hidden_weights = np.random.randn(2, 3)
hidden_bias = np.random.randn(1, 3)
# Initialize output layer parameters
output_weights = np.random.randn(3, 2)
output_bias = np.random.randn(1, 2)
return hidden_weights, hidden_bias, output_weights, output_bias
```
## Compute Gradients
```
def compute_gradients(X, y_true, w_h, b_h, w_o, b_o):
"""Evaluate gradients for parameter updates"""
# Compute hidden and output layer activations
hidden_activations = hidden_layer(X, w_h, b_h)
y_hat = output_layer(hidden_activations, w_o, b_o)
# Compute the output layer gradients
loss_grad = loss_gradient(y_hat, y_true)
out_weight_grad = output_weight_gradient(hidden_activations, loss_grad)
out_bias_grad = output_bias_gradient(loss_grad)
# Compute the hidden layer gradients
hidden_layer_grad = hidden_layer_gradient(hidden_activations, w_o, loss_grad)
hidden_weight_grad = hidden_weight_gradient(X, hidden_layer_grad)
hidden_bias_grad = hidden_bias_gradient(hidden_layer_grad)
return [hidden_weight_grad, hidden_bias_grad, out_weight_grad, out_bias_grad]
```
## Check Gradients
It's easy to make mistakes with the numerous inputs to the backpropagation algorithm. A simple way to test for accuracy is to compare the change in the output for slightly perturbed parameter values with the change implied by the computed gradient (see [here](http://ufldl.stanford.edu/wiki/index.php/Gradient_checking_and_advanced_optimization) for more detail).
```
# change individual parameters by +/- eps
eps = 1e-4
# initialize weights and biases
params = initialize_weights()
# Get all parameter gradients
grad_params = compute_gradients(X, Y, *params)
# Check each parameter matrix
for i, param in enumerate(params):
# Check each matrix entry
rows, cols = param.shape
for row in range(rows):
for col in range(cols):
# change current entry by +/- eps
params_low = deepcopy(params)
params_low[i][row, col] -= eps
params_high = deepcopy(params)
params_high[i][row, col] += eps
# Compute the numerical gradient
loss_high = loss(forward_prop(X, *params_high), Y)
loss_low = loss(forward_prop(X, *params_low), Y)
numerical_gradient = (loss_high - loss_low) / (2 * eps)
backprop_gradient = grad_params[i][row, col]
# Raise error if numerical and backprop gradient differ
assert np.allclose(numerical_gradient, backprop_gradient), (
f'Numerical gradient of {numerical_gradient:.6f} not close to '
f'backprop gradient of {backprop_gradient:.6f}!')
print('No gradient errors found')
```
## Train Network
```
def update_momentum(X, y_true, param_list,
Ms, momentum_term,
learning_rate):
"""Update the momentum matrices."""
# param_list = [hidden_weight, hidden_bias, out_weight, out_bias]
# gradients = [hidden_weight_grad, hidden_bias_grad,
# out_weight_grad, out_bias_grad]
gradients = compute_gradients(X, y_true, *param_list)
return [momentum_term * momentum - learning_rate * grads
for momentum, grads in zip(Ms, gradients)]
def update_params(param_list, Ms):
"""Update the parameters."""
# param_list = [Wh, bh, Wo, bo]
# Ms = [MWh, Mbh, MWo, Mbo]
return [P + M for P, M in zip(param_list, Ms)]
def train_network(iterations=1000, lr=.01, mf=.1):
# Initialize weights and biases
param_list = list(initialize_weights())
# Momentum Matrices = [MWh, Mbh, MWo, Mbo]
Ms = [np.zeros_like(M) for M in param_list]
train_loss = [loss(forward_prop(X, *param_list), Y)]
for i in range(iterations):
if i % 1000 == 0: print(f'{i:,d}', end=' ', flush=True)
# Update the moments and the parameters
Ms = update_momentum(X, Y, param_list, Ms, mf, lr)
param_list = update_params(param_list, Ms)
train_loss.append(loss(forward_prop(X, *param_list), Y))
return param_list, train_loss
# n_iterations = 20000
# results = {}
# for learning_rate in [.01, .02, .05, .1, .25]:
# for momentum_factor in [0, .01, .05, .1, .5]:
# print(learning_rate, momentum_factor)
# trained_params, train_loss = train_network(iterations=n_iterations, lr=learning_rate, mf=momentum_factor)
# results[(learning_rate, momentum_factor)] = train_loss[::1000]
trained_params, train_loss = train_network(iterations=n_iterations,
lr=learning_rate,
mf=momentum_factor)
hidden_weights, hidden_bias, output_weights, output_bias = trained_params
```
### Plot Training Loss
This plot displays the training loss over 50K iterations for 50K training samples with a momentum term of 0.5 and a learning rate of 1e-4.
It shows that it takes over 5K iterations for the loss to start to decline, but then it does so very quickly. We have not used stochastic gradient descent, which would likely have accelerated convergence significantly.
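The momentum rule applied by `update_momentum`/`update_params` above (`M ← mf·M − lr·∇J`, then `params ← params + M`) can be illustrated in isolation. This self-contained sketch uses a made-up quadratic objective in place of the network loss; with mini-batches, only the gradient estimate per step would change, not the update rule.

```python
import numpy as np

# Toy illustration of gradient descent with momentum on f(w) = ||w||^2 (gradient 2w)
rng = np.random.default_rng(0)
w = rng.normal(size=3)       # random starting point
M = np.zeros_like(w)         # momentum "matrix", as in the training loop above
lr, mf = 0.1, 0.5
for _ in range(200):
    grad = 2 * w
    M = mf * M - lr * grad   # update the momentum
    w = w + M                # update the parameters
print(np.linalg.norm(w) < 1e-6)  # True: momentum descent reached the minimum at 0
```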
```
ax = pd.Series(train_loss).plot(figsize=(12, 3), title='Loss per Iteration', xlim=(0, n_iterations), logy=True)
ax.set_xlabel('Iteration')
ax.set_ylabel('$\\log \\xi$', fontsize=12)
sns.despine()
plt.tight_layout()
plt.savefig(results_path / 'ffnn_loss', dpi=300)
```
## Decision Boundary
The following plots show the function learned by the neural network with a three-dimensional hidden layer from two-dimensional data with two classes that are not linearly separable. The decision boundary misclassifies very few data points and would improve further with continued training.
The second plot shows the representation of the input data learned by the hidden layer. The network learns hidden layer weights so that the projection of the input from two to three dimensions enables the linear separation of the two classes.
The last plot shows how the output layer implements the linear separation in the form of a cutoff value of 0.5 in the output dimension.
```
n_vals = 200
x1 = np.linspace(-1.5, 1.5, num=n_vals)
x2 = np.linspace(-1.5, 1.5, num=n_vals)
xx, yy = np.meshgrid(x1, x2) # create the grid
# Initialize and fill the feature space
feature_space = np.zeros((n_vals, n_vals))
for i in range(n_vals):
for j in range(n_vals):
X_ = np.asarray([xx[i, j], yy[i, j]])
feature_space[i, j] = np.argmax(predict(X_, *trained_params))
# Create a color map to show the classification colors of each grid point
cmap = ListedColormap([sns.xkcd_rgb["pale red"],
sns.xkcd_rgb["denim blue"]])
# Plot the classification plane with decision boundary and input samples
plt.contourf(xx, yy, feature_space, cmap=cmap, alpha=.25)
# Plot both classes on the x1, x2 plane
data = pd.DataFrame(X, columns=['$x_1$', '$x_2$']).assign(Class=pd.Series(y).map({0:'negative', 1:'positive'}))
sns.scatterplot(x='$x_1$', y='$x_2$', hue='Class', data=data, style=y, markers=['_', '+'], legend=False)
plt.title('Decision Boundary')
sns.despine()
plt.tight_layout()
plt.savefig(results_path / 'boundary', dpi=300);
```
## Projection on Hidden Layer
```
n_vals = 25
x1 = np.linspace(-1.5, 1.5, num=n_vals)
x2 = np.linspace(-1.5, 1.5, num=n_vals)
xx, yy = np.meshgrid(x1, x2) # create the grid
X_ = np.array([xx.ravel(), yy.ravel()]).T
fig = plt.figure(figsize=(6, 4))
with sns.axes_style("whitegrid"):
ax = fig.add_subplot(projection='3d')  # direct Axes3D(fig) construction is deprecated in recent Matplotlib
ax.plot(*hidden_layer(X[y == 0], hidden_weights, hidden_bias).T,
'_', label='negative class', alpha=0.75)
ax.plot(*hidden_layer(X[y == 1], hidden_weights, hidden_bias).T,
'+', label='positive class', alpha=0.75)
ax.set_xlabel('$h_1$', fontsize=12)
ax.set_ylabel('$h_2$', fontsize=12)
ax.set_zlabel('$h_3$', fontsize=12)
ax.view_init(elev=30, azim=-20)
# plt.legend(loc='best')
plt.title('Projection of X onto the hidden layer H')
sns.despine()
plt.tight_layout()
plt.savefig(results_path / 'projection3d', dpi=300)
```
## Network Output Surface Plot
```
zz = forward_prop(X_, hidden_weights, hidden_bias, output_weights, output_bias)[:, 1].reshape(25, -1)
zz.shape
fig = plt.figure()
with sns.axes_style("whitegrid"):
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in Matplotlib 3.6
ax.plot_surface(xx, yy, zz, alpha=.25)
ax.set_title('Learned Function')
ax.set_xlabel('$x_1$', fontsize=12)
ax.set_ylabel('$x_2$', fontsize=12)
ax.set_zlabel('$y$', fontsize=12)
ax.view_init(elev=45, azim=-20)
sns.despine()
fig.tight_layout()
fig.savefig(results_path / 'surface', dpi=300);
```
## Summary
To sum up: we have seen how a very simple network with a single hidden layer with three nodes and a total of 17 parameters is able to learn how to solve a non-linear classification problem using backprop and gradient descent with momentum.
We will next review key design choices useful to design and train more complex architectures before we turn to popular deep learning libraries that facilitate the process by providing many of these building blocks and automating the differentiation process to compute the gradients and implement backpropagation.
# Computing Differential Privacy budget with PyTorch-DP
## Introduction and initial settings
Our purpose here is to show how to directly obtain a Differential Privacy budget for Stochastic Gradient Descent with a simple, handy tool; then to link these results to their underlying justifications; and finally to show the impact of different parameters on the privacy guarantee.
The reader is assumed to have basic knowledge of Deep Learning, Stochastic Gradient Descent (**SGD**) and Differential Privacy (**DP**).
---
*To run, this notebook is supposed to sit in the `pytorch-dp/docs/` directory; see the [GitHub of PyTorch-DP](https://github.com/facebookresearch/pytorch-dp/tree/master/) to fetch the code. If not, please adapt the last two `import` statements below.*
*To begin with, please **execute the two following Python cells** to get a correct environment.*
---
```
# == Usual basics ==
import math
import numpy as np
# == Graphic tools ==
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# To adapt matplotlib graphs to the width of your screen
# (willingly in a different cell than the next one, see below...)
_DPI = plt.rcParams['figure.dpi']
# == From pytorch-dp, for DP accounting ==
# From local files if this notebook is in
# pytorch-dp/docs/
# with an ugly trick
import sys
import os
scripts_dir = os.path.abspath('../torchdp/scripts/')
if scripts_dir not in sys.path:
sys.path.insert(0, scripts_dir)
scripts_dir = os.path.abspath('../')
if scripts_dir not in sys.path:
sys.path.insert(0, scripts_dir)
from torchdp import privacy_analysis as tf_privacy
import compute_dp_sgd_privacy as co
# Optional adaptation to your screen (for matplotlib plots)
# (It doesn't take place in the same cell as above, to avoid
# cumulating zooms when iterating modifications on _zoom value)
# == Your zoom factor (THE ONLY VALUE TO MODIFY) ==
_zoom = 1.2
# Modify
plt.rcParams['figure.dpi'] = int(_DPI * _zoom)
# Used to restore the original zoom value if a crash occurs
# in a cell where the scale has been modified.
dim_memo = plt.rcParams['figure.figsize']
```
### <span id="toc">↳ Here's the plan</span>
* [A short intro: practical examples of direct privacy budget calculation.](#i)
* [Theoretical grounds and PyTorch-DP functions to compute this budget.](#ii)
* [What does DP protection depend on?](#iii)
---
## <span id="i">A short demo to begin with...</span> [⌂](#toc)
---
Let's suppose you want to train a Deep Learning model (network) using **Differentially Private Stochastic Gradient Descent**. You have set an acceptable value for $\delta$ (in the $(\varepsilon,\ \delta)$-DP sense), and now you want the best associated $\varepsilon$ for a given training run. As this learning step is quite slow and consumes the privacy budget, it's useful to estimate the budget independently, before running it. Let's see how to use PyTorch-DP's tools for that purpose.
Your parameters could be:
* training dataset size = 15000.
* batch size = 250.
* noise multiplier $\sigma$ for Gaussian noise = 1.3. This means that if the gradient is clipped so that its $\ell_2$ norm is at most $C$, then the standard deviation of the normal noise added is $C\sigma$ (see the next part).
* number of epochs = 15.
* targeted $\delta$ = $10^{-5}$.
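The clipping-and-noising step these parameters describe can be sketched in a few lines. The following is a minimal NumPy illustration of the per-batch aggregation in Algorithm 1 of [A16], with made-up per-example gradients; it is not PyTorch-DP's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_aggregate(per_example_grads, C=1.0, sigma=1.3):
    """Clip each per-example gradient to l2 norm <= C, sum them,
    add N(0, (C*sigma)^2) noise to each coordinate, and average over the batch."""
    clipped = [g / max(1.0, np.linalg.norm(g) / C) for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, C * sigma, size=clipped[0].shape)
    return noisy_sum / len(per_example_grads)

# One toy batch of 250 per-example gradients for a 5-parameter model
grads = [rng.normal(size=5) for _ in range(250)]
update = dp_sgd_aggregate(grads, C=1.0, sigma=1.3)
print(update.shape)  # (5,)
```

Clipping bounds each example's influence on the sum ($\ell_2$-sensitivity $C$), which is exactly what makes the added $C\sigma$ Gaussian noise yield the privacy guarantee discussed below.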
### ↳Pure Python approach
The first possibility consists in calling `compute_dp_sgd_privacy()` directly. *We built this function from the similar one [in the TensorFlow Privacy project](https://github.com/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/analysis/compute_dp_sgd_privacy.py). The associated "RDP order" is mentioned below as "optimal $\alpha$", which is explained later*.
```
############################################################
# HERE ARE SOME DEFAULT VALUES FOR THE DP-SGD. #
# THEY CAN BE MODIFIED, BUT BE AWARE THAT FOLLOWING CELLS #
# ARE MAINLY SUPPOSED TO BE EXECUTED SEQUENTIALLY. #
############################################################
dataset_size, batch_size = 10000, 4
noise_multiplier, epochs, delta = 0.8, 10, 1e-5
my_alphas = [1 + x/10.0 for x in range(1, 100)] + list(range(12, 64))
co.compute_dp_sgd_privacy(dataset_size,
batch_size,
noise_multiplier,
epochs,
delta,
alphas=my_alphas
)
# Arguments you may also experiment with:
# . optional `printed` boolean (default True):
#   print the results, or else just return the (Ɛ, α) tuple.
# . coarser values, e.g. `alphas=[10, 20, 30]`,
#   to see the answer note that the estimate could be improved.
```
### ↳As a Shell script
It is also possible to call `compute_dp_sgd_privacy.py` directly __in a shell__, with arguments given on the command line (if necessary, try **`-h`** for details about them; a default list of $\alpha$ coefficients is provided but can be overridden).
> *The `!` in first position below allows a shell command execution from the Python notebook. Don't use it in a genuine shell.*
```
!/usr/bin/env python3 ../torchdp/scripts/compute_dp_sgd_privacy.py -s 10000 -b 4 -n 0.8 -e 10 -d 1e-5
```
Under the hood, `compute_dp_sgd_privacy()` uses the same tools you could use in your own PyTorch model to transform it into a Differentially Private one, if its training phase is based on Stochastic Gradient Descent. And then, you are certainly interested in how the previous values are determined, so let's go...
>As far as I know, one of the main references about privacy with SGD is **"_Deep Learning with Differential Privacy_"** (**Abadi** _et al._, **2016**, **[arXiv:1607.00133](https://arxiv.org/pdf/1607.00133)** _alias_ **[A16]** in the following); see **Algorithm 1** in **section 3.1** for the details about how to clip the gradient, then add random noise, to get privacy.
For his part, **Nicolas Papernot** wrote an easy to read article (on [tensorflow/privacy GitHub](github.com/tensorflow/privacy/tree/master/tutorials/walkthrough)), clearly explaining the vanilla SGD and then how to transform it into a Differentially Private algorithm. His paper's snippets are based on TensorFlow, not PyTorch, but _pytorch-dp_ has roots in _TensorFlow Privacy_ (some portions of code are common), so with the exception of some differences in code extracts, the explanations in this article are still relevant.
I also love the **[3blue1brown Youtube videos](https://www.youtube.com/watch?v=aircAruvnKk)** about the basics of deep learning.
*OK that's all about ads :wink: Here are some explanations...*
---
## <span id="ii">Let's dive into the magic of Differential Privacy</span> [⌂](#toc)
---
### ↳Rényi DP ~~*vs*~~ serving $(\varepsilon, \delta)$-DP
Indeed, both formalizations of Differential Privacy are involved here: **the result is expressed in $(\varepsilon, \delta)$-DP terms**, which is the *de facto* standard for expressing DP. In addition, it is quite understandable (*say, better than more complicated ways of expressing DP* :smile: !).
One could compute the consumption of "privacy budget" for each loop (each application of the Gaussian mechanism, i.e. adding normal noise) and add the relevant $\varepsilon$ values thanks to one of the *"(advanced) composition theorems"*. But then, the obtained bound is not tight and the privacy guarantee is too expensive in terms of utility (too much noise has to be added to get it).
That is why a new approach is used *to compute the DP budget*, which allows tighter bounds on the guaranteed value of $\varepsilon$. It is based on **Rényi-DP of order $\alpha$** (**RDP**), generally expressed in $(\alpha, \varepsilon)$-RDP terms. Note that we write $(\alpha, \varepsilon_\mathbf{\text{rdp}})$-RDP in this notebook, to distinguish the two different $\varepsilon$ coefficients, and especially since $\varepsilon_\text{rdp}$ is represented by the `rdp` variable in *pytorch-dp*'s code.
But despite many advantages, RDP remains more difficult to interpret, which is why it is *finally translated back into $(\varepsilon, \delta)$-DP terms*. Fortunately, RDP is also compatible with composition:
>In **"*Rényi Differential Privacy*"** (**Mironov** *et al.*, **2017**, **[arXiv:1702.07476](https://arxiv.org/pdf/1702.07476)**) *alias* **[M17]**, **Proposition 1** claims that
>The composition of an $(\alpha, \varepsilon_{\text{rdp}})$-RDP mechanism and an $(\alpha, \varepsilon'_{\text{rdp}})$-RDP one is $(\alpha, \varepsilon_{\text{rdp}}+\varepsilon'_{\text{rdp}})$-RDP, for any $\alpha>1$.
The idea of the *moments accountant* used to calculate this budget is due to **[A16]**, but it's certainly easier to understand in **[M17]**, thanks to a new formalization widely adopted today. As said before, the RDP privacy budget eventually has to be expressed in the $(\varepsilon, \delta)$-DP formalism.
>In **[M17]**, **Proposition 3** claims that
>If a randomized mechanism is $(\alpha, \varepsilon_\text{rdp})$-RDP, then for all $0<\delta<1$ it is also $(\varepsilon, \delta)$-DP where $\varepsilon = \varepsilon_\text{rdp} + \frac{\ln(1/\delta)}{\alpha-1} = \varepsilon_\text{rdp} - \frac{\ln(\delta)}{\alpha-1}$.
Thereby, we get the needed transformation from $(\alpha, \varepsilon_\text{rdp})$-RDP to $(\varepsilon, \delta)$-DP values. But we still need to know how to get $\varepsilon_\text{rdp}$!
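As a quick sanity check, the conversion in Proposition 3 is a one-liner. The following sketch (plain Python, with names of our own choosing) applies it to illustrative values:

```python
import math

def rdp_to_dp(alpha, eps_rdp, delta):
    # Proposition 3 of [M17]: eps = eps_rdp + ln(1/delta) / (alpha - 1)
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1)

# Example: an (alpha=10, eps_rdp=0.5)-RDP mechanism, converted for delta = 1e-5
eps = rdp_to_dp(alpha=10, eps_rdp=0.5, delta=1e-5)
print(f"(ε, δ)-DP guarantee: ε ≈ {eps:.4g} for δ = 1e-5")
```

Note that the $\frac{\ln(1/\delta)}{\alpha-1}$ term shrinks as $\alpha$ grows, which is one reason several orders are tested later on.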
### ↳Computing one $\varepsilon_\text{rdp}$ value
> * **For one single shot** (`q == 1.0` case in code, where `q` denotes the sampling ratio)
> **Corollary 3** of **Proposition 7** in **[M17]** claims that
>
> If the $\ell_2$-sensitivity of a function $f$ is $1$, then the $\mathbf{G}_{\sigma}\,f$ Gaussian mechanism of $f$ (i.e. addition of normal noise with standard deviation $\sigma$ to the image of $f$) is $(\alpha, \varepsilon_\text{rdp})$-RDP for: $\quad\varepsilon_\text{rdp} = \frac{\alpha}{2\sigma^2}$.
>
> An analogous demonstration shows that if $f$ has an $\ell_2$-sensitivity of $C$, the same guarantee is provided by adding noise with standard deviation $C\sigma$. See [this confirmation](https://github.com/facebookresearch/pytorch-dp/issues/11) by Ilya Mironov. That is why $\sigma$ stands for the "noise parameter" in this context.
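A back-of-the-envelope computation of the one-shot budget follows directly from the corollary above (a sketch with illustrative values of $\alpha$ and $\sigma$, not pytorch-dp code):

```python
def gaussian_rdp(alpha, sigma):
    # Corollary 3 of [M17]: eps_rdp = alpha / (2 sigma^2)
    # for the Gaussian mechanism applied to a function of l2-sensitivity 1
    return alpha / (2.0 * sigma ** 2)

alpha, sigma = 10, 1.1
eps_one_shot = gaussian_rdp(alpha, sigma)
# Proposition 1 (composition): without sampling, T applications just add up
eps_900_steps = 900 * eps_one_shot
print(eps_one_shot, eps_900_steps)
```

The huge budget after 900 naive compositions is exactly why the sampled analysis below matters.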
RDP can also account for the privacy amplification provided by sampling.
> * **With sampling**
>
> **Theorem 4** and **Corollary 7** of **Theorem 5** in (**"*Rényi Differential Privacy of the Sampled Gaussian Mechanism*"**, **Mironov**, **2019**, **[arXiv:1908.10530](https://arxiv.org/pdf/1908.10530)**) *alias* **[M19]** claim that
>
> The Sampled Gaussian Mechanism $\mathbf{SGM}_\sigma\, f$ with sample rate $q$ is $(\alpha, \varepsilon_\text{rdp})$-RDP, for all $\varepsilon_\text{rdp} \geqslant \frac{1}{\alpha-1} \ln A_\alpha$ (*then, the best value for us is* $\varepsilon_\text{rdp} = \frac{1}{\alpha-1} \ln A_\alpha$), where $A_\alpha$ is a real value depending on $\alpha$ and $q$.
>
> More precisely (*see* **section 3.3** *of* **[M19]**)
> * when $\alpha$ is an integer, $A_\alpha = \sum_{k=0}^{\alpha} \binom{\alpha}{k} (1-q)^{\alpha-k} q^k \exp(\frac{k^2-k}{2\sigma^2})$.
> * when it is fractional, its value is computed by adding the approximations of two convergent series, which is not detailed here. If needed, analyse the code of `_compute_log_a(q, sigma, alpha)` in the `torchdp/` directory *(the operations are done in log-arithmetic space, to overcome floating-point representation limits, see the first functions in `privacy_analysis`)...*
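For integer orders, the closed-form sum above is easy to reproduce. Here is a naive sketch (our own helper, deliberately *not* the log-space code of pytorch-dp, so it can overflow for large $\alpha/\sigma$):

```python
from math import comb, exp, log

def sgm_rdp_int(q, sigma, alpha):
    # A_alpha for an *integer* order alpha, per section 3.3 of [M19]
    A = sum(
        comb(alpha, k) * (1 - q) ** (alpha - k) * q ** k
        * exp((k * k - k) / (2 * sigma ** 2))
        for k in range(alpha + 1)
    )
    return log(A) / (alpha - 1)  # eps_rdp = ln(A_alpha) / (alpha - 1)

# Sanity check: with q = 1 (no sampling) we recover alpha / (2 sigma^2)
print(sgm_rdp_int(q=1.0, sigma=1.1, alpha=10))
# With a small sampling rate, the per-step budget is much smaller
print(sgm_rdp_int(q=0.01, sigma=1.1, alpha=10))
```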
If you're interested in the Python coding in Pytorch-DP, have a look at the "internal" function `_compute_rdp()` of `privacy_analysis.py`, which calculates $\varepsilon_\text{rdp}$ for one step of the Sampled Gaussian mechanism, from the sampling rate, a given order $\alpha$ and a "noise multiplier". It is the core of the public function `compute_rdp()` (named with no `_` prefix), in which the number of steps is indicated.
Here's **an example using Pytorch-DP functions**:
```
# Sample rate
q = batch_size / dataset_size
# Example of RDP order
alpha = 10
# Number of iterations
steps = 900
# For one step, the "private" function operating under the hood is
ln_A_alpha = tf_privacy._compute_rdp(q, noise_multiplier, alpha)
print(f'With the "private" _compute_rdp() function, for α = {alpha}:')
print(f' ε_rdp = ln(A_α) / (α - 1) = {ln_A_alpha:.4g}.')
# The associated "public" function to be called is
# . again, for one step
ln_A_alpha = tf_privacy.compute_rdp(q, noise_multiplier, 1, alpha)
print(f'\nAgain, with the public function compute_rdp() '
f'we get for a single iteration with α = {alpha}:')
print(f' ε_rdp = ln(A_α) / (α - 1) = {ln_A_alpha:.4g}.')
# . or more than one step like in SGM
steps = int(math.ceil(epochs * dataset_size / batch_size))
ln_A_alpha = tf_privacy.compute_rdp(q, noise_multiplier, steps, alpha)
print(f"And for {steps} iterations with α = {alpha},"
f"\n ε'_rdp = {steps} ε_rdp = {ln_A_alpha:.4g}. ")
```
### ↳In quest of the optimal $\varepsilon_\text{rdp}$
We won't detail RDP theory here, but remember that a given configuration yields a "budget *curve*": an infinity of $(\alpha, \varepsilon)$ pairs that can be seen as coordinates, instead of the *unique* best $\varepsilon$ of $(\varepsilon, \delta)$-DP.
The optimal $\varepsilon_\text{rdp}$ chosen is simply the one mapped to the lowest corresponding $\varepsilon$. The associated RDP order is the "optimal $\alpha$" mentioned in the demo example. There are infinitely many points on the "budget curve", i.e. possible values of $\alpha$, but in practice testing a well-selected list of orders provides good results.
In practice, **`get_privacy_spent()`** of `privacy_analysis` aka `tf_privacy` is dedicated to this task:
```
# A selection of orders, like the one in `compute_dp_sgd_privacy()`
orders = (
[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))
)
# Compute ε_rdp (`rdp` tensor) for each order
rdp = tf_privacy.compute_rdp(q, noise_multiplier, steps, orders)
# Choose the best, then return its associated ε
# in terms of (ε,δ)-DP, and the corresponding order α.
delta = 1e-5
epsilon, optimal_order = tf_privacy.get_privacy_spent(orders, rdp, delta)
print(f'Best guarantee: (ε, δ)-DP = ({epsilon:0.4g}, {delta})-DP,'
f'obtained with α = {optimal_order} RDP-order.')
```
### ↳Graphical illustration
```
# ===== One ε for each α (`epsilons`, `orders` are tensors) ======
epsilons = np.zeros_like(orders, dtype=float)
for i, alpha in enumerate(orders):
rdp_eps = tf_privacy.compute_rdp(q, noise_multiplier, steps, alpha)
epsilons[i] = rdp_eps - np.log(delta) / (alpha - 1)
# ===== Get the optimal ε index ======
min_index = np.nanargmin(epsilons) # min ignoring NaN values
# ===== Graphs =====
# *** Wider scale for two graphs ***
local_zoom = np.array([1.5, 1.0])
plt.rcParams['figure.figsize'] = dim_memo * local_zoom
fig = plt.figure()
# Global title separated from subtitles
plt.gcf().subplots_adjust(top = 0.85, wspace=0.3)
fig.suptitle(f"(ε,δ)-DP guaranteed by (α,ε')-RDP: "
f"ε = f(α) for δ={delta}", size='x-large')
ax = fig.add_subplot(121) # 1 row × 2 col, #1
ax2 = fig.add_subplot(122)
# == Left side, whole view, semi-log ==
ax.semilogy(orders, epsilons, marker='.', markersize=0.1,
ls = '--', linewidth=0.5, color='g')
ax.scatter(orders[min_index], epsilons[min_index], marker='.', c='r')
ax.text(orders[min_index], epsilons[min_index],f" Best ε", c="red")
ax.set(title='Global, SEMI-LOG', xlabel='α', ylabel='ε')
ax.grid(color='gray', alpha=0.2, ls='--')
ax.grid(color='gray', which='minor', alpha=0.2, ls=':')
# == Right side, closer view ==
# Drawing the ε = f(α) curve, near the optimum
a = max(0, min_index - 4)
b = min(len(orders)-1, min_index + 4)
ax2.plot(orders[a:b], epsilons[a:b], marker='.', markersize=3,
ls = '--', linewidth=0.5, color='g')
ax2.scatter(orders[min_index], epsilons[min_index], marker='o', c='r')
ax2.text(orders[min_index], epsilons[min_index],
f"Best ε\n={epsilons[min_index]:.4g}\n", color="red")
ax2.set(title='ZOOM IN near optimal point', xlabel='α', ylabel='ε')
ax2.grid(color='gray', alpha=0.2, ls='--')
# *** Restore default dimensions ***
plt.rcParams['figure.figsize'] = dim_memo
```
---
# <span id="iii">What does the privacy guarantee depend on ?</span> [⌂](#toc)
---
Let us illustrate how the ε-cost depends on different data and parameters (*varying one while keeping the others at the function's default values*).
### ↳Size of the training set
It's not usually a matter of choice, but it's still relevant to confirm that datasets that are too small make it difficult to guarantee enough privacy.
```
def eps_wrt_size(sz):
"""
Returns epsilon parameter w.r.t. the `sz` training set size.
"""
return co.compute_dp_sgd_privacy(sz,
batch_size,
noise_multiplier,
epochs,
delta,
alphas=my_alphas,
printed=False
)[0]
# ===== Training set sizes =====
interval = list(range(500, 50_500, 10_000))
interval += list(range(50_000, 500_500, 50_000))
# Computation
sz = np.array(interval)
eps = np.zeros_like(interval, dtype=float)
for i, s in enumerate(interval):
eps[i] = eps_wrt_size(s)
# Graph
fig, ax = plt.subplots()
ax.plot(sz, eps, marker='.', markersize=3, ls = '--',
linewidth=0.5, c='g')
ax.set(title='ε as a function of the training set size',
xlabel='Size of training set', ylabel='ε')
ax.grid(color='gray', alpha=0.2, ls='--')
```
### ↳Batch size
Smaller micro-batches (aka lots, in Ilya Mironov's terminology) imply more randomness, which is beneficial to privacy. An optimal choice should be a divisor of the dataset size, to avoid an incomplete batch during the last round of each epoch.
```
def eps_wrt_batch(bs):
"""
Returns epsilon parameter w.r.t. the `bs` batch size.
"""
return co.compute_dp_sgd_privacy(dataset_size,
bs,
noise_multiplier,
epochs,
delta,
alphas=my_alphas,
printed=False
)[0]
# ===== Batch sizes =====
_interval = list(range(1, 50, 10)) + list(range(50, 500, 50))
_interval += list(range(500, dataset_size+1, 100))
# Keep only dataset_size's dividers
interval = [i for i in _interval if dataset_size % i == 0]
# Computation
bs = np.array(interval)
eps = np.zeros_like(interval, dtype=float)
for i, s in enumerate(interval):
eps[i] = eps_wrt_batch(s)
# Graph
fig, ax = plt.subplots()
ax.plot(bs, eps, marker='.', markersize=3, ls = '--',
linewidth=0.5, color='g')
ax.set(title='ε as a function of the batch size',
xlabel='Batch size', ylabel='ε')
ax.grid(color='gray', alpha=0.2, ls='--')
```
### ↳Noise multiplier $\sigma$
Basically, the more noise is added, the more privacy is protected. But of course there's a trade-off with accuracy, which decreases as noise grows (this relation depends on your model and your context, so it can't be quantified here).
Let's see how $\varepsilon$ depends on $\sigma$. Above a given threshold, adding more noise brings diminishing returns in terms of privacy.
```
def eps_wrt_sigma(std):
"""
Returns epsilon parameter w.r.t. the `std` noise multiplier.
"""
return co.compute_dp_sgd_privacy(dataset_size,
batch_size,
std,
epochs,
delta,
alphas=my_alphas,
printed=False
)[0]
# ===== Noise multipliers =====
interval = [0.6 + i/10 for i in range(21)]
# Computation
std = np.array(interval)
eps = np.zeros_like(interval, dtype=float)
for i, sig in enumerate(interval):
eps[i] = eps_wrt_sigma(sig)
# Graph
fig, ax = plt.subplots()
ax.plot(std, eps, marker='.', markersize=3, ls = '--',
linewidth=0.5, color='g')
ax.set(title='ε as a function of noise multiplier σ',
xlabel='σ', ylabel='ε')
ax.grid(color='gray', alpha=0.2, ls='--')
```
### ↳Number of epochs
Thanks to the tight computation due to RDP, one can verify that the privacy loss increases less than proportionally as a function of the number $n$ of epochs (*seems closer to* $k\sqrt{n}$).
```
def eps_wrt_epochs(n_epochs):
"""
Returns epsilon parameter w.r.t. the `n_epochs` number of epochs.
"""
return co.compute_dp_sgd_privacy(dataset_size,
batch_size,
noise_multiplier,
n_epochs,
delta,
alphas=my_alphas,
printed=False
)[0]
# ===== Numbers of epochs =====
interval = list(range(1,11)) + [i for i in range(15, 101, 5)]
# Computation
epcs = np.array(interval)
eps = np.zeros_like(interval, dtype=float)
for i, epc in enumerate(epcs):
eps[i] = eps_wrt_epochs(epc)
# Graph
fig, ax = plt.subplots()
ax.plot(epcs, eps, marker='.', markersize=3, ls = '--',
linewidth=0.5, c='g')
ax.set(title='ε as a function of number of epochs',
xlabel='Number of epochs', ylabel='ε')
ax.grid(color='gray', alpha=0.2, ls='--')
```
### ↳Delta
The value of $\delta$ is mainly constrained by the required level of protection and the dataset size (a value under $\frac{1}{\text{dataset size}}$ is a serious guarantee). Analyzing how $\varepsilon$ depends on $\delta$ shows that it's important to set it just to a sufficient level: the overcost in $\varepsilon$ can be high if the magnitude of $\delta$ is set needlessly low.
```
def eps_wrt_delta(d):
"""
Returns epsilon parameter w.r.t. `d` as delta.
"""
return co.compute_dp_sgd_privacy(dataset_size,
batch_size,
noise_multiplier,
epochs,
d,
alphas=my_alphas,
printed=False
)[0]
# ===== Deltas =====
interval = ([ i * 1e-7 for i in range(1, 102, 10)]
+ [ i * 1e-5 for i in range(1, 62, 5)]
+ [ i * 1e-5 for i in range(61, 502, 20)]
)
# Computation
deltas = np.array(interval)
eps = np.zeros_like(interval, dtype=float)
for i, d in enumerate(deltas):
eps[i] = eps_wrt_delta(d)
# Graph
# Wider for two graphs
dim_memo = plt.rcParams['figure.figsize']
plt.rcParams['figure.figsize'] = plt.rcParams['figure.figsize'] * np.array([1.5, 1.0])
fig = plt.figure()
#fig, _ = plt.subplots(ncols=2)
ax = fig.add_subplot(121) # 1 row × 2 col, #1
ax2 = fig.add_subplot(122)
ax.plot(deltas, eps, marker='.', markersize=3, ls = '--',
linewidth=0.5, color='g')
ax.set(title='ε as a function of δ', xlabel='δ', ylabel='ε')
ax.grid(color='gray', alpha=0.2, ls='--')
ax2.plot(np.log(deltas), eps, marker='.', markersize=3,
ls = '--', linewidth=0.5, color='b')
ax2.set(title='ε as a function of ln(δ)', xlabel='ln(δ)')
ax2.grid(color='gray', alpha=0.2, ls='--')
# Specific points
h_space = (np.max(deltas) - np.min(deltas))/20
for mag in [m for m in [1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3]
if np.min(deltas) <= m <= np.max(deltas)]:
ep = eps_wrt_delta(mag)
ax.scatter(mag, ep, marker='.', color='r')
ax.text(mag + h_space , ep, "δ = "+str(mag), color='r')
# Restore default dimensions
plt.rcParams['figure.figsize'] = dim_memo
```
### ↳Noise multiplier and batch size
The idea here is to compare the impact of both parameters on the $\varepsilon$ value.
```
# ===== Noise multipliers =====
interval = [0.7 + i/10 for i in range(20)]
std = np.array(interval)
# ===== Batch sizes =====
bsz = [1, 4, 8, 16, 32, 64, 128, 256, 512]
# Computing ε
### Tip: set to `already_done = True` to redraw graph
### without calculating data again
already_done = False
###
if not already_done:
eps = np.zeros_like(np.outer(bsz, std)) # Cartesian product
for x, bs in enumerate(bsz):
for y, st in enumerate(std):
eps[x][y] = co.compute_dp_sgd_privacy(dataset_size,
int(bs),
float(st),
epochs,
delta,
alphas=my_alphas,
printed=False)[0]
x, y = np.meshgrid(std, bsz)
# 3D graph
# Switch mode to add / remove interaction
%matplotlib inline
#%matplotlib ipympl
# Wider scale
local_zoom = np.array([1.8, 1.8], dtype=float)
plt.rcParams['figure.figsize'] = dim_memo * local_zoom
ax = plt.axes(projection='3d')
ax.set(xlabel='Noise multiplier', ylabel="Batch size", zlabel='ε')
# Rotation (degrees)
ax.view_init(30, 10)
surf = ax.plot_wireframe(x, y, eps)
#surf = ax.plot_surface(x, y, eps, linewidth=2, cmap='winter')
# Restore default dimensions
plt.rcParams['figure.figsize'] = dim_memo
```
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 100.0)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 100.0)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9),metrics=["mae"])
history = model.fit(dataset,epochs=500,verbose=0)
forecast = []
results = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Metric value")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Metric value")
plt.legend(["MAE", "Loss"])
plt.figure()
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 100.0)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(lr=1e-6, momentum=0.9))
model.fit(dataset,epochs=100, verbose=0)
tf.keras.backend.clear_session()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
input_shape=[None]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 100.0)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(lr=1e-6, momentum=0.9))
model.fit(dataset,epochs=100)
```
# Keyword Extraction
<div class="alert alert-info">
This tutorial is available as an IPython notebook at [Malaya/example/keyword-extraction](https://github.com/huseinzol05/Malaya/tree/master/example/keyword-extraction).
</div>
```
import malaya
# https://www.bharian.com.my/berita/nasional/2020/06/698386/isu-bersatu-tun-m-6-yang-lain-saman-muhyiddin
string = """
Dalam saman itu, plaintif memohon perisytiharan, antaranya mereka adalah ahli BERSATU yang sah, masih lagi memegang jawatan dalam parti (bagi pemegang jawatan) dan layak untuk bertanding pada pemilihan parti.
Mereka memohon perisytiharan bahawa semua surat pemberhentian yang ditandatangani Muhammad Suhaimi bertarikh 28 Mei lalu dan pengesahan melalui mesyuarat Majlis Pimpinan Tertinggi (MPT) parti bertarikh 4 Jun lalu adalah tidak sah dan terbatal.
Plaintif juga memohon perisytiharan bahawa keahlian Muhyiddin, Hamzah dan Muhammad Suhaimi di dalam BERSATU adalah terlucut, berkuat kuasa pada 28 Februari 2020 dan/atau 29 Februari 2020, menurut Fasal 10.2.3 perlembagaan parti.
Yang turut dipohon, perisytiharan bahawa Seksyen 18C Akta Pertubuhan 1966 adalah tidak terpakai untuk menghalang pelupusan pertikaian berkenaan oleh mahkamah.
Perisytiharan lain ialah Fasal 10.2.6 Perlembagaan BERSATU tidak terpakai di atas hal melucutkan/ memberhentikan keahlian semua plaintif.
"""
import re
# minimum cleaning, just simply to remove newlines.
def cleaning(string):
string = string.replace('\n', ' ')
string = re.sub('[^A-Za-z\-() ]+', ' ', string).strip()
string = re.sub(r'[ ]+', ' ', string).strip()
return string
string = cleaning(string)
```
### Use RAKE algorithm
Original implementation from [https://github.com/aneesha/RAKE](https://github.com/aneesha/RAKE). Malaya adds an attention mechanism on top of the RAKE algorithm.
```python
def rake(
string: str,
model = None,
vectorizer = None,
top_k: int = 5,
atleast: int = 1,
stopwords = get_stopwords,
**kwargs
):
"""
Extract keywords using Rake algorithm.
Parameters
----------
string: str
model: Object, optional (default=None)
Transformer model or any model has `attention` method.
vectorizer: Object, optional (default=None)
Prefer `sklearn.feature_extraction.text.CountVectorizer` or,
`malaya.text.vectorizer.SkipGramCountVectorizer`.
If None, will generate ngram automatically based on `stopwords`.
top_k: int, optional (default=5)
return top-k results.
ngram: tuple, optional (default=(1,1))
n-grams size.
atleast: int, optional (default=1)
at least count appeared in the string to accept as candidate.
stopwords: List[str], (default=malaya.texts.function.get_stopwords)
A callable that returned a List[str], or a List[str], or a Tuple[str]
For automatic Ngram generator.
Returns
-------
result: Tuple[float, str]
"""
```
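To make the algorithm concrete before calling Malaya's API, here is a minimal sketch of *classic* RAKE scoring (word score = degree/frequency, phrase score = sum of word scores). The toy stopword list is our own assumption, and Malaya's attention-weighted variant is more involved:

```python
import re
from collections import defaultdict

def rake_keywords(text, stopwords, top_k=5):
    words = re.findall(r"[a-zA-Z\-]+", text.lower())
    # Candidate phrases = maximal runs of non-stopwords
    phrases, current = [], []
    for w in words:
        if w in stopwords:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)
    # Word score = degree / frequency
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)  # co-occurrence degree within the phrase
    word_score = {w: degree[w] / freq[w] for w in freq}
    # Phrase score = sum of its word scores
    ranked = sorted(
        ((sum(word_score[w] for w in p), " ".join(p)) for p in phrases),
        reverse=True,
    )
    return ranked[:top_k]

stop = {"the", "of", "a", "is", "and", "in", "to", "over"}
result = rake_keywords("the quick brown fox jumps over the lazy dog in the park", stop)
print(result)  # → [(16.0, 'quick brown fox jumps'), (4.0, 'lazy dog'), (1.0, 'park')]
```

Longer phrases score higher because each word's degree counts its whole phrase, which is RAKE's bias toward multi-word keywords.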
#### auto-ngram
This will auto generated N-size ngram for keyword candidates.
```
malaya.keyword_extraction.rake(string)
```
#### auto-gram with Attention
This will use attention mechanism as the scores. I will use `small-electra` in this example.
```
electra = malaya.transformer.load(model = 'small-electra')
malaya.keyword_extraction.rake(string, model = electra)
```
#### using vectorizer
```
from malaya.text.vectorizer import SkipGramCountVectorizer
stopwords = malaya.text.function.get_stopwords()
vectorizer = SkipGramCountVectorizer(
token_pattern = r'[\S]+',
ngram_range = (1, 3),
stop_words = stopwords,
lowercase = False,
skip = 2
)
malaya.keyword_extraction.rake(string, vectorizer = vectorizer)
```
#### fixed-ngram with Attention
```
malaya.keyword_extraction.rake(string, model = electra, vectorizer = vectorizer)
```
### Use Textrank algorithm
Malaya uses the classic TextRank algorithm.
```python
def textrank(
string: str,
model = None,
vectorizer = None,
top_k: int = 5,
atleast: int = 1,
stopwords = get_stopwords,
**kwargs
):
"""
Extract keywords using Textrank algorithm.
Parameters
----------
string: str
model: Object, optional (default='None')
model has `fit_transform` or `vectorize` method.
vectorizer: Object, optional (default=None)
Prefer `sklearn.feature_extraction.text.CountVectorizer` or,
`malaya.text.vectorizer.SkipGramCountVectorizer`.
If None, will generate ngram automatically based on `stopwords`.
top_k: int, optional (default=5)
return top-k results.
atleast: int, optional (default=1)
at least count appeared in the string to accept as candidate.
stopwords: List[str], (default=malaya.texts.function.get_stopwords)
A callable that returned a List[str], or a List[str], or a Tuple[str]
Returns
-------
result: Tuple[float, str]
"""
```
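For intuition, here is a minimal self-contained sketch of TextRank for keywords (our own toy implementation: a co-occurrence graph over a sliding window, ranked by PageRank-style power iteration; Malaya instead scores candidates with a model's `fit_transform`/`vectorize` output):

```python
import re
import numpy as np

def textrank_keywords(text, stopwords, top_k=5, window=2, damping=0.85, iters=50):
    # Candidate words = non-stopword tokens
    tokens = [w for w in re.findall(r"[a-zA-Z\-]+", text.lower())
              if w not in stopwords]
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    # Undirected co-occurrence graph over a sliding window
    adj = np.zeros((n, n))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            a, b = idx[w], idx[tokens[j]]
            if a != b:
                adj[a, b] += 1
                adj[b, a] += 1
    # PageRank-style power iteration on the column-normalized graph
    out = adj.sum(axis=0)
    out[out == 0] = 1.0  # avoid division by zero for isolated words
    M = adj / out
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * M @ r
    order = np.argsort(-r)
    return [(float(r[i]), vocab[i]) for i in order[:top_k]]

stop = {"the", "of", "a", "is", "and", "in", "to", "from"}
res = textrank_keywords(
    "keyword extraction extracts keywords from text and ranks the keywords", stop)
print(res)
```

Words that co-occur with many distinct neighbours accumulate rank, so repeated, well-connected terms surface first.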
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
```
#### auto-ngram with TFIDF
This will auto generated N-size ngram for keyword candidates.
```
malaya.keyword_extraction.textrank(string, model = tfidf)
```
#### auto-ngram with Attention
This will auto generated N-size ngram for keyword candidates.
```
electra = malaya.transformer.load(model = 'small-electra')
albert = malaya.transformer.load(model = 'albert')
malaya.keyword_extraction.textrank(string, model = electra)
malaya.keyword_extraction.textrank(string, model = albert)
```
**Or you can use any classification model to find keywords sensitive towards to specific domain**.
```
sentiment = malaya.sentiment.transformer(model = 'xlnet', quantized = True)
malaya.keyword_extraction.textrank(string, model = sentiment)
```
#### fixed-ngram with Attention
```
stopwords = malaya.text.function.get_stopwords()
vectorizer = SkipGramCountVectorizer(
token_pattern = r'[\S]+',
ngram_range = (1, 3),
stop_words = stopwords,
lowercase = False,
skip = 2
)
malaya.keyword_extraction.textrank(string, model = electra, vectorizer = vectorizer)
malaya.keyword_extraction.textrank(string, model = albert, vectorizer = vectorizer)
```
### Load Attention mechanism
Use attention mechanism to get important keywords.
```python
def attention(
string: str,
model,
vectorizer = None,
top_k: int = 5,
atleast: int = 1,
stopwords = get_stopwords,
**kwargs
):
"""
Extract keywords using Attention mechanism.
Parameters
----------
string: str
model: Object
Transformer model or any model has `attention` method.
vectorizer: Object, optional (default=None)
Prefer `sklearn.feature_extraction.text.CountVectorizer` or,
`malaya.text.vectorizer.SkipGramCountVectorizer`.
If None, will generate ngram automatically based on `stopwords`.
top_k: int, optional (default=5)
return top-k results.
atleast: int, optional (default=1)
at least count appeared in the string to accept as candidate.
stopwords: List[str], (default=malaya.texts.function.get_stopwords)
A callable that returned a List[str], or a List[str], or a Tuple[str]
Returns
-------
result: Tuple[float, str]
"""
```
#### auto-ngram
This will auto generated N-size ngram for keyword candidates.
```
malaya.keyword_extraction.attention(string, model = electra)
malaya.keyword_extraction.attention(string, model = albert)
```
#### fixed-ngram
```
malaya.keyword_extraction.attention(string, model = electra, vectorizer = vectorizer)
malaya.keyword_extraction.attention(string, model = albert, vectorizer = vectorizer)
```
# LAB 4a: Creating a Sampled Dataset.
**Learning Objectives**
1. Setup up the environment
1. Sample the natality dataset to create train/eval/test sets
1. Preprocess the data in Pandas dataframe
## Introduction
In this notebook, we will set up the environment, sample the natality dataset from BigQuery to create train/eval/test splits, and preprocess the data within a Pandas dataframe for a small, repeatable sample.
## Set up environment variables and load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
```
%%bash
python3 -m pip freeze | grep google-cloud-bigquery==1.6.1 || \
python3 -m pip install google-cloud-bigquery==1.6.1
```
Import necessary libraries.
```
from google.cloud import bigquery
import pandas as pd
```
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
```
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "cloud-training-demos" # Replace with your PROJECT
```
## Create ML datasets by sampling using BigQuery
We'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
```
bq = bigquery.Client(project = PROJECT)
```
We need to figure out the right way to divide our hash values to get our desired splits. To do that, we define a modulo divisor and the number of hash buckets assigned to each split. Feel free to play around with these values to get the perfect combination.
```
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
```
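To make the mechanism concrete, here is a local Python sketch of the same idea. Note that `hashlib.md5` is *not* BigQuery's `FARM_FINGERPRINT`, so bucket assignments will not match the SQL results, but the repeatable-split logic is identical:

```python
import hashlib

def assign_split(key, modulo_divisor=100, train_buckets=80, eval_buckets=10):
    # Hash the key, take a bucket index, then map bucket ranges to splits
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % modulo_divisor
    if bucket < train_buckets:
        return "train"
    if bucket < train_buckets + eval_buckets:
        return "eval"
    return "test"

# The same key always lands in the same split (repeatable sampling)
print(assign_split("2005-7-12-CA-NY"))
print(assign_split("2005-7-12-CA-NY"))
```

Because the assignment depends only on the hashed key, re-running the sampling never shuffles records between train, eval, and test.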
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
```
def display_dataframe_head_from_query(query, count=10):
"""Displays count rows from dataframe head from query.
Args:
query: str, query to be run on BigQuery, results stored in dataframe.
count: int, number of results from head of dataframe to display.
Returns:
Dataframe head with count number of results.
"""
df = bq.query(
query + " LIMIT {limit}".format(
limit=count)).to_dataframe()
return df.head(count)
```
For our first query, we're going to use the original query above to get our label, features, and the columns to combine into the hash that we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns in the hash to get a fairly uniform spread of the data. Feel free to try fewer or more columns in the hash and see how it changes your results.
```
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
```
Using `COALESCE` would provide the same result as the nested `CASE WHEN`. This is preferable when all we want is the first non-null instance. To be precise the `CASE WHEN` would become `COALESCE(wday, day, 0) AS date`. You can read more about it [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions).
The next query will combine our hash columns, leaving us with just our label, features, and hash values.
```
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
```
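`FARM_FINGERPRINT` is what makes the split repeatable: the same concatenated string always yields the same 64-bit signed hash, with no dependence on row order or run date. Here is a hedged Python sketch of the same idea, using MD5 as a stand-in (FARM_FINGERPRINT is a different hash function; the exact values differ, but the deterministic property is the same):

```python
import hashlib

def fingerprint(s):
    """Deterministic 64-bit signed hash of a string (MD5-based stand-in for
    BigQuery's FARM_FINGERPRINT; values differ, but the property --
    same input, same output, every run -- is the same)."""
    return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big", signed=True)

# year, month, date, state, mother_birth_state concatenated, as in the query
key = "2005" + "7" + "12" + "CA" + "CA"
assert fingerprint(key) == fingerprint(key)   # repeatable across runs
print(abs(fingerprint(key)) % 100)            # this row's bucket, 0..99
```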
The next query finds the count of each of the 657484 unique `hash_values`. This is our first step toward making actual hash buckets for our split, via the `GROUP BY`.
```
# Get the counts of each of the unique hashs of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
```
The query below performs a second layer of bucketing: for each bucket index (the hash value mod `modulo_divisor`), we count the number of records.
```
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
```
A raw record count makes the split hard to assess, so in the next query we normalize each hash bucket's count into a percentage of the overall data.
```
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
```
We'll now select the range of buckets to be used in training.
```
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
```
We'll do the same by selecting the range of buckets to be used for evaluation.
```
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
```
Lastly, we'll select the hash buckets to be used for the test split.
```
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
```
In the query below, we'll `UNION ALL` the datasets together so that all three sets of hash buckets are within one table. We add `dataset_id` so that we can sort on it in the query after.
```
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
```
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and the percent of the total data. It is really close to the 80/10/10 split that we were hoping to get.
```
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
```
Now that we know our splitting values produce a good global split of the data, here's a way to get a well-distributed subsample of those global splits such that the train/eval/test sets still do not overlap.
```
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
"""Creates a dataframe with a sample of a data split.
Args:
query_string: str, query to run to generate splits.
splitting_string: str, modulo string to split by.
lo: float, lower bound for bucket filtering for split.
up: float, upper bound for bucket filtering for split.
Returns:
Dataframe containing data split sample.
"""
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up))
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
    data_query, splitting_string,
    lo=0, up=train_percent)
eval_df = create_data_split_sample_df(
    data_query, splitting_string,
    lo=train_percent, up=train_percent + eval_percent)
test_df = create_data_split_sample_df(
    data_query, splitting_string,
    lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
```
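The arithmetic behind the subsample is worth spelling out: filtering on `ABS(MOD(hash_values, every_n * modulo_divisor))` keeps roughly 1/`every_n` of each bucket, and any value that passes the train filter also lands in a train bucket, so the subsample never crosses split boundaries. A small illustrative sketch (the hash value below is made up):

```python
every_n = 1000
modulo_divisor = 100

def in_train_sample(hash_value, train_percent=80):
    # Python analogue of: ABS(MOD(hash_values, every_n * modulo_divisor)) < 80
    return abs(hash_value) % (every_n * modulo_divisor) < train_percent

v = 12_300_079                      # 12_300_079 % 100_000 == 79
print(in_train_sample(v))           # True: kept in the train subsample
print(abs(v) % modulo_divisor)      # 79: also inside the train buckets [0, 80)
```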
## Preprocess data using Pandas
We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of an ultrasound: we'll duplicate some rows and set the `is_male` field to `Unknown`. Also, if there is more than one child, we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below.
Let's start by examining the training dataset as is.
```
train_df.head()
```
Also, notice that some very important numeric fields are missing in some rows (Pandas' count excludes missing data).
```
train_df.describe()
```
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide it; otherwise, our service won't work. The features we use for our model were chosen because they are good predictors and easy enough to collect.
```
def preprocess(df):
""" Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
as simulated no ultrasound data masking some of the original data.
"""
# Clean up raw data
# Filter out what we don't want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
```
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
```
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
```
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
```
train_df.describe()
```
## Write to .csv files
In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
```
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
```
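Since the files are written with `header=False`, anything that reads them back (for example, a TensorFlow CSV input function) must supply the column names itself. A minimal stdlib sketch of how one headerless row maps onto the schema (the row values here are illustrative, not taken from the dataset):

```python
import csv
import io

columns = ["weight_pounds", "is_male", "mother_age",
           "plurality", "gestation_weeks"]

# One row as to_csv(header=False, index=False) would emit it
raw_row = "7.25,true,33,Single(1),39\n"
row = dict(zip(columns, next(csv.reader(io.StringIO(raw_row)))))
print(row["weight_pounds"], row["plurality"])  # 7.25 Single(1)
```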
## Lab Summary:
In this lab, we set up the environment, sampled the natality dataset to create train/eval/test splits, and preprocessed the data in a Pandas dataframe.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Case Study 3 - Email Spam
__Team Members:__ Amber Clark, Andrew Leppla, Jorge Olmos, Paritosh Rai
# Content
* [Business Understanding](#business-understanding)
- [Scope](#scope)
- [Introduction](#introduction)
- [Methods](#methods)
- [Results](#results)
* [Data Evaluation](#data-evaluation)
- [Loading Data](#loading-data)
- [Data Summary](#data-summary)
- [Missing Values](#missing-values)
- [Exploratory Data Analysis (EDA)](#eda)
- [Assumptions](#assumptions)
* [Model Preparations](#model-preparations)
- [Proposed Method](#proposed-metrics)
- [Evaluation Metrics](#evaluation-metrics)
- [Feature Selection](#feature-selection)
* [Model Building & Evaluations](#model-building)
- [Sampling Methodology](#sampling-methodology)
- [Model](#model)
- [Performance Analysis](#performance-analysis)
* [Model Interpretability & Explainability](#model-explanation)
- [Examining Feature Importance](#examining-feature-importance)
* [Conclusion](#conclusion)
- [Final Model Proposal](#final-model-proposal)
- [Future Considerations and Model Enhancements](#model-enhancements)
# Business Understanding & Executive Summary <a id='business-understanding'/>
This case study involves a text analysis of several thousand emails to create a model that identifies spam. The final model uses a Naïve Bayes classification method to quickly identify spam with approximately 94% accuracy.
### Introduction <a id='introduction'/>
Spam is a term for unsolicited email (and now other forms of messaging) that has plagued inboxes for virtually as long as email has existed. The term originated from a sketch in Monty Python's Flying Circus with the lyric, "Spam spam spam spam, spam spam spam spam, lovely spam, wonderful spam." In the scene, a group of Vikings in a cafe sing about the ubiquity of the canned meat product after World War II [1]. The term fits well to describe the overwhelming, pervasive presence of these unwanted and often malicious emails.
According to spamlaws.com [2], 14.5 billion spam messages are sent globally per day, i.e., approximately 45% of all emails (or more, according to certain research firms) are considered spam. The United States is the number one generator of spam emails. The types of spam range from relatively harmless but annoying advertising (36%), to attempts to gather sensitive personal information, to malware that can install critically harmful programs capable of crashing networks or stealing data. Email filters designed to block or delete spam are essential and are constantly tested by those trying to penetrate the inboxes of business and personal accounts around the globe. Even with these security measures in place, some research firms estimate that the annual losses due to productivity interruption and technical expenses to deal with spam amount to over $20 billion [2].
This case study examines thousands of sample emails that have been identified as either everyday correspondence (ham) or spam, and attempts to build a model that can classify the emails correctly using the sample provided.
### Methods <a id='methods'/>
#### Data Wrangling
Data was provided in five (5) folders containing email text – spam, spam_2, easy_ham, easy_ham2, and hard_ham – from which we extracted the information needed to construct a dataset for identifying spam emails. An empty email data frame was created with columns for folder name (folder), mail originator (from), email subject (subject), and main message (body). Emails from the five folders were extracted and parsed into the respective columns of the data frame.
To make the data more manageable, cleaning was carried out to remove HTML and XML tags, unwanted characters, and NaN values from the subject and main text. Data was split into training and test sets in a 70/30 ratio, and care was taken to preserve the spam/ham class ratio in both sets.
#### NLP and Naive Bayes
Data is processed using TF-IDF (Term Frequency–Inverse Document Frequency) to quantify how distinctive words are across the spam and ham emails in the training and test sets, for both the body and the subject. Gaussian Naive Bayes (GaussianNB) was used to build the model, and predicted outcomes on the training and test sets, for both the subject and the body of each email, were compared with the actual labels.
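The TF-IDF weighting itself is simple enough to sketch in a few lines; the version below uses the classic `tf * log(N/df)` formula (scikit-learn's `TfidfVectorizer`, used later in the notebook, applies a smoothed variant, so exact numbers will differ):

```python
import math
from collections import Counter

docs = [["free", "money", "free"], ["meeting", "agenda"], ["free", "agenda"]]

def tfidf(term, doc, corpus):
    tf = Counter(doc)[term] / len(doc)          # term frequency within the doc
    df = sum(term in d for d in corpus)         # how many docs contain the term
    return tf * math.log(len(corpus) / df)      # rare-across-corpus terms score higher

# "money" appears in only one doc, so it outweighs the more common "free"
print(tfidf("money", docs[0], docs) > tfidf("free", docs[0], docs))  # True
```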
The team decided to use NLP (Natural Language Processing) to read and identify the text in the email. The team cleaned the unstructured alphanumeric data, removing unwanted characters and symbols, to convert it into a structured data frame.
The Naïve Bayes algorithm was used to classify the data. This classification technique works well when the features predicting the target classification are independent of each other. After the cleanup effort, the data frame's features are approximately independent, so the team chose Naïve Bayes to classify the emails as spam or ham.
#### DBSCAN
The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to find associations and structures in data that are challenging to find manually but are relevant for finding patterns and predicting trends. DBSCAN groups data points that are close to each other based on density and distance, and identifies points in low-density regions as outliers. For email, it can identify closely related words and separate out the noise (irrelevant words). DBSCAN handles outliers effectively by leaving them unclustered rather than forcing them into a cluster, and it allows easy substitution of distance measures. Cosine distance measures the distance between two vectors via the cosine of the angle between them, so vectors pointing in the same direction have minimal distance; it works well with text.
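For intuition, cosine distance between two texts' word-count vectors can be computed directly (a toy sketch, not the TF-IDF + DBSCAN pipeline used later in the notebook):

```python
import math
from collections import Counter

def cosine_distance(a, b):
    """1 - cosine similarity of two texts' word-count vectors:
    0 means the same word mix, 1 means no words in common."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return 1.0 - dot / (na * nb)

print(cosine_distance("free money now", "free money now now") < 0.1)   # True: similar
print(cosine_distance("free money now", "meeting agenda attached"))    # 1.0: disjoint
```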
#### Metrics
The metrics used to evaluate model performance were accuracy, F1, and time to process. Time to process the emails was considered an essential factor, so the team had to find the right balance between the three legs of the stool (accuracy, F1, and processing time).
Accuracy refers to the level of agreement between the actual measurement and the predicted value.
#### F1
F1 values depend on Precision and Recall. Precision and recall are useful metrics when the classes are imbalanced like in this dataset. These metrics are defined as follows:
Recall = TP/(TP+FN)
Precision = TP/(TP+FP)
Where:
True Positive (TP) is "email is correctly predicted to be spam". These predictions will correctly identify spam mail.
False Negative (FN) is "email is incorrectly predicted to be ham but is actually spam". These predictions let spam through to the inbox.
False Positive (FP) is "email is incorrectly predicted to be spam but is actually ham". These predictions would send legitimate mail to the spam folder.
Both Recall and Precision focus the modeling on True Positives, but Recall is penalized by False Negatives whereas Precision is penalized by False Positives.
To balance both precision and recall, the team used the F1 score. The F1 score combines precision and recall into a single metric by taking their harmonic mean:
F1 = 2 * Recall * Precision / (Recall + Precision)
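Putting the three formulas together, the F1 computation looks like this (the counts below are made up for illustration, not taken from the study):

```python
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # Harmonic mean: a low precision OR a low recall drags F1 down
    return 2 * precision * recall / (precision + recall)

# e.g. 90 spam caught, 10 ham wrongly flagged, 10 spam missed
print(round(f1_score(tp=90, fp=10, fn=10), 4))  # 0.9
```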
#### Time
The time to process the emails was calculated for each scenario. The team tuned the feature-selection percentile used with the Naïve Bayes algorithm to optimize processing time and prevent overfitting.
By maximizing the F1 score, the model balances recall and precision for the positive class, minimizing both ham identified as spam and spam identified as ham, since both create problems.
The team's objective was to maximize accuracy while minimizing the number of features and processing time, to make predictions in real time.
### Results <a id='results'/>
The best model used 1% of the features from the email subject+body (832 features), with an F1-score of 0.88 and a processing time of 0.103 seconds (so fast!).
| Type | Percentile | No of Features | Train Accuracy | Test Accuracy | f1-score | Processing Time |
|--------------|------------|----------------|----------------|---------------|----------|-----------------|
| body | 100 | 77396 | 0.95 | 0.93 | 0.83 | 7.9631 |
| subject | 100 | 6086 | 0.96 | 0.92 | 0.81 | 0.8683 |
| subject | 1 | 59 | 0.8 | 0.8 | 0.37 | 0.0195 |
| body | 1 | 773 | 0.94 | 0.93 | 0.85 | 0.0861 |
| body | 0.1 | 78 | 0.83 | 0.83 | 0.52 | 0.0175 |
| __subject+body__ | __1__ | __832__ | __0.94__ | __0.94__ | __0.88__ | __0.103__ |
| subject+body | 0.1 | 84 | 0.84 | 0.84 | 0.55 | 0.0138 |
# Data Engineering <a id='data-evaluation'>
## Data Summary
Data was provided in five (5) folders containing email text – spam, spam_2, easy_ham, easy_ham2, and hard_ham – to extract the information to construct the dataset that can be leveraged to identify the spam emails. Empty email data frame was created with folder name (folder), mail originator (from), email subject (subject), and main message (body). Email from the five (5) folders listed above was extracted and parsed under the respective column into the email data frame.
```
# standard libraries
import pandas as pd
import numpy as np
import re
import os
from IPython.display import Image
# email
from email import policy
from email.parser import BytesParser
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from tabulate import tabulate
# data pre-processing
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.feature_extraction.text import TfidfVectorizer
# clustering
from sklearn.cluster import DBSCAN
from statistics import stdev
# prediction models
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
# import warnings filter
'''import warnings
warnings.filterwarnings('ignore')
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning)'''
```
### Loading Data and Cleanup <a id='loading-data'>
As part of data cleanup, the team removed HTML tags, stop words, and non-alphanumeric characters.
```
%%javascript
IPython.notebook.kernel.execute('nb_name = "' + IPython.notebook.notebook_name + '"')
nb_name
import os
notebook_path = os.path.abspath(nb_name)
notebook_path
file_spam = os.path.join(os.path.dirname(notebook_path), "SpamAssassinMessages")
file_spam
def get_all_files(folder):
file_list = []
if os.path.exists(folder):
for root, dirs, files in os.walk(folder):
for file in files:
file_list.append(os.path.join(root,file))
return file_list
folders = os.listdir(file_spam)
folders
# Get the file names in each folder (list of lists)
files = [ os.listdir(file_spam+ '/'+ folder) for folder in folders]
# Create a list of dataframes for all of the folders
emails = [ pd.DataFrame({'folder' : [], 'from' : [], 'subject' : [], 'body': []}) ]*len(folders)
# Add folder path to file names
for i in range(0,len(folders)):
for j in range(0, len(files[i])):
files[i][j] = str(file_spam +'/' + folders[i] + '/' + files[i][j])
# Parse and extract email 'subject' and 'from'
with open(files[i][j], 'rb') as fp:
msg = BytesParser(policy=policy.default).parse(fp)
# Error checking when reading in body for some html-based emails from spam folders
try:
simplest = msg.get_body(preferencelist=('plain', 'html'))
try:
new_row = {'folder': folders[i], 'from': msg['from'], 'subject': msg['subject'], 'body': simplest.get_content()}
emails[i] = emails[i].append(new_row, ignore_index=True)
except:
new_row = {'folder': folders[i], 'from': msg['from'], 'subject':msg['subject'], 'body':'Error(html)'}
emails[i] = emails[i].append(new_row, ignore_index=True)
except:
new_row = {'folder': folders[i], 'from': msg['from'], 'subject':msg['subject'], 'body':'Error(html)'}
emails[i] = emails[i].append(new_row, ignore_index=True)
# Emails per folder
print("# files in folders:", [len(i) for i in files])
print("# emails read in :", [i.shape[0] for i in emails])
# Total emails
print( "\n# total emails =", sum([len(i) for i in files]) )
# Create single dataframe from all folders
df = pd.concat( [emails[i] for i in range(0, len(emails))], axis=0)
# Keep the indices from the folders
df = df.reset_index()
# create response column from folder names
spam = [(i=='spam' or i=='spam_2') for i in df['folder']]
df = pd.concat([df, pd.Series(spam).astype(int)], axis=1)
df.columns = ['folder_idx', 'folder', 'from', 'subject', 'body','spam']
df.shape
df.head()
CLEANR = re.compile('<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});')
def cleanhtml(raw_html):
cleantext = re.sub(CLEANR, '', raw_html)
return cleantext
def cleanAndSplit(raw_text):
temp = []
for word in raw_text.split():
temp.append(re.sub(r"[^a-zA-Z0-9]","",word).lower())
return temp
def cleanBody(raw_content):
if(pd.isna(raw_content)):
return []
clean_from_html = cleanhtml(raw_content)
out = cleanAndSplit(clean_from_html)
return out
def combine(content):
joined = np.concatenate((*content[['clean_body']], *content[['clean_subject']]))
return joined
df['clean_body'] = df['body'].apply(cleanBody)
df['clean_subject'] = df['subject'].apply(cleanBody)
df['joined'] = df[['clean_body','clean_subject']].agg(combine, axis=1)
df.head()
```
## Missing Values <a id='missing-values'>
There were 12 emails with missing subjects. In addition, there were 33 emails out of approximately 9,000 (less than 0.5%) that the team was not able to read in for modeling and analysis; these 33 were a small subset of the HTML emails.
```
# Rows where body couldn't be read in = 'Error(html)'
df.loc[df['body']=='Error(html)']
# All spam emails
# Count of body read Errors
df.loc[df['body']=='Error(html)'].shape[0]
# Look at file example with Error(html)
with open(files[4][1], 'rb') as fp:
msg = BytesParser(policy=policy.default).parse(fp)
print(msg)
df.isna().sum()
# Replace NaN and None values with 'No Subject'
df.loc[ df['subject'].isna(), 'subject'] = 'No Subject'
df.loc[ df['subject']=='', 'subject'] = 'No Subject'
```
## Exploratory Data Analysis (EDA) <a id='eda'>
```
df['spam'].value_counts()
```
### Clustering
- Use DBSCAN with cosine distance for NLP.
- Try to get approx. 5 clusters to dissect and explain
- Look for descriptors for the clusters, like business vs. personal emails, IT emails, etc.
```
def dummy_fun(doc):
return doc
vectorizer = TfidfVectorizer(analyzer='word', tokenizer=dummy_fun, preprocessor=dummy_fun, token_pattern=None, stop_words='english')
features_subject = vectorizer.fit_transform(df['clean_subject'])
dbscan = DBSCAN(metric='cosine', min_samples=125)
eps_clusters = []
for j in range(75,126,25):
dbscan.min_samples = j
for i in range(60,76,5):
dbscan.eps = i/100
clustering = pd.Series( dbscan.fit_predict(features_subject) )
cluster_counts = list( clustering.value_counts().sort_index() )
cluster_counts.pop(0) # remove -1 cluster which is a non-cluster
num_clusters = len(cluster_counts)
clustering_spam = pd.concat([clustering, df['spam']], axis=1)
clustering_spam.columns = ['cluster', 'spam']
clustering_spam_counts = pd.DataFrame( clustering_spam.value_counts() ).reset_index()
cluster_purity = list( ( clustering_spam_counts[['cluster']].value_counts()==1 ).sort_index() )
cluster_purity.pop(0) # remove -1 cluster which is a non-cluster
num_pure_clusters = sum(cluster_purity)
num_pure_emails = sum( np.multiply(cluster_counts, cluster_purity) )
eps_clusters.append([ dbscan.min_samples, dbscan.eps, num_clusters, num_pure_clusters, num_pure_emails])
clusters_df = pd.DataFrame(eps_clusters, columns = ['min_samples','epsilon', '# clusters', '# pure clusters', '# pure emails'])
clusters_df.sort_values(by='# pure emails', ascending=False)
```
## Final Clusters - Insights
min_samples=100, epsilon=0.65 had 4 pure clusters that contained only 'ham' emails. These clusters could be used to filter or pre-process emails before Naive Bayes classification.
- Cluster 0 : News & Dates
- Cluster 1 : Pain (ouch, hurts)
- Cluster 2 : Spambayes
- Cluster 3 : razorusers
- Cluster 4 : [Spambayes] & package
min_samples could be decreased to 50 or less which would give more pure emails but many more clusters to explore and manage.
```
# Most clusters with the lowest spread (stdev) in cluster counts
dbscan.min_samples = 100
dbscan.eps = 0.65
clusters = dbscan.fit_predict(features_subject)
clusters = pd.Series(clusters)
clusters.index = df.index
subject_clusters = pd.concat([ clusters, df['clean_subject'], df['spam'] ], axis=1)
subject_clusters.columns = ['cluster','clean_subject', 'spam']
heads_tails = pd.DataFrame({'cluster' : [], 'clean_subject' : [], 'spam' : []})
for i in range( 0, max(clusters.unique())+1 ): # Exclude the -1 class which is a non-cluster
heads_tails = pd.concat([heads_tails,
subject_clusters.loc[subject_clusters['cluster']==i].head(3),
subject_clusters.loc[subject_clusters['cluster']==i].tail(3) ], axis=0)
heads_tails[['cluster', 'spam']] = heads_tails[['cluster','spam']].astype(int)
print('Head & tail of each cluster:')
heads_tails
```
### 83 spam cases, all in Cluster 4. Clusters 0-3 have no spam and can be used for filtering or pre-processing
```
subject_clusters.loc[subject_clusters['cluster']!=-1,'spam'].value_counts()
subject_clusters.loc[subject_clusters['cluster']==4,'spam'].value_counts()
```
## Assumptions <a id='assumptions'>
The team assumed that these emails are representative of future emails, and that the emails were properly categorized into folders as ham or spam. For the spam detection to remain effective, the model will need to be re-run and/or updated on a regular basis to capture new trends in spam.
Naive Bayes is built on the assumption that all prediction variables (features) are independent.
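That independence assumption is what makes the classifier cheap: the evidence from each word can simply be summed in log space. A toy sketch with made-up word probabilities (not values estimated from this dataset):

```python
import math

# Illustrative per-word likelihoods, NOT estimated from the case-study data
p_word_given_spam = {"free": 0.6, "money": 0.5, "meeting": 0.05}
p_word_given_ham = {"free": 0.1, "money": 0.1, "meeting": 0.4}
p_spam, p_ham = 0.3, 0.7

def spam_log_odds(words):
    # Independence => per-word log-likelihood ratios just add up
    odds = math.log(p_spam / p_ham)
    for w in words:
        odds += math.log(p_word_given_spam[w] / p_word_given_ham[w])
    return odds

print(spam_log_odds(["free", "money"]) > 0)  # True  -> classified as spam
print(spam_log_odds(["meeting"]) > 0)        # False -> classified as ham
```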
# Model Preparations <a id='model-preparations'/>
### Methods
For this classification problem, the team found NLP useful because the inputs are natural-language text. (A natural language is one that develops naturally, such as English, ASL, or Spanish; an example of an artificial language is Klingon.) Naïve Bayes was useful for classification because it scores each class independently; it is the most popular method for spam detection because it tends to give a low false-positive rate. The dataset started with 6,086 features, and only a fraction of them were needed to detect whether a message was spam or ham. DBSCAN was useful for finding structure in the data, such as related words and noise.
#### NLP and Naïve Bayes
Data is processed using TFIDF (Term Frequency Inverse Data Frequency) to quantify and determine the rare words across the spam and ham emails in training and test data set of body and subject. Gaussian Naive Bayes (GaussianNB) was used to build the model. Predict outcome of the train and test data set was compared for subject and body o email with the actual outcome.
The team decided to use NLP (Natural Language processing) to read and identify the text in the email. Team performed cleanup efforts on unstructured alphanumeric data and removing unwanted characters, and symbols to create a data frame to convert unstructured data to structured data.
The Naïve Bayes algorithm will be used to classify the data. This classification technique works well when features predicting the target classification are independent of each other. After the cleanup effort, the data frame was created with features that are independent of each other. So, the team decided to use the Naïve Bayes algorithm to classify the email data as spam and ham.
#### DBSCAN
Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to find associations and structures in data that are challenging to find manually but are relevant and valuable to find patterns and predict trends. DBSCAN groups the data points close to each other based on the density and distance measured. It also identifies the outliers in low-density regions. In the case of email, it will be able to identify closely related words and separate the noise or irreverent words. This algorithm effectively manages outliers by detecting them and considering them as a separate (not able to cluster them) cluster(s). DBSCAN allows for easy execution of distance measures. Cosine distance measures the distance between vectors by calculating cosine angle vectors, i.e., two overlapping vectors will have minimum values. Cosine distance works excellent with text.
#### Metrics
The metrics used to evaluate model performance were accuracy, time to process, and F1. Time to process the emails was also considered to be an essential factor. So, the team had to find the right balance between the three legs of stool (accuracy, time to process, and F1).
Accuracy refers to the level of agreement between the actual measurement and the predicted value.
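These three measures can be computed as in the following sketch (the toy label vectors are illustrative, not the project's predictions):

```python
import time
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = spam, 0 = ham (toy data)
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

start = time.process_time()
acc = accuracy_score(y_true, y_pred)   # fraction of exact matches
f1 = f1_score(y_true, y_pred)          # harmonic mean of precision and recall
elapsed = time.process_time() - start

print(acc, f1)  # → 0.75 0.75
```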
# Model Building & Evaluations <a id='model-building'/>
The DBSCAN clustering approach is discussed in the EDA section.
## Split into Training and Test
The data was split into training and test sets in a 70/30 ratio. Care was taken to preserve the imbalanced ratio of spam to ham across both sets; specifically, `StratifiedShuffleSplit` was used to maintain the class balance.
```
def split_dependant_and_independant_variables(df: pd.DataFrame, y_var: str):
    X = df.copy()
    y = X[y_var]
    X = X.drop([y_var], axis=1)
    return X, y

def shuffle_split(X, y, test_size, random_state):
    stratified_shuffle_split = StratifiedShuffleSplit(n_splits=1, test_size=test_size, random_state=random_state)
    for train_index, test_index in stratified_shuffle_split.split(X, y):
        X_train, X_test = X.iloc[train_index], X.iloc[test_index]
        y_train, y_test = y[train_index], y[test_index]
    return X_train, X_test, y_train, y_test

X, y = split_dependant_and_independant_variables(df, 'spam')
X_train, X_test, y_train, y_test = shuffle_split(X, y, test_size=0.3, random_state=12343)
```
### Base Naive Bayes
As discussed above, Naïve Bayes works well when the features predicting the target are independent of each other, which holds approximately for the cleaned data frame, so the team used it to classify the email data as spam or ham.
```
count_Class=pd.value_counts(df["spam"], sort= True)
count_Class.plot(kind= 'bar', color= ["blue", "orange"])
plt.title('Bar chart')
plt.show()
from sklearn.feature_extraction.text import TfidfVectorizer
# The text columns are already tokenized, so pass-through functions are used
def dummy_fun(doc):
    return doc
vectorizer = TfidfVectorizer(analyzer='word', tokenizer=dummy_fun, preprocessor=dummy_fun, token_pattern=None)
features_subject_train = vectorizer.fit_transform(X_train['clean_subject'])
features_subject_test = vectorizer.transform(X_test['clean_subject'])
features_subject_train
features_body_train = vectorizer.fit_transform(X_train['clean_body'])
features_body_test = vectorizer.transform(X_test['clean_body'])
features_body_train
from scipy import sparse
features_joined_train = sparse.hstack((features_subject_train, features_body_train), format='csr')
features_joined_test = sparse.hstack((features_subject_test, features_body_test), format='csr')
features_joined_train
```
## Feature Selection
Using all the extracted features resulted in overfitting the training set, longer processing times, and degradation of the F1 value. To mitigate these issues, the team reduced the number of features in the model using `SelectPercentile` from sklearn's feature selection module. After multiple iterations, the team found that keeping only 1% of the features was best for predicting spam without overfitting. This reduction drastically improved the processing time (roughly 80 times faster).
```
selector = SelectPercentile(f_classif, percentile=1)
selector.fit(features_subject_train, y_train)
features_subject_train_sel = selector.transform(features_subject_train).toarray()
features_subject_test_sel = selector.transform(features_subject_test).toarray()
features_subject_train_sel.shape
selector.fit(features_body_train, y_train)
features_body_train_sel = selector.transform(features_body_train).toarray()
features_body_test_sel = selector.transform(features_body_test).toarray()
features_body_train_sel.shape
selector.fit(features_joined_train, y_train)
features_joined_train_sel = selector.transform(features_joined_train).toarray()
features_joined_test_sel = selector.transform(features_joined_test).toarray()
features_joined_train_sel.shape
```
## Modeling
Model Performance is discussed in the conclusion.
```
# Baseline accuracy of 74% with class imbalance
df['spam'].value_counts(normalize=True)
import time
def measure_time(func):
    start = time.process_time()
    func()  # prints the train and test scores
    print("Time Processing: ", time.process_time() - start)
# All subject features - overfits training set by 5.5%
features_subject_train1 = features_subject_train.toarray()
features_subject_test1 = features_subject_test.toarray()
def run_model1():
    model1a = GaussianNB()
    model1a.fit(features_subject_train1, y_train)
    score_train = model1a.score(features_subject_train1, y_train)
    score_test = model1a.score(features_subject_test1, y_test)
    print("Train Subject Accuracy:", score_train)
    print("Test Subject Accuracy:", score_test)
measure_time(run_model1)
# 1% of subject features - no overfitting
def run_model2():
    model1b = GaussianNB()
    model1b.fit(features_subject_train_sel, y_train)
    score_train = model1b.score(features_subject_train_sel, y_train)
    score_test = model1b.score(features_subject_test_sel, y_test)
    print("Train Subject Accuracy:", score_train)
    print("Test Subject Accuracy:", score_test)
measure_time(run_model2)
# 1% of body features
def run_model3():
    model2 = GaussianNB()
    model2.fit(features_body_train_sel, y_train)
    score_train = model2.score(features_body_train_sel, y_train)
    score_test = model2.score(features_body_test_sel, y_test)
    print("Train Body Accuracy:", score_train)
    print("Test Body Accuracy:", score_test)
measure_time(run_model3)
# 0.1% of body features
selector = SelectPercentile(f_classif, percentile=0.1)
selector.fit(features_body_train, y_train)
features_body_train_sel = selector.transform(features_body_train).toarray()
features_body_test_sel = selector.transform(features_body_test).toarray()
def run_model4():
    model2b = GaussianNB()
    model2b.fit(features_body_train_sel, y_train)
    score_train = model2b.score(features_body_train_sel, y_train)
    score_test = model2b.score(features_body_test_sel, y_test)
    print("Train Body Score:", score_train)
    print("Test Body Score:", score_test)
measure_time(run_model4)
# 1st percentile of subject + body features
# overfits training set by just 1%
def run_model5():
    model3 = GaussianNB()
    model3.fit(features_joined_train_sel, y_train)
    score_train = model3.score(features_joined_train_sel, y_train)
    score_test = model3.score(features_joined_test_sel, y_test)
    print("Train join Score:", score_train)
    print("Test join Score:", score_test)
measure_time(run_model5)
# 0.1% of subject + body features, similar to body only, not worth the complexity
selector = SelectPercentile(f_classif, percentile=0.1)
selector.fit(features_joined_train, y_train)
features_joined_train_sel = selector.transform(features_joined_train).toarray()
features_joined_test_sel = selector.transform(features_joined_test).toarray()
def run_model6():
    model2b = GaussianNB()
    model2b.fit(features_joined_train_sel, y_train)
    score_train = model2b.score(features_joined_train_sel, y_train)
    score_test = model2b.score(features_joined_test_sel, y_test)
    print("Train join Score:", score_train)
    print("Test join Score:", score_test)
measure_time(run_model6)
```
# Conclusion <a id='conclusion'/>
For this classification problem, the team found NLP useful because the inputs are natural-language text. (A natural language is one that develops naturally, such as English, ASL, or Spanish; an example of an artificial language is Klingon.)
Naïve Bayes was useful for classification because it treats each feature as independent. It is one of the most popular methods for spam detection because it yields a low false-positive rate. The dataset started with 6,086 features, and only a fraction of them was needed to detect whether a message was spam or ham. DBSCAN was useful for finding connections in the data, such as related words, and for separating out noise.
### Final Model Proposal <a id='final-model-proposal'/>
The team's objective was to maximize accuracy while minimizing the number of features and the processing time, so that predictions can be made in real time. The best model used 1% of the subject+body features (832 features), achieving an F1-score of 0.88 with a processing time of 0.103 seconds.
| Type | Percentile | No of Features | Train Accuracy | Test Accuracy | f1-score | Processing Time |
|--------------|------------|----------------|----------------|---------------|----------|-----------------|
| body | 100 | 77396 | 0.95 | 0.93 | 0.83 | 7.9631 |
| subject | 100 | 6086 | 0.96 | 0.92 | 0.81 | 0.8683 |
| subject | 1 | 59 | 0.8 | 0.8 | 0.37 | 0.0195 |
| body | 1 | 773 | 0.94 | 0.93 | 0.85 | 0.0861 |
| body | 0.1 | 78 | 0.83 | 0.83 | 0.52 | 0.0175 |
| __subject+body__ | __1__ | __832__ | __0.94__ | __0.94__ | __0.88__ | __0.103__ |
| subject+body | 0.1 | 84 | 0.84 | 0.84 | 0.55 | 0.0138 |
### Future Considerations and Model Enhancements <a id='model-enhancements'/>
In order for the spam detection to remain effective, the model will need to be re-run and/or updated on a regular basis to ensure that new trends in spam are captured.
The team recommends a future evaluation of feeding the output of DBSCAN into the Naïve Bayes algorithm to optimize the outcome further.
# References <a id='references'/>
[1] What is Spam? | Webopedia
[2] Spam Statistics and Facts (spamlaws.com)
The team drew inspiration from https://towardsdatascience.com/training-a-naive-bayes-model-to-identify-the-author-of-an-email-or-document-17dc85fa630a
# Pytorch
We will be working with PyTorch version 1.4 or higher, so don't forget to check your version.
```
import torch
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Let's start with a small comparison with NumPy.
```
# We can convert an np.array to a torch.Tensor and back
# Watch the data types
np_arr = np.array([[1, 2, 3], [3, 2, 1]])
torch_tensor = torch.Tensor(np_arr)
print('numpy array \n', np_arr)
print('torch tensor \n', torch_tensor)
print('numpy array from torch tensor \n', torch_tensor.numpy())
# Basic operations are the same in numpy and torch
np_arr = np.arange(25).reshape(5, 5)
torch_tensor = torch.arange(25).view(5, 5)
print(np_arr)
print(torch_tensor)
# Matrix multiplication
print(np.dot(np_arr, np_arr.T))
print(torch.matmul(torch_tensor, torch_tensor.permute(1, 0)))
# .permute(1, 0) swaps axes 0 and 1; for 2-D tensors, .t() does the same
# Mean
print(np.mean(np_arr, axis=1))
print(torch.mean(torch_tensor.float(), dim=1))
# We convert to float because torch_tensor is Long, and torch refuses to compute the mean of an integer tensor
# Max
print(np.max(np_arr, axis=1))
print(torch.max(torch_tensor, dim=1))
# Note that torch.max returns not just the maxima but also their indices
```
## NumPy vs Pytorch
```
x.reshape([1,2,8]) -> x.view(1,2,8)
x.sum(axis=-1) -> x.sum(dim=-1)
```
Conversion:
```
torch.from_numpy(x) -- returns a Tensor
x.numpy() -- returns a NumPy array
```
If a tensor consists of a single number, you can extract it as a plain Python number:
```
torch.tensor([[[1]]]).item() -> 1
```
#### Task
Compute $\sin^2(x) + \cos^2(x)$ and plot $\sin(x)$ and $\cos(x)$.
```
x = torch.linspace(0, 2 * np.pi, 16, dtype=torch.float64)
# result = YOUR CODE
import matplotlib.pyplot as plt
%matplotlib inline
# YOUR CODE
plt.show()
```
# Automatic gradients
Every tensor in PyTorch has a `requires_grad` flag that controls automatic gradient computation:
1. Create a variable: `a = torch.tensor(..., requires_grad=True)`
2. Define some differentiable function `loss = whatever(a)`
3. Request the backward pass with `loss.backward()`
4. The gradients will be available in `a.grad`
There are two important differences between PyTorch and TF/Keras:
1. The loss function can be changed dynamically, for example on every minibatch.
2. After calling `.backward()`, the gradients are stored in the `.grad` attribute of every participating variable, and repeated calls accumulate (sum) them. This makes it possible to combine several loss functions or to virtually increase the batch size; it also means the gradients should be zeroed after every optimizer step.
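A minimal sketch of the accumulation behavior; the printed values follow from $d(a^2)/da = 2a$:

```python
import torch

a = torch.tensor([2.0], requires_grad=True)

(a ** 2).sum().backward()
print(a.grad)    # tensor([4.]) — d(a^2)/da at a=2

(a ** 2).sum().backward()
print(a.grad)    # tensor([8.]) — the second call added another 4

a.grad.zero_()   # reset before the next optimizer step
print(a.grad)    # tensor([0.])
```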
```
# Note that gradient computation only works for tensors with a floating-point dtype,
# so this line raises an error:
x = torch.tensor([1, 2, 3, 4], requires_grad=True)
# With a floating-point dtype it works:
x = torch.tensor([1., 2., 3., 4.], requires_grad=True)
```
To set `requires_grad=False` and turn off automatic gradient computation for several tensors, you can use `with torch.no_grad()` or `detach`:
```
x = torch.tensor([1.], requires_grad=True)
y = x**2
print('x.requires_grad', x.requires_grad)
print('y.requires_grad', y.requires_grad)
with torch.no_grad():
    z = torch.exp(x)
print('torch.no_grad(), z.requires_grad', z.requires_grad)
# detach from the graph
w = torch.log(x).detach()
print('.detach(), w.requires_grad', w.requires_grad)
```
### Example
Let's look at linear regression on the Boston dataset. We will predict the house price from the last feature only.
```
from sklearn.datasets import load_boston
x, y = load_boston(return_X_y=True)
x = x[:, -1]
plt.scatter(x, y)
```
##### Task:
Standardize x and y (first subtract the mean, then divide by the standard deviation).
```
# Divide x and y by their std to make things simpler
# x = (x - x.mean())/x.std()
# y = (y - y.mean())/y.std()
# x = YOUR CODE
# y = YOUR CODE
plt.scatter(x, y)
# Define the weights of our regression
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
# Wrap our data in torch tensors
x = torch.from_numpy(x).float()
y = torch.from_numpy(y).float()
# Note that x and y do not require gradients.
# That makes sense: we would not want to change the data to reduce the loss
for vec in [w, b, x, y]:
    print(vec.is_leaf, vec.requires_grad)
```
#### Task:
Build the regression and write a function that computes the MSE.
You have:
1. the feature weight - w
2. the bias - b
3. the feature values - x
4. the target - y
```
y_pred = w * x + b
def MSE(y_pred, y):
    return torch.mean((y_pred - y)**2)
mse = MSE(y_pred, y)
mse.backward()
# After we call .backward(), the model parameters get a .grad attribute
print("dL/dw = \n", w.grad)
print("dL/db = \n", b.grad)
# For tensors with requires_grad=False, .grad is still None
print("Non-Leaf x dL/dx = \n", x.grad)
print("Non-Leaf loss dL/dpred = \n", y_pred.grad)
```
# Linear regression
```
from IPython.display import clear_output
import tqdm
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
for i in tqdm.tqdm(range(200)):
    y_pred = w * x + b
    loss = MSE(y_pred, y)
    loss.backward()

    # Take a gradient step for the model parameters
    learning_rate = 0.1
    w.data = w.data - w.grad * learning_rate
    b.data = b.data - b.grad * learning_rate

    # Don't forget to zero the gradients
    w.grad.zero_()
    b.grad.zero_()

    # Pretty plots are drawn here
    if i % 5 == 0:
        clear_output(True)
        plt.axhline(0, color='gray')
        plt.axvline(0, color='gray')
        plt.scatter(x.numpy(), y.numpy())
        plt.plot(x.numpy(), y_pred.data.numpy(), color='orange')
        plt.show()
        print("loss = ", loss.item())
        if loss.item() < 0.5:
            print("Done!")
            break
```
# Optimizers
In this example we used the simple gradient-descent update rule:
$$W^{n+1} = W^{n} - \alpha \nabla_{W^n}L$$
Its only parameter is $\alpha$, the `learning_rate`.
In practice, various modifications are often used (for example, [Adam](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/)):
A good overview of optimization algorithms for neural networks is available [here](http://ruder.io/optimizing-gradient-descent/).
Optimizers are convenient to use:
- specify the list of variables to optimize
- `opt.step()` updates the weights
- `opt.zero_grad()` resets the gradients
```
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optim = torch.optim.Adam([w, b], lr=0.1)
for i in tqdm.tqdm(range(200)):
    y_pred = w * x + b
    loss = MSE(y_pred, y)
    loss.backward()

    # Take a gradient step for the model parameters
    optim.step()
    optim.zero_grad()

    # Pretty plots are drawn here
    if i % 5 == 0:
        clear_output(True)
        plt.axhline(0, color='gray')
        plt.axvline(0, color='gray')
        plt.scatter(x.numpy(), y.numpy())
        plt.plot(x.numpy(), y_pred.data.numpy(), color='orange')
        plt.show()
        print("loss = ", loss.item())
        if loss.item() < 0.5:
            print("Done!")
            break
```
## High-level API
When moving from one-dimensional linear regression to more complex models, it becomes very inconvenient to keep all the variables in your head and define every layer separately.
The main advantage of PyTorch is its high-level [API](http://pytorch.org/docs/master/nn.html#torch.nn.Module), which makes it easy to define almost any network.
To use a model, subclass `torch.nn.Module`, define the layers, and implement `forward`; `backward` and everything else will be computed automatically.
```
# MNIST again
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=2)
plt.figure(figsize=(10, 10))
for index, (image, label) in enumerate(zip(X_train[:10], y_train[:10])):
    plt.subplot(5, 5, index + 1)
    plt.imshow(image.reshape(8, 8))
    plt.yticks([])
    plt.xticks([])
    plt.title(str(label))
plt.show()
# High-level network definition
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, input_size=64, hidden_size=32, output_size=10):
        super(Net, self).__init__()
        # Define all the layers of the model here, specifying the input and output size of each
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = x.float()
        # Define here the order in which the layers are applied to the data
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        # Softmax to compute the probability of each class
        return F.softmax(x, dim=-1)
```
Now that we are using the high-level API, it is worth mentioning `DataLoader` and `Dataset`.
```
from torch.utils.data import DataLoader, Dataset
# This class will be used to iterate over the data
class DigitSet(Dataset):
    def __init__(self, x, y):
        # Just store the x and y arguments
        self.x = x
        self.y = y

    def __len__(self):
        # So the iterator knows the size of our dataset (which indices it may request)
        return len(self.x)

    def __getitem__(self, index):
        # When iterating over the dataset, this function is called with different indices
        return self.x[index], self.y[index]

mnist_set = DigitSet(X_train, y_train)
image, label = next(iter(mnist_set))
print('image:', image.shape)
print('label:', label)
# DataLoader is a convenient wrapper around Dataset that lets us iterate in batches
train_loader = DataLoader(mnist_set, batch_size=32)
image, label = next(iter(train_loader))
print('image:', image.shape)
print('label:', label)
# Create an instance of the class defined above
model = Net()
batch = next(iter(train_loader))
prediction = model(batch[0])
print('Model outputs: \n', prediction.shape)
print('Pred: \n', torch.max(prediction, dim=1)[1].numpy())
print('Truth: \n', batch[1].numpy())
```
Training the network
To train the network we need:
- an iterator over the data -> DataLoader
- a training function (a pass over the data, computing and applying gradients)
- a validation function (a pass over the test data, computing metrics)
```
# cross-entropy
criterion = torch.nn.CrossEntropyLoss()
def train(x, y, optimizer, model, batchsize=32):
    losses = []
    model.train()
    loader = DataLoader(DigitSet(x, y), batch_size=batchsize)
    for x_batch, y_batch in loader:
        optimizer.zero_grad()
        output = model(x_batch)
        loss = criterion(output, y_batch)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
    return losses

def test(x, y, model, batchsize=1):
    losses = []
    model.eval()
    loader = DataLoader(DigitSet(x, y), batch_size=batchsize)
    for x_batch, y_batch in loader:
        output = model(x_batch)
        loss = criterion(output, y_batch)
        losses.append(loss.item())
    return losses

def plot_history(index, train_history, val_history, title='loss'):
    plt.figure(figsize=(10, 10))
    plt.plot(train_history, label='train loss')
    plt.scatter([i*(len(X_train)//32) for i in range(index+1)], val_history, c='r', marker='o', label='test mean loss')
    plt.legend()
    plt.show()
train_log = []
val_log = []
model = Net()
opt = torch.optim.Adam(model.parameters(), lr=0.001)
batchsize = 32
for epoch in range(20):
    train_loss = train(X_train, y_train, opt, model, batchsize=batchsize)
    train_log.extend(train_loss)
    val_log.append(np.mean(test(X_val, y_val, model)))
    clear_output(True)
    plot_history(epoch, train_log, val_log)
```
#### Task:
Compute the model's accuracy on the test set (take the most probable class as the prediction).
```
from sklearn.metrics import accuracy_score
preds, labels = [], []
model.eval()
loader = DigitSet(X_val, y_val)
for x_batch, y_batch in loader:
    output = model(torch.Tensor(x_batch).unsqueeze(0))
    preds.append(torch.max(output, dim=1)[1].item())
    labels.append(y_batch.item())
print('accuracy_score: ', accuracy_score(preds, labels))
```
Copyright (c) Microsoft Corporation. All rights reserved.
# Tutorial (part 2): Use automated machine learning to build your regression model
This tutorial is **part two of a two-part tutorial series**. In the previous tutorial, you [prepared the NYC taxi data for regression modeling](regression-part1-data-prep.ipynb).
Now, you're ready to start building your model with Azure Machine Learning service. In this part of the tutorial, you will use the prepared data and automatically generate a regression model to predict taxi fare prices. Using the automated ML capabilities of the service, you define your machine learning goals and constraints, launch the automated machine learning process and then allow the algorithm selection and hyperparameter-tuning to happen for you. The automated ML technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion.
In this tutorial, you learn how to:
> * Set up a Python environment and import the SDK packages
> * Configure an Azure Machine Learning service workspace
> * Auto-train a regression model
> * Run the model locally with custom parameters
> * Explore the results
> * Register the best model
If you don’t have an Azure subscription, create a [free account](https://aka.ms/AMLfree) before you begin.
> Code in this article was tested with Azure Machine Learning SDK version 1.0.0
## Prerequisites
> * [Run the data preparation tutorial](regression-part1-data-prep.ipynb)
> * An environment configured for automated machine learning, e.g. Azure Notebooks, a local Python environment, or a Data Science Virtual Machine. [Set up](https://docs.microsoft.com/azure/machine-learning/service/samples-notebooks) automated machine learning.
### Import packages
Import Python packages you need in this tutorial.
```
import azureml.core
import pandas as pd
from azureml.core.workspace import Workspace
from azureml.train.automl.run import AutoMLRun
import time
import logging
```
### Configure workspace
Create a workspace object from the existing workspace. A `Workspace` is a class that accepts your Azure subscription and resource information, and creates a cloud resource to monitor and track your model runs. `Workspace.from_config()` reads the file **aml_config/config.json** and loads the details into an object named `ws`. `ws` is used throughout the rest of the code in this tutorial.
Once you have a workspace object, specify a name for the experiment and create and register a local directory with the workspace. The history of all runs is recorded under the specified experiment.
```
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automated-ml-regression'
# project folder
project_folder = './automated-ml-regression'
import os
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
pd.set_option('display.max_colwidth', -1)
pd.DataFrame(data=output, index=['']).T
```
## Explore data
Utilize the data flow object created in the previous tutorial. Open and execute the data flow and review the results.
```
import azureml.dataprep as dprep
import os
file_path = os.path.join(os.getcwd(), "dflows.dprep")
package_saved = dprep.Package.open(file_path)
dflow_prepared = package_saved.dataflows[0]
dflow_prepared.get_profile()
```
You prepare the data for the experiment by adding columns to `dflow_X` to be the features for model creation. You define `dflow_y` to be the prediction value: cost.
```
dflow_X = dflow_prepared.keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor'])
dflow_y = dflow_prepared.keep_columns('cost')
```
### Split data into train and test sets
Now you split the data into training and test sets using the `train_test_split` function in the `sklearn` library. This function segregates the data into the x (features) data set for model training and the y (values to predict) data set for testing. The `test_size` parameter determines the percentage of data to allocate to testing. The `random_state` parameter sets a seed to the random generator, so that your train-test splits are always deterministic.
```
from sklearn.model_selection import train_test_split
x_df = dflow_X.to_pandas_dataframe()
y_df = dflow_y.to_pandas_dataframe()
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2, random_state=223)
# flatten y_train to 1d array
y_train.values.flatten()
```
You now have the necessary packages and data ready for auto training for your model.
## Automatically train a model
To automatically train a model:
1. Define settings for the experiment run
1. Submit the experiment for model tuning
### Define settings for autogeneration and tuning
Define the experiment parameters and models settings for autogeneration and tuning. View the full list of [settings](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train).
|Property| Value in this tutorial |Description|
|----|----|---|
|**iteration_timeout_minutes**|10|Time limit in minutes for each iteration|
|**iterations**|30|Number of iterations. In each iteration, the model trains with the data with a specific pipeline|
|**primary_metric**|spearman_correlation | Metric that you want to optimize.|
|**preprocess**| True | True enables experiment to perform preprocessing on the input.|
|**verbosity**| logging.INFO | Controls the level of logging.|
|**n_cross_validations**|5|Number of cross-validation splits|
```
automl_settings = {
"iteration_timeout_minutes" : 10,
"iterations" : 30,
"primary_metric" : 'spearman_correlation',
"preprocess" : True,
"verbosity" : logging.INFO,
"n_cross_validations": 5
}
from azureml.train.automl import AutoMLConfig
# local compute
automated_ml_config = AutoMLConfig(task = 'regression',
debug_log = 'automated_ml_errors.log',
path = project_folder,
X = x_train.values,
y = y_train.values.flatten(),
**automl_settings)
```
### Train the automatic regression model
Start the experiment to run locally. Pass the defined `automated_ml_config` object to the experiment, and set `show_output` to `True` to view progress during the experiment.
```
from azureml.core.experiment import Experiment
experiment=Experiment(ws, experiment_name)
local_run = experiment.submit(automated_ml_config, show_output=True)
```
## Explore the results
Explore the results of automatic training with a Jupyter widget or by examining the experiment history.
### Option 1: Add a Jupyter widget to see results
Use the Jupyter notebook widget to see a graph and a table of all results.
```
from azureml.widgets import RunDetails
RunDetails(local_run).show()
```
### Option 2: Get and examine all run iterations in Python
Alternatively, you can retrieve the history of each experiment and explore the individual metrics for each iteration run.
```
children = list(local_run.get_children())
metricslist = {}
for run in children:
    properties = run.get_properties()
    metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
    metricslist[int(properties['iteration'])] = metrics
import pandas as pd
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
```
## Retrieve the best model
Select the best pipeline from our iterations. The `get_output` method on `local_run` returns the best run and the fitted model for the last fit invocation. There are overloads on `get_output` that allow you to retrieve the best run and fitted model for any logged metric or for a particular iteration.
```
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
```
## Register the model
Register the model in your Azure Machine Learning Workspace.
```
description = 'Automated Machine Learning Model'
tags = None
local_run.register_model(description=description, tags=tags)
local_run.model_id # Use this id to deploy the model as a web service in Azure
```
## Test the best model accuracy
Use the best model to run predictions on the test data set. The function `predict` uses the best model, and predicts the values of y (trip cost) from the `x_test` data set. Print the first 10 predicted cost values from `y_predict`.
```
y_predict = fitted_model.predict(x_test.values)
print(y_predict[:10])
```
Create a scatter plot to visualize the predicted cost values compared to the actual cost values. The following code uses the `distance` feature as the x-axis, and trip `cost` as the y-axis. The first 100 predicted and actual cost values are created as separate series, in order to compare the variance of predicted cost at each trip distance value. Examining the plot shows that the distance/cost relationship is nearly linear, and the predicted cost values are in most cases very close to the actual cost values for the same trip distance.
```
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(14, 10))
ax1 = fig.add_subplot(111)
distance_vals = [x[4] for x in x_test.values]
y_actual = y_test.values.flatten().tolist()
ax1.scatter(distance_vals[:100], y_predict[:100], s=18, c='b', marker="s", label='Predicted')
ax1.scatter(distance_vals[:100], y_actual[:100], s=18, c='r', marker="o", label='Actual')
ax1.set_xlabel('distance (mi)')
ax1.set_title('Predicted and Actual Cost/Distance')
ax1.set_ylabel('Cost ($)')
plt.legend(loc='upper left', prop={'size': 12})
plt.rcParams.update({'font.size': 14})
plt.show()
```
Calculate the `root mean squared error` of the results. Use the `y_test` dataframe, and convert it to a list to compare to the predicted values. The function `mean_squared_error` takes two arrays of values, and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable (cost), and indicates roughly how far your predictions are from the actual value.
```
from sklearn.metrics import mean_squared_error
from math import sqrt
rmse = sqrt(mean_squared_error(y_actual, y_predict))
rmse
```
Run the following code to calculate MAPE (mean absolute percent error) using the full `y_actual` and `y_predict` data sets. This metric calculates an absolute difference between each predicted and actual value, sums all the differences, and then expresses that sum as a percent of the total of the actual values.
```
sum_actuals = sum_errors = 0
for actual_val, predict_val in zip(y_actual, y_predict):
abs_error = actual_val - predict_val
if abs_error < 0:
abs_error = abs_error * -1
sum_errors = sum_errors + abs_error
sum_actuals = sum_actuals + actual_val
mean_abs_percent_error = sum_errors / sum_actuals
print("Model MAPE:")
print(mean_abs_percent_error)
print()
print("Model Accuracy:")
print(1 - mean_abs_percent_error)
```
## Next steps
In this automated machine learning tutorial, you:
> * Configured a workspace and prepared data for an experiment
> * Trained using an automated regression model locally with custom parameters
> * Explored and reviewed training results
> * Registered the best model
[Deploy your model](02.deploy-models.ipynb) with Azure Machine Learning.
---
# Import Python modules
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
```
# Generate data for Univariate Decision Tree Regression
```
n = 100 #Number of observations in the training set
trueSplitPoint = 5
trueLowerLevel = 15
trueUpperLevel = 30
x = np.random.uniform(0, 10, n)
def generateY(x, trueLowerLevel, trueUpperLevel, trueSplitPoint, sd):
if x < trueSplitPoint:
y = np.random.normal(trueLowerLevel, sd, 1)
else:
y = np.random.normal(trueUpperLevel, sd, 1)
return y[0]
y = [generateY(i, trueLowerLevel, trueUpperLevel, trueSplitPoint, 3) for i in x]
data = pd.DataFrame({'X':x, 'Y':y}) #Put the two arrays into a dataframe to make it easy to work with
data.head(10) #Inspect the first ten rows of our dataset
```
# Quickly plot the data so we know what it looks like
```
plt.scatter(data['X'], data['Y'])
plt.show()
```
Data appears to be generated by two distinct processes (as indeed it is).
For the simplest version of a decision tree, we want to identify the split point. When predicting the (continuous) label for a new observation, we assign one value if the feature is less than the split point and another value if it is greater, so the set of possible predictions has size two. Clearly this is a very simple regression technique, and it's unlikely we'd use it on its own in a real scenario, but we've generated the data in a deliberately contrived way so that it should perform reasonably here. We'll also see later that decision trees can approximate non-linear functions when combined with techniques such as bagging.
Mathematically, we want to learn a function $f: \mathbb{R} \rightarrow \mathbb{R}$, such that:
\begin{equation}
f(x)=\begin{cases}
c_0, & \text{if $x<c_{split}$}.\\
c_1, & \text{otherwise}.
\end{cases}
\end{equation}
Our aim is to find the best combination of values for $c_0$, $c_1$, and $c_{split}$.
# How we're going to fit the model
We can't use a maximum-likelihood style approach because we don't make any probabilistic assumptions. Instead, we'll test lots of candidate split points; for each split point we'll calculate $c_0$ and $c_1$ by taking all of the training points on the relevant side of the split and computing the mean of their target values.
That is
$$c_0 \mid c_{split} = \frac{\sum_{i:\, x_i < c_{split}} y_i}{\sum_{i:\, x_i < c_{split}} 1}$$
$$c_1 \mid c_{split} = \frac{\sum_{i:\, x_i \geq c_{split}} y_i}{\sum_{i:\, x_i \geq c_{split}} 1}$$
We'll then choose the values of $c_0$, $c_1$, and $c_{split}$ which minimise the squared error on the training set.
```
class univariateDecisionTree:
def __init__(self, data, target, feature, trainTestRatio = 0.9):
#data - a pandas dataset
#target - the name of the pandas column which contains the true labels
#feature - the name of the pandas column which we will use to do the regression
#trainTestRatio - the proportion of the entire dataset which we'll use for training
# - the rest will be used for testing
self.target = target
self.feature = feature
#Split up data into a training and testing set
self.train, self.test = train_test_split(data, test_size=1-trainTestRatio)
#Generate mesh of split points to try. We want the mesh to be of a decent size (say 1000 points)
#Think about what reasonable end points of the mesh would be - what would happen if we considered
#a point that was smaller than the smallest point in the training set as a split point?
#np.linspace() is a good function for constructing this 1-d mesh
meshMin = np.min(self.train[self.feature])
meshMax = np.max(self.train[self.feature])
self.splitPointMesh = np.linspace(meshMin, meshMax, 1000)
def computeMeansGivenSplitPoint(self, splitPoint):
#Given a split point, we want to split the training set in two
#One containing all the points below the split point and one containing all the points above the split point
#The means are then the means of the targets in those datasets and they are the values we want to return
pass
def computeSquaredError(self, splitPoint):
#Once we have a split point and a set of means, we need to have some way of identifying whether it's
#a good split point or not
        #First apply computeMeansGivenSplitPoint to get the mean values below and above the split point
#Then compute the sum of squared errors yielded by assigning the corresponding mean to each point in the training set
#If we add these two sums of squares together then we have a single error number which indicates how good our split point is
pass
def fitDT(self):
#Calculate the squared error for each split point in the mesh,
#and save the values (splitPoint and two means) obtained by the best one
pass
def predict(self, X):
pass
myModel = univariateDecisionTree(data, 'Y', 'X')
myModel.fitDT()
```
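The notebook leaves the class methods above as an exercise. For reference, here is one possible completion, written as standalone functions following the mean/squared-error recipe described earlier. This is a sketch, not the only valid answer.

```python
import numpy as np

def fit_stump(x, y, n_mesh=1000):
    """Exhaustively search a mesh of split points; return the split point
    and the two side means that minimise training squared error."""
    x, y = np.asarray(x), np.asarray(y)
    best_split, best_c0, best_c1, best_sse = None, None, None, np.inf
    for s in np.linspace(x.min(), x.max(), n_mesh):
        below, above = y[x < s], y[x >= s]
        if len(below) == 0 or len(above) == 0:
            continue  # a split with an empty side is degenerate
        c0, c1 = below.mean(), above.mean()
        sse = ((below - c0) ** 2).sum() + ((above - c1) ** 2).sum()
        if sse < best_sse:
            best_split, best_c0, best_c1, best_sse = s, c0, c1, sse
    return best_split, best_c0, best_c1

def predict_stump(x_new, c_split, c0, c1):
    """Assign c0 below the split point and c1 at or above it."""
    return np.where(np.asarray(x_new) < c_split, c0, c1)

# Sanity check on deterministic data: levels 15 and 30 with a gap in (3, 7)
x = np.array([1.0, 2.0, 3.0, 7.0, 8.0, 9.0])
y = np.array([15.0, 15.0, 15.0, 30.0, 30.0, 30.0])
c_split, c0, c1 = fit_stump(x, y)
print(c_split, c0, c1)  # c0 is 15.0, c1 is 30.0; the split falls between 3 and 7
```

The same logic drops straight into `computeMeansGivenSplitPoint`, `computeSquaredError`, `fitDT`, and `predict` if you prefer to fill in the class itself.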
# Now let's see what the results look like
We'll plot the regression line we obtained against the values we observed in the training set
```
regLine = myModel.predict(myModel.splitPointMesh)
plt.scatter(myModel.train[myModel.feature], myModel.train[myModel.target], label = 'Training set')
plt.scatter(myModel.splitPointMesh, regLine, label = 'Regression Line')
plt.legend()
plt.show()
```
---
To plot the final figures
```
import numpy as np
from scipy.spatial.distance import cdist # For calculating QPSK decoding
# import dill
from itertools import product, cycle
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# import tensorflow.keras.backend as K
# The one who steals the data
def robinhood(fig, filename=None, lineIdx=None):
nAxis = len(fig.axes)
print("nAxis = {}".format(nAxis))
for _ax in fig.axes:
nLines = len(_ax.lines)
print(" nLines = {}".format(nLines))
for _line in _ax.lines:
print(_line.get_label(), "=>", _line.get_xdata().shape, ":", _line.get_ydata().shape)
# fig.axes[0].lines[0]._label
# fig.axes[0].lines[0].get_label()/get_xdata()/get_ydata()
blkSize = 4
chDim = 2
# Input
inVecDim = 2 ** blkSize # 1-hot vector length for block
encDim = 2*chDim
SNR_range_dB = np.arange( 0.0, 11.0, 1.0 )
one_hot_code = np.eye(inVecDim)
```
Traditional Methods
```
results_traditional = {}
qam_map = np.genfromtxt("./sphere_data/{:03d}x{:03d}_qam.csv".format(inVecDim,encDim))
qam_sym_pow = np.mean(np.sum(qam_map*qam_map,axis=1))
print( "QAM Avg. Tx Power:", qam_sym_pow )
noisePower = qam_sym_pow * 10.0**(-SNR_range_dB/10.0)
n0_per_comp = noisePower/(2*chDim)
qam_d_min = np.unique(cdist(qam_map,qam_map))[1]
print("d_min:", qam_d_min )
qam_en = qam_sym_pow / (qam_d_min**2)
print("En:", qam_en)
err = []
for n0 in n0_per_comp:
thisErr = 0
thisCount = 0
while thisErr < 500:
txSym = np.random.randint(inVecDim, size=1000)
txTest = qam_map[txSym]
rxTest = txTest + np.random.normal(scale=np.sqrt(n0), size=txTest.shape)
rxDecode = cdist(rxTest, qam_map)
rxSym = np.argmin(rxDecode,axis=1)
thisErr += np.sum(rxSym!=txSym)
thisCount += 1000
err.append(thisErr/thisCount)
results_traditional["QAM"] = {
"en": qam_en,
"dmin": qam_d_min,
"sym_pow": qam_sym_pow,
"bler": np.array(err)
}
agrell_map = np.genfromtxt("./sphere_data/{:03d}x{:03d}_agrell.csv".format(inVecDim,encDim))
agrell_sym_pow = np.mean(np.sum(agrell_map*agrell_map,axis=1))
print( "QAM Avg. Tx Power:", agrell_sym_pow )
noisePower = agrell_sym_pow * 10.0**(-SNR_range_dB/10.0)
n0_per_comp = noisePower/(2*chDim)
agrell_d_min = np.unique(cdist(agrell_map,agrell_map))[1]
print("d_min:", agrell_d_min )
agrell_en = agrell_sym_pow / (agrell_d_min**2)
print("En:", agrell_en)
err = []
for n0 in n0_per_comp:
thisErr = 0
thisCount = 0
while thisErr < 500:
txSym = np.random.randint(inVecDim, size=1000)
txTest = agrell_map[txSym]
rxTest = txTest + np.random.normal(scale=np.sqrt(n0), size=txTest.shape)
rxDecode = cdist(rxTest, agrell_map)
rxSym = np.argmin(rxDecode,axis=1)
thisErr += np.sum(rxSym!=txSym)
thisCount += 1000
err.append(thisErr/thisCount)
results_traditional["Agrell"] = {
"en": agrell_en,
"d_min": agrell_d_min,
"sym_pow": agrell_sym_pow,
"bler": np.array(err)
}
```
Deep Learning models
```
model_summary = {}
results = {}
if blkSize==8 and chDim==4:
model_summary = {
"[4]": "./models_08x04/awgn_oshea_64_32_16_10dB_summary.h5",
"Proposed: Trained with (19)": "./models_08x04/awgn_awgn_64_32_16_n080_summary.h5",
"Proposed: Trained with (26)": "./models_08x04/awgn_rbf_64_32_16_n080_summary.h5",
# "(19) with $\sigma_0^2 = 0.10$": "./models/08x04/sigma2_010/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 0.10$": "./models/08x04/sigma2_010/awgn_rbf_64_32_16_n080_summary.dil",
# "(19) with $\sigma_0^2 = 0.50$": "./models/08x04/sigma2_050/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 0.50$": "./models/08x04/sigma2_050/awgn_rbf_64_32_16_n080_summary.dil",
# "(19) with $\sigma_0^2 = 1.50$": "./models/08x04/sigma2_150/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 1.50$": "./models/08x04/sigma2_150/awgn_rbf_64_32_16_n080_summary.dil"
}
elif blkSize==4 and chDim==2:
model_summary = {
# For PPT
        "Oshea 2017": "./models_04x02/awgn_oshea_04x02_64_32_16_10dB_summary.h5",
"Proposed: AWGN Objective": "./models_04x02/awgn_awgn_04x02_64_32_16_n040_summary.h5",
"Proposed: RBF Objective": "./models_04x02/awgn_rbf_04x02_64_32_16_n040_summary.h5",
# "[4]": "./models_04x02/awgn_oshea_04x02_64_32_16_10dB_summary.h5",
# "Proposed: Trained with (19)": "./models_04x02/awgn_awgn_04x02_64_32_16_n040_summary.h5",
# "Proposed: Trained with (26)": "./models_04x02/awgn_rbf_04x02_64_32_16_n040_summary.h5",
# "(19) with $\sigma_0^2 = 0.10$": "./models/04x02/sigma2_010/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 0.10$": "./models/04x02/sigma2_010/awgn_rbf_64_32_16_n080_summary.dil",
# "(19) with $\sigma_0^2 = 0.50$": "./models/04x02/sigma2_050/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 0.50$": "./models/04x02/sigma2_050/awgn_rbf_64_32_16_n080_summary.dil",
# "(19) with $\sigma_0^2 = 1.50$": "./models/04x02/sigma2_150/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 1.50$": "./models/04x02/sigma2_150/awgn_rbf_64_32_16_n080_summary.dil",
}
elif blkSize==2 and chDim==1:
model_summary = {
# For PPT
        "Oshea 2017": "./models_02x01/awgn_oshea_02x01_64_32_16_10dB_summary.h5",
"Proposed: AWGN Objective": "./models_02x01/awgn_awgn_02x01_64_32_16_n020_summary.h5",
"Proposed: RBF Objective": "./models_02x01/awgn_rbf_02x01_64_32_16_n020_summary.h5",
# "[4]": "./models_02x01/awgn_oshea_02x01_64_32_16_10dB_summary.h5",
# "Proposed: Trained with (19)": "./models_02x01/awgn_awgn_02x01_64_32_16_n020_summary.h5",
# "Proposed: Trained with (26)": "./models_02x01/awgn_rbf_02x01_64_32_16_n020_summary.h5",
# "(19) with $\sigma_0^2 = 0.10$": "./models/02x01/sigma2_010/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 0.10$": "./models/02x01/sigma2_010/awgn_rbf_64_32_16_n080_summary.dil",
# "(19) with $\sigma_0^2 = 0.50$": "./models/02x01/sigma2_050/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 0.50$": "./models/02x01/sigma2_050/awgn_rbf_64_32_16_n080_summary.dil",
# "(19) with $\sigma_0^2 = 1.50$": "./models/02x01/sigma2_150/awgn_awgn_64_32_16_n080_summary.dil",
# "(23) with $\sigma_0^2 = 1.50$": "./models/02x01/sigma2_150/awgn_rbf_64_32_16_n080_summary.dil"
}
else:
raise NotImplementedError("Not implemented (blkSize={},chDim={})".format(blkSize,chDim))
import os.path
for (model_exp, summary_file) in model_summary.items():
log_msg = "{:40s} {:70s}".format(model_exp,summary_file)
if os.path.isfile(summary_file):
log_msg += "EXISTS"
else:
log_msg += "NOT FOUND"
print(log_msg)
```
Load the results from summary files
```
for (label, summary_file) in model_summary.items():
results[label] = pd.read_hdf(summary_file, 'table')
```
Plot the results: Packing Density
```
colors = cycle(['b', 'g', 'c', 'r', 'm', 'y'])
fig = plt.figure(figsize=(4*1.5,3*1.5))
# Plot lines for traditional methods
plt.plot(2*[results_traditional["QAM"]["en"]], [0,1], linewidth=3, label="QAM", color=next(colors), linestyle="-.")
# plt.plot(2*[results_traditional["Agrell"]["en"]], [0,1], linewidth=3, label="Agrell [17]", color=next(colors), linestyle="-.")
plt.plot(2*[results_traditional["Agrell"]["en"]], [0,1], linewidth=3, label="Agrell", color=next(colors), linestyle="-.")
for (label, result) in results.items():
clr = next(colors)
sns.distplot(result["en"], label=label, color=clr,
bins=100,
rug=False,
kde=True,
kde_kws=dict(cumulative=True,
linestyle=":" if "Oshea" in label or "[1]" in label else "-"),
hist=False,
hist_kws=dict(cumulative=True,
density=True,
histtype="step",
linestyle=":" if "Oshea" in label or "[1]" in label else "-",
linewidth=2,
color=clr, alpha=1.0))
# Experiment specific axis limits
if blkSize==8 and chDim==4:
plt.xlim([0.95*results_traditional["Agrell"]["en"], 2.5*results_traditional["Agrell"]["en"]])
elif blkSize==4 and chDim==2:
pass
elif blkSize==2 and chDim==1:
pass
else:
raise NotImplementedError("Not implemented (blkSize={},chDim={})".format(blkSize,chDim))
plt.xlabel("$E_n$", fontdict={'fontsize':16})
plt.ylabel("CDF", fontdict={'fontsize':16})
plt.grid()
plt.legend(loc='upper left', prop={'size':14})
plt.savefig("output_awgn_en_{:02d}x{:02d}.pdf".format(blkSize,chDim), format='pdf', bbox_inches='tight')
# fig.axes[0].lines[0]._label
# fig.axes[0].lines[0].get_label()/get_xdata()/get_ydata()
robinhood(fig)
```
Plot results: BLER
```
colors = cycle(['b', 'g', 'c', 'r', 'm', 'y'])
fig = plt.figure(figsize=(4*1.5,3*1.5))
plt.semilogy(SNR_range_dB, results_traditional["QAM"]["bler"], label="QAM", color=next(colors), linestyle="-.")
# plt.semilogy(SNR_range_dB, results_traditional["Agrell"]["bler"], label="Agrell [17]", color=next(colors), linestyle="-.")
plt.semilogy(SNR_range_dB, results_traditional["Agrell"]["bler"], label="Agrell", color=next(colors), linestyle="-.")
for (label, result) in results.items():
best_model_id = result['en'].idxmin() # Find the model with best E_n
plt.semilogy(SNR_range_dB,
result.loc[best_model_id]["bler"],
label=label, color=next(colors), linewidth=2,
linestyle=":" if "Oshea" in label or "[1]" in label else "-")
plt.legend(loc="lower left", prop={'size':14})
plt.grid()
# plt.title("Best observed BLER of trained models", fontdict={'fontsize':18})
plt.xlabel("SNR ($dB$)", fontdict={'fontsize':16})
plt.ylabel("BLER", fontdict={'fontsize':16})
plt.ylim((1e-3,1e0))
plt.savefig("output_awgn_best_bler_{:02d}x{:02d}.pdf".format(blkSize,chDim), format='pdf', bbox_inches='tight')
```
Plot results: BLER with confidence
```
colors = cycle(['b', 'g', 'c', 'r', 'm', 'y'])
fig = plt.figure(figsize=(4*1.5,3*1.5))
plt.semilogy(SNR_range_dB, results_traditional["QAM"]["bler"], label="QAM", color=next(colors), linestyle="-.")
plt.semilogy(SNR_range_dB, results_traditional["Agrell"]["bler"], label="Agrell [17]", color=next(colors), linestyle="-.")
for (label, result) in results.items():
clr = next(colors)
bler_mean = result["bler"].mean()
bler_std = np.array(result['bler']).std()
plt.fill_between(SNR_range_dB, bler_mean+bler_std, bler_mean-bler_std, alpha=0.1, color=clr)
plt.semilogy(SNR_range_dB,
result["bler"].mean(),
label=label, color=clr, linewidth=2,
linestyle=":" if "Oshea" in label or "[1]" in label else "-")
plt.legend(loc="lower left", prop={'size':14})
plt.grid()
# plt.title("Best observed BLER of trained models", fontdict={'fontsize':18})
plt.xlabel("SNR ($dB$)", fontdict={'fontsize':16})
plt.ylabel("BLER", fontdict={'fontsize':16})
plt.ylim((1e-4,1e0))
plt.savefig("output_awgn_avg_bler_{:02d}x{:02d}.pdf".format(blkSize,chDim), format='pdf', bbox_inches='tight')
```
---
# Experiment Initialization
Here, I define the terms of my experiment, among them the location of the files in S3 (bucket and folder name), and each of the video prefixes (everything before the file extension) that I want to track.
Note that these videos should be broadly similar: while we can account for differences in mean intensity between videos, particle sizes should be approximately the same, and (slightly less important) particles should be moving at speeds of about the same order of magnitude. In this experiment, the videos were taken in 0.4% agarose gel at 100x magnification and 100.02 fps with nanoparticles of about 100 nm in diameter.
```
to_track = []
result_futures = {}
start_knot = 1 #Must be unique number for every run on Cloudknot.
remote_folder = 'Tissue_Studies/10_23_18_corona_tissue' #Folder in AWS S3 containing files to be analyzed
bucket = 'ccurtis.data'
vids = 10
types = ['PEG_cor', 'PEG', 'PS_cor', 'PS']
pups = ['P1', 'P2']
slices = ['S1', 'S2']
for typ in types:
for pup in pups:
for slic in slices:
for num in range(1, vids+1):
#to_track.append('100x_0_4_1_2_gel_{}_bulk_vid_{}'.format(vis, num))
to_track.append('{}_{}_{}_XY{}'.format(typ, pup, slic, '%02d' % num))
to_track
```
The videos used in this analysis are fairly large (2048 x 2048 pixels and 651 frames), and at this scale the tracking algorithm can quickly eat up RAM. We therefore chose to crop the videos into 512 x 512 images so that we can run our jobs on smaller EC2 instances with 16GB of RAM.
Note that larger jobs can be handled with user-defined functions such that splitting isn't necessary, or with an intermediate approach in which the splitting, tracking, and MSD calculation functions are all performed on a single EC2 instance.
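As a rough illustration of the cropping step (the real work is done by `diff_classifier.knotlets.split` on the S3-hosted videos; this just sketches the 4 x 4 tiling of one 2048 x 2048 frame with NumPy):

```python
import numpy as np

# Stand-in for one 2048 x 2048 video frame; a real video adds a time axis
frame = np.zeros((2048, 2048), dtype=np.uint16)

# Cut the frame into a 4 x 4 grid of 512 x 512 tiles, keyed by (row, col)
tiles = {(i, j): frame[i*512:(i+1)*512, j*512:(j+1)*512]
         for i in range(4) for j in range(4)}
print(len(tiles), tiles[(0, 0)].shape)
```

Each tile then becomes its own tracking job, which is why the prefixes below carry `_row_col` suffixes.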
The compiled functions in the knotlets module require access to buckets on AWS. In this case, we will be using a public (read-only) bucket. Users who want to run this notebook themselves will have to transfer the files from nancelab.publicfiles to their own bucket, since the workflow requires writing to S3.
```
import diff_classifier.knotlets as kn
for prefix in to_track:
kn.split(prefix, remote_folder=remote_folder, bucket=bucket)
```
## Tracking predictor
Tracking normally requires user input in the form of tracking parameters, e.g. particle radius, linking max distance, max frame gap, etc. For small datasets, each video can be manageably tracked by hand using the TrackMate GUI. However, when datasets get large (e.g. >20 videos), this becomes extremely arduous. For videos that are fairly similar, you can get away with using similar tracking parameters across all of them. However, one parameter that is a little noisier than the others is the quality filter value. Quality is a numerical value that approximates how likely a particle is to be "real."
In this case, I built a predictor that estimates the quality filter value based on intensity distributions from the input images. Using a relatively small training dataset (5-20 videos), users can get fairly good estimates of quality filter values that can be used in parallelized tracking workflows.
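As a generic illustration of the idea (hypothetical feature matrix and cutoffs; the real work is done by `ij.regress_sys` below, which builds its features from the image intensity distributions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Hypothetical training data: one row of intensity-distribution features
# per manually tracked video, with the quality cutoff found in the GUI as y
X_train = rng.random((20, 3))
y_train = X_train @ np.array([2.0, 5.0, 1.0]) + 1.5
model = LinearRegression().fit(X_train, y_train)

# Predicted quality filter values for new, untracked videos
X_new = rng.random((4, 3))
print(model.predict(X_new))
```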
Note: in the current setup, the predictor should be run in Python 3. While the code will also run in Python 2, there are differences between the random number generators in Python 2 and Python 3 that I was not able to control for.
```
import os
import diff_classifier.imagej as ij
import boto3
import os.path as op
import diff_classifier.aws as aws
import diff_classifier.knotlets as kn
import numpy as np
from sklearn.externals import joblib
```
The regress_sys function should be run twice. When have_output is set to False, it generates a list of files that the user should manually track using TrackMate. Once the quality filter values are found, they can be used as input (y) to generate a regress object that can predict quality filter values for additional videos. Once y is assigned, set have_output to True and re-run the cell.
```
tnum=25 #number of training datasets
pref = []
for num in to_track:
for row in range(0, 4):
for col in range(0, 4):
pref.append("{}_{}_{}".format(num, row, col))
y = np.array([1.5, 3.9, 1.5, 1.5, 1.6, 1.6, 6.2, 3.0, 5.7, 1.5, 5.5, 3.8, 1.7, 2.5, 1.6, 6.3, 3.5, 1.5, 14, 1.5, 1.5, 1.5,
1.5, 4.9, 5.7])
# Creates regression object based of training dataset composed of input images and manually
# calculated quality cutoffs from tracking with GUI interface.
regress = ij.regress_sys(remote_folder, pref, y, tnum, randselect=True,
have_output=True, bucket_name=bucket)
#Read up on how regress_sys works before running.
#Pickle object
filename = 'regress1.obj'
with open(filename,'wb') as fp:
joblib.dump(regress,fp)
import boto3
s3 = boto3.client('s3')
aws.upload_s3(filename, remote_folder+'/'+filename, bucket_name=bucket)
```
Users should input all tracking parameters into the tparams object. Note that the quality value will be overwritten by values found using the quality predictor found above.
```
tparams1 = {'radius': 5.0, 'threshold': 0.0, 'do_median_filtering': False,
'quality': 5.0, 'xdims': (0, 511), 'ydims': (1, 511),
'median_intensity': 300.0, 'snr': 0.0, 'linking_max_distance': 11.0,
'gap_closing_max_distance': 18.0, 'max_frame_gap': 7,
'track_duration': 20.0}
# tparams2 = {'radius': 4.0, 'threshold': 0.0, 'do_median_filtering': False,
# 'quality': 10.0, 'xdims': (0, 511), 'ydims': (1, 511),
# 'median_intensity': 300.0, 'snr': 0.0, 'linking_max_distance': 8.0,
# 'gap_closing_max_distance': 12.0, 'max_frame_gap': 6,
# 'track_duration': 20.0}
```
## Cloudknot setup
Cloudknot requires the user to define a function that will be sent to multiple computers to run. In this case, the function knotlets.tracking will be used. We create a Docker image with the required installations, defined by the requirements.txt file from diff_classifier on GitHub, on top of the base Docker image below, which has Fiji pre-installed in the correct location.
Note that I modify the Docker image below so that a compatible version of boto3 is installed. For some reason, versions later than 1.5.28 error out, so I pin the version to 1.5.28. Run my_image.build below to double-check that the Docker image builds successfully prior to submitting the job to Cloudknot.
```
import cloudknot as ck
import os.path as op
github_installs=('https://github.com/ccurtis7/diff_classifier.git@Chad')
#my_image = ck.DockerImage(func=kn.tracking, base_image='arokem/python3-fiji:0.3', github_installs=github_installs)
my_image = ck.DockerImage(func=kn.assemble_msds, base_image='arokem/python3-fiji:0.3', github_installs=github_installs)
docker_file = open(my_image.docker_path)
docker_string = docker_file.read()
docker_file.close()
req = open(op.join(op.split(my_image.docker_path)[0], 'requirements.txt'))
req_string = req.read()
req.close()
new_req = req_string[0:req_string.find('\n')-4]+'5.28'+ req_string[req_string.find('\n'):]
req_overwrite = open(op.join(op.split(my_image.docker_path)[0], 'requirements.txt'), 'w')
req_overwrite.write(new_req)
req_overwrite.close()
my_image.build("0.1", image_name="test_image")
```
The object all_maps is an iterable containing all the inputs sent to Cloudknot. This is useful, because if the user needs to modify some of the tracking parameters for a single video, this can be done prior to submission to Cloudknot.
```
names = []
all_maps = []
for prefix in to_track:
for i in range(0, 4):
for j in range(0, 4):
names.append('{}_{}_{}'.format(prefix, i, j))
all_maps.append(('{}_{}_{}'.format(prefix, i, j), remote_folder, bucket, 'regress1.obj',
4, 4, (512, 512), tparams1))
all_maps
len(all_maps)
```
The Cloudknot knot object sets up the compute environment which will run the code. Note that the name must be unique. Every time you submit a new knot, you should change the name. I do this with the variable start_knot, which I vary for each run.
If larger jobs are anticipated, users can adjust both RAM and storage with the memory and image_id variables. Memory specifies the amount of RAM to be used. Users can build a customized AMI with as much storage as they need and enter its ID into image_id. Read the Cloudknot documentation for more details.
```
ck.aws.set_region('us-east-1')
knot = ck.Knot(name='{}_b{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures = knot.map(all_maps[0:500], starmap=True)
result_futures2 = knot.map(all_maps[500:1000], starmap=True)
ck.aws.set_region('us-east-1')
knot10 = ck.Knot(name='{}_k{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures10 = knot10.map(all_maps[1600:], starmap=True)
ck.aws.set_region('us-east-1')
knot10.clobber()
ck.aws.set_region('us-east-2')
knot = ck.Knot(name='{}_c{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures = knot.map(all_maps[1000:1750], starmap=True)
ck.aws.set_region('us-east-2')
knot.clobber()
ck.aws.set_region('us-west-1')
knot3 = ck.Knot(name='{}_d{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures3 = knot3.map(all_maps[1750:], starmap=True)
ck.aws.set_region('us-west-1')
knot3.clobber()
ck.aws.set_region('eu-west-1')
knot4 = ck.Knot(name='{}_e{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures4 = knot4.map(all_maps[1000:1200], starmap=True)
ck.aws.set_region('eu-west-1')
knot4.clobber()
ck.aws.set_region('eu-west-2')
knot5 = ck.Knot(name='{}_f{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures5 = knot5.map(all_maps[1200:1400], starmap=True)
ck.aws.set_region('eu-west-2')
knot5.clobber()
ck.aws.set_region('ap-south-1')
knot8 = ck.Knot(name='{}_h{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures8 = knot8.map(all_maps[1400:1600], starmap=True)
ck.aws.set_region('ap-south-1')
knot8.clobber()
ck.aws.set_region('us-east-1')
knot20 = ck.Knot(name='{}_l{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures20 = knot20.map(all_maps3, starmap=True)
knot20.clobber()
missing = []
all_maps3 = []
import boto3
import botocore
s3 = boto3.resource('s3')
for name in names:
try:
s3.Object(bucket, '{}/Traj_{}.csv'.format(remote_folder, name)).load()
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "404":
missing.append(name)
all_maps3.append((name, remote_folder, bucket, 'regress1.obj',
4, 4, (512, 512), tparams1))
else:
print('Something else has gone wrong')
len(all_maps3)
names = []
all_maps2 = []
for prefix in to_track:
all_maps2.append((prefix, remote_folder, bucket, (512, 512), 651, 4, 4))
all_maps2
ck.aws.set_region('us-east-1')
knot21 = ck.Knot(name='msds_{}_b{}'.format('Chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures21 = knot21.map(all_maps2, starmap=True)
knot21.clobber()
knot3.clobber()
import diff_classifier.heatmaps as hm
hm.plot_trajectories('5mM_0_15xs_XY01', upload=True, bucket=bucket, remote_folder=remote_folder)
ck.aws.set_region('us-west-1')
ck.aws.set_region('us-east-1')
result_futures3 = knot3.map(all_maps2, starmap=True)
knot3.clobber()
knot4 = ck.Knot(name='download_and_track_{}_b{}'.format('chadj', start_knot),
docker_image = my_image,
memory = 144000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures4 = knot4.map(all_maps2, starmap=True)
for i in range(0, 4):
for j in range(0, 4):
names.append('{}_{}_{}.tif'.format(prefix, i, j))
print(prefix)
for name in names:
row = int(name.split(prefix)[1].split('.')[0].split('_')[1])
all_maps2
ck.aws.get_region()
knot3 = ck.Knot(name='download_and_track_{}_b{}'.format('chad33', start_knot),
docker_image = my_image,
memory = 64000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-015a1b4cd3895860b', #May need to change this line
pars_policies=('AmazonS3FullAccess',),
)
result_futures5 = knot3.map(all_maps2, starmap=True)
missing = []
all_maps2 = []
import boto3
import botocore
s3 = boto3.resource('s3')
for name in names:
try:
s3.Object(bucket, '{}/Traj_{}.csv'.format(remote_folder, name)).load()
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "404":
missing.append(name)
#all_maps2.append((name, remote_folder, bucket, 'regress.obj',
# 4, 4, (512, 512), tparams2))
else:
print('Something else has gone wrong')
knot3.clobber()
ck.aws.get_region()
ck.aws.set_region('us-east-1')
ck.aws.get_region()
knot2 = ck.Knot(name='download_and_track_{}_b{}'.format('chad2', start_knot),
docker_image = my_image,
memory = 144000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-0e00afdf500081a0d', #May need to change this line
pars_policies=('AmazonS3FullAccess',))
result_futures2 = knot2.map(all_maps, starmap=True)
knot3 = ck.Knot(name='download_and_track_{}_b{}'.format('chad3', start_knot),
docker_image = my_image,
memory = 144000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-0e00afdf500081a0d', #May need to change this line
pars_policies=('AmazonS3FullAccess',))
result_futures3 = knot3.map(all_maps, starmap=True)
knot4 = ck.Knot(name='download_and_track_{}_b{}'.format('chad4', start_knot),
docker_image = my_image,
memory = 144000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-0e00afdf500081a0d', #May need to change this line
pars_policies=('AmazonS3FullAccess',))
result_futures4 = knot4.map(all_maps, starmap=True)
knot5 = ck.Knot(name='download_and_track_{}_b{}'.format('chad5', start_knot),
docker_image = my_image,
memory = 144000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-0e00afdf500081a0d', #May need to change this line
pars_policies=('AmazonS3FullAccess',))
result_futures5 = knot5.map(all_maps, starmap=True)
ck.aws.set_region('eu-west-1')
knot5.clobber()
tparams2 = {'radius': 3.5, 'threshold': 0.0, 'do_median_filtering': False,
'quality': 10.0, 'xdims': (0, 511), 'ydims': (1, 511),
'median_intensity': 300.0, 'snr': 0.0, 'linking_max_distance': 15.0,
'gap_closing_max_distance': 22.0, 'max_frame_gap': 5,
'track_duration': 20.0}
all_maps3
import diff_classifier.aws as aws
old_folder = 'Gel_Studies/08_14_18_gel_validation/old_msds2'
for name in missing:
filename = 'Traj_{}.csv'.format(name)
aws.download_s3('{}/{}'.format(old_folder, filename), filename, bucket_name=bucket)
aws.upload_s3(filename, '{}/{}'.format(remote_folder, filename), bucket_name=bucket)
```
Users can monitor the progress of their job in the Batch interface. Once the code is complete, users should clobber their knot to make sure that all AWS resources are removed.
```
knot.clobber()
```
## Downstream analysis and visualization
The knotlets.assemble_msds function (which can also be submitted to Cloudknot for large jobs) calculates the mean squared displacements and trajectory features from the raw trajectory csv files produced by the Cloudknot submission. It accesses them from the S3 bucket to which they were saved.
```
for prefix in to_track:
kn.assemble_msds(prefix, remote_folder, bucket=bucket)
print('Successfully output msds for {}'.format(prefix))
for prefix in to_track[5:7]:
kn.assemble_msds(prefix, remote_folder, bucket='ccurtis.data')
print('Successfully output msds for {}'.format(prefix))
all_maps2 = []
for prefix in to_track:
all_maps2.append((prefix, remote_folder, bucket, 'regress100.obj',
4, 4, (512, 512), tparams))
knot = ck.Knot(name='download_and_track_{}_b{}'.format('chad', start_knot),
docker_image = my_image,
memory = 16000,
resource_type = "SPOT",
bid_percentage = 100,
#image_id = 'ami-0e00afdf500081a0d', #May need to change this line
pars_policies=('AmazonS3FullAccess',))
```
Diff_classifier also includes useful visualization tools, including checking trajectories, plotting heatmaps of trajectory features, distributions of diffusion coefficients, and MSD plots.
```
import diff_classifier.heatmaps as hm
import diff_classifier.aws as aws
prefix = to_track[1]
msds = 'msd_{}.csv'.format(prefix)
feat = 'features_{}.csv'.format(prefix)
aws.download_s3('{}/{}'.format(remote_folder, msds), msds, bucket_name=bucket)
aws.download_s3('{}/{}'.format(remote_folder, feat), feat, bucket_name=bucket)
hm.plot_trajectories(prefix, upload=False, figsize=(8, 8))
geomean, geoSEM = hm.plot_individual_msds(prefix, x_range=10, y_range=300, umppx=1, fps=1, upload=False)
hm.plot_heatmap(prefix, upload=False)
hm.plot_particles_in_frame(prefix, y_range=6000, upload=False)
missing = ['PS_COOH_2mM_XY05_1_1', 'PS_NH2_2mM_XY04_2_1']
all_maps
kn.tracking(missing[0], remote_folder, bucket=bucket, tparams=tparams1)
kn.tracking(missing[1], remote_folder, bucket=bucket, tparams=tparams1)
```
| github_jupyter |
## Simulate sampling from a Poisson Distribution
## Instructions
* **<font color="red">Go to "Cell->Run All" to start the program running. After that point, you should be able to use the sliders and buttons to manipulate the output.</font>**
* If things go totally awry, you can go to "Kernel->Restart" and then "Cell->Run All". A more drastic solution would be to close and reload the page, which will reset the code to its initial state.
* If you're interested in programming, click the "Toggle raw code" button. This will expose the underlying program, written in the Python 3 programming language. You can edit the code to your heart's content: just go to "Cell->Run All" after you modify things so the changes will be incorporated. Lines in the code blocks preceded by `#` are comments that guide you through the exercise and/or explain the code
```
# -----------------------------------------------------------------------------------
# Javascript that gives us a cool hide-the-code button
from IPython.display import HTML
HTML('''
<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()">
<input type="submit" value="Toggle raw code">
</form>
''')
# ------------------------------------------------------------------------------------
#Import libraries that do things like plot data and handle arrays
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from scipy.stats import poisson
# libraries for making pretty sliders
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display
def calc_poisson(mu):
"""
Calculate a Poisson distribution pmf given a value for mu, producing a graph and a
formatted table.
"""
# some constants for making pretty graph/table
max_count = 200
p_cutoff = 0.0001
# Do pmf calculation
x = np.arange(max_count)
p = poisson.pmf(np.arange(max_count),mu)
# only grab data with reasonable probability
truth_table = p > p_cutoff
x = x[truth_table]
p = p[truth_table]
# Make graph
plt.bar(x,p,width=0.6)
plt.xlabel("# molecules")
plt.ylabel("probability")
plt.title("Poisson probability mass function")
plt.tight_layout()
plt.show()
# write data table
print("{:>10s}{:>10s}".format("counts","prob"))
for i in range(len(x)):
print("{:10d}{:10.3f}".format(x[i],p[i]))
# graph/slider widget
mu_widget = widgets.IntSlider(description="expected number of molecules",min=1,max=100,step=1,value=10)
container = widgets.interactive(calc_poisson,mu=mu_widget)
display(container)
```
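The widget above plots the closed-form pmf but never actually draws samples. As an illustrative cross-check (not part of the original widget code), the sketch below uses only the standard library: it samples via Knuth's multiplication algorithm and compares an empirical frequency against the closed-form pmf.

```python
import math
import random

def poisson_pmf(k, mu):
    # Closed form: P(X = k) = e^(-mu) * mu^k / k!
    return math.exp(-mu) * mu ** k / math.factorial(k)

def poisson_sample(mu):
    # Knuth's algorithm: multiply uniform draws until the product falls below e^(-mu);
    # the number of extra draws needed is Poisson-distributed with mean mu.
    threshold = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

random.seed(0)
mu = 10
draws = [poisson_sample(mu) for _ in range(50_000)]
empirical = draws.count(mu) / len(draws)
print(round(poisson_pmf(mu, mu), 4), round(empirical, 4))
```

For large simulations, `numpy.random.poisson(mu, size)` does the same job far faster; the stdlib version is shown only to make the sampling mechanics explicit.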
# Box Plots
The following illustrates some options for the boxplot in statsmodels. These include `violinplot` and `beanplot`.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
```
## Bean Plots
The following example is taken from the docstring of `beanplot`.
We use the American National Election Survey 1996 dataset, which has Party
Identification of respondents as independent variable and (among other
data) age as dependent variable.
```
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
```
Group age by party ID, and create a violin plot with it:
```
plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible
plt.rcParams['figure.figsize'] = (10.0, 8.0) # make plot larger in notebook
age = [data.exog['age'][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
#plt.show()
def beanplot(data, plot_opts={}, jitter=False):
"""helper function to try out different plot options
"""
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts_ = {'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
plot_opts_.update(plot_opts)
sm.graphics.beanplot(data, ax=ax, labels=labels,
jitter=jitter, plot_opts=plot_opts_)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
return fig
fig = beanplot(age, jitter=True)
fig = beanplot(age, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
fig = beanplot(age, plot_opts={'violin_fc':'#66c2a5'})
fig = beanplot(age, plot_opts={'bean_size': 0.2, 'violin_width': 0.75, 'violin_fc':'#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_fc':'#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
```
## Advanced Box Plots
Based on the example script `example_enhanced_boxplots.py` (by Ralf Gommers)
```
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
# Necessary to make horizontal axis labels fit
plt.rcParams['figure.subplot.bottom'] = 0.23
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
# Group age by party ID.
age = [data.exog['age'][data.endog == id] for id in party_ID]
# Create a violin plot.
fig = plt.figure()
ax = fig.add_subplot(111)
sm.graphics.violinplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a bean plot.
fig2 = plt.figure()
ax = fig2.add_subplot(111)
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a jitter plot.
fig3 = plt.figure()
ax = fig3.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small',
'label_rotation':30, 'violin_fc':(0.8, 0.8, 0.8),
'jitter_marker':'.', 'jitter_marker_size':3, 'bean_color':'#FF6F00',
'bean_mean_color':'#009D91'}
sm.graphics.beanplot(age, ax=ax, labels=labels, jitter=True,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create an asymmetrical jitter plot.
ix = data.exog['income'] < 16 # incomes < $30k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_lower_income = [age[endog == id] for id in party_ID]
ix = data.exog['income'] >= 20 # incomes > $50k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_higher_income = [age[endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts['violin_fc'] = (0.5, 0.5, 0.5)
plot_opts['bean_show_mean'] = False
plot_opts['bean_show_median'] = False
plot_opts['bean_legend_text'] = 'Income < \$30k'
plot_opts['cutoff_val'] = 10
sm.graphics.beanplot(age_lower_income, ax=ax, labels=labels, side='left',
jitter=True, plot_opts=plot_opts)
plot_opts['violin_fc'] = (0.7, 0.7, 0.7)
plot_opts['bean_color'] = '#009D91'
plot_opts['bean_legend_text'] = 'Income > \$50k'
sm.graphics.beanplot(age_higher_income, ax=ax, labels=labels, side='right',
jitter=True, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Show all plots.
#plt.show()
```
# StockExplorer PixieApp Part 2 - Time Series Forecasting with ARIMA model
## We start by installing the quandl and statsmodels libraries if necessary
```
# !pip install quandl
# !pip install statsmodels
```
## Import a few modules and optionally set the quandl API key
```
import pixiedust
from pixiedust.display.app import *
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import quandl
#Comment the line below if you don't have a Quandl API Key
#To get an API key, go to https://www.quandl.com
quandl.ApiConfig.api_key = "XXXX"
```
## Download the stock data for MSFT and display a bokeh line chart of the Time Series
```
msft = quandl.get('WIKI/MSFT')
msft['daily_spread'] = msft['Adj. Close'].diff()
msft = msft.reset_index()
display(msft)
```
## Build an ARIMA model
```
train_set, test_set = msft[:-14], msft[-14:]
logmsft = np.log(train_set['Adj. Close'])
logmsft.index = train_set['Date']
logmsft_diff = pd.DataFrame(logmsft - logmsft.shift()).reset_index()
logmsft_diff.dropna(inplace=True)
display(logmsft_diff)
```
## Apply the Dickey-Fuller test for stationarity
```
from statsmodels.tsa.stattools import adfuller
import pprint
ad_fuller_results = adfuller(
logmsft_diff['Adj. Close'], autolag = 'AIC', regression = 'c')
labels = ['Test Statistic','MacKinnon’s approximate p-value','Number of lags used','Number of Observations Used']
pp = pprint.PrettyPrinter(indent=4)
pp.pprint({labels[i]: ad_fuller_results[i] for i in range(4)})
```
## Plot the ACF chart
```
import statsmodels.tsa.api as smt
smt.graphics.plot_acf(logmsft_diff['Adj. Close'], lags=100)
plt.show()
```
## Plot the PACF chart
```
smt.graphics.plot_pacf(logmsft_diff['Adj. Close'], lags=100)
plt.show()
```
## Create the ARIMA model and return details about the residual errors
```
from statsmodels.tsa.arima_model import ARIMA
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
arima_model_class = ARIMA(train_set['Adj. Close'], dates=train_set['Date'], order=(1,1,1))
arima_model = arima_model_class.fit(disp=0)
print(arima_model.resid.describe())
s = arima_model.resid.describe().to_frame().reset_index()
display(s)
```
## Plot the predictions and compare them to the actual observations in the train_set
```
def plot_predict(model, dates_series, num_observations):
fig = plt.figure(figsize = (12,5))
model.plot_predict(
start = str(dates_series[len(dates_series)-num_observations]),
end = str(dates_series[len(dates_series)-1])
)
plt.show()
plot_predict(arima_model, train_set['Date'], 100)
plot_predict(arima_model, train_set['Date'], 10)
```
## Diagnose the model against the test set values
```
def compute_test_set_predictions(train_set, test_set):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
history = train_set['Adj. Close'].values
forecast = np.array([])
for t in range(len(test_set)):
prediction = ARIMA(history, order=(1,1,0)).fit(disp=0).forecast()
history = np.append(history, test_set['Adj. Close'].iloc[t])
forecast = np.append(forecast, prediction[0])
return pd.DataFrame(
{"forecast": forecast,
"test": test_set['Adj. Close'],
"Date": pd.date_range(start=test_set['Date'].iloc[len(test_set)-1], periods = len(test_set))
}
)
results = compute_test_set_predictions(train_set, test_set)
display(results)
```
## Compute the mean squared error of the predictions for the test set
```
from sklearn.metrics import mean_squared_error
def compute_mean_squared_error(test_series, forecast_series):
return mean_squared_error(test_series, forecast_series)
print('Mean Squared Error: {}'.format( compute_mean_squared_error( test_set['Adj. Close'], results.forecast)) )
```
## Improved version of the StockExplorer PixieApp that adds forecasting with ARIMA
1. Enable the user to enter a list of Stock Tickers
2. Provide a menu for basic plotting: Price over time and Daily stock spread over time
3. Menu for displaying moving average with configurable lag
4. Menu for displaying ACF and PACF with configurable lag
5. Menu for time series forecasting with ARIMA
## Base PixieApp used as parent class for all the subapps associated with each menu
```
@PixieApp
class BaseSubApp():
def setup(self):
self.lag = 50
def add_ticker_selection_markup(refresh_ids):
def deco(fn):
def wrap(self, *args, **kwargs):
return """
<div class="row" style="text-align:center">
<div class="btn-group btn-group-toggle" style="border-bottom:2px solid #eeeeee" data-toggle="buttons">
{%for ticker, state in this.parent_pixieapp.tickers.items()%}
<label class="btn btn-secondary {%if this.parent_pixieapp.active_ticker == ticker%}active{%endif%}"
pd_refresh=\"""" + ",".join(refresh_ids) + """\" pd_script="self.set_active_ticker('{{ticker}}')">
<input type="radio" {%if this.parent_pixieapp.active_ticker == ticker%}checked{%endif%}>
{{ticker}}
</label>
{%endfor%}
</div>
</div>
""" + fn(self, *args, **kwargs)
return wrap
return deco
def set_active_ticker(self, ticker):
self.parent_pixieapp.set_active_ticker(ticker)
@route(widget="lag_slider")
def slider_screen(self):
return """
<div>
<label class="field">Lag:<span id="slideval{{prefix}}">50</span></label>
<i class="fa fa-info-circle" style="color:orange" data-toggle="pd-tooltip"
title="Selected lag used to compute moving average, ACF or PACF"></i>
<div id="slider{{prefix}}" name="slider" data-min=30 data-max=300
data-default=50 style="margin: 0 0.6em;">
</div>
</div>
<script>
$("[id^=slider][id$={{prefix}}]").each(function() {
var sliderElt = $(this)
var min = sliderElt.data("min")
var max = sliderElt.data("max")
var val = sliderElt.data("default")
sliderElt.slider({
min: isNaN(min) ? 0 : min,
max: isNaN(max) ? 100 : max,
value: isNaN(val) ? 50 : val,
change: function(evt, ui) {
$("[id=slideval{{prefix}}]").text(ui.value);
pixiedust.sendEvent({type:'lagSlider',value:ui.value})
},
slide: function(evt, ui) {
$("[id=slideval{{prefix}}]").text(ui.value);
}
});
})
</script>
"""
```
## Sub App for basic exploration of the selected Stock Time Series
```
@PixieApp
class StockExploreSubApp(BaseSubApp):
@route()
@BaseSubApp.add_ticker_selection_markup(['chart{{prefix}}', 'daily_spread{{prefix}}'])
def main_screen(self):
return """
<div class="row" style="min-height:300px">
<div class="col-xs-6" id="chart{{prefix}}" pd_render_onload pd_options="show_chart=Adj. Close">
</div>
<div class="col-xs-6" id="daily_spread{{prefix}}" pd_render_onload pd_options="show_chart=daily_spread">
</div>
</div>
"""
@route(show_chart="*")
def show_chart_screen(self, show_chart):
return """
<div pd_entity="parent_pixieapp.get_active_df()" pd_render_onload>
<pd_options>
{
"handlerId": "lineChart",
"valueFields": "{{show_chart}}",
"rendererId": "bokeh",
"keyFields": "Date",
"noChartCache": "true",
"rowCount": "10000"
}
</pd_options>
</div>
"""
```
## Sub App for displaying moving average of the selected Stock Time Series
```
@PixieApp
class MovingAverageSubApp(BaseSubApp):
@route()
@BaseSubApp.add_ticker_selection_markup(['chart{{prefix}}'])
def main_screen(self):
return """
<div class="row" style="min-height:300px">
<div class="page-header text-center">
<h1>Moving Average for {{this.parent_pixieapp.active_ticker}}</h1>
</div>
<div class="col-sm-12" id="chart{{prefix}}" pd_render_onload pd_entity="get_moving_average_df()">
<pd_options>
{
"valueFields": "Adj. Close",
"keyFields": "x",
"rendererId": "bokeh",
"handlerId": "lineChart",
"rowCount": "10000"
}
</pd_options>
</div>
</div>
<div class="row">
<div pd_widget="lag_slider">
<pd_event_handler
pd_source="lagSlider"
pd_script="self.lag = eventInfo['value']"
pd_refresh="chart{{prefix}}">
</pd_event_handler>
</div>
</div>
"""
def get_moving_average_df(self):
ma = self.parent_pixieapp.get_active_df()['Adj. Close'].rolling(window=self.lag).mean()
ma_df = pd.DataFrame(ma)
ma_df["x"] = ma_df.index
return ma_df
```
## Sub App for displaying ACF and PACF of the selected Stock Time Series
```
import statsmodels.tsa.api as smt
@PixieApp
class AutoCorrelationSubApp(BaseSubApp):
@route()
@BaseSubApp.add_ticker_selection_markup(['chart_acf{{prefix}}', 'chart_pacf{{prefix}}'])
def main_screen(self):
return """
<div class="row" style="min-height:300px">
<div class="col-sm-6">
<div class="page-header text-center">
<h1>Auto-correlation Function</h1>
</div>
<div id="chart_acf{{prefix}}" pd_render_onload pd_options="show_acf=true">
</div>
</div>
<div class="col-sm-6">
<div class="page-header text-center">
<h1>Partial Auto-correlation Function</h1>
</div>
<div id="chart_pacf{{prefix}}" pd_render_onload pd_options="show_pacf=true">
</div>
</div>
</div>
<div class="row">
<div pd_widget="lag_slider">
<pd_event_handler
pd_source="lagSlider"
pd_script="self.lag = eventInfo['value']"
pd_refresh="chart_acf{{prefix}},chart_pacf{{prefix}}">
</pd_event_handler>
</div>
</div>
"""
@route(show_acf='*')
@captureOutput
def show_acf_screen(self):
smt.graphics.plot_acf(self.parent_pixieapp.get_active_df()['Adj. Close'], lags=self.lag)
@route(show_pacf='*')
@captureOutput
def show_pacf_screen(self):
smt.graphics.plot_pacf(self.parent_pixieapp.get_active_df()['Adj. Close'], lags=self.lag)
```
## Sub App for time series forecasting with ARIMA
```
from statsmodels.tsa.arima_model import ARIMA
@PixieApp
class ForecastArimaSubApp(BaseSubApp):
def setup(self):
self.entity_dataframe = self.parent_pixieapp.get_active_df().copy()
self.differencing = False
def set_active_ticker(self, ticker):
BaseSubApp.set_active_ticker(self, ticker)
self.setup()
@route()
@BaseSubApp.add_ticker_selection_markup([])
def main_screen(self):
return """
<div class="page-header text-center">
<h2>1. Data Exploration to test for Stationarity
<button class="btn btn-default" pd_script="self.toggle_differencing()" pd_refresh>
{%if this.differencing%}Remove differencing{%else%}Add differencing{%endif%}
</button>
<button class="btn btn-default" pd_options="do_forecast=true">
Continue to Forecast
</button>
</h2>
</div>
<div class="row" style="min-height:300px">
<div class="col-sm-10" id="chart{{prefix}}" pd_render_onload pd_options="show_chart=Adj. Close">
</div>
</div>
<div class="row" style="min-height:300px">
<div class="col-sm-6">
<div class="page-header text-center">
<h3>Auto-correlation Function</h3>
</div>
<div id="chart_acf{{prefix}}" pd_render_onload pd_options="show_acf=true">
</div>
</div>
<div class="col-sm-6">
<div class="page-header text-center">
<h3>Partial Auto-correlation Function</h3>
</div>
<div id="chart_pacf{{prefix}}" pd_render_onload pd_options="show_pacf=true">
</div>
</div>
</div>
"""
@route(show_chart="*")
def show_chart_screen(self, show_chart):
return """
<h3><center>Time Series</center></h3>
<div pd_render_onload pd_entity="entity_dataframe">
<pd_options>
{
"rowCount": "10000",
"keyFields": "Date",
"valueFields": "Adj. Close",
"handlerId": "lineChart",
"noChartCache": "true"
}
</pd_options>
</div>
"""
@route(show_acf='*')
@captureOutput
def show_acf_screen(self):
smt.graphics.plot_acf(self.entity_dataframe['Adj. Close'], lags=50)
@route(show_pacf='*')
@captureOutput
def show_pacf_screen(self):
smt.graphics.plot_pacf(self.entity_dataframe['Adj. Close'], lags=50)
def toggle_differencing(self):
if self.differencing:
self.entity_dataframe = self.parent_pixieapp.get_active_df().copy()
self.differencing = False
else:
log_df = np.log(self.entity_dataframe['Adj. Close'])
log_df.index = self.entity_dataframe['Date']
self.entity_dataframe = pd.DataFrame(log_df - log_df.shift()).reset_index()
self.entity_dataframe.dropna(inplace=True)
self.differencing = True
@route(do_forecast="true")
@BaseSubApp.add_ticker_selection_markup([])
def do_forecast_screen(self):
return """
<div class="page-header text-center">
<h2>2. Build Arima model
<button class="btn btn-default" pd_options="do_diagnose=true">
Diagnose Model
</button>
</h2>
</div>
<div class="row" id="forecast{{prefix}}">
<div style="font-weight:bold">Enter the p,d,q order for the ARIMA model you want to build</div>
<div class="form-group" style="margin-left: 20px">
<label class="control-label">Enter the p order for the AR model:</label>
<input type="text" class="form-control" id="p_order{{prefix}}" value="1" style="width: 100px;margin-left:10px">
<label class="control-label">Enter the d order for the Integrated step:</label>
<input type="text" class="form-control" id="d_order{{prefix}}" value="1" style="width: 100px;margin-left:10px">
<label class="control-label">Enter the q order for the MA model:</label>
<input type="text" class="form-control" id="q_order{{prefix}}" value="1" style="width: 100px;margin-left:10px">
</div>
<center>
<button class="btn btn-default" pd_target="forecast{{prefix}}"
pd_options="p_order=$val(p_order{{prefix}});d_order=$val(d_order{{prefix}});q_order=$val(q_order{{prefix}})">
Go
</button>
</center>
</div>
"""
@route(p_order="*",d_order="*",q_order="*")
@templateArgs
def build_arima_model_screen(self, p_order, d_order, q_order):
#Build the arima model
self.train_set = self.parent_pixieapp.get_active_df()[:-14]
self.test_set = self.parent_pixieapp.get_active_df()[-14:]
self.arima_model = ARIMA(
self.train_set['Adj. Close'], dates=self.train_set['Date'],
order=(int(p_order),int(d_order),int(q_order))
).fit(disp=0)
self.residuals = self.arima_model.resid.describe().to_frame().reset_index()
return """
<div class="page-header text-center">
<h3>ARIMA Model successfully created</h3>
</div>
<div class="row">
<div class="col-sm-10 col-sm-offset-3">
<div pd_render_onload pd_options="plot_predict=true">
</div>
<h3>Predicted values against the train set</h3>
</div>
</div>
<div class="row">
<div pd_render_onload pd_entity="residuals">
<pd_options>
{
"handlerId": "tableView",
"table_noschema": "true",
"table_nosearch": "true",
"table_nocount": "true"
}
</pd_options>
</div>
<h3><center>Residual errors statistics</center></h3>
</div>
"""
@route(plot_predict="true")
@captureOutput
def plot_predict(self):
plot_predict(self.arima_model, self.train_set['Date'], 100)
def compute_test_set_predictions(self):
return compute_test_set_predictions(self.train_set, self.test_set)
@route(do_diagnose="true")
@BaseSubApp.add_ticker_selection_markup([])
def do_diagnose_screen(self):
return """
<div class="page-header text-center"><h2>3. Diagnose the model against the test set</h2></div>
<div class="row">
<div class="col-sm-10 center" pd_render_onload pd_entity="compute_test_set_predictions()">
<pd_options>
{
"keyFields": "Date",
"valueFields": "forecast,test",
"handlerId": "lineChart",
"rendererId": "bokeh",
"noChartCache": "true"
}
</pd_options>
</div>
</div>
"""
```
## Main class for the StockExplorer PixieApp
```
@PixieApp
class StockExplorer():
@route()
def main_screen(self):
return """
<style>
div.outer-wrapper {
display: table;width:100%;height:300px;
}
div.inner-wrapper {
display: table-cell;vertical-align: middle;height: 100%;width: 100%;
}
</style>
<div class="outer-wrapper">
<div class="inner-wrapper">
<div class="col-sm-3"></div>
<div class="input-group col-sm-6">
<input id="stocks{{prefix}}" type="text" class="form-control"
value="MSFT,AMZN,IBM"
placeholder="Enter a list of stocks separated by comma e.g MSFT,AMZN,IBM">
<span class="input-group-btn">
<button class="btn btn-default" type="button" pd_options="explore=true">
<pd_script>
self.select_tickers('$val(stocks{{prefix}})'.split(','))
</pd_script>
Explore
</button>
</span>
</div>
</div>
</div>
"""
def select_tickers(self, tickers):
self.tickers = {ticker.strip():{} for ticker in tickers}
self.set_active_ticker(tickers[0].strip())
def set_active_ticker(self, ticker):
self.active_ticker = ticker
if 'df' not in self.tickers[ticker]:
self.tickers[ticker]['df'] = quandl.get('WIKI/{}'.format(ticker))
self.tickers[ticker]['df']['daily_spread'] = self.tickers[ticker]['df']['Adj. Close'] - self.tickers[ticker]['df']['Adj. Open']
self.tickers[ticker]['df'] = self.tickers[ticker]['df'].reset_index()
def get_active_df(self):
return self.tickers[self.active_ticker]['df']
@route(explore="*")
@templateArgs
def stock_explore_screen(self):
tabs = [("Explore","StockExploreSubApp"), ("Moving Average", "MovingAverageSubApp"),
("ACF and PACF", "AutoCorrelationSubApp"), ("Forecast with ARIMA", "ForecastArimaSubApp")]
return """
<style>
.btn:active, .btn.active {
background-color:aliceblue;
}
</style>
<div class="page-header">
<h1>Stock Explorer PixieApp</h1>
</div>
<div class="container-fluid">
<div class="row">
<div class="btn-group-vertical btn-group-toggle col-sm-2" data-toggle="buttons">
{%for title, subapp in tabs%}
<label class="btn btn-secondary {%if loop.first%}active{%endif%}"
pd_options="show_analytic={{subapp}}"
pd_target="analytic_screen{{prefix}}">
<input type="radio" {%if loop.first%}checked{%endif%}>
{{title}}
</label>
{%endfor%}
</div>
<div id="analytic_screen{{prefix}}" class="col-sm-10">
</div>
</div>
"""
@route(show_analytic="*")
def show_analytic_screen(self, show_analytic):
return """
<div pd_app="{{show_analytic}}" pd_render_onload></div>
"""
app = StockExplorer()
app.run()
```
# DarkSkyAPI wrapper for Python 3.6
The DarkSkyAPI weather wrapper is powered by [DarkSky](https://darksky.net/poweredby/) and provides an easy way to access weather details using Python 3.6.
## Client instance
First import the DarkSkyClient class from the DarkSkyAPI module. If you don't have an API key for the DarkSky API yet, get one for free [here](https://darksky.net/dev/register). This will allow you to make 1,000 calls each day.
```
from DarkSkyAPI.DarkSkyAPI import DarkSkyClient
from secret import api_key
```
Next, create the client instance using the api_key as the first argument and a tuple containing the latitude and longitude of the location as the second argument. The third argument is optional and will set the units (Celsius/Fahrenheit). The unit options are as follows:
* auto: automatically select units based on geographic location
* ca: same as si, except that windSpeed and windGust are in kilometers per hour
* uk2: same as si, except that nearestStormDistance and visibility are in miles, and windSpeed and windGust in miles per hour.
* us: Imperial units
* si: International System of Units (default)
If no units are provided it will default to "si".
```
client = DarkSkyClient(api_key, (51.2279166, 5.8405417), units="si")
```
The client instance already holds the raw weather response, which can be accessed as client.raw_data.
```
client.raw_data
```
Additionally it also keeps track of the remaining api calls for the current day.
```
client.API_calls_remaining
```
## Current instance
To create the current instance, simply call the get_current method on client.
```
current = client.get_current()
```
All the data points of the current weather details are automatically set as attributes of the instance, so each datapoint can be accessed directly by name.
```
current.temperature
```
The weekday can be accessed by calling the weekday attribute on the current instance. This will return the full weekday name (in English). To return the short weekday name (e.g. Mon, Tue), use the weekday_short attribute.
```
current.weekday
current.weekday_short
```
These are the attributes available for the current instance.
```
for attr in dir(current):
if "__" not in attr:
print(attr)
```
## Daily and hourly instance
To create the daily and hourly instance, simply call the get_daily and get_hourly method on client.
```
daily = client.get_daily()
hourly = client.get_hourly()
```
To customize the number of hours or days returned, simply pass the desired number as an int to the method.
```
# Returns 6 days (including today)
daily = client.get_daily(6)
# Returns 24 hours (including current hour)
hourly = client.get_hourly(24)
```
The daily and hourly classes behave in the same way because they inherit from the same base class. Because of this, only the daily instance is documented. The terms hourly/daily and hour/day can be used interchangeably.
The forecast datapoints can be accessed in various ways. To get the data for individual days or hours, you can use either the `day_n`/`hour_n` attributes or the `data` list of the forecast instance.
```
# Day attribute
daily.day_0['temperatureHigh']
# Daily data list
daily.data[0]['temperatureHigh']
```
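The `day_n`/`hour_n` attributes and the shared `data` list come from the common base class mentioned above. That class isn't shown in this notebook, but the pattern can be sketched roughly as follows (hypothetical names, not DarkSkyAPI's actual implementation):

```python
class ForecastBase:
    # Illustrative sketch of a shared base class: each block of raw forecast
    # data is exposed both via the `data` list and as a numbered attribute.
    prefix = "item"

    def __init__(self, blocks):
        self.data = list(blocks)
        for i, block in enumerate(self.data):
            # e.g. day_0, day_1, ... on Daily; hour_0, hour_1, ... on Hourly
            setattr(self, "{}_{}".format(self.prefix, i), block)

class Daily(ForecastBase):
    prefix = "day"

class Hourly(ForecastBase):
    prefix = "hour"

daily = Daily([{"temperatureHigh": 21.3}, {"temperatureHigh": 19.8}])
print(daily.day_0["temperatureHigh"])    # 21.3
print(daily.data[1]["temperatureHigh"])  # 19.8
```

Because the two subclasses differ only in their `prefix`, documenting one effectively documents both — which is the symmetry the paragraph above relies on.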
Both instances also have every datapoint set as a property. Each property holds a list of that datapoint's values.
```
daily.temperatureHigh
```
Alternatively, there are several methods you can use to get data collections of one or more datapoints. These methods work on both the daily and hourly instance. The methods currently available are:
* data_pair: Will return a list of value-pair tuples containing firstly the datetime and secondly the value of the datapoint. This method accepts three arguments. The first argument is the datapoint (required). The second argument is the date_fmt parameter, which sets the format of the datetime value (default - "%d-%m-%Y %H:%M"). The third argument is the graph argument; if set to True it will return a graph-friendly dict of the datetimes and values of the datapoint (default - False).
* data_single: Will return a list of single datapoint values. This method accepts three arguments. The first argument is the datapoint you want the values of. The second argument is a boolean that will convert the datapoint to percentages if set to True (default: False). The third argument is a boolean that will convert the datapoint to a datetime string if set to True (default: False).
* data_combined: Will return a dict containing lists of datapoint values for each day/hour. This method accepts two arguments. The first is the list of datapoints. The second is the date_fmt in case the datapoint is a time. If you don't provide a list of datapoints it will return all datapoints.
* datetimes: Will return a list containing all the datetimes of the days/hours. This method accepts one argument, which is the date format (default - "%d-%m-%Y %H:%M").
### Data pair method
```
# Data pair default date format and no graph
daily.data_pair('temperatureHigh')
# Data pair weekday names date_fmt
daily.data_pair('temperatureHigh', date_fmt="%A")
# Data pair graph
daily.data_pair('temperatureHigh', graph=True)
```
### Data single method
```
# Datapoint argument.
daily.data_single('temperatureHigh')
# Datapoint argument and to_percent argument set to True.
daily.data_single('precipProbability', to_percent=True)
# Datapoint argument and to_datetime argument set to True.
daily.data_single('precipIntensityMaxTime', to_datetime=True)
```
### Data combined method
```
# Specified list of datapoints
daily.data_combined(['temperatureHigh', 'temperatureHighTime'], date_fmt="%H:%M")
# All data points
daily.data_combined()
```
### Datetimes method
```
# Default date format
daily.datetimes()
# Weekday names date format (short)
daily.datetimes(date_fmt="%a")
```
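The `date_fmt` strings used above are standard Python `strftime` codes; as a quick reference, independent of the weather wrapper:

```python
from datetime import datetime

dt = datetime(2019, 8, 14, 18, 30)
print(dt.strftime("%d-%m-%Y %H:%M"))  # 14-08-2019 18:30 (the default format)
print(dt.strftime("%A"))              # Wednesday (full weekday name, as with date_fmt="%A")
print(dt.strftime("%a"))              # Wed (short weekday name, as with date_fmt="%a")
```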
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/FeatureCollection/idw_interpolation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/idw_interpolation.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/idw_interpolation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
def sampling(sample):
lat = sample.get('latitude')
lon = sample.get('longitude')
ch4 = sample.get('ch4')
return ee.Feature(ee.Geometry.Point([lon, lat]), {'ch4': ch4})
# Import two weeks of S5P methane and composite by mean.
ch4 = ee.ImageCollection('COPERNICUS/S5P/OFFL/L3_CH4') \
.select('CH4_column_volume_mixing_ratio_dry_air') \
.filterDate('2019-08-01', '2019-08-15') \
.mean() \
.rename('ch4')
# Define an area to perform interpolation over.
aoi = ee.Geometry.Polygon(
[[[-95.68487605978851, 43.09844605027055],
[-95.68487605978851, 37.39358590079781],
[-87.96148738791351, 37.39358590079781],
[-87.96148738791351, 43.09844605027055]]], {}, False)
# Sample the methane composite to generate a FeatureCollection.
samples = ch4.addBands(ee.Image.pixelLonLat()) \
.sample(**{'region': aoi, 'numPixels': 1500,
'scale':1000, 'projection': 'EPSG:4326'}) \
.map(sampling)
# Combine mean and standard deviation reducers for efficiency.
combinedReducer = ee.Reducer.mean().combine(**{
'reducer2': ee.Reducer.stdDev(),
'sharedInputs': True})
# Estimate global mean and standard deviation from the points.
stats = samples.reduceColumns(**{
'reducer': combinedReducer,
'selectors': ['ch4']})
# Do the interpolation, valid to 70 kilometers.
interpolated = samples.inverseDistance(**{
'range': 7e4,
'propertyName': 'ch4',
'mean': stats.get('mean'),
'stdDev': stats.get('stdDev'),
'gamma': 0.3})
# Define visualization arguments.
band_viz = {
'min': 1800,
'max': 1900,
'palette': ['0D0887', '5B02A3', '9A179B', 'CB4678',
'EB7852', 'FBB32F', 'F0F921']}
# Display to map.
# Map.centerObject(ee.FeatureCollection(aoi), 7)
Map.addLayer(ch4, band_viz, 'CH4')
# Map.addLayer(interpolated, band_viz, 'CH4 Interpolated')
```
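Outside Earth Engine, the core idea of inverse-distance weighting can be sketched in plain NumPy. This is illustrative only — Earth Engine's `inverseDistance` additionally uses the supplied `mean`, `stdDev`, `gamma`, and `range` parameters — but the basic weighting looks like this:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    # Pairwise distances between query points and known sample points
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)   # guard against division by zero at sample locations
    w = 1.0 / d**power         # closer samples get larger weights
    return (w @ values) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1800.0, 1900.0, 1850.0])   # e.g. CH4 mixing ratios
print(idw(pts, vals, np.array([[0.5, 0.5]])))  # equidistant from all three -> plain mean, 1850.0
```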
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
## 1. Loading your friend's data into a dictionary
<p><img src="https://assets.datacamp.com/production/project_1237/img/netflix.jpg" alt="Someone's feet on table facing a television"></p>
<p>Netflix! What started in 1997 as a DVD rental service has since exploded into the largest entertainment/media company by <a href="https://www.marketwatch.com/story/netflix-shares-close-up-8-for-yet-another-record-high-2020-07-10">market capitalization</a>, boasting over 200 million subscribers as of <a href="https://www.cbsnews.com/news/netflix-tops-200-million-subscribers-but-faces-growing-challenge-from-disney-plus/">January 2021</a>.</p>
<p>Given the large number of movies and series available on the platform, it is a perfect opportunity to flex our data manipulation skills and dive into the entertainment industry. Our friend has also been brushing up on their Python skills and has taken a first crack at a CSV file containing Netflix data. For their first order of business, they have been performing some analyses, and they believe that the average duration of movies has been declining. </p>
<p>As evidence of this, they have provided us with the following information. For the years from 2011 to 2020, the average movie durations are 103, 101, 99, 100, 100, 95, 95, 96, 93, and 90 minutes, respectively.</p>
<p>If we're going to be working with this data, a good place to start is probably <code>pandas</code>. But first we'll need to create a DataFrame from scratch. Let's start by creating a Python object covered in <a href="https://learn.datacamp.com/courses/intermediate-python">Intermediate Python</a>: a dictionary!</p>
```
# Create the years and durations lists
years = [2011,2012,2013,2014,2015,2016,2017,2018,2019,2020]
durations = [103,101,99,100,100,95,95,96,93,90]
# Create a dictionary with the two lists
movie_dict = {"years":years,
"durations":durations
}
# Print the dictionary
movie_dict
```
## 2. Creating a DataFrame from a dictionary
<p>Perfect! We now have our friend's data stored in a nice Python object. We can already perform several operations on a dictionary to manipulate its contents (such as updating or adding to it). But a more useful structure might be a <code>pandas</code> DataFrame, a tabular data structure containing labeled axes and rows. Luckily, DataFrames can be created very easily from the dictionary created in the previous step!</p>
<p>To convert our dictionary <code>movie_dict</code> to a <code>pandas</code> DataFrame, we will first need to import the library under its usual alias. We'll also want to inspect our DataFrame to ensure it was created correctly. Let's perform these steps now.</p>
```
# Import pandas under its usual alias
import pandas as pd
# Create a DataFrame from the dictionary
durations_df = pd.DataFrame(movie_dict)
# Print the DataFrame
durations_df
```
## 3. A visual inspection of our data
<p>Alright, we now have a <code>pandas</code> DataFrame, the most common way to work with tabular data in Python. Now back to the task at hand. We want to follow up on our friend's assertion that movie lengths have been decreasing over time. A great place to start will be a visualization of the data.</p>
<p>Given that the data is continuous, a line plot would be a good choice, with the years represented along the x-axis and the average length in minutes along the y-axis. This will allow us to easily spot any trends in movie durations. There are many ways to visualize data in Python, but <code>matplotlib.pyplot</code> is one of the most common packages to do so.</p>
<p><em>Note: In order for us to correctly test your plot, you will need to initialize a <code>matplotlib.pyplot</code> Figure object, which we have already provided in the cell below. You can continue to create your plot as you have learned in Intermediate Python.</em></p>
```
# Import matplotlib.pyplot under its usual alias and create a figure
import matplotlib.pyplot as plt
fig = plt.figure()
# Draw a line plot of release_years and durations
plt.plot(durations_df['years'],durations_df['durations'])
# Create a title
plt.title("Netflix Movie Durations 2011-2020")
# Show the plot
plt.show()
```
## 4. Loading the rest of the data from a CSV
<p>Well, it looks like there is something to the idea that movie lengths have decreased over the past ten years! But equipped only with our friend's aggregations, we're limited in the further explorations we can perform. There are a few questions about this trend that we are currently unable to answer, including:</p>
<ol>
<li>What does this trend look like over a longer period of time?</li>
<li>Is this explainable by something like the genre of entertainment?</li>
</ol>
<p>Upon asking our friend for the original CSV they used to perform their analyses, they gladly oblige and send it. We now have access to the CSV file, available at the path <code>"datasets/netflix_data.csv"</code>. Let's create another DataFrame, this time with all of the data. Given the length of our friend's data, printing the whole DataFrame is probably not a good idea, so we will inspect it by printing only the first five rows.</p>
```
# Read in the CSV as a DataFrame
netflix_df = pd.read_csv("datasets/netflix_data.csv")
# Print the first five rows of the DataFrame
netflix_df.head()
```
## 5. Filtering for movies!
<p>Okay, we have our data! Now we can dive in and start looking at movie lengths. </p>
<p>Or can we? Looking at the first five rows of our new DataFrame, we notice a column <code>type</code>. Scanning the column, it's clear there are also TV shows in the dataset! Moreover, the <code>duration</code> column we planned to use seems to represent different values depending on whether the row is a movie or a show (perhaps the number of minutes versus the number of seasons).</p>
<p>Fortunately, a DataFrame allows us to filter data quickly, and we can select rows where <code>type</code> is <code>Movie</code>. While we're at it, we don't need information from all of the columns, so let's create a new DataFrame <code>netflix_movies</code> containing only <code>title</code>, <code>country</code>, <code>genre</code>, <code>release_year</code>, and <code>duration</code>.</p>
<p>Let's put our data subsetting skills to work!</p>
```
# Subset the DataFrame for type "Movie"
netflix_df_movies_only = netflix_df[netflix_df['type']=='Movie']
# Select only the columns of interest
netflix_movies_col_subset = netflix_df_movies_only[['title','country','genre','release_year','duration']]
# Print the first five rows of the new DataFrame
netflix_movies_col_subset.head()
```
## 6. Creating a scatter plot
<p>Okay, now we're getting somewhere. We've read in the raw data, selected rows of movies, and have limited our DataFrame to our columns of interest. Let's try visualizing the data again to inspect the data over a longer range of time.</p>
<p>This time, we are no longer working with aggregates but instead with individual movies. A line plot is no longer a good choice for our data, so let's try a scatter plot instead. We will again plot the year of release on the x-axis and the movie duration on the y-axis.</p>
<p><em>Note: Although not taught in Intermediate Python, we have provided you the code <code>fig = plt.figure(figsize=(12,8))</code> to increase the size of the plot (to help you see the results), as well as to assist with testing. For more information on how to create or work with a <code>matplotlib</code> <code>figure</code>, refer to the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.figure.html">documentation</a>.</em></p>
```
# Create a figure and increase the figure size
fig = plt.figure(figsize=(12,8))
# Create a scatter plot of duration versus year
plt.scatter(netflix_movies_col_subset['release_year'],netflix_movies_col_subset['duration'])
# Create a title
plt.title("Movie Duration by Year of Release")
# Show the plot
plt.show()
```
## 7. Digging deeper
<p>This is already much more informative than the simple plot we created when our friend first gave us some data. We can also see that, while newer movies are overrepresented on the platform, many short movies have been released in the past two decades.</p>
<p>Upon further inspection, something else is going on. Some of these films are under an hour long! Let's filter our DataFrame for movies with a <code>duration</code> under 60 minutes and look at the genres. This might give us some insight into what is dragging down the average.</p>
```
# Filter for durations shorter than 60 minutes
short_movies = netflix_movies_col_subset[netflix_movies_col_subset['duration']<60]
# Print the first 20 rows of short_movies
short_movies.head(20)
```
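A complementary check is the mean duration per genre — sketched here with made-up numbers standing in for the real `netflix_movies_col_subset`:

```python
import pandas as pd

# Hypothetical rows standing in for the real Netflix data
movies = pd.DataFrame({
    "genre":    ["Children", "Documentaries", "Dramas", "Dramas", "Stand-Up"],
    "duration": [45,         55,              110,      98,       50],
})
# Average duration per genre, shortest first
print(movies.groupby("genre")["duration"].mean().sort_values())
```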
## 8. Marking non-feature films
<p>Interesting! It looks as though many of the films that are under 60 minutes fall into genres such as "Children", "Stand-Up", and "Documentaries". This is a logical result, as these types of films are often shorter than a 90-minute Hollywood blockbuster. </p>
<p>We could eliminate these rows from our DataFrame and plot the values again. But another interesting way to explore the effect of these genres on our data would be to plot them, but mark them with a different color.</p>
<p>In Python, there are many ways to do this, but one fun way might be to use a loop to generate a list of colors based on the contents of the <code>genre</code> column. Much as we did in Intermediate Python, we can then pass this list to our plotting function in a later step to color all non-typical genres in a different color!</p>
<p><em>Note: Although we are using the basic colors of red, blue, green, and black, <code>matplotlib</code> has many named colors you can use when creating plots. For more information, you can refer to the documentation <a href="https://matplotlib.org/stable/gallery/color/named_colors.html">here</a>!</em></p>
```
# Define an empty list
colors = []
# Iterate over rows of netflix_movies_col_subset
for index, row in netflix_movies_col_subset.iterrows() :
if row['genre']== "Children":
colors.append("red")
elif row['genre']== "Documentaries" :
colors.append("blue")
elif row['genre']== "Stand-Up" :
colors.append("green")
else:
colors.append("black")
# Inspect the first 10 values in your list
colors[:10]
```
## 9. Plotting with color!
<p>Lovely looping! We now have a <code>colors</code> list that we can pass to our scatter plot, which should allow us to visually inspect whether these genres might be responsible for the decline in the average duration of movies.</p>
<p>This time, we'll also spruce up our plot with some additional axis labels and a new theme with <code>plt.style.use()</code>. The latter isn't taught in Intermediate Python, but can be a fun way to add some visual flair to a basic <code>matplotlib</code> plot. You can find more information on customizing the style of your plot <a href="https://matplotlib.org/stable/tutorials/introductory/customizing.html">here</a>!</p>
```
# Set the figure style and initialize a new figure
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(12,8))
# Create a scatter plot of duration versus release_year
plt.scatter(netflix_movies_col_subset['release_year'],netflix_movies_col_subset['duration'])
# Create a title and axis labels
plt.xlabel("Release year")
plt.ylabel("Duration (min)")
plt.title("Movie duration by year of release")
# Show the plot
plt.show()
```
## 10. What next?
<p>Well, as we suspected, non-typical genres such as children's movies and documentaries are all clustered around the bottom half of the plot. But we can't know for certain until we perform additional analyses. </p>
<p>Congratulations, you've performed an exploratory analysis of some entertainment data, and there are lots of fun ways to develop your skills as a Pythonic data scientist. These include learning how to analyze data further with statistics, creating more advanced visualizations, and perhaps most importantly, learning more advanced ways of working with data in <code>pandas</code>. This latter skill is covered in our fantastic course <a href="https://www.datacamp.com/courses/data-manipulation-with-pandas">Data Manipulation with pandas</a>.</p>
<p>We hope you enjoyed this application of the skills learned in Intermediate Python, and wish you all the best on the rest of your journey!</p>
```
# Are we certain that movies are getting shorter?
are_movies_getting_shorter = "maybe"
```
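One way to push past "maybe" would be to quantify the trend, for example with a correlation between release year and duration. Here is a sketch with hypothetical numbers in place of the real DataFrame (and remembering that correlation is not causation):

```python
import pandas as pd

# Hypothetical per-movie data standing in for netflix_movies_col_subset
movies = pd.DataFrame({
    "release_year": [2011, 2013, 2015, 2017, 2019, 2020],
    "duration":     [110,  105,  100,   98,   95,   92],
})
# A Pearson correlation near -1 indicates a strong downward trend
print(movies["release_year"].corr(movies["duration"]))
```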
A notebook to house code that I spent way too much time trying to get to work.
```
### Text Embedding- One Hot Encoding
# from tensorflow.keras.preprocessing.sequence import pad_sequences
# vocab_size = num_words_corpus
# text_df['encoded_docs'] = text_df['cleaned_text'].apply(lambda x: one_hot(x, vocab_size))
# text_df['encoded_docs']
# text_df['encoded_docs_length'] = text_df['encoded_docs'].apply(lambda x: len(x))
# max_length_article = text_df['encoded_docs_length'].max()
# max_length_article
# # targets
# # vocab_size = num_words_sums
# text_df['encoded_sums'] = text_df['gensim_summaries'].apply(lambda x: one_hot(x, vocab_size))
# text_df['encoded_sums_length'] = text_df['encoded_sums'].apply(lambda x: len(x))
# max_length_summary = text_df['encoded_sums_length'].max()
# max_length_summary
# text_df['encoded_sums'].shape
# padded_docs = pad_sequences(text_df['encoded_docs'], maxlen=max_length_summary, padding='post')
# padded_docs
# Model 1: Simple Sequential
# vocab_size = X_voc_size
# src_text_length = max_length_article
# sum_text_length = max_length_summary
# # encoder input model
# inputs = Input(shape=(src_text_length,))
# encoder1 = Embedding(X_voc_size, 128, trainable=True)(inputs)
# encoder2 = LSTM(128)(encoder1)
# encoder3 = RepeatVector(sum_text_length)(encoder2)
# # decoder ouput model
# decoder1 = LSTM(128, return_sequences=True)(encoder3)
# outputs = TimeDistributed(Dense(X_voc_size, activation='softmax'))(decoder1)
# # add it all up
# model = Model(inputs=inputs, outputs=outputs)
# model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop')
# model.fit(X_train_seq_pad,
# y = y_train_seq_pad.reshape(y_train_seq_pad.shape[0],y_train_seq_pad.shape[1], 1),
# batch_size=20,
# epochs=5,
# #validation_data = (X_val_seq_pad, y_val_seq_pad.reshape(y_val_seq_pad.shape[0],y_val_seq_pad.shape[1], 1))
# )
# Model 2: Recursive Model
# inputs1 = Input(shape=(src_text_length,))
# article1 = Embedding(vocab_size, 128)(inputs1)
# article2 = LSTM(128)(article1)
# article3 = RepeatVector(sum_text_length)(article2)
# # summary input model
# inputs2 = Input(shape=(sum_text_length,))
# summ1 = Embedding(vocab_size, 128)(inputs2)
# # decoder model
# decoder1 = concatenate([article3, summ1])
# decoder2 = LSTM(128, return_sequences=True)(decoder1)
# outputs = TimeDistributed(Dense(1, activation='softmax'))(decoder2)
# # tie it together [article, summary] [word]
# model = Model(inputs=[inputs1, inputs2], outputs=outputs)
# model.compile(loss='categorical_crossentropy', optimizer='adam')
# model.fit([X_train_seq_pad,
# y_train_seq_pad.reshape(y_train_seq_pad.shape[0],y_train_seq_pad.shape[1], 1)],
# y=y_train_seq_pad.reshape(y_train_seq_pad.shape[0],y_train_seq_pad.shape[1], 1),
# batch_size=20,
# epochs=1,
# #validation_data = (X_val_seq_pad, y_val_seq_pad.reshape(y_val_seq_pad.shape[0],y_val_seq_pad.shape[1], 1))
# )
# K.clear_session()
# latent_dim = 300
# embedding_dim=100
# # Encoder
# encoder_inputs = Input(shape=(src_text_length,))
# #embedding layer
# enc_emb = Embedding(X_voc_size, embedding_dim,trainable=True)(encoder_inputs)
# #encoder lstm 1
# encoder_lstm1 = LSTM(latent_dim,return_sequences=True,return_state=True,dropout=0.4,recurrent_dropout=0.4)
# encoder_output1, state_h1, state_c1 = encoder_lstm1(enc_emb)
# #encoder lstm 2
# encoder_lstm2 = LSTM(latent_dim,return_sequences=True,return_state=True,dropout=0.4,recurrent_dropout=0.4)
# encoder_output2, state_h2, state_c2 = encoder_lstm2(encoder_output1)
# #encoder lstm 3
# encoder_lstm3=LSTM(latent_dim, return_state=True, return_sequences=True,dropout=0.4,recurrent_dropout=0.4)
# encoder_outputs, state_h, state_c= encoder_lstm3(encoder_output2)
# # Set up the decoder, using `encoder_states` as initial state.
# decoder_inputs = Input(shape=(None,))
# #embedding layer
# dec_emb_layer = Embedding(y_voc_size, embedding_dim,trainable=True)
# dec_emb = dec_emb_layer(decoder_inputs)
# decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True,dropout=0.4,recurrent_dropout=0.2)
# decoder_outputs,decoder_fwd_state, decoder_back_state = decoder_lstm(dec_emb,initial_state=[state_h, state_c])
# # Attention layer
# attn_layer = AttentionLayer(name='attention_layer')
# attn_out, attn_states = attn_layer([encoder_outputs, decoder_outputs])
# # Concat attention input and decoder LSTM output
# decoder_concat_input = Concatenate(axis=-1, name='concat_layer')([decoder_outputs, attn_out])
# #dense layer
# decoder_dense = TimeDistributed(Dense(y_voc_size, activation='softmax'))
# decoder_outputs = decoder_dense(decoder_concat_input)
# # Define the model
# model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# model.summary()
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# def fit_model():
# history=model.fit([x_tr, y_tr[:,:-1]],
# y_tr.reshape(y_tr.shape[0],y_tr.shape[1], 1)[:,1:],
# epochs=50,
# batch_size=128,
# validation_data=([x_val,y_val[:,:-1]], y_val.reshape(y_val.shape[0],y_val.shape[1], 1)[:,1:]),
# callbacks=[es]
# )
# return history
# for i in np.arange(0, len(X_train_seq_pad), 50):
# if i+49 <= len(X_train_seq_pad) - 1:
# x_tr = X_train_seq_pad[i:i+49]
# y_tr = y_train_seq_pad[i:i+49]
# history = fit_model()
# plt.plot(history.history['loss'], label='train')
# plt.plot(history.history['val_loss'], label='val')
# else:
# x_tr = X_train_seq_pad[i:(len(X_train_seq_pad) - 1)]
# y_tr = y_train_seq_pad[i:(len(X_train_seq_pad) - 1)]
# history = fit_model()
# plt.plot(history.history['loss'], label='train')
# plt.plot(history.history['val_loss'], label='val')
```
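For reference, the `pad_sequences(..., padding='post')` call the commented code relies on behaves roughly like this pure-Python sketch (here truncating from the end as well, whereas Keras truncates from the front by default):

```python
def pad_post(seqs, maxlen, value=0):
    """Right-pad (or truncate) each sequence to exactly maxlen."""
    return [list(s[:maxlen]) + [value] * max(0, maxlen - len(s)) for s in seqs]

print(pad_post([[1, 2, 3], [4]], maxlen=4))   # [[1, 2, 3, 0], [4, 0, 0, 0]]
print(pad_post([[5, 6, 7, 8, 9]], maxlen=4))  # [[5, 6, 7, 8]]
```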