# Predicting Boston Housing Prices
## Using XGBoost in SageMaker (Hyperparameter Tuning)
_Deep Learning Nanodegree Program | Deployment_
---
As an introduction to using SageMaker's High Level Python API for hyperparameter tuning, we will look again at the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.
The documentation for the high level API can be found on the [ReadTheDocs page](http://sagemaker.readthedocs.io/en/latest/).
## General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we are only interested in creating a tuned model and testing its performance.
## Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start, that means loading all of the Python modules we will need.
```
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
```
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
```
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.predictor import csv_serializer
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
```
## Step 1: Downloading the data
Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
```
boston = load_boston()
```
## Step 2: Preparing and splitting the data
Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
```
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
```
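As a quick arithmetic check on the proportions above: two successive `test_size=0.33` splits leave roughly 45% of the rows for training, 22% for validation, and 33% for testing.

```python
# Fractions of the full dataset produced by the two successive 33% splits.
test_frac = 0.33
train_frac = (1 - 0.33) * (1 - 0.33)   # ~0.45 of all rows
val_frac = (1 - 0.33) * 0.33           # ~0.22 of all rows
print(f"train: {train_frac:.2f}, validation: {val_frac:.2f}, test: {test_frac:.2f}")
```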
## Step 3: Uploading the data files to S3
When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.
### Save the data locally
First we need to create the test, train and validation csv files which we will then upload to S3.
```
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Upload to S3
Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
```
prefix = 'boston-xgboost-tuning-HL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
## Step 4: Train the XGBoost model
Now that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. Unlike in the previous notebooks, instead of training a single model, we will use SageMaker's hyperparameter tuning functionality to train multiple models and use the one that performs the best on the validation set.
To begin with, as in the previous approaches, we will need to construct an estimator object.
```
# As stated above, we use this utility method to construct the image name for the training container.
container = get_image_uri(session.boto_region_name, 'xgboost')
# Now that we know which container to use, we can construct the estimator object.
xgb = sagemaker.estimator.Estimator(container, # The name of the training container
role, # The IAM role to use (our current role in this case)
train_instance_count=1, # The number of instances to use for training
train_instance_type='ml.m4.xlarge', # The type of instance to use for training
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
# Where to save the output (the model artifacts)
sagemaker_session=session) # The current SageMaker session
```
Before beginning the hyperparameter tuning, we should make sure to set any model-specific hyperparameters that we wish to have default values. There are quite a few that can be set when using the XGBoost algorithm; below are just a few of them. If you would like to change the hyperparameters below or modify additional ones, you can find more information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html).
```
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
objective='reg:linear',
early_stopping_rounds=10,
num_round=200)
```
Now that we have our estimator object completely set up, it is time to create the hyperparameter tuner. To do this we need to construct a new object which contains each of the parameters we want SageMaker to tune. In this case, we wish to find the best values for the `max_depth`, `eta`, `min_child_weight`, `subsample`, and `gamma` parameters. Note that for each parameter that we want SageMaker to tune we need to specify both the *type* of the parameter and the *range* of values that parameter may take on.
In addition, we specify the *number* of models to construct (`max_jobs`) and the number of those that can be trained in parallel (`max_parallel_jobs`). In the cell below we have chosen to train `20` models, of which we ask that SageMaker train `3` at a time in parallel. Note that this results in a total of `20` training jobs being executed which can take some time, in this case almost a half hour. With more complicated models this can take even longer so be aware!
```
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 20, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
```
Now that we have our hyperparameter tuner object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method.
```
# This is a wrapper around the location of our train and validation data, to make sure that SageMaker
# knows our data is in csv format.
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
```
As in many of the examples we have seen so far, the `fit()` method takes care of setting up and fitting a number of different models, each with different hyperparameters. If we wish to wait for this process to finish, we can call the `wait()` method.
```
xgb_hyperparameter_tuner.wait()
```
Once the hyperparameter tuner has finished, we can retrieve information about the best performing model.
```
xgb_hyperparameter_tuner.best_training_job()
```
In addition, since we'd like to set up a batch transform job to test the best model, we can construct a new estimator object from the results of the best training job. The `xgb_attached` object below can now be used as though we constructed an estimator with the best performing hyperparameters and then fit it to our training data.
```
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
```
## Step 5: Test the model
Now that we have our best performing model, we can test it. To do this we will use the batch transform functionality. To start with, we need to build a transformer object from our fit model.
```
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
```
Next we ask SageMaker to begin a batch transform job using our trained model and applying it to the test data we previously stored in S3. We need to make sure to provide SageMaker with the type of data that we are providing to our model, in our case `text/csv`, so that it knows how to serialize our data. In addition, we need to make sure to let SageMaker know how to split our data up into chunks if the entire data set happens to be too large to send to our model all at once.
Note that when we ask SageMaker to do this it will execute the batch transform job in the background. Since we need to wait for the results of this job before we can continue, we use the `wait()` method. An added benefit of this is that we get some output from our batch transform job which lets us know if anything went wrong.
```
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
```
Now that the batch transform job has finished, the resulting output is stored on S3. Since we wish to analyze the output inside of our notebook we can use a bit of notebook magic to copy the output file from its S3 location and save it locally.
```
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
```
To see how well our model works, we can create a simple scatter plot between the predicted and actual values. If the model were completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
```
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
```
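To put a number on "okay", the scatter plot can be complemented with a simple error metric. A minimal sketch (stand-in values are used here for illustration; in the notebook you would pass `Y_test` and `Y_pred` instead):

```python
import numpy as np
import pandas as pd

def rmse(y_true, y_pred):
    """Root mean squared error between two aligned 1-D series."""
    a = np.asarray(y_true, dtype=float).ravel()
    b = np.asarray(y_pred, dtype=float).ravel()
    return float(np.sqrt(np.mean(np.square(a - b))))

# Stand-in values for illustration; in the notebook, pass Y_test and Y_pred.
y_true = pd.Series([24.0, 21.6, 34.7])
y_hat = pd.Series([23.0, 22.6, 33.7])
print(rmse(y_true, y_hat))  # 1.0
```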
## Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
```
# Noise Estimation using Correlation Methods
In this tutorial, we will demonstrate how to use 2-channel and 3-channel correlation methods, `kontrol.spectral.two_channel_correlation()` and `kontrol.spectral.three_channel_correlation()`, to estimate sensor self noise. Library reference is available [here](https://kontrol.readthedocs.io/en/latest/main_utilities.html#spectral-analysis-functions).
A description of this method is available in the baseline method section of [this document](https://github.com/gw-vis/vis-commissioning-tex). We will also use the notations from that document.
Let's say we have three sensors, with readouts $y_1(t)$, $y_2(t)$, and $y_3(t)$.
We place them in a position such that they sense a coherent signal
$x(t)=\Re\left(Ae^{\left(\sigma+i\omega_0e^{\gamma t}\right)t}\right)$,
where $i$ is the imaginary unit, $A$ is a real number, $\sigma$ and $\gamma$ are negative real numbers, and $\omega_0$ is a positive real number.
The first two sensors have dynamics
$H_1(s)=H_2(s)=\frac{s^2}{s^2+2\zeta\omega_ns+\omega_n^2}$,
where $\zeta>0$ and $\omega_n>0$, and the third sensor has dynamics
$H_3(s)=\frac{\omega_m}{s+\omega_m}$.
The sensors have noise dynamics
$N_i(s)=G_i(s)W_i(s)$,
where $i=1,2,3$, $W_i(s)$ is white noise with unit amplitude, and $G_i(s)$ is the noise dynamics of the sensors. Here, the $W_i(s)$ are mutually uncorrelated.
Let's say
$G_1(s)=G_2(s)=\frac{a_1}{s+\epsilon_1}$ and $G_3(s)=\frac{a_3}{(s+\epsilon_3)^2}$,
where $a_1$, $a_3$, $\epsilon_1$, and $\epsilon_3$ are real numbers, and $\epsilon_1\approx\epsilon_3\ll\omega_0$.
The readouts are then simply
$y_i(t) = \mathcal{L}^{-1}\left\{X(s)H_i(s) + N_i(s)\right\}$.
```
import control
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(123)
# Time axis and sampling frequency
fs = 128
t0 = 0
t_end = 512
t = np.arange(t0, t_end, 1/fs)
# The coherent signal
A = 1
sigma = -.01
gamma = -0.1
omega_0 = 10*2*np.pi
x = A*np.exp((sigma + 1j*omega_0*np.exp(gamma*t)) * t).real
# The sensor dynamics.
zeta = 1
omega_n = 1*2*np.pi
omega_m = 10
s = control.tf("s")
H1 = s**2 / (s**2 + 2*zeta*omega_n*s + omega_n**2)
H2 = H1
H3 = omega_m / (s+omega_m)
# Signals sensed by the sensors.
_, x1 = control.forced_response(sys=H1, T=t, U=x)
_, x2 = control.forced_response(sys=H2, T=t, U=x)
_, x3 = control.forced_response(sys=H3, T=t, U=x)
# The noises
w1 = np.random.normal(loc=0, scale=1, size=len(t))
w2 = np.random.normal(loc=0, scale=1, size=len(t))
w3 = np.random.normal(loc=0, scale=1, size=len(t))
a1 = 0.5
a3 = 5
epsilon_1 = omega_0/100
epsilon_3 = omega_0/200
G1 = a1 / (s+epsilon_1)
G2 = G1
G3 = a3 / (s+epsilon_3)**2
_, n1 = control.forced_response(sys=G1, T=t, U=w1)
_, n2 = control.forced_response(sys=G2, T=t, U=w2)
_, n3 = control.forced_response(sys=G3, T=t, U=w3)
# The readouts
y1 = x1 + n1
y2 = x2 + n2
y3 = x3 + n3
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(t, x, label="Coherent signal $x(t)$", lw=3)
plt.plot(t, y1, "--", label="Readout $y_1(t)$", lw=1)
plt.plot(t, y2, "--", label="Readout $y_2(t)$", lw=1)
plt.plot(t, y3, "k--", label="Readout $y_3(t)$", lw=1)
plt.legend(loc=0)
plt.ylabel("Amplitude (a.u.)")
plt.xlabel("Time (s)")
plt.subplot(122, title="Noises")
plt.plot(1,1) # just to shift the colors.
plt.plot(t, n1, label="noise in $y_1$")
plt.plot(t, n2, label="noise in $y_2$")
plt.plot(t, n3, "k", label="noise in $y_3$")
plt.legend(loc=0)
plt.ylabel("Amplitude (a.u.)")
plt.xlabel("Time (s)")
plt.show()
```
Let's plot the PSDs
```
import scipy.signal
f, P_x = scipy.signal.welch(x, fs=fs)
f, P_n1 = scipy.signal.welch(n1, fs=fs)
f, P_n2 = scipy.signal.welch(n2, fs=fs)
f, P_n3 = scipy.signal.welch(n3, fs=fs)
f, P_y1 = scipy.signal.welch(y1, fs=fs)
f, P_y2 = scipy.signal.welch(y2, fs=fs)
f, P_y3 = scipy.signal.welch(y3, fs=fs)
plt.figure(figsize=(10, 5))
plt.loglog(f, P_x, label="Signal $x(t)$", lw=3)
plt.loglog(f, P_n1, label="Noise $n_1(t)$", lw=3)
plt.loglog(f, P_n2, "--", label="Noise $n_2(t)$", lw=2)
plt.loglog(f, P_n3, label="Noise $n_3(t)$", lw=3)
plt.loglog(f, P_y1, "k--", label="Readout $y_1(t)$", lw=2)
plt.loglog(f, P_y2, "g-.", label="Readout $y_2(t)$", lw=2)
plt.loglog(f, P_y3, "b--", label="Readout $y_3(t)$", lw=2)
plt.legend(loc=0)
plt.grid(which="both")
plt.ylim(1e-9, 1e-1)
plt.xlim(0.5, 10)
plt.ylabel("Power spectral density (a.u./Hz)")
plt.xlabel("Frequency (Hz)")
plt.show()
```
## Two-channel method
Sensors 1 and 2 have the same dynamics and noise PSD.
Let's see if we can predict the two noises using the two-channel correlation method.
Here, we will use Kontrol spectral analysis utilities.
```
import kontrol
# Use the time series directly.
P_n1_2channel = kontrol.spectral.two_channel_correlation(y1, y2, fs=fs)
P_n2_2channel = kontrol.spectral.two_channel_correlation(y2, y1, fs=fs)
# # Alternatively, use the PSD and coherence directly.
# _, coherence_12 = scipy.signal.coherence(y1, y2, fs=fs)
# _, coherence_21 = scipy.signal.coherence(y2, y1, fs=fs) # This is actually the same as coherence_12
# P_n1_2channel_coh = kontrol.spectral.two_channel_correlation(P_y1, P_y2, fs=fs, coherence=coherence_12)
# P_n2_2channel_coh = kontrol.spectral.two_channel_correlation(P_y2, P_y1, fs=fs, coherence=coherence_21)
# # Alternatively, use the PSD and cross power spectral density directly.
# _, cpsd_12 = scipy.signal.csd(y1, y2, fs=fs)
# _, cpsd_21 = scipy.signal.csd(y2, y1, fs=fs)
# P_n1_2channel_cpsd = kontrol.spectral.two_channel_correlation(P_y1, P_y2, fs=fs, cpsd=cpsd_12)
# P_n2_2channel_cpsd = kontrol.spectral.two_channel_correlation(P_y2, P_y1, fs=fs, cpsd=cpsd_21)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.loglog(f, P_n1, label="Sensor noise 1")
plt.loglog(f, P_n1_2channel, label="Predicted using 2-channel correlation method")
plt.legend(loc=0)
plt.grid(which="both")
plt.ylim(1e-7, 1e-3)
plt.xlim(0.5, 10)
plt.ylabel("Power spectral density (a.u./Hz)")
plt.xlabel("Frequency (Hz)")
plt.subplot(122)
plt.loglog(f, P_n2, label="Sensor noise 2")
plt.loglog(f, P_n2_2channel, label="Predicted using 2-channel correlation method")
plt.legend(loc=0)
plt.grid(which="both")
plt.ylim(1e-7, 1e-3)
plt.xlim(0.5, 10)
plt.ylabel("Power spectral density (a.u./Hz)")
plt.xlabel("Frequency (Hz)")
plt.show()
```
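Under the hood, the two-channel method exploits the fact that, for two sensors with identical dynamics sensing the same signal with mutually uncorrelated noises, the cross power spectral density contains only the coherent part, so the noise PSD follows by subtraction: $P_{n_1} = P_{y_1} - |P_{y_1 y_2}|$. A minimal algebraic sketch of this idea (it reproduces the principle, not necessarily kontrol's exact implementation):

```python
import numpy as np

# Ideal spectra at a few frequencies: identical responses |H|^2, a coherent
# signal PSD P_x, and a known noise PSD P_n1 for sensor 1.
H_mag_sq = np.array([1.0, 0.5, 0.25])
P_x = np.array([10.0, 10.0, 10.0])
P_n1 = np.array([0.3, 0.2, 0.1])

P_y1 = H_mag_sq * P_x + P_n1   # readout PSD of sensor 1
P_y1y2 = H_mag_sq * P_x        # CSD of y1, y2: uncorrelated noises drop out

P_n1_est = P_y1 - np.abs(P_y1y2)   # two-channel estimate
print(P_n1_est)   # recovers P_n1 (up to floating-point error)
```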
As can be seen, the 2-channel method works perfectly in predicting the sensor noises using only the readouts.
Out of curiosity, let's see what happens if we instead use sensor 3, which is not identical to sensors 1 and 2.
```
P_n1_2channel_from_n3 = kontrol.spectral.two_channel_correlation(y1, y3, fs=fs)
P_n3_2channel_from_n1 = kontrol.spectral.two_channel_correlation(y3, y1, fs=fs)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.loglog(f, P_n1, label="Sensor noise 1")
plt.loglog(f, P_n1_2channel_from_n3, label="Predicted using 2-channel correlation method but with non-identical sensor")
plt.legend(loc=0)
plt.grid(which="both")
plt.ylim(1e-7, 1e-3)
plt.xlim(0.5, 10)
plt.ylabel("Power spectral density (a.u./Hz)")
plt.xlabel("Frequency (Hz)")
plt.subplot(122)
plt.loglog(f, P_n3, label="Sensor noise 3")
plt.loglog(f, P_n3_2channel_from_n1, label="Predicted using 2-channel correlation method but with non-identical sensor")
plt.legend(loc=0)
plt.grid(which="both")
# plt.ylim(1e-7, 1e-3)
plt.xlim(0.5, 10)
plt.ylabel("Power spectral density (a.u./Hz)")
plt.xlabel("Frequency (Hz)")
plt.show()
```
Interestingly, the method somehow recovers the sensor 3 noise more accurately than that of sensor 1, but this could just be a fluke.
## Three-channel correlation method
Now, let's compute the sensors noise using the three-channel method.
```
# Use time series directly
P_n1_3channel = kontrol.spectral.three_channel_correlation(y1, y2, y3, fs=fs)
P_n2_3channel = kontrol.spectral.three_channel_correlation(y2, y1, y3, fs=fs)
P_n3_3channel = kontrol.spectral.three_channel_correlation(y3, y1, y2, fs=fs)
# # Alternatively, use PSD and coherences
# _, coherence_12 = scipy.signal.coherence(y1, y2, fs=fs)
# _, coherence_13 = scipy.signal.coherence(y1, y3, fs=fs)
# _, coherence_21 = scipy.signal.coherence(y2, y1, fs=fs)
# _, coherence_23 = scipy.signal.coherence(y2, y3, fs=fs)
# _, coherence_31 = scipy.signal.coherence(y3, y1, fs=fs)
# _, coherence_32 = scipy.signal.coherence(y3, y2, fs=fs)
# n1_kwargs = {
# "coherence_13": coherence_13,
# "coherence_23": coherence_23,
# "coherence_21": coherence_21,
# }
# # Notice the changes.
# n2_kwargs = {
# "coherence_13": coherence_23,
# "coherence_23": coherence_13,
# "coherence_21": coherence_12,
# }
# n3_kwargs = {
# "coherence_13": coherence_32,
# "coherence_23": coherence_12,
# "coherence_21": coherence_13,
# }
# P_n1_3channel = kontrol.spectral.three_channel_correlation(P_y1, **n1_kwargs)
# P_n2_3channel = kontrol.spectral.three_channel_correlation(P_y2, **n2_kwargs)
# P_n3_3channel = kontrol.spectral.three_channel_correlation(P_y3, **n3_kwargs)
# # And Alternatively, use PSD and cross power spectral densities.
# _, cpsd_12 = scipy.signal.csd(y1, y2, fs=fs)
# _, cpsd_13 = scipy.signal.csd(y1, y3, fs=fs)
# _, cpsd_21 = scipy.signal.csd(y2, y1, fs=fs)
# _, cpsd_23 = scipy.signal.csd(y2, y3, fs=fs)
# _, cpsd_31 = scipy.signal.csd(y3, y1, fs=fs)
# _, cpsd_32 = scipy.signal.csd(y3, y2, fs=fs)
# n1_kwargs = {
# "cpsd_13": cpsd_13,
# "cpsd_23": cpsd_23,
# "cpsd_21": cpsd_21
# }
# n2_kwargs = {
# "cpsd_13": cpsd_23,
# "cpsd_23": cpsd_13,
# "cpsd_21": cpsd_12,
# }
# n3_kwargs = {
# "cpsd_13": cpsd_32,
# "cpsd_23": cpsd_12,
# "cpsd_21": cpsd_13
# }
# P_n1_3channel = kontrol.spectral.three_channel_correlation(P_y1, **n1_kwargs)
# P_n2_3channel = kontrol.spectral.three_channel_correlation(P_y2, **n2_kwargs)
# P_n3_3channel = kontrol.spectral.three_channel_correlation(P_y3, **n3_kwargs)
plt.figure(figsize=(15, 10))
plt.subplot(221)
plt.loglog(f, P_y1, label="Readout 1")
plt.loglog(f, P_n1, label="Sensor noise 1", lw=3)
plt.loglog(f, P_n1_2channel, "--", label="Predicted using 2-channel correlation method.", lw=2)
plt.loglog(f, P_n1_3channel, "k-.", label="Predicted using 3-channel correlation method.", lw=2, markersize=3)
plt.legend(loc=0)
plt.grid(which="both")
plt.ylim(1e-7, 1e-2)
plt.xlim(0.5, 10)
plt.ylabel("Power spectral density (a.u./Hz)")
plt.xlabel("Frequency (Hz)")
plt.subplot(222)
plt.loglog(f, P_y2, label="Readout 2",)
plt.loglog(f, P_n2, label="Sensor noise 2", lw=3)
plt.loglog(f, P_n2_2channel, "--", label="Predicted using 2-channel correlation method.", lw=2)
plt.loglog(f, P_n2_3channel, "k-.", label="Predicted using 3-channel correlation method.", lw=2, markersize=3)
plt.legend(loc=0)
plt.grid(which="both")
plt.ylim(1e-7, 1e-2)
plt.xlim(0.5, 10)
plt.ylabel("Power spectral density (a.u./Hz)")
plt.xlabel("Frequency (Hz)")
plt.subplot(223)
plt.loglog(f, P_y3, label="Readout 3")
plt.loglog(f, P_n3, label="Sensor noise 3", lw=3)
plt.loglog(f, P_n3_2channel_from_n1, "--", label="2-channel correlation method with sensor 1", lw=2)
plt.loglog(f, P_n3_3channel, "k-.", label="Predicted using 3-channel correlation method.", lw=2, markersize=3)
plt.legend(loc=0)
plt.grid(which="both")
plt.ylim(1e-9, 1e-1)
plt.xlim(0.5, 10)
plt.ylabel("Power spectral density (a.u./Hz)")
plt.xlabel("Frequency (Hz)")
# plt.loglog(f, P_n3)
# plt.loglog(f, np.abs(P_n3_3channel))
# plt.loglog(f, P_n3_3channel_coh)
plt.show()
```
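The three-channel method drops the identical-sensor requirement by combining cross spectra so that the unknown sensor dynamics cancel; one standard form is $P_{n_1} = P_{y_1} - \left|P_{y_1 y_3} P_{y_2 y_1} / P_{y_2 y_3}\right|$. A minimal algebraic sketch of the cancellation (again illustrating the principle, not necessarily kontrol's exact implementation):

```python
# Ideal one-frequency spectra with three distinct (complex) sensor responses.
H1, H2, H3 = 1.0 + 0.5j, 0.8 - 0.3j, 0.2 + 0.1j
P_x = 10.0    # coherent signal PSD
P_n1 = 0.25   # true noise PSD of sensor 1

P_y1 = abs(H1) ** 2 * P_x + P_n1
# Cross spectra contain only the coherent part (noises are uncorrelated).
P_13 = H1 * H3.conjugate() * P_x
P_21 = H2 * H1.conjugate() * P_x
P_23 = H2 * H3.conjugate() * P_x

# P_13 * P_21 / P_23 collapses to |H1|^2 * P_x: the other dynamics cancel.
P_n1_est = P_y1 - abs(P_13 * P_21 / P_23)
print(round(P_n1_est, 10))  # 0.25
```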
<img src="NotebookAddons/blackboard-banner.png" width="100%" />
<font face="Calibri">
<br>
<font size="5"> <b>Volcano Source Modeling Using InSAR</b> </font>
<br>
<font size="4"> <b> Franz J Meyer; University of Alaska Fairbanks </b> <br>
</font>
<img style="padding: 7px" src="NotebookAddons/UAFLogo_A_647.png" width="170" align="right" /> <font size="3"> This notebook will introduce you to the intersection between Radar Remote Sensing and Inverse Modeling. Radar Remote Sensing can provide us with geodetic observations of surface deformation. Inverse Modeling helps us understand the physical causes behind an observed deformation.
To illuminate the handoff from geodesy to geophysics, this notebook will show how to use InSAR observations to determine the most likely parameters of a volcanic magma source underneath Okmok volcano, Alaska. We will use a Mogi source model to describe the physics behind deformation at Okmok. We will again use our **Jupyter Notebook** framework implemented within the Amazon Web Services (AWS) cloud to work on this exercise. <br><br>
This notebook will introduce the following data analysis concepts:
- A Mogi Source Model describing volcanic source geometry and physics
- How to use the "grid search" method to perform a pseudo-inversion of a Mogi source model
- How to solve for the best fitting source parameters using modeling with InSAR data
</font>
<br>
<font size="3">To download, please select the following options in the main menu of the notebook interface:
<br>
<ol type="1">
<li><font color='rgba(200,0,0,0.2)'> <b> Save your notebook with all of its content</b></font> by selecting <i> File / Save and Checkpoint </i> </li>
<li><font color='rgba(200,0,0,0.2)'> <b>To export in Notebook format</b></font>, click on <i>File / Download as / Notebook (.ipynb)</i></li>
<li><font color='rgba(200,0,0,0.2)'> <b>To export in PDF format</b></font>, click on <i>File / Download as / PDF via LaTeX (.pdf) </i></li>
</ol>
</font>
</font>
<hr>
<font face="Calibri" size="5" color="darkred"> <b>Important Note about JupyterHub</b> </font>
<br><br>
<font face="Calibri" size="3"> <b>Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.</b> </font>
```
%%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
"'", window.location, "'" ].join('')
kernel.execute(command)
from IPython.display import Markdown
from IPython.display import display
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/insar_analysis':
display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
display(Markdown(f'<text style=color:red>This notebook should be run using the "insar_analysis" conda environment.</text>'))
display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
display(Markdown(f'<text style=color:red>Select "insar_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
display(Markdown(f'<text style=color:red>If the "insar_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
```
<hr>
<font face="Calibri">
<font size="5"> <b> 0. Importing Relevant Python Packages </b> </font>
<font size="3"> The first step in any notebook is to import the required Python libraries into the Jupyter environment. In this notebook we use the following scientific libraries:
<ol type="1">
<li> <b><a href="http://www.numpy.org/" target="_blank">NumPy</a></b> is one of the principal packages for scientific applications of Python. It is intended for processing large multidimensional arrays. </li>
<li> <b><a href="https://matplotlib.org/index.html" target="_blank">Matplotlib</a></b> is a low-level library for creating two-dimensional diagrams and graphs. With its help, you can build diverse charts, from histograms and scatterplots to non-Cartesian coordinates graphs. </li>
</ol>
</font>
<br>
<font face="Calibri" size="3">The first step is to <b>import all required python modules:</b></font>
```
%%capture
import os # for chdir, getcwd, path.basename, path.exists
import copy
%matplotlib inline
import matplotlib.pylab as plt # for add_subplot, cm.jet, colorbar, figure, grid, imshow, rcParams.update, savefig,
# set_bad, set_clim, set_title, set_xlabel, set_ylabel
import numpy as np # for arange, arctan, concatenate, cos, fromfile, isnan, ma.masked_value, min, pi, power, reshape,
# sqrt, square, sin, sum, tile, transpose, where, zeros
import asf_notebook as asfn
asfn.jupytertheme_matplotlib_format()
```
<hr>
<font face="Calibri">
<font size="5"> <b> 1. InSAR at Okmok Volcano, Alaska </b> </font>
<img style="padding: 7px" src="NotebookAddons/Lab6-OkmokdefoGPS.JPG" width="550" align="right" /><font size="3"> Okmok is one of the more active volcanoes in Alaska’s Aleutian Chain. Its last (known) eruption was in the summer of 2008. Okmok is interesting from an InSAR perspective as it inflates and deflates heavily as magma moves around in its magmatic source located roughly 2.5 km underneath the surface. To learn more about Okmok volcano and its eruptive history, please visit the very informative site of the <a href="https://avo.alaska.edu/volcanoes/activity.php?volcname=Okmok&eruptionid=604&page=basic" target="_blank">Alaska Volcano Observatory</a>.
This notebook uses a pair of C-band ERS-2 SAR images acquired on Aug 18, 2000 and Jul 19, 2002 to analyze the properties of a volcanic source that was responsible for an inflation of Okmok volcano of more than 3 cm near its summit. The figure to the right shows the Okmok surface deformation as measured by GPS data from field campaigns conducted in 2000 and 2002. The plots show that the deformation measured at the site is consistent with that created by an inflating point (Mogi) source.<br>
<b>The primary goal of the problem set is to estimate values for four unknown model parameters describing a source process beneath a volcano.</b> The notebook uses real InSAR data from Okmok volcano, so you should get some sense for how remote sensing can be used to infer physical processes at volcanoes. We will assume that the source can be modeled as an inflating point source (a so-called Mogi source; see <a href="https://radar.community.uaf.edu/files/2019/03/2019-Lecture14_UsingInSARinGeophysics.pdf" target="_blank">Lecture 14</a>) and will use a grid-search method for finding the source model parameters (3D source location and volume of magma influx) that best describe our InSAR-observed surface deformation.
</font>
</font>
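As a preview of the modeling ahead: for a Mogi point source at depth $d$ with volume change $\Delta V$ (assuming a Poisson ratio of 0.25), the vertical surface displacement at radial distance $r$ is $u_z = \frac{3\Delta V}{4\pi}\frac{d}{(r^2+d^2)^{3/2}}$, and a grid search simply evaluates this forward model over candidate parameters and keeps the one with the smallest misfit. A minimal 1-D sketch with synthetic data (the parameter values here are illustrative, not Okmok's):

```python
import numpy as np

def mogi_uz(r, depth, dV):
    """Vertical surface displacement of a Mogi point source (Poisson ratio 0.25)."""
    return 3.0 * dV / (4.0 * np.pi) * depth / (r**2 + depth**2) ** 1.5

# Synthetic "observed" profile from a known source, then a 1-D grid search over
# depth (holding the volume change fixed for brevity; the notebook searches the
# 3-D source location plus volume in the same way).
r = np.linspace(0.0, 10e3, 50)           # radial distances [m]
obs = mogi_uz(r, depth=2500.0, dV=1e6)   # truth: 2.5 km depth, 1e6 m^3

depths = np.arange(1000.0, 5000.0, 100.0)
misfit = [np.sum((obs - mogi_uz(r, d, 1e6)) ** 2) for d in depths]
best = depths[int(np.argmin(misfit))]
print(best)  # 2500.0
```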
<hr>
<font face="Calibri" size="5"> <b> 2. Download and Plot the Observed Deformation Map </b>
<font face="Calibri" size="4"> <b> 2.1 Download Data from AWS S3 Storage Bucket</b><br>
<font size="3"> We are using a deformation map created from C-band ERS-2 SAR images acquired on Aug 18, 2000 and Jul 19, 2002. The deformation map is <b>available to you on the Class AWS S3 data storage bucket:</b> </font>
<font face="Calibri" size="3"><b>Create a working directory for this analysis and change into it.</b></font>
```
path = "/home/jovyan/notebooks/SAR_Training/English/Master/data_InSAR_volcano_source_modeling"
asfn.new_directory(path)
os.chdir(path)
print(f"Current working directory: {os.getcwd()}")
```
<font face="Calibri" size="3"><b>Download the deformation map from the AWS-S3 bucket:</b></font>
```
deformation_map_path = 's3://asf-jupyter-data/E451_20000818_20020719.unw'
deformation_map = os.path.basename(deformation_map_path)
!aws --region=us-east-1 --no-sign-request s3 cp $deformation_map_path $deformation_map
```
<font face="Calibri" size="3"><b>Define some variables:</b></font>
```
sample = 1100
line = 980
posting = 40.0
half_wave = 28.3
```
<font face="Calibri" size="3"><b>Read the dataset into the notebook</b>, storing our observed deformation map in the variable <i>"observed_deformation_map"</i>: </font>
```
if asfn.path_exists(deformation_map):
with open (deformation_map, 'rb') as f:
coh = np.fromfile(f, dtype='>f', count=-1)
observed_deformation_map = np.reshape(coh, (line, sample))
```
<font face="Calibri" size="3"><b>Change the units to mm and replace all NaNs with 0</b> (with <i>half_wave</i> = 28.3 mm, one full phase cycle corresponds to 28.3 mm of line-of-sight displacement):</font>
```
observed_deformation_map = observed_deformation_map*half_wave/2.0/np.pi
where_are_NaNs = np.isnan(observed_deformation_map)
observed_deformation_map[where_are_NaNs] = 0
```
<font face="Calibri" size="3"> <b>Create a mask</b> that removes invalid samples (low coherence) from the deformation map: </font>
```
observed_deformation_map_m = np.ma.masked_where(observed_deformation_map==0, observed_deformation_map)
```
<hr>
<font face="Calibri" size="4"> <b> 2.2 Visualize The Deformation Map </b>
<font size="3"> We will visualize the deformation map both in units of [mm] and as a rewrapped interferogram.</font>
<br><br>
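<font face="Calibri" size="3">Rewrapping a displacement map back into interferogram fringes amounts to converting displacement into phase and folding the result into one $2\pi$ interval. Below is a minimal, unit-explicit sketch of that idea (the function name <i>rewrap</i> and the 56.6 mm C-band wavelength are illustrative assumptions; the plotting function later in this notebook applies its own ad-hoc scaling for display):</font>
```
import numpy as np

def rewrap(displacement, wavelength=56.6):
    """Re-wrap displacement into interferometric phase in [-pi, pi).

    displacement and wavelength must share units; wavelength=56.6 (mm)
    is the assumed C-band value. One fringe corresponds to a
    line-of-sight displacement of half a wavelength.
    """
    phase = displacement * 4.0 * np.pi / wavelength
    return (phase + np.pi) % (2.0 * np.pi) - np.pi

# Half a wavelength of line-of-sight motion (one fringe) wraps back to zero phase.
one_fringe = rewrap(28.3)
```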
<font size="3"><b>Write a function that calculates the bounding box.</b></font>
```
def extents(vector_component):
delta = vector_component[1] - vector_component[0]
return [vector_component[0] - delta/2, vector_component[-1] + delta/2]
```
<font face="Calibri" size="3"><b>Create a directory in which to store the plots we are about to make, and move into it:</b></font>
```
os.chdir(path)
product_path = 'plots'
asfn.new_directory(product_path)
if asfn.path_exists(product_path) and os.getcwd() != f"{path}/{product_path}":
os.chdir(product_path)
print(f"Current working directory: {os.getcwd()}")
```
<font size="3"><b>Write a plotting function</b>:</font>
```
def plot_model(infile, line, sample, posting, output_filename=None, dpi=72):
# Calculate the bounding box
extent_xvec = extents((np.arange(1, sample*posting, posting)) / 1000)
extent_yvec = extents((np.arange(1, line*posting, posting)) / 1000)
extent_xy = extent_xvec + extent_yvec
plt.rcParams.update({'font.size': 14})
inwrapped = (infile/10 + np.pi) % (2*np.pi) - np.pi
cmap = copy.copy(plt.cm.get_cmap("jet"))
cmap.set_bad('white', 1.)
# Plot displacement
fig = plt.figure(figsize=(16, 8))
ax1 = fig.add_subplot(1, 2, 1)
im = ax1.imshow(infile, interpolation='nearest', cmap=cmap, extent=extent_xy, origin='upper')
cbar = ax1.figure.colorbar(im, ax=ax1, orientation='horizontal')
ax1.set_title("Displacement in look direction [mm]")
ax1.set_xlabel("Easting [km]")
ax1.set_ylabel("Northing [km]")
plt.grid()
# Plot interferogram
im.set_clim(-30, 30)
ax2 = fig.add_subplot(1, 2, 2)
im = ax2.imshow(inwrapped, interpolation='nearest', cmap=cmap, extent=extent_xy, origin='upper')
cbar = ax2.figure.colorbar(im, ax=ax2, orientation='horizontal')
ax2.set_title("Interferogram phase [rad]")
ax2.set_xlabel("Easting [km]")
ax2.set_ylabel("Northing [km]")
plt.grid()
if output_filename:
plt.savefig(output_filename, dpi=dpi)
```
<font face="Calibri" size="3">Call plot_model() to <b>plot our observed deformation map:</b> </font>
```
plot_model(observed_deformation_map_m, line, sample, posting, output_filename='Okmok-inflation-observation.png', dpi=200)
```
<hr>
<font face="Calibri" size="5"> <b> 3. The Mogi Source Model and InSAR</b>
<font face="Calibri" size="4"> <b> 3.1 The Mogi Equations</b><br>
<font size="3"> The Mogi model provides the 3D ground displacement, $u(x,y,z)$, due to an inflating source at location $(x_s,y_s,z_s)$ with volume change $V$:
\begin{equation}
u(x,y,z)=\frac{1}{\pi}(1-\nu)\cdot V\Big(\frac{x-x_s}{r(x,y,z)^3},\frac{y-y_s}{r(x,y,z)^3},\frac{z-z_s}{r(x,y,z)^3}\Big)
\end{equation}
<br>
\begin{equation}
r(x,y,z)=\sqrt{(x-x_s)^2+(y-y_s)^2+(z-z_s)^2}
\end{equation}
where $r$ is the distance from the Mogi source to $(x,y,z)$, and $\nu$ is the Poisson's ratio of the halfspace. Poisson's ratio describes how rocks deform when put under stress (e.g., pressure). It is affected by temperature, the ratio of liquid to solid components, and the composition of the rock. <b>In our problem, we will assume that $\nu$ is fixed</b>.
</font>
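<font face="Calibri" size="3">The Mogi equation above can be evaluated directly with a few lines of NumPy. The sketch below (the function name and the $\nu = 0.25$ Poisson's ratio are illustrative assumptions) computes the surface displacement at $z = 0$ for a source at depth $z_s$:</font>
```
import numpy as np

def mogi_displacement(x, y, xs, ys, zs, dV, nu=0.25):
    """Surface (z = 0) displacement of a point (Mogi) source, following
    the equation above. zs is the source depth (positive down), dV the
    volume change; nu = 0.25 is an assumed crustal Poisson's ratio.
    Positions, depth, and dV must use consistent units (e.g. km, km^3).
    """
    dx = x - xs
    dy = y - ys
    dz = zs  # z - z_s with the observation at z = 0 and the source at depth zs
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    c = (1.0 - nu) * dV / np.pi
    return c * dx / r**3, c * dy / r**3, c * dz / r**3

# Directly above the source the horizontal components vanish and the
# uplift is (1 - nu) * dV / (pi * zs^2).
ux, uy, uz = mogi_displacement(0.0, 0.0, 0.0, 0.0, 2.5, 0.003)
```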
<hr>
<font face="Calibri" size="4"> <b> 3.2 Projecting Mogi Deformation to InSAR Line-of-Sight</b><br>
<font size="3"> In our example, the $x$-axis points east, $y$ points north, and $z$ points up. However, in the code the input values for $z$ are assumed to be depth, such that the Mogi source is at depth $z_s > 0$. The observed interferogram is already corrected for the effect of topography, so the observations can be considered to be at $z = 0$.
<img style="padding: 7px" src="NotebookAddons/Lab6-LOSprojection.JPG" width="650" align="center" />
The satellite “sees” a projection of the 3D ground displacement, $u$, onto the look vector, $\hat{L}$, which points from the satellite to the target. Therefore, we are actually interested in the (signed magnitude of the) projection of $u$ onto $\hat{L}$ (right). This is given by
\begin{array}{lcl} proj_{\hat{L}}u & = & (u^T\hat{L})\hat{L} \\ u^T\hat{L} & = & u \cdot \hat{L} = |u||\hat{L}|\cos(\alpha) = |u|\cos(\alpha) \\ & = & u_x\hat{L}_x+ u_y\hat{L}_y + u_z\hat{L}_z \end{array}
where the look vector is $\hat{L}=(\sin(l) \cdot \cos(t), -\sin(l) \cdot \sin(t), -\cos(l))$, with $l$ the look angle measured from the nadir direction and $t$ the satellite track angle measured clockwise from geographic north. All vectors are represented in an east-north-up basis.
Our forward model takes a Mogi source, $(x_s,y_s,z_s,V)$, and computes the look displacement at any given $(x, y, z)$ point. If we represent the <i>i</i>th point on our surface grid by $x_i = (x_i,y_i,z_i)$, then the displacement vector is $u_i = u(x_i, y_i, z_i)$, and the look displacement is
\begin{equation}
d_i = u_i \cdot \hat{L}
\end{equation}
<br>
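<font face="Calibri" size="3">The projection above is a plain dot product with a unit look vector. A minimal NumPy sketch (the function names are illustrative assumptions; the sign convention follows the expression for $\hat{L}$ above):</font>
```
import numpy as np

def look_vector(look_angle_deg, track_angle_deg):
    """Unit look vector L = (sin l cos t, -sin l sin t, -cos l) in an
    east-north-up basis, following the expression above."""
    l = np.deg2rad(look_angle_deg)
    t = np.deg2rad(track_angle_deg)
    return np.array([np.sin(l) * np.cos(t),
                     -np.sin(l) * np.sin(t),
                     -np.cos(l)])

def los_displacement(u, L):
    """Signed magnitude of the projection of displacement u onto L."""
    return np.dot(u, L)

# Pure uplift, u = (0, 0, 1), seen at a 23-degree look angle projects to
# -cos(23 deg): with this convention, motion toward the satellite is negative.
L = look_vector(23.0, -13.3)
d = los_displacement(np.array([0.0, 0.0, 1.0]), L)
```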
<hr>
<font size="4"> <b> 3.3 Defining the Mogi Forward Model</b><br></font>
<font size="3">We can now represent the Mogi <i>forward problem</i> as
\begin{equation}
g(m) = d
\end{equation}
where $g(·)$ describes the forward model in the very first equation in this notebook, $m$ is the (unknown) Mogi model, and $d$ is the predicted interferogram. The following code cells calculate the Mogi forward model according to the equations given above:
</font>
<font face="Calibri" size="3"><b>Write a function to calculate a forward model for a Mogi source.</b> </font>
```
def calc_forward_model_mogi(n1, e1, depth, delta_volume, northing, easting, plook):
# This geophysical coefficient is needed to describe how pressure relates to volume change
displacement_coefficient = (1e6*delta_volume*3)/(np.pi*4)
# Calculating the horizontal distance from every point in the deformation map to the x/y source location
d_mat = np.sqrt(np.square(northing-n1) + np.square(easting-e1))
# denominator of displacement field for mogi source
tmp_hyp = np.power(np.square(d_mat) + np.square(depth),1.5)
# horizontal displacement
horizontal_displacement = displacement_coefficient * d_mat / tmp_hyp
# vertical displacement
vertical_displacement = displacement_coefficient * depth / tmp_hyp
# azimuthal angle
azimuth = np.arctan2((easting-e1), (northing-n1))
# compute north and east displacement from horizontal displacement and azimuth angle
east_displacement = np.sin(azimuth) * horizontal_displacement
north_displacement = np.cos(azimuth) * horizontal_displacement
# project displacement field onto look vector
temp = np.concatenate((east_displacement, north_displacement, vertical_displacement), axis=1)
delta_range = temp.dot(np.transpose([plook]))
delta_range = -1.0 * delta_range
return delta_range
```
<font face="Calibri" size="3"><b>Write a function to create simulated deformation data based on Mogi Source Model parameters:</b> </font>
```
def deformation_data_from_mogi(x, y, z, volume, iplot, imask):
# Organizing model parameters
bvc = [x, y, z, volume, 0, 0, 0, 0]
bvc = np.asarray(bvc, dtype=object)
bvc = np.transpose(bvc)
# Setting acquisition parameters
track = -13.3*np.pi / 180.0
look = 23.0*np.pi / 180.0
plook = [-np.sin(look)*np.cos(track), np.sin(look)*np.sin(track), np.cos(look)]
# Defining easting and northing vectors
northing = np.arange(0, (line)*posting, posting) / 1000
easting = np.arange(0, (sample)*posting, posting) / 1000
northing_mat = np.tile(northing, (sample, 1))
easting_mat = np.transpose(np.tile(easting, (line, 1)))
northing_vec = np.reshape(northing_mat, (line*sample, 1))
easting_vec = np.reshape(easting_mat, (line*sample, 1))
# Handing coordinates and model parameters over to the rngchg_mogi function
calc_range = calc_forward_model_mogi(bvc[1], bvc[0], bvc[2], bvc[3], northing_vec, easting_vec, plook)
# Reshaping surface deformation data derived via calc_forward_model_mogi()
surface_deformation = np.reshape(calc_range, (sample,line))
# return rotated surface deformation
return np.transpose(np.fliplr(surface_deformation))
```
<hr>
<font face="Calibri" size="4"> <b> 3.4 Plotting The Mogi Forward Model</b><br></font>
<font face="Calibri" size="3">The cell below plots several Mogi forward models by varying some of the four main Mogi modeling parameters $(x_s,y_s,z_s,V)$.
The examples below fix the <i>depth</i> parameter to $z_s = 2.58 km$ and the <i>volume</i> change parameter to $volume = 0.0034 km^3$. We then vary the <i>easting</i> and <i>northing</i> parameters $x_s$ and $y_s$ to demonstrate how the model predictions vary when model parameters are changed.</font>
<br><br>
<font face="Calibri" size="3"><b>Run the first example:</b> </font>
```
plt.rcParams.update({'font.size': 14})
extent_x = extents((np.arange(1, sample*posting, posting))/1000)
extent_y = extents((np.arange(1, line*posting, posting))/1000)
extent_xy = extent_x + extent_y
xs = np.arange(18, 24.2, 0.4)
ys = np.arange(20, 24.2, 0.4)
zs = 2.58
volume = 0.0034
xa = [0, 7, 15]
ya = [0, 5, 10]
fig = plt.figure(figsize=(18, 18))
cmap = copy.copy(plt.cm.get_cmap("jet"))
subplot_index = 1
for k in xa:
for l in ya:
ax = fig.add_subplot(3, 3, subplot_index)
predicted_deformation_map = deformation_data_from_mogi(xs[k], ys[l], zs, volume, 0, 0)
predicted_deformation_map_m = np.ma.masked_where(observed_deformation_map==0, predicted_deformation_map)
im = ax.imshow(predicted_deformation_map_m, cmap=cmap, extent=extent_xy)
cbar = ax.figure.colorbar(im, ax=ax, orientation='horizontal')
plt.grid()
im.set_clim(-30, 30)
ax.plot(xs[k],ys[l], 'k*', markersize=25, markerfacecolor='w')
ax.set_title('Source: X=%4.2fkm; Y=%4.2fkm' % (xs[k], ys[l]))
ax.set_xlabel("Easting [km]")
ax.set_ylabel("Northing [km]")
subplot_index += 1
plt.savefig('Model-samples-3by3.png', dpi=200, transparent=False)
```
<hr>
<font face="Calibri" size="5"> <b> 4. Solving the [Pseudo]-Inverse Model</b><br></font>
<font face="Calibri" size="3"> The inverse problem seeks to determine the optimal parameters $(\hat{x_s},\hat{y_s},\hat{z_s},\hat{V})$ of the Mogi model $m$ by minimizing the <i>misfit</i> between predictions, $g(m)$, and observations $d^{obs}$ according to
\begin{equation}
\sum{\Big[g(m) - d^{obs}\Big]^2}
\end{equation}
This equation describes misfit using the <i>method of least-squares</i>, a standard approach to approximate the solution of an overdetermined system of equations. We will use a <i>grid-search</i> approach to find the set of model parameters that minimizes the misfit function. The approach is composed of the following processing steps:
<ol>
<li>Loop through the Mogi model parameters,</li>
<li>Calculate the forward model for each set of parameters,</li>
<li>Calculate the misfit $\sum{[g(m) - d^{obs}]^2}$, and</li>
<li>Find the parameter set that minimizes this misfit.</li>
</ol>
</font>
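<font face="Calibri" size="3">Before applying the grid search to the InSAR data, the four steps above can be sketched on a toy 1D problem (the Gaussian forward model and all names here are purely illustrative; the real search below uses the Mogi forward model):</font>
```
import numpy as np

# Toy forward model: a Gaussian "bump" whose center is the unknown parameter.
def g(center, x):
    return np.exp(-(x - center)**2)

x = np.linspace(0, 10, 201)
d_obs = g(6.4, x)  # synthetic observation with known true center

# Steps 1-4: loop over candidate parameters, evaluate the forward model,
# accumulate the least-squares misfit, pick the minimizer.
candidates = np.arange(0, 10.05, 0.05)
misfit = np.array([np.sum((g(c, x) - d_obs)**2) for c in candidates])
best = candidates[np.argmin(misfit)]
```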
<hr>
<font face="Calibri" size="4"> <b> 4.1 Experimenting with Misfit</b></font>
<br><br>
<font face="Calibri" size="3">Let's <b>look at the misfit $\sum{[g(m) - d^{obs}]^2}$ for a number of different model parameter sets $(x_s,y_s,z_s,V)$:</b>
</font>
```
plt.rcParams.update({'font.size': 14})
extent_x = extents((np.arange(1, sample*posting, posting))/1000)
extent_y = extents((np.arange(1, line*posting, posting))/1000)
extent_xy = extent_x + extent_y
xs = np.arange(18, 24.2, 0.4)
ys = np.arange(20, 24.2, 0.4)
zs = 2.58
volume = 0.0034
xa = [0, 7, 15]
ya = [0, 5, 10]
fig = plt.figure(figsize=(18, 18))
cmap = copy.copy(plt.cm.get_cmap("jet"))
subplot_index = 1
for k in xa:
for l in ya:
ax = fig.add_subplot(3, 3, subplot_index)
predicted_deformation_map = deformation_data_from_mogi(xs[k], ys[l], zs, volume, 0, 0)
predicted_deformation_map_m = np.ma.masked_where(observed_deformation_map==0, predicted_deformation_map)
im = ax.imshow(observed_deformation_map_m-predicted_deformation_map_m, cmap=cmap, extent=extent_xy)
cbar = ax.figure.colorbar(im, ax=ax, orientation='horizontal')
plt.grid()
im.set_clim(-30, 30)
ax.plot(xs[k], ys[l], 'k*', markersize=25, markerfacecolor='w')
ax.set_title('Source: X=%4.2fkm; Y=%4.2fkm' % (xs[k], ys[l]))
ax.set_xlabel("Easting [km]")
ax.set_ylabel("Northing [km]")
subplot_index += 1
plt.savefig('Misfit-samples-3by3.png', dpi=200, transparent=False)
```
<hr>
<font face="Calibri" size="4"> <b> 4.2 Running Grid-Search to Find Best Fitting Model Parameters $(\hat{x}_s,\hat{y}_s)$</b><br></font>
<font face="Calibri" size="3">The following code cell runs a grid-search approach to find the best fitting Mogi source parameters for the 2000-2002 deformation event at Okmok. To keep things simple, we will fix the depth $z_s$ and volume change $V$ parameters close to their "true" values and search only for the correct east/north source location ($x_s,y_s$).</font>
<br><br>
<font face="Calibri" size="3"><b>Write a script using the grid-search approach in Python:</b></font>
```
# FIX Z AND dV, SEARCH OVER X AND Y
# Setting up search parameters
xs = np.arange(19, 22.2, 0.2)
ys = np.arange(21, 23.2, 0.2)
zs = 2.58
volume = 0.0034
nx = xs.size
ny = ys.size
ng = nx * ny
print(f"fixed z = {zs}km, dV = {volume}, searching over (x,y)")
misfit = np.zeros((nx, ny))
subplot_index = 0
# Commence grid-search for best model parameters
for k, xv in enumerate(xs):
for l, yv in enumerate(ys):
subplot_index += 1
predicted_deformation_map = deformation_data_from_mogi(xs[k], ys[l], zs, volume, 0, 0)
predicted_deformation_map_m = np.ma.masked_where(observed_deformation_map==0, predicted_deformation_map)
misfit[k,l] = np.sum(np.square(observed_deformation_map_m - predicted_deformation_map_m))
print(f"Source {subplot_index:3d}/{ng:3d} is x = {xs[k]:.2f} km, y = {ys[l]:.2f} km")
# Searching for the minimum in the misfit matrix
mmf = np.where(misfit == np.min(misfit))
print(f"\n----------------------------------------------------------------")
print('Best fitting Mogi Source located at: X = %5.2f km; Y = %5.2f km' % (xs[mmf[0]], ys[mmf[1]]))
print(f"----------------------------------------------------------------")
```
<hr>
<font face="Calibri" size="4"> <b> 4.3 Plot and Inspect the Misfit Function</b><br></font>
<font face="Calibri" size="3">The code cell below plots the misfit function ($\sum{[g(m) - d^{obs}]^2}$) describing the fit of different Mogi source parameterizations to the observed InSAR data. You should notice a clear minimum in the misfit plot at the location of the best fitting source location estimated above.
You may notice that, even for the best fitting solution, the misfit does not become zero. This could be due to other signals in the InSAR data (e.g., atmospheric effects or residual topography). Alternatively, it could also indicate that the observed deformation doesn't fully comply with Mogi theory.
</font>
<br><br>
<font face="Calibri" size="3"><b>Plot the misfit function ($\sum{[g(m) - d^{obs}]^2}$):</b></font>
```
plt.rcParams.update({'font.size': 18})
extent_xy = extents(xs) + extents(ys)
fig = plt.figure(figsize=(10, 10))
cmap = copy.copy(plt.cm.get_cmap("jet"))
ax1 = fig.add_subplot(1, 1 ,1)
im = ax1.imshow(np.transpose(misfit), origin='lower', cmap=cmap, extent=extent_xy)
# USE THIS COMMAND TO CHANGE COLOR SCALING: im.set_clim(-30, 30)
ax1.set_aspect('auto')
cbar = ax1.figure.colorbar(im, ax=ax1, orientation='horizontal')
ax1.plot(xs[mmf[0]], ys[mmf[1]], 'k*', markersize=25, markerfacecolor='w')
ax1.set_title("Misfit Function for Mogi-Source Approximation")
ax1.set_xlabel("Easting [km]")
ax1.set_ylabel("Northing [km]")
plt.savefig('Misfit-function.png', dpi=200, transparent=False)
```
<hr>
<font face="Calibri" size="4"> <b> 4.4 Plot Best-Fitting Mogi Forward Model and Compare to Observations</b><br></font>
<font face="Calibri" size="3">With the best-fitting model parameters defined, you can now analyze how well the model fits the InSAR-observed surface deformation. The best way to do that is to look at both the observed and predicted deformation maps and compare their spatial patterns. We will also plot the residuals (<i>observed_deformation_map</i> - <i>predicted_deformation_map</i>) to determine if there are additional signals in the data that are not modeled by Mogi theory.
</font>
<br><br>
<font face="Calibri" size="3"><b>Compare the observed and predicted deformation maps:</b></font>
```
# Calculate predicted deformation map for best-fitting Mogi parameters:
predicted_deformation_map = deformation_data_from_mogi(xs[mmf[0]], ys[mmf[1]], zs, volume, 0, 0)
# Mask the predicted deformation map to remove pixels incoherent in the observations:
predicted_deformation_map_m = np.ma.masked_where(observed_deformation_map==0, predicted_deformation_map)
# Plot observed deformation map
plot_model(observed_deformation_map_m, line, sample, posting)
# Plot simulated deformation map
plot_model(predicted_deformation_map_m, line, sample, posting)
plt.savefig('BestFittingMogiDefo.png', dpi=200, transparent=False)
# Plot simulated deformation map without mask applied
plot_model(predicted_deformation_map, line, sample, posting)
```
<font face="Calibri" size="3"><b>Determine if there are additional signals in the data that are not modeled using Mogi theory:</b></font>
```
# Plot residual between observed and predicted deformation maps
plot_model(observed_deformation_map_m-predicted_deformation_map_m, line, sample, posting)
plt.savefig('Residuals-ObsMinusMogi.png', dpi=200, transparent=False)
```
<font face="Calibri" size="2"> <i>InSAR_volcano_source_modeling.ipynb - Version 1.3.0 - April 2021 </i>
<br>
<b>Version Changes</b>
<ul>
<li>namespace asf_notebook</li>
</ul>
</font>
# EuroSciPy 2019 - 3D image processing with scikit-image
* Support material for the tutorial _3D image processing with scikit-image_.
This tutorial will introduce how to analyze three dimensional stacked and volumetric images in Python, mainly using scikit-image. Here we will learn how to:
* pre-process data using filtering, binarization and segmentation techniques.
* inspect, count and measure attributes of objects and regions of interest in the data.
* visualize large 3D data.
For more info:
* [[EuroSciPy (all editions)]](https://www.euroscipy.org/)
* [[EuroSciPy 2019]](https://www.euroscipy.org/2019/)
* [[scikit-image]](https://scikit-image.org/)
* [[scikit-image tutorials]](https://github.com/scikit-image/skimage-tutorials)
Please refer to the scikit-image tutorials when using this material.
## What is scikit-image?
scikit-image is a collection of image processing algorithms which aims to integrate well with the SciPy ecosystem.
It is well documented, and provides well-tested code to quickly build sophisticated image processing pipelines.
## Checking the system
First, we'll check whether your system has the necessary packages.
```
%run check_setup.py
```
## Importing the base Scientific Python ecosystem
Let's start importing the basics.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
%matplotlib inline
```
Then, let's set a nice, `monospace` font for matplotlib's figures.
```
plt.rcParams['font.family'] = 'monospace'
```
## Introduction to three-dimensional image processing
In scikit-image, images are represented as `numpy` arrays.
A grayscale image is a 2D matrix of pixel intensities of shape `(row, column)`. They are also called single-channel images. Multi-channel data has an extra dimension, `channel`, in the final position. `channel` contains color information.
We can construct a 3D volume as a series of 2D `planes`, giving 3D images the shape `(plane, row, column)`.
Summarizing:
|Image type|Coordinates|
|:---|:---|
|2D grayscale|(row, column)|
|2D multichannel|(row, column, channel)|
|3D grayscale|(plane, row, column)|
|3D multichannel|(plane, row, column, channel)|
Some 3D images are constructed with equal resolution in each dimension. An example would be a computer generated rendering of a sphere with dimensions `(30, 30, 30)`: 30 planes, 30 rows and 30 columns.
However, most experimental data captures one dimension at a lower resolution than the other two; for example, photographing thin slices to approximate a 3D structure as a stack of 2D images. We will work with one example of such data in this tutorial.
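The conventions in the table above can be checked directly with `numpy` (the array sizes below are arbitrary placeholders):

```
import numpy as np

# Shapes matching the table above.
gray_2d = np.zeros((256, 256))        # (row, column)
rgb_2d = np.zeros((256, 256, 3))      # (row, column, channel)
gray_3d = np.zeros((60, 256, 256))    # (plane, row, column)
rgb_3d = np.zeros((60, 256, 256, 3))  # (plane, row, column, channel)

# A 3D volume is a stack of 2D planes: indexing the first axis
# returns one (row, column) plane.
plane = gray_3d[0]
```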
## [skimage.io](https://scikit-image.org/docs/stable/api/skimage.io.html) - utilities to read and write images in various formats<a id='io'></a>
This module helps us read images and save results. Multiple plugins are available, supporting many formats. The most commonly used functions include:
* `io.imread`: read an image to a numpy array.
* `io.imsave`: write an image to disk.
* `io.imread_collection`: read multiple images which match a common pattern.
Data can be loaded with `io.imread`, as in the following example.
```
from skimage import io # skimage's I/O submodule.
cells = io.imread('../../images/cells.tif')
```
First let's check its shape, data type and range.
```
print('* "cells" shape: {}'.format(cells.shape))
print('* "cells" type: {}'.format(cells.dtype))
print('* "cells" range: {}, {}'.format(cells.min(), cells.max()))
```
We see that `cells` has 60 planes, each with 256 rows and 256 columns. Let's try visualizing the image with `skimage.io.imshow`.
```
try:
io.imshow(cells, cmap='gray')
except TypeError as error:
print(str(error))
```
`skimage.io.imshow` can only display grayscale and RGB(A) 2D images, so while we can still use it to visualize individual 2D planes, let's use some helper functions for inspecting 3D data instead.
All supplementary functions we will use during this tutorial are stored within `supplementary_code.py`. First, we import this file:
```
import supplementary_code as sc
```
By fixing one axis, we can observe three different views of the image. Let's use the helper function `show_plane` to do that.
```
_, (win_left, win_center, win_right) = plt.subplots(nrows=1, ncols=3, figsize=(16, 4))
sc.show_plane(win_left, cells[32], title='Plane = 32')
sc.show_plane(win_center, cells[:, 128, :], title='Row = 128')
sc.show_plane(win_right, cells[:, :, 128], title='Column = 128')
```
Three-dimensional images can be viewed as a series of two-dimensional ones. The `slice_explorer` helper presents a slider to check the 2D planes.
```
sc.slice_explorer(cells)
```
The `display` helper function, on the other hand, displays 30 planes of the provided image. By default, every other plane is displayed.
```
sc.display(cells)
```
__Exercise: <font color='red'>(3 min, shall we? 🙄)</font>__ there is another dataset within the folder `image`, called `bead_pack.tif`.
Now, using what we have seen so far, here are some tasks for you:
* Read this data and check its shape, data type, minimum and maximum values.
* Check the slices using the function `slice_explorer`.
* Display every sixth slice using the function `display` (you will use its `step` parameter for that).
```
# Your solution goes here!
beadpack = io.imread('data/bead_pack.tif')
print('* "beadpack" shape: {}'.format(beadpack.shape))
print('* "beadpack" type: {}'.format(beadpack.dtype))
print('* "beadpack" range: {}, {}'.format(beadpack.min(),
beadpack.max()))
sc.slice_explorer(beadpack)
sc.display(beadpack, step=6)
```
## [skimage.exposure](https://scikit-image.org/docs/stable/api/skimage.exposure.html) - evaluating or changing the exposure of an image<a id='exposure'></a>
This module contains a number of functions for adjusting image contrast. We will use some of them:
* `exposure.adjust_gamma`: gamma correction.
* `exposure.equalize_hist`: histogram equalization.
[Gamma correction](https://en.wikipedia.org/wiki/Gamma_correction), also known as Power Law Transform, brightens or darkens an image. The function $O = I^\gamma$ is applied to each pixel in the image. A `gamma < 1` will brighten an image, while a `gamma > 1` will darken an image.
One of the most common tools to evaluate exposure is the *histogram*, which plots the number of points which have a certain value against the values in order from lowest (dark) to highest (light).
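For a float image scaled to $[0, 1]$, the Power Law Transform is just an element-wise power in `numpy`. A minimal sketch (library routines additionally handle integer dtypes and a gain factor):

```
import numpy as np

def adjust_gamma_manual(image, gamma):
    """Power-law transform O = I ** gamma for an image scaled to [0, 1].

    A bare-bones sketch of gamma correction; for integer dtypes a library
    routine would also rescale by the dtype range.
    """
    return np.power(image, gamma)

image = np.linspace(0.0, 1.0, 5)            # stand-in for pixel intensities
brighter = adjust_gamma_manual(image, 0.5)  # gamma < 1 brightens
darker = adjust_gamma_manual(image, 1.5)    # gamma > 1 darkens
```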
```
from skimage import exposure # skimage's exposure module.
gamma_val_low = 0.5
cells_gamma_low = exposure.adjust_gamma(cells, gamma=gamma_val_low)
gamma_val_high = 1.5
cells_gamma_high = exposure.adjust_gamma(cells, gamma=gamma_val_high)
_, ((win_top_left, win_top_center, win_top_right),
(win_bottom_left, win_bottom_center, win_bottom_right)) = plt.subplots(nrows=2, ncols=3, figsize=(12, 8))
# Original and its histogram.
sc.show_plane(win_top_left, cells[32], title='Original')
sc.plot_hist(win_bottom_left, cells)
# Gamma = 0.5 and its histogram.
sc.show_plane(win_top_center, cells_gamma_low[32], title='Gamma = {}'.format(gamma_val_low))
sc.plot_hist(win_bottom_center, cells_gamma_low)
# Gamma = 1.5 and its histogram.
sc.show_plane(win_top_right, cells_gamma_high[32], title='Gamma = {}'.format(gamma_val_high))
sc.plot_hist(win_bottom_right, cells_gamma_high)
```
[Histogram equalization](https://en.wikipedia.org/wiki/Histogram_equalization) improves contrast in an image by redistributing pixel intensities. The most common pixel intensities are spread out, allowing areas of lower local contrast to gain a higher contrast. This may enhance background noise.
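The redistribution works by mapping every pixel through the image's cumulative distribution function (CDF). A numpy-only sketch of the idea (binning details differ from library implementations):

```
import numpy as np

def equalize_hist_manual(image, nbins=256):
    """Histogram equalization by mapping each pixel through the image CDF.

    A sketch of the idea only; library versions differ in binning details.
    Returns a float image in [0, 1].
    """
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    return np.interp(image.ravel(), bin_centers, cdf).reshape(image.shape)

rng = np.random.default_rng(0)
skewed = rng.exponential(scale=0.1, size=(64, 64))  # low-contrast, skewed data
equalized = equalize_hist_manual(skewed)
```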
```
cells_equalized = exposure.equalize_hist(cells)
sc.slice_explorer(cells_equalized)
_, ((win_top_left, win_top_right),
(win_bottom_left, win_bottom_right)) = plt.subplots(nrows=2, ncols=2, figsize=(16, 8))
sc.plot_hist(win_top_left, cells, title='Original')
sc.plot_hist(win_top_right, cells_equalized, title='Histogram equalization')
cdf, bins = exposure.cumulative_distribution(cells.ravel())
win_bottom_left.plot(bins, cdf, 'r')
win_bottom_left.set_title('Original CDF')
cdf, bins = exposure.cumulative_distribution(cells_equalized.ravel())
win_bottom_right.plot(bins, cdf, 'r')
win_bottom_right.set_title('Histogram equalization CDF');
```
Many experimental images are affected by salt-and-pepper noise: a few bright artifacts can decrease the relative intensity of the pixels of interest. A simple way to improve contrast is to clip the pixel values at the lowest and highest extremes. Clipping the darkest and brightest 0.5% of pixels will increase the overall contrast of the image.
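In plain `numpy`, this percentile-based clipping and rescaling can be sketched as follows (the synthetic image here is an illustrative stand-in):

```
import numpy as np

# Clip values outside the [0.5, 99.5] percentile range, then rescale to [0, 1].
rng = np.random.default_rng(0)
noisy = rng.normal(size=(32, 32))  # stand-in for a noisy image
vmin, vmax = np.percentile(noisy, q=(0.5, 99.5))
clipped = (np.clip(noisy, vmin, vmax) - vmin) / (vmax - vmin)
```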
```
vmin, vmax = np.percentile(cells, q=(0.5, 99.5))
cells_clipped = exposure.rescale_intensity(
cells,
in_range=(vmin, vmax),
out_range=np.float32
)
sc.slice_explorer(cells_clipped);
```
We'll call our dataset `cells_rescaled` from now on. In this cell, you can choose any of the previous results to continue working with.
In the next steps, we'll use the `cells_clipped` version.
```
cells_rescaled = cells_clipped
```
__Exercise: <font color='red'>(7-ish min? 🙄)</font>__ now, using our variable `beadpack`, let's repeat the process, ok?
Now, using what we have seen so far, here are some tasks for you:
* Obtain a nice `gamma_val` to adjust the gamma of `beadpack`.
* Equalize `beadpack`'s histogram using `equalize_hist` and CLAHE (given by `equalize_adapthist`).
* Increase `beadpack`'s contrast by clipping the darkest/brightest pixels there. Try different percentages.
* Choose the data you think is best, and call it `beadpack_rescaled`.
```
# Part #1 of your solution goes here!
gamma_val = 0.7
beadpack_gamma = exposure.adjust_gamma(beadpack, gamma=gamma_val)
_, ((win_top_left, win_top_right),
(win_bottom_left, win_bottom_right)) = plt.subplots(nrows=2, ncols=2, figsize=(16, 8))
# Original and its histogram.
sc.show_plane(win_top_left, beadpack[32], title='Original')
sc.plot_hist(win_bottom_left, beadpack)
# Gamma-adjusted and its histogram.
sc.show_plane(win_top_right, beadpack_gamma[32], title='Gamma = {}'.format(gamma_val))
sc.plot_hist(win_bottom_right, beadpack_gamma)
# Part #2 of your solution goes here!
# Let's convert beadpack to float; it'll help us later.
from skimage import util
beadpack = util.img_as_float(beadpack)
# First, let's create a version using histogram equalization.
beadpack_equalized = exposure.equalize_hist(beadpack)
sc.slice_explorer(beadpack_equalized)
# Now, a version using CLAHE.
beadpack_clahe = np.empty_like(beadpack)
for plane, image in enumerate(beadpack):
beadpack_clahe[plane] = exposure.equalize_adapthist(image)
sc.slice_explorer(beadpack_clahe)
# Let's check the results.
_, ((win_top_left, win_top_center, win_top_right),
(win_bottom_left, win_bottom_center, win_bottom_right)) = plt.subplots(nrows=2, ncols=3, figsize=(16, 8))
sc.plot_hist(win_top_left, beadpack, title='Original')
sc.plot_hist(win_top_center, beadpack_equalized, title='Histogram equalization')
sc.plot_hist(win_top_right, beadpack_clahe, title='CLAHE')
cdf, bins = exposure.cumulative_distribution(beadpack.ravel())
win_bottom_left.plot(bins, cdf, 'r')
win_bottom_left.set_title('Original CDF')
cdf, bins = exposure.cumulative_distribution(beadpack_equalized.ravel())
win_bottom_center.plot(bins, cdf, 'r')
win_bottom_center.set_title('Histogram equalization CDF');
cdf, bins = exposure.cumulative_distribution(beadpack_clahe.ravel())
win_bottom_right.plot(bins, cdf, 'r')
win_bottom_right.set_title('CLAHE CDF');
# Part #3 of your solution goes here!
vmin, vmax = np.percentile(beadpack, q=(0.5, 99.5))  # try different percentiles here
beadpack_clipped = exposure.rescale_intensity(
beadpack,
in_range=(vmin, vmax),
out_range=np.float32
)
sc.slice_explorer(beadpack_clipped);
# Now, choose your destiny!
beadpack_rescaled = beadpack_clipped  # or beadpack_equalized / beadpack_clahe
```
## Edge detection
[Edge detection](https://en.wikipedia.org/wiki/Edge_detection) highlights regions in the image where a sharp change in contrast occurs. The intensity of an edge corresponds to the steepness of the transition from one intensity to another. A gradual shift from bright to dark intensity results in a dim edge. An abrupt shift results in a bright edge.
The [Sobel operator](https://en.wikipedia.org/wiki/Sobel_operator) is an edge detection algorithm which approximates the gradient of the image intensity, and is fast to compute.
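The gradient approximation can be sketched with `scipy.ndimage`, which applies the Sobel derivative one axis at a time; `skimage.filters.sobel` wraps a similar idea for 2D images. The synthetic step-edge image below is illustrative:

```
import numpy as np
from scipy import ndimage

# A synthetic image with one vertical step edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Sobel derivatives along each axis, combined into a gradient magnitude.
sx = ndimage.sobel(image, axis=1)  # derivative across columns
sy = ndimage.sobel(image, axis=0)  # derivative across rows
edges = np.hypot(sx, sy)
```

The response is bright only at the two columns adjacent to the step, and zero elsewhere, matching the description above: abrupt transitions give strong edges.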
## [skimage.filters](https://scikit-image.org/docs/stable/api/skimage.filters.html) - apply filters to an image<a id='filters'></a>
Filtering applies whole-image modifications such as sharpening or blurring. In addition to edge detection, `skimage.filters` provides functions for filtering and thresholding images.
Notable functions include (links to relevant gallery examples):
* [Thresholding](https://scikit-image.org/docs/stable/auto_examples/applications/plot_thresholding.html):
* `filters.threshold_*` (multiple different functions with this prefix)
* `filters.try_all_threshold` to compare various methods
* [Edge finding/enhancement](https://scikit-image.org/docs/stable/auto_examples/edges/plot_edge_filter.html):
* `filters.sobel` - not adapted for 3D images. It can be applied planewise to approximate a 3D result.
* `filters.prewitt`
* `filters.scharr`
* `filters.roberts`
* `filters.laplace`
* `filters.hessian`
* [Ridge filters](https://scikit-image.org/docs/stable/auto_examples/edges/plot_ridge_filter.html):
* `filters.meijering`
* `filters.sato`
* `filters.frangi`
* Inverse filtering (see also [skimage.restoration](#restoration)):
* `filters.wiener`
* `filters.inverse`
* [Directional](https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_gabor.html): `filters.gabor`
* Blurring/denoising
* `filters.gaussian`
* `filters.median`
* [Sharpening](https://scikit-image.org/docs/stable/auto_examples/filters/plot_unsharp_mask.html): `filters.unsharp_mask`
* Define your own filter: `LPIFilter2D`
The sub-submodule `skimage.filters.rank` contains rank filters. These filters are nonlinear and operate on the local histogram.
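As a small sketch of a rank filter (on a synthetic noisy image; the footprint is passed positionally because its keyword name has changed across scikit-image versions):

```
import numpy as np
from skimage import filters, morphology

# Rank filters operate on the local histogram of integer images (uint8/uint16).
rng = np.random.RandomState(0)
noisy = rng.randint(0, 256, size=(32, 32)).astype(np.uint8)

# Local median over a disk-shaped neighborhood.
smoothed = filters.rank.median(noisy, morphology.disk(2))

# The output keeps the input dtype, and local variation is reduced.
print(noisy.std() > smoothed.std())
```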
```
from skimage import filters # skimage's filtering module
cells_sobel = np.empty_like(cells_rescaled)
for plane, image in enumerate(cells_rescaled):
cells_sobel[plane] = filters.sobel(image)
sc.slice_explorer(cells_sobel)
_, ((win_top_left, win_top_right),
(win_bottom_left, win_bottom_right)) = plt.subplots(nrows=2, ncols=2, figsize=(16, 4))
sc.show_plane(win_top_left, cells_sobel[:, 128, :], title='3D sobel, row = 128')
cells_sobel_row = filters.sobel(cells_rescaled[:, 128, :])
sc.show_plane(win_top_right, cells_sobel_row, title='2D sobel, row=128')
sc.show_plane(win_bottom_left, cells_sobel[:, :, 128], title='3D sobel, column = 128')
cells_sobel_col = filters.sobel(cells_rescaled[:, :, 128])
sc.show_plane(win_bottom_right, cells_sobel_col, title='2D sobel, column=128')
```
## [skimage.transform](https://scikit-image.org/docs/stable/api/skimage.transform.html) - transforms & warping<a id='transform'></a>
This submodule has multiple features which fall under the umbrella of transformations.
Forward (`radon`) and inverse (`iradon`) radon transforms, as well as some variants (`iradon_sart`) and the finite versions of these transforms (`frt2` and `ifrt2`). These are used for [reconstructing medical computed tomography (CT) images](https://scikit-image.org/docs/stable/auto_examples/transform/plot_radon_transform.html).
Hough transforms for identifying lines, circles, and ellipses.
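As a quick sketch of the circular Hough transform (on a synthetic image with a single drawn circle; the radius search range is an assumption of this example):

```
import numpy as np
from skimage import transform, draw

# Draw one circle of known radius, then recover it from the Hough accumulator.
image = np.zeros((64, 64), dtype=np.uint8)
rr, cc = draw.circle_perimeter(32, 32, 15)
image[rr, cc] = 1

hough_radii = np.arange(10, 20)
accumulator = transform.hough_circle(image, hough_radii)
_, cols, rows, radii = transform.hough_circle_peaks(
    accumulator, hough_radii, total_num_peaks=1
)
print('Detected center: ({}, {}), radius: {}'.format(rows[0], cols[0], radii[0]))
```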
Changing image size, shape, or resolution with `resize`, `rescale`, or `downscale_local_mean`.
`warp` and `warp_coords`, which take an image or a set of coordinates and map them through one of the `*Transform` classes defined in this submodule. `estimate_transform` may assist in estimating the transform parameters.
[Numerous gallery examples are available](https://scikit-image.org/docs/stable/auto_examples/index.html#geometrical-transformations-and-registration) illustrating these functions. [The panorama tutorial also includes warping](./solutions/adv3_panorama-stitching-solution.ipynb) via `SimilarityTransform` with parameter estimation via `measure.ransac`.
```
from skimage import transform # skimage's transform submodule.
```
The code below illustrates the downsampling operation. The red dots mark the pixel centers within each image.
```
# To make sure we all see the same thing, let's set a seed
np.random.seed(0)
image = np.random.random((8, 8))
image_rescaled = transform.downscale_local_mean(image, (4, 4))
_, (win_left, win_right) = plt.subplots(nrows=1, ncols=2, figsize=(12, 6))
win_left.imshow(image, cmap='gray')
win_left.set_xticks([])
win_left.set_yticks([])
centers = np.indices(image.shape).reshape(2, -1).T
win_left.plot(centers[:, 0], centers[:, 1], '.r')
win_left.set_title('Original: {}'.format(image.shape))
win_right.imshow(image_rescaled, cmap='gray')
win_right.set_xticks([])
win_right.set_yticks([])
centers = np.indices(image_rescaled.shape).reshape(2, -1).T
win_right.plot(centers[:, 0], centers[:, 1], '.r');
win_right.set_title('Downsampled: {}'.format(image_rescaled.shape))
```
The distance between pixels in each dimension, called `spacing`, is encoded in a tuple and is accepted as a parameter by some `skimage` functions and can be used to adjust contributions to filters.
The distance between pixels was reported by the microscope used to image the cells. This `spacing` information is used to adjust contributions to filters and to decide when to apply operations planewise. We downsample each slice by a factor of 4 in the `row` and `column` dimensions to make the data smaller and reduce computation time, and we normalize the spacing so that it equals `1.0` in the `row` and `column` dimensions.
```
# The microscope reports the following spacing:
original_spacing = np.array([0.2900000, 0.0650000, 0.0650000])
print('* Microscope original spacing: {}'.format(original_spacing))
# We downsampled each slice 4x to make the data smaller
rescaled_spacing = original_spacing * [1, 4, 4]
print('* Microscope after rescaling images: {}'.format(rescaled_spacing))
# Normalize the spacing so that pixels are a distance of 1 apart
spacing = rescaled_spacing / rescaled_spacing[2]
print('* Microscope normalized spacing: {}'.format(spacing))
```
__Exercise: <font color='red'>(3-ish min? 🙄)</font>__ now, using our variable `beadpack_rescaled`, let's check its edges.
Your tasks right now are:
* Use the Sobel edge filter to obtain the edges of `beadpack_rescaled`.
* Explore the edges at each depth.
* Check 2D and 3D Sobel filters when row and column are equal to 100.
```
# Your solution goes here!
beadpack_sobel = np.empty_like()
for plane, image in enumerate():
beadpack_sobel[plane] = filters.sobel(image)
sc.slice_explorer(beadpack_sobel)
_, ((win_top_left, win_top_right),
(win_bottom_left, win_bottom_right)) = plt.subplots(nrows=2, ncols=2, figsize=(16, 14))
sc.show_plane(win_top_left, , title='3D sobel, row=100')
beadpack_sobel_row = filters.sobel()
sc.show_plane(win_top_right, , title='2D sobel, row=100')
sc.show_plane(win_bottom_left, , title='3D sobel, column=100')
beadpack_sobel_col = filters.sobel()
sc.show_plane(win_bottom_right, , title='2D sobel, column=100')
```
## Filters
[Gaussian filter](https://en.wikipedia.org/wiki/Gaussian_filter) applies a Gaussian function to an image, creating a smoothing effect. `skimage.filters.gaussian` takes as input `sigma`, which can be a scalar or a sequence of scalars; `sigma` determines the standard deviation of the Gaussian along each axis. When the resolution in the `plane` dimension is much worse than in the `row` and `column` dimensions, dividing `base_sigma` by the image `spacing` balances the contribution to the filter along each axis.
```
base_sigma = 2.0
sigma = base_sigma / spacing
cells_gaussian = filters.gaussian(cells_rescaled, multichannel=False, sigma=sigma)
sc.slice_explorer(cells_gaussian);
```
[Median filter](https://en.wikipedia.org/wiki/Median_filter) is a noise removal filter. It is particularly effective against salt and pepper noise. An additional feature of the median filter is its ability to preserve edges. This is helpful in segmentation because the original shape of regions of interest will be preserved.
`skimage.filters.median` does not support three-dimensional images and needs to be applied planewise.
## [skimage.util](https://scikit-image.org/docs/stable/api/skimage.util.html) - utility functions<a id='util'></a>
These are generally useful functions that have no definite place elsewhere in the package.
* `util.img_as_*` are convenience functions for datatype conversion.
* `util.invert` is a convenient way to invert any image, accounting for its datatype.
* `util.random_noise` is a comprehensive function to apply any amount of many different types of noise to images. The seed may be set, resulting in pseudo-random noise for testing.
* `util.view_as_*` allows for overlapping views into the same memory array, which is useful for elegant local computations with minimal memory impact.
* `util.apply_parallel` uses Dask to apply a function across subsections of an image. This can yield dramatic performance or memory improvements but, depending on the algorithm, edge effects or missing knowledge of the rest of the image may produce unexpected results.
* `util.pad` and `util.crop` pad or crop the edges of images. `util.pad` is now a direct wrapper for `numpy.pad`.
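A minimal sketch of a few of these utilities on a tiny synthetic ramp image:

```
import numpy as np
from skimage import util

# An 8 x 8 uint8 ramp image with values 0..63.
image = np.arange(64, dtype=np.uint8).reshape(8, 8)

# Datatype conversion: uint8 values in [0, 255] map to floats in [0, 1].
as_float = util.img_as_float(image)

# Inversion accounts for the dtype: for uint8 this computes 255 - image.
inverted = util.invert(image)

# Noise injection for testing denoising pipelines.
noisy = util.random_noise(as_float, mode='gaussian')

print(as_float.max(), inverted[0, 0], noisy.shape)
```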
```
from skimage import util # skimage's util submodule.
cells_rescaled_ubyte = util.img_as_ubyte(cells_rescaled)
cells_median = np.empty_like(cells_rescaled_ubyte)
for plane, image in enumerate(cells_rescaled_ubyte):
cells_median[plane] = filters.median(image)
cells_median = util.img_as_float(cells_median)
sc.slice_explorer(cells_median);
```
## [skimage.restoration](https://scikit-image.org/docs/stable/api/skimage.restoration.html) - restoration of an image<a id='restoration'></a>
This submodule includes routines to restore images. Currently these routines fall into four major categories. Links lead to topical gallery examples.
* `restoration.denoise_*` - [Reducing noise](https://scikit-image.org/docs/stable/auto_examples/filters/plot_denoise.html).
* [Deconvolution](https://scikit-image.org/docs/stable/auto_examples/filters/plot_deconvolution.html), or reversing a convolutional effect which applies to the entire image. This can be done in an [unsupervised](https://scikit-image.org/docs/stable/auto_examples/filters/plot_restoration.html) way.
* `restoration.wiener`
* `restoration.unsupervised_wiener`
* `restoration.richardson_lucy`
* `restoration.inpaint_biharmonic` - [Inpainting](https://scikit-image.org/docs/stable/auto_examples/filters/plot_inpaint.html), or filling in missing areas of an image.
* `restoration.unwrap_phase` - [Phase unwrapping](https://scikit-image.org/docs/stable/auto_examples/filters/plot_phase_unwrap.html).
A [bilateral filter](https://en.wikipedia.org/wiki/Bilateral_filter) is another edge-preserving, denoising filter. Each pixel is assigned a weighted average based on neighboring pixels. The weight is determined by spatial and radiometric similarity (e.g., distance between two colors).
`skimage.restoration.denoise_bilateral` requires a `multichannel` parameter. This determines whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. While the function does not yet support 3D data, the `multichannel` parameter will help distinguish multichannel 2D data from grayscale 3D data.
```
from skimage import restoration # skimage's restoration submodule.
cells_bilateral = np.empty_like(cells_rescaled)
for plane, image in enumerate(cells_rescaled):
cells_bilateral[plane] = restoration.denoise_bilateral(
image,
multichannel=False
)
sc.slice_explorer(cells_bilateral);
_, ((win_top_left, win_top_right),
(win_bottom_left, win_bottom_right)) = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
sc.show_plane(win_top_left, cells_rescaled[32], title='Original')
sc.show_plane(win_top_right, cells_gaussian[32], title='Gaussian')
sc.show_plane(win_bottom_left, cells_bilateral[32], title='Bilateral')
sc.show_plane(win_bottom_right, cells_median[32], title='Median')
cells_denoised = cells_median
```
__Exercise: <font color='red'>(5-ish min? 🙄)</font>__ let's filter `beadpack_rescaled` now.
Your tasks are:
* Use Gaussian, median and bilateral filters on `beadpack_rescaled`.
* Check the results; choose one and call it `beadpack_denoised`.
```
# Your solution goes here!
sigma =
# The Gaussian...
beadpack_gaussian = filters.gaussian()
sc.slice_explorer(beadpack_gaussian);
# ... the median...
beadpack_rescaled_ubyte = util.img_as_ubyte()
beadpack_median = np.empty_like()
for plane, image in enumerate(beadpack_rescaled_ubyte):
beadpack_median[plane] = filters.median()
beadpack_median = util.img_as_float(beadpack_median)
sc.slice_explorer(beadpack_median);
# ... and the bilateral filters.
beadpack_bilateral = np.empty_like()
for plane, image in enumerate():
beadpack_bilateral[plane] = restoration.denoise_bilateral(
,
multichannel=False
)
sc.slice_explorer(beadpack_bilateral);
# Choose your destiny!
beadpack_denoised =
```
## Thresholding
[Thresholding](https://en.wikipedia.org/wiki/Thresholding_%28image_processing%29) is used to create binary images. A threshold value determines the intensity value separating foreground pixels from background pixels. Foreground pixels are those brighter than the threshold value; background pixels are darker. Thresholding is a form of image segmentation.
Different thresholding algorithms produce different results. [Otsu's method](https://en.wikipedia.org/wiki/Otsu%27s_method) and Li's minimum cross entropy threshold are two common algorithms. The example below demonstrates how a small difference in the threshold value can visibly alter the binarized image.
```
threshold_li = filters.threshold_li(cells_denoised)
cells_binary_li = cells_denoised >= threshold_li
threshold_otsu = filters.threshold_otsu(cells_denoised)
cells_binary_otsu = cells_denoised >= threshold_otsu
_, (win_left, win_center, win_right) = plt.subplots(nrows=1, ncols=3, figsize=(18, 5))
sc.show_plane(win_left, cells_binary_li[32], title='Li\'s threshold = {:0.2}'.format(threshold_li))
sc.show_plane(win_center, cells_binary_otsu[32], title='Otsu\'s threshold = {:0.2}'.format(threshold_otsu))
sc.plot_hist(win_right, cells_denoised, 'Thresholds (Li: red, Otsu: blue)')
win_right.axvline(threshold_li, c='r')
win_right.axvline(threshold_otsu, c='b')
cells_binary = cells_binary_li
sc.slice_explorer(cells_binary)
```
__Exercise: <font color='red'>(5-ish min? 🙄)</font>__ let's binarize `beadpack_denoised`, but using different tools!
Your tasks are:
* Use the function `filters.try_all_threshold` to check the binary version of the 100th plane of `beadpack_denoised`.
* Choose one of the thresholds, apply it on the data and call it `beadpack_binary`.
```
# Your solution goes here!
filters.try_all_threshold()
threshold = filters.threshold_
beadpack_binary = beadpack_denoised >= threshold
```
## <a id='morphology'></a>[skimage.morphology](https://scikit-image.org/docs/stable/api/skimage.morphology.html) - binary and grayscale morphology
Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image, such as boundaries, skeletons, etc. In any given technique, we probe an image with a small shape or template called a structuring element, which defines the region of interest or neighborhood around a pixel.
[Mathematical morphology](https://en.wikipedia.org/wiki/Mathematical_morphology) operations and structuring elements are defined in `skimage.morphology`. Structuring elements are shapes which define areas over which an operation is applied. The response to the filter indicates how well the neighborhood corresponds to the structuring element's shape.
There are a number of two- and three-dimensional structuring elements defined in `skimage.morphology`. Not all 2D structuring elements have a 3D counterpart. The simplest and most commonly used structuring elements are the `disk`/`ball` and `square`/`cube`.
```
from skimage import morphology # skimage's morphological submodules.
ball = morphology.ball(radius=5)
print('* Ball shape: {}'.format(ball.shape))
cube = morphology.cube(width=5)
print('* Cube shape: {}'.format(cube.shape))
```
The most basic mathematical morphology operations are `dilation` and `erosion`. Dilation enlarges bright regions and shrinks dark regions. Erosion shrinks bright regions and enlarges dark regions. Other morphological operations are composed of `dilation` and `erosion`.
The `closing` of an image is defined as a `dilation` followed by an `erosion`. Closing can remove small dark spots (i.e. “pepper”) and connect small bright cracks. This tends to “close” up (dark) gaps between (bright) features. Morphological `opening` on an image is defined as an `erosion` followed by a `dilation`. Opening can remove small bright spots (i.e. “salt”) and connect small dark cracks. This tends to “open” up (dark) gaps between (bright) features.
These operations in `skimage.morphology` are compatible with 3D images and structuring elements. A 2D structuring element cannot be applied to a 3D image, nor can a 3D structuring element be applied to a 2D image.
These four operations (`closing`, `dilation`, `erosion`, `opening`) have binary counterparts which are faster to compute than the grayscale algorithms.
```
selem = morphology.ball(radius=3)
cells_closing = morphology.closing(cells_rescaled, selem=selem)
cells_dilation = morphology.dilation(cells_rescaled, selem=selem)
cells_erosion = morphology.erosion(cells_rescaled, selem=selem)
cells_opening = morphology.opening(cells_rescaled, selem=selem)
cells_binary_closing = morphology.binary_closing(cells_binary, selem=selem)
cells_binary_dilation = morphology.binary_dilation(cells_binary, selem=selem)
cells_binary_erosion = morphology.binary_erosion(cells_binary, selem=selem)
cells_binary_opening = morphology.binary_opening(cells_binary, selem=selem)
_, ((win_top_1, win_top_2, win_top_3, win_top_4),
(win_bottom_1, win_bottom_2, win_bottom_3, win_bottom_4)) = plt.subplots(nrows=2, ncols=4, figsize=(16, 8))
sc.show_plane(win_top_1, cells_erosion[32], title='Erosion')
sc.show_plane(win_top_2, cells_dilation[32], title='Dilation')
sc.show_plane(win_top_3, cells_closing[32], title='Closing')
sc.show_plane(win_top_4, cells_opening[32], title='Opening')
sc.show_plane(win_bottom_1, cells_binary_erosion[32], title='Binary erosion')
sc.show_plane(win_bottom_2, cells_binary_dilation[32], title='Binary dilation')
sc.show_plane(win_bottom_3, cells_binary_closing[32], title='Binary closing')
sc.show_plane(win_bottom_4, cells_binary_opening[32], title='Binary opening')
```
Morphology operations can be chained together to denoise an image. For example, a `closing` applied to an `opening` can remove salt and pepper noise from an image.
```
cells_binary_equalized = cells_equalized >= filters.threshold_li(cells_equalized)
cells_despeckled_radius1 = morphology.closing(
morphology.opening(cells_binary_equalized, selem=morphology.ball(1)),
selem=morphology.ball(1)
)
cells_despeckled_radius3 = morphology.closing(
morphology.opening(cells_binary_equalized, selem=morphology.ball(3)),
selem=morphology.ball(3)
)
_, (win_left, win_center, win_right) = plt.subplots(nrows=1, ncols=3, figsize=(16, 6))
sc.show_plane(win_left, cells_binary_equalized[32], title='Noisy data')
sc.show_plane(win_center, cells_despeckled_radius1[32], title='Despeckled, r = 1')
sc.show_plane(win_right, cells_despeckled_radius3[32], title='Despeckled, r = 3')
```
Functions operating on [connected components](https://en.wikipedia.org/wiki/Connected_space) can remove small undesired elements while preserving larger shapes.
`skimage.morphology.remove_small_holes` fills holes and `skimage.morphology.remove_small_objects` removes bright regions. Both functions accept a `min_size` parameter, the minimum size (in pixels) of accepted holes or objects. A convenient way to set `min_size` is as the volume of a cube with a given side length.
```
width = 20
cells_remove_holes = morphology.remove_small_holes(
cells_binary,
width ** 3
)
sc.slice_explorer(cells_remove_holes);
width = 20
cells_remove_objects = morphology.remove_small_objects(
cells_remove_holes,
min_size=width ** 3
)
sc.slice_explorer(cells_remove_objects);
```
__Exercise: <font color='red'>(5-ish min? 🙄)</font>__ let's perform some operations on `beadpack_binary` and check the results.
Your tasks are:
* Apply opening, closing, dilation and erosion on `beadpack_binary`.
* Generate binary histogram-equalized and CLAHE versions of `beadpack`, according to the threshold you chose previously.
* Remove small holes and objects on `beadpack_binary`.
```
# Your solution goes here!
selem = morphology.ball()
beadpack_binary_erosion = morphology.binary_erosion()
beadpack_binary_dilation = morphology.binary_dilation()
beadpack_binary_closing = morphology.binary_closing()
beadpack_binary_opening = morphology.binary_opening()
_, (win_1, win_2, win_3, win_4) = plt.subplots(nrows=1, ncols=4, figsize=(16, 5))
sc.show_plane(win_1, , title='Binary erosion')
sc.show_plane(win_2, , title='Binary dilation')
sc.show_plane(win_3, , title='Binary closing')
sc.show_plane(win_4, , title='Binary opening')
beadpack_binary_equalized = beadpack_equalized >= filters.threshold_
beadpack_binary_clahe = beadpack_clahe >= filters.threshold_
width = 20
beadpack_remove_holes = morphology.remove_small_holes(
,
width ** 3
)
sc.slice_explorer(beadpack_remove_holes);
beadpack_remove_objects = morphology.remove_small_objects(
,
min_size=width ** 3
)
sc.slice_explorer(beadpack_remove_objects);
```
## <a id='measure'></a>[skimage.measure](https://scikit-image.org/docs/stable/api/skimage.measure.html) - measuring image or region properties
Multiple algorithms to label images, or obtain information about discrete regions of an image.
* `measure.label` - Label an image, i.e. identify discrete regions in the image using unique integers.
* `measure.regionprops` - In a labeled image, as returned by `label`, find various properties of the labeled regions.
Finding paths from a 2D image, or isosurfaces from a 3D image.
* `measure.find_contours`
* `measure.marching_cubes_lewiner`
* `measure.marching_cubes_classic`
* `measure.mesh_surface_area` - Surface area of 3D mesh from marching cubes.
* `measure.compare_*` - Quantify the difference between two whole images; often used in denoising or restoration.
**RANDom Sample Consensus fitting (RANSAC)** - a powerful, robust approach to fitting a model to data. It exists here because its initial use was for fitting shapes, but it can also fit transforms.
* `measure.ransac`
* `measure.CircleModel`
* `measure.EllipseModel`
* `measure.LineModelND`
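As a small sketch of robust line fitting with `measure.ransac` (the data here are synthetic, and the residual threshold is an assumption of this example):

```
import numpy as np
from skimage import measure

# Fifty points on the line y = 2x + 1, with every tenth point corrupted.
rng = np.random.RandomState(0)
x = np.linspace(0, 10, 50)
data = np.column_stack([x, 2 * x + 1])
data[::10] += rng.uniform(20, 30, size=(5, 2))

# RANSAC repeatedly fits a line to random minimal samples and keeps the
# model with the most inliers (points within the residual threshold).
model, inliers = measure.ransac(
    data, measure.LineModelND, min_samples=2, residual_threshold=0.5
)
print('Inliers found:', inliers.sum())
```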
[Image segmentation](https://en.wikipedia.org/wiki/Image_segmentation) partitions images into regions of interest. Integer labels are assigned to each region to distinguish regions of interest.
`skimage.measure.label` assigns the same label to every pixel within a connected component of the binary image. Tightly packed cells that touch in the binary image are therefore assigned the same label.
```
from skimage import measure # skimage's measure submodule.
cells_labels = measure.label(cells_remove_objects)
sc.slice_explorer(cells_labels, cmap='nipy_spectral');
_, (win_left, win_center, win_right) = plt.subplots(nrows=1, ncols=3, figsize=(16, 4))
sc.show_plane(win_left, cells_rescaled[32, :100, 125:], title='Rescaled')
sc.show_plane(win_center, cells_labels[32, :100, 125:], cmap='nipy_spectral', title='Labels')
sc.show_plane(win_right, cells_labels[32, :100, 125:] == 8, title='Labels = 8')
```
A better segmentation would assign different labels to disjoint regions in the original image.
[Watershed segmentation](https://en.wikipedia.org/wiki/Watershed_%28image_processing%29) can distinguish touching objects. Markers are placed at local minima and expanded outward until there is a collision with markers from another region. The inverse intensity image transforms bright cell regions into basins which should be filled.
In declumping, markers are generated from the distance function. Points furthest from an edge have the highest intensity and should be identified as markers using `skimage.feature.peak_local_max`. Regions with pinch points should be assigned multiple markers.
```
from scipy import ndimage  # provides the Euclidean distance transform

cells_distance = ndimage.distance_transform_edt(cells_remove_objects)
sc.slice_explorer(cells_distance, cmap='viridis');
```
## [skimage.feature](https://scikit-image.org/docs/stable/api/skimage.feature.html) - extract features from an image<a id='feature'></a>
This submodule presents a diverse set of tools to identify or extract certain features from images, including tools for
* Edge detection: `feature.canny`
* Corner detection:
* `feature.corner_kitchen_rosenfeld`
* `feature.corner_harris`
* `feature.corner_shi_tomasi`
* `feature.corner_foerstner`
* `feature.corner_subpix`
* `feature.corner_moravec`
* `feature.corner_fast`
* `feature.corner_orientations`
* Blob detection
* `feature.blob_dog`
* `feature.blob_doh`
* `feature.blob_log`
* Texture
* `feature.greycomatrix`
* `feature.greycoprops`
* `feature.local_binary_pattern`
* `feature.multiblock_lbp`
* Peak finding: `feature.peak_local_max`
* Object detection
* `feature.hog`
* `feature.match_template`
* Stereoscopic depth estimation: `feature.daisy`
* Feature matching
* `feature.ORB`
* `feature.BRIEF`
* `feature.CENSURE`
* `feature.match_descriptors`
* `feature.plot_matches`
```
from skimage import feature # skimage's feature submodule.
peak_local_max = feature.peak_local_max(
cells_distance,
footprint=np.ones((15, 15, 15), dtype=bool),
indices=False,
labels=measure.label(cells_remove_objects)
)
cells_markers = measure.label(peak_local_max)
cells_labels = morphology.watershed(
cells_rescaled,
cells_markers,
mask=cells_remove_objects
)
sc.slice_explorer(cells_labels, cmap='nipy_spectral');
```
After watershed, we have better disambiguation between internal cells.
When cells simultaneously touch the border of the image, they may be assigned the same label. In pre-processing, we typically remove these cells.
**Note:** This is 3D data -- you may not always be able to see connections in 2D!
```
_, (win_left, win_right) = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))
sc.show_plane(win_left, cells_labels[39, 156:, 20:150], cmap='nipy_spectral')
sc.show_plane(win_right, cells_labels[34, 90:190, 126:], cmap='nipy_spectral')
```
The watershed algorithm falsely detected subregions in a few cells. This is referred to as oversegmentation.
```
_, axis = plt.subplots()
sc.show_plane(axis, cells_labels[38, 50:100, 20:100], cmap='nipy_spectral', title='Oversegmented labels')
```
Plotting the markers on the distance image reveals the reason for oversegmentation. Cells with multiple markers will be assigned multiple labels, and oversegmented. It can be observed that cells with a uniformly increasing distance map are assigned a single marker near their center. Cells with uneven distance maps are assigned multiple markers, indicating the presence of multiple local maxima.
```
_, axes = plt.subplots(nrows=3, ncols=4, figsize=(16, 12))
vmin = cells_distance.min()
vmax = cells_distance.max()
offset = 31
for index, ax in enumerate(axes.flatten()):
ax.imshow(
cells_distance[offset + index],
cmap='gray',
vmin=vmin,
vmax=vmax
)
peaks = np.nonzero(peak_local_max[offset + index])
ax.plot(peaks[1], peaks[0], 'r.')
ax.set_xticks([])
ax.set_yticks([])
_, (win_left, win_center, win_right) = plt.subplots(nrows=1, ncols=3, figsize=(16, 8))
sc.show_plane(win_left, cells_remove_objects[10:, 193:253, 74])
sc.show_plane(win_center, cells_distance[10:, 193:253, 74])
features = feature.peak_local_max(cells_distance[10:, 193:253, 74])
win_center.plot(features[:, 1], features[:, 0], 'r.')
# Improve feature selection by blurring, using a larger footprint
# in `peak_local_max`, etc.
smooth_distance = filters.gaussian(cells_distance[10:, 193:253, 74], sigma=5)
sc.show_plane(win_right, smooth_distance)
features = feature.peak_local_max(
smooth_distance
)
win_right.plot(features[:, 1], features[:, 0], 'bx');
```
__Exercise: <font color='red'>(5-ish min? 🙄)</font>__ now it's time to label `beadpack_remove_objects` and separate the beads!
Your tasks are:
* Label `beadpack_remove_objects` using `measure.label`, and obtain the distance between the pixels.
* Try different footprints, obtain the local maxima with `feature.peak_local_max`, and use them as markers for `morphology.watershed`.
```
beadpack_labels = measure.label()
sc.slice_explorer(beadpack_labels, cmap='nipy_spectral');
_, (win_left, win_center, win_right) = plt.subplots(nrows=1, ncols=3, figsize=(16, 4))
sc.show_plane(win_left, , title='Rescaled')
sc.show_plane(win_center, , cmap='nipy_spectral', title='Labels')
sc.show_plane(win_right, , title='Labels = 100')
beadpack_distance = ndimage.distance_transform_edt()
sc.slice_explorer(, cmap='magma');
footprint =
peak_local_max = feature.peak_local_max(
,
footprint=footprint,
indices=False,
labels=measure.label(beadpack_remove_objects)
)
beadpack_markers = measure.label(peak_local_max)
beadpack_labels = morphology.watershed(
beadpack_rescaled,
beadpack_markers,
mask=beadpack_remove_objects
)
sc.slice_explorer(beadpack_labels, cmap='nipy_spectral');
```
## <a id='segmentation'></a>[skimage.segmentation](https://scikit-image.org/docs/stable/api/skimage.segmentation.html) - identification of regions of interest
One of the key image analysis tasks is identifying regions of interest. These could be a person, an object, certain features of an animal, a microscopic structure, or stars. Segmenting an image is the process of determining where the regions you care about are located in the image.
Segmentation has two overarching categories:
**Supervised** - must provide some guidance (seed points or initial conditions)
* `segmentation.random_walker`
* `segmentation.active_contour`
* `segmentation.watershed`
* `segmentation.flood_fill`
* `segmentation.flood`
**Unsupervised** - no human input
* `segmentation.slic`
* `segmentation.felzenszwalb`
* `segmentation.chan_vese`
There are also some supervised and unsupervised thresholding algorithms in `filters`. There is a [segmentation lecture](https://github.com/scikit-image/skimage-tutorials/blob/main/lectures/4_segmentation.ipynb) ([and its solution](https://github.com/scikit-image/skimage-tutorials/blob/main/lectures/solutions/4_segmentation.ipynb)) you may peruse, as well as many [gallery examples](https://scikit-image.org/docs/stable/auto_examples/index.html#segmentation-of-objects) which illustrate all of these segmentation methods.
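A tiny sketch of the supervised flavor, using `segmentation.flood_fill` on a toy array: the seed point determines which connected region is filled.

```
import numpy as np
from skimage import segmentation

# Two separate dark regions (0) on a bright background (2).
image = np.array([[2, 2, 2, 2, 2],
                  [2, 0, 0, 2, 2],
                  [2, 0, 0, 2, 0],
                  [2, 2, 2, 2, 0]], dtype=np.uint8)

# Seeding inside the left region floods only pixels connected to the seed;
# the right region keeps its original values.
filled = segmentation.flood_fill(image, seed_point=(1, 1), new_value=7)
print(filled)
```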
[Feature extraction](https://en.wikipedia.org/wiki/Feature_extraction) reduces data required to describe an image or objects by measuring informative features. These include features such as area or volume, bounding boxes, and intensity statistics.
Before measuring objects, it helps to clear objects from the image border. Measurements should only be collected for objects entirely contained in the image.
```
from skimage import segmentation # skimage's segmentation submodule.
cells_labels_inner = segmentation.clear_border(cells_labels)
cells_labels_inner = morphology.remove_small_objects(cells_labels_inner, min_size=200)
print('Interior labels: {}'.format(np.unique(cells_labels_inner)))
sc.slice_explorer(cells_labels_inner, cmap='nipy_spectral');
```
After clearing the border, the object labels are no longer sequentially increasing. The labels can be renumbered such that there are no jumps in the list of image labels:
```
cells_relabeled, _, _ = segmentation.relabel_sequential(cells_labels_inner)
print('Relabeled labels: {}'.format(np.unique(cells_relabeled)))
```
`skimage.measure.regionprops` automatically measures many labeled image features. Optionally, an `intensity_image` can be supplied and intensity features are extracted per object. It's good practice to make measurements on the original image.
Not all properties are supported for 3D data. Below are lists of supported and unsupported 3D measurements.
```
properties = measure.regionprops(cells_relabeled, intensity_image=cells)
props_first_region = properties[0]
supported = ['']
unsupported = ['']
for prop in props_first_region:
try:
props_first_region[prop]
supported.append(prop)
except NotImplementedError:
unsupported.append(prop)
print('Supported properties:')
print('\n\t'.join(supported))
print()
print('Unsupported properties:')
print('\n\t'.join(unsupported))
```
`skimage.measure.regionprops` ignores the 0 label, which represents the background.
```
print('Measured regions: {}'.format([prop.label for prop in properties]))
cells_volumes = [prop.area for prop in properties]
print('Total pixels: {}'.format(cells_volumes))
```
Collected measurements can be further reduced by computing per-image statistics such as total, minimum, maximum, mean, and standard deviation.
```
print('Volume statistics\n')
print(' * Total: {}'.format(np.sum(cells_volumes)))
print(' * Min: {}'.format(np.min(cells_volumes)))
print(' * Max: {}'.format(np.max(cells_volumes)))
print(' * Mean: {:0.2f}'.format(np.mean(cells_volumes)))
print(' * Standard deviation: {:0.2f}'.format(np.std(cells_volumes)))
```
__Exercise: <font color='red'>(5-ish min? 🙄)</font>__ let's clean the beads and prepare them for visualization!
Here are your tasks:
* Clear the borders and remove small objects on `beadpack_labels`.
* Show the volume information for the beads.
```
beadpack_labels_inner = segmentation.clear_border()
beadpack_labels_inner = morphology.remove_small_objects()
print('Interior labels: {}'.format(np.unique()))
sc.slice_explorer(beadpack_labels_inner, cmap='nipy_spectral');
beadpack_relabeled, _, _ = segmentation.relabel_sequential(beadpack_labels_inner)
print('Relabeled labels: {}'.format(np.unique(beadpack_relabeled)))
beadpack_properties = measure.regionprops(beadpack_relabeled)
beadpack_volumes = [prop.area for prop in beadpack_properties]
print('Total pixels: {}'.format(beadpack_volumes))
print('Volume statistics\n')
print(' * Total: {}'.format(np.sum(beadpack_volumes)))
print(' * Min: {}'.format(np.min(beadpack_volumes)))
print(' * Max: {}'.format(np.max(beadpack_volumes)))
print(' * Mean: {:0.2f}'.format(np.mean(beadpack_volumes)))
print(' * Standard deviation: {:0.2f}'.format(np.std(beadpack_volumes)))
```
## Visualization
After cleaning, separating and studying the regions within the data, it's time to visualize them.
We could use the perimeter of a region to characterize its boundary, but perimeter measurements are not computed for 3D objects. Since the 3D analogue of the perimeter is the surface area, we can instead measure the surface of an object by generating a surface mesh with `skimage.measure.marching_cubes` and computing the surface area of that mesh with `skimage.measure.mesh_surface_area`. The function `plot_3d_surface` takes care of this:
```
sc.plot_3d_surface(data=cells,
labels=cells_relabeled,
region=6,
spacing=spacing)
```
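For intuition about the measurement itself, here is a hedged, self-contained sketch using a synthetic sphere rather than the tutorial's `cells` volume: marching cubes turns an iso-surface into a triangle mesh, and `mesh_surface_area` sums the triangle areas. The analytic area of a radius-10 sphere is 4π·10² ≈ 1256.6.

```python
import numpy as np
from skimage import measure

# Synthetic scalar field: distance from the center of a 32**3 volume.
zz, yy, xx = np.mgrid[:32, :32, :32]
dist = np.sqrt((zz - 16.0) ** 2 + (yy - 16.0) ** 2 + (xx - 16.0) ** 2)

# Mesh the iso-surface dist == 10 (a sphere of radius 10) and sum its area.
verts, faces, _, _ = measure.marching_cubes(dist, level=10.0)
area = measure.mesh_surface_area(verts, faces)
print('Mesh area: {:0.1f} (analytic: {:0.1f})'.format(area, 4 * np.pi * 10 ** 2))
```

The mesh area closely matches the analytic value because marching cubes interpolates vertex positions within voxels of the smooth distance field.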
Now let's generate a full, interactive 3D plot using ITK and `itkwidgets`:
```
import itk
from itkwidgets import view
```
To generate a 3D plot using ITK, we need to reformat the numpy array into an ITK matrix. Then, we use `itkwidgets.view`:
```
cells_itk_image = itk.GetImageFromArray(util.img_as_ubyte(cells_relabeled))
view(cells_itk_image, ui_collapsed=True)
```
__Exercise: <font color='red'>(3-ish min? 🙄)</font>__ now, using our variable `beadpack_relabeled`, let's inspect its surfaces interactively.
Your tasks right now are:
* Downscale `beadpack_relabeled` by a factor of 4.
* Convert `beadpack_relabeled` to ITK's image.
* Use ITK's `view` to check the results.
```
# Your solution goes here!
beadpack_relabeled = transform.downscale_local_mean()
beadpack_itk_image = itk.GetImageFromArray()
view()
```
## ⭐⭐ BONUS! ⭐⭐ Parallelizing image loops
In image processing, we frequently apply the same algorithm on a large batch of images. Some of these image loops can take a while to be processed. Here we'll see how to use `joblib` to parallelize loops.
Take, for example, the bilateral filter we applied earlier in this tutorial:
```
def bilateral_classic_loop():
cells_bilateral = np.empty_like(cells_rescaled)
for plane, image in enumerate(cells_rescaled):
cells_bilateral[plane] = restoration.denoise_bilateral(image, multichannel=False)
return cells_bilateral
%timeit bilateral_classic_loop()
```
Now, let's convert this loop to a `joblib` one:
```
from joblib import Parallel, delayed
# when using n_jobs=-2, all CPUs but one are used.
def bilateral_joblib_loop():
    cells_bilateral = Parallel(n_jobs=-2)(
        delayed(restoration.denoise_bilateral)(image, multichannel=False)
        for image in cells_rescaled)
    return np.asarray(cells_bilateral)
%timeit bilateral_joblib_loop()
```
## Going beyond
[1] A tour/guide on scikit-image's submodules: https://github.com/scikit-image/skimage-tutorials/blob/main/lectures/tour_of_skimage.ipynb
[2] scikit-image's gallery examples: https://scikit-image.org/docs/stable/auto_examples/
[3] ITK's `itkwidgets`: https://github.com/InsightSoftwareConsortium/itkwidgets
[4] `joblib.Parallel`: https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html
# 19-05-16 Notes:

I was attempting to generate PDB files for my model's predictions (including sidechains), but I found out that my backbone reconstruction is poor to begin with. In this notebook, I'll use `prody` and `matplotlib` to try to root out the issue.
```
from prody import *
import sys
import numpy as np
import matplotlib.pyplot as plt
import torch
from glob import glob
sys.path.extend(['../transformer/'])
from Sidechains import SC_DATA
# from pylab import *
# %matplotlib inline
%matplotlib notebook
# %pylab
np.set_printoptions(suppress=True)
from numpy import sign, tile, concatenate, pi, cross, subtract, round, var
from numpy import ndarray, power, sqrt, array, zeros, arccos
RAD2DEG = 180 / pi
```
## Load true and predicted structures
```
true = parsePDB("1Y0M", chain="A").select('protein and name N CA C')
gen = parsePDB("../coords/0516a/1Y0M_A_l0.00.pdb")
# true = parsePDB("1h75", chain="A").select('protein and name N CA C')
# gen = parsePDB("../coords/0516a/1H75_A_l0.00.pdb")
true, gen
showProtein(true, gen)
```
## Do the dihedrals match the true structure?
```
def get_dihedral(coords1, coords2, coords3, coords4, radian=False):
"""Return the dihedral angle in degrees."""
a1 = coords2 - coords1
a2 = coords3 - coords2
a3 = coords4 - coords3
v1 = cross(a1, a2)
v1 = v1 / (v1 * v1).sum(-1)**0.5
v2 = cross(a2, a3)
v2 = v2 / (v2 * v2).sum(-1)**0.5
porm = sign((v1 * a3).sum(-1))
rad = arccos((v1*v2).sum(-1) / ((v1**2).sum(-1) * (v2**2).sum(-1))**0.5)
if radian:
return porm * rad
else:
return porm * rad * RAD2DEG
true.getNames()[:5], gen.getNames()[:5]
true.ca.getResnames()[:5], gen.ca.getResnames()[:5]
t_coords = true.getCoords()
g_coords = gen.getCoords()
i = 0
coords = ["N", "CA", "C"]*500
while i < len(true) - 3:
a, b, c, d = t_coords[i], t_coords[i+1], t_coords[i+2], t_coords[i+3]
w, x, y, z = g_coords[i], g_coords[i+1], g_coords[i+2], g_coords[i+3]
t_dihe = get_dihedral(a, b, c, d, radian=True)
g_dihe = get_dihedral(w, x, y, z, radian=True)
print(coords[i : (i+4)], t_dihe - g_dihe)
print(t_dihe, g_dihe)
i += 1
# Looking to see if using calcDihedral vs get_dihedral returns anything different
# i = 0
# coords = ["N", "CA", "C"]*500
# while i < len(true) - 3:
# a, b, c, d = true[i], true[i+1], true[i+2], true[i+3]
# w, x, y, z = gen[i], gen[i+1], gen[i+2], gen[i+3]
# t_dihe = calcDihedral(a, b, c, d, radian=True)
# g_dihe = calcDihedral(w, x, y, z, radian=True)
# d = t_dihe - g_dihe
# print(coords[i : (i+4)], d, d + 2 * pi, d + pi, d - pi)
# print(t_dihe, g_dihe)
# i += 1
list(true.getHierView())[0]
for tres, gres in zip(list(true.getHierView())[0].iterResidues(),
list(gen.getHierView())[0].iterResidues()):
try:
phi = calcPhi(tres, radian=True) - calcPhi(gres, radian=True)
gphi = calcPhi(gres, radian=True)
tphi = calcPhi(tres, radian=True)
except ValueError:
gphi = -999
phi = -999
tphi = -999
try:
psi = calcPsi(tres, radian=True) - calcPsi(gres, radian=True)
gpsi = calcPsi(gres, radian=True)
tpsi = calcPsi(tres, radian=True)
except ValueError:
gpsi = -999
psi = -999
tpsi = -999
try:
omega = calcOmega(tres, radian=True) - calcOmega(gres, radian=True)
gomega = calcOmega(gres, radian=True)
tomega = calcOmega(tres, radian=True)
except ValueError:
gomega = -999
omega = -999
tomega = -999
# print("{0}: {1:.2f} {2:.2f} {3:.2f}".format(tres, phi, psi, omega))
print("{0}: {1:.2f} {2:.2f} {3:.2f}".format(tres, tphi, tpsi, tomega))
import prody as pr
import sys
import numpy as np
import matplotlib.pyplot as plt
import torch
from glob import glob
sys.path.extend(['../transformer/'])
from Sidechains import SC_DATA
%matplotlib inline
SC_DATA["ARG"]
refs_fns = glob("../data/amino_acid_substructures/*.pdb")
refs = pr.parsePDB(refs_fns)
refs[0].getCoords(), refs[0].getNames()
```
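As a stand-alone sanity check of the dihedral math itself (not part of the original debugging session), here is a hedged, self-contained variant of `get_dihedral` with the dot product clipped before `arccos` to avoid NaNs from floating-point drift. A right-handed 90° twist of four points should come out as +90:

```python
import numpy as np

RAD2DEG = 180 / np.pi

def get_dihedral(c1, c2, c3, c4, radian=False):
    """Dihedral angle defined by four points (degrees by default)."""
    a1, a2, a3 = c2 - c1, c3 - c2, c4 - c3
    v1 = np.cross(a1, a2)
    v1 = v1 / np.linalg.norm(v1)
    v2 = np.cross(a2, a3)
    v2 = v2 / np.linalg.norm(v2)
    porm = np.sign((v1 * a3).sum(-1))  # sign disambiguates +/- rotation
    rad = np.arccos(np.clip((v1 * v2).sum(-1), -1.0, 1.0))
    if porm != 0:
        rad = porm * rad
    return rad if radian else rad * RAD2DEG

pts = [np.array(p, dtype=float) for p in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]]
print(get_dihedral(*pts))  # ≈ 90.0 degrees
```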
```
#Setup
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import scipy
from scipy import stats
plt.rcParams["figure.figsize"] = (20,15)
```
## Homework 5
The purpose of this homework is to work carefully through a numeric/simulated solution to Bayes' Theorem. Bayes' Theorem reads
$$P(signal|data)=\frac{P(data|signal)P(signal)}{P(data)}$$
Effectively, the goal of this homework (and the lab) is to find P(signal|data).
Reading through Bayes' theorem, it says that given a data reading, the probability that it was produced by a given true signal, P(signal|data), equals the probability of getting that particular data reading given a certain true signal, P(data|signal), times the probability of the signal having a particular strength, P(signal), divided by the probability of each data reading, P(data).
This is just mathematics, so it is always true. But in practice it is quite subtle how to use it.
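To make the formula concrete before simulating, here is a tiny discrete update with made-up numbers: two candidate signal strengths, a flat prior, and a likelihood for one observed data value.

```python
# Flat prior over two hypothetical signal strengths (illustrative numbers).
p_signal = {0: 0.5, 10: 0.5}
# Likelihood P(data|signal) of the one datum we observed, per hypothesis.
p_data_given = {0: 0.2, 10: 0.6}

# Evidence P(data) = sum over hypotheses of P(data|signal) * P(signal).
p_data = sum(p_data_given[s] * p_signal[s] for s in p_signal)
posterior = {s: p_data_given[s] * p_signal[s] / p_data for s in p_signal}
print({s: round(p, 3) for s, p in posterior.items()})  # → {0: 0.25, 10: 0.75}
```

Note the posterior sums to 1 by construction: dividing by P(data) is exactly the normalization step.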
### Problem 1
First start by throwing a signal-free background. For Problem 1 choose a Normal distribution with some modest $\sigma$ , say in the range 2-5. Create a million background events.
```
bg = stats.norm.rvs(loc=0, scale=3, size = 1000000)
```
Now we need to make some signal. Let us choose to make signals of random strength on the interval of 0-20. Now it is critically important that you throw these using a uniform distribution. A uniform distribution means that the signal is equally likely to be small/faint (near zero) as large/bright (near 20). Mathematically this is the P(signal) in the equation. If you use another way of simulating signals that does not have a uniform distribution, you are injecting an implicit prior (very, very bad).
```
signal = stats.uniform.rvs(loc = 0, scale = 20, size=1000000)
```
Now add your signal to your background to create fake data readings. Since you know what the true signal was for each data reading, and you used a flat prior, you now have $P(data|signal)P(signal)$ .
```
data = bg + signal
```
Now make one of the 2D histograms as shown in class. Here we want to histogram the signal vs. the simulated data readings. There are a couple of ways to do this, but it will be easier later if you define your bin edges explicitly, make a histogram, then plot it. Here is the code I used for the plot in class:
```
signaledges = np.linspace(0,20,40)
dataedges = np.linspace(-7,27,68)
Psd, temp, temp2= np.histogram2d(data,signal, bins=[dataedges,signaledges], density=True)
datacenters = (dataedges[:-1] + dataedges[1:]) / 2
signalcenters = (signaledges[:-1] + signaledges[1:]) / 2
real = signaledges[25]
obs = dataedges[30]
plt.pcolormesh(datacenters,signalcenters,Psd.T)
plt.ylabel('True signal, $P(s|d)$', fontsize = 24)
plt.xlabel('Observed data, $P(d|s)$', fontsize = 24)
plt.axhline(real,color = 'red')
plt.axvline(obs,color = 'orange')
plt.show()
```
Now to explore this we can take slices of the above. We can look at our array edges and pick a vertical or horizontal stripe.
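One subtlety when slicing (a hedged aside, not from the assignment): a single row or column of the 2D histogram is not automatically normalized, so to read a slice as a probability distribution you can divide it by its sum.

```python
import numpy as np

# Stand-in for a 2D histogram like Psd: rows = data bins, columns = signal bins.
counts = np.array([[1.0, 3.0],
                   [2.0, 2.0]])
row = counts[0]                  # fix one observed-data bin
conditional = row / row.sum()    # normalize so it sums to 1 -> P(s|d)
print(conditional)  # → [0.25 0.75]
```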
### Problem 1b
Select a true injected signal and plot P(d|s) . (Use a stair style plot). Label your plot and clearly explain what you are plotting and how to interpret it. [Hint: this was also shown in class.]
```
plt.step(temp[:-1], Psd[:,25])
plt.title('P(d|s) for a true signal of '+str(np.round(real,3)), fontsize=24)
plt.xlabel('Observed data $d$', fontsize=24)
plt.show()
```
This plot shows the probability of landing in each data bin, assuming that the real signal we injected into our data was 12.821. In other words, the y-axis represents the probability that, given a true signal of 12.821, a randomly selected data reading falls between the start and end of a step.
### Problem 1c
Select an observed data value and plot $P(s|d)$ . (Use a stair style plot). Label your plot and clearly explain what you are plotting and how to interpret it.
```
plt.step(temp2[:-1], Psd[30,:], linewidth=3, color='orange')
plt.title('P(s|d) for an observed signal of ' + str(np.round(obs,3)), fontsize=24)
plt.xlabel('True signal $s$', fontsize=24)
plt.show()
```
The "observed signal" is the value we observe from the combination of our signal and background. However, as the background has a non-zero value, the real signal is not equal to the observed signal. Therefore, this plot shows us the probability of each real signal value, assuming that our observed signal is 8.224.
### Problem 2
Now repeat the above, but with a background with non-zero mean. The easiest way is to keep a Gaussian distribution but give it a non-zero mean. [Hint: move it by at least a couple of $\sigma$.] Reproduce the graphs above. Lastly, overplot the $P(d|s)$ and $P(s|d)$ plots. Why are they not centered on the same value? Explain carefully.
```
bg = stats.norm.rvs(loc=-8,scale=3, size = 1000000)
signal = stats.uniform.rvs(loc = 0, scale = 20, size=1000000)
data = bg + signal
signaledges = np.linspace(0,20,40)
dataedges = np.linspace(-7,27,68)
Psd, temp, temp2= np.histogram2d(data,signal, bins=[dataedges,signaledges], density=True)
datacenters = (dataedges[:-1] + dataedges[1:]) / 2
signalcenters = (signaledges[:-1] + signaledges[1:]) / 2
real = signaledges[20]
obs = dataedges[35]
plt.pcolormesh(datacenters,signalcenters,Psd.T)
plt.ylabel('True signal, $P(s|d)$', fontsize = 24)
plt.xlabel('Observed data, $P(d|s)$', fontsize = 24)
plt.axhline(real,color = 'red')
plt.axvline(obs,color = 'orange')
plt.show()
pds = plt.step(temp[:-1], Psd[:,20], label='P(d|s) for a true signal of '+str(np.round(real,3)))
psd = plt.step(temp2[:-1], Psd[35,:], linewidth=3, color='orange', label='P(s|d) for an observed signal of ' + str(np.round(obs,3)))
plt.xlabel('Value',fontsize = 24)
plt.legend()
plt.show()
```
As we can see from the graph above, the plots for $P(d|s)$ and $P(s|d)$ are not centered at the same value. This is because they measure two different things. $P(d|s)$ is the probability of each possible data reading assuming the true signal is equal to s. By contrast, $P(s|d)$ is the probability distribution over the true signal given that we observed the data value d. Since the non-zero-mean background shifts the data away from the signal, there is actually very little reason to think they should be centered at the same value.
```
# import the required libraries
%matplotlib inline
import random
import tsfresh
import os
import math
from scipy import stats
from scipy.spatial.distance import pdist
from math import sqrt, log, floor
from fastdtw import fastdtw
import ipywidgets as widgets
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import pandas as pd
import seaborn as sns
from statistics import mean
from scipy.spatial.distance import euclidean
import scipy.cluster.hierarchy as hac
from scipy.cluster.hierarchy import fcluster
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.manifold import TSNE
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score, silhouette_score, silhouette_samples
from sklearn.metrics import mean_squared_error
from scipy.spatial import distance
sns.set(style='white')
# "fix" the randomness for reproducibility
random.seed(42)
!pip install ipywidgets
```
### Dataset
The data are time series (weekly dengue cases) from different districts of Paraguay.
```
path = "./data/Notificaciones/"
filename_read = os.path.join(path,"normalizado.csv")
notificaciones = pd.read_csv(filename_read,delimiter=",",engine='python')
notificaciones.shape
listaMunicp = notificaciones['distrito_nombre'].tolist()
listaMunicp = list(dict.fromkeys(listaMunicp))
print('There are', len(listaMunicp), 'districts')
listaMunicp.sort()
print(listaMunicp)
```
Next we take the time series we just read in and see what they look like:
```
timeSeries = pd.DataFrame()
for muni in listaMunicp:
municipio=notificaciones['distrito_nombre']==muni
notif_x_municp=notificaciones[municipio]
notif_x_municp = notif_x_municp.reset_index(drop=True)
notif_x_municp = notif_x_municp['incidencia']
notif_x_municp = notif_x_municp.replace('nan', np.nan).fillna(0.000001)
notif_x_municp = notif_x_municp.replace([np.inf, -np.inf], np.nan).fillna(0.000001)
timeSeries = timeSeries.append(notif_x_municp)
ax = sns.tsplot(ax=None, data=notif_x_municp.values, err_style="unit_traces")
plt.show()
#timeseries shape
n=217
timeSeries.shape
timeSeries.describe()
```
### Cluster analysis (clustering)
Clustering is an important process within machine learning. Its main purpose is to group sets of unlabeled objects into subsets of the data known as clusters. Each cluster consists of a collection of objects or data points that, in terms of the analysis, are similar to one another but differ from the objects belonging to the other clusters.

Although the data are not necessarily this easy to group:

### Similarity metrics
To measure how similar (or dissimilar) individuals are, there is an enormous number of similarity and dissimilarity (divergence) indices. Each has different properties and uses, and we must keep them in mind to apply the right one to the case at hand.
Most of these indices are either distance-based indicators (treating individuals as vectors in the space of the variables, so that a large distance between two individuals indicates a high degree of dissimilarity), indicators based on correlation coefficients, or indicators based on tables recording the presence or absence of a series of attributes.
Below we show functions for:
* Euclidean distance
* Root mean squared error
* Fast Dynamic Time Warping
* Pearson correlation and
* Spearman correlation.
Many other metrics exist, and the nature of each problem dictates which one to use. For example, *Fast Dynamic Time Warping* is a similarity measure designed especially for time series.
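To see why DTW suits time series, consider two identical bumps shifted by one step: their Euclidean distance is nonzero, while DTW can warp the time axis and match them exactly. This is a plain-Python sketch of the classic DTW recurrence (the cells below use the `fastdtw` package instead):

```python
import numpy as np

def dtw(x, y):
    # Classic O(n*m) dynamic-programming DTW with absolute-difference cost.
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([0, 0, 1, 2, 1, 0, 0, 0], dtype=float)
b = np.array([0, 0, 0, 1, 2, 1, 0, 0], dtype=float)  # same shape, shifted by one
print(np.linalg.norm(a - b), dtw(a, b))  # → 2.0 0.0
```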
```
#Euclidean
def euclidean(x, y):
r=np.linalg.norm(x-y)
if math.isnan(r):
r=1
#print(r)
return r
#RMSE
def rmse(x, y):
r=sqrt(mean_squared_error(x,y))
if math.isnan(r):
r=1
#print(r)
return r
#Fast Dynamic time warping
def fast_DTW(x, y):
r, _ = fastdtw(x, y, dist=euclidean)
if math.isnan(r):
r=1
#print(r)
return r
#Correlation
def corr(x, y):
r=np.dot(x-mean(x),y-mean(y))/((np.linalg.norm(x-mean(x)))*(np.linalg.norm(y-mean(y))))
if math.isnan(r):
r=0
#print(r)
return 1 - r
#Spearman
def scorr(x, y):
r = stats.spearmanr(x, y)[0]
if math.isnan(r):
r=0
#print(r)
return 1 - r
# compute distances using LCSS
# function for LCSS computation
# based on implementation from
# https://rosettacode.org/wiki/Longest_common_subsequence
def lcs(a, b):
lengths = [[0 for j in range(len(b)+1)] for i in range(len(a)+1)]
# row 0 and column 0 are initialized to 0 already
for i, x in enumerate(a):
for j, y in enumerate(b):
if x == y:
lengths[i+1][j+1] = lengths[i][j] + 1
else:
lengths[i+1][j+1] = max(lengths[i+1][j], lengths[i][j+1])
x, y = len(a), len(b)
result = lengths[x][y]
return result
def discretise(x):
return int(x * 10)
def multidim_lcs(a, b):
a = a.applymap(discretise)
b = b.applymap(discretise)
rows, dims = a.shape
lcss = [lcs(a[i+2], b[i+2]) for i in range(dims)]
return 1 - sum(lcss) / (rows * dims)
# Distance matrices for the clustering algorithms
#Euclidean
euclidean_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
euclidean_dist[i,j] = euclidean(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#RMSE
rmse_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
rmse_dist[i,j] = rmse(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#Corr
corr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
corr_dist[i,j] = corr(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#scorr
scorr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
scorr_dist[i,j] = scorr(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#DTW
dtw_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
dtw_dist[i,j] = fast_DTW(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
```
### Determining the number of clusters
Most clustering techniques need the number of clusters to form as an *input*, so we run the algorithm with different cluster counts and keep the one with the lowest overall error. To measure that error we use the **silhouette score**.
The **silhouette score** can be used to study the separation between the resulting clusters, especially when there is no prior knowledge of the true group of each object, which is the most common situation in real applications.
The silhouette score $s(i)$ is computed as:
\begin{equation}
s(i)=\dfrac{b(i)-a(i)}{max(b(i),a(i))}
\end{equation}
Define $a(i)$ as the mean distance from point $(i)$ to all the other points of the cluster it was assigned to ($A$). We can interpret $a(i)$ as how well the point fits its cluster: the smaller the value, the better the assignment.
Similarly, define $b(i)$ as the mean distance from point $(i)$ to the points of its nearest neighboring cluster ($B$). Cluster ($B$) is the cluster that point $(i)$ is not assigned to but that lies closest among all the other clusters. $s(i)$ takes values in the range $[-1, 1]$.
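A tiny worked example of this definition, computed by hand with numpy on made-up 1-D data: two tight, well-separated groups should score close to 1.

```python
import numpy as np

def silhouette(i, X, labels):
    # a(i): mean distance to the other points of i's own cluster.
    # b(i): mean distance to the points of the nearest other cluster.
    me = labels[i]
    d = np.abs(X - X[i])
    a = d[labels == me].sum() / max((labels == me).sum() - 1, 1)
    b = min(d[labels == c].mean() for c in set(labels) if c != me)
    return (b - a) / max(a, b)

X = np.array([0.0, 0.2, 0.1, 5.0, 5.1, 4.9])
labels = np.array([0, 0, 0, 1, 1, 1])
score = np.mean([silhouette(i, X, labels) for i in range(len(X))])
print(round(score, 3))  # → 0.973
```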
```
from yellowbrick.cluster import KElbowVisualizer
model = AgglomerativeClustering()
visualizer = KElbowVisualizer(model, k=(3,20),metric='distortion', timings=False)
visualizer.fit(rmse_dist) # Fit the data to the visualizer
visualizer.show() # Finalize and render the figure
```
So we will form 9 clusters:
```
k=9
```
## Clustering techniques
### K-means
The goal of this algorithm is to find "K" groups (clusters) in the raw data. It works iteratively, assigning each "point" (each row of our input set forms a coordinate) to one of the "K" groups based on the similarity of its features (the columns). Running the algorithm yields:
* The "centroids" of each group: the "coordinates" of each of the K clusters, which can be used to label new samples.
* A label for every point in the training set, each belonging to one of the K groups formed.
The groups are defined "organically": their positions are adjusted on every iteration until the algorithm converges. Once the centroids are found, we should analyze them to see which characteristics distinguish each group from the others.

In the figure above, the data are grouped according to the *centroid*, represented by a star. The algorithm initializes the centroids randomly and adjusts them on each iteration; the points closest to a *centroid* are the ones that belong to its group.
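The assign-then-update loop described above can be sketched in a few lines of numpy (Lloyd's iteration on toy 1-D data; the experiments below use sklearn's `KMeans` instead):

```python
import numpy as np

X = np.array([1.0, 1.2, 0.8, 8.0, 8.2, 7.8])
centroids = np.array([0.0, 10.0])  # deliberately rough initial guess
for _ in range(10):
    # Assignment step: each point goes to its nearest centroid.
    labels = np.argmin(np.abs(X[:, None] - centroids[None, :]), axis=1)
    # Update step: each centroid moves to the mean of its assigned points.
    centroids = np.array([X[labels == j].mean() for j in range(2)])
print(centroids)  # → [1. 8.]
```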
### Hierarchical clustering

The hierarchical clustering algorithm groups the data based on the distance between points, seeking to make the data within a cluster as similar to each other as possible.
In a graphical representation, the elements end up nested in tree-shaped hierarchies.
### DBSCAN
Density-based spatial clustering of applications with noise (DBSCAN) is a data-clustering algorithm. It is a density-based clustering method because it finds clusters starting from an estimate of the density distribution of the corresponding nodes. DBSCAN is one of the most widely used and cited clustering algorithms in the scientific literature.

The points marked in red are core points. The yellow points are density-reachable from a red point and density-connected to it, so they belong to the same cluster. The blue point is a noise point that is neither a core point nor density-reachable.
```
# Experiments
print('Silhouette coefficient')
#HAC + euclidean
Z = hac.linkage(timeSeries, method='complete', metric=euclidean)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + euclidean distance: ",silhouette_score(euclidean_dist, clusters))
#HAC + rmse
Z = hac.linkage(timeSeries, method='complete', metric=rmse)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + rmse distance: ",silhouette_score( rmse_dist, clusters))
#HAC + corr
Z = hac.linkage(timeSeries, method='complete', metric=corr)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + corr distance: ",silhouette_score( corr_dist, clusters))
#HAC + scorr
Z = hac.linkage(timeSeries, method='complete', metric=scorr)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + scorr distance: ",silhouette_score( scorr_dist, clusters))
#HAC + LCSS
#Z = hac.linkage(timeSeries, method='complete', metric=multidim_lcs)
#clusters = fcluster(Z, k, criterion='maxclust')
#print("HAC + LCSS distance: ",silhouette_score( timeSeries, clusters, metric=multidim_lcs))
#HAC + DTW
Z = hac.linkage(timeSeries, method='complete', metric=fast_DTW)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + DTW distance: ",silhouette_score( dtw_dist, clusters))
km_euc = KMeans(n_clusters=k).fit_predict(euclidean_dist)
silhouette_avg=silhouette_score( euclidean_dist, km_euc)
print("KM + euclidian distance: ",silhouette_score( euclidean_dist, km_euc))
km_rmse = KMeans(n_clusters=k).fit_predict(rmse_dist)
print("KM + rmse distance: ",silhouette_score( rmse_dist, km_rmse))
km_corr = KMeans(n_clusters=k).fit_predict(corr_dist)
print("KM + corr distance: ",silhouette_score( corr_dist, km_corr))
km_scorr = KMeans(n_clusters=k).fit_predict(scorr_dist)
print("KM + scorr distance: ",silhouette_score( scorr_dist, km_scorr))
km_dtw = KMeans(n_clusters=k).fit_predict(dtw_dist)
print("KM + dtw distance: ",silhouette_score( dtw_dist, km_dtw))
# DBSCAN experiments
DB_euc = DBSCAN(eps=3, min_samples=2).fit_predict(euclidean_dist)
silhouette_avg=silhouette_score( euclidean_dist, DB_euc)
print("DBSCAN + euclidian distance: ",silhouette_score( euclidean_dist, DB_euc))
DB_rmse = DBSCAN(eps=12, min_samples=10).fit_predict(rmse_dist)
# silhouette_score fails here (these parameters leave DBSCAN with a single
# cluster, for which the score is undefined), so we report 0 instead.
#print("DBSCAN + rmse distance: ",silhouette_score( rmse_dist, DB_rmse))
print("DBSCAN + rmse distance: ",0.00000000)
DB_corr = DBSCAN(eps=3, min_samples=2).fit_predict(corr_dist)
print("DBSCAN + corr distance: ",silhouette_score( corr_dist, DB_corr))
DB_scorr = DBSCAN(eps=3, min_samples=2).fit_predict(scorr_dist)
print("DBSCAN + scorr distance: ",silhouette_score( scorr_dist, DB_scorr))
DB_dtw = DBSCAN(eps=3, min_samples=2).fit_predict(dtw_dist)
print("DBSCAN + dtw distance: ",silhouette_score( dtw_dist, DB_dtw))
```
## Feature-based clustering
Another approach to clustering is to extract certain properties (features) from our data and group on those instead; the procedure is otherwise the same as when working with the raw data.
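As a hedged illustration of what such features are, here are three of the per-series quantities used below (mean, variance, and lag-1 autocorrelation) computed by hand on toy data; `tsfresh` computes these, and hundreds more, for us.

```python
import numpy as np

series = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0, 2.0])
mean, var = series.mean(), series.var()

# Lag-1 autocorrelation: covariance between the series and its 1-step
# shifted copy, normalized by the total variance.
centered = series - mean
acf1 = (centered[:-1] * centered[1:]).sum() / (centered ** 2).sum()
print(mean, var, acf1)  # → 2.0 0.5 0.0
```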
```
from tsfresh import extract_features
#features extraction
extracted_features = extract_features(timeSeries, column_id="indice")
extracted_features.shape
list(extracted_features.columns.values)
n=217
features = pd.DataFrame()
Mean=[]
Var=[]
aCF1=[]
Peak=[]
Entropy=[]
Cpoints=[]
for muni in listaMunicp:
municipio=notificaciones['distrito_nombre']==muni
notif_x_municp=notificaciones[municipio]
notif_x_municp = notif_x_municp.reset_index(drop=True)
notif_x_municp = notif_x_municp['incidencia']
notif_x_municp = notif_x_municp.replace('nan', np.nan).fillna(0.000001)
notif_x_municp = notif_x_municp.replace([np.inf, -np.inf], np.nan).fillna(0.000001)
#Features
mean=tsfresh.feature_extraction.feature_calculators.mean(notif_x_municp)
var=tsfresh.feature_extraction.feature_calculators.variance(notif_x_municp)
ACF1=tsfresh.feature_extraction.feature_calculators.autocorrelation(notif_x_municp,1)
peak=tsfresh.feature_extraction.feature_calculators.number_peaks(notif_x_municp,20)
entropy=tsfresh.feature_extraction.feature_calculators.sample_entropy(notif_x_municp)
cpoints=tsfresh.feature_extraction.feature_calculators.number_crossing_m(notif_x_municp,5)
Mean.append(mean)
Var.append(var)
aCF1.append(ACF1)
Peak.append(peak)
Entropy.append(entropy)
Cpoints.append(cpoints)
data_tuples = list(zip(Mean,Var,aCF1,Peak,Entropy,Cpoints))
features = pd.DataFrame(data_tuples, columns =['Mean', 'Var', 'ACF1', 'Peak','Entropy','Cpoints'])
# print the data
features
features.iloc[1]
# Distance matrices on the extracted features
#Euclidean
f_euclidean_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
#print("j",j)
f_euclidean_dist[i,j] = euclidean(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#RMSE
f_rmse_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_rmse_dist[i,j] = rmse(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#Corr
#print(features.iloc[i].values.flatten())
#print(features.iloc[j].values.flatten())
print('-------------------------------')
f_corr_dist = np.zeros((n,n))
#for i in range(0,n):
# print("i",i)
# for j in range(0,n):
# print("j",j)
# print(features.iloc[i].values.flatten())
# print(features.iloc[j].values.flatten())
# f_corr_dist[i,j] = corr(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#scorr
f_scorr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_scorr_dist[i,j] = scorr(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#DTW
f_dtw_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_dtw_dist[i,j] = fast_DTW(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
from yellowbrick.cluster import KElbowVisualizer
model = AgglomerativeClustering()
visualizer = KElbowVisualizer(model, k=(3,50),metric='distortion', timings=False)
visualizer.fit(f_scorr_dist) # Fit the data to the visualizer
visualizer.show() # Finalize and render the figure
k=9
km_euc = KMeans(n_clusters=k).fit_predict(f_euclidean_dist)
silhouette_avg=silhouette_score( f_euclidean_dist, km_euc)
print("KM + euclidian distance: ",silhouette_score( f_euclidean_dist, km_euc))
km_rmse = KMeans(n_clusters=k).fit_predict(f_rmse_dist)
print("KM + rmse distance: ",silhouette_score( f_rmse_dist, km_rmse))
#km_corr = KMeans(n_clusters=k).fit_predict(f_corr_dist)
#print("KM + corr distance: ",silhouette_score( f_corr_dist, km_corr))
#print("KM + corr distance: ",silhouette_score( f_corr_dist, 0.0))
km_scorr = KMeans(n_clusters=k).fit_predict(f_scorr_dist)
print("KM + scorr distance: ",silhouette_score( f_scorr_dist, km_scorr))
km_dtw = KMeans(n_clusters=k).fit_predict(f_dtw_dist)
print("KM + dtw distance: ",silhouette_score( f_dtw_dist, km_dtw))
# HAC experiments
HAC_euc = AgglomerativeClustering(n_clusters=k).fit_predict(f_euclidean_dist)
silhouette_avg=silhouette_score( f_euclidean_dist, HAC_euc)
print("HAC + euclidian distance: ",silhouette_score( f_euclidean_dist, HAC_euc))
HAC_rmse = AgglomerativeClustering(n_clusters=k).fit_predict(f_rmse_dist)
print("HAC + rmse distance: ",silhouette_score( f_rmse_dist, HAC_rmse))
#HAC_corr = AgglomerativeClustering(n_clusters=k).fit_predict(f_corr_dist)
#print("HAC + corr distance: ",silhouette_score( f_corr_dist,HAC_corr))
print("HAC + corr distance: ",0.0)
HAC_scorr = AgglomerativeClustering(n_clusters=k).fit_predict(f_scorr_dist)
print("HAC + scorr distance: ",silhouette_score( f_scorr_dist, HAC_scorr))
HAC_dtw = AgglomerativeClustering(n_clusters=k).fit_predict(f_dtw_dist)
print("HAC + dtw distance: ",silhouette_score( f_dtw_dist, HAC_dtw))
# DBSCAN experiments
DB_euc = DBSCAN(eps=3, min_samples=2).fit_predict(f_euclidean_dist)
silhouette_avg=silhouette_score( f_euclidean_dist, DB_euc)
print("DBSCAN + euclidian distance: ",silhouette_score( f_euclidean_dist, DB_euc))
DB_rmse = DBSCAN(eps=12, min_samples=10).fit_predict(f_rmse_dist)
#print("DBSCAN + rmse distance: ",silhouette_score( f_rmse_dist, DB_rmse))
#print("DBSCAN + rmse distance: ",0.00000000)
#DB_corr = DBSCAN(eps=3, min_samples=2).fit_predict(f_corr_dist)
#print("DBSCAN + corr distance: ",silhouette_score( f_corr_dist, DB_corr))
print("DBSCAN + corr distance: ",0.0)
DB_scorr = DBSCAN(eps=3, min_samples=2).fit_predict(f_scorr_dist)
print("DBSCAN + scorr distance: ",silhouette_score( f_scorr_dist, DB_scorr))
DB_dtw = DBSCAN(eps=3, min_samples=2).fit_predict(f_dtw_dist)
print("DBSCAN + dtw distance: ",silhouette_score( f_dtw_dist, DB_dtw))
```
```
import sys
import gc
#sys.path
sys.path.insert(0, '../')
sys.path
import pandas as pd
from Data_cleaning import get_clean_data
from Data_cleaning import get_merged_data_frame
```
### Load Data
We now load the data using Alfred's framework.
```
df_merged = get_merged_data_frame(user_argv=10, isbn_argv=10, path='../data/')
```
Actually, we only need the data from df_ratings, but we merge everything and then delete most of the columns again.
We end up with fewer rows, because some ratings have no corresponding books in df_books.
```
# df_merged.head()
df_books = df_merged[['isbn', 'title', 'author']]
df_books.set_index('isbn', inplace=True)
df_books.head()
#df_books.index
print(df_merged.shape)
df_merged = df_merged.drop(['location', 'age', 'country', 'province',
'title', 'author', 'pub_year', 'publisher',
'url_s', 'url_m', 'url_l'], axis=1)
print(df_merged.shape)
gc.collect()
import sys
# These are the usual ipython objects, including this one you are creating
ipython_vars = ['In', 'Out', 'exit', 'quit', 'get_ipython', 'ipython_vars']
# Get a sorted list of the objects and their sizes
sorted([(x, sys.getsizeof(globals().get(x))) for x in dir() if not x.startswith('_') and x not in sys.modules and x not in ipython_vars], key=lambda x: x[1], reverse=True)
data = df_merged
data.head()
data[(data.user == '2313') & (data.rating == 5)]
```
## Part 2
```
from surprise import Dataset
from surprise import Reader
from surprise.model_selection import train_test_split
from surprise import accuracy
data_all = Dataset.load_from_df(df = data, reader=Reader(line_format='user item rating'))
data_explicit = Dataset.load_from_df(df = data.loc[data['rating'] != 0],
reader=Reader(line_format='user item rating'))
trainset, testset = train_test_split(data_explicit, test_size=.25)
from surprise import SVD
from surprise.model_selection import cross_validate
# We'll use the famous SVD algorithm.
algo = SVD(n_factors=100, n_epochs=20,)
# Run 5-fold cross-validation and print results
cross_validate(algo, data_all, measures=['RMSE', 'MAE'], cv=5, verbose=True)
cross_validate(algo, data_explicit, measures=['RMSE', 'MAE'], cv=5, verbose=True)
# Train the algorithm on the trainset, and predict ratings for the testset
algo.fit(trainset)
predictions = algo.test(testset)
# Then compute RMSE
accuracy.rmse(predictions)
#%% CREATE RATINGS MATRIX
userItemRatingMatrix=pd.pivot_table(data, values='rating',
index=['user'], columns=['isbn'])
df = userItemRatingMatrix.loc[['100004', '100009']]
#df[df.columns[df.apply(lambda s: len(s.unique()) == 1)]]
df[df.columns[df.apply(lambda s: sum(s.isnull()) == 1)]]
print(algo.predict('100004', '0345339703'))
print(algo.predict('100004', '059035342X'))
print(algo.predict('100009', '0345339703'))
print(algo.predict('100009', '059035342X'))
from surprise import KNNBasic
algo = KNNBasic()
cross_validate(algo, data_explicit, measures=['RMSE', 'MAE'], cv=5, verbose=True)
###############
```
To process the data, we now threshold it, i.e. only account for users who have at least 10 ratings (explicit or implicit).
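A minimal sketch of that thresholding step, assuming a frame shaped like `data` above (the toy values here are invented):

```python
import pandas as pd

# Invented toy stand-in for the `data` frame used above (user / isbn / rating)
data = pd.DataFrame({
    "user": ["a"] * 12 + ["b"] * 3,
    "isbn": [str(i) for i in range(12)] + ["0", "1", "2"],
    "rating": [5] * 15,
})

# Keep only rows belonging to users with at least 10 ratings
rating_counts = data.groupby("user")["user"].transform("count")
data_thresholded = data[rating_counts >= 10]

print(data_thresholded["user"].unique())  # only user "a" survives
```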
```
data.shape
#%% CREATE RATINGS MATRIX
userItemRatingMatrix=pd.pivot_table(data, values='rating',
index=['user'], columns=['isbn'])
#type(userItemRatingMatrix)
#userItemRatingMatrix.head()
#userItemRatingMatrix.columns
#2313
#userItemRatingMatrix.iloc[2313]
sum(userItemRatingMatrix.iloc[2313] == 5)
```
**what the hell is this?**
```
# #%% THRESHOLD CI
# """"""from scipy.stats import sem, t
# from scipy import mean
# confidence = 0.95
# data = ratings_per_isbn['count']
# n = len(data)
# m = mean(data)
# std_err = sem(data)
# h = std_err * t.ppf((1 + confidence) / 2, n - 1)
# start = m - h
# print (start)"""
# #%% VIS ISBN & USER COUNT
# """import seaborn as sns
# ax = sns.distplot(ratings_per_isbn['count'])
# ax2 = ax.twinx()
# sns.boxplot(x=ratings_per_isbn['count'], ax=ax2)
# ax2.set(ylim=(-0.5, 10))"""
import numpy as np
from scipy.spatial.distance import hamming
def distance(user1, user2):
    try:
        user1Ratings = userItemRatingMatrix.transpose()[str(user1)]
        user2Ratings = userItemRatingMatrix.transpose()[str(user2)]
        distance = hamming(user1Ratings, user2Ratings)
    except KeyError:
        # Unknown user id: no distance can be computed
        distance = np.NaN
    return distance
def nearestNeighbors(user,K=10):
allUsers = pd.DataFrame(userItemRatingMatrix.index)
allUsers = allUsers[allUsers.user!=user]
allUsers["distance"] = allUsers["user"].apply(lambda x: distance(user,x))
KnearestUsers = allUsers.sort_values(["distance"],ascending=True)["user"][:K]
return KnearestUsers
userItemRatingMatrix.shape
allUsers = pd.DataFrame(userItemRatingMatrix.index)
# allUsers = allUsers[allUsers.user!=user]
# allUsers["distance"] = allUsers["user"].apply(lambda x: distance(user,x))
# KnearestUsers = allUsers.sort_values(["distance"],ascending=True)["user"][:K]
# nearestNeighbors(100004)
allUsers.shape
print(distance('100004', '100009'))
user1, user2 = '100004', '100009'
user1Ratings = userItemRatingMatrix.transpose()[str(user1)]
user2Ratings = userItemRatingMatrix.transpose()[str(user2)]
hamming([1, 0, 0], [0, 1, 0])
#hamming([NaN, 0, 0], [NaN, 1, 0])
df = userItemRatingMatrix.loc[['100004', '100009']]
df[df.columns[df.apply(lambda s: len(s.unique()) == 1)]]
df[df.columns[df.apply(lambda s: sum(s.isnull()) == 0)]]
sum(df['0060502258'].isnull())
#distance()
type(userItemRatingMatrix)
userItemRatingMatrix.index
userItemRatingMatrix.head()
```
???
```
# #%% DEBUGGING
# """NNRatings = userItemRatingMatrix[userItemRatingMatrix.index.isin(KnearestUsers)]
# NNRatings"""
# """avgRating = NNRatings.apply(np.nanmean).dropna()
# avgRating.head()"""
# """booksAlreadyRead = userItemRatingMatrix.transpose()[str(user)].dropna().index
# booksAlreadyRead"""
# """"avgRating = avgRating[~avgRating.index.isin(booksAlreadyRead)]"""
def bookMeta(isbn):
#df_books.set_index('isbn', inplace=True)
# title = books.at[isbn,"title"]
# author = books.at[isbn,"author"]
title = df_books.at[isbn,"title"]
author = df_books.at[isbn,"author"]
return title, author
def faveBooks(user,N):
userRatings = data[data["user"]==user]
sortedRatings = pd.DataFrame.sort_values(userRatings,['rating'],ascending=[0])[:N]
sortedRatings["title"] = sortedRatings["isbn"].apply(bookMeta)
return sortedRatings
def topN(user,N=3):
KnearestUsers = nearestNeighbors(user)
NNRatings = userItemRatingMatrix[userItemRatingMatrix.index.isin(KnearestUsers)]
avgRating = NNRatings.apply(np.nanmean).dropna()
booksAlreadyRead = userItemRatingMatrix.transpose()[user].dropna().index
avgRating = avgRating[~avgRating.index.isin(booksAlreadyRead)]
topNISBNs = avgRating.sort_values(ascending=False).index[:N]
return pd.Series(topNISBNs).apply(bookMeta)
#%% DEBUGGING
"""N=3
topNISBNs = avgRating.sort_values(ascending=False).index[:N]
pd.Series(topNISBNs).apply(bookMeta)"""
"""user = '204622'
topN(user)"""
topNISBNs = avgRating.sort_values(ascending=False).index[:3]
#%% DEBUGGING
"""N=3
topNISBNs = avgRating.sort_values(ascending=False).index[:N]
pd.Series(topNISBNs).apply(bookMeta)"""
"""user = '204622'
topN(user)"""
bookMeta('034545104X')
faveBooks('204622', 5)
topN('204622', 5)
```
<br />
<div style="text-align: center;">
<span style="font-weight: bold; color:#6dc; font-family: 'Arial Narrow'; font-size: 3.5em;">Global Snow Cover</span>
</div>
<span style="color:#333; font-family: 'Arial'; font-size: 1.1em;"> Data Taken from: ftp://neoftp.sci.gsfc.nasa.gov/geotiff/MOD10C1_M_SNOW/<br />
<br /></span>
```
import numpy as np
import pandas as pd
import os
import rasterio
import urllib2
import shutil
from contextlib import closing
from netCDF4 import Dataset
import datetime
import tinys3
np.set_printoptions(threshold='nan')
def dataDownload():
remote_path = 'ftp://neoftp.sci.gsfc.nasa.gov/geotiff/MOD10C1_M_SNOW/'
print remote_path
local_path = os.getcwd()
listing = []
response = urllib2.urlopen(remote_path)
for line in response:
listing.append(line.rstrip())
s2=pd.DataFrame(listing)
s3=s2[0].str.split()
s4=s3[len(s3)-1]
last_file = s4[8]
print 'The last file is: ',last_file
with closing(urllib2.urlopen(remote_path+last_file)) as r:
with open(last_file, 'wb') as f:
shutil.copyfileobj(r, f)
with rasterio.open(local_path+'/'+last_file) as src:
npixels = src.width * src.height
for i in src.indexes:
band = src.read(i)
print(i, band.min(), band.max(), band.sum()/npixels)
return last_file
def tiffile(dst,outFile):
CM_IN_FOOT = 30.48
with rasterio.open(dst) as src:
kwargs = src.meta
kwargs.update(
driver='GTiff',
dtype=rasterio.float64, #rasterio.int16, rasterio.int32, rasterio.uint8,rasterio.uint16, rasterio.uint32, rasterio.float32, rasterio.float64
count=1,
compress='lzw',
nodata=0,
bigtiff='NO'
)
windows = src.block_windows(1)
with rasterio.open(outFile,'w',**kwargs) as dst:
for idx, window in windows:
src_data = src.read(1, window=window)
# Source nodata value is a very small negative number
# Converting in to zero for the output raster
np.putmask(src_data, src_data < 0, 0)
dst_data = (src_data * CM_IN_FOOT).astype(rasterio.float64)
dst.write_band(1, dst_data, window=window)
os.remove('./'+dst)
def s3Upload(outFile):
# Push to Amazon S3 instance
conn = tinys3.Connection(os.getenv('S3_ACCESS_KEY'),os.getenv('S3_SECRET_KEY'),tls=True)
f = open(outFile,'rb')
conn.upload(outFile,f,os.getenv('BUCKET'))
# Execution
outFile = 'snow_cover.tiff'
print 'starting'
file = dataDownload()
print 'downloaded'
tiffile(file,outFile)
print 'converted'
#s3Upload(outFile)
print 'finish'
```
```
import json
import os
import _jsonnet
import os
from seq2struct.commands.infer import Inferer
from seq2struct.datasets.spider import SpiderItem
from seq2struct.utils import registry
import torch
exp_config = json.loads(
_jsonnet.evaluate_file(
"experiments/spider-configs/spider-mBART50MtoM-large-en-pt-es-fr-train_en-pt-es-fr-eval.jsonnet"))
model_config_path = exp_config["model_config"]
model_config_args = exp_config.get("model_config_args")
infer_config = json.loads(
_jsonnet.evaluate_file(
model_config_path,
tla_codes={'args': json.dumps(model_config_args)}))
infer_config["model"]["encoder_preproc"]["db_path"] = "data/sqlite_files/"
inferer = Inferer(infer_config)
model_dir = exp_config["logdir"] + "/bs=12,lr=1.0e-04,bert_lr=1.0e-05,end_lr=0e0,att=1"
#checkpoint_step = exp_config["eval_steps"][0]
checkpoint_step = 42100
model_dir
model = inferer.load_model(model_dir, checkpoint_step)
from seq2struct.datasets.spider_lib.preprocess.get_tables import dump_db_json_schema
from seq2struct.datasets.spider import load_tables_from_schema_dict
db_id = "singer"
my_schema = dump_db_json_schema("data/sqlite_files/{db_id}/{db_id}.sqlite".format(db_id=db_id), db_id)
from seq2struct.utils.api_utils import refine_schema_names
my_schema
# If you want to change your schema name, run this; otherwise you can skip it.
#refine_schema_names(my_schema)
schema, eval_foreign_key_maps = load_tables_from_schema_dict(my_schema)
schema.keys()
dataset = registry.construct('dataset_infer', {
"name": "spider", "schemas": schema, "eval_foreign_key_maps": eval_foreign_key_maps,
"db_path": "data/sqlite_files/"
})
for _, schema in dataset.schemas.items():
model.preproc.enc_preproc._preprocess_schema(schema)
spider_schema = dataset.schemas[db_id]
def infer(question):
data_item = SpiderItem(
text=None, # intentionally None -- should be ignored when the tokenizer is set correctly
code=None,
schema=spider_schema,
orig_schema=spider_schema.orig,
orig={"question": question}
)
model.preproc.clear_items()
enc_input = model.preproc.enc_preproc.preprocess_item(data_item, None)
preproc_data = enc_input, None
with torch.no_grad():
output = inferer._infer_one(model, data_item, preproc_data, beam_size=1, use_heuristic=True)
return output[0]["inferred_code"]
code = infer("How many singers are there?")
print(code)
code = infer("Quantos cantores nós temos?")
print(code)
code = infer("¿Cuántos cantantes tenemos?")
print(code)
code = infer("Combien de chanteurs avons-nous?")
print(code)
code = infer("Quantos cantores estão disponiveis?")
print(code)
```
# PRMT-2324 Run top level table for first 2 weeks of August 2021
## Context
In our July data we saw a significant increase in GP2GP failures. We want to understand whether these were blips, perhaps caused by something that happened during July, or whether these failures are continuing. We don’t want to wait until we have all the August data to identify this, as we are starting conversations with suppliers now.
```
import pandas as pd
import numpy as np
from datetime import datetime
transfer_file_location = "s3://prm-gp2gp-notebook-data-prod/PRMT-2324-2-weeks-august-data/transfers/v4/2021/8/transfers.parquet"
transfers_raw = pd.read_parquet(transfer_file_location)
transfers_raw.head()
# filter data to just include the first 2 weeks (15 days) of august
date_filter_bool = transfers_raw["date_requested"] < datetime(2021, 8, 16)
transfers_half_august = transfers_raw[date_filter_bool]
# Supplier data was only available from Feb/Mar 2021. Sending and requesting supplier values for all transfers before that are empty
# Dropping these columns to merge supplier data from ASID lookup files
transfers_half_august = transfers_half_august.drop(["sending_supplier", "requesting_supplier"], axis=1)
transfers = transfers_half_august.copy()
# Supplier name mapping
supplier_renaming = {
"EGTON MEDICAL INFORMATION SYSTEMS LTD (EMIS)":"EMIS",
"IN PRACTICE SYSTEMS LTD":"Vision",
"MICROTEST LTD":"Microtest",
"THE PHOENIX PARTNERSHIP":"TPP",
None: "Unknown"
}
# Generate ASID lookup that contains all the most recent entry for all ASIDs encountered
asid_file_location = "s3://prm-gp2gp-asid-lookup-preprod/2021/6/asidLookup.csv.gz"
asid_lookup = pd.read_csv(asid_file_location)
asid_lookup = asid_lookup.drop_duplicates().groupby("ASID").last().reset_index()
lookup = asid_lookup[["ASID", "MName"]]
transfers = transfers.merge(lookup, left_on='requesting_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'requesting_supplier', 'ASID': 'requesting_supplier_asid'}, axis=1)
transfers = transfers.merge(lookup, left_on='sending_practice_asid',right_on='ASID',how='left')
transfers = transfers.rename({'MName': 'sending_supplier', 'ASID': 'sending_supplier_asid'}, axis=1)
transfers["sending_supplier"] = transfers["sending_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
transfers["requesting_supplier"] = transfers["requesting_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
# Make the status more human readable here
transfers["status"] = transfers["status"].str.replace("_", " ").str.title()
import paths
import data
error_code_lookup_file = pd.read_csv(data.gp2gp_response_codes.path)
outcome_counts = transfers.fillna("N/A").groupby(by=["status", "failure_reason"]).agg({"conversation_id": "count"})
outcome_counts = outcome_counts.rename({"conversation_id": "Number of transfers", "failure_reason": "Failure Reason"}, axis=1)
outcome_counts["% of transfers"] = (outcome_counts["Number of transfers"] / outcome_counts["Number of transfers"].sum()).multiply(100)
outcome_counts.round(2)
transfers['month']=transfers['date_requested'].dt.to_period('M')
def convert_error_list_to_tuple(error_code_list, error_code_type):
return [(error_code_type, error_code) for error_code in set(error_code_list) if not np.isnan(error_code)]
def combine_error_codes(row):
sender_list = convert_error_list_to_tuple(row["sender_error_codes"], "Sender")
intermediate_list = convert_error_list_to_tuple(row["intermediate_error_codes"], "COPC")
final_list = convert_error_list_to_tuple(row["final_error_codes"], "Final")
full_error_code_list = sender_list + intermediate_list + final_list
if len(full_error_code_list) == 0:
return [("No Error Code", "No Error")]
else:
return full_error_code_list
transfers["all_error_codes"] = transfers.apply(combine_error_codes, axis=1)
def generate_high_level_table(transfers_sample):
# Break up lines by error code
transfers_split_by_error_code=transfers_sample.explode("all_error_codes")
# Create High level table
high_level_table=transfers_split_by_error_code.fillna("N/A").groupby(["requesting_supplier","sending_supplier","status","failure_reason","all_error_codes"]).agg({'conversation_id':'count'})
high_level_table=high_level_table.rename({'conversation_id':'Number of Transfers'},axis=1).reset_index()
# Count % of transfers
total_number_transfers = transfers_sample.shape[0]
high_level_table['% of Transfers']=(high_level_table['Number of Transfers']/total_number_transfers).multiply(100)
# Count by supplier pathway
supplier_pathway_counts = transfers_sample.fillna("Unknown").groupby(by=["sending_supplier", "requesting_supplier"]).agg({"conversation_id": "count"})['conversation_id']
high_level_table['% Supplier Pathway Transfers']=high_level_table.apply(lambda row: row['Number of Transfers']/supplier_pathway_counts.loc[(row['sending_supplier'],row['requesting_supplier'])],axis=1).multiply(100)
# Add in Paper Fallback columns
total_fallback = transfers_sample["failure_reason"].dropna().shape[0]
fallback_bool=high_level_table['status']!='Integrated On Time'
high_level_table.loc[fallback_bool,'% Paper Fallback']=(high_level_table['Number of Transfers']/total_fallback).multiply(100)
# % of error codes column
total_number_of_error_codes=transfers_split_by_error_code['all_error_codes'].value_counts().drop(('No Error Code','No Error')).sum()
error_code_bool=high_level_table['all_error_codes']!=('No Error Code', 'No Error')
high_level_table.loc[error_code_bool,'% of error codes']=(high_level_table['Number of Transfers']/total_number_of_error_codes).multiply(100)
# Adding columns to describe errors
high_level_table['error_type']=high_level_table['all_error_codes'].apply(lambda error_tuple: error_tuple[0])
high_level_table['error_code']=high_level_table['all_error_codes'].apply(lambda error_tuple: error_tuple[1])
high_level_table=high_level_table.merge(error_code_lookup_file[['ErrorCode','ResponseText']],left_on='error_code',right_on='ErrorCode',how='left')
# Select and re-order table
grouping_columns_order=['requesting_supplier','sending_supplier','status','failure_reason','error_type','ResponseText','error_code']
counting_columns_order=['Number of Transfers','% of Transfers','% Supplier Pathway Transfers','% Paper Fallback','% of error codes']
high_level_table=high_level_table[grouping_columns_order+counting_columns_order].sort_values(by='Number of Transfers',ascending=False)
return high_level_table
with pd.ExcelWriter("High Level Table First 2 weeks of August PRMT-2324.xlsx") as writer:
generate_high_level_table(transfers.copy()).to_excel(writer, sheet_name="All",index=False)
[generate_high_level_table(transfers[transfers['month']==month].copy()).to_excel(writer, sheet_name=str(month),index=False) for month in transfers['month'].unique()]
```
```
import jieba
```
## Word Segmentation
```
# Basic operations of jieba Chinese word segmentation
# Full mode: every possible word the segmentation graph can form. Drawback: cannot resolve ambiguity, e.g. 北京大学 vs. 北京/大学
seg_list = jieba.cut('我来到北京的北京大学', cut_all=True)
print("Full Mode:"+','.join(seg_list))
# Accurate mode, suitable for text analysis
seg_list = jieba.cut('我来到北京的北京大学', cut_all=False)
print("Default Mode:"+'/'.join(seg_list))
# Search engine mode: long words are split again to improve recall
# Suitable for building inverted indexes in search engines; the granularity is finer
seg_list = jieba.cut_for_search('我来到北京的北京大学', HMM=False)
print("Search engine Mode:"+'/'.join(seg_list))
strList = list(seg_list)
strList
```
## Adding a Custom Dictionary / Adjusting the Dictionary
```
print("Original: \t" + '/'.join(jieba.cut('如果放到数据库中将出错', HMM=False)))
# '中将' ('general') does not fit the sentence's meaning here
# Split '中将' into '中' and '将'
# Signature: jieba.suggest_freq(segment, tune=False)
# Docstring:
# Suggest word frequency to force the characters in a word to be
# joined or splitted.
print(jieba.suggest_freq(('中', '将'),True))
print("Improved: \t" + '/'.join(jieba.cut('如果放到数据库中将出错', HMM=False)))
print('\nOriginal:\t' + '/'.join(jieba.cut("[台中]正确形式应该不会被分开")))
jieba.suggest_freq('台中', True)
print('\nImproved:\t' + '/'.join(jieba.cut("[台中]正确形式应该不会被分开")))
```
## Custom Segmentation Dictionary
```
import sys
```
Information on the sys module: https://docs.python.org/3/library/sys.html
Default dictionary: https://github.com/fxsjy/jieba/blob/master/jieba/dict.txt?raw=true
```
sys.path.append("../")
jieba.load_userdict("./dict.txt")
seg_list = jieba.cut("今天很高兴在慕课网和大家交流学习")
print('load user defined dictionary:\n'
+ "/".join(seg_list))
```
## Keyword Extraction Based on the TF-IDF Algorithm
jieba.analyse.extract_tags(sentence, topK=20, withWeight=False, allowPOS=())
sentence: the text to extract keywords from
topK: how many keywords with the largest TF-IDF weights to return; defaults to 20
withWeight: whether to also return each keyword's weight; defaults to False
allowPOS: only include words with the specified parts of speech; defaults to empty, i.e. no filtering
jieba.analyse.TFIDF(idf_path=None) creates a new TFIDF instance; idf_path is the IDF frequency file
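As a reminder of what `extract_tags` is ranking by, here is a minimal stdlib-only sketch of the TF-IDF score itself (independent of jieba; the toy tokenized documents are invented):

```python
import math

# Three invented toy tokenized documents
docs = [
    ["ai", "product", "manager"],
    ["ai", "industry"],
    ["product", "design"],
]

def tfidf(term, doc, docs):
    """Term frequency in `doc` times inverse document frequency over `docs`."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)  # number of documents containing the term
    idf = math.log(len(docs) / df)
    return tf * idf

score = tfidf("ai", docs[0], docs)  # (1/3) * ln(3/2)
```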
```
str = "近两年来AI产业已然成为新的焦点和风口,各互联网巨头都在布局人工智能,不少互联网产品经理也开始考虑转型AI产品经理,本文作者也同样在转型中。本篇文章是通过一段时间的学习归纳总结整理而成,力图通过这篇文章给各位考虑转型的产品经理们一个对AI的全局概括了解。本文分为上下两篇,此为上篇。"
import jieba.analyse
for x,w in jieba.analyse.extract_tags(str, 10, withWeight=True):
print("%s %s" %(x, w))
```
## 文本排名
```
for x, w in jieba.analyse.textrank(str, 10, withWeight=True):
print("%s %s" %(x, w))
```
## 词性标注
```
import jieba.posseg
words = jieba.posseg.cut('我爱宁波诺丁汉大学')
for word, flag in words:
print("%s, %s" %(word, flag))
```
## 返回词语在原文的起止位置
### 默认模式
```
result = jieba.tokenize('在宁波的宁波聚像网络有限公司')
for tk in result:
print("word %s\t\t start:%d\t\t end:%d\t\t" %(tk[0], tk[1], tk[2]))
```
### 搜索模式
```
result = jieba.tokenize('在宁波的宁波聚像网络有限公司', mode='search')
for tk in result:
print("word %s\t\t start:%d\t\t end:%d\t\t" %(tk[0], tk[1], tk[2]))
```
# Model Evaluation and Refinement
---------------------------------
This notebook will discuss some techniques on how to evaluate models and a way to refine the Linear Regression Models.
After creating a model, it is vital to evaluate it for correctness and refine if necessary. There are various ways to do so.
We would be discussing some common ways to do so in here.
We would be trying to evaluate models that aim to predict the price of the car.
This is based on IBM's course on Data Analysis with Python.
We would be creating some simple models to demonstrate the techniques but they can be used on complex models.
First, let's do the necessary setup.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set(color_codes=True)
%matplotlib inline
```
Now, we will get the data.
```
path = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/module_5_auto.csv'
df = pd.read_csv(path)
```
For our models, we would be using only the numeric data.
```
df=df._get_numeric_data()
df.head()
```
# Training and Testing
By training and testing, we refer to splitting your data into two components: one for training and another for testing.
This is a very important step, since it lets us test our model preliminarily on 'unknown' values. The split generally depends on the problem, but the test data tends to be between 10% and 30% of the total data.
First, let's create our X and y.
```
y = df['price']
```
For X, let's take everything except price.
```
X = df.drop('price', axis=1)
```
The next step is to split them into training and testing.
It's highly recommended to do so in a random manner.
To make our jobs easier, we will use `train_test_split` from the `model_selection` module in scikit-learn. To use the function, we pass in X and y, the test size, and a `random_state` value. The `random_state` value seeds the random split, which allows us to reproduce the results.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=1)
print("number of test samples :", X_test.shape[0])
print("number of training samples:",X_train.shape[0])
```
Let's create a simple linear regression model with horsepower as the predictor.
```
from sklearn.linear_model import LinearRegression
lre=LinearRegression()
lre.fit(X_train[['horsepower']], y_train)
```
# Evaluation with metrics
A good way to test the model is to use metrics. There are different metrics suitable for different situations. Let's classify them based on the problem type.
## Regression
Common metrics are R-squared and Root Mean Squared Error (RMSE).
* R-squared tells how close the data is to the fitted regression line. It typically ranges from 0 to 1, with 1 being the best; however, it can be negative if the model performs worse than simply predicting the mean.
* RMSE (and other metrics such as MAE or MSE) give an account of the error. RMSE takes the root of the sum of error squares.
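Both metrics can be computed directly with `sklearn.metrics`; a quick sketch on invented toy values:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Invented toy true and predicted values
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

r2 = r2_score(y_true, y_pred)                       # 1 - SSE/SST = 1 - 0.10/5.0 = 0.98
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # sqrt(0.025) ~ 0.158
```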
## Classification
Common metrics are Jaccard Index, Precision, Recall, and F1-Score.
* Jaccard Index tells us how 'accurate' the model is: essentially, the proportion of correctly predicted values. It is defined as the ratio of the intersection (the correctly predicted values) to the union (all values).
* Precision talks about how precise your model is; that is, out of the predicted positives, how many are actual positives.
* Recall talks about how many of the actual positives our model captures by labeling them as positive (true positives).
* F1 Score is the harmonic mean of Precision and Recall. It's used when we need a balance between Precision and Recall, particularly when there is also an uneven class distribution (a large number of actual negatives). For example, if a positive means a terrorist and there are few positives, the F1 Score is appropriate because the cost of missing a terrorist (a false negative) far outweighs the cost of flagging a civilian (a false positive).
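All four can be computed with `sklearn.metrics`. A quick sketch on invented toy labels, where exactly one actual positive is missed:

```python
from sklearn.metrics import f1_score, jaccard_score, precision_score, recall_score

# Invented toy labels: TP=3, FP=0, FN=1
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

precision = precision_score(y_true, y_pred)  # TP / (TP + FP) = 1.0
recall = recall_score(y_true, y_pred)        # TP / (TP + FN) = 0.75
f1 = f1_score(y_true, y_pred)                # harmonic mean = 6/7
jaccard = jaccard_score(y_true, y_pred)      # TP / (TP + FP + FN) = 0.75
```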
## Calculation
There are multiple ways to calculate them.
You can use the metrics in `sklearn.metrics`: choose the desired metric function and pass in the true _y_ and the predicted _y_; the function then returns the metric.
We can also use the inherent `score` method built-in some of the objects by scikit-learn.
In `LinearRegression`, the `score` method calculates R^2 on the data you pass in. On the test data:
```
lre.score(X_test[['horsepower']], y_test)
```
For the training data:
```
lre.score(X_train[['horsepower']], y_train)
```
This is not ideal at all. Furthermore, you might have realized that the score depends heavily on the split: for some splits, the metrics can be very different. For example, if you change the random_state in the split to 0, the R-squared changes to around 0.74!
Furthermore, the dataset we have is not that large. We would need a better way to conduct tests.
This is where k-fold cross-validation comes in.
# K-Fold Cross-validation
From [Machine Learning Mastery](https://machinelearningmastery.com/k-fold-cross-validation/):
Cross-validation is a statistical method used to estimate the skill of machine learning models.
The general procedure is as follows:
* Shuffle the dataset randomly.
* Split the dataset into k groups
* For each unique group:
- Take the group as a hold out or test data set
- Take the remaining groups as a training data set
- Fit a model on the training set and evaluate it on the test set
- Retain the evaluation score and discard the model
* Summarize the skill of the model using the sample of model evaluation scores
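The procedure above can be sketched by hand with `KFold` (invented noiseless toy data, so every fold scores a perfect R-squared):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Invented toy data with an exact linear relationship
X_toy = np.arange(20, dtype=float).reshape(-1, 1)
y_toy = 3.0 * X_toy.ravel() + 1.0

kf = KFold(n_splits=4, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf.split(X_toy):
    # Fit on the k-1 training folds, evaluate on the held-out fold, keep the score
    model = LinearRegression().fit(X_toy[train_idx], y_toy[train_idx])
    scores.append(model.score(X_toy[test_idx], y_toy[test_idx]))  # R^2 per fold
```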
As you can see, it is a very valuable and useful technique. Fortunately, sklearn has modules that make our job easier.
To do cross-validation, we need to import the `cross_val_score` from `sklearn.model_selection`
```
from sklearn.model_selection import cross_val_score
from sklearn.metrics import r2_score
```
To use it, we would need to pass the _model_, _X_, _y_, and the number of folds as _cv_.
```
Rcross = cross_val_score(estimator=lre, X=X[['horsepower']], y=y, cv=4)
```
If we pass in nothing for the scoring argument, the scorer uses the default scoring function of the estimator. The function returns an ndarray in which each element contains the R-squared value for that fold. We can see all the values with:
```
Rcross
```
## Getting the descriptive statistics
After getting the array, it's useful to calculate descriptive statistics such as the five-number summary.
For the average and standard deviation, we can just call the built-in methods:
```
print("The mean of the folds are", Rcross.mean(), "and the standard deviation is" , Rcross.std())
```
Here's a little hack to get the five-number summary: convert the array to a pandas Series and then call `describe` on it.
```
pd.Series(Rcross).describe()
```
## Getting different metrics
If you want a different metric, simply pass in a string for it. To check what's available, we can import `SCORERS` from `sklearn.metrics`.
Then we can inspect `SCORERS.keys()`. Having the list sorted is also helpful.
```
from sklearn.metrics import SCORERS
sorted(SCORERS.keys())
```
Here's how to get the RMSE. Since the scorer returns negated values, we multiply by -1 to get the array of RMSEs.
```
-1 * cross_val_score(lre, X[['horsepower']], y, cv=4,
scoring='neg_root_mean_squared_error')
```
## Stratified K-Fold Cross Validation
Stratification is the process of rearranging the data so as to ensure each fold is a good representative of the whole. For example, in a binary classification problem where each class comprises 50% of the data, it is best to arrange the data such that in every fold, each class comprises around half the instances.
Stratified K-Fold is a cross-validation object: a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class.
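A quick sketch with invented imbalanced labels shows the class ratio being preserved in every test fold:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Invented imbalanced labels: two-thirds class 0, one-third class 1
y_cls = np.array([0] * 8 + [1] * 4)
X_cls = np.arange(12).reshape(-1, 1)

skf = StratifiedKFold(n_splits=2)
fold_counts = []
for _, test_idx in skf.split(X_cls, y_cls):
    fold = y_cls[test_idx]
    fold_counts.append((int((fold == 0).sum()), int((fold == 1).sum())))
# Every test fold keeps the 2:1 ratio: 4 zeros and 2 ones
```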
## Script to use CV and get metric stats and graphs for different models
Here's a good quick find script to evaluate different models:
```python
# explore adaboost ensemble number of trees effect on performance
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.ensemble import AdaBoostClassifier
from matplotlib import pyplot
# get the dataset
def get_dataset():
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=6)
return X, y
# get a list of models to evaluate
def get_models():
models = dict()
# define number of trees to consider
n_trees = [10, 50, 100, 500, 1000, 5000]
for n in n_trees:
models[str(n)] = AdaBoostClassifier(n_estimators=n)
return models
# evaluate a given model using cross-validation
def evaluate_model(model, X, y):
# define the evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate the model and collect the results
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
return scores
# define dataset
X, y = get_dataset()
# get the models to evaluate
models = get_models()
# evaluate the models and store results
results, names = list(), list()
for name, model in models.items():
# evaluate the model
scores = evaluate_model(model, X, y)
# store the results
results.append(scores)
names.append(name)
# summarize the performance along the way
print('>%s %.3f (%.3f)' % (name, mean(scores), std(scores)))
# plot model performance for comparison
pyplot.boxplot(results, labels=names, showmeans=True)
pyplot.show()
```
Although this is specific to AdaBoostClassifier and a generated dataset, it can easily be modified for different models and datasets by changing get_models and get_dataset.
# Overfitting, Underfitting and Model Selection
The test data, sometimes referred to as out-of-sample data, is a much better measure of how well your model performs in the real world. One reason for this is overfitting: the model becomes overfitted, or overly specific, to the training data. These differences are more apparent in multiple linear regression and polynomial regression, so we will explore overfitting in that context.
Let's create a multiple linear regression object and train the model using 'horsepower', 'curb-weight', 'engine-size' and 'highway-mpg' as features.
```
lr = LinearRegression()
lr.fit(X_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']],
y_train)
yhat_train = lr.predict(X_train[['horsepower', 'curb-weight', 'engine-size',
'highway-mpg']])
yhat_train[0:5]
yhat_test = lr.predict(X_test[['horsepower', 'curb-weight', 'engine-size',
'highway-mpg']])
yhat_test[0:5]
```
Let's perform some model evaluation using our training and testing data separately.
Let's examine the distribution of the predicted values of the training data.
```
import seaborn as sns

plt.figure(figsize=(12, 10))
ax = sns.kdeplot(x=y_train)
sns.kdeplot(x=yhat_train, ax=ax)
ax.legend(['y_train', 'y_hat_train'], fontsize=14);
```
So far the model seems to be doing well in learning from the training dataset. But what happens when the model encounters new data from the testing dataset? When the model generates new values from the test data, we see the distribution of the predicted values is much different from the actual target values.
```
plt.figure(figsize=(12, 10))
ax = sns.kdeplot(x=y_test)
sns.kdeplot(x=yhat_test, ax=ax)
ax.legend(['y_test', 'y_hat_test'], fontsize=14);
```
Comparing the two figures, it is evident that the model fits the distribution of the training data much better than that of the test data.
Let's see if polynomial regression also exhibits a drop in the prediction accuracy when analysing the test dataset.
```
from sklearn.preprocessing import PolynomialFeatures
```
Overfitting occurs when the model fits the noise rather than the underlying process. Therefore, when testing your model on the test set, it does not perform as well, since it is modelling noise rather than the underlying process that generated the relationship. Let's create a degree-5 polynomial model.
Let's use 55 percent of the data for training and the rest for testing:
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.45, random_state=0)
```
We will perform a degree-5 polynomial transformation on the feature 'horsepower':
```
pr = PolynomialFeatures(degree=5)
X_train_pr = pr.fit_transform(X_train[['horsepower']])
X_test_pr = pr.transform(X_test[['horsepower']])  # transform only: reuse the fit from the training data
pr
```
Now let's create a linear regression model "poly" and train it.
```
poly = LinearRegression()
poly.fit(X_train_pr, y_train)
```
We can see the output of our model using the "predict" method, then assign the values to "yhat".
```
yhat = poly.predict(X_test_pr)
yhat[0:5]
```
Let's take the first few predicted values and compare them to the actual targets.
```
print("Predicted values:", yhat[0:4])
print("True values:", y_test[0:4].values)
```
To get a better idea, let's create a function to help us plot the data. It will plot the training and testing values (the real values) against horsepower, together with the model's predictions over a continuous range from the lowest to the highest horsepower.
Here's the function:
```
def PollyPlot(xtrain, xtest, y_train, y_test, lr, poly_transform):
    """Plot the training data, the test data and the model's predicted function.

    lr: linear regression object
    poly_transform: polynomial transformation object
    """
    width = 12
    height = 10
    plt.figure(figsize=(width, height))
    xmax = max([xtrain.values.max(), xtest.values.max()])
    xmin = min([xtrain.values.min(), xtest.values.min()])
    x = np.arange(xmin, xmax, 0.1)
    plt.plot(xtrain, y_train, 'ro', label='Training Data')
    plt.plot(xtest, y_test, 'go', label='Test Data')
    plt.plot(x, lr.predict(poly_transform.fit_transform(x.reshape(-1, 1))),
             label='Predicted Function')
    plt.ylim([-10000, 60000])
    plt.xlabel('Horsepower')
    plt.ylabel('Price')
    plt.legend()
```
Now, let's use the function.
```
PollyPlot(X_train[['horsepower']], X_test[['horsepower']], y_train, y_test, poly, pr)
```
Figure 4: A polynomial regression model; red dots represent training data, green dots represent test data, and the blue line represents the model prediction.
We see that the estimated function appears to track the data but around 200 horsepower, the function begins to diverge from the data points.
R^2 of the training data:
```
poly.score(X_train_pr, y_train)
```
R^2 of the test data:
```
poly.score(X_test_pr, y_test)
```
We see the R^2 for the training data is 0.5567, while the R^2 on the test data is -29.87. The lower the R^2, the worse the model; a negative R^2 is a sign of overfitting.
Let's see how the R^2 changes on the test data for different order polynomials and plot the results:
```
Rsqu_test = []
order = [1, 2, 3, 4, 5]
for n in order:
pr = PolynomialFeatures(degree=n)
x_train_pr = pr.fit_transform(X_train[['horsepower']])
x_test_pr = pr.fit_transform(X_test[['horsepower']])
lr.fit(x_train_pr, y_train)
Rsqu_test.append(lr.score(x_test_pr, y_test))
print("The R-square values: ", Rsqu_test);
sns.lineplot(x=order, y=Rsqu_test, markers=True, marker='o')
plt.xlabel('order')
plt.ylabel('R^2')
plt.title('R^2 Using Test Data');
```
We see the R^2 gradually increases until an order-three polynomial is used; then it dramatically decreases at order four.
So a model of degree 1 would be too loosely fitted (underfitted), and a model of degree 5 would be overfitted. Selecting the right degree requires some experimentation and evaluation.
# Hyperparameter Tuning
Algorithms often have hyperparameters, for example *alpha* in ridge regression or *kernel* in SVMs. Sometimes the choice of hyperparameters can be made easily from domain knowledge; other times it is not that simple, and we may need to fit the model multiple times to fine-tune them.
There are two main methods for doing this:
## Grid Search
A grid search exhaustively generates candidates from a grid of specified parameter values. For example:
```python
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
```
specifies that two grids should be explored: one with a linear kernel and C values in \[1, 10, 100, 1000\], and the second one with an RBF kernel, and the cross-product of C values ranging in \[1, 10, 100, 1000\] and gamma values in \[0.001, 0.0001\].
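The number of candidates such a specification expands to can be checked with scikit-learn's `ParameterGrid` (a small sketch):

```python
from sklearn.model_selection import ParameterGrid

param_grid = [
    {'C': [1, 10, 100, 1000], 'kernel': ['linear']},
    {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
# first grid: 4 C values; second grid: 4 C values x 2 gamma values
candidates = list(ParameterGrid(param_grid))
print(len(candidates))  # 4 + 4*2 = 12
```

GridSearchCV fits one model per candidate per CV fold, so the total cost grows multiplicatively with each added parameter.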
Here's an example:
```python
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

ridge = Ridge()
hyper_params = {'alpha': [0, 0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000]}
grid = GridSearchCV(estimator=ridge, param_grid=hyper_params, scoring='r2', cv=4, n_jobs=-1)
grid.fit(X, y)
best_score = grid.best_score_
best_estimator = grid.best_estimator_
best_params = grid.best_params_
```
## Randomized Parameter Optimization
While using a grid of parameter settings is currently the most widely used method for parameter optimization, other search methods have more favourable properties. RandomizedSearchCV implements a randomized search over parameters, where each setting is sampled from a distribution over possible parameter values. This has two main benefits over an exhaustive search:
- A budget can be chosen independent of the number of parameters and possible values.
- Adding parameters that do not influence the performance does not decrease efficiency.
Specifying how parameters should be sampled is done using a dictionary, very similar to specifying parameters for GridSearchCV. Additionally, a computation budget, being the number of sampled candidates or sampling iterations, is specified using the n_iter parameter. For each parameter, either a distribution over possible values or a list of discrete choices (which will be sampled uniformly) can be specified:
```python
{'C': scipy.stats.expon(scale=100), 'gamma': scipy.stats.expon(scale=.1),
'kernel': ['rbf'], 'class_weight':['balanced', None]}
```
This example uses the scipy.stats module, which contains many useful distributions for sampling parameters, such as expon, gamma, uniform or randint.
In principle, any function can be passed that provides a rvs (random variate sample) method to sample a value. A call to the rvs function should provide independent random samples from possible parameter values on consecutive calls.
The usage (both calling and interpreting the results) is similar to GridSearchCV. Note that you should use continuous distributions for continuous parameters.
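A minimal end-to-end sketch of `RandomizedSearchCV` with a continuous distribution for `C` (the dataset, estimator and budget below are illustrative):

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, random_state=0)
# sample C log-uniformly over six orders of magnitude
param_dist = {'C': loguniform(1e-3, 1e3)}
search = RandomizedSearchCV(LogisticRegression(max_iter=1000), param_dist,
                            n_iter=20, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_)
```

Here `n_iter=20` fixes the budget at 20 sampled candidates regardless of how many parameters or values the distribution covers.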
# Author
By Abhinav Garg
# Pilatus on a goniometer at ID28
Nguyen Thanh Tra, who was a post-doc at ESRF-ID28, enquired about a potential bug in pyFAI in October 2016: he calibrated 3 images taken with a Pilatus-1M detector at various detector angles: 0, 17 and 45 degrees.
While everything looked correct to a first approximation, one peak did not overlap properly with itself depending on the detector angle. This peak corresponds to the angle of the detector, at 23.6° ...
This notebook will guide you through the calibration of the goniometer setup.
Let's first retrieve the images and initialize the environment:
```
%pylab nbagg
import os
#Nota: Set a proxy if you are behind a firewall
#os.environ["http_proxy"] = "http://proxy.company.fr:3128"
import fabio, pyFAI, os
print("Using pyFAI version:", pyFAI.version)
from os.path import basename
from pyFAI.gui import jupyter
from pyFAI.calibrant import get_calibrant
from silx.resources import ExternalResources
downloader = ExternalResources("thick", "http://www.silx.org/pub/pyFAI/testimages")
all_files = downloader.getdir("gonio_ID28.tar.bz2")
for afile in all_files:
print(basename(afile))
```
There are 3 images stored as CBF files and the associated control points as npt files.
```
images = [i for i in all_files if i.endswith("cbf")]
images.sort()
mask = None
fig, ax = subplots(1,3, figsize=(9,3))
for i, cbf in enumerate(images):
fimg = fabio.open(cbf)
jupyter.display(fimg.data, label=basename(cbf), ax=ax[i])
if mask is None:
mask = fimg.data<0
else:
mask |= fimg.data<0
numpy.save("mask.npy", mask)
```
To be able to calibrate the detector position, the calibrant used is LaB6 and the wavelength was 0.6968e-10 m:
```
wavelength=0.6968e-10
calibrant = get_calibrant("LaB6")
calibrant.wavelength = wavelength
print(calibrant)
detector = pyFAI.detector_factory("Pilatus1M")
# Define the function that extracts the angle from the filename:
def get_angle(basename):
"""Takes the basename (like det130_g45_0001.cbf ) and returns the angle of the detector"""
    return float(os.path.basename(basename).split("_")[-2][1:])
for afile in images:
print('filename', afile, "angle:",get_angle(afile))
#Define the transformation of the geometry as function of the goniometrer position.
# by default scale1 = pi/180 (convert deg to rad) and scale2 = 0.
from pyFAI.goniometer import GeometryTransformation, GoniometerRefinement, Goniometer
goniotrans2d = GeometryTransformation(param_names = ["dist", "poni1", "poni2",
"rot1", "rot2",
"scale1", "scale2"],
dist_expr="dist",
poni1_expr="poni1",
poni2_expr="poni2",
rot1_expr="scale1 * pos + rot1",
rot2_expr="scale2 * pos + rot2",
rot3_expr="0.0")
#Definition of the parameters start values and the bounds
param = {"dist":0.30,
"poni1":0.08,
"poni2":0.08,
"rot1":0,
"rot2":0,
"scale1": numpy.pi/180., # rot2 is in radians, while the motor position is in degrees
"scale2": 0
}
#Defines the bounds for some variables. We start with very strict bounds
bounds = {"dist": (0.25, 0.31),
"poni1": (0.07, 0.1),
"poni2": (0.07, 0.1),
"rot1": (-0.01, 0.01),
"rot2": (-0.01, 0.01),
"scale1": (numpy.pi/180., numpy.pi/180.), #strict bounds on the scale: we expect the gonio to be precise
"scale2": (0, 0) #strictly bound to 0
}
gonioref2d = GoniometerRefinement(param, #initial guess
bounds=bounds,
pos_function=get_angle,
trans_function=goniotrans2d,
detector=detector,
wavelength=wavelength)
print("Empty goniometer refinement object:")
print(gonioref2d)
# Populate with the images and the control points
for fn in images:
base = os.path.splitext(fn)[0]
bname = os.path.basename(base)
fimg = fabio.open(fn)
sg =gonioref2d.new_geometry(bname, image=fimg.data, metadata=bname,
control_points=base+".npt",
calibrant=calibrant)
print(sg.label, "Angle:", sg.get_position())
print("Filled refinement object:")
print(gonioref2d)
# Initial refinement of the goniometer model with 5 dof
gonioref2d.refine2()
# Remove constrains on the refinement:
gonioref2d.bounds=None
gonioref2d.refine2()
# Check the calibration on all 3 images
fig, ax = subplots(1, 3, figsize=(9, 3) )
for idx,lbl in enumerate(gonioref2d.single_geometries):
sg = gonioref2d.single_geometries[lbl]
if sg.control_points.get_labels():
sg.geometry_refinement.set_param(gonioref2d.get_ai(sg.get_position()).param)
jupyter.display(sg=sg, ax=ax[idx])
#Create a MultiGeometry integrator from the refined geometry:
angles = []
images = []
for sg in gonioref2d.single_geometries.values():
angles.append(sg.get_position())
images.append(sg.image)
multigeo = gonioref2d.get_mg(angles)
multigeo.radial_range=(0, 63)
print(multigeo)
# Integrate the whole set of images in a single run:
res_mg = multigeo.integrate1d(images, 10000)
ax = jupyter.plot1d(res_mg, label="multigeo")
for lbl, sg in gonioref2d.single_geometries.items():
ai = gonioref2d.get_ai(sg.get_position())
img = sg.image * ai.dist * ai.dist / ai.pixel1 / ai.pixel2
res = ai.integrate1d(img, 5000, unit="2th_deg", method="splitpixel")
ax.plot(*res, "--", label=lbl)
ax.legend()
#Let's focus on the inner most ring on the image taken at 45°:
#ax.set_xlim(21.5, 21.7)
ax.set_xlim(29.0, 29.2)
ax.set_ylim(0, 5e11)
```
On all three images, the rings on the outer side of the detector are shifted in comparison with the average signal coming from the other two images.
This phenomenon could be related to volumetric absorption of the photon in the thickness of the detector.
To be able to investigate this phenomenon further, the goniometer geometry is saved in a JSON file:
```
gonioref2d.save("id28.json")
with open("id28.json") as f:
print(f.read())
```
## Peak profile
Let's plot the full width at half maximum (FWHM) for every peak in the different integrated profiles:
```
#Peak profile
from scipy.interpolate import interp1d
from scipy.optimize import bisect
def calc_fwhm(integrate_result, calibrant):
"calculate the tth position and FWHM for each peak"
delta = integrate_result.intensity[1:] - integrate_result.intensity[:-1]
maxima = numpy.where(numpy.logical_and(delta[:-1]>0, delta[1:]<0))[0]
minima = numpy.where(numpy.logical_and(delta[:-1]<0, delta[1:]>0))[0]
maxima += 1
minima += 1
tth = []
FWHM = []
for tth_rad in calibrant.get_2th():
tth_deg = tth_rad*integrate_result.unit.scale
if (tth_deg<=integrate_result.radial[0]) or (tth_deg>=integrate_result.radial[-1]):
continue
idx_theo = abs(integrate_result.radial-tth_deg).argmin()
id0_max = abs(maxima-idx_theo).argmin()
id0_min = abs(minima-idx_theo).argmin()
I_max = integrate_result.intensity[maxima[id0_max]]
I_min = integrate_result.intensity[minima[id0_min]]
tth_maxi = integrate_result.radial[maxima[id0_max]]
I_thres = (I_max + I_min)/2.0
if minima[id0_min]>maxima[id0_max]:
if id0_min == 0:
min_lo = integrate_result.radial[0]
else:
min_lo = integrate_result.radial[minima[id0_min-1]]
min_hi = integrate_result.radial[minima[id0_min]]
else:
if id0_min == len(minima) -1:
min_hi = integrate_result.radial[-1]
else:
min_hi = integrate_result.radial[minima[id0_min+1]]
min_lo = integrate_result.radial[minima[id0_min]]
f = interp1d(integrate_result.radial, integrate_result.intensity-I_thres)
tth_lo = bisect(f, min_lo, tth_maxi)
tth_hi = bisect(f, tth_maxi, min_hi)
FWHM.append(tth_hi-tth_lo)
tth.append(tth_deg)
return tth, FWHM
fig, ax = subplots()
ax.plot(*calc_fwhm(res_mg, calibrant), "o", label="multi")
for lbl, sg in gonioref2d.single_geometries.items():
ai = gonioref2d.get_ai(sg.get_position())
img = sg.image * ai.dist * ai.dist / ai.pixel1 / ai.pixel2
res = ai.integrate1d(img, 5000, unit="2th_deg", method="splitpixel")
t,w = calc_fwhm(res, calibrant=calibrant)
ax.plot(t, w,"-o", label=lbl)
ax.set_title("Peak shape as function of the angle")
ax.set_xlabel(res_mg.unit.label)
ax.legend()
```
## Conclusion:
Can the FWHM and peak position be corrected using raytracing and deconvolution ?
# Convergence and Stability of Gradient Descent for Linear Regression Problems
## (I) Gradient Descent
Consider the following problem:
$\min_{\alpha,\beta} \hat{Q}(\alpha,\beta) \equiv \min_{\alpha,\beta} \frac{1}{N} \sum_{i=1}^N \big(y_i - \alpha - \beta x_i\big)^2 $
The gradient of $\hat{Q}$ can be written as:
$\nabla \hat{Q}(\alpha,\beta) \equiv \begin{pmatrix} \frac{\partial \hat{Q}(\alpha,\beta)}{\partial \alpha}\\ \frac{\partial \hat{Q}(\alpha,\beta)}{\partial \beta} \end{pmatrix}= -\frac{2}{N} \begin{pmatrix} \sum_{i=1}^N (y_i - \beta x_i - \alpha)\\ \sum_{i=1}^N (y_i - \beta x_i - \alpha)x_i \end{pmatrix}$
which can be rewritten as:
$\nabla \hat{Q}(\alpha,\beta) = -2 \begin{pmatrix} \bar{y} - \beta \bar{x} - \alpha\\ \bar{yx} - \beta \bar{x^2} - \alpha \bar{x} \end{pmatrix}$
Now consider a simple gradient descent (GD) algorithm that works as follows:
$\begin{pmatrix} \alpha_{t+1}\\ \beta_{t+1} \end{pmatrix} = \begin{pmatrix} \alpha_{t}\\ \beta_{t} \end{pmatrix} - \lambda \nabla \hat{Q}(\alpha_t,\beta_t) $
where $t$ is the $t$-th step of the optimization, $\lambda$ is the step size in the opposite direction of the gradient, and $(\alpha_0,\beta_0)$ is the initialization of the optimization algorithm. Expanding the above formula leads to
$\begin{pmatrix} \alpha_{t+1}\\ \beta_{t+1} \end{pmatrix} = \begin{bmatrix}1 - 2\lambda & -2 \lambda \bar{x}\\ - 2 \lambda \bar{x} & 1-2\lambda \bar{x^2}\end{bmatrix} \begin{pmatrix} \alpha_{t}\\ \beta_{t} \end{pmatrix} + \begin{pmatrix} 2 \lambda \bar{y}\\ 2 \lambda \bar{xy} \end{pmatrix}$
Defining $\vec{s}_t \equiv \begin{pmatrix} \alpha_{t}\\ \beta_{t} \end{pmatrix} $
$A \equiv \begin{bmatrix}1 - 2\lambda & -2 \lambda \bar{x}\\ - 2 \lambda \bar{x} & 1-2\lambda \bar{x^2}\end{bmatrix} $ and $B \equiv \begin{pmatrix} 2\lambda \bar{y}\\ 2 \lambda \bar{xy} \end{pmatrix}$.
With these definitions the gradient descent can be written as a `linear discrete dynamical system`:
$\vec{s}_{t+1} = A \vec{s}_t + B$
### (I.1) Steady state
I) Assume the spectral radius of $A$ satisfies $\rho(A) < 1$. I will prove this later; interestingly, this condition gives an upper bound on the learning rate $\lambda$ as a function of the moments of the data
II) This is a linear dynamical system, so if $\rho(A)<1$ the sequence $\vec{s}_t$ converges to the unique steady state
III) Since $\hat{Q}(\alpha,\beta)$ is strictly convex in $(\alpha,\beta)$, the steady state is the global minimum
The steady state solves:
$s^* \equiv (I-A)^{-1} B $
Forming $I-A$:
$I - A = 2\lambda \begin{bmatrix}1& \bar{x} \\ \bar{x} & \bar{x^2}\end{bmatrix}$. Therefore the steady state can be written as
$s^* = \begin{bmatrix}1& \bar{x} \\ \bar{x} & \bar{x^2}\end{bmatrix}^{-1} \begin{pmatrix} \bar{y}\\ \bar{xy} \end{pmatrix}$
`Which is the usual OLS we teach to our students`
### (I.2) Stability
$A$ can be written as:
$A = I - 2\lambda \begin{bmatrix}1& \bar{x} \\ \bar{x} & \bar{x^2}\end{bmatrix}$, defining
$A^\prime \equiv \begin{bmatrix}1& \bar{x} \\ \bar{x} & \bar{x^2}\end{bmatrix}$
the eigenvalues of $A$ can be written as
$1 - 2\lambda \gamma$, where $\gamma$ is an eigenvalue of $A^\prime$.
The eigenvalues of $A^\prime$ can be written as:
$\gamma_{\pm} = \frac{(1+\bar{x^2})\pm \sqrt{(1+\bar{x^2})^2- 4\,\mathrm{Var}(x)}}{2}$
Since $A^\prime$ is symmetric and positive semi-definite, $\gamma_{\pm}$ are real and non-negative, and stability requires $|1 - 2\lambda\gamma_{\pm}| < 1$. Therefore choosing
$\lambda < \frac{1}{\max\{\gamma_+,\gamma_-\}} = \frac{1}{\gamma_+}$
guarantees convergence.
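A quick numerical check of this analysis: iterating the GD recursion on simulated data should converge to the closed-form steady state $s^*$ whenever the step size respects the stability bound. The data-generating parameters below (intercept 2, slope 3, $\lambda = 0.1$) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=200)

# sample moments used by the gradient
xb, yb = x.mean(), y.mean()
x2b, xyb = (x ** 2).mean(), (x * y).mean()

alpha, beta, lam = 0.0, 0.0, 0.1
for _ in range(5000):
    grad_a = -2.0 * (yb - beta * xb - alpha)
    grad_b = -2.0 * (xyb - beta * x2b - alpha * xb)
    alpha -= lam * grad_a
    beta -= lam * grad_b

# closed-form steady state s* = [[1, xb], [xb, x2b]]^{-1} (yb, xyb)
s_star = np.linalg.solve(np.array([[1.0, xb], [xb, x2b]]), np.array([yb, xyb]))
print(alpha, beta, s_star)
```

The iterates land on the OLS solution, matching the claim that the steady state of the dynamical system is the usual least-squares estimator.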
## (II) Stochastic Gradient Descent
# Building interactive plots using `bqplot` and `ipywidgets`
* `bqplot` is built on top of the `ipywidgets` framework
* `ipywidgets` and `bqplot` widgets can be seamlessly integrated to build interactive plots
* `bqplot` figure widgets can be stacked with UI controls available in `ipywidgets` by using `Layout` classes (Box, HBox, VBox) in `ipywidgets`
(Note that *only* `Figure` objects (not `Mark` objects) inherit from `DOMWidget` class and can be combined with other widgets from `ipywidgets`)
* Trait attributes of widgets can be linked using callbacks. Callbacks should be registered using the `observe` method
Please follow these links for detailed documentation on:
1. [Layout and Styling of Jupyter Widgets](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Styling.html)
2. [Linking Widgets](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Events.html)
<br>Let's look at examples of linking plots with UI controls
```
import numpy as np
import ipywidgets as widgets
import bqplot.pyplot as plt
```
Update the plot on a button click
```
y = np.random.randn(100).cumsum() # simple random walk
# create a button
update_btn = widgets.Button(description="Update", button_style="success")
# create a figure widget
fig1 = plt.figure(animation_duration=750)
line = plt.plot(y)
# define an on_click function
def on_btn_click(btn):
# update the y attribute of line mark
line.y = np.random.randn(100).cumsum() # another random walk
# register the on_click function
update_btn.on_click(on_btn_click)
# stack button and figure using VBox
widgets.VBox([fig1, update_btn])
```
Let's look at an example where we link a plot to a dropdown menu
```
import pandas as pd
# create a dummy time series for 5 dummy stock tickers
dates = pd.date_range(start="20180101", end="20181231")
n = len(dates)
tickers = list("ABCDE")
prices = pd.DataFrame(np.random.randn(n, 5).cumsum(axis=0), columns=tickers)
# create a dropdown menu for tickers
dropdown = widgets.Dropdown(description="Ticker", options=tickers)
# create figure for plotting time series
current_ticker = dropdown.value
fig_title_tmpl = '"{}" Time Series' # string template for title of the figure
fig2 = plt.figure(title=fig_title_tmpl.format(current_ticker))
fig2.layout.width = "900px"
time_series = plt.plot(dates, prices[current_ticker])
plt.xlabel("Date")
plt.ylabel("Price")
# 1. create a callback which updates the plot when dropdown item is selected
def update_plot(*args):
selected_ticker = dropdown.value
# update the y attribute of the mark by selecting
# the column from the price data frame
time_series.y = prices[selected_ticker]
# update the title of the figure
fig2.title = fig_title_tmpl.format(selected_ticker)
# 2. register the callback by using the 'observe' method
dropdown.observe(update_plot, "value")
# stack the dropdown and fig widgets using VBox
widgets.VBox([dropdown, fig2])
```
Let's now create a scatter plot where we select X and Y data from the two dropdown menus
```
# create two dropdown menus for X and Y attributes of scatter
x_dropdown = widgets.Dropdown(description="X", options=tickers, value="A")
y_dropdown = widgets.Dropdown(description="Y", options=tickers, value="B")
# create figure for plotting the scatter
x_ticker = x_dropdown.value
y_ticker = y_dropdown.value
# set up fig_margin to allow space to display color bar
fig_margin = dict(top=20, bottom=40, left=60, right=80)
fig3 = plt.figure(animation_duration=1000, fig_margin=fig_margin)
# custom axis options for color data
axes_options = {"color": {"tick_format": "%m/%y", "side": "right", "num_ticks": 5}}
scatter = plt.scatter(
x=prices[x_ticker],
y=prices[y_ticker],
color=dates, # represent chronology using color scale
stroke="black",
colors=["red"],
default_size=32,
axes_options=axes_options,
)
plt.xlabel(x_ticker)
plt.ylabel(y_ticker)
# 1. create a callback which updates the plot when dropdown item is selected
def update_scatter(*args):
x_ticker = x_dropdown.value
y_ticker = y_dropdown.value
# update the x and y attributes of the mark by selecting
# the column from the price data frame
with scatter.hold_sync():
scatter.x = prices[x_ticker]
scatter.y = prices[y_ticker]
# update the title of the figure
plt.xlabel(x_ticker)
plt.ylabel(y_ticker)
# 2. register the callback by using the 'observe' method
x_dropdown.observe(update_scatter, "value")
y_dropdown.observe(update_scatter, "value")
# stack the dropdown and fig widgets using VBox
widgets.VBox([widgets.HBox([x_dropdown, y_dropdown]), fig3])
```
In the example below, we'll look at plots of trigonometic functions
```
funcs = dict(sin=np.sin, cos=np.cos, tan=np.tan, sinh=np.sinh, tanh=np.tanh)
dropdown = widgets.Dropdown(options=funcs, description="Function")
fig = plt.figure(title="sin(x)", animation_duration=1000)
# create x and y data attributes for the line chart
x = np.arange(-10, 10, 0.1)
y = np.sin(x)
line = plt.plot(x, y, "m")
def update_line(*args):
f = dropdown.value
fig.title = f"{f.__name__}(x)"
line.y = f(line.x)
dropdown.observe(update_line, "value")
widgets.VBox([dropdown, fig])
```
# Pandas
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png" width=150>
In this notebook, we'll learn the basics of data analysis with the Python Pandas library.
<img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/pandas.png" width=500>
# Uploading the data
We're first going to get some data to play with. We're going to load the titanic dataset from the public link below.
```
import urllib
# Upload data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/titanic.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open('titanic.csv', 'wb') as f:
f.write(html)
# Checking if the data was uploaded
!ls -l
```
# Loading the data
Now that we have some data to play with, let's load it into a Pandas dataframe. Pandas is a great Python library for data analysis.
```
import pandas as pd
# Read from CSV to Pandas DataFrame
df = pd.read_csv("titanic.csv", header=0)
# First five items
df.head()
```
These are the different features:
* pclass: class of travel
* name: full name of the passenger
* sex: gender
* age: numerical age
* sibsp: number of siblings/spouses aboard
* parch: number of parents/children aboard
* ticket: ticket number
* fare: cost of the ticket
* cabin: location of room
* embarked: port that the passenger embarked at (C - Cherbourg, S - Southampton, Q - Queenstown)
* survived: survival metric (0 - died, 1 - survived)
# Exploratory analysis
We're going to use the Pandas library and see how we can explore and process our data.
```
# Describe features
df.describe()
# Histograms
df["age"].hist()
# Unique values
df["embarked"].unique()
# Selecting data by feature
df["name"].head()
# Filtering
df[df["sex"]=="female"].head() # only the female data appears
# Sorting
df.sort_values("age", ascending=False).head()
# Grouping
survived_group = df.groupby("survived")
survived_group.mean()
# Selecting row
df.iloc[0, :] # iloc gets rows (or columns) at particular positions in the index (so it only takes integers)
# Selecting specific value
df.iloc[0, 1]
# Selecting by index
df.loc[0] # loc gets rows (or columns) with particular labels from the index
```
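A small illustration of the `loc`/`iloc` distinction, using a toy frame whose index labels deliberately differ from the row positions (the frame is hypothetical):

```python
import pandas as pd

# index labels differ from row positions on purpose
df = pd.DataFrame({'a': [10, 20, 30]}, index=[2, 0, 1])
print(df.iloc[0]['a'])  # position 0 -> the row labeled 2 -> 10
print(df.loc[0]['a'])   # label 0 -> the second row -> 20
```

After operations like `dropna` or `sort_values`, labels and positions drift apart like this, which is why `reset_index` is often called afterwards.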
# Preprocessing
```
# Rows with at least one NaN value
df[pd.isnull(df).any(axis=1)].head() # specify axis=1 to look across rows rather than the default axis=0 for columns
# Drop rows with Nan values
df = df.dropna() # removes rows with any NaN values
df = df.reset_index() # resets row indexes in case any rows were dropped
df.head()
# Dropping multiple columns
df = df.drop(["name", "cabin", "ticket"], axis=1) # we won't use text features for our initial basic models
df.head()
# Map feature values
df['sex'] = df['sex'].map( {'female': 0, 'male': 1} ).astype(int)
df["embarked"] = df['embarked'].dropna().map( {'S':0, 'C':1, 'Q':2} ).astype(int)
df.head()
```
# Feature engineering
```
# Lambda expressions to create new features
def get_family_size(sibsp, parch):
family_size = sibsp + parch
return family_size
df["family_size"] = df[["sibsp", "parch"]].apply(lambda x: get_family_size(x["sibsp"], x["parch"]), axis=1)
df.head()
# Reorganize headers
df = df[['pclass', 'sex', 'age', 'sibsp', 'parch', 'family_size', 'fare', 'embarked', 'survived']]
df.head()
```
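The same feature can also be built without `apply`: pandas' vectorized column arithmetic gives an equivalent (and typically faster) result. A small sketch on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({'sibsp': [1, 0, 3], 'parch': [0, 2, 1]})
# vectorized addition, equivalent to the apply/lambda version above
df['family_size'] = df['sibsp'] + df['parch']
print(df['family_size'].tolist())  # [1, 2, 4]
```

The `apply` form is more general (arbitrary Python per row), but for simple arithmetic the vectorized form avoids the per-row Python overhead.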
# Saving data
```
# Saving dataframe to CSV
df.to_csv("processed_titanic.csv", index=False)
# See your saved file
!ls -l
```
# Experiments on the COMPAS Dataset
Install `AIF360` with minimum requirements:
```
!pip install aif360
```
Import the packages that we will use:
```
import numpy as np
import matplotlib.pyplot as plt
import pickle
from aif360.algorithms.preprocessing.optim_preproc_helpers.data_preproc_functions \
import load_preproc_data_compas
from aif360.algorithms.preprocessing.reweighing import Reweighing
from aif360.metrics import ClassificationMetric
from sklearn.preprocessing import StandardScaler
#from sklearn.linear_model import LogisticRegression
import torch
from torch.autograd import Variable
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import torch.utils.data as Data
# These 2 functions will help us save and load objects
path = "/content/drive/My Drive/Colab Notebooks/Ethics/"
def save_obj(obj, name ):
with open(path+ name + '.pkl', 'wb') as f:
pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
def load_obj(name ):
with open(path + name + '.pkl', 'rb') as f:
return pickle.load(f)
```
Define privileged and unprivileged groups:
```
privileged_groups = [{'race': 1}]
unprivileged_groups = [{'race': 0}]
```
Load COMPAS Dataset with 'race' as the sensitive attribute:
```
# COMPAS DATASET
compas_dataset_orig = load_preproc_data_compas(['race'])
```
Visualise the COMPAS dataset with respect to the target label ('likelihood of reoffending in 2 years': did recid (=1) or no recid (=0)) and the sensitive attribute (race):
```
df = compas_dataset_orig.metadata['params']['df'].copy()
# Favored class == 1.0 (Caucasian)
# Number of Caucasian with no recid (0.0)
caucasian_no_recid = sum(df[((df['race'] == 1.0))]['two_year_recid'] == 0.0)
# Number of Caucasian who did recid (1.0)
caucasian_did_recid = sum(df[((df['race'] == 1.0))]['two_year_recid'] == 1.0)
# Number of non-Caucasian with no recid (0.0)
non_caucasian_no_recid = sum(df[((df['race'] == 0.0))]['two_year_recid'] == 0.0)
# Number of non-Caucasian who did recid (1.0)
non_caucasian_did_recid = sum(df[((df['race'] == 0.0))]['two_year_recid'] == 1.0)
print('Caucasian (Privileged)')
print('No recid:', caucasian_no_recid,'\tDid recid:', caucasian_did_recid, 'Total:', caucasian_no_recid + caucasian_did_recid)
print('Non-Caucasian')
print('No recid:', non_caucasian_no_recid,'\tDid recid:', non_caucasian_did_recid, 'Total:', non_caucasian_no_recid + non_caucasian_did_recid)
print('\n\t\t\t\t\tTotal:', caucasian_no_recid + caucasian_did_recid + non_caucasian_no_recid + non_caucasian_did_recid)
# Plot a bar graph:
labels = ['Non-Caucasian', 'Caucasian']
did_recid = [non_caucasian_did_recid, caucasian_did_recid]
no_recid = [non_caucasian_no_recid, caucasian_no_recid]
x = np.arange(len(labels)) # the label locations
width = 0.4 # the width of the bars
fig, ax = plt.subplots(figsize=(7,5))
rects1 = ax.bar(x - width/2, did_recid, width, label='Did recid(=1)')
rects2 = ax.bar(x + width/2, no_recid, width, label='No recid(=0)')
ax.set_ylabel('Counts')
ax.set_title("Did/didn't recid by race")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()
fig.tight_layout()
plt.show()
```
Split Dataset into training and test data:
```
train, test = compas_dataset_orig.split([0.7], shuffle=True)
# Preprocess data
scale_orig = StandardScaler()
X_train = scale_orig.fit_transform(train.features)
y_train = train.labels.ravel()
X_test = scale_orig.transform(test.features)
y_test = test.labels.ravel()
```
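As a side note, the scaler is fit on the training split only and then applied to the test split, so no test-set statistics leak into preprocessing. A minimal sketch of that pattern (the numbers are illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr = np.array([[0.0], [2.0], [4.0]])
X_te = np.array([[2.0], [6.0]])

scaler = StandardScaler().fit(X_tr)  # mean/std come from training data only
Z_te = scaler.transform(X_te)        # test data reuses those statistics
print(scaler.mean_, Z_te.ravel())
```

A test point equal to the training mean maps to 0; points outside the training range simply extrapolate rather than re-estimating the statistics.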
Create a Logistic Regression class with pytorch:
```
class LogisticRegression_torch(torch.nn.Module):
def __init__(self, input_dim, output_dim):
super(LogisticRegression_torch, self).__init__()
self.linear = torch.nn.Linear(input_dim, output_dim)
def forward(self, x):
outputs = torch.sigmoid(self.linear(x))
return outputs
GPU = True
device_idx = 0
if GPU:
device = torch.device("cuda:" + str(device_idx) if torch.cuda.is_available() else "cpu")
else:
device = torch.device("cpu")
BATCH_SIZE = 128
learning_rate = 0.0001
# Create a DataTensor
train_dataset = Data.TensorDataset(torch.tensor(X_train).float(), torch.Tensor(y_train).float())
if device.type == "cuda":
num_workers = 2
else:
num_workers = 0
# Data Loader
loader_train = Data.DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
shuffle=True, num_workers=num_workers)
```
Train an LR model for each regularization parameter $\lambda$ and evaluate it on the test set:
```
criterion = torch.nn.BCELoss(reduction='sum')
epochs = 20
accuracies = []
metrics = {}
lambdas = [0.0, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9]
lambdas = np.concatenate((np.array(lambdas), np.linspace(1, 100, num=100)))
for reg_lambda in lambdas:
print('Lambda:', reg_lambda,'\n')
model = LogisticRegression_torch(X_train.shape[1], 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)
for epoch in range(epochs):
train_loss = 0.0
for i, (x, y) in enumerate(loader_train):
# Converting inputs and labels to Variable
inputs = Variable(x.to(device))
labels = Variable(y.to(device))
            # Clear gradient buffers so that gradients from the previous
            # iteration do not accumulate into this one
optimizer.zero_grad()
# get output from the model, given the inputs
outputs = model(inputs)
# Regularization
reg = 0
for param in model.parameters():
reg += 0.5 * (param ** 2).sum()
#reg += param.abs().sum()
# reg_lambda = 0
# get loss for the predicted output
loss = criterion(outputs.reshape(outputs.shape[0]), labels) + \
reg_lambda * reg
train_loss += loss.item()
# get gradients w.r.t to parameters
loss.backward()
# update parameters
optimizer.step()
if (epoch + 1) % 5 == 0:
print('epoch [{}/{}], Training loss:{:.6f}'.format(
epoch + 1,
epochs,
train_loss / len(loader_train.dataset)))
with torch.no_grad():
model.eval()
out = model(Variable(torch.Tensor(X_test).to(device))).detach().cpu()
pred = (out >= 0.5).int().numpy().squeeze()
accuracy = sum((y_test == pred))/len(y_test)
print('Accuracy: ', accuracy,'\n')
accuracies.append(accuracy)
test_pred = test.copy()
test_pred.labels = pred.reshape(-1,1)
metric = ClassificationMetric(test, test_pred,unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups)
metrics[reg_lambda] = {}
metrics[reg_lambda]['accuracy'] = accuracy
metrics[reg_lambda]['privilaged'] = metric.performance_measures(privileged=True)
metrics[reg_lambda]['unprivilaged'] = metric.performance_measures(privileged=False)
met = metric.binary_confusion_matrix(privileged=True)
PR_priv = (met['TP'] + met['FP']) / (met['TP'] + met['FP'] + met['TN'] + met['FN'])
metrics[reg_lambda]['privilaged']['PR'] = PR_priv
met = metric.binary_confusion_matrix(privileged=False)
PR_unpriv = (met['TP'] + met['FP']) / (met['TP'] + met['FP'] + met['TN'] + met['FN'])
metrics[reg_lambda]['unprivilaged']['PR'] = PR_unpriv
save_obj(metrics, 'metrics_compas')
```
Plot accuracy with respect to $\lambda$:
```
plt.plot(lambdas, accuracies)
plt.title('Accuracy of Logistic Regression on COMPAS dataset')
plt.xlabel('Reg-lambda')
plt.ylabel('Accuracy')
plt.ylim((0.6,0.7))
```
Plot TPR and TNR for each sensitive class with respect to $\lambda$:
```
TPR_priv = []
TPR_non_priv = []
TNR_priv = []
TNR_non_priv = []
for l in metrics:
TPR_priv.append(metrics[l]['privilaged']['TPR'])
TPR_non_priv.append(metrics[l]['unprivilaged']['TPR'])
TNR_priv.append(metrics[l]['privilaged']['TNR'])
TNR_non_priv.append(metrics[l]['unprivilaged']['TNR'])
fig, axs = plt.subplots(1, 2, figsize=(10,5))
fig.suptitle('Investigating Equalized Odds')
axs[0].plot(lambdas, TPR_non_priv)
axs[0].plot(lambdas, TPR_priv)
axs[0].set_title('TPR')
axs[0].set(xlabel='Reg-lambda', ylabel='TPR')
axs[0].legend(['Not Caucasian', 'Caucasian'])
axs[0].set(ylim=(0.3,1))
axs[1].plot(lambdas, TNR_non_priv)
axs[1].plot(lambdas, TNR_priv)
axs[1].set_title('TNR')
axs[1].set(xlabel='Reg-lambda', ylabel='TNR')
axs[1].legend(['Not Caucasian', 'Caucasian'])
axs[1].set(ylim=(0.2,0.9))
```
Plot positive and negative predictive parity for each sensitive class with respect to $\lambda$:
```
PPP_priv= []
PPP_non_priv= []
NPP_priv= []
NPP_non_priv = []
for l in metrics:
PPP_priv.append(metrics[l]['privilaged']['PPV'])
PPP_non_priv.append(metrics[l]['unprivilaged']['PPV'])
NPP_priv.append(metrics[l]['privilaged']['NPV'])
NPP_non_priv.append(metrics[l]['unprivilaged']['NPV'])
fig, axs = plt.subplots(1, 2, figsize=(10,5))
fig.suptitle('Investigating Predictive Parity')
axs[0].plot(lambdas, PPP_non_priv)
axs[0].plot(lambdas, PPP_priv)
axs[0].set_title('PPP')
axs[0].set(xlabel='Reg-lambda', ylabel='PPP')
axs[0].legend(['Not Caucasian', 'Caucasian'])
axs[0].set(ylim=(0.4,0.9))
axs[1].plot(lambdas, NPP_non_priv)
axs[1].plot(lambdas, NPP_priv)
axs[1].set_title('NPP')
axs[1].set(xlabel='Reg-lambda', ylabel='NPP')
axs[1].legend(['Not Caucasian', 'Caucasian'])
axs[1].set(ylim=(0,1))
```
Plot the positive rate (PR) for each sensitive class with respect to $\lambda$:
```
PR_priv = []
PR_non_priv = []
ACC = []
for l in metrics:
PR_priv.append(metrics[l]['privilaged']['PR'])
PR_non_priv.append(metrics[l]['unprivilaged']['PR'])
ACC.append(metrics[l]['accuracy'])
fig, axs = plt.subplots(1, 2, figsize=(10,5))
axs[1].set_title('Investigating Demographic Parity')
axs[1].plot(lambdas, PR_non_priv)
axs[1].plot(lambdas, PR_priv)
axs[1].set(xlabel='Reg-lambda', ylabel='Positive Rate')
axs[1].legend(['Non-Caucasian', 'Caucasian'])
axs[1].set(ylim=(0,1))
axs[0].plot(lambdas, ACC)
axs[0].set_title('Accuracy')
axs[0].set(xlabel='Reg-lambda', ylabel = 'Accuracy')
axs[0].set(ylim=(0.6,0.7))
```
### Pre-processing by Reweighing:
```
RW = Reweighing(unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
train_rw = RW.fit_transform(train)
# Create a weights Tensor
weights = torch.FloatTensor(train_rw.instance_weights)
BATCH_SIZE = 32
learning_rate = 0.0001
# Data Tensor
# We now include the weights so that data will be reweighed during training
rw_train_dataset = Data.TensorDataset(torch.tensor(X_train).float(),
torch.Tensor(y_train).float(), weights)
# Data Loader
loader_train = Data.DataLoader(
dataset=rw_train_dataset,
batch_size=BATCH_SIZE,
shuffle=False, num_workers=num_workers)
```
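For intuition, here is a standalone sketch of how reweighing-style instance weights can be computed (an illustration of the idea, not the AIF360 implementation): each (group, label) cell gets weight $P(A{=}a)\,P(Y{=}y)/P(A{=}a, Y{=}y)$, so that the weighted data shows no association between group and label.

```python
import numpy as np

# Hypothetical toy data, for illustration only.
A = np.array([1, 1, 1, 0, 0, 0, 0, 1])  # protected attribute (1 = privileged)
Y = np.array([1, 0, 0, 1, 1, 0, 0, 1])  # binary label
n = len(A)

weights = np.empty(n, dtype=float)
for a in (0, 1):
    for y in (0, 1):
        cell = (A == a) & (Y == y)
        # w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)
        weights[cell] = (np.mean(A == a) * np.mean(Y == y)) / np.mean(cell)

# In each (a, y) cell the weights now sum to n * P(A=a) * P(Y=y) -- the count
# we would observe if A and Y were statistically independent.
```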
Train an LR model for each regularization parameter $\lambda$ and evaluate it on the test set:
```
epochs = 20
accuracies = []
metrics_rw = {}
lambdas = [0.0, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9]
lambdas = np.concatenate((np.array(lambdas), np.linspace(1, 100, num=100)))
for reg_lambda in lambdas:
model = LogisticRegression_torch(X_train.shape[1], 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)
for epoch in range(epochs):
train_loss = 0.0
for i, (x, y, w) in enumerate(loader_train):
# Converting inputs and labels to Variable
inputs = Variable(x.to(device))
labels = Variable(y.to(device))
            # Clear gradient buffers so that gradients from the previous
            # iteration do not accumulate into this one
optimizer.zero_grad()
# get output from the model, given the inputs
outputs = model(inputs)
# Regularization
reg = 0
for param in model.parameters():
reg += 0.5 * (param ** 2).mean()
#reg += param.abs().sum()
# reg_lambda = 0
# criterion
criterion = torch.nn.BCELoss(weight=w, reduction='sum')
# get loss for the predicted output
loss = criterion(outputs.reshape(outputs.shape[0]), labels) + \
reg_lambda * reg
train_loss += loss.item()
# get gradients w.r.t to parameters
loss.backward()
# update parameters
optimizer.step()
if (epoch + 1) % 5 == 0:
print('epoch [{}/{}], Training loss:{:.6f}'.format(
epoch + 1,
epochs,
train_loss / len(loader_train.dataset)))
with torch.no_grad():
model.eval()
out = model(Variable(torch.Tensor(X_test).to(device))).detach().cpu()
pred = (out >= 0.5).int().numpy().squeeze()
accuracy = sum((y_test == pred))/len(y_test)
print('Accuracy: ', accuracy)
accuracies.append(accuracy)
test_pred = test.copy()
test_pred.labels = pred.reshape(-1,1)
metric_rew = ClassificationMetric(test, test_pred,unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups)
metrics_rw[reg_lambda] = {}
metrics_rw[reg_lambda]['accuracy'] = accuracy
metrics_rw[reg_lambda]['privilaged'] = metric_rew.performance_measures(privileged=True)
metrics_rw[reg_lambda]['unprivilaged'] = metric_rew.performance_measures(privileged=False)
met = metric_rew.binary_confusion_matrix(privileged=True)
PR_priv = (met['TP'] + met['FP']) / (met['TP'] + met['FP'] + met['TN'] + met['FN'])
metrics_rw[reg_lambda]['privilaged']['PR'] = PR_priv
met = metric_rew.binary_confusion_matrix(privileged=False)
PR_unpriv = (met['TP'] + met['FP']) / (met['TP'] + met['FP'] + met['TN'] + met['FN'])
metrics_rw[reg_lambda]['unprivilaged']['PR'] = PR_unpriv
    save_obj(metrics_rw, 'metrics_compas_rw')
```
Plot accuracy with respect to $\lambda$:
```
plt.plot(lambdas, accuracies)
plt.title('Accuracy of Logistic Regression on COMPAS dataset')
plt.xlabel('Reg-lambda')
plt.ylabel('Accuracy')
```
Plot TPR and TNR for each sensitive class with respect to $\lambda$:
```
TPR_priv_rw = []
TPR_non_priv_rw = []
TNR_priv_rw = []
TNR_non_priv_rw = []
for l in metrics_rw:
TPR_priv_rw.append(metrics_rw[l]['privilaged']['TPR'])
TPR_non_priv_rw.append(metrics_rw[l]['unprivilaged']['TPR'])
TNR_priv_rw.append(metrics_rw[l]['privilaged']['TNR'])
TNR_non_priv_rw.append(metrics_rw[l]['unprivilaged']['TNR'])
fig, axs = plt.subplots(1, 2, figsize=(10,5))
fig.suptitle('Investigating Equalized Odds')
axs[0].plot(lambdas, TPR_non_priv_rw)
axs[0].plot(lambdas, TPR_priv_rw)
axs[0].set_title('TPR')
axs[0].set(xlabel='Reg-lambda', ylabel='TPR')
axs[0].legend(['Non-Caucasian', 'Caucasian'])
axs[1].plot(lambdas, TNR_non_priv_rw)
axs[1].plot(lambdas, TNR_priv_rw)
axs[1].set_title('TNR')
axs[1].set(xlabel='Reg-lambda', ylabel='TNR')
axs[1].legend(['Non-Caucasian', 'Caucasian'])
axs[1].set(ylim=(0, 1))
```
Plot positive and negative predictive parity for each sensitive class with respect to $\lambda$:
```
PPP_priv_rw = []
PPP_non_priv_rw = []
NPP_priv_rw = []
NPP_non_priv_rw = []
for l in metrics_rw:
PPP_priv_rw.append(metrics_rw[l]['privilaged']['PPV'])
PPP_non_priv_rw.append(metrics_rw[l]['unprivilaged']['PPV'])
NPP_priv_rw.append(metrics_rw[l]['privilaged']['NPV'])
NPP_non_priv_rw.append(metrics_rw[l]['unprivilaged']['NPV'])
fig, axs = plt.subplots(1, 2, figsize=(10,5))
fig.suptitle('Investigating Predictive Parity')
axs[0].plot(lambdas, PPP_non_priv_rw)
axs[0].plot(lambdas, PPP_priv_rw)
axs[0].set_title('PPP')
axs[0].set(xlabel='Reg-lambda', ylabel='PPP')
axs[0].legend(['Non-Caucasian', 'Caucasian'])
axs[0].set(ylim=(0.4,0.8))
axs[1].plot(lambdas, NPP_non_priv_rw)
axs[1].plot(lambdas, NPP_priv_rw)
axs[1].set_title('NPP')
axs[1].set(xlabel='Reg-lambda', ylabel='NPP')
axs[1].legend(['Non-Caucasian', 'Caucasian'])
axs[1].set(ylim=(0.4,0.8))
```
Plot the positive rate (PR) for each sensitive class with respect to $\lambda$:
```
PR_priv_rw = []
PR_non_priv_rw = []
ACC = []
for l in metrics_rw:
PR_priv_rw.append(metrics_rw[l]['privilaged']['PR'])
PR_non_priv_rw.append(metrics_rw[l]['unprivilaged']['PR'])
ACC.append(metrics_rw[l]['accuracy'])
fig, axs = plt.subplots(1, 2, figsize=(10,5))
axs[1].set_title('Investigating Demographic Parity')
axs[1].plot(lambdas, PR_non_priv_rw)
axs[1].plot(lambdas, PR_priv_rw)
axs[1].set(xlabel='Reg-lambda', ylabel='Positive Rate')
axs[1].legend(['Non-Caucasian', 'Caucasian'])
axs[0].plot(lambdas, ACC)
axs[0].set_title('Accuracy')
axs[0].set(xlabel='Reg-lambda', ylabel = 'Accuracy')
axs[0].set(ylim=(0.63,0.67))
with torch.no_grad():
model.eval()
out = model(Variable(torch.Tensor(X_test).to(device))).detach().cpu()
t = test.copy()
class1 = t.labels[t.features[:,1] == 0]
class2 = t.labels[t.features[:,1] == 1]
pred_class1 = out[t.features[:,1] == 0]
pred_class2 = out[t.features[:,1] == 1]
!pip install scikitplot
from sklearn import svm, datasets
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
import matplotlib.pyplot as plt
fpr, tpr, _ = metrics.roc_curve(class1, pred_class1)
auc = metrics.roc_auc_score(class1, pred_class1)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
fpr, tpr, _ = metrics.roc_curve(class2, pred_class2)
auc = metrics.roc_auc_score(class2, pred_class2)
plt.plot(fpr,tpr,label="data 2, auc="+str(auc))
plt.legend()
plt.plot(np.linspace(0,1), np.linspace(0,1),'k--')
```
# Temperature forecast for the general public (MAELSTROM-Yr dataset)
This dataset contains temperature forecasts for the Nordic region, which are used to produce public weather forecasts on the weather app Yr (www.yr.no). The goal of the prediction task is to generate a deterministic temperature forecast together with an uncertainty range (10% to 90%), as shown here: https://www.yr.no/en/details/graph/5-18700/Norway/Oslo/Oslo/Oslo%20(Blindern).
The target field in the dataset is constructed using a high density network of citizen weather stations from [Netatmo](https://weathermap.netatmo.com/).
The current operational implementation uses a very simple regression model based on only a subset of the predictors available in the dataset. It is described in this article: https://journals.ametsoc.org/view/journals/bams/101/1/bams-d-18-0237.1.xml
## Prerequisites
To run the code in this notebook, you need the following packages:
`pip install climetlab climetlab_maelstrom_yr keras tensorflow numpy matplotlib`
## Loading the data
We can use climetlab to load the dataset into an xarray dataset. There will be several datasets available of different sizes: 300 MB (not available yet), 5GB, and 5TB (not available yet). The 5TB dataset contains the entire Nordic domain at 1x1 km resolution for all 60 hour leadtimes. The 5GB dataset contains only a subset of grid points (128x128) surrounding Oslo, Norway and only for leadtimes 6, 12, ..., 42 hours. All datasets contain the same input predictors and time period (4 years).
Currently, only "air_temperature" is available as the predictand; however, precipitation_amount will be added in the future.
The entire 5GB dataset will take a few minutes to load, since the data must be downloaded from europeanweather.cloud. Climetlab caches files locally, so files need not be re-downloaded when rerunning the code later. To only load a subset, add a dates argument to load_dataset, e.g. `dates=['2017-01-01', '2017-01-02']` or `dates=pandas.date_range(start="2017-01-01", end="2017-03-01", freq="1D")`.
```
import climetlab as cml
import pandas
cmlds = cml.load_dataset(
'maelstrom-yr',
size='5GB',
parameter='air_temperature',
)
ds = cmlds.to_xarray()
```
This dataset contains the following dimensions and variables
```
print(ds)
```
The dataset is mostly self-explanatory. The `record` dimension represents different samples. The `predictors` variable contains all predictors stacked one after the other, including values for different leadtimes. The `target` variable contains the target values.
### Plotting predictors and predictand (target)
```
import matplotlib.pyplot as plt
import numpy as np
names = ds["name_predictor"].values
names = np.array([''.join([qq.decode('utf-8') for qq in names[p, :]]) for p in range(names.shape[0])])
num_leadtimes = len(ds["leadtime"])
unique_predictor_names = np.unique(names)
print("Available predictors:", unique_predictor_names)
index_date = 0
target = ds["target"].values
plt.rcParams['figure.dpi'] = 100
plt.rcParams['figure.figsize'] = [10, 6]
for i, name in enumerate(unique_predictor_names):
plt.subplot(2, 4, i + 1)
index = np.where(names == name)[0][0]
plt.pcolormesh(ds["predictors"][index_date, :, :, index], shading="auto", rasterized=True)
plt.gca().set_aspect(1)
plt.title(name)
plt.subplot(2, 4, 8)
plt.pcolormesh(target[index_date, :, :, 0], shading="auto", rasterized=True)
plt.gca().set_aspect(1)
plt.title("Target")
```
## Example ML solution
### Normalizing the predictors
First we normalize the predictors by subtracting the mean and dividing by the standard deviation:
```
raw_forecast = np.copy(ds["predictors"][:, :, :, 0:num_leadtimes])
predictors = np.copy(ds["predictors"].values)
num_predictors = predictors.shape[3]
for p in range(num_predictors):
predictors[:, :, :, p] -= np.nanmean(predictors[:, :, :, p])
predictors[:, :, :, p] /= np.nanstd(predictors[:, :, :, p])
```
### Defining the loss function
We use the quantile (pinball) loss function, scoring each of the three output quantiles of the model:
```
import keras
import tensorflow as tf
import keras.backend as K
global num_leadtimes
def quantile_loss_function(y_true, y_pred):
err0 = y_true - y_pred[:, :, :, 0:num_leadtimes]
err1 = y_true - y_pred[:, :, :, num_leadtimes:(2*num_leadtimes)]
err2 = y_true - y_pred[:, :, :, (2*num_leadtimes):(3*num_leadtimes)]
qtloss0 = (0.5 - tf.cast((err0 < 0), tf.float32)) * err0
qtloss1 = (0.1 - tf.cast((err1 < 0), tf.float32)) * err1
qtloss2 = (0.9 - tf.cast((err2 < 0), tf.float32)) * err2
return K.mean(qtloss0 + qtloss1 + qtloss2)
```
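For intuition, here is a standalone numpy version of the single-quantile pinball loss used in each term above (an illustration, not part of the training code):

```python
import numpy as np

def pinball(y_true, y_pred, q):
    # (q - 1[err < 0]) * err, averaged -- the same form as each term above.
    err = y_true - y_pred
    return np.mean((q - (err < 0).astype(float)) * err)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 1.5])
# A low quantile (q = 0.1) penalizes over-prediction lightly, while a high
# quantile (q = 0.9) penalizes under-prediction heavily, pushing the optimal
# prediction toward the corresponding quantile of the target distribution.
print(pinball(y_true, y_pred, 0.1), pinball(y_true, y_pred, 0.9))
```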
### Setting up the model
The model takes a gridded predictor set as input and outputs gridded fields for each leadtime and for three quantiles. The temperature forecast on yr.no has both a deterministic best guess and a 10-90% confidence interval, so we want the model to predict all three parameters simultaneously.
```
num_quantiles = 3
num_outputs = num_quantiles * num_leadtimes
model = keras.Sequential()
model.add(keras.layers.InputLayer(predictors.shape[1:]))
model.add(keras.layers.Dense(num_outputs))
model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate = 1e-2), loss = quantile_loss_function)
model.summary()
```
### Training the model
We will split the dataset into a training and evaluation set, based on the record dimension.
```
Itrain = range(predictors.shape[0]//2)
Ieval = range(predictors.shape[0]//2, predictors.shape[0])
num_epochs = 50
batch_size = 4
model.fit(predictors[Itrain, ...], target[Itrain, ...], epochs=num_epochs, batch_size=batch_size)
```
### Predict output
```
output = model.predict(predictors[Ieval, ...])
```
## Model evaluation and visualization
### Evaluating the model
First, let's compare the mean absolute error (MAE) of the raw forecast and of the ML forecast of the median:
```
import numpy as np
print("Raw model MAE:", np.nanmean(np.abs(raw_forecast[Ieval, ...] - target[Ieval, ...])), "°C")
print("ML MAE:", np.nanmean(np.abs(output[:, :, :, 0:num_leadtimes] - target[Ieval, ...])), "°C")
```
Next, we can plot the MAE as a function of leadtime:
```
x = ds["leadtime"].astype(float) / 3600 / 1e9
plt.plot(x, [np.nanmean(np.abs(output[:, :, :, i] - target[Ieval, :, :, i])) for i in range(num_leadtimes)], 'ro-', lw=2, label="Model")
plt.plot(x, [np.nanmean(np.abs(raw_forecast[Ieval, :, :, i] - target[Ieval, :, :, i])) for i in range(num_leadtimes)], 'yo-', lw=2, label="Raw")
plt.legend()
plt.xlabel("Lead time (hours)")
plt.ylabel("Mean absolute error (°C)")
```
### Visualizing the results as timeseries
We can visualize the output as a timeseries. We will pick an example point (Oslo).
```
Y = 55
X = 55
plt.plot(x, output[0, Y, X, 0:num_leadtimes], 'r-', lw=2, label="Median")
plt.plot(x, raw_forecast[Ieval[0], Y, X, 0:num_leadtimes], 'y-', lw=2, label="Raw")
lower = output[0, Y, X,num_leadtimes:2*num_leadtimes]
upper = output[0, Y, X, 2*num_leadtimes:3*num_leadtimes]
plt.plot(x, lower, 'r--', lw=2, label="10%")
plt.plot(x, upper, 'r--', lw=2, label="90%")
xx = np.concatenate((x, x[::-1]))
plt.fill(np.concatenate((x, x[::-1])), np.concatenate((lower, upper[::-1])), color='r', alpha=0.2, linewidth=0)
plt.plot(x, target[Ieval[0], Y, X, :], 'bo-', lw=2, label="Target")
plt.legend()
plt.xlabel("Lead time (hours)")
plt.ylabel("Air temperature (°C)")
```
### Visualizing the results on a map
```
plt.subplot(1, 3, 1)
plt.pcolormesh(raw_forecast[Ieval[0], :, :, 0], rasterized=True)
plt.gca().set_aspect(1)
plt.title("Raw forecast")
plt.subplot(1, 3, 2)
plt.pcolormesh(output[0, :, :, 0], rasterized=True)
plt.gca().set_aspect(1)
plt.title("ML forecast")
plt.subplot(1, 3, 3)
plt.pcolormesh(target[Ieval[0], :, :, 0], rasterized=True)
plt.gca().set_aspect(1)
plt.title("Target (median)")
```
# Compare Hankel and Fourier Transforms
This notebook compares the forward and inverse transforms for both the Hankel and Fourier methods, either by computing partial derivatives or by solving a partial differential equation.
This notebook focuses on the Laplacian operator in the case of radial symmetry.
Consider two 2D circularly-symmetric functions $f(r)$ and $g(r)$ that are related by the following differential operator,
$$
g(r) = \nabla^2 f(r)
= \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial f}{\partial r} \right)
$$
In this notebook we will consider two problems:
1. Given $f(r)$, compute the Laplacian to obtain $g(r)$
2. Given $g(r)$, invert the Laplacian to obtain $f(r)$
We can use the 1D Hankel (or 2D Fourier) transform to compute the Laplacian in three steps:
1. Compute the Forward Transform
$$
\mathcal{H}[f(r)] = \hat f(k)
$$
2. Differentiate in Spectral space
$$
\hat g(k) = - k^2 \hat f(k)
$$
3. Compute the Inverse Transform
$$
g(r) = \mathcal{H}^{-1} [\hat g(k)]
$$
This is easily done in two-dimensions using the Fast Fourier Transform (FFT) but one advantage of the Hankel transform is that we only have a one-dimensional transform.
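As a minimal standalone sketch of this three-step recipe — in 1D with the FFT, rather than the radial Hankel transform used below — the pattern is: forward transform, multiply by $-k^2$, inverse transform:

```python
import numpy as np

# 1D analogue of the three steps: transform, differentiate spectrally,
# transform back. The radial Hankel version follows the same pattern.
N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
f = np.sin(3 * x)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

g = np.fft.ifft(-(k**2) * np.fft.fft(f)).real   # d^2 f / dx^2, spectrally
err = np.max(np.abs(g - (-9 * np.sin(3 * x))))  # exact answer is -9 sin(3x)
print(err)
```

The error is at the level of floating-point roundoff, which is the "spectral accuracy" that makes this approach attractive.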
## Import Relevant Libraries
```
# Import Libraries
import numpy as np # Numpy
from scipy.fftpack import fft2, ifft2, fftfreq, ifftn, fftn # Fourier
from hankel import HankelTransform, SymmetricFourierTransform # Hankel
from scipy.interpolate import InterpolatedUnivariateSpline as spline # Splines
import matplotlib.pyplot as plt # Plotting
import matplotlib as mpl
from os import path
%matplotlib inline
## Put the prefix to the figure directory here for your computer. If you don't want to save files, set to empty string, or None.
prefix = path.expanduser("~/Documents/Projects/HANKEL/laplacian_paper/Figures/")
```
## Standard Plot Aesthetics
```
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['xtick.labelsize'] = 13
mpl.rcParams['ytick.labelsize'] = 13
mpl.rcParams['font.size'] = 15
mpl.rcParams['axes.titlesize'] = 14
```
## Define Sample Functions
We define the two functions
$$
f = e^{-r^2}
\quad \mbox{ and } \quad
g = 4 e^{-r^2} (r^2 - 1).
$$
It is easy to verify that they are related by the Laplacian operator.
```
# Define Gaussian
f = lambda r: np.exp(-r**2)
# Define Laplacian Gaussian function
g = lambda r: 4.0*np.exp(-r**2)*(r**2 - 1.0)
```
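As a quick sanity check of this claim, one can compare a finite-difference radial Laplacian of $f$ against $g$ (a standalone check, not needed for the rest of the notebook):

```python
import numpy as np

f = lambda r: np.exp(-r**2)
g = lambda r: 4.0 * np.exp(-r**2) * (r**2 - 1.0)

# Nested central differences for the radial Laplacian (1/r) d/dr (r df/dr).
r = np.linspace(0.1, 5.0, 1000)
h = 1e-4
def dfdr(rr):
    return (f(rr + h) - f(rr - h)) / (2 * h)
lap = ((r + h) * dfdr(r + h) - (r - h) * dfdr(r - h)) / (2 * h) / r
print(np.max(np.abs(lap - g(r))))  # small; only finite-difference error remains
```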
We can also define the FTs of these functions analytically, so we can compare our numerical results:
```
fhat = lambda x : np.pi*np.exp(-x**2/4.)
ghat = lambda x : -x**2*fhat(x)
# Make a plot of the sample functions
fig, ax = plt.subplots(1,2,figsize=(10,4))
r = np.linspace(0,10,128)
ax[0].plot(r, f(r), label=r"$f(r)$")
ax[0].plot(r, g(r), label=r'$g_2(r)$', ls="--")
ax[0].legend()
ax[0].set_xlabel(r"$r$")
ax[0].grid(True)
k = np.logspace(-2,2,128)
ax[1].plot(k, fhat(k), label=r"$\hat{f}_2(k)$")
ax[1].plot(k, ghat(k), label=r'$\hat{g}_2(k)$', ls="--")
ax[1].legend()
ax[1].set_xlabel(r"$k$")
ax[1].grid(True)
ax[1].set_xscale('log')
#plt.suptitle("Plot of Sample Functions")
if prefix:
plt.savefig(path.join(prefix,"sample_function.pdf"))
```
## Define Transformation Functions
```
def ft_transformation_2d(f,x, inverse=False):
xx,yy = np.meshgrid(x,x)
r = np.sqrt(xx**2 + yy**2)
# Appropriate k-space values
k = 2*np.pi*fftfreq(len(x),d=x[1]-x[0])
kx,ky = np.meshgrid(k,k)
K2 = kx**2+ky**2
# The transformation
if not inverse:
g2d = ifft2(-K2 * fft2(f(r)).real).real
else:
invK2 = 1./K2
invK2[np.isinf(invK2)] = 0.0
g2d = ifft2(-invK2 * fft2(f(r)).real).real
return x[len(x)//2:], g2d[len(x)//2,len(x)//2:]
def ht_transformation_nd(f,N_forward,h_forward,K,r,ndim=2, inverse=False, N_back=None, h_back=None,
ret_everything=False):
if N_back is None:
N_back = N_forward
if h_back is None:
h_back = h_forward
# Get transform of f
ht = SymmetricFourierTransform(ndim=ndim, N=N_forward, h=h_forward)
if ret_everything:
fhat, fhat_cumsum = ht.transform(f, K, ret_cumsum=True, ret_err=False)
else:
fhat = ht.transform(f, K, ret_err = False)
# Spectral derivative
if not inverse:
ghat = -K**2 * fhat
else:
ghat = -1./K**2 * fhat
# Transform back to physical space via spline
# The following should give best resulting splines for most kinds of functions
# Use log-space y if ghat is either all negative or all positive, otherwise linear-space
# Use order 1 because if we have to extrapolate, this is more stable.
# This will not be a good approximation for discontinuous functions... but they shouldn't arise.
if np.all(ghat<=1e-13):
g_ = spline(K[ghat<0],np.log(-ghat[ghat<0]),k=1)
ghat_spline = lambda x : -np.exp(g_(x))
elif np.all(ghat>=-1e-13):
g_ = spline(K[ghat>0],np.log(ghat[ghat>0]),k=1)
ghat_spline = lambda x : np.exp(g_(x))
else:
g_ = spline(K,ghat,k=1)
ghat_spline = g_
if N_back != N_forward or h_back != h_forward:
ht2 = SymmetricFourierTransform(ndim=ndim, N=N_back, h=h_back)
else:
ht2 = ht
if ret_everything:
g, g_cumsum = ht2.transform(ghat_spline, r, ret_err=False, inverse=True, ret_cumsum=True)
else:
g = ht2.transform(ghat_spline, r, ret_err=False, inverse=True)
if ret_everything:
return g, g_cumsum, fhat,fhat_cumsum, ghat, ht,ht2, ghat_spline
else:
return g
```
## Forward Laplacian
We can simply use the functions defined above to compute the forward Laplacian in each case; we just need to specify the grid.
```
L = 10.
N = 256
dr = L/N
x_ft = np.linspace(-L+dr/2,L-dr/2,2*N)
r_ht = np.linspace(dr/2,L-dr/2,N)
```
We also need to choose appropriate parameters for the forwards/backwards Hankel Transforms. To do this, we can use the ``get_h`` function in the ``hankel`` library:
```
from hankel import get_h
hback, res, Nback = get_h(ghat, nu=2, K=r_ht[::10], cls=SymmetricFourierTransform, atol=1e-8, rtol=1e-4, inverse=True)
K = np.logspace(-2, 2, N) # These values come from inspection of the plot above, which shows that ghat is ~zero outside these bounds
hforward, res, Nforward = get_h(f, nu=2, K=K[::50], cls=SymmetricFourierTransform, atol=1e-8, rtol=1e-4)
hforward, Nforward, hback, Nback
## FT
r_ft, g_ft = ft_transformation_2d(f,x_ft)
# Note: r_ft is equivalent to r_ht
## HT
g_ht = ht_transformation_nd(f,N_forward=Nforward, h_forward=hforward, N_back=Nback, h_back=hback, K = K, r = r_ht)
```
Now we plot the calculated functions against the analytic result:
```
fig, ax = plt.subplots(2,1, sharex=True,gridspec_kw={"hspace":0.08},figsize=(8,6))
ax[0].plot(r_ft,g_ft, label="Fourier Transform", lw=2)
ax[0].plot(r_ht, g_ht, label="Hankel Transform", lw=2, ls='--')
ax[0].plot(r_ht, g(r_ht), label = "$g_2(r)$", lw=2, ls = ':')
ax[0].legend(fontsize=15)
#ax[0].xaxis.set_ticks([])
ax[0].grid(True)
ax[0].set_ylabel(r"$\tilde{g}_2(r)$",fontsize=15)
ax[0].set_ylim(-4.2,1.2)
ax[1].plot(r_ft, np.abs(g_ft-g(r_ft)), lw=2)
ax[1].plot(r_ht, np.abs(g_ht-g(r_ht)),lw=2, ls='--')
#ax[1].set_ylim(-1,1)
ax[1].set_yscale('log')
#ax[1].set_yscale("symlog",linthreshy=1e-6)
ax[1].set_ylabel(r"$|\tilde{g}_2(r)-g_2(r)|$",fontsize=15)
ax[1].set_xlabel(r"$r$",fontsize=15)
ax[1].set_ylim(1e-15, 0.8)
plt.grid(True)
if prefix:
fig.savefig(path.join(prefix,"forward_laplacian.pdf"))
```
Timing for each calculation:
```
%timeit ft_transformation_2d(f,x_ft)
%timeit ht_transformation_nd(f,N_forward=Nforward, h_forward=hforward, N_back=Nback, h_back=hback, K = K, r = r_ht)
```
## Inverse Laplacian
We use the 1D Hankel (or 2D Fourier) transform to invert the Laplacian in three steps:
1. Compute the Forward Transform
$$
\mathcal{H}[g(r)] = \hat g(k)
$$
2. Differentiate in Spectral space
$$
\hat f(k) = - \frac{1}{k^2} \hat g(k)
$$
3. Compute the Inverse Transform
$$
f(r) = \mathcal{H}^{-1} [\hat f(k)]
$$
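Note that step 2 is singular at $k = 0$; `ft_transformation_2d` above zeroes that mode, which fixes the otherwise-undetermined additive constant in $f$. A minimal 1D illustration of the same idea:

```python
import numpy as np

# Invert f'' = g spectrally. Dividing by -k^2 is singular at k = 0, so that
# mode is zeroed -- this pins the mean of the recovered f to zero, fixing the
# otherwise arbitrary additive constant.
N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
g = -9 * np.sin(3 * x)                      # g = f'' for f = sin(3x)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

inv_k2 = np.zeros_like(k)
inv_k2[k != 0] = 1.0 / k[k != 0] ** 2
f_rec = np.fft.ifft(-inv_k2 * np.fft.fft(g)).real
# sin(3x) has zero mean, so it is recovered up to roundoff:
print(np.max(np.abs(f_rec - np.sin(3 * x))))
```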
Again, we compute the relevant Hankel parameters:
```
hback, res, Nback = get_h(fhat, nu=2, K=r_ht[::10], cls=SymmetricFourierTransform, atol=1e-8, rtol=1e-4, inverse=True)
K = np.logspace(-2, 2, N) # These values come from inspection of the plot above, which shows that ghat is ~zero outside these bounds
hforward, res, Nforward = get_h(g, nu=2, K=K[::50], cls=SymmetricFourierTransform, atol=1e-8, rtol=1e-4)
hforward,Nforward,hback,Nback
## FT
r_ft, f_ft = ft_transformation_2d(g,x_ft, inverse=True)
# Note: r_ft is equivalent to r_ht
## HT
f_ht = ht_transformation_nd(g,N_forward=Nforward, h_forward=hforward,N_back=Nback, h_back=hback, K = K, r = r_ht, inverse=True)
fig, ax = plt.subplots(2,1, sharex=True,gridspec_kw={"hspace":0.08},figsize=(8,6))
#np.mean(f(r_ft)) - np.mean(f_ft)
ax[0].plot(r_ft,f_ft + f(r_ft)[-1] - f_ft[-1], label="Fourier Transform", lw=2)
ax[0].plot(r_ht, f_ht, label="Hankel Transform", lw=2, ls='--')
ax[0].plot(r_ht, f(r_ht), label = "$f(r)$", lw=2, ls = ':')
ax[0].legend()
ax[0].grid(True)
ax[0].set_ylabel(r"$\tilde{f}(r)$",fontsize=15)
ax[0].set_ylim(-0.2,1.2)
#ax[0].set_yscale('log')
ax[1].plot(r_ft, np.abs(f_ft + f(r_ft)[-1] - f_ft[-1] -f(r_ft)), lw=2)
ax[1].plot(r_ht, np.abs(f_ht -f(r_ht)),lw=2, ls='--')
ax[1].set_yscale('log')
ax[1].set_ylabel(r"$|\tilde{f}(r)-f(r)|$",fontsize=15)
ax[1].set_xlabel(r"$r$",fontsize=15)
ax[1].set_ylim(1e-19, 0.8)
plt.grid(True)
if prefix:
fig.savefig(path.join(prefix,"inverse_laplacian.pdf"))
%timeit ft_transformation_2d(g,x_ft, inverse=True)
%timeit ht_transformation_nd(g,N_forward=Nforward, h_forward=hforward,N_back=Nback, h_back=hback, K = K, r = r_ht, inverse=True)
```
## 3D Problem (Forward)
We need to define the FT function again, for 3D:
```
def ft_transformation_3d(f,x, inverse=False):
r = np.sqrt(np.sum(np.array(np.meshgrid(*([x]*3)))**2,axis=0))
# Appropriate k-space values
k = 2*np.pi*fftfreq(len(x),d=x[1]-x[0])
K2 = np.sum(np.array(np.meshgrid(*([k]*3)))**2,axis=0)
# The transformation
if not inverse:
g2d = ifftn(-K2 * fftn(f(r)).real).real
else:
invK2 = 1./K2
invK2[np.isinf(invK2)] = 0.0
g2d = ifftn(-invK2 * fftn(f(r)).real).real
    return x[len(x)//2:], g2d[len(x)//2, len(x)//2, len(x)//2:]
```
We also need to define the 3D laplacian function:
```
g3 = lambda r: 4.0*np.exp(-r**2)*(r**2 - 1.5)
fhat_3d = lambda x : np.pi**(3./2)*np.exp(-x**2/4.)
ghat_3d = lambda x : -x**2*fhat_3d(x)
L = 10.
N = 128
dr = L/N
x_ft = np.linspace(-L+dr/2,L-dr/2,2*N)
r_ht = np.linspace(dr/2,L-dr/2,N)
```
Again, choose our resolution parameters
```
hback, res, Nback = get_h(ghat_3d, nu=3, K=r_ht[::10], cls=SymmetricFourierTransform, atol=1e-8, rtol=1e-4, inverse=True)
K = np.logspace(-2, 2, 2*N) # These values come from inspection of the plot above, which shows that ghat is ~zero outside these bounds
hforward, res, Nforward = get_h(f, nu=3, K=K[::50], cls=SymmetricFourierTransform, atol=1e-8, rtol=1e-4)
hforward, Nforward, hback, Nback
## FT
r_ft, g_ft = ft_transformation_3d(f,x_ft)
# Note: r_ft is equivalent to r_ht
## HT
K = np.logspace(-1.0,2.,N)
g_ht = ht_transformation_nd(f,N_forward=Nforward, h_forward=hforward, N_back=Nback, h_back=hback, K = K, r = r_ht, ndim=3)
fig, ax = plt.subplots(2,1, sharex=True,gridspec_kw={"hspace":0.08},figsize=(8,6))
ax[0].plot(r_ft,g_ft, label="Fourier Transform", lw=2)
ax[0].plot(r_ht, g_ht, label="Hankel Transform", lw=2, ls='--')
ax[0].plot(r_ht, g3(r_ht), label = "$g_3(r)$", lw=2, ls = ':')
ax[0].legend(fontsize=15)
#ax[0].xaxis.set_ticks([])
ax[0].grid(True)
ax[0].set_ylabel(r"$\tilde{g}_3(r)$",fontsize=15)
#ax[0].set_ylim(-4.2,1.2)
ax[1].plot(r_ft, np.abs(g_ft-g3(r_ft)), lw=2)
ax[1].plot(r_ht, np.abs(g_ht-g3(r_ht)),lw=2, ls='--')
ax[1].set_yscale('log')
ax[1].set_ylabel(r"$|\tilde{g}_3(r)-g_3(r)|$",fontsize=15)
ax[1].set_xlabel(r"$r$",fontsize=15)
plt.grid(True)
if prefix:
fig.savefig(path.join(prefix,"forward_laplacian_3D.pdf"))
%timeit ht_transformation_nd(f,N_forward=Nforward, h_forward=hforward, N_back=Nback, h_back=hback, K = K, r = r_ht, ndim=3)
%timeit ft_transformation_3d(f,x_ft)
```
## 3D Problem (Inverse)
```
hback, res, Nback = get_h(fhat_3d, nu=3, K=r_ht[::10], cls=SymmetricFourierTransform, atol=1e-8, rtol=1e-4, inverse=True)
K = np.logspace(-2, 2, N) # These values come from inspection of the plot above, which shows that ghat is ~zero outside these bounds
hforward, res, Nforward = get_h(g3, nu=3, K=K[::50], cls=SymmetricFourierTransform, atol=1e-8, rtol=1e-4)
hforward,Nforward,hback,Nback
## FT
r_ft, f_ft = ft_transformation_3d(g3,x_ft, inverse=True)
# Note: r_ft is equivalent to r_ht
## HT
f_ht = ht_transformation_nd(g3,ndim=3, N_forward=Nforward, h_forward=hforward,N_back=Nback, h_back=hback, K = K, r = r_ht, inverse=True)
fig, ax = plt.subplots(2,1, sharex=True,gridspec_kw={"hspace":0.08},figsize=(8,6))
#np.mean(f(r_ft)) - np.mean(f_ft)
ax[0].plot(r_ft, f_ft + f(r_ft)[-1] - f_ft[-1], label="Fourier Transform", lw=2)
ax[0].plot(r_ht, f_ht + f(r_ft)[-1] - f_ht[-1], label="Hankel Transform", lw=2, ls='--')
ax[0].plot(r_ht, f(r_ht), label = "$f(r)$", lw=2, ls = ':')
ax[0].legend()
ax[0].grid(True)
ax[0].set_ylabel(r"$\tilde{f}(r)$",fontsize=15)
ax[0].set_ylim(-0.2,1.2)
#ax[0].set_yscale('log')
ax[1].plot(r_ft, np.abs(f_ft + f(r_ft)[-1] - f_ft[-1] -f(r_ft)), lw=2)
ax[1].plot(r_ht, np.abs(f_ht + f(r_ft)[-1] - f_ht[-1] -f(r_ht)),lw=2, ls='--')
ax[1].set_yscale('log')
ax[1].set_ylabel(r"$|\tilde{f}(r)-f(r)|$",fontsize=15)
ax[1].set_xlabel(r"$r$",fontsize=15)
ax[1].set_ylim(1e-19, 0.8)
plt.grid(True)
if prefix:
fig.savefig(path.join(prefix,"inverse_laplacian_3d.pdf"))
```
# Uniform quantization in frequency domain
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import fftpack
from scipy.misc import bytescale  # note: removed in SciPy >= 1.2; requires an older SciPy
import matplotlib.image as mpimg
# loading image
img = bytescale(mpimg.imread('i/super_mario_head.png'))
chosen_y_x = 90
resolution = 128
img_slice = img[chosen_y_x:(chosen_y_x + resolution), chosen_y_x:(chosen_y_x + resolution), 2]
```
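A caveat on the import above: `scipy.misc.bytescale` was removed in SciPy 1.2, so this cell fails on recent versions. A minimal stand-in that linearly rescales an array onto 0-255 and casts to `uint8` (a sketch of the behaviour relied on here, not a full drop-in replacement for every keyword of the original):
```
import numpy as np

def bytescale(data, low=0, high=255):
    """Linearly rescale `data` onto [low, high] and cast to uint8."""
    data = np.asarray(data, dtype=float)
    dmin, dmax = data.min(), data.max()
    if dmax == dmin:                      # avoid division by zero for flat images
        return np.full(data.shape, low, dtype=np.uint8)
    scaled = (data - dmin) / (dmax - dmin) * (high - low) + low
    return scaled.astype(np.uint8)
```
With this in place, `bytescale(mpimg.imread('i/super_mario_head.png'))` behaves as before for the float RGBA data returned by matplotlib.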
# Distribution: Frequency vs Spatial
```
# transform: 2D DCT
dct_slice = fftpack.dct(fftpack.dct(img_slice.T, norm='ortho').T, norm='ortho')
dct_hist, dct_bin_edges = np.histogram(dct_slice, bins = range(100))
img_hist, img_bin_edges = np.histogram(img_slice, bins = range(100))
f, (plt1, plt2) = plt.subplots(1, 2, figsize=(15, 5))
plt1.set_title('Frequency histogram')
plt1.bar(dct_bin_edges[:-1], dct_hist, width = 1)
plt2.set_title('Spatial histogram')
plt2.bar(img_bin_edges[:-1], img_hist, width = 1)
```
# Quantize by dividing and requantize by multiplying
```
block = img_slice[80:88,40:48]
quantize_step = 5
dct_slice = fftpack.dct(fftpack.dct(block.T, norm='ortho').T, norm='ortho')
dct_slice_quantized = np.divide(dct_slice,[quantize_step])
rounded_quantized = np.around(dct_slice_quantized)
dct_slice_requantized = np.multiply(rounded_quantized,[quantize_step])
idct_slice = fftpack.idct(fftpack.idct(dct_slice_requantized.T, norm='ortho').T, norm='ortho')
f, (plt1, plt2) = plt.subplots(1, 2, figsize=(15, 5))
plt1.axis('off');
plt1.set_title('Original')
plt1.imshow(block, cmap='gray',interpolation='nearest')
plt2.axis('off');
plt2.set_title('Quantized')
plt2.imshow(idct_slice, cmap='gray',interpolation='nearest')
```
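The nested transpose pattern `fftpack.dct(fftpack.dct(block.T, norm='ortho').T, norm='ortho')` recurs throughout this notebook. Since the 2D DCT is separable, it can be wrapped in two small helpers (purely for readability; the behaviour is identical):
```
import numpy as np
from scipy import fftpack

def dct2(block):
    """2D type-II DCT, applied separably along each axis."""
    return fftpack.dct(fftpack.dct(block.T, norm='ortho').T, norm='ortho')

def idct2(coeffs):
    """2D inverse DCT, undoing dct2 up to floating-point round-off."""
    return fftpack.idct(fftpack.idct(coeffs.T, norm='ortho').T, norm='ortho')
```
`idct2(dct2(block))` reconstructs `block` exactly up to round-off.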
# DCT Coefficients
```
plt.imshow(dct_slice, interpolation='nearest',cmap=plt.cm.Paired)
plt.colorbar(shrink=1)
```
# Discarding based on coefficient importance
```
# a 8x8 block
block = img_slice[80:88,40:48]
# a 2D DCT
dct_slice = fftpack.dct(fftpack.dct(block.T, norm='ortho').T, norm='ortho')
original_dct_slice = np.copy(dct_slice)
# keeps only the top left 5 element triangle
for u in range(8):
for v in range(8):
if (u+v) >5:
dct_slice[u,v] = 0
print("It compressed ", 100 - ((np.count_nonzero(dct_slice)/64) * 100), "% of the block.")
```
## Pixel Original
```
np.set_printoptions(precision=1,linewidth=140, suppress=True)
block
```
## DCT Original
```
original_dct_slice
```
## Quantized
```
dct_slice
idct_slice = fftpack.idct(fftpack.idct(dct_slice.T, norm='ortho').T, norm='ortho')
f, (plt1, plt2) = plt.subplots(1, 2, figsize=(15, 5))
plt1.axis('off');
plt1.set_title('Original')
plt1.imshow(block, cmap='gray',interpolation='nearest')
plt2.axis('off');
plt2.set_title('Quantized')
plt2.imshow(idct_slice, cmap='gray',interpolation='nearest')
```
# JPEG quantization table
```
img = mpimg.imread('i/jpeg_quantization_table.png')
plt.axis('off');
plt.imshow(img)
```
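The table shown in the image is the luminance quantization table from Annex K of the JPEG standard (ITU-T T.81). In code form it can replace the single `quantize_step` used earlier, dividing each DCT coefficient by its own entry so that low frequencies (top-left, small divisors) are kept with finer precision than high frequencies:
```
import numpy as np
from scipy import fftpack

# Standard JPEG luminance quantization table (ITU-T T.81, Annex K).
JPEG_LUMA_Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def jpeg_quantize_block(block):
    """Quantize and requantize one 8x8 block with the JPEG luminance table."""
    coeffs = fftpack.dct(fftpack.dct(block.T, norm='ortho').T, norm='ortho')
    rounded = np.around(coeffs / JPEG_LUMA_Q) * JPEG_LUMA_Q
    return fftpack.idct(fftpack.idct(rounded.T, norm='ortho').T, norm='ortho')
```
Applying `jpeg_quantize_block` to the same 8x8 `img_slice` block used above reproduces the per-frequency quantization a JPEG encoder performs on luminance data.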
# Introduction to Chinook with Graphene
In the following exercise, we'll get a feeling for building and characterizing tight-binding models in chinook, in addition to some calculation of the associated ARPES intensity. I'll use graphene for this exercise.
I'll start by importing the requisite python libraries -- including the necessary chinook files. Numpy is the standard python numerics package.
```
import numpy as np
import chinook.build_lib as build_lib
import chinook.ARPES_lib as arpes_lib
import chinook.operator_library as op_lib
import chinook.orbital_plotting as oplot
```
Personally, I like to keep my model setup and my calculation execution in separate .py scripts. For the sake of code readability this helps a lot. For this Jupyter notebook though, I'm going to do things sort of linearly. It gets a bit cluttered, but will help with the flow. Have a look at the .py files saved on the same page as this notebook for the same exercises written in native python scripts.
To define a tight-binding model, I'll need four things: a lattice, an orbital basis, a Hamiltonian, and a momentum path of interest. We start with the lattice.
```
alatt = 2.46
interlayer = 100.0
avec = np.array([[-alatt/2,alatt*np.sqrt(3/4.),0.0],
[alatt/2,alatt*np.sqrt(3/4.),0.0],
[0.0,0.0,interlayer]])
```
Even though we are working with a 2D lattice, it is embedded in the 3D space we live in. I've therefore defined an 'interlayer' distance, but this is fictitiously large so that there will not be any 'interlayer' coupling. Next, we define our orbital basis:
```
spin_args = {'bool':False}
basis_positions = np.array([[0.0,0.0,0.0],
[0.0,alatt/np.sqrt(3.0),0.0]])
basis_args = {'atoms':[0,0],
'Z':{0:6},
'orbs':[["20","21x","21y","21z"],["20","21x","21y","21z"]],
'pos':basis_positions,
'spin':spin_args}
```
I'm going to ignore the spin-degree of freedom so I'm turning off the spin-switch. In other systems, these 'spin_args' allow for incorporation of spin-orbit coupling and magnetic ordering. The graphene lattice has two basis atoms per unit cell, and for now I'll include the full Carbon 2sp orbital space. This is a good point to clarify that most objects we define in chinook are generated using the 'dictionary' structure I've used here, where we use key-value pairs to define attributes in a user-readable fashion.
After the basis, I'll define my Hamiltonian. Following the introduction to Slater-Koster tight-binding, I define the relevant hoppings in the SK dictionary. The keys specify the atoms and orbitals associated with the hopping value. For example, '002211P' corresponds to the $V_{pp\pi}$ hopping between the 0$^{th}$ and 0$^{th}$ atom in our basis, coupling specifically the 2p (n=2, l=1) states.
```
SK = {"020":-8.81,"021":-0.44, #onsite energies
"002200S":-5.279, #nearest-neighbour Vssσ
"002201S":5.618, #nearest-neighbour Vspσ
"002211S":6.05,"002211P":-3.07} #nearest-neighbour Vppσ,Vppπ
hamiltonian_args = {'type':'SK',
'V':SK,
'avec':avec,
'cutoff':alatt*0.7,
'spin':spin_args}
```
Before building our model, the last thing I'll do is specify a k-path along which I want to find the band-structure.
```
G = np.array([0,0,0])
K = np.array([1./3,2./3,0])
M = np.array([0,0.5,0.0])
momentum_args= {'type':'F',
'avec':avec,
'grain':200,
'pts':[G,K,M,G],
'labels':['$\\Gamma$','K','M','$\\Gamma$']}
```
Finally then, I'll use the chinook.build_library to actually construct a tight-binding model for our use here
```
basis = build_lib.gen_basis(basis_args)
kpath = build_lib.gen_K(momentum_args)
TB = build_lib.gen_TB(basis,hamiltonian_args,kpath)
```
With this model so defined, I can now compute the eigenvalues along my k-path of interest:
```
TB.solve_H()
TB.plotting()
```
We see very nicely then the linear Dirac dispersion for which graphene is so famous, in addition to the sigma-bonding states at higher energies below $E_F$, composed of sp$_2$ hybrids, from which its mechanical strength is derived. Note also that I've chosen to n-dope my graphene, shifting the Dirac point below the chemical potential. Such a shift is routinely observed in graphene which is not free-standing, as typically used in ARPES experiments.
To understand the orbital composition more explicitly, I can compute the projection of the tight-binding eigenvectors onto the orbitals of my basis using the chinook.operator_library. Before doing so, I'll use a built-in method of the TB model object we've created to lay out my orbital basis clearly:
```
TB.print_basis_summary()
```
Clearly, orbitals [0,4] are 2s, [1,5] are 2p$_x$, [2,6] are 2p$_y$ and [3,7] are 2p$_z$. I'll use the op_lib.fatbs function to plot 'fat' bands for these basis combinations:
```
C2s = op_lib.fatbs([0,4],TB,Elims=(-30,15))
C2x = op_lib.fatbs([1,5],TB,Elims=(-30,15))
C2y = op_lib.fatbs([2,6],TB,Elims=(-30,15))
C2z = op_lib.fatbs([3,7],TB,Elims=(-30,15))
```
From these results, it's immediately obvious that if I am only concerned with the low-energy physics near the chemical potential (within $\pm$ 3 eV), then it is perfectly reasonable to adopt a model with only p$_z$ orbitals. I can actually redefine my model accordingly.
```
basis_args = {'atoms':[0,0],
'Z':{0:6},
'orbs':[["21z"],["21z"]],
'pos':basis_positions,
'spin':spin_args}
basis = build_lib.gen_basis(basis_args)
TB_pz = build_lib.gen_TB(basis,hamiltonian_args,kpath)
TB_pz.solve_H()
TB_pz.plotting()
```
The only difference in the above was that I redefined the "orbs" argument for the basis definition, cutting out the "20", "21x", "21y" states. There is some redundancy left in this model; specifically, I have defined additional hopping elements and onsite energies (for the 2s) which will not be used.
Let's shift our attention to ARPES. In ARPES experiments, one usually only sees one side of the Dirac cone. This is due to interference between the two sublattice sites. To understand this, we can plot directly the tight-binding eigenvectors near the K-point. Since we defined our k-path with 200 points between each high-symmetry point, I'll plot the eigenvectors at the 190$^{th}$ k-point.
```
eigenvector1 = TB_pz.Evec[190,:,0]
eigenvector2 = TB_pz.Evec[190,:,1]
wfunction1 = oplot.wavefunction(basis=TB_pz.basis,vector=eigenvector1)
wfunction2 = oplot.wavefunction(basis=TB_pz.basis,vector=eigenvector2)
wplot1 = wfunction1.triangulate_wavefunction(20)
wplot2 = wfunction2.triangulate_wavefunction(20)
```
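The same bonding/antibonding structure can be checked outside of chinook with a minimal nearest-neighbour Bloch Hamiltonian on the A/B sublattice basis (a sketch independent of chinook; `t` is a nominal hopping, not the fitted $V_{pp\pi}$ above):
```
import numpy as np

t = -2.7                               # nominal hopping (eV), illustrative
a1 = np.array([-0.5, np.sqrt(3)/2])    # lattice vectors in units of a
a2 = np.array([0.5, np.sqrt(3)/2])

def bloch_hamiltonian(k):
    """2x2 nearest-neighbour Hamiltonian; off-diagonal couples A and B sites."""
    fk = t*(1 + np.exp(1j*np.dot(k, a1)) + np.exp(1j*np.dot(k, a2)))
    return np.array([[0, fk], [np.conj(fk), 0]])

# At the zone centre the lower band is the symmetric (bonding) combination
# of the two sublattices; the upper band is the antisymmetric one.
evals, evecs = np.linalg.eigh(bloch_hamiltonian(np.zeros(2)))
```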
We see that the lower-energy state is the symmetric combination of sites $A$ and $B$, whereas the higher energy state is the antisymmetric combination. So we can anticipate that the symmetric state will produce constructive interference, whereas the antisymmetric will destructively interfere. Ok, let's continue with this model to calculate the ARPES spectra.
```
Kpt = np.array([1.702,0.0,0.0])
klimits = 0.1
Elimits = [-1.25,0.25]
Npoints = 100
arpes_args={'cube':{'X':[Kpt[0]-klimits,Kpt[0]+klimits,Npoints],
'Y':[Kpt[1]-klimits,Kpt[1]+klimits,Npoints],
'kz':Kpt[2],
'E':[Elimits[0],Elimits[1],1000]},
'SE':['poly',0.01,0,0.1], #Self-energy arguments (lineshape)
'hv': 21.2, # Photon energy (eV)
'pol':np.array([-1,0,1]), #light-polarization
'resolution':{'E':0.02,'k':0.005}, #energy, momentum resolution
'T':4.2} #Temperature (for Fermi distribution)
experiment = arpes_lib.experiment(TB_pz,arpes_args)
experiment.datacube()
Imap,Imap_resolution,axes = experiment.spectral(slice_select=('y',0))
Imap,Imap_resolution,axes = experiment.spectral(slice_select=('E',0))
Imap,Imap_resolution,axes = experiment.spectral(slice_select=('x',Kpt[0]))
```
I can also compare the result against what I would have with my larger basis size.
```
experiment_fullbasis = arpes_lib.experiment(TB,arpes_args)
experiment_fullbasis.datacube()
Imap,Imap_resolution,axes = experiment_fullbasis.spectral(slice_select=('x',Kpt[0]))
```
Perhaps unsurprisingly, the result is the same, as symmetries of the 2D lattice preclude hybridization of the Carbon 2p$_z$ orbitals with any of the other 2sp states.
# Manipulating the Hamiltonian
We can go beyond here and now start playing with our Hamiltonian. One possibility is to consider the effect of breaking inversion symmetry by imposing an onsite energy difference between the two Carbon sites. This is the familiar Semenoff mass proposed by UBC's Gordon Semenoff, as it modifies the massless Dirac dispersion near the K-point to become massive. I will define a simple helper function for this task:
```
def semenoff_mass(TB,mass):
Hnew = [[0,0,0,0,0,mass/2],
[1,1,0,0,0,-mass/2]]
TB.append_H(Hnew)
```
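Before applying it, the expected effect can be read off the low-energy 2x2 Dirac Hamiltonian near K: with a Semenoff mass $m$ the bands become $\pm\sqrt{(\hbar v_F k)^2 + (m/2)^2}$, so the gap at the K-point is exactly $m$. A quick numerical sketch (`v` is a nominal velocity in arbitrary units, not taken from the model):
```
import numpy as np

m = 0.5   # Semenoff mass (eV)
v = 1.0   # nominal hbar * Fermi velocity, illustrative units

def dirac_bands(k, m, v):
    """Eigenvalues of H = [[m/2, v*k], [v*k, -m/2]]."""
    e = np.sqrt((v*k)**2 + (m/2)**2)
    return -e, e

lower, upper = dirac_bands(0.0, m, v)
gap = upper - lower   # equals m exactly at k = 0 (the K-point)
```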
I can then call this function, acting on the pz-only model:
```
TB_semenoff = build_lib.gen_TB(basis,hamiltonian_args,kpath)
semenoff_mass(TB_semenoff,0.5)
TB_semenoff.Kobj = kpath
TB_semenoff.solve_H()
TB_semenoff.plotting()
```
By breaking inversion symmetry in the crystal, I have opened a gap at the K-point. The Dirac point need only be degenerate if both inversion and time-reversal symmetries are preserved. Note that I have redefined my kpath to follow the same points as before, as the ARPES calculations impose the mesh of k-points used. Near the K-point, rather than having 'bonding' and 'anti-bonding' character, the Semenoff mass localizes the wavefunction on one site or the other. Printing the orbital wavefunction near K for the lower and upper states:
```
eigenvector1 = TB_semenoff.Evec[190,:,0]
eigenvector2 = TB_semenoff.Evec[190,:,1]
wfunction1 = oplot.wavefunction(basis=TB_semenoff.basis,vector=eigenvector1)
wfunction2 = oplot.wavefunction(basis=TB_semenoff.basis,vector=eigenvector2)
wplot1 = wfunction1.triangulate_wavefunction(20)
wplot2 = wfunction2.triangulate_wavefunction(20)
```
We see nicely that the eigenvector has been changed from before--while still resembling the symmetric and antisymmetric combinations we had above, now the charge distribution lies predominantly on one or the other site. Try changing the momentum point where you evaluate this, or increasing/decreasing the size of the mass term to observe its effect. I can compute the photoemission again, resulting in a gapped spectrum
```
experiment_semenoff = arpes_lib.experiment(TB_semenoff,arpes_args)
experiment_semenoff.datacube()
_ = experiment_semenoff.spectral(slice_select=('x',Kpt[0]))
_ = experiment_semenoff.spectral(slice_select=('w',-0.0))
```
In addition to the gap, we also see the modification of the eigenstate manifest in the redistribution of spectral weight on the Fermi surface, which no longer features the complete extinction of intensity on the inside of the cone.
While the Semenoff mass does not break time-reversal symmetry, Duncan Haldane proposed a different form of perturbation which would have this effect. The Haldane model introduces a complex second-nearest neighbour hopping which has opposite sign on the two sublattice sites. I'll define again a function to introduce this perturbation:
```
def haldane_mass(TB,mass):
Hnew = []
vectors = [TB.avec[0],TB.avec[1],TB.avec[1]-TB.avec[0]]
for ii in range(2):
for jj in range(3):
Hnew.append([ii,ii,*vectors[jj],-(2*ii-1)*0.5j*mass])
Hnew.append([ii,ii,*(-vectors[jj]),(2*ii-1)*0.5j*mass])
TB.append_H(Hnew)
```
This function generates the simplest form of Haldane mass, with fixed phase. You can try modifying the above function to allow for arbitrary phase.
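One way to sketch that generalization is below (hypothetical code, not part of chinook; the reverse hop must carry the complex conjugate so the Hamiltonian stays Hermitian, and $\phi = \pi/2$ recovers the fixed-phase function above):
```
import numpy as np

def haldane_mass_phi(TB, mass, phi):
    """Haldane term with arbitrary phase phi; phi = pi/2 matches haldane_mass."""
    Hnew = []
    vectors = [TB.avec[0], TB.avec[1], TB.avec[1] - TB.avec[0]]
    for ii in range(2):
        amp = -(2*ii - 1)*0.5*mass*np.exp(1j*phi)   # opposite sign per sublattice
        for jj in range(3):
            Hnew.append([ii, ii, *vectors[jj], amp])
            Hnew.append([ii, ii, *(-vectors[jj]), np.conj(amp)])
    TB.append_H(Hnew)
```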
I'm going to define a separate tight-binding model for this perturbation, identical to the unperturbed p$_z$-only basis I used above. I'll then add a Haldane mass term which will result in roughly the same energy splitting as for the Semenoff mass.
```
TB_haldane = build_lib.gen_TB(basis,hamiltonian_args,kpath)
haldane_mass(TB_haldane,0.3)
TB_haldane.solve_H()
TB_haldane.plotting()
```
Evidently, we have effectively the same dispersion as before--breaking time-reversal symmetry now has the effect of gapping out the Dirac cone just as inversion symmetry breaking did.
Finally, I can of course also choose to add both a Haldane and a Semenoff mass. For a critically large Haldane mass, I enter a topologically non-trivial phase. In this case, it is useful to consider both inequivalent Dirac points in the unit cell. So I use a modified k-path here:
```
momentum_args= {'type':'F',
'avec':avec,
'grain':500,
'pts':[-1.5*K,-K,G,K,1.5*K],
'labels':["1.5K'","K'",'$\\Gamma$','K','1.5K']}
kpath_halsem = build_lib.gen_K(momentum_args)
TB_halsem = build_lib.gen_TB(basis,hamiltonian_args,kpath_halsem)
haldane_mass(TB_halsem,0.25/np.sqrt(3))
semenoff_mass(TB_halsem,0.25)
TB_halsem.solve_H()
TB_halsem.plotting(-1,0.5)
```
There we go, I've now broken both time-reversal and inversion symmetry, modifying the dispersion in a non-trivial way. While the $K$ and $K'$ points will be energetically inequivalent for arbitrary choices of $m_S$ and $m_H$, at $m_H$=$m_S/\sqrt{3}$ (as written in our formalism), the gap at $K$ closes. This can be contrasted with the Semenoff-only and Haldane-only cases along this same path through both inequivalent k-points of the Brillouin zone.
```
TB_semenoff.Kobj = kpath_halsem
TB_haldane.Kobj = kpath_halsem
TB_semenoff.solve_H()
TB_haldane.solve_H()
TB_semenoff.plotting(-1,0.5)
TB_haldane.plotting(-1,0.5)
```
It is clear from this that the presence of either time-reversal or inversion symmetry preserves the energy equivalence of the dispersion at $K$ and $K'$, and only by breaking both symmetries can we change this. Finally, we can compute the ARPES intensity for the system with critical Haldane and Semenoff masses at the $K$ and $K'$ points.
```
arpes_args['cube']['X'] =[-Kpt[0]-klimits,-Kpt[0]+klimits,500]
arpes_args['cube']['Y'] =[0,0,1]
arpes_args['cube']['E'] = [-1.5,0.25,1000]
experiment_halsem = arpes_lib.experiment(TB_halsem,arpes_args)
experiment_halsem.datacube()
_ = experiment_halsem.spectral(slice_select=('y',0),plot_bands=True)
arpes_args['cube']['X'] =[Kpt[0]-klimits,Kpt[0]+klimits,500]
experiment_halsem = arpes_lib.experiment(TB_halsem,arpes_args)
experiment_halsem.datacube()
_ = experiment_halsem.spectral(slice_select=('y',0),plot_bands=True)
```
<img src="../../images/brownbear.png" width="400">
## A financial tool that can analyze and maximize investment portfolios on a risk adjusted basis
Description: This notebook is useful for examining portfolios comprised of stocks from the Dow Jones Industrial Average. Construct portfolios from the 30 stocks in the DJIA and examine the results of different weighting schemes.
```
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
# imports
import pandas as pd
import matplotlib.pyplot as plt
import brownbear as bb
# format price data
pd.options.display.float_format = '{:0.2f}'.format
# display all rows
pd.set_option('display.max_rows', None)
# do not truncate column names
pd.set_option('display.max_colwidth', None)
%matplotlib inline
# set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
```
### Some Globals
```
investment_universe = ['dow30-galaxy']
risk_free_rate = 0
annual_returns = '3 Yr'
vola = 'Vola'
ds_vola = 'DS Vola'
# Fetch Investment Options - all values annualized
df = bb.fetch(investment_universe, risk_free_rate, annual_returns, vola, ds_vola)
df
# add fundamental columns
df = bb.add_fundamental_columns(df)
df
# rank
rank = bb.rank(df, rank_by='Dividend Yield')
rank_filtered = rank
#rank_filtered = rank.loc[(rank['3 mo'] > 0) & rank['1 Yr'] > 0]
rank_filtered
```
### Sample Portfolios
Format 'Investment option': weight
```
# everything ranked
ranked_portfolio = {
'Title': 'Ranked Portfolio'
}
everything = list(rank_filtered['Investment Option'])[:20]
ranked_portfolio.update(dict.fromkeys(everything, 1/len(everything)))
# top 10
top10_portfolio = {
'Title': 'Top10 Portfolio'
}
top10 = list(rank['Investment Option'])[:10]
top10_portfolio.update(dict.fromkeys(top10, 1/len(top10)))
```
### Custom Portfolios
```
# My portfolio
my_portfolio = {
'Title': 'My Portfolio',
}
```
### Choose Portfolio Option
```
# Select one of the portfolios from above
portfolio_option = ranked_portfolio
# Make a copy so that the original portfolio is preserved
portfolio_option = portfolio_option.copy()
```
### Analysis Options
```
# Specify the weighting scheme. It will replace the weights specified in the portfolio
# You can also fix the weights on some Investment Options, Asset Classes, and Asset Subclasses
# while the others are automatically calculated.
# 'Equal' - will use equal weights.
# 'Sharpe Ratio' - will use proportionally weighted # allocations based on the percent
# of an investment option's sharpe ratio to the sum of all the sharpe ratios in the portfolio.
# 'Std Dev' - will use standard deviation adjusted weights
# 'Annual Returns' - will use return adjusted weights
# 'Vola' - will use volatility adjusted weights
# 'DS Vola' - will use downside volatility adjusted weights
# None: 'Investment Option' means use user specified weights
# 'Asset Class' means do not group by Asset Class
#                  'Asset Subclass' means do not group by Asset Subclass
weight_by = {
'Asset Class': {'weight_by': None},
'Asset Subclass': {'weight_by': 'Annual Returns'},
'Investment Option': {'weight_by': 'Equal'},
}
#weight_by = None
bb.DEBUG = False
# Analyze portfolio
annual_ret, std_dev, sharpe_ratio = \
bb.analyze(df, portfolio_option, weight_by)
# Display Results
summary = bb.summary(df, portfolio_option, annual_ret, std_dev, sharpe_ratio)
summary
# Show pie charts of investment and asset class weights
bb.show_pie_charts(df, portfolio_option, charts=['Investment Option', 'Asset Subclass'])
# Show exact weights
bb.print_portfolio(portfolio_option)
```
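The 'Sharpe Ratio' weighting scheme described in the comments above reduces to dividing each option's ratio by the sum of ratios. A standalone sketch of that rule (illustrative only; the tickers and values are made up, and brownbear's internal handling of grouping and negative ratios may differ):
```
def proportional_weights(values):
    """Weight each option by its share of the total (e.g. of Sharpe ratios)."""
    total = sum(values.values())
    if total == 0:
        raise ValueError("cannot weight proportionally: values sum to zero")
    return {name: v/total for name, v in values.items()}

sharpes = {'AAPL': 1.2, 'MSFT': 0.9, 'KO': 0.3}   # hypothetical Sharpe ratios
weights = proportional_weights(sharpes)            # weights sum to 1.0
```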
### Optimize Portfolio
```
# Run_portfolio_optimizer = True will run portfolio optimizer after portfolio analysis is complete
run_portfolio_optimizer = True
# Optimize sharpe ratio while specifying Annual Rate, Worst Typical Down Year,
# and Black Swan. Setting a constraint to None optimizes absolute Sharpe Ratio
# without regard to that constraint.
'''
constraints = {
'Annual Return': 12,
'Worst Typical Down Year': -5,
'Black Swan': None
}
'''
constraints = {
'Annual Return': 8,
'Worst Typical Down Year': None,
'Black Swan': -40
}
if run_portfolio_optimizer:
bb.optimizer(df, portfolio_option, constraints)
```
### Use Sharpe Ratio adjusted weights
Recommend that you also try using Sharpe Ratio adjusted weights and compare those results with the Optimized Portfolio.
It tends to produce a higher Annual Return while keeping the allocations more balanced than the Optimizer. (See 'Analysis Options' section).
<h1 id="CWPK-#20:-Basic-Knowledge-Graph-Management---I">CWPK #20: Basic Knowledge Graph Management - I</h1>
<h2 id="It's-Time-to-Learn-How-to-Do-Some-Productive-Work">It's Time to Learn How to Do Some Productive Work</h2>
<div style="float: left; width: 305px; margin-right: 10px;"><img title="Cooking with KBpedia" src="http://kbpedia.org/cwpk-files/cooking-with-kbpedia-305.png" width="305" /></div>
<p>Our previous installments of the <a href="https://www.mkbergman.com/cooking-with-python-and-kbpedia/" target="_blank" rel="noopener"><em>Cooking with Python and KBpedia</em></a> series relied on the full knowledge graph, <code>kbpedia_reference_concepts.owl</code>. That approach was useful to test out whether our current <a href="https://en.wikipedia.org/wiki/Python_(programming_language)" target="_blank" rel="noopener">Python</a> and <a href="https://en.wikipedia.org/wiki/Project_Jupyter" target="_blank" rel="noopener">Jupyter Notebook</a> configurations were adequate to handle the entire 58,000 reference concepts (RCs) in <a href="https://kbpedia.org/" target="_blank" rel="noopener">KBpedia</a>. However, a file that large makes finding and navigating stuff a bit harder. For this installment, and a few that come thereafter, we will restrict our example to the much smaller <a href="https://en.wikipedia.org/wiki/Upper_ontology" target="_blank" rel="noopener">upper ontology</a> to KBpedia, KKO (Kbpedia Knowledge Ontology). This ontology only has hundreds of concepts, but has the full suite of functionality and component types found in the full system.</p>
<p>In today's installment we will apply some of the basic commands in <a href="http://www.lesfleursdunormal.fr/static/informatique/owlready/index_en.html" target="_blank" rel="noopener">owlready2</a> we learned in the last installment. Owlready2 is the <a href="https://en.wikipedia.org/wiki/Application_programming_interface" target="_blank" rel="noopener">API</a> to our <a href="https://en.wikipedia.org/wiki/Web_Ontology_Language" target="_blank" rel="noopener">OWL2</a> knowledge graphs. In today's installment we will explore the standard <a href="https://en.wikipedia.org/wiki/Create,_read,_update_and_delete" target="_blank" rel="noopener">CRUD</a> (<em>create-read-update-delete</em>) actions against the classes (reference concepts) in our graph. Since our efforts to date have focused on the R in CRUD (for reading), our emphasis today will be on class creation, updates and deletions.</p>
<p>Remember, you may find the KKO reference file that we use for this installment, <code>kko.owl</code> where you first stored your KBpedia reference files. (What I call <code>main</code> in the code snippet below.)</p>
### Load KKO
<div style="background-color:#eee; border:1px dotted #aaa; vertical-align:middle; margin:15px 60px; padding:8px;"><strong>Which environment?</strong> The specific load routine you should choose below depends on whether you are using the online MyBinder service (the 'raw' version) or local files. See <a href="https://www.mkbergman.com/2347/cwpk-17-choosing-and-installing-an-owl-api/"><strong>CWPK #17</strong></a> for further details.</div>
#### Local File Option
Like in the last installment, we will follow good practice and use an absolute file or Web address to identify our existing ontology, KKO in this case. Unlike the last installment, we will comment out the little snippet of code we added to provide screen feedback that the file is properly referenced. (If you have any doubts, remove the comment character (<code>#</code>) to restore the feedback):
```
main = 'C:/1-PythonProjects/kbpedia/sandbox/kko.owl'
# with open(main) as fobj: # we are not commenting out the code to scroll through the file
# for line in fobj:
# print (line)
```
Again, you <code>shift+enter</code> or pick Run from the main menu to execute the cell contents. (If you chose to post the code lines to screen, you may clear the file listing from the screen by choosing Cell → All Output → Clear.)
We will next consolidate multiple steps from the prior installment to make absolute file references for the imported SKOS ontology and then to actually load the files:
```
skos_file = 'http://www.w3.org/2004/02/skos/core'
from owlready2 import *
kko = get_ontology(main).load()
skos = get_ontology(skos_file).load()
kko.imported_ontologies.append(skos)
```
#### MyBinder Option
If you are running this notebook online, do **NOT** run the above routines, since we will use the GitHub files, but now consolidate all steps into a single cell:
```
kko_file = 'https://raw.githubusercontent.com/Cognonto/CWPK/master/sandbox/builds/ontologies/kko.owl'
skos_file = 'http://www.w3.org/2004/02/skos/core'
from owlready2 import *
kko = get_ontology(kko_file).load()
skos = get_ontology(skos_file).load()
kko.imported_ontologies.append(skos)
```
### Check Load Results
OK, no matter which load option you used, we can again test to see if the ontologies registered in the system, only now specifying two base IRIs in a single command:
```
print(kko.base_iri,skos.base_iri)
```
We can also confirm that the two additional ontologies have been properly imported:
```
print(kko.imported_ontologies)
```
### Re-starting the Notebook
I have alluded to it before, but let's now be explicit about how to stop-and-start a notebook, perhaps just to see whether we can clear memory and test whether all steps up to this point are working properly. To do so, go to File → Save and Checkpoint, and then File → Close and Halt. (You can insert a Rename step in there should you wish to look at multiple versions of what you are working on.)
Upon closing, you will be returned to the main Jupyter Notebook directory screen, where you can navigate to the active file, click on it, and then after it loads, Run the cells up to this point to reclaim your prior working state.
### Inspecting KKO Contents
So, we threw some steps in the process above to confirm that we were finding our files and loading them. We can now check to see if the classes have loaded properly since remaining steps focus on managing them:
```
list(kko.classes())
```
Further, we know that KKO has a class called <code>Products</code>. We also want to see if our file load has properly captured its <code>subClassOf</code> relationships. (In its baseline configuration KKO <code>Products</code> has three sub-classes: <code>Primary ...</code>, <code>Secondary ...</code>, and <code>Tertiary ...</code>.) We will return to this cell below multiple times to confirm some of the later steps:
```
list(kko.Products.subclasses())
```
### Create a New Class
'Create' is the first part of the CRUD acronym. There are many ways to create new objects in Python and Owlready2. This section details three different examples. As you interact with these three examples, you may want to go back up to the cell above and test the <code>list(kko.Products.subclasses())</code> code against the method.
The first example defines a class <code>WooProducts</code> that is initially assigned as a subclass of <code>Thing</code> (the root of OWL), and then reassigned as a subclass of the <code>Products</code> class. Note that in the second cell of this method we assign a value of '<code>pass</code>' to it, which is a Python convention for enabling an assignment without actual values as a placeholder for later use. You may see the '<code>pass</code>' term frequently used as scripts set up their objects in the beginning of programs.
```
class WooProducts(Thing):
namespace = kko
class WooProducts(kko.Products):
pass
```
In the second method, we bypass the initial <code>Thing</code> assignment and directly assign the new class <code>WooFoo</code>:
```
class WooFoo(kko.Products):
namespace = kko
```
In the third of our examples, we instead use the native Python method of '<code>types</code>' to do the assignment directly. This can be a useful approach when we are wanting to process longer lists of assignments directly:
```
import types
with kko:
ProductsFoo = types.new_class("ProductsFoo", (kko.Products,))
```
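Note that <code>types.new_class</code> is plain Python rather than anything specific to Owlready2 -- it builds a class dynamically from a name and a tuple of bases, exactly as a <code>class</code> statement would. A minimal illustration with ordinary classes:
```
import types

class Products:
    pass

# Equivalent to:  class ProductsFoo(Products): pass
ProductsFoo = types.new_class("ProductsFoo", (Products,))

print(issubclass(ProductsFoo, Products))   # True
```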
### Update a Class
Unfortunately, there is no direct '<code>edit</code>' or '<code>update</code>' function in Owlready2. At the class level one can '<code>delete</code>' (or '<code>destroy</code>') a class (see below) and then create a new one, granted a two-step process. For properties, including class relationships such as <code>subClassOf</code>, there are built-in methods to '<code>.append</code>' or '<code>.remove</code>' the assignment without fully deleting the class or individual object. In this case, we remove <code>WooProducts</code> as a <code>subClassOf</code> <code>Products</code>:
```
WooProducts.is_a.remove(kko.Products)
```
Since updates tend to occur more for object properties and values, we discuss these options further two installments from now.
### Delete a Class
Deletion occurs through a 'destroy' function that completely removes the object and all of its references from the ontology.
```
destroy_entity(WooProducts)
```
Of course, other functions are available for the use of classes and individuals. See the **Additional Documentation** below for links explaining these options.
### Save the Changes
When all of your desired changes have been made, whether programmatically or via an interactive session such as this one, you are ready to save the knowledge graph for re-use. It is generally best to write out the modified ontology under a new file name to prevent overwriting your prior version. If, after inspection, you like your changes and see no problems, you can re-name this file back to the original name and make it your working version going forward (of course, use the file location of your own choice).
```
kko.save(file = "C:/1-PythonProjects/kbpedia/sandbox/kko-test.owl", format = "rdfxml")
```
Note that during a save you may also specify the format of the written ontology. We have been using '<code>rdfxml</code>' as our standard format, but you may also use '<code>ntriples</code>' (or other formats that may be added to the application over time).
### Inspect in Protege
OK, so after saving we can inspect our new file to make sure that all of the class changes above are now accurately reflected in the formal ontology. Here is the class view for Products:
<div style="margin: 10px auto; display: table;">
<img src="files/basic-management-1-1.png" title="Result of KKO Class Changes" width="800" alt="Result of KKO Class Changes" />
</div>
<div style="margin: 10px auto; display: table; font-style: italic;">
Figure 1: Result of KKO Class Changes
</div>
We can see that our file now has the updated file name (**1**), and the added classes appear in the KKO ontology (**2**).
As the inspection in Protégé confirms, our changes have been written to our formal ontology correctly. If we so choose, we can now re-name the file back to our working file name and continue with our work. By doing such checks early in our process, or whenever we introduce major new wrinkles, we can gain confidence that everything is working properly and skip labor-intensive checks thereafter as appropriate.
### Additional Documentation
Owlready2 has relevant additional documentation, with examples, for:
- [Class and instance management](https://owlready2.readthedocs.io/en/latest/class.html)
- [Class construction](https://owlready2.readthedocs.io/en/latest/restriction.html).
<div style="background-color:#efefff; border:1px dotted #ceceff; vertical-align:middle; margin:15px 60px; padding:8px;">
<span style="font-weight: bold;">NOTE:</span> This article is part of the <a href="https://www.mkbergman.com/cooking-with-python-and-kbpedia/" style="font-style: italic;">Cooking with Python and KBpedia</a> series. See the <a href="https://www.mkbergman.com/cooking-with-python-and-kbpedia/"><strong>CWPK</strong> listing</a> for other articles in the series. <a href="http://kbpedia.org/">KBpedia</a> has its own Web site.
</div>
<div style="background-color:#ebf8e2; border:1px dotted #71c837; vertical-align:middle; margin:15px 60px; padding:8px;">
<span style="font-weight: bold;">NOTE:</span> This <strong>CWPK
installment</strong> is available both as an online interactive
file <a href="https://mybinder.org/v2/gh/Cognonto/CWPK/master" ><img src="https://mybinder.org/badge_logo.svg" style="display:inline-block; vertical-align: middle;" /></a> or as a <a href="https://github.com/Cognonto/CWPK" title="CWPK notebook" alt="CWPK notebook">direct download</a> to use locally. Make sure to pick the correct installment number. For the online interactive option, pick the <code>*.ipynb</code> file. It may take a bit of time for the interactive option to load.</div>
<div style="background-color:#feeedc; border:1px dotted #f7941d; vertical-align:middle; margin:15px 60px; padding:8px;">
<div style="float: left; margin-right: 5px;"><img src="http://kbpedia.org/cwpk-files/warning.png" title="Caution!" width="32" /></div>I am at best an amateur with Python. There are likely more efficient methods for coding these steps than what I provide. I encourage you to experiment -- which is part of the fun of Python -- and to <a href="mailto:mike@mkbergman.com">notify me</a> should you make improvements.
</div>
---
```
import numpy as np
from scipy.io import loadmat
from sklearn.linear_model import LogisticRegression as LR
import matplotlib.pyplot as plt
%matplotlib inline
# Theano imports
import theano
theano.config.floatX = 'float32'
import theano.tensor as T
# Plotting utility
from utils import tile_raster_images as tri
```
# The dataset
The dataset is MNIST digits, a common toy dataset for testing machine learning methods on images. This is a subset of the MNIST set in which the images have also been shrunk in size. Let's load them and plot some. In addition to the images, there are also labels: 0-9 or even-odd.
Load the data.
```
data = loadmat('small_mnist.mat')
# Training data (images, 0-9, even-odd)
# Images are stored in a (batch, x, y) array
# Labels are integers
train_im = data['train_im']
train_y = data['train_y'].ravel()
train_eo = data['train_eo'].ravel()
# Validation data (images, 0-9, even-odd)
# Same format as training data
valid_im = data['valid_im']
valid_y = data['valid_y'].ravel()
valid_eo = data['valid_eo'].ravel()
```
Plot 10 of the training images. Rerun this cell to plot new images.
```
im_size = train_im.shape[-1]
order = np.random.permutation(train_im.shape[0])
ims = tri(train_im[order[:10]].reshape((-1, im_size**2)), (im_size, im_size), (1, 10), (1,1))
plt.imshow(ims, cmap='gray', interpolation='nearest')
plt.axis('off')
print('Labels: {}'.format(train_y[order[:10]]))
print('Odd-Even: {}'.format(train_eo[order[:10]]))
```
## Baseline linear classifier
Before we spend our precious time setting up and training deep networks on the data, let's see how well a simple linear classifier from sklearn does.
```
# Create the classifier to do multinomial classification
linear_classifier = LR(solver='lbfgs', multi_class='multinomial', C=0.1)
# Train and evaluate the classifier (note: .score returns accuracy, not error)
linear_classifier.fit(train_im.reshape(-1, im_size**2), train_y)
print('Training Accuracy on (0-9): {}'.format(linear_classifier.score(train_im.reshape(-1, im_size**2), train_y)))
print('Validation Accuracy on (0-9): {}'.format(linear_classifier.score(valid_im.reshape(-1, im_size**2), valid_y)))
```
Try training a linear classifier on the Even-Odd labels: train_eo!
# Using a Deep Nets library
If you're just starting out with deep nets and want to try them on a dataset quickly, it is probably easiest to start with an existing library rather than writing your own. There are now many different libraries written for Python. We'll be using Keras, which is designed to be easy to use. In MATLAB, there is the Neural Network Toolbox.
Keras documentation can be found here:
http://keras.io/
We'll build the next most complicated network compared to the linear classifier above: a two-layer network!
```
# Import things from Keras Library
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D
from keras.regularizers import l2
from keras.optimizers import SGD, Adam, RMSprop
from keras.utils import np_utils
```
## Fully connected MLP
This is a simple network made from two layers. On the Keras documentation page, you can find other nonlinearities under "Core Layers".
You can add more layers, change the layers, change the optimizer, or add dropout.
```
# Create the network!
mlp = Sequential()
# First fully connected layer
mlp.add(Dense(im_size**2 // 2, input_shape=(im_size**2,), W_regularizer=l2(0.001))) # number of hidden units
mlp.add(Activation('tanh')) # nonlinearity
print('Shape after layer 1: {}'.format(mlp.output_shape))
# Second fully connected layer with softmax output
mlp.add(Dropout(0.0)) # dropout is currently turned off, you may need to train for more epochs if nonzero
mlp.add(Dense(10)) # number of targets, 10 for y, 2 for eo
mlp.add(Activation('softmax'))
# Adam is a simple optimizer, SGD has more parameters and is slower but may give better results
opt = Adam()
#opt = RMSprop()
#opt = SGD(lr=0.1, momentum=0.9, decay=0.0001, nesterov=True)
print('')
mlp.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
mlp.fit(train_im.reshape(-1, im_size**2), np_utils.to_categorical(train_y), nb_epoch=20, batch_size=100)
tr_score = mlp.evaluate(train_im.reshape(-1, im_size**2), np_utils.to_categorical(train_y), batch_size=100)
va_score = mlp.evaluate(valid_im.reshape(-1, im_size**2), np_utils.to_categorical(valid_y), batch_size=100)
print('')
print('Train loss: {}, train accuracy: {}'.format(*tr_score))
print('Validation loss: {}, validation accuracy: {}'.format(*va_score))
```
## Convolutional MLP
We can also have the first layer be a set of small filters which are convolved with the images.
Try different parameters and see what happens. (This network might be slow.)
```
# Create the network!
cnn = Sequential()
# First fully connected layer
cnn.add(Convolution2D(20, 5, 5, input_shape=(1, im_size, im_size), border_mode='valid', subsample=(2, 2)))
cnn.add(Activation('tanh')) # nonlinearity
print('Shape after layer 1: {}'.format(cnn.output_shape))
# Take outputs and turn them into a vector
cnn.add(Flatten())
print('Shape after flatten: {}'.format(cnn.output_shape))
# Fully connected layer
cnn.add(Dropout(0.0)) # dropout is currently turned off, you may need to train for more epochs if nonzero
cnn.add(Dense(100)) # number of hidden units
cnn.add(Activation('tanh'))
# Second fully connected layer with softmax output
cnn.add(Dropout(0.0)) # dropout is currently turned off, you may need to train for more epochs if nonzero
cnn.add(Dense(10)) # number of targets, 10 for y, 2 for eo
cnn.add(Activation('softmax'))
# Adam is a simple optimizer, SGD has more parameters and is slower but may give better results
#opt = Adam()
#opt = RMSprop()
opt = SGD(lr=0.1, momentum=0.9, decay=0.0001, nesterov=True)
print('')
cnn.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
cnn.fit(train_im[:, np.newaxis, ...], np_utils.to_categorical(train_y), nb_epoch=20, batch_size=100)
tr_score = cnn.evaluate(train_im[:, np.newaxis, ...], np_utils.to_categorical(train_y), batch_size=100)
va_score = cnn.evaluate(valid_im[:, np.newaxis, ...], np_utils.to_categorical(valid_y), batch_size=100)
print('')
print('Train loss: {}, train accuracy: {}'.format(*tr_score))
print('Validation loss: {}, validation accuracy: {}'.format(*va_score))
```
# Visualizing the filters
## Linear classifier
```
W = linear_classifier.coef_
ims = tri(W, (im_size, im_size), (1, 10), (1,1))
plt.imshow(ims, cmap='gray', interpolation='nearest')
plt.axis('off')
```
## MLP
```
W = mlp.get_weights()[0].T
ims = tri(W, (im_size, im_size), (W.shape[0]//10, 10), (1,1))
plt.imshow(ims, cmap='gray', interpolation='nearest')
plt.axis('off')
```
## CNN
```
W = cnn.get_weights()[0]
ims = tri(W.reshape(-1, np.prod(W.shape[2:])), (W.shape[2], W.shape[3]), (W.shape[0]//10, 10), (1,1))
plt.imshow(ims, cmap='gray', interpolation='nearest')
plt.axis('off')
```
---
# Power Law Transformation
Normally the quality of an image is improved by enhancing contrast and sharpness.
Power-law transformations and piece-wise linear transformation functions both require user input: in the former case one has to choose the exponent appearing in the transformation function, while in the latter one has to choose the slopes and ranges of the straight-line segments that form the transformation function.
The power-law transformation is usually defined as

$s = c r^{\gamma}$

where $s$ and $r$ are the gray levels of the pixels in the output and input images, respectively, and $c$ is a constant.
Maximum contrast stretching occurs by choosing the value of $\gamma$ for which the transformation function has the maximum slope at $r = r_{max}$. That is, if $m$ is the slope of the transformation function, we should find the value of $\gamma$ that maximizes $m$ at $r = r_{max}$. Given the value of $r_{max}$, which corresponds to the peak in the histogram, we can determine the corresponding value of the exponent that maximizes the extent of the contrast stretching.
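As a quick numerical illustration of the formula (using hypothetical normalized gray levels and $c = 1$), note that $\gamma > 1$ maps every mid-range level to a darker value, which is the effect exploited by the code below:

```python
import numpy as np

c, gamma = 1.0, 1.5            # constants in s = c * r**gamma (gamma = 1.5, as in the code below)
r = np.array([0.2, 0.5, 0.8])  # hypothetical normalized input gray levels

s = c * np.power(r, gamma)

# With gamma > 1, every mid-range gray level is pushed toward black
print(np.all(s < r))  # → True
```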
```
from IPython.display import Image
PATH = "/Users/KIIT/Downloads/"
Image(filename = PATH + "powerlaw.png", width=400, height=400)
import cv2
import glob
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

src_root = "C:/DATA/gaussian_filtered_images/gaussian_filtered_images/"
dst_root = "C:/DATA1/gaussian_filtered_images/gaussian_filtered_images/"
categories = ["Mild", "Moderate", "No_DR", "Proliferate_DR", "Severe"]

i = 0  # running index used to name the saved files
for category in categories:
    files = glob.glob(src_root + category + "/*.png")  # reading files from the directory
    for myFile in files:
        img = cv2.imread(myFile, 1)
        img = img.astype('float32') / 255.0
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = cv2.pow(img, 1.5)  # power-law transformation with gamma = 1.5
        img = (img * 255.0).astype('uint8')
        img = Image.fromarray(img)
        img.save(dst_root + category + "/" + str(i) + ".png")  # saving the transformed image to another directory
        i = i + 1

plt.imshow(img)  # display the last transformed image
```
---
# Week 2 - Classical ML Models - Part I
## 3. Classification
As mentioned in the previous week, in classification problems we have a set of inputs that belong to 2 or more categories, and we have to train a model to assign new inputs to the corresponding categories.
Although there are many existing classification models, in this lecture we will focus on the **logistic regression**.
### Logistic Regression
Even though its name includes the term regression, it is more closely related to classification than regression models. Logistic regression is essentially used to calculate the probability of a binary event.
Before going too much in-depth to the logistic regression model, we will first analyze its *'building blocks'*.
#### Sigmoid function

The sigmoid function (seen above) maps any real value to the range between 0 and 1. As we train a logistic regression model, we are essentially aiming to find a threshold value: inputs whose outputs are above the threshold are classified as 1, while those below it are classified as 0.
The sigmoid function itself can be expressed mathematically as
$g(z) = \frac{1}{1 + \exp(-z)}$. In Python code, this could be expressed as:
```
import numpy as np
def sigmoid(z):
return 1.0 / (1 + np.exp(-z))
```
#### Logistic regression hypothesis
So far we know how to express the sigmoid function in mathematical and code notations. We also know that in the logistic regression the inputs are passed through the sigmoid in order to determine the classification threshold value.
However, how does this connection translate in mathematics?
As you might remember from the last notebook, the linear regression had hypothesis $ \hat{y} = \theta ^T x$. For the logistic regression, this hypothesis becomes: $ \hat{y} = \frac{1}{1 + \exp(-\theta ^T x)}$.
The hypothesis can be written as:
```
def hypothesis(X, theta):
z = np.dot(X, theta)
y_hat = sigmoid(z)
return y_hat
```
#### Cost function
At this point, we have a function for the logistic probability output. However, in order to actually train our model, we also have to define a cost function.
In the linear regression case, our cost function had the following form:
$\frac{1}{m} \sum_{i = 1}^m(\hat{y_{i}} - y_{i}) ^ 2$. Here $\hat{y_{i}}$ is the output of our probability function while $y_{i}$ is the actual label. On the other hand, our probability function $\hat{y}$ has a more complicated expression that would make it quite hard to find its optimum (function would have many local minimum points). Therefore, the cost function for the logistic regression can be written as: $J(\theta) = -\frac{1}{m} \sum_{i = 1}^m (y_{i} \log{(\hat{y}_{i})} + (1-y_{i}) \log{(1 - \hat{y}_{i})})$.
In code this can be expressed as:
```
def cost(y_hat, y):
    # Note the leading minus sign from the J(theta) formula above
    cost = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    return cost
```
#### Gradient
As it has been mentioned in the previous notebook, the optimization process in ML models is usually based on the gradient descent algorithm. We will analyze it much more in depth in the near future lectures, however, just for the purpose of this tutorial, understand it as a way of finding the minimum of the function. As you might remember from the math lessons, the minimum point can be found with the use of derivatives. As we want to analyze the cost function in respect to $\theta$, we need to differentiate our cost function: $\frac{1}{m} X^T (\hat{y} - y)$.
The gradient descent formula for updating the $\theta$ values therefore becomes:
$\theta = \theta - lr \cdot \frac{1}{m} X^T (\hat{y} - y)$
The code for finding the optimal $\theta$ becomes:
```
def gradient_descent(x_train, y_train, lr, epochs):
intercept = np.ones((x_train.shape[0], 1))
x_train = np.concatenate((intercept, x_train), axis=1)
theta = np.zeros(x_train.shape[1])
m = len(x_train)
for i in range(epochs):
y_hat = hypothesis(x_train, theta)
gradient = np.dot(x_train.T, (y_hat - y_train)) / m
theta -= lr * gradient
return theta
```
#### Prediction
After finding the optimal $\theta$, the prediction process is quite straightforward: we simply evaluate the hypothesis on new data. The output of this function ($\hat{y}$) will be a probability in the range from $0$ to $1$; therefore, we map values according to a $0.5$ threshold.
```
def predict(x_train, y_train, x_test, lr, epochs):
    theta = gradient_descent(x_train, y_train, lr, epochs)
intercept = np.ones((x_test.shape[0], 1))
x_test = np.concatenate((intercept, x_test), axis=1)
y_hat = hypothesis(x_test, theta)
y_pred = []
for i in range(len(y_hat)):
if(y_hat[i] >= 0.5):
y_pred.append(1)
else:
y_pred.append(0)
return y_pred
```
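Putting the pieces together, here is a compact, self-contained sketch of the whole pipeline on synthetic one-dimensional data. The data, learning rate, and epoch count are all hypothetical choices, and the helper functions are condensed versions of those defined above; the Iris exercise below is left for you:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1 + np.exp(-z))

def gradient_descent(x, y, lr, epochs):
    x = np.concatenate((np.ones((x.shape[0], 1)), x), axis=1)  # add intercept column
    theta = np.zeros(x.shape[1])
    m = len(x)
    for _ in range(epochs):
        y_hat = sigmoid(np.dot(x, theta))
        theta -= lr * np.dot(x.T, (y_hat - y)) / m
    return theta

# Hypothetical, linearly separable 1-D data: class 0 around -2, class 1 around +2
rng = np.random.default_rng(0)
x = np.concatenate((rng.normal(-2, 1, 50), rng.normal(2, 1, 50))).reshape(-1, 1)
y = np.concatenate((np.zeros(50), np.ones(50)))

theta = gradient_descent(x, y, lr=0.1, epochs=1000)

# Predict with the 0.5 threshold described above
x_b = np.concatenate((np.ones((x.shape[0], 1)), x), axis=1)
y_pred = (sigmoid(np.dot(x_b, theta)) >= 0.5).astype(int)
accuracy = (y_pred == y).mean()
print(accuracy)  # close to 1.0 for well-separated classes
```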
### Exercise
Let's now put it all together into a simple logistic model. For this example exercise, we will use one of the sklearn datasets (Iris). Do not change the cell below that imports the data; **only write code in the cells with comments**.
```
############-------Do not change this cell-------##########################
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import math
from sklearn.metrics import classification_report,accuracy_score
iris = load_iris()
#Features
X = iris.data[:, :2]
y = (iris.target != 0) * 1
############-------Tasks to do-------------------##########################
#Split the X, y dataset into x_train, x_test, y_train and y_test
#---Building the model
#sigmoid function
#hypothesis function
#cost function
#gradient descent function
#prediction function
#assigning learning rate and epochs values (can leave as it is or change it)
lr = 0.01
epochs = 100
#predicting values
y_pred = #here should be your prediction function
print('Accuracy on test set: ' + str(accuracy_score(y_test, y_pred)))
```
---
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm as tqdm
%matplotlib inline
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import random
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
type(trainset.targets)
type(trainset.data)
index1 = [np.where(np.array(trainset.targets)==0)[0] , np.where(np.array(trainset.targets)==1)[0], np.where(np.array(trainset.targets)==2)[0] ]
index1 = np.concatenate(index1,axis=0)
len(index1) #15000
#index1
disp = np.array(trainset.targets)
true = 8000
total = 35000
sin = total-true
sin
epochs= 100
indices = np.random.choice(index1,true)
_,count = np.unique(disp[indices],return_counts=True)
print(count, indices.shape)
index = np.where(np.logical_and(np.logical_and(np.array(trainset.targets)!=0, np.array(trainset.targets)!=1), np.array(trainset.targets)!=2))[0] #35000
len(index)
req_index = np.random.choice(index.shape[0], sin, replace=False)
index = index[req_index]
index.shape
values = np.random.choice([0,1,2],size= len(index)) #labeling others as 0,1,2
print(sum(values ==0),sum(values==1), sum(values==2))
# trainset.data = torch.tensor( trainset.data )
# trainset.targets = torch.tensor(trainset.targets)
trainset.data = np.concatenate((trainset.data[indices],trainset.data[index]))
trainset.targets = np.concatenate((np.array(trainset.targets)[indices],values))
trainset.targets.shape, trainset.data.shape
# mnist_trainset.targets[index] = torch.Tensor(values).type(torch.LongTensor)
j = 20078 # without shuffling, indices up to the true-label count have correct labels; after that, labels are corrupted
print(plt.imshow(trainset.data[j]),trainset.targets[j])
trainloader = torch.utils.data.DataLoader(trainset, batch_size=256,shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=256,shuffle=False, num_workers=2)
classes = ('zero', 'one','two')
dataiter = iter(trainloader)
images, labels = next(dataiter)
images[:4].shape
# def imshow(img):
# img = img / 2 + 0.5 # unnormalize
# npimg = img.numpy()
# plt.imshow(np.transpose(npimg, (1, 2, 0)))
# plt.show()
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img#.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
imshow(torchvision.utils.make_grid(images[:10]))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(10)))
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
class Conv_module(nn.Module):
def __init__(self,inp_ch,f,s,k,pad):
super(Conv_module,self).__init__()
self.inp_ch = inp_ch
self.f = f
self.s = s
self.k = k
self.pad = pad
self.conv = nn.Conv2d(self.inp_ch,self.f,k,stride=s,padding=self.pad)
self.bn = nn.BatchNorm2d(self.f)
self.act = nn.ReLU()
def forward(self,x):
x = self.conv(x)
x = self.bn(x)
x = self.act(x)
return x
class inception_module(nn.Module):
def __init__(self,inp_ch,f0,f1):
super(inception_module, self).__init__()
self.inp_ch = inp_ch
self.f0 = f0
self.f1 = f1
self.conv1 = Conv_module(self.inp_ch,self.f0,1,1,pad=0)
self.conv3 = Conv_module(self.inp_ch,self.f1,1,3,pad=1)
#self.conv1 = nn.Conv2d(3,self.f0,1)
#self.conv3 = nn.Conv2d(3,self.f1,3,padding=1)
def forward(self,x):
x1 = self.conv1.forward(x)
x3 = self.conv3.forward(x)
#print(x1.shape,x3.shape)
x = torch.cat((x1,x3),dim=1)
return x
class downsample_module(nn.Module):
def __init__(self,inp_ch,f):
super(downsample_module,self).__init__()
self.inp_ch = inp_ch
self.f = f
self.conv = Conv_module(self.inp_ch,self.f,2,3,pad=0)
self.pool = nn.MaxPool2d(3,stride=2,padding=0)
def forward(self,x):
x1 = self.conv(x)
#print(x1.shape)
x2 = self.pool(x)
#print(x2.shape)
x = torch.cat((x1,x2),dim=1)
return x,x1
class inception_net(nn.Module):
def __init__(self):
super(inception_net,self).__init__()
self.conv1 = Conv_module(3,96,1,3,0)
self.incept1 = inception_module(96,32,32)
self.incept2 = inception_module(64,32,48)
self.downsample1 = downsample_module(80,80)
self.incept3 = inception_module(160,112,48)
self.incept4 = inception_module(160,96,64)
self.incept5 = inception_module(160,80,80)
self.incept6 = inception_module(160,48,96)
self.downsample2 = downsample_module(144,96)
self.incept7 = inception_module(240,176,60)
self.incept8 = inception_module(236,176,60)
self.pool = nn.AvgPool2d(5)
self.linear = nn.Linear(236,10)
def forward(self,x):
x = self.conv1.forward(x)
#act1 = x
x = self.incept1.forward(x)
#act2 = x
x = self.incept2.forward(x)
#act3 = x
x,act4 = self.downsample1.forward(x)
x = self.incept3.forward(x)
#act5 = x
x = self.incept4.forward(x)
#act6 = x
x = self.incept5.forward(x)
#act7 = x
x = self.incept6.forward(x)
#act8 = x
x,act9 = self.downsample2.forward(x)
x = self.incept7.forward(x)
#act10 = x
x = self.incept8.forward(x)
#act11 = x
#print(x.shape)
x = self.pool(x)
#print(x.shape)
x = x.view(-1,1*1*236)
x = self.linear(x)
return x
inc = inception_net()
inc = inc.to("cuda")
criterion_inception = nn.CrossEntropyLoss()
optimizer_inception = optim.SGD(inc.parameters(), lr=0.01, momentum=0.9)
acti = []
loss_curi = []
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_inception.zero_grad()
# forward + backward + optimize
outputs = inc(inputs)
loss = criterion_inception(outputs, labels)
loss.backward()
optimizer_inception.step()
# print statistics
running_loss += loss.item()
if i % 50 == 49: # print every 50 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 50))
ep_lossi.append(running_loss/50) # loss per minibatch
running_loss = 0.0
loss_curi.append(np.mean(ep_lossi)) #loss per epoch
if(np.mean(ep_lossi)<=0.03):
break
# if (epoch%5 == 0):
# _,actis= inc(inputs)
# acti.append(actis)
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = inc(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 35000 train images: %d %%' % ( 100 * correct / total))
total,correct
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= inc(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
out = np.concatenate(out,axis=0)
pred = np.concatenate(pred,axis=0)
index = np.logical_or(np.logical_or(out ==1,out==0),out == 2)
print(index.shape)
acc = sum(out[index] == pred[index])/sum(index)
print('Accuracy of the network on the 0-1-2 test images: %d %%' % (
100*acc))
np.unique(out[index],return_counts = True) #== pred[index])
np.unique(pred[index],return_counts = True) #== pred[index])
sum(out[index] == pred[index])
cnt = np.zeros((3,3))
true = out[index]
predict = pred[index]
for i in range(len(true)):
cnt[true[i]][predict[i]] += 1
cnt
# torch.save(inc.state_dict(),"/content/drive/My Drive/Research/CIFAR Random/model_True_"+str(true_data_count)+"_epoch_"+str(epochs)+".pkl")
```
---
# Dense Sentiment Classifier
In this notebook, we classify IMDB reviews by sentiment with a fully connected (dense) network.
#### Load dependencies
```
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding
from keras.callbacks import ModelCheckpoint
import os
from sklearn.metrics import roc_auc_score
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Set hyperparameters
```
output_dir = './model_output/dense'
epochs = 4
batch_size = 128
n_dim = 64
n_unique_words = 5000
n_words_to_skip = 50
max_review_length = 100
pad_type = trunc_type = 'pre'
n_dense = 64
dropout = 0.5
```
#### Load data
```
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words, skip_top=n_words_to_skip)
x_train[0]
x_train[0:6]
for x in x_train[0:6]:
print(len(x))
y_train[0:6]
len(x_train), len(x_valid)
```
#### Restore words from index
```
word_index = keras.datasets.imdb.get_word_index()
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["PAD"] = 0
word_index["START"] = 1
word_index["UNK"] = 2
word_index
index_word = {v:k for k,v in word_index.items()}
x_train[0]
' '.join(index_word[id] for id in x_train[0])
```
#### If we want to see all words
```
(all_x_train, _), (all_x_valid, _) = imdb.load_data()
' '.join(index_word[id] for id in all_x_train[0])
```
#### Preprocess data
```
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_train[0:6]
for x in x_train[0:6]:
print(len(x))
' '.join(index_word[id] for id in x_train[5])
```
#### Design NN Architecture
```
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(n_dense, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary()
n_dim, n_unique_words, n_dim*n_unique_words
max_review_length, n_dim, n_dim*max_review_length
n_dense, n_dim*max_review_length*n_dense + n_dense
```
#### configure model
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
```
#### Train!
```
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
```
#### Evaluate
```
model.load_weights(output_dir+'/weights.01.hdf5')
y_hat = model.predict_proba(x_valid)
y_hat[0]
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
pct_auc = roc_auc_score(y_valid, y_hat)*100.0
"{:0.2f}".format(pct_auc)
float_y_hat = []
for y in y_hat:
float_y_hat.append(y[0])
ydf = pd.DataFrame(list(zip(float_y_hat,y_valid)), columns=['y_hat', 'y'])
ydf.head(10)
' '.join(index_word[id] for id in all_x_valid[0])
' '.join(index_word[id] for id in all_x_valid[7])
```
#### Instances where neural net was wrong
(review was negative but prediction was positive)
```
ydf[(ydf.y == 0) & (ydf.y_hat > 0.9)].head(10)
' '.join(index_word[id] for id in all_x_valid[2397])
```
#### Instances where neural net was wrong
(review was positive but prediction was negative)
```
ydf[(ydf.y == 1) & (ydf.y_hat < 0.1)].head(10)
' '.join(index_word[id] for id in all_x_valid[2027])
```
---
# Using DALI in PyTorch
### Overview
This example shows how to use DALI in PyTorch.
This example uses CaffeReader.
See other [examples](../../index.rst) for details on how to use different data formats.
Let us start by defining some global constants.
`DALI_EXTRA_PATH` environment variable should point to the place where data from [DALI extra repository](https://github.com/NVIDIA/DALI_extra) is downloaded. Please make sure that the proper release tag is checked out.
```
import os.path
test_data_root = os.environ['DALI_EXTRA_PATH']
# Caffe LMDB
lmdb_folder = os.path.join(test_data_root, 'db', 'lmdb')
N = 8 # number of GPUs
BATCH_SIZE = 128 # batch size per GPU
ITERATIONS = 32
IMAGE_SIZE = 3
```
Let us define a pipeline with a reader:
```
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
class CaffeReadPipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id, num_gpus):
super(CaffeReadPipeline, self).__init__(batch_size, num_threads, device_id)
self.input = ops.CaffeReader(path = lmdb_folder,
random_shuffle = True, shard_id = device_id, num_shards = num_gpus)
self.decode = ops.ImageDecoder(device = "mixed", output_type = types.RGB)
self.resize = ops.Resize(device = "gpu",
image_type = types.RGB,
interp_type = types.INTERP_LINEAR)
self.cmn = ops.CropMirrorNormalize(device = "gpu",
output_dtype = types.FLOAT,
crop = (227, 227),
image_type = types.RGB,
mean = [128., 128., 128.],
std = [1., 1., 1.])
self.uniform = ops.Uniform(range = (0.0, 1.0))
self.resize_rng = ops.Uniform(range = (256, 480))
def define_graph(self):
inputs, labels = self.input(name="Reader")
images = self.decode(inputs)
images = self.resize(images, resize_shorter = self.resize_rng())
output = self.cmn(images, crop_pos_x = self.uniform(),
crop_pos_y = self.uniform())
return (output, labels)
```
Let us create the pipeline and pass it to the PyTorch generic iterator:
```
from __future__ import print_function
import numpy as np
from nvidia.dali.plugin.pytorch import DALIGenericIterator
label_range = (0, 999)
pipes = [CaffeReadPipeline(batch_size=BATCH_SIZE, num_threads=2, device_id = device_id, num_gpus = N) for device_id in range(N)]
pipes[0].build()
dali_iter = DALIGenericIterator(pipes, ['data', 'label'], pipes[0].epoch_size("Reader"))
for i, data in enumerate(dali_iter):
if i >= ITERATIONS:
break
# Testing correctness of labels
for d in data:
label = d["label"]
image = d["data"]
## labels need to be integers
assert(np.equal(np.mod(label, 1), 0).all())
## labels need to be in the range label_range
assert((label >= label_range[0]).all())
assert((label <= label_range[1]).all())
print("OK")
```
| github_jupyter |
## Applying Neural Networks on Material Science dataset
The given dataset contains certain microstructural properties such as yield strength, oxygen content, percentage of reheated microstructure, and fraction of acicular ferrite. Since there are just 4 features and the dataset has only 59 datapoints, it is tough to obtain very high accuracy.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score
data = pd.read_csv('/Users/chiragbhattad/Downloads/DDP/Charpy Analysis/dataset.csv')
features = ['YS', 'O2', 'Reheated', 'ac_ferr']
target = ['T27J']
training_data = data[0:40]
test_data = data[40:60]
X_train = training_data[features]
Y_train = training_data[target]
X_test = test_data[features]
Y_test = test_data[target]
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.set_facecolor('#FFEFD5')
plt.plot(range(40), Y_train, label = "Train Data")
plt.title('Training Dataset Temperatures')
plt.ylabel('Temperature')
plt.show()
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.set_facecolor('#FFEFD5')
plt.plot(range(19), Y_test, label = "Test Data")
plt.title('Test Dataset Temperatures')
plt.ylabel('Temperature')
plt.show()
regr = linear_model.LinearRegression()
regr.fit(X_train, Y_train)
pred = regr.predict(X_test)
print('Coefficients: \n', regr.coef_)
print('Mean Squared Error: ', mean_squared_error(Y_test, pred))
print('Variance score:', r2_score(Y_test, pred))
plt.plot(range(19), Y_test, label = "Original Data")
plt.plot(range(19), pred, label = "Predicted Data")
plt.legend(loc='best')
plt.ylabel('Temperature')
plt.show()
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf
# Use separate scalers for features and target, fitted on training data only;
# calling fit_transform on the test set would leak test statistics into the scaling.
x_scaler = MinMaxScaler()
scaler = MinMaxScaler()  # target scaler; also used later for inverse_transform
X_train = x_scaler.fit_transform(training_data[features].to_numpy())
Y_train = scaler.fit_transform(training_data[target].to_numpy())
X_test = x_scaler.transform(test_data[features].to_numpy())
Y_test = scaler.transform(test_data[target].to_numpy())
Y_test
def neural_network(X_data, input_dim):
W_1 = tf.Variable(tf.random_uniform([input_dim,10]))
b_1 = tf.Variable(tf.zeros([10]))
layer_1 = tf.add(tf.matmul(X_data, W_1), b_1)
layer_1 = tf.nn.relu(layer_1)
W_2 = tf.Variable(tf.random_uniform([10,10]))
b_2 = tf.Variable(tf.zeros([10]))
layer_2 = tf.add(tf.matmul(layer_1, W_2), b_2)
layer_2 = tf.nn.relu(layer_2)
W_0 = tf.Variable(tf.random_uniform([10,1]))
b_0 = tf.Variable(tf.zeros([1]))
output = tf.add(tf.matmul(layer_2, W_0), b_0)
return output
xs = tf.placeholder("float")
ys = tf.placeholder("float")
output = neural_network(xs, 4)
cost = tf.reduce_mean(tf.square(output-ys))
train = tf.train.GradientDescentOptimizer(0.001).minimize(cost)
c_t = []
c_test = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
for i in range(100):
for j in range(X_train.shape[0]):
sess.run([cost,train], feed_dict = {xs:X_train[j,:].reshape(1,4), ys:Y_train[j]})
c_t.append(sess.run(cost, feed_dict={xs:X_train, ys: Y_train}))
c_test.append(sess.run(cost, feed_dict={xs: X_test, ys: Y_test}))
print('Epoch: ', i, 'Cost: ', c_t[i])
pred = sess.run(output, feed_dict = {xs:X_test})
print('Test cost: ', sess.run(cost, feed_dict={xs: X_test, ys: Y_test}))
print(Y_test)
Y_test = Y_test.reshape(-1,1)
Y_test = scaler.inverse_transform(Y_test)
pred = pred.reshape(-1,1)
pred = scaler.inverse_transform(pred)
Y_test
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.set_facecolor('#FFEFD5')
plt.plot(range(19), Y_test, label = "Original Data")
plt.plot(range(19), pred, label = "Predicted Data")
plt.title('Comparing original values with the model')
plt.legend(loc='best')
plt.ylabel('Temperature')
plt.show()
```
| github_jupyter |
```
import numpy as np
import json
import warnings
import operator
import h5py
from keras.models import model_from_json
from keras import backend as K
from keras.utils import get_custom_objects
warnings.filterwarnings("ignore")
size_title = 18
size_label = 14
n_pred = 2
def read_file(file_path):
with open(file_path, 'r') as data_file:
data = json.loads(data_file.read())
return data
def create_model(model_path):
reverse_dictionary = dict((str(v), k) for k, v in dictionary.items())
model_weights = list()
weight_ctr = 0
while True:
try:
d_key = "weight_" + str(weight_ctr)
weights = trained_model.get(d_key).value
model_weights.append(weights)
weight_ctr += 1
except Exception as exception:
break
# set the model weights
loaded_model.set_weights(model_weights)
return loaded_model, dictionary, reverse_dictionary, compatibile_tools
def verify_model(model, tool_sequence, labels, dictionary, reverse_dictionary, compatible_tools, class_weights, topk=20, max_seq_len=25):
tl_seq = tool_sequence.split(",")
last_tool_name = reverse_dictionary[str(tl_seq[-1])]
last_compatible_tools = compatible_tools[last_tool_name]
sample = np.zeros(max_seq_len)
for idx, tool_id in enumerate(tl_seq):
sample[idx] = int(tool_id)
sample_reshaped = np.reshape(sample, (1, max_seq_len))
tool_sequence_names = [reverse_dictionary[str(tool_pos)] for tool_pos in tool_sequence.split(",")]
# predict next tools for a test path
prediction = model.predict(sample_reshaped, verbose=0)
weight_val = list(class_weights.values())
weight_val = np.reshape(weight_val, (len(weight_val),))
prediction = np.reshape(prediction, (prediction.shape[1],))
prediction_pos = np.argsort(prediction, axis=-1)
# get topk prediction
topk_prediction_pos = prediction_pos[-topk:]
topk_prediction_val = [int(prediction[pos] * 100) for pos in topk_prediction_pos]
topk_prediction_val = [(val * 100) / np.max(topk_prediction_val) for val in topk_prediction_val]
# read tool names using reverse dictionary
pred_tool_ids = [reverse_dictionary[str(tool_pos)] for tool_pos in topk_prediction_pos if tool_pos > 0]
actual_next_tool_ids = list(set(pred_tool_ids).intersection(set(last_compatible_tools.split(","))))
pred_tool_ids_sorted = dict()
for (tool_pos, tool_pred_val) in zip(topk_prediction_pos, topk_prediction_val):
try:
tool_name = reverse_dictionary[str(tool_pos)]
if tool_name not in last_tool_name and tool_name in actual_next_tool_ids:
pred_tool_ids_sorted[tool_name] = tool_pred_val
except:
continue
pred_tool_ids_sorted = dict(sorted(pred_tool_ids_sorted.items(), key=lambda kv: kv[1], reverse=True))
cls_wt = dict()
usg_wt = dict()
inv_wt = dict()
ids_tools = dict()
keys = list(pred_tool_ids_sorted.keys())
for k in keys:
try:
cls_wt[k] = np.round(class_weights[str(data_dict[k])], 2)
usg_wt[k] = np.round(usage_weights[k], 2)
inv_wt[k] = np.round(inverted_weights[str(data_dict[k])], 2)
except:
continue
print("Predicted tools: \n")
print(pred_tool_ids_sorted)
print()
print("Class weights: \n")
cls_wt = dict(sorted(cls_wt.items(), key=lambda kv: kv[1], reverse=True))
print(cls_wt)
print()
print("Usage weights: \n")
usg_wt = dict(sorted(usg_wt.items(), key=lambda kv: kv[1], reverse=True))
print(usg_wt)
print()
total_usage_wt = np.mean(list(usg_wt.values()))
print("Mean usage wt: %0.4f" % (total_usage_wt))
print()
print("Inverted weights: \n")
inv_wt = dict(sorted(inv_wt.items(), key=lambda kv: kv[1], reverse=True))
print(inv_wt)
for key in pred_tool_ids_sorted:
ids_tools[key] = dictionary[key]
print()
print("Tool ids")
print(ids_tools)
print("======================================")
return cls_wt, usg_wt, inv_wt, pred_tool_ids_sorted
base_path = "data/models/"
model_path = base_path + "model_rnn_custom_loss.hdf5"
trained_model = h5py.File(model_path, 'r')
model_config = json.loads(trained_model.get('model_config').value)
class_weights = json.loads(trained_model.get('class_weights').value)
loaded_model = model_from_json(model_config)
dictionary = json.loads(trained_model.get('data_dictionary').value)
compatibile_tools = json.loads(trained_model.get('compatible_tools').value)
best_params = json.loads(trained_model.get('best_parameters').value)
model, dictionary, reverse_dictionary, compatibile_tools = create_model(model_path)
print(reverse_dictionary)
topk = 30
tool_seq = "605"
verify_model(model, tool_seq, "", dictionary, reverse_dictionary, compatibile_tools, class_weights, topk)
class_weights["666"]
```
| github_jupyter |
```
import matplotlib.pyplot as plt
import math
class Polygon_b:
def __init__ (self, xlist,ylist,col):
self.xlist=xlist
self.ylist=ylist
self.col=col
def display(self):
plt.fill(self.xlist,self.ylist,c=self.col)
#swiss flag
plt.figure(figsize=(4,4))
plt.axis('equal')
bg=Polygon_b([0,1,1,0],[0,0,1,1],[1,0,0])
bg.display()
class Polygon:
def __init__ (self):
self.x=[]
self.y=[]
self.col=[]
def rect(x1,y1,x2,y2,c):
r=Polygon()
r.x=[x1,x2,x2,x1]
r.y=[y1,y1,y2,y2]
r.col=c
return r
def star(xpos,ypos,rad,peaks,col):
s=Polygon()
res=peaks*2
a=2*math.pi/res
for i in range(res):
if i%2==0:
radius=rad*0.4
else:
radius=rad
s.x.append(xpos+math.cos(i*a)*radius)
s.y.append(ypos+math.sin(i*a)*radius)
s.col=col
return s
def display_flag(canvas,objects):
plt.figure(figsize=canvas)
plt.axis('off')
plt.axis('equal')
for o in objects:
plt.fill(o.x,o.y,c=o.col)
objs=[]
bg=rect(0,0,1,1,[1,0,0])
objs.append(bg)
cross_1=rect(0.2,0.4,0.8,0.6,[1,1,1])
print (cross_1.x)
objs.append(cross_1)
cross_2=rect(0.4,0.2,0.6,0.8,[1,1,1])
objs.append(cross_2)
#star1 = star(0.825,0.8,0.1,5,[1,1,1])
#objs.append(star1)
display_flag((6,6),objs)
#hungarian flag
obj_hun=[]
red_1=rgb_con(227, 32, 38)
green_1 = rgb_con(30, 117, 32)
stripe1=rect(0,0.666,1.5,1,red_1)
obj_hun.append(stripe1)
stripe2=rect(0,0.333,1.5,0.666,[.99,.99,.99])
obj_hun.append(stripe2)
stripe3=rect(0,0,1.5,0.333,green_1)
obj_hun.append(stripe3)
display_flag((7.5,5),obj_hun)
def rgb_con(r,g,b,a=255):  # alpha defaults to fully opaque so 3-argument calls work
rc = 1/255*r
gc = 1/255*g
bc = 1/255*b
ac = 1/255*a
c = [rc,gc,bc,ac]
return c
rgb_con(255,255,0,100)
#gradient color list in a given range from a starting color
def grad_list_w(color,color_range, count):
r=color[0]
g=color[1]
b=color[2]
a=color[3]
# a=color.sort()
# rgb=a[0]
c=count-1
step_r = ((1-r)*color_range)/c
step_g = ((1-g)*color_range)/c
step_b = ((1-b)*color_range)/c
color_list=[]
for i in range(count):
color_list.append([r+step_r*i, g+step_g*i, b+step_b*i,a])
return color_list
def grad_list_b(color,color_range, count):
r=color[0]
g=color[1]
b=color[2]
a=color[3]
c=count-1
step_r = (r*color_range)/c
step_g = (g*color_range)/c
step_b = (b*color_range)/c
color_list=[]
for i in range(count):
color_list.append([r-step_r*i, g-step_g*i, b-step_b*i,a])
return color_list
#flag PIMP
##gradient_array
obj_hun=[]
color1 = rgb_con(250,0,0,255)
color2 = rgb_con(255,255,255,255)
color3 = rgb_con(0,255,0,255)
W = 1.9 #width (height=1)
Ny=6 #number y dir
Nx=12
y = 1/Ny
x = W/Nx
totalcount = Nx
M = 0.9 #modulo of gradiency
color1 = grad_list_w(color1,M,totalcount)
color2 = grad_list_w(color2,M,totalcount)
color3 = grad_list_w(color3,M,totalcount)
print(color1)
for j in range (Nx):
X = x*(j+1)
X0 = x*j
if j%3==0:
colorlist1=color1
colorlist2=color2
colorlist3=color3
elif j%3==1:
colorlist1=color3
colorlist2=color1
colorlist3=color2
else:
colorlist1=color2
colorlist2=color3
colorlist3=color1
for i in range (Ny):
stripe1=rect(X0, y*0.666+i*y, X, y*1+i*y, colorlist1[j])
obj_hun.append(stripe1)
stripe2=rect(X0, y*0.333+y*i, X, y*0.666+i*y, colorlist2[j])
obj_hun.append(stripe2)
stripe3=rect(X0,0+i*y,X,0.333*y+i*y,colorlist3[j])
obj_hun.append(stripe3)
display_flag((20,20),obj_hun)
#flag PIMP_2
##gradient_array intersecting flags
obj_hun=[]
color1 = rgb_con(250,0,0,100)
color2 = rgb_con(255,255,255,100)
color3 = rgb_con(0,255,0,100)
W = 1.9 #width (height=1)
Ny=6 #number y dir
Nx=6
y = 1/Ny
x = W/Nx
mx=1.2*x #flags scale modulo x dir
my=1.2*y
M = 0.5 #modulo of gradiency
#_______________________________XXXXXXXXxx
totalcount = Nx
color1 = grad_list_b(color1,M,totalcount)
color2 = grad_list_b(color2,M,totalcount)
color3 = grad_list_b(color3,M,totalcount)
print(color1)
for j in range (Nx):
X = x*(j+1)
X0 = x*j
if j%3==0:
colorlist1=color1
colorlist2=color2
colorlist3=color3
elif j%3==1:
colorlist1=color3
colorlist2=color1
colorlist3=color2
else:
colorlist1=color2
colorlist2=color3
colorlist3=color1
for i in range (Ny):
stripe1=rect(X0, y*0.666+i*y, X+mx, y*1+i*y+my, colorlist1[j])
obj_hun.append(stripe1)
stripe2=rect(X0, y*0.333+y*i, X+mx, y*0.666+i*y+my, colorlist2[j])
obj_hun.append(stripe2)
stripe3=rect(X0, 0+i*y, X+mx, 0.333*y+i*y+my, colorlist3[j])
obj_hun.append(stripe3)
display_flag((20,15),obj_hun)
#flag PIMP_2_blueish
##gradient_array intersecting flags
obj_hun=[]
color1 = rgb_con(66, 135, 245,50)
color2 = rgb_con(11, 198, 227,50)
color3 = rgb_con(115, 245, 197,75)
W = 1.9 #width (height=1)
Ny=6 #number y dir
Nx=6
y = 1/Ny
x = W/Nx
mx=1.2*x #flags scale modulo x dir
my=1.2*y
M = 0.8 #modulo of gradiency
#_______________________________XXXXXXXXxx
totalcount = Nx
color1 = grad_list_w(color1,M,totalcount)
color2 = grad_list_w(color2,M,totalcount)
color3 = grad_list_w(color3,M,totalcount)
print(color1)
for j in range (Nx):
X = x*(j+1)
X0 = x*j
if j%3==0:
colorlist1=color1
colorlist2=color2
colorlist3=color3
elif j%3==1:
colorlist1=color3
colorlist2=color1
colorlist3=color2
else:
colorlist1=color2
colorlist2=color3
colorlist3=color1
for i in range (Ny):
stripe1=rect(X0, y*0.666+i*y, X+mx, y*1+i*y+my, colorlist1[j])
obj_hun.append(stripe1)
stripe2=rect(X0, y*0.333+y*i, X+mx, y*0.666+i*y+my, colorlist2[j])
obj_hun.append(stripe2)
stripe3=rect(X0, 0+i*y, X+mx, 0.333*y+i*y+my, colorlist3[j])
obj_hun.append(stripe3)
display_flag((20,15),obj_hun)
#flag PIMP_2_blueish
##gradient_array intersecting flags
obj_hun=[]
color1 = rgb_con(66, 135, 245,75)
color2 = rgb_con(11, 198, 227,75)
color3 = rgb_con(255, 25, 197,75)
W = 1.9 #width (height=1)
Ny=12 #number y dir
Nx=12
y = 1/Ny
x = W/Nx
mx=1.5*x #flags scale modulo x dir
my=1*y
M = 0.8 #modulo of gradiency
#_______________________________XXXXXXXXxx
totalcount = Nx
color1 = grad_list_w(color1,M,totalcount)
color2 = grad_list_w(color2,M,totalcount)
color3 = grad_list_w(color3,M,totalcount)
print(color1)
for j in range (Nx):
X = x*(j+1)
X0 = x*j
if j%3==0:
colorlist1=color1
colorlist2=color2
colorlist3=color3
elif j%3==1:
colorlist1=color3
colorlist2=color1
colorlist3=color2
else:
colorlist1=color2
colorlist2=color3
colorlist3=color1
for i in range (Ny):
stripe1=rect(X0, y*0.666+i*y, X+mx, y*1+i*y+my, colorlist1[j])
obj_hun.append(stripe1)
stripe2=rect(X0, y*0.333+y*i, X+mx, y*0.666+i*y+my, colorlist2[j])
obj_hun.append(stripe2)
stripe3=rect(X0, 0+i*y, X+mx, 0.333*y+i*y+my, colorlist3[j])
obj_hun.append(stripe3)
display_flag((20,15),obj_hun)
#flag PIMP_2_blueish
##gradient_array intersecting flags
obj_hun=[]
color1 = rgb_con(66, 135, 245,75)
color2 = rgb_con(255,255,255,50)
color3 = rgb_con(255, 25, 197,75)
W = 1.9 #width (height=1)
Ny=20 #number y dir
Nx=20
y = 1/Ny
x = W/Nx
mx=1.5*x #flags scale modulo x dir
my=1.5*y
M = 0.8 #modulo of gradiency
#_______________________________XXXXXXXXxx
totalcount = Nx
color1 = grad_list_w(color1,M,totalcount)
color2 = grad_list_b(color2,M,totalcount)
color3 = grad_list_w(color3,M,totalcount)
print(color1)
for j in range (Nx):
X = x*(j+1)
X0 = x*j
if j%3==0:
colorlist1=color1
colorlist2=color2
colorlist3=color3
elif j%3==1:
colorlist1=color3
colorlist2=color1
colorlist3=color2
else:
colorlist1=color2
colorlist2=color3
colorlist3=color1
for i in range (Ny):
stripe1=rect(X0, y*0.666+i*y, X+mx, y*1+i*y+my, colorlist1[j])
obj_hun.append(stripe1)
stripe2=rect(X0, y*0.333+y*i, X+mx, y*0.666+i*y+my, colorlist2[j])
obj_hun.append(stripe2)
stripe3=rect(X0, 0+i*y, X+mx, 0.333*y+i*y+my, colorlist3[j])
obj_hun.append(stripe3)
display_flag((20,15),obj_hun)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/NidhiChaurasia/LGMVIP-DataScience/blob/main/Stock_Prediction_Using_Linear_Regression_and_DecisionTree_Regression_Model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Decision trees are a practical supervised-learning method that can solve both regression and classification tasks. A decision tree is a tree-structured model with three types of nodes (root, decision, and leaf nodes). It builds a regression or classification model in the form of a tree structure, breaking a dataset down into smaller and smaller subsets while an associated decision tree is incrementally developed. Decision trees can handle both categorical and numerical data.
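As a toy illustration (synthetic numbers, not the stock data), a depth-1 regression tree learns a single split threshold and predicts the mean target of each resulting leaf:

```python
# Minimal sketch: a depth-1 DecisionTreeRegressor on synthetic data.
# The tree finds one threshold (somewhere between 3 and 10 here) and
# predicts the mean of the training targets on each side of the split.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X_toy = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y_toy = np.array([5.0, 5.0, 5.0, 20.0, 20.0, 20.0])

stump = DecisionTreeRegressor(max_depth=1).fit(X_toy, y_toy)
print(stump.predict([[2.5]]))   # -> [5.] (left-leaf mean)
print(stump.predict([[10.5]]))  # -> [20.] (right-leaf mean)
```

Deeper trees simply repeat this split recursively on each subset.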
```
#Install the dependencies
import numpy as num
import pandas as pan
from sklearn.tree import DecisionTreeRegressor #Decision Trees in Machine Learning to Predict Stock Movements.A decision tree algorithm performs a set of recursive actions before it arrives at the end result and when you plot these actions on a screen, the visual looks like a big tree, hence the name 'Decision Tree'.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as mat
mat.style.use('bmh')
#Load the data
from google.colab import files
uploaded = files.upload()
```
# The dataset comprises Open, High, Low, Close prices and Volume indicators (OHLCV).
```
#Store the data into a data frame
dataframe = pan.read_csv('NSE-TATAGLOBAL.csv')
dataframe.head(5)
#Get the number of trading days
dataframe.shape
#Visualize the close price data
mat.figure(figsize=(16,8))
mat.title('TATAGLOBAL')
mat.xlabel('Days')
mat.ylabel('Close Price USD ($)')
mat.plot(dataframe['Close'])
mat.show()
#Get the close Price
dataframe = dataframe[['Close']]
dataframe.head(4)
#Create a variable to predict 'x' days out into the future
future_days = 25
#Create a new column (target) shifted 'x' units/days up
dataframe['Prediction'] = dataframe[['Close']].shift(-future_days)
dataframe.tail(4)
#Create the feature data set (X) and convert it to a numpy array and remove the last 'x' rows/days
X = num.array(dataframe.drop(['Prediction'], axis=1))[:-future_days]
print(X)
#Create the target data set (y) and convert it to a numpy array and get all of the target values except the last 'x' rows/days
y = num.array(dataframe['Prediction'])[:-future_days]
print(y)
#Split the data into 75% training and 25% testing
x_train,x_test,y_train,y_test = train_test_split(X , y ,test_size = 0.25)
#Create the models
#Create the decision tree regressor model
tree = DecisionTreeRegressor().fit(x_train , y_train)
#Create the linear regression model
lr = LinearRegression().fit(x_train , y_train)
#Get the last 'x' rows of the feature data set
x_future = dataframe.drop(['Prediction'], axis=1)[:-future_days]
x_future = x_future.tail(future_days)
x_future = num.array(x_future)
x_future
#Show the model tree prediction
tree_prediction = tree.predict(x_future)
print(tree_prediction)
print()
#Show the model linear regression prediction
lr_prediction = lr.predict(x_future)
print(lr_prediction)
```
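The `shift(-future_days)` call above is what builds the prediction target: each row's target is the close price `future_days` rows later, so the final `future_days` rows have no target and are excluded from training. A toy illustration:

```python
# Toy illustration of shift(-n): each row's target is the Close n rows
# ahead; the final n rows get NaN and are dropped before training.
import pandas as pd

toy = pd.DataFrame({'Close': [10, 11, 12, 13, 14]})
toy['Prediction'] = toy['Close'].shift(-2)
print(toy['Prediction'].tolist())  # -> [12.0, 13.0, 14.0, nan, nan]
```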
## Let's Visualize the data
```
predictions = tree_prediction #The regression decision trees take ordered values with continuous values.
valid = dataframe[X.shape[0]:].copy()  # copy to avoid SettingWithCopyWarning
valid['Predictions'] = predictions
mat.figure(figsize=(16,8))
mat.title('Stock Market Prediction Decision Tree Regression Model using sklearn')
mat.xlabel('Days')
mat.ylabel('Close Price USD ($)')
mat.plot(dataframe['Close'])
mat.plot(valid[['Close','Predictions']])
mat.legend(['Orig','Val','Pred'])
mat.show()
predictions = lr_prediction #Linear Model for Stock Price Prediction
valid = dataframe[X.shape[0]:].copy()  # copy to avoid SettingWithCopyWarning
valid['Predictions'] = predictions
mat.figure(figsize=(16,8))
mat.title('Stock Market Prediction Linear Regression Model')
mat.xlabel('Days')
mat.ylabel('Close Price USD ($)')
mat.plot(dataframe['Close'])
mat.plot(valid[['Close','Predictions']])
mat.legend(['Orig','Val','Pred'])
mat.show()
```
| github_jupyter |
### Generative Adversarial Networks
Jay Urbain, PhD
Credits:
- https://github.com/eriklindernoren/Keras-GAN
- The network architecture was found and optimized by many contributors, including the authors of the DCGAN paper and Erik Linder-Norén, whose excellent collection of GAN implementations, Keras-GAN, served as the basis of the code used here.
```
from keras.datasets import cifar10
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Model
from keras.optimizers import Adam
import matplotlib
#matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
```
Run the following cell to mount your Google Drive:
```
from google.colab import drive
drive.mount('/content/gdrive')
#!mkdir '/content/gdrive/My Drive/Colab Notebooks/dcgan_cifar_images'
dcgan_cifar_images = '/content/gdrive/My Drive/Colab Notebooks/dcgan_cifar_images'
```
#### The CIFAR-10 dataset
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
Here are the ten classes in the dataset: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.
```
from keras.datasets.cifar10 import load_data
# load the data - it returns 2 tuples of digits & labels - one for
# the train set & the other for the test set
(train_digits, train_labels), (test_digits, test_labels) = cifar10.load_data()
# display 14 random images from the training set
import numpy as np
np.random.seed(123)
rand_14 = np.random.randint(0, train_digits.shape[0],14)
sample_digits = train_digits[rand_14]
sample_labels = train_labels[rand_14]
# code to view the images
num_rows, num_cols = 2, 7
f, ax = plt.subplots(num_rows, num_cols, figsize=(12,5),
gridspec_kw={'wspace':0.03, 'hspace':0.01},
squeeze=True)
for r in range(num_rows):
for c in range(num_cols):
image_index = r * 7 + c
ax[r,c].axis("off")
ax[r,c].imshow(sample_digits[image_index], cmap='gray')
ax[r,c].set_title('No. %d' % sample_labels[image_index])
plt.show()
plt.close()
def load_data():
(X_train, _), (_, _) = cifar10.load_data()
#(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = (X_train.astype(np.float32) - 127.5) / 127.5
return X_train
X_train = load_data()
num_examples = X_train.shape[0]
print('Number of examples: ', num_examples)
def build_generator(noise_shape=(100,)):
input = Input(noise_shape)
x = Dense(128 * 8 * 8, activation="relu")(input)
x = Reshape((8, 8, 128))(x)
x = BatchNormalization(momentum=0.8)(x)
x = UpSampling2D()(x)
x = Conv2D(128, kernel_size=3, padding="same")(x)
x = Activation("relu")(x)
x = BatchNormalization(momentum=0.8)(x)
x = UpSampling2D()(x)
x = Conv2D(64, kernel_size=3, padding="same")(x)
x = Activation("relu")(x)
x = BatchNormalization(momentum=0.8)(x)
x = Conv2D(3, kernel_size=3, padding="same")(x)
out = Activation("tanh")(x)
model = Model(input, out)
print("-- Generator -- ")
model.summary()
return model
def build_discriminator(img_shape):
input = Input(img_shape)
x = Conv2D(32, kernel_size=3, strides=2, padding="same")(input)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.25)(x)
x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.25)(x)
x = BatchNormalization(momentum=0.8)(x)
x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.25)(x)
x = BatchNormalization(momentum=0.8)(x)
x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)
x = Dropout(0.25)(x)
x = Flatten()(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(input, out)
print("-- Discriminator -- ")
model.summary()
return model
def train(generator, discriminator, combined, epochs=2000, batch_size=128, save_interval=50):
X_train = load_data()
num_examples = X_train.shape[0]
num_batches = int(num_examples / float(batch_size))
print('Number of examples: ', num_examples)
print('Number of Batches: ', num_batches)
print('Number of epochs: ', epochs)
half_batch = int(batch_size / 2)
for epoch in range(epochs + 1):
print("Epoch: " + str(epoch))
for batch in range(num_batches):
print("Batch: " + str(batch) + "/" + str(num_batches))
# noise images for the batch
noise = np.random.normal(0, 1, (half_batch, 100))
fake_images = generator.predict(noise)
fake_labels = np.zeros((half_batch, 1))
# real images for batch
idx = np.random.randint(0, X_train.shape[0], half_batch)
real_images = X_train[idx]
real_labels = np.ones((half_batch, 1))
# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
noise = np.random.normal(0, 1, (batch_size, 100))
# Train the generator
g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))
# Plot the progress
print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
(epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))
if batch % 50 == 0:
save_imgs(generator, epoch, batch)
def save_imgs(generator, epoch, batch):
r, c = 5, 5
noise = np.random.normal(0, 1, (r * c, 100))
gen_imgs = generator.predict(noise)
# Rescale images 0 - 1
gen_imgs = 0.5 * gen_imgs + 0.5
fig, axs = plt.subplots(r, c)
cnt = 0
for i in range(r):
for j in range(c):
axs[i, j].imshow(gen_imgs[cnt, :, :, :])
axs[i, j].axis('off')
cnt += 1
fig.savefig(dcgan_cifar_images + "/mnist_%d_%d.png" % (epoch, batch))
plt.close()
def build_models():
gen_optimizer = Adam(lr=0.0002, beta_1=0.5)
disc_optimizer = Adam(lr=0.0002, beta_1=0.5)
discriminator = build_discriminator(img_shape=(32, 32, 3))
discriminator.compile(loss='binary_crossentropy',
optimizer=disc_optimizer,
metrics=['accuracy'])
generator = build_generator()
generator.compile(loss='binary_crossentropy', optimizer=gen_optimizer)
z = Input(shape=(100,))
img = generator(z)
discriminator.trainable = False
real = discriminator(img)
combined = Model(z, real)
combined.compile(loss='binary_crossentropy', optimizer=gen_optimizer)
return generator, discriminator, combined
def main():
generator, discriminator, combined = build_models()
train(generator, discriminator, combined,
epochs=100, batch_size=32, save_interval=1)
main()
```
| github_jupyter |
Earlier we trained a model to predict the ratings users would give to movies using a network with embeddings learned for each movie and user. Embeddings are powerful! But how do they actually work?
Previously, I claimed that embeddings capture the 'meaning' of the objects they represent, and discover useful latent structure. Let's put that to the test!
# Looking up embeddings
Let's load a model we trained earlier so we can investigate the embedding weights that it learned.
```
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow import keras
#_RM_
input_dir = '../input/movielens_preprocessed'
#_UNCOMMENT_
#input_dir = '../input/movielens-preprocessing'
#_RM_
model_dir = '.'
#_UNCOMMENT_
#model_dir = '../input/movielens-spiffy-model'
model_path = os.path.join(model_dir, 'movie_svd_model_32.h5')
model = keras.models.load_model(model_path)
```
The embedding weights are part of the model's internals, so we'll have to do a bit of digging around to access them. We'll grab the layer responsible for embedding movies, and use the `get_weights()` method to get its learned weights.
```
emb_layer = model.get_layer('movie_embedding')
(w,) = emb_layer.get_weights()
w.shape
```
Our weight matrix has 26,744 rows, one for each movie. Each row is a vector of 32 numbers, the size of our movie embeddings.
Let's look at an example movie vector:
```
w[0]
```
What movie is this the embedding of? Let's load up our dataframe of movie metadata.
```
movies_path = os.path.join(input_dir, 'movie.csv')
movies_df = pd.read_csv(movies_path, index_col=0)
movies_df.head()
```
Of course, it's *Toy Story*! I should have recognized that vector anywhere.
Okay, I'm being facetious. It's hard to make anything of these vectors at this point. We never directed the model about how to use any particular embedding dimension. We left it alone to learn whatever representation it found useful.
So how do we check whether these representations are sane and coherent?
## Vector similarity
A simple way to test this is to look at how close or distant pairs of movies are in the embedding space. Embeddings can be thought of as a smart distance metric. If our embedding matrix is any good, it should map similar movies (like *Toy Story* and *Shrek*) to similar vectors.
```
i_toy_story = 0
i_shrek = movies_df.loc[
movies_df.title == 'Shrek',
'movieId'
].iloc[0]
toy_story_vec = w[i_toy_story]
shrek_vec = w[i_shrek]
print(
toy_story_vec,
shrek_vec,
sep='\n',
)
```
Comparing dimension-by-dimension, these look vaguely similar. If we wanted to assign a single number to their similarity, we could calculate the euclidean distance between these two vectors. (This is our conventional 'as the crow flies' notion of distance between two points. Easy to grok in 1, 2, or 3 dimensions. Mathematically, we can also extend it to 32 dimensions, though good luck visualizing it.)
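The same sum-of-squared-differences formula works in any number of dimensions; here is a quick sanity check on a 2-D example where the answer is known (the classic 3-4-5 right triangle):

```python
# Euclidean distance by hand vs. scipy, on a 2-D example with a known answer.
import numpy as np
from scipy.spatial import distance

u = np.array([0.0, 3.0])
v = np.array([4.0, 0.0])

manual = np.sqrt(np.sum((u - v) ** 2))  # sqrt(4^2 + 3^2)
print(manual)                    # -> 5.0
print(distance.euclidean(u, v))  # -> 5.0
```

The exact same code runs unchanged on our 32-dimensional embedding vectors.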
```
from scipy.spatial import distance
distance.euclidean(toy_story_vec, shrek_vec)
```
How does this compare to a pair of movies that we would think of as very different?
```
i_exorcist = movies_df.loc[
movies_df.title == 'The Exorcist',
'movieId'
].iloc[0]
exorcist_vec = w[i_exorcist]
distance.euclidean(toy_story_vec, exorcist_vec)
```
As expected, much further apart.
## Cosine Distance
If you check out [the docs for the `scipy.spatial` module](https://docs.scipy.org/doc/scipy-0.14.0/reference/spatial.distance.html), you'll see there are actually a *lot* of different measures of distance that people use for different tasks.
When judging the similarity of embeddings, it's more common to use [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
In brief, the cosine similarity of two vectors ranges from -1 to 1, and is a function of the *angle* between the vectors. If two vectors point in the same direction, their cosine similarity is 1. If they point in opposite directions, it's -1. If they're orthogonal (i.e. at right angles), their cosine similarity is 0.
Cosine distance is just defined as 1 minus the cosine similarity (and therefore ranges from 0 to 2).
Let's calculate a couple cosine distances between movie vectors:
```
print(
distance.cosine(toy_story_vec, shrek_vec),
distance.cosine(toy_story_vec, exorcist_vec),
sep='\n'
)
```
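As a sanity check on the definition, cosine distance can be computed by hand from the dot product. Here's a minimal pure-Python sketch on toy vectors (not our real movie embeddings), confirming the 0-to-2 range described above:

```python
import math

def cosine_distance(u, v):
    # cosine similarity = (u . v) / (|u| * |v|); distance = 1 - similarity
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1 - dot / (norm_u * norm_v)

print(cosine_distance([1, 2], [2, 4]))    # parallel vectors: ~0, regardless of magnitude
print(cosine_distance([1, 0], [0, 1]))    # orthogonal vectors: 1
print(cosine_distance([1, 1], [-1, -1]))  # opposite vectors: 2
```

Note that scaling a vector doesn't change its cosine distance to anything, which is one reason it behaves differently from Euclidean distance.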
> **Aside:** *Why* is cosine distance commonly used when working with embeddings? The short answer, as with so many deep learning techniques, is "empirically, it works well". In the exercise coming up, you'll get to do a little hands-on investigation that digs into this question more deeply.
Which movies are most similar to *Toy Story*? Which movies fall right between *Psycho* and *Scream* in the embedding space? We could write a bunch of code to work out questions like this, but it'd be pretty tedious. Fortunately, there's already a library for exactly this sort of work: **Gensim**.
# Exploring embeddings with Gensim
I'll instantiate an instance of [`WordEmbeddingsKeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.WordEmbeddingsKeyedVectors) with our model's movie embeddings and the titles of the corresponding movies.
> Aside: You may notice that Gensim's docs and many of its class and method names refer to *word* embeddings. While the library is most frequently used in the text domain, we can use it to explore embeddings of any sort.
```
from gensim.models.keyedvectors import WordEmbeddingsKeyedVectors
# Limit to movies with at least this many ratings in the dataset
threshold = 100
mainstream_movies = movies_df[movies_df.n_ratings >= threshold].reset_index(drop=True)
movie_embedding_size = w.shape[1]
kv = WordEmbeddingsKeyedVectors(movie_embedding_size)
kv.add(
mainstream_movies['key'].values,
w[mainstream_movies.movieId]
)
```
Okay, so which movies are most similar to *Toy Story*?
```
kv.most_similar('Toy Story')
```
Wow, these are pretty great! It makes perfect sense that *Toy Story 2* is the most similar movie to *Toy Story*. And most of the rest are animated kids movies with a similar computer-animated style.
So the model has learned something about 3-D animated kids flicks, but maybe that was just a fluke. Let's look at the closest neighbours for a few more movies from a variety of genres:
```
import textwrap
movies = ['Eyes Wide Shut', 'American Pie', 'Iron Man 3', 'West Side Story',
'Battleship Potemkin', 'Clueless'
]
def plot_most_similar(movie, ax, topn=5):
sim = kv.most_similar(movie, topn=topn)[::-1]
y = np.arange(len(sim))
w = [t[1] for t in sim]
ax.barh(y, w)
left = min(.6, min(w))
ax.set_xlim(right=1.0, left=left)
# Split long titles over multiple lines
labels = [textwrap.fill(t[0] , width=24)
for t in sim]
ax.set_yticks(y)
ax.set_yticklabels(labels)
ax.set_title(movie)
fig, axes = plt.subplots(3, 2, figsize=(15, 9))
for movie, ax in zip(movies, axes.flatten()):
plot_most_similar(movie, ax)
fig.tight_layout()
```
Artsy erotic dramas, raunchy sophomoric comedies, old-school musicals, superhero movies... our embeddings manage to nail a wide variety of cinematic niches!
# Semantic vector math
The [`most_similar`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.most_similar) method optionally takes a second argument, `negative`. If we call `kv.most_similar(a, b)`, then instead of finding the vector closest to `a`, it will find the closest vector to `a - b`.
Why would you want to do that? It turns out that doing addition and subtraction of embedding vectors often gives surprisingly meaningful results. For example, how would you fill in the following equation?
Scream = Psycho + ________
*Scream* and *Psycho* are similar in that they're violent, scary movies somewhere on the border between Horror and Thriller. The biggest difference is that *Scream* has elements of comedy. So I'd say *Scream* is what you'd get if you combined *Psycho* with a comedy.
But we can actually ask Gensim to fill in the blank for us via vector math (after some rearranging):
________ = Scream - Psycho
```
kv.most_similar(
positive = ['Scream'],
negative = ['Psycho (1960)']
)
```
If you are familiar with these movies, you'll see that the missing ingredient that takes us from *Psycho* to *Scream* is comedy (and also late-90's-teen-movie-ness).
## Analogy solving
The SAT, a test used for admission to American colleges and universities, poses analogy questions like:
shower : deluge :: _____ : stare
(Read "shower is to deluge as ___ is to stare")
To solve this, we find the relationship between deluge and shower, and apply it to stare. A shower is a milder form of a deluge. What's a milder form of stare? A good answer here would be "glance", or "look".
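The vector-math recipe for solving an analogy `a : b :: x : c` is to look for the vector nearest to `c - (b - a)`. Here's a toy sketch with entirely made-up 2-D "word vectors" (one dimension loosely encoding intensity), just to show the mechanics before we try it on movies:

```python
import math

# Made-up 2-D embeddings: dim 0 ~ "wateriness", dim 1 ~ "intensity" (purely illustrative)
vecs = {
    'shower': (0.9, 0.2),
    'deluge': (0.9, 0.9),
    'stare':  (0.1, 0.9),
    'glance': (0.1, 0.2),
    'gaze':   (0.1, 0.8),
}

def cos(u, v):
    # Cosine similarity of two vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# shower : deluge :: X : stare  =>  X = stare - (deluge - shower)
target = tuple(s - (d - sh) for s, d, sh in
               zip(vecs['stare'], vecs['deluge'], vecs['shower']))
candidates = {k: v for k, v in vecs.items()
              if k not in {'shower', 'deluge', 'stare'}}
answer = max(candidates, key=lambda k: cos(candidates[k], target))
print(answer)  # glance
```

Subtracting the "intensity" offset from *stare* lands nearest to *glance*, matching the intuition above.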
It's kind of astounding that this works, but people have found that these can often be effectively solved by simple vector math on word embeddings. Can we solve movie analogies with our embeddings? Let's try. What about:
Brave : Cars 2 :: Pocahontas : _____
The answer is not clear. One interpretation would be that *Brave* is like *Cars 2*, except that the latter is aimed primarily at boys, and the former might be more appealing to girls, given its female protagonist. So maybe the answer should be, like *Pocahontas*, a mid-90's conventional animation kids movie, but more of a 'boy movie'. *Hercules*? *The Lion King*?
Let's ask our embeddings what they think.
In terms of vector math, we can frame this as...
Cars 2 = Brave + X
_____ = Pocahontas + X
Rearranging, we get:
____ = Pocahontas + (Cars 2 - Brave)
We can solve this by passing in two movies (*Pocahontas* and *Cars 2*) for the positive argument to `most_similar`, with *Brave* as the negative argument:
```
kv.most_similar(
['Pocahontas', 'Cars 2'],
negative = ['Brave']
)
```
This weakly fits our prediction: the 4 closest movies are indeed kids animated movies from the 90s. After that, the results are a bit more perplexing.
Is our model wrong, or were we? Another difference we failed to account for between *Cars 2* and *Brave* is that the former is a sequel and the latter is not. 7 of our 10 results are also sequels. This tells us something interesting about our learned embeddings (and, ultimately, about the problem of predicting movie preferences). "Sequelness" is an important property to our model - which suggests that some of the variance in our data is accounted for by the fact that some people tend to like sequels more than others.
```
# default_exp model_evaluation
```
# Model Evaluation 📈
```
#export
from tensorflow.keras.models import load_model
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import average_precision_score,precision_recall_curve
from funcsigs import signature
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
def show_loss_accurracy_plots(history):
"""Displays loss and accuracy plots for the input model history"""
acc = history['accuracy']
val_acc = history['val_accuracy']
loss = history['loss']
val_loss = history['val_loss']
epochs2 = range(len(acc))
plt.plot(epochs2, acc, 'b', label='Training')
plt.plot(epochs2, val_acc, 'r', label='Validation')
plt.title('Training and validation accuracy')
plt.ylabel('acc')
plt.xlabel('epoch')
plt.legend()
plt.figure()
plt.plot(epochs2, loss, 'b', label='Training')
plt.plot(epochs2, val_loss, 'r', label='Validation')
plt.title('Training and validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend()
plt.show()
```
## Loss Accuracy Plots
We want to see if we reach criticality
```
history = pd.read_csv('../08_test/history_training.csv')
show_loss_accurracy_plots(history)
# filepath changed from: 'alex-adapted-res-003/best_model.hdf5' for testing
path = '../08_test/best_model.hdf5'
# You must be using tensorflow 2.3 or greater
criticality_network_load = load_model(path) #<----- The Model
corpora_test_x = np.load('../08_test/corpora_test_x.npy')
target_test_y = np.load('../08_test/target_test_y.npy')
#export
def evaluate_model(criticality_network_load,corpora_test_x,target_test_y):
"""Displays the given model's: loss, accuracy, Average prcision-recall and AUC for the given data."""
score = criticality_network_load.evaluate(corpora_test_x, target_test_y, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
history_predict = criticality_network_load.predict(x=corpora_test_x)
history_predict
inferred_data = pd.DataFrame(history_predict,columns=list('AB'))
target_data = pd.DataFrame(target_test_y,columns=list('LN'))
data = target_data.join(inferred_data)
y_true = list(data['L'])
y_score= list(data['A'])
average_precision = average_precision_score(y_true, y_score)
print('Average precision-recall score: {0:0.2f}'.format(average_precision))
#ROC Curve (all our samples are balanced)
auc = roc_auc_score(y_true, y_score)
print('AUC: %.3f' % auc)
```
## Get Accuracy of Model
```
evaluate_model(criticality_network_load,corpora_test_x,target_test_y)
def clean_list(inp):
    """Flatten a list of single-element lists into a flat list."""
    return [i[0] for i in inp]
def clean_input(inp):
    """Convert an embedding row into a hashable tuple for dictionary lookup."""
    return tuple(clean_list(inp.tolist()))
def summarize(inp):
    """Collapse the per-dimension shap components into a single summed value."""
    return [sum(inp)]
```
# Shap Evaluations
```
from securereqnet.utils import Embeddings
import shap
model = criticality_network_load
```
## Get Reverse Embeddings Mapping
```
embeddings = Embeddings()
embed_path = '../data/word_embeddings-embed_size_100-epochs_100.csv'
embeddings_dict = embeddings.get_embeddings_dict(embed_path)
reverse_embeddings = {}
for key, value in embeddings_dict.items():
value = tuple(np.array(value, dtype='float32').tolist())
# print(value)
reverse_embeddings[value] = key
```
## Calculate Shap Values for 200 Issues
We take a background sample of 400 examples over which to compute expectations for these 200 points.
```
# select a set of background examples to take an expectation over
background = corpora_test_x[np.random.choice(corpora_test_x.shape[0], 400, replace=False)]
# explain predictions of the model on the first 200 test examples
e = shap.DeepExplainer(model, background)
# ...or pass tensors directly
# e = shap.DeepExplainer((model.layers[0].input, model.layers[-1].output), background)
shap_values = e.shap_values(corpora_test_x[0:200])
```
## Map Shap Values to Tokens
Using our reversed embeddings from earlier, we essentially undo the vectorization so we can map shap values to our tokens. Tokens are much more readable than vectors, and allow for easy human interpretation.
```
# map shap values to strings
# (shap, string)
shaps = []
for doc in range(shap_values[0].shape[0]):
for word in range(shap_values[0][doc].shape[0]):
# grab the word
try:
string = reverse_embeddings[clean_input(corpora_test_x[doc, word])]
shap_value = summarize(clean_list(shap_values[0][doc, word]))[0]
shaps.append((shap_value, string))
except KeyError as e:
pass
shaps = sorted(shaps, key = lambda x: abs(x[0]), reverse=True)
```
## Create Plot
Here we plot the top 25 shap values over the 200 data points and check their effects
```
import matplotlib.pyplot as plt
import math
import statistics
shap_vals = []
token = []
fig1 = plt.gcf()
data = {}
# Collect the top 25 distinct tokens by absolute shap value
uBound = 25
i = 0
while i < uBound:
    if i < len(shaps):
        curTok = shaps[i][1]
        curShap = shaps[i][0]
        if curTok in data.keys():
            data[curTok].append(curShap)
            # Repeated token: widen the window so we still get 25 distinct tokens
            uBound += 1
        else:
            data[curTok] = [curShap]
    i += 1
# get the rest
for i in range(len(shaps)):
curTok = shaps[i][1]
curShap = shaps[i][0]
if curTok in data.keys():
data[curTok].append(curShap)
for key in data.keys():
for item in data[key]:
shap_vals.append(item)
token.append(key)
fig = plt.figure(figsize = (15, 10))
max_shap_val = max(shap_vals)
min_shap_val = min(shap_vals)
total_range = max_shap_val - min_shap_val
std_dev = statistics.stdev(shap_vals)
median = statistics.median(shap_vals)
mean = statistics.mean(shap_vals)
# define our gradient
# we want something less linear
redness = lambda x : math.sqrt(((x+abs(min_shap_val))/total_range) * 100) * 10 / 100
blueness = lambda x : 1 - redness(x)
# size as normal distribution
size = lambda x : 500 * math.ceil(100 * ((1/(std_dev*math.sqrt(math.pi))*math.e)**(-1*((x-mean)**2)/(2*std_dev**2)))) / 100 + 35
plt.xlabel("Shap Value")
plt.ylabel("token")
plt.title("Shap Visualization for 200 Issues")
plt.xlim([-1 * (max_shap_val + std_dev), max_shap_val + std_dev])
plt.gca().invert_yaxis()
# creating the bar plot
plt.scatter(shap_vals, token, c = [(redness(x), 0, blueness(x)) for x in shap_vals], marker='.', s = [size(x) for x in shap_vals])
plt.savefig("../images/shap_200_issues_alpha.png", transparent=False)
plt.show()
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Distributed Training in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/guide/distribute_strategy"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Overview
`tf.distribute.Strategy` is a TensorFlow API to distribute training
across multiple GPUs, multiple machines or TPUs. Using this API, users can distribute their existing models and training code with minimal code changes.
`tf.distribute.Strategy` has been designed with these key goals in mind:
* Easy to use and support multiple user segments, including researchers, ML engineers, etc.
* Provide good performance out of the box.
* Easy switching between strategies.
`tf.distribute.Strategy` can be used with TensorFlow's high level APIs, [tf.keras](https://www.tensorflow.org/guide/keras) and [tf.estimator](https://www.tensorflow.org/guide/estimators), with just a couple of lines of code change. It also provides an API that can be used to distribute custom training loops (and in general any computation using TensorFlow).
In TensorFlow 2.0, users can execute their programs eagerly, or in a graph using [`tf.function`](../tutorials/eager/tf_function.ipynb). `tf.distribute.Strategy` intends to support both these modes of execution. Note that we may talk about training most of the time in this guide, but this API can also be used for distributing evaluation and prediction on different platforms.
As you will see in a bit, very few changes are needed to use `tf.distribute.Strategy` with your code. This is because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
In this guide, we will talk about the various types of strategies and how one can use them in different situations.
```
# Import TensorFlow
from __future__ import absolute_import, division, print_function
!pip install tensorflow-gpu==2.0.0-alpha0
import tensorflow as tf
```
## Types of strategies
`tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:
* Synchronous vs. asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of the input data in sync, aggregating gradients at each step. In async training, all workers independently train over the input data and update variables asynchronously. Typically, sync training is supported via all-reduce and async training via a parameter server architecture.
* Hardware platform: Users may want to scale their training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.
In order to support these use cases, we have 4 strategies available. In the next section we will talk about which of these are supported in which scenarios in TF 2.0-alpha at this time.
### MirroredStrategy
`tf.distribute.MirroredStrategy` supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called `MirroredVariable`. These variables are kept in sync with each other by applying identical updates.
Efficient all-reduce algorithms are used to communicate the variable updates across the devices.
All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device.
It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. The user can also choose between a few other options we provide, or write their own.
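The core idea of all-reduce can be sketched in a few lines of plain Python. This is a toy illustration of the reduce-then-broadcast behavior, not how NCCL actually implements it; the device names and gradient values are made up:

```python
# Per-device gradient tensors (here just lists of ints for clarity)
device_grads = {
    'gpu:0': [1, 2, 3],
    'gpu:1': [4, 5, 6],
}

# Reduce: element-wise sum of the tensors across all devices
reduced = [sum(vals) for vals in zip(*device_grads.values())]

# Broadcast: every device ends up holding the same aggregated tensor
all_reduced = {dev: list(reduced) for dev in device_grads}
print(all_reduced['gpu:0'])  # [5, 7, 9]
print(all_reduced['gpu:1'])  # [5, 7, 9]
```

Real implementations (ring all-reduce, tree all-reduce, NCCL) differ in how they schedule the communication, but the end state is the same: each device holds the sum of everyone's tensors.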
Here is the simplest way of creating `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
```
This will create a `MirroredStrategy` instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this:
```
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
```
If you wish to override the cross device communication, you can do so using the `cross_device_ops` argument by supplying an instance of `tf.distribute.CrossDeviceOps`. Currently we provide `tf.distribute.HierarchicalCopyAllReduce` and `tf.distribute.ReductionToOneDevice` as 2 other options other than `tf.distribute.NcclAllReduce` which is the default.
```
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
```
### MultiWorkerMirroredStrategy
`tf.distribute.experimental.MultiWorkerMirroredStrategy` is very similar to `MirroredStrategy`. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to `MirroredStrategy`, it creates copies of all variables in the model on each device across all workers.
It uses [CollectiveOps](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/collective_ops.py) as the multi-worker all-reduce communication method used to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes.
It also implements additional performance optimizations. For example, it includes a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, we are designing it to have a plugin architecture - so that in the future, users will be able to plug in algorithms that are better tuned for their hardware. Note that collective ops also implement other collective operations such as broadcast and all-gather.
Here is the simplest way of creating `MultiWorkerMirroredStrategy`:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```
`MultiWorkerMirroredStrategy` currently allows you to choose between two different implementations of collective ops. `CollectiveCommunication.RING` implements ring-based collectives using gRPC as the communication layer. `CollectiveCommunication.NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `CollectiveCommunication.AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. You can specify them like so:
```
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
```
One of the key differences between multi-worker training and multi-GPU training is the multi-worker setup. The "TF_CONFIG" environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. See the section on ["TF_CONFIG" below](#TF_CONFIG) for more details on how this can be done.
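As a rough sketch, each worker process would be launched with a JSON-valued "TF_CONFIG" along these lines (the host names and ports here are placeholders, and the full schema is covered in the "TF_CONFIG" section below):

```python
import json
import os

# Hypothetical 2-worker cluster; addresses are placeholders
tf_config = {
    'cluster': {
        'worker': ['host1:12345', 'host2:23456'],
    },
    # Identifies which task in the cluster this particular process is
    'task': {'type': 'worker', 'index': 0},
}
os.environ['TF_CONFIG'] = json.dumps(tf_config)
```

Every worker gets the same `cluster` description; only the `task` entry differs from process to process.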
Note: This strategy is [`experimental`](https://www.tensorflow.org/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### TPUStrategy
`tf.distribute.experimental.TPUStrategy` lets users run their TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) and [Google Compute Engine](https://cloud.google.com/tpu).
In terms of distributed training architecture, `TPUStrategy` is the same as `MirroredStrategy` - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in `TPUStrategy`.
Here is how you would instantiate `TPUStrategy`.
Note: To run this code in Colab, you should select TPU as the Colab runtime. See the [Using TPUs](tpu.ipynb) guide for a runnable version.
```
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)
```
The `TPUClusterResolver` instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it. If you want to use this for Cloud TPUs, you will need to specify the name of your TPU resource in the `tpu` argument. We also need to initialize the TPU system explicitly at the start of the program. This is required before TPUs can be used for computation, and should ideally be done at the beginning because it also wipes out the TPU memory, so all state will be lost.
Note: This strategy is [`experimental`](https://www.tensorflow.org/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future.
### ParameterServerStrategy
`tf.distribute.experimental.ParameterServerStrategy` supports parameter server training. It can be used either for multi-GPU synchronous local training or asynchronous multi-machine training. When used to train locally on one machine, variables are not mirrored; instead, they are placed on the CPU and operations are replicated across all local GPUs. In a multi-machine setting, some machines are designated as workers and some as parameter servers. Each variable of the model is placed on one parameter server. Computation is replicated across all GPUs of all the workers.
In terms of code, it looks similar to other strategies:
```
ps_strategy = tf.distribute.experimental.ParameterServerStrategy()
```
For multi worker training, "TF_CONFIG" needs to specify the configuration of parameter servers and workers in your cluster, which you can read more about in ["TF_CONFIG" below](#TF_CONFIG) below.
So far we've talked about the different strategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end.
## Using `tf.distribute.Strategy` with Keras
We've integrated `tf.distribute.Strategy` into `tf.keras` which is TensorFlow's implementation of the
[Keras API specification](https://keras.io). `tf.keras` is a high-level API to build and train models. By integrating into the `tf.keras` backend, we've made it seamless for Keras users to distribute training written in the Keras framework. The only things that need to change in a user's program are: (1) create an instance of the appropriate `tf.distribute.Strategy` and (2) move the creation and compiling of the Keras model inside `strategy.scope`.
Here is a snippet of code to do this for a very simple Keras model with one dense layer:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
```
In this example we used `MirroredStrategy`, so we can run this on a machine with multiple GPUs. `strategy.scope()` indicates which parts of the code to run distributed. Creating a model inside this scope allows us to create mirrored variables instead of regular variables. Compiling under the scope lets the framework know that the user intends to train this model using this strategy. Once this is set up, you can fit your model like you would normally. `MirroredStrategy` takes care of replicating the model's training on the available GPUs, aggregating gradients, etc.
```
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
```
Here we used a `tf.data.Dataset` to provide the training and eval input. You can also use numpy arrays:
```
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
```
In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using `MirroredStrategy` with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use `strategy.num_replicas_in_sync` to get the number of replicas.
```
# Compute global batch size using number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
```
### What's supported now?
In TF 2.0 alpha release, we support training with Keras using `MirroredStrategy`, as well as one machine parameter server using `ParameterServerStrategy`.
Support for other strategies will be coming soon. The API and usage will be exactly the same as above. If you wish to use other strategies like `TPUStrategy` or `MultiWorkerMirroredStrategy` in Keras in TF 2.0, you can currently do so by disabling eager execution (`tf.compat.v1.disable_eager_execution()`). Another thing to note is that when using `MultiWorkerMirroredStrategy` for multiple workers with Keras, currently the user will have to explicitly shard or shuffle the data for the different workers, but we will change this in the future to automatically shard the input data intelligently.
### Examples and Tutorials
Here is a list of tutorials and examples that illustrate the above integration end to end with Keras:
1. [Tutorial](../tutorials/distribute/keras.ipynb) to train MNIST with `MirroredStrategy`.
2. [Tutorial](tpu.ipynb) to train Fashion MNIST with `TPUStrategy` (currently uses `disable_eager_execution`)
3. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/resnet/keras/keras_imagenet_main.py) training with ImageNet data using `MirroredStrategy`.
4. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/resnet50_keras/resnet50.py) trained with Imagenet data on Cloud TPus with `TPUStrategy`. Note that this example only works with TensorFlow 1.x currently.
## Using `tf.distribute.Strategy` with Estimator
`tf.estimator` is a distributed training TensorFlow API that originally supported the async parameter server approach. Like with Keras, we've integrated `tf.distribute.Strategy` into `tf.estimator` so that a user who is using Estimator for their training can easily make their training distributed with very few changes to their code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs.
The usage of `tf.distribute.Strategy` with Estimator is slightly different than the Keras case. Instead of using `strategy.scope`, now we pass the strategy object into the [`RunConfig`](https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig) for the Estimator.
Here is a snippet of code that shows this with a premade estimator `LinearRegressor` and `MirroredStrategy`:
```
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
```
We use a premade Estimator here, but the same code works with a custom Estimator as well. `train_distribute` determines how training will be distributed, and `eval_distribute` determines how evaluation will be distributed. This is another difference from Keras where we use the same strategy for both training and eval.
Now we can train and evaluate this Estimator with an input function:
```
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
```
Another difference to highlight here between Estimator and Keras is the input handling. In Keras, we mentioned that each batch of the dataset is split across the multiple replicas. In Estimator, however, the user provides an `input_fn` and has full control over how their data is distributed across workers and devices. We do not automatically split the batch, nor automatically shard the data across different workers. The provided `input_fn` is called once per worker, thus giving one dataset per worker. Then one batch from that dataset is fed to each replica on that worker, thereby consuming N batches for N replicas on 1 worker. In other words, the dataset returned by the `input_fn` should provide batches of size `PER_REPLICA_BATCH_SIZE`. The global batch size for a step can then be obtained as `PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync`. When doing multi worker training, users will also want to either split their data across the workers, or shuffle with a random seed on each. You can see an example of how to do this in the [multi-worker tutorial](../tutorials/distribute/multi_worker.ipynb).
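The batch-size arithmetic above can be made concrete with plain numbers. A sketch, with a hypothetical replica count standing in for `strategy.num_replicas_in_sync`:

```python
# Hypothetical setup: 1 worker with 4 replicas (e.g. 4 GPUs)
PER_REPLICA_BATCH_SIZE = 16
num_replicas_in_sync = 4  # stands in for strategy.num_replicas_in_sync

# The input_fn should return batches of the per-replica size; each replica
# consumes one such batch per step, so N batches are consumed per step...
batches_consumed_per_step = num_replicas_in_sync

# ...and the effective global batch size per step is:
global_batch_size = PER_REPLICA_BATCH_SIZE * num_replicas_in_sync
print(global_batch_size)  # 64
```

So an `input_fn` batching at 16 on this hypothetical 4-replica worker yields an effective global batch of 64 examples per training step.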
We showed an example of using `MirroredStrategy` with Estimator. You can use `TPUStrategy` with Estimator as well, in exactly the same way:
```
config = tf.estimator.RunConfig(
train_distribute=tpu_strategy, eval_distribute=tpu_strategy)
```
And similarly, you can use multi worker and parameter server strategies as well. The code remains the same, but you need to use `tf.estimator.train_and_evaluate`, and set "TF_CONFIG" environment variables for each binary running in your cluster.
### What's supported now?
In TF 2.0 alpha release, we support training with Estimator using all strategies.
### Examples and Tutorials
Here are some examples that show end to end usage of various strategies with Estimator:
1. [Tutorial](../tutorials/distribute/multi_worker.ipynb) to train MNIST with multiple workers using `MultiWorkerMirroredStrategy`.
2. [End to end example](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy) for multi worker training in tensorflow/ecosystem using Kubernetes templates. This example starts with a Keras model and converts it to an Estimator using the `tf.keras.estimator.model_to_estimator` API.
3. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py) model, which can be trained using either `MirroredStrategy` or `MultiWorkerMirroredStrategy`.
4. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/distribution_strategy/resnet_estimator.py) example with TPUStrategy.
## Using `tf.distribute.Strategy` with custom training loops
As you've seen, using `tf.distribute.Strategy` with high level APIs requires only a couple of lines of code change. With a little more effort, `tf.distribute.Strategy` can also be used by users who are not using these frameworks.
TensorFlow is used for a wide variety of use cases and some users (such as researchers) require more flexibility and control over their training loops. This makes it hard for them to use the high level frameworks such as Estimator or Keras. For instance, someone using a GAN may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training. So these users will usually write their own training loops.
For these users, we provide a core set of methods through the `tf.distribute.Strategy` classes. Using these may require minor restructuring of the code initially, but once that is done, the user should be able to switch between GPUs / TPUs / multiple machines by just changing the strategy instance.
Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before.
Note: These APIs are still experimental and we are improving them to make them more user friendly in TensorFlow 2.0.
First, we create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables.
```
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD()
```
Next, we create the input dataset and call `make_dataset_iterator` to distribute the dataset based on the strategy. This API is expected to change in the near future.
```
with mirrored_strategy.scope():
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(
global_batch_size)
input_iterator = mirrored_strategy.make_dataset_iterator(dataset)
```
Then, we define one step of the training. We will use `tf.GradientTape` to compute gradients and the optimizer to apply those gradients to update our model's variables. To distribute this training step, we put it in a function `step_fn` and pass it to `strategy.experimental_run` along with the iterator created before:
```
@tf.function
def train_step():
def step_fn(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
logits = model(features)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=labels)
loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
return loss
per_replica_losses = mirrored_strategy.experimental_run(
step_fn, input_iterator)
mean_loss = mirrored_strategy.reduce(
tf.distribute.ReduceOp.MEAN, per_replica_losses)
return mean_loss
```
A few other things to note in the code above:
1. We used `tf.nn.softmax_cross_entropy_with_logits` to compute the loss, and then we scaled the total loss by the global batch size. This is important because all the replicas are training in sync, and the number of examples processed in each training step is the global batch size. If you're using TensorFlow's standard losses from `tf.losses` or `tf.keras.losses`, they are distribution aware and will take care of the scaling by number of replicas whenever a strategy is in scope.
2. We used the `strategy.reduce` API to aggregate the results returned by `experimental_run`. `experimental_run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can `reduce` them to get an aggregated value. You can also do `strategy.unwrap(results)`* to get the list of values contained in the result, one per local replica.
*expected to change
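The loss-scaling point above can be illustrated with a tiny arithmetic sketch (illustrative numbers only, not TensorFlow code): each replica sums its per-example losses and divides by the *global* batch size, so the replica losses add up to the mean loss over the full global batch.

```python
# A global batch of 32 split across 4 replicas, 8 examples each, with
# every per-example loss equal to 0.5 for simplicity.
per_example_losses = [0.5] * 8      # one replica's share of the batch
GLOBAL_BATCH_SIZE = 32
replica_loss = sum(per_example_losses) / GLOBAL_BATCH_SIZE
total_over_replicas = replica_loss * 4  # summing 4 identical replicas
print(replica_loss, total_over_replicas)  # 0.125 0.5  (0.5 is the true mean)
```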
Finally, once we have defined the training step, we can initialize the iterator and run the training in a loop:
```
with mirrored_strategy.scope():
input_iterator.initialize()
for _ in range(10):
print(train_step())
```
In the example above, we used `make_dataset_iterator` to provide input to the training. We also provide two additional APIs, `make_input_fn_iterator` and `make_experimental_numpy_iterator`, to support other kinds of inputs. See their documentation in `tf.distribute.Strategy` to learn how they differ from `make_dataset_iterator`.
This covers the simplest case of using the `tf.distribute.Strategy` API to distribute custom training loops. We are in the process of improving these APIs. Since this use case requires more work on the part of the user, we will be publishing a separate detailed guide for it in the future.
### What's supported now?
In TF 2.0 alpha release, we support training with custom training loops using `MirroredStrategy` as shown above. Support for other strategies is coming soon.
If you wish to use the other strategies like `TPUStrategy` in TF 2.0 with a custom training loop, you can currently do so by disabling eager execution (`tf.compat.v1.disable_eager_execution()`). The code will remain similar, except you will need to use TF 1.x graph and sessions to run the training.
`MultiWorkerMirroredStrategy` support will be coming in the future.
### Examples and Tutorials
Here are some examples for using distribution strategy with custom training loops:
1. [Tutorial](../tutorials/distribute/training_loops.ipynb) to train MNIST using `MirroredStrategy`.
2. [DenseNet](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/densenet/distributed_train.py) example using `MirroredStrategy`.
## Other topics
In this section, we will cover some topics that are relevant to multiple use cases.
<a id="TF_CONFIG">
### Setting up TF\_CONFIG environment variable
</a>
For multi-worker training, as mentioned before, you need to set "TF\_CONFIG" environment variable for each
binary running in your cluster. The "TF\_CONFIG" environment variable is a JSON string which specifies what
tasks constitute a cluster, their addresses and each task's role in the cluster. We provide a Kubernetes template in the
[tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) repo which sets
"TF\_CONFIG" for your training tasks.
One example of "TF\_CONFIG" is:
```
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
"worker": ["host1:port", "host2:port", "host3:port"],
"ps": ["host4:port", "host5:port"]
},
"task": {"type": "worker", "index": 1}
})
```
This "TF\_CONFIG" specifies that there are three workers and two ps tasks in the
cluster along with their hosts and ports. The "task" part specifies the
role of the current task in the cluster: worker 1 (the second worker). Valid roles in a cluster are
"chief", "worker", "ps" and "evaluator". There should be no "ps" job except when using `tf.distribute.experimental.ParameterServerStrategy`.
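Each binary typically inspects its own "TF\_CONFIG" to learn its role. The helper below is an illustrative sketch (not a TensorFlow API) that sets a "TF\_CONFIG" like the one above and reads back the current task's type, index, and peer count:

```python
import json
import os

# Hypothetical single-job cluster: three workers, this binary is worker 1.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:2222", "host2:2222", "host3:2222"]},
    "task": {"type": "worker", "index": 1},
})

def describe_task():
    config = json.loads(os.environ["TF_CONFIG"])
    task = config["task"]
    peers = config["cluster"][task["type"]]
    return task["type"], task["index"], len(peers)

print(describe_task())  # ('worker', 1, 3)
```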
## What's next?
`tf.distribute.Strategy` is actively under development. We welcome you to try it out and provide your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
<div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Discontinuous Galerkin Method with Physics based numerical flux - 1D Elastic Wave Equation </div>
</div>
</div>
</div>
This notebook is based on the paper [A new discontinuous Galerkin spectral element method for elastic waves with physically motivated numerical fluxes](https://www.geophysik.uni-muenchen.de/~gabriel/kduru_waves2017.pdf)
Published in the [13th International Conference on Mathematical and Numerical Aspects of Wave Propagation](https://cceevents.umn.edu/waves-2017)
##### Authors:
* Kenneth Duru
* Ashim Rijal ([@ashimrijal](https://github.com/ashimrijal))
* Sneha Singh
---
## Basic Equations ##
The source-free elastic wave equation in a heterogeneous 1D medium is
\begin{align}
\rho(x)\partial_t v(x,t) -\partial_x \sigma(x,t) & = 0\\
\frac{1}{\mu(x)}\partial_t \sigma(x,t) -\partial_x v(x,t) & = 0
\end{align}
with $\rho(x)$ the density, $\mu(x)$ the shear modulus and $x = [0, L]$. At the boundaries $ x = 0, x = L$ we pose the general well-posed linear boundary conditions
\begin{equation}
\begin{split}
B_0(v, \sigma, Z_{s}, r_0): =\frac{Z_{s}}{2}\left({1-r_0}\right){v} -\frac{1+r_0}{2} {\sigma} = 0, \quad \text{at} \quad x = 0, \\
B_L(v, \sigma, Z_{s}, r_n): =\frac{Z_{s}}{2} \left({1-r_n}\right){v} + \frac{1+r_n}{2}{\sigma} = 0, \quad \text{at} \quad x = L.
\end{split}
\end{equation}
with the reflection coefficients $r_0$, $r_n$ being real numbers and $|r_0|, |r_n| \le 1$.
Note that at $x = 0$, while $r_0 = -1$ yields a clamped wall, $r_0 = 0$ yields an absorbing boundary, and with $r_0 = 1$ we have a free-surface boundary condition. Similarly, at $x = L$, $r_n = -1$ yields a clamped wall, $r_n = 0$ yields an absorbing boundary, and $r_n = 1$ gives a free-surface boundary condition.
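These special cases can be checked numerically. The snippet below is a quick sketch of the boundary operator $B_0$ from the equation above; the `Zs`, `v`, and `sigma` values are arbitrary:

```python
# B_0 = (Zs/2)(1 - r0) v - ((1 + r0)/2) sigma
def B0(v, sigma, Zs, r0):
    return 0.5 * Zs * (1.0 - r0) * v - 0.5 * (1.0 + r0) * sigma

# r0 = 1 (free surface): the velocity term drops out, so B_0 = -sigma.
print(B0(v=2.0, sigma=3.0, Zs=1.5, r0=1.0))   # -3.0
# r0 = -1 (clamped wall): the stress term drops out, so B_0 = Zs * v.
print(B0(v=2.0, sigma=3.0, Zs=1.5, r0=-1.0))  # 3.0
```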
1) Discretize the spatial domain $x$ into $K$ elements and denote the ${k}^{th}$ element $e^k = [x_{k}, x_{k+1}]$ and the element width $\Delta{x}_k = x_{k+1}-x_{k}$. Consider two adjacent elements $e^k = [x_{k}, x_{k+1}]$ and $e^{k+1} = [x_{k+1}, x_{k+2}]$ with an interface at $x_{k+1}$. At the interface we pose the physical conditions for a locked interface
\begin{align}
\text{force balance}: \quad &\sigma^{-} = \sigma^{+} = \sigma, \nonumber \\
\text{no slip}: \quad & [\![ v]\!] = 0,
\end{align}
where $[\![ v]\!] = v^{+} - v^{-}$, and $v^{-}, \sigma^{-}$ and $v^{+}, \sigma^{+}$ are the fields in $e^k = [x_{k}, x_{k+1}]$ and $e^{k+1} = [x_{k+1}, x_{k+2}]$, respectively.
2) Within the element derive the weak form of the equation by multiplying both sides by an arbitrary test function and integrating over the element.
3) Next map the $e^k = [x_{k}, x_{k+1}]$ to a reference element $\xi = [-1, 1]$
4) Inside the transformed element $\xi \in [-1, 1]$, approximate the solution and material parameters by a polynomial interpolant, and write
\begin{equation}
v^k(\xi, t) = \sum_{j = 1}^{N+1}v_j^k(t) \mathcal{L}_j(\xi), \quad \sigma^k(\xi, t) = \sum_{j = 1}^{N+1}\sigma_j^k(t) \mathcal{L}_j(\xi),
\end{equation}
\begin{equation}
\rho^k(\xi) = \sum_{j = 1}^{N+1}\rho_j^k \mathcal{L}_j(\xi), \quad \mu^k(\xi) = \sum_{j = 1}^{N+1}\mu_j^k \mathcal{L}_j(\xi),
\end{equation}
where $ \mathcal{L}_j$ is the $j$th interpolating polynomial of degree $N$. If we consider nodal basis then the interpolating polynomials satisfy $ \mathcal{L}_j(\xi_i) = \delta_{ij}$.
The interpolating nodes $\xi_i$, $i = 1, 2, \dots, N+1$ are the nodes of a Gauss quadrature with
\begin{equation}
\sum_{i = 1}^{N+1} f(\xi_i)w_i \approx \int_{-1}^{1}f(\xi) d\xi,
\end{equation}
where $w_i$ are quadrature weights.
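As a quick numerical check of the quadrature rule above, we can use NumPy's Gauss-Legendre nodes and weights (standing in for the notebook's own `quadraturerules` module): with $N = 4$ (five nodes) the rule integrates $f(\xi) = \xi^2$ over $[-1, 1]$ exactly.

```python
import numpy as np

N = 4
xi, w = np.polynomial.legendre.leggauss(N + 1)  # nodes and weights on [-1, 1]
approx = float(np.sum(xi**2 * w))
exact = 2.0 / 3.0  # integral of xi^2 over [-1, 1]
print(abs(approx - exact) < 1e-12)  # True
```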
5) At the element boundaries $\xi = \pm 1$, we generate $\widehat{v}^{k}(\pm 1, t)$ $\widehat{\sigma}^{k}(\pm 1, t)$ by solving a Riemann problem and constraining the solutions against interface and boundary conditions. Then numerical fluctuations $F^k(-1, t)$ and $G^k(1, t)$ are obtained by penalizing hat variables against the incoming characteristics only.
6) Finally, the flux fluctuations are appended to the semi-discrete PDE with special penalty weights and we have
\begin{equation}
\begin{split}
\frac{d \boldsymbol{v}^k( t)}{ d t} &= \frac{2}{\Delta{x}_k} W^{-1}({\boldsymbol{\rho}}^{k})\left(Q \boldsymbol{\sigma}^k( t) - \boldsymbol{e}_{1}F^k(-1, t)- \boldsymbol{e}_{N+1}G^k(1, t)\right),
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\frac{d \boldsymbol{\sigma}^k( t)}{ d t} &= \frac{2}{\Delta{x}_k} W^{-1}(1/{\boldsymbol{\mu}^{k}})\left(Q \boldsymbol{v}^k( t) + \boldsymbol{e}_{1}\frac{1}{Z_{s}^{k}(-1)}F^k(-1, t)- \boldsymbol{e}_{N+1}\frac{1}{Z_{s}^{k}(1)}G^k(1, t)\right),
\end{split}
\end{equation}
where
\begin{align}
\boldsymbol{e}_{1} = [ \mathcal{L}_1(-1), \mathcal{L}_2(-1), \dots, \mathcal{L}_{N+1}(-1) ]^T, \quad \boldsymbol{e}_{N+1} = [ \mathcal{L}_1(1), \mathcal{L}_2(1), \dots, \mathcal{L}_{N+1}(1) ]^T,
\end{align}
and
\begin{align}
G^k(1, t):= \frac{Z_{s}^{k}(1)}{2} \left(v^{k}(1, t)-\widehat{v}^{k}(1, t) \right) + \frac{1}{2}\left(\sigma^{k}(1, t)- \widehat{\sigma}^{k}(1, t)\right),
\end{align}
\begin{align}
F^{k}(-1, t):= \frac{Z_{s}^{k}(-1)}{2} \left(v^{k}(-1, t)-\widehat{v}^{k}(-1, t) \right) - \frac{1}{2}\left(\sigma^{k}(-1, t)- \widehat{\sigma}^{k}(-1, t)\right).
\end{align}
And the weighted elemental mass matrix $W^N(a)$ and the stiffness matrix $Q^N $ are defined by
\begin{align}
W_{ij}(a) = \sum_{m = 1}^{N+1} w_m \mathcal{L}_i(\xi_m) {\mathcal{L}_j(\xi_m)} a(\xi_m), \quad Q_{ij} = \sum_{m = 1}^{N+1} w_m \mathcal{L}_i(\xi_m) {\mathcal{L}_j^{\prime}(\xi_m)}.
\end{align}
7) Time extrapolation can be performed using any stable time-stepping scheme, such as Runge-Kutta or the ADER scheme. This notebook implements both Runge-Kutta and ADER schemes for solving the source-free version of the elastic wave equation in a homogeneous medium. To keep the problem simple, we use as spatial initial condition a Gauss function with half-width $\delta$
\begin{equation}
v(x,t=0) = e^{-1/\delta^2 (x - x_{o})^2}, \quad \sigma(x,t=0) = 0
\end{equation}
**Exercises**
1. A Lagrange polynomial is used to interpolate the solution and the material parameters. First use polynomial degree 2 and then 6. Compare the simulation results in terms of accuracy of the solution (the third and fourth figures show the errors). The time required to complete the simulation is printed at the end of each run; also compare the time required for both simulations.
2. We use quadrature rules: Gauss-Legendre-Lobatto and Gauss-Legendre. Run simulations once using Lobatto and once using Legendre rules. Compare the difference.
3. Now fix the order of the polynomial to be 6, for example. Then use 100 degrees of freedom for one simulation and 250 for another. What happens? Also compare the time required to complete both simulations.
4. Experiment with the boundary conditions by changing the reflection coefficients $r_0$ and $r_n$.
5. You can also play around with sinusoidal initial solution instead of the Gaussian.
6. Change the time-integrator from RK to ADER. Observe if there are changes in the solution or the CFL number. Vary the polynomial order N.
```
# Parameters initialization and plotting the simulation
# Import necessary routines
import Lagrange
import numpy as np
import timeintegrate
import quadraturerules
import specdiff
import utils
import matplotlib.pyplot as plt
import timeit # to check the simulation time
#plt.switch_backend("TkAgg") # plots in external window
plt.switch_backend("nbagg") # plots within this notebook
# Simulation start time
start = timeit.default_timer()
# Tic
iplot = 20
# Physical domain x = [ax, bx] (km)
ax = 0.0 # (km)
bx = 20.0 # (km)
# Choose quadrature rules and the corresponding nodes
# We use Gauss-Legendre-Lobatto (Lobatto) or Gauss-Legendre (Legendre) quadrature rule.
#node = 'Lobatto'
node = 'Legendre'
if node not in ('Lobatto', 'Legendre'):
print('quadrature rule not implemented. choose node = Legendre or node = Lobatto')
exit(-1)
# Polynomial degree N: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
N = 4 # Lagrange polynomial degree
NP = N+1 # quadrature nodes per element
if N < 1 or N > 12:
print('polynomial degree not implemented. choose N>= 1 and N <= 12')
exit(-1)
# Degrees of freedom to resolve the wavefield
deg_of_freedom = 400 # = num_element*NP
# Estimate the number of elements needed for a given polynomial degree and degrees of freedom
num_element = round(deg_of_freedom/NP) # = deg_of_freedom/NP
# Initialize the mesh
y = np.zeros(NP*num_element)
# Generate num_element dG elements in the interval [ax, bx]
x0 = np.linspace(ax,bx,num_element+1)
dx = np.diff(x0) # element sizes
# Generate Gauss quadrature nodes (psi): [-1, 1] and weights (w)
if node == 'Legendre':
GL_return = quadraturerules.GL(N)
psi = GL_return['xi']
w = GL_return['weights'];
if node == 'Lobatto':
gll_return = quadraturerules.gll(N)
psi = gll_return['xi']
w = gll_return['weights']
# Use the Gauss quadrature nodes (psi) generate the mesh (y)
for i in range (1,num_element+1):
for j in range (1,(N+2)):
y[j+(N+1)*(i-1)-1] = dx[i-1]/2.0 * (psi[j-1] + 1.0) +x0[i-1]
# Override with the exact degrees of freedom
deg_of_freedom = len(y);
# Generate the spectral difference operator (D) in the reference element: [-1, 1]
D = specdiff.derivative_GL(N, psi, w)
# Boundary condition reflection coefficients
r0 = 0 # r=0:absorbing, r=1:free-surface, r=-1: clamped
rn = 0 # r=0:absorbing, r=1:free-surface, r=-1: clamped
# Initialize the wave-fields
L = 0.5*(bx-ax)
delta = 0.01*(bx-ax)
x0 = 0.5*(bx+ax)
omega = 4.0
#u = np.sin(omega*np.pi*y/L) # Sine function
u = 1/np.sqrt(2.0*np.pi*delta**2)*np.exp(-(y-x0)**2/(2.0*delta**2)) # Gaussian
u = np.transpose(u)
v = np.zeros(len(u))
U = np.zeros(len(u))
V = np.zeros(len(u))
print('points per wavelength: ', round(delta*deg_of_freedom/(2*L)))
# Material parameters
cs = 3.464 # shear wave speed (km/s)
rho = 0*y + 2.67 # density (g/cm^3)
mu = 0*y + rho * cs**2 # shear modulus (GPa)
Zs = rho*cs # shear impedance
# Time stepping parameters
cfl = 0.5 # CFL number
dt = (0.25/(cs*(2*N+1)))*min(dx) # time-step (s)
t = 0.0 # initial time
Tend = 2 # final time (s)
n = 0 # counter
# Difference between analytical and numerical solutions
EV = [0] # initialize errors in V (velocity)
EU = [0] # initialize errors in U (stress)
T = [0] # later append every time steps to this
# Initialize animated plot for velocity and stress
fig1 = plt.figure(figsize=(10,10))
ax1 = fig1.add_subplot(4,1,1)
line1 = ax1.plot(y, u, 'r', y, U, 'k--')
plt.title('numerical vs exact')
plt.xlabel('x[km]')
plt.ylabel('velocity [m/s]')
ax2 = fig1.add_subplot(4,1,2)
line2 = ax2.plot(y, v, 'r', y, V, 'k--')
plt.title('numerical vs exact')
plt.xlabel('x[km]')
plt.ylabel('stress [MPa]')
# Initialize error plot (for velocity and stress)
ax3 = fig1.add_subplot(4,1,3)
line3 = ax3.plot(T, EV, 'r')
plt.title('relative error in particle velocity')
plt.xlabel('time[t]')
ax3.set_ylim([10**-5, 1])
plt.ylabel('error')
ax4 = fig1.add_subplot(4,1,4)
line4 = ax4.plot(T, EU, 'r')
plt.ylabel('error')
plt.xlabel('time[t]')
ax4.set_ylim([10**-5, 1])
plt.title('relative error in stress')
plt.tight_layout()
plt.ion()
plt.show()
A = (np.linalg.norm(1/np.sqrt(2.0*np.pi*delta**2)*0.5*Zs*(np.exp(-(y+cs*(t+1*0.5)-x0)**2/(2.0*delta**2))\
- np.exp(-(y-cs*(t+1*0.5)-x0)**2/(2.0*delta**2)))))
B = (np.linalg.norm(u))
# Loop through time and evolve the wave-fields using ADER time-stepping scheme of N+1 order of accuracy
time_integrator = 'ADER'
for t in utils.drange (0.0, Tend+dt,dt):
n = n+1
# ADER time-integrator
if time_integrator in ('ADER'):
ADER_Wave_dG_return = timeintegrate.ADER_Wave_dG(u,v,D,NP,num_element,dx,w,psi,t,r0,rn,dt,rho,mu)
u = ADER_Wave_dG_return['Hu']
v = ADER_Wave_dG_return['Hv']
# Runge-Kutta time-integrator
if time_integrator in ('RK'):
RK4_Wave_dG_return = timeintegrate.RK4_Wave_dG(u,v,D,NP,num_element,dx,w,psi,t,r0,rn,dt,rho,mu)
u = RK4_Wave_dG_return['Hu']
v = RK4_Wave_dG_return['Hv']
# Analytical sine wave (use it when the sine function is chosen above)
#U = 0.5*(np.sin(omega*np.pi/L*(y+cs*(t+1*dt))) + np.sin(omega*np.pi/L*(y-cs*(t+1*dt))))
#V = 0.5*Zs*(np.sin(omega*np.pi/L*(y+cs*(t+1*dt))) - np.sin(omega*np.pi/L*(y-cs*(t+1*dt))))
# Analytical Gaussian
U = 1/np.sqrt(2.0*np.pi*delta**2)*0.5*(np.exp(-(y+cs*(t+1*dt)-x0)**2/(2.0*delta**2))\
+ np.exp(-(y-cs*(t+1*dt)-x0)**2/(2.0*delta**2)))
V = 1/np.sqrt(2.0*np.pi*delta**2)*0.5*Zs*(np.exp(-(y+cs*(t+1*dt)-x0)**2/(2.0*delta**2))\
- np.exp(-(y-cs*(t+1*dt)-x0)**2/(2.0*delta**2)))
EV.append(np.linalg.norm(V-v)/A)
EU.append(np.linalg.norm(U-u)/B)
T.append(t)
# Updating plots
if n % iplot == 0:
for l in line1:
l.remove()
del l
for l in line2:
l.remove()
del l
for l in line3:
l.remove()
del l
for l in line4:
l.remove()
del l
# Display lines
line1 = ax1.plot(y, u, 'r', y, U, 'k--')
ax1.legend(iter(line1),('Numerical', 'Analytical'))
line2 = ax2.plot(y, v, 'r', y, V, 'k--')
ax2.legend(iter(line2),('Numerical', 'Analytical'))
line3 = ax3.plot(T, EU, 'k--')
ax3.set_yscale("log")#, nonposx='clip')
line4 = ax4.plot(T, EV, 'k--')
ax4.set_yscale("log")#, nonposx='clip')
plt.gcf().canvas.draw()
plt.ioff()
plt.show()
# Simulation end time
stop = timeit.default_timer()
print('total simulation time = ', stop - start) # print the time required for simulation
print('polynomial degree = ', N) # print the polynomial degree used
print('degree of freedom = ', deg_of_freedom) # print the degree of freedom
print('maximum relative error in particle velocity = ', max(EU)) # max. relative error in particle velocity
print('maximum relative error in stress = ', max(EV)) # max. relative error in stress
```
```
%load_ext autoreload
%autoreload 2
from genomic_benchmarks.loc2seq.with_biopython import _fastagz2dict
from genomic_benchmarks.seq2loc import fasta2loc
from sklearn.model_selection import train_test_split
import pandas as pd
from tqdm.notebook import tqdm
from pathlib import Path
import yaml
import tarfile
```
## Load genomic reference and download data from GitHub
```
genome = _fastagz2dict(Path.home() / ".genomic_benchmarks/fasta/Homo_sapiens.GRCh38.dna.toplevel.fa.gz",
24, 'MT')
genome.keys()
!wget http://www.cs.huji.ac.il/~tommy//enhancer_CNN/Enhancers_vs_negative.tgz
```
## Create FASTA
```
# extract human data
with tarfile.open("Enhancers_vs_negative.tgz", "r:gz") as tar:
for item in tar.getmembers():
if item.name in ["Human/positive_samples", "Human/negative_samples"]:
tar.extract(item, ".")
with open("Human/positive_samples") as fr:
positives = fr.read().splitlines()
with open("Human/negative_samples") as fr:
negatives = fr.read().splitlines()
with open("all_together.fa", "w") as fw:
for i, p in enumerate(positives):
fw.write(f">positive{i}\n{p}\n")
for i, n in enumerate(negatives):
fw.write(f">negative{i}\n{n}\n")
```
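As a sanity check on the file just written, a minimal reader for this two-line-per-record FASTA layout might look like the sketch below (illustrative only; the notebook itself uses `fasta2loc` instead):

```python
# Parse FASTA lines with one-line sequences into a {name: sequence} dict.
def read_fasta(lines):
    records, name = {}, None
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            name = line[1:]
            records[name] = ""
        elif name is not None:
            records[name] += line
    return records

example = [">positive0", "ACGT", ">negative0", "TTAA"]
print(read_fasta(example))  # {'positive0': 'ACGT', 'negative0': 'TTAA'}
```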
## Get locations
```
# slow!
locs = fasta2loc("./all_together.fa", genome)
```
### A few checks
```
len(locs.keys())
from Bio.Seq import Seq
def _rev(seq, strand):
# reverse complement
if strand == '-':
return str(Seq(seq).reverse_complement())
else:
return seq
_rev(genome['1'][959200:959451], "-")
locs_df = pd.DataFrame.from_dict(locs, orient='index', columns=['region','start','end','strand']).rename_axis('id')
positives_df = locs_df[locs_df.index.str.contains("positive")]
negatives_df = locs_df[locs_df.index.str.contains("negative")]
positives_df.shape, negatives_df.shape
positives_df['region'] = "chr" + positives_df['region']
negatives_df['region'] = "chr" + negatives_df['region']
```
## Train/test split
```
train_positives, test_positives = train_test_split(positives_df, shuffle=True, random_state=42)
train_positives.shape, test_positives.shape
train_negatives, test_negatives = train_test_split(negatives_df, shuffle=True, random_state=42)
train_negatives.shape, test_negatives.shape
```
## YAML file
```
BASE_FILE_PATH = Path("../../datasets/human_enhancers_cohn/")
# copied from https://stackoverflow.com/a/57892171
def rm_tree(pth: Path):
for child in pth.iterdir():
if child.is_file():
child.unlink()
else:
rm_tree(child)
pth.rmdir()
if BASE_FILE_PATH.exists():
rm_tree(BASE_FILE_PATH)
BASE_FILE_PATH.mkdir()
(BASE_FILE_PATH / 'train').mkdir()
(BASE_FILE_PATH / 'test').mkdir()
with open(BASE_FILE_PATH / 'metadata.yaml', 'w') as fw:
desc = {
'version': 0,
'classes': {
'positive': {
'type': 'fa.gz',
'url': 'http://ftp.ensembl.org/pub/release-97/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.toplevel.fa.gz',
'extra_processing': 'ENSEMBL_HUMAN_GENOME'
},
'negative': {
'type': 'fa.gz',
'url': 'http://ftp.ensembl.org/pub/release-97/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.toplevel.fa.gz',
'extra_processing': 'ENSEMBL_HUMAN_GENOME'
}
}
}
yaml.dump(desc, fw)
desc
```
## CSV files
```
train_positives.to_csv(BASE_FILE_PATH / 'train' / 'positive.csv.gz', index=False, compression='gzip')
train_negatives.to_csv(BASE_FILE_PATH / 'train' / 'negative.csv.gz', index=False, compression='gzip')
test_positives.to_csv(BASE_FILE_PATH / 'test' / 'positive.csv.gz', index=False, compression='gzip')
test_negatives.to_csv(BASE_FILE_PATH / 'test' / 'negative.csv.gz', index=False, compression='gzip')
# hotfix - adding ids
BASE_FILE_PATH = Path("../../datasets/human_enhancers_cohn/")
def add_ids(df, prefix):
ids = prefix + df.index.map(str)
df.insert(loc=0, column='id', value=ids)
train_positives = pd.read_csv(BASE_FILE_PATH / 'train' / 'positive.csv.gz')
add_ids(train_positives, "train_positive_")
train_positives.to_csv(BASE_FILE_PATH / 'train' / 'positive.csv.gz', index=False, compression='gzip')
train_negatives = pd.read_csv(BASE_FILE_PATH / 'train' / 'negative.csv.gz')
add_ids(train_negatives, "train_negative_")
train_negatives.to_csv(BASE_FILE_PATH / 'train' / 'negative.csv.gz', index=False, compression='gzip')
test_positives = pd.read_csv(BASE_FILE_PATH / 'test' / 'positive.csv.gz')
add_ids(test_positives, "test_positive_")
test_positives.to_csv(BASE_FILE_PATH / 'test' / 'positive.csv.gz', index=False, compression='gzip')
test_negatives = pd.read_csv(BASE_FILE_PATH / 'test' / 'negative.csv.gz')
add_ids(test_negatives, "test_negative_")
test_negatives.to_csv(BASE_FILE_PATH / 'test' / 'negative.csv.gz', index=False, compression='gzip')
```
## Cleaning
```
!rm all_together.fa Enhancers_vs_negative.tgz
!rm -rf Human
```
## Testing
```
from genomic_benchmarks.loc2seq import download_dataset
download_dataset("human_enhancers_cohn")
from genomic_benchmarks.data_check import info
info("human_enhancers_cohn", 0)
```
<!--COURSE_INFORMATION-->
<img align="left" style="padding-right:10px;" src="https://user-images.githubusercontent.com/16768318/73986808-75b3ca00-4936-11ea-90f1-3a6c352766ce.png" width=10% >
<img align="right" style="padding-left:10px;" src="https://user-images.githubusercontent.com/16768318/73986811-764c6080-4936-11ea-9653-a3eacc47caed.png" width=10% >
**Welcome!** This *Colab notebook* is part of the course [**Introduccion a Google Earth Engine con Python**](https://github.com/csaybar/EarthEngineMasterGIS) developed by the [**MasterGIS**](https://www.mastergis.com/) team. Learn more about the course at this [**link**](https://www.mastergis.com/product/google-earth-engine/). The course content is available on [**GitHub**](https://github.com/csaybar/EarthEngineMasterGIS) under the [**MIT**](https://opensource.org/licenses/MIT) license.
```
```
### **Worked Exercise: Spectral Index Map - NDVI**
<img src="https://user-images.githubusercontent.com/60658810/112754764-28177800-8fa3-11eb-8952-a6aa210c259f.JPG" >
```
# Authenticate and initialize Google Earth Engine
import ee
ee.Authenticate()
ee.Initialize()
#@title mapdisplay: create interactive maps using folium
import folium
def mapdisplay(center, dicc, Tiles="OpensTreetMap",zoom_start=10):
'''
:param center: Center of the map (Latitude and Longitude).
:param dicc: Earth Engine Geometries or Tiles dictionary
:param Tiles: Mapbox Bright,Mapbox Control Room,Stamen Terrain,Stamen Toner,stamenwatercolor,cartodbpositron.
:zoom_start: Initial zoom level for the map.
:return: A folium.Map object.
'''
center = center[::-1]
mapViz = folium.Map(location=center,tiles=Tiles, zoom_start=zoom_start)
for k,v in dicc.items():
if ee.image.Image in [type(x) for x in v.values()]:
folium.TileLayer(
tiles = v["tile_fetcher"].url_format,
attr = 'Google Earth Engine',
overlay =True,
name = k
).add_to(mapViz)
else:
folium.GeoJson(
data = v,
name = k
).add_to(mapViz)
mapViz.add_child(folium.LayerControl())
return mapViz
```
### **1. Load vector data**
```
colombia = ee.FeatureCollection('users/sergioingeo/Colombia/Col')
colombia_img = colombia.draw(color = "000000", strokeWidth = 3, pointRadius = 3)
centroide = colombia.geometry().centroid().getInfo()['coordinates']
# Display the ROI
mapdisplay(centroide,{'colombia':colombia_img.getMapId()},zoom_start= 6)
```
### **2. Load raster data (images)**
```
# Landsat 8 "Surface Reflectance" image collection
coleccion = ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")\
.filterDate('2018-01-01', '2018-12-31')\
.filterBounds(colombia)\
.filterMetadata('CLOUD_COVER' ,'less_than',50)
```
### **3. Computing the normalized index**
Use `.normalizedDifference` to complete this exercise.
```
def ndvi(image):
return image.normalizedDifference([ 'B5' ,'B4']).rename('NDVI')
ndvi = coleccion.map(ndvi).mean().clip(colombia)
palette = [
'FFFFFF','CE7E45','DF923D','F18555','FCD163','998718',
'74A901','66A000','529400','3E8601','207401','056201',
'004C00','023801','012E01','011D01','011D01','011301']
NDVI= ndvi.getMapId({'min': 0, 'max': 1, 'palette':palette })
mapdisplay(centroide,{'NDVI':NDVI },zoom_start= 6)
```
### **4. Export the results (from Google Earth Engine to Google Drive)**
**ee.batch.Export.table.toDrive():** Saves a FeatureCollection as a shapefile in Google Drive.
**ee.batch.Export.image.toDrive():** Saves an Image as a GeoTIFF in Google Drive.
```
# Where
# image: the raster image containing the index information
# description: the name the file will have in Google Drive.
# folder: the folder that will be created in Google Drive.
# region: the area of the created product that will be exported.
# maxPixels: raises or limits the maximum number of pixels that can be exported.
# scale: the pixel size, in meters, of the exported image.
task = ee.batch.Export.image.toDrive(
image= ndvi,
description='NDVI_Colombia',
folder='TareaMASTERGIS',
scale= 1000,
region = colombia.geometry(),
maxPixels = 1e13)
task.start()
from google.colab import drive
drive.mount('/content/drive')
#@title Final course message
%%html
<marquee style='width: 30%; color: blue;'><b>THANK YOU VERY MUCH! I HOPE YOU HAD FUN TAKING THIS COURSE :3 ... UNTIL NEXT TIME</b></marquee>
```
### **Questions about this Jupyter Notebook?**
We will be happy to help you! Create a GitHub account if you do not have one, then describe your problem in detail at: https://github.com/csaybar/EarthEngineMasterGIS/issues
**You have to click the green button!**
<center>
<img src="https://user-images.githubusercontent.com/16768318/79680748-d5511000-81d8-11ea-9f89-44bd010adf69.png" width = 70%>
</center>
| github_jupyter |
# 1. Attitude Estimation by Accumulating the Quaternion Rate
- Usable when the Earth's rotation can be ignored; in that case the quaternion rate can be simplified and expressed as follows.
$$
\begin{aligned}
{\dot q} &= \frac{1}{2}Q\tilde\omega^b\\\\
\begin{bmatrix}
\dot q_1\\
\dot q_2\\
\dot q_3\\
\dot q_4\\
\end{bmatrix}
&=
\frac{1}{2}
\begin{bmatrix}
q_1& -q_2& -q_3& -q_4\\
q_2& q_1& -q_4& q_3\\
q_3& q_4& q_1& -q_2\\
q_4& -q_3& q_2& q_1\\
\end{bmatrix}
\begin{bmatrix}
0\\
\omega_x\\
\omega_y\\
\omega_z\\
\end{bmatrix} \qquad (1) \\\\\\
\dot q &= \frac{1}{2}\Omega^b q\\\\
\begin{bmatrix}
\dot q_1\\
\dot q_2\\
\dot q_3\\
\dot q_4\\
\end{bmatrix}
&=
\frac{1}{2}
\begin{bmatrix}
0&-\omega_x&-\omega_y&-\omega_z\\
\omega_x&0&\omega_z&-\omega_y\\
\omega_y&-\omega_z&0&\omega_x\\
\omega_z&\omega_y&-\omega_x&0\\
\end{bmatrix}
\begin{bmatrix}
q_1\\
q_2\\
q_3\\
q_4
\end{bmatrix} \qquad (2)
\end{aligned}
$$
$$
\begin{aligned}
\\\\q_t&=q_{t-1} + \dot q T_s\\
q_t&=\frac{q_t}{||q_t||}\\
||q_t||&=\sqrt{q_1^2+q_2^2+q_3^2+q_4^2}
\end{aligned}
$$
- The most important step when working with quaternions is to normalize the quaternion after every update.
- Orientation estimation by quaternion accumulation gives identical results whether expression (1) or expression (2) is used.
## Program
```
import numpy as np
from scipy.io import loadmat
from math import sin, cos, tan
import matplotlib.pyplot as plt
from navimath import *
# Dataset selection
# f_number
# 1: Example data provided by Magdwich
# 2: Real IMU data provided by Witmotion
# 3: IMU data provided by Understanding Kalman filter
f_number = 2
if f_number == 1:
    # Example Data
    ExData1 = loadmat(r'..\Data\ExampleData.mat')
    Gyroscope = np.deg2rad(ExData1['Gyroscope'])
    Accelerometer = ExData1['Accelerometer']
    Magnetometer = ExData1['Magnetometer']
    time = ExData1['time']
    Ts = time[1]-time[0]
    # System model noise covariance
    Q = np.zeros((4,4))
    Q[0, 0] = 1     # Roll angle uncertainty
    Q[1, 1] = 1     # Pitch angle uncertainty
    Q[2, 2] = 1     # Yaw angle uncertainty
    Q[3, 3] = 1     # Yaw angle uncertainty
    # Measurement noise covariance
    R = np.zeros((4,4))
    R[0, 0] = 0.01  # Accelerometer measurement uncertainty
    R[1, 1] = 0.01  # Accelerometer measurement uncertainty
    R[2, 2] = 0.01  # Magnetometer measurement uncertainty
    R[3, 3] = 0.01  # Magnetometer measurement uncertainty
    mu0 = np.array([1.0, 0.0, 0.0, 0.0])
    sigma0 = np.eye(4)
    totalLen = Accelerometer.shape[0]
elif f_number == 2:
    # Example Data
    ExData1 = loadmat(r'..\Data\WitMotion_IMU_Data.mat')
    Gyroscope = np.deg2rad(ExData1['Gyroscope'])
    Accelerometer = ExData1['Accelerometer']
    Magnetometer = ExData1['Magnetometer']
    Euler_Truth = ExData1['Euler']
    Ts = 1
    # System model noise covariance
    Q = np.zeros((4,4))
    Q[0, 0] = 1
    Q[1, 1] = 1
    Q[2, 2] = 1
    Q[3, 3] = 1
    # Measurement noise covariance
    R = np.zeros((4,4))
    R[0, 0] = 10
    R[1, 1] = 10
    R[2, 2] = 10
    R[3, 3] = 10
    mu0 = euler_to_quaternoin(Gyroscope[0,:])
    sigma0 = np.eye(4)
    totalLen = Accelerometer.shape[0]
else:
    ArsAccel = loadmat(r'..\Data\ArsAccel.mat')
    ArsGyro = loadmat(r'..\Data\ArsGyro.mat')
    Gyroscope = np.zeros((41500, 3))
    Accelerometer = np.zeros((41500, 3))
    Gyroscope[:,0] = ArsGyro['wx'][:,0]
    Gyroscope[:,1] = ArsGyro['wy'][:,0]
    Gyroscope[:,2] = ArsGyro['wz'][:,0]
    Accelerometer[:,0] = ArsAccel['fx'][:,0]
    Accelerometer[:,1] = ArsAccel['fy'][:,0]
    Accelerometer[:,2] = ArsAccel['fz'][:,0]
    Ts = 0.01
    # System model noise covariance
    Q = np.zeros((4,4))
    Q[0, 0] = 0.0001
    Q[1, 1] = 0.0001
    Q[2, 2] = 0.0001
    Q[3, 3] = 0.0001
    # Measurement noise covariance
    R = np.zeros((4,4))
    R[0, 0] = 10
    R[1, 1] = 10
    R[2, 2] = 10
    R[3, 3] = 10
    mu0 = np.array([1.0, 0., 0., 0.])
    sigma0 = np.eye(4)
    totalLen = 41500
```
## Quaternion attitude computation based on expression (1)
```
# Accumulate the quaternion based on equation (1) to compute the attitude.
# Initialize variables
q = np.array([1.0, 0.0, 0.0, 0.0]) # initial quaternion value
dot_q = np.zeros((4))
q_hist = np.zeros((totalLen, 4))
Euler_hist_q_radian = np.zeros((totalLen, 3))
Euler_hist_q_deg1 = np.zeros((totalLen, 3))
for i in range(totalLen):
    wx = Gyroscope[i, 0]
    wy = Gyroscope[i, 1]
    wz = Gyroscope[i, 2]
    wb = np.array([0.0, wx, wy, wz])
    q1 = q[0]
    q2 = q[1]
    q3 = q[2]
    q4 = q[3]
    Q = np.array([[q1,-q2,-q3,-q4],[q2,q1,-q4,q3],[q3,q4,q1,-q2],[q4,-q3,q2,q1]])
    # Quaternion rate
    dot_q = (1.0/2.0)*(np.matmul(Q, wb))
    # Quaternion accumulation
    q = q + dot_q*Ts
    # Quaternion normalization
    q = q/norm(q)
    q_hist[i,:] = q
    Euler_hist_q_radian[i,:] = quaternion_to_euler(q)
    Euler_hist_q_deg1[i,:] = np.rad2deg(quaternion_to_euler(q))
```
## Quaternion attitude computation based on expression (2)
```
# Accumulate the quaternion based on equation (2) to compute the attitude.
# Initialize variables
q = np.array([1.0, 0.0, 0.0, 0.0]) # initial quaternion value
dot_q = np.zeros((4))
q_hist = np.zeros((totalLen, 4))
Euler_hist_q_radian = np.zeros((totalLen, 3))
Euler_hist_q_deg2 = np.zeros((totalLen, 3))
for i in range(totalLen):
    wx = Gyroscope[i, 0]
    wy = Gyroscope[i, 1]
    wz = Gyroscope[i, 2]
    Omega = np.array([[0.0,-wx, -wy, -wz],[wx, 0.0, wz, -wy],[wy, -wz, 0.0, wx],[wz, wy, -wx, 0.0]])
    # Quaternion rate
    dot_q = (1.0/2.0)*(np.matmul(Omega, q))
    # Quaternion accumulation
    q = q + dot_q*Ts
    # Quaternion normalization
    q = q/norm(q)
    q_hist[i,:] = q
    Euler_hist_q_radian[i,:] = quaternion_to_euler(q)
    Euler_hist_q_deg2[i,:] = np.rad2deg(quaternion_to_euler(q))
plt.figure(figsize=(15,5))
plt.plot(Euler_hist_q_deg1)
plt.legend(['Roll','Pitch','Yaw'])
plt.grid()
plt.xlabel('Step')
plt.ylabel('Euler angle (degree)')
plt.xlim([0,totalLen])
plt.title('Attitude Estimation using Quaternion Accumulation with equation (1)')
plt.figure(figsize=(15,5))
plt.plot(Euler_hist_q_deg2)
plt.legend(['Roll','Pitch','Yaw'])
plt.grid()
plt.xlabel('Step')
plt.ylabel('Euler angle (degree)')
plt.xlim([0,totalLen])
plt.title('Attitude Estimation using Quaternion Accumulation with equation (2)')
```
<a href="https://colab.research.google.com/github/clemencia/ML4PPGF_UERJ/blob/master/Amostragem_e_integracao_MC.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Monte Carlo Sampling by the Inversion Method
### Example 1: ***Sampling the Exponential distribution***
The exponential PDF is defined as:
\begin{equation*}
f(t) =
\begin{cases}
\frac{1}{\tau}e^{-t/\tau} \text{, se } t\geq 0 \\
0\text{, se } t<0
\end{cases}
\end{equation*}
For this distribution the CDF is given by
\begin{equation*}
F(t) = \int_{0}^t \frac{1}{\tau}e^{-t'/\tau} dt' = 1 - e^{-t/\tau}
\end{equation*}
As we know, applying the inverse of the CDF, $F^{-1}(u)$, to a uniform random variable yields the target distribution. Thus, inverting $u=F(t)$, we obtain
\begin{equation*}
u = 1 - e^{-t/\tau} \Rightarrow \boxed{ t = -\tau \ln{(1-u)} }
\end{equation*}
Histogram of the sampled exponential distribution:
```
#@title
# MC sampling of an Exponential distribution by the Inversion Method
from numpy.random import random
from numpy import log
import matplotlib.pyplot as plt
plt.style.use('default') # for nicer plots
# number of points
npts=10000
# number of histogram bins
nbins=200
# Mean lifetime
tau=0.1
# Inverse CDF function
def invCDF(u):
    return -tau*log(1-u)
# Sample the inverse CDF using uniform draws
sample = [invCDF(random()) for _ in range(npts)]
# plot the distribution of the inverse CDF applied to uniform draws
plt.hist(sample,bins=nbins)
plt.suptitle("Sample of the exponential distribution with "+str(npts)+" points")
plt.show()
```
Histogram of the sampled exponential distribution, with the exponential PDF overlaid:
```
# MC sampling of an Exponential distribution by the Inversion Method
import numpy as np
from numpy.random import random
from scipy.stats import expon
import matplotlib.pyplot as plt
plt.style.use('default') # for nicer plots
# Inverse CDF function
# As good Python practice, we define all functions at the top of the script!
def invCDF(u):
    # u and 1-u are both uniform on (0,1), so this is equivalent to -tau*log(1-u)
    return -tau*np.log(u)
# list with different numbers of points
npts=[10, 100, 1000, 10000]
# number of histogram bins
nbins = 100
# Mean lifetime
tau=0.1
# Create a figure that contains overlaid subplots (Histogram and Exponential)
fig, ax = plt.subplots(len(npts), sharex=True)
ax[0].set_title('Exponential Distribution')
# create a plot of the exponential distribution
x = np.linspace(0,1,1000)
y = expon.pdf(x, scale=tau) # equivalent to y = (1/tau)*np.exp(-x/tau)
# Sample the inverse CDF using uniform draws
for i in range(len(npts)):
    sample = [invCDF(random()) for _ in range(npts[i])]
    # create a histogram with the sampled data
    ax[i].hist(sample,bins=nbins,density=1,label=str(npts[i])+" pts")
    ax[i].plot(x,y,color='red',linewidth=1)
    ax[i].legend(loc="upper right")
# show plot
plt.show()
```
### Example 2: ***Creating a Breit-Wigner distribution by sampling***
The Breit-Wigner PDF is defined as
\begin{equation*}
f(x) =\frac{2}{\pi}\frac{\Gamma}{4(x-a)^2+\Gamma^2}
\end{equation*}
For this distribution the CDF is given by
\begin{equation*}
F(x) = \frac{2}{\pi \Gamma} \int_{-\infty}^x \frac{\Gamma^2}{4(x'-a)^2+\Gamma^2} dx'
\end{equation*}
To carry out the integration, we make the change of variables $y=2(x-a)/\Gamma \Rightarrow dy=(2/\Gamma)dx$, obtaining
\begin{align*}
F(x) &= \frac{1}{\pi} \int_{-\infty}^{2(x-a)/\Gamma } \frac{1}{y^2+1} dy = \frac{1}{\pi} \arctan{(y)}~\bigg|_{y=-\infty}^{y=2(x-a)/\Gamma} \\
&= \frac{ \arctan{\left( 2(x-a)/\Gamma \right)} }{\pi} +\frac{1}{2}
\end{align*}
Inverting the CDF $u=F(x)$, we obtain
\begin{align*}
F^{-1}(u) &= a+ \frac{\Gamma}{2} \tan{\left[ \pi \left( u - \frac{1}{2} \right) \right]}
\end{align*}
Plot of the sampling histogram overlaid with the Breit-Wigner PDF:
```
# MC sampling of a Breit-Wigner distribution by the Inversion Method
import numpy as np
from numpy.random import random
import matplotlib.pyplot as plt
# Inverse CDF function
def invCDF(u, a, gamma):
    return ( a+0.5*gamma*np.tan( np.pi*(u-0.5) ) )
# number of points
npts=10000
# number of histogram bins
nbins=200
# Breit-Wigner parameters (width and pole)
gamma=0.1
a=0
# sample the inverse CDF using uniform draws
sample = [invCDF(random(), a, gamma) for _ in range(npts)]
# create the figure containing overlaid subplots (Histogram and B-W)
fig, ax = plt.subplots()
ax.set_title('Breit-Wigner Distribution')
# create a histogram with the data
xmin=a-10*gamma
xmax=a+10*gamma
ax.hist(sample,bins=nbins,range=[xmin,xmax],density=1,label=str(npts)+" pts")
ax.legend(loc="upper right")
# create a plot of the B-W distribution
x = np.linspace(xmin,xmax,1000)
y = 2/np.pi * gamma/( 4*(x-a)**2 + gamma**2 )
ax.plot(x,y,color='red',linewidth=1)
# show plot
plt.show()
```
## Monte Carlo Sampling by the Acceptance-Rejection Method
### Example 1: ***Sampling the Gaussian distribution***
The Gaussian PDF is defined as
\begin{equation*}
f(x) = \frac{1}{ \sqrt{2\pi} \sigma } e^{-\frac{x^2}{2\sigma^2} }
\end{equation*}
This function has a CDF that cannot be expressed in terms of elementary functions, and its inverse is very hard to find. It is quite simple, however, to sample it using the acceptance-rejection method.
```
# MC Sampling of a Gaussian distribution by Acceptance-Rejection Method
import numpy as np
from numpy.random import random
import matplotlib.pyplot as plt
from scipy.stats import norm
# number of points
npts=10000
# number of histogram bins
nbins=200
# Gaussian width and mean
sigma=3.0
mean=0
# Gaussian function definition
def gaussian(u):
    return 1/(( 2*np.pi )**0.5*sigma)*np.exp( -(u-mean)**2/(2*sigma**2) )
# Sampling range
xmin=mean-10*sigma
xmax=mean+10*sigma
ymax=gaussian(mean)
# Accept or Reject the points
sample=[]
naccept=0
ntot=0
while naccept<npts:
    ntot+=1
    x=np.random.uniform(xmin,xmax) # x'
    y=np.random.uniform(0,ymax)    # y'
    if y<gaussian(x):
        sample.append(x)
        naccept+=1
# Create a numpy array with the list of selected points
sample=np.array(sample)
# create a figure containing overlaid subplots (Histogram and Exponential)
fig, ax = plt.subplots()
ax.set_title('Gaussian Distribution')
# create a histogram with sampled data
ax.hist(sample,nbins,range=[xmin,xmax],density=1)
# create a plot of the exponential distribution
x = np.linspace(xmin,xmax,1000)
y = gaussian(x)
ax.plot(x,y,color='red',linewidth=1)
# show plot
plt.show()
print("Sampling Efficiency=",int(naccept/ntot*100),"%")
```
## Efficient rejection sampling (MC Importance Sampling)
**Problem 1**: How can importance sampling be used to improve the efficiency of the Gaussian sampling? **Use the Box-Muller method as a Gaussian random number generator**
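For Problem 1, a minimal Box-Muller sketch (the function name and interface here are illustrative, not from the notebook): each pair of uniform draws is transformed into two independent Gaussian samples, so no candidates are ever rejected.

```python
import numpy as np

def box_muller(n, mean=0.0, sigma=1.0, seed=None):
    """Generate n Gaussian samples from pairs of uniform draws (Box-Muller)."""
    rng = np.random.default_rng(seed)
    m = (n + 1) // 2                       # each pair of uniforms yields 2 Gaussians
    u1 = rng.random(m)
    u2 = rng.random(m)
    r = np.sqrt(-2.0 * np.log(1.0 - u1))   # 1 - u1 avoids log(0)
    z = np.concatenate([r * np.cos(2.0 * np.pi * u2),
                        r * np.sin(2.0 * np.pi * u2)])
    return mean + sigma * z[:n]

sample = box_muller(100000, mean=0.0, sigma=3.0, seed=42)
```

Because every draw is accepted, the sampling efficiency is 100%, in contrast with the plain acceptance-rejection Gaussian sampler above.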
**Problem 2**: Find an algorithm that generates a random variable x, distributed according to the following PDF:
$f(x) = Ce^{-\lambda x}\cos^2(kx)$
where C is a normalization constant and $\lambda$ and $k$ are two known parameters.
We can take the function $Ce^{-\lambda x}$ as the "envelope" function $g(x)$ and generate random numbers according to the exponential distribution by the ***inversion method***. Then, with the ***acceptance-rejection method***, we accept or reject $x$ according to $\cos^2 kx$. Since the two processes are independent, the resulting probability distribution is the product of the two distributions.
To obtain the random variable $x$ of the envelope distribution with the inversion method, we start from the exponential distribution, whose CDF is given by:
$F(x) = \int_{0}^{x}g(x')dx' = \int_{0}^{x}\lambda e^{-\lambda x'} dx' = 1-e^{-\lambda x}$
Inverting $F(x)$, we get
$r = 1 - e^{-\lambda x} \Rightarrow x = -\ln(1 - r)/\lambda$
In summary, to obtain the distribution of $f(x)$ by the importance-sampling method:
1 - Generate $r$ uniformly in [0,1) ;
2 - Compute $x = -\ln(1 - r)/\lambda$ ;
3 - Generate $s$ uniformly in [0,1) ;
4 - If $s < \cos^2 kx$, keep $x$; otherwise, return to step 1.
```
# MC importance sampling: exponential envelope (inversion method) + cos^2 acceptance
import numpy as np
from numpy.random import random
from numpy import log
import matplotlib.pyplot as plt
plt.style.use('default') # for nicer plots
# Inverse CDF function
def invCDF(u, lamb):
    return -log(1-u)/lamb
# Target function
def f(x, lamb, k):
    return (1/np.pi)*np.exp(-lamb*x)*(np.cos(k*x))**2
# number of points
npts=10000
# number of histogram bins
nbins=200
# Mean lifetime
lamb=0.1
# k
k = 1
xmin=invCDF(0, lamb)
xmax=invCDF(.9999, lamb)
print(xmin, xmax)
# Sample resulting from the acceptance-rejection step
sample = []
naccept=0
ntot=0
while naccept<npts:
    ntot+=1
    s=np.random.uniform(0,1)                # s
    x=invCDF(np.random.uniform(0,1), lamb)  # x drawn from the exponential envelope
    if s<(np.cos(k*x))**2:                  # accept x according to cos^2(kx), as in step 4
        sample.append(x)
        naccept+=1
# plot the histogram of the accepted sample
plt.hist(sample,bins=nbins, range=[xmin, xmax], density=1,label=str(npts)+" pts")
# plot the target distribution
x = np.linspace(xmin,xmax,nbins)
y = f(x, lamb, k)
plt.plot(x, y, color='red',linewidth=1)
plt.suptitle("Efficient rejection (importance) sampling")
plt.legend(loc="upper right")
plt.show()
print("efficiency = {} %".format(naccept/ntot*100))
```
Distribution of $f(x)$ by the plain acceptance-rejection method:
```
# MC sampling of f(x) by the plain Acceptance-Rejection Method
import numpy as np
from numpy.random import random
from numpy import log
import matplotlib.pyplot as plt
plt.style.use('default') # for nicer plots
# Target function
def f(x, lamb, k):
    return (1/np.pi)*np.exp(-lamb*x)*(np.cos(k*x))**2
# number of points
npts=10000
# number of histogram bins
nbins=200
# Mean lifetime
lamb=0.1
# k
k = 1
xmin=0
xmax=40
print(xmin, xmax)
# Sample resulting from the acceptance-rejection method
sample = []
naccept=0
ntot=0
while naccept<npts:
    ntot+=1
    x=np.random.uniform(xmin,xmax) # candidate x
    y=np.random.uniform(0,1)       # uniform height
    if y<f(x, lamb, k):
        sample.append(x)
        naccept+=1
# plot the histogram of the accepted sample
plt.hist(sample,bins=nbins, range=[xmin, xmax], density=True,label=str(npts)+" pts")
# plot the target distribution
x = np.linspace(xmin,xmax,nbins)
y = f(x, lamb, k)
plt.plot(x, y, color='red',linewidth=1)
plt.suptitle("Acceptance-rejection sampling")
plt.legend(loc="upper right")
plt.show()
print("efficiency = {} %".format(naccept/ntot*100))
```
Note that the efficiency of the plain acceptance-rejection method is lower than the efficiency of the importance-sampling method.
### Exercise:
In the ***efficient rejection sampling*** (importance sampling) examples above, the normalization constant $C$ of the function $f(x)$ is set to $1/\pi$, which is not correct. Implement the correct normalization constant in the code above: $C=1/Z$, where $Z=\int_{x_{min}}^{x_{max}}f(x)dx$ is the area of the function over the given interval.
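One possible way to tackle this exercise, sketched with `scipy.integrate.quad` (the variable names are illustrative; this is a sketch, not an official solution):

```python
import numpy as np
from scipy.integrate import quad

lamb, k = 0.1, 1.0        # same parameters as in the examples above
xmin, xmax = 0.0, 40.0    # integration interval

def f_unnorm(x):
    # un-normalized target: exp(-lambda*x) * cos^2(k*x)
    return np.exp(-lamb * x) * np.cos(k * x) ** 2

Z, _ = quad(f_unnorm, xmin, xmax)   # area of f over [xmin, xmax]
C = 1.0 / Z                         # correct normalization constant

def f(x):
    return C * f_unnorm(x)

# sanity check: the normalized PDF integrates to ~1 on the interval
area, _ = quad(f, xmin, xmax)
```

Replacing the hard-coded `1/np.pi` with this `C` makes the red curve a proper PDF that matches the `density=True` histograms.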
```
# default_exp batchbald
# hide
import blackhc.project.script
from nbdev.showdoc import *
```
# BatchBALD Algorithm
> Greedy algorithm and score computation
First, we will implement two helper functions to compute conditional entropies $H[y_i|w]$ and entropies $H[y_i]$.
Then, we will implement BatchBALD and BALD.
```
# exports
import math
from dataclasses import dataclass
from typing import List
import torch
from toma import toma
from tqdm.auto import tqdm
from batchbald_redux import joint_entropy
```
We are going to define a couple of sampled distributions to use for testing our code.
$K=20$ means 20 inference samples.
```
K = 20
import numpy as np
def get_mixture_prob_dist(p1, p2, m):
    return (1.0 - m) * np.asarray(p1) + m * np.asarray(p2)
p1 = [0.7, 0.1, 0.1, 0.1]
p2 = [0.3, 0.3, 0.2, 0.2]
y1_ws = [get_mixture_prob_dist(p1, p2, m) for m in np.linspace(0, 1, K)]
p1 = [0.1, 0.7, 0.1, 0.1]
p2 = [0.2, 0.3, 0.3, 0.2]
y2_ws = [get_mixture_prob_dist(p1, p2, m) for m in np.linspace(0, 1, K)]
p1 = [0.1, 0.1, 0.7, 0.1]
p2 = [0.2, 0.2, 0.3, 0.3]
y3_ws = [get_mixture_prob_dist(p1, p2, m) for m in np.linspace(0, 1, K)]
p1 = [0.1, 0.1, 0.1, 0.7]
p2 = [0.3, 0.2, 0.2, 0.3]
y4_ws = [get_mixture_prob_dist(p1, p2, m) for m in np.linspace(0, 1, K)]
def nested_to_tensor(l):
    return torch.stack(list(map(torch.as_tensor, l)))
ys_ws = nested_to_tensor([y1_ws, y2_ws, y3_ws, y4_ws])
# hide
p = [0.25, 0.25, 0.25, 0.25]
yu_ws = [p for m in range(K)]
yus_ws = nested_to_tensor([yu_ws] * 4)
ys_ws.shape
```
## Conditional Entropies and Batched Entropies
To start with, we write two functions to compute the conditional entropy $H[y_i|w]$ and the entropy $H[y_i]$ for each input sample.
```
def compute_conditional_entropy(probs_N_K_C: torch.Tensor) -> torch.Tensor:
    N, K, C = probs_N_K_C.shape

    entropies_N = torch.empty(N, dtype=torch.double)

    pbar = tqdm(total=N, desc="Conditional Entropy", leave=False)

    @toma.execute.chunked(probs_N_K_C, 1024)
    def compute(probs_n_K_C, start: int, end: int):
        nats_n_K_C = probs_n_K_C * torch.log(probs_n_K_C)
        nats_n_K_C[probs_n_K_C == 0] = 0.0
        entropies_N[start:end].copy_(-torch.sum(nats_n_K_C, dim=(1, 2)) / K)
        pbar.update(end - start)

    pbar.close()
    return entropies_N

def compute_entropy(probs_N_K_C: torch.Tensor) -> torch.Tensor:
    N, K, C = probs_N_K_C.shape

    entropies_N = torch.empty(N, dtype=torch.double)

    pbar = tqdm(total=N, desc="Entropy", leave=False)

    @toma.execute.chunked(probs_N_K_C, 1024)
    def compute(probs_n_K_C, start: int, end: int):
        mean_probs_n_C = probs_n_K_C.mean(dim=1)
        nats_n_C = mean_probs_n_C * torch.log(mean_probs_n_C)
        nats_n_C[mean_probs_n_C == 0] = 0.0
        entropies_N[start:end].copy_(-torch.sum(nats_n_C, dim=1))
        pbar.update(end - start)

    pbar.close()
    return entropies_N
# Make sure everything is computed correctly.
assert np.allclose(compute_conditional_entropy(yus_ws), [1.3863, 1.3863, 1.3863, 1.3863], atol=0.1)
assert np.allclose(compute_entropy(yus_ws), [1.3863, 1.3863, 1.3863, 1.3863], atol=0.1)
```
However, our neural networks usually use a `log_softmax` as final layer. To avoid having to call `.exp_()`, which is easy to miss and annoying to debug, we will instead use a version that uses `log_probs` instead of `probs`.
```
# exports
def compute_conditional_entropy(log_probs_N_K_C: torch.Tensor) -> torch.Tensor:
    N, K, C = log_probs_N_K_C.shape

    entropies_N = torch.empty(N, dtype=torch.double)

    pbar = tqdm(total=N, desc="Conditional Entropy", leave=False)

    @toma.execute.chunked(log_probs_N_K_C, 1024)
    def compute(log_probs_n_K_C, start: int, end: int):
        nats_n_K_C = log_probs_n_K_C * torch.exp(log_probs_n_K_C)
        entropies_N[start:end].copy_(-torch.sum(nats_n_K_C, dim=(1, 2)) / K)
        pbar.update(end - start)

    pbar.close()
    return entropies_N

def compute_entropy(log_probs_N_K_C: torch.Tensor) -> torch.Tensor:
    N, K, C = log_probs_N_K_C.shape

    entropies_N = torch.empty(N, dtype=torch.double)

    pbar = tqdm(total=N, desc="Entropy", leave=False)

    @toma.execute.chunked(log_probs_N_K_C, 1024)
    def compute(log_probs_n_K_C, start: int, end: int):
        mean_log_probs_n_C = torch.logsumexp(log_probs_n_K_C, dim=1) - math.log(K)
        nats_n_C = mean_log_probs_n_C * torch.exp(mean_log_probs_n_C)
        entropies_N[start:end].copy_(-torch.sum(nats_n_C, dim=1))
        pbar.update(end - start)

    pbar.close()
    return entropies_N
# hide
# Make sure everything is computed correctly.
assert np.allclose(compute_conditional_entropy(yus_ws.log()), [1.3863, 1.3863, 1.3863, 1.3863], atol=0.1)
assert np.allclose(compute_entropy(yus_ws.log()), [1.3863, 1.3863, 1.3863, 1.3863], atol=0.1)
```
### Examples
```
conditional_entropies = compute_conditional_entropy(ys_ws.log())
print(conditional_entropies)
assert np.allclose(conditional_entropies, [1.2069, 1.2069, 1.2069, 1.2069], atol=0.01)
entropies = compute_entropy(ys_ws.log())
print(entropies)
assert np.allclose(entropies, [1.2376, 1.2376, 1.2376, 1.2376], atol=0.01)
```
## BatchBALD
To compute BatchBALD exactly for a candidate batch, we'd have to compute $I[(y_b)_B;w] = H[(y_b)_B] - H[(y_b)_B|w]$.
As the $y_b$ are independent given $w$, we can simplify $H[(y_b)_B|w] = \sum_b H[y_b|w]$.
Furthermore, we use a greedy algorithm to build up the candidate batch, so $y_1,\dots,y_{B-1}$ will stay fixed as we determine $y_{B}$. We compute
$H[(y_b)_{B-1}, y_i] - H[y_i|w]$ for each pool element $y_i$ and add the highest scorer as $y_{B}$.
We don't utilize the last optimization here in order to compute the actual scores.
### In the Paper

### Implementation
```
# exports
@dataclass
class CandidateBatch:
    scores: List[float]
    indices: List[int]

def get_batchbald_batch(
    log_probs_N_K_C: torch.Tensor, batch_size: int, num_samples: int, dtype=None, device=None
) -> CandidateBatch:
    N, K, C = log_probs_N_K_C.shape

    batch_size = min(batch_size, N)

    candidate_indices = []
    candidate_scores = []

    if batch_size == 0:
        return CandidateBatch(candidate_scores, candidate_indices)

    conditional_entropies_N = compute_conditional_entropy(log_probs_N_K_C)

    batch_joint_entropy = joint_entropy.DynamicJointEntropy(
        num_samples, batch_size - 1, K, C, dtype=dtype, device=device
    )

    # We always keep these on the CPU.
    scores_N = torch.empty(N, dtype=torch.double, pin_memory=torch.cuda.is_available())

    for i in tqdm(range(batch_size), desc="BatchBALD", leave=False):
        if i > 0:
            latest_index = candidate_indices[-1]
            batch_joint_entropy.add_variables(log_probs_N_K_C[latest_index : latest_index + 1])

        shared_conditional_entropies = conditional_entropies_N[candidate_indices].sum()

        batch_joint_entropy.compute_batch(log_probs_N_K_C, output_entropies_B=scores_N)
        scores_N -= conditional_entropies_N + shared_conditional_entropies
        scores_N[candidate_indices] = -float("inf")

        candidate_score, candidate_index = scores_N.max(dim=0)

        candidate_indices.append(candidate_index.item())
        candidate_scores.append(candidate_score.item())

    return CandidateBatch(candidate_scores, candidate_indices)
```
### Example
```
get_batchbald_batch(ys_ws.log().double(), 4, 1000, dtype=torch.double)
```
## BALD
BALD is the same as BatchBALD, except that we evaluate points individually, by computing $I[y_i;w]$ for each, and then take the top $B$ scorers.
```
# exports
def get_bald_batch(log_probs_N_K_C: torch.Tensor, batch_size: int, dtype=None, device=None) -> CandidateBatch:
    N, K, C = log_probs_N_K_C.shape

    batch_size = min(batch_size, N)

    scores_N = -compute_conditional_entropy(log_probs_N_K_C)
    scores_N += compute_entropy(log_probs_N_K_C)

    candidate_scores, candidate_indices = torch.topk(scores_N, batch_size)

    return CandidateBatch(candidate_scores.tolist(), candidate_indices.tolist())
```
### Example
```
get_bald_batch(ys_ws.log().double(), 4)
```
# Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the **MNIST** data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using [TFLearn](http://tflearn.org/), a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
```
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
```
## Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, **X** and their corresponding labels **Y**.
We're going to want our labels as *one-hot vectors*, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
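As a small illustrative sketch (not part of the original notebook's code), converting a label to a one-hot vector takes only a couple of lines of NumPy:

```python
import numpy as np

def to_one_hot(label, num_classes=10):
    # Start from all zeros and set a single 1 at the label's index.
    vec = np.zeros(num_classes)
    vec[label] = 1.0
    return vec

print(to_one_hot(4))  # the 1.0 sits at index 4, zeros everywhere else
```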
### Flattened data
For this example, we'll be using *flattened* data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
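Flattening itself is a one-line NumPy operation; here is a quick sketch with a dummy array standing in for an MNIST image:

```python
import numpy as np

image = np.zeros((28, 28))        # a dummy 28x28 "image"
flat = image.reshape(-1)          # 1D vector of 784 pixel values
print(flat.shape)                 # (784,)

# A whole batch of images flattens the same way (100 stands in for 55000 here):
batch = np.zeros((100, 28, 28)).reshape(100, -1)
print(batch.shape)                # (100, 784)
```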
```
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
```
## Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function `show_digit` will display that training image along with its corresponding label in the title.
```
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
    label = trainY[index].argmax(axis=0)
    # Reshape 784 array into 28x28 image
    image = trainX[index].reshape([28,28])
    plt.title('Training data, index: %d, Label: %d' % (index, label))
    plt.imshow(image, cmap='gray_r')
    plt.show()
# Display a training image (here, index 1000)
show_digit(1000)
```
## Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
1. The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
2. Hidden layers, which recognize patterns in data and connect the input to the output layer, and
3. The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
```
net = tflearn.input_data([None, 100])
```
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need **784 input units**.
### Adding layers
To add new hidden layers, you use
```
net = tflearn.fully_connected(net, n_units, activation='ReLU')
```
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument `net` is the network you created in the `tflearn.input_data` call; it designates the input to the hidden layer. You can set the number of units in the layer with `n_units`, and set the activation function with the `activation` keyword. You can keep adding layers to your network by repeatedly calling `tflearn.fully_connected(net, n_units)`.
Then, to set how you train the network, use:
```
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
```
Again, this is passing in the network you've been building. The keywords:
* `optimizer` sets the training method, here stochastic gradient descent
* `learning_rate` is the learning rate
* `loss` determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with `tflearn.DNN(net)`.
**Exercise:** Below in the `build_model()` function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
**Hint:** The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a `softmax` activation layer as your final output layer.
```
# Define the neural network
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()

    net = tflearn.input_data([None, 784])
    net = tflearn.fully_connected(net, 500, activation='ELU')
    net = tflearn.fully_connected(net, 200, activation='ELU')
    net = tflearn.fully_connected(net, 10, activation='ELU')
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='adam',
                             learning_rate=.001,
                             loss='categorical_crossentropy')
    model = tflearn.DNN(net)
    return model

# Build the model
with tf.device('/gpu:0'):
    model = build_model()
```
## Training the network
Now that we've constructed the network, saved as the variable `model`, we can fit it to the data. Here we use the `model.fit` method. You pass in the training features `trainX` and the training targets `trainY`. Below I set `validation_set=0.1` which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the `batch_size` and `n_epoch` keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
```
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=15)
# Training
model.fit(trainX, trainY, validation_set=0.05, show_metric=True, batch_size=64, n_epoch=10)
```
## Testing
After you're satisfied with the training output and accuracy, you can then run the network on the **test data set** to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be **higher than 95% accuracy**. Some simple models have been known to get up to 99.7% accuracy!
```
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
```
This notebook is a reproduction of an exercise found at http://people.ku.edu/~gbohling/cpe940/Kriging.pdf
```
import sys
sys.path.append('..')
sys.path.append('../geostatsmodels')
from geostatsmodels import utilities, variograms, model, kriging, geoplot
import matplotlib.pyplot as plt
import numpy as np
import pandas
```
We'll read the data from `ZoneA.dat`.
```
z = utilities.readGeoEAS('../data/ZoneA.dat')
```
We want the first, second and fourth columns of the data set, representing the x and y spatial coordinates, and the porosity.
```
P = z[:,[0,1,3]]
```
We'll be interested in determining the porosity at a point (2000,4700).
```
pt = [2000, 4700]
```
We can plot our region of interest as follows:
```
plt.scatter(P[:,0], P[:,1], c=P[:,2], cmap=geoplot.YPcmap)
plt.title('Zone A Subset % Porosity')
plt.colorbar()
xmin, xmax = 0, 4250
ymin, ymax = 3200, 6250
plt.xlim(xmin,xmax)
plt.ylim(ymin,ymax)
for i in range(len(P[:,2])):
x, y, por = P[i]
if (x < xmax) & (y > ymin) & (y < ymax):
plt.text( x+100, y, '{:4.2f}'.format( por ) )
plt.scatter(pt[0], pt[1], marker='x', c='k')
plt.text(pt[0] + 100 , pt[1], '?')
plt.xlabel('Easting (m)')
plt.ylabel('Northing (m)');
```
We can determine the parameters for our model by looking at the semivariogram and trying to determine the appropriate range and sill.
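For reference, the empirical semivariance at lag h is half the average squared difference between pairs of values separated by roughly h. A minimal NumPy sketch of this computation, independent of the `geostatsmodels` helpers used below:

```python
import numpy as np

def empirical_semivariogram(P, lags, tol):
    """P: array of (x, y, value) rows. Returns gamma(h) for each lag h.

    gamma(h) = 1/(2*N(h)) * sum of (z_i - z_j)^2 over pairs with |d_ij - h| <= tol.
    """
    coords, z = P[:, :2], P[:, 2]
    # pairwise distances between all points
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = (z[:, None] - z[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = np.triu(np.abs(d - h) <= tol, k=1)  # count each pair once
        gamma.append(sq[mask].mean() / 2 if mask.any() else np.nan)
    return np.array(gamma)
```

This is only a sketch of the idea; the library routine additionally bins lags and handles anisotropy options.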
```
tolerance = 250
lags = np.arange(tolerance, 10000, tolerance*2)
sill = np.var(P[:,2])
```
The semivariogram plotting function, `svplot()`, plots the sill as a dashed line and the empirical semivariogram as determined from the data. It can optionally plot a semivariance model as well.
```
geoplot.semivariogram(P, lags, tolerance)
```
We can pass a model to this function using the optional `model` argument and see it plotted in red.
```
svm = model.semivariance(model.spherical, (4000, sill))
geoplot.semivariogram(P, lags, tolerance, model=svm)
```
The covariance modeling function will return a spherical covariance model that takes a distance as input and returns a covariance estimate. We've used the global variance of the porosity in `ZoneA.dat` as the sill.
```
covfct = model.covariance(model.spherical, (4000, sill))
```
We can then krige the data using the covariance model, the point we are interested in, (2000,4700), and `N=6`, signifying that we only want to use the six nearest points. The output of the simple and ordinary kriging functions below is the kriging estimate and the standard deviation of the kriging estimate.
```
kriging.simple(P, covfct, pt, N=6)
kriging.ordinary(P, covfct, pt, N=6)
est, kstd = kriging.krige(P, covfct, [[2000,4700],[2100,4700],[2000,4800],[2100,4800]], 'simple', N=6)
est
kstd
```
---
## Relation Extraction in Practice
> Tutorial author: Yu Haiyang (yuhaiyang@zju.edu.cn)
In this demo we use a `gcn` model to perform Chinese relation extraction.
We hope this demo helps you understand the principles and common methods of triple extraction in knowledge-graph construction.
This demo runs on `python3`.
### Dataset
For this example we sampled a few Chinese sentences and extract the triples contained in them.
sentence|relation|head|tail
:---:|:---:|:---:|:---:
孔正锡在2005年以一部温馨的爱情电影《长腿叔叔》敲开电影界大门。|导演|长腿叔叔|孔正锡
《伤心的树》是吴宗宪的音乐作品,收录在《你比从前快乐》专辑中。|所属专辑|伤心的树|你比从前快乐
2000年8月,「天坛大佛」荣获「香港十大杰出工程项目」第四名。|所在城市|天坛大佛|香港
- train.csv: contains 6 training triples; each line of the file holds one triple, ordered as sentence, relation, head entity, tail entity, separated by `,`.
- valid.csv: contains 3 validation triples; each line holds one triple, in the same order, separated by `,`.
- test.csv: contains 3 test triples; each line holds one triple, in the same order, separated by `,`.
- relation.csv: contains 4 relation types; each line holds one relation type, ordered as head entity type, tail entity type, relation, index, separated by `,`.
### A quick review of how GCNs work
![gcn](img/gcn.png)
The sentence input consists of word embeddings and position embeddings, plus an adjacency matrix (adj_matrix) obtained from the dependency parse tree.
The nodes of this adjacency matrix are the word tokens; edges are built between words that are connected in the parse tree.
After several graph-convolution layers (usually 2 or 3 — more layers do not noticeably improve results), max pooling followed by a fully connected layer yields the relation prediction for the sentence.
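The per-layer propagation just described — multiply the token representations by the adjacency matrix, apply a linear map and a nonlinearity, then max-pool over tokens — can be written in a minimal NumPy form (dimensions here are illustrative only, not the configuration used below):

```python
import numpy as np

def gcn_layer(adj, h, W):
    """One GCN layer: H' = leaky_relu(A @ H @ W)."""
    z = adj @ h @ W
    return np.where(z > 0, z, 0.01 * z)  # leaky ReLU

rng = np.random.default_rng(0)
adj = np.eye(5)                          # (seq, seq) adjacency, here self-loops only
h = rng.standard_normal((5, 10))         # (seq, dim) word + position embeddings
out = gcn_layer(adj, h, rng.standard_normal((10, 20)))
sentence_repr = out.max(axis=0)          # max-pool over tokens -> (20,)
```

The real model below does the same thing batched with `torch.bmm`, stacking two or three such layers before the classification head.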
```
# The networks below run on PyTorch; make sure the following packages are installed before running
!pip install torch
!pip install matplotlib
!pip install transformers
# import the modules we use
import os
import csv
import math
import pickle
import logging
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
from torch import optim
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
from torch.utils.data import Dataset,DataLoader
from sklearn.metrics import precision_recall_fscore_support
from typing import List, Tuple, Dict, Any, Sequence, Optional, Union
from transformers import BertTokenizer, BertModel
logger = logging.getLogger(__name__)
# configuration object for the model hyperparameters
class Config(object):
model_name = 'gcn' # ['cnn', 'gcn', 'lm']
use_pcnn = True
min_freq = 1
pos_limit = 20
out_path = 'data/out'
batch_size = 2
word_dim = 10
pos_dim = 5
dim_strategy = 'sum' # ['sum', 'cat']
out_channels = 20
intermediate = 10
kernel_sizes = [3, 5, 7]
activation = 'gelu'
pooling_strategy = 'max'
dropout = 0.3
epoch = 10
num_relations = 4
learning_rate = 3e-4
lr_factor = 0.7 # learning-rate decay factor
lr_patience = 3 # epochs to wait before decaying the learning rate
weight_decay = 1e-3 # L2 regularization
early_stopping_patience = 6
train_log = True
log_interval = 1
show_plot = True
only_comparison_plot = False
plot_utils = 'matplot'
lm_file = 'bert-base-chinese'
lm_num_hidden_layers = 2
rnn_layers = 2
cfg = Config()
# Build a vocabulary over word tokens; the indices feed an embedding layer that yields the word matrix
# By convention index 0 is pad and index 1 is unknown
class Vocab(object):
def __init__(self, name: str = 'basic', init_tokens = ["[PAD]", "[UNK]"]):
self.name = name
self.init_tokens = init_tokens
self.trimed = False
self.word2idx = {}
self.word2count = {}
self.idx2word = {}
self.count = 0
self._add_init_tokens()
def _add_init_tokens(self):
for token in self.init_tokens:
self._add_word(token)
def _add_word(self, word: str):
if word not in self.word2idx:
self.word2idx[word] = self.count
self.word2count[word] = 1
self.idx2word[self.count] = word
self.count += 1
else:
self.word2count[word] += 1
def add_words(self, words: Sequence):
for word in words:
self._add_word(word)
def trim(self, min_freq=2, verbose: Optional[bool] = True):
'''Remove words whose frequency is below min_freq from the vocabulary
Args:
    min_freq: minimum word frequency
'''
assert min_freq == int(min_freq), f'min_freq must be integer, can\'t be {min_freq}'
min_freq = int(min_freq)
if min_freq < 2:
return
if self.trimed:
return
self.trimed = True
keep_words = []
new_words = []
for k, v in self.word2count.items():
if v >= min_freq:
keep_words.append(k)
new_words.extend([k] * v)
if verbose:
kept_len = len(keep_words)
orig_len = len(self.word2idx) - len(self.init_tokens)
logger.info('vocabulary trimmed, keeping [{} / {}] words = {:.2f}%'.format(kept_len, orig_len, kept_len / orig_len * 100))
# Reinitialize dictionaries
self.word2idx = {}
self.word2count = {}
self.idx2word = {}
self.count = 0
self._add_init_tokens()
self.add_words(new_words)
# helper functions used in the preprocessing step
Path = str
def load_csv(fp: Path, is_tsv: bool = False, verbose: bool = True) -> List:
if verbose:
logger.info(f'load csv from {fp}')
dialect = 'excel-tab' if is_tsv else 'excel'
with open(fp, encoding='utf-8') as f:
reader = csv.DictReader(f, dialect=dialect)
return list(reader)
def load_pkl(fp: Path, verbose: bool = True) -> Any:
if verbose:
logger.info(f'load data from {fp}')
with open(fp, 'rb') as f:
data = pickle.load(f)
return data
def save_pkl(data: Any, fp: Path, verbose: bool = True) -> None:
if verbose:
logger.info(f'save data in {fp}')
with open(fp, 'wb') as f:
pickle.dump(data, f)
def _handle_relation_data(relation_data: List[Dict]) -> Dict:
rels = dict()
for d in relation_data:
rels[d['relation']] = {
'index': int(d['index']),
'head_type': d['head_type'],
'tail_type': d['tail_type'],
}
return rels
def _add_relation_data(rels: Dict,data: List) -> None:
for d in data:
d['rel2idx'] = rels[d['relation']]['index']
d['head_type'] = rels[d['relation']]['head_type']
d['tail_type'] = rels[d['relation']]['tail_type']
def _convert_tokens_into_index(data: List[Dict], vocab):
unk_str = '[UNK]'
unk_idx = vocab.word2idx[unk_str]
for d in data:
d['token2idx'] = [vocab.word2idx.get(i, unk_idx) for i in d['tokens']]
def _add_pos_seq(train_data: List[Dict], cfg):
for d in train_data:
d['head_offset'], d['tail_offset'], d['lens'] = int(d['head_offset']), int(d['tail_offset']), int(d['lens'])
entities_idx = [d['head_offset'], d['tail_offset']] if d['head_offset'] < d['tail_offset'] else [d['tail_offset'], d['head_offset']]
d['head_pos'] = list(map(lambda i: i - d['head_offset'], list(range(d['lens']))))
d['head_pos'] = _handle_pos_limit(d['head_pos'], int(cfg.pos_limit))
d['tail_pos'] = list(map(lambda i: i - d['tail_offset'], list(range(d['lens']))))
d['tail_pos'] = _handle_pos_limit(d['tail_pos'], int(cfg.pos_limit))
if cfg.use_pcnn:
d['entities_pos'] = [1] * (entities_idx[0] + 1) + [2] * (entities_idx[1] - entities_idx[0] - 1) +\
[3] * (d['lens'] - entities_idx[1])
def _handle_pos_limit(pos: List[int], limit: int) -> List[int]:
for i, p in enumerate(pos):
if p > limit:
pos[i] = limit
if p < -limit:
pos[i] = -limit
return [p + limit + 1 for p in pos]
def seq_len_to_mask(seq_len: Union[List, np.ndarray, torch.Tensor], max_len=None, mask_pos_to_true=True):
"""
将一个表示sequence length的一维数组转换为二维的mask,默认pad的位置为1。
转变 1-d seq_len到2-d mask.
:param list, np.ndarray, torch.LongTensor seq_len: shape将是(B,)
:param int max_len: 将长度pad到这个长度。默认(None)使用的是seq_len中最长的长度。但在nn.DataParallel的场景下可能不同卡的seq_len会有
区别,所以需要传入一个max_len使得mask的长度是pad到该长度。
:return: np.ndarray, torch.Tensor 。shape将是(B, max_length), 元素类似为bool或torch.uint8
"""
if isinstance(seq_len, list):
seq_len = np.array(seq_len)
if isinstance(seq_len, np.ndarray):
seq_len = torch.from_numpy(seq_len)
if isinstance(seq_len, torch.Tensor):
assert seq_len.dim() == 1, logger.error(f"seq_len can only have one dimension, got {seq_len.dim()} != 1.")
batch_size = seq_len.size(0)
max_len = int(max_len) if max_len else seq_len.max().long()
broad_cast_seq_len = torch.arange(max_len).expand(batch_size, -1).to(seq_len.device)
if mask_pos_to_true:
mask = broad_cast_seq_len.ge(seq_len.unsqueeze(1))
else:
mask = broad_cast_seq_len.lt(seq_len.unsqueeze(1))
else:
raise TypeError("Only support 1-d list or 1-d numpy.ndarray or 1-d torch.Tensor.")
return mask
def _lm_serialize(data: List[Dict], cfg):
logger.info('use bert tokenizer...')
tokenizer = BertTokenizer.from_pretrained(cfg.lm_file)
for d in data:
sent = d['sentence'].strip()
sent = sent.replace(d['head'], d['head_type'], 1).replace(d['tail'], d['tail_type'], 1)
sent += '[SEP]' + d['head'] + '[SEP]' + d['tail']
d['token2idx'] = tokenizer.encode(sent, add_special_tokens=True)
d['lens'] = len(d['token2idx'])
# the preprocessing pipeline
logger.info('load raw files...')
train_fp = os.path.join('data/train.csv')
valid_fp = os.path.join('data/valid.csv')
test_fp = os.path.join('data/test.csv')
relation_fp = os.path.join('data/relation.csv')
train_data = load_csv(train_fp)
valid_data = load_csv(valid_fp)
test_data = load_csv(test_fp)
relation_data = load_csv(relation_fp)
for d in train_data:
d['tokens'] = eval(d['tokens'])
for d in valid_data:
d['tokens'] = eval(d['tokens'])
for d in test_data:
d['tokens'] = eval(d['tokens'])
logger.info('convert relation into index...')
rels = _handle_relation_data(relation_data)
_add_relation_data(rels, train_data)
_add_relation_data(rels, valid_data)
_add_relation_data(rels, test_data)
logger.info('check whether to use a pretrained language model...')
if cfg.model_name == 'lm':
logger.info('use pretrained language models serialize sentence...')
_lm_serialize(train_data, cfg)
_lm_serialize(valid_data, cfg)
_lm_serialize(test_data, cfg)
else:
logger.info('build vocabulary...')
vocab = Vocab('word')
train_tokens = [d['tokens'] for d in train_data]
valid_tokens = [d['tokens'] for d in valid_data]
test_tokens = [d['tokens'] for d in test_data]
sent_tokens = [*train_tokens, *valid_tokens, *test_tokens]
for sent in sent_tokens:
vocab.add_words(sent)
vocab.trim(min_freq=cfg.min_freq)
logger.info('convert tokens into index...')
_convert_tokens_into_index(train_data, vocab)
_convert_tokens_into_index(valid_data, vocab)
_convert_tokens_into_index(test_data, vocab)
logger.info('build position sequence...')
_add_pos_seq(train_data, cfg)
_add_pos_seq(valid_data, cfg)
_add_pos_seq(test_data, cfg)
logger.info('save data for backup...')
os.makedirs(cfg.out_path, exist_ok=True)
train_save_fp = os.path.join(cfg.out_path, 'train.pkl')
valid_save_fp = os.path.join(cfg.out_path, 'valid.pkl')
test_save_fp = os.path.join(cfg.out_path, 'test.pkl')
save_pkl(train_data, train_save_fp)
save_pkl(valid_data, valid_save_fp)
save_pkl(test_data, test_save_fp)
if cfg.model_name != 'lm':
vocab_save_fp = os.path.join(cfg.out_path, 'vocab.pkl')
vocab_txt = os.path.join(cfg.out_path, 'vocab.txt')
save_pkl(vocab, vocab_save_fp)
logger.info('save vocab in txt file, for watching...')
with open(vocab_txt, 'w', encoding='utf-8') as f:
f.write(os.linesep.join(vocab.word2idx.keys()))
# building a custom PyTorch Dataset
class Tree(object):
def __init__(self):
self.parent = None
self.num_children = 0
self.children = list()
def add_child(self, child):
child.parent = self
self.num_children += 1
self.children.append(child)
def size(self):
s = getattr(self, '_size', -1)
if s != -1:
return self._size
else:
count = 1
for i in range(self.num_children):
count += self.children[i].size()
self._size = count
return self._size
def __iter__(self):
yield self
for c in self.children:
for x in c:
yield x
def depth(self):
d = getattr(self, '_depth', -1)
if d != -1:
return self._depth
else:
count = 0
if self.num_children > 0:
for i in range(self.num_children):
child_depth = self.children[i].depth()
if child_depth > count:
count = child_depth
count += 1
self._depth = count
return self._depth
def head_to_adj(head, directed=False, self_loop=True):
"""
Convert a sequence of head indexes to an (numpy) adjacency matrix.
"""
seq_len = len(head)
head = head[:seq_len]
root = None
nodes = [Tree() for _ in head]
for i in range(seq_len):
h = head[i]
setattr(nodes[i], 'idx', i)
if h == 0:
root = nodes[i]
else:
nodes[h - 1].add_child(nodes[i])
assert root is not None
ret = np.zeros((seq_len, seq_len), dtype=np.float32)
queue = [root]
idx = []
while len(queue) > 0:
t, queue = queue[0], queue[1:]
idx += [t.idx]
for c in t.children:
ret[t.idx, c.idx] = 1
queue += t.children
if not directed:
ret = ret + ret.T
if self_loop:
for i in idx:
ret[i, i] = 1
return ret
def collate_fn(cfg):
def collate_fn_intra(batch):
batch.sort(key=lambda data: int(data['lens']), reverse=True)
max_len = int(batch[0]['lens'])
def _padding(x, max_len):
return x + [0] * (max_len - len(x))
def _pad_adj(adj, max_len):
adj = np.array(adj)
pad_len = max_len - adj.shape[0]
for i in range(pad_len):
adj = np.insert(adj, adj.shape[-1], 0, axis=1)
for i in range(pad_len):
adj = np.insert(adj, adj.shape[0], 0, axis=0)
return adj
x, y = dict(), []
word, word_len = [], []
head_pos, tail_pos = [], []
pcnn_mask = []
adj_matrix = []
for data in batch:
word.append(_padding(data['token2idx'], max_len))
word_len.append(int(data['lens']))
y.append(int(data['rel2idx']))
if cfg.model_name != 'lm':
head_pos.append(_padding(data['head_pos'], max_len))
tail_pos.append(_padding(data['tail_pos'], max_len))
if cfg.model_name == 'gcn':
head = eval(data['dependency'])
adj = head_to_adj(head, directed=True, self_loop=True)
adj_matrix.append(_pad_adj(adj, max_len))
if cfg.use_pcnn:
pcnn_mask.append(_padding(data['entities_pos'], max_len))
x['word'] = torch.tensor(word)
x['lens'] = torch.tensor(word_len)
y = torch.tensor(y)
if cfg.model_name != 'lm':
x['head_pos'] = torch.tensor(head_pos)
x['tail_pos'] = torch.tensor(tail_pos)
if cfg.model_name == 'gcn':
x['adj'] = torch.tensor(adj_matrix)
if cfg.model_name == 'cnn' and cfg.use_pcnn:
x['pcnn_mask'] = torch.tensor(pcnn_mask)
return x, y
return collate_fn_intra
class CustomDataset(Dataset):
"""默认使用 List 存储数据"""
def __init__(self, fp):
self.file = load_pkl(fp)
def __getitem__(self, item):
sample = self.file[item]
return sample
def __len__(self):
return len(self.file)
# the embedding layer
class Embedding(nn.Module):
def __init__(self, config):
"""
word embedding: 一般 0 为 padding
pos embedding: 一般 0 为 padding
dim_strategy: [cat, sum] 多个 embedding 是拼接还是相加
"""
super(Embedding, self).__init__()
# self.xxx = config.xxx
self.vocab_size = config.vocab_size
self.word_dim = config.word_dim
self.pos_size = config.pos_limit * 2 + 2
self.pos_dim = config.pos_dim if config.dim_strategy == 'cat' else config.word_dim
self.dim_strategy = config.dim_strategy
self.wordEmbed = nn.Embedding(self.vocab_size,self.word_dim,padding_idx=0)
self.headPosEmbed = nn.Embedding(self.pos_size,self.pos_dim,padding_idx=0)
self.tailPosEmbed = nn.Embedding(self.pos_size,self.pos_dim,padding_idx=0)
def forward(self, *x):
word, head, tail = x
word_embedding = self.wordEmbed(word)
head_embedding = self.headPosEmbed(head)
tail_embedding = self.tailPosEmbed(tail)
if self.dim_strategy == 'cat':
return torch.cat((word_embedding,head_embedding, tail_embedding), -1)
elif self.dim_strategy == 'sum':
# here pos_dim == word_dim
return word_embedding + head_embedding + tail_embedding
else:
raise Exception('dim_strategy must choose from [sum, cat]')
# the GCN model
class GCN(nn.Module):
def __init__(self, cfg):
super(GCN, self).__init__()
self.embedding = Embedding(cfg)
self.fc1 = nn.Linear(10, 20)
self.fc2 = nn.Linear(20, 20)
self.fc3 = nn.Linear(20, cfg.num_relations)
self.dropout = nn.Dropout(cfg.dropout)
def forward(self, x):
word, adj, head_pos, tail_pos = x['word'], x['adj'], x['head_pos'], x['tail_pos']
inputs = self.embedding(word, head_pos, tail_pos)
AxW = F.leaky_relu(self.fc1(torch.bmm(adj,inputs)))
AxW = self.dropout(AxW)
AxW = F.leaky_relu(self.fc2(torch.bmm(adj,AxW)))
AxW = self.dropout(AxW)
output = self.fc3(torch.bmm(adj,AxW))
output = torch.max(output, dim=1)[0]
return output
# precision / recall / F1 metrics
class PRMetric():
def __init__(self):
"""
For now we simply delegate to sklearn's implementation
"""
self.y_true = np.empty(0)
self.y_pred = np.empty(0)
def reset(self):
self.y_true = np.empty(0)
self.y_pred = np.empty(0)
def update(self, y_true:torch.Tensor, y_pred:torch.Tensor):
y_true = y_true.cpu().detach().numpy()
y_pred = y_pred.cpu().detach().numpy()
y_pred = np.argmax(y_pred,axis=-1)
self.y_true = np.append(self.y_true, y_true)
self.y_pred = np.append(self.y_pred, y_pred)
def compute(self):
p, r, f1, _ = precision_recall_fscore_support(self.y_true,self.y_pred,average='macro',warn_for=tuple())
_, _, acc, _ = precision_recall_fscore_support(self.y_true,self.y_pred,average='micro',warn_for=tuple())
return acc,p,r,f1
# iteration over one training epoch
def train(epoch, model, dataloader, optimizer, criterion, cfg):
model.train()
metric = PRMetric()
losses = []
for batch_idx, (x, y) in enumerate(dataloader, 1):
optimizer.zero_grad()
y_pred = model(x)
loss = criterion(y_pred, y)
loss.backward()
optimizer.step()
metric.update(y_true=y, y_pred=y_pred)
losses.append(loss.item())
data_total = len(dataloader.dataset)
data_cal = data_total if batch_idx == len(dataloader) else batch_idx * len(y)
if (cfg.train_log and batch_idx % cfg.log_interval == 0) or batch_idx == len(dataloader):
# p, r, f1 are macro-averaged; with micro averaging all three coincide, and we report that value as acc
acc,p,r,f1 = metric.compute()
print(f'Train Epoch {epoch}: [{data_cal}/{data_total} ({100. * data_cal / data_total:.0f}%)]\t'
f'Loss: {loss.item():.6f}')
print(f'Train Epoch {epoch}: Acc: {100. * acc:.2f}%\t'
f'macro metrics: [p: {p:.4f}, r:{r:.4f}, f1:{f1:.4f}]')
if cfg.show_plot and not cfg.only_comparison_plot:
if cfg.plot_utils == 'matplot':
plt.plot(losses)
plt.title(f'epoch {epoch} train loss')
plt.show()
return losses[-1]
# iteration over one evaluation pass
def validate(epoch, model, dataloader, criterion,verbose=True):
model.eval()
metric = PRMetric()
losses = []
for batch_idx, (x, y) in enumerate(dataloader, 1):
with torch.no_grad():
y_pred = model(x)
loss = criterion(y_pred, y)
metric.update(y_true=y, y_pred=y_pred)
losses.append(loss.item())
loss = sum(losses) / len(losses)
acc,p,r,f1 = metric.compute()
data_total = len(dataloader.dataset)
if verbose:
print(f'Valid Epoch {epoch}: [{data_total}/{data_total}](100%)\t Loss: {loss:.6f}')
print(f'Valid Epoch {epoch}: Acc: {100. * acc:.2f}%\tmacro metrics: [p: {p:.4f}, r:{r:.4f}, f1:{f1:.4f}]\n\n')
return f1,loss
# load the datasets
train_dataset = CustomDataset(train_save_fp)
valid_dataset = CustomDataset(valid_save_fp)
test_dataset = CustomDataset(test_save_fp)
train_dataloader = DataLoader(train_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg))
valid_dataloader = DataLoader(valid_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg))
test_dataloader = DataLoader(test_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg))
# vocab_size is only known after the preprocessed data has been loaded
vocab = load_pkl(vocab_save_fp)
vocab_size = vocab.count
cfg.vocab_size = vocab_size
# main entry point: define the optimizer, the loss function, etc.,
# then iterate over the epochs.
# Validation loss drives early stopping: the model generalizes best at the point where it stops decreasing.
model = GCN(cfg)
print(model)
optimizer = optim.Adam(model.parameters(), lr=cfg.learning_rate, weight_decay=cfg.weight_decay)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=cfg.lr_factor, patience=cfg.lr_patience)
criterion = nn.CrossEntropyLoss()
best_f1, best_epoch = -1, 0
es_loss, es_f1, es_epoch, es_patience, best_es_epoch, best_es_f1, = 1000, -1, 0, 0, 0, -1
train_losses, valid_losses = [], []
logger.info('=' * 10 + ' Start training ' + '=' * 10)
for epoch in range(1, cfg.epoch + 1):
train_loss = train(epoch, model, train_dataloader, optimizer, criterion, cfg)
valid_f1, valid_loss = validate(epoch, model, valid_dataloader, criterion)
scheduler.step(valid_loss)
train_losses.append(train_loss)
valid_losses.append(valid_loss)
if best_f1 < valid_f1:
best_f1 = valid_f1
best_epoch = epoch
# use the validation loss as the early-stopping criterion
if es_loss > valid_loss:
es_loss = valid_loss
es_f1 = valid_f1
best_es_f1 = valid_f1
es_epoch = epoch
best_es_epoch = epoch
es_patience = 0
else:
es_patience += 1
if es_patience >= cfg.early_stopping_patience:
best_es_epoch = es_epoch
best_es_f1 = es_f1
if cfg.show_plot:
if cfg.plot_utils == 'matplot':
plt.plot(train_losses, 'x-')
plt.plot(valid_losses, '+-')
plt.legend(['train', 'valid'])
plt.title('train/valid comparison loss')
plt.show()
print(f'best(valid loss quota) early stopping epoch: {best_es_epoch}, '
f'this epoch macro f1: {best_es_f1:0.4f}')
print(f'total {cfg.epoch} epochs, best(valid macro f1) epoch: {best_epoch}, '
f'this epoch macro f1: {best_f1:.4f}')
test_f1, _ = validate(0, model, test_dataloader, criterion,verbose=False)
print(f'after {cfg.epoch} epochs, final test data macro f1: {test_f1:.4f}')
```
This demo does not cover hyperparameter tuning; if you are interested, head over to the [deepke](http://openkg.cn/tool/deepke) repository to download and try more models :)
---
# Algorithms blind tasting wines
*In this study we present a simple application of Natural Language Processing to classifying grape types based on semi-professional, text-based descriptions of a glass of wine. We build a classifier model with pipelines and test it on two different datasets. Part of one dataset was involved in building the concept, while the other is a completely out-of-sample data source. We present classification results for 4 different grape types with an accuracy above 85%, which is in our view quite remarkable given the simplicity of the model.*
**Important note: This is purely driven by self-interest. None of the mentioned entities provided financial or any other support. Feel free to copy and distribute this work or to send remarks or suggestions.**
Wine is one of the most popular alcoholic beverages being produced. Its production, selling and understanding draw on several thousand years of expertise. A big industry has developed around producing wine but also around describing it. The latter is very important since wine comes in different colours, tastes, smells etc. It is important to describe these features of a bottle to customers because almost all of us enjoy different aspects of a glass of wine. The people who describe wine are called wine experts or sommeliers [[1](#ch7)].
One has to be gifted with good genetics to be able to sense and identify numerous different smells and tastes, and have enough lexical knowledge to map these features to a mental database of wine features. This way, just from sampling a glass of wine, an expert can tell what grape was used to make it, in which country it was made, what year and maybe more. This is an amazing skill to have, but it requires years of practice (hopefully without getting drunk). There are schools, like the [Wine and Spirit Education Trust](https://www.wsetglobal.com/) [[2](#ch7)], where you can practice these skills and learn a framework for blind tasting. They have their own terminology to describe certain wine features, like: full-bodied, oaky, dry, grassy etc. (see [[3](#ch7)] for a more complete list).
Would it be possible to create an algorithm that can identify the grape, the country or the year (vintage) of a wine based on professional descriptions of wines? We think it would be possible, but it is certainly not an easy task and several conditions must be met. The very first issue is to find a reliable, complete and professional description of tens of thousands of wines (or even more). The second issue is to create a natural language processing (NLP) model that is capable of extracting the relevant information from the descriptions and putting it into an input format that a machine can handle and understand. The final issue is to find a classifier that can read the input and, based on a set of optimizable parameters, correctly tell us the target feature (in this study, the grape type) of the corresponding wine description.
In a previous study, called [Become a sommelier](https://diveki.github.io/projects/wine/wine.html) [[4](#ch7)], we explored the issue of collecting the data. We wrote a web scraping algorithm that collects wine descriptions from various online wine selling websites (please read that study for more details). This database contains roughly 2000 samples. The descriptions are written in a rather customer-friendly style and are rarely very detailed; all together we could call them semi-professional descriptions, though written by experts. Later in our research we came across the [Kaggle](https://www.kaggle.com/) [Wine Reviews](https://www.kaggle.com/zynicide/wine-reviews) dataset [[5](#ch7)] by *zackthoutt*. He collected a similar database of wine descriptions, from a different source than ours, and his database contains more than 100 thousand samples. A database of this size starts to be in the usable range.
In another previous study, called [Application of TfIdf-vectorizer on wine data](https://diveki.github.io/projects/wine/tfidf.html) [[6](#ch7)], we established the concept of our model that extracts information from the wine descriptions and turns them into a vectorized bag-of-words representation. We used our own data set (and not the Kaggle one) to build up all the aspects of the model that we will present here too; for more details read the mentioned study. To make sure that our model does not get biased during the building process, we divided the data into a train and a test set, and we use the same split here too. Basically, we ignored any knowledge from the test set during the building process.
In this study we will combine the created NLP model with a classifier and test the model's performance in different scenarios. We will show classification results on both databases separately, and also show an example where the Kaggle database trains the constructed model while we test it on our own database. We present hyperparameter optimization and k-fold validation of the results too.
This study will step through the following topics:
1. [Loading data](#ch1)
2. [Model definition](#ch2)
2.1. [Stopwords](#ch2.1)
2.2. [POS tagging and Lemmatizing](#ch2.2)
2.3. [Label encoding](#ch2.3)
2.4. [Splitting data into train and test sets](#ch2.4)
2.5. [Defining selectors](#ch2.5)
2.6. [Defining data pre-processors](#ch2.6)
2.7. [Defining classiffiers](#ch2.7)
3. [Train and test the model](#ch3)
3.1. [Analysis of train predictions](#ch3.1)
3.2. [Analysis of test predictions](#ch3.2)
3.3. [Testing with different classifiers](#ch3.3)
3.4. [Hyperparameter tuning](#ch3.4)
4. [Classification of data from Kaggle](#ch4)
4.1. [Data formatting](#ch4.1)
4.2. [Classification](#ch4.2)
5. [Cross-data validation of the model](#ch5)
5.1. [Classifying more target features](#ch5.1)
6. [Conclusion](#ch6)
7. [References](#ch7)
<a id="ch1"></a>
# 1. Loading data
We start by loading all the required packages and data to solve the above described task. Most of the details about these steps are described in [Become a sommelier](https://diveki.github.io/projects/wine/wine.html) and [Application of Tfidf-vectorizer on wine data](https://diveki.github.io/projects/wine/tfidf.html).
We start by loading *pandas, numpy, re, scikit-learn* and *nltk* packages.
```
# importing packages
import pandas as pd
import re
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# sklearn packages
from sklearn import metrics
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, StratifiedKFold
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.dummy import DummyClassifier
from xgboost import XGBClassifier
# nltk packages
import nltk
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from string import punctuation
```
Then we load the wine data set that we scraped online. For the details about this data set see the Appendix of [Become a sommelier](https://diveki.github.io/projects/wine/wine.html).
```
filename = '../DataBase/5_grape_db.xlsx'
a0 = pd.read_excel(filename)
a0.head()
```
This data set contains all kinds of information about 5 grape types. We will use only 4 of the grape types, since the 5th does not have many samples. These 4 types are: *pinot noir, syrah* (red wines) and *chardonnay, sauvignon blanc* (white wines). By setting a limit to the minimum sample size we filter the input data.
```
result = a0['grape_variety']
limit = 40
## removing varieties that have only one member in the database
counts = nltk.Counter(result)
varieties = [key for key in counts if counts[key] > limit]
data_input = a0[a0['grape_variety'].isin(varieties)].reset_index()
data_input.head()
```
From this dataframe we will only use some of the features. The description and colour columns are the most important ones, but in our first implementation we will add the Body feature as an input too. Let us look at an example of what the code faces in the description column, from which it has to extract reliable information in order to classify grape types.
```
data_input.loc[1, 'description']
```
<a id="ch2"></a>
# 2. Model definition
As we showed in [Application of Tfidf-vectorizer on wine data](https://diveki.github.io/projects/wine/tfidf.html), in order to classify the grape types correctly, the processed input data for one grape should not correlate with that of other grape types. The applied model has to minimize this correlation. We did not perform an exact optimization process but rather added new features to the model step by step and investigated what happened to the correlation. All the features presented here are the result of the mentioned study, so for details please go and read it. Some of the steps presented below are not discussed in that study, therefore we will elaborate on them more.
Our model will be a very simple vectorized 1-gram bag-of-words model. We will rely on term frequency - inverse document frequency (tf-idf) vectorization and some additional noise filters and word processors.
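As a quick illustration of the tf-idf step on its own (toy sentences, not the wine data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "ripe cherry and oak with soft tannins",
    "crisp citrus and grassy notes",
]
vec = TfidfVectorizer(ngram_range=(1, 1))  # 1-gram bag-of-words
X = vec.fit_transform(docs)                # sparse (n_docs, vocab_size) matrix of tf-idf weights
print(X.shape)
```

In the pipelines below the vectorizer additionally receives our custom tokenizer and stopword list.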
<a id="ch2.1"></a>
## 2.1. Stopwords
As you can see above, there are words in the description column that certainly do not add any information about the grape type, like *with, and, by* etc. We can collect a list of these kinds of words and call them stopwords. These will be filtered out from the text and not taken into account in the classification process. We will use the *nltk* package's stopword list and extend it with some words and punctuation marks of our own.
```
# defining stopwords: using the one that comes with nltk + appending it with words seen from the above evaluation
stop_words = stopwords.words('english')
stop_append = ['.', ',', '`', '"', "'", '!', ';', 'wine', 'fruit', '%', 'flavour', 'aromas', 'palate']
stop_words1 = frozenset(stop_words + stop_append)
```
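To illustrate the effect, here is a minimal sketch of how such a set filters a description (the sentence and the reduced stop set below are made up for the example):

```python
# toy stop-word filtering; this stop set is a small made-up subset of the real one
stop_words = frozenset(['with', 'and', 'by', 'the', 'a', 'wine', '.'])
tokens = ['a', 'dry', 'wine', 'with', 'cherry', 'and', 'oak', 'notes', '.']
filtered = [t for t in tokens if t not in stop_words]
```

Only the content-bearing words survive; everything in the stop set is dropped before vectorization.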
<a id="ch2.2"></a>
## 2.2. POS tagging and Lemmatizing
The text we want to analyse may contain the same word in different forms. A very simple example would be *cherry* and *cherries*, the singular and plural versions of the same word. Another example could be *good* and *better*. In their original forms, these words are treated as separate ones by the code. To bring them to their common form we apply [lemmatization](https://en.wikipedia.org/wiki/Lemmatisation) [[7](#ch7)]. This is a difficult task since it requires correct identification of the word's type (noun, verb etc.) in context. The latter is part-of-speech tagging, or POS tagging. We use *nltk*'s POS tagger, but like any other tagger it is not perfect either.
Most of the information in a wine description is carried by its nouns and adjectives. Verbs and adverbs are mostly words common to descriptions of all wines. In our model we apply a filter that keeps nouns and adjectives in the text and removes everything else.
The POS tagging, lemmatizing and type selecting is carried out by the *LemmaTokenizer* class.
```
import re
import nltk
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer

# list of word types (nouns and adjectives) to leave in the text
defTags = ['NN', 'NNS', 'NNP', 'NNPS', 'JJ', 'JJS', 'JJR']#, 'RB', 'RBS', 'RBR', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ']

# functions to determine the type of a word
def is_noun(tag):
    return tag in ['NN', 'NNS', 'NNP', 'NNPS']

def is_verb(tag):
    return tag in ['VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ']

def is_adverb(tag):
    return tag in ['RB', 'RBR', 'RBS']

def is_adjective(tag):
    return tag in ['JJ', 'JJR', 'JJS']

# transform Penn Treebank tags into WordNet tag forms
def penn_to_wn(tag):
    if is_adjective(tag):
        return nltk.stem.wordnet.wordnet.ADJ
    elif is_noun(tag):
        return nltk.stem.wordnet.wordnet.NOUN
    elif is_adverb(tag):
        return nltk.stem.wordnet.wordnet.ADV
    elif is_verb(tag):
        return nltk.stem.wordnet.wordnet.VERB
    return nltk.stem.wordnet.wordnet.NOUN

# lemmatizer + tokenizer (+ stemming) class
class LemmaTokenizer(object):
    def __init__(self):
        self.wnl = WordNetLemmatizer()
        # we define (but do not use) a stemming method; uncomment the last line in __call__ to get stemming too
        self.stemmer = nltk.stem.SnowballStemmer('english')

    def __call__(self, doc):
        # pattern for numbers | words of length=2 | punctuations | words of length=1
        pattern = re.compile(r'[0-9]+|\b[\w]{2,2}\b|[%.,_`!"&?\')({~@;:#}+-]+|\b[\w]{1,1}\b')
        # tokenize document
        doc_tok = word_tokenize(doc)
        # filter out patterns from words
        doc_tok = [pattern.sub('', x) for x in doc_tok]
        # get rid of anything with length=1
        doc_tok = [x for x in doc_tok if len(x) > 1]
        # part-of-speech tagging
        doc_tagged = nltk.pos_tag(doc_tok)
        # selecting nouns and adjectives
        doc_tagged = [(t[0], t[1]) for t in doc_tagged if t[1] in defTags]
        # preparing lemmatization
        doc = [(t[0], penn_to_wn(t[1])) for t in doc_tagged]
        # lemmatization
        doc = [self.wnl.lemmatize(t[0], t[1]) for t in doc]
        # uncomment if you want stemming as well
        #doc = [self.stemmer.stem(x) for x in doc]
        return doc
```
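To see in isolation what the regular-expression filter inside LemmaTokenizer removes, here is a self-contained sketch; the sample tokens are made up for illustration and no nltk is required:

```python
import re

# numbers | two-letter words | punctuation runs | one-letter words
pattern = re.compile(r'[0-9]+|\b\w{2}\b|[%.,_`!"&?\')({~@;:#}+-]+|\b\w\b')

tokens = ['100', 'points', 'of', 'ripe', 'cherry', 'aromas', ',']
cleaned = [pattern.sub('', t) for t in tokens]
cleaned = [t for t in cleaned if len(t) > 1]  # drop what the filter emptied out
```

Scores, short function words and stray punctuation disappear before tagging even starts.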
<a id="ch2.3"></a>
## 2.3. Label encoding
Although we are mainly interested in classification using the text-based description, the database contains other, possibly helpful features of the wine that can aid classification. Such features are *body* and *colour*. Both are used by sommeliers to describe a wine. Colour can be observed easily, while body reflects, in a way, the acidity of the wine.
These columns in the database are stored as text, so we have to turn them into numbers the computer can work with. Both features take discrete values, so we could simply attach a number to each category, like *red=1*, *rose=2*, *white=3*. This is called [label encoding](https://medium.com/@contactsunny/label-encoder-vs-one-hot-encoder-in-machine-learning-3fc273365621) [[8](#ch7)]. That would not be a disastrous approach, but the classifier might assume a trend-like relationship between the categories (because of the increasing numbers), which is obviously false. Instead, sticking to the case of colours, we create three more columns (there are three colours), each representing one colour. Each column can take two values: 0 (the wine does not have that feature) and 1 (it does). This is one-hot encoding, and we do it for both the body and colour columns with the *pandas* get_dummies method.
The following cell prints an example of the modified data set that contains the encoded labels.
```
body_dummies = pd.get_dummies(data_input['Body']) # one-hot encoding the Body column
colour_dummies = pd.get_dummies(data_input['colour']) # one-hot encoding the colour column
# adding the body labels to the original dataset
data_input = data_input.merge(body_dummies, left_index=True, right_index=True)
# adding the colour labels to the original dataset
data_input = data_input.merge(colour_dummies, left_index=True, right_index=True)
data_input.head()
```
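The idea behind get_dummies can be sketched in plain Python (the toy colour list below is made up for illustration):

```python
# one-hot encoding by hand: one indicator column per category
colours = ['red', 'white', 'rose', 'red']
categories = sorted(set(colours))  # ['red', 'rose', 'white']
one_hot = {cat: [1 if c == cat else 0 for c in colours] for cat in categories}
```

Each category becomes its own 0/1 column, so no artificial ordering is imposed on the classifier.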
<a id="ch2.4"></a>
## 2.4. Splitting data into train and test sets
As we have already mentioned, the analyses in [4,6](#ch7) were performed on a preselected train dataset taken from the whole database. We will use exactly the same train dataset to train our model. This is easy to reproduce by setting the *random_state* argument to the same value as in those studies. Also, we select only the description column and the encoded colour and body columns. The *train_test_split* function will create train and test features and targets.
```
# split the data into train and test
combined_features = ['Body', 'description', 'full', 'light', 'medium', 'dry', 'red', 'rose', 'white']
target = 'grape_variety'
X_train, X_test, y_train, y_test = train_test_split(data_input[combined_features], data_input[target],
test_size=0.33, random_state=42)
X_train.head()
y_train.head()
```
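Under the hood, train_test_split shuffles the row indices with the given random_state and cuts off a test fraction; here is a rough standard-library analogue (a sketch of the idea, not sklearn's exact algorithm):

```python
import random

def simple_split(rows, test_frac=0.33, seed=42):
    """Shuffle indices deterministically, then cut off a test fraction."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    cut = int(len(rows) * test_frac)
    test_idx, train_idx = idx[:cut], idx[cut:]
    return [rows[i] for i in train_idx], [rows[i] for i in test_idx]

train, test = simple_split(list(range(100)))
```

Fixing the seed is what makes the split reproducible across studies.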
<a id="ch2.5"></a>
## 2.5. Defining selectors
We will build a [pipeline](https://medium.com/@yanhann10/a-brief-view-of-machine-learning-pipeline-in-python-5f50b941fca8) for this study. A pipeline chains all kinds of actions on the data together into one stable flow; for example, it combines data transformers (such as a numerical normaliser) with estimators (such as a Naive Bayes classifier).
The input data has both text based and numerical features. They cannot be processed together by the classifier unless they are transformed into the same format, in this case numerical format. We aim to construct a pipeline that takes care of all these issues.
We define two classes where one of them will select the text based column from the input, the other will select the numerical input.
```
class TextSelector(BaseEstimator, TransformerMixin):
    """
    Transformer to select a single column from the data frame to perform additional transformations on
    Use on text columns in the data
    """
    def __init__(self, key):
        self.key = key

    def fit(self, X, y=None, *parg, **kwarg):
        return self

    def transform(self, X):
        # returns the selected column as a series of strings
        return X[self.key]


class NumberSelector(BaseEstimator, TransformerMixin):
    """
    Transformer to select a single column from the data frame to perform additional transformations on
    Use on numeric columns in the data
    """
    def __init__(self, key):
        self.key = key

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # returns the selected column as a single-column dataframe
        return X[[self.key]]
```
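The only contract these selectors must honour is scikit-learn's fit/transform protocol: fit learns nothing and returns self, transform picks out the column. A minimal standalone analogue on a dict-of-lists "frame" (made up for illustration, no sklearn needed):

```python
class ColumnSelector:
    """Pick one column out of a dict-of-lists; fit is a no-op."""
    def __init__(self, key):
        self.key = key

    def fit(self, X, y=None):
        return self  # nothing to learn

    def transform(self, X):
        return X[self.key]

frame = {'description': ['oaky red'], 'red': [1], 'white': [0]}
selected = ColumnSelector('red').fit(frame).transform(frame)
```

Because fit returns self, the calls chain, which is exactly what Pipeline relies on.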
<a id="ch2.6"></a>
## 2.6. Defining data pre-processors and pipelines
As mentioned before, text based data cannot be used by the classifier. Therefore, we create a vectorizer that takes a string input and turns it into a vector of numbers.
We will use the [TfidfVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) with 1-grams of words, the predefined stopwords and the LemmaTokenizer as helper tools. Tf-idf applies the bag-of-words concept, which creates a vocabulary (the list of all terms from the string) and maps a value to each term. In the case of tf-idf this value is roughly the product of the term frequency (the number of times a term occurred within the document string) and the inverse document frequency (the inverse of the number of documents in which the term is present). Basically, the first factor emphasizes terms that are frequent in one document, while the second weighs down terms that are frequent across several documents. The reason for the latter is that if a word is used in many documents, it is unlikely to carry meaning characteristic of one topic. For more on this, read the relevant sections in [Application of Tfidf-vectorizer on wine data](https://diveki.github.io/projects/wine/tfidf.html).
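A bare-bones version of the tf-idf weight described above, on toy documents (note that scikit-learn's TfidfVectorizer uses a smoothed idf and l2 normalisation, so its numbers differ):

```python
import math

# toy corpus: each document is a list of tokens
docs = [['cherry', 'oak', 'cherry'],
        ['citrus', 'oak'],
        ['cherry', 'citrus', 'mineral']]

def tfidf(term, doc, docs):
    tf = doc.count(term) / len(doc)         # term frequency within the document
    df = sum(1 for d in docs if term in d)  # number of documents containing the term
    idf = math.log(len(docs) / df)          # inverse document frequency
    return tf * idf

score = tfidf('cherry', docs[0], docs)
```

A term repeated inside one document but rare elsewhere gets the highest weight, which is exactly the behaviour the prose above describes.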
Let us define the vectorizer.
```
vec_tdidf = TfidfVectorizer(ngram_range=(1,1), stop_words=stop_words1, analyzer='word',
                            norm='l2', tokenizer=LemmaTokenizer())
```
Now let us combine the text vectorizer with the text selector into one pipeline.
```
text = Pipeline([
('selector', TextSelector(key='description')),
('vectorizer', vec_tdidf)
])
```
Just as in the previous cell, let us put the numeric selectors into pipelines too.
```
# pipelines of body features
full = Pipeline([
('selector', NumberSelector(key='full')),
])
medium = Pipeline([
('selector', NumberSelector(key='medium')),
])
light = Pipeline([
('selector', NumberSelector(key='light')),
])
dry = Pipeline([
('selector', NumberSelector(key='dry')),
])
#pipelines of colour features
red = Pipeline([
('selector', NumberSelector(key='red')),
])
rose = Pipeline([
('selector', NumberSelector(key='rose')),
])
white = Pipeline([
('selector', NumberSelector(key='white')),
])
```
Finally, let us combine all these pipelines. Note that to combine different features one has to use the [FeatureUnion](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.FeatureUnion.html) class. Now we have all the transformation and pre-processing methods in one variable.
```
feats = FeatureUnion([('full', full),
('medium', medium),
('light', light),
('dry', dry),
('description', text),
('red', red),
('rose', rose),
('white', white)
])
```
<a id="ch2.7"></a>
## 2.7. Defining classifiers
The last step in our pipeline is to define a classifier. Our first choice is the [Random Forest](https://en.wikipedia.org/wiki/Random_forest). It is an ensemble of decision trees and tends to be more accurate than a single decision-tree classifier. It is very versatile and well suited to determining which features contribute most to good predictions (although we will not use that capability here).
```
clf = RandomForestClassifier(random_state=42)
```
Now let us put this classifier in the pipeline to combine it with the feature union and then we are ready to go and do blind tasting.
```
pipe = Pipeline([('feats', feats),
('clf',clf)
])
```
<a id="ch3"></a>
# 3. Train and test the model
We have arrived at the point where we can train our model on the train data set. To do that we call the *fit* method of the *pipe* object. Since this database is not really big, training does not take much time; keep in mind, though, that with millions of inputs training can take a considerable amount of time.
```
%timeit pipe.fit(X_train, y_train)
```
With the `%timeit` magic command you can measure how long it takes to run one line of code. In this case it took about 1 second.
<a id="ch3.1"></a>
## 3.1. Analysis of train predictions
Now let us see the performance of this trained model. Let us first investigate how good the model is at classifying grape types in the train data set. This is actually a completely in-sample measurement. We expect it to be good.
We define a function to print out all kinds of statistics on the performance, since we will use this a lot.
```
def print_stats(target, preds, labels, sep='-', sep_len=40, fig_size=(10,8)):
    print('Accuracy = %.3f' % metrics.accuracy_score(target, preds))
    print(sep*sep_len)
    print('Classification report:')
    print(metrics.classification_report(target, preds))
    print(sep*sep_len)
    print('Confusion matrix')
    cm = metrics.confusion_matrix(target, preds)
    # normalise each row (true class) to show fractions
    cm = cm / np.sum(cm, axis=1)[:, None]
    sns.set(rc={'figure.figsize': fig_size})
    sns.heatmap(cm,
                xticklabels=labels,
                yticklabels=labels,
                annot=True, cmap='YlGnBu')
    plt.pause(0.05)
```
We will print out the [accuracy](http://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score), the [classification report](http://scikit-learn.org/stable/modules/model_evaluation.html#classification-report) and the [confusion matrix](http://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix).
Accuracy is the number of correctly predicted grape types divided by the total number of grapes.
Classification report is a concise way of presenting estimator performance through the following metrics: [precision](http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html), [recall](http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html), [f1-score](http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html) and the number of samples belonging to each target feature. A good classifier has a value close to 1 for both precision and recall and therefore for f1-score too.
The confusion matrix is again a simple way to present how many grape types were correctly identified (diagonal elements), while the off-diagonal elements tell us how many samples were classified as another target type. Obviously, one would like to decrease the off-diagonal values to approach perfect classification. The vertical axis represents the true class of the target, while the horizontal axis shows the predicted class.
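For reference, the raw (unnormalised) confusion matrix can be computed by hand; a sketch on toy labels:

```python
def confusion(y_true, y_pred, labels):
    """Rows are true classes, columns are predicted classes."""
    idx = {lab: i for i, lab in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m
```

Every off-diagonal increment is one misclassified sample.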
```
#train stats
preds = pipe.predict(X_train)
print_stats(y_train, preds, pipe.classes_, fig_size=(7,4))
```
Well, this model does a perfect job on the train data set. We more or less expected that, since it was trained on it. Still, it is remarkable that the vectorization of the text combined with the body and colour features is capable of perfectly differentiating all the grape types in the train input. Let us now turn to the test set.
<a id="ch3.2"></a>
## 3.2. Analysis of test predictions
As we have mentioned earlier, this test data was never used with respect to the model. It is the first time that the model sees it. The target sample sizes are not big.
```
# test stats
preds = pipe.predict(X_test)
print_stats(y_test, preds, pipe.classes_)
```
The accuracy is 74%. We expected some false positives and false negatives, but a surprising observation is that even though we explicitly tell the classifier what colour a wine has, it is still able to confuse red with white wines and vice versa. In order to decide whether this result is good or bad, we establish a benchmark and also try out other classifiers.
<a id="ch3.3"></a>
## 3.3. Testing with different classifiers
First, we establish a reference classification outcome. We do it by creating a classifier that generates random predictions by respecting the training set target feature distribution (since not all the grape types are equally represented).
```
clf = DummyClassifier(strategy='stratified',random_state=42)
pipe = Pipeline([('feats', feats),
('clf',clf)
])
%timeit pipe.fit(X_train, y_train)
# test stats
preds = pipe.predict(X_test)
print_stats(y_test, preds, pipe.classes_)
```
Well, a random stratified classifier achieves 29% accuracy. Our first try with the Random Forest classifier was clearly far better. Now let us look at one of the most widely used ensemble boosting classifiers, the XGBClassifier from the [xgboost](https://xgboost.readthedocs.io/en/latest/python/python_api.html) package. Gradient boosting sequentially adds predictors, each correcting the previous models: the classifier fits each new model to the residuals of the previous prediction and then minimizes the loss when adding the latest predictor.
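The "fit new learners to the residuals" idea can be demonstrated on a toy 1-D regression with decision stumps; this is a sketch of the principle only, not XGBoost's actual algorithm, and the data is made up:

```python
# toy gradient boosting: each round fits a stump to the current residuals
# and adds a damped version of it to the running prediction
def stump_fit(x, r):
    """Find the threshold split of x that best fits residuals r."""
    best = None
    for thr in x:
        left = [ri for xi, ri in zip(x, r) if xi <= thr]
        right = [ri for xi, ri in zip(x, r) if xi > thr]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        err = sum((ri - (lm if xi <= thr else rm)) ** 2 for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    return best[1:]

def boost(x, y, rounds=20, lr=0.5):
    pred = [0.0] * len(y)
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]  # still unexplained part
        thr, lm, rm = stump_fit(x, residuals)
        pred = [pi + lr * (lm if xi <= thr else rm) for xi, pi in zip(x, pred)]
    return pred
```

Each round shrinks the residuals, so the ensemble prediction converges toward the targets.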
```
clf = XGBClassifier(random_state=42, n_jobs=1)
pipe = Pipeline([('feats', feats),
('clf',clf)
])
%timeit pipe.fit(X_train, y_train)
# test stats
preds = pipe.predict(X_test)
print_stats(y_test, preds, pipe.classes_)
```
Without any tuning it improves the accuracy by 10 percentage points, and all the other metrics, compared to the Random Forest classifier. However, it is slower to run.
One might say we should stick to the XGBClassifier, but it turns out (we will not present it here, but you can check it by running the code) that when we move to the larger Kaggle database, the difference between the XGBClassifier and the Random Forest classifier disappears (and the latter actually gives slightly better results, by roughly 2%). This comparison assumes default settings for both classifiers, which does not necessarily give a fair reference for comparing the two.
Because of these observations we will stick to the Random Forest classifier.
<a id="ch3.4"></a>
## 3.4. Hyperparameter tuning
There are a certain number of parameters that can be adjusted to improve the performance of a classifier. Adjusting them is hyperparameter tuning. We will improve the Random Forest classifier by using a grid search technique over predefined parameter values and apply [cross validation](http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation). All this can be done with the *GridSearchCV* class. Cross validation is basically a k-fold technique: for example, we split our data into $M$ equal pieces, assign $M-k$ of them to be the train set and the remaining $k$ pieces to be the test set. In the next round we choose another $k$ pieces to be the test set and the rest to be the train set. We can repeat this several times and take statistics over the outcomes. Within a round, the train and test sets never overlap.
We have observed that by choosing the right number of features (in our case the *max_features* argument) we can improve the accuracy considerably (to above 80%). Below we show how to optimize max_features and the number of estimators (the number of trees grown in the Random Forest). We will use multiprocessing too (check whether your operating system supports it). There are also other parameters, like *max_depth* and *min_samples_leaf*, that you can optimize if you uncomment them. We will perform 3-fold cross validation.
Obviously, the optimization time increases with the number of parameters to optimize and the number of cross-validation folds required. In the end one has to compromise between optimization time and the quality of the optimum.
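The k-fold splitting itself is simple to write out; a standard-library sketch (GridSearchCV handles this internally):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k folds; each fold serves once as the test set."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    # each entry: (train indices = all other folds, test indices = this fold)
    return [([j for f in folds[:i] + folds[i + 1:] for j in f], folds[i])
            for i in range(k)]
```

Every sample lands in the test set exactly once, and train/test never overlap within a split.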
```
# classifier and pipeline definition
clf = RandomForestClassifier(random_state=42)
pipe = Pipeline([('feats', feats),
('clf',clf)
])
# definition of parameter grid to scan through
param_grid = {
#'clf__max_depth': [60, 100, 140],
'clf__max_features': ['log2', 'auto', None],
#'clf__min_samples_leaf': [5,10,50,100,200],
'clf__n_estimators': [100, 500, 1000]
}
# grid search cross validation instantiation
grid_search = GridSearchCV(estimator = pipe, param_grid = param_grid,
cv = 3, n_jobs = 1, verbose = 0)
#hyperparameter fitting
grid_search.fit(X_train, y_train)
```
Let us first see the accuracy measures on the test sets of the cross validation:
```
grid_search.cv_results_['mean_test_score']
```
There were 9 combinations of the input parameters, therefore there are 9 accuracies. All of them are above 80%, meaning basically any pair of the input parameters would do a good job.
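The 9 combinations are simply the Cartesian product of the two parameter lists; sketched with the standard library:

```python
import itertools

param_grid = {
    'clf__max_features': ['log2', 'auto', None],
    'clf__n_estimators': [100, 500, 1000],
}
# every combination of one value per parameter
combos = [dict(zip(param_grid, values))
          for values in itertools.product(*param_grid.values())]
```

3 values for max_features times 3 values for n_estimators gives the 9 candidate settings, each cross-validated 3 times.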
However let us check the best parameter combination:
```
grid_search.best_params_
```
Now that we have the best parameters, let us create a classifier with these inputs:
```
clf_opt=grid_search.best_estimator_
```
Let us verify the parameters this classifier uses, just to make sure we really use what we intended:
```
clf_opt.named_steps['clf'].get_params()
```
Indeed, we have the right input parameters. Let us now train it and test it on our data set.
```
clf_opt.fit(X_train, y_train)
preds = clf_opt.predict(X_test)
print_stats(y_test, preds, clf_opt.classes_)
```
Our accuracy increased from 74% to 89%. We can see that the number of false positives and negatives has dropped a lot. Precision, recall and the f1-score are all close to 0.90. The cross-colour misclassification has decreased too. This is a great improvement.
<a id="ch4"></a>
# 4. Classification of data from Kaggle
As we have already mentioned, we came across the [Kaggle](https://www.kaggle.com/) [Wine Reviews](https://www.kaggle.com/zynicide/wine-reviews) [[5](#ch7)] wine description database by *zackthoutt*. It is much larger than the database we constructed and comes from a completely different source. Verifying that our classification process works on this database as well would therefore be an ideal out-of-sample test.
We will do 3 things. First, we apply the whole procedure we developed earlier on this database and hope for good results. Second, we will use the Kaggle database as the train set to fit our model (since this is the larger set) and use our database as the test set. This cross-database check, if it works, supports the validity of the model. Finally, since the Kaggle database is huge, we are not restricted to selecting only 4 or 5 grape types to classify. We will enlarge the set of target classes, though not to the whole database.
First, we load the database and transform the relevant columns into the format we used earlier. Also notice that this database has no explicit information on the body or the colour of the wine, so we will only use the description column. However, from the literature we can add the colour of each wine manually, completing the input information.
We will not put the description column through as rigorous a pre-processing as we did for our own database. We accept it as it is (and drop any row with NAs). We pre-process the grape variety column by lower-casing it and turning any shiraz into syrah.
```
filename = '../DataBase/winemag-data-130k-v2.csv'
# select the description and grape variety columns
kaggle = pd.read_csv(filename, usecols=['description', 'grape_variety'])
# transform the grape variety column into lower case strings
kaggle['grape_variety'] = kaggle['grape_variety'].str.lower()
kaggle['description'] = kaggle['description'].str.lower()
kaggle.head()
```
<a id="ch4.1"></a>
## 4.1. Data formatting
Below you can see a few steps of preprocessing.
```
# function to change any shiraz into syrah
def shiraz_filter(ss):
    if ss == 'shiraz':
        return 'syrah'
    else:
        return ss
kaggle['grape_variety'] = kaggle.apply(lambda row: shiraz_filter(row['grape_variety']), axis=1)
# drop any row that contains NAs
kaggle = kaggle.dropna()
# select the rows that contain the 4 grape names: chardonnay, syrah, pinot noir, sauvignon blanc
kaggle_input = kaggle[kaggle['grape_variety'].isin(varieties)].reset_index()
pd.unique(kaggle_input.grape_variety)
# define a colour dictionary that will be mapped into the database
colour_dict = {'pinot noir': 'red', 'syrah': 'red', 'chardonnay': 'white', 'sauvignon blanc': 'white'}
kaggle_input['colour'] = kaggle_input.apply(lambda row: colour_dict[row['grape_variety']], axis=1)
colour_dummies = pd.get_dummies(kaggle_input['colour'])
kaggle_input = kaggle_input.merge(colour_dummies, left_index=True, right_index=True)
```
Create the train and test sets.
```
# split the data into train and test
combined_features = ['description', 'white', 'red']
target = 'grape_variety'
X_train, X_test, y_train, y_test = train_test_split(kaggle_input[combined_features], kaggle_input[target],
test_size=0.33, random_state=42)
```
<a id="ch4.2"></a>
## 4.2. Classification
Let us create the corresponding pipeline with the colour features and the text vectorizer:
```
red = Pipeline([
('selector', NumberSelector(key='red')),
])
white = Pipeline([
('selector', NumberSelector(key='white')),
])
text = Pipeline([
    ('selector', TextSelector(key='description')),
    ('vectorizer', TfidfVectorizer(ngram_range=(1,1), stop_words=stop_words1, analyzer='word',
                                   norm='l2', tokenizer=LemmaTokenizer()))
])
feats = FeatureUnion([('description', text),
('red', red),
('white', white)
])
```
The database is fairly big, so hyperparameter optimization on it is rather memory intensive (in my case 4 GB of RAM was not enough). Therefore, we first present a classification with the default settings and then do a cross validation with the optimized parameters from the previous section (even though they were tuned on another database).
```
# classifier and pipeline definition
clf = RandomForestClassifier(random_state=42)
pipe = Pipeline([('feats', feats),
('clf',clf)
])
pipe.fit(X_train, y_train)
preds = pipe.predict(X_test)
print_stats(y_test, preds, pipe.classes_)
# definition of parameter grid to scan through
#param_grid = {
# #'clf__max_depth': [60, 100, 140],
# 'clf__max_features': ['log2', 'auto', None],
# #'clf__min_samples_leaf': [5,10,50,100,200],
# 'clf__n_estimators': [100, 500, 1000]
#}
# grid search cross validation instantiation
#grid_search = GridSearchCV(estimator = pipe, param_grid = param_grid,
# cv = 3, n_jobs = 1, verbose = 0)
#hyperparameter fitting
#grid_search.fit(X_train, y_train)
#grid_search.cv_results_['mean_test_score']
```
Without refining the model we get an accuracy of 86.7%. False negatives are large for syrah and sauvignon blanc, while false positives are large for pinot noir and chardonnay.
Now we apply the optimized parameters obtained in the previous section in a cross-validation setting. Hopefully they will improve the accuracy in all folds.
```
clf = RandomForestClassifier(random_state=42, max_features='log2', n_estimators=1000)
pipe = Pipeline([('feats', feats),
('clf',clf)
])
### stratified training
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=3)
sc_mean=[]
for train, test in skf.split(kaggle_input[combined_features], kaggle_input[target]):
    pipe.fit(kaggle_input.loc[train, combined_features], kaggle_input.loc[train, target])
    preds = pipe.predict(kaggle_input.loc[test, combined_features])
    sc_mean.append(metrics.accuracy_score(kaggle_input.loc[test, target], preds))
    print_stats(kaggle_input.loc[test, target], preds, pipe.classes_)
print('Mean: %s' % str(sum(sc_mean)/len(sc_mean)))
print('Standard deviation: %s' % str(np.std(np.array(sc_mean))))
```
Indeed, all 3 folds gave better accuracy than the default settings. Since we have only 3 folds, we use t-statistics to infer a 95% confidence interval for the mean accuracy: 0.868632 - 0.878568. Without optimization the accuracy was 0.867, so we infer that the chosen parameters probably helped to improve the accuracy. However, they did not help to decrease the false positives for chardonnay and pinot noir.
We have to mention that the Kaggle database has not been pre-processed as thoroughly as our own database, so it may still contain artefacts that cause false positives and negatives.
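The interval follows the usual small-sample t recipe; a sketch with made-up fold accuracies (4.303 is the two-sided 95% t quantile for 2 degrees of freedom, and the actual fold values differ from these illustrative ones):

```python
import math

def t_ci(samples, t_crit):
    """Two-sided t confidence interval for the mean of a small sample."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                                # standard error of the mean
    return mean - t_crit * se, mean + t_crit * se

# hypothetical fold accuracies for illustration
low, high = t_ci([0.870, 0.872, 0.876], t_crit=4.303)
```

With only 3 folds the interval is wide, which is why the t distribution (rather than the normal) is the appropriate choice here.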
Let us see what happens if we train the model on one source of data set and test it on another source.
<a id="ch5"></a>
# 5. Cross-data validation of the model
We will train on the Kaggle data, since it is much larger, using the optimized input parameters, namely 1000 for *n_estimators* and *log2* for *max_features*. To construct the pipeline we use the same steps as before.
```
# split the data into train and test
combined_features = ['description', 'white', 'red']
target = 'grape_variety'
red = Pipeline([
('selector', NumberSelector(key='red')),
])
white = Pipeline([
('selector', NumberSelector(key='white')),
])
text = Pipeline([
    ('selector', TextSelector(key='description')),
    ('vectorizer', TfidfVectorizer(ngram_range=(1,1), stop_words=stop_words1, analyzer='word',
                                   norm='l2', tokenizer=LemmaTokenizer()))
])
feats = FeatureUnion([('description', text),
('red', red),
('white', white)
])
clf = RandomForestClassifier(random_state=42, max_features='log2', n_estimators=1000, n_jobs=1)
pipe = Pipeline([('feats', feats),
('clf',clf)
])
# fit the entire kaggle data
pipe.fit(kaggle_input[combined_features], kaggle_input[target])
# test stats
preds = pipe.predict(data_input[combined_features])
print_stats(data_input[target], preds, pipe.classes_)
```
The out-of-sample accuracy has decreased to 72%. This is to be expected: the train and test sets come from completely different sources, and while the test set went through some data cleaning, the train set did not really. Despite these differences, the metrics are still quite good in the end.
<a id="ch5.1"></a>
## 5.1 Classifying more target features
Out of curiosity, we will now classify more grape varieties from the kaggle data set alone. We will subset the data to single grape types that have more than 1000 samples.
```
# check how many grapes are in the sample
single_name = [name for name in kaggle.grape_variety if len(name.split())<=2]
# visually investigate what grapes got through the first filter
count = nltk.FreqDist(single_name)
count.most_common(20)
```
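nltk's FreqDist behaves much like the standard library's collections.Counter, so the same count could be sketched as (toy names for illustration):

```python
from collections import Counter

names = ['pinot noir', 'red blend', 'pinot noir', 'chardonnay']
count = Counter(names)
top = count.most_common(2)  # most frequent names first
```

most_common gives the (name, frequency) pairs in descending order, which is what we scan for blends and underpopulated varieties.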
As you can see, the list contains many blends and wines that do not spell out the grape name. We will filter them out so that we have clean names.
```
limit=1000
# grape names to be filtered
filtered_name = ['red blend', 'portuguese red', 'white blend', 'sparkling blend', 'champagne blend',
'portuguese white', 'rosé']
selected_grapes = [key for key, value in count.items() if value > limit]
selected_grapes = [name for name in selected_grapes if name not in filtered_name]
selected_grapes
# select the rows that contain the selected grape names
kaggle_input = kaggle[kaggle['grape_variety'].isin(selected_grapes)].reset_index()
pd.unique(kaggle_input.grape_variety)
```
We end up with 18 grape types. For a human, that is a lot: keeping all of these types' characteristics in memory and being able to identify them is demanding.
Just as before, we construct a mapping between the grape type and its colour and complete the input with the colour of the wine.
```
# define a colour dictionary that will be mapped into the database
colour_dict = {'pinot noir': 'red', 'syrah': 'red', 'chardonnay': 'white', 'sauvignon blanc': 'white',
'pinot gris': 'white', 'riesling': 'white', 'gewürztraminer': 'white', 'cabernet sauvignon': 'red',
'malbec': 'red', 'merlot': 'red', 'gamay': 'red', 'sangiovese': 'red', 'cabernet franc': 'red',
'zinfandel': 'red', 'grüner veltliner': 'white', 'nebbiolo': 'red', 'pinot grigio': 'white',
'tempranillo': 'red'}
kaggle_input = kaggle_input.dropna()
kaggle_input['colour'] = kaggle_input.apply(lambda row: colour_dict[row['grape_variety']], axis=1)
colour_dummies = pd.get_dummies(kaggle_input['colour'])
kaggle_input = kaggle_input.merge(colour_dummies, left_index=True, right_index=True)
```
We select the colours and the description as the input features and the grape variety as the target feature. Then we split the data into train and test sets while keeping the class occurrence ratios the same in both sets.
```
# split the data into train and test
combined_features = ['description', 'white', 'red']
target = 'grape_variety'
X_train, X_test, y_train, y_test = train_test_split(kaggle_input[combined_features], kaggle_input[target],
test_size=0.33, random_state=42, stratify=kaggle_input[target])
```
Finally, we define the pipeline of feature selection, vectorization and classification, fit the train set and investigate the test set.
```
red = Pipeline([
('selector', NumberSelector(key='red')),
])
white = Pipeline([
('selector', NumberSelector(key='white')),
])
text = Pipeline([
    ('selector', TextSelector(key='description')),
    ('vectorizer', TfidfVectorizer(ngram_range=(1,1), stop_words=stop_words1, analyzer='word',
                                   norm='l2', tokenizer=LemmaTokenizer()))
])
feats = FeatureUnion([('description', text),
('red', red),
('white', white)
])
# classifier and pipeline definition
clf = RandomForestClassifier(random_state=42, max_features='log2', n_estimators=1000)
pipe = Pipeline([('feats', feats),
('clf',clf)
])
pipe.fit(X_train, y_train)
preds = pipe.predict(X_test)
print_stats(y_test, preds, pipe.classes_, fig_size=(15,10))
```
It is quite remarkable that 18 grape types can be predicted from the description and colour of the wine with 70% accuracy. The precision, however, is not great in general.
<a id="ch6"></a>
# 6. Conclusion
This study presented the construction of a classification model that is able to differentiate grape types based on the description of wine samples. After construction, the model was applied to a smaller dataset that we collected; classification parameters were optimized and performance improved. Because of the size of the dataset, we worked with only 4 grape types. Relying on the same principles, we repeated the classification on a bigger data set and showed that the previously obtained optimized parameters improve the performance for this data set too. In both cases we managed to achieve a classification precision of above 85%. To show that this performance is not due to mere luck or the choice of data set, we performed a cross-data-set validation too, where the accuracy dropped to 72%. Finally, using the large data set, we predicted the grape types of 18 different grapes with an accuracy of 70%, which is remarkable by human measures.
If you have any questions please feel free to contact me at [diveki@gmail.com](mailto:diveki@gmail.com). You can also fork this project from [my GitHub repository](https://github.com/diveki/WineSommelier) or take a look at [my GitHub Pages website](https://diveki.github.io). I am going to publish these results on [my Kaggle page](https://www.kaggle.com/diveki) too, with some additional calculations.
Just as a bonus, the preludes for this report can be found [here](https://diveki.github.io/projects/wine/wine.html) and [here](https://diveki.github.io/projects/wine/tfidf.html).
<a id="ch7"></a>
# 7. References
1. https://en.wikipedia.org/wiki/Sommelier
2. Wine and Spirit Education Trust - https://www.wsetglobal.com/
3. Wine terminology list - https://www.cawineclub.com/wine-tasting-terms
4. Become a sommelier - https://diveki.github.io/projects/wine/wine.html
5. Kaggle Wine Review - https://www.kaggle.com/zynicide/wine-reviews
6. Application of TfIdf-vectorizer on wine data - https://diveki.github.io/projects/wine/tfidf.html
7. Lemmatization - https://en.wikipedia.org/wiki/Lemmatisation
8. Label encoding - https://medium.com/@contactsunny/label-encoder-vs-one-hot-encoder-in-machine-learning-3fc273365621
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
import os
!pip install wget
!apt-get install sox
!git clone https://github.com/NVIDIA/NeMo.git
os.chdir('NeMo')
!bash reinstall.sh
!pip install unidecode
```
# **SPEAKER RECOGNITION**
Speaker Recognition (SR) is a broad research area which solves two major tasks: speaker identification (who is speaking?) and speaker verification (is the speaker who they claim to be?). In this work, we focus on far-field, text-independent speaker recognition, where the identity of the speaker is based on how speech is spoken, not necessarily on what is being said. Typically such SR systems operate on unconstrained speech utterances,
which are converted into vectors of fixed length, called speaker embeddings. Speaker embeddings are also used in automatic speech recognition (ASR) and speech synthesis.
As the goal of most speaker-related systems is to obtain good speaker-level embeddings that help distinguish one speaker from others, we first train these embeddings in an end-to-end manner, optimizing a [QuartzNet](https://arxiv.org/abs/1910.10261)-based encoder model with a cross-entropy loss. We modify the original QuartzNet decoder to produce fixed-size embeddings irrespective of the length of the input audio, employing a mean- and variance-based statistics pooling method to extract them.
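The mean- and variance-based statistics pooling mentioned above can be sketched in plain Python: a variable-length sequence of feature frames is collapsed into one fixed-size vector by concatenating the per-dimension mean and standard deviation. This is a simplified illustration of the idea, not the NeMo implementation:

```python
from statistics import fmean, pstdev

def stat_pooling(frames):
    """Collapse a (time x feature_dim) frame sequence into a single
    fixed-size vector: per-dimension means followed by per-dimension
    standard deviations."""
    dims = list(zip(*frames))  # transpose to one tuple per feature dimension
    return [fmean(d) for d in dims] + [pstdev(d) for d in dims]

# Utterances of any length map to the same output size (2 * feature_dim).
print(stat_pooling([[1.0, 2.0], [3.0, 4.0]]))  # → [2.0, 3.0, 1.0, 1.0]
```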
In this tutorial we first train these embeddings on speaker datasets and then extract speaker embeddings from a pretrained network for a new dataset. Since Google Colab has very slow read-write speeds, please run this locally for training on [hi-mia](https://arxiv.org/abs/1912.01231).
We use the [get_hi-mia-data.py](https://github.com/NVIDIA/NeMo/blob/master/scripts/get_hi-mia_data.py) script to download the necessary files, extract them, and re-sample them to 16 kHz where needed. We also provide scripts, used at the end, to score these embeddings for a speaker-verification task such as hi-mia.
```
data_dir = 'scripts/data/'
!mkdir $data_dir
# Download and process dataset. This will take a few moments...
!python scripts/get_hi-mia_data.py --data_root=$data_dir
```
After download and conversion, your `data` folder should contain directories with manifest files as:
* `data/<set>/train.json`
* `data/<set>/dev.json`
* `data/<set>/{set}_all.json`
For each set we also create utt2spk files; these files will later be used in PLDA training.
Each line in a manifest file describes one training sample: `audio_filepath` contains the path to the wav file, `duration` its duration in seconds, and `label` the speaker class label:
`{"audio_filepath": "<absolute path to dataset>/data/train/SPEECHDATA/wav/SV0184/SV0184_6_04_N3430.wav", "duration": 1.22, "label": "SV0184"}`
`{"audio_filepath": "<absolute path to dataset>/data/train/SPEECHDATA/wav/SV0184/SV0184_5_03_F2037.wav", duration": 1.375, "label": "SV0184"}`
Import necessary packages
```
from ruamel.yaml import YAML
import nemo
import nemo.collections.asr as nemo_asr
import copy
from functools import partial
```
# Building Training and Evaluation DAGs with NeMo
Building a model using NeMo consists of
1. Instantiating the neural modules we need
2. Specifying the DAG by linking them together.
In NeMo, the training and inference pipelines are managed by a `NeuralModuleFactory`, which takes care of checkpointing, callbacks, and logs, along with other details of training and inference. We set its `log_dir` argument to specify where our model logs and outputs will be written, and can set other training and inference settings in its constructor. For instance, if we were resuming training from a checkpoint, we would set the argument `checkpoint_dir=<path_to_checkpoint>`.
Along with logs in NeMo, you can optionally view the tensorboard logs with the `create_tb_writer=True` argument to the `NeuralModuleFactory`. By default all the tensorboard log files will be stored in `{log_dir}/tensorboard`, but you can change this with the `tensorboard_dir` argument. You can load tensorboard logs by running `tensorboard --logdir=<path_to_tensorboard_dir>` in the terminal.
```
exp_name = 'quartznet3x2_hi-mia'
work_dir = './myExps/'
neural_factory = nemo.core.NeuralModuleFactory(
log_dir=work_dir+"/hi-mia_logdir/",
checkpoint_dir="./myExps/checkpoints/" + exp_name,
create_tb_writer=True,
random_seed=42,
tensorboard_dir=work_dir+'/tensorboard/',
)
```
Now that we have our neural module factory, we can specify our **neural modules and instantiate them**. Here, we load the parameters for each module from the configuration file.
```
logging = nemo.logging
yaml = YAML(typ="safe")
with open('examples/speaker_recognition/configs/quartznet_spkr_3x2x512_xvector.yaml') as f:
spkr_params = yaml.load(f)
sample_rate = spkr_params["sample_rate"]
time_length = spkr_params.get("time_length", 8)
logging.info("max time length considered for each file is {} sec".format(time_length))
```
Instantiate the train data layer using the config arguments. `labels=None` automatically creates output labels from the manifest files; if you would like to pass specific speaker names, use the `labels` option. So when instantiating the eval data layer, we pass the training labels to the class so its speaker output labels match those of the training data layer. This comes in handy when training on multiple datasets with more than one manifest file.
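Why the eval layer must reuse the training labels: each speaker name is mapped to an integer class id, and the mapping is only consistent if both layers start from the same label list. A minimal sketch of this idea (helper name is hypothetical):

```python
def build_label_map(manifest_labels, labels=None):
    """Map speaker labels to class ids. Passing an existing label list
    lets a second (eval) layer reuse the same id for each speaker."""
    if labels is None:
        labels = sorted(set(manifest_labels))
    return labels, {label: idx for idx, label in enumerate(labels)}

train_labels, train_map = build_label_map(['SV0184', 'SV0040', 'SV0184'])
# Eval data may list speakers in a different order; reusing the train
# label list keeps the integer ids consistent between the two layers.
eval_labels, eval_map = build_label_map(['SV0184'], labels=train_labels)
```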
```
train_dl_params = copy.deepcopy(spkr_params["AudioToSpeechLabelDataLayer"])
train_dl_params.update(spkr_params["AudioToSpeechLabelDataLayer"]["train"])
del train_dl_params["train"]
del train_dl_params["eval"]
batch_size=64
data_layer_train = nemo_asr.AudioToSpeechLabelDataLayer(
manifest_filepath=data_dir+'/train/train.json',
labels=None,
batch_size=batch_size,
time_length=time_length,
**train_dl_params,
)
eval_dl_params = copy.deepcopy(spkr_params["AudioToSpeechLabelDataLayer"])
eval_dl_params.update(spkr_params["AudioToSpeechLabelDataLayer"]["eval"])
del eval_dl_params["train"]
del eval_dl_params["eval"]
data_layer_eval = nemo_asr.AudioToSpeechLabelDataLayer(
manifest_filepath=data_dir+'/train/dev.json",
labels=data_layer_train.labels,
batch_size=batch_size,
time_length=time_length,
**eval_dl_params,
)
data_preprocessor = nemo_asr.AudioToMelSpectrogramPreprocessor(
sample_rate=sample_rate, **spkr_params["AudioToMelSpectrogramPreprocessor"],
)
encoder = nemo_asr.JasperEncoder(**spkr_params["JasperEncoder"],)
decoder = nemo_asr.JasperDecoderForSpkrClass(
feat_in=spkr_params["JasperEncoder"]["jasper"][-1]["filters"],
num_classes=data_layer_train.num_classes,
pool_mode=spkr_params["JasperDecoderForSpkrClass"]['pool_mode'],
emb_sizes=spkr_params["JasperDecoderForSpkrClass"]["emb_sizes"].split(","),
)
xent_loss = nemo_asr.CrossEntropyLossNM(weight=None)
```
The next step is to assemble our training DAG by specifying the inputs to each neural module.
```
audio_signal, audio_signal_len, label, label_len = data_layer_train()
processed_signal, processed_signal_len = data_preprocessor(input_signal=audio_signal, length=audio_signal_len)
encoded, encoded_len = encoder(audio_signal=processed_signal, length=processed_signal_len)
logits, _ = decoder(encoder_output=encoded)
loss = xent_loss(logits=logits, labels=label)
```
We would like to be able to evaluate our model on the dev set, as well, so let's set up the evaluation DAG.
Our evaluation DAG will reuse most of the parts of the training DAG with the exception of the data layer, since we are loading the evaluation data from a different file but evaluating on the same model. Note that if we were using data augmentation in training, we would also leave that out in the evaluation DAG.
```
audio_signal_test, audio_len_test, label_test, _ = data_layer_eval()
processed_signal_test, processed_len_test = data_preprocessor(
input_signal=audio_signal_test, length=audio_len_test
)
encoded_test, encoded_len_test = encoder(audio_signal=processed_signal_test, length=processed_len_test)
logits_test, _ = decoder(encoder_output=encoded_test)
loss_test = xent_loss(logits=logits_test, labels=label_test)
```
# Creating CallBacks
We would like to be able to monitor our model while it's training, so we use callbacks. In general, callbacks are functions that are called at specific intervals over the course of training or inference, such as at the start or end of every n iterations, epochs, etc. The callbacks we'll be using here are the SimpleLossLoggerCallback, which reports the training loss (or another metric of your choosing, such as % accuracy for speaker recognition tasks), and the EvaluatorCallback, which regularly evaluates the model on the dev set. Both of these callbacks require you to pass in the tensors to be evaluated: these are the final outputs of the training and eval DAGs above.
Another useful callback is the CheckpointCallback, for saving checkpoints at set intervals. We create one here just to demonstrate how it works.
```
from nemo.collections.asr.helpers import (
monitor_classification_training_progress,
process_classification_evaluation_batch,
process_classification_evaluation_epoch,
)
from nemo.utils.lr_policies import CosineAnnealing
train_callback = nemo.core.SimpleLossLoggerCallback(
tensors=[loss, logits, label],
print_func=partial(monitor_classification_training_progress, eval_metric=[1]),
step_freq=1000,
get_tb_values=lambda x: [("train_loss", x[0])],
tb_writer=neural_factory.tb_writer,
)
callbacks = [train_callback]
chpt_callback = nemo.core.CheckpointCallback(
folder="./myExps/checkpoints/" + exp_name,
load_from_folder="./myExps/checkpoints/" + exp_name,
step_freq=1000,
)
callbacks.append(chpt_callback)
tagname = "hi-mia_dev"
eval_callback = nemo.core.EvaluatorCallback(
eval_tensors=[loss_test, logits_test, label_test],
user_iter_callback=partial(process_classification_evaluation_batch, top_k=1),
user_epochs_done_callback=partial(process_classification_evaluation_epoch, tag=tagname),
eval_step=1000, # How often we evaluate the model on the test set
tb_writer=neural_factory.tb_writer,
)
callbacks.append(eval_callback)
```
Now that we have our model and callbacks set up, how do we run it?
Once we create our neural factory and the callbacks for the information that we want to see, we can start training by simply calling the train function on the tensors we want to optimize and our callbacks! Since this notebook is meant to get you started and the dataset is small, it will quickly reach high accuracies; for better models, use bigger datasets.
```
# train model
num_epochs=25
N = len(data_layer_train)
steps_per_epoch = N // batch_size
logging.info("Number of steps per epoch {}".format(steps_per_epoch))
neural_factory.train(
tensors_to_optimize=[loss],
callbacks=callbacks,
lr_policy=CosineAnnealing(
num_epochs * steps_per_epoch, warmup_steps=0.1 * num_epochs * steps_per_epoch,
),
optimizer="novograd",
optimization_params={
"num_epochs": num_epochs,
"lr": 0.02,
"betas": (0.95, 0.5),
"weight_decay": 0.001,
"grad_norm_clip": None,
}
)
```
Now that we trained our embeddings, we shall extract these embeddings using our pretrained checkpoint present at `checkpoint_dir`. As we can see from the neural architecture, we extract the embeddings after the `emb1` layer.

Now use the test manifest to get the embeddings. As we saw before, let's create a new `data_layer` for the test set, reuse the previously instantiated modules, and attach the DAGs.
```
eval_dl_params = copy.deepcopy(spkr_params["AudioToSpeechLabelDataLayer"])
eval_dl_params.update(spkr_params["AudioToSpeechLabelDataLayer"]["eval"])
del eval_dl_params["train"]
del eval_dl_params["eval"]
eval_dl_params['shuffle'] = False # To grab the file names without changing data_layer
test_dataset = data_dir+'/test/test_all.json'
data_layer_test = nemo_asr.AudioToSpeechLabelDataLayer(
manifest_filepath=test_dataset,
labels=None,
batch_size=batch_size,
**eval_dl_params,
)
audio_signal_test, audio_len_test, label_test, _ = data_layer_test()
processed_signal_test, processed_len_test = data_preprocessor(
input_signal=audio_signal_test, length=audio_len_test)
encoded_test, _ = encoder(audio_signal=processed_signal_test, length=processed_len_test)
_, embeddings = decoder(encoder_output=encoded_test)
```
Now get the embeddings using the neural factory's `infer` command, which just does a forward pass through all our modules, and save the embeddings to `<work_dir>/embeddings`.
```
import numpy as np
import json
eval_tensors = neural_factory.infer(tensors=[embeddings, label_test], checkpoint_dir="./myExps/checkpoints/" + exp_name)
inf_emb, inf_label = eval_tensors
whole_embs = []
whole_labels = []
manifest = open(test_dataset, 'r').readlines()
for line in manifest:
line = line.strip()
dic = json.loads(line)
filename = dic['audio_filepath'].split('/')[-1]
whole_labels.append(filename)
for idx in range(len(inf_label)):
whole_embs.extend(inf_emb[idx].numpy())
embedding_dir = './myExps/embeddings/'
if not os.path.exists(embedding_dir):
os.mkdir(embedding_dir)
filename = os.path.basename(test_dataset).split('.')[0]
name = embedding_dir + filename
np.save(name + '.npy', np.asarray(whole_embs))
np.save(name + '_labels.npy', np.asarray(whole_labels))
logging.info("Saved embedding files to {}".format(embedding_dir))
!ls $embedding_dir
```
# Cosine Similarity Scoring
Here we provide a scoring script for hi-mia, whose trial file has the structure `<speaker_name1> <speaker_name2> <target/nontarget>`. First copy the `trails_1m` file present in the test folder to our embeddings directory.
```
!cp $data_dir/test/trails_1m $embedding_dir/
```
The command below outputs the EER (%) based on the cosine similarity score:
```
!python examples/speaker_recognition/hi-mia_eval.py --data_root $embedding_dir --emb $embedding_dir/test_all.npy --emb_labels $embedding_dir/test_all_labels.npy --emb_size 1024
```
# PLDA Backend
To fine-tune our speaker embeddings further, we use Kaldi's PLDA scripts for both training and evaluation. From this point forward, please make sure Kaldi is installed and its path is exported as `KALDI_ROOT`.
To train PLDA, we can use either the dev set or the training set. Let's use the training-set embeddings to train the PLDA model and then use it to score the test embeddings. To do that we need embeddings for our training data as well; similar to the steps above, generate the train embeddings:
```
test_dataset = data_dir+'/train/train.json'
data_layer_test = nemo_asr.AudioToSpeechLabelDataLayer(
manifest_filepath=test_dataset,
labels=None,
batch_size=batch_size,
**eval_dl_params,
)
audio_signal_test, audio_len_test, label_test, _ = data_layer_test()
processed_signal_test, processed_len_test = data_preprocessor(
input_signal=audio_signal_test, length=audio_len_test)
encoded_test, _ = encoder(audio_signal=processed_signal_test, length=processed_len_test)
_, embeddings = decoder(encoder_output=encoded_test)
eval_tensors = neural_factory.infer(tensors=[embeddings, label_test], checkpoint_dir="./myExps/checkpoints/" + exp_name)
inf_emb, inf_label = eval_tensors
whole_embs = []
whole_labels = []
manifest = open(test_dataset, 'r').readlines()
for line in manifest:
line = line.strip()
dic = json.loads(line)
filename = dic['audio_filepath'].split('/')[-1]
whole_labels.append(filename)
for idx in range(len(inf_label)):
whole_embs.extend(inf_emb[idx].numpy())
if not os.path.exists(embedding_dir):
os.mkdir(embedding_dir)
filename = os.path.basename(test_dataset).split('.')[0]
name = embedding_dir + filename
np.save(name + '.npy', np.asarray(whole_embs))
np.save(name + '_labels.npy', np.asarray(whole_labels))
logging.info("Saved embedding files to {}".format(embedding_dir))
```
Among the files Kaldi needs are `utt2spk` \& `spk2utt`, used to build the ark file for PLDA training. To create the `spk2utt` file, copy the generated utt2spk file from the `data_dir` train folder and run:
`utt2spk_to_spk2utt.pl $data_dir/train/utt2spk > $embedding_dir/spk2utt`
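What the perl script does is invert the utterance-to-speaker mapping: every `utt spk` line is grouped under its speaker. A hedged Python equivalent of that transformation (our sketch, not the Kaldi script):

```python
from collections import defaultdict

def utt2spk_to_spk2utt(lines):
    """Invert 'utterance speaker' lines into 'speaker utt1 utt2 ...'
    lines, mirroring Kaldi's utt2spk_to_spk2utt.pl."""
    spk2utt = defaultdict(list)
    for line in lines:
        utt, spk = line.split()
        spk2utt[spk].append(utt)
    return ['{} {}'.format(spk, ' '.join(utts))
            for spk, utts in sorted(spk2utt.items())]

print(utt2spk_to_spk2utt(['SV0184_6_04 SV0184',
                          'SV0184_5_03 SV0184',
                          'SV0040_1_01 SV0040']))
```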
Then run the below python script to get EER score using PLDA backend scoring. This script does both data preparation for kaldi followed by PLDA scoring.
```
!python examples/speaker_recognition/kaldi_plda.py --root $embedding_dir --train_embs $embedding_dir/train.npy --train_labels $embedding_dir/train_labels.npy --eval_embs $embedding_dir/all_embs_himia.npy --eval_labels $embedding_dir/all_ids_himia.npy --stage=1
```
Here `--stage=1` trains the PLDA model; if you already have a trained PLDA model, you can evaluate directly with the `--stage=2` option.
This should output an EER of 6.32% with minDCF: 0.455
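The EER reported above is the operating point where the false-accept rate equals the false-reject rate. A simplified stdlib sketch of how such a number can be computed from target/non-target trial scores (a plain threshold sweep, not the project's scoring script):

```python
def compute_eer(target_scores, nontarget_scores):
    """Equal error rate: sweep a threshold over every observed score and
    return the error rate at the point where false-accept and
    false-reject rates are closest to each other."""
    best_gap, eer = float('inf'), None
    for threshold in sorted(target_scores + nontarget_scores):
        frr = sum(s < threshold for s in target_scores) / len(target_scores)
        far = sum(s >= threshold for s in nontarget_scores) / len(nontarget_scores)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Well-separated target/non-target scores give a low EER.
print(compute_eer([0.9, 0.8, 0.7, 0.2], [0.6, 0.3, 0.1, 0.1]))
```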
# Performance Improvement
To improve your embeddings' performance:
* Add more data and train longer (100 epochs)
* Try adding augmentation (see the config file)
* Use a larger model
* Train on several GPUs and use mixed precision (on NVIDIA Volta and Turing GPUs)
* Start with pre-trained checkpoints
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.patheffects as path_effects
import tabulate
# Set plotting style
plt.style.use('seaborn-white')
current_palette = sns.color_palette()
COLOR_MAP = {
"Male": current_palette[0],
"Female": current_palette[1],
}
%matplotlib inline
```
# Helper functions
```
def make_markdown_table(df, by, val="total_recurring_comp"):
frame = df[[by, val]].groupby([by]).agg({val: ["count", "median"]})
print(tabulate.tabulate(frame, tablefmt="pipe", headers="keys"))
def add_custom_legend(plt):
leg = plt.legend(
loc="upper right",
fontsize=32,
ncol=1,
frameon=1,
fancybox=True,
        # The commands below remove the lines in the legend
handlelength=0,
handletextpad=0,
markerscale=0,
)
# Turn on and theme the frame
frame = leg.get_frame()
frame.set_alpha(0)
# Set the legend text color to match the line color
ax = plt.gca()
handles, _ = ax.get_legend_handles_labels()
texts = leg.get_texts()
for _, text in zip(handles, texts):
text.set_color(COLOR_MAP[text.get_text()])
text.set_path_effects(
[
path_effects.Stroke(linewidth=1, foreground='black'),
path_effects.Normal(),
]
)
    plt.gcf().tight_layout()
def make_plot(df, x, y, file, hue=None, y_label="", x_label="", order=None, legend=False):
# Set plot size
fig = plt.gcf()
fig.set_size_inches(12, 8)
ax = sns.swarmplot(
data=df,
x=x,
y=y,
hue=hue,
linewidth=2,
order=order,
s=12,
dodge=True,
)
# Remove legend
try:
ax.get_legend().set_visible(False)
except AttributeError:
pass
# Remove spines
sns.despine(offset=10, trim=True)
# Use k instead of ,000 for thousands on y-axis
ax.yaxis.set_major_formatter(ticker.EngFormatter())
plt.ylabel(y_label, fontsize="xx-large")
plt.xlabel(x_label, fontsize="xx-large")
plt.tick_params(axis='both', which='major', labelsize=13)
# Make custom legend
if legend:
add_custom_legend(plt)
# Save to disk
for ext in ("png", "svg"):
fig.savefig("/tmp/{file}.{ext}".format(file=file, ext=ext), bbox_inches="tight", dpi=300)
return fig
```
# Data loading and cleaning
The data has already been cleaned in the spreadsheet.
```
CSV_FILE = "./insight_salary_survey_cleaned.csv"
df = pd.read_csv(
filepath_or_buffer=CSV_FILE,
parse_dates=[0],
)
# Fill NaNs in Salary columns with 0s
df = df.fillna({
"base_salary": 0,
"annual_bonus": 0,
"relocation_and_signing": 0,
"equity_or_stock_per_year": 0,
})
# Add a year column
df["year"] = df["timestamp"].dt.year
# Combine Northeast (Boston) with East Coast (NYC and DC)
# The distributions look identical anyway, and Boston has low N
#
# Also rename "West Coast" to "California" since it's just SF and LA, Seattle is Northwest
df["coarse_region"] = df["coarse_region"].replace({"East Coast": "Northeast", "West Coast": "California"})
# Add a Base + Bonus column
df["base_and_bonus"] = df["base_salary"] + df["annual_bonus"]
df.head(3)
df.groupby(["gender"]).count()
# Subset of the data: just DS and Male/Female
df = df[df["job_title"] == "Data Scientist"]
df = df[df["gender"].isin(["Male", "Female"])]
```
# Salary by gender
```
df_tmp = df
make_markdown_table(df_tmp, "gender")
order = df_tmp.groupby(["gender"])["total_recurring_comp"].median().sort_values(ascending=False).index.values
fig = make_plot(
df=df_tmp,
x="gender",
y="total_recurring_comp",
file="data_science_total_comp_gender",
y_label="Total Compensation",
order=["Male", "Female"],
)
```
# Salary by gender and location
```
df_tmp = df[df["coarse_region"].isin(["California", "Northeast"])]
frame = df_tmp[["gender", "total_recurring_comp", "coarse_region"]]\
.groupby(["gender", "coarse_region"])\
.agg({"total_recurring_comp": ["count", "median"]})
print(tabulate.tabulate(frame, tablefmt="pipe", headers="keys"))
order = df_tmp.groupby(["coarse_region"])["total_recurring_comp"].median().sort_values(ascending=False).index.values
fig = make_plot(
df=df_tmp,
x="coarse_region",
y="total_recurring_comp",
hue="gender",
file="data_science_total_comp_gender_and_location",
y_label="Total Compensation",
order=order,
legend=True,
)
```
# Salary by gender and Age
```
df["age_cat"] = pd.cut(df["age"], [0, 30, 35, 100])
df_tmp = df[df["coarse_region"].isin(["California", "Northeast"])]
frame = df_tmp[["gender", "total_recurring_comp", "age_cat"]]\
.groupby(["gender", "age_cat"])\
.agg({"total_recurring_comp": ["count", "median"]})
print(tabulate.tabulate(frame, tablefmt="pipe", headers="keys"))
order = df_tmp["age_cat"].unique().sort_values()[0:3]
fig = make_plot(
df=df_tmp,
x="age_cat",
y="total_recurring_comp",
hue="gender",
file="data_science_total_comp_gender_and_age",
y_label="Total Compensation",
x_label="Age Group",
order=order,
legend=True,
)
```
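The `pd.cut` call above assigns each age to one of the right-closed intervals (0, 30], (30, 35], (35, 100]. The same binning rule can be sketched with the standard library (hypothetical helper, valid for ages inside (0, 100]):

```python
from bisect import bisect_left

def age_bucket(age, edges=(0, 30, 35, 100)):
    """Return the right-closed (low, high] interval containing `age`,
    mirroring pandas.cut for ages inside (0, 100]."""
    i = bisect_left(edges, age)  # index of first edge >= age
    return (edges[i - 1], edges[i])

# age_bucket(30) → (0, 30): the right edge belongs to the lower bin,
# exactly as with pd.cut's default right-closed intervals.
print(age_bucket(30), age_bucket(31))
```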
##### Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Train your own Keyword Spotting Model.
[Open in Google Colab](https://colab.research.google.com/github/google-research/google-research/blob/master/speech_embedding/record_train.ipynb)
Before running any cells please enable GPUs for this notebook to speed it up.
* *Edit* → *Notebook Settings*
* select *GPU* from the *Hardware Accelerator* drop-down
```
#@title Imports
%tensorflow_version 1.x
from __future__ import division
import collections
import IPython
import functools
import math
import matplotlib.pyplot as plt
import numpy as np
import io
import os
import tensorflow as tf
import tensorflow_hub as hub
import random
import scipy.io.wavfile
import tarfile
import time
import sys
from google.colab import output
from google.colab import widgets
from base64 import b64decode
!pip install ffmpeg-python
import ffmpeg
#@title Helper functions and classes
def normalized_read(filename):
"""Reads and normalizes a wavfile."""
_, data = scipy.io.wavfile.read(open(filename, mode='rb'))
  samples_99_9_percentile = np.percentile(np.abs(data), 99.9)
  normalized_samples = data / samples_99_9_percentile
normalized_samples = np.clip(normalized_samples, -1, 1)
return normalized_samples
class EmbeddingDataFileList(object):
"""Container that loads audio, stores it as embeddings and can
rebalance it."""
def __init__(self, filelist,
data_dest_dir,
targets=None,
label_max=10000,
negative_label="negative",
negative_multiplier=25,
target_samples=32000,
progress_bar=None,
embedding_model=None):
"""Creates an instance of `EmbeddingDataFileList`."""
self._negative_label = negative_label
self._data_per_label = collections.defaultdict(list)
self._labelcounts = {}
self._label_list = targets
total_examples = sum([min(len(x), label_max) for x in filelist.values()])
total_examples -= min(len(filelist[negative_label]), label_max)
total_examples += min(len(filelist[negative_label]), negative_multiplier * label_max)
print("loading %d examples" % total_examples)
example_count = 0
for label in filelist:
if label not in self._label_list:
raise ValueError("Unknown label:", label)
label_files = filelist[label]
random.shuffle(label_files)
      if label == negative_label:
        multiplier = negative_multiplier
      else:
        multiplier = 1
      for wav_file in label_files[:label_max * multiplier]:
data = normalized_read(os.path.join(data_dest_dir, wav_file))
required_padding = target_samples - data.shape[0]
if required_padding > 0:
data = np.pad(data, (required_padding, required_padding), 'constant')
self._labelcounts[label] = self._labelcounts.get(label, 0) + 1
if embedding_model:
data = embedding_model.create_embedding(data)[0][0,:,:,:]
self._data_per_label[label].append(data)
if progress_bar is not None:
example_count += 1
progress_bar.update(progress(100 * example_count/total_examples))
@property
def labels(self):
return self._label_list
def get_label(self, idx):
return self.labels.index(idx)
def _get_filtered_data(self, label, filter_fn):
idx = self.labels.index(label)
return [(filter_fn(x), idx) for x in self._data_per_label[label]]
def _multply_data(self, data, factor):
samples = int((factor - math.floor(factor)) * len(data))
return int(factor) * data + random.sample(data, samples)
def full_rebalance(self, negatives, labeled):
"""Rebalances for a given ratio of labeled to negatives."""
negative_count = self._labelcounts[self._negative_label]
labeled_count = sum(self._labelcounts[key]
for key in self._labelcounts.keys()
if key != self._negative_label)
labeled_multiply = labeled * negative_count / (negatives * labeled_count)
for label in self._data_per_label:
if label == self._negative_label:
continue
self._data_per_label[label] = self._multply_data(
self._data_per_label[label], labeled_multiply)
self._labelcounts[label] = len(self._data_per_label[label])
def get_all_data_shuffled(self, filter_fn):
"""Returns a shuffled list containing all the data."""
return self.get_all_data(filter_fn, shuffled=True)
def get_all_data(self, filter_fn, shuffled=False):
"""Returns a list containing all the data."""
data = []
for label in self._data_per_label:
data += self._get_filtered_data(label, filter_fn)
if shuffled:
random.shuffle(data)
return data
def cut_middle_frame(embedding, num_frames, flatten):
"""Extrats the middle frames for an embedding."""
left_context = (embedding.shape[0] - num_frames) // 2
if flatten:
return embedding[left_context:left_context+num_frames].flatten()
else:
return embedding[left_context:left_context+num_frames]
def progress(value, maximum=100):
return IPython.display.HTML("""
<progress value='{value}' max='{max}' style='width: 80%'>{value}</progress>
""".format(value=value, max=maximum))
#@title HeadTrainerClass and head model functions
def _fully_connected_model_fn(embeddings, num_labels):
"""Builds the head model and adds a fully connected output layer."""
net = tf.layers.flatten(embeddings)
logits = tf.compat.v1.layers.dense(net, num_labels, activation=None)
return logits
framework = tf.contrib.framework
layers = tf.contrib.layers
def _conv_head_model_fn(embeddings, num_labels, context):
"""Builds the head model and adds a fully connected output layer."""
activation_fn = tf.nn.elu
normalizer_fn = functools.partial(
layers.batch_norm, scale=True, is_training=True)
with framework.arg_scope([layers.conv2d], biases_initializer=None,
activation_fn=None, stride=1, padding="SAME"):
net = embeddings
net = layers.conv2d(net, 96, [3, 1])
net = normalizer_fn(net)
net = activation_fn(net)
net = layers.max_pool2d(net, [2, 1], stride=[2, 1], padding="VALID")
context //= 2
net = layers.conv2d(net, 96, [3, 1])
net = normalizer_fn(net)
net = activation_fn(net)
net = layers.max_pool2d(net, [context, net.shape[2]], padding="VALID")
net = tf.layers.flatten(net)
logits = layers.fully_connected(
net, num_labels, activation_fn=None)
return logits
class HeadTrainer(object):
"""A tensorflow classifier to quickly train and test on embeddings.
Only use this if you are training a very small model on a very limited amount
of data. If you expect the training to take any more than 15 - 20 min then use
something else.
"""
def __init__(self, model_fn, input_shape, num_targets,
head_learning_rate=0.001, batch_size=64):
"""Creates a `HeadTrainer`.
Args:
model_fn: function that builds the tensorflow model, defines its loss
and returns the tuple (predictions, loss, accuracy).
input_shape: describes the shape of the models input feature.
        Does not include the batch dimension.
num_targets: Target number of keywords.
"""
self._input_shape = input_shape
self._output_dim = num_targets
self._batch_size = batch_size
self._graph = tf.Graph()
with self._graph.as_default():
self._feature = tf.placeholder(tf.float32, shape=([None] + input_shape))
self._labels = tf.placeholder(tf.int64, shape=(None))
module_spec = hub.create_module_spec(
module_fn=self._get_headmodule_fn(model_fn, num_targets))
self._module = hub.Module(module_spec, trainable=True)
logits = self._module(self._feature)
self._predictions = tf.nn.softmax(logits)
self._loss, self._accuracy = self._get_loss(
logits, self._labels, self._predictions)
self._update_weights = tf.train.AdamOptimizer(
learning_rate=head_learning_rate).minimize(self._loss)
self._sess = tf.Session(graph=self._graph)
with self._sess.as_default():
with self._graph.as_default():
self._sess.run(tf.local_variables_initializer())
self._sess.run(tf.global_variables_initializer())
def _get_headmodule_fn(self, model_fn, num_targets):
"""Wraps the model_fn in a tf hub module."""
def module_fn():
embeddings = tf.placeholder(
tf.float32, shape=([None] + self._input_shape))
logit = model_fn(embeddings, num_targets)
hub.add_signature(name='default', inputs=embeddings, outputs=logit)
return module_fn
def _get_loss(self, logits, labels, predictions):
"""Defines the model's loss and accuracy."""
xentropy_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels)
loss = tf.reduce_mean(xentropy_loss)
accuracy = tf.contrib.metrics.accuracy(tf.argmax(predictions, 1), labels)
return loss, accuracy
def save_head_model(self, save_directory):
"""Saves the model."""
with self._graph.as_default():
self._module.export(save_directory, self._sess)
def _feature_transform(self, batch_features, batch_labels):
"""Transforms lists of features and labels into into model inputs."""
return np.stack(batch_features), np.stack(batch_labels)
def _batch_data(self, data, batch_size=None):
"""Splits the input data into batches."""
batch_features = []
batch_labels = []
batch_size = batch_size or len(data)
for feature, label in data:
if feature.shape != tuple(self._input_shape):
raise ValueError(
"Feature shape ({}) doesn't match model shape ({})".format(
feature.shape, self._input_shape))
if not 0 <= label < self._output_dim:
raise ValueError('Label value ({}) outside of target range'.format(
label))
batch_features.append(feature)
batch_labels.append(label)
if len(batch_features) == batch_size:
yield self._feature_transform(batch_features, batch_labels)
del batch_features[:]
del batch_labels[:]
if batch_features:
yield self._feature_transform(batch_features, batch_labels)
def epoch_train(self, data, epochs=1, batch_size=None):
"""Trains the model on the provided data.
Args:
data: List of tuples (feature, label) where feature is a np array of
shape `self._input_shape` and label an int less than self._output_dim.
epochs: Number of times this data should be trained on.
batch_size: Number of feature, label pairs per batch. Overwrites
`self._batch_size` when set.
Returns:
tuple of accuracy, loss;
accuracy: Average training accuracy.
loss: Loss of the final batch.
"""
batch_size = batch_size or self._batch_size
accuracy_list = []
for _ in range(epochs):
for features, labels in self._batch_data(data, batch_size):
loss, accuracy, _ = self._sess.run(
[self._loss, self._accuracy, self._update_weights],
feed_dict={self._feature: features, self._labels: labels})
accuracy_list.append(accuracy)
return sum(accuracy_list)/len(accuracy_list), loss
def test(self, data, batch_size=None):
"""Evaluates the model on the provided data.
Args:
data: List of tuples (feature, label) where feature is a np array of
shape `self._input_shape` and label an int less than self._output_dim.
batch_size: Number of feature, label pairs per batch. Overwrites
`self._batch_size` when set.
Returns:
tuple of accuracy, loss;
accuracy: Average test accuracy.
loss: Loss of the final batch.
"""
batch_size = batch_size or self._batch_size
accuracy_list = []
for features, labels in self._batch_data(data, batch_size):
loss, accuracy = self._sess.run(
[self._loss, self._accuracy],
feed_dict={self._feature: features, self._labels: labels})
accuracy_list.append(accuracy)
return sum(accuracy_list)/len(accuracy_list), loss
def infer(self, example_feature):
"""Runs inference on example_feature."""
if example_feature.shape != tuple(self._input_shape):
raise ValueError(
"Feature shape ({}) doesn't match model shape ({})".format(
example_feature.shape, self._input_shape))
return self._sess.run(
self._predictions,
feed_dict={self._feature: np.expand_dims(example_feature, axis=0)})
#@title TfHubWrapper Class
class TfHubWrapper(object):
"""A loads a tf hub embedding model."""
def __init__(self, embedding_model_dir):
"""Creates a `SavedModelWraper`."""
self._graph = tf.Graph()
self._sess = tf.Session(graph=self._graph)
with self._graph.as_default():
with self._sess.as_default():
module_spec = hub.load_module_spec(embedding_model_dir)
embedding_module = hub.Module(module_spec)
self._samples = tf.placeholder(
tf.float32, shape=[1, None], name='audio_samples')
self._embedding = embedding_module(self._samples)
self._sess.run(tf.global_variables_initializer())
print("Embedding model loaded, embedding shape:", self._embedding.shape)
def create_embedding(self, samples):
samples = samples.reshape((1, -1))
output = self._sess.run(
[self._embedding],
feed_dict={self._samples: samples})
return output
#@title Define AudioClipRecorder Class
AUDIOCLIP_HTML ='''
<span style="font-size:30px">Recorded audio clips of {keyphrase}:</span>
<div id='target{keyphrase}'></div>
<span id = "status_label{keyphrase}" style="font-size:30px">
Ready to record.</span>
<button id='Add{keyphrase}Audio'>Record</button>
<script>
var recorder;
var base64data = 0;
function sleep(ms) {{
return new Promise(resolve => setTimeout(resolve, ms));
}}
var handleSuccess = function(stream) {{
recorder = new MediaRecorder(stream);
recorder.ondataavailable = function(e) {{
reader = new FileReader();
reader.readAsDataURL(e.data);
reader.onloadend = function() {{
base64data = reader.result;
}}
}};
recorder.start();
}};
document.querySelector('#Add{keyphrase}Audio').onclick = () => {{
var label = document.getElementById("status_label{keyphrase}");
navigator.mediaDevices.getUserMedia({{audio: true}}).then(handleSuccess);
label.innerHTML = "Recording ... please say {keyphrase}!".fontcolor("red");;
sleep({clip_length_ms}).then(() => {{
recorder.stop();
label.innerHTML = "Recording finished ... processing audio.";
sleep(1000).then(() => {{
google.colab.kernel.invokeFunction('notebook.AddAudioItem{keyphrase}',
[base64data.toString()], {{}});
label.innerHTML = "Ready to record.";
}});
}});
}};
</script>'''
class AudioClipRecorder:
"""Python class that creates a JS microphone clip recorder."""
def __init__(self, keyphrase="test", clip_length_ms=2100):
"""Creates an AudioClipRecorder instance.
When created this class prints an empty <div> tag into which the
recorded clips will be printed and a record audio button that uses
javascript to access the microphone and record an audio clip.
Args:
keyphrase: The name of the keyphrase that should be recorded.
This will be displayed in the recording prompt and used as a
directory name when the recordings are exported.
clip_length_ms: The length (in ms) of each recorded audio clip.
Due to the async nature of JavaScript, the actual amount of recorded
audio may vary by ~20-80 ms.
"""
self._counter = 0
self._keyphrase = keyphrase
self._audio_clips = {}
IPython.display.display(IPython.display.HTML(AUDIOCLIP_HTML.format(
keyphrase=keyphrase, clip_length_ms=clip_length_ms)))
output.register_callback('notebook.AddAudioItem' + keyphrase,
self.add_list_item)
output.register_callback('notebook.RemoveAudioItem' + keyphrase,
self.rm_audio)
def add_list_item(self, data):
"""Adds the recorded audio to the list of clips.
This function is called from javascript after clip_length_ms audio has
been recorded. It prints the recorded audio clip to the <div> together with
a button that allows for it to be deleted.
Args:
data: The recorded audio in webm format.
"""
raw_string_data = data.split(',')[1]
samples, rate = self.decode_webm(raw_string_data)
length_samples = len(samples)
with output.redirect_to_element('#target{keyphrase}'.format(
keyphrase=self._keyphrase)):
with output.use_tags('{keyphrase}_audio_{counter}'.format(
counter=self._counter, keyphrase=self._keyphrase)):
IPython.display.display(IPython.display.HTML('''Audio clip {counter} -
{length} samples -
<button id=\'delbutton{keyphrase}{counter}\'>del</button>
<script>
document.querySelector('#delbutton{keyphrase}{counter}').onclick = () => {{
google.colab.kernel.invokeFunction('notebook.RemoveAudioItem{keyphrase}', [{counter}], {{}});
}};
</script>'''.format(counter=self._counter, length=length_samples,
keyphrase=self._keyphrase)))
IPython.display.display(IPython.display.Audio(data=samples, rate=rate))
IPython.display.display(IPython.display.HTML('<br><br>'))
self._audio_clips[self._counter]=samples
self._counter+=1
def rm_audio(self, count):
"""Removes the audioclip 'count' from the list of clips."""
output.clear(output_tags="{0}_audio_{1}".format(self._keyphrase, count))
self._audio_clips.pop(count)
def decode_webm(self, data):
"""Decodes a webm audio clip in a np.array of samples."""
sample_rate=16000
process = (ffmpeg
.input('pipe:0')
.output('pipe:1', format='s16le', ar=sample_rate)
.run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True,
quiet=True, overwrite_output=True)
)
# Use a local name that doesn't shadow the colab `output` module.
out_bytes, _ = process.communicate(input=b64decode(data))
audio = np.frombuffer(out_bytes, dtype=np.int16).astype(np.float32)
return audio, sample_rate
def save_as_wav_files(self, base_output_dir,
file_prefix='recording_', file_suffix=''):
"""Exports all audio clips as wav files.
The wav files will be written to 'base_output_dir/self._keyphrase'
and named: file_prefix + str(clip_id) + file_suffix + '.wav'.
"""
if not os.path.exists(base_output_dir):
os.mkdir(base_output_dir)
keyphrase_output_dir = os.path.join(base_output_dir, self._keyphrase)
if not os.path.exists(keyphrase_output_dir):
os.mkdir(keyphrase_output_dir)
for clip_id in self._audio_clips:
filename = file_prefix + str(clip_id) + file_suffix + '.wav'
output_file = os.path.join(keyphrase_output_dir, filename)
print("Creating:", output_file)
scipy.io.wavfile.write(output_file, 16000, self._audio_clips[clip_id])
#@title Define AudioClipEval Class
class AudioClipEval(AudioClipRecorder):
def __init__(self, embedding_model, head_model, filter_fn, labels,
name="eval1", clip_length_ms=2100):
"""Creates an AudioClipEval instance.
When created this class prints an empty <div> tag into which the
recorded clips will be printed and a record audio button that uses
javascript to access the microphone and record an audio clip.
Args:
embedding_model: The embedding model.
head_model: The default head model.
filter_fn: function that prepared the input to the head model.
labels: List of head model target labels.
name: The name of this evaluation session. This will be displayed
in the recording prompt and used as a directory name when the
recordings are exported.
clip_length_ms: The length (in ms) of each recorded audio clip.
Due to the async nature of JavaScript, the actual amount of recorded
audio may vary by ~20-80 ms.
"""
self._counter = 0
self._keyphrase = name
keyphrase = name
self._audio_clips = {}
self._embedding_model = embedding_model
self._head_model = head_model
self._filter_fn = filter_fn
self._labels = labels
IPython.display.display(IPython.display.HTML(
AUDIOCLIP_HTML.format(keyphrase=keyphrase, clip_length_ms=clip_length_ms)))
output.register_callback('notebook.AddAudioItem' + keyphrase,
self.add_list_item)
output.register_callback('notebook.RemoveAudioItem' + keyphrase,
self.rm_audio)
def add_list_item(self, data):
"""Adds the recorded audio to the list of clips and classifies it.
This function is called from javascript after clip_length_ms audio has
been recorded. It prints the recorded audio clip to the <div> together with
a button that allows for it to be deleted.
Args:
data: The recorded audio in webm format.
"""
raw_string_data = data.split(',')[1]
samples, rate = self.decode_webm(raw_string_data)
length_samples = len(samples)
detection, confidence = self.eval_audio(samples)
with output.redirect_to_element('#target{keyphrase}'.format(
keyphrase=self._keyphrase)):
with output.use_tags('{keyphrase}_audio_{counter}'.format(
counter=self._counter, keyphrase=self._keyphrase)):
IPython.display.display(IPython.display.HTML('''Audio clip {counter} -
{length} samples -
<button id=\'delbutton{keyphrase}{counter}\'>del</button>
<script>
document.querySelector('#delbutton{keyphrase}{counter}').onclick = () => {{
google.colab.kernel.invokeFunction('notebook.RemoveAudioItem{keyphrase}', [{counter}], {{}});
}};
</script>'''.format(counter=self._counter, length=length_samples,
keyphrase=self._keyphrase)))
IPython.display.display(IPython.display.Audio(data=samples, rate=rate))
IPython.display.display(IPython.display.HTML(
'''<span id = "result{counter}" style="font-size:24px">
detected: {detection} ({confidence})<span>'''.format(
counter=self._counter, detection=detection,
confidence=confidence)))
IPython.display.display(IPython.display.HTML('<br><br>'))
self._audio_clips[self._counter]=samples
self._counter+=1
def eval_audio(self, samples, head_model=None):
"""Classifies the audio using the current or a provided model."""
embeddings = self._embedding_model.create_embedding(samples)[0][0,:,:,:]
if head_model:
probs = head_model.infer(self._filter_fn(embeddings))
else:
probs = self._head_model.infer(self._filter_fn(embeddings))
return self._labels[np.argmax(probs)], np.amax(probs)
def eval_on_new_model(self, head_model):
"""Reclassifies the clips using a new head model."""
for clip_id in self._audio_clips:
samples = self._audio_clips[clip_id]
length_samples = len(samples)
detection, confidence = self.eval_audio(samples, head_model=head_model)
IPython.display.display(IPython.display.HTML(
'''Audio clip {counter} - {length} samples -
<span id = "result{counter}" style="font-size:24px">
detected: {detection} ({confidence})<span>'''.format(
counter=clip_id, length=length_samples,
detection=detection, confidence=confidence)))
IPython.display.display(IPython.display.Audio(data=samples, rate=16000))
```
## Load the embedding model
The following info message can be ignored:
> *INFO:tensorflow:Saver not created because there are no variables in the graph to restore*
Don't worry, TF Hub is restoring all of the variables.
You can test the model by having it produce an embedding on zeros:
```
speech_embedding_model.create_embedding(np.zeros((1,66000)))
```
```
embedding_model_url = "https://tfhub.dev/google/speech_embedding/1"
speech_embedding_model = TfHubWrapper(embedding_model_url)
```
## Record training data or copy from google drive
The following cells allow you to define a set of target keyphrases and record some examples for training.
### Optional Google Drive access.
The recorded wav files can be uploaded to (and later downloaded from) your Google Drive using [PyDrive](https://googleworkspace.github.io/PyDrive/docs/build/html/index.html). When you run the *Set up Google drive access* cell, it will prompt you to log in and grant this colab permission to access your Google Drive. Only if you do this will you be able to run the other Google Drive cells.
```
#@title Optional: Set up Google drive access
!pip install PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
#@title Optional: Download and untar an archive from drive
filename = ''#@param {type:"string"}
#@markdown You can find the file_id by looking at its share-link.
#@markdown e.g. *1b9Lkfie2NHX-O06vPGrqzyGcGWUPul36*
file_id = ''#@param {type:"string"}
downloaded = drive.CreateFile({'id':file_id})
downloaded.GetContentFile(filename)
with tarfile.open(filename, 'r:gz') as data_tar_file:
for member_info in data_tar_file.getmembers():
print(member_info.name)
data_tar_file.extract(member_info)
#@title Setup recording session and define model targets
#@markdown Only use letters and _ for the **RECORDING_NAME** and **TARGET_WORDS**.
RECORDING_NAME = 'transportation' #@param {type:"string"}
target_word1 = 'hogwarts_express' #@param {type:"string"}
target_word2 = 'nautilus' #@param {type:"string"}
target_word3 = 'millennium_falcon' #@param {type:"string"}
target_word4 = 'enterprise' #@param {type:"string"}
target_word5 = '' #@param {type:"string"}
target_word6 = '' #@param {type:"string"}
clip_length_ms = 2100 #@param {type:"integer"}
#@markdown ### Microphone access
#@markdown Please connect the microphone that you want to use
#@markdown before running this cell. You may also be asked
#@markdown to grant colab permission to use it.
#@markdown If you have any problems, check your browser settings
#@markdown and rerun the cell.
target_words = [target_word1, target_word2, target_word3,
target_word4, target_word5, target_word6]
OWN_TARGET_WORDS = ','.join([w for w in target_words if w != ''])
OWN_MODEL_LABELS = ['negative', 'silence'] + OWN_TARGET_WORDS.split(',')
word_list = OWN_TARGET_WORDS.split(',')
t = widgets.TabBar(word_list)
clip_recorders = {}
for label in word_list:
with t.output_to(word_list.index(label)):
clip_recorders[label] = AudioClipRecorder(keyphrase=label,
clip_length_ms=clip_length_ms)
with t.output_to(0):
print()
#@title Create wav files from recording session.
session = 'recording1_'#@param {type:"string"}
speaker = '_spk1'#@param {type:"string"}
for label in clip_recorders:
clip_recorders[label].save_as_wav_files(base_output_dir=RECORDING_NAME,
file_prefix=session,
file_suffix=speaker)
#@title Load files for training.
all_train_example_files = collections.defaultdict(list)
for label in OWN_TARGET_WORDS.split(','):
label_dir = os.path.join(RECORDING_NAME, label)
all_label_files = [
os.path.join(label, f)
for f in os.listdir(label_dir)
if os.path.isfile(os.path.join(label_dir, f))
]
all_train_example_files[label].extend(all_label_files)
progress_bar = IPython.display.display(progress(0, 100), display_id=True)
print("loading train data")
train_data = EmbeddingDataFileList(
all_train_example_files, RECORDING_NAME,
targets=OWN_MODEL_LABELS, embedding_model=speech_embedding_model,
progress_bar=progress_bar)
#@title Optional: save recorded data to drive.
archive_name = RECORDING_NAME + "_" + str(int(time.time())) +".tar.gz"
def make_tarfile(output_filename, source_dir):
with tarfile.open(output_filename, "w:gz") as tar:
tar.add(source_dir, arcname=os.path.basename(source_dir))
make_tarfile(archive_name, RECORDING_NAME)
file1 = drive.CreateFile({'title': archive_name})
file1.SetContentFile(archive_name)
file1.Upload()
print('Saving to drive: %s, id: %s' % (file1['title'], file1['id']))
```
# Train a model on your recorded data
```
#@title Run training
#@markdown We assume that the keyphrase is spoken roughly in the middle
#@markdown of the loaded audio clips. With **context_size** we can choose the
#@markdown number of embeddings around the middle to use as a model input.
context_size = 16 #@param {type:"slider", min:1, max:28, step:1}
filter_fn = functools.partial(cut_middle_frame, num_frames=context_size, flatten=False)
all_train_data = train_data.get_all_data_shuffled(filter_fn=filter_fn)
all_eval_data = None
head_model = "Convolutional" #@param ["Convolutional", "Fully_Connected"] {type:"string"}
#@markdown Suggested **learning_rate** range: 0.00001 - 0.01.
learning_rate = 0.001 #@param {type:"number"}
batch_size = 32
#@markdown **epochs_per_eval** and **train_eval_loops** control how long the
#@markdown the model is trained. An epoch is defined as the model having seen
#@markdown each example at least once, with some examples twice to ensure the
#@markdown correct labeled / negatives balance.
epochs_per_eval = 1 #@param {type:"slider", min:1, max:15, step:1}
train_eval_loops = 30 #@param {type:"slider", min:5, max:80, step:5}
if head_model == "Convolutional":
model_fn = functools.partial(_conv_head_model_fn, context=context_size)
else:
model_fn = _fully_connected_model_fn
trainer = HeadTrainer(model_fn=model_fn,
input_shape=[context_size,1,96],
num_targets=len(OWN_MODEL_LABELS),
head_learning_rate=learning_rate,
batch_size=batch_size)
data_trained_on = 0
data = []
train_results = []
eval_results = []
max_data = len(all_train_data) * epochs_per_eval * train_eval_loops + 10
def plot_step(plot, max_data, data, train_results, eval_results):
plot.clf()
plot.xlim(0, max_data)
plot.ylim(0.85, 1.05)
plot.plot(data, train_results, "bo")
plot.plot(data, train_results, "b", label="train_results")
if eval_results:
plot.plot(data, eval_results, "ro")
plot.plot(data, eval_results, "r", label="eval_results")
plot.legend(loc='lower right', fontsize=24)
plot.xlabel('number of examples trained on', fontsize=22)
plot.ylabel('Accuracy', fontsize=22)
plot.xticks(fontsize=20)
plot.yticks(fontsize=20)
plt.figure(figsize=(25, 7))
for loop in range(train_eval_loops):
train_accuracy, loss = trainer.epoch_train(all_train_data,
epochs=epochs_per_eval)
train_results.append(train_accuracy)
if all_eval_data:
eval_accuracy, loss = trainer.test(all_eval_data)
eval_results.append(eval_accuracy)
else:
eval_results = None
data_trained_on += len(all_train_data) * epochs_per_eval
data.append(data_trained_on)
plot_step(plt, max_data, data, train_results, eval_results)
IPython.display.display(plt.gcf())
if all_eval_data:
print("Highest eval accuracy: %.2f percent." % (100 * max(eval_results)))
IPython.display.clear_output(wait=True)
if all_eval_data:
print("Highest eval accuracy: %.2f percent." % (100 * max(eval_results)))
#@title Test the model
clip_eval = AudioClipEval(speech_embedding_model, trainer, filter_fn, OWN_MODEL_LABELS)
#@title Rerun the test using a new head model (train a new head model first)
clip_eval.eval_on_new_model(trainer)
```
## FAQ
Q: **My model isn't very good?**
A: The head model is very small and depends a lot on the initialisation weights:
* This default setup doesn't have a negative class so it will always detect *something*.
* Try retraining it a couple of times.
* Reduce the learning rate a little bit.
* Add more training examples:
* At 1 - 5 examples per keyphrase the model probably won't be very good.
* With around 10-20 examples per keyphrase it may work reasonably well; however, it may still fail to learn a keyphrase.
* If you only have examples from a single speaker, then it may only learn how that speaker pronounces the keyphrase.
* Make sure your keyphrases are distinctive enough:
* e.g. heads up vs ketchup
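The "retrain it a couple of times" tip can be automated by keeping the best of several random restarts. A minimal sketch, with a toy stand-in for the actual training call (the `build_and_train` callable and its `(model, accuracy)` return value are assumptions, not part of this notebook):

```
def best_of_restarts(build_and_train, restarts=5):
    """Trains from scratch several times and keeps the best model.

    `build_and_train` is assumed to build a freshly initialized model,
    train it, and return a (model, accuracy) tuple.
    """
    best_model, best_acc = None, float("-inf")
    for _ in range(restarts):
        model, acc = build_and_train()
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model, best_acc

# Toy stand-in: three "training runs" with known accuracies.
runs = iter([(None, 0.62), (None, 0.91), (None, 0.74)])
model, acc = best_of_restarts(lambda: next(runs), restarts=3)
print(acc)  # 0.91
```

With this notebook's classes, `build_and_train` would wrap the `HeadTrainer` construction and the `epoch_train`/`test` loop from the training cell.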
Q: **Can I export the model and use it somewhere?**
A: Yes, there's some example code in the following cells that demonstrates how that could be done. However, this simple example model is only training a between-word classifier.
If you want to use it in any realistic setting, you will probably also want to add:
* A negative or non-target-word speech class: You could do this by recording 2-10 min of continuous speech that doesn't contain your target keyphrases.
* A non-speech / silence / background-noise class: The speech commands dataset contains some examples of non-speech background audio that could be used for this, and/or you could just leave your microphone on and record some ambient audio from the future deployment location.
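A hedged sketch of the negative-class idea above: cut one long ambient or continuous-speech recording into clip-sized chunks and label them all "negative". The function and its parameters are illustrative, not part of the notebook:

```
def split_into_clips(samples, clip_len, hop=None):
    """Splits a long 1-D recording into fixed-length clips.

    Tail samples that don't fill a whole clip are dropped. A `hop`
    smaller than `clip_len` gives overlapping clips, which yields
    more negative examples from the same recording.
    """
    hop = hop or clip_len
    return [samples[i:i + clip_len]
            for i in range(0, len(samples) - clip_len + 1, hop)]

# 5 s of fake ambient audio at 16 kHz, cut into 2 s clips with a 1 s hop.
fake_ambient = [0.0] * (5 * 16000)
clips = split_into_clips(fake_ambient, clip_len=2 * 16000, hop=16000)
print(len(clips))  # 4  (clips starting at 0 s, 1 s, 2 s and 3 s)
```

Each chunk could then be saved with `scipy.io.wavfile.write` and loaded for training alongside the recorded keyphrases.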
# Export and reuse the head model
The following cells show how the head model you just trained can be exported and reused in a graph.
```
#@title Save the head model
head_model_module_dir = "head_model_module_dir"
trainer.save_head_model(head_model_module_dir)
#@title FullModelWrapper - Example Class
class FullModelWrapper(object):
"""A loads a save model classifier."""
def __init__(self, embedding_model_dir, head_model_dir):
self._graph = tf.Graph()
self._sess = tf.Session(graph=self._graph)
with self._graph.as_default():
embedding_module_spec = hub.load_module_spec(embedding_model_dir)
embedding_module = hub.Module(embedding_module_spec)
head_module_spec = hub.load_module_spec(head_model_dir)
head_module = hub.Module(head_module_spec)
self._samples = tf.placeholder(
tf.float32, shape=[1, None], name='audio_samples')
embedding = embedding_module(self._samples)
logits = head_module(embedding)
self._predictions = tf.nn.softmax(logits)
with self._sess.as_default():
self._sess.run(tf.global_variables_initializer())
def infer(self, samples):
samples = samples.reshape((1, -1))
output = self._sess.run(
[self._predictions],
feed_dict={self._samples: samples})
return output
#@title Test the full model on zeros
full_model = FullModelWrapper(embedding_model_url, head_model_module_dir)
full_model.infer(np.zeros((1,32000)))
```
| github_jupyter |
# Load Accession Numbers Mappings
**[Work in progress]**
This notebook downloads and standardizes accession numbers from life science and biological databases text-mined from PubMed Central full-text articles by [Europe PMC](https://europepmc.org/) for ingestion into a Knowledge Graph.
Data source: [ftp site](ftp://ftp.ebi.ac.uk/pub/databases/pmc/TextMinedTerms/)
Author: Peter Rose (pwrose@ucsd.edu)
```
import os
import pandas as pd
import dateutil
from pathlib import Path
pd.options.display.max_rows = None # display all rows
pd.options.display.max_columns = None # display all columns
ftp = 'ftp://ftp.ebi.ac.uk/pub/databases/pmc/TextMinedTerms/'
NEO4J_HOME = Path(os.getenv('NEO4J_HOME'))
print(NEO4J_HOME)
```
## Assign unique identifiers for interoperability
A [CURIE](https://en.wikipedia.org/wiki/CURIE) (Compact URI) is a compact abbreviation for a Uniform Resource Identifier (URI). CURIEs consist of a registered prefix and an accession number (prefix:accession). They provide a namespace for identifiers to enable uniqueness of identifiers and interoperability among data resources.
[Identifiers.org](http://identifiers.org/) provides a registry and resolution service for life science CURIEs.
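The prefix:accession pattern can be captured in a small helper; this is just an illustration (the notebook itself builds these strings inline with pandas), including the version-stripping used for the RefSeq identifiers:

```
def to_curie(prefix, accession, strip_version=False):
    """Builds a CURIE string (prefix:accession).

    With strip_version=True, a trailing '.N' sequence version is removed,
    so the identifier maps to the latest version of the record.
    """
    if strip_version:
        accession = accession.split('.')[0]
    return f'{prefix}:{accession}'

print(to_curie('pmc', 'PMC7106123'))  # pmc:PMC7106123
print(to_curie('ncbiprotein', 'YP_009724390.1', strip_version=True))  # ncbiprotein:YP_009724390
```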
### NCBI Reference Sequences
**id**: CURIE: [pmc](https://registry.identifiers.org/registry/pmc) (PubMed Central, PMC)
**accession**: CURIE: [ncbiprotein](https://registry.identifiers.org/registry/ncbiprotein) (NCBI Reference Sequences, RefSeq)
```
df1 = pd.read_csv(ftp + "refseq.csv", dtype=str)
# Remove version number from refseq to match to the latest version
df1['id'] = 'pmc:' + df1['PMCID']
df1['accession'] = 'ncbiprotein:' + df1['refseq'].str.split('.', expand=True)[0]
df1 = df1[['id','accession']]
refseq = pd.read_csv(NEO4J_HOME / "import/01c-NCBIRefSeq.csv")
refseq = refseq[['genbank_id']]
refseq = refseq.drop_duplicates()
df1 = df1.merge(refseq, left_on="accession", right_on='genbank_id')
df1 = df1[['id','accession']]
df1.head()
```
### GISAID Genome Sequences
**id**: CURIE: [pmc](https://registry.identifiers.org/registry/pmc) (PubMed Central, PMC)
**accession**: URI: [https://www.gisaid.org/](https://www.gisaid.org/help/publish-with-gisaid-references) (Global Initiative on Sharing All Influenza Data, GISAID)
```
df2 = pd.read_csv(ftp + "gisaid.csv", dtype=str)
df2['id'] = 'pmc:' + df2['PMCID']
df2['accession'] = 'https://www.gisaid.org/' + df2['gisaid']
df2 = df2[['id','accession']]
df2.head()
nextstrain = pd.read_csv(NEO4J_HOME / "import/01b-Nextstrain.csv")
nextstrain['accession'] = nextstrain['id']
nextstrain = nextstrain[['accession']]
df2 = df2.merge(nextstrain, on="accession")
df2.dropna(inplace=True)
df2.head()
```
### UniProt (NOT USED YET)
**id**: CURIE: [pmc](https://registry.identifiers.org/registry/pmc) (PubMed Central, PMC)
**accession**: CURIE: [uniprot](https://registry.identifiers.org/registry/uniprot) (UniProt Knowledgebase, UniProtKB)
```
# df3 = pd.read_csv(ftp + "uniprot.csv", dtype=str)
# df3['id'] = 'pmc:' + df3['PMCID']
# df3['accession'] = 'uniprot:' + df3['uniprot']
# df3 = df3[['id','accession']]
# df3.head()
```
### Protein Data Bank (NOT USED YET)
**id**: CURIE: [pmc](https://registry.identifiers.org/registry/pmc) (PubMed Central, PMC)
**accession**: CURIE: [pdb](https://registry.identifiers.org/registry/pdb) (Protein Data Bank, PDB)
```
# df4 = pd.read_csv(ftp + "pdb.csv", dtype=str)
# df4['id'] = 'pmc:' + df4['PMCID']
# df4['accession'] = 'pdb:' + df4['pdb']
# df4 = df4[['id','accession']]
# df4.head()
```
### Digital Object Identifier (DOI) (NOT USED YET)
**id**: CURIE: [pmc](https://registry.identifiers.org/registry/pmc) (PubMed Central, PMC)
**accession**: CURIE: [doi](https://registry.identifiers.org/registry/doi) (Digital Object Identifier System, DOI)
```
#df5 = pd.read_csv(ftp + "doi.csv", dtype=str)
# df5['id'] = 'pmc:' + df5['PMCID']
# df5['accession'] = 'doi:' + df5['doi']
# df5 = df5[['id','accession']]
# df5.head()
```
### Save data for Knowledge Graph Import
```
df = pd.concat([df1, df2])
df = df.fillna('')
df = df.query("id != ''")
df = df.query("accession != ''")
print('Mappings:', df.count())
df.head()
df.to_csv(NEO4J_HOME / "import/01d-PMC-Accession.csv", index=False)
```
| github_jupyter |
```
import os
import subprocess
import pandas as pd
import time
import seaborn as sns
sns.set(style='whitegrid')
sns.set(rc={'figure.figsize':(11.7, 8.27)})
def run_experiment(exp_name: str, rps=100, duration_sec=5, handler='sleep50', silent=True) -> str:
dump_file = f'/tmp/{exp_name}.bin'
run_vegeta = f"echo 'GET http://localhost:8890/{handler}' | vegeta -cpus 1 attack -rate {rps} -duration {duration_sec}s -timeout 1s -name {exp_name} -workers 1 > {dump_file}"
p = subprocess.run(run_vegeta, shell=True)
if not silent:
print(p)
print('---'*10)
print_report = f"cat {dump_file} | vegeta report -reporter text > /tmp/res.txt"
subprocess.run(print_report, shell=True)
print(open('/tmp/res.txt').read())
def read_experiment(csv_path: str, warmup_time_sec = 0.2) -> pd.DataFrame:
names = ['unix_ts_ns', 'http_code', 'latency_ns', 'bytes_out', 'bytes_in', 'x', 'error', 'exp_name', 'y']
data = pd.read_csv(csv_path, header=None, names=names)
del data['error']
del data['x']
del data['y']
del data['bytes_out']
del data['bytes_in']
begin_ts_ns = data.unix_ts_ns.min()
exp_start_ts_ns = begin_ts_ns + int(warmup_time_sec * 1_000_000_000)
data = data[data.unix_ts_ns > exp_start_ts_ns]
data.reset_index(drop=True, inplace=True)
data['latency_ms'] = (data['latency_ns'] / 1_000_000).round(2)
del data['unix_ts_ns']
del data['latency_ns']
return data
def aggr_exp(exp_names: list) -> str:
out_csv = '/tmp/out.csv'
dump_files = ','.join([f'/tmp/{exp_name}.bin' for exp_name in exp_names])
dump_ext = f"vegeta dump -inputs {dump_files} -output {out_csv} -dumper csv"
p = subprocess.run(dump_ext, shell=True)
# print(p)
return out_csv
def plot_exp_data(exp_data):
ax = sns.boxplot(x='latency_ms', y='exp_name', data=exp_data)
def latency_table(exp_data, rps):
a = exp_data.groupby('exp_name').quantile([.95, .98, .99])
a = a.reset_index().pivot(index='exp_name', columns='level_1', values='latency_ms')
a = a.round(1)
df = a.reset_index()
#df['exp_name'], df['rps'] = df.exp_name.str.split('_rps_').str
df['rps'] = rps
df['rps'] = df['rps'].astype(int)
df.sort_values('rps', inplace=True)
df.reset_index(drop=True, inplace=True)
df = df[['rps','exp_name', 0.95, 0.98, 0.99]]
return df
def hand_experiment(name: str, rps=100, duration_sec=3, repeats=5, **kwargs):
exp_datas = []
latencies = []
for _ in range(repeats):
run_experiment(name, rps, duration_sec, **kwargs)
csv_file = aggr_exp([name])
exp_data = read_experiment(csv_file, warmup_time_sec=1)
exp_datas.append(exp_data)
df = latency_table(exp_data, rps)
latencies.append(df)
latencies = pd.concat(latencies)
return latencies, exp_datas
lats = []
repeats = 3
duration_sec = 10
rps = 1000
lat, _ = hand_experiment('tornado_oldstyle', rps=rps, duration_sec=duration_sec, repeats=repeats, handler='oldstyle50')
lats.append(lat)
time.sleep(0.5)
lat, _ = hand_experiment('tornado_async', rps=rps, duration_sec=duration_sec, repeats=repeats)
lats.append(lat)
pd.concat(lats)
rps = 1000
lat, _ = hand_experiment('aiohttp', rps=rps, duration_sec=duration_sec, repeats=repeats)
lats.append(lat)
time.sleep(0.5)
def results(lats: list) -> pd.DataFrame:
df = pd.concat(lats)
return df.groupby(['exp_name', 'rps']).mean().sort_values(0.98)
results(lats)
# mean cpu time of process
lat, _ = hand_experiment('tornado_hard_work', rps=rps, duration_sec=duration_sec, repeats=repeats, handler='hard_work')
lat
lats.append(lat)
lat, _ = hand_experiment('aiohttp_hard_work', rps=rps, duration_sec=duration_sec, repeats=repeats, handler='hard_work')
# lats.append(lat)
#results(lats)
lat
lat
```
| github_jupyter |
# Lecture 25: Beta-Gamma (bank-post office), order statistics, conditional expectation, two envelope paradox
## Stat 110, Prof. Joe Blitzstein, Harvard University
----
## Connecting the Gamma and Beta Distributions
Say you have to visit both the bank and the post office today. What can we say about the total time you have to wait in the two lines?
Let $X \sim \operatorname{Gamma}(a, \lambda)$ be the total time you wait in line at the bank, given that there are $a$ people in line in front of you, and the waiting times are i.i.d $\operatorname{Expo}(\lambda)$; recall the analogies of geometric $\rightarrow$ negative binomial, and of exponential $\rightarrow$ gamma. The waiting time in line at the bank for everyone individually is $\operatorname{Expo}(\lambda)$, and as the $a+1^{th}$ person, your time in line is sum of those $a$ $\operatorname{Expo}(\lambda)$ times.
Similarly, let $Y \sim \operatorname{Gamma}(b, \lambda)$ be the total time you wait in line at the post office, given that there are $b$ people in line in front of you.
Assume that $X, Y$ are independent.
### Questions
1. What is the distribution of $T = X + Y$?
1. Given $T = X + Y$ and $W = \frac{X}{X+Y}$, what is the joint distribution?
1. Are $T, W$ independent?
### What is the distribution of $T$?
We immediately know that the total time you spend waiting in the lines is
\begin{align}
T &= X + Y \\
&\sim \operatorname{Gamma}(a+b, \lambda)
\end{align}
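This additivity is easy to sanity-check by simulation; a quick sketch (the values of $a$, $b$, $\lambda$ are arbitrary choices):
```
import numpy as np

rng = np.random.default_rng(0)
a, b, lam = 3.0, 5.0, 2.0

# numpy parameterizes Gamma by shape and scale = 1/lambda
x = rng.gamma(shape=a, scale=1.0 / lam, size=200_000)
y = rng.gamma(shape=b, scale=1.0 / lam, size=200_000)
t = x + y

# Gamma(a+b, lambda) has mean (a+b)/lambda and variance (a+b)/lambda^2
mean_t, var_t = t.mean(), t.var()
```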
### What is the distribution of $T,W$?
Let $\lambda = 1$, to make the calculation simpler. We do not lose any generality, since we can scale by $\lambda$ later.
So we are looking for the joint PDF of $T,W$
\begin{align}
\text{joint PDF } f_{T,W}(t,w) &= f_{X,Y}(x,y) \, \left| \frac{\partial(x,y)}{\partial(t,w)} \right| \\
&= \frac{1}{\Gamma(a) \Gamma(b)} \, x^a \, e^{-x} \, y^b \, e^{-y} \, \frac{1}{xy} \, \left| \frac{\partial(x,y)}{\partial(t,w)} \right| \\\\
\\
\text{for the Jacobian, let } x + y &= t \\
\frac{x}{x+y} &= w \\
\\
\Rightarrow x &= tw \\
\\
1 - \frac{x}{x+y} &= 1 - w \\
\frac{x + y - x}{t} &= 1 - w \\
\\
\Rightarrow y &= t(1-w) \\\\
\\
\frac{\partial(x,y)}{\partial(t,w)} &=
\begin{vmatrix}
\frac{\partial x}{\partial t} & \frac{\partial x}{\partial w} \\
\frac{\partial y}{\partial t} & \frac{\partial y}{\partial w}
\end{vmatrix} \\
&=
\begin{vmatrix}
w & t \\
1-w & -t
\end{vmatrix} \\
&= -tw - t(1-w) \\
&= -t \\
\Rightarrow \left| \frac{\partial(x,y)}{\partial(t,w)} \right| &= t \quad \text{(since } t > 0 \text{)} \\\\
\\
\text{returning to PDF } f_{T,W}(t,w) &= \frac{1}{\Gamma(a) \Gamma(b)} \, x^a \, e^{-x} \, y^b \, e^{-y} \, \frac{1}{xy} \, \left| \frac{\partial(x,y)}{\partial(t,w)} \right| \\
&= \frac{1}{\Gamma(a) \Gamma(b)} \, (tw)^a \, e^{-(tw)} \, (t(1-w))^b \, e^{-t(1-w)} \, \frac{1}{tw \, t(1-w)} \, t \\
&= \frac{1}{\Gamma(a) \Gamma(b)} \, w^{a-1} \, (1-w)^{b-1} \,\, t^{a+b} \, e^{-t} \, \frac{1}{t} \\
&= \frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)} \, w^{a-1} \, (1-w)^{b-1} \,\, \frac{t^{a+b} \, e^{-t} \, \frac{1}{t}}{\Gamma(a+b)} &\quad \text{ multiplying and dividing by } \Gamma(a+b)
\end{align}
Since the joint PDF $f_{T,W}(t,w)$ factors into a function of $t$ alone times a function of $w$ alone, with $T \sim \operatorname{Gamma}(a+b, 1)$ and $W \sim \operatorname{Beta}(a,b)$, we have also answered the third question: _$T,W$ are independent_.
### Unexpected Discovery: Normalizing Constant for Beta
Now say we are interested in finding the marginal PDF for $W$
\begin{align}
f_{W}(w) &= \int_{0}^{\infty} f_{T,W}(t,w) dt \\
&= \int_{0}^{\infty} \frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)} \, w^{a-1} \, (1-w)^{b-1} \,\, \frac{t^{a+b} \, e^{-t} \, \frac{1}{t}}{\Gamma(a+b)} \, dt \\
&= \frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)} \, w^{a-1} \, (1-w)^{b-1} \, \int_{0}^{\infty} \frac{t^{a+b} \, e^{-t} \, \frac{1}{t}}{\Gamma(a+b)} \, dt\\
&= \frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)} \, w^{a-1} \, (1-w)^{b-1}
\end{align}
But notice that since marginal PDF $f_{W}(w)$ must integrate to 1, then $\frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)}$ is the normalizing constant for the Beta distribution! If this were not true, then $f_{W}(w)$ could not be a valid PDF.
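The constant can be checked numerically with a crude midpoint Riemann sum (pure Python; the values of $a$ and $b$ are arbitrary):
```
from math import gamma

a, b = 2.5, 4.0
n = 200_000

# Midpoint Riemann sum of w^(a-1) * (1-w)^(b-1) over (0, 1)
integral = sum(
    ((k + 0.5) / n) ** (a - 1) * (1 - (k + 0.5) / n) ** (b - 1)
    for k in range(n)
) / n

beta_const = gamma(a) * gamma(b) / gamma(a + b)
```
The sum matches $\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ to several decimal places.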
## Example Usage: Finding $\mathbb{E}(W), W \sim \operatorname{Beta}(a,b)$
There are two ways you could find $\mathbb{E}(W)$.
You could use LOTUS, where you would simply do:
\begin{align}
\mathbb{E}(W) &= \int \frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)} \, w^{a-1} \, (1-w)^{b-1} \, w \, dw \\
&= \int \frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)} \, w^{a} \, (1-w)^{b-1} \, dw \\
\end{align}
... and it is not hard to handle, since the integrand is again proportional to a $\operatorname{Beta}$ density.
Or, since we are continuing with $W = \frac{X}{X+Y}$, we have:
\begin{align}
\mathbb{E}(W) &= \mathbb{E}\left( \frac{X}{X+Y} \right) \\
&= \frac{\mathbb{E}(X)}{\mathbb{E}(X+Y)} \quad \text{ which is true, under certain conditions}
\end{align}
So why is $\mathbb{E}\left( \frac{X}{X+Y} \right) = \frac{\mathbb{E}(X)}{\mathbb{E}(X+Y)}$?
Facts
1. since $T$ is independent of $W$, $\frac{X}{X+Y}$ is independent of $X+Y$
2. since independence implies they are uncorrelated, $\frac{X}{X+Y}$ and $X+Y$ are therefore _uncorrelated_
3. by definition, uncorrelated means \begin{align}
\mathbb{E}(AB) - \mathbb{E}(A) \, \mathbb{E}(B) &= 0 \\
\mathbb{E}(AB) &= \mathbb{E}(A) \, \mathbb{E}(B) \\\\
\\
\mathbb{E} \left( \frac{X}{X+Y} \, (X+Y) \right) &= \mathbb{E}(\frac{X}{X+Y}) \, \mathbb{E}(X+Y) \\
\mathbb{E}(X) &= \mathbb{E}(\frac{X}{X+Y}) \, \mathbb{E}(X+Y) \\
\Rightarrow \mathbb{E}\left( \frac{X}{X+Y} \right) &= \frac{\mathbb{E}(X)}{\mathbb{E}(X+Y)} \\\\
\\
\therefore \mathbb{E}(W) &= \mathbb{E} \left( \frac{X}{X+Y} \right) \\
&= \frac{\mathbb{E}(X)}{\mathbb{E}(X+Y)} \\
&= \frac{a}{a+b}
\end{align}
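A quick simulation sketch confirming $\mathbb{E}(W) = \frac{a}{a+b}$, again taking $\lambda = 1$ and arbitrary $a$, $b$:
```
import numpy as np

rng = np.random.default_rng(1)
a, b = 3.0, 7.0

x = rng.gamma(shape=a, scale=1.0, size=200_000)
y = rng.gamma(shape=b, scale=1.0, size=200_000)
w = x / (x + y)

mean_w = w.mean()  # should be close to a / (a + b) = 0.3
```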
## Order Statistics
Let $X_1, X_2, \dots , X_n$ be i.i.d. The _order statistics_ are $X_{(1)} \le X_{(2)}\le \dots \le X_{(n)}$, where
\begin{align}
X_{(1)} &= min(X_1, X_2, \dots , X_n) \\
X_{(n)} &= max(X_1, X_2, \dots , X_n) \\
\\
\text{if } n \text{ is odd, } &\text{ median is } X_{( \frac{n+1}{2} )}
\end{align}
Other statistics include the quartiles, etc.
* Order statistics are difficult to work with, because they are dependent; knowing $X_{(1)}$ gives you information about $X_{(n)}$, for example
* In the discrete case, things are tricky since you need to consider what to do in case of ties
### Distribution of Order Statistics
Let $X_1, X_2, \dots , X_n$ be i.i.d. with PDF $f$ and CDF $F$. Find the CDF and PDF of _marginal_ $X_{(j)}$ (we focus in only on $j$).
#### CDF: $P(X_{(j)} \le x)$

Looking at the image above...
\begin{align}
\text{marginal CDF } P(X_{(j)} \le x) &= P(\text{at least } j \text{ of the } X_i \le x) \\
&= \sum_{k=j}^n \binom{n}{k} \, F(x)^k \, \left( 1-F(x) \right)^{n-k} \\
\end{align}
#### PDF: $f_{X_{(j)}}(x)$
Rather than taking the derivative of the CDF (and avoiding working with tedious sums), let's once again look at an image and think about this...

Imagine a tiny interval about $x$ which we call $dx$. If we multiply the PDF by an infinitesimally small interval, we can calculate the probability that the order statistic of interest $X_{(j)}$ is in this tiny interval.
* pick one of the $n$ statistics to land inside of $dx$
* the probability that an order statistic lands inside of the $dx$ area is $f(x)dx$
* there are $j-1$ to the left of $dx$
* the remaining $n-j$ are to the right of $dx$
\begin{align}
f_{X_{(j)}}(x) \, dx &= n \, \binom{n-1}{j-1} \left( f(x)dx \right) \, F(x)^{j-1} \, \left( 1-F(x) \right)^{n-j} \\
\\
\text{marginal PDF } f_{X_{(j)}}(x) &= n \, \binom{n-1}{j-1} \, F(x)^{j-1} \, \left( 1-F(x) \right)^{n-j} \, f(x) \\
\end{align}
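The CDF formula can be verified by simulation; a sketch for i.i.d. uniforms, where $F(x) = x$ (the choices of $n$, $j$, $x$ are arbitrary):
```
import random
from math import comb

random.seed(0)
n, j, x = 7, 3, 0.4
trials = 100_000

# Empirical P(X_(j) <= x) for i.i.d. Unif(0,1)
hits = sum(
    sorted(random.random() for _ in range(n))[j - 1] <= x
    for _ in range(trials)
)
empirical = hits / trials

# Closed form: sum over k = j..n of C(n,k) F(x)^k (1 - F(x))^(n-k)
closed_form = sum(comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(j, n + 1))
```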
## Example: $\mathbb{E}|U_1 - U_2|$
Let $U_1, U_2, \dots , U_n$ be i.i.d. $\operatorname{Unif}(0,1)$.
Then the corresponding marginal PDF $f_{U_{(j)}}(x)$ is
\begin{align}
f_{U_{(j)}}(x) &= n \, \binom{n-1}{j-1} \, x^{j-1} \, (1-x)^{n-j} \quad \text{for } 0 \le x \le 1 \\
\\
\Rightarrow U_{(j)} &\sim \operatorname{Beta}(j, n-j+1)
\end{align}
Recall an earlier discussion of $\mathbb{E}|U_1 - U_2| = \mathbb{E}\left( max(U_1,U_2) \right) - \mathbb{E}\left( min(U_1,U_2) \right)$

But since
\begin{align}
max(U_1,U_2) &= U_{(2)} \, \text{ so } n = 2, j = 2 \\
\mathbb{E}\left( max(U_1,U_2) \right) &= \mathbb{E} \left(\operatorname{Beta}(2, 2-2+1) \right) \\
&= \mathbb{E} \left( \operatorname{Beta}(2, 1) \right) \\
&= \frac{2}{2+1} \\
&= \frac{2}{3} \\
\\
min(U_1,U_2) &= U_{(1)} \, \text{ so } n = 2, j = 1 \\
\mathbb{E}\left( min(U_1,U_2) \right) &= \mathbb{E} \left(\operatorname{Beta}(1, 2-1+1) \right) \\
&= \mathbb{E} \left( \operatorname{Beta}(1,2) \right) \\
&= \frac{1}{2+1} \\
&= \frac{1}{3} \\
\\
\Rightarrow \mathbb{E}|U_1 - U_2| &= \frac{2}{3} - \frac{1}{3} \\
&= \boxed{\frac{1}{3}}
\end{align}
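A one-line simulation sketch agrees with this answer:
```
import random

random.seed(0)
trials = 200_000
mean_abs_diff = sum(
    abs(random.random() - random.random()) for _ in range(trials)
) / trials
# should be close to 1/3
```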
## Conditional Expectation
If you understand conditional probability, then you can extend that to _conditional expectation_.
\begin{align}
\text{consider } \mathbb{E}(X|A) &\quad \text{where } A \text{ is an event} \\\\
\\
\mathbb{E}(X) &= \mathbb{E}(X|A)P(A) + \mathbb{E}(X|A^{\complement})P(A^{\complement}) \\
\\
\mathbb{E}(X) &= \sum_{x} x \, P(X=x) &\text{where you expand } P(X=x) \text{ with LOTP }
\end{align}
We will go more into this next time...
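A tiny discrete check of this decomposition, using a fair die and the event $A = \{\text{roll is even}\}$ as a made-up example:
```
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]             # X = face value of a fair die
A = [x for x in outcomes if x % 2 == 0]   # event A: roll is even
Ac = [x for x in outcomes if x % 2 == 1]  # complement of A

p_A = Fraction(len(A), len(outcomes))
e_X = Fraction(sum(outcomes), len(outcomes))
e_X_given_A = Fraction(sum(A), len(A))
e_X_given_Ac = Fraction(sum(Ac), len(Ac))

# E(X|A)P(A) + E(X|A^c)P(A^c) should reproduce E(X) = 7/2
decomposed = e_X_given_A * p_A + e_X_given_Ac * (1 - p_A)
```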
## Two-envelope Paradox
Now consider this paradox before we leave off.

There are two envelopes with cash inside them. You do not know how much is inside, only that one envelope has twice as much as the other.
Let's say you open up one of the envelopes and find $100 inside.
_Should you switch?_
Well, the other envelope could contain either \$50 or \$200. The mean of those two amounts is \$125, so wouldn't that mean you should switch?
But then again, it doesn't matter that the envelope you opened contained 100: it could have been any amount $n$. So the other envelope could hold $\frac{n}{2}$ or $2n$, the average being $\frac{5n}{4}$, so you should switch. But then the same argument applies, so you should switch back. And then it applies again, so you should switch again... and again... ad infinitum, ad nauseam.
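A simulation sketch makes the resolution plausible: with a fixed pair of amounts, keeping and switching have the same average payoff (the amounts 100 and 200 are an arbitrary example):
```
import random

random.seed(0)
trials = 100_000
keep_total = switch_total = 0.0

for _ in range(trials):
    envelopes = [100.0, 200.0]
    random.shuffle(envelopes)  # you open envelopes[0] at random
    keep_total += envelopes[0]
    switch_total += envelopes[1]

keep_mean = keep_total / trials
switch_mean = switch_total / trials
```
Both averages come out near 150, so switching gains nothing.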
To be continued.
----
View [Lecture 25: Order Statistics and Conditional Expectation | Statistics 110](http://bit.ly/2oRCNDa) on YouTube.
| github_jupyter |
# MARATONA BEHIND THE CODE 2020
## DESAFIO 6 - ANAHUAC
### Introduction
In this challenge, you will use IBM tools such as Watson Studio (or Cloud Pak for Data) to build a Machine Learning model capable of predicting whether a student will continue or drop out of their course.
<hr>
## Installing Libs
```
!pip install scikit-learn --upgrade
!pip install xgboost --upgrade
```
<hr>
## Loading the .csv dataset from GitHub
```
import pandas as pd
!wget --no-check-certificate --content-disposition https://raw.githubusercontent.com/vanderlei-test/dataset2/master/datasets/ForTraining.csv
df_base_for_training = pd.read_csv(r'ForTraining.csv')
df_base_for_training.head()
```
Description: the first table shown above has 4 columns: 3 are features and the target, `Graduado`, which takes binary values {Si, No}.
You can, and should, use the additional data that is available to build your model. The following .csv files are provided:
```
!wget --no-check-certificate --content-disposition https://raw.githubusercontent.com/vanderlei-test/dataset2/master/datasets/OrdenMaterias.csv
df_orden_materias = pd.read_csv(r'OrdenMaterias.csv')
df_orden_materias.head()
!wget --no-check-certificate --content-disposition https://raw.githubusercontent.com/vanderlei-test/dataset2/master/datasets/TablaConexiones.csv
df_tabla_conexiones = pd.read_csv(r'TablaConexiones.csv')
df_tabla_conexiones.head()
!wget --no-check-certificate --content-disposition https://raw.githubusercontent.com/vanderlei-test/dataset2/master/datasets/TablaTareas.csv
df_tabla_tareas = pd.read_csv(r'TablaTareas.csv')
df_tabla_tareas.head()
```
Dataset overview:
Four tables are available to the participant, loaded into the DataFrames above:
**df_base_for_training**
- ``studentId``
- ``reducido``
- ``ciclo``
- ``Graduado`` --> THE TARGET VARIABLE FOR BINARY CLASSIFICATION!
**df_orden_materias**
- ``reducido``
- ``2017 - 03``
- ``2017 - 04``
- ``2017 - 05``
- ``2017 - 06``
- ``2017 - 07``
- ``2017 - 08``
- ``2018 - 01``
- ``2018 - 02``
- ``2018 - 03``
- ``2018 - 04``
- ``2018 - 05``
- ``2018 - 06``
- ``2018 - 07``
- ``2018 - 08``
- ``2019 - 01``
- ``2019 - 02``
- ``2019 - 03``
- ``2019 - 04``
- ``2019 - 05``
- ``2019 - 06``
- ``2019 - 07``
- ``2019 - 08``
- ``2020 - 01``
- ``2020 - 02``
- ``2020 - 03``
- ``2020 - 04``
- ``2020 - 05``
- ``2020 - 06``
**df_tabla_conexiones**
- ``studentId``
- ``ciclo``
- ``Dias_Conectado``
- ``Minutos_Promedio``
- ``Minutos_Total``
**df_tabla_tareas**
- ``studentId``
- ``ciclo``
- ``Calificacion_Promedio``
- ``Tareas_Puntuales``
- ``Tareas_No_Entregadas``
- ``Tareas_Retrasadas``
- ``Total_Tareas``
Note that the ``studentId`` variable appears in several tables.
You can combine/merge these datasets however you wish.
```
print("Columns in *df_base_for_training*:")
print(df_base_for_training.columns)
print("\nColumns in *df_orden_materias*:")
print(df_orden_materias.columns)
print("\nColumns in *df_tabla_conexiones*:")
print(df_tabla_conexiones.columns)
print("\nColumns in *df_tabla_tareas*:")
print(df_tabla_tareas.columns)
```
#### ATTENTION! The **target** column is ``Graduado``, present in the "df_base_for_training" DataFrame.
<hr>
## Merging DataFrames in Pandas
Official documentation for Pandas 1.1.0: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html
As an **example** of how to use Pandas, we will merge the information in the "df_base_for_training" and "df_tabla_tareas" tables using the ``studentId`` key.
You may edit the dataframes manually if you prefer, using Microsoft Excel or other tools. Remember to load the processed data into IBM Cloud Pak for Data so that you can train your model.
```
df_base_for_training.tail()
df_tabla_tareas.tail()
# The result of this cell will be the merge of the two previous dataframes,
# using the ``studentId`` and ``ciclo`` columns as keys.
df = pd.merge(
df_base_for_training, df_tabla_tareas, how='inner',
on=None, left_on=['studentId', 'ciclo'], right_on=['studentId', 'ciclo'],
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False,
validate=None
)
df.tail()
# Information about the columns of the merged dataset
df.info()
```
From the information above you can see that there are Null/NaN values in some of the columns.
For our model to train well, we need to handle these null values appropriately.
Choosing the best strategy is part of the challenge, but the next cell shows an **example** of how to do this processing using the *scikit-learn* library.
<hr>
## Preprocessing the dataset before training
### Dropping rows with NaN values
Using the Pandas DataFrame.dropna() method you can remove all rows that are undefined for the ``Graduado`` column.
Docs: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html
```
# Showing the dataset's missing values before the first transformation
print("Null values before the DropNA transformation: \n\n{}\n".format(df.isnull().sum(axis = 0)))
# Dropping every row with a NaN value in the ``Graduado`` column:
df2 = df.dropna(axis='index', how='any', subset=['Graduado'])
# Showing the dataset's missing values before the next transformation (SimpleImputer)
print("Null values before the SimpleImputer transformation: \n\n{}\n".format(df2.isnull().sum(axis = 0)))
```
### Handling NaN values with sklearn's SimpleImputer
For the remaining NaN values we will use, as an **example**, substitution with the constant 0.
You can choose whatever strategy you think is best for this :)
Docs: https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html?highlight=simpleimputer#sklearn.impute.SimpleImputer
```
from sklearn.impute import SimpleImputer
import numpy as np
# Creating a ``SimpleImputer`` object
impute_zeros = SimpleImputer(
    missing_values=np.nan,  # Missing values are of type ``np.nan`` (Pandas standard)
    strategy='constant',    # The chosen strategy is to replace with a constant
    fill_value=0,           # The constant used to replace missing values is an int64=0
    verbose=0,
    copy=True
)
# Showing the dataset's missing values before the SimpleImputer transformation
print("Null values before the SimpleImputer transformation: \n\n{}\n".format(df2.isnull().sum(axis = 0)))
# Applying the ``SimpleImputer`` transformation to the base dataset
# (note: we fit and transform ``df2``, the frame with the NaN targets already dropped)
impute_zeros.fit(X=df2)
# Rebuilding a new Pandas DataFrame
df = pd.DataFrame.from_records(
    data=impute_zeros.transform(
        X=df2
    ),  # The result of SimpleImputer.transform(<<pandas dataframe>>) is a list of lists
    columns=df2.columns  # The original columns must be preserved in this transformation
)
# Showing the dataset's missing values
print("Null values in the dataset after the SimpleImputer transformation: \n\n{}\n".format(df.isnull().sum(axis = 0)))
```
### Dropping unwanted columns
Below we **demonstrate** how to use the DataFrame.drop() method.
Docs: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html
```
df.tail()
df2 = df.drop(columns=['reducido'], inplace=False)
df2.tail()
```
### Handling categorical variables
As mentioned before, computers are not good with categorical variables.
While we understand categorical variables well, that is thanks to prior knowledge the computer does not have.
Most Machine Learning techniques and models work with a limited set of data types (typically binary).
Neural networks consume data and produce results in the 0..1 range, rarely going beyond that.
In short, the vast majority of machine learning algorithms accept input data ("training data") from which features are extracted.
Based on these features, a mathematical model is created, which is used to make a prediction or decision without being explicitly programmed for that task.
Given a dataset with categorical features, we will let the encoder find the unique values per feature and transform the data to binary using the one-hot encoding technique.
```
# One-hot-encoding the dataset's columns using the Pandas ``get_dummies`` method (demonstration)
df3 = pd.get_dummies(df2, columns=['ciclo'])
df3.tail()
```
<hr>
## Training a Decision Tree classifier
### Selecting the FEATURES and defining the TARGET variable
```
df3.columns
features = df3[
[
'studentId', 'Calificacion_Promedio', 'Tareas_Puntuales',
'Tareas_No_Entregadas', 'Tareas_Retrasadas', 'Total_Tareas',
'ciclo_2017 - 03', 'ciclo_2017 - 04', 'ciclo_2017 - 05',
'ciclo_2017 - 06', 'ciclo_2017 - 07', 'ciclo_2017 - 08',
'ciclo_2018 - 01', 'ciclo_2018 - 02', 'ciclo_2018 - 03',
'ciclo_2018 - 04', 'ciclo_2018 - 05', 'ciclo_2018 - 06',
'ciclo_2018 - 07', 'ciclo_2018 - 08', 'ciclo_2019 - 01',
'ciclo_2019 - 02', 'ciclo_2019 - 03', 'ciclo_2019 - 04',
'ciclo_2019 - 05', 'ciclo_2019 - 06', 'ciclo_2019 - 07',
'ciclo_2019 - 08'
]
]
target = df3['Graduado'] ## Do not change the target variable!
```
### Splitting our dataset into training and test sets
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=133)
```
### Training a ``DecisionTreeClassifier()`` model
```
# Method for creating decision-tree-based models
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier(max_depth=15).fit(X_train, y_train)
```
### Making predictions on the test sample
```
y_pred = dtc.predict(X_test)
print(y_pred)
```
### Analyze the model's quality through the confusion matrix
```
!pip install seaborn --upgrade
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
cf_matrix = confusion_matrix(y_test, y_pred)
group_names = ['True Negative', 'False Positive', 'False Negative', 'True Positive']  # sklearn's confusion_matrix flattens as TN, FP, FN, TP
group_counts = ['{0:0.0f}'.format(value) for value in cf_matrix.flatten()]
group_percentages = ['{0:.2%}'.format(value) for value in cf_matrix.flatten()/np.sum(cf_matrix)]
labels = [f'{v1}\n{v2}\n{v3}' for v1, v2, v3 in zip(group_names, group_counts, group_percentages)]
labels = np.asarray(labels).reshape(2,2)
accuracy = np.trace(cf_matrix) / float(np.sum(cf_matrix))
precision = cf_matrix[1,1] / sum(cf_matrix[:,1])
recall = cf_matrix[1,1] / sum(cf_matrix[1,:])
f1_score = 2*precision*recall / (precision + recall)
sns.heatmap(cf_matrix, annot=labels, fmt="")
stats_text = "\n\nAccuracy={:0.3f}\nPrecision={:0.3f}\nRecall={:0.3f}\nF1 Score={:0.3f}".format(accuracy, precision, recall, f1_score)
plt.ylabel('True label')
plt.xlabel('Predicted label' + stats_text)
```
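The hand-computed metrics above follow the standard formulas; a quick pure-Python check on a toy confusion matrix (the counts are made up):
```
# Toy 2x2 confusion matrix, rows = true label, columns = predicted label
cf = [[50, 10],  # TN, FP
      [5, 35]]   # FN, TP

total = sum(sum(row) for row in cf)
accuracy = (cf[0][0] + cf[1][1]) / total
precision = cf[1][1] / (cf[0][1] + cf[1][1])  # TP / (FP + TP)
recall = cf[1][1] / (cf[1][0] + cf[1][1])     # TP / (FN + TP)
f1 = 2 * precision * recall / (precision + recall)
```
Here F1 also equals $2TP / (2TP + FP + FN) = 70/85$, a useful cross-check.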
<hr>
## Scoring the data required to submit the solution
For the submission, you need to classify the following dataset. To do that, you must reproduce the same preprocessing steps so that the dataset has the same structure as the one you used to build your model. After classifying this dataframe, we expect you to deliver a csv file with the 2499 rows and a 'Graduado' column containing your prediction. **Do not change the order of the file to be predicted or delete rows.**
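One pitfall when reproducing the preprocessing: `pd.get_dummies` only creates columns for the `ciclo` values that actually occur in the prediction set, so its dummy columns may not match the training layout. A sketch of one way to align them with `DataFrame.reindex` (toy data; this is a suggestion, not part of the official challenge code):
```
import pandas as pd

train = pd.get_dummies(pd.DataFrame({'ciclo': ['2017 - 03', '2018 - 01']}), columns=['ciclo'])
pred = pd.get_dummies(pd.DataFrame({'ciclo': ['2018 - 01']}), columns=['ciclo'])

# Force the prediction frame into the training column layout,
# filling any cycle absent from the prediction data with 0
pred_aligned = pred.reindex(columns=train.columns, fill_value=0)
```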
```
!wget --no-check-certificate --content-disposition https://raw.githubusercontent.com/vanderlei-test/dataset2/master/for_submission/ToBePredicted.csv
df_to_be_predicted = pd.read_csv(r'ToBePredicted.csv')
df_to_be_predicted.tail()
# Merging the datasets
df = pd.merge(
df_to_be_predicted, df_tabla_tareas, how='inner',
on=None, left_on=['studentId', 'ciclo'], right_on=['studentId', 'ciclo'],
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False,
validate=None
)
# Dropping the 'reducido' column
df2 = df.drop(columns=['reducido'], inplace=False)
# One-hot-encoding the dataset's columns using the Pandas ``get_dummies`` method (demonstration)
df3 = pd.get_dummies(df2, columns=['ciclo'])
df3.tail()
```
Looking at the features declared below, we can see that the dataset to be scored is in the same format used to train our decision tree earlier.
```
features = df3[
    [
        'studentId', 'Calificacion_Promedio', 'Tareas_Puntuales',
        'Tareas_No_Entregadas', 'Tareas_Retrasadas', 'Total_Tareas',
        'ciclo_2017 - 03', 'ciclo_2017 - 04', 'ciclo_2017 - 05',
        'ciclo_2017 - 06', 'ciclo_2017 - 07', 'ciclo_2017 - 08',
        'ciclo_2018 - 01', 'ciclo_2018 - 02', 'ciclo_2018 - 03',
        'ciclo_2018 - 04', 'ciclo_2018 - 05', 'ciclo_2018 - 06',
        'ciclo_2018 - 07', 'ciclo_2018 - 08', 'ciclo_2019 - 01',
        'ciclo_2019 - 02', 'ciclo_2019 - 03', 'ciclo_2019 - 04',
        'ciclo_2019 - 05', 'ciclo_2019 - 06', 'ciclo_2019 - 07',
        'ciclo_2019 - 08'
    ]
]
target = df3['Graduado'] ## Do not change the target variable!
```
```
y_pred = dtc.predict(df3[
[
'studentId', 'Calificacion_Promedio', 'Tareas_Puntuales',
'Tareas_No_Entregadas', 'Tareas_Retrasadas', 'Total_Tareas',
'ciclo_2017 - 03', 'ciclo_2017 - 04', 'ciclo_2017 - 05',
'ciclo_2017 - 06', 'ciclo_2017 - 07', 'ciclo_2017 - 08',
'ciclo_2018 - 01', 'ciclo_2018 - 02', 'ciclo_2018 - 03',
'ciclo_2018 - 04', 'ciclo_2018 - 05', 'ciclo_2018 - 06',
'ciclo_2018 - 07', 'ciclo_2018 - 08', 'ciclo_2019 - 01',
'ciclo_2019 - 02', 'ciclo_2019 - 03', 'ciclo_2019 - 04',
'ciclo_2019 - 05', 'ciclo_2019 - 06', 'ciclo_2019 - 07',
'ciclo_2019 - 08'
]
])
print(y_pred)
```
### Saving the prediction results to a csv file
```
np.savetxt("results.csv", y_pred, delimiter=",", fmt='%s')
project.save_data(file_name="results.csv", data=pd.read_csv("results.csv", header=None).to_csv(header=["TARGET"], index=False))
```
<hr>
## Congratulations!
If everything ran without errors, you now have a binary classification model and can download your results to upload them as a csv!
To submit your solution, go to the page:
# https://anahuac.maratona.dev
| github_jupyter |
## Experiments approximating the posterior with diagonal Gaussians from noisy-Adam samples
We start by building the model and showing the basic inference procedure and the calculation of performance on the MNIST classification and outlier detection tasks. We then perform multiple runs of the model with different numbers of samples in the ensemble to compute performance statistics. Instead of SGLD we use an adaptation of Adam based on ideas from SGLD: we add noise to the parameter updates on the order of the adaptive learning rate.
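The exact update rule lives in the `inferences` module, which is not shown here; the following numpy sketch only illustrates the idea described above (an Adam step plus Gaussian noise on the order of the adaptive step size) and is not the actual `VariationalGaussNoisyAdam` implementation. All constants are assumptions:
```
import numpy as np

def noisy_adam_step(theta, grad, m, v, t, lr=0.005, b1=0.9, b2=0.999, eps=1e-8, rng=None):
    """One Adam update plus SGLD-style Gaussian noise scaled by the adaptive step."""
    if rng is None:
        rng = np.random.default_rng()
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)          # bias-corrected first moment
    v_hat = v / (1 - b2**t)          # bias-corrected second moment
    adaptive_lr = lr / (np.sqrt(v_hat) + eps)
    step = adaptive_lr * m_hat
    noise = rng.normal(size=theta.shape) * np.sqrt(2 * adaptive_lr)
    return theta - step + noise, m, v

theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
theta, m, v = noisy_adam_step(theta, np.ones(3), m, v, t=1, rng=np.random.default_rng(0))
```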
```
# Let's first setup the libraries, session and experimental data
import experiment
import inferences
import edward as ed
import tensorflow as tf
import numpy as np
import os
s = experiment.setup()
mnist, notmnist = experiment.get_data()
# Builds the model and approximation variables used for the model
y_, model_variables = experiment.get_model_3layer()
approx_variables = experiment.get_gauss_approximation_variables_3layer()
# Performs inference with our custom inference class
inference_dict = {model_variables[key]: val for key, val in approx_variables.items()}
inference = inferences.VariationalGaussNoisyAdam(inference_dict, data={y_: model_variables['y']})
n_iter=1000
inference.initialize(n_iter=n_iter, learning_rate=0.005, burn_in=0)
tf.global_variables_initializer().run()
for i in range(n_iter):
batch = mnist.train.next_batch(100)
info_dict = inference.update({model_variables['x']: batch[0],
model_variables['y']: batch[1]})
inference.print_progress(info_dict)
inference.finalize()
# Computes the accuracy of our model
accuracy, disagreement = experiment.get_metrics(model_variables, approx_variables, num_samples=10)
print(accuracy.eval({model_variables['x']: mnist.test.images, model_variables['y']: mnist.test.labels}))
print(disagreement.eval({model_variables['x']: mnist.test.images, model_variables['y']: mnist.test.labels}))
# Computes some statistics for the proposed outlier detection
outlier_stats = experiment.get_outlier_stats(model_variables, disagreement, mnist, notmnist)
print(outlier_stats)
print('TP/(FN+TP): {}'.format(float(outlier_stats['TP']) / (outlier_stats['TP'] + outlier_stats['FN'])))
print('FP/(FP+TN): {}'.format(float(outlier_stats['FP']) / (outlier_stats['FP'] + outlier_stats['TN'])))
```
The following cell performs multiple runs of this model with different number of samples within the ensemble to capture performance statistics. Results are saved in `SampledGauss_noisyAdam.csv`.
```
import pandas as pd
results = pd.DataFrame(columns=('run', 'samples', 'acc', 'TP', 'FN', 'TN', 'FP'))
for run in range(5):
    inference_dict = {model_variables[key]: val for key, val in approx_variables.items()}
inference = inferences.VariationalGaussNoisyAdam(inference_dict, data={y_: model_variables['y']})
n_iter=1000
inference.initialize(n_iter=n_iter, learning_rate=0.005, burn_in=0)
tf.global_variables_initializer().run()
for i in range(n_iter):
batch = mnist.train.next_batch(100)
info_dict = inference.update({model_variables['x']: batch[0],
model_variables['y']: batch[1]})
inference.print_progress(info_dict)
inference.finalize()
for num_samples in range(15):
accuracy, disagreement = experiment.get_metrics(model_variables, approx_variables,
num_samples=num_samples + 1)
acc = accuracy.eval({model_variables['x']: mnist.test.images, model_variables['y']: mnist.test.labels})
outlier_stats = experiment.get_outlier_stats(model_variables, disagreement, mnist, notmnist)
results.loc[len(results)] = [run, num_samples + 1, acc,
outlier_stats['TP'], outlier_stats['FN'],
outlier_stats['TN'], outlier_stats['FP']]
results.to_csv('SampledGauss_noisyAdam.csv', index=False)
```
| github_jupyter |
# Data Cleaning and Preprocessing for Sentiment Analysis
> Copyright 2019 Dave Fernandes. All Rights Reserved.
>
> Licensed under the Apache License, Version 2.0 (the "License");
> you may not use this file except in compliance with the License.
> You may obtain a copy of the License at
>
> http://www.apache.org/licenses/LICENSE-2.0
>
> Unless required by applicable law or agreed to in writing, software
> distributed under the License is distributed on an "AS IS" BASIS,
> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> See the License for the specific language governing permissions and
> limitations under the License.
Data files can be downloaded from: https://www.kaggle.com/snap/amazon-fine-food-reviews/version/2
```
import numpy as np
import pandas as pd
import tensorflow as tf
import os
import re
import datetime
INPUT_DIR = './data'
OUTPUT_DIR = './data/TFRecords'
TRAIN_REVIEW = 'train_review'
TRAIN_SUMMARY = 'train_summary'
TRAIN_SCORES = 'train_scores'
TEST_REVIEW = 'test_review'
TEST_SUMMARY = 'test_summary'
TEST_SCORES = 'test_scores'
def txt_path(filename):
return os.path.join(INPUT_DIR, filename + '.txt')
def rec_path(filename):
return os.path.join(OUTPUT_DIR, filename + '.tfrec')
```
### Load and clean review content
```
REVIEWS_CSV = './data/amazon-fine-food-reviews/Reviews.csv'
reviews = pd.read_csv(REVIEWS_CSV)
print('Initial count:', reviews.shape)
reviews.drop(['Id', 'ProfileName', 'Time'], axis=1, inplace=True)
reviews.dropna(axis=0, inplace=True)
print('Has all data:', reviews.shape)
reviews.drop_duplicates(subset=['ProductId', 'UserId'], keep='first', inplace=True)
reviews.drop(['ProductId', 'UserId'], axis=1, inplace=True)
print('No duplicates:', reviews.shape)
```
### Balance the scores
- Scores at the extremes should be equally represented.
- Somewhat lower counts for middle scores are OK.
```
balanced = None
for score in range(1, 6):
score_group = reviews[reviews['Score'] == score]
if score == 1:
balanced = score_group
max_count = balanced.shape[0]
else:
if score_group.shape[0] > max_count:
score_group = score_group.sample(max_count)
balanced = pd.concat([balanced, score_group], axis=0)
del reviews
print(balanced.groupby('Score').size())
```
### Create test and train sets
```
TEST_FRACTION = 0.2
shuffled = balanced.sample(frac=1, axis=0)
del balanced
n = int(shuffled.shape[0] * TEST_FRACTION)
test_frame = shuffled[0:n]
train_frame = shuffled[n:]  # [n:] keeps the last row; [n:-1] would silently drop it
del shuffled
print('Test:', test_frame.groupby('Score').size())
print('Train:', train_frame.groupby('Score').size())
# Save human-readable files
test_frame.to_csv('./data/test.csv', index=False)
train_frame.to_csv('./data/train.csv', index=False)
```
Save intermediate text files for processing into BERT feature vectors.
```
def write_column(column, file_path):
def clean_html(s):
clean_fn = re.compile('<.*?>')
return re.sub(clean_fn, '', s)
with open(file_path, 'w') as file:
text_list = column.apply(clean_html).values
for item in text_list:
file.write(item)
file.write('\n')
write_column(train_frame['Text'], txt_path(TRAIN_REVIEW))
write_column(train_frame['Summary'], txt_path(TRAIN_SUMMARY))
write_column(test_frame['Text'], txt_path(TEST_REVIEW))
write_column(test_frame['Summary'], txt_path(TEST_SUMMARY))
```
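The `clean_html` helper above relies on a non-greedy `<.*?>` pattern; a quick standalone check of its behavior:
```
import re

def clean_html(s):
    # Strip anything that looks like an HTML tag, non-greedily
    return re.sub(re.compile('<.*?>'), '', s)

cleaned = clean_html('Great <br />coffee, <b>really</b> fresh!')
```
Note that this simple regex would also remove any literal `<...>` text that is not markup, which is acceptable for this dataset.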
Save numerical columns in a TFRecord file.
```
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _float_vector_feature(values):
return tf.train.Feature(float_list=tf.train.FloatList(value=values))
def _float_feature(value):
return _float_vector_feature([value])
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _string_feature(value):
return _bytes_feature(value.encode('utf-8'))
def write_values(filename, data_frame):
with tf.python_io.TFRecordWriter(filename) as writer:
for index, row in data_frame.iterrows():
score = row['Score']
votes = row['HelpfulnessDenominator']
upvotes = row['HelpfulnessNumerator']
helpfulness = float(upvotes) / float(votes) if votes > 0 else 0.0
example = tf.train.Example(
features=tf.train.Features(
feature={
'score': _int64_feature(score),
'votes': _int64_feature(votes),
'helpfulness': _float_feature(helpfulness),
}))
writer.write(example.SerializeToString())
write_values(rec_path(TEST_SCORES), test_frame)
write_values(rec_path(TRAIN_SCORES), train_frame)
del test_frame
del train_frame
```
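The helpfulness ratio computed in `write_values` guards against rows with zero votes; a minimal standalone check of that logic:

```python
def helpfulness_ratio(upvotes, votes):
    # Mirrors the guard in write_values: zero-vote rows get helpfulness 0.0
    # instead of raising a division-by-zero error.
    return float(upvotes) / float(votes) if votes > 0 else 0.0

print(helpfulness_ratio(3, 4))  # 0.75
print(helpfulness_ratio(0, 0))  # 0.0
```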
- First download the BERT model from: https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
- Unzip this file into the same directory as the `extract_features.py` script.
- Either run the feature extractor from the cell below, or
- run it from the command line (you will need to repeat this for each of the four text files):
```
python extract_features.py \
--input_file=./data/train_text.txt \
--output_file=./data/train_text.tfrec \
--bert_model_dir=./uncased_L-12_H-768_A-12
```
- For running on a TPU, your files should be in Google Cloud Storage (`gs://my_bucket/filename`).
- And, add the following arguments to the above command:
```
--use_one_hot_embeddings=True
--tpu_name=<my_TPU_name>
--gcp_zone=<us-central1-b>
--gcp_project=<my_project_name>
```
- Finally, for the review files, allow longer text sequences to be processed (the summary files can keep the default `--max_seq_length` of 128):
```
--max_seq_length=512
```
This takes about 1 hour on an 8-core TPU. It will take a lot longer on GPU or CPU.
```
from extract_features import extract
MODEL_DIR = './uncased_L-12_H-768_A-12'
extract(input_file=txt_path(TEST_REVIEW), output_file=rec_path(TEST_REVIEW), bert_model_dir=MODEL_DIR, max_seq_length=512)
extract(input_file=txt_path(TEST_SUMMARY), output_file=rec_path(TEST_SUMMARY), bert_model_dir=MODEL_DIR)
extract(input_file=txt_path(TRAIN_REVIEW), output_file=rec_path(TRAIN_REVIEW), bert_model_dir=MODEL_DIR, max_seq_length=512)
extract(input_file=txt_path(TRAIN_SUMMARY), output_file=rec_path(TRAIN_SUMMARY), bert_model_dir=MODEL_DIR)
```
## Next
Run the `Regression.ipynb` notebook next...
```
import os
import json
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use("seaborn")
DATA_ROOT = '../../results'
data = {env: {} for env in os.listdir(DATA_ROOT)}
for env in data:
for training in os.listdir(f'{DATA_ROOT}/{env}'):
if training.endswith('.csv'):
print(env, training)
df = pd.read_csv(f'{DATA_ROOT}/{env}/{training}')
df['hist_stats/episode_reward'] = df['hist_stats/episode_reward'].apply(lambda r: json.loads(r))
            if 'std' not in df.columns:
df['std'] = df['hist_stats/episode_reward'].apply(np.std)
data[env][training[:-4]] = df
def plot_reward(df, title='', color='blue', save=False):
    """
    Plot mean episode reward over timesteps with a ±1 std band.
    """
if title == 'humanoid-td3':
df['episode_reward_mean'] = df['episode_reward_mean'] * 0.67
df['std'] = df['std']*0.67
df['low_std'] = df['episode_reward_mean'] - df['std']
df['high_std'] = df['episode_reward_mean'] + df['std']
plt.plot(df['timesteps_total'], df['episode_reward_mean'], c=color)
plt.fill_between(df['timesteps_total'], df['low_std'], df['high_std'], alpha=.5, color=color)
plt.xlabel('timestep')
plt.ylabel('reward')
plt.title(title)
if save:
fig = plt.gcf()
fig.savefig('./plots/' + title + '.png')
plt.show()
def plot_time(df, title='', color='blue', save=False):
    """
    Plot cumulative training time (seconds) over timesteps.
    """
plt.plot(df['timesteps_total'], df['time_total_s'], c=color)
plt.xlabel('timestep')
plt.ylabel('time (s)')
plt.title(title)
if save:
fig = plt.gcf()
        fig.savefig((title or 'untitled') + '.png')  # parenthesized: 'or' binds looser than '+'
plt.show()
PALETTE = sns.color_palette()
def algorithm_color(algorithm):
    if algorithm == 'dqn':
        return PALETTE[0]
    if algorithm == 'ppo':
        return PALETTE[1]
    if 'sac' in algorithm:
        return PALETTE[4]
    return PALETTE[2]  # fallback for algorithms without an explicit color (e.g. td3)
for env, trainings in data.items():
for algorithm, training in trainings.items():
title = env+'-'+algorithm
color = algorithm_color(algorithm)
plot_reward(training, title, color, save=True)
print(title)
print(training.episode_reward_mean.max())
print(training.timesteps_total.max())
#plot_time(training, algorithm, save=True)
import os
from utils import Training
import matplotlib.pyplot as plt
data_path = '../../results/humanoid/ppo-hyp'
for training_dir in os.listdir(data_path):
training = Training(data_path + '/' + training_dir)
training.progress.df.plot(x='timesteps_total', y='info/learner/default_policy/kl', title=training_dir)
plt.show()
import pandas as pd
from io import StringIO
root = '../../results/humanoid/'
for alg in ('td3', 'sac', 'ppo'):
filename = root + alg + '-time.csv'
df = pd.read_csv(filename)
df = df[['num_workers', 'num_gpus', 'num_cpus_per_worker', 'time_this_iter_s']]
df['speedup'] = df['time_this_iter_s'].apply(lambda t: max(df['time_this_iter_s'])/t)
print(df.round(2).to_latex(index=False))
```
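The `speedup` column above divides the slowest configuration's iteration time by each row's time, so the slowest row gets speedup 1.0. A quick check on synthetic timings (the real values come from the `*-time.csv` files):

```python
import pandas as pd

# Synthetic iteration times standing in for time_this_iter_s.
times = pd.Series([100.0, 50.0, 25.0])
# Same formula as above: slowest time divided by each row's time.
speedup = times.apply(lambda t: times.max() / t)
print(speedup.tolist())  # [1.0, 2.0, 4.0]
```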
```
from cache import cache
from collections import defaultdict
from decimal import Decimal
import httpx
import json
import pandas as pd
from pprint import pprint
import sys
from tabulate import tabulate
import time
from web3 import Web3
from IPython.display import display, HTML
# Gas Price - we could use web3 but this is fine
GAS_URL = 'https://api.debank.com/chain/gas_price_dict_v2?chain=eth'
c = cache.get(GAS_URL, refresh=True)
gas = json.loads(c['value'])
gas_price = gas['data']['normal']['price']/10**18
# ETH price
ETH_PRICE_URL = 'https://api.coingecko.com/api/v3/simple/price?vs_currencies=usd&ids=ethereum'
c = cache.get(ETH_PRICE_URL, refresh=True)
eth = json.loads(c['value'])
price_eth = eth['ethereum']['usd']
print(f'ETH Price: ${price_eth:,.2f}')
print(f'Gwei: {gas_price*10**9:,.1f}')
def print_cost(title, data):
# Calculate gas cost
total_eth = 0.0
total_usd = 0.0
for d in data:
cost_eth = d['gas'] * gas_price
total_eth += cost_eth
cost_usd = cost_eth * price_eth
total_usd += cost_usd
d['cost_eth'] = cost_eth
d['cost_usd'] = cost_usd
# Add Total
data.append({
'description': 'TOTAL',
'method': None,
'gas': None,
'contract': None,
'cost_eth': total_eth,
'cost_usd': total_usd,
})
# Contracts
contracts = {
'0x3432b6a60d23ca0dfca7761b7ab56459d9c964d0': 'FXS',
'0xc8418af6358ffdda74e09ca9cc3fe03ca6adc5b0': 'veFXS',
'0xc6764e58b36e26b08Fd1d2AeD4538c02171fA872': 'veFXSYieldDistributorV4',
'0x3669C421b77340B2979d1A00a792CC2ee0FcE737': 'FraxGaugeControllerV2',
'0x3EF26504dbc8Dd7B7aa3E97Bc9f3813a9FC0B4B0': 'FraxFarm_UniV3_veFXS_FRAX_USDC',
'0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48': 'USDC',
'0xc36442b4a4522e871399cd717abdd847ab11fe88': 'Uniswap V3 (UNI-V3-POS)',
}
for d in data:
if d['contract'] in contracts:
address = d['contract']
name = contracts[address]
d['contract'] = f'<a href="https://etherscan.io/address/{address}">{name}</a>'
# Print a nice table
display(HTML(f'<b>{title}</b>'))
colalign = ('left', 'left', 'left', 'right', 'right', 'right')
floatfmt = (None, None, None, None, ".5f", ".2f")
display(tabulate(data, headers='keys', tablefmt='unsafehtml', colalign=colalign, floatfmt=floatfmt))
# veFXS Staking
title = 'Staking veFXS (First Time)'
data = [
{
'description': 'Approve FXS',
'contract': '0x3432b6a60d23ca0dfca7761b7ab56459d9c964d0',
'method': 'approve',
'gas': 46677
},
{
'description': 'Lock Tokens',
'contract': '0xc8418af6358ffdda74e09ca9cc3fe03ca6adc5b0',
'method': 'create_lock',
'gas': 442770
},
{
'description': 'Checkpoint',
'contract': '0xc6764e58b36e26b08Fd1d2AeD4538c02171fA872',
'method': 'checkpoint',
'gas': 200000
},
]
print_cost(title, data)
# Claiming veFXS Rewards
title = 'Claiming veFXS Rewards'
data = [
{
'description': 'Claim Rewards',
'contract': '0xc6764e58b36e26b08Fd1d2AeD4538c02171fA872',
'method': 'getYield',
'gas': 230000
},
]
print_cost(title, data)
# Voting for a Gauge
title = 'Voting for a Gauge (single vote; 2 calls to re-assign)'
data = [
{
'description': 'Gauge Voting',
'contract': '0x3669C421b77340B2979d1A00a792CC2ee0FcE737',
'method': 'vote_for_gauge_weights',
'gas': 250000
},
]
print_cost(title, data)
# Entering a Gauge Farm Example
title = 'Entering UniV3 Pool and Locking (UniV3 FRAX/USDC)'
data = [
{
'description': 'Approve USDC',
'contract': '0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48',
        'method': 'approve',
'gas': 60311
},
{
'description': 'Approve FXS',
'contract': '0x3432b6a60d23ca0dfca7761b7ab56459d9c964d0',
'method': 'approve',
'gas': 46677
},
{
'description': 'Mint UniV3 Position',
'contract': '0xc36442b4a4522e871399cd717abdd847ab11fe88',
'method': 'mint',
'gas': 400000
},
{
'description': 'Approve UniV3 Pool to Stake (one time)',
'contract': '0xc36442b4a4522e871399cd717abdd847ab11fe88',
'method': 'approve',
'gas': 53838
},
{
'description': 'Stake Lock in Gauge',
'contract': '0x3EF26504dbc8Dd7B7aa3E97Bc9f3813a9FC0B4B0',
'method': 'stakeLocked',
'gas': 500000
},
]
print_cost(title, data)
# Claiming from Gauge Farm
title = 'Claiming from Gauge Farm (UniV3 FRAX/USDC; single lock)'
data = [
{
'description': 'Claim Rewards',
'contract': '0x3EF26504dbc8Dd7B7aa3E97Bc9f3813a9FC0B4B0',
        'method': 'getReward',
'gas': 400000
},
]
print_cost(title, data)
```
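The conversion used throughout `print_cost` is: cost in ETH = gas × gas price (expressed in ETH per gas unit), and USD cost = ETH cost × ETH price. A worked check with illustrative numbers (50 gwei and $2,000/ETH are assumptions, not live data):

```python
# Illustrative inputs, not live data.
gas_used = 46677          # e.g. the ERC-20 approve above
gas_price_eth = 50e-9     # 50 gwei expressed in ETH per gas unit
price_eth_usd = 2000.0

cost_eth = gas_used * gas_price_eth        # same formula as in print_cost
cost_usd = cost_eth * price_eth_usd
print(round(cost_eth, 6), round(cost_usd, 2))  # 0.002334 4.67
```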
# Governance
Snapshot voting is gasless (FREE!)
Vote here if you have veFXS: https://snapshot.org/#/frax.eth/
Forums are also free:
https://gov.frax.finance/
<a id="title"></a>
<a id="toc"></a>

<div style="margin-top: 9px; background-color: #efefef; padding-top:10px; padding-bottom:10px;margin-bottom: 9px;box-shadow: 5px 5px 5px 0px rgba(87, 87, 87, 0.2);">
<center>
<h2>Table of Contents</h2>
</center>
<ol>
<li><a href="#01" style="color: #37509b;">Initialization</a></li>
<li><a href="#02" style="color: #37509b;">Dataset: Cleaning and Exploration</a></li>
<li><a href="#03" style="color: #37509b;">Modelling</a></li>
<li><a href="#04" style="color: #37509b;">Quarta Seção</a></li>
<li><a href="#05" style="color: #37509b;">Quinta Seção </a></li>
</ol>
</div>
<a id="01" style="
background-color: #37509b;
border: none;
color: white;
padding: 2px 10px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 10px;" href="#toc">TOC ↻</a>
<div style="margin-top: 9px; background-color: #efefef; padding-top:10px; padding-bottom:10px;margin-bottom: 9px;box-shadow: 5px 5px 5px 0px rgba(87, 87, 87, 0.2);">
<center>
<h1>1. Initialization</h1>
</center>
<ol type="i">
<!-- <li><a href="#0101" style="color: #37509b;">Inicialização</a></li>
<li><a href="#0102" style="color: #37509b;">Pacotes</a></li>
<li><a href="#0103" style="color: #37509b;">Funcoes</a></li>
<li><a href="#0104" style="color: #37509b;">Dados de Indicadores Sociais</a></li>
<li><a href="#0105" style="color: #37509b;">Dados de COVID-19</a></li>
-->
</ol>
</div>
<a id="0101"></a>
<h2>1.1 Description <a href="#01"
style="
border-radius: 10px;
background-color: #f1f1f1;
border: none;
color: #37509b;
text-align: center;
text-decoration: none;
display: inline-block;
padding: 4px 4px;
font-size: 14px;
">↻</a></h2>
Dataset available in:
<a href="https://www.kaggle.com/c/titanic/" target="_blank">https://www.kaggle.com/c/titanic/</a>
### Features
<table>
<tbody>
<tr><th><b>Variable</b></th><th><b>Definition</b></th><th><b>Key</b></th></tr>
<tr>
<td>survival</td>
<td>Survival</td>
<td>0 = No, 1 = Yes</td>
</tr>
<tr>
<td>pclass</td>
<td>Ticket class</td>
<td>1 = 1st, 2 = 2nd, 3 = 3rd</td>
</tr>
<tr>
<td>sex</td>
<td>Sex</td>
<td></td>
</tr>
<tr>
<td>Age</td>
<td>Age in years</td>
<td></td>
</tr>
<tr>
<td>sibsp</td>
<td># of siblings / spouses aboard the Titanic</td>
<td></td>
</tr>
<tr>
<td>parch</td>
<td># of parents / children aboard the Titanic</td>
<td></td>
</tr>
<tr>
<td>ticket</td>
<td>Ticket number</td>
<td></td>
</tr>
<tr>
<td>fare</td>
<td>Passenger fare</td>
<td></td>
</tr>
<tr>
<td>cabin</td>
<td>Cabin number</td>
<td></td>
</tr>
<tr>
<td>embarked</td>
<td>Port of Embarkation</td>
<td>C = Cherbourg, Q = Queenstown, S = Southampton</td>
</tr>
</tbody>
</table>
<a id="0102"></a>
<h2>1.2 Packages <a href="#01"
style="
border-radius: 10px;
background-color: #f1f1f1;
border: none;
color: #37509b;
text-align: center;
text-decoration: none;
display: inline-block;
padding: 4px 4px;
font-size: 14px;
">↻</a></h2>
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm import tqdm
from time import time,sleep
import nltk
from nltk import tokenize
from string import punctuation
from nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer
from unidecode import unidecode
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score,f1_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_validate,KFold,GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OrdinalEncoder,OneHotEncoder, LabelEncoder
from sklearn.preprocessing import StandardScaler,Normalizer
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
```
<a id="0103"></a>
<h2>1.3 Settings <a href="#01"
style="
border-radius: 10px;
background-color: #f1f1f1;
border: none;
color: #37509b;
text-align: center;
text-decoration: none;
display: inline-block;
padding: 4px 4px;
font-size: 14px;
">↻</a></h2>
```
# pandas options
pd.options.display.max_columns = 30
pd.options.display.float_format = '{:.2f}'.format
# seaborn options
sns.set(style="darkgrid")
import warnings
warnings.filterwarnings("ignore")
```
<a id="0104"></a>
<h2>1.4 Useful Functions <a href="#01"
style="
border-radius: 10px;
background-color: #f1f1f1;
border: none;
color: #37509b;
text-align: center;
text-decoration: none;
display: inline-block;
padding: 4px 4px;
font-size: 14px;
">↻</a></h2>
```
def treat_words(df,
col,
language='english',
inplace=False,
tokenizer = tokenize.WordPunctTokenizer(),
decode = True,
stemmer = None,
lower = True,
remove_words = [],
):
"""
Description:
----------------
    Receives a dataframe and a column name. Removes stopwords
    from each row of that column, applies an optional stemmer,
    then rejoins the tokens and returns the list of processed strings.
tokenizer = tokenize.WordPunctTokenizer()
tokenize.WhitespaceTokenizer()
stemmer = PorterStemmer()
SnowballStemmer()
LancasterStemmer()
nltk.RSLPStemmer() # in portuguese
"""
pnct = [string for string in punctuation] # from string import punctuation
wrds = nltk.corpus.stopwords.words(language)
unwanted_words = pnct + wrds + remove_words
processed_text = list()
for element in tqdm(df[col]):
# starts a new list
new_text = list()
        # tokenize the unprocessed text into words
text_old = tokenizer.tokenize(element)
# check each word
for wrd in text_old:
# if the word are not in the unwanted words list
# add to the new list
if wrd.lower() not in unwanted_words:
new_wrd = wrd
if decode: new_wrd = unidecode(new_wrd)
if stemmer: new_wrd = stemmer.stem(new_wrd)
if lower: new_wrd = new_wrd.lower()
if new_wrd not in remove_words:
new_text.append(new_wrd)
processed_text.append(' '.join(new_text))
if inplace:
df[col] = processed_text
else:
return processed_text
def list_words_of_class(df,
col,
language='english',
inplace=False,
tokenizer = tokenize.WordPunctTokenizer(),
decode = True,
stemmer = None,
lower = True,
remove_words = []
):
"""
Description:
----------------
    Receives a dataframe and a column name. Removes stopwords
    from each row of that column, applies an optional stemmer,
    and returns a flat list of all remaining words.
"""
lista = treat_words(
df,col = col,language = language,
tokenizer=tokenizer,decode=decode,
stemmer=stemmer,lower=lower,
remove_words = remove_words
)
words_list = []
for string in lista:
words_list += tokenizer.tokenize(string)
return words_list
def get_frequency(df,
col,
language='english',
inplace=False,
tokenizer = tokenize.WordPunctTokenizer(),
decode = True,
stemmer = None,
lower = True,
remove_words = []
):
list_of_words = list_words_of_class(
df,
col = col,
decode = decode,
stemmer = stemmer,
lower = lower,
remove_words = remove_words
)
freq = nltk.FreqDist(list_of_words)
df_freq = pd.DataFrame({
'word': list(freq.keys()),
'frequency': list(freq.values())
}).sort_values(by='frequency',ascending=False)
n_words = df_freq['frequency'].sum()
df_freq['prop'] = 100*df_freq['frequency']/n_words
return df_freq
def common_best_words(df,col,n_common = 10,tol_frac = 0.8,n_jobs = 1):
list_to_remove = []
for i in range(0,n_jobs):
print('[info] Most common words in not survived')
sleep(0.5)
df_dead = get_frequency(
df.query('Survived == 0'),
col = col,
decode = False,
stemmer = False,
lower = False,
remove_words = list_to_remove )
print('[info] Most common words in survived')
sleep(0.5)
df_surv = get_frequency(
df.query('Survived == 1'),
col = col,
decode = False,
stemmer = False,
lower = False,
remove_words = list_to_remove )
words_dead = df_dead.nlargest(n_common, 'frequency')
list_dead = list(words_dead['word'].values)
words_surv = df_surv.nlargest(n_common, 'frequency')
list_surv = list(words_surv['word'].values)
for word in list(set(list_dead).intersection(list_surv)):
prop_dead = words_dead[words_dead['word'] == word]['prop'].values[0]
prop_surv = words_surv[words_surv['word'] == word]['prop'].values[0]
ratio = min([prop_dead,prop_surv])/max([prop_dead,prop_surv])
if ratio > tol_frac:
list_to_remove.append(word)
return list_to_remove
def just_keep_the_words(df,
col,
keep_words = [],
tokenizer = tokenize.WordPunctTokenizer()
):
"""
Description:
----------------
Removes all words that is not in `keep_words`
"""
processed_text = list()
    # for each row of the column
for element in tqdm(df[col]):
# starts a new list
new_text = list()
        # tokenize the unprocessed text into words
text_old = tokenizer.tokenize(element)
for wrd in text_old:
if wrd in keep_words: new_text.append(wrd)
processed_text.append(' '.join(new_text))
return processed_text
class Classifier:
'''
Description
-----------------
Class to approach classification algorithm
Example
-----------------
classifier = Classifier(
algorithm = ChooseTheAlgorith,
hyperparameters_range = {
'hyperparameter_1': [1,2,3],
'hyperparameter_2': [4,5,6],
'hyperparameter_3': [7,8,9]
}
)
# Looking for best model
classifier.grid_search_fit(X,y,n_splits=10)
#dt.grid_search_results.head(3)
# Prediction Form 1
par = classifier.best_model_params
dt.fit(X_trn,y_trn,params = par)
y_pred = classifier.predict(X_tst)
print(accuracy_score(y_tst, y_pred))
# Prediction Form 2
classifier.fit(X_trn,y_trn,params = 'best_model')
y_pred = classifier.predict(X_tst)
print(accuracy_score(y_tst, y_pred))
# Prediction Form 3
classifier.fit(X_trn,y_trn,min_samples_split = 5,max_depth=4)
y_pred = classifier.predict(X_tst)
print(accuracy_score(y_tst, y_pred))
'''
def __init__(self,algorithm, hyperparameters_range={},random_state=42):
self.algorithm = algorithm
self.hyperparameters_range = hyperparameters_range
self.random_state = random_state
self.grid_search_cv = None
self.grid_search_results = None
self.hyperparameters = self.__get_hyperparameters()
self.best_model = None
self.best_model_params = None
self.fitted_model = None
def grid_search_fit(self,X,y,verbose=0,n_splits=10,shuffle=True,scoring='accuracy'):
self.grid_search_cv = GridSearchCV(
self.algorithm(),
self.hyperparameters_range,
cv = KFold(n_splits = n_splits, shuffle=shuffle, random_state=self.random_state),
scoring=scoring,
verbose=verbose
)
self.grid_search_cv.fit(X, y)
col = list(map(lambda par: 'param_'+str(par),self.hyperparameters))+[
'mean_fit_time',
'mean_test_score',
'std_test_score',
'params'
]
results = pd.DataFrame(self.grid_search_cv.cv_results_)
self.grid_search_results = results[col].sort_values(
['mean_test_score','mean_fit_time'],
ascending=[False,True]
).reset_index(drop=True)
self.best_model = self.grid_search_cv.best_estimator_
self.best_model_params = self.best_model.get_params()
def best_model_cv_score(self,X,y,parameter='test_score',verbose=0,n_splits=10,shuffle=True,scoring='accuracy'):
if self.best_model != None:
cv_results = cross_validate(
self.best_model,
X = X,
y = y,
                cv=KFold(n_splits=n_splits, shuffle=shuffle, random_state=self.random_state)
)
return {
parameter+'_mean': cv_results[parameter].mean(),
parameter+'_std': cv_results[parameter].std()
}
def fit(self,X,y,params=None,**kwargs):
model = None
if len(kwargs) == 0 and params == 'best_model' and self.best_model != None:
model = self.best_model
elif type(params) == dict and len(params) > 0:
model = self.algorithm(**params)
        elif params is None:
model = self.algorithm(**kwargs)
else:
print('[Error]')
if model != None:
model.fit(X,y)
self.fitted_model = model
def predict(self,X):
if self.fitted_model != None:
return self.fitted_model.predict(X)
else:
print('[Error]')
return np.array([])
def predict_score(self,X_tst,y_tst,score=accuracy_score):
if self.fitted_model != None:
y_pred = self.predict(X_tst)
return score(y_tst, y_pred)
else:
print('[Error]')
return np.array([])
def hyperparameter_info(self,hyperpar):
str_ = 'param_'+hyperpar
return self.grid_search_results[
[str_,'mean_fit_time','mean_test_score']
].groupby(str_).agg(['mean','std'])
def __get_hyperparameters(self):
return [hp for hp in self.hyperparameters_range]
def cont_class_limits(lis_df,n_class):
    vmin = lis_df.quantile(0.0)
    ampl = lis_df.quantile(1.0) - vmin
    ampl_class = ampl/n_class
    # bin edges must start at the column minimum, not at zero
    limits = [[vmin + i*ampl_class, vmin + (i+1)*ampl_class] for i in range(n_class)]
    return limits
def cont_classification(lis_df,limits):
list_res = []
n_class = len(limits)
for elem in lis_df:
for ind in range(n_class-1):
if elem >= limits[ind][0] and elem < limits[ind][1]:
list_res.append(ind+1)
if elem >= limits[-1][0]: list_res.append(n_class)
return list_res
```
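`cont_class_limits` and `cont_classification` implement equal-width binning of a continuous column into `n_class` integer labels; pandas' `pd.cut` does the same in a single call. A quick check on toy ages:

```python
import pandas as pd

# Equal-width binning: pd.cut splits the [min, max] range into n equal
# intervals and labels them 1..n, matching what the two helpers above do.
ages = pd.Series([1.0, 10.0, 20.0, 30.0, 40.0, 50.0])
binned = pd.cut(ages, bins=5, labels=[1, 2, 3, 4, 5])
print(binned.tolist())  # [1, 1, 2, 3, 4, 5]
```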
<a id="02" style="
background-color: #37509b;
border: none;
color: white;
padding: 2px 10px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 10px;" href="#toc">TOC ↻</a>
<div style="margin-top: 9px; background-color: #efefef; padding-top:10px; padding-bottom:10px;margin-bottom: 9px;box-shadow: 5px 5px 5px 0px rgba(87, 87, 87, 0.2);">
<center>
<h1>2. Dataset: Cleaning and Exploration</h1>
</center>
<ol type="i">
<!-- <li><a href="#0101" style="color: #37509b;">Inicialização</a></li>
<li><a href="#0102" style="color: #37509b;">Pacotes</a></li>
<li><a href="#0103" style="color: #37509b;">Funcoes</a></li>
<li><a href="#0104" style="color: #37509b;">Dados de Indicadores Sociais</a></li>
<li><a href="#0105" style="color: #37509b;">Dados de COVID-19</a></li>
-->
</ol>
</div>
<a id="0101"></a>
<h2>2.1 Import Dataset <a href="#02"
style="
border-radius: 10px;
background-color: #f1f1f1;
border: none;
color: #37509b;
text-align: center;
text-decoration: none;
display: inline-block;
padding: 4px 4px;
font-size: 14px;
">↻</a></h2>
```
df_trn = pd.read_csv('data/train.csv')
df_tst = pd.read_csv('data/test.csv')
df = pd.concat([df_trn,df_tst])
df_trn = df_trn.drop(columns=['PassengerId'])
df_tst = df_tst.drop(columns=['PassengerId'])
df_tst.info()
```
## Pclass
Investigating if the class is related to the probability of survival
```
sns.barplot(x='Pclass', y="Survived", data=df_trn)
```
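The bar heights in the plot above are per-class survival rates, i.e. a groupby mean. Shown here on a toy frame (the numbers are illustrative, not the Titanic data):

```python
import pandas as pd

# Toy stand-in for df_trn; the barplot's bar height is the mean of
# Survived within each Pclass.
df = pd.DataFrame({'Pclass':   [1, 1, 2, 2, 3, 3, 3],
                   'Survived': [1, 1, 1, 0, 0, 0, 1]})
rates = df.groupby('Pclass')['Survived'].mean()
print(rates.to_dict())
```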
## Name
```
treat_words(df_trn,col = 'Name',inplace=True)
treat_words(df_tst,col = 'Name',inplace=True)
%matplotlib inline
from wordcloud import WordCloud
import matplotlib.pyplot as plt
all_words = ' '.join(list(df_trn['Name']))
word_cloud = WordCloud().generate(all_words)
plt.figure(figsize=(10,7))
plt.imshow(word_cloud, interpolation='bilinear')
plt.axis("off")
plt.show()
common_best_words(df_trn,col='Name',n_common = 10,tol_frac = 0.5,n_jobs = 1)
```
The words "master" and "william" occur with nearly the same relative frequency among survivors and non-survivors, so they carry little discriminative information and are removed.
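The criterion behind `common_best_words` can be checked in isolation: a word is considered uninformative when its relative frequency is similar in both classes, i.e. the min/max ratio of the two proportions exceeds the tolerance:

```python
def is_uninformative(prop_a, prop_b, tol_frac=0.5):
    # Same test as in common_best_words: similar proportions in both
    # classes (ratio above tol_frac) means the word does not discriminate.
    return min(prop_a, prop_b) / max(prop_a, prop_b) > tol_frac

print(is_uninformative(2.1, 1.9))  # True  -> drop the word
print(is_uninformative(3.0, 0.5))  # False -> keep it
```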
```
df_comm = get_frequency(df_trn,col = 'Name',remove_words=['("','")','master', 'william']).reset_index(drop=True)
surv_prob = [ df_trn['Survived'][df_trn['Name'].str.contains(row['word'])].mean() for index, row in df_comm.iterrows()]
df_comm['survival_prob (%)'] = 100*np.array(surv_prob)
print('Survival Frequency related to words in Name')
df_comm.head(10)
df_comm_surv = get_frequency(df_trn[df_trn['Survived']==1],col = 'Name',remove_words=['("','")']).reset_index(drop=True)
sleep(0.5)
print('Most frequent words within those who survived')
df_comm_surv.head(10)
df_comm_dead = get_frequency(df_trn[df_trn['Survived']==0],col = 'Name',remove_words=['("','")']).reset_index(drop=True)
sleep(0.5)
print("Most frequent words within those that did not survive")
df_comm_dead.head(10)
```
### Feature Engineering
```
min_occurrences = 2
df_comm = get_frequency(df,col = 'Name',
remove_words=['("','")','john', 'henry', 'william','h','j','jr']
).reset_index(drop=True)
words_to_keep = list(df_comm[df_comm['frequency'] > min_occurrences]['word'])
df_trn['Name'] = just_keep_the_words(df_trn,
col = 'Name',
keep_words = words_to_keep
)
df_tst['Name'] = just_keep_the_words(df_tst,
col = 'Name',
keep_words = words_to_keep
)
vectorize = CountVectorizer(lowercase=True,max_features = 4)
vectorize.fit(df_trn['Name'])
X = pd.DataFrame(vectorize.transform(df_trn['Name']).toarray(),
    columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names()))
)
y = df_trn['Survived']
from sklearn.model_selection import train_test_split
X_trn,X_tst,y_trn,y_tst = train_test_split(
X,
y,
test_size = 0.25,
random_state=42
)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(C=100)
classifier.fit(X_trn,y_trn)
accuracy = classifier.score(X_tst,y_tst)
print('Accuracy = %.3f%%' % (100*accuracy))
df_trn = pd.concat([
df_trn
,
pd.DataFrame(vectorize.fit_transform(df_trn['Name']).toarray(),
columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names()))
)
],axis=1).drop(columns=['Name'])
df_tst = pd.concat([
df_tst
,
    pd.DataFrame(vectorize.transform(df_tst['Name']).toarray(),  # transform only: reuse the vocabulary fitted on the training names
columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names()))
)
],axis=1).drop(columns=['Name'])
```
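`CountVectorizer(max_features=4)` above keeps only the most frequent tokens in the corpus; a small sketch of that behavior on made-up names:

```python
from sklearn.feature_extraction.text import CountVectorizer

# max_features keeps only the top-N terms by total corpus frequency;
# here 'brown', 'mr' and 'smith' each occur twice, the rest once.
docs = ['mr john brown', 'mrs mary brown', 'miss anna smith', 'mr peter smith']
vec = CountVectorizer(lowercase=True, max_features=3)
bag = vec.fit_transform(docs)
print(sorted(vec.vocabulary_))  # ['brown', 'mr', 'smith']
```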
## Sex
```
from sklearn.preprocessing import LabelEncoder
Sex_Encoder = LabelEncoder()
df_trn['Sex'] = Sex_Encoder.fit_transform(df_trn['Sex']).astype(int)
df_tst['Sex'] = Sex_Encoder.transform(df_tst['Sex']).astype(int)
```
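`LabelEncoder` assigns integer codes in sorted order of the observed classes, so here 'female' maps to 0 and 'male' to 1. A quick check:

```python
from sklearn.preprocessing import LabelEncoder

# Classes are sorted alphabetically before coding: female -> 0, male -> 1.
enc = LabelEncoder()
codes = enc.fit_transform(['male', 'female', 'female', 'male'])
print(list(enc.classes_), list(codes))  # ['female', 'male'] [1, 0, 0, 1]
```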
## Age
```
mean_age = df['Age'][df['Age'].notna()].mean()
df_trn['Age'].fillna(mean_age,inplace=True)
df_tst['Age'].fillna(mean_age,inplace=True)
# age_limits = cont_class_limits(df['Age'],5)
# df_trn['Age'] = cont_classification(df_trn['Age'],age_limits)
# df_tst['Age'] = cont_classification(df_tst['Age'],age_limits)
```
## Family Size
```
df_trn['FamilySize'] = df_trn['SibSp'] + df_trn['Parch'] + 1
df_tst['FamilySize'] = df_tst['SibSp'] + df_tst['Parch'] + 1
df_trn = df_trn.drop(columns = ['SibSp','Parch'])
df_tst = df_tst.drop(columns = ['SibSp','Parch'])
```
## Cabin Feature
The Cabin column is mostly missing, so missing values are first filled with the placeholder 'N000', and the deck letter is then extracted from each cabin code.
```
df_trn['Cabin'] = df_trn['Cabin'].fillna('N000')
df_cab = df_trn[df_trn['Cabin'].notna()]
df_cab = pd.concat(
[
df_cab,
df_cab['Cabin'].str.extract(
            r'([A-Za-z]+)(\d+\.?\d*)([A-Za-z]*)',
expand = True).drop(columns=[2]).rename(
columns={0: 'Cabin_Class', 1: 'Cabin_Number'}
)
], axis=1)
df_trn = df_cab.drop(columns=['Cabin','Cabin_Number'])
df_trn = pd.concat([
df_trn.drop(columns=['Cabin_Class']),
# pd.get_dummies(df_trn['Cabin_Class'],prefix='Cabin').drop(columns=['Cabin_N'])
pd.get_dummies(df_trn['Cabin_Class'],prefix='Cabin')
],axis=1)
df_tst['Cabin'] = df_tst['Cabin'].fillna('N000')
df_cab = df_tst[df_tst['Cabin'].notna()]
df_cab = pd.concat(
[
df_cab,
df_cab['Cabin'].str.extract(
            r'([A-Za-z]+)(\d+\.?\d*)([A-Za-z]*)',
expand = True).drop(columns=[2]).rename(
columns={0: 'Cabin_Class', 1: 'Cabin_Number'}
)
], axis=1)
df_tst = df_cab.drop(columns=['Cabin','Cabin_Number'])
df_tst = pd.concat([
df_tst.drop(columns=['Cabin_Class']),
# pd.get_dummies(df_tst['Cabin_Class'],prefix='Cabin').drop(columns=['Cabin_N'])
pd.get_dummies(df_tst['Cabin_Class'],prefix='Cabin')
],axis=1)
```
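The `str.extract` pattern above splits a cabin code such as 'C85' into a deck letter and a number ('N000' being the missing-value placeholder). A check on sample strings:

```python
import pandas as pd

# Group 0 = deck letter(s), group 1 = cabin number, group 2 = trailing letters.
s = pd.Series(['C85', 'B42', 'N000'])
parts = s.str.extract(r'([A-Za-z]+)(\d+\.?\d*)([A-Za-z]*)', expand=True)
print(parts[0].tolist(), parts[1].tolist())  # ['C', 'B', 'N'] ['85', '42', '000']
```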
## Ticket
```
df_trn = df_trn.drop(columns=['Ticket'])
df_tst = df_tst.drop(columns=['Ticket'])
```
## Fare
```
mean_fare = df['Fare'][df['Fare'].notna()].mean()
df_trn['Fare'].fillna(mean_fare,inplace=True)
df_tst['Fare'].fillna(mean_fare,inplace=True)
# fare_limits = cont_class_limits(df['Fare'],5)
# df_trn['Fare'] = cont_classification(df_trn['Fare'],fare_limits)
# df_tst['Fare'] = cont_classification(df_tst['Fare'],fare_limits)
```
## Embarked
```
most_frequent_emb = df['Embarked'].value_counts()[:1].index.tolist()[0]
df_trn['Embarked'] = df_trn['Embarked'].fillna(most_frequent_emb)
df_tst['Embarked'] = df_tst['Embarked'].fillna(most_frequent_emb)
df_trn = pd.concat([
df_trn.drop(columns=['Embarked']),
# pd.get_dummies(df_trn['Embarked'],prefix='Emb').drop(columns=['Emb_C'])
pd.get_dummies(df_trn['Embarked'],prefix='Emb')
],axis=1)
df_tst = pd.concat([
df_tst.drop(columns=['Embarked']),
# pd.get_dummies(df_tst['Embarked'],prefix='Emb').drop(columns=['Emb_C'])
pd.get_dummies(df_tst['Embarked'],prefix='Emb')
],axis=1)
df_trn
```
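Filling Embarked with the most frequent port, as above, is equivalent to taking the column's mode. A quick check:

```python
import pandas as pd

# value_counts().idxmax()-style lookup and mode()[0] both give the most
# frequent value of a Series.
s = pd.Series(['S', 'C', 'S', 'Q', 'S'])
most_frequent = s.value_counts()[:1].index.tolist()[0]
print(most_frequent, s.mode()[0])  # S S
```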
<a id="03" style="
background-color: #37509b;
border: none;
color: white;
padding: 2px 10px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 10px;" href="#toc">TOC ↻</a>
<div style="margin-top: 9px; background-color: #efefef; padding-top:10px; padding-bottom:10px;margin-bottom: 9px;box-shadow: 5px 5px 5px 0px rgba(87, 87, 87, 0.2);">
<center>
<h1>3. Modelling</h1>
</center>
<ol type="i">
<!-- <li><a href="#0101" style="color: #37509b;">Inicialização</a></li>
<li><a href="#0102" style="color: #37509b;">Pacotes</a></li>
<li><a href="#0103" style="color: #37509b;">Funcoes</a></li>
<li><a href="#0104" style="color: #37509b;">Dados de Indicadores Sociais</a></li>
<li><a href="#0105" style="color: #37509b;">Dados de COVID-19</a></li>
-->
</ol>
</div>
```
sns.barplot(x='Age', y="Survived", data=df_trn)
scaler = StandardScaler()
X = scaler.fit_transform(df_trn.drop(columns=['Survived']))
y = df_trn['Survived']
X_trn,X_tst,y_trn,y_tst = train_test_split(
X,
y,
test_size = 0.25,
random_state=42
)
Model_Scores = {}
```
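`StandardScaler`, applied above before the train/test split, centers each feature to zero mean and unit variance, which is verifiable on a tiny matrix:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# After scaling, every column has mean 0 and (population) std 1.
M = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
M_scaled = StandardScaler().fit_transform(M)
print(np.allclose(M_scaled.mean(axis=0), 0), np.allclose(M_scaled.std(axis=0), 1))  # True True
```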
## Logistic Regression
```
SEED = 42
hyperparametric_space = {
'solver' : ['newton-cg', 'lbfgs', 'liblinear'],
'C' : [0.01,0.1,1,10,100]
}
grid_search_cv = GridSearchCV(
LogisticRegression(random_state=SEED),
hyperparametric_space,
cv = KFold(n_splits = 10, shuffle=True,random_state=SEED),
scoring='accuracy',
verbose=0
)
grid_search_cv.fit(X, y)
results = pd.DataFrame(grid_search_cv.cv_results_)
pd.options.display.float_format = '{:,.5f}'.format
col = ['param_C', 'param_solver','mean_fit_time', 'mean_test_score', 'std_test_score']
results[col].sort_values(
['mean_test_score','mean_fit_time'],
ascending=[False,True]
).head(10)
log = Classifier(
algorithm = LogisticRegression,
hyperparameters_range = {
'intercept_scaling' : [0.8,1,1.2],
# 'class_weight' : [{ 0:0.45, 1:0.55 },{ 0:0.5, 1:0.5 },{ 0:0.55, 1:0.45 }],
'solver' : ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'C' : [0.05,0.07,0.09]
}
)
log.grid_search_fit(X,y,n_splits=10)
print('\nBest Model:')
print('\n',log.best_model)
sc_dict = log.best_model_cv_score(X,y)
sc_list = list((100*np.array(list(sc_dict.values()))))
print('\nCV Score: %.2f%% ± %.2f%%' % (sc_list[0],sc_list[1]))
log.fit(X_trn,y_trn,params = 'best_model')
psc = log.predict_score(X_tst,y_tst)
print('\nAccuracy Score: %.2f ' % psc)
Model_Scores['logistic_regression'] = {
'model' : log.best_model,
'best_params' : log.best_model_params,
'test_accuracy_score' : psc,
'cv_score' : 0.01*sc_list[0],
'cv_score_std' : 0.01*sc_list[1]
}
log.grid_search_results.head(5)
```
## Support Vector Classifier
```
sv = Classifier(
algorithm = SVC,
hyperparameters_range = {
'kernel' : ['linear', 'poly','rbf','sigmoid'],
'C' : [0.01,0.5,1,3,7,100]
}
)
sv.grid_search_fit(X,y)
print('\nBest Model:')
print('\n',sv.best_model)
sc_dict = sv.best_model_cv_score(X,y)
sc_list = list((100*np.array(list(sc_dict.values()))))
print('\nCV Score: %.2f%% ± %.2f%%' % (sc_list[0],sc_list[1]))
sv.fit(X_trn,y_trn,params = 'best_model')
psc = sv.predict_score(X_tst,y_tst)
print('\nAccuracy Score: %.2f ' % (psc))
Model_Scores['svc'] = {
'model' : sv.best_model,
'best_params' : sv.best_model_params,
'test_accuracy_score' : psc,
'cv_score' : 0.01*sc_list[0],
'cv_score_std' : 0.01*sc_list[1]
}
sv.grid_search_results.head(5)
```
## Decision Tree Classifier
```
dt = Classifier(
algorithm = DecisionTreeClassifier,
hyperparameters_range = {
'min_samples_split': [15,20,25],
'max_depth': [10,15,20,25],
'min_samples_leaf': [1,3,5,7,9]
}
)
dt.grid_search_fit(X,y)
print('\nBest Model:')
print('\n',dt.best_model)
sc_dict = dt.best_model_cv_score(X,y)
sc_list = list((100*np.array(list(sc_dict.values()))))
print('\nCV Score: %.2f%% ± %.2f%%' % (sc_list[0],sc_list[1]))
dt.fit(X_trn,y_trn,params = 'best_model')
psc = dt.predict_score(X_tst,y_tst)
print('\nAccuracy Score: %.2f ' % (psc))
Model_Scores['decision_tree'] = {
'model' : dt.best_model,
'best_params' : dt.best_model_params,
'test_accuracy_score' : psc,
'cv_score' : 0.01*sc_list[0],
'cv_score_std' : 0.01*sc_list[1]
}
dt.grid_search_results.head(5)
```
## Gaussian Naive Bayes
```
gnb = Classifier(
algorithm = GaussianNB,
hyperparameters_range = {
'var_smoothing': [1e-09,1e-07,1e-04,1e-02,1,10,100],
}
)
gnb.grid_search_fit(X,y)
print('\nBest Model:')
print('\n',gnb.best_model)
sc_dict = gnb.best_model_cv_score(X,y)
sc_list = list((100*np.array(list(sc_dict.values()))))
print('\nCV Score: %.2f%% ± %.2f%%' % (sc_list[0],sc_list[1]))
gnb.fit(X_trn,y_trn,params = 'best_model')
psc = gnb.predict_score(X_tst,y_tst)
print('\nAccuracy Score: %.2f ' % (psc ))
pd.options.display.float_format = '{:,.8f}'.format
Model_Scores['gaussian_nb'] = {
'model' : gnb.best_model,
'best_params' : gnb.best_model_params,
'test_accuracy_score' : psc,
'cv_score' : 0.01*sc_list[0],
'cv_score_std' : 0.01*sc_list[1]
}
gnb.grid_search_results.head(9)
```
## K-Nearest Neighbors Classifier
```
knn = Classifier(
algorithm = KNeighborsClassifier,
hyperparameters_range = {
'n_neighbors': [2,5,10,20],
'weights' : ['uniform', 'distance'],
'algorithm' : ['auto', 'ball_tree', 'kd_tree', 'brute'],
'p' : [2,3,4,5]
}
)
knn.grid_search_fit(X,y)
print('\nBest Model:')
print('\n',knn.best_model)
sc_dict = knn.best_model_cv_score(X,y)
sc_list = list((100*np.array(list(sc_dict.values()))))
print('\nCV Score: %.2f%% ± %.2f%%' % (sc_list[0],sc_list[1]))
knn.fit(X_trn,y_trn,params = 'best_model')
psc = knn.predict_score(X_tst,y_tst)
print('\nAccuracy Score: %.2f ' % (psc))
pd.options.display.float_format = '{:,.3f}'.format
Model_Scores['knn_classifier'] = {
'model' : knn.best_model,
'best_params' : knn.best_model_params,
'test_accuracy_score' : psc,
'cv_score' : 0.01*sc_list[0],
'cv_score_std' : 0.01*sc_list[1]
}
knn.grid_search_results.head(9)
```
## Random Forest Classifier
```
rf = Classifier(
algorithm = RandomForestClassifier,
hyperparameters_range = {
'n_estimators': [100,120,150,175,200],
'min_samples_split': [6,7,8,9,10],
'random_state': [42]
}
)
rf.grid_search_fit(X,y)
print('\nBest Model:')
print('\n',rf.best_model)
sc_dict = rf.best_model_cv_score(X,y)
sc_list = list((100*np.array(list(sc_dict.values()))))
print('\nCV Score: %.2f%% ± %.2f%%' % (sc_list[0],sc_list[1]))
rf.fit(X_trn,y_trn,params = 'best_model')
psc = rf.predict_score(X_tst,y_tst)
print('\nAccuracy Score: %.2f ' % (psc))
pd.options.display.float_format = '{:,.3f}'.format
Model_Scores['random_forest'] = {
'model' : rf.best_model,
'best_params' : rf.best_model_params,
'test_accuracy_score' : psc,
'cv_score' : 0.01*sc_list[0],
'cv_score_std' : 0.01*sc_list[1]
}
rf.grid_search_results.head(9)
```
## Gradient Boosting Classifier
```
SEED = 42
N_SPLITS = 10
MODEL = 'GradientBoostingClassifier'
start = time()
# Parametric Space
hyperparametric_space = {
'loss': ['deviance', 'exponential'],
# 'min_samples_split': [70,80,90,100,120,140,160],
'min_samples_split': [90,100,120],
# 'max_depth': [4,5,6,7,8],
'max_depth': [4,5,6,7,8]
}
# Searching the best setting
print('[info] Grid Searching')
grid_search_cv = GridSearchCV(
GradientBoostingClassifier(random_state=SEED),
hyperparametric_space,
cv = KFold(n_splits = N_SPLITS , shuffle=True,random_state=SEED),
scoring='accuracy',
verbose=0)
grid_search_cv.fit(X, y)
results = pd.DataFrame(grid_search_cv.cv_results_)
print('[info] Grid Search Timing: %.2f seconds'%(time() - start))
start = time()
# Evaluating Test Score For Best Estimator
print('[info] Test Accuracy Score')
gb = grid_search_cv.best_estimator_
gb.fit(X_trn, y_trn)
y_pred = gb.predict(X_tst)
# Evaluating K Folded Cross Validation
print('[info] KFolded Cross Validation')
cv_results = cross_validate(grid_search_cv.best_estimator_,X,y,
cv=KFold(n_splits = N_SPLITS ,shuffle=True,random_state=SEED) )
print('[info] Cross Validation Timing: %.2f seconds'%(time() - start))
Model_Scores[MODEL] = {
'test_accuracy_score' : gb.score(X_tst,y_tst),
'cv_score' : cv_results['test_score'].mean(),
'cv_score_std' : cv_results['test_score'].std(),
'best_params' : grid_search_cv.best_estimator_.get_params()
}
pd.options.display.float_format = '{:,.5f}'.format
print('\t\t test_accuracy_score: {:.3f}'.format(Model_Scores[MODEL]['test_accuracy_score']))
print('\t\t cv_score: {:.3f}±{:.3f}'.format(
Model_Scores[MODEL]['cv_score'],Model_Scores[MODEL]['cv_score_std']))
params_list = ['mean_test_score']+list(map(lambda var: 'param_'+var,grid_search_cv.best_params_.keys()))+['mean_fit_time']
results[params_list].sort_values(
['mean_test_score','mean_fit_time'],
ascending=[False,True]
).head(5)
```
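The grid-search / refit / cross-validation recipe above is repeated almost verbatim for each estimator, so it can be factored into a single helper. A sketch under the same variable conventions (the function name and signature are my own, not part of the notebook):

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV, KFold, cross_validate

def evaluate_estimator(estimator, param_grid, X, y, X_trn, y_trn, X_tst, y_tst,
                       n_splits=10, seed=42):
    """Grid-search, refit the winner on the training split, then cross-validate it."""
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    gs = GridSearchCV(estimator, param_grid, cv=cv, scoring='accuracy')
    gs.fit(X, y)
    best = gs.best_estimator_
    best.fit(X_trn, y_trn)
    cv_res = cross_validate(best, X, y, cv=cv)
    return {
        'test_accuracy_score': best.score(X_tst, y_tst),
        'cv_score': cv_res['test_score'].mean(),
        'cv_score_std': cv_res['test_score'].std(),
        'best_params': gs.best_params_,
        'results': pd.DataFrame(gs.cv_results_),
    }
```

Each section would then reduce to a single call such as `Model_Scores[MODEL] = evaluate_estimator(GradientBoostingClassifier(random_state=SEED), hyperparametric_space, X, y, X_trn, y_trn, X_tst, y_tst)`.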
## Multi Layer Perceptron Classifier
```
SEED = 42
N_SPLITS = 3
MODEL = 'MLPClassifier'
start = time()
# Parametric Space
hyperparametric_space = {
'hidden_layer_sizes': [(160,),(180,),(200,)],
# 'hidden_layer_sizes': [(180,)],
'alpha':[0.000001,0.00001,0.0001,0.001,0.01,0.1],
# 'alpha':[0.0001],
# 'beta_1':[0.81,0.9,0.99],
# 'beta_1':[0.9],
# 'beta_2':[0.999,0.99,0.9],
# 'beta_2':[0.99],
'activation': ['relu'],
'random_state': [SEED],
'learning_rate': ['adaptive']
}
# Searching the best setting
print('[info] Grid Searching')
grid_search_cv = GridSearchCV(
MLPClassifier(random_state=SEED),
hyperparametric_space,
cv = KFold(n_splits = N_SPLITS , shuffle=True,random_state=SEED),
scoring='accuracy',
verbose=0)
grid_search_cv.fit(X, y)
results = pd.DataFrame(grid_search_cv.cv_results_)
print('[info] Grid Search Timing: %.2f seconds'%(time() - start))
start = time()
# Evaluating Test Score For Best Estimator
print('[info] Test Accuracy Score')
gb = grid_search_cv.best_estimator_
gb.fit(X_trn, y_trn)
y_pred = gb.predict(X_tst)
# Evaluating K Folded Cross Validation
print('[info] KFolded Cross Validation')
cv_results = cross_validate(grid_search_cv.best_estimator_,X,y,
cv=KFold(n_splits = N_SPLITS ,shuffle=True,random_state=SEED) )
print('[info] Cross Validation Timing: %.2f seconds'%(time() - start))
Model_Scores[MODEL] = {
'test_accuracy_score' : gb.score(X_tst,y_tst),
'cv_score' : cv_results['test_score'].mean(),
'cv_score_std' : cv_results['test_score'].std(),
'best_params' : grid_search_cv.best_estimator_.get_params()
}
pd.options.display.float_format = '{:,.5f}'.format
print('\t\t test_accuracy_score: {:.3f}'.format(Model_Scores[MODEL]['test_accuracy_score']))
print('\t\t cv_score: {:.3f}±{:.3f}'.format(
Model_Scores[MODEL]['cv_score'],Model_Scores[MODEL]['cv_score_std']))
params_list = ['mean_test_score']+list(map(lambda var: 'param_'+var,grid_search_cv.best_params_.keys()))+['mean_fit_time']
results[params_list].sort_values(
['mean_test_score','mean_fit_time'],
ascending=[False,True]
).head(5)
mlc = Classifier(
algorithm = MLPClassifier,
hyperparameters_range = {
'hidden_layer_sizes': [(160,),(180,),(200,)],
'alpha':[0.00001,0.0001,0.001],
'beta_1':[0.81,0.9,0.99],
'beta_2':[0.999,0.99,0.9],
'activation': ['identity'],
# 'activation': ['identity', 'logistic', 'tanh', 'relu'],
'random_state': [42],
'learning_rate': ['adaptive'],
'max_iter': [1000]
}
)
mlc.grid_search_fit(X,y,n_splits=3)
print('\nBest Model:')
print('\n',mlc.best_model)
sc_dict = mlc.best_model_cv_score(X,y)
sc_list = list((100*np.array(list(sc_dict.values()))))
print('\nCV Score: %.2f%% ± %.2f%%' % (sc_list[0],sc_list[1]))
mlc.fit(X_trn,y_trn,params = 'best_model')
psc = mlc.predict_score(X_tst,y_tst)
print('\nAccuracy Score: %.2f ' % (psc))
pd.options.display.float_format = '{:,.6f}'.format
Model_Scores['mlc_classifier'] = {
'model' : mlc.best_model,
'best_params' : mlc.best_model_params,
'test_accuracy_score' : psc,
'cv_score' : 0.01*sc_list[0],
'cv_score_std' : 0.01*sc_list[1]
}
mlc.grid_search_results.head(9)
# Randomized hyperparameter search over a RandomForestClassifier
# (identifiers below are kept from the original Portuguese snippet)
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV

np.random.seed(SEED)
x_azar, y_azar = X, y  # aliases for the feature/label arrays used in this notebook
espaco_de_parametros = {
    "n_estimators": randint(10, 101),
    "max_depth": randint(3, 6),
    "min_samples_split": randint(32, 129),
    "min_samples_leaf": randint(32, 129),
    "bootstrap": [True, False],
    "criterion": ["gini", "entropy"]
}
tic = time()
busca = RandomizedSearchCV(RandomForestClassifier(),
                           espaco_de_parametros,
                           n_iter=80,
                           cv=KFold(n_splits=5, shuffle=True))
busca.fit(x_azar, y_azar)
tac = time()
tempo_que_passou = tac - tic
print("Elapsed time: %.2f seconds" % tempo_que_passou)
resultados = pd.DataFrame(busca.cv_results_)
resultados.head()
```
**Model Score Summary**
```
pd.DataFrame([[
model,
Model_Scores[model]['test_accuracy_score'],
Model_Scores[model]['cv_score'],
Model_Scores[model]['cv_score_std']
] for model in Model_Scores.keys()],columns=['model','test_accuracy_score','cv_score','cv_score_std'])
```
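The comparison frame above lists models in insertion order. A small helper (the name `rank_models` is my own) can turn the `Model_Scores` dictionary into a leaderboard sorted by mean CV score, with its spread as a tiebreaker:

```python
import pandas as pd

def rank_models(model_scores):
    """Turn the Model_Scores dict into a leaderboard sorted by CV score."""
    rows = [
        [name,
         scores['test_accuracy_score'],
         scores['cv_score'],
         scores['cv_score_std']]
        for name, scores in model_scores.items()
    ]
    df = pd.DataFrame(rows, columns=['model', 'test_accuracy_score',
                                     'cv_score', 'cv_score_std'])
    return df.sort_values(['cv_score', 'cv_score_std'],
                          ascending=[False, True]).reset_index(drop=True)
```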
# Visualization Code for Machine Learning the Warm Rain Process
David John Gagne
This notebook contains the code for generating some of the figures and tables in *Machine Learning the Warm Rain Process*.
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from glob import glob
from os.path import join, exists
from matplotlib.colors import LogNorm
from mlmicrophysics.metrics import hellinger_distance, r2_corr, root_mean_squared_error
from sklearn.metrics import mean_absolute_error, confusion_matrix
from tensorflow.keras.models import load_model
import cartopy.crs as ccrs
```
# Load Output Data
These cells load the prediction and truth data along with previously calculated verification scores.
```
source = "TAU"
if source == "sd":
data_path = "/glade/p/cisl/aiml/ggantos/cam_sd_model_base_full_moredata/"
elif source == "TAU":
data_path = "/glade/p/cisl/aiml/ggantos/cam_run5_models_20190726_noL2/"
else:
data_path = None
print("load test prediction labels")
test_pred_labels = pd.read_csv(join(data_path, "test_prediction_labels.csv"), index_col="index")
print("load test prediction values")
test_pred_values = pd.read_csv(join(data_path, "test_prediction_values.csv"), index_col="index")
print("load cam labels")
test_cam_labels = pd.read_csv(join(data_path, "test_cam_labels.csv"), index_col="index")
print("load cam values")
test_cam_values = pd.read_csv(join(data_path, "test_cam_values.csv"), index_col="index")
print("load meta values")
meta_test = pd.read_csv(join(data_path, "meta_test.csv"), index_col="index")
print("load reg scores")
reg_scores = pd.read_csv(join(data_path, "dnn_regressor_scores.csv"), index_col="Output")
print("create mg2 dataframe")
mg2_test = meta_test.loc[:, ["qrtend_MG2", "nctend_MG2", "nrtend_MG2"]]
mg2_test.loc[mg2_test["qrtend_MG2"] > 0, "qrtend_MG2"] = np.log10(mg2_test.loc[mg2_test["qrtend_MG2"] > 0, "qrtend_MG2"])
mg2_test.loc[mg2_test["nctend_MG2"] < 0, "nctend_MG2"] = np.log10(-mg2_test.loc[mg2_test["nctend_MG2"] < 0, "nctend_MG2"])
mg2_test.loc[mg2_test["nrtend_MG2"] < 0, "nrtend_MG2"] = np.log10(-mg2_test.loc[mg2_test["nrtend_MG2"] < 0, "nrtend_MG2"])
mg2_test.loc[mg2_test["nrtend_MG2"] > 0, "nrtend_MG2"] = np.log10(mg2_test.loc[mg2_test["nrtend_MG2"] > 0, "nrtend_MG2"])
reg_scores
scores = {"rmse": root_mean_squared_error, "mae": mean_absolute_error,
"r2": r2_corr, "hellinger": hellinger_distance}
mg2_scores = pd.DataFrame(0, columns=reg_scores.columns, index=reg_scores.index)
for score in reg_scores.columns:
mg2_scores.loc[f"qrtend_{source}_1",
score] = scores[score](test_cam_values.loc[test_cam_labels[f"qrtend_{source}"] == 1, f"qrtend_{source}"],
mg2_test.loc[test_cam_labels[f"qrtend_{source}"] == 1, "qrtend_MG2"])
mg2_scores.loc[f"nctend_{source}_1",
score] = scores[score](test_cam_values.loc[test_cam_labels[f"nctend_{source}"] == 1, f"nctend_{source}"],
mg2_test.loc[test_cam_labels[f"nctend_{source}"] == 1, "nctend_MG2"])
mg2_scores.loc[f"nrtend_{source}_-1",
score] = scores[score](test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == -1, f"nrtend_{source}"],
mg2_test.loc[test_cam_labels[f"nrtend_{source}"] == -1, "nrtend_MG2"])
mg2_scores.loc[f"nrtend_{source}_1",
score] = scores[score](test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == 1, f"nrtend_{source}"],
mg2_test.loc[test_cam_labels[f"nrtend_{source}"] == 1, "nrtend_MG2"])
```
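The `r2_corr` and `hellinger_distance` metrics imported from `mlmicrophysics.metrics` above are project-specific. Minimal stand-in implementations, assuming the usual definitions (squared Pearson correlation, and the Hellinger distance between normalized histograms of the two samples), might look like:

```python
import numpy as np

def r2_corr(y_true, y_pred):
    """Squared Pearson correlation between truth and prediction."""
    return np.corrcoef(y_true, y_pred)[0, 1] ** 2

def hellinger_distance(y_true, y_pred, bins=50):
    """Hellinger distance between normalized histograms of two samples."""
    lo = min(np.min(y_true), np.min(y_pred))
    hi = max(np.max(y_true), np.max(y_pred))
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(y_true, bins=edges, density=True)
    q, _ = np.histogram(y_pred, bins=edges, density=True)
    widths = np.diff(edges)
    # equivalent to the discrete Hellinger distance on per-bin probability mass
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2 * widths))
```

These are sketches for toy data only; the published scores in `reg_scores` come from the package's own implementations.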
# Figure 2
Code for 2D histogram errors.
```
fig, axes = plt.subplots(2, 4, figsize=(12, 6))
hist2d_ylabels = [f"{source} Emulator", "MG2 Bulk"]
x_labels = [r"$\log_{10}(dq_r/dt)$", r"$\log_{10}(dN_c/dt)$",
            r"$\log_{10}(dN_r/dt < 0)$", r"$\log_{10}(dN_r/dt > 0)$"]
vmax = 1e6
bins = dict(qr=np.arange(-18, -4.5, 0.2),
nc=np.arange(-9, 7, 0.2),
nr_neg=np.arange(-15, 7, 0.2),
nr_pos=np.arange(-9, 7, 0.2))
ticks = dict(qr=np.arange(-18, -4, 3),
nc=np.arange(-8, 7, 3),
nr_neg=np.arange(-15, 7, 3),
nr_pos=np.arange(-9, 7, 3))
h_out = axes[0, 0].hist2d(test_cam_values.loc[test_cam_labels[f"qrtend_{source}"] == 1, f"qrtend_{source}"],
test_pred_values.loc[test_cam_labels[f"qrtend_{source}"] == 1, f"qrtend_{source}_1"],
bins=bins["qr"], cmin=1, norm=LogNorm(), vmax=vmax, rasterized=True)
cax = fig.add_axes([0.95, 0.15, 0.02, 0.7])
cb = fig.colorbar(h_out[-1], cax=cax)
cb.set_label(label="Frequency", size=16)
_ = axes[0, 1].hist2d(test_cam_values.loc[test_cam_labels[f"nctend_{source}"] == 1, f"nctend_{source}"],
                      test_pred_values.loc[test_cam_labels[f"nctend_{source}"] == 1, f"nctend_{source}_1"],
                      bins=bins["nc"], cmin=1, norm=LogNorm(), vmax=vmax, rasterized=True)
_ = axes[0, 2].hist2d(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == -1, f"nrtend_{source}"],
                      test_pred_values.loc[test_cam_labels[f"nrtend_{source}"] == -1, f"nrtend_{source}_-1"],
                      bins=bins["nr_neg"], cmin=1, norm=LogNorm(), vmax=vmax, rasterized=True)
_ = axes[0, 3].hist2d(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == 1, f"nrtend_{source}"],
                      test_pred_values.loc[test_cam_labels[f"nrtend_{source}"] == 1, f"nrtend_{source}_1"],
                      bins=bins["nr_pos"], cmin=1, norm=LogNorm(), vmax=vmax, rasterized=True)
_ = axes[1, 0].hist2d(test_cam_values.loc[test_cam_labels[f"qrtend_{source}"] == 1, f"qrtend_{source}"],
mg2_test.loc[test_cam_labels[f"qrtend_{source}"] == 1, "qrtend_MG2"],
bins=bins["qr"], cmin=1, norm=LogNorm(), vmax=vmax, rasterized=True)
_ = axes[1, 1].hist2d(test_cam_values.loc[test_cam_labels[f"nctend_{source}"] == 1, f"nctend_{source}"],
                      mg2_test.loc[test_cam_labels[f"nctend_{source}"] == 1, "nctend_MG2"],
                      bins=bins["nc"], cmin=1, norm=LogNorm(), vmax=vmax, rasterized=True)
_ = axes[1, 2].hist2d(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == -1, f"nrtend_{source}"],
                      mg2_test.loc[test_cam_labels[f"nrtend_{source}"] == -1, "nrtend_MG2"],
                      bins=bins["nr_neg"], cmin=1, norm=LogNorm(), vmax=vmax, rasterized=True)
_ = axes[1, 3].hist2d(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == 1, f"nrtend_{source}"],
                      mg2_test.loc[test_cam_labels[f"nrtend_{source}"] == 1, "nrtend_MG2"],
                      bins=bins["nr_pos"], cmin=1, norm=LogNorm(), vmax=vmax, rasterized=True)
for a, out in enumerate(["qr", "nc", "nr_neg", "nr_pos"]):
axes[0, a].plot(bins[out], bins[out], 'k--', lw=0.5)
axes[1, a].plot(bins[out], bins[out], 'k--', lw=0.5)
for i in range(2):
for j, out in enumerate(["qr", "nc", "nr_neg", "nr_pos"]):
axes[i, j].set_xticks(ticks[out])
axes[i, j].set_yticks(ticks[out])
axes[i, 0].set_ylabel(hist2d_ylabels[i], fontsize=16)
for j, out in enumerate(["qr", "nc", "nr_neg", "nr_pos"]):
axes[1, j].set_xlabel(f"{source}_noL2 Bin", fontsize=16)
axes[0, j].set_title(x_labels[j], fontsize=16)
axes[0, j].text(bins[out][int(bins[out].size * 0.45)],
bins[out][int(bins[out].size * 0.1)], f"$R^2=${reg_scores.iloc[j, 2]:0.3f}", fontsize=14, bbox=dict(facecolor='white', alpha=0.8))
axes[1, j].text(bins[out][int(bins[out].size * 0.45)],
bins[out][int(bins[out].size * 0.1)], f"$R^2=${mg2_scores.iloc[j, 2]:0.3f}", fontsize=14, bbox=dict(facecolor='white', alpha=0.8))
plt.savefig(f"{source}_noL2/tendency_hist2d.pdf", bbox_inches="tight")
```
# Figure 3
Pre-calculate the histograms for each tendency.
```
bins = dict(qr=np.arange(-18, -4.5, 0.5),
nc=np.arange(-8, 7, 0.5),
nr_neg=np.arange(-15, 7, 0.5),
nr_pos=np.arange(-9, 7, 0.5))
hists = {}
out_types = ["qr", "nc", "nr_neg", "nr_pos"]
pred_types = ["em", "tau", "mg2"]
col_ext = dict(em=f"{source}_1", tau=f"{source}", mg2="MG2")
pred_dict = dict(em=test_pred_values, tau=test_cam_values, mg2=mg2_test)
for o in out_types:
hists[o] = {}
if o == "nr_neg":
indices = test_cam_labels[f"nrtend_{source}"] == -1
else:
indices = test_cam_labels[o[:2] + f"tend_{source}"] == 1
for ptype in pred_types:
print(o, ptype)
if o == "nr_neg" and ptype == "em":
end_label = f"tend_{source}_-1"
else:
end_label = "tend_" + col_ext[ptype]
hists[o][ptype], _ = np.histogram(pred_dict[ptype].loc[indices, o[:2] + end_label], bins[o], density=True)
fig, axes = plt.subplots(2, 4, figsize=(12.5, 8), dpi=300, sharey=False)
plt.subplots_adjust(0.05, 0.15, 0.95, 0.95, wspace=0.3)
ticks = dict(qr=np.arange(-18, -4, 3),
nc=np.arange(-8, 7, 3),
nr_neg=np.arange(-15, 7, 3),
nr_pos=np.arange(-9, 7, 3))
y_labels = [f"{source} Emulator", f"{source} Bin", "MG2 Bulk"]
x_labels = [r"$\log_{10}(dq_r/dt)$", r"$\log_{10}(dN_c/dt)$",
            r"$\log_{10}(dN_r/dt < 0)$", r"$\log_{10}(dN_r/dt > 0)$"]
colors = {"em": "royalblue", "tau": "purple", "mg2": "orange"}
plot_objs = []
for a, ax in enumerate(axes[0]):
o = out_types[a]
for p in pred_types:
plot_objs.extend(ax.step(bins[o][:-1], hists[o][p], lw=2, color=colors[p]))
for a, ax in enumerate(axes[1]):
o = out_types[a]
for p in ["em"]:
ax.step(bins[o][:-1], hists[o][p] - hists[o][f"tau"], lw=2, color=colors[p])
for j, out in enumerate(["qr", "nc", "nr_neg", "nr_pos"]):
axes[0, j].set_xticks(ticks[out])
axes[1, j].set_xticks(ticks[out])
axes[0, 0].set_ylabel("Density", fontsize=14)
axes[1, 0].set_ylabel(f"Difference from {source}", fontsize=14)
letters = ["A) Rain Mass ", "B) Cloud Number ", "C) -Rain Number ", "D) +Rain Number "]
for j in range(4):
axes[0, j].set_title(letters[j] + x_labels[j], fontsize=12)
axes[1, 1].legend(plot_objs[:3], y_labels, loc='lower left', bbox_to_anchor=(0, -0.25), ncol=3, fontsize=14, frameon=False)
plt.savefig(f"{source}_noL2/tendency_hist_diff_em_only.pdf", bbox_inches="tight")
```
# Table 3
Code to generate residual summary values for table 3.
```
bins_err = np.arange(-10, 10, 0.01)
error_hists = {}
residuals = {}
out_types = ["qr", "nc", "nr_neg", "nr_pos"]
pred_types = ["em", f"{source}", "mg2"]
col_ext = dict(em=f"{source}_1", tau=f"{source}", mg2="MG2")
pred_dict = dict(em=test_pred_values, tau=test_cam_values, mg2=mg2_test)
for o in out_types:
print(o)
error_hists[o] = {}
if o == "nr_neg":
indices = test_cam_labels[f"nrtend_{source}"] == -1
else:
indices = test_cam_labels[o[:2] + f"tend_{source}"] == 1
if o == "nr_neg":
end_label = f"tend_{source}_-1"
else:
end_label = "tend_" + col_ext["em"]
end_label_true = "tend_" + col_ext["tau"]
residuals[o] = pred_dict["em"].loc[indices, o[:2] + end_label] - pred_dict[f"tau"].loc[indices, o[:2] + end_label_true]
error_hists[o], _ = np.histogram(residuals[o],
bins_err, density=False)
plt.figure(figsize=(10, 6))
for i in range(4):
plt.subplot(2, 2, i+1)
plt.plot(bins_err[bins_err>0][:-1], error_hists[out_types[i]][bins_err[:-1] >= 0] / np.sum(error_hists[out_types[i]]), color="r", label="+")
plt.plot(-bins_err[bins_err<0], error_hists[out_types[i]][bins_err[:-1] < 0] / np.sum(error_hists[out_types[i]]), color="b", label="-")
plt.xlim(0, 3)
plt.legend()
plt.title(out_types[i])
plt.gca().set_yscale("log")
plt.suptitle(f"ANN Residual Distributions (ML - {source})")
plt.savefig(f"{source}_noL2/ann_residual_dist.png", dpi=200, bbox_inches="tight")
err_mag = pd.DataFrame(0, index=out_types, columns=[-10, -2, 2, 10])
b_values = [-1, -0.3, 0, 0.3, 1]
for o in out_types:
for b in range(len(b_values)-1):
err_mag.loc[o, err_mag.columns[b]] = np.count_nonzero((residuals[o] >= b_values[b]) & (residuals[o] < b_values[b + 1])) / residuals[o].size
err_mag
err_mag.to_csv("ml_tau_residual_factors.csv", index_label="tendency")
```
# False Positive and False Negative Grids
```
time_hours = np.round(meta_test["time"].unique() * 24).astype(int)
print(time_hours.size)
all_time_hours = np.round(meta_test["time"] * 24).astype(int)
fp_grid = np.zeros((32, 192, 288), dtype=int)
fn_grid = np.zeros((32, 192, 288), dtype=int)
fp = (test_pred_labels[f"qrtend_{source}"] == 1) & (test_cam_labels[f"qrtend_{source}"] == 0)
fn = (test_pred_labels[f"qrtend_{source}"] == 0) & (test_cam_labels[f"qrtend_{source}"] == 1)
for th in time_hours:
print(th)
tfp = (all_time_hours == th) & (fp)
tfn = (all_time_hours == th) & (fn)
fp_grid[meta_test.loc[tfp, "depth"], meta_test.loc[tfp,"row"], meta_test.loc[tfp, "col"]] += 1
fn_grid[meta_test.loc[tfn, "depth"], meta_test.loc[tfn,"row"], meta_test.loc[tfn, "col"]] += 1
import xarray as xr
cam_path = "/glade/p/cisl/aiml/dgagne/SD_long_2000_50"
ds_cam = xr.open_mfdataset(join(cam_path, "SD_long_2000_50.cam.h0.*.nc"))
np.arange(32)[-12:].size
plt.figure(figsize=(10, 4))
ax = plt.axes(projection=ccrs.PlateCarree())
plt.pcolormesh(ds_cam["lon"], ds_cam["lat"], fp_grid[-12:].sum(axis=0) / (344 * 12) * 100, cmap="Reds")
plt.colorbar()
ax.coastlines()
ax.set_title("d$q_r$/dt Classifier False Positive Relative Frequency (%)")
plt.savefig(f"{source}_noL2/qr_classifier_fp_map.png", dpi=200, bbox_inches="tight")
plt.figure(figsize=(10, 4))
ax = plt.axes(projection=ccrs.PlateCarree())
plt.pcolormesh(ds_cam["lon"], ds_cam["lat"], fn_grid[-12:].sum(axis=0) / (344 * 12) * 100, cmap="Purples")
plt.colorbar()
ax.coastlines()
ax.set_title("d$q_r$/dt Classifier False Negative Relative Frequency (%)")
plt.savefig(f"{source}_noL2/qr_classifier_fn_map.png", dpi=200, bbox_inches="tight")
```
# Old version of Histogram Code
```
fig, axes = plt.subplots(3, 4, figsize=(14, 8), sharey=True)
bins = dict(qr=np.arange(-18, -4.5, 0.5),
nc=np.arange(-8, 7, 0.5),
nr_neg=np.arange(-15, 7, 0.5),
nr_pos=np.arange(-9, 7, 0.5))
ticks = dict(qr=np.arange(-18, -4, 3),
nc=np.arange(-8, 7, 3),
nr_neg=np.arange(-15, 7, 3),
nr_pos=np.arange(-9, 7, 3))
y_labels = [f"{source} Emulator", f"{source} Bin", "MG2 Bulk"]
x_labels = [r"$\log_{10}(dq_r/dt)$", r"$\log_{10}(dN_c/dt)$",
            r"$\log_{10}(dN_r/dt < 0)$", r"$\log_{10}(dN_r/dt > 0)$"]
axes[0, 0].hist(test_pred_values.loc[test_cam_labels[f"qrtend_{source}"] == 1, f"qrtend_{source}_1"], bins=bins["qr"], color="purple")
axes[1, 0].hist(test_cam_values.loc[test_cam_labels[f"qrtend_{source}"] == 1, f"qrtend_{source}"], bins=bins["qr"], color="blue")
axes[2, 0].hist(mg2_test.loc[test_cam_labels[f"qrtend_{source}"] == 1, "qrtend_MG2"],
bins=bins["qr"], color="green")
axes[0, 0].hist(test_cam_values.loc[test_cam_labels[f"qrtend_{source}"] == 1, f"qrtend_{source}"],
bins=bins["qr"], color="blue", histtype="step", lw=2)
axes[2, 0].hist(test_cam_values.loc[test_cam_labels[f"qrtend_{source}"] == 1, f"qrtend_{source}"],
bins=bins["qr"], color="blue", histtype="step", lw=2)
axes[0, 0].set_yscale("log")
axes[0, 1].hist(test_pred_values.loc[test_cam_labels[f"nctend_{source}"] == 1, f"nctend_{source}_1"], bins=bins["nc"], color="purple")
axes[1, 1].hist(test_cam_values.loc[test_cam_labels[f"nctend_{source}"] == 1, f"nctend_{source}"], bins=bins["nc"], color="blue")
axes[2, 1].hist(mg2_test.loc[test_cam_labels[f"nctend_{source}"] == 1, "nctend_MG2"],
bins=bins["nc"], color="green")
axes[0, 1].hist(test_cam_values.loc[test_cam_labels[f"nctend_{source}"] == 1, f"nctend_{source}"],
bins=bins["nc"], color="blue", histtype="step", lw=2)
axes[2, 1].hist(test_cam_values.loc[test_cam_labels[f"nctend_{source}"] == 1, f"nctend_{source}"],
bins=bins["nc"], color="blue", histtype="step", lw=2)
axes[0, 2].hist(test_pred_values.loc[test_cam_labels[f"nrtend_{source}"] == -1, f"nrtend_{source}_-1"], bins=bins["nr_neg"], color="purple")
axes[1, 2].hist(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == -1, f"nrtend_{source}"], bins=bins["nr_neg"], color="blue")
axes[2, 2].hist(mg2_test.loc[test_cam_labels[f"nrtend_{source}"] == -1, "nrtend_MG2"],
bins=bins["nr_neg"], color="green")
axes[0, 2].hist(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == -1, f"nrtend_{source}"],
bins=bins["nr_neg"], color="blue", histtype="step", lw=2)
axes[2, 2].hist(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == -1, f"nrtend_{source}"],
bins=bins["nr_neg"], color="blue", histtype="step", lw=2)
axes[0, 3].hist(test_pred_values.loc[test_cam_labels[f"nrtend_{source}"] == 1, f"nrtend_{source}_1"], bins=bins["nr_pos"], color="purple")
axes[1, 3].hist(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == 1, f"nrtend_{source}"], bins=bins["nr_pos"], color="blue")
axes[2, 3].hist(mg2_test.loc[test_cam_labels[f"nrtend_{source}"] == 1, "nrtend_MG2"],
bins=bins["nr_pos"], color="green")
axes[0, 3].hist(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == 1, f"nrtend_{source}"],
bins=bins["nr_pos"], color="blue", histtype="step", lw=2)
axes[2, 3].hist(test_cam_values.loc[test_cam_labels[f"nrtend_{source}"] == 1, f"nrtend_{source}"],
bins=bins["nr_pos"], color="blue", histtype="step", lw=2)
for i in range(3):
for j, out in enumerate(["qr", "nc", "nr_neg", "nr_pos"]):
axes[i, j].set_xticks(ticks[out])
axes[i, 0].set_ylabel(y_labels[i], fontsize=16)
x_pos = [-15, -7, -11, -7]
for j in range(4):
axes[0, j].text(x_pos[j], 50, f"H={reg_scores.iloc[j, 3]:1.1e}", fontsize=16, color='w')
axes[2, j].text(x_pos[j], 50, f"H={mg2_scores.iloc[j, 3]:1.1e}", fontsize=16, color='w')
axes[2, j].set_xlabel(x_labels[j], fontsize=16)
plt.savefig(f"{source}_noL2/tendency_hist.pdf", bbox_inches="tight")
```
# Preprocessing
MolecularGraph.jl version: 0.10.0
Chemical structure data may contain inconsistencies in molecular graph notation that arise from differences between input data formats. In addition, the preferred molecular graph model depends on the application. `MolecularGraph.jl` offers preprocessing methods that help unify these representations and build a consistent molecular graph model of your choice.
- Kekulization
- Implicit/Explicit hydrogen
- Dealing with stereochemistry
- Unifying representation of charges and delocalized electrons
- Extract largest molecular graph
```
using Pkg
Pkg.activate("..")
using MolecularGraph
```
## Kekulization
SMILES allows aromatic bonds to be written without a specific bond order. Although this is an intrinsic expression of aromaticity, it is sometimes inconvenient when we need apparent atom valences to calculate molecular properties. Kekulization is a well-known technique for assigning alternating double bonds along aromatic rings.
In fact, `smilestomol(mol)` performs kekulization internally via `kekulize!(mol)`, so usually we do not have to care about it. Here we use `parse(SMILES, mol)`, which just parses the SMILES string, to see how `kekulize!` works.
`kekulize!` converts aromatic bonds into alternating single/double bonds.
```
mol = parse(SMILES, "o1ccc2c1cncn2")
molsvg = drawsvg(mol, 150, 150)
# Note: aromatic bond notation (e.g. dashed cycle path or circle in the ring) is not supported yet
display("image/svg+xml", molsvg)
mol = parse(SMILES, "o1ccc2c1cncn2")
kekulize!(mol)
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
```
Be careful not to omit the pyrrole hydrogen. It is not obvious whether nitrogens in aromatic rings are pyrrole-like (-NH-) or pyridine-like (=N-).
Some cheminformatics toolkits may parse a pyrrole N without H even though it is grammatically wrong. In this case, MolecularGraph.jl throws an ErrorException.
```
mol = smilestomol("n1cccc1")
mol = smilestomol("[nH]1cccc1")
mol = removehydrogens(mol)
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
```
## Implicit/explicit hydrogens
Molecules parsed from `SDFile` or `SMILES` may contain explicit hydrogen atom nodes. These hydrogen nodes can be removed with `removehydrogens`.
```
mol = smilestomol("[CH3][CH2]C(=O)[OH]")
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
mol = removehydrogens(mol)
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
```
`removehydrogens(mol, all=true)` (the default) removes all hydrogen nodes, whereas `all=false` removes only trivial hydrogens (those attached to organic atoms, carrying no charge, and not involved in stereochemistry).
```
mol = smilestomol("O=C([OH])[C@H]([NH2])[CH3].[AlH3]")
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
mol = removehydrogens(mol, all=false)
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
mol = removehydrogens(mol) # all=true
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
```
## Dealing with stereochemistry
One more thing `smilestomol` and `sdftomol` do automatically by default is standardizing the stereochemistry notation.
`setstereocenter!` stores stereocenter information in `Atom.stereo`, using the same rule as SMILES notation (looking from the lowest-indexed atom, whether the 2nd, 3rd and 4th atoms are in clockwise or anticlockwise order). The possible values are `:clockwise`, `:anticlockwise`, `:unspecified` or `:atypical`. If an implicit hydrogen is involved in the stereochemistry, its index-order priority is treated as the same as that of the atom it is attached to. `setstereocenter!` is called inside the `sdftomol` method.
`setdiastereo!` stores diastereomerism information in `Bond.stereo` for double bonds. `:cis`, `:trans` or `:unspecified` is set according to the cis/trans configuration of the atom nodes on each side of the double bond (if there are two atom nodes on one side, the lower-indexed one is used). `setdiastereo!` is called inside the `sdftomol` and `smilestomol` methods.
```
mol = smilestomol("C\\C(CO)=C/C=C/C")
setdiastereo!(mol)
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
```
Hydrogen nodes attached to stereocenters can be removed with `removestereohydrogens` while keeping the stereochemistry. Similarly, explicit hydrogen nodes can be attached to stereocenters with `addstereohydrogens`. As a newly added hydrogen has no coordinate information, the `forcecoordgen=true` option is necessary for `drawsvg` to recalculate the 2D coordinates of all atoms.
Note that `removehydrogens(mol, all=true)` can break the stereochemistry of the molecule. Make sure to call `removestereohydrogens` before it if you want to work with stereochemistry.
```
mol = sdftomol(split("""
5 4 0 0 0 0 0 0 0 0999 V2000
-4.1517 0.8937 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.7055 1.6866 0.0000 H 0 0 0 0 0 0 0 0 0 0 0 0
-4.4194 1.5526 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
-3.3812 0.3805 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0
-4.8022 0.3917 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
1 3 1 1 0 0 0
1 2 1 6 0 0 0
1 4 1 0 0 0 0
1 5 1 0 0 0 0
M END""", "\n"))
molsvg = drawsvg(mol, 150, 150)
display("image/svg+xml", molsvg)
mol = removestereohydrogens(mol)
molsvg = drawsvg(mol, 150, 150; forcecoordgen=true)
display("image/svg+xml", molsvg)
mol = addstereohydrogens(mol)
molsvg = drawsvg(mol, 150, 150; forcecoordgen=true)
display("image/svg+xml", molsvg)
```
## Unifying representation of charges and delocalized electrons
Many oxoacids and oniums that are ionized under physiological conditions appear with varying charge states in real chemical structure data (e.g. free acid or salt). In practice, such molecules are often normalized to the uncharged form for consistency. `protonateacids!` and `deprotonateoniums!` are convenient methods for this purpose.
```
mol = smilestomol("CCCC(=O)[O-].[N+]CCCC[N+]")
molsvg = drawsvg(mol, 200, 200)
display("image/svg+xml", molsvg)
protonateacids!(mol)
deprotonateoniums!(mol)
molsvg = drawsvg(mol, 200, 200)
display("image/svg+xml", molsvg)
```
Resonance structures that are often used to describe electron delocalization can cause variation in molecular graph representation. `depolarize!` and `toallenelike!` may work well for unifying the representation.
```
mol = smilestomol("C[C+]([O-])C.C[N-][N+]#N")
molsvg = drawsvg(mol, 200, 200)
display("image/svg+xml", molsvg)
depolarize!(mol)
toallenelike!(mol)
molsvg = drawsvg(mol, 200, 200)
display("image/svg+xml", molsvg)
```
## Extract the largest molecular graph
Chemical structure `SDFile`s provided by reagent vendors often contain additives, water molecules and inorganic salts. In most cases the main component has the largest molecular graph, so `extractlargestcomponent` can be used to remove the unnecessary components. (Note that this method should be applied carefully, because an additive may itself have a very large molecular graph. Functional group analysis provides a more meaningful desaltation and dehydration workflow.)
```
mol = smilestomol("[O-]C(=O)CN(CCN(CC([O-])=O)CC([O-])=O)CC([O-])=O.[Na+].[Na+].[Na+].[Na+]")
molsvg = drawsvg(mol, 200, 200)
display("image/svg+xml", molsvg)
mol = extractlargestcomponent(mol)
molsvg = drawsvg(mol, 200, 200)
display("image/svg+xml", molsvg)
```
# TensorFlow 2.0 Tutorial - Text Classification
We will build a simple text classifier and train and test it on the IMDB dataset
```
from __future__ import absolute_import, division, print_function
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
```
## 1. The IMDB dataset
Download the data:
```
imdb=keras.datasets.imdb
(train_x, train_y), (test_x, text_y)=keras.datasets.imdb.load_data(num_words=10000)
```
Explore the IMDB data:
```
print("Training entries: {}, labels: {}".format(len(train_x), len(train_y)))
print(train_x[0])
print('len: ',len(train_x[0]), len(train_x[1]))
```
Create dictionaries mapping between word ids and words:
```
word_index = imdb.get_word_index()
word2id = {k:(v+3) for k, v in word_index.items()}
word2id['<PAD>'] = 0
word2id['<START>'] = 1
word2id['<UNK>'] = 2
word2id['<UNUSED>'] = 3
id2word = {v:k for k, v in word2id.items()}
def get_words(sent_ids):
return ' '.join([id2word.get(i, '?') for i in sent_ids])
sent = get_words(train_x[0])
print(sent)
```
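The id-to-word mapping above can be exercised with a toy vocabulary; the names `toy_word_index`, `encode` and `decode` below are illustrative stand-ins, not part of the Keras API:

```python
# Toy stand-in for imdb.get_word_index(); illustrative only
toy_word_index = {'great': 1, 'movie': 2, 'boring': 3}

# Shift ids by 3 to reserve slots for the special tokens, as above
word2id = {w: i + 3 for w, i in toy_word_index.items()}
word2id['<PAD>'], word2id['<START>'], word2id['<UNK>'], word2id['<UNUSED>'] = 0, 1, 2, 3
id2word = {i: w for w, i in word2id.items()}

def encode(words):
    # Words outside the vocabulary map to the <UNK> id
    return [word2id.get(w, word2id['<UNK>']) for w in words]

def decode(ids):
    return ' '.join(id2word.get(i, '?') for i in ids)

ids = encode(['great', 'movie', 'unseen'])
print(ids)          # [4, 5, 2]
print(decode(ids))  # great movie <UNK>
```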
## 2. Prepare the data
```
# Pad the sentences at the end (post-padding)
train_x = keras.preprocessing.sequence.pad_sequences(
train_x, value=word2id['<PAD>'],
padding='post', maxlen=256
)
test_x = keras.preprocessing.sequence.pad_sequences(
test_x, value=word2id['<PAD>'],
padding='post', maxlen=256
)
print(train_x[0])
print('len: ',len(train_x[0]), len(train_x[1]))
```
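What `pad_sequences(..., padding='post', maxlen=256)` does can be sketched in plain Python; `pad_post` is an illustrative helper, not a Keras function (note that Keras truncates from the front by default):

```python
def pad_post(seq, maxlen, value=0):
    seq = seq[-maxlen:]  # truncate from the front, matching Keras' truncating='pre' default
    return seq + [value] * (maxlen - len(seq))  # pad with `value` at the end

print(pad_post([1, 2, 3], 5))           # [1, 2, 3, 0, 0]
print(pad_post([1, 2, 3, 4, 5, 6], 5))  # [2, 3, 4, 5, 6]
```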
## 3. Build the model
```
import tensorflow.keras.layers as layers
vocab_size = 10000
model = keras.Sequential()
model.add(layers.Embedding(vocab_size, 16))
model.add(layers.GlobalAveragePooling1D())
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
```
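`GlobalAveragePooling1D` simply averages the embedding vectors over the sequence (time) axis, reducing `(batch, seq_len, embed_dim)` to `(batch, embed_dim)`. A NumPy sketch of the same reduction:

```python
import numpy as np

# 2 sentences, 4 tokens each, 3-dimensional embeddings
batch = np.arange(24, dtype=float).reshape(2, 4, 3)
pooled = batch.mean(axis=1)  # average over the token axis

print(pooled.shape)  # (2, 3)
print(pooled[0])     # [4.5 5.5 6.5]
```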
## 4. Train and validate the model
```
x_val = train_x[:10000]
x_train = train_x[10000:]
y_val = train_y[:10000]
y_train = train_y[10000:]
history = model.fit(x_train,y_train,
epochs=40, batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
result = model.evaluate(test_x, text_y)
print(result)
```
## 5. Plot the loss and accuracy curves
```
import matplotlib.pyplot as plt
history_dict = history.history
history_dict.keys()
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc)+1)
plt.plot(epochs, loss, 'bo', label='train loss')
plt.plot(epochs, val_loss, 'b', label='val loss')
plt.title('Train and val loss')
plt.xlabel('Epochs')
plt.ylabel('loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Basic classification: Classify images of clothing
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This guide trains a neural network model to classify images of clothing, like sneakers and shirts. It's okay if you don't understand all the details; this is a fast-paced overview of a complete TensorFlow program with the details explained as you go.
This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow.
```
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
```
## Import the Fashion MNIST dataset
This guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
<table>
<tr><td>
<img src="https://tensorflow.org/images/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing you'll use here.
This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code.
Here, 60,000 images are used to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. Import and load the Fashion MNIST data directly from TensorFlow:
```
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```
Loading the dataset returns four NumPy arrays:
* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.
* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.
The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:
<table>
<tr>
<th>Label</th>
<th>Class</th>
</tr>
<tr>
<td>0</td>
<td>T-shirt/top</td>
</tr>
<tr>
<td>1</td>
<td>Trouser</td>
</tr>
<tr>
<td>2</td>
<td>Pullover</td>
</tr>
<tr>
<td>3</td>
<td>Dress</td>
</tr>
<tr>
<td>4</td>
<td>Coat</td>
</tr>
<tr>
<td>5</td>
<td>Sandal</td>
</tr>
<tr>
<td>6</td>
<td>Shirt</td>
</tr>
<tr>
<td>7</td>
<td>Sneaker</td>
</tr>
<tr>
<td>8</td>
<td>Bag</td>
</tr>
<tr>
<td>9</td>
<td>Ankle boot</td>
</tr>
</table>
Each image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:
```
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Explore the data
Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:
```
train_images.shape
```
Likewise, there are 60,000 labels in the training set:
```
len(train_labels)
```
Each label is an integer between 0 and 9:
```
train_labels
```
There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:
```
test_images.shape
```
And the test set contains 10,000 image labels:
```
len(test_labels)
```
## Preprocess the data
The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:
```
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
```
Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It's important that the *training set* and the *testing set* be preprocessed in the same way:
```
train_images = train_images / 255.0
test_images = test_images / 255.0
```
To verify that the data is in the correct format and that you're ready to build and train the network, let's display the first 25 images from the *training set* and display the class name below each image.
```
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
```
## Build the model
Building the neural network requires configuring the layers of the model, then compiling the model.
### Set up the layers
The basic building block of a neural network is the *layer*. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, such as `tf.keras.layers.Dense`, have parameters that are learned during training.
```
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
```
The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely connected, or fully connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer returns a logits array with length of 10. Each node contains a score that indicates the current image belongs to one of the 10 classes.
### Compile the model
Before the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:
* *Loss function* —This measures how accurate the model is during training. You want to minimize this function to "steer" the model in the right direction.
* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.
* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.
```
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
## Train the model
Training the neural network model requires the following steps:
1. Feed the training data to the model. In this example, the training data is in the `train_images` and `train_labels` arrays.
2. The model learns to associate images and labels.
3. You ask the model to make predictions about a test set—in this example, the `test_images` array.
4. Verify that the predictions match the labels from the `test_labels` array.
### Feed the model
To start training, call the `model.fit` method—so called because it "fits" the model to the training data:
```
model.fit(train_images, train_labels, epochs=10)
```
As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.91 (or 91%) on the training data.
### Evaluate accuracy
Next, compare how the model performs on the test dataset:
```
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
```
It turns out that the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy represents *overfitting*. Overfitting happens when a machine learning model performs worse on new, previously unseen inputs than it does on the training data. An overfitted model "memorizes" the noise and details in the training dataset to a point where it negatively impacts the performance of the model on the new data. For more information, see the following:
* [Demonstrate overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit#demonstrate_overfitting)
* [Strategies to prevent overfitting](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit#strategies_to_prevent_overfitting)
### Make predictions
With the model trained, you can use it to make predictions about some images.
The model outputs [logits](https://developers.google.com/machine-learning/glossary#logits), its linear, unnormalized scores. Attach a softmax layer to convert the logits to probabilities, which are easier to interpret.
```
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
```
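The softmax conversion can be checked in plain NumPy; this is the standard textbook formulation, not code from this guide:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p.sum())       # ~1.0 -- probabilities sum to one
print(np.argmax(p))  # 0 -- the largest logit keeps the largest probability
```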
Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
```
predictions[0]
```
A prediction is an array of 10 numbers. They represent the model's "confidence" that the image corresponds to each of the 10 different articles of clothing. You can see which label has the highest confidence value:
```
np.argmax(predictions[0])
```
So, the model is most confident that this image is an ankle boot, or `class_names[9]`. Examining the test label shows that this classification is correct:
```
test_labels[0]
```
Graph this to look at the full set of 10 class predictions.
```
def plot_image(i, predictions_array, true_label, img):
true_label, img = true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
true_label = true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
```
### Verify predictions
With the model trained, you can use it to make predictions about some images.
Let's look at the 0th image, predictions, and prediction array. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percentage (out of 100) for the predicted label.
```
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
```
Let's plot several images with their predictions. Note that the model can be wrong even when very confident.
```
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
```
## Use the trained model
Finally, use the trained model to make a prediction about a single image.
```
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
```
`tf.keras` models are optimized to make predictions on a *batch*, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list:
```
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
```
Now predict the correct label for this image:
```
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
```
`keras.Model.predict` returns a list of lists—one list for each image in the batch of data. Grab the predictions for our (only) image in the batch:
```
np.argmax(predictions_single[0])
```
And the model predicts a label as expected.
# A Brief Introduction to NumPy
### "...the fundamental package for scientific computing with Python." - numpy.org
In this notebook, we will cover the basics of NumPy, a package that is the basis for many other libraries in the data science ecosystem. Let's get started.
```
import numpy as np
from IPython.display import Image
import time
from sys import getsizeof
```
# 1. NumPy Arrays
The array data structure is the backbone of the NumPy library. Arrays can be single-dimensional (vectors), two-dimensional (matrices), or multi-dimensional for more complex tasks.
In many ways, they are similar to Python lists.
```
a = ['a', 'b', 'c', 'd', 'e', 'f']
b = np.array(['a', 'b', 'c', 'd', 'e', 'f'])
# Accessible by index
print(b[0])
# Sliceable
print(b[1:3])
# Iterable
for letter in b:
print(letter)
```
So why use NumPy arrays at all? One word: performance! Generally speaking, Python lists take up more space and require more computation than NumPy arrays. Let's take a look at the size differences.
```
n_elements = 1_000_000
# Create using list comprehension
python_list = [x for x in range(n_elements)]
print(getsizeof(python_list))
# Create with existing python list
np_arr = np.array(python_list)
print(getsizeof(np_arr))
```
Now let's look at the speed differences.
```
start = time.process_time()
# Add 100 to every element in the Python list
python_list_mod = [x + 100 for x in python_list]
python_time = time.process_time() - start
print(python_time)
# Add 100 to every element in the Numpy array
start = time.process_time()
np_arr_mod = np_arr + 100
np_time = time.process_time() - start
print(np_time)
Image('python_memory1.png')
```
If NumPy arrays are more efficient both computationally and in terms of memory, why not use them all the time? There are some constraints; most notably, all of an array's items must be of the same type. [NumPy Array](https://numpy.org/doc/stable/reference/generated/numpy.array.html#numpy.array)
```
python_list = [1, 'a', 0.222, 'hello from inside the list!']
np_arr = np.array(python_list)
print(python_list)
print(np_arr)
```
# 1.1 Creating
NumPy arrays are created with existing data (standard python lists or lists of lists) or by using a collection of built-in methods.
## 1.1.1 Existing Data
### .array()
Use python lists (or lists of lists) as input.
```
std_list = [1, 2, 3, 4, 5]
std_matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```
Note: the `.array()` method is a convenience function for constructing objects of the class `ndarray`. While it is possible to call `.ndarray()` directly, the NumPy documentation specifically regards doing so as an anti-pattern.
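For instance, converting the lists defined above into arrays (a sketch reusing the same `std_list`/`std_matrix` names):

```python
import numpy as np

std_list = [1, 2, 3, 4, 5]
std_matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

arr = np.array(std_list)    # 1-d array (vector)
mat = np.array(std_matrix)  # 2-d array (matrix)
print(arr.shape)  # (5,)
print(mat.shape)  # (3, 3)
```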
## 1.1.2 Fixed Values
### .zeros(), .ones()
Return a new array of given shape and type, filled with zeros or ones.
Note: the numbers have periods after them to indicate that these are floating point numbers.
### .full()
Return a new array of given shape and type, filled with fill_value.
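A short sketch of the fixed-value constructors described above:

```python
import numpy as np

print(np.zeros(3))         # [0. 0. 0.] -- note the trailing periods: floats
print(np.ones((2, 2)))     # a 2x2 matrix of ones
print(np.full((2, 3), 7))  # a 2x3 matrix filled with fill_value=7
```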
## 1.1.3 Range
### .arange()
Return evenly spaced values within a given interval. Notice that the output is inclusive of the first number parameter and exclusive of the second.
### .linspace()
Return evenly spaced numbers over a specified interval. Notice that the output is inclusive of both the first and second number parameters.
Note: the main difference between .linspace() and .arange() is that with .linspace() you have precise control over the end value, whereas with .arange() you can specify the increments explicitly.
### .logspace()
Return numbers spaced evenly on a log scale.
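The three range constructors side by side (a minimal sketch):

```python
import numpy as np

print(np.arange(0, 10, 2))    # [0 2 4 6 8] -- stop value is excluded
print(np.linspace(0, 10, 5))  # [ 0.   2.5  5.   7.5 10. ] -- both endpoints included
print(np.logspace(0, 3, 4))   # [   1.   10.  100. 1000.] -- powers of 10
```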
## 1.1.4 Random
### .rand()
Create an array of the given shape and populate it with random samples from a uniform distribution over [0, 1).
### .randn()
Return a sample (or samples) from the “standard normal” distribution.
Note: .rand() is from a uniform distribution, whereas .randn() is from the standard **normal** distribution.
### .randint()
Return random integers from low (inclusive) to high (exclusive).
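A quick sketch of the three random constructors (the sampled values vary from run to run):

```python
import numpy as np

uniform = np.random.rand(2, 3)      # 2x3 samples from the uniform [0, 1) distribution
normal = np.random.randn(2, 3)      # 2x3 samples from the standard normal distribution
ints = np.random.randint(0, 10, 5)  # 5 integers drawn from [0, 10)

print(uniform.shape, normal.shape, ints.shape)
```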
## 1.2 Attributes and Methods
### .shape
Tuple of array dimensions. Note: this is an attribute NOT a method.
### .reshape()
Gives a new shape to an array without changing its data. Note: this does not modify the array in place; it returns a new array (usually a view) and leaves the original's shape unchanged.
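A small sketch showing that `.reshape()` leaves the original array untouched:

```python
import numpy as np

a = np.arange(6)
b = a.reshape(2, 3)  # new 2x3 array; `a` keeps its original shape
print(a.shape)  # (6,)
print(b.shape)  # (2, 3)
```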
### .newaxis
Alternate syntax.
### .dtype
The type of data in the array.
### .astype()
Casts values to a specified type.
Note: in mathematics `i` is used to denote imaginary numbers, but in Python (and many other languages) `j` is used because `i` is conventionally reserved for loop indices.
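A brief sketch of inspecting and casting dtypes:

```python
import numpy as np

arr = np.array([1, 2, 3])
print(arr.dtype)  # an integer dtype (exact name is platform dependent)

floats = arr.astype(np.float64)  # cast to 64-bit floats
print(floats.dtype)  # float64
print(floats)        # [1. 2. 3.]
```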
## 1.3 Indexing
### 1.3.1 One-dimensional
```
arr = np.arange(10)
# Get the value at index 5 (the sixth element)
arr[5]
# Get a slice of the array from index 1 (inclusive) to index 5 (exclusive)
arr[1:5]
# Get a slice of the array from index 4 to the end
arr[4:]
# Get the last element in the array
arr[-1]
# Reverse the array
arr[::-1]
```
### 1.3.2 Two-dimensional
```
arr_2d = np.arange(9).reshape(3, 3)
# Get the first row
arr_2d[0]
# Get the second element of the second row
arr_2d[1][1]
# Alternative syntax
arr_2d[1, 1]
# Get first and second rows
arr_2d[:2]
# Get first element of both first and second rows
arr_2d[:2, 0]
# Maintain shape
arr_2d[:2, :1]
```
### 1.3.3 Fancy Indexing
```
arr_2d = np.arange(9).reshape(3, 3)
# Output of fancy indexing: pass a list of indices to pick arbitrary rows
arr_2d[[0, 2]]
# Notice if the second (column) index is a single value, NumPy broadcasts it across both rows
arr_2d[[0, 2], 1]
```
## 1.4 Selection
We will see this syntax mirrored in the Pandas library.
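A minimal sketch of boolean-mask selection, the syntax Pandas mirrors:

```python
import numpy as np

arr = np.arange(10)
mask = arr > 6            # element-wise comparison yields a boolean array
print(arr[mask])          # [7 8 9]
print(arr[arr % 2 == 0])  # [0 2 4 6 8] -- the condition can be written inline
```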
# 2. Operations
One of the most powerful features of NumPy arrays is that operations are vectorized.
## 2.1 Arithmetic
Arithmetic operations work on NumPy arrays.
### 2.1.1 One-dimensional
```
arr_1d = np.arange(0,11)
# Add five to each item
# Subtract 12 from each item
# Multiply each item by 2
# Divide each item by 4
# Floor division on each item
# Raise each item to the third power
```
### 2.1.2 Two-dimensional
```
arr_2d = np.arange(15).reshape((3,5))
# Multiply each item by 2
arr_2d * 2
# Raise each item to the power of 2
arr_2d ** 2
```
### 2.1.3 Multiple values
These operations work with multiple values.
```
arr_mult = np.array([1,2,3,4,5])
arr_2d * arr_mult
arr_mult_2 = np.array([1,2,3])
# arr_2d * arr_mult_2  # would raise ValueError: shapes (3,5) and (3,) cannot broadcast
```
## 2.2 Ufuncs
Universal functions. For more information, visit: https://docs.scipy.org/doc/numpy/reference/ufuncs.html
```
arr = np.arange(1,11)
```
### .sum()
Sum of array elements over a given axis.
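For example, summing a small matrix along different axes (a sketch):

```python
import numpy as np

arr_2d = np.arange(6).reshape(2, 3)  # [[0 1 2], [3 4 5]]
print(np.sum(arr_2d))      # 15 -- every element
print(arr_2d.sum(axis=0))  # [3 5 7] -- sum down each column
print(arr_2d.sum(axis=1))  # [ 3 12] -- sum across each row
```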
### .sqrt()
Return the non-negative square-root of an array, element-wise.
```
np.sqrt(9)
np.sqrt(arr)
```
### .power()
First array elements raised to powers from second array, element-wise.
```
np.power(3,2)
np.power(arr, 2)
```
### .min(), .max()
```
arr = np.random.randint(0,10, 10)
np.min(arr)
arr.min()
np.max(arr)
arr.max()
```
## 2.3 Broadcasting
The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. (NumPy documentation)
```
arr_one = np.arange(0,20).reshape(4,5)
arr_one + 10
arr_one + np.array([10])
# arr_one + np.array([10, 20])  # raises ValueError: shapes (4,5) and (2,) are incompatible
arr_two = np.array([10,20,30,40,50])
arr_one + arr_two
arr_one.shape
arr_two.shape
arr_two.reshape(5,1)  # returns a new (5, 1) array; arr_two itself is unchanged
arr_one + arr_two
```
Array comparison begins with the trailing dimensions and works its way forward. Two array dimensions are compatible when:
- they are equal, or
- one of them is 1
(NumPy documentation)
https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html
```
import re
import math
#UNK is used for unseen words in training vocabulary
UNK= None
#sentence start and end
sent_start= "<s>"
sent_end = "</s>"
def read_sentences_from_file(path):
    with open(path, "r") as f:
        # line.rstrip("\n") removes the trailing newline;
        # re.split(pattern, s) splits s wherever the whitespace pattern matches
        return [re.split(r"\s+", line.rstrip("\n")) for line in f]
```

```
class unigram_language_model:
def __init__(self, sentences, smoothing=False):
self.unigram_frequencies= dict()
self.corpus_length= 0
for sentence in sentences:
for word in sentence:
#dict.get(key,val) , if the key is not present in the dict this func will assign "val" value to key and if it is already present then it returns the value of the key
self.unigram_frequencies[word]= self.unigram_frequencies.get(word, 0) + 1
if(word != sent_start and word != sent_end):
self.corpus_length+=1
# subtract 2 from the total no. of unique words becoz sent_start and sent_end are also there
self.unique_words= len(self.unigram_frequencies) -2
self.smoothing= smoothing
def calculate_unigram_probability(self, word):
word_probability_numerator= self.unigram_frequencies.get(word, 0)
word_probability_denominator= self.corpus_length
if(self.smoothing):
#add 1 smoothing is used
#1 is added to each word count in numerator and for denominator total unique words length is used
word_probability_numerator += 1
word_probability_denominator += self.unique_words +1 # here 1 is for UNK- unseen word
return float(word_probability_numerator)/float(word_probability_denominator)
def calculate_sentence_probability(self, sentence, normalize_probability=True):
sentence_probability_log_sum=0
for word in sentence:
if(word != sent_start and word !=sent_end):
word_probability= self.calculate_unigram_probability(word)
sentence_probability_log_sum += math.log(word_probability, 2)
return math.pow(2, sentence_probability_log_sum) if(normalize_probability) else sentence_probability_log_sum
def sorted_vocabulary(self):
full_vocab= list(self.unigram_frequencies.keys())
full_vocab.remove(sent_start)
full_vocab.remove(sent_end)
full_vocab.sort()
full_vocab.append(UNK)
full_vocab.append(sent_start)
full_vocab.append(sent_end)
return full_vocab
class bigram_language_model(unigram_language_model):
    def __init__(self, sentences, smoothing=False):
        # build the unigram statistics and smoothing flag via the parent class
        unigram_language_model.__init__(self, sentences, smoothing)
        self.bigram_frequencies= dict()
        self.unique_bigrams= set()
for sentence in sentences:
previous_word= None
for word in sentence:
if(previous_word != None):
self.bigram_frequencies[(previous_word, word)] = self.bigram_frequencies.get((previous_word, word), 0) + 1
if(previous_word != sent_start and word != sent_end):
self.unique_bigrams.add((previous_word, word))
previous_word= word
self.unique_bigram_words= len(self.unigram_frequencies)
def calculate_bigram_probability(self, previous_word, word):
bigram_word_probability_numerator= self.bigram_frequencies.get((previous_word, word), 0)
bigram_word_probability_denominator= self.unigram_frequencies.get(previous_word, 0)
if(self.smoothing):
bigram_word_probability_numerator+=1
bigram_word_probability_denominator+= self.unique_bigram_words
return 0.0 if(bigram_word_probability_numerator==0 or bigram_word_probability_denominator==0) else float(bigram_word_probability_numerator)/float(bigram_word_probability_denominator)
def calculate_bigram_sentence_probability(self, sentence, normalize_probability=True):
bigram_sentence_probability_log_sum= 0
previous_word= None
for word in sentence:
if(previous_word != None):
bigram_sentence_probability = self.calculate_bigram_probability(previous_word, word)
bigram_sentence_probability_log_sum += math.log(bigram_sentence_probability, 2)
previous_word= word
return pow(2, bigram_sentence_probability_log_sum) if(normalize_probability) else bigram_sentence_probability_log_sum
# Now, we calculate number of unigrams and bigrams
def calculate_number_of_unigrams(sentences):
unigram_count=0
for sentence in sentences:
# remove 2 for <s> and </s>
unigram_count += len(sentence) -2
return unigram_count
def calculate_number_of_bigrams(sentences):
bigram_count=0
for sentence in sentences:
#remove 1 for number of bigrams in sentence
bigram_count += len(sentence) -1
return bigram_count
# Calculate perplexity values
def calculate_unigram_perplexity(model, sentences):
unigram_count= calculate_number_of_unigrams(sentences)
sentence_probability_log_sum= 0
for sentence in sentences:
try:
sentence_probability_log_sum -= math.log(model.calculate_sentence_probability(sentence), 2)
except:
sentence_probability_log_sum -= float('-inf')
return math.pow(2, sentence_probability_log_sum/unigram_count)
def calculate_bigram_perplexity(model, sentences):
bigram_count= calculate_number_of_bigrams(sentences)
sentence_probability_log_sum= 0
for sentence in sentences:
try:
sentence_probability_log_sum -= math.log(model.calculate_bigram_sentence_probability(sentence), 2)
except:
sentence_probability_log_sum -= float('-inf')
    return math.pow(2, sentence_probability_log_sum/bigram_count)
# print unigram and bigram probs
def print_unigram_probs(sorted_vocab_keys, model):
for vocab_key in sorted_vocab_keys:
        if vocab_key != sent_start and vocab_key != sent_end:
print("{}: {}".format(vocab_key if vocab_key != UNK else "UNK",
model.calculate_unigram_probability(vocab_key)), end=" ")
print("")
def print_bigram_probs(sorted_vocab_keys, model):
print("\t\t", end="")
for vocab_key in sorted_vocab_keys:
if vocab_key != SENTENCE_START:
print(vocab_key if vocab_key != UNK else "UNK", end="\t\t")
print("")
for vocab_key in sorted_vocab_keys:
if vocab_key != SENTENCE_END:
print(vocab_key if vocab_key != UNK else "UNK", end="\t\t")
for vocab_key_second in sorted_vocab_keys:
if vocab_key_second != SENTENCE_START:
print("{0:.5f}".format(model.calculate_bigram_probabilty(vocab_key, vocab_key_second)), end="\t\t")
print("")
print("")
```


```
if __name__ == "__main__":
    twitter_dataset = read_sentences_from_file("sample.txt")
    twitter_dataset_test = read_sentences_from_file("test_data.txt")
    twitter_dataset_model_unsmoothed = bigram_language_model(twitter_dataset)
    twitter_dataset_model_smoothed = bigram_language_model(twitter_dataset, smoothing=True)
    sorted_vocab_keys = twitter_dataset_model_unsmoothed.sorted_vocabulary()
    print("---------------- Toy dataset ---------------\n")
    print("=== UNIGRAM MODEL ===")
    print("- Unsmoothed -")
    print_unigram_probs(sorted_vocab_keys, twitter_dataset_model_unsmoothed)
    print("\n- Smoothed -")
    print_unigram_probs(sorted_vocab_keys, twitter_dataset_model_smoothed)
    print("")
    print("=== BIGRAM MODEL ===")
    print("- Unsmoothed -")
    print_bigram_probs(sorted_vocab_keys, twitter_dataset_model_unsmoothed)
    print("- Smoothed -")
    print_bigram_probs(sorted_vocab_keys, twitter_dataset_model_smoothed)
    print("")
    print("== SENTENCE PROBABILITIES == ")
    longest_sentence_len = max(len(" ".join(sentence)) for sentence in twitter_dataset_test) + 5
    print("sent", " " * (longest_sentence_len - len("sent") - 2), "uprob\t\tbiprob")
    for sentence in twitter_dataset_test:
        sentence_string = " ".join(sentence)
        print(sentence_string, end=" " * (longest_sentence_len - len(sentence_string)))
        print("{0:.5f}".format(twitter_dataset_model_smoothed.calculate_sentence_probability(sentence)), end="\t\t")
        print("{0:.5f}".format(twitter_dataset_model_smoothed.calculate_bigram_sentence_probability(sentence)))
    print("")
    print("== TEST PERPLEXITY == ")
    print("unigram: ", calculate_unigram_perplexity(twitter_dataset_model_smoothed, twitter_dataset_test))
    print("bigram: ", calculate_bigram_perplexity(twitter_dataset_model_smoothed, twitter_dataset_test))
    print("")
    actual_dataset = read_sentences_from_file("sample.txt")
    actual_dataset_test = read_sentences_from_file("test_data.txt")
    actual_dataset_model_smoothed = bigram_language_model(actual_dataset, smoothing=True)
    print("---------------- Actual dataset ----------------\n")
    print("PERPLEXITY of sample.txt")
    print("unigram: ", calculate_unigram_perplexity(actual_dataset_model_smoothed, actual_dataset))
    print("bigram: ", calculate_bigram_perplexity(actual_dataset_model_smoothed, actual_dataset))
    print("")
    print("PERPLEXITY of test_data.txt")
    print("unigram: ", calculate_unigram_perplexity(actual_dataset_model_smoothed, actual_dataset_test))
    print("bigram: ", calculate_bigram_perplexity(actual_dataset_model_smoothed, actual_dataset_test))
```
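For reference, the perplexity computed above follows the standard definition $PP = 2^{-\frac{1}{N}\sum \log_2 p}$. Here is a minimal, self-contained sketch of that formula (independent of the model class above, using a hypothetical toy distribution):

```python
import math

def perplexity(log2_probs, n):
    """Perplexity from per-sentence log2 probabilities over n tokens."""
    return 2 ** (-sum(log2_probs) / n)

# Toy check: if each of 4 tokens has probability 0.25, the
# perplexity equals 4, the effective branching factor.
log2_probs = [4 * math.log2(0.25)]  # one sentence, four tokens
print(perplexity(log2_probs, 4))    # 4.0
```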
---
# Basic statistical concepts
This notebook is made up of exercises that will help you understand basic statistical concepts.
Statistics helps us answer the questions we want to ask of our data. This year, 2022, we celebrate 90 years of women's suffrage in Brazil: only in 1932 did we women win, after much struggle, the right to vote, and at that time only married women with their husband's authorization, widows, and single women with their own income could vote. Universal suffrage came two years later, in 1934, when all of us could vote, run for office, and be elected. For that reason, we will learn basic statistical concepts using open data from the Tribunal Superior Eleitoral, specifically the data on candidacies for the Recife City Council in 2020. This is the dataset we will pose our questions to, and we will answer them using the concepts we learn throughout this video, applied with the Pandas library.
Contents:
- Measures of central tendency;
- Measures of dispersion;
- Outliers;
- Correlation coefficients.
To begin, let's import the libraries we will use to analyze the data:
```
import pandas as pd
import matplotlib.pyplot as plt
```
Let's import the dataset:
```
df_candidaturas = pd.read_csv('dados/consulta_cand_vereadoras_2020_PE_reduzida.csv')
```
Let's explore the basic information about it:
```
df_candidaturas.info()
```
By default, Pandas displays up to 20 columns; since our dataframe has more than that, let's remove this limit:
```
pd.options.display.max_columns = None
```
Now, let's look at a few sample rows:
```
df_candidaturas.head()
```
## Statistical distribution
To begin, let's plot the statistical distribution of the NR_IDADE_DATA_POSSE column, which holds the age each candidate would have on inauguration day. The curve is built from the number of times each age appears in the column; for example, there are more candidates aged 40 than aged 80.
```
df_candidaturas['NR_IDADE_DATA_POSSE'].plot.density()
```
When a distribution's plot has a single, bell-shaped peak like this, we call it a normal distribution.
## Measures of central tendency
Measures of central tendency determine the central value of a distribution and represent the center of a dataset. The most widely used are the mean, the median, and the mode.
### Mode
The value that appears most often in a dataset; it can also be used with qualitative data. Let's find the mode of the `NR_IDADE_DATA_POSSE` column:
```
df_candidaturas['NR_IDADE_DATA_POSSE'].mode()
```
### Mean
The sum of all values in the dataset divided by the number of values. The mean is strongly affected by outliers, so it is best used on datasets with fairly uniform values. Let's find the mean age of the city-council candidates on inauguration day:
```
df_candidaturas['NR_IDADE_DATA_POSSE'].mean()
```
### Median
The value that occupies the central position of the dataset once the values are sorted in ascending order. In other words, after sorting, the value exactly in the middle of the list is the median: 50% of the values are smaller than it and 50% are larger.
```
df_candidaturas['NR_IDADE_DATA_POSSE'].median()
```
## Measures of dispersion
Measures of dispersion quantify how spread out the values of a distribution are, that is, how much the data differ from one another. The most widely used are the standard deviation and the variance.
### Variance
Indicates how much the dataset deviates from its mean, that is, how far each value is from the mean. The smaller the variance, the closer the values are to the mean.
```
df_candidaturas['NR_IDADE_DATA_POSSE'].var()
```
### Standard deviation
Indicates the degree of dispersion of a dataset, that is, how uniform it is. The smaller the standard deviation, the more homogeneous the data. This measure is the square root of the variance.
```
df_candidaturas['NR_IDADE_DATA_POSSE'].std()
```
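The relationship between the two measures can be checked with Python's standard library (hypothetical values; `statistics.variance` uses the same sample formula as pandas' default `ddof=1`):

```python
import statistics

values = [20, 30, 40, 50, 60]      # hypothetical ages, not TSE data
var = statistics.variance(values)  # sample variance, like pandas' default
std = statistics.stdev(values)     # sample standard deviation
print(var, std)
# the standard deviation is the square root of the variance
assert abs(std - var ** 0.5) < 1e-9
```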
## Outliers
Outliers are values numerically distant from most of the dataset, that is, extreme values. To identify them, we use the interquartile range, which measures how the data spread around the median.
As we saw earlier, the median is the value in the central position of the dataset once the values are sorted in ascending order. After sorting, we can also determine the first and third quartiles and, consequently, the interval between them, called the interquartile range: the difference between the third and first quartiles.
```
primeiro_quartil = df_candidaturas['NR_IDADE_DATA_POSSE'].quantile(0.25)
primeiro_quartil
terceiro_quartil = df_candidaturas['NR_IDADE_DATA_POSSE'].quantile(0.75)
terceiro_quartil
intervalo_interquartil = terceiro_quartil - primeiro_quartil
intervalo_interquartil
```
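The interquartile range is usually combined with the conventional 1.5 × IQR rule to flag outliers. A self-contained sketch with a hypothetical list of ages (not computed from the TSE data):

```python
import statistics

ages = [21, 30, 31, 33, 35, 36, 38, 40, 44, 50, 99]  # hypothetical ages
q1, _, q3 = statistics.quantiles(ages, n=4, method="inclusive")
iqr = q3 - q1
# anything below q1 - 1.5*IQR or above q3 + 1.5*IQR is flagged
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [a for a in ages if a < lower or a > upper]
print(outliers)  # only the extreme value 99 is flagged
```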
## Correlation
Sometimes we want to analyze a variable whose behavior is being influenced in some way by other variables. We call this phenomenon correlation. It can be positive, negative, or null.
To measure correlation we use the "correlation coefficient, a numeric value that indicates the degree and direction of the association between the variables" (Ciência de Dados na Educação Pública, 2021).
## Pearson coefficient
Also known as the linear correlation coefficient, it is used to check the correlation between two quantitative variables. It takes values between -1 and 1: the closer to the extremes, the stronger the correlation between the variables; the closer to zero, the weaker it is, with 0 indicating no correlation at all.
Let's check the correlation coefficient between NR_IDADE_DATA_POSSE and CD_ESTADO_CIVIL, with the hypothesis that age has a strong correlation with candidates' marital status. This is only possible because each marital status is mapped to a numeric code, as described in the data dictionary:
1: SOLTEIRO(A) (single), 3: CASADO(A) (married), 5: VIÚVO(A) (widowed), 7: SEPARADO(A) JUDICIALMENTE (legally separated), 9: DIVORCIADO(A) (divorced)
```
df_candidaturas[['NR_IDADE_DATA_POSSE', 'CD_ESTADO_CIVIL']].corr(method='pearson')
```
We can see that the linear correlation is weak according to the Pearson coefficient. Our hypothesis was therefore not confirmed: we cannot say there is a correlation between the variables analyzed.
I hope this notebook helped you understand some basic statistical concepts. See you soon!
---
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
a = np.random.random(10)
def make_viz(arr):
    def showit(width, height, cast_as, global_bounds=False):
        v = arr.view(cast_as)[:width * height].reshape((width, height))
        plt.clf()
        if global_bounds is True:
            mi = np.iinfo(cast_as).min
            ma = np.iinfo(cast_as).max
        elif global_bounds in ("u1", "u2", "u4", "u8"):
            print("Using ...")
            mi = np.iinfo(global_bounds).min
            ma = np.iinfo(global_bounds).max
            print("Using ...", mi, ma, v.min(), v.max())
        else:
            mi = v.min()
            ma = v.max()
            print(mi, ma)
        plt.imshow(v, vmin=mi, vmax=ma, cmap="viridis")
    return showit
arr = np.arange(256).reshape((16, 16), order="F").astype("uint8")
s = make_viz(arr)
s(16, 16, "uint8")
arr, _ = np.mgrid[0:1:256j, 0:1:256j]
s = make_viz(arr)
arr.size, arr.view("uint64").size
256*256*2
s(1024, 1024, "uint8", "u1")
np.iinfo("u1")
131072/256
arr
val = "390"
arr = np.array(val, dtype="c")
arr.view("uint8")
np.unpackbits(arr[0].view("uint8"))
np.unpackbits(arr[1].view("uint8"))
np.unpackbits(arr[2].view("uint8"))
arr = np.array([390], dtype="f8")
arr
np.unpackbits(arr.view("u1"))
np.array("1.83970e-003", dtype="c").view("uint8")
_.size
a = np.array([390], dtype="f8")
a.view("u1")
np.array([1.83970e-003], dtype="f8").view("uint8")
!head data-readonly/IL_Building_Inventory.csv
import pandas as pd
df = pd.read_csv("data-readonly/IL_Building_Inventory.csv", nrows=100000)
df.columns
df.shape
df2 = df[ (df["Agency Name"] == "University of Illinois") &(df["Address"] == "501 E Daniel")]
print(df2.to_json(lines=True, orient="records"))
[_ for _ in list(df.Address) if "Daniel" in str(_)]
df.shape
plt.style.use("ggplot")
plt.rcParams["xtick.labelsize"] = 20
plt.rcParams["ytick.labelsize"] = 20
x = np.mgrid[0:2*np.pi:64j]
fig = plt.figure(figsize=(15, 12))
plt.plot(x, np.sin(x), '.-')
plt.ylim(-1.1, 1.1)
plt.xlim(-0.1, 2.0*np.pi + 0.1)
fig = plt.figure(figsize=(15, 12))
for i, n in enumerate([1, 2, 8, 16]):
    ax = fig.add_subplot(2, 2, i + 1)
    ax.plot(x[::n], np.sin(x[::n]), '.-')
    ax.set_ylim(-1.1, 1.1)
    ax.set_xlim(-0.1, 2.0*np.pi + 0.1)
    ax.set_title("Step size %s" % n)
fig = plt.figure(figsize=(15, 12))
for i, n in enumerate([1, 2, 8, 16]):
    ind = np.arange(x.size)
    np.random.shuffle(ind)
    ind = ind[:x.size//n]
    ind.sort()
    ax = fig.add_subplot(2, 2, i + 1)
    ax.plot(x[ind], np.sin(x[ind]), '.-')
    ax.set_title("Random %s" % (x.size/n))
    ax.set_ylim(-1.1, 1.1)
    ax.set_xlim(-0.1, 2.0*np.pi + 0.1)
x = np.mgrid[0:2*np.pi:64j]
fig = plt.figure(figsize=(15, 12))
plt.plot(x, np.sin(3*x), '.-')
plt.ylim(-1.1, 1.1)
plt.xlim(-0.1, 2.0*np.pi + 0.1)
fig = plt.figure(figsize=(15, 12))
for i, n in enumerate([1, 2, 8, 16]):
    ax = fig.add_subplot(2, 2, i + 1)
    ax.plot(x[::n], np.sin(3*x[::n]), '.-')
    ax.set_ylim(-1.1, 1.1)
    ax.set_xlim(-0.1, 2.0*np.pi + 0.1)
    ax.set_title("Step size %s" % n)
fig = plt.figure(figsize=(15, 12))
for i, n in enumerate([1, 2, 8, 16]):
    ind = np.arange(x.size)
    np.random.shuffle(ind)
    ind = ind[:x.size//n]
    ind.sort()
    ax = fig.add_subplot(2, 2, i + 1)
    ax.plot(x[ind], np.sin(3*x[ind]), '.-')
    ax.set_title("Random %s" % (x.size/n))
    ax.set_ylim(-1.1, 1.1)
    ax.set_xlim(-0.1, 2.0*np.pi + 0.1)
a = []
while len(a) < 5:
    a.append(input("Hello!"))
```
---
The code below produces a basic boxplot using seaborn's `boxplot()` function. When you look at the graph, it is easy to conclude that the 'C' group has higher values than the others. However, we cannot see the **underlying distribution** of dots in each group, nor the **number of observations** for each.
```
# libraries
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
# Dataset:
a = pd.DataFrame({ 'group' : np.repeat('A',500), 'value': np.random.normal(10, 5, 500) })
b = pd.DataFrame({ 'group' : np.repeat('B',500), 'value': np.random.normal(13, 1.2, 500) })
c = pd.DataFrame({ 'group' : np.repeat('B',500), 'value': np.random.normal(18, 1.2, 500) })
d = pd.DataFrame({ 'group' : np.repeat('C',20), 'value': np.random.normal(25, 4, 20) })
e = pd.DataFrame({ 'group' : np.repeat('D',100), 'value': np.random.uniform(12, size=100) })
df = pd.concat([a, b, c, d, e])  # DataFrame.append was removed in pandas 2.0
# Usual boxplot
sns.boxplot(x='group', y='value', data=df)
plt.show()
```
Let’s look at a few techniques that help avoid this:
## Add Jitter
By adding a stripplot, you can show all observations along with some representation of the underlying distribution.
```
# libraries
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
# Dataset:
a = pd.DataFrame({ 'group' : np.repeat('A',500), 'value': np.random.normal(10, 5, 500) })
b = pd.DataFrame({ 'group' : np.repeat('B',500), 'value': np.random.normal(13, 1.2, 500) })
c = pd.DataFrame({ 'group' : np.repeat('B',500), 'value': np.random.normal(18, 1.2, 500) })
d = pd.DataFrame({ 'group' : np.repeat('C',20), 'value': np.random.normal(25, 4, 20) })
e = pd.DataFrame({ 'group' : np.repeat('D',100), 'value': np.random.uniform(12, size=100) })
df = pd.concat([a, b, c, d, e])  # DataFrame.append was removed in pandas 2.0
# boxplot
ax = sns.boxplot(x='group', y='value', data=df)
# add stripplot
ax = sns.stripplot(x='group', y='value', data=df, color="orange", jitter=0.2, size=2.5)
# add title
plt.title("Boxplot with jitter", loc="left")
# show the graph
plt.show()
```
## Violin Plot
Violin plots are perfect for showing the distribution of the data. Prefer a violin chart over a boxplot when the shape of the distribution matters and you don't want to lose any information.
```
# libraries
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
# Dataset:
a = pd.DataFrame({ 'group' : np.repeat('A',500), 'value': np.random.normal(10, 5, 500) })
b = pd.DataFrame({ 'group' : np.repeat('B',500), 'value': np.random.normal(13, 1.2, 500) })
c = pd.DataFrame({ 'group' : np.repeat('B',500), 'value': np.random.normal(18, 1.2, 500) })
d = pd.DataFrame({ 'group' : np.repeat('C',20), 'value': np.random.normal(25, 4, 20) })
e = pd.DataFrame({ 'group' : np.repeat('D',100), 'value': np.random.uniform(12, size=100) })
df = pd.concat([a, b, c, d, e])  # DataFrame.append was removed in pandas 2.0
# plot violin chart
sns.violinplot( x='group', y='value', data=df)
# add title
plt.title("Violin plot", loc="left")
# show the graph
plt.show()
```
## Show Number of Observations
Another solution is to annotate the boxplot with the number of observations in each group. The following code shows how to do it:
```
# libraries
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
# Dataset:
a = pd.DataFrame({ 'group' : np.repeat('A',500), 'value': np.random.normal(10, 5, 500) })
b = pd.DataFrame({ 'group' : np.repeat('B',500), 'value': np.random.normal(13, 1.2, 500) })
c = pd.DataFrame({ 'group' : np.repeat('B',500), 'value': np.random.normal(18, 1.2, 500) })
d = pd.DataFrame({ 'group' : np.repeat('C',20), 'value': np.random.normal(25, 4, 20) })
e = pd.DataFrame({ 'group' : np.repeat('D',100), 'value': np.random.uniform(12, size=100) })
df = pd.concat([a, b, c, d, e])  # DataFrame.append was removed in pandas 2.0
# Start with a basic boxplot
ax = sns.boxplot(x="group", y="value", data=df)  # keep the Axes: it is used below
# Calculate number of obs per group & median to position labels
medians = df.groupby(['group'])['value'].median().values
nobs = df.groupby("group").size().values
nobs = [str(x) for x in nobs.tolist()]
nobs = ["n: " + i for i in nobs]
# Add it to the plot
pos = range(len(nobs))
for tick,label in zip(pos,ax.get_xticklabels()):
plt.text(pos[tick], medians[tick] + 0.4, nobs[tick], horizontalalignment='center', size='medium', color='w', weight='semibold')
# add title
plt.title("Boxplot with number of observation", loc="left")
# show the graph
plt.show()
```
---
# The atoms of computation
## Introduction
Programming a quantum computer is now something that anyone can do in the comfort of their own home. But what to create? What is a quantum program anyway? In fact, what is a quantum computer?
These questions can be answered by making comparisons to traditional digital computers. Unfortunately, most people don’t actually understand how traditional digital computers work either. On this page, we’ll look at the basic principles behind these traditional devices, and to help us transition over to quantum computing later on, we’ll do it using the same tools we'll use with quantum computers.
## Splitting information into bits
The first thing we need to know about is the idea of _bits_. These are designed to be the world’s simplest alphabet. With only two symbols, 0 and 1, we can represent any piece of information.
One example is numbers. You are probably used to representing a number through a [string](gloss:string) of the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. In this string of digits, each digit represents how many times the number contains a certain [power](gloss:power) of ten. For example, when we write 213, we mean:
$$ 200+10+3 $$
or, expressed in a way that emphasizes the powers of ten
$$ (2×10^2)+(1×10^1)+(3×10^0) $$
Though we usually use this system based on the number 10, we can just as easily use one based on any other number. The binary number system, for example, is based on the number two. This means using the two characters 0 and 1 to express numbers as multiples of powers of two. For example, 213 becomes 11010101, since:
$$
\begin{aligned}
213 = & \phantom{+}(1×2^7)+(1×2^6)+(0×2^5)\\
& +(1×2^4)+(0×2^3)+(1×2^2)\\
& +(0×2^1)+(1×2^0) \\
\end{aligned}
$$
Here we express numbers using multiples of the powers of two (1, 2, 4, 8, 16, 32, etc.) instead of the powers of ten (1, 10, 100, 1000, etc.).
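The expansion above can be checked directly in a couple of lines of Python:

```python
# Convert 213 to binary and back, confirming the expansion above.
print(format(213, "b"))    # '11010101'
print(int("11010101", 2))  # 213

# Summing each bit times its power of two recovers the number.
total = sum(int(bit) * 2**power
            for power, bit in enumerate(reversed("11010101")))
print(total)               # 213
```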
<!-- ::: q-block.binary -->
### Try it
q-binary
<!-- ::: -->
These strings of bits, known as binary strings, can be used to represent more than just numbers. For example, there is a way to represent any text using bits. For any letter, number, or punctuation mark you want to use, you can find a corresponding string of at most eight bits using [this table](https://www.ibm.com/docs/en/aix/7.2?topic=adapters-ascii-decimal-hexadecimal-octal-binary-conversion-table). Though these are quite arbitrary, this is a widely agreed-upon standard. In fact, it's what was used to transmit this article to you through the internet.
This is how all information is represented in conventional computers. Whether numbers, letters, images, or sound, it all exists in the form of binary strings.
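For instance, the ASCII encoding from the table mentioned above can be inspected directly:

```python
# Each character maps to (at most) eight bits via its ASCII code.
for ch in "Hi!":
    print(ch, format(ord(ch), "08b"))
# H 01001000
# i 01101001
# ! 00100001
```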
Like our standard digital computers, quantum computers are based on this same basic idea. The main difference is that they use _qubits,_ an extension of the bit to [quantum mechanics](gloss:quantum-mechanics). In the rest of this textbook, we will explore what qubits are, what they can do, and how they do it. In this section, however, we are not talking about quantum at all. So, we just use qubits as if they were bits.
<!-- ::: q-block.exercise -->
### Quick quiz
<!-- ::: q-quiz(goal="intro-aoc-1") -->
<!-- ::: .question -->
If you have $n$ bits, how many different numbers could you write down?
<!-- ::: -->
<!-- ::: .option -->
1. $n$
<!-- ::: -->
<!-- ::: .option -->
2. $n^2$
<!-- ::: -->
<!-- ::: .option(correct) -->
3. $2^n$
<!-- ::: -->
<!-- ::: -->
<!-- ::: -->
## Circuit diagrams
We saw in the last page that a computation takes some input data and performs operations on this to produce some output data. With the quantum computers we’ll learn about in this textbook, this data will always be in the form of bits. Now we know what bits are, let’s see how we can manipulate them in order to turn the inputs we have into the outputs we need.
It’s often useful to represent this process in a diagram known as a _circuit diagram_. These diagrams have inputs on the left, outputs on the right, and operations represented by arcane symbols in between. These operations are called 'gates', mostly for historical reasons. Here's an example of what a circuit looks like for standard, bit-based computers. You aren't expected to understand what it does. It should simply give you an idea of what these circuits look like.

For quantum computers, we use the same basic idea but have different conventions for how to represent inputs, outputs, and the symbols used for operations. Here is the “quantum circuit” that represents the same process as above.

In the rest of this section, we will explain how to build quantum circuits. At the end, you'll know how to create the circuit above, what it does, and why it's useful.
## Creating circuits with Qiskit
To create a quantum circuit, we will import the <code>QuantumCircuit</code> class, and create a new <code>QuantumCircuit</code> object.
<!-- ::: q-block.reminder -->
### Reminder
<details>
<summary>Python basics (what’s all this about classes and objects?)</summary>
We know we can describe all information using a bunch of bits, which is how computers store and process everything, including quantum circuits! But it’s difficult for us humans to think about how we do this, and how we manipulate those bits to represent the circuits we want.
The <code>QuantumCircuit</code> class is a set of instructions for representing quantum circuits as bits. The line <code>qc = QuantumCircuit(4, 2)</code> in the cell below is a constructor, which tells Python to set aside some bits in your computer that we’ll use to represent a quantum circuit. When we want to refer to this quantum circuit (or rather, the bits that represent this quantum circuit) we’ll use the variable ‘<code>qc</code>’. We say ‘<code>qc</code>’ refers to a "<code>QuantumCircuit</code> object".
This allows us humans to think about quantum circuits at a high, abstract level; we can say things like “add an X-gate” and Qiskit will take care of what we need to do to the bits in our computer to reflect this change.
</details>
<!-- ::: -->
When creating a quantum circuit, we need to tell [Python](gloss:python) how many qubits our circuit should have, and we can optionally also tell it how many classical bits it should have. We need classical bits to store the measurements of our qubits; the reason for this will become clear later in this course.
## Your first quantum circuit
In a circuit, we typically need to do three jobs: First, encode the input, then do some actual computation, and finally extract an output. For your first quantum circuit, we'll focus on the last of these jobs. We start by creating a quantum circuit with 3 qubits and 3 outputs.
```
from qiskit import QuantumCircuit
# Create quantum circuit with 3 qubits and 3 classical bits
# (we'll explain why we need the classical bits later)
qc = QuantumCircuit(3, 3)
qc.draw() # returns a drawing of the circuit
```
Finally, the method <code>qc.draw()</code> creates a drawing of the circuit for us. Jupyter Notebooks evaluate the last line of a code cell and display it below the cell. Since <code>qc.draw()</code> [returns](gloss:return) a drawing, that’s what we’re seeing under the code. There are no gates in our circuit yet, so we just see some horizontal lines.
<!-- ::: q-block.reminder -->
### Reminder
<details>
<summary>Python basics (what’s a method?)</summary>
The <code>QuantumCircuit</code> class is a set of instructions for representing quantum circuits as bits, but when we want to change one of these circuits, we also need to know how to change the bits accordingly. In [Python](gloss:python), objects come with ‘methods’, which are sets of instructions for doing something with that object. In the cell above, the <code>.draw()</code> method looks at the circuit we’ve created and produces a human-readable drawing of that circuit.
</details>
<!-- ::: -->
Next, we need a way to tell our quantum computer to measure our qubits and record the results. To do this, we add a "measure" operation to our quantum circuit. We can do this with the `QuantumCircuit`'s `.measure()` method.
```
from qiskit import QuantumCircuit
qc = QuantumCircuit(3, 3)
# measure qubits 0, 1 & 2 to classical bits 0, 1 & 2 respectively
qc.measure([0,1,2], [0,1,2])
qc.draw()
```
Next, let's see what the results of running this circuit would be. To do this, we'll use a quantum simulator, which is a standard computer calculating what an ideal quantum computer would do. Because simulating a quantum computer is believed to be difficult for classical computers (the best algorithms we have grow exponentially with the number of qubits), these simulations are only possible for circuits with small numbers of qubits (up to ~30 qubits), or certain types of circuits for which we can use some tricks to speed up the simulation. Simulators are very useful tools for designing smaller quantum circuits.
Let's import Qiskit’s simulator (called Aer), and make a new simulator object.
```
from qiskit.providers.aer import AerSimulator
sim = AerSimulator() # make new simulator object
```
To do the simulation, we can use the simulator's <code>.run()</code> method. This returns a "job", which contains information about the experiment, such as whether the experiment is running or completed, what backend we ran the experiment on, and, importantly for us, what the results of the experiment are!
To get the results from the job, we use its <code>.result()</code> method, and the most popular way to view the results is as a dictionary of "counts".
```
job = sim.run(qc) # run the experiment
result = job.result() # get the results
result.get_counts() # interpret the results as a "counts" dictionary
```
The keys in the counts dictionary are bit-strings, and the values are the number of times each bit-string was measured. Quantum computers can have randomness in their results, so it's common to repeat the circuit a few times. This circuit was repeated 1024 times, which is the default number of repetitions in Qiskit. By convention, qubits always start in the state `0`, and since we do nothing to them before measurement, the results are always `0`.
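The counts dictionary is an ordinary Python dict, so it can be post-processed directly; for example, converting each measured bit-string to a decimal number (the counts below are hypothetical, not taken from a real run):

```python
# A hypothetical counts dictionary, shaped like the one returned above.
counts = {"000": 1024}
for bitstring, shots in counts.items():
    # int(s, 2) interprets the bit-string as a binary number
    print("decimal", int(bitstring, 2), "measured", shots, "times")
```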
### Encoding an input
Now let's look at how to encode a different binary string as an input. For this, we need what is known as a NOT gate. This is the most basic operation that you can do in a computer. It simply flips the bit value: 0 becomes 1 and 1 becomes 0. For qubits, we use a gate known as the _X-gate_ for this.
Below, we’ll create a new circuit dedicated to the job of encoding:
```
# Create quantum circuit with 3 qubits and 3 classical bits:
qc = QuantumCircuit(3, 3)
qc.x([0,1]) # Perform X-gates on qubits 0 & 1
qc.measure([0,1,2], [0,1,2])
qc.draw() # returns a drawing of the circuit
```
And let's simulate our circuit to see the results:
```
job = sim.run(qc) # run the experiment
result = job.result() # get the results
result.get_counts() # interpret the results as a "counts" dictionary
```
<!-- ::: q-block.exercise -->
### Quick quiz
<!-- ::: q-quiz(goal="intro-aoc-2") -->
<!-- ::: .question -->
What is the binary number `011` in decimal?
<!-- ::: -->
<!-- ::: .option -->
1. 5
<!-- ::: -->
<!-- ::: .option -->
2. 2
<!-- ::: -->
<!-- ::: .option(correct) -->
3. 3
<!-- ::: -->
<!-- ::: -->
Modify the code above to create a quantum circuit that encodes the numbers 6 and 4. Are the results what you'd expect?
<!-- ::: -->
Now we know how to encode information in a computer. The next step is to process it: To take an input that we have encoded, and turn it into an output that tells us something new.
## Creating an adder circuit
### Remembering how to add
To look at turning inputs into outputs, we need a problem to solve. Let’s do some basic maths. In primary school, you will have learned how to take large mathematical problems and break them down into manageable pieces. For example, how would you go about solving this addition problem?
<!-- ::: q-block -->
### Remembering how to add
<!-- ::: q-carousel -->
<!-- ::: div -->

How can we solve a problem like this? Click through this carousel to find out.
<!-- ::: -->
<!-- ::: div -->

One way is to do it digit by digit, from right to left. So we start with 3+4.
<!-- ::: -->
<!-- ::: div -->

And then 1+5.
<!-- ::: -->
<!-- ::: div -->

Then 2+8.
<!-- ::: -->
<!-- ::: div -->

Finally we have 9+1+1, and get our answer.
<!-- ::: -->
<!-- ::: -->
<!-- ::: -->
This may just be simple addition, but it demonstrates the principles behind all algorithms. Whether the algorithm is designed to solve mathematical problems or process text or images, we always break big tasks down into small and simple steps.
To run on a computer, algorithms need to be compiled down to the smallest and simplest steps possible. To see what these look like, let’s do the above addition problem again but in binary.
<!-- ::: q-block -->
### Adding binary numbers
<!-- ::: q-carousel -->
<!-- ::: div -->

Note that the second number has a bunch of extra 0s on the left. This just serves to make the two strings the same length.
<!-- ::: -->
<!-- ::: div -->

Our first task is to do the 1+0 for the column on the right. In binary, as in any number system, the answer is 1.
<!-- ::: -->
<!-- ::: div -->

We get the same result for the 0+1 of the second column.
<!-- ::: -->
<!-- ::: div -->

Next, we have 1+1. As you’ll surely be aware, 1+1=2. In binary, the number 2 is written 10, and so requires two bits. This means that we need to carry the 1, just as we would for the number 10 in decimal. The next column now requires us to calculate 1+1+1. This means adding three numbers together, so things are getting complicated for our computer.
<!-- ::: -->
<!-- ::: div -->

But we can still compile it down to simpler operations, and do it in a way that only ever requires us to add two bits together. For this, we can start with just the first two 1s.
<!-- ::: -->
<!-- ::: div -->

Now we need to add this 10 to the final 1, which can be done using our usual method of going through the columns. The final answer is 11 (also known as 3).
<!-- ::: -->
<!-- ::: div -->

Now we can get back to the rest of the problem. With the answer of 11, we have another carry bit. So now we have another 1+1+1 to do. But we already know how to do that, so it’s not a big deal.
<!-- ::: -->
<!-- ::: div -->
In fact, everything left so far is something we already know how to do. This is because, if you break everything down into adding just two bits, there are only four possible things you’ll ever need to calculate. Here are the four basic sums (we’ll write all the answers with two bits to be consistent):

This is called a half adder. If our computer can implement this, and if it can chain many of them together, it can add anything.
<!-- ::: -->
<!-- ::: -->
<!-- ::: -->
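Before building the quantum version, it may help to see the same half adder sketched in ordinary Python: the right output bit is the XOR of the two inputs, and the left output bit (the carry) is their AND.

```
# Classical half adder: returns (carry bit, sum bit), so the pair reads
# like the two-bit binary answer.
def half_adder(a, b):
    return a & b, a ^ b

for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f"{a}+{b} = {carry}{s}")
```

Running this prints exactly the four basic sums from the table above.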
### Adding with quantum circuits
Let's make our own half adder from a quantum circuit. This will include a part of the circuit that encodes the input, a part that executes the algorithm, and a part that extracts the result. The first part will need to be changed whenever we want to use a new input, but the rest will always remain the same.

The two bits we want to add are encoded in the qubits 0 and 1. The above example encodes a 1 in both these qubits, and so it seeks to find the solution of 1+1. The result will be a string of two bits, which we will read out from the qubits 2 and 3. All that remains is to fill in the actual program, which lives in the blank space in the middle.
The dashed lines in the image are just to distinguish the different parts of the circuit (although they can have more interesting uses too).
The basic operations of computing are known as logic gates. We’ve already used the NOT gate, but this is not enough to make our half adder. We could only use it to manually write out the answers. Since we want the computer to do the actual computing for us, we’ll need some more powerful gates.
To see what we need, let’s take another look at what our half adder needs to do.

The rightmost bit in all four of these answers is completely determined by whether the two bits we are adding are the same or different. So for 0+0 and 1+1, where the two bits are equal, the rightmost bit of the answer comes out 0. For 0+1 and 1+0, where we are adding different bit values, the rightmost bit is 1.
To get this part of our solution correct, we need something that can figure out whether two bits are different or not. Traditionally, in the study of digital computation, this is called an XOR gate.
<table>
<thead>
<tr>
<th>Input 1</th>
<th>Input 2</th>
<th>XOR Output</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
</tbody>
</table>
In quantum computers, the job of the XOR gate is done by the ‘controlled-NOT gate’. Since that's quite a long name, we usually just call it the ‘CNOT’. In circuit diagrams, it is drawn as in the image below. This is applied to a pair of qubits. One acts as the control qubit (this is the one with the little dot). The other acts as the target qubit (with the big circle and cross - kind of like a target mark).

In Qiskit, we can use the `.cx()` method to add a CNOT to our circuit. We need to give the indices of the two qubits it acts on as arguments. Here's an example:
```
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator
sim = AerSimulator()  # `sim` was set up earlier in the course; recreated here so the cell is self-contained
# Create quantum circuit with 2 qubits and 2 classical bits
qc = QuantumCircuit(2, 2)
qc.x(0)  # encode a 1 in qubit 0
qc.cx(0,1) # CNOT controlled by qubit 0 and targeting qubit 1
qc.measure([0,1], [0,1])
display(qc.draw()) # display a drawing of the circuit
job = sim.run(qc) # run the experiment
result = job.result() # get the results
# interpret the results as a "counts" dictionary
print("Result: ", result.get_counts())
```
For our half adder, we don’t want to overwrite one of our inputs. Instead, we want to write the result on a different pair of qubits. For this, we can use two CNOTs and write the output to a new qubit which we know will be in the state 0:

We are now halfway to a fully working half adder. We know how to calculate the rightmost output bit, so we just need to work out how to calculate the left output bit. If you look again at the four possible sums, you’ll notice that there is only one case for which this is 1 instead of 0: 1+1=10. It happens only when both the bits we are adding are 1.

To calculate this part of the output, we could just get our computer to look at whether both of the inputs are 1. If they are — and only if they are — we need to do a NOT gate on qubit 3. That will flip it to the required value of 1 for this case only, giving us the output we need.
For this, we need a new gate: like a CNOT but controlled on two qubits instead of just one. This will perform a NOT on the target qubit only when both controls are in state 1. This new gate is called the [Toffoli](gloss:toffoli) gate. For those of you who are familiar with Boolean logic gates, it is basically an AND gate.

In Qiskit, we can add this to a circuit using the `.ccx()` method. And there we have it! A circuit that can compute the famous mathematical problem of 1+1.
<!-- ::: q-block.exercise -->
### Try it
Arrange the blocks to create the code block that would produce the half-adder circuit above.
q-drag-and-drop-code(goal="intro-aoc-3")
.line from qiskit import QuantumCircuit
.line qc = QuantumCircuit(4, 2)
.line(group=0) qc.cx(0, 2)
.line(group=0) qc.cx(1, 2)
.line(group=0) qc.ccx(0, 1, 3)
.result-info
<!-- ::: -->
Great! Now that we have our half adder, the next thing to do is to check that it works. To do this, we’ll create another circuit that encodes some input, applies the half adder, and extracts the output.
```
test_qc = QuantumCircuit(4, 2)
# First, our circuit should encode an input (here '11')
test_qc.x(0)
test_qc.x(1)
# Next, it should carry out the adder circuit we created
test_qc.cx(0,2)
test_qc.cx(1,2)
test_qc.ccx(0,1,3)
# Finally, we will measure the bottom two qubits to extract the output
test_qc.measure(2,0)
test_qc.measure(3,1)
test_qc.draw()
job = sim.run(test_qc) # run the experiment
result = job.result() # get the results
result.get_counts() # interpret the results as a “counts” dictionary
```
Here we can see that the result ‘10’ was measured 1024 times, and we didn’t measure any other result.
<!-- ::: q-block.exercise -->
### Exercise
Verify the half adder circuit works for all four possible inputs.
[Try in IBM Quantum Lab](https://quantum-computing.ibm.com/lab)
<!-- ::: -->
The half adder contains everything you need for addition. With the NOT, CNOT, and Toffoli gates, we can create programs that add any set of numbers of any size.
These three gates are enough to do everything else in computing too. In fact, we can even do without the CNOT. Additionally, the NOT gate is only really needed to create bits with value 1. The Toffoli gate is essentially the atom of mathematics. It is the simplest element, from which every other problem-solving technique can be compiled.
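To see how the pieces chain together classically, here is a sketch in ordinary Python: two half adders (plus an OR for the carries) make a full adder, and rippling the carry through the columns adds binary strings of any length.

```
# A full adder chains two half adders: it adds two bits plus a carry-in.
def full_adder(a, b, cin):
    s1, c1 = a ^ b, a & b        # first half adder on the two input bits
    s2, c2 = s1 ^ cin, s1 & cin  # second half adder folds in the carry
    return s2, c1 | c2           # (sum bit, carry out)

# Ripple-carry addition of two binary strings, column by column.
def add_binary(x, y):
    n = max(len(x), len(y))
    x, y = x.zfill(n), y.zfill(n)
    bits, carry = [], 0
    for a, b in zip(map(int, reversed(x)), map(int, reversed(y))):
        s, carry = full_adder(a, b, carry)
        bits.append(s)
    if carry:
        bits.append(carry)
    return ''.join(map(str, reversed(bits)))

print(add_binary('1', '1'))        # 1+1 = 10
print(add_binary('1101', '0111'))  # 13+7 = 20 = 10100
```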
```
import os
import cv2
import time
import mediapipe as mp
import numpy as np
from matplotlib import pyplot as plt
mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils
def mediapipe_detection(image, model):
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image.flags.writeable = False
    results = model.process(image)
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
return image, results
def draw_landmarks(image, results):
mp_drawing.draw_landmarks(image, results.face_landmarks, mp_holistic.FACEMESH_TESSELATION)
mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS)
mp_drawing.draw_landmarks(image, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
mp_drawing.draw_landmarks(image, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
def draw_styled_landmarks(image, results):
# Draw face connections
mp_drawing.draw_landmarks(image, results.face_landmarks, mp_holistic.FACEMESH_TESSELATION,
mp_drawing.DrawingSpec(color=(80,110,10), thickness=1, circle_radius=1),
mp_drawing.DrawingSpec(color=(80,256,121), thickness=1, circle_radius=1)
)
# Draw pose connections
mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS,
mp_drawing.DrawingSpec(color=(80,22,10), thickness=2, circle_radius=4),
mp_drawing.DrawingSpec(color=(80,44,121), thickness=2, circle_radius=2)
)
# Draw left hand connections
mp_drawing.draw_landmarks(image, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS,
mp_drawing.DrawingSpec(color=(121,22,76), thickness=2, circle_radius=4),
mp_drawing.DrawingSpec(color=(121,44,250), thickness=2, circle_radius=2)
)
# Draw right hand connections
mp_drawing.draw_landmarks(image, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS,
mp_drawing.DrawingSpec(color=(245,117,66), thickness=2, circle_radius=4),
mp_drawing.DrawingSpec(color=(245,66,230), thickness=2, circle_radius=2)
)
cap = cv2.VideoCapture(0)
sentence = ''
# Set mediapipe model
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        # Read feed
        ret, frame = cap.read()
        if not ret:
            break
        # Make detections
        image, results = mediapipe_detection(frame, holistic)
        # Draw landmarks
        draw_styled_landmarks(image, results)
        # Header bar for the caption text
        cv2.rectangle(image, (0, 0), (640, 40), (245, 117, 16), -1)
        # Read the keyboard once per frame; calling cv2.waitKey repeatedly
        # would consume the key press and could miss it
        key = cv2.waitKey(10) & 0xFF
        if key == ord('w'):
            sentence = 'Oi'
        elif key == ord('e'):
            sentence = 'Obrigado'
        elif key == ord('r'):
            sentence = 'Te amo'
        elif key == ord('q'):
            # Break
            break
        cv2.putText(image, sentence, (3, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
        # Show to screen
        cv2.imshow('OpenCv Cam', image)
cap.release()
cv2.destroyAllWindows()
```
<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>
# Density Estimation: Gaussian Mixture Models
Here we'll explore **Gaussian Mixture Models**, which is an unsupervised clustering & density estimation technique.
We'll start with our standard set of initial imports
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn-v0_8')  # the bare 'seaborn' style name was removed in newer matplotlib
```
## Introducing Gaussian Mixture Models
We previously saw an example of K-Means, a clustering algorithm that is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both **clustering** and **density estimation**.
For example, imagine we have some one-dimensional data in a particular distribution:
```
np.random.seed(2)
x = np.concatenate([np.random.normal(0, 2, 2000),
np.random.normal(5, 5, 2000),
np.random.normal(3, 0.5, 600)])
plt.hist(x, 80, density=True)
plt.xlim(-10, 20);
```
Gaussian mixture models will allow us to approximate this density:
```
from sklearn.mixture import GaussianMixture as GMM  # the old GMM class was renamed in modern scikit-learn
X = x[:, np.newaxis]
clf = GMM(4, max_iter=500, random_state=3).fit(X)
xpdf = np.linspace(-10, 20, 1000)
density = np.exp(clf.score_samples(xpdf[:, np.newaxis]))
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
```
Note that this density is fit using a **mixture of Gaussians**, which we can examine by looking at the ``means_``, ``covariances_``, and ``weights_`` attributes:
```
clf.means_
clf.covariances_
clf.weights_
plt.hist(x, 80, density=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
    pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
                                       np.sqrt(clf.covariances_[i, 0, 0])).pdf(xpdf)
    plt.fill(xpdf, pdf, facecolor='gray',
             edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
```
These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the **posterior probability** is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm **provably** converges to the optimum (though the optimum is not necessarily global).
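To make the expectation-maximization steps concrete, here is a minimal self-contained sketch for a two-component one-dimensional mixture; the toy data and starting guesses are made up for illustration.

```
# A minimal EM loop for a two-component 1-D Gaussian mixture. The
# "responsibilities" computed in the E-step are the posterior
# probabilities that replace K-means' hard cluster assignment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
mu = np.array([-1.0, 6.0])      # made-up starting guesses
sigma = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = w * stats.norm(mu, sigma).pdf(data[:, None])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted means, spreads, and mixing weights
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(data)
print(mu)  # close to the true means 0 and 5
```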
## How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC).
```
print(clf.bic(X))
print(clf.aic(X))
```
Let's take a look at these as a function of the number of gaussians:
```
n_estimators = np.arange(1, 10)
clfs = [GMM(n, max_iter=1000).fit(X) for n in n_estimators]
bics = [clf.bic(X) for clf in clfs]
aics = [clf.aic(X) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
```
It appears that for both the AIC and BIC, 4 components is preferred.
## Example: GMM For Outlier Detection
GMM is what's known as a **Generative Model**: it's a probabilistic model from which a dataset can be generated.
One thing that generative models can be useful for is **outlier detection**: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where "suitable" is up to your own bias/variance preference) can be labeled outliers.
Let's take a look at this by defining a new dataset with some outliers:
```
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
clf = GMM(4, max_iter=500, random_state=0).fit(y[:, np.newaxis])
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.exp(clf.score_samples(xpdf[:, np.newaxis]))
plt.hist(y, 80, density=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
plt.xlim(-15, 30);
```
Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of ``y``:
```
log_likelihood = clf.score_samples(y[:, np.newaxis])  # one log-likelihood per sample
plt.plot(y, log_likelihood, '.k');
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
```
The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!).
Here are the outliers that were missed:
```
set(true_outliers) - set(detected_outliers)
```
And here are the non-outliers which were spuriously labeled outliers:
```
set(detected_outliers) - set(true_outliers)
```
Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
## Other Density Estimators
The other main density estimator that you might find useful is *Kernel Density Estimation*, which is available via ``sklearn.neighbors.KernelDensity``. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of *every* training point!
```
from sklearn.neighbors import KernelDensity
kde = KernelDensity(bandwidth=0.15).fit(x[:, None])
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
```
All of these density estimators can be viewed as **generative models** of the data: that is, the model tells us how more data can be created which fits the model.
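For instance, we can draw new synthetic points from a fitted mixture with the ``sample`` method (a small self-contained sketch on made-up toy data, using the modern ``GaussianMixture`` class):

```
# Fit a two-component mixture to toy data, then generate new points from it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
toy = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
gm = GaussianMixture(2, random_state=0).fit(toy[:, None])
new_points, components = gm.sample(5)  # 5 new points and their component labels
print(new_points.ravel())
```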
```
import glob
import matplotlib.pyplot as plt
import numpy as np
import os
import pickle
import sys
sys.path.append("..")
from demo_2_awac import och_2_awac
DATA_DIR = '/usr/local/google/home/bkinman/proj/rpl_reset_free/20201005_slider_play_reprocessed'
def create_awac_dict_from_demo_pkls(data_dir):
full_awac_dict = None
glob_str = os.path.join(data_dir, 'recording?.pkl')
pkl_files = [f for f in glob.glob(glob_str)]
for f in pkl_files:
data_path = os.path.join(data_dir, f)
data = pickle.load(open(data_path,'rb'))
awac_formatted_list = och_2_awac(data)
for entry_dict in awac_formatted_list:
if not full_awac_dict:
full_awac_dict = entry_dict
else:
for k, v in entry_dict.items():
full_awac_dict[k] = np.concatenate((full_awac_dict[k], v), axis=0)
return full_awac_dict
def relabel_bc(trajs: dict, window_size = 100):
goal_size = 25
paths = []
num_idxs = trajs['observations'].shape[0]
for idx_start in range(num_idxs - window_size - 1):
path = {}
# Windowed observations
ob = trajs['observations'][idx_start:idx_start + window_size].copy()
next_ob = trajs['observations'][idx_start + 1:idx_start + window_size + 1].copy()
# Last observations
goals = np.repeat([trajs['observations'][idx_start + window_size - 1]], len(ob), axis=0)
ob[:, goal_size:] = goals[:, :goal_size]
next_ob[:, goal_size:] = goals[:, :goal_size]
path['observations'] = ob.copy()
path['full_observations'] = ob.copy()
path['next_observations'] = next_ob.copy()
path['full_next_observations'] = next_ob.copy()
path['actions'] = trajs['actions'][idx_start:idx_start + window_size].copy()
reward = np.zeros((len(ob), 1))
reward[-1] = 1.0
path['rewards'] = reward
        terminals = np.zeros((len(ob),), dtype=bool)  # np.bool was removed from NumPy
terminals[-1] = True
path['terminals'] = terminals
path['env_infos'] = [{}]*len(ob)
path['agent_infos'] = [{}] * len(ob)
paths.append(path)
return paths
def relabel_bc_strided(trajs: dict, window_size = 100, stride=10):
""" Strided relabeling procedure produces less data."""
goal_size = 25
paths = []
num_idxs = trajs['observations'].shape[0]
for idx_start in range(num_idxs - window_size - 1):
path = {}
# Windowed observations
ob = trajs['observations'][idx_start:idx_start + window_size][::stride].copy()
next_ob = trajs['observations'][idx_start + 1:idx_start + window_size + 1][::stride].copy()
# Last observations
goals = np.repeat([trajs['observations'][idx_start + window_size - 1]], len(ob), axis=0)
ob[:, goal_size:] = goals[:, :goal_size]
next_ob[:, goal_size:] = goals[:, :goal_size]
path['observations'] = ob.copy()
path['full_observations'] = ob.copy()
path['next_observations'] = next_ob.copy()
path['full_next_observations'] = next_ob.copy()
path['actions'] = trajs['actions'][idx_start:idx_start + window_size][::stride].copy()
reward = np.zeros((len(ob), 1))
reward[-1] = 1.0
path['rewards'] = reward
        terminals = np.zeros((len(ob),), dtype=bool)  # np.bool was removed from NumPy
terminals[-1] = True
path['terminals'] = terminals
path['env_infos'] = [{}]*len(ob)
path['agent_infos'] = [{}] * len(ob)
paths.append(path)
return paths
def compute_window_size(obs, thresh_low=4.45, thresh_high=10e6, debug_plot = False):
""" Computes the window size, which is the essentially the average duration of each episode.
"""
thresh = ((obs > thresh_low) & (obs < thresh_high))*1.0
grad = np.gradient(thresh)
last_rise = 0
deltas_ts = []
for ts in range(len(grad)):
if grad[ts] > 0:
last_rise = ts
if grad[ts] < 0:
deltas_ts.append((last_rise, ts))
mid_ts = np.array([((b-a)/2+a) for a,b in deltas_ts]).astype(np.int32)
if debug_plot:
mid_pnts = np.zeros(len(obs))
mid_pnts[mid_ts] = 1
plt.figure(figsize=(30, 5))
plt.plot(thresh)
plt.plot(mid_pnts)
plt.plot(obs-np.amin(obs))
mean_episode_len = np.mean(mid_ts[1:] - mid_ts[:-1])
return int(mean_episode_len)
```
## Load demo data, convert to AWAC format, and relabel for Behavior Cloning
```
full_awac_dict = create_awac_dict_from_demo_pkls(DATA_DIR)
sliding_cabinet_obs = full_awac_dict['observations'][:,1]
window_size = compute_window_size(sliding_cabinet_obs)
bc_training_data = relabel_bc_strided(full_awac_dict, window_size)
lens = sum(len(a['observations']) for a in bc_training_data)
print(f'bc_num_pretrain_steps should be {(int(lens/1000)+1)*1500} steps')
print(f'q_num_pretrain2_steps should be {(int(lens/1000)+1)*3000} steps')
output_path = os.path.join(DATA_DIR, 'bc_train_strided.pkl')
pickle.dump(bc_training_data, open(output_path,'wb'))
```
## Reprocess Demo Data
When the demo data was collected, the observation vector was the incorrect size and contained incorrect values (it should have been zero-initialized).
The following routine opens the original demo vectors and corrects this. To prevent accidental overwriting of data, data will be dumped to a new directory alongside DATA_DIR.
```
def reprocess_demo_data(data_dir):
output_dirname = os.path.basename(os.path.normpath(DATA_DIR))+'_reprocessed'
output_dir = os.path.join(DATA_DIR, '..', output_dirname)
if not os.path.exists(output_dir):
os.mkdir(output_dir)
glob_str = os.path.join(DATA_DIR, 'recording?.pkl')
pkl_files = [f for f in glob.glob(glob_str)]
for f in pkl_files:
data = pickle.load(open(f, 'rb'))
for episode in data:
for step in episode:
step['obs'] = np.concatenate((step['obs'][:25], np.zeros(25)), axis=0)
pickle.dump(data, open(os.path.join(output_dir, os.path.basename(f)), 'wb'))
reprocess_demo_data(DATA_DIR)
window_size
```
# Solving the heat equation
[AMath 586, Spring Quarter 2019](http://staff.washington.edu/rjl/classes/am586s2019/) at the University of Washington. For other notebooks, see [Index.ipynb](Index.ipynb) or the [Index of all notebooks on Github](https://github.com/rjleveque/amath586s2019/blob/master/notebooks/Index.ipynb).
Sample program to solve the heat equation with the Crank-Nicolson method.
We solve the heat equation $u_t = \kappa u_{xx}$ on the interval $0\leq x \leq 1$ with Dirichlet boundary conditions $u(0,t) = g_0(t)$ and $u(1,t) = g_1(t)$.
To test accuracy, we use cases where an exact solution to the heat equation for all $x$ is known. This `utrue` function is used to set initial conditions. It is also used in each time step to set boundary values on whatever finite interval we consider.
```
%pylab inline
from matplotlib import animation
from IPython.display import HTML
def make_animation(hs_input, hs_output, nplot=1):
"""
Plot every `nplot` frames of the solution and turn into
an animation.
"""
xfine = linspace(hs_input.ax,hs_input.bx,1001)
fig, ax = plt.subplots()
ax.set_xlim((hs_input.ax,hs_input.bx))
#ax.set_ylim((-0.2, 1.2))
ax.set_ylim((-1.2, 1.2))
line1, = ax.plot([], [], '+-', color='b', lw=2, label='computed')
line2, = ax.plot([], [], color='r', lw=1, label='true')
ax.legend()
title1 = ax.set_title('')
def init():
line1.set_data(hs_output.x_computed, hs_output.u_computed[:,0])
line2.set_data(xfine, hs_input.utrue(xfine, hs_input.t0))
title1.set_text('time t = %8.4f' % hs_input.t0)
return (line1,line2,title1)
def animate(n):
line1.set_data(hs_output.x_computed, hs_output.u_computed[:,n])
line2.set_data(xfine, hs_input.utrue(xfine, hs_output.t[n]))
title1.set_text('time t = %8.4f' % hs_output.t[n])
return (line1,line2,title1)
frames = range(0, len(hs_output.t), nplot) # which frames to plot
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=frames,
interval=200,
blit=True)
close('all') # so one last frame plot doesn't remain
return anim
class HeatSolutionInput(object):
def __init__(self):
# inputs:
self.t0 = 0.
self.tfinal = 1.
self.ax = 0.
self.bx = 1.
        self.mx = 39  # integer, since mx is used as an array dimension
self.utrue = None
self.kappa = 0.02
self.nsteps = 10
class HeatSolutionOutput(object):
def __init__(self):
# outputs:
self.h = None
self.dt = None
self.t = None
self.x_computed = None
self.u_computed = None
self.errors = None
```
## Forward Euler time stepping
```
def heat_FE(heat_solution_input):
"""
Solve u_t = kappa * u_{xx} on [ax,bx] with Dirichlet boundary conditions,
using centered differences in space and the Forward Euler method for time stepping,
with m interior points, taking nsteps time steps.
Input:
    `heat_solution_input` should be an object of class `HeatSolutionInput`
specifying inputs.
Output:
an object of class `HeatSolutionOutput` with the solution and other info.
This routine can be embedded in a loop on m to test the accuracy.
Note: the vector x defined below is of length m+2 and includes both boundary points.
    The vector u is of length m+2 and includes both boundary points; its interior
    values are updated explicitly each time step (forward Euler requires no linear
    solve), and the boundary values are then reset from the true solution,
    so that plotting (x,u) includes the boundary values.
"""
# unpack the inputs for brevity:
ax = heat_solution_input.ax
bx = heat_solution_input.bx
kappa = heat_solution_input.kappa
m = heat_solution_input.mx
utrue = heat_solution_input.utrue
t0 = heat_solution_input.t0
tfinal = heat_solution_input.tfinal
nsteps = heat_solution_input.nsteps
h = (bx-ax)/float(m+1) # h = delta x
x = linspace(ax,bx,m+2) # note x(1)=0 and x(m+2)=1
# u(1)=g0 and u(m+2)=g1 are known from BC's
dt = (tfinal - t0) / float(nsteps)
# initial conditions:
u0 = utrue(x,t0)
# initialize u and plot:
    tn = t0
u = u0
t = empty((nsteps+1,), dtype=float)
errors = empty((nsteps+1,), dtype=float)
u_computed = empty((m+2,nsteps+1), dtype=float)
t[0] = tn
errors[0] = 0.
u_computed[:,0] = u0
# main time-stepping loop:
for n in range(1,nsteps+1):
tnp = tn + dt # = t_{n+1}
# indices of interior points as in integer numpy array:
jint = array(range(1,m+1), dtype=int)
# Then the numerical method can be written without a loop
# or matrix-vector multiply:
u[jint] = u[jint] + kappa * dt/h**2 * (u[jint-1] - 2*u[jint] + u[jint+1])
# evaluate true solution to get new boundary values at tnp:
g0np = utrue(ax,tnp)
g1np = utrue(bx,tnp)
# augment with boundary values:
u[0] = g0np
u[-1] = g1np
error = abs(u-utrue(x,tnp)).max() # max norm
t[n] = tnp
u_computed[:,n] = u
errors[n] = error
tn = tnp # for next time step
heat_solution_output = HeatSolutionOutput() # create object for output
heat_solution_output.dt = dt
heat_solution_output.h = h
heat_solution_output.t = t
heat_solution_output.x_computed = x
heat_solution_output.u_computed = u_computed
heat_solution_output.errors = errors
return heat_solution_output
```
## A smooth solution
We first use the decaying Gaussian
$$
u(x,t) = \frac{1}{\sqrt{4\beta\kappa t + 1}} \exp\left(\frac{-(x-x_0)^2}{4\kappa t + 1/\beta}\right).
$$
The initial data and boundary conditions are obtained by evaluating this function at $t=0$ or at $x=0$ or $x=1$. In particular, the initial conditions are simply
$$
u(x,0) = \eta(x) = \exp(-\beta(x-x_0)^2).
$$
```
beta = 150
x0 = 0.4
kappa = 0.02
utrue_gaussian = lambda x,t: exp(-(x-x0)**2 / (4*kappa*t + 1./beta)) \
                             / sqrt(4*beta*kappa*t+1.)
```
Recall that the forward Euler time stepping on the heat equation is only stable if the time step satisfies $k \leq 0.5h^2/\kappa$. However, for smooth solutions with very small components of the high wave number Fourier modes, it can take a long time for the instability to appear even if we take much larger $k$. Here's an example. Note that it is the highest wave number (the saw-tooth mode) that grows fastest and hence appears first...
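As a quick sanity check (a sketch, not part of the original notebook), we can evaluate the forward Euler amplification factor $g(\xi) = 1 + 2r(\cos(\xi h) - 1)$ with $r = \kappa k/h^2$: every Fourier mode satisfies $|g|\leq 1$ exactly when $r \leq 1/2$, and it is the saw-tooth mode $\xi h = \pi$ that grows first once $r$ exceeds that bound.

```
import numpy as np

xi_h = np.linspace(0, np.pi, 201)  # wavenumber times h; pi is the saw-tooth mode
for r in (0.4, 0.6):
    g = 1 + 2 * r * (np.cos(xi_h) - 1)  # amplification factor per Fourier mode
    print(f"r = {r}: max |g| = {abs(g).max():.2f}")
```

For r = 0.4 no mode is amplified, while for r = 0.6 the saw-tooth mode grows by a factor 1.4 per step.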
```
t0 = 0.
tfinal = 4.
ax = 0.
bx = 1.
mx = 39
h = 1./((mx+1))
dt_stab = 0.5*h**2 / kappa
nsteps_stab = int(floor((tfinal - t0)/dt_stab)) + 1
print('For stability, need to take at least %i time steps' % nsteps_stab)
heat_solution_input = HeatSolutionInput()
heat_solution_input.t0 = t0
heat_solution_input.tfinal = tfinal
heat_solution_input.ax = ax
heat_solution_input.bx = bx
heat_solution_input.mx = mx
heat_solution_input.utrue = utrue_gaussian
heat_solution_input.kappa = kappa
heat_solution_input.nsteps = 240
heat_solution_output = heat_FE(heat_solution_input)
error_tfinal = heat_solution_output.errors[-1] # last element
print('Using %i time steps' % heat_solution_input.nsteps)
print('Max-norm Error at t = %6.4f is %12.8f' % (heat_solution_input.tfinal, error_tfinal))
# make an animation of the results, plotting every 10th frame:
anim = make_animation(heat_solution_input, heat_solution_output, nplot=10)
HTML(anim.to_jshtml()) # or use the line below...
#HTML(anim.to_html5_video())
```
## Discontinuous initial data
The instability is observed much more quickly if the initial data contains more high wave numbers, e.g. if it is discontinuous.
Consider the exact solution
$$
u(x,t) = \text{erf}\left(x/\sqrt{4\kappa t}\right)
$$
where erf is the *error function* defined as the integral of the Gaussian,
$$
\text{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z \exp(-t^2)\, dt.
$$
See e.g. https://en.wikipedia.org/wiki/Error_function.
As $t \rightarrow 0$, this approaches the discontinuous function jumping from $-1$ for $x<0$ to $+1$ for $x>0$.
The error function is implemented in the `scipy.special` [library of special functions](https://docs.scipy.org/doc/scipy/reference/special.html).
```
kappa = 0.02
def utrue_erf(x,t):
from scipy.special import erf
if t==0:
return where(x>0, 1., -1.)
else:
return erf(x/sqrt(4*kappa*t))
t0 = 0.
tfinal = 2.
ax = -1.
bx = 1.
mx = 40
h = (bx-ax)/((mx+1))
dt_stab = 0.5*h**2 / kappa
nsteps_stab = int(floor((tfinal - t0)/dt_stab)) + 1
print('For stability, need to take at least %i time steps' % nsteps_stab)
heat_solution_input = HeatSolutionInput()
heat_solution_input.t0 = t0
heat_solution_input.tfinal = tfinal
heat_solution_input.ax = ax
heat_solution_input.bx = bx
heat_solution_input.mx = mx
heat_solution_input.utrue = utrue_erf
heat_solution_input.kappa = kappa
heat_solution_input.nsteps = 32
heat_solution_output = heat_FE(heat_solution_input)
error_tfinal = heat_solution_output.errors[-1] # last element
print('Using %i time steps' % heat_solution_input.nsteps)
print('Max-norm Error at t = %6.4f is %12.8f' % (heat_solution_input.tfinal, error_tfinal))
anim = make_animation(heat_solution_input, heat_solution_output, nplot=1)
HTML(anim.to_jshtml())
```
## Crank-Nicolson method
This method uses the same centered difference spatial discretization with the Trapezoidal method for time stepping. That method is A-stable so this method is stable for any size time step (though not necessarily accurate).
Implementing this method requires solving a tridiagonal linear system in each time step, which we do using the [sparse matrix routines](https://docs.scipy.org/doc/scipy/reference/sparse.linalg.html) from `scipy.sparse.linalg`.
```
def heat_CN(heat_solution_input):
"""
Solve u_t = kappa * u_{xx} on [ax,bx] with Dirichlet boundary conditions,
using the Crank-Nicolson method with m interior points, taking nsteps
time steps.
Input:
    `heat_solution_input` should be an object of class `HeatSolutionInput`
specifying inputs.
Output:
an object of class `HeatSolutionOutput` with the solution and other info.
Note: the vector x defined below is of length m+2 and includes both boundary points.
The vector uint is of length m and is only the interior points that we solve for,
by solving an m by m linear system each time step.
The vector u is of length m+2 and obtained by extending uint with the boundary values,
so that plotting (x,u) includes the boundary values.
"""
from scipy import sparse
from scipy.sparse.linalg import spsolve
# unpack the inputs for brevity:
ax = heat_solution_input.ax
bx = heat_solution_input.bx
kappa = heat_solution_input.kappa
m = heat_solution_input.mx
utrue = heat_solution_input.utrue
t0 = heat_solution_input.t0
tfinal = heat_solution_input.tfinal
nsteps = heat_solution_input.nsteps
h = (bx-ax)/float(m+1) # h = delta x
x = linspace(ax,bx,m+2) # note x(1)=0 and x(m+2)=1
# u(1)=g0 and u(m+2)=g1 are known from BC's
    dt = (tfinal - t0) / float(nsteps)
# initial conditions:
u0 = utrue(x,t0)
# Each time step we solve MOL system U' = AU + g using the Trapezoidal method
# set up matrices:
r = 0.5 * kappa* dt/(h**2)
em = ones(m)
em1 = ones(m-1)
A = sparse.diags([em1, -2*em, em1], [-1, 0, 1], shape=(m,m))
A1 = sparse.eye(m) - r * A
A2 = sparse.eye(m) + r * A
# initialize u and plot:
    tn = t0
u = u0
t = empty((nsteps+1,), dtype=float)
errors = empty((nsteps+1,), dtype=float)
u_computed = empty((m+2,nsteps+1), dtype=float)
t[0] = tn
errors[0] = 0.
u_computed[:,0] = u0
# main time-stepping loop:
for n in range(1,nsteps+1):
tnp = tn + dt # = t_{n+1}
# boundary values u(0,t) and u(1,t) at times tn and tnp:
# boundary values are already set at time tn in array u:
g0n = u[0]
g1n = u[m+1]
# evaluate true solution to get new boundary values at tnp:
g0np = utrue(ax,tnp)
g1np = utrue(bx,tnp)
# compute right hand side for linear system:
uint = u[1:m+1] # interior points (unknowns)
rhs = A2.dot(uint) # sparse matrix-vector product A2 * uint
# fix-up right hand side using BC's (i.e. add vector g to A2*uint)
rhs[0] = rhs[0] + r*(g0n + g0np)
rhs[m-1] = rhs[m-1] + r*(g1n + g1np)
# solve linear system:
uint = spsolve(A1,rhs) # sparse solver
# augment with boundary values:
u = hstack([g0np, uint, g1np])
error = abs(u-utrue(x,tnp)).max() # max norm
t[n] = tnp
u_computed[:,n] = u
errors[n] = error
tn = tnp # for next time step
heat_solution_output = HeatSolutionOutput() # create object for output
heat_solution_output.dt = dt
heat_solution_output.h = h
heat_solution_output.t = t
heat_solution_output.x_computed = x
heat_solution_output.u_computed = u_computed
heat_solution_output.errors = errors
return heat_solution_output
```
## Test this with k = h:
With this method we can get a fine solution with only 40 steps (on a grid with 39 interior points). We only go out to time 1 but it would stay stable forever...
```
heat_solution_input = HeatSolutionInput()
heat_solution_input.t0 = 0.
heat_solution_input.tfinal = 1.
heat_solution_input.ax = 0.
heat_solution_input.bx = 1.
heat_solution_input.mx = 39
heat_solution_input.utrue = utrue_gaussian
heat_solution_input.kappa = kappa
heat_solution_input.nsteps = 40
heat_solution_output = heat_CN(heat_solution_input)
error_tfinal = heat_solution_output.errors[-1] # last element
print('dt = %6.4f' % heat_solution_output.dt)
print('Max-norm Error at t = %6.4f is %12.8f' % (heat_solution_input.tfinal, error_tfinal))
anim = make_animation(heat_solution_input, heat_solution_output)
HTML(anim.to_jshtml())
```
We can also plot how the max-norm error evolves with time:
```
plot(heat_solution_output.t,heat_solution_output.errors)
xlabel('time')
ylabel('max-norm error');
```
## Test for second-order accuracy
If dt and h are both reduced by a factor of 2, the error should go down by a factor of 4 (for sufficiently small values).
Here we loop over a range of dt and h values, with dt = h in each solve.
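The order estimate printed in the table below is just the base-2 logarithm of the ratio of successive errors. As a standalone sketch (with hypothetical error values, not output from the solver):

```python
from math import log

# If E(h) ~ C*h**p, then halving h multiplies the error by 2**(-p),
# so p can be estimated from the ratio of successive errors.
E_h  = 0.004   # hypothetical error with step size h
E_h2 = 0.001   # hypothetical error with step size h/2
ratio = E_h / E_h2
p = log(ratio) / log(2)
print('estimated order p = %4.2f' % p)   # 2.00 for a second-order method
```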
```
nsteps_vals = [20,40,80,160,320] # values to test
E = empty(len(nsteps_vals))
# print table header:
print(" h dt error ratio estimated order")
for j, nsteps in enumerate(nsteps_vals):
    heat_solution_input.nsteps = nsteps
    heat_solution_input.mx = nsteps - 1
    heat_solution_output = heat_CN(heat_solution_input)
    E[j] = heat_solution_output.errors[-1]  # last element
    h = heat_solution_output.h
    dt = heat_solution_output.dt
    if j > 0:
        ratio = E[j-1] / E[j]
    else:
        ratio = nan
    p = log(ratio)/log(2)
    print("%8.6f %8.6f %12.8f %4.2f %4.2f" % (h, dt, E[j], ratio, p))
loglog(nsteps_vals, E, '-o')
title('Log-log plot of errors')
xlabel('nsteps')
ylabel('error')
```
## Observe oscillations if dt is too large
We know that Crank-Nicolson is stable for any time step, but the amplification factor approaches $-1$ as $k\lambda \rightarrow \infty$, so we expect high wavenumber modes to oscillate in time if we take the time step too large. This can be observed with the Gaussian initial data used here.
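To make this concrete, here is a minimal check of the trapezoidal (Crank-Nicolson) amplification factor for the scalar test problem $u' = \lambda u$ (with $z = k\lambda$ real and negative for the heat equation):

```python
# Crank-Nicolson applied to u' = lam*u gives U[n+1] = g(z)*U[n] with
# z = k*lam and g(z) = (1 + z/2) / (1 - z/2).
# |g(z)| <= 1 for all z <= 0 (unconditional stability), but g(z) -> -1 as
# z -> -inf, so under-resolved stiff modes flip sign each step.
def amp_factor(z):
    return (1. + z/2.) / (1. - z/2.)

for z in [-0.1, -1., -10., -1000.]:
    print('z = %9.1f    g(z) = %+.6f' % (z, amp_factor(z)))
```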
```
heat_solution_input.mx = 39
heat_solution_input.nsteps = 2
heat_solution_output = heat_CN(heat_solution_input)
error_tfinal = heat_solution_output.errors[-1] # last element
print('h = %6.4f, dt = %6.4f' % (heat_solution_output.h, heat_solution_output.dt))
print('Max-norm Error at t = %6.4f is %12.8f' % (heat_solution_input.tfinal, error_tfinal))
anim = make_animation(heat_solution_input, heat_solution_output)
HTML(anim.to_jshtml()) # or use the line below...
#HTML(anim.to_html5_video())
```
### Discontinuous data
With a sufficiently small time step, Crank-Nicolson behaves well on the problem with discontinuous data. Note that we use an even number of grid points `m = 40` so that they are symmetric about $x=0$. Try `m=39` and see how the asymmetry gives a larger error!
```
heat_solution_input = HeatSolutionInput()
heat_solution_input.t0 = 0.
heat_solution_input.tfinal = 1.5
heat_solution_input.ax = -1.
heat_solution_input.bx = 1.
heat_solution_input.mx = 40
heat_solution_input.utrue = utrue_erf
heat_solution_input.kappa = kappa
heat_solution_input.nsteps = 40
heat_solution_output = heat_CN(heat_solution_input)
error_tfinal = heat_solution_output.errors[-1] # last element
print('h = %6.4f, dt = %6.4f' % (heat_solution_output.h, heat_solution_output.dt))
print('Max-norm Error at t = %6.4f is %12.8f' % (heat_solution_input.tfinal, error_tfinal))
anim = make_animation(heat_solution_input, heat_solution_output)
HTML(anim.to_jshtml()) # or use the line below...
#HTML(anim.to_html5_video())
```
The issue with oscillations is more apparent with this discontinuous initial data. Taking a much larger time step on the same grid gives the results below. Note that the Crank-Nicolson method remains stable, but the saw-tooth mode is apparent near the interface if we try to step over the rapid transient behavior in this stiff problem.
```
heat_solution_input.nsteps = 3
heat_solution_output = heat_CN(heat_solution_input)
error_tfinal = heat_solution_output.errors[-1] # last element
print('h = %6.4f, dt = %6.4f' % (heat_solution_output.h, heat_solution_output.dt))
print('Max-norm Error at t = %6.4f is %12.8f' % (heat_solution_input.tfinal, error_tfinal))
anim = make_animation(heat_solution_input, heat_solution_output)
HTML(anim.to_jshtml()) # or use the line below...
#HTML(anim.to_html5_video())
```
An L-stable method like TR-BDF2 would do better in this case.
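For contrast, a minimal sketch of what L-stability buys (backward Euler is used here as the simplest L-stable example; TR-BDF2 is also L-stable and damps stiff modes similarly):

```python
# Compare amplification factors on u' = lam*u, z = k*lam:
def g_cn(z):
    return (1. + z/2.) / (1. - z/2.)   # -> -1 as z -> -inf (A-stable only)

def g_be(z):
    return 1. / (1. - z)               # -> 0 as z -> -inf (L-stable)

# For large |z| (stiff modes), backward Euler damps the mode while
# Crank-Nicolson lets it oscillate with nearly undiminished amplitude:
for z in [-1., -100., -10000.]:
    print('z = %9.1f    CN: %+.4f    BE: %+.6f' % (z, g_cn(z), g_be(z)))
```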
# Publishing packages as web layers
Packages in ArcGIS bundle maps, data, tools and cartographic information. ArcGIS lets you [create a variety of packages](http://pro.arcgis.com/en/pro-app/help/sharing/overview/introduction-to-sharing-packages.htm), such as map (.mpkx), layer (.lpkx), map tile (.tpk), vector tile (.vtpk), scene layer (.slpk), and geoprocessing (.gpkx) packages, among others. You can share any of these packages with other users either as files on a network share or as items in your portal. In addition, some of these packages can be shared as web layers.
In this sample, we will see how to publish web layers from tile, vector tile and scene layer packages. Data for this sample is available in the accompanying `data` folder.
## Publishing tile layers from a tile package
A [Tile package](http://pro.arcgis.com/en/pro-app/help/sharing/overview/tile-package.htm) contains a set of tiles (images) from a map or raster dataset. These tiles (also called a tile cache) can be used as basemaps and are useful for visualizing imagery or relatively static data.
```
# connect to the GIS
from arcgis.gis import GIS
gis = GIS("https://pythonapi.playground.esri.com/portal")
```
Upload the tile package (USA_counties_divorce_rate.tpk) as an item. To keep our 'My Content' tidy, let us create a new folder called 'packages' and add the package to it.
```
gis.content.create_folder('packages')
tpk_item = gis.content.add({}, data='data/USA_counties_divorce_rate.tpk', folder='packages')
tpk_item
```
Now, let us publish this item as a tile layer.
```
tile_layer = tpk_item.publish()
tile_layer
```
## Publishing vector tile layers from a vector tile package
A [vector tile package](http://pro.arcgis.com/en/pro-app/help/sharing/overview/vector-tile-package.htm) is a collection of vector tiles and style resources. Vector tiles contain vector representations of data across a range of scales. Unlike raster tiles, they can adapt to the resolution of the display device and even be customized for multiple uses.
Let us upload the `World_earthquakes_2010.vtpk` vector tile package as before and publish it as a vector tile layer.
```
# upload vector tile package to the portal
vtpk_item = gis.content.add({}, data='data/World_earthquakes_2010.vtpk', folder='packages')
vtpk_item
# publish that item as a vector tile layer
vtpk_layer = vtpk_item.publish()
vtpk_layer
```
## Publishing scene layers from a scene layer package
A [scene layer package](http://pro.arcgis.com/en/pro-app/help/sharing/overview/scene-layer-package.htm) contains a cache of a multipatch, point, or point cloud dataset and is used to visualize 3D data. You can publish this package and create a web scene layer which can be visualized on a web scene.
Let us publish a 'World_earthquakes_2000_2010.slpk' scene layer package that visualizes global earthquakes between the years 2000 and 2010 in 3D.
```
slpk_item = gis.content.add({}, data='data/World_earthquakes_2000_2010.slpk', folder='packages')
slpk_item
slpk_layer = slpk_item.publish()
slpk_layer
```
# Word2Vec
**Learning Objectives**
1. Compile all steps into one function
2. Prepare training data for Word2Vec
3. Model and Training
4. Embedding lookup and analysis
## Introduction
Word2Vec is not a single algorithm; rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This notebook is based on [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) and [Distributed Representations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
* **Continuous Bag-of-Words Model** which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
* **Continuous Skip-gram Model** which predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.
You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the [TensorFlow Embedding Projector](http://projector.tensorflow.org/).
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/word2vec.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
## Skip-gram and Negative Sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of `(target_word, context_word)` where `context_word` appears in the neighboring context of `target_word`.
Consider the following sentence of 8 words.
> The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a `target_word` that can be considered a `context_word`. Take a look at this table of skip-grams for target words based on different window sizes.
Note: For this tutorial, a window size of *n* implies *n* words on each side, with a total window span of 2\*n+1 words centered on a word.

The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words *w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>*, the objective can be written as the average log probability

where `c` is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.

where *v* and *v<sup>'<sup>* are target and context vector representations of words and *W* is vocabulary size.
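Written out (matching the equation images above), the average log probability and its softmax formulation are:

$$\frac{1}{T}\sum_{t=1}^{T}\ \sum_{\substack{-c \le j \le c \\ j \ne 0}} \log p\left(w_{t+j} \mid w_t\right)$$

$$p\left(w_O \mid w_I\right) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\left({v'_{w}}^{\top} v_{w_I}\right)}$$

where $w_I$ is the target (input) word, $w_O$ a context (output) word, $v$ and $v'$ the target and context vector representations, and $W$ the vocabulary size.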
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary, which is often large (10<sup>5</sup>-10<sup>7</sup> terms).
The [Noise Contrastive Estimation](https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be [simplified](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from *num_ns* negative samples drawn from noise distribution *P<sub>n</sub>(w)* of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and *num_ns* negative samples.
A negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the `window_size` neighborhood of the target_word. For the example sentence, these are a few potential negative samples (when `window_size` is 2).
```
(hot, shimmered)
(wide, hot)
(wide, sun)
```
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
## Setup
```
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tqdm
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import io
import itertools
import numpy as np
import os
import re
import string
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Dot, Embedding, Flatten, GlobalAveragePooling1D, Reshape
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
```
Please check your TensorFlow version using the cell below.
```
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE
```
### Vectorize an example sentence
Consider the following sentence:
`The wide road shimmered in the hot sun.`
Tokenize the sentence:
```
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
```
Create a vocabulary to save mappings from tokens to integer indices.
```
vocab, index = {}, 1 # start indexing from 1
vocab['<pad>'] = 0 # add a padding token
for token in tokens:
    if token not in vocab:
        vocab[token] = index
        index += 1
vocab_size = len(vocab)
print(vocab)
```
Create an inverse vocabulary to save mappings from integer indices to tokens.
```
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
```
Vectorize your sentence.
```
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
```
### Generate skip-grams from one sentence
The `tf.keras.preprocessing.sequence` module provides useful functions that simplify data preparation for Word2Vec. You can use the `tf.keras.preprocessing.sequence.skipgrams` to generate skip-gram pairs from the `example_sequence` with a given `window_size` from tokens in the range `[0, vocab_size)`.
Note: `negative_samples` is set to `0` here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
```
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
    example_sequence,
    vocabulary_size=vocab_size,
    window_size=window_size,
    negative_samples=0)
print(len(positive_skip_grams))
```
Take a look at a few positive skip-grams.
```
for target, context in positive_skip_grams[:5]:
    print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})")
```
### Negative sampling for one skip-gram
The `skipgrams` function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the `tf.random.log_uniform_candidate_sampler` function to sample `num_ns` negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.
Key point: *num_ns* (number of negative samples per positive context word) between [5, 20] is [shown to work](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) best for smaller datasets, while *num_ns* between [2,5] suffices for larger datasets.
```
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
    true_classes=context_class,  # class that should be sampled as 'positive'
    num_true=1,                  # each positive skip-gram has 1 positive context class
    num_sampled=num_ns,          # number of negative context words to sample
    unique=True,                 # all the negative samples should be unique
    range_max=vocab_size,        # pick index of the samples from [0, vocab_size)
    seed=SEED,                   # seed for reproducibility
    name="negative_sampling"     # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
```
### Construct one training example
For a given positive `(target_word, context_word)` skip-gram, you now also have `num_ns` negative sampled context words that do not appear in the window size neighborhood of `target_word`. Batch the `1` positive `context_word` and `num_ns` negative context words into one tensor. This produces a set of positive skip-grams (labelled as `1`) and negative samples (labelled as `0`) for each target word.
```
# Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
```
Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
```
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
```
A tuple of `(target, context, label)` tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape `(1,)` while the context and label are of shape `(1+num_ns,)`.
```
print(f"target :", target)
print(f"context :", context )
print(f"label :", label )
```
### Summary
This picture summarizes the procedure of generating training example from a sentence.

## Lab Task 1: Compile all steps into one function
### Skip-gram Sampling table
A large dataset means a larger vocabulary, with a higher count of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as `the`, `is`, `on`) don't add much useful information for the model to learn from. [Mikolov et al.](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The `tf.keras.preprocessing.sequence.skipgrams` function accepts a sampling table argument to encode probabilities of sampling any token. You can use `tf.keras.preprocessing.sequence.make_sampling_table` to generate a word-frequency rank based probabilistic sampling table and pass it to the `skipgrams` function. Take a look at the sampling probabilities for a `vocab_size` of 10.
```
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
```
`sampling_table[i]` denotes the probability of sampling the i-th most common word in a dataset. The function assumes a [Zipf's distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of the word frequencies for sampling.
Key point: The `tf.random.log_uniform_candidate_sampler` already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using this distribution-weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
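As a sketch of the subsampling idea (this is the heuristic from Mikolov et al.'s paper linked above, not the exact formula Keras' `make_sampling_table` uses):

```python
import math

# Mikolov et al.'s subsampling heuristic: a word with corpus frequency f
# is kept with probability
#     p_keep(f) = min(1, (sqrt(f/t) + 1) * t / f),  threshold t ~ 1e-3,
# so very frequent words (stopwords) are sampled far less often.
def p_keep(f, t=1e-3):
    return min(1.0, (math.sqrt(f / t) + 1.0) * t / f)

for f in [0.05, 0.01, 0.001, 0.0001]:
    print('f = %7.4f    p_keep = %.3f' % (f, p_keep(f)))
```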
### Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
```
# Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
    # Elements of each training example are appended to these lists.
    targets, contexts, labels = [], [], []

    # Build the sampling table for vocab_size tokens.
    # TODO 1a
    sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)

    # Iterate over all sequences (sentences) in dataset.
    for sequence in tqdm.tqdm(sequences):

        # Generate positive skip-gram pairs for a sequence (sentence).
        positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
            sequence,
            vocabulary_size=vocab_size,
            sampling_table=sampling_table,
            window_size=window_size,
            negative_samples=0)

        # Iterate over each positive skip-gram pair to produce training examples
        # with positive context word and negative samples.
        # TODO 1b
        for target_word, context_word in positive_skip_grams:
            context_class = tf.expand_dims(
                tf.constant([context_word], dtype="int64"), 1)
            negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
                true_classes=context_class,
                num_true=1,
                num_sampled=num_ns,
                unique=True,
                range_max=vocab_size,
                seed=SEED,
                name="negative_sampling")

            # Build context and label vectors (for one target word)
            negative_sampling_candidates = tf.expand_dims(
                negative_sampling_candidates, 1)

            context = tf.concat([context_class, negative_sampling_candidates], 0)
            label = tf.constant([1] + [0]*num_ns, dtype="int64")

            # Append each element from the training example to global lists.
            targets.append(target_word)
            contexts.append(context)
            labels.append(label)

    return targets, contexts, labels
```
## Lab Task 2: Prepare training data for Word2Vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!
### Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
```
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
```
Read text from the file and take a look at the first few lines.
```
with open(path_to_file) as f:
    lines = f.read().splitlines()

for line in lines[:20]:
    print(line)
```
Use the non-empty lines to construct a `tf.data.TextLineDataset` object for the next steps.
```
# TODO 2a
text_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool))
```
### Vectorize sentences from the corpus
You can use the `TextVectorization` layer to vectorize sentences from the corpus. Learn more about using this layer in this [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial. Notice from the first few sentences above that the text needs to be converted to a single case and punctuation needs to be removed. To do this, define a `custom_standardization` function that can be used in the `TextVectorization` layer.
```
# We create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
    lowercase = tf.strings.lower(input_data)
    return tf.strings.regex_replace(lowercase,
                                    '[%s]' % re.escape(string.punctuation), '')
# Define the vocabulary size and number of words in a sequence.
vocab_size = 4096
sequence_length = 10
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Set output_sequence_length length to pad all samples to same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
```
Call `adapt` on the text dataset to create vocabulary.
```
vectorize_layer.adapt(text_ds.batch(1024))
```
Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with `get_vocabulary()`. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
```
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
```
`vectorize_layer` can now be used to generate vectors for each element in the `text_ds`.
```
def vectorize_text(text):
    text = tf.expand_dims(text, -1)
    return tf.squeeze(vectorize_layer(text))
# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
```
### Obtain sequences from the dataset
You now have a `tf.data.Dataset` of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the `generate_training_data()` defined earlier uses non-TF python/numpy functions, you could also use a `tf.py_function` or `tf.numpy_function` with `tf.data.Dataset.map()`.
```
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
```
Take a look at few examples from `sequences`.
```
for seq in sequences[:5]:
    print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
```
### Generate training examples from sequences
`sequences` is now a list of int encoded sentences. Just call the `generate_training_data()` function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of `targets`, `contexts`, and `labels` should be the same, representing the total number of training examples.
```
targets, contexts, labels = generate_training_data(
sequences=sequences,
window_size=2,
num_ns=4,
vocab_size=vocab_size,
seed=SEED)
print(len(targets), len(contexts), len(labels))
```
### Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the `tf.data.Dataset` API. After this step, you would have a `tf.data.Dataset` object of `(target_word, context_word), (label)` elements to train your Word2Vec model!
```
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
```
Add `cache()` and `prefetch()` to improve performance.
```
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
```
## Lab Task 3: Model and Training
The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset.
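In NumPy terms (toy numbers; a sketch of the shape arithmetic only, not the model itself):

```python
import numpy as np

# One target embedding dotted against (num_ns + 1) candidate context
# embeddings yields one logit per candidate context word.
embedding_dim, num_ns = 3, 2
rng = np.random.default_rng(0)
target_emb = rng.normal(size=(embedding_dim,))               # shape (d,)
context_embs = rng.normal(size=(num_ns + 1, embedding_dim))  # shape (num_ns+1, d)
logits = context_embs @ target_emb                           # shape (num_ns+1,)
print(logits.shape)   # (3,) -- one logit per candidate context word
```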
### Subclassed Word2Vec Model
Use the [Keras Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) to define your Word2Vec model with the following layers:
* `target_embedding`: A `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are `(vocab_size * embedding_dim)`.
* `context_embedding`: Another `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in `target_embedding`, i.e. `(vocab_size * embedding_dim)`.
* `dots`: A `tf.keras.layers.Dot` layer that computes the dot product of target and context embeddings from a training pair.
* `flatten`: A `tf.keras.layers.Flatten` layer to flatten the results of `dots` layer into logits.
With the subclassed model, you can define the `call()` function that accepts `(target, context)` pairs which can then be passed into their corresponding embedding layer. Reshape the `context_embedding` to perform a dot product with `target_embedding` and return the flattened result.
Key point: The `target_embedding` and `context_embedding` layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.
```
class Word2Vec(Model):
    def __init__(self, vocab_size, embedding_dim):
        super(Word2Vec, self).__init__()
        self.target_embedding = Embedding(vocab_size,
                                          embedding_dim,
                                          input_length=1,
                                          name="w2v_embedding")
        self.context_embedding = Embedding(vocab_size,
                                           embedding_dim,
                                           input_length=num_ns+1)
        self.dots = Dot(axes=(3, 2))
        self.flatten = Flatten()

    def call(self, pair):
        target, context = pair
        we = self.target_embedding(target)
        ce = self.context_embedding(context)
        dots = self.dots([ce, we])
        return self.flatten(dots)
```
### Define loss function and compile model
For simplicity, you can use `tf.keras.losses.CategoricalCrossentropy` as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:
``` python
def custom_loss(x_logit, y_true):
    return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
```
It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the `tf.keras.optimizers.Adam` optimizer.
```
# TODO 3a
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
Also define a callback to log training statistics for tensorboard.
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
```
Train the model with `dataset` prepared above for some number of epochs.
```
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
```
Tensorboard now shows the Word2Vec model's accuracy and loss.
```
!tensorboard --bind_all --port=8081 --logdir logs
```
Run the following command in **Cloud Shell:**
<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code>
Make sure to replace `<instance-zone>`, `<notebook-instance-name>` and `<project-id>`.
In Cloud Shell, click *Web Preview* > *Change Port* and insert port number *8081*. Click *Change and Preview* to open the TensorBoard.

**To quit the TensorBoard, click Kernel > Interrupt kernel**.
## Lab Task 4: Embedding lookup and analysis
Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
```
# TODO 4a
weights = word2vec.get_layer('w2v_embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
```
Create and save the vectors and metadata file.
```
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
```
Download the `vectors.tsv` and `metadata.tsv` to analyze the obtained embeddings in the [Embedding Projector](https://projector.tensorflow.org/).
```
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
```
## Next steps
This tutorial has shown you how to implement a skip-gram Word2Vec model with negative sampling from scratch and visualize the obtained word embeddings.
* To learn more about word vectors and their mathematical representations, refer to these [notes](https://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf).
* To learn more about advanced text processing, read the [Transformer model for language understanding](https://www.tensorflow.org/tutorials/text/transformer) tutorial.
* If you’re interested in pre-trained embedding models, check out [Exploring the TF-Hub CORD-19 Swivel Embeddings](https://www.tensorflow.org/hub/tutorials/cord_19_embeddings_keras) or the [Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder).
* You may also like to train the model on a new dataset (there are many available in [TensorFlow Datasets](https://www.tensorflow.org/datasets)).
# Working with Datastores
Although it's fairly common for data scientists to work with data on their local file system, in an enterprise environment it can be more effective to store the data in a central location where multiple data scientists can access it. In this lab, you'll store data in the cloud, and use an Azure Machine Learning *datastore* to access it.
> **Important**: The code in this notebook assumes that you have completed the first two tasks in Lab 4A. If you have not done so, go and do them now!
## Connect to Your Workspace
To access your datastore using the Azure Machine Learning SDK, you need to connect to your workspace.
> **Note**: If the authenticated session with your Azure subscription has expired since you completed the previous exercise, you'll be prompted to reauthenticate.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```
## View Datastores in the Workspace
The workspace contains several datastores, including the **aml_data** datastore you created in the [previous task](labdocs/Lab04A.md).
Run the following code to retrieve the *default* datastore, and then list all of the datastores indicating which is the default.
```
from azureml.core import Datastore
# Get the default datastore
default_ds = ws.get_default_datastore()
# Enumerate all datastores, indicating which is the default
for ds_name in ws.datastores:
print(ds_name, "- Default =", ds_name == default_ds.name)
```
## Get a Datastore to Work With
You want to work with the **aml_data** datastore, so you need to get it by name:
```
aml_datastore = Datastore.get(ws, 'aml_data')
print(aml_datastore.name,":", aml_datastore.datastore_type + " (" + aml_datastore.account_name + ")")
```
## Set the Default Datastore
You are primarily going to work with the **aml_data** datastore in this course; so for convenience, you can set it to be the default datastore:
```
ws.set_default_datastore('aml_data')
default_ds = ws.get_default_datastore()
print(default_ds.name)
```
## Upload Data to a Datastore
Now that you have identified the datastore you want to work with, you can upload files from your local file system so that they will be accessible to experiments running in the workspace, regardless of where the experiment script is actually being run.
```
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
```
## Train a Model from a Datastore
When you uploaded the files in the code cell above, note that the code returned a *data reference*. A data reference provides a way to pass the path to a folder in a datastore to a script, regardless of where the script is being run, so that the script can access data in the datastore location.
The following code gets a reference to the **diabetes-data** folder where you uploaded the diabetes CSV files, and specifically configures the data reference for *download* - in other words, it can be used to download the contents of the folder to the compute context where the data reference is being used. Downloading data works well for small volumes of data that will be processed on local compute. When working with remote compute, you can also configure a data reference to *mount* the datastore location and read data directly from the data source.
> **More Information**: For more details about using datastores, see the [Azure ML documentation](https://docs.microsoft.com/azure/machine-learning/how-to-access-data).
```
data_ref = default_ds.path('diabetes-data').as_download(path_on_compute='diabetes_data')
print(data_ref)
```
To use the data reference in a training script, you must define a parameter for it. Run the following two code cells to create:
1. A folder named **diabetes_training_from_datastore**
2. A script that trains a classification model by using the training data in all of the CSV files in the folder referenced by the data reference parameter passed to it.
```
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_from_datastore'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created.')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import os
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Get parameters
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder reference')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data from the data reference
data_folder = args.data_folder
print("Loading data from", data_folder)
# Load all files and concatenate their contents as a single dataframe
all_files = os.listdir(data_folder)
diabetes = pd.concat((pd.read_csv(os.path.join(data_folder,csv_file)) for csv_file in all_files))
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', float(reg))  # use the built-in float; np.float is deprecated
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
```
The script will load the training data from the data reference passed to it as a parameter, so now you just need to set up the script parameters to pass the file reference when you run the experiment.
```
from azureml.train.estimator import Estimator
from azureml.core import Experiment, Environment
from azureml.widgets import RunDetails
# Create a Python environment
env = Environment("env")
env.python.user_managed_dependencies = True
env.docker.enabled = False
# Set up the parameters
script_params = {
'--regularization': 0.1, # regularization rate
'--data-folder': data_ref # data reference to download files from datastore
}
# Create an estimator
estimator = Estimator(source_directory=experiment_folder,
entry_script='diabetes_training.py',
script_params=script_params,
compute_target = 'local',
environment_definition=env
)
# Create an experiment
experiment_name = 'diabetes-training'
experiment = Experiment(workspace=ws, name=experiment_name)
# Run the experiment
run = experiment.submit(config=estimator)
# Show the run details while running
RunDetails(run).show()
run.wait_for_completion()
```
The first time the experiment is run, it may take some time to set up the Python environment - subsequent runs will be quicker.
When the experiment has completed, in the widget, view the output log to verify that the data files were downloaded.
As with all experiments, you can view the details of the experiment run in [Azure ML studio](https://ml.azure.com), and you can write code to retrieve the metrics and files generated:
```
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
```
Once again, you can register the model that was trained by the experiment.
```
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Using Datastore'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List the registered models
print("Registered Models:")
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
```
In this exercise, you've explored some options for working with data in the form of *datastores*.
Azure Machine Learning offers a further level of abstraction for data in the form of *datasets*, which you'll explore next.
<a href="https://colab.research.google.com/github/satyajitghana/TSAI-DeepVision-EVA4.0/blob/master/05_CodingDrill/EVA4S5F9.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Import Libraries
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
```
## Data Transformations
We first start by defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images the model might not otherwise see.
```
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.RandomRotation((-7.0, 7.0), fill=(1,)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
```
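As a quick aside on the comment in the transforms above about `(0.1307)` versus `(0.1307,)`: it is the trailing comma, not the parentheses, that makes a tuple, which is why `Normalize` needs the comma.

```python
# Quick check of the comment above: parentheses alone don't create a tuple,
# the trailing comma does.
not_a_tuple = (0.1307)   # just a parenthesized float
a_tuple = (0.1307,)      # a one-element tuple

print(type(not_a_tuple).__name__)  # float
print(type(a_tuple).__name__)      # tuple
```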
# Dataset and Creating Train/Test Split
```
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
```
# Dataloader Arguments & Test/Train Dataloaders
```
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - in practice you'd fetch these from the command line
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
```
# The model
Let's start with the model we first saw
```
import torch.nn.functional as F
dropout_value = 0.1
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Input Block
self.convblock1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 26
# CONVOLUTION BLOCK 1
self.convblock2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(32),
nn.Dropout(dropout_value)
) # output_size = 24
# TRANSITION BLOCK 1
self.convblock3 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
) # output_size = 24
self.pool1 = nn.MaxPool2d(2, 2) # output_size = 12
# CONVOLUTION BLOCK 2
self.convblock4 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 10
self.convblock5 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 8
self.convblock6 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 6
self.convblock7 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=1, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 6
# OUTPUT BLOCK
self.gap = nn.Sequential(
nn.AvgPool2d(kernel_size=6)
) # output_size = 1
self.convblock8 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
# nn.BatchNorm2d(10),
# nn.ReLU(),
# nn.Dropout(dropout_value)
)
self.dropout = nn.Dropout(dropout_value)
def forward(self, x):
x = self.convblock1(x)
x = self.convblock2(x)
x = self.convblock3(x)
x = self.pool1(x)
x = self.convblock4(x)
x = self.convblock5(x)
x = self.convblock6(x)
x = self.convblock7(x)
x = self.gap(x)
x = self.convblock8(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=-1)
```
# Model Params
We can't emphasize enough how important viewing the model summary is.
Unfortunately, there is no built-in model visualizer, so we rely on the external `torchsummary` package.
```
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
```
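As a rough sanity check on the summary (a sketch we added; `conv_params` is our own helper, not a torch function), the per-layer weight counts can be reproduced with plain arithmetic:

```python
# Back-of-the-envelope check of the summary above (pure arithmetic, no torch).
# A Conv2d(in_ch, out_ch, kernel k, bias=False) layer holds in_ch*out_ch*k*k weights.
def conv_params(in_ch, out_ch, k, bias=False):
    return in_ch * out_ch * k * k + (out_ch if bias else 0)

print(conv_params(1, 16, 3))    # convblock1's conv: 144
print(conv_params(16, 32, 3))   # convblock2's conv: 4608
print(conv_params(32, 10, 1))   # convblock3's 1x1 conv: 320
```

Each `BatchNorm2d(ch)` additionally contributes `2*ch` learnable parameters (a per-channel weight and bias).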
# Training and Testing
All right, so the model summary above shows our parameter count. The purpose of this notebook is to set things right for our future experiments.
Looking at logs can be boring, so we'll introduce a **tqdm** progress bar to get cooler logs.
Let's write train and test functions
```
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
# In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
train_losses.append(loss.item())  # store a plain float, not the graph-attached tensor
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_acc.append(100*correct/processed)
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_acc.append(100. * correct / len(test_loader.dataset))
from torch.optim.lr_scheduler import StepLR
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# scheduler = StepLR(optimizer, step_size=6, gamma=0.1)
EPOCHS = 20
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
# scheduler.step()
test(model, device, test_loader)
```
# ML 101
## Evaluation (Classification)
The metrics that you choose to evaluate your machine learning algorithms are very important.
Choice of metrics influences how the performance of machine learning algorithms is measured and compared. They influence how you weight the importance of different characteristics in the results and your ultimate choice of which algorithm to choose.
In this notebook we explore the following performance metrics using a [hold-out partition](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html):
1. Confusion Matrix
2. Accuracy
3. Precision
4. Recall
5. F1-score
6. MCC
7. ROC curve

```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import matthews_corrcoef
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
def plot_decision_boundary(X, y, clf):
x_min, x_max = X[:, 0].min() - 2, X[:, 0].max() + 2
y_min, y_max = X[:, 1].min() - 2, X[:, 1].max() + 2
xx, yy = np.mgrid[x_min:x_max:.01, y_min:y_max:.01]
grid = np.c_[xx.ravel(), yy.ravel()]
probs = clf.predict_proba(grid)[:, 1].reshape(xx.shape)
f, ax = plt.subplots(figsize=(8, 6))
contour = ax.contourf(xx, yy, probs, 25, cmap="RdBu", vmin=0, vmax=1)
ax_c = f.colorbar(contour)
ax_c.set_label("$P(y = 1)$")
ax_c.set_ticks([0, .25, .5, .75, 1])
ax.scatter(X[:,0], X[:, 1], c=y, s=50, cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white", linewidth=1)
ax.set(aspect="equal", xlim=(x_min, x_max), ylim=(y_min, y_max), xlabel="$X_1$", ylabel="$X_2$")
plt.show()
```
## Hard Toy Dataset
```
# import dataset
df = pd.read_csv('https://media.githubusercontent.com/media/mariolpantunes/ml101/main/datasets/toy_dataset_01.csv')
# print the first rows of the dataset
df.head()
sns.relplot(x="X1", y="X2", hue="Y", data=df);
from sklearn.model_selection import train_test_split
X = df[['X1', 'X2']].to_numpy()
y = df['Y'].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7, stratify=y)
g = sns.relplot(x=X_train[:,0], y=X_train[:,1], hue=y_train)
g.set(ylim=(-1, 6))
g.set(xlim=(-1, 6))
g = sns.relplot(x=X_test[:,0], y=X_test[:,1], hue=y_test)
g.set(ylim=(-1, 6))
g.set(xlim=(-1, 6))
```
### Logistic Regression
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression().fit(X_train, y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
### Naive Bayes
```
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB().fit(X_train,y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
### SVM
```
from sklearn.svm import SVC
clf = SVC(probability=True, kernel='rbf').fit(X_train,y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
### Neural Network
```
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(random_state=7, max_iter=5000).fit(X_train,y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
### KNN
```
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=7).fit(X_train,y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
### Decision Trees
```
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier().fit(X_train,y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
### Random Forest
```
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=0).fit(X_train,y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
### Gradient Boosting
```
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier(random_state=0).fit(X_train,y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
### Voting Ensemble
```
from sklearn.ensemble import VotingClassifier
#c1 = LogisticRegression()
c2 = GaussianNB()
c3 = SVC(probability=True, kernel='rbf')
c4 = MLPClassifier(random_state=7, max_iter=5000)
c5 = KNeighborsClassifier(n_neighbors=7)
c6 = DecisionTreeClassifier()
c7 = RandomForestClassifier(random_state=42)
c8 = GradientBoostingClassifier(random_state=42)
clfs = [('nb', c2), ('svm', c3), ('nn', c4),
('knn', c5), ('dt', c6), ('rf', c7), ('gbc', c8)]
clf = VotingClassifier(clfs, voting='soft').fit(X_train,y_train)
plot_decision_boundary(X_train, y_train, clf)
plot_decision_boundary(X_test, y_test, clf)
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
sns.heatmap(cm, annot=True)
a = accuracy_score(y_test, y_pred)
p = precision_score(y_test, y_pred)
r = recall_score(y_test, y_pred)
f = f1_score(y_test, y_pred)
m = matthews_corrcoef(y_test, y_pred)
print(f'Acc {a}\nPre {p}\nRec {r}\nF1 {f}\nMCC {m}')
y_pred_proba = clf.predict_proba(X_test)[::,1]
fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
```
<div class="alert block alert-info alert">
# <center> Introductory Python3 Examples
## <center>Karl N. Kirschner<br>Bonn-Rhein-Sieg University of Applied Sciences<br>Sankt Augustin, Germany
## <center> Demo of Jupyter Notebook / Colaboratory
<hr style="border:2px solid gray"> </hr>
## What might I consider good coding to be?
### Example 1
- markdown cells (context)
- code cells
<hr style="border:2px dashed dodgerblue"> </hr>
#### Simple Program
Author: **Karl N. Kirschner** (kkirsc2m; **123456**)
Date: March 27, 2022
An example of Kirschner's expectation for good coding.
Listing some cool scientific women.
</div>
```
index = 1
famous_scientists = []
famous_scientists = ['Ada Love', 'Marie Curie', 'Rosalind Franklin', 'Maria Goeppert Mayer']
for person in famous_scientists:
print(f'Scientist number {index} is {person}.')
index += 1
```
<hr style="border:2px dashed dodgerblue"> </hr>
## Talking Points:
1. Commenting code using Jupyter's markdown capabilities
- context
- convey thoughts and ideas
2. Creating objects (i.e. `index` and `famous_scientists`)
- human readable (versus `x` or `s`)
- proper spacing between list items
3. Create a 'for loop'
- reduces chances of error
4. Python incrementing a number by plus 1 (note: could also `-=`)
5. Print statement with f-string formatting
- concise and clear
6. Overall concisely and cleanly written code
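As a side sketch not in the original lesson, the manual `index` bookkeeping of Example 1 (talking point 4) can also be replaced by Python's built-in `enumerate`:

```python
# enumerate() yields each item together with its position,
# so no separate counter object is needed.
famous_scientists = ['Ada Love', 'Marie Curie', 'Rosalind Franklin', 'Maria Goeppert Mayer']

for index, person in enumerate(famous_scientists, start=1):
    print(f'Scientist number {index} is {person}.')
```

Whether this is "better" depends on the lesson's goals; the explicit counter makes the incrementing step visible, which is part of what Example 1 teaches.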
<font color='dodgerblue'>**Side Note**</font>
From a Jupyter / Colaboratory notebook, you can save the code to a python file, and then run it from a terminal
1. Save notebook as a python .py file
2. Execute in a terminal via `python3 intro_example.py`
<hr style="border:2px solid gray"> </hr>
### Example 2
There are always multiple solutions for achieving the same goal.
Here is an example of a poor piece of coding:
<hr style="border:2px dashed dodgerblue"> </hr>
```
#Simple Program, Karl N. Kirschner (kkirsc2m; 123456) March 27, 2022
#An example of what a poorly writting code might be to illustrate how code can be improved through iterative revisions. Taking the cube of a number.
print('The cube of', 0, 'is', 0*0*0)
print('The cube of {0} is {1}.'.format(1, 1*1*1))
print('The cube of {0} is {1}.'.format(2, 2*2**2))
print('The cube of', 3, 'is', 3*3*3, '.')
print('The cube of', 3, 'is {0}.'.format(4*4*4))
print('The cube of {0} is {1}.'.format(4, 5*5*5))
print('The cube of {0} is {1}.'.format(3, 6*6*6))
print("The cube of {0} is {1}.".format(6, 7*7*7))
print(f"The cube of {7} is {8*8*8}.")
print(f'The cube of 9 is {9*9*9}.')
```
<hr style="border:2px dashed dodgerblue"> </hr>
**Problems** with the above code:
1. Lots of information grouped at once in the first two lines
2. Comment runs off the screen - must scroll to read it all
3. Inconsistent formatting (i.e. printing and blank line usage)
4. Not concisely written (10 lines to print the cube of each number)
- notice that the errors are hard to see (missing a period, usage of 3 instead of 4, raising to a power of 2)
We can do **better**, and reduce the chances of introducing a user error:
- use markdown cells
- use code cells
<hr style="border:2px dashed dodgerblue"> </hr>
Simple Program Improved
Author: Karl N. Kirschner (kkirsc2m; 123456)
Date: March 27, 2022
---
An example of how code can be improved through iterative revisions.
Task: Take the cube of a tuple of numbers.
```
number_list = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
for number in number_list:
number_cube = number**3
print('The cube of {0} is {1}.'.format(number, number_cube))
```
<hr style="border:2px dashed dodgerblue"> </hr>
Now, let's further improve the code cell part above - use a built-in function (range)
The following is arguably better than the example above because the code is
1. still readable (i.e. **not too concise or too "clever"**), and
2. the number of variables/objects is kept to a minimum, i.e.
   - two: `number` and `number_list`, versus
   - three: `number`, `number_list` and `number_cube`, and
3. it uses an `f-string` print statement.
```
number_list = range(10)
for number in number_list:
print(f'The cube of {number} is {number**3}.')
```
But there is even yet another solution (with only one variable/object).
```
number = 0

while number < 10:
    print(f'The cube of {number} is {number**3}.')
    number += 1
```
(Whether the use of a `while` or `for` loop depends on the problem at hand.)
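For instance — as a hypothetical illustration, not part of the original lesson — a `while` loop is the natural fit when the number of iterations is not known in advance:

```
# Collect cubes only while they stay below a limit; the number of iterations
# is not known up front, so a while loop fits better than a for loop here.
number = 0
cubes = []
while number**3 <= 500:
    cubes.append(number**3)
    number += 1

print(cubes)  # cubes of 0 through 7, since 8**3 = 512 exceeds 500
```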
<font color='dodgerblue'>Also note</font>: What the above examples demonstrate is that better coding can be achieved through **iterations of its writing** (i.e. <font color='dodgerblue'>code revision</font>).
# E-news Express
## Import all the necessary libraries
```
import warnings
warnings.filterwarnings('ignore') # ignore warnings and do not display them
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
```
## 1. Explore the dataset and extract insights using Exploratory Data Analysis. (10 Marks)
### Exploratory Data Analysis - Step by step approach
Typical Data exploration activity consists of the following steps:
1. Importing Data
2. Variable Identification
3. Variable Transformation/Feature Creation
4. Missing value detection
5. Univariate Analysis
6. Bivariate Analysis
### Reading the Data into a DataFrame
```
#read the dataset abtest.csv
data = pd.read_csv("abtest.csv")
```
### Data Overview
- View a few rows of the data frame.
- Check the shape and data types of the data frame. Add observations.
- Fix the data-types (if needed).
- Missing Value Check.
- Summary statistics from the data frame. Add observations.
```
# view a few rows of the data frame
data.head()
# check the number of rows and columns
data.shape
```
### Observation:
- The data frame has 100 rows and 6 columns
```
# check data types
data.info()
# check if there are missing values
# return sum of missing values
data.isnull().sum().sum()
```
### Observation:
- There are no missing values in the data frame
```
# check memory usage before converting the object data types to category
memory_usage_before = data.memory_usage().sum()
print(memory_usage_before)
# converting object data types to category
# this will reduce memory size and also help in analysis
columns = ["group", "landing_page", "converted", "language_preferred"]
for column in columns:
    data[column] = data[column].astype("category")

data.info()
# check memory usage after converting the object data types to category
memory_usage_after_conversion = data.memory_usage().sum()
print(memory_usage_after_conversion)
```
#### Observations:
- All columns that had object data types are now category
- The memory usage was reduced from 4.8+ KB to 2.6 KB, as the before-and-after comparison shows
```
# show the statistical summary of all columns in the data frame
data.describe(include='all').T
```
### Observation:
- The minimum and maximum time spent on the pages is 0.19 and 10.71 minutes respectively
- There are two groups of users.
- The converted users were a total of 54
- There are three unique values of languages preferred
### Let us find the values counts of each unique value in the given Series
```
# checking the value counts of each unique value in the "group" series
data["group"].value_counts()
```
#### Observation:
- There are two equal groups of users
- The old landing page was served to 50 users(control)
- The new landing page was served to 50 users (treatment)
```
# checking the value counts of each unique value in the "landing page" series
data["landing_page"].value_counts()
```
#### Observations:
- There are two landing pages; new and old
- Each landing page has a total of 50 sampled users
```
# checking the value counts of each unique value in the "converted" series
data["converted"].value_counts()
```
#### Observations:
- There are two categories under converted; yes and no
- 54 users were converted and 46 users were not converted
```
# checking the value counts of each unique value in the "language_preferred" series
data["language_preferred"].value_counts()
```
#### Observations:
- There are three languages that users could choose; French, Spanish, English
- 34 users preferred French
- 34 users preferred Spanish
- and 32 users preferred English
```
# statistical summary of time spent on the pages
data["time_spent_on_the_page"].describe()
#create a subsetted dataframe for old landing page users
df_old = data[data["landing_page"]=="old"]
# statistical summary of time spent on the old page
df_old["time_spent_on_the_page"].describe()
# create a subsetted dataframe for new landing page users
df_new = data[data["landing_page"]=="new"]
# statistical summary of time spent on the new page
df_new["time_spent_on_the_page"].describe()
```
#### Observations:
According to the statistical summary above:
- The maximum time spent on the new page is greater than the maximum time spent on the old page
- The average time spent on the new page is greater than the average time spent on the old page
- The minimum time spent on the new page is greater than the minimum time spent on the old page
### Univariate Analysis
```
# function to plot a boxplot and a histogram along the same scale
def histogram_boxplot(data, feature, figsize=(12, 7), kde=False, bins=None):
    """
    Boxplot and histogram combined

    data: dataframe
    feature: dataframe column
    figsize: size of figure (default (12, 7))
    kde: whether to show the density curve (default False)
    bins: number of bins for histogram (default None)
    """
    f2, (ax_box2, ax_hist2) = plt.subplots(
        nrows=2,  # number of rows of the subplot grid = 2
        sharex=True,  # x-axis will be shared among all subplots
        gridspec_kw={"height_ratios": (0.25, 0.75)},
        figsize=figsize,
    )  # creating the 2 subplots
    sns.boxplot(
        data=data, x=feature, ax=ax_box2, showmeans=True, color="violet"
    )  # boxplot will be created and a star will indicate the mean value of the column
    sns.histplot(
        data=data, x=feature, kde=kde, ax=ax_hist2, bins=bins, palette="winter"
    ) if bins else sns.histplot(
        data=data, x=feature, kde=kde, ax=ax_hist2
    )  # for the histogram
    ax_hist2.axvline(
        data[feature].mean(), color="green", linestyle="--"
    )  # add mean to the histogram
    ax_hist2.axvline(
        data[feature].median(), color="black", linestyle="-"
    )  # add median to the histogram

histogram_boxplot(df_old, "time_spent_on_the_page")
plt.xlabel("Time spent on old page");

histogram_boxplot(df_new, "time_spent_on_the_page")
plt.xlabel("Time spent on new page");
# function to create labeled barplots
def labeled_barplot(data, feature, perc=False, n=None):
    """
    Barplot with percentage at the top

    data: dataframe
    feature: dataframe column
    perc: whether to display percentages instead of counts (default is False)
    n: displays the top n category levels (default is None, i.e., display all levels)
    """
    total = len(data[feature])  # length of the column
    count = data[feature].nunique()
    if n is None:
        plt.figure(figsize=(count + 1, 5))
    else:
        plt.figure(figsize=(n + 1, 5))

    plt.xticks(rotation=90, fontsize=15)
    ax = sns.countplot(data=data, x=feature, palette="Paired",
                       order=data[feature].value_counts().index[:n].sort_values())
    for p in ax.patches:
        if perc == True:
            label = "{:.1f}%".format(100 * p.get_height() / total)  # percentage of each class of the category
        else:
            label = p.get_height()  # count of each level of the category
        x = p.get_x() + p.get_width() / 2  # x position of the annotation
        y = p.get_height()  # y position of the annotation
        ax.annotate(label, (x, y), ha="center", va="center", size=12,
                    xytext=(0, 5), textcoords="offset points")  # annotate the bar
    plt.show()  # show the plot

# what percentage of users get converted after visiting the pages
labeled_barplot(data, "converted")
```
#### Observations:
The bar plot shows that 54% of the users get converted, compared to 46% that don't.
- But our focus is where the most conversions occur: is it on the new page or the old page?
- Let's see that below
### Let us see the percentage of people that got converted on the different landing pages
1. Old page
```
# bar graph to show the percentage of people that visited the old page and got converted
labeled_barplot(df_old, "converted")
# bar graph to show the percentage of people that visited the new page and got converted
labeled_barplot(df_new, "converted")
```
#### Observations:
From the analysis above, a higher proportion of the users who visited the new landing page got converted.
```
# bar graph to show the percentage of users who preferred the different languages
labeled_barplot(data, "language_preferred")
```
#### Observations:
- Out of 100 sampled users; 34 prefer Spanish, 34 prefer French and 32 prefer English
```
# bar graph to show the percentage of users on the different pages
labeled_barplot(data, "landing_page")
```
### Observation:
- Like the objective states, 100 users were sampled and divided into two equal groups. The graph above proves this point as 50 users were served the old page and 50 users were served the new page
### Bivariate Analysis
### Landing page vs time spent on the page
```
sns.set(rc = {'figure.figsize':(10,8)}) # set size of the seaborn plots
sns.barplot(x="landing_page", y="time_spent_on_the_page", data=data);
```
### Observation:
- Users tend to spend more time on the new page as compared to the old page
### Converted users vs time spent on the page
```
sns.barplot(x="converted", y="time_spent_on_the_page", data=data);
```
### Observation:
- Users that got converted spent more time on the pages.
### Language preferred vs Converted status
```
sns.countplot(x="language_preferred", hue="converted", data=data);
```
### Observation:
- The ratio of users that got converted to those that did not get converted is higher for English users compared to other language users.
## 2. Do the users spend more time on the new landing page than the existing landing page? (10 Marks)
### Perform Visual Analysis
```
# visual analysis of the time spent on the new and old page
sns.boxplot(x = 'landing_page', y = 'time_spent_on_the_page', data = data);
```
### Step 1: Define the null and alternate hypotheses
Let $\mu_1, \mu_2$ be mean time users spend on the existing page and new page respectively.
We will test the null hypothesis
>$H_0:\mu_1=\mu_2$
against the alternate hypothesis
>$H_a:\mu_2>\mu_1$
### Step 2: Select Appropriate test
- one-tailed test
- two population means
- independent populations
NB: This is a T-test for independent populations.
### Step 3: Decide the significance level
- As provided in the problem statement, let us assume the significance level($\alpha$) = 0.05
### Step 4: Collect and prepare data
```
# calculating the standard deviation of time spent on the new page
std_new_page = df_new["time_spent_on_the_page"].std()
print(round(std_new_page, 2))

# calculating the standard deviation of time spent on the old page
std_old_page = df_old["time_spent_on_the_page"].std()
print(round(std_old_page, 2))
```
#### Observation:
- The standard deviation of the time spent on the new page is different from that of the old page from the sample data.
- Hence the population deviations can not be assumed to be equal.
- This means we are to use the T-test for independent populations with unequal standard deviations (Welch's t-test)
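As a sketch of what the unequal-variance (Welch's) t-test computes — using synthetic samples, not this notebook's data — the statistic divides the difference of sample means by an unpooled standard error:

```
import numpy as np
from scipy.stats import ttest_ind

# synthetic stand-ins for the page-visit times (hypothetical values only)
rng = np.random.default_rng(0)
new = rng.normal(loc=6.0, scale=2.0, size=50)
old = rng.normal(loc=5.0, scale=1.0, size=50)

# Welch's t statistic: the two sample variances are NOT pooled
se = np.sqrt(new.var(ddof=1) / len(new) + old.var(ddof=1) / len(old))
t_manual = (new.mean() - old.mean()) / se

t_stat, p_value = ttest_ind(new, old, equal_var=False, alternative='greater')
assert np.isclose(t_stat, t_manual)  # scipy matches the hand formula
```

(The `alternative` keyword of `ttest_ind` requires SciPy 1.6 or later.)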
### Step 5: Calculate the p-value
```
#import the required functions
from scipy.stats import ttest_ind
# find the p-value
test_stat, p_value = ttest_ind(df_new['time_spent_on_the_page'], df_old['time_spent_on_the_page'], equal_var = False, alternative = 'greater')
print('The p-value is ' + str(p_value))
```
### Step 6: Compare the p-value with $\alpha$
```
# print whether the p-value is greater or less than alpha
if p_value < 0.05:
    print("The p-value is less than the level of significance")
else:
    print("The p-value is greater than the level of significance")
```
#### Observation:
The P-value is much less than $\alpha$
### Step 7: Draw inference
#### Observation:
Since the P-value is much less than the level of Significance, we reject the null hypothesis. Therefore we have enough statistical significance to conclude that users spend more time on the new landing page than the existing page.
## 3. Is the conversion rate (the proportion of users who visit the landing page and get converted) for the new page greater than the conversion rate for the old page? (10 Marks)
### Perform Visual analysis
```
# visual analysis of the proportion of users who visit the old landing page and get converted
labeled_barplot(df_old, "converted")
# visual analysis of the proportion of users who visit the new landing page and get converted
labeled_barplot(df_new, "converted")
```
### Step 1: Define the null and alternative hypothesis
Let $p_1, p_2$ be the proportions of users that get converted after visiting the new page and the old page, respectively.
We will test the null hypothesis
>$H_0:p_1 =p_2$
against the alternate hypothesis
>$H_a:p_1 > p_2$
### Step 2: Select Appropriate test
- Binomially distributed population
- One-tailed test
- two population proportions
- independent populations
NB: The appropriate test to be used will be the two-proportion Z-test
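To make the test concrete, here is a hand computation of the two-proportion z statistic using the conversion counts reported later in this notebook (33 of 50 converted on the new page, 21 of 50 on the old page):

```
import math

x1, n1 = 33, 50  # conversions and sample size, new page
x2, n2 = 21, 50  # conversions and sample size, old page

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 = p2
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(round(z, 4))  # → 2.4077
```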
### Step 3: Decide the significance level
- As provided in the problem statement, let us assume the significance level($\alpha$) = 0.05
### Step 4: Collect and prepare data
```
# check for total number of users in each group
data["group"].value_counts()
# calculate the number of users that were served the new page and got converted
new_page_converted_users = len(df_new[df_new["converted"] == "yes"])
print(new_page_converted_users)

# calculate the number of users that were served the old page and got converted
old_page_converted_users = len(df_old[df_old["converted"] == "yes"])
print(old_page_converted_users)
```
### Insight:
- Each group of users has 50 people
- 33 users got converted when they visited the new page
- 21 users got converted when they visited the old page
### Step 5: Calculate the p-value
```
# import the required function
from statsmodels.stats.proportion import proportions_ztest
# set the count of converted users
converted_users = np.array([33, 21])
# set the sample sizes
sampled_users = np.array([50, 50])
# find the p-value; the alternative is one-sided (p1 > p2), hence alternative='larger'
test_stat, p_value = proportions_ztest(converted_users, sampled_users, alternative='larger')
print('The p-value is ' + str(p_value))
```
### Step 6: Compare the p-value with $\alpha$
```
# print whether the p-value is greater or less than alpha
if p_value < 0.05:
    print("The p-value is less than the level of significance")
else:
    print("The p-value is greater than the level of significance")
```
### Step 7: Draw inference
As the p-value is less than the significance level 0.05, we reject the null hypothesis. Therefore, we have enough statistical significance to conclude that the conversion rate for the new page is greater than the conversion rate for the old page.
## 4. Are conversion and preferred language independent or related? (10 Marks)
### Perform Visual Analysis
```
# visual analysis of the conversion count depending on the language preferences
sns.countplot(x="language_preferred", hue="converted", data=data);
```
### Step 1: Define null and alternative hypothesis
We will test the null hypothesis
>$H_0:$ Conversion is independent of the preferred language.
against the alternate hypothesis
>$H_a:$ Conversion depends on preferred language.
### Step 2: Select appropriate test
- We are to test for independence
- The variables are categorical
- Number of observations in each category is greater than 5
NB: Therefore, the appropriate test for this problem is **Chi-Square Test for Independence**
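The "greater than 5" condition refers to the expected frequencies under independence, which `chi2_contingency` returns. A quick sketch on an illustrative table (the counts here are made up, not this notebook's data):

```
import numpy as np
from scipy.stats import chi2_contingency

# illustrative contingency table: rows = languages, columns = converted no/yes
table = np.array([[12, 20],
                  [16, 18],
                  [18, 16]])

chi2, p, dof, expected = chi2_contingency(table)
print(dof)                  # (3 - 1) * (2 - 1) = 2 degrees of freedom
print(expected.min() >= 5)  # chi-square approximation is reliable → True here
```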
### Step 3: Decide the significance level
- As provided in the problem statement, let us assume the significance level($\alpha$) = 0.05
### Step 4: Collect and prepare data
```
# preparing the contingency table
cont_table = pd.crosstab(data['language_preferred'], data['converted'])
cont_table
```
### Step 5: Calculate the p-value
```
from scipy.stats import chi2_contingency
chi, p_value, dof, expected = chi2_contingency(cont_table)
print('The p-value is ', p_value)
```
### Step 6: Compare the p-value with $\alpha$
```
# print whether the p-value is greater or less than alpha
if p_value < 0.05:
    print("The p-value is less than the level of significance")
else:
    print("The p-value is greater than the level of significance")
```
### Step 7: Draw inference
#### Observation:
As the p-value is much greater than the significance level, we can not reject the null hypothesis. Hence, we do not have enough statistical significance to conclude that conversion depends on preferred language at 5% significance level.
## 5. Is the time spent on the new page same for the different language users? (10 Marks)
### Perform visual analysis
```
# print the mean of time spent on the new page by the different groups of users with different language preferences
print(df_new.groupby("language_preferred")["time_spent_on_the_page"].mean())
# plot the box plot of the time spent on the new page depending on language preferred
sns.boxplot(x= "language_preferred", y = 'time_spent_on_the_page' , data = df_new);
```
### Step 1: Define the null and alternative hypothesis
Let $\mu_1, \mu_2, \mu_3$ be the means of time spent on the new page by different language users; English, French, Spanish respectively
We will test the null hypothesis
>$H_0: \mu_1 = \mu_2 = \mu_3$
against the alternative hypothesis
>$H_a: $ In at least in one category of language, mean time spent on the new page is different
### Step 2: Decide the significance level
- As provided in the problem statement, let us assume the significance level($\alpha$) = 0.05
### Step 3: Select the appropriate test
- This problem concerns three population means, therefore we shall use One-way ANOVA test
- NB: Let us go ahead and test if the assumptions are satisfied for this test
### Testing for normality
#### Shapiro-Wilk’s test
We will test the null hypothesis
>$H_0:$ Time spent on the new page follows a normal distribution
against the alternative hypothesis
>$H_a:$ Time spent on the new page does not follow a normal distribution
```
# Assumption 1: Normality
from scipy import stats
# find the p-value
w, p_value = stats.shapiro(df_new['time_spent_on_the_page'])
print('The p-value is', p_value)
```
Since the p-value of the test is very large, we fail to reject the null hypothesis that the time spent on the new landing page follows a normal distribution.
### Testing for Equality of variances
#### Levene’s test
We will test the null hypothesis
>$H_0$: All the population variances are equal
against the alternative hypothesis
>$H_a$: At least one variance is different from the rest
### Prepare the data
```
# create a dataframe for users on the new page that prefer the English Language
df_new_english = df_new[df_new["language_preferred"]=="English"]
# create a dataframe for users on the new page that prefer the French Language
df_new_french = df_new[df_new["language_preferred"]=="French"]
# create a dataframe for users on the new page that prefer the Spanish Language
df_new_spanish = df_new[df_new["language_preferred"]=="Spanish"]
# Assumption 2: homogeneity of variance
from scipy.stats import levene

# find the p-value
statistic, p_value = levene(df_new_english['time_spent_on_the_page'],
                            df_new_french['time_spent_on_the_page'],
                            df_new_spanish['time_spent_on_the_page'])
print('The p-value is', p_value)
```
Since the p-value is large, we fail to reject the null hypothesis of equality of variances.
### Observations:
The assumptions for the one-way ANOVA test are satisfied according to the results of the Levene and Shapiro-Wilk tests, so we can go ahead with the test.
### Step 4: Calculate the p-value
```
#import the required function
from scipy.stats import f_oneway
# perform one-way anova test
test_stat, p_value = f_oneway(df_new_english['time_spent_on_the_page'],
df_new_french['time_spent_on_the_page'],
df_new_spanish['time_spent_on_the_page'])
print('The p_value is ' + str(p_value))
```
### Step 5: Compare the p-value with $\alpha$
```
# print whether the p-value is greater or less than alpha
if p_value < 0.05:
    print("The p-value is less than the level of significance")
else:
    print("The p-value is greater than the level of significance")
```
### Step 6: Draw inference
As the p-value is much greater than the level of significance, we fail to reject the null hypothesis. We do not have enough statistical evidence to conclude that the time spent on the new page differs for at least one group of language users at the 5% significance level.
## Conclusion and Business Recommendations
- Based on the analysis above, at the 5% significance level there is enough statistical evidence to conclude that the conversion rate for the new page is greater than the conversion rate for the old page. This means that the new feature will be effective, as the new page shows more effectiveness in gathering new subscribers.
- Something to also note is that conversion is independent of the language preferred by the users.
- Also, users spend more time on the new page, but this time does not depend on the language preferred by the users.
##### In conclusion, the recommendation I give E-news Express is to go ahead with the new feature/new landing page designed by the design team, as it shows effectiveness in gathering new subscribers.
```
import numpy as np
import matplotlib.pyplot as plt
import grAdapt
from grAdapt.surrogate import NoModel, NoGradient
from grAdapt.optimizer import GradientDescentBisection, GradientDescent, Adam, AdamBisection, AMSGrad, AMSGradBisection
from grAdapt.models import Sequential
# data types
from grAdapt.space.datatype import Float
def sphere(x):
    return np.sum(x**2)

bounds = [Float(-5, 5) for i in range(16)]  # search space
```
## 1. Defining grAdapt Model
The gradients are estimated on the objective itself, which adds to the number of function evaluations.
### 1.1 Gradient Descent
```
# Define surrogate
surrogate = NoModel()
# Define optimizer. Because the optimizer works on the surrogate, the surrogate must be passed.
optimizer = GradientDescent(surrogate=surrogate)
# Both are then passed to our sequential model.
model = Sequential(surrogate=surrogate, optimizer=optimizer)
# start optimizing
res = model.minimize(sphere, bounds, 100)
plt.plot(res['y'])
plt.ylabel('Loss')
plt.xlabel('No. of function evaluations')
plt.show()
print(res['y_sol'])
```
### 1.2 Gradient Descent Bisection
```
# Define surrogate
surrogate = NoModel()
# Define optimizer. Because the optimizer works on the surrogate, the surrogate must be passed.
optimizer = GradientDescentBisection(surrogate=surrogate)
# Both are then passed to our sequential model.
model = Sequential(surrogate=surrogate, optimizer=optimizer)
# start optimizing
res = model.minimize(sphere, bounds, 100)
plt.plot(res['y'])
plt.ylabel('Loss')
plt.xlabel('No. of function evaluations')
plt.show()
print(res['y_sol'])
```
### 1.3 Adam
```
# Define surrogate
surrogate = NoModel()
# Define optimizer. Because the optimizer works on the surrogate, the surrogate must be passed.
optimizer = Adam(surrogate=surrogate)
# Both are then passed to our sequential model.
model = Sequential(surrogate=surrogate, optimizer=optimizer)
# start optimizing
res = model.minimize(sphere, bounds, 100)
plt.plot(res['y'])
plt.ylabel('Loss')
plt.xlabel('No. of function evaluations')
plt.show()
print(res['y_sol'])
```
### 1.4 Adam Bisection
```
# Define surrogate
surrogate = NoModel()
# Define optimizer. Because the optimizer works on the surrogate, the surrogate must be passed.
optimizer = AdamBisection(surrogate=surrogate)
# Both are then passed to our sequential model.
model = Sequential(surrogate=surrogate, optimizer=optimizer)
# start optimizing
res = model.minimize(sphere, bounds, 100)
plt.plot(res['y'])
plt.ylabel('Loss')
plt.xlabel('No. of function evaluations')
plt.show()
print(res['y_sol'])
```
### 1.5 AMSGrad
```
# Define surrogate
surrogate = NoModel()
# Define optimizer. Because the optimizer works on the surrogate, the surrogate must be passed.
optimizer = AMSGrad(surrogate=surrogate)
# Both are then passed to our sequential model.
model = Sequential(surrogate=surrogate, optimizer=optimizer)
# start optimizing
res = model.minimize(sphere, bounds, 100)
plt.plot(res['y'])
plt.ylabel('Loss')
plt.xlabel('No. of function evaluations')
plt.show()
print(res['y_sol'])
```
### 1.6 AMSGrad Bisection
```
# Define surrogate
surrogate = NoModel()
# Define optimizer. Because the optimizer works on the surrogate, the surrogate must be passed.
optimizer = AMSGradBisection(surrogate=surrogate)
# Both are then passed to our sequential model.
model = Sequential(surrogate=surrogate, optimizer=optimizer)
# start optimizing
res = model.minimize(sphere, bounds, 100)
plt.plot(res['y'])
plt.ylabel('Loss')
plt.xlabel('No. of function evaluations')
plt.show()
print(res['y_sol'])
```
# Distributed Compute
This is a heart of Fugue. In the previous sections, we went over how to use Fugue in the form of extensions and basic data operations such as joins. In this section, we'll talk about how those Fugue extensions scale.
## Partition and Presort
One of the most fundamental distributed compute concepts is the partition. Our data is spread across several machines, and we often need to rearrange the way the data is spread across the machines. This is because some operations need all of the related data in one place. For example, calculating the median value per group requires all of the data from the same group on one machine. Fugue allows users to control the partitioning scheme during execution.
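The median-per-group point can be sketched locally with pandas, where a single machine already holds every group in one place:

```
import pandas as pd

# On a cluster, computing these medians requires partitioning by 'col1' first
# so that every row of a group lands on the same machine.
df = pd.DataFrame({'col1': [1, 1, 1, 2, 2, 2],
                   'col2': [1, 4, 5, 7, 4, 2]})
medians = df.groupby('col1')['col2'].median()
print(medians.to_dict())  # → {1: 4.0, 2: 4.0}
```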
In the example below, `take` is an operation that extracts `n` number of rows. We apply take on each partition. We will have two partitions because `col1` is the partition key and it only has 2 values.
```
from fugue import FugueWorkflow
import pandas as pd
data = pd.DataFrame({'col1':[1,1,1,2,2,2], 'col2':[1,4,5,7,4,2]})
df2 = data.copy()
with FugueWorkflow() as dag:
    df = dag.df(df2)
    df = df.partition(by=['col1'], presort="col2 desc").take(1)
    df.show()
```
We also used `presort`. The presort key here was `col2 desc`, which means that the data is sorted in descending order after partitioning. This makes the `take` operation give us the max value. We'll go over one more example.
```
# schema: *, col2_diff:int
def diff(df: pd.DataFrame) -> pd.DataFrame:
    df['col2_diff'] = df['col2'].diff(1)
    return df

df2 = data.copy()

with FugueWorkflow() as dag:
    df = dag.df(df2)
    df = df.partition(by=['col1']).transform(diff)
    df.show()
```
In this example Fugue once again partitions on `col1`, except this time it applies the pandas `diff` operator to calculate the difference of a DataFrame element compared with the element in the previous row. Notice that this time there are 2 NULL values, because from the perspective of the first element of each partition there is no previous row to compare against! The `partition-transform` semantics are very similar to the pandas `groupby-apply` semantics. There is a deeper dive into partitions in the advanced tutorial.
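The local pandas analogue of the partition-transform above is a plain `groupby`, which makes the two NULLs easy to see:

```
import pandas as pd

data = pd.DataFrame({'col1': [1, 1, 1, 2, 2, 2],
                     'col2': [1, 4, 5, 7, 4, 2]})
# diff within each group: the first row of each group has no predecessor
data['col2_diff'] = data.groupby('col1')['col2'].diff(1)
print(int(data['col2_diff'].isna().sum()))  # → 2, one NaN per group
```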
## CoTransformer
In the last section, we skipped the `cotransformer` because it required knowledge about partitions. The `cotransformer` takes in multiple DataFrames that are **partitioned in the same way** and outputs one DataFrame. In order to use a `cotransformer`, the `zip` method has to be used first to join them by their common keys. There is also a `@cotransformer` decorator that can be used to define the `cotransformer`, but it will still be invoked by the `zip-transform` syntax.
In the example below, we will do a merge as-of operation on different groups of data. In order to align the data with events as they get distributed across the cluster, we will partition them in the same way.
```
import pandas as pd
data = pd.DataFrame({'group': (["A"] * 5 + ["B"] * 5),
'year': [2015,2016,2017,2018,2019] * 2})
events = pd.DataFrame({'group': ["A", "A", "B", "B"],
'year': [2014, 2016, 2014, 2018],
"value": [1, 2, 1, 2]})
events.head()
```
The pandas `merge_asof` function requires that the `on` column is sorted. To do this, we apply a `partition` strategy on Fugue by group and presort by the year. By the time it arrives in the `cotransformer`, the dataframes are sorted and grouped.
```
from fugue import FugueWorkflow
# schema: group:str,year:int,value:int
def merge_asof(data: pd.DataFrame, events: pd.DataFrame) -> pd.DataFrame:
    return pd.merge_asof(data, events, on="year", by="group")

with FugueWorkflow() as dag:
    data = dag.df(data)
    events = dag.df(events)
    data.zip(events, partition={"by": "group", "presort": "year"}).transform(merge_asof).show()
```
In this example, the important part to note is each group uses the pandas `merge_asof` independently. This function is very flexible, allowing users to specify forward and backward merges along with a tolerance. This is tricky to implement well in Spark, but the `cotransformer` lets us do it easily.
This operation was partitioned by the column `group` before the `cotransform` was applied. This was done through the `zip` command. `CoTransform` is a more advanced operation that may take some experience to get used to.
## Persist and Broadcast
Persist and broadcast are two other distributed compute concepts that Fugue has support for. Persist keeps a DataFrame in memory to avoid recomputation. Distributed compute frameworks often need an explicit `persist` call to know which DataFrames need to be kept, otherwise they tend to be calculated repeatedly.
Broadcasting is making a smaller DataFrame available on all the workers of a cluster. Without `broadcast`, these small DataFrames would be repeatedly sent to workers whenever they are needed to perform an operation. Broadcasting caches them on the workers.
```
with FugueWorkflow() as dag:
    df = dag.df([[0,1],[1,2]], "a:long,b:long")
    df.persist()
    df.broadcast()
```
# Scatterplots
Scatterplots are used to examine the relationship between two variables.
```
# import seaborn and matplotlib
import seaborn as sns
import matplotlib.pyplot as plt

# set up inline figures
%matplotlib inline

# load iris and preview the data
iris = sns.load_dataset('iris')
iris.head()
```
Say we want to look at the relationship between `sepal_length` and `sepal_width` within our dataset. We'll use the `sns.scatterplot` function to plot this.
```
# plot sepal_length vs sepal_width
sns.scatterplot(data=iris, x='sepal_length', y='sepal_width');
```
Can you remember what method in the statistics lessons we learned about that tells us about the relationship between two variables?
**Correlation!**
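Correlation can also be computed numerically before we plot anything. A minimal sketch with made-up measurements (not the actual iris values):

```
import pandas as pd

df = pd.DataFrame({'sepal_length': [4.9, 5.1, 5.8, 6.3, 7.0],
                   'sepal_width':  [3.0, 3.5, 2.7, 2.9, 3.2]})

# Pearson's r is the default for .corr(); values near +/-1 mean a strong
# linear relationship, values near 0 mean a weak one.
r = df['sepal_length'].corr(df['sepal_width'])
print(round(r, 3))
```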
There is an easy way we can visualize the strength of the correlation on the plot using the `lmplot` function.
```
# plot sepal_length vs sepal_width with a linear trendline
sns.lmplot(data=iris, x='sepal_length', y='sepal_width');
```
Based on this plot do you think there is a strong relationship between `sepal_length` and `sepal_width` in our data?
This gives us a general idea of the trend between `sepal_length` and `sepal_width`, but what if we wanted to explore the relationship between these variables on a more granular level? For example - if we wanted to see how this relationship might differ between the different species within our dataset? We can separate our plot similar to the way we did in the line graph using the `hue` parameter.
```
# plot sepal_length vs sepal_width colored by species
sns.scatterplot(data=iris, x='sepal_length', y='sepal_width', hue='species')

# the line below moves the legend outside of the plot borders
# don't worry about understanding this line of code
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
```
Similarly, we can use the `sns.lmplot` function to add a linear trendline for each species separately. We can also change the color palette using the `palette` parameter
```
# plot sepal_length vs sepal_width with a trendline per species
# ('colorblind' is just one example of a seaborn palette name)
sns.lmplot(data=iris, x='sepal_length', y='sepal_width', hue='species', palette='colorblind');
```
What do you notice about the relationship between our two variables when we separate (i.e. *stratify*) by species?
Instead of stratifying by species using color, we can do so using the marker shape with the `style` parameter.
```
# plot sepal_length vs sepal_width with marker style by species
sns.scatterplot(data=iris, x='sepal_length', y='sepal_width', style='species')

# the line below moves the legend outside of the plot borders
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
```
Lastly, we can combine `hue`, `style` and `palette` all together:
```
# plot sepal_length vs sepal_width with color and marker style by species
sns.scatterplot(data=iris, x='sepal_length', y='sepal_width',
                hue='species', style='species', palette='colorblind')

# the line below moves the legend outside of the plot borders
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
```
In this lesson we learned:
* How to create a scatterplot in `seaborn`
* Stratifying a scatterplot by another variable using color (`hue`)
* Stratifying a scatterplot by another variable using marker shape (`style`)
* Changing the color palette of a stratified plot (`palette`)
# DML and Partitioning
As part of this section we will continue exploring concepts related to DML and also get into the details of partitioning tables. With respect to DML, we have already seen how to use the LOAD command; now we will see how to use the INSERT command, primarily to copy query results into a table.
* Introduction to Partitioning
* Creating Tables using Parquet
* LOAD vs. INSERT
* Inserting Data using Stage Table
* Creating Partitioned Tables
* Adding Partitions to Tables
* Loading data into Partitions
* Inserting Data into Partitions
* Using Dynamic Partition Mode
* Exercise - Partitioned Tables
```
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/vnNhIcvjQqs?rel=0&controls=1&showinfo=0" frameborder="0" allowfullscreen></iframe>
```
Let us start the Spark context for this notebook so that we can execute the code provided. You can sign up for our [10 node state of the art cluster/labs](https://labs.itversity.com/plans) to learn Spark SQL using our unique integrated LMS.
```
import org.apache.spark.sql.SparkSession

val username = System.getProperty("user.name")
val spark = SparkSession.
builder.
config("spark.ui.port", "0").
config("spark.sql.warehouse.dir", s"/user/${username}/warehouse").
enableHiveSupport.
appName(s"${username} | Spark SQL - Managing Tables - DML and Partitioning").
master("yarn").
getOrCreate
```
If you are going to use CLIs, you can run Spark SQL using one of the following three approaches.
**Using Spark SQL**
```
spark2-sql \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
```
**Using Scala**
```
spark2-shell \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
```
**Using Pyspark**
```
pyspark2 \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
```
**Unlike Hive, Spark SQL does not support bucketing, which is similar to hash partitioning. Delta Lake, however, does. Delta Lake is a third-party library that provides additional capabilities, such as ACID transactions, on top of Spark Metastore tables.**
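As a preview of the topics in this section, creating a partitioned table and filling it in dynamic partition mode might look like this. This is only a sketch using a hypothetical `orders_part` table with an `order_month` partition column; each step is covered in detail in the sections below.

```
%%sql
CREATE TABLE IF NOT EXISTS orders_part (
    order_id INT,
    order_date STRING,
    order_customer_id INT,
    order_status STRING
) PARTITIONED BY (order_month STRING)

%%sql
SET hive.exec.dynamic.partition.mode=nonstrict

%%sql
INSERT INTO TABLE orders_part PARTITION (order_month)
SELECT o.*, substr(o.order_date, 1, 7) AS order_month
FROM orders o
```

With dynamic partition mode enabled, the partition value is taken from the last column of the SELECT list instead of being hard-coded per partition.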
Let us make sure that we have the orders table with data, as we will be using it to populate partitioned tables very soon.
```
%%sql
USE itversity_retail
%%sql
SHOW tables
%%sql
DROP TABLE IF EXISTS orders
%%sql
SELECT current_database()
%%sql
CREATE TABLE IF NOT EXISTS itversity_retail.orders (
order_id INT,
order_date STRING,
order_customer_id INT,
order_status STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
%%sql
LOAD DATA LOCAL INPATH '/data/retail_db/orders'
OVERWRITE INTO TABLE orders
%%sql
SELECT count(1) FROM orders
```
## Programming Homework
## Derivatives. Partial Derivatives. Gradient. Gradient Descent.
We will need the **numpy** library - it was covered in the first lecture. If you do not remember it, you can consult the following resources:
1. http://pyviy.blogspot.com/2009/09/numpy.html
2. https://pythonworld.ru/numpy (Part 1, Part 2)
```
import numpy as np
```
Separately, let us write a function that computes the distance between two points in a space of arbitrary dimension. Mathematically it looks like this: $$dist(x, y) = \sqrt{\sum_{i=1}^{n}(x_{i} - y_{i})^{2}}$$
```
def dist(x, y):
    # Here we can avoid an explicit summation loop:
    # if x and y are vectors in R^{n}, then (y - x)^{2} is also a vector in R^{n}
    # (here the square means squaring each component of the vector)
    # Using np.sum with axis = 0 gives the sum over the components
    return np.sqrt(np.sum((y - x) ** 2, axis=0))
```
Usually one does not write a universal function that takes as input the function to minimize together with its gradient; instead, the function is defined explicitly inside. For example, let us write a function that finds the minimum point of the function of several variables $$F(x, y) = x^{2} + y^{2}$$
```
def gradient_descent(starting_point,
                     learning_rate = None, iter_max = None,
                     precision = None, verbose = None, output = None):
    f = lambda x: x[0] ** 2 + x[1] ** 2  # An ordinary function of several variables
                                         # F(x, y) = x^2 + y^2
    df_x = lambda x: 2 * x  # Partial derivative of F with respect to the first variable
    df_y = lambda y: 2 * y  # Partial derivative of F with respect to the second variable
    # Initialize the optional parameters
    iter_num = 0
    if learning_rate is None:
        learning_rate = 0.001
    if iter_max is None:
        iter_max = 1e7
    if precision is None:
        precision = 1e-7
    if verbose is None:
        verbose = False
    if output is None:
        output = False
    # Gradient descent
    point = np.array([starting_point[0] - learning_rate * df_x(starting_point[0]),
                      starting_point[1] - learning_rate * df_y(starting_point[1])])
    while dist(point, starting_point) > precision and iter_num < iter_max:
        iter_num += 1
        starting_point, point = np.array([
            starting_point[0] - learning_rate * df_x(starting_point[0]),
            starting_point[1] - learning_rate * df_y(starting_point[1])]), \
            starting_point
        if verbose:
            print(starting_point, point)
    if output:
        return point, f(point)
    else:
        return np.round(point, 3), np.round(f(point), 3)

gradient_descent(np.array([2, -5]))
```
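As a side note (not part of the assignment), the same update rule can be written more compactly in vectorized form, with the whole gradient passed in as a single function:

```
import numpy as np

# Vectorized sketch of the same idea: minimize F(x, y) = x^2 + y^2
def gradient_descent_vec(start, grad, learning_rate=0.001,
                         iter_max=10**7, precision=1e-7):
    point = np.asarray(start, dtype=float)
    for _ in range(int(iter_max)):
        step = learning_rate * grad(point)
        if np.linalg.norm(step) < precision:
            break
        point = point - step
    return point

grad_f = lambda p: 2 * p  # gradient of x^2 + y^2
minimum = gradient_descent_vec([2.0, -5.0], grad_f)
print(np.round(minimum, 3))  # close to the true minimum at (0, 0)
```

Because the update is a single vector operation, the same function works unchanged in any number of dimensions.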
You need to write a function that finds the minimum of the function of several variables $$F(x, y, z, t) = x^{4}y^{2} + z^{2}t^{2}$$
Hint - you only need to modify the previous function *slightly*.
```
def gradient_descent(starting_point,
                     learning_rate = None, iter_max = None,
                     precision = None, verbose = None, output = None):
    f = lambda x: x[0] ** 4 * x[1] ** 2 + x[2] ** 2 * x[3] ** 2
    df_x = lambda x: 4 * x[0] ** 3 * x[1] ** 2
    df_y = lambda x: 2 * x[0] ** 4 * x[1]
    df_z = lambda x: 2 * x[0] * x[1] ** 2
    df_t = lambda x: 2 * x[0] ** 2 * x[1]
    # Initialize the optional parameters
    iter_num = 0
    if learning_rate is None:
        learning_rate = 0.001
    if iter_max is None:
        iter_max = 1e7
    if precision is None:
        precision = 1e-7
    if verbose is None:
        verbose = False
    if output is None:
        output = False
    # Gradient descent
    point = np.array([
        starting_point[0] - learning_rate * df_x((starting_point[0], starting_point[1])),
        starting_point[1] - learning_rate * df_y((starting_point[0], starting_point[1])),
        starting_point[2] - learning_rate * df_z((starting_point[2], starting_point[3])),
        starting_point[3] - learning_rate * df_t((starting_point[2], starting_point[3]))])
    while dist(point, starting_point) > precision and iter_num < iter_max:
        iter_num += 1
        starting_point, point = np.array([
            starting_point[0] - learning_rate * df_x((starting_point[0], starting_point[1])),
            starting_point[1] - learning_rate * df_y((starting_point[0], starting_point[1])),
            starting_point[2] - learning_rate * df_z((starting_point[2], starting_point[3])),
            starting_point[3] - learning_rate * df_t((starting_point[2], starting_point[3]))]), \
            starting_point
        if verbose:
            print(starting_point, point)
    if output:
        return point, f(point)
    else:
        return np.round(point, 3), np.round(f(point), 3)
gradient_descent(np.array([1, -1, 1, 1]),
learning_rate=0.1, iter_max=1000)
```
## SWEPUB - ORCID
version 0.8
* This [notebook](https://github.com/salgo60/open-data-examples/blob/master/SWEPUB%20-%20ORCID.ipynb)
* SWEPUB
* [Kundo question](https://kundo.se/org/swepub/d/api-for-amnesklassificering/#c3571837) where they recommend downloading the ZIP file to access data in SWEPUB --> JSON, 10.81 GB
* [datamodell/swepub-bibframe](https://www.kb.se/samverkan-och-utveckling/swepub/datamodell/swepub-bibframe.html)
* [Twitter SwePub](https://twitter.com/SwePub)
* [SPARQL SWEPUB](https://github.com/libris/swepub-sparql) - querying via SPARQL feels like a better option than downloading a zip file
* [Finding nr records per schools ](http://virhp07.libris.kb.se/sparql/?default-graph-uri=&query=PREFIX+bmc%3A+%3Chttp%3A%2F%2Fswepub.kb.se%2Fbibliometric%2Fmodel%23%3E+%0D%0APREFIX+swpa_m%3A+%3Chttp%3A%2F%2Fswepub.kb.se%2FSwePubAnalysis%2Fmodel%23%3E%0D%0APREFIX+mods_m%3A+%3Chttp%3A%2F%2Fswepub.kb.se%2Fmods%2Fmodel%23%3E+%0D%0APREFIX+outt_m%3A+%3Chttp%3A%2F%2Fswepub.kb.se%2FSwePubAnalysis%2FOutputTypes%2Fmodel%23%3E%0D%0APREFIX+xlink%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2Fxlink%23%3E+%0D%0ASELECT+DISTINCT+xsd%3Astring%28%3F_orgName%29+COUNT%28DISTINCT+%3F_workID%29+as+%3Fc%0D%0AWHERE%0D%0A%7B%0D%0A%3FCreativeWork+bmc%3AlocalID+%3F_workID+.%0D%0A%3FPublication+bmc%3AlocalID+%3F_publicationID+.%0D%0A%0D%0A%3FOrganization+rdfs%3Alabel+%3F_orgName+.%0D%0AFILTER%28lang%28%3F_orgName%29+%3D+%27sv%27+%29%0D%0A%0D%0A%3FCreativeWork+bmc%3AreportedBy+%3FRecord+.%0D%0A%3FCreativeWork+a+bmc%3ACreativeWork+.%0D%0A%3FCreativeWork+bmc%3ApublishedAs+%3FPublication+.%0D%0A%0D%0A%23%3FCreativeWork+bmc%3ApublicationYearEarliest+%3F_pubYear+.%0D%0A%3FCreativeWork+bmc%3AhasCreatorShip+%3FCreatorShip+.%0D%0A%3FCreatorShip+bmc%3AhasAffiliation+%3FCreatorAffiliation+.%0D%0A%3FCreatorAffiliation+bmc%3AhasOrganization+%3FOrganization+.%0D%0A%0D%0A%3FCreatorShip+bmc%3AhasCreator+%3FCreator+.+%0D%0A%0D%0A%7D%0D%0AORDER+BY+xsd%3Astring%28%3F_orgName%29&format=text%2Fhtml&timeout=0&debug=on)
* [authorityRecords/organizations.sparql](https://github.com/libris/swepub-sparql/blob/master/sparqls/authorityRecords/organizations.sparql) --> [query](http://virhp07.libris.kb.se/sparql?default-graph-uri=&query=%23KB+Auktoritetslista+%C3%B6ver+organisationer%0D%0APREFIX+swpa_m%3A+%3Chttp%3A%2F%2Fswepub.kb.se%2FSwePubAnalysis%2Fmodel%23%3E%0D%0APREFIX+countries%3A+%3Chttp%3A%2F%2Fwww.bpiresearch.com%2FBPMO%2F2004%2F03%2F03%2Fcdl%2FCountries%23%3E%0D%0ASELECT+DISTINCT%0D%0Axsd%3Astring%28%3F_label%29+as+%3Forganization%0D%0Axsd%3Astring%28%3F_authority%29+as+%3Fauthority%0D%0Axsd%3Astring%28%3F_id%29+as+%3Fid%0D%0Axsd%3Astring%28%3F_nameLocal%29+as+%3Fcountry%0D%0Axsd%3Astring%28%3F_countryCodeISO3166Alpha3%29+as+%3FcountryCodeISO3166Alpha3%0D%0AWHERE%0D%0A%7B%0D%0A%3FResearchOrganization+a+swpa_m%3AResearchOrganization+.%0D%0A%3FResearchOrganization+rdfs%3Alabel+%3F_label+.%0D%0A%3FResearchOrganization+swpa_m%3AhasIdentity+%3FIdentity+.%0D%0A%3FResearchOrganization+swpa_m%3AlocatedIn+%3FISO3166DefinedCountry+.%0D%0A%3FIdentity+swpa_m%3Aauthority+%3F_authority+.%0D%0A%3FISO3166DefinedCountry+countries%3AcountryCodeISO3166Alpha3+%3F_countryCodeISO3166Alpha3+.%0D%0A%3FISO3166DefinedCountry+countries%3AreferencesCountry+%3FIndependentState+.%0D%0A%3FIndependentState+countries%3AnameLocal+%3F_nameLocal+.%0D%0A%3FIdentity+swpa_m%3Aid+%3F_id+.%0D%0A%3FIdentity+swpa_m%3Aauthority+%22kb.se%22%5E%5Exsd%3Astring+.%0D%0AFILTER%28%3F_authority+%3D+%22kb.se%22%5E%5Exsd%3Astring%29%0D%0A%7D&format=text%2Fhtml&timeout=0&debug=on)
* [publicationChannels](https://github.com/libris/swepub-sparql/blob/master/sparqls/authorityRecords/publicationChannels.sparql) --> [query](http://virhp07.libris.kb.se/sparql?default-graph-uri=&query=PREFIX+swpa_m%3A+%3Chttp%3A%2F%2Fswepub.kb.se%2FSwePubAnalysis%2Fmodel%23%3E%0D%0ASELECT+DISTINCT%0D%0Axsd%3Astring%28%3F_onetitle%29+as+%3Ftitle%0D%0Axsd%3Astring%28%3F_issn%29+as+%3FprintISN%0D%0Axsd%3Astring%28%3F_eissn%29+as+%3FelectronicISSN%0D%0Axsd%3Aint%28%3F_weight%29+as+%3FNorwegianLevel%0D%0Axsd%3Aint%28%3F_weight7%29+as+%3FFinnishLevel%0D%0Axsd%3Aint%28%3F_weight8%29+as+%3FDanishLevel%0D%0A%3FSwedishLevel%0D%0AWHERE%0D%0A%7B%0D%0A%3FJournal+a+swpa_m%3AJournal+.%0D%0A%3FJournal+swpa_m%3Aonetitle+%3F_onetitle+.%0D%0AOPTIONAL+%7B+%3FJournal+swpa_m%3Aeissn+%3F_eissn+.+%7D%0D%0AOPTIONAL+%7B+%3FJournal+swpa_m%3Aissn+%3F_issn+.+%7D%0D%0AFILTER+%28+bound%28%3F_issn%29+%7C%7C+bound%28%3F_eissn%29+%29%0D%0A%0D%0A%3FJournal+swpa_m%3AhasRank+%3FSwedishRank+.%0D%0A%3FSwedishRank+a+swpa_m%3ASwedishRank+.%0D%0A%3FSwedishRank+swpa_m%3Aweight+%3FSwedishLevel+.%0D%0A%0D%0AOPTIONAL%0D%0A%7B%0D%0A%3FJournal+swpa_m%3AhasRank+%3FNorwegianRank+.%0D%0A%3FNorwegianRank+a+swpa_m%3ANorwegianRank+.%0D%0A%3FNorwegianRank+swpa_m%3Aweight+%3F_weight+.%0D%0A%7D%0D%0AOPTIONAL%0D%0A%7B%0D%0A%3FJournal+swpa_m%3AhasRank+%3FFinnishRank+.%0D%0A%3FFinnishRank+a+swpa_m%3AFinnishRank+.%0D%0A%3FFinnishRank+swpa_m%3Aweight+%3F_weight7+.%0D%0A%7D%0D%0AOPTIONAL%0D%0A%7B%0D%0A%3FJournal+swpa_m%3AhasRank+%3FDanishRank+.%0D%0A%3FDanishRank+a+swpa_m%3ADanishRank+.%0D%0A%3FDanishRank+swpa_m%3Aweight+%3F_weight8+.%0D%0A%7D%0D%0A%7D%0D%0ALIMIT+100000&format=text%2Fhtml&timeout=0&debug=on)
* Finding persons with ORCID
see the end of this notebook, where we find an ORCID in the JSON file; it looks like not everyone has an ORCID...
* Magnus C Persson [0000-0003-1062-2789](https://orcid.org/0000-0003-1062-2789) is the same person as [WD Q88134673](https://www.wikidata.org/wiki/Q88134673?uselang=sv) --> [Scholia](https://scholia.toolforge.org/author/Q88134673) ->
* [co-authors](https://tinyurl.com/yb65gf5m)
* [graf](https://tinyurl.com/ycf8zc69)
* Wikidata
* ORCID property [P496](https://www.wikidata.org/wiki/Property_talk:P496) on 1 546 332 objects
* 0000-0002-5494-8126 --> WD query [haswbstatement:"P496=0000-0002-5494-8126"](https://www.wikidata.org/w/index.php?sort=relevance&search=haswbstatement%3A%22P496%3D0000-0002-5494-8126%22&title=Special:Search&profile=advanced&fulltext=1&advancedSearch-current=%7B%7D&ns0=1&ns120=1)
* DOI property [P356](https://www.wikidata.org/wiki/Property_talk:P356) on 25 609 492 objects
* doi/10.1186/S13321-016-0161-3 --> WD query [haswbstatement:"P356=10.1186/S13321-016-0161-3"](https://www.wikidata.org/w/index.php?sort=relevance&search=haswbstatement%3A%22P356%3D10.1186%2FS13321-016-0161-3%22&title=Special:Search&profile=advanced&fulltext=1&advancedSearch-current=%7B%7D&ns0=1&ns120=1)
* query using [SPARQL](https://w.wiki/X3h)
* [Scholia](https://scholia.toolforge.org/) - tool for citation graphs of data in Wikidata
* DOI link [doi/10.1186/S13321-016-0161-3](https://scholia.toolforge.org/doi/10.1186/S13321-016-0161-3)
* ORCID link [orcid/0000-0002-5494-8126](https://scholia.toolforge.org/orcid/0000-0002-5494-8126)
```
# Try to get ORCID from SWEPUB
# see https://kundo.se/org/swepub/d/api-for-amnesklassificering/#c3571837
import pandas as pd
import json
import time
start_time = time.time()
filename ="data/swepub-duplicated-2020-07-05.jsonl"
filestore ="data/swepub-duplicated-2020-07-05_1.pd"
print(time.ctime())
df_chunk = pd.read_json(filename, lines=True, chunksize=10000)
chunk_list = []
for i, chunk in enumerate(df_chunk):
chunk_list.append(chunk)
print("--- %s seconds ---" % (time.time() - start_time))
# concat the list into dataframe
df_concat = pd.concat(chunk_list)
print("--- %s seconds ---" % (time.time() - start_time))
df_concat.info()
#df_concat.to_pickle(filestore)
#print("--- %s seconds ---" % (time.time() - start_time))
pd.set_option("display.max.columns", None)
df_concat["instanceOf"]
pd.DataFrame(df_concat["instanceOf"].tolist())
```
## Try to find ORCID and DOI
```
instanceOfdf = pd.DataFrame(df_concat["instanceOf"].tolist())
instanceOfdf
pd.DataFrame(instanceOfdf["genreForm"].tolist())
instanceOfdf.info()
pd.DataFrame(instanceOfdf["hasTitle"][1:10].tolist())
pd.DataFrame(instanceOfdf["contribution"][1:10].tolist())
pd.DataFrame(instanceOfdf["hasNote"][1:10].tolist())
pd.options.display.width = 0
pd.DataFrame(instanceOfdf["contribution"][1:10].tolist())
pd.DataFrame(instanceOfdf["contribution"][1:10].tolist()[0])
```
#### We have 35 authors, but it looks like no ORCID
```
pd.DataFrame(instanceOfdf["contribution"][1:10].tolist()[0])["agent"]
pd.DataFrame(instanceOfdf["contribution"][1:10].tolist()[0])["agent"].tolist()
pd.DataFrame(instanceOfdf["contribution"][1:10].tolist()[1])["agent"].tolist()
pd.options.display.width = 0
pd.DataFrame(instanceOfdf["hasTitle"][1:10].tolist()[2])
pd.DataFrame(instanceOfdf["hasTitle"][1:10].tolist()[2])
pd.DataFrame(instanceOfdf["contribution"][1:10].tolist()[0])["role"].tolist()
pd.DataFrame(instanceOfdf["electronicLocator"][1:10])
pd.DataFrame(instanceOfdf["contribution"][1000:1010].tolist()[1])["agent"].tolist()
pd.DataFrame(instanceOfdf["contribution"][2000:2010].tolist()[1])["agent"].tolist()
```
## Looks like we have an ORCID
* Magnus C Persson [0000-0003-1062-2789](https://orcid.org/0000-0003-1062-2789) is the same person as [WD Q88134673](https://www.wikidata.org/wiki/Q88134673?uselang=sv) --> [Scholia](https://scholia.toolforge.org/author/Q88134673) ->
* [co-authors](https://tinyurl.com/yb65gf5m)
* [graf](https://tinyurl.com/ycf8zc69)
```
pd.DataFrame(instanceOfdf["contribution"][2000:2010].tolist()[1])["agent"].tolist()[0]["identifiedBy"]
pd.DataFrame(instanceOfdf["contribution"][2000:2010].tolist()[1])["agent"].tolist()[0]["identifiedBy"][0]["value"]
```
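To scan for ORCIDs systematically, one approach is to match identifier values against the ORCID pattern. A sketch (the `identifiedBy`/`value` field names follow the JSON structure explored above; the sample records here are made up for illustration):

```
import re

# ORCID pattern: four groups of four digits, last character may be X
ORCID_RE = re.compile(r"^\d{4}-\d{4}-\d{4}-\d{3}[\dX]$")

def extract_orcids(agents):
    """Collect ORCID-shaped identifier values from a list of agent dicts."""
    found = []
    for agent in agents:
        for ident in agent.get("identifiedBy", []):
            value = str(ident.get("value", ""))
            if ORCID_RE.match(value):
                found.append(value)
    return found

# Made-up sample records mimicking the structure seen above
sample = [{"identifiedBy": [{"value": "0000-0003-1062-2789"}]},
          {"identifiedBy": [{"value": "local-id-123"}]}]
print(extract_orcids(sample))  # ['0000-0003-1062-2789']
```

Applying this across all `contribution` records would give a rough count of how many authors carry an ORCID.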
## Setup
```
import sys
import os
madminer_src_path = "/home/shomiller/madminer"
sys.path.append(madminer_src_path)
from __future__ import absolute_import, division, print_function, unicode_literals
import logging
import numpy as np
import math
import matplotlib
from matplotlib import pyplot as plt
from scipy.optimize import curve_fit
%matplotlib inline
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
from madminer.fisherinformation import FisherInformation
from madminer.fisherinformation import project_information, profile_information
from madminer.sampling import SampleAugmenter
from madminer.plotting import plot_fisher_information_contours_2d
from madminer.plotting import plot_distribution_of_information
from madminer.plotting import plot_fisherinfo_barplot
from madminer.plotting import plot_distributions
from sklearn.metrics import mean_squared_error
from pandas import DataFrame
import madminer.__version__
print( 'MadMiner version: {}'.format(madminer.__version__) )
# MadMiner output
logging.basicConfig(
format='%(asctime)-5.5s %(name)-20.20s %(levelname)-7.7s %(message)s',
datefmt='%H:%M',
level=logging.INFO
)
# Output of all other modules (e.g. matplotlib)
for key in logging.Logger.manager.loggerDict:
if "madminer" not in key:
logging.getLogger(key).setLevel(logging.WARNING)
```
Set some default matplotlib Plotting Options:
```
import matplotlib as mpl
mpl.rcParams['text.usetex'] = True
#mpl.rcParams['text.latex.preamble'] = [r'\usepackage{amsmath}']
mpl.rcParams['figure.figsize'] = (5,5)
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['font.size'] = 16
mpl.rcParams['legend.fontsize'] = 16
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.style'] = 'normal'
mpl.rcParams['font.serif'] = 'Times New Roman'
#mpl.rcParams['savefig.bbox'] = 'standard'
mpl.rcParams['mathtext.fontset'] = 'cm'
mpl.rcParams['axes.formatter.use_mathtext'] = True
```
Color List for Plotting:
```
col_sally_met = '#3972E5'
col_stxs_5bins = '#A5000D'
col_imp_stxs = '#418C1C'
#col_rate = '#BFBDC1'
col_rate = '#747373'
col_sally_full = '#FB4F14'
col_parton = '#070600'
col_stxs_3bins = '#30BCED'
col_stxs_6bins = '#149911'
col_sally_ptw = '#5B1865' #palatinate purple
col_sally_2d = '#D84797'
labels = [ 'cHbox', 'cHD', 'cHW', 'cHq3' ]
```
### Data Files:
Saving the paths for all our lhe data files, for easy use later on.
Note that the lhedata files are saved in the format:
`data/{channel}_lhedata_{observables}.h5`
while the `SALLY` models are saved in:
`models/sally_ensemble_{channel}_{observables}/`
Here `channel` can be, e.g., `wph_mu_wbkgs` or `wmh_e_wbkgs` (or without the backgrounds for the background-free runs), while `observables` can be `full`, `met`, `ptw`, or `2d`.
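For illustration, this naming scheme could be wrapped in small helpers like these (hypothetical convenience functions; note that the actual file names listed below also embed a run tag such as `sigsystonly` or `smeftsim`, so this is only a sketch of the general pattern):

```
# Hypothetical helpers mirroring the naming scheme described above
def lhedata_path(channel, observables):
    return 'data/{}_lhedata_{}.h5'.format(channel, observables)

def sally_model_dir(channel, observables):
    return 'models/sally_ensemble_{}_{}/'.format(channel, observables)

print(lhedata_path('wph_mu_wbkgs', 'met'))  # data/wph_mu_wbkgs_lhedata_met.h5
```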
#### Signal + Backgrounds
```
lhedatafile_wph_mu_wbkgs_full = 'data/wph_mu_wbkgs_sigsystonly_lhedata_full.h5'
lhedatafile_wph_mu_wbkgs_met = 'data/wph_mu_wbkgs_sigsystonly_lhedata_met.h5'
lhedatafile_wph_mu_wbkgs_ptw = 'data/wph_mu_wbkgs_sigsystonly_lhedata_ptw.h5'
lhedatafile_wph_mu_wbkgs_2d = 'data/wph_mu_wbkgs_sigsystonly_lhedata_2d.h5'
lhedatafile_wph_e_wbkgs_full = 'data/wph_e_wbkgs_sigsystonly_lhedata_full.h5'
lhedatafile_wph_e_wbkgs_met = 'data/wph_e_wbkgs_sigsystonly_lhedata_met.h5'
lhedatafile_wph_e_wbkgs_ptw = 'data/wph_e_wbkgs_sigsystonly_lhedata_ptw.h5'
lhedatafile_wph_e_wbkgs_2d = 'data/wph_e_wbkgs_sigsystonly_lhedata_2d.h5'
lhedatafile_wmh_mu_wbkgs_full = 'data/wmh_mu_wbkgs_sigsystonly_lhedata_full.h5'
lhedatafile_wmh_mu_wbkgs_met = 'data/wmh_mu_wbkgs_sigsystonly_lhedata_met.h5'
lhedatafile_wmh_mu_wbkgs_ptw = 'data/wmh_mu_wbkgs_sigsystonly_lhedata_ptw.h5'
lhedatafile_wmh_mu_wbkgs_2d = 'data/wmh_mu_wbkgs_sigsystonly_lhedata_2d.h5'
lhedatafile_wmh_e_wbkgs_full = 'data/wmh_e_wbkgs_sigsystonly_lhedata_full.h5'
lhedatafile_wmh_e_wbkgs_met = 'data/wmh_e_wbkgs_sigsystonly_lhedata_met.h5'
lhedatafile_wmh_e_wbkgs_ptw = 'data/wmh_e_wbkgs_sigsystonly_lhedata_ptw.h5'
lhedatafile_wmh_e_wbkgs_2d = 'data/wmh_e_wbkgs_sigsystonly_lhedata_2d.h5'
```
#### Signal Only
```
lhedatafile_wph_mu_full = 'data/wph_mu_smeftsim_lhedata_full.h5'
lhedatafile_wph_mu_met = 'data/wph_mu_smeftsim_lhedata_met.h5'
lhedatafile_wph_mu_ptw = 'data/wph_mu_smeftsim_lhedata_ptw.h5'
lhedatafile_wph_mu_2d = 'data/wph_mu_smeftsim_lhedata_2d.h5'
lhedatafile_wph_e_full = 'data/wph_e_smeftsim_lhedata_full.h5'
lhedatafile_wph_e_met = 'data/wph_e_smeftsim_lhedata_met.h5'
lhedatafile_wph_e_ptw = 'data/wph_e_smeftsim_lhedata_ptw.h5'
lhedatafile_wph_e_2d = 'data/wph_e_smeftsim_lhedata_2d.h5'
lhedatafile_wmh_mu_full = 'data/wmh_mu_smeftsim_lhedata_full.h5'
lhedatafile_wmh_mu_met = 'data/wmh_mu_smeftsim_lhedata_met.h5'
lhedatafile_wmh_mu_ptw = 'data/wmh_mu_smeftsim_lhedata_ptw.h5'
lhedatafile_wmh_mu_2d = 'data/wmh_mu_smeftsim_lhedata_2d.h5'
lhedatafile_wmh_e_full = 'data/wmh_e_smeftsim_lhedata_full.h5'
lhedatafile_wmh_e_met = 'data/wmh_e_smeftsim_lhedata_met.h5'
lhedatafile_wmh_e_ptw = 'data/wmh_e_smeftsim_lhedata_ptw.h5'
lhedatafile_wmh_e_2d = 'data/wmh_e_smeftsim_lhedata_2d.h5'
```
#### Backgrounds Only
```
lhedatafile_wph_mu_backgrounds_only_full = 'data/wph_mu_backgrounds_only_lhedata_full.h5'
lhedatafile_wph_mu_backgrounds_only_met = 'data/wph_mu_backgrounds_only_lhedata_met.h5'
lhedatafile_wph_mu_backgrounds_only_ptw = 'data/wph_mu_backgrounds_only_lhedata_ptw.h5'
lhedatafile_wph_mu_backgrounds_only_2d = 'data/wph_mu_backgrounds_only_lhedata_2d.h5'
lhedatafile_wph_e_backgrounds_only_full = 'data/wph_e_backgrounds_only_lhedata_full.h5'
lhedatafile_wph_e_backgrounds_only_met = 'data/wph_e_backgrounds_only_lhedata_met.h5'
lhedatafile_wph_e_backgrounds_only_ptw = 'data/wph_e_backgrounds_only_lhedata_ptw.h5'
lhedatafile_wph_e_backgrounds_only_2d = 'data/wph_e_backgrounds_only_lhedata_2d.h5'
lhedatafile_wmh_mu_backgrounds_only_full = 'data/wmh_mu_backgrounds_only_lhedata_full.h5'
lhedatafile_wmh_mu_backgrounds_only_met = 'data/wmh_mu_backgrounds_only_lhedata_met.h5'
lhedatafile_wmh_mu_backgrounds_only_ptw = 'data/wmh_mu_backgrounds_only_lhedata_ptw.h5'
lhedatafile_wmh_mu_backgrounds_only_2d = 'data/wmh_mu_backgrounds_only_lhedata_2d.h5'
lhedatafile_wmh_e_backgrounds_only_full = 'data/wmh_e_backgrounds_only_lhedata_full.h5'
lhedatafile_wmh_e_backgrounds_only_met = 'data/wmh_e_backgrounds_only_lhedata_met.h5'
lhedatafile_wmh_e_backgrounds_only_ptw = 'data/wmh_e_backgrounds_only_lhedata_ptw.h5'
lhedatafile_wmh_e_backgrounds_only_2d = 'data/wmh_e_backgrounds_only_lhedata_2d.h5'
```
# Loading the Computed Fisher Informations
## Load the Arrays
```
fi_sally_wbkgs_outfile = './fisher_info/fi_sally_wbkgs.npz'
fi_sally_wbkgs_cov_outfile = './fisher_info/fi_sally_wbkgs_cov.npz'
fi_sally_wph_mu_wbkgs_full_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][0]
fi_sally_wph_e_wbkgs_full_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][1]
fi_sally_wmh_mu_wbkgs_full_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][2]
fi_sally_wmh_e_wbkgs_full_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][3]
fi_sally_wph_mu_wbkgs_met_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][4]
fi_sally_wph_e_wbkgs_met_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][5]
fi_sally_wmh_mu_wbkgs_met_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][6]
fi_sally_wmh_e_wbkgs_met_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][7]
fi_sally_wph_mu_wbkgs_ptw_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][8]
fi_sally_wph_e_wbkgs_ptw_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][9]
fi_sally_wmh_mu_wbkgs_ptw_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][10]
fi_sally_wmh_e_wbkgs_ptw_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][11]
fi_sally_wph_mu_wbkgs_2d_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][12]
fi_sally_wph_e_wbkgs_2d_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][13]
fi_sally_wmh_mu_wbkgs_2d_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][14]
fi_sally_wmh_e_wbkgs_2d_mean = np.load(fi_sally_wbkgs_outfile)['arr_0'][15]
fi_sally_wph_mu_wbkgs_full_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][0]
fi_sally_wph_e_wbkgs_full_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][1]
fi_sally_wmh_mu_wbkgs_full_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][2]
fi_sally_wmh_e_wbkgs_full_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][3]
fi_sally_wph_mu_wbkgs_met_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][4]
fi_sally_wph_e_wbkgs_met_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][5]
fi_sally_wmh_mu_wbkgs_met_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][6]
fi_sally_wmh_e_wbkgs_met_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][7]
fi_sally_wph_mu_wbkgs_ptw_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][8]
fi_sally_wph_e_wbkgs_ptw_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][9]
fi_sally_wmh_mu_wbkgs_ptw_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][10]
fi_sally_wmh_e_wbkgs_ptw_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][11]
fi_sally_wph_mu_wbkgs_2d_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][12]
fi_sally_wph_e_wbkgs_2d_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][13]
fi_sally_wmh_mu_wbkgs_2d_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][14]
fi_sally_wmh_e_wbkgs_2d_covariance = np.load(fi_sally_wbkgs_cov_outfile)['arr_0'][15]
fi_sally_nobkgs_outfile = './fisher_info/fi_sally_nobkgs.npz'
fi_sally_nobkgs_cov_outfile = './fisher_info/fi_sally_nobkgs_cov.npz'
fi_sally_wph_mu_full_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][0]
fi_sally_wph_e_full_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][1]
fi_sally_wmh_mu_full_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][2]
fi_sally_wmh_e_full_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][3]
fi_sally_wph_mu_met_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][4]
fi_sally_wph_e_met_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][5]
fi_sally_wmh_mu_met_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][6]
fi_sally_wmh_e_met_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][7]
fi_sally_wph_mu_2d_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][8]
fi_sally_wph_e_2d_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][9]
fi_sally_wmh_mu_2d_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][10]
fi_sally_wmh_e_2d_mean = np.load(fi_sally_nobkgs_outfile)['arr_0'][11]
fi_sally_wph_mu_full_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][0]
fi_sally_wph_e_full_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][1]
fi_sally_wmh_mu_full_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][2]
fi_sally_wmh_e_full_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][3]
fi_sally_wph_mu_met_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][4]
fi_sally_wph_e_met_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][5]
fi_sally_wmh_mu_met_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][6]
fi_sally_wmh_e_met_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][7]
fi_sally_wph_mu_2d_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][8]
fi_sally_wph_e_2d_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][9]
fi_sally_wmh_mu_2d_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][10]
fi_sally_wmh_e_2d_covariance = np.load(fi_sally_nobkgs_cov_outfile)['arr_0'][11]
fi_sally_other_outfile = './fisher_info/fi_sally_other.npz'
fi_sally_other_cov_outfile = './fisher_info/fi_sally_other_cov.npz'
fi_wph_mu_wbkgs_rate_mean = np.load(fi_sally_other_outfile)['arr_0'][0]
fi_wph_e_wbkgs_rate_mean = np.load(fi_sally_other_outfile)['arr_0'][1]
fi_wmh_mu_wbkgs_rate_mean = np.load(fi_sally_other_outfile)['arr_0'][2]
fi_wmh_e_wbkgs_rate_mean = np.load(fi_sally_other_outfile)['arr_0'][3]
fi_wph_mu_rate_mean = np.load(fi_sally_other_outfile)['arr_0'][4]
fi_wph_e_rate_mean = np.load(fi_sally_other_outfile)['arr_0'][5]
fi_wmh_mu_rate_mean = np.load(fi_sally_other_outfile)['arr_0'][6]
fi_wmh_e_rate_mean = np.load(fi_sally_other_outfile)['arr_0'][7]
fi_wph_mu_truth_mean = np.load(fi_sally_other_outfile)['arr_0'][8]
fi_wph_e_truth_mean = np.load(fi_sally_other_outfile)['arr_0'][9]
fi_wmh_mu_truth_mean = np.load(fi_sally_other_outfile)['arr_0'][10]
fi_wmh_e_truth_mean = np.load(fi_sally_other_outfile)['arr_0'][11]
fi_wph_mu_wbkgs_rate_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][0]
fi_wph_e_wbkgs_rate_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][1]
fi_wmh_mu_wbkgs_rate_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][2]
fi_wmh_e_wbkgs_rate_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][3]
fi_wph_mu_rate_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][4]
fi_wph_e_rate_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][5]
fi_wmh_mu_rate_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][6]
fi_wmh_e_rate_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][7]
fi_wph_mu_truth_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][8]
fi_wph_e_truth_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][9]
fi_wmh_mu_truth_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][10]
fi_wmh_e_truth_covariance = np.load(fi_sally_other_cov_outfile)['arr_0'][11]
```
## 1D. Combining the Information In Different Channels
```
fi_sally_wbkgs_met_mean = fi_sally_wph_mu_wbkgs_met_mean + fi_sally_wph_e_wbkgs_met_mean + fi_sally_wmh_mu_wbkgs_met_mean + fi_sally_wmh_e_wbkgs_met_mean
fi_sally_wbkgs_met_covariance = fi_sally_wph_mu_wbkgs_met_covariance + fi_sally_wph_e_wbkgs_met_covariance + fi_sally_wmh_mu_wbkgs_met_covariance + fi_sally_wmh_e_wbkgs_met_covariance
fi_sally_wbkgs_full_mean = fi_sally_wph_mu_wbkgs_full_mean + fi_sally_wph_e_wbkgs_full_mean + fi_sally_wmh_mu_wbkgs_full_mean + fi_sally_wmh_e_wbkgs_full_mean
fi_sally_wbkgs_full_covariance = fi_sally_wph_mu_wbkgs_full_covariance + fi_sally_wph_e_wbkgs_full_covariance + fi_sally_wmh_mu_wbkgs_full_covariance + fi_sally_wmh_e_wbkgs_full_covariance
fi_sally_wbkgs_ptw_mean = fi_sally_wph_mu_wbkgs_ptw_mean + fi_sally_wph_e_wbkgs_ptw_mean + fi_sally_wmh_mu_wbkgs_ptw_mean + fi_sally_wmh_e_wbkgs_ptw_mean
fi_sally_wbkgs_ptw_covariance = fi_sally_wph_mu_wbkgs_ptw_covariance + fi_sally_wph_e_wbkgs_ptw_covariance + fi_sally_wmh_mu_wbkgs_ptw_covariance + fi_sally_wmh_e_wbkgs_ptw_covariance
fi_sally_wbkgs_2d_mean = fi_sally_wph_mu_wbkgs_2d_mean + fi_sally_wph_e_wbkgs_2d_mean + fi_sally_wmh_mu_wbkgs_2d_mean + fi_sally_wmh_e_wbkgs_2d_mean
fi_sally_wbkgs_2d_covariance = fi_sally_wph_mu_wbkgs_2d_covariance + fi_sally_wph_e_wbkgs_2d_covariance + fi_sally_wmh_mu_wbkgs_2d_covariance + fi_sally_wmh_e_wbkgs_2d_covariance
fi_sally_met_mean = fi_sally_wph_mu_met_mean + fi_sally_wph_e_met_mean + fi_sally_wmh_mu_met_mean + fi_sally_wmh_e_met_mean
fi_sally_met_covariance = fi_sally_wph_mu_met_covariance + fi_sally_wph_e_met_covariance + fi_sally_wmh_mu_met_covariance + fi_sally_wmh_e_met_covariance
fi_sally_full_mean = fi_sally_wph_mu_full_mean + fi_sally_wph_e_full_mean + fi_sally_wmh_mu_full_mean + fi_sally_wmh_e_full_mean
fi_sally_full_covariance = fi_sally_wph_mu_full_covariance + fi_sally_wph_e_full_covariance + fi_sally_wmh_mu_full_covariance + fi_sally_wmh_e_full_covariance
fi_sally_2d_mean = fi_sally_wph_mu_2d_mean + fi_sally_wph_e_2d_mean + fi_sally_wmh_mu_2d_mean + fi_sally_wmh_e_2d_mean
fi_sally_2d_covariance = fi_sally_wph_mu_2d_covariance + fi_sally_wph_e_2d_covariance + fi_sally_wmh_mu_2d_covariance + fi_sally_wmh_e_2d_covariance
fi_truth_mean = fi_wph_mu_truth_mean + fi_wph_e_truth_mean + fi_wmh_mu_truth_mean + fi_wmh_e_truth_mean
fi_truth_covariance = fi_wph_mu_truth_covariance + fi_wph_e_truth_covariance + fi_wmh_mu_truth_covariance + fi_wmh_e_truth_covariance
fi_rate_mean = fi_wph_mu_rate_mean + fi_wph_e_rate_mean + fi_wmh_mu_rate_mean + fi_wmh_e_rate_mean
fi_rate_covariance = fi_wph_mu_rate_covariance + fi_wph_e_rate_covariance + fi_wmh_mu_rate_covariance + fi_wmh_e_rate_covariance
fi_wbkgs_rate_mean = fi_wph_mu_wbkgs_rate_mean + fi_wph_e_wbkgs_rate_mean + fi_wmh_mu_wbkgs_rate_mean + fi_wmh_e_wbkgs_rate_mean
fi_wbkgs_rate_covariance = fi_wph_mu_wbkgs_rate_covariance + fi_wph_e_wbkgs_rate_covariance + fi_wmh_mu_wbkgs_rate_covariance + fi_wmh_e_wbkgs_rate_covariance
```
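The channel-by-channel sums above rely on the additivity of the Fisher information: for statistically independent channels the total log-likelihood is a sum, so the information matrices (and, with independent estimates, the covariances of those matrices) add entry by entry. A minimal numeric sketch with hypothetical $2 \times 2$ matrices:

```python
import numpy as np

# Hypothetical per-channel Fisher information matrices (illustration only).
# For independent channels, log L_tot = sum_c log L_c, so the Fisher
# information matrices simply add.
fi_wph_mu = np.array([[4.0, 1.0], [1.0, 2.0]])
fi_wph_e = np.array([[3.0, 0.5], [0.5, 1.5]])

fi_combined = fi_wph_mu + fi_wph_e

# The sum is still a valid Fisher information matrix: symmetric and
# (here) positive definite, giving tighter constraints than either channel.
assert np.allclose(fi_combined, fi_combined.T)
assert np.all(np.linalg.eigvalsh(fi_combined) > 0)
```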
## Projecting & Profiling out Systematics
### No Systematics (Project)
```
fi_sally_wbkgs_met_mean_nosyst, fi_sally_wbkgs_met_covariance_nosyst = project_information( fi_sally_wbkgs_met_mean, [0,1,2,3], covariance=fi_sally_wbkgs_met_covariance)
fi_sally_wbkgs_full_mean_nosyst, fi_sally_wbkgs_full_covariance_nosyst = project_information( fi_sally_wbkgs_full_mean, [0,1,2,3], covariance=fi_sally_wbkgs_full_covariance)
fi_sally_wbkgs_ptw_mean_nosyst, fi_sally_wbkgs_ptw_covariance_nosyst = project_information( fi_sally_wbkgs_ptw_mean, [0,1,2,3], covariance=fi_sally_wbkgs_ptw_covariance)
fi_sally_wbkgs_2d_mean_nosyst, fi_sally_wbkgs_2d_covariance_nosyst = project_information( fi_sally_wbkgs_2d_mean, [0,1,2,3], covariance=fi_sally_wbkgs_2d_covariance)
#fi_sally_wbkgs_thetaw_mean_nosyst, fi_sally_wbkgs_thetaw_covariance_nosyst = project_information( fi_sally_wbkgs_thetaw_mean, [0,1,2,3], covariance=fi_sally_wbkgs_thetaw_covariance)
#fi_sally_wbkgs_2dthetaw_mean_nosyst, fi_sally_wbkgs_2dthetaw_covariance_nosyst = project_information( fi_sally_wbkgs_2dthetaw_mean, [0,1,2,3], covariance=fi_sally_wbkgs_2dthetaw_covariance)
fi_sally_met_mean_nosyst, fi_sally_met_covariance_nosyst = project_information( fi_sally_met_mean, [0,1,2,3], covariance=fi_sally_met_covariance)
fi_sally_full_mean_nosyst, fi_sally_full_covariance_nosyst = project_information( fi_sally_full_mean, [0,1,2,3], covariance= fi_sally_full_covariance)
fi_sally_2d_mean_nosyst, fi_sally_2d_covariance_nosyst = project_information( fi_sally_2d_mean, [0,1,2,3], covariance=fi_sally_2d_covariance)
fi_truth_mean_nosyst, fi_truth_covariance_nosyst = project_information( fi_truth_mean, [0,1,2,3], covariance= fi_truth_covariance)
fi_rate_mean_nosyst, fi_rate_covariance_nosyst = project_information( fi_rate_mean, [0,1,2,3], covariance= fi_rate_covariance)
fi_wbkgs_rate_mean_nosyst, fi_wbkgs_rate_covariance_nosyst = project_information( fi_wbkgs_rate_mean, [0,1,2,3], covariance=fi_wbkgs_rate_covariance)
```
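Here `project_information` comes from MadMiner's Fisher-information tools; keeping components `[0,1,2,3]` corresponds to fixing the nuisance parameters at their nominal values. A minimal sketch of the operation, under the assumption that projection simply extracts the submatrix of the kept parameters (with the matching slice of the rank-4 covariance tensor):

```python
import numpy as np

def project_information_sketch(fisher_info, remaining_components, covariance=None):
    # Projection = take the submatrix of the kept parameters; this is
    # equivalent to fixing all other parameters (here: the systematics).
    idx = np.asarray(remaining_components)
    projected = fisher_info[np.ix_(idx, idx)]
    if covariance is None:
        return projected
    # The covariance of a Fisher-information estimate is a rank-4 tensor,
    # so it is sliced on all four indices.
    projected_cov = covariance[np.ix_(idx, idx, idx, idx)]
    return projected, projected_cov

# Example: keep the first four rows/columns of a 5x5 matrix.
info = np.arange(25, dtype=float).reshape(5, 5)
sub = project_information_sketch(info, [0, 1, 2, 3])
assert sub.shape == (4, 4)
assert np.allclose(sub, info[:4, :4])
```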
## 1E. Rotating the Information
The coefficients $C_{H\square}$ and $C_{HD}$ always enter in the combination $(C_{H\square} - \frac{1}{4}C_{HD})$. This implies an inherent flat direction, along the axis $C_{H\square} + 4 C_{HD}$.
Before analyzing the information, we want to rotate to a basis where this flat direction is manifest. We can then easily profile (or project -- the results should be approximately the same) out the flat direction, leaving a $3 \times 3$ Fisher information matrix in the basis $C_{H\square} - \frac{1}{4} C_{HD}$, $C_{HW}^{\phantom{(3)}}$, and $C_{HQ}^{(3)}$.
### Function for Rotating:
Function for rotating the Fisher information, including its covariance matrix.
By default, it takes the rotation angle to be `np.arctan(4)`, which rotates out the flat direction and leaves the direction $C_{H\square} - \frac{1}{4} C_{HD}$.
```
def rotate(
    fisher_info, covariance=None, include_flat_direction=False, include_nuisance_params=False, rotation_angle=None, axis1=0, axis2=1,
):
    # Rotation angle (default arctan(4), which rotates out the flat direction in cHD and cHbox)
    if rotation_angle is None:
        this_angle = np.arctan(4)
    else:
        this_angle = rotation_angle
    # Define the rotation matrix
    dimension = len(fisher_info)
    this_rotation_matrix = np.zeros((dimension, dimension))
    np.fill_diagonal(this_rotation_matrix, 1)
    this_rotation_matrix[axis1, axis1] = np.cos(this_angle)
    this_rotation_matrix[axis1, axis2] = -np.sin(this_angle)
    this_rotation_matrix[axis2, axis1] = np.sin(this_angle)
    this_rotation_matrix[axis2, axis2] = np.cos(this_angle)
    # Get the rotated Fisher information
    rotated_fisher_info = np.einsum('ki,lj,kl->ij', this_rotation_matrix, this_rotation_matrix, fisher_info)
    # Get the rotated covariance of the Fisher information (a rank-4 tensor)
    if covariance is None:
        rotated_covariance = None
    else:
        rotated_covariance = np.einsum('mi,nj,ok,pl,mnop->ijkl',
            this_rotation_matrix, this_rotation_matrix, this_rotation_matrix, this_rotation_matrix, covariance)
    # If not desired, project out the nuisance parameters
    if not include_nuisance_params:
        if covariance is None:
            rotated_fisher_info = project_information(rotated_fisher_info, [0,1,2,3])
        else:
            rotated_fisher_info, rotated_covariance = project_information(rotated_fisher_info, [0,1,2,3], covariance=rotated_covariance)
    # Return either the three physical directions or include the flat direction
    if include_flat_direction:
        if covariance is None:
            return rotated_fisher_info
        else:
            return rotated_fisher_info, rotated_covariance
    else:
        return project_information(rotated_fisher_info, np.arange(1, len(rotated_fisher_info)), covariance=rotated_covariance)
```
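As a sanity check on the rotation, here is a self-contained toy (it uses the same einsum contraction as `rotate`, but not the notebook's `project_information`): a rank-1 Fisher matrix that only constrains $C_{H\square} - \frac{1}{4}C_{HD}$ becomes exactly zero along the first rotated axis after rotating by $\arctan 4$.

```python
import numpy as np

# Toy 2-parameter Fisher information with a built-in flat direction:
# if observables depend only on (c1 - c2/4), the score is proportional
# to (1, -1/4) and the information matrix is rank 1.
v = np.array([1.0, -0.25])
info = np.outer(v, v)

theta = np.arctan(4)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])

# Same contraction as in rotate(): rotated = R^T I R
rotated = np.einsum('ki,lj,kl->ij', rot, rot, info)

# The first rotated axis points along the flat direction (1, 4)/sqrt(17):
assert np.isclose(rotated[0, 0], 0.0)
# All of the information ends up in the second direction:
assert np.isclose(rotated[1, 1], np.trace(info))
```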
### Rotate all the F.I. (w/o Systematics)
```
fi_sally_wbkgs_met_mean_rot, fi_sally_wbkgs_met_covariance_rot = rotate(fi_sally_wbkgs_met_mean_nosyst, covariance=fi_sally_wbkgs_met_covariance_nosyst)
fi_sally_wbkgs_full_mean_rot, fi_sally_wbkgs_full_covariance_rot = rotate(fi_sally_wbkgs_full_mean_nosyst, covariance=fi_sally_wbkgs_full_covariance_nosyst)
fi_sally_wbkgs_ptw_mean_rot, fi_sally_wbkgs_ptw_covariance_rot = rotate(fi_sally_wbkgs_ptw_mean_nosyst, covariance=fi_sally_wbkgs_ptw_covariance_nosyst)
fi_sally_wbkgs_2d_mean_rot, fi_sally_wbkgs_2d_covariance_rot = rotate(fi_sally_wbkgs_2d_mean_nosyst, covariance=fi_sally_wbkgs_2d_covariance_nosyst)
#fi_sally_wbkgs_thetaw_mean_rot, fi_sally_wbkgs_thetaw_covariance_rot = rotate(fi_sally_wbkgs_thetaw_mean_nosyst, covariance=fi_sally_wbkgs_thetaw_covariance_nosyst)
#fi_sally_wbkgs_2dthetaw_mean_rot, fi_sally_wbkgs_2dthetaw_covariance_rot = rotate(fi_sally_wbkgs_2dthetaw_mean_nosyst, covariance=fi_sally_wbkgs_2dthetaw_covariance_nosyst)
fi_sally_met_mean_rot, fi_sally_met_covariance_rot = rotate(fi_sally_met_mean_nosyst, covariance=fi_sally_met_covariance_nosyst)
fi_sally_full_mean_rot, fi_sally_full_covariance_rot = rotate(fi_sally_full_mean_nosyst, covariance=fi_sally_full_covariance_nosyst)
fi_sally_2d_mean_rot, fi_sally_2d_covariance_rot = rotate(fi_sally_2d_mean_nosyst, covariance=fi_sally_2d_covariance_nosyst)
fi_truth_mean_rot, fi_truth_covariance_rot = rotate(fi_truth_mean_nosyst, covariance=fi_truth_covariance_nosyst)
fi_rate_mean_rot, fi_rate_covariance_rot = rotate(fi_rate_mean_nosyst, covariance=fi_rate_covariance_nosyst)
fi_wbkgs_rate_mean_rot, fi_wbkgs_rate_covariance_rot = rotate(fi_wbkgs_rate_mean_nosyst, covariance=fi_wbkgs_rate_covariance_nosyst)
```
# Contour Plot Code
```
import matplotlib.ticker as ticker
def plot_contours(
    fisher_information_matrices,
    fisher_information_covariances=None,
    reference_thetas=None,
    contour_distance=1.0,
    xlabel=r"$\theta_0$",
    ylabel=r"$\theta_1$",
    xrange=(-1.0, 1.0),
    yrange=(-1.0, 1.0),
    labels=None,
    inline_labels=None,
    resolution=500,
    colors=None,
    linestyles=None,
    linewidths=1.5,
    alphas=1.0,
    alphas_uncertainties=0.25,
    scale_x=1.,
    scale_y=1.,
):
    # Input data
    fisher_information_matrices = np.asarray(fisher_information_matrices)
    n_matrices = fisher_information_matrices.shape[0]
    if fisher_information_matrices.shape != (n_matrices, 2, 2):
        raise RuntimeError(
            "Fisher information matrices have shape {}, not (n, 2, 2)!".format(fisher_information_matrices.shape)
        )
    if fisher_information_covariances is None:
        fisher_information_covariances = [None for _ in range(n_matrices)]
    if reference_thetas is None:
        reference_thetas = [None for _ in range(n_matrices)]
    d2_threshold = contour_distance ** 2.0
    # Line formatting
    if colors is None:
        colors = ["C" + str(i) for i in range(10)] * (n_matrices // 10 + 1)
    elif not isinstance(colors, list):
        colors = [colors for _ in range(n_matrices)]
    if linestyles is None:
        linestyles = ["solid", "dashed", "dotted", "dashdot"] * (n_matrices // 4 + 1)
    elif not isinstance(linestyles, list):
        linestyles = [linestyles for _ in range(n_matrices)]
    if not isinstance(linewidths, list):
        linewidths = [linewidths for _ in range(n_matrices)]
    if not isinstance(alphas, list):
        alphas = [alphas for _ in range(n_matrices)]
    if not isinstance(alphas_uncertainties, list):
        alphas_uncertainties = [alphas_uncertainties for _ in range(n_matrices)]
    # Grid
    xi = np.linspace(xrange[0], xrange[1], resolution)
    yi = np.linspace(yrange[0], yrange[1], resolution)
    xx, yy = np.meshgrid(xi, yi, indexing="xy")
    xx, yy = xx.flatten(), yy.flatten()
    thetas = np.vstack((xx, yy)).T
    # Theta distances from the reference thetas
    d_thetas = []
    for reference_theta in reference_thetas:
        if reference_theta is None:
            d_thetas.append(thetas)
        else:
            d_thetas.append(thetas - reference_theta)
    d_thetas = np.array(d_thetas)  # Shape (n_matrices, n_thetas, n_parameters)
    # Calculate Fisher distances
    fisher_distances_squared = np.einsum("mni,mij,mnj->mn", d_thetas, fisher_information_matrices, d_thetas)
    fisher_distances_squared = fisher_distances_squared.reshape((n_matrices, resolution, resolution))
    # Calculate uncertainties of Fisher distances
    fisher_distances_squared_uncertainties = []
    for d_theta, inf_cov in zip(d_thetas, fisher_information_covariances):
        if inf_cov is None:
            fisher_distances_squared_uncertainties.append(None)
            continue
        var = np.einsum("ni,nj,ijkl,nk,nl->n", d_theta, d_theta, inf_cov, d_theta, d_theta)
        uncertainties = (var ** 0.5).reshape((resolution, resolution))
        fisher_distances_squared_uncertainties.append(uncertainties)
    # Plot results
    fig, ax = plt.subplots()
    # Error bands
    for i in range(n_matrices):
        if fisher_information_covariances[i] is not None:
            d2_up = fisher_distances_squared[i] + fisher_distances_squared_uncertainties[i]
            d2_down = fisher_distances_squared[i] - fisher_distances_squared_uncertainties[i]
            band = (d2_up > d2_threshold) * (d2_down < d2_threshold) + (d2_up < d2_threshold) * (d2_down > d2_threshold)
            plt.contourf(xi, yi, band, [0.5, 2.5], colors=colors[i], alpha=alphas_uncertainties[i])
    # Predictions
    for i in range(n_matrices):
        cs = ax.contour(
            xi,
            yi,
            fisher_distances_squared[i],
            np.array([d2_threshold]),
            colors=colors[i],
            linestyles=linestyles[i],
            linewidths=linewidths[i],
            alpha=alphas[i],
            label=None if labels is None else labels[i],
        )
        if inline_labels is not None and inline_labels[i] is not None and len(inline_labels[i]) > 0:
            plt.clabel(cs, cs.levels, inline=True, fontsize=12, fmt={d2_threshold: inline_labels[i]})
    # Legend and decorations
    if labels is not None:
        ax.legend()
    # Scale the x and y tick labels by some factor
    ticks_x = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x / scale_x))
    ax.xaxis.set_major_formatter(ticks_x)
    ticks_y = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x / scale_y))
    ax.yaxis.set_major_formatter(ticks_y)
    # Set limits and labels
    ax.set_xlim(xrange)
    ax.set_ylim(yrange)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    fig.subplots_adjust(left=0.18, bottom=0.16, top=0.94, right=0.96, hspace=0.0, wspace=0.0)
    return fig, ax
```
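The value `contour_distance=2.4476` used in all the calls below is the 95% confidence radius for the Fisher distance $d^2 = \Delta\theta^T I \Delta\theta$ in a two-parameter plane: the square root of the 95% quantile of a $\chi^2$ distribution with 2 degrees of freedom. For 2 d.o.f. the quantile has a closed form, so this can be checked with numpy alone:

```python
import numpy as np

# For 2 degrees of freedom the chi^2 CDF is 1 - exp(-x/2), so the 95%
# quantile is -2 ln(0.05) and the contour radius is its square root.
d2_95 = -2.0 * np.log(0.05)
radius = np.sqrt(d2_95)
# radius is about 2.4477, matching the contour_distance used in the plots
assert abs(radius - 2.4477) < 1e-3
```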
# Validation Plots
## Project For Plotting
```
fi_sally_wbkgs_met_mean_dw, fi_sally_wbkgs_met_covariance_dw = project_information(fi_sally_wbkgs_met_mean_rot, [0,1], fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_full_mean_dw, fi_sally_wbkgs_full_covariance_dw = project_information(fi_sally_wbkgs_full_mean_rot, [0,1], fi_sally_wbkgs_full_covariance_rot)
fi_sally_met_mean_dw, fi_sally_met_covariance_dw = project_information(fi_sally_met_mean_rot, [0,1], fi_sally_met_covariance_rot)
fi_sally_full_mean_dw, fi_sally_full_covariance_dw = project_information(fi_sally_full_mean_rot, [0,1], fi_sally_full_covariance_rot)
fi_truth_mean_dw, fi_truth_covariance_dw = project_information(fi_truth_mean_rot, [0,1], fi_truth_covariance_rot)
fi_rate_mean_dw, fi_rate_covariance_dw = project_information(fi_rate_mean_rot, [0,1], fi_rate_covariance_rot)
fi_wbkgs_rate_mean_dw, fi_wbkgs_rate_covariance_dw = project_information(fi_wbkgs_rate_mean_rot, [0,1], fi_wbkgs_rate_covariance_rot)
fi_sally_wbkgs_met_mean_dq, fi_sally_wbkgs_met_covariance_dq = project_information(fi_sally_wbkgs_met_mean_rot, [0,2], fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_full_mean_dq, fi_sally_wbkgs_full_covariance_dq = project_information(fi_sally_wbkgs_full_mean_rot, [0,2], fi_sally_wbkgs_full_covariance_rot)
fi_sally_met_mean_dq, fi_sally_met_covariance_dq = project_information(fi_sally_met_mean_rot, [0,2], fi_sally_met_covariance_rot)
fi_sally_full_mean_dq, fi_sally_full_covariance_dq = project_information(fi_sally_full_mean_rot, [0,2], fi_sally_full_covariance_rot)
fi_truth_mean_dq, fi_truth_covariance_dq = project_information(fi_truth_mean_rot, [0,2], fi_truth_covariance_rot)
fi_rate_mean_dq, fi_rate_covariance_dq = project_information(fi_rate_mean_rot, [0,2], fi_rate_covariance_rot)
fi_wbkgs_rate_mean_dq, fi_wbkgs_rate_covariance_dq = project_information(fi_wbkgs_rate_mean_rot, [0,2], fi_wbkgs_rate_covariance_rot)
fi_sally_wbkgs_met_mean_wq, fi_sally_wbkgs_met_covariance_wq = project_information(fi_sally_wbkgs_met_mean_rot, [1,2], fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_full_mean_wq, fi_sally_wbkgs_full_covariance_wq = project_information(fi_sally_wbkgs_full_mean_rot, [1,2], fi_sally_wbkgs_full_covariance_rot)
fi_sally_met_mean_wq, fi_sally_met_covariance_wq = project_information(fi_sally_met_mean_rot, [1,2], fi_sally_met_covariance_rot)
fi_sally_full_mean_wq, fi_sally_full_covariance_wq = project_information(fi_sally_full_mean_rot, [1,2], fi_sally_full_covariance_rot)
fi_truth_mean_wq, fi_truth_covariance_wq = project_information(fi_truth_mean_rot, [1,2], fi_truth_covariance_rot)
fi_rate_mean_wq, fi_rate_covariance_wq = project_information(fi_rate_mean_rot, [1,2], fi_rate_covariance_rot)
fi_wbkgs_rate_mean_wq, fi_wbkgs_rate_covariance_wq = project_information(fi_wbkgs_rate_mean_rot, [1,2], fi_wbkgs_rate_covariance_rot)
```
## Validation Plots
For each pair of operators, we show two plots: one with and one without backgrounds
(to keep each plot from getting too cluttered).
For ease of comparison, we keep the axis ranges the same within each pair.
```
legend_elements = [
Line2D([0],[0], color=col_sally_full, lw=2, ls='dashed', label=r'Full Kin. (w/ $p_{\nu}$)'),
Line2D([0],[0], color=col_sally_met, lw=2, ls='solid', label=r'Full Kin. (MET)'),
Line2D([0],[0], color=col_rate, lw=2, ls='-.', label='Rate'),
]
legend_elements_sig = [
Line2D([0],[0], color=col_parton, lw=2, ls='dotted', label=r'Parton Level'),
Line2D([0],[0], color=col_sally_full, lw=2, ls='dashed', label=r'Full Kin. (w/ $p_{\nu}$)'),
Line2D([0],[0], color=col_sally_met, lw=2, ls='solid', label=r'Full Kin. (MET)'),
Line2D([0],[0], color=col_rate, lw=2, ls='-.', label='Rate'),
]
```
### $\tilde{C}_{HD} - C_{HW}^{\phantom{(3)}}$
```
x_dw = 1.0
y_dw = 2.5
fig, validation_plot_dw = plot_contours(
[ fi_truth_mean_dw, fi_sally_full_mean_dw, fi_sally_met_mean_dw, fi_rate_mean_dw ],
[ fi_truth_covariance_dw, fi_sally_full_covariance_dw, fi_sally_met_covariance_dw, fi_rate_covariance_dw ],
colors=[ col_parton, col_sally_full, col_sally_met, col_rate ],
linestyles=[ 'dotted', 'dashed', 'solid', '-.' ],
xrange=(-x_dw, x_dw),
yrange=(-y_dw, y_dw),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HW}^{\phantom{(3)}}$',
scale_x=1.0,
scale_y=10.,
)
plt.legend(handles=legend_elements_sig, loc='lower right', frameon=False)
plt.text(-.9*x_dw,.80*y_dw,r'$WH$ Only',fontsize=16)
plt.text(-.9*x_dw,.60*y_dw,r'$C_{HQ}^{(3)} = 0$',fontsize=16)
plt.text(-.9*x_dw,.45*y_dw,r'$L = 300\,\mathrm{fb}^{-1}$',fontsize=16)
plt.savefig('plots/validation_plot_dw.eps',format='eps')
fig, validation_plot_wbkgs_dw = plot_contours(
[ fi_sally_wbkgs_full_mean_dw, fi_sally_wbkgs_met_mean_dw, fi_wbkgs_rate_mean_dw ],
[ fi_sally_wbkgs_full_covariance_dw, fi_sally_wbkgs_met_covariance_dw, fi_wbkgs_rate_covariance_dw ],
colors=[ col_sally_full, col_sally_met, col_rate ],
linestyles=[ 'dashed', 'solid', '-.' ],
xrange=(-x_dw, x_dw),
yrange=(-y_dw, y_dw),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HW}^{\phantom{(3)}}$',
scale_x=1.0,
scale_y=10.,
)
plt.legend(handles=legend_elements, loc='lower right', frameon=False)
plt.text(-.9*x_dw,.80*y_dw,r'$WH$ + QCD Backgrounds',fontsize=16)
plt.text(-.9*x_dw,.60*y_dw,r'$C_{HQ}^{(3)} = 0$',fontsize=16)
plt.text(-.9*x_dw,.45*y_dw,r'$L = 300\,\mathrm{fb}^{-1}$',fontsize=16)
plt.savefig('plots/validation_plot_wbkgs_dw.eps',format='eps')
```
### $\tilde{C}_{HD} - C_{HQ}^{(3)}$
```
x_dq = 0.4
y_dq = 2.3
fig, validation_plot_dq = plot_contours(
[ fi_truth_mean_dq, fi_sally_full_mean_dq, fi_sally_met_mean_dq, fi_rate_mean_dq ],
[ fi_truth_covariance_dq, fi_sally_full_covariance_dq, fi_sally_met_covariance_dq, fi_rate_covariance_dq ],
colors=[ col_parton, col_sally_full, col_sally_met, col_rate ],
linestyles=[ 'dotted', 'dashed', 'solid', '-.' ],
xrange=(-x_dq,x_dq),
yrange=(-y_dq,y_dq),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=1.0,
scale_y=1.,
)
plt.legend(handles=legend_elements_sig, loc='lower right', frameon=False)
plt.text(-1*x_dq, 1.05*y_dq, r'$\times 10^{-2}$', fontsize=14)
plt.text(-.9*x_dq,.8*y_dq,r'$WH$ Only',fontsize=16)
plt.text(-.9*x_dq,.6*y_dq,r'$C_{HW}^{\phantom{(3)}} = 0$',fontsize=16)
plt.text(-.9*x_dq,.45*y_dq,r'$L = 300\,\mathrm{fb}^{-1}$',fontsize=16)
plt.savefig('plots/validation_plot_dq.eps',format='eps')
fig, validation_plot_wbkgs_dq = plot_contours(
[ fi_sally_wbkgs_full_mean_dq, fi_sally_wbkgs_met_mean_dq, fi_wbkgs_rate_mean_dq ],
[ fi_sally_wbkgs_full_covariance_dq, fi_sally_wbkgs_met_covariance_dq, fi_wbkgs_rate_covariance_dq ],
colors=[ col_sally_full, col_sally_met, col_rate ],
linestyles=[ 'dashed', 'solid', '-.' ],
xrange=(-x_dq,x_dq),
yrange=(-y_dq,y_dq),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=1.0,
scale_y=1.,
)
plt.legend(handles=legend_elements, loc='lower right', frameon=False)
plt.text(-1*x_dq, 1.05*y_dq, r'$\times 10^{-2}$', fontsize=14)
plt.text(-.9*x_dq,.8*y_dq,r'$WH$ + QCD Backgrounds',fontsize=16)
plt.text(-.9*x_dq,.6*y_dq,r'$C_{HW}^{\phantom{(3)}} = 0$',fontsize=16)
plt.text(-.9*x_dq,.45*y_dq,r'$L = 300\,\mathrm{fb}^{-1}$',fontsize=16)
plt.savefig('plots/validation_plot_wbkgs_dq.eps',format='eps')
```
### $C_{HW}^{\phantom{(3)}} - C_{HQ}^{(3)}$
```
x_wq = 1.3
y_wq = 2.5
fig, validation_plot_wq = plot_contours(
[ fi_truth_mean_wq, fi_sally_full_mean_wq, fi_sally_met_mean_wq, fi_rate_mean_wq ],
[ fi_truth_covariance_wq, fi_sally_full_covariance_wq, fi_sally_met_covariance_wq, fi_rate_covariance_wq ],
colors=[ col_parton, col_sally_full, col_sally_met, col_rate ],
linestyles=[ 'dotted', 'dashed', 'solid', '-.' ],
xrange=(-x_wq,x_wq),
yrange=(-y_wq,y_wq),
contour_distance=2.4476,
xlabel=r'$C_{HW}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=10.,
scale_y=1.,
)
plt.legend(handles=legend_elements_sig, loc='lower left', frameon=False)
plt.text(-1*x_wq, 1.05*y_wq, r'$\times 10^{-2}$', fontsize=14)
plt.text(.30*x_wq,.8*y_wq,r'$WH$ Only',fontsize=16)
plt.text(.45*x_wq,.6*y_wq,r'$\tilde{C}_{HD}^{\phantom{(3)}} = 0$',fontsize=16)
plt.text(.28*x_wq,.45*y_wq,r'$L = 300\,\mathrm{fb}^{-1}$',fontsize=16)
plt.savefig('plots/validation_plot_wq.eps',format='eps')
fig, validation_plot_wbkgs_wq = plot_contours(
[ fi_sally_wbkgs_full_mean_wq, fi_sally_wbkgs_met_mean_wq, fi_wbkgs_rate_mean_wq ],
[ fi_sally_wbkgs_full_covariance_wq, fi_sally_wbkgs_met_covariance_wq, fi_wbkgs_rate_covariance_wq ],
colors=[ col_sally_full, col_sally_met, col_rate ],
linestyles=[ 'dashed', 'solid', '-.' ],
xrange=(-x_wq,x_wq),
yrange=(-y_wq,y_wq),
contour_distance=2.4476,
xlabel=r'$C_{HW}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=10.,
scale_y=1.,
)
plt.legend(handles=legend_elements, loc='lower left', frameon=False)
plt.text(-1*x_wq, 1.05*y_wq, r'$\times 10^{-2}$', fontsize=14)
plt.text(-.50*x_wq,.8*y_wq,r'$WH$ + QCD Backgrounds',fontsize=16)
plt.text(.45*x_wq,.6*y_wq,r'$\tilde{C}_{HD}^{\phantom{(3)}} = 0$',fontsize=16)
plt.text(.28*x_wq,.45*y_wq,r'$L = 300\,\mathrm{fb}^{-1}$',fontsize=16)
plt.savefig('plots/validation_plot_wbkgs_wq.eps',format='eps')
```
# Information in Histograms
## Load Arrays
```
fi_hist_stxs_outfile = './fisher_info/fi_hist_stxs.npz'
fi_hist_stxs_cov_outfile = './fisher_info/fi_hist_stxs_cov.npz'
# Load each file once, rather than re-reading it for every slice
fi_hist_stxs_arr = np.load(fi_hist_stxs_outfile)['arr_0']
fi_hist_stxs_cov_arr = np.load(fi_hist_stxs_cov_outfile)['arr_0']
fi_stxs_3bins_wph_mu_mean = fi_hist_stxs_arr[0]
fi_stxs_3bins_wph_e_mean = fi_hist_stxs_arr[1]
fi_stxs_3bins_wmh_mu_mean = fi_hist_stxs_arr[2]
fi_stxs_3bins_wmh_e_mean = fi_hist_stxs_arr[3]
fi_stxs_4bins_wph_mu_mean = fi_hist_stxs_arr[4]
fi_stxs_4bins_wph_e_mean = fi_hist_stxs_arr[5]
fi_stxs_4bins_wmh_mu_mean = fi_hist_stxs_arr[6]
fi_stxs_4bins_wmh_e_mean = fi_hist_stxs_arr[7]
fi_stxs_5bins_wph_mu_mean = fi_hist_stxs_arr[8]
fi_stxs_5bins_wph_e_mean = fi_hist_stxs_arr[9]
fi_stxs_5bins_wmh_mu_mean = fi_hist_stxs_arr[10]
fi_stxs_5bins_wmh_e_mean = fi_hist_stxs_arr[11]
fi_stxs_6bins_wph_mu_mean = fi_hist_stxs_arr[12]
fi_stxs_6bins_wph_e_mean = fi_hist_stxs_arr[13]
fi_stxs_6bins_wmh_mu_mean = fi_hist_stxs_arr[14]
fi_stxs_6bins_wmh_e_mean = fi_hist_stxs_arr[15]
fi_stxs_3bins_wph_mu_covariance = fi_hist_stxs_cov_arr[0]
fi_stxs_3bins_wph_e_covariance = fi_hist_stxs_cov_arr[1]
fi_stxs_3bins_wmh_mu_covariance = fi_hist_stxs_cov_arr[2]
fi_stxs_3bins_wmh_e_covariance = fi_hist_stxs_cov_arr[3]
fi_stxs_4bins_wph_mu_covariance = fi_hist_stxs_cov_arr[4]
fi_stxs_4bins_wph_e_covariance = fi_hist_stxs_cov_arr[5]
fi_stxs_4bins_wmh_mu_covariance = fi_hist_stxs_cov_arr[6]
fi_stxs_4bins_wmh_e_covariance = fi_hist_stxs_cov_arr[7]
fi_stxs_5bins_wph_mu_covariance = fi_hist_stxs_cov_arr[8]
fi_stxs_5bins_wph_e_covariance = fi_hist_stxs_cov_arr[9]
fi_stxs_5bins_wmh_mu_covariance = fi_hist_stxs_cov_arr[10]
fi_stxs_5bins_wmh_e_covariance = fi_hist_stxs_cov_arr[11]
fi_stxs_6bins_wph_mu_covariance = fi_hist_stxs_cov_arr[12]
fi_stxs_6bins_wph_e_covariance = fi_hist_stxs_cov_arr[13]
fi_stxs_6bins_wmh_mu_covariance = fi_hist_stxs_cov_arr[14]
fi_stxs_6bins_wmh_e_covariance = fi_hist_stxs_cov_arr[15]
fi_hist_2d_outfile = './fisher_info/fi_hist_2d.npz'
fi_hist_2d_cov_outfile = './fisher_info/fi_hist_2d_cov.npz'
fi_hist_2d_arr = np.load(fi_hist_2d_outfile)['arr_0']
fi_hist_2d_cov_arr = np.load(fi_hist_2d_cov_outfile)['arr_0']
fi_imp_stxs_wph_mu_mean = fi_hist_2d_arr[0]
fi_imp_stxs_wph_e_mean = fi_hist_2d_arr[1]
fi_imp_stxs_wmh_mu_mean = fi_hist_2d_arr[2]
fi_imp_stxs_wmh_e_mean = fi_hist_2d_arr[3]
fi_imp_stxs_wph_mu_covariance = fi_hist_2d_cov_arr[0]
fi_imp_stxs_wph_e_covariance = fi_hist_2d_cov_arr[1]
fi_imp_stxs_wmh_mu_covariance = fi_hist_2d_cov_arr[2]
fi_imp_stxs_wmh_e_covariance = fi_hist_2d_cov_arr[3]
```
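The `['arr_0']` indexing above works because the matrices were saved positionally (as with `np.savez(path, arr)`); as an aside, saving with keyword arguments gives self-documenting keys instead of magic indices. A small round-trip sketch (the file name and array names are illustrative):

```python
import os
import tempfile
import numpy as np

# Saving with keyword arguments stores named arrays, so later loads do
# not depend on remembering positional indices into 'arr_0'.
fi_mean = np.arange(4.0).reshape(2, 2)
fi_cov = np.eye(2)

path = os.path.join(tempfile.mkdtemp(), 'fi_demo.npz')
np.savez(path, fi_mean=fi_mean, fi_cov=fi_cov)

with np.load(path) as data:
    assert sorted(data.files) == ['fi_cov', 'fi_mean']
    assert np.allclose(data['fi_mean'], fi_mean)
```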
Combine the four lepton channels ($W^+h$ and $W^-h$, each with $\mu$ and $e$):
```
fi_stxs_3bins_mean = fi_stxs_3bins_wph_mu_mean + fi_stxs_3bins_wph_e_mean + fi_stxs_3bins_wmh_mu_mean + fi_stxs_3bins_wmh_e_mean
fi_stxs_3bins_covariance = fi_stxs_3bins_wph_mu_covariance + fi_stxs_3bins_wph_e_covariance + fi_stxs_3bins_wmh_mu_covariance + fi_stxs_3bins_wmh_e_covariance
fi_stxs_4bins_mean = fi_stxs_4bins_wph_mu_mean + fi_stxs_4bins_wph_e_mean + fi_stxs_4bins_wmh_mu_mean + fi_stxs_4bins_wmh_e_mean
fi_stxs_4bins_covariance = fi_stxs_4bins_wph_mu_covariance + fi_stxs_4bins_wph_e_covariance + fi_stxs_4bins_wmh_mu_covariance + fi_stxs_4bins_wmh_e_covariance
fi_stxs_5bins_mean = fi_stxs_5bins_wph_mu_mean + fi_stxs_5bins_wph_e_mean + fi_stxs_5bins_wmh_mu_mean + fi_stxs_5bins_wmh_e_mean
fi_stxs_5bins_covariance = fi_stxs_5bins_wph_mu_covariance + fi_stxs_5bins_wph_e_covariance + fi_stxs_5bins_wmh_mu_covariance + fi_stxs_5bins_wmh_e_covariance
fi_stxs_6bins_mean = fi_stxs_6bins_wph_mu_mean + fi_stxs_6bins_wph_e_mean + fi_stxs_6bins_wmh_mu_mean + fi_stxs_6bins_wmh_e_mean
fi_stxs_6bins_covariance = fi_stxs_6bins_wph_mu_covariance + fi_stxs_6bins_wph_e_covariance + fi_stxs_6bins_wmh_mu_covariance + fi_stxs_6bins_wmh_e_covariance
fi_imp_stxs_mean = fi_imp_stxs_wph_mu_mean + fi_imp_stxs_wph_e_mean + fi_imp_stxs_wmh_mu_mean + fi_imp_stxs_wmh_e_mean
fi_imp_stxs_covariance = fi_imp_stxs_wph_mu_covariance + fi_imp_stxs_wph_e_covariance + fi_imp_stxs_wmh_mu_covariance + fi_imp_stxs_wmh_e_covariance
```
## Plots of Information in $p_{T,W}$
### Rotate and Project the F.I. for Plotting:
Project out systematics:
```
fi_stxs_3bins_mean_nosyst, fi_stxs_3bins_covariance_nosyst = project_information(fi_stxs_3bins_mean, [0,1,2,3], covariance=fi_stxs_3bins_covariance)
fi_stxs_5bins_mean_nosyst, fi_stxs_5bins_covariance_nosyst = project_information(fi_stxs_5bins_mean, [0,1,2,3], covariance=fi_stxs_5bins_covariance)
fi_stxs_6bins_mean_nosyst, fi_stxs_6bins_covariance_nosyst = project_information(fi_stxs_6bins_mean, [0,1,2,3], covariance=fi_stxs_6bins_covariance)
fi_stxs_3bins_mean_rot, fi_stxs_3bins_covariance_rot = rotate(fi_stxs_3bins_mean_nosyst, covariance=fi_stxs_3bins_covariance_nosyst)
fi_stxs_5bins_mean_rot, fi_stxs_5bins_covariance_rot = rotate(fi_stxs_5bins_mean_nosyst, covariance=fi_stxs_5bins_covariance_nosyst)
fi_stxs_6bins_mean_rot, fi_stxs_6bins_covariance_rot = rotate(fi_stxs_6bins_mean_nosyst, covariance=fi_stxs_6bins_covariance_nosyst)
```
#### Projected
```
fi_sally_wbkgs_ptw_mean_dw, fi_sally_wbkgs_ptw_covariance_dw = project_information(fi_sally_wbkgs_ptw_mean_rot, [0,1], covariance=fi_sally_wbkgs_ptw_covariance_rot)
fi_stxs_3bins_mean_dw, fi_stxs_3bins_covariance_dw = project_information(fi_stxs_3bins_mean_rot, [0,1], covariance=fi_stxs_3bins_covariance_rot)
fi_stxs_5bins_mean_dw, fi_stxs_5bins_covariance_dw = project_information(fi_stxs_5bins_mean_rot, [0,1], covariance=fi_stxs_5bins_covariance_rot)
fi_stxs_6bins_mean_dw, fi_stxs_6bins_covariance_dw = project_information(fi_stxs_6bins_mean_rot, [0,1], covariance=fi_stxs_6bins_covariance_rot)
fi_sally_wbkgs_ptw_mean_dq, fi_sally_wbkgs_ptw_covariance_dq = project_information(fi_sally_wbkgs_ptw_mean_rot, [0,2], covariance=fi_sally_wbkgs_ptw_covariance_rot)
fi_stxs_3bins_mean_dq, fi_stxs_3bins_covariance_dq = project_information(fi_stxs_3bins_mean_rot, [0,2], covariance=fi_stxs_3bins_covariance_rot)
fi_stxs_5bins_mean_dq, fi_stxs_5bins_covariance_dq = project_information(fi_stxs_5bins_mean_rot, [0,2], covariance=fi_stxs_5bins_covariance_rot)
fi_stxs_6bins_mean_dq, fi_stxs_6bins_covariance_dq = project_information(fi_stxs_6bins_mean_rot, [0,2], covariance=fi_stxs_6bins_covariance_rot)
fi_sally_wbkgs_ptw_mean_wq, fi_sally_wbkgs_ptw_covariance_wq = project_information(fi_sally_wbkgs_ptw_mean_rot, [1,2], covariance=fi_sally_wbkgs_ptw_covariance_rot)
fi_stxs_3bins_mean_wq, fi_stxs_3bins_covariance_wq = project_information(fi_stxs_3bins_mean_rot, [1,2], covariance=fi_stxs_3bins_covariance_rot)
fi_stxs_5bins_mean_wq, fi_stxs_5bins_covariance_wq = project_information(fi_stxs_5bins_mean_rot, [1,2], covariance=fi_stxs_5bins_covariance_rot)
fi_stxs_6bins_mean_wq, fi_stxs_6bins_covariance_wq = project_information(fi_stxs_6bins_mean_rot, [1,2], covariance=fi_stxs_6bins_covariance_rot)
```
#### Profiled
```
fi_sally_wbkgs_ptw_mean_prof_dw, fi_sally_wbkgs_ptw_covariance_prof_dw = profile_information( fi_sally_wbkgs_ptw_mean_rot, [0,1], covariance = fi_sally_wbkgs_ptw_covariance_rot)
fi_stxs_3bins_mean_prof_dw, fi_stxs_3bins_covariance_prof_dw = profile_information( fi_stxs_3bins_mean_rot, [0,1], covariance = fi_stxs_3bins_covariance_rot)
fi_stxs_5bins_mean_prof_dw, fi_stxs_5bins_covariance_prof_dw = profile_information( fi_stxs_5bins_mean_rot, [0,1], covariance = fi_stxs_5bins_covariance_rot)
fi_stxs_6bins_mean_prof_dw, fi_stxs_6bins_covariance_prof_dw = profile_information( fi_stxs_6bins_mean_rot, [0,1], covariance = fi_stxs_6bins_covariance_rot)
fi_sally_wbkgs_ptw_mean_prof_dq, fi_sally_wbkgs_ptw_covariance_prof_dq = profile_information( fi_sally_wbkgs_ptw_mean_rot, [0,2], covariance = fi_sally_wbkgs_ptw_covariance_rot)
fi_stxs_3bins_mean_prof_dq, fi_stxs_3bins_covariance_prof_dq = profile_information( fi_stxs_3bins_mean_rot, [0,2], covariance = fi_stxs_3bins_covariance_rot)
fi_stxs_5bins_mean_prof_dq, fi_stxs_5bins_covariance_prof_dq = profile_information( fi_stxs_5bins_mean_rot, [0,2], covariance = fi_stxs_5bins_covariance_rot)
fi_stxs_6bins_mean_prof_dq, fi_stxs_6bins_covariance_prof_dq = profile_information( fi_stxs_6bins_mean_rot, [0,2], covariance = fi_stxs_6bins_covariance_rot)
fi_sally_wbkgs_ptw_mean_prof_wq, fi_sally_wbkgs_ptw_covariance_prof_wq = profile_information( fi_sally_wbkgs_ptw_mean_rot, [1,2], covariance = fi_sally_wbkgs_ptw_covariance_rot)
fi_stxs_3bins_mean_prof_wq, fi_stxs_3bins_covariance_prof_wq = profile_information( fi_stxs_3bins_mean_rot, [1,2], covariance = fi_stxs_3bins_covariance_rot)
fi_stxs_5bins_mean_prof_wq, fi_stxs_5bins_covariance_prof_wq = profile_information( fi_stxs_5bins_mean_rot, [1,2], covariance = fi_stxs_5bins_covariance_rot)
fi_stxs_6bins_mean_prof_wq, fi_stxs_6bins_covariance_prof_wq = profile_information( fi_stxs_6bins_mean_rot, [1,2], covariance = fi_stxs_6bins_covariance_rot)
```
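Unlike projection, `profile_information` treats the dropped directions as free parameters rather than fixing them; in the Gaussian (Fisher) approximation this is the Schur complement of the kept block. A minimal sketch of the matrix operation (the propagation of the covariance tensor, which MadMiner also handles, is omitted here):

```python
import numpy as np

def profile_information_sketch(fisher_info, remaining_components):
    # Profiling = minimizing over the dropped parameters:
    # I_prof = I_aa - I_ab I_bb^{-1} I_ba  (Schur complement of the kept block)
    n = fisher_info.shape[0]
    keep = np.asarray(remaining_components)
    drop = np.array([i for i in range(n) if i not in keep])
    i_aa = fisher_info[np.ix_(keep, keep)]
    if drop.size == 0:
        return i_aa
    i_ab = fisher_info[np.ix_(keep, drop)]
    i_bb = fisher_info[np.ix_(drop, drop)]
    return i_aa - i_ab @ np.linalg.inv(i_bb) @ i_ab.T

info = np.array([[2.0, 1.0, 0.0],
                 [1.0, 3.0, 1.0],
                 [0.0, 1.0, 4.0]])
prof = profile_information_sketch(info, [0, 1])
proj = info[:2, :2]
# Profiled constraints are never stronger than projected ones:
assert np.all(np.linalg.eigvalsh(proj - prof) >= -1e-12)
```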
### Plots of Information
Here we show the limits obtained from histograms of $p_{T,W}$ with various binnings:
* The original STXS binning, with 3 bins (boundaries at 0, 150, 250 GeV)
* STXS with an additional bin boundary at 75 GeV
* STXS with additional bin boundaries at 75 and 400 GeV
* A very finely binned histogram, with bin boundaries every 25 GeV up to 325 GeV, plus extra boundaries at 375, 425, and 500 GeV
* The infinite-binning limit, obtained by training a neural network (with `SALLY`) on only the observable $p_{T,W}$
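The ordering we expect from these binnings follows from the Fisher information of a binned Poisson measurement, $I = \sum_i (\partial_\theta \nu_i)^2/\nu_i$: merging bins can only lose information. A toy illustration with hypothetical counts and derivatives:

```python
import numpy as np

def binned_poisson_info(nu, dnu):
    # Fisher information for one parameter from independent Poisson bins:
    # I = sum_i (d nu_i / d theta)^2 / nu_i
    return np.sum(dnu ** 2 / nu)

nu = np.array([100.0, 50.0, 10.0])   # hypothetical expected counts per bin
dnu = np.array([-5.0, 2.0, 4.0])     # hypothetical d nu_i / d theta

fine = binned_poisson_info(nu, dnu)
# Merge the last two bins: both counts and derivatives add
coarse = binned_poisson_info(np.array([100.0, 60.0]), np.array([-5.0, 6.0]))

assert coarse <= fine  # coarser binning never gains information
```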
```
ptw_legend = [
Line2D([0],[0], color=col_sally_ptw, ls='-', label=r'Full $p_{T,W}$ dist.'),
Line2D([0],[0], color=col_stxs_6bins, ls='-.', label='STXS, 6 bins'),
Line2D([0],[0], color=col_stxs_5bins, ls='--', label='STXS, stage 1.1'),
Line2D([0],[0], color=col_stxs_3bins, ls=':', label='STXS, stage 1'),
]
```
#### Projected
```
_, ax = plot_contours(
[
fi_sally_wbkgs_ptw_mean_dw,
fi_stxs_3bins_mean_dw,
fi_stxs_5bins_mean_dw,
fi_stxs_6bins_mean_dw,
],
colors=[ col_sally_ptw, col_stxs_3bins, col_stxs_5bins, col_stxs_6bins ],
linestyles=[ '-', ':', '--', '-.' ],
xrange=(-2.75,2.75),
yrange=(-7.5,7.5),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HW}^{\phantom{(3)}}$',
scale_x=1.0,
scale_y=10.,
)
plt.legend(handles=ptw_legend, loc='upper left',frameon=False)
plt.text(0.42*ax.get_xlim()[1], -.75*ax.get_ylim()[1], r'$C_{HQ}^{(3)} = 0$', fontsize=16)
plt.text(0.30*ax.get_xlim()[1], -.9*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/stxs_contours_dw_proj.pdf')
_, ax = plot_contours(
[
fi_sally_wbkgs_ptw_mean_dq,
fi_stxs_3bins_mean_dq,
fi_stxs_5bins_mean_dq,
fi_stxs_6bins_mean_dq,
],
colors=[ col_sally_ptw, col_stxs_3bins, col_stxs_5bins, col_stxs_6bins ],
linestyles=[ '-', ':', '--', '-.' ],
xrange=(-.5,.5),
yrange=(-3.5,3.5),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=1.0,
scale_y=1.,
)
plt.legend(handles=ptw_legend, loc='upper left',frameon=False)
plt.text(-1*ax.get_xlim()[1], 1.05*ax.get_ylim()[1], r'$\times 10^{-2}$', fontsize=14)
plt.text(0.42*ax.get_xlim()[1], -.75*ax.get_ylim()[1], r'$C_{HW}^{\phantom{(3)}} = 0$', fontsize=16)
plt.text(0.30*ax.get_xlim()[1], -.9*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/stxs_contours_dq_proj.eps',format='eps')
_, ax = plot_contours(
[
fi_sally_wbkgs_ptw_mean_wq,
fi_stxs_3bins_mean_wq,
fi_stxs_5bins_mean_wq,
fi_stxs_6bins_mean_wq,
],
colors=[ col_sally_ptw, col_stxs_3bins, col_stxs_5bins, col_stxs_6bins ],
linestyles=[ '-', ':', '--', '-.' ],
xrange=(-1.5,1.5),
yrange=(-3,3),
contour_distance=2.4476,
xlabel=r'$C_{HW}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=10.0,
scale_y=1.,
)
plt.legend(handles=ptw_legend, loc='lower left',frameon=False)
plt.text(-1*ax.get_xlim()[1], 1.05*ax.get_ylim()[1], r'$\times 10^{-2}$', fontsize=14)
plt.text(.42*ax.get_xlim()[1], 0.8*ax.get_ylim()[1],r'$\tilde{C}_{HD}^{\phantom{(3)}} = 0$',fontsize=16)
plt.text(.30*ax.get_xlim()[1], .65*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/stxs_contours_wq_proj.eps',format='eps')
```
#### Profiled
```
_, ax = plot_contours(
[
fi_sally_wbkgs_ptw_mean_prof_dw,
fi_stxs_3bins_mean_prof_dw,
fi_stxs_5bins_mean_prof_dw,
fi_stxs_6bins_mean_prof_dw,
],
colors=[ col_sally_ptw, col_stxs_3bins, col_stxs_5bins, col_stxs_6bins ],
linestyles=[ '-', ':', '--', '-.' ],
xrange=(-5.5,5.5),
yrange=(-16,16),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HW}^{\phantom{(3)}}$',
scale_x=1.0,
scale_y=10.,
)
plt.legend(handles=ptw_legend, loc='upper left',frameon=False)
plt.text(0.24*ax.get_xlim()[1], -.75*ax.get_ylim()[1], r'$C_{HQ}^{(3)}$ Profiled', fontsize=16)
plt.text(0.30*ax.get_xlim()[1], -.9*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/stxs_contours_dw_prof.eps',format='eps')
_, ax = plot_contours(
[
fi_sally_wbkgs_ptw_mean_prof_dq,
fi_stxs_3bins_mean_prof_dq,
fi_stxs_5bins_mean_prof_dq,
fi_stxs_6bins_mean_prof_dq,
],
colors=[ col_sally_ptw, col_stxs_3bins, col_stxs_5bins, col_stxs_6bins ],
linestyles=[ '-', ':', '--', '-.' ],
xrange=(-6,6),
yrange=(-6,6),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=1.0,
scale_y=1.,
)
plt.legend(handles=ptw_legend, loc='lower left',frameon=False)
plt.text(-1*ax.get_xlim()[1], 1.05*ax.get_ylim()[1], r'$\times 10^{-2}$', fontsize=14)
plt.text(0.22*ax.get_xlim()[1], 0.8*ax.get_ylim()[1], r'$C_{HW}^{\phantom{(3)}}$ Profiled', fontsize=16)
plt.text(0.30*ax.get_xlim()[1], 0.65*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/stxs_contours_dq_prof.eps',format='eps')
_, ax = plot_contours(
[
fi_sally_wbkgs_ptw_mean_prof_wq,
fi_stxs_3bins_mean_prof_wq,
fi_stxs_5bins_mean_prof_wq,
fi_stxs_6bins_mean_prof_wq,
],
colors=[ col_sally_ptw, col_stxs_3bins, col_stxs_5bins, col_stxs_6bins ],
linestyles=[ '-', ':', '--', '-.' ],
xrange=(-18,18),
yrange=(-6,6),
contour_distance=2.4476,
xlabel=r'$C_{HW}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=10.0,
scale_y=1.,
)
plt.legend(handles=ptw_legend, loc='lower left',frameon=False)
plt.text(-1*ax.get_xlim()[1], 1.05*ax.get_ylim()[1], r'$\times 10^{-2}$', fontsize=14)
plt.text(0.24*ax.get_xlim()[1], 0.8*ax.get_ylim()[1],r'$\tilde{C}_{HD}^{\phantom{(3)}}$ Profiled',fontsize=16)
plt.text(0.30*ax.get_xlim()[1], 0.65*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/stxs_contours_wq_prof.eps',format='eps')
```
## 5.2 1D vs. 2D Histograms
### Combine & Rotate for Plotting
```
fi_imp_stxs_mean_nosyst, fi_imp_stxs_covariance_nosyst = project_information(fi_imp_stxs_mean, [0,1,2,3], covariance=fi_imp_stxs_covariance)
fi_imp_stxs_mean_rot, fi_imp_stxs_covariance_rot = rotate(fi_imp_stxs_mean_nosyst, covariance=fi_imp_stxs_covariance_nosyst)
```
#### Projected
```
fi_sally_wbkgs_met_mean_dw, fi_sally_wbkgs_met_covariance_dw = project_information(fi_sally_wbkgs_met_mean_rot, [0,1], covariance=fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_2d_mean_dw, fi_sally_wbkgs_2d_covariance_dw = project_information(fi_sally_wbkgs_2d_mean_rot, [0,1], covariance=fi_sally_wbkgs_2d_covariance_rot)
fi_imp_stxs_mean_dw, fi_imp_stxs_covariance_dw = project_information(fi_imp_stxs_mean_rot, [0,1], covariance=fi_imp_stxs_covariance_rot)
fi_sally_wbkgs_met_mean_dq, fi_sally_wbkgs_met_covariance_dq = project_information(fi_sally_wbkgs_met_mean_rot, [0,2], covariance=fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_2d_mean_dq, fi_sally_wbkgs_2d_covariance_dq = project_information(fi_sally_wbkgs_2d_mean_rot, [0,2], covariance=fi_sally_wbkgs_2d_covariance_rot)
fi_imp_stxs_mean_dq, fi_imp_stxs_covariance_dq = project_information(fi_imp_stxs_mean_rot, [0,2], covariance=fi_imp_stxs_covariance_rot)
fi_sally_wbkgs_met_mean_wq, fi_sally_wbkgs_met_covariance_wq = project_information(fi_sally_wbkgs_met_mean_rot, [1,2], covariance=fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_2d_mean_wq, fi_sally_wbkgs_2d_covariance_wq = project_information(fi_sally_wbkgs_2d_mean_rot, [1,2], covariance=fi_sally_wbkgs_2d_covariance_rot)
fi_imp_stxs_mean_wq, fi_imp_stxs_covariance_wq = project_information(fi_imp_stxs_mean_rot, [1,2], covariance=fi_imp_stxs_covariance_rot)
```
#### Profiled
```
fi_sally_wbkgs_met_mean_prof_dw, fi_sally_wbkgs_met_covariance_prof_dw = profile_information(fi_sally_wbkgs_met_mean_rot, [0,1], covariance=fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_2d_mean_prof_dw, fi_sally_wbkgs_2d_covariance_prof_dw = profile_information(fi_sally_wbkgs_2d_mean_rot, [0,1], covariance=fi_sally_wbkgs_2d_covariance_rot)
fi_imp_stxs_mean_prof_dw, fi_imp_stxs_covariance_prof_dw = profile_information(fi_imp_stxs_mean_rot, [0,1], covariance=fi_imp_stxs_covariance_rot)
fi_sally_wbkgs_met_mean_prof_dq, fi_sally_wbkgs_met_covariance_prof_dq = profile_information(fi_sally_wbkgs_met_mean_rot, [0,2], covariance=fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_2d_mean_prof_dq, fi_sally_wbkgs_2d_covariance_prof_dq = profile_information(fi_sally_wbkgs_2d_mean_rot, [0,2], covariance=fi_sally_wbkgs_2d_covariance_rot)
fi_imp_stxs_mean_prof_dq, fi_imp_stxs_covariance_prof_dq = profile_information(fi_imp_stxs_mean_rot, [0,2], covariance=fi_imp_stxs_covariance_rot)
fi_sally_wbkgs_met_mean_prof_wq, fi_sally_wbkgs_met_covariance_prof_wq = profile_information(fi_sally_wbkgs_met_mean_rot, [1,2], covariance=fi_sally_wbkgs_met_covariance_rot)
fi_sally_wbkgs_2d_mean_prof_wq, fi_sally_wbkgs_2d_covariance_prof_wq = profile_information(fi_sally_wbkgs_2d_mean_rot, [1,2], covariance=fi_sally_wbkgs_2d_covariance_rot)
fi_imp_stxs_mean_prof_wq, fi_imp_stxs_covariance_prof_wq = profile_information(fi_imp_stxs_mean_rot, [1,2], covariance=fi_imp_stxs_covariance_rot)
```
### Plot Comparison of 1D and 2D STXS
Comparison of the STXS, a 2D histogram, ML with two observables, and the "full" information from `SALLY`
```
imp_legend = [
Line2D([0],[0], color=col_sally_met, ls='-', label=r'Full Kin.'),
Line2D([0],[0], color=col_sally_2d, ls=':', label=r'Full 2D dist.'),
Line2D([0],[0], color=col_stxs_5bins, ls='--', label='STXS, stage 1.1'),
Line2D([0],[0], color=col_imp_stxs, ls='-.', label=r'Imp. STXS'),
]
```
#### Projected
```
_, ax = plot_contours(
[
fi_sally_wbkgs_met_mean_dw,
fi_sally_wbkgs_2d_mean_dw,
fi_stxs_5bins_mean_dw,
fi_imp_stxs_mean_dw,
],
colors=[ col_sally_met, col_sally_2d, col_stxs_5bins, col_imp_stxs ],
linestyles=['-',':','--','-.'],
xrange=(-2.75,2.75),
yrange=(-7.5,7.5),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HW}^{\phantom{(3)}}$',
scale_x=1.0,
scale_y=10.,
)
plt.legend(handles=imp_legend, loc='upper left',frameon=False)
plt.text(0.45*ax.get_xlim()[1], -.75*ax.get_ylim()[1], r'$C_{HQ}^{(3)} = 0$', fontsize=16)
plt.text(0.30*ax.get_xlim()[1], -.9*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/imp_contours_dw_proj.eps',format='eps')
_, ax = plot_contours(
[
fi_sally_wbkgs_met_mean_dq,
fi_sally_wbkgs_2d_mean_dq,
fi_stxs_5bins_mean_dq,
fi_imp_stxs_mean_dq,
],
colors=[ col_sally_met, col_sally_2d, col_stxs_5bins, col_imp_stxs ],
linestyles=['-',':','--','-.'],
xrange=(-0.4,0.4),
yrange=(-2.8,2.8),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=1.0,
scale_y=1.,
)
plt.legend(handles=imp_legend, loc='upper left',frameon=False)
plt.text(-1*ax.get_xlim()[1], 1.05*ax.get_ylim()[1], r'$\times 10^{-2}$', fontsize=14)
plt.text(0.45*ax.get_xlim()[1], -.75*ax.get_ylim()[1], r'$C_{HW}^{\phantom{(3)}} = 0$', fontsize=16)
plt.text(0.30*ax.get_xlim()[1], -.9*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/imp_contours_dq_proj.eps',format='eps')
_, ax = plot_contours(
[
fi_sally_wbkgs_met_mean_wq,
fi_sally_wbkgs_2d_mean_wq,
fi_stxs_5bins_mean_wq,
fi_imp_stxs_mean_wq,
],
colors=[ col_sally_met, col_sally_2d, col_stxs_5bins, col_imp_stxs ],
linestyles=['-',':','--','-.'],
xrange=(-1.1,1.1),
yrange=(-2.5,2.5),
contour_distance=2.4476,
xlabel=r'$C_{HW}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=10.0,
scale_y=1.,
)
plt.legend(handles=imp_legend, loc='lower left',frameon=False)
plt.text(-1*ax.get_xlim()[1], 1.05*ax.get_ylim()[1], r'$\times 10^{-2}$', fontsize=14)
plt.text(.48*ax.get_xlim()[1], .8*ax.get_ylim()[1],r'$\tilde{C}_{HD}^{\phantom{(3)}} = 0$',fontsize=16)
plt.text(0.30*ax.get_xlim()[1],.65*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/imp_contours_wq_proj.eps',format='eps')
```
#### Profiled
```
_, ax = plot_contours(
[
fi_sally_wbkgs_met_mean_prof_dw,
fi_sally_wbkgs_2d_mean_prof_dw,
fi_stxs_5bins_mean_prof_dw,
fi_imp_stxs_mean_prof_dw,
],
colors=[ col_sally_met, col_sally_2d, col_stxs_5bins, col_imp_stxs ],
linestyles=['-',':','--','-.'],
xrange=(-3.75,3.75),
yrange=(-11,11),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HW}^{\phantom{(3)}}$',
scale_x=1.0,
scale_y=10.,
)
plt.legend(handles=imp_legend, loc='upper left',frameon=False)
plt.text(0.24*ax.get_xlim()[1], -.75*ax.get_ylim()[1], r'$C_{HQ}^{(3)}$ Profiled', fontsize=16)
plt.text(0.30*ax.get_xlim()[1], -.9*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/imp_contours_dw_prof.eps',format='eps')
_, ax = plot_contours(
[
fi_sally_wbkgs_met_mean_prof_dq,
fi_sally_wbkgs_2d_mean_prof_dq,
fi_stxs_5bins_mean_prof_dq,
fi_imp_stxs_mean_prof_dq,
],
colors=[ col_sally_met, col_sally_2d, col_stxs_5bins, col_imp_stxs ],
linestyles=['-',':','--','-.'],
xrange=(-4,4),
yrange=(-3.5,3.5),
contour_distance=2.4476,
xlabel=r'$\tilde{C}_{HD}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=1.0,
scale_y=1.,
)
plt.legend(handles=imp_legend, loc='lower left',frameon=False)
plt.text(-1*ax.get_xlim()[1], 1.05*ax.get_ylim()[1], r'$\times 10^{-2}$', fontsize=14)
plt.text(0.22*ax.get_xlim()[1], 0.8*ax.get_ylim()[1], r'$C_{HW}^{\phantom{(3)}}$ Profiled', fontsize=16)
plt.text(0.30*ax.get_xlim()[1], .65*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/imp_contours_dq_prof.eps',format='eps')
_, ax = plot_contours(
[
fi_sally_wbkgs_met_mean_prof_wq,
fi_sally_wbkgs_2d_mean_prof_wq,
fi_stxs_5bins_mean_prof_wq,
fi_imp_stxs_mean_prof_wq,
],
colors=[ col_sally_met, col_sally_2d, col_stxs_5bins, col_imp_stxs ],
linestyles=['-',':','--','-.'],
xrange=(-12,12),
yrange=(-3.5,3.5),
contour_distance=2.4476,
xlabel=r'$C_{HW}^{\phantom{(3)}}$',
ylabel=r'$C_{HQ}^{(3)}$',
scale_x=10.0,
scale_y=1.,
)
plt.legend(handles=imp_legend, loc='lower left',frameon=False)
plt.text(-1*ax.get_xlim()[1], 1.05*ax.get_ylim()[1], r'$\times 10^{-2}$', fontsize=14)
plt.text(0.24*ax.get_xlim()[1], 0.8*ax.get_ylim()[1],r'$\tilde{C}_{HD}^{\phantom{(3)}}$ Profiled',fontsize=16)
plt.text(0.30*ax.get_xlim()[1], .65*ax.get_ylim()[1], r'$L = 300\,\mathrm{fb}^{-1}$', fontsize=16)
plt.savefig('plots/imp_contours_wq_prof.eps',format='eps')
```
```
""" Ingest MAPSPAM 2010 data into earthengine
-------------------------------------------------------------------------------
Author: Rutger Hofste
Date: 20190617
Kernel: python36
Docker: rutgerhofste/gisdocker:ubuntu16.04
"""
TESTING = 0
SCRIPT_NAME = "Y2019M06D17_RH_Ingest_MAPSPAM_EE_V01"
OUTPUT_VERSION = 2
NODATA_VALUE = -1
GCS_BUCKET = "aqueduct30_v01"
PREFIX = "Y2019M06D17_RH_MAPSPAM_V01"
EE_OUTPUT_PATH = "projects/WRI-Aquaduct/Y2019M06D17_RH_Ingest_MAPSPAM_EE_V01"
URL_STRUCTURES = "https://raw.githubusercontent.com/wri/MAPSPAM/master/metadata_tables/structure.csv"
URL_TECHS = "https://raw.githubusercontent.com/wri/MAPSPAM/master/metadata_tables/technologies.csv"
URL_CROPS = "https://raw.githubusercontent.com/wri/MAPSPAM/master/metadata_tables/mapspam_names.csv"
URL_UNITS = "https://raw.githubusercontent.com/wri/MAPSPAM/master/metadata_tables/units.csv"
URL_STRUCTURE_B = "https://raw.githubusercontent.com/wri/MAPSPAM/master/metadata_tables/structure_b.csv"
EXTRA_PARAMS = {"script_name":SCRIPT_NAME,
"output_version":OUTPUT_VERSION,
"ingested_by":"Rutger Hofste"}
import time, datetime, sys
dateString = time.strftime("Y%YM%mD%d")
timeString = time.strftime("UTC %H:%M")
start = datetime.datetime.now()
print(dateString,timeString)
sys.version
import os
import subprocess
import pandas as pd
from google.cloud import storage
```
Create the imageCollection manually using the UI.
Add a link to GitHub using the following command (replace the path as needed).
```
#command = "/opt/anaconda3/envs/python35/bin/earthengine asset set -p metadata='https://github.com/wri/MAPSPAM' projects/WRI-Aquaduct/Y2019M06D17_RH_Ingest_MAPSPAM_EE_V01/output_V01/mapspam2010v1r0"
#subprocess.check_output(command,shell=True)
df_structures = pd.read_csv(URL_STRUCTURES)
df_crops = pd.read_csv(URL_CROPS)
df_techs = pd.read_csv(URL_TECHS)
df_units = pd.read_csv(URL_UNITS)
df_structure_b = pd.read_csv(URL_STRUCTURE_B)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/.google.json"
client = storage.Client()
bucket = client.get_bucket(GCS_BUCKET)
blobs = bucket.list_blobs(prefix=PREFIX)
blobs = list(blobs)
if TESTING:
blobs = blobs[0:3]
df_units
def get_structure(variable):
structure = df_structures.loc[df_structures["variable"]==variable]["structure"].iloc[0]
return structure
def get_parameters_structure_a(base):
"""
Obtain a dictionary from the filename using structure a.
`spam_version, extent, variable, mapspam_cropname, technology`
The structure depends on the variable, see mapspam metadata:
https://s3.amazonaws.com/mapspam/2010/v1.0/ReadMe_v1r0_Global.txt
Args:
base(string): the file basename.
Returns:
params(dictionary): dictionary with parameters contained in filename.
"""
structure = "a"
spam_version, extent, variable, mapspam_cropname, technology = base.split("_")
crop_name = df_crops.loc[df_crops["SPAM_name"]==mapspam_cropname]["name"].iloc[0]
crop_type = df_crops.loc[df_crops["SPAM_name"]==mapspam_cropname]["type"].iloc[0]
crop_number = df_crops.loc[df_crops["SPAM_name"]==mapspam_cropname]["crop_number"].iloc[0]
# added later
crop_group = df_crops.loc[df_crops["SPAM_name"]==mapspam_cropname]["crop_group"].iloc[0]
crop_color = df_crops.loc[df_crops["SPAM_name"]==mapspam_cropname]["crop_color"].iloc[0]
crop_group_color = df_crops.loc[df_crops["SPAM_name"]==mapspam_cropname]["crop_group_color"].iloc[0]
technology_full = df_techs.loc[df_techs["technology"]==technology]["technology_full"].iloc[0]
unit = df_units.loc[df_units["variable"]==variable]["unit"].iloc[0]
params = {"structure":structure,
"spam_version":spam_version,
"extent":extent,
"variable":variable,
"crop_name_short":mapspam_cropname,
"technology_short":technology,
"technology_full":technology_full,
"crop_name":crop_name,
"crop_type":crop_type,
"crop_number":crop_number,
"crop_group":crop_group,
"crop_color":crop_color,
"crop_group_color":crop_group_color,
"unit":unit}
params = {**params , **EXTRA_PARAMS}
return params
def get_parameters_structure_b(base):
"""
Obtain a dictionary from the filename using structure b.
`spam_version, extent, variable, technology`
The structure depends on the variable, see mapspam metadata:
https://s3.amazonaws.com/mapspam/2010/v1.0/ReadMe_v1r0_Global.txt
Args:
base(string): the file basename.
Returns:
params(dictionary): dictionary with parameters contained in filename.
"""
structure = "b"
components = base.split("_")
spam_version = components[0]
extent = components[1]
variable = components[2]
technology_list = components[3:]
technology = "_".join(technology_list[:-1])
technology_full = df_structure_b.loc[df_structure_b["technology_base"]==technology]["description"].iloc[0]
unit = df_units.loc[df_units["variable"]==variable]["unit"].iloc[0]
params = {"structure":structure,
"spam_version":spam_version,
"extent":extent,
"variable":variable,
"technology_short":technology,
"technology_full":technology_full,
"unit":unit
}
params = {**params , **EXTRA_PARAMS}
return params
def dictionary_to_EE_upload_command(d):
""" Convert a dictionary to command that can be appended to upload command
-------------------------------------------------------------------------------
Args:
d (dictionary) : Dictionary with metadata. nodata_value
Returns:
command (string) : string to append to upload string.
"""
command = ""
for key, value in d.items():
if key == "nodata_value":
command = command + " --nodata_value={}".format(value)
else:
if isinstance(value, str):
command = command + " -p '(string){}={}'".format(key,value)
else:
command = command + " -p '(number){}={}'".format(key,value)
return command
for blob in blobs:
filename = blob.name.split("/")[-1]
base, extension = filename.split(".")
if extension == "tif":
components = base.split("_")
variable = components[2]
structure = get_structure(variable)
if structure == "a":
params = get_parameters_structure_a(base)
elif structure == "b":
params = get_parameters_structure_b(base)
else:
continue  # unknown filename structure; skip this file rather than abort the loop
params["nodata_value"] = NODATA_VALUE
meta_command = dictionary_to_EE_upload_command(params)
output_ic_name = "mapspam2010v1r0"
image_name = base
destination_path = "{}/output_V{:02d}/{}/{}".format(EE_OUTPUT_PATH,OUTPUT_VERSION,output_ic_name,image_name)
source_path = "gs://aqueduct30_v01/{}".format(blob.name)
command = "/opt/anaconda3/envs/python35/bin/earthengine upload image --asset_id={} {} {}".format(destination_path,meta_command,source_path)
print(command)
subprocess.check_output(command,shell=True)
else:
print("skipping file",filename)
end = datetime.datetime.now()
elapsed = end - start
print(elapsed)
```
previous run:
0:52:23.953127
0:53:01.081204
# Pix2Pix implementation
* `Image-to-Image Translation with Conditional Adversarial Networks`, arXiv:1611.07004
* Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros
* This code is a modified version of the [tensorflow pix2pix example code](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/pix2pix/pix2pix_eager.ipynb).
* This code is implemented using only the low-level `tensorflow` API, not `tf.keras`.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
import numpy as np
import matplotlib.pyplot as plt
import PIL
from IPython.display import clear_output
# Import TensorFlow >= 1.10
import tensorflow as tf
slim = tf.contrib.slim
tf.logging.set_verbosity(tf.logging.INFO)
sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
os.environ["CUDA_VISIBLE_DEVICES"]="0"
```
## Load the dataset
You can download this dataset and similar datasets from here. As mentioned in the paper we apply random jittering and mirroring to the training dataset.
* In random jittering, the image is resized to 286 x 286 and then randomly cropped to 256 x 256
* In random mirroring, the image is randomly flipped horizontally, i.e., left to right.
```
path_to_zip = tf.keras.utils.get_file('facades.tar.gz',
cache_subdir=os.path.abspath('.'),
origin='https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/facades.tar.gz',
extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'facades/')
train_dir = 'train/pix2pix/exp1/'
BUFFER_SIZE = 400
BATCH_SIZE = 1
IMG_WIDTH = 256
IMG_HEIGHT = 256
EPOCHS = 200
print_steps = 200
summary_steps = 400
save_epochs = 100
def load_image(image_file, is_train):
image = tf.read_file(image_file)
image = tf.image.decode_jpeg(image)
w = tf.shape(image)[1]
w = w // 2
real_image = image[:, :w, :]
input_image = image[:, w:, :]
input_image = tf.cast(input_image, tf.float32)
real_image = tf.cast(real_image, tf.float32)
if is_train:
# random jittering
# resizing to 286 x 286 x 3
input_image = tf.image.resize_images(input_image, [286, 286],
align_corners=True,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
real_image = tf.image.resize_images(real_image, [286, 286],
align_corners=True,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# randomly cropping to 256 x 256 x 3
stacked_image = tf.stack([input_image, real_image], axis=0)
cropped_image = tf.random_crop(stacked_image, size=[2, IMG_HEIGHT, IMG_WIDTH, 3])
input_image, real_image = cropped_image[0], cropped_image[1]
# random mirroring: use a TF op for the coin flip so it is evaluated per
# example at run time (np.random here would be evaluated only once, when
# the graph is constructed)
do_flip = tf.random_uniform(()) > 0.5
input_image = tf.cond(do_flip, lambda: tf.image.flip_left_right(input_image), lambda: input_image)
real_image = tf.cond(do_flip, lambda: tf.image.flip_left_right(real_image), lambda: real_image)
else:
input_image = tf.image.resize_images(input_image, size=[IMG_HEIGHT, IMG_WIDTH],
align_corners=True, method=2)
real_image = tf.image.resize_images(real_image, size=[IMG_HEIGHT, IMG_WIDTH],
align_corners=True, method=2)
# normalizing the images to [-1, 1]
input_image = (input_image / 127.5) - 1
real_image = (real_image / 127.5) - 1
return input_image, real_image
```
## Use tf.data to create batches, apply preprocessing with map, and shuffle the dataset
```
train_dataset = tf.data.Dataset.list_files(PATH+'train/*.jpg')
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.map(lambda x: load_image(x, True))
train_dataset = train_dataset.repeat(count=EPOCHS)
train_dataset = train_dataset.batch(1)
test_dataset = tf.data.Dataset.list_files(PATH+'test/*.jpg')
test_dataset = test_dataset.map(lambda x: load_image(x, False))
test_dataset = test_dataset.batch(1)
```
## Write the generator and discriminator models
* Generator
* The architecture of generator is a modified U-Net.
* Each block in the encoder is (Conv -> Batchnorm -> Leaky ReLU)
* Each block in the decoder is (Transposed Conv -> Batchnorm -> Dropout(applied to the first 3 blocks) -> ReLU)
* There are skip connections between the encoder and decoder (as in U-Net).
* Discriminator
* The Discriminator is a PatchGAN.
* Each block in the discriminator is (Conv -> BatchNorm -> Leaky ReLU)
* The shape of the output after the last layer is (batch_size, 30, 30, 1)
* Each 30x30 patch of the output classifies a 70x70 portion of the input image (such an architecture is called a PatchGAN).
* Discriminator receives 2 inputs.
* Input image and the target image, which it should classify as real.
* Input image and the generated image (output of generator), which it should classify as fake.
* We concatenate these 2 inputs together in the code (tf.concat([inp, tar], axis=-1))
* Shape of the input travelling through the generator and the discriminator is in the comments in the code.
To learn more about the architecture and the hyperparameters, you can refer to the paper.
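As a quick sanity check on the encoder described above (a sketch, separate from the model code): each encoder block halves the spatial resolution with a stride-2 convolution, so the eight blocks `down1`..`down8` take a 256 x 256 input down to 1 x 1.

```
# Spatial size after each of the 8 stride-2 encoder blocks, for a 256 x 256 input.
size = 256
encoder_sizes = []
for _ in range(8):
    size //= 2
    encoder_sizes.append(size)

print(encoder_sizes)  # [128, 64, 32, 16, 8, 4, 2, 1]
```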
```
class Generator(object):
def __init__(self, is_training=True, scope=None):
self.scope = scope
self.is_training = is_training
self.OUTPUT_CHANNELS = 3
self.batch_norm_params = {'is_training': self.is_training,
'scope': 'batch_norm'}
def upsample(self, x1, x2, num_outputs, apply_dropout=False, scope=None):
with tf.variable_scope(scope) as scope:
with slim.arg_scope([slim.conv2d_transpose],
kernel_size=[4, 4],
stride=[2, 2],
normalizer_fn=slim.batch_norm,
normalizer_params=self.batch_norm_params):
up = slim.conv2d_transpose(x1, num_outputs)
if apply_dropout:
up = slim.dropout(up, is_training=self.is_training)
output = tf.concat([up, x2], axis=-1)
return output
def __call__(self, x, reuse=False):
with tf.variable_scope('Generator/' + self.scope, reuse=reuse) as scope:
with slim.arg_scope([slim.conv2d],
kernel_size=[4, 4],
stride=[2, 2],
activation_fn=tf.nn.leaky_relu,
normalizer_fn=slim.batch_norm,
normalizer_params=self.batch_norm_params):
# Encoding part
self.down1 = slim.conv2d(x, 64, normalizer_fn=None, scope='down1')
self.down2 = slim.conv2d(self.down1, 128, scope='down2')
self.down3 = slim.conv2d(self.down2, 256, scope='down3')
self.down4 = slim.conv2d(self.down3, 512, scope='down4')
self.down5 = slim.conv2d(self.down4, 512, scope='down5')
self.down6 = slim.conv2d(self.down5, 512, scope='down6')
self.down7 = slim.conv2d(self.down6, 512, scope='down7')
self.down8 = slim.conv2d(self.down7, 512, scope='down8')
# Decoding part
self.up8 = self.upsample(self.down8, self.down7, 512, apply_dropout=True, scope='up8')
self.up7 = self.upsample(self.up8, self.down6, 512, apply_dropout=True, scope='up7')
self.up6 = self.upsample(self.up7, self.down5, 512, apply_dropout=True, scope='up6')
self.up5 = self.upsample(self.up6, self.down4, 512, scope='up5')
self.up4 = self.upsample(self.up5, self.down3, 256, scope='up4')
self.up3 = self.upsample(self.up4, self.down2, 128, scope='up3')
self.up2 = self.upsample(self.up3, self.down1, 64, scope='up2')
self.last = slim.conv2d_transpose(self.up2, self.OUTPUT_CHANNELS, [4, 4],
stride=[2, 2],
activation_fn=tf.nn.tanh,
scope='up1')
return self.last
class Discriminator(object):
def __init__(self, is_training=True, scope=None):
self.scope = scope
self.is_training = is_training
self.batch_norm_params = {'is_training': self.is_training,
'scope': 'batch_norm'}
def __call__(self, inputs, targets, reuse=False):
with tf.variable_scope('Discriminator/' + self.scope, reuse=reuse) as scope:
with slim.arg_scope([slim.conv2d],
kernel_size=[4, 4],
stride=[2, 2],
activation_fn=tf.nn.leaky_relu,
normalizer_fn=slim.batch_norm,
normalizer_params=self.batch_norm_params):
self.x = tf.concat([inputs, targets], axis=-1)
self.down1 = slim.conv2d(self.x, 64, normalizer_fn=None, scope='down1')
self.down2 = slim.conv2d(self.down1, 128, scope='down2')
self.down3 = slim.conv2d(self.down2, 256, scope='down3')
self.down3 = tf.pad(self.down3, tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]]))
self.down4 = slim.conv2d(self.down3, 512, stride=1, padding='VALID', scope='down4')
self.down4 = tf.pad(self.down4, tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]]))
self.last = slim.conv2d(self.down4, 1, stride=1, padding='VALID', activation_fn=None, scope='last')  # (batch_size, 30, 30, 1)
return self.last
class Pix2Pix(object):
def __init__(self, mode, train_dataset, test_dataset):
assert mode in ["train", "translate"]
self.mode = mode
self.LAMBDA = 100
self.train_dataset = train_dataset
self.test_dataset = test_dataset
def build_images(self):
# output_shapes of tf.data.Iterator.from_string_handle defaults to None, but it is best to set it explicitly
self.handle = tf.placeholder(tf.string, shape=[])
self.iterator = tf.data.Iterator.from_string_handle(self.handle,
self.train_dataset.output_types,
self.train_dataset.output_shapes)
self.input_image, self.target = self.iterator.get_next()
def discriminator_loss(self, disc_real_output, disc_generated_output):
real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.ones_like(disc_real_output),
logits = disc_real_output)
generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.zeros_like(disc_generated_output),
logits = disc_generated_output)
total_disc_loss = real_loss + generated_loss
return total_disc_loss
def generator_loss(self, disc_generated_output, gen_output, target):
gan_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.ones_like(disc_generated_output),
logits = disc_generated_output)
# mean absolute error
l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
total_gen_loss = gan_loss + (self.LAMBDA * l1_loss)
return total_gen_loss
def build(self):
self.global_step = slim.get_or_create_global_step()
if self.mode == "translate":
pass
else:
self.build_images()
# Create generator and discriminator class
generator = Generator(is_training=True, scope='g')
discriminator = Discriminator(is_training=True, scope='d')
self.gen_output = generator(self.input_image)
self.disc_real_output = discriminator(self.input_image, self.target)
self.disc_generated_output = discriminator(self.input_image, self.gen_output, reuse=True)
self.gen_loss = self.generator_loss(self.disc_generated_output, self.gen_output, self.target)
self.disc_loss = self.discriminator_loss(self.disc_real_output, self.disc_generated_output)
self.g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Generator')
self.d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Discriminator')
print("complete model build.")
```
## Define the loss functions and the optimizer
* Discriminator loss
* The discriminator loss function takes 2 inputs; real images, generated images
* real_loss is a sigmoid cross entropy loss of the real images and an array of ones(since these are the real images)
* generated_loss is a sigmoid cross entropy loss of the generated images and an array of zeros(since these are the fake images)
* Then the total_loss is the sum of real_loss and the generated_loss
* Generator loss
* It is a sigmoid cross entropy loss of the generated images and an array of ones.
* The paper also includes L1 loss which is MAE (mean absolute error) between the generated image and the target image.
* This allows the generated image to become structurally similar to the target image.
* The total generator loss = gan_loss + LAMBDA * l1_loss, where LAMBDA = 100; this value was chosen by the authors of the paper.
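The generator loss above can be sketched in plain NumPy with toy values (the helper below re-implements the mean sigmoid cross entropy purely for illustration; the training code itself uses `tf.losses.sigmoid_cross_entropy`):

```
import numpy as np

def sigmoid_cross_entropy(labels, logits):
    # numerically stable mean sigmoid cross entropy
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

LAMBDA = 100.0
disc_generated_output = np.array([0.0, 2.0, -1.0])  # toy discriminator logits
gen_output = np.array([0.1, -0.2, 0.3])             # toy generated pixels
target = np.array([0.0, 0.0, 0.0])                  # toy target pixels

# gan loss: generated images scored against an array of ones
gan_loss = sigmoid_cross_entropy(np.ones_like(disc_generated_output),
                                 disc_generated_output)
l1_loss = np.mean(np.abs(target - gen_output))  # mean absolute error
total_gen_loss = gan_loss + LAMBDA * l1_loss
```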
## Checkpoints (Object-based saving)
```
#checkpoint_dir = './training_checkpoints'
#checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
#checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
# discriminator_optimizer=discriminator_optimizer,
# generator=generator,
# discriminator=discriminator)
```
## Training
* We start by iterating over the dataset
* The generator gets the input image and we get a generated output.
* The discriminator receives two pairs of inputs: the input_image with the generated image (which it should classify as fake), and the input_image with the target_image (which it should classify as real).
* Next, we calculate the generator and the discriminator loss.
* Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables(inputs) and apply those to the optimizer.
## Generate Images
* After training, it's time to generate some images!
* We pass images from the test dataset to the generator.
* The generator will then translate the input image into the output we expect.
* The last step is to plot the predictions, and voila!
```
def print_images(test_input, tar, prediction):
# Note: the model is built with is_training=True on purpose, so that batch
# norm uses the statistics of the current test batch when running on the
# test dataset, rather than the accumulated statistics learned from the
# training dataset (which we don't want)
plt.figure(figsize=(15,15))
display_list = [test_input[0], tar[0], prediction[0]]
title = ['Input Image', 'Ground Truth', 'Predicted Image']
for i in range(3):
plt.subplot(1, 3, i+1)
plt.title(title[i])
# getting the pixel values between [0, 1] to plot it.
plt.imshow(display_list[i] * 0.5 + 0.5)
plt.axis('off')
plt.show()
model = Pix2Pix(mode="train", train_dataset=train_dataset, test_dataset=test_dataset)
model.build()
# show info for trainable variables
t_vars = tf.trainable_variables()
slim.model_analyzer.analyze_vars(t_vars, print_info=True)
opt_D = tf.train.AdamOptimizer(learning_rate=2e-4, beta1=0.5)
opt_G = tf.train.AdamOptimizer(learning_rate=2e-4, beta1=0.5)
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope='Discriminator')):
opt_D_op = opt_D.minimize(model.disc_loss, var_list=model.d_vars)
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope='Generator')):
opt_G_op = opt_G.minimize(model.gen_loss, global_step=model.global_step,
var_list=model.g_vars)
saver = tf.train.Saver(tf.global_variables(), max_to_keep=1000)
#with tf.Session(config=sess_config) as sess:
sess = tf.Session(config=sess_config)
sess.run(tf.global_variables_initializer())
tf.logging.info('Start Session.')
num_examples = 400
num_batches_per_epoch = int(num_examples / BATCH_SIZE)
#train_iterator = train_dataset.make_initializable_iterator()
train_iterator = train_dataset.make_one_shot_iterator()
train_handle = sess.run(train_iterator.string_handle())
test_iterator = test_dataset.make_initializable_iterator()
test_handle = sess.run(test_iterator.string_handle())
# save loss values for plot
loss_history = []
pre_epochs = 0
while True:
try:
start_time = time.time()
#for _ in range(k):
for _ in range(1):
_, loss_D = sess.run([opt_D_op, model.disc_loss],
feed_dict={model.handle: train_handle})
_, global_step_, loss_G = sess.run([opt_G_op,
model.global_step,
model.gen_loss],
feed_dict={model.handle: train_handle})
epochs = global_step_ * BATCH_SIZE / float(num_examples)
duration = time.time() - start_time
if global_step_ % print_steps == 0:
clear_output(wait=True)
examples_per_sec = BATCH_SIZE / float(duration)
print("Epochs: {:.2f} global_step: {} loss_D: {:.3f} loss_G: {:.3f} ({:.2f} examples/sec; {:.3f} sec/batch)".format(
epochs, global_step_, loss_D, loss_G, examples_per_sec, duration))
loss_history.append([epochs, loss_D, loss_G])
# print sample image
sess.run(test_iterator.initializer)
test_input, tar, prediction = sess.run([model.input_image, model.target, model.gen_output],
feed_dict={model.handle: test_handle})
print_images(test_input, tar, prediction)
# write summaries periodically
#if global_step_ % summary_steps == 0:
# summary_str = sess.run(summary_op)
# train_writer.add_summary(summary_str, global_step=global_step_)
# save model checkpoint periodically
if int(epochs) % save_epochs == 0 and pre_epochs != int(epochs):
tf.logging.info('Saving model with global step {} (= {} epochs) to disk.'.format(global_step_, int(epochs)))
saver.save(sess, train_dir + 'model.ckpt', global_step=global_step_)
pre_epochs = int(epochs)
except tf.errors.OutOfRangeError:
print("End of dataset") # ==> "End of dataset"
tf.logging.info('Saving model with global step {} (= {} epochs) to disk.'.format(global_step_, int(epochs)))
saver.save(sess, train_dir + 'model.ckpt', global_step=global_step_)
break
tf.logging.info('complete training...')
```
## Restore the latest checkpoint and test
```
# restoring the latest checkpoint in checkpoint_dir
#checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
```
## Testing on the entire test dataset
```
#with tf.Session(config=sess_config) as sess:
test_iterator = test_dataset.make_initializable_iterator()
test_handle = sess.run(test_iterator.string_handle())
sess.run(test_iterator.initializer)
test_input, tar, prediction = sess.run([model.input_image, model.target, model.gen_output],
feed_dict={model.handle: test_handle})
print_images(test_input, tar, prediction)
```
| github_jupyter |
```
import re
import nltk
from sklearn.feature_extraction.text import CountVectorizer

text_pos = []
labels_pos = []
with open("./pos_tweets.txt") as f:
    for i in f:
        text_pos.append(i)
        labels_pos.append('pos')

text_neg = []
labels_neg = []
with open("./neg_tweets.txt") as f:
    for i in f:
        text_neg.append(i)
        labels_neg.append('neg')

training_text = text_pos[:int((.8)*len(text_pos))] + text_neg[:int((.8)*len(text_neg))]
training_labels = labels_pos[:int((.8)*len(labels_pos))] + labels_neg[:int((.8)*len(labels_neg))]
test_text = text_pos[int((.8)*len(text_pos)):] + text_neg[int((.8)*len(text_neg)):]
test_labels = labels_pos[int((.8)*len(labels_pos)):] + labels_neg[int((.8)*len(labels_neg)):]

vectorizer = CountVectorizer(
    analyzer='word',
    lowercase=False,
    max_features=85
)
features = vectorizer.fit_transform(
    training_text + test_text)
features_nd = features.toarray()  # for easy use

# sklearn.cross_validation was removed in scikit-learn 0.20; use model_selection
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    features_nd[0:len(training_text)],
    training_labels,
    train_size=0.80,
    random_state=1234)

from sklearn.linear_model import LogisticRegression
log_model = LogisticRegression()
log_model = log_model.fit(X=X_train, y=y_train)
y_pred = log_model.predict(X_test)

from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))

from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(random_state=0)
rf_clf.fit(X_train, y_train)
y_pred = rf_clf.predict(X_test)
print(accuracy_score(y_test, y_pred))

from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))

print(X_train.shape)
print(X_test.shape)
print(len(y_train))
print(len(y_test))

# Artificial Neural Networks
from keras.models import Sequential
from keras.layers import *

model = Sequential()
model.add(Dense(32, activation='relu', input_dim=85))
model.add(Dense(64, activation='relu'))
#model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

from sklearn.preprocessing import LabelEncoder
import keras
lb = LabelEncoder()
train_labels = lb.fit_transform(y_train)
# transform (not fit_transform) so the test labels reuse the training encoding
test_labels = lb.transform(y_test)
#train_labels = keras.utils.np_utils.to_categorical(train_labels)
model.fit(X_train, train_labels, batch_size=32, epochs=50, validation_data=(X_test, test_labels), verbose=1)

y_pred = model.predict_classes(X_test)
# classification_report expects the true labels as the first argument
print(classification_report(test_labels, y_pred))
```
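The `CountVectorizer` step above can be approximated in plain Python to make the bag-of-words idea concrete. This is a simplified sketch: the real `CountVectorizer` also handles tokenization rules, sorted vocabulary ordering, and sparse output, so treat the function names and the tie-breaking rule here as illustrative assumptions.

```python
from collections import Counter

def fit_vocabulary(texts, max_features):
    """Keep the max_features most frequent tokens across all texts."""
    counts = Counter(token for text in texts for token in text.split())
    # Rank by frequency (descending), then alphabetically for a stable order.
    ranked = sorted(counts, key=lambda t: (-counts[t], t))
    return ranked[:max_features]

def transform(texts, vocabulary):
    """One count vector per text, columns in vocabulary order."""
    return [[text.split().count(term) for term in vocabulary] for text in texts]

docs = ["good good movie", "bad movie", "good plot"]
vocab = fit_vocabulary(docs, max_features=3)
features = transform(docs, vocab)
```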
<a href="https://colab.research.google.com/github/MarvelAmazon/ReadStataFile/blob/main/NLP_job.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## How you can describe the domain of a job description with NLP
### 1. Introduction
In Haiti, finding a job that matches your field of expertise is very difficult. We often see a **nurse** working in a **factory**, an **engineer** working as a **planner**, or a **doctor** recast as an **administrator**, even though this is not their field of expertise and they never trained for those skills. They sometimes sit behind a desk learning the trade on the job, like good apprentices, relying on sharp wits so as not to deliver poor results.
<br/>
<br/>
Often they do the job so well that they deserve a hats-off along with hearty congratulations. You would have to be quite bold to say this is not the trade they learned.
But this creates an imbalance between skills and jobs, and it sometimes leaves a bitter taste for professionals who would like to practice the trade they actually trained for but lack the audacity to apply for a job outside their field of expertise.
Having just finished learning **#DeepLearning**, **#NLP**, **#Tensorflow**, **#LSTM**, and **#RNN** with **#coursera**, supported by **#AyitiAnalytics** (**#AA**), I would like to look at the similarities between different jobs and which of my skills tie into two (2) or more different job categories, using only the words of the job descriptions themselves.
To do this, we will start by installing the following libraries.
### Start by gathering the data you will need
```
# Libraries we need to install for this work
!pip install beautifulsoup4  # library for parsing data in HTML format
```
```
import pandas as pd  # library for data manipulation
import numpy as np  # library for fast matrix computations
from bs4 import BeautifulSoup  # library for parsing data in HTML format
import requests  # library for sending HTTP requests
```
I start by defining the data and variables I will need for this project:
* URL: stores the page link used for the request
* id: the job number on jobpaw
```
# Start with a single job id
id = 13037
url = f'https://www.jobpaw.com/pont/professionnels.php?idj={id}'
url
```
Now let's create a function that lets us fetch a job from jobpaw quickly.
`jwen_job_pa_id`: fetches a job by its id.
```
def jwen_job_pa_id(id=12):
    url = f'https://www.jobpaw.com/pont/professionnels.php?idj={id}'
    response = requests.get(url)
    jobpaw_soup = None
    # defined up front so the function still returns a dict on failed requests
    my_dict = dict()
    my_dict["id"] = id
    if response.status_code == 200:
        jobpaw_soup = BeautifulSoup(response.text, 'html.parser')
        body = jobpaw_soup.body
        tables = body.find_all('table')
        trs = tables[0].find_all('tr')
        for tr in trs:
            tds = tr.find_all("td")
            if len(tds) < 4:
                for index in range(0, len(tds)//2, 2):
                    if len(str(tds[index])) != 0:
                        key = tds[index].text
                        value = tds[index+1].text
                        my_dict[key] = value
                for index in range(len(tds)//2, len(tds)-1, 2):
                    if len(str(tds[index])) != 0:
                        key = tds[index].text
                        value = tds[index+1].text
                        my_dict[key] = value
        trs = tables[2].find_all('tr')
        tds = trs[0].find_all("td")
        my_dict["content"] = tds[0].text.strip()
    return my_dict

# a quick test
job_test = jwen_job_pa_id()
print(job_test['content'])
```
Now that we have defined the function we need, let's look at how we can extract all of these jobs from the site.
```
def generate_segment(start=11, stop=13312, step=500):
    low = start
    segments = []
    for high in np.arange(start=start+step, stop=stop, step=step):
        segments.append((low, high))
        low = high
    if segments[-1][1] < stop:
        segments.append((segments[-1][1], stop))
    return segments

import random
user_agent_list = [
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Safari/605.1.15',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:77.0) Gecko/20100101 Firefox/77.0',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36',
]

generate_segment(start=11, stop=13038, step=5000)

import multiprocessing
import concurrent.futures
from google.colab import drive
drive.mount('/content/drive')
jobpaw_files_paths = '/content/drive/MyDrive/JobpawData'

import os
import time
os.makedirs(jobpaw_files_paths, exist_ok=True)  # don't fail if the folder already exists

def get_jobpaw_job(start, stop, prod=True):
    my_list = []
    for index in np.arange(start, stop=stop+1):
        my_dict = {}
        if prod:
            try:
                my_dict = jwen_job_pa_id(index)
                my_list.append(my_dict)
                print(f"Page {index} done")
            except:
                print(f"Page {index} failed")
    df_data = pd.DataFrame(my_list)
    df_data.to_json(f"{jobpaw_files_paths}/job_{start}_{stop}.json")
    time.sleep(.3)

get_jobpaw_job(start=11, stop=12)

segments = generate_segment(start=11, stop=13038, step=200)
processes = []
for segment in segments:
    p = multiprocessing.Process(target=get_jobpaw_job, args=[
        segment[0], segment[1]+1])
    p.start()
    processes.append(p)
# join only after every worker has started, so the downloads run in parallel
for p in processes:
    p.join()

list_df = []
for segment in segments:
    list_df.append(pd.read_json(f"{jobpaw_files_paths}/job_{segment[0]}_{segment[1]+1}.json"))
df_final = pd.concat(list_df, axis=0)
df_final.to_csv("job.csv")
```
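The segmentation logic of `generate_segment` can be sketched without numpy. This illustrative, pure-Python re-statement produces the same (low, high) pairs; the guard against an empty list is an addition for the degenerate case where the range fits in one step:

```python
def generate_segment(start=11, stop=13312, step=500):
    """Split [start, stop] into consecutive (low, high) pairs of width `step`,
    with a final shorter segment reaching `stop` if needed."""
    segments = []
    low = start
    for high in range(start + step, stop, step):
        segments.append((low, high))
        low = high
    # added guard: handle the case where no full segment fits
    if not segments:
        segments.append((start, stop))
    elif segments[-1][1] < stop:
        segments.append((segments[-1][1], stop))
    return segments

segs = generate_segment(start=0, stop=10, step=4)
```

Each segment starts where the previous one ends, so the workers cover the id range without gaps.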
## Probability distributions
Probability distributions are the backbone of uncertainty quantification.
Creating a probability distribution in `chaospy` is done as follows:
```
import chaospy
normal = chaospy.Normal(mu=2, sigma=2)
normal
```
The distribution has a few methods the user can call, with names and syntax
very similar to those of
[scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html). Below
some of these methods are demonstrated. For a full overview of the
distribution methods, see
[chaospy.Distribution](../../api/chaospy.Distribution.rst).
For an overview of available distributions, take a look at the
[collection listed in the reference](../../reference/distribution/collection.rst).
### (Pseudo-)random samples
The most important capability of a random variable is generating
(pseudo-)random samples. These can be created using
[chaospy.Distribution.sample()](../../api/chaospy.Distribution.sample.rst#chaospy.Distribution.sample):
```
samples = normal.sample(4, seed=1234)
samples
```
These can be used to create e.g. histograms:
```
from matplotlib import pyplot
pyplot.hist(normal.sample(10000, seed=1234), 30)
pyplot.show()
```
The input can be either an integer or a sequence of integers. For
example:
```
normal.sample([2, 2], seed=1234)
```
### Random seed
Note that the `seed` parameter was passed to ensure reproducibility. In
addition to this flag, all `chaospy` distributions respect `numpy`'s
random seed, so sample generation can also be done as follows:
```
import numpy
numpy.random.seed(1234)
normal.sample(4)
```
### Probability density function
The probability density function is a function whose value at any given
sample in the sample space can be interpreted as the relative likelihood
that the random variable equals that sample.
This method is available through
[chaospy.Distribution.pdf()](../../api/chaospy.Distribution.pdf.rst):
```
normal.pdf([-2, 0, 2])
q_loc = numpy.linspace(-4, 8, 200)
pyplot.plot(q_loc, normal.pdf(q_loc))
pyplot.show()
```
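The values returned by `normal.pdf` follow the familiar Gaussian density formula. As an illustrative sanity check, here is a pure-Python version (not chaospy's implementation) evaluated for the same Normal(mu=2, sigma=2) distribution at the same points:

```python
import math

def normal_pdf(x, mu=2.0, sigma=2.0):
    """Gaussian probability density function."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

values = [normal_pdf(q) for q in (-2, 0, 2)]
```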
### Cumulative probability function
The cumulative distribution function defines the probability that a random
variable is at most the argument value provided. This method is available
through
[chaospy.Distribution.cdf()](../../api/chaospy.Distribution.cdf.rst):
```
normal.cdf([-2, 0, 2])
pyplot.plot(q_loc, normal.cdf(q_loc))
pyplot.show()
```
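The same CDF values can be reproduced in pure Python through the error function, since for a Gaussian F(x) = (1 + erf((x - mu) / (sigma * sqrt(2)))) / 2. Again, this is an illustrative check, not chaospy's code:

```python
import math

def normal_cdf(x, mu=2.0, sigma=2.0):
    """Gaussian cumulative distribution function via math.erf."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

values = [normal_cdf(q) for q in (-2, 0, 2)]
```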
### Statistical moments
The moments of a random variable condense important descriptive information
about the variable into single scalars. The raw moments are the building
blocks for these descriptive statistics. The moments are available
through
[chaospy.Distribution.mom()](../../api/chaospy.Distribution.mom.rst):
```
normal.mom([0, 1, 2])
```
Not all random variables have closed-form raw moments, but for those the
raw moments are estimated using quadrature integration. This makes moments
available for all distributions. The approximation can be invoked
explicitly through
[chaospy.approximate_moment()](../../api/chaospy.approximate_moment.rst):
```
chaospy.approximate_moment(normal, [2])
```
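For this particular distribution the raw moments have closed forms (E[X^0] = 1, E[X] = mu, E[X^2] = mu^2 + sigma^2), so a Monte Carlo estimate in pure Python should land near them. This is an illustrative check of what `mom` returns, assuming those standard formulas:

```python
import random

mu, sigma = 2.0, 2.0
exact = [1.0, mu, mu**2 + sigma**2]  # raw moments E[X^0], E[X^1], E[X^2]

random.seed(1234)
samples = [random.gauss(mu, sigma) for _ in range(100_000)]
estimates = [sum(x**k for x in samples) / len(samples) for k in range(3)]
```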
See [quadrature integration](./quadrature_integration.ipynb) for more details on how this
is done in practice.
Central moments can be accessed through wrapper functions. The first four
central moments of our random variable are:
```
(chaospy.E(normal), chaospy.Var(normal),
chaospy.Skew(normal), chaospy.Kurt(normal))
```
See [descriptive statistics](./descriptive_statistics.ipynb) for details on
the functions extracting metrics of interest from random variables.
### Truncation
In the [collection of distributions](../../reference/distribution/collection.rst) some
distributions are truncated by default. For those that are not, which is the
majority of distributions, truncation can be invoked using
[chaospy.Trunc()](../../api/chaospy.Trunc.rst). It supports one-sided
truncation:
```
normal_trunc = chaospy.Trunc(normal, upper=4)
pyplot.plot(q_loc, normal_trunc.pdf(q_loc))
pyplot.show()
```
and two-sided truncation:
```
normal_trunc2 = chaospy.Trunc(normal, lower=-1, upper=5)
pyplot.plot(q_loc, normal_trunc2.pdf(q_loc))
pyplot.show()
```
### Multivariate variables
`chaospy` also supports joint random variables. Some have their own
constructors defined in the
[collection of distributions](../../reference/distribution/collection.rst).
More practically, multivariate variables can be constructed from univariate
ones through [chaospy.J](../../api/chaospy.J.rst):
```
normal_gamma = chaospy.J(chaospy.Normal(0, 1), chaospy.Gamma(1))
```
The multivariate variables have the same functionality as the univariate
ones, except that the inputs and outputs of the methods
[chaospy.Distribution.sample](../../api/chaospy.Distribution.sample.rst),
[chaospy.Distribution.pdf()](../../api/chaospy.Distribution.pdf.rst)
and
[chaospy.Distribution.cdf()](../../api/chaospy.Distribution.cdf.rst)
assume an extra axis for dimensions. For example:
```
pyplot.rc("figure", figsize=[12, 4])
pyplot.subplot(131)
pyplot.title("random scatter")
pyplot.scatter(*normal_gamma.sample(1000, seed=1000), marker="x")
pyplot.subplot(132)
pyplot.title("probability density")
grid = numpy.mgrid[-3:3:100j, 0:4:100j]
pyplot.contourf(grid[0], grid[1], normal_gamma.pdf(grid), 50)
pyplot.subplot(133)
pyplot.title("cumulative distribution")
pyplot.contourf(grid[0], grid[1], normal_gamma.cdf(grid), 50)
pyplot.show()
```
### Rosenblatt transformation
One of the more sure-fire ways to create random variables is to first
generate classical uniform samples and then use an inverse transformation to
map them to the desired distribution. In one dimension, this mapping
is the inverse of the cumulative distribution function, and is available as
[chaospy.Distribution.ppf()](../../api/chaospy.Distribution.ppf.rst):
```
pyplot.subplot(121)
pyplot.title("standard uniform")
u_samples = chaospy.Uniform(0, 1).sample(10000, seed=1234)
pyplot.hist(u_samples, 30)
pyplot.subplot(122)
pyplot.title("transformed normal")
q_samples = normal.inv(u_samples)
pyplot.hist(q_samples, 30)
pyplot.show()
```
Note that while `u_samples` and `q_samples` each consist of independent,
identically distributed samples, the joint set `(u_samples, q_samples)` does
not. In fact, the two are highly dependent, following the shape of the
normal cumulative distribution function:
```
pyplot.subplot(121)
pyplot.title("coupled samples")
pyplot.scatter(q_samples, u_samples)
pyplot.subplot(122)
pyplot.title("normal cumulative distribution")
pyplot.plot(q_loc, normal.cdf(q_loc))
pyplot.show()
```
This idea also generalizes to the multivariate case. There the mapping
function is called an inverse Rosenblatt transformation $T^{-1}$, and is
defined in terms of conditional distribution functions:
$$
T^{-1}(q_0, q_1, q_2, \dots) =
\left[ F^{-1}_{Q_0}(q_0),
F^{-1}_{Q_1\mid Q_0}(q_1),
F^{-1}_{Q_2\mid Q_1,Q_0}(q_2), \dots \right]
$$
And likewise a forward Rosenblatt transformation is defined as:
$$
T(q_0, q_1, q_2, \dots) =
\left[ F_{Q_0}(q_0),
F_{Q_1\mid Q_0}(q_1),
F_{Q_2\mid Q_1,Q_0}(q_2), \dots \right]
$$
These functions can be used to map samples from the standard multivariate
uniform distribution to a distribution of interest, and vice versa.
In `chaospy` these methods are available through
[chaospy.Distribution.inv()](../../api/chaospy.Distribution.inv.rst)
and
[chaospy.Distribution.fwd()](../../api/chaospy.Distribution.fwd.rst):
```
pyplot.subplot(121)
pyplot.title("standard uniform")
uu_samples = chaospy.Uniform(0, 1).sample((2, 500), seed=1234)
pyplot.scatter(*uu_samples)
pyplot.subplot(122)
pyplot.title("transformed normal-gamma")
qq_samples = normal_gamma.inv(uu_samples)
pyplot.scatter(*qq_samples)
pyplot.show()
```
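The inverse-transform idea works for any distribution with an invertible CDF. As an illustration outside chaospy, this pure-Python sketch maps standard uniform samples to an exponential distribution via its inverse CDF, q = -ln(1 - u) / rate (the rate value is arbitrary):

```python
import math
import random

def exponential_inv(u, rate=1.5):
    """Inverse CDF of the exponential distribution: F^-1(u) = -ln(1-u)/rate."""
    return -math.log(1.0 - u) / rate

random.seed(1234)
u_samples = [random.random() for _ in range(100_000)]
q_samples = [exponential_inv(u) for u in u_samples]
# The transformed samples should have mean close to 1/rate.
mean_estimate = sum(q_samples) / len(q_samples)
```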
### User-defined distributions
The [collection of distributions](../../reference/distribution/collection.rst) contains a
lot of distributions, but if one needs something custom, `chaospy` allows for
the construction of user-defined distributions through
[chaospy.UserDistribution](../../api/chaospy.UserDistribution.rst).
These can be constructed by providing three functions: a cumulative
distribution function, a lower bounds function, and an upper bounds function.
As an illustrative example, let us recreate the uniform distribution:
```
def cdf(x_loc, lo, up):
"""Cumulative distribution function."""
return (x_loc-lo)/(up-lo)
def lower(lo, up):
"""Lower bounds function."""
return lo
def upper(lo, up):
"""Upper bounds function."""
return up
```
The user-defined distribution takes these functions and a dictionary with the
parameter defaults as part of its initialization:
```
user_distribution = chaospy.UserDistribution(
cdf=cdf, lower=lower, upper=upper, parameters=dict(lo=-1, up=1))
```
The distribution can then be used in the same way as any other
[chaospy.Distribution](../../api/chaospy.Distribution.rst):
```
pyplot.subplot(131)
pyplot.title("binned random samples")
pyplot.hist(user_distribution.sample(10000), 30)
pyplot.subplot(132)
pyplot.title("probability density")
x_loc = numpy.linspace(-2, 2, 200)
pyplot.plot(x_loc, user_distribution.pdf(x_loc))
pyplot.subplot(133)
pyplot.title("cumulative distribution")
pyplot.plot(x_loc, user_distribution.cdf(x_loc))
pyplot.show()
```
Alternatively, it is possible to define the same distribution using the
cumulative distribution and point percentile functions, without the bounds:
```
def ppf(q_loc, lo, up):
"""Point percentile function."""
return q_loc*(up-lo)+lo
user_distribution = chaospy.UserDistribution(
cdf=cdf, ppf=ppf, parameters=dict(lo=-1, up=1))
```
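Because `cdf` and `ppf` are inverses of one another, the two definitions must agree. A quick pure-Python roundtrip check with the same `lo=-1, up=1` defaults:

```python
def cdf(x_loc, lo, up):
    """Cumulative distribution function of Uniform(lo, up)."""
    return (x_loc - lo) / (up - lo)

def ppf(q_loc, lo, up):
    """Point percentile (inverse CDF) function of Uniform(lo, up)."""
    return q_loc * (up - lo) + lo

lo, up = -1.0, 1.0
# cdf(ppf(q)) should recover q exactly for any probability level q.
roundtrips = [cdf(ppf(q, lo, up), lo, up) for q in (0.0, 0.25, 0.5, 1.0)]
```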
In addition to the required fields, there are a few optional ones. These do
not provide new functionality, but they allow for increased accuracy and/or
lower computational cost in the operations where they are used. They include
raw statistical moments, which are used by
[chaospy.Distribution.mom()](../../api/chaospy.Distribution.rst):
```
def mom(k_loc, lo, up):
"""Raw statistical moment."""
return (up**(k_loc+1)-lo**(k_loc+1))/(k_loc+1)/(up-lo)
```
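For Uniform(lo, up) this formula reproduces the known raw moments E[X^0] = 1, E[X^1] = (lo + up)/2, and E[X^2] = (lo^2 + lo·up + up^2)/3. Checking it in pure Python at lo=-1, up=1, where those values are 1, 0, and 1/3:

```python
def mom(k_loc, lo, up):
    """Raw statistical moment of Uniform(lo, up)."""
    return (up**(k_loc + 1) - lo**(k_loc + 1)) / (k_loc + 1) / (up - lo)

moments = [mom(k, -1.0, 1.0) for k in range(3)]
```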
And the three-terms-recurrence coefficients, which are used by the method
[chaospy.Distribution.ttr()](../../api/chaospy.Distribution.ttr.rst)
and passed analytically to Stieltjes' method:
```
def ttr(k_loc, lo, up):
"""Three terms recurrence."""
return 0.5*up+0.5*lo, k_loc**2/(4*k_loc**2-1)*lo**2
```
What these coefficients are and why they are important are discussed in the
section [orthogonal polynomials](../polynomial/orthogonality.ipynb).
# Factorization
Factorization is the process of restating an expression as the *product* of two expressions (in other words, expressions multiplied together).
For example, you can make the value **16** by performing the following multiplications of integer numbers:
- 1 x 16
- 2 x 8
- 4 x 4
Another way of saying this is that 1, 2, 4, 8, and 16 are all factors of 16.
## Factors of Polynomial Expressions
We can apply the same logic to polynomial expressions. For example, consider the following monomial expression:
\begin{equation}-6x^{2}y^{3} \end{equation}
You can get this value by performing the following multiplication:
\begin{equation}(2xy^{2})(-3xy) \end{equation}
Run the following Python code to test this with arbitrary ***x*** and ***y*** values:
```
from random import randint
x = randint(1,100)
y = randint(1,100)
(2*x*y**2)*(-3*x*y) == -6*x**2*y**3
```
So, we can say that **2xy<sup>2</sup>** and **-3xy** are both factors of **-6x<sup>2</sup>y<sup>3</sup>**.
This also applies to polynomials with more than one term. For example, consider the following expression:
\begin{equation}(x + 2)(2x^{2} - 3y + 2) = 2x^{3} + 4x^{2} - 3xy + 2x - 6y + 4 \end{equation}
Based on this, **x+2** and **2x<sup>2</sup> - 3y + 2** are both factors of **2x<sup>3</sup> + 4x<sup>2</sup> - 3xy + 2x - 6y + 4**.
(and if you don't believe me, you can try this with random values for x and y with the following Python code):
```
from random import randint
x = randint(1,100)
y = randint(1,100)
(x + 2)*(2*x**2 - 3*y + 2) == 2*x**3 + 4*x**2 - 3*x*y + 2*x - 6*y + 4
```
## Greatest Common Factor
Of course, these may not be the only factors of **-6x<sup>2</sup>y<sup>3</sup>**, just as 8 and 2 are not the only factors of 16.
Additionally, 2 and 8 aren't just factors of 16; they're factors of other numbers too - for example, they're both factors of 24 (because 2 x 12 = 24 and 8 x 3 = 24). Which leads us to the question, what is the highest number that is a factor of both 16 and 24? Well, let's look at all the numbers that multiply evenly into 16 and all the numbers that multiply evenly into 24:
| 16 | 24 |
|--------|--------|
| 1 x 16 | 1 x 24 |
| 2 x **8** | 2 x 12 |
| | 3 x **8** |
| 4 x 4 | 4 x 6 |
The highest value that is a factor of both 16 and 24 is **8**, so 8 is the *Greatest Common Factor* (or GCF) of 16 and 24.
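Python's standard library can confirm this directly; `math.gcd` computes the greatest common factor of two integers:

```python
import math

gcf = math.gcd(16, 24)
# Both numbers divide evenly by the GCF, and nothing larger divides both.
quotients = (16 // gcf, 24 // gcf)
```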
OK, let's apply that logic to the following expressions:
\begin{equation}15x^{2}y\;\;\;\;\;\;\;\;9xy^{3}\end{equation}
So what's the greatest common factor of these two expressions?
It helps to break the expressions into their constituent components. Let's deal with the coefficients first; we have 15 and 9. The highest value that divides evenly into both of these is **3** (3 x 5 = 15 and 3 x 3 = 9).
Now let's look at the ***x*** terms; we have x<sup>2</sup> and x. The highest value that divides evenly into both of these is **x** (*x* goes into *x* once and into *x*<sup>2</sup> *x* times).
Finally, for our ***y*** terms, we have y and y<sup>3</sup>. The highest value that divides evenly into both of these is **y** (*y* goes into *y* once and into *y*<sup>3</sup> *y•y* times).
Putting all of that together, the GCF of both of our expression is:
\begin{equation}3xy\end{equation}
An easy shortcut to identifying the GCF of an expression that includes variables with exponentials is that it will always consist of:
- The *largest* numeric factor of the numeric coefficients in the polynomial expressions (in this case 3)
- The *smallest* exponential of each variable (in this case, x and y, which technically are x<sup>1</sup> and y<sup>1</sup>).
You can check your answer by dividing the original expressions by the GCF to find the coefficient expressions for the GCF (in other words, how many times the GCF divides into the original expression). The result, when multiplied by the GCF, will always produce the original expression. So in this case, we need to perform the following divisions:
\begin{equation}\frac{15x^{2}y}{3xy}\;\;\;\;\;\;\;\;\frac{9xy^{3}}{3xy}\end{equation}
These fractions simplify to **5x** and **3y<sup>2</sup>**, giving us the following calculations to prove our factorization:
\begin{equation}3xy(5x) = 15x^{2}y\end{equation}
\begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation}
Let's try both of those in Python:
```
from random import randint
x = randint(1,100)
y = randint(1,100)
print((3*x*y)*(5*x) == 15*x**2*y)
print((3*x*y)*(3*y**2) == 9*x*y**3)
```
## Distributing Factors
Let's look at another example. Here is a binomial expression:
\begin{equation}6x + 15y \end{equation}
To factor this, we need to find an expression that divides evenly into both of these terms. In this case, we can use **3** to factor the coefficients, because 3 • 2x = 6x and 3 • 5y = 15y, so we can write our original expression as:
\begin{equation}6x + 15y = 3(2x) + 3(5y) \end{equation}
Now, remember the distributive property? It enables us to multiply each term of an expression by the same factor to calculate the product of the expression multiplied by the factor. We can *factor-out* the common factor in this expression to distribute it like this:
\begin{equation}6x + 15y = 3(2x) + 3(5y) = \mathbf{3(2x + 5y)} \end{equation}
Let's prove to ourselves that these all evaluate to the same thing:
```
from random import randint
x = randint(1,100)
y = randint(1,100)
(6*x + 15*y) == (3*(2*x) + 3*(5*y)) == (3*(2*x + 5*y))
```
For something a little more complex, let's return to our previous example. Suppose we want to add our original 15x<sup>2</sup>y and 9xy<sup>3</sup> expressions:
\begin{equation}15x^{2}y + 9xy^{3}\end{equation}
We've already calculated the common factor, so we know that:
\begin{equation}3xy(5x) = 15x^{2}y\end{equation}
\begin{equation}3xy(3y^{2}) = 9xy^{3}\end{equation}
Now we can factor-out the common factor to produce a single expression:
\begin{equation}15x^{2}y + 9xy^{3} = \mathbf{3xy(5x + 3y^{2})}\end{equation}
And here's the Python test code:
```
from random import randint
x = randint(1,100)
y = randint(1,100)
(15*x**2*y + 9*x*y**3) == (3*x*y*(5*x + 3*y**2))
```
So you might be wondering what's so great about being able to distribute the common factor like this. The answer is that it can often be useful to apply a common factor to multiple terms in order to solve seemingly complex problems.
For example, consider this:
\begin{equation}x^{2} + y^{2} + z^{2} = 127\end{equation}
Now solve this equation:
\begin{equation}a = 5x^{2} + 5y^{2} + 5z^{2}\end{equation}
At first glance, this seems tricky because there are three unknown variables, and even though we know that their squares add up to 127, we don't know their individual values. However, we can distribute the common factor and apply what we *do* know. Let's restate the problem like this:
\begin{equation}a = 5(x^{2} + y^{2} + z^{2})\end{equation}
Now it becomes easier to solve, because we know that the expression in parenthesis is equal to 127, so actually our equation is:
\begin{equation}a = 5(127)\end{equation}
So ***a*** is 5 times 127, which is 635.
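We can verify this numerically. Pick any real values for x and y (the ones below are arbitrary) and solve for z so that the constraint holds:

```python
import math

x, y = 3.5, 7.0
z = math.sqrt(127 - x**2 - y**2)   # forces x**2 + y**2 + z**2 == 127
a = 5*x**2 + 5*y**2 + 5*z**2       # distributing gives 5 * 127
```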
## Formulae for Factoring Squares
There are some useful ways that you can employ factoring to deal with expressions that contain squared values (that is, values with an exponential of 2).
### Differences of Squares
Consider the following expression:
\begin{equation}x^{2} - 9\end{equation}
The constant *9* is 3<sup>2</sup>, so we could rewrite this as:
\begin{equation}x^{2} - 3^{2}\end{equation}
Whenever you need to subtract one squared term from another, you can use an approach called the *difference of squares*, whereby we can factor *a<sup>2</sup> - b<sup>2</sup>* as *(a - b)(a + b)*; so we can rewrite the expression as:
\begin{equation}(x - 3)(x + 3)\end{equation}
Run the code below to check this:
```
from random import randint
x = randint(1,100)
(x**2 - 9) == (x - 3)*(x + 3)
```
### Perfect Squares
A *perfect square* is a number multiplied by itself; for example, 3 multiplied by 3 is 9, so 9 is a perfect square.
When working with equations, the ability to factor between polynomial expressions and binomial perfect square expressions can be a useful tool. For example, consider this expression:
\begin{equation}x^{2} + 10x + 25\end{equation}
We can use 5 as a common factor to rewrite this as:
\begin{equation}(x + 5)(x + 5)\end{equation}
So what happened here?
Well, first we found a common factor for our coefficients: 5 goes into 10 twice and into 25 five times (in other words, squared). Then we just expressed this factoring as a multiplication of two identical binomials *(x + 5)(x + 5)*.
Remember, the rule for multiplication of polynomials is to multiply each term in the first polynomial by each term in the second polynomial and then add the results; so you can do this to verify the factorization:
- x • x = x<sup>2</sup>
- x • 5 = 5x
- 5 • x = 5x
- 5 • 5 = 25
When you combine the two 5x terms we get back to our original expression of x<sup>2</sup> + 10x + 25.
Now we have an expression multiplied by itself; in other words, a perfect square. We can therefore rewrite this as:
\begin{equation}(x + 5)^{2}\end{equation}
Factorization of perfect squares is a useful technique, as you'll see when we start to tackle quadratic equations in the next section. In fact, it's so useful that it's worth memorizing its formula:
\begin{equation}(a + b)^{2} = a^{2} + b^{2}+ 2ab \end{equation}
In our example, the *a* term is ***x*** and the *b* term is ***5***, and in standard form, our equation *x<sup>2</sup> + 10x + 25* is actually *a<sup>2</sup> + 2ab + b<sup>2</sup>*. The operations are all additions, so the order isn't actually important!
Run the following code with random values for *a* and *b* to verify that the formula works:
```
from random import randint
a = randint(1,100)
b = randint(1,100)
a**2 + b**2 + (2*a*b) == (a + b)**2
```
# Matplotlib
## Introduction
- matplotlib is probably the single most used Python package for 2D-graphics
- it also provides good capabilities to create 3D graphics
- a quick way to visualize data from Python in publication quality
- for further information: https://matplotlib.org/
## Creating First Plots
### 1. Import pyplot package
- provides functions that make matplotlib work like MATLAB
- object-oriented plotting
```
import matplotlib.pyplot as plt # import pyplot interface
```
### 2. Create [figure](https://matplotlib.org/api/figure_api.html) and [axes](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html)
```
fig = plt.figure() # a new figure window
ax = fig.add_subplot(1, 1, 1) # a new axes
plt.show()
```
### 3. Create / [Plot data](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) (sine)
```
import numpy as np
x = np.linspace(0,10,1000)
y = np.sin(x)
ax.plot(x,y, label = 'sine')
fig # this is required to re-display the figure
```
#### Customize Line Style
```
fig2 = plt.figure() # a new figure window
ax2 = fig2.add_subplot(1, 1, 1) # a new axes
x2 = np.linspace(0,10,50)
y2 = np.sin(x2)
ax2.plot(x2,y2, '-o', label = 'sine')
plt.show()
fig3 = plt.figure() # a new figure window
ax3 = fig3.add_subplot(1, 1, 1) # a new axes
x2 = np.linspace(0,10,50)
y2 = np.sin(x2)
ax3.plot(x2,y2, 'r-o', label = 'sine')
plt.show()
```
##### Line Colour
- 'r': red
- 'g': green
- 'b': blue
- 'c': cyan
- 'm': magenta
- 'y': yellow
- 'k': black
- 'w': white
##### Line Style
- '-': solid
- '--': dashed
- ':': dotted
- '-.': dot-dashed
- '.': points
- 'o': filled circles
- '^': filled triangles
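A colour code and a line style can be combined in a single format string. The following small sketch (an addition for illustration, not part of the original notebook) shows this:

```
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch also runs as a plain script
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 50)
fig4 = plt.figure()                            # a new figure window
ax4 = fig4.add_subplot(1, 1, 1)                # a new axes
ax4.plot(x, np.sin(x), 'g--', label='sine')    # 'g--': green dashed line
ax4.plot(x, np.cos(x), 'k-.', label='cosine')  # 'k-.': black dot-dashed line
ax4.legend()
```

Here `'g--'` means a green dashed line and `'k-.'` a black dot-dashed line.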
### 4. Create / [Plot data](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) (cosine)
```
y2 = np.cos(x)
ax.plot(x,y2, label = 'cosine')
fig
```
### 5. Create / [Plot data](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) (3 * cosine) on second axes
```
ax_twin = ax.twinx()
y3 = 3 * np.cos(x+np.pi/4)
ax_twin.plot(x,y3, 'r',label = '3 * cosine')
fig
```
### 6. Set limits for [x](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_xlim.html)-/[y](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_ylim.html)-axis
```
ax.set_xlim(0,10)
ax.set_ylim(-1.5, 2.0)
fig
```
### 7. [Add legend](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html)
```
ax.legend()
fig
```
### 8. Add [x](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_xlabel.html)-/[y](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_ylabel.html)-label and [title](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_title.html)
```
ax.set_xlabel("$x$")
ax.set_ylabel(r"$\sin(x)$")  # raw strings avoid invalid-escape warnings
ax.set_title(r"I like $\pi$")
fig
```
### 9. [Add grid](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.grid.html)
```
ax.grid(True)
fig
```
### Excursion Subplots
- the command [fig.add_subplot](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html) divides the figure into a grid with a certain number of axes
- syntax:
``` python
fig.add_subplot(rows, cols, num)
```
- rows = number of rows in the grid
- cols = number of columns in the grid
- num = number of the subplot to create (counting from left to right, top to bottom and indexed starting at 1)
```
fig = plt.figure()
for i in range(6):
ax = fig.add_subplot(2, 3, i + 1)
ax.set_title("Plot #%i" % i)
```
- the subplots are overlapping
- there are a few ways to fix it, e.g.:
```
fig.subplots_adjust(wspace=0.4, hspace=0.4)
fig
```
- ```wspace``` and ```hspace``` determine the width and height of the spacing between the plots
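Another common fix (added here for illustration, assuming a reasonably recent matplotlib) is to let matplotlib compute the spacing itself with ```tight_layout```:

```
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch also runs as a plain script
import matplotlib.pyplot as plt

fig = plt.figure()
for i in range(6):
    ax = fig.add_subplot(2, 3, i + 1)
    ax.set_title("Plot #%i" % i)
fig.tight_layout()  # adjusts wspace/hspace automatically so titles don't overlap
```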
## 2. Various 2D Plotting
### [Histograms](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html)
```
x = np.random.normal(size=1000)
fig, ax = plt.subplots()
H = ax.hist(x, bins=50, alpha=0.5, histtype='stepfilled')
```
### [Pie Plot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.pie.html)
```
fracs = [30, 15, 45, 10]
colors = ['b', 'g', 'r', 'w']
fig, ax = plt.subplots(figsize=(6, 6)) # make the plot square
pie = ax.pie(fracs, colors=colors, explode=(0, 0, 0.05, 0), shadow=True,
labels=['A', 'B', 'C', 'D'])
```
### [Errorbar Plots](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.errorbar.html)
```
x = np.linspace(0, 10, 30)
dy = 0.1
y = np.random.normal(np.sin(x),dy)
fig, ax = plt.subplots()
plt.errorbar(x, y, dy, fmt='.k')
```
### [Contour Plots (filled)](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.contourf.html)
```
x = np.linspace(0, 10, 50)
y = np.linspace(0, 20, 60)
z = np.cos(y[:, np.newaxis]) * np.sin(x)
fig, ax = plt.subplots()
# filled contours
im = ax.contourf(x, y, z, 100)
fig.colorbar(im, ax=ax)
```
### [Contour Plots (lines)](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.contour.html)
```
# contour lines
im2 = ax.contour(x, y, z, colors='k')
fig
```
## 3. [Various 3D Plotting](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html)
```
# This is the 3D plotting toolkit
from mpl_toolkits.mplot3d import Axes3D
```
### [3D scatter Plot](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#scatter-plots)
```
fig = plt.figure()
ax = plt.axes(projection='3d')
z = np.linspace(0, 1, 100)
x = z * np.sin(20 * z)
y = z * np.cos(20 * z)
c = x + y
ax.scatter(x, y, z, c=c)
```
### [3D Line Plot](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#line-plots)
```
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot(x, y, z, '-b')
```
### [Surface Plot](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#surface-plots)
```
x = np.outer(np.linspace(-2, 2, 30), np.ones(30))
y = x.copy().T
z = np.cos(x ** 2 + y ** 2)
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_surface(x, y, z, cmap=plt.cm.jet, rstride=1, cstride=1, linewidth=0)
```
```
# default_exp custom_tf_training
```
# Custom Tensorflow Training
> Extending tf.keras for custom training functionality
```
# export
import os
from nbdev.showdoc import *
from fastcore.test import *
import tensorflow as tf
import sklearn
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
test_eq(sklearn.__version__ > "0.20", True)
test_eq(tf.__version__ > "2.0.0", True)
#hide
from nbdev.showdoc import *
TEMP_DIR = "tmp"
if not os.path.exists(TEMP_DIR):
    os.makedirs(TEMP_DIR)
```
### Custom Loss Functions
```
housing = sklearn.datasets.fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_full, y_train_full, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)
def huber_fn(y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < 1
squared_loss = tf.square(error) / 2
linear_loss = tf.abs(error) - 0.5
return tf.where(is_small_error, squared_loss, linear_loss)
plt.figure(figsize=(8, 3.5))
z = np.linspace(-4, 4, 200)
plt.plot(z, huber_fn(0, z), "b-", linewidth=2, label="huber($z$)")
plt.plot(z, z**2 / 2, "b:", linewidth=1, label=r"$\frac{1}{2}z^2$")
plt.plot([-1, -1], [0, huber_fn(0., -1.)], "r--")
plt.plot([1, 1], [0, huber_fn(0., 1.)], "r--")
plt.gca().axhline(y=0, color='k')
plt.gca().axvline(x=0, color='k')
plt.axis([-4, 4, 0, 4])
plt.grid(True)
plt.xlabel("$z$")
plt.legend(fontsize=14)
plt.title("Huber loss", fontsize=14)
plt.show()
input_shape = X_train.shape[1:]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
tf.keras.layers.Dense(1),
])
model.compile(loss=huber_fn, optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
CUSTOM_MODEL = os.path.join(TEMP_DIR, "my_model_with_a_custom_loss.h5")
CUSTOM_MODEL_THRESHOLD = os.path.join(TEMP_DIR, "my_model_with_a_custom_loss_threshold.h5")
CUSTOM_MODEL_LOSS_CLASS = os.path.join(TEMP_DIR, "my_model_with_a_custom_loss_class.h5")
CUSTOM_MODEL_CUSTOM_PARTS = os.path.join(TEMP_DIR, "my_model_with_many_custom_parts.h5")
CUSTOM_MODEL_CUSTOM_PARTS_2 = os.path.join(TEMP_DIR, "my_model_with_many_custom_parts_2.h5")
model.save(CUSTOM_MODEL)
del model
model = tf.keras.models.load_model(CUSTOM_MODEL,
custom_objects={"huber_fn": huber_fn})
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
def create_huber(threshold=1.0):
def huber_fn(y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < threshold
squared_loss = tf.square(error) / 2
linear_loss = threshold * tf.abs(error) - threshold**2 / 2
return tf.where(is_small_error, squared_loss, linear_loss)
return huber_fn
model.compile(loss=create_huber(2.0), optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.save(CUSTOM_MODEL_THRESHOLD)
del model
model = tf.keras.models.load_model(CUSTOM_MODEL_THRESHOLD,
custom_objects={"huber_fn": create_huber(2.0)})
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
class HuberLoss(tf.keras.losses.Loss):
def __init__(self, threshold=1.0, **kwargs):
self.threshold = threshold
super().__init__(**kwargs)
def call(self, y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < self.threshold
squared_loss = tf.square(error) / 2
linear_loss = self.threshold * tf.abs(error) - self.threshold**2 / 2
return tf.where(is_small_error, squared_loss, linear_loss)
def get_config(self):
base_config = super().get_config()
return {**base_config, "threshold": self.threshold}
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
tf.keras.layers.Dense(1),
])
model.compile(loss=HuberLoss(2.), optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.save(CUSTOM_MODEL_LOSS_CLASS, save_format='h5')
# tf.saved_model.save(model, CUSTOM_MODEL_LOSS_CLASS)
# del model
# open issue: https://github.com/tensorflow/tensorflow/issues/25938
# model = tf.keras.models.load_model(CUSTOM_MODEL_LOSS_CLASS,
# {"HuberLoss": HuberLoss})
# model = tf.saved_model.load(CUSTOM_MODEL_LOSS_CLASS)
# related to : https://github.com/tensorflow/tensorflow/issues/25938
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
```
Notes:
Saving TensorFlow models with custom components is still very clunky.
* Saving loss classes derived from `tf.keras.losses.Loss` throws errors (related to [this issue](https://github.com/tensorflow/tensorflow/issues/25938))
* `h5py` works in most cases, except for saving `tf.keras.losses.Loss` classes.
* Ideally we would like to save all of this as a `SavedModel`, due to its integration with TensorFlow Serving.
### Custom Activation Functions, Initializers, Regularizers and Constants
```
def clear_keras_session():
tf.keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
clear_keras_session()
def my_softplus(z): # return value is just tf.nn.softplus(z)
return tf.math.log(tf.exp(z) + 1.0)
def my_glorot_initializer(shape, dtype=tf.float32):
stddev = tf.sqrt(2. / (shape[0] + shape[1]))
return tf.random.normal(shape, stddev=stddev, dtype=dtype)
def my_l1_regularizer(weights):
return tf.reduce_sum(tf.abs(0.01 * weights))
def my_positive_weights(weights): # return value is just tf.nn.relu(weights)
return tf.where(weights < 0., tf.zeros_like(weights), weights)
layer = tf.keras.layers.Dense(1, activation=my_softplus,
kernel_initializer=my_glorot_initializer,
kernel_regularizer=my_l1_regularizer,
kernel_constraint=my_positive_weights)
clear_keras_session()
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
tf.keras.layers.Dense(1, activation=my_softplus,
kernel_regularizer=my_l1_regularizer,
kernel_constraint=my_positive_weights,
kernel_initializer=my_glorot_initializer),
])
model.compile(loss="mse", optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.save(CUSTOM_MODEL_CUSTOM_PARTS)
del model
model = tf.keras.models.load_model(
CUSTOM_MODEL_CUSTOM_PARTS,
custom_objects={
"my_l1_regularizer": my_l1_regularizer,
"my_positive_weights": my_positive_weights,
"my_glorot_initializer": my_glorot_initializer,
"my_softplus": my_softplus,
})
class MyL1Regularizer(tf.keras.regularizers.Regularizer):
def __init__(self, factor):
self.factor = factor
def __call__(self, weights):
return tf.reduce_sum(tf.abs(self.factor * weights))
def get_config(self):
return {"factor": self.factor}
clear_keras_session()
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
tf.keras.layers.Dense(1, activation=my_softplus,
kernel_regularizer=MyL1Regularizer(0.01),
kernel_constraint=my_positive_weights,
kernel_initializer=my_glorot_initializer),
])
model.compile(loss="mse", optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.save(CUSTOM_MODEL_CUSTOM_PARTS_2)
model = tf.keras.models.load_model(
CUSTOM_MODEL_CUSTOM_PARTS_2,
custom_objects={
"MyL1Regularizer": MyL1Regularizer,
"my_positive_weights": my_positive_weights,
"my_glorot_initializer": my_glorot_initializer,
"my_softplus": my_softplus,
})
```
### Custom Metrics
### Custom Layers
### Custom Models
### Losses and Metrics Based on Model Internals
### Computing Gradients Using Autodiff
### Custom Training Loops
## References
* [Ch 12: Hands-On Machine Learning with Scikit-Learn, Keras & Tensorflow](https://github.com/ageron/handson-ml2/blob/master/12_custom_models_and_training_with_tensorflow.ipynb)
```
from __future__ import division
import os, sys, time, random
import math
import scipy
from scipy import constants
import torch
from torch import nn, optim
from torch import autograd
from torch.autograd import grad
import autograd.numpy as np
from torch.utils.data import Dataset, DataLoader
from torch.autograd.variable import Variable
from torchvision import transforms, datasets
import matplotlib.pyplot as plt
from torch.nn import functional as F
from scipy.constants import pi
class Potential(nn.Module):
def __init__(self):
super(Potential,self).__init__()
self.hidden0 = nn.Sequential(
nn.Linear(2,128),
nn.Tanh()
)
# self.hidden1 = nn.Sequential(
# nn.Linear(32,128),
# nn.Tanh()
# )
self.hidden1 = nn.Sequential(
nn.Linear(128,128),
nn.Tanh()
)
self.out = nn.Sequential(
nn.Linear(128,1),
nn.Sigmoid()
)
def forward(self, x):
x = self.hidden0(x)
x = x + self.hidden1(x)
# x = x + self.hidden2(x)
x = self.out(x)
return x
def hermite(n,x):
if n==0:
return 1
elif n==1:
return 2*x
else:
return 2*x*hermite(n-1,x)-2*(n-1)*hermite(n-2,x) #recursion
def harmonic(m,h,w,n,x):
#Normalization:
norm=((m*w)/(math.pi*h))**(1/4)
term1=(math.factorial(n))*(2**n)
term2=(hermite(n,x)/math.sqrt(term1))
expterms=(-1.0*m*w*x*x)/(2*h)
#print(norm*term2,expterms,x)
evalh=norm*term2*torch.exp(expterms)
#print(norm,term1,term2,evalh)
return evalh
def init_wave_function(x,y):
return harmonic(1,1,1,2,x)*harmonic(1,1,1,2,y)
potential = Potential()
optimizer = torch.optim.Adam(potential.parameters(), lr = .001)
def conservation_energy(batch):
batch.requires_grad_(True)
x_coord = batch[:,0]
x_coord.requires_grad_(True)
y_coord = batch[:,1]
y_coord.requires_grad_(True)
output = init_wave_function(x_coord,y_coord)
output.requires_grad_(True)
potential_energy = potential(batch).squeeze()
# print(potential_energy.shape)
potential_energy.requires_grad_(True)
#potential_energy = .5*(x_coord**2 + y_coord**2).squeeze()
# print(potential_energy)
dHdx = grad(output, x_coord, grad_outputs = torch.ones_like(x_coord),
create_graph=True, retain_graph=True,
only_inputs=True,
allow_unused=True
)[0]
d2Hdx2 = grad(dHdx, x_coord, grad_outputs = torch.ones_like(x_coord),
create_graph=True, retain_graph=True,
only_inputs=True,
allow_unused=True
)[0]
dHdy = grad(output, y_coord, grad_outputs = torch.ones_like(y_coord),
create_graph=True, retain_graph=True,
only_inputs=True,
allow_unused=True
)[0]
d2Hdy2 = grad(dHdy, y_coord, grad_outputs = torch.ones_like(y_coord),
create_graph=True, retain_graph=True,
only_inputs=True,
allow_unused=True
)[0]
kinetic_energy = d2Hdx2 + d2Hdy2
# print(kinetic_energy.shape)
conserve_energy = kinetic_energy/(2*output) - potential_energy
return conserve_energy
h = .01
def taylor_approx_x(batch):
batch.requires_grad_(True)
x_coord = batch[:,0]
x_coord.requires_grad_(True)
x_coord1 = x_coord + h
x_coord2 = x_coord - h
x1_coord1 = torch.unsqueeze(x_coord1,1)
x2_coord2 = torch.unsqueeze(x_coord2,1)
y_coord = batch[:,1]
y_coord.requires_grad_(True)
y_coord = torch.unsqueeze(y_coord,1)
batch_forward = torch.cat([x1_coord1,y_coord],1)
batch_back = torch.cat([x2_coord2,y_coord],1)
partial_x = (conservation_energy(batch_forward) - conservation_energy(batch_back))/(2*h)
return partial_x
def taylor_approx_y(batch):
batch.requires_grad_(True)
x_coord = batch[:,0]
x_coord.requires_grad_(True)
x_coord = torch.unsqueeze(x_coord,1)
# x1_coord = torch.unsqueeze(x1_coord,1)
y_coord = batch[:,1]
y_coord.requires_grad_(True)
y1 = y_coord + h
y2 = y_coord - h
y1_coord = torch.unsqueeze(y1,1)
y2_coord = torch.unsqueeze(y2,1)
batch_forward = torch.cat([x_coord,y1_coord],1)
batch_back = torch.cat([x_coord,y2_coord],1)
partial_y = (conservation_energy(batch_forward) - conservation_energy(batch_back))/(2*h)
return partial_y
data = torch.rand(5000,2)
# MyDataset was used without being defined; a minimal Dataset wrapper:
class MyDataset(Dataset):
    def __init__(self, data):
        self.data = data
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return self.data[idx]
dataset = MyDataset(data)
loader = DataLoader(dataset, batch_size = 32, shuffle = True)
num_epochs = 2000
loss = []
#x = torch.tensor([0.0,0.0])
for epoch in range(num_epochs):
for n_batch, batch in enumerate(loader):
n_data = Variable(batch, requires_grad=True)
optimizer.zero_grad()
error = (taylor_approx_x(n_data)**2 + taylor_approx_y(n_data)**2).mean()
error.backward(retain_graph=True)
optimizer.step()
loss.append(error)
x = torch.rand(100,2)
p = potential(x)
p1 = conservation_energy(x)
#RMSE between ground and learned energies
torch.mean((p1+5)**2)
x_coord = x[:,0]
y_coord = x[:,1]
ground = .5*(x_coord**2 + y_coord**2)
#RMSE between ground and learned potentials
torch.mean((ground - potential(x).squeeze())**2)
np.sqrt(1.3127e-05)
```
The rest of the notebook can be ignored as it is used to generate the 2d energy plot.
```
x_coord = sample_x(4000,2)
learned_energy1 = -conserve_energy(x_coord).detach().numpy()
learned_energy1[3000],x_coord.detach().numpy()[3000]
%pip install plotly
plot(fig, filename='./2dsystem.html')
import numpy as np
from itertools import product
import plotly.graph_objs as go
import plotly.offline as py
py.init_notebook_mode(connected=True)
# Gen data
import plotly.graph_objects as go
from plotly.offline import plot
# x=X
# y=Y
Z=(torch.ones(50,50)*5).tolist()
trace1 = go.Surface(
contours = {
"x": {"show": True, "start": 0, "end": 1, "size": 0.1, "color":"white"},
"y": {"show": True, "start": 0, "end": 1, "size": 0.1, "color":"white"},
},
x = [0.0000, 0.0204, 0.0408, 0.0612, 0.0816, 0.1020, 0.1224, 0.1429, 0.1633,
0.1837, 0.2041, 0.2245, 0.2449, 0.2653, 0.2857, 0.3061, 0.3265, 0.3469,
0.3673, 0.3878, 0.4082, 0.4286, 0.4490, 0.4694, 0.4898, 0.5102, 0.5306,
0.5510, 0.5714, 0.5918, 0.6122, 0.6327, 0.6531, 0.6735, 0.6939, 0.7143,
0.7347, 0.7551, 0.7755, 0.7959, 0.8163, 0.8367, 0.8571, 0.8776, 0.8980,
0.9184, 0.9388, 0.9592, 0.9796, 1.0000],
y = [0.0000, 0.0204, 0.0408, 0.0612, 0.0816, 0.1020, 0.1224, 0.1429, 0.1633,
0.1837, 0.2041, 0.2245, 0.2449, 0.2653, 0.2857, 0.3061, 0.3265, 0.3469,
0.3673, 0.3878, 0.4082, 0.4286, 0.4490, 0.4694, 0.4898, 0.5102, 0.5306,
0.5510, 0.5714, 0.5918, 0.6122, 0.6327, 0.6531, 0.6735, 0.6939, 0.7143,
0.7347, 0.7551, 0.7755, 0.7959, 0.8163, 0.8367, 0.8571, 0.8776, 0.8980,
0.9184, 0.9388, 0.9592, 0.9796, 1.0000],
z = Z,colorscale='YlGnBu',showscale=False)
trace2 = go.Scatter3d(
x = x_coord.tolist(),
y = y_coord.tolist(),
z = (-p1).tolist(),
mode="markers",
marker=dict(
opacity=.99
)
)
traces=[trace1,trace2]
# Plot
fig = go.Figure(data=traces)
plot(fig, filename='./2d_system.html')
```
## PyBEAM Tutorial 3: Parameter inference.
In this tutorial, we discuss how to run PyBEAM's parameter inference tool. If you have not done so already, look at the Tutorial 1 and 2 notebooks since we will be using tools introduced in those here.
Once you have done this, import PyBEAM's default sub-module.
```
# import PyBEAM's default module
import pybeam.default as pbd
```
We first define a model we would like to study. For this example, we define the 'base' model with no extensions, discussed in Tutorial 1. We also run the parse_model function discussed in Tutorial 2 to make sure it has the parameters we desire. For this model, they should be the non-decision time 't_nd', the relative start point 'w', the drift rate 'mu', and the decision threshold location 'a'.
```
# define base model
model = {'type' : 'base', # model type ('base' or 'ugm')
'sigma' : 1.0, # sets sigma, the scaling parameter
'threshold' : 'fixed', # sets threshold type (fixed, linear, exponential, or weibull)
'leakage' : False, # if True, drift rate has leaky integration
'delay' : False, # if True, decision threshold motion is delayed
'contamination' : False} # if True, uniform contamination added to model
# outputs which parameters your model uses
pbd.parse_model(model)
```
Next, we simulate a data set using the simulate_model tool discussed in Tutorial 2. We also plot the data set and its likelihood function using the plot_rt function.
```
# parameters for synthetic data
phi = {'t_nd' : 0.25, # non-decision time
'w' : 0.5, # relative start point
'mu' : 1.0, # drift rate
'a' : 0.5} # decision threshold location
# generate synthetic data
rt = pbd.simulate_model(N_sims = 500, # number of data points to simulate
model = model, # dictionary containing model information
phi = phi) # parameters used to simulate data
# plot data and model likelihood function
pbd.plot_rt(model = model, # dictionary containing model information
phi = phi, # parameters used for model rt distribution
rt = rt); # dictionary of simulated rt data
```
Now that we have defined our model and generated a data set, we can run the MCMC inference program. To do so, we first need to define all priors used by the model. This is done by generating a dictionary containing the prior information. In this case, we have four parameters and only one data set to fit, so we require four priors. The dictionary keys are arbitrary, while the values are the priors themselves, written in PyMC3 syntax (made into strings). In this case, we use PyMC3's uniform priors, with the syntax:
Uniform( "prior name" , lower = (lower bound of prior) , upper = (upper bound of prior) )
```
# define model priors
p = {'pt_nd' : 'Uniform("t_nd", lower = 0.0, upper = 0.75)', # non-decision time prior
'pw' : 'Uniform("w", lower = 0.3, upper = 0.7)', # relative start point prior
'pmu' : 'Uniform("mu", lower = -5.0, upper = 5.0)', # drift rate prior
'pa' : 'Uniform("a", lower = 0.25, upper = 2.0)'} # decision threshold prior
```
We next define a dictionary which sets which priors are associated with which parameters and which data set. This dictionary contains the keys output by the parse_model function. The values of these keys indicate which priors from the prior dictionary are associated with each parameter. So, for example, parameter 't_nd' has the prior associated with prior key 'pt_nd' (p['pt_nd']), so we give it the value 'pt_nd'.
In addition to the parameters, a key 'rt' is required. This key contains the data you would like to fit. This data set must be in the same form as that output by the simulate_model function. It must have two keys, 'rt_upper' and 'rt_lower'. The values for these keys must be 1D lists/numpy arrays containing the reaction time data for the two choices (crossing the upper and lower decision thresholds, respectively).
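As a sketch of that format, a hand-built data set (with hypothetical reaction times, not output of simulate_model) would look like:

```
import numpy as np

# two 1D arrays of reaction times, one per decision threshold
rt_manual = {'rt_upper': np.array([0.41, 0.55, 0.62]),  # upper-threshold responses
             'rt_lower': np.array([0.48, 0.71])}        # lower-threshold responses
```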
This dictionary is then loaded into one last dictionary which will be input into the function. This step is added in case you would like additional conditions in your model. For this example, we only have one condition, but the Tutorial 2 notebook discusses how to use multiple conditions.
```
# define model conditions
c = {'rt' : rt, # dictionary containing reaction time data
't_nd' : 'pt_nd', # prior for non-decision time, references p['pt_nd']
'w' : 'pw', # prior for relative start point, references p['pw']
'mu' : 'pmu', # prior for the drift rate, references p['pmu']
     'a' : 'pa'} # prior for the decision threshold, references p['pa']
# load conditions into dictionary
cond = {0 : c}
```
We now run PyBEAM's parameter inference program, inference. This takes the input data and priors and outputs posteriors for each parameter. It requires inputs of the model dictionary, the priors dictionary, the conditions dictionary, samples (how many MCMC samples to draw), chains (how many MCMC chains you want), cores (how many CPU cores to run those chains on), and file_name (a string containing the file name to save the posteriors to).
This function has a few additional optional inputs that generally are not needed. These include other resolution settings and MCMC solvers other than the default. They are discussed in detail in the function description notebook.
```
# run parameter inference
trace = pbd.inference(model = model, # model dictionary
priors = p, # priors dictionary
conditions = cond, # conditions dictionary
samples = 25000, # number of MCMC samples
chains = 3, # number of MCMC chains
cores = 3, # number of CPU cores to run chains on
file_name = 'tutorial3') # file output
```
PyBEAM contains two more useful functions. The first, plot_trace, plots the posteriors output by the program. The second, summary, provides posterior summary statistics. Both accept the file name as input, in addition to a 'burnin' argument which sets how many samples to discard from the beginning of each chain.
Both functions are from the arviz library, so see the arviz/pymc3 documentation for more information about them.
```
# plot posteriors
pbd.plot_trace(file_name = 'tutorial3', burnin = 12500);
# summary of posteriors
pbd.summary(file_name = 'tutorial3', burnin = 12500)
```
# Plotting with pandas
In this chapter, we learn how to plot directly from pandas DataFrames or Series. Internally, pandas uses matplotlib to do all of its plotting. Let's begin by reading in the stocks dataset.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
stocks = pd.read_csv('../data/stocks/stocks10.csv', index_col='date', parse_dates=['date'])
stocks.head(3)
```
## Plotting a Series
pandas uses the Series index as the x-values and the values as y-values. By default, pandas creates a line plot. Let's plot Amazon's closing price for the last 5 years.
```
amzn = stocks['AMZN']
amzn.head(3)
amzn.plot();
```
Get four years of data from Apple, Facebook, Schlumberger and Tesla beginning in 2014.
### Plot many Series one at a time
All calls to plot that happen in the same cell will be drawn on the same Axes unless otherwise specified. Let's plot several Series at the same time.
```
stocks['AMZN'].plot()
stocks['AAPL'].plot()
stocks['FB'].plot()
stocks['SLB'].plot()
stocks['TSLA'].plot();
```
### Plot all at once from the DataFrame
Instead of individually plotting Series, we can plot each column in the DataFrame at once with its `plot` method.
```
stocks.plot();
```
### Plotting in Pandas is Column based
The most important thing to know about plotting in pandas is that it is **column based**. pandas plots each column, one at a time. It uses the index as the x-values for each column and the values of each column as the y-values. The column names are put in the **legend**.
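A minimal sketch of this behaviour with a toy frame (an illustration added here, not the stocks data):

```
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch also runs as a plain script
import pandas as pd

# index -> x-values, one line per column, column names -> legend labels
df = pd.DataFrame({'a': [1, 2, 3], 'b': [3, 2, 1]}, index=[10, 20, 30])
ax = df.plot()
labels = [line.get_label() for line in ax.lines]
```

Each column produces its own line, and the labels collected above come from the column names.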
## Choosing other types of plots
pandas directly uses Matplotlib for all of its plotting and does not have any plotting capabilities on its own. pandas is simply calling Matplotlib's plotting functions and supplying the arguments for you. pandas provides a small subset of the total available types of plots that matplotlib offers. Use the `kind` parameter to choose one of the following types of plots.
* `line` : line plot (default)
* `bar` : vertical bar plot
* `barh` : horizontal bar plot
* `hist` : histogram
* `box` : boxplot
* `kde` : Kernel Density Estimation plot. `density` is an alias
* `area` : area plot
* `pie` : pie plot
* `scatter`: Does not plot all columns, you must choose x and y
### Histogram of the closing prices of Apple
Set the `kind` parameter to the string 'hist' to plot a histogram of closing prices.
```
aapl = stocks['AAPL']
aapl.plot(kind='hist');
```
### Kernel Density Estimate
Very similar to a histogram, a kernel density estimate plot (use the string 'kde') estimates the probability density function.
```
aapl.plot(kind='kde');
```
## Additional plotting parameters
To modify plots to your liking, pandas provides several of the same parameters found in matplotlib plotting functions. The most common are listed below:
* `linestyle` or `ls` - Pass a string of one of the following ['--', '-.', '-', ':']
* `color` or `c` - Can take a string of a named color, a string of the hexadecimal characters or a rgb tuple with each number between 0 and 1.
* `linewidth` or `lw` - controls thickness of line. Default is 1
* `alpha` - controls opacity with a number between 0 and 1
* `figsize` - a tuple used to control the size of the plot. (width, height)
* `legend` - boolean to control whether or not to show legend.
```
# Use several of the additional plotting arguments
aapl.plot(color="darkblue",
linestyle='--',
figsize=(10, 4),
linewidth=3,
alpha=.7,
legend=True,
title="AAPL Stock Price - Last 5 Years");
```
### Diamonds dataset
Let's read in the diamonds dataset and begin making plots with it.
```
diamonds = pd.read_csv('../data/diamonds.csv')
diamonds.head(3)
```
### Changing the defaults for a scatterplot
The default plot is a line plot that uses the index as the x-axis. Each column of the frame becomes a set of y-values. This worked well for the stock price data, where the date was in the index and ordered. For many datasets, you will have to explicitly set the x and y axis variables. Below is a scatterplot comparing carat vs price.
```
diamonds.plot(x='carat', y='price', kind='scatter', figsize=(8, 4));
diamonds.shape
```
### Sample the data when too many points
When an abundance of data is present, sampling a fraction of the data can result in a more readable plot. Here, we sample five percent of the data and change the size of each point with the `s` parameter.
```
dia_sample = diamonds.sample(frac=.05)
dia_sample.plot('carat', 'price', kind='scatter', figsize=(8, 4), s=2);
```
### If you have tidy data, use `groupby/pivot_table`, then make a bar plot
If your data is tidy like it is with this diamonds dataset, you will likely need to aggregate it with either a `groupby` or a `pivot_table` to make it work with a bar plot.
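A minimal sketch of that workflow with a toy tidy frame (hypothetical values, not the real diamonds data):

```
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch also runs as a plain script
import pandas as pd

tidy = pd.DataFrame({'cut': ['Ideal', 'Ideal', 'Good', 'Good', 'Fair'],
                     'price': [500, 700, 400, 600, 300]})
avg_price = tidy.groupby('cut')['price'].mean()  # aggregate first...
ax = avg_price.plot(kind='bar')                  # ...then bar plot the result
```

The aggregation puts the grouping column into the index, which then supplies the tick labels for the bars.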
### The index becomes the tick labels for String Indexes
Pandas nicely integrates the index into plotting by using it as the tick mark labels for many plots.
```
cut_count = diamonds['cut'].value_counts()
cut_count
cut_count.plot(kind='bar');
```
### More than one grouping column in the index
It's possible to make plots with a Series that have a MultiIndex.
```
cut_color_count = diamonds.groupby(['cut', 'color']).size()
cut_color_count.head(10)
cut_color_count.plot(kind='bar');
```
### That's quite ugly
Let's reshape and plot again.
```
cut_color_pivot = diamonds.pivot_table(index='cut', columns='color', aggfunc='size')
cut_color_pivot
```
Plot the whole DataFrame. The index always goes on the x-axis. Each column value is the y-value and the column names are used as labels in the legend.
```
cut_color_pivot.plot(kind='bar', figsize=(10, 4));
```
## Pandas plots return matplotlib objects
After making a plot with pandas, you will see some text output immediately under the cell that was just executed. Pandas is returning to us the matplotlib Axes object. You can assign the result of the `plot` method to a variable.
```
ax = cut_color_pivot.plot(kind='bar');
```
Verify that we have a matplotlib Axes object.
```
type(ax)
```
Get the figure as an attribute of the Axes
```
fig = ax.figure
type(fig)
```
### We can use the figure and axes as normal
Let's set a new title for the Axes and change the size of the Figure.
```
ax.set_title('My new title on a Pandas plot')
fig.set_size_inches(10, 4)
fig
```
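Because pandas hands back ordinary matplotlib objects, anything matplotlib offers is available, for example saving the figure to disk. A minimal sketch with a toy DataFrame standing in for the pivot table (the DataFrame and filename here are made up for illustration):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; safe outside a notebook
import pandas as pd

# Toy stand-in for cut_color_pivot: two cuts, two colors.
toy = pd.DataFrame({'D': [2, 3], 'E': [4, 5]}, index=['Fair', 'Good'])
ax = toy.plot(kind='bar')
ax.figure.savefig('cut_color_bar.png', dpi=150, bbox_inches='tight')
```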
## Exercises
### Exercise 1
<span style="color:green; font-size:16px">In this exercise we will test whether daily returns from stocks are normally distributed. Complete the following tasks:
* Take the `df_stocks` DataFrame and call the **`pct_change`** method to get the daily return percentage and assign it to a variable.
* Assign the mean and standard deviation of each column (these will return Series) to separate variables.
* Standardize your columns by subtracting the mean and dividing by the standard deviation. You have now produced a **z-score** for each daily return.
* Add a column to this DataFrame called **`noise`** by calling **`np.random.randn`** which creates random normal variables.
* Plot the KDE for each column in your DataFrame. If the stock returns are normal, then the shapes of the curves will all look the same.
* Limit the x-axis to be between -3 and 3.
* Are stock returns normally distributed?</span>
### Exercise 2
<span style="color:green; font-size:16px">Use Pandas to plot a horizontal bar plot of diamond cuts.</span>
### Exercise 3
<span style="color:green; font-size:16px">Make a visualization that easily shows the differences in average salary by sex for each department of the employee dataset.</span>
### Exercise 4
<span style="color:green; font-size:16px">Split the employee data into two separate DataFrames. Those who have a hire date after the year 2000 and those who have one before. Make the same plot above for each group.</span>
### Exercise 5
<span style="color:green; font-size:16px">Use the `flights` data set. Plot the counts of the number of flights per day of week.</span>
### Exercise 6
<span style="color:green; font-size:16px">Plot the average arrival delay per day of week.</span>
### Exercise 7
<span style="color:green; font-size:16px">Plot the average arrival delay per day of week per airline.</span>
# DD-Pose getting started
This Jupyter notebook shows you how to access the raw data and annotations of the DD-Pose dataset.
```
from dd_pose.dataset import Dataset
from dd_pose.dataset_item import DatasetItem
from dd_pose.image_decorator import ImageDecorator
from dd_pose.jupyter_helpers import showimage
from dd_pose.evaluation_helpers import T_headfrontal_camdriver, T_camdriver_headfrontal
from dd_pose.visualization_helpers import get_dashboard
import transformations as tr
import numpy as np
```
A `Dataset` contains `dataset item dictionaries`.
You can choose between splits `'trainval'`, `'test'` and `'all'`:
* `trainval`: training and validation split. Raw data and head pose measurements
* `test`: held-out test split. No head pose measurements
* `all`: union of the two above
```
d = Dataset(split='all')
len(d)
```
`d.get_dataset_items()` yields a generator for all `dataset item dictionaries` in a dataset.
A `dataset_item dictionary` is represented by a `subject` (int), a `scenario` (int) and a `humanhash` (str).
The `humanhash` is there to disambiguate multiple `scenario`s of the same type.
```
next(d.get_dataset_items())
```
You can also get a `dataset item dictionary` by providing `subject`, `scenario` and `humanhash` directly.
The file `resources/dataset-items-trainval.txt` covers the existing dataset items of the `trainval` split.
```
!head resources/dataset-items-trainval.txt
di_dict = d.get(subject_id=1, scenario_id=3, humanhash='sodium-finch-fillet-spring') # trainval
# di_dict = d.get(subject_id=6, scenario_id=0, humanhash='quebec-aspen-washington-social') # test
di_dict
```
## Access data
A `DatasetItem` object encapsulates all data
```
di = DatasetItem(di_dict)
```
A measurement in a `DatasetItem` is indexed by a timestamp.
`di.get_stamps()` gets all timestamps as long integers.
```
stamps = di.get_stamps()
len(stamps)
stamps[0:5]
```
Choose an arbitrary stamp and print data for it.
```
stamp = stamps[153]
```
### Get gps information
```
di.get_gps(stamp)
```
### Heading above ground
```
di.get_heading(stamp)
```
### Get the image of the left driver cam
... and convert from 16bit depth to 8bit depth by shifting 8 bits to the right
```
img, pcm = di.get_img_driver_left(stamp, shift=True)
img.shape, img.dtype
```
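For illustration, the 16-bit to 8-bit conversion that `shift=True` performs amounts to keeping the 8 most significant bits of each pixel; a sketch (the actual implementation inside `get_img_driver_left` may differ):

```python
import numpy as np

# Right-shifting by 8 bits is an integer division by 256, mapping the
# uint16 range [0, 65535] onto the uint8 range [0, 255].
img16 = np.array([[0, 256, 65535]], dtype=np.uint16)
img8 = (img16 >> 8).astype(np.uint8)
print(img8)  # [[  0   1 255]]
```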
### pcm represents the associated pinhole camera model
```
pcm.P
```
### You can project 3d points onto the image plane
```
pcm.project3dToPixel((0, 0, 1))
```
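Under the hood, a pinhole projection multiplies the homogeneous 3D point by the 3x4 projection matrix and divides by the resulting depth. A minimal sketch with a made-up `P` (the real matrix comes from `pcm.P`):

```python
import numpy as np

# Hypothetical projection matrix: focal length 500 px,
# principal point at (320, 240), no stereo baseline.
P = np.array([[500.,   0., 320., 0.],
              [  0., 500., 240., 0.],
              [  0.,   0.,   1., 0.]])

point = np.array([0., 0., 1., 1.])  # homogeneous 3D point 1 m in front
u, v, w = P.dot(point)
# A point on the optical axis lands on the principal point.
print(u / w, v / w)  # 320.0 240.0
```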
### This is the image of the left driver camera
```
showimage(img)
```
### Get the docu cam image
```
img_docu, pcm_docu = di.get_img_docu(stamp)
img_docu.shape, img_docu.dtype
showimage(img_docu)
```
### Occlusion state of face (see paper)
```
di.get_occlusion_state(stamp)
```
### Steering wheel angle and acceleration
```
di.get_stw_angle(stamp)
```
### Transformations
Transformations are given in homogeneous coordinates.
The terminology is:
`point_A = T_A_B * point_B`
`T_A_B` is a homogeneous 4x4 matrix which transforms a `point_B` from frame `B` to a `point_A` in frame `A`.
Points are homogeneous 4 element column vectors `(x, y, z, 1.0)`.
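As a minimal sketch of this convention (with a made-up transform, not one taken from the dataset):

```python
import numpy as np

# Made-up T_A_B: rotate 90 degrees about z, then translate 1 m along x.
T_A_B = np.array([[0., -1., 0., 1.],
                  [1.,  0., 0., 0.],
                  [0.,  0., 1., 0.],
                  [0.,  0., 0., 1.]])

point_B = np.array([1., 0., 0., 1.])  # homogeneous point in frame B
point_A = T_A_B.dot(point_B)          # the same point expressed in frame A
print(point_A)  # [1. 1. 0. 1.]

# Inverting the matrix reverses the direction: T_B_A = inv(T_A_B)
T_B_A = np.linalg.inv(T_A_B)
print(np.allclose(T_B_A.dot(point_A), point_B))  # True
```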
### Static homogeneous transformation from body frame (car) to camdriver (the optical frame of the left driver camera)
```
di.get_T_camdriver_body()
```
### Static homogeneous transformation from camdocu optical frame to camdriver optical frame
```
di.get_T_camdriver_camdocu()
```
### Static homogeneous transformation from gps frame to camdriver optical frame
```
di.get_T_camdriver_gps()
```
### Head pose: homogeneous transformation from head frame to camdriver optical frame
```
T_camdriver_head = di.get_T_camdriver_head(stamp)
T_camdriver_head
```
### Draw head pose into camdriver image
```
img, pcm = di.get_img_driver_left(stamp, shift=True)
img_bgr = np.dstack((img, img, img))
image_decorator = ImageDecorator(img_bgr, pcm)
if T_camdriver_head is not None:
image_decorator.draw_axis(T_camdriver_head)
else:
image_decorator.draw_text("no T_camdriver_head")
showimage(img_bgr)
```
### Draw head pose into camdocu image
```
img_docu, pcm_docu = di.get_img_docu(stamp)
image_decorator = ImageDecorator(img_docu, pcm_docu)
# Get transformation from head into camdocu frame by "chaining"
# Note how the 'camdriver' cancels out by multiplication
T_camdocu_camdriver = np.linalg.inv(di.get_T_camdriver_camdocu())
if T_camdriver_head is not None:
T_camdocu_head = np.dot(T_camdocu_camdriver, T_camdriver_head)
image_decorator.draw_axis(T_camdocu_head)
else:
image_decorator.draw_text("No T_camdriver_head")
# Also draw camdriver frame into image
image_decorator.draw_axis(T_camdocu_camdriver)
showimage(img_docu)
```
### Angular representation of head pose
There are many angular representations.
You can get a conventional representation by 'static axis rotation' towards a frontally looking head,
i.e. `roll = pitch = yaw = 0` represents a head looking frontally towards the camera
```
if T_camdriver_head is not None:
T_headfrontal_head = np.dot(T_headfrontal_camdriver, T_camdriver_head)
roll, pitch, yaw = tr.euler_from_matrix(T_headfrontal_head, 'sxyz')
roll, pitch, yaw # in rad
```
### Visualize `T_camdriver_headfrontal`
Draw `headfrontal` frame into camdocu image.
`x` points inside the camera
```
img_docu, pcm_docu = di.get_img_docu(stamp)
image_decorator = ImageDecorator(img_docu, pcm_docu)
T_camdocu_camdriver = np.linalg.inv(di.get_T_camdriver_camdocu())
T_camdocu_headfrontal = np.dot(T_camdocu_camdriver, T_camdriver_headfrontal)
image_decorator.draw_axis(T_camdocu_headfrontal)
showimage(img_docu)
```
### Use all-in-one function `get_dashboard`
Shows all information in one image
```
img_dashboard = get_dashboard(di, stamp)
showimage(img_dashboard)
```