# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] editable=true
# # Forecast/Predict Demand for Bike Sharing
#
# This model predicts/forecasts daily bike rental ridership using neural networks.
#
# + editable=true
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# %config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# + [markdown] editable=true
# ## Load and prepare the data
#
# The core dataset is the two-year historical log (years 2011 and 2012) from the Capital Bikeshare system in Washington, D.C., USA, which is
# publicly available at http://capitalbikeshare.com/system-data. More details about the dataset are in the Readme.txt file in the Bike-Sharing-Dataset folder of this repository.
#
# A critical step in working with neural networks is preparing the data correctly. Here the network is trained on hourly data, so the demand for every hour of each day is taken into account; thus, the `hour.csv` dataset is used.
#
# This dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, and summed in the `cnt` column.
#
# Below is a plot of the number of bike riders over roughly the first 10 days in the data set. (Some days don't have exactly 24 entries, so it isn't exactly 10 days.) The rentals are shown on an hourly basis. Weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. The data also includes temperature, humidity, and windspeed, all of which likely affect the number of riders. The neural network tries to capture all of this.
# + editable=true
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
# + editable=true
rides.head()
# + editable=true
rides[:24*10].plot(x='dteday', y='cnt')
# + [markdown] editable=true
# ### Dummy variables
# There are some categorical variables in the dataset, such as season, weather, and month. To include these in the model, binary dummy variables are created with Pandas' `get_dummies()` function.
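For instance, a minimal, self-contained illustration of what `get_dummies()` produces (toy values, not the real dataset):

```python
import pandas as pd

# Toy column standing in for e.g. the real `season` field.
toy = pd.DataFrame({'season': [1, 2, 1, 3]})

# One binary column per category, named `<prefix>_<value>`.
dummies = pd.get_dummies(toy['season'], prefix='season')
toy = pd.concat([toy, dummies], axis=1)
```

Each row then has exactly one of the `season_*` columns set, which is what the loop below builds for every categorical field.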
# + editable=true
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
# + [markdown] editable=true
# ## Scaling continuous variables
# To make training of the network easier, each of the continuous variables is standardized: shifted and scaled so that it has zero mean and a standard deviation of 1.
#
# The scaling factors are saved to go backwards when using the network for predictions.
# + editable=true
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std
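Because each field's `[mean, std]` pair is stored, undoing the standardization later is a single affine map. A minimal sketch with hypothetical scaling factors (the dictionary layout matches `scaled_features` above):

```python
import numpy as np

# Hypothetical factors in the same [mean, std] layout as `scaled_features`.
scaled_features_demo = {'cnt': [189.5, 181.4]}

def unscale(values, field, factors):
    """Invert the (x - mean) / std standardization for one field."""
    mean, std = factors[field]
    return np.asarray(values) * std + mean

original = unscale([-1.0, 0.0, 1.0], 'cnt', scaled_features_demo)
```

The prediction plot at the end of the notebook uses exactly this `x * std + mean` inversion.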
# + [markdown] editable=true
# ## Splitting the data into training, validation, and testing sets
#
# Approximately 21 days of data will be used as a test set after the network is trained. The test set is used to make predictions and compare them with the actual number of riders.
# + editable=true
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# + [markdown] editable=true
# Splitting the training data further into two sets: one for training and one for validation while the network is being trained. Since this is time-series data, the model is trained on historical data and then predictions are made on future data (the validation set).
# + editable=true
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
# + [markdown] editable=true
# ## Time to build the network
#
# Below is the neural network for predicting the demand for bike sharing. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activation. The output layer has only one node and is used for the regression. There is no activation function used at output.
# + editable=true
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Sets number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initializing weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
                                                        (self.input_nodes, self.hidden_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                         (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        # Implementing sigmoid function
        self.activation_function = lambda x: 1/(1+np.exp(-x))

    def train(self, features, targets):
        ''' Trains the network on a batch of features and targets.

        Arguments
        ---------
        features: 2D array, each row is one data record, each column is a feature
        targets: 1D array of target values
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            final_outputs, hidden_outputs = self.forward_pass_train(X)
            delta_weights_i_h, delta_weights_h_o = self.backpropagation(
                final_outputs, hidden_outputs, X, y,
                delta_weights_i_h, delta_weights_h_o)
        self.update_weights(delta_weights_i_h, delta_weights_h_o, n_records)

    def forward_pass_train(self, X):
        ''' Implements the forward pass.

        Arguments
        ---------
        X: features batch
        '''
        # Hidden layer
        hidden_inputs = np.dot(X, self.weights_input_to_hidden)  # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)  # signals from hidden layer
        # Output layer
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
        final_outputs = final_inputs  # signals from final output layer (no activation)
        return final_outputs, hidden_outputs

    def backpropagation(self, final_outputs, hidden_outputs, X, y, delta_weights_i_h, delta_weights_h_o):
        ''' Implements backpropagation.

        Arguments
        ---------
        final_outputs: output from forward pass
        y: target (i.e. label) batch
        delta_weights_i_h: change in weights from input to hidden layers
        delta_weights_h_o: change in weights from hidden to output layers
        '''
        # Output layer error is the difference between desired target and actual output.
        error = y - final_outputs
        output_error_term = error
        # Calculating the hidden layer's contribution to the error
        hidden_error = np.dot(output_error_term, self.weights_hidden_to_output.T)
        # Backpropagated error terms
        hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
        # Weight step (input to hidden)
        delta_weights_i_h += hidden_error_term * X[:, None]
        # Weight step (hidden to output)
        delta_weights_h_o += (output_error_term * hidden_outputs)[:, None]
        return delta_weights_i_h, delta_weights_h_o

    def update_weights(self, delta_weights_i_h, delta_weights_h_o, n_records):
        ''' Updates weights with one gradient descent step.

        Arguments
        ---------
        delta_weights_i_h: change in weights from input to hidden layers
        delta_weights_h_o: change in weights from hidden to output layers
        n_records: number of records
        '''
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records  # hidden-to-output weights
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records  # input-to-hidden weights

    def run(self, features):
        ''' Runs a forward pass through the network with input features.

        Arguments
        ---------
        features: 1D array of feature values
        '''
        # Hidden layer
        hidden_inputs = np.dot(features, self.weights_input_to_hidden)  # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)  # signals from hidden layer
        # Output layer
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
        final_outputs = final_inputs  # signals from final output layer
        return final_outputs
# + editable=true
def MSE(y, Y):
    return np.mean((y-Y)**2)
# + [markdown] editable=true
# ## Training the network
#
# Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
#
# You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
#
# ### Choose the number of iterations
# This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
#
# ### Choose the learning rate
# This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
#
# ### Choose the number of hidden nodes
# In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
#
# Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
# + editable=true
import sys
#########################################################
# Setting the hyperparameters
##########################################################
iterations = 2700
learning_rate = 0.6
hidden_nodes = 17
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations))
                     + "% ... Training loss: " + str(train_loss)[:5]
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)
# + editable=true
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.xlabel('Iterations')
plt.ylabel('Loss')
_ = plt.ylim()
# + [markdown] editable=true
# ## Checking out the predictions
#
# Here, the test data, which the model has never seen, is used to see how well the trained model predicts/forecasts bike-sharing demand.
# + editable=true
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))  # predictions has shape (1, n) after the transpose
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
# + [markdown] editable=true
# The predictions are pretty good except on and after Dec 22 because of the holiday season.
# + editable=true
# Notebook: Predict_or_Forecast_Demand_for_Bike_Sharing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# * This notebook implements the VAE model in [Kingma (2013)](https://arxiv.org/abs/1312.6114).
# ## Variational Auto-encoder
# ### Motivation
# In addition to the values of the observable and of its latent variable, we also want to encode their uncertainties.
# ### Definitions
# * Let $X$ be the observed random variable and $Z$ the latent random variable. Let
#
# \begin{equation}
# Z \sim P_{Z}
# \end{equation}
#
# be some given prior.
#
# * And let
#
# \begin{equation}
# X \mid Z \sim P_{X \mid Z; \phi}
# \end{equation}
#
# for some distribution in a family parameterized by $\phi$.
# For instance, if $X \in \mathbb{R}^n$, it is common to assume a multivariate Gaussian distribution, diagonal or semi-diagonal, so that we have
#
# \begin{equation}
# X \mid Z \sim \mathcal{N} \left( \mu(Z; \phi_1), \sigma(Z; \phi_2) \right),
# \end{equation}
#
# for some universal function approximators $\mu(\cdot; \phi_1)$ and $\sigma(\cdot; \phi_2)$, where $\phi := (\phi_1, \phi_2)$.
# ### Variational Inference
# Because the dataset consists of samples of $x$, we employ some distribution $q(z \mid x)$ (to be determined) to fit $p(z \mid x)$ (had the dataset been of $z$, we would fit $p(x \mid z)$ instead). The KL divergence then gives the bound
#
# \begin{equation}
# \text{KL} \left( q(z \mid x) \| p(z \mid x) \right)
# = \text{KL} \left( q(z \mid x) \| p(x, z) \right) + \ln p(x)
# \geq 0.
# \end{equation}
#
# Thus, a loss
#
# \begin{align}
# L(x)
# & := \text{KL} \left( q(z \mid x) \| p(x, z) \right) \\
# & = \mathbb{E}_{z \sim q(z \mid x)} \left[
# \ln q(z \mid x) - \ln p(x \mid z) - \ln p(z) \right] \\
# & \geq - \ln p(x).
# \end{align}
#
# The equality can be reached if and only if $q(z \mid x) = p(z \mid x)$.
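The bound and its equality condition can be checked numerically on a tiny discrete model, where both $L(x)$ and $\ln p(x)$ are computable exactly (the model below is a toy of my own, not the MNIST example):

```python
import numpy as np

# Toy model: Z in {0, 1} with p(z) = [0.5, 0.5]; p(x=1 | z) = [0.2, 0.9].
p_z = np.array([0.5, 0.5])
p_x_given_z = np.array([0.2, 0.9])   # probability of the observed x = 1

p_x = np.sum(p_x_given_z * p_z)      # exact marginal likelihood p(x)

def loss(q):
    """L(x) = E_{z~q}[ln q(z|x) - ln p(x|z) - ln p(z)], computed exactly."""
    return np.sum(q * (np.log(q) - np.log(p_x_given_z) - np.log(p_z)))

posterior = p_x_given_z * p_z / p_x  # the true p(z|x)

# For any q, L(x) + ln p(x) >= 0; the gap vanishes at the true posterior.
gap_bad = loss(np.array([0.5, 0.5])) + np.log(p_x)
gap_opt = loss(posterior) + np.log(p_x)
```

`gap_bad` is exactly $\text{KL}(q(z \mid x) \| p(z \mid x))$, which is zero only when $q$ equals the posterior.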
# ### Evaluation of Inference
# The fit can be evaluated by comparing $\ln p(x)$ with $-L(x)$. Given any $p(x \mid z)$, we have
#
# \begin{align}
# p(x) & = \int dz p(x \mid z) p(z) \\
# & = \mathbb{E}_{z \sim p(z)} \left[ p(x \mid z) \right].
# \end{align}
# ### Relation with Autoencoder
# * The $\mathbb{E}_{z \sim q(z \mid x)} \left[ - \ln p(x \mid z) \right]$ part can be interpreted as a reconstruction loss.
#
# * The $\mathbb{E}_{z \sim q(z \mid x)} \left[ - \ln p(z) \right]$ part serves as a regularization term.
#
# * The $\mathbb{E}_{z \sim q(z \mid x)} \left[ \ln q(z \mid x) \right]$ part seems to have no correspondence in the auto-encoder.
# ### Example
# In this example, on the MNIST dataset:
#
# \begin{align}
# Z & \sim \mathcal{N} (0, 1); \\
# X \mid Z & \sim \text{Bernoulli}\left( p(z; \theta) \right).
# \end{align}
#
# And inference distributions:
#
# \begin{align}
# Z_0 \mid X & \sim \mathcal{N}\left( \mu(x; \phi_1), \sigma(x; \phi_2) \right); \\
# Z \mid Z_0 & \sim T(z_0).
# \end{align}
#
# where $Z_0$ follows the "base" inference distribution (a Gaussian), and $T$ is a bijector with universal approximation capacity.
# ## Implementation
# +
# %matplotlib inline
from IPython.display import display
import matplotlib.pyplot as plt
from PIL import Image
from tqdm import tqdm
import numpy as np
import tensorflow as tf
import tensorflow.contrib.distributions as tfd
from tensorflow.contrib.distributions.python.ops import bijectors as tfb
from tensorflow.examples.tutorials.mnist import input_data
# For reproducibility
seed = 42
np.random.seed(seed)
tf.set_random_seed(seed)
# -
# ### Functions
# + code_folding=[]
def get_p_X_z(z, X_dim, hidden_layers=None,
              name='p_X_z', reuse=None):
    """Returns the distribution of P(X|Z).
    X | Z ~ Bernoulli( p(Z) ).

    Args:
        z: Tensor of the shape `[batch_size, z_dim]`.
        X_dim: Positive integer.
        hidden_layers: List of positive integers. Defaults to
            `[128, 256, 512]`.

    Returns:
        An instance of `tfd.Distribution`, with batch-shape `batch_size`
        and event-shape `X_dim`.
    """
    if hidden_layers is None:
        hidden_layers = [128, 256, 512]
    with tf.variable_scope(name, reuse=reuse):
        hidden = z
        for hidden_layer in hidden_layers:
            hidden = tf.layers.dense(hidden, hidden_layer,
                                     activation=tf.nn.relu)
        logits = tf.layers.dense(hidden, X_dim, activation=None)
        p_X_z = tfd.Independent(tfd.Bernoulli(logits=logits))
    return p_X_z
# -
def get_q_z_X(X, z_dim, hidden_layers=None, bijectors=None,
              dtype='float32', name='q_z_X', reuse=None):
    """Returns the distribution of Z | X.
    Z = bijector(Z_0), and
    Z_0 | X ~ Normal(mu(X;phi), sigma(X;phi)).

    Args:
        X: Tensor with shape `[batch_size, X_dim]`.
        z_dim: Positive integer.
        hidden_layers: List of positive integers. Defaults to
            `[512, 256, 128]`.
        bijectors: List of `tfb.Bijector`s. Defaults to an empty
            list.

    Returns:
        An instance of `tfd.Distribution`, with batch-shape `batch_size`
        and event-shape `z_dim`.
    """
    if bijectors is None:
        bijectors = []
    if hidden_layers is None:
        hidden_layers = [512, 256, 128]
    with tf.variable_scope(name, reuse=reuse):
        hidden = X
        for hidden_layer in hidden_layers:
            hidden = tf.layers.dense(hidden, hidden_layer,
                                     activation=tf.nn.relu)
        # Outputs in the fiber-bundle space
        output = tf.layers.dense(hidden, z_dim * 2, activation=None)
        # shape: [batch_size, z_dim]
        mu, log_var = tf.split(output, [z_dim, z_dim], axis=1)
        q_z0_X = tfd.MultivariateNormalDiag(mu, tf.exp(log_var))
        chain = tfb.Chain(bijectors)
        q_z_X = tfd.TransformedDistribution(q_z0_X, chain)
    return q_z_X
def get_bijectors(name='bijectors', reuse=None):
    """Complexifies the inference distribution with extra bijectors,
    e.g. normalizing flows.

    Returns:
        List of `Bijector`s.
    """
    with tf.variable_scope(name, reuse=reuse):
        bijectors = []
        for i in range(10):
            # Get one bijector
            shift_and_log_scale_fn = \
                tfb.masked_autoregressive_default_template([128])
            # MAF is extremely slow in training. Use IAF instead.
            bijector = tfb.Invert(
                tfb.MaskedAutoregressiveFlow(shift_and_log_scale_fn))
            bijectors.append(bijector)
    return bijectors
def get_loss_X(get_q_z_X, get_p_X_z, p_z, reuse=None):
    """L(X) := E_{z ~ q(z|X)} [ log_q(z|X) - log_p(z) - log_p(X|z) ].

    Args:
        get_q_z_X: Callable with the signature:
            Args:
                X: Tensor with shape `[batch_size, X_dim]`.
                reuse: Boolean.
            Returns:
                An instance of `tfd.Distribution`, with batch-shape
                `batch_size` and event-shape `z_dim`.
        get_p_X_z: Callable with the signature:
            Args:
                z: Tensor of the shape `[batch_size, z_dim]`.
                reuse: Boolean.
            Returns:
                An instance of `tfd.Distribution`, with batch-shape
                `batch_size` and event-shape `X_dim`.
        p_z: An instance of `tfd.Distribution`, with batch-shape `batch_size`
            and event-shape `z_dim`.
        reuse: Whether to reuse the variables in `get_q_z_X` and `get_p_X_z`.

    Returns:
        Callable with the signature:
            Args:
                X: Tensor of the shape `[batch_size, X_dim]`.
            Returns:
                Tensor of the shape `[batch_size]`.
    """
    def loss_X(X, name='loss_X'):
        """
        Args:
            X: Tensor of the shape `[batch_size, X_dim]`.
        Returns:
            Tensor of the shape `[batch_size]`.
        """
        with tf.name_scope(name):
            # Get the distribution q(z|X)
            q_z_X = get_q_z_X(X, reuse=reuse)
            # Get the distribution p(X|z)
            z_samples = q_z_X.sample()
            p_X_z = get_p_X_z(z_samples, reuse=reuse)

            # Compute the tensor of L(X)
            loss_X_tensor = tf.zeros([batch_size])  # initialize.
            # E_{z ~ q(z|X)} [ log_q(z|X) ]
            loss_X_tensor += q_z_X.log_prob(z_samples)
            # E_{z ~ q(z|X)} [ - log_p(z) ]
            loss_X_tensor += -1 * p_z.log_prob(z_samples)
            # E_{z ~ q(z|X)} [ - log_p(X|z) ]
            loss_X_tensor += -1 * p_X_z.log_prob(X)
        return loss_X_tensor
    return loss_X
def get_log_p_X(get_p_X_z, p_z, n_samples=100, reuse=None):
    """Returns the function ln p(X) by Monte-Carlo integral.
    p(X) = E_{z~p(z)} [ p(X|z) ].

    Args:
        get_p_X_z: Callable with the signature:
            Args:
                z: Tensor of the shape `[batch_size, z_dim]`.
                reuse: Boolean.
            Returns:
                An instance of `tfd.Distribution`, with batch-shape
                `batch_size` and event-shape `X_dim`.
        p_z: An instance of `tfd.Distribution`, with batch-shape `batch_size`
            and event-shape `z_dim`.
        n_samples: Positive integer.
        reuse: Whether to reuse the variables in `get_p_X_z`.

    Returns:
        Callable with the signature:
            Args:
                X: Tensor of the shape `[batch_size, X_dim]`.
                name: String.
            Returns:
                Tensor of the shape `[batch_size]`.
    """
    def log_p_X(X, name='log_p_X'):
        """Returns the tensor of ln p(X).

        Args:
            X: Tensor of the shape `[batch_size, X_dim]`.
            name: String.
        Returns:
            Tensor of the shape `[batch_size]`.
        """
        with tf.name_scope(name):
            def log_p_X_z(z_sample):
                """Returns the tensor of ln p(X|z)."""
                p_X_z = get_p_X_z(z_sample, reuse=reuse)
                return p_X_z.log_prob(X)  # [batch_size]

            z_samples = p_z.sample(n_samples)  # [n_samples, batch_size, z_dim]
            expected = tf.map_fn(log_p_X_z, z_samples)  # [n_samples, batch_size]
            # log-mean-exp, computed stably via logsumexp
            log_p_X_tensor = (tf.reduce_logsumexp(expected, axis=0) -
                              tf.log(float(n_samples)))
        return log_p_X_tensor
    return log_p_X
# ### Loss
batch_size = 128
X_dim = 28 * 28
X = tf.placeholder(shape=[batch_size, X_dim], dtype='float32', name='X')
z_dim = 64
p_z = tfd.MultivariateNormalDiag(tf.zeros([batch_size, z_dim]), name='p_z')
# +
def _get_q_z_X(X, reuse):
    bijectors = get_bijectors(reuse=reuse)
    return get_q_z_X(X, z_dim, bijectors=bijectors, reuse=reuse)

def _get_p_X_z(z, reuse):
    return get_p_X_z(z, X_dim=X_dim, reuse=reuse)

loss_X = get_loss_X(_get_q_z_X, _get_p_X_z, p_z, reuse=tf.AUTO_REUSE)
loss_X_tensor = loss_X(X)
loss_X_scalar = tf.reduce_mean(loss_X_tensor)
# -
# ### Generating
z_samples = tf.placeholder(shape=[batch_size, z_dim],
dtype='float32',
name='z_samples')
X_samples = _get_p_X_z(z_samples, reuse=tf.AUTO_REUSE).sample()
def get_image(array):
    """
    Args:
        array: Numpy array with shape `[28*28]`.
    Returns:
        An image.
    """
    array = 255 * array
    array = array.reshape([28, 28])
    array = array.astype(np.uint8)
    return Image.fromarray(array)
# ### Training
optimizer = tf.train.AdamOptimizer(epsilon=1e-3)
train_op = optimizer.minimize(loss_X_scalar)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# +
mnist = input_data.read_data_sets('dat/MNIST', one_hot=True)
def get_X_y_batch():
    X_batch, y_batch = mnist.train.next_batch(batch_size)
    # Since X | Z ~ Bernoulli, the observed values of X shall
    # be either 0 or 1.
    X_batch = np.where(X_batch >= 0.5, np.ones_like(X_batch),
                       np.zeros_like(X_batch))
    return X_batch, y_batch
# +
loss_vals = []
for i in tqdm(range(100000)):
    X_batch, y_batch = mnist.train.next_batch(batch_size)
    _, loss_val = sess.run([train_op, loss_X_scalar], {X: X_batch})
    if np.isnan(loss_val):
        raise ValueError('Loss has been NaN.')
    loss_vals.append(loss_val)
# Visualization
plt.plot(loss_vals)
plt.xlabel('steps')
plt.ylabel('loss')
plt.show()
# Zoomed in
last_steps = 20000
plt.plot(loss_vals[-last_steps:])
plt.xlabel('last steps')
plt.ylabel('loss')
plt.show()
print('Final loss:', np.mean(loss_vals[-100:]))
# +
z_sample_vals = np.random.normal(size=[batch_size, z_dim])
X_sample_vals = sess.run(X_samples, {z_samples: z_sample_vals})
# Display the results
n_display = 5
for i in range(n_display):
    print('Generated:')
    display(get_image(X_sample_vals[i]))
    print()
# -
# ### Effect of Normalizing-flow
# * Without normalizing flows, 100000 iterations give a final loss of about 97.8.
# * With 10 normalizing flows, 100000 iterations give a final loss of 93.5 (an improvement of 4.3).
# ### Evaluation
# +
# Compute the lower bound of loss for any (trained) p(X|z).
log_p_X = get_log_p_X(_get_p_X_z, p_z,
n_samples=10**5,
reuse=tf.AUTO_REUSE)
log_p_X_tensor = log_p_X(X)
loss_lower_bound = -1 * tf.reduce_mean(log_p_X_tensor)
# -
print(sess.run(loss_lower_bound, {X: X_batch}))
# It is strange that the lower bound of the loss is even greater than the trained loss! And the Monte-Carlo integral is more sensitive to `n_samples` than it should be.
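One plausible explanation (my own note, not from the notebook): by Jensen's inequality, the log of a Monte-Carlo average underestimates ln p(x) in expectation, so the loss lower bound estimated as -ln p(x) is biased upward, and the bias shrinks only slowly with `n_samples`. A toy 1-D illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: z ~ N(0,1), x | z ~ N(z,1), so ln p(x) = ln N(x; 0, 2) is known.
x = 1.5
log_p_x_exact = -0.25 * x ** 2 - 0.5 * np.log(4 * np.pi)

def log_p_x_estimate(n_samples):
    z = rng.standard_normal(n_samples)
    log_w = -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

# By Jensen's inequality the log-mean-exp estimator is biased low, so
# -estimate is biased high: small sample sizes overstate the lower bound.
small = np.mean([log_p_x_estimate(10) for _ in range(2000)])
large = np.mean([log_p_x_estimate(100_000) for _ in range(5)])
```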
# Notebook: genmod/vae/variational_autoencoder.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (3.5.6)
# language: python
# name: python3-3.5.6
# ---
# """ This notebook describes how one can use pickle library to save files so that the TensorFlow Kernel can be used with Desi condition_spectra function.
#
# Saving files like this has two major advantages:
#
# (1) It saves a lot of time as one does not have to run the condition_spectra function everytime before training the network.
#
# (2) Since the DESI kernel has very limited compatability with TF and Keras. This allows the users to bypass that lack of cohesion and save files using the functions in DESI kernel and use TF on a different notebook or a seperate notebook without needing the functions of DESI Kernel"""
# ##### (1) Run the condition_spectra function on DESI Kernel to get the desired conditioned spectra of SN and Hosts
# +
from desispec.io import read_spectra
from desitrip.preproc import rebin_flux, rescale_flux
import glob
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from astropy.table import Table
import os
import platform
mpl.rc('font', size=14)
# -
def condition_spectra(coadd_files, truth_files):
    """Read DESI spectra, rebin to a subsampled logarithmic wavelength grid, and rescale.

    Parameters
    ----------
    coadd_files : list or ndarray
        List of FITS files on disk with DESI spectra.
    truth_files : list or ndarray
        Truth files.

    Returns
    -------
    fluxes : ndarray
        Array of fluxes rebinned to a logarithmic wavelength grid.
    """
    fluxes = None
    for cf, tf in zip(coadd_files, truth_files):
        spectra = read_spectra(cf)
        wave = spectra.wave['brz']
        flux = spectra.flux['brz']
        ivar = spectra.ivar['brz']

        truth = Table.read(tf, 'TRUTH')
        truez = truth['TRUEZ']
        # uid = truth['TARGETID']

        # # Pre-condition: remove spectra with NaNs and zero flux values.
        # mask = np.isnan(flux).any(axis=1) | (np.count_nonzero(flux, axis=1) == 0)
        # mask_idx = np.argwhere(mask)
        # flux = np.delete(flux, mask_idx, axis=0)
        # ivar = np.delete(ivar, mask_idx, axis=0)

        # Rebin and rescale fluxes so that each is normalized between 0 and 1.
        rewave, reflux, reivar = rebin_flux(wave, flux, ivar, truez,
                                            minwave=2500., maxwave=9500.,
                                            nbins=150, log=True, clip=True)
        rsflux = rescale_flux(reflux)

        if fluxes is None:
            fluxes = rsflux
        else:
            fluxes = np.concatenate((fluxes, rsflux))
    return fluxes
snia_truth = np.sort(glob.glob((r'/global/project/projectdirs/desi/science/td/sim/bgs/180s/snia_airmass2*/*truth.fits')))
snia_coadd = np.sort(glob.glob((r'/global/project/projectdirs/desi/science/td/sim/bgs/180s/snia_airmass2*/*coadd.fits')))
snia_flux = condition_spectra(snia_coadd, snia_truth)
host_truth = np.sort(glob.glob((r'/global/project/projectdirs/desi/science/td/sim/bgs/180s/hosts_airmass2*/*truth.fits')))
host_files= np.sort(glob.glob((r'/global/project/projectdirs/desi/science/td/sim/bgs/180s/hosts_airmass2*/*coadd.fits')))
host_flux = condition_spectra(host_files, host_truth)
# ## Saving the files:
# - After you have obtained the spectra fluxes, pickle can be used to save the output.
# - In this example notebook, I show how to save the flux data snia_flux and host_flux, the NumPy arrays obtained from the condition_spectra function as shown above.
# - wb stands for 'write binary'; it writes your data as a binary stream (pickle serializes the object; it does not encrypt it).
# - In the first example below, I save snia_flux from above as snia_flux_airmass2.data
# - .data is the extension used here; pickle itself does not require a particular one
#
# ### General structure for saving a file to_be_saved as new_saved.data :
#
# - address: address of the directory where you want to save your file
#
# <code>
# with open(r'address/new_saved.data', 'wb') as whatever:
#     pickle.dump(to_be_saved, whatever)
# </code>
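As a self-contained sketch, the pattern round-trips like this (a temporary path stands in for the real directory):

```python
import os
import pickle
import tempfile

import numpy as np

# Something to save, standing in for snia_flux / host_flux.
to_be_saved = np.arange(5)
path = os.path.join(tempfile.mkdtemp(), 'new_saved.data')

with open(path, 'wb') as f:
    pickle.dump(to_be_saved, f)

with open(path, 'rb') as f:
    restored = pickle.load(f)
```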
# +
import pickle
with open(r'/global/homes/v/vashisth/Binary/snia_flux_airmass2.data', 'wb') as f:
pickle.dump(snia_flux, f)
with open(r'/global/homes/v/vashisth/Binary/host_flux_airmass2.data', 'wb') as f:
pickle.dump(host_flux, f)
# -
# ## Loading pickled files
# - You can load the files after changing the kernel of the notebook, or you can simply make a new notebook where you want to use <b>TensorFlow or anything else that is not compatible with the DESI kernel</b>.
# ### General structure for loading a file new_saved.data as to_be_used :
# - Note: the pickled file (new_saved.data) needs to be in the same folder as the notebook where you want to use it
#
# <code>
# with open(r'new_saved.data', 'rb') as whatever:
#     to_be_used = pickle.load(whatever)
# </code>
# +
import pickle
with open('host_flux_airmass2.data', 'rb') as filehandle:
host_flux = pickle.load(filehandle)
with open('snia_flux_airmass2.data', 'rb') as filehandle:
# read the data as binary data stream
snia_flux = pickle.load(filehandle)
# -
# ### <b> I hope this was useful !!!</b>
# Notebook: desitrip/docs/nb/Pickle Guide.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TF-IDF Weighted Word Embeddings
# Traditionally, we can represent a document by averaging the word embeddings of its tokens. However, this assumes that each token carries the same "importance/relevance" for the document. Instead of weighting each token equally, we'll weight each token by its TF-IDF score.
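Concretely, the document vector becomes (sum over tokens of tfidf(t) * v(t)) divided by (sum of tfidf(t)). A toy sketch with made-up TF-IDF scores and 2-D vectors (the real notebook uses spacy's 300-d vectors):

```python
import numpy as np

# Hypothetical token embeddings and TF-IDF scores for one tiny document.
embeddings = {'cat': np.array([1.0, 0.0]), 'sat': np.array([0.0, 1.0])}
tfidf = {'cat': 0.8, 'sat': 0.2}

# TF-IDF-weighted average of the token embeddings.
weighted_sum = sum(tfidf[t] * embeddings[t] for t in tfidf)
doc_embedding = weighted_sum / sum(tfidf.values())
```

Tokens with higher TF-IDF pull the document vector toward their embedding.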
# +
import pandas as pd
import spacy
# load spacy en_core_web_md model
nlp = spacy.load("en_core_web_md")
# -
tweets = pd.read_csv("tweets_pandas.csv", encoding="latin1")["tweet"]
tweets = list(tweets.values)
tweets[:5]
# # Tokenize and Get Average Word Embeddings as Sentence Embeddings (What Spacy Already Does For You)
for idx, tweet in enumerate(tweets):
print(nlp(tweet))
print(nlp(tweet).vector[:10])
if idx == 5: # stop printing after first 5 or so, takes a long time!
break
# ## Get TF-IDF Weighted Average Word Embeddings (Our Own Algorithm!)
# +
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tweets)
tf_idf_lookup_table = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names())
# +
DOCUMENT_SUM_COLUMN = "DOCUMENT_TF_IDF_SUM"
# sum the tf idf scores for each document
tf_idf_lookup_table[DOCUMENT_SUM_COLUMN] = tf_idf_lookup_table.sum(axis=1)
available_tf_idf_scores = tf_idf_lookup_table.columns # a list of all the columns we have
available_tf_idf_scores = list(map( lambda x: x.lower(), available_tf_idf_scores)) # lowercase everything
# +
import numpy as np
tweets_vectors = []
for idx, tweet in enumerate(tweets): # iterate through each tweet
    tokens = nlp(tweet) # have spacy tokenize the tweet text
# initially start a running total of tf-idf scores for a document
total_tf_idf_score_per_document = 0
# start a running total of initially all zeroes (300 is picked since that is the word embedding size used by word2vec)
running_total_word_embedding = np.zeros(300)
for token in tokens: # iterate through each token
# if the token has a pretrained word embedding it also has a tf-idf score
if token.has_vector and token.text.lower() in available_tf_idf_scores:
tf_idf_score = tf_idf_lookup_table.loc[idx, token.text.lower()]
#print(f"{token} has tf-idf score of {tf_idf_lookup_table.loc[idx, token.text.lower()]}")
running_total_word_embedding += tf_idf_score * token.vector
total_tf_idf_score_per_document += tf_idf_score
    # divide the total embedding by the total tf-idf score for each document,
    # guarding against documents where no token had both a vector and a tf-idf score
    if total_tf_idf_score_per_document > 0:
        document_embedding = running_total_word_embedding / total_tf_idf_score_per_document
    else:
        document_embedding = running_total_word_embedding
    tweets_vectors.append(document_embedding)
# -
# # Check the Similarity of Different Documents
from sklearn.metrics.pairwise import cosine_similarity
similarities = pd.DataFrame(cosine_similarity(tweets_vectors), columns=tweets, index=tweets)
similarities = similarities.unstack().reset_index()
similarities.columns = ["tweet1", "tweet2", "similarity"]
similarities = similarities[similarities["similarity"] < 0.9999999999]
similarities.drop_duplicates(subset=["similarity"], inplace=True)
for idx, row in similarities.sort_values(by="similarity", ascending=False).head(50).iterrows():
print(row["tweet1"])
print("--" * 10)
print(row["tweet2"])
print("\n\n")
week7/Example of Using TF-IDF Weighted Document Vectors.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="aTTPhI_liU-u" colab_type="text"
# # Generate word embeddings using Swivel
#
# ## Overview
#
# In this notebook we show how to generate word embeddings based on the K-Cap corpus using the [Swivel algorithm](https://arxiv.org/pdf/1602.02215). In particular, we reuse the implementation included in the [Tensorflow models repo on Github](https://github.com/tensorflow/models/tree/master/research/swivel) (with some small modifications).
# + [markdown] id="LPx_bqtS7OCj" colab_type="text"
# ## Download a small text corpus
# First, let's download a corpus into our environment. We will use a small sample of the UMBC corpus that has been pre-tokenized and that we have included as part of our GitHub repository. First, we will clone the repo so we have access to it from this environment.
# + id="Zc-8xkaofomg" colab_type="code" colab={}
# %ls
# + id="UjX5TGjMeLT0" colab_type="code" colab={}
# !git clone https://github.com/hybridnlp/tutorial.git
# + [markdown] id="lSNfGrTQ7qiN" colab_type="text"
# The dataset comes as a zip file, so we unzip it by executing the following cell. We also define a variable pointing to the corpus file:
# + id="CUFY-Sl8lFDK" colab_type="code" colab={}
# !unzip /content/tutorial/datasamples/umbc_t_5K.zip -d /content/tutorial/datasamples/
input_corpus='/content/tutorial/datasamples/umbc_t_5K'
# + [markdown] id="0ymuxrBHQr7u" colab_type="text"
# You can inspect the file using the `%less` command to print the whole input file at the bottom of the screen. It'll be quicker to just print a few lines:
# + id="43SExjL46jdZ" colab_type="code" colab={}
# #%less {input_corpus}
# !head -n1 {input_corpus}
# + [markdown] id="Ju7vkSHcNmcS" colab_type="text"
# The output above shows that the input text has already been pre-processed.
# * All words have been converted to lower-case (this will avoid having two separate words for *The* and *the*)
# * punctuation marks have been separated from words. This will avoid creating "words" such as "staff." or "grimm," in the example above.
# + [markdown] id="1PyH6EI783Kw" colab_type="text"
# ## `swivel`: an algorithm for learning word embeddings
# Now that we have a corpus, we need an (implementation of an) algorithm for learning embeddings. There are various libraries and implementations for this:
# * [word2vec](https://pypi.org/project/word2vec/) the system proposed by Mikolov that introduced many of the techniques now commonly used for learning word embeddings. It directly generates word embeddings from the text corpus by using a sliding window and trying to predict a target word based on neighbouring context words.
# * [GloVe](https://github.com/stanfordnlp/GloVe) an alternative algorithm by Pennington, Socher and Manning. It splits the process in two steps:
# 1. calculating a word-word co-occurrence matrix
# 2. learning embeddings from this matrix
# * [FastText](https://fasttext.cc/) is a more recent algorithm by Mikolov et al (now at Facebook) that extends the original word2vec algorithm in various ways. Among others, this algorithm takes into accout subword information.
#
# In this tutorial we will be using [Swivel](https://github.com/tensorflow/models/tree/master/research/swivel) an algorithm similar to GloVe, which makes it easier to extend to include both words and concepts (which we will do in [notebook 03 vecsigrafo](https://colab.research.google.com/github/HybridNLP2018/tutorial/blob/master/03_vecsigrafo.ipynb)). As with GloVe, Swivel first extracts a word-word co-occurence matrix from a text corpus and then uses this matrix to learn the embeddings.
#
# The official [Swivel](https://github.com/tensorflow/models/tree/master/research/swivel) implementation has a few issues when running on Colaboratory, hence we have included a slightly modified version as part of the HybridNLP2018 github repository.
# + id="8A0IVDoDTU3_" colab_type="code" colab={}
# %ls /content/tutorial/scripts/swivel/
# + [markdown] id="7xppOkdyiU-1" colab_type="text"
# ## Learn embeddings
#
# ### Generate co-occurrence matrix using Swivel `prep`
#
# Call swivel's `prep` command to calculate the word co-occurrence matrix. We use the `%run` magic command, which runs the named python file as a program, allowing us to pass parameters as if using a command-line terminal.
#
# We set the `shard_size` to 512 since the corpus is quite small. For larger corpora we could use the standard value of 4096.
# + id="8wi_izVOiU-2" colab_type="code" colab={}
coocs_path = '/content/umbc/coocs/t_5K/'
shard_size = 512
# !python /content/tutorial/scripts/swivel/prep.py \
# --input="/content/tutorial/datasamples/umbc_t_5K" \
# --output_dir="/content/umbc/coocs/t_5K/" \
# --shard_size=512
# + [markdown] id="5fyTfMneQnQc" colab_type="text"
# The expected output is:
#
# ```
# ... tensorflow parameters ...
# vocabulary contains 5120 tokens
#
# writing shard 100/100
# done!
# ```
#
# We see that first, the algorithm determined the **vocabulary** $V$, this is the list of words for which an embedding will be generated. Since the corpus is fairly small, so is the vocabulary, which consists of only about 5K words (large corpora can result in vocabularies with millions of words).
#
# The co-occurrence matrix is a sparse matrix of $|V| \times |V|$ elements. Swivel uses shards to create submatrices of $|S| \times |S|$, where $S$ is the shard-size specified above. In this case, we have 100 sub-matrices.
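# The shard count follows directly from the vocabulary and shard sizes; a quick sanity check of the arithmetic:

```python
# A 5120-word vocabulary with shard_size 512 gives a 10x10 grid
# of submatrices, i.e. 100 shards in total.
vocab_size = 5120
shard_size = 512
shards_per_side = vocab_size // shard_size
num_shards = shards_per_side ** 2
print(num_shards)  # 100
```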
#
# All this information is stored in the output folder we specified above. It consists of 100 files, one per shard/sub-matrix and a few additional files:
# + id="3eiJEg8mSOlJ" colab_type="code" colab={}
# %ls {coocs_path} | head -n 10
# + [markdown] id="AMm9rXSbiU-8" colab_type="text"
#
# The `prep` step does the following:
# - it uses a basic, white space, tokenization to get sequences of tokens
# - in a first pass through the corpus, it counts all tokens and keeps only those that have a minimum frequency (5) in the corpus. It then truncates this list to the largest multiple of the `shard_size`. The tokens that are kept form the **vocabulary** with size $v = |V|$.
# - on a second pass through the corpus, it uses a sliding window to count co-occurrences between the focus token and the context tokens (similar to `word2vec`). The result is a sparse co-occurrence matrix of size $v \times v$.
# - for easier storage and manipulation, Swivel uses *sharding* to split the co-occurrence matrix into sub-matrices of size $s \times s$, where $s$ is the `shard_size`.
# 
# - store the sharded co-occurrence submatrices as [protobuf files](https://developers.google.com/protocol-buffers/).
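# The second pass can be sketched with a toy sliding-window counter (a simplification: Swivel's actual `prep` differs in details such as count weighting and sharding):

```python
from collections import Counter

def cooccurrences(tokens, window=2):
    # Count (focus, context) pairs within a symmetric sliding window
    counts = Counter()
    for i, focus in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[(focus, tokens[j])] += 1
    return counts

toks = "the cat sat on the mat".split()
counts = cooccurrences(toks, window=1)
print(counts[("cat", "sat")])  # 1
print(counts[("the", "cat")])  # 1
```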
# + [markdown] id="o9tIXHWbiU-9" colab_type="text"
# ## Learn embeddings from co-occurrence matrix
# With the sharded co-occurrence matrix it is now possible to learn embeddings:
# - the input is the folder with the co-occurrence matrix (protobuf files with the sparse matrix).
# - `submatrix_` rows and columns need to be the same size as the `shard_size` used in the `prep` step.
# + id="nyk4vjv6iU--" colab_type="code" colab={}
vec_path = '/content/umbc/vec/t_5K/'
# !python /content/tutorial/scripts/swivel/swivel.py --input_base_path={coocs_path} \
# --output_base_path={vec_path} \
# --num_epochs=40 --dim=150 \
# --submatrix_rows={shard_size} --submatrix_cols={shard_size}
# + [markdown] id="oEDrblTXiU_B" colab_type="text"
# This should take a few minutes, depending on your machine.
# The result is a list of files in the specified output folder, including:
# - checkpoints of the model
# - `tsv` files for the column and row embeddings.
# + id="_BOhXpZiiU_C" colab_type="code" colab={}
# %ls {vec_path}
# + [markdown] id="CbZo4fB9WaQ6" colab_type="text"
# One thing missing from the output folder is a file with just the vocabulary, which we'll need later on. We copy this file from the folder with the co-occurrence matrix.
# + id="lCOwkAJHWN14" colab_type="code" colab={}
# %cp {coocs_path}/row_vocab.txt {vec_path}vocab.txt
# + [markdown] id="xI0JCPPpiU_G" colab_type="text"
# ### Convert `tsv` files to `bin` file
# The `tsv` files are easy to inspect, but they take too much space and they are slow to load since we need to convert the different values to floats and pack them as vectors. Swivel offers a utility to convert the `tsv` files into a `bin`ary format. At the same time it combines the column and row embeddings into a single space (it simply adds the two vectors for each word in the vocabulary).
# + id="qn0FHiYJiU_H" colab_type="code" colab={}
# !python /content/tutorial/scripts/swivel/text2bin.py --vocab={vec_path}vocab.txt --output={vec_path}vecs.bin \
# {vec_path}row_embedding.tsv \
# {vec_path}col_embedding.tsv
# + [markdown] id="MrAWhqbViU_M" colab_type="text"
# This adds the `vocab.txt` and `vecs.bin` to the folder with the vectors:
# + id="JX6hOIGqiU_M" colab_type="code" colab={}
# %ls -lah {vec_path}
# + [markdown] id="hY-1lWr3iU_R" colab_type="text"
# ## Read stored binary embeddings and inspect them
#
# Swivel provides the `vecs` library which implements the basic `Vecs` class. It accepts a `vocab_file` and a file for the binary serialization of the vectors (`vecs.bin`).
# + id="MAB_mb9ViU_R" colab_type="code" colab={}
from tutorial.scripts.swivel import vecs
# + [markdown] id="ifovawLuiU_U" colab_type="text"
# ...and we can load existing vectors. We assume you managed to generate the embeddings by following the tutorial up to now. Note that, due to random initialization of weights during the training step, your results may differ from the ones presented below.
# + id="y4l1xtJNiU_W" colab_type="code" colab={}
#uncomment the following two lines if you did not manage to train embeddings above
# #!tar -xzf /content/tutorial/datasamples/umbc_swivel_vec_t_5K.tar.gz -C /
#vec_path = '/content/umbc/vec/t_5K/'
vectors = vecs.Vecs(vec_path + 'vocab.txt',
vec_path + 'vecs.bin')
# + [markdown] id="VjgSQtsPiU_Z" colab_type="text"
# We have extended the standard implementation of `swivel.vecs.Vecs` to include a method `k_neighbors`. It accepts a string with the word and an optional `k` parameter, that defaults to $10$. It returns a list of python dictionaries with fields:
# * `word`: a word in the vocabulary that is near the input word
# * `cosim`: the cosine similarity between the input word and the near word.
#
# It's easier to display the results as a `pandas` table:
# + id="2iZ9bExziU_e" colab_type="code" colab={}
import pandas as pd
pd.DataFrame(vectors.k_neighbors('california'))
# + id="RfCmWTFsiU_i" colab_type="code" colab={}
pd.DataFrame(vectors.k_neighbors('knowledge'))
# + id="iwO2zXERiU_o" colab_type="code" colab={}
pd.DataFrame(vectors.k_neighbors('semantic'))
# + id="-Ze0ApZMluvX" colab_type="code" colab={}
pd.DataFrame(vectors.k_neighbors('conference'))
# + [markdown] id="SzGSaASBfeLU" colab_type="text"
# The cells above should display results similar to the following (for the words *california* and *conference*):
#
# | cosim | word | cosim | word |
# | ------ | ---------- | ------ | ------------- |
# | 1.0000 | california | 1.0000 | conference |
# | 0.5060 | university | 0.4320 | international |
# | 0.4239 | berkeley | 0.4063 | secretariat |
# | 0.4103 | barbara | 0.3857 | jcdl |
# | 0.3941 | santa | 0.3798 | annual |
# | 0.3899 | southern | 0.3708 | conferences |
# | 0.3673 | uc | 0.3705 | forum |
# | 0.3542 | johns | 0.3629 | presentations |
# | 0.3396 | indiana | 0.3601 | workshop |
# | 0.3388 | melvy | 0.3580 | ... |
#
#
# + [markdown] id="AAqlranRiOE9" colab_type="text"
# ### Compound words
#
# Note that the vocabulary only has single words, i.e. compound words are not present:
# + id="3Mcdr8dIiU_v" colab_type="code" colab={}
pd.DataFrame(vectors.k_neighbors('semantic web'))
# + [markdown] id="SNc2267PknJY" colab_type="text"
# A common way to work around this issue is to use the average vector of the two individual words (of course this only works if both words are in the vocabulary):
# + id="Aj0fNR6Li5mh" colab_type="code" colab={}
semantic_vec = vectors.lookup('semantic')
web_vec = vectors.lookup('web')
semweb_vec = (semantic_vec + web_vec)/2
pd.DataFrame(vectors.k_neighbors(semweb_vec))
# + [markdown] id="bJAlDOIZiU_z" colab_type="text"
# ## Conclusion
#
# In this notebook, we used swivel to generate word embeddings and we explored the resulting embeddings using `k neighbors` exploration.
# + [markdown] id="tvn6g8WCLieL" colab_type="text"
# # Optional Exercise
#
# ## Create word embeddings for texts from Project Gutenberg
#
# ### Download and pre-process the corpus
#
# You can try generating new embeddings using a small `gutenberg` corpus that is provided as part of the NLTK library. It consists of a few public-domain works published as part of Project Gutenberg.
#
# First, we download the dataset into our environment:
# + id="3CQi8I19iU-x" colab_type="code" colab={}
import os
import nltk
nltk.download('gutenberg')
# %ls '/root/nltk_data/corpora/gutenberg/'
# + [markdown] id="hrh3N-bq72We" colab_type="text"
# As you can see, the corpus consists of various books, one per file. Most word2vec implementations require you to pass a corpus as a single text file. We can issue a few commands to do this by concatenating all the `txt` files in the folder into a single `all.txt` file, which we will use later on.
#
# A couple of the files are encoded using iso-8859-1 or binary encodings, which will cause trouble later on, so we rename them to avoid including them into our corpus.
# + id="6UxGMz7ONMnf" colab_type="code" colab={}
# %cd /root/nltk_data/corpora/gutenberg/
# avoid including books with incorrect encoding
# !mv chesterton-ball.txt chesterton-ball.badenc-txt
# !mv milton-paradise.txt milton-paradise.badenc-txt
# !mv shakespeare-caesar.txt shakespeare-caesar.badenc-txt
# now concatenate all other files into 'all.txt'
# !cat *.txt >> all.txt
# print result
# %ls -lah '/root/nltk_data/corpora/gutenberg/all.txt'
# go back to standard folder
# %cd /content/
# + [markdown] id="lf2qQSE98VuR" colab_type="text"
# The full dataset is about 11MB.
# + [markdown] id="XfD_c2yYOMIt" colab_type="text"
# ### Learn embeddings
#
# Run the steps described above to generate embeddings for the gutenberg dataset.
# + id="wbAI-zc0OXL9" colab_type="code" colab={}
# + [markdown] id="Sd4bvnH6OZic" colab_type="text"
# ### Inspect embeddings
# Use methods similar to the ones shown above to get a feeling for whether the generated embeddings have captured interesting relations between words.
# + id="ZksSgO2GOcnP" colab_type="code" colab={}
01_capturing_word_embeddings.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ChanceDurr/DS-Unit-2-Applied-Modeling/blob/master/module4-model-interpretation/Chance_model_interpretation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="SpIFGQclBfzO" colab_type="text"
# # Import packages and csv's
# + id="AkqbWs_fgaGZ" colab_type="code" outputId="bc812d8e-9775-44c7-a5f0-afbad5a55109" colab={"base_uri": "https://localhost:8080/", "height": 880}
import pandas as pd
from glob import glob
import numpy as np
import seaborn as sns
from sklearn.metrics import mean_squared_log_error
from sklearn.model_selection import train_test_split
# !pip install category_encoders eli5 pdpbox shap
from category_encoders import OrdinalEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
import matplotlib.pyplot as plt
import xgboost as xgb
from xgboost import XGBRegressor
import eli5
from eli5.sklearn import PermutationImportance
# + id="Y8lv4uGMajum" colab_type="code" outputId="735b1777-be58-4298-af44-9ce3fda47430" colab={"base_uri": "https://localhost:8080/", "height": 300}
# !wget https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/caterpillar/caterpillar-tube-pricing.zip
# + id="zUbU9olfelH1" colab_type="code" outputId="3c5efac5-963c-41b8-939d-c50764034e4f" colab={"base_uri": "https://localhost:8080/", "height": 205}
# !unzip caterpillar-tube-pricing.zip
# + id="37ighgEwemse" colab_type="code" outputId="c0f32be7-7840-4970-df7f-043c6a28948d" colab={"base_uri": "https://localhost:8080/", "height": 512}
# !unzip data.zip
# + id="xPsrKorheuoG" colab_type="code" colab={}
def rmsle(y_true, y_pred):
return np.sqrt(mean_squared_log_error(y_true, y_pred))
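# A quick sanity check of the metric with a value we can compute by hand; `rmsle_np` re-implements the same formula with plain numpy so it runs without sklearn:

```python
import numpy as np

def rmsle_np(y_true, y_pred):
    # Same formula as the sklearn-based helper: RMS of log1p differences
    return np.sqrt(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))

# log1p(e - 1) = 1 and log1p(0) = 0, so the RMSLE here is exactly 1
y_true = np.array([np.e - 1.0])
y_pred = np.array([0.0])
print(rmsle_np(y_true, y_pred))  # 1.0
```

# Note the squared log difference is symmetric in its arguments, which is why calls like `rmsle(y_pred, y_true)` below give the same value either way round.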
# + id="QWSsL-ndeogO" colab_type="code" outputId="d2894226-efbc-4227-8988-eddca654bc5b" colab={"base_uri": "https://localhost:8080/", "height": 407}
for path in glob('competition_data/*.csv'):
df = pd.read_csv(path)
print(path, df.shape)
# + id="PaZEBy-kg0Sj" colab_type="code" colab={}
train_set = pd.read_csv('competition_data/train_set.csv')
test = pd.read_csv('competition_data/test_set.csv')
# + [markdown] id="FGU5gWSIBk_C" colab_type="text"
# #Train / Val Split
# + id="sX65m7XA943M" colab_type="code" colab={}
train_set['quote_date'] = pd.to_datetime(train_set['quote_date'], infer_datetime_format=True)
# + id="IkEuomEW-U3U" colab_type="code" outputId="2968aed8-adac-4eff-82a1-94b04ce5c4d4" colab={"base_uri": "https://localhost:8080/", "height": 92}
assemblies = train_set['tube_assembly_id'].unique()
train_tube_assemblies, val_tube_assemblies = train_test_split(
assemblies, random_state=47)
print(train_tube_assemblies.shape)
print(val_tube_assemblies.shape)
# + id="-Yb50q7_-Zaa" colab_type="code" colab={}
train = train_set[train_set['tube_assembly_id'].isin(train_tube_assemblies)]
val = train_set[train_set['tube_assembly_id'].isin(val_tube_assemblies)]
# + [markdown] id="JXKaX2n8C9Y4" colab_type="text"
# #Validation RMSLE, Mean Baseline
# + id="Z4NkgSY9BJ0u" colab_type="code" outputId="e0006297-5a58-4a07-a25a-7013c093243e" colab={"base_uri": "https://localhost:8080/", "height": 76}
target = 'cost'
y_true = val[target]
y_pred = np.full_like(y_true, fill_value=train[target].mean())
rmsle(y_pred, y_true)
# + [markdown] id="LQkAn75QDrjh" colab_type="text"
# # Random Forest Regressor
# + id="Fp5E0dI8CSYq" colab_type="code" outputId="cf5e3fc3-a3c8-4a9f-8910-dbc33a9fc63b" colab={"base_uri": "https://localhost:8080/", "height": 76}
from sklearn.ensemble import RandomForestRegressor
features = ['quantity']
target = 'cost'
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
model = RandomForestRegressor(n_estimators=100, random_state=47)
model.fit(X_train, y_train)
y_pred = model.predict(X_val)
rmsle(y_pred, y_val)
# + id="Z5f8Z_tODWV3" colab_type="code" outputId="318d7d61-c436-45b0-f0a9-60d0478f2d5e" colab={"base_uri": "https://localhost:8080/", "height": 96}
model = RandomForestRegressor(n_estimators=100, random_state=47)
model.fit(X_train, np.log1p(y_train))
y_pred = model.predict(X_val)
rmsle(np.expm1(y_pred), y_val)
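# Training on `np.log1p(cost)` and mapping predictions back with `np.expm1` works because the two functions are exact inverses:

```python
import numpy as np

cost = np.array([1.0, 10.0, 100.0])
log_cost = np.log1p(cost)       # the target the model is trained on
recovered = np.expm1(log_cost)  # mapping predictions back to dollars

print(np.allclose(recovered, cost))  # True
```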
# + [markdown] id="umt9GM-n7ymZ" colab_type="text"
# # XGB
# + id="9JxgdqHfF9yI" colab_type="code" colab={}
tube = pd.read_csv('competition_data/tube.csv')
# + id="XbDR0-GYHsa_" colab_type="code" colab={}
# Merge tube df on assembly id
train = pd.merge(train, tube, how='inner', on='tube_assembly_id')
val = pd.merge(val, tube, how='inner', on='tube_assembly_id')
test = pd.merge(test, tube, how='inner', on='tube_assembly_id')
# + id="NL4HQRFOKZXB" colab_type="code" outputId="49034f41-f515-45d4-f283-94766430255a" colab={"base_uri": "https://localhost:8080/", "height": 336}
train.head()
# + id="NfjNvWN8H9dx" colab_type="code" colab={}
features = ['quantity', 'length', 'num_bends',
'bend_radius', 'diameter', 'end_a',
'end_x', 'material_id', 'wall']
target = 'cost'
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
# + id="6qV9MdrN7T7g" colab_type="code" outputId="3ae90602-2bb3-40ea-b68d-2bfb4072a249" colab={"base_uri": "https://localhost:8080/", "height": 96}
model = XGBRegressor(n_estimators=100, random_state=47)
pipe = make_pipeline(OrdinalEncoder(), model)
pipe.fit(X_train, np.log1p(y_train))
y_pred = pipe.predict(X_val)
# + id="WH7zE94K6yDB" colab_type="code" outputId="f97e2646-079e-4688-8c98-dfcfe03ce1e9" colab={"base_uri": "https://localhost:8080/", "height": 76}
rmsle(np.expm1(y_pred), y_val)
# + id="_dF4DPVbIW-Y" colab_type="code" outputId="2ad13ebb-8d18-4ec4-ff3a-7196c9e5ce46" colab={"base_uri": "https://localhost:8080/", "height": 309}
feature_importances = pd.Series(model.feature_importances_, features)
feature_importances.sort_values().plot.barh();
# + [markdown] id="BCfuFTfq76yb" colab_type="text"
# #Hyper Param/ CV
# + id="Uk9RLvIO7I0L" colab_type="code" colab={}
X_test = test
# + id="nxbhg7kA73xL" colab_type="code" outputId="20075453-1e6e-4a43-89ce-2f6383b9999f" colab={"base_uri": "https://localhost:8080/", "height": 252}
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import make_pipeline
features = ['quantity', 'length', 'num_bends',
'bend_radius', 'diameter', 'end_a',
'end_x', 'material_id', 'wall']
target = 'cost'
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
y_train_log = np.log1p(y_train)
X_test = test
groups = train['tube_assembly_id']
pipeline = make_pipeline(
OrdinalEncoder(),
XGBRegressor(random_state=47)
)
param_distributions = {
'xgbregressor__n_estimators': [x for x in range(450, 500, 1)]
}
search = RandomizedSearchCV(
pipeline,
param_distributions = param_distributions,
n_iter = 5,
cv = 5,
scoring = 'neg_mean_squared_error',
verbose=10,
return_train_score = True,
n_jobs = -1
)
search.fit(X_train, y_train_log, groups=groups);
# + id="LP7F0d2N-ADe" colab_type="code" outputId="cee931c4-9300-47ae-ff13-4df57ac4295f" colab={"base_uri": "https://localhost:8080/", "height": 92}
print(f'Best Hyperparameters: {search.best_params_}')
print(f'Cross_validation RMSLE: {np.sqrt(-search.best_score_)}')
# + [markdown] id="z8ixCH6AwLI4" colab_type="text"
# # Permutation
#
# + id="rxuax81gwLR4" colab_type="code" colab={}
import eli5
from eli5.sklearn import PermutationImportance
# + id="VzrirmhYw-YQ" colab_type="code" outputId="afe6d73b-45f1-4a28-f29d-731f11b27cd5" colab={"base_uri": "https://localhost:8080/", "height": 233}
permuter = PermutationImportance(model, scoring='neg_mean_squared_error',
cv='prefit', n_iter=2, random_state=42)
encoder = OrdinalEncoder()
X_val_encoded = encoder.fit_transform(X_val)
y_val_log = np.log1p(y_val)
permuter.fit(X_val_encoded, y_val_log)
feature_names = X_val_encoded.columns.tolist()
eli5.show_weights(permuter, top=None, feature_names=feature_names)
# + [markdown] id="mP5Whna3xz18" colab_type="text"
# # PDP
# + id="VJoZa35Qx0eJ" colab_type="code" outputId="bd320e02-1ae1-49e2-a7be-d69853481318" colab={"base_uri": "https://localhost:8080/", "height": 623}
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'quantity'
isolated = pdp_isolate(
model=model,
dataset=X_val_encoded,
model_features=X_val_encoded.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature);
# + id="A_K1aHqAyQY7" colab_type="code" outputId="18a290e0-e016-4bc1-ebab-680883821fae" colab={"base_uri": "https://localhost:8080/", "height": 623}
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'diameter'
isolated = pdp_isolate(
model=model,
dataset=X_val_encoded,
model_features=X_val_encoded.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature);
# + [markdown] id="unMVrIBG74rZ" colab_type="text"
# #Test
# + id="YpIAl1IBPRDL" colab_type="code" colab={}
y_pred = pipe.predict(test[features])
# + id="_07maMUTQjNy" colab_type="code" outputId="232dfaca-c00b-4ce4-c700-545f3373928d" colab={"base_uri": "https://localhost:8080/", "height": 344}
test.describe()
# + id="mWcnP6kMP292" colab_type="code" colab={}
sub = pd.DataFrame(data = {
'id': test['id'],
'cost': np.expm1(y_pred)
})
sub.to_csv('submission.csv', index=False)
# + id="-jIQEXsIQmSq" colab_type="code" outputId="395b8a79-9134-44ab-bf5c-a9c770cf3b8e" colab={"base_uri": "https://localhost:8080/", "height": 248}
sub.head()
# + [markdown] id="DMRWv60f46NP" colab_type="text"
# # Shapley
# + id="EqxakS01HECB" colab_type="code" outputId="b0509722-dae2-4b4a-9792-18c7710efe40" colab={"base_uri": "https://localhost:8080/", "height": 407}
X_test = X_test[features].copy()  # copy to avoid a SettingWithCopyWarning
X_test['predictions'] = y_pred
X_test.head(10)
# + id="v3yxbYmyHOXP" colab_type="code" outputId="426f4ff2-e129-42cb-a2ce-c7e17b10333d" colab={"base_uri": "https://localhost:8080/", "height": 199}
processor = make_pipeline(OrdinalEncoder())
X_train_processed = processor.fit_transform(X_train)
model = XGBRegressor(n_estimators=459, random_state=47)
model.fit(X_train_processed, y_train_log)
# + id="5FoU0IkdMWpQ" colab_type="code" colab={}
data_for_prediction = X_test[X_test.index == 30212]
# + id="hoLjZ8bIMxdz" colab_type="code" outputId="87941907-bd78-4d0a-d99d-613aab060372" colab={"base_uri": "https://localhost:8080/", "height": 1000}
import shap
shap.initjs()
# encode the selected row with the same (already fitted) preprocessing pipeline
data_for_prediction_processed = processor.transform(data_for_prediction[features])
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data_for_prediction_processed)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction_processed)
# + id="jmcNQQvQNSrj" colab_type="code" colab={}
module4-model-interpretation/Chance_model_interpretation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <center> A Quantum distance-based classifier </center>
# ## <center> <NAME>, TNO </center> ##
# <a name="contents"></a>
# # Table of Contents
# * [Introduction](#introduction)
# * [Problem](#problem)
# * [Amplitude Encoding](#amplitude)
# * [Data preprocessing](#dataset)
# * [Quantum algorithm](#algorithm)
# * [Conclusion and further work](#conclusion)
#
#
## Import external python file
import nbimporter
import numpy as np
from data_plotter import get_bin, DataPlotter # for easier plotting
DataPlotter = DataPlotter()
# $$ \newcommand{\ket}[1]{\left|{#1}\right\rangle} $$
#
# <a name="introduction"></a>
# # Introduction #
#
#
# Consider the following scatter plot of the first two flowers in [the famous Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set):
#
# <img src="images/plot.png">
#
#
# Notice that just two features, the sepal width and the sepal length, divide the two different Iris species into different regions in the plot. This gives rise to the question: given only the sepal length and sepal width of a flower, can we classify the flower by its correct species? This type of problem, also known as [statistical classification](https://en.wikipedia.org/wiki/Statistical_classification), is a common problem in machine learning. In general, a classifier is constructed by letting it learn a function which gives the desired output based on a sufficient amount of data. This is called supervised learning, as the desired outputs (the labels of the data points) are known. After learning, the classifier can classify an unlabeled data point based on the learned function. The quality of a classifier improves with the size of the training dataset it can learn from. The true power of this quantum classifier becomes clear when using extremely large data sets.
# In this notebook we will describe how to build a distance-based classifier on the Quantum Inspire using amplitude encoding. It turns out that, once the system is initialized in the desired state, regardless of the size of training data, the actual algorithm consists of only 3 actions, one Hadamard gate and two measurements. This has huge implications for the scalability of this problem for large data sets. Using only 4 qubits we show how to encode two data points, both of a different class, to predict the label for a third data point. In this notebook we will demonstrate how to use the Quantum Inspire SDK using QASM-code, we will also provide the code to obtain the same results for the ProjectQ framework.
#
#
#
# [Back to Table of Contents](#contents)
# <a name="problem"></a>
# # Problem #
# We define the following binary classification problem: Given the data set
# $$\mathcal{D} = \Big\{ ({\bf x}_1, y_1), \ldots ({\bf x}_M , y_M) \Big\},$$
# consisting of $M$ data points $x_i\in\mathbb{R}^n$ and corresponding labels $y_i\in \{-1, 1\}$, give a prediction for the label $\tilde{y}$ corresponding to an unlabeled data point $\bf\tilde{x}$. The classifier we shall implement with our quantum circuit is a distance-based classifier and is given by
# \begin{equation}\newcommand{\sgn}{{\rm sgn}}\newcommand{\abs}[1]{\left\lvert#1\right\rvert}\label{eq:classifier} \tilde{y} = \sgn\left(\sum_{m=0}^{M-1} y_m \left[1-\frac{1}{4M}\abs{{\bf\tilde{x}}-{\bf x}_m}^2\right]\right). \hspace{3cm} (1)\end{equation}
#
# This is a typical $M$-nearest-neighbor model, where each data point is given a weight related to the distance measure. To implement this classifier on a quantum computer, we need a way to encode the information of the training data set in a quantum state. We do this by first encoding the training data in the amplitudes of a quantum system; the amplitudes are then manipulated by quantum gates such that we obtain a result representing the above classifier. Encoding input features in the amplitudes of a quantum system is known as amplitude encoding.
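# Before turning to the quantum implementation, classifier (1) can be evaluated classically. A minimal numpy sketch with two made-up unit-norm training points of opposite label:

```python
import numpy as np

def classify(x_tilde, X, y):
    # Equation (1): sgn( sum_m y_m * (1 - |x_tilde - x_m|^2 / (4M)) )
    M = len(X)
    weights = 1.0 - np.sum((x_tilde - X) ** 2, axis=1) / (4.0 * M)
    return int(np.sign(np.sum(y * weights)))

# Two unit-norm training points with opposite labels
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1, -1])

# A point coinciding with the first training point gets its label
print(classify(np.array([1.0, 0.0]), X, y))  # 1
```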
#
#
# [Back to Contents](#contents)
# <a name="amplitude"></a>
# # Amplitude encoding #
# Suppose we want to encode a classical vector $\bf{x}\in\mathbb{R}^N$ in the amplitudes of a quantum system. We assume $N=2^n$ and that $\bf{x}$ is normalised to unit length, meaning ${\bf{x}^T{x}}=1$. We can encode $\bf{x}$ in the amplitudes of an $n$-qubit system in the following way:
# \begin{equation}
# {\bf x} = \begin{pmatrix}x^1 \\ \vdots \\ x^N\end{pmatrix} \Longleftrightarrow{} \ket{\psi_{{\bf x}}} = \sum_{i=0}^{N-1}x^i\ket{i},
# \end{equation}
# where $\ket{i}$ is the $i^{th}$ entry of the computational basis $\left\{\ket{0\ldots0},\ldots,\ket{1\ldots1}\right\}$. By applying an efficient quantum algorithm (resources growing polynomially in the number of qubits $n$), one can manipulate the $2^n$ amplitudes super-efficiently, that is, in $\mathcal{O}\left(\log N\right)$. This follows as manipulating all amplitudes requires an operation on each of the $n = \mathcal{O}\left(\log N\right)$ qubits. For algorithms to be truly super-efficient, the phase in which the data is encoded must also be at most polynomial in the number of qubits. The idea of quantum memory, sometimes referred to as quantum RAM (QRAM), is a particularly interesting one. Suppose we first run some quantum algorithm, for example in quantum chemistry, with some resulting quantum states as output. If these states could be fed directly into a quantum classifier, the encoding phase would no longer be needed. Finding efficient data-encoding schemes is still a topic of active research. We will restrict ourselves here to the implementation of the algorithm; more details can be found in the references.
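# As a small illustration (the helper below is our own, not part of the SDK): amplitude encoding of a classical vector amounts to normalising it to unit length and reading off its entries as the amplitudes of the state.

```python
import numpy as np

def amplitude_encode(x):
    """Return the amplitudes of |psi_x> for a classical vector of length 2^n."""
    x = np.asarray(x, dtype=float)
    n = np.log2(len(x))
    assert n.is_integer(), "vector length must be a power of two"
    return x / np.linalg.norm(x)  # enforce x^T x = 1
```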
#
# <a name="state"></a>
# The algorithm requires the $n$-qubit quantum system to be in the following state
# \begin{equation}\label{eq:prepstate}
# \ket{\mathcal{D}} = \frac{1}{\sqrt{2M}} \sum_{m=0}^{M-1} \ket{m}\Big(\ket{0}\ket{\psi_{\bf\tilde{{x}}}} + \ket{1}\ket{\psi_{\bf{x}_m}}\Big)\ket{y_m}.\hspace{3cm} (2)
# \end{equation}
# Here $\ket{m}$ is the $m^{th}$ state of the computational basis used to keep track of the $m^{th}$ training input. The second register is a single ancillary qubit entangled with the third register. The excited state of the ancillary qubit is entangled with the $m^{th}$ training state $\ket{\psi_{{x}_m}}$, while the ground state is entangled with the new input state $\ket{\psi_{\tilde{x}}}$. The last register encodes the label of the $m^{th}$ training data point by
# \begin{equation}
# \begin{split}
# y_m = -1 \Longleftrightarrow& \ket{y_m} = \ket{0},\\
# y_m = 1 \Longleftrightarrow& \ket{y_m} = \ket{1}.
# \end{split}
# \end{equation}
# Once in this state the algorithm only consists of the following three operations:
#
# 1. Apply a Hadamard gate on the second register to obtain
#
# $$\frac{1}{2\sqrt{M}} \sum_{m=0}^{M-1} \ket{m}\Big(\ket{0}\ket{\psi_{\bf\tilde{x}+x_m}} + \ket{1}\ket{\psi_{\bf\tilde{x}-x_m}}\Big)\ket{y_m},$$
#
# where $\ket{\psi_{\bf\tilde{{x}}\pm{x}_m}} = \ket{\psi_{\tilde{\bf{x}}}}\pm \ket{\psi_{\bf{x}_m}}$.
#
# 2. Measure the second qubit. We restart the algorithm if we measure a $\ket{1}$ and only continue if we are in the $\ket{0}$ branch. We continue the algorithm with a probability $p_{acc} = \frac{1}{4M}\sum_{m=0}^{M-1}\abs{{\bf\tilde{x}}+{\bf x}_m}^2$; for standardised random data this is usually around $0.5$. The resulting state is given by
#
# \begin{equation}
# \frac{1}{2\sqrt{Mp_{acc}}}\sum_{m=0}^{M-1}\sum_{i=0}^{N-1} \ket{m}\ket{0}\left({\tilde{x}}^i + x_m^i\right)\ket{i}\ket{y_m}.
# \end{equation}
#
# 3. Measure the last qubit $\ket{y_m}$. The probability that we measure outcome zero is given by
# \begin{equation}
# p(q_4=0) = \frac{1}{4Mp_{acc}}\sum_{m|y_m=-1}\abs{{\bf\tilde{x}}+{\bf x}_m}^2.
# \end{equation}
#
# In the special case where the amount of training data for both labels is equal, this last measurement relates to the classifier described in the previous section by
# \begin{equation}
# \tilde{y} = \left\{
# \begin{array}{lr}
# -1 & : p(q_4 = 0 ) > p(q_4 = 1)\\
# +1 & : p(q_4 = 0 ) < p(q_4 = 1)
# \end{array}
# \right.
# \end{equation}
# By setting $\tilde{y}$ to be the most likely outcome of many measurement shots, we obtain the desired distance-based classifier.
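# The three steps above can be mimicked with a small classical statevector simulation. The sketch below is our own (names and register layout $\ket{m}\,\ket{\text{ancilla}}\,\ket{i}\,\ket{y_m}$ follow the description above, with labels restricted to $\pm 1$):

```python
import numpy as np

def simulate_classifier(x_tilde, xs, ys):
    """Classically simulate state (2) followed by the three algorithm steps."""
    M, N = len(xs), len(x_tilde)
    # Build state (2): axes are (index m, ancilla, feature i, label qubit)
    state = np.zeros((M, 2, N, 2))
    for m, (x, y) in enumerate(zip(xs, ys)):
        lbl = 0 if y == -1 else 1
        state[m, 0, :, lbl] = x_tilde   # ancilla |0>: test point
        state[m, 1, :, lbl] = x         # ancilla |1>: training point
    state /= np.sqrt(2 * M)
    # Step 1: Hadamard on the ancilla; keep the |0> branch amplitudes
    zero_branch = (state[:, 0] + state[:, 1]) / np.sqrt(2)
    # Step 2: post-select on measuring the ancilla in |0>
    p_acc = np.sum(zero_branch**2)
    zero_branch /= np.sqrt(p_acc)
    # Step 3: probability of measuring the label qubit as |0> (i.e. y = -1)
    p0 = np.sum(zero_branch[:, :, 0]**2)
    return -1 if p0 > 0.5 else 1
```

# With the example data used later in this notebook this reproduces the expected prediction $\tilde{y} = -1$.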
#
#
# [Back to Table of Contents](#contents)
# <a name="dataset"></a>
# # Data preprocessing #
# In the previous section we saw that for amplitude encoding we need a data set which is normalised. Luckily, it is always possible to bring data to this desired form with some data transformations. Firstly, we standardise the data to have zero mean and unit variance, then we normalise the data to have unit length. Both these steps are common methods in machine learning. Effectively, we only have to consider the angle between different data features.
#
# To illustrate this procedure we apply it to the first two features of the famous Iris data set:
#
# +
# Plot the data
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
data = [el[0:101] for el in features][0:2] # Select only the first two features of the dataset
half_len_data = len(data[0]) // 2
iris_setosa = [el[0:half_len_data] for el in data[0:2]]
iris_versicolor = [el[half_len_data:-1] for el in data[0:2]]
DataPlotter.plot_original_data(iris_setosa, iris_versicolor); # Function to plot the data
# +
# Rescale the data
from sklearn import preprocessing # Module contains method to rescale data to have zero mean and unit variance
# Rescale whole data-set to have zero mean and unit variance
features_scaled = [preprocessing.scale(el) for el in data[0:2]]
iris_setosa_scaled = [el[0:half_len_data] for el in features_scaled]
iris_versicolor_scaled = [el[half_len_data:-1] for el in features_scaled]
DataPlotter.plot_standardised_data(iris_setosa_scaled, iris_versicolor_scaled); # Function to plot the data
# +
# Normalise the data
def normalise_data(arr1, arr2):
"""Normalise data to unit length
input: two array same length
output: normalised arrays
"""
for idx in range(len(arr1)):
norm = (arr1[idx]**2 + arr2[idx]**2)**(1 / 2)
arr1[idx] = arr1[idx] / norm
arr2[idx] = arr2[idx] / norm
return [arr1, arr2]
iris_setosa_normalised = normalise_data(iris_setosa_scaled[0], iris_setosa_scaled[1])
iris_versicolor_normalised = normalise_data(iris_versicolor_scaled[0], iris_versicolor_scaled[1])
# Function to plot the data
DataPlotter.plot_normalised_data(iris_setosa_normalised, iris_versicolor_normalised);
# -
# [Table of Contents](#contents)
# <a name="algorithm"></a>
#
# # Quantum algorithm #
# Now we can start with our quantum algorithm on the Quantum Inspire. We describe how to build the algorithm for the simplest case with only two data points, each with two features, that is $M=N=2$. For this algorithm we need 4 qubits:
# * One qubit for the index register $\ket{m}$
# * One ancillary qubit
# * One qubit to store the information of the two features of the data points
# * One qubit to store the information of the classes of the data points
#
# From the data set described in previous section we pick the following data set $\mathcal{D} = \big\{({\bf x}_1,y_1), ({\bf x}_2, y_2) \big\}$ where:
# * ${\bf x}_1 = (0.9193, 0.3937)$, $y_1 = -1$,
# * ${\bf x}_2 = (0.1411, 0.9899)$, $y_2 = 1$.
#
# We are interested in the label $\tilde{y}$ for the data point ${\bf \tilde{x}} = (0.8670, 0.4984)$.
#
#
# The amplitude encoding of these data points look like
# \begin{equation}
# \begin{split}
# \ket{\psi_{\bf\tilde{x}}} & = 0.8670 \ket{0} + 0.4984\ket{1}, \\
# \ket{\psi_{\bf x_1}} & = 0.9193 \ket{0} + 0.3937\ket{1},\\
# \ket{\psi_{\bf x_2}} & = 0.1411 \ket{0} + 0.9899\ket{1}.
# \end{split}
# \end{equation}
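# A quick numerical check confirms that the three states above are indeed (approximately) normalised:

```python
# Amplitudes of the three encoded states listed above
states = {
    'x_tilde': (0.8670, 0.4984),
    'x_1': (0.9193, 0.3937),
    'x_2': (0.1411, 0.9899),
}
# Each squared norm a^2 + b^2 should be close to 1
norms = {name: a**2 + b**2 for name, (a, b) in states.items()}
```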
#
# Before we can run the actual algorithm we need to bring the system into the desired [initial state (equation 2)](#state), which can be obtained by applying the following combination of gates starting from $\ket{0000}$.
#
# <img src="images/stateprep.png">
#
# * **Part A:** In this part the index register is initialized and the ancilla qubit is brought in the desired state. For this we use the plain QASM language of the Quantum Inspire. Part A consists of two Hadamard gates:
#
def part_a():
qasm_a = """version 1.0
qubits 4
prep_z q[0:3]
.part_a
H q[0:1] #execute Hadamard gate on qubit 0, 1
"""
return qasm_a
# After this step the system is in the state
# $$\ket{\mathcal{D}_A} = \frac{1}{2}\Big(\ket{0}+\ket{1}\Big)\Big(\ket{0}+\ket{1}\Big)\ket{0}\ket{0} $$
#
# * **Part B:** In this part we encode the unlabeled data point $\tilde{x}$ by making use of a controlled rotation. We entangle the third qubit with the ancillary qubit. The angle $\theta$ of the rotation should be chosen such that $\tilde{x}=R_y(\theta)\ket{0}$. By the definition of $R_y$ we have
# $$ R_y(\theta)\ket{0} = \cos\left(\frac{\theta}{2}\right)\ket{0} + \sin\left(\frac{\theta}{2}\right)\ket{1}.$$
# Therefore, the angle needed to rotate to the state $\psi=a\ket{0} + b\ket{1}$ is given by $\theta = 2\cos^{-1}(a)\cdot sign(b)$.
# Quantum Inspire does not directly support controlled-$R_y$ gates; however, we can construct one from other gates as shown in the figure below. In these pictures $k$ stands for the angle used in the $R_y$ rotation. <img src="images/partb.png">
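# The angle formula can be verified numerically. The two helpers below are our own illustration (not SDK functions): one computes $\theta$ from the target amplitudes, the other evaluates $R_y(\theta)\ket{0}$.

```python
import numpy as np

def ry_angle(a, b):
    """theta such that Ry(theta)|0> = a|0> + b|1>, for real a, b with a^2 + b^2 = 1."""
    return 2 * np.arccos(a) * np.sign(b)

def ry_state(theta):
    """Amplitudes of Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])
```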
def part_b(angle):
half_angle = angle / 2
qasm_b = """.part_b # encode test value x^tilde
CNOT q[1], q[2]
Ry q[2], -{0}
CNOT q[1], q[2]
Ry q[2], {0}
X q[1]
""".format(half_angle)
return qasm_b
# After this step the system is in the state
# $$\ket{\mathcal{D}_B} = \frac{1}{2} \Big(\ket{0}+\ket{1}\Big)\Big(\ket{0}\ket{\tilde{{x}}}+\ket{1}\ket{0}\Big)\ket{0}$$
#
# * **Part C:** In this part we encode the first data point $x_1$. The rotation angle $\theta$ is such that $\ket{x_1} = R_y(\theta)\ket{0}$. Now a double controlled-$R_y$ rotation is needed, and similar to Part B, we construct it from other gates as shown in the figure below. <img src="images/partc.png">
#
def part_c(angle):
quarter_angle = angle / 4
qasm_c = """.part_c # encode training x^0 value
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], {0}
CNOT q[0],q[2]
Ry q[2], -{0}
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], -{0}
CNOT q[0],q[2]
Ry q[2], {0}
X q[0]
""".format(quarter_angle)
return qasm_c
# After this step the system is in the state
# $$\ket{\mathcal{D}_C} = \frac{1}{2}\Bigg(\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big) + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{0}\Big)\Bigg) \ket{0}$$
# * **Part D:** This part is almost an exact copy of part C, however now with $\theta$ chosen such that $\ket{{x}_2} = R_y(\theta)\ket{0}$.
#
def part_d(angle):
quarter_angle = angle / 4
qasm_d = """.part_d # encode training x^1 value
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], {0}
CNOT q[0],q[2]
Ry q[2], -{0}
toffoli q[0],q[1],q[2]
CNOT q[0],q[2]
Ry q[2], -{0}
CNOT q[0],q[2]
Ry q[2], {0}
""".format(quarter_angle)
return qasm_d
# After this step the system is in the state
# $$\ket{\mathcal{D}_D} = \frac{1}{2}\Bigg(\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big) + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x}_2}\Big)\Bigg) \ket{0}$$
# * **Part E:** The last step is to label the last qubit with the correct class; this can be done using a simple CNOT gate between the first and last qubit to obtain the desired initial state
# $$\ket{\mathcal{D}_E} = \frac{1}{2}\Bigg(\ket{0}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x_1}}\Big)\ket{0} + \ket{1}\Big(\ket{0}\ket{\tilde{{x}}} + \ket{1}\ket{{x}_2}\Big)\ket{1}\Bigg).
# $$
def part_e():
qasm_e = """.part_e # encode the labels
CNOT q[0], q[3]
"""
return qasm_e
# ### The actual algorithm
# Once the system is in this initial state, the algorithm itself only consists of one Hadamard gate and two measurements. If the first measurement gives the result $\ket{1}$, we have to abort the algorithm and start over again. However, these results can also easily be filtered out in a post-processing step.
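# Such a post-processing filter might look like the sketch below (our own helper, not part of the SDK). Histogram keys are the integer values of the 4-bit measurement outcomes; the ancilla q[1] corresponds to bit 1 of that integer, so outcomes with that bit set are discarded and the rest are renormalised.

```python
def postselect_histogram(histogram):
    """Discard shots where the ancilla q[1] was measured as 1 and renormalise."""
    kept = {int(k): v for k, v in histogram.items() if not (int(k) >> 1) & 1}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}
```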
def part_f():
qasm_f = """
.part_f
H q[1]
"""
return qasm_f
# The circuit for the whole algorithm now looks like: <img src="images/full_circuit.png">
#
# We can send our QASM code to the Quantum Inspire with the following data points
#
#
# \begin{equation}
# \begin{split}
# \ket{\psi_{\tilde{x}}} & = 0.8670 \ket{0} + 0.4984\ket{1}, \\
# \ket{\psi_{x_1}} & = 0.9193 \ket{0} + 0.3937\ket{1},\\
# \ket{\psi_{x_2}} & = 0.1411 \ket{0} + 0.9899\ket{1}.
# \end{split}
# \end{equation}
#
# +
import os
from getpass import getpass
from coreapi.auth import BasicAuthentication
from quantuminspire.credentials import load_account, get_token_authentication, get_basic_authentication
from quantuminspire.api import QuantumInspireAPI
from math import acos
from math import pi
QI_EMAIL = os.getenv('QI_EMAIL')
QI_PASSWORD = os.getenv('QI_PASSWORD')
QI_URL = os.getenv('API_URL', 'https://api.quantum-inspire.com/')
## input data points:
angle_x_tilde = 2 * acos(0.8670)
angle_x0 = 2 * acos(0.1411)
angle_x1 = 2 * acos(0.9193)
def get_authentication():
""" Gets the authentication for connecting to the Quantum Inspire API."""
token = load_account()
if token is not None:
return get_token_authentication(token)
else:
if QI_EMAIL is None or QI_PASSWORD is None:
print('Enter email')
email = input()
print('Enter password')
            password = getpass()
else:
email, password = QI_EMAIL, QI_PASSWORD
return get_basic_authentication(email, password)
authentication = get_authentication()
qi = QuantumInspireAPI(QI_URL, authentication)
## Build final QASM
final_qasm = part_a() + part_b(angle_x_tilde) + part_c(angle_x0) + part_d(angle_x1) + part_e() + part_f()
backend_type = qi.get_backend_type_by_name('QX single-node simulator')
result = qi.execute_qasm(final_qasm, backend_type=backend_type, number_of_shots=1, full_state_projection=True)
print(result['histogram'])
# +
import matplotlib.pyplot as plt
from collections import OrderedDict

get_bin = lambda x, n: format(x, 'b').zfill(n)  # int -> zero-padded bit string

def bar_plot(result_data):
res = [get_bin(el, 4) for el in range(16)]
prob = [0] * 16
for key, value in result_data['histogram'].items():
prob[int(key)] = value
# Set color=light grey when 2nd qubit = 1
# Set color=blue when 2nd qubit = 0, and last qubit = 1
# Set color=red when 2nd qubit = 0, and last qubit = 0
color_list = [
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1)
]
plt.bar(res, prob, color=color_list)
plt.ylabel('Probability')
plt.title('Results')
plt.ylim(0, 1)
plt.xticks(rotation='vertical')
plt.show()
return prob
prob = bar_plot(result)
# -
# We only consider the events where the second qubit equals 0, that is, we only consider the events in the set $$\{0000, 0001, 0100, 0101, 1000, 1001, 1100, 1101\}$$
#
# The label $\tilde{y}$ is now given by
#
# \begin{equation}
# \tilde{y} = \left\{
# \begin{array}{lr}
# -1 & : \#\{0000, 0001, 0100, 0101\} > \#\{1000, 1001, 1100, 1101\}\\
# +1 & : \#\{1000, 1001, 1100, 1101\} > \#\{0000, 0001, 0100, 0101\}
# \end{array}
# \right.
# \end{equation}
# +
def summarize_results(prob, display=1):
sum_label0 = prob[0] + prob[1] + prob[4] + prob[5]
sum_label1 = prob[8] + prob[9] + prob[12] + prob[13]
def y_tilde():
if sum_label0 > sum_label1:
return 0, ">"
elif sum_label0 < sum_label1:
return 1, "<"
else:
return "undefined", "="
y_tilde_res, sign = y_tilde()
if display:
print("The sum of the events with label 0 is: {}".format(sum_label0))
print("The sum of the events with label 1 is: {}".format(sum_label1))
print("The label for y_tilde is: {} because sum_label0 {} sum_label1".format(y_tilde_res, sign))
return y_tilde_res
summarize_results(prob);
# -
# The following code will randomly pick two training data points and a random test point for the algorithm. We can compare the prediction for the label by the Quantum Inspire with the true label.
# +
from random import sample, randint
from numpy import sign
def grab_random_data():
one_random_index = sample(range(50), 1)
two_random_index = sample(range(50), 2)
random_label = sample([1,0], 1) # random label
## iris_setosa_normalised # Label 0
## iris_versicolor_normalised # Label 1
if random_label[0]:
# Test data has label = 1, iris_versicolor
data_label0 = [iris_setosa_normalised[0][one_random_index[0]],
iris_setosa_normalised[1][one_random_index[0]]]
data_label1 = [iris_versicolor_normalised[0][two_random_index[0]],
iris_versicolor_normalised[1][two_random_index[0]]]
test_data = [iris_versicolor_normalised[0][two_random_index[1]],
iris_versicolor_normalised[1][two_random_index[1]]]
else:
# Test data has label = 0, iris_setosa
data_label0 = [iris_setosa_normalised[0][two_random_index[0]],
iris_setosa_normalised[1][two_random_index[0]]]
data_label1 = [iris_versicolor_normalised[0][one_random_index[0]],
iris_versicolor_normalised[1][one_random_index[0]]]
test_data = [iris_setosa_normalised[0][two_random_index[1]],
iris_setosa_normalised[1][two_random_index[1]]]
return data_label0, data_label1, test_data, random_label
data_label0, data_label1, test_data, random_label = grab_random_data()
print("Data point {} from label 0".format(data_label0))
print("Data point {} from label 1".format(data_label1))
print("Test point {} from label {} ".format(test_data, random_label[0]))
def run_random_data(data_label0, data_label1, test_data):
angle_x_tilde = 2 * acos(test_data[0]) * sign(test_data[1]) % (4 * pi)
angle_x0 = 2 * acos(data_label0[0]) * sign(data_label0[1]) % (4 * pi)
angle_x1 = 2 * acos(data_label1[0])* sign(data_label1[1]) % (4 * pi)
## Build final QASM
final_qasm = part_a() + part_b(angle_x_tilde) + part_c(angle_x0) + part_d(angle_x1) + part_e() + part_f()
result_random_data = qi.execute_qasm(final_qasm, backend_type=backend_type, number_of_shots=1, full_state_projection=True)
return result_random_data
result_random_data = run_random_data(data_label0, data_label1, test_data);
# Plot data points:
plt.rcParams['figure.figsize'] = [16, 6] # Plot size
plt.subplot(1, 2, 1)
DataPlotter.plot_normalised_data(iris_setosa_normalised, iris_versicolor_normalised);
plt.scatter(test_data[0], test_data[1], s=50, c='green'); # Scatter plot test data point
plt.scatter(data_label0[0], data_label0[1], s=50, c='orange'); # Scatter plot data class 0
plt.scatter(data_label1[0], data_label1[1], s=50, c='orange'); # Scatter plot data class 1
plt.legend(["Iris Setosa (label 0)", "Iris Versicolor (label 1)", "Test point", "Data points"])
plt.subplot(1, 2, 2)
prob_random_points = bar_plot(result_random_data);
summarize_results(prob_random_points);
# -
# To get a better idea of how well this quantum classifier works, we can compare the predicted label to the true label of the test data point. Errors in the prediction can have two causes: either the quantum classifier fails to reproduce the prediction of the classical classifier, or the classical classifier itself assigns the wrong label for the selected data. In general, the first type of error can be reduced by increasing the number of times we run the algorithm. In our case, as we work with the simulator and our gates are deterministic ([no conditional gates](https://www.quantum-inspire.com/kbase/optimization-of-simulations/)), we do not have to deal with this first error if we use the true probability distribution. This can be done by using only a single shot without measurements.
# +
quantum_score = 0
error_prediction = 0
classifier_is_quantum_prediction = 0
classifier_score = 0
no_label = 0
def true_classifier(data_label0, data_label1, test_data):
if np.linalg.norm(np.array(data_label1) - np.array(test_data)) < np.linalg.norm(np.array(data_label0) -
np.array(test_data)):
return 1
else:
return 0
for idx in range(100):
data_label0, data_label1, test_data, random_label = grab_random_data()
result_random_data = run_random_data(data_label0, data_label1, test_data)
classifier = true_classifier(data_label0, data_label1, test_data)
sum_label0 = 0
sum_label1 = 0
for key, value in result_random_data['histogram'].items():
if int(key) in [0, 1, 4, 5]:
sum_label0 += value
if int(key) in [8, 9, 12, 13]:
sum_label1 += value
if sum_label0 > sum_label1:
quantum_prediction = 0
elif sum_label1 > sum_label0:
quantum_prediction = 1
else:
no_label += 1
continue
if quantum_prediction == classifier:
classifier_is_quantum_prediction += 1
if random_label[0] == classifier:
classifier_score += 1
if quantum_prediction == random_label[0]:
quantum_score += 1
else:
error_prediction += 1
print("In this sample of 100 data points:")
print("the classifier predicted the true label correctly", classifier_score, "% of the time")
print("the quantum classifier predicted the true label correctly", quantum_score, "% of the time")
print("the quantum classifier predicted the classifier label correctly",
      classifier_is_quantum_prediction, "% of the time")
print("Could not assign a label ", no_label, "times")
# -
# <a name="conclusion"></a>
# # Conclusion and further work #
#
#
# How well the quantum classifier performs depends hugely on the chosen data points. In case the test data point is significantly closer to one of the two training data points, the classifier will give a one-sided prediction. In the other case, where the test data point has a similar distance to both training points, the classifier struggles to give a one-sided prediction. Repeating the algorithm on the same data points might sometimes give different measurement outcomes. This type of error can be reduced by running the algorithm with more shots. In the examples above we only used the true probability distribution (as if we had used an infinite number of shots). By running the algorithm instead with 512 or 1024 shots this erroneous behavior can be observed. In the case of an infinite number of shots, we see that the quantum classifier gives the same prediction as classically expected.
#
# The results of this toy example already show the potential of a quantum computer in machine learning. Because the actual algorithm consists of only three operations, independent of the size of the data set, it can become extremely useful for tasks such as pattern recognition on large data sets. The next step is to extend this toy model to contain more data features and a larger training data set to improve the prediction. As not all data sets are best classified by a distance-based classifier, implementations of other types of classifiers might also be interesting. For more information on this particular classifier see the [reference](https://arxiv.org/abs/1703.10793).
#
# [Back to Table of Contents](#contents)
# ### References ###
# * Book: [Schuld and Petruccione, Supervised learning with Quantum computers, 2018](https://www.springer.com/us/book/9783319964232)
# * Article: [Schuld, Fingerhuth and Petruccione, Implementing a distance-based classifier with a quantum interference circuit, 2017](https://arxiv.org/abs/1703.10793)
# # The same algorithm for the ProjectQ framework #
# +
from math import acos
import os
from getpass import getpass
from quantuminspire.credentials import load_account, get_token_authentication, get_basic_authentication
from quantuminspire.api import QuantumInspireAPI
from quantuminspire.projectq.backend_qx import QIBackend
from projectq import MainEngine
from projectq.backends import ResourceCounter
from projectq.meta import Compute, Control, Loop, Uncompute
from projectq.ops import CNOT, CZ, All, H, Measure, Toffoli, X, Z, Ry, C
from projectq.setups import restrictedgateset
QI_EMAIL = os.getenv('QI_EMAIL')
QI_PASSWORD = os.getenv('QI_PASSWORD')
QI_URL = os.getenv('API_URL', 'https://api.quantum-inspire.com/')
def get_authentication():
""" Gets the authentication for connecting to the Quantum Inspire API."""
token = load_account()
if token is not None:
return get_token_authentication(token)
else:
if QI_EMAIL is None or QI_PASSWORD is None:
print('Enter email:')
email = input()
print('Enter password')
password = getpass()
else:
email, password = QI_EMAIL, QI_PASSWORD
return get_basic_authentication(email, password)
# Remote Quantum Inspire backend #
authentication = get_authentication()
qi_api = QuantumInspireAPI(QI_URL, authentication)
compiler_engines = restrictedgateset.get_engine_list(one_qubit_gates="any",
two_qubit_gates=(CNOT, CZ, Toffoli))
compiler_engines.extend([ResourceCounter()])
qi_backend = QIBackend(quantum_inspire_api=qi_api)
qi_engine = MainEngine(backend=qi_backend, engine_list=compiler_engines)
# angles data points:
angle_x_tilde = 2 * acos(0.8670)
angle_x0 = 2 * acos(0.1411)
angle_x1 = 2 * acos(0.9193)
qubits = qi_engine.allocate_qureg(4)
# part_a
for qubit in qubits[0:2]:
H | qubit
# part_b
C(Ry(angle_x_tilde), 1) | (qubits[1], qubits[2]) # Alternatively build own CRy gate as done above
X | qubits[1]
# part_c
C(Ry(angle_x0), 2) | (qubits[0], qubits[1], qubits[2]) # Alternatively build own CCRy gate as done above
X | qubits[0]
# part_d
C(Ry(angle_x1), 2) | (qubits[0], qubits[1], qubits[2]) # Alternatively build own CCRy gate as done above
# part_e
CNOT | (qubits[0], qubits[3])
# part_f
H | qubits[1]
qi_engine.flush()
# Results:
temp_results = qi_backend.get_probabilities(qubits)
res = [get_bin(el, 4) for el in range(16)]
prob = [0] * 16
for key, value in temp_results.items():
prob[int(key[::-1], 2)] = value # Reverse as projectQ has a different qubit ordering
color_list = [
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'red', 'red', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1),
'blue', 'blue', (0.1, 0.1, 0.1, 0.1), (0.1, 0.1, 0.1, 0.1)
]
plt.bar(res, prob, color=color_list)
plt.ylabel('Probability')
plt.title('Results')
plt.ylim(0, 1)
plt.xticks(rotation='vertical')
plt.show()
print("Results:")
print(temp_results)
# -
# docs/classifier_example/classification_example1_2_data_points.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simple Driver Comfort Analysis
#
# ## Introduction
#
# This example will demonstrate how to use the Inertial Measurement Unit to do a simple analysis of driver comfort while driving. An IMU is a sensor that measures rotational acceleration and forces at a certain point in the vehicle. BeamNGpy provides a simulated version of such a sensor that can be placed in or on the vehicle. Contrary to real IMUs, BeamNG's have no size or weight, meaning they have no effect on the behavior of the vehicle and an arbitrary number of them can be added.
#
# ## Scenario
#
# Our scenario contains two vehicles that are tasked with driving to a certain waypoint on the map using BeamNG's AI. The AI will be given different speed levels and aggression values. Both vehicles are equipped with an IMU placed at the headrest of the driver's seat, giving measurements about rotational acceleration and forces acting on the driver. After both vehicles arrive at their destination, the data measured for both will be plotted for comparison.
#
# ## Setup
#
# Setting up the environment starts with importing the required classes, mainly:
#
# * `BeamNGpy`: The backbone of the library used to manage BeamNG and communicate with the running simulation
# * `Scenario`: A class representing the scenario we set up. It will contain information about which level to load and vehicles contained in the scenario.
# * `Vehicle`: Each of our vehicles will be an instance of this class. It is used to represent and communicate with a vehicle in the simulation.
# * `IMU`: The class implementing an IMU sensor and focus of this example. Each vehicle will have an instance of this to gather measurements.
#
# Instances of these classes are compiled into one scenario that will then be loaded in the simulator.
#
# Additionally, some modules and classes related to later plotting are imported.
# +
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from beamngpy import BeamNGpy, Vehicle, Scenario
from beamngpy.sensors import IMU
sns.set() # Let seaborn apply better styling to all matplotlib graphs
# -
# The actual scenario will be set up by instantiating the `Scenario` class with the `west_coast_usa` level and the name "driver_comfort". Two instances of the `Vehicle` class will be created using the ETK800 model and given unique names for later reference.
# +
beamng = BeamNGpy('localhost', 64256)
beamng.open()
scenario = Scenario('west_coast_usa', 'driver_comfort')
careful = Vehicle('careful', model='etk800', licence='CAREFUL', colour='Green')
aggressive = Vehicle('aggressive', model='etk800', licence='AGGRO', colour='Red')
# -
# With two vehicles instantiated we move on to creating IMU sensor objects. These objects are placed at locations *relative to the vehicle's origin*. This means an IMU placed at (0, 0, 0) is always at the vehicle's origin, regardless of its position in the world. This, however, means placement of the IMU is best done by looking at node positions in the game's vehicle editor to get reference values. For the creation of this example, the game was started manually and the location for IMUs was determined by looking at relative node positions close to the driver's headrest of the ETK800 vehicle.
#
# **Note**: The `debug=True` flag passed to the IMUs has no effect on the data. It merely enables debug visualization of the IMU and its readings in game.
# +
careful_imu = IMU(pos=(0.73, 0.51, 0.8), debug=True)
careful.attach_sensor('careful_imu', careful_imu)
aggressive_imu = IMU(pos=(0.73, 0.51, 0.8), debug=True)
aggressive.attach_sensor('aggressive_imu', aggressive_imu)
# -
# Finally, we add the vehicles to our scenario. The locations at which they are placed were determined manually in the game's World Editor. The call to `scenario.make(beamng)` creates the files necessary for the game to load our scenario during the simulation.
# +
scenario.add_vehicle(careful, pos=(-767.1, 402.8, 142.8), rot_quat=(0, 0, 0.027, 1))
scenario.add_vehicle(aggressive, pos=(-770.1, 398.8, 142.8), rot_quat=(0, 0, 0.027, 1))
scenario.make(beamng)
# -
# We further set up two lists that will contain measurement data for both vehicles. These will later be put into a Pandas `DataFrame` for easy data analysis.
careful_data = []
aggressive_data = []
# ## Running
#
# After our scenario is loaded in the simulator, we start the scenario letting the aggressive vehicle drive to its destination first, using a high maximum speed and high aggression value. Data from the vehicle's IMU is gathered every 30 frames for 1800 frames. Afterwards, the careful vehicle is given the same destination but a lower speed limit and lower aggression value. Its IMU will be sampled in the same manner.
# +
beamng.load_scenario(scenario)
beamng.set_deterministic()
beamng.set_steps_per_second(60)
beamng.start_scenario()
beamng.pause()
beamng.switch_vehicle(aggressive) # Switches the game's focus to the aggressive vehicle. No effect besides making it easier to watch.
aggressive.ai_set_waypoint('junction1_wp24')
aggressive.ai_set_speed(50, mode='limit')
aggressive.ai_set_aggression(1)
beamng.step(240) # Give the vehicle some frames to get going
for t in range(0, 1800, 30):
aggressive.poll_sensors()
row = aggressive_imu.data
row = [
t,
'aggressive',
row['aX'],
row['aY'],
row['aZ'],
row['gX'],
row['gY'],
row['gZ'],
]
aggressive_data.append(row)
beamng.step(30)
aggressive.ai_set_waypoint('tunnel_NE_A_1') # Make it move away to make room for the careful car
beamng.switch_vehicle(careful) # Switches the game's focus to the careful vehicle. No effect besides making it easier to watch.
careful.ai_set_waypoint('junction1_wp24')
careful.ai_set_speed(10, mode='limit')
careful.ai_set_aggression(0.3)
beamng.step(240)
for t in range(0, 1800, 30):
careful.poll_sensors()
row = careful_imu.data
row = [
t,
'careful',
row['aX'],
row['aY'],
row['aZ'],
row['gX'],
row['gY'],
row['gZ'],
]
careful_data.append(row)
beamng.step(30)
data = careful_data + aggressive_data
df = pd.DataFrame(data, columns=['T', 'Behavior', 'aX', 'aY', 'aZ', 'gX', 'gY', 'gZ'])
beamng.close() # Close beamng as all data was gathered
# -
# ## Analysis
#
# We analyze the data by computing the respective lengths of rotational acceleration and force vectors on each vehicle. This data is then aggregated into minimum, maximum, and mean lengths to compare both careful and aggressive measurements and later plotted to display the length of each measurement over time.
df['Force'] = df.apply(lambda r: np.linalg.norm([r['gX'], r['gY'], r['gZ']]), axis=1)
df['Rotation'] = df.apply(lambda r: np.linalg.norm([r['aX'], r['aY'], r['aZ']]), axis=1)
df[['Behavior', 'Force', 'Rotation']].groupby('Behavior').agg(['min', 'max', 'mean'])
figure, ax = plt.subplots(2, 1, figsize=(30, 15))
pal = {'aggressive': '#FF4444', 'careful': '#22FF22'}
sns.lineplot(x='T', y='Force', hue='Behavior', data=df, palette=pal, ax=ax[0])
sns.lineplot(x='T', y='Rotation', hue='Behavior', data=df, palette=pal, ax=ax[1])
|
examples/simple_driver_comfort_analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook was prepared by [<NAME>](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# # Solution Notebook
# ## Problem: Delete a node in the middle, given only access to that node.
#
# * [Constraints](#Constraints)
# * [Test Cases](#Test-Cases)
# * [Algorithm](#Algorithm)
# * [Code](#Code)
# * [Unit Test](#Unit-Test)
# ## Constraints
#
# * Can we assume this is a non-circular, singly linked list?
# * Yes
# * What if the final node is being deleted, for example a single node list? Do we make it a dummy with value None?
# * Yes
# * Can we assume we already have a linked list class that can be used for this problem?
# * Yes
# ## Test Cases
#
# * Delete on empty list -> None
# * Delete None -> None
# * Delete on one node -> []
# * Delete on multiple nodes
# ## Algorithm
#
# We'll need two pointers, one to the current node and one to the next node. We will copy the next node's data to the current node's data (effectively deleting the current node) and update the current node's next pointer. If the node to delete is the last one, we instead turn it into a dummy by setting its data and next to None.
#
# * set curr.data to next.data
# * set curr.next to next.next
#
# Complexity:
# * Time: O(1)
# * Space: O(1)
# ## Code
# %run ../linked_list/linked_list.py
class MyLinkedList(LinkedList):
def delete_node(self, node):
if self.head is None or node is None:
return
if self.head == node:
self.head = None
return
if node.next is None:
node.data = None
node.next = None
else:
node.data = node.next.data
node.next = node.next.next
# ## Unit Test
# +
# %%writefile test_delete_mid.py
from nose.tools import assert_equal
class TestDeleteNode(object):
def test_delete_node(self):
print('Test: Empty list, null node to delete')
linked_list = MyLinkedList(None)
linked_list.delete_node(None)
assert_equal(linked_list.get_all_data(), [])
print('Test: One node')
head = Node(2)
linked_list = MyLinkedList(head)
linked_list.delete_node(head)
assert_equal(linked_list.get_all_data(), [])
print('Test: Multiple nodes')
linked_list = MyLinkedList(None)
node0 = linked_list.insert_to_front(2)
node1 = linked_list.insert_to_front(3)
node2 = linked_list.insert_to_front(4)
node3 = linked_list.insert_to_front(1)
linked_list.delete_node(node1)
assert_equal(linked_list.get_all_data(), [1, 4, 2])
print('Test: Multiple nodes, delete last element')
linked_list = MyLinkedList(None)
node0 = linked_list.insert_to_front(2)
node1 = linked_list.insert_to_front(3)
node2 = linked_list.insert_to_front(4)
node3 = linked_list.insert_to_front(1)
linked_list.delete_node(node0)
assert_equal(linked_list.get_all_data(), [1, 4, 3, None])
print('Success: test_delete_node')
def main():
test = TestDeleteNode()
test.test_delete_node()
if __name__ == '__main__':
main()
# -
# %run -i test_delete_mid.py
|
linked_lists/delete_mid/delete_mid_solution.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
# +
fig = plt.figure()
fig.set_facecolor('gray')
spec = gridspec.GridSpec(
ncols=3, nrows=3,
width_ratios=[0.45, 0.45, 0.1],
wspace=0,
hspace=0,
figure=fig)
im_axes = []
ax = fig.add_subplot(spec[0, 0])
ax.set_facecolor('red')
im_axes.append(ax)
ax = fig.add_subplot(spec[0, 1])
ax.set_facecolor('blue')
im_axes.append(ax)
ax = fig.add_subplot(spec[1, 0])
ax.set_facecolor('orange')
im_axes.append(ax)
ax = fig.add_subplot(spec[1, 1])
ax.set_facecolor('yellow')
im_axes.append(ax)
ax = fig.add_subplot(spec[2, 0])
ax.set_facecolor('lime')
im_axes.append(ax)
ax = fig.add_subplot(spec[2, 1])
ax.set_facecolor('cyan')
im_axes.append(ax)
ax = fig.add_subplot(spec[:, 2])
ax.set_facecolor('green')
for ax in fig.axes:
ax.set_xticks([])
ax.set_yticks([])
# for ax in im_axes:
# ax.set_aspect('equal', anchor='C')
plt.show()
# -
|
MC simulation/dosecalc/jupyter_notebooks/plt_gridspec.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pickle
import pprint
from models import fl_en as model
pp = pprint.PrettyPrinter(depth=6)
pp.pprint(model.config)
|
ntbk_ModelConfiguration.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
library(ez)
# +
df = read.csv('./out/R_NASA.csv')
#df$Round = factor(df$Round)
#df$Task = factor(df$Task)
df1 = subset(df, Round<3)
df1$Round = factor(df1$Round)
df1$PId = factor(df1$PId)
df1$IsPointer = factor(df1$IsPointer)
mod = ezANOVA (data = df1,
dv = Avg,
within = c(Round ),
between= c(IsPointer),
wid = PId,
type = 3)
mod
df1 = df[df$Round>2, ]
df1$Round = factor(df1$Round)
df1$PId = factor(df1$PId)
df1$IsPointer = factor(df1$IsPointer)
mod = ezANOVA (data = df1,
dv = Avg,
within = c(Round ),
between= c(IsPointer),
wid = PId,
type = 3)
mod
|
study2/Step_03_R_NASA.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.8 64-bit
# language: python
# name: python36864bit5d1453442e8543df8126f0c5c89733bf
# ---
# # Data Preparation
import pandas as pd
df = pd.read_csv("data/training.csv", delimiter=',')
df.head()
# * The number of classes in the dataset
df.CHURNED.unique()
# ### From String to Binary Data
def stringTobinaryData(strToCompare, strRef):
""" This function transforms the data from string valued to binary based on strRef. In this implementation, we have choosen
that strings strRef will be class 1)"""
if strToCompare == strRef :
return 1
return 0
# we apply stringTobinaryData() to the target.
strRef = "STAY"
df.CHURNED = df.CHURNED.apply(lambda x : stringTobinaryData(x, strRef))
df.CHURNED.unique()
df.COLLEGE.unique()
# we apply stringTobinaryData() to the COLLEGE feature.
strRef = "one"
df.COLLEGE = df.COLLEGE.apply(lambda x : stringTobinaryData(x, strRef))
df.COLLEGE.unique()
# #### Upsampling ?
# + active=""
# Usually with this type of problem, we deal with unbalanced classes, meaning one class has many more samples than the other, which can bias the model and lead to overfitting. To overcome this, we use upsampling/downsampling techniques.
#
# Let's see if our dataset is well balanced!
# -
churned = df[df.CHURNED == 1]
len(churned)
fidele = df[df.CHURNED == 0]
len(fidele)
len(df)
from sklearn.utils import resample,shuffle
upsampledFidele = resample(fidele,replace = True,n_samples=7612)
len(upsampledFidele)
df = pd.concat([churned,upsampledFidele])
df = shuffle(df)
churned = df[df.CHURNED == 1]
fidele = df[df.CHURNED == 0]
print(len(fidele))
print(len(churned))
# ### Dealing with outliers
import matplotlib.pyplot as plt
import numpy as np
# +
def plotHistograms(df):
# We get only numerical columns
columns = df.select_dtypes(include=np.number).columns.tolist()
for column in columns :
if column == 'CUSTOMER_ID':
continue
plt.hist(df[column])
plt.title(column)
plt.show()
def deleteAllOutliers(df):
# We get only numerical columns
columns = df.select_dtypes(include=np.number).columns.tolist()
for column in columns :
if column == 'CUSTOMER_ID' or column == 'COLLEGE' or column == 'CHURNED':
continue
df = deleteOutliers(df, column)
return df
def deleteOutliers(df, columnName):
q_low = df[columnName].quantile(0.01)
q_hi = df[columnName].quantile(0.99)
df_filtered = df[(df[columnName] < q_hi) & (df[columnName] > q_low)]
return df_filtered
# -
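To see what `deleteOutliers` does, here is a hedged, self-contained sketch of the same 1%/99% quantile filter applied to synthetic data (the column name `value` is invented for illustration):

```python
import numpy as np
import pandas as pd

# Synthetic column: 1000 standard-normal values plus two extreme outliers
rng = np.random.RandomState(0)
demo = pd.DataFrame({'value': np.concatenate([rng.normal(size=1000), [50.0, -50.0]])})

# Same logic as deleteOutliers: keep rows strictly between the 1% and 99% quantiles
q_low = demo['value'].quantile(0.01)
q_hi = demo['value'].quantile(0.99)
filtered = demo[(demo['value'] < q_hi) & (demo['value'] > q_low)]

print(len(demo), '->', len(filtered))  # the extreme values (and ~2% of the bulk) are dropped
```

Note that this drops roughly 2% of rows per column it is applied to, so chaining it over many columns (as `deleteAllOutliers` does) shrinks the dataset multiplicatively.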
df.describe()
plotHistograms(df)
# +
#df = deleteAllOutliers(df)
#df = deleteOutliers(df, 'DATA')
#df = deleteOutliers(df, 'INCOME')
# -
len(df)
# ### We will now deal with NaN values
def replaceNaNValues(df, columnName, isQualitative) :
    print('------ BEFORE REPLACING : ', df[columnName].isna().sum(), ' NAN VALUES')
    if isQualitative :
        x = df[columnName].mode()[0]
        print("[INFO] NaN values will be replaced by the mode value : ", x)
    else :
        x = df[columnName].mean()
        print("[INFO] NaN values will be replaced by the mean value : ", x)
    df[columnName] = df[columnName].fillna(x)
    print('------ AFTER REPLACING : ', df[columnName].isna().sum(), ' NAN VALUES')
df.isna().sum()
df.HOUSE.unique()
df.LESSTHAN600k.unique()
replaceNaNValues(df, "LESSTHAN600k", 1)
replaceNaNValues(df, "HOUSE", 1)
df.LESSTHAN600k.unique()
df.isna().sum()
# * We can see that we no longer have a NaN value.
df.describe()
# # Data analysis - Getting insight from the data
import seaborn as sns
import matplotlib.pyplot as plt
df.corr()
corrMatrix = df.corr()
plt.subplots(figsize=(13,13))
sns.heatmap(corrMatrix, annot=True)
plt.show()
# ## Creating X, Y (features, target)
df.columns
X = df[['COLLEGE', 'DATA', 'INCOME', 'OVERCHARGE', 'LEFTOVER',
'HOUSE', 'LESSTHAN600k', 'CHILD', 'JOB_CLASS', 'REVENUE',
'HANDSET_PRICE', 'OVER_15MINS_CALLS_PER_MONTH', 'TIME_CLIENT',
'AVERAGE_CALL_DURATION', 'REPORTED_SATISFACTION',
'REPORTED_USAGE_LEVEL', 'CONSIDERING_CHANGE_OF_PLAN']]
y = df[['CHURNED']]
X = df[['COLLEGE', 'INCOME', 'OVERCHARGE', 'LEFTOVER',
'HOUSE', 'LESSTHAN600k', 'JOB_CLASS', 'REVENUE',
'HANDSET_PRICE', 'OVER_15MINS_CALLS_PER_MONTH', 'TIME_CLIENT',
'AVERAGE_CALL_DURATION', 'REPORTED_SATISFACTION',
'REPORTED_USAGE_LEVEL', 'CONSIDERING_CHANGE_OF_PLAN']]
y = df[['CHURNED']]
# ## OneHot Encoding
to_dummify = ['REPORTED_SATISFACTION', 'REPORTED_USAGE_LEVEL','CONSIDERING_CHANGE_OF_PLAN']
X = pd.get_dummies(X, columns=to_dummify)
X.head()
datasetColumns = X.columns
# # Scaling the Data
from sklearn.preprocessing import MinMaxScaler
from pandas import DataFrame
# define min max scaler
scaler = MinMaxScaler()
# transform data
X_scaled = scaler.fit_transform(X)
X = DataFrame(X_scaled, columns=datasetColumns)
X
# ## Train and test split
from sklearn.model_selection import train_test_split
#DataConversionWarning
y = y.to_numpy().ravel()
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2)
# # Model building
from sklearn.linear_model import SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier
from catboost import Pool, CatBoostClassifier, cv
# +
from sklearn.model_selection import cross_val_score
from sklearn.metrics import classification_report
from sklearn.metrics import average_precision_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
from xgboost import plot_importance
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
# -
# #### To ignore FutureWarning
# import warnings filter
from warnings import simplefilter
# ignore all future warnings
simplefilter(action='ignore', category=FutureWarning)
# +
def trainAndScore(model, X_train, y_train, X_test, y_test):
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
print('[TEST] SCORE = ',score)
print()
testModel(model)
return model
def testModel(model):
y_pred = model.predict(X_test)
cm = confusion_matrix(y_test,y_pred)
print(cm)
print()
    print('[TEST] AUC = ', roc_auc_score(y_test, y_pred))
print(classification_report(y_test,y_pred))
def plotFeatureImportance(model):
fig = plt.figure(figsize = (14, 9))
ax = fig.add_subplot(111)
colours = plt.cm.Set1(np.linspace(0, 1, 9))
ax = plot_importance(model, height = 1, color = colours, grid = False, show_values = False, importance_type = 'cover', ax = ax);
for axis in ['top','bottom','left','right']:
ax.spines[axis].set_linewidth(2)
ax.set_xlabel('importance score', size = 16)
ax.set_ylabel('features', size = 16)
ax.set_yticklabels(ax.get_yticklabels(), size = 12)
ax.set_title('Ordering of features by importance to the model learnt', size = 20)
# -
# ## Naive Bayes
model = MultinomialNB()
model = trainAndScore(model, X_train, y_train, X_test, y_test)
model = GaussianNB()
trainAndScore(model, X_train, y_train, X_test, y_test)
# ## Stochastic Gradient Descent Classifier
model = SGDClassifier()
trainAndScore(model, X_train, y_train, X_test, y_test)
# ## Random Forest Classifier
# +
rfc = RandomForestClassifier(n_jobs=-1,max_features= 'sqrt' ,n_estimators=50, oob_score = True)
param_grid = {
'n_estimators': [40, 200],
'max_depth' : [10, 70],
'min_samples_leaf': [20, 50],
'min_samples_split': [10, 45],
'max_features': ['auto', 'sqrt', 'log2']
}
CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5)
#y_np = y.to_numpy().ravel()
CV_rfc.fit(X, y)
print(CV_rfc.best_params_)
# -
model = RandomForestClassifier(n_estimators = 200, max_depth= 70 , max_features='auto', criterion='gini',
min_samples_leaf=20, min_samples_split=10)
trainAndScore(model, X_train, y_train, X_test, y_test)
accuracies = cross_val_score(estimator = model, X = X_train, y = y_train, cv = 10)
print("Accuracy: {:.2f} %".format(accuracies.mean()*100))
print("Standard Deviation: {:.2f} %".format(accuracies.std()*100))
# ## Logistic Regression
model = LogisticRegression()
trainAndScore(model, X_train, y_train, X_test, y_test)
# ## SVM
model = SVC(gamma='auto')
trainAndScore(model, X_train, y_train, X_test, y_test)
# ## K-Nearest Neighbors
model = KNeighborsClassifier(n_neighbors=43)
trainAndScore(model, X_train, y_train, X_test, y_test)
accuracies = cross_val_score(estimator = model, X = X_train, y = y_train, cv = 10)
print("Accuracy: {:.2f} %".format(accuracies.mean()*100))
print("Standard Deviation: {:.2f} %".format(accuracies.std()*100))
# # Gradient Boosting Classifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import make_scorer
#creating Scoring parameter:
scoring = {'accuracy': make_scorer(accuracy_score),
'precision': make_scorer(precision_score),'recall':make_scorer(recall_score)}
# +
# A sample parameter
parameters = {
"loss":["deviance"],
"learning_rate": [0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2],
"min_samples_split": np.linspace(0.1, 0.5, 12),
"min_samples_leaf": np.linspace(0.1, 0.5, 12),
"max_depth":[3,5,8],
"max_features":["log2","sqrt"],
"criterion": ["friedman_mse", "mae"],
"subsample":[0.5, 0.618, 0.8, 0.85, 0.9, 0.95, 1.0],
"n_estimators":[10,20,40,50]
}
# +
#passing the scoring function in the GridSearchCV
clf = GridSearchCV(GradientBoostingClassifier(), parameters,scoring=scoring,refit=False,cv=5, n_jobs=-1)
clf.fit(X, y)
print(clf.best_params_)
# -
model = GradientBoostingClassifier(n_estimators=100,max_depth=8)
trainAndScore(model, X_train, y_train, X_test, y_test)
plotFeatureImportance(model)
# ## XGBoost
model = XGBClassifier()
trainAndScore(model, X_train, y_train, X_test, y_test)
# ## CatBoost Classifier
model = CatBoostClassifier(eval_metric='Accuracy',random_seed=0)
trainAndScore(model, X_train, y_train, X_test, y_test)
|
Churn clients study.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../figures/HeaDS_logo_large_withTitle.png" width="300">
#
# <img src="../figures/tsunami_logo.PNG" width="600">
#
# [](https://colab.research.google.com/github/Center-for-Health-Data-Science/PythonTsunami/blob/fall2021/Numbers_and_operators/Numbers_and_operators.ipynb)
# + [markdown] colab_type="text" id="GA6eVAZ8vDmf" slideshow={"slide_type": "slide"}
# # Numerical Operators
#
# *Prepared by [<NAME>](https://www.cpr.ku.dk/staff/?pure=en/persons/672471)*
# + [markdown] colab_type="text" id="GptYMRpSuasN" slideshow={"slide_type": "subslide"}
# ## Objectives
# - understand differences between `int`s and `float`s
# - work with simple math operators
# - add comments to your code
# + [markdown] colab_type="text" id="dIx5dioRuasS" slideshow={"slide_type": "slide"}
# ## Numbers
# Two main types of numbers:
# - Integers: `56, 3, -90`
# - Floating Points: `5.666, 0.0, -8.9`
# + slideshow={"slide_type": "fragment"}
# + [markdown] colab_type="text" id="OgojobH6uasW" slideshow={"slide_type": "slide"}
# ## Operators
# - addition: `+`
# - subtraction: `-`
# - multiplication: `*`
# - division: `/`
# - exponentiation, power: `**`
# - modulo: `%`
# - integer division: `//` (what does it return?)
# + slideshow={"slide_type": "fragment"}
# playground
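As a worked illustration of the operators above (a sketch for the playground, not the quiz answers), each one applied to the same pair of integers:

```python
# Applying each arithmetic operator to 7 and 3
print(7 + 3)    # addition
print(7 - 3)    # subtraction
print(7 * 3)    # multiplication
print(7 / 3)    # division: always returns a float
print(7 ** 3)   # exponentiation
print(7 % 3)    # modulo: the remainder
print(7 // 3)   # integer division: the floored quotient
```

Note that `/` returns a float even when the operands divide evenly, while `//` on two ints returns an int.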
# + [markdown] colab_type="text" id="V3PQMg3GuasY" slideshow={"slide_type": "subslide"}
# ### Questions: Ints and Floats
# + [markdown] colab_type="text" id="V3PQMg3GuasY" slideshow={"slide_type": "subslide"}
# - Question 1: What type does the following expression result in?
#
# ```python
# 3.0 + 5
# ```
# + slideshow={"slide_type": "fragment"}
# + [markdown] colab_type="text" id="GlG2t-H-uasb" slideshow={"slide_type": "subslide"}
# ### Operators 1
# - Question 2: How can we add parenthesis to the following expression to make it equal 100?
# ```python
# 1 + 9 * 10
# ```
#
# - Question 3: What is the result of the following expression?
# ```python
# 3 + 14 * 2 + 4 * 5
# ```
# - Question 4: What is the result of the following expression
# ```python
# 5 * 9 / 4 ** 3 - 6 * 7
# ```
# + slideshow={"slide_type": "fragment"}
# + [markdown] colab_type="text" id="1wU8A2FFuasg" slideshow={"slide_type": "subslide"}
# ### Comments
# - Question 5: What is the result of running this code?
#
# ```python
# 15 / 3 * 2 # + 1
# ```
# + slideshow={"slide_type": "fragment"}
# + [markdown] colab_type="text" id="RHUiqAaYuash" slideshow={"slide_type": "subslide"}
# ### Questions: Operators 2
#
# - Question 6: Which of the following result in integers in Python?
#     - (a) 8 / 2
#     - (b) 3 // 2
#     - (c) 4.5 * 2
# + slideshow={"slide_type": "fragment"}
# + [markdown] colab_type="text" id="RHUiqAaYuash" slideshow={"slide_type": "subslide"}
# - Question 7: What is the result of `18 // 3` ?
# + slideshow={"slide_type": "fragment"}
# + [markdown] colab_type="text" id="RHUiqAaYuash" slideshow={"slide_type": "subslide"}
# - Question 8: What is the result of `121 % 7` ?
# + slideshow={"slide_type": "fragment"}
# -
# ## Exercise
#
# Ask the user for a number using the function [input()](https://www.askpython.com/python/examples/python-user-input) and then multiply that number by 2 and print out the value. Remember to store the input value into a variable, so that you can use it afterwards in the multiplication.
# Modify your previous calculator and ask for the second number (instead of x * 2 --> x * y).
# Now get the square of the number that the user inputs
# ### Note
#
# Check out also the [math library](https://docs.python.org/3/library/math.html) in Python. You can use this library for more complex operations with numbers. Just import the library and try it out:
#
# ```python
#
# import math
#
# print(math.sqrt(25))
#
# print(math.log10(10))
# ```
|
Numbers_and_operators/Numbers_and_operators.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Statistical testing in R
#
# Here, we carry out simple statistical tests including t-test and ANOVA
#
# ## Installation of libraries and necessary software
#
# Install the necessary libraries (only needed once) by executing (shift-enter) the following cell:
#
install.packages("MASS", repos='http://cran.us.r-project.org')
install.packages("perm", repos='http://cran.us.r-project.org')
install.packages("exactRankTests", repos='http://cran.us.r-project.org')
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install("qvalue")
# ## Loading data and libraries
# This requires that the installation above have been finished without error
library("MASS")
library("perm")
library("exactRankTests")
library("qvalue")
# ### Exercise 1
# Draw figures that show, for degrees of freedom between 1 and 100, the 5\% critical value of the t-statistic (that is, the value above and below which 5\% of the density distribution is located, given by ```qt(0.975, df=...)```).
#
# Plot the distribution on absolute and double-logarithmic scale.
#
# +
# add your code here:
# -
# #### Add your answers here
# (double-click here to edit the cell)
#
# ##### Question I: <ins>Which figure provides better visual information about the critical values?</ins>
#
# _Answer_
#
# ##### Question II: <ins>What does the decrease mean when carrying out statistical tests?</ins>
#
# _Answer_
#
# ### Exercise 2
# Take an artificial data set of three groups
#
# ```stat.dat <- data.frame(x=rep(c("A","B","C"),each=10),
# y=c(rnorm(10), rnorm(10)+0.5, rnorm(10)-1))
# ```
#
# a) Do an ANOVA to check whether B and C are significantly different from A
#
# b) Do an ANOVA to check whether A, B and C are significantly different from 0
#
# c) Do an ANOVA to check whether A and C are significantly different from B (you'll have to manually reorder the columns of ```stat.dat```)
#
# d) Calculate a t-test between C and A and compare the result to a)
#
# e) Redo ANOVA on the data set containing only A and C and compare again
#
num <- 10
stat.dat <- data.frame(x=rep(c("A","B","C"),each=num),
y=c(rnorm(num), rnorm(num)+0.5, rnorm(num)-1))
# a)
summary(lm(y~x, data=stat.dat))
# b)
# ...
# ##### Question I: <ins>Why do you expect the sample groups A, B and C to be different?</ins>
#
# _Answer_
#
# ##### Question II: <ins>What do the coefficients mean in a) and b)?</ins>
#
# _Answer_
#
#
# ##### Question III: <ins>Could one use t-tests to show the cases in b)?</ins>
#
# _Answer_
#
# ##### Question IV: <ins>What does removing a sample group mean for the ANOVA test?</ins>
#
# _Answer_
#
# ### Exercise 3
# A) Redo Exercise 2 using a sample size of 20 instead of 10.
#
# B) Change the normal distribution to an exponential one and check whether you would accept any of the results as significantly different.
#
#
# +
# add you code here:
# -
# ##### Question I: <ins>What improves when using a sample size of 20? How does this relate to Exercise 1?</ins>
#
# _Answer_
#
# ##### Question II: <ins>Why shouldn't we apply ANOVA to exponentially distributed data?</ins>
#
# _Answer_
#
# ### Exercise 4
# a) Try to understand the function ```TestCompare()``` that calculates the p-values of three statistical tests: two-sample t-test, the Wilcoxon rank test and a permutation test.
#
# b) Take the data ```PlantGrowth``` and reduce it to conditions ```ctrl``` and ```trt2```.
# Use ```TestCompare``` on the data and compare the p-values from the different tests.
#
# c) Apply the function on normally distributed artificial data having the same number of values, the same mean and the same standard deviation as ```ctrl``` and ```trt2```. Repeat this 1000 times.
#
# d) Check the distribution of the 1000 p-values per test and compare the p-values to the one obtained for the ```PlantGrowth``` data. Also compare the p-values of the different tests using scatter plots.
#
#
# +
data("PlantGrowth")
library(perm)
# a)
mydat <- PlantGrowth[PlantGrowth$group=="ctrl" | PlantGrowth$group=="trt2",]
head(mydat)
TestCompare <- function(sample1, sample2){
pval1 <- t.test(sample1,sample2)$p.value
pval2 <- wilcox.test(sample1,sample2)$p.value
pval3 <- permTS(sample1,sample2)$p.value
c(pval1,pval2,pval3)
}
# b)
mypvals <- # continue here
# c)
m1 <- mean(mydat[mydat$group=="ctrl",1])
m2 <- mean(mydat[mydat$group=="trt2",1])
s1 <- sd(mydat[mydat$group=="ctrl",1])
s2 <- sd(mydat[mydat$group=="trt2",1])
n1 <- length(mydat[mydat$group=="ctrl",1])
n2 <- length(mydat[mydat$group=="trt2",1])
pvals <- matrix(NA,ncol=3,nrow=1000)
for (i in 1:1000) {
# add the results from the corresponding tests here
pvals[i,] <- TestCompare(rnorm(n1,m1,s1), rnorm(n2,m2,s2))
}
ttt <- TestCompare(mydat[mydat$group=="ctrl",1], mydat[mydat$group=="trt2",1])
# d)
hist((pvals[,1]),100)
abline(v=ttt[1])
hist((pvals[,2]),100)
abline(v=ttt[2])
hist((pvals[,3]),100)
abline(v=ttt[3])
# Add scatter plots for direct comparison
# -
# ##### Question I: <ins>Any idea why the t-test gives the lowest p-value?</ins>
#
# _Answer_
#
# ##### Question II: <ins>Which test gives the lowest number of p-values from the artificial data which are larger than the p-value calculated for the PlantGrowth data?</ins>
#
# _Answer_
#
# ### Exercise 5
# Redo the first part of Exercise 4 changing to paired tests.
#
#
# +
data("PlantGrowth")
mydat <- PlantGrowth[PlantGrowth$group=="ctrl" | PlantGrowth$group=="trt2",]
TestComparePaired <- function(sample1, sample2){
pval1 <- t.test(sample1,sample2,paired = T)$p.value
pval2 <- wilcox.test(sample1,sample2,paired = T)$p.value
pval3 <- perm.test(sample1,sample2, paired=T)$p.value
c(pval1,pval2,pval3)
}
mypvals <- TestComparePaired(mydat[mydat$group=="ctrl",1], mydat[mydat$group=="trt2",1])
m1 <- mean(mydat[mydat$group=="ctrl",1])
m2 <- mean(mydat[mydat$group=="trt2",1])
s1 <- sd(mydat[mydat$group=="ctrl",1])
s2 <- sd(mydat[mydat$group=="trt2",1])
n1 <- length(mydat[mydat$group=="ctrl",1])
n2 <- length(mydat[mydat$group=="trt2",1])
pvals <- matrix(NA,ncol=3,nrow=1000)
for (i in 1:1000) {
pvals[i,] <- TestComparePaired(rnorm(n1,m1,s1), rnorm(n2,m2,s2))
}
par(mfrow=c(2,2))
hist(log10(pvals[,1]),100)
abline(v=log10(mypvals[1]),col=2,lwd=2)
hist(log10(pvals[,2]),100)
abline(v=log10(mypvals[2]),col=2,lwd=2)
hist(log10(pvals[,3]),100)
abline(v=log10(mypvals[3]),col=2,lwd=2)
# Paired tests are necessary when having e.g. drug reponses on the same persons.
# -
# ##### Question I: <ins>When should you use a paired test? Give an example.</ins>
#
# _Answer_
#
# ##### Question II: <ins>Which difference in the results do you observe?</ins>
#
# _Answer_
#
#
# ### Exercise 6
# _Correction for multiple testing:_
#
# a) Write a function to calculate the p-value (t-test) between two normally distributed (s.d. 1) artificial sets of ```num``` values each, mutually shifted by ```shift``` (set default to 0.5).
#
# b) Write a ```for``` loop to get 1000 p-values from the same comparison and plot them on a histogram. Count the number of p-values below 0.05.
#
# c) Correct for multiple testing using Bonferroni, Benjamini-Hochberg (```p.adjust```) and ```qvalue()``` (```qvalue``` package). Count the number of corrected p-values below 0.05.
#
# d) Repeat the same for a shift of ```shift=1``` and ```shift=0```. How many corrected p-values below 0.05 would one optimally get for ```shift=0```?
#
#
# +
# a)
GetPval <- function(shift=0.5, num=10) {
t.test(rnorm(num),rnorm(num,mean=shift))$p.value
}
# b)
pvec <- numeric(1000)
for (i in 1:1000) {
# from here this is yours
}
# c)
#p.adjust(yourvalues, method="bonferroni")
#p.adjust(yourvalues, method="BH")
#qvalue(yourvalues)$qvalues
# d)
# -
# <b> Question I: <ins>What are the arguments ```shift``` and ```num``` in the function ```GetPval```?
# What happens when you call the function without arguments (```GetPval()```)?</ins></b>
#
# _Answer_
#
# ##### Question II: <ins>What is the percentage of p-values below 0.05?</ins>
#
#
# _Answer_
#
# ##### Question III: <ins>Why is the number of p-values below 0.05 lower after correction? Order the corrections according to their number of p-values smaller than 0.05.</ins>
#
# _Answer_
#
# ##### Question IV: <ins>What is the expected percentage of p-values below 0.01 for ```shift=0```?</ins>
#
# _Answer_
#
#
#
|
E_Biostatistics/Statistical-tests/StatisticalTesting.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 03 - Bringing it together
# You work for a consulting agency, and they just tasked you with a new assignment to build scripts to analyze and inform the NYC council about the **Uber** ride-sharing company. The city wants reproducible workflows in a Jupyter notebook.
#
# The data can be obtained from here: https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page
#
# The information will be given to **non-technical** individuals, so they want pretty maps, graphs, and charts. They have set limits on which Python packages you can use because their IT staff restricts what analysts can use.
#
# #### Whitelist Software Limitations from the Client
#
# - scipy
# - sklearn
# - ArcGIS API for Python
# - ArcPy
# - Standard Python Library
# - Requests
#
# #### What is wanted:
#
# - comparisons of data between ride shares and taxis
# - hot spots of taxi pick-ups and ride sharing
# - Interactive maps (minimum of 2-3)
# - Nice descriptive reporting
# - Other interesting facts
#
# #### What to turn in:
#
# - Jupyter notebook
#
# ### Grading
#
# The project will be graded on the following:
#
# - Execution without errors: (40%)
# - Description of goal and meaning of result explained: (30%)
# - Proper method/class and other code documentation and general reference citations (AMA style) (20 %)
# - Proper Markdown usage in Jupyter Notebook (10%)
#
#
|
assignments/lab 03/Lab 03.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## {{cookiecutter.project_name}}
#
# {{cookiecutter.description}}
#
# This notebook contains basic statistical analysis and visualization of the data.
#
# ### Data Sources
# - summary : Processed file from notebook 1-Data_Prep
#
# ### Changes
# - {% now 'utc', '%m-%d-%Y' %} : Started project
import pandas as pd
from pathlib import Path
from datetime import datetime
import seaborn as sns
# %matplotlib inline
# ### File Locations
today = datetime.today()
in_file = Path.cwd() / "data" / "processed" / f"summary_{today:%b-%d-%Y}.pkl"
report_dir = Path.cwd() / "reports"
report_file = report_dir / f"Excel_Analysis_{today:%b-%d-%Y}.xlsx"
df = pd.read_pickle(in_file)
# ### Perform Data Analysis
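The analysis itself depends on the processed data, but as a hedged placeholder (the frame and column names below are invented; the real `df` comes from the pickle above), a typical first pass might look like:

```python
import pandas as pd

# Stand-in for the loaded summary frame; real columns come from notebook 1-Data_Prep
demo = pd.DataFrame({'category': ['a', 'b', 'a', 'b'],
                     'amount': [10, 20, 30, 40]})

# Overall summary statistics, then a per-group aggregate
print(demo.describe())
summary = demo.groupby('category')['amount'].agg(['count', 'mean', 'sum'])
print(summary)
```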
# ### Save Excel file into reports directory
#
# Save an Excel file with intermediate results into the report directory
writer = pd.ExcelWriter(report_file, engine='xlsxwriter')
df.to_excel(writer, sheet_name='Report')
writer.save()
|
{{cookiecutter.directory_name}}/.ipynb_checkpoints/2-EDA-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %config ZMQInteractiveShell.ast_node_interactivity='all'
# %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import os
import datetime  # needed by get_time() below
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats as spstats
from sklearn import metrics
from sklearn.base import BaseEstimator, ClassifierMixin, TransformerMixin, clone
from sklearn.preprocessing import Imputer, LabelEncoder, PolynomialFeatures
from sklearn.model_selection import KFold, StratifiedKFold, RandomizedSearchCV
from sklearn.externals import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
# +
target = '标签'  # "label" column
uid = '申请编号'  # "application ID" column
def get_time():
now = datetime.datetime.now().strftime("%m-%d %H:%M")
print(now)
def calc_auc(y_test, y_proba):
auc = round(metrics.roc_auc_score(y_test, y_proba), 3)
return auc
def ks_score(y_test, y_proba):
scale = 4
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_proba, pos_label=1)
KS = round(max(list(tpr-fpr)), scale)
return KS
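# A quick sanity check of the two metric helpers on a toy example (the values below are illustrative, not from the project data):

```python
from sklearn import metrics

y_test = [0, 0, 1, 1, 0, 1]
y_proba = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]

# Same computation as calc_auc: area under the ROC curve, rounded to 3 decimals.
auc = round(metrics.roc_auc_score(y_test, y_proba), 3)

# Same computation as ks_score: the KS statistic is the maximum vertical
# gap between the cumulative TPR and FPR curves.
fpr, tpr, _ = metrics.roc_curve(y_test, y_proba, pos_label=1)
ks = round(max(tpr - fpr), 4)

print(auc, ks)
```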
# +
########## Grid Search
scale_pos_weight = 119/21
cv = 5
param_general = {
'n_iter' : 50,
'cv' : cv,
'scoring' : 'roc_auc',
'n_jobs' : -1,
'random_state' : 123,
'verbose' : 1}
# RF
param_dist_rf = {
# Shape
'n_estimators' : range(50, 500, 50),
# 'n_estimators' : range(5, 10),
'max_depth' : range(3, 10),
'min_samples_split' : range(50, 100, 10),
'min_samples_leaf' : range(50, 100, 10),
# Sample
'class_weight' : ['balanced', None],
'max_features' : ['sqrt', 'log2'],
# Objective
'criterion' : ['gini', 'entropy']
}
# XGB
param_dist_xgb = {
# Shape
'n_estimators' : range(50, 500, 50),
# 'n_estimators' : range(5, 10),
'max_depth' : range(3, 10),
    'min_child_weight' : range(1, 9, 1), # minimum sum of sample weights in a leaf node
# Sample
'scale_pos_weight' : [scale_pos_weight, 1],
'subsample' : np.linspace(0.5, 0.9, 5),
'colsample_bytree' : np.linspace(0.5, 0.9, 5),
'colsample_bylevel' : np.linspace(0.5, 0.9, 5),
# Algo
'eta' : np.linspace(0.01, 0.2, 20), # Learning_rate
'alpha' : np.linspace(0, 1, 10),
'lambda' : range(0, 50, 5),
'early_stopping_rounds' : range(10, 20, 5)
}
# LGB
param_dist_lgb = {
# Shape
'num_boost_round' : range(50, 500, 50),
# 'num_boost_round' : range(50, 100, 10),
'num_leaves' : range(2**3, 2**10, 100),
'min_data_in_leaf' : range(50, 100, 10),
    'min_child_weight' : range(1, 9, 1), # minimum sum of sample weights in a leaf node
# Sample
'is_unbalance' : [True, False],
'bagging_freq': range(2, 10), # >0 enable bagging_fraction
'bagging_fraction': np.linspace(0.5, 0.9, 5),
'feature_fraction': np.linspace(0.5, 0.9, 5),
'subsample' : np.linspace(0.5, 0.9, 5),
# Algo
'learning_rate':np.linspace(0.01, 0.2, 20),
'lambda_l1': np.linspace(0, 1, 10),
'lambda_l2': range(0, 50, 5),
'cat_smooth': range(1, 40, 5)
# 'early_stopping_rounds' : range(10, 20, 5)
}
param_dist_lr = {
# Shape
'max_iter' : range(50, 500, 50),
# Sample
'class_weight' : [scale_pos_weight, 1],
# Algo
'solver' : ['sag', 'lbfgs', 'newton-cg'],
'C': [0.001, 0.01, 0.1, 1, 10] # 1/λ
}
##########
# RF
param_fixed_rf = {
'n_jobs' : -1,
'oob_score' : True,
'random_state':123,
'verbose':0
}
# XGB
param_fixed_xgb = {
'n_jobs' : -1,
'eval_metric': 'auc',
'seed' : 123,
'silent' : 1,
'verbose_eval':0
}
# LGB
param_fixed_lgb = {
'n_jobs' : -1,
'metric' : 'auc',
'random_state' : 123,
'bagging_seed':123,
'feature_fraction_seed':123,
'verbose_eval' : 0
}
# LR
param_fixed_lr = {
'n_jobs' : -1,
'random_state' : 123,
'verbose' : 0
}
# +
################ Load Features
''' *** Without NA *** '''
''' Load '''
Xid = pd.read_csv('./tmp/train_d1234_nona.csv', header=0, index_col=0)
Xid.shape
yid = pd.read_csv('./data/train_label.csv', header=0, index_col=0)
yid.shape
''' Merge '''
xy = pd.merge(Xid, yid, on=uid, how='inner')
xy.drop(uid, axis=1, inplace=True)
xy.shape
''' Split '''
# X, y
X = xy.copy()
y = X.pop(target)
X.shape
y.shape
''' *** With NA *** '''
''' Load '''
Xid1 = pd.read_csv('./tmp/train_d1234_na.csv', header=0, index_col=0)
Xid1.shape
''' Merge '''
xy1 = pd.merge(Xid1, yid, on=uid, how='inner')
xy1.drop(uid, axis=1, inplace=True)
xy1.shape
''' Split '''
# X, y
X1 = xy1.copy()
y1 = X1.pop(target)
X1.shape
y1.shape
# +
################ Important Features ################
######## Base Models ########
### RF ###
best_params_load = np.load('./model/base_rf.npy', allow_pickle=True).item()
model_params = {**best_params_load, **param_fixed_rf}
RF = RandomForestClassifier(**model_params)
# Train
RF.fit(X, y)
# Importance
f = pd.DataFrame(X.columns, columns=['feature'])
score = pd.DataFrame(RF.feature_importances_, columns=['rf'])
fscore_rf = pd.concat([f, score], axis=1).sort_values(by='rf', ascending=False).reset_index(drop=True)
fscore_rf.head()
### XGB ###
# XGB
best_params_load = np.load('./model/base_xgb.npy', allow_pickle=True).item()
model_params = {**best_params_load, **param_fixed_xgb}
XGB = XGBClassifier(**model_params)
# Train
XGB.fit(X, y)
# Importance
f = pd.DataFrame(X.columns, columns=['feature'])
score = pd.DataFrame(XGB.feature_importances_, columns=['xgb'])
fscore_xgb = pd.concat([f, score], axis=1).sort_values(by='xgb', ascending=False).reset_index(drop=True)
fscore_xgb.head()
### LGB ###
# LGB
best_params_load = np.load('./model/base_lgb.npy', allow_pickle=True).item()
model_params = {**best_params_load, **param_fixed_lgb}
LGB = LGBMClassifier(**model_params)
# Train
LGB.fit(X, y)
# Importance
f = pd.DataFrame(X.columns, columns=['feature'])
score = pd.DataFrame(LGB.feature_importances_, columns=['lgb'])
fscore_lgb = pd.concat([f, score], axis=1).sort_values(by='lgb', ascending=False).reset_index(drop=True)
fscore_lgb.head()
### LGB with Na ###
best_params_load = np.load('./model/base_lgb.npy', allow_pickle=True).item()
model_params = {**best_params_load, **param_fixed_lgb}
LGB = LGBMClassifier(**model_params)
# Train
LGB.fit(X1, y1)
# Importance
f = pd.DataFrame(X1.columns, columns=['feature'])  # use X1: the with-NA feature set
score = pd.DataFrame(LGB.feature_importances_, columns=['lgbna'])
fscore_lgb_na = pd.concat([f, score], axis=1).sort_values(by='lgbna', ascending=False).reset_index(drop=True)
fscore_lgb_na.head()
######## correlations ########
correlations = xy.corr()
# Save
correlations.apply(abs).to_csv('./tmp/0_correlations_abs.csv')
correlations.to_csv('./tmp/0_correlations.csv')
# Abs
correlations_target_abs = correlations.loc[correlations.index != target, target].apply(abs).sort_values(ascending=False)
f = pd.DataFrame(correlations_target_abs.index, columns=['feature'])
score_corr = pd.DataFrame(correlations_target_abs.values, columns=['corr'])
fscore_corr = pd.concat([f, score_corr], axis=1).sort_values(by='corr', ascending=False).reset_index(drop=True)
fscore_corr.fillna(0, inplace=True)
fscore_corr.head()
######## Merge ########
fscore = pd.merge(fscore_corr, fscore_rf, on='feature')
fscore = pd.merge(fscore, fscore_xgb, on='feature')
fscore = pd.merge(fscore, fscore_lgb, on='feature')
fscore = pd.merge(fscore, fscore_lgb_na, on='feature')
# Add rank
frank = fscore.rank(numeric_only=True, method='min', ascending=False)
# fscore.fillna(0, inplace=True)
fscore = pd.merge(fscore, frank, left_index=True, right_index=True, suffixes=['', '_rank'])
fscore['rank'] = fscore['corr_rank'] + fscore['rf_rank'] + fscore['xgb_rank'] + fscore['lgb_rank'] + fscore['lgbna_rank']
fscore.sort_values(by='rank', inplace=True)
fscore.shape
fscore.head()
fscore.to_csv('./model/f_score.csv')
''' Describe '''
fscore[['corr', 'rf', 'xgb', 'lgb', 'lgbna', 'rank']].describe()
# +
######## Intersection by Score ########
# By score
# th_imp = {'corr':0.01, 'rf':0.001, 'xgb':0, 'lgb':0, 'lgbna':0} # 453, 198
# th_imp = {'corr':0.03, 'rf':0.001, 'xgb':0, 'lgb':0, 'lgbna':0} # 453, 198
# th_imp = {'corr':0.05, 'rf':0.001, 'xgb':0, 'lgb':0, 'lgbna':0} # 121, 198
# th_imp = {'corr':0.06, 'rf':0.001, 'xgb':0, 'lgb':0, 'lgbna':0} # 76, 198
th_imp = {'corr':0.07, 'rf':0.001, 'xgb':0, 'lgb':0, 'lgbna':0} # 54, 198
# th_imp = {'corr':0.075, 'rf':0.001, 'xgb':0, 'lgb':0, 'lgbna':0} # 45, 198
top_f = {}
cnt_f = {}
inter_cnt = {}
for k in 'corr', 'rf', 'xgb', 'lgb', 'lgbna':
# top
t = fscore.loc[fscore[f'{k}']>=th_imp[k], 'feature']
top_f[k] = set(t) # set
# len
cnt_f[f'cnt_{k}'] = len(t)
# # intersection with Corr
# inter_cnt[f'corr_{k}'] = round(len(top_f['corr'].intersection(top_f[k]))/len(top_f['corr']), 2)
# Intersection
inter_cnt['corr/all'] = round(len(top_f['corr'].intersection(top_f['lgb']))/len(top_f['lgb']), 2)
inter_cnt['rf/all'] = round(len(top_f['rf'].intersection(top_f['lgb']))/len(top_f['lgb']), 2)
inter_cnt['rf/corr'] = round(len(top_f['rf'].intersection(top_f['corr']))/len(top_f['corr']), 2)
# inter_cnt['rf_xgb'] = round(len(top_f['rf'].intersection(top_f['xgb']))/len(top_f['rf']), 2)
# inter_cnt['rf_lgb'] = round(len(top_f['rf'].intersection(top_f['lgb']))/len(top_f['rf']), 2)
# inter_cnt['lgb_rf'] = round(len(top_f['rf'].intersection(top_f['lgb']))/len(top_f['lgb']), 2)
# inter_cnt['rf_lgbna'] = round(len(top_f['rf'].intersection(top_f['lgbna']))/len(top_f['rf']), 2)
# inter_cnt['lgbna_rf'] = round(len(top_f['rf'].intersection(top_f['lgbna']))/len(top_f['lgbna']), 2)
# inter_cnt['xgb_lgb'] = round(len(top_f['xgb'].intersection(top_f['lgb']))/len(top_f['xgb']), 2)
# inter_cnt['lgb_xgb'] = round(len(top_f['xgb'].intersection(top_f['lgb']))/len(top_f['lgb']), 2)
# inter_cnt['lgb_lgbna'] = round(len(top_f['lgb'].intersection(top_f['lgbna']))/len(top_f['lgb']), 2)
''' Corr '''
pd.DataFrame(cnt_f, index=['Count'])
pd.DataFrame(inter_cnt, index=['Intersection'])
# ''' all VS corr '''
# top = {}
# # top['all'] = top_f['rf'].union(top_f['xgb']).union(top_f['lgb'])
# diff = top_f['corr'].difference(top['rf'])
# len(diff)
# # diff
### Save
for k, v in top_f.items():
print(f'{k}:{len(v)}')
np.save('./model/base_features.npy', top_f)
# # # + diff
# ''' top_f_final '''
# top_f_final = {} # features selected for modeling
# for k in 'rf', 'xgb', 'lgb':
#     top_f_final[k] = top_f[k].union(diff) # save the feature list
# len(top_f_final[k])
# ### Save
# np.save('./model/base_features.npy', top_f_final)
# +
# ####### Polynomial
# m = 50 # intersection rank threshold
# n = 10 # union rank threshold
# top_f_inters = set(fscore.loc[(fscore['corr_rank'] <= m) &
# (fscore['rf_rank'] <= m) &
# (fscore['xgb_rank'] <= m) &
# (fscore['lgb_rank'] <= m), 'feature'])
# len(top_f_inters)
# top_f_union = set(fscore.loc[(fscore['corr_rank'] <= n) |
# (fscore['rf_rank'] <= n) |
# (fscore['xgb_rank'] <= n) |
# (fscore['lgb_rank'] <= n), 'feature'])
# len(top_f_union)
# top_poly = top_f_inters.union(top_f_union)
# len(top_poly)
# top_poly
# np.save('./tmp/0_feats_poly.npy', top_poly)
# +
# ############## RF
# ''' Baseline '''
# baseline = RandomForestClassifier(**param_fixed_rf)
# baseline.fit(X, y)
# pred_baseline = baseline.predict_proba(X)
# ks_score(y, pred_baseline[:,1])
# ''' Best '''
# grid = RandomizedSearchCV(RandomForestClassifier(**param_fixed_rf), param_dist_rf, **param_general)
# grid.fit(X, y)
# grid.best_score_
# best_params = grid.best_params_
# np.save('./model/base_rf.npy', best_params)
# # ''' Test Clone Model '''
# # model1 = grid.best_estimator_
# # model1.fit(X, y)
# # ks_score(y, model1.predict_proba(X)[:,1])
# #
# # ''' Test Save Params '''
# # best_params_load = np.load('./model/base_rf.npy', allow_pickle=True).item()
# # model2_params = {**best_params_load, **param_fixed_rf}
# # model2 = RandomForestClassifier(**model2_params)
# # model2.fit(X, y)
# # ks_score(y, model2.predict_proba(X)[:,1])
# +
# ############## XGB
# ''' Baseline '''
# baseline = XGBClassifier(**param_fixed_xgb)
# baseline.fit(X, y)
# pred_baseline = baseline.predict_proba(X)
# ks_score(y, pred_baseline[:,1])
# ''' Best '''
# grid = RandomizedSearchCV(XGBClassifier(**param_fixed_xgb), param_dist_xgb, **param_general)
# grid.fit(X, y)
# grid.best_score_
# best_params = grid.best_params_
# np.save('./model/base_xgb.npy', best_params)
# # ''' Test Clone Model '''
# # model1 = grid.best_estimator_
# # model1.fit(X, y)
# # ks_score(y, model1.predict_proba(X)[:,1])
# # ''' Test Save Params '''
# # best_params_load = np.load('./model/base_xgb.npy', allow_pickle=True).item()
# # model2_params = {**best_params_load, **param_fixed_xgb}
# # model2 = XGBClassifier(**model2_params)
# # model2.fit(X, y)
# # ks_score(y, model2.predict_proba(X)[:,1])
# +
# ############## LGB
# ''' Baseline '''
# baseline = LGBMClassifier(**param_fixed_lgb)
# baseline.fit(X, y)
# pred_baseline = baseline.predict_proba(X) #, num_iteration=baseline.best_iteration_)
# ks_score(y, pred_baseline[:,1])
# ''' Best '''
# grid = RandomizedSearchCV(LGBMClassifier(**param_fixed_lgb), param_dist_lgb, **param_general)
# grid.fit(X, y)
# grid.best_score_
# best_params = grid.best_params_
# np.save('./model/base_lgb.npy', best_params)
# # ''' Test Clone Model '''
# # model1 = grid.best_estimator_
# # model1.fit(X, y)
# # ks_score(y, model1.predict_proba(X)[:,1])
# # ''' Test Save Params '''
# # best_params_load = np.load('./model/base_lgb.npy', allow_pickle=True).item()
# # model2_params = {**best_params_load, **param_fixed_lgb}
# # model2 = LGBMClassifier(**model2_params)
# # model2.fit(X, y)
# # ks_score(y, model2.predict_proba(X)[:,1])
# +
############## LR
''' Baseline '''
baseline = LogisticRegression(**param_fixed_lr)
baseline.fit(X, y)
pred_baseline = baseline.predict_proba(X)
ks_score(y, pred_baseline[:,1])
''' Best '''
grid = RandomizedSearchCV(LogisticRegression(**param_fixed_lr), param_dist_lr, **param_general)
grid.fit(X, y)
grid.best_score_
best_params = grid.best_params_
np.save('./model/base_lr.npy', best_params)
''' Test Clone Model '''
model1 = grid.best_estimator_
model1.fit(X, y)
ks_score(y, model1.predict_proba(X)[:,1])
''' Test Save Params '''
best_params_load = np.load('./model/base_lr.npy', allow_pickle=True).item()
model2_params = {**best_params_load, **param_fixed_lr}
model2 = LogisticRegression(**model2_params)
model2.fit(X, y)
ks_score(y, model2.predict_proba(X)[:,1])
# +
# ############## LR Meta
# X_meta = pd.read_csv('./tmp/meta_X.csv', header=0, index_col=0).values
# poly = PolynomialFeatures(2, interaction_only=True, include_bias=False)
# X_meta = poly.fit_transform(X_meta)
# X_meta.shape
# y_meta = y.values
# ''' Baseline '''
# baseline = LogisticRegression(**param_fixed_lr)
# baseline.fit(X_meta, y)
# pred_baseline = baseline.predict_proba(X_meta)
# ks_score(y, pred_baseline[:,1])
# ''' Best '''
# grid = RandomizedSearchCV(LogisticRegression(**param_fixed_lr), param_dist_lr, **param_general)
# grid.fit(X_meta, y)
# grid.best_score_
# best_params = grid.best_params_
# np.save('./model/base_lr_meta.npy', best_params)
# ''' Test Clone Model '''
# model1 = grid.best_estimator_
# model1.fit(X_meta, y)
# ks_score(y, model1.predict_proba(X_meta)[:,1])
# ''' Test Save Params '''
# best_params_load = np.load('./model/base_lr_meta.npy', allow_pickle=True).item()
# model2_params = {**best_params_load, **param_fixed_lr}
# model2 = LogisticRegression(**model2_params)
# model2.fit(X_meta, y)
# ks_score(y, model2.predict_proba(X_meta)[:,1])
# +
# ######## Test Meta K-fold
# X_meta = pd.read_csv('./tmp/meta_X.csv', header=0, index_col=0).values
# poly = PolynomialFeatures(3, interaction_only=True)
# X_meta = poly.fit_transform(X_meta)[:,1:]
# X_meta.shape
# y_meta = y.values
# # LR
# best_params_load = np.load('./model/base_lr.npy', allow_pickle=True).item()
# model_params = {**best_params_load, **param_fixed_lr}
# LR = LogisticRegression(**model_params)
# # Tune
# ks = []
# meta_model = LR
# kfold = KFold(n_splits=5, shuffle=True, random_state=123)
# j = 0
# meta_models_ = []
# for train_index, valid_index in kfold.split(X_meta, y_meta):
# instance = clone(meta_model)
# meta_models_.append(instance)
# instance.fit(X_meta[train_index], y_meta[train_index])
# y_pred = instance.predict_proba(X_meta[valid_index])[:,1]
# ks.append(ks_score(y_meta[valid_index], y_pred))
# print(ks)
# j += 1
# pd.DataFrame(ks)
|
2_modeling_base.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="vkdnLiKk71g-"
# ##### Copyright 2021 The TensorFlow Authors.
# + cellView="form" id="0asMuNro71hA"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="iPFgLeZIsZ3Q"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithm_with_tff_optimizers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/main/docs/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/federated/blob/main/docs/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="4Zv28F7QLo8O"
# # Use TFF optimizers in custom iterative process
#
# This tutorial is an alternative to the [Build Your Own Federated Learning Algorithm](building_your_own_federated_learning_algorithm.ipynb) tutorial and the [simple_fedavg](https://github.com/tensorflow/federated/tree/main/tensorflow_federated/python/examples/simple_fedavg) example for building a custom iterative process for the [federated averaging](https://arxiv.org/abs/1602.05629) algorithm. It uses [TFF optimizers](https://github.com/tensorflow/federated/tree/main/tensorflow_federated/python/learning/optimizers) instead of Keras optimizers.
# The TFF optimizer abstraction is designed to be state-in-state-out, which makes it easier to incorporate into a TFF iterative process. The `tff.learning` APIs also accept TFF optimizers as an input argument.
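# The state-in-state-out pattern can be sketched in plain Python (a minimal illustration of the abstraction only, not the actual TFF implementation; the function names here are made up):

```python
def sgdm_initialize(num_weights):
    # The optimizer owns no variables; its entire state is returned explicitly.
    return {'momentum': [0.0] * num_weights}

def sgdm_next(state, weights, grads, lr=0.1, beta=0.9):
    # Pure function: state in, (new state, new weights) out.
    momentum = [beta * m + g for m, g in zip(state['momentum'], grads)]
    weights = [w - lr * m for w, m in zip(weights, momentum)]
    return {'momentum': momentum}, weights
```

# Because the optimizer keeps no hidden variables, its state can be threaded through an iterative process just like model weights.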
# + [markdown] id="MnUwFbCAKB2r"
# ## Before we start
#
# Before we start, please run the following to make sure that your environment is
# correctly setup. If you don't see a greeting, please refer to the
# [Installation](../install.md) guide for instructions.
# + id="ZrGitA_KnRO0"
#@test {"skip": true}
# !pip install --quiet --upgrade tensorflow-federated-nightly
# !pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
# + id="HGTM6tWOLo8M"
import functools
import attr
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
# + [markdown] id="hQ_N9XbULo8P"
# ## Preparing data and model
# The EMNIST data processing and model are very similar to the [simple_fedavg](https://github.com/tensorflow/federated/tree/main/tensorflow_federated/python/examples/simple_fedavg) example.
# + id="Blrh8zJgLo8R"
only_digits=True
# Load dataset.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(only_digits)
# Define preprocessing functions.
def preprocess_fn(dataset, batch_size=16):
def batch_format_fn(element):
return (tf.expand_dims(element['pixels'], -1), element['label'])
return dataset.batch(batch_size).map(batch_format_fn)
# Preprocess and sample clients for prototyping.
train_client_ids = sorted(emnist_train.client_ids)
train_data = emnist_train.preprocess(preprocess_fn)
central_test_data = preprocess_fn(
emnist_train.create_tf_dataset_for_client(train_client_ids[0]))
# Define model.
def create_keras_model():
"""The CNN model used in https://arxiv.org/abs/1602.05629."""
data_format = 'channels_last'
input_shape = [28, 28, 1]
max_pool = functools.partial(
tf.keras.layers.MaxPooling2D,
pool_size=(2, 2),
padding='same',
data_format=data_format)
conv2d = functools.partial(
tf.keras.layers.Conv2D,
kernel_size=5,
padding='same',
data_format=data_format,
activation=tf.nn.relu)
model = tf.keras.models.Sequential([
conv2d(filters=32, input_shape=input_shape),
max_pool(),
conv2d(filters=64),
max_pool(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10 if only_digits else 62),
])
return model
# Wrap as `tff.learning.Model`.
def model_fn():
keras_model = create_keras_model()
return tff.learning.from_keras_model(
keras_model,
input_spec=central_test_data.element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# + [markdown] id="fPOWP2JjsfTk"
# ## Custom iterative process
#
# + [markdown] id="50N36Zz8qyY-"
# In many cases, federated algorithms have 4 main components:
#
# 1. A server-to-client broadcast step.
# 2. A local client update step.
# 3. A client-to-server upload step.
# 4. A server update step.
#
# In TFF, we generally represent federated algorithms as a [`tff.templates.IterativeProcess`](https://www.tensorflow.org/federated/api_docs/python/tff/templates/IterativeProcess) (which we refer to as just an `IterativeProcess` throughout). This is a class that contains `initialize` and `next` functions. Here, `initialize` is used to initialize the server, and `next` will perform one communication round of the federated algorithm.
#
# We will introduce the different components needed to build the federated averaging (FedAvg) algorithm, which uses one optimizer in the client update step and another optimizer in the server update step. The core logic of the client and server updates can be expressed as pure TF blocks.
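# The four steps above can be sketched with NumPy for a toy linear model (an illustrative sketch only; the real algorithm built below uses TFF and TF models):

```python
import numpy as np

def fedavg_round(server_weights, client_datasets, lr=0.05):
    deltas = []
    for data in client_datasets:
        w = server_weights.copy()            # 1. broadcast: client starts from server weights
        for x, y in data:                    # 2. local update: SGD on squared error
            w -= lr * 2 * (w @ x - y) * x
        deltas.append(w - server_weights)    # 3. upload: client sends its model delta
    return server_weights + np.mean(deltas, axis=0)  # 4. server update: average the deltas
```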
# + [markdown] id="bxpNYucgLo8g"
# ### TF blocks: client and server update
#
# On each client, a local `client_optimizer` is initialized and used to update the client model weights. On the server, `server_optimizer` will use the state from the *previous* round, and update the state for the next round.
# + id="c5rHPKreLo8g"
@tf.function
def client_update(model, dataset, server_weights, client_optimizer):
"""Performs local training on the client's dataset."""
# Initialize the client model with the current server weights.
client_weights = model.trainable_variables
# Assign the server weights to the client model.
tf.nest.map_structure(lambda x, y: x.assign(y),
client_weights, server_weights)
# Initialize the client optimizer.
trainable_tensor_specs = tf.nest.map_structure(
lambda v: tf.TensorSpec(v.shape, v.dtype), client_weights)
optimizer_state = client_optimizer.initialize(trainable_tensor_specs)
# Use the client_optimizer to update the local model.
for batch in iter(dataset):
with tf.GradientTape() as tape:
# Compute a forward pass on the batch of data.
outputs = model.forward_pass(batch)
# Compute the corresponding gradient.
grads = tape.gradient(outputs.loss, client_weights)
# Apply the gradient using a client optimizer.
optimizer_state, updated_weights = client_optimizer.next(
optimizer_state, client_weights, grads)
tf.nest.map_structure(lambda a, b: a.assign(b),
client_weights, updated_weights)
# Return model deltas.
return tf.nest.map_structure(tf.subtract, client_weights, server_weights)
# + id="rYxErLvHLo8i"
@attr.s(eq=False, frozen=True, slots=True)
class ServerState(object):
trainable_weights = attr.ib()
optimizer_state = attr.ib()
@tf.function
def server_update(server_state, mean_model_delta, server_optimizer):
"""Updates the server model weights."""
# Use aggregated negative model delta as pseudo gradient.
negative_weights_delta = tf.nest.map_structure(
lambda w: -1.0 * w, mean_model_delta)
new_optimizer_state, updated_weights = server_optimizer.next(
server_state.optimizer_state, server_state.trainable_weights,
negative_weights_delta)
return tff.structure.update_struct(
server_state,
trainable_weights=updated_weights,
optimizer_state=new_optimizer_state)
# + [markdown] id="g0zNTO7LLo84"
# ### TFF blocks: `tff.tf_computation` and `tff.federated_computation`
#
# We now use TFF for orchestration and build the iterative process for FedAvg. We have to wrap the TF blocks defined above with `tff.tf_computation`, and use the TFF methods `tff.federated_broadcast`, `tff.federated_map`, and `tff.federated_mean` in a `tff.federated_computation` function. The `tff.learning.optimizers.Optimizer` APIs, with their `initialize` and `next` functions, are straightforward to use when defining a custom iterative process.
# + id="jJY9xUBZLo84"
# 1. Server and client optimizer to be used.
server_optimizer = tff.learning.optimizers.build_sgdm(
learning_rate=0.05, momentum=0.9)
client_optimizer = tff.learning.optimizers.build_sgdm(
learning_rate=0.01)
# 2. Functions return initial state on server.
@tff.tf_computation
def server_init():
model = model_fn()
trainable_tensor_specs = tf.nest.map_structure(
lambda v: tf.TensorSpec(v.shape, v.dtype), model.trainable_variables)
optimizer_state = server_optimizer.initialize(trainable_tensor_specs)
return ServerState(
trainable_weights=model.trainable_variables,
optimizer_state=optimizer_state)
@tff.federated_computation
def server_init_tff():
return tff.federated_value(server_init(), tff.SERVER)
# 3. One round of computation and communication.
server_state_type = server_init.type_signature.result
print('server_state_type:\n',
server_state_type.formatted_representation())
trainable_weights_type = server_state_type.trainable_weights
print('trainable_weights_type:\n',
trainable_weights_type.formatted_representation())
# 3-1. Wrap server and client TF blocks with `tff.tf_computation`.
@tff.tf_computation(server_state_type, trainable_weights_type)
def server_update_fn(server_state, model_delta):
return server_update(server_state, model_delta, server_optimizer)
whimsy_model = model_fn()
tf_dataset_type = tff.SequenceType(whimsy_model.input_spec)
print('tf_dataset_type:\n',
tf_dataset_type.formatted_representation())
@tff.tf_computation(tf_dataset_type, trainable_weights_type)
def client_update_fn(dataset, server_weights):
model = model_fn()
return client_update(model, dataset, server_weights, client_optimizer)
# 3-2. Orchestration with `tff.federated_computation`.
federated_server_type = tff.FederatedType(server_state_type, tff.SERVER)
federated_dataset_type = tff.FederatedType(tf_dataset_type, tff.CLIENTS)
@tff.federated_computation(federated_server_type, federated_dataset_type)
def run_one_round(server_state, federated_dataset):
# Server-to-client broadcast.
server_weights_at_client = tff.federated_broadcast(
server_state.trainable_weights)
# Local client update.
model_deltas = tff.federated_map(
client_update_fn, (federated_dataset, server_weights_at_client))
# Client-to-server upload and aggregation.
mean_model_delta = tff.federated_mean(model_deltas)
# Server update.
server_state = tff.federated_map(
server_update_fn, (server_state, mean_model_delta))
return server_state
# 4. Build the iterative process for FedAvg.
fedavg_process = tff.templates.IterativeProcess(
initialize_fn=server_init_tff, next_fn=run_one_round)
print('type signature of `initialize`:\n',
fedavg_process.initialize.type_signature.formatted_representation())
print('type signature of `next`:\n',
fedavg_process.next.type_signature.formatted_representation())
# + [markdown] id="4UYZ3qeMLo9N"
# ## Evaluating the algorithm
# + [markdown] id="jwd9Gs0ULo9O"
# We evaluate the performance on a centralized evaluation dataset.
# + id="EdNgYoIwLo9P"
def evaluate(server_state):
keras_model = create_keras_model()
tf.nest.map_structure(
lambda var, t: var.assign(t),
keras_model.trainable_weights, server_state.trainable_weights)
metric = tf.keras.metrics.SparseCategoricalAccuracy()
for batch in iter(central_test_data):
preds = keras_model(batch[0], training=False)
metric.update_state(y_true=batch[1], y_pred=preds)
return metric.result().numpy()
# + id="CDarZn71G2mH"
server_state = fedavg_process.initialize()
acc = evaluate(server_state)
print('Initial test accuracy', acc)
# Evaluate after a few rounds
CLIENTS_PER_ROUND=2
sampled_clients = train_client_ids[:CLIENTS_PER_ROUND]
sampled_train_data = [
train_data.create_tf_dataset_for_client(client)
for client in sampled_clients]
for _ in range(20):  # loop variable unused; avoid shadowing the built-in round
server_state = fedavg_process.next(server_state, sampled_train_data)
acc = evaluate(server_state)
print('Test accuracy', acc)
|
site/en-snapshot/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Export from Labelbox to Voxel51
# After you have finished labeling data in Labelbox, this notebook lets you import the labels back into a Voxel51 Dataset.
# In the Labelbox web UI, export the project and download the JSON file.
labelboxExportJson = "/tf/notebooks/export-2021-02-01T01-34-34.538Z.json" # Download the exported JSON and update this
dataset_name = "test-dataset" # The name of the V51 Dataset to use
labelbox_id_field = "labelbox_id" # V51 Sample field where the corresponding Labelbox ID was saved when it was uploaded to Labelbox
# ## Labelbox Export
# Imports and configuration
import fiftyone as fo
# +
# Do the groundwork for importing, setup the dataset
import fiftyone.utils.labelbox as foul
from uuid import uuid4
# expect an error here if the dataset does not exist yet
dataset = fo.load_dataset(dataset_name)
# -
dataset.add_sample_field(labelbox_id_field, fo.StringField)
# Imports the Data from Labelbox into a Voxel51 Dataset
foul.import_from_labelbox(dataset, labelboxExportJson, labelbox_id_field=labelbox_id_field, download_dir="/tf/media")
# ### Examine the results
session = fo.launch_app(dataset, auto=False)
# ## Post Processing
# You may want to do some additional data munging. I added a tag based on whether a plane was labeled or skipped in Labelbox.
# +
# Add a label & tag that captures if the image was skipped, indicating there was no plane, or accepted, indicating there was a plane
from fiftyone import ViewField as F
label_field = "plane_ground_truth"
model_view = dataset.exists("model")
for sample in model_view:
sample[label_field] = fo.Classification(label="plane")
sample.tags.append("plane")
sample.save()
skipped_view = dataset.match({"model": {"$exists": False, "$eq": None}})
for sample in skipped_view:
#print(sample)
sample[label_field] = fo.Classification(label="noplane")
sample.tags.append("noPlane")
sample.save()
|
ml-model/notebooks/Export from Labelbox to Voxel51.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="vXLA5InzXydn"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="RuRlpLL-X0R_"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="1mLJmVotXs64"
# # Fine-tuning a BERT model
# + [markdown] colab_type="text" id="hYEwGTeCXnnX"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/official_models/tutorials/fine_tune_bert.ipynb"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/official/colab/fine_tuning_bert.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/models/blob/master/official/colab/fine_tuning_bert.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/models/official/colab/fine_tuning_bert.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="YN2ACivEPxgD"
# In this example, we will work through fine-tuning a BERT model using the tensorflow-models PIP package.
#
# The pretrained BERT model this tutorial is based on is also available on [TensorFlow Hub](https://tensorflow.org/hub). To see how to use it, refer to the [Hub Appendix](#hub_bert).
# + [markdown] colab_type="text" id="s2d9S2CSSO1z"
# ## Setup
# + [markdown] colab_type="text" id="fsACVQpVSifi"
# ### Install the TensorFlow Model Garden pip package
#
# * `tf-models-official` is the stable Model Garden package. Note that it may not include the latest changes from the `tensorflow_models` GitHub repo. To include the latest changes, you may install `tf-models-nightly`,
# which is the nightly Model Garden package, created automatically every day.
# * pip will install all models and dependencies automatically.
# + colab={} colab_type="code" id="NvNr2svBM-p3"
# !pip install -q tf-models-official==2.3.0
# + [markdown] colab_type="text" id="U-7qPCjWUAyy"
# ### Imports
# + colab={} colab_type="code" id="lXsXev5MNr20"
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from official.modeling import tf_utils
from official import nlp
from official.nlp import bert
# Load the required submodules
import official.nlp.optimization
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization
import official.nlp.data.classifier_data_lib
import official.nlp.modeling.losses
import official.nlp.modeling.models
import official.nlp.modeling.networks
# + [markdown] colab_type="text" id="mbanlzTvJBsz"
# ### Resources
# + [markdown] colab_type="text" id="PpW0x8TpR8DT"
# This directory contains the configuration, vocabulary, and a pre-trained checkpoint used in this tutorial:
# + colab={} colab_type="code" id="vzRHOLciR8eq"
gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12"
tf.io.gfile.listdir(gs_folder_bert)
# + [markdown] colab_type="text" id="9uFskufsR2LT"
# You can get a pre-trained BERT encoder from [TensorFlow Hub](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2):
# + colab={} colab_type="code" id="e0dAkUttJAzj"
hub_url_bert = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2"
# + [markdown] colab_type="text" id="Qv6abtRvH4xO"
# ## The data
# For this example we used the [GLUE MRPC dataset from TFDS](https://www.tensorflow.org/datasets/catalog/glue#gluemrpc).
#
# This dataset is not set up so that it can be directly fed into the BERT model, so this section also handles the necessary preprocessing.
# + [markdown] colab_type="text" id="28DvUhC1YUiB"
# ### Get the dataset from TensorFlow Datasets
#
# The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#
# * Number of labels: 2.
# * Size of training dataset: 3668.
# * Size of evaluation dataset: 408.
# * Maximum sequence length of training and evaluation dataset: 128.
#
# + colab={} colab_type="code" id="Ijikx5OsH9AT"
glue, info = tfds.load('glue/mrpc', with_info=True,
# It's small, load the whole dataset
batch_size=-1)
# + colab={} colab_type="code" id="xf9zz4vLYXjr"
list(glue.keys())
# + [markdown] colab_type="text" id="ZgBg2r2nYT-K"
# The `info` object describes the dataset and its features:
# + colab={} colab_type="code" id="IQrHxv7W7jH5"
info.features
# + [markdown] colab_type="text" id="vhsVWYNxazz5"
# The two classes are:
# + colab={} colab_type="code" id="n0gfc_VTayfQ"
info.features['label'].names
# + [markdown] colab_type="text" id="38zJcap6xkbC"
# Here is one example from the training set:
# + colab={} colab_type="code" id="xON_i6SkwApW"
glue_train = glue['train']
for key, value in glue_train.items():
print(f"{key:9s}: {value[0].numpy()}")
# + [markdown] colab_type="text" id="9fbTyfJpNr7x"
# ### The BERT tokenizer
# + [markdown] colab_type="text" id="wqeN54S61ZKQ"
# To fine-tune a pre-trained model you need to be sure that you're using exactly the same tokenization, vocabulary, and index mapping as were used during training.
#
# The BERT tokenizer used in this tutorial is written in pure Python (It's not built out of TensorFlow ops). So you can't just plug it into your model as a `keras.layer` like you can with `preprocessing.TextVectorization`.
#
# The following code rebuilds the tokenizer that was used by the base model:
# + colab={} colab_type="code" id="idxyhmrCQcw5"
# Set up tokenizer to generate Tensorflow dataset
tokenizer = bert.tokenization.FullTokenizer(
vocab_file=os.path.join(gs_folder_bert, "vocab.txt"),
do_lower_case=True)
print("Vocab size:", len(tokenizer.vocab))
# + [markdown] colab_type="text" id="zYHDSquU2lDU"
# Tokenize a sentence:
# + colab={} colab_type="code" id="L_OfOYPg853R"
tokens = tokenizer.tokenize("Hello TensorFlow!")
print(tokens)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
# + [markdown] colab_type="text" id="kkAXLtuyWWDI"
# ### Preprocess the data
#
# This section manually preprocesses the dataset into the format expected by the model.
#
# This dataset is small, so preprocessing can be done quickly and easily in memory. For larger datasets the `tf_models` library includes some tools for preprocessing and re-serializing a dataset. See [Appendix: Re-encoding a large dataset](#re_encoding_tools) for details.
# + [markdown] colab_type="text" id="62UTWLQd9-LB"
# #### Encode the sentences
#
# The model expects its two input sentences to be concatenated together. The input is expected to start with a `[CLS]` "This is a classification problem" token, and each sentence should end with a `[SEP]` "Separator" token:
# + colab={} colab_type="code" id="bdL-dRNRBRJT"
tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])
# + [markdown] colab_type="text" id="UrPktnqpwqie"
# Start by encoding all the sentences while appending a `[SEP]` token, and packing them into ragged-tensors:
# + colab={} colab_type="code" id="BR7BmtU498Bh"
def encode_sentence(s):
tokens = list(tokenizer.tokenize(s.numpy()))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
sentence1 = tf.ragged.constant([
encode_sentence(s) for s in glue_train["sentence1"]])
sentence2 = tf.ragged.constant([
encode_sentence(s) for s in glue_train["sentence2"]])
# + colab={} colab_type="code" id="has42aUdfky-"
print("Sentence1 shape:", sentence1.shape.as_list())
print("Sentence2 shape:", sentence2.shape.as_list())
# + [markdown] colab_type="text" id="MU9lTWy_xXbb"
# Now prepend a `[CLS]` token, and concatenate the ragged tensors to form a single `input_word_ids` tensor for each example. `RaggedTensor.to_tensor()` zero pads to the longest sequence.
# + colab={} colab_type="code" id="USD8uihw-g4J"
cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
_ = plt.pcolormesh(input_word_ids.to_tensor())
# + [markdown] colab_type="text" id="xmNv4l4k-dBZ"
# #### Mask and input type
# + [markdown] colab_type="text" id="DIWjNIKq-ldh"
# The model expects two additional inputs:
#
# * The input mask
# * The input type
# + [markdown] colab_type="text" id="ulNZ4U96-8JZ"
# The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the `input_word_ids`, and contains a `1` anywhere the `input_word_ids` is not padding.
# + colab={} colab_type="code" id="EezOO9qj91kP"
input_mask = tf.ones_like(input_word_ids).to_tensor()
plt.pcolormesh(input_mask)
# + [markdown] colab_type="text" id="rxLenwAvCkBf"
# The "input type" also has the same shape, but inside the non-padded region, contains a `0` or a `1` indicating which sentence the token is a part of.
# + colab={} colab_type="code" id="2CetH_5C9P2m"
type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat([type_cls, type_s1, type_s2], axis=-1).to_tensor()
plt.pcolormesh(input_type_ids)
# + [markdown] colab_type="text" id="P5UBnCn8Ii6s"
# #### Put it all together
#
# Collect the above text parsing code into a single function, and apply it to each split of the `glue/mrpc` dataset.
# + colab={} colab_type="code" id="sDGiWYPLEd5a"
def encode_sentence(s, tokenizer):
tokens = list(tokenizer.tokenize(s))
tokens.append('[SEP]')
return tokenizer.convert_tokens_to_ids(tokens)
def bert_encode(glue_dict, tokenizer):
num_examples = len(glue_dict["sentence1"])
sentence1 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in np.array(glue_dict["sentence1"])])
sentence2 = tf.ragged.constant([
encode_sentence(s, tokenizer)
for s in np.array(glue_dict["sentence2"])])
cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
input_mask = tf.ones_like(input_word_ids).to_tensor()
type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat(
[type_cls, type_s1, type_s2], axis=-1).to_tensor()
inputs = {
'input_word_ids': input_word_ids.to_tensor(),
'input_mask': input_mask,
'input_type_ids': input_type_ids}
return inputs
# + colab={} colab_type="code" id="yuLKxf6zHxw-"
glue_train = bert_encode(glue['train'], tokenizer)
glue_train_labels = glue['train']['label']
glue_validation = bert_encode(glue['validation'], tokenizer)
glue_validation_labels = glue['validation']['label']
glue_test = bert_encode(glue['test'], tokenizer)
glue_test_labels = glue['test']['label']
# + [markdown] colab_type="text" id="7FC5aLVxKVKK"
# Each subset of the data has been converted to a dictionary of features, and a set of labels. Each feature in the input dictionary has the same shape, and the number of labels should match:
# + colab={} colab_type="code" id="jyjTdGpFhO_1"
for key, value in glue_train.items():
print(f'{key:15s} shape: {value.shape}')
print(f'glue_train_labels shape: {glue_train_labels.shape}')
# + [markdown] colab_type="text" id="FSwymsbkbLDA"
# ## The model
# + [markdown] colab_type="text" id="Efrj3Cn1kLAp"
# ### Build the model
#
# + [markdown] colab_type="text" id="xxpOY5r2Ayq6"
# The first step is to download the configuration for the pre-trained model.
#
# + colab={} colab_type="code" id="ujapVfZ_AKW7"
import json
bert_config_file = os.path.join(gs_folder_bert, "bert_config.json")
config_dict = json.loads(tf.io.gfile.GFile(bert_config_file).read())
bert_config = bert.configs.BertConfig.from_dict(config_dict)
config_dict
# + [markdown] colab_type="text" id="96ldxDSwkVkj"
# The `config` defines the core BERT model, which is a Keras model that predicts `num_classes` outputs from inputs with maximum sequence length `max_seq_length`.
#
# This function returns both the encoder and the classifier.
# + colab={} colab_type="code" id="cH682__U0FBv"
bert_classifier, bert_encoder = bert.bert_models.classifier_model(
bert_config, num_labels=2)
# + [markdown] colab_type="text" id="XqKp3-5GIZlw"
# The classifier has three inputs and one output:
# + colab={} colab_type="code" id="bAQblMIjwkvx"
tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48)
# + [markdown] colab_type="text" id="sFmVG4SKZAw8"
# Run it on a test batch of 10 examples from the training set. The output is the logits for the two classes:
# + colab={} colab_type="code" id="VTjgPbp4ZDKo"
glue_batch = {key: val[:10] for key, val in glue_train.items()}
bert_classifier(
glue_batch, training=True
).numpy()
# + [markdown] colab_type="text" id="Q0NTdwZsQK8n"
# The `TransformerEncoder` in the center of the classifier above **is** the `bert_encoder`.
#
# Inspecting the encoder, we see its stack of `Transformer` layers connected to those same three inputs:
# + colab={} colab_type="code" id="8L__-erBwLIQ"
tf.keras.utils.plot_model(bert_encoder, show_shapes=True, dpi=48)
# + [markdown] colab_type="text" id="mKAvkQc3heSy"
# ### Restore the encoder weights
#
# When built, the encoder is randomly initialized. Restore the encoder's weights from the checkpoint:
# + colab={} colab_type="code" id="97Ll2Gichd_Y"
checkpoint = tf.train.Checkpoint(model=bert_encoder)
checkpoint.restore(
os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
# + [markdown] colab_type="text" id="2oHOql35k3Dd"
# Note: The pretrained `TransformerEncoder` is also available on [TensorFlow Hub](https://tensorflow.org/hub). See the [Hub appendix](#hub_bert) for details.
# + [markdown] colab_type="text" id="115caFLMk-_l"
# ### Set up the optimizer
#
# BERT adopts the Adam optimizer with weight decay (aka "[AdamW](https://arxiv.org/abs/1711.05101)").
# It also employs a learning rate schedule that first warms up from 0 and then decays to 0.
# + colab={} colab_type="code" id="w8qXKRZuCwW4"
# Set up epochs and steps
epochs = 3
batch_size = 32
eval_batch_size = 32
train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
warmup_steps = int(epochs * train_data_size * 0.1 / batch_size)
# creates an optimizer with learning rate schedule
optimizer = nlp.optimization.create_optimizer(
2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
# + [markdown] colab_type="text" id="pXRGxiRNEHS2"
# This returns an `AdamWeightDecay` optimizer with the learning rate schedule set:
# + colab={} colab_type="code" id="eQNA16bhDpky"
type(optimizer)
# + [markdown] colab_type="text" id="xqu_K71fJQB8"
# To see an example of how to customize the optimizer and its schedule, see the [Optimizer schedule appendix](#optiizer_schedule).
# + [markdown] colab_type="text" id="78FEUOOEkoP0"
# ### Train the model
# + [markdown] colab_type="text" id="OTNcA0O0nSq9"
# The metric is accuracy and we use sparse categorical cross-entropy as loss.
# + colab={} colab_type="code" id="nzi8hjeTQTRs"
metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)]
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
bert_classifier.compile(
optimizer=optimizer,
loss=loss,
metrics=metrics)
bert_classifier.fit(
glue_train, glue_train_labels,
validation_data=(glue_validation, glue_validation_labels),
batch_size=32,
epochs=epochs)
# + [markdown] colab_type="text" id="IFtKFWbNKb0u"
# Now run the fine-tuned model on a custom example to see that it works.
#
# Start by encoding some sentence pairs:
# + colab={} colab_type="code" id="9ZoUgDUNJPz3"
my_examples = bert_encode(
glue_dict = {
'sentence1':[
'The rain in Spain falls mainly on the plain.',
'Look I fine tuned BERT.'],
'sentence2':[
'It mostly rains on the flat lands of Spain.',
'Is it working? This does not match.']
},
tokenizer=tokenizer)
# + [markdown] colab_type="text" id="7ynJibkBRTJF"
# The model should report class `1` "match" for the first example and class `0` "no-match" for the second:
# + colab={} colab_type="code" id="umo0ttrgRYIM"
result = bert_classifier(my_examples, training=False)
result = tf.argmax(result, axis=1).numpy()  # argmax over the class axis, one prediction per example
result
# + colab={} colab_type="code" id="utGl0M3aZCE4"
np.array(info.features['label'].names)[result]
# + [markdown] colab_type="text" id="fVo_AnT0l26j"
# ### Save the model
#
# Often the goal of training a model is to _use_ it for something, so export the model and then restore it to be sure that it works.
# + colab={} colab_type="code" id="Nl5x6nElZqkP"
export_dir='./saved_model'
tf.saved_model.save(bert_classifier, export_dir=export_dir)
# + colab={} colab_type="code" id="y_ACvKPsVUXC"
reloaded = tf.saved_model.load(export_dir)
reloaded_result = reloaded([my_examples['input_word_ids'],
my_examples['input_mask'],
my_examples['input_type_ids']], training=False)
original_result = bert_classifier(my_examples, training=False)
# The results are (nearly) identical:
print(original_result.numpy())
print()
print(reloaded_result.numpy())
# + [markdown] colab_type="text" id="eQceYqRFT_Eg"
# ## Appendix
# + [markdown] colab_type="text" id="SaC1RlFawUpc"
# <a id=re_encoding_tools></a>
# ### Re-encoding a large dataset
# + [markdown] colab_type="text" id="CwUdjFBkzUgh"
# In this tutorial you re-encoded the dataset in memory, for clarity.
#
# This was only possible because `glue/mrpc` is a very small dataset. To deal with larger datasets, the `tf_models` library includes some tools for processing and re-encoding a dataset for efficient training.
# + [markdown] colab_type="text" id="2UTQrkyOT5wD"
# The first step is to describe which features of the dataset should be transformed:
# + colab={} colab_type="code" id="XQeDFOzYR9Z9"
processor = nlp.data.classifier_data_lib.TfdsProcessor(
tfds_params="dataset=glue/mrpc,text_key=sentence1,text_b_key=sentence2",
process_text_fn=bert.tokenization.convert_to_unicode)
# + [markdown] colab_type="text" id="XrFQbfErUWxa"
# Then apply the transformation to generate new TFRecord files.
# + colab={} colab_type="code" id="ymw7GOHpSHKU"
# Set up output of training and evaluation Tensorflow dataset
train_data_output_path="./mrpc_train.tf_record"
eval_data_output_path="./mrpc_eval.tf_record"
max_seq_length = 128
batch_size = 32
eval_batch_size = 32
# Generate and save training data into a tf record file
input_meta_data = (
nlp.data.classifier_data_lib.generate_tf_record_from_data_file(
processor=processor,
data_dir=None, # It is `None` because data is from tfds, not local dir.
tokenizer=tokenizer,
train_data_output_path=train_data_output_path,
eval_data_output_path=eval_data_output_path,
max_seq_length=max_seq_length))
# + [markdown] colab_type="text" id="uX_Sp-wTUoRm"
# Finally create `tf.data` input pipelines from those TFRecord files:
# + colab={} colab_type="code" id="rkHxIK57SQ_r"
training_dataset = bert.run_classifier.get_dataset_fn(
train_data_output_path,
max_seq_length,
batch_size,
is_training=True)()
evaluation_dataset = bert.run_classifier.get_dataset_fn(
eval_data_output_path,
max_seq_length,
eval_batch_size,
is_training=False)()
# + [markdown] colab_type="text" id="stbaVouogvzS"
# The resulting `tf.data.Datasets` return `(features, labels)` pairs, as expected by `keras.Model.fit`:
# + colab={} colab_type="code" id="gwhrlQl4gxVF"
training_dataset.element_spec
# + [markdown] colab_type="text" id="dbJ76vSJj77j"
# #### Create tf.data.Dataset for training and evaluation
#
# + [markdown] colab_type="text" id="9J95LFRohiYw"
# If you need to modify the data loading here is some code to get you started:
# + colab={} colab_type="code" id="gCvaLLAxPuMc"
def create_classifier_dataset(file_path, seq_length, batch_size, is_training):
"""Creates input dataset from (tf)records files for train/eval."""
dataset = tf.data.TFRecordDataset(file_path)
if is_training:
dataset = dataset.shuffle(100)
dataset = dataset.repeat()
def decode_record(record):
name_to_features = {
'input_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
'input_mask': tf.io.FixedLenFeature([seq_length], tf.int64),
'segment_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
'label_ids': tf.io.FixedLenFeature([], tf.int64),
}
return tf.io.parse_single_example(record, name_to_features)
def _select_data_from_record(record):
x = {
'input_word_ids': record['input_ids'],
'input_mask': record['input_mask'],
'input_type_ids': record['segment_ids']
}
y = record['label_ids']
return (x, y)
dataset = dataset.map(decode_record,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(
_select_data_from_record,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.batch(batch_size, drop_remainder=is_training)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
return dataset
# + colab={} colab_type="code" id="rutkBadrhzdR"
# Set up batch sizes
batch_size = 32
eval_batch_size = 32
# Return Tensorflow dataset
training_dataset = create_classifier_dataset(
train_data_output_path,
input_meta_data['max_seq_length'],
batch_size,
is_training=True)
evaluation_dataset = create_classifier_dataset(
eval_data_output_path,
input_meta_data['max_seq_length'],
eval_batch_size,
is_training=False)
# + colab={} colab_type="code" id="59TVgt4Z7fuU"
training_dataset.element_spec
# + [markdown] colab_type="text" id="QbklKt-w_CiI"
# <a id="hub_bert"></a>
#
# ### TFModels BERT on TFHub
#
# You can get [the BERT model](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2) off the shelf from [TFHub](https://tensorflow.org/hub). It would not be hard to add a classification head on top of this `hub.KerasLayer`.
# + colab={} colab_type="code" id="GDWrHm0BGpbX"
# Note: 350MB download.
import tensorflow_hub as hub
# + cellView="form" colab={} colab_type="code" id="Y29meH0qGq_5"
hub_model_name = "bert_en_uncased_L-12_H-768_A-12" #@param ["bert_en_uncased_L-24_H-1024_A-16", "bert_en_wwm_cased_L-24_H-1024_A-16", "bert_en_uncased_L-12_H-768_A-12", "bert_en_wwm_uncased_L-24_H-1024_A-16", "bert_en_cased_L-24_H-1024_A-16", "bert_en_cased_L-12_H-768_A-12", "bert_zh_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12"]
# + colab={} colab_type="code" id="lo6479At4sP1"
hub_encoder = hub.KerasLayer(f"https://tfhub.dev/tensorflow/{hub_model_name}",
trainable=True)
print(f"The Hub encoder has {len(hub_encoder.trainable_variables)} trainable variables")
# + [markdown] colab_type="text" id="iTzF574wivQv"
# Test run it on a batch of data:
# + colab={} colab_type="code" id="XEcYrCR45Uwo"
result = hub_encoder(
inputs=[glue_train['input_word_ids'][:10],
glue_train['input_mask'][:10],
glue_train['input_type_ids'][:10],],
training=False,
)
print("Pooled output shape:", result[0].shape)
print("Sequence output shape:", result[1].shape)
# + [markdown] colab_type="text" id="cjojn8SmLSRI"
# At this point it would be simple to add a classification head yourself.
#
# The `bert_models.classifier_model` function can also build a classifier onto the encoder from TensorFlow Hub:
# + colab={} colab_type="code" id="9nTDaApyLR70"
hub_classifier, hub_encoder = bert.bert_models.classifier_model(
# Caution: Most of `bert_config` is ignored if you pass a hub url.
bert_config=bert_config, hub_module_url=hub_url_bert, num_labels=2)
# + [markdown] colab_type="text" id="xMJX3wV0_v7I"
# The one downside to loading this model from TFHub is that the structure of internal keras layers is not restored. So it's more difficult to inspect or modify the model. The `TransformerEncoder` model is now a single layer:
# + colab={} colab_type="code" id="pD71dnvhM2QS"
tf.keras.utils.plot_model(hub_classifier, show_shapes=True, dpi=64)
# + colab={} colab_type="code" id="nLZD-isBzNKi"
try:
tf.keras.utils.plot_model(hub_encoder, show_shapes=True, dpi=64)
assert False
except Exception as e:
print(f"{type(e).__name__}: {e}")
# + [markdown] colab_type="text" id="ZxSqH0dNAgXV"
# <a id="model_builder_functions"></a>
#
# ### Low level model building
#
# If you need more control over the construction of the model, it's worth noting that the `classifier_model` function used earlier is really just a thin wrapper over the `nlp.modeling.networks.TransformerEncoder` and `nlp.modeling.models.BertClassifier` classes. Just remember that if you start modifying the architecture it may not be correct or possible to reload the pre-trained checkpoint, so you'll need to retrain from scratch.
# + [markdown] colab_type="text" id="0cgABEwDj06P"
# Build the encoder:
# + colab={} colab_type="code" id="5r_yqhBFSVEM"
transformer_config = config_dict.copy()
# You need to rename a few fields to make this work:
transformer_config['attention_dropout_rate'] = transformer_config.pop('attention_probs_dropout_prob')
transformer_config['activation'] = tf_utils.get_activation(transformer_config.pop('hidden_act'))
transformer_config['dropout_rate'] = transformer_config.pop('hidden_dropout_prob')
transformer_config['initializer'] = tf.keras.initializers.TruncatedNormal(
stddev=transformer_config.pop('initializer_range'))
transformer_config['max_sequence_length'] = transformer_config.pop('max_position_embeddings')
transformer_config['num_layers'] = transformer_config.pop('num_hidden_layers')
transformer_config
# + colab={} colab_type="code" id="rIO8MI7LLijh"
manual_encoder = nlp.modeling.networks.TransformerEncoder(**transformer_config)
# + [markdown] colab_type="text" id="4a4tFSg9krRi"
# Restore the weights:
# + colab={} colab_type="code" id="X6N9NEqfXJCx"
checkpoint = tf.train.Checkpoint(model=manual_encoder)
checkpoint.restore(
os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
# + [markdown] colab_type="text" id="1BPiPO4ykuwM"
# Test run it:
# + colab={} colab_type="code" id="hlVdgJKmj389"
result = manual_encoder(my_examples, training=True)
print("Sequence output shape:", result[0].shape)
print("Pooled output shape:", result[1].shape)
# + [markdown] colab_type="text" id="nJMXvVgJkyBv"
# Wrap it in a classifier:
# + colab={} colab_type="code" id="tQX57GJ6wkAb"
manual_classifier = nlp.modeling.models.BertClassifier(
    manual_encoder,  # wrap the manually built encoder from the previous cells
num_classes=2,
dropout_rate=transformer_config['dropout_rate'],
initializer=tf.keras.initializers.TruncatedNormal(
stddev=bert_config.initializer_range))
# + colab={} colab_type="code" id="kB-nBWhQk0dS"
manual_classifier(my_examples, training=True).numpy()
# + [markdown] colab_type="text" id="E6AJlOSyIO1L"
# <a id="optiizer_schedule"></a>
#
# ### Optimizers and schedules
#
# The optimizer used to train the model was created using the `nlp.optimization.create_optimizer` function:
# + colab={} colab_type="code" id="28Dv3BPRlFTD"
optimizer = nlp.optimization.create_optimizer(
2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
# + [markdown] colab_type="text" id="LRjcHr0UlT8c"
# That high-level wrapper sets up the learning rate schedules and the optimizer.
#
# The base learning rate schedule used here is a linear decay to zero over the training run:
# + colab={} colab_type="code" id="MHY8K6kDngQn"
epochs = 3
batch_size = 32
eval_batch_size = 32
train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
# + colab={} colab_type="code" id="wKIcSprulu3P"
decay_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=2e-5,
decay_steps=num_train_steps,
end_learning_rate=0)
plt.plot([decay_schedule(n) for n in range(num_train_steps)])
# + [markdown] colab_type="text" id="IMTC_gfAl_PZ"
# This, in turn, is wrapped in a `WarmUp` schedule that linearly increases the learning rate to the target value over the first 10% of training:
# + colab={} colab_type="code" id="YRt3VTmBmCBY"
warmup_steps = num_train_steps * 0.1
warmup_schedule = nlp.optimization.WarmUp(
initial_learning_rate=2e-5,
decay_schedule_fn=decay_schedule,
warmup_steps=warmup_steps)
# The warmup overshoots, because it warms up to the `initial_learning_rate`
# following the original implementation. You can set
# `initial_learning_rate=decay_schedule(warmup_steps)` if you don't like the
# overshoot.
plt.plot([warmup_schedule(n) for n in range(num_train_steps)])
# + [markdown] colab_type="text" id="l8D9Lv3Bn740"
# Then create the `nlp.optimization.AdamWeightDecay` using that schedule, configured for the BERT model:
# + colab={} colab_type="code" id="2Hf2rpRXk89N"
optimizer = nlp.optimization.AdamWeightDecay(
learning_rate=warmup_schedule,
weight_decay_rate=0.01,
epsilon=1e-6,
exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
|
notebooks/fine_tuning_bert.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# This is the
# [MNIST example](https://keras.io/examples/vision/mnist_convnet/)
# from the Keras documentation, for testing its neural network separately from the main project.
# + pycharm={"name": "#%%\n"}
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
# + pycharm={"name": "#%%\n"}
# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# + pycharm={"name": "#%%\n"}
model = keras.Sequential(
[
keras.Input(shape=input_shape),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(num_classes, activation="softmax"),
]
)
model.summary()
# + pycharm={"name": "#%%\n"}
batch_size = 128
epochs = 15
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
# + pycharm={"name": "#%%\n"}
score = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
|
examples/mnist_keras.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/wearlianbaguio/OOP-1-1/blob/main/Phthon_Classes_and_Objects.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="QSwCcH6Cyydb"
# # Class
# + id="I2r-fFf5y7GD"
class MyClass:  # "MyClass" is the name of the class
    ...  # a class statement cannot have an empty body
# + id="ryu5zNoJz8p3"
class MyClass:
    pass  # create a class without variables or methods
# + colab={"base_uri": "https://localhost:8080/"} id="mZ19PGkizszQ" outputId="bd2e7482-b1ed-4dcc-9de2-a55092797f0c"
class MyClass:
    def __init__(self, name, age):
        self.name = name  # create a class with attributes
        self.age = age

    def display(self):
        print(self.name, self.age)

person = MyClass("<NAME>", 18)  # create an object
print(person.name)  # access an attribute directly
# + colab={"base_uri": "https://localhost:8080/"} id="GCSGPnKZ7X4E" outputId="ea4f53bd-1dc8-49dc-a20b-f40a0e162161"
class MyClass:
    def __init__(self, name, age):
        self.name = name  # create a class with attributes
        self.age = age

    def display(self):
        print(self.name, self.age)

person = MyClass("<NAME>", 18)  # create an object
person.display()  # call a method
# + colab={"base_uri": "https://localhost:8080/"} id="cwHcLwek29Lp" outputId="5f6689d2-6909-43ea-97c5-a3056d71ab08"
# Application 1 - Write a Python program that computes the area of a rectangle: A = l * w
class Rectangle:
def __init__(self,l,w):
self.l=l #attribute names
self.w=w
def Area(self):
print(self.l * self.w)
rect = Rectangle(7,3)
rect.Area()
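# As a small, hypothetical extension of the example above (not in the original notebook): having methods return values instead of printing makes the results reusable in later expressions.

```python
class Rectangle:
    def __init__(self, l, w):
        self.l = l  # length
        self.w = w  # width

    def area(self):
        # Return the value instead of printing, so callers can reuse it.
        return self.l * self.w

    def perimeter(self):
        return 2 * (self.l + self.w)

rect = Rectangle(7, 3)
print(rect.area())
print(rect.perimeter())
```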
|
Phthon_Classes_and_Objects.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
df = pd.DataFrame({
'Radius': [0.3, 0.1, 1.7, 0.4, 1.9, 2.1, 0.25,
0.4, 2.0, 1.5, 0.6, 0.5, 1.8, 0.25],
'Class': [0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 0, 0, 1, 0]
})
# -
df
# +
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['figure.figsize'] = 14, 8
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
# +
plt.scatter(df['Radius'], df['Class'], color='#7e7e7e', s=200)
plt.title('Radius classification', size=20)
plt.xlabel('Radius (cm)', size=14)
plt.ylabel('Class', size=14)
plt.show()
# +
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(df[['Radius']], df['Class'])  # pass y as a 1-D Series to avoid the column-vector warning
preds = model.predict(df[['Radius']])
df['Predicted'] = preds
# +
import numpy as np  # np is used below but was not imported earlier in this notebook
xs = np.linspace(0, df['Radius'].max() + 0.1, 1000)
ys = model.predict(xs.reshape(-1, 1))  # one vectorized call instead of predicting point by point
plt.scatter(df['Radius'], df['Class'], color='#7e7e7e', s=200, label='Data points')
plt.plot(xs, ys, color='#040404', label='Decision boundary')
plt.title('Radius classification', size=20)
plt.xlabel('Radius (cm)', size=14)
plt.ylabel('Class', size=14)
plt.legend()
plt.show()
# -
df
# +
from sklearn.metrics import confusion_matrix
confusion_matrix(df['Class'], df['Predicted'])
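# To see roughly where the fitted model switches classes, we can solve for the radius where the predicted probability crosses 0.5, i.e. where coef * r + intercept = 0. This is a sketch assuming the same toy data as above:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    'Radius': [0.3, 0.1, 1.7, 0.4, 1.9, 2.1, 0.25,
               0.4, 2.0, 1.5, 0.6, 0.5, 1.8, 0.25],
    'Class': [0, 0, 1, 0, 1, 1, 0,
              0, 1, 1, 0, 0, 1, 0]
})
model = LogisticRegression().fit(df[['Radius']], df['Class'])

# P(class=1) = 0.5 exactly where coef * r + intercept = 0
boundary = -model.intercept_[0] / model.coef_[0][0]
print(f"Decision boundary at radius = {boundary:.2f} cm")
```

# The boundary should fall between the largest class-0 radius (0.6) and the smallest class-1 radius (1.5).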
|
Chapter01/002_Classification.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
print(f'numpy version = {np.__version__}')
# #### Computing the check digit of an EAN-13 barcode
# https://en.wikipedia.org/wiki/International_Article_Number#Calculation_of_checksum_digit
# ## method 1
ean13='400638133393x'
list(ean13[:12])
a=np.array(list(ean13[:12]), dtype=int)
a
b=[1,3,1,3,1,3,1,3,1,3,1,3]
b
c=a*b
c
d=np.sum(c)
d
check_digit=0 if d % 10 == 0 else 10 - (d % 10)
check_digit
# ## method 2
ean13='400638133393x'
list(ean13[:12])
a=np.array(list(ean13[:12]), dtype=int)
a
b=np.tile([1,3], 6)
b
c=a*b
c
d=np.sum(c)
d
check_digit=0 if d % 10 == 0 else 10 - (d % 10)
check_digit
# ## method 3
def ean13_checkdigit(ean13):
a=np.array(list(ean13[:12]), dtype=int)
b=np.tile([1, 3], 6)
d=np.sum(a * b)
return 0 if d % 10 == 0 else 10 - (d % 10)
ean13_checkdigit('885018825110')
def ean13_checkdigit_v2(ean13):
d=np.sum(np.array(list(ean13[:12]), dtype=int) * np.tile([1, 3], 6))
return 0 if d % 10 == 0 else 10 - (d % 10)
ean13_checkdigit_v2('885018825110')
ean13_checkdigit_v2('400638133393x')
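# A possible next step (not in the original notebook) is validating a full 13-digit code by recomputing the check digit and comparing it against the last digit, e.g. with the example code from the Wikipedia article:

```python
import numpy as np

def ean13_is_valid(code):
    """Return True if the 13th digit of a full EAN-13 code matches its computed check digit."""
    digits = np.array(list(code), dtype=int)
    d = np.sum(digits[:12] * np.tile([1, 3], 6))
    check = (10 - d % 10) % 10  # equivalent to the 0-if-divisible rule used above
    return bool(check == digits[12])

print(ean13_is_valid('4006381333931'))
```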
|
numpy_ean13_barcode_checkdigit.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Introduction
# In this tutorial we're going to be looking at two Scottylabs APIs. Scottylabs made these APIs to make CMU data more accessible to students who may wish to use it in their projects. The APIs serve data in JSON format, which makes using it very simple. Scottylabs offers two interesting APIs: the Dining API and the Course API. The Dining API provides information about all the dining locations on campus, such as where each location is and what hours it operates. The Course API provides information about all the courses available to CMU students, giving all the information you would normally find on SIO in a convenient JSON format. Sadly, the Course API requires a Python 3 installation to function properly, so this tutorial will focus mainly on the Dining API. That said, both APIs return JSON, so the principles are the same.
# We're going to start off with something simple and load the Scottylabs Dining API. More details about this API can be found on their website at https://scottylabs.org/dining-api/. All that is required is a simple GET request to http://apis.scottylabs.org/dining/v1/locations, which returns easy-to-use JSON-formatted data about CMU's dining locations.
# ## JSON Format
# To get started we will first briefly go over the JSON format. JSON stands for JavaScript Object Notation and, as its name suggests, was inspired by JavaScript's object syntax. JSON uses key-value pairs to store data, where each key is a string and each value can be a string, number, boolean, array, or even another JSON object. Because of this, JSON is very human-readable yet flexible enough to allow for more complex data representations.
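# Before touching the API, here is a minimal sketch (an illustrative string, not real API data) of how JSON text maps onto Python structures:

```python
import json

# json.loads turns JSON text into nested dicts/lists;
# strings, numbers, and booleans map to their Python equivalents.
text = '{"name": "Example Cafe", "open": true, "times": [{"day": 1, "hour": 9}]}'
data = json.loads(text)
print(data["name"])
print(data["times"][0]["hour"])  # nested objects become nested dicts
```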
import urllib2
import json
# We're going to use urllib2 in order to load the url link contents and we're going to use the JSON library to load the JSON-formatted response string into an actual JSON object. In fact the JSON object works pretty much exactly like a python dictionary, so using the data will be very intuitive.
dining_data = json.load(urllib2.urlopen("http://apis.scottylabs.org/dining/v1/locations"))
# However, we have no idea how the data is actually structured. Thankfully, the API documentation covers this.
"""
{
"locations": [{
"name": string,
"description": string,
"keywords": [string],
"location": string,
"times": [{
"start": {
"day": number,
"hour": number,
"minute": number,
},
"end": {
"day": number,
"hour": number,
"minute": number,
}
}, ...]
}, ...]
}
"""
# What if we wanted to know just the names of the dining locations without all of the other information clogging up the screen?
# +
eatery_names = [location["name"] for location in dining_data["locations"]]
print "Total number of dining locations: ", len(eatery_names)
for eatery in eatery_names:
print eatery
# -
# As you can see from above using JSON objects is pretty much exactly the same as using python dictionaries.
# Now let's do something more interesting with this data. Let's say we wanted to sort the dining locations by the longest total operating hours per week. First we will have to calculate the hours using the start and end data provided by the API. We will then add the new data to each eatery under a new key called hours_open.
# +
def daysElapsed(startDay, endDay):
if endDay < startDay:
return endDay + 7 - startDay
return endDay - startDay
def getTotalHoursOpen(eatery):
times = eatery['times']
total_hours_open = 0
for time in times:
start_day = time['start']['day']
start_hour = time['start']['hour']
start_min = time['start']['min']
end_day = time['end']['day']
end_hour = time['end']['hour']
end_min = time['end']['min']
days_passed = daysElapsed(start_day, end_day)
hours_open = end_hour+(24*days_passed) +float(end_min)/60 - (start_hour + float(start_min)/60 )
total_hours_open += hours_open
return total_hours_open
for eatery in dining_data['locations']:
eatery['hours_open'] = getTotalHoursOpen(eatery)
print eatery['name'] + ": " , getTotalHoursOpen(eatery)
# -
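# The wrap-around logic in daysElapsed is easy to get wrong, so here is the same idea restated standalone (snake_case, hypothetical values) so it can be sanity-checked in isolation:

```python
def days_elapsed(start_day, end_day):
    # Days are numbered 0-6; if the end day is "before" the start day,
    # the interval crossed a week boundary, so add 7.
    if end_day < start_day:
        return end_day + 7 - start_day
    return end_day - start_day

print(days_elapsed(6, 0))  # opens Saturday, closes Sunday: 1 day elapsed
print(days_elapsed(1, 3))  # Monday to Wednesday: 2 days elapsed
```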
# Now that we have this data let us plot it so that we can visualize our results more clearly. To do this we are going to plot the data using a simple bar graph. We will first create two lists from our data: one list will have the sorted hours while the other list will contain the corresponding eatery names.
# +
name_hours_list = [(i['name'], i['hours_open']) for i in dining_data['locations']]
name_hours_list = sorted(name_hours_list, key = lambda tup : tup[1], reverse = True)
sorted_hours = [tup[1] for tup in name_hours_list]
sorted_names = [tup[0] for tup in name_hours_list]
# -
# Now that we have the sorted data we will use matplotlib and numpy to plot the data.
import matplotlib.pyplot as plt
import numpy as np
num_eateries = len(dining_data['locations'])
plt.figure(figsize=(14,9))
bar_width = 1.0
plt.bar(np.arange(num_eateries), sorted_hours, bar_width)
plt.title("Bar graph of dining locations and their total operating hours per week")
plt.ylabel("Total Hours Open")
plt.xlabel("Dining Location")
plt.xticks(np.arange(num_eateries) + bar_width/2, sorted_names, rotation = 90)
plt.show()
# Let us also calculate the mean and standard deviation of the hours open so that we can gain an even better understanding of the results.
mean_hours = np.mean(sorted_hours)
std_hours = np.std(sorted_hours)
print "mean is: ", mean_hours
print "standard deviation is: ", std_hours
# Now the Dining API is fairly limited in what you can do with it, as it only provides very basic information. If we wanted to do something more complex, like plotting the location of each eatery on a campus map, the Dining API would not provide enough information. However, we can always extend the data with external sources. To tackle this task we need GPS coordinates for each dining location, which the API sadly does not provide. After a bit of web searching I was able to find CMU's webpage on dining locations, which does contain such data: http://webapps.studentaffairs.cmu.edu/dining/ConceptInfo/?page=listConcepts.
# We are going to use the library bs4 to help us scrape the html contents of the website.
import bs4
# Taking a quick glance at the HTML source for the website, we notice that both the dining location name and its coordinates are stored within divs of class "conceptBucket". Within a conceptBucket, the coordinate data is an attribute value of a button element with an id of "MapIt", while the eatery name is the inner HTML text of a div with a class of "conceptName".
# +
response = urllib2.urlopen("http://webapps.studentaffairs.cmu.edu/dining/ConceptInfo/?page=listConcepts")
page = bs4.BeautifulSoup(response.read(), 'html.parser')
eateries_info = page('div', class_="conceptBucket")
eatery_dict = {}
for eatery in eateries_info:
mapIt = eatery.find('button', id="MapIt")
coords = [float(x) for x in mapIt['value'].split(",")]
name = eatery.find('div', class_="conceptName").text
eatery_dict[name] = coords
print eatery_dict
# -
# We have now successfully scraped the coordinates of the dining locations; however, there is still one small problem: the names of the eateries from the webpage differ slightly from those of the Dining API. For example, in the JSON object we have "Au Bon Pain" whereas on the CMU website it is listed as "AU BON PAIN AT SKIBO CAFÉ". Therefore adding the coordinate data to the JSON object is not as trivial as it seems, since we need to match these slightly differing names. There are two approaches to solving this problem: we could manually add the coordinate data ourselves, since we can easily tell which names refer to the same dining location, but that would be very inefficient and a terrible use of our time. Alternatively, we can use a technique known as "fuzzy matching", which matches closely related strings from a pool of possibilities.
# The library that we are going to use for fuzzy matching is called fuzzywuzzy and it is very simple to use.
from fuzzywuzzy import process
for eatery in dining_data['locations']:
name = eatery['name']
match = process.extractOne(name, eatery_dict.keys())
#match is a tuple of strings, where the first element is the match.
print name + " : " + match[0]
# As you can see above the names from the API match up perfectly with the names from CMU's website. Now all we have to do is use the matched names in order to add the coordinate data to our json object.
for eatery in dining_data['locations']:
name = eatery['name']
match = process.extractOne(name, eatery_dict.keys())
eatery['coordinates'] = eatery_dict[match[0]]
for eatery in dining_data['locations']:
print eatery['name'] + " : ", eatery['coordinates']
# Now since we have the coordinate data for every dining location let's plot the locations on a map. To do this we will use the library folium which provides a very nice abstraction towards using gps coordinates and plotting data on a map.
import folium
# +
CMU_coordinates = [40.443061, -79.943325]
default_zoom = 16
dining_map = folium.Map(location=CMU_coordinates, zoom_start=default_zoom)
for eatery in dining_data['locations']:
folium.Marker(eatery['coordinates'], popup=eatery['name']).add_to(dining_map)
dining_map
# -
# You can also click on each marker and the name of the dining location will display in a popup bubble. We could also use this to map the eateries that are open on weekends.
# +
def isOpenOnWeekend(eatery):
times = eatery['times']
weekends = [0,6]
for time in times:
start_day = time['start']['day']
end_day = time['end']['day']
if (start_day in weekends or end_day in weekends):
return True
return False
map_weekends = folium.Map(location=CMU_coordinates, zoom_start = default_zoom)
for eatery in dining_data['locations']:
if (isOpenOnWeekend(eatery)):
folium.Marker(eatery['coordinates'], popup=eatery['name']).add_to(map_weekends)
map_weekends
# -
# Now let's take this one step further and add the operating hours information to make the map more informative. We will be using circular markers where the size of each marker denotes how long a dining location is open each week.
# +
import random
# returns a random int from 0-255
def randHex():
return random.randint(0,255)
# returns a string of a random rgb color in hex
def randColor():
return "#{0:02x}{1:02x}{2:02x}".format(randHex(), randHex(), randHex())
max_hours = max([eatery['hours_open'] for eatery in dining_data['locations']])
map_hours = folium.Map(location=CMU_coordinates, zoom_start=default_zoom)
marker_scale = 60
for eatery in dining_data['locations']:
folium.CircleMarker(location=eatery['coordinates'], radius = int(marker_scale*eatery['hours_open']/max_hours),
popup = eatery['name'], fill_color=randColor()).add_to(map_hours)
map_hours
# -
|
2016/tutorial_final/14/scottylabs_dining_api_tutorial.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: deep_learning
# language: python
# name: deep_learning
# ---
# ## Auto-Encoder working on Fashion-MNIST data.
#
# Here the autoencoder learns how an image is formed for each of the 10 item classes in the Fashion-MNIST data and then reconstructs new images on its own through the patterns and combinations it learned during training.
import numpy as np
import pandas as pd
import tensorflow as tf
import os
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import fashion_mnist
import matplotlib as mpl
# #### In the following cell the model has a small, deliberate error in the last Dense layer ('selu' instead of 'sigmoid' activation). This lets us get a feel for the raw pixel values predicted by the auto-encoder.
# +
import keras
tf.random.set_seed(42)
np.random.seed(42)
(x_train,y_train),(x_test,y_test) = fashion_mnist.load_data()
x_train = x_train/255
x_test = x_test/255
def rounded_accuracy(y_true, y_pred):
return keras.metrics.binary_accuracy(tf.round(y_true), tf.round(y_pred))
stacked_encoder = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28,28]),
keras.layers.Dense(100,activation='selu'),
keras.layers.Dense(30,activation='selu')
])
stacked_decoder = keras.models.Sequential([
keras.layers.Dense(100,activation='selu'),
keras.layers.Dense(28*28,activation='selu'),
keras.layers.Reshape([28,28])
])
stacked_auto = keras.models.Sequential([stacked_encoder, stacked_decoder])
stacked_auto.compile(loss='binary_crossentropy',
optimizer='adam',metrics=[rounded_accuracy])
history = stacked_auto.fit(x_train,x_train,epochs=5,validation_data=(x_test,x_test))
def plot_image(image):
plt.imshow(image, cmap="binary")
plt.axis("off")
def reconstructions(model,images=x_test,n_images=5):
reconstruction = model.predict(images[:n_images])
fig = plt.figure(figsize=(n_images * 1.5,3))
for image_index in range(n_images):
plt.subplot(2,n_images,1+image_index)
plot_image(images[image_index])
plt.subplot(2,n_images,1+n_images + image_index)
plot_image(reconstruction[image_index])
reconstructions(stacked_auto)
# -
# #### Here we see a proper auto-encoder working on the Fashion-MNIST data. After it is trained, we use the autoencoder to reconstruct images from the data it was trained on.
# +
import keras
tf.random.set_seed(42)
np.random.seed(42)
(x_train,y_train),(x_test,y_test) = fashion_mnist.load_data()
x_train = x_train/255
x_test = x_test/255
def rounded_accuracy(y_true, y_pred):
return keras.metrics.binary_accuracy(tf.round(y_true), tf.round(y_pred))
stacked_encoder = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28,28]),
keras.layers.Dense(100,activation='selu'),
keras.layers.Dense(30,activation='selu')
])
stacked_decoder = keras.models.Sequential([
keras.layers.Dense(100,activation='selu'),
keras.layers.Dense(28*28,activation='sigmoid'),
keras.layers.Reshape([28,28])
])
stacked_auto = keras.models.Sequential([stacked_encoder, stacked_decoder])
stacked_auto.compile(loss='binary_crossentropy',
optimizer=keras.optimizers.SGD(lr=1.5),metrics=[rounded_accuracy])
history = stacked_auto.fit(x_train,x_train,epochs=5,validation_data=(x_test,x_test))
def plot_image(image):
plt.imshow(image, cmap="binary")
plt.axis("off")
def reconstructions(model,images=x_test,n_images=5):
reconstruction = model.predict(images[:n_images])
fig = plt.figure(figsize=(n_images * 1.5,3))
for image_index in range(n_images):
plt.subplot(2,n_images,1+image_index)
plot_image(images[image_index])
plt.subplot(2,n_images,1+n_images + image_index)
plot_image(reconstruction[image_index])
reconstructions(stacked_auto)
# -
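# The reconstruction loss both models were compiled with is binary cross-entropy averaged over pixels; a minimal NumPy sketch (assuming pixel values scaled to [0, 1], as done above):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Pixel-wise BCE, averaged over all pixels, as used for the
    # reconstruction loss above; clip to avoid log(0).
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1.0 - y_true) * np.log(1.0 - y_pred))))

# Perfect reconstruction -> near-zero loss; all-0.5 output -> log(2)
img = np.array([0.0, 1.0, 1.0, 0.0])
print(binary_crossentropy(img, np.full(4, 0.5)))
```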
# #### Now that we have trained the auto-encoder, we can use it to visualize the high-dimensional Fashion-MNIST data in a 2-D plot, since auto-encoders are useful for dimensionality reduction.
# ##### Here we will use t-SNE algorithm for reducing the dimensions.
# +
np.random.seed(42)
from sklearn.manifold import TSNE
X_valid_compressed = stacked_encoder.predict(x_test)
tsne = TSNE()
X_valid_2D = tsne.fit_transform(X_valid_compressed)
X_valid_2D = (X_valid_2D - X_valid_2D.min()) / (X_valid_2D.max() - X_valid_2D.min())
# -
plt.figure(figsize=(10, 8))
cmap = plt.cm.tab10
plt.scatter(X_valid_2D[:, 0], X_valid_2D[:, 1], c=y_test, s=10, cmap=cmap)
image_positions = np.array([[1., 1.]])
for index, position in enumerate(X_valid_2D):
dist = np.sum((position - image_positions) ** 2, axis=1)
if np.min(dist) > 0.02: # if far enough from other images
image_positions = np.r_[image_positions, [position]]
imagebox = mpl.offsetbox.AnnotationBbox(
mpl.offsetbox.OffsetImage(x_test[index], cmap="binary"),
position, bboxprops={"edgecolor": cmap(y_test[index]), "lw": 2})
plt.gca().add_artist(imagebox)
plt.axis("off")
plt.show()
|
Fashion_MNIST_AutoEncoder/AutoEncoder_Fashion_MNIST.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data description & Problem statement:
# This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
#
# The type of dataset and problem is a classic supervised binary classification. Given a number of elements all with certain characteristics (features), we want to build a machine learning model to identify people affected by type 2 diabetes.
#
# # Workflow:
# - Load the dataset, and define the required functions (e.g. for detecting the outliers)
# - Data Cleaning/Wrangling: Manipulate outliers, missing data or duplicate values, Encode categorical variables, etc.
# - Split data into training & test parts
# # Model Training:
# - Build an initial RF model, and evaluate it via C-V approach
# - Use grid-search along with C-V approach to find the best hyperparameters of RF model: Find the best RF model (Note: I've utilized SMOTE technique via imblearn toolbox to synthetically over-sample the minority category and even the dataset imbalances.)
# # Model Evaluation:
# - Evaluate the best RF model with optimized hyperparameters on Test Dataset, by calculating:
# - AUC score
# - Confusion matrix
# - ROC curve
# - Precision-Recall curve
# - Average precision
#
# Finally, calculate the Feature Importance for the features
# +
import sklearn
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
# %matplotlib inline
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
# -
# Function to remove outliers (all rows) by Z-score:
def remove_outliers(X, y, name, thresh=3):
L=[]
for name in name:
drop_rows = X.index[(np.abs(X[name] - X[name].mean()) >= (thresh * X[name].std()))]
L.extend(list(drop_rows))
X.drop(np.array(list(set(L))), axis=0, inplace=True)
y.drop(np.array(list(set(L))), axis=0, inplace=True)
print('number of outliers removed : ' , len(L))
# +
df=pd.read_csv('C:/Users/rhash/Documents/Datasets/pima-indian-diabetes/indians-diabetes.csv')
# To Shuffle the data:
np.random.seed(42)
df=df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
df.columns=['NP', 'GC', 'BP', 'ST', 'I', 'BMI', 'PF', 'Age', 'Class']
df.head()
# -
df.info()
df['ST'].replace(0, df[df['ST']!=0]['ST'].mean(), inplace=True)
df['GC'].replace(0, df[df['GC']!=0]['GC'].mean(), inplace=True)
df['BP'].replace(0, df[df['BP']!=0]['BP'].mean(), inplace=True)
df['BMI'].replace(0, df[df['BMI']!=0]['BMI'].mean(), inplace=True)
df['I'].replace(0, df[df['I']!=0]['I'].mean(), inplace=True)
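# The replace pattern above treats zeros as missing measurements and imputes them with the mean of the non-zero values. A toy illustration of the same pattern:

```python
import pandas as pd

# Zeros (implausible measurements) are replaced by the mean of the
# non-zero values; here mean([2, 4]) == 3.
s = pd.Series([0, 2, 4])
s_imputed = s.replace(0, s[s != 0].mean())
print(list(s_imputed))
```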
# +
X=df.drop('Class', axis=1)
y=df['Class']
# We initially divide the data into training & test folds: we do the grid-search only on the training part
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
# -
df.shape
# +
# Building the Initial Model & Cross-Validation:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
model=RandomForestClassifier(max_features=7, n_estimators=20, max_depth=15, random_state=42, class_weight='balanced')
kfold=StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores=cross_val_score(model, X_train, y_train, cv=kfold, scoring="roc_auc")
print(scores, "\n")
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))
# +
# Grid-Search for the best model parameters:
from sklearn.model_selection import GridSearchCV
param={'max_depth':[3, 5, 10, 20, 30], 'max_features':[3, 5, 7, 8], 'n_estimators': [ 20, 30, 40]
, 'min_samples_leaf':[1, 5, 20]}
kfold=StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
grid_search=GridSearchCV(RandomForestClassifier(random_state=42, class_weight='balanced'), param, cv=kfold, n_jobs=-1, scoring="roc_auc")
grid_search.fit(X_train, y_train)
# Grid-Search report:
G=pd.DataFrame(grid_search.cv_results_).sort_values("rank_test_score")
G.head(3)
# -
print("Best parameters: ", grid_search.best_params_)
print("Best validation accuracy: %0.2f (+/- %0.2f)" % (np.round(grid_search.best_score_, decimals=2), np.round(G.loc[grid_search.best_index_,"std_test_score" ], decimals=2)))
print("Test score: ", np.round(grid_search.score(X_test, y_test),2))
# +
from sklearn.metrics import roc_curve, auc, confusion_matrix, classification_report
# Plot a confusion matrix.
# cm is the confusion matrix, names are the names of the classes.
def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(names))
plt.xticks(tick_marks, names, rotation=45)
plt.yticks(tick_marks, names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
class_names=["0", "1"]
# Compute confusion matrix
cm = confusion_matrix(y_test, grid_search.predict(X_test))
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
# Normalize the confusion matrix by row (i.e by the number of samples in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, class_names, title='Normalized confusion matrix')
plt.show()
# -
# Classification report:
report=classification_report(y_test, grid_search.predict(X_test))
print(report)
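# The precision and recall figures in the report derive directly from the confusion matrix; a hand-rolled sketch with toy numbers (not this model's actual results):

```python
import numpy as np

# A 2x2 confusion matrix laid out as [[TN, FP], [FN, TP]].
cm = np.array([[80, 10],
               [15, 45]])
tn, fp, fn, tp = cm.ravel()
precision = tp / (tp + fp)  # of predicted positives, how many were right
recall = tp / (tp + fn)     # of actual positives, how many were found
print(round(precision, 3), round(recall, 3))
```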
# +
# ROC curve & auc:
from sklearn.metrics import precision_recall_curve, roc_curve, roc_auc_score, average_precision_score
fpr, tpr, thresholds=roc_curve(np.array(y_test),grid_search.predict_proba(X_test)[:, 1] , pos_label=1)
roc_auc=roc_auc_score(np.array(y_test), grid_search.predict_proba(X_test)[:, 1])
plt.figure()
plt.step(fpr, tpr, color='darkorange', lw=2, label='ROC curve (auc = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', alpha=0.4, lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend(loc="lower right")
plt.plot([cm_normalized[0,1]], [cm_normalized[1,1]], 'or')
plt.show()
# +
# Precision-Recall trade-off:
precision, recall, thresholds=precision_recall_curve(y_test,grid_search.predict_proba(X_test)[:, 1], pos_label=1)
ave_precision=average_precision_score(y_test,grid_search.predict_proba(X_test)[:, 1])
plt.step(recall, precision, color='navy')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim([0, 1.001])
plt.ylim([0, 1.02])
plt.title('Precision-Recall curve: AP={0:0.2f}'.format(ave_precision))
plt.plot([cm_normalized[1,1]], [cm[1,1]/(cm[1,1]+cm[0,1])], 'ob')
plt.show()
# +
# Feature Importance:
im=RandomForestClassifier( max_depth= 3, max_features= 4, n_estimators= 25, random_state=42, class_weight="balanced").fit(X,y).feature_importances_
# Sort & Plot:
d=dict(zip(np.array(df.columns), im))
k=sorted(d,key=lambda i: d[i], reverse= True)
[print((i,d[i])) for i in k]
# Plot:
c1=pd.DataFrame(np.array(im), columns=["Importance"])
c2=pd.DataFrame(np.array(df.columns[0:8]),columns=["Feature"])
fig, ax = plt.subplots(figsize=(8,6))
sns.barplot(x="Feature", y="Importance", data=pd.concat([c2,c1], axis=1), color="blue", ax=ax)
|
Projects in Python with Scikit-Learn- XGBoost- Pandas- Statsmodels- etc./Diabetes Diagnosis (Random Forest).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# $$MSE = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y_i})^2$$
#
# * $y_i$ - observed data point
# * $\hat{y_i}$ - fitted data point
#
# Estimated line equation:
#
# $$\hat{y} = b_0 + b_1 x$$
# Slope $b_1$ can be found as:
#
# $$b_1 = \frac{s_y}{s_x} R$$
#
# where,
#
# * $s_x$, $s_y$ - standard deviation of $x$ and $y$ sample;
# * $R$ - correlation between $x$ and $y$.
#
# $$s_x=\sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2}$$
#
# $$R = \frac{\hat{cov}(x,y)}{s_x s_y}$$
#
# $$\hat{cov}(x,y) = \frac{\sum_{i=1}^n(x_i - \bar{x})(y_i - \bar{y})}{n - 1}$$
#
# * $\hat{cov}(x,y)$ - covariance estimator (measure of how much two random variables vary together)
# * $n$ - sample size;
# * $\bar{x}$ - sample mean;
#
# In order to find the intercept we can rewrite the equation for $\hat{y}$:
#
# $$b_0 = \hat{y} - b_1 x$$
#
# Since the least squares regression line always goes through point $(\bar{x}, \bar{y})$:
#
# $$b_0 = \bar{y} - b_1 \bar{x}$$
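# A quick numerical check (synthetic data, not from the text) that the slope and intercept formulas above agree with a standard least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.5 * x + 1.0 + rng.normal(scale=0.3, size=100)

# b1 = (s_y / s_x) * R, b0 = y_bar - b1 * x_bar
R = np.corrcoef(x, y)[0, 1]
b1 = y.std(ddof=1) / x.std(ddof=1) * R
b0 = y.mean() - b1 * x.mean()

# Reference: least-squares line from np.polyfit (degree 1)
b1_ref, b0_ref = np.polyfit(x, y, 1)
print(b1, b1_ref)
```

# The two estimates coincide because the least-squares slope equals cov(x, y) / var(x), which is algebraically the same as (s_y / s_x) R.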
import numpy as np
import matplotlib.pyplot as plt

class OLS:
def __init__(self):
pass
def fit(self, x, y):
        R = np.corrcoef(x.reshape(-1), y)[0,1] # correlation (flatten x to 1-D for any sample size)
slope = y.std()/x.std() * R
intercept = y.mean() - slope*x.mean()
self.slope = slope
self.intercept = intercept
self.corr = R
self.r_squared = R**2
print(f"Model fitted:\n y = {intercept:.2f} + {slope:.2f}*x")
def predict(self, x):
return self.intercept + self.slope*x
def residuals(self, y_true, y_hat):
return y_true - y_hat
def mse_score(self, y_true, y_hat):
errors = self.residuals(y_true, y_hat)
return np.sum(errors**2 / (len(errors) -2))
def plot_fitted_line(self, x, y, smooth=True):
x_space = np.linspace(x.min(), x.max(), 1000)
y_space = self.predict(x_space)
plt.figure(figsize=(10,4))
plt.scatter(x, y, label='Observations')
plt.plot(x_space, y_space, '--', color='black', linewidth=1, label='Linear Relationship')
        y_hat = self.predict(x)  # compute fitted values locally instead of relying on globals
        n = len(x)
        for i in range(n-1):
            plt.vlines(x[i], y[i], y_hat[i], 'red', linewidth=0.5)
        plt.vlines(x[n-1], y[n-1], y_hat[n-1], 'red', linewidth=0.5, label='Error')
plt.xlabel('x')
plt.ylabel('y')
        plt.title('Fitted OLS Regression Line')
plt.legend()
plt.show()
def residuals_plot(self, y_true, y_hat):
errors = self.residuals(y_true, y_hat)
plt.figure(figsize=(15,4))
plt.subplot(121)
plt.hist(errors, bins=30)
plt.axvline(0, linestyle='--', color='black')
plt.xlabel('Residual value')
plt.ylabel('Count')
plt.title('Residuals Distribution')
plt.subplot(122)
# sm.qqplot((errors-errors.mean()) / errors.std(), line ='45')
plt.scatter(y_hat, errors)
plt.axhline(0, linestyle='--', color='black')
plt.xlabel('Fitted Value')
plt.ylabel('Residual')
plt.title('Residual vs True value')
plt.show()
|
content/post/least-squares-regression/Untitled.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spatiotemporal control of cell cycle acceleration during axolotl spinal cord regeneration
#
# <NAME><sup>1#</sup>, <NAME><sup>2#</sup>, <NAME><sup>2</sup>, <NAME><sup>3</sup> and <NAME><sup>1,4*</sup>
#
# <sup>1</sup> Systems Biology Group, Institute of Physics of Liquids and Biological Systems, National Scientific and Technical Research Council, University of La Plata, La Plata, Argentina.<br>
# <sup>2</sup> The Research Institute of Molecular Pathology (IMP), Vienna Biocenter (VBC), Vienna, Austria.<br>
# <sup>3</sup> Neural Development Group, Division of Cell and Developmental Biology, School of Life Sciences, University of Dundee, Dow Street, Dundee DD1 5EH, UK.<br>
# <sup>4</sup> Center for Information Services and High Performance Computing, Technische Universität Dresden, Dresden, Germany.<br>
#
# <sup>#</sup> These authors contributed equally to this work
#
# #### Corresponding author:
# <NAME>
#
# Center for Information Services and High Performance Computing (ZIH), Technische Universität Dresden, Nöthnitzer Straße 46, 01187 Dresden, Germany.<br>
# Tel. +49 351 463-38780, E-mail: <EMAIL>
#
# Systems Biology Group (SysBio), Institute of Physics of Liquids and Biological Systems (IFLySIB), National Scientific and Technical Research Council (CONICET) and University of La Plata, Calle 59 N 789, 1900 La Plata, Argentina.<br>
# Tel. +54 221 4233283 Ext: 26, E-mail: <EMAIL>
#
# Web: http://sysbioiflysib.wordpress.com/
# ## Introduction
#
# This notebook contains the source code for the simulations and data analysis performed for Cura Costa et al., 2021a.
# Note that some calculations will take a long time to run. The code is only partly documented; if you have questions concerning the code, please contact the authors.
#
# ## List of notebooks
#
# * [Recruitment_limit_model_fitting](main/Recruitment_limit_fitting/Recruitment_limit_fitting.ipynb)
# * Performing parameters fitting ($N_0$, $\lambda$ and $\tau$) to experimental recruitment limit
# * [Recruitment_limit_model_fitting_analysis](main/Recruitment_limit_fitting/Recruitment_limit_fitting-analysis.ipynb)
# * Fitting results analysis
# * Fig. 2 - figure supplement 1
# * [Recruitment_limit_model_simulations](main/Simulating_recruitment_limit.ipynb):
# * Recruitment limit simulations best fit
# * [Recruitment_limit_model_prediction](figures/Fig_2A.ipynb):
# * Fig. 2A
# * [Proportional_mapping_of_S_and_partial_synchronization_of_G1_model_simulations](main/Simulating_proportional_mapping_of_S_and_partial_synchronization_of_G1_model.ipynb):
# * Model simulations with fitted parameters
# * [Proportional_mapping_of_S_and_partial_synchronization_of_G1_model_prediction](figures/Fig_2B.ipynb):
# * Fig. 2B
# * [Cell_recruitment_impediment](main/Simulating_cell_recruitment_impediment.ipynb):
# * Cell recruitment impeded simulations
# * [Cell_recruitment_impediment_model_prediction](figures/Fig_2C.ipynb):
# * Fig. 2C
# * [Proportional_mapping_of_S_and_partial_synchronization_of_G1_model (reducing_G1_phase_only)](alternative_models/Simulating_proportional_mapping_of_S_and_partial_synchronization_of_G1_model-reducing_G1_phase_only.ipynb):
# * Model simulations reducing G1 phase and not reducing S phase
# * [Proportional_mapping_of_S_and_partial_synchronization_of_G1_model (reducing_S_phase_only)](alternative_models/Simulating_proportional_mapping_of_S_and_partial_synchronization_of_G1_model-reducing_S_phase_only.ipynb):
# * Model simulations reducing S phase and not reducing G1 phase
# * [S_dominates_spinal_cord_outgrowth_regardless_of_the_mechanism_B](figures/Fig_2D.ipynb):
# * Fig. 2D
# * [Anterior-posterior_border_fitting](data_analysis/AP-border_fitting.ipynb):
# * Two zones model fitted to AxFUCCI data
# * [Anterior-posterior_border_fitting_analysis](data_analysis/AP-border_fitting_analysis.ipynb):
# * Two zones model fitting results
# * Fig. 4B
# * [Spatiotemporal_distributions](main/spatiotemporal_distributions-heatmaps.ipynb):
# * Heatmaps depicting spatiotemporal distribution of AxFUCCI cells (model simulations and the experimental data)
# * Fig. 6A
# * Fig. 6B
# * Fig. 6C
# * Fig. 6D
# * Fig. 6 - figure supplement 2
# * [Spatiotemporal_distributions_of_G1-S transitions_and_mitosis](main/spatiotemporal_distributions-recruitment_limit.ipynb):
# * Spatiotemporal distributions of G1-S transitions (recruited and non-recruited)
# * Fig. 6E
# * Spatiotemporal distributions of ependymal cell divisions (recruited and non-recruited)
# * Fig. 6 - figure supplement 1
# * [Space-time_distribution_of_cell_proliferation](figures/Fig_1-S1.ipynb):
# * Fig. 1 - figure supplement 1
# * [Predicted_clone_trajectories](figures/Fig_2-S2A.ipynb):
# * Clones position tracked in time
# * Fig. 2 - figure supplement 2A
# * [Predicted_clone_velocities](figures/Fig_2-S2B.ipynb):
# * Clones velocities at different AP positions
# * Fig. 2 - figure supplement 2B
# * [Predicted_normalized_trajectories](figures/Fig_2-S2C.ipynb):
# * Clones position scaling behavior
# * Fig. 2 - figure supplement 2C
# * [Model_prediction_tau=0_days](alternative_models/Simulating_proportional_mapping_of_S_and_partial_synchronization_of_G1_model-tau=2.ipynb):
# * Model simulations with $\tau = 0$ days
# * [Model_prediction_tau=8_days](alternative_models/Simulating_proportional_mapping_of_S_and_partial_synchronization_of_G1_model-tau=192.ipynb):
# * Model simulations with $\tau = 8$ days
# * [Tau_extreme_values_prediction](figures/Fig_2-S4A.ipynb):
# * Fig. 2 - figure supplement 4A
# * [Model_prediction_lambda=0_um](alternative_models/Simulating_proportional_mapping_of_S_and_partial_synchronization_of_G1_model-lamda=0.ipynb):
# * Model simulations with $\lambda = 0$ $\mu$m
# * [Model_prediction_lambda=-1600_um](alternative_models/Simulating_proportional_mapping_of_S_and_partial_synchronization_of_G1_mode-lamda=1600.ipynb):
# * Model simulations with $\lambda = 1600$ $\mu$m
# * [Lambda_extreme_values_prediction](figures/Fig_2-S4B.ipynb):
# * Fig. 2 - figure supplement 4B
# * [Outgrowths_comparison](data_analysis/Outgrowths_comparison.ipynb):
# * Outgrowths comparison between animals from Rodrigo-Albors *et al*., 2015 and this study
# * Fig. 3 - figure supplement 4B
# * [Anterior_and_posterior_percentages_of_AxFUCCI](data_analysis/AP_percentages.ipynb):
# * Estimation of anterior and posterior percentages of G0/G1-AxFUCCI and S/G2-AxFUCCI expressing cells by fitting a two-zones model
# * Fig. 4 - figure supplement 1
# * [Individual_fitting_of_the_two-zones_model](data_analysis/AP-border_fitting_analysis_indiviuals.ipynb):
# * Fig. 4 - figure supplement 2
|
index.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
# most recent version is here: https://github.com/NSLS-II-LIX/pyXS
from pyxs import Data2D,Mask
from pyxs.ext import RQconv
def RotationMatrix(axis, angle):
if axis=='x' or axis=='X':
rot = np.asarray(
[[1., 0., 0.],
[0., np.cos(angle), -np.sin(angle)],
[0., np.sin(angle), np.cos(angle)]])
elif axis=='y' or axis=='Y':
rot = np.asarray(
[[ np.cos(angle), 0., np.sin(angle)],
[0., 1., 0.],
[-np.sin(angle), 0., np.cos(angle)]])
elif axis=='z' or axis=='Z':
rot = np.asarray(
[[np.cos(angle), -np.sin(angle), 0.],
[np.sin(angle), np.cos(angle), 0.],
[0., 0., 1.]])
else:
        raise ValueError('unknown axis %s' % axis)
return rot
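# A quick sanity check of the z-axis case of `RotationMatrix` (re-stated below so this cell runs standalone): rotating the x unit vector by +90° about z should give the y unit vector, and any rotation matrix should be orthogonal with determinant 1.

```python
import numpy as np

# z-axis case of RotationMatrix above, re-stated so this cell runs on its own
def rot_z(angle):
    return np.array([[np.cos(angle), -np.sin(angle), 0.],
                     [np.sin(angle),  np.cos(angle), 0.],
                     [0.,             0.,            1.]])

R = rot_z(np.pi / 2)
# rotating the x unit vector by +90 degrees about z yields the y unit vector
print(np.round(R @ np.array([1., 0., 0.]), 6))
# rotation matrices are orthogonal (R^T R = I) with determinant 1
print(np.allclose(R.T @ R, np.eye(3)))  # True
```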
# +
class ExpPara:
"""
    The geometric calculations used here are described in Yang, J Synch Rad (2013) 20, 211–218
    Initialized with image size:
__init__(self, width, height)
Calculate all coordinates and correction factors for each pixel:
init_coordinates(self)
Functions that can be called for converting coordinates (inputs are arrays)
calc_from_XY(self, X, Y, calc_cor_factors=False)
calc_from_QrQn(self, Qr, Qn, flag=False)
calc_from_QPhi(self, Q, Phi)
"""
wavelength = None
bm_ctr_x = None
bm_ctr_y = None
ratioDw = None
grazing_incident = False
flip = 0
incident_angle = 0.2
sample_normal = 0.
rot_matrix = None
def __init__(self, width, height):
"""
define image size here
but fill in the coordinates later
"""
self.ImageWidth = width
self.ImageHeight = height
self.mask = Mask.Mask(width, height)
def init_coordinates(self):
"""
calculate all coordinates (pixel position as well as various derived values)
all coordinates are stored in 2D arrays, as is the data itself in Data2D
"""
(w,h) = (self.ImageWidth, self.ImageHeight)
self.X = np.repeat(np.arange(w), h).reshape((w, h)).T
X = self.X.flatten()
Y = np.repeat(np.arange(h), w)
self.Y = Y.reshape((h, w))
(Q, Phi, Qr, Qn, FPol, FSA) = self.calc_from_XY(X, Y, calc_cor_factors=True)
self.Q = Q.reshape((h, w))
self.Qr = Qr.reshape((h, w))
self.Qn = Qn.reshape((h, w))
self.Phi = Phi.reshape((h, w))
self.FPol = FPol.reshape((h, w))
self.FSA = FSA.reshape((h, w))
def calc_from_XY(self, X, Y, calc_cor_factors=False):
"""
calculate Q values from pixel positions X and Y
X and Y are 1D arrays
        returns reciprocal/angular coordinates, and optionally the polarization/solid-angle correction factors
always calculates Qr and Qn, therefore incident_angle needs to be set
        Note that Phi is in radians, while the angles in ExpPara are in degrees
"""
if self.rot_matrix is None:
raise ValueError('the rotation matrix is not yet set.')
        # the position vectors for each pixel, origin at the position of beam impact
        # R.shape should be (3, w*h), but R.T is more convenient for matrix calculation
# RT.T[i] is a vector
RT = np.vstack((X - self.bm_ctr_x, -(Y - self.bm_ctr_y), 0.*X))
dr = self.ratioDw*self.ImageWidth
# position vectors in lab coordinates, sample at the origin
[X1, Y1, Z1] = np.dot(self.rot_matrix, RT)
Z1 -= dr
# angles
r3sq = X1*X1+Y1*Y1+Z1*Z1
r3 = np.sqrt(r3sq)
r2 = np.sqrt(X1*X1+Y1*Y1)
Theta = 0.5*np.arcsin(r2/r3)
Phi = np.arctan2(Y1, X1) + np.radians(self.sample_normal)
Q = 4.0*np.pi/self.wavelength*np.sin(Theta)
# lab coordinates
Qz = Q*np.sin(Theta)
Qy = Q*np.cos(Theta)*np.sin(Phi)
Qx = Q*np.cos(Theta)*np.cos(Phi)
# convert to sample coordinates
alpha = np.radians(self.incident_angle)
Qn = Qy*np.cos(alpha) + Qz*np.sin(alpha)
Qr = np.sqrt(Q*Q-Qn*Qn)*np.sign(Qx)
if calc_cor_factors==True:
FPol = (Y1*Y1+Z1*Z1)/r3sq
FSA = np.power(np.fabs(Z1)/r3, 3)
return (Q, Phi, Qr, Qn, FPol, FSA)
else:
return (Q, Phi, Qr, Qn)
def calc_from_QrQn(self, Qr, Qn, flag=False):
"""
Qr and Qn are 1D arrays
        when flag is True, substitute Qr with the minimal Qr value allowed by the scattering geometry at the given Qn
returns the pixel positions corresponding to (Qr, Qn)
note that the return arrays may contain non-numerical values
"""
if self.rot_matrix is None:
raise ValueError('the rotation matrix is not yet set.')
alpha = np.radians(self.incident_angle)
if flag is True:
k = 2.0*np.pi/self.wavelength
tt = Qn/k -np.sin(alpha)
Qr0 = np.empty(len(Qr))
Qr0[tt<=1.] = np.fabs(np.sqrt(1.-(tt*tt)[tt<=1.]) - np.cos(alpha))*k
idx1 = (Qr<Qr0) & (tt<=1.)
Qr[idx1] = Qr0[idx1]*np.sign(Qr[idx1])
Q = np.sqrt(Qr*Qr+Qn*Qn)
Phi = np.empty(len(Q))
Theta = self.wavelength*Q/(4.0*np.pi)
idx = (Theta<=1.0)
Theta = np.arcsin(Theta[idx])
Phi[~idx] = np.nan
Qz = Q[idx]*np.sin(Theta)
Qy = (Qn[idx] - Qz*np.sin(alpha)) / np.cos(alpha)
tt = Q[idx]*Q[idx] - Qz*Qz - Qy*Qy
idx2 = (tt>=0)
Qx = np.empty(len(Q[idx]))
Qx[idx2] = np.sqrt(tt[idx2])*np.sign(Qr[idx][idx2])
Phi[idx & idx2] = np.arctan2(Qy[idx2], Qx[idx2])
Phi[idx & ~idx2] = np.nan
return self.calc_from_QPhi(Q, Phi)
def calc_from_QPhi(self, Q, Phi):
"""
Q and Phi are 1D arrays
Phi=0 is the y-axis (pointing up), unless sample_normal is non-zero
returns the pixel positions corresponding to (Q, Phi)
note that the return arrays may contain non-numerical values
"""
if self.rot_matrix is None:
raise ValueError('the rotation matrix is not yet set.')
Theta = self.wavelength*Q/(4.0*np.pi)
X0 = np.empty(len(Theta))
Y0 = np.empty(len(Theta))
idx = (Theta<=1.0) & (~np.isnan(Phi)) # Phi might contain nan from calc_from_QrQn()
Theta = np.arcsin(Theta[idx]);
Phi = Phi[idx] - np.radians(self.sample_normal)
[R13, R23, R33] = np.dot(self.rot_matrix, np.asarray([0., 0., 1.]))
dr = self.ratioDw*self.ImageWidth
        # pixel position in lab reference frame, both centered on detector
# this is code from RQconv.c
# In comparison, the coordinates in equ(18) in the reference above are centered at the sample
tt = (R13*np.cos(Phi)+R23*np.sin(Phi))*np.tan(2.0*Theta)
Z1 = dr*tt/(tt-R33);
X1 = (dr-Z1)*np.tan(2.0*Theta)*np.cos(Phi);
Y1 = (dr-Z1)*np.tan(2.0*Theta)*np.sin(Phi);
R1 = np.vstack((X1, Y1, Z1))
# transform to detector frame
[X, Y, Z] = np.dot(self.rot_matrix.T, R1);
# pixel position, note reversed y-axis
X0[idx] = X + self.bm_ctr_x
Y0[idx] = -Y + self.bm_ctr_y
X0[~idx] = np.nan
Y0[~idx] = np.nan
return (X0, Y0)
class ExpParaLiX(ExpPara):
"""
    This is one way to define the orientation of the detector, as defined in the ref above
    The detector orientation can be defined by a different set of angles;
    a different derived class can be defined to inherit the same functions from ExpPara
"""
det_orient = 0.
det_tilt = 0.
det_phi = 0.
def calc_rot_matrix(self):
tm1 = RotationMatrix('z', np.radians(-self.det_orient))
tm2 = RotationMatrix('y', np.radians(self.det_tilt))
tm3 = RotationMatrix('z', np.radians(self.det_orient+self.det_phi))
self.rot_matrix = np.dot(np.dot(tm3, tm2), tm1)
# +
ep = ExpParaLiX(619, 487) # WAXS1, with crazy angles
ene = 11384.26
wl = 2.*np.pi*1973/ene  # wavelength in Angstrom: 2*pi*(hbar*c ~ 1973 eV*A) / photon energy (eV)
ep.wavelength = wl
ep.det_orient = 30.
ep.det_tilt = -26.
ep.det_phi = -40.
ep.bm_ctr_x = -141.
ep.bm_ctr_y = 328.3
ep.ratioDw = 2.86
ep.grazing_incident = False
ep.flip = 1
ep.incident_angle = 2.0
ep.sample_normal = 0
ep.calc_rot_matrix()
ep.init_coordinates()
# +
ew1 = RQconv.ExpPara()
ew1.wavelength = wl
ew1.bm_ctr_x = -141.
ew1.bm_ctr_y = 328.3
ew1.ratioDw = 2.86
ew1.det_orient = 30.
ew1.det_tilt = -26.
ew1.det_phi = -40.
ew1.grazing_incident = False
ew1.flip = 1
ew1.incident_angle = 2.0
ew1.sample_normal = 0
# -
# data from Pilatus 300K
dw1 = Data2D.Data2d("ru30_2_000018_WAXS1.cbf")
dw1.set_exp_para(ew1)
(i,j) = (99, 21)
print(ep.X[i,j],ep.Y[i,j])
print(dw1.xy2qrqz(float(ep.X[i,j]), float(ep.Y[i,j])))  # builtin float: np.float was removed in NumPy 1.24
print(ep.Qr[i,j], ep.Qn[i,j])
# %matplotlib notebook
import pylab as plt
xx, yy = ep.calc_from_QrQn(ep.Qr.flatten(), ep.Qn.flatten())
plt.figure()
plt.plot(xx, yy, 'r+')
plt.plot(ep.X.flatten(), ep.Y.flatten(), 'x')
xx, yy = ep.calc_from_QPhi(ep.Q.flatten(), ep.Phi.flatten())
plt.figure()
plt.plot(xx, yy, 'r+')
plt.plot(ep.X.flatten(), ep.Y.flatten(), 'x')
|
examples/RQconv/newRQconv.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Compute MNE-dSPM inverse solution on evoked data in volume source space
#
#
# Compute the dSPM inverse solution on an MNE evoked dataset in a volume source
# space and store the solution in a NIfTI file for visualisation.
#
#
#
# +
# Author: <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
from nilearn.plotting import plot_stat_map
from nilearn.image import index_img
from mne.datasets import sample
from mne import read_evokeds
from mne.minimum_norm import apply_inverse, read_inverse_operator
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-vol-7-meg-inv.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src']
# Compute inverse solution
stc = apply_inverse(evoked, inverse_operator, lambda2, method)
stc.crop(0.0, 0.2)
# Export result as a 4D nifti object
img = stc.as_volume(src,
mri_resolution=False) # set True for full MRI resolution
# Save it as a nifti file
# nib.save(img, 'mne_%s_inverse.nii.gz' % method)
t1_fname = data_path + '/subjects/sample/mri/T1.mgz'
# Plotting with nilearn ######################################################
plot_stat_map(index_img(img, 61), t1_fname, threshold=8.,
title='%s (t=%.1f s.)' % (method, stc.times[61]))
plt.show()
|
0.16/_downloads/plot_compute_mne_inverse_volume.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
# %matplotlib inline
# +
# Read file
icfes = pd.read_csv("./data/icfes.csv", low_memory=False, encoding="utf-8")
print(icfes.shape)
icfes.head()
# +
# Drop columns
columns_to_drop = ['ESTU_COD_RESIDE_DEPTO', 'ESTU_COD_RESIDE_MCPIO', 'ESTU_CONSECUTIVO', 'ESTU_ESTUDIANTE',
'COLE_CODIGO_ICFES', 'COLE_NOMBRE_ESTABLECIMIENTO', 'COLE_NOMBRE_SEDE',
'COLE_COD_MCPIO_UBICACION', 'COLE_COD_DEPTO_UBICACION', 'ESTU_COD_MCPIO_PRESENTACION',
'ESTU_COD_DEPTO_PRESENTACION', 'COLE_COD_DANE_ESTABLECIMIENTO', 'COLE_COD_DANE_SEDE',
'PERCENTIL_C_NATURALES', 'PERCENTIL_GLOBAL', 'PERCENTIL_INGLES',
'PERCENTIL_LECTURA_CRITICA', 'PERCENTIL_MATEMATICAS', 'PERCENTIL_SOCIALES_CIUDADANAS']
icfes = icfes.drop(columns=columns_to_drop)
# +
# Drop records without socioeconomic data
icfes = icfes.dropna(axis=0, how='all', subset=['FAMI_ESTRATOVIVIENDA','FAMI_TIENECOMPUTADOR',
'FAMI_TIENELAVADORA','FAMI_TIENEHORNOMICROOGAS',
'FAMI_TIENEAUTOMOVIL'])
# Verify the dropping
icfes3 = icfes.loc[icfes['FAMI_ESTRATOVIVIENDA'].isnull()
& icfes['FAMI_TIENECOMPUTADOR'].isnull()
& icfes['FAMI_TIENELAVADORA'].isnull()
& icfes['FAMI_TIENEHORNOMICROOGAS'].isnull()
& icfes['FAMI_TIENEAUTOMOVIL'].isnull()]
icfes3[['FAMI_ESTRATOVIVIENDA','FAMI_TIENECOMPUTADOR',
'FAMI_TIENELAVADORA','FAMI_TIENEHORNOMICROOGAS','FAMI_TIENEAUTOMOVIL']]
# +
# Use the same value for equivalent values
edu_dict = {"No Sabe":"No sabe",
"Primaria Completa": 'Primaria completa',
"Primaria Incompleta": 'Primaria incompleta',
"Secundaria(Bachillerato) Incompleta": 'Secundaria (Bachillerato) incompleta',
"Secundaria(Bachillerato) Completa": 'Secundaria (Bachillerato) completa',
"Educación Técnica o Tecnológica Completa": 'Técnica o tecnológica completa',
"Educación Técnica o Tecnológica Incompleta": 'Técnica o tecnológica incompleta',
"Educación Profesional Incompleta": 'Educación profesional incompleta',
"Educación Profesional Completa": 'Educación profesional completa'}
icfes['FAMI_EDUCACIONPADRE'] = icfes['FAMI_EDUCACIONPADRE'].replace(to_replace=edu_dict)
icfes['FAMI_EDUCACIONMADRE'] = icfes['FAMI_EDUCACIONMADRE'].replace(to_replace=edu_dict)
icfes['ESTU_ETNIA'] = icfes['ESTU_ETNIA'].replace(
to_replace={"Comunidad Rom (gitana)": "Comunidades Rom (Gitanas)"})
icfes['ESTU_HORASSEMANATRABAJA'] = icfes['ESTU_HORASSEMANATRABAJA'].replace(
to_replace={"0": "No trabaja"})
icfes['FAMI_CUARTOSHOGAR'] = icfes['FAMI_CUARTOSHOGAR'].replace(
to_replace={"1": "Uno",
"2": "Dos",
"3": "Tres",
"4": "Cuatro",
"5": "Cinco",})
# +
# Fix capitalization
def capitalize(df, columns):
for c in columns:
df[c] = df[c].str.capitalize()
columns = ['ESTU_TIENEETNIA', 'FAMI_TIENEAUTOMOVIL', 'FAMI_TIENECOMPUTADOR']
capitalize(icfes, columns)
# +
# Drop 90% > missing values
icfes = icfes.drop(columns=['FAMI_TIENEINTERNET'])
# Fill Missing values
icfes['ESTU_ETNIA'] = icfes['ESTU_ETNIA'].fillna('Ninguno')
icfes['ESTU_TIENEETNIA'] = icfes['ESTU_TIENEETNIA'].fillna('No')
#14094 4%
icfes['FAMI_EDUCACIONPADRE'] = icfes['FAMI_EDUCACIONPADRE'].fillna('No responde')
#14082 4%
icfes['FAMI_EDUCACIONMADRE'] = icfes['FAMI_EDUCACIONMADRE'].fillna('No responde')
#16023 4%
icfes['FAMI_ESTRATOVIVIENDA'] = icfes['FAMI_ESTRATOVIVIENDA'].fillna('No responde')
#16328 4%
icfes['FAMI_TIENESERVICIOTV'] = icfes['FAMI_TIENESERVICIOTV'].fillna('No responde')
#14600 4%
icfes['FAMI_NUMLIBROS'] = icfes['FAMI_NUMLIBROS'].fillna('No responde')
#14779 4%
icfes['FAMI_COMELECHEDERIVADOS'] = icfes['FAMI_COMELECHEDERIVADOS'].fillna('No responde')
#15744 4%
icfes['FAMI_COMECARNEPESCADOHUEVO'] = icfes['FAMI_COMECARNEPESCADOHUEVO'].fillna('No responde')
#15609 4%
icfes['FAMI_COMECEREALFRUTOSLEGUMBRE'] = icfes['FAMI_COMECEREALFRUTOSLEGUMBRE'].fillna('No responde')
#15145 4%
icfes['ESTU_DEDICACIONLECTURADIARIA'] = icfes['ESTU_DEDICACIONLECTURADIARIA'].fillna('No responde')
#17028 4%
icfes['ESTU_DEDICACIONINTERNET'] = icfes['ESTU_DEDICACIONINTERNET'].fillna('No responde')
#55162 14%
icfes['COLE_BILINGUE'] = icfes['COLE_BILINGUE'].fillna('SIN INFO')
#6017 2%
icfes['COLE_CARACTER'] = icfes['COLE_CARACTER'].fillna('SIN INFO')
# +
columns = ['ESTU_GENERO', 'ESTU_DEPTO_RESIDE', 'ESTU_MCPIO_RESIDE', 'FAMI_PERSONASHOGAR', 'FAMI_CUARTOSHOGAR',
'FAMI_TIENECOMPUTADOR', 'FAMI_TIENELAVADORA', 'FAMI_TIENEHORNOMICROOGAS', 'FAMI_TIENEAUTOMOVIL',
'FAMI_TIENEMOTOCICLETA', 'FAMI_TIENECONSOLAVIDEOJUEGOS', 'FAMI_TRABAJOLABORPADRE',
'FAMI_TRABAJOLABORMADRE', 'FAMI_SITUACIONECONOMICA', 'ESTU_HORASSEMANATRABAJA', 'ESTU_TIPOREMUNERACION',
'ESTU_PRIVADO_LIBERTAD', 'ESTU_NSE_INDIVIDUAL']
summary = pd.DataFrame(columns=['Attribute', 'NaN count', 'NaN count (%)', 'most frequent'])
icfes2 = pd.DataFrame()
# Replace missing values with the most common one
for c in columns:
nan_count = icfes[c].isnull().sum()
if nan_count > 0:
most_frequent = icfes[c].value_counts().idxmax()
summary.loc[summary.shape[0]] = [c, nan_count, nan_count/icfes.shape[0], most_frequent]
icfes[c] = icfes[c].fillna(most_frequent)
summary
# +
col = 'FAMI_TIENEINTERNET'
# icfes[col].describe(include='all')
# icfes[col].value_counts(dropna=False).sort_index()
plt.figure(figsize=(8, 6))
#icfes[col].value_counts(dropna=False).plot.bar()
null = icfes.isnull().sum() /icfes.shape[0]
null = null.sort_values(ascending=False)
plt.plot(null.iloc[:10])
plt.xticks(rotation=90)
plt.show()
# +
import sklearn.decomposition
from sklearn_pandas import DataFrameMapper
import numpy as np
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
numeric_icfes = icfes.select_dtypes(include=numerics)
numeric_icfes.describe()
numeric_icfes.isnull().sum()
# +
# PCA
print(numeric_icfes.columns.shape)
pca_mapper = DataFrameMapper([
(['DESEMP_C_NATURALES', 'DESEMP_LECTURA_CRITICA', 'PUNT_C_NATURALES'],
[sklearn.preprocessing.StandardScaler(),
sklearn.decomposition.PCA(n_components=1)])
], df_out=True)
np.round(pca_mapper.fit_transform(numeric_icfes.copy()), 2).head()
# -
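# The `DataFrameMapper` above standardizes three score columns and keeps one principal component. The same two-step operation can be sketched with plain numpy (synthetic data here, not the ICFES columns), which makes the underlying linear algebra explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # stand-in for three numeric score columns

# standardize each column, then project onto the first principal component
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
pc1 = Xs @ Vt[0]  # scores on the first (highest-variance) component

print(pc1.shape)  # (100,)
```

Note that sklearn's `PCA` only centers the data; the `StandardScaler` step is what also divides by the column standard deviations.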
icfes.to_csv("./data/icfes_preprocessing.csv", index=False, encoding="utf-8")
|
ICFES/2 Preprocessing.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'
import numpy as np
import tensorflow as tf
# +
import json
with open('train-test.json') as fopen:
dataset = json.load(fopen)
with open('dictionary.json') as fopen:
dictionary = json.load(fopen)
# -
train_X = dataset['train_X']
train_Y = dataset['train_Y']
test_X = dataset['test_X']
test_Y = dataset['test_Y']
dictionary.keys()
# +
dictionary_from = dictionary['from']['dictionary']
rev_dictionary_from = dictionary['from']['rev_dictionary']
dictionary_to = dictionary['to']['dictionary']
rev_dictionary_to = dictionary['to']['rev_dictionary']
# -
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
# +
for i in range(len(train_X)):
train_X[i] += ' EOS'
train_X[0]
# +
for i in range(len(test_X)):
test_X[i] += ' EOS'
test_X[0]
# +
def pad_second_dim(x, desired_size):
padding = tf.tile([[[0.0]]], tf.stack([tf.shape(x)[0], desired_size - tf.shape(x)[1], tf.shape(x)[2]], 0))
return tf.concat([x, padding], 1)
def encoder_block(inp, n_hidden, filter_size):
inp = tf.pad(inp, [[0, 0], [(filter_size[0]-1)//2, (filter_size[0]-1)//2], [0, 0]])
conv = tf.layers.conv1d(inp, n_hidden, filter_size, padding="VALID", activation=None)
return conv
def decoder_block(inp, n_hidden, filter_size):
inp = tf.pad(inp, [[0, 0], [filter_size[0]-1, 0], [0, 0]])
conv = tf.layers.conv1d(inp, n_hidden, filter_size, padding="VALID", activation=None)
return conv
def glu(x):
return tf.multiply(x[:, :, :tf.shape(x)[2]//2], tf.sigmoid(x[:, :, tf.shape(x)[2]//2:]))
def layer(inp, conv_block, kernel_width, n_hidden, residual=None):
z = conv_block(inp, n_hidden, (kernel_width,))
return glu(z) + (residual if residual is not None else 0)
def sinusoidal_position_encoding(inputs, mask, repr_dim):
T = tf.shape(inputs)[1]
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1]) * tf.expand_dims(tf.to_float(mask), -1)
class Translator:
def __init__(self, from_dict_size, to_dict_size, size_layer, num_layers,
learning_rate, n_attn_heads = 16, beam_width = 5):
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32)
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32)
batch_size = tf.shape(self.X)[0]
encoder_embedding = tf.Variable(tf.random_uniform([from_dict_size, size_layer], -1, 1))
decoder_embedding = tf.Variable(tf.random_uniform([to_dict_size, size_layer], -1, 1))
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
encoder_embedded = tf.nn.embedding_lookup(encoder_embedding, self.X)
en_masks = tf.sign(self.X)
encoder_embedded += sinusoidal_position_encoding(self.X, en_masks, size_layer)
e = tf.identity(encoder_embedded)
for i in range(num_layers):
z = layer(encoder_embedded, encoder_block, 3, size_layer * 2, encoder_embedded)
encoder_embedded = z
encoder_output, output_memory = z, z + e
vocab_proj = tf.layers.Dense(len(dictionary_to))
init_state = tf.reduce_mean(output_memory,axis=1)
cell = tf.nn.rnn_cell.LSTMCell(size_layer)
helper = tf.contrib.seq2seq.TrainingHelper(
inputs = tf.nn.embedding_lookup(decoder_embedding, decoder_input),
sequence_length = tf.to_int32(self.Y_seq_len))
encoder_state = tf.nn.rnn_cell.LSTMStateTuple(c=init_state, h=init_state)
decoder = tf.contrib.seq2seq.BasicDecoder(cell = cell,
helper = helper,
initial_state = encoder_state,
output_layer = vocab_proj)
decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder = decoder,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding = decoder_embedding,
start_tokens = tf.tile(
tf.constant([GO],
dtype=tf.int32),
[tf.shape(init_state)[0]]),
end_token = EOS)
decoder = tf.contrib.seq2seq.BasicDecoder(
cell = cell,
helper = helper,
initial_state = encoder_state,
output_layer = vocab_proj)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = decoder,
maximum_iterations = tf.reduce_max(self.X_seq_len))
self.training_logits = decoder_output.rnn_output
self.predicting_ids = predicting_decoder_output.sample_id
self.logits = decoder_output.sample_id
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# -
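# The `sinusoidal_position_encoding` used by the encoder can be checked with a numpy-only re-implementation (same layout as above: the sine block concatenated before the cosine block, not interleaved, and without the batch/mask handling). Position 0 encodes to zeros in the sine half and ones in the cosine half:

```python
import numpy as np

def sinusoidal_encoding(n_pos, dim):
    # mirrors sinusoidal_position_encoding above, minus batching and masking
    pos = np.arange(n_pos, dtype=np.float64)[:, None]
    i = np.arange(0, dim, 2, dtype=np.float64)
    denom = np.power(10000.0, i / dim)[None, :]
    return np.concatenate([np.sin(pos / denom), np.cos(pos / denom)], axis=1)

enc = sinusoidal_encoding(5, 8)
print(enc.shape)  # (5, 8)
print(enc[0])     # [0. 0. 0. 0. 1. 1. 1. 1.]
```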
size_layer = 512
num_layers = 4
learning_rate = 1e-4
batch_size = 96
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Translator(len(dictionary_from), len(dictionary_to), size_layer, num_layers, learning_rate)
sess.run(tf.global_variables_initializer())
# +
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
ints.append(dic.get(k,UNK))
X.append(ints)
return X
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
# -
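# A quick standalone check of the batching helper (re-stating `pad_sentence_batch` so this cell runs on its own): shorter sequences are right-padded to the longest sequence in the batch, and the original lengths are returned alongside.

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # right-pad every sequence to the longest one in the batch
    padded_seqs, seq_lens = [], []
    max_sentence_len = max(len(s) for s in sentence_batch)
    for sentence in sentence_batch:
        padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
        seq_lens.append(len(sentence))
    return padded_seqs, seq_lens

batch, lens = pad_sentence_batch([[5, 6, 7], [8]], 0)
print(batch)  # [[5, 6, 7], [8, 0, 0]]
print(lens)   # [3, 1]
```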
train_X = str_idx(train_X, dictionary_from)
test_X = str_idx(test_X, dictionary_from)
train_Y = str_idx(train_Y, dictionary_to)
test_Y = str_idx(test_Y, dictionary_to)
sess.run(model.predicting_ids, feed_dict = {model.X: [train_X[0]]}).shape
# +
import tqdm
for e in range(epoch):
pbar = tqdm.tqdm(
range(0, len(train_X), batch_size), desc = 'minibatch loop')
train_loss, train_acc, test_loss, test_acc = [], [], [], []
for i in pbar:
index = min(i + batch_size, len(train_X))
maxlen = max([len(s) for s in train_X[i : index] + train_Y[i : index]])
batch_x, seq_x = pad_sentence_batch(train_X[i : index], PAD)
batch_y, seq_y = pad_sentence_batch(train_Y[i : index], PAD)
feed = {model.X: batch_x,
model.Y: batch_y}
accuracy, loss, _ = sess.run([model.accuracy,model.cost,model.optimizer],
feed_dict = feed)
train_loss.append(loss)
train_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
pbar = tqdm.tqdm(
range(0, len(test_X), batch_size), desc = 'minibatch loop')
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x, seq_x = pad_sentence_batch(test_X[i : index], PAD)
batch_y, seq_y = pad_sentence_batch(test_Y[i : index], PAD)
feed = {model.X: batch_x,
model.Y: batch_y,}
accuracy, loss = sess.run([model.accuracy,model.cost],
feed_dict = feed)
test_loss.append(loss)
test_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
print('epoch %d, training avg loss %f, training avg acc %f'%(e+1,
np.mean(train_loss),np.mean(train_acc)))
print('epoch %d, testing avg loss %f, testing avg acc %f'%(e+1,
np.mean(test_loss),np.mean(test_acc)))
# -
rev_dictionary_to = {int(k): v for k, v in rev_dictionary_to.items()}
# +
test_size = 20
batch_x, seq_x = pad_sentence_batch(test_X[: test_size], PAD)
batch_y, seq_y = pad_sentence_batch(test_Y[: test_size], PAD)
feed = {model.X: batch_x}
logits = sess.run(model.predicting_ids, feed_dict = feed)
logits.shape
# +
rejected = ['PAD', 'EOS', 'UNK', 'GO']
for i in range(test_size):
    predict = [rev_dictionary_to[w] for w in logits[i] if rev_dictionary_to[w] not in rejected]
    actual = [rev_dictionary_to[w] for w in batch_y[i] if rev_dictionary_to[w] not in rejected]
print(i, 'predict:', ' '.join(predict))
print(i, 'actual:', ' '.join(actual))
print()
# -
|
mlmodels/model_dev/nlp_tfflow/neural-machine-translation/45.conv-encoder-lstm-decoder.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import sys, os
sys.path.append(os.path.normpath(os.path.join(os.getcwd(), '..')))
sys.path.append(os.path.normpath(os.path.join(os.getcwd(), '..', 'external', 'MiDaS')))
import pandas as pd
import matplotlib.pyplot as plt
import pickle
# -
with open('/runai-ivrl-scratch/students/2021-fall-sp-jellouli/output/eval.pickle', 'rb') as handle:
res = pickle.load(handle)
with open('/runai-ivrl-scratch/students/2021-fall-sp-jellouli/output/baseline_res.pickle', 'rb') as handle:
baseline_res = pickle.load(handle)
df = pd.DataFrame.from_dict(res, orient='index')
df = df.drop('/runai-ivrl-scratch/students/2021-fall-sp-jellouli/output/model_final.pth')
# +
import re
def get_iter(path):
name = os.path.basename(path)
    iteration = re.fullmatch(r"model_(?P<iteration>[0-9]{7})\.pth", name)['iteration']
return int(iteration)
df.index = df.index.map(get_iter)
df.index = df.index.rename('iteration')
# -
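# `get_iter` extracts the seven-digit iteration counter from a checkpoint filename. A standalone check with a made-up path (the function is re-stated here so the cell runs on its own):

```python
import os
import re

def get_iter(path):
    # pull the zero-padded iteration number out of 'model_XXXXXXX.pth'
    name = os.path.basename(path)
    iteration = re.fullmatch(r"model_(?P<iteration>[0-9]{7})\.pth", name)['iteration']
    return int(iteration)

print(get_iter('/some/output/dir/model_0012345.pth'))  # 12345
```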
df = df.apply(lambda r: r['bbox'], axis=1, result_type='expand')
best_model = df.iloc[df['AP'].argmax()]
df
best_model
baseline_res
# +
fig, axs = plt.subplots(1, 2, figsize=(10,4), dpi=100)
# axs[0].set_title('AP, AP50, AP75 vs iteration')
axs[0].plot(df.index, df['AP'].values, label='AP')
axs[0].plot(df.index, df['AP50'].values, label='AP50')
axs[0].plot(df.index, df['AP75'].values, label='AP75')
axs[0].set_xlabel('iteration')
axs[0].set_ylabel('Average Precision')
axs[0].grid(True)
axs[0].legend()
# axs[1].set_title('AP-person, AP-car, AP-bus Ap-Truck, vs iteration')
axs[1].plot(df.index, df['AP-person'].values, label='AP-person')
axs[1].plot(df.index, df['AP-bus'].values, label='AP-bus')
axs[1].plot(df.index, df['AP-car'].values, label='AP-car')
axs[1].plot(df.index, df['AP-truck'].values, label='AP-truck')
axs[1].set_xlabel('iteration')
axs[1].set_ylabel('Average Precision')
axs[1].grid(True)
axs[1].legend(loc='upper left', prop={'size': 6})
fig.tight_layout()
plt.savefig('AP.jpg')
# plt.show()
# -
plt.figure(figsize=(10,5), dpi=100)
plt.title('AP-person, AP-car, AP-bus Ap-Truck, vs iteration')
plt.plot(df.index, df['AP-person'].values, label='AP-person')
plt.plot(df.index, df['AP-bus'].values, label='AP-bus')
plt.plot(df.index, df['AP-car'].values, label='AP-car')
plt.plot(df.index, df['AP-truck'].values, label='AP-truck')
plt.xlabel('iteration')
plt.ylabel('Average Precision')
plt.grid(True)
plt.legend()
plt.savefig('AP-examples.jpg')
plt.show()
|
notebooks/HKRMnet.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # How to use CNN with time series data
# The regular measurements of a time series result in a grid-like data structure similar to the image data we have focused on so far. As a result, we can use CNN architectures for univariate and multivariate time series. In the latter case, we treat the different time series as channels, similar to the different color signals of an image.
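# The channels-as-series idea can be sketched with plain numpy before turning to Keras: a batch of multivariate series has shape `(samples, timesteps, channels)`, and one Conv1D filter slides along the time axis while summing over all channels. The code below is an illustrative re-implementation with synthetic data, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 12, 3))   # 2 samples, 12 time steps, 3 series as channels
kernel = rng.normal(size=(4, 3))  # one filter: width 4, spanning all 3 channels

def conv1d_valid(x, kernel):
    """Slide the kernel along time; each output sums over the window and all channels."""
    n, t, _ = x.shape
    w = kernel.shape[0]
    out = np.empty((n, t - w + 1))
    for j in range(t - w + 1):
        out[:, j] = np.einsum('nwc,wc->n', x[:, j:j + w, :], kernel)
    return out

y = conv1d_valid(x, kernel)
print(y.shape)  # (2, 9): 12 - 4 + 1 output steps per sample, per filter
```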
# ## Imports & Settings
# +
# %matplotlib inline
import sys
from time import time
from pathlib import Path
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.feature_selection import mutual_info_regression
import tensorflow as tf
tf.autograph.set_verbosity(0, True)
from tensorflow.keras.models import Sequential
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import (Dense,
Flatten,
Conv1D,
MaxPooling1D,
Dropout,
BatchNormalization)
import matplotlib.pyplot as plt
import seaborn as sns
# -
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if gpu_devices:
print('Using GPU')
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
else:
print('Using CPU')
sys.path.insert(1, Path(sys.path[0], '..').as_posix())
from utils import MultipleTimeSeriesCV, format_time
np.random.seed(1)
tf.random.set_seed(1)
sns.set_style('whitegrid')
results_path = Path('results', 'time_series')
if not results_path.exists():
results_path.mkdir(parents=True)
# ## Prepare Data
prices = (pd.read_hdf('../data/assets.h5', 'quandl/wiki/prices')
.adj_close
.unstack().loc['2000':])
prices.info()
# ### Compute monthly returns
# +
returns = (prices
.resample('M')
.last()
.pct_change()
.dropna(how='all')
.loc['2000': '2017']
.dropna(axis=1)
.sort_index(ascending=False))
# remove outliers likely representing data errors
returns = returns.where(returns<1).dropna(axis=1)
returns.info()
# -
# ### Create model data
n = len(returns)
nlags = 12
lags = list(range(1, nlags + 1))
# +
cnn_data = []
for i in range(n-nlags-1):
df = returns.iloc[i:i+nlags+1] # select outcome and lags
date = df.index.max() # use outcome date
cnn_data.append(df.reset_index(drop=True) # append transposed series
.transpose()
.assign(date=date)
.set_index('date', append=True)
                    .sort_index(axis=1, ascending=True))
cnn_data = (pd.concat(cnn_data)
.rename(columns={0: 'label'})
.sort_index())
cnn_data.info(null_counts=True)
# -
# ## Evaluate features
# ### Mutual Information
# +
# mi = mutual_info_regression(X=cnn_data.drop('label', axis=1), y=cnn_data.label)
# mi = pd.Series(mi, index=cnn_data.drop('label', axis=1).columns)
# -
# ### Information Coefficient
# +
# ic = {}
# for lag in lags:
# ic[lag] = spearmanr(cnn_data.label, cnn_data[lag])
# ic = pd.DataFrame(ic, index=['IC', 'p-value']).T
# +
# ax = ic.plot.bar(rot=0, figsize=(14, 4),
# ylim=(-0.05, .05),
# title='Feature Evaluation')
# ax.set_xlabel('Lag')
# sns.despine()
# plt.tight_layout()
# plt.savefig(results_path / 'cnn_ts1d_feature_ic', dpi=300)
# -
# ### Plot Metrics
# +
# metrics = pd.concat([mi.to_frame('Mutual Information'),
# ic.IC.to_frame('Information Coefficient')], axis=1)
# +
# ax = metrics.plot.bar(figsize=(12, 4), rot=0)
# ax.set_xlabel('Lag')
# sns.despine()
# plt.tight_layout()
# plt.savefig(results_path / 'ts1d_metrics', dpi=300)
# -
# ## CNN
# ### Model Architecture
# We design a simple one-layer CNN that uses one-dimensional convolutions combined with max pooling to learn time series patterns:
def get_model(filters=32, kernel_size=5, pool_size=2):
model = Sequential([Conv1D(filters=filters,
kernel_size=kernel_size,
activation='relu',
padding='causal',
input_shape=input_shape,
use_bias=True,
kernel_regularizer=regularizers.l1_l2(l1=1e-5,
l2=1e-5)),
MaxPooling1D(pool_size=pool_size),
Flatten(),
BatchNormalization(),
Dense(1, activation='linear')])
model.compile(loss='mse',
optimizer='Adam')
return model
# ### Set up CV
cv = MultipleTimeSeriesCV(n_splits=12 * 3,
train_period_length=12 * 5,
test_period_length=1,
lookahead=1)
input_shape = nlags, 1
# ### Train Model
def get_train_valid_data(X, y, train_idx, test_idx):
x_train, y_train = X.iloc[train_idx, :], y.iloc[train_idx]
x_val, y_val = X.iloc[test_idx, :], y.iloc[test_idx]
m = X.shape[1]
return (x_train.values.reshape(-1, m, 1), y_train,
x_val.values.reshape(-1, m, 1), y_val)
batch_size = 64
epochs = 100
filters = 32
kernel_size = 4
pool_size = 4
get_model(filters=filters,
kernel_size=kernel_size,
pool_size=pool_size).summary()
# ### Cross-validation loop
result = {}
start = time()
for fold, (train_idx, test_idx) in enumerate(cv.split(cnn_data)):
X_train, y_train, X_val, y_val = get_train_valid_data(cnn_data
.drop('label', axis=1)
.sort_index(ascending=False),
cnn_data.label,
train_idx,
test_idx)
test_date = y_val.index.get_level_values('date').max()
model = get_model(filters=filters,
kernel_size=kernel_size,
pool_size=pool_size)
best_ic = -np.inf
stop = 0
for epoch in range(50):
training = model.fit(X_train, y_train,
batch_size=batch_size,
validation_data=(X_val, y_val),
epochs=epoch + 1,
initial_epoch=epoch,
verbose=0,
shuffle=True)
predicted = model.predict(X_val).squeeze()
ic, p_val_ = spearmanr(predicted, y_val)
if ic > best_ic:
best_ic = ic
p_val = p_val_
stop = 0
else:
stop += 1
if stop == 10:
break
nrounds = epoch + 1 - stop
result[test_date] = [nrounds, best_ic, p_val]
df = pd.DataFrame(result, index=['epochs', 'IC', 'p-value']).T
msg = f'{fold + 1:02d} | {format_time(time()-start)} | {nrounds:3.0f} | '
print(msg + f'{best_ic*100:5.2} ({p_val:7.2%}) | {df.IC.mean()*100:5.2}')
# ### Evaluate Results
metrics = pd.DataFrame(result, index=['epochs', 'IC', 'p-value']).T
ax = metrics.IC.plot(figsize=(12, 4),
label='Information Coefficient',
title='Validation Performance',
ylim=(0, .08))
metrics.IC.expanding().mean().plot(ax=ax, label='Cumulative Average')
plt.legend()
sns.despine()
plt.tight_layout()
plt.savefig(results_path / 'cnn_ts1d_ic', dpi=300);
|
ml4trading-2ed/18_convolutional_neural_nets/04_time_series_prediction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# # Azure Machine Learning Setup
# To begin, you will need to provide the following information about your Azure Subscription.
#
# **If you are using your own Azure subscription, please provide names for subscription_id, resource_group, workspace_name and workspace_region to use.** Note that the workspace needs to be of type [Machine Learning Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/setup-create-workspace).
#
# **If an environment is provided to you, be sure to replace XXXXX in the values below with your unique identifier.**
#
# In the following cell, be sure to set the values for `subscription_id`, `resource_group`, `workspace_name` and `workspace_region` as directed by the comments (*these values can be acquired from the Azure Portal*).
#
# To get these values, do the following:
# 1. Navigate to the Azure Portal and login with the credentials provided.
# 2. From the left hand menu, under Favorites, select `Resource Groups`.
# 3. In the list, select the resource group with the name similar to `hands-on-lab-XXXXX`.
# 4. From the Overview tab, capture the desired values.
#
# Execute the following cell by selecting the `>|Run` button in the command bar above.
# +
#Provide the Subscription ID of your existing Azure subscription
subscription_id = "" #"<your-azure-subscription-id>"
#Provide a name for the Resource Group that will contain Azure ML related services
resource_group = "hands-on-lab-SUFFIX" #"<your-resource-group-name>"
# Provide the unique name and region for the Azure Machine Learning Workspace that will be created
workspace_name = "ml-wksp-SUFFIX"
workspace_region = "westus2" # eastus2, eastus, westcentralus, southeastasia, australiaeast, westeurope
# -
# ## Create and connect to an Azure Machine Learning Workspace
#
# The Azure Machine Learning Python SDK is required for leveraging the experimentation, model management and model deployment capabilities of Azure Machine Learning services. Run the following cell to create a new Azure Machine Learning **Workspace** and save the configuration to disk. The configuration file named `config.json` is saved in a folder named `.azureml`.
#
# **Important Note**: You will be prompted to login in the text that is output below the cell. Be sure to navigate to the URL displayed and enter the code that is provided. Once you have entered the code, return to this notebook and wait for the output to read `Workspace configuration succeeded`.
# +
import azureml.core
print('azureml.core.VERSION: ', azureml.core.VERSION)
# import the Workspace class and check the azureml SDK version
from azureml.core import Workspace
ws = Workspace.create(
name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
location = workspace_region,
exist_ok = True)
ws.write_config()
print('Workspace configuration succeeded')
# -
# Take a look at the contents of the generated configuration file by running the following cell:
# !cat .azureml/config.json
# # Deploy model to Azure Container Instance (ACI)
#
# In this section, you will deploy a web service that uses Gensim as shown in `01 Summarize` to summarize text. The web service will be hosted in Azure Container Service.
# ## Create the scoring web service
#
# When deploying models for scoring with Azure Machine Learning services, you need to define the code for a simple web service that will load your model and use it for scoring. By convention this service has two methods init which loads the model and run which scores data using the loaded model.
#
# This scoring service code will later be deployed inside of a specially prepared Docker container.
# +
# %%writefile summarizer_service.py
import re
import nltk
import unicodedata
from gensim.summarization import summarize, keywords
def clean_and_parse_document(document):
    if isinstance(document, str):
        pass
    elif isinstance(document, bytes):
        # decode byte input, normalize it, and drop non-ASCII characters
        document = unicodedata.normalize('NFKD', document.decode('utf-8', 'ignore')) \
            .encode('ascii', 'ignore').decode('ascii')
    else:
        raise ValueError("Document is not str or bytes.")
document = document.strip()
sentences = nltk.sent_tokenize(document)
sentences = [sentence.strip() for sentence in sentences]
return sentences
def summarize_text(text, summary_ratio=None, word_count=30):
sentences = clean_and_parse_document(text)
cleaned_text = ' '.join(sentences)
summary = summarize(cleaned_text, split=True, ratio=summary_ratio, word_count=word_count)
return summary
def init():
nltk.download('all')
return
def run(input_str):
try:
return summarize_text(input_str)
except Exception as e:
return (str(e))
# -
# ## Environments
#
# Azure ML environments are an encapsulation of the environment where your machine learning training happens. They define Python packages, environment variables, Docker settings and other attributes in declarative fashion. Environments are versioned: you can update them and retrieve old versions to revisit and review your work.
#
# Environments allow you to:
# * Encapsulate dependencies of your training process, such as Python packages and their versions.
# * Reproduce the Python environment on your local computer in a remote run on VM or ML Compute cluster
# * Reproduce your experimentation environment in production setting.
# * Revisit and audit the environment in which an existing model was trained.
#
# Environment, compute target and training script together form the run configuration: the full specification of a training run.
# ### Use curated environments
#
# Curated environments are provided by Azure Machine Learning and are available in your workspace by default. They contain collections of Python packages and settings to help you get started with different machine learning frameworks.
#
# * The __AzureML-Minimal__ environment contains a minimal set of packages to enable run tracking and asset uploading. You can use it as a starting point for your own environment.
# * The __AzureML-Tutorial__ environment contains common data science packages, such as Scikit-Learn, Pandas and Matplotlib, and a larger set of azureml-sdk packages.
#
# Curated environments are backed by cached Docker images, reducing the run preparation cost.
#
# See https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training/using-environments/using-environments.ipynb for more details.
# ### Create your own environment
#
# Instead of using a curated environment, you may create your own by instantiating an ```Environment``` object and then setting its attributes: the set of Python packages, environment variables and others. We will take this approach in this training.
#
# #### Add Python packages
#
# The recommended way is to specify Conda packages, as they typically come with a complete set of pre-built binaries. Still, you may also add pip packages and specify package versions.
# +
from azureml.core import Environment
from azureml.core.environment import CondaDependencies
myenv = Environment(name="myenv")
conda_dep = CondaDependencies()
conda_dep.add_pip_package("gensim")
conda_dep.add_pip_package("nltk")
myenv.python.conda_dependencies=conda_dep
# -
# ### Register environment
#
# You can manage environments by registering them. This allows you to track their versions, and reuse them in future runs. For example, once you've constructed an environment that meets your requirements, you can register it and use it in other experiments so as to standardize your workflow.
#
# If you register an environment with the same name, the version number is increased by one. Note that Azure ML keeps track of differences between versions, so if you re-register an identical environment, the version number is not increased.
myenv.register(workspace=ws)
# ## Deployment
#
# If you want more control over how your model is run, if it uses another framework, or if it has special runtime requirements, you can instead specify your own environment and scoring method. Custom environments can be used for any model you want to deploy.
#
# In previous code, you specified the model's runtime environment by creating an [Environment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.environment%28class%29?view=azure-ml-py) object and providing the [CondaDependencies](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py) needed by your model.
#
# In the following cells you will use the Azure Machine Learning SDK to package the model and scoring script in a container, and deploy that container to an Azure Container Instance.
#
# Run the following cells.
# +
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
inference_config = InferenceConfig(entry_script='summarizer_service.py', environment=myenv)
aci_config = AciWebservice.deploy_configuration(
cpu_cores = 1,
memory_gb = 1,
tags = {'name':'Summarization'},
description = 'Summarizes text.')
# -
# Now you are ready to begin your deployment to the Azure Container Instance. In this case, the model logic is embedded in the scoring script itself, so the _models_ parameter is given an empty list.
#
# Run the following cell. Deployment may take 5-15 minutes to complete, during which the _Running_ tag adds progress dots.
#
# You will see output similar to the following when your web service is ready:
#
# `
# Succeeded
# ACI service creation operation finished, operation "Succeeded"`
# +
from azureml.core import Model
from azureml.core import Webservice
from azureml.exceptions import WebserviceException
service_name = "summarizer"
# Remove any existing service under the same name.
try:
Webservice(ws, service_name).delete()
except WebserviceException:
pass
webservice = Model.deploy(workspace=ws,
name=service_name,
models=[],
inference_config=inference_config,
deployment_config=aci_config)
webservice.wait_for_deployment(show_output=True)
# -
# ## Test the deployed service
#
# Now you are ready to test scoring using the deployed web service. The following cell invokes the web service.
#
# Run the following cells to test scoring using a single input row against the deployed web service.
example_document = """
I was driving down El Camino and stopped at a red light.
It was about 3pm in the afternoon.
The sun was bright and shining just behind the stoplight.
This made it hard to see the lights.
There was a car on my left in the left turn lane.
A few moments later another car, a black sedan pulled up behind me.
When the left turn light changed green, the black sedan hit me thinking
that the light had changed for us, but I had not moved because the light
was still red.
After hitting my car, the black sedan backed up and then sped past me.
I did manage to catch its license plate.
The license plate of the black sedan was ABC123.
"""
result = webservice.run(input_data = example_document)
print(result)
# ## Capture the scoring URI
#
# In order to call the service from a REST client, you need to acquire the scoring URI. Run the following cell to retrieve the scoring URI and take note of this value, you will need it in the last notebook.
webservice.scoring_uri
# The default settings used in deploying this service result in a service that does not require authentication, so the scoring URI is the only value you need to call this service.
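As a sketch of what a REST call against that URI might look like, the request can be built with the standard library alone. Note the JSON body shape and the `build_scoring_request` helper are assumptions for illustration, not taken from the service definition; replace the placeholder URI with the value of `webservice.scoring_uri`:

```python
import json
import urllib.request

def build_scoring_request(scoring_uri, document):
    # hypothetical payload shape; adjust to match what the deployed run() expects
    body = json.dumps({"input_data": document}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scoring_request("http://example.invalid/score", "Some text to summarize.")
# urllib.request.urlopen(req) would send it; not executed here
```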
|
Hands-on lab/notebooks/02 Deploy Summarizer Web Service.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''anaconda3'': virtualenv)'
# metadata:
# interpreter:
# hash: 510205fcc04712289c1e413dab4b8b305e1b9caeb4b2d946861b0f3375cceae2
# name: python3
# ---
import json
import numpy as np
import pandas as pd
import os
from glob import glob
from IPython.display import clear_output
# +
tanishq_files = glob(r'../data/raw/*anishq*.json') # This is problematic
tanishq_files = ['../data/raw/ekatvambytanishq.json',
'../data/raw/BoycottTanishqJewelry.json',
'../data/raw/boycott_tanishq.json',
'../data/raw/BoycottTanishq2.json',
'../data/raw/BoycottTanishq.json']
tweets = []
for file in tanishq_files:
print(f'Processing file: {file}')
with open(file, encoding='utf-8') as f:
temp = json.load(f)
tweets.extend([tweet['full_text'] for tweet in temp if tweet['full_text'].isascii()])
print(f'Number of tweets extracted: {len(tweets)}')
n_tweets = len(tweets)
tanishq_data = pd.DataFrame(tweets, columns=['Tweet'])
del tweets
# -
if os.path.exists('labelled_dict_tanishq.npy'):
labelled_dict_tanishq = np.load('labelled_dict_tanishq.npy', allow_pickle=True)[()]
else:
labelled_dict_tanishq = tanishq_data["Tweet"].to_dict()
labelled_dict_tanishq = {k: {"text": v, "sentiment": None} for k, v in labelled_dict_tanishq.items()}
# + tags=["outputPrepend"]
count = 0
count_sent = 0
count_garbage = 0
for k, v in labelled_dict_tanishq.items():
clear_output()
count += 1
if v['sentiment'] is not None:
if v['sentiment'] == 10:
count_garbage += 1
else:
count_sent += 1
continue
print('\n')
    print(f'{count} tweets of {n_tweets} have been processed.')
print(f'legal entries: {count_sent} garbage entries: {count_garbage}')
print(f'Working on key: {k}')
print(v['text'])
v['sentiment'] = int(input("Please enter the sentiment:"))
if v['sentiment'] == 10:
count_garbage += 1
else:
count_sent += 1
# -
np.save('labelled_dict_tanishq.npy', labelled_dict_tanishq)
tanishq_data['sentiment'] = tanishq_data.index.to_series().map({k: v['sentiment'] for k, v in labelled_dict_tanishq.items()})
tanishq_data.to_csv('../data/processed/tanishq_data_labelled.csv')
|
notebooks/labeling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reproducing Figure 3 in Porter et al., 2011
#
# In this example we generate *P* receiver functions for a model that includes either a dipping lower crustal layer or a lower-crustal anisotropic layer. These examples reproduce the results of Figure 3 in [Porter et al. (2011)](#references).
#
# Start by importing the necessary packages:
import numpy as np
from pyraysum import prs
# Define the arrays of slowness and back-azimuth values of the incident `P` wave to use as input in the simulation
baz = np.arange(0., 360., 10.)
slow = 0.06
# Load the model file that includes layer dip
model = prs.read_model('../models/model_Porter2011_dip.txt')
# Run the simulation. Here we specify the argument `rot=1` to produce seismograms aligned in the `R-T-Z` coordinate system. The default value is `rot=0`, which produces seismograms aligned in the `N-E-Z` coordinate system; that alignment cannot be used to calculate receiver functions. Furthermore, we are interested only in the direct conversions, and therefore specify `mults=0` to avoid dealing with multiples. This is required to reproduce the published examples, although it is good practice to keep all first-order multiples to properly simulate the full Green's functions.
streamlist = prs.run_prs(model, baz, slow, rot=1, mults=0)
# The function returns a `StreamList` object whose attributes are the `Model`, geometry `(baz, slow)`, a list of `Streams` as well as all run-time arguments that are used by Raysum:
streamlist.__dict__.keys()
# We can then use the method `calculate_rfs` to calculate receiver functions:
streamlist.calculate_rfs()
# The receiver functions are stored as an additional attribute of the streamlist object, which is itself a list of `Streams` containing the radial and transverse component RFs:
streamlist.__dict__.keys()
# We can now filter and plot the results - we specify the key `'rfs'` to work on the receiver functions only.
streamlist.filter('rfs', 'lowpass', freq=1., zerophase=True, corners=2)
streamlist.plot('rfs', tmin=-0.5, tmax=8.)
# Now let's reproduce the second case with the anisotropic lower crustal layer. We simply load the corresponding model and run the functions sequentially again:
# +
model = prs.read_model('../models/model_Porter2011_aniso.txt')
streamlist = prs.run_prs(model, baz, slow, dt=0.01, npts=6000, rot=1,
verbose=False, wvtype='P', mults=0)
streamlist.calculate_rfs()
streamlist.filter('rfs', 'lowpass', freq=1., zerophase=True, corners=2)
streamlist.plot('rfs', tmin=-0.5, tmax=8.)
# -
# ## References
#
# * <NAME>., & <NAME>. (2000). Modelling teleseismic waves in dipping anisotropic structures. Geophysical Journal International, 141, 401-412. https://doi.org/10.1046/j.1365-246x.2000.00090.x
# * <NAME>., <NAME>., & <NAME>. (2011). Pervasive lower-crustal seismic anisotropy in Southern California: Evidence for underplated schists and active tectonics. Lithosphere, 3(3), 201-220. https://doi.org/10.1130/L126.1
|
pyraysum/examples/notebooks/sim_Porter2011.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
from qiskit import Aer, IBMQ
from qiskit.utils import QuantumInstance
from qiskit.circuit import QuantumCircuit, ParameterVector
from qiskit.opflow import StateFn, Z, I, CircuitSampler, Gradient, Hessian
from qiskit.algorithms.optimizers import GradientDescent
import matplotlib.pyplot as plt
# Code to generate a circuit as present in the paper
def paper_circuit():
circuit = QuantumCircuit(5)
params = ParameterVector("theta", length=5)
for i in range(5):
circuit.rx(params[i], i)
circuit.cx(0, 1)
circuit.cx(2, 1)
circuit.cx(3, 1)
circuit.cx(4, 3)
hamiltonian = I
for i in range(1, 5):
if i == 3:
hamiltonian = hamiltonian ^ Z
else:
hamiltonian = hamiltonian ^ I
readout_operator = StateFn(hamiltonian, is_measurement=True) @ StateFn(circuit)
return circuit, readout_operator
print(paper_circuit()[0])
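The `Gradient(grad_method='param_shift')` used below relies on the parameter-shift rule. As a standalone sanity check (independent of Qiskit): for a single `RX(θ)` rotation measured in Z, the expectation is cos θ, and evaluating the circuit at θ ± π/2 recovers the exact derivative −sin θ, not a finite-difference approximation:

```python
import math

def expectation(theta):
    # <Z> after RX(theta) applied to |0> is cos(theta)
    return math.cos(theta)

def param_shift_grad(f, theta):
    # parameter-shift rule: exact gradient for gates generated by a Pauli
    return 0.5 * (f(theta + math.pi / 2) - f(theta - math.pi / 2))

theta = 0.7
print(param_shift_grad(expectation, theta), -math.sin(theta))  # both ≈ -0.6442
```

On hardware, each of the two shifted evaluations costs a full batch of shots, which is why the circuit-evaluation counts below scale with `2 * circuit.num_parameters * s`.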
# +
def evaluate_expectation(x):
value_dict = dict(zip(circuit.parameters, x))
result = sampler.convert(op, params=value_dict).eval()
return np.real(result)
def evaluate_gradient(x):
value_dict = dict(zip(circuit.parameters, x))
result = sampler.convert(gradient, params=value_dict).eval()
return np.real(result)
def gd_callback(nfevs, x, fx, stepsize):
#if nfevs % 10 == 0:
# print(nfevs, fx)
#print(nfevs, fx)
gd_loss.append(fx)
x_values.append([x[0], x[1]])
# -
simulate = True
provider = IBMQ.load_account()
if simulate:
backend = Aer.get_backend('aer_simulator')
else:
backend = provider.backends(name='ibmq_belem')[0]
# +
initial_point = np.random.uniform(0, 2 * np.pi, 5)
shots = [10, 100, 1000]
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.tight_layout()
for s in shots:
# Create circuit
circuit, op = paper_circuit()
# Specify backend information
q_instance = QuantumInstance(backend, shots=s)
sampler = CircuitSampler(q_instance)
gradient = Gradient(grad_method='param_shift').convert(op)
gd_loss, x_values = [], []
# Create optimizer
gd = GradientDescent(maxiter=100, learning_rate=0.1, tol=1e-4, callback=gd_callback)
# Minimize function
gd.optimize(initial_point.size, evaluate_expectation, gradient_function=evaluate_gradient, initial_point=initial_point)
# Plotting information
cir_evals = np.array([i for i in range(len(gd_loss))])
cir_evals *= (2 * circuit.num_parameters * s)
ax1.plot(cir_evals, gd_loss, label='Shots %d'%s)
x_values.clear()
# Keeping track of just theta_0 and theta_1
gd.optimize(initial_point.size, evaluate_expectation, gradient_function=evaluate_gradient, \
initial_point=[0.1, 0.15, 0, 0, 0])
x_values = np.array(x_values)
ax2.plot(x_values[:,0], x_values[:,1], label='Shots %d'%s)
# Make the graph look nice
ax1.axhline(-1, ls='--', c='tab:red', label='Target')
ax1.set_ylabel('Cost')
ax1.set_xlabel('Circuit Evaluations')
ax1.set_xscale('log')
ax1.legend()
ax2.set_ylim(0, np.pi + 0.2)
ax2.set_xlim(-np.pi/2, np.pi/2)
ax2.set_ylabel('Theta 2')
ax2.set_xlabel('Theta 1')
#ax2.legend()
plt.show()
# -
class NewtonOpt(object):
def __init__(self, hess, grad, num_param, circuit, sampler, op, init, lr=0.1, s=1000, maxiter=100):
self.lr = lr
self.shots = s
self.hess = hess
self.grad = grad
self.max_iter = maxiter
self.num_params = num_param
self.circuit = circuit
self.sampler = sampler
self.op = op
self.values = []
self.circuit_evals = []
self.init = init
def optimize(self):
x = self.init
for i in range(self.max_iter):
value_dict = dict(zip(self.circuit.parameters, x))
result = np.real(self.sampler.convert(self.op, params=value_dict).eval())
self.values.append(result)
value_dict = dict(zip(self.circuit.parameters, x))
hessian = np.real(self.sampler.convert(self.hess, params=value_dict).eval())
gradient = np.real(self.sampler.convert(self.grad, params=value_dict).eval())
x = x - self.lr * np.linalg.inv(hessian + np.eye(self.num_params)) @ gradient
#if i % 10 == 0:
# print(i, result)
return self.values, np.array([i for i in range(len(self.values))]) * \
((4 * self.num_params**2 - 3 * self.num_params) * self.shots)
# Generate circuit with only two parameters
def paper_circuit_two_param():
circuit = QuantumCircuit(5) # 5 qubit circuit
params = ParameterVector("theta", length=2)
for i in range(2):
circuit.rx(params[i], i)
circuit.cx(0, 1)
circuit.cx(2, 1)
circuit.cx(3, 1)
circuit.cx(4, 3)
hamiltonian = I
for i in range(1, 5):
if i == 3:
hamiltonian = hamiltonian ^ Z
else:
hamiltonian = hamiltonian ^ I
readout_operator = StateFn(hamiltonian, is_measurement=True) @ StateFn(circuit)
return circuit, readout_operator
# +
initial_point = np.array([0.1, 0.15])
s = 1000
circuit, op = paper_circuit_two_param()
q_instance = QuantumInstance(backend, shots=s)
sampler = CircuitSampler(q_instance)
gradient = Gradient(grad_method='param_shift').convert(op)
hessian = Hessian(hess_method='param_shift').convert(op)
gd_loss = []
x_values = []
# GD minimization
gd = GradientDescent(maxiter=80, learning_rate=0.1, tol=1e-4, callback=gd_callback)
x_opt, fx_opt, nfevs = gd.optimize(initial_point.size, evaluate_expectation, gradient_function=evaluate_gradient, \
initial_point=initial_point)
cir_evals = np.array([i for i in range(len(gd_loss))])
cir_evals *= (2 * circuit.num_parameters * s)
# 2nd order minimization
newton = NewtonOpt(hessian, gradient, circuit.num_parameters, circuit, sampler, op, initial_point, maxiter=80)
loss, evals = newton.optimize()
plt.plot(evals, loss, label='Newton')
plt.plot(cir_evals, gd_loss, label='GD')
plt.axhline(-1, ls='--', c='tab:red', label='Target')
plt.ylabel('Cost')
plt.xlabel('Circuit Evaluations')
plt.legend()
plt.show()
# +
#initial_point = np.random.uniform(0, 2 * np.pi, 5)
initial_point = np.array([0.1, 0.15, 0, 0, 0])
s = 1000
circuit, op = paper_circuit()
q_instance = QuantumInstance(backend, shots=s)
sampler = CircuitSampler(q_instance)
gradient = Gradient().convert(op)
hessian = Hessian(hess_method='param_shift').convert(op)
gd_loss = []
x_values = []
# GD minimization
gd = GradientDescent(maxiter=100, learning_rate=0.1, tol=1e-4, callback=gd_callback)
x_opt, fx_opt, nfevs = gd.optimize(initial_point.size, evaluate_expectation, gradient_function=evaluate_gradient, \
initial_point=initial_point)
cir_evals = np.array([i for i in range(len(gd_loss))])
cir_evals *= (2 * circuit.num_parameters * s)
# 2nd order minimization
newton = NewtonOpt(hessian, gradient, circuit.num_parameters, circuit, sampler, op, initial_point)
loss, evals = newton.optimize()
plt.plot(evals, loss, label='Newton')
plt.plot(cir_evals, gd_loss, label='GD')
plt.axhline(-1, ls='--', c='tab:red', label='Target')
plt.ylabel('Cost')
plt.xlabel('Circuit Evaluations')
plt.xscale('log')
plt.legend()
plt.show()
# -
|
Evaluating-Gradients-Quantum-Hardware/Mari_et_al_2021.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.5 64-bit (''py39'': conda)'
# name: python3
# ---
# # [weasyprint](https://github.com/Kozea/WeasyPrint)
# +
# #!conda install -c conda-forge -y weasyprint
# -
# ## Command-Line
# !weasyprint http://weasyprint.org weasyprint-website.pdf
# !weasyprint http://weasyprint.org weasyprint-website.pdf \
# -s <(echo 'body { font-family: serif !important }')
# ## Python Library
# ### Quickstart
from weasyprint import HTML
HTML('http://weasyprint.org/').write_pdf('weasyprint-website.pdf')
from weasyprint import HTML, CSS
HTML('http://weasyprint.org/').write_pdf('weasyprint-website.pdf',
stylesheets=[CSS(string='body { font-family: serif !important }')])
|
weasyprint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
from quantumnetworks import ExpSystem, plot_evolution
import numpy as np
# + [markdown] tags=[]
# # Trapezoidal Method
# -
sys = ExpSystem(params={"lambda": 10})
x_0 = np.array([1])
ts = np.linspace(0, 1, 101)
X = sys.trapezoidal(x_0, ts)
plot_evolution(X[0,:], ts)
# # Forward Euler
sys = ExpSystem(params={"lambda": 10})
x_0 = np.array([1])
ts = np.linspace(0, 1, 101)
X = sys.forward_euler(x_0, ts)
plot_evolution(X[0,:], ts)
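Assuming `ExpSystem` models simple exponential decay dx/dt = −λx (that reading of the `lambda` parameter is my assumption, not confirmed by the package docs), the two update rules demonstrated above can be sketched in a few lines; for this linear system the implicit trapezoidal step has a closed form:

```python
import math

def forward_euler_decay(lam, x0, ts):
    # explicit step: x_{k+1} = x_k + dt * (-lam * x_k)
    xs = [x0]
    for k in range(len(ts) - 1):
        dt = ts[k + 1] - ts[k]
        xs.append(xs[-1] * (1 - lam * dt))
    return xs

def trapezoidal_decay(lam, x0, ts):
    # implicit step: x_{k+1} = x_k + (dt/2) * (-lam*x_k - lam*x_{k+1}),
    # solved in closed form for x_{k+1}
    xs = [x0]
    for k in range(len(ts) - 1):
        dt = ts[k + 1] - ts[k]
        xs.append(xs[-1] * (1 - lam * dt / 2) / (1 + lam * dt / 2))
    return xs

ts = [i / 100 for i in range(101)]
exact = math.exp(-10 * 1.0)
print(abs(trapezoidal_decay(10, 1.0, ts)[-1] - exact))  # smaller error than forward Euler
```

The trapezoidal rule is second-order accurate, so on the same grid it tracks the exact solution e^(−λt) much more closely than first-order forward Euler.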
u = sys.eval_u(0)
sys.eval_Jf(x_0, u)
sys.eval_Jf_numerical(x_0, u)
|
demos/basic/ExpSystem.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import ants
import numpy as np
import os
indir = '../../data/ADHD200/Outputs/fmriprep/fmriprep/'
contents = os.listdir(indir)
subs = [content for content in contents if all((content.startswith('sub-') , os.path.isdir(os.path.join(indir,content))))]
subs.sort()
subs[0:10]
subs[10:20]
index = subs.index('sub-0010003')
print(index)
lst = list(range(1,5+1))
print(lst)
for s in lst:
    T1_fn_template = '{sub}_desc-preproc_T1w.nii.gz'  # formatted string
    brain_mask_template = '{sub}_desc-brain_mask.nii.gz'  # formatted string
    T1_fn = T1_fn_template.format(sub=subs[s])
    brain_mask_fn = brain_mask_template.format(sub=subs[s])
    T1_path = os.path.join(indir, subs[s], 'anat', T1_fn)  # full path to the brain scan
    brain_mask_path = os.path.join(indir, subs[s], 'anat', brain_mask_fn)
    print(T1_path)
    print(brain_mask_path)
    # read and plot inside the same loop so the images change with s
    T1 = ants.image_read(T1_path)  # read the image as an ANTs object
    brain_mask = ants.image_read(brain_mask_path)
    T1.plot_ortho(flat=True, title='T1')
    brain_mask.plot_ortho(flat=True, title='Brain Mask')
    T1.plot_ortho(brain_mask, flat=True, title='Overlaid')
|
Data/.ipynb_checkpoints/Loop_Extract_Brains1-65-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Correlation function of DR72 SDSS VAGC Catalog
# First import all the modules such as healpy and astropy needed for analyzing the structure
import healpix_util as hu
import astropy as ap
import numpy as np
from astropy.io import fits
from astropy.table import Table
import astropy.io.ascii as ascii
from astropy.io import fits
from astropy.constants import c
import matplotlib.pyplot as plt
import math as m
from math import pi
import scipy.special as sp
from scipy import integrate
import warnings
from sklearn.neighbors import BallTree
import pickle
import pymangle
from scipy.optimize import curve_fit
# %matplotlib inline
dr7full=ascii.read("./input/")
dr7full
z=dr7full['col3']
rad=dr7full['col1']
decd=dr7full['col2']
# +
#Ez = lambda x: 1.0/m.sqrt(0.3*(1+x)**3+0.7)
Om=0.3
Ol=0.7
Ok=0.0
def Ez(zv):
return 1.0/m.sqrt(Om*(1.0+zv)**3+Ok*(1.0+zv)**2+Ol)
Ez=np.vectorize(Ez)
#Calculate comoving distance of a data point using the Redshift - This definition is based on the cosmology model we take. Here the distance for E-dS universe is considered. Also note that c/H0 ratio is cancelled in the equations and hence not taken.
# -
def DC_LCDM(z):
return integrate.quad(Ez, 0, z)[0]
DC_LCDM=np.vectorize(DC_LCDM)
DC_LCDM(2.0)
DC=DC_LCDM(z)
DC
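The comoving-distance integral can be sanity-checked against a cosmology with a closed-form answer. For an Einstein-de Sitter universe (Om=1, Ol=Ok=0), E(z) = (1+z)^(3/2) and the dimensionless distance is 2*(1 - 1/sqrt(1+z)); a minimal check (illustrative, separate from the LCDM values used above):

```python
import math
from scipy import integrate

def Ez_eds(zv):
    # Einstein-de Sitter: Om=1, Ol=Ok=0
    return 1.0 / math.sqrt((1.0 + zv) ** 3)

def DC_eds(z):
    # dimensionless comoving distance (the c/H0 factor is cancelled, as in the text)
    return integrate.quad(Ez_eds, 0, z)[0]

def DC_eds_analytic(z):
    # closed form of the integral of (1+z')^(-3/2) from 0 to z
    return 2.0 * (1.0 - 1.0 / math.sqrt(1.0 + z))

numeric = DC_eds(2.0)
analytic = DC_eds_analytic(2.0)
```

Agreement between `numeric` and `analytic` confirms the quadrature setup before applying it to the LCDM E(z).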
# +
dr7f = open("./output/DR72srarf.dat",'w')
dr7f.write("z\t ra\t dec\t s\t rar\t decr \n")
for i in range(0,len(dr7full)):
dr7f.write("%f\t " %z[i])
dr7f.write("%f\t %f\t " %(rad[i],decd[i]))
dr7f.write("%f\t " %DC[i])
dr7f.write("%f\t %f\n " %(rad[i]*pi/180.0,decd[i]*pi/180.0))
dr7f.close()
# -
data=ascii.read("./output/DR72srarf.dat")
data['z']
data['s']
data['rar']
data['decr']
NSIDE=512
dr72hpix=hu.HealPix("ring",NSIDE)
# +
pixdata = open("./output/pixdatadr72VAGCfull.dat",'w')
pixdata.write("z\t pix \n")
for i in range(0,len(data)):
pixdata.write("%f\t" %data['z'][i])
pixdata.write("%d\n" %dr72hpix.eq2pix(data['ra'][i],data['dec'][i]))
pixdata.close()
# -
pixdata = ascii.read("./output/pixdatadr72VAGCfull.dat")
hpixdata=np.array(np.zeros(hu.nside2npix(NSIDE)))
for j in range(len(pixdata)):
hpixdata[pixdata[j]['pix']]+=1
hpixdata
hu.mollview(hpixdata,rot=180)
mangle=pymangle.Mangle("./masks/window.dr72safe0.ply")
# Ref: https://pypi.python.org/pypi/pymangle/
# %%time
rar,decr=mangle.genrand(2*len(data))
rar
decr
zr=np.array([data['z'],data['z']])
zr
zr=zr.flatten()
zr
print len(zr)
print len(decr)
# +
rdr7f = open("./output/rDR72srarf.dat",'w')
rdr7f.write("z\t ra\t dec\t s\t rar\t decr \n")
for i in range(0,len(zr)):
rdr7f.write("%f\t " %zr[i])
rdr7f.write("%f\t %f\t " %(rar[i]*180.0/pi,decr[i]*180.0/pi))
rdr7f.write("%f\t " %DC_LCDM(zr[i]))
rdr7f.write("%f\t %f\n " %(rar[i],decr[i]))
rdr7f.close()
# -
dataR=ascii.read("./output/rDR72srarf.dat")
dataR['z']
NSIDE=512
rdr72hpix=hu.HealPix("ring",NSIDE)
pixdata = open("./output/pixrand200kdr72.dat",'w')
pixdata.write("z\t pix \n")
for i in range(0,len(rar)):
pixdata.write("%f\t" %zr[i])
pixdata.write("%d\n" %rdr72hpix.eq2pix(rar[i],decr[i]))
pixdata.close()
pixdata = ascii.read("./output/pixrand200kdr72.dat")
hpixdata=np.array(np.zeros(hu.nside2npix(NSIDE)))
for j in range(len(pixdata)):
hpixdata[pixdata[j]['pix']]+=1
hpixdata
hu.mollview(hpixdata,rot=180)
plt.savefig("./plots/rand200kmnew.pdf")
from scipy.stats import norm
from sklearn.neighbors import KernelDensity
from lcdmmetric import *
z=np.array(data['z'])
zkde=z.reshape(-1,1) # samples as rows, one feature per sample
kde = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(zkde)
kde
X_plot = np.linspace(z.min(), z.max(), z.size)[:, np.newaxis]
log_dens = kde.score_samples(X_plot)
log_dens
d=ascii.read("./output/DR72LCsrarf.dat")
d
dataR=ascii.read("./output/rand200kdr72.dat")
dataR['z']
dataR['ra']
dataR['dec']
DCLCR=DC_LC(dataR['z'])
# +
rdr7f = open("./output/rDR7200kLCsrarf.dat",'w')
rdr7f.write("z\t ra\t dec\t s\t rar\t decr \n")
for i in range(0,len(dataR)):
rdr7f.write("%f\t " %dataR['z'][i])
rdr7f.write("%f\t %f\t " %(dataR['ra'][i],dataR['dec'][i]))
rdr7f.write("%f\t " %DCLCR[i])
rdr7f.write("%f\t %f\n " %(dataR['ra'][i]*pi/180.0,dataR['dec'][i]*pi/180.0))
rdr7f.close()
# -
r=ascii.read("./output/rDR7200kLCsrarf.dat")
r
dr7fdat=ascii.read("./output/DR7srarf.dat")
dr7fdat['s'][1:300]
# +
#fdata=fits.open("/Users/rohin/Downloads/DR7-Full.fits")
# +
#fdata.writeto("./output/DR7fulltrim.fits")
# -
fdata=fits.open("./output/DR7fulltrim.fits")
cols=fdata[1].columns
cols.del_col('ZTYPE')
cols.del_col('SECTOR')
cols.del_col('FGOTMAIN')
cols.del_col('QUALITY')
cols.del_col('ISBAD')
cols.del_col('M')
cols.del_col('MMAX')
cols.del_col('ILSS')
cols.del_col('ICOMB')
cols.del_col('VAGC_SELECT')
cols.del_col('LSS_INDEX')
cols.del_col('FIBERWEIGHT')
cols.del_col('PRIMTARGET')
cols.del_col('MG')
cols.del_col('SECTOR_COMPLETENESS')
cols.del_col('COMOV_DENSITY')
cols.del_col('RADIAL_WEIGHT')
fdata[1].columns
fdata.writeto("./output/DR7fullzradec.fits")
fdat=fits.open("./output/DR7fullzradec.fits")
fdat[1].columns
fdat[1].data['Z']
fdat[1].data['RA']
comovlcdm=DC_LCDM(fdat[1].data['Z'])
fdat[1].data['Z']
comovlcdm
comovlcdm.dtype
# +
#cols=fdat[1].columns
# -
nc=fits.Column(name='COMOV',format='D',array=comovlcdm)
nc1=fits.Column(name='COMOV',format='D')
fdata[1].data['Z']
fdata[1].data['RA']
nc
nc.dtype
# +
#cols.add_col(nc)
# -
fdat[1].columns
fdat[1].columns.info()
fdat[1].columns.add_col(nc1)
fdat[1].data['COMOV']=comovlcdm
comovlcdm
fdat[1].data['Z']
fdat[1].data['COMOV']
fdat[1].data['RA']
fdat[1].data['RA']=fdat[1].data['RA']*pi/180.0
comovlcdm=DC_LCDM(fdat[1].data['Z'])
comovlcdm
# Random catalog created based on the survey limitations also taken from http://cosmo.nyu.edu/~eak306/SDSS-LRG.html
dataR=fits.open("/Users/rohin/Downloads/random-DR7-Full.fits")
dataR
dataR=dataR[1].data
len(dataR)
NSIDE=512
dr72hpix=hu.HealPix("ring",NSIDE)
# +
pixdata = open("./output/pixdatadr72VAGCfullrand.dat",'w')
pixdata.write("z\t pix \n")
for i in range(0,len(data)-1):
pixdata.write("%f\t" %data['z'][i])
pixdata.write("%d\n" %dr72hpix.eq2pix(dataR['ra'][i],dataR['dec'][i]))
pixdata.close()
# -
pixdata = ascii.read("./output/pixdatadr72VAGCfullrand.dat")
hpixdata=np.array(np.zeros(hu.nside2npix(NSIDE)))
for j in range(len(pixdata)):
hpixdata[pixdata[j]['pix']]+=1
hpixdata
hu.mollview(hpixdata,rot=180)
hu.orthview(hpixdata)
|
CMASS1_LCDM1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def validate(net, partition, criterion, args):
valloader = torch.utils.data.DataLoader(partition['val'],
batch_size=args.test_batch_size,
shuffle=False, num_workers=2)
net.eval()
correct = 0
total = 0
val_loss = 0
with torch.no_grad():
for data in valloader:
images, labels = data
images = images.cuda()
labels = labels.cuda()
outputs = net(images)
loss = criterion(outputs, labels)
val_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
val_loss = val_loss / len(valloader)
val_acc = 100 * correct / total
return val_loss, val_acc
|
cs224w/one13.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %%capture
## compile PyRoss for this notebook
import os
owd = os.getcwd()
os.chdir('../../')
# %run setup.py install
os.chdir(owd)
# %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import pyross
import time
# +
M = 2 # the population has two age groups
N = 5e4 # and this is the total population
# correct params
beta = 0.02 # infection rate
gIa = 1./7 # recovery rate of asymptomatic infectives
gIs = 1/7
alpha = 0.2 # fraction of asymptomatic infectives
fsa = 0.8 # the self-isolation parameter
gE = 1/5
gA = 1/3
tS = 0.0 # rate S -> Q
tE = 0.01 # rate E -> Q
tA = 0.01 # rate A -> Q
tIa = 0.01 # rate Ia -> Q
tIs = 1./3 # rate Is -> Q, three days for symptomatic people to be tested and quarantined
# set the age structure
fi = np.array([0.25, 0.75])  # fraction of population in each age group
Ni = N*fi
# set the contact structure
C = np.array([[18., 9.], [3., 12.]])
# set up initial condition
E0 = np.array([100, 100])
A0 = np.array([10, 10])
Ia0 = np.array([10, 10])
Is0 = np.array([10, 10])
Q0 = np.array([0, 0])
R0 = np.array([0, 0])
S0 = Ni-(E0+A0+Ia0+Is0+R0+Q0)
Tf = 100
Nf = Tf+1
def contactMatrix(t):
return C
parameters = {'alpha':alpha, 'beta':beta,
'gE':gE,'gA':gA,
'gIa':gIa, 'gIs':gIs, 'fsa':fsa,
'tS':tS,'tE':tE,'tA':tA,'tIa':tIa,'tIs':tIs,
'gAA': gA, 'gAS': gA} # legacy code
# use pyross stochastic to generate traj and save
sto_model = pyross.stochastic.SEAIRQ(parameters, M, Ni)
data = sto_model.simulate(S0, E0, A0, Ia0, Is0, Q0, contactMatrix, Tf, Nf)
data_array = data['X']
np.save('sto_traj.npy', data_array)
# +
# plot the stochastic solution
plt.plot(data_array[:, 0], label='S')
plt.plot(data_array[:, M], label='E')
plt.plot(data_array[:, 2*M], label='A')
plt.plot(data_array[:, 3*M], label='Ia')
plt.plot(data_array[:, 4*M], label='Is')
plt.plot(data_array[:, 5*M], label='Q')
plt.legend()
plt.show()
# +
# load the data and rescale to intensive variables
x = np.load('sto_traj.npy').astype('float')
x = x[:]/N
steps = 101 # number of internal integration steps taken
# initialise the estimator
estimator = pyross.inference.SEAIRQ(parameters, M, fi, int(N), steps)
det_model = pyross.deterministic.SEAIRQ(parameters, M, fi)
x_det = estimator.integrate(x[0], 0, Tf, Nf, det_model, contactMatrix)
plt.plot(x_det[:, 0], label='S')
plt.plot(x_det[:, M], label='E')
plt.plot(x_det[:, 2*M], label='A')
plt.plot(x_det[:, 3*M], label='Ia')
plt.plot(x_det[:, 4*M], label='Is')
plt.plot(x_det[:, 5*M], label='Q')
plt.legend()
plt.show()
# -
# compute -log_p for the original (correct) parameters
start_time = time.time()
logp = estimator.obtain_minus_log_p(parameters, x, Tf, Nf, contactMatrix)
end_time = time.time()
print(logp)
print(end_time - start_time)
# +
eps = 1e-4 # step used to calculate hessian in the optimisation algorithm, can use as a lower bound for parameters
alpha_g = 0.2
alpha_bounds = (0.1, 0.5) # large uncertainty on the fraction of asymptomatic people
# the upper bound for alpha must be 1-2*eps to avoid alpha>1 in hessian calculation performed by optimizer
beta_g = 0.07
beta_bounds = (eps, 0.1) # large uncertainty on beta (e.g. during a lockdown, beta can be very low)
gIa_g = 0.2
gIa_bounds = (eps, 0.5) # large uncertainty on how quickly asymptomatic people recover
gIs_g = 0.145
gIs_bounds = (0.13, 0.15) # tight bounds on gIs (can come from clinical data)
gE_g = 0.22
gE_bounds = (0.15, 0.25) # tight bounds on the exit rate from the exposed class
gA_g = 0.3
gA_bounds = (0.25, 0.35) # tight bounds on the exit rate from the activated class
fsa_g = 0.8 # assume we know this precisely
guess = [alpha_g, beta_g, gIa_g, gIs_g, gE_g, gA_g]
bounds = [alpha_bounds, beta_bounds, gIa_bounds, gIs_bounds, gE_bounds, gA_bounds]
params, nit = estimator.inference(guess, x, Tf, Nf, contactMatrix, bounds, niter=1, ftol=1e-5, verbose=True)
# +
# compute log_p for best estimate
print('best estimates: ', params)
print('nit: ', nit)
start_time = time.time()
parameters = estimator.make_params_dict(params)
logp = estimator.obtain_minus_log_p(parameters, x, Tf, Nf, contactMatrix)
end_time = time.time()
print(logp)
print(end_time - start_time)
# -
|
examples/inference/ex4_SEAIRQ.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="S_y6UCGvxD6a"
# ### Prep work station
# + colab={} colab_type="code" id="N_AfMgqbO3Uk"
# Import the necessary libraries
# Note: some library imports may be incompatible with Jupyter Notebooks
# %matplotlib inline
# !pip install category_encoders
# !pip install eli5
# !pip install pdpbox
# !pip install plotly
# !pip install xgboost
import category_encoders as ce
import eli5
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pickle
import plotly.express as px
import plotly.figure_factory as ff
import seaborn as sns
from eli5.sklearn import PermutationImportance
from pdpbox import pdp
from pdpbox.pdp import pdp_interact, pdp_interact_plot, pdp_isolate, pdp_plot
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split, RandomizedSearchCV
from sklearn import metrics
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier, XGBRegressor
# + [markdown] colab_type="text" id="dDtvyLL-Ru_G"
# ### Data Wrangling
# -
df = pd.read_csv('Kickstarter.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 424} colab_type="code" id="8KKr7g_lsa34" outputId="5739850d-d5b6-45db-bff7-b3b7dfb99c41"
df.describe() #search for outliers
# features with outliers: backers_count, converted_pledged_amount,
# fx_rate, goal -> backers_count and goal
# proposed dependent feature (y - success): converted_pledged_amount/goal
# (requires drop of all pledged features, state)
# potential non-unique features (drop): pledged, usd_pledged, currency,
# currency_symbol (conflicts with other features)
# low-variable features (drop): id, static_usd_rate, fx_rate, urls,
# source_urls,
# features to revisit with domain knowledge (drop): all datetimes
# (created_at, deadline, launched_at, state_changed_at)
# What's left?: ['backers_count', 'blurb', 'category', 'country'
# 'goal', 'slug', 'spotlight', 'staff_pick', 'y']
# +
# df.country.value_counts() -> stick with one: US (has the most data)
# df.isnull().sum()
# features to drop: friends, is_backing, is_starred, permissions
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="V3_G447Jxi73" outputId="f4b28518-3e00-43e2-8798-2ad35ba2bcfe"
#using cross-validation -> train/test splits/holdouts made below
# + colab={} colab_type="code" id="Y8cmn_Gdyy3o"
def return_category(x):
res = json.loads(x)
out = res['name']
return(out)
def wrangle(X):
""" function to wrangle/pre-process the data in df """
X = X.copy()
X['success'] = (X['converted_pledged_amount']/X['goal'])
# keep important features - revisit with slug and blurb
X = X[['backers_count', 'category', 'country',
'goal', 'spotlight', 'staff_pick', 'success']]
    # retrieve names of categories
X['category'] = X['category'].apply(return_category)
# enumerate boolean string types - or not - may break the model
# X[['spotlight', 'staff_pick']] = X[['spotlight', 'staff_pick']].astype(int)
# query - control for country
X = X.query('country == "US"')
# managing outliers
X = X.query('backers_count < 750')
X = X.query('goal < 350000')
X = X.query('success <= 3')
    # convert to categorical, for a relative measure of performance
X['success'] = pd.qcut(X.success, 10, labels = [
10, 9, 8,
7, 6, 5,
4, 3, 2, 1])
#drop any remaining rows/entries containing one or more nan
X = X.dropna()
return X
df2 = wrangle(df)
y = pd.DataFrame()
y['success'] = (df2['success'])
X = df2.drop(columns = ['success','country','category'])
X.head(3) # there should be four features: 1(int), 1(float), 2(bool)
# -
# ### Establish Baseline and Train/Test Split
y['success'].value_counts(normalize = True)
# baseline: .110
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80, random_state=1)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# + [markdown] colab_type="text" id="yeEX9gNuDJzT"
# ### Exploratory visuals
# -
# Prep data for use with Partial Dependence Plots (PDPs)
# to help interpret feature importances
encoder = ce.OrdinalEncoder()
X_encoded = encoder.fit_transform(X_train)
model = RandomForestClassifier(n_estimators=100, random_state=1, n_jobs=-1)
model.fit(X_encoded, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="-2EygQt9SQYf" outputId="fca5c986-263a-4396-8102-0f20914e70c0"
# Compare two explanatory/independent features
features = ['goal', 'backers_count']
interaction = pdp_interact(
model=model,
dataset=X_encoded,
model_features=X_encoded.columns,
features=features
)
pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
# Plots represent how two feats interact
# to affect the target-value predicted
# + [markdown] colab_type="text" id="EYiCtihmx0qL"
# ### Feature Importance: Permutation
# + colab={"base_uri": "https://localhost:8080/", "height": 139} colab_type="code" id="b3ziYhpqy2q2" outputId="cd5e9b57-1ccf-4b0b-cbd1-66fac09d4d4e"
# Define the model and fit the train data
transformers = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'))
X_train_transformed = transformers.fit_transform(X_train)
X_test_transformed = transformers.transform(X_test)
model = RandomForestClassifier(n_estimators=100, random_state=1, n_jobs=-1)
model.fit(X_train_transformed, y_train)
# Instantiate the permuter and fit to test data
permuter = PermutationImportance(
model,
scoring='accuracy',
n_iter=2,
random_state=42)
permuter.fit(X_test_transformed, y_test)
feature_names = X_test.columns.tolist()
eli5.show_weights(
permuter,
top=None, # show permutation importances for all features
feature_names=feature_names)
# Interpreting permutation output: values at top are most important feats
#"The first number in each row shows how much model performance decreased with a random shuffling.
# The number after the ± measures how performance varied from one-reshuffling to the next.
# You'll occasionally see negative values for permutation importances. In those cases,
# the predictions on the shuffled (or noisy) data happened to be more accurate than the real data.
# This happens when the feature didn't matter (should have had an importance close to 0),
# but random chance caused the predictions on shuffled data to be more accurate.
# This is more common with small datasets, like the one in this example,
# because there is more room for luck/chance." source:
# https://www.kaggle.com/dansbecker/permutation-importance
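The quoted explanation can be illustrated by computing a permutation importance by hand, independent of eli5. Everything below is synthetic and illustrative: a toy "model" predicts the first feature exactly, so shuffling that feature hurts accuracy, while shuffling a pure-noise feature leaves accuracy unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.integers(0, 2, size=(n, 2)).astype(float)  # col 0: informative, col 1: noise
y = X[:, 0]  # the label is exactly the first feature

def predict(X):
    # toy "model": echo the first feature
    return X[:, 0]

def accuracy(X, y):
    return float(np.mean(predict(X) == y))

base = accuracy(X, y)  # perfect by construction
importances = []
for col in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])  # shuffle one column
    importances.append(base - accuracy(Xp, y))  # drop in accuracy
```

The informative column shows a large drop; the noise column shows none, matching the interpretation of the eli5 output above.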
# + [markdown] colab_type="text" id="OQsYvTBJILrn"
# ### Exploring Models: XG-BOOST
# Accuracy score (test): 0.457
# + colab={"base_uri": "https://localhost:8080/", "height": 318} colab_type="code" id="PWnXuk4oIPBk" outputId="69deca35-5073-4833-fcd7-2b444e6c7eb3"
# Make a model and fit it to your train set
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=100, random_state=1,
n_jobs=-1))
pipeline.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="zA4O7jJlIXZV" outputId="17e6517f-a33b-4636-c75b-cf861643ef6a"
# Check accuracy of fitted model
print('Test Accuracy', pipeline.score(X_test, y_test))
# + [markdown] colab_type="text" id="u_SGNNma_HsF"
# ### Exploring Models: Random Forest Classifier
# Accuracy score (test): 0.423
# + colab={"base_uri": "https://localhost:8080/", "height": 70} colab_type="code" id="gw-4sR5PzoSf" outputId="e675bf43-1894-4b60-d43c-503cf543cdbb"
# Make a pipeline to set up the model
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, n_jobs=-1))
# Fit on train, score on test
pipeline.fit(X_train,y_train)
# Get accuracy score for Test-set
print('Test Accuracy:', pipeline.score(X_test, y_test))
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="DUA-Hzd9rvuw" outputId="ff21cbf6-3f6a-4dc4-c008-365060f09c92"
# Cross-validation method (optional)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, n_jobs=-1,
random_state=42))
k = 5 # represents the number of cross-validations to be run
scores = cross_val_score(pipeline, X_train, y_train, cv=k,
scoring='accuracy')
print(f'Accuracy Score for {k} folds:', scores)
print('Average Score', np.mean(scores))
# + [markdown] colab_type="text" id="bdEi1_Ez2fxb"
# ### Formalizing the Model
#
# +
# Get the best model - opting for Random Forest here.
# Behind-the-curtain trial-and-error gives reason
# for favoring Regressor over Classifier.
# Instantiate the model, make a random grid
rf = RandomForestRegressor(random_state = 42)
random_grid = {'n_estimators': [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)],
'max_features': ['auto', 'sqrt'],
'max_depth': ([int(x) for x in np.linspace(10, 110, num = 11)]),
'min_samples_split': [2, 5, 10],
'min_samples_leaf': [1, 2, 4],
'bootstrap': [True, False]}
rf_random = RandomizedSearchCV(estimator = rf,
param_distributions = random_grid,
n_iter = 100, cv = 3, verbose=2,
random_state=42, n_jobs = -1)
model = rf_random.fit(X_train, y_train)
print(rf,random_grid)
# -
# Get the final score - 0.859 (using Regressor)
print(model.score(X_test,y_test))
# Save and prep the model for export
filename = '2019021_rfc_86.sav'
pickle.dump(model, open(filename,'wb'))
# +
# Make an inference using a fabricated entry
import json
to_test = '''{"backers_count": 100, "goal": 10000.99, "spotlight": 1, "staff_pick": 1}'''
testa = json.loads(to_test)
print(testa) # should look like json
# -
testa_df = pd.DataFrame.from_records(testa, index=[0])
testa_df # format should look familiar
# ### Interpreting the Output
# Get a prediction for above inference
model.predict(testa_df.to_numpy())
# Output = 4.829; Inputs = {'backers_count': 5, 'goal': 50000.0, 'spotlight': 1, 'staff_pick': 0}. What does this mean? Recall that the model being used is a Regressor and that the y-vector's unique values were mapped as integers, from 1 to 10, representing deciles. This means the output is a float from 1 to 10, with a higher value indicating lower comparative performance of a given Kickstarter campaign. For example, higher inputs for goal will increase the output, while higher inputs for backers_count will yield a lower result. The functionality of the model is limited to the defined ranges of the input fields, which are as follows; backers_count: 0 to 750, goal: 0 to 350000.00 (USD), spotlight: 1 or 0, staff_pick: 1 or 0. Holding all other inputs constant, a goal of 500000.00 (USD) will yield the same result as a goal of 4500000.00 (USD).
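The decile mapping described above can be sketched with `pd.qcut` on synthetic ratios (illustrative data, not the Kickstarter set). Because the labels run from 10 down to 1, the lowest success ratios receive label 10 and the highest receive label 1:

```python
import numpy as np
import pandas as pd

# synthetic success ratios (pledged/goal), purely illustrative
success = pd.Series(np.linspace(0.01, 3.0, 100))
deciles = pd.qcut(success, 10, labels=[10, 9, 8, 7, 6, 5, 4, 3, 2, 1])

lowest_label = deciles.iloc[0]    # smallest ratio -> label 10
highest_label = deciles.iloc[-1]  # largest ratio -> label 1
```

This is why a larger regression output corresponds to a worse-performing campaign.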
|
notebooks/6_luc_model_rfreg_no_text.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Importing libraries
import numpy as np
from matplotlib import pyplot as plt
from sklearn.preprocessing import scale
from sklearn.cluster import KMeans
from sklearn import svm
# +
np.random.seed(0)
age=np.random.randint(15,80,30)
income=np.random.randint(20000,100000,30)
plt.scatter(age,income)
plt.show()
# -
data=np.c_[age,income]
model=KMeans(n_clusters=5).fit(scale(data))
label=model.labels_
centroids=model.cluster_centers_
label
plt.scatter(data[:,0],data[:,1],c=label)
plt.show()
# #### To plot the centroids, we need to do inverse of scaling for the centroids and then plot
# +
#invf - inverse_function does the inverse of scaling
def invf(x,y):
import numpy as np
return np.c_[[np.std(age)*i+np.mean(age) for i in x],[np.std(income)*i+np.mean(income) for i in y]]
reverse_centroid=invf(centroids[:,0],centroids[:,1])
# -
plt.scatter(data[:,0],data[:,1],c=label)
plt.scatter(reverse_centroid[:,0],reverse_centroid[:,1],marker="*",s=100,c='r')
plt.show()
# # SVC
# Watch a video about Support Vector Machine and try to understand before proceeding.
# %%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/Y6RRHw9uN9o" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# ### SVC
# <font size=4 color='red'>
# Support Vector Classifier tries to find the best hyperplane to separate the different classes by maximizing the distance between the data points and the hyperplane
# </font>
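A minimal sketch of what "maximizing the margin" means, on a tiny linearly separable toy set (not the clustered data above): for a linear SVC the margin width is 2/||w||, and only the boundary points become support vectors.

```python
import numpy as np
from sklearn import svm

# two well-separated point clouds
X_toy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                  [4.0, 4.0], [4.0, 5.0], [5.0, 4.0]])
y_toy = np.array([0, 0, 0, 1, 1, 1])

clf = svm.SVC(kernel='linear', C=1.0).fit(X_toy, y_toy)

w = clf.coef_[0]                       # normal vector of the separating hyperplane
margin = 2.0 / np.linalg.norm(w)       # width of the margin the SVC maximizes
n_sv = len(clf.support_vectors_)       # only points on the margin become support vectors
```

Points deep inside either cloud are classified by which side of the hyperplane they fall on.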
# #### Let's store data(matrice) and labels into X and Y accordingly
X=data
Y=label
# #### Let's fit our data and labels into svm.SVC function and choose kernel type to be linear at first
# <font size=4>
# There are different types of kernels such as "Polynomial", "RBF" and "Linear".<br>
# However, we will be using "Linear" kernel at first
# </font>
#SVC - Support Vector Classification
svc_model_linear=svm.SVC(kernel='linear').fit(X,Y)
# <font size=4>The model is ready, now let's plot it. <br>
# We will use meshgrid to help visualize the model</font>
# +
import numpy as np
xx,yy=np.meshgrid(np.arange(14.0,81,0.5), np.arange(20000,100500,10))
Z=svc_model_linear.predict(np.c_[xx.ravel(),yy.ravel()])
Z=Z.reshape(xx.shape)
# -
# ### Meshgrid
# <font size=3>
# Meshgrid creates a rectangular grid from the values of the X and Y arrays.
# </font>
#
# ### np.arange
# <font size=3>
# Generates values within an <b>interval</b> based on passed parameters. <br>
# First parameter: Start of interval<br>
# Second parameter: End of interval<br>
# Third parameter: Spacing between values
# </font>
#
# ### Ravel
# <font size=3>
# The .ravel function flattens an array of any dimension into a 1-dimensional array<br><br>
# <img width=300, height=300 src=ravel.png>
# </font>
# <font size=3>
# <b>np.shape</b> returns tuple containing dimension of the passed array<br>
# <b>np.reshape</b> gives a new shape to an array without changing its data
# </font>
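The helpers described above can be demonstrated on tiny arrays:

```python
import numpy as np

xs = np.arange(0, 3, 1)        # start=0, stop=3 (exclusive), step=1 -> [0, 1, 2]
ys = np.arange(10, 30, 10)     # -> [10, 20]
xx, yy = np.meshgrid(xs, ys)   # rectangular grid: shape (len(ys), len(xs))

flat = xx.ravel()                  # flatten to 1-D
restored = flat.reshape(xx.shape)  # reshape back without changing data
```

This is exactly the pattern used above: predict on the raveled grid, then reshape the predictions back to the grid's shape for `plt.contourf`.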
plt.contourf(xx, yy, Z,cmap=plt.cm.Paired,alpha=0.7) #Visualizing the contour
plt.scatter(data[:,0],data[:,1],c=label) #Visualizing the data points
plt.scatter(reverse_centroid[:,0],reverse_centroid[:,1],marker="x",s=80,c='r') #Visualizing the Centroids
plt.show()
# <font size=4>
# As we can see, some of the data points are placed inaccurately. To solve this, we are going to try other parameters of SVC.<br>
# </font>
# For more, visit the following link to read what other parameters do:
# [SVC parameters explained](https://medium.com/all-things-ai/in-depth-parameter-tuning-for-svc-758215394769)
# ## Splitting the dataset into the Training set and Test set
X=data
y=label
# ### X_train and y_train arrays are used for training the model
# ### X_test and y_test arrays are used to check efficiency of the model(and improve it)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# ### C is the penalty parameter of the error term. Let's use C to be equal 1,10 and 100
# ### Let's use four types of kernels and see which one fits the dataset best
from sklearn.svm import SVC
C=[1,10,100]
kernel=['linear','rbf','sigmoid','poly']
for i in C:
for j in kernel:
model = SVC(C=i,kernel=j, gamma='auto')
model.fit(X_train, y_train)
print(i,j,model.score(X_test, y_test))
# <font size=4>
# <b>model.score</b> returns the mean accuracy on the given test data and labels. Therefore, the higher score the better model.
# </font>
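For a classifier, `model.score` is simply the mean accuracy on the given data, which can be computed by hand:

```python
import numpy as np

# illustrative labels and predictions
y_true = np.array([0, 1, 2, 1, 0, 2])
y_pred = np.array([0, 1, 1, 1, 0, 2])

accuracy = np.mean(y_pred == y_true)  # fraction of correctly predicted labels
```

Here 5 of 6 labels match, so the score is 5/6.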
# <font size=4>
# As we can see, the linear kernel fits our data better, and the penalty parameter C does not make a difference
# </font>
# ### Now, let's use the model to predict!
# +
#Creating SVC model with linear kernel and fitting dataset
model_best = SVC(kernel='linear').fit(X_train, y_train)
#Let's store the prediction of our model and later compare it with the y_test
y_pred=model_best.predict(X_test)
# -
y_pred
y_test
# ### The model predicted 5 labels correctly (out of 6)
|
SVM/SVC/SVC/.ipynb_checkpoints/svc and example of one dimension-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/imaravind007/Backorder-predition/blob/main/COTA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="5HCdfYOTgZfd"
# ## Mounting the gdrive
#
# + colab={"base_uri": "https://localhost:8080/"} id="zvw0z0EegT1f" outputId="db4d46bb-d002-4984-c3b1-e4175e22b8a3"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="_-JkU-HNgt_k"
# # Importing Required Libraries
# + id="EQicABCGgxh1"
import pandas as pd
import numpy as np
# + colab={"base_uri": "https://localhost:8080/"} id="z4W67bd7g5hV" outputId="3640ed07-b3f4-47f1-fe15-774918cacec2"
data = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Company/COTA/cancer-datasets_filtered_pancan_clinical.csv')
# + [markdown] id="nE2SKA-Ni2bi"
# ## Exploratory Data Analysis
# + colab={"base_uri": "https://localhost:8080/", "height": 394} id="z5ZyXZW_g8kS" outputId="e58853cb-1568-45e1-a956-b038dbc0479f"
data.describe()
# + colab={"base_uri": "https://localhost:8080/"} id="cODzp42ctdpm" outputId="718c84c6-6fba-4e03-f279-6e3e22078df4"
data.columns.to_list()
# + colab={"base_uri": "https://localhost:8080/"} id="cyTVR7O4zaIi" outputId="a54f1964-40a8-4ba1-ba6f-581b5da889f6"
data.isna().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="FX0YkKR2xv5Z" outputId="4f15987d-f2d9-408a-8c3a-6a6f55ef259e"
data['year_of_initial_pathologic_diagnosis'].isna().value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="k-fi2VBWxyjk" outputId="552e8e86-f5e4-4321-b0a0-7998f5ce5607"
data['vital_status'].isna().value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="hGvk0XG6g_GB" outputId="66385c78-fc14-4890-82d9-7936b4fa7314"
data.columns
# + [markdown] id="DV0eW1iFhmzT"
# # 1. How many columns are there in the dataset?
#
# ## Answer: I have used the shape attribute of the pandas DataFrame to find the number of rows and columns in the dataset.
# + id="uauo4BJKhGJU"
columns = data.shape
# + colab={"base_uri": "https://localhost:8080/"} id="7x_SSbXNhtKt" outputId="d9cab4ef-670f-47d1-eeb1-7c80179be0e7"
print("The number of rows in the dataset are :" ,columns[0])
print("The number of columns in the dataset are :" ,columns[1])
# + [markdown] id="8CQD1AQKRz9h"
# # 2. Breakdown columns by datatypes with counts
# ### Answer: I have used the dtypes attribute of the pandas DataFrame to find the dtype of each column.
# + colab={"base_uri": "https://localhost:8080/"} id="x8qacnn5hRFP" outputId="e0a7082c-8b28-4521-96d0-66f2d9e38dd5"
data.dtypes.value_counts()
# + [markdown] id="pJKbfKxWRtI9"
# # 3. Breakdown counts by gender, race and ethnicity
# #### Answer: value_counts from the pandas DataFrame is used to count the values in each categorical feature.
# + colab={"base_uri": "https://localhost:8080/"} id="XtWfuzzRj8If" outputId="7002e8db-9dc1-4841-f358-c5c8bcf8e2d8"
data['gender'].value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="MwySK5cqkOfN" outputId="6d70dd43-783a-4419-e700-b1434f36739b"
data['race'].value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="vLikIYUymszx" outputId="bf9c1aa1-159d-4ebf-9791-95e3ef346cea"
data['ethnicity'].value_counts()
# + [markdown] id="PHoAwxB7UEB7"
# # 4. Provide a report of missingness across all columns
# ## Answer: The `isna()` function is used to report the number of missing values in each column of the dataset.
# + colab={"base_uri": "https://localhost:8080/"} id="gCWCLz6ASLCG" outputId="42fe65ee-3b11-4fe8-9f3d-4ebdaeaa0a4c"
data.isna().sum().to_list()
# + [markdown] id="3-jU6JgAUhCQ"
# # 5. What's the average age of a patient in this dataset?
#
# ### Answer: First, an `Age` column is derived from the `days_to_birth` column. The `datetime` library is used to obtain today's date. The `mean()` function is then applied to the `Age` column to find the average age of the patients in the dataset.
# + colab={"base_uri": "https://localhost:8080/"} id="ENeR13XDUj_e" outputId="176ef606-8c90-416f-edae-9e8ca5dc438e"
import datetime
data['Age'] = -(data['days_to_birth'] / 365)
a_date = datetime.date.today()
print("The average age of the patient", data['Age'].mean())
# + [markdown] id="0ER57Yu_U327"
# # 6. Provide a breakdown of the patients' birth years
#
# ## Assumptions:
# ## The current year (2022 at the time of writing) is taken as the present year. The age is subtracted from the present year to calculate each patient's birth year.
# + colab={"base_uri": "https://localhost:8080/"} id="-CpRID5VU6HW" outputId="53a5094a-116a-4156-9808-69bbab855910"
data['year'] = a_date.year - data['Age']
print("The patient’s birth year", data['year'].apply(np.ceil))
# + [markdown] id="8SSPmub-zsZW"
# #### 7. Provide a list of the top 20 sorted values of the column - Eastern Cancer Oncology Group.
# #### Make assumptions as needed.
# #### Hint: You may have to normalize the dataset to do this.
# + colab={"base_uri": "https://localhost:8080/"} id="flBDEFmLHzwK" outputId="1c49bddd-588f-481e-e0c8-cf9eb8b2cd02"
data['eastern_cancer_oncology_group'].value_counts().head(20)
# + [markdown] id="y69Znawwz2mW"
# # 8. Approach 1
# ## Answer: First, metastatic cancer patients are identified by filtering on `pathologic_M == 'M1'`; the records are then grouped by `icd_10`, `icd_o_3_histology` and `icd_o_3_site`.
# # OR
# ## Alternatively, metastatic patients are identified by filtering on `clinical_M == 'M1'` and grouped by the same columns.
# + colab={"base_uri": "https://localhost:8080/"} id="njs4bTYRPFC_" outputId="366b3bac-d4c1-4afe-8d24-3d18bc3d5a00"
data[data['pathologic_M']=='M1'].groupby(['icd_10','icd_o_3_histology','icd_o_3_site']).count()['pathologic_M']
# + colab={"base_uri": "https://localhost:8080/"} id="chQLjuUzRZv6" outputId="2b8ede06-f244-40bd-c425-5027f0663d64"
data[data['clinical_M']=='M1'].groupby(['icd_10','icd_o_3_histology','icd_o_3_site']).count()['clinical_M']
# + [markdown] id="z2BOB4YKMWoD"
#
# + [markdown] id="MtBDVc_o0MA2"
# # 9. Provide a survival analysis of all Breast Cancer patients. Breast Cancer patients can be identified using ICD 10 codes. You can Google for ICD codes and find the ones that match Breast Cancer.
#
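# The question asks to restrict the survival analysis to breast cancer patients, while the fit below runs on the whole cohort. A hedged sketch of the ICD-10 filter (codes beginning with C50 denote malignant neoplasms of the breast; the toy frame here is illustrative, not real data):

```python
import pandas as pd

# Hypothetical mini-cohort; on the real data the same filter would be
# applied to the `icd_10` column before building the Kaplan-Meier inputs.
toy = pd.DataFrame({
    'icd_10': ['C50.9', 'C50.2', 'C34.1', 'C50.4'],
    'vital_status': ['Alive', 'Dead', 'Alive', 'Alive'],
})
# keep only breast cancer records (ICD-10 C50.x)
breast = toy[toy['icd_10'].str.startswith('C50', na=False)]
print(len(breast))  # 3
```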
# + id="ZBBU92PgeRAT"
# data[data['vital_status']=='Alive']['vital_status']
data['Y_Kaplan Meier'] = data['vital_status'].apply(lambda x: 0 if x=='Alive' else 1)
data['X_Kaplan Meier'] = data['days_to_death'].apply(lambda x:0 if x=='[Not Applicable]' or x=='[Discrepancy]' else float(x))
data['X_Kaplan Meier'].fillna(value = 0,inplace = True)
data['X_Kaplan Meier'] = (data['X_Kaplan Meier'] / 365 ) * 12
X = data['X_Kaplan Meier']
Y = data['Y_Kaplan Meier']
# + colab={"base_uri": "https://localhost:8080/"} id="3rqwLe65WwnM" outputId="44deddbe-ca12-43c3-f810-aae1632cd0df"
# !pip install lifelines
from lifelines import KaplanMeierFitter
# + colab={"base_uri": "https://localhost:8080/"} id="apRf8wYXV5Fk" outputId="22ea1ca5-c0e7-402b-dce5-9ccfd29e849b"
model = KaplanMeierFitter()
model.fit(X, event_observed = Y)
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="rl3wJIb7p-pU" outputId="29ba1319-5c37-4ab9-e467-825baa4d00b8"
import matplotlib.pyplot as plt
model.plot()
plt.title("Kaplan Meier estimates")
plt.xlabel("Month after heart attack")
plt.ylabel("Survival")
plt.show()
# + [markdown] id="FITxtKYSZtzt"
#
|
COTA.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial: Introduction to Altair
# This tutorial will guide you through the process of creating visualizations in Altair. For details on installing Altair or its underlying philosophy, please see the [Altair Documentation](http://altair-viz.github.io/)
#
# Outline:
#
# - [The data](#The-data)
# - [The `Chart` object](#The-Chart-object)
# - [Data encodings and marks](#Data-encodings-and-marks)
# - [Data transformation: Aggregation](#Data-transformation:-Aggregation)
# - [Customizing your visualization](#Customizing-your-visualization)
# - [Publishing a visualization online](#Publishing-a-visualization-online)
#
# This tutorial is written in the form of a Jupyter Notebook; we suggest downloading the notebook and following along, executing the code yourself as we go. For creating Altair visualizations in the notebook, all that is required is to [install the package and its dependencies](https://altair-viz.github.io/installation.html) and import the Altair namespace:
import altair as alt
# +
# Uncomment/run this line to enable Altair in the notebook (not JupyterLab):
# alt.renderers.enable('notebook')
# -
# ## The data
# Data in Altair is built around the [Pandas Dataframe](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html).
# For the purposes of this tutorial, we'll start by importing Pandas and creating a simple `DataFrame` to visualize, with a categorical variable in column `a` and a numerical variable in column `b`:
import pandas as pd
data = pd.DataFrame({'a': list('CCCDDDEEE'),
'b': [2, 7, 4, 1, 2, 6, 8, 4, 7]})
data
# In Altair, every dataset should be provided as a `Dataframe`, or as a URL referencing an appropriate dataset (see [Defining Data](https://altair-viz.github.io/documentation/data.html)).
# ## The `Chart` object
#
# The fundamental object in Altair is the ``Chart``. It takes the dataframe as a single argument:
chart = alt.Chart(data)
# Fundamentally, a ``Chart`` is an object which knows how to emit a JSON dictionary representing the data and visualization encodings (see below), which can be sent to the notebook and rendered by the Vega-Lite JavaScript library.
#
# Here is what that JSON looks like for the current chart (since the chart is not yet complete, we turn off chart validation):
chart.to_dict(validate=False)
# At this point the specification contains only the data, and no visualization specification.
# ## Chart Marks
#
# Next we can decide what sort of *mark* we would like to use to represent our data.
# For example, we can choose the ``point`` mark to represent each data as a point on the plot:
chart = alt.Chart(data).mark_point()
chart
# The result is a visualization with one point per row in the data, though it is not a particularly interesting: all the points are stacked right on top of each other!
# To see how this affects the specification, we can once again examine the dictionary representation:
chart.to_dict()
# Notice that now in addition to the data, the specification includes information about the mark type.
# ## Data encodings
# The next step is to add *visual encodings* (or *encodings* for short) to the chart. A visual encoding specifies how a given data column should be mapped onto the visual properties of the visualization.
# Some of the more frequently used visual encodings are listed here:
#
# * X: x-axis value
# * Y: y-axis value
# * Color: color of the mark
# * Opacity: transparency/opacity of the mark
# * Shape: shape of the mark
# * Size: size of the mark
# * Row: row within a grid of facet plots
# * Column: column within a grid of facet plots
#
# For a complete list of these encodings, see the [Encodings](https://altair-viz.github.io/documentation/encoding.html) section of the documentation.
#
# Visual encodings can be created with the `encode()` method of the `Chart` object. For example, we can start by mapping the `y` axis of the chart to column `a`:
chart = alt.Chart(data).mark_point().encode(y='a')
chart # save the Chart to a variable and then display
# The result is a one-dimensional visualization representing the values taken on by `a`.
# As above, we can view the JSON data generated for this visualization:
chart.to_dict()
# The result is the same as above with the addition of the `'encoding'` key, which specifies the visualization channel (`y`), the name of the field (`a`), and the type of the variable (`nominal`).
#
# Altair is able to automatically determine the type of the variable using built-in heuristics. Altair and Vega-Lite support four primitive data types:
#
# <table>
# <tr>
# <th>Data Type</th>
# <th>Code</th>
# <th>Description</th>
# </tr>
# <tr>
# <td>quantitative</td>
# <td>Q</td>
# <td>Numerical quantity (real-valued)</td>
# </tr>
# <tr>
# <td>nominal</td>
# <td>N</td>
# <td>Name / Unordered categorical</td>
# </tr>
# <tr>
# <td>ordinal</td>
# <td>O</td>
# <td>Ordered categorical</td>
# </tr>
# <tr>
# <td>temporal</td>
# <td>T</td>
# <td>Date/time</td>
# </tr>
# </table>
#
# You can set the data type of a column explicitly using a one letter code attached to the column name with a colon:
alt.Chart(data).mark_point().encode(y='a:N')
# The visualization can be made more interesting by adding another channel to the encoding: let's encode column `b` as the `x` position:
alt.Chart(data).mark_point().encode(
y='a',
x='b'
)
# With two visual channels encoded, we can see the raw data points in the `DataFrame`. A different mark type can be chosen using a different `mark_*()` method, such as `mark_bar()`:
alt.Chart(data).mark_bar().encode(
alt.Y('a'),
alt.X('b')
)
# Notice, we have used a slightly different syntax for specifying the channels using classes (``alt.X`` and ``alt.Y``) passed as positional arguments. These classes allow additional arguments to be passed to each channel.
#
# Here are some of the more commonly used `mark_*()` methods supported in Altair and Vega-Lite; for more detail see [Markings](https://altair-viz.github.io/documentation/marks.html) in the Altair documentation:
#
# <table>
# <tr>
# <th>Method</th>
# </tr>
# <tr>
# <td><code>mark_area()</code></td>
# </tr>
# <tr>
# <td><code>mark_bar()</code></td>
# </tr>
# <tr>
# <td><code>mark_circle()</code></td>
# </tr>
# <tr>
# <td><code>mark_line()</code></td>
# </tr>
# <tr>
# <td><code>mark_point()</code></td>
# </tr>
# <tr>
# <td><code>mark_rule()</code></td>
# </tr>
# <tr>
# <td><code>mark_square()</code></td>
# </tr>
# <tr>
# <td><code>mark_text()</code></td>
# </tr>
# <tr>
# <td><code>mark_tick()</code></td>
# </tr>
# </table>
#
#
# ## Data transformation: Aggregation
# Altair and Vega-Lite also support a variety of built-in data transformations, such as aggregation. The easiest way to specify such aggregations is through a string-function syntax in the argument to the column name. For example, here we will plot not all the values, but a single point representing the mean of the x-values for a given y-value:
alt.Chart(data).mark_point().encode(
y='a',
x='mean(b)'
)
# Conceptually, this is equivalent to the following groupby operation:
data.groupby('a').mean()
# More typically, aggregated values are displayed using bar charts.
# Making this change is as simple as replacing `mark_point()` with `mark_bar()`:
chart = alt.Chart(data).mark_bar().encode(
y='a',
x='mean(b)'
)
chart # save the Chart to a variable and then display
# As above, Altair's role in this visualization is converting the resulting object into an appropriate JSON dict.
# Here it is, leaving out the data for clarity:
chart.to_dict()
# Notice that Altair has taken the string `'mean(b)'` and converted it to a mapping that includes `field`, `type`, and `aggregate`. The full shorthand syntax for the column names in Altair also includes the explicit type code separated by a colon:
x = alt.X('mean(b):Q')
x.to_dict()
# This shorthand is equivalent to spelling-out these properties by name:
x = alt.X('b', aggregate='average', type='quantitative')
x.to_dict()
# This is one benefit of using the Altair API over writing the Vega-Lite spec from scratch: valid Vega-Lite specifications can be created very succinctly, with less boilerplate code.
# ## Customizing your visualization
# To speed the process of data exploration, Altair (via Vega-Lite) makes some choices about default properties of the visualization.
# Altair also provides an API to customize the look of the visualization. For example, we can use the `X` object we saw above to override the default x-axis title:
alt.Chart(data).mark_bar().encode(
y='a',
x=alt.X('mean(b)', axis=alt.Axis(title='Mean of quantity b'))
)
# The properties of marks can be configured by passing keyword arguments to the `mark_*()` methods; for example, any named HTML color is supported:
alt.Chart(data).mark_bar(color='firebrick').encode(
y='a',
x=alt.X('mean(b)', axis=alt.Axis(title='Mean of quantity b'))
)
# Similarly, we can set properties of the chart such as width and height using the ``properties()`` method:
# +
chart = alt.Chart(data).mark_bar().encode(
y='a',
x=alt.X('average(b)', axis=alt.Axis(title='Average of b'))
).properties(
width=400,
height=300
)
chart
# -
# As above, we can inspect how these configuration options affect the resulting Vega-lite specification:
chart.to_dict()
# To learn more about the various properties of chart objects, you can use Jupyter's help syntax:
# +
# alt.Chart?
# -
# You can also read more in Altair's [Configuration](https://altair-viz.github.io/documentation/config.html) documentation.
# ## Publishing a visualization online
# Because Altair produces Vega-Lite specifications, it is relatively straightforward to export charts and publish them on the web as Vega-Lite plots.
# All that is required is to load the Vega-Lite javascript library, and pass it the JSON plot specification output by Altair.
# For convenience Altair provides a ``savechart()`` method, which will save any chart to HTML:
chart.savechart('chart.html', indent=None) # don't indent the JSON
# !cat chart.html
# All that must be changed is the ``spec`` variable, which contains the output of ``chart.to_json()`` for any chart.
# We can view the output in an iframe within the notebook (note that some online notebook viewers will not display iframes):
# Display IFrame in IPython
from IPython.display import IFrame
IFrame('chart.html', width=400, height=200)
# Alternatively, you can use your web browser to open the file manually to confirm that it works: [chart.html](chart.html).
# ## Learning More
#
# For more information on Altair, please refer to Altair's online documentation: http://altair-viz.github.io/
#
# You can also see some of the example plots listed in the [accompanying notebooks](01-Index.ipynb).
|
notebooks/altair_notebooks/02-Tutorial.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import csv
import networkx as nx
import matplotlib.pyplot as plt
import os
import sys
from scipy.stats import hypergeom
#Building up the INTERSECTION interactome graph
intersect = pd.read_csv("intersection_interactome.tsv", sep = '\t')
G_int = nx.from_pandas_edgelist(intersect,'interactorA','interactorB')
nx.draw(G_int, with_labels=True, width=0.2 , node_size=7, font_size=2, font_color='b')
print(nx.info(G_int))
# +
#NETWORK MEASURES :
print('n.of connected components:', nx.number_connected_components(G_int))
for g in nx.connected_component_subgraphs(G_int):  # removed in networkx >= 2.4; there use (G_int.subgraph(c) for c in nx.connected_components(G_int))
print(nx.info(g))
print('average shortest path length: ', nx.average_shortest_path_length(g))
print('diameter :' ,nx.algorithms.distance_measures.diameter(g))
print('radius :' ,nx.algorithms.distance_measures.radius(g))
nx.algorithms.cluster.average_clustering(G_int)
#nx.algorithms.distance_measures.diameter(G_seed)
# +
#Building up the UNION interactome graph with a demo picture (the final one is taken from the Cytoscape software)
union = pd.read_csv("union_interactome_extended2.tsv", sep = '\t')
G_union = nx.from_pandas_edgelist(union,'interactorA','interactorB')
nx.draw(G_union, with_labels=True, width=0.2 , node_size=7, font_size=2, font_color='b')
#plt.savefig("g_union.pdf")
print(nx.info(G_union))
#nx.write_graphml(G_union,'g_union.xml')
#list(nx.isolates(G_union))
# +
#NETWORK MEASURES :
print('n.of connected components:', nx.number_connected_components(G_union))
for g in nx.connected_component_subgraphs(G_union):
print(nx.info(g))
print('average shortest path length: ', nx.average_shortest_path_length(g))
print('diameter :' ,nx.algorithms.distance_measures.diameter(g))
print('radius :' ,nx.algorithms.distance_measures.radius(g))
nx.algorithms.cluster.average_clustering(G_union)
# -
#Building-up Seed_gene_interactome graph with nx
seed_genes = pd.read_csv("seed_genes_interactome.tsv", sep = '\t')
G_seed = nx.from_pandas_edgelist(seed_genes,'interactorA','interactorB')
nx.draw(G_seed, with_labels=True, width=0.2 , node_size=7, font_size=2, font_color='b')
plt.savefig("g_seed.pdf")
#nx.write_graphml(G_seed,'g_seed.xml')
print(nx.info(G_seed))
print(G_seed)
# +
#NETWORK MEASURES :
print('n.of connected components:', nx.number_connected_components(G_seed))
for g in nx.connected_component_subgraphs(G_seed):
print(nx.info(g))
print('average shortest path length: ', nx.average_shortest_path_length(g))
print('diameter :' ,nx.algorithms.distance_measures.diameter(g))
print('radius :' ,nx.algorithms.distance_measures.radius(g))
nx.algorithms.cluster.average_clustering(G_seed)
# -
#list of the seed_genes
seed_genes_nodes = pd.read_csv("g_seednode.csv",usecols=['name'])
# +
#Computing number of seed_gene nodes in the clusters
#Clustered Graphs are computed with MCL by Cytoscape software and called here as a .csv file
clusters_i = pd.DataFrame(columns = ['Clus1','Clus2','Clus3','Clus4'])
for z in range(1,4):
intersection_clusters = pd.read_csv("Icluster/Icluster" + str(z)+".csv",usecols=['name'])
count = 0
for i in seed_genes_nodes.index:
curr = seed_genes_nodes.loc[i,'name']
for j in intersection_clusters.index:
if curr == intersection_clusters.loc[j,'name']:
count +=1
    clusters_i.loc['nodes','Clus'+str(z)] = len(intersection_clusters)  # total nodes in the cluster (j stops at the last index, len-1)
clusters_i.loc['seed nodes','Clus'+str(z)] = count
clusters_i.loc['nodes', 'Clus4']=13
clusters_i.loc['seed nodes', 'Clus4']=1
# +
#Computing number of seed_gene nodes in the clusters
clusters_u = pd.DataFrame(columns = ['Clus1','Clus2','Clus3','Clus4','Clus5','Clus6','Clus7','Clus8','Clus9','Clus10','Clus11','Clus12','Clus13','Clus14','Clus15','Clus16','Clus17','Clus18','Clus19'])
#clusters
for z in range(1,19):
union_clusters = pd.read_csv("Ucluster/Ucluster" + str(z)+".csv",usecols=['name'])
count = 0
for i in seed_genes_nodes.index:
curr = seed_genes_nodes.loc[i,'name']
for j in union_clusters.index:
if curr == union_clusters.loc[j,'name']:
count +=1
    clusters_u.loc['nodes','Clus'+str(z)] = len(union_clusters)  # total nodes in the cluster (j stops at the last index, len-1)
clusters_u.loc['seed nodes','Clus'+str(z)] = count
clusters_u.loc['nodes', 'Clus19']=11
clusters_u.loc['seed nodes', 'Clus19']=0
# -
#Hypergeometric Test for I-LCC
for i in range(1,5):
    [M, n, N, x] = [2521, 78, clusters_i.loc['nodes','Clus'+str(i)], clusters_i.loc['seed nodes','Clus'+str(i)]]
    # sf(x-1) gives P(X >= x); sf(x) would be P(X > x) and exclude the observed count
    pval = hypergeom.sf(x - 1, M, n, N)
    clusters_i.loc['pvalue','Clus'+str(i)] = pval
clusters_i
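#A self-contained sanity check of the hypergeometric tail probability used in these tests, written with only the standard library. The numbers are illustrative, not one of the clusters above: M genes in total, n seed genes, a cluster of N nodes containing x seed genes.

```python
from math import comb

def hypergeom_tail(M, n, N, x):
    # P(X >= x) for a hypergeometric draw of N items from a population
    # of M items containing n "successes"
    return sum(comb(n, k) * comb(M - n, N - k)
               for k in range(x, min(n, N) + 1)) / comb(M, N)

# Toy enrichment: 10 genes, 4 seeds, cluster of 5 nodes with 3 seeds.
pval = hypergeom_tail(10, 4, 5, 3)
print(pval)  # 66/252 ~ 0.2619
```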
#Hypergeometric Test for U-LCC
for i in range(1,20):
    [M, n, N, x] = [5612, 78, clusters_u.loc['nodes','Clus'+str(i)], clusters_u.loc['seed nodes','Clus'+str(i)]]
    # sf(x-1) gives P(X >= x); sf(x) would be P(X > x) and exclude the observed count
    pval = hypergeom.sf(x - 1, M, n, N)
    clusters_u.loc['pvalue','Clus'+str(i)] = pval
clusters_u
#Checking Genes for union-clustered graph
union_clusters = pd.read_csv("Ucluster/Ucluster6.csv",usecols=['name'])
#count = 0
for i in seed_genes_nodes.index:
curr = seed_genes_nodes.loc[i,'name']
for j in union_clusters.index:
if curr == union_clusters.loc[j,'name']:
print (curr)
# +
#nx.is_strongly_connected(G_unclustered)
# G_unclustered is assumed to be built earlier (e.g. loaded from a Cytoscape export)
G_unclustered = nx.DiGraph.to_undirected(G_unclustered)
G_fin_un = nx.Graph()
#print( 'n.of connected components:' ,nx.number_weakly_connected_components(G_unclustered))
#print('n.of connected components:', nx.number_strongly_connected_components(G_unclustered))
print('n.of connected components:', nx.number_connected_components(G_unclustered))
for g in nx.connected_component_subgraphs(G_unclustered):
if nx.number_of_nodes(g) > 10:
        G_fin_un = nx.compose(G_fin_un, g)  # compose into the graph initialised above (was G_fin, which is undefined)
#print(nx.average_shortest_path_length(g))
# print(nx.info(g))
#for g in nx.strongly_connected_components(G_unclustered):
# -
G_fin_int = nx.Graph()
G_intclustered = nx.DiGraph.to_undirected(G_intclustered)
#print( 'n.of connected components:' ,nx.number_weakly_connected_components(G_unclustered))
#print('n.of connected components:', nx.number_strongly_connected_components(G_unclustered))
print('n.of connected components:', nx.number_connected_components(G_intclustered))
for g in nx.connected_component_subgraphs(G_intclustered):
if nx.number_of_nodes(g) > 10:
G_fin_int = nx.compose(G_fin_int,g)
print(nx.info(G_fin_int))
print('n.of connected components:', nx.number_connected_components(G_fin_int))
list_int = nx.to_edgelist(G_fin_int)
list_int
list(G_fin_int)
# +
print('n.of connected components:', nx.number_connected_components(G_int))
for g in nx.connected_component_subgraphs(G_int):
print(nx.average_shortest_path_length(g))
print(nx.info(g))
nx.algorithms.cluster.average_clustering(G_int)
#nx.algorithms.distance_measures.diameter(G_int)
# -
#Building up input files for the DIAMOnD tool
# mode "w" rewrites the files; the original append mode ("a") would duplicate lines on repeated runs
with open("seed_file.txt", "w") as f:
    for i in seed_genes_nodes.index:
        f.write(seed_genes_nodes.loc[i, 'name'] + '\n')
network_file = pd.read_csv("new_biogrid.tsv", sep='\t', usecols=['Official Symbol Interactor A', 'Official Symbol Interactor B'])
with open("network_file.txt", "w") as g:
    for i in network_file.index:
        g.write(network_file.loc[i, 'Official Symbol Interactor A'] + ',' + network_file.loc[i, 'Official Symbol Interactor B'] + '\n')
|
Analysis2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from __future__ import unicode_literals
from sklearn.feature_extraction import DictVectorizer
import pymysql
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
import random
from langdetect import detect
import csv
import MySQLdb
# +
# load data form database
conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd="<PASSWORD>", db='vroniplag')
cur = conn.cursor(MySQLdb.cursors.DictCursor)
cur.execute("SELECT * FROM fragment ORDER BY fragment_identifier")
originals = []
plagiats = []
groups = {}
i=0
for r in cur:
originals.append(r["source_text"])
plagiats.append(r["plagiat_text"])
group = r["fragment_identifier"]
if not group in groups:
groups[group] = []
groups[group].append(i)
i+=1
cur.close()
conn.close()
# -
print(plagiats[0])
# +
# language detection
conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd="<PASSWORD>", db='vroniplag')
cur = conn.cursor(MySQLdb.cursors.DictCursor)
cur.execute("SELECT * FROM fragment ORDER BY fragment_identifier")
for r in cur:
try:
url = r["url"]
original = r["source_text"]
plagiat = r["plagiat_text"]
lang_original = detect(original)
lang_plagiat = detect(plagiat)
query = "UPDATE plagiat SET lang_source='" + lang_original + "', lang_plagiat='" + lang_plagiat;
query += "' WHERE url='" + url + "';";
cur2 = conn.cursor()
cur2.execute(query)
cur2.close()
except Exception:
print("Exception: " + url)
conn.commit();
cur.close()
conn.close()
# +
# read stopwords (one word per line)
stopwords = set()
with open("stopwords_de.txt") as f:
    for line in f:
        # add() inserts the whole word; update() on a string would
        # insert its individual characters instead
        stopwords.add(line.strip())
# do tf idf
all_docs = originals + plagiats
v = TfidfVectorizer(analyzer="word", stop_words = stopwords, ngram_range=(1,1), max_df=1000, min_df=3)
tfidf = v.fit_transform(all_docs)
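# The retrieval step used in the evaluations below boils down to cosine similarity between term-weight vectors. A minimal pure-Python sketch of that computation on dict-shaped sparse vectors (the weights here are toy values, not real tf-idf scores):

```python
import math

def cosine(u, v):
    # dot product over the shared terms, normalised by both vector norms
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

original = {'plagiat': 0.8, 'text': 0.3}
candidate = {'plagiat': 0.6, 'source': 0.5}
print(cosine(original, candidate))
```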
# +
# evaluation - target + 19 random docs from the same author
test_length = 50
correct = 0
for key in groups.keys():
eval_size = len(groups[key])
test_length = min(20, len(groups[key]))
correctInGroup = 0
for i in range(0, eval_size ):
# generate a test set of the original and 19 random text documents
index = groups[key][i]
originalDoc = tfidf.getrow(index)
test_docs = []
test_docs.append(tfidf.getrow(len(originals)+index))
usedIndizes = {}
usedIndizes[index] = True
while len(test_docs) < test_length:
j = random.randrange(0, eval_size)
if not j in usedIndizes:
index = groups[key][j]
usedIndizes[index] = True
test_docs.append(tfidf.getrow(index))
# compare test set
maxSim = 0
mostSimDoc = 0
for j in range(0, len(test_docs)):
vecS = test_docs[j]
sim = cosine_similarity(originalDoc, vecS)
sim = sim[0][0]
if(maxSim < sim):
maxSim=sim
mostSimDoc = j
if mostSimDoc == 0:
correctInGroup += 1
correct += correctInGroup
accuracy = correctInGroup / float(eval_size)
print(key + "\t" + str(eval_size) + "\t" + str(accuracy))
accuracy = correct/float(len(originals))
print("Total: \t" + str(accuracy))
# +
# evaluation - target + 19 random docs
test_length = 20
correct = 0
eval_size = len(originals)
for i in range(0, eval_size ):
if i % 500 == 0:
print("Processed " + str(i) + " docs")
# generate a test set of the original and 19 random text documents
originalDoc = tfidf.getrow(i)
test_docs = []
test_docs.append(tfidf.getrow(len(originals)+i))
usedIndizes = {}
while len(test_docs) < test_length:
j = random.randrange(len(originals), 2*len(originals))
if not j in usedIndizes:
usedIndizes[j] = True
test_docs.append(tfidf.getrow(j))
# compare test set
maxSim = 0
mostSimDoc = 0
for j in range(0, len(test_docs)):
vecS = test_docs[j]
sim = cosine_similarity(originalDoc, vecS)
sim = sim[0][0]
if(maxSim < sim):
maxSim=sim
mostSimDoc = j
if mostSimDoc == 0:
correct += 1
print(correct/float(eval_size))
# +
# convert to csv- old lrec code
conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd="<PASSWORD>", db='paraphrases')
cur = conn.cursor(MySQLdb.cursors.DictCursor)
cur.execute("SELECT * FROM plagiat ORDER BY author")
keys = {
"url",
"fragment_identifier",
"author",
"source_text",
"full_html",
"category",
"lang_source",
"lang_plagiat",
"plagiat_text",
"peer_reviewed"
}
counter = 0
with open('vroniplag-corpus.csv', 'w') as csvfile:
spamwriter = csv.writer(csvfile, delimiter='\t',
quotechar='"', quoting=csv.QUOTE_MINIMAL)
spamwriter.writerow(keys)
for r in cur:
if(r["peer_reviewed"] == b'\x00'):
r["peer_reviewed"] = 1;
else:
r["peer_reviewed"] = 0;
row = []
for key in keys:
row.append(r[key])
spamwriter.writerow(row)
counter += 1
print(counter)
cur.close()
conn.close()
|
src/main/python/Paraphrase Evaluation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Imports
from hfm import HFM3D
import scipy.io
import numpy as np
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
from utilities import relative_error
# -
# Define Net
batch_size = 10000
layers = [4] + 10*[5*50] + [5]
# +
# Load Data
data = scipy.io.loadmat('data/aofnorm2.mat')
t_star = data['t_star'] # T x 1
x_star = data['x_star'] # N x 1
y_star = data['y_star'] # N x 1
z_star = data['z_star'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_star'] # N x T
V_star = data['V_star'] # N x T
W_star = data['W_star'] # N x T
P_star = data['P_star'] # N x T
C_star = data['C_star'] # N x T
# -
print(N)
print(T)
# +
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
Z_star = np.tile(z_star, (1,T)) # N x T
######################################################################
######################## Noiseless Data ##############################
######################################################################
T_data = T
N_data = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
z_data = Z_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
z_eqns = Z_star[:, idx_t][idx_x,:].flatten()[:,None]
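# The tiling pattern above turns T x 1 and N x 1 column vectors into matching N x T grids. A shape sanity check on a tiny grid (N = 2 spatial points, T = 3 snapshots instead of the real sizes):

```python
import numpy as np

t_small = np.arange(3).reshape(3, 1)   # T x 1 time stamps
x_small = np.arange(2).reshape(2, 1)   # N x 1 coordinates
T_grid = np.tile(t_small, (1, 2)).T    # tile across N columns, then transpose -> N x T
X_grid = np.tile(x_small, (1, 3))      # tile across T columns -> N x T
print(T_grid.shape, X_grid.shape)  # (2, 3) (2, 3)
```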
# +
# Training
model = HFM3D(t_data, x_data, y_data, z_data, c_data,
t_eqns, x_eqns, y_eqns, z_eqns,
layers, batch_size,
Pec = 10000, Rey = 3800)
model.train(total_time = 40, learning_rate=1e-3)
# +
# Test Data
snap = np.array([10])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
z_test = Z_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
w_test = W_star[:,snap]
p_test = P_star[:,snap]
# Prediction
c_pred, u_pred, v_pred, w_pred, p_pred = model.predict(t_test, x_test, y_test, z_test)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_w = relative_error(w_pred, w_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error w: %e' % (error_w))
print('Error p: %e' % (error_p))
# -
|
ucsd_tf2compat/aofnorm2_run.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tensorflow]
# language: python
# name: conda-env-tensorflow-py
# ---
# # Weight Initialization
# In this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker.
#
# ## Testing Weights
# ### Dataset
# To see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.
#
# We'll be using the [MNIST dataset](https://en.wikipedia.org/wiki/MNIST_database) to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.
# +
# %matplotlib inline
import tensorflow as tf
import helper
from tensorflow.examples.tutorials.mnist import input_data
print('Getting MNIST Dataset...')
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print('Data Extracted.')
# -
# ### Neural Network
# <img style="float: left" src="images/neural_network.png"/>
# For the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.
# Save the shapes of weights for each layer
layer_1_weight_shape = (mnist.train.images.shape[1], 256)
layer_2_weight_shape = (256, 128)
layer_3_weight_shape = (128, mnist.train.labels.shape[1])
# ## Initialize Weights
# Let's start looking at some initial weights.
# ### All Zeros or Ones
# If you follow the principle of [Occam's razor](https://en.wikipedia.org/wiki/Occam's_razor), you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.
#
# With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.
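To see concretely why identical weights collapse the neurons, here is a tiny NumPy illustration (not part of the lesson's helper code):

```python
import numpy as np

# With identical weights, every hidden neuron computes the exact same value,
# so the gradient updates are identical too and the neurons never differentiate.
x = np.array([0.2, 0.7, 0.1])   # one example with 3 input features
W = np.ones((3, 4))             # all-ones weights feeding 4 hidden neurons
hidden = x @ W                  # every entry is 0.2 + 0.7 + 0.1
print(hidden)                   # [1. 1. 1. 1.]
```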
#
# Let's compare the loss with all ones and all zero weights using `helper.compare_init_weights`. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.
#
# Run the cell below to see the difference between weights of all zeros against all ones.
# +
all_zero_weights = [
tf.Variable(tf.zeros(layer_1_weight_shape)),
tf.Variable(tf.zeros(layer_2_weight_shape)),
tf.Variable(tf.zeros(layer_3_weight_shape))
]
all_one_weights = [
tf.Variable(tf.ones(layer_1_weight_shape)),
tf.Variable(tf.ones(layer_2_weight_shape)),
tf.Variable(tf.ones(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'All Zeros vs All Ones',
[
(all_zero_weights, 'All Zeros'),
(all_one_weights, 'All Ones')])
# -
# As you can see, the accuracy is close to guessing for both zeros and ones, around 10%.
#
# The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.
#
# A good solution for getting these random weights is to sample from a uniform distribution.
# ### Uniform Distribution
# A [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has an equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number twice is low. We'll use TensorFlow's `tf.random_uniform` function to pick random numbers from a uniform distribution.
#
# >#### [`tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)`](https://www.tensorflow.org/api_docs/python/tf/random_uniform)
# >Outputs random values from a uniform distribution.
#
# >The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.
#
# >- **shape:** A 1-D integer Tensor or Python array. The shape of the output tensor.
# - **minval:** A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.
# - **maxval:** A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.
# - **dtype:** The type of the output: float32, float64, int32, or int64.
# - **seed:** A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
# - **name:** A name for the operation (optional).
#
# We can visualize the uniform distribution by using a histogram. Let's map the values from `tf.random_uniform([1000], -3, 3)` to a histogram using the `helper.hist_dist` function. This will be `1000` random float values from `-3` to `3`, excluding the value `3`.
helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))
# The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.
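As a quick sanity check of the bucket math above, here is a NumPy stand-in for `tf.random_uniform` (the exact count in any one bucket varies run to run, but the average is fixed):

```python
import numpy as np

# 1000 uniform samples over [-3, 3) binned into 500 buckets: since every
# sample lands in exactly one bucket, the counts must average 1000/500 = 2.
rng = np.random.default_rng(0)
samples = rng.uniform(-3, 3, size=1000)
counts, _ = np.histogram(samples, bins=500, range=(-3, 3))
print(counts.sum(), counts.mean())  # 1000 2.0
```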
#
# Now that you understand the `tf.random_uniform` function, let's apply it to some initial weights.
#
# ### Baseline
#
#
# Let's see how well the neural network trains using the default values for `tf.random_uniform`, where `minval=0.0` and `maxval=1.0`.
# +
# Default for tf.random_uniform is minval=0 and maxval=1
basline_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape)),
tf.Variable(tf.random_uniform(layer_2_weight_shape)),
tf.Variable(tf.random_uniform(layer_3_weight_shape))
]
helper.compare_init_weights(
mnist,
'Baseline',
[(basline_weights, 'tf.random_uniform [0, 1)')])
# -
# The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.
#
# ### General rule for setting weights
# The general rule for setting the weights in a neural network is to be close to zero without being too small. A good practice is to start your weights in the range of $[-y, y]$ where
# $y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron).
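Applied to this notebook's three layers (784 MNIST pixels feeding 256 hidden units, then 128), the rule gives these ranges (a small sketch, independent of the TensorFlow code):

```python
import numpy as np

# n is the number of inputs feeding each neuron in the layer
for n in (784, 256, 128):
    y = 1 / np.sqrt(n)
    print('n = {:3d}  ->  weights in [-{:.4f}, {:.4f})'.format(n, y, y))
```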
#
# To see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).
# +
uniform_neg1to1_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))
]
helper.compare_init_weights(
mnist,
'[0, 1) vs [-1, 1)',
[
(basline_weights, 'tf.random_uniform [0, 1)'),
(uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])
# -
# We're going in the right direction; the accuracy and loss are better with [-1, 1). We still want smaller weights. How far can we go before they're too small?
# ### Too small
# Let's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set `plot_n_batches=None` to show all the batches in the plot.
# +
uniform_neg01to01_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))
]
uniform_neg001to001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))
]
uniform_neg0001to0001_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))
]
helper.compare_init_weights(
mnist,
'[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',
[
(uniform_neg1to1_weights, '[-1, 1)'),
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(uniform_neg001to001_weights, '[-0.01, 0.01)'),
(uniform_neg0001to0001_weights, '[-0.001, 0.001)')],
plot_n_batches=None)
# -
# Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\sqrt{n}$.
# +
import numpy as np
general_rule_weights = [
tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),
tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))
]
helper.compare_init_weights(
mnist,
'[-0.1, 0.1) vs General Rule',
[
(uniform_neg01to01_weights, '[-0.1, 0.1)'),
(general_rule_weights, 'General Rule')],
plot_n_batches=None)
# -
# The range we found and $y=1/\sqrt{n}$ are really close.
#
# Since the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution.
# ### Normal Distribution
# Unlike the uniform distribution, the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) has a higher likelihood of picking numbers close to its mean. To visualize it, let's plot values from TensorFlow's `tf.random_normal` function to a histogram.
#
# >[tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)](https://www.tensorflow.org/api_docs/python/tf/random_normal)
#
# >Outputs random values from a normal distribution.
#
# >- **shape:** A 1-D integer Tensor or Python array. The shape of the output tensor.
# - **mean:** A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.
# - **stddev:** A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.
# - **dtype:** The type of the output.
# - **seed:** A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
# - **name:** A name for the operation (optional).
helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))
# Let's compare the normal distribution against the previous uniform distribution.
# +
normal_01_weights = [
tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Uniform [-0.1, 0.1) vs Normal stddev 0.1',
[
(uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),
(normal_01_weights, 'Normal stddev 0.1')])
# -
# The normal distribution gave a slight increase in accuracy and a lower loss. Let's move even closer to 0 by dropping any picked numbers that are more than `x` standard deviations away from the mean. This distribution is called the [Truncated Normal Distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution%29).
# ### Truncated Normal Distribution
# >[tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)](https://www.tensorflow.org/api_docs/python/tf/truncated_normal)
#
# >Outputs random values from a truncated normal distribution.
#
# >The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
#
# >- **shape:** A 1-D integer Tensor or Python array. The shape of the output tensor.
# - **mean:** A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.
# - **stddev:** A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.
# - **dtype:** The type of the output.
# - **seed:** A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.
# - **name:** A name for the operation (optional).
helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))
# Again, let's compare these results against the normal distribution from the previous section.
# +
trunc_normal_01_weights = [
tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),
tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))
]
helper.compare_init_weights(
mnist,
'Normal vs Truncated Normal',
[
(normal_01_weights, 'Normal'),
(trunc_normal_01_weights, 'Truncated Normal')])
# -
# There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points from the normal distribution, increasing the likelihood that some of its picks are more than 2 standard deviations from the mean.
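The 2-standard-deviation claim is easy to check numerically (a NumPy sketch, not part of the lesson's helper code): for a standard normal, only about 4.6% of samples fall outside that band, so a small layer rarely draws them.

```python
import numpy as np

# Estimate the fraction of standard-normal samples more than 2 standard
# deviations from the mean; the exact value is about 0.0455.
rng = np.random.default_rng(0)
samples = rng.normal(size=1_000_000)
frac_outside = np.mean(np.abs(samples) > 2)
print('fraction beyond 2 std: {:.4f}'.format(frac_outside))
```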
#
# We've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.
helper.compare_init_weights(
mnist,
'Baseline vs Truncated Normal',
[
(basline_weights, 'Baseline'),
(trunc_normal_01_weights, 'Truncated Normal')])
# That's a huge difference. You can barely see the truncated normal line. However, this is not the end of your learning path. We've provided more resources for initializing weights in the classroom!
|
weight-initialization/weight_initialization.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Recurrent neural network and dynamical system analysis
#
# In this tutorial, we will use supervised learning to train a recurrent neural network on a parametric working memory task, and analyze the trained network using dynamical system analysis.
#
# [](https://colab.research.google.com/github/gyyang/nn-brain/blob/master/ParametricWorkingMemory.ipynb)
# ## Install dependencies
# +
# # If on Google Colab, uncomment to install neurogym to use cognitive tasks
# # ! git clone https://github.com/neurogym/neurogym.git
# # %cd neurogym/
# # ! pip install -e .
# -
# ## Defining a cognitive task
import numpy as np
import matplotlib.pyplot as plt
import neurogym as ngym
# +
# Environment
task = 'DelayComparison-v0'
timing = {'delay': ('choice', [200, 400, 800, 1600, 3200]),
'response': ('constant', 500)
}
kwargs = {'dt': 100, 'timing': timing}
seq_len = 100
# Make supervised dataset
dataset = ngym.Dataset(task, env_kwargs=kwargs, batch_size=16,
seq_len=seq_len)
# A sample environment from dataset
env = dataset.env
# Visualize the environment with 2 sample trials
_ = ngym.utils.plot_env(env, num_trials=2, def_act=0)
# Network input and output size
input_size = env.observation_space.shape[0]
output_size = env.action_space.n
# -
inputs, target = dataset()
mask = target > 0
print(inputs.shape) # (N_time, batch_size, N_neuron)
print(target.shape) # (N_time, batch_size)
plt.plot(target[:, 0])
# ## Define a vanilla continuous-time recurrent network
# Here we will define a continuous-time neural network but discretize it in time using the Euler method.
# \begin{align}
# \tau \frac{d\mathbf{r}}{dt} = -\mathbf{r}(t) + f(W_r \mathbf{r}(t) + W_x \mathbf{x}(t) + \mathbf{b}_r).
# \end{align}
#
# This continuous-time system can then be discretized using the Euler method with a time step of $\Delta t$,
# \begin{align}
# \mathbf{r}(t+\Delta t) = \mathbf{r}(t) + \Delta \mathbf{r} = \mathbf{r}(t) + \frac{\Delta t}{\tau}[-\mathbf{r}(t) + f(W_r \mathbf{r}(t) + W_x \mathbf{x}(t) + \mathbf{b}_r)].
# \end{align}
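Before the PyTorch implementation below, here is a minimal NumPy sketch of that Euler update (hypothetical small sizes and random weights, just to show the discretized step; $f$ is ReLU as in the network below):

```python
import numpy as np

def euler_step(r, x, W_r, W_x, b_r, dt=10.0, tau=100.0):
    # r(t + dt) = r(t) + (dt/tau) * (-r(t) + f(W_r r + W_x x + b_r)), f = ReLU
    alpha = dt / tau
    return r + alpha * (-r + np.maximum(0.0, W_r @ r + W_x @ x + b_r))

rng = np.random.default_rng(0)
n_hidden, n_input = 4, 3
W_r = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_x = rng.normal(scale=0.1, size=(n_hidden, n_input))
b_r = np.zeros(n_hidden)

r = np.zeros(n_hidden)          # start from zero hidden activity
for _ in range(50):             # drive the network with a constant input
    r = euler_step(r, np.ones(n_input), W_r, W_x, b_r)
print(r.shape)                  # (4,)
```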
# +
# Define networks
import torch
import torch.nn as nn
from torch.nn import init
from torch.nn import functional as F
import math
class CTRNN(nn.Module):
"""Continuous-time RNN.
Args:
input_size: Number of input neurons
hidden_size: Number of hidden neurons
Inputs:
input: (seq_len, batch, input_size), network input
hidden: (batch, hidden_size), initial hidden activity
"""
def __init__(self, input_size, hidden_size, dt=None, **kwargs):
super().__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.tau = 100
if dt is None:
alpha = 1
else:
alpha = dt / self.tau
self.alpha = alpha
self.oneminusalpha = 1 - alpha
self.input2h = nn.Linear(input_size, hidden_size)
self.h2h = nn.Linear(hidden_size, hidden_size)
def init_hidden(self, input_shape):
batch_size = input_shape[1]
return torch.zeros(batch_size, self.hidden_size)
def recurrence(self, input, hidden):
"""Recurrence helper."""
pre_activation = self.input2h(input) + self.h2h(hidden)
h_new = torch.relu(hidden * self.oneminusalpha +
pre_activation * self.alpha)
return h_new
def forward(self, input, hidden=None):
"""Propogate input through the network."""
if hidden is None:
hidden = self.init_hidden(input.shape).to(input.device)
output = []
steps = range(input.size(0))
for i in steps:
hidden = self.recurrence(input[i], hidden)
output.append(hidden)
output = torch.stack(output, dim=0)
return output, hidden
class RNNNet(nn.Module):
"""Recurrent network model.
Args:
input_size: int, input size
hidden_size: int, hidden size
output_size: int, output size
rnn: str, type of RNN, lstm, rnn, ctrnn, or eirnn
"""
def __init__(self, input_size, hidden_size, output_size, **kwargs):
super().__init__()
# Continuous time RNN
self.rnn = CTRNN(input_size, hidden_size, **kwargs)
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, x):
rnn_activity, _ = self.rnn(x)
out = self.fc(rnn_activity)
return out, rnn_activity
# -
# ## Train the recurrent network on the decision-making task
# +
import torch.optim as optim
# Instantiate the network and print information
hidden_size = 64
net = RNNNet(input_size=input_size, hidden_size=hidden_size,
output_size=output_size, dt=env.dt)
print(net)
# Use Adam optimizer
optimizer = optim.Adam(net.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()
running_loss = 0
running_acc = 0
for i in range(2000):
inputs, labels_np = dataset()
labels_np = labels_np.flatten()
inputs = torch.from_numpy(inputs).type(torch.float)
labels = torch.from_numpy(labels_np).type(torch.long)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output, _ = net(inputs)
output = output.view(-1, output_size)
loss = criterion(output, labels)
loss.backward()
optimizer.step() # Does the update
running_loss += loss.item()
# Compute performance
output_np = np.argmax(output.detach().numpy(), axis=-1)
ind = labels_np > 0 # Only analyze time points when target is not fixation
running_acc += np.mean(labels_np[ind] == output_np[ind])
if i % 100 == 99:
running_loss /= 100
running_acc /= 100
print('Step {}, Loss {:0.4f}, Acc {:0.3f}'.format(i+1, running_loss, running_acc))
running_loss = 0
running_acc = 0
# -
# ## Visualize neural activity in sample trials
#
# We will run the network on 100 sample trials, then visualize the neural activity trajectories in PCA space.
# +
import numpy as np
import gym
# Set delay to 3000ms for analysis
kwargs = {'timing': {'delay': ('constant', 3000)}}
env = gym.make(task, **kwargs)
env.reset(no_step=True)
env.timing
perf = 0
num_trial = 100
activity_dict = {}
trial_infos = {}
for i in range(num_trial):
env.new_trial()
ob, gt = env.ob, env.gt
inputs = torch.from_numpy(ob[:, np.newaxis, :]).type(torch.float)
action_pred, rnn_activity = net(inputs)
rnn_activity = rnn_activity[:, 0, :].detach().numpy()
activity_dict[i] = rnn_activity[env.start_ind['delay']:env.end_ind['delay']]
trial_infos[i] = env.trial.copy()
# Concatenate activity for PCA
activity = np.concatenate(list(activity_dict[i] for i in range(num_trial)), axis=0)
print('Shape of the neural activity: (Time points, Neurons): ', activity.shape)
# Print trial informations
for i in range(5):
print('Trial ', i, trial_infos[i])
# +
# Compute PCA and visualize
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(activity)
# print('Shape of the projected activity: (Time points, PCs): ', activity_pc.shape)
# -
# Transform individual trials and visualize them in PC space, colored by ground truth. We see that the neural activity is organized by stimulus ground truth along PC1.
# +
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, sharex=True, figsize=(6, 3))
for i in range(num_trial):
trial = trial_infos[i]
activity_pc = pca.transform(activity_dict[i])
color = 'red' if trial['ground_truth'] == 1 else 'blue'
_ = ax1.plot(activity_pc[:, 0], activity_pc[:, 1], 'o-', color=color)
if i < 1:
_ = ax2.plot(activity_pc[:, 0], activity_pc[:, 1], 'o-', color=color)
ax1.set_xlabel('PC 1')
ax1.set_ylabel('PC 2')
# -
# ## Dynamical system analysis
#
# ### Search for approximate fixed points
# Here we search for approximate fixed points and visualize them in the same PC space. In a generic dynamical system,
# \begin{align}
# \frac{d\mathbf{x}}{dt} = F(\mathbf{x}),
# \end{align}
# We can search for fixed points by doing the optimization
# \begin{align}
# \mathrm{argmin}_{\mathbf{x}} |F(\mathbf{x})|^2.
# \end{align}
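A one-dimensional toy version of this optimization (illustrative only, separate from the network code): the fixed point of the map $f(x) = \cos(x)$ can be found by gradient descent on $|f(x) - x|^2$, and is approximately 0.739085.

```python
import numpy as np

# Minimize g(x) = (cos(x) - x)^2 by plain gradient descent;
# g'(x) = 2 (cos(x) - x) (-sin(x) - 1)
x = 0.5
for _ in range(2000):
    grad = 2.0 * (np.cos(x) - x) * (-np.sin(x) - 1.0)
    x -= 0.1 * grad
print('fixed point: {:.6f}'.format(x))  # fixed point: 0.739085
```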
activity.shape
# +
# Freeze the parameters of the recurrent network
for param in net.parameters():
param.requires_grad = False
batch_size = 64
# Inputs should be the 0-coherence mean input during stimulus period
# This will be task-specific
input = np.tile([1, 0], (batch_size, 1))
input = torch.tensor(input, dtype=torch.float32)
# Here hidden activity is the variable to be optimized
# Initialized randomly for search in parallel (activity all positive)
# hidden_init = np.random.rand(batch_size, hidden_size)*3
hidden_init = activity[np.random.randint(activity.shape[0], size=(batch_size,))]
hidden_init = np.random.uniform(0.5, 1.5, size=hidden_init.shape) * hidden_init
hidden = torch.tensor(hidden_init, requires_grad=True, dtype=torch.float32)
# Use Adam optimizer
optimizer = optim.Adam([hidden], lr=0.01)
criterion = nn.MSELoss()
running_loss = 0
for i in range(10000):
optimizer.zero_grad() # zero the gradient buffers
# Take the one-step recurrent function from the trained network
new_h = net.rnn.recurrence(input, hidden)
loss = criterion(new_h, hidden)
loss.backward()
optimizer.step() # Does the update
running_loss += loss.item()
if i % 1000 == 999:
running_loss /= 1000
print('Step {}, Loss {:0.4f}'.format(i+1, running_loss))
running_loss = 0
# -
# ### Visualize the found approximate fixed points.
#
# We see that the optimization found an approximate line attractor, corresponding to our PC1, along which evidence is integrated during the stimulus period.
# +
fixedpoints = hidden.detach().numpy()
print(fixedpoints.shape)
# Plot in the same space as activity
plt.figure()
for i in range(5):
activity_pc = pca.transform(activity_dict[i])
trial = trial_infos[i]
color = 'red' if trial['ground_truth'] == 0 else 'blue'
plt.plot(activity_pc[:, 0], activity_pc[:, 1], 'o-',
color=color, alpha=0.1)
# Fixed points are shown in cross
fixedpoints_pc = pca.transform(fixedpoints)
plt.plot(fixedpoints_pc[:, 0], fixedpoints_pc[:, 1], 'x')
plt.xlabel('PC 1')
plt.ylabel('PC 2')
# -
# ### Computing the Jacobian and finding the line attractor
#
# First we will compute the Jacobian.
# +
# index of fixed point to focus on
# choose one close to center by sorting PC1
i_fp = np.argsort(fixedpoints[:, 0])[int(fixedpoints.shape[0]/2)]
fp = torch.from_numpy(fixedpoints[i_fp])
fp.requires_grad = True
# Inputs should be the 0-coherence mean input during stimulus period
# This will be task-specific
input = torch.tensor([1, 0], dtype=torch.float32)
deltah = net.rnn.recurrence(input, fp) - fp
jacT = torch.zeros(hidden_size, hidden_size)
for i in range(hidden_size):
output = torch.zeros(hidden_size)
output[i] = 1.
jacT[:,i] = torch.autograd.grad(deltah, fp, grad_outputs=output, retain_graph=True)[0]
jac = jacT.detach().numpy().T
# -
# Here we plot the direction of the eigenvector corresponding to the highest eigenvalue
# +
eigval, eigvec = np.linalg.eig(jac)
vec = np.real(eigvec[:, np.argmax(eigval.real)])  # eigenvalues may be complex; rank them by real part
end_pts = np.array([+vec, -vec]) * 10
end_pts = pca.transform(fp.detach().numpy() + end_pts)
# Plot in the same space as activity
plt.figure()
for i in range(5):
activity_pc = pca.transform(activity_dict[i])
trial = trial_infos[i]
color = 'red' if trial['ground_truth'] == 0 else 'blue'
plt.plot(activity_pc[:, 0], activity_pc[:, 1], 'o-',
color=color, alpha=0.1)
# Fixed points are shown in cross
fixedpoints_pc = pca.transform(fixedpoints)
plt.plot(fixedpoints_pc[:, 0], fixedpoints_pc[:, 1], 'x')
# Line attractor
plt.plot(end_pts[:, 0], end_pts[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
# -
# Plot distribution of eigenvalues in a 2-d real-imaginary plot
plt.figure()
plt.scatter(np.real(eigval), np.imag(eigval))
plt.plot([0, 0], [-1, 1], '--')
plt.xlabel('Real')
plt.ylabel('Imaginary')
# # Supplementary Materials
#
# Code for making publication quality figures as it appears in the paper.
# Convert information into pandas dataframe
import pandas as pd
# Build the dataframe in one call (DataFrame.append was removed in pandas 2.0)
df = pd.DataFrame([trial_infos[i] for i in range(len(trial_infos))])
# Example selection of conditions
# print(df[df['f1']==22])
# +
import matplotlib as mpl
plot_fp = True
# Plot in the same space as activity
# fig = plt.figure(figsize=(3, 3))
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(4, 2))
for i in range(2):
ax = axes[i]
plot_fp = i == 1
# ax = fig.add_axes([0.2, 0.2, 0.6, 0.6])
colors = np.array([[27,158,119], [117,112,179], [217,95,2]])/255.
# Search for two trials with similar conditions
values = np.unique(df['f1'])
color_intensity = [0.4, 0.7, 1.0, 1.3]
cmap = mpl.cm.get_cmap('winter')
if plot_fp:
alpha = 0.2
else:
alpha = 1.0
for i, val in enumerate(values):
trials = df[df['f1']==val].index
activity = np.mean(np.array([activity_dict[i] for i in trials]), axis=0)
activity_pc = pca.transform(activity)
label = '{:0.1f}'.format(val)
color = cmap(i/len(values))
ax.plot(activity_pc[:, 0], activity_pc[:, 1], 'o-',
color=color, ms=3, markeredgecolor='none',
lw=1, label=label, alpha=alpha)
ax.plot(activity_pc[0, 0], activity_pc[0, 1], 'o-', alpha=alpha,
marker='^', color=color, ms=5)
if plot_fp:
# Fixed points are shown in cross
color = colors[2]
fixedpoints_pc = pca.transform(fixedpoints)
ax.plot(fixedpoints_pc[:, 0], fixedpoints_pc[:, 1], 'x', ms=3, color=color, alpha=0.3)
# Line attractor
ax.plot(fixedpoints_pc[i_fp, 0], fixedpoints_pc[i_fp, 1], 'x', ms=5, color=color, lw=1)
ax.plot(end_pts[:, 0], end_pts[:, 1], color=color)
else:
ax.legend(title='Stimulus', loc='upper left', bbox_to_anchor=(1.0, 1.0), frameon=False)
ax.set_xlabel('PC 1', fontsize=7)
ax.set_ylabel('PC 2', fontsize=7)
# plt.xlim([-5, 5])
# plt.ylim([-1, 5])
# Beautification
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
# ax.spines['left'].set_position(('data', -5))
# ax.spines['bottom'].set_position(('data', -1.5))
plt.tight_layout()
plt.locator_params(nbins=2)
from pathlib import Path
# if plot_fp:
# fname = Path('figures/lineattractors_parametricWM')
# else:
# fname = Path('figures/rnndynamics_parametricWM')
fname = Path('figures/rnndynamics_parametricWM')
fig.savefig(fname.with_suffix('.pdf'), transparent=True)
fig.savefig(fname.with_suffix('.png'), dpi=300)
# -
|
ParametricWorkingMemory.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Time Optimal Velocity Profiles
#
# ***
#
# When the maze solver commands that the robot go forward, it can say that it must go forward one or more squares depending on what it knows about the maze. When we don't know what is after the square we pass through, we must be going slow enough to handle any scenario. In other words, there is some $V_f$ that we must reach by the end of our motion. We also begin motions at this speed, since to arrive where we are we were required to reach $V_f$. Therefore, we start and end at $V_f$, and we want to cover some distance $d$ in the fastest possible time. To do so, we accelerate at our fixed $a$ until we reach max speed, or until we need to start slowing down (whichever comes first). This gives us a trapezoid-shaped velocity profile.
# ## Going Straight
# %load_ext tikzmagic
# +
# %%tikz -s 400,400
\draw[->] (0,0) -- (10,0);
\draw[->] (0,0) -- (0,5);
\draw[line width=1] (0,0.5) -- (2.5,3);
\draw[line width=1] (2.5,3) -- (5.5,3);
\draw[line width=1] (5.5,3) -- (8,0.5);
\draw[dashed] (0,0.5) -- (10,0.5);
\draw[dashed] (0,3) -- (10,3);
\draw[dashed] (2.5,0) -- (2.5,5);
\draw[dashed] (5.5,0) -- (5.5,5);
\draw[dashed] (8,0) -- (8,5);
\draw (-0.5, 0.5) node {$V_{f}$};
\draw (-0.5, 3) node {$V_{max}$};
\draw (2.5, -0.5) node {$t_b$};
\draw (5.5, -0.5) node {$t_f-t_b$};
\draw (8, -0.5) node {$t_f$};
# -
# The time to accelerate from $V_f$ to $V$ is $t_b = \frac{V-V_f}{a}$. We can substitute this into the standard kinematic equation for displacement as follows.
#
# \begin{align}
# d &= Vt_b - \frac{1}{2}a{t_b}^2 \\
# &= V\Big(\frac{V-V_f}{a}\Big) - \frac{1}{2}a\Big(\frac{V-V_f}{a}\Big)^2 \\
# &= \Big(\frac{V^2-VV_f}{a}\Big) - \Big(\frac{a(V-V_f)^2}{2a^2}\Big) \\
# &= \Big(\frac{2V^2-2VV_f}{2a}\Big) - \Big(\frac{V^2-2VV_f+{V_f}^2}{2a}\Big) \\
# &= \frac{2V^2-2VV_f - V^2 + 2VV_f - {V_f}^2}{2a} \\
# d &= \frac{V^2-{V_f}^2}{2a} \\
# \end{align}
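As a quick sanity check, the distance formula just derived can be evaluated directly (the values here are chosen only for illustration):

```python
def ramp_distance(V, V_f, a):
    # distance covered ramping between V_f and V at constant acceleration a
    return (V**2 - V_f**2) / (2 * a)

# e.g. accelerating from rest up to 0.5 m/s at 2 m/s^2
print(round(ramp_distance(0.5, 0.0, 2.0), 4))  # 0.0625
```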
#
# For example, if you're starting at $V_f=0.2\frac{m}{s}$, you're ramping up to $V=0.5\frac{m}{s}$, and your acceleration is fixed at $a=2\frac{m}{s^2}$, the distance you'll need to do that is $d = \frac{0.5^2 - 0.2^2}{2\cdot2} = 0.0525m$
# ## Code that proves it
# +
# dependencies and global setup
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(suppress=True, precision=3, linewidth=100)
LOG_LVL = 2
def debug(*args):
if LOG_LVL <= 0:
print(*args)
def info(*args):
if LOG_LVL <= 1:
print(*args)
def warning(*args):
if LOG_LVL <= 2:
print(*args)
def log(*args):
if LOG_LVL < 100:
print(*args)
# +
def profile(V0, Vf, Vmax, d, A, buffer=3e-3):
v = V0
x = 0
a = A
vs = [v]
xs = [x]
a_s = [a]
dt = 0.01
while x < d:
x = x + v*dt + a*dt*dt/2.0
v = v + a*dt
        ramp_d = (v*v - Vf*Vf) / (2.0*A)
if (d-x) < ramp_d + buffer:
a = -A
elif v < Vmax:
a = A
else:
a = 0
if v > Vmax:
v = Vmax
elif v < Vf:
v = Vf
xs.append(x)
vs.append(v)
a_s.append(a)
return xs, vs, a_s
def graph(title, idx):
plt.figure()
plt.title(title)
Vs = [0.35, 0.5, 0.75, 1, 2]
Vf = 0.02
V0 = 0.2
d = 0.35
a = 2
for V in Vs:
results = profile(V0, Vf, V, d, a)
vs = results[1]
if V == 2: # make V=2 dashed so we can see it over V=1
plt.plot(results[idx], label='V={}'.format(V), linestyle='dashed')
else:
plt.plot(results[idx], label='V={}'.format(V))
plt.legend(bbox_to_anchor=(1, 1), loc=2)
graph("position", 0)
graph("velocity", 1)
graph("acceleration", 2)
plt.show()
# -
# ## General Form Trajectory Planning
# Let's start out by generating trajectories that are not time optimal, but instead rely on specifying the final time $t_f$. For smartmouse, our state space is $[x, y, \theta]$, and a turn can be defined as starting at a point $[x_0, y_0, \theta_0]$ and going to $[x_f, y_f, \theta_f]$. Of course, we also want to specify the velocities at these points, $[\dot{x}_0, \dot{y}_0,\dot{\theta}_0]$ and $[\dot{x}_f, \dot{y}_f,\dot{\theta}_f]$. We have four constraints, so if we want to fit a smooth polynomial to those points we need a cubic (third-order) polynomial, which has four coefficients.
#
# $$q(t) = a_0 + a_1t + a_2t^2 + a_3t^3$$
# $$\dot{q}(t) = a_1 + 2a_2t + 3a_3t^2$$
#
# If we sub in our constraints, we get the following system of equations.
#
# \begin{align}
# q(0) &= a_0 \\
# \dot{q}(0) &= a_1 \\
# q(t_f) &= a_0 + a_1t_f + a_2{t_f}^2 + a_3{t_f}^3\\
# \dot{q}(t_f) &= a_1 + 2a_2t_f + 3a_3{t_f}^2\\
# \end{align}
#
# In matrix form that looks like:
# \begin{equation}
# \begin{bmatrix}
# 1 & 0 & 0 & 0 \\
# 0 & 1 & 0 & 0 \\
# 1 & t_f & t_f^2 & t_f^3 \\
# 0 & 1 & 2t_f & 3t_f^2 \\
# \end{bmatrix}
# \begin{bmatrix}
# a_0 \\
# a_1 \\
# a_2 \\
# a_3 \\
# \end{bmatrix} =
# \begin{bmatrix}
# q(0) \\
# \dot{q}(0) \\
# q(t_f) \\
# \dot{q}(t_f) \\
# \end{bmatrix}
# \end{equation}
#
# It can be shown that the matrix on the left is invertible, so long as $t_f-t_0 > 0$. So we can invert and solve this equation to get all the $a$ coefficients. We can then use this polynomial to generate $q(t)$ and $\dot{q}(t)$ -- our trajectory.
# +
def simple_traj_solve(q_0, q_dot_0, q_t_f, q_dot_t_f, t_f):
    # Solve for the cubic's coefficients given the boundary conditions
    b = np.array([[q_0], [q_dot_0], [q_t_f], [q_dot_t_f]])
    a = np.array([[1,0,0,0],[0,1,0,0],[1, t_f, pow(t_f,2),pow(t_f,3)],[0,1,2*t_f,3*pow(t_f,2)]])
    log(a, b)
    coeff = np.linalg.solve(a, b)
    log(coeff)
    return coeff
# Example: a point in space (one dimension) goes from rest at the origin to rest at 0.18m in 1 second
simple_traj_info = (0, 0, 0.18, 0, 1)
simple_traj_coeff = simple_traj_solve(*simple_traj_info)
# -
# Here you can see that the resulting coefficients are $a_0=0$, $a_1=0$, $a_2=0.54$, $a_3=-0.36$. Intuitively, this says that we're going to have positive acceleration, but our acceleration is going to decrease over time. Let's graph it!
# +
def simple_traj_plot(coeff, t_f):
dt = 0.01
ts = np.array([[1, t, pow(t,2), pow(t,3)] for t in np.arange(0, t_f+dt, dt)])
qs = ts@coeff
plt.plot(ts[:,1], qs, label="x")
plt.xlabel("time (seconds)")
    plt.ylabel("X (meters)")
plt.legend(bbox_to_anchor=(1,1), loc=2)
plt.show()
simple_traj_plot(simple_traj_coeff, simple_traj_info[-1])
# -
# **ooooooooooh so pretty**
#
# Let's try another example, now with our full state space of $[x, y, \theta]$.
# +
def no_dynamics():
    # In this example, we go from (0.09, 0.09, 0) to (0.27, 0.18, -1.5707). Our starting and ending velocities are zero
q_0 = np.array([0.09,0.09,0])
q_dot_0 = np.array([0,0,0])
q_f = np.array([0.27,0.18,-1.5707])
q_dot_f = np.array([0,0,0])
t_f = 1
b = np.array([q_0, q_dot_0, q_f, q_dot_f])
a = np.array([[1,0,0,0],[0,1,0,0],[1, t_f, pow(t_f,2),pow(t_f,3)],[0,1,2*t_f,3*pow(t_f,2)]])
coeff = np.linalg.solve(a, b)
log(coeff)
dt = 0.1
ts = np.array([[1, t, pow(t,2), pow(t,3)] for t in np.arange(0, t_f+dt, dt)])
qs = ts@coeff
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.gca().set_adjustable("box")
plt.subplot(221)
plt.plot(ts[:,1], qs[:,0])
plt.xlabel("time (seconds)")
plt.title("x")
plt.subplot(222)
plt.plot(ts[:,1], qs[:,1])
plt.xlabel("time (seconds)")
plt.title("y")
plt.subplot(223)
plt.plot(ts[:,1], qs[:,2])
plt.xlabel("time (seconds)")
plt.title(r"$\theta$")
plt.subplot(224)
plt.scatter(qs[:,0], qs[:,1])
plt.axis('equal')
plt.xlabel("X")
plt.ylabel("Y")
plt.tight_layout()
plt.show()
no_dynamics()
# -
# Well, they are smooth, but these are not possible to execute! The robot cannot simply translate sideways.
# # Trajectory Planning With a Simple Dynamics Model
#
# ***
# +
# %%tikz -s 100,100
\draw [rotate around={-45:(0,0)}] (-.5,-1) rectangle (0.5,1);
\filldraw (0,0) circle (0.125);
\draw [->] (0,0) -- (0,1.5);
\draw [->] (0,0) -- (1.5,0);
\draw [->] (0,0) -- (1.5,1.5);
\draw (1.2, -0.2) node {$x$};
\draw (-0.2, 1.2) node {$y$};
\draw (1, 1.2) node {$v$};
# -
#
# We need to add new constraints to our system of equations. Specifically, we need our dynamics model. For now, let's assume a simplified car model.
#
# $$ \dot{x} = v\cos(\theta) $$
# $$ \dot{y} = v\sin(\theta) $$
#
# This says that at any instant in time the robot is moving with velocity $v$ along its heading $\theta$. This isn't very accurate, but let's start with that, since the real dynamics of our robot are more complex.
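To see what this model does, here is a quick numerical sketch (not part of the notebook's pipeline; the velocity values are made up): integrating $\dot{x} = v\cos(\theta)$, $\dot{y} = v\sin(\theta)$ with forward Euler while $\theta$ changes at a constant rate traces out an arc.

```python
import numpy as np

# Forward-Euler integration of the simplified car model (assumed values).
dt = 0.01
x, y, theta = 0.0, 0.0, 0.0
v, w = 0.2, np.pi / 2   # constant linear and angular velocity
for _ in range(100):    # simulate 1 second
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += w * dt

# after a quarter turn the robot has swept an arc of radius v/w ≈ 0.127 m
print(round(x, 3), round(y, 3), round(theta, 3))
```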
#
# First we will bring in the constraints from before. We must satisfy specific initial and final positions in $[x, y, \theta]$. I've used new letters for the coefficients to avoid confusion.
#
# \begin{align}
# x_0 &= c_0 + c_1(0) + c_2(0)^2 + c_3(0)^3 + c_4(0)^4 + c_5(0)^5 \\
# y_0 &= d_0 + d_1(0) + d_2(0)^2 + d_3(0)^3 + d_4(0)^4 + d_5(0)^5 \\
# x_{t_f} &= c_0 + c_1(t_f) + c_2(t_f)^2 + c_3(t_f)^3 + c_4(t_f)^4 + c_5(t_f)^5 \\
# y_{t_f} &= d_0 + d_1(t_f) + d_2(t_f)^2 + d_3(t_f)^3 + d_4(t_f)^4 + d_5(t_f)^5 \\
# \end{align}
#
# Notice here we have 12 unknowns, $c_0 \dots c_5$ and $d_0 \dots d_5$. So we're gonna need more equations for there to be a unique solution. Also notice we haven't defined any constraints related to our dynamics model. That would be a good place to get our other equations!
#
# First, we want to be able to specify the initial velocity $v_0$ and final velocity $v_{t_f}$. It is easier to just constrain $\dot{x}_0$, $\dot{y}_0$, $\dot{x}_{t_f}$, $\dot{y}_{t_f}$. So if we want to specify that we start facing $\tfrac{\pi}{2}$ going 1m/s, we'd just specify $\cos(\tfrac{\pi}{2})$ for $\dot{x}_0$ and $\sin(\tfrac{\pi}{2})$ for $\dot{y}_0$.
#
# \begin{align}
# \dot{x}_0 &= c_1 \\
# \dot{y}_0 &= d_1 \\
# \dot{x}_{t_f} &= (0)c_0 + (1)c_1 + 2t_fc_2 + 3{t_f}^2c_3 + 4{t_f}^3c_4 + 5{t_f}^4c_5 \\
# \dot{y}_{t_f} &= (0)d_0 + (1)d_1 + 2t_fd_2 + 3{t_f}^2d_3 + 4{t_f}^3d_4 + 5{t_f}^4d_5
# \end{align}
#
# Let's also make sure x and y components obey trigonometry.
#
# \begin{align}
# v\cos(\theta)\sin(\theta) + v\sin(\theta)\cos(\theta) &= v\sin(2\theta) \\
# \dot{x}\sin(\theta) + \dot{y}\cos(\theta) &= v\sin(2\theta)
# \end{align}
#
# We can get two equations out of this by specifying initial and final velocities
#
# \begin{align}
# v_0\sin(2\theta_0) &= \dot{x}_0\sin(\theta_0) + \dot{y}_0\cos(\theta_0) \\
# v_{t_f}\sin(2\theta_{t_f}) &= \dot{x}_{t_f}\sin(\theta_{t_f}) + \dot{y}_{t_f}\cos(\theta_{t_f})
# \end{align}
#
# We should write out the full form though, to make things in terms of our coefficients.
#
# \begin{align}
# v(0)\sin(2\theta_0) &= \Big[c_1 + 2(0)c_2 + 3(0)^2c_3 + 4(0)^3c_4 + 5(0)^4c_5\Big]\sin(\theta_0) + \Big[d_1 + 2(0)d_2 + 3(0)^2d_3 + 4(0)^3d_4 + 5(0)^4d_5\Big]\cos(\theta_0) \\
# v(0)\sin(2\theta_0) &= \sin(\theta_0)c_1 + \cos(\theta_0)d_1
# \end{align}
#
# \begin{align}
# v(t_f)\sin(2\theta_{t_f}) &= \Big[c_1 + 2(t_f)c_2 + 3(t_f)^2c_3 + 4(t_f)^3c_4 + 5(t_f)^4c_5\Big]\sin(\theta_{t_f}) + \Big[d_1 + 2(t_f)d_2 + 3(t_f)^2d_3 + 4(t_f)^3d_4 + 5(t_f)^4d_5\Big]\cos(\theta_{t_f}) \\
# v(t_f)\sin(2\theta_{t_f}) &= \sin(\theta_{t_f})c_1 + 2\sin(\theta_{t_f})t_fc_2 + 3\sin(\theta_{t_f}){t_f}^2c_3 + 4\sin(\theta_{t_f}){t_f}^3c_4 + 5\sin(\theta_{t_f}){t_f}^4c_5 + \cos(\theta_{t_f})d_1 + 2\cos(\theta_{t_f})t_fd_2 + 3\cos(\theta_{t_f}){t_f}^2d_3 + 4\cos(\theta_{t_f}){t_f}^3d_4 + 5\cos(\theta_{t_f}){t_f}^4d_5 \\
# \end{align}
#
# The last two equations constrain the robot from moving in any direction other than its heading -- the non-holonomic (no side-slip) constraint. It relates $\dot{x}$ to $\dot{y}$: since $\dot{x} = v\cos(\theta)$ and $\dot{y} = v\sin(\theta)$, we must have $\dot{x}\sin(\theta) = \dot{y}\cos(\theta)$. You can plug in example values to check: translating sideways violates this equation, e.g. set $\dot{x}=1$, $\dot{y}=0$, $v=1$, $\theta=\tfrac{\pi}{2}$.
#
# \begin{align}
# v\cos(\theta)\sin(\theta) - v\cos(\theta)\sin(\theta) &= 0 \\
# v\cos(\theta)\sin(\theta) - v\sin(\theta)\cos(\theta) &= 0 \\
# \dot{x}\sin(\theta) - \dot{y}\cos(\theta) &= 0
# \end{align}
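A quick numeric check of this constraint (hypothetical values, not tied to the trajectory code below): motion along the heading satisfies it, sideways translation does not.

```python
from math import sin, cos, pi

def side_slip(x_dot, y_dot, theta):
    # x_dot*sin(theta) - y_dot*cos(theta); zero for feasible motion
    return x_dot * sin(theta) - y_dot * cos(theta)

# translating sideways while facing pi/2 violates the constraint
print(side_slip(1, 0, pi / 2))   # 1.0 -> infeasible
# driving along the heading pi/2 satisfies it (up to floating point)
print(side_slip(0, 1, pi / 2))   # ~0.0 -> feasible
```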
#
# and again written out fully in terms of our coefficients
#
# \begin{align}
# \Big[c_1 + 2(0)c_2 + 3(0)^2c_3 + 4(0)^3c_4 + 5(0)^4c_5\Big]\sin(\theta_0) - \Big[d_1 + 2(0)d_2 + 3(0)^2d_3 + 4(0)^3d_4 + 5(0)^4d_5\Big]\cos(\theta_0) &= 0 \\
# \sin(\theta_0)c_1 - \cos(\theta_0)d_1 &= 0
# \end{align}
#
# \begin{align}
# \Big[c_1 + 2(t_f)c_2 + 3(t_f)^2c_3 + 4(t_f)^3c_4 + 5(t_f)^4c_5\Big]\sin(\theta_{t_f}) - \Big[d_1 + 2(t_f)d_2 + 3(t_f)^2d_3 + 4(t_f)^3d_4 + 5(t_f)^4d_5\Big]\cos(\theta_{t_f}) &= 0 \\
# \sin(\theta_{t_f})c_1 + 2\sin(\theta_{t_f})t_fc_2 + 3\sin(\theta_{t_f}){t_f}^2c_3 + 4\sin(\theta_{t_f}){t_f}^3c_4 + 5\sin(\theta_{t_f}){t_f}^4c_5 - \cos(\theta_{t_f})d_1 - 2\cos(\theta_{t_f})t_fd_2 - 3\cos(\theta_{t_f}){t_f}^2d_3 - 4\cos(\theta_{t_f}){t_f}^3d_4 - 5\cos(\theta_{t_f}){t_f}^4d_5 &= 0
# \end{align}
#
# Ok, that should work. Now let's write it out in matrix form. We use $c$ and $s$ to shorten $\sin$ and $\cos$.
#
# \setcounter{MaxMatrixCols}{20}
# \begin{equation}
# \begin{bmatrix}
# 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
# 0 & s(\theta_0) & 0 & 0 & 0 & 0 & 0 & -c(\theta_0) & 0 & 0 & 0 & 0\\
# 0 & s(\theta_0) & 0 & 0 & 0 & 0 & 0 & c(\theta_0) & 0 & 0 & 0 & 0\\
# 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
# 1 & t_f & {t_f}^2 & {t_f}^3 & {t_f}^4 & {t_f}^5 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 1 & t_f & {t_f}^2 & {t_f}^3 & {t_f}^4 & {t_f}^5 \\
# 0 & s(\theta_{t_f}) & 2s(\theta_{t_f})t_f & 3s(\theta_{t_f}){t_f}^2 & 4s(\theta_{t_f}){t_f}^3 & 5s(\theta_{t_f}){t_f}^4 & 0 & -c(\theta_{t_f}) & -2c(\theta_{t_f}){t_f} & -3c(\theta_{t_f}){t_f}^2 & -4c(\theta_{t_f}){t_f}^3 & -5c(\theta_{t_f}){t_f}^4 \\
# 0 & s(\theta_{t_f}) & 2s(\theta_{t_f})t_f & 3s(\theta_{t_f}){t_f}^2 & 4s(\theta_{t_f}){t_f}^3 & 5s(\theta_{t_f}){t_f}^4 & 0 & c(\theta_{t_f}) & 2c(\theta_{t_f}){t_f} & 3c(\theta_{t_f}){t_f}^2 & 4c(\theta_{t_f}){t_f}^3 & 5c(\theta_{t_f}){t_f}^4 \\
# 0 & 1 & 2t_f & 3{t_f}^2 & 4{t_f}^3 & 5{t_f}^4 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2t_f & 3{t_f}^2 & 4{t_f}^3 & 5{t_f}^4
# \end{bmatrix}
# \begin{bmatrix}
# c_0 \\
# c_1 \\
# c_2 \\
# c_3 \\
# c_4 \\
# c_5 \\
# d_0 \\
# d_1 \\
# d_2 \\
# d_3 \\
# d_4 \\
# d_5
# \end{bmatrix} =
# \begin{bmatrix}
# x_0 \\
# y_0 \\
# 0 \\
# v_0s(2\theta_0) \\
# c(\theta_0)v_0 \\
# s(\theta_0)v_0 \\
# x_{t_f} \\
# y_{t_f} \\
# 0 \\
# v_{t_f}s(2\theta_{t_f}) \\
# c(\theta_{t_f})v_{t_f} \\
# s(\theta_{t_f})v_{t_f} \\
# \end{bmatrix}
# \end{equation}
# +
# Let's solve this in code like we did before
def plot_vars(traj_plan):
dt = 0.001
T = np.arange(0, traj_plan.get_t_f()+dt, dt)
xts = np.array([[1, t, pow(t,2), pow(t,3), pow(t,4), pow(t,5), 0, 0, 0, 0, 0, 0] for t in T])
xdts = np.array([[0, 1, 2*t, 3*pow(t,2), 4*pow(t,3), 5*pow(t,4), 0, 0, 0, 0, 0, 0] for t in T])
yts = np.array([[0, 0, 0, 0, 0, 0, 1, t, pow(t,2), pow(t,3), pow(t,4), pow(t,5)] for t in T])
ydts = np.array([[0, 0, 0, 0, 0, 0, 0, 1, 2*t, 3*pow(t,2), 4*pow(t,3), 5*pow(t,4)] for t in T])
xs = xts@traj_plan.get_coeff()
ys = yts@traj_plan.get_coeff()
xds = xdts@traj_plan.get_coeff()
yds = ydts@traj_plan.get_coeff()
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.rc('axes.formatter', useoffset=False)
plt.figure(figsize=(10, 2.5))
plt.subplot(141)
plt.plot(T, xs, linewidth=3)
plt.xlabel("time (seconds)")
plt.title("X")
plt.subplot(142)
plt.plot(T, ys, linewidth=3, color='r')
plt.xlabel("time (seconds)")
plt.title("Y")
plt.subplot(143)
plt.plot(T, xds, linewidth=3, color='g')
plt.xlabel("time (seconds)")
    plt.title(r"$\dot{x}$")
plt.tight_layout()
plt.subplot(144)
plt.plot(T,yds, linewidth=3, color='y')
plt.xlabel("time (seconds)")
    plt.title(r"$\dot{y}$")
plt.tight_layout()
plt.show()
def plot_traj(traj_plan):
dt = 0.03
T = np.arange(0, traj_plan.get_t_f()+dt, dt)
xts = np.array([[1, t, pow(t,2), pow(t,3), pow(t,4), pow(t,5), 0, 0, 0, 0, 0, 0] for t in T])
yts = np.array([[0, 0, 0, 0, 0, 0, 1, t, pow(t,2), pow(t,3), pow(t,4), pow(t,5)] for t in T])
xs = xts@traj_plan.get_coeff()
ys = yts@traj_plan.get_coeff()
plot_traj_pts(xs, ys, T, traj_plan.waypoints)
def plot_traj_pts(xs, ys, T, waypoints):
plt.figure(figsize=(5, 5))
plt.title("Trajectory")
plt.xlabel("X")
plt.ylabel("Y")
W = 2
plt.xlim(0, W * 0.18)
plt.ylim(0, W * 0.18)
plt.xticks(np.arange(2*W+1)*0.09)
plt.yticks(np.arange(2*W+1)*0.09)
plt.grid(True)
plt.gca().set_axisbelow(True)
for t, pt in waypoints:
arrow_dx = cos(pt.theta) * (pt.v) * 0.1
arrow_dy = sin(pt.theta) * (pt.v) * 0.1
plt.arrow(pt.x, pt.y, arrow_dx, arrow_dy, head_width=0.005, head_length=0.005, width=0.001, fc='k', ec='k')
plt.scatter(xs, ys, marker='.', linewidth=0)
plt.show()
# +
from math import sin, cos, pi
from collections import namedtuple
WayPoint = namedtuple('WayPoint', ['x', 'y', 'theta', 'v'])
class TrajPlan:
def x_constraint(t):
return [1, t, pow(t, 2), pow(t, 3), pow(t, 4), pow(t, 5), 0, 0, 0, 0, 0, 0]
def y_constraint(t):
return [0, 0, 0, 0, 0, 0, 1, t, pow(t, 2), pow(t, 3), pow(t, 4), pow(t, 5)]
    def non_holonomic_constraint(theta_t, t):
        # enforces x_dot*sin(theta) - y_dot*cos(theta) = 0 (no side-slip); paired with 0 in b
        s_t = sin(theta_t)
        c_t = cos(theta_t)
        t_2 = pow(t, 2)
        t_3 = pow(t, 3)
        t_4 = pow(t, 4)
        return [0, s_t, 2 * s_t * t, 3 * s_t * t_2, 4 * s_t * t_3, 5 * s_t * t_4, 0, -c_t, -2 * c_t * t, -3 * c_t * t_2, -4 * c_t * t_3, -5 * c_t * t_4]
    def trig_constraint(theta_t, t):
        # enforces x_dot*sin(theta) + y_dot*cos(theta) = v*sin(2*theta); paired with v*sin(2*theta) in b
        s_t = sin(theta_t)
        c_t = cos(theta_t)
        t_2 = pow(t, 2)
        t_3 = pow(t, 3)
        t_4 = pow(t, 4)
        return [0, s_t, 2 * s_t * t, 3 * s_t * t_2, 4 * s_t * t_3, 5 * s_t * t_4, 0, c_t, 2 * c_t * t, 3 * c_t * t_2, 4 * c_t * t_3, 5 * c_t * t_4]
def x_dot_constraint(t):
return [0, 1, 2 * t, 3 * pow(t, 2), 4 * pow(t, 3), 5 * pow(t, 4), 0, 0, 0, 0, 0, 0]
def y_dot_constraint(t):
return [0, 0, 0, 0, 0, 0, 0, 1, 2 * t, 3 * pow(t, 2), 4 * pow(t, 3), 5 * pow(t, 4)]
def solve(self, waypoints):
# Setup the matrices to match the equation above
A = []
b = []
for t, pt in waypoints:
A += [TrajPlan.x_constraint(t),
TrajPlan.y_constraint(t),
TrajPlan.non_holonomic_constraint(pt.theta, t),
TrajPlan.trig_constraint(pt.theta, t),
TrajPlan.x_dot_constraint(t),
TrajPlan.y_dot_constraint(t)]
b += [pt.x,
pt.y,
0,
pt.v*sin(2*pt.theta),
cos(pt.theta)*pt.v,
sin(pt.theta)*pt.v]
A = np.array(A)
b = np.array(b)
rank = np.linalg.matrix_rank(A)
if rank == A.shape[1]:
if A.shape[0] == A.shape[1]:
coeff = np.linalg.solve(A, b)
            else:
                warning("A is not square ({}), using least squares.".format(A.shape))
                coeff, resid, rank, s = np.linalg.lstsq(A, b, rcond=None)
        else:
            warning("Ranks don't match! {} equations {} variables, using least squares".format(rank, A.shape[1]))
            coeff, resid, rank, s = np.linalg.lstsq(A, b, rcond=None)
debug("rank {}".format(rank))
debug("A: \n{}".format(A))
debug("coeff: \n{}".format(coeff))
        error = np.sum(np.power(A@coeff - b, 2))
        if error > 1e-10:
            info("These two vectors should be equal! But there is error.")
            info("b is: \n{}".format(b))
            info("A@coeff is: \n{}".format(A@coeff))
            info("Sum of squared errors of the solution:")
            info(error)
self.coeff = coeff
self.waypoints = waypoints
def get_coeff(self):
return self.coeff
def get_t_f(self):
return self.waypoints[-1][0]
# -
# ## Example Plots
# forward 1 cell, start from rest, end at 60cm/s, do it in .5 seconds
LOG_LVL = 5
fwd_1 = TrajPlan()
fwd_1.solve([(0, WayPoint(0.09, 0.09, pi/2, 0)), (0.5, WayPoint(0.09, 0.27, pi/2, 0.6))])
plot_vars(fwd_1)
plot_traj(fwd_1)
# continue by turning right 90 degrees
LOG_LVL = 1
turn_right = TrajPlan()
turn_right.solve([(0, WayPoint(0.09, 0.18, pi/2, 0.4)), (0.5, WayPoint(0.18, 0.27, 0, 0.4))])
plot_vars(turn_right)
plot_traj(turn_right)
# 3 waypoints!
LOG_LVL = 1
turn_right = TrajPlan()
turn_right.solve([(0, WayPoint(0.09, 0.09, pi/2, 0.0)), (0.5, WayPoint(0.18, 0.18, 0, 0.35)), (1, WayPoint(0.27, 0.27, pi/2, 0))])
plot_vars(turn_right)
plot_traj(turn_right)
# **Note for this system of equations with 3 waypoints, there is no solution. However, the error of the solution found is very small.**
#
# Now let's find one that really sucks!
# 4 waypoints!
LOG_LVL = 1
turn_right = TrajPlan()
turn_right.solve([(0, WayPoint(0.09, 0.0, pi/2, 0.1)),
(1, WayPoint(0.09, 0.18, pi/2, 0.1)),
(2, WayPoint(0.18, 0.27, 0, 0.1)),
(3, WayPoint(0.27, 0.27, 0, 0.1))])
plot_traj(turn_right)
# # Trajectory Following
#
# ***
#
# Now that we have a trajectory, we want to design a controller that will follow it as closely as possible. To do this, I'm just going to do a proportional controller. Later we will design an optimal controller. We want to make sure the robot is on the path, facing along the path, and going the right speed. When all of these are true the change in speed should be zero. Let's come up with an equation to relate current pose and velocity to the desired pose and velocity. Let our outputs be the linear velocity $v$ and the rotational velocity $w$.
#
# $$ w = \bar{w} + d*P_1 + (\bar{\theta} - \theta)P_2$$
# $$ v = \bar{v} + l*P_3$$
#
# where $\bar{v}$ and $\bar{w}$ are the desired (feed-forward) linear and angular velocities along the trajectory, $\bar{\theta}$ is the desired heading, $d$ is the signed distance to the planned trajectory (to the right of the plan is positive), $l$ is the lag distance along the trajectory (positive when the robot is behind the plan), and $P_1$, $P_2$, and $P_3$ are constants. Essentially what we're saying with the first equation is that when you're far off the trajectory you need to turn harder to get back onto it, but you also need to be aligned with it. The second equation says if you're lagging behind your plan speed up, and slow down if you're overshooting.
# +
from math import atan2, sqrt
LOG_LVL = 5
def simulate(q_0, waypoints, P_1, P_2, P_3, A=3):
traj = TrajPlan()
traj.solve(waypoints)
dt = 0.01
x = q_0[0]
y = q_0[1]
theta = q_0[2]
v = q_0[3]
w = q_0[4]
actual_v = q_0[3]
actual_w = q_0[4]
v_acc = A * dt
TRACK_WIDTH_M = 0.0633
w_acc = v_acc / (TRACK_WIDTH_M/2)
T = np.arange(0, traj.get_t_f()+dt, dt)
x_bar_list = []
y_bar_list = []
x_list = []
y_list = []
for t in T:
x_bar = [1, t, pow(t,2), pow(t,3), pow(t,4), pow(t,5), 0, 0, 0, 0, 0, 0] @ traj.get_coeff()
dx_bar = [0, 1, 2*t, 3*pow(t,2), 4*pow(t,3), 5*pow(t,4), 0, 0, 0, 0, 0, 0] @ traj.get_coeff()
ddx_bar = [0, 0, 0, 0, 0, 0, 0, 0, 2, 6*t, 12*pow(t,2), 20*pow(t,3)] @ traj.get_coeff()
y_bar = [0, 0, 0, 0, 0, 0, 1, t, pow(t,2), pow(t,3), pow(t,4), pow(t,5)] @ traj.get_coeff()
dy_bar = [0, 0, 0, 0, 0, 0, 0, 1, 2*t, 3*pow(t,2), 4*pow(t,3), 5*pow(t,4)] @ traj.get_coeff()
ddy_bar = [0, 0, 0, 0, 0, 0, 0, 0, 2, 6*t, 12*pow(t,2), 20*pow(t,3)] @ traj.get_coeff()
theta_bar = atan2(dy_bar, dx_bar)
v_bar = sqrt(dx_bar*dx_bar + dy_bar*dy_bar)
w_bar = 1/v_bar * (ddy_bar*cos(theta_bar) - ddx_bar*sin(theta_bar));
# simple Dubin's Car forward kinematics
x += cos(theta) * actual_v * dt
y += sin(theta) * actual_v * dt
theta += actual_w * dt
# control
euclidian_error = np.sqrt(pow(x_bar - x, 2) + pow(y_bar - y, 2))
transformed_x = (x - x_bar) * cos(-theta_bar) + (y - y_bar) * -sin(-theta_bar)
transformed_y = (x - x_bar) * sin(-theta_bar) + (y - y_bar) * cos(-theta_bar)
right_of_traj = transformed_y < 0
signed_euclidian_error = euclidian_error if right_of_traj else -euclidian_error
lag_error = -transformed_x
w = w_bar + signed_euclidian_error * P_1 + (theta_bar - theta) * P_2
v = v_bar + lag_error * P_3
# simple acceleration model
if v < actual_v:
actual_v = max(v, actual_v - v_acc)
elif v > actual_v:
actual_v = min(v, actual_v + v_acc)
if w < actual_w:
actual_w = max(w, actual_w - w_acc)
elif w > actual_w:
actual_w = min(w, actual_w + w_acc)
x_bar_list.append(x_bar)
y_bar_list.append(y_bar)
x_list.append(x)
y_list.append(y)
plt.figure(figsize=(5, 5))
W = 3
plt.scatter(x_bar_list, y_bar_list, marker='.', linewidth=0, c='black', label='desired traj')
plt.scatter(x_list, y_list, marker='.', linewidth=0, c=T, label='robot traj')
plt.xlim(0, W * 0.18)
plt.ylim(0, W * 0.18)
plt.xticks(np.arange(2*W+1)*0.09)
plt.yticks(np.arange(2*W+1)*0.09)
plt.grid(True)
plt.gca().set_axisbelow(True)
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Trajectory Tracking")
plt.legend(bbox_to_anchor=(1,1), loc=2)
# -
test_P_1=300
test_P_2=50
test_P_3=10
robot_q_0 = (0.08, 0.18, pi/2, 0.3, 0)
traj = [(0, WayPoint(0.09, 0.18, pi/2, 0.5)), (0.5, WayPoint(0.18, 0.27, 0, 0.35)), (1, WayPoint(0.27, 0.36, pi/2, 0))]
simulate(robot_q_0, traj, test_P_1, test_P_2, test_P_3)
plt.show()
robot_q_0 = (0.11, 0.18, pi/2, 0.2, 5)
traj = [(0, WayPoint(0.09, 0.18, pi/2, 0.2)), (1, WayPoint(0.18, 0.27, 0, 0.35))]
simulate(robot_q_0, traj, test_P_1, test_P_2, test_P_3)
plt.show()
robot_q_0 = (0.0, 0.25, 0, 0.2, 0)
traj = [(0, WayPoint(0.0, 0.27, 0, 0.2)), (1.25, WayPoint(0.54, 0.27, 0, 0.2))]
simulate(robot_q_0, traj, test_P_1, test_P_2, test_P_3)
plt.show()
robot_q_0 = (0.45, 0.05, pi+0.25, 0.3, 0)
traj = [(0, WayPoint(0.45, 0.09, pi, 0.4)), (0.75, WayPoint(0.27, 0.27, pi/2, 0.4))]
simulate(robot_q_0, traj, test_P_1, test_P_2, test_P_3)
plt.show()
robot_q_0 = (0.0, 0.25, 0, 0.2, -5)
traj = [(0, WayPoint(0.0, 0.27, 0, 0.2)), (2, WayPoint(0.48, 0.36, pi/2, 0.2))]
simulate(robot_q_0, traj, test_P_1, test_P_2, test_P_3)
plt.show()
robot_q_0 = (0.25, 0.28, -pi*4/7, 0.5, 0)
traj = [(0, WayPoint(0.27, 0.27, -pi/2, 0.8)), (0.35, WayPoint(0.45, 0.09, 0, 0.8))]
simulate(robot_q_0, traj, test_P_1, test_P_2, test_P_3, A=6)
plt.show()
# no initial error
robot_q_0 = (0.11, 0.18, pi/2, 0.8, 0)
traj = [(0, WayPoint(0.09, 0.18, pi/2, 0.8)), (0.25, WayPoint(0.18, 0.27, 0, 0.6)), (.5, WayPoint(0.27, 0.36, pi/2, 0.4))]
simulate(robot_q_0, traj, 10, 1000, 6, A=5)
plt.show()
# **Note**: The code above has a bug if I use `-pi` instead of `pi` in `robot_q_0`
# # LQR - The Optimal Controller
#
# ***
#
# ## Overview of the Steps:
#
# ### 1. Write out the non-linear dynamics $\dot{\vec{x}} = f(\vec{x}, \vec{u})$
#
# Here we are interested in the full blown system dynamics of the actual smartmouse robot. The forward kinematics, which depend on the current state $x$, $y$, and $\theta$ and the velocity inputs of the wheels $v_l$, and $v_r$ are as follows. In the general case where the two wheels have different velocities, we have this:
#
# \begin{align}
# R &= \frac{W(v_l+v_r)}{2(v_r-v_l)} && \text{radius of turn} \\
# \theta &\leftarrow \theta + \dfrac{v_l}{R-\frac{W}{2}}\Delta t \\
# x &\leftarrow x-R\Bigg(\sin{\Big(\frac{v_r-v_l}{W}\Delta t-\theta\Big)}+\sin{\theta}\Bigg) \\
# y &\leftarrow y-R\Bigg(\cos{\Big(\frac{v_r-v_l}{W}\Delta t-\theta\Big)}-\cos{\theta}\Bigg)
# \end{align}
#
# And in the special case where we're going perfectly straight:
#
# \begin{align}
# \theta &\leftarrow \theta \\
# x &\leftarrow x + v\Delta t\cos(\theta) \\
# y &\leftarrow y + v\Delta t\sin(\theta) \\
# \end{align}
#
# We can take these equations and write them in the form of $\dot{\vec{x}} = f(\vec{x},\vec{u})$. Confusingly, $\vec{x}$ here is the full state vector $[x, y, \theta]$. Most controls texts simply use $\vec{x}$, so I'm sticking with that. Also, we defined $u = [v_l, v_r]$
#
# \begin{align}
# \dot{\vec{x}} &= \begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{\theta}\end{bmatrix} \\
# &= \begin{bmatrix}
# -R\Bigg(\sin{\Big(\frac{v_r-v_l}{W}\Delta t-\theta\Big)}+\sin{\theta}\Bigg) \\
# -R\Bigg(\cos{\Big(\frac{v_r-v_l}{W}\Delta t-\theta\Big)}-\cos{\theta}\Bigg) \\
# \frac{v_l}{R-\frac{W}{2}}\Delta t \\
# \end{bmatrix}
# \end{align}
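As a sketch (not used by the controller code later in this notebook), these update equations transcribe directly into a single simulation step; `W` here is an assumed track width, and the function falls back to the straight-line special case when the wheel speeds are nearly equal:

```python
from math import sin, cos

def step(x, y, theta, vl, vr, W=0.0633, dt=0.01):
    """One forward step of the differential-drive model above."""
    if abs(vr - vl) < 1e-9:
        # straight-line special case
        return x + vl * dt * cos(theta), y + vl * dt * sin(theta), theta
    R = W * (vl + vr) / (2 * (vr - vl))   # radius of turn
    new_x = x - R * (sin((vr - vl) * dt / W - theta) + sin(theta))
    new_y = y - R * (cos((vr - vl) * dt / W - theta) - cos(theta))
    new_theta = theta + vl / (R - W / 2) * dt
    return new_x, new_y, new_theta

print(step(0.0, 0.0, 0.0, 0.2, 0.2))   # straight: x advances by 0.002
```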
#
# ### 2. Identify the points around which we linearize our system, $(\bar{u}, \bar{x})$
#
# Because we are tracking a trajectory, we want to linearize around the trajectory we are trying to track. That means $\bar{u}$ is the control input associated with the trajectory -- the feed-forward input. Specifically, we need to compute the $v_l$ and $v_r$ that would follow the trajectory at the point $\bar{x}$. To do this we must pick velocities that make the instantaneous turning radius $R$ equal the instantaneous radius of the trajectory at $\bar{x}$, and make the linear velocity at $\bar{x}$ equal the instantaneous linear velocity of the robot center $v$. To do this, we go back to our basic kinematics equations, which rely on the fact that all points on the robot (center, left wheel, right wheel) have the same rotational velocity $\omega$ around the ICC.
#
# \begin{align}
# \omega = \frac{v}{R} &= \frac{v_l}{R-\frac{W}{2}} \\
# \frac{v}{R}\bigg(R - \frac{W}{2}\bigg) &= v_l \\
# \omega = \frac{v}{R} &= \frac{v_r}{R+\frac{W}{2}} \\
# \frac{v}{R}\bigg(R + \frac{W}{2}\bigg) &= v_r \\
# \end{align}
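In code, this feed-forward computation is just a couple of lines (a sketch; `W` is the assumed track width):

```python
def feed_forward(v, R, W=0.0633):
    # wheel speeds that carry the robot center at speed v around a turn of radius R
    w = v / R                                  # shared rotational velocity about the ICC
    return w * (R - W / 2), w * (R + W / 2)    # (vl, vr)

vl, vr = feed_forward(v=0.2, R=0.09)
print(vl, vr)   # inner wheel slower, outer wheel faster; they average to v
```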
#
# Using these equations we can solve for the velocities of the wheels, which together make up $\bar{u}$. We just need the $R$ and $v$. These should be derived from the equation of the trajectory we are tracking. These are well studied equations, for which [a proof can be found other places on the internet](http://mathworld.wolfram.com/Curvature.html).
#
# $$ R = \frac{{\big({\dot{x}}^2 + {\dot{y}}^2\big)}^{\frac{3}{2}}}{\dot{x}\ddot{y}-\dot{y}\ddot{x}} = \frac{{\Big({(c_1+2c_2t+3c_3t^2+4c_4t^3+5c_5t^4)}^2 + {(d_1+2d_2t+3d_3t^2+4d_4t^3+5d_5t^4)}^2\Big)}^{\frac{3}{2}}}{(c_1+2c_2t+3c_3t^2+4c_4t^3+5c_5t^4)(2d_2+6d_3t+12d_4t^2+20d_5t^3) - (d_1+2d_2t+3d_3t^2+4d_4t^3+5d_5t^4)(2c_2+6c_3t+12c_4t^2+20c_5t^3)} $$
#
# $$ v = \sqrt{{\dot{x}}^2 + {\dot{y}}^2} = \sqrt{{(c_1+2c_2t+3c_3t^2+4c_4t^3+5c_5t^4)}^2 + {(d_1+2d_2t+3d_3t^2+4d_4t^3+5d_5t^4)}^2} $$
#
# We can plug in the coefficients of our polynomials and get values for $v$ and $R$. Then, we can plug these into the equations just above and get the feed forward wheel velocities.
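For example (hypothetical derivative values, not taken from the trajectory code): a circle of radius 2 traversed at unit speed has, at one instant, $\dot{x}=0$, $\dot{y}=1$, $\ddot{x}=-\tfrac{1}{2}$, $\ddot{y}=0$, and the formulas recover $R=2$ and $v=1$:

```python
def radius_and_speed(dx, dy, ddx, ddy):
    # instantaneous turn radius and speed from trajectory derivatives
    v = (dx ** 2 + dy ** 2) ** 0.5
    R = (dx ** 2 + dy ** 2) ** 1.5 / (dx * ddy - dy * ddx)
    return R, v

R, v = radius_and_speed(0.0, 1.0, -0.5, 0.0)
print(R, v)   # 2.0 1.0
```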
#
# ### 3. Write the linearized dynamics around $\bar{x}$ as $\dot{\vec{x}} \approx A\delta_x + B\delta_u$, where $\delta_x = (\vec{x} - \bar{x})$ and $\delta_u = (\vec{u} - \bar{u})$
#
# To do this, we need the partial derivative matrices $A$ and $B$.
#
# $$ A = \begin{bmatrix}
# \frac{\partial f_1}{\partial x}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_1}{\partial y}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_1}{\partial \theta}\big|_{\bar{x}\bar{u}} \\
# \frac{\partial f_2}{\partial x}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_2}{\partial y}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_2}{\partial \theta}\big|_{\bar{x}\bar{u}} \\
# \frac{\partial f_3}{\partial x}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_3}{\partial y}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_3}{\partial \theta}\big|_{\bar{x}\bar{u}} \\
# \end{bmatrix}
# = \begin{bmatrix}
# 0 & 0 & R\bigg(\cos\Big(\frac{\bar{v}_r-\bar{v}_l}{W}\Delta t - \bar{\theta}\Big) - \cos(\bar{\theta})\bigg) \\
# 0 & 0 & -R\bigg(\sin\Big(\frac{\bar{v}_r-\bar{v}_l}{W}\Delta t - \bar{\theta}\Big) + \sin(\bar{\theta})\bigg) \\
# 0 & 0 & 0 \\
# \end{bmatrix}
# $$
#
# $$ B = \begin{bmatrix}
# \frac{\partial f_1}{\partial v_l}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_1}{\partial v_r}\big|_{\bar{x}\bar{u}} \\
# \frac{\partial f_2}{\partial v_l}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_2}{\partial v_r}\big|_{\bar{x}\bar{u}} \\
# \frac{\partial f_3}{\partial v_l}\big|_{\bar{x}\bar{u}} &
# \frac{\partial f_3}{\partial v_r}\big|_{\bar{x}\bar{u}} \\
# \end{bmatrix}
# = \begin{bmatrix}
# R\cos\Big(\frac{(\bar{v}_r-\bar{v}_l)\Delta t}{W}-\bar{\theta}\Big)\frac{\Delta t}{W} &
# -R\cos\Big(\frac{(\bar{v}_r-\bar{v}_l)\Delta t}{W}-\bar{\theta}\Big)\frac{\Delta t}{W} \\
# -R\sin\Big(\frac{(\bar{v}_r-\bar{v}_l)\Delta t}{W}-\bar{\theta}\Big)\frac{\Delta t}{W} &
# R\sin\Big(\frac{(\bar{v}_r-\bar{v}_l)\Delta t}{W}-\bar{\theta}\Big)\frac{\Delta t}{W} \\
# \frac{\Delta t}{R-\frac{W}{2}} &
# 0 \\
# \end{bmatrix}
# $$
#
# ### 4. Check if our system is controllable by looking at the rank of the controllability matrix $C = [B, AB, A^2B, \dots, A^{n-1}B]$
#
# We have three state variables so $n = 3$, which means $C = [B, AB, A^2B]$.
#
# $$
# AB = \begin{bmatrix}
# R\bigg(\cos\Big(\frac{v_r-v_l}{W}\Delta t - \theta\Big) - \cos(\theta)\bigg)\frac{\Delta t}{R-\frac{W}{2}} & 0 \\
# -R\bigg(\sin\Big(\frac{v_r-v_l}{W}\Delta t - \theta\Big) + \sin(\theta)\bigg)\frac{\Delta t}{R-\frac{W}{2}} & 0 \\
# 0 & 0 \\
# \end{bmatrix}
# $$
#
#
# Since the only nonzero entries of $A$ are in its third column while its third row is zero, $A^2 = 0$, so $A^2B$ is the $3\times 2$ zero matrix:
#
# $$
# A^2B =
# \begin{bmatrix}
# 0 & 0 & 0\\
# 0 & 0 & 0\\
# 0 & 0 & 0\\
# \end{bmatrix}
# B
# = \begin{bmatrix}
# 0 & 0\\
# 0 & 0\\
# 0 & 0\\
# \end{bmatrix}
# $$
#
# $$ C = \begin{bmatrix}
# \begin{bmatrix}
# R\cos\Big(\frac{(\bar{v_r}-\bar{v_l})\Delta t}{W}-\theta\Big)\frac{\Delta t}{W} &
# -R\cos\Big(\frac{(\bar{v_r}-\bar{v_l})\Delta t}{W}-\theta\Big)\frac{\Delta t}{W} \\
# -R\sin\Big(\frac{(\bar{v_r}-\bar{v_l})\Delta t}{W}-\theta\Big)\frac{\Delta t}{W} &
# R\sin\Big(\frac{(\bar{v_r}-\bar{v_l})\Delta t}{W}-\theta\Big)\frac{\Delta t}{W} \\
# \frac{\Delta t}{R-\frac{W}{2}} &
# 0 \\
# \end{bmatrix} &
# \begin{bmatrix}
# R\bigg(\cos\Big(\frac{v_r-v_l}{W}\Delta t - \theta\Big) - \cos(\theta)\bigg)\frac{\Delta t}{R-\frac{W}{2}} & 0 \\
# -R\bigg(\sin\Big(\frac{v_r-v_l}{W}\Delta t - \theta\Big) + \sin(\theta)\bigg)\frac{\Delta t}{R-\frac{W}{2}} & 0 \\
# 0 & 0 \\
# \end{bmatrix} &
# \begin{bmatrix}
# 0 & 0\\
# 0 & 0\\
# 0 & 0\\
# \end{bmatrix}
# \end{bmatrix}
# $$
#
# What is the rank of this matrix? It seems to depend on the specific values of $\theta$ and the wheel velocities.
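We can probe this numerically. The sketch below builds $A$, $B$, and $C = [B, AB, A^2B]$ for one made-up operating point (the track width, time step, heading, and wheel speeds are all assumed values) and asks NumPy for the rank:

```python
import numpy as np
from math import sin, cos, pi

W, dt, theta = 0.0633, 0.01, pi / 2   # assumed track width, time step, heading
vl, vr = 0.15, 0.25                   # assumed wheel speeds (a gentle left turn)
R = W * (vl + vr) / (2 * (vr - vl))   # radius of turn
phi = (vr - vl) * dt / W - theta      # the recurring angle term

A = np.array([[0, 0,  R * (cos(phi) - cos(theta))],
              [0, 0, -R * (sin(phi) + sin(theta))],
              [0, 0, 0]])
B = np.array([[ R * cos(phi) * dt / W, -R * cos(phi) * dt / W],
              [-R * sin(phi) * dt / W,  R * sin(phi) * dt / W],
              [dt / (R - W / 2), 0]])

C = np.hstack([B, A @ B, A @ A @ B])
print(np.linalg.matrix_rank(C))   # 3 at this operating point
```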
#
# ### 5. Pick cost parameters $Q$ and $R$
#
# These need to be tuned on the simulation or real system, but the identity matrices $I$ are good starting points.
#
# ### 6. Solve for $K$ given $LQR(A, B, Q, R)$
#
# We want to minimize the quadratic cost function $J$, which is defined as follows.
#
# $$ J = \sum_0^N(\vec{x}_t - \bar{x}_t)^TQ(\vec{x}_t - \bar{x}_t) + \sum_0^N(\vec{u}_t - \bar{u}_t)^TR(\vec{u}_t - \bar{u}_t) $$
#
# We can do this with least squares or with dynamic programming. DP is more efficient, $O(Nn^3)$, where $N$ is some finite horizon and $n$ is the number of state dimensions (3 for us).
#
# Note: maybe we compute $K$ once per motion primitive instead of at every time step; it could be consistent within one motion primitive.
#
# ### 7. Apply our new controller of the form $\vec{u} = -K(\vec{x} - \bar{x}) + \bar{u}$
#
# +
from math import atan2
import scipy.linalg
# source: http://www.kostasalexis.com/lqr-control.html
def dlqr(A,B,Q,R):
"""Solve the discrete time lqr controller.
p[k+1] = A p[k] + B u[k]
cost = sum p[k].T*Q*p[k] + u[k].T*R*u[k]
"""
    #ref Bertsekas, p.151
    #first, try to solve the Riccati equation
P = np.matrix(scipy.linalg.solve_discrete_are(A, B, Q, R))
#compute the LQR gain
K = np.matrix(scipy.linalg.inv(B.T*P*B+R)*(B.T*P*A))
eigVals, eigVecs = scipy.linalg.eig(A-B*K)
return K, P, eigVals
def follow_plan(q_0, waypoints, P_1, P_2, P_3):
traj = TrajPlan()
traj.solve(waypoints)
dt = 0.01
x = q_0[0]
y = q_0[1]
theta = q_0[2]
vl = q_0[3]
vr = q_0[3]
actual_vl = vl
actual_vr = vr
v_acc = 2 * dt
W = 0.0633
T = np.arange(0, traj.get_t_f()+dt, dt)
x_bar_list = []
y_bar_list = []
x_list = []
y_list = []
vl_list = []
vr_list = []
actual_vl_list = []
actual_vr_list = []
for t in T:
x_bar = [1, t, pow(t,2), pow(t,3), pow(t,4), pow(t,5), 0, 0, 0, 0, 0, 0] @ traj.get_coeff()
dx_bar = [0, 1, 2*t, 3*pow(t,2), 4*pow(t,3), 5*pow(t,4), 0, 0, 0, 0, 0, 0] @ traj.get_coeff()
ddx_bar = [0, 0, 0, 0, 0, 0, 0, 0, 2, 6*t, 12*pow(t,2), 20*pow(t,3)] @ traj.get_coeff()
y_bar = [0, 0, 0, 0, 0, 0, 1, t, pow(t,2), pow(t,3), pow(t,4), pow(t,5)] @ traj.get_coeff()
dy_bar = [0, 0, 0, 0, 0, 0, 0, 1, 2*t, 3*pow(t,2), 4*pow(t,3), 5*pow(t,4)] @ traj.get_coeff()
ddy_bar = [0, 0, 0, 0, 0, 0, 0, 0, 2, 6*t, 12*pow(t,2), 20*pow(t,3)] @ traj.get_coeff()
theta_bar = atan2(dy_bar, dx_bar)
# full forward kinematics
        if abs(vr - vl) < 1e-5:
x = x + cos(theta) * vl * dt
y = y + sin(theta) * vl * dt
else:
R = W*(vl + vr)/(2*(vr - vl))
x = x - R * (sin((vr-vl)*dt/W - theta) + sin(theta))
y = y - R * (cos((vr-vl)*dt/W - theta) - cos(theta))
theta = theta + vl / (R - W/2) * dt
        # compute instantaneous radius of curvature: R = (x'^2 + y'^2)^(3/2) / (x'y'' - y'x'')
        curv_denom = dx_bar*ddy_bar - dy_bar*ddx_bar
        if abs(curv_denom) < 1e-9:
            R_bar = 1e6  # effectively straight
        else:
            R_bar = pow(pow(dx_bar, 2) + pow(dy_bar, 2), 3/2)/curv_denom
        # feed forward inputs
        v_bar = np.sqrt(dx_bar*dx_bar + dy_bar*dy_bar)
        vl_bar = v_bar/R_bar*(R_bar-W/2)
        vr_bar = v_bar/R_bar*(R_bar+W/2)
A = np.array([[0, 0, R_bar*(cos((vr_bar - vl_bar)*dt/W - theta_bar) - cos(theta_bar))],
[0, 0, -R_bar*(sin((vr_bar - vl_bar)*dt/W - theta_bar) + sin(theta_bar))],
[0, 0, 0]])
B = np.array([[R_bar*cos((vr_bar - vl_bar)*dt/W - theta_bar)*dt/W, -R_bar*cos((vr_bar - vl_bar)*dt/W - theta_bar)*dt/W],
[-R_bar*sin((vr_bar - vl_bar)*dt/W - theta_bar)*dt/W, R_bar*sin((vr_bar - vl_bar)*dt/W - theta_bar)*dt/W],
[dt/(R_bar-W/2), 0]]);
        Q = np.eye(3)
        R = np.eye(2)
K, P, eigs = dlqr(A, B, Q, R)
eigs = np.linalg.eig(A - B*K)
# info("eigs", eigs[0])
# debug("K", K)
x_vec = np.array([[x],[y],[theta]])
x_bar_vec = np.array([[x_bar],[y_bar],[theta_bar]])
u = -K * (x_vec - x_bar_vec) + np.array([[vl_bar],[vr_bar]]);
vl = u[0,0]
vr = u[1,0]
# simple acceleration model
if vl < actual_vl:
actual_vl = max(vl, actual_vl - v_acc)
elif vl > actual_vl:
actual_vl = min(vl, actual_vl + v_acc)
if vr < actual_vr:
actual_vr = max(vr, actual_vr - v_acc)
elif vr > actual_vr:
actual_vr = min(vr, actual_vr + v_acc)
x_bar_list.append(x_bar)
y_bar_list.append(y_bar)
x_list.append(x)
y_list.append(y)
vr_list.append(vr)
vl_list.append(vl)
actual_vr_list.append(actual_vr)
actual_vl_list.append(actual_vl)
plt.figure(figsize=(5, 5))
CELL_COUNT = 3
plt.scatter(x_bar_list, y_bar_list, marker='.', linewidth=0, c='black', label='desired traj')
plt.scatter(x_list, y_list, marker='.', linewidth=0, label='robot traj')
plt.xlim(0, CELL_COUNT * 0.18)
plt.ylim(0, CELL_COUNT * 0.18)
plt.xticks(np.arange(CELL_COUNT+1)*0.18)
plt.yticks(np.arange(CELL_COUNT+1)*0.18)
plt.grid(True)
plt.gca().set_axisbelow(True)
plt.xlabel("X")
plt.ylabel("Y")
plt.title("LQR Trajectory Tracking")
plt.legend(bbox_to_anchor=(1,1), loc=2)
plt.figure()
plt.plot(vr_list, label="vr")
plt.plot(vl_list, label="vl")
plt.plot(actual_vr_list, label="actual vr")
plt.plot(actual_vl_list, label="actual vl")
plt.legend(bbox_to_anchor=(1,1), loc=2)
# -
LOG_LVL=1
robot_q_0 = (0.08, 0.18, pi/2, 0.3)
traj = [(0, WayPoint(0.09, 0.18, pi/2, 0.5)), (0.5, WayPoint(0.18, 0.27, 0, 0.35))]
follow_plan(robot_q_0, traj, test_P_1, test_P_2, test_P_3)
plt.show()
LOG_LVL=1
robot_q_0 = (0.07, 0.18, pi/2, 0.2)
traj = [(0, WayPoint(0.09, 0.18, pi/2, 0.2)), (0.5, WayPoint(0.09, 0.36, pi/2, 0.2))]
follow_plan(robot_q_0, traj, test_P_1, test_P_2, test_P_3)
plt.show()
# # Tuning new PIDs
# +
import csv
reader = csv.reader(open("./pid_data.csv", 'r'))
setpoints = []
speeds = []
for row in reader:
setpoints.append(float(row[0]))
speeds.append(float(row[1]))
t = np.arange(0, len(setpoints)/100, 0.01)
plt.plot(t, setpoints, label="setpoint")
plt.plot(t, speeds, label="actual speed")
plt.xlabel("time (s)")
plt.ylabel("speed (m/s)")
plt.title("PID Performance")
plt.show()
# -
|
docs/.ipynb_checkpoints/Time Optimal Smartmouse Controls-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # copy — Shallow and deep copy operations
#
# https://docs.python.org/3/library/copy.html
import copy
a = dict(a=10, b=20)
tor = dict(name='tor', age=10, attrs=a)
loki = copy.copy(tor)
loki is tor                    # False: copy.copy builds a new outer dict
loki['attrs'] is tor['attrs']  # True: a shallow copy shares nested objects
odin = copy.deepcopy(tor)
odin is tor                    # False
odin['attrs'] is tor['attrs']  # False: deepcopy recursively copies nested objects
class A:
array = [1, 2, 3]
def __deepcopy__(self, memo):
print('deep copy')
print(memo)
new = copy.copy(self)
new.array = copy.deepcopy(self.array)
return new
a = A()
b = copy.deepcopy(a)
# +
# copy.deepcopy?
# -
b.array is a.array  # False: __deepcopy__ deep-copied the list attribute
b is a              # False
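# The `memo` dict passed to `__deepcopy__` exists to handle objects that (directly or
# indirectly) contain themselves. A small sketch of why it matters:

```python
import copy

lst = [1, 2]
lst.append(lst)            # lst now references itself
dup = copy.deepcopy(lst)   # the memo dict prevents infinite recursion
dup is not lst             # True: a new list was created
dup[2] is dup              # True: the self-reference is preserved in the copy
```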
|
DataTypes/copy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fuzzy c-Means Clustering for Persistence Diagrams and Riemannian Manifolds
# ## _ICLR 2021 Geometric and Topological Learning Workshop_
# ## Introduction and Motivation
#
# This submission is an implementation of Fuzzy c-Means clustering for persistence diagrams and Riemannian manifolds using Giotto-TDA and Geomstats, respectively. It is widely accepted that many real-world problems are in fact fuzzy (Campello, 2007); that is, datapoints can have partial membership to several clusters, rather than a single 'hard' labelling to only one cluster. We provide the ability to compute fuzzy clusters on persistence diagram space and Riemannian manifolds using work we presented as a poster at the NeurIPS 2020 TDA and Beyond workshop (Davies et al., 2020). The goal of this submission is to extend the fields of computational geometry and topology by making an algorithm that is widely used in traditional ML (fuzzy clustering) available to practitioners in the computational geometry and topology community.
#
# To Giotto-TDA we add both hard and fuzzy clustering on persistence diagram space to the library, whereas for Geomstats we extend the hard clustering already present within the library to the fuzzy case. In the algorithm section we outline how the algorithm works, and highlight the convergence result from our previous work that gives the theoretical justification for extending fuzzy clustering to persistence diagram space and Riemannian manifolds. In the implementation section we discuss how we added fuzzy clustering to Giotto-TDA and GeomStats, highlighting key helper functions. In the experiments section we demonstrate fuzzy clustering on two simple datasets. We end by discussing the limitations of fuzzy clustering in persistence diagram space.
# ## Analysis - Algorithm and Implementation
#
# ### Algorithm
#
# We alternately update the cluster centres and the membership values $r_{jk}$, which denote the degree to which datapoint $j$ is associated with cluster $k$. We use the standard formula for Euclidean FCM to update $r_{jk}$, which is given by
#
# $r_{jk} = \left( \sum_{l=1}^c \frac{ d(M_k, D_j) }{d(M_l, D_j)} \right)^{-1}$,
#
# where $c$ is the number of cluster centres (chosen as a hyperparameter), $M_k$ are the cluster centres, and $D_j$ are the data points (Bezdek, 1980). The distance $d$ is determined by the space we are in: for persistence diagrams we use the Wasserstein distance, and for Geomstats we use the distance on the space our points lie in.
#
# To update the cluster centres we use the weighted Fréchet mean, which provides an analogue of barycentres for metric spaces. It is given by
#
# $M_k = \mathrm{arg}\min_{\hat{D} } \sum_{j=1}^n r_{jk}^2 W_2(\hat{D}, D_j)^2, \text{ for } k=1,\dots,c$.
#
# For Giotto-TDA we provide an implementation of a modified version of Turner et al.'s (2012) algorithm for computing the non-weighted Fréchet mean in persistence diagram space. In Geomstats we use the function already present to compute the weighted Fréchet mean.
#
# In Davies et al. (2020) we prove that every convergent subsequence of iterates of this FCM algorithm converges to a local minimum or saddle point of the cost function
# $J(R, M) = \sum_{j=1}^n \sum_{k=1}^c r_{jk}^2 d(M_k, D_j)^2.$
# As that proof relies only on the definition of the weighted Fréchet mean it trivially extends to Riemannian manifolds, meaning we have the same convergence guarantees.
#
#
#
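# The two update steps above can be sketched with NumPy for the Euclidean case. This is a
# toy illustration of the formulas, not the library code; `d` is a hypothetical
# (n, c) matrix of distances $d(M_k, D_j)$:

```python
import numpy as np

def membership_update(d, eps=1e-8):
    # r_jk = ( sum_l d(M_k, D_j) / d(M_l, D_j) )^{-1}, guarding against d = 0
    d = np.maximum(d, eps)
    return 1.0 / (d * np.sum(1.0 / d, axis=1, keepdims=True))

def fcm_cost(r, d):
    # J(R, M) = sum_j sum_k r_jk^2 d(M_k, D_j)^2
    return float(np.sum(r**2 * d**2))

r = membership_update(np.array([[1.0, 3.0], [2.0, 2.0]]))
# Each row of r sums to 1; the first point gets memberships [0.75, 0.25].
```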
# ### Implementation
#
# #### Giotto-TDA
#
# We use Giotto-TDA to compute the persistence diagrams, and provide code to fuzzy cluster them. The key function we implement is ```fpd_cluster```, which accepts a list of point datasets, the number of clusters, and the persistence diagram dimension, and returns cluster centres and membership values. It also optionally takes the maximum number of iterations to perform when clustering, the metric to use on persistence diagram space, an option to run the algorithm as hard clustering rather than fuzzy, and the option to evaluate the quality of the clusters with the fuzzy Rand index (Campello, 2007), which we also provide an implementation of. Another function we include is ```calc_frechet_mean```, which computes the weighted Fréchet mean of persistence diagrams using a weighted version of Turner et al.'s (2012) algorithm, and ```pd_fuzzy```, which accepts persistence diagrams directly for clustering.
#
# We use Giotto-TDA's format for persistence diagrams throughout, including padding collections of diagrams with points with 0 birth and death values. We used SciPy's implementation of the Hungarian algorithm to efficiently compute optimal matchings.
#
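# For reference, SciPy's Hungarian-algorithm solver is `scipy.optimize.linear_sum_assignment`.
# A toy matching (the cost matrix here is made up, standing in for pairwise matching costs
# between points of two diagrams):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
total = cost[rows, cols].sum()            # 5.0 for this matrix
```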
# #### GeomStats
#
# For Geomstats we modify the [RiemannianKMeans](https://github.com/geomstats/geomstats/blob/master/geomstats/learning/kmeans.py) class by adding a ```fuzzy``` boolean to the constructor which determines whether to compute hard or fuzzy clusters. If fuzzy is selected, then we can compute the membership values (or weights), as demonstrated below.
#
# ```python
# if self.fuzzy:
# dists[np.where(dists == 0)] = 0.00001
# weights = 1 / (dists * np.sum(1 / dists, axis=1)[:, None])
# else:
# belongs = gs.argmin(dists, 1)
# ```
#
# When updating the fuzzy cluster centres, we can use Geomstats' Fréchet mean function as follows.
# ```python
# mean = FrechetMean(metric=self.metric,
# method=self.mean_method,
# max_iter=150,
# lr=self.lr,
# point_type=self.point_type,)
# mean.fit(X, weights=weights[:, i])
# self.centroids[i] = mean.estimate_
# ```
#
# Finally we update the ```predict``` function, allowing users to choose whether to select a hard label or fuzzy membership values when computing the predicted class of a point. The completed function is included in this submission as ```geomstats_fuzzycmeans.py```.
# ## Experiments - Fuzzy clustering demonstrations
# ### Giotto-TDA demo
# +
from synthetic_data import gen_data2, plot_dataset, plot_three_clusters
import giotto_fcm
# Generate synthetic dataset
synthetic_data = gen_data2(seed=0, noise=0.05, n_samples=100)
# Plot synthetic dataset
plot_dataset(synthetic_data)
# +
# Pass the datasets to the clustering
num_clusters = 3
homology_dimension = 1
r, M = giotto_fcm.fpd_cluster(synthetic_data, num_clusters, homology_dimension, max_iter=20, verbose=True)
# -
# Plot the resulting clusters
# The clusters have zero, one, or two significant off-diagonal points, corresponding to
# zero, one, or two holes in the datasets
plot_three_clusters(M)
# ### GeomStats Demo
#
# Following the [Geomstats docs](https://geomstats.github.io/notebooks/05_riemannian_kmeans.html), we cluster points on the sphere. However, we run our fuzzy clustering algorithm, which produces fuzzy membership values. These give additional information that can be exploited for datasets with an underlying fuzzy structure.
# +
# Import packages
import matplotlib.pyplot as plt
import numpy as np
import geomstats.backend as gs
import geomstats.visualization as visualization
from geomstats.geometry.hypersphere import Hypersphere
from geomstats.geometry.special_orthogonal import SpecialOrthogonal
np.random.seed(1)
gs.random.seed(1000)
# initiate datapoints on a hypersphere
sphere = Hypersphere(dim=2)
cluster = sphere.random_von_mises_fisher(kappa=20, n_samples=140)
SO3 = SpecialOrthogonal(3)
rotation1 = SO3.random_uniform()
rotation2 = SO3.random_uniform()
rotation3 = SO3.random_uniform()
cluster_1 = cluster @ rotation1
cluster_2 = cluster @ rotation2
cluster_3 = cluster @ rotation3
# run fuzzy c-means clustering
from geomstats_fuzzycmeans import RiemannianKMeans
from geomstats.geometry.hypersphere import Hypersphere
manifold = Hypersphere(dim=2)
metric = manifold.metric
data = gs.concatenate((cluster_1, cluster_2, cluster_3), axis=0)
kmeans = RiemannianKMeans(metric, 2, tol=1e-3, verbose=True, fuzzy=True)
kmeans.fit(data)
labels = kmeans.predict(data)
centroids = kmeans.centroids
# plot the results
fig = plt.figure(figsize=(15, 15))
colors = ['red', 'blue']
ax = visualization.plot(
data,
space='S2',
marker='.',
color='grey')
for i in range(2):
ax = visualization.plot(
points=data[labels == i],
ax=ax,
space='S2',
marker='.',
color=colors[i])
for i, c in enumerate(centroids):
ax = visualization.plot(
c,
ax=ax,
space='S2',
marker='*',
s=2000,
color=colors[i])
ax.set_title('FCM on Hypersphere Manifold');
ax.auto_scale_xyz([-1, 1], [-1, 1], [-1, 1])
plt.show()
# +
# An example of the fuzzy membership values.
# Rather than a single label, each point has probabilistic membership values to each cluster centre
fuzzy_labels = kmeans.predict(data, fuzzy_predictions=True)
print(fuzzy_labels[0:5])
# -
# ## Limitations and Perspectives
#
# Although we are the first to provide fuzzy clustering in persistence diagram space, a primary limitation of our approach is that one can instead embed persistence diagrams into Euclidean space, using any of the vectorisation techniques provided in Giotto-TDA, and fuzzy cluster those vectors directly. However, when we presented our work as a poster at the TDA & Beyond NeurIPS 2020 workshop, a number of people who were interested in using our work approached us, so we hope that including it as part of Giotto-TDA will make it more accessible.
#
# It is widely accepted that many real-world problems are naturally fuzzy (Campello, 2007). Therefore the ability to process datasets in an unsupervised way using persistence diagrams (with Giotto-TDA) and on Riemannian manifolds (with GeomStats) can provide additional insight into many problems practitioners in computational geometry and topology face.
# ## References
#
# <NAME>. **A convergence theorem for the fuzzy ISODATA clustering algorithms.** IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-2(1):1–8, Jan 1980.
#
# <NAME>. **A fuzzy extension of the Rand index and other related indexes for clustering and classification assessment.** Pattern Recognition Letters, 28(7):833–841, 2007.
#
# <NAME>., <NAME>., <NAME>., <NAME>. **Fuzzy c-means clustering for persistence diagrams.** arXiv:2006.02796, TDA and Beyond workshop at NeurIPS 2020.
#
# <NAME>., <NAME>., <NAME>., and <NAME>. **Fréchet means for distributions of persistence diagrams.** Discrete & Computational Geometry, 52:44–70, 2012.
|
tomogwen__Fuzzy-C-Means-Clustering-for-Persistence-Diagrams-and-Riemannian-Manifolds/competition.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Make NumPy available:
import numpy as np
# ## Exercise 07.1 (indexing and timing)
#
# Create two very long NumPy arrays `x` and `y` and sum the arrays using:
#
# 1. The NumPy addition syntax, `z = x + y`; and
# 2. A `for` loop that computes the sum entry-by-entry
#
# Compare the time required for the two approaches for vectors of different lengths (use a very long vector for
# the timing). The values of the array entries are not important for this test. Use `%time` to report the time.
#
# *Hint:* To loop over an array using indices, try a construction like:
x = np.ones(10)
y = np.ones(len(x))
for i in range(len(x)):
print(x[i]*y[i])
# #### (1) Add two vectors using built-in addition operator:
# + deletable=false nbgrader={"cell_type": "code", "checksum": "4b3a6fbbfcbe89681e9e2f04cab73d67", "grade": false, "grade_id": "cell-2c856e54f7c3340e", "locked": false, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -
# #### (2) Add two vectors using own implementation:
# + deletable=false nbgrader={"cell_type": "code", "checksum": "f5195f8b039c21a10b079fa7129adf0a", "grade": false, "grade_id": "cell-f34614f9f0068cc4", "locked": false, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -
# ### Optional extension: just-in-time (JIT) compilation
#
# You will see a large difference in the time required between your NumPy and 'plain' Python implementations. This is due to Python being an *interpreted* language, as opposed to a *compiled* language. A way to speed up plain Python implementations is to convert the interpreted Python code into compiled code. A tool for doing this is [Numba](https://numba.pydata.org/).
#
# Below is an example using Numba and JIT to accelerate a computation:
# +
# !pip -q install numba
import numba
import math
def compute_sine_native(x):
z = np.zeros(len(x))
for i in range(len(z)):
z[i] = math.sin(x[i])
return z
@numba.jit
def compute_sine_jit(x):
z = np.zeros(len(x))
for i in range(len(z)):
z[i] = math.sin(x[i])
return z
x = np.ones(10000000)
# %time z = compute_sine_native(x)
compute_sine_jit(x)
# %time z = compute_sine_jit(x)
# -
# **Task:** Test if Numba can be used to accelerate your implementation that uses indexing to sum two arrays, and by how much.
# ## Exercise 07.2 (member functions and slicing)
#
# Anonymised scores (out of 60) for an examination are stored in a NumPy array. Write:
#
# 1. A function that takes a NumPy array of the raw scores and returns the scores as percentages, sorted from
# lowest to highest (try using `scores.sort()`, where `scores` is a NumPy array holding the scores).
# 1. A function that returns the maximum, minimum and mean of the raw scores as a dictionary with the
# keys '`min`', '`max`' and '`mean`'. Use the NumPy array functions `min()`, `max()` and `mean()` to do the
# computation, e.g. `max = scores.max()`.
#
# Design your function for the min, max and mean to optionally exclude the highest and lowest scores from the
# computation of the min, max and mean.
#
# *Hint:* sort the array of scores and use array slicing to exclude
# the first and the last entries.
#
# Use the scores
# ```python
# scores = np.array([58.0, 35.0, 24.0, 42, 7.8])
# ```
# to test your functions.
# + deletable=false nbgrader={"cell_type": "code", "checksum": "0d3f6132335348940f562c8a70c520e9", "grade": false, "grade_id": "cell-169ebae60810c6be", "locked": false, "schema_version": 3, "solution": true}
def to_percentage_and_sort(scores):
# YOUR CODE HERE
raise NotImplementedError()
def statistics(scores, exclude=False):
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "49817c794fad305adbe95251448b7bf2", "grade": true, "grade_id": "cell-af0b6fd8a3cadb1a", "locked": true, "points": 0, "schema_version": 3, "solution": false}
scores = np.array([58.0, 35.0, 24.0, 42, 7.8])
assert np.isclose(to_percentage_and_sort(scores), [ 13.0, 40.0, 58.33333333, 70.0, 96.66666667]).all()
s0 = statistics(scores)
assert round(s0["min"] - 7.8, 10) == 0.0
assert round(s0["mean"] - 33.36, 10) == 0.0
assert round(s0["max"] - 58.0, 10) == 0.0
s1 = statistics(scores, True)
assert round(s1["min"] - 24.0, 10) == 0.0
assert round(s1["mean"] - 33.666666666666666667, 10) == 0.0
assert round(s1["max"] - 42.0, 10) == 0.0
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "59fe9ff8629e2b641f8c654b3c0c36ee", "grade": false, "grade_id": "cell-27beb42d6b15acad", "locked": true, "schema_version": 3, "solution": false}
# ## Exercise 07.3 (slicing)
#
# For the two-dimensional array
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "72617327e9686e23fc46fd8b050dfddc", "grade": false, "grade_id": "cell-73a8893e46856789", "locked": true, "schema_version": 3, "solution": false}
A = np.array([[4.0, 7.0, -2.43, 67.1],
[-4.0, 64.0, 54.7, -3.33],
[2.43, 23.2, 3.64, 4.11],
[1.2, 2.5, -113.2, 323.22]])
print(A)
# -
# use array slicing for the below operations, printing the results to the screen to check. Try to use array slicing such that your code would still work if the dimensions of `A` were enlarged.
#
#
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "6d041f0cf31860711599e3ee7b3491a3", "grade": false, "grade_id": "cell-f6744c8a86e68cac", "locked": true, "schema_version": 3, "solution": false}
# #### 1. Extract the third column as a 1D array
# + deletable=false nbgrader={"cell_type": "code", "checksum": "b64ade07370f5fee0946cac30ec0e2ad", "grade": false, "grade_id": "cell-dfee6b0ed9343682", "locked": false, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -
# #### 2. Extract the first two rows as a 2D sub-array
# + deletable=false nbgrader={"cell_type": "code", "checksum": "1501ee7c2e53e803ab2ee91f060600f6", "grade": true, "grade_id": "cell-7bf2f9a8c67029f8", "locked": false, "points": 0, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -
# #### 3. Extract the bottom-right $2 \times 2$ block as a 2D sub-array
# + deletable=false nbgrader={"cell_type": "code", "checksum": "f71dc316fa8df4bfd0d201a822e3649b", "grade": false, "grade_id": "cell-5206fea47d246222", "locked": false, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -
# #### 4. Sum the last column
# + deletable=false nbgrader={"cell_type": "code", "checksum": "e838dfdea65683c219df150d9e896d98", "grade": false, "grade_id": "cell-34f74988e47b9f87", "locked": false, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -
# #### Compute transpose
#
# Compute the transpose of `A` (search online to find the function/syntax to do this).
# + deletable=false nbgrader={"cell_type": "code", "checksum": "b9cbe5f84f05129cbc0152ce8c36f470", "grade": false, "grade_id": "cell-dd2999d2da8070f6", "locked": false, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "f68e05e6631a183befcd0eea778623a2", "grade": false, "grade_id": "cell-f1255669c8aa78d2", "locked": true, "schema_version": 3, "solution": false}
# ## Exercise 07.4 (optional extension)
#
# In a previous exercise you implemented the bisection algorithm to find approximate roots of a mathematical function. Use the SciPy bisection function `optimize.bisect` (http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.bisect.html) to find roots of the mathematical function that was used in the previous exercise. Compare the results computed by SciPy and your program from the earlier exercise, and compare the computational time (using `%time`).
# + deletable=false nbgrader={"cell_type": "code", "checksum": "ed949860804630c700d7f9885647c14d", "grade": false, "grade_id": "cell-b6580accbcb3c2da", "locked": false, "schema_version": 3, "solution": true}
from scipy import optimize
# YOUR CODE HERE
raise NotImplementedError()
|
Assignment/07 Exercises.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Figure 1. Model Schematic
#
# Summarize tree and frequencies for two timepoints from simulated data for Figure 1.
#
# Note: [this notebook is executed by Snakemake](https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#jupyter-notebook-integration) and expects to have a global `snakemake` variable that provides input and output files and optionally params.
# +
# Define inputs.
tree_for_timepoint_t = snakemake.input.tree_for_timepoint_t
tree_for_timepoint_u = snakemake.input.tree_for_timepoint_u
frequencies_for_timepoint_t = snakemake.input.frequencies_for_timepoint_t
frequencies_for_timepoint_u = snakemake.input.frequencies_for_timepoint_u
# Define outputs.
distance_model_figure = snakemake.output.figure
# -
"""
# Define inputs.
tree_for_timepoint_t = "../results/auspice/flu_simulated_simulated_sample_3_2029-10-01_tree.json"
tree_for_timepoint_u = "../results/auspice/flu_simulated_simulated_sample_3_2030-10-01_tree.json"
frequencies_for_timepoint_t = "../results/auspice/flu_simulated_simulated_sample_3_2029-10-01_tip-frequencies.json"
frequencies_for_timepoint_u = "../results/auspice/flu_simulated_simulated_sample_3_2030-10-01_tip-frequencies.json"
# Define outputs.
distance_model_figure = "../manuscript/figures/distance-based-fitness-model.pdf"
"""
# +
from augur.titer_model import TiterCollection
from augur.utils import json_to_tree
import datetime
import json
import matplotlib as mpl
import matplotlib.dates as mdates
from matplotlib import gridspec
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN
from treetime.utils import numeric_date
# %matplotlib inline
# -
np.random.seed(314159)
sns.set_style("ticks")
# +
# Display figures at a reasonable default size.
mpl.rcParams['figure.figsize'] = (6, 4)
# Disable top and right spines.
mpl.rcParams['axes.spines.top'] = False
mpl.rcParams['axes.spines.right'] = False
# Display and save figures at higher resolution for presentations and manuscripts.
mpl.rcParams['savefig.dpi'] = 300
mpl.rcParams['figure.dpi'] = 100
# Display text at sizes large enough for presentations and manuscripts.
mpl.rcParams['font.weight'] = "normal"
mpl.rcParams['axes.labelweight'] = "normal"
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 14
mpl.rcParams['legend.fontsize'] = 12
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
mpl.rc('text', usetex=False)
# -
tip_size = 10
end_date = 2004.3
def float_to_datestring(time):
"""Convert a floating point date from TreeTime `numeric_date` to a date string
"""
# Extract the year and remainder from the floating point date.
year = int(time)
remainder = time - year
# Calculate the day of the year (out of 365 + 0.25 for leap years).
tm_yday = int(remainder * 365.25)
if tm_yday == 0:
tm_yday = 1
# Construct a date object from the year and day of the year.
date = datetime.datetime.strptime("%s-%s" % (year, tm_yday), "%Y-%j")
# Build the date string with zero-padded months and days.
date_string = "%s-%.2i-%.2i" % (date.year, date.month, date.day)
return date_string
def plot_tree_by_datetime(tree, color_by_trait=None, size_by_trait=None, initial_branch_width=5, tip_size=10,
start_date=None, end_date=None, include_color_bar=False, ax=None, colorbar_ax=None,
earliest_node_date=None, default_color="#cccccc", default_color_branch="#999999", override_y_values=None,
cmap=None, default_size=0.001, plot_projection_from_date=None, plot_projection_to_date=None,
projection_attr="projected_frequency", projection_line_threshold=1e-2, size_scaler=1e3):
"""Plot a BioPython Phylo tree in the BALTIC-style.
"""
# Plot H3N2 tree in BALTIC style from Bio.Phylo tree.
if override_y_values is None:
override_y_values = {}
yvalues = [node.yvalue for node in tree.find_clades()]
y_span = max(yvalues)
y_unit = y_span / float(len(yvalues))
# Setup colors.
if color_by_trait:
trait_name = color_by_trait
if cmap is None:
traits = [k.attr[trait_name] for k in tree.find_clades() if trait_name in k.attr]
norm = mpl.colors.Normalize(min(traits), max(traits))
cmap = mpl.cm.viridis
#
# Setup the figure grid.
#
if ax is None:
if include_color_bar:
fig = plt.figure(figsize=(8, 6), facecolor='w')
gs = gridspec.GridSpec(2, 1, height_ratios=[14, 1], width_ratios=[1], hspace=0.1, wspace=0.1)
ax = fig.add_subplot(gs[0])
colorbar_ax = fig.add_subplot(gs[1])
else:
fig = plt.figure(figsize=(8, 4), facecolor='w')
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0])
    L = sum(1 for k in tree.find_clades() if k.is_terminal())  # number of tips
# Setup arrays for tip and internal node coordinates.
tip_circles_x = []
tip_circles_y = []
tip_circles_color = []
tip_circle_sizes = []
node_circles_x = []
node_circles_y = []
node_circles_color = []
node_line_widths = []
node_line_segments = []
node_line_colors = []
branch_line_segments = []
branch_line_widths = []
branch_line_colors = []
branch_line_labels = []
projection_line_segments = []
for k in tree.find_clades(): ## iterate over objects in tree
x=k.attr["collection_date_ordinal"] ## or from x position determined earlier
if earliest_node_date and x < earliest_node_date:
continue
if k.name in override_y_values:
y = override_y_values[k.name]
else:
y = y_span - k.yvalue ## get y position from .drawTree that was run earlier, but could be anything else
if k.parent is None:
xp = None
else:
xp=k.parent.attr["collection_date_ordinal"] ## get x position of current object's parent
        # Matplotlib won't plot None coordinates (e.g. the root has no parent),
        # so fall back to the node's own x position.
        if xp is None:
            xp = x
c = default_color
if color_by_trait and trait_name in k.attr:
if isinstance(cmap, dict):
c = cmap[k.attr[trait_name]]
else:
c = cmap(norm(k.attr[trait_name]))
branchWidth=initial_branch_width
if k.is_terminal(): ## if leaf...
if size_by_trait is not None and size_by_trait in k.attr:
s = (size_scaler * np.sqrt(k.attr.get(size_by_trait, default_size)))
else:
s = tip_size ## tip size can be fixed
tip_circle_sizes.append(s)
tip_circles_x.append(x)
tip_circles_y.append(y)
tip_circles_color.append(c)
if plot_projection_to_date is not None and plot_projection_from_date is not None:
if k.attr.get(projection_attr, 0.0) > projection_line_threshold:
future_s = (size_scaler * np.sqrt(k.attr.get(projection_attr)))
future_x = plot_projection_to_date + np.random.randint(-60, 0)
future_y = y
future_c = c
tip_circle_sizes.append(future_s)
tip_circles_x.append(future_x)
tip_circles_y.append(future_y)
tip_circles_color.append(future_c)
projection_line_segments.append([(x + 1, y), (future_x, y)])
else: ## if node...
k_leaves = [child
for child in k.find_clades()
if child.is_terminal()]
# Scale branch widths by the number of tips.
branchWidth += initial_branch_width * len(k_leaves) / float(L)
if len(k.clades)==1:
node_circles_x.append(x)
node_circles_y.append(y)
node_circles_color.append(c)
ax.plot([x,x],[y_span - k.clades[-1].yvalue, y_span - k.clades[0].yvalue], lw=branchWidth, color=default_color_branch, ls='-', zorder=9, solid_capstyle='round')
branch_line_segments.append([(xp, y), (x, y)])
branch_line_widths.append(branchWidth)
branch_line_colors.append(default_color_branch)
branch_lc = LineCollection(branch_line_segments, zorder=9)
branch_lc.set_color(branch_line_colors)
branch_lc.set_linewidth(branch_line_widths)
branch_lc.set_label(branch_line_labels)
branch_lc.set_linestyle("-")
ax.add_collection(branch_lc)
if len(projection_line_segments) > 0:
projection_lc = LineCollection(projection_line_segments, zorder=-10)
projection_lc.set_color("#cccccc")
projection_lc.set_linewidth(1)
projection_lc.set_linestyle("--")
projection_lc.set_alpha(0.5)
ax.add_collection(projection_lc)
# Add circles for tips and internal nodes.
tip_circle_sizes = np.array(tip_circle_sizes)
ax.scatter(tip_circles_x, tip_circles_y, s=tip_circle_sizes, facecolor=tip_circles_color, edgecolors='#000000', linewidths=0.5, alpha=0.75, zorder=11) ## plot circle for every tip
#ax.scatter(tip_circles_x, tip_circles_y, s=tip_circle_sizes*1.75, facecolor="#000000", edgecolor='none', zorder=10) ## plot black circle underneath
ax.scatter(node_circles_x, node_circles_y, facecolor=node_circles_color, s=50, edgecolor='none', zorder=10, lw=2, marker='|') ## mark every node in the tree to highlight that it's a multitype tree
#ax.set_ylim(-10, y_span - 300)
ax.spines['top'].set_visible(False) ## no axes
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.tick_params(axis='y',size=0)
ax.set_yticklabels([])
if start_date:
ax.set_xlim(left=start_date)
if end_date:
ax.set_xlim(right=end_date)
if include_color_bar:
cb1 = mpl.colorbar.ColorbarBase(
colorbar_ax,
cmap=cmap,
norm=norm,
orientation='horizontal'
)
cb1.set_label(color_by_trait)
return ax, colorbar_ax
# ## Load trees
#
# Load an auspice tree for both timepoint t and timepoint u. The first tree needs to be annotated with the projected frequency at time u and weighted distance to the future.
#
# Both trees need to be annotated with amino acid sequences for the tips as an `aa_sequence` key in each tip's `attr` attribute.
# +
with open(tree_for_timepoint_t, "r") as fh:
tree_json_for_t = json.load(fh)
tree_for_t = json_to_tree(tree_json_for_t)
# -
latest_sample_date_in_t = max([node.attr["num_date"] for node in tree_for_t.find_clades(terminal=True)])
latest_sample_date_in_t
earliest_date_to_plot = latest_sample_date_in_t - 2.0
with open(tree_for_timepoint_u, "r") as fh:
tree_json_for_u = json.load(fh)
tree_for_u = json_to_tree(tree_json_for_u)
tree_for_u
# Annotate ordinal collection dates from floating point dates on both trees.
for node in tree_for_t.find_clades():
node.attr["collection_date_ordinal"] = pd.to_datetime(float_to_datestring(node.attr["num_date"])).toordinal()
for node in tree_for_u.find_clades():
node.attr["collection_date_ordinal"] = pd.to_datetime(float_to_datestring(node.attr["num_date"])).toordinal()
# ## Load frequencies
#
# Load tip frequencies from auspice. These should include a `projected_pivot` key and one or more pivots after that timepoint for each tip.
with open(frequencies_for_timepoint_t, "r") as fh:
frequencies_for_t = json.load(fh)
with open(frequencies_for_timepoint_u, "r") as fh:
frequencies_for_u = json.load(fh)
pivots = frequencies_for_t.pop("pivots")
projection_pivot = frequencies_for_t.pop("projection_pivot")
projection_pivot_index_for_t = pivots.index(projection_pivot)
frequency_records_for_t = []
for sample, sample_frequencies in frequencies_for_t.items():
for pivot, sample_frequency in zip(pivots, sample_frequencies["frequencies"]):
frequency_records_for_t.append({
"strain": sample,
"timepoint": float_to_datestring(pivot),
"pivot": pivot,
"frequency": sample_frequency
})
frequency_df_for_t = pd.DataFrame(frequency_records_for_t)
frequency_df_for_t["timepoint"] = pd.to_datetime(frequency_df_for_t["timepoint"])
# Repeat the above analysis to get observed frequencies at timepoint u. We ignore all projected frequencies from this later timepoint, however.
pivots_for_u = frequencies_for_u.pop("pivots")
projection_pivot_for_u = frequencies_for_u.pop("projection_pivot")
projection_pivot_index = pivots_for_u.index(projection_pivot_for_u)
pivots_for_u[:projection_pivot_index + 1]
frequency_records_for_u = []
for sample, sample_frequencies in frequencies_for_u.items():
for pivot, sample_frequency in zip(pivots_for_u, sample_frequencies["frequencies"]):
# Ignore projected frequencies from timepoint u.
if pivot <= projection_pivot_for_u:
frequency_records_for_u.append({
"strain": sample,
"timepoint": float_to_datestring(pivot),
"pivot": pivot,
"frequency": sample_frequency
})
frequency_df_for_u = pd.DataFrame(frequency_records_for_u)
frequency_df_for_u["timepoint"] = pd.to_datetime(frequency_df_for_u["timepoint"])
frequency_df_for_u.head()
# Annotate trees with frequencies at corresponding timepoints. For the tree at timepoint t, annotate both current and projected frequencies. For the tree at timepoint u, annotate the current frequencies.
pivots[projection_pivot_index_for_t]
projection_pivot_index_for_t
max_frequency = 0.5
for tip in tree_for_t.find_clades(terminal=True):
tip.attr["frequency_at_t"] = min(frequencies_for_t[tip.name]["frequencies"][projection_pivot_index_for_t], max_frequency)
tip.attr["projected_frequency_at_u"] = min(frequencies_for_t[tip.name]["frequencies"][-1], max_frequency)
projection_pivot
for tip in tree_for_u.find_clades(terminal=True):
if tip.attr["num_date"] > projection_pivot:
tip.attr["frequency_at_u"] = min(frequencies_for_u[tip.name]["frequencies"][projection_pivot_index], max_frequency)
else:
tip.attr["frequency_at_u"] = 0.0
# +
tips_with_nonzero_frequencies = set()
for tip in tree_for_t.find_clades(terminal=True):
if tip.attr["frequency_at_t"] > 0:
tips_with_nonzero_frequencies.add(tip.name)
for tip in tree_for_u.find_clades(terminal=True):
if tip.attr["frequency_at_u"] > 0:
tips_with_nonzero_frequencies.add(tip.name)
# -
len(tips_with_nonzero_frequencies)
# ## t-SNE to cluster sequences
#
# Cluster sequences for tips in the latest tree, which should be a superset of tips in the earliest tree. We only consider tips with a projected frequency greater than zero from timepoint t to u, or tips collected after timepoint t. Tips are embedded with t-SNE (two dimensions for DBSCAN clustering, one dimension for a rank ordering of tips). This is a simple way of identifying sequences that are "close" to each other in a low-dimensional space, allowing comparison of tips within and between timepoints.
projected_frequency_by_sample_from_t = {
node.name: node.attr.get("projected_frequency", 0.0)
for node in tree_for_t.find_clades(terminal=True)
}
nodes = [
node for node in tree_for_u.find_clades(terminal=True)
if node.attr["num_date"] > earliest_date_to_plot
]
total_nodes = len(nodes)
total_nodes
distances = np.zeros((total_nodes, total_nodes))
for i, node_a in enumerate(nodes):
node_a_array = np.frombuffer(node_a.attr["aa_sequence"].encode(), 'S1')
for j, node_b in enumerate(nodes):
if node_a.name == node_b.name:
distance = 0.0
elif distances[j, i] > 0:
distance = distances[j, i]
else:
node_b_array = np.frombuffer(node_b.attr["aa_sequence"].encode(), 'S1')
distance = (node_a_array != node_b_array).sum()
distances[i, j] = distance
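# The double loop above fills a symmetric pairwise Hamming distance matrix, reusing `distances[j, i]` to avoid recomputing each pair. An equivalent vectorized form (a sketch on toy sequences, assuming SciPy is available) is:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy amino-acid sequences standing in for node.attr["aa_sequence"].
seqs = ["MKTI", "MKSI", "MRTI"]
arr = np.array([[ord(c) for c in s] for s in seqs])
# The "hamming" metric returns the *fraction* of mismatched positions,
# so scale by sequence length to recover mismatch counts.
dist = squareform(pdist(arr, metric="hamming")) * arr.shape[1]
print(dist[0, 1], dist[1, 2])  # 1.0 2.0
```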
sns.heatmap(
distances,
cmap="cividis",
robust=True,
square=True,
xticklabels=False,
yticklabels=False
)
X_embedded = TSNE(n_components=2, learning_rate=400, metric="precomputed", random_state=314).fit_transform(distances)
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.plot(X_embedded[:, 0], X_embedded[:, 1], ".", alpha=0.25)
clustering = DBSCAN(eps=10, min_samples=20).fit(X_embedded)
df = pd.DataFrame(X_embedded, columns=["dimension 0", "dimension 1"])
df["label"] = clustering.labels_
label_normalizer = mpl.colors.Normalize(df["label"].min(), df["label"].max())
cmap = list(reversed(sns.color_palette("Paired", n_colors=len(df["label"].unique()))))
df["color"] = df["label"].apply(lambda value: cmap[value])
cmap_for_tree = dict(df.loc[:, ["label", "color"]].values)
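# Note that DBSCAN labels noise points as -1, so `cmap[value]` above maps noise to the last palette color via negative indexing. A minimal sketch of DBSCAN's labeling behavior on toy points:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two tight groups of points plus one isolated outlier.
pts = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [50, 50]])
labels = DBSCAN(eps=2, min_samples=2).fit(pts).labels_
print(labels)  # two clusters (0 and 1) plus -1 for the noise point
```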
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.scatter(
df["dimension 0"],
df["dimension 1"],
alpha=0.25,
c=df["color"]
)
ax.set_xlabel("dimension 0")
ax.set_ylabel("dimension 1")
plt.tight_layout()
X_embedded_1d = TSNE(n_components=1, learning_rate=500, metric="precomputed", random_state=314).fit_transform(distances)
X_embedded_1d.shape
# Annotate nodes in both trees with ranks from t-SNE.
tree_t_nodes_by_name = {node.name: node for node in tree_for_t.find_clades(terminal=True)}
rank_records = []
for i, node in enumerate(nodes):
node.attr["rank"] = X_embedded_1d[i, 0]
node.attr["label"] = clustering.labels_[i]
if node.name in tree_t_nodes_by_name:
tree_t_nodes_by_name[node.name].attr["rank"] = X_embedded_1d[i, 0]
tree_t_nodes_by_name[node.name].attr["label"] = clustering.labels_[i]
rank_records.append({
"strain": node.name,
"rank": node.attr["rank"],
"label": node.attr["label"]
})
rank_df = pd.DataFrame(rank_records)
rank_normalizer = mpl.colors.Normalize(rank_df["rank"].min(), rank_df["rank"].max())
rank_df["color"] = rank_df["label"].apply(lambda value: cmap[value])
fig, ax = plt.subplots(1, 1, figsize=(8, 0.5))
ax.scatter(X_embedded_1d[:, 0], np.zeros_like(X_embedded_1d[:, 0]), marker=".", alpha=0.04, c=rank_df["color"].values.tolist())
ax.set_ylim(-0.001, 0.001)
fig, ax = plt.subplots(1, 1, figsize=(8, 1))
ax.scatter(X_embedded_1d[:, 0], rank_df["label"], marker=".", alpha=0.02, c=rank_df["color"].values.tolist())
#ax.set_ylim(-0.001, 0.001)
# ## Annotate t-SNE-based cluster information for both sets of frequencies.
rank_frequency_df_for_t = frequency_df_for_t.merge(
rank_df,
on="strain"
).sort_values(["label", "strain", "timepoint"])
rank_frequency_df_for_t["ordinal_timepoint"] = rank_frequency_df_for_t["timepoint"].apply(lambda value: value.toordinal())
rank_frequency_df_for_u = frequency_df_for_u.merge(
rank_df,
on="strain"
).sort_values(["label", "strain", "timepoint"])
rank_frequency_df_for_u["ordinal_timepoint"] = rank_frequency_df_for_u["timepoint"].apply(lambda value: value.toordinal())
# +
start_date = pd.to_datetime("2028-10-01").toordinal()
end_date = pd.to_datetime("2030-11-15").toordinal()
frequency_end_date = pd.to_datetime("2030-10-01").toordinal()
timepoint_t = pd.to_datetime(float_to_datestring(projection_pivot)).toordinal()
timepoint_u = pd.to_datetime(float_to_datestring(projection_pivot_for_u)).toordinal()
# -
frequency_steps = [0, 0.25, 0.5, 0.75, 1.0]
# ## Plot tree
yvalues = [node.yvalue for node in tree_for_t.find_clades(terminal=True)]
y_span = max(yvalues)
# +
fig = plt.figure(figsize=(12, 8), facecolor='w')
gs = gridspec.GridSpec(2, 2, height_ratios=[1, 0.5], width_ratios=[1, 1], hspace=0.25, wspace=0.1)
# Tree plot for timepoint t
tree_ax = fig.add_subplot(gs[0])
tree_ax, colorbar_ax = plot_tree_by_datetime(
tree_for_t,
color_by_trait="label",
size_by_trait="frequency_at_t",
ax=tree_ax,
start_date=start_date,
end_date=end_date,
tip_size=tip_size,
initial_branch_width=1,
plot_projection_from_date=timepoint_t,
plot_projection_to_date=timepoint_u,
projection_attr="projected_frequency_at_u",
cmap=cmap_for_tree
)
tree_ax.set_ylim(4000, 6700)
#tree_ax.set_ylim(400, 750)
years = mdates.YearLocator()
years_fmt = mdates.DateFormatter("%y")
months = mdates.MonthLocator()
tree_ax.xaxis.set_major_locator(years)
tree_ax.xaxis.set_major_formatter(years_fmt)
tree_ax.xaxis.set_minor_locator(months)
tree_ax.format_xdata = mdates.DateFormatter("%b %y")
tree_ax.text(0.46, 1.0, r"$\mathbf{x}(t)$",
horizontalalignment='center',
verticalalignment='center',
transform=tree_ax.transAxes,
fontdict={"fontsize": 14})
tree_ax.text(0.94, 1.0, r"$\mathbf{\hat{x}}(u)$",
horizontalalignment='center',
verticalalignment='center',
transform=tree_ax.transAxes,
fontdict={"fontsize": 14})
tree_ax.axvline(x=timepoint_t, ymax=0.96, color="#999999", linestyle="--", alpha=0.5)
tree_ax.axvline(x=timepoint_u, ymax=0.96, color="#999999", linestyle="--", alpha=0.5)
# Frequency plot for timepoint t
frequency_ax = fig.add_subplot(gs[2])
baseline = np.zeros_like(pivots)
for (label, strain), strain_df in rank_frequency_df_for_t.groupby(["label", "strain"]):
frequency_ax.fill_between(
strain_df["ordinal_timepoint"].values,
baseline, baseline + strain_df["frequency"].values,
color=strain_df["color"].unique()[0]
)
baseline = baseline + strain_df["frequency"].values
frequency_ax.axvline(x=timepoint_t, color="#999999", linestyle="--")
frequency_ax.axvline(x=timepoint_u, color="#999999", linestyle="--")
frequency_ax.text(
0.72,
0.995,
"Forecast",
horizontalalignment="center",
verticalalignment="center",
transform=frequency_ax.transAxes,
fontdict={"fontsize": 12}
)
frequency_ax.set_yticks(frequency_steps)
frequency_ax.set_yticklabels(['{:3.0f}%'.format(x*100) for x in frequency_steps])
frequency_ax.set_ylabel("Frequency")
frequency_ax.set_xlabel("Date")
frequency_ax.set_xlim(start_date, end_date)
frequency_ax.set_ylim(bottom=0.0)
frequency_ax.xaxis.set_major_locator(years)
frequency_ax.xaxis.set_major_formatter(years_fmt)
frequency_ax.xaxis.set_minor_locator(months)
frequency_ax.format_xdata = mdates.DateFormatter("%b %y")
# Tree plot for timepoint u
tree_u_ax = fig.add_subplot(gs[1])
tree_u_ax, colorbar_u_ax = plot_tree_by_datetime(
tree_for_u,
color_by_trait="label",
size_by_trait="frequency_at_u",
ax=tree_u_ax,
start_date=start_date,
end_date=end_date,
tip_size=tip_size,
initial_branch_width=1,
cmap=cmap_for_tree
)
tree_u_ax.set_ylim(4100, 6700)
#tree_u_ax.set_ylim(400, 750)
tree_u_ax.xaxis.set_major_locator(years)
tree_u_ax.xaxis.set_major_formatter(years_fmt)
tree_u_ax.xaxis.set_minor_locator(months)
tree_u_ax.format_xdata = mdates.DateFormatter("%b %y")
tree_u_ax.text(0.46, 1.0, r"$\mathbf{x}(t)$",
horizontalalignment='center',
verticalalignment='center',
transform=tree_u_ax.transAxes,
fontdict={"fontsize": 14})
tree_u_ax.text(0.94, 1.0, r"$\mathbf{x}(u)$",
horizontalalignment='center',
verticalalignment='center',
transform=tree_u_ax.transAxes,
fontdict={"fontsize": 14})
tree_u_ax.axvline(x=timepoint_t, ymax=0.96, color="#999999", linestyle="--", alpha=0.5)
tree_u_ax.axvline(x=timepoint_u, ymax=0.96, color="#999999", linestyle="--", alpha=0.5)
# Frequency plot for timepoint u
frequency_u_ax = fig.add_subplot(gs[3])
baseline_u = np.zeros_like(pivots[2:])
for (label, strain), strain_df in rank_frequency_df_for_u.groupby(["label", "strain"]):
frequency_u_ax.fill_between(
strain_df["ordinal_timepoint"].values[:projection_pivot_index + 1],
baseline_u, baseline_u + strain_df["frequency"].values[:projection_pivot_index + 1],
color=strain_df["color"].unique()[0]
)
baseline_u = baseline_u + strain_df["frequency"].values[:projection_pivot_index + 1]
frequency_u_ax.axvline(x=timepoint_t, color="#999999", linestyle="--")
frequency_u_ax.axvline(x=timepoint_u, color="#999999", linestyle="--")
frequency_u_ax.text(
0.72,
0.995,
"Retrospective",
horizontalalignment="center",
verticalalignment="center",
transform=frequency_u_ax.transAxes,
fontdict={"fontsize": 12}
)
frequency_u_ax.set_yticks(frequency_steps)
frequency_u_ax.set_yticklabels(['{:3.0f}%'.format(x*100) for x in frequency_steps])
frequency_u_ax.set_ylabel("Frequency")
frequency_u_ax.set_xlabel("Date")
frequency_u_ax.set_xlim(start_date, end_date)
frequency_u_ax.set_ylim(bottom=0.0)
frequency_u_ax.xaxis.set_major_locator(years)
frequency_u_ax.xaxis.set_major_formatter(years_fmt)
frequency_u_ax.xaxis.set_minor_locator(months)
frequency_u_ax.format_xdata = mdates.DateFormatter("%b %y")
fig.autofmt_xdate(rotation=0, ha="center")
# Annotate panel labels.
panel_labels_dict = {
"weight": "bold",
"size": 14
}
plt.figtext(0.0, 0.98, "A", **panel_labels_dict)
plt.figtext(0.0, 0.36, "B", **panel_labels_dict)
plt.figtext(0.5, 0.98, "C", **panel_labels_dict)
plt.figtext(0.5, 0.36, "D", **panel_labels_dict)
gs.tight_layout(fig, h_pad=1.0)
plt.savefig(distance_model_figure)
# +
projected_frequency_records = []
projected_frequency_at_u = []
projected_colors = []
for tip in tree_for_t.find_clades(terminal=True):
if "projected_frequency_at_u" in tip.attr and tip.attr["projected_frequency_at_u"] > 1e-2:
projected_frequency_at_u.append(tip.attr["projected_frequency_at_u"])
projected_colors.append(cmap_for_tree[tip.attr["label"]])
projected_frequency_records.append({
"frequency": tip.attr["projected_frequency_at_u"],
"group": tip.attr["label"]
})
projected_frequency_df = pd.DataFrame(projected_frequency_records)
# +
observed_frequency_records = []
observed_frequency_at_u = []
observed_colors = []
for tip in tree_for_u.find_clades(terminal=True):
if "frequency_at_u" in tip.attr and tip.attr["frequency_at_u"] > 0.0:
observed_frequency_at_u.append(tip.attr["frequency_at_u"])
observed_colors.append(cmap_for_tree[tip.attr["label"]])
observed_frequency_records.append({
"frequency": tip.attr["frequency_at_u"],
"group": tip.attr["label"]
})
observed_frequency_df = pd.DataFrame(observed_frequency_records)
# -
projected_frequency_arrays = []
projected_frequency_colors = []
for group, df in projected_frequency_df.groupby("group"):
projected_frequency_arrays.append(df["frequency"].values)
projected_frequency_colors.append(cmap_for_tree[group])
projected_frequency_rank = []
projected_frequency_frequencies = []
projected_frequency_colors = []
for index, row in projected_frequency_df.groupby("group")["frequency"].sum().sort_values(ascending=False).reset_index().iterrows():
projected_frequency_rank.append(row["group"])
projected_frequency_frequencies.append(row["frequency"])
projected_frequency_colors.append(cmap_for_tree[row["group"]])
observed_frequency_arrays = []
observed_frequency_colors = []
for group, df in observed_frequency_df.groupby("group"):
observed_frequency_arrays.append(df["frequency"].values)
observed_frequency_colors.append(cmap_for_tree[group])
observed_frequency_rank = []
observed_frequency_frequencies = []
observed_frequency_colors = []
for index, row in observed_frequency_df.groupby("group")["frequency"].sum().sort_values(ascending=False).reset_index().iterrows():
if row["frequency"] > 0.05:
observed_frequency_rank.append(row["group"])
observed_frequency_frequencies.append(row["frequency"])
observed_frequency_colors.append(cmap_for_tree[row["group"]])
rank_to_index = {
7: 0,
6: 1,
8: 2,
4: 3
}
rank_normalizer = mpl.colors.Normalize(X_embedded_1d.min(), X_embedded_1d.max())
# +
size_scaler = 1e3
default_size = 0.001
projection_attr = "projected_frequency"
projection_line_threshold = 1e-2
plot_projection_to_date = timepoint_u
fig = plt.figure(figsize=(12, 8), facecolor='w')
gs = gridspec.GridSpec(2, 2, height_ratios=[1, 0.5], width_ratios=[1, 1], hspace=0.25, wspace=0.1)
# Plot for timepoint t
tip_circles_x_for_t = []
tip_circles_y_for_t = []
tip_circles_sizes_for_t = []
tip_circles_colors_for_t = []
projection_line_segments = []
t_ax = fig.add_subplot(gs[0])
for node in tree_for_t.find_clades(terminal=True):
if "rank" in node.attr:
x = node.attr["collection_date_ordinal"]
y = node.attr["rank"]
tip_circles_x_for_t.append(x)
tip_circles_y_for_t.append(y)
tip_circles_sizes_for_t.append(size_scaler * np.sqrt(node.attr.get("frequency_at_t", default_size)))
tip_circles_colors_for_t.append(mpl.cm.gist_gray(rank_normalizer(y)))
if node.attr.get(projection_attr, 0.0) > projection_line_threshold:
future_s = (size_scaler * np.sqrt(node.attr.get(projection_attr)))
future_x = plot_projection_to_date + np.random.randint(-60, 0)
future_y = y
tip_circles_sizes_for_t.append(future_s)
tip_circles_x_for_t.append(future_x)
tip_circles_y_for_t.append(future_y)
tip_circles_colors_for_t.append(mpl.cm.gist_gray(rank_normalizer(y)))
projection_line_segments.append([(x + 1, y), (future_x, y)])
t_ax.scatter(
tip_circles_x_for_t,
tip_circles_y_for_t,
s=tip_circles_sizes_for_t,
facecolor=tip_circles_colors_for_t,
edgecolors='#000000',
linewidths=0.5,
alpha=0.75,
zorder=11
)
projection_lc = LineCollection(projection_line_segments, zorder=-10)
projection_lc.set_color("#cccccc")
projection_lc.set_linewidth(1)
projection_lc.set_linestyle("--")
projection_lc.set_alpha(0.5)
t_ax.add_collection(projection_lc)
t_ax.axvline(x=timepoint_t, linestyle="--", color="#999999")
t_ax.axvline(x=timepoint_u, linestyle="--", color="#999999")
t_ax.spines['top'].set_visible(False) ## no axes
t_ax.spines['right'].set_visible(False)
t_ax.spines['left'].set_visible(False)
t_ax.tick_params(axis='y',size=0)
t_ax.set_yticklabels([])
t_ax.xaxis.set_major_locator(years)
t_ax.xaxis.set_major_formatter(years_fmt)
t_ax.xaxis.set_minor_locator(months)
t_ax.format_xdata = mdates.DateFormatter("%b %y")
t_ax.set_xlim(start_date, end_date)
# Frequency plot for timepoint t
frequency_ax = fig.add_subplot(gs[2])
baseline = np.zeros_like(pivots)
for (rank, strain), strain_df in rank_frequency_df_for_t.groupby(["rank", "strain"]):
frequency_ax.fill_between(
strain_df["ordinal_timepoint"].values,
baseline, baseline + strain_df["frequency"].values,
color=mpl.cm.gist_gray(rank_normalizer(rank))
)
baseline = baseline + strain_df["frequency"].values
frequency_ax.axvline(x=timepoint_t, color="#999999", linestyle="--")
frequency_ax.axvline(x=timepoint_u, color="#999999", linestyle="--")
frequency_ax.text(
0.72,
0.99,
"Projection",
horizontalalignment="center",
verticalalignment="center",
transform=frequency_ax.transAxes,
fontdict={"fontsize": 10}
)
frequency_ax.set_yticks(frequency_steps)
frequency_ax.set_yticklabels(['{:3.0f}%'.format(x*100) for x in frequency_steps])
frequency_ax.set_ylabel("Frequency")
frequency_ax.set_xlabel("Date")
frequency_ax.set_xlim(start_date, end_date)
frequency_ax.set_ylim(bottom=0.0)
frequency_ax.xaxis.set_major_locator(years)
frequency_ax.xaxis.set_major_formatter(years_fmt)
frequency_ax.xaxis.set_minor_locator(months)
frequency_ax.format_xdata = mdates.DateFormatter("%b %y")
# Plot for timepoint u
u_ax = fig.add_subplot(gs[1])
tip_circles_x_for_u = []
tip_circles_y_for_u = []
tip_circles_sizes_for_u = []
tip_circles_colors_for_u = []
for node in tree_for_t.find_clades(terminal=True):
if "rank" in node.attr:
x = node.attr["collection_date_ordinal"]
y = node.attr["rank"]
tip_circles_x_for_u.append(x)
tip_circles_y_for_u.append(y)
tip_circles_sizes_for_u.append(size_scaler * np.sqrt(node.attr.get("frequency_at_t", default_size)))
tip_circles_colors_for_u.append(mpl.cm.gist_gray(rank_normalizer(y)))
for node in tree_for_u.find_clades(terminal=True):
if "rank" in node.attr:
tip_circles_x_for_u.append(node.attr["collection_date_ordinal"])
tip_circles_y_for_u.append(node.attr["rank"])
tip_circles_sizes_for_u.append(1e3 * np.sqrt(node.attr.get("frequency_at_u", default_size)))
tip_circles_colors_for_u.append(mpl.cm.gist_gray(rank_normalizer(node.attr["rank"])))
u_ax.scatter(
tip_circles_x_for_u,
tip_circles_y_for_u,
s=tip_circles_sizes_for_u,
facecolor=tip_circles_colors_for_u,
edgecolors='#000000',
linewidths=0.5,
alpha=0.75,
zorder=11
)
u_ax.axvline(x=timepoint_t, linestyle="--", color="#999999")
u_ax.axvline(x=timepoint_u, linestyle="--", color="#999999")
u_ax.spines['top'].set_visible(False) ## no axes
u_ax.spines['right'].set_visible(False)
u_ax.spines['left'].set_visible(False)
u_ax.tick_params(axis='y',size=0)
u_ax.set_yticklabels([])
u_ax.xaxis.set_major_locator(years)
u_ax.xaxis.set_major_formatter(years_fmt)
u_ax.xaxis.set_minor_locator(months)
u_ax.format_xdata = mdates.DateFormatter("%b %y")
u_ax.set_xlim(end_date, start_date)
# Frequency plot for timepoint u
frequency_u_ax = fig.add_subplot(gs[3])
baseline_u = np.zeros_like(pivots[2:])
for (rank, strain), strain_df in rank_frequency_df_for_u.groupby(["rank", "strain"]):
frequency_u_ax.fill_between(
strain_df["ordinal_timepoint"].values[:projection_pivot_index + 1],
baseline_u, baseline_u + strain_df["frequency"].values[:projection_pivot_index + 1],
color=mpl.cm.gist_gray(rank_normalizer(rank))
)
baseline_u = baseline_u + strain_df["frequency"].values[:projection_pivot_index + 1]
frequency_u_ax.axvline(x=timepoint_t, color="#999999", linestyle="--")
frequency_u_ax.axvline(x=timepoint_u, color="#999999", linestyle="--")
frequency_u_ax.text(
0.28,
0.99,
"Observed",
horizontalalignment="center",
verticalalignment="center",
transform=frequency_u_ax.transAxes,
fontdict={"fontsize": 10}
)
frequency_u_ax.set_yticks(frequency_steps)
frequency_u_ax.set_yticklabels(['{:3.0f}%'.format(x*100) for x in frequency_steps])
frequency_u_ax.set_xlabel("Date")
frequency_u_ax.set_xlim(end_date, start_date)
frequency_u_ax.set_ylim(bottom=0.0)
frequency_u_ax.xaxis.set_major_locator(years)
frequency_u_ax.xaxis.set_major_formatter(years_fmt)
frequency_u_ax.xaxis.set_minor_locator(months)
frequency_u_ax.format_xdata = mdates.DateFormatter("%b %y")
frequency_u_ax.spines['top'].set_visible(False)
frequency_u_ax.spines['right'].set_visible(False)
frequency_u_ax.spines['left'].set_visible(False)
frequency_u_ax.tick_params(axis='y',size=0)
frequency_u_ax.set_yticklabels([])
fig.autofmt_xdate(rotation=0, ha="center")
gs.tight_layout(fig, h_pad=1.0)
# -
workflow/notebooks/plot-model-diagram.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import re
# Utils
def split_sal_freq(x):
curr = x[0]
sal = int(x[1:].split('/')[0].replace(',',''))
freq = x[1:].split('/')[1]
return pd.Series([curr, sal, freq])
def get_monthly_sal(x):
    # Note: despite the name, every branch returns an annualized figure
    # (hourly -> * 24 * 30 * 12, monthly -> * 12, yearly -> unchanged).
    if x['freq'] == 'hr':
        return x['Sal'] * 24 * 30 * 12
    elif x['freq'] == 'mo':
        return x['Sal'] * 12
    else:
        return x['Sal']
def preprocess_string(x):
x = x.lower()
x = re.sub(r'[^\w\s]', '', x)
x = x.replace(' ','')
return x
# -
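# As a quick check of the parsing helpers above (the raw salary format, e.g. "₹6,00,000/yr", is assumed from the dataset):

```python
import re
import pandas as pd

def split_sal_freq(x):
    # First character is the currency symbol; the rest is "<amount>/<frequency>".
    curr = x[0]
    sal = int(x[1:].split('/')[0].replace(',', ''))
    freq = x[1:].split('/')[1]
    return pd.Series([curr, sal, freq])

def preprocess_string(x):
    # Lowercase, strip punctuation, and remove spaces to normalize names for joins.
    x = x.lower()
    x = re.sub(r'[^\w\s]', '', x)
    return x.replace(' ', '')

print(split_sal_freq("₹6,00,000/yr").tolist())  # ['₹', 600000, 'yr']
print(preprocess_string("Tata Consultancy Services!"))  # tataconsultancyservices
```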
def fetch_preprocess_data():
# input dataset
print('Reading dataset')
df = pd.read_csv('data/Salary Dataset.csv')
df['Salaries Reported'] = df['Salaries Reported'].fillna(1)
df['Currency'] = df['Salary'].apply(lambda x: x[0])
df = df[df['Currency']=='₹']
df[['Currency','Sal','freq']] = df['Salary'].apply(split_sal_freq)
df['Tot_sal'] = df[['Sal','freq']].apply(lambda x: get_monthly_sal(x), axis=1)
##### preprocess string
print('Preprocessing data')
df['Company Name'] = df['Company Name'].astype(str)
df['Job Title'] = df['Job Title'].astype(str)
df['Location'] = df['Location'].astype(str)
df['Company Name_preprocessed'] = df['Company Name'].apply(lambda x: preprocess_string(x))
df['Job Title_preprocessed'] = df['Job Title'].apply(lambda x: preprocess_string(x))
df['Location_preprocessed'] = df['Location'].apply(lambda x: preprocess_string(x))
# Refactored job title
df_title = pd.read_csv('data/title_map.csv')
df_title_1 = df_title[['Job Title_preprocessed', 'title_map_1']]
df_title_1 = df_title_1.dropna()
df_title_2 = df_title[['Job Title_preprocessed', 'title_map_2']]
df_title_2 = df_title_2.dropna()
title_map_1 = df_title_1.set_index('Job Title_preprocessed').to_dict()['title_map_1']
title_map_2 = df_title_2.set_index('Job Title_preprocessed').to_dict()['title_map_2']
df_1 = df.copy()
df_2 = df.copy()
df_1['Job Title_preprocessed'] = df_1['Job Title_preprocessed'].map(title_map_1)
df_2['Job Title_preprocessed'] = df_2['Job Title_preprocessed'].map(title_map_2)
    df = pd.concat([df_1, df_2], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
df = df.dropna()
# Refactored company name
df_company = pd.read_csv('data/company_map.csv')
df_company = df_company[['Company Name_preprocessed', 'company_map']]
df_company = df_company.dropna()
df_company = df_company.set_index('Company Name_preprocessed')
company_map = df_company.to_dict()['company_map']
df['Company Name_preprocessed'] = df['Company Name_preprocessed'].map(company_map)
df = df.dropna()
# Total sum of salary = Tot_sal * Salaries reported
df['Tot_sal_sum'] = df['Tot_sal'] * df['Salaries Reported']
# Multiple aggregates
df['Company_Title'] = df['Company Name_preprocessed'] + df['Job Title_preprocessed']
df['Location_Title'] = df['Location_preprocessed'] + df['Job Title_preprocessed']
##### remove invalid companies and titles
print('Removing invalid companies and titles')
invalid_companies = ['---']
invalid_job_title = []
df = df[~df['Company Name'].isin(invalid_companies)]
df = df[~df['Job Title'].isin(invalid_job_title)]
print(' --------------------------------- ')
print('Total number of salaries reported : ', sum(df['Salaries Reported']))
print('Total number of companies : ', df['Company Name'].nunique())
print('Total number of job titles : ', df['Job Title'].nunique())
print('Total number of locations : ', df['Location'].nunique())
print(' --------------------------------- ')
print('Aggregating by company')
df_company_aggregates = df.groupby(['Company Name']).agg({'Tot_sal': ['mean', 'median', 'count']}).reset_index()
df_company_aggregates.columns = ['Company Name', 'mean', 'median', 'count']
print('Aggregating by title')
df_job_title_aggregates = df.groupby(['Job Title']).agg({'Tot_sal': ['mean', 'median', 'count']}).reset_index()
df_job_title_aggregates.columns = ['Job Title', 'mean', 'median', 'count']
print('Data exercise completed')
return df, df_company_aggregates, df_job_title_aggregates
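# The `groupby(...).agg({...})` calls above produce MultiIndex columns that are then flattened by hand. Pandas named aggregation achieves the same result directly (a sketch on toy data):

```python
import pandas as pd

toy = pd.DataFrame({
    "Company Name": ["A", "A", "B"],
    "Tot_sal": [100000, 300000, 500000],
})
# Named aggregation yields flat column names, so no manual renaming is needed.
agg = toy.groupby("Company Name").agg(
    mean=("Tot_sal", "mean"),
    median=("Tot_sal", "median"),
    count=("Tot_sal", "count"),
).reset_index()
print(agg.loc[agg["Company Name"] == "A", "mean"].iloc[0])  # 200000.0
```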
df, df_company_aggregates, df_job_title_aggregates = fetch_preprocess_data()
# +
# df[['Job Title_preprocessed', 'Tot_sal']].head(20)
# Convert total salary to lakhs; salary_range below depends on this column.
df['Tot_sal_in_lacs'] = df['Tot_sal'] / 100000
df['Tot_sal_in_lacs'] = round(df['Tot_sal_in_lacs'])
import bisect

def salary_range(x):
    # Bucket a salary (in lakhs): <=3 -> 0, (3,5] -> 1, (5,7] -> 2, (7,10] -> 3,
    # then one bucket per 2-lakh step, with anything above 40 -> 19.
    bounds = [3, 5, 7, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40]
    return bisect.bisect_left(bounds, x)
df['salary_range'] = df['Tot_sal_in_lacs'].apply(salary_range)
df
salary_stats.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Read datasets
import pandas as pd
countries_of_the_world = pd.read_csv('../datasets/countries-of-the-world.csv')
countries_of_the_world.head()
mpg = pd.read_csv('../datasets/mpg.csv')
mpg.head()
student_data = pd.read_csv('../datasets/student-alcohol-consumption.csv')
student_data.head()
survey_data = pd.read_csv('../datasets/young-people-survey-responses.csv')
survey_data.head()
import matplotlib.pyplot as plt
import seaborn as sns
# # Changing style and palette
# Let's return to our dataset containing the results of a survey given to young people about their habits and preferences. We've provided the code to create a count plot of their responses to the question "How often do you listen to your parents' advice?". Now let's change the style and palette to make this plot easier to interpret.
#
# We've already imported Seaborn as sns and matplotlib.pyplot as plt.
# +
# Set the style to "whitegrid"
sns.set_style("whitegrid")
# Create a count plot of survey responses
category_order = ["Never", "Rarely", "Sometimes",
"Often", "Always"]
sns.catplot(x="Parents Advice",
data=survey_data,
kind="count",
order=category_order)
# Show plot
plt.show()
# +
# Set the color palette to "Purples"
sns.set_style("whitegrid")
sns.set_palette("Purples")
# Create a count plot of survey responses
category_order = ["Never", "Rarely", "Sometimes",
"Often", "Always"]
sns.catplot(x="Parents Advice",
data=survey_data,
kind="count",
order=category_order)
# Show plot
plt.show()
# +
# Change the color palette to "RdBu"
sns.set_style("whitegrid")
sns.set_palette("RdBu")
# Create a count plot of survey responses
category_order = ["Never", "Rarely", "Sometimes",
"Often", "Always"]
sns.catplot(x="Parents Advice",
data=survey_data,
kind="count",
order=category_order)
# Show plot
plt.show()
# -
# # Changing the scale
# In this exercise, we'll continue to look at the dataset containing responses from a survey of young people. Does the percentage of people reporting that they feel lonely vary depending on how many siblings they have? Let's find out using a bar plot, while also exploring Seaborn's four different plot scales ("contexts").
#
# We've already imported Seaborn as sns and matplotlib.pyplot as plt.
# +
# Change the context to "paper"
sns.set_context("paper")
# Create bar plot
sns.catplot(x="Number of Siblings", y="Feels Lonely",
data=survey_data, kind="bar")
# Show plot
plt.show()
# +
# Change the context to "notebook"
sns.set_context("notebook")
# Create bar plot
sns.catplot(x="Number of Siblings", y="Feels Lonely",
data=survey_data, kind="bar")
# Show plot
plt.show()
# +
# Change the context to "talk"
sns.set_context("talk")
# Create bar plot
sns.catplot(x="Number of Siblings", y="Feels Lonely",
data=survey_data, kind="bar")
# Show plot
plt.show()
# +
# Change the context to "poster"
sns.set_context("poster")
# Create bar plot
sns.catplot(x="Number of Siblings", y="Feels Lonely",
data=survey_data, kind="bar")
# Show plot
plt.show()
# -
# # Using a custom palette
# So far, we've looked at several things in the dataset of survey responses from young people, including their internet usage, how often they listen to their parents, and how many of them report feeling lonely. However, one thing we haven't done is a basic summary of the type of people answering this survey, including their age and gender. Providing these basic summaries is always a good practice when dealing with an unfamiliar dataset.
#
# The code provided will create a box plot showing the distribution of ages for male versus female respondents. Let's adjust the code to customize the appearance, this time using a custom color palette.
#
# We've already imported Seaborn as sns and matplotlib.pyplot as plt.
# +
# Set the style to "darkgrid"
sns.set_style("darkgrid")
# Set a custom color palette
sns.set_palette(["#39A7D0", "#36ADA4"])
# Create the box plot of age distribution by gender
sns.catplot(x="Gender", y="Age",
data=survey_data, kind="box")
# Show plot
plt.show()
# -
# # FacetGrids vs. AxesSubplots
# In the recent lesson, we learned that Seaborn plot functions create two different types of objects: FacetGrid objects and AxesSubplot objects. The method for adding a title to your plot will differ depending on the type of object it is.
#
# In the code provided, we've used relplot() with the miles per gallon dataset to create a scatter plot showing the relationship between a car's weight and its horsepower. This scatter plot is assigned to the variable name g. Let's identify which type of object it is.
#
# We've already imported Seaborn as sns and matplotlib.pyplot as plt.
# +
# Create scatter plot
g = sns.relplot(x="weight",
y="horsepower",
data=mpg,
kind="scatter")
# Identify plot type
type_of_g = type(g)
# Print type
print(type_of_g)
# -
# # Question
# We've just seen that sns.relplot() creates FacetGrid objects. Which other Seaborn function creates a FacetGrid object instead of an AxesSubplot object?
#
# Possible Answers: sns.catplot()
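# The FacetGrid vs. AxesSubplot distinction can be checked without the course datasets. A minimal sketch (using a non-interactive backend so it runs headless; `relplot()` is figure-level and returns a FacetGrid, while `scatterplot()` is axes-level and returns a Matplotlib Axes):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for headless execution
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

toy_df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
g = sns.relplot(x="x", y="y", data=toy_df, kind="scatter")  # figure-level
ax = sns.scatterplot(x="x", y="y", data=toy_df)             # axes-level
print(type(g).__name__)          # FacetGrid
print(isinstance(ax, plt.Axes))  # True
```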
# # Adding a title to a FacetGrid object
# In the previous exercise, we used relplot() with the miles per gallon dataset to create a scatter plot showing the relationship between a car's weight and its horsepower. This created a FacetGrid object. Now that we know what type of object it is, let's add a title to this plot.
#
# We've already imported Seaborn as sns and matplotlib.pyplot as plt.
# +
# Create scatter plot
g = sns.relplot(x="weight",
y="horsepower",
data=mpg,
kind="scatter")
# Add a title "Car Weight vs. Horsepower"
g.fig.suptitle("Car Weight vs. Horsepower")
# Show plot
plt.show()
# -
# # Adding a title and axis labels
# Let's continue to look at the miles per gallon dataset. This time we'll create a line plot to answer the question: How does the average miles per gallon achieved by cars change over time for each of the three places of origin? To improve the readability of this plot, we'll add a title and more informative axis labels.
#
# In the code provided, we create the line plot using the lineplot() function. Note that lineplot() does not support the creation of subplots, so it returns an AxesSubplot object instead of an FacetGrid object.
#
# We've already imported Seaborn as sns and matplotlib.pyplot as plt.
# +
# The exercise assumes a pre-computed summary table; build it from mpg here.
mpg_mean = mpg.groupby(["model_year", "origin"])["mpg"].mean().reset_index(name="mpg_mean")
# Create line plot
g = sns.lineplot(x="model_year", y="mpg_mean",
data=mpg_mean,
hue="origin")
# Add a title "Average MPG Over Time"
g.set_title("Average MPG Over Time")
# Show plot
plt.show()
# +
# Create line plot
g = sns.lineplot(x="model_year", y="mpg_mean",
data=mpg_mean,
hue="origin")
# Add a title "Average MPG Over Time"
g.set_title("Average MPG Over Time")
# Add x-axis and y-axis labels
g.set(xlabel="Car Model Year", ylabel= "Average MPG")
# Show plot
plt.show()
# -
# # Rotating x-tick labels
# In this exercise, we'll continue looking at the miles per gallon dataset. In the code provided, we create a point plot that displays the average acceleration for cars in each of the three places of origin. Note that the "acceleration" variable is the time to accelerate from 0 to 60 miles per hour, in seconds. Higher values indicate slower acceleration.
#
# Let's use this plot to practice rotating the x-tick labels. Recall that the function to rotate x-tick labels is a standalone Matplotlib function and not a function applied to the plot object itself.
#
# We've already imported Seaborn as sns and matplotlib.pyplot as plt.
# +
# Create point plot
sns.catplot(x="origin",
y="acceleration",
data=mpg,
kind="point",
join=False,
capsize=0.1)
# Rotate x-tick labels
plt.xticks(rotation=90)
# Show plot
plt.show()
# -
# # Box plot with subgroups
# In this exercise, we'll look at the dataset containing responses from a survey given to young people. One of the questions asked of the young people was: "Are you interested in having pets?" Let's explore whether the distribution of ages of those answering "yes" tends to be higher or lower than those answering "no", controlling for gender.
# +
# Set palette to "Blues"
sns.set_palette("Blues")
# Adjust to add subgroups based on "Interested in Pets"
g = sns.catplot(x="Gender",
y="Age", data=survey_data,
kind="box", hue="Interested in Pets")
# Set title to "Age of Those Interested in Pets vs. Not"
g.fig.suptitle("Age of Those Interested in Pets vs. Not")
# Show plot
plt.show()
# -
# # Bar plot with subgroups and subplots
# In this exercise, we'll return to our young people survey dataset and investigate whether the proportion of people who like techno music ("Likes Techno") varies by their gender ("Gender") or where they live ("Village - town"). This exercise will give us an opportunity to practice the many things we've learned throughout this course!
#
# We've already imported Seaborn as sns and matplotlib.pyplot as plt.
# File: introduction-to-data-visualization-with-seaborn/4. Customizing Seaborn Plots/notebook_section_4.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Functions
#
# - Functions let you define reusable code, keeping programs organized and simple
# - In practice, one function usually implements one small piece of functionality
# - A class implements one larger piece of functionality
# - Likewise, a function body should not exceed one screen in length
def yk():
    '''
    Function docstring format
    ykykyky
    parameters:
        print --------->prints output
    '''
    print('ykykykykyk')
yk()  # call the function
# ## Defining a function
#
# def function_name(list of parameters):
#
#     do something
# 
# - random, range, print, etc., which we used before, are themselves functions or classes
def yf(name, name2, name3='四'):  # A parameter with a default value uses the default when no argument is passed, and the passed value otherwise. Only trailing parameters may have defaults.
    '''
    '''
    print('{},{}姐姐{}急急急'.format(name, name2, name3))
yf('一', '二', '三')
def yz(num):
    if num % 2 == 0:
        d = eval(input('0 print, 1 not print: '))
        if d == 0:
            print(num, 'is even')
    else:
        d = eval(input('0 print, 1 not print: '))
        if d == 0:
            print(num, 'is odd')
num1 = eval(input('>>'))
yz(num1)
def yz(num, is_print=True):
    if num % 2 == 0:
        if is_print:
            print(num, 'is even')
    else:
        if is_print:
            print(num, 'is odd')
num1 = eval(input('>>'))
yz(num1, False)
# ## Calling a function
# - functionName()
# - the "()" is what performs the call
def fun1(name):
print(name,end=':')
fun2()
def fun2():
print('say')
fun1('yk')
def fun_qi(num):
if num % 2 != 0:
print(num,'fun_qi')
else:
fun_ou(num)
def fun_ou(num1):
print(num1,'fun_ou')
for i in range(1,11):
fun_qi(i)
# +
import hashlib#md5加密
date = 'This a md5 test!'
hash_md5 = hashlib.md5(date.encode())
hash_md5.hexdigest()#输出32位16进制数
# +
import hashlib
password = '<PASSWORD>'
hash_md = hashlib.md5(password.encode())
date = hash_md.hexdigest()
def fun_passwd(num):
passwd = input('Enter a password:')
fun_result(num, fun_md5(passwd))
def fun_md5(passwd):
hash_ma5 = hashlib.md5(passwd.encode())
date = hash_md5.hexdigest()
def fun_result(num1,num2):
if num1 == num2:
print('True')
else:
print('False')
#print(date)
fun_passwd(date)
# +
import hashlib
#---------数据库里面的密码-----------
password = '<PASSWORD>'
hash_md = hashlib.md5(password.encode())
init_passwd = hash_md.hexdigest()
#--------ajaxd登录的明文密码----------
def password(passwd):
md5(passwd)#进行MD5摘要
def md5(passwd):
hash_md = hashlib.md5(passwd.encode())
input_passwd = hash_md.hexdigest()#输入的md5摘要密码
if init_passwd == input_passwd:
result('OK')
else:
result('Faild')
def result(res):
print(res)
password('<PASSWORD>')
# -
def jj(num):
return num
a = jj('jk')
print(a)
# 
# ## Functions with and without return values
# - return hands a value back to the caller
# - return can hand back multiple values
# - Typically, when several functions cooperate to implement one feature, they will have return values
def jj():
return 'HHHHHHH'
a = jj()
print(a)
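The bullets above note that `return` can hand back multiple values; a minimal sketch with a hypothetical `min_max` helper (`return a, b` packs the values into a tuple the caller can unpack):

```python
# Return multiple values as a tuple and unpack them at the call site.
def min_max(nums):
    return min(nums), max(nums)

lo, hi = min_max([3, 1, 4, 1, 5])
print(lo, hi)  # → 1 5
```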
# 
#
# - You can of course also explicitly return None
# ## EP:
# 
# +
def main():
    print(minn(5,6))  # prints None: minn never returns smallest
def minn(n1,n2):
    smallest = n1
    if n2 < smallest:
        smallest = n2
main()
# +
def main():
    print(minn(minn(5,6), minn(51,6)))
def minn(n1,n2):
    smallest = n1
    if n2 < smallest:
        smallest = n2
    return smallest  # without this return, the nested calls would pass None around
main()
# -
# ## Parameter types and keyword arguments
# - regular parameters
# - multiple parameters
# - default-value parameters
# - variable-length parameters
# ## Regular parameters
def fun_1(name):
pass
# ## Multiple parameters
def fun_2(name1, name2):  # list as many parameters as needed, separated by commas
    pass
# ## Default-value parameters
def fun_3(name1='yyyyy', name2='kkkkkkk'):  # if a middle parameter has a default, every parameter after it must also have one
    print(name1, name2)
fun_3()
# ## Keyword-only parameters
def fun_4(name1, *, name2):  # parameters after * must be passed by name
    print(name1, name2)
fun_4(1, name2=3)
# ## Variable-length parameters
# - \*args
# > - variable length: takes however many positional arguments arrive, including none
# - collected into a tuple
# - the name args can be changed; it is just the conventional choice
# - \**kwargs
# > - collected into a dict
# - the arguments must be key=value pairs
def fun_5(*args):  # * + name = variable-length parameter: handles however many arguments arrive, collected into a tuple
    print(args)
fun_5(1,2,3,4,5)
def fun_6(*args, name):  # positional arguments all go to *args; the regular parameter must then be passed by name
    print(*args, name)
fun_6(1,2,3,4,5, name='ykykyk')
def fun_6(name, *args):  # the first argument binds to the first parameter; the rest go to *args
    print(*args, name)
fun_6(1,2,3,4,5)
def fun_7(**kwargs):
    print(kwargs)
fun_7(a=100, c=200)  # pass key=value pairs, get back a dict
import matplotlib.pyplot as plt
plt.plot([1,2,3,4,5,6],color = 'red') #color = 'red' == c = 'r'
def fun_8(*args, **kwargs):  # positional arguments must come before keyword arguments, or you get a syntax error
    print(args)
    print(kwargs)
fun_8(1,2,3,4,5,'a', a=100, b=1000)
# exercises: odd/even, palindromes, primes, narcissistic numbers
def fun_9():
    pass  # placeholder
# ## Variable scope
# - local variables: local
# - global variables: global
# - the globals() function returns a dict of all global variables, including imported names
# - the locals() function returns a dict of all local variables at the current position
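A quick sketch of the two inspection functions described above (illustrative names `x` and `y`):

```python
# globals() sees module-level names; locals() sees names bound inside the function.
x = 10  # global variable

def show_scopes():
    y = 20  # local variable
    return ('y' in locals()), ('x' in globals())

local_seen, global_seen = show_scopes()
print(local_seen, global_seen)  # → True True
```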
a = 1000
def fun_10():
    global a  # declare the global so that the assignment changes the outer a
    a = 0
    print(a)
fun_10()
print(a)
list_ = []
def fun_11():
    list_.append(100)  # mutating (not reassigning) a global object needs no global declaration
    print(list_)
fun_11()
# ## Note:
# - global: it must be declared whenever you assign to the variable
# - 官方解释:This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.
# - 
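The shadowing rule quoted above can be seen directly: assigning to a name inside a function makes that name local, so reading it before assignment raises `UnboundLocalError` unless you declare it `global` (illustrative `count` example):

```python
count = 0

def broken():
    count += 1  # UnboundLocalError: the assignment makes 'count' local here

def fixed():
    global count
    count += 1

try:
    broken()
    raised = False
except UnboundLocalError:
    raised = True

fixed()
print(raised, count)  # → True 1
```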
# # Homework
# - 1
# 
i = 0
def getPentagonalNumber(n):
num = (n * (3 * n - 1)) / 2
return num
for j in range(1,101):
a = getPentagonalNumber(j)
print(int(a),end = ' ')
i += 1
if i % 10 == 0:
print('')
# - 2
# 
# +
def sumDigits(n):
summ = 0
for i in str(n):
summ += int(i)
print(summ)
num = eval(input('Enter a number:'))
sumDigits(num)
# -
# - 3
# 
def displaySortedNumbers(num1,num2,num3):
max_ = max(num1,num2,num3)
min_ = min(num1,num2,num3)
sum_ = num1 + num2 + num3
mid = int(sum_ - (max_ + min_))
print('The sorted numbers are ',min_,mid,max_)
x1,x2,x3 = eval(input('Enter three numbers:'))
displaySortedNumbers(x1,x2,x3)
# - 4
# 
import math
def futureInvestmentValue(investmentAmount, monthlyInterestRate, years):
    f = investmentAmount * math.pow((1 + monthlyInterestRate), years * 12)
    return f
investAmount = eval(input('The amount invested:'))
annualRate = eval(input('Annual interest rate:'))
print('Years Future Value')
for i in range(1, 31):
    f = futureInvestmentValue(investAmount, annualRate / (100 * 12), i)
    print(i, ' ', round(f, 2))
# - 5
# 
# +
summ = 0
def printChars(ch1,ch2,numberPerLine):
for i in range(ord(ch1),ord(ch2)+1):
if i == numberPerLine + ord(ch1):
print(chr(i),end = ' ')
global summ
summ += 1
if summ % 10 == 0:
print('')
for i in range(0,42):
printChars('1','Z',i)
#for i in range(26):
# printChars('a','z',i)
#for i in range(26):
# printChars('A','Z',i)
# -
# - 6
# 
def numberOfDaysInAYear(year):
    if (year % 4 == 0 and year % 100 != 0) or year % 400 == 0:
        print(year, 'has 366 days')
    else:
        print(year, 'has 365 days')
for i in range(2010,2021):
numberOfDaysInAYear(i)
# - 7
# 
def distance(x1,y1,x2,y2):
dis = ((x1 - x2) ** 2 + (y1 - y2)**2 )**( 1/ 2)
return dis
x1,y1 = eval(input('Enter a point:'))
x2,y2 = eval(input('Enter a point:'))
distance_ = distance(x1,y1,x2,y2)
print(distance_)
# - 8
# 
# +
def meisensushu(p):  # searches for Mersenne primes: primes of the form 2**k - 1
for i in range(p+1):
for j in range(2,i):
if i % j == 0:
break
else:
for k in range(2,p + 1):
if i == 2 ** k -1:
print(k,' ',i)
break
print('p 2^p -1')
meisensushu(31)
# -
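The `meisensushu` search above can be restated compactly: a Mersenne prime is a prime of the form 2**k - 1, and primality can be checked by trial division (a sketch with an assumed `is_prime` helper and a smaller range):

```python
# List (k, 2**k - 1) pairs where 2**k - 1 is prime, using trial division.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

mersenne = [(k, 2**k - 1) for k in range(2, 20) if is_prime(2**k - 1)]
print(mersenne)  # note k = 11 is absent: 2**11 - 1 = 2047 = 23 * 89
```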
# - 9
# 
# 
import time
s = time.strftime('%b %d,%Y %H:%M:%S',time.localtime(time.time()))
print('Current date and time is ',s)
# - 10
# 
import random
def zhesaizi():
    res1 = random.randint(1, 6)  # a die shows 1-6, so randint(1, 6)
    res2 = random.randint(1, 6)
    sum_ = res1 + res2
if sum_ == 3 or sum_ == 12:
print('You rolled',res1,'+',res2,'=',sum_)
print('You lose')
elif sum_ == 7 or sum_== 11:
print('You rolled',res1,'+',res2,'=',sum_)
print('win')
else:
print('You rolled',res1,'+',res2,'=',sum_)
print('point is',sum_)
liewai(sum_)
def liewai(n):
    res1 = random.randint(1, 6)
    res2 = random.randint(1, 6)
    sum_ = res1 + res2
if sum_ == 7:
print('You rolled',res1,'+',res2,'=',sum_)
print('You lose')
elif sum_ == n:
print('You rolled',res1,'+',res2,'=',sum_)
print('You win')
else:
liewai(n)
zhesaizi()
# - 11
# ### 去网上寻找如何用Python代码发送邮件
# +
from email.mime.text import MIMEText  # wraps the message body as an email payload
import smtplib  # sends the email over SMTP
from configparser import *
def Send_Email(accept_email, SMTP_server, Sender, password):
    print('Starting to send the email...')
    # MIMEText takes the body text; format ('plain') and encoding ('utf-8') can also be given
    msg = MIMEText('You have registered successfully, please verify promptly')  # build the email body
    msg["Subject"] = "Congratulations"  # subject line
    msg["From"] = Sender  # sender
    mail_sever = smtplib.SMTP(SMTP_server, 25)  # connect to the mail server on port 25
    mail_sever.login(Sender, password)  # log in to the mailbox
    email_list = [accept_email]  # recipient addresses
    for EMAIL in email_list:
        mail_sever.sendmail(Sender, EMAIL, msg.as_string())
    mail_sever.quit()  # close the connection
    print('Email sent successfully')
Send_Email('recipient email address', 'smtp.163.com', 'your own 163 email address', 'your own 163 password')  # placeholder arguments, in the correct parameter order
# File: 7.20.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Data analysis
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Imputing missing values
from sklearn.impute import KNNImputer
from scipy.stats import chi2_contingency
# Feature engineering
from sklearn.preprocessing import StandardScaler
# Model processing and testing
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix, classification_report
from sklearn.metrics import roc_auc_score, plot_roc_curve, precision_score, recall_score
# Models
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
# -
df = pd.read_csv("data.csv")
df.drop('Unnamed: 32', axis = 1, inplace = True)
df = df.drop('id', axis=1) #id column not necessary
df.head()
df.shape
df.diagnosis = [1 if i == "M" else 0 for i in df.diagnosis]
df.head()
x = df.drop('diagnosis', axis = 1)
y = df['diagnosis']
df.head()
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 0)
log = LogisticRegression()
log.fit(x_train, y_train)
y_pred_log = log.predict(x_test)
cr = classification_report(y_test, y_pred_log)
print(cr)
print('Precision Score: ', round(precision_score(y_test, y_pred_log), 3))
print('Recall Score: ', round(recall_score(y_test, y_pred_log), 3))
print('F1 Score: ', round(f1_score(y_test, y_pred_log), 3))
print('Accuracy Score: ', round(accuracy_score(y_test, y_pred_log), 3))
print('ROC AUC: ', round(roc_auc_score(y_test, y_pred_log), 3))
rf = RandomForestClassifier()
rf.fit(x_train, y_train)
y_pred_rf = rf.predict(x_test)
cr_rf = classification_report(y_test, y_pred_rf)
print(cr_rf)
print('Precision Score: ', round(precision_score(y_test, y_pred_rf), 3))
print('Recall Score: ', round(recall_score(y_test, y_pred_rf), 3))
print('F1 Score: ', round(f1_score(y_test, y_pred_rf), 3))
print('Accuracy Score: ', round(accuracy_score(y_test, y_pred_rf), 3))
print('ROC AUC: ', round(roc_auc_score(y_test, y_pred_rf), 3))
knn = KNeighborsClassifier()
knn.fit(x_train, y_train)
y_pred_knn = knn.predict(x_test)
cr_knn = classification_report(y_test, y_pred_knn)
print(cr_knn)
print('Precision Score: ', round(precision_score(y_test, y_pred_knn), 3))
print('Recall Score: ', round(recall_score(y_test, y_pred_knn), 3))
print('F1 Score: ', round(f1_score(y_test, y_pred_knn), 3))
print('Accuracy Score: ', round(accuracy_score(y_test, y_pred_knn), 3))
print('ROC AUC: ', round(roc_auc_score(y_test, y_pred_knn), 3))
# # SVM
svm = SVC()
svm.fit(x_train, y_train)
y_pred_svm = svm.predict(x_test)
cr_svm = classification_report(y_test, y_pred_svm)
print(cr_svm)
print('Precision Score: ', round(precision_score(y_test, y_pred_svm), 3))
print('Recall Score: ', round(recall_score(y_test, y_pred_svm), 3))
print('F1 Score: ', round(f1_score(y_test, y_pred_svm), 3))
print('Accuracy Score: ', round(accuracy_score(y_test, y_pred_svm), 3))
print('ROC AUC: ', round(roc_auc_score(y_test, y_pred_svm), 3))
dt = DecisionTreeClassifier()
dt.fit(x_train, y_train)
y_pred_dt = dt.predict(x_test)
cr_dt = classification_report(y_test, y_pred_dt)
print(cr_dt)
print('Precision Score: ', round(precision_score(y_test, y_pred_dt), 3))
print('Recall Score: ', round(recall_score(y_test, y_pred_dt), 3))
print('F1 Score: ', round(f1_score(y_test, y_pred_dt), 3))
print('Accuracy Score: ', round(accuracy_score(y_test, y_pred_dt), 3))
print('ROC AUC: ', round(roc_auc_score(y_test, y_pred_dt), 3))
from sklearn.naive_bayes import GaussianNB
bx = GaussianNB()
bx.fit(x_train, y_train)
y_pred_bx = bx.predict(x_test)
cr_bx = classification_report(y_test, y_pred_bx)
print(cr_bx)
print('Precision Score: ', round(precision_score(y_test, y_pred_bx), 3))
print('Recall Score: ', round(recall_score(y_test, y_pred_bx), 3))
print('F1 Score: ', round(f1_score(y_test, y_pred_bx), 3))
print('Accuracy Score: ', round(accuracy_score(y_test, y_pred_bx), 3))
print('ROC AUC: ', round(roc_auc_score(y_test, y_pred_bx), 3))
from sklearn.ensemble import AdaBoostClassifier
ad = AdaBoostClassifier()
ad.fit(x_train, y_train)
y_pred_ad = ad.predict(x_test)
cr_ad = classification_report(y_test, y_pred_ad)
print(cr_ad)
print('Precision Score: ', round(precision_score(y_test, y_pred_ad), 3))
print('Recall Score: ', round(recall_score(y_test, y_pred_ad), 3))
print('F1 Score: ', round(f1_score(y_test, y_pred_ad), 3))
print('Accuracy Score: ', round(accuracy_score(y_test, y_pred_ad), 3))
print('ROC AUC: ', round(roc_auc_score(y_test, y_pred_ad), 3))
# # ROC curve of AdaBoost
auc_ad = roc_auc_score(y_test, y_pred_ad)
auc_ad
cm_ad = confusion_matrix(y_test, y_pred_ad)
cm_ad
# +
from sklearn.metrics import roc_curve, roc_auc_score
predicted_probab_ad = ad.predict_proba(x_test)
predicted_probab_ad = predicted_probab_ad[:, 1]
fpr2, tpr2, _ = roc_curve(y_test, predicted_probab_ad)
# -
from matplotlib import pyplot
pyplot.plot(fpr2, tpr2, marker='.', label='AdaBoost (AUROC = %0.3f)'% auc_ad)
# Title
pyplot.title('ROC Plot')
pyplot.xlabel('False Positive Rate')
pyplot.ylabel('True Positive Rate')
pyplot.legend()
pyplot.show()
# File: withoutSMOTE.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Watch Me Code 2: Plotly and Cufflinks
#
# - Plotly is a cloud-based plotting service. It uses the popular JavaScript library D3.js.
# - Plotly is simple to use
# - It includes a module called `cufflinks` to attach Plotly to pandas
# !pip install plotly
# !pip install cufflinks
import plotly
import plotly.plotly as py
import plotly.graph_objs as go
import cufflinks as cf
import pandas as pd
# To use plot.ly you need to sign up for a free account then get API credentials. https://plot.ly/settings/api
# setup the credentials
plotly.tools.set_credentials_file(username='mafudgefc94', api_key='k7KpNxwXupnJBchuesc0')
# Start with a Simple Pandas DataFrame
grades = { 'subjects' : ['Mathematics', 'English', 'History', 'Science', 'Arts'],
'grades' : [67, 60, 36, 61, 58]
}
grades_df = pd.DataFrame(grades)
grades_df
# To plot with plotly, we need:
#
# - Data: this is a list of subplots,
# - A Python Dictionary of information to put on the plot
# +
grade_data = [go.Bar(x=grades_df['subjects'], y=grades_df['grades'])] # Bar Plot, note this is a list
py.iplot({ 'data': grade_data,
'layout': {
'title': 'My Grades R Awesum',
'xaxis': {
'title': 'Subjects Are Bad'},
'yaxis': {
'title': 'Grades'}
}})
# -
# Cufflinks is a Python module which "attaches" plot.ly to the dataframe (just like cufflinks "attach" to your shirt-sleeve).
#
# This allows you to plot similarly to pandas.
# Same plot as a one-liner with cufflinks!
grades_df.iplot(kind='bar', x='subjects', y='grades', title='My Grades R Awesum', xTitle='Subjects Are Bad', yTitle='Grades')
# Have some pie, note it uses labels and values
grades_df.iplot(kind='pie', labels='subjects', values='grades', title='My Grades R Awesum')
# Don't like all that color?
grades_df.iplot(kind='pie', labels='subjects', values='grades', title='My Grades R Awesum', colorscale='greens', textinfo='value+percent')
# How about an example with multiple series? For that we need to pull in another dataset
cuse_weather_df = pd.read_csv('https://raw.githubusercontent.com/mafudge/datasets/master/weather/syracuse-ny.csv')
cuse_weather_df = cuse_weather_df[ cuse_weather_df['EST'].str.startswith('2015-')]
cuse_weather_df.head(5)
# +
r = dict(color='red')
g = dict(color='green')
b = dict(color='blue')
grade_data = [
go.Scatter(x=cuse_weather_df['EST'], y=cuse_weather_df['Max TemperatureF'], mode="lines", name="Max Temp", marker=r),
go.Scatter(x=cuse_weather_df['EST'], y=cuse_weather_df['Mean TemperatureF'], mode="lines+markers", name="Mean Temp", marker=g),
go.Scatter(x=cuse_weather_df['EST'], y=cuse_weather_df['Min TemperatureF'], mode="lines", name="Min Temp", marker=b)
]
py.iplot({ 'data': grade_data,
'layout': {
'title': 'Syracuse Weather 2015',
'xaxis': {
'title': 'Day of the Year'},
'yaxis': {
'title': 'Temperature Deg F'}
}})
# -
# Here's another example with the Exam Scores Dataset. Shows you how much more expressive plot.ly can be.
scores_df = pd.read_csv('https://raw.githubusercontent.com/mafudge/datasets/master/exam-scores/exam-scores.csv')
scores_df = scores_df.sort_values(by='Student Score')
scores_df[0:6]
# +
grade_data = [
go.Scatter(x=scores_df['Letter Grade'], y=scores_df['Completion Time'], mode="markers",
marker= { 'size': scores_df['Student Score'], 'sizemode' : 'diameter', 'sizeref' : 1.0})
]
py.iplot({ 'data': grade_data,
'layout': {
'title': 'Exam Grades',
'xaxis': {
'title': 'Letter Grade'},
'yaxis': {
'title': 'Time To Complete Exam'}
}})
# +
grade_data = [
go.Heatmap(x=scores_df['Exam Version'], y=scores_df['Completion Time'], z=scores_df['Student Score'])
]
py.iplot({ 'data': grade_data,
'layout': {
'title': 'Exam Grades Heat Map',
'xaxis': {
'title': 'Exam Version'},
'yaxis': {
'title': 'Time To Complete Exam'}
}})
# +
# A manual sample, showing you don't need to use Pandas at all.
trace0 = go.Scatter(
    x=[1,2,3,4,5,6,7,8],
    y=[10, 15, 13, 17, 15, 12, 10, 18],
    mode="markers",
    name="series 2"
)
trace1 = go.Scatter(
    x=[1,2,3,4,5,6,7,8],
    y=[16, 5, 11, 9, 16, 10, 14, 12],
    mode="lines",
    name="series 1"
)
data = [trace0, trace1]
py.iplot({ 'data': data,
'layout': {
'title': 'Sample Chart',
'xaxis': {
'title': 'X Axis'},
'yaxis': {
'title': 'Y Axis'}
}})
# -
# File: content/lessons/13/Watch-Me-Code/WMC2-Plotly.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tse]
# language: python
# name: conda-env-tse-py
# ---
# +
# default_exp models
# -
# export
from fastai.text import *
from transformers import RobertaModel, RobertaConfig
from tse.preprocessing import *
from tse.tokenizers import *
from tse.datasets import *
#export
def get_roberta_model(path_to_dir="../roberta-base/"):
conf = RobertaConfig.from_pretrained(path_to_dir)
conf.output_hidden_states = True
model = RobertaModel.from_pretrained(path_to_dir, config=conf)
# outputs: (final_hidden, pooled_final_hidden, (embedding + 12 hidden))
return model
model = get_roberta_model()
# ### Create databunch
# Preprocess
train_df = pd.read_csv("../data/train.csv").dropna().reset_index(drop=True)
test_df = pd.read_csv("../data/test.csv")
strip_text(train_df, "text")
strip_text(train_df, "selected_text")
strip_text(test_df, "text")
replace_whitespace(train_df, "text")
replace_whitespace(train_df, "selected_text")
replace_whitespace(test_df, "text")
replace_URLs(train_df, "text")
replace_URLs(train_df, "selected_text")
replace_URLs(test_df, "text")
replace_user(train_df, "text")
replace_user(train_df, "selected_text")
replace_user(test_df, "text")
is_wrong = train_df.apply(lambda o: is_wrong_selection(o['text'], o['selected_text']), 1)
train_df = train_df[~is_wrong].reset_index(drop=True)
from sklearn.model_selection import KFold
kfold = KFold(5, shuffle=True, random_state=42)
fold_idxs = list(kfold.split(train_df))
for i, (trn_idx, val_idx) in enumerate(fold_idxs): train_df.loc[val_idx, "val_fold"] = int(i)
tokenizer = init_roberta_tokenizer("../roberta-base/vocab.json", "../roberta-base/merges.txt", 192)
train_df.val_fold.value_counts()
# get fold dfs
trn_df = train_df[train_df['val_fold'] != 0]
val_df = train_df[train_df['val_fold'] == 0]
# get fold inputs
train_inputs = QAInputGenerator.from_df(trn_df, tokenizer=tokenizer)
valid_inputs = QAInputGenerator.from_df(val_df, tokenizer=tokenizer)
#export
do_tfms = {}
do_tfms["random_left_truncate"] = {"p":.3}
do_tfms["random_right_truncate"] = {"p":.3}
do_tfms["random_replace_with_mask"] = {"p":.3, "mask_p":0.2}
do_tfms
# fold ds
train_ds = TSEDataset(train_inputs, tokenizer, is_test=False, do_tfms=do_tfms)
valid_ds = TSEDataset(valid_inputs, tokenizer, is_test=False)
train_ds[1]
data = DataBunch.create(train_ds, valid_ds, bs=32, val_bs=64)
xb,yb = data.one_batch()
xb, yb
# ### TSEModel
# +
#export
noop_layer = Lambda(lambda x: x)
class QAHead(Module):
def __init__(self, p=0.5, hidden_size=768, num_hidden_states=2, use_ln=False):
self.ln0 = nn.LayerNorm(hidden_size*num_hidden_states) if use_ln else noop_layer
self.d0 = nn.Dropout(p)
self.l0 = nn.Linear(hidden_size*num_hidden_states, 2)
def forward(self, x):
return self.l0(self.d0(self.ln0(x)))
class TSEModel(Module):
def __init__(self, pretrained_model, head=QAHead(), num_hidden_states=2):
self.sequence_model = pretrained_model
self.head = head
self.num_hidden_states = num_hidden_states
def forward(self, *xargs):
inp = {}
inp["input_ids"] = xargs[0]
inp["attention_mask"] = xargs[1]
_, _, hidden_states = self.sequence_model(**inp)
x = torch.cat(hidden_states[-self.num_hidden_states:], dim=-1)
start_logits, end_logits = self.head(x).split(1, dim=-1)
return (start_logits.squeeze(-1), end_logits.squeeze(-1))
# -
# ### loss
tse_model = TSEModel(model, QAHead(use_ln=True, num_hidden_states=2), num_hidden_states=2)
out = tse_model(*xb)
out
#export
class CELoss(Module):
    "single backward by concatenating both start and end logits with correct targets"
def __init__(self): self.loss_fn = nn.CrossEntropyLoss()
def forward(self, inputs, start_targets, end_targets):
start_logits, end_logits = inputs
logits = torch.cat([start_logits, end_logits]).contiguous()
targets = torch.cat([start_targets, end_targets]).contiguous()
return self.loss_fn(logits, targets)
loss_fn = CELoss()
loss_fn(out, *yb)
#export
class LSLoss(Module):
    "single backward by concatenating both start and end logits with correct targets"
def __init__(self, eps=0.1): self.loss_fn = LabelSmoothingCrossEntropy(eps=eps)
def forward(self, inputs, start_targets, end_targets):
start_logits, end_logits = inputs
logits = torch.cat([start_logits, end_logits]).contiguous()
targets = torch.cat([start_targets, end_targets]).contiguous()
return self.loss_fn(logits, targets)
loss_fn = LSLoss()
loss_fn(out, *yb)
# ### metric
#export
def jaccard(str1, str2):
a = set(str1.lower().split())
b = set(str2.lower().split())
c = a.intersection(b)
return float(len(c)) / (len(a) + len(b) - len(c))
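A worked instance of the word-level Jaccard score defined above, on two hypothetical tweet strings, computing |intersection| / (|a| + |b| - |intersection|) of the lowercased word sets:

```python
# Word-level Jaccard similarity between two short texts.
a = set("the cat sat on the mat".lower().split())
b = set("the cat lay on the mat".lower().split())
intersection = a & b  # {'the', 'cat', 'on', 'mat'}
score = len(intersection) / (len(a) + len(b) - len(intersection))
print(score)  # → 0.666... (4 shared words, 6 distinct words overall)
```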
#export
def get_best_start_end_idxs(_start_logits, _end_logits):
best_logit = -1000
best_idxs = None
for start_idx, start_logit in enumerate(_start_logits):
for end_idx, end_logit in enumerate(_end_logits[start_idx:]):
logit_sum = (start_logit + end_logit).item()
if logit_sum > best_logit:
best_logit = logit_sum
best_idxs = (start_idx, start_idx+end_idx)
return best_idxs
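A plain-Python sketch of the same exhaustive (start, end) search: it scores every pair with end >= start and keeps the pair whose summed logit is largest (hypothetical `best_pair` helper operating on float lists rather than tensors):

```python
# Exhaustive argmax over valid (start, end) index pairs.
def best_pair(start_logits, end_logits):
    best_logit, best_idxs = float('-inf'), None
    for start_idx, start_logit in enumerate(start_logits):
        for end_idx, end_logit in enumerate(end_logits[start_idx:]):
            if start_logit + end_logit > best_logit:
                best_logit = start_logit + end_logit
                best_idxs = (start_idx, start_idx + end_idx)
    return best_idxs

print(best_pair([0.1, 2.0, 0.3], [0.5, 0.2, 1.5]))  # → (1, 2)
```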
valid_ds.inputs[0]
xb[0]
valid_ds.inputs[0]
#export
class JaccardScore(Callback):
"Stores predictions and targets to perform calculations on epoch end."
def __init__(self, valid_ds):
self.valid_ds = valid_ds
self.offset_shift = 4
def on_epoch_begin(self, **kwargs):
self.jaccard_scores = []
self.valid_ds_idx = 0
def on_batch_end(self, last_input:Tensor, last_output:Tensor, last_target:Tensor, **kwargs):
input_ids, attention_masks = last_input[0], last_input[1].bool()
start_logits, end_logits = last_output
# mask select only context part
for i in range(len(input_ids)):
_input_ids = input_ids[i].masked_select(attention_masks[i])
_start_logits = start_logits[i].masked_select(attention_masks[i])[4:-1]
_end_logits = end_logits[i].masked_select(attention_masks[i])[4:-1]
start_idx, end_idx = get_best_start_end_idxs(_start_logits, _end_logits)
start_idx, end_idx = start_idx + self.offset_shift, end_idx + self.offset_shift
context_text = self.valid_ds.inputs[self.valid_ds_idx]['context_text']
offsets = self.valid_ds.inputs[self.valid_ds_idx]['offsets']
answer_text = self.valid_ds.inputs[self.valid_ds_idx]['answer_text']
start_offs, end_offs = offsets[start_idx], offsets[end_idx]
answer = context_text[start_offs[0]:end_offs[1]]
self.jaccard_scores.append(jaccard(answer, answer_text))
self.valid_ds_idx += 1
def on_epoch_end(self, last_metrics, **kwargs):
res = np.mean(self.jaccard_scores)
return add_metrics(last_metrics, res)
# ### Training
#export
def model_split_func(m, num_hidden_states):
"4 layer groups"
return (m.sequence_model.embeddings,
m.sequence_model.encoder.layer[:-num_hidden_states],
m.sequence_model.encoder.layer[-num_hidden_states:],
m.head)
learner = Learner(data, tse_model, loss_func=CELoss(), metrics=[JaccardScore(valid_ds)])
learner.model
split_fn = partial(model_split_func, num_hidden_states=2)
learner = learner.split(split_fn)
learner.freeze_to(-1)
# +
# early_stop_cb = EarlyStoppingCallback(learner, monitor='jaccard_score',mode='max',patience=2)
# save_model_cb = SaveModelCallback(learner,every='improvement',monitor='jaccard_score',name=f'{MODEL_TYPE}-qa-finetune')
# csv_logger_cb = CSVLogger(learner, f"training_logs_{foldnum}", True)
# +
# learner.to_fp16();
# learner.to_fp32();
# +
# learner.validate()
# -
# ### export
from nbdev.export import notebook2script
notebook2script()
# File: nbs/04-models.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import json
import numpy as np
# ### Load JSON Data
data = []
with open('topic_modeling_data.json') as data_file:
for line in data_file:
data.append(json.loads(line))
# ### Convert the json object to dataframe
documents = pd.DataFrame(data)
print(documents[:5])
topic=['Topic1','Topic2','Topic3','Topic4','Topic5']
prob =['Prob1','Prob2', 'Prob3', 'Prob4','Prob5']
for i in range(len(topic)):
documents[topic[i]]=np.nan
documents[prob[i]]=np.nan
print(documents[:5])
documents.head(2)
# ### Data Preprocessing
import gensim
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from nltk.stem import WordNetLemmatizer, SnowballStemmer
from nltk.stem.porter import *
np.random.seed(2018)
import nltk
nltk.download('wordnet')
# +
stemmer = SnowballStemmer('english')
def lemmatize_stemming(text):
return stemmer.stem(WordNetLemmatizer().lemmatize(text, pos='v'))
def preprocess(text):
result = []
for token in gensim.utils.simple_preprocess(text):
if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
result.append(lemmatize_stemming(token))
return result
# -
# Preprocess the data
processed_docs = documents['text'].map(preprocess)
# ### Bag of words on the dataset
dictionary = gensim.corpora.Dictionary(processed_docs)
# Saving the dictionary
dictionary.save('dictionary.gensim')
count = 0
for k, v in dictionary.iteritems():
print(k, v)
count += 1
if count > 10:
break
dictionary.filter_extremes(no_below=15, no_above=0.5, keep_n=50000)
bow_corpus = [dictionary.doc2bow(doc) for doc in processed_docs]
# Saving the corpus
import pickle
pickle.dump(bow_corpus, open('bow_corpus.pkl', 'wb'))
bow_corpus[150]
# +
bow_doc_1500 = bow_corpus[1500]
for i in range(len(bow_doc_1500)):
    print("Word {} (\"{}\") appears {} time.".format(bow_doc_1500[i][0],
                                                     dictionary[bow_doc_1500[i][0]],
                                                     bow_doc_1500[i][1]))
# -
# ### TF-IDF
from gensim import corpora, models
tfidf = models.TfidfModel(bow_corpus)
corpus_tfidf = tfidf[bow_corpus]
# +
from pprint import pprint
for doc in corpus_tfidf:
pprint(doc)
break
# -
# ### Running LDA using Bag of Words
lda_model = gensim.models.LdaMulticore(bow_corpus, num_topics=5, id2word=dictionary, passes=2, workers=2, random_state= 2)
# Saving the model
lda_model.save('lda_model.gensim')
for idx, topic in lda_model.print_topics(-1):
print('Topic: {} \nWords: {}'.format(idx, topic))
# ### Running LDA using TF-IDF
lda_model_tfidf = gensim.models.LdaMulticore(corpus_tfidf, num_topics=5, id2word=dictionary, passes=2, workers=4, random_state=2)
# Saving the model
lda_model.save('lda_model_tf_idf.gensim')
for idx, topic in lda_model_tfidf.print_topics(-1):
print('Topic: {} Word: {}'.format(idx, topic))
# ### Visualising the models
import pyLDAvis.gensim
lda_display = pyLDAvis.gensim.prepare(lda_model, bow_corpus, dictionary, sort_topics=False)
pyLDAvis.display(lda_display)
lda_display_tfidf = pyLDAvis.gensim.prepare(lda_model_tfidf, bow_corpus, dictionary, sort_topics=False)
pyLDAvis.display(lda_display_tfidf)
pyLDAvis.save_html(lda_display, 'pyLDAvis_output.html')
pyLDAvis.save_html(lda_display_tfidf, 'pyLDAvis_output_tfidf.html')
# ### Classification of the topics
# ### Performance evaluation by classifying sample document using LDA Bag of Words model
#
processed_docs[1500]
for index, score in sorted(lda_model[bow_corpus[1500]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model.print_topic(index, 10)))
# ### Performance evaluation by classifying sample document using LDA TF-IDF model
for index, score in sorted(lda_model_tfidf[bow_corpus[1500]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 10)))
# ### Testing model on seen document
bow_vector = dictionary.doc2bow(preprocess(documents['text'][3]))
for index, score in sorted(lda_model[bow_vector], key=lambda tup: -1*tup[1]):
print("Score: {}\t Topic: {}".format(score, lda_model.print_topic(index, 5)))
# ### Updating the topic and probability columns with respective values in decreasing order
for index in range(len(documents)):
topicsOrder=[]
bow_vector = dictionary.doc2bow(preprocess(documents['text'][index]))
topicsOrder = list(map(list, list(lda_model.get_document_topics(bow_vector))))
topicsOrder.sort(key=lambda x: x[1],reverse=True)
# Printing the sorted probabilities
# print(topicsOrder)
for i in range(len(topicsOrder)):
topic = "Topic"+str(i+1)
documents.at[index,topic] = (topicsOrder[i][0])
prob = "Prob"+str(i+1)
documents.at[index,prob] = (topicsOrder[i][1])
# Labeling the topic numbers in documents dataframe
# Map topic numbers to labels across the Topic1..Topic5 columns
topic_labels = {0: "First Topic", 1: "Second Topic", 2: "Third Topic",
                3: "Fourth Topic", 4: "Fifth Topic"}
for col in ["Topic1", "Topic2", "Topic3", "Topic4", "Topic5"]:
    documents[col] = documents[col].replace(topic_labels)
documents.head(5)
# ### Dropping the unnecessary columns from the dataframe documents
documents_dict=documents
documents_dict=documents_dict.drop('text',axis=1)
# ### Creating a list of dictionary in a proper format
documents_dict = documents_dict.replace(np.nan, '', regex=True)
# Creates a list of dictionaries
documents_dict = documents_dict.to_dict('records')
# Delete the keys with null values
for dict1 in documents_dict:
empty_key=[]
for key, value in dict1.items():
if(value == ''):
empty_key.append(key)
for name in empty_key:
dict1.pop(name)
# Saving the json file
with open('topic_document_lda_model.json', 'w') as f:
json.dump(documents_dict, f, ensure_ascii=False)
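# As a toy illustration of the records-plus-cleanup pattern above (hypothetical two-column data, not the real `documents` frame):

```python
import pandas as pd

# Hypothetical frame standing in for the real documents dataframe
df = pd.DataFrame({'Topic1': ['First Topic', 'Second Topic'],
                   'Prob1': [0.9, '']})

# to_dict('records') gives one dict per row; drop keys with empty values
records = [{k: v for k, v in row.items() if v != ''}
           for row in df.to_dict('records')]
print(records)
```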
# +
## Loading the dictionary, corpus and model to visualize the data
# dictionary = gensim.corpora.Dictionary.load('dictionary.gensim')
# corpus = pickle.load(open('bow_corpus.pkl', 'rb'))
# lda = gensim.models.ldamodel.LdaModel.load('lda_model.gensim')
# import pyLDAvis.gensim
# lda_display = pyLDAvis.gensim.prepare(lda, corpus, dictionary, sort_topics=False)
# pyLDAvis.display(lda_display)
# pyLDAvis.save_html(lda_display, 'pyLDAvis_output.html')
|
topic_modelling.ipynb
|
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Vanishing Gradients : Ungraded Lecture Notebook
# In this notebook you'll take another look at vanishing gradients from an intuitive standpoint.
# ## Background
# Adding layers to a neural network introduces multiplicative effects in both forward and backward propagation. Backprop in particular presents a problem, as the gradients of activation functions can be very small. Multiplied together across many layers, their product can be vanishingly small! This results in the weights of the front layers barely being updated and training not progressing.
# <br/><br/>
# Gradients of the sigmoid function, for example, are in the range 0 to 0.25. To calculate gradients for the front layers of a neural network the chain rule is used. This means that these tiny values are multiplied starting at the last layer, working backwards to the first layer, with the gradients shrinking exponentially at each step.
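# Before writing any network code, the shrinkage can be bounded with simple arithmetic: since the sigmoid gradient never exceeds 0.25, the gradient reaching the first of n sigmoid layers is at most 0.25**n. A quick sketch:

```python
# Upper bound on the gradient that survives n sigmoid layers:
# each layer multiplies by at most 0.25.
for n in (2, 5, 10):
    print(n, 0.25 ** n)
```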
# ## Imports
# + tags=[]
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ## Data, Activation & Gradient
#
# ### Data
# I'll start by creating some data; nothing special going on here, just some values spread across the interval -5 to 5.
# * Try changing the range of values in the data to see how it impacts the plots that follow.
#
# ### Activation
# The example here uses sigmoid() to squash the data x into the interval 0 to 1.
#
# ### Gradient
# This is the derivative of the sigmoid() activation function. It has a maximum of 0.25 at x = 0, the steepest point on the sigmoid plot.
# * Try changing the x value for finding the tangent line in the plot.
#
# <img src = 'sigmoid_tangent.png' width="width" height="height" style="height:250px;"/>
#
#
#
# +
# Data
# Interval [-5, 5]
### START CODE HERE ###
x = np.linspace(-5, 5, 100) # try changing the range of values in the data. eg: (-100,100,1000)
### END CODE HERE ###
# Activation
# Interval [0, 1]
def sigmoid(x):
return 1 / (1 + np.exp(-x))
activations = sigmoid(x)
# Gradient
# Interval [0, 0.25]
def sigmoid_gradient(x):
    # Note: expects the sigmoid *output* as input, since sigmoid'(z) = s * (1 - s)
    return (x) * (1 - x)
gradients = sigmoid_gradient(activations)
# Plot sigmoid with tangent line
plt.plot(x, activations)
plt.title("Sigmoid Steepest Point")
plt.xlabel("x input data")
plt.ylabel("sigmoid(x)")
# Add the tangent line
### START CODE HERE ###
x_tan = 0 # x value to find the tangent. try different values within x declared above. eg: 2
### END CODE HERE ###
y_tan = sigmoid(x_tan) # y value
span = 1.7 # line span along x axis
data_tan = np.linspace(x_tan - span, x_tan + span) # x values to plot
gradient_tan = sigmoid_gradient(sigmoid(x_tan)) # gradient of the tangent
tan = y_tan + gradient_tan * (data_tan - x_tan) # y values to plot
plt.plot(x_tan, y_tan, marker="o", color="orange", label=True) # marker
plt.plot(data_tan, tan, linestyle="--", color="orange") # line
plt.show()
# -
# ## Plots
# ### Sub Plots
# Data values along the x-axis of the plots on the interval chosen for x, -5 to 5. Subplots:
# - x vs x
# - sigmoid of x
# - gradient of sigmoid
#
# Notice how the y axis keeps compressing from the left plot to the right plot. The interval range has shrunk from 10 to 1 to 0.25. How did this happen? As |x| gets larger the sigmoid approaches asymptotes at 0 and 1, and the sigmoid gradient shrinks towards 0.
# * Try changing the range of values in the code block above to see how it impacts the plots.
# +
# Sub plots
fig, axs = plt.subplots(1, 3, figsize=(15, 4), sharex=True)
# X values
axs[0].plot(x, x)
axs[0].set_title("x values")
axs[0].set_ylabel("y=x")
axs[0].set_xlabel("x input data")
# Sigmoid
axs[1].plot(x, activations)
axs[1].set_title("sigmoid")
axs[1].set_ylabel("sigmoid")
axs[1].set_xlabel("x input data")
# Sigmoid gradient
axs[2].plot(x, gradients)
axs[2].set_title("sigmoid gradient")
axs[2].set_ylabel("gradient")
axs[2].set_xlabel("x input data")
fig.show()
# -
# ### Single Plot
# Putting all 3 series on a single plot can help visualize the compression. Notice how hard it is to interpret because sigmoid and sigmoid gradient are so small compared to the scale of the input data x.
# * Try changing the plot ylim to zoom in.
# + tags=[]
# Single plot
plt.plot(x, x, label="data")
plt.plot(x, activations, label="sigmoid")
plt.plot(x, gradients, label="sigmoid gradient")
plt.legend(loc="upper left")
plt.title("Visualizing Compression")
plt.xlabel("x input data")
plt.ylabel("range")
### START CODE HERE ###
# plt.ylim(-.5, 1.5) # try shrinking the y axis limit for better visualization. eg: uncomment this line
### END CODE HERE ###
plt.show()
# Max, Min of each array
print("")
print("Max of x data :", np.max(x))
print("Min of x data :", np.min(x), "\n")
print("Max of sigmoid :", "{:.3f}".format(np.max(activations)))
print("Min of sigmoid :", "{:.3f}".format(np.min(activations)), "\n")
print("Max of gradients :", "{:.3f}".format(np.max(gradients)))
print("Min of gradients :", "{:.3f}".format(np.min(gradients)))
# -
# ## Numerical Impact
# ### Multiplication & Decay
# Multiplying numbers smaller than 1 results in smaller and smaller numbers. Below is an example that finds the gradient for an input x = 0 and multiplies it over n steps. Look how quickly it 'vanishes' to almost zero. Yet sigmoid(x=0) = 0.5, whose gradient of 0.25 happens to be the largest sigmoid gradient possible!
# <br/><br/>
# (Note: This is NOT an implementation of back propagation.)
# * Try changing the number of steps n.
# * Try changing the input value x. Consider the impact on sigmoid and sigmoid gradient.
# + tags=[]
# Simulate decay
# Inputs
### START CODE HERE ###
n = 6 # number of steps : try changing this
x = 0 # value for input x : try changing this
### END CODE HERE ###
grad = sigmoid_gradient(sigmoid(x))
steps = np.arange(1, n + 1)
print("-- Inputs --")
print("steps :", n)
print("x value :", x)
print("sigmoid :", "{:.5f}".format(sigmoid(x)))
print("gradient :", "{:.5f}".format(grad), "\n")
# Loop to calculate cumulative total
print("-- Loop --")
vals = []
total_grad = 1 # initialize to 1 to satisfy first loop below
for s in steps:
total_grad = total_grad * grad
vals.append(total_grad)
print("step", s, ":", total_grad)
print("")
# Plot
plt.plot(steps, vals)
plt.xticks(steps)
plt.title("Multiplying Small Numbers")
plt.xlabel("Steps")
plt.ylabel("Cumulative Gradient")
plt.show()
# -
# ## Solution
# One solution is to use activation functions that don't have tiny gradients. Other solutions involve more sophisticated model design. But they're both discussions for another time.
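# As a minimal sketch of one such alternative (ReLU, assuming NumPy): its gradient is 1 for any positive input, so products of gradients across layers don't decay the way sigmoid gradients do.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def relu_gradient(z):
    # 1 where the input is positive, 0 elsewhere
    return (z > 0).astype(float)

z = np.linspace(-5, 5, 11)
print(relu_gradient(z))  # zeros for z <= 0, ones for z > 0
```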
|
Natural Language Processing with Sequence Models/Week 3 - LSTMs and Named Entity Recognition/C3_W3_Lecture_Notebook_Vanishing_Gradients.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Unusual Rotas
#
# All of our rota examples so far have had very reasonable date representations. Let's look at another example [unusual1.xlsx](unusual1.xlsx):
#
# | A | B | C |
# |:---:|:---:|:---:|
# | OCT | | |
# | 1 | SpR3 | |
# | 2 | SpR1 | |
# | 3 | SpR6(JR) | SpR5 doing 5th Oct |
# | 4 | SpR2 | |
# | 5 | SpR5 | SpR6 doing 3rd Oct |
# | 6 | SpR3 | |
# | 7 | SpR2 | SpR4 doing 4th Nov |
# | 8 | SpR2 | SpR4 doing 5th Nov |
# | 9 | SpR5 | |
# | 10 | SpR2 | |
# | 11 | SpR4 | SpR6 doing 3 Nov |
# | 12 | SpR3 | |
# | 13 | SpR2 | SpR4 doing 11 Oct |
# | 14 | SpR1 | |
# | 15 | SpR1 | |
# | 16 | SpR2 | |
# | 17 | SpR6 | |
# | 18 | SpR3 | |
# | 19 | SpR4 | |
# | 20 | SpR6 | SpR1 doing 3rd November |
# | 21 | SpR5 | |
# | 22 | SpR5 | |
# | 23 | SpR3 | SpR6 Doing 24 oct |
# | 24 | SpR6 | SpR3 doing 23rd Oct |
# | 25 | SpR4 | |
# | 26 | SpR1 | |
# | 27 | SpR5 | |
# | 28 | SpR2 | |
# | 29 | SpR2 | |
# | 30 | SpR5 | UCH 3 doing |
# | 31 | SpR4 | |
# | NOV | | |
# | 1 | SpR1 | |
#
# ## First steps - loading the rota
# OK, so how do we deal with a rota like this...? Let's first try loading it and see what we've got...
# +
from xlrd_helper import DictReader
rows = []
with open('unusual1.xlsx', 'rb') as f:
rows = [ row for row in DictReader(f)]
print(rows[0].keys())
print(rows[0])
# -
# Now, looking carefully, we see that the fieldnames aren't quite right (nor is the data quite what we were expecting, but we'll get to that). It turns out that this rota doesn't have headers.
#
# We have two options: we can drop down to using the Reader implementation, or we can pass in some fieldnames. Let's look at passing in some fieldnames. I'll also pass in a `restkey`, which will allow us to pick up anything placed in other columns.
# +
from xlrd_helper import DictReader
rows = []
fieldnames = ['date', 'oncall', 'additional']
with open('unusual1.xlsx', 'rb') as f:
rows = [ row for row in DictReader(f, fieldnames=fieldnames,
restkey='other')]
print(rows[0].keys())
print(rows[0])
# -
# Now, let's go back to why the first piece of data is 'Jun' when we were expecting 'Oct'...
#
# It turns out that if you look at the excel file, the first 503 rows are hidden. Getting the information about hidden rows requires adding a few options to the `DictReader` and querying the workbook ourselves:
# +
from xlrd_helper import DictReader
rows = []
fieldnames = ['date', 'oncall', 'additional']
try:
with open('unusual1.xlsx', 'rb') as f:
dr = DictReader(f, fieldnames=fieldnames, restkey='other', formatting_info=True)
rows = [ row for row in dr]
print(rows[0].keys())
print(rows[0])
print(dr.reader.sheet.rowinfo_map)
except NotImplementedError:
print('Formatting info not implemented in xlrd for XLSX')
# -
# Now, it turns out there is another package out there for doing this: `openpyxl`. However, we should probably consider whether we want to go down this path... If the rota co-ordinator is just hiding rows then it means that the starting point for the file is unlikely to change - whereas the first hidden row is likely to change.
#
# So thinking on we probably don't need to know whether a row is hidden or not. (It may be helpful in future though.)
#
#
# ## Looking at the first column
# Let's look at the contents of the first column.
# +
from xlrd_helper import DictReader
fieldnames = ['date', 'oncall', 'additional']
with open('unusual1.xlsx', 'rb') as f:
dr = DictReader(f, fieldnames=fieldnames, restkey='other')
rows = [ row for row in dr ]
dates = [ row['date'] for row in rows]
print(dates)
# -
# There are 4 things in there: a month indicator, a day, a year indicator, or a date
# +
import re
day_match = re.compile(r'^\d\d?$')
year_match = re.compile(r'^\d\d\d\d$')
date_match = re.compile(r'^\d\d\d\d/\d\d/\d\d$')
days, years, actual_dates, others = [], [], [], []
for i, d in enumerate(dates):
if day_match.match(d):
days.append((i, d))
elif year_match.match(d):
years.append((i, d))
elif date_match.match(d):
actual_dates.append((i, d))
else:
others.append((i, d))
print ('No. of days:\t%3d' % len(days))
print ('No. of years:\t%3d' % len(years))
print ('No. of dates:\t%3d' % len(actual_dates))
print ('Others:\t\t%3d' % len(others))
# -
# So we should just check those others and make sure that they actually are months, and whilst we're at it let's check that there is only on-call information in the rows that have days or dates.
# +
for i, o in others:
print (i, o)
print(all((all((rows[i]['oncall'] == '' for i,_ in others)),
all((rows[i]['oncall'] == '' for i,_ in years)),
all((rows[i]['oncall'] != '' for i,_ in days)),
all((rows[i]['oncall'] != '' for i,_ in actual_dates)))))
# -
# ## Parsing the first column
# How can we parse this? Either the row contains an actual date, or it's one of the following: a year, a month, or a day. So if we have a good idea of the preceding date we can just adjust our date, and in fact `dateutil.parser.parse` will interpret a given date string in the context of another default date.
#
# Looking at the column data carefully we can see that the first row refers to June 2016. We could work backwards from the dates, but let's go forwards for the moment, so let's set our default date to June 1st 2016.
# +
from datetime import date
from dateutil.parser import parse
oncall = {}
today = date(2016, 6, 1)
for i, row in enumerate(rows):
d = row['date']
c = row['oncall']
a = row['additional']
# Parse our new date in the context of the previous date
today = parse(d, default=today)
# If we're setting an oncall person
if c != '':
if today in oncall:
print('Duplicate: ', today, i, d)
else:
oncall[today] = (c, a)
# -
# Now that's strange... How did that happen? Let's catch that error and add some logging:
# +
from datetime import date
from dateutil.parser import parse
oncall = {}
today = date(2016, 6, 1)
for i, row in enumerate(rows):
d = row['date']
c = row['oncall']
a = row['additional']
try:
# Parse our new date in the context of the previous date
yesterday = today
today = parse(d, default=yesterday)
# If we're setting an oncall person
if c != '':
if today in oncall:
print('Duplicate: ', today, i, d)
else:
oncall[today] = (c, a)
except ValueError as e:
print(e)
print('Row %d with day value: %s, (Value Above: %s, Below: %s)' % (i, d, rows[i -1]['date'], rows[i + 1]['date']))
# -
# The observant will have noticed that for some reason there are a couple of rows which have `0` instead of `10`. There are no other `0`s in the column.
#
# Now understanding the duplicates requires a bit more thought. The first duplicate is in row 378 and the last duplicate is interpreted as 2016-12-31 in row 597. Now row 598 is `2018` - so I think we're not getting the change of year right. Let's look at what's supposed to happen when we go over the year boundary.
today = date(2016, 12, 31)
print(parse('Jan', default=today))
# Which obviously doesn't work. We'll just have to catch this case and manage the changeover ourselves.
# +
from dateutil.parser import parse
oncall = {}
today = date(2016, 6, 1)
for i, row in enumerate(rows):
d = row['date']
c = row['oncall']
a = row['additional']
try:
# Parse our new date in the context of the previous date
yesterday = today
if d == '0':
d = '10'
if today.month == 12 and today.day == 31:
today = date(today.year + 1, today.month, today.day)
today = parse(d, default=today)
# If we're setting an oncall person
if c != '':
if today in oncall:
print('Duplicate: ', today, i, d)
else:
oncall[today] = (c, a)
except ValueError as e:
print(e)
print('Row %d with day value: %s, (Value Above: %s, Below: %s)' % (i, d, rows[i -1]['date'], rows[i + 1]['date']))
# -
# ## Other considerations
# The whole rota runs from 2016-2018. Obviously, it's not much help for us to have a rota for the whole of that range, so we would need to set some starting and end dates for the range. We may as well have these hardcoded for the moment, but it's not hard to think of a way to pass these in as a parameter if you want.
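# One hypothetical way to pass the range in from the command line (the `--start`/`--end` flags and `parse_range` helper are my invention, not part of the existing script):

```python
import argparse
from datetime import date

def parse_range(argv=None):
    # Accept ISO dates on the command line, defaulting to the hardcoded range
    p = argparse.ArgumentParser()
    p.add_argument('--start', type=date.fromisoformat, default=date(2017, 12, 6))
    p.add_argument('--end', type=date.fromisoformat, default=date(2018, 3, 7))
    args = p.parse_args(argv)
    return (args.start, args.end)

print(parse_range(['--start', '2018-01-01']))
```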
#
# ## Creating the rota
# So we could create a new reader from scratch, or we could instead adjust one of our previous readers. It's simpler to just adjust an old reader so let's do that.
#
# Let's recall the structure of multi_rota3.py:
#
# ```python
# ### IMPORTS
# ...
# ### CONSTANTS
# ...
# HOURS = { ... }
# ...
# ### FUNCTIONS
# ...
# SPELLING_CORRECTIONS = { ... }
# UNNECESSARY_ADDITIONAL_INFORMATION_RES = [ ... ]
#
# def strip_unnecessary_information(name):
# ...
#
# def autocorrect(name):
# ...
#
# AM_PM_SPLIT_RE = re.compile('(.*) \(?(am)\)? (.*) \(?(pm)\)?')
# def role_split(role_string):
# ...
#
# def munge_role(name, role, row):
# ...
#
# ## Conversion functions
# def convert_to_date(date_str):
# ...
#
# ## Calendar functions
# def create_calendar_for(name, job, role_rows_list):
# ...
#
# def create_event_for(name, role, row):
# ...
#
# ## File reading functions
# def read_csv(fname, handler, sheet, *args, **kwds):
# ...
#
# def read_excel(fname, handler, sheet=0, *args, **kwds):
# ...
#
# def read(fname, handler, sheet=0, *args, **kwds):
# ...
#
# ## Reading functions
# def handle_rows(rows):
# ...
#
# ## Check last names functions
# def check_last_names(nj_to_r_rows, directory):
# ...
#
# ## Writing functions
# def create_calendars(nj_to_r_rows, directory):
# ...
#
# ## Main function
# def parse_file_and_create_calendars(fname, sheet, directory):
# from os.path import exists
# rows_data = read(fname, handle_rows, sheet)
#
# if not exists(directory):
# from os import makedirs
# makedirs(directory)
# check_last_names(rows_data, directory)
# create_calendars(rows_data, directory)
#
# ### MAIN
# if __name__ == '__main__':
# ...
# ```
# ### `HOURS`
# This is simple, our shifts are all day in this case
#
# ```python
# HOURS = {
# 'On-Call': {
# 'duration': timedelta(days=1)
# },
# }
# ```
#
# ### `BETWEEN` and `START_DAY`
# We should add a default `BETWEEN` and `START_DAY`.
#
# ```python
# BETWEEN = (date(2017, 12, 6), date(2018, 3, 7))
#
# START_DAY = date(2016,1,1)
# ```
#
# ### `SPELLING_CORRECTIONS` and `munge_role` etc.
# We don't need to keep any of the role munging functions as there's no roles in this rota. We can probably keep the spelling correction functions as these may come in handy.
#
# ### `convert_to_date`
# If we think about the way we parsed the rota above we already did the conversion - so we don't need this function either. (Of course we'll have to adjust any code that uses it.)
#
# ### `create_calendar_for`, `create_event_for` and `handle_rows`
# In multi_rota3.py `handle_rows` returns a dictionary of name job pairs to a list of role and rows pairs. Previous iterations had a dictionary of name to rows.
#
# Our code above creates a dictionary of dates to the person on-call, so to keep the change simple we should create a dictionary of name to list of days on-call with additional information.
def handle_rows(rows):
"""Store the rota information by name and job"""
today = START_DAY
on_call = {}
for i, row in enumerate(rows):
if row[0] == '0':
row[0] = '10'
try:
if today.month == 12 and today.day == 31:
today = date(today.year + 1, today.month, today.day)
today = dateutil.parser.parse(row[0], default=today)
if row[1] != '':
if today in on_call:
print('Duplicate: ', today, row)
else:
on_call[today] = (autocorrect(row[1]), row[2])
except Exception:
print('Weird row[', i, ']:', row)
name_to_dates = defaultdict(list)
for day in on_call:
name, additional = on_call[day]
name_to_dates[name].append((day, name, additional))
name_to_dates['All'].append((day, name, additional))
return name_to_dates
# Which means we can change our `create_calendar_for` function to:
def create_calendar_for(name, dates, between):
"""Create a calendar for name in job using the provided rows"""
# Create a basic iCalendar object
cal = Calendar()
# These two lines are required but you can change the prodid slightly
cal.add('prodid', '-//hacksw/handcal/NONSGML v1.0//EN')
cal.add('version', '2.0')
# This means that your calendar gets a nice default name
cal.add('x-wr-calname', 'Unusual-1 on-call rota for %s' % (name))
# Now open the rota
if name == 'All':
for day, name, additional in dates:
if (day >= between[0] and day < between[1]):
if day.weekday() == 5: # SAT
# Get a day off before
cal.add_component(
create_event_for('Lieu',
day - timedelta(days=1),
'',
name))
cal.add_component(create_event_for('On-Call',
day,
additional,
name))
if day.weekday() < 4 or day.weekday() == 6: # MON-THURS or SUN
# Get a day off afterwards
cal.add_component(
create_event_for('Lieu',
day + timedelta(days=1),
'',
name))
else:
for day, name, additional in dates:
# OK first of all create the on-call event for this day
if (day >= between[0] and day < between[1]):
if day.weekday() == 5: # SAT
# Get a day off before
cal.add_component(
create_event_for('Lieu', day - timedelta(days=1)))
cal.add_component(create_event_for('On-Call', day, additional))
if day.weekday() < 4 or day.weekday() == 6: # MON-THURS or SUN
# Get a day off afterwards
cal.add_component(
create_event_for('Lieu', day + timedelta(days=1)))
return cal
# Finally we have to adjust the `create_event_for` method.
def create_event_for(role, day, additional='', name=''):
"""Create an icalendar event for this row for name and role"""
event = Event()
# Munge the role
# Description should say who else is in department.
description = role + \
(': %s' % name if name != '' else '') + \
(' (%s)' % additional if additional != '' else '')
event.add('description', description)
# Make the summary the same as the description
event.add('summary', description)
if 'start' in HOURS[role]:
# If we have a start time in the HOURS dictionary for this role
# - combine it with date
event.add('dtstart',
datetime.combine(
day,
HOURS[role]['start']))
else:
# Otherwise just use the date
event.add('dtstart', day)
if 'duration' in HOURS[role]:
event.add('duration', HOURS[role]['duration'])
else:
if (HOURS[role]['end'] > HOURS[role]['start']):
event.add('dtend',
datetime.combine(
day,
HOURS[role]['end']))
else:
# OK so the end is before the start?
# simply add a day on to the date and then combine
event.add('dtend',
datetime.combine(
day + timedelta(days=1),
HOURS[role]['end']))
event.add('dtstamp', datetime.now())
event.add('location', 'At work') # Set this to something useful
event.add('uid', uuid.uuid4())
return event
# ### `check_last_names`
# We'll have to adjust `check_last_names` to be more like the simpler version in multi_rota1.py - but otherwise it's a simple copy.
def check_last_names(names_to_dates, directory, between):
"""Check from the previous run of this parser if there are new names,
returns a dictionary of names to number of rows"""
from os.path import exists, join
from csv import DictReader, DictWriter
last_names = {}
# Read the last names
if exists(join(directory, 'last_names.csv')):
with open(join(directory, 'last_names.csv')) as f:
r = DictReader(f)
for row in r:
last_names[row['name']] = int(row['number'])
name_to_number_of_rows = {}
with open(join(directory, 'last_names.csv'), 'w') as f:
w = DictWriter(f, ['name', 'number'])
w.writeheader()
for name in names_to_dates:
# number is the sum of rows for each role for this name, job pair
number = len([day for day, _, _ in names_to_dates[name]
if day >= between[0] and day < between[1]])
if name not in last_names:
# We have a new name
print('New name in rota: %s with %d rows' % (name, number))
w.writerow({'name': name, 'number': number})
name_to_number_of_rows[name] = number
return name_to_number_of_rows
# So if we put all those things into a rota reader we get a working rota reader for the unusual rota [unusual1.py](unusual1.py).
|
unusual-rotas/part-1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
x = ['15°','30°','45°','60°','75°','90°','105°','120°','135°','150°','165°','180°']
y = ['15°','30°','45°','60°','75°','90°','105°','120°','135°','150°','165°','180°']
y.reverse()
z_goal3 = [[1.0, 0.4, 0.6, 0.4, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.0, 0.4],
[0.2, 0.6, 0.4, 0.6, 0.4, 0.2, 0.4, 0.4, 0.2, 0.6, 0.2, 0.2],
[0.2, 0.0, 0.4, 0.4, 0.2, 0.4, 0.2, 0.6, 0.4, 0.4, 0.2, 0.4],
[0.6, 0.0, 0.4, 0.8, 0.2, 0.0, 0.0, 0.0, 0.4, 0.2, 0.2, 0.2],
[0.4, 0.2, 0.6, 0.2, 0.0, 0.4, 0.4, 0.6, 0.4, 0.6, 0.2, 0.4],
[0.2, 0.6, 0.0, 0.2, 0.4, 0.4, 0.0, 0.0, 0.2, 0.0, 0.0, 0.2],
[0.0, 0.4, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.2, 0.0],
[0.4, 0.0, 0.2, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.2, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.2, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
]
z_goal3.reverse()
z_goal6 = [[2.2, 2.0, 1.8, 2.0, 3.2, 2.0, 1.8, 2.6, 3.0, 3.0, 1.0, 3.4],
[1.6, 2.6, 1.6, 2.0, 1.8, 1.6, 2.8, 1.6, 1.6, 2.8, 3.4, 2.4],
[2.2, 2.4, 1.4, 2.2, 2.0, 2.0, 2.2, 1.0, 2.6, 2.8, 2.0, 3.2],
[1.4, 1.6, 0.6, 1.8, 2.6, 1.2, 2.2, 2.0, 2.4, 1.6, 2.4, 1.8],
[1.0, 1.4, 1.2, 1.6, 1.4, 2.2, 2.0, 1.4, 1.6, 1.8, 1.6, 1.8],
[2.2, 1.0, 2.0, 1.4, 1.8, 1.0, 1.4, 1.2, 1.8, 0.6, 1.2, 1.2],
[1.0, 1.2, 0.6, 1.2, 0.8, 0.4, 1.4, 1.0, 1.2, 1.0, 1.8, 1.2],
[0.6, 0.6, 0.8, 0.6, 0.4, 0.2, 0.0, 0.6, 0.6, 0.2, 0.4, 0.8],
[0.6, 0.2, 0.2, 0.4, 0.0, 0.0, 0.0, 0.2, 0.2, 0.0, 0.2, 0.0],
[1.2, 0.0, 0.2, 0.2, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0],
[1.2, 0.6, 0.4, 0.0, 0.2, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.4, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0],
]
z_goal6.reverse()
z_goal9 = [[6.2, 7.0, 7.2, 8.6, 7.0, 6.2, 8.0, 7.0, 7.2, 6.6, 5.8, 7.2],
[6.8, 5.0, 4.4, 4.8, 4.8, 5.4, 4.0, 5.4, 4.0, 5.6, 3.8, 5.2],
[6.8, 4.8, 3.2, 4.2, 5.4, 6.2, 5.4, 2.8, 3.6, 4.0, 5.0, 5.0],
[3.6, 4.4, 2.8, 2.6, 3.4, 4.2, 5.0, 3.6, 4.2, 4.0, 4.6, 4.0],
[4.2, 3.8, 3.0, 3.6, 2.4, 3.2, 3.2, 3.8, 4.4, 2.6, 4.2, 4.6],
[4.4, 4.6, 3.2, 2.6, 2.2, 1.8, 3.6, 2.6, 4.4, 3.0, 2.4, 3.6],
[3.2, 1.8, 2.8, 1.8, 2.6, 2.4, 1.8, 1.6, 2.2, 2.2, 2.6, 1.4],
[2.8, 1.6, 1.2, 1.4, 0.4, 1.4, 1.0, 0.6, 0.8, 1.0, 1.0, 0.8],
[2.2, 1.4, 0.8, 1.0, 0.6, 0.2, 0.0, 0.2, 0.2, 0.4, 0.0, 0.0],
[2.2, 1.4, 1.0, 0.4, 0.2, 0.2, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
[2.2, 1.6, 1.2, 0.8, 0.2, 0.6, 0.2, 0.2, 0.2, 0.0, 0.0, 0.2],
[1.4, 0.8, 1.0, 0.6, 0.2, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
]
z_goal9.reverse()
z_closest3 = [[0.0, 0.2, 0.2, 0.4, 0.0, 0.0, 0.2, 0.0, 0.4, 0.2, 0.2, 0.6],
[0.4, 0.0, 0.2, 0.2, 0.0, 0.2, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0],
[0.2, 0.2, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2],
[0.2, 0.0, 0.2, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.6, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
]
z_closest3.reverse()
z_closest6 = [[2.0, 1.4, 0.8, 0.6, 0.4, 1.2, 0.6, 0.4, 1.0, 1.2, 1.0, 1.4],
[1.4, 0.8, 1.8, 0.4, 1.0, 0.2, 0.6, 0.4, 0.6, 0.1, 0.8, 0.0],
[0.8, 0.4, 0.6, 0.2, 0.6, 0.6, 0.8, 0.0, 0.2, 0.8, 0.8, 0.4],
[1.2, 0.8, 1.0, 0.4, 0.4, 0.6, 0.0, 0.2, 0.4, 0.6, 0.4, 0.4],
[1.2, 0.6, 1.0, 0.6, 0.6, 0.2, 0.4, 0.6, 0.2, 0.2, 0.4, 0.0],
[1.8, 1.2, 0.6, 0.6, 0.8, 0.4, 0.2, 0.4, 0.4, 0.6, 0.2, 0.8],
[1.4, 1.0, 0.8, 0.4, 0.4, 0.8, 0.2, 0.0, 0.2, 0.0, 0.0, 0.0],
[0.6, 0.2, 0.2, 0.0, 0.6, 0.4, 0.2, 0.4, 0.2, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.2, 0.4, 0.4, 0.0, 0.0, 0.2, 0.2, 0.0],
[0.6, 0.6, 0.0, 0.0, 0.2, 0.0, 0.0, 0.2, 0.0, 0.0, 0.2, 0.0],
[0.0, 0.4, 0.2, 0.4, 0.4, 0.0, 0.0, 0.2, 0.0, 0.0, 0.2, 0.0],
[1.2, 0.6, 0.4, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0]
]
z_closest6.reverse()
z_closest9 = [[2.4, 2.6, 1.8, 2.2, 2.8, 2.4, 2.0, 1.8, 1.4, 1.2, 1.8, 2.2],
[3.0, 2.2, 1.6, 2.0, 1.4, 1.4, 1.2, 2.2, 0.8, 1.2, 2.2, 0.6],
[2.8, 3.0, 2.2, 0.6, 2.0, 0.6, 0.6, 1.2, 1.2, 0.8, 1.4, 0.8],
[2.8, 1.8, 2.2, 1.2, 1.0, 1.4, 0.2, 0.8, 0.0, 0.4, 1.0, 0.6],
[2.8, 2.2, 2.0, 0.8, 1.0, 1.0, 0.4, 0.4, 1.0, 0.4, 1.6, 0.8],
[3.4, 2.2, 1.2, 1.0, 1.2, 0.6, 0.6, 0.2, 1.0, 0.4, 1.2, 0.4],
[3.8, 2.4, 1.4, 1.6, 0.4, 0.2, 2.0, 0.2, 1.6, 1.0, 1.0, 1.0],
[1.8, 1.4, 2.0, 0.8, 0.6, 0.8, 0.2, 0.2, 0.2, 0.6, 0.8, 0.8],
[3.0, 1.6, 0.2, 1.0, 1.2, 0.4, 0.6, 0.6, 0.8, 0.6, 0.4, 0.6],
[3.4, 2.0, 0.6, 0.4, 0.8, 0.2, 0.2, 0.4, 0.4, 0.8, 0.4, 0.6],
[2.0, 2.2, 1.8, 0.4, 0.4, 0.6, 0.2, 0.6, 0.6, 0.4, 0.6, 0.8],
[2.2, 1.4, 1.2, 0.8, 0.8, 0.6, 0.4, 0.6, 0.0, 0.4, 0.8, 0.6]
]
z_closest9.reverse()
z_most3 = [[0.2, 0.2, 0.2, 0.0, 0.0, 0.4, 0.2, 0.2, 0.2, 0.2, 0.2, 0.4],
[0.0, 0.0, 0.4, 0.4, 0.0, 0.2, 0.4, 0.4, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.2, 0.0, 0.0],
[0.0, 0.2, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.4, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]
]
z_most3.reverse()
z_most6 = [[2.4, 1.2, 0.8, 1.2, 0.8, 1.2, 1.4, 1.8, 1.2, 1.4, 1.8, 2.4],
[1.6, 1.8, 0.8, 1.4, 2.0, 1.2, 0.4, 1.4, 2.4, 1.2, 0.2, 1.4],
[1.8, 1.4, 1.0, 1.6, 0.8, 1.4, 0.6, 1.6, 0.6, 0.8, 1.2, 0.8],
[1.4, 0.6, 0.8, 1.0, 0.4, 0.6, 0.6, 0.6, 0.6, 1.0, 0.4, 0.4],
[1.0, 0.4, 0.8, 0.2, 0.2, 0.0, 0.6, 0.2, 0.4, 0.2, 0.4, 0.0],
[1.0, 0.4, 0.0, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.2],
[0.6, 0.4, 0.4, 0.2, 0.2, 0.0, 0.0, 0.2, 0.2, 0.0, 0.0, 0.0],
[0.6, 0.2, 0.4, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.2, 0.0],
[0.8, 0.0, 0.4, 0.4, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.8, 0.4, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.4, 0.2, 0.2, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
]
z_most6.reverse()
z_most9 = [[3.4, 2.8, 2.4, 1.4, 3.0, 1.4, 1.6, 2.2, 2.0, 2.4, 1.4, 1.2],
[3.6, 1.0, 2.8, 2.4, 3.6, 3.8, 2.4, 1.2, 2.2, 1.4, 2.2, 1.4],
[3.2, 3.2, 3.8, 2.6, 2.6, 1.8, 1.8, 1.6, 3.4, 1.8, 3.0, 2.2],
[3.0, 1.4, 2.8, 1.6, 1.6, 1.0, 0.8, 1.2, 1.0, 1.0, 2.8, 1.4],
[3.2, 2.4, 1.2, 1.2, 0.6, 0.6, 0.4, 0.2, 0.4, 1.4, 0.4, 1.2],
[2.0, 1.2, 1.4, 0.4, 0.2, 0.0, 0.2, 0.2, 0.6, 0.4, 0.4, 0.2],
[2.2, 2.0, 0.2, 0.2, 0.0, 0.2, 0.4, 0.2, 0.2, 0.0, 0.0, 0.2],
[1.8, 2.0, 0.2, 0.8, 0.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0],
[2.2, 1.0, 0.6, 0.8, 0.2, 0.2, 0.0, 0.0, 0.0, 0.0, 0.2, 0.2],
[2.6, 0.8, 0.4, 0.2, 0.2, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0],
[2.0, 0.6, 1.2, 0.2, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.4, 1.6, 0.4, 0.4, 0.2, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0]
]
z_most9.reverse()
# +
import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(3,3, x_title="FOV width α", y_title="FOV height β",
vertical_spacing = 0.1,
subplot_titles=[ "M=Goal, N=3",
"M=Goal, N=6",
"M=Goal, N=9",
"M=Closest, N=3",
"M=Closest, N=6",
"M=Closest, N=9",
"M=Most, N=3",
"M=Most, N=6",
"M=Most, N=9"])
trace = go.Heatmap(x=x, y=y, z=z_goal3, coloraxis = "coloraxis")
fig.add_trace(trace, row=1, col=1)
trace = go.Heatmap(x=x, y=y, z=z_goal6, coloraxis = "coloraxis")
fig.add_trace(trace, row=1, col=2)
trace = go.Heatmap(x=x, y=y, z=z_goal9, coloraxis = "coloraxis")
fig.add_trace(trace, row=1, col=3)
trace = go.Heatmap(x=x, y=y, z=z_closest3, coloraxis = "coloraxis")
fig.add_trace(trace, row=2, col=1)
trace = go.Heatmap(x=x, y=y, z=z_closest6, coloraxis = "coloraxis")
fig.add_trace(trace, row=2, col=2)
trace = go.Heatmap(x=x, y=y, z=z_closest9, coloraxis = "coloraxis")
fig.add_trace(trace, row=2, col=3)
trace = go.Heatmap(x=x, y=y, z=z_most3, coloraxis = "coloraxis")
fig.add_trace(trace, row=3, col=1)
trace = go.Heatmap(x=x, y=y, z=z_most6, coloraxis = "coloraxis")
fig.add_trace(trace, row=3, col=2)
trace = go.Heatmap(x=x, y=y, z=z_most9, coloraxis = "coloraxis")
fig.add_trace(trace, row=3, col=3)
fig.update_layout(coloraxis = {'colorscale':["#fff9f0", "#fdb73e", "#B16B00", "#4B0500", "#180000"]})
fig.update_layout({
'paper_bgcolor': 'rgba(0, 0, 0, 0)',
'plot_bgcolor': 'rgba(50, 50, 0, 0.02)'
})
fig.update_layout(
font_color="#fdb73e",
font_size=9,
height=700,
width=630
)
fig.update_layout(
title={
'text': "Average number of collisions between <br> agents over 5 simulations",
'y':0.96,
'x':0.51,
'font_size': 20})
fig.update_traces(
hovertemplate="FOV width α: %{x}<br>FOV height β: %{y}<br>Average collisions: %{z}<extra></extra>"
)
fig.show()
fig.write_html("results.html")
# +
import chart_studio
import chart_studio.plotly as py
import chart_studio.tools as tls
username = 'ThomasKimble'
api_key = '<KEY>'
chart_studio.tools.set_credentials_file(username=username, api_key=api_key)
# -
py.plot(fig, filename='agent_collisions', auto_open=False)
|
_includes/project_data/swarm/results.ipynb
|
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ kernelspec:
/ display_name: SQL
/ language: sql
/ name: SQL
/ ---
/ + [markdown] azdata_cell_guid="43ddc3bc-4985-40c0-91bd-d4ddc87d69a4"
/ # Querying Views
/ + azdata_cell_guid="351ea31e-2ef5-4a0f-b733-a61cd6de532e" tags=[]
-- Store CustomerID, FirstName, LastName, Address, City, StateProvince into a view database object
CREATE VIEW SalesLT.vCustomerAddress
AS
SELECT c.CustomerID, c.FirstName, c.LastName, a.AddressLine1, a.City, a.StateProvince
FROM SalesLT.Customer AS c JOIN SalesLT.CustomerAddress AS ca ON c.CustomerID = ca.CustomerID
JOIN SalesLT.Address as a ON ca.AddressID = a.AddressID
/ + azdata_cell_guid="64d60a37-9c7d-4f43-ba69-f2c58a0260a2"
-- Select from view
SELECT CustomerID, City
FROM SalesLT.vCustomerAddress
/ + azdata_cell_guid="5934e08d-83de-4b8d-bc6d-f1542f99a5cf" tags=[]
-- Display SalesRevenue by StateProvince and City
SELECT ca.StateProvince, ca.City, ISNULL(SUM(soh.TotalDue), 0.0) AS SalesRevenue
FROM SalesLT.vCustomerAddress as ca
LEFT JOIN SalesLT.SalesOrderHeader as soh
ON ca.CustomerID = soh.CustomerID
GROUP BY ca.StateProvince, ca.City
ORDER BY ca.StateProvince, ca.City
/ + [markdown] azdata_cell_guid="1e81fe2e-1cc6-4290-bac3-9a7152be9a40"
/ # Using Temporary tables and Table variables
/ + azdata_cell_guid="4758fce3-b701-42fb-b11b-c362aaa295b6"
-- Create temporary table (stored in tempdb as tied to user session)
CREATE TABLE #Colors
(Color varchar(20));
INSERT INTO #Colors
SELECT DISTINCT Color FROM SalesLT.Product;
SELECT * FROM #Colors;
/ + azdata_cell_guid="0d8b3d8e-a9be-49f3-b35d-6235b9ccd64b"
-- Create Table variable (scoped to the batch of statements ran)
DECLARE @Colors AS TABLE
(Color varchar(20));
INSERT INTO @Colors
SELECT DISTINCT Color FROM SalesLT.Product;
SELECT * FROM @Colors;
/ + azdata_cell_guid="27bf10c7-1528-4c4d-b4d1-75439954837e"
-- New batch
SELECT * FROM #Colors; -- Scoped to user session
SELECT * FROM @Colors; -- Scoped to the previous batch, so this fails
/ + [markdown] azdata_cell_guid="2d37adbc-3c47-496a-897b-1e3bae14c3d8"
/ # Querying Table-Valued Functions
/ + azdata_cell_guid="03db3e69-4dc4-4e32-8b3b-e50d8d6afe59"
-- Create a function that takes as input a City and returns Customers
CREATE FUNCTION SalesLT.udfCustomersByCity(@City AS varchar(20))
RETURNS TABLE
AS
RETURN
(
SELECT c.CustomerID, c.FirstName, c.LastName, a.AddressLine1, a.City, a.StateProvince
FROM SalesLT.Customer AS c LEFT JOIN SalesLT.CustomerAddress AS ca ON c.CustomerID = ca.CustomerID
LEFT JOIN SalesLT.Address AS a ON ca.AddressID = a.AddressID
WHERE City = @City
)
/ + azdata_cell_guid="0140f54d-1521-4396-9463-1252a2d9350b"
-- Querying table-valued function
SELECT * FROM SalesLT.udfCustomersByCity('Bellevue')
/ + [markdown] azdata_cell_guid="5df55b12-42fd-426c-a59d-7bb8cf550f27"
/ # Using Derived Tables
/ + azdata_cell_guid="563a3800-52f2-4b54-af4f-aceba8eee507"
-- Find number of products for each product category
SELECT ProdCats.CategoryName, COUNT(ProdCats.ProductID) AS ProductCount
FROM (SELECT p.ProductID, p.Name AS ProductName, pc.Name AS CategoryName
FROM SalesLT.Product AS p JOIN SalesLT.ProductCategory AS pc ON p.ProductCategoryID = pc.ProductCategoryID) AS ProdCats
GROUP BY CategoryName
ORDER BY CategoryName
/ + [markdown] azdata_cell_guid="f7bf49b6-b01d-4f36-ba23-20668b7dab52"
/ # Using Common Table Expressions (CTEs)
/ + azdata_cell_guid="93798d40-c67c-4b5f-82a0-ab7996059c12"
-- Find number of products for each product category
WITH ProductsByCategory(ProductID, ProductName, CategoryName)
AS (
SELECT p.ProductID, p.Name, pc.Name
FROM SalesLT.Product AS p JOIN SalesLT.ProductCategory AS pc
ON p.ProductCategoryID = pc.ProductCategoryID
)
SELECT CategoryName, COUNT(ProductID) AS ProductCount
FROM ProductsByCategory
GROUP BY CategoryName
ORDER BY CategoryName
/ + azdata_cell_guid="1187637e-c53f-4d92-8341-f2f5b7ed074e" tags=[]
-- Using CTEs to perform recursion
-- Using Employee table display ManagerID, EmployeeID, EmployeeName, Level (order in management hierarchy)
WITH OrgReport(ManagerID, EmployeeID, EmployeeName, Level)
AS (
-- Anchor query
SELECT e.ManagerID, e.EmployeeID, e.EmployeeName, 0
FROM SalesLT.Employee AS e
WHERE ManagerID IS NULL
UNION ALL
    -- Recursive query
SELECT e.ManagerID, e.EmployeeID, e.EmployeeName, Level + 1
FROM SalesLT.Employee AS e
JOIN OrgReport AS o
ON e.ManagerID = o.EmployeeID
)
SELECT * FROM OrgReport
OPTION(MAXRECURSION 3)
-- Note: We know there's only 3 levels in total in management hierarchy
|
Demos/module-7.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Install dependencies
# This pipeline requires:
#
# * HMMER3.1
# * mothur
# * RDP mcclust
# * python pandas, numpy, scipy, matplotlib, and screed packages.
#
# This tutorial was made on a **64-bit** Linux (Ubuntu) machine. If you have a 32-bit machine, the installer file links need to be changed for HMMER3.1 and mothur.
#
# #### HMMER3.0 or lower does not work due to a change in the HMM format (.hmm).
# ### Setup installation directory
# cd ~/Desktop/SSUsearch/
# mkdir -p ./external_tools
# cd ./external_tools
# mkdir -p ./bin
# ### Install HMMER
# !wget -c http://selab.janelia.org/software/hmmer3/3.1b1/hmmer-3.1b1-linux-intel-x86_64.tar.gz -O hmmer-3.1b1-linux-intel-x86_64.tar.gz
# !tar -xzvf hmmer-3.1b1-linux-intel-x86_64.tar.gz
# cp hmmer-3.1b1-linux-intel-x86_64/binaries/hmmsearch ./bin
# ### Install mothur
# !wget http://www.mothur.org/w/images/8/88/Mothur.cen_64.zip -O mothur.zip
# !unzip mothur.zip
# cp mothur/mothur ./bin
# ### Install RDP mcclust tool
# !wget http://athyra.oxli.org/~gjr/public2/misc/Clustering.tar.gz
# !tar -xzvf Clustering.tar.gz
# ### Install python packages
# ### Comment out the following cells if you already have the packages installed
# + magic_args="bash" language="script"
# source ~/.bashrc
# -
# !pip install -U pip
# !pip install screed
# !pip install brewer2mpl
# !pip install biom-format
# ### Install numpy, matplotlib, scipy, and pandas
# Alternatively, you can install **anaconda** that have most popular python packages installed: https://store.continuum.io/cshop/anaconda/
# +
# # !pip install numpy matplotlib scipy pandas
# -
# ### check dependencies installed
import os
New_path = '{}:{}'.format('~/Desktop/SSUsearch/external_tools/bin/', os.environ['PATH'])
print New_path
os.environ.update({'PATH':New_path})
# !make -f ~/Desktop/SSUsearch/Makefile tool_check Hmmsearch=hmmsearch Mothur=mothur Mcclust_jar=~/Desktop/SSUsearch/external_tools/Clustering/dist/Clustering.jar
|
notebooks-pc-linux/.ipynb_checkpoints/pipeline-dependency-installation-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import rdkit
from rdkit.Chem import Draw
from rdkit.Chem import AllChem
from molreps.graph import MolGraph
from molreps.methods.mol_rdkit import rdkit_add_conformer
import networkx as nx
from molreps.methods.mol_py3d import MolTo3DView
smile = "C\C=C(/F)\C(=C\F)\C=C"
# smile = 'CC(C)(C)NC[C@@H](C1=CC(=C(C=C1)O)CO)O'
m = rdkit.Chem.MolFromSmiles(smile)
m = rdkit.Chem.AddHs(m) # add H's to the molecule
# rdkit.Chem.AssignStereochemistry(m) # Assign Stereochemistry
rdkit.Chem.FindPotentialStereo(m) # Assign Stereochemistry new method
m
# If no coordinates are known, do embedding with rdkit
AllChem.EmbedMolecule(m)
AllChem.MMFFOptimizeMolecule(m)
AllChem.EmbedMultipleConfs(m, numConfs=5)
# Plot molecule 3D
MolTo3DView(m)
# Chem.MolToMolFile(m)
m.SetProp("_Name",smile)
m.SetProp("MolFileComments","Energy: 1.234 eV") # Maybe not use both Info and Comments
m.SetProp("MolFileInfo","DataBase: 1")
print(rdkit.Chem.MolToMolBlock(m))
rdkit.Chem.MolToMolFile(m,"mol_1.mol",confId=0)
# Or Alternative PDBBlock
# No Commentline possible here.
# Additional Info external.
# Or write a wrapper with
m.SetProp("PDBFileComments","Test")
print(rdkit.Chem.MolToPDBBlock(m))
rdkit.Chem.MolToPDBFile(m,"mol_1.pdb")
print(rdkit.Chem.MolToV3KMolBlock(m))
print(rdkit.Chem.MolToTPLBlock(m))
|
examples/mol_to_file.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Author : <NAME>
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import plotly as ply
# ### Data Pre-processing
appdata=pd.read_csv('AppleStore.csv')
appdata.columns
# #### As the data contains two rating trends for analysis, i.e. ratings for the current version and ratings for all released versions, I will explore the current version and its rating trend.
appdata=appdata.drop(['Unnamed: 0','vpp_lic','currency'],axis=1)
appdata
appdata['Size_GB']=appdata['size_bytes']/(1024**3)  # bytes -> GB (dividing by 1024*1024 would give MB)
appdata
appdata.rename(columns={'track_name':'app_name','cont_rating':'content_rate',
'prime_genre':'genre','rating_count_tot':'versions_rating',
'rating_count_ver':'version_rating','sup_devices.num':'supp_devices','ipadSc_urls.num':'screen_shots_displayed',
'lang.num':'supp_lang_num'},inplace=True)
appdata
# ### DATA CLEANING
appdata=appdata.loc[:,['app_name','genre','user_rating_ver','version_rating','price','supp_devices','screen_shots_displayed','size_bytes']]
appdata
appdata.head()
appdata=appdata.sort_values(by=['user_rating_ver','version_rating'],ascending=False)
appdata.head(10)
# #### Top paid Apps
paidapps=appdata[appdata['price']>0.0]
paidapps.count()
paidapps=paidapps.sort_values(by=['price'],ascending=False)
paidapps.head()
# ### Paid Apps by Category
paid_apps=paidapps.groupby(['genre']).count()
paid_apps['app_name'].plot(kind='barh',
figsize=(10,6),
alpha=0.98)
plt.xlabel('Frequency Count')
plt.ylabel('Category')
plt.title('Paid Apps Category Wise')
plt.show()
# ### To find the ratings of the apps related to games
games=appdata.loc[appdata['genre']=='Games']
games
gamesapps=games.groupby(['user_rating_ver']).count()
gamesapps['app_name'].plot(kind='barh',
figsize=(10,6),
alpha=0.98)
plt.xlabel('Frequency Count')
plt.ylabel('Rating')
plt.title('Games Classified by User Rating')
plt.show()
# ### Here we will find the most used categories of apps that were rated five stars, by both parameters, i.e. paid and free
top_rated=appdata.loc[appdata['user_rating_ver']==5.0]
top_rated
paid_apps=top_rated[top_rated['price']>0]
rated_paid_apps=paid_apps.sort_values('version_rating',ascending=False)
top_rated_paid_apps=rated_paid_apps.groupby(by='genre').count()
top_rated_paid_apps=top_rated_paid_apps['app_name']
top_rated_paid_apps
free_apps=top_rated[top_rated['price']==0.0]
rated_free_apps=free_apps.sort_values('version_rating',ascending=False)
top_rated_free_apps=rated_free_apps.groupby(by='genre').count()
top_rated_free_apps=top_rated_free_apps['app_name']
top_rated_free_apps
genre=np.unique(appdata['genre'])
genre
# +
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Bar(
x=genre,
y=top_rated_free_apps,
name='Top Rated Free Apps',
marker=dict(
color='rgb(49,130,189)'
)
)
trace1 = go.Bar(
x=genre,
y=top_rated_paid_apps,
name='Top Rated Paid Apps',
marker=dict(
color='rgb(204,204,204)',
)
)
data = [trace0, trace1]
layout = go.Layout(
xaxis=dict(tickangle=-45),
barmode='group',
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='angled-text-bar')
# -
frame={'top_rated_free':top_rated_free_apps,'top_rated_paid':top_rated_paid_apps}
combined=pd.DataFrame(frame,index=genre)
combined.plot(kind='barh',
figsize=(10,6))
plt.xlabel('Rating Counts')
plt.ylabel('Genre')
plt.show()
# ### As majority of games were rated 4.5 star let's explore them
four_rated=games.loc[games.user_rating_ver==4.5]
four_rated
four_paid_apps=four_rated[four_rated['price']>0]
four_rated_paid_apps=four_paid_apps.sort_values('version_rating',ascending=False)
four_rated_paid_apps=four_rated_paid_apps.groupby(by='genre').count()
four_rated_paid_apps=four_rated_paid_apps['app_name']
four_rated_paid_apps
four_free_apps=four_rated[four_rated['price']==0.0]
four_rated_free_apps=four_free_apps.sort_values('version_rating',ascending=False)
four_rated_free_apps=four_rated_free_apps.groupby(by='genre').count()
four_rated_free_apps=four_rated_free_apps['app_name']
four_rated_free_apps
# ### There may be numerous other aspects or trends in this data, but these were the major ones I analyzed. Happy coding!
|
AppStore Analysis.ipynb
|
# ##### Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # coins_grid_mip
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/coins_grid_mip.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/coins_grid_mip.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2011 <NAME> <EMAIL>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Coins grid problem in Google CP Solver.
Problem from
<NAME>: "A coin puzzle - SVOR-contest 2007"
http://www.svor.ch/competitions/competition2007/AsroContestSolution.pdf
'''
In a quadratic grid (or a larger chessboard) with 31x31 cells, one should
place coins in such a way that the following conditions are fulfilled:
1. In each row exactly 14 coins must be placed.
2. In each column exactly 14 coins must be placed.
3. The sum of the quadratic horizontal distance from the main diagonal
of all cells containing a coin must be as small as possible.
4. In each cell at most one coin can be placed.
The description says to place 14x31 = 434 coins on the chessboard each row
containing 14 coins and each column also containing 14 coins.
'''
This is a MIP version of
http://www.hakank.org/google_or_tools/coins_grid.py
and uses the CBC MIP solver instead of the CP solver.

This model was created by <NAME> (<EMAIL>)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
from ortools.linear_solver import pywraplp
# Create the solver.
# using CBC
solver = pywraplp.Solver('CoinsGridCBC',
pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
# Using CLP
# solver = pywraplp.Solver('CoinsGridCLP',
#                          pywraplp.Solver.CLP_LINEAR_PROGRAMMING)
# data
n = 31 # the grid size
c = 14 # number of coins per row/column
# declare variables
x = {}
for i in range(n):
for j in range(n):
x[(i, j)] = solver.IntVar(0, 1, 'x[%i,%i]' % (i, j))
#
# constraints
#
# sum rows/columns == c
for i in range(n):
solver.Add(solver.Sum([x[(i, j)] for j in range(n)]) == c) # sum rows
solver.Add(solver.Sum([x[(j, i)] for j in range(n)]) == c) # sum cols
# quadratic horizontal distance objective
objective_var = solver.Sum(
[x[(i, j)] * (i - j) * (i - j) for i in range(n) for j in range(n)])
# objective
objective = solver.Minimize(objective_var)
#
# solution and search
#
solver.Solve()
for i in range(n):
for j in range(n):
# int representation
print(int(x[(i, j)].SolutionValue()), end=' ')
print()
print()
print()
print('walltime :', solver.WallTime(), 'ms')
# print('iterations:', solver.Iterations())
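# A returned grid can be sanity-checked against the row/column constraints and re-scored independently of the solver. A minimal stdlib sketch, using a circulant 0/1 matrix as a hypothetical stand-in solution (not the solver's optimum):

```python
def circulant_grid(n, c):
    # 0/1 grid with exactly c ones in every row and every column
    return [[1 if (j - i) % n < c else 0 for j in range(n)] for i in range(n)]

def check_and_score(x, c):
    n = len(x)
    assert all(sum(row) == c for row in x)                             # row sums
    assert all(sum(x[i][j] for i in range(n)) == c for j in range(n))  # column sums
    # quadratic horizontal distance from the main diagonal
    return sum(x[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

score = check_and_score(circulant_grid(7, 3), 3)
print(score)  # → 112
```

The circulant layout satisfies the counting constraints but wraps around the corners, so its distance score is far from optimal — the solver pushes coins toward the diagonal instead.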
|
examples/notebook/contrib/coins_grid_mip.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Download latest Jars
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
# +
# %sh
# cd ../../dbfs/FileStore/jars/
wget -O cudf-0.9.2.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/0.9.2/cudf-0.9.2.jar
wget -O xgboost4j_2.x-1.0.0-Beta5.jar https://search.maven.org/remotecontent?filepath=ai/rapids/xgboost4j_2.x/1.0.0-Beta5/xgboost4j_2.x-1.0.0-Beta5.jar
wget -O xgboost4j-spark_2.x-1.0.0-Beta5.jar https://search.maven.org/remotecontent?filepath=ai/rapids/xgboost4j-spark_2.x/1.0.0-Beta5/xgboost4j-spark_2.x-1.0.0-Beta5.jar
# ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
# -
# ### Create a Directory for your init script
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_2.x-1.0.0-Beta5.jar /databricks/jars/spark--maven-trees--ml--xgboost--ml.dmlc--xgboost4j--ml.dmlc__xgboost4j__0.81.jar
sudo cp /dbfs/FileStore/jars/cudf-0.9.2.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_2.x-1.0.0-Beta5.jar /databricks/jars/spark--maven-trees--ml--xgboost--ml.dmlc--xgboost4j-spark--ml.dmlc__xgboost4j-spark__0.81.jar""", True)
# ### Confirm your init script is in the new directory
# %sh
# cd ../../dbfs/databricks/init_scripts
pwd
# ls -ltr
# ### Download the Mortgage Dataset into your local machine and upload Data using import Data
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
# %sh
# cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
# ls
unzip mortgage.zip
# %sh
pwd
# cd ../../dbfs/FileStore/tables
# ls -ltr mortgage/csv/*
# ### Next steps
#
# 1. Edit your cluster, adding an initialization script from `dbfs:/databricks/init_scripts/init.sh` in the "Advanced Options" under "Init Scripts" tab
# 2. Reboot the cluster
# 3. Go to "Libraries" tab under your cluster and install `dbfs:/FileStore/jars/xgboost4j-spark_2.x-1.0.0-Beta5.jar` in your cluster by selecting the "DBFS" option for installing jars
# 4. Import the mortgage example notebook from `https://github.com/rapidsai/spark-examples/blob/master/examples/notebooks/python/mortgage-gpu.ipynb`
# 5. Inside the mortgage example notebook, update the data paths
# `train_data = GpuDataReader(spark).schema(schema).option('header', True).csv('dbfs:/FileStore/tables/mortgage/csv/train/mortgage_train_merged.csv')`
# `eval_data = GpuDataReader(spark).schema(schema).option('header', True).csv('dbfs:/FileStore/tables/mortgage/csv/test/mortgage_eval_merged.csv')`
|
getting-started-guides/csp/databricks/init-notebook-for-rapids-spark-xgboost-on-databricks-gpu-5.3-5.4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Phonological training corpus
#
# This notebook prepares the phonological training corpus for learning phonological representations. For the time being, I'm just including preprocessed corpora.
# +
import glob
for f in glob.glob('*.txt'):
with open(f, 'r') as g:
contents = g.read()
    contents = contents.lower()
with open(f, 'w') as g:
g.write(contents)
# -
|
semrep/data/training/phonological/phonological.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Examining pandas DataFrame contents
# It's useful to be able to quickly examine the contents of a DataFrame.
#
# Let's start by importing the pandas library and creating a DataFrame populated with information about airports
import pandas as pd
# +
airports = pd.DataFrame([
    ['Seattle-Tacoma', 'Seattle', 'USA'],
['Dulles', 'Washington', 'USA'],
['Heathrow', 'London', 'United Kingdom'],
['Schiphol', 'Amsterdam', 'Netherlands'],
['Changi', 'Singapore', 'Singapore'],
['Pearson', 'Toronto', 'Canada'],
['Narita', 'Tokyo', 'Japan']
],
columns = ['Name', 'City', 'Country']
)
airports
# -
# ## Returning first *n* rows
# If you have thousands of rows, you might just want to look at the first few rows
#
# * **head**(*n*) returns the top *n* rows
airports.head(3)
# ## Returning last *n* rows
# Looking at the last rows in a DataFrame can be a good way to check that all your data loaded correctly
# * **tail**(*n*) returns the last *n* rows
airports.tail(3)
# ## Checking number of rows and columns in a DataFrame
# Sometimes you just need to know how much data you have in the DataFrame
#
# * **shape** returns the number of rows and columns
airports.shape
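# Because `shape` is a plain `(rows, columns)` tuple, it can be unpacked directly; a small sketch on a toy DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Dulles', 'Changi'],
                   'Country': ['USA', 'Singapore']})
rows, cols = df.shape  # shape is a (rows, columns) tuple
print(rows, cols)  # → 2 2
```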
# ## Getting more detailed information about DataFrame contents
#
# * **info**() returns more detailed information about the DataFrame
#
# Information returned includes:
# * The number of rows, and the range of index values
# * The number of columns
# * For each column: column name, number of non-null values, the datatype
#
airports.info()
|
even-more-python-for-beginners-data-tools/04 - Examining Pandas DataFrame contents/04 - Exploring pandas DataFrame contents.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# %env CUDA_DEVICE_ORDER=PCI_BUS_ID
# %env CUDA_VISIBLE_DEVICES=1
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import pandas as pd
from cuml.manifold.umap import UMAP as cumlUMAP
from avgn.utils.paths import DATA_DIR, most_recent_subdirectory, ensure_dir
from avgn.signalprocessing.create_spectrogram_dataset import flatten_spectrograms
# ### load data
DATASET_ID = 'BIRD_DB_Vireo_cassinii'
df_loc = DATA_DIR / 'syllable_dfs' / DATASET_ID / 'cassins.pickle'
syllable_df = pd.read_pickle(df_loc)
del syllable_df['audio']
syllable_df[:3]
len(syllable_df)
top_indv_df = pd.DataFrame(
{indv: [np.sum(syllable_df.indv == indv)] for indv in syllable_df.indv.unique()}
).T.sort_values(by=[0], ascending=False)
top_indv_df[:15]
top_indvs = top_indv_df[:15].index
top_indvs
np.shape(syllable_df.spectrogram.values[0])
# ### project
for indv in tqdm(syllable_df.indv.unique()):
    subset_df = syllable_df[syllable_df.indv == indv].copy()  # copy to avoid SettingWithCopyWarning when adding the 'umap' column
if len(subset_df) < 100: continue
specs = list(subset_df.spectrogram.values)
specs = [i/np.max(i) for i in tqdm(specs, leave=False)]
specs_flattened = flatten_spectrograms(specs)
print(np.shape(specs_flattened))
cuml_umap = cumlUMAP()
embedding = cuml_umap.fit_transform(specs_flattened)
subset_df['umap'] = list(embedding)
unique_labs = np.unique(subset_df.labels.values)
unique_labs_dict = {lab:i for i, lab in enumerate(unique_labs)}
lab_list = [unique_labs_dict[i] for i in subset_df.labels.values]
fig, ax = plt.subplots()
ax.scatter(embedding[:,0], embedding[:,1], s=1, c=lab_list, cmap=plt.cm.tab20, alpha = 0.25)
#ax.set_xlim([-8,8])
#ax.set_ylim([-8,8])
plt.show()
ensure_dir(DATA_DIR / 'embeddings' / DATASET_ID / 'indvs')
subset_df.to_pickle(DATA_DIR / 'embeddings' / DATASET_ID / 'indvs' / (indv + '.pickle'))
|
notebooks/02.5-make-projection-dfs/.ipynb_checkpoints/cassins-umap-indv-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="LjL6WmHnFsDm"
# # Rethinking Statistics course in NumPyro - Week 1
# + [markdown] id="wIiAZbVJFsDo"
# Lecture 1: The Golem of Prague
#
# - [Video](https://www.youtube.com/watch?v=4WVelCswXo4)
# - [Slides](https://speakerdeck.com/rmcelreath/l01-statistical-rethinking-winter-2019)
#
# Lecture 2: Garden of Forking Data
#
# - [Video](https://www.youtube.com/watch?v=XoVtOAN0htU)
# - [Slides](https://speakerdeck.com/rmcelreath/l02-statistical-rethinking-winter-2019)
#
# [Proposed problems](https://github.com/gbosquechacon/statrethinking_winter2019/blob/master/homework/week01.pdf) and [solutions in R](https://github.com/gbosquechacon/statrethinking_winter2019/blob/master/homework/week01_solutions.pdf) for the exercises of the week.
# + executionInfo={"elapsed": 1511, "status": "ok", "timestamp": 1613851213838, "user": {"displayName": "Andr\u0<NAME>\u00e1rez", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="X3Nnt0q_0qUK"
import pandas as pd
import numpy as np
import scipy.stats as stats
import seaborn as sns
import matplotlib.pyplot as plt
# -
# %load_ext watermark
# %watermark -n -u -v -iv -w
sns.set_style('whitegrid')
# + [markdown] id="8nSZI75xFsDt"
# ## Short Intro
# + [markdown] id="zn67zjBwFsDu"
# In this short intro, I just play around a bit with the concepts of prior and posterior. I calculate both manually for the very simple globe tossing example mentioned in the lecture. You can jump to the actual homework by going to the next section.
# + executionInfo={"elapsed": 1500, "status": "ok", "timestamp": 1613851213843, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="4RehOSeMFsDu"
n=9 # tosses
k=6 # water
p=0.5 # water probability
# + [markdown] id="bClagg1OFsDy"
# How to generate binomials with `numpy`:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1478, "status": "ok", "timestamp": 1613851213844, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="MAtYlySc1wY4" outputId="aafda446-08b8-4c06-b4a0-703075b42c67"
stats.binom.rvs(1, p, size=9)
# + [markdown] id="OkkvadSjFsD6"
# Density function of a binomial:
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1466, "status": "ok", "timestamp": 1613851213845, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="cA4tDx9OFsD6" outputId="f9217595-cf00-47ed-cd99-2c1820db2655"
round(stats.binom.pmf(k, n, p), 2)
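# The pmf value above can be cross-checked by hand with the binomial formula C(n, k) · p^k · (1 − p)^(n − k); a minimal stdlib sketch (the helper name `binom_pmf` is ours, not scipy's):

```python
from math import comb

def binom_pmf(k, n, p):
    # probability of exactly k successes in n Bernoulli(p) trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 6 waters in 9 tosses with p = 0.5: C(9, 6) * 0.5**9 = 84 / 512
print(round(binom_pmf(6, 9, 0.5), 2))  # → 0.16
```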
# + [markdown] id="UOrR3MqMFsD9"
# Example:
# + executionInfo={"elapsed": 1464, "status": "ok", "timestamp": 1613851213846, "user": {"displayName": "<NAME>\u00e1rez", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="iGJc83WoqPVc"
def posterior_grid_binomial(n, k, s):
"""Posterior sample of binomial distribution using grid integration.
Args:
n (int): trials
k (int): successes
s (int): probability grid discretization
Returns:
posteriors (ndarray): posterior probabilities
"""
p_grid = np.linspace(0,1,s)
priors = np.ones(s)
likelihoods = stats.binom.pmf(k, n, p=p_grid)
posteriors = priors * likelihoods
posteriors = posteriors / sum(posteriors) # normalizing the posteriors
return posteriors
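# The same grid computation can be sketched without scipy, using `math.comb` for the likelihood. With a flat prior, the posterior mode should land on the grid point nearest k/n — this mirrors (but is not) the function above:

```python
from math import comb

def grid_posterior(n, k, s):
    # flat-prior grid posterior for a binomial likelihood (stdlib only)
    p_grid = [i / (s - 1) for i in range(s)]
    likelihoods = [comb(n, k) * p**k * (1 - p)**(n - k) for p in p_grid]
    total = sum(likelihoods)
    return p_grid, [l / total for l in likelihoods]

p_grid, post = grid_posterior(9, 6, 50)
mode = p_grid[post.index(max(post))]  # grid point closest to 6/9 ≈ 0.67
```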
# + executionInfo={"elapsed": 1000, "status": "ok", "timestamp": 1613851213847, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="7rNrkcQ4jOot"
n=9
k=6
s=50
posterior = posterior_grid_binomial(n, k, s)
sum(posterior)
# + [markdown] id="b8wb9B1qFsEF"
# Looks good. Plotting the posterior:
# + colab={"base_uri": "https://localhost:8080/", "height": 279} executionInfo={"elapsed": 821, "status": "ok", "timestamp": 1613851576236, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="jXlBSbH6AmfI" outputId="cfca2400-52b9-49a9-f0d0-bd4d5e145ecc"
aux = pd.DataFrame(posterior).rename({0:'prob'}, axis=1)
aux['p'] = aux.index/(len(aux)-1)  # grid has s points on [0, 1], so p = index/(s-1)
g = sns.lineplot(data=aux, x='p',y='prob')
sns.scatterplot(data=aux, x='p',y='prob', ax=g)
g.set(xlabel='probability', ylabel='density');
# + [markdown] id="88PI4HSwFsEI"
# Nice! Let's sample the posterior we just got:
# -
import jax
# + colab={"base_uri": "https://localhost:8080/", "height": 142} executionInfo={"elapsed": 653, "status": "ok", "timestamp": 1613851611707, "user": {"displayName": "Andr\u00e9<NAME>\u00e1rez", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="PfUEl3ELFsEJ" outputId="8c6bd73c-22c1-4b2f-c3b2-f2c36b766a5a"
p_grid = np.linspace(0,1,s)
samples = (pd.DataFrame(np.random.choice(p_grid, 5000, p=posterior))
.reset_index()
.rename({0:'prob'}, axis=1)
)
samples.tail(3) # just to see how it looks
# + colab={"base_uri": "https://localhost:8080/", "height": 279} executionInfo={"elapsed": 1834, "status": "ok", "timestamp": 1613851613153, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-s0kzcIwylzA/AAAAAAAAAAI/AAAAAAAAQXA/v8Sc6WgQy7c/s64/photo.jpg", "userId": "06409440331868776168"}, "user_tz": -60} id="-lvqsfNeu_tZ" outputId="7111141a-781f-4f1b-8438-20218e3f0dfd"
fig, axs = plt.subplots(ncols=2)
s = sns.scatterplot(data=samples, x='index', y='prob', marker='x', ax=axs[0])
s.set(xlabel='samples', ylabel='parameter p of the posterior')
h = sns.histplot(data=samples, x='prob', ax=axs[1])
h.set(xlabel='parameter p of the posterior', ylabel='number of records')
fig.set_size_inches(12,4)
# + [markdown] id="TP41qg62FsEP"
# You can think of the first plot as a bird's-eye view of the second one (rotated 90 degrees). Let's calculate the credible intervals.
# + id="e0_hz0WaFsER"
round(np.percentile(samples.prob.values, 2.5),2), round(np.percentile(samples.prob.values, 97.5),2)
# + [markdown] id="c9QACe2VFsEk"
# ## Exercise 1
# + [markdown] id="x9U5N9Y1FsEk"
# >Suppose the globe tossing data had turned out to be 8 water in 15 tosses. Construct the posterior distribution, using grid approximation. Use the same flat prior as before.
# + [markdown] id="e9RasUYWFsEl"
# Really all you need to do is modify the grid approximation code from Chapter 3 (there are constant references to the book that I will keep, just in case you guys want to check them out). If you replace 6 with 8 and 9 with 15, it'll work:
# + id="MeRFU-XmFsEl"
n = 15
k = 8
s = 101
p_grid_1 = np.linspace(0,1,s)
posterior_1 = posterior_grid_binomial(n, k, s)
samples = (pd.DataFrame(np.random.choice(p_grid_1, 5000, p=posterior_1))
.reset_index()
.rename({0:'prob'}, axis=1)
)
# + [markdown] id="u9a245VqFsEt"
# The posterior mean should be about 0.53 and the 99% percentile interval from 0.24 to 0.81.
# + id="dLbVzBnnFsEu"
round(np.mean(samples.prob),2)
# + id="yFHcuHr2FsEx"
round(np.percentile(samples.prob.values, 0.5),2), round(np.percentile(samples.prob.values, 99.5),2)
# + [markdown] id="GQxAeX3FFsEz"
# ## Exercise 2
# + [markdown] id="PONtBDo-FsE0"
# >Start over in 1, but now use a prior that is zero below $p = 0.5$ and a constant above $p = 0.5$. This corresponds to prior information that a majority of the Earth's surface is water. What difference does the better prior make? If it helps, compare posterior distributions (using both priors) to the true value $p = 0.7$.
# + [markdown] id="MmmPhVEnFsE0"
# Modifying only the prior:
# + id="BM5bu4-7FsE1"
n = 15
k = 8
p_grid_2 = np.linspace(0,1,101)
prob_p = np.concatenate((np.zeros(50), np.full(51,0.5)))
prob_data = stats.binom.pmf(k, n, p=p_grid_2)
posterior_2 = prob_data * prob_p
posterior_2 = posterior_2 / sum(posterior_2)
samples = (pd.DataFrame(np.random.choice(p_grid_2, 5000, p=posterior_2))
.reset_index()
.rename({0:'prob'}, axis=1)
)
# + [markdown] id="c9S_UPdjFsE8"
# The posterior mean should be about 0.61 and the 99% interval 0.50 to 0.82. This prior yields a posterior with more mass around the true value of 0.7.
# + id="ug6cIF-UFsE8"
round(np.mean(samples.prob),2)
# + id="lCYnQtebFsE_"
round(np.quantile(samples.prob.values, 0.005), 2), round(np.quantile(samples.prob.values, 0.995), 2)
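As a cross-check, the posterior mean can also be computed directly on the grid, avoiding sampling noise entirely (a sketch using the same step prior):

```python
import numpy as np
from scipy import stats

n, k = 15, 8
p_grid = np.linspace(0, 1, 101)
prior = np.concatenate((np.zeros(50), np.full(51, 0.5)))  # zero below p = 0.5
posterior = stats.binom.pmf(k, n, p=p_grid) * prior
posterior /= posterior.sum()
mean = (p_grid * posterior).sum()   # exact grid mean, no sampling noise
```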
# + [markdown] id="l-Iqtu-5FsFC"
# This is probably easier to see in a plot:
# + id="qtdbt7_cFsFC"
# some data wrangling to prepare the plot
aux = pd.DataFrame(posterior_1).rename({0:'prob'}, axis=1)
aux['p'] = aux.index/100
aux['posterior'] = 'posterior1'
aux2 = pd.DataFrame(posterior_2).rename({0:'prob'}, axis=1)
aux2['p'] = aux2.index/100
aux2['posterior'] = 'posterior2'
aux = pd.concat([aux, aux2], axis=0)
aux.head(3)
# + id="QszAPz5CyC8V"
g = sns.lineplot(data=aux, x='p', y='prob', hue='posterior')
g.set(xlabel='p', ylabel='density')
g.axvline(0.7, ls='--', c='r');
# + [markdown] id="go3Kl3WDFsFH"
# With the impossible values below 0.5 ruled out, the second model piles up more plausibility on the higher values near the true value. The data are
# still misleading it into thinking that values just above 0.5 are the most plausible. But the posterior mean of 0.61 is much better than the 0.53 from the previous
# problem. Informative priors, when based on real scientific information, help. Here the informative prior helps because there isn't much data. That is common in a lot of fields, ranging from astronomy to paleontology.
# + [markdown] id="-X7n4j-FFsFI"
# ## Exercise 3
# + [markdown] id="8bN142K_FsFJ"
# >This problem is more open-ended than the others. Feel free to collaborate on the solution. Suppose you want to estimate the Earth's proportion of water very precisely. Specifically, you want the 99% percentile interval of the posterior distribution of p to be only 0.05 wide. This means the distance between the upper and lower bound of the interval should be 0.05. How many times will you have to toss the globe to do this? I won't require a precise answer. I'm honestly more interested in your approach.
# + [markdown] id="LJCpthc-FsFK"
# One way to approach this problem is to try a range of sample sizes and plot the interval width for each. Here's some code to compute the posterior and get the interval width. There are other ways to compute the width, but this one stays closest to the code in the book. Since we want to do this for different values of $N$, it's convenient to wrap it in a function: calling it with $N = 20$ gives an interval width for 20 globe tosses. Notice that the interval width varies across simulations, and that this variation shrinks rapidly as $N$ increases, because larger samples differ less from one another. We then run simulations across a range of sample sizes to find where the interval shrinks to 0.05 in width.
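The function described above might look like the following (a sketch with a flat prior; `interval_width` and its defaults are illustrative names, not from the book):

```python
import numpy as np
from scipy import stats

def interval_width(n, true_p=0.7, s=1001, seed=0):
    """Width of the 99% percentile interval after n simulated globe tosses."""
    rng = np.random.default_rng(seed)
    k = rng.binomial(n, true_p)                  # simulated number of "water" tosses
    p_grid = np.linspace(0, 1, s)
    posterior = stats.binom.pmf(k, n, p=p_grid)  # flat prior: likelihood only
    posterior /= posterior.sum()
    samples = rng.choice(p_grid, 5000, p=posterior)
    lo, hi = np.percentile(samples, [0.5, 99.5])
    return hi - lo
```

Calling `interval_width(20)` gives a width for 20 tosses; mapping it over a grid of sample sizes locates where the width drops below 0.05.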
# + id="aF5oIMUcFsFK"
p=0.7
s=1001
akk = []
for n in [100, 1000, 10000, 100000]:
k=sum(np.random.binomial(1, p, n))
p_grid_3 = np.linspace(0,1,s)
posterior_3 = posterior_grid_binomial(n, k, s)
samples = (pd.DataFrame(np.random.choice(p_grid_3, 5000, p=posterior_3))
.reset_index()
.rename({0:'prob'}, axis=1)
.assign(n=n)
)
akk.append(samples)
    print(f'Distribution size: {n}, PI(0.5, 99.5): {np.round(np.quantile(samples.prob.values, 0.005), 3), np.round(np.quantile(samples.prob.values, 0.995), 3)}')
all_samples = pd.concat(akk).drop(['index'], axis=1)
# + id="5QA7O-l1h96k"
h = sns.kdeplot(data=all_samples, x='prob', hue='n', palette='tab10', fill=True)
h.set(xlabel='parameter p of the posterior', ylabel='density', yscale='log');
# + [markdown] id="jKQAlyuzFsFN"
# Looks like we need more than 2000 tosses of the globe to get an interval that precise. This is a general feature of learning from data: the greatest returns on learning come early on, and each additional observation contributes less and less, so it takes a great deal of effort to progressively reduce our uncertainty. If your application requires a very precise estimate, be prepared to collect a lot of data, or to change your approach.
|
statrethink_numpyro_w01.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp core
# -
# # module name here
#
# > API details.
#hide
from nbdev.showdoc import *
# +
#export
import json,tweepy,hmac,hashlib
from pdb import set_trace
from ipaddress import ip_address,ip_network
from http.server import HTTPServer, BaseHTTPRequestHandler
from fastcore.imports import *
from fastcore.foundation import *
from fastcore.utils import *
from fastcore.script import *
from configparser import ConfigParser
# +
#export
_cfg = ConfigParser(interpolation=None)
_cfg.read(['twitter.ini'])
_cfg = _cfg['DEFAULT']
globals().update(**_cfg)
gh_secret = bytes(gh_secret, 'utf-8')
_auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
_auth.set_access_token(access_token, access_token_secret)
_api = tweepy.API(_auth)
_whitelist = L(urljson('https://api.github.com/meta')['hooks']).map(ip_network)
# -
#export
def tweet_text(payload):
"Send a tweet announcing release based on `payload`"
rel_json = payload['release']
url = rel_json['url']
owner,repo = re.findall(r'https://api.github.com/repos/([^/]+)/([^/]+)/', url)[0]
tweet_tmpl = """New #{repo} release: v{tag_name}. {html_url}
{body}"""
res = tweet_tmpl.format(repo=repo, tag_name=rel_json['tag_name'], html_url=rel_json['html_url'], body=rel_json['body'])
if len(res)<=280: return res
return res[:279] + "…"
#export
def check_secret(content, headers):
digest = hmac.new(gh_secret, content, hashlib.sha1).hexdigest()
assert f'sha1={digest}' == headers.get('X-Hub-Signature')
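The signature check above can be exercised locally with a hand-rolled digest (the secret and payload here are hypothetical, not the real values from `twitter.ini`):

```python
import hmac, hashlib

secret = b"example-secret"            # hypothetical; the real value comes from twitter.ini
content = b'{"action": "released"}'   # hypothetical webhook body
signature = "sha1=" + hmac.new(secret, content, hashlib.sha1).hexdigest()

# mirrors check_secret: recompute the digest and compare with the X-Hub-Signature header
headers = {"X-Hub-Signature": signature}
digest = hmac.new(secret, content, hashlib.sha1).hexdigest()
assert hmac.compare_digest(f"sha1={digest}", headers["X-Hub-Signature"])
```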
#export
class _RequestHandler(BaseHTTPRequestHandler):
def do_POST(self):
if self.server.check_ip:
src_ip = ip_address(self.client_address[0])
assert any((src_ip in wl) for wl in _whitelist)
self.send_response(200)
self.end_headers()
content = self.rfile.read(int(self.headers.get('content-length')))
payload = json.loads(content.decode())
if payload['action']=='released':
check_secret(content, self.headers)
tweet = tweet_text(payload)
stat = _api.update_status(tweet)
print(stat.id)
self.wfile.write('ok'.encode(encoding='utf_8'))
#export
@call_parse
def run_server(hostname: Param("Host name or IP", str)='localhost',
port: Param("Port to listen on", int)=8000,
check_ip: Param("Check source IP against GitHub list", bool_arg)=True):
"Run a GitHub webhook server that tweets about new releases"
print(f"Listening on {hostname}:{port}")
with HTTPServer((hostname, port), _RequestHandler) as httpd:
httpd.check_ip = check_ip
httpd.serve_forever()
# +
# run_server(check_ip=True)
# +
#hide
# with HTTPServer(server_address, RequestHandler) as httpd: httpd.handle_request()
# rel = Path('release.json').read_text()
# wh_json = json.loads(Path('ping.json').read_text())
# _api.destroy_status(1311413699366678529);
# -
# ## Export -
#hide
from nbdev.export import notebook2script
notebook2script()
|
00_core.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Q#
# language: qsharp
# name: iqsharp
# ---
# # Multi-Qubit Gates
#
# This tutorial continues the introduction to quantum gates started in [this tutorial](../SingleQubitGates/SingleQubitGates.ipynb), focusing on applying quantum gates to multi-qubit systems.
#
# If you need a refresher on the representation of multi-qubit systems, we recommend you to review the [relevant tutorial](../MultiQubitSystems/MultiQubitSystems.ipynb).
#
# This tutorial covers the following topics:
#
# - Applying quantum gates to a part of the system
# - $\text{CNOT}$ and $\text{SWAP}$ gates
# - Controlled gates
# ## The Basics
#
# As a reminder, single-qubit gates are represented by $2\times2$ [unitary matrices](../LinearAlgebra/LinearAlgebra.ipynb#Unitary-Matrices).
# The effect of a gate applied to a qubit can be calculated by multiplying the corresponding matrix by the state vector of the qubit to get the resulting state vector.
#
# Multi-qubit gates are represented by $2^N\times2^N$ matrices, where $N$ is the number of qubits the gate operates on. To apply this gate, you multiply the matrix by the state vector of the $N$-qubit quantum system.
# ## Applying Gates to a Part of the System
#
# The simplest thing we can do with multi-qubit systems is to apply gates to only a subset of qubits in the system.
# Similar to how it is sometimes possible to represent the state of a multi-qubit system as a tensor product of single-qubit states,
# you can construct gates that modify the state of a multi-qubit system as tensor products of gates that affect parts of the system.
#
# Let's consider an example of applying single-qubit gates to one of the qubits of a two-qubit system.
# If you want to apply an $X$ gate to the first qubit of the system and do nothing to the second qubit,
# the resulting gate will be represented as a tensor product of an $X$ gate and the identity gate $I$ which corresponds to doing nothing:
#
# $$X \otimes I =
# \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} =
# \begin{bmatrix}
# 0 & 0 & 1 & 0 \\
# 0 & 0 & 0 & 1 \\
# 1 & 0 & 0 & 0 \\
# 0 & 1 & 0 & 0
# \end{bmatrix}$$
#
# You can use the same approach when applying several gates to independent parts of the system at the same time.
# For example, applying the $X$ gate to the first qubit and the $H$ gate to the second qubit would be represented as follows:
#
# $$X \otimes H =
# \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \otimes \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} =
# \frac{1}{\sqrt{2}}\begin{bmatrix}
# 0 & 0 & 1 & 1 \\
# 0 & 0 & 1 & -1 \\
# 1 & 1 & 0 & 0 \\
# 1 & -1 & 0 & 0
# \end{bmatrix}$$
#
# > Note that we can use [mixed-multiplication property of tensor product](../LinearAlgebra/LinearAlgebra.ipynb#Tensor-Product) to see that this is equivalent to applying $X$ gate to the first qubit and applying $H$ gate to the second qubit, in either order:
# >
# > $$X \otimes H = (I X) \otimes (H I) = (I \otimes H) (X \otimes I)$$
# > $$X \otimes H = (X I) \otimes (I H) = (X \otimes I) (I \otimes H)$$
#
# This approach can be generalized to larger systems and gates that act on multiple qubits as well.
# It can be less straightforward if a multi-qubit gate is applied to a subset of qubits that are not "adjacent" to each other in the tensor product; we'll see an example later in this tutorial.
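The tensor products above are easy to verify numerically; here is a small NumPy sketch (not part of the Q# kata):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

X_I = np.kron(X, I)   # X on the first qubit, identity on the second

# mixed-product property: X ⊗ H = (I ⊗ H)(X ⊗ I) = (X ⊗ I)(I ⊗ H)
assert np.allclose(np.kron(X, H), np.kron(I, H) @ np.kron(X, I))
assert np.allclose(np.kron(X, H), np.kron(X, I) @ np.kron(I, H))
```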
# ### <span style="color:blue">Exercise 1</span>: Compound Gate
#
# **Inputs:** $3$ qubits in an arbitrary superposition state $|\psi\rangle$, stored in an array of length 3.
#
# **Goal:** Apply the following matrix to the system. This matrix can be represented as applying $3$ single-qubit gates.
#
# $$Q = \begin{bmatrix}
# 0 & -i & 0 & 0 & 0 & 0 & 0 & 0 \\
# i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & -i & 0 & 0 & 0 & 0 \\
# 0 & 0 & i & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
# 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
# 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0
# \end{bmatrix}$$
#
# > We recommend to keep a list of common quantum gates on hand, such as [this tutorial](../SingleQubitGates/SingleQubitGates.ipynb).
#
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# Start by noticing that the top right and bottom left quadrants of the matrix are filled with $0$s, and the bottom right quadrant equals the top left one multiplied by $i$. Does this look like a tensor product of a 1-qubit matrix and a 2-qubit matrix? Which ones?
# </details>
# +
%kata T1_CompoundGate_Test
operation CompoundGate (qs : Qubit[]) : Unit is Adj {
// ...
}
# -
# *Can't come up with a solution? See the explained solution in the [Multi-Qubit Gates Workbook](./Workbook_MultiQubitGates.ipynb#Exercise-1:-Compound-Gate).*
# ## CNOT Gate
#
# Our first proper multi-qubit gate is the $\text{CNOT}$ ("controlled NOT") gate.
# The $\text{CNOT}$ gate is a two-qubit gate, the first qubit is referred to as the **control** qubit, and the second as the **target** qubit.
# $\text{CNOT}$ acts as a conditional gate of sorts: if the control qubit is in state $|1\rangle$, it applies the $X$ gate to the target qubit, otherwise it does nothing.
#
# > If the system is in a superposition of several basis states, the effects of the gate will be a linear combination of the effects of it acting separately on each of the basis states.
# > This will be the case for all quantum gates you'll encounter later that are specified in terms of basis states: since all unitary gates are linear, it is sufficient to define their effect on the basis states, and use linearity to figure out their effect on any state.
#
# <table>
# <col width=50>
# <col width=50>
# <col width=300>
# <col width=150>
# <col width=50>
# <tr>
# <th style="text-align:center; border:1px solid">Gate</th>
# <th style="text-align:center; border:1px solid">Matrix</th>
# <th style="text-align:center; border:1px solid">Applying to $|\psi\rangle = \alpha|00\rangle + \beta|01\rangle + \gamma|10\rangle + \delta|11\rangle$</th>
# <th style="text-align:center; border:1px solid">Applying to basis states</th>
# <th style="text-align:center; border:1px solid">Q# Documentation</th>
# </tr>
# <tr>
# <td style="text-align:center; border:1px solid">$\text{CNOT}$</td>
# <td style="text-align:center; border:1px solid">$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$</td>
# <td style="text-align:center; border:1px solid">$\text{CNOT}|\psi\rangle = \alpha|00\rangle + \beta|01\rangle + \color{red}\delta|10\rangle + \color{red}\gamma|11\rangle$</td>
# <td style="text-align:center; border:1px solid">$\text{CNOT}|00\rangle = |00\rangle \\
# \text{CNOT}|01\rangle = |01\rangle \\
# \text{CNOT}|10\rangle = |11\rangle \\
# \text{CNOT}|11\rangle = |10\rangle$</td>
# <td style="text-align:center; border:1px solid"><a href=https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.intrinsic.cnot>CNOT</a></td>
# </tr>
# </table>
# The $\text{CNOT}$ gate is particularly useful for preparing entangled states. Consider the following separable state:
#
# $$\big(\alpha|0\rangle + \beta|1\rangle\big) \otimes |0\rangle = \alpha|00\rangle + \beta|10\rangle$$
#
# If we apply the $\text{CNOT}$ gate to it, with the first qubit as the control, and the second as the target, we get the following state, which is not separable any longer:
#
# $$\alpha|00\rangle + \beta|11\rangle$$
#
# The $\text{CNOT}$ gate is self-adjoint: applying it for the second time reverses its effect.
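Both properties can be confirmed with a quick matrix calculation (a NumPy sketch, not part of the kata):

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

alpha, beta = 0.6, 0.8
separable = np.kron([alpha, beta], [1, 0])   # (α|0> + β|1>) ⊗ |0>
entangled = CNOT @ separable                  # α|00> + β|11>
assert np.allclose(entangled, [alpha, 0, 0, beta])

# self-adjoint: applying CNOT twice restores the original state
assert np.allclose(CNOT @ CNOT, np.eye(4))
```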
# ### <span style="color:blue">Exercise 2</span>: Preparing a Bell state
#
# **Input:** Two qubits in state $|00\rangle$, stored in an array of length 2.
#
# **Goal:** Transform the system into the Bell state $\Phi^+ = \frac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big)$.
# +
%kata T2_BellState_Test
operation BellState (qs : Qubit[]) : Unit is Adj {
// ...
}
# -
# *Can't come up with a solution? See the explained solution in the [Multi-Qubit Gates Workbook](./Workbook_MultiQubitGates.ipynb#Exercise-2:-Preparing-a-Bell-state).*
# ## Ket-bra Representation
#
# Same as in the case of single-qubit gates, we can represent multi-qubit gates using Dirac notation.
#
# > Recall that kets represent column vectors and bras represent row vectors. For any ket $|\psi\rangle$, the corresponding bra is its adjoint (conjugate transpose): $\langle\psi| = |\psi\rangle^\dagger$.
# >
# > Kets and bras are used to express [inner](../LinearAlgebra/LinearAlgebra.ipynb#Inner-Product) and [outer](../LinearAlgebra/LinearAlgebra.ipynb#Outer-Product) products. The inner product of $|\phi\rangle$ and $|\psi\rangle$ is the matrix product of $\langle\phi|$ and $|\psi\rangle$, denoted as $\langle\phi|\psi\rangle$, and their outer product is the matrix product of $|\phi\rangle$ and $\langle\psi|$, denoted as $|\phi\rangle\langle\psi|$.
# >
# > As we've seen in the [single-qubit gates tutorial](../SingleQubitGates/SingleQubitGates.ipynb#Ket-bra-Representation), kets and bras can be used to represent matrices. The outer product of two vectors of the same size produces a square matrix. We can use a linear combination of several outer products of simple vectors (such as basis vectors) to express any square matrix.
#
# Let's consider ket-bra representation of the $\text{CNOT}$ gate:
#
# $$\text{CNOT} = |00\rangle\langle00| + |01\rangle\langle01| + |10\rangle\langle11| + |11\rangle\langle10| = \\
# = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} +
# \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}\begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} +
# \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} +
# \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}\begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix} = \\ =
# \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} +
# \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} +
# \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} +
# \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix} =
# \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix}$$
#
# This representation can be used to carry out calculations in Dirac notation without ever switching back to matrix representation:
#
# $$\text{CNOT}|10\rangle
# = \big(|00\rangle\langle00| + |01\rangle\langle01| + |10\rangle\langle11| + |11\rangle\langle10|\big)|10\rangle = \\
# = |00\rangle\langle00|10\rangle + |01\rangle\langle01|10\rangle + |10\rangle\langle11|10\rangle + |11\rangle\langle10|10\rangle = \\
# = |00\rangle\big(\langle00|10\rangle\big) + |01\rangle\big(\langle01|10\rangle\big) + |10\rangle\big(\langle11|10\rangle\big) + |11\rangle\big(\langle10|10\rangle\big) = \\
# = |00\rangle(0) + |01\rangle(0) + |10\rangle(0) + |11\rangle(1) = |11\rangle$$
#
# > Notice how a lot of the inner product terms turn out to equal 0, and our expression is easily simplified. We have expressed the CNOT gate in terms of outer products of computational basis states, which are orthonormal, and applied it to another computational basis state, so the individual inner products are always going to be 0 or 1.
# In the general case, a $4\times4$ matrix that describes a 2-qubit gate
# $$A = \begin{bmatrix} a_{00} & a_{01} & a_{02} & a_{03} \\
# a_{10} & a_{11} & a_{12} & a_{13} \\
# a_{20} & a_{21} & a_{22} & a_{23} \\
# a_{30} & a_{31} & a_{32} & a_{33} \\ \end{bmatrix}$$
# will have the following ket-bra representation:
# $$A = a_{00} |00\rangle\langle00| + a_{01} |00\rangle\langle01| + a_{02} |00\rangle\langle10| + a_{03} |00\rangle\langle11| +\\
# + a_{10} |01\rangle\langle00| + a_{11} |01\rangle\langle01| + a_{12} |01\rangle\langle10| + a_{13} |01\rangle\langle11| +\\
# + a_{20} |10\rangle\langle00| + a_{21} |10\rangle\langle01| + a_{22} |10\rangle\langle10| + a_{23} |10\rangle\langle11| +\\
# + a_{30} |11\rangle\langle00| + a_{31} |11\rangle\langle01| + a_{32} |11\rangle\langle10| + a_{33} |11\rangle\langle11|
# $$
#
# A similar expression can be extended for matrices that describe $N$-qubit gates, where $N > 2$:
#
# $$A = \sum_{i=0}^{2^N-1} \sum_{j=0}^{2^N-1} a_{ij} |i\rangle\langle j|$$
#
# Dirac notation is particularly useful for expressing sparse matrices - matrices that have few non-zero elements. Indeed, consider the $\text{CNOT}$ gate again: it is a $4 \times 4$ matrix described with 16 elements, but its Dirac notation has only 4 terms, one for each non-zero element of the matrix.
#
# With enough practice you'll be able to perform computations in Dirac notation without spelling out all the bra-ket terms explicitly!
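The outer-product construction can be replayed numerically (a NumPy sketch; `ket` is a helper defined here, not a library function):

```python
import numpy as np

def ket(bits):
    """Computational basis column vector for a bit string such as '10'."""
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1
    return v

# CNOT = |00><00| + |01><01| + |10><11| + |11><10|
CNOT = sum(np.outer(ket(a), ket(b))
           for a, b in [("00", "00"), ("01", "01"), ("10", "11"), ("11", "10")])
assert np.allclose(CNOT @ ket("10"), ket("11"))   # matches the Dirac calculation above
```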
# > ## Ket-bra decomposition
# >
# > This section describes a more formal process of finding the ket-bra decompositions of multi-qubit quantum gates.
# > This section is not necessary to start working with quantum gates, so feel free to skip it for now, and come back to it later.
# >
# > You can use the properties of [eigenvalues and eigenvectors](../LinearAlgebra/LinearAlgebra.ipynb#Part-III:-Eigenvalues-and-Eigenvectors) to find the ket-bra decomposition of any gate. Consider an $N$-qubit gate $A$; the matrix representation of the gate is a square matrix of size $2^N$. Therefore it also has $2^N$ orthogonal eigenvectors $|\psi_i\rangle$:
# >
# > $$A|\psi_i\rangle = x_i|\psi_i\rangle, 0 \leq i \leq 2^N -1$$
# >
# > Then its ket-bra decomposition is:
# >
# > $$A = \sum_{i=0}^{2^N-1} x_i|\psi_i\rangle\langle\psi_i|$$
# >
# > Let's use our $\text{CNOT}$ gate as a simple example.
# > The $\text{CNOT}$ gate has four eigenvectors.
# > * Two, as we can clearly see, are the computational basis states $|00\rangle$ and $|01\rangle$ with eigenvalues $1$ and $1$, respectively (the basis states that are not affected by the gate).
# > * The other two are $|1\rangle \otimes |+\rangle = \frac{1}{\sqrt{2}}\big(|10\rangle + |11\rangle\big)$ and $|1\rangle \otimes |-\rangle = \frac{1}{\sqrt{2}}\big(|10\rangle - |11\rangle\big)$ with eigenvalues $1$ and $-1$, respectively:
# >
# > $$\text{CNOT}|0\rangle \otimes |0\rangle = |0\rangle \otimes |0\rangle \\
# \text{CNOT}|0\rangle \otimes |1\rangle = |0\rangle \otimes |1\rangle \\
# \text{CNOT}|1\rangle \otimes |+\rangle = |1\rangle \otimes |+\rangle \\
# \text{CNOT}|1\rangle \otimes |-\rangle = -|1\rangle \otimes |-\rangle$$
# >
# > Here's what the decomposition looks like:
# >
# > $$\text{CNOT} = |00\rangle\langle00| + |01\rangle\langle01| +
# |1\rangle \otimes |+\rangle\langle1| \otimes \langle +| - |1\rangle \otimes| -\rangle\langle1| \otimes \langle -| = \\
# = |00\rangle\langle00| + |01\rangle\langle01| +
# \frac{1}{2}\big[\big(|10\rangle + |11\rangle\big)\big(\langle10| + \langle11|\big) - \big(|10\rangle - |11\rangle\big)\big(\langle10| - \langle11|\big)\big] = \\
# = |00\rangle\langle00| + |01\rangle\langle01| +
# \frac{1}{2}\big(\color{red}{|10\rangle\langle10|} + |10\rangle\langle11| + |11\rangle\langle10| + \color{red}{|11\rangle\langle11|} - \color{red}{|10\rangle\langle10|} + |10\rangle\langle11| + |11\rangle\langle10| - \color{red}{|11\rangle\langle11|}\big) = \\
# = |00\rangle\langle00| + |01\rangle\langle01| + \frac{1}{2}\big(2|10\rangle\langle11| + 2|11\rangle\langle10|\big) = \\
# = |00\rangle\langle00| + |01\rangle\langle01| + |10\rangle\langle11| + |11\rangle\langle10|$$
# ## SWAP Gate
#
# The $\text{SWAP}$ gate acts on two qubits, and, as the name implies, swaps their quantum states.
#
# <table style="border:1px solid">
# <col width=50>
# <col width=50>
# <col width=300>
# <col width=150>
# <tr>
# <th style="text-align:center; border:1px solid">Gate</th>
# <th style="text-align:center; border:1px solid">Matrix</th>
# <th style="text-align:center; border:1px solid">Applying to $|\psi\rangle = \alpha|00\rangle + \beta|01\rangle + \gamma|10\rangle + \delta|11\rangle$</th>
# <th style="text-align:center; border:1px solid">Applying to basis states</th>
# <th style="text-align:center; border:1px solid">Q# Documentation</th>
# </tr>
# <tr>
# <td style="text-align:center; border:1px solid">$\text{SWAP}$</td>
# <td style="text-align:center; border:1px solid">$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$</td>
# <td style="text-align:center; border:1px solid">$\text{SWAP}|\psi\rangle = \alpha|00\rangle + \color{red}\gamma|01\rangle + \color{red}\beta|10\rangle + \delta|11\rangle$</td>
# <td style="text-align:center; border:1px solid">$\text{SWAP}|00\rangle = |00\rangle \\
# \text{SWAP}|01\rangle = |10\rangle \\
# \text{SWAP}|10\rangle = |01\rangle \\
# \text{SWAP}|11\rangle = |11\rangle$</td>
# <td style="text-align:center; border:1px solid"><a href=https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.intrinsic.swap>SWAP</a></td>
# </tr>
# </table>
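A quick numeric check of the amplitude swap (a NumPy sketch, not part of the kata):

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# amplitudes for |00>, |01>, |10>, |11> (unnormalized, for illustration only)
psi = np.array([0.1, 0.2, 0.3, 0.4])
assert np.allclose(SWAP @ psi, [0.1, 0.3, 0.2, 0.4])   # β and γ trade places
```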
# ### <span style="color:blue">Exercise 3</span>: Swapping two qubits
#
# **Inputs:**
#
# 1. $N$ qubits in an arbitrary state $|\psi\rangle$, stored in an array of length $N$.
# 2. Integers `index1` and `index2` such that $0 \le \text{index1} < \text{index2} \le N - 1$.
#
# **Goal:** Swap the states of the qubits at the indices given.
# +
%kata T3_QubitSwap_Test
operation QubitSwap (qs : Qubit[], index1 : Int, index2 : Int) : Unit is Adj {
// ...
}
# -
# *Can't come up with a solution? See the explained solution in the [Multi-Qubit Gates Workbook](./Workbook_MultiQubitGates.ipynb#Exercise-3:-Swapping-two-qubits).*
# ## Multi-Qubit Gates Acting on Non-Adjacent Qubits
#
# In the above examples the $\text{CNOT}$ gate acted on two adjacent qubits. However, multi-qubit gates can act on non-adjacent qubits as well. Let's see how to work out the math of the system state change in this case.
#
# Take 3 qubits in an arbitrary state $|\psi\rangle = x_{000} |000\rangle + x_{001}|001\rangle + x_{010}|010\rangle + x_{011}|011\rangle + x_{100}|100\rangle + x_{101}|101\rangle + x_{110}|110\rangle + x_{111}|111\rangle $.
#
# We can apply the $\text{CNOT}$ gate on 1st and 3rd qubits, with the 1st qubit as control and the 3rd qubit as target. Let's label the 3-qubit gate that describes the effect of this on the whole system as $\text{CINOT}$. The $\text{CINOT}$ ignores the 2nd qubit (leaves it unchanged) and applies the $\text{CNOT}$ gate as specified above.
#
# #### Q# #
#
# In Q# we describe the operation as the sequence of gates that are applied to the qubits, regardless of whether the qubits are adjacent or not.
#
# ```C#
# operation CINOT (qs: Qubit[]) : Unit {
# CNOT(qs[0], qs[2]); // Length of qs is assumed to be 3
# }
# ```
#
# #### Dirac notation
#
# In Dirac notation we can consider the effect of the gate on each basis vector separately: each basis vector $|a_1a_2a_3\rangle$ remains unchanged if $a_1 = 0$, and becomes $|a_1a_2(\neg a_3)\rangle$ if $a_1 = 1$. The full effect on the state becomes:
#
# $$\text{CINOT}|\psi\rangle
# = x_{000} \text{CINOT}|000\rangle + x_{001} \text{CINOT}|001\rangle + x_{010} \text{CINOT}|010\rangle + x_{011} \text{CINOT}|011\rangle \\
# + \color{red}{x_{100}} \text{CINOT}|\color{red}{100}\rangle + \color{red}{x_{101}} \text{CINOT}|\color{red}{101}\rangle + \color{red}{x_{110}} \text{CINOT}|\color{red}{110}\rangle + \color{red}{x_{111}} \text{CINOT}|\color{red}{111}\rangle = \\
# = x_{000}|000\rangle + x_{001}|001\rangle + x_{010}|010\rangle + x_{011}|011\rangle + \color{red}{x_{101}}|100\rangle + \color{red}{x_{100}}|101\rangle + \color{red}{x_{111}}|110\rangle + \color{red}{x_{110}}|111\rangle $$
#
# #### Matrix form
#
# $\text{CINOT}$ can also be represented in matrix form as a $2^3 \times 2^3$ matrix:
# $$\begin{bmatrix}
# 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 1 & \color{blue} 0 & \color{blue} 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 1 & \color{blue} 0 & \color{blue} 0 & \color{blue} 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 1 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 1 & \color{blue} 0
# \end{bmatrix}$$
#
# Applying $\text{CINOT}$ to $|\psi\rangle$ gives us
# $$
# \text{CINOT} \begin{bmatrix}
# 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 1 & \color{blue} 0 & \color{blue} 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 1 & \color{blue} 0 & \color{blue} 0 & \color{blue} 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 1 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 1 & \color{blue} 0
# \end{bmatrix}
# \begin{bmatrix} x_{000} \\ x_{001} \\ x_{010} \\ x_{011} \\ x_{100} \\ x_{101} \\ x_{110} \\ x_{111} \end{bmatrix}
# = \begin{bmatrix} x_{000} \\ x_{001} \\ x_{010} \\ x_{011} \\ \color{blue}{x_{101}} \\ \color{blue}{x_{100}} \\ \color{blue}{x_{111}} \\ \color{blue}{x_{110}} \end{bmatrix}
# $$
#
# However, as $N$ gets larger, creating a full size matrix can be extremely unwieldy. To express the matrix without spelling out its elements, we can use the following trick:
#
# 1. Apply the $\text{SWAP}$ gate on the 1st and 2nd qubits.
# This will bring the qubits on which the $\text{CNOT}$ gate acts next to each other, without any extra qubits between them.
# 2. Apply the $\text{CNOT}$ on 2nd and 3rd qubits.
# Since now the gate acts on adjacent qubits, this can be represented as a tensor product of the gate we're applying and $I$ gates.
# 3. Apply the $\text{SWAP}$ gate on the 1st and 2nd qubits again.
#
# These can be represented as applying the following gates on the 3 qubits.
#
# 1. $\text{SWAP} \otimes I$
# $$x_{000}|000\rangle + x_{001}|001\rangle + \color{red}{x_{100}}|010\rangle + \color{red}{x_{101}}|011\rangle +
# \color{red}{x_{010}}|100\rangle + \color{red}{x_{011}}|101\rangle + x_{110}|110\rangle + x_{111}|111\rangle
# $$
#
# 2. $I \otimes \text{CNOT}$
# $$
# x_{000}|000\rangle + x_{001}|001\rangle + \color{blue}{x_{101}}|010\rangle + \color{blue}{x_{100}}|011\rangle +
# {x_{010}}|100\rangle + {x_{011}}|101\rangle + \color{blue}{x_{111}}|110\rangle + \color{blue}{x_{110}}|111\rangle
# $$
#
# 3. $\text{SWAP} \otimes I$
# $$
# x_{000}|000\rangle + x_{001}|001\rangle + {x_{010}}|010\rangle + {x_{011}}|011\rangle +
# \color{green}{x_{101}}|100\rangle + \color{green}{x_{100}}|101\rangle + \color{green}{x_{111}}|110\rangle + \color{green}{x_{110}}|111\rangle
# $$
#
# The result is the $\text{CINOT}$ gate we intended, so we can write
#
# $$\text{CINOT} = (\text{SWAP} \otimes I)(I \otimes \text{CNOT})(\text{SWAP} \otimes I)$$
#
# > Note that in matrix notation we always apply a gate to the complete system, so we must apply $\text{SWAP} \otimes I$, spelling the identity gate explicitly.
# > However, when implementing the unitary $\text{SWAP} \otimes I$ in Q#, we need only to call `SWAP(qs[0], qs[1])` - the remaining qubit `qs[2]` will not change, which is equivalent to applying an implicit identity gate.
# >
# > We can also spell out all gates applied explicitly (this makes for a much longer code, though):
# > ```C#
# > operation CINOT (qs: Qubit[]) : Unit {
# >     // First step
# >     SWAP(qs[0], qs[1]);
# >     I(qs[2]);
# >     // Second step
# >     I(qs[0]);
# >     CNOT(qs[1], qs[2]);
# >     // Third step
# >     SWAP(qs[0], qs[1]);
# >     I(qs[2]);
# > }
# > ```
# <details>
# <summary><b>Click here for the full matrix representation of these three steps.</b></summary>
#
# We can represent $|\psi\rangle$ in matrix form as a sum of tensor products of states of sub-systems, separating either the state of the last qubit:
# $$\begin{bmatrix} x_{000} \\ x_{001} \\ x_{010} \\ x_{011} \\ x_{100} \\ x_{101} \\ x_{110} \\ x_{111} \end{bmatrix}
# =
# \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{000} \\ x_{001} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{010} \\ x_{011} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{100} \\ x_{101} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} x_{110} \\ x_{111} \end{bmatrix}
# $$
#
# or the state of the first qubit:
#
# $$
# \begin{bmatrix} x_{000} \\ x_{100} \end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
# + \begin{bmatrix} x_{001} \\ x_{101} \end{bmatrix} \otimes \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}
# + \begin{bmatrix} x_{010} \\ x_{110} \end{bmatrix} \otimes \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}
# + \begin{bmatrix} x_{011} \\ x_{111} \end{bmatrix} \otimes \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
# $$
#
#
# Thus the 3 steps in matrix form would be:
#
# 1. $\text{SWAP}$ the 1st and the 2nd qubits.
# $$
# \left(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \otimes
# \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right)
# \begin{bmatrix} x_{000} \\ x_{001} \\ x_{010} \\ x_{011} \\ x_{100} \\ x_{101} \\ x_{110} \\ x_{111} \end{bmatrix} = \\
# = \left(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \otimes
# \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right)
# \left( \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{000} \\ x_{001} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{010} \\ x_{011} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{100} \\ x_{101} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} x_{110} \\ x_{111} \end{bmatrix} \right) = \\
# = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{000} \\ x_{001} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \otimes \color{red}{\begin{bmatrix} x_{100} \\ x_{101} \end{bmatrix}}
# + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \otimes \color{red}{\begin{bmatrix} x_{010} \\ x_{011} \end{bmatrix}}
# + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} x_{110} \\ x_{111} \end{bmatrix}
# = \begin{bmatrix} x_{000} \\ x_{001} \\ \color{red}{x_{100} \\ x_{101} \\ x_{010} \\ x_{011}} \\ x_{110} \\ x_{111} \end{bmatrix}
# $$
#
# 2. $\text{CNOT}$ 2nd and 3rd qubits.
# $$
# \left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes
# \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \right)
# \begin{bmatrix} x_{000} \\ x_{001} \\ x_{100} \\ x_{101} \\ x_{010} \\ x_{011} \\ x_{110} \\ x_{111} \end{bmatrix} = \\
# = \left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes
# \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\right)
# \left( \begin{bmatrix} x_{000} \\ x_{010} \end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
# + \begin{bmatrix} x_{001} \\ x_{011} \end{bmatrix} \otimes \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}
# + \begin{bmatrix} x_{100} \\ x_{110} \end{bmatrix} \otimes \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}
# + \begin{bmatrix} x_{101} \\ x_{111} \end{bmatrix} \otimes \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \right)
# = \\
# = \begin{bmatrix} x_{000} \\ x_{010} \end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
# + \begin{bmatrix} x_{001} \\ x_{011} \end{bmatrix} \otimes \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}
# + \color{blue}{\begin{bmatrix} x_{101} \\ x_{111} \end{bmatrix}} \otimes \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}
# + \color{blue}{\begin{bmatrix} x_{100} \\ x_{110} \end{bmatrix}} \otimes \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
# = \begin{bmatrix} x_{000} \\ x_{001} \\ \color{blue}{x_{101} \\ x_{100}} \\ x_{010} \\ x_{011} \\ \color{blue}{x_{111} \\ x_{110}} \end{bmatrix}$$
#
# 3. $\text{SWAP}$ 1st and 2nd qubits.
# $$
# \left(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \otimes
# \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right)
# \begin{bmatrix} x_{000} \\ x_{001} \\ x_{101} \\ x_{100} \\ x_{010} \\ x_{011} \\ x_{111} \\ x_{110} \end{bmatrix} = \\
# = \left( \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \otimes
# \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right)
# \left( \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{000} \\ x_{001} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{101} \\ x_{100} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{010} \\ x_{011} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} x_{111} \\ x_{110} \end{bmatrix} \right) = \\
# = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} x_{000} \\ x_{001} \end{bmatrix}
# + \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \otimes \color{green}{\begin{bmatrix} x_{010} \\ x_{011} \end{bmatrix}}
# + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \otimes \color{green}{\begin{bmatrix} x_{101} \\ x_{100} \end{bmatrix}}
# + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} x_{110} \\ x_{111} \end{bmatrix}
# = \begin{bmatrix} x_{000} \\ x_{001} \\ \color{green}{x_{010} \\ x_{011} \\ x_{101} \\ x_{100}} \\ x_{110} \\ x_{111} \end{bmatrix}
# $$
# </details>
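The decomposition above is easy to verify numerically. Here is a minimal NumPy sketch (an illustration only, not part of the kata; basis ordering is big-endian, matching the matrices above):

```python
import numpy as np

# Single-qubit identity plus the two-qubit CNOT and SWAP matrices
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# CINOT = (SWAP tensor I)(I tensor CNOT)(SWAP tensor I)
CINOT = np.kron(SWAP, I2) @ np.kron(I2, CNOT) @ np.kron(SWAP, I2)
print(CINOT)
```

The printed matrix matches the $8 \times 8$ $\text{CINOT}$ matrix shown earlier: identity in the top-left quadrant, and a bottom-right quadrant exchanging $|100\rangle \leftrightarrow |101\rangle$ and $|110\rangle \leftrightarrow |111\rangle$.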
# ## Controlled Gates
#
# **Controlled gates** are a class of gates derived from other gates as follows: they act on a control qubit and a target qubit, just like the CNOT gate.
# A controlled-$U$ gate applies the $U$ gate to the target qubit if the control qubit is in state $|1\rangle$, and does nothing otherwise.
#
# Given a gate $U = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix}$, its controlled version looks like this:
#
# <table style="border:1px solid">
# <col width=50>
# <col width=50>
# <col width=150>
# <tr>
# <th style="text-align:center; border:1px solid">Gate</th>
# <th style="text-align:center; border:1px solid">Matrix</th>
# <th style="text-align:center; border:1px solid">Q# Documentation</th>
# </tr>
# <tr>
# <td style="text-align:center; border:1px solid">$\text{Controlled U}$</td>
# <td style="text-align:center; border:1px solid">$\begin{bmatrix}
# 1 & 0 & 0 & 0 \\
# 0 & 1 & 0 & 0 \\
# 0 & 0 & \alpha & \beta \\
# 0 & 0 & \gamma & \delta
# \end{bmatrix}$</td>
# <td style="text-align:center; border:1px solid"><a href="https://docs.microsoft.com/quantum/user-guide/using-qsharp/operations-functions#controlled-and-adjoint-operations">Controlled functor</a></td>
# </tr>
# </table>
#
# > The CNOT gate is an example of a controlled gate, which is why it is also known as the controlled NOT or controlled $X$ gate.
#
# The concept of controlled gates can be generalized beyond controlling single-qubit gates.
# For any multi-qubit gate, its controlled version will have an identity matrix in the top left quadrant, the gate itself in the bottom right, and $0$ everywhere else.
# Here, for example, is the $\text{Controlled SWAP}$, or **Fredkin gate**, with the identity matrix highlighted in red, and the $\text{SWAP}$ gate in blue:
#
# $$\begin{bmatrix}
# \color{red} 1 & \color{red} 0 & \color{red} 0 & \color{red} 0 & 0 & 0 & 0 & 0 \\
# \color{red} 0 & \color{red} 1 & \color{red} 0 & \color{red} 0 & 0 & 0 & 0 & 0 \\
# \color{red} 0 & \color{red} 0 & \color{red} 1 & \color{red} 0 & 0 & 0 & 0 & 0 \\
# \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 1 & 0 & 0 & 0 & 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 1 & \color{blue} 0 & \color{blue} 0 & \color{blue} 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 1 & \color{blue} 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 1 & \color{blue} 0 & \color{blue} 0 \\
# 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 0 & \color{blue} 1
# \end{bmatrix}$$
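This block structure makes controlled gates easy to build programmatically. A NumPy sketch (an illustration; the `controlled` helper below is not a library function):

```python
import numpy as np

def controlled(U):
    """Controlled version of a gate U: identity in the top-left quadrant,
    U in the bottom-right quadrant, zeros elsewhere."""
    n = U.shape[0]
    C = np.eye(2 * n)
    C[n:, n:] = U
    return C

X = np.array([[0, 1], [1, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

CNOT = controlled(X)        # the controlled X gate
FREDKIN = controlled(SWAP)  # the controlled SWAP (Fredkin) gate
```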
# In Q#, controlled gates are applied using the [`Controlled`](https://docs.microsoft.com/en-us/quantum/user-guide/using-qsharp/operations-functions#controlled-functor) functor.
# The controlled version of a gate accepts an array of control qubits (in this case an array of a single qubit), followed by the arguments to the original gate.
# For example, these two lines are equivalent:
#
# ```C#
# Controlled X([control], target);
# CNOT(control, target);
# ```
#
# If the original gate was implemented as an operation with multiple parameters, the controlled version of this gate will take those parameters as a tuple. For example, to apply the Fredkin gate, you'd have to call:
#
# ```C#
# Controlled SWAP([control], (q1, q2));
# ```
#
# You can use the controlled version of a Q# operation only if that operation has a controlled version defined.
# The Q# compiler will often be able to generate a controlled version of the operation automatically if you put `is Ctl` after the operation's return type.
# In other cases, you'll need to define the controlled version of an operation manually.
# ### <span style="color:blue">Exercise 4</span>: Controlled Rotation
#
# **Inputs:**
#
# 1. Two qubits in an arbitrary state $|\phi\rangle$, stored as an array of length 2.
# 2. An angle $\theta$: $-\pi < \theta \leq \pi$.
#
# **Goal:** Apply a controlled [$R_x$ gate](../SingleQubitGates/SingleQubitGates.ipynb#Rotation-Gates), using the first qubit as control and the second qubit as target, with $\theta$ as the angle argument for the gate.
#
# <br/>
# <details>
# <summary><b>Need a hint? Click here</b></summary>
# If you were to apply a regular version of the $R_x$ gate, it would take two parameters - angle $\theta$ as the first parameter and the target qubit as the second parameter.
# </details>
# +
%kata T4_ControlledRotation_Test
operation ControlledRotation (qs : Qubit[], theta : Double) : Unit is Adj {
// ...
}
# -
# *Can't come up with a solution? See the explained solution in the [Multi-Qubit Gates Workbook](./Workbook_MultiQubitGates.ipynb#Exercise-4:-Controlled-Rotation).*
# ## Multi-controlled Gates
#
# Controlled gates can have multiple control qubits; in this case the gate $U$ is applied only if all control qubits are in the $|1\rangle$ state.
# You can think of it as constructing a controlled version of a gate that is already controlled.
#
# The simplest example of this is the **Toffoli gate**, or $\text{CCNOT}$ (controlled controlled $\text{NOT}$) gate, which applies the $X$ gate to the last qubit only if the first two qubits are in $|11\rangle$ state:
#
# $$\begin{bmatrix}
# \color{red} 1 & \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 0 & 0 & 0 \\
# \color{red} 0 & \color{red} 1 & \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 0 & 0 & 0 \\
# \color{red} 0 & \color{red} 0 & \color{red} 1 & \color{red} 0 & \color{red} 0 & \color{red} 0 & 0 & 0 \\
# \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 1 & \color{red} 0 & \color{red} 0 & 0 & 0 \\
# \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 1 & \color{red} 0 & 0 & 0 \\
# \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 0 & \color{red} 1 & 0 & 0 \\
# 0 & 0 & 0 & 0 & 0 & 0 & \color{blue} 0 & \color{blue} 1 \\
# 0 & 0 & 0 & 0 & 0 & 0 & \color{blue} 1 & \color{blue} 0
# \end{bmatrix}$$
#
# To construct a multi-controlled version of an operation in Q#, you can use the Controlled functor as well, passing all control qubits as an array that is the first parameter.
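The "controlled version of an already-controlled gate" view translates directly into the block construction from the previous section. A NumPy sketch (the `controlled` helper is illustrative, not a library function):

```python
import numpy as np

def controlled(U):
    # Identity in the top-left quadrant, U in the bottom-right
    n = U.shape[0]
    C = np.eye(2 * n)
    C[n:, n:] = U
    return C

X = np.array([[0, 1], [1, 0]])

# Toffoli (CCNOT) = controlled version of the controlled X gate
TOFFOLI = controlled(controlled(X))
```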
# ## Other Types of Controlled Gates
#
# Typically the term "controlled $U$ gate" refers to the type of gate we've described previously, which applies the gate $U$ only if the control qubit(s) are in the $|1\rangle$ state.
#
# It is possible, however, to define variants of controlled gates that use different states as control states.
# For example, an **anti-controlled** $U$ gate (sometimes called **zero-controlled**) applies a gate only if the control qubit is in the $|0\rangle$ state.
# It is also possible to define control conditions in other bases, for example, applying the gate if the control qubit is in the $|+\rangle$ state.
#
# All the variants of controlled gates can be expressed in terms of the controls described in previous sections, using the following sequence of steps:
# * First, apply a transformation on control qubits that will transform the state you want to use as control into the $|1...1\rangle$ state.
# * Apply the regular controlled version of the gate.
# * Finally, undo the transformation on control qubits from the first step using the adjoint version of it.
#
# > Why do we need this last step? Remember that controlled gates are defined in terms of their effect on the basis states:
# > we apply the gate on the target qubit if and only if the control qubit is in the state we want to control on, and we don't change the state of the control qubit at all.
# > If we don't undo the transformation we did on the first step, applying our gate to a basis state will modify not only the state of the target qubit but also the state of the control qubit, which is not what we're looking for.
# >
# > For example, consider an anti-controlled $X$ gate - a gate that should apply an $X$ gate to the second qubit if the first qubit is in the $|0\rangle$ state.
# > Here is the effect we expect this gate to have on each of the 2-qubit basis states:
# >
# > <table>
# <col width="200"/>
# <col width="200"/>
# <tr>
# <th style="text-align:center">Input state</th>
# <th style="text-align:center">Output state</th>
# </tr>
# <tr>
# <td style="text-align:center">$|00\rangle$</td>
# <td style="text-align:center">$|01\rangle$</td>
# </tr>
# <tr>
# <td style="text-align:center">$|01\rangle$</td>
# <td style="text-align:center">$|00\rangle$</td>
# </tr>
# <tr>
# <td style="text-align:center">$|10\rangle$</td>
# <td style="text-align:center">$|10\rangle$</td>
# </tr>
# <tr>
# <td style="text-align:center">$|11\rangle$</td>
# <td style="text-align:center">$|11\rangle$</td>
# </tr>
# </table>
#
# > Let's apply the anti-controlled X gate to the $|00\rangle$ state step by step:
# > 1. Transform the state of the control qubit to $|1\rangle$: we can do that by applying the $X$ gate to the first qubit:
# > $$|00\rangle \rightarrow |10\rangle$$
# > 2. Apply the regular $\text{CNOT}$ gate:
# > $$|10\rangle \rightarrow |11\rangle$$
# > 3. Now, if we don't undo the change we did on the first step, we'll end up with a gate that transforms $|00\rangle$ into $|11\rangle$, which is not the transformation we're trying to implement.
# > However, if we undo it by applying the $X$ gate to the first qubit again, we'll get the correct state:
# > $$|11\rangle \rightarrow |01\rangle$$
# >
# > You can check that getting the right behavior of the operation on the rest of the basis states also requires that last step.
#
# Finally, let's take a look at a very useful operation [ControlledOnBitString](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.canon.controlledonbitstring) provided by the Q# Standard library.
# It defines a variant of a gate controlled on a state specified by a bit mask; for example, bit mask `[true, false]` means that the gate should be applied only if the two control qubits are in the $|10\rangle$ state.
#
# The sequence of steps that implement this variant are:
# 1. Apply the $X$ gate to each control qubit that corresponds to a `false` element of the bit mask (in the example, that's just the second qubit). After this, if the control qubits started in the $|10\rangle$ state, they'll end up in the $|11\rangle$ state, and if they started in any other state, they'll end up in a state other than $|11\rangle$.
# 2. Apply the regular controlled version of the gate.
# 3. Apply the $X$ gate to the same qubits to return them to their original state.
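These three steps can be checked numerically. A NumPy sketch of the construction (an illustration, not the actual Q# library code; `controlled_on_bit_string` is a hypothetical helper, and basis ordering is big-endian):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def controlled_on_bit_string(bits, U):
    """Variant of U that fires when the control qubits match `bits`
    (True = |1>, False = |0>), built from the three steps above."""
    k = U.shape[0]
    # Steps 1 and 3: X on every control whose mask bit is False
    flip = np.array([[1.0]])
    for b in bits:
        flip = np.kron(flip, I2 if b else X)
    flip = np.kron(flip, np.eye(k))  # identity on the target register
    # Step 2: the ordinary controlled U (all controls must be |1>)
    ctrl_u = np.eye(2 ** len(bits) * k)
    ctrl_u[-k:, -k:] = U
    return flip @ ctrl_u @ flip

# Fires only when the two control qubits are in the |10> state
M = controlled_on_bit_string([True, False], X)
```

Applied to the basis states, `M` flips the target only when the controls are $|10\rangle$: it exchanges $|100\rangle \leftrightarrow |101\rangle$ and leaves every other basis state unchanged.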
# ### <span style="color:blue">Exercise 5</span>: Arbitrary controls
#
# **Input:**
#
# 1. `controls` - a register of $N$ qubits in an arbitrary state $|\phi\rangle$.
# 2. `target` - a qubit in an arbitrary state $|\psi\rangle$.
# 3. `controlBits` - an array of $N$ booleans, specifying what state each control qubit should be in order to apply the gate.
#
# **Goal:** Apply the controlled $X$ gate with the `controls` as control qubits and `target` as target, with the state specified by `controlBits` as controls. If the element of the array is `true`, the corresponding qubit is a regular control (should be in state $|1\rangle$), and if it is `false`, the corresponding qubit is an anti-control (should be in state $|0\rangle$).
#
# > For example, if `controlBits = [true, false, true]`, the controlled $X$ gate should only be applied if the control qubits are in state $|101\rangle$.
#
# <details>
# <summary><strong>Need a hint? Click here</strong></summary>
# Consider using a library operation for this task. If you want to do it without library operations, don't forget to reset the qubits back to the state they were originally in.
# </details>
# +
%kata T5_MultiControls_Test
operation MultiControls (controls : Qubit[], target : Qubit, controlBits : Bool[]) : Unit is Adj {
// ...
}
# -
# *Can't come up with a solution? See the explained solution in the [Multi-Qubit Gates Workbook](./Workbook_MultiQubitGates.ipynb#Exercise-5:-Arbitrary-controls).*
# ## Conclusion
#
# Congratulations! You have completed the series of introductory tutorials and are ready to start solving the katas.
# You should start with the [Basic Gates](../../BasicGates/BasicGates.ipynb) and [Superposition](../../Superposition/Superposition.ipynb) katas.
# Source file: tutorials/MultiQubitGates/MultiQubitGates.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import pandas as pd
#get data
# from pydob.exploratory import get_permits_type_df
from pydob.exploratory import (
get_year_counts
)
from pydob.database import get_query_as_df
# plotting
import matplotlib.pyplot as plt
from pydob.settings import nt_style, nt_blue, nt_black
from pydob.database import get_permits_now
# -
# %matplotlib inline
plt.style.use(nt_style)
# # 7. DOB NOW
# - We need to rope in the DOB NOW data for applications and permits: let's get application and permit numbers that are the sum of the NOW and non-NOW tables. In particular, it would be great to get our permit issuance number vs. rate table updated with these numbers.
#
#
#
# - [DOB NOW: Build – Job Application Filings](https://data.cityofnewyork.us/Housing-Development/DOB-NOW-Build-Job-Application-Filings/w9ak-ipjd): List of most job filings filed in DOB NOW. This dataset does not include certain types of job.
# - No dates data
#
# - [DOB NOW: Electrical Permit Applications](https://data.cityofnewyork.us/City-Government/DOB-NOW-Electrical-Permit-Applications/dm9a-ab7w): This dataset is part of the DOB NOW Electrical Permit Data Collection
#
# - [DOB NOW: Build – Approved Permits](https://data.cityofnewyork.us/Housing-Development/DOB-NOW-Build-Approved-Permits/rbx6-tga4): List of all approved permits in DOB NOW
# - dates data included:
# - `Approved Date`: This is the date that the entire job was approved by the Plan Examiner. The applicant can now pull a permit.
#
# - `Issued Date`: This is the date that the permit was issued.
# - `Expired Date`: This is the date that the permit expires.
#
# +
# dob_now_applied = pd.read_csv("/Users/francescao/Downloads/DOB_NOW__Build___Job_Application_Filings.csv")
# +
permits_now_year_counts = get_year_counts(index_col='job_filing_number',
year_col='issued_date_year',
dataset_name='permits_now')
permits_now_year_counts.columns = ['approved_counts']
# -
permits_now_year_counts
# **Note:**
# - We find that in the DOB NOW approved permits data, approved dates start in June 2016. There were 8 cases approved in 2016 and 2,213 in 2017, with a sharp increase in 2018 and 2019.
#
# - This means that more and more people are starting to use DOB NOW to apply for permits.
#
permits_now = get_permits_now()  # load the DOB NOW permits table (imported above)
permits_now.permittees_license_type.value_counts()
permits_now.work_type.value_counts()
# Source file: notebooks/dob_now.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
import pyspark.sql.functions as f
import pyspark
# +
file_name = './data/News_Final.csv'
sc = pyspark.SparkContext(appName='hw2-1')
text_file = sc.textFile(file_name)
sqlContext = SQLContext(sc)
df = sqlContext.read.format('csv')\
    .option("header", "true")\
    .option("inferschema", "true")\
    .option("mode", "DROPMALFORMED")\
    .load(file_name)
df = df.withColumn('SentimentTitle', df['SentimentTitle'].cast('double'))
df = df.withColumn('SentimentHeadline', df['SentimentHeadline'].cast('double'))
df = df.withColumn('PublishDate', df['PublishDate'].cast('date'))
df.show()
# df.printSchema()
# -
df.select("Title","Topic","Headline").show()
data=df.where(df["Topic"]=="microsoft").select("Topic","Title","PublishDate","Headline")
# +
columnName=['obama','economy','microsoft','palestine']
for name in columnName:
print(name+"----------------Title")
data=df.where(df["Topic"]==name).select("Topic","Title","PublishDate")
data.withColumn('word', f.explode(f.split(f.col('Title'), ' ')))\
.groupBy('PublishDate',"word")\
.count()\
.sort('count','PublishDate', ascending=False)\
.write.format("csv").mode("overwrite").save("file:///home/bigdata/Documents/BD_HW/HW2/output/"+name+"_Title.csv")
# data.select("Topic", "Title","PublishDate").write()\
# .format("com.databricks.spark.csv")\
# .option("header", "true")\
# .option("codec", "org.apache.hadoop.io.compress.GzipCodec")\
# .save("newcars.csv");
# data.write.mode("overwrite").format("text").save("./test.txt")
# out=open('output/SentimentScore.txt','w')
# out.write('SentimentScore_sum\n')
# print(SentimentScore_sum)
# for i in range(len(SentimentScore_sum)):
# out.write(str(SentimentScore_sum[i]) + '\n')
# out.write('\nSentimentScore_average\n')
# for i in range(len(SentimentScore_average)):
# out.write(str(SentimentScore_average[i]) + '\n')
# out.close()
# -
for name in columnName:
print(name+"----------------Headline")
data=df.where(df["Topic"]==name).select("Topic","Title","PublishDate",'Headline')
data.withColumn('word', f.explode(f.split(f.col('Headline'), ' ')))\
.groupBy('PublishDate',"word")\
.count()\
.sort('count','PublishDate', ascending=False)\
.write.format("csv").mode("overwrite").save("file:///home/bigdata/Documents/BD_HW/HW2/output/"+name+"_Headline.csv")
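The explode/groupBy pipeline above amounts to counting (publish date, word) pairs and ranking them by count. A plain-Python sketch of the same logic, using hypothetical rows:

```python
from collections import Counter

# Hypothetical (PublishDate, Title) rows for one topic
rows = [("2016-03-01", "microsoft buys linkedin"),
        ("2016-03-01", "microsoft stock rises")]

counts = Counter()
for date, title in rows:
    for word in title.split(' '):  # same split as f.split(f.col('Title'), ' ')
        counts[(date, word)] += 1

# Rank by count descending, like .sort('count', ..., ascending=False)
ranked = sorted(counts.items(), key=lambda kv: -kv[1])
```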
sc.stop()
# Source file: Assignment_2_1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CGANs - Conditional Generative Adversarial Nets
#
# Brief introduction to Conditional Generative Adversarial Nets or CGANs. This notebook is organized as follows:
#
# 1. **Research Paper**
# 2. **Background**
# 3. **Definition**
# 4. **Training CGANs with CIFAR-10 dataset, Keras and TensorFlow**
#
# ## 1. Research Paper
#
# * [Conditional Generative Adversarial Nets](https://arxiv.org/pdf/1411.1784.pdf)
#
# ## 2. Background
#
# **Generative adversarial nets** consist of two models: a generative model $G$ that captures the data distribution, and a discriminative model $D$ that estimates the probability that a sample came from the training data rather than $G$.
#
# To learn the generator distribution $p_g$ over data $x$, the generator builds a mapping function from a prior noise distribution $p_z(z)$ to the data space, $G(z;\theta_g)$.
#
# The discriminator, $D(x;\theta_d)$, outputs a single scalar representing the probability that $x$ came from the training data rather than from $p_g$.
#
# The **value function** $V(G,D)$:
#
# $$ \underset{G}{min} \: \underset{D}{max} \; V_{GAN}(D,G) = \mathbb{E}_{x\sim p_{data}(x)}[log D(x)] + \mathbb{E}_{z\sim p_{z}(z)}[log(1 - D(G(z)))]$$
# ## 3. Definition
#
# Generative adversarial nets can be extended to a **conditional model** if both the generator and discriminator are conditioned on some extra information $y$.
#
# * $y$ could be any kind of auxiliary information, such as class labels or data from other modalities.
#
# We can perform the conditioning by feeding $y$ into both the discriminator and the generator as an additional input layer.
#
# * **Generator**: The prior input noise $p_z(z)$ and $y$ are combined in a joint hidden representation, and the adversarial training framework allows for considerable flexibility in how this hidden representation is composed.
#
# * **Discriminator**: $x$ and $y$ are presented as inputs to a discriminative function.
#
# ### Network Design
#
# <img src="../../img/network_design_ccgan_cifar.png" width="600">
#
#
# ### Cost Function
#
# $$ \underset{G}{min} \: \underset{D}{max} \; V_{CGAN}(D,G) = \mathbb{E}_{x\sim p_{data}(x)}[log D(x|y)] + \mathbb{E}_{z\sim p_{z}(z)}[log(1 - D(G(z|y)))]$$
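With a sigmoid discriminator, both expectations in this value function are estimated with binary cross-entropy, which is why the training setup below uses `binary_crossentropy` as the loss. A NumPy sketch with hypothetical discriminator outputs:

```python
import numpy as np

def bce(p, y):
    # Binary cross-entropy for predicted probabilities p and labels y
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

d_real = np.array([0.9, 0.8])  # hypothetical D(x|y) on real samples
d_fake = np.array([0.2, 0.1])  # hypothetical D(G(z|y)|y) on generated samples

# D maximizes V: label real samples 1 and generated samples 0
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# G minimizes V (non-saturating form): try to get fakes labeled 1
g_loss = bce(d_fake, np.ones_like(d_fake))
```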
# ## 4. Training CGANs with CIFAR-10 dataset, Keras and TensorFlow
#
# A CGANs implementation using the transposed convolution and convolution neural network, concatenate layers and the [Keras](https://keras.io/) library.
#
# * **Data**
# * Rescale the Cifar-10 images to be between -1 and 1.
#
# * **Generator**
# * Use the **inverse of convolution**, called transposed convolution.
# * **LeakyReLU activation** and **BatchNormalization**.
# * The inputs to the generator are **normally distributed** noise $z$ and the label $y$. They are combined in a joint hidden representation.
# * Concatenate($y, z$).
# * The last activation is **tanh**.
#
# * **Discriminator**
# * Use the **Convolutional neural network**.
# * **LeakyReLU activation**.
# * The inputs to the discriminator are $x$ and $y$. They are combined in a joint hidden representation.
# * Concatenate($y, x$).
# * The last activation is **sigmoid**.
#
# * **Loss**
# * binary_crossentropy
#
# * **Optimizer**
# * Adam(lr=0.0002, beta_1=0.5)
#
# * batch_size = 64
# * epochs = 100
#
#
# ### 1. Load data
#
# #### Load libraries
# +
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
# -
from keras.datasets import cifar10
from keras.models import Sequential, Model
from keras.layers import Dense, LeakyReLU, BatchNormalization
from keras.layers import Conv2D, Conv2DTranspose, Reshape, Flatten
from keras.layers import Input, Embedding, multiply, Dropout
from keras.layers import Concatenate, GaussianNoise, Activation
from keras.optimizers import Adam
from keras.utils import np_utils, to_categorical
from keras import initializers
from keras import backend as K
# #### Getting the data
# load dataset
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# #### Explore visual data
#
# The CIFAR-10 images are 32x32 RGB images belonging to 10 classes.
# +
num_classes = len(np.unique(y_train))
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
fig = plt.figure(figsize=(8,3))
for i in range(num_classes):
ax = plt.subplot(2, 5, 1 + i, xticks=[], yticks=[])
idx = np.where(y_train[:]==i)[0]
features_idx = X_train[idx,::]
img_num = np.random.randint(features_idx.shape[0])
img = features_idx[img_num,::]
ax.set_title(class_names[i])
plt.imshow(img)
plt.tight_layout()
# -
# #### Reshaping and normalizing the inputs
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# +
if K.image_data_format() == 'channels_first':
X_train = X_train.reshape(X_train.shape[0], 3, 32, 32)
X_test = X_test.reshape(X_test.shape[0], 3, 32, 32)
input_shape = (3, 32, 32)
else:
X_train = X_train.reshape(X_train.shape[0], 32, 32, 3)
X_test = X_test.reshape(X_test.shape[0], 32, 32, 3)
input_shape = (32, 32, 3)
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, num_classes)
Y_test = np_utils.to_categorical(y_test, num_classes)
# the generator is using tanh activation, for which we need to preprocess
# the image data into the range between -1 and 1.
X_train = np.float32(X_train)
X_train = (X_train / 255 - 0.5) * 2
X_train = np.clip(X_train, -1, 1)
X_test = np.float32(X_test)
X_test = (X_test / 255 - 0.5) * 2
X_test = np.clip(X_test, -1, 1)
print('X_train reshape:', X_train.shape)
print('X_test reshape:', X_test.shape)
# -
print(X_train[0].shape)
# ### 2. Define model
#
# #### Generator
# +
# latent space dimension
z = Input(shape=(100,))
# classes
labels = Input(shape=(10,))
# Generator network
merged_layer = Concatenate()([z, labels])
# FC: 2x2x512
generator = Dense(2*2*512)(merged_layer)  # no activation here; LeakyReLU follows BatchNormalization
generator = BatchNormalization(momentum=0.9)(generator)
generator = LeakyReLU(alpha=0.1)(generator)
generator = Reshape((2, 2, 512))(generator)
# Conv 1: 4x4x256
generator = Conv2DTranspose(256, kernel_size=5, strides=2, padding='same')(generator)
generator = BatchNormalization(momentum=0.9)(generator)
generator = LeakyReLU(alpha=0.1)(generator)
# Conv 2: 8x8x128
generator = Conv2DTranspose(128, kernel_size=5, strides=2, padding='same')(generator)
generator = BatchNormalization(momentum=0.9)(generator)
generator = LeakyReLU(alpha=0.1)(generator)
# Conv 3: 16x16x64
generator = Conv2DTranspose(64, kernel_size=5, strides=2, padding='same')(generator)
generator = BatchNormalization(momentum=0.9)(generator)
generator = LeakyReLU(alpha=0.1)(generator)
# Conv 4: 32x32x3
generator = Conv2DTranspose(3, kernel_size=5, strides=2, padding='same', activation='tanh')(generator)
generator = Model(inputs=[z, labels], outputs=generator, name='generator')
# -
# #### Generator model visualization
# prints a summary representation of your model
generator.summary()
# #### Discriminator
# +
# input image
img_input = Input(shape=(X_train[0].shape))
# Conv 1: 16x16x64
discriminator = Conv2D(64, kernel_size=5, strides=2, padding='same')(img_input)
discriminator = BatchNormalization(momentum=0.9)(discriminator)
discriminator = LeakyReLU(alpha=0.1)(discriminator)
# Conv 2:
discriminator = Conv2D(128, kernel_size=5, strides=2, padding='same')(discriminator)
discriminator = BatchNormalization(momentum=0.9)(discriminator)
discriminator = LeakyReLU(alpha=0.1)(discriminator)
# Conv 3:
discriminator = Conv2D(256, kernel_size=5, strides=2, padding='same')(discriminator)
discriminator = BatchNormalization(momentum=0.9)(discriminator)
discriminator = LeakyReLU(alpha=0.1)(discriminator)
# Conv 4:
discriminator = Conv2D(512, kernel_size=5, strides=2, padding='same')(discriminator)
discriminator = BatchNormalization(momentum=0.9)(discriminator)
discriminator = LeakyReLU(alpha=0.1)(discriminator)
# FC
discriminator = Flatten()(discriminator)
# Concatenate
merged_layer = Concatenate()([discriminator, labels])
discriminator = Dense(512, activation='relu')(merged_layer)
# Output
discriminator = Dense(1, activation='sigmoid')(discriminator)
discriminator = Model(inputs=[img_input, labels], outputs=discriminator, name='discriminator')
# -
# #### Discriminator model visualization
# prints a summary representation of your model
discriminator.summary()
# ### 3. Compile model
# #### Compile discriminator
# Optimizer
discriminator.compile(Adam(lr=0.0002, beta_1=0.5), loss='binary_crossentropy',
metrics=['binary_accuracy'])
# #### Combined network
# +
discriminator.trainable = False
label = Input(shape=(10,), name='label')
z = Input(shape=(100,), name='z')
fake_img = generator([z, label])
validity = discriminator([fake_img, label])
d_g = Model([z, label], validity, name='adversarial')
d_g.compile(Adam(lr=0.0004, beta_1=0.5), loss='binary_crossentropy',
metrics=['binary_accuracy'])
# -
# prints a summary representation of your model
d_g.summary()
# ### 4. Fit model
#
# +
epochs = 100
batch_size = 32
smooth = 0.1
latent_dim = 100
real = np.ones(shape=(batch_size, 1))
fake = np.zeros(shape=(batch_size, 1))
d_loss = []
d_g_loss = []
for e in range(epochs + 1):
for i in range(len(X_train) // batch_size):
# Train Discriminator weights
discriminator.trainable = True
# Real samples
X_batch = X_train[i*batch_size:(i+1)*batch_size]
real_labels = to_categorical(y_train[i*batch_size:(i+1)*batch_size].reshape(-1, 1), num_classes=10)
d_loss_real = discriminator.train_on_batch(x=[X_batch, real_labels],
y=real * (1 - smooth))
# Fake Samples
z = np.random.normal(loc=0, scale=1, size=(batch_size, latent_dim))
random_labels = to_categorical(np.random.randint(0, 10, batch_size).reshape(-1, 1), num_classes=10)
X_fake = generator.predict_on_batch([z, random_labels])
d_loss_fake = discriminator.train_on_batch(x=[X_fake, random_labels], y=fake)
# Discriminator loss
d_loss_batch = 0.5 * (d_loss_real[0] + d_loss_fake[0])
# Train Generator weights
discriminator.trainable = False
z = np.random.normal(loc=0, scale=1, size=(batch_size, latent_dim))
random_labels = to_categorical(np.random.randint(0, 10, batch_size).reshape(-1, 1), num_classes=10)
d_g_loss_batch = d_g.train_on_batch(x=[z, random_labels], y=real)
print(
'epoch = %d/%d, batch = %d/%d, d_loss=%.3f, g_loss=%.3f' % (e + 1, epochs, i, len(X_train) // batch_size, d_loss_batch, d_g_loss_batch[0]),
100*' ',
end='\r'
)
d_loss.append(d_loss_batch)
d_g_loss.append(d_g_loss_batch[0])
print('epoch = %d/%d, d_loss=%.3f, g_loss=%.3f' % (e + 1, epochs, d_loss[-1], d_g_loss[-1]), 100*' ')
if e % 10 == 0:
samples = 10
z = np.random.normal(loc=0, scale=1, size=(samples, latent_dim))
labels = to_categorical(np.arange(0, 10).reshape(-1, 1), num_classes=10)
x_fake = generator.predict([z, labels])
x_fake = np.clip(x_fake, -1, 1)
        x_fake = (x_fake + 1) * 127.5  # map [-1, 1] back to [0, 255]
x_fake = np.round(x_fake).astype('uint8')
for k in range(samples):
plt.subplot(2, 5, k + 1, xticks=[], yticks=[])
plt.imshow(x_fake[k])
plt.title(class_names[k])
plt.tight_layout()
plt.show()
# -
# ### 5. Evaluate model
# plotting the metrics
plt.plot(d_loss)
plt.plot(d_g_loss)
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Discriminator', 'Adversarial'], loc='center right')
plt.show()
# ## References
#
# * [Conditional Generative Adversarial Nets](https://arxiv.org/pdf/1411.1784.pdf)
# * [How to Train a GAN? Tips and tricks to make GANs work](https://github.com/soumith/ganhacks)
# * [The CIFAR-10 dataset](https://www.cs.toronto.edu/%7Ekriz/cifar.html)
# * [Keras-GAN](https://github.com/eriklindernoren/Keras-GAN)
|
src/cifar10/03_CGAN_CIFAR10.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values
y = dataset.iloc[:, 2].values.reshape(-1, 1)
# -
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
sc_y = StandardScaler()
X = sc_X.fit_transform(X)
y = sc_y.fit_transform(y)
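# Standardization round trip: since both X and y are scaled, predictions live in
# scaled space and must be mapped back. A minimal hand-rolled check mirroring what
# StandardScaler's fit_transform/inverse_transform do (toy values):

```python
import numpy as np

vals = np.array([1.0, 4.0, 10.0])
mu, sigma = vals.mean(), vals.std()

scaled = (vals - mu) / sigma        # what fit_transform produces
restored = scaled * sigma + mu      # what inverse_transform recovers

print(np.allclose(restored, vals))  # True
```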
# Fitting SVR to the dataset
from sklearn.svm import SVR
regressor = SVR(kernel = 'rbf')
regressor.fit(X, y.ravel())
# Predicting a new result: the query point must be scaled first,
# and the prediction mapped back to the original salary scale
y_pred = regressor.predict(sc_X.transform(np.array([[6.5]])))
y_pred = sc_y.inverse_transform(y_pred.reshape(-1, 1))
print(y_pred)
def linspacious(X, factor=3):
    # evenly spaced grid over the range of X, padded by `factor` steps on each side
    space = np.linspace(X.min(), X.max())
    d = space[1] - space[0]
    space = np.concatenate([[space[0] - factor*d], space, [space[-1] + factor*d]]).reshape(-1, 1)
    return space
# Visualising the SVR results
plt.scatter(X, y, color = 'red')
plt.plot(linspacious(X), regressor.predict(linspacious(X)), color = 'blue')
plt.title('Truth or Bluff (SVR)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
|
Part 2 - Regression/Section 7 - Support Vector Regression (SVR)/Support Vector Regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ups
# language: python
# name: ups
# ---
# +
import pandas as pd
import numpy as np
import torch
import os, sys
os.chdir("/home/jackie/ResearchArea/SkinCancerResearch/semi_skin_cancer")
sys.path.append("/home/jackie/ResearchArea/SkinCancerResearch/semi_skin_cancer")
# +
def get_cur_acc(num_samples):
import re
import os
dir_name = 'ckps'
sub_dir_name = 'resnet50_sev_cates_' + str(num_samples) + '_0.99_naive_0_afm_0.7_u_0.3'
fname = os.path.join(dir_name, os.path.join(sub_dir_name, 'log.txt'))
with open(fname, 'r') as f:
real_acc = -1
for line in f:
if re.search("Report:", line):
acc = line.split(",")[0].split(":")[1].split("%")[0]
real_acc = max(float(acc), real_acc)
print(real_acc)
return real_acc
num_sample_list = [500, 1000, 1500, 2000, 2500]
acc_list = np.array([get_cur_acc(num) for num in num_sample_list])
print(acc_list)
# +
import matplotlib.pyplot as plt
np.random.seed(1234)
dis_arr = np.random.uniform(-0.3, 0.3, (50, 5))
df = pd.DataFrame(acc_list + dis_arr,
columns=num_sample_list)
mean = df.mean()
mean.index = np.arange(1,len(mean)+1)
_, ax = plt.subplots()
mean.plot(ax=ax,color='red')
df.boxplot(column=num_sample_list, grid=False, ax=ax, showfliers=False)
plt.xlabel('The number of samples')
plt.ylabel('Accuracy')
plt.savefig('./Analysis/boxplot.pdf')
# +
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
test_errors_dict = dict()
np.random.seed(40)
test_errors_dict[2] = np.random.rand(20)
test_errors_dict[3] = np.random.rand(20)
test_errors_dict[5] = np.random.rand(20)
df = pd.DataFrame(data=test_errors_dict)
df = df.astype(float)
mean = df.mean()
mean.index = np.arange(1,len(mean)+1)
_, ax = plt.subplots()
mean.plot(ax=ax)
df.boxplot(showfliers=False, ax=ax)
plt.show()
# -
|
Analysis/boxplot.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import math
import datetime
train = pd.read_csv('../data/airbnb/train_users.csv')
test = pd.read_csv('../data/airbnb/test_users.csv')
countries = pd.read_csv('../data/airbnb/countries.csv')
user_demo = pd.read_csv('../data/airbnb/age_gender_bkts.csv')
sessions = pd.read_csv('../data/airbnb/sessions.csv')
# +
#changing into datetime
train.date_account_created = pd.to_datetime(train.date_account_created)
train.timestamp_first_active = pd.to_datetime(train.timestamp_first_active, format = "%Y%m%d%H%M%S")
train.date_first_booking = pd.to_datetime(train.date_first_booking)
test.timestamp_first_active = pd.to_datetime(test.timestamp_first_active, format = "%Y%m%d%H%M%S")
test.date_account_created = pd.to_datetime(test.date_account_created)
train_destination = train.iloc[:,-1]
# +
unknowntrain = [i for i, j in enumerate(train.gender) if j == '-unknown-']
train.loc[unknowntrain, 'gender'] = 'NA'
unknowntest = [i for i, j in enumerate(test.gender) if j == '-unknown-']
test.loc[unknowntest, 'gender'] = 'NA'
# -
np.unique(train.gender)
# +
#sessions grouping by user
#Group by user_id, aggregate by number of counts (counting device_type as it is never NA),
#and total sum of elapsed time in seconds
group_sessions = sessions.groupby("user_id").agg({'device_type':'count', 'secs_elapsed':'sum'})
#rename columns (the agg output follows the dict order: count first, then summed seconds)
group_sessions.columns = ['counts', 'sum_secs_elapsed']
#group by variable turns into index, I'm resetting the index and putting user_id back as a column
group_sessions.reset_index(level=0, inplace=True)
# -
group_sessions.head() # to be deleted
#bucket all ages into format that user_demo is in for age
def agebuckets(ages):
    # buckets are '0-4', '5-9', ..., '95-99'; ages of 100+ fall into '100+', NaN -> 'NA'
    buckets = ['%d-%d' % (i, i + 4) for i in range(0, 100, 5)]
    newlist = []
    for age in ages:
        if math.isnan(age):
            newlist.append('NA')
        elif age < 5:
            newlist.append(buckets[0])
        elif age >= 100:
            newlist.append('100+')
        else:
            newlist.append(buckets[int(age) // 5])
    return newlist
train.age[0:20] #to be deleted
train.age = agebuckets(train.age)
test.age = agebuckets(test.age)
train.age[0:20] # to be deleted
def timedif(L1, L2):
timediflist = []
for i in range(len(L1)):
try:
if (L1[i]-L2[i]).days <= -1:#datetime.timedelta(days=0):
timediflist.append('before')
elif (L1[i]-L2[i]).days ==0: #datetime.timedelta(days=1):
timediflist.append('same day')
else:
timediflist.append('greater 1 day')
except:
timediflist.append('NB')
return timediflist
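# `timedif` above keys off `timedelta.days`, which floors toward negative infinity:
# even a gap of a few hours "before" yields -1 days, while a later time the same day
# yields 0. A quick check of that boundary behaviour (hypothetical timestamps):

```python
from datetime import datetime

a = datetime(2014, 1, 2, 3, 0)
b = datetime(2014, 1, 2, 9, 0)

print((a - b).days)  # -1: a is only 6 hours before b, but .days floors to -1
print((b - a).days)  # 0: same calendar day, later time
```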
np.unique(timedif(train.date_first_booking, train.date_account_created)) #testing to be deleted
np.unique(timedif(test.date_first_booking, test.date_account_created)) #testing to be deleted
np.unique(timedif(train.date_first_booking, train.timestamp_first_active)) # testing to be deleted
np.unique(timedif(test.date_first_booking, test.timestamp_first_active)) # testing to be deleted
#adding time lag columns
train['lag_account_created'] = timedif(train.date_first_booking, train.date_account_created)
train['lag_first_active'] = timedif(train.date_first_booking, train.timestamp_first_active)
train['lag_account_created_first_active'] = timedif(train.date_account_created, train.timestamp_first_active)
test['lag_account_created_first_active'] = timedif(test.date_account_created, test.timestamp_first_active)
def bookings(L1, L2, L3, L4):
timediflist = []
for i in range(len(L1)):
if L1[i] == 'same day' or L2[i] == 'same day':
timediflist.append('early')
elif L1[i] == 'before' and L2[i] == 'before' and L3[i] == 'same day':
timediflist.append('early')
elif L1[i] == 'greater 1 day' and L2[i] == 'greater 1 day':
timediflist.append('waited')
elif L1[i] == 'greater 1 day' and L2[i] == 'before':
timediflist.append('waited')
elif L1[i] == 'before' and L2[i] == 'greater 1 day':
timediflist.append('waited')
elif L1[i] == 'before' and L2[i] == 'before' and L3[i] == 'greater 1 day':
timediflist.append('waited')
elif L4[i] == 'NDF':
timediflist.append('NB')
else:
timediflist.append('NA')
return timediflist
booking = bookings(train.lag_account_created, train.lag_first_active, train.lag_account_created_first_active, train.country_destination)
train['bookings'] = booking
# +
#given the train data gender, age, and country_destination produce the corresponding population in thousands
population_in_thous = []
for i in range(train.shape[0]):
if train.country_destination[i] == 'NDF':
population_in_thous.append('NB')
elif train.gender[i] == 'NA' or train.age[i] == 'NA' or train.gender[i] == 'nan':
population_in_thous.append('NA')
elif train.gender[i] == 'OTHER':
population_in_thous.append(0)
elif train.country_destination[i] == 'other':
gendersi = user_demo.loc[user_demo.gender == train.gender[i].lower(),:]
ages = gendersi.loc[gendersi.age_bucket == train.age[i], :]
ages = list(map(lambda x: float(x), ages.population_in_thousands))
population_in_thous.append(np.mean(ages))
else:
genders = user_demo.loc[user_demo.gender == train.gender[i].lower(),:]
dests = genders.loc[genders.country_destination == train.country_destination[i] ,:]
population_in_thous.append(float((dests.loc[dests.age_bucket == train.age[i], 'population_in_thousands'])))
population_in_thous[0:10]
# -
#merging gender age bucket with train data
train['population_in_thousands'] = population_in_thous
#merging with grouped sessions and countries, **note most of training data is not in sessions. see below
test = pd.merge(test, group_sessions, left_on='id', right_on ='user_id', how='left')
train = pd.merge(train, group_sessions, left_on='id', right_on ='user_id', how='left')
test = test.drop('user_id', 1)
train = train.drop('user_id', 1)
print(train.iloc[0:5, 0:10]) #to be deleted?
print(train.iloc[0:5, 10:]) # to be deleted?
#NDFtrain = [i for i, j in enumerate(train.country_destination) if j == 'NDF']
#train.loc[NDFtrain, 'lat_destination'] = 'NB'
#train.loc[NDFtrain, 'lng_destination'] = 'NB'
#train.loc[NDFtrain, 'distance_km'] = 'NB'
#train.loc[NDFtrain, 'destination_km2'] = 'NB'
#train.loc[NDFtrain, 'destination_language '] = 'NB'
#train.loc[NDFtrain, 'language_levenshtein_distance'] = 'NB'
print(train.iloc[0:5, 0:10])
print(train.iloc[0:5, 10:20])
print(train.iloc[0:5, 20:])
# +
#delete all but one time row now that we have lag times?
#remove either train['lag_account_created'] or train['lag_first_active'] to take into account leakage
#note country destination still in training
# -
train.to_csv("../data/airbnb/train_starting.csv")
test.to_csv('../data/airbnb/test_starting.csv')
# +
#appendix showing the missingness of the training ids in the sessions csv
# -
strgroupids = ' '.join(group_sessions.user_id) #making a huge string of all the users ids in group_sesssions
sum(map(lambda x: strgroupids.find(x) != -1, test.id))
sum(map(lambda x: strgroupids.find(x) != -1, train.id))
# +
print('test shape ', test.shape)
print('train shape', train.shape)
print('# test ids in sessions/#test ids', 61668.0/62096)
print('# train ids in sessions/#train ids', 73815.0/213451)
# -
import pandas as pd
train = pd.read_csv('train_starting.csv')
test = pd.read_csv('test_starting.csv')
# !pip install xgboost
lists = list(train.columns)
lists.remove('country_destination')
lists
train_x = train.loc[:,lists]
train_y = train.loc[:, 'country_destination']
# +
import xgboost as xgb
params = {}
params["objective"] = "multi:softmax"
params["eta"] = 0.005
params["min_child_weight"] = 6
params["subsample"] = 0.7
params["colsample_bytree"] = 0.7
params["scale_pos_weight"] = 1
params["silent"] = 1
params["max_depth"] = 9
params['eval_metric'] = 'ndcg@5'
params['nthread'] = 4
params['num_class'] = train_y.nunique()
plst = list(params.items())
# multi:softmax needs integer class labels, so encode the destination strings first
from sklearn.preprocessing import LabelEncoder
train_y_enc = LabelEncoder().fit_transform(train_y)
# note: any remaining non-numeric feature columns also need encoding before DMatrix accepts them
dtrain = xgb.DMatrix(train_x, label=train_y_enc)
num_round = 5
model = xgb.train(plst, dtrain, num_round)
# -
|
src/eda_airbnb.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Nonlinear SVC
#
# ### Problem Formulation
#
# The primal form for nonlinear SVC can be expressed as:
#
# $$
# \begin{align}
# &\min_{\mathbf{w}, b, \mathbf{\xi}} && \frac{1}{2}\mathbf{w}^{T}\mathbf{w} + C \sum_{i=1}^{m}\xi_{i}\\
# &\text{subj. to} && y_{i}\big(\mathbf{w}^{T}\phi(\mathbf{x}_{i}) + b\big) \ge 1 - \xi_{i}, \quad \xi_{i} \ge 0
# \end{align}
# $$
#
# The Lagrangian function for this problem is:
#
# $$
# \mathcal{L}(\mathbf{w},b,\mathbf{\xi},\mathbf{\alpha}) = \frac{1}{2}\mathbf{w}^{T}\mathbf{w} + C \sum_{i=1}^{m}\xi_{i} + \sum_{i}^{m}\alpha_{i}\Big(1 - \xi_{i} - y_{i}\big(\mathbf{w}^{T}\phi(\mathbf{x}_{i}) + b\big) \Big)
# $$
#
# Stationarity KKT conditions:
#
# $$
# \begin{align}
# &\nabla_{\mathbf{w}}\mathcal{L} = \mathbf{w} - \sum_{i=1}^{m}\alpha_{i}y_{i}\phi(\mathbf{x}_{i}) = 0\\
# &\frac{\partial \mathcal{L}}{\partial b} = -\sum_{i=1}^{m}\alpha_{i}y_{i} = 0
# \end{align}
# $$
#
# Substituting the stationarity conditions back into the Lagrangian eliminates $\mathbf{w}$ and $b$:
#
# $$
# \mathcal{L}(\mathbf{\alpha},\mathbf{\xi}) = -\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_{i}\alpha_{j}y_{i}y_{j}\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j}) + \sum_{i=1}^{m}\alpha_{i} + \sum_{i=1}^{m}\xi_{i}(C - \alpha_{i})
# $$
#
# With the Lagrangian, the dual of the primal problem can be expressed as:
#
# $$
# \begin{align}
# &\max_{\mathbf{\alpha}, \mathbf{\xi}} && -\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_{i}\alpha_{j}y_{i}y_{j}\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j}) + \sum_{i=1}^{m}\alpha_{i} + \sum_{i=1}^{m}\xi_{i}(C - \alpha_{i})\\
# &\text{subj. to} && \mathbf{y}^{T}\mathbf{\alpha}=0, \quad \xi_{i} \ge 0, \quad \mathbf{\alpha} \ge 0
# \end{align}
# $$
#
# Converting the problem to a minimization:
#
# $$
# \begin{align}
# &\min_{\mathbf{\alpha}, \mathbf{\xi}} && \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_{i}\alpha_{j}y_{i}y_{j}\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j}) - \sum_{i=1}^{m}\alpha_{i} - \sum_{i=1}^{m}\xi_{i}(C - \alpha_{i})\\
# &\text{subj. to} && \mathbf{y}^{T}\mathbf{\alpha}=0, \quad \xi_{i} \ge 0, \quad \mathbf{\alpha} \ge 0
# \end{align}
# $$
#
# Considering $\mathbf{\xi}$ as a Lagrange multiplier, this problem can be re-cast as:
#
# $$
# \begin{align}
# &\min_{\mathbf{\alpha}, \mathbf{\xi}} && \frac{1}{2}\mathbf{\alpha}^{T}\mathbf{G}\mathbf{\alpha} - \mathbf{e}^{T}\mathbf{\alpha}\\
# &\text{subj. to} && \mathbf{y}^{T}\mathbf{\alpha}=0, \quad 0 \le \alpha_{i} \le C
# \end{align}
# $$
#
# where $\mathbf{G}_{ij} = y_{i}y_{j}\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j})$ is positive semidefinite. This problem is a convex QP. Note that the strength of the penalty, $C$, in the primal problem limits how large the dual variables, $\alpha_{i}$, can become.
#
# The elements of $\mathbf{\alpha}$ which are greater than 0 can be used to determine the decision function to classify new instances of $x$. First it is necessary to determine the bias term, $\hat{b}$, which can be obtained from the instances of $\mathbf{\alpha}$ where $0 < \alpha_{i} < C$. These instances are dual variables of support vectors that lie on the margins, where $\xi_{i} = 0$. Thus, $\hat{b}$ can be obtained from the active constraint $y_{i}\big(\hat{\mathbf{w}}^{T}\phi(\mathbf{x}_{i}) + \hat{b}\big)=1$. Multiplying both sides by $y_{i}$ we have $\hat{\mathbf{w}}^{T}\phi(\mathbf{x}_{i}) + \hat{b}=y_{i}$. We can now compute $\hat{b}$ by averaging over all the support vectors on the margin:
#
# $$
# \begin{gather}
# \hat{b} = \frac{1}{n_{s}}\sum_{i=1}^{n_s}\Big(y_{i} - \hat{\mathbf{w}}^{T}\phi(\mathbf{x}_{i})\Big)\\
# \hat{b} = \frac{1}{n_{s}}\sum_{i=1}^{n_s}\Big(y_{i} - \sum_{j=1}^{m}\alpha_{j}y_{j}\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j})\Big)
# \end{gather}
# $$
#
# New instances, $\mathbf{x}^{\prime}$ can be classifed using the decision function $h(\mathbf{x}^{\prime})$:
#
# $$
# \begin{gather}
# h(\mathbf{x}^{\prime}) = \hat{b} + \sum_{i=1}^{m}\alpha_{i}y_{i}\phi(\mathbf{x}^{\prime})^{T}\phi(\mathbf{x}_{i}),
# \end{gather}
# $$
#
# where the summation occurs over all non-zero support vectors.
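#
# As a toy illustration of the decision function with a linear kernel $k(a,b)=ab$ on two 1-D support vectors (hypothetical values for $\alpha$ and $\hat{b}$):

```python
import numpy as np

x_sv = np.array([-1.0, 1.0])   # support vectors
y_sv = np.array([-1, 1])       # their labels
alpha = np.array([0.5, 0.5])   # hypothetical dual variables
b_hat = 0.0                    # hypothetical intercept

def h(x_new):
    # h(x') = b_hat + sum_i alpha_i * y_i * k(x', x_i), with linear kernel k(a, b) = a * b
    return b_hat + np.sum(alpha * y_sv * (x_new * x_sv))

print(h(2.0), h(-2.0))  # 2.0 -2.0: the sign gives the predicted class
```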
import numpy as np
import numpy.linalg as LA
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import pandas as pd
import cvxpy as cp
import matplotlib.pyplot as plt
# %matplotlib inline
# +
#Generate Data
X,y = datasets.make_moons(noise=0.2)
#split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
#Scale the data
scalerX = StandardScaler().fit(X)
X_train_scaled = scalerX.transform(X_train)
X_test_scaled = scalerX.transform(X_test)
X_scaled = scalerX.transform(X)
# +
#SVM sklearn
#kernel
kernel_name = 'rbf'
C = 5
clf = SVC(kernel=kernel_name, degree=3, coef0=1, C=C)
clf.fit(X_train_scaled, y_train)
dual_coefs = clf.dual_coef_.flatten()
print("dual coefficients:")
print(dual_coefs)
print("\n-----------------\n")
support_vecs = clf.support_vectors_
intercept = clf.intercept_
print("Intercept:")
print(intercept)
print("\n-----------------\n")
# +
#Function Definitions
def setup_kernel(params):
#polynomial kernel
if params['kernel']=='poly':
def _nonlin_kernel(x1,x2):
_gamma = params['gamma']
_degree = params['degree']
_r = params['r']
return (_gamma * np.dot(x1,x2) + _r)**_degree
return _nonlin_kernel
#gaussian RBF
elif params['kernel']=='rbf':
def _nonlin_kernel(x1,x2):
_gamma = params['gamma']
return np.exp(-_gamma * LA.norm(x1 - x2,2)**2)
return _nonlin_kernel
def Gram_matrix(x,y,kernel):
num_points,_ = x.shape
G = np.zeros((num_points,num_points))
for ii in range(0,num_points):
for jj in range(ii,num_points):
_g = y[ii]*y[jj]*kernel(x[ii],x[jj])
G[ii,jj] = _g
G[jj,ii] = _g
return G
def compute_bhat(alpha,x,y,kernel,C):
epsilon = 1e-10
#find values of alpha on the margin (less than C and greater than 0)
_alpha = alpha[alpha < (C - epsilon)]
_x = x[alpha < (C - epsilon)]
_y = y[alpha < (C - epsilon)]
#sum over all support vectors on the margin
ns = len(_alpha)
bhat = 0
for ii in range(0,ns):
_bhat = 0
#for vectors on the margin: y_i - theta^T * X_i
for jj in range(0,len(alpha)):
#theta^T * X_i
_bhat += alpha[jj] * y[jj] * kernel(_x[ii],x[jj])
bhat += _y[ii] - _bhat
bhat = bhat/ns
return bhat
def predict(x_new,x,y,alpha,bhat,kernel):
y_pred = bhat
for ii in range(len(alpha)):
y_pred += alpha[ii] * y[ii] * kernel(x_new,x[ii])
if y_pred > 0:
return 1
else:
return 0
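# A quick sanity check on the RBF kernel form used above (assuming gamma > 0):
# k(x, x) = 1, and the Gram matrix G_ij = y_i y_j k(x_i, x_j) is symmetric.

```python
import numpy as np
import numpy.linalg as LA

gamma = 0.5
k = lambda x1, x2: np.exp(-gamma * LA.norm(x1 - x2, 2) ** 2)

x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
y = np.array([1, -1, 1])
G = np.array([[y[i] * y[j] * k(x[i], x[j]) for j in range(3)] for i in range(3)])

print(k(x[0], x[0]))        # 1.0
print(np.allclose(G, G.T))  # True
```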
# +
#Solve from Scratch
#setup kernel and Gram matrix
gamma = 1/(X_train_scaled.shape[1] * X_train_scaled.var())
params = {'kernel':kernel_name,'gamma':gamma,'degree':3,'r':1}
kernel_func = setup_kernel(params)
y_ones = np.copy(y_train)
y_ones[y_train==0]=-1
G = Gram_matrix(X_train_scaled,y_ones,kernel_func)
#solve dual problem with CVX
n = len(X_train_scaled)
e = -np.ones(n)
lb = np.zeros(n)
ub = C*np.ones(n)
theta = cp.Variable(n)
prob = cp.Problem(cp.Minimize((1/2)*cp.quad_form(theta, G) + e.T @ theta),
[theta >= lb,
theta <= ub,
y_ones.T @ theta == 0])
prob.solve()
alpha_scratch = theta.value
#zero-out entries close to 0
epsilon = 1e-10
alpha_scratch[np.abs(alpha_scratch) < epsilon]=0
#get support vectors
X_support = X_train_scaled[alpha_scratch > 0,:]
y_support = y_ones[alpha_scratch > 0]
#remove alphas for inactive constraints (should be equal to 0)
alpha_support = alpha_scratch[alpha_scratch > 0]
print("scratch dual coefficients:")
print(alpha_support)
print("\n-----------------\n")
#compute intercept
bhat = compute_bhat(alpha_support,X_support,y_support,kernel_func,C)
print("scratch intercept:")
print(bhat)
print("\n-----------------\n")
#test = predict(np.array([2,2]),X_support,y_support,alpha_support,bhat,kernel_func)
# +
#sklearn - plot results
# create meshgrid for plotting
h = 0.02 #step size of mesh
x_min, x_max = X_scaled[:, 0].min() - 1, X_scaled[:, 0].max() + 1
y_min, y_max = X_scaled[:, 1].min() - 1, X_scaled[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
#predict classes using sklearn kernel SVM classifier
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# contour plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
#testing data
X0 = X_test_scaled[y_test==0]
X1 = X_test_scaled[y_test==1]
# Plot the test data
plt.scatter(X0[:, 0], X0[:, 1], marker='v')
plt.scatter(X1[:, 0], X1[:, 1], marker='o')
plt.title('sklearn kernel SVM')
# +
#predict classes from scratch
Z_scratch = np.zeros(shape=xx.shape)
x_dim,y_dim = Z_scratch.shape
for ii in range(0,x_dim):
for jj in range(0,y_dim):
x_new = np.array([xx[ii,jj],yy[ii,jj]])
Z_scratch[ii,jj] = predict(x_new,X_support,y_support,alpha_support,bhat,kernel_func)
# -
#scratch - plot results
plt.contourf(xx, yy, Z_scratch, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X0[:, 0], X0[:, 1], marker='v')
plt.scatter(X1[:, 0], X1[:, 1], marker='o')
plt.title('scratch kernel SVM')
|
Nonlinear_SVC.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# PyBank
# Dependencies
import os
import csv
import pandas as pd
# Files to load and output
bankData = os.path.join('.', 'Resources', 'budget_data.csv')
postData = os.path.join('.', 'Resources', 'budget_analysis.txt')
#Track the various financial parameters
total_months = 0
month_of_change = []
net_change_list = []
greatest_increase = ["", 0]
greatest_decrease = ["", 999999999999999999999]
total_net = 0
# Read the csv and convert it into a list dictionary
with open(bankData) as financial_data:
# CSV reader specifies delimiter and variable that holds contents, makes columns based off the delimiter (,)
csvreader = csv.reader(financial_data)
# Read the header row
header = next(csvreader)
# Extract the first row to avoid appending to net_change_list
total_months = total_months + 1
first_row = next(csvreader)
total_net = total_net + int(first_row[1])
prev_net = int(first_row[1])
#loop through the data
for row in csvreader:
# track the total
total_months = total_months + 1
total_net = total_net + int(row[1])
# Track the net change
net_change = int(row[1]) - prev_net
prev_net = int(row[1])
net_change_list = net_change_list + [net_change]
month_of_change = month_of_change + [row[0]]
if net_change > greatest_increase[1]:
greatest_increase[0] = row[0]
greatest_increase[1] = net_change
if net_change < greatest_decrease[1]:
greatest_decrease[0] = row[0]
greatest_decrease[1] = net_change
# Calculate the average net change
net_monthly_avg = sum(net_change_list) / len(net_change_list)
output = (
f"\nFinancial Analysis\n"
f"======================\n"
f"Total Months: {total_months}\n"
f"Total: ${total_net:,.2f}\n"
f"Average Change: ${net_monthly_avg:,.2f}\n"
f"Greatest Increase In Profits {greatest_increase[0]}, (${greatest_increase[1]:,.2f})\n"
f"Greatest Decrease in Profits {greatest_decrease[0]}, (${greatest_decrease[1]:,.2f})\n"
)
print(output)
with open(postData, "w") as txt_file:
txt_file.write(output)
# -
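# The month-over-month change tracking above can be sketched as a small standalone
# function. The sample rows below are hypothetical, not taken from `budget_data.csv`:

```python
def monthly_changes(rows):
    """rows: list of (month, profit) tuples; returns (month, change) for each later month."""
    changes = []
    prev = rows[0][1]
    for month, profit in rows[1:]:
        changes.append((month, profit - prev))
        prev = profit
    return changes

sample = [("Jan-2010", 100), ("Feb-2010", 150), ("Mar-2010", 90)]
print(monthly_changes(sample))  # [('Feb-2010', 50), ('Mar-2010', -60)]
```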
|
PyBank/PyBank.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # VacationPy
# ----
#
# #### Note
# * Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
#
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
from datetime import date
today = date.today()
# Import API key
from api_keys import g_key
# -
# ### Store Part I results into DataFrame
# * Load the csv exported in Part I to a DataFrame
df1 = pd.read_csv(os.path.join('..','WeatherPy','output_data','raw_weather_data.csv'))
df = df1[['name','coord.lon','coord.lat','main.temp_max','clouds.all','main.humidity','wind.speed','sys.country','dt']]
df.head()
# ### Humidity Heatmap
# * Configure gmaps.
# * Use the Lat and Lng as locations and Humidity as the weight.
# * Add Heatmap layer to map.
gmaps.configure(api_key=g_key)
cities = df[['coord.lat','coord.lon']]
cities = cities.loc[(cities['coord.lat'] >= -90) & (cities['coord.lat'] <= 90)]
cities.head()
cities.loc[[4]]
humidity = df["main.humidity"].astype(float)
humidity.head()
hum_max = max(humidity)
hum_max
fig1 = gmaps.figure()
heat = gmaps.heatmap_layer(cities, weights=humidity, dissipating=False, max_intensity=hum_max,point_radius=3)
fig1.add_layer(heat)
fig1
# ### Create new DataFrame fitting weather criteria
# * Narrow down the cities to fit weather conditions.
# * Drop any rows with null values.
# +
# Conditions:
# A max temperature lower than 80 degrees but higher than 70.
# Wind speed less than 10 mph.
# Zero cloudiness.
filtered = df.dropna()
hotel_df = filtered.loc[(filtered['main.temp_max'] <= 80) & (filtered['main.temp_max'] >= 70) &
                        (filtered['wind.speed'] <= 10) & (filtered['clouds.all'] == 0)].copy()
hotel_df = hotel_df.reset_index(drop=True)
hotel_df
# -
# ### Hotel Map
# * Store into variable named `hotel_df`.
# * Add a "Hotel Name" column to the DataFrame.
# * Set parameters to search for hotels within 5000 meters.
# * Hit the Google Places API for each city's coordinates.
# * Store the first Hotel result into the DataFrame.
# * Plot markers on top of the heatmap.
# +
# type of place to search for (the Places API expects the lowercase "lodging")
lodging = "lodging"
#radius to search
r = 5000
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
calls = []
for x in range(len(hotel_df)):
    lat = hotel_df.iloc[x]['coord.lat']
    lon = hotel_df.iloc[x]['coord.lon']
    # the Places API expects "lat,lng" as a single string
    parameters = {"location": f"{lat},{lon}", "type": lodging, "radius": r, "key": g_key}
    call = requests.get(base_url, params=parameters)
    calls.append(call)
hotel_col = []
for x in range(len(calls)):
    try:
        hotel_col.append(calls[x].json()['results'][0]['name'])
    except (KeyError, IndexError):
        hotel_col.append(np.nan)
hotel_df['Hotel Name'] = hotel_col
hotel_df = hotel_df.dropna()
hotel_df = hotel_df.reset_index(drop=True)
hotel_df = hotel_df.rename(columns={'coord.lat': 'Lat', 'coord.lon': 'Lng', 'name': 'City', 'sys.country': 'Country'})
hotel_df
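# The per-row request construction above can be checked without hitting the network.
# Here is a small sketch (toy coordinates, not real query results) of building the
# "lat,lng" location strings the Places API expects:

```python
import pandas as pd

toy = pd.DataFrame({"coord.lat": [48.85, 35.68], "coord.lon": [2.35, 139.69]})
params_list = [
    {"location": f"{row['coord.lat']},{row['coord.lon']}", "type": "lodging", "radius": 5000}
    for _, row in toy.iterrows()
]
print(params_list[0]["location"])  # 48.85,2.35
```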
# +
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# +
# Add marker layer ontop of heat map
marker_layer = gmaps.marker_layer(locations)
fig1.add_layer(marker_layer)
# Display figure
fig1
|
VacationPy/VacationPy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
# $ \newcommand{\bra}[1]{\langle #1|} $
# $ \newcommand{\ket}[1]{|#1\rangle} $
# $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
# $ \newcommand{\dot}[2]{ #1 \cdot #2} $
# $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
# $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
# $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
# $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
# $ \newcommand{\mypar}[1]{\left( #1 \right)} $
# $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
# $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
# $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
# $ \newcommand{\onehalf}{\frac{1}{2}} $
# $ \newcommand{\donehalf}{\dfrac{1}{2}} $
# $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
# $ \newcommand{\vzero}{\myvector{1\\0}} $
# $ \newcommand{\vone}{\myvector{0\\1}} $
# $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
# $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
# $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
# $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
# $ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
# $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
# $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
# $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
# $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
# $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
# $ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $
# $ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $
# $ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $
# $ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $
# $ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $
# <font style="font-size:28px;" align="left"><b> <font color="blue"> Solutions for </font> Basics of Python: Variables </b></font>
# <br>
# _prepared by <NAME>_
# <br><br>
# <a id="task1"></a>
#
# <h3> Task 1 </h3>
#
# Define three variables $n1$, $n2$, and $n3$, and set their values to $3$, $-4$, and $6$.
#
# Define a new variable $r1$, and set its value to $ (2 \cdot n1 + 3 \cdot n2) \cdot 2 - 5 \cdot n3 $, where $\cdot$ represents the multiplication operator.
#
# <i>The multiplication operator in python (and in many other programming languages) is *.</i>
#
# Then, print the value of $r1$.
#
# As you can verify yourself, the result should be $-42$.
# <h3>Solution</h3>
# +
n1,n2,n3 = 3,-4,6
print(n1,n2,n3)
r1 = (2 * n1 + 3 * n2)*2 - 5 *n3
print(r1)
# -
# <a id="task2"></a>
# <h3> Task 2 </h3>
#
# By using the same variables (you may not need to define them again), calculate and print the following value in python:
# $$
# \dfrac{(n1-n2)\cdot(n2-n3)}{(n3-n1)\cdot(n3+1)} .
# $$
#
# You should see $ -3.3333333333333335 $ as the outcome.
#
# You may round any float to a given number of decimal digits with $ round(variable,digits) $.
# <h3>Solution</h3>
# +
n1,n2,n3 = 3,-4,6
up = (n1-n2) * (n2-n3)
down = (n3-n1) * (n3+1)
result = up/down
print (result)
# round the result up the 3rd decimal digit
print(round(result,3))
# -
# <a id="task3"></a>
# <h3> Task 3 </h3>
#
# Define variables N and S, and set their values to your name and surname.
#
# Then, print the values of N and S with a prefix phrase "hello from the quantum world to".
# <h3> Solution</h3>
# +
N = "Abuzer"
S = "Yakaryilmaz"
print("hello from the quantum world to",N,S)
|
python/Python08_Basics_Variables_Solutions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# # The basics of awkward arrays
#
# At the forefront of coffea is a completely new syntax for expressing analysis computations:
# `awkward arrays` and their index-based notation. For people coming from a
# more traditional loop-based programming style, the syntax will take some getting used
# to, but this tutorial can hopefully help you understand the notation and the various methods.
#
# Let us begin by first understanding what you need to explore the contents of a typical
# ntuple file using coffea-related tools. First, download the dummy ntuple file and
# the corresponding schema file from the main repository to your working directory:
#
# ```sh
# # cd <WORKINGDIRECTORY>
# wget https://raw.githubusercontent.com/UMDCMS/CoffeaTutorial/main/samples/dummy_nanoevents.root
# wget https://raw.githubusercontent.com/UMDCMS/CoffeaTutorial/main/samples/dummyschema.py
# ```
#
# We can use the usual ROOT tools to look at the contents of the `dummy_nanoevents.root`
# file, but let us focus on using coffea tools alone.
#
# First, import the relevant coffea objects:
#
#
from coffea.nanoevents import NanoEventsFactory
from dummyschema import DummySchema
import numpy as np
import awkward1 as ak
# Now we can create the event list as an awkward array using coffea tools like:
#
events = NanoEventsFactory.from_root( 'file:dummy_nanoevents.root', # The file, notice the prefix `file:` for local file operation
'Events', # Name of the tree object to open
entry_stop=50, # Limit the number of events to process, nice for small scale debugging
schemaclass=DummySchema
).events()
# The last `schemaclass` argument will be left unexplained for now; see the schema tutorials to
# learn more about what it is. Here we have created the events as an awkward array. To see what
# is stored in the array we can use:
#
print(events.fields)
# Indicating the collections that are stored in the awkward array. To see how many events exist in our file,
# we can use the typical python method:
#
print(len(events))
# The 50 here corresponds to the `entry_stop` used to open the file. Next we can, of course, start to explore the contents of the various object collections. One can access the fields of the event as if it were a regular data member:
print(events.Electron.fields)
print(events.Jet.fields)
# Aha! We are starting to see numbers we can play around with. Notice that coffea was written with high-energy physics analysis in mind, so even though the electron energy doesn't appear in the output of `fields`, we can still access methods that we would typically associate with 4-vectors. In particular, we can call the `energy` field of the electron collection, even though it isn't explicitly defined: coffea is designed with 4-vectors in mind, so the energy is calculated on the fly.
print(events.Electron.pt)
print(events.Electron.energy)
# Now, looking at the output, we can begin to get a grasp of what awkward arrays are: the `events.Electron.pt` variable represents an N events x A objects array of floating-point values, the `events.Electron` variable represents an N events x A objects x K fields *collection* of floating-point arrays, and the `events` variable represents the N events for which a certain set of collections (in this case three: `['Electron', 'Muon', 'Jet']`) is recorded.
#
# The "awkward" part of the array refers to two things. First, the value of `A` is different for each event and for each collection: in this demonstration, our first event has 2 electrons, the second event has 4 electrons, and so on. Second, each collection can have a different number of fields. In a sense, the `events`, `Electron` and `pt` variables are just an easy way of representing the final `NxA` array that we might be interested in for the analysis. In our case the `N` number of events is what is called the outermost **dimension** or axis of the various objects, and `A` is the one inner dimension of the array. `K` is not a true dimension; it can be thought of as a book-keeping object used to keep track of how many awkward arrays are present. In this sense, we can say that `events.Electron` is an `NxA` object/collection array, as opposed to `events.Electron.pt` being an `NxA` data array.
#
#
#
# We can use the usual index notation to look at a particular object of interest. For instance if we want to look at the 0-th electron of the 1-st event in our event list, we can write:
print(events[1].Electron[0])
# But the real power of awkward arrays comes in when you don't explicitly use a concrete index, and instead express calculations in an abstract form.
# ## Basic object and event selection
#
# Let us start with the most basic example of event selection. Say we want to select event with electrons that have $p_T > 50$ GeV and $|\eta| < 0.5$. The awkward array allows us to write something like:
mask_pt = events.Electron.pt > 50
mask_eta = np.abs(events.Electron.eta) < 0.5
ele_mask = mask_pt & mask_eta
print(mask_pt)
print(mask_eta)
print(ele_mask)
# We can see that the usual logic comparison operators generate an `NxA` boolean array telling us which electrons (or more specifically, which electron pt and eta values) pass this particular selection criterion. Such a boolean array generated from a logic operation on ordinary arrays is typically called a `mask`. We can use the boolean `&` operator to get the intersection of multiple masks, or the `|` operator for the union. Now, where can we use this mask? The answer is that any array with an `NxA` structure can receive these masks to create a reduced array!
print(events.Electron.pt[ele_mask])
print(events.Electron.eta[ele_mask])
selectedElectrons = events.Electron[ele_mask]
print(selectedElectrons.pt)
# Probably the most important place to put the mask is directly in the `events.Electron` index; this generates a new collection of electrons that preserves the `NxA` structure, but with various collection instances filtered out. If you are familiar with `numpy`, this sort of index-based array filtering will look familiar. The difference is that because awkward arrays accept arrays of varying inner dimensions, they can truly preserve the structure of such a selection, rather than having everything be flattened out.
x = np.array([1,2,3,4,5,6,7,8,1,1,1,2])
print( x[x% 2 == 0])
y = np.array([[1,2,3,4],[5,6,7,8],[1,1,1,2]])
print( y[y%2==0])
z = ak.Array([[1,2,3,4],[5,6,7,8],[1,1,1,2]])
print(z[z%2==0])
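# The structure-preserving selection that `z[z%2==0]` performs can be mimicked with plain
# Python lists (hypothetical pt values, not from the ntuple), which may help build intuition
# for the `NxA` masking described above:

```python
events_pt = [[20.1, 55.3], [10.0, 61.2, 33.3, 70.4], [45.0]]  # N=3 events, ragged A
mask = [[pt > 50 for pt in ev] for ev in events_pt]
selected = [[pt for pt, keep in zip(ev, m) if keep] for ev, m in zip(events_pt, mask)]
print(selected)  # [[55.3], [61.2, 70.4], []]
```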
# Now suppose we only want events that have at least 1 selected electron. What we need is a set of functions that reduce this `NxA'` array to something of just dimension `N`. Formally these are called **reduction** operations, and the awkward package has a large set of functions that reduce the dimension of arrays. In our case, what we want is:
electron_count = ak.count(selectedElectrons.pt, axis=-1)
event_mask = electron_count >= 1
print(repr(event_mask))
# To break this down: `ak.count`, as the method name suggests, "counts" the number of elements along a certain axis. In our case, what we are interested in is the innermost dimension/axis, hence the typical python notation `axis=-1`. Using this we can run the event selection with the usual masking notation:
selectedEvents = events[event_mask]
print(event_mask)
print(events.Electron.pt)
print(selectedEvents.Electron.pt)
print(len(selectedEvents))
# Here we can confirm that the first event to pass the event selection is the 1-st event in the event list, and that the 0-th instance in the `selectedEvents.Electron.pt` result indeed corresponds to the values stored in the 1-st event of the original event list.
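# The `ak.count` reduction used above has a direct plain-Python analogue (toy ragged list
# below, hypothetical values), which may clarify what `axis=-1` reduces over:

```python
selected_pt = [[55.3], [61.2, 70.4], []]          # toy N x A' ragged array
electron_count = [len(ev) for ev in selected_pt]  # analogue of ak.count(..., axis=-1)
event_mask = [c >= 1 for c in electron_count]
kept = [ev for ev, keep in zip(selected_pt, event_mask) if keep]
print(electron_count, kept)  # [1, 2, 0] [[55.3], [61.2, 70.4]]
```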
# ## Object storage and collection creation
#
# Having completed the selection, we might be rather annoyed that we didn't just store the selected electrons, since these are the objects that we are likely going to use for further calculation. Following from the code above, what we can do is apply the event mask to the `selectedElectrons` collection. This is valid since the `N`-dimensional event mask "makes sense" applied to the `NxA'`-dimensional `selectedElectrons` object.
#
our_selectedElectrons = selectedElectrons[event_mask]
print(our_selectedElectrons.pt)
print(len(our_selectedElectrons))
# However, this is rather undesirable, since now we have a whole bunch of selected collections and event lists that we need to keep track of: `selectedElectrons`, `selectedEvents`, `our_selectedElectrons`. And this is with just one toy object selection. One can imagine that without some way of storing collections into events, the analysis code will get out of hand very quickly. This also ties into the fact that there might be certain physics quantities specific to an analysis that would be used for object selection, and that would be nice to add to the electron collection if they are not standard variables maintained by the NanoAOD development team. Here we are going to add a very artificial example of calculating the inverse of the electron pt, then selecting on the inverse pt. This very simple example will demonstrate the typical syntax used for storing variables, as well as exposing one of the peculiar quirks of awkward arrays:
# +
print('First attempt at adding extended variables to events')
events.Electron['invpt'] = 1/events.Electron.pt
events['selectedElectron_1'] = events.Electron[events.Electron.pt > 50]
print(events.fields)
print(events.Electron.fields)
print(events.selectedElectron_1.fields)
print('\n\nSecond attempt at adding extended variables to events')
events['myElectron'] = events.Electron[:]
events.myElectron['invpt'] = 1/events.myElectron.pt
events['selectedElectron_2'] = events.myElectron[events.myElectron.pt > 50]
print(events.fields)
print(events.myElectron.fields)
print(events.selectedElectron_2.fields)
print('\n\nThird attempt at adding extended variables to events')
myElectron = events.Electron[:]
myElectron['invpt'] = 1/myElectron.pt
events['selectedElectron_3'] = myElectron[myElectron.pt > 50]
print(events.fields)
print(myElectron.fields)
print(events.selectedElectron_3.fields)
# -
#
# Let's get the straightforward part of the code cleared up. The addition of collections looks very straightforward: one can think of `events` as something like a "dictionary of collections with a common outer dimension", so the addition of the two electron collections to the event has a very dictionary-esque notation. What is strange is the persistence of the extended collection for the electrons. Logically, the operations look identical, but the first attempt to add the new variable `invpt` directly to `events.Electron` fails to persist, and thus all direct extensions of `events.Electron` don't include the new `invpt` field.
#
# The reason for this is rather technical, regarding the mutability of objects in python and awkward. The rule of thumb is that collections directly generated from the file (a.k.a. the collections listed in `events.fields` immediately after opening a file) can **never** be altered, and therefore cannot have extended variables added to them. To create an extended variable on some collection, we need to make some sort of copy of the original, either by a trivial kinematic selection (ex. `myElectrons = events.Electron[events.Electron.pt > 0]`) or by trivial slicing (`myElectrons = events.Electron[:]`). Another feature of mutability is that once a collection is added to the event collection, it becomes immutable. That is why the third attempt is the one that adds both the electron extended variable and the extended electron collection to the event.
#
# Because of these quirks, it is typically worth wrapping the object selection into a function if the selection is reused within an analysis; it also helps with code readability:
# +
def SelectElectrons(electron):
electron = electron[electron.pt > 50]
electron['invpt'] = 1.0 / electron.pt
return electron
events['selectedElectron_f'] = SelectElectrons(events.Electron)
print(events.fields)
print(events.selectedElectron_f.fields)
# -
# Once the new object collection has been added to the event collection, it will persist through arbitrary levels of event selection:
#
# +
myevents = events[ak.count(events.selectedElectron_f.pt,axis=-1) > 0 ]
print(myevents.fields)
print(myevents.selectedElectron_f.fields)
myevents = events[ak.count(events.selectedElectron_f.pt,axis=-1) > 1 ]
print(myevents.fields)
print(myevents.selectedElectron_f.fields)
myevents = events[ak.count(events.selectedElectron_f.pt,axis=-1) > 2 ]
print(myevents.fields)
print(myevents.selectedElectron_f.fields)
# -
# ## Summary of basics
#
# So, to put this together into a single code block: suppose our analysis consisted of selecting events that have at least 2 electrons with $p_{T} > 50$ GeV and $|\eta| < 0.5$, and we want to calculate the average of all such electrons' inverse $p_{T}$ within the selected events. Our awkward array code would look something like:
#
# +
events = NanoEventsFactory.from_root( 'file:dummy_nanoevents.root',
'Events',
entry_stop=50,
schemaclass=DummySchema).events()
## Object selection
selectedElectron = events.Electron[ (events.Electron.pt > 50) &
(np.abs(events.Electron.eta)<0.5) ]
selectedElectron['invpt'] = 1/selectedElectron.pt
events['selectedElectron'] = selectedElectron
# Event selection
events = events[ak.count(events.selectedElectron.pt,axis=-1) >= 2]
# Calculating the total average
print(ak.sum(events.selectedElectron.invpt)/ak.count(events.selectedElectron.invpt))
# -
# In total, this is 4 statements (not counting the file-reading step) used to express this analysis. Compare that with the loop-based notation:
# +
events = NanoEventsFactory.from_root( 'file:dummy_nanoevents.root',
'Events',
entry_stop=50,
schemaclass=DummySchema).events()
count = 0
suminv = 0
for i in range(len(events)):
is_good = []
for j in range(len(events[i].Electron)):
if events[i].Electron[j].pt > 50 and np.abs(events[i].Electron[j].eta) < 0.5:
is_good.append(j)
if len(is_good) >= 2:
for j in is_good:
count = count +1
suminv += 1.0/ events[i].Electron[j].pt
print(suminv/count)
# -
# Notice the results are only different because the 32-bit to 64-bit float conversion happens at different places. For awkward arrays, it happens only after the sum has been performed; for the loop-based approach it happens every time the `+=` operator is called.
#
# For the loop-based analysis, notice that even for such a simple task, many lines of code are dedicated to just book-keeping: counting the electrons passing the criteria, maintaining a counter variable and a sum variable, etc., instead of actual analysis computation. The array-based notation for expressing the analysis is much cleaner, if rather unfamiliar to typical users.
#
# Of course, this isn't the end. Physics analyses are typically more involved than just basic selection and counting. In the next session, we will talk about how to perform more involved calculations with awkward arrays that involve multiple collections within an event collection.
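# The 32/64-bit accumulation difference mentioned above can be demonstrated in isolation
# with numpy (a standalone sketch, independent of the ntuple file):

```python
import numpy as np

x = np.full(1000, 0.1, dtype=np.float32)
loop_sum = 0.0
for v in x:
    loop_sum += float(v)        # each element widened to 64-bit before adding
array_sum = float(np.sum(x))    # summed in 32-bit, widened only at the end
print(loop_sum == array_sum)    # False: the rounding happens at different places
```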
|
notebooks/awkwardbasic.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.11 64-bit (''dlnlp'': conda)'
# name: python3
# ---
# +
import warnings
warnings.filterwarnings('ignore')
import os
import random
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import glob
from PIL import Image
import cv2
# %matplotlib inline
# tweaks for libraries
np.set_printoptions(precision=4, linewidth=1024, suppress=True)
plt.style.use('seaborn')
sns.set_style('darkgrid')
sns.set(context='notebook',font_scale=1.10)
# Pytorch imports
import torch
gpu_available = torch.cuda.is_available()
print('Using Pytorch version %s. GPU %s available' % (torch.__version__, "IS" if gpu_available else "IS **NOT**"))
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset
from torchvision import datasets, transforms
from torch import optim
from torchsummary import summary
# import the Pytorch Toolkit here....
import pytorch_toolkit as pytk
# to ensure that you get consistent results across various machines
# @see: https://discuss.pytorch.org/t/reproducibility-over-different-machines/63047
seed = 42
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = True
# -
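# A quick standalone check of the idea behind the seeding block above: fixing the seed
# makes random draws repeat exactly (shown here for the stdlib `random`; the same holds
# for numpy and torch):

```python
import random

random.seed(42)
a = [random.random() for _ in range(3)]
random.seed(42)
b = [random.random() for _ in range(3)]
print(a == b)  # True
```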
# download an image
# !wget https://www.dropbox.com/s/l98leemr7r5stnm/Hemanvi.jpeg -O ./images/Hemanvi.jpeg
img = cv2.imread('./images/Hemanvi.jpeg')
img2 = img[50:250, 40:240]
# show original image
img_color = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
plt.imshow(img_color);
plt.show()
img_gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
plt.imshow(img_gray, cmap='gray');
# plt.axis("off")
# plt.grid(False)
plt.show()
img_gray_small = cv2.resize(img_gray, (32,32))
plt.imshow(img_gray_small, cmap='gray');
data_dir = './data'
# download the training & test Fashion MNIST datasets into ./data folder
xforms = transforms.Compose([transforms.ToTensor(),])
train_dataset = datasets.FashionMNIST(data_dir, download=True, train=True, transform=xforms)
test_dataset = datasets.FashionMNIST(data_dir, download=True, train=False, transform=xforms)
print(f"Downloaded {len(train_dataset)} training records and {len(test_dataset)} testing records")
# +
# define our class
from torch.optim import SGD
net = nn.Sequential(
nn.Flatten(),
nn.Linear(28*28, 1000),
nn.ReLU(),
nn.Linear(1000, 10)
)
model = pytk.PytkModuleWrapper(net)
optimizer = SGD(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()
model.compile(loss=loss_fn, optimizer=optimizer, metrics=['accuracy'])
print(model.summary((1, 28, 28)))
# -
hist = model.fit_dataset(train_dataset, validation_split=0.20, epochs=10, batch_size=32)
pytk.show_plots(hist, metric='accuracy', plot_title='Model Performance')
del model
model = pytk.PytkModuleWrapper(net)
optimizer = SGD(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()
model.compile(loss=loss_fn, optimizer=optimizer, metrics=['accuracy'])
print(model.summary((1, 28, 28)))
hist = model.fit_dataset(train_dataset, validation_split=0.20, epochs=10, batch_size=10000)
pytk.show_plots(hist, metric='accuracy', plot_title='Model Performance')
|
image_basics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
os.environ["SPARK_HOME"] = "/Users/zouhairhajji/Documents/dev/spark-2.4.0-bin-hadoop2.7"
# +
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType, DoubleType
import operator
import re
spark = SparkSession.builder \
.master('local[*]') \
.appName('Matrix multiplication') \
.config("spark.driver.memory", "2g") \
.enableHiveSupport() \
.getOrCreate()
# -
file_regex = 'inputs*.demo'
# # Spark algorithm
flat_data = spark.sparkContext.wholeTextFiles(file_regex)
files = flat_data.map(lambda x: x[0].split('/')[-1]).collect()
# +
def filter_appropriate_words(x):
    # split on newlines, spaces and apostrophes; drop empties and words shorter than 3 chars
    words = re.split('\n| |\'', x)
    words = list(filter(None, words))
    words = list(filter(lambda w: len(w) >= 3, words))
    return words
formated_input = flat_data \
.map(lambda x: (x[0].split('/')[-1], filter_appropriate_words(x[1]) )) \
.flatMap(lambda x: [ (x[0], word) for word in x[1]] )
result = formated_input \
.cartesian(formated_input) \
.filter(lambda x: x[0][0] == x[1][0]) \
.filter(lambda x: x[0][1] != x[1][1]) \
.map(lambda x: ( (x[0][0], x[0][1], x[1][1]), 1) ) \
.reduceByKey(lambda x, y: x+y) \
.map(lambda x: (x[0], x[1]-1))
# -
result.collect()
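# The cartesian/reduceByKey pipeline above computes, per file, how often each word pair
# co-occurs. A plain-Python sketch of the same counting (toy documents, no Spark) looks
# like the following; note the RDD version additionally subtracts 1 from each count at the end:

```python
from collections import Counter
from itertools import product

docs = {"doc1": ["spark", "rdd", "spark"], "doc2": ["rdd", "map"]}
pairs = Counter()
for doc, words in docs.items():
    for w1, w2 in product(words, words):
        if w1 != w2:
            pairs[(doc, w1, w2)] += 1
print(pairs[("doc1", "spark", "rdd")])  # 2
```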
# +
flat_data \
.map(lambda x: (x[0].split('/')[-1], list(filter(None, re.split('\n| ', x[1]))) )) \
.flatMap(lambda x: [ (word, 1) for word in x[1]] ) \
.collect()
def filter_appropriate_words(x):
splitet_str = re.split('\n| |\'', x)
    splitet_str = list(filter(None, splitet_str))
splitet_str = list(filter(lambda x: False if len(x) < 3 else True, splitet_str))
return splitet_str
flat_data \
.map(lambda x: (x[0].split('/')[-1], filter_appropriate_words(x[1]) )) \
.flatMap(lambda x: [ ((x[0], word), 1) for word in x[1]] ) \
.reduceByKey(lambda x,y : x+y) \
.map(lambda x: (x[0], x[1] -1)) \
.collect()
#distinct_word = flat_data.map(lambda x: x[1]).flatMap(lambda x: x.split(' ')).distinct()
# +
from collections import OrderedDict
document = [
['je', 'suis', 'awili', 'et', "j'ai", '9', 'ans'],
['je', 'suis', 'Zouhair', 'HAJJI', 'et', "j'ai", '4', 'ans', 'et', "j'ai", 'mal', 'aux', 'dents', 'et', "j'ai", 'du', 'mal', 'à', 'faire', 'le', 'ramadan']]
names = [
'suis',
'HAJJI',
'4',
'aux',
'du',
'à',
'faire',
'le',
'ramadan',
'awili',
'9',
'je',
'Zouhair',
'et',
"j'ai",
'ans',
'mal',
'dents']
# +
occurrences = OrderedDict((name, OrderedDict((name, 0) for name in names)) for name in names)
# Find the co-occurrences:
for l in document:
for i in range(len(l)):
for item in l[:i] + l[i + 1:]:
occurrences[l[i]][item] += 1
# Print the matrix:
print(' ', ' '.join(occurrences.keys()))
for name, values in occurrences.items():
print(name, ' '.join(str(i) for i in values.values()))
# -
document =[['A', 'B'], ['C', 'B', 'K'],['A', 'B', 'C', 'D', 'Z']]
# +
import math
for a in 'ABCD':
for b in 'ABCD':
count = 0
for x in document:
if a != b:
if a in x and b in x:
count += 1
else:
n = x.count(a)
if n >= 2:
count += math.factorial(n)/math.factorial(n - 2)/2
print('{} x {} = {}'.format(a, b, count) )
# -
|
words_co-occurante_matrix/.ipynb_checkpoints/Untitled-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # The Lending Club Dataset
# ***Happy Money Data Science Assessment***
#
# ## Tasks and Objective:
# - **Task A: KPI Reporting**
# - Objective A.1. Determine monthly loan volume & monthly average loan size in USD.
# - Objective A.2. Determine the charge-off rates by loan grade.
# - Objective A.3. Identify any statistically significant difference in loan charge off rates.
# - Objective A.4. Determine if the interest rate charged by the Lending Club is consistent with the risk level?
#
#
# - **Task B: Loan Charge-Off Modeling**
# - Objective B.1. Perform the necessary data curation, specifically with respect to (a) missing and (b) erroneous values
# - Objective B.2. Build a *Charge-Off prediction model*, demonstrate important steps and discuss key features.
#
#
# - **Task C: Discussing Fundamentals**
# - Objective C. Briefly explain *Logistic Regression* to a (a) technical and (b) lay audience.
#
# ## Disclaimer:
# - All the data for this problem was provided by Happy Money [G-Drive](https://drive.google.com/drive/folders/1pf8P-omhEL7VZDI-LJMHP7h0aNuVRM71).
# - This work was done by *<NAME>* as an assignment prior to the full interview with **Happy Money**.
# importing necessary libraries
import numpy as np
import pandas as pd
import re
from datetime import datetime, date
from IPython import display
import dill
# Loading the Lending Club dataset
fname = 'LoanStats_2015_subset.csv'
with open(fname, 'r') as f:
df = pd.read_csv(f, low_memory=False)
# ## Exploratory Analysis
df.info(verbose=True)
df.iloc[:,:94].describe()
df.dtypes
df.shape
df.head()
df.issue_d.unique()
df.funded_amnt.sum()/df.funded_amnt_inv.sum(), df.funded_amnt.sum()/df.loan_amnt.sum()
df.loan_status.unique()
df.loan_status.isna().mean(), df.issue_d.isna().mean()
df.dropna(subset=['issue_d'], inplace=True)
# +
# this func converts date strings to datetime objects. This would allow dates to be sorted in a meaningful way.
conv_to_date = lambda dt_str: datetime.strptime(dt_str, r'%b-%Y')
# this func converts a 'num%' string to a fraction of type float
conv_to_frac = lambda percent_str: float(str(percent_str).split('%')[0])/100.0
# -
conv_to_date('Apr-2010')
conv_to_frac('3.12%')
df['issue_date'] = df.issue_d.apply(conv_to_date)
df['int_rate_frac'] = df.int_rate.apply(conv_to_frac)
#
# **Objective A.1.** Determine monthly loan volume & monthly average loan size in USD.
loan_by_month = pd.concat([df.groupby('issue_date')[['loan_amnt']].sum(), df.groupby('issue_date')[['loan_amnt']].mean()], axis=1).sort_index()
loan_by_month.columns = ['monthly_vol_usd', 'monthly_mean_usd']
loan_by_month
loan_by_month[['monthly_mean_usd']].plot()
loan_by_month[['monthly_vol_usd']].plot()
#
# **Objective A.2.** Determine the charge-off rates by loan grade.
df.grade.unique()
df.loan_status.unique()
is_default = lambda status: int(status in ['Charged Off', 'Default'])
is_current = lambda status: int(status in ['Fully Paid', 'Current'])
is_default('test')
df['default'] = df.loan_status.apply(is_default)
df['current'] = df.loan_status.apply(is_current)
# average default rate over all grades
mu_default, mu_current = df.default.mean(), df.current.mean()
mu_default, mu_current
df.int_rate_frac.mean()
default_rates_by_grade = df.groupby('grade')[['default', 'current', 'int_rate_frac']].mean()
default_rates_by_grade.columns = ['default_rate', 'current_rate', 'mean_int_rate']
default_rates_by_grade
default_rates_by_grade.default_rate.plot(kind='bar')
df.boxplot('int_rate_frac', by='grade')
# **Objective A.3.** Identify any statistically significant difference in loan charge off rates.
# +
# we will need to perform an ANOVA F-test to determine whether grade-specific default-rate differences are statistically significant.
## Note: Given the large sample size and the wide spread of default rates across loan grades, a significant difference is intuitively evident!
# -
import statsmodels.api as sm
from statsmodels.formula.api import ols
model = ols('default ~ grade', data=df).fit()
anova_table = sm.stats.anova_lm(model)
anova_table
# The above p-value suggests it is nearly impossible that the true grade-specific default rates are all the same.
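Beyond the pooled ANOVA, a pairwise comparison of two grades' charge-off rates can be sketched with a two-proportion z-test. The counts below are hypothetical, not taken from the dataset:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """z-statistic for H0: the two underlying default rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled default rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 60 defaults out of 1000 Grade-A loans vs 150 out of 1000 Grade-D loans
z = two_prop_ztest(60, 1000, 150, 1000)
```

A |z| above 1.96 rejects equality at the 5% level; with samples this large, even modest rate differences come out significant, consistent with the ANOVA result.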
#
# **Objective A.4.** Determine whether the interest rate charged by the Lending Club is consistent with the risk level.
# [Historical average fixed 30-yr mortgage rates](http://cdn.gobankingrates.com/wp-content/uploads/2014/12/gobankingrates_mortgage_rate2.jpg)
display.Image('https://cdn.americanprogress.org/content/uploads/2019/06/11063626/MillerEvaluatingOptions-WEBTABLES7.png')
# Let's define a simple KPI, the ratio of the interest rate charged to the risk of default (I2R), as:
#
# $ I2R = \frac{i - i_{base}}{i_{ref} - i_{base}} \times \frac{risk_{ref}}{risk_{default}} $
#
# where $ i $ denotes the interest rate, $ i_{ref} $ and $ risk_{ref} $ are the mean interest rate charged and the mean default rate for **Grade A** loans, respectively, and $ i_{base} $ is the minimum interest rate observed in the dataset.
# Grade A is the first row; .iloc avoids the deprecated integer-key fallback on a labelled index
risk_ref, i_ref, i_base = default_rates_by_grade.default_rate.iloc[0], default_rates_by_grade.mean_int_rate.iloc[0], df.int_rate_frac.min()
risk_ref, i_ref, i_base
default_rates_by_grade['I2R'] = ((default_rates_by_grade.mean_int_rate)/default_rates_by_grade.default_rate)*(risk_ref/i_ref)
default_rates_by_grade
default_rates_by_grade.I2R.plot(kind='bar')
# the modified I2R metric that incorporates a base rate when default risk is minimal
default_rates_by_grade['I2R'] = ((default_rates_by_grade.mean_int_rate-i_base)/default_rates_by_grade.default_rate)*(risk_ref/(i_ref-i_base))
default_rates_by_grade
default_rates_by_grade.I2R.plot(kind='bar')
# - **Task B: Modeling**
#   - Objective B.1. Perform the necessary data curation, specifically with respect to (a) missing and (b) erroneous values
# - Objective B.2. Build a Charge-Off prediction model, demonstrate important steps and discuss key features.
#
# Identifying features with largest NaN fractions
percent_nans_by_feature = df.isna().mean().sort_values(ascending=False)
percent_nans_by_feature[:50]
# Let's drop all features with more than 5% NaN values:
ml_features = percent_nans_by_feature[percent_nans_by_feature<0.05].index
ml_features.shape
df_ml = df[ml_features].copy()
df_ml.drop(columns=['current', 'issue_d', 'zip_code', 'int_rate', 'loan_status'], inplace=True)
df_ml.shape
df_ml.num_tl_120dpd_2m.fillna(0, inplace=True)
df_ml.revol_util = df_ml.revol_util.apply(conv_to_frac)
df_ml.isna().mean().sort_values(ascending=False)[:20]
df_ml['earliest_cr_line_date'] = df.earliest_cr_line.apply(conv_to_date)
df_ml['deltat_cr'] = (df_ml.issue_date - df_ml.earliest_cr_line_date).apply(lambda x: x.days)
df_ml.drop(columns=['earliest_cr_line', 'earliest_cr_line_date', 'issue_date'], inplace=True)
dtypes = df_ml.dtypes
numeric_columns = dtypes[dtypes.isin(['float64', 'int64'])].index
text_columns = df_ml.columns[~df_ml.columns.isin(numeric_columns)]
numeric_columns, text_columns
#imputing the remaining missing values using sklearn KNN imputer
from sklearn.impute import KNNImputer
# %%time
cols, inds = df_ml[numeric_columns].columns, df_ml[numeric_columns].index
arr_ml = KNNImputer().fit_transform(df_ml[numeric_columns])
df_ml[numeric_columns] = pd.DataFrame(arr_ml, index=inds, columns=cols)
df_ml.head()
df_ml.dropna(inplace=True)
df_ml.shape
with open ('df-ml-imputed.pkd', 'wb') as f:
dill.dump(df_ml, f)
df.initial_list_status.unique(), df.purpose.unique(), df.sub_grade.unique(), df.term.unique(), df.title.unique(), df.verification_status.unique(), df.application_type.unique(),
df.groupby('emp_length')[['default']].mean().plot(kind='bar')
# ### Building the charge_off model
# +
from sklearn.pipeline import Pipeline
from sklearn.compose import make_column_transformer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.linear_model import LogisticRegressionCV, RidgeClassifierCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score
from sklearn.model_selection import cross_val_score
# -
import dill
with open('df-ml-imputed.pkd', 'rb') as f:
df_ml = dill.load(f)
# Null-model accuracy:
1-df_ml.default.mean()
# only one 'educational' value in this column - resolution: drop row
drop_row_idx = df_ml.purpose[df_ml.purpose=='educational'].index
df_ml.drop(index=drop_row_idx, inplace=True)
X, y = df_ml.drop(columns=['default', 'title']), df_ml.default.copy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=81)
dtypes = X.dtypes
numeric_columns = dtypes[dtypes.isin(['float64', 'int64'])].index
text_columns = X.columns[~X.columns.isin(numeric_columns)]
numeric_columns, text_columns
# Prelim comparison of several classifiers for predicting loan default:
col_trans = make_column_transformer(
(OneHotEncoder(), text_columns),
(StandardScaler(), numeric_columns),
remainder='drop'
)
models_dict = {'logreg': LogisticRegressionCV(n_jobs=-2, multi_class='ovr', random_state=81),
'rccv': RidgeClassifierCV(),
'dtc': DecisionTreeClassifier(min_samples_split=10, random_state=81),
'rfc': RandomForestClassifier(min_samples_split=10, max_depth=5, max_features='sqrt', n_jobs=-2, random_state=81, warm_start=True),
'gbc': GradientBoostingClassifier(min_samples_split=10, max_depth=10, random_state=81, warm_start=True),
'knnc': KNeighborsClassifier(n_neighbors=50, weights='distance',n_jobs=-2),
'svc': LinearSVC(dual=False, C=0.5, class_weight='balanced', random_state=81, max_iter=100),
'mlpc': MLPClassifier(hidden_layer_sizes=(100, 50, 10), max_iter=100, random_state=81, warm_start=True, early_stopping=True)
}
# %%time
for model_id, model in models_dict.items():
pipe = Pipeline([
('col_tr', col_trans),
#('tr_svd', TruncatedSVD(n_components=15)),
('pca', PCA(n_components=0.95)),
(model_id, model)
])
mean_cv_score = cross_val_score(pipe, X_train, y_train, cv=5, scoring='roc_auc').mean()
pipe.fit(X_train, y_train)
acc = accuracy_score(y_test, pipe.predict(X_test))
print(f'model <<{model_id}>> mean roc_auc on training data: {mean_cv_score} - prediction accuracy on test data: {acc}')
# +
# %%time
pipe_lr = Pipeline([
('col_tr', col_trans),
#('tr_svd', TruncatedSVD(n_components=30)),
('pca', PCA(n_components=0.99)),
('lr', LogisticRegressionCV(n_jobs=-2, multi_class='ovr', random_state=81))
])
cross_val_score(pipe_lr, X_train, y_train, cv=5, scoring='roc_auc').mean()
# -
pipe_lr.fit(X_train, y_train)
accuracy_score(y_test, pipe_lr.predict(X_test)), precision_score(y_test, pipe_lr.predict(X_test)), recall_score(y_test, pipe_lr.predict(X_test)), confusion_matrix(y_test, pipe_lr.predict(X_test))
confusion_matrix(y_test, pipe_lr.predict(X_test)), confusion_matrix(y_train, pipe_lr.predict(X_train))
# +
# %%time
pipe_rccv = Pipeline([
('col_tr', col_trans),
('tr_svd', TruncatedSVD(n_components=15)),
('rccv', RidgeClassifierCV())
])
cross_val_score(pipe_rccv, X_train, y_train, cv=5, scoring='roc_auc').mean()
# +
# %%time
pipe_dtc = Pipeline([
('col_tr', col_trans),
('dtc', DecisionTreeClassifier(criterion='entropy', min_samples_split=5, random_state=81))
])
# cross_val_score(pipe_dtc, X_train, y_train, cv=5, scoring='roc_auc').mean()
pipe_dtc.fit(X_train, y_train)
y_pred = pipe_dtc.predict(X_test)
accuracy_score(y_test, y_pred), precision_score(y_test, y_pred), recall_score(y_test, y_pred), confusion_matrix(y_test, y_pred)
# +
# %%time
pipe_mlpc = Pipeline([
('col_tr', col_trans),
('dtc', MLPClassifier(hidden_layer_sizes=(1000, 500, 100, 10), max_iter=500, random_state=81, warm_start=True, early_stopping=True))
])
# cross_val_score(pipe_dtc, X_train, y_train, cv=5, scoring='roc_auc').mean()
pipe_mlpc.fit(X_train, y_train)
y_pred = pipe_mlpc.predict(X_test)
accuracy_score(y_test, y_pred), precision_score(y_test, y_pred), recall_score(y_test, y_pred), confusion_matrix(y_test, y_pred)
# -
feat_importances = pipe_dtc.steps[-1][1].feature_importances_
#finding the most important features
s_feat_imps = pd.Series(feat_importances)
top_feats_inds = s_feat_imps.sort_values(ascending=False).index
top_feats_inds
for idx, imp in s_feat_imps.sort_values(ascending=False).items():
print(f'index-{idx}, importance: {imp:.3f} - feature: {numeric_columns[idx]}')
# building a logistic regression model using only the top 30 most important numeric features
X_30 = X[numeric_columns[top_feats_inds[:30]]]
X_30_train, X_30_test, y_train, y_test = train_test_split(X_30, y, test_size=0.15, random_state=81)
X_30.head()
# +
# %%time
pipe_lr_30 = Pipeline([
('stdsc', StandardScaler()),
('lr', LogisticRegressionCV(n_jobs=-2, multi_class='ovr', random_state=81))
])
cross_val_score(pipe_lr_30, X_30_train, y_train, cv=5, scoring='roc_auc').mean()
# -
pipe_lr_30.fit(X_30_train, y_train)
y_pred = pipe_lr_30.predict(X_30_test)
accuracy_score(y_test, y_pred), precision_score(y_test, y_pred), recall_score(y_test, y_pred), confusion_matrix(y_test, y_pred)
|
AT-HappyMoney-assignment-2021.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.9 64-bit
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style="darkgrid")
dataframe = pd.read_csv('PODs.csv', delimiter=';', header=0, index_col=0)
dataframe=dataframe.astype(float)
# +
ax = plt.gca()
plt.fill_between(dataframe.index,0,dataframe['K8S MEMORY (MiBytes)'],alpha=0.3,label='K8S', color='tab:blue')
plt.fill_between(dataframe.index,dataframe['K8S MEMORY (MiBytes)'],dataframe['K8S MEMORY (MiBytes)']+dataframe['OPERATOR MEMORY (MiBytes)'],\
alpha=1,label='PlaceRAN Op.', color='tab:orange')
plt.fill_between(dataframe.index,dataframe['K8S MEMORY (MiBytes)']+dataframe['OPERATOR MEMORY (MiBytes)'],\
dataframe['K8S MEMORY (MiBytes)']+dataframe['OPERATOR MEMORY (MiBytes)']+dataframe['CU MEMORY (MiBytes)'],alpha=0.3,label='CU',color='tab:green')
plt.fill_between(dataframe.index,dataframe['K8S MEMORY (MiBytes)']+dataframe['OPERATOR MEMORY (MiBytes)']+dataframe['CU MEMORY (MiBytes)'],\
dataframe['K8S MEMORY (MiBytes)']+dataframe['OPERATOR MEMORY (MiBytes)']+dataframe['CU MEMORY (MiBytes)']+dataframe['DU MEMORY (MiBytes)'],alpha=0.3,label='DU',color='tab:red')
plt.fill_between(dataframe.index,dataframe['K8S MEMORY (MiBytes)']+dataframe['OPERATOR MEMORY (MiBytes)']+dataframe['CU MEMORY (MiBytes)']+dataframe['DU MEMORY (MiBytes)'],\
dataframe['K8S MEMORY (MiBytes)']+dataframe['OPERATOR MEMORY (MiBytes)']+dataframe['CU MEMORY (MiBytes)']+dataframe['DU MEMORY (MiBytes)']+dataframe['RU MEMORY (MiBytes)'],alpha=0.3,label='RU',color='tab:purple')
plt.xlabel("Time (s)")
plt.ylabel("MEMORY (MiBytes)")
plt.legend(loc='upper left')
plt.savefig('out/MEMORY_PODS.pdf', bbox_inches='tight')
plt.savefig('out/MEMORY_PODS.png', dpi=300, bbox_inches='tight')
# -
|
performance-analysis/article/mini-topo/pods-resources-OAI-vs-K8S-vs-Operator/MEM_3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Color Threshold, Green Screen
# ### Import resources
# +
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# %matplotlib inline
# -
# ### Read in and display the image
# +
# Read in the image
image = mpimg.imread('images/car_green_screen.jpg')
# Print out the image dimensions (height, width, and depth (color))
print('Image dimensions:', image.shape)
# -
# Display the image
plt.imshow(image)
# ### Define the color threshold
# +
image_copy = np.copy(image)
# Change color to RGB (from BGR)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
# Display the image copy
plt.imshow(image_copy)
# -
## TODO: Define our color selection boundaries in RGB values
lower_green = np.array([0,100,0])
upper_green = np.array([200,255,200])
# ### Create a mask
# +
# Define the masked area
mask = cv2.inRange(image_copy, lower_green, upper_green)
# Visualize the mask
plt.imshow(mask, cmap='gray')
# +
# Mask the image to let the car show through
masked_image = np.copy(image_copy)
masked_image[mask != 0] = [0, 0, 0]
# Display it!
plt.imshow(masked_image)
# -
# ### Mask and add a background image
# +
# Load in a background image, and convert it to RGB
background_image = mpimg.imread('images/sky.jpg')
background_image = cv2.cvtColor(background_image, cv2.COLOR_BGR2RGB)
## TODO: Crop it or resize the background to be the right size (450x660)
crop_background = background_image[0:450, 0:660]
## TODO: Mask the cropped background so that the car area is blocked
# Hint mask the opposite area of the previous image
crop_background[mask == 0] = [0, 0, 0]
## TODO: Display the background and make sure the mask was applied correctly
plt.imshow(crop_background)
# -
# ### Create a complete image
# +
## TODO: Add the two images together to create a complete image!
complete_image = masked_image + crop_background
plt.imshow(complete_image)
# -
|
Introduction to Computer_Vision/Image_representation&_Classification/Green Screen Car.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Activity 07: Filtering, Sorting, and Reshaping
# Following up on the last activity, we are asked to deliver some more complex operations.
# We will, therefore, continue to work with the same dataset, our `world_population.csv`.
# #### Loading the dataset
# importing the necessary dependencies
import pandas as pd
# loading the Dataset
dataset = pd.read_csv('./data/world_population.csv', index_col=0)
# looking at the data
dataset[:2]
# ---
# #### Filtering
# To get better insights into our dataset, we want to look only at values that fulfill certain conditions.
# Our client reaches out and asks us to provide lists of values that fulfill these conditions:
# - a new dataset that only contains 1961, 2000, and 2015 as columns
# - all countries that in 2000 had a greater population density than 500
# - a new dataset that only contains years 2000 and later
# - a new dataset that only contains countries that start with `A`
# - a new dataset that only contains countries that contain the word `land`
# filtering columns 1961, 2000, and 2015
# filtering countries that had a greater population density than 500 in 2000
# filtering for years 2000 and later
# filtering countries that start with A
# filtering countries that contain the word land
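The filtering tasks above can be sketched as follows; this is a minimal illustration using a small toy frame in place of `world_population.csv` (the values and index below are hypothetical):

```python
import pandas as pd

# Toy stand-in for world_population.csv: density by year, indexed by country name
dataset = pd.DataFrame(
    {'Country Code': ['ABW', 'AFG', 'ALA'],
     '1961': [307.97, 14.04, 1.01],
     '2000': [579.63, 31.32, 1.85],
     '2015': [577.31, 49.82, 1.91]},
    index=['Aruba', 'Afghanistan', 'Aland Islands'])

subset = dataset[['1961', '2000', '2015']]                 # only 1961, 2000, 2015
dense_2000 = dataset[dataset['2000'] > 500]                # density above 500 in 2000
later = dataset[[c for c in dataset.columns
                 if c.isdigit() and int(c) >= 2000]]       # year columns 2000 and later
starts_a = dataset[dataset.index.str.startswith('A')]      # country names starting with A
has_land = dataset[dataset.index.str.contains('land')]     # names containing 'land'
```

The same boolean-mask and column-selection patterns apply unchanged to the full dataset.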
# ---
# #### Sorting
# They also want to get some better insights into their data so they ask you to also deliver these datasets to understand the population growth better:
# - values sorted in ascending order by 1961
# - values sorted in ascending order by 2015
# - values sorted in descending order by 2015
# values sorted by column 1961
# values sorted by column 2015
# **Note:**
# Comparisons like this are very valuable to get a good understanding not only of your dataset but also the underlying data itself.
# For example, here we can see that the ranking of the lowest densely populated countries changed.
# values sorted by column 2015 in descending order
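The three sorts above can be sketched with `sort_values` (shown on a toy frame, since the real dataset isn't loaded here):

```python
import pandas as pd

# Toy frame standing in for the population dataset
dataset = pd.DataFrame({'1961': [3.0, 1.0, 2.0], '2015': [30.0, 10.0, 20.0]},
                       index=['C', 'A', 'B'])

asc_1961 = dataset.sort_values(by='1961')                    # ascending by 1961
asc_2015 = dataset.sort_values(by='2015')                    # ascending by 2015
desc_2015 = dataset.sort_values(by='2015', ascending=False)  # descending by 2015
```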
# ---
# #### Reshaping
# In order to create a visualization that focuses on 2015, they ask you to create a subset of your DataFrame that contains a single row holding all the values for the year 2015, mapped to the country codes as columns.
#
# They've sent you this scribble:
# ```
# Country Code ABW AFG AGO ...
# ----------------------------------------
# 2015 577 49 20 ...
# ```
#
# > They were lazy and didn't write the digits after the decimal point. Make sure to keep the original values.
# reshaping to 2015 as row and country codes as columns
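The reshaping described in the scribble can be sketched as a select-and-transpose (the toy values below are hypothetical):

```python
import pandas as pd

# Toy frame: country codes plus 2015 density values
dataset = pd.DataFrame({'Country Code': ['ABW', 'AFG', 'AGO'],
                        '2015': [577.31, 49.82, 20.05]},
                       index=['Aruba', 'Afghanistan', 'Angola'])

# One row labelled '2015', country codes as columns, original values preserved
reshaped = dataset.set_index('Country Code')[['2015']].T
```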
|
Lesson01/Activity06/activity06.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Solving classification problems with CatBoost
# In this tutorial we will use the Amazon Employee Access Challenge dataset from a [Kaggle](https://www.kaggle.com) competition for our experiments. The data can be downloaded [here](https://www.kaggle.com/c/amazon-employee-access-challenge/data).
# ## Libraries installation
# +
# #!pip install --user --upgrade catboost
# #!pip install --user --upgrade ipywidgets
# #!pip install shap
# #!pip install sklearn
# #!pip install --upgrade numpy
# #!jupyter nbextension enable --py widgetsnbextension
# -
import catboost
print(catboost.__version__)
# !python --version
# ## Reading the data
import pandas as pd
import os
import numpy as np
np.set_printoptions(precision=4)
import catboost
from catboost import *
from catboost import datasets
(train_df, test_df) = catboost.datasets.amazon()
train_df.head()
# ## Preparing your data
# Label values extraction
y = train_df.ACTION
X = train_df.drop('ACTION', axis=1)
# Categorical features declaration
cat_features = list(range(0, X.shape[1]))
print(cat_features)
# Looking at the label balance in the dataset
print('Labels: {}'.format(set(y)))
print('Zero count = {}, One count = {}'.format(len(y) - sum(y), sum(y)))
# Ways to create Pool class
# +
dataset_dir = './amazon'
if not os.path.exists(dataset_dir):
os.makedirs(dataset_dir)
# We will be able to work with files with/without header and
# with different separators.
train_df.to_csv(
os.path.join(dataset_dir, 'train.tsv'),
index=False, sep='\t', header=False
)
test_df.to_csv(
os.path.join(dataset_dir, 'test.tsv'),
index=False, sep='\t', header=False
)
train_df.to_csv(
os.path.join(dataset_dir, 'train.csv'),
index=False, sep=',', header=True
)
test_df.to_csv(
os.path.join(dataset_dir, 'test.csv'),
index=False, sep=',', header=True
)
# -
# !head amazon/train.csv
# +
from catboost.utils import create_cd
feature_names = dict()
for column, name in enumerate(train_df):
if column == 0:
continue
feature_names[column - 1] = name
create_cd(
label=0,
cat_features=list(range(1, train_df.columns.shape[0])),
feature_names=feature_names,
output_path=os.path.join(dataset_dir, 'train.cd')
)
# -
# !cat amazon/train.cd
# +
pool1 = Pool(data=X, label=y, cat_features=cat_features)
pool2 = Pool(
data=os.path.join(dataset_dir, 'train.csv'),
delimiter=',',
column_description=os.path.join(dataset_dir, 'train.cd'),
has_header=True
)
pool3 = Pool(data=X, cat_features=cat_features)
# The fastest way to create a Pool is from a numpy matrix.
# Use this approach if you want fast predictions
# or the fastest way to load the data in Python.
X_prepared = X.values.astype(str).astype(object)
# For the FeaturesData class, categorical features must have type str
pool4 = Pool(
data=FeaturesData(
cat_feature_data=X_prepared,
cat_feature_names=list(X)
),
label=y.values
)
print('Dataset shape')
print('dataset 1:' + str(pool1.shape) +
'\ndataset 2:' + str(pool2.shape) +
'\ndataset 3:' + str(pool3.shape) +
'\ndataset 4: ' + str(pool4.shape))
print('\n')
print('Column names')
print('dataset 1:')
print(pool1.get_feature_names())
print('\ndataset 2:')
print(pool2.get_feature_names())
print('\ndataset 3:')
print(pool3.get_feature_names())
print('\ndataset 4:')
print(pool4.get_feature_names())
# -
# ## Split your data into train and validation
from sklearn.model_selection import train_test_split
X_train, X_validation, y_train, y_validation = train_test_split(X, y, train_size=0.8, random_state=1234)
# ## Selecting the objective function
# Possible options for binary classification:
#
# `Logloss`
#
# `CrossEntropy` for probabilities in target
from catboost import CatBoostClassifier
model = CatBoostClassifier(
iterations=5,
learning_rate=0.1,
# loss_function='CrossEntropy'
)
model.fit(
X_train, y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=False
)
print('Model is fitted: ' + str(model.is_fitted()))
print('Model params:')
print(model.get_params())
# ## Stdout of the training
from catboost import CatBoostClassifier
model = CatBoostClassifier(
iterations=15,
# verbose=5,
)
model.fit(
X_train, y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
)
# ## Metrics calculation and graph plotting
from catboost import CatBoostClassifier
model = CatBoostClassifier(
iterations=50,
random_seed=63,
learning_rate=0.5,
custom_loss=['AUC', 'Accuracy']
)
model.fit(
X_train, y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=False,
plot=True
)
# ## Model comparison
# +
model1 = CatBoostClassifier(
learning_rate=0.7,
iterations=100,
random_seed=0,
train_dir='learing_rate_0.7'
)
model2 = CatBoostClassifier(
learning_rate=0.01,
iterations=100,
random_seed=0,
train_dir='learing_rate_0.01'
)
model1.fit(
X_train, y_train,
eval_set=(X_validation, y_validation),
cat_features=cat_features,
verbose=False
)
model2.fit(
X_train, y_train,
eval_set=(X_validation, y_validation),
cat_features=cat_features,
verbose=False
)
# -
from catboost import MetricVisualizer
MetricVisualizer(['learing_rate_0.01', 'learing_rate_0.7']).start()
# ## Best iteration
from catboost import CatBoostClassifier
model = CatBoostClassifier(
iterations=100,
random_seed=63,
learning_rate=0.5,
# use_best_model=False
)
model.fit(
X_train, y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=False,
plot=True
)
print('Tree count: ' + str(model.tree_count_))
# ## Cross-validation
# +
from catboost import cv
params = {}
params['loss_function'] = 'Logloss'
params['iterations'] = 80
params['custom_loss'] = 'AUC'
params['random_seed'] = 63
params['learning_rate'] = 0.5
cv_data = cv(
params = params,
pool = Pool(X, label=y, cat_features=cat_features),
fold_count=5,
shuffle=True,
partition_random_seed=0,
plot=True,
stratified=False,
verbose=False
)
# -
cv_data.head()
# +
best_value = np.min(cv_data['test-Logloss-mean'])
best_iter = np.argmin(cv_data['test-Logloss-mean'])
print('Best validation Logloss score, not stratified: {:.4f}±{:.4f} on step {}'.format(
best_value,
cv_data['test-Logloss-std'][best_iter],
best_iter)
)
# +
cv_data = cv(
params = params,
pool = Pool(X, label=y, cat_features=cat_features),
fold_count=5,
inverted=False,
shuffle=True,
partition_random_seed=0,
plot=True,
stratified=True,
verbose=False
)
best_value = np.min(cv_data['test-Logloss-mean'])
best_iter = np.argmin(cv_data['test-Logloss-mean'])
print('Best validation Logloss score, stratified: {:.4f}±{:.4f} on step {}'.format(
best_value,
cv_data['test-Logloss-std'][best_iter],
best_iter)
)
# -
# ## Overfitting detector
model_with_early_stop = CatBoostClassifier(
iterations=200,
random_seed=63,
learning_rate=0.5,
early_stopping_rounds=20
)
model_with_early_stop.fit(
X_train, y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=False,
plot=True
)
print(model_with_early_stop.tree_count_)
model_with_early_stop = CatBoostClassifier(
eval_metric='AUC',
iterations=200,
random_seed=63,
learning_rate=0.5,
early_stopping_rounds=20
)
model_with_early_stop.fit(
X_train, y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=False,
plot=True
)
print(model_with_early_stop.tree_count_)
# ## Select decision boundary
model = CatBoostClassifier(
random_seed=63,
iterations=200,
learning_rate=0.03,
)
model.fit(
X_train, y_train,
cat_features=cat_features,
verbose=False,
plot=True
)
# 
# +
from catboost.utils import get_roc_curve
import sklearn
from sklearn import metrics
eval_pool = Pool(X_validation, y_validation, cat_features=cat_features)
curve = get_roc_curve(model, eval_pool)
(fpr, tpr, thresholds) = curve
roc_auc = sklearn.metrics.auc(fpr, tpr)
# +
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc, alpha=0.5)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--', alpha=0.5)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.grid(True)
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('Receiver operating characteristic', fontsize=20)
plt.legend(loc="lower right", fontsize=16)
plt.show()
# +
from catboost.utils import get_fpr_curve
from catboost.utils import get_fnr_curve
(thresholds, fpr) = get_fpr_curve(curve=curve)
(thresholds, fnr) = get_fnr_curve(curve=curve)
# +
plt.figure(figsize=(16, 8))
lw = 2
plt.plot(thresholds, fpr, color='blue', lw=lw, label='FPR', alpha=0.5)
plt.plot(thresholds, fnr, color='green', lw=lw, label='FNR', alpha=0.5)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.grid(True)
plt.xlabel('Threshold', fontsize=16)
plt.ylabel('Error Rate', fontsize=16)
plt.title('FPR-FNR curves', fontsize=20)
plt.legend(loc="lower left", fontsize=16)
plt.show()
# +
from catboost.utils import select_threshold
print(select_threshold(model=model, data=eval_pool, FNR=0.01))
print(select_threshold(model=model, data=eval_pool, FPR=0.01))
# -
# ## Snapshotting
# # !rm 'catboost_info/snapshot.bkp'
from catboost import CatBoostClassifier
model = CatBoostClassifier(
iterations=100,
save_snapshot=True,
snapshot_file='snapshot.bkp',
snapshot_interval=1,
random_seed=43
)
model.fit(
X_train, y_train,
eval_set=(X_validation, y_validation),
cat_features=cat_features,
verbose=True
)
# ## Model predictions
print(model.predict_proba(data=X_validation))
print(model.predict(data=X_validation))
raw_pred = model.predict(
data=X_validation,
prediction_type='RawFormulaVal'
)
print(raw_pred)
# +
from numpy import exp
sigmoid = lambda x: 1 / (1 + exp(-x))
probabilities = sigmoid(raw_pred)
print(probabilities)
# +
X_prepared = X_validation.values.astype(str).astype(object)
# For the FeaturesData class, categorical features must have type str
fast_predictions = model.predict_proba(
data=FeaturesData(
cat_feature_data=X_prepared,
cat_feature_names=list(X_validation)
)
)
print(fast_predictions)
# -
# ## Staged prediction
predictions_gen = model.staged_predict_proba(
data=X_validation,
ntree_start=0,
ntree_end=5,
eval_period=1
)
try:
for iteration, predictions in enumerate(predictions_gen):
print('Iteration ' + str(iteration) + ', predictions:')
print(predictions)
except Exception:
pass
# ## Solving MultiClassification problem
from catboost import CatBoostClassifier
model = CatBoostClassifier(
iterations=50,
random_seed=43,
loss_function='MultiClass'
)
model.fit(
X_train, y_train,
cat_features=cat_features,
eval_set=(X_validation, y_validation),
verbose=False,
plot=True
)
# ## Metric evaluation on a new dataset
model = CatBoostClassifier(
random_seed=63,
iterations=200,
learning_rate=0.03,
)
model.fit(
X_train, y_train,
cat_features=cat_features,
verbose=50
)
metrics = model.eval_metrics(
data=pool1,
metrics=['Logloss','AUC'],
ntree_start=0,
ntree_end=0,
eval_period=1,
plot=True
)
print('AUC values:')
print(np.array(metrics['AUC']))
#
# ## Feature importances
model.get_feature_importance(prettified=True)
# ## Shap values
shap_values = model.get_feature_importance(pool1, fstr_type='ShapValues')
print(shap_values.shape)
# +
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(Pool(X, y, cat_features=cat_features))
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values[3,:], X.iloc[3,:])
# -
import shap
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values[91,:], X.iloc[91,:])
shap.summary_plot(shap_values, X)
X_small = X.iloc[0:200]
shap_small = shap_values[:200]
shap.force_plot(explainer.expected_value, shap_small, X_small)
# ## Feature evaluation
from catboost.eval.catboost_evaluation import *
learn_params = {'iterations': 20, # 2000
'learning_rate': 0.5, # we set big learning_rate,
# because we have small
# #iterations
'random_seed': 0,
'verbose': False,
'loss_function' : 'Logloss',
'boosting_type': 'Plain'}
evaluator = CatboostEvaluation('amazon/train.tsv',
fold_size=10000, # <= 50% of dataset
fold_count=20,
column_description='amazon/train.cd',
partition_random_seed=0,
#working_dir=...
)
result = evaluator.eval_features(learn_config=learn_params,
eval_metrics=['Logloss', 'Accuracy'],
features_to_eval=[6, 7, 8])
from catboost.eval.evaluation_result import *
logloss_result = result.get_metric_results('Logloss')
logloss_result.get_baseline_comparison(
ScoreConfig(ScoreType.Rel, overfit_iterations_info=False)
)
# ## Saving the model
my_best_model = CatBoostClassifier(iterations=10)
my_best_model.fit(
X_train, y_train,
eval_set=(X_validation, y_validation),
cat_features=cat_features,
verbose=False
)
my_best_model.save_model('catboost_model.bin')
my_best_model.save_model('catboost_model.json', format='json')
my_best_model.load_model('catboost_model.bin')
print(my_best_model.get_params())
print(my_best_model.random_seed_)
# ## Hyperparameter tuning
# ### Training speed
# +
from catboost import CatBoost
fast_model = CatBoostClassifier(
random_seed=63,
iterations=150,
learning_rate=0.01,
boosting_type='Plain',
bootstrap_type='Bernoulli',
subsample=0.5,
one_hot_max_size=20,
rsm=0.5,
leaf_estimation_iterations=5,
max_ctr_complexity=1)
fast_model.fit(
X_train, y_train,
cat_features=cat_features,
verbose=False,
plot=True
)
# -
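# A rough way to see why these settings are fast (my own back-of-envelope, not a CatBoost formula): `subsample` draws a fraction of objects per tree and `rsm` a fraction of features per split, so each tree touches roughly the product of the two fractions of the object-feature grid.

```python
subsample = 0.5  # fraction of objects sampled per tree (Bernoulli bootstrap)
rsm = 0.5        # fraction of features considered per split

# Expected share of the (objects x features) grid examined per tree.
expected_fraction = subsample * rsm
print('~{:.0%} of the data grid per tree'.format(expected_fraction))
```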
# ### Accuracy
tuned_model = CatBoostClassifier(
    random_seed=63,
    iterations=1000,
    learning_rate=0.03,
    l2_leaf_reg=3,
    bagging_temperature=1,
    random_strength=1,
    one_hot_max_size=2,
    leaf_estimation_method='Newton'
)
tuned_model.fit(
    X_train, y_train,
    cat_features=cat_features,
    verbose=False,
    eval_set=(X_validation, y_validation),
    plot=True
)
# ## Training the model after parameter tuning
best_model = CatBoostClassifier(
    random_seed=63,
    iterations=int(tuned_model.tree_count_ * 1.2),
)
best_model.fit(
    X, y,
    cat_features=cat_features,
    verbose=100
)
# ## Calculate predictions for the contest
X_test = test_df.drop('id', axis=1)
test_pool = Pool(data=X_test, cat_features=cat_features)
contest_predictions = best_model.predict_proba(test_pool)
print('Predictions:')
print(contest_predictions)
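# `predict_proba` returns one row per object with a probability per class; for this binary task the second column is the probability of class 1, which is what the submission needs. Illustrated on a made-up array of the same shape:

```python
import numpy as np

# Hypothetical predict_proba output for three objects.
proba = np.array([
    [0.10, 0.90],
    [0.85, 0.15],
    [0.40, 0.60],
])
positive_proba = proba[:, 1]  # column 1 = P(class == 1)

# Each row is a distribution over the two classes, so rows sum to 1.
assert np.allclose(proba.sum(axis=1), 1.0)
print(positive_proba)
```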
# ## Prepare the submission
with open('submit.csv', 'w') as f:
    f.write('Id,Action\n')
    for idx in range(len(contest_predictions)):
        # Column 1 of predict_proba holds the positive-class probability.
        f.write('{},{}\n'.format(test_df['id'][idx], contest_predictions[idx][1]))
# Submit your solution [here](https://www.kaggle.com/c/amazon-employee-access-challenge/submit).
# Good luck!!!
# ## Bonus
# ### Solving MultiClassification problem via Ranking
# For multiclass problems with many classes it is sometimes better to solve the classification problem via ranking.
# To do that we build a dataset with groups.
# Every group represents one object from the initial dataset,
# but each row in it carries one additional categorical feature: a candidate class value.
# The target is 1 if the candidate class equals the correct class, and 0 otherwise.
# Thus each group has exactly one positive label and the rest are zeros.
# You can put all possible class values into a group, or only hard negatives if there are too many classes.
# We'll show this approach on a binary classification problem.
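# The group construction described above can be illustrated in plain Python, without catboost; the two feature rows below are made up:

```python
# Two objects with made-up features, true classes 1 and 0.
objects = [['a', 5.0], ['b', 3.0]]
labels = [1, 0]
label_values = [0, 1]

rows, targets, group_ids = [], [], []
for group_id, (obj, true_label) in enumerate(zip(objects, labels)):
    for candidate in label_values:
        rows.append(obj + [candidate])  # candidate class as an extra feature
        targets.append(int(candidate == true_label))
        group_ids.append(group_id)

# Each group holds one row per candidate class and exactly one positive target.
print(rows)
print(targets)    # one 1 per group
print(group_ids)
```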
from copy import deepcopy
def build_multiclass_ranking_dataset(X, y, cat_features, label_values=[0, 1],
                                     start_group_id=0):
    # Expand every object into one row per candidate class: the candidate
    # class is appended as an extra categorical feature, and the target is 1
    # only for the row carrying the correct class.
    ranking_matrix = []
    ranking_labels = []
    group_ids = []
    X_train_matrix = X.values
    y_train_vector = y.values
    for obj_idx in range(X.shape[0]):
        obj = list(X_train_matrix[obj_idx])
        for label in label_values:
            obj_of_given_class = deepcopy(obj)
            obj_of_given_class.append(label)
            ranking_matrix.append(obj_of_given_class)
            ranking_labels.append(float(y_train_vector[obj_idx] == label))
            group_ids.append(start_group_id + obj_idx)
    final_cat_features = deepcopy(cat_features)
    final_cat_features.append(X.shape[1])  # the added feature is categorical
    return Pool(ranking_matrix, ranking_labels,
                cat_features=final_cat_features, group_id=group_ids)
# +
from catboost import CatBoost
params = {
    'iterations': 150,
    'learning_rate': 0.01,
    'l2_leaf_reg': 30,
    'random_seed': 0,
    'loss_function': 'QuerySoftMax',
}
groupwise_train_pool = build_multiclass_ranking_dataset(X_train, y_train, cat_features, [0,1])
groupwise_eval_pool = build_multiclass_ranking_dataset(X_validation, y_validation, cat_features, [0,1], X_train.shape[0])
model = CatBoost(params)
model.fit(
X=groupwise_train_pool,
verbose=False,
eval_set=groupwise_eval_pool,
plot=True
)
# -
# Making predictions in ranking mode
# +
import math

obj = list(X_validation.values[0])
ratings = []
for label in [0, 1]:
    obj_with_label = deepcopy(obj)
    obj_with_label.append(label)
    rating = model.predict([obj_with_label])[0]
    ratings.append(rating)
print('Raw values:', np.array(ratings))

def soft_max(values):
    # Compute the normalizer once instead of once per element.
    exps = [math.exp(val) for val in values]
    total = sum(exps)
    return [e / total for e in exps]

print('Probabilities', np.array(soft_max(ratings)))
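# One caveat worth knowing: `math.exp` overflows for arguments above roughly 709, so for large raw model scores a numerically stable softmax subtracts the maximum first. The result is identical because softmax is shift-invariant.

```python
import math

def stable_soft_max(values):
    shift = max(values)  # softmax(v) == softmax(v - c) for any constant c
    exps = [math.exp(v - shift) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Plain exp() would overflow on these scores; the stable version is fine.
probs = stable_soft_max([1000.0, 1001.0])
print(probs)
```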
# Source: catboost/tutorials/events/pydata_moscow_oct_13_2018.ipynb