# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## What's this TensorFlow business?
#
# You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
#
# For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, TensorFlow (or PyTorch, if you switch over to that notebook).
#
# #### What is it?
# TensorFlow is a system for executing computational graphs over Tensor objects, with native support for performing backpropagation for its Variables. In it, we work with Tensors, which are n-dimensional arrays analogous to the numpy ndarray.
#
# #### Why?
#
# * Our code will now run on GPUs! Much faster training. Writing your own modules to run on GPUs is beyond the scope of this class, unfortunately.
# * We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
# * We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
# * We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
# ## How will I learn TensorFlow?
#
# TensorFlow has many excellent tutorials available, including those from [Google themselves](https://www.tensorflow.org/get_started/get_started).
#
# Otherwise, this notebook will walk you through much of what you need to do to train models in TensorFlow. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
# ## Load Datasets
#
import tensorflow as tf
import numpy as np
import math
import timeit
import matplotlib.pyplot as plt
# %matplotlib inline
# +
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
    # Subsample the data
    mask = range(num_training, num_training + num_validation)
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = range(num_training)
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = range(num_test)
    X_test = X_test[mask]
    y_test = y_test[mask]
    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image
    return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# -
# ## Example Model
#
# ### Some useful utilities
#
# Remember that our image data is initially N x H x W x C, where:
# * N is the number of datapoints
# * H is the height of each image in pixels
# * W is the width of each image in pixels
# * C is the number of channels (usually 3: R, G, B)
#
# This is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the pixels are relative to each other. When we input image data into fully connected affine layers, however, we want each data example to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data.
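# As a small numpy sketch of that flattening (illustrative only, not assignment code):

```python
import numpy as np

# A fake batch of 4 RGB images in N x H x W x C layout, as described above.
x = np.zeros((4, 32, 32, 3))

# For an affine layer, flatten each example into one vector of
# H * W * C = 32 * 32 * 3 = 3072 features.
x_flat = x.reshape(x.shape[0], -1)
print(x_flat.shape)  # (4, 3072)
```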
# ### The example model itself
#
# The first step to training your own model is defining its architecture.
#
# Here's an example of a convolutional neural network defined in TensorFlow -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up.
#
# In this example, you see a 2D convolutional layer (`tf.nn.conv2d`), a ReLU activation, and a fully-connected (affine) layer built with `tf.matmul`. You also see the hinge loss function and the Adam optimizer being used.
#
# Make sure you understand why the parameters of the affine layer are 5408 and 10.
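# If it isn't obvious, the arithmetic behind those two numbers is worth working out once. A quick sketch, assuming a 7x7 kernel, stride 2, and VALID padding as in the code below:

```python
H = 32                      # input height (and width) of a CIFAR-10 image
K, S, F = 7, 2, 32          # kernel size, stride, number of filters

out = (H - K) // S + 1      # VALID padding output size -> 13
flat = out * out * F        # flattened features per image -> 5408
num_classes = 10            # CIFAR-10 classes -> output units
print(out, flat, num_classes)  # 13 5408 10
```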
#
# ### TensorFlow Details
# In TensorFlow, much like in our previous notebooks, we'll first initialize our variables, and then define our network model.
# +
# clear old variables
tf.reset_default_graph()
# setup input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
def simple_model(X, y):
    # define our weights (e.g. init_two_layer_convnet)
    # setup variables
    Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
    bconv1 = tf.get_variable("bconv1", shape=[32])
    W1 = tf.get_variable("W1", shape=[5408, 10])
    b1 = tf.get_variable("b1", shape=[10])
    # define our graph (e.g. two_layer_convnet)
    a1 = tf.nn.conv2d(X, Wconv1, strides=[1, 2, 2, 1], padding='VALID') + bconv1
    h1 = tf.nn.relu(a1)
    h1_flat = tf.reshape(h1, [-1, 5408])
    y_out = tf.matmul(h1_flat, W1) + b1
    return y_out
y_out = simple_model(X,y)
# define our loss
total_loss = tf.losses.hinge_loss(tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
# define our optimizer
optimizer = tf.train.AdamOptimizer(5e-4) # select optimizer and set learning rate
train_step = optimizer.minimize(mean_loss)
# -
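# To connect this back to your own implementation: the hinge loss above can be sketched in numpy. My understanding (worth checking against the docs) is that `tf.losses.hinge_loss` maps the {0, 1} one-hot labels to {-1, +1} and averages max(0, 1 - label * logit) over every entry:

```python
import numpy as np

def hinge_loss(one_hot_labels, logits):
    # Map {0, 1} labels to {-1, +1}, then average max(0, 1 - y * score).
    signed = 2.0 * one_hot_labels - 1.0
    return np.maximum(0.0, 1.0 - signed * logits).mean()

labels = np.array([[0., 1.], [1., 0.]])
logits = np.array([[-2.0, 3.0], [4.0, -1.0]])
print(hinge_loss(labels, logits))  # 0.0 -- every margin is satisfied
```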
# TensorFlow supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful).
#
# * Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
# * Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
# * BatchNorm: https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization
# ### Training the model on one epoch
# While we have defined a graph of operations above, in order to execute TensorFlow graphs by feeding them input data and computing the results, we first need to create a `tf.Session` object. A session encapsulates the control and state of the TensorFlow runtime. For more information, see the TensorFlow [Getting started](https://www.tensorflow.org/get_started/get_started) guide.
#
# Optionally we can also specify a device context such as `/cpu:0` or `/gpu:0`. For documentation on this behavior see [this TensorFlow guide](https://www.tensorflow.org/tutorials/using_gpu)
#
# You should see a validation loss of around 0.4 to 0.6 and an accuracy of 0.30 to 0.35 below
# +
def run_model(session, predict, loss_val, Xd, yd,
              epochs=1, batch_size=64, print_every=100,
              training=None, plot_losses=False):
    # have tensorflow compute accuracy
    correct_prediction = tf.equal(tf.argmax(predict, 1), y)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # shuffle indices
    train_indices = np.arange(Xd.shape[0])
    np.random.shuffle(train_indices)
    training_now = training is not None
    # set up the variables we want to compute (and optimize);
    # if we have a training function, add that to things we compute
    variables = [mean_loss, correct_prediction, accuracy]
    if training_now:
        variables[-1] = training
    # counter
    iter_cnt = 0
    for e in range(epochs):
        # keep track of losses and accuracy
        correct = 0
        losses = []
        # make sure we iterate over the dataset once
        for i in range(int(math.ceil(Xd.shape[0] / batch_size))):
            # generate indices for the batch
            start_idx = (i * batch_size) % Xd.shape[0]
            idx = train_indices[start_idx:start_idx + batch_size]
            # create a feed dictionary for this batch
            feed_dict = {X: Xd[idx, :],
                         y: yd[idx],
                         is_training: training_now}
            # get batch size
            actual_batch_size = yd[idx].shape[0]
            # have tensorflow compute loss and correct predictions
            # and (if given) perform a training step
            loss, corr, _ = session.run(variables, feed_dict=feed_dict)
            # aggregate performance stats
            losses.append(loss * actual_batch_size)
            correct += np.sum(corr)
            # print every now and then
            if training_now and (iter_cnt % print_every) == 0:
                print("Iteration {0}: with minibatch training loss = {1:.3g} and accuracy of {2:.2g}"
                      .format(iter_cnt, loss, np.sum(corr) / actual_batch_size))
            iter_cnt += 1
        total_correct = correct / Xd.shape[0]
        total_loss = np.sum(losses) / Xd.shape[0]
        print("Epoch {2}, Overall loss = {0:.3g} and accuracy of {1:.3g}"
              .format(total_loss, total_correct, e + 1))
        if plot_losses:
            plt.plot(losses)
            plt.grid(True)
            plt.title('Epoch {} Loss'.format(e + 1))
            plt.xlabel('minibatch number')
            plt.ylabel('minibatch loss')
            plt.show()
    return total_loss, total_correct
with tf.Session() as sess:
    with tf.device("/cpu:0"):  # "/cpu:0" or "/gpu:0"
        sess.run(tf.global_variables_initializer())
        print('Training')
        run_model(sess, y_out, mean_loss, X_train, y_train, 1, 64, 100, train_step, True)
        print('Validation')
        run_model(sess, y_out, mean_loss, X_val, y_val, 1, 64)
# -
# ## Training a specific model
#
# In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the TensorFlow documentation and configuring your own model.
#
# Using the code provided above as guidance, and using the following TensorFlow documentation, specify a model with the following architecture:
#
# * 7x7 Convolutional Layer with 32 filters and stride of 1
# * ReLU Activation Layer
# * Spatial Batch Normalization Layer (trainable parameters, with scale and centering)
# * 2x2 Max Pooling layer with a stride of 2
# * Affine layer with 1024 output units
# * ReLU Activation Layer
# * Affine layer from 1024 input units to 10 outputs
#
#
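# Before writing any TensorFlow, trace the shapes by hand. A quick sketch of the arithmetic, assuming VALID padding for both the convolution and the pool:

```python
H = 32                               # CIFAR-10 input height/width
conv_out = (H - 7) // 1 + 1          # 7x7 conv, stride 1, VALID -> 26
pool_out = (conv_out - 2) // 2 + 1   # 2x2 max pool, stride 2 -> 13
flat = pool_out * pool_out * 32      # 32 filters -> 5408 inputs to the affine layer
print(conv_out, pool_out, flat)      # 26 13 5408
```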
# +
# clear old variables
tf.reset_default_graph()
# define our input (e.g. the data that changes every batch)
# The first dim is None, and gets set automatically based on the batch size fed in
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
# define model
def complex_model(X, y, is_training):
    Wconv1 = tf.get_variable("Wconv1", shape=[7, 7, 3, 32])
    bconv1 = tf.get_variable("bconv1", shape=[32])
    W1 = tf.get_variable("W1", shape=[5408, 1024])
    b1 = tf.get_variable("b1", shape=[1024])
    W2 = tf.get_variable("W2", shape=[1024, 10])
    b2 = tf.get_variable("b2", shape=[10])
    gamma = tf.get_variable('gamma', shape=[32])
    beta = tf.get_variable('beta', shape=[32])
    c1 = tf.nn.conv2d(X, Wconv1, strides=[1, 1, 1, 1], padding='VALID') + bconv1
    r1 = tf.nn.relu(c1)
    mean, var = tf.nn.moments(r1, [0, 1, 2])
    bn = tf.nn.batch_normalization(r1, mean, var, beta, gamma, 1e-6)
    mp = tf.nn.max_pool(bn, [1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    mp_flat = tf.reshape(mp, [-1, 5408])
    a1 = tf.matmul(mp_flat, W1) + b1
    r2 = tf.nn.relu(a1)
    a2 = tf.matmul(r2, W2) + b2
    return a2
y_out = complex_model(X,y,is_training)
# -
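# The batch-normalization step above is worth unpacking: `tf.nn.moments(r1, [0, 1, 2])` computes per-channel statistics over the batch and both spatial axes, and `tf.nn.batch_normalization` applies the usual normalize-scale-shift. A numpy sketch of the same computation (training-time behavior only; the running averages used at test time are not modeled here):

```python
import numpy as np

def spatial_batchnorm(x, gamma, beta, eps=1e-6):
    # x is N x H x W x C; reduce over everything except the channel axis.
    mean = x.mean(axis=(0, 1, 2))
    var = x.var(axis=(0, 1, 2))
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.randn(8, 13, 13, 32)
out = spatial_batchnorm(x, gamma=np.ones(32), beta=np.zeros(32))
# After normalization each channel has approximately zero mean and unit variance.
print(out.mean(axis=(0, 1, 2)).round(6))
```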
# To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
# Now we're going to feed a random batch into the model
# and make sure the output is the right size
x = np.random.randn(64, 32, 32, 3)
with tf.Session() as sess:
    with tf.device("/cpu:0"):  # "/cpu:0" or "/gpu:0"
        tf.global_variables_initializer().run()
        ans = sess.run(y_out, feed_dict={X: x, is_training: True})
        # %timeit sess.run(y_out, feed_dict={X: x, is_training: True})
        print(ans.shape)
        print(np.array_equal(ans.shape, np.array([64, 10])))
# You should see the following from the run above
#
# `(64, 10)`
#
# `True`
# ### GPU!
#
# Now, we're going to try to run the model on the GPU device. The rest of the code stays unchanged, and all our variables and operations will be computed using accelerated code paths. However, if there is no GPU, we get a Python exception and have to rebuild our graph. On a dual-core CPU, you might see around 50-80 ms/batch running the above, while the Google Cloud GPUs (run below) should be around 2-5 ms/batch.
try:
    with tf.Session() as sess:
        with tf.device("/gpu:0") as dev:  # "/cpu:0" or "/gpu:0"
            tf.global_variables_initializer().run()
            ans = sess.run(y_out, feed_dict={X: x, is_training: True})
            # %timeit sess.run(y_out, feed_dict={X: x, is_training: True})
except tf.errors.InvalidArgumentError:
    print("no gpu found, please use Google Cloud if you want GPU acceleration")
# rebuild the graph
# trying to start a GPU throws an exception
# and also trashes the original graph
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = complex_model(X,y,is_training)
# You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use GPU devices. However, with TensorFlow, the default device is a GPU if one is available, and a CPU otherwise, so we can skip the device specification from now on.
# ### Train the model.
#
# Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the `complex_model` you created above).
#
# Make sure you understand how each TensorFlow function used below corresponds to what you implemented in your custom neural network implementation.
#
# First, set up an **RMSprop optimizer** (using a 1e-3 learning rate) and a **cross-entropy loss** function. See the TensorFlow documentation for more information
# * Layers, Activations, Loss functions : https://www.tensorflow.org/api_guides/python/nn
# * Optimizers: https://www.tensorflow.org/api_guides/python/train#Optimizers
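# As a refresher on what the cross-entropy loss computes: for a single example it reduces to the log-sum-exp of the logits minus the logit of the true class. A numpy sketch (not the TensorFlow op itself):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Numerically stable: shift by the max before exponentiating.
    shifted = logits - logits.max()
    return np.log(np.exp(shifted).sum()) - shifted[label]

logits = np.array([2.0, 1.0, 0.1])
print(softmax_cross_entropy(logits, label=0))
```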
# +
# Inputs
# y_out: is what your model computes
# y: is your TensorFlow variable with label information
# Outputs
# mean_loss: a TensorFlow variable (scalar) with numerical loss
# optimizer: a TensorFlow optimizer
# This should be ~3 lines of code!
total_loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf.one_hot(y,10),logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
optimizer = tf.train.RMSPropOptimizer(1e-3) # select optimizer and set learning rate
# -
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = optimizer.minimize(mean_loss)
# ### Train the model
# Below we'll create a session and train the model over one epoch. You should see a loss of 1.4 to 2.0 and an accuracy of 0.4 to 0.5. There will be some variation due to random seeds and differences in initialization
# +
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64,100,train_step)
# -
# ### Check the accuracy of the model.
#
# Let's see the train and test code in action -- feel free to use these methods when evaluating the models you develop below. You should see a loss of 1.3 to 2.0 with an accuracy of 0.45 to 0.55.
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# ## Train a _great_ model on CIFAR-10!
#
# Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves **>= 70% accuracy on the validation set** of CIFAR-10. You can use the `run_model` function from above.
# ### Things you should try:
# - **Filter size**: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
# - **Number of filters**: Above we used 32 filters. Do more or fewer do better?
# - **Pooling vs Strided Convolution**: Do you use max pooling or just strided convolutions?
# - **Batch normalization**: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
# - **Network architecture**: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
# - [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
# - [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
# - [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
# - **Use TensorFlow Scope**: Use TensorFlow scope and/or [tf.layers](https://www.tensorflow.org/api_docs/python/tf/layers) to make it easier to write deeper networks. See [this tutorial](https://www.tensorflow.org/tutorials/layers) for how to use `tf.layers`.
# - **Use Learning Rate Decay**: [As the notes point out](http://cs231n.github.io/neural-networks-3/#anneal), decaying the learning rate might help the model converge. Feel free to decay every epoch, when loss doesn't change over an entire epoch, or any other heuristic you find appropriate. See the [Tensorflow documentation](https://www.tensorflow.org/versions/master/api_guides/python/train#Decaying_the_learning_rate) for learning rate decay.
# - **Global Average Pooling**: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image (1, 1, Filter#), which is then reshaped into a (Filter#) vector. This is used in [Google's Inception Network](https://arxiv.org/abs/1512.00567) (See Table 1 for their architecture).
# - **Regularization**: Add l2 weight regularization, or perhaps use [Dropout as in the TensorFlow MNIST tutorial](https://www.tensorflow.org/get_started/mnist/pros)
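# As one concrete illustration from the list above, global average pooling is just a mean over the two spatial axes (a numpy sketch; the 7x7x128 feature-map size is an arbitrary example):

```python
import numpy as np

# Final conv feature map: N x 7 x 7 x C, as suggested above.
features = np.random.randn(64, 7, 7, 128)

# Global average pooling: average out H and W, leaving one value per filter.
pooled = features.mean(axis=(1, 2))
print(pooled.shape)  # (64, 128)
```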
#
# ### Tips for training
# For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
#
# - If the parameters are working well, you should see improvement within a few hundred iterations
# - Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
# - Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
# - You should use the validation set for hyperparameter search, and we'll save the test set for evaluating your architecture on the best parameters as selected by the validation set.
#
# ### Going above and beyond
# If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these; however they would be good things to try for extra credit.
#
# - Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
# - Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
# - Model ensembles
# - Data augmentation
# - New Architectures
# - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.
# - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
# - [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)
#
# If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
#
# ### What we expect
# At the very least, you should be able to train a ConvNet that gets **>= 70% accuracy on the validation set**. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
#
# You should use the space below to experiment and train your network. The final cell in this notebook should contain the training and validation set accuracies for your final trained network.
#
# Have fun and happy training!
# +
# Feel free to play with this cell
def my_model(X, y, is_training):
    conv1 = tf.layers.conv2d(inputs=X, filters=128, kernel_size=(3, 3), padding="same", activation=tf.nn.relu)
    conv2 = tf.layers.conv2d(inputs=conv1, filters=128, kernel_size=(3, 3), padding="same", activation=tf.nn.relu)
    s_batch_normal1 = tf.layers.batch_normalization(conv2, training=is_training)
    pooling1 = tf.layers.max_pooling2d(inputs=s_batch_normal1, pool_size=(2, 2), strides=2)
    conv3 = tf.layers.conv2d(inputs=pooling1, filters=128, kernel_size=(3, 3), padding="same", activation=tf.nn.relu)
    conv4 = tf.layers.conv2d(inputs=conv3, filters=128, kernel_size=(3, 3), padding="same", activation=tf.nn.relu)
    s_batch_normal2 = tf.layers.batch_normalization(conv4, training=is_training)
    pooling2 = tf.layers.max_pooling2d(inputs=s_batch_normal2, pool_size=(2, 2), strides=2)
    # two 2x2 pools: 32 -> 16 -> 8, so the feature map is 8 x 8 x 128
    reshape_pooling2 = tf.reshape(pooling2, [-1, 8 * 8 * 128])
    dense_1 = tf.layers.dense(inputs=reshape_pooling2, units=512, activation=tf.nn.relu)
    dp1 = tf.layers.dropout(dense_1, training=is_training)
    dense_2 = tf.layers.dense(inputs=dp1, units=512, activation=tf.nn.relu)
    dp2 = tf.layers.dropout(dense_2, training=is_training)
    return tf.layers.dense(inputs=dp2, units=10)
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = my_model(X, y, is_training)
total_loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf.one_hot(y, 10), logits=y_out)
mean_loss = tf.reduce_mean(total_loss)
optimizer = tf.train.AdamOptimizer(5e-4)
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = optimizer.minimize(mean_loss)
# +
# Feel free to play with this cell
# This default code creates a session,
# trains your model for 15 epochs,
# then prints the validation set accuracy
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,15,64,100,train_step,True)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# -
# Test your model here, and make sure
# the output of this cell is the accuracy
# of your best model on the training and val sets
# We're looking for >= 70% accuracy on Validation
print('Training')
run_model(sess,y_out,mean_loss,X_train,y_train,1,64)
print('Validation')
run_model(sess,y_out,mean_loss,X_val,y_val,1,64)
# ### Describe what you did here
# In this cell you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network
# The network consists of two blocks, each containing two convolutional layers followed by spatial batch normalization and max pooling. I tried training for 5, 10, and 15 epochs; the best result came with 15 epochs. **The current result on the validation set is 81.6%.** I trained it on 32 CPUs, and I believe that training for more epochs would give an even better result.
# ### Test Set - Do this only once
# Now that we've gotten a result that we're happy with, we test our final model on the test set. This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
print('Test')
run_model(sess,y_out,mean_loss,X_test,y_test,1,64)
# ## Going further with TensorFlow
#
# The next assignment will make heavy use of TensorFlow. You might also find it useful for your projects.
#
#
#
# # Extra Bonus: CIFAR10 Classification by LSTM with PyTorch
# For the bonus, I implemented a simple LSTM to do the classification on the same CIFAR-10 dataset.
#
# The code is under the directory of assignment2, a python code file named `LSTM.py`
#
# I've also attached the code at the very end of this python notebook.
# ## Some Notes on my implementation:
# The code is implemented with PyTorch 1.6.0.
#
# Since I used a GPU to train the LSTM, I directly used the dataset from `torchvision.datasets`. It is identical to the dataset we used above, as both are downloaded from the same URL and both original datasets have shape (50000, 32, 32, 3).
#
# The difference is that the torchvision dataset already stores the labels as tensors, so I can directly use the DataLoader in `torch.utils` to read them.
#
# I trained the LSTM four times, for 10, 20, 50, and 100 epochs. Each iteration uses a batch size of 100, so each epoch goes over 500 batches.
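# The key preprocessing step is reshaping each image into a 32-step sequence of 96-dimensional vectors, which is what the `data.view(-1, 32, 32 * 3)` call in the code below does. A numpy sketch of the shape change:

```python
import numpy as np

batch = np.random.randn(100, 3, 32, 32)   # a CIFAR-10 batch as loaded by torchvision
sequence = batch.reshape(-1, 32, 32 * 3)  # 32 time steps, 96 features each
print(sequence.shape)  # (100, 32, 96)
```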
#
# The result is below:
#
# **The 10-epoch LSTM has an accuracy of 54.7%.**
#
# 
#
# **The 20-epoch LSTM has an accuracy of 54.8%.**
#
# 
#
# **The 50-epoch LSTM has an accuracy of 54.2%.**
#
# 
#
# **The 100-epoch LSTM has an accuracy of 49.7%.**
#
# 
#
# **By slightly changing the learning rate, we achieve a higher accuracy of 55.32% with 10 epochs.**
#
# 
#
# **But a slightly lower accuracy with 20 epochs.**
#
# 
#
# **We have tested for the occurrence of overfitting. At the same time, this experiment indicates that a simple LSTM could perform better with sufficient parameter tuning, perhaps approaching the performance of a basic CNN; I believe the two might have similar performance in their most basic forms.**
#
# **Below is the code for LSTM**
# +
import argparse
import torch
import torchvision
from torch import nn
from torch import optim
from torch.autograd import Variable
from torchvision import transforms
parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', type=int, default=100, help="The batch size of the LSTM")
parser.add_argument('--learning_rate', type=float, default=0.01, help="Learning rate of the LSTM")
parser.add_argument('--epoch', type=int, default=20, help="number of epoch to be trained")
args = parser.parse_args()
BATCH_SIZE = args.batch_size
LR = args.learning_rate
EPOCH = args.epoch
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
trainsets = torchvision.datasets.CIFAR10(root='./cifar10/', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainsets, batch_size=BATCH_SIZE, shuffle=True)
testsets = torchvision.datasets.CIFAR10(root='./cifar10', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testsets, batch_size=BATCH_SIZE, shuffle=False)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.LSTM = nn.LSTM(32 * 3, 128, batch_first=True, num_layers=3)
        self.line = nn.Linear(128, 128)
        self.output = nn.Linear(128, 10)

    def forward(self, x):
        out, (h_n, c_n) = self.LSTM(x)
        out = self.line(out[:, -1, :])
        return self.output(out)
if __name__ == '__main__':
    net = Net()
    print('parameters:', sum(param.numel() for param in net.parameters()))
    net.cuda()
    Loss = nn.CrossEntropyLoss()
    Opt = optim.Adam(net.parameters(), lr=LR)
    for epoch in range(EPOCH):
        for step, (data, target) in enumerate(trainloader):
            data = Variable(data)
            target = Variable(target)
            data = data.view(-1, 32, 32 * 3)
            data = data.cuda()
            target = target.cuda()
            out = net(data)
            loss = Loss(out, target)
            Opt.zero_grad()
            loss.backward()
            Opt.step()
            total_accu, total_num = 0, 0
            if step % 50 == 0:
                for d in testloader:
                    test_x, test_y = d
                    test_x, test_y = test_x.cuda().data, test_y.cuda().data
                    test_x = test_x.view(-1, 32, 32 * 3)
                    test_out = net(test_x)
                    pred_y = torch.max(test_out, 1)[1].cuda().data
                    accuracy = torch.sum(pred_y == test_y).type(torch.FloatTensor) / test_y.size(0)
                    total_accu += accuracy
                    total_num += 1
                print("Epoch: {} | number of batches: {} | train loss: {} | test accuracy: {}".format(
                    epoch, step, loss.data.cpu().numpy(), total_accu / total_num))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Kubernetes
# **Learning Objectives**
# * Create GKE cluster from command line
# * Deploy an application to your cluster
# * Cleanup, delete the cluster
# ## Overview
# Kubernetes is an open source project (available on [kubernetes.io](https://kubernetes.io)) which can run on many different environments, from laptops to high-availability multi-node clusters; from public clouds to on-premise deployments; from virtual machines to bare metal.
#
# The goal of this lab is to provide a short introduction to Kubernetes (k8s) and some basic functionality.
# ## Create a GKE cluster
#
# A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster.
#
# **Note**: Cluster names must start with a letter and end with an alphanumeric, and cannot be longer than 40 characters.
#
# We'll call our cluster `asl-cluster`.
# +
import os
CLUSTER_NAME = "asl-cluster"
ZONE = "us-central1-a"
os.environ["CLUSTER_NAME"] = CLUSTER_NAME
os.environ["ZONE"] = ZONE
# -
# We'll set our default compute zone to `us-central1-a` and use `gcloud container clusters create ...` to create the GKE cluster. Let's first look at all the clusters we currently have.
# !gcloud container clusters list
# **Exercise**
#
# Use `gcloud container clusters create` to create a new cluster using the `CLUSTER_NAME` we set above. This takes a few minutes...
# + language="bash"
# gcloud container clusters create $CLUSTER_NAME --zone $ZONE
# -
# Now when we list our clusters again, we should see the cluster we created.
# !gcloud container clusters list
# ## Get authentication credentials and deploy an application
#
# After creating your cluster, you need authentication credentials to interact with it. Use `get-credentials` to authenticate the cluster.
#
# **Exercise**
#
# Use `gcloud container clusters get-credentials` to authenticate the cluster you created.
# + language="bash"
# gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE
# -
# You can now deploy a containerized application to the cluster. For this lab, you'll run `hello-app` in your cluster.
#
# GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) object for deploying stateless applications like web servers. [Service](https://kubernetes.io/docs/concepts/services-networking/service/) objects define rules and load balancing for accessing your application from the internet.
# **Exercise**
#
# Use the `kubectl create` command to create a new Deployment `hello-server` from the `hello-app` container image. The `--image` flag specifies the container image to deploy. The `kubectl create` command pulls the example image from a Container Registry bucket. Here, use `gcr.io/google-samples/hello-app:1.0` to indicate the specific image version to pull. If a version is not specified, the latest version is used.
# + language="bash"
# kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
# -
# This Kubernetes command creates a Deployment object that represents `hello-server`. To create a Kubernetes Service, which is a Kubernetes resource that lets you expose your application to external traffic, run the `kubectl expose` command.
#
# **Exercise**
#
# Use the `kubectl expose` command to expose the application. In this command,
# * `--port` specifies the port that the container exposes.
# * `type="LoadBalancer"` creates a Compute Engine load balancer for your container.
# + language="bash"
# kubectl expose deployment hello-server --type=LoadBalancer --port 8080
# -
# Use the `kubectl get service` command to inspect the `hello-server` Service.
#
# **Note**: It might take a minute for an external IP address to be generated. Run the command again if the `EXTERNAL-IP` column for `hello-server` still shows `pending`.
# !kubectl get service
# You can now view the application from your web browser. Open a new tab and enter the following address, replacing `[EXTERNAL_IP]` with the `EXTERNAL-IP` value for `hello-server`:
#
# ```bash
# http://[EXTERNAL_IP]:8080
# ```
#
# You should see a simple page which displays
#
# ```bash
# Hello, world!
# Version: 1.0.0
# Hostname: hello-server-5bfd595c65-7jqkn
# ```
# ## Cleanup
#
# Delete the cluster using `gcloud` to free up those resources. Use the `--quiet` flag if you are executing this in a notebook. Deleting the cluster can take a few minutes.
# **Exercise**
#
# Delete the cluster. Use the `--quiet` flag since we're executing in a notebook.
# + language="bash"
# gcloud container clusters delete ${CLUSTER_NAME} --zone $ZONE --quiet
# -
# Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
| notebooks/docker_and_kubernetes/labs/2_intro_k8s.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Multilayer Perceptrons
# :label:`sec_mlp`
#
# In :numref:`chap_linear`, we introduced
# softmax regression (:numref:`sec_softmax`),
# implementing the algorithm from scratch
# (:numref:`sec_softmax_scratch`) and using high-level APIs
# (:numref:`sec_softmax_concise`),
# and training classifiers to recognize
# 10 categories of clothing from low-resolution images.
# Along the way, we learned how to wrangle data,
# coerce our outputs into a valid probability distribution,
# apply an appropriate loss function,
# and minimize it with respect to our model's parameters.
# Now that we have mastered these mechanics
# in the context of simple linear models,
# we can launch our exploration of deep neural networks,
# the comparatively rich class of models
# with which this book is primarily concerned.
#
#
# ## Hidden Layers
#
# We have described the affine transformation in
# :numref:`subsec_linear_model`,
# which is a linear transformation plus an additive bias.
# To begin, recall the model architecture
# corresponding to our softmax regression example,
# illustrated in :numref:`fig_softmaxreg`.
# This model mapped our inputs directly to our outputs
# via a single affine transformation,
# followed by a softmax operation.
# If our labels truly were related
# to our input data by an affine transformation,
# then this approach would be sufficient.
# But linearity in affine transformations is a *strong* assumption.
#
# ### Linear Models May Go Wrong
#
# For example, linearity implies the *weaker*
# assumption of *monotonicity*:
# that any increase in our feature must
# either always cause an increase in our model's output
# (if the corresponding weight is positive),
# or always cause a decrease in our model's output
# (if the corresponding weight is negative).
# Sometimes that makes sense.
# For example, if we were trying to predict
# whether an individual will repay a loan,
# we might reasonably imagine that holding all else equal,
# an applicant with a higher income
# would always be more likely to repay
# than one with a lower income.
# While monotonic, this relationship likely
# is not linearly associated with the probability of
# repayment. An increase in income from 0 to 50 thousand
# likely corresponds to a bigger increase
# in likelihood of repayment
# than an increase from 1 million to 1.05 million.
# One way to handle this might be to preprocess
# our data such that linearity becomes more plausible,
# say, by using the logarithm of income as our feature.
#
#
# Note that we can easily come up with examples
# that violate monotonicity.
# Say for example that we want to predict probability
# of death based on body temperature.
# For individuals with a body temperature
# above 37°C (98.6°F),
# higher temperatures indicate greater risk.
# However, for individuals with body temperatures
# below 37°C, higher temperatures indicate lower risk!
# In this case too, we might resolve the problem
# with some clever preprocessing.
# Namely, we might use the distance from 37°C as our feature.
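The temperature example can be sketched numerically. Below is a toy NumPy illustration (the piecewise-linear "true" risk curve is an assumption made purely for this demo): a straight-line fit on the raw temperature cannot capture the V shape, while a linear fit on the transformed feature |T - 37| is exact by construction.

```python
import numpy as np

# Synthetic body temperatures and a V-shaped "risk": distance from 37 C
# (the piecewise-linear "true" risk is an assumption made for this demo)
temps = np.linspace(35.0, 40.0, 101)
risk = np.abs(temps - 37.0)

# Linear fit on the raw temperature: a straight line cannot bend at 37 C
slope_raw, intercept_raw = np.polyfit(temps, risk, 1)
err_raw = np.mean((slope_raw * temps + intercept_raw - risk) ** 2)

# Linear fit on the transformed feature |T - 37|: exact by construction
feat = np.abs(temps - 37.0)
slope_t, intercept_t = np.polyfit(feat, risk, 1)
err_t = np.mean((slope_t * feat + intercept_t - risk) ** 2)

print(err_raw, err_t)  # the transformed feature fits far better
```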
#
#
# But what about classifying images of cats and dogs?
# Should increasing the intensity
# of the pixel at location (13, 17)
# always increase (or always decrease)
# the likelihood that the image depicts a dog?
# Reliance on a linear model corresponds to the implicit
# assumption that the only requirement
# for differentiating cats vs. dogs is to assess
# the brightness of individual pixels.
# This approach is doomed to fail in a world
# where inverting an image preserves the category.
#
#
# And yet despite the apparent absurdity of linearity here,
# as compared with our previous examples,
# it is less obvious that we could address the problem
# with a simple preprocessing fix.
# That is because the significance of any pixel
# depends in complex ways on its context
# (the values of the surrounding pixels).
# While there might exist a representation of our data
# that would take into account
# the relevant interactions among our features,
# on top of which a linear model would be suitable,
# we simply do not know how to calculate it by hand.
# With deep neural networks, we use observational data
# to jointly learn both a representation via hidden layers
# and a linear predictor that acts upon that representation.
#
#
# ### Incorporating Hidden Layers
#
# We can overcome these limitations of linear models
# and handle a more general class of functions
# by incorporating one or more hidden layers.
# The easiest way to do this is to stack
# many fully-connected layers on top of each other.
# Each layer feeds into the layer above it,
# until we generate outputs.
# We can think of the first $L-1$ layers
# as our representation and the final layer
# as our linear predictor.
# This architecture is commonly called
# a *multilayer perceptron*,
# often abbreviated as *MLP*.
# Below, we depict an MLP diagrammatically (:numref:`fig_mlp`).
#
# 
# :label:`fig_mlp`
#
# This MLP has 4 inputs, 3 outputs,
# and its hidden layer contains 5 hidden units.
# Since the input layer does not involve any calculations,
# producing outputs with this network
# requires implementing the computations
# for both the hidden and output layers;
# thus, the number of layers in this MLP is 2.
# Note that these layers are both fully connected.
# Every input influences every neuron in the hidden layer,
# and each of these in turn influences
# every neuron in the output layer.
# However, as suggested by :numref:`subsec_parameterization-cost-fc-layers`,
# the parameterization cost of MLPs
# with fully-connected layers
# can be prohibitively high,
# which may motivate a
# tradeoff between parameter saving and model effectiveness, even without changing the input or output size :cite:`Zhang.Tay.Zhang.ea.2021`.
#
#
#
# ### From Linear to Nonlinear
#
#
# As before, by the matrix $\mathbf{X} \in \mathbb{R}^{n \times d}$,
# we denote a minibatch of $n$ examples where each example has $d$ inputs (features).
# For a one-hidden-layer MLP whose hidden layer has $h$ hidden units,
# denote by $\mathbf{H} \in \mathbb{R}^{n \times h}$
# the outputs of the hidden layer, which are
# *hidden representations*.
# In mathematics or code, $\mathbf{H}$ is also known as a *hidden-layer variable* or a *hidden variable*.
# Since the hidden and output layers are both fully connected,
# we have hidden-layer weights $\mathbf{W}^{(1)} \in \mathbb{R}^{d \times h}$ and biases $\mathbf{b}^{(1)} \in \mathbb{R}^{1 \times h}$
# and output-layer weights $\mathbf{W}^{(2)} \in \mathbb{R}^{h \times q}$ and biases $\mathbf{b}^{(2)} \in \mathbb{R}^{1 \times q}$.
# Formally, we calculate the outputs $\mathbf{O} \in \mathbb{R}^{n \times q}$
# of the one-hidden-layer MLP as follows:
#
# $$
# \begin{aligned}
# \mathbf{H} & = \mathbf{X} \mathbf{W}^{(1)} + \mathbf{b}^{(1)}, \\
# \mathbf{O} & = \mathbf{H}\mathbf{W}^{(2)} + \mathbf{b}^{(2)}.
# \end{aligned}
# $$
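These two equations map directly onto array operations. Here is a minimal NumPy sketch of the forward pass; the sizes (n=4 examples, d=3 inputs, h=5 hidden units, q=2 outputs) are arbitrary illustrative choices:

```python
import numpy as np

# Arbitrary illustrative sizes: n examples, d inputs, h hidden units, q outputs
rng = np.random.default_rng(0)
n, d, h, q = 4, 3, 5, 2

X = rng.standard_normal((n, d))
W1, b1 = rng.standard_normal((d, h)), rng.standard_normal((1, h))
W2, b2 = rng.standard_normal((h, q)), rng.standard_normal((1, q))

H = X @ W1 + b1  # hidden representations, shape (n, h)
O = H @ W2 + b2  # outputs, shape (n, q)
print(H.shape, O.shape)
```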
#
#
#
# Note that after adding the hidden layer,
# our model now requires us to track and update
# additional sets of parameters.
# So what have we gained in exchange?
# You might be surprised to find out
# that---in the model defined above---*we
# gain nothing for our troubles*!
# The reason is plain.
# The hidden units above are given by
# an affine function of the inputs,
# and the outputs (pre-softmax) are just
# an affine function of the hidden units.
# An affine function of an affine function
# is itself an affine function.
# Moreover, our linear model was already
# capable of representing any affine function.
#
#
# We can view the equivalence formally
# by proving that for any values of the weights,
# we can just collapse out the hidden layer,
# yielding an equivalent single-layer model with parameters
# $\mathbf{W} = \mathbf{W}^{(1)}\mathbf{W}^{(2)}$ and $\mathbf{b} = \mathbf{b}^{(1)} \mathbf{W}^{(2)} + \mathbf{b}^{(2)}$:
#
# $$
# \mathbf{O} = (\mathbf{X} \mathbf{W}^{(1)} + \mathbf{b}^{(1)})\mathbf{W}^{(2)} + \mathbf{b}^{(2)} = \mathbf{X} \mathbf{W}^{(1)}\mathbf{W}^{(2)} + \mathbf{b}^{(1)} \mathbf{W}^{(2)} + \mathbf{b}^{(2)} = \mathbf{X} \mathbf{W} + \mathbf{b}.
# $$
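This collapse is easy to confirm numerically. The following NumPy check (with arbitrary shapes) compares the two-layer linear model against the collapsed single-layer parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))
W1, b1 = rng.standard_normal((3, 5)), rng.standard_normal((1, 5))
W2, b2 = rng.standard_normal((5, 2)), rng.standard_normal((1, 2))

# Two-layer model without a nonlinearity
O_two_layer = (X @ W1 + b1) @ W2 + b2

# Collapsed single-layer parameters
W = W1 @ W2
b = b1 @ W2 + b2
O_one_layer = X @ W + b

# The two parameterizations produce identical outputs (up to floating point)
print(np.allclose(O_two_layer, O_one_layer))
```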
#
#
# In order to realize the potential of multilayer architectures,
# we need one more key ingredient: a
# nonlinear *activation function* $\sigma$
# to be applied to each hidden unit
# following the affine transformation.
# The outputs of activation functions
# (e.g., $\sigma(\cdot)$)
# are called *activations*.
# In general, with activation functions in place,
# it is no longer possible to collapse our MLP into a linear model:
#
#
# $$
# \begin{aligned}
# \mathbf{H} & = \sigma(\mathbf{X} \mathbf{W}^{(1)} + \mathbf{b}^{(1)}), \\
# \mathbf{O} & = \mathbf{H}\mathbf{W}^{(2)} + \mathbf{b}^{(2)}.\\
# \end{aligned}
# $$
#
# Since each row in $\mathbf{X}$ corresponds to an example in the minibatch,
# with some abuse of notation, we define the nonlinearity
# $\sigma$ to apply to its inputs in a rowwise fashion,
# i.e., one example at a time.
# Note that we used the notation for softmax
# in the same way to denote a rowwise operation in :numref:`subsec_softmax_vectorization`.
# Often, as in this section, the activation functions
# that we apply to hidden layers are not merely rowwise,
# but elementwise.
# That means that after computing the linear portion of the layer,
# we can calculate each activation
# without looking at the values taken by the other hidden units.
# This is true for most activation functions.
#
#
# To build more general MLPs, we can continue stacking
# such hidden layers,
# e.g., $\mathbf{H}^{(1)} = \sigma_1(\mathbf{X} \mathbf{W}^{(1)} + \mathbf{b}^{(1)})$
# and $\mathbf{H}^{(2)} = \sigma_2(\mathbf{H}^{(1)} \mathbf{W}^{(2)} + \mathbf{b}^{(2)})$,
# one atop another, yielding ever more expressive models.
#
# ### Universal Approximators
#
# MLPs can capture complex interactions
# among our inputs via their hidden neurons,
# which depend on the values of each of the inputs.
# We can easily design hidden nodes
# to perform arbitrary computation,
# for instance, basic logic operations on a pair of inputs.
# Moreover, for certain choices of the activation function,
# it is widely known that MLPs are universal approximators.
# Even with a single-hidden-layer network,
# given enough nodes (possibly absurdly many),
# and the right set of weights,
# we can model any function,
# though actually learning that function is the hard part.
# You might think of your neural network
# as being a bit like the C programming language.
# The language, like any other modern language,
# is capable of expressing any computable program.
# But actually coming up with a program
# that meets your specifications is the hard part.
#
# Moreover, just because a single-hidden-layer network
# *can* learn any function
# does not mean that you should try
# to solve all of your problems
# with single-hidden-layer networks.
# In fact, we can approximate many functions
# much more compactly by using deeper (vs. wider) networks.
# We will touch upon more rigorous arguments in subsequent chapters.
#
# + origin_pos=1 tab=["mxnet"]
# %matplotlib inline
from d2l import mxnet as d2l
from mxnet import autograd, np, npx
npx.set_np()
# + [markdown] origin_pos=4
# ## Activation Functions
#
# Activation functions decide whether a neuron should be activated or not by
# computing a weighted sum of its inputs and adding a bias to it.
# They are differentiable operators that transform input signals into outputs,
# and most of them add nonlinearity.
# Because activation functions are fundamental to deep learning,
# let us briefly survey some common activation functions.
#
# ### ReLU Function
#
# The most popular choice,
# due to both simplicity of implementation and
# its good performance on a variety of predictive tasks,
# is the *rectified linear unit* (*ReLU*).
# ReLU provides a very simple nonlinear transformation.
# Given an element $x$, the function is defined
# as the maximum of that element and $0$:
#
# $$\operatorname{ReLU}(x) = \max(x, 0).$$
#
# Informally, the ReLU function retains only positive
# elements and discards all negative elements
# by setting the corresponding activations to 0.
# To gain some intuition, we can plot the function.
# As you can see, the activation function is piecewise linear.
#
# + origin_pos=5 tab=["mxnet"]
x = np.arange(-8.0, 8.0, 0.1)
x.attach_grad()
with autograd.record():
y = npx.relu(x)
d2l.plot(x, y, 'x', 'relu(x)', figsize=(5, 2.5))
# + [markdown] origin_pos=8
# When the input is negative,
# the derivative of the ReLU function is 0,
# and when the input is positive,
# the derivative of the ReLU function is 1.
# Note that the ReLU function is not differentiable
# when the input takes value precisely equal to 0.
# In these cases, we default to the left-hand-side
# derivative and say that the derivative is 0 when the input is 0.
# We can get away with this because
# the input may never actually be zero.
# There is an old adage that if subtle boundary conditions matter,
# we are probably doing (*real*) mathematics, not engineering.
# That conventional wisdom may apply here.
# The derivative of the ReLU function is plotted below.
#
# + origin_pos=9 tab=["mxnet"]
y.backward()
d2l.plot(x, x.grad, 'x', 'grad of relu', figsize=(5, 2.5))
# + [markdown] origin_pos=12
# The reason for using ReLU is that
# its derivatives are particularly well behaved:
# either they vanish or they just let the argument through.
# This makes optimization better behaved
# and it mitigates the well-documented problem
# of vanishing gradients that plagued
# previous versions of neural networks (more on this later).
#
# Note that there are many variants to the ReLU function,
# including the *parameterized ReLU* (*pReLU*) function :cite:`He.Zhang.Ren.ea.2015`.
# This variation adds a linear term to ReLU,
# so some information still gets through,
# even when the argument is negative:
#
# $$\operatorname{pReLU}(x) = \max(0, x) + \alpha \min(0, x).$$
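Both functions are one-liners in plain NumPy (a framework-agnostic sketch alongside the notebook's own mxnet cells; the value α = 0.1 is an arbitrary choice for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def prelu(x, alpha=0.1):
    # max(0, x) + alpha * min(0, x): a fraction of each negative input leaks through
    return np.maximum(x, 0) + alpha * np.minimum(x, 0)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))   # negative inputs are zeroed out
print(prelu(x))  # negative inputs are scaled by alpha instead
```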
#
# ### Sigmoid Function
#
# The *sigmoid function* transforms its inputs,
# for which values lie in the domain $\mathbb{R}$,
# to outputs that lie on the interval (0, 1).
# For that reason, the sigmoid is
# often called a *squashing function*:
# it squashes any input in the range (-inf, inf)
# to some value in the range (0, 1):
#
# $$\operatorname{sigmoid}(x) = \frac{1}{1 + \exp(-x)}.$$
#
# In the earliest neural networks, scientists
# were interested in modeling biological neurons
# which either *fire* or *do not fire*.
# Thus the pioneers of this field,
# going all the way back to McCulloch and Pitts,
# the inventors of the artificial neuron,
# focused on thresholding units.
# A thresholding activation takes value 0
# when its input is below some threshold
# and value 1 when the input exceeds the threshold.
#
#
# When attention shifted to gradient-based learning,
# the sigmoid function was a natural choice
# because it is a smooth, differentiable
# approximation to a thresholding unit.
# Sigmoids are still widely used as
# activation functions on the output units,
# when we want to interpret the outputs as probabilities
# for binary classification problems
# (you can think of the sigmoid as a special case of the softmax).
# However, the sigmoid has mostly been replaced
# by the simpler and more easily trainable ReLU
# for most uses in hidden layers.
# In later chapters on recurrent neural networks,
# we will describe architectures that leverage sigmoid units
# to control the flow of information across time.
#
# Below, we plot the sigmoid function.
# Note that when the input is close to 0,
# the sigmoid function approaches
# a linear transformation.
#
# + origin_pos=13 tab=["mxnet"]
with autograd.record():
y = npx.sigmoid(x)
d2l.plot(x, y, 'x', 'sigmoid(x)', figsize=(5, 2.5))
# + [markdown] origin_pos=16
# The derivative of the sigmoid function is given by the following equation:
#
# $$\frac{d}{dx} \operatorname{sigmoid}(x) = \frac{\exp(-x)}{(1 + \exp(-x))^2} = \operatorname{sigmoid}(x)\left(1-\operatorname{sigmoid}(x)\right).$$
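We can sanity-check this identity against a centered finite-difference approximation of the derivative (a quick NumPy verification):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5, 5, 11)
analytic = sigmoid(x) * (1 - sigmoid(x))

# Centered finite difference: (f(x + eps) - f(x - eps)) / (2 * eps)
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # tiny discrepancy
```

Note that the analytic derivative at x = 0 equals 0.25, the maximum mentioned below.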
#
#
# The derivative of the sigmoid function is plotted below.
# Note that when the input is 0,
# the derivative of the sigmoid function
# reaches a maximum of 0.25.
# As the input diverges from 0 in either direction,
# the derivative approaches 0.
#
# + origin_pos=17 tab=["mxnet"]
y.backward()
d2l.plot(x, x.grad, 'x', 'grad of sigmoid', figsize=(5, 2.5))
# + [markdown] origin_pos=20
# ### Tanh Function
#
# Like the sigmoid function, the tanh (hyperbolic tangent)
# function also squashes its inputs,
# transforming them into elements on the interval between -1 and 1:
#
# $$\operatorname{tanh}(x) = \frac{1 - \exp(-2x)}{1 + \exp(-2x)}.$$
#
# We plot the tanh function below.
# Note that as the input nears 0, the tanh function approaches a linear transformation. Although the shape of the function is similar to that of the sigmoid function, the tanh function exhibits point symmetry about the origin of the coordinate system.
#
# + origin_pos=21 tab=["mxnet"]
with autograd.record():
y = np.tanh(x)
d2l.plot(x, y, 'x', 'tanh(x)', figsize=(5, 2.5))
# + [markdown] origin_pos=24
# The derivative of the tanh function is:
#
# $$\frac{d}{dx} \operatorname{tanh}(x) = 1 - \operatorname{tanh}^2(x).$$
#
# The derivative of the tanh function is plotted below.
# As the input nears 0,
# the derivative of the tanh function approaches a maximum of 1.
# And as we saw with the sigmoid function,
# as the input moves away from 0 in either direction,
# the derivative of the tanh function approaches 0.
#
# + origin_pos=25 tab=["mxnet"]
y.backward()
d2l.plot(x, x.grad, 'x', 'grad of tanh', figsize=(5, 2.5))
# + [markdown] origin_pos=28
# In summary, we now know how to incorporate nonlinearities
# to build expressive multilayer neural network architectures.
# As a side note, your knowledge already
# puts you in command of a similar toolkit
# to a practitioner circa 1990.
# In some ways, you have an advantage
# over anyone working in the 1990s,
# because you can leverage powerful
# open-source deep learning frameworks
# to build models rapidly, using only a few lines of code.
# Previously, training these networks
# required researchers to code up
# thousands of lines of C and Fortran.
#
# ## Summary
#
# * An MLP adds one or more fully-connected hidden layers between the output and input layers and transforms the output of each hidden layer via an activation function.
# * Commonly-used activation functions include the ReLU function, the sigmoid function, and the tanh function.
#
#
# ## Exercises
#
# 1. Compute the derivative of the pReLU activation function.
# 1. Show that an MLP using only ReLU (or pReLU) constructs a continuous piecewise linear function.
# 1. Show that $\operatorname{tanh}(x) + 1 = 2 \operatorname{sigmoid}(2x)$.
# 1. Assume that we have a nonlinearity that applies to one minibatch at a time. What kinds of problems do you expect this to cause?
#
# + [markdown] origin_pos=29 tab=["mxnet"]
# [Discussions](https://discuss.d2l.ai/t/90)
#
| d2l-en/mxnet/chapter_multilayer-perceptrons/mlp.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.2.0
# language: julia
# name: julia-1.2
# ---
# ## K-SVD Algorithm
# Based on the implementation by IshitaTakeshi.
using Pkg
Pkg.add("PyCall")
Pkg.add("DataStructures")
Pkg.add("SparseArrays")
Pkg.add("ProgressBars")
Pkg.add("DelimitedFiles")
Pkg.add("Plots")
Pkg.add("Images")
using DataStructures
using SparseArrays
using LinearAlgebra
using ProgressBars
using PyCall
using Plots
function init_dictionary(n::Int, K::Int)
"""
Initialize the dictionary.
Args:
n: dimension of input signal
K: number of atoms in the dictionary
"""
# D must be a full-rank matrix
D = rand(n, K)
while rank(D) != min(n, K)
D = rand(n, K)
end
@inbounds for k in 1:K
D[:, k] ./= norm(@view(D[:, k]))
end
return D
end
# +
# The implementation is referencing the wikipedia page
# https://en.wikipedia.org/wiki/Matching_pursuit#The_algorithm
const default_max_iter = 20
const default_tolerance = 1e-6
function SparseArrays.sparsevec(d::DefaultDict, m::Int)
SparseArrays.sparsevec(collect(keys(d)), collect(values(d)), m)
end
function matching_pursuit_(data::AbstractVector, dictionary::AbstractMatrix,
max_iter::Int, tolerance::Float64)
n_atoms = size(dictionary, 2)
residual = copy(data)
xdict = DefaultDict{Int, Float64}(0.)
for i in 1:max_iter
if norm(residual) < tolerance
return sparsevec(xdict, n_atoms)
end
# find an atom with maximum inner product
products = dictionary' * residual
_, maxindex = findmax(abs.(products))
maxval = products[maxindex]
atom = dictionary[:, maxindex]
# c is the length of the projection of data onto atom
a = maxval / sum(abs2, atom) # equivalent to maxval / norm(atom)^2
residual -= atom * a
xdict[maxindex] += a
end
return sparsevec(xdict, n_atoms)
end
"""
matching_pursuit(data::Vector, dictionary::AbstractMatrix;
max_iter::Int = $default_max_iter,
tolerance::Float64 = $default_tolerance)
Find ``x`` such that ``Dx = y`` or ``Dx ≈ y`` where y is `data` and D is `dictionary`.
```
# Arguments
* `max_iter`: Hard limit of iterations
* `tolerance`: Exit when the norm of the residual < tolerance
```
"""
function matching_pursuit(data::AbstractVector, dictionary::AbstractMatrix;
max_iter::Int = default_max_iter,
tolerance = default_tolerance)
if tolerance <= 0
throw(ArgumentError("`tolerance` must be > 0"))
end
if max_iter <= 0
throw(ArgumentError("`max_iter` must be > 0"))
end
if size(data, 1) != size(dictionary, 1)
throw(ArgumentError(
"Dimensions must match: `size(data, 1)` and `size(dictionary, 1)`."
))
end
matching_pursuit_(data, dictionary, max_iter, tolerance)
end
"""
matching_pursuit(data::AbstractMatrix, dictionary::AbstractMatrix;
max_iter::Int = $default_max_iter,
tolerance::Float64 = $default_tolerance)
Find ``X`` such that ``DX = Y`` or ``DX ≈ Y`` where Y is `data` and D is `dictionary`.
```
# Arguments
* `max_iter`: Hard limit of iterations
* `tolerance`: Exit when the norm of the residual < tolerance
```
"""
function matching_pursuit(data::AbstractMatrix, dictionary::AbstractMatrix;
max_iter::Int = default_max_iter,
tolerance::Float64 = default_tolerance)
K = size(dictionary, 2)
N = size(data, 2)
X = spzeros(K, N)
for i in 1:N
X[:, i] = matching_pursuit(
vec(data[:, i]),
dictionary,
max_iter = max_iter,
tolerance = tolerance
)
end
return X
end
"""
matching_pursuit(data::AbstractMatrix, dictionary::AbstractMatrix;
max_iter::Int = $default_max_iter,
tolerance::Float64 = $default_tolerance)
Find ``X`` such that ``DX = Y`` or ``DX ≈ Y`` where Y is `data` and D is `dictionary`.
```
# Arguments
* `max_iter`: Hard limit of iterations
* `tolerance`: Exit when the norm of the residual < tolerance
```
"""
function matching_pursuit(data::AbstractMatrix, dictionary::AbstractMatrix;
max_iter::Int = default_max_iter,
tolerance::Float64 = default_tolerance)
K = size(dictionary, 2)
N = size(data, 2)
X = spzeros(K, N)
for i in 1:N
X[:, i] = matching_pursuit(
vec(data[:, i]),
dictionary,
max_iter = max_iter,
tolerance = tolerance
)
end
return X
end
# -
function K_SVD(Y,niter_KSVD,n_atoms)
"""
Computes the K-SVD algorithm.
Args:
Y: data matrix whose columns are the signals to approximate
niter_KSVD: number of iterations of the algorithm
n_atoms: number of atoms in the dictionary
Returns: the dictionary D and the sparse coefficients X
"""
D = init_dictionary(size(Y,1),n_atoms)
X = matching_pursuit(Y,D)
for i in ProgressBar(1:niter_KSVD)
X = matching_pursuit(Y,D)
for k = 1:n_atoms
Xk = X[k,:]
all(iszero, Xk) && continue
wk = findall(!iszero,Xk)
indices = [j for j=1:size(D,2) if j!=k]
Ek = Y - D[:,indices]*X[indices,:]
Ωk = sparse(wk,1:length(wk),ones(length(wk)),size(Y,2),length(wk))
U, S, V= svd(Ek*Ωk, full=true)
D[:,k]=U[:,1]
X[k,wk] = V[:,1]*S[1]
end
end
return D,X
end
# ### First Task: Image Compression _(MNIST dataset)_
datasets = pyimport("sklearn.datasets")
digits = datasets.load_digits()
Y = digits["data"];
D,X = K_SVD(Y,200,256);
# +
# Find D and X such that Y ≈ DX
println("||Y - D * X|| = $(norm(Y - D * X))")
println("The ratio of zero elemnts in the matrix X: ",
sum(X .== 0) / length(X))
# +
# saves result in a dlm file
using DelimitedFiles
writedlm("dictionary.dlm", D)
writedlm("reconstruction.dlm", D*X)
# -
# ### Second Analysis _(CIFAR10 dataset)_
# +
using DelimitedFiles
data = readdlm("../CIFAR10_data.dlm");
labels = readdlm("../CIFAR10_labels.dlm");
# -
D,X = K_SVD(data[1:2000,:],100,256)
# +
# Find D and X such that Y ≈ DX
println("||Y - D * X|| = $(norm(data[1:2000,:] - D * X))")
println("The ratio of zero elemnts in the matrix X: ",
sum(X .== 0) / length(X))
# -
writedlm("dictionary_CIFAR.dlm", D)
writedlm("reconstruction_CIFAR.dlm", D*X)
| Sparse_Dictionary_Learning_KSVD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pathlib import Path
from sklearn.model_selection import train_test_split
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import pandas as pd
from sklearn.metrics import confusion_matrix
my_file = Path("/Users/bharu/CS690-PROJECTS/ActivityAnalyzer/activity_analyzer/DecisionTreeClassifier/FeaturesCsvFile/featuresfile.csv")
df = pd.read_csv(my_file)
df.head()
df.shape  # (number of rows, number of columns)
df = df.drop_duplicates(subset=['User', 'Timestamp'])
df.head()
df.shape
X = df.values[:,2:45]
Y = df.values[:,45]
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.3)
len(X_train)
len(X_test)
df_gini = DecisionTreeClassifier(criterion = 'gini')
df_gini.fit(X_train, Y_train)
df_gini.feature_importances_
Y_predict_gini = df_gini.predict(X_test)
Y_predict_gini
score = accuracy_score(Y_test,Y_predict_gini)
score
cm = confusion_matrix(Y_test,Y_predict_gini)
cm
tree.export_graphviz(df_gini,feature_names=df.columns.values[2:45],out_file='tree_gini.dot')
df_entropy = DecisionTreeClassifier(criterion = 'entropy')
df_entropy.fit(X_train,Y_train)
df_entropy.feature_importances_
tree.export_graphviz(df_entropy,feature_names=df.columns.values[2:45],out_file='tree_entropy.dot')
Y_predict_entropy = df_entropy.predict(X_test)
Y_predict_entropy
score_en = accuracy_score(Y_test,Y_predict_entropy)
score_en
| DecisionTreeClassifier/Classifier/DecisionTreeClassifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import grid
import json
# +
'''This example demonstrates the use of fasttext for text classification
Based on Joulin et al's paper:
Bags of Tricks for Efficient Text Classification
https://arxiv.org/abs/1607.01759
Results on IMDB datasets with uni and bi-gram embeddings:
Uni-gram: 0.8813 test accuracy after 5 epochs. 8s/epoch on i7 cpu.
Bi-gram : 0.9056 test accuracy after 5 epochs. 2s/epoch on GTx 980M gpu.
'''
from __future__ import print_function
import numpy as np
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Embedding
from keras.layers import GlobalAveragePooling1D
from keras.datasets import imdb
# Set parameters:
# ngram_range = 2 will add bi-grams features
ngram_range = 1
max_features = 20000
maxlen = 400
batch_size = 32
embedding_dims = 50
epochs = 5
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
# we add a GlobalAveragePooling1D, which will average the embeddings
# of all words in the document
model.add(GlobalAveragePooling1D())
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
# -
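The script above runs with `ngram_range = 1`; reproducing the bi-gram accuracy quoted in the docstring requires augmenting each sequence with n-gram tokens. A minimal sketch of that helper, modeled on the Keras fasttext example (`create_ngram_set` is defined here, not imported from Keras):

```python
def create_ngram_set(input_list, ngram_value=2):
    """Return the set of n-grams (as tuples) occurring in a token sequence."""
    return set(zip(*[input_list[i:] for i in range(ngram_value)]))

# Bi-grams of [1, 4, 9, 4, 1, 4]
assert create_ngram_set([1, 4, 9, 4, 1, 4]) == {(1, 4), (4, 9), (9, 4), (4, 1)}

# Tri-grams
assert create_ngram_set([1, 4, 9, 4], ngram_value=3) == {(1, 4, 9), (4, 9, 4)}
```

Each n-gram is then mapped to a fresh integer index above `max_features` and appended to the sequence before padding.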
| notebooks/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Lesson 7
#
# ## What is a Vector?
#
# A vector is basically an array.<br>
# **The index of a vector starts with `1` in R!**
#
# ### Numeric vector
#
# ```
# 1 2 3 4 5
# [1, 33, 44, 5, 55]
# ```
#
#
# ### Character vector
#
# ```
# 1 2 3 4 5
# ["Z", "f", "7", "2a", "Yes"]
# ```
#
# Numbers, too, will be converted to characters in a character vector.
#
#
# ### The secret of R
#
# Everything, even a single number like `27`, is a vector of length one!
#
#
# ### Creating a numeric vector
#
# You can combine a list of numbers into a vector by using the function `c()`.
MyFirstVector <- c(3, 45, 56, 732)
MyFirstVector
is.numeric(MyFirstVector)
is.integer(MyFirstVector)
# That's because R stores all numbers as doubles by default.
is.double(MyFirstVector)
V2 <- c(3L, 12L, 243L, 0L)
V2
is.numeric(V2)
is.integer(V2)
is.double(V2)
# ### Creating a character vector
V3 <- c("a", "B23", "Hello", 7)
V3
is.character(V3)
is.numeric(V3)
# ### Other ways to create a vector
#
# * `seq()` - sequence / like `:`
# * `rep()` - replicate
seq(1, 15)
1:15
# But `seq()` can do more than `:`...
seq(1, 15, 2) # 2 is the step
z <- seq(1,15,4)
z
# Now `rep()`...
rep(3, 50)
d <- rep(3, 20)
d
rep("a", 5)
x <- c(80, 20)
y <- rep(x, 10)
y
# ## Accessing vectors
w <- c("a", "b", "c", "d", "e")
w
# Getting the first element
w[1]
w[2]
w[3]
# Access everything except the first element...
w[-1]
# Access everything except the third element...
v <- w[-3]
v
# Access a range of elements...
w[1:3]
w[3:5]
# Specifically defining which elements you want to access...
w[c(1, 3, 5)]
# Or don't want...
w[c(-2, -4)]
# Exclude a range of elements...
w[-3:-5]
| 07_vectors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:571]
# language: python
# name: conda-env-571-py
# ---
# # Temporary Jupyter Notebook for SCRIPT 4
# ## Rachel's Part:
import numpy as np
import pandas as pd
import altair as alt
# +
from hashlib import sha1
import matplotlib.pyplot as plt
from IPython.display import HTML
from sklearn.compose import ColumnTransformer
from sklearn.dummy import DummyClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score, cross_validate, train_test_split
from sklearn.preprocessing import (
FunctionTransformer,
Normalizer,
OneHotEncoder,
OrdinalEncoder,
StandardScaler,
normalize,
scale,
)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (
accuracy_score,
classification_report,
confusion_matrix,
f1_score,
make_scorer,
precision_score,
recall_score,
)
from sklearn.model_selection import (
RandomizedSearchCV,
cross_validate,
train_test_split,
)
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.svm import SVC
# -
# set max column and max row display of dataframes
pd.set_option("display.max_colwidth", 200)
pd.set_option('display.max_rows', 50)
# ### Import processed data (not sure how to do this...)
#
# We can delete the next cell after downloading processed data:
# +
# Packages necessary for importing data (from a zip file containing 2 dataset CSVs)
import requests, zipfile
from urllib.request import urlopen
from io import BytesIO
zip_file_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00296/dataset_diabetes.zip"
zip_file_load = urlopen(zip_file_url)
zipinmemory = BytesIO(zip_file_load.read())
zip_file = zipfile.ZipFile(zipinmemory)
# Only load the first file in the zip folder
diabetes_csv = pd.read_csv(zip_file.open(zip_file.namelist()[0]))
# Change `readmitted` target column to binary "YES" or "NO" values if admitted or not.
pattern = r'[<>]30'
diabetes_csv["readmitted"] = diabetes_csv["readmitted"].str.replace(pattern,"YES",regex = True)
# Convert any ? to na
diabetes_csv = diabetes_csv.replace("?", np.NaN)
# Drop any rows with na
diabetes_clean = diabetes_csv.dropna()
# Drop columns not useful to answering our question
diabetes_clean = diabetes_clean.drop(columns = ["encounter_id", "patient_nbr", "race", "weight", "payer_code", "medical_specialty", "examide", "citoglipton"])
diabetes_clean.head(10)
# -
# ### Split Data into Training and Testing
# +
# Take a random and representative sample of our diabetes dataset to apply data analysis to
diabetes_subset = diabetes_clean.sample(n = 1_000)
diabetes_subset
# +
# Change positive and negative labels of readmitted target column to 0 (not readmitted) and 1 (readmitted)
from sklearn.preprocessing import label_binarize
encoded_column_vector = label_binarize(diabetes_subset['readmitted'], classes=['NO','YES'])
encoded_labels = np.ravel(encoded_column_vector)
diabetes_subset["readmitted"] = encoded_labels
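A quick illustration of what `label_binarize` does with the two classes used above (toy labels, not the diabetes data):

```python
import numpy as np
from sklearn.preprocessing import label_binarize

toy_labels = ["NO", "YES", "YES", "NO"]
# For two classes, label_binarize returns an (n, 1) column vector
encoded = np.ravel(label_binarize(toy_labels, classes=["NO", "YES"]))

# "NO" maps to 0 and "YES" maps to 1
assert encoded.tolist() == [0, 1, 1, 0]
```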
# +
# Split the data into training (0.8) and testing (0.2)
train_df, test_df = train_test_split(diabetes_subset, test_size=0.2, random_state=123)
# Split the data into X and Y
X_train, y_train = train_df.drop(columns=["readmitted"]), train_df["readmitted"]
X_test, y_test = test_df.drop(columns=["readmitted"]), test_df["readmitted"]
# -
# ### Label features
# categorical features - OneHotEncoding
# numeric features - StandardScaler
# ordinal features - OrdinalEncoding
categorical_features = ["age", "diag_1", "diag_2", "diag_3", "max_glu_serum", "A1Cresult", "metformin", "repaglinide", "nateglinide", "chlorpropamide", "glimepiride", "acetohexamide", "glipizide", "glyburide", "tolbutamide", "pioglitazone", "rosiglitazone", "acarbose", "miglitol", "troglitazone", "tolazamide", "glyburide-metformin", "glipizide-metformin", "glimepiride-pioglitazone", "metformin-rosiglitazone", "metformin-pioglitazone"]
numeric_features = ["admission_type_id", "discharge_disposition_id", "admission_source_id", "time_in_hospital", "num_lab_procedures", "num_procedures", "num_medications", "number_outpatient", "number_emergency", "number_inpatient", "number_diagnoses" ]
ordinal_features = ["gender", "change", "diabetesMed"]
target_feature = "readmitted"
# ### Create transformers and preprocesser pipeline
# +
categorical_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
("onehot", OneHotEncoder(handle_unknown="ignore")),
]
)
numeric_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="median")),
("scaler", StandardScaler()),
]
)
ordinal_transformer = Pipeline(
steps=[
("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
("ordinal", OrdinalEncoder()),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, numeric_features),
("cat", categorical_transformer, categorical_features),
("ord", ordinal_transformer, ordinal_features)
],
#remainder = "passthrough"
)
# -
preprocessor.fit(X_train, y_train)
# ### Create model pipelines and test out different models (RBF SVM and LR) against DummyClassifier baseline
# Create an empty dictionary to store results
results_dict = {}
# Code adapted from MDS 571 - Lab 4
def store_results(classifier_name, scores, results_dict):
"""
Stores mean scores from cross_validate in results_dict for
the given classifier classifier_name.
Parameters
----------
classifier_name : str
scikit-learn classification model
scores : dict
object return by `cross_validate`
results_dict: dict
dictionary to store results
Returns
----------
None
"""
# test cases for store_results function
assert type(classifier_name) == str # test that the classifier_name is a string
assert type(scores) == dict # test that the scores is a dictionary
results_dict[classifier_name] = {
"fit_time": "{:0.4f}".format(np.mean(scores["fit_time"])),
"score_time": "{:0.4f}".format(np.mean(scores["score_time"])),
"test_accuracy": "{:0.4f}".format(np.mean(scores["test_accuracy"])),
"train_accuracy": "{:0.4f}".format(np.mean(scores["train_accuracy"])),
"test_f1": "{:0.4f}".format(np.mean(scores["test_f1"])),
"train_f1": "{:0.4f}".format(np.mean(scores["train_f1"])),
"test_recall": "{:0.4f}".format(np.mean(scores["test_recall"])),
"train_recall": "{:0.4f}".format(np.mean(scores["train_recall"])),
"test_precision": "{:0.4f}".format(np.mean(scores["test_precision"])),
"train_precision": "{:0.4f}".format(np.mean(scores["train_precision"])),
"test_average_precision": "{:0.4f}".format(np.mean(scores["test_average_precision"])),
"train_average_precision": "{:0.4f}".format(np.mean(scores["train_average_precision"])),
"test_roc_auc": "{:0.4f}".format(np.mean(scores["test_roc_auc"])),
"train_roc_auc": "{:0.4f}".format(np.mean(scores["train_roc_auc"])),
}
# Test 3 models against baseline DummyClassifier
classifiers = {
"Dummy Classifier" : DummyClassifier(strategy = "stratified"),
"RBF SVM": SVC(),
"Logistic Regression": LogisticRegression(max_iter = 1000),
"Logistic Regression (balanced)": LogisticRegression(class_weight="balanced", max_iter = 1000),
}
# +
# ignore warnings, DummyClassifier will output many 0s for scores but this is correct
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
scoring = ["accuracy", "f1", "recall", "precision", "average_precision", "roc_auc"]
for classifier_name, classifier in classifiers.items():
pipe = Pipeline(steps=[("preprocessor", preprocessor), ("classifier", classifier)])
scores = cross_validate(pipe, X_train, y_train, return_train_score=True, scoring = scoring)
store_results(classifier_name, scores, results_dict)
results_dict = pd.DataFrame(results_dict).T
results_dict
# -
# According to the results it looks like Logistic Regression (balanced) had the highest training accuracy and f1 scores. RBF SVM is also extremely slow.
# ### Continuing work with Logistic Regression (balanced) pipeline
lr_bal_pipe = Pipeline(steps=[("preprocessor", preprocessor), ("lr", LogisticRegression(class_weight="balanced"))])
# ### Hyperparameter Optimization with Logistic Regression (balanced)
# +
scoring=["accuracy", "precision", "f1", "recall", 'roc_auc', 'average_precision']
pipe = make_pipeline(preprocessor, LogisticRegression(class_weight="balanced", max_iter = 1000))
param_grid = {
"logisticregression__C": [10,100,500],
}
random_search = RandomizedSearchCV(pipe, param_distributions=param_grid, n_jobs=-1, n_iter=2, cv=5, scoring= "f1")
# -
random_search.fit(X_train, y_train)
random_search.best_params_
# ### Hyperparameter Optimization results (confusion matrix, precision-recall curve, AUC curve)
# +
from sklearn.metrics import plot_confusion_matrix
cm = plot_confusion_matrix(random_search.best_estimator_, X_test, y_test, display_labels=["not admitted", "readmitted"], values_format="d", cmap=plt.cm.Blues)
cm
# +
from sklearn.metrics import plot_precision_recall_curve
plot_precision_recall_curve(random_search, X_test, y_test, name='LogisticRegressionClassifier');
plt.plot(recall_score(y_test, random_search.predict(X_test)), precision_score(y_test, random_search.predict(X_test)), 'or', markersize=8)
# -
print(classification_report(y_test, random_search.predict(X_test),
target_names=["not admitted", "readmitted"]))
# +
from sklearn.metrics import plot_roc_curve
cm = confusion_matrix(y_test, random_search.predict(X_test))
rc = plot_roc_curve(random_search, X_test, y_test, name='Logistic Regression');
plt.plot(cm[0,1]/(cm[0].sum()), cm[1,1]/(cm[1].sum()), 'or', markersize=8);
# -
# An AUC of 1 means perfect classification; here we get AUC = 0.59, which is far from 1 and
# shows that the model is not predicting the classes accurately for much of the data.
# ### Use Logistic Regression (balanced) model and best hyperparameters on test set
random_search.best_estimator_.fit(X_train, y_train)
random_search.best_estimator_.score(X_test,y_test)
# ### Top Coefficients of Best Indicator Features
# ### EXTRA: Find Test Set With Most Predictive Readmission Outcome vs. No Readmission Outcome
| .ipynb_checkpoints/SCRIPT4_temp-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import timedelta, datetime
# %matplotlib notebook
# -
csos = pd.read_csv('data/merged_cso_data.csv')
csos['Open date/time'] = pd.to_datetime(csos['Open date/time'])
csos['Close date/time'] = pd.to_datetime(csos['Close date/time'])
csos['Duration'] = csos['Close date/time'] - csos['Open date/time']
csos.head()
rain_df = pd.read_csv('data/ohare_hourly_20160929.csv')
rain_df['datetime'] = pd.to_datetime(rain_df['datetime'])
rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime']))
rain_df = rain_df['19700101':]
chi_rain_series = rain_df['HOURLYPrecip'].resample('1H', label='right').max()
chi_rain_series.head()
def cum_rainfall(timestamps, hours_before):
results = []
for timestamp in timestamps:
top_of_hour = (timestamp + timedelta(hours=1)).replace(minute=0, second=0)
rain_start = top_of_hour - timedelta(hours=(hours_before-1))
results.append(chi_rain_series[rain_start:top_of_hour].sum())
return results
cum_rainfall(csos['Open date/time'], 24)
csos['24hr_rain'] = cum_rainfall(csos['Open date/time'], 24)
csos
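The windowing logic inside `cum_rainfall` can be checked on a synthetic hourly series. This is a toy sketch; `window_sum` is a hypothetical standalone version of the same per-timestamp lookup.

```python
import pandas as pd
from datetime import timedelta

# Toy series: 1.0 inch of rain recorded every hour for 6 hours
toy_rain = pd.Series(1.0, index=pd.date_range('2016-01-01 00:00', periods=6, freq='H'))

def window_sum(series, timestamp, hours_before):
    # Round up to the next top of the hour, then look back hours_before hours
    top_of_hour = (timestamp + timedelta(hours=1)).replace(minute=0, second=0)
    rain_start = top_of_hour - timedelta(hours=hours_before - 1)
    # DatetimeIndex label slicing is inclusive on both ends
    return series[rain_start:top_of_hour].sum()

# A 3-hour window ending at 04:00 covers the 02:00, 03:00, and 04:00 readings
assert window_sum(toy_rain, pd.Timestamp('2016-01-01 03:30'), 3) == 3.0
```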
# What is the least amount of rain that causes a CSO?
csos = csos.sort_values('24hr_rain')
csos
# How many rows are there? How many have a value of 0?
print('Total rows: %s' % len(csos))
print('Rows with 0 rain in previous 24 hours: %s' % len(csos[csos['24hr_rain'] == 0]))
csos_without_zero = csos[csos['24hr_rain'] != 0]
csos_without_zero
| sewer-overflows/notebooks/CSO and Rainfall analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''base'': conda)'
# language: python
# name: python_defaultSpec_1600051588785
# ---
# + tags=[]
# %matplotlib widget
# %load_ext autoreload
# %autoreload 2
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import scipy.misc as misc
import math
import time
# Project imports
import llops.operators as ops
import llops as yp
import llops.simulation as sim
from llops import vec
# -
# ## Define Backend and Datatype
# +
global_backend = 'numpy' # arrayfire or numpy
global_dtype = 'complex32' # complex32 or complex64
ops.setDefaultBackend(global_backend)
ops.setDefaultDatatype(global_dtype)
# -
# # Create Test Object
# +
# Image size to simulate
image_size = np.array([64, 128])
# Determine machine precision threshold
eps = yp.precision(global_dtype) * np.prod(image_size)
# Load object and crop to size
x = sim.brain(image_size)
# Generate convolution kernel h
h_size = np.array([4, 4])
h = yp.zeros(image_size, global_dtype, global_backend)
h[image_size[0] // 2 - h_size[0] // 2:image_size[0] // 2 + h_size[0] // 2,
image_size[1] // 2 - h_size[1] // 2:image_size[1] // 2 + h_size[1] // 2] = yp.randn((h_size[0], h_size[1]), global_dtype, global_backend)
h /= yp.scalar(yp.sum(yp.abs(h)))
# Forward Operator
A = ops.Convolution(h, mode='circular', pad_value='mean', invalid_support_value=0)
A.inverse_regularizer = 1e-2
# Generate Measurement
y = A * x
# Reconstruction
x_star = A.inv * y
# Show object and h
plt.figure(figsize=(12,3))
plt.subplot(141)
plt.imshow(yp.abs(yp.changeBackend(x, 'numpy')), cmap='gray')
plt.title('Object (x)')
plt.subplot(142)
plt.imshow(yp.abs(np.asarray(h)), cmap='gray')
plt.title('h (A)')
plt.subplot(143)
plt.imshow((yp.abs(np.asarray(y))), cmap='gray')
plt.title('Measurement (A * x)');
plt.subplot(144)
plt.imshow((yp.abs(np.asarray(x_star))), cmap='gray')
plt.title('Recon (A.inv * A * x)');
# -
# ## Identity Operator
# +
I = ops.Identity(image_size)
# Check forward operator
assert yp.sum((I * x) - x) < eps
# Check gradient
I.gradient_check()
# Render forward model
I.latex()
# Render gradient
I.latex(gradient=True)
# -
# ## Diagonalization Operator
# +
K = ops.Diagonalize(h)
# Check forward operator
yp.assert_equality(K * x, h * x)
# Check gradient
K.gradient_check()
# Render forward model
K.latex()
# Render gradient
K.latex(gradient=True)
# -
# ## Matrix Multiplication Operator
# +
matrix_size = (10,10)
m = yp.rand(matrix_size, global_dtype, global_backend)
xm = yp.rand(matrix_size[1], global_dtype, global_backend)
M = ops.MatrixMultiply(m)
# Check Forward operator
assert yp.sum(yp.abs(yp.vec(yp.changeBackend(M * xm, 'numpy')) - yp.vec(yp.changeBackend(m, 'numpy').dot(yp.changeBackend(xm, 'numpy'))))) < eps, "%f" % yp.sum(yp.abs(yp.changeBackend(M * xm, 'numpy') - yp.changeBackend(m, 'numpy').dot(yp.changeBackend(xm, 'numpy'))[:, np.newaxis]))
# Check Adjoint
assert yp.sum(yp.abs(yp.vec(yp.changeBackend(M.H * xm, 'numpy')) - yp.vec(np.conj(yp.changeBackend(m, 'numpy').T).dot(yp.changeBackend(xm, 'numpy'))))) < eps, "%f" % yp.sum(yp.abs(yp.changeBackend(M.H * xm, 'numpy') - np.conj(yp.changeBackend(m, 'numpy').T).dot(yp.changeBackend(xm, 'numpy'))[:, np.newaxis]))
# Check gradient
M.gradient_check()
# Render forward model
M.latex()
# Render gradient
M.latex(gradient=True)
# -
# ## Circular Convolution Operator
# +
# Generate circular convolution operator
C = ops.Convolution(h)
# Test forward operator
conv2 = lambda x, h: yp.changeBackend(np.fft.ifftshift((np.fft.ifft2(np.fft.fft2(x, axes=(0,1), norm='ortho') * np.fft.fft2(h, axes=(0,1), norm='ortho'), axes=(0,1), norm='ortho')), axes=(0,1)).astype(yp.getNativeDatatype(global_dtype, 'numpy')), global_backend)
x_np = yp.changeBackend(x, 'numpy')
h_np = yp.changeBackend(h, 'numpy')
# Check gradient
C.gradient_check(eps=1e-0)
# Render forward model
C.latex()
# Render gradient
C.latex(gradient=True)
# -
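Independent of `llops`, the identity behind this operator — circular convolution is diagonalized by the DFT — can be verified directly in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
h = rng.standard_normal(8)

# Convolution theorem: circular convolution is pointwise multiplication in Fourier space
y_fft = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

# Direct O(n^2) circular convolution for comparison
y_direct = np.array([sum(h[k] * x[(n - k) % 8] for k in range(8)) for n in range(8)])
assert np.allclose(y_fft, y_direct)
```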
# ## Non-circular Convolution Operator
# + tags=[]
pad_value = 0
# Windowed Convolution
C_full = ops.Convolution(h, mode='same', pad_value=pad_value, dtype=global_dtype, backend=global_backend)
y_full = yp.changeBackend(yp.abs(C_full * x), 'numpy')
# Circular Convolution
C = ops.Convolution(h, dtype=global_dtype, backend=global_backend)
y5 = yp.abs(yp.changeBackend(C * x, 'numpy'))
plt.figure(figsize=(10,2))
plt.subplot(131)
plt.imshow(yp.real(y5))
plt.title('FFT')
plt.subplot(132)
plt.imshow(yp.real(y_full))
plt.title('Windowed')
plt.subplot(133)
plt.imshow(yp.abs(y_full - y5))
plt.title('|FFT - windowed|');
plt.colorbar()
print('SSD is %.2E' % yp.sum(yp.abs(y_full - y5)) ** 2)
# Check Gradient
C_full.gradient_check()
# Render forward model
C_full.latex()
# Render gradient
C_full.latex(gradient=True)
# -
# ## Cross-Correlation Operator
# +
XC = ops.CrossCorrelation(h)
xc = lambda x, h: np.fft.ifftshift((np.fft.ifft2(np.fft.fft2(x, axes=(0,1), norm='ortho') \
* np.conj(np.fft.fft2(h, axes=(0,1), norm='ortho')), axes=(0,1), norm='ortho')), axes=(0,1)).astype(np.complex64)
# Check forward operator
# y1 = yp.changeBackend(XC * vec(x), 'numpy')
# y2 = xc(yp.changeBackend(x, 'numpy'), yp.changeBackend(h, 'numpy'))
# assert yp.sum(yp.abs(y1 - y2.reshape(-1))) < eps
# Check gradient
XC.gradient_check()
# Render forward model
XC.latex()
# Render gradient
XC.latex(gradient=True)
# -
# ## Crop Operator: Centered
# +
# Generate Crop Operator
crop_size = (image_size[0] // 2, image_size[1] // 2)
crop_start = tuple(np.asarray(image_size) // 2 - np.asarray(crop_size) // 2)
CR = ops.Crop(image_size, crop_size, pad_value=0, crop_start=crop_start, dtype=global_dtype, backend=global_backend)
# Check forward operator
y_1 = yp.changeBackend(CR * x, 'numpy')
y_2 = yp.changeBackend(yp.crop(x, crop_size, crop_start), 'numpy')
assert yp.sum(yp.abs(y_1 - y_2)) < eps
# Check Adjoint Operator
pad_size = [int((image_size[i] - crop_size[i]) / 2) for i in range(len(image_size))]
y_3 = yp.pad(yp.crop(x, crop_size, crop_start), image_size, crop_start, pad_value=0)
y_4 = CR.H * CR * x
assert yp.sum(yp.abs(y_3 - y_4)) < eps
# Check gradient
CR.gradient_check()
# Render forward model
CR.latex()
# Render gradient
CR.latex(gradient=True)
# -
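The adjoint relationship tested above — the adjoint of cropping is zero-padding at the same offset — follows from the inner-product definition ⟨Cx, y⟩ = ⟨x, Cᴴy⟩, which a plain NumPy sketch confirms:

```python
import numpy as np

x = np.arange(8.0)
crop = slice(2, 6)

# Forward: crop; adjoint: embed back at the same offset, zeros elsewhere
Cx = x[crop]
y = np.array([1.0, 2.0, 3.0, 4.0])
CHy = np.zeros_like(x)
CHy[crop] = y

# <Cx, y> == <x, C^H y>
assert np.dot(Cx, y) == np.dot(x, CHy)
```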
# ## Crop Operator: Non-Centered
# +
# Generate Crop Operator
crop_size = (image_size[0] // 2, image_size[1] // 2)
crop_start = (6, 6)
CR = ops.Crop(image_size, crop_size, pad_value=0, dtype=global_dtype, backend=global_backend, crop_start=crop_start)
# Check forward operator
y_1 = yp.changeBackend(CR * x, 'numpy')
y_2 = yp.changeBackend(yp.crop(x, crop_size, crop_start), 'numpy')
assert yp.sum(yp.abs(y_1 - y_2)) < eps
# Check Adjoint Operator
pad_size = [int((image_size[i] - crop_size[i]) / 2) for i in range(len(image_size))]
y_3 = yp.pad(yp.crop(x, crop_size, crop_start), image_size, crop_start, pad_value=0)
y_4 = yp.reshape(CR.H * CR * x, image_size)
assert yp.sum(yp.abs(y_3 - y_4)) < eps
# Check gradient
CR.gradient_check()
# Render forward model
CR.latex()
# Render gradient
CR.latex(gradient=True)
# -
# ## Shift Operator
# +
# Normal shift
shift = (0, 10) # should be y, x
T = ops.Shift(image_size, shift)
def shift_func(x, shift):
x = yp.changeBackend(x, 'numpy')
for ax, sh in enumerate(shift):
x = np.roll(x, int(sh), axis=ax)
return(x)
# Check Forward Operator
y_1 = yp.changeBackend(T * x, 'numpy')
y_2 = shift_func(yp.changeBackend(x, 'numpy'), shift)
assert yp.sum(yp.abs(y_1 - y_2)) < eps
# Check Adjoint Operator
assert yp.sum(yp.abs(T.H * T * x - x)) < eps
# Check gradient
T.gradient_check()
# Render forward model
T.latex()
# Render gradient
T.latex(gradient=True)
# -
# ## Summation Operator
# +
axis_to_sum = (0,1)
Σ = ops.Sum(image_size)
# Check forward operator
y_1 = yp.changeBackend(Σ * x, 'numpy')
y_2 = yp.sum(yp.changeBackend(x, 'numpy'), axis=axis_to_sum)
assert yp.abs(yp.sum(y_1 - y_2)) < eps
# Check adjoint operator
y_3 = yp.changeBackend(Σ.H * Σ * x, 'numpy')
reps = [1, ] * len(image_size)
axes = list(range(len(image_size))) if axis_to_sum == 'all' else axis_to_sum
scale = 1
for axis in axes:
reps[axis] = image_size[axis]
scale *= 1 / image_size[axis]
y_4 = yp.tile(y_2, reps) * scale
assert yp.sum(yp.abs(y_3 - y_4)) < eps
# Check gradient
# Σ.gradient_check(eps=1)
# Render forward model
Σ.latex()
# Render gradient
Σ.latex(gradient=True)
# -
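The adjoint check above relies on the sum operator's matrix being a row of ones, whose transpose replicates a scalar across all entries (the `llops` operator additionally rescales by 1/N, per the check above). In plain NumPy:

```python
import numpy as np

n = 6
S = np.ones((1, n))          # matrix of the sum operator
x = np.arange(n, dtype=float)

assert (S @ x).item() == x.sum()
# The adjoint S.T replicates the summed scalar across all n entries
assert np.allclose(S.T @ (S @ x), np.full(n, x.sum()))
```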
# ## Mean Operator
# ## Intensity Operator
# +
I = ops.Intensity(image_size)
# Check forward operator
assert yp.sum(yp.abs((yp.abs(yp.changeBackend(x, 'numpy')) ** 2) - yp.changeBackend(I * x, 'numpy'))) < eps
# Check gradient
I.gradient_check()
# Render forward model
I.latex()
# Render gradient
I.latex(gradient=True)
# -
# ## Flip Operator
# +
flip_axis = 0
L = ops.Flip(image_size, axis=flip_axis)
# Check forward operator
assert yp.sum(yp.abs(L * x - yp.flip(x, flip_axis))) < eps, "%f" % yp.sum(yp.abs(L * x - vec(yp.flip(x, flip_axis))))
# Check gradient
L.gradient_check()
# Render forward model
L.latex()
# Render gradient
L.latex(gradient=True)
# -
# ## $\ell2$ Norm Operator
# +
L2 = ops.L2Norm(image_size)
# Check forward operator
assert yp.sum(yp.abs(L2 * x - 0.5 * yp.norm(x) ** 2)) < eps, '%f' % yp.sum(yp.abs(L2 * x - 0.5 * np.linalg.norm(x) ** 2))
# Check gradient
L2.gradient_check()
# Render forward model
L2.latex()
# Render gradient
L2.latex(gradient=True)
# -
# ## $\ell1 $ Norm Operator
# +
L1 = ops.L1Norm(image_size)
# Forward operator
assert L1 * x - yp.sum(yp.abs(x)) < eps
# Render forward model
L1.latex()
# -
# ## Wavelet Transform
# + tags=[]
import pywt
wavelet_list = ['db1', 'haar', 'rbio1.1', 'bior1.1', 'bior4.4', 'sym12']
for wavelet_test in wavelet_list:
# Wavelet Transform
W = ops.WaveletTransform(image_size, wavelet_type=wavelet_test, use_cycle_spinning=False)
# Check forward operation
coeffs = pywt.wavedecn(x, wavelet=wavelet_test)
x_wavelet, coeff_slices = pywt.coeffs_to_array(coeffs)
assert yp.sum(yp.abs(yp.changeBackend(W * x, 'numpy') - x_wavelet)) < eps, "Difference %.6e"
# Check inverse operation
coeffs_from_arr = pywt.array_to_coeffs(x_wavelet, coeff_slices)
cam_recon = pywt.waverecn(coeffs_from_arr, wavelet=wavelet_test)
assert yp.sum(yp.abs(W.H * W * x - x)) < 1e-2
# Ensure that the wavelet transform isn't just identity (weird bug)
if W.shape[1] == yp.size(x):
assert yp.sum(yp.abs(W * yp.vec(x) - yp.vec(x))) > 1e-2, "%s" % wavelet_test
# Check gradient
W.gradient_check()
# Render forward model
W.latex()
# -
# ## Exponential Operator
# +
L2 = ops.L2Norm(image_size)
F = ops.FourierTransform(image_size)
EXP = ops.Exponential(image_size)
# Forward model
assert yp.sum(yp.abs(yp.changeBackend(EXP * x, 'numpy') - np.exp(yp.changeBackend(x, 'numpy')))) < eps
# Check gradient
EXP.gradient_check()
# Generate composite operator
D = ops.Diagonalize(h)
L2 = ops.L2Norm(image_size)
EXP_COMP = L2 * F * EXP
EXP_COMP.gradient_check()
EXP_COMP_2 = L2 * F * EXP * D
EXP_COMP_2.gradient_check()
# Render forward model
EXP.latex()
# Render gradient
EXP.latex(gradient=True)
# -
# ## Phase Ramp Operator
# + tags=[]
eps_phase_ramp = 1e-4
shift = yp.changeBackend(np.asarray((-5,3)).astype(yp.getNativeDatatype(global_dtype, 'numpy')), global_backend)
# Generate phase ramp
R = ops.PhaseRamp(image_size)
r = R * shift
F = ops.FourierTransform(image_size, dtype=global_dtype, normalize=False, backend=global_backend)
D_R = ops.Diagonalize(r, dtype=global_dtype)
S_R = F.H * D_R * F
# Pixel-wise shift operator
S = ops.Shift(image_size, shift)
# Check gradient of phase ramp convolution
S_R.gradient_check()
# Check gradient of phase ramp
print(R.gradient_check(eps=1))
# Render forward model
R.latex()
# Render gradient
R.latex(gradient=True)
# plt.figure()
# plt.subplot(131)
# plt.imshow(yp.abs(yp.reshape(yp.changeBackend(S_R * vec(x), 'numpy'), image_size)))
# plt.subplot(132)
# plt.imshow(yp.abs(yp.reshape(yp.changeBackend(S * vec(x), 'numpy'), image_size)))
# plt.subplot(133)
# plt.imshow(yp.abs(yp.reshape(yp.changeBackend(S * vec(x) - S_R * vec(x), 'numpy'), image_size)))
# -
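The equivalence exercised above — a circular shift equals multiplication by a phase ramp in the Fourier domain — is the DFT shift theorem, easy to confirm in plain NumPy:

```python
import numpy as np

x = np.arange(16.0)
shift = 3

# Shift theorem: y[n] = x[n - s]  <=>  Y[k] = X[k] * exp(-2*pi*i*k*s/N)
k = np.fft.fftfreq(16)
ramp = np.exp(-2j * np.pi * k * shift)
x_shifted = np.real(np.fft.ifft(np.fft.fft(x) * ramp))

assert np.allclose(x_shifted, np.roll(x, shift))
```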
# ## Derivative Operator
# +
# Derivative operator in x
Dx = ops.Derivative(image_size, dtype=global_dtype, backend=global_backend, axis=1)
xd = Dx * x
# Derivative operator in y
Dy = ops.Derivative(image_size, dtype=global_dtype, backend=global_backend, axis=0)
yd = Dy * x
# True derivative grids for comparison
N = image_size
r_x = np.arange(-N[1] / 2, N[1] / 2, 1.0) / N[1]
r_y = np.arange(-N[0] / 2, N[0] / 2, 1.0) / N[0]
grid_np = np.meshgrid(r_x, r_y)
grid = []
for g in grid_np:
grid.append(yp.changeBackend(g.astype(yp.getNativeDatatype(global_dtype, 'numpy')), global_backend))
# from libwallerlab.operators.fft import Ft, iFt
Ft = lambda x: np.fft.fftshift(np.fft.fft2(np.fft.fftshift(x, axes=(0, 1)), axes=(0, 1), norm='ortho'), axes=(0, 1))
iFt = lambda x: np.fft.fftshift(np.fft.ifft2(np.fft.fftshift(x, axes=(0, 1)), axes=(0, 1), norm='ortho'), axes=(0, 1))
dx_func = lambda x: iFt(Ft(x) * grid[1].reshape(image_size))
dy_func = lambda x: iFt(Ft(x) * grid[0].reshape(image_size))
# assert yp.sum(yp.abs(dx_func(x) - xd.reshape(image_size))) < eps, "X derivative was not equal! (%.4e)" % yp.sum(yp.abs(dx_func(x) - xd.reshape(image_size)))
# assert yp.sum(yp.abs(dy_func(x) - yd.reshape(image_size))) < eps, "Y derivative was not equal! (%.4e)" % yp.sum(yp.abs(dy_func(x) - yd.reshape(image_size)))
# Check Gradient
Dx.gradient_check()
Dy.gradient_check()
# Render forward models
Dx.latex()
Dy.latex()
# Render gradients
Dx.latex(gradient=True)
Dy.latex(gradient=True)
# -
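The Fourier-domain construction used for the comparison grids above is standard spectral differentiation (multiply by iω in the Fourier domain); on a periodic grid it is exact for band-limited signals, e.g. a sine:

```python
import numpy as np

n = 64
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
x = np.sin(t)

# Spectral derivative: multiply by i*omega in the Fourier domain
omega = 2 * np.pi * np.fft.fftfreq(n, d=t[1] - t[0])
dx = np.real(np.fft.ifft(1j * omega * np.fft.fft(x)))

# d/dt sin(t) = cos(t), recovered to machine precision
assert np.allclose(dx, np.cos(t))
```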
# ## Power Operator
# +
power = 2
P = ops.Power(image_size, power,dtype=global_dtype, backend=global_backend)
assert yp.sum(yp.abs(yp.changeBackend(P * x, 'numpy') - yp.changeBackend(x, 'numpy') ** power)) < eps, "%f" % yp.sum(yp.abs(yp.changeBackend(P * x, 'numpy') - yp.changeBackend(x, 'numpy') ** power))
# Render forward model
P.latex()
# Render gradient
P.latex(gradient=True)
# -
# ## FFTShift Operator
# +
S = ops.FFTShift(image_size)
yp.assert_equality(S * x, yp.fftshift(x))
yp.assert_equality(S.H * S * x, x)
# Check Gradient
S.gradient_check()
# Render Latex
S.latex()
# Render gradient
S.latex(gradient=True)
# plt.figure()
# plt.subplot(131)
# plt.imshow(yp.abs(x))
# plt.subplot(132)
# plt.imshow(yp.abs(S * x))
# plt.subplot(133)
# plt.imshow(yp.abs(S.H * S * x))
# -
# ## Image Segmentation Operator
# +
crop_size = (image_size[0], image_size[0])
roi_list = [yp.Roi(crop_size, start=(0,0), input_shape=image_size),
yp.Roi(crop_size, start=(0,image_size[1] // 4), input_shape=image_size),
yp.Roi(crop_size, start=(0,image_size[1] // 2), input_shape=image_size)]
# roi_list[0] -= 5
# Create segmentation operator
G = ops.Segmentation(roi_list, image_size, alpha_blend_size=0, backend=None)
# Generate measurements
y = G * x
# Apply some mis-calibration to measurement
y_list = ops.VecSplit(y, len(roi_list))
# y_list[1] = yp.circshift(y_list[1], (3, -1))
# y_list[1] *= 1.1
# y_list[2] *= 0.9
y = ops.VecStack(y_list)
# Show figures
plt.figure()
plt.subplot(131)
plt.imshow(yp.real(y))
plt.title('Forward')
plt.subplot(132)
plt.imshow(yp.real(G.H * y))
plt.title('Adjoint * Forward')
plt.subplot(133)
plt.imshow(yp.real(G.inv * y))
plt.title('Inverse * Forward')
# Perform gradient check
G.gradient_check()
# Show latex
G.latex()
# Show latex
G.latex(gradient=True)
# -
# ## Registration Operator
# +
# Define known shift
known_shift = yp.asarray((3, 10))
# Create registration operator
R = ops.Registration(x, debug=False)
# Check forward operation
yp.assert_equality(R * yp.asarray(known_shift), yp.roll(x, known_shift))
# Check inverse operation
yp.assert_equality(R.inv * (R * yp.asarray(known_shift)), yp.asarray(known_shift))
# Render latex
R.latex()
# Render gradient
R.latex(gradient=True)
# -
# # Operator Algebra
# ## Inner Operators
# +
# Create phase ramp to diagonalize
H = ops.PhaseRamp(image_size)
s = yp.rand((2,1))
# Create diagonalized phase ramp operator
D = ops.Diagonalize(s, inner_operator=H)
# Check that the inner operator is applied correctly
assert yp.sum(yp.abs(D * x - ((H * s) * x))) == 0.0
# Check gradient
D.gradient_check()
# Render Latex
D.latex()
# Render gradient
D.latex(gradient=True)
# -
# ## Operator-Vector Sum
# +
# Test sum operations here
F = ops.FourierTransform(image_size, center=False)
y = F * x
A_s = A + y
# Forward operator
assert yp.sum(yp.abs(A_s * x - (A * x + y))) < eps
# Adjoint
assert yp.sum(yp.abs(A_s.H * x - A.H * x)) < eps
# Gradient Numerical Check
A.gradient_check()
# Render forward model
A_s.latex()
# Render gradient
A_s.latex(gradient=True)
# -
# # Operator Mechanics
# ## Linearity Flag
# +
F = ops.FourierTransform(image_size) # Linear Operator
L2 = ops.L2Norm((image_size[0], image_size[1])) # Non-linear operator
assert F.linear
assert not L2.linear
assert not (L2 * F).linear
assert (F + F).linear
assert not (L2 * F + L2 * F).linear
# -
# ## Smoothness Flag
# +
F = ops.FourierTransform(image_size) # Linear Operator
L1 = ops.L1Norm(image_size) # Non-linear operator
assert F.smooth
assert not L1.smooth
assert not (L1 * F).smooth
assert (F + F).smooth
assert not (L1 * F + L2 * F).smooth
assert not (L1 * F + L1 * F).smooth
# -
# ## Operator Indexing (Suboperators)
# +
K = ops.Diagonalize(h, label='K')
K_2 = ops.Diagonalize(h, label='Q')
F = ops.FourierTransform(image_size)
A = F.H * K * F
A.label = 'A'
A.suboperators[1].argument = yp.ones(h.shape)
# Render forward model
A.latex()
# Render gradient
A.latex(gradient=True)
# -
# # Condition Number Calculation
# The condition number of a matrix product $AB$ is bounded by the following relation:
# $$\kappa\{AB\} \leq \kappa\{A\}\kappa\{B\}$$
#
# Unless either $A$ or $B$ is unitary ($\kappa\{\cdot\}=1$), we cannot know the condition number exactly, since the spectral bases (eigenvectors) are not common between the two matrices. In the future, we could store the whole spectrum and check this, but that would be complicated to implement.
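The submultiplicative bound (and the unitary special case) can be checked numerically with NumPy's `cond`; this is a generic sketch, independent of the `llops` operators:

```python
import numpy as np

rng = np.random.default_rng(0)
# Diagonally-dominant matrices, so both are well-conditioned and invertible
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
B = rng.standard_normal((5, 5)) + 5 * np.eye(5)
kappa = np.linalg.cond

# Submultiplicative bound: kappa(AB) <= kappa(A) * kappa(B)
assert kappa(A @ B) <= kappa(A) * kappa(B) * (1 + 1e-10)

# A unitary factor (kappa = 1) leaves the 2-norm condition number unchanged
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
assert np.isclose(kappa(Q @ B), kappa(B))
```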
# +
# Unitary Matrix
F = ops.FourierTransform(image_size)
assert F.condition_number == 1
assert not F.condition_number_is_upper_bound
# Matrix with a condition number
hh = yp.changeBackend((np.random.rand(image_size[0], image_size[1]) + 0.1).astype(np.complex64), global_backend)
D = ops.Diagonalize(hh)
assert not D.condition_number_is_upper_bound
# Product of two unitary matrices
assert (F * F).condition_number == 1
# assert not (F * F).condition_number_is_upper_bound
# Product of one unitary and one non-singular matrix
assert (F * D).condition_number == D.condition_number
# assert not (F * D).condition_number_is_upper_bound # because one matrix is unitary, this condition number is NOT an upper bound. This can be checked numerically.
# Product of two non-singular matrices.
hh_2 = yp.changeBackend((np.random.rand(image_size[0], image_size[1]) + 0.1).astype(np.complex64), global_backend)
D2 = ops.Diagonalize(hh_2)
assert (D * D2).condition_number >= D.condition_number
# assert not (D * D2).condition_number_is_upper_bound
# Product of two diagonal matrices separated by a F.T.
assert (D * F * D2).condition_number >= D.condition_number
assert (D * F * D2).condition_number_is_upper_bound
# -
# ## Check if an Operator is the Inverse of another operator
# + tags=[]
F = ops.FourierTransform(h.shape)
print(F.is_adjoint_of(F.H))
print(F.is_inverse_of(F.H))
# -
# ## Removal of Inverses and Redundant Products
# +
F = ops.FourierTransform(image_size)
D = ops.Diagonalize(h)
L2 = ops.L2Norm(h.shape)
I = ops.Identity(image_size)
y = F * x
# Simple introspection
A = F.H * F
assert 'Identity' in str(A)
# Introspection with extra operators on right
A = F.H * F * D
assert 'Fourier' not in str(A)
# Introspection with extra operators on left
A = D * F.H * F
assert 'Fourier' not in str(A)
# Introspection with several opposites
A = F.H * F * D * F.H * F
assert 'Fourier' not in str(A)
# -
# # Inverses
# ## Linear Inverses
# +
# Fourier Transform
A = ops.FourierTransform(x.shape)
assert yp.sum(yp.abs(A.inv * A * x - x)) < 1e-3
# Identity
A = ops.Identity(x.shape)
assert yp.sum(yp.abs(A.inv * A * x - x)) < 1e-3
# Shift
A = ops.Shift(x.shape, (10,10))
assert yp.sum(yp.abs(A.inv * A * x - x)) < 1e-3
# Convolution (explicit)
F = ops.FourierTransform(h.shape)
A = F.H * ops.Diagonalize((F * h), inverse_regularizer=1e-10) * F
assert yp.sum(yp.abs(A.inv * A * x - x)) < 1
# Convolution (implicit)
A = ops.Convolution(h, inverse_regularizer=0)
assert yp.sum(yp.abs(A.inv * A * x - x)) < 1e-3
# -
# # Gradients
# ## Gradients of Linear Operators
# We'll assume that if the adjoint is provided, the gradient operator is just the adjoint operating on the input (x), or current iterate. This allows us to specify either the adjoint (linear operator) OR the gradient (non-linear operator) for each operator, and the rest will be handled by the metaclass.
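# As an illustration of the kind of check `gradient_check()` performs, here is a sketch with a plain NumPy matrix standing in for an operator: for the objective f(x) = 0.5 * ||Ax - y||^2, the analytic gradient is the adjoint (here, the transpose) applied to the residual, and it should agree with finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
A = rng.standard_normal((n, n))
x0 = rng.standard_normal(n)
y0 = rng.standard_normal(n)

def f(v):
    # Least-squares objective: 0.5 * ||Av - y0||^2
    r = A @ v - y0
    return 0.5 * float(r @ r)

# Analytic gradient: the adjoint (transpose) applied to the residual.
grad = A.T @ (A @ x0 - y0)

# Central finite differences as the numerical reference.
eps_fd = 1e-6
num = np.zeros(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = eps_fd
    num[i] = (f(x0 + e) - f(x0 - e)) / (2 * eps_fd)

assert np.allclose(grad, num, atol=1e-4)
```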
# +
# Simple test case (linear operator)
A = ops.FourierTransform(image_size)
# Check validity of gradient operator (and several short-hand pass-through functions)
A.gradient_check()
# Render forward model
A.latex()
# Render gradient
A.latex(gradient=True)
# -
# ## Chained Linear Gradients
# +
# Chained linear operators
F = ops.FourierTransform(image_size)
D = ops.Diagonalize(yp.asbackend(h, global_backend))
A = F.H * D * F
A.label = 'A'
# Check gradient numerically
A.gradient_check()
# Render forward model
A.latex()
# Render gradient
A.latex(gradient=True)
# -
# ## Chained Nonlinear Gradients
# ### Inner convolution (linear) operator with outer L2 Norm (non-linear) operator
# +
# Inner convolution (linear) operator with outer L2 Norm (non-linear) operator
L2 = ops.L2Norm(image_size)
F = ops.FourierTransform(image_size)
D = ops.Diagonalize(h)
A_linear = F.H * D * F
# A_linear.label = 'A_{linear}'
A = L2 * A_linear
# Check forward operator
assert np.all(yp.abs(A * x - 0.5 * yp.norm(A_linear * x) ** 2) < eps)
# Check gradient operator
A.gradient_check()
# Render forward model
A.latex()
# Render gradient
A.latex(gradient=True)
# -
# ### Inner convolution and vector subtraction (linear) operator with outer L2 Norm (non-linear) operator
# +
L2 = ops.L2Norm(image_size)
F = ops.FourierTransform(image_size)
D = ops.Diagonalize(h)
A = F.H * D * F
# Data difference function
Delta = (A - y)
# Objective Function
O = L2 * Delta
# Check forward operator
assert np.all(yp.abs(O * x - 0.5 * yp.norm(Delta * x) ** 2) < eps)
# Check gradient operator (adjoint form)
O.gradient_check()
# Render forward model
O.latex()
# Render gradient
O.latex(gradient=True)
# -
# ### Inner non-linear operator, linear operator in middle, and norm on outside
# +
phase_ramp_dtype = 'complex32'
x_long = yp.astype(x, phase_ramp_dtype)
# Inner non-linear operator, linear operator in middle, and norm on outside
shift_true = yp.changeBackend(np.asarray((-5,3)).astype(yp.getNativeDatatype(phase_ramp_dtype, 'numpy')), global_backend)
F = ops.FourierTransform(image_size, dtype=phase_ramp_dtype, backend=global_backend)
D_object = ops.Diagonalize(F * x_long, label='object', dtype=phase_ramp_dtype, backend=global_backend)
R = ops.PhaseRamp(image_size, dtype=phase_ramp_dtype, backend=global_backend)
A_shift = F.H * D_object * R
y1 = A_shift(shift_true)
L2 = ops.L2Norm(image_size, dtype=phase_ramp_dtype, backend=global_backend)
objective = L2 * (A_shift - y1)
# Check gradient
objective.gradient_check()
# Render forward model
objective.latex()
# Render gradient
objective.latex(gradient=True)
# -
# ## Sum of Phase Ramps
# +
phase_ramp_dtype = 'complex32'
x_long = yp.astype(x, phase_ramp_dtype)
# Inner non-linear operator, linear operator in middle, and norm on outside
shift_true = yp.changeBackend(np.asarray((-5,3)).astype(yp.getNativeDatatype(phase_ramp_dtype, 'numpy')), global_backend)
F = ops.FourierTransform(image_size, dtype=phase_ramp_dtype, backend=global_backend)
D_object = ops.Diagonalize(yp.reshape(F * vec(x_long), image_size), label='D_{object}', dtype=phase_ramp_dtype, backend=global_backend)
R = ops.PhaseRamp(image_size, dtype=phase_ramp_dtype, backend=global_backend)
H = ops.Hstack((R, R, R))
A_shift = F.H * D_object * H
xx = yp.changeBackend(np.hstack((np.asarray(shift_true), np.asarray(shift_true), np.asarray(shift_true))), global_backend)
y_sum = A_shift * xx
L2 = ops.L2Norm(image_size, dtype=phase_ramp_dtype, backend=global_backend)
objective = L2 * (A_shift - y_sum)
# Check gradient
objective.gradient_check()
# Render forward model
objective.latex()
# Render gradient
objective.latex(gradient=True)
# -
# ### Scaling a Norm
# +
L2 = ops.L2Norm(image_size, dtype=global_dtype)
F = ops.FourierTransform(image_size, dtype=global_dtype, axes=(0, 1))
D = ops.Diagonalize(h, dtype=global_dtype)
O_2 = L2 * F
O = 0.1 * O_2
# Check gradient operator (adjoint form)
O.gradient_check()
# Render forward model
O.latex()
# Render gradient
O.latex(gradient=True)
# -
# ### Sum of Norms (E.g. regularization)
# + tags=[]
L2 = ops.L2Norm(image_size)
F = ops.FourierTransform(image_size)
D = ops.Diagonalize(h)
O_1 = L2 * ((F.H * D * F) - y)
O_2 = 1e-3 * L2 * F
O = O_2 + O_1
# Check gradient operator (adjoint form)
O.gradient_check()
# Render forward model
O.latex()
# Render gradient
O.latex(gradient=True)
# -
# # Stacking Operators
#
# Stacking operators are tricky - they need to take or return a VectorStack class, which is simply a container for images of different sizes to be operated on independently.
#
# Hstack - operates on a vectorstack (or vector) class, returns a vector
#
# Vstack - operates on a vector, returns a vectorstack class
#
# Diagstack - operates on a vectorstack, returns a vectorstack
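# The stacking algebra above maps directly onto block-matrix structure. A minimal NumPy sketch (matrix sizes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A1, A2 = rng.random((4, 4)), rng.random((4, 4))
x1, x2 = rng.random(4), rng.random(4)

# Hstack: [A1 A2] applied to a stacked input returns a single vector.
h = np.hstack([A1, A2]) @ np.concatenate([x1, x2])
assert np.allclose(h, A1 @ x1 + A2 @ x2)

# Vstack: [A1; A2] applied to one vector returns a stacked output.
v = np.vstack([A1, A2]) @ x1
assert np.allclose(v, np.concatenate([A1 @ x1, A2 @ x1]))

# Diagstack: a block-diagonal matrix; each block acts on its own input.
Z = np.zeros((4, 4))
d = np.block([[A1, Z], [Z, A2]]) @ np.concatenate([x1, x2])
assert np.allclose(d, np.concatenate([A1 @ x1, A2 @ x2]))
```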
# +
# Create list of operators
op_list_nonlinear = [
    ops.FourierTransform(image_size),
    ops.Identity(image_size),
    ops.Exponential(image_size)
]
op_list_linear = [
    ops.FourierTransform(image_size),
    ops.Identity(image_size),
    ops.Diagonalize(h)
]
# -
# ## Horizontal Stacking
# ### Linear Stacking
# +
# Horizontally stacked operators
H_l = ops.Hstack(op_list_linear)
# Vertically stack x for forward operator
x_np = yp.changeBackend(x, 'numpy')
x3 = yp.changeBackend(np.vstack((x_np,x_np, x_np)), global_backend)
# Check forward operation
y2 = yp.zeros(op_list_linear[0].N, op_list_linear[0].dtype, op_list_linear[0].backend)
for op in op_list_linear:
    y2 = y2 + op * x
# Check equality
yp.assert_equality(H_l(x3), y2)
# Check gradient
H_l.gradient_check()
# Render forward model
H_l.latex()
# Render gradient
H_l.latex(gradient=True)
# -
# ### Non-linear operators
# +
# Horizontally stacked operators
H_nl = ops.Hstack(op_list_nonlinear)
# Vertically stack x for forward operator
x3 = yp.changeBackend(np.vstack((x, x, x)), global_backend)
# Check forward operation
y2 = yp.zeros(op_list_nonlinear[0].shape[0], op_list_nonlinear[0].dtype, op_list_nonlinear[0].backend)
for op in op_list_nonlinear:
    y2 += op * x
assert yp.sum(yp.abs(H_nl(x3) - y2)) < eps, "%.4e" % yp.sum(yp.abs(H_nl(x3) - y2))
# Check gradient
H_nl.gradient_check()
# Render forward model
H_nl.latex()
# Render gradient
H_nl.latex(gradient=True)
# -
# ## Vertical Stacking
# ### Linear Operators
# +
# Create vertically stacked operator
V_l = ops.Vstack(op_list_linear)
# Check forward operator
y3 = np.empty((0,image_size[1]), dtype=yp.getNativeDatatype(global_dtype, 'numpy'))
for index, op in enumerate(op_list_linear):
    y3 = np.append(y3, (op * x), axis=0)
y3 = yp.changeBackend(y3, global_backend)
assert yp.sum(yp.abs(V_l * x - y3)) < eps, "%.4e" % yp.sum(yp.abs(V_l * x - y3))
# Check gradient
V_l.gradient_check()
# Render forward model
V_l.latex()
# Render gradient
V_l.latex(gradient=True)
# -
# ### Nonlinear Operators
# +
# Create list of operators
op_list_nonlinear = [
    ops.FourierTransform(image_size),
    ops.Identity(image_size),
    ops.Exponential(image_size)
]
# Create vertically stacked operator
V_nl = ops.Vstack(op_list_nonlinear)
# Check forward operator
y3 = np.empty((0,image_size[1]), dtype=yp.getNativeDatatype(global_dtype, 'numpy'))
for index, op in enumerate(op_list_nonlinear):
    y3 = np.append(y3, (op * x), axis=0)
y3 = yp.changeBackend(y3, global_backend)
yp.assert_equality(V_nl * x, y3)
# Check gradient
V_nl.gradient_check()
# Render forward model
V_nl.latex()
# Render gradient
V_nl.latex(gradient=True)
# -
# ## Diagonal Stacking
# ### Linear Operators
# +
# Horizontally stacked operators
D_l = ops.Dstack(op_list_linear)
# Vertically stack x for forward operator
x3 = yp.changeBackend(np.vstack((x, x, x)), global_backend)
# Check forward operation
y4 = np.empty((0,image_size[1]), dtype=yp.getNativeDatatype(global_dtype, 'numpy'))
for index, op in enumerate(op_list_linear):
    y4 = np.append(y4, (op * x), axis=0)
y4 = yp.changeBackend(y4, global_backend)
# Check forward
yp.assert_equality(D_l(x3), y4)
# Check gradient
D_l.gradient_check()
# Render forward model
D_l.latex()
# Render gradient
D_l.latex(gradient=True)
# -
# ### Nonlinear operators
# +
# Horizontally stacked operators
D_nl = ops.Dstack(op_list_nonlinear)
# Vertically stack x for forward operator
x3 = yp.changeBackend(np.vstack((x, x, x)), global_backend)
# Check forward operation
y4 = np.empty((0,image_size[1]), dtype=yp.getNativeDatatype(global_dtype, 'numpy'))
for index, op in enumerate(op_list_nonlinear):
    y4 = np.append(y4, (op * x), axis=0)
y4 = yp.changeBackend(y4, global_backend)
# Check forward operation
yp.assert_equality(D_nl(x3), y4)
# Check gradient
D_nl.gradient_check()
# Render forward model
D_nl.latex()
# Render gradient
D_nl.latex(gradient=True)
# -
# ## Speed Comparison
# + tags=[]
op_count = 100
shape = (128, 128)
F = ops.FourierTransform(shape)
op_list = [F * ops.Diagonalize(yp.rand(shape))* F.H for _ in range(op_count)]
_x_list = ops.VecStack([yp.rand(shape)] * op_count)
_x = yp.rand(shape)
# Horizontally stacked operators
H_l_n = ops.Hstack(op_list, parallelize=False)
H_l_p = ops.Hstack(op_list, parallelize=True)
D_l_n = ops.Dstack(op_list, parallelize=False)
D_l_p = ops.Dstack(op_list, parallelize=True)
V_l_n = ops.Vstack(op_list, parallelize=False)
V_l_p = ops.Vstack(op_list, parallelize=True)
# %timeit H_l_n * _x_list
# %timeit H_l_p * _x_list
# %timeit D_l_n * _x_list
# %timeit D_l_p * _x_list
# %timeit V_l_n * _x
# %timeit V_l_p * _x
# -
# ## Sum of Operators
# e.g., gradient(exp(Ax))
# +
# Sum of operators
S = ops.OperatorSum(op_list_nonlinear)
# Check forward operator
assert yp.sum(yp.abs(S * x - sum([op_list_nonlinear[i] * x for i in range(len(op_list_nonlinear))]))) < eps, '%f' % yp.sum(yp.abs(S * x - sum([op_list_nonlinear[i] * x for i in range(len(op_list_nonlinear))])))
# Check gradient
S.gradient_check()
# Render forward model
S.latex()
# Render gradient
S.latex(gradient=True)
# -
# ### Sum of Exponentials
# +
EXP = ops.Exponential(image_size)
exp_list = [EXP] * 5
# Sum of operators
S = ops.OperatorSum(exp_list)
# Check forward operator
assert yp.sum(yp.abs(S * x - sum([exp_list[i] * x for i in range(len(exp_list))]))) < eps, '%f' % yp.sum(yp.abs(S * x - sum([exp_list[i] * x for i in range(len(exp_list))])))
# Check gradient
S.gradient_check()
# print latex
S.latex()
# -
# ### Sum of Phase Ramps
# +
phase_ramp_dtype = 'complex32'
x_long = yp.astype(x, phase_ramp_dtype)
shift = yp.changeBackend(np.asarray((-5,3)).astype(yp.getNativeDatatype(phase_ramp_dtype, 'numpy')), global_backend)
R = ops.PhaseRamp(image_size, dtype=phase_ramp_dtype, backend=global_backend)
r_list = [R] * 3
# Sum of operators
S = ops.OperatorSum(r_list)
# Check forward operator
assert yp.sum(yp.abs(S * shift - sum([r_list[i] * shift for i in range(len(r_list))]))) < eps, '%f' % yp.sum(yp.abs(S * shift - sum([r_list[i] * shift for i in range(len(r_list))])))
# Check gradient
S.gradient_check(eps=1)
# Render forward model
S.latex()
# Render gradient
S.latex(gradient=True)
# -
# # Setting and Getting Arguments of Composite Operators
# + tags=[]
# Generate two different diagonal operators
d0, d1 = np.random.rand(*image_size), np.random.rand(*image_size)
D0 = ops.Diagonalize(d0)
D1 = ops.Diagonalize(d1)
# Combine into a single operator
A = D0 * D1
print(d0.__array_interface__['data'])
print(D0.arguments[D0].__array_interface__['data'])
print(A.arguments[D0].__array_interface__['data'])
# Ensure we can get arguments
yp.assert_equality(A.arguments[D0], d0)
yp.assert_equality(A.arguments[D1], d1)
# -
A.suboperators
# + tags=[]
class ArgumentsDict(dict):
    def __init__(self, operator, *args, **kwargs):
        self.operator = operator
        super(ArgumentsDict, self).__init__(*args, **kwargs)

    def __setitem__(self, operator, new_argument):
        if operator in self.operator.arguments:
            operator._set_argument_function(new_argument)

    def __getitem__(self, key):
        return self.operator.arguments[key]

    def __repr__(self):
        return self.operator.arguments.__repr__()


q = ArgumentsDict(A)
for key, value in q.items():
    print(key)
# -
# Source notebook: notebooks/operators/example-operators.ipynb
# ---
# jupyter:
# jupytext:
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="iQs3Ka9NxPB7" outputId="7027e136-ef36-4df2-8462-745bf7ac1c16"
#hide
# !pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
# + id="18IWkpPHxPCA"
#hide
from fastbook import *
# + [markdown] id="aX0sbJ1txPCB"
# # Image Classification
# + [markdown] id="pk0l7OrNxPCB"
# ## From Dogs and Cats to Pet Breeds
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="NxbQ7WAlxPCB" outputId="95f6704d-56d8-452d-9c7f-2e704e7ae5cd"
from fastai.vision.all import *
path = untar_data(URLs.PETS)
# -
# ### memo
# untar_data() downloads data files to Colab session storage.
# The data is gone after the instance is closed on the Colab cloud.
# The Unix "tar" command compresses and uncompresses data files.
#
# https://www.robots.ox.ac.uk/~vgg/data/pets/
# Oxford Visual Geometry Group and IIIT Hyderabad.
# Curated data on dog and cat breeds; some are confusing even for humans to identify.
# Alternatively: manually download to a repo on the C drive, move to Ubuntu, untar (uncompress), and explore.
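# What untar_data() does after downloading can be sketched with the standard library's tarfile module. All paths below are illustrative scratch paths, not the fastai dataset locations.

```python
import tarfile
from pathlib import Path
from tempfile import TemporaryDirectory

# Build a tiny archive in a scratch directory so the example is
# self-contained, then extract it the way untar_data() would.
tmp = TemporaryDirectory()
root = Path(tmp.name)
(root / "sample.txt").write_text("hello")

with tarfile.open(root / "sample.tar.gz", "w:gz") as tf:
    tf.add(root / "sample.txt", arcname="sample.txt")

dest = root / "extracted"
with tarfile.open(root / "sample.tar.gz", "r:gz") as tf:
    tf.extractall(dest)

assert (dest / "sample.txt").read_text() == "hello"
```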
# + id="ZtJYYb0tyBtS"
# path?
# + [markdown] id="DD9wd2-5ySvJ"
# Type: PosixPath
# String form: /root/.fastai/data/oxford-iiit-pet
# File: /usr/lib/python3.6/pathlib.py
# Docstring:
# Path subclass for non-Windows systems.
# + [markdown] id="UGP3ql5W06S-"
# #### memo:
# pathlib.py symbolically manipulates filesystem paths.
# path.parts - separates the parts out for Windows or Unix OS.
# ```
# >>> p = PurePath('/usr/bin/python3')
# >>> p.parts
# ('/', 'usr', 'bin', 'python3')
#
# >>> p = PureWindowsPath('c:/Program Files/PSF')
# >>> p.parts
# ('c:\\', 'Program Files', 'PSF')
#
# Usage:
# from pathlib import Path
# >>> p = Path('.') # current directory
# >>> [x for x in p.iterdir() if x.is_dir()] # lists the immediate subdirectories.
# ```
# + id="y753KA0DxPCC"
#hide
Path.BASE_PATH = path
# + colab={"base_uri": "https://localhost:8080/"} id="XoP0FHY0xPCC" outputId="7395f928-6e3b-40fa-f31a-4e9a3573e4aa"
path.ls()
# + colab={"base_uri": "https://localhost:8080/"} id="tUffP4prxPCC" outputId="85df5946-334d-4592-9ab7-37b02388b020"
(path/"images").ls()
# + id="m2HIhYsgzJSF"
# Path.BASE_PATH?
# + id="cJAF10GJxPCD"
fname = (path/"images").ls()[0]
# + id="4xe-kKbIxPCD"
re.findall(r'(.+)_\d+.jpg$', fname.name)
# + id="BSUYf3pzxPCD"
pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(seed=42),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
                 item_tfms=Resize(460),
                 batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls = pets.dataloaders(path/"images")
# + [markdown] id="citCVZ0exPCD"
# ## Presizing
# + id="zQho4xjUxPCE"
dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
                    get_y=parent_label,
                    item_tfms=Resize(460))
dls1 = dblock1.dataloaders([(Path.cwd()/'images'/'grizzly.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones
x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2)
x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz=224)
x1 = x1.rotate(draw=30, p=1.)
x1 = x1.zoom(draw=1.2, p=1.)
x1 = x1.warp(draw_x=-0.2, draw_y=0.2, p=1.)
tfms = setup_aug_tfms([Rotate(draw=30, p=1, size=224), Zoom(draw=1.2, p=1., size=224),
                       Warp(draw_x=-0.2, draw_y=0.2, p=1., size=224)])
x = Pipeline(tfms)(x)
#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);
# + [markdown] id="DTynYgzDxPCE"
# ### Checking and Debugging a DataBlock
# + id="dAUR4qZAxPCE"
dls.show_batch(nrows=1, ncols=3)
# + id="XwqrYMjOxPCE"
pets1 = DataBlock(blocks=(ImageBlock, CategoryBlock),
                  get_items=get_image_files,
                  splitter=RandomSplitter(seed=42),
                  get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'))
pets1.summary(path/"images")
# + id="zJeSlex4xPCF"
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)
# + [markdown] id="8hHUiC-VxPCF"
# ## Cross-Entropy Loss
# + [markdown] id="asoe93UexPCF"
# ### Viewing Activations and Labels
# + id="TiCfI_t8xPCF"
x,y = dls.one_batch()
# + id="uooamvBUxPCF"
y
# + id="XxOqYdOhxPCF"
preds,_ = learn.get_preds(dl=[(x,y)])
preds[0]
# + id="YVJJI4O3xPCG"
len(preds[0]),preds[0].sum()
# + [markdown] id="g9gZtdWMxPCG"
# ### Softmax
# + id="VZwlPw70xPCG"
plot_function(torch.sigmoid, min=-4,max=4)
# + id="mVkWk3wrxPCG"
#hide
torch.random.manual_seed(42);
# + id="mpglr6IpxPCG"
acts = torch.randn((6,2))*2
acts
# + id="JfG_riJCxPCH"
acts.sigmoid()
# + id="uY95bXtsxPCH"
(acts[:,0]-acts[:,1]).sigmoid()
# + id="3GwWuIEExPCH"
sm_acts = torch.softmax(acts, dim=1)
sm_acts
# + [markdown] id="NWvxUpfCxPCH"
# ### Log Likelihood
# + id="V_P4hjcbxPCH"
targ = tensor([0,1,0,1,1,0])
# + id="snnrElmHxPCH"
sm_acts
# + id="cctd2LYyxPCI"
idx = range(6)
sm_acts[idx, targ]
# + id="zDAbBuQSxPCI"
from IPython.display import HTML
df = pd.DataFrame(sm_acts, columns=["3","7"])
df['targ'] = targ
df['idx'] = idx
df['loss'] = sm_acts[range(6), targ]
t = df.style.hide_index()
#To have html code compatible with our script
html = t._repr_html_().split('</style>')[1]
html = re.sub(r'<table id="([^"]+)"\s*>', r'<table >', html)
display(HTML(html))
# + id="HElF1MDExPCI"
-sm_acts[idx, targ]
# + id="Pg1VScn8xPCI"
F.nll_loss(sm_acts, targ, reduction='none')
# + [markdown] id="BvVP3SQPxPCI"
# ### Taking the Log
# + id="BtLmA7UwxPCI"
plot_function(torch.log, min=0,max=4)
# + id="pBDinwyWxPCJ"
loss_func = nn.CrossEntropyLoss()
# + id="QgOrWOfrxPCJ"
loss_func(acts, targ)
# + id="ZSXc_52oxPCJ"
F.cross_entropy(acts, targ)
# + id="lSMcZ89_xPCJ"
nn.CrossEntropyLoss(reduction='none')(acts, targ)
# + [markdown] id="6Nxk47mxxPCJ"
# ## Model Interpretation
# + id="FF3Av1izxPCJ"
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(12,12), dpi=60)
# + id="Lgt4ZWeyxPCJ"
interp.most_confused(min_val=5)
# + [markdown] id="8WvkLAqTxPCK"
# ## Improving Our Model
# + [markdown] id="pKj7gdfVxPCK"
# ### The Learning Rate Finder
# + id="KVk2ynKXxPCK"
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1, base_lr=0.1)
# + id="l-zNyHSZxPCK"
learn = cnn_learner(dls, resnet34, metrics=error_rate)
lr_min,lr_steep = learn.lr_find()
# + id="UDjuW8A7xPCK"
print(f"Minimum/10: {lr_min:.2e}, steepest point: {lr_steep:.2e}")
# + id="nEVEhgOJxPCK"
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2, base_lr=3e-3)
# + [markdown] id="ncnAIb6hxPCL"
# ### Unfreezing and Transfer Learning
# + id="TacNg7HVxPCL"
# learn.fine_tune??
# + id="sBp8KHMXxPCL"
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(3, 3e-3)
# + id="ZRy7LLrzxPCL"
learn.unfreeze()
# + id="B33NL-NOxPCM"
learn.lr_find()
# + id="GK0UPpJTxPCM"
learn.fit_one_cycle(6, lr_max=1e-5)
# + [markdown] id="lbOYf9AhxPCM"
# ### Discriminative Learning Rates
# + id="5uCrJsuexPCN"
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(3, 3e-3)
learn.unfreeze()
learn.fit_one_cycle(12, lr_max=slice(1e-6,1e-4))
# + id="GZrYALRlxPCN"
learn.recorder.plot_loss()
# + [markdown] id="btXEigsWxPCN"
# ### Selecting the Number of Epochs
# + [markdown] id="sJMkhOiWxPCN"
# ### Deeper Architectures
# + id="mkMCv54UxPCN"
from fastai.callback.fp16 import *
learn = cnn_learner(dls, resnet50, metrics=error_rate).to_fp16()
learn.fine_tune(6, freeze_epochs=3)
# + [markdown] id="E6nzmMr3xPCN"
# ## Conclusion
# + [markdown] id="9-pueVqhxPCO"
# ## Questionnaire
# + [markdown] id="QUqO2a9GxPCO"
# 1. Why do we first resize to a large size on the CPU, and then to a smaller size on the GPU?
# * ans: to get uniform (square) sizes so images can be batched; the larger CPU resize leaves margin so the GPU augmentations can run on the whole batch without creating empty zones or degrading the image with repeated interpolations.
# 1. If you are not familiar with regular expressions, find a regular expression tutorial, and some problem sets, and complete them. Have a look on the book's website for suggestions.
# * ans: do more exercises. re.py
# 1. What are the two ways in which data is most commonly provided, for most deep learning datasets?
# * ans: individual files where each file is a data item, such as images, with file names that perhaps indicate organization.
# Tabular format (csv) where each row is a data item, with possible names associated with image or document files.
# Can also be binary format files, for large dumps, such as medical imaging data.
# 1. Look up the documentation for `L` and try using a few of the new methods that it adds.
# * ans: `L` is fastai's enhanced list class, a drop-in replacement for Python's `list` with extra methods.
# 1. Look up the documentation for the Python `pathlib` module and try using a few methods of the `Path` class.
# * ans: symbolic directory path manipulation. https://docs.python.org/3/library/pathlib.html
#
# 1. Give two examples of ways that image transformations can degrade the quality of the data.
# * ans: interpolated image can become fuzzy, can have unrelated artifacts (parts of image missing or taken over by other objects)
# 1. What method does fastai provide to view the data in a `DataLoaders`?
# * ans: show_batch().
# 1. What method does fastai provide to help you debug a `DataBlock`?
# * ans: the summary() method.
# 1. Should you hold off on training a model until you have thoroughly cleaned your data?
# * ans: No, train as soon as possible. Use the trained model to look for errors in the data (e.g. with unique()).
# 1. What are the two pieces that are combined into cross-entropy loss in PyTorch?
# * ans: softmax and negative log likelihood.
# 1. What are the two properties of activations that softmax ensures? Why is this important?
# * ans: all activations lie between 0 and 1, and they sum to 1, so they can be interpreted as probabilities.
# 1. When might you want your activations to not have these two properties?
# 1. Calculate the `exp` and `softmax` columns of <<bear_softmax>> yourself (i.e., in a spreadsheet, with a calculator, or in a notebook).
# * Later -- do.
# 1. Why can't we use `torch.where` to create a loss function for datasets where our label can have more than two categories?
# * ans: We can, but use one-hot encoding to specify multiclass (dummy) variables.
# 1. What is the value of log(-2)? Why?
# * ans: ln(2) + i*pi in the complex plane, but undefined over the real numbers.
# 1. What are two good rules of thumb for picking a learning rate from the learning rate finder?
# * ans: one tenth of the rate at the minimum loss, or the point of steepest decline in the loss.
# 1. What two steps does the `fine_tune` method do?
# * ans: trains the randomly added final layer for one epoch,
# then unfreezes all layers and trains all of them for the N epochs requested.
# 1. In Jupyter Notebook, how do you get the source code for a method or function?
# * ans: append ?? to the name, e.g. learn.fine_tune??.
# 1. What are discriminative learning rates?
# * ans: Use varying rates across layers, depending on how well the user's data matches the pretrained model's data. Usually earlier layers train on primitive shapes and can be readily transferred to the user's data, but later layers learn complex shapes that do not transfer as well to the user's untrained data.
# 1. How is a Python `slice` object interpreted when passed as a learning rate to fastai?
# * ans: start number, end number; interpolate between them with geometric growth. The learning rate starts low at the initial layers, which are already well trained, and grows toward the final layers, which have not been trained on our data.
# 1. Why is early stopping a poor choice when using 1cycle training?
# * ans: the final, randomly initialized layer has not had enough epochs to become accurate.
# 1. What is the difference between `resnet50` and `resnet101`?
# * ans: more layers. Pre-trained models on the ImageNet database are available for standard numbers of layers. resnet18 and resnet34 are smaller and good to start with; larger ones are good for trying to improve accuracy.
# 1. What does `to_fp16` do?
# * ans: reduces numbers to 16-bit floating-point precision, which rounds values and reduces memory usage.
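# The softmax and cross-entropy questions above can be checked by hand with plain NumPy; the activations here are random stand-ins, mirroring the notebook's 6x2 example.

```python
import numpy as np

rng = np.random.default_rng(42)
acts = rng.standard_normal((6, 2)) * 2
targ = np.array([0, 1, 0, 1, 1, 0])

# Softmax by hand: exponentiate (shifted for numerical stability), then
# normalize each row so that it sums to 1.
shifted = acts - acts.max(axis=1, keepdims=True)
sm = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
assert np.allclose(sm.sum(axis=1), 1.0)   # rows sum to one
assert np.all((sm > 0) & (sm < 1))        # every activation lies in (0, 1)

# Cross-entropy: negative log of the probability assigned to the target class.
loss = -np.log(sm[np.arange(6), targ])
assert np.all(loss > 0)
```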
# + [markdown] id="06AnBR1m2H72"
# #### Regular Expression
# re.search()
# re.sub()  # substitutes
# re.findall()
# re.match()
# `*` zero or more matches
# `+` one or more matches
# `?` one or zero matches
# https://learnbyexample.github.io/python-regex-cheatsheet/
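# A quick sketch of these functions applied to the labeling pattern used earlier in this notebook; the filename is a made-up example in the dataset's style.

```python
import re

fname = "great_pyrenees_173.jpg"  # illustrative filename in the dataset's style

# findall returns every capture-group match; here, the breed prefix.
assert re.findall(r'(.+)_\d+.jpg$', fname) == ['great_pyrenees']

# search locates the first match anywhere; match anchors at the start.
assert re.search(r'\d+', fname).group() == '173'
assert re.match(r'great', fname) is not None

# sub substitutes every match of the pattern.
assert re.sub(r'\d+', 'N', fname) == 'great_pyrenees_N.jpg'
```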
# + [markdown] id="xk1zUJwbxPCO"
# ### Further Research
# + [markdown] id="N0ESAZnAxPCP"
# 1. Find the paper by <NAME> that introduced the learning rate finder, and read it.
# 1. See if you can improve the accuracy of the classifier in this chapter. What's the best accuracy you can achieve? Look on the forums and the book's website to see what other students have achieved with this dataset, and how they did it.
# + id="84e2YChxxPCP"
# Source notebook: fastbook/nbs-mycopy/chp05-pets/05_pet_breeds_clean-ans-dn.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework assignment #3
#
# These problem sets focus on using the Beautiful Soup library to scrape web pages.
#
# ## Problem Set #1: Basic scraping
#
# I've made a web page for you to scrape. It's available [here](http://static.decontextualize.com/widgets2016.html). The page concerns the catalog of a famous [widget](http://en.wikipedia.org/wiki/Widget) company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called `html_str` that contains the HTML source code of the page, and a variable `document` that stores a Beautiful Soup object.
from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
# Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of `<h3>` tags contained in `widgets2016.html`.
h3_tags = document.find_all('h3')
len(h3_tags)
# Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
telephone = document.find('a', {'class': 'tel'})
for item in telephone:
    print(item.string)
# In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, `widget_names` should evaluate to a list that looks like this (though not necessarily in this order):
#
# ```
# Skinner Widget
# Widget For Furtiveness
# Widget For Strawman
# Jittery Widget
# Silver Widget
# Divided Widget
# Manicurist Widget
# Infinite Widget
# Yellow-Tipped Widget
# Unshakable Widget
# Self-Knowledge Widget
# Widget For Cinema
# ```
widget_names = document.find_all('td',{'class':'wname'})
for item in widget_names:
    print(item.string)
# ## Problem set #2: Widget dictionaries
#
# For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called `widgets`. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be `partno`, `wname`, `price`, and `quantity`, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:
#
# ```
# [{'partno': 'C1-9476',
# 'price': '$2.70',
# 'quantity': u'512',
# 'wname': 'Skinner Widget'},
# {'partno': 'JDJ-32/V',
# 'price': '$9.36',
# 'quantity': '967',
# 'wname': u'Widget For Furtiveness'},
# ...several items omitted...
# {'partno': '5B-941/F',
# 'price': '$13.26',
# 'quantity': '919',
# 'wname': 'Widget For Cinema'}]
# ```
#
# And this expression:
#
# widgets[5]['partno']
#
# ... should evaluate to:
#
# LH-74/O
#
# +
widgets = []
# your code here
winfo = document.find_all('tr')
for item in winfo:
    partno = item.find('td', {'class': 'partno'})
    price = item.find('td', {'class': 'price'})
    quantity = item.find('td', {'class': 'quantity'})
    wname = item.find('td', {'class': 'wname'})
    if partno is None:
        continue  # skip rows without data cells (e.g. a header row)
    widget_map = {}
    widget_map['partno'] = partno.string
    widget_map['price'] = price.string
    widget_map['quantity'] = quantity.string
    widget_map['wname'] = wname.string
    widgets.append(widget_map)
# end your code
widgets
# -
# In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for `price` and `quantity` in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:
#
# [{'partno': 'C1-9476',
# 'price': 2.7,
# 'quantity': 512,
# 'widgetname': 'Skinner Widget'},
# {'partno': 'JDJ-32/V',
# 'price': 9.36,
# 'quantity': 967,
# 'widgetname': 'Widget For Furtiveness'},
# ... some items omitted ...
# {'partno': '5B-941/F',
# 'price': 13.26,
# 'quantity': 919,
# 'widgetname': 'Widget For Cinema'}]
#
# (Hint: Use the `float()` and `int()` functions. You may need to use string slices to convert the `price` field to a floating-point number.)
# +
widgets = []
# your code here
winfo = document.find_all('tr')
for item in winfo:
partno = item.find('td',{'class': 'partno'})
price = item.find('td',{'class': 'price'})
quantity = item.find('td',{'class': 'quantity'})
wname = item.find('td',{'class': 'wname'})
widget_map={}
widget_map['partno']=partno.string
widget_map['price']=float(price.string[1:])
widget_map['quantity']=int(quantity.string)
widget_map['wname']=wname.string
widgets.append(widget_map)
# end your code
widgets
# -
# Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the `widgets` list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
#
# Expected output: `7928`
total_quantity = 0
for widget in widgets:
    total_quantity = total_quantity + widget['quantity']
print(total_quantity)
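The same total can also be computed with `sum()` and a generator expression; sketched here on stand-in data (the real `widgets` list comes from the parsed page):

```python
# Stand-in data with the same shape as the parsed widgets list
sample_widgets = [{'quantity': 512}, {'quantity': 967}, {'quantity': 919}]

sample_total = sum(w['quantity'] for w in sample_widgets)
print(sample_total)  # 2398
```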
# In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
#
# Expected output:
#
# ```
# Widget For Furtiveness
# Jittery Widget
# Silver Widget
# Infinite Widget
# Widget For Cinema
# ```
for item in widgets:
if item['price'] > 9.30:
print (item['wname'])
# ## Problem set #3: Sibling rivalries
#
# In the following problem set, you will yet again be working with the data in `widgets2016.html`. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's `.find_next_sibling()` method. Here's some information about that method, cribbed from the notes:
#
# Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using `.find()` and `.find_all()`, and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet (which I've assigned to a string called `example_html`):
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
# If our task was to create a dictionary that maps the name of the cheese to the description that follows in the `<p>` tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a `.find_next_sibling()` method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
# +
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
# -
# With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the `.find_next_sibling()` method, to print the part numbers of the widgets that are in the table *just beneath* the header "Hallowed Widgets."
#
# Expected output:
#
# ```
# MZ-556/B
# QV-730
# T1-9731
# 5B-941/F
# ```
# +
# do it again
# do it again
for h3_tag in document.find_all('h3'):
    if h3_tag.string == "Hallowed Widgets":
        # the relevant table is the next sibling <table> after this header
        hallowed_table = h3_tag.find_next_sibling('table')
        for partno_tag in hallowed_table.find_all('td', {'class': 'partno'}):
            print(partno_tag.string)
# -
# Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
#
# In the cell below, I've created a variable `category_counts` and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the `<h3>` tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary `category_counts` should look like this:
#
# ```
# {'Forensic Widgets': 3,
# 'Hallowed widgets': 4,
# 'Mood widgets': 2,
# 'Wondrous widgets': 3}
# ```
# +
category_counts = {}
# your code here
# assumes each category's widgets live in the <table> that directly follows its <h3>
for h3_tag in document.find_all('h3'):
    category = h3_tag.string
    category_table = h3_tag.find_next_sibling('table')
    category_counts[category] = len(category_table.find_all('td', {'class': 'partno'}))
# end your code
category_counts
# -
# Congratulations! You're done.
| .ipynb_checkpoints/Homework_3-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + slideshow={"slide_type": "skip"}
from traitlets.config.manager import BaseJSONConfigManager
from jupyter_core.paths import jupyter_config_dir
import os

# Build the nbconfig path from the local Jupyter config directory
# instead of a hard-coded personal path
path = os.path.join(jupyter_config_dir(), "nbconfig")
cm = BaseJSONConfigManager(config_dir=path)
cm.update("livereveal", {
"theme": "sky",
"transition": "zoom",
"start_slideshow_at": "selected",
"scroll": True
})
# + [markdown] slideshow={"slide_type": "slide"}
# # Lecture 3. Linear systems
# + [markdown] slideshow={"slide_type": "slide"}
# ## Outline
#
# - Linear systems
# - Inverse matrix
# - Condition number
# - Gaussian elimination
# + [markdown] slideshow={"slide_type": "slide"}
# ## Linear systems
#
# $$ Ax = f, $$
#
# where the matrix $A$ and the vector $f$ are given.
#
# Solving a system of linear equations is one of the core problems of numerical linear algebra.
#
# It arises when solving the following problems:
#
# - linear regression
# - partial differential equations and integral equations
# - nonlinear regression
# - optimization (Newton-Raphson and Gauss-Newton methods, KKT conditions)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Over- and underdetermined linear systems
#
# If the system $Au = f$ has
# - more equations than unknowns, it is called **overdetermined** (in general it has no solution)
#
# - fewer equations than unknowns, it is called **underdetermined** (the solution is non-unique; additional assumptions are needed to guarantee uniqueness)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Existence of solutions
#
# A solution of the system of linear equations with a square matrix $A$
#
# $$A u = f$$
#
# exists if and only if
# * $\det A \ne 0$
#
# or
#
# * the matrix $A$ has full rank.
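Both conditions are easy to check numerically. A small sketch (the rank-deficient matrix here is our own illustrative example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row is a multiple of the first

# Rank-deficient, so det(A) is (numerically) zero and A u = f
# has no solution at all for a generic right-hand side f
print(np.linalg.matrix_rank(A))  # 1
print(np.linalg.det(A))          # numerically zero
```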
# + [markdown] slideshow={"slide_type": "slide"}
# ## Scales of linear system sizes
#
# The dimension of a linear system varies across applications:
#
# - Small: $n \leq 10^4$ (the whole matrix fits in memory, **dense matrices**)
# - Medium: $n = 10^4 - 10^6$ (usually **sparse** or **structured** matrices)
# - Large: $n = 10^8 - 10^9$ (usually **sparse** matrices and parallel computing)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Linear systems can be large
#
# Take a continuous problem, discretize it on a mesh with $N$ elements, and you get a linear system with an $N\times N$ matrix.
# An example mesh for an Airbus A319 aircraft
# (see the [GMSH website](http://geuz.org/gmsh/) for details).
# <img src="./a319_4.png" width=50%>
#
# The main difficulty is that such systems are very large: millions or billions of unknowns!
# + [markdown] slideshow={"slide_type": "slide"}
# ## Linear systems can be structured
#
# - Storing all $N^2$ matrix entries is already impossible for $N = 100000$.
#
# **Q:** how do we work with such matrices?
#
# **A:** fortunately, such matrices are most often **structured** and require storing only $\mathcal{O}(N)$ entries.
#
# - The most common type of structured matrix is the sparse matrix: such matrices have only $\mathcal{O}(N)$ nonzero entries!
#
# - Example (one of the most famous matrices, for $n = 5$):
#
# $$
# \begin{pmatrix}
# 2 & -1 & 0 & 0 & 0 \\
# -1 & 2 & -1 & 0 & 0 \\
# 0 & -1 & 2 & -1 & 0 \\
# 0 & 0 &-1& 2 & -1 \\
# 0 & 0 & 0 & -1 & 2 \\
# \end{pmatrix}
# $$
#
# - At the very least, such matrices can be stored
# - They can also be multiplied by a vector quickly
# - But how do we solve linear systems with such matrices?
# + [markdown] slideshow={"slide_type": "slide"}
# ## Key questions about linear systems
#
# 1. What accuracy can we expect from the solution (due to rounding errors)?
# 2. How do we compute the solution? (LU factorization, Gaussian elimination)
# 3. What is the complexity of solving a linear system?
# + [markdown] slideshow={"slide_type": "slide"}
# ## How to solve linear systems?
#
# **Important**: forget about determinants and Cramer's rule (although they are useful for $2 \times 2$ matrices)!
# + [markdown] slideshow={"slide_type": "slide"}
# ## How to solve linear systems?
#
# The basic tool is elimination of variables.
#
# \begin{align*}
# &2 y + 3 x = 5 \quad&\longrightarrow \quad &y = 5/2 - 3/2 x \\
# &2 x + 3z = 5 \quad&\longrightarrow\quad &z = 5/3 - 2/3 x\\
# &z + y = 2 \quad&\longrightarrow\quad & 5/2 + 5/3 - (3/2 + 2/3) x = 2,\\
# \end{align*}
#
# and from this you can find $x$ (and then all the other unknowns).
#
# This process is called **Gaussian elimination** and is one of the most widely used algorithms.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Gaussian elimination
#
# Gaussian elimination consists of two stages:
# 1. Forward pass
# 2. Backward pass
# + [markdown] slideshow={"slide_type": "slide"}
# ## Forward pass
#
# - Eliminate $x_1$:
#
# $$
# x_1 = f_1 - (a_{12} x_2 + \ldots + a_{1n} x_n)/a_{11},
# $$
#
# and substitute it into equations $2, \ldots, n$.
#
# - Then eliminate $x_2$ and substitute it into the remaining equations.
#
# - It is important that the pivot (the element we divide by) is not equal to $0$.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Backward pass
#
# During the backward pass:
# - solve the last equation for $x_n$
# - substitute the solution into the equation for $x_{n-1}$, and so on, until all $x_i, i=1,\ldots, n$ are computed.
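The backward pass can be sketched as a short back-substitution routine (`backward_substitution` is our own helper name, not part of any library):

```python
import numpy as np

def backward_substitution(U, y):
    """Solve U x = y for an upper-triangular U, from the last row upward."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known unknowns, then divide by the pivot
        x[i] = (y[i] - U[i, i + 1:].dot(x[i + 1:])) / U[i, i]
    return x

U = np.array([[2.0, 1.0],
              [0.0, 3.0]])
y = np.array([5.0, 6.0])
print(backward_substitution(U, y))  # [1.5 2. ]
```

Each step costs $\mathcal{O}(n - i)$ operations, so the whole pass is $\mathcal{O}(n^2)$.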
# + [markdown] slideshow={"slide_type": "slide"}
# ## Gaussian elimination and LU factorization
#
# Gaussian elimination is tied to one of the most important matrix factorizations: the **LU factorization**.
#
# **Definition**: the LU factorization of a matrix $A$ is the representation
#
# $$A = LU,$$
#
# where $L$ is a **lower-triangular** and $U$ is an **upper-triangular** matrix.
#
# This factorization is **non-unique**, so one usually additionally requires that the diagonal of $L$ consists of 1s.
# + [markdown] slideshow={"slide_type": "slide"}
# **The main goal** of computing the LU factorization is solving linear systems, since
#
# $$
# A^{-1} f = (L U)^{-1} f = U^{-1} L^{-1} f,
# $$
#
# and the problem reduces to solving two linear systems with upper- and lower-triangular matrices.
#
# The forward pass is expressed as
#
# $$
# L y = f,
# $$
#
# and similarly, for the backward pass,
#
# $$
# U x = y.
# $$
#
# Does the $LU$ factorization always exist?
# + [markdown] slideshow={"slide_type": "slide"}
# ## Complexity of Gaussian elimination / LU factorization
#
# - Each elimination step takes $\mathcal{O}(n^2)$ operations.
#
# - Hence, the complexity of the algorithm is $\mathcal{O}(n^3)$.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Existence of the LU factorization
#
# The algorithm for computing the LU factorization works
#
# as long as **we never divide by $0$** at any step of Gaussian elimination.
#
# **Q:** For which class of matrices is this guaranteed?
#
# **A:** It holds for **strictly regular matrices**.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Strictly regular matrices and the LU factorization
#
# **Definition.** A matrix $A$ is called *strictly regular* if all of its leading principal submatrices (the submatrices formed by the first $k$ rows and $k$ columns) are nonsingular.
#
# In this case the LU factorization always exists. The converse is also true (check it!).
# + [markdown] slideshow={"slide_type": "slide"}
# ## LU factorization for Hermitian positive definite matrices (Cholesky factorization)
#
# Strictly regular matrices admit an LU factorization.
#
# An important class of strictly regular matrices is the class of **Hermitian positive definite matrices**.
#
# **Definition.** A matrix $A$ is called <font color='red'> positive definite </font> if for every $x: \Vert x \Vert \ne 0$ we have
#
# $$
# (x, Ax) > 0.
# $$
# - if this holds for $x \in \mathbb{C}^n$, then the matrix $A$ is Hermitian
# - if this holds for $x \in \mathbb{R}^n$, then the matrix $A$ may be non-symmetric
# + [markdown] slideshow={"slide_type": "slide"}
# **Claim:** a Hermitian positive definite matrix $A$ is strictly regular and has a Cholesky factorization of the form
#
# $$A = RR^*,$$
#
# where $R$ is a lower-triangular matrix.
#
# The matrix $R$ is often called the "square root" of the matrix $A$.
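This factor can be computed with NumPy's `np.linalg.cholesky`, which returns exactly such a lower-triangular factor:

```python
import numpy as np

# A small symmetric (real Hermitian) positive definite matrix
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

R = np.linalg.cholesky(A)            # lower-triangular R with A = R R^*
print(np.allclose(R @ R.T, A))       # True
print(np.allclose(R, np.tril(R)))    # True: R is lower triangular
```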
# + [markdown] slideshow={"slide_type": "slide"}
# ## Computing the LU factorization
#
# In many cases it is enough to compute the LU factorization once!
#
# Once such a factorization is found (which takes $\mathcal{O}(n^3)$ operations), solving a linear system reduces to solving linear systems with the matrices $L$ and $U$, which take $\mathcal{O}(n^2)$ operations.
#
# **Exercise:** Solving a linear system with a triangular matrix is fast. How do we compute $L$ and $U$?
# + [markdown] slideshow={"slide_type": "slide"}
# ## When the LU factorization algorithm breaks down
#
# - What happens if the matrix is not strictly regular (or the pivot in Gaussian elimination is very small)?
#
# - The classical example of a $2 \times 2$ matrix with a "bad" LU factorization:
#
# $$
# A = \begin{pmatrix}
# \varepsilon & 1 \\
# 1 & 1
# \end{pmatrix}
# $$
#
# - If $\varepsilon$ is small enough, we may run into instability, whereas computing the Cholesky factorization is always stable.
#
# Let us check this numerically...
# + slideshow={"slide_type": "slide"}
import numpy as np
eps = 1e-16#1.12e-16
a = np.array([[eps, 1],[1.0, 1]])
a0 = a.copy()
n = a.shape[0]
L = np.zeros((n, n))
U = np.zeros((n, n))
for k in range(n): #Eliminate one row
L[k, k] = 1
for i in range(k+1, n):
L[i, k] = a[i, k] / a[k, k]
for j in range(k+1, n):
a[i, j] = a[i, j] - L[i, k] * a[k, j]
for j in range(k, n):
U[k, j] = a[k, j]
print('L * U - A:\n', np.dot(L, U) - a0)
L, U
# + [markdown] slideshow={"slide_type": "slide"}
# ## Pivoting
#
# We can permute the rows and columns of the matrix $A$ so that the element $A_{kk}$ we divide by is as large as possible in absolute value.
#
# The simplest effective strategy is row pivoting: at each step, pick the element of the current column that is largest in absolute value and swap its row so that it lands on the diagonal.
#
# This gives the factorization
#
# $$A = P L U,$$
#
# where $P$ is a **permutation matrix**.
#
#
# - Why is this a good strategy?
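A minimal NumPy sketch of LU with row pivoting (the `plu` helper is our own; it returns $P$, $L$, $U$ with $PA = LU$):

```python
import numpy as np

def plu(A):
    """LU factorization with partial (row) pivoting: P @ A = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))  # largest pivot in column k
        U[[k, p]] = U[[p, k]]                # swap rows of U
        P[[k, p]] = P[[p, k]]                # record the permutation
        L[[k, p], :k] = L[[p, k], :k]        # swap already-computed multipliers
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

A = np.array([[1e-16, 1.0],
              [1.0,   1.0]])
P, L, U = plu(A)
print(np.allclose(P @ A, L @ U))  # True
```

With pivoting, the tiny $\varepsilon$ entry never ends up as a pivot, which is exactly what avoids the instability seen above.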
# + [markdown] slideshow={"slide_type": "slide"}
# ## Stability of linear systems
#
# - There is a fundamental problem with solving systems of linear equations that does not depend on the algorithm used.
#
# - It shows up when the matrix entries are represented as floating-point numbers, or when there is some noise in the measurements.
#
# Let us illustrate this problem with the following example.
# + slideshow={"slide_type": "slide"}
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
n = 15
a = [[1.0/(i + j + 1) for i in range(n)] for j in range(n)]
a = np.array(a)
rhs = np.random.randn(n) #Right-hand side
x = np.linalg.solve(a, rhs) #This function computes LU-factorization and solves linear system
#And check if everything is fine
er = np.linalg.norm(a.dot(x) - rhs) / np.linalg.norm(rhs)
print(er)
plt.plot(x)
plt.grid(True)
plt.xticks(fontsize=22)
_ = plt.yticks(fontsize=22)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Linear systems and the inverse matrix
#
# - What is the problem in the previous example?
#
# - Why does the error grow so quickly?
#
# - This brings us to one of the central concepts of numerical linear algebra: the condition number of a matrix.
#
# But before that we need to define the **inverse matrix**.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Inverse matrix: definition
#
# The inverse of a matrix $A$ is a matrix $X$ such that
#
# $$
# AX = XA = I,
# $$
#
# where $I$ is the identity matrix. The inverse matrix is denoted by $A^{-1}$.
#
# Computing the inverse matrix is related to solving linear systems. Indeed, the $i$-th column of the product gives
#
# $$
# A x_i = e_i,
# $$
#
# where $e_i$ is the $i$-th column of the identity matrix.
# Thus, we can use Gaussian elimination to solve this system.
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Inverse matrix and linear systems
#
# If we have the inverse matrix $A^{-1}$, then the solution of the linear system
#
# $$Ax = f$$
#
# is given by $x = A^{-1} f$.
#
# Indeed,
#
# $$
# A(A^{-1} f) = (AA^{-1})f = I f = f.
# $$
# + [markdown] slideshow={"slide_type": "slide"}
# ## Neumann series
#
# To understand why the error in the solution was so large, we need an important auxiliary result.
#
# **Neumann series lemma**:
#
# If a matrix $F$ satisfies $\Vert F \Vert < 1$, then the matrix $(I - F)$ is invertible and
#
# $$(I - F)^{-1} = I + F + F^2 + F^3 + \ldots = \sum_{k=0}^{\infty} F^k.$$
#
# Note that this is a matrix version of the formula for the sum of a geometric series.
#
# **Q**: which norm is used here? Which norm is "the best" in this case?
# + [markdown] slideshow={"slide_type": "slide"}
# ## Proof
#
# First, let us prove that the series $\sum_{k=0}^{\infty} F^k$ converges.
#
# As in the scalar case, we have
#
# $$
# (I - F) \sum_{k=0}^N F^k = (I - F^{N+1}) \rightarrow I, \quad N \to +\infty
# $$
#
# Indeed,
#
# $$
# \| (I - F^{N+1}) - I\| = \|F^{N+1}\| \leqslant \|F\|^{N+1} \to 0, \quad N\to +\infty.
# $$
#
# We can also bound the **norm of the inverse matrix**:
#
# $$
# \left\Vert \sum_{k=0}^N F^k \right\Vert \leq \sum_{k=0}^N \Vert F \Vert^k \Vert I \Vert \leq \frac{\Vert I \Vert}{1 - \Vert F \Vert}
# $$
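The series can be checked numerically for a small matrix with $\|F\| < 1$:

```python
import numpy as np

F = np.array([[0.1, 0.2],
              [0.0, 0.3]])  # norm well below 1

# Partial sum I + F + F^2 + ... approximates (I - F)^{-1}
S = sum(np.linalg.matrix_power(F, k) for k in range(50))
print(np.allclose(S, np.linalg.inv(np.eye(2) - F)))  # True
```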
# + [markdown] slideshow={"slide_type": "slide"}
# ## Small perturbations of the inverse matrix
#
# Using this result, we can estimate how a perturbation of the matrix entries affects the entries of the inverse matrix. Suppose the perturbation $E$ is small in the sense that $\Vert A^{-1} E \Vert < 1$. Then
#
# $$(A + E)^{-1} = \sum_{k=0}^{\infty} (-A^{-1} E)^k A^{-1}$$
#
# and, moreover,
#
# $$
# \frac{\Vert (A + E)^{-1} - A^{-1} \Vert}{\Vert A^{-1} \Vert} \leq \frac{\Vert A^{-1} \Vert \Vert E \Vert \Vert I \Vert}{1 - \Vert A^{-1} E \Vert}.
# $$
#
# Note that the norm of the inverse matrix enters the bound.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Condition number of a linear system
#
# Consider the **perturbed** linear system:
#
# $$
# (A + \Delta A) \widehat{x} = f + \Delta f.
# $$
# + [markdown] slideshow={"slide_type": "slide"}
# ## Bounds!
#
# $$
# \begin{split}
# \widehat{x} - x &= (A + \Delta A)^{-1} (f + \Delta f) - A^{-1} f =\\
# &= \left((A + \Delta A)^{-1} - A^{-1}\right)f + (A + \Delta A)^{-1} \Delta f = \\
# &= \Big[\sum_{k=0}^{\infty} (-A^{-1} \Delta A)^k\Big] A^{-1} f + \Big[\sum_{k=0}^{\infty} (A^{-1} \Delta A)^k \Big] A^{-1} \Delta f,
# \end{split}
# $$
# hence
# $$
# \begin{split}
# \frac{\Vert \widehat{x} - x \Vert}{\Vert x \Vert} \leq
# &\frac{\Vert A \Vert \Vert A^{-1} \Vert}{1 - \|A^{-1}\Delta A\|} \Big(\frac{\Vert\Delta A\Vert}{\Vert A \Vert} + \frac{\Vert \Delta f \Vert}{ \Vert f \Vert}\Big) \leq \\
# \leq
# &\frac{\Vert A \Vert \Vert A^{-1} \Vert}{1 - \|A\|\|A^{-1}\|\frac{\|\Delta A\|}{\|A\|}} \Big(\frac{\Vert\Delta A\Vert}{\Vert A \Vert} + \frac{\Vert \Delta f \Vert}{ \Vert f \Vert}\Big) \equiv \\
# \equiv &\frac{\mathrm{cond}(A)}{1 - \mathrm{cond}(A)\frac{\|\Delta A\|}{\|A\|}} \Big(\frac{\Vert\Delta A\Vert}{\Vert A \Vert} + \frac{\Vert \Delta f \Vert}{ \Vert f \Vert}\Big)
# \end{split}
# $$
#
# The key quantity is the **condition number** of the matrix $A$: $\mathrm{cond}(A) = \Vert A \Vert \Vert A^{-1} \Vert$.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Condition number
#
# - The larger the condition number, the fewer digits of the answer we can recover correctly.
# - The condition number depends on the choice of norm.
# - For the spectral norm we have (check it!) $\mathrm{cond}_2 (A) = \|A\|_2 \|A^{-1}\|_2 = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}$
# - Note that if $\Delta A = 0$, then
#
# $$
# \frac{\Vert \widehat{x} - x \Vert}{\Vert x \Vert} \leq \mathrm{cond}(A) \frac{\|\Delta f\|}{\|f\|}
# $$
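The spectral-norm identity can be checked directly against the singular values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
print(np.isclose(np.linalg.cond(A, 2), sigma[0] / sigma[-1]))  # True
```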
# + [markdown] slideshow={"slide_type": "slide"}
# ## The Hilbert matrix again
#
# - Let us check how tight the bound is in two cases: a right-hand side of all ones and a random right-hand side (recall the example from the previous lecture!)
# - The results differ substantially!
# + slideshow={"slide_type": "slide"}
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
n = 1000
a = [[1.0/(i + j + 1) for i in range(n)] for j in range(n)]
a = np.array(a)
rhs = np.ones(n) #Right-hand side
f = np.linalg.solve(a, rhs)
#And check if everything is fine
er = np.linalg.norm(a.dot(f) - rhs) / np.linalg.norm(rhs)
cn = np.linalg.cond(a, 2)
print('Error:', er, 'Condition number:', cn)
# + slideshow={"slide_type": "slide"}
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
n = 100
a = [[1.0/(i + j + 1) for i in range(n)] for j in range(n)]
a = np.array(a)
rhs = np.random.randn(n) #Right-hand side
f = np.linalg.solve(a, rhs)
#And check if everything is fine
er = np.linalg.norm(a.dot(f) - rhs) / np.linalg.norm(rhs)
cn = np.linalg.cond(a)
print('Error:', er, 'Condition number:', cn)
u, s, v = np.linalg.svd(a)
rhs = np.random.randn(n)
plt.plot(u.T.dot(rhs))
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.grid(True)
# + [markdown] slideshow={"slide_type": "slide"}
# ### How can this be explained?
# + [markdown] slideshow={"slide_type": "slide"}
# ## Overdetermined linear systems
#
# - Let us consider overdetermined linear systems, where the number of equations is greater than the number of unknowns.
# - The simplest example: fitting points in the plane with a linear model.
#
# The standard approach is to minimize the residual (the **linear least squares problem**)
#
# $$\Vert A x - b \Vert_2 \rightarrow \min$$
# + [markdown] slideshow={"slide_type": "slide"}
# ## Overdetermined systems and the Gram matrix
#
# The optimality condition is $0\equiv \nabla \left(\|Ax-b\|_2^2\right)$, where $\nabla$ denotes the gradient. Therefore,
#
# $$
# 0 \equiv \nabla \left(\|Ax-b\|_2^2\right) = 2(A^*A x - A^*b) = 0.
# $$
#
# Thus,
#
# $$
# A^* A x = A^* b
# $$
#
# The matrix $A^* A$ is called the **Gram matrix**, and the system is called the **normal equation**.
#
# - The condition number of the matrix $A^* A$ is the square of the condition number of $A$ (check it!).
# - Therefore, solving the normal equation in this form is not a good idea!
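The squaring of the condition number is easy to observe numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # tall matrix with full column rank
cond_A = np.linalg.cond(A)         # 2-norm condition number: sigma_max / sigma_min
cond_gram = np.linalg.cond(A.T @ A)
print(np.isclose(cond_gram, cond_A ** 2, rtol=1e-6))  # True
```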
# + [markdown] slideshow={"slide_type": "slide"}
# ## Pseudoinverse
#
# The matrix $A^* A$ can be singular in general (why?).
# Therefore we need the notion of a pseudoinverse matrix $A^{\dagger}$ such that <br>
# the solution of the linear least squares problem can be written as
#
# $$x = A^{\dagger} b.$$
#
# The matrix
#
# $$
# A^{\dagger} = \lim_{\alpha \rightarrow 0}(\alpha I + A^* A)^{-1} A^*
# $$
#
# is called the Moore-Penrose pseudoinverse of the matrix $A$.
#
# * If the matrix $A$ has full rank, then $A^* A$ is nonsingular, and we get $A^{\dagger} = \lim_{\alpha \rightarrow 0}(\alpha I + A^* A)^{-1} A^* = (A^* A)^{-1} A^*$.
#
# * If the matrix $A$ is square and nonsingular, we get $A^{\dagger} = \lim_{\alpha \rightarrow 0}(\alpha I + A^* A)^{-1} A^* = (A^* A)^{-1} A^* = A^{-1} A^{-*} A^* = A^{-1}$, the usual inverse of $A$
#
# * If $A$ has linearly dependent columns, then $A^\dagger b$ gives the solution of minimal Euclidean norm.
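NumPy's `np.linalg.pinv` computes this pseudoinverse via the SVD; even for a rank-deficient matrix it satisfies the defining Moore-Penrose identities:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # rank 1, so A^* A is singular
A_pinv = np.linalg.pinv(A)

print(np.allclose(A @ A_pinv @ A, A))            # True
print(np.allclose(A_pinv @ A @ A_pinv, A_pinv))  # True
```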
# + [markdown] slideshow={"slide_type": "slide"}
# ## Computing the pseudoinverse via the SVD
#
# Let $A = U \Sigma V^*$ be the SVD of the matrix $A$. Then
#
# $$A^{\dagger} = V \Sigma^{\dagger} U^*,$$
#
# where $\Sigma^{\dagger}$ contains the reciprocals of the nonzero singular values of $A$. Indeed,
#
# $$A^{\dagger} = \lim_{\alpha \rightarrow 0}(\alpha I + A^* A)^{-1} A^* = \lim_{\alpha \rightarrow 0}( \alpha VV^* + V \Sigma^2 V^*)^{-1} V \Sigma U^* = \lim_{\alpha \rightarrow 0}( V(\alpha I + \Sigma^2) V^*)^{-1} V \Sigma U^* = V \lim_{\alpha \rightarrow 0}(\alpha I + \Sigma^2)^{-1} \Sigma U^* = V \Sigma^{\dagger} U^*,$$
#
# * You can check that $\Sigma^{\dagger}$ indeed consists of the reciprocals of the nonzero singular values <br>
# * Small singular values can be left un-inverted; this gives a solution that is less sensitive to noise in the right-hand side
#
# **Q:** what happened to the condition number?
# + [markdown] slideshow={"slide_type": "slide"}
# ## The standard way to solve the linear least squares problem
#
# Use the $QR$ factorization.
#
# Any matrix can be represented as
#
# $$
# A = Q R,
# $$
#
# where $Q$ is a unitary matrix and $R$ is upper triangular.
#
# Then, if $A$ has full rank,
#
# $$
# x = A^{\dagger}b = (A^*A)^{-1}A^*b = ((QR)^*(QR))^{-1}(QR)^*b = (R^*Q^*QR)^{-1}R^*Q^*b = R^{-1}Q^*b.
# $$
#
# Thus, finding the optimal $x$ is equivalent to solving the following square system
#
# $$
# Rx = Q^* b.
# $$
#
# Since $R$ is upper triangular, solving this system takes $\mathcal{O}(n^2)$ operations. This approach is also more stable than applying the pseudoinverse directly.
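A sketch of the QR route, checked against NumPy's reference least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))   # overdetermined, full column rank
b = rng.standard_normal(10)

Q, R = np.linalg.qr(A)             # reduced QR: Q is 10x3, R is 3x3
x_qr = np.linalg.solve(R, Q.T @ b) # solve R x = Q^* b

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_qr, x_ref))    # True
```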
# + [markdown] slideshow={"slide_type": "slide"}
# ## A linear least squares example
#
# Consider a two-dimensional example. Suppose we have the linear model
#
# $$y = ax + b$$
#
# and noisy data $(x_1, y_1), \dots (x_n, y_n)$. Then the linear system for the coefficients looks like
#
# $$
# \begin{split}
# a x_1 &+ b &= y_1 \\
# &\vdots \\
# a x_n &+ b &= y_n \\
# \end{split}
# $$
# or in matrix form
# $$
# \begin{pmatrix}
# x_1 & 1 \\
# \vdots & \vdots \\
# x_n & 1 \\
# \end{pmatrix}
# \begin{pmatrix}
# a \\
# b
# \end{pmatrix} =
# \begin{pmatrix}
# y_1 \\
# \vdots \\
# y_n \\
# \end{pmatrix},
# $$
# which is an overdetermined system
# + slideshow={"slide_type": "slide"}
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
a_exact = 1.
b_exact = 2.
n = 10
xi = np.arange(n)
yi = a_exact * xi + b_exact + 2*np.random.random(n)
A = np.array([xi, np.ones(n)])
coef = np.linalg.pinv(A).T.dot(yi) # coef is [a, b]
plt.plot(xi, yi, 'o', label='$(x_i, y_i)$')
plt.plot(xi, coef[0]*xi + coef[1], label='Least Squares')
plt.legend(loc='best')
# + [markdown] slideshow={"slide_type": "slide"}
# ## Key points of this lecture
#
# - Linear systems can be solved by Gaussian elimination with $\mathcal{O}(n^3)$ complexity.
# - Linear systems can be solved via the LU factorization: $\mathcal{O}(n^3)$ for the factorization and $\mathcal{O}(n^2)$ for each right-hand side
# - The linear least squares problem can be solved via the normal equation (a bad idea!)
# - The linear least squares problem can be solved via the QR factorization (the standard approach)
| lectures/lecture3/lecture3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NAME: <NAME>
# ## THE SPARKS FOUNDATION GRIPMarch21
# ### Data Science And Business Analytics Intern at The Sparks Foundation
# ### Task 1: From the given ‘Iris’ dataset, predict the optimum number of clusters and represent it visually.
# First let us import the modules used for the algorithm.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# -
data = pd.read_csv("Iris.csv")
data.head(100)
# ### Now let us select the features we will use from this data frame.
# +
X = data[["SepalLengthCm","SepalWidthCm","PetalLengthCm","PetalWidthCm","Species"]]
plt.scatter(X["SepalWidthCm"],X["PetalLengthCm"],c="black")
plt.show()
# -
# ### As we can see, the feature "Species" is not numeric, so we have to convert it to numerical form before it can be used in the K-means algorithm.
from sklearn.preprocessing import OneHotEncoder
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
X_cols = pd.DataFrame(OH_encoder.fit_transform(X[["Species"]]))
X_cols.index = X.index
num_X = X.drop(["Species"],axis=1)
X_encoded = pd.concat([num_X,X_cols],axis=1)
X_encoded.head()
# ### After viewing the plot, we now have to determine the optimum number of clusters for our data set.
# ### Although only a few features are used here, for confirmation we will apply the well-known "Elbow Method" to find the optimum number of clusters. First, let us import the modules.
from yellowbrick.cluster import KElbowVisualizer
from sklearn.cluster import KMeans
model = KMeans()
# k is range of number of clusters.
visualizer = KElbowVisualizer(model, k=(2,30),metric='calinski_harabasz', timings= True)
visualizer.fit(X_encoded) # Fit the data to the visualizer
visualizer.show()
# ### As we can clearly see in this graph, the elbow is found at k=3, and hence we infer that the optimal value of k is 3.
#
wcss = []
x = X_encoded.iloc[:,0:4].values
#print(x)
for i in range(1,11):
kmeans = KMeans(n_clusters=i,init="k-means++",
max_iter=100,n_init=10,random_state=0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
plt.plot(range(1,11),wcss)
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
# ### You can clearly see why it is called 'the elbow method' from the above graph: the optimum number of clusters is where the elbow occurs, i.e. where the within-cluster sum of squares (WCSS) stops decreasing significantly as clusters are added. From this we choose the number of clusters as 3.
kmeans = KMeans(n_clusters=3,init="k-means++",max_iter=100,
n_init=10,random_state=0)
y_kmeans = kmeans.fit_predict(X_encoded)
# +
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1],
s = 100, c = 'red', label = 'Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1],
s = 100, c = 'blue', label = 'Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1],
s = 100, c = 'green', label = 'Iris-virginica')
# Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1],
s = 100, c = 'yellow', label = 'Centroids')
plt.legend()
# -
| SPARKS FOUNDATION TASK 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
bookings = pd.read_csv('https://stepik.org/media/attachments/lesson/360344/bookings.csv', encoding='windows-1251', sep=';')
bookings.shape
bookings_head = bookings.head(7)
bookings_head
bookings.dtypes
def to_lower_underscore(name):
name = name.lower().replace(' ','_')
return name
to_lower_underscore('Arrival Date Month')
bookings = bookings.rename(columns=to_lower_underscore)
bookings
bookings.query('is_canceled == 0') \
.country \
.value_counts()[:5]
bookings
df = bookings.groupby('hotel') \
.agg({'stays_total_nights':'mean'})
df.round(2)
bookings.query('assigned_room_type != reserved_room_type').shape
bookings.query('arrival_date_year == 2016') \
.arrival_date_month \
.value_counts() \
.idxmax()
bookings.query('arrival_date_year == 2017') \
.arrival_date_month \
.value_counts() \
.idxmax()
bookings.query('hotel == "City Hotel"') \
        .query('is_canceled == 1') \
        .groupby('arrival_date_year')['arrival_date_month'].value_counts()
[bookings.adults.mean(), bookings.children.mean(), bookings.babies.mean()]
df = bookings[['adults', 'children', 'babies']]
df.mean().idxmax()
bookings['total_kids'] = bookings.children + bookings.babies
bookings
df_1 = bookings.groupby('hotel') \
.agg({'total_kids':'mean'}).round(2)
df_1
bookings['has_kids'] = bookings.total_kids > 0
no_kids = bookings.query('is_canceled == 1 and has_kids == False').shape[0] / bookings.query('has_kids == False').shape[0]
no_kids = round(no_kids * 100, 2)
yes_kids = bookings.query('is_canceled == 1 and has_kids == True').shape[0] / bookings.query('has_kids == True').shape[0]
yes_kids = round(yes_kids * 100, 2)
no_kids
yes_kids
| Python/Lesson_2/Miniproject/Miniproject_lesson_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Using proxy IPs
# ### By default, requests go out from our own IP
import requests
url = 'http://icanhazip.com'
try:
response = requests.get(url) # no proxy used
print(response.status_code)
if response.status_code == 200:
print(response.text)
except requests.ConnectionError as e:
print(e.args)
# ## 66ip proxy list http://www.66ip.cn/1.html
# +
import time
import requests
from lxml import etree
def IPList_66():
headers = {
'referer': 'http://www.66ip.cn/index.html',
'Host': 'www.66ip.cn',
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6788.400 QQBrowser/10.3.2864.400",
# 'Cookie':'bid=dBLxsRMMRbs; __utmc=30149280; __utmc=223695111; ll="118184"; push_noty_num=0; push_doumail_num=0; __utmv=30149280.19232; _vwo_uuid_v2=DD9A4D81803D3AC581031A21E0B6F1628|eb32f81d72deea36fe07c889241a8846; _pk_ses.100001.4cf6=*; ap_v=0,6.0; __utma=30149280.219998368.1550975340.1550979553.1550999110.3; __utmz=30149280.1550999110.3.2.utmcsr=movie.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/subject/26266893/reviews; dbcl2="192325248:8Qm6nZGk5Co"; ck=n3HL; __utma=223695111.1281168398.1550975340.1550979553.1550999422.3; __utmb=223695111.0.10.1550999422; __utmz=223695111.1550999422.3.2.utmcsr=accounts.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/passport/login; __utmt=1; gr_user_id=95b206b3-7bf4-40ca-94d8-14f39be020e1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03=1b9e9583-a672-486f-8b8d-7c6a6ba6294b; gr_cs1_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=user_id%3A1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=true; __utmt_douban=1; __utmb=30149280.7.10.1550999110; _pk_id.100001.4cf6=138e377200b95457.1550975332.3.1551000296.1550979600.'
}
for q in [1,2,3]:
print('Fetching 66ip page %d' % q)
url = 'http://www.66ip.cn/'+str(q)+'.html'
response = requests.get(url,headers = headers,timeout=5)
response.raise_for_status()
if response.status_code == 200:
page = response.content.decode(response.apparent_encoding)
# print(page)
html = etree.fromstring(page,parser=etree.HTMLParser(encoding=response.apparent_encoding))
for i in range(2,11):
host = html.xpath('normalize-space(//*[@id="main"]/div/div[1]/table/tbody/tr[%d]/td[1]/text())'%(i))
port = html.xpath('normalize-space(//*[@id="main"]/div/div[1]/table/tbody/tr[%d]/td[2]/text())'%(i))
print('http://%s:%s' %(host,port))
# -
IPList_66()
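# The XPath extraction above depends on the live site's markup, so it is hard to check in isolation. A small sketch of the same `normalize-space(...)` pattern against an inline HTML snippet (the snippet is a made-up stand-in for one table row, not the real 66ip page):

```python
from lxml import etree

# Made-up stand-in for a proxy-list table; the real page layout may differ.
page = """
<table><tbody>
  <tr><td>ip</td><td>port</td></tr>
  <tr><td> 1.2.3.4 </td><td>8080</td></tr>
</tbody></table>
"""
html = etree.fromstring(page, parser=etree.HTMLParser())
# normalize-space() strips the stray whitespace around the cell text.
host = html.xpath('normalize-space(//tr[2]/td[1]/text())')
port = html.xpath('normalize-space(//tr[2]/td[2]/text())')
print('http://%s:%s' % (host, port))
```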
# ## 89代理 http://www.89ip.cn/index_1.html
# +
import time
import requests
from lxml import etree
def IPList_89():
headers = {
# "Referer":"https://movie.douban.com/subject/26266893/?from=showing",
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6788.400 QQBrowser/10.3.2864.400",
# 'Cookie':'bid=dBLxsRMMRbs; __utmc=30149280; __utmc=223695111; ll="118184"; push_noty_num=0; push_doumail_num=0; __utmv=30149280.19232; _vwo_uuid_v2=DD9A4D81803D3AC581031A21E0B6F1628|eb32f81d72deea36fe07c889241a8846; _pk_ses.100001.4cf6=*; ap_v=0,6.0; __utma=30149280.219998368.1550975340.1550979553.1550999110.3; __utmz=30149280.1550999110.3.2.utmcsr=movie.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/subject/26266893/reviews; dbcl2="192325248:8Qm6nZGk5Co"; ck=n3HL; __utma=223695111.1281168398.1550975340.1550979553.1550999422.3; __utmb=223695111.0.10.1550999422; __utmz=223695111.1550999422.3.2.utmcsr=accounts.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/passport/login; __utmt=1; gr_user_id=95b206b3-7bf4-40ca-94d8-14f39be020e1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03=1b9e9583-a672-486f-8b8d-7c6a6ba6294b; gr_cs1_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=user_id%3A1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=true; __utmt_douban=1; __utmb=30149280.7.10.1550999110; _pk_id.100001.4cf6=138e377200b95457.1550975332.3.1551000296.1550979600.'
}
proxy_list = []
for q in [1,2,3]:
print('Fetching 89ip page %d' % q)
url = 'http://www.89ip.cn/index_'+str(q)+'.html'
response = requests.get(url,headers = headers,timeout=5)
response.raise_for_status()
if response.status_code == 200:
page = response.content.decode(response.apparent_encoding)
html = etree.fromstring(page,parser=etree.HTMLParser(encoding=response.apparent_encoding))
for i in range(1,15):
host = html.xpath("normalize-space(/html/body/div[4]/div[1]/div/div[1]/table/tbody/tr[%d]/td[1]/text())"%i)
port = html.xpath("normalize-space(/html/body/div[4]/div[1]/div/div[1]/table/tbody/tr[%d]/td[2]/text())"%i)
# print('http://%s:%s' %(host,port))
proxy_list.append('http://%s:%s' %(host,port))
return proxy_list
# -
# ### ip3366 proxy list http://www.ip3366.net/free/?stype=1&page=1
# +
import time
import requests
from lxml import etree
def IPList_3366():
headers = {
# "Referer":"https://movie.douban.com/subject/26266893/?from=showing",
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6788.400 QQBrowser/10.3.2864.400",
# 'Cookie':'bid=dBLxsRMMRbs; __utmc=30149280; __utmc=223695111; ll="118184"; push_noty_num=0; push_doumail_num=0; __utmv=30149280.19232; _vwo_uuid_v2=DD9A4D81803D3AC581031A21E0B6F1628|eb32f81d72deea36fe07c889241a8846; _pk_ses.100001.4cf6=*; ap_v=0,6.0; __utma=30149280.219998368.1550975340.1550979553.1550999110.3; __utmz=30149280.1550999110.3.2.utmcsr=movie.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/subject/26266893/reviews; dbcl2="192325248:8Qm6nZGk5Co"; ck=n3HL; __utma=223695111.1281168398.1550975340.1550979553.1550999422.3; __utmb=223695111.0.10.1550999422; __utmz=223695111.1550999422.3.2.utmcsr=accounts.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/passport/login; __utmt=1; gr_user_id=95b206b3-7bf4-40ca-94d8-14f39be020e1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03=1b9e9583-a672-486f-8b8d-7c6a6ba6294b; gr_cs1_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=user_id%3A1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=true; __utmt_douban=1; __utmb=30149280.7.10.1550999110; _pk_id.100001.4cf6=138e377200b95457.1550975332.3.1551000296.1550979600.'
}
proxy_list = []
for q in [1,2,3]:
print('Fetching ip3366 page %d' % q)
url = 'http://www.ip3366.net/free/?stype=1&page='+str(q)
response = requests.get(url,headers = headers,timeout=5)
response.raise_for_status()
if response.status_code == 200:
page = response.content.decode("gb2312")
html = etree.fromstring(page,parser=etree.HTMLParser(encoding=response.apparent_encoding))
for i in range(1,15):
protocol = html.xpath('normalize-space(//*[@id="list"]/table/tbody/tr[%d]/td[4]/text())'%i)
host = html.xpath('normalize-space(//*[@id="list"]/table/tbody/tr[%d]/td[1]/text())'%i)
port = html.xpath('normalize-space(//*[@id="list"]/table/tbody/tr[%d]/td[2]/text())'%i)
#print('%s://%s:%s' %(protocol.lower(),host,port))
proxy_list.append('%s://%s:%s' %(protocol.lower(),host,port))
return proxy_list
# -
# ### Proxy IPs usable for a given request URL
#
# +
all_proxy_ip ='proxy/all_proxy_ip.txt'
list_66 = IPList_66()
list_89 = IPList_89()
list_3366 = IPList_3366()
with open(all_proxy_ip, 'w', encoding="utf-8") as proxyfile:
proxyfile.write('\n'.join(list_89))
proxyfile.write('\n')
proxyfile.write('\n'.join(list_3366))
proxyfile.write('\n')
proxyfile.write('\n'.join(list_66))
# -
# ## Verifying that a proxy works (for a given site)
'''
Check whether a proxy is usable
'''
import requests
def verifyProxy(proxies,url):
try:
headers = {
# "Referer":"https://movie.douban.com/subject/26266893/?from=showing",
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6788.400 QQBrowser/10.3.2864.400",
# 'Cookie':'bid=dBLxsRMMRbs; __utmc=30149280; __utmc=223695111; ll="118184"; push_noty_num=0; push_doumail_num=0; __utmv=30149280.19232; _vwo_uuid_v2=DD9A4D81803D3AC581031A21E0B6F1628|eb32f81d72deea36fe07c889241a8846; _pk_ses.100001.4cf6=*; ap_v=0,6.0; __utma=30149280.219998368.1550975340.1550979553.1550999110.3; __utmz=30149280.1550999110.3.2.utmcsr=movie.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/subject/26266893/reviews; dbcl2="192325248:8Qm6nZGk5Co"; ck=n3HL; __utma=223695111.1281168398.1550975340.1550979553.1550999422.3; __utmb=223695111.0.10.1550999422; __utmz=223695111.1550999422.3.2.utmcsr=accounts.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/passport/login; __utmt=1; gr_user_id=95b206b3-7bf4-40ca-94d8-14f39be020e1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03=1b9e9583-a672-486f-8b8d-7c6a6ba6294b; gr_cs1_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=user_id%3A1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=true; __utmt_douban=1; __utmb=30149280.7.10.1550999110; _pk_id.100001.4cf6=138e377200b95457.1550975332.3.1551000296.1550979600.'
}
response = requests.get(url,headers = headers,proxies=proxies,timeout=5)
return response.status_code == 200
except requests.ConnectionError as e:
#print(e.args)
return False
#verifyProxy({"http": "192.168.127.12:3129"}, "https://www.baidu.com")
proxies = {
'http': "172.16.17.32:8118"
# 'https': "172.16.17.32:8118"
}
verifyProxy(proxies,"https://www.baidu.com")
# ### Finding usable IPs
def write_ip(url,all_proxy_ip,use_proxy):
able_ip_list = []
with open (all_proxy_ip,'r') as all_proxy_file:
content = all_proxy_file.read().splitlines()
for proxy_str in content:
if verifyProxy({proxy_str.split("://")[0]:proxy_str.split("://")[1]},url):
able_ip_list.append(proxy_str)
print(able_ip_list)
with open (use_proxy,'w') as all_use_proxy_file:
all_use_proxy_file.write('\n'.join(able_ip_list))
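# `write_ip` turns each stored line "scheme://host:port" into the `proxies` mapping that `requests` expects. The same split, pulled out into a small helper (the helper name is ours, not part of the notebook):

```python
def to_proxies(proxy_str):
    # "http://1.2.3.4:8080" -> {"http": "1.2.3.4:8080"}, as built inline in write_ip
    scheme, address = proxy_str.split("://", 1)
    return {scheme: address}

print(to_proxies("http://1.2.3.4:8080"))
```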
# +
url ="https://www.baidu.com"
all_proxy_ip ='proxy/all_proxy_ip.txt'
use_proxy="proxy/use_proxy.txt"
write_ip(url,all_proxy_ip,use_proxy)
# +
url ="https://www.english-corpora.org/coca/"
all_proxy_ip ='proxy/all_proxy_ip.txt'
use_proxy="coca/use_proxy.txt"
write_ip(url,all_proxy_ip,use_proxy)
# -
# Boilerplate for fetching a page's HTML source
import requests
def getHtml(URL):
try:
headers = {
# "Referer":"https://movie.douban.com/subject/26266893/?from=showing",
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6788.400 QQBrowser/10.3.2864.400",
# 'Cookie':'bid=dBLxsRMMRbs; __utmc=30149280; __utmc=223695111; ll="118184"; push_noty_num=0; push_doumail_num=0; __utmv=30149280.19232; _vwo_uuid_v2=DD9A4D81803D3AC581031A21E0B6F1628|eb32f81d72deea36fe07c889241a8846; _pk_ses.100001.4cf6=*; ap_v=0,6.0; __utma=30149280.219998368.1550975340.1550979553.1550999110.3; __utmz=30149280.1550999110.3.2.utmcsr=movie.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/subject/26266893/reviews; dbcl2="192325248:8Qm6nZGk5Co"; ck=n3HL; __utma=223695111.1281168398.1550975340.1550979553.1550999422.3; __utmb=223695111.0.10.1550999422; __utmz=223695111.1550999422.3.2.utmcsr=accounts.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/passport/login; __utmt=1; gr_user_id=95b206b3-7bf4-40ca-94d8-14f39be020e1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03=1b9e9583-a672-486f-8b8d-7c6a6ba6294b; gr_cs1_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=user_id%3A1; gr_session_id_22c937bbd8ebd703f2d8e9445f7dfd03_1b9e9583-a672-486f-8b8d-7c6a6ba6294b=true; __utmt_douban=1; __utmb=30149280.7.10.1550999110; _pk_id.100001.4cf6=138e377200b95457.1550975332.3.1551000296.1550979600.'
}
response = requests.get(URL,headers = headers,timeout=5)
response.raise_for_status()
response.encoding = response.apparent_encoding
return response.content.decode("utf-8")
except requests.ConnectionError as e:
print(e.args)
return False
for i in range(10):
getHtml("http://192.168.160.23:8080/system/login.jsp")
getHtml("http://192.168.160.23:8080/lg")
| nlp/proxy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # **App Usage Density** #
#
# Our objective is to analyze consumer density for various apps and to determine which type of app should be developed with respect to the data presented.
# +
# Apple apps source - https://www.kaggle.com/ramamet4/app-store-apple-data-set-10k-apps/home
# Google apps source - https://www.kaggle.com/lava18/google-play-store-apps/home
opened_file = open('AppleStore copy.csv')
opened_file2 = open('googleplaystore copy.csv')
from csv import reader
read_file = reader(opened_file)
read_file2 = reader(opened_file2)
apple_data = list(read_file)
google_data = list(read_file2)
# -
def explore_data(dataset, start, end, rows_and_columns=False):
dataset_slice = dataset[start:end]
for row in dataset_slice:
print(row)
print('\n') # adds a new (empty) line after each row
if rows_and_columns:
print('Number of rows:', len(dataset))
print('Number of columns:', len(dataset[0]))
explore_data(apple_data, 1, 5, rows_and_columns=True)
print()
explore_data(google_data, 1, 5, rows_and_columns=True)
# Here we determine which columns produce relevant data to our objective.
#
# We only want to analyze free, English apps, since those are the only apps our company intends to make.
# +
print(apple_data[0:1])
# size_bytes, rating_count_tot, user_rating,
# user_rating_ver, cont_rating, prime_genre, sup_devices.num.
# [3, 6, 8]
# [9, 11, 12, 13]
print()
print(google_data[0:1])
# Category, Rating, Reviews, Size, Installs,
# Content Rating, Genres, Last Updated
# [1, 2, 4, 5]
# [8, 9, 10]
# -
# The Google Play data set contains duplicate entries for many apps. To keep our counts accurate, we remove these duplicate entries before quantifying the differences between apps.
# +
for app in google_data:
name = app[0]
if name == 'Facebook':
print(app)
print()
for app in google_data:
name = app[0]
if name == 'Instagram':
print(app)
# +
duplicate = []
unique = []
for app in google_data:
name = app[0]
if name in unique:
duplicate.append(name)
else:
unique.append(name)
print('Duplicates:', len(duplicate))
# -
# If we look closely at the duplicate entries for 'Facebook' and 'Instagram', we'll notice that although *most* of the data is identical, there is **one** difference in each row - the number of reviews. This implies that the duplicate entries were extracted at different times. Although removing duplicates at random is an option, it's better to keep the **most recently** collected row. The row with the most reviews among its duplicates satisfies this condition.
# +
reviews_max = {}
for app in google_data[1:]:
name = app[0]
# n_reviews = float(app[0])
n_reviews = float(app[3])
if name in reviews_max and reviews_max[name] < n_reviews:
#reviews_max.append(n_reviews)
reviews_max[name] = n_reviews
if name not in reviews_max:
#reviews_max.append(name, n_reviews)
reviews_max[name] = n_reviews
# -
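# The "keep the row with the most reviews" rule above can be checked on a toy list; the column layout mirrors the Google Play rows (name at index 0, reviews at index 3):

```python
rows = [
    ["Instagram", "SOCIAL", "4.5", "66577313"],
    ["Instagram", "SOCIAL", "4.5", "66577446"],  # most recent snapshot
    ["Instagram", "SOCIAL", "4.5", "66509917"],
]
reviews_max = {}
for app in rows:
    name, n_reviews = app[0], float(app[3])
    # keep only the highest review count seen for each app name
    if name not in reviews_max or reviews_max[name] < n_reviews:
        reviews_max[name] = n_reviews
print(reviews_max)
```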
print('Expected number of unique apps:', len(google_data[1:]) - 1176)  # subtract the duplicate rows counted above
# +
google_clean = []
already_added = []
for app in google_data[1:]:
name = app[0]
n_reviews = float(app[3])
#if n_reviews == reviews_max and name not in already_added:
if reviews_max[name] == n_reviews and name not in already_added:
google_clean.append(app)
already_added.append(name)
# +
# Verify that the loop produced the accurate number of apps in the data set.
len(google_clean)
# -
def english(string):
for value in string:
if ord(value) > 127:
return False
#else:
#return True
return True
english('Instagram')
english('爱奇艺PPS -《欢乐颂2》电视剧热播')
english('Docs To Go™ Free Office Suite')
english('Instachat 😜')
# +
def english(string):
count = 0
for value in string:
#count = 0
if ord(value) > 127:
count += 1
#if count > 3:
#return False
#else:
#return True
if count > 3:
return False
else:
return True
#return True
print(english('Docs To Go™ Free Office Suite'))
print(english('Instachat 😜'))
print(english('爱奇艺PPS -《欢乐颂2》电视剧热播'))
# +
google_english = []
google_not_english = []
apple_english = []
apple_not_english = []
for apps in google_clean:
name = apps[0]
if english(name): #is True:
#google_english.append(name)
google_english.append(apps)
else:
#google_not_english.append(name)
google_not_english.append(apps)
for apps in apple_data[1:]:
name = apps[2]
if english(name): #is True:
#apple_english.append(name)
apple_english.append(apps)
else:
#apple_not_english.append(name)
apple_not_english.append(apps)
explore_data(google_english, 0, 3, True)
print('\n')
explore_data(apple_english, 0, 3, True)
# +
google_free = []
apple_free = []
for apps in google_english:
cost = apps[6]
if cost == 'Free':
google_free.append(apps)
for apps in apple_english:
cost = float(apps[5])
if cost == 0.0:
apple_free.append(apps)
print(len(google_free))
print(len(apple_free))
# -
# To maximize profitability, it's optimal to build an app that's compatible with both Google and Apple. If an app is profitable within the first 6 months of its availability on Google Play, we can make optimistic projections about releasing it in the App Store.
# +
google_genres = {}
for row in google_free:
genres = row[9]
google_genres[genres] = google_genres.get(genres, 0) + 1
apple_genres = {}
for row2 in apple_free:
genres = row2[12]
apple_genres[genres] = apple_genres.get(genres, 0) + 1
print(google_genres)
print()
print(apple_genres)
# +
#app_count = {}
#freq_count = 0
def freq_table(dataset, index):
freq_count = 0
app_count = {}
for row in dataset:
freq_count += 1 # added
observation = row[index]
if observation in app_count: #row
app_count[observation] += 1
else:
app_count[observation] = 1
# forgot percentage
table_percentages = {}
for key in app_count:
percentage = (app_count[key] / freq_count) * 100
table_percentages[key] = percentage
return table_percentages
#print(app_count)
def display_table(dataset, index):
app_count = freq_table(dataset, index)
table_display = []
for key in app_count:
key_val_as_tuple = (app_count[key], key)
table_display.append(key_val_as_tuple)
table_sorted = sorted(table_display, reverse = True)
for entry in table_sorted:
print(entry[1], ':', entry[0])
# +
# prime-genre, Genres and Category
display_table(apple_free, -5)
print()
display_table(google_free, -4)
print()
display_table(google_free, 1)
# -
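# The counting-plus-percentage idea behind `freq_table` in one compact, self-contained function (toy data, so it runs without the app CSVs):

```python
def freq_percentages(dataset, index):
    # Count how often each value occurs at `index`, then convert to percentages.
    counts = {}
    for row in dataset:
        counts[row[index]] = counts.get(row[index], 0) + 1
    return {key: count / len(dataset) * 100 for key, count in counts.items()}

toy = [["a", "Games"], ["b", "Games"], ["c", "Social"], ["d", "Games"]]
print(freq_percentages(toy, 1))
```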
google_data[0:1]
apple_data[0:1]
google_free[0:1]
freq_table(apple_free, -5)
# +
apple_popular = freq_table(apple_free, -5)
for genre in apple_popular:
total = 0
len_genre = 0
#for app in apple_popular:
for app in apple_free:
genre_app = app[-5]
if genre_app == genre:
num = float(app[6])
# num += total
total += num
len_genre += 1
avg = total / len_genre
print(genre, ':', avg)
# -
#
#
#
#
# **Decide which type of App Store is best here**
#
#
#
#
# +
google_popular = freq_table(google_free, 1)
for category in google_popular:
total = 0
len_category = 0
for app in google_free:
category_app = app[1]
if category_app == category:
num = app[5]
# num.replace('+', '')
num = num.replace('+', '')
# num.replace(',', '')
num = num.replace(',', '')
total += float(num)
len_category += 1
avg = total / len_category
print(category, ":", avg)
# -
#
#
#
#
# **Decide which type of Google Play app is best here**
#
#
#
#
| App Usage Density.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Schema: schemas in MongoDB
#
# ## Validator: a (partial) schema for collection documents
#
# Every document in a MongoDB collection has its own *structure*: field names and corresponding values (types).
# This great freedom is inconvenient when you want to query a collection:
# for that you need to know which names and values the various documents use.
# Querying works better if the documents share a certain (minimal) common structure.
#
# With a *validator* you can describe a *minimal* structure for the documents in a collection.
# MongoDB applies this validator when a document is inserted or updated.
# If the document does not satisfy the validator's description, it is not inserted.
#
# You can specify the validator when creating the collection.
# You can also add it later, with the db command `collMod` for changing a collection's properties.
# ### Schema
#
# We call the structure of the documents in a collection a *schema*.
# In a MongoDB collection schema you decide for yourself which part of the structure is fixed,
# and where documents may differ.
#
# > In a SQL database the (physical) schema describes the *complete structure* of the database: the tables, and the structure of each table (names and types of the columns).
# All rows (records) in a table have the same structure.
# ## Initialization
# +
import os
import re
import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
import pymongo
os.environ["PATH"]=os.environ["PATH"] + ":/usr/local/bin"
pd.set_option('display.max_colwidth', 160)
# userline = !echo $USER
username = userline[0]
dbname = username + "-demodb"
print("Database name: " + dbname)
from pymongo import MongoClient
print('Mongo version', pymongo.__version__)
client = MongoClient('localhost', 27017)
db = client[dbname]
contacts = db.contacts
contacts.drop()
os.system('mongoimport -d ' + dbname + ' -c contacts adressen.json')
# -
# ## Counterexample: inserting an unrelated document
#
# MongoDB collections initially have no structure (schema).
# This means you can insert arbitrary documents, as we demonstrate here:
contacts.insert_one({"kleur": "groen", "prijs": 400, "beschrijving": "fiets"})
list(contacts.find())
# This is of course not the intention.
# If the database is tightly coupled to a single application, this is unlikely to happen.
# But a database is often used by multiple applications: in that case you want to prevent such problems.
#
# Moreover, you want to know which fields (properties) can be used in the documents of a given collection, such as `contacts` or `agenda`.
# ### Finding documents by *type*
#
# In the following query we match not on the value of a field, but on its type.
# This will come in handy later when defining a schema (validator).
list(contacts.find({"kleur": {"$type": "string"}}))
# ## Validating documents
#
# Using a *validator*, MongoDB checks on every insert or update of a document whether it satisfies the rules of the collection.
#
# You can think of a validator as a query expression that filters all "valid" documents in the database.
#
# We can set a collection's validator with the db command `collMod`.
# ### Defining the validator
#
# As a minimal requirement for the documents in the `contacts` collection, we demand at least a `name` field (property), and an `email` or a `tel` field.
# We describe this with the following schema:
contact_schema = {"name": {"$type": "string"},
"$or": [{"email": {"$type": "string"}},
{"tel": {"$type": "string"}}
]}
# We test this schema by searching for the documents that do satisfy it:
list(contacts.find(contact_schema))
# ### Finding non-valid documents
#
# Next we check which documents do *not* satisfy the validator query.
# For this we use the `$nor` operator with a list of sub-queries;
# in our case the list is only 1 long. (There is no top-level `$not`, but this works too.)
list(contacts.find({"$nor":[contact_schema]}))
# ### Adding the validator to the collection
#
# We add this schema as the *validator* schema for the collection `contacts`.
#
# > You can define the validator when initializing the collection, but you can also change it afterwards, as we do here.
db.command("collMod", "contacts", validator=contact_schema)
# ### Example: inserting a valid document
#
# Inserting a document that satisfies these rules:
contacts.insert_one({"name": "<NAME>", "tel": "06 3333 8765"})
# ### Example: inserting a non-valid document
#
# Inserting a document that does *not* satisfy these rules (because of a wrong choice for the "name" field).
#
# > This raises an error; below we show a more convenient way to handle this in a program.
contacts.insert_one({"naam": "<NAME>", "tel": "06 1234 8855"})
# It is more convenient to catch such errors in the program itself.
# Python supports this with its exception mechanism; see the example below:
try:
contacts.insert_one({"naam": "<NAME>", "tel": "06 1234 8855"})
except pymongo.errors.WriteError as s:
print("Document not inserted: " + str(s))
else:
print("insert OK")
# ## Finding non-valid documents
#
# If you change the validator afterwards, the collection may still contain documents that do not satisfy the new validator. You have to check for this yourself, and adjust the data where needed.
#
# > It is wise to do this on every schema change, otherwise you risk an error on an `update` of a non-valid document.
list(contacts.find({"$nor": [contact_schema]}))
# ## Exercise
#
# * redefine the schema `contact_schema` so that, besides the name and a phone number or email address, a document must *also* contain a physical address. This physical address has (at least) the property `city`.
#
# > tip: to match a field `b` inside a field `a`, use the notation `"a.b": ...`.
contact_schema = {"name": {"$type": "string"},
"$or": [{"email": {"$type": "string"}},
{"tel": {"$type": "string"}}
],
"address.city": {"$type": "string"}
}
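# What the dotted `"address.city"` condition checks can be mirrored in plain Python, without a MongoDB server (the helper below is our own sketch, not MongoDB API):

```python
def matches_contact_schema(doc):
    # Mirrors the validator: name is a string, email or tel is a string,
    # and the nested field address.city is a string.
    has_name = isinstance(doc.get("name"), str)
    has_contact = isinstance(doc.get("email"), str) or isinstance(doc.get("tel"), str)
    has_city = isinstance(doc.get("address", {}).get("city"), str)
    return has_name and has_contact and has_city

print(matches_contact_schema({"name": "An", "tel": "06 1234", "address": {"city": "Delft"}}))
```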
# * find all documents that do *not* satisfy this schema.
list(contacts.find({"$nor": [contact_schema]}))
list(contacts.find(contact_schema))
# * (re)define the collection validator with this new schema.
db.command("collMod", "contacts", validator=contact_schema)
# Demonstrate that the schema works correctly when inserting the following document.
#
# > Check for yourself whether it satisfies the schema. What result do you expect?
# > Adjust the document if necessary, and run the cell again.
person = {"name": "<NAME>",
"email": "<EMAIL>",
"address": {"straat": "Kastanjelaan 31", "plaats": "Almere"}}
try:
contacts.insert_one(person)
except pymongo.errors.WriteError as s:
print("Document not inserted: " + str(s))
else:
print("insert OK")
# ## Exercise
#
# We only want to allow addresses with (at least) `street`, `city`, and `postcode`.
#
# * Redefine `contact_schema` so that all of these fields are included as strings.
#
# > *Note*: with regular expressions you could describe even more precisely what a postcode looks like,
# but we leave that out of scope here. `string` is sufficient.
# Redefine the validator with this new schema.
contact_schema = {"name": {"$type": "string"},
"$or": [{"email": {"$type": "string"}},
{"tel": {"$type": "string"}}
],
"address.street": {"$type": "string"},
"address.city": {"$type": "string"},
"address.postcode": {"$type": "string"}
}
# * update the validator of `contacts`
db.command("collMod", "contacts", validator=contact_schema)
# * Give an example of an insert of a document that satisfies this validator.
person = {"name": "<NAME>",
"email": "<EMAIL>",
"address": {"street": "Planetenstraat 42", "city": "Zierikzee","postcode": "1023 AB"}}
try:
contacts.insert_one(person)
except pymongo.errors.WriteError as s:
print("Document not inserted: " + str(s))
else:
print("insert OK")
# * Give an example of an insert of a document that does *not* satisfy this validator.
person = {"name": "<NAME>",
"email": "<EMAIL>",
"address": {"street": "Planetenstraat 42", "city": "Zierikzee"}}
try:
contacts.insert_one(person)
except pymongo.errors.WriteError as s:
print("Document not inserted: " + str(s))
else:
print("insert OK")
# ## Remarks
#
# * MongoDB offers two notations for describing a validator schema:
# * the original MongoDB query notation, as used above;
# * JSON Schema, an (internet/IETF) draft standard for JSON documents (see https://json-schema.org, https://json-schema.org/latest/json-schema-core.html, and https://json-schema.org/understanding-json-schema/index.html).
# ## JSON Schema
#
# Below we give, without further comment, the original validation schema, with `name` as a required field, and with the choice between `tel` and `email` as required fields.
#
# JSON Schema is nowadays the preferred notation for validation in MongoDB. It lets you specify quite precisely what documents must look like, including the structure of optional parts. You can also use JSON Schema in ordinary query operations (`find`).
#
# > `anyOf` stands for "or", with a list of alternatives.
schema = {"type": "object",
"required": ["name"],
"properties": {
"name": {"type": "string"}
},
"anyOf": [
{"properties": {"email": {"anyOf": [{"type": "string"},
{"type": "array",
"items": {"type": "string"}}
]}},
"required": ["email"]},
{"properties": {"tel": {"type": "string"}},
"required": ["tel"]}
]
}
list(contacts.find({"$jsonSchema": schema}))
list(contacts.find({"$jsonSchema": {"not": schema}}))
# ---
# (End of this Jupyter notebook.)
list(db.list_collections())
| uitwerkingen/Schema-uitwerkingen.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Script for uploading our rProtein sequences
#
# Uses a pregenerated csv file with the columns:
#
# *Txid*, *Accession*, *Origin database*, *Description*, and *Full sequence*
#
# Updates tables: **Polymer_Data**, **Polymer_metadata**, and **Residues**
# +
# #!/usr/bin/env python3
import csv, sys, getopt, getpass, mysql.connector
def usage():
print (\
"USAGE:\n./upload_accession.py -c [csv_file_path] -h\n\
-c: defines path to csv file with txids, accessions, database, protein name, description, and sequence.\tREQUIRED\n\
-h: prints this\
")
try:
opts, args = getopt.getopt(sys.argv[1:], 'c:h', ['csv=', 'help'])
except getopt.GetoptError:
usage()
sys.exit(2)
for opt, arg in opts:
if opt in ('-h', '--help'):
usage()
sys.exit(2)
elif opt in ('-c', '--csv'):
csv_path = arg
else:
usage()
sys.exit(2)
uname = input("User name: ")
pw = getpass.getpass("Password: ")
cnx = mysql.connector.connect(user=uname, password=pw, host='172.16.17.32', database='SEREB')
cursor = cnx.cursor()
def read_csv(csv_path):
with open(csv_path, 'r') as csv_file:
reader = csv.reader(csv_file)
csv_list = list(reader)
return csv_list
def superkingdom_info(ID):
'''
Gets the superkingdom for a strain ID
'''
#print(ID)
cursor.execute("SELECT SEREB.TaxGroups.groupName FROM SEREB.Species_TaxGroup\
INNER JOIN SEREB.TaxGroups ON SEREB.Species_TaxGroup.taxgroup_id=SEREB.TaxGroups.taxgroup_id\
INNER JOIN SEREB.Species ON SEREB.Species_TaxGroup.strain_id=SEREB.Species.strain_id\
WHERE SEREB.TaxGroups.groupLevel = 'superkingdom' AND SEREB.Species.strain_id = '"+ID+"'")
results = cursor.fetchall()
#print(ID,results)
try:
superkingdom=(results[0][0])
except IndexError:
raise ValueError("No result for species "+str(ID)+" in the MySQL query")
return superkingdom
def check_nomo_id(occur, prot_name):
'''
Gets nom_id for new name and superkingdom
'''
cursor.execute("SELECT SEREB.Nomenclature.nom_id FROM SEREB.Nomenclature\
INNER JOIN SEREB.Old_name ON SEREB.Nomenclature.nom_id=SEREB.Old_name.nomo_id\
WHERE SEREB.Old_name.old_name = '"+prot_name+"' AND SEREB.Old_name.N_B_Y_H_A = 'BAN' AND SEREB.Nomenclature.occurrence = '"+occur+"'")
result = cursor.fetchall()
#nom_id=result[0][0]
try:
nom_id=result[0][0]
except:
raise ValueError ("No result for nom_id "+prot_name+" and occurrence "+occur+" in the MYSQL query")
return nom_id
def upload_resi(poldata_id, fullseq):
i = 1
for resi in fullseq:
query = "INSERT INTO `SEREB`.`Residues`(`PolData_id`,`resNum`,`unModResName`) VALUES('"+poldata_id+"','"+str(i)+"','"+resi+"')"
cursor.execute(query)
#print(query)
i+=1
return True
def main():
csv_list = read_csv(csv_path)
for entry in csv_list:
superK = superkingdom_info(entry[0])
nom_id = check_nomo_id(superK[0], entry[3])
query = "INSERT INTO `SEREB`.`Polymer_Data`(`GI`,`strain_ID`,`nomgd_id`, `GeneDescription`) VALUES('"+entry[1]+"','"+str(entry[0])+"','"+str(nom_id)+"','"+entry[4]+"')"
print(query)
cursor.execute(query)
lastrow_id = str(cursor.lastrowid)
query = "INSERT INTO `SEREB`.`Polymer_metadata`(`polymer_id`,`accession_type`,`polymer_type`, `Fullseq`) VALUES('"+str(lastrow_id)+"','LDW-prot','protein','"+entry[5]+"')"
cursor.execute(query)
#print(query)
upload_resi(str(lastrow_id), entry[5])
if __name__ == "__main__":
main()
#cnx.commit()
cursor.close()
cnx.close()
print("Success!")
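# The INSERT statements above splice values directly into the SQL string, which
# breaks on quotes in descriptions and is open to SQL injection. A safer sketch
# using mysql.connector's `%s` placeholders (the driver escapes the values
# itself); `build_residue_rows` is a hypothetical helper, not part of the
# original script:

```python
RESIDUE_SQL = (
    "INSERT INTO `SEREB`.`Residues`(`PolData_id`,`resNum`,`unModResName`) "
    "VALUES (%s, %s, %s)"
)

def build_residue_rows(poldata_id, fullseq):
    """Return one parameter tuple per residue for a single executemany() call."""
    return [(poldata_id, i, resi) for i, resi in enumerate(fullseq, start=1)]

# usage (cursor is an open mysql.connector cursor):
#   cursor.executemany(RESIDUE_SQL, build_residue_rows(lastrow_id, entry[5]))
```

# executemany() also batches the per-residue inserts into far fewer round trips
# than one execute() per residue.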
| populate_db/Notebooks/.ipynb_checkpoints/Polymer_Data-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/asu-trans-ai-lab/DLSim/blob/main/osm2gmns.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="_ghEaf-d2Fdm"
# **Step 0: load the OSM file from the repository of OSM testing datasets**
# + colab={"base_uri": "https://localhost:8080/"} id="e3xZMKsJ2Ew2" outputId="aebdcacd-43f0-4ce9-db92-93a7cc9e011b"
# !rm -rf ./osm_test_data_set/
# !git clone https://github.com/asu-trans-ai-lab/osm_test_data_set
# + id="nudHbrPPY8PP" colab={"base_uri": "https://localhost:8080/"} outputId="25211be7-2e02-4b8f-a28c-440daf7b447b"
# %cd /content/osm_test_data_set/datasets/loop_101
# + [markdown] id="CexwTGDB0D0A"
# Check the file panel on the left-hand side and make sure the file map.osm exists.
# + [markdown] id="j6y7B8WX-d46"
# **Step 1: install python packages**
# + colab={"base_uri": "https://localhost:8080/"} id="Mysg2UEz0cu5" outputId="21caa722-b9ed-4f41-d9ac-91c85647f989"
# !pip install osm2gmns
# !pip install grid2demand
# + [markdown] id="qrchDA-R0WN0"
# **Step 2: convert OSM to GMNS Files**
# + id="3UL0MUnaHAD5" colab={"base_uri": "https://localhost:8080/", "height": 664} outputId="ec89dce7-211e-475b-836b-d631e5342989"
import osm2gmns as og
net = og.getNetFromFile('map.osm', POIs=True, POI_sampling_ratio=0.05)
og.connectPOIWithNet(net)
og.generateNodeActivityInfo(net)
og.consolidateComplexIntersections(net)
og.outputNetToCSV(net)
og.show(net)
og.saveFig(net)
# + [markdown] id="SENYsUzC03bx"
# Check that node.csv, link.csv and poi.csv exist in the left-hand-side Colab folder.
# + [markdown] id="_vLsOkgJ0vuZ"
# **Step 3: Run grid2demand to generate demand based on POI trip rates**
# + id="EbIz8pgPJgsn" colab={"base_uri": "https://localhost:8080/"} outputId="7406fe0a-e3ab-40ec-e0a3-c2d9592e2581"
import grid2demand as gd
"Step 1: Read Input Network Data"
net = gd.ReadNetworkFiles('')
"Step 2: Partition Grid into cells"
zone = gd.PartitionGrid(number_of_x_blocks=5, number_of_y_blocks=5)
# user can customize number of grid cells or cell's width and height
"Step 3: Get Production/Attraction Rates of Each Land Use Type with a Specific Trip Purpose"
triprate = gd.GetPoiTripRate(trip_rate_folder='',trip_purpose=1)
# user can customize poi_trip_rate.csv and trip purpose
"Step 4: Define Production/Attraction Value of Each Node According to POI Type"
nodedemand = gd.GetNodeDemand()
"Step 5: Calculate Zone-to-zone Accessibility Matrix by Centroid-to-centroid Straight Distance"
accessibility = gd.ProduceAccessMatrix(latitude=30, accessibility_folder='')
# user can customize the latitude of the research area and accessibility.csv
"Step 6: Apply Gravity Model to Conduct Trip Distribution"
demand = gd.RunGravityModel(trip_purpose=1, a=None, b=None, c=None)
# user can customize friction factor coefficients under a specific trip purpose
"Step 7: Generate Agent"
demand = gd.GenerateAgentBasedDemand()
# + [markdown] id="gfLuLzVs4SJS"
# **Step 4: Download data files**
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="gRYOkJyT4juC" outputId="1c3bcd18-50d5-4a26-bcd6-b47632ed0dc8"
# %cd ../
# !zip -r /content/osm_test_data_set/map.zip /content/osm_test_data_set/
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="mDf0i_KS7dIH" outputId="0ed584e1-8b25-4c3e-81cb-786d4439b1ab"
from google.colab import files
files.download("/content/osm_test_data_set/map.zip")
# + [markdown] id="b6pyZEum8Ood"
# **Step 5: Visualization using GMNS tool:**
# By uploading node.csv and link.csv at https://asu-trans-ai-lab.github.io/index.html#,
# you can create custom online maps for any GMNS network files.
# To view zone and demand information, use the QGIS/NeXTA tools described in this guide: https://github.com/asu-trans-ai-lab/traffic-engineering-and-analysis/blob/master/undergraduate_student_project/QGIS%20For%20Gmns%20User%20Guide_v0.5.pdf
# + [markdown] id="grPzsCuq9rcq"
# **Option for downloading OSM map.osm file for the area of interest**
#
# On the OpenStreetMap homepage, click the Export button to enter Export mode. Before downloading, you may need to pan and zoom the map to make sure that your target area is properly shown on the screen. Alternatively, use "Manually select a different area" to select your area more precisely. Click the blue Export button to export the network you want.
#
# Note that if the target area is too large, you may get an error message: “You requested too many nodes (limit is 50000). Either request a smaller area, or use planet.osm”. In this case, you can always click Overpass API to download the network you need via a mirror site.
#
# You can upload the file to the Google Colab environment as shown below and repeat steps 2 through 5 of the OSM2GMNS workflow.
#
#
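# As a supplementary sketch (assuming the public Overpass endpoint at
# https://overpass-api.de), the same bounding-box extract can be fetched
# programmatically; the coordinates in the usage note are placeholders:

```python
def overpass_map_url(min_lon, min_lat, max_lon, max_lat,
                     endpoint="https://overpass-api.de/api/map"):
    """Build the Overpass API URL for a bounding-box OSM extract."""
    return "{}?bbox={},{},{},{}".format(endpoint, min_lon, min_lat, max_lon, max_lat)

# usage: download with requests and save as map.osm
#   import requests
#   osm_xml = requests.get(overpass_map_url(-111.94, 33.42, -111.92, 33.44)).content
#   open('map.osm', 'wb').write(osm_xml)
```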
# + id="A3wSE0zi9u5L"
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))
| osm2gmns.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/HSE-LAMBDA/MLatMisis-2019/blob/master/Introduction/01-Welcome.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="n_nitmHugcSH" colab_type="text"
# # Welcome
# + [markdown] id="pfojW1Laghph" colab_type="text"
# During the practical session on the data analysis we are going to use [Python programming language](https://www.python.org) in the [Google Colab environment](https://colab.research.google.com).
#
# If you are new to Python, please consider reading through the following tutorial:
# - https://docs.python.org/3.6/tutorial/
#
# In particular, the following parts of it should provide a more or less comprehensive introduction to the must-know basics:
# - https://docs.python.org/3.6/tutorial/introduction.html
# - https://docs.python.org/3.6/tutorial/controlflow.html
# - https://docs.python.org/3.6/tutorial/datastructures.html
# - https://docs.python.org/3.6/tutorial/modules.html
#
#
# + [markdown] id="wNKpO_suinc-" colab_type="text"
# An overview of basic features of the Google Colab environment can be found [here](https://colab.research.google.com/notebooks/basic_features_overview.ipynb).
# + [markdown] id="YdeKlZBujiKN" colab_type="text"
# # A few brief examples
# + [markdown] id="eJgglc3Rqswh" colab_type="text"
# In this section we're just throwing a few basic examples at you, without much explanation. Make sure you run the cells below. Try experimenting with these examples, modify them with different numbers, operations etc.
# + [markdown] id="RqOTq_6hjkyP" colab_type="text"
# ## Using python as calculator
# + id="JVfrJYBHgbXG" colab_type="code" colab={}
1 + 2
# + id="YhdP8Ksajr7q" colab_type="code" colab={}
5 * (3 - 1)
# + id="mWdxx3vjjwNJ" colab_type="code" colab={}
2**3
# + id="lmTFsTApj2zC" colab_type="code" colab={}
"Hello" + " " + "world"
# + [markdown] id="gVRXRn6QkU18" colab_type="text"
# ## Variables
# + id="51LslOEhkWsJ" colab_type="code" colab={}
a = 1
b = 2
c = a / b
print(a, b, c)
# Note that `c` has different type (float) compared to `a` and `b` (which are of integer type)
print(type(a), type(b), type(c))
# + id="wHbXRpmEk4Ba" colab_type="code" colab={}
print(a == b)
print(a + 1 > b)
print(a + 1 >= b)
print(type(a + 1 > b))
# + [markdown] id="xtEdQYzzkPK5" colab_type="text"
# ## Loops and iterables
# + id="pUary3TNkAYj" colab_type="code" colab={}
a = 0
while a < 10:
    a += 1
    print(a**2)
# + id="-Z4twE06lHgr" colab_type="code" colab={}
a = [1, 2, 3, 'Hello']
for i in a:
    print(i, type(i))
print(type(a))
# + id="Z1jg1HSuliJT" colab_type="code" colab={}
a = []
for i in range(5):
    a.append(i)
print(a)
# + id="vjkYAJBFlXA9" colab_type="code" colab={}
a = []
for i in range(1, 20, 3):
    a.insert(0, i)
print(a)
# + [markdown] id="rHKmw61Il80a" colab_type="text"
# ## Functions
# + id="igddBBlblfkO" colab_type="code" colab={}
def my_function(x):
    return x**2

# For such simple functions one may use lambdas:
my_lambda_function = lambda x: x**2

for i in range(5):
    print(my_function(i), my_lambda_function(i))
# + [markdown] id="4pNRGgxv3ABa" colab_type="text"
# # A bit more comprehensive intro
# + [markdown] id="vr0nTYTtrZ02" colab_type="text"
# In this section we'll show you a few more examples, now with a bit of explanation. Please run the cells and follow the instructions below.
# + [markdown] id="_H8HstR39gRm" colab_type="text"
# ### Basic types and operations
# + [markdown] id="1Nlg4jJyl7ag" colab_type="text"
# Now, let's start with defining some variables of basic types:
# + id="a4bOzlpl-UCk" colab_type="code" colab={}
# Anything after a '#' is a comment
# integers:
a1 = 1 # Define an integer variable named 'a1' of value 1
a2 = -42
# floats:
b = 1.0
c = .5
d = 8.
e = 1.2e3
print(a1, a2)
print(b)
print(c)
print(d)
print(e)
# + id="pEscnUhk-T1a" colab_type="code" colab={}
# complex numbers:
c1 = 2 + 3j
c2 = 1/1j
print(c1)
print(c2)
# + id="G8aupHIm-Too" colab_type="code" colab={}
# strings
f1 = 'Hello!'
f2 = "Hi"
g = """Hi there!
I'm a multiline string!"""
h = 'foo' 'bar'
i = '"'"'"'"'"'""'"'"'"'"'"'
print(f1, f2)
print(g)
print(h)
print(i)
# + id="n5M1ZaJgQWGJ" colab_type="code" colab={}
# booleans
j = True
k = False
l = j == k
print(j, k, l)
# + id="S5Sg3FEynV-d" colab_type="code" colab={}
# In case you want to check variable type:
print(type(a1))
print(type(e))
# + id="FUdV_HZn8U67" colab_type="code" colab={}
# YOUR CODE HERE
# - print out the type of other variables we've defined above
# - print out the type of 'print' function
# - print out the type of 'type' function
# + id="v4g9Jol3nnVk" colab_type="code" colab={}
# Operations:
print(2 + 2)
print(2 - 2)
print(2 * 2)
print(1 / 2) # note that python is smart enough to convert integer to float here
print(7 // 3) # floor division
print(2019 % 100) # remainder of division
print(2**4) # exponentiation
a = 1
b = 2
# Some string operations and formatting:
print("Hello " + "world!")
print("%i + %i = %i" % (a, b, a + b))
print("{} + {} = {}".format(a, b, a + b))
print("{var_a} + {var_b} = {a_plus_b_result}".format(
a_plus_b_result=a+b, var_a=a, var_b=b))
print("1 + 2 = %.4f" % (1 + 2))
print("1 + 2 = {:.4f}".format(1 + 2))
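# A supplementary example (not in the original notebook): since Python 3.6 the
# same formatting can be written more compactly with f-strings, which evaluate
# expressions inline and accept the same format specifiers as str.format:

```python
a = 1
b = 2
print(f"{a} + {b} = {a + b}")
print(f"1 + 2 = {1 + 2:.4f}")
```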
# + [markdown] id="FuUDOWzF3tJA" colab_type="text"
# Note that after a cell execution the value of the last expression will be printed out, so you don't always have to use the `print` function:
# + id="BA8Cjjf3n9at" colab_type="code" colab={}
a = 1
b = 2
a + b
# + [markdown] id="MKP-eUtY4QNp" colab_type="text"
# Unless you finish a line with a semicolon:
# + id="RTvObVSS4PVU" colab_type="code" colab={}
a + b;
# + [markdown] id="_XbtNJNo4kls" colab_type="text"
# We've defined a number of variables so far. Here's how you can list all of them (+ some internals of python):
# + id="nRVS4csJ4jJN" colab_type="code" colab={}
dir()
# + [markdown] id="v0aNEFQY5f8t" colab_type="text"
# In general, one-letter variable names are bad practice. It's preferable to give your variables meaningful names.
#
# Let's get rid of all the variables we've defined by restarting the runtime. You can do this by clicking the `Runtime->Restart runtime...` menu item.
#
# Now `dir()` should show that those variables are gone:
# + id="kS5VvAoN6JVL" colab_type="code" colab={}
dir()
# + [markdown] id="NsvDiBuY5Gtt" colab_type="text"
# Note: **Be careful with your variable names!** Python allows you to use the built-in function names for your variables, which can mess things up:
# + id="cFa4gcsu6q88" colab_type="code" colab={}
print = False
print
# + [markdown] id="OD2vKRLB7hlN" colab_type="text"
# At this point you can't use the `print` function for output:
# + id="bxwMGoZX6wJV" colab_type="code" colab={}
print(10)
# + [markdown] id="EO4sArU37q8x" colab_type="text"
# One way of fixing this is to reset the runtime as we did before. Another way is to use the `del` statement:
# + id="EdkDrH1F7nwS" colab_type="code" colab={}
del print
print(10)
# + [markdown] id="zEwxe19U_Ihq" colab_type="text"
# ### Composite types
# + [markdown] id="00Wdm1BEAGSa" colab_type="text"
# We'll start with lists - they represent arrays of objects:
# + id="QF4pb46M8Fr9" colab_type="code" colab={}
array1 = [1, 2, -3, 'hello', 4e5, 8j, False]
print(array1)
print(type(array1))
# + [markdown] id="A7vGe8lDANd3" colab_type="text"
# Addition means concatenation for lists:
# + id="LInAFaHm_zld" colab_type="code" colab={}
print(array1 + array1)
# + [markdown] id="ZZsLKJhKAQgm" colab_type="text"
# Another way of adding elements to a list (inplace) is by using the `append` method:
# + id="zz8V3id3_98u" colab_type="code" colab={}
array1.append('world')
array1
# + [markdown] id="Avjrs7C0AdWo" colab_type="text"
# Lists can contain other lists:
# + id="eM5_GSw0Ab7J" colab_type="code" colab={}
array2 = [[True, False], ['Hello'], 42]
array2
# + [markdown] id="6eX11ZlxBDV4" colab_type="text"
# or even be recursive:
# + id="or2i50-QBC0L" colab_type="code" colab={}
array2.append(array2)
array2
# + [markdown] id="PM0ukvotBLfE" colab_type="text"
# Here's how you can index lists (indexing starts with 0):
# + id="nFI445r_BBh1" colab_type="code" colab={}
print(array1[0]) # first item
print(array1[-1]) # last item
print(array1[-2]) # one before last
# + [markdown] id="-Q9kn_sOC55y" colab_type="text"
# You can also slice lists:
# + id="nWi3Ce3SC2xZ" colab_type="code" colab={}
print(array1[1:6:2]) # most general form - every 2nd element from 1 to 6
print(array1[2::3]) # elements from 2 to the end with step 3
print(array1[::2]) # every 2nd element starting from 0
print(array1[::-1]) # all list in reverse order
print(array1[2:5]) # elements from 2 to 5 (not including 5)
print(array1[-2:]) # last two elements of the list
# + [markdown] id="1vJVR34SD4Ux" colab_type="text"
# A very similar type is called "tuple". Similarly to lists, tuples are arrays of objects. The major difference is that they are **immutable**.
# + id="nony88Y4CsoI" colab_type="code" colab={}
tuple1 = (1, 2, 'foo')
tuple2 = tuple(array2) # You can initialize tuples with lists and vice versa
print(tuple1)
print(tuple2)
# + [markdown] id="4wgK4O_QEwpv" colab_type="text"
# Note that inner lists from `array2` are stored as lists within the `tuple2`, so they are mutable:
# + id="vShUqu8SEc_7" colab_type="code" colab={}
tuple2[0].append(8)
print(tuple2)
# + [markdown] id="h_BsFBKRFTYE" colab_type="text"
# Also note, that the first item in `array2` has changed as well:
# + id="Va_HqVsmFIjX" colab_type="code" colab={}
array2
# + [markdown] id="LVnOMmgQFkqm" colab_type="text"
# This illustrates a very important feature of python: all objects are stored by reference. Since `tuple2` was initialized from `array2`, they both reference the same object instances.
# + [markdown] id="4TtM9brjGQAR" colab_type="text"
# You can check whether two variables are referencing the same object with `is` operator:
# + id="9mbgGuc0Fbgh" colab_type="code" colab={}
print(array2[-1] is array2)
print(tuple2[0] is array2[0])
# + [markdown] id="lwGbXShSHUAo" colab_type="text"
# Two empty tuples created with the `tuple()` expression will reference the same object:
# + id="QOQ71ETcHmpX" colab_type="code" colab={}
tuple() is tuple()
# + [markdown] id="GeTA_nFSHqOI" colab_type="text"
# while two empty lists won't, since lists are mutable (and the two empty list instances may be changed in a different way):
# + id="s1TzcRMwHofA" colab_type="code" colab={}
list() is list()
# + [markdown] id="D_71MZB0NiOP" colab_type="text"
# Another important type is dictionary. Dictionaries are key->value mappings with a restriction that keys can only be immutable objects.
#
# Several ways to create and fill a dictionary:
# + id="v6WfRilNHuWR" colab_type="code" colab={}
dict1 = {"foo" : 'bar', -3 : False}
dict2 = dict(foo='bar', hello='world')
print(dict1)
print(dict2)
# + [markdown] id="2SELmRyoOe3_" colab_type="text"
# Accessing elements:
# + id="Nmmm2ErNObj7" colab_type="code" colab={}
print(dict1[-3])
print(dict2['foo'])
dict2['new_key'] = 'new_value'
print(dict2)
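# A couple of further dictionary operations worth knowing (a supplementary
# example, not from the original notebook): `get` returns a default instead of
# raising a `KeyError` for missing keys, and `items` iterates over pairs:

```python
d = {'foo': 'bar', 'hello': 'world'}
print(d.get('foo'))            # existing key
print(d.get('missing', 42))    # default instead of a KeyError
for key, value in d.items():
    print(key, '->', value)
```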
# + [markdown] id="EVScmD0WOx7Y" colab_type="text"
# ### Loops, conditionals, functions
# + [markdown] id="tFQEd2ylO-8A" colab_type="text"
# A loop over elements of a list:
# + id="_yO1tD-mOt5V" colab_type="code" colab={}
objects = [1, 2, 'hello', False]
for obj in objects:
    print(obj)
# Note that the body of the loop is indented.
# + [markdown] id="8qd7noUBPfiY" colab_type="text"
# Use the `range` function to iterate over a specified range of numbers:
# + id="dmCu-COkPVJ5" colab_type="code" colab={}
for i in range(5):
    print(i)
# + [markdown] id="-XwmCqJRQEZB" colab_type="text"
# N.B.: `range` has more arguments. Place the cursor right after the opening bracket and hit **TAB** to see help:
# + id="jsnZXCXWQDDl" colab_type="code" colab={}
for i in range(# <--- press TAB
# YOUR CODE HERE:
# print all the numbers: 20, 23, 26... up to 60
# + [markdown] id="ZikK4DnZQqqW" colab_type="text"
# Conditions:
# + id="mpk_kkcIQqQf" colab_type="code" colab={}
for i in range(100):
    if i % 7 == 1 and i % 5 == 3:
        print("i % 7 == 1 and i % 5 == 3:", i)
    if not (i > 3) or i > 98:
        print("not (i > 3) or i > 98:", i)
# + id="zOolnUZeOMdc" colab_type="code" colab={}
array1 = [3, 15, 27, 42]
array2 = []
for i in range(30):
    if i in array1:
        array2.append(i**2)
print(array2)
# + [markdown] id="8NITSV_CO0-S" colab_type="text"
# Functions:
# + id="Xn5yZ3ZNOt7R" colab_type="code" colab={}
# definition:
def some_function(argument1, argument2):
    print(argument1 + argument2)

# calls:
some_function(3, 8)
some_function(['foo'], ['bar'])
# + [markdown] id="0mofkWp6PVTu" colab_type="text"
# Arguments with default values:
# + id="mm4QBW3DPJlO" colab_type="code" colab={}
def some_function(argument1, argument2=42):
    print("{} ----- {}".format(argument1, argument2))
some_function(18)
some_function(18, 19)
some_function(argument2=19, argument1=18)
some_function(some_function)
# + [markdown] id="fzt2uWp9bdKL" colab_type="text"
# Function returning a value:
# + id="uWBhj7B3bbvN" colab_type="code" colab={}
def square(x):
    return x**2

for i in range(4):
    print(square(i))
# + [markdown] id="AQVipi_9SDBR" colab_type="text"
# # Tasks
# + [markdown] id="oB-AV1muSlVP" colab_type="text"
# ## Task 1
# + [markdown] id="a57M3R8qQpTr" colab_type="text"
# Now you should be equipped well enough to solve this famous job interview problem:
#
#
#
# > Write a function called `FooBar` that takes an integer `n` and, for each number from 1 up to `n`, prints on a new line:
# * "Foo", if the number is divisible by 3;
# * "Bar", if the number is divisible by 5;
# * "FooBar", if the number is divisible by both 3 and 5;
# * the number itself otherwise.
#
# > For example, FooBar(15) should print:
# ```
# 1
# 2
# Foo
# 4
# Bar
# Foo
# 7
# 8
# Foo
# Bar
# 11
# Foo
# 13
# 14
# FooBar
# ```
#
#
#
#
#
#
# + id="_zd8CQzEPwKk" colab_type="code" colab={}
# Your code here
# + [markdown] id="BjLyprScauk5" colab_type="text"
# ## Task 2
# + [markdown] id="NTxZz0UhcHQk" colab_type="text"
# Write a function calculating the factorial of an integer number.
#
# *Suggestion: use recursion.*
# + id="IhXkCdvDcdaH" colab_type="code" colab={}
# Your code here
# + [markdown] id="cZ3xv_5Mcfyc" colab_type="text"
# ## Task 3
# + [markdown] id="3pw6Zh7CaxM6" colab_type="text"
# Write a function that takes two numbers `m` and `n`, and a list `array`, appends `m` to `array` `n` times, and returns the result.
# + id="YAYd3skXawL7" colab_type="code" colab={}
# Your code here
# + [markdown] id="cwi1tAOSejh9" colab_type="text"
# Modify the function you wrote such that the `array` argument is optional (the function should append to an empty list in case array is not provided).
#
# *Hint: make sure the result is correct when you make several subsequent calls without providing the `array` argument.*
# + id="GE6Yh7sCex_Y" colab_type="code" colab={}
# Your code here
| Introduction/01-Welcome.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Import libraries
# +
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# %matplotlib inline
matplotlib.style.use('ggplot')
import random
import scipy.sparse as sparse
import scipy.io
from keras.utils import to_categorical
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from skfeature.function.sparse_learning_based import NDFS
from skfeature.utility import construct_W
from skfeature.utility.sparse_learning import feature_ranking
from sklearn.impute import SimpleImputer
import time
import pandas as pd
# -
#--------------------------------------------------------------------------------------------------------------------------------
def ETree(p_train_feature, p_train_label, p_test_feature, p_test_label, p_seed):
    clf = ExtraTreesClassifier(n_estimators=50, random_state=p_seed)
    # Training
    clf.fit(p_train_feature, p_train_label)
    # Training accuracy
    print('Training accuracy:', clf.score(p_train_feature, np.array(p_train_label)))
    print('Training accuracy:', accuracy_score(np.array(p_train_label), clf.predict(p_train_feature)))
    #print('Training accuracy:',np.sum(clf.predict(p_train_feature)==np.array(p_train_label))/p_train_label.shape[0])
    # Testing accuracy
    print('Testing accuracy:', clf.score(p_test_feature, np.array(p_test_label)))
    print('Testing accuracy:', accuracy_score(np.array(p_test_label), clf.predict(p_test_feature)))
    #print('Testing accuracy:',np.sum(clf.predict(p_test_feature)==np.array(p_test_label))/p_test_label.shape[0])
#--------------------------------------------------------------------------------------------------------------------------------
def write_to_csv(p_data, p_path):
    dataframe = pd.DataFrame(p_data)
    dataframe.to_csv(p_path, mode='a', header=False, index=False, sep=',')
# # 2. Loading data
# +
data_frame=pd.read_excel('./Dataset/Mice/Data_Cortex_Nuclear.xls',sheet_name='Hoja1')
data_arr=(np.array(data_frame)[:,1:78]).copy()
label_arr=(np.array(data_frame)[:,81]).copy()
for index_i in np.arange(len(label_arr)):
    if label_arr[index_i] == 'c-CS-s':
        label_arr[index_i] = '0'
    if label_arr[index_i] == 'c-CS-m':
        label_arr[index_i] = '1'
    if label_arr[index_i] == 'c-SC-s':
        label_arr[index_i] = '2'
    if label_arr[index_i] == 'c-SC-m':
        label_arr[index_i] = '3'
    if label_arr[index_i] == 't-CS-s':
        label_arr[index_i] = '4'
    if label_arr[index_i] == 't-CS-m':
        label_arr[index_i] = '5'
    if label_arr[index_i] == 't-SC-s':
        label_arr[index_i] = '6'
    if label_arr[index_i] == 't-SC-m':
        label_arr[index_i] = '7'
label_arr_onehot = label_arr  # to_categorical(label_arr)
# Show before Imputer
#print(data_arr[558])
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
imp_mean.fit(data_arr)
data_arr=imp_mean.transform(data_arr)
# Show after Imputer
#print(data_arr[558])
# -
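# The per-class `if` chain in the cell above can be collapsed into a single
# dictionary lookup; a supplementary sketch (plain Python, same class codes):

```python
CLASS_CODES = {
    'c-CS-s': '0', 'c-CS-m': '1', 'c-SC-s': '2', 'c-SC-m': '3',
    't-CS-s': '4', 't-CS-m': '5', 't-SC-s': '6', 't-SC-m': '7',
}

def encode_labels(labels):
    """Map the behaviour-class strings to their numeric string codes."""
    return [CLASS_CODES[label] for label in labels]
```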
data_arr=MinMaxScaler(feature_range=(0,1)).fit_transform(data_arr)
# +
C_train_x,C_test_x,C_train_y,C_test_y= train_test_split(data_arr,label_arr_onehot,test_size=0.2,random_state=seed)
x_train,x_validate,y_train_onehot,y_validate_onehot= train_test_split(C_train_x,C_train_y,test_size=0.1,random_state=seed)
x_test=C_test_x
y_test_onehot=C_test_y
print('Shape of x_train: ' + str(x_train.shape))
print('Shape of x_validate: ' + str(x_validate.shape))
print('Shape of x_test: ' + str(x_test.shape))
print('Shape of y_train: ' + str(y_train_onehot.shape))
print('Shape of y_validate: ' + str(y_validate_onehot.shape))
print('Shape of y_test: ' + str(y_test_onehot.shape))
print('Shape of C_train_x: ' + str(C_train_x.shape))
print('Shape of C_train_y: ' + str(C_train_y.shape))
print('Shape of C_test_x: ' + str(C_test_x.shape))
print('Shape of C_test_y: ' + str(C_test_y.shape))
# -
key_feture_number=10
# # 3. Classifying 1
# ### Extra Trees
# +
train_feature=C_train_x
train_label=C_train_y
test_feature=C_test_x
test_label=C_test_y
print('Shape of train_feature: ' + str(train_feature.shape))
print('Shape of train_label: ' + str(train_label.shape))
print('Shape of test_feature: ' + str(test_feature.shape))
print('Shape of test_label: ' + str(test_label.shape))
p_seed=seed
ETree(train_feature,train_label,test_feature,test_label,p_seed)
# -
num_cluster=len(np.unique(label_arr))
# # 4. Model
# +
start = time.perf_counter()  # time.clock() was removed in Python 3.8
# construct affinity matrix
kwargs_W = {"metric": "euclidean", "neighborMode": "knn", "weightMode": "heatKernel", "k": 5, 't': 1}
train_W = construct_W.construct_W(train_feature, **kwargs_W)
# obtain the feature scores and rank the features by score
train_score = NDFS.ndfs(train_feature, W=train_W, n_clusters=num_cluster)
train_idx = feature_ranking(train_score)
# obtain the dataset on the selected features
train_selected_x = train_feature[:, train_idx[0:key_feture_number]]
print("train_selected_x", train_selected_x.shape)
test_W = construct_W.construct_W(test_feature, **kwargs_W)
# obtain the feature scores and rank the features by score
test_score = NDFS.ndfs(test_feature, W=test_W, n_clusters=num_cluster)
test_idx = feature_ranking(test_score)
# obtain the dataset on the selected features
test_selected_x = test_feature[:, test_idx[0:key_feture_number]]
print("test_selected_x", test_selected_x.shape)
time_cost = time.perf_counter() - start
write_to_csv(np.array([time_cost]), "./log/NDFS_time" + str(key_feture_number) + ".csv")
# -
# # 5. Classifying 2
# ### Extra Trees
# +
train_feature=train_selected_x
train_label=C_train_y
test_feature=test_selected_x
test_label=C_test_y
print('Shape of train_feature: ' + str(train_feature.shape))
print('Shape of train_label: ' + str(train_label.shape))
print('Shape of test_feature: ' + str(test_feature.shape))
print('Shape of test_label: ' + str(test_label.shape))
p_seed=seed
ETree(train_feature,train_label,test_feature,test_label,p_seed)
# -
# # 6. Reconstruction loss
# +
from sklearn.linear_model import LinearRegression
def mse_check(train, test):
    LR = LinearRegression(n_jobs=-1)
    LR.fit(train[0], train[1])
    MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean()
    return MSELR
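# The reconstruction loss above regresses the full feature matrix on the
# selected features and reports the test MSE. A numpy-only sketch of the same
# quantity (assumption: plain least squares with an intercept column, mirroring
# sklearn's LinearRegression):

```python
import numpy as np

def lstsq_reconstruction_mse(train_sel, train_full, test_sel, test_full):
    """Linear reconstruction MSE of full features from selected features."""
    # fit  full ≈ [selected | 1] @ W  on the training split
    A = np.hstack([train_sel, np.ones((train_sel.shape[0], 1))])
    W, *_ = np.linalg.lstsq(A, train_full, rcond=None)
    # evaluate on the test split
    A_test = np.hstack([test_sel, np.ones((test_sel.shape[0], 1))])
    return ((A_test @ W - test_full) ** 2).mean()
```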
# +
train_feature_tuple=(train_selected_x,C_train_x)
test_feature_tuple=(test_selected_x,C_test_x)
reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple)
print(reconstruction_loss)
# -
| Python/AbsoluteAndOtherAlgorithms/1MiceProtein/NDFS_10.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ''
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Optimization towards a Perfect Entangler
# + attributes={"classes": [], "id": "", "n": "1"}
# NBVAL_IGNORE_OUTPUT
# %load_ext watermark
import qutip
import numpy as np
import scipy
import matplotlib
import matplotlib.pylab as plt
import krotov
from IPython.display import display
import weylchamber as wc
from weylchamber.visualize import WeylChamber
from weylchamber.coordinates import from_magic
# %watermark -v --iversions
# -
# $\newcommand{tr}[0]{\operatorname{tr}}
# \newcommand{diag}[0]{\operatorname{diag}}
# \newcommand{abs}[0]{\operatorname{abs}}
# \newcommand{pop}[0]{\operatorname{pop}}
# \newcommand{aux}[0]{\text{aux}}
# \newcommand{opt}[0]{\text{opt}}
# \newcommand{tgt}[0]{\text{tgt}}
# \newcommand{init}[0]{\text{init}}
# \newcommand{lab}[0]{\text{lab}}
# \newcommand{rwa}[0]{\text{rwa}}
# \newcommand{bra}[1]{\langle#1\vert}
# \newcommand{ket}[1]{\vert#1\rangle}
# \newcommand{Bra}[1]{\left\langle#1\right\vert}
# \newcommand{Ket}[1]{\left\vert#1\right\rangle}
# \newcommand{Braket}[2]{\left\langle #1\vphantom{#2} \mid
# #2\vphantom{#1}\right\rangle}
# \newcommand{op}[1]{\hat{#1}}
# \newcommand{Op}[1]{\hat{#1}}
# \newcommand{dd}[0]{\,\text{d}}
# \newcommand{Liouville}[0]{\mathcal{L}}
# \newcommand{DynMap}[0]{\mathcal{E}}
# \newcommand{identity}[0]{\mathbf{1}}
# \newcommand{Norm}[1]{\lVert#1\rVert}
# \newcommand{Abs}[1]{\left\vert#1\right\vert}
# \newcommand{avg}[1]{\langle#1\rangle}
# \newcommand{Avg}[1]{\left\langle#1\right\rangle}
# \newcommand{AbsSq}[1]{\left\vert#1\right\vert^2}
# \newcommand{Re}[0]{\operatorname{Re}}
# \newcommand{Im}[0]{\operatorname{Im}}$
#
# This example demonstrates the optimization with an "unconventional"
# optimization target. Instead of a state-to-state transition, or the realization
# of a specific quantum gate, we optimize for an arbitrary perfectly entangling
# gate. See
#
# * <NAME>, et al., Phys. Rev. A 91, 062306 (2015)
#
# * <NAME>, et al., Phys. Rev. A 91, 062307 (2015)
#
# for details.
# ## Hamiltonian
# We consider a generic two-qubit Hamiltonian (motivated from the example of two
# superconducting transmon qubits, truncated to the logical subspace),
#
# $$
# \begin{equation}
# \op{H}(t)
# = - \frac{\omega_1}{2} \op{\sigma}_{z}^{(1)}
# - \frac{\omega_2}{2} \op{\sigma}_{z}^{(2)}
# + 2 J \left(
# \op{\sigma}_{x}^{(1)} \op{\sigma}_{x}^{(2)}
# + \op{\sigma}_{y}^{(1)} \op{\sigma}_{y}^{(2)}
# \right)
# + u(t) \left(
# \op{\sigma}_{x}^{(1)} + \lambda \op{\sigma}_{x}^{(2)}
# \right),
# \end{equation}
# $$
#
# where $\omega_1$ and $\omega_2$ are the energy level splitting of the
# respective qubit, $J$ is the effective coupling strength and $u(t)$ is the
# control field. $\lambda$ defines the strength of the qubit-control coupling for
# qubit 2, relative to qubit 1.
#
# We use the following parameters:
# +
w1 = 1.1 # qubit 1 level splitting
w2 = 2.1 # qubit 2 level splitting
J = 0.2 # effective qubit coupling
u0 = 0.3 # initial driving strength
la = 1.1 # relative pulse coupling strength of second qubit
T = 25.0 # final time
nt = 250 # number of time steps
tlist = np.linspace(0, T, nt)
# -
# These are for illustrative purposes only, and do not correspond to any
# particular physical system.
# The initial guess is defined as
#
#
#
#
#
def eps0(t, args):
return u0 * krotov.shapes.flattop(
t, t_start=0, t_stop=T, t_rise=(T / 20), t_fall=(T / 20), func='sinsq'
)
# + attributes={"classes": [], "id": "", "n": "10"}
def plot_pulse(pulse, tlist):
fig, ax = plt.subplots()
if callable(pulse):
pulse = np.array([pulse(t, args=None) for t in tlist])
ax.plot(tlist, pulse)
ax.set_xlabel('time')
ax.set_ylabel('pulse amplitude')
plt.show(fig)
# -
plot_pulse(eps0, tlist)
# We instantiate the Hamiltonian with this guess pulse:
# +
def hamiltonian(w1=w1, w2=w2, J=J, la=la, u0=u0):
"""Two qubit Hamiltonian
Args:
w1 (float): energy separation of the first qubit levels
w2 (float): energy separation of the second qubit levels
J (float): effective coupling between both qubits
        la (float): factor by which the pulse coupling strength differs for the second qubit
u0 (float): constant amplitude of the driving field
"""
# local qubit Hamiltonians
Hq1 = 0.5 * w1 * np.diag([-1, 1])
Hq2 = 0.5 * w2 * np.diag([-1, 1])
# lift Hamiltonians to joint system operators
H0 = np.kron(Hq1, np.identity(2)) + np.kron(np.identity(2), Hq2)
# define the interaction Hamiltonian
sig_x = np.array([[0, 1], [1, 0]])
sig_y = np.array([[0, -1j], [1j, 0]])
Hint = 2 * J * (np.kron(sig_x, sig_x) + np.kron(sig_y, sig_y))
H0 = H0 + Hint
# define the drive Hamiltonian
    H1 = np.kron(sig_x, np.identity(2)) + la * np.kron(np.identity(2), sig_x)
# convert Hamiltonians to QuTiP objects
H0 = qutip.Qobj(H0)
H1 = qutip.Qobj(H1)
return [H0, [H1, eps0]]
H = hamiltonian(w1=w1, w2=w2, J=J, la=la, u0=u0)
# -
# We also define the canonical two-qubit logical basis,
psi_00 = qutip.Qobj(np.kron(np.array([1, 0]), np.array([1, 0])))
psi_01 = qutip.Qobj(np.kron(np.array([1, 0]), np.array([0, 1])))
psi_10 = qutip.Qobj(np.kron(np.array([0, 1]), np.array([1, 0])))
psi_11 = qutip.Qobj(np.kron(np.array([0, 1]), np.array([0, 1])))
# with the corresponding projectors to calculate population dynamics below.
proj_00 = qutip.ket2dm(psi_00)
proj_01 = qutip.ket2dm(psi_01)
proj_10 = qutip.ket2dm(psi_10)
proj_11 = qutip.ket2dm(psi_11)
# ## Objectives for a perfect entangler
# Our optimization target is the closest perfectly entangling gate, quantified by
# the perfect-entangler functional
#
# $$
# \begin{equation}
# F_{PE} = g_3 \sqrt{g_1^2 + g_2^2} - g_1,
# \end{equation}
# $$
#
# where $g_1, g_2, g_3$ are the local invariants of the implemented gate that
# uniquely identify its non-local content. The local invariants are closely
# related to the Weyl coordinates $c_1, c_2, c_3$, which provide a useful
# geometric visualization in the Weyl chamber. The perfectly entangling gates lie
# within a polyhedron in the Weyl chamber and $F_{PE}$ becomes zero at its
# boundaries. We define $F_{PE} \equiv 0$ for *all* perfect entanglers (inside
# the polyhedron).
#
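# To make the functional concrete: taking the Weyl coordinates in units of
# $\pi$ (an assumed convention here, chosen to match the `weylchamber`
# package), the local invariants have a closed form in terms of
# $(c_1, c_2, c_3)$, so $F_{PE}$ can be evaluated directly. A self-contained
# sketch using only NumPy:

```python
import numpy as np

def local_invariants(c1, c2, c3):
    """Local invariants (g1, g2, g3) from Weyl chamber coordinates.

    Coordinates are in units of pi, so e.g. CNOT sits at (0.5, 0, 0)
    and the identity at (0, 0, 0).
    """
    a1, a2, a3 = np.pi * c1, np.pi * c2, np.pi * c3
    g1 = (np.cos(a1) * np.cos(a2) * np.cos(a3)) ** 2 \
        - (np.sin(a1) * np.sin(a2) * np.sin(a3)) ** 2
    g2 = 0.25 * np.sin(2 * a1) * np.sin(2 * a2) * np.sin(2 * a3)
    g3 = 4 * g1 - np.cos(2 * a1) * np.cos(2 * a2) * np.cos(2 * a3)
    return g1, g2, g3

def F_PE(g1, g2, g3):
    """Perfect-entangler functional; <= 0 inside the polyhedron."""
    return g3 * np.sqrt(g1 ** 2 + g2 ** 2) - g1

# CNOT lies on the boundary of the polyhedron of perfect entanglers:
F_PE(*local_invariants(0.5, 0, 0))   # ≈ 0.0
# the identity is a local gate, far from the polyhedron:
F_PE(*local_invariants(0, 0, 0))     # ≈ 2.0
```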
# A list of four objectives that encodes the minimization of $F_{PE}$ is
# generated by calling the `gate_objectives` function with the canonical basis,
# and `"PE"` as the target "gate":
objectives = krotov.gate_objectives(
basis_states=[psi_00, psi_01, psi_10, psi_11], gate="PE", H=H
)
objectives
# The initial states in these objectives are not the canonical basis states,
# but a Bell basis:
# NBVAL_IGNORE_OUTPUT
for obj in objectives:
display(obj.initial_state)
# Since we don't know *which* perfect entangler the optimization result will
# implement, we cannot associate any "target state" with each objective, and the
# `target` attribute is set to the string 'PE'.
# We can treat the above objectives as a "black box"; the only important
# consideration is that the `chi_constructor` that we will pass to
# `optimize_pulses` for calculating the boundary condition for the backwards
# propagation,
#
# $$
# \begin{equation}
# \ket{\chi_{k}} = \frac{\partial F_{PE}}{\partial \bra{\phi_k}} \Bigg|_{\ket{\phi_{k}(T)}}\,,
# \end{equation}
# $$
#
# must be consistent with how the `objectives` are set up. For the perfect
# entanglers functional, the calculation of the $\ket{\chi_{k}}$ is relatively
# complicated. The `weylchamber` package
# (https://github.com/qucontrol/weylchamber) contains a suitable routine that
# works on the `objectives` exactly as defined above (specifically, under the
# assumption that the $\ket{\phi_k}$ are the appropriate Bell states):
help(wc.perfect_entanglers.make_PE_krotov_chi_constructor)
chi_constructor = wc.perfect_entanglers.make_PE_krotov_chi_constructor(
[psi_00, psi_01, psi_10, psi_11]
)
# Again, the key point to take from this is that when defining a new or unusual
# functional, **the `chi_constructor` must be congruent with the way the
# objectives are defined**. As a user, you can choose whatever definition of
# objectives and implementation of `chi_constructor` is most suitable, as long
# they are compatible.
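# For contrast, the `chi_constructor` for the simple state-to-state overlap
# functional $F_{ss} = \frac{1}{N}\sum_k \AbsSq{\tau_k}$, with $\tau_k =
# \Braket{\phi_k^{\text{tgt}}}{\phi_k(T)}$, has the closed form $\ket{\chi_k}
# = \frac{1}{N}\tau_k\ket{\phi_k^{\text{tgt}}}$. A minimal NumPy sketch of
# this (assuming the `krotov` calling convention, in which a
# `chi_constructor` receives the forward-propagated states, the objectives,
# and the overlaps $\tau_k$):

```python
import numpy as np

def make_chis_ss(target_states):
    """Return a chi_constructor for F_ss = (1/N) sum_k |tau_k|^2,
    for which chi_k = (1/N) * tau_k * |phi_k^tgt>."""
    N = len(target_states)

    def chi_constructor(fw_states_T, objectives, tau_vals):
        # one boundary state per objective, scaled by its overlap
        return [(tau / N) * tgt for (tau, tgt) in zip(tau_vals, target_states)]

    return chi_constructor

# toy example with plain state vectors instead of qutip objects:
tgt = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
fw = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]  # 2nd state missed its target
taus = [np.vdot(t, f) for (t, f) in zip(tgt, fw)]
chis = make_chis_ss(tgt)(fw, objectives=None, tau_vals=taus)
```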
# ## Second Order Update Equation
# As the perfect-entangler functional $F_{PE}$ is non-linear in
# the states, Krotov's method requires the second-order contribution in
# order to guarantee monotonic convergence (see <NAME>, et al., J. Chem.
# Phys. 136, 104103 (2012) for details). The second order update equation
# reads
#
# $$
# \begin{align}
# \epsilon^{(i+1)}(t)
# & =
# \epsilon^{ref}(t) + \frac{S(t)}{\lambda_a} \Im \Bigg\{
# \sum_{k=1}^{N}
# \Bigg\langle
# \chi_k^{(i)}(t)
# \Bigg\vert
# \left.\frac{\partial \Op{H}}{\partial \epsilon}\right\vert_{{\scriptsize \begin{matrix}\phi^{(i+1)}(t) \\\epsilon^{(i+1)}(t)\end{matrix}}}
# \Bigg\vert
# \phi_k^{(i+1)}(t)
# \Bigg\rangle
# \\
# & \qquad +
# \frac{1}{2} \sigma(t)
# \Bigg\langle
# \Delta\phi_k(t)
# \Bigg\vert
# \left.\frac{\partial \Op{H}}{\partial \epsilon}\right\vert_{{\scriptsize \begin{matrix}\phi^{(i+1)}(t)\\\epsilon^{(i+1)}(t)\end{matrix}}}
# \Bigg\vert
# \phi_k^{(i+1)}(t)
# \Bigg\rangle
# \Bigg\}\,,
# \end{align}
# $$
#
# where the term proportional to $\sigma(t)$ defines the second-order
# contribution. In order to use the second-order term, we need to pass
# a function to evaluate this $\sigma(t)$ as `sigma` to `optimize_pulses`. We use
# the equation
#
# $$
# \begin{equation}
# \sigma(t) = -\max\left(\varepsilon_A,2A+\varepsilon_A\right)
# \end{equation}
# $$
#
# with $\varepsilon_A$ a small non-negative number, and $A$ a parameter that can
# be recalculated numerically after each iteration (see <NAME>, et al., J.
# Chem. Phys. 136, 104103 (2012) for details).
#
# Generally, $\sigma(t)$ has parametric dependencies like $A$ in this example,
# which should be refreshed for each iteration. Thus, since `sigma` holds
# internal state, it must be implemented as an object subclassing from
# `krotov.second_order.Sigma`:
#
class sigma(krotov.second_order.Sigma):
def __init__(self, A, epsA=0):
self.A = A
self.epsA = epsA
def __call__(self, t):
ϵ, A = self.epsA, self.A
return -max(ϵ, 2 * A + ϵ)
def refresh(
self,
forward_states,
forward_states0,
chi_states,
chi_norms,
optimized_pulses,
guess_pulses,
objectives,
result,
):
try:
Delta_J_T = result.info_vals[-1][0] - result.info_vals[-2][0]
except IndexError: # first iteration
Delta_J_T = 0
self.A = krotov.second_order.numerical_estimate_A(
forward_states, forward_states0, chi_states, chi_norms, Delta_J_T
)
# This combines the evaluation of the function $\sigma(t)$ with the
# recalculation of $A$ (or whatever parameters another $\sigma(t)$ function
# might contain) in `sigma.refresh`, which `optimize_pulses` invokes
# automatically at the end of each iteration.
# ## Optimization
# Before running the optimization, we define the shape function $S(t)$ to
# maintain the smooth switch-on and switch-off, and the $\lambda_a$ parameter
# that determines the overall magnitude of the pulse update in each iteration:
#
#
#
#
#
# +
def S(t):
"""Shape function for the field update"""
return krotov.shapes.flattop(
t, t_start=0, t_stop=T, t_rise=T / 20, t_fall=T / 20, func='sinsq'
)
pulse_options = {H[1][1]: dict(lambda_a=1.0e2, update_shape=S)}
# -
# In previous examples, we have used `info_hook` routines that display and store
# the value of the functional $J_T$. Here, we will also want to analyze the
# optimization in terms of the Weyl chamber coordinates $(c_1, c_2, c_3)$. We
# therefore write a custom `print_fidelity` routine that prints $F_{PE}$ as well
# as the gate concurrence (an alternative measure for the entangling power of
# quantum gates), and stores a tuple `(F_PE, [c1, c2, c3])` for each iteration
# in `Result.info_vals`.
#
def print_fidelity(**args):
basis = [objectives[i].initial_state for i in [0, 1, 2, 3]]
states = [args['fw_states_T'][i] for i in [0, 1, 2, 3]]
U = wc.gates.gate(basis, states)
c1, c2, c3 = wc.coordinates.c1c2c3(from_magic(U))
g1, g2, g3 = wc.local_invariants.g1g2g3_from_c1c2c3(c1, c2, c3)
conc = wc.perfect_entanglers.concurrence(c1, c2, c3)
F_PE = wc.perfect_entanglers.F_PE(g1, g2, g3)
print(" F_PE: %f\n gate conc.: %f" % (F_PE, conc))
return F_PE, [c1, c2, c3]
# This structure must be taken into account in a `check_convergence` routine:
# it would break routines like `krotov.convergence.value_below`, which assume
# that `Result.info_vals` contains only the values of $J_T$. Here, we define a
# check that stops the optimization as soon as we reach a perfect entangler:
def check_PE(result):
# extract F_PE from (F_PE, [c1, c2, c3])
F_PE = result.info_vals[-1][0]
if F_PE <= 0:
return "achieved perfect entangler"
else:
return None
opt_result = krotov.optimize_pulses(
objectives,
pulse_options=pulse_options,
tlist=tlist,
propagator=krotov.propagators.expm,
chi_constructor=chi_constructor,
info_hook=krotov.info_hooks.chain(
krotov.info_hooks.print_debug_information, print_fidelity
),
check_convergence=check_PE,
sigma=sigma(A=0.0),
iter_stop=20,
)
opt_result
# We can visualize how each iteration of the optimization brings the dynamics
# closer to the polyhedron of perfect entanglers (using the Weyl chamber
# coordinates that we calculated in the `info_hook` routine `print_fidelity`, and
# that were stored in `Result.info_vals`).
w = WeylChamber()
c1c2c3 = [opt_result.info_vals[i][1] for i in range(len(opt_result.iters))]
for i in range(len(opt_result.iters)):
w.add_point(c1c2c3[i][0], c1c2c3[i][1], c1c2c3[i][2])
w.plot()
# The final optimized control field looks like this:
# + attributes={"classes": [], "id": "", "n": "17"}
plot_pulse(opt_result.optimized_controls[0], tlist)
# docs/notebooks/07_example_PE.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a id='intro'></a>
#
# # Tools for exploring and calculating statistics over ICOS Atmospheric data
#
#
# This notebook includes functions in Python for exploring ICOS data. The notebook is divided into the following parts:
#
# - [Import Python modules](#import_modules)
# - [Data availability](#data_availability)
# - [General-purpose functions](#general_funcs)
# - [Bokeh help functions](#bookeh_help_funcs)
# - [Map functions](#map_funcs)
# - [Updating plot functions](#update_plot_funcs)
# - [Plotting functions](#plotting_funcs)
# - [Widget functions](#widget_funcs)
# - [Control input](#control_input)
# - [Statistics](#statistics)
#
#
#
#
# Use the links to quickly navigate to the parts you are interested in.
# <a id='import_modules'></a>
# <br>
# <br>
#
# ## 1. Import modules
# + language="javascript"
# IPython.OutputArea.prototype._should_scroll = function(lines){
# return false;}
# +
#Import modules:
import numpy as np
from numpy import nan
import pandas as pd
from datetime import datetime
import requests
import fnmatch
from ipywidgets import interact, interact_manual, ColorPicker, Dropdown, SelectMultiple, Checkbox, DatePicker
from bokeh.io import show, reset_output, output_notebook
#Import ICOS tools:
from icoscp.sparql import sparqls
from icoscp.sparql.runsparql import RunSparql
from icoscp.cpb.dobj import Dobj
#Set the notebook as the selected output location:
reset_output()
output_notebook()
# -
# <a id='data_availability'></a>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
# <br>
# <br>
#
# ## 2. Data availability
# This part includes functions that return information about ICOS atmosphere stations that produce Level 1 and Level 2 CO, CO$_2$ and CH$_4$ data products.
def create_lookup_df_atc_L1():
"""
Project: 'ICOS Carbon Portal'
Created: Wed Mar 28 17:40:00 2019
Last Changed: Sun Sep 13 14:00:00 2020
Version: 1.1.0
Author(s): Karolina
Description: Return a pandas dataframe with information for all available ICOS Level-1 Atmospheric Data Files.
Input: No input parameter/s
Output: pandas dataframe
columns:
1. URL to ICOS RI Data Object Landing Page (var_name: 'dobj', var_type: String)
2. Filename for Data Object (var_name: 'filename', var_type: String)
3. Name of gas/tracer (var_name: 'variable', var_type: String)
4. Station name (var_name: 'stationName', var_type: String)
5. Sampling height a.g.l. (var_name: 'height', var_type: String)
6. Sampling start time (var_name:'timeStart', var_type: String)
7. Sampling end time (var_name: 'timeEnd', var_type: String)
8. 3-character Station ID (var_name: 'stationId', var_type: String)
"""
#Get ICOS-stations with level-1 gas-data:
icos_stations_L1_gas_df = RunSparql(sparql_query=sparqls.get_icos_stations_atc_L1(), output_format='pandas').run()
#Get ICOS-station info:
station_info_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Create lookup dataframe:
lookup_df = icos_stations_L1_gas_df.join(station_info_df.filter(['stationName',
'stationId']).set_index('stationName'),
on='stationName')
#Return dataframe:
return lookup_df
def create_lookup_df_atc_L2():
"""
Project: 'ICOS Carbon Portal'
Created: Wed Apr 01 10:00:00 2019
Last Changed: Sun Sep 13 14:10:00 2020
Version: 1.1.0
Author(s): Karolina
Description: Return a pandas dataframe with information for all available ICOS Level-2 Atmospheric Data Files.
Input: No input parameter/s
Output: pandas dataframe
columns:
1. URL to ICOS RI Data Object Landing Page (var_name: 'dobj', var_type: String)
2. Filename for Data Object (var_name: 'filename', var_type: String)
3. Name of gas/tracer (var_name: 'variable', var_type: String)
4. Station name (var_name: 'stationName', var_type: String)
5. Sampling height a.g.l. (var_name: 'height', var_type: String)
6. Sampling start time (var_name:'timeStart', var_type: String)
7. Sampling end time (var_name: 'timeEnd', var_type: String)
8. 3-character Station ID (var_name: 'stationId', var_type: String)
"""
    #Get ICOS-stations with level-2 gas-data:
icos_stations_L2_gas_df = RunSparql(sparql_query=sparqls.get_icos_stations_atc_L2(), output_format='pandas').run()
#Get ICOS-station info:
station_info_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Create lookup dataframe:
lookup_df = icos_stations_L2_gas_df.join(station_info_df.filter(['stationName',
'stationId']).set_index('stationName'),
on='stationName')
#Return dataframe:
return lookup_df
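# The join used in both lookup functions attaches the 3-character station ID
# from the station-info table to each data-file row, keyed on `stationName`.
# The pattern can be illustrated with toy data (the station rows below are
# examples only):

```python
import pandas as pd

#Toy file listing (one row per data object):
files_df = pd.DataFrame({'stationName': ['Hyltemossa', 'Norunda'],
                         'variable': ['co2', 'ch4']})
#Toy station-info table:
stations_df = pd.DataFrame({'stationName': ['Hyltemossa', 'Norunda'],
                            'stationId': ['HTM', 'NOR']})
#Join the station ID onto each file row, keyed on the station name:
lookup_df = files_df.join(
    stations_df.filter(['stationName', 'stationId']).set_index('stationName'),
    on='stationName')
```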
# <a id='general_funcs'></a>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
# <br>
# <br>
#
# ## 3. General-purpose functions
#
def printmd(string):
"""
Project: 'ICOS Carbon Portal'
Created: Fri May 10 12:00:00 2019
Last Changed: Fri May 10 12:00:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that allows you to print the string input parameter using
markdown formatting code.
Input parameters: String of characters
(var_name: 'string', var_type: String)
Output: String
"""
#import module:
from IPython.display import Markdown, display
#Enable displaying string with markdown code:
display(Markdown(string))
def get_country_fullname_from_iso3166_2char(countryCode):
#Get iso 3166 translation pandas dataframe:
country_names_codes_iso3166 = pd.read_csv('data/country_names_codes_iso_3166.csv',
header=0,
delimiter=';')
#Check if the input is a valid 2-character long ISO 3166 country code:
    if (countryCode in country_names_codes_iso3166.Alpha_2_code.values):
        #Return the fullname of a country based on the given iso 3166 2-character country code:
        return country_names_codes_iso3166.Country.loc[country_names_codes_iso3166.Alpha_2_code==countryCode].values[0]
#If the input is not a valid 2-character long ISO 3166 country code:
else:
print('Error! Invalid ISO 3166 2-char country code')
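# The lookup pattern above (select a `Country` value by matching on the
# `Alpha_2_code` column) can be exercised without the CSV file by building a
# small dataframe in memory (the two rows below are illustrative only):

```python
import pandas as pd

#Two illustrative rows of an ISO 3166 translation table:
iso3166_df = pd.DataFrame({'Country': ['Sweden', 'Germany'],
                           'Alpha_2_code': ['SE', 'DE']})

def country_from_code(df, code):
    #Validate the code before indexing, to avoid an IndexError on .values[0]:
    if code in df.Alpha_2_code.values:
        return df.Country.loc[df.Alpha_2_code == code].values[0]
    return None

country_from_code(iso3166_df, 'SE')  # 'Sweden'
```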
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
#
# <a id='bookeh_help_funcs'></a>
#
# <br>
# <br>
#
# ## 4. Bokeh help functions
# This part includes help functions for displaying Bokeh plots. The help functions handle:
# - alignment of a secondary y-axis with the primary y-axis in Bokeh plots
# - alignment of secondary and tertiary y-axes with the primary y-axis in Bokeh plots
# - automatic assignment of Bokeh colormaps to plots based on number of information layers
# +
def rounddown_100(x):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 09:00:00 2018
Last Changed: Tue May 07 09:00:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes a number as input and floors it to the nearest "100".
Input parameters: Number (var_name: 'x', var_type: Integer or Float)
    Output: Integer
"""
#Import module:
import numbers
#Check if input parameter is numeric:
if(isinstance(x, numbers.Number)==True):
#If the number is an integral multiple of 100:
if(((x/100.0)%2==0) or (x<=0) or (x==100)):
return(int(x / 100.0) * 100) - 100
#If the input number is NOT an integral multiple of 100:
else:
return(int(x / 100.0) * 100)
#If input parameter is not numeric, prompt an error message:
else:
print("Input parameter is not numeric!")
# -
def roundup_100(x):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 09:00:00 2018
Last Changed: Tue May 07 09:00:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes a number as input and rounds it up to the nearest "100".
Input parameters: Number (var_name: 'x', var_type: Integer or Float)
    Output: Integer
"""
#Import modules:
import math
import numbers
#Check if input parameter is numeric:
if(isinstance(x, numbers.Number)==True):
        #for integral multiples of 100 and for the special cases of 100, 0 and -100:
if(((x/100.0)%2==0) or (x==100) or (x==-100)):
return int(math.ceil(x / 100.0)) * 100 + 100
else:
return int(math.ceil(x / 100.0)) * 100
#If input parameter is not numeric, prompt an error message:
else:
print("Input parameter is not numeric!")
def rounddown_20(x):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 09:00:00 2018
Last Changed: Tue May 07 09:00:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes a number as input and floors it to the nearest "20".
Input parameters: Number (var_name: 'x', var_type: Integer or Float)
    Output: Integer
"""
#Import module:
import math
import numbers
#Check if input parameter is numeric:
if(isinstance(x, numbers.Number)==True):
#If the 2nd digit from the decimal point is an even number:
if(int(x/10.0)%2==0):
return(int(x / 10.0) * 10) - 20
#If the 2nd digit from the decimal point is an odd number:
else:
return(int(x / 10.0) * 10) - 10
#If input parameter is not numeric, prompt an error message:
else:
print("Input parameter is not numeric!")
def roundup_20(x):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 09:00:00 2018
Last Changed: Tue May 07 09:00:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes a number as input and rounds it up to the closest "20".
Input parameters: Number (var_name: 'x', var_type: Integer or Float)
    Output: Integer
"""
#Import module:
import math
import numbers
#Check if input parameter is numeric:
if(isinstance(x, numbers.Number)==True):
#for positive numbers, multiples of 20.0:
if((x>=0)&(((x/10.0)%20)%2 == 0)):
return int(math.ceil(x / 10.0)) * 10 +20
#for positive numbers with an even number as 2nd digit:
elif((x>0)&(int(x/10.0)%2==0)):
return int(math.ceil(x / 10.0)) * 10 +10
#for positive and negative numbers, whose 2nd digit is an odd number (except for i in [-1,-9]):
elif(int(x/10.0)%2!=0):
return int((x / 10.0)) * 10 +10
#for negative numbers, whose 1st or 2nd digit is an even number:
elif((x<-10) & (int(x)%2==0)):
return int((x / 10.0)) * 10 +20
else:
return 0
#If input parameter is NOT numeric, prompt an error message:
else:
print("Input parameter is not numeric!")
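# Note that the helpers above deliberately add extra padding in some branches
# (e.g. `roundup_100` adds a full extra step when the input is already a
# multiple). Plain flooring/ceiling to a multiple, without that padding, is a
# one-liner each; shown here for comparison, not as a drop-in replacement:

```python
import math

def floor_to(x, multiple):
    #Largest integral multiple of `multiple` that is <= x:
    return math.floor(x / multiple) * multiple

def ceil_to(x, multiple):
    #Smallest integral multiple of `multiple` that is >= x:
    return math.ceil(x / multiple) * multiple

floor_to(237, 100)  # 200
ceil_to(237, 20)    # 240
```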
def set_yranges_2y(y1_min, y1_max, y2_min, y2_max, y1_step, y2_step ,new_yrange_name):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes the primary and secondary y-axis min/max values as well as
the step values for every y-axis and the secondary y-axis new range name as input
                 parameters, performs computations so that the two axes are aligned and returns
their corresponding RangeId objects. Works only for Bokeh plots.
Input parameters: 1. Min value of primary y-axis (var_name: 'y1_min', var_type: Integer or Float)
2. Max value of primary y-axis (var_name: 'y1_max', var_type: Integer or Float)
3. Min value of secondary y-axis (var_name: 'y2_min', var_type: Integer or Float)
4. Max value of secondary y-axis (var_name: 'y2_max', var_type: Integer or Float)
5. Step of primary y-axis (var_name: 'y1_step', var_type: Integer or Float)
6. Step of secondary y-axis (var_name: 'y2_step', var_type: Integer or Float)
7. Name of new yrange object for secondary y-axis
                         (var_name: "new_yrange_name", var_type: String)
Output: Bokeh Plot yrange objects for primary and secondary y-axes.
"""
#import modules:
import numpy as np
from bokeh.models import Range1d
#yrange and tick function for plot with primary and secondary y-axis:
yticks1 = np.arange(y1_min, y1_max + y1_step, y1_step)
yticks2 = np.arange(y2_min, y2_max + y2_step, y2_step)
#Get difference in total number of ticks between primary and secondary y-axis:
diff = abs(len(yticks2)-len(yticks1))
#Get how many times the step needs to be added to start and end:
num_of_steps = int(diff/2)
#If the primary and the secondary y-axis have the same number of ticks:
if(diff==0):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min, end=y1_max)
#Set the 2nd y-axis, range-name, range:
extra_y_ranges = {new_yrange_name: Range1d(start=y2_min, end=y2_max)}
#If the primary y-axis has fewer ticks than the secondary y-axis:
elif(len(yticks2)>len(yticks1)):
#If the difference in ticks between the two axes is an odd number:
if(diff%2==1):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min-(y1_step*(num_of_steps+1)), end=y1_max+(y1_step*num_of_steps))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges = {new_yrange_name: Range1d(start=y2_min, end=y2_max)}
#If the difference in ticks between the two axes is an even number:
else:
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min-(y1_step*num_of_steps), end=y1_max+(y1_step*num_of_steps))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges = {new_yrange_name: Range1d(start=y2_min, end=y2_max)}
#If the primary y-axis has more ticks than the secondary y-axis, e.g. len(yticks1)>len(yticks2_test):
else:
#If the difference in ticks between the two axes is an odd number:
if(diff%2==1):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min, end=y1_max)
#Set the 2nd y-axis, range-name, range:
extra_y_ranges = {new_yrange_name: Range1d(start=y2_min - (y2_step*(num_of_steps)), end=y2_max + (y2_step*(num_of_steps+1)))}
#If the difference in ticks between the two axes is an even number:
else:
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min, end=y1_max)
#Set the 2nd y-axis, range-name, range:
extra_y_ranges = {new_yrange_name: Range1d(start=y2_min - (y2_step*num_of_steps), end=y2_max + (y2_step*num_of_steps))}
#Return y-range for primary and secondary y-axes:
return y_range, extra_y_ranges
def set_yranges_3y(y1_min, y1_max, y2_min, y2_max, y3_min, y3_max, y1_step, y2_step, y3_step):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes the primary, secondary and third y-axis min/max values as well as
the step values for every y-axis as input parameters, performs computations so that the
                 three axes are aligned and returns their corresponding RangeId objects.
Works only for Bokeh plots.
Input parameters: 1. Min value of primary y-axis (var_name: 'y1_min', var_type: Integer or Float)
2. Max value of primary y-axis (var_name: 'y1_max', var_type: Integer or Float)
3. Min value of secondary y-axis (var_name: 'y2_min', var_type: Integer or Float)
4. Max value of secondary y-axis (var_name: 'y2_max', var_type: Integer or Float)
5. Min value of third y-axis (var_name: 'y3_min', var_type: Integer or Float)
6. Max value of third y-axis (var_name: 'y3_max', var_type: Integer or Float)
7. Step of primary y-axis (var_name: 'y1_step', var_type: Integer or Float)
8. Step of secondary y-axis (var_name: 'y2_step', var_type: Integer or Float)
9. Step of third y-axis (var_name: 'y3_step', var_type: Integer or Float)
Output: Bokeh Plot yrange objects for primary and secondary y-axes.
"""
#import modules:
import numpy as np
from bokeh.models import Range1d
#yrange and tick function for plot with primary and secondary y-axis:
yticks1 = np.arange(y1_min, y1_max + y1_step, y1_step)
yticks2 = np.arange(y2_min, y2_max + y2_step, y2_step)
yticks3 = np.arange(y3_min, y3_max + y3_step, y3_step)
#Get the number of ticks per y-axis:
y1_num_of_ticks = len(yticks1)
y2_num_of_ticks = len(yticks2)
y3_num_of_ticks = len(yticks3)
#Get difference in total number of ticks between primary and secondary y-axis:
diff_12 = abs(len(yticks2)-len(yticks1))
diff_13 = abs(len(yticks3)-len(yticks1))
diff_23 = abs(len(yticks3)-len(yticks2))
#Get how many times the step needs to be added to start and end:
num_of_steps_12 = int(diff_12/2)
num_of_steps_13 = int(diff_13/2)
num_of_steps_23 = int(diff_23/2)
#If the primary, secondary and 3rd y-axis have the same number of ticks:
if((diff_12==0) and (diff_13==0) and (diff_23==0)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min, end=y1_max)
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min, end=y2_max)
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min, end=y3_max)
#if y-axis 1 is the axis with the highest number of ticks:
elif(max(y1_num_of_ticks, y2_num_of_ticks, y3_num_of_ticks)==y1_num_of_ticks):
#Check if the difference between y-axis 1 and the other axes is an even number:
if((diff_12%2==0) and (diff_13%2==0)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min, end=y1_max)
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min - (y2_step*num_of_steps_12),
end=y2_max + (y2_step*num_of_steps_12))
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min - (y3_step*num_of_steps_13),
end=y3_max + (y3_step*num_of_steps_13))
#Check if the difference between y-axis 1 and the other axes is an odd number:
elif((diff_12%2==1) and (diff_13%2==1)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min, end=y1_max)
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min - (y2_step*(num_of_steps_12)),
end=y2_max + (y2_step*(num_of_steps_12+1)))
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min - (y3_step*(num_of_steps_13)),
end=y3_max + (y3_step*(num_of_steps_13+1)))
#Check if the difference between y-axis 1 and the other axes is an even/odd number:
elif((diff_12%2==0) and (diff_13%2==1)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min, end=y1_max)
#Set the 2nd y-axis, range-name, range: --- > even diff
extra_y_ranges_1 = Range1d(start=y2_min - (y2_step*num_of_steps_12),
end=y2_max + (y2_step*num_of_steps_12))
#Set the 3rd y-axis, range-name, range: --- > odd diff
extra_y_ranges_2 = Range1d(start=y3_min - (y3_step*(num_of_steps_13)),
end=y3_max + (y3_step*(num_of_steps_13+1)))
#Check if the difference between y-axis 1 and the other axes is an odd/even number:
#I.e. (diff_12%2==1) and (diff_13%2==0)
else:
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min, end=y1_max)
#Set the 2nd y-axis, range-name, range: --- > odd diff
extra_y_ranges_1 = Range1d(start=y2_min - (y2_step*(num_of_steps_12)),
end=y2_max + (y2_step*(num_of_steps_12+1)))
#Set the 3rd y-axis, range-name, range: --- > even diff
extra_y_ranges_2 = Range1d(start=y3_min - (y3_step*num_of_steps_13),
end=y3_max + (y3_step*num_of_steps_13))
#if y-axis 2 is the axis with the highest number of ticks:
elif(max(y1_num_of_ticks, y2_num_of_ticks, y3_num_of_ticks)==y2_num_of_ticks):
#Check if the difference between y-axis 2 and the other axes is an even number:
if((diff_12%2==0) and (diff_23%2==0)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min-(y1_step*num_of_steps_12),
end=y1_max+(y1_step*num_of_steps_12))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min, end=y2_max)
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min-(y3_step*num_of_steps_23),
end=y3_max+(y3_step*num_of_steps_23))
#Check if the difference between y-axis 2 and the other axes is an odd number:
elif((diff_12%2==1) and (diff_23%2==1)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min-(y1_step*(num_of_steps_12+1)),
end=y1_max+(y1_step*num_of_steps_12))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min, end=y2_max)
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min-(y3_step*(num_of_steps_23+1)),
end=y3_max+(y3_step*num_of_steps_23))
#Check if the difference between y-axis 2 and the other axes is an even/odd number:
elif((diff_12%2==0) and (diff_23%2==1)):
#Set the range of the 1st y-axis: --- > even diff
y_range = Range1d(start=y1_min-(y1_step*num_of_steps_12),
end=y1_max+(y1_step*num_of_steps_12))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min, end=y2_max)
#Set the 3rd y-axis, range-name, range: --- > odd diff
extra_y_ranges_2 = Range1d(start=y3_min-(y3_step*(num_of_steps_23+1)),
end=y3_max+(y3_step*num_of_steps_23))
#Check if the difference between y-axis 2 and the other axes is an odd/even number:
#I.e. (diff_12%2==1) and (diff_23%2==0)
else:
#Set the range of the 1st y-axis: --- > odd diff
y_range = Range1d(start=y1_min-(y1_step*(num_of_steps_12+1)),
end=y1_max+(y1_step*num_of_steps_12))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min, end=y2_max)
#Set the 3rd y-axis, range-name, range: --- > even diff
extra_y_ranges_2 = Range1d(start=y3_min-(y3_step*num_of_steps_23),
end=y3_max+(y3_step*num_of_steps_23))
#if y-axis 3 is the axis with the highest number of ticks:
elif(max(y1_num_of_ticks, y2_num_of_ticks, y3_num_of_ticks)==y3_num_of_ticks):
#Check if the difference between y-axis 3 and the other axes is an even number:
if((diff_13%2==0) and (diff_23%2==0)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min-(y1_step*num_of_steps_13),
end=y1_max+(y1_step*num_of_steps_13))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min-(y2_step*num_of_steps_23),
end=y2_max+(y2_step*num_of_steps_23))
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min, end=y3_max)
#Check if the difference between y-axis 3 and the other axes is an odd number:
elif((diff_13%2==1) and (diff_23%2==1)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min-(y1_step*(num_of_steps_13+1)),
end=y1_max+(y1_step*num_of_steps_13))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min-(y2_step*(num_of_steps_23+1)),
end=y2_max+(y2_step*num_of_steps_23))
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min, end=y3_max)
#Check if the difference between y-axis 3 and the other axes is an even/odd number:
elif((diff_13%2==0) and (diff_23%2==1)):
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min-(y1_step*num_of_steps_13),
end=y1_max+(y1_step*num_of_steps_13))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min-(y2_step*(num_of_steps_23+1)),
end=y2_max+(y2_step*num_of_steps_23))
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min, end=y3_max)
#Check if the difference between y-axis 3 and the other axes is an odd/even number:
#I.e. (diff_13%2==1) and (diff_23%2==0)
else:
#Set the range of the 1st y-axis:
y_range = Range1d(start=y1_min-(y1_step*(num_of_steps_13+1)),
end=y1_max+(y1_step*num_of_steps_13))
#Set the 2nd y-axis, range-name, range:
extra_y_ranges_1 = Range1d(start=y2_min-(y2_step*num_of_steps_23),
end=y2_max+(y2_step*num_of_steps_23))
#Set the 3rd y-axis, range-name, range:
extra_y_ranges_2 = Range1d(start=y3_min, end=y3_max)
else:
y_range = None
extra_y_ranges_1 = None
extra_y_ranges_2 = None
#Return y-range for primary and secondary y-axes:
return y_range, extra_y_ranges_1, extra_y_ranges_2
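The even/odd branching above boils down to one padding rule per axis pair. A minimal, self-contained sketch of that rule (assumption: `num_of_steps` is the integer half of the tick-count difference, as the branches imply; `pad_range` is a hypothetical helper, not part of this notebook):

```python
def pad_range(v_min, v_max, step, diff):
    """Pad a value range so its tick count matches an axis with `diff` more ticks."""
    num_of_steps = diff // 2
    if diff % 2 == 0:
        # Even difference: split the extra ticks evenly over both ends.
        return v_min - step * num_of_steps, v_max + step * num_of_steps
    # Odd difference: the leftover tick is added below the minimum.
    return v_min - step * (num_of_steps + 1), v_max + step * num_of_steps

# Even difference of 4 adds two steps on each end; odd difference of 5
# adds three steps below and two above:
even_padded = pad_range(0, 10, 1, 4)
odd_padded = pad_range(0, 10, 1, 5)
```

The padded tuples would then feed `Range1d(start=..., end=...)` exactly as in the branches above.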
def get_colormap(num_of_items):
"""
Project: 'ICOS Carbon Portal'
Created: Fri Apr 04 17:00:00 2019
Last Changed: Fri Apr 04 17:00:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes an integer representing the total number of items that should receive
a separate color and returns a colormap (i.e. a list of strings, where every string
represents a different color in hexadecimal code) with the same number of colors.
The function can return colormaps for 1 - 256 items.
Input parameters: Number of items to be colored in a separate color
(var_name: 'num_of_items', var_type: Integer)
Output: List of strings (colormap)
"""
#Check input:
if(isinstance(num_of_items, int)):
#import module:
from bokeh.palettes import all_palettes, Colorblind, magma
#Check the number of items to be colored (1-2 items):
if((num_of_items>0) and (num_of_items<3)):
return ['#2b83ba','#fdae61'] #return colormap selection
#Check the number of items to be colored (3-8 items):
elif((num_of_items>2) and (num_of_items<9)):
return all_palettes['Colorblind'][num_of_items] #return colormap selection
#Check the number of items to be colored (9-256 items):
elif((num_of_items>8) and (num_of_items<257)):
return magma(num_of_items) #return colormap selection
#If the number of items is zero, negative or exceeds 256:
else:
print('Error! Number of items to be colored is zero or higher than 256.')
#If the input parameter is not an integer:
else:
print('Error! Input is not an integer.')
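The tier thresholds in `get_colormap` can be exercised without Bokeh installed. A self-contained sketch of the same selection logic (`palette_tier` is a hypothetical stand-in; the real function returns actual hex-color lists from `bokeh.palettes`):

```python
def palette_tier(num_of_items):
    """Return which palette get_colormap would pick for this item count."""
    if not isinstance(num_of_items, int):
        return None               # get_colormap prints an error here
    if 0 < num_of_items < 3:
        return 'fixed-pair'       # ['#2b83ba', '#fdae61']
    if 2 < num_of_items < 9:
        return 'Colorblind'       # all_palettes['Colorblind'][num_of_items]
    if 8 < num_of_items < 257:
        return 'magma'            # magma(num_of_items)
    return None                   # zero, negative or more than 256 items
```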
# <a id='map_funcs'></a>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
# <br>
# <br>
#
# ## 5. Map functions
# This part includes functions that produce interactive maps with folium.
# +
def plotmap(stations_df, selected_station, basemap, d_icon='cloud', icon_col='orange'):
"""
Project: 'ICOS Carbon Portal'
Created: Tue Feb 04 10:40:00 2020
Last Changed: Tue Feb 04 10:40:00 2020
Version: 1.0.0
Author(s): Karolina
Description: Function that takes a dataframe containing info about ICOS Stations and the 3-character
station code of a selected station as input and returns an interactive Folium Map, with
the location of the selected station highlighted in red.
Folium (URL): https://python-visualization.github.io/folium/quickstart.html
Input parameters: 1. Dataframe with information regarding ICOS Stations
(var_name: 'stations_df', var_type: Pandas Dataframe)
2. Station 3-character code
(var_name: 'selected_station', var_type: String)
3. Basemap type; 'Imagery' for Esri satellite tiles, any other value for the default tiles
(var_name: 'basemap', var_type: String)
4. Marker icon name (var_name: 'd_icon', var_type: String, default: 'cloud')
5. Marker icon color (var_name: 'icon_col', var_type: String, default: 'orange')
Output: Folium Map (Folium Map Object, displayed in the notebook)
"""
#Import modules:
import folium
#Check what type of basemap is selected:
if(basemap=='Imagery'):
#Create folium map-object:
m = folium.Map(location=[float(stations_df.loc[stations_df.stationId==selected_station].lat.values[0]),
float(stations_df.loc[stations_df.stationId==selected_station].lon.values[0])],
zoom_start=5,
tiles = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}',
attr = 'Esri',
name = 'Esri Satellite',
overlay = False,
control = True)
else:
#Create folium map-object:
m = folium.Map(
location=[float(stations_df.loc[stations_df.stationId==selected_station].lat.values[0]),
float(stations_df.loc[stations_df.stationId==selected_station].lon.values[0])],
zoom_start=4)
#Add marker-tooltip:
tooltip = 'Click to view station info'
def add_marker(map_obj, station_code, marker_color):
#Add popup text:
popup=folium.Popup("""<meta content="text/html; charset=UTF-8"><style>td{padding: 3px;}</style><table>"""+
'<tr><td>Name: </td><td><b><a href="'+str(stations_df.station.loc[stations_df.stationId==station_code].values[0])+'"target="_blank">'+str(stations_df.stationName.loc[stations_df.stationId==station_code].values[0])+'</a></b></td></tr>'+
'<tr><td>Code:</td><td><b>'+station_code+'</b></td></tr>'+
'<tr><td>Country:</td><td><b>'+get_country_fullname_from_iso3166_2char(stations_df.Country.loc[stations_df.stationId==station_code].values[0])+'</b></td></tr>'+
'<tr><td>Latitude:</td><td><b>'+str(stations_df.lat.loc[stations_df.stationId==station_code].values[0])+'</b></td></tr>'+
'<tr><td>Longitude:</td><td><b>'+str(stations_df.lon.loc[stations_df.stationId==station_code].values[0])+'</b></td></tr>'+
'</td></tr></table>',
max_width=450)
#Create marker and add it to the map:
folium.Marker(location=[float(stations_df.lat.loc[stations_df.stationId==station_code].values[0]),
float(stations_df.lon.loc[stations_df.stationId==station_code].values[0])],
popup=popup,
icon=folium.Icon(color=marker_color, icon=d_icon),
tooltip=tooltip).add_to(map_obj)
#Get list of stations (not incl. selected station):
station_ls = [i for i in stations_df.stationId.values if i!=selected_station]
#Create markers for all stations except selected station:
for st in station_ls:
add_marker(m, st, icon_col)
#Add marker for selected station:
add_marker(m, selected_station, 'darkred')
#Show map:
display(m)
# -
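The repeated `.loc[...].values[0]` lookups in `plotmap` all follow one pattern. A minimal sketch with a tiny hypothetical stations dataframe (column names mirror the real `stations_df`; the station rows and coordinates are made up, and stored as strings since `plotmap` casts them with `float(...)`):

```python
import pandas as pd

# Hypothetical miniature of the ICOS stations dataframe:
stations_df = pd.DataFrame({'stationId': ['GAT', 'HPB'],
                            'lat': ['53.07', '47.80'],
                            'lon': ['11.44', '11.02']})

# Same lookup plotmap uses to centre the map on the selected station:
selected_station = 'GAT'
lat = float(stations_df.loc[stations_df.stationId == selected_station].lat.values[0])
lon = float(stations_df.loc[stations_df.stationId == selected_station].lon.values[0])
```

The `[lat, lon]` pair is what gets passed to `folium.Map(location=...)` and `folium.Marker(location=...)`.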
# <a id='update_plot_funcs'></a>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
# <br>
# <br>
#
# ## 6. Update plot functions
# This part includes functions that update the data displayed in interactive plots.
def update_icos_single_station_plot_binary(data_obj_id_ls, station, tracer, color):
"""
Project: 'ICOS Carbon Portal'
Created: Fri Apr 07 10:05:00 2019
Last Changed: Fri Mar 27 10:05:00 2020
Version: 1.1.0
Author(s): Karolina
Description: Function that gets the user's selection of station, tracer and color as input parameters,
accesses and reads in the corresponding datafiles and outputs a Bokeh plot with observations
for the selected tracer.
Input parameters: 1. list of data object IDs
(var_name: 'data_obj_id_ls', var_type: List of Strings)
2. list of station info, containing 3-character station code and sampling height,
e.g. ['GAT', '30.0']
(var_name: 'station', var_type: List of Strings)
3. gas/tracer, e.g. 'co', 'co2' or 'ch4'
(var_name: 'tracer', var_type: String)
4. color in hexadecimal code (var_name: 'color', var_type: String)
Output: Bokeh Plot
"""
#import modules:
from bokeh.layouts import column
from icoscp.cpb.dobj import Dobj
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create dict to store the station info:
station_info_dict = {}
#Create a file object from the 1st object in the data object id list:
file = Dobj(data_obj_id_ls[0])
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==tracer].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==tracer].values[0]
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_code'] = station[0]
station_info_dict['station_sampling_height'] = station[1]
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station[0]].values[0]
#Create list to store the data dataframes of all data object IDs:
data_df_ls = []
#Loop through every data object ID in the list:
for dobjid in data_obj_id_ls:
#Get a pandas dataframe with all the columns for the selected data-object id:
obs_data_df = Dobj(dobjid).get()
#Add data dataframe of the current data object ID to the list:
data_df_ls.append(obs_data_df)
#Concatenate data dataframes to one dataframe:
data_df = pd.concat(data_df_ls)
#Sort the dataframe index in ascending order:
data_df.sort_index(inplace=True)
### Plot ###
#Plot station:
p = plot_icos_single_station_binary(data_df,
station_info_dict,
tracer_info_dict,
color=color)
#Return plot:
return p
def update_exploring_multiple_tracers_binary(selection_list):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of sublists, where every sublist contains
a data object ID, a tracer string, a station ID string and a sampling height.
The function reads the corresponding ICOS Level-2 Atmospheric Tracer data files
for every sublist into separate pandas dataframes. Every data file produces two
separate pandas dataframes; a metadata dataframe and a data dataframe.
These dataframes are then set as input parameters to a plot function,
that returns a Bokeh Figure (plot).
The Bokeh Figure is then displayed in the notebook.
Input parameters: List with sublists of data object IDs & tracers
(var_name: 'selection_list', var_type: List)
Output: Bokeh Plot
"""
#import modules:
from bokeh.layouts import column
from icoscp.cpb.dobj import Dobj
#Create lists to store lists of metadata & data dataframes for every station-tracer combination:
co2_df_list = []
co_df_list = []
ch4_df_list = []
#Create dict to store the station info:
station_info_dict = {}
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==selection_list[0][2]].values[0]
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==selection_list[0][2]].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==selection_list[0][2]].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==selection_list[0][2]].values[0]
station_info_dict['station_sampling_height'] = selection_list[0][3]
#Loop through the list of data-object-IDs and tracers, then download and read the file(s) with ICOS data:
for selection in selection_list:
#Check tracer type:
if(selection[1]=='co2'):
#Create dictionary to store tracer info:
tracer_info_dict_co2 = {}
#Create a file object from the current data object id:
file_co2 = Dobj(selection[0])
#Get the tracer description:
tracer_info_dict_co2['tracer_info'] = file_co2.info[1].valueType.loc[file_co2.info[1].colName==selection[1]].values[0]
#Get tracer unit:
tracer_info_dict_co2['tracer_unit'] = file_co2.info[1].unit.loc[file_co2.info[1].colName==selection[1]].values[0]
#Add data dataframe for the current data object ID to the list:
co2_df_list.append(file_co2.get())
elif(selection[1]=='co'):
#Create dictionary to store tracer info:
tracer_info_dict_co = {}
#Create a file object from the current data object id:
file_co = Dobj(selection[0])
#Get the tracer description:
tracer_info_dict_co['tracer_info'] = file_co.info[1].valueType.loc[file_co.info[1].colName==selection[1]].values[0]
#Get tracer unit:
tracer_info_dict_co['tracer_unit'] = file_co.info[1].unit.loc[file_co.info[1].colName==selection[1]].values[0]
#Add data dataframe for the current data object ID to the list:
co_df_list.append(file_co.get())
elif(selection[1]=='ch4'):
#Create dict to store the tracer info:
tracer_info_dict_ch4 = {}
#Create a file object from the current data object id:
file_ch4 = Dobj(selection[0])
#Get the tracer description:
tracer_info_dict_ch4['tracer_info'] = file_ch4.info[1].valueType.loc[file_ch4.info[1].colName==selection[1]].values[0]
#Get tracer unit:
tracer_info_dict_ch4['tracer_unit'] = file_ch4.info[1].unit.loc[file_ch4.info[1].colName==selection[1]].values[0]
#Add data dataframe for the current data object ID to the list:
ch4_df_list.append(file_ch4.get())
#If tracer is not one of the following: "CO2", "CO" or "CH4":
else:
print("\033[0;31;1m "+'Error! No support for this tracer!'+"\033[0;31;0m\n\n")
#Create list to store lists of metadata & data dataframes for every station-tracer combination:
info_list = []
#Add all non-empty co2 lists to df_list:
if(len(co2_df_list)>0):
#Concatenate data dataframes to one dataframe:
co2_data_df = pd.concat(co2_df_list)
#Sort the dataframe index in ascending order:
co2_data_df.sort_index(inplace=True)
#Add the station_info dict, tracer_info dict and data dataframe to a list:
info_list.append([station_info_dict, tracer_info_dict_co2, co2_data_df])
#Add all non-empty tracer lists to df_list:
if(len(co_df_list)>0):
#Concatenate data dataframes to one dataframe:
co_data_df = pd.concat(co_df_list)
#Sort the dataframe index in ascending order:
co_data_df.sort_index(inplace=True)
#Add the pair of dataframes as a list to the list of dataframes:
info_list.append([station_info_dict, tracer_info_dict_co, co_data_df])
#Add all non-empty tracer lists to df_list:
if(len(ch4_df_list)>0):
#Concatenate data dataframes to one dataframe:
ch4_data_df = pd.concat(ch4_df_list)
#Sort the dataframe index in ascending order:
ch4_data_df.sort_index(inplace=True)
#Add the pair of dataframes as a list to the list of dataframes:
info_list.append([station_info_dict, tracer_info_dict_ch4, ch4_data_df])
#Call a function to plot data from all dataframes in the list of dataframes:
p = plot_icos_single_station_multiple_tracers_binary(info_list)
#Set output channel:
output_notebook()
#Show plot:
show(p)
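When one station/tracer combination spans several data objects, the functions above concatenate the per-file dataframes and restore chronological order. A sketch with toy dataframes standing in for the `Dobj(...).get()` results:

```python
import pandas as pd

# Two toy data files whose time ranges interleave:
df_a = pd.DataFrame({'co2': [400.1, 400.3]},
                    index=pd.to_datetime(['2019-01-02', '2019-01-04']))
df_b = pd.DataFrame({'co2': [400.2]},
                    index=pd.to_datetime(['2019-01-03']))

# Concatenate, then sort the index ascending, as in the functions above:
data_df = pd.concat([df_a, df_b])
data_df.sort_index(inplace=True)
```

Sorting matters because `pd.concat` simply stacks the inputs; a line plot over an unsorted datetime index would zig-zag back in time.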
def update_exploring_multiple_stations_binary(station_dobj_ls):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of sublists, where every sublist contains
one or more lists with the ICOS Atmospheric Level-2 data-object-ID, the station
code, the sampling height and the selected tracer for a selected station and
sampling height.
More than one list per selected station, sampling height and tracer may occur
for ICOS Level-2 data in cases where the measuring equipment has changed (i.e.
installation of a new instrument with a new instrument ID). When a new instrument
is used, the measurements from this instrument are put in a different file,
which in turn corresponds to a new data object ID.
The function reads in the data from the data file corresponding to every data
object ID and stores it in metadata- and data-dataframes. One set of metadata-
and data-dataframes are produced for every file. The function checks if there
are more than one files related to the same station with the same sampling
height and tracer, and if this is the case, it concatenates the data-dataframes.
All sets of metadata- and data-dataframes are then stored in sublists of a list
and passed as input parameters to a plot function that returns a Bokeh Figure (plot).
The Bokeh Figure is then displayed in the notebook.
Input parameters: List of sublists with list(s) including info about an ICOS Atmospheric Level-2
data-object-ID, the station code, the sampling height and the selected tracer
(var_name: 'station_dobj_ls', var_type: List)
Output: Bokeh Plot
"""
#import modules:
from bokeh.layouts import column
from icoscp.cpb.dobj import Dobj
#Create list to store the metadata & data dataframes for all combinations of tracers and stations:
station_df_ls = []
#Loop through every data object ID in the list:
for station_ls in station_dobj_ls:
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create dict to store the station info:
station_info_dict = {}
#Create a file object from the current data object id:
file = Dobj(station_ls[0][0])
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==station_ls[0][1]].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==station_ls[0][1]].values[0]
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_code'] = station_ls[0][2]
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_sampling_height'] = station_ls[0][3]
#Create list to store the data dataframes of all data object IDs for the same station,
#tracer and sampling height:
df_ls = []
#station_info[0] --- > dataobjid (e.g. 'MdYIndlCMyEp2BoGwUL_0Jqq')
#station_info[1] --- > Tracer (e.g. 'co2')
#station_info[2] --- > Station Code (e.g. 'HPB')
#station_info[3] --- > Sampling Height (e.g. '50.0')
#Download data from all files that correspond to the same station,
#at the same sampling height and include data for the same tracer
#in metadata- and data- dataframes that are stored as sublists in a list:
df_ls = [Dobj(station_info[0]).get() for station_info in station_ls]
#Concatenate the data-dataframes that include tracer data for
#the same station at the same sampling height, to one data-dataframe:
data_df = pd.concat([df for df in df_ls])
data_df.sort_index(inplace=True)
#Append metadata-dataframe and concatenated data-dataframe to list:
station_df_ls.append([station_info_dict, tracer_info_dict, data_df])
#Get plot:
p = plot_icos_single_tracer_multiple_stations_binary(station_df_ls)
#Output should be in the notebook
output_notebook()
#Show plot
show(p, notebook_handle=True)
def update_focus_plot_binary(data_obj_id_ls, station, tracer, color):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of ICOS data object IDs, the station
info, the tracer type and the plot color. The function reads the corresponding
ICOS Level-2 Atmospheric Tracer data files for every data object ID into separate
pandas dataframes. Every data file produces two separate pandas dataframes;
a metadata dataframe and a data dataframe. These dataframes are then passed as input
parameters to a plot function that returns a Bokeh Figure (plot).
The resulting plots are then displayed in the notebook.
Input parameters: 1. List with sublists of data object IDs
(var_name: 'data_obj_id_ls', var_type: List)
2. List with the ICOS station 3-character code and sampling height
(var_name: 'station', var_type: List of Strings)
3. Tracer/gas, e.g. 'co2'
(var_name: 'tracer', var_type: String)
4. Plot Color
(var_name: 'color', var_type: String)
Output: Bokeh Plot
"""
#import modules:
from icoscp.cpb.dobj import Dobj
from bokeh.layouts import column
from bokeh.io import push_notebook, output_notebook
from bokeh.plotting import show
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create dict to store the station info:
station_info_dict = {}
#Create a file object from the 1st object in the data object id list:
file = Dobj(data_obj_id_ls[0])
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==tracer].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==tracer].values[0]
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_code'] = station[0]
station_info_dict['station_sampling_height'] = station[1]
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station[0]].values[0]
#Create list to store the data dataframes of all data object IDs:
data_df_ls = []
#Loop through every data object ID in the list:
for dobjid in data_obj_id_ls:
#Get data dataframe:
obs_data_df = Dobj(dobjid).get()
#Add data dataframe corresponding to the current data object ID to the list:
data_df_ls.append(obs_data_df)
#Concatenate data dataframes to one dataframe:
data_df = pd.concat(data_df_ls)
#Sort the dataframe index in ascending order:
data_df.sort_index(inplace=True)
### PLOT ###
#Plot station:
p1, p2 = plot_icos_focus_binary(data_df, station_info_dict, tracer_info_dict, tracer, color)
#Set the output to be organized columnwise (i.e. output plots one under the other):
layout = column(p1,p2)
#Set the notebook as the preferred output channel:
output_notebook()
#Show plot:
show(layout)
#Update plot:
#push_notebook()
# +
def update_basic_statistics_binary(station_dobj_ls, start_date, end_date):
"""
Project: 'ICOS Carbon Portal'
Created: Wed May 15 10:30:00 2018
Last Changed: Wed May 15 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of sublists, where every sublist contains
one or more lists with the ICOS Atmospheric Level-2 data-object-ID, the station
code, the sampling height and the selected tracer for a selected station and
sampling height. The Function also takes as input the selected start-date and
end-date.
More than one list per selected station, sampling height and tracer may occur
for ICOS Level-2 data in cases where the measuring equipment has changed (i.e.
installation of a new instrument with a new instrument ID). When a new instrument
is used, the measurements from this instrument are put in a different file,
which in turn corresponds to a new data object ID.
The function reads in the data from the data file corresponding to every data
object ID and stores it in metadata- and data-dataframes. One set of metadata-
and data-dataframes are produced for every file. The function checks if there
are more than one files related to the same station with the same sampling
height and tracer, and if this is the case, it concatenates the data-dataframes.
The data dataframe is filtered to only contain data between the selected start-date
and end-date.
All sets of metadata- and data-dataframes are then stored in sublists of a list
and passed as input parameters to a function that computes the basic statistics
for the tracer-column of every data-dataframe. The dataframe including the
basic statistics results for every selected station is then returned as output.
Input parameters: 1. List of sublists with list(s) including info about an ICOS Atmospheric Level-2
data-object-ID, the station code, the sampling height and the selected tracer
(var_name: 'station_dobj_ls', var_type: List)
2. Start date - user's input
(var_name: 'start_date', var_type: DateTime Object)
3. End date - user's input
(var_name: 'end_date', var_type: DateTime Object)
Output: Pandas DataFrame
"""
#Import modules:
import pandas as pd
from icoscp.cpb.dobj import Dobj
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create a file object from the 1st object in the data object id list:
file = Dobj(station_dobj_ls[0][0][0])
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==station_dobj_ls[0][0][1]].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==station_dobj_ls[0][0][1]].values[0]
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Create list to store the metadata & data dataframes for all combinations of tracers and stations:
station_df_ls = []
#Loop through every data object ID in the list:
for station_ls in station_dobj_ls:
#Create a dictionary to store all station info:
station_info_dict = {}
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_code'] = station_ls[0][2]
station_info_dict['station_sampling_height'] = station_ls[0][3]
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
#Create list to store the data dataframes of all data object IDs
#for all stations:
df_ls = []
#station_info[0] --- > dataobjid (e.g. 'MdYIndlCMyEp2BoGwUL_0Jqq')
#station_info[1] --- > Tracer (e.g. 'co2')
#station_info[2] --- > Station Code (e.g. 'HPB')
#station_info[3] --- > Sampling Height (e.g. '50.0')
#Download data from all files that correspond to the same station,
#at the same sampling height and include data for the same tracer
#in a pandas dataframe that are then stored as sublists in a list:
df_ls = [Dobj(station_info[0]).get()
for station_info in station_ls]
#Concatenate the data-dataframes that include tracer data for
#the same station at the same sampling height, to one data-dataframe:
data_df = pd.concat([df for df in df_ls])
#Add column with datetime object:
data_df['DateTime'] = pd.to_datetime(data_df['TIMESTAMP'], unit='ms')
#Create a copy of the dataframe and set "DateTime" as index:
data_df_ind = data_df.copy().set_index('DateTime')
#Sort the dataframe index in ascending order:
data_df_ind.sort_index(inplace=True)
#Filter dataframe to only contain data between the selected dates:
data_df_filt = data_df_ind.loc[start_date:end_date]
#Append metadata-dataframe and concatenated data-dataframe to list:
station_df_ls.append([data_df_filt, station_info_dict, tracer_info_dict])
#Call function to compute basic statistics for all selected stations
#and return the result:
return calculate_basic_statistics_binary(station_df_ls, station_dobj_ls[0][0][1])
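The date filtering above converts the raw millisecond `TIMESTAMP` column into a `DateTime` index before slicing with `.loc[start_date:end_date]`. A self-contained sketch with three toy rows (the timestamps are made-up epoch values for 2019-01-01, 2019-02-01 and 2019-03-01):

```python
import pandas as pd

# Toy rows; TIMESTAMP is milliseconds since the Unix epoch:
data_df = pd.DataFrame({'TIMESTAMP': [1546300800000, 1548979200000, 1551398400000],
                        'co2': [410.0, 411.5, 412.2]})

# Convert to datetime, index by it, sort, then slice the selected period:
data_df['DateTime'] = pd.to_datetime(data_df['TIMESTAMP'], unit='ms')
data_df_ind = data_df.set_index('DateTime').sort_index()
data_df_filt = data_df_ind.loc['2019-01-01':'2019-02-15']
```

Slicing a sorted `DatetimeIndex` with date strings is inclusive on both ends, so only the January and February rows survive the filter here.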
# +
def update_corr_stat_multi_binary(station_dobj_ls, start_date, end_date):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of sublists, where every sublist contains an
ICOS Data Object ID and a tracer string. The function reads the corresponding
ICOS Level-2 Atmospheric Tracer data files for every data object ID into separate
pandas dataframes. Every data file produces two separate pandas dataframes;
metadata dataframe and data dataframe. The data dataframe is filtered to only include
values for the time period that was selected by the user.
Then the correlation is computed between every station and the results are returned
in a pandas dataframe.
Input parameters: 1. List with sublists of data object IDs and tracers
(var_name: 'station_dobj_ls', var_type: List)
2. Start date - user's input
(var_name: 'start_date', var_type: DateTime Object)
3. End date - user's input
(var_name: 'end_date', var_type: DateTime Object)
Output: Pandas DataFrame
"""
#Import modules:
import pandas as pd
from icoscp.cpb.dobj import Dobj
#Add dictionary to transform digits to subscript:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Create list to store the metadata & data dataframes for all combinations of tracers and stations:
station_df_ls = []
#Loop through every data object ID in the list:
for station_ls in station_dobj_ls:
#Create a file object from the 1st object in the data object id list:
file = Dobj(station_ls[0][0])
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create a dictionary to store all station info:
station_info_dict = {}
#Create list to store the data dataframes of all data object IDs for the current station:
df_ls = []
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==station_ls[0][1]].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==station_ls[0][1]].values[0]
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_code'] = station_ls[0][2]
station_info_dict['station_sampling_height'] = station_ls[0][3]
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
#station_info[0] --- > dataobjid (e.g. 'MdYIndlCMyEp2BoGwUL_0Jqq')
#station_info[1] --- > Tracer (e.g. 'co2')
#station_info[2] --- > Station Code (e.g. 'HPB')
#station_info[3] --- > Sampling Height (e.g. '50.0')
#Download data from all files that correspond to the same station,
#at the same sampling height and include data for the same tracer
#in metadata- and data- dataframes that are stored as sublists in a list:
df_ls = [Dobj(station_info[0]).get() for station_info in station_ls]
#Check size of df_ls:
if(len(df_ls)>1):
#Concatenate the data-dataframes that include tracer data for
#the same station at the same sampling height, to one data-dataframe:
data_df = pd.concat([df for df in df_ls])
else:
data_df = df_ls[0]
#Add column with datetime object:
data_df['DateTime'] = pd.to_datetime(data_df['TIMESTAMP'], unit='ms')
#Create a copy of the dataframe and set "DateTime" as index:
data_df_ind = data_df.copy().set_index('DateTime')
#Sort the dataframe index in ascending order:
data_df_ind.sort_index(inplace=True)
#Filter dataframe to only contain data between the selected dates:
data_df_filt = data_df_ind.loc[start_date:end_date]
#Append metadata-dataframe and concatenated data-dataframe to list:
station_df_ls.append([data_df_filt, station_info_dict, tracer_info_dict])
#Extract the tracer-column from every station's data dataframe to a new pandas dataframe:
tracer_df_ls =[pd.DataFrame({station_df_ls[i][1]['station_code']+'_'+
station_df_ls[i][1]['station_sampling_height']+' ('+
station_df_ls[i][2]['tracer_info'].replace(' mixing ratio (dry mole fraction)', '').translate(SUB)+
')': station_df_ls[i][0][station_df_ls[i][2]['tracer_info'].replace(' mixing ratio (dry mole fraction)', '').lower()]})
for i in range(len(station_df_ls))]
#Concatenate dataframes to one final dataframe:
tracers_df = pd.concat(tracer_df_ls, axis=1)
#Get correlation between data:
corr_df = tracers_df.corr(method='pearson')
#Return dataframe:
return corr_df
# -
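# The correlation step above boils down to `pandas.DataFrame.corr`. A minimal,
# self-contained sketch with two synthetic station columns (the column names
# and values here are made up for illustration):

```python
import numpy as np
import pandas as pd

# Two synthetic "station" tracer series on a shared hourly index:
idx = pd.date_range('2020-01-01', periods=100, freq='h')
base = np.linspace(400.0, 420.0, 100)
tracers_df = pd.DataFrame(
    {'HPB_93.0 (CO2)': base,
     'HTM_150.0 (CO2)': base + np.random.default_rng(0).normal(0.0, 0.5, 100)},
    index=idx)

# Pairwise Pearson correlation; missing values are excluded pairwise:
corr_df = tracers_df.corr(method='pearson')
```

Columns that are not aligned in time simply contribute NaNs, which `corr` skips, so stations with different coverage can still be compared.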
#Function that updates the plot every time the user interacts with a widget:
def update_smoothing_plot_binary(data_obj_id_ls, station, tracer, days, color):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of ICOS data object IDs, the station info,
the tracer type, the total number of days over which to average the
ICOS tracer data, and the plot color. The function reads the corresponding
ICOS Level-2 Atmospheric Tracer data files for every data object ID into separate
pandas dataframes. Every data file produces two separate pandas dataframes:
a metadata dataframe and a data dataframe. The values of the tracer in the data
dataframe are then averaged over a rolling window whose width corresponds to the
related input parameter. The smoothed data dataframe and the metadata are then
passed to a plot function that returns a Bokeh figure (plot),
which is then displayed in the notebook.
Input parameters: 1. List of data object IDs
(var_name: 'data_obj_id_ls', var_type: List)
2. ICOS station info: 3-character station code and sampling height
(var_name: 'station', var_type: List)
3. Tracer/gas, e.g. 'co2'
(var_name: 'tracer', var_type: String)
4. Number of days to average by
(var_name: 'days', var_type: Integer)
5. Plot Color
(var_name: 'color', var_type: String)
Output: Bokeh Plot
"""
#import modules:
import pandas as pd
from bokeh.layouts import column
from icoscp.cpb.dobj import Dobj
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create a dictionary to store the station info:
station_info_dict = {}
#Create a file object from the 1st object in the data object id list:
file = Dobj(data_obj_id_ls[0])
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==tracer].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==tracer].values[0]
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_code'] = station[0]
station_info_dict['station_sampling_height'] = station[1]
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station[0]].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station[0]].values[0]
#Create list to store the data dataframes of all data object IDs:
data_df_ls = []
#Loop through every data object ID in the list:
for dobjid in data_obj_id_ls:
#Get all the columns for the selected dataobject id:
obs_data_df = Dobj(dobjid).get()
#Add data dataframe of the current data object ID to the list:
data_df_ls.append(obs_data_df)
#Concatenate data dataframes to one dataframe:
data_df = pd.concat(data_df_ls)
#Add column with datetime object:
data_df['DateTime'] = pd.to_datetime(data_df['TIMESTAMP'], unit='ms')
#Create a copy of the dataframe and set "DateTime" as index:
data_df_ind = data_df.copy().set_index('DateTime')
#Sort the dataframe index in ascending order:
data_df_ind.sort_index(inplace=True)
#Smoothing the glyph-line, by using a window of a selected number of days that averages the values:
if days == 0:
data_df_ind['rolling_mean'] = data_df_ind[tracer]
#If the number of days is higher than zero:
else:
#If number of days is an even number
if(days%2==0):
data_df_ind['rolling_mean'] = data_df_ind[tracer].rolling('{0}D'.format(days), closed='left').mean().shift(int(-days/2)*24, freq='h')
#If number of days is an odd number:
else:
data_df_ind['rolling_mean'] = data_df_ind[tracer].rolling('{0}D'.format(days), closed='left').mean().shift(round(-days/2)*24, freq='h')
#Plot station:
p = plot_icos_single_station_smoothing_binary(data_df_ind, station_info_dict, tracer_info_dict, tracer, color)
#Output should be in the notebook
output_notebook()
#Show plot
show(p, notebook_handle=True)
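# The smoothing branch above computes a trailing time-based rolling mean and
# then shifts it back by half the window so that the average is roughly
# centred on each timestamp. A standalone sketch of the same pattern on a
# synthetic hourly series:

```python
import numpy as np
import pandas as pd

# Hourly series spanning four days (a stand-in for a tracer column):
idx = pd.date_range('2021-06-01', periods=96, freq='h')
s = pd.Series(np.arange(96, dtype=float), index=idx)

days = 3  # an odd window, as in the 'else' branch above
# Trailing 3-day mean (closed='left' excludes the current point), then
# shifted back by half the window, expressed in hours, to re-centre it:
smoothed = s.rolling('{0}D'.format(days), closed='left').mean().shift(round(-days / 2) * 24, freq='h')
```

Note that `round(-3 / 2)` is `-2` under Python's banker's rounding, so the index is shifted back by 48 hours; the values themselves keep their order.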
def update_comparing_binary(station_dobj_ls, num_of_stations):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of sublists, where every sublist contains an
ICOS Data Object ID and a tracer string. The function also takes as input an integer
representing the total number of stations selected. This parameter is then used to get
the same number of distinct colors from a colormap. The function reads
the corresponding ICOS Level-2 Atmospheric Tracer data files for every data object ID
into separate pandas dataframes. Every data file produces two separate pandas dataframes:
a metadata dataframe and a data dataframe. Then, every set of metadata- and data-dataframes
is passed to a plot function that produces a separate plot
for every station.
Input parameters: 1. List with sublists of data object IDs and tracers
(var_name: 'station_dobj_ls', var_type: List)
2. Total number of stations selected
(var_name: 'num_of_stations', var_type: Integer)
Output: Interactive Bokeh Plot(s)
"""
#import modules:
from bokeh.layouts import column
from bokeh.io import push_notebook, output_notebook
from bokeh.plotting import show
from icoscp.cpb.dobj import Dobj
#Get colormap:
colormap = get_colormap(num_of_stations)
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create dict to store the station info:
station_info_dict = {}
#Create a file object from the 1st object in the data object id list:
file = Dobj(station_dobj_ls[0][0][0])
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==station_dobj_ls[0][0][1]].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==station_dobj_ls[0][0][1]].values[0]
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Define and initialize list to store station plots:
plot_list = []
#Add counter:
counter = 0
#Loop through every data object ID in the list:
for station_ls in station_dobj_ls:
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_code'] = station_ls[0][2]
station_info_dict['station_sampling_height'] = station_ls[0][3]
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station_ls[0][2]].values[0]
#Create a list to store the data dataframes of all data object IDs for the current station:
df_ls = []
#station_info[0] --- > dataobjid (e.g. 'MdYIndlCMyEp2BoGwUL_0Jqq')
#station_info[1] --- > Tracer (e.g. 'co2')
#station_info[2] --- > Station Code (e.g. 'HPB')
#station_info[3] --- > Sampling Height (e.g. '50.0')
#Download data from all files that correspond to the same station,
#at the same sampling height and include data for the same tracer
#in metadata- and data- dataframes that are stored as sublists in a list:
df_ls = [Dobj(station_info[0]).get()
for station_info in station_ls]
#Concatenate the data-dataframes that include tracer data for
#the same station at the same sampling height, to one data-dataframe:
data_df = pd.concat(df_ls)
#data_df.sort_index(inplace=True)
#Add plot-object to list:
plot_list.append(plot_icos_single_station_binary(data_df,
station_info_dict,
tracer_info_dict,
color=colormap[counter]))
#Increase counter:
counter = counter + 1
#Organize the plots in a "column" layout:
layout = column(plot_list)
#Output will be displayed in notebook:
output_notebook()
#Show plot/s:
show(layout, notebook_handle=True)
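# The `get_colormap` helper used above is defined elsewhere in this notebook;
# its job is simply to return one distinct color per selected station. A
# hypothetical stdlib-only re-creation (the name `get_colormap_sketch` and the
# evenly-spaced-hue strategy are assumptions, not the notebook's actual code):

```python
import colorsys

def get_colormap_sketch(n):
    """Return n visually distinct hex colors by spacing hues evenly."""
    colors = []
    for i in range(n):
        r, g, b = colorsys.hsv_to_rgb(i / max(n, 1), 0.65, 0.85)
        colors.append('#{:02x}{:02x}{:02x}'.format(int(r * 255), int(g * 255), int(b * 255)))
    return colors

palette = get_colormap_sketch(4)  # one color per station plot
```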
def update_icos_single_station_plot_L1_L2_binary(data_obj_id_L1_ls, data_obj_id_L2_ls, station_code, station_sampl_height, tracer, color):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of ICOS Level-1 data object IDs, a list
of ICOS Level-2 data object IDs, the station code, the sampling height,
the tracer type and the plot color. The function reads the corresponding
ICOS Level-1 and Level-2 Atmospheric Tracer data files for every data object ID
into separate pandas dataframes. Every data file produces two separate
pandas dataframes: a metadata dataframe and a data dataframe. These
dataframes are then passed to a plot function that returns a Bokeh
figure (plot), which is then displayed.
Input parameters: 1. List of ICOS Level-1 data object IDs
(var_name: 'data_obj_id_L1_ls', var_type: List)
2. List of ICOS Level-2 data object IDs
(var_name: 'data_obj_id_L2_ls', var_type: List)
3. ICOS 3-character Station Code
(var_name: 'station_code', var_type: String)
4. Station Sampling Height
(var_name: 'station_sampl_height', var_type: String)
5. Tracer/gas, e.g. 'co2'
(var_name: 'tracer', var_type: String)
6. Plot Color
(var_name: 'color', var_type: String)
Output: Bokeh Plot
"""
#Import modules:
from icoscp.cpb.dobj import Dobj
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create dict to store the station info:
station_info_dict = {}
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Create a file object from the 1st object in the data object id list:
file = Dobj(data_obj_id_L1_ls[0])
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==tracer].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==tracer].values[0]
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station_code].values[0]
station_info_dict['station_code'] = station_code
station_info_dict['station_sampling_height'] = station_sampl_height
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station_code].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station_code].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station_code].values[0]
#Create list to store the data dataframes of all data object IDs:
L1_data_df_ls = []
#Loop through every data object ID in the list:
for dobjid_L1 in data_obj_id_L1_ls:
#Get a pandas dataframe with all the columns for the selected data-object id:
obs_data_df_L1 = Dobj(dobjid_L1).get()
#Add data dataframe of the current data object ID to the list:
L1_data_df_ls.append(obs_data_df_L1)
#Concatenate data dataframes to one dataframe:
data_L1_df = pd.concat(L1_data_df_ls)
#Add column with datetime object:
data_L1_df['DateTime'] = pd.to_datetime(data_L1_df['TIMESTAMP'], unit='ms')
#Create a copy of the dataframe and set "DateTime" as index:
data_df_ind_L1 = data_L1_df.copy().set_index('DateTime')
#Sort the dataframe index in ascending order:
data_df_ind_L1.sort_index(inplace=True)
#Create list to store the data dataframes of all data object IDs:
L2_data_df_ls = []
#Loop through every data object ID in the list:
for dobjid_L2 in data_obj_id_L2_ls:
#Get a pandas dataframe with all the columns for the selected data-object id:
obs_data_df_L2 = Dobj(dobjid_L2).get()
#Add data dataframe of the current data object ID to the list:
L2_data_df_ls.append(obs_data_df_L2)
#Concatenate data dataframes to one dataframe:
data_L2_df = pd.concat(L2_data_df_ls)
#Add column with datetime object:
data_L2_df['DateTime'] = pd.to_datetime(data_L2_df['TIMESTAMP'], unit='ms')
#Create a copy of the dataframe and set "DateTime" as index:
data_df_ind_L2 = data_L2_df.copy().set_index('DateTime')
#Sort the dataframe index in ascending order:
data_df_ind_L2.sort_index(inplace=True)
#Plot station:
p = plot_icos_single_station_L1_L2_binary([data_df_ind_L1, data_df_ind_L2],
station_info_dict,
tracer_info_dict,
color)
#Show plot
show(p)
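# Both loops above repeat the same indexing pattern: convert the epoch-millisecond
# TIMESTAMP column to datetimes, use it as a sorted index, and slice by date
# labels. A minimal stand-in (the TIMESTAMP and co2 values are made up):

```python
import pandas as pd

# Stand-in for a concatenated ICOS data dataframe; rows deliberately unsorted:
data_df = pd.DataFrame({'TIMESTAMP': [1546304400000, 1546300800000, 1546308000000],
                        'co2': [412.1, 411.8, 412.5]})

# Epoch milliseconds -> datetime, then use it as a sorted index:
data_df['DateTime'] = pd.to_datetime(data_df['TIMESTAMP'], unit='ms')
data_df_ind = data_df.set_index('DateTime').sort_index()

# A label-based slice now selects a date range:
subset = data_df_ind.loc['2019-01-01':'2019-01-01 01:30']
```

Sorting the index first matters: label slicing on an unsorted `DatetimeIndex` raises an error, which is why the functions above call `sort_index` before filtering.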
def update_icos_single_station_plot_LX_binary(data_obj_id_LX_ls, station_code, station_sampl_height, tracer, color, level):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of ICOS Level-X data object IDs, the
station code, the sampling height, the tracer type, the plot color and the
ICOS data level. The function reads the corresponding ICOS Level-X
Atmospheric Tracer data files for every data object ID into separate
pandas dataframes. Every data file produces two separate pandas dataframes:
a metadata dataframe and a data dataframe. These dataframes are then passed
to a plot function that returns a Bokeh figure (plot),
which is then displayed.
Input parameters: 1. List of ICOS Level-X data object IDs
(var_name: 'data_obj_id_LX_ls', var_type: List)
2. ICOS 3-character Station Code
(var_name: 'station_code', var_type: String)
3. Station Sampling Height
(var_name: 'station_sampl_height', var_type: String)
4. Tracer/gas, e.g. 'co2'
(var_name: 'tracer', var_type: String)
5. Plot Color
(var_name: 'color', var_type: String)
6. ICOS Atmospheric Data Level
(var_name: 'level', var_type: Integer)
Output: Bokeh Plot
"""
#Import modules:
from icoscp.cpb.dobj import Dobj
#Create dictionary to store tracer info:
tracer_info_dict = {}
#Create dict to store the station info:
station_info_dict = {}
#Create list to store the data dataframes of all data object IDs:
data_df_ls = []
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Create a file object from the 1st object in the data object id list:
file = Dobj(data_obj_id_LX_ls[0])
#Get the tracer description:
tracer_info_dict['tracer_info'] = file.info[1].valueType.loc[file.info[1].colName==tracer].values[0]
#Get tracer unit:
tracer_info_dict['tracer_unit'] = file.info[1].unit.loc[file.info[1].colName==tracer].values[0]
#Get station info:
station_info_dict['station_name'] = icos_stations_df.stationName.loc[icos_stations_df.stationId==station_code].values[0]
station_info_dict['station_code'] = station_code
station_info_dict['station_sampling_height'] = station_sampl_height
station_info_dict['station_country_code'] = icos_stations_df.Country.loc[icos_stations_df.stationId==station_code].values[0]
station_info_dict['station_country'] = get_country_fullname_from_iso3166_2char(station_info_dict['station_country_code'])
station_info_dict['station_lat'] = icos_stations_df.lat.loc[icos_stations_df.stationId==station_code].values[0]
station_info_dict['station_lon'] = icos_stations_df.lon.loc[icos_stations_df.stationId==station_code].values[0]
#Loop through every data object ID in the list:
for dobjid in data_obj_id_LX_ls:
#Get a pandas dataframe with all the columns for the selected dataobject id:
obs_data_df = Dobj(dobjid).get()
#Add data dataframe of the current data object ID to the list:
data_df_ls.append(obs_data_df)
#Concatenate data dataframes to one dataframe:
data_LX_df = pd.concat(data_df_ls)
#Call plotting function:
p = plot_icos_single_station_binary(data_LX_df, station_info_dict, tracer_info_dict, level, color)
#Show plot
show(p)
# <a id='plotting_funcs'></a>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
# <br>
# <br>
#
# ## 7. Plotting functions
# This part includes functions that create interactive plots with the Bokeh visualization library.
def plot_icos_single_station_binary(df_data, station_info_dict, tracer_info_dict, level=2, color='#0F0C08'):
"""
Project: 'ICOS Carbon Portal'
Created: Mon Apr 07 09:30:00 2019
Last Changed: Mon Apr 07 09:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that plots the content of an ICOS Level-1 or Level-2 Atmospheric Data file to an
interactive plot using the Bokeh interactive visualization library.
Bokeh (URL): https://bokeh.pydata.org/en/latest/
Input parameters: 1. ICOS Level-1 or Level-2 Atmospheric Observation Data
(var_name: 'df_data', var_type: pandas dataframe)
2. Dictionary with station info
(var_name: 'station_info_dict', var_type: dictionary)
3. Dictionary with tracer info
(var_name: 'tracer_info_dict', var_type: dictionary)
4. Data level [optional]
(var_name: 'level', var_type: Integer)
5. Color for Line- or Circle Glyph [optional]
(var_name: 'color', var_type: String)
Default value for color: The default line-glyph and circle-glyph color is '#0F0C08' (near black).
Default value for level: The default value for data level is "2". Function calls for Level-2 data do not
have to include a value for the level input parameter.
Output: Bokeh Figure Object (plot)
"""
#Import modules to create figure:
import pandas as pd
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, HoverTool, Label, Legend
from datetime import datetime
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")
#Define Datasets:
x = pd.to_datetime(df_data['TIMESTAMP'], unit='ms')
y = df_data[tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].values #give tracer as parameter, where tracer can be ['co', 'co2']
z = df_data["NbPoints"].values
w = df_data["Stdev"].values
o = df_data["Flag"].values
#u = df_data["InstrumentId"].values
#Create a ColumnDataSource object:
source = ColumnDataSource( data = {'x':x, 'y':y, 'z':z, 'w':w, 'o':o,} )
#Create a figure object:
p = figure(plot_width=900,
plot_height=400,
x_axis_label='Time (UTC)',
y_axis_label=tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB)+ ' (' +
tracer_info_dict['tracer_unit'].translate(SUP) + ')',
x_axis_type='datetime',
title = tracer_info_dict['tracer_info'].translate(SUB)+' '+
station_info_dict['station_name'] +', '+
station_info_dict['station_country'] +', '+
station_info_dict['station_sampling_height'] +' m.a.g.l.',
tools='pan,box_zoom,wheel_zoom,undo,redo,reset,save')
#Create glyphs:
g0 = p.circle('x','y', source=source, radius=.02, color=color)
#If data is level-2 data:
if(level==2):
g1 = p.line('x','y', source=source, line_width=1, color=color, name=tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB))
#If data is level-1 data:
else:
g1 = p.line('x','y', source=source, line_width=2, color=color, name=tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB),
line_dash='dotted')
#Add tooltip on hover:
p.add_tools(HoverTool(tooltips=[
('Station Code',station_info_dict['station_code']),
('Latitude',station_info_dict['station_lat']),
('Longitude',station_info_dict['station_lon']),
('Time (UTC)','@x{%Y-%m-%d %H:%M:%S}'),
(tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB),'@y{0.f}'),
('St dev', '@w{0.f}'),
('NbPoints', '@z'),
('Flag', '@o')
],
formatters={
'@x' : 'datetime', # use 'datetime' formatter for 'date' field
},
# display a tooltip whenever the cursor is vertically in line with a glyph
mode='vline'
))
#Set title attributes:
p.title.align = 'center'
p.title.text_font_size = '13pt'
p.title.offset = 15
#Set label font style:
p.xaxis.axis_label_text_font_style = 'normal'
p.yaxis.axis_label_text_font_style = 'normal'
p.xaxis.axis_label_standoff = 15 #Sets the distance of the label from the x-axis in screen units
p.yaxis.axis_label_standoff = 15 #Sets the distance of the label from the y-axis in screen units
#Set the copyright label position:
label_opts = dict(x=0, y=10,
x_units='screen', y_units='screen')
#Create a label object and format it:
caption1 = Label(text="© ICOS ERIC", **label_opts)
caption1.text_font_size = '8pt'
#Deactivate hover-tool, which is by default active:
p.toolbar.active_inspect = None
#Add label to plot:
p.add_layout(caption1, 'below')
#return plot:
return p
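# The SUB/SUP tables used in the plotting functions above are plain
# `str.maketrans` mappings that turn ASCII digits into Unicode sub- and
# superscripts, so a tracer name like 'CO2' renders as 'CO₂' in axis labels
# and hover tooltips:

```python
# Translation tables mapping ASCII digits to Unicode sub-/superscripts:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")

tracer_label = 'CO2'.translate(SUB)  # subscripts the digit: 'CO₂'
unit_label = 'm3'.translate(SUP)     # superscripts the digit: 'm³'
```

Only digit characters are mapped; letters and signs pass through unchanged, which is why the functions above can translate whole label strings safely.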
def plot_icos_single_station_multiple_tracers_binary(df_list):
"""
Project: 'ICOS Carbon Portal'
Created: Fri Apr 07 10:00:00 2019
Last Changed: Fri Apr 07 10:00:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates an interactive Bokeh plot with ICOS Level-2 Atmospheric Data
('CO2', 'CO', 'CH4'). The plot has a separate y-axis for every tracer.
Input parameters: List of lists of station-info dictionaries, tracer-info dictionaries and
data dataframes with ICOS Level-2 Atmospheric Data
(var_name: "df_list", var_type: List of Lists)
Output: Bokeh Plot
"""
#Import modules to create figure:
import pandas as pd
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, HoverTool, Label, Legend, LinearAxis, Range1d, SingleIntervalTicker
from datetime import datetime
#Dictionaries for subscript/superscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")
#Create a figure object:
p = figure(plot_width=900,
plot_height=500,
x_axis_label='Time (UTC)',
y_axis_label= df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB)+' (' +
df_list[0][1]['tracer_unit'].translate(SUP) + ')',
x_axis_type='datetime',
title = 'Tracer Observations ('+
df_list[0][0]['station_name']+', '+
df_list[0][0]['station_country']+', '+
df_list[0][0]['station_sampling_height']+' m. a. g. l.)',
tools='pan,box_zoom,wheel_zoom,reset,save')
#Dictionary containing tracer colors:
colors = {'co': '#ff7502',
'co2': '#543005',
'ch4': '#e4c981'}
#Create an empty list that will store the legend info:
legend_it = []
#Extract time and tracer values for every tracer category:
x = pd.to_datetime(df_list[0][2]['TIMESTAMP'], unit='ms')
y = df_list[0][2][df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
#Create a circle and line glyph for the values of every emission category:
r0 = p.circle(x, y, radius=.12, color=colors[df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()])
r1 = p.line(x, y, line_width=1, color=colors[df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()],
name=df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB))
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append((df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB), [r0,r1]))
#If 2 tracers have been selected:
if(len(df_list)==2):
#Get the total min/max values for every tracer:
tracer1_min = df_list[0][2][df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].min()
tracer1_max = df_list[0][2][df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].max()
tracer2_min = df_list[1][2][df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].min()
tracer2_max = df_list[1][2][df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].max()
#If two tracers are selected and one of the tracers is co2:
if((len(df_list)==2) and (df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','')=='CO2')):
p.y_range, p.extra_y_ranges = set_yranges_2y(rounddown_20(tracer1_min),
roundup_20(tracer1_max),
rounddown_100(tracer2_min),
roundup_100(tracer2_max), 20.0, 100.0, 'Yaxis2')
#Set primary y-axis ticker:
ticker_1 = SingleIntervalTicker(interval= 20)
#Add primary y-axis ticker to plot:
p.yaxis.ticker = ticker_1
#Set secondary y-axis ticker:
ticker_2 = SingleIntervalTicker(interval=100)
#Create 2nd y-axis:
bg_yaxis = LinearAxis(y_range_name="Yaxis2",
axis_label=df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB)+' (' +
df_list[1][1]['tracer_unit'].translate(SUP) + ')',
ticker=ticker_2, axis_label_standoff = 15)
#Add secondary y-axis to plot:
p.add_layout(bg_yaxis, 'right')
else:
p.y_range, p.extra_y_ranges = set_yranges_2y(rounddown_100(tracer1_min),
roundup_100(tracer1_max),
rounddown_100(tracer2_min),
roundup_100(tracer2_max),
100.0, 100.0, 'Yaxis2')
#Set primary y-axis ticker:
ticker_1 = SingleIntervalTicker(interval= 100)
#Add primary y-axis ticker to plot:
p.yaxis.ticker = ticker_1
#Set secondary y-axis ticker:
ticker_2 = SingleIntervalTicker(interval=100)
#Create 2nd y-axis:
yaxis_2 = LinearAxis(y_range_name="Yaxis2",
axis_label=df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB)+' (' +
df_list[1][1]['tracer_unit'].translate(SUP) + ')',
ticker=ticker_2,
axis_label_standoff = 15)
#Add secondary y-axis to plot:
p.add_layout(yaxis_2, 'right')
#Set the text color of the yaxis for both y-axes:
p.yaxis[0].axis_label_text_color = colors[df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
p.yaxis[1].axis_label_text_color = colors[df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
#Extract time and tracer values for every tracer category:
x2i = pd.to_datetime(df_list[1][2]['TIMESTAMP'], unit='ms')
y2i = df_list[1][2][df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
#Create a circle and line glyph for the values of every emission category:
r2 = p.circle(x2i, y2i, radius=.12, color=colors[df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()],
y_range_name="Yaxis2")
r3 = p.line(x2i, y2i, line_width=1, color=colors[df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()],
name=df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB), y_range_name="Yaxis2")
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append((df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB), [r2,r3]))
#If three tracers have been selected:
if(len(df_list)==3):
#Get the total min/max values for every tracer:
tracer1_min = df_list[0][2][df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].min()
tracer1_max = df_list[0][2][df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].max()
tracer2_min = df_list[1][2][df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].min()
tracer2_max = df_list[1][2][df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].max()
tracer3_min = df_list[2][2][df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].min()
tracer3_max = df_list[2][2][df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].max()
#Get the ranges for every y-axis:
p.y_range, p.extra_y_ranges['Yaxis2'], p.extra_y_ranges['Yaxis3']= set_yranges_3y(rounddown_20(tracer1_min),
roundup_20(tracer1_max),
rounddown_100(tracer2_min),
roundup_100(tracer2_max),
rounddown_100(tracer3_min),
roundup_100(tracer3_max),
20.0, 100.0, 100.0)
#Set primary y-axis ticker:
ticker_1 = SingleIntervalTicker(interval= 20)
#Add primary y-axis ticker to plot:
p.yaxis.ticker = ticker_1
#Set secondary y-axis ticker:
ticker_2 = SingleIntervalTicker(interval=100)
#Create 2nd y-axis:
yaxis_2 = LinearAxis(y_range_name="Yaxis2",
axis_label=df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB)+' (' +
df_list[1][1]['tracer_unit'].translate(SUP) + ')',
ticker=ticker_2,
axis_label_standoff = 15)
#Create 3rd y-axis:
yaxis_3 = LinearAxis(y_range_name="Yaxis3",
axis_label=df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB)+' (' +
df_list[2][1]['tracer_unit'].translate(SUP) + ')',
ticker=ticker_2,
axis_label_standoff = 15)
#Add secondary y-axis to plot:
p.add_layout(yaxis_2, 'right')
#Add third y-axis to plot:
p.add_layout(yaxis_3, 'right')
#Set yaxis tick-label color for all 3 y-axes:
p.yaxis[0].major_label_text_color = colors[df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
p.yaxis[1].major_label_text_color = colors[df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
p.yaxis[2].major_label_text_color = colors[df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
#Set the text color of the yaxis for all 3 y-axes:
p.yaxis[0].axis_label_text_color = colors[df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
p.yaxis[1].axis_label_text_color = colors[df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
p.yaxis[2].axis_label_text_color = colors[df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
#Extract time and tracer values for every tracer category:
x2 = pd.to_datetime(df_list[1][2]['TIMESTAMP'], unit='ms')
y2 = df_list[1][2][df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
#Create a circle and line glyph for the values of every emission category:
r2 = p.circle(x2, y2, radius=.12,
color=colors[df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()],
y_range_name="Yaxis2")
r3 = p.line(x2, y2, line_width=1,
color=colors[df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()],
name=df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB), y_range_name="Yaxis2")
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append((df_list[1][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB), [r2,r3]))
#Extract time and tracer values for every tracer category:
x3 = pd.to_datetime(df_list[2][2]['TIMESTAMP'], unit='ms')
y3 = df_list[2][2][df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
#Create a circle and line glyph for the values of every emission category:
r4 = p.circle(x3, y3, radius=.12,
color=colors[df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()],
y_range_name="Yaxis3")
r5 = p.line(x3, y3, line_width=1,
color=colors[df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()],
name=df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB), y_range_name="Yaxis3")
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append((df_list[2][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB), [r4,r5]))
#Add tooltip on hover:
p.add_tools(HoverTool(tooltips=[
('Tracer type','$name'),
('Time (UTC)','@x{%Y-%m-%d %H:%M:%S}'),
('Tracer value','@y{0.f}'),
],
formatters={
'@x' : 'datetime', # use 'datetime' formatter for 'date' field
},
# display a tooltip whenever the cursor is vertically in line with a glyph
mode='vline'
))
#Create legend:
legend = Legend(items=legend_it, location= 'bottom_center')
legend.orientation = 'horizontal'
legend.click_policy='hide'
legend.spacing = 10 #sets the distance between legend entries
#Set title attributes:
p.title.align = 'center'
p.title.text_font_size = '13pt'
p.title.offset = 15
#Set axis label font style:
p.xaxis.axis_label_text_font_style = 'normal'
p.yaxis.axis_label_text_font_style = 'normal'
p.xaxis.axis_label_standoff = 15 #Sets the distance of the label from the x-axis in screen units
p.yaxis.axis_label_standoff = 15 #Sets the distance of the label from the y-axis in screen units
#Set the copyright label position:
label_opts = dict(x=0, y=10,
x_units='screen', y_units='screen')
#Create a label object and format it:
caption1 = Label(text="© ICOS ERIC", **label_opts)
caption1.text_font_size = '8pt'
#Deactivate hover-tool, which is by default active:
p.toolbar.active_inspect = None
#Add label to plot:
p.add_layout(caption1, 'below')
#Add legend to figure:
p.add_layout(legend, 'below')
#return plot:
return p
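
# The plotting functions in this notebook build their axis labels and titles with
# `str.translate` and translation tables from `str.maketrans`, turning e.g. "CO2" into
# "CO₂". A minimal, self-contained illustration (the tracer string and the `_demo`
# variable names below are made-up examples, not taken from ICOS metadata):

# +
#Translation tables mapping ASCII digits to Unicode sub-/superscript digits:
SUB_demo = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP_demo = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")

#Strip the long tracer description and subscript the digits, as done for the y-axis labels:
tracer_label = 'CO2 mixing ratio (dry mole fraction)'.replace(' mixing ratio (dry mole fraction)', '')
print(tracer_label.translate(SUB_demo))   #prints CO₂
print('cm3'.translate(SUP_demo))          #prints cm³
# -
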
# +
def plot_icos_single_tracer_multiple_stations_binary(df_list):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of sublists with dataframes, where every sublist
contains a metadata and a data dataframe for a tracer measured at a specific station.
The function creates an interactive Bokeh plot with the contents of the dataframes and
returns a Bokeh Figure (plot).
Input parameters: List of sublists with ICOS Level-2 Atmospheric data and metadata dataframes
(var_name: 'df_list', var_type: List)
Output: Bokeh Plot
"""
#Import modules to create figure:
import pandas as pd
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, HoverTool, Label, Legend
from datetime import datetime
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")
#Create a figure object:
p = figure(plot_width=900,
plot_height=500,
x_axis_label='Time (UTC)',
y_axis_label=df_list[0][1]['tracer_info'].replace(' mixing ratio (dry mole fraction)', '').translate(SUB)+
' (' + df_list[0][1]['tracer_unit'].translate(SUP) + ')',
x_axis_type='datetime',
title = df_list[0][1]['tracer_info'].translate(SUB),
tools='pan,box_zoom,wheel_zoom,reset,save')
#Get colormap:
colormap = get_colormap(len(df_list))
#Create an empty list that will store the legend info:
legend_it = []
#Create a glyph with a different colour for every tracer:
for item, color in zip(df_list, colormap):
#Extract time and tracer values for every tracer category:
x = pd.to_datetime(item[2]['TIMESTAMP'], unit='ms')
y = item[2][item[1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
        #Create a circle and line glyph for the values of every station:
r0 = p.circle(x, y, radius=.12, color=color)
r1 = p.line(x, y, line_width=1, color=color, name=item[0]['station_code']+' ('+
item[0]['station_sampling_height']+')')
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append((item[0]['station_code']+' ('+item[0]['station_sampling_height']+')', [r0,r1]))
#Add tooltip on hover:
p.add_tools(HoverTool(tooltips=[
('Station','$name'),
('Time (UTC)','@x{%Y-%m-%d %H:%M:%S}'),
(item[1]['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB),'@y{0.f}'),
],
formatters={
'@x' : 'datetime', # use 'datetime' formatter for 'date' field
},
# display a tooltip whenever the cursor is vertically in line with a glyph
mode='vline'
))
#Create legend:
legend = Legend(items=legend_it, location= 'bottom_center')
legend.orientation = 'horizontal'
legend.click_policy='hide'
legend.spacing = 10 #sets the distance between legend entries
#Set title attributes:
p.title.align = 'center'
p.title.text_font_size = '13pt'
p.title.offset = 15
#Set axis label font style:
p.xaxis.axis_label_text_font_style = 'normal'
p.yaxis.axis_label_text_font_style = 'normal'
p.xaxis.axis_label_standoff = 15 #Sets the distance of the label from the x-axis in screen units
p.yaxis.axis_label_standoff = 15 #Sets the distance of the label from the y-axis in screen units
#Set the copyright label position:
label_opts = dict(x=0, y=10,
x_units='screen', y_units='screen')
#Create a label object and format it:
caption1 = Label(text="© ICOS ERIC", **label_opts)
caption1.text_font_size = '8pt'
#Deactivate hover-tool, which is by default active:
p.toolbar.active_inspect = None
#Add label to plot:
p.add_layout(caption1, 'below')
#Add legend to figure:
p.add_layout(legend, 'below')
#return plot:
return p
# -
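
# The binary ICOS data streams deliver the TIMESTAMP column as milliseconds since the
# Unix epoch, which is why the plotting functions call `pd.to_datetime(..., unit='ms')`.
# A short sketch of that conversion with made-up timestamps:

# +
import pandas as pd

#Two hypothetical timestamps: 2019-01-01 00:00 UTC and one hour later, in milliseconds:
ts_ms = pd.Series([1546300800000, 1546304400000])
ts = pd.to_datetime(ts_ms, unit='ms')
print(ts.iloc[0])   #prints 2019-01-01 00:00:00
# -
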
def plot_icos_focus_binary(df_data, station_info_dict, tracer_info_dict, tracer, color):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
    Description:      Function that takes as input a data dataframe, two dictionaries with station
                      and tracer metadata, a string stating the tracer and a string with the color
                      for plotting the data of a specific ICOS station. The function creates two
                      interactive Bokeh plots with the contents of the dataframe and
                      returns two Bokeh Figures (plots).
Input parameters: 1. ICOS Level-2 tracer Atmospheric Data Dataframe
(var_name: 'df_data', var_type: Pandas DataFrame)
2. Dictionary with ICOS station information
(var_name: 'station_info_dict', var_type: Dictionary)
3. Dictionary with tracer information
(var_name: 'tracer_info_dict', var_type: Dictionary)
4. Tracer/gas - e.g. 'co2'
(var_name: "tracer", var_type: String)
5. Plot Color
(var_name: "color", var_type: String)
Output: Bokeh Plot
"""
#Import modules to create figure:
#from bokeh.palettes import Category10
import pandas as pd
from bokeh.layouts import row, column
from bokeh.models import ColumnDataSource, HoverTool, Label, Legend, CustomJS, Rect
from bokeh.plotting import figure, show
from datetime import datetime
from bokeh.io import push_notebook, output_notebook
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")
#Define Datasets:
x = pd.to_datetime(df_data['TIMESTAMP'], unit='ms')
y = df_data[tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()].values
#Create a ColumnDataSource object:
source = ColumnDataSource({'x': [], 'y': [], 'width': [], 'height': []})
#Javascript code defining the new x-range, y-range of the plot, based on the area selected by the user:
jscode="""
var data = source.data;
var start = cb_obj.start;
var end = cb_obj.end;
data['%s'] = [start + (end - start) / 2];
data['%s'] = [end - start];
source.change.emit();
"""
#Create a figure object:
p1 = figure(plot_width=900,
plot_height=400,
x_axis_label='Time (UTC)',
y_axis_label=tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB) + ' (' +
tracer_info_dict['tracer_unit'].translate(SUP) + ')',
x_axis_type='datetime',
title = tracer_info_dict['tracer_info'].translate(SUB)+' '+
station_info_dict['station_name']+', '+
                station_info_dict['station_country']+', '+
                station_info_dict['station_sampling_height']+' m. a. g. l.',
tools='pan,box_zoom,wheel_zoom,undo,redo,reset,save')
#Create glyphs:
p1.circle(x,y, radius=.02, color=color)# ,legend=df_metadata.loc['STATION CODE'].values[0])
p1.line(x,y, line_width=1, color=color)# ,legend=df_metadata.loc['STATION CODE'].values[0])
#Add tooltip on hover:
p1.add_tools(HoverTool(tooltips=[
('Station Code',station_info_dict['station_code']),
('Latitude',station_info_dict['station_lat']),
('Longitude',station_info_dict['station_lon']),
('Time (UTC)','@x{%Y-%m-%d %H:%M:%S}'),
(tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB),'@y{0.f}'),
],
formatters={
'@x' : 'datetime', # use 'datetime' formatter for 'date' field
},
# display a tooltip whenever the cursor is vertically in line with a glyph
mode='vline'
))
#Set title attributes:
p1.title.align = 'center'
p1.title.text_font_size = '13pt'
p1.title.offset = 15
#Set label font style:
p1.xaxis.axis_label_text_font_style = 'normal'
p1.yaxis.axis_label_text_font_style = 'normal'
p1.xaxis.axis_label_standoff = 15 #Sets the distance of the label from the x-axis in screen units
p1.yaxis.axis_label_standoff = 15 #Sets the distance of the label from the y-axis in screen units
#Set the copyright label position:
label_opts = dict(x=0, y=10,
x_units='screen', y_units='screen')
#Create a label object and format it:
caption1 = Label(text="© ICOS ERIC", **label_opts)
caption1.text_font_size = '8pt'
#Change the plot's x-range, y-range based on the selected area (javascript callback):
xcb = CustomJS(args=dict(source=source), code=jscode % ('x', 'width'))
ycb = CustomJS(args=dict(source=source), code=jscode % ('y', 'height'))
p1.x_range.js_on_change('start', xcb)
p1.x_range.js_on_change('end', xcb)
p1.y_range.js_on_change('start', ycb)
p1.y_range.js_on_change('end', ycb)
#Deactivate hover-tool, which is by default active:
p1.toolbar.active_inspect = None
#Add label to plot:
p1.add_layout(caption1, 'below')
############################################### CODE FOR 2nd PLOT ###################################################
#Create a figure object:
p2 = figure(plot_width=900,
plot_height=400,
x_axis_label='Time (UTC)',
y_axis_label=tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB) +
' (' +tracer_info_dict['tracer_unit'].translate(SUP) + ')',
x_axis_type='datetime',
title = tracer_info_dict['tracer_info'].translate(SUB)+' '+
station_info_dict['station_name']+', '+
station_info_dict['station_country']+', '+
station_info_dict['station_sampling_height']+' m. a. g. l.',
tools='save')
#Create glyphs:
p2.circle(x, y, radius=.02, color=color)# ,legend=df_metadata.loc['STATION CODE'].values[0])
p2.line(x, y, line_width=1, color=color)# ,legend=df_metadata.loc['STATION CODE'].values[0])
#Add tooltip on hover:
p2.add_tools(HoverTool(tooltips=[
('Station Code',station_info_dict['station_code']),
('Latitude',station_info_dict['station_lat']),
('Longitude',station_info_dict['station_lon']),
('Time (UTC)','@x{%Y-%m-%d %H:%M:%S}'),
(tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB),'@y{0.f}')
],
formatters={
'@x' : 'datetime', # use 'datetime' formatter for 'date' field
},
# display a tooltip whenever the cursor is vertically in line with a glyph
mode='vline'
))
#Set title attributes:
p2.title.align = 'center'
p2.title.text_font_size = '13pt'
p2.title.offset = 15
#Set label font style:
p2.xaxis.axis_label_text_font_style = 'normal'
p2.yaxis.axis_label_text_font_style = 'normal'
p2.xaxis.axis_label_standoff = 15 #Sets the distance of the label from the x-axis in screen units
p2.yaxis.axis_label_standoff = 15 #Sets the distance of the label from the y-axis in screen units
#Set the copyright label position:
label_opts = dict(x=0, y=10,
x_units='screen', y_units='screen')
#Create a label object and format it:
caption1 = Label(text="© ICOS ERIC", **label_opts)
caption1.text_font_size = '8pt'
#Add label to plot:
p2.add_layout(caption1, 'below')
    #Add a rectangle glyph that marks the area currently displayed in the zoomable plot:
rect = Rect(x='x', y='y', width='width', height='height', fill_alpha=0.1, line_color='black', fill_color='black')
p2.add_glyph(source, rect)
#Deactivate hover-tool, which is by default active:
p2.toolbar.active_inspect = None
#Return plots:
return p1, p2
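
# plot_icos_focus_binary links its two plots through a shared ColumnDataSource:
# JavaScript callbacks on the ranges of the zoomable plot write the visible window into
# the source, and the static plot draws a Rect glyph from it. A minimal, self-contained
# sketch of that pattern (placeholder figure sizes instead of the ICOS time series; the
# full version above also listens on the ranges' 'end' property):

# +
from bokeh.models import ColumnDataSource, CustomJS, Rect
from bokeh.plotting import figure

#Source holding the centre and size of the currently visible window:
window_source = ColumnDataSource({'x': [], 'y': [], 'width': [], 'height': []})

#The %s placeholders are filled with the source column names for each axis:
js_template = """
var data = source.data;
data['%s'] = [cb_obj.start + (cb_obj.end - cb_obj.start) / 2];
data['%s'] = [cb_obj.end - cb_obj.start];
source.change.emit();
"""

#Zoomable "focus" plot; panning/zooming updates the source via the callbacks:
focus = figure(plot_width=400, plot_height=200, tools='pan,box_zoom,reset')
focus.x_range.js_on_change('start', CustomJS(args=dict(source=window_source),
                                             code=js_template % ('x', 'width')))
focus.y_range.js_on_change('start', CustomJS(args=dict(source=window_source),
                                             code=js_template % ('y', 'height')))

#Static "overview" plot marking the visible window with a semi-transparent rectangle:
overview = figure(plot_width=400, plot_height=200, tools='save')
overview.add_glyph(window_source,
                   Rect(x='x', y='y', width='width', height='height',
                        fill_alpha=0.1, line_color='black', fill_color='black'))
# -
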
def plot_icos_single_station_smoothing_binary(df_data, station_info_dict, tracer_info_dict, tracer, color):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
    Description:      Function that takes as input a data dataframe, two dictionaries with station
                      and tracer metadata, a string stating the tracer and a string with the color
                      for plotting the data of a specific ICOS station. The function creates an
                      interactive Bokeh plot with the contents of the dataframe and
                      returns a Bokeh Figure (plot).
Input parameters: 1. ICOS Level-2 tracer Atmospheric Data Dataframe
(var_name: 'df_data', var_type: Pandas DataFrame)
2. Dictionary with ICOS Station info
(var_name: 'station_info_dict', var_type: Dictionary)
3. Dictionary with tracer info
(var_name: 'tracer_info_dict', var_type: Dictionary)
4. Tracer/gas - e.g. 'co2'
(var_name: "tracer", var_type: String)
5. Plot Color
(var_name: "color", var_type: String)
Output: Bokeh Plot
"""
#Import modules to create figure:
import pandas as pd
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, HoverTool, Label, Legend
from datetime import datetime
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")
#Define Datasets:
x = df_data.index.values
    y1 = df_data['rolling_mean']
y2 = df_data[tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').lower()]
#Create a figure object:
p = figure(plot_width=900,
plot_height=400,
x_axis_label='Time (UTC)',
y_axis_label=tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB) + ' (' +
tracer_info_dict['tracer_unit'].translate(SUP) + ')',
x_axis_type='datetime',
title = tracer_info_dict['tracer_info'].translate(SUB)+' '+
station_info_dict['station_name']+', '+
station_info_dict['station_country']+', '+
station_info_dict['station_sampling_height']+' m. a. g. l.',
               tools='pan,box_zoom,wheel_zoom,undo,reset,save')
#Create an empty list that will store the legend info:
legend_it = []
#Create glyphs:
g0 = p.circle(x, y1, radius=.02, color=color)# ,legend=df_metadata.loc['STATION CODE'].values[0])
g1 = p.line(x, y1, line_width=1.25, color=color, name='Smoothed Line')
g2 = p.circle(x, y2, radius=.02, color=color)# ,legend=df_metadata.loc['STATION CODE'].values[0])
g3 = p.line(x, y2, line_width=1, line_dash='dotted', line_alpha=0.5, color=color, name='Original Observations')
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append(('Smoothed '+station_info_dict['station_code']+' ('+
station_info_dict['station_sampling_height']+')', [g1]))
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append(('Original ' + station_info_dict['station_code']+' ('+
station_info_dict['station_sampling_height']+')', [g3]))
#Add tooltip on hover:
p.add_tools(HoverTool(tooltips=[
('Type','$name'),
('Time (UTC)','@x{%Y-%m-%d %H:%M:%S}'),
(tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB),'@y{0.f}'),
],
formatters={
'@x' : 'datetime', # use 'datetime' formatter for 'date' field
},
# display a tooltip whenever the cursor is vertically in line with a glyph
mode='vline'
))
#Create legend:
legend = Legend(items=legend_it, location= 'bottom_center')
legend.orientation = 'horizontal'
legend.click_policy='hide'
legend.spacing = 10 #sets the distance between legend entries
#Set title attributes:
p.title.align = 'center'
p.title.text_font_size = '13pt'
p.title.offset = 15
#Set label font style:
p.xaxis.axis_label_text_font_style = 'normal'
p.yaxis.axis_label_text_font_style = 'normal'
p.xaxis.axis_label_standoff = 15 #Sets the distance of the label from the x-axis in screen units
p.yaxis.axis_label_standoff = 15 #Sets the distance of the label from the y-axis in screen units
#Set the copyright label position:
label_opts = dict(x=0, y=10,
x_units='screen', y_units='screen')
#Create a label object and format it:
caption1 = Label(text="© ICOS ERIC", **label_opts)
caption1.text_font_size = '8pt'
#Deactivate hover-tool, which is by default active:
p.toolbar.active_inspect = None
#Add label to plot:
p.add_layout(caption1, 'below')
#Add legend to figure:
p.add_layout(legend, 'below')
#return plot:
return p
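
# plot_icos_single_station_smoothing_binary expects the input dataframe to already carry
# a 'rolling_mean' column next to the tracer column. One way such a column could be
# computed with pandas (the window size and the tracer values below are assumptions for
# the sake of the example):

# +
import pandas as pd

df_demo = pd.DataFrame({'co2': [400.0, 402.0, 401.0, 405.0, 404.0]})
#Centred 3-point moving average; min_periods=1 keeps the series edges defined:
df_demo['rolling_mean'] = df_demo['co2'].rolling(window=3, center=True, min_periods=1).mean()
print(df_demo['rolling_mean'].iloc[1])   #prints 401.0
# -
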
def plot_icos_single_station_L1_L2_binary(data_df_list, station_info_dict, tracer_info_dict, color):
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that takes as input a list of data dataframes, a dictionary with information for
the ICOS station, a dictionary with tracer information and a string with the color for
the plot line. This implementation is for one selected ICOS station and one selected tracer.
The function creates an interactive Bokeh plot with the contents of the dataframes and
returns a Bokeh Figure (plot).
Input parameters: 1. ICOS Level-2 tracer Atmospheric Data Dataframe
(var_name: 'data_df_list', var_type: List of dataframes)
2. Dictionary with info for ICOS stations
(var_name: 'station_info_dict', var_type: Dictionary)
3. Dictionary with tracer info (i.e. tracer name, tracer unit)
(var_name: "tracer_info_dict", var_type: Dictionary)
4. Selected Color for plot line
(var_name: "color", var_type: String)
Output: Bokeh Plot
"""
#Import modules to create figure:
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, HoverTool, Label, Legend
from datetime import datetime
#Dictionaries for subscript/superscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
SUP = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")
#Create a figure object:
p = figure(plot_width=900,
plot_height=500,
x_axis_label='Time (UTC)',
y_axis_label=tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)', '').translate(SUB)+
' (' +tracer_info_dict['tracer_unit'].translate(SUP) + ')',
x_axis_type='datetime',
title = tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)', '').translate(SUB)+
' - Continuous air ( '+
station_info_dict['station_name']+', '+#.encode('latin1').decode('utf8')+', '+
station_info_dict['station_country']+', '+
station_info_dict['station_sampling_height']+' m. a. g. l.)' ,
tools='pan,box_zoom,wheel_zoom,reset,save')
#Create an empty list that will store the legend info:
legend_it = []
#Extract time and tracer values for every data level:
x1 = data_df_list[0].index.values
y1 = data_df_list[0][tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)', '').lower()].values
x2 = data_df_list[1].index.values
y2 = data_df_list[1][tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)', '').lower()].values
    #Create circle and line glyphs for every data level:
r0 = p.circle(x1, y1, radius=.12, color=color, alpha=0.5)
r1 = p.line(x1, y1, line_width=2, line_dash='dotted', line_alpha=0.5, color=color, name='L1, '+station_info_dict['station_code']+' ('+station_info_dict['station_sampling_height']+')')
r2 = p.circle(x2, y2, radius=.12, color=color)
r3 = p.line(x2, y2, line_width=1, color=color, name='L2, '+station_info_dict['station_code']+' ('+station_info_dict['station_sampling_height']+')')
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append(('Level 1 - '+station_info_dict['station_code']+' ('+station_info_dict['station_sampling_height']+')', [r1]))
#Add the name and glyph info (i.e. colour and marker type) to the legend:
legend_it.append(('Level 2 - '+station_info_dict['station_code']+' ('+station_info_dict['station_sampling_height']+')', [r3]))
#Add tooltip on hover:
p.add_tools(HoverTool(tooltips=[
('Station','$name'),
('Time (UTC)','@x{%Y-%m-%d %H:%M:%S}'),
(tracer_info_dict['tracer_info'].replace(' mixing ratio (dry mole fraction)','').translate(SUB),'@y{0.f}'),
],
formatters={
'@x' : 'datetime', # use 'datetime' formatter for 'date' field
},
# display a tooltip whenever the cursor is vertically in line with a glyph
mode='vline'
))
#Create legend:
legend = Legend(items=legend_it, location= 'bottom_center')
legend.orientation = 'horizontal'
legend.click_policy='hide'
legend.spacing = 10 #sets the distance between legend entries
#Set title attributes:
p.title.align = 'center'
p.title.text_font_size = '13pt'
p.title.offset = 15
#Set axis label font style:
p.xaxis.axis_label_text_font_style = 'normal'
p.yaxis.axis_label_text_font_style = 'normal'
p.xaxis.axis_label_standoff = 15 #Sets the distance of the label from the x-axis in screen units
p.yaxis.axis_label_standoff = 15 #Sets the distance of the label from the y-axis in screen units
#Set the copyright label position:
label_opts = dict(x=0, y=10,
x_units='screen', y_units='screen')
#Create a label object and format it:
caption1 = Label(text="© ICOS ERIC", **label_opts)
caption1.text_font_size = '8pt'
#Deactivate hover-tool, which is by default active:
p.toolbar.active_inspect = None
#Add label to plot:
p.add_layout(caption1, 'below')
#Add legend to figure:
p.add_layout(legend, 'below')
#return plot:
return p
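
# Every plot above registers the same HoverTool configuration: '$name' pulls the glyph's
# name, '@x' is formatted as a datetime, and mode='vline' triggers the tooltip whenever
# the cursor shares an x-position with a glyph. A stripped-down sketch of that
# configuration (the demo figure and series name are placeholders):

# +
from bokeh.models import HoverTool
from bokeh.plotting import figure

p_demo = figure(plot_width=300, plot_height=200, x_axis_type='datetime')
p_demo.line([0, 1], [0, 1], name='demo series')
p_demo.add_tools(HoverTool(tooltips=[('Series', '$name'),
                                     ('Time (UTC)', '@x{%Y-%m-%d %H:%M:%S}'),
                                     ('Value', '@y')],
                           formatters={'@x': 'datetime'},
                           mode='vline'))
# -
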
# <a id='widget_funcs'></a>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
# <br>
# <br>
#
# ## 8. Widget functions
# This part includes functions that create widget-forms and helper functions for these widget-forms. In Python, interactive elements like dropdown lists, checkboxes, buttons, etc. are called widgets.
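
# The widget forms below are built with ipywidgets' `interact_manual`, which turns the
# keyword arguments of a callback into input widgets and adds a button that runs the
# callback on demand. A minimal sketch of that pattern (the callback and tracer options
# are illustrative, not the actual plot-update functions):

# +
from ipywidgets import Checkbox, Dropdown, interact_manual

def echo_selection(Tracer, Citation):
    #In the notebook this is where the plot would be updated; here we just echo the input:
    return Tracer, Citation

interact_manual(echo_selection,
                Tracer=Dropdown(options=['CO2', 'CH4', 'CO']),
                Citation=Checkbox(value=True, description='Citation'))
# -
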
def create_station_labels(lookup_df):
"""
Project: 'ICOS Carbon Portal'
Created: Mon Apr 01 10:26:00 2019
Last Changed: Mon Apr 01 10:26:00 2019
Version: 1.0.0
Author(s): Karolina
    Description:      Return a list of tuples. Every tuple contains the label for a specific station and a
list with the station's 3-character code and sampling height. The first item of every tuple
is used as a label in the station dropdown list (i.e. Select or Multi-select widget).
Input parameters: lookup table (pandas dataframe)
    Output:           List of tuples (e.g. [('Gartow (alt. 30.0)', ['GAT', '30.0']), ...])
tuple:
1. Constructed Station Label (var_type: String)
2. List of two items
i. Station 3-character Code (var_type: String)
ii. Station Sampling Height (var_type: String)
"""
#Filter the dataframe and get a dataframe of unique combinations of station-names and
#corresponding sampling heights:
df = lookup_df.filter(['stationName', 'height','stationId']).drop_duplicates(subset=['stationName',
'height',
'stationId'])
#Get a list of tuples for every station that has provided tracer-data:
#Every tuple is constructed like: ('Gartow (alt. 30.0)', ['GAT', '30.0'])
station_labels = [(df.stationName.iloc[i]+ " (alt. " + df.height.iloc[i] + ")",
[df.stationId.iloc[i], df.height.iloc[i]]) for i in range(len(df))]
#Return list:
return station_labels
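
# A quick illustration of what create_station_labels produces, using a hypothetical
# lookup table (the rows below are made up for the example) and repeating the same
# drop-duplicates and label-building steps inline so the cell is self-contained:

# +
import pandas as pd

toy_lookup = pd.DataFrame({'stationName': ['Gartow', 'Gartow', 'Hyltemossa'],
                           'height': ['30.0', '30.0', '150.0'],
                           'stationId': ['GAT', 'GAT', 'HTM']})
#Same steps as in create_station_labels: drop duplicates, then build the label tuples:
df_u = toy_lookup.drop_duplicates(subset=['stationName', 'height', 'stationId'])
toy_labels = [(df_u.stationName.iloc[i] + " (alt. " + df_u.height.iloc[i] + ")",
               [df_u.stationId.iloc[i], df_u.height.iloc[i]]) for i in range(len(df_u))]
print(toy_labels[0])   #prints ('Gartow (alt. 30.0)', ['GAT', '30.0'])
# -
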
def create_widgets_exploring():
"""
Project: 'ICOS Carbon Portal'
Created: Fri Apr 07 10:00:00 2019
Last Changed: Fri Apr 07 10:00:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates a set of widgets; a tracer dropdown list, a station dropdown list,
a colorpicker and a button, populates the dropdown lists with values, captures the user's
input and calls a function to update the contents of the "exploring"-plot.
Input parameters: No input parameter/s
Output: Plot with Map or Warning Message
"""
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
#Create lookup dataframe:
df_lookup = create_lookup_df_atc_L2()
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers = df_lookup.variable.unique().tolist()
#reverse list order:
tracers.reverse()
#Create widgets:
tracer = Dropdown(options = tracers)
station = Dropdown(options = create_station_labels(df_lookup))
basemap_wdgt = Dropdown(options = ['Imagery', 'OpenStreetMap'], description='Basemap')
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer, station and color:
def update_plot_func(Tracer, Station, basemap, color, Citation):
#Get tracer short:
tracer = Tracer.replace(' mixing ratio (dry mole fraction)', '').lower()
        #Get a list of data object URLs that refer to the selected station and tracer:
data_obj_url_ls = df_lookup.dobj.loc[(df_lookup.stationId==Station[0]) &
(df_lookup.height==Station[1]) &
(df_lookup.variable==Tracer)].values
#If L2-data is available for the selected tracer and station:
if(data_obj_url_ls.size>0):
#Get a list of data object IDs (L2-data):
data_obj_id_ls = [data_obj_url_ls[i].replace('https://meta.icos-cp.eu/objects/', '')
for i in range(data_obj_url_ls.size)]
#Call function to return plot for the selected station:
p = update_icos_single_station_plot_binary(data_obj_id_ls, Station, tracer, color)
#Output should be in the notebook
output_notebook()
#Show plot:
show(p, notebook_handle=True)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-2 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj), output_format='pandas').run().cit.iloc[0]
for dobj in data_obj_url_ls]
#Print citation title:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L2:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#Get pandas dataframe with all ICOS stations:
icos_stations_df = RunSparql(sparql_query=sparqls.get_coords_icos_stations_atc(), output_format='pandas').run()
#Show map:
plotmap(icos_stations_df, Station[0], basemap)
#If no L2-data are available for the selected tracer and station:
else:
print("\033[0;31;1m "+'No '+tracer.upper().translate(SUB)+' Level-2 data available for the selected station...'+"\033[0;31;0m\n\n")
#Create function that contains a box of widgets:
interact_c = interact_manual(update_plot_func,
Tracer=tracer,
Station=station,
basemap = basemap_wdgt,
color=ColorPicker(concise=False,
description='Pick a color',
value='#3973ac',
disabled=False),
Citation=Checkbox(value=True, description='Citation', disabled=False))
#Set the font of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '430px'
interact_c.widget.children[0].layout.margin = '40px 2px 2px 2px'
interact_c.widget.children[1].layout.width = '430px'
interact_c.widget.children[2].layout.width = '430px'
interact_c.widget.children[3].layout.width = '430px'
interact_c.widget.children[4].layout.width = '430px'
interact_c.widget.children[5].description = 'Update Plot'
interact_c.widget.children[5].button_style = 'danger'
interact_c.widget.children[5].style.button_color = '#3973ac'
interact_c.widget.children[5].layout.margin = '10px 10px 40px 180px' # top/right/bottom/left
# +
def create_widgets_exploring_multiple_tracers():
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates a set of widgets; a tracer multiselect dropdown list, a
station dropdown list and a button. The function populates the dropdown lists
with values and outputs the result.
Input parameters: No Input Parameter(s)
Output: Python Widgets
"""
#Create lookup dataframe:
df_lookup = create_lookup_df_atc_L2()
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers = df_lookup.variable.unique().tolist()
#reverse list order:
tracers.reverse()
#Create widgets:
station = Dropdown(options = create_station_labels(df_lookup))
tracer = SelectMultiple(options = tracers,
value = [tracers[0]],
disabled=False)
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer, station and color:
def update_plot_func(Tracer, Station, Citation):
#Get a list of Level-2 data object URLs, tracers, station-IDs and station sampling heights
#that refer to the selected station and tracer/s:
selection_dobj_url_list = [[df_lookup.dobj.loc[(df_lookup.stationId==Station[0]) &
(df_lookup.height==Station[1]) &
(df_lookup.variable==tracer)].values,
tracer.replace(' mixing ratio (dry mole fraction)', '').lower(),
Station[0],
Station[1]]
for tracer in Tracer
if len(df_lookup.dobj.loc[(df_lookup.stationId==Station[0]) &
(df_lookup.height==Station[1]) &
(df_lookup.variable==tracer)].values)>0]
#If Level-2 data are available for the selected tracer and station:
if(len(selection_dobj_url_list)>0):
#Get a list of lists, where every sublist contains
#a data-object-id, a tracer-string (e.g. 'ch4'), a station-ID string (e.g. 'GAT')
#and the station sampling height string (e.g. '30.0'):
selection_list = [[selection_dobj_url_list[item][0][0].replace('https://meta.icos-cp.eu/objects/', ''),
selection_dobj_url_list[item][1],
selection_dobj_url_list[item][2],
selection_dobj_url_list[item][3]]
for item in range(len(selection_dobj_url_list))]
#Call function to return plot for the selected station:
update_exploring_multiple_tracers_binary(selection_list)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-2 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj[0][0]), output_format='pandas').run().cit.iloc[0]
for dobj in selection_dobj_url_list]
#Print citation title:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L2:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If no L2-data is available for the selected tracer and station:
else:
print("\033[0;31;1m "+'No Level-2 data available for the selected tracer and/or station/s.'+"\033[0;31;0m\n\n")
#Create function that contains a box of widgets:
interact_c = interact_manual(update_plot_func,
Tracer=tracer,
Station=station,
Citation=Checkbox(value=True, description='Citation', disabled=False))
#Set the font of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '420px'
interact_c.widget.children[0].layout.height = '60px'
interact_c.widget.children[1].layout.width = '420px'
interact_c.widget.children[2].layout.width = '420px'
interact_c.widget.children[3].description = 'Update Plot'
interact_c.widget.children[3].button_style = 'danger'
interact_c.widget.children[3].style.button_color = '#3973ac'
#interact_c.widget.children[3].layout.width = '300px'
interact_c.widget.children[3].layout.margin = '10px 10px 20px 180px' # top/right/bottom/left
# -
def create_widgets_exploring_multiple_stations():
"""
"""
#Create lookup dataframe:
df_lookup = create_lookup_df_atc_L2()
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers = df_lookup.variable.unique().tolist()
#reverse list order:
tracers.reverse()
stations = create_station_labels(df_lookup)
#Create widgets:
tracer = Dropdown(options = tracers)
station = SelectMultiple(options = stations, disabled=False)
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer, station and color:
def update_plot_func(Tracer, Station, Citation):
#Get tracer (e.g. 'co2'):
tracer_low_case = Tracer.replace(' mixing ratio (dry mole fraction)', '').lower()
#Get a list of sublists, where every sublist contains the following:
#1. ICOS Level-2 data object URL
#2. Tracer/Gas (e.g. 'co2')
#3. ICOS Station ID (3-character code)
#4. ICOS Station Sampling Height
#that refer to the selected station(s) and tracer:
selection_dobj_url_list = [[df_lookup.dobj.loc[(df_lookup.stationId==station[0]) &
(df_lookup.height==station[1]) &
(df_lookup.variable==Tracer)].values,
tracer_low_case,
station[0],
station[1]]
for station in Station
if len(df_lookup.dobj.loc[(df_lookup.stationId==station[0]) &
(df_lookup.height==station[1]) &
(df_lookup.variable==Tracer)].values)>0]
#If Level-2 data are available for the selected tracer and station(s):
if(len(selection_dobj_url_list)>0):
#Get a list of lists, where every sublist contains a data-object-ID, the selected tracer,
#the station code and the station sampling height.
#E.g. ['U4VYazHdmZwzr7DxUowMtUu-', 'co2', 'GAT', '30.0']:
selection_list = [[selection_dobj_url_list[i][0][j].replace('https://meta.icos-cp.eu/objects/', ''),
selection_dobj_url_list[i][1],
selection_dobj_url_list[i][2],
selection_dobj_url_list[i][3]]
for i in range(len(selection_dobj_url_list))
for j in range(len(selection_dobj_url_list[i][0]))]
####
#ICOS Atmospheric Level-2 Data for a given station, a given tracer and
#a given sampling height can be stored in two different files in cases
#where the measuring instrument has changed.
#The following code controls for such occurrences and merges the data
#if necessary.
####
#Get a list of tuples (e.g. ('co2', 'HPB', '50.0')) with unique occurrences of
#"tracer" - "station code" - "station sampling height" triplets:
station_unique_ls = list(set([(item[1], item[2], item[3]) for item in selection_list]))
#Group selection_list items referring to tracer-data from the same station and
#sampling height to lists:
station_dobj_ls = [[item for item in selection_list
if((item[1]==station_id[0]) & (item[2]==station_id[1]) & (item[3]==station_id[2]))]
for station_id in station_unique_ls]
#Get plot displaying tracer-values for the selected station/s:
update_exploring_multiple_stations_binary(station_dobj_ls)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-2 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj[0][0]), output_format='pandas').run().cit.iloc[0]
for dobj in selection_dobj_url_list]
#Print citation title:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L2:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If no L2-data are available for the selected tracer and/or station(s):
else:
print("\033[0;31;1m "+'No Level-2 data available for the selected tracer and/or station/s.'+"\033[0;31;0m\n\n")
#Create an interactive box of widgets that calls update_plot_func on button press:
interact_c = interact_manual(update_plot_func,
Tracer=tracer,
Station=station,
Citation=Checkbox(value=True, description='Citation', disabled=False))
#Set the layout of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '420px'
interact_c.widget.children[0].layout.margin = '40px 2px 2px 2px'
interact_c.widget.children[1].layout.width = '420px'
interact_c.widget.children[1].layout.height = '120px'
interact_c.widget.children[2].layout.width = '420px'
interact_c.widget.children[3].description = 'Update Plot'
interact_c.widget.children[3].button_style = 'danger'
interact_c.widget.children[3].style.button_color = '#3973ac'
interact_c.widget.children[3].layout.margin = '10px 10px 40px 180px' # top/right/bottom/left
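The merge logic above (collect the unique triplets with `set`, then group the matching records) can be sketched in isolation. The record values are invented; the record structure `[data-object ID, tracer, station ID, sampling height]` follows `selection_list` above:

```python
#Each record: [data-object ID, tracer, station ID, sampling height]
selection_list = [
    ['objA', 'co2', 'HPB', '50.0'],
    ['objB', 'co2', 'HPB', '50.0'],   #second file for the same series
    ['objC', 'co2', 'GAT', '30.0'],
]

#Unique (tracer, station, height) triplets; set() removes duplicates but
#does not preserve order:
station_unique_ls = list(set((item[1], item[2], item[3]) for item in selection_list))

#Group records belonging to the same triplet, so that files split by an
#instrument change can later be merged into one time series:
station_dobj_ls = [[item for item in selection_list
                    if (item[1], item[2], item[3]) == triplet]
                   for triplet in station_unique_ls]
```

Here this produces two groups: one with `objA` and `objB`, one with `objC` alone (group order is arbitrary because of `set()`).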
def create_widgets_focusing():
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates a set of widgets; a station dropdown list,
a tracer dropdown list, a color-picker and a button. The function
populates the dropdown lists with values and outputs the result.
Input parameters: No Input Parameter(s)
Output: Python Widgets
"""
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
#Create lookup dataframe:
df_lookup = create_lookup_df_atc_L2()
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers = df_lookup.variable.unique().tolist()
#reverse list order:
tracers.reverse()
#Create widgets:
tracer = Dropdown(options = tracers)
station = Dropdown(options = create_station_labels(df_lookup))
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer, station and color:
def update_plot_func(Tracer, Station, color, Citation):
#Get tracer short:
tracer = Tracer.replace(' mixing ratio (dry mole fraction)', '').lower()
#Get a list of data object URLs that refer to the selected station and tracer:
data_obj_url_ls = df_lookup.dobj.loc[(df_lookup.stationId==Station[0]) &
(df_lookup.height==Station[1]) &
(df_lookup.variable==Tracer)].values
#If L2-data is available for the selected tracer and station:
if(data_obj_url_ls.size>0):
#Get a list of data object IDs (L2-data):
data_obj_id_ls = [data_obj_url_ls[i].replace('https://meta.icos-cp.eu/objects/', '')
for i in range(data_obj_url_ls.size)]
#Call function to return plot and map for the selected station:
update_focus_plot_binary(data_obj_id_ls, Station, tracer, color)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-2 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj), output_format='pandas').run().cit.iloc[0]
for dobj in data_obj_url_ls]
#Print citation title:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L2:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If no L2-data are available for the selected tracer and station:
else:
print("\033[0;31;1m "+'No '+tracer.upper().translate(SUB)+' Level-2 data available for the selected station'+"\033[0;31;0m\n\n")
#Create an interactive box of widgets that calls update_plot_func on button press:
interact_c = interact_manual(update_plot_func,
Tracer=tracer,
Station=station,
color=ColorPicker(concise=False,
description='Pick a color',
value='#3973ac',
disabled=False),
Citation=Checkbox(value=True, description='Citation', disabled=False))
#Set the layout of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '420px'
interact_c.widget.children[0].layout.margin = '40px 2px 2px 2px'
interact_c.widget.children[1].layout.width = '420px'
interact_c.widget.children[2].layout.width = '420px'
interact_c.widget.children[3].layout.width = '420px'
interact_c.widget.children[4].description = 'Update Plot'
interact_c.widget.children[4].button_style = 'danger'
interact_c.widget.children[4].style.button_color = '#3973ac'
interact_c.widget.children[4].layout.margin = '10px 10px 40px 180px' # top/right/bottom/left
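The `SUB` translation table used in the error messages maps ASCII digits to Unicode subscript digits, so a tracer name like `co2` renders as a chemical formula:

```python
#Translation table from ASCII digits to Unicode subscript digits:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")

tracer = 'co2'
print(tracer.upper().translate(SUB))  # -> CO₂
```

`str.translate` replaces each character found in the table and leaves all others untouched, so letters pass through unchanged.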
def create_widgets_basic_stat():
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates a set of widgets; a station multiselection dropdown list, a
tracer dropdown list and a button. The function populates the dropdown lists with
values and outputs the result.
Input parameters: No Input Parameter(s)
Output: Python Widgets
"""
#Create lookup dataframe:
df_lookup = create_lookup_df_atc_L2()
#Convert data type of date-columns from string to datetime:
df_lookup.timeStart = [datetime.strptime(df_lookup.timeStart.iloc[i],'%Y-%m-%dT%H:%M:%SZ')
for i in range(len(df_lookup))]
df_lookup.timeEnd = [datetime.strptime(df_lookup.timeEnd.iloc[i],'%Y-%m-%dT%H:%M:%SZ')
for i in range(len(df_lookup))]
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers = df_lookup.variable.unique().tolist()
#reverse list order:
tracers.reverse()
#Create widgets:
tracer = Dropdown(options = tracers)
station = SelectMultiple(options = create_station_labels(df_lookup), disabled=False)
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer and station/s:
def update_plot_func(Tracer, Station, start_date, end_date):
#Check user input:
if(check_input_icos_L2_basic_stat(Station, start_date, end_date)):
#Get tracer (e.g. 'co2'):
tracer_low_case = Tracer.replace(' mixing ratio (dry mole fraction)', '').lower()
#Get a list of sublists, where every sublist contains the following:
#1. ICOS Level-2 data object URL
#2. ICOS Station ID (3-character code)
#3. ICOS Station Sampling Height
#4. Tracer/Gas (e.g. 'co2')
#that refer to the selected station(s) and tracer:
selection_dobj_url_list = [[df_lookup.dobj.loc[(df_lookup.stationId==station[0]) &
(df_lookup.height==station[1]) &
(df_lookup.variable==Tracer) &
(((df_lookup.timeStart<=pd.Timestamp(start_date))&
(df_lookup.timeEnd>=pd.Timestamp(start_date)))|
((df_lookup.timeStart<=pd.Timestamp(end_date))&
(df_lookup.timeEnd>=pd.Timestamp(end_date)))|
((df_lookup.timeStart>=pd.Timestamp(start_date))&
(df_lookup.timeEnd<=pd.Timestamp(end_date))))].values,
station[0],
station[1],
tracer_low_case]
for station in Station]
#Get a list of items, where every item represents the size of the url-array for every selected station:
check_url_ls_size = [selection_dobj_url_list[m][0].size for m in range(len(selection_dobj_url_list))]
#If Level-2 data are available for the selected tracer, station(s) and time period:
if(sum(check_url_ls_size)>0):
#Get a list of lists, where every sublist contains a data-object-ID, the selected tracer,
#the station code & the sampling height.
#E.g. ['U4VYazHdmZwzr7DxUowMtUu-', 'co2', 'GAT', '30.0']:
selection_list = [[selection_dobj_url_list[i][0][j].replace('https://meta.icos-cp.eu/objects/', ''),
selection_dobj_url_list[i][3],
selection_dobj_url_list[i][1],
selection_dobj_url_list[i][2]]
for i in range(len(selection_dobj_url_list))
for j in range(len(selection_dobj_url_list[i][0]))]
#Get a list of tuples (e.g. ('HPB', '50.0', 'co2')) with unique occurrences of
#"station code" - "station sampling height" - "tracer" triplets:
station_unique_ls = list(set([(item[2], item[3], item[1]) for item in selection_list]))
#Group selection_list items referring to tracer-data from the same station and sampling height to lists:
station_dobj_ls = [[item for item in selection_list
if((item[1]==station_id[2]) & (item[2]==station_id[0]) & (item[3]==station_id[1]))]
for station_id in station_unique_ls]
#Call function to calculate and return basic statistics dataframe for the selected stations:
return update_basic_statistics_binary(station_dobj_ls, start_date, end_date)
#If no data are available for the selected tracer, station(s) and time period:
else:
print("\033[0;31;1m No ICOS Atmospheric Level-2 data available.\033[0;31;0m\n\n")
#Create an interactive box of widgets that calls update_plot_func on button press:
interact_c = interact_manual(update_plot_func,
Tracer=tracer,
Station=station,
start_date=DatePicker(description='Starting Date',disabled=False),
end_date=DatePicker(description='Ending Date',disabled=False))
#Set the layout of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '430px'
interact_c.widget.children[0].layout.margin = '40px 2px 2px 2px'
interact_c.widget.children[1].layout.width = '430px'
interact_c.widget.children[1].layout.height = '120px'
interact_c.widget.children[2].layout.width = '430px'
interact_c.widget.children[3].layout.width = '430px'
interact_c.widget.children[4].description = 'Update Table'
interact_c.widget.children[4].button_style = 'danger'
interact_c.widget.children[4].style.button_color = '#3973ac'
interact_c.widget.children[4].layout.margin = '10px 10px 40px 180px' # top/right/bottom/left
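The three OR-ed clauses in the time filter above select data objects whose coverage period overlaps the requested window: the object either covers the start date, covers the end date, or lies entirely inside the window. This is equivalent to the standard two-comparison interval-overlap test; a sketch with plain datetimes (the dates are made up):

```python
from datetime import datetime

def overlaps_three_clause(ts, te, sd, ed):
    #The condition as written in the lookup filter above,
    #for object coverage [ts, te] and requested window [sd, ed]:
    return ((ts <= sd and te >= sd) or
            (ts <= ed and te >= ed) or
            (ts >= sd and te <= ed))

def overlaps_compact(ts, te, sd, ed):
    #Equivalent compact form: the closed intervals intersect.
    return ts <= ed and te >= sd

sd, ed = datetime(2019, 1, 1), datetime(2019, 6, 30)
cases = [
    (datetime(2018, 1, 1), datetime(2019, 3, 1)),   #covers the start date
    (datetime(2019, 5, 1), datetime(2020, 1, 1)),   #covers the end date
    (datetime(2019, 2, 1), datetime(2019, 3, 1)),   #entirely inside the window
    (datetime(2020, 1, 1), datetime(2020, 6, 1)),   #no overlap
]
for ts, te in cases:
    assert overlaps_three_clause(ts, te, sd, ed) == overlaps_compact(ts, te, sd, ed)
```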
def create_widgets_correlation_multiple_tracers_multiple_stations():
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates a set of widgets; a station multiselect dropdown list, a
tracer multiselect dropdown list and a button. The function populates the dropdown
lists with values and outputs the result.
Input parameters: No Input Parameter(s)
Output: Python Widgets
"""
#Create lookup dataframe:
df_lookup = create_lookup_df_atc_L2()
#Convert data type of date-columns from string to datetime:
df_lookup.timeStart = [datetime.strptime(df_lookup.timeStart.iloc[i],'%Y-%m-%dT%H:%M:%SZ')
for i in range(len(df_lookup))]
df_lookup.timeEnd = [datetime.strptime(df_lookup.timeEnd.iloc[i],'%Y-%m-%dT%H:%M:%SZ')
for i in range(len(df_lookup))]
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers = df_lookup.variable.unique().tolist()
#reverse list order:
tracers.reverse()
#Create widgets:
tracers = SelectMultiple(options = tracers, disabled=False)
stations = SelectMultiple(options = create_station_labels(df_lookup), disabled=False)
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer, station and color:
def update_plot_func(Tracers, Stations, start_date, end_date):
#Control input parameters:
if(check_input_icos_L2_correlation(Tracers, Stations, start_date, end_date)):
#Get a list of sublists, where every sublist contains the following:
#1. ICOS Level-2 data object URL
#2. ICOS Station ID (3-character code)
#3. ICOS Station Sampling Height
#4. Tracer/Gas (e.g. 'co2')
#that refer to the selected station(s) and tracer(s):
selection_dobj_url_list = [[df_lookup.dobj.loc[(df_lookup.stationId==station[0]) &
(df_lookup.height==station[1]) &
(df_lookup.variable==tracer) &
(((df_lookup.timeStart<=pd.Timestamp(start_date))&
(df_lookup.timeEnd>=pd.Timestamp(start_date)))|
((df_lookup.timeStart<=pd.Timestamp(end_date))&
(df_lookup.timeEnd>=pd.Timestamp(end_date)))|
((df_lookup.timeStart>=pd.Timestamp(start_date))&
(df_lookup.timeEnd<=pd.Timestamp(end_date))))].values,
tracer.replace(' mixing ratio (dry mole fraction)', '').lower(),
station[0],
station[1]]
for tracer in Tracers for station in Stations]
#Get a list of items, where every item represents the size of the url-array for every selected station:
check_url_ls_size = [selection_dobj_url_list[m][0].size for m in range(len(selection_dobj_url_list))]
#If Level-2 data are available for the selected tracer(s), station(s) and time period:
if(sum(check_url_ls_size)>0):
#Get a list of lists, where every sublist contains a data-object-ID, the selected tracer,
#the station code & the sampling height.
#E.g. ['U4VYazHdmZwzr7DxUowMtUu-', 'co2', 'GAT', '30.0']:
selection_list = [[selection_dobj_url_list[i][0][j].replace('https://meta.icos-cp.eu/objects/', ''),
selection_dobj_url_list[i][1],
selection_dobj_url_list[i][2],
selection_dobj_url_list[i][3]]
for i in range(len(selection_dobj_url_list))
for j in range(len(selection_dobj_url_list[i][0]))]
#Get a list of tuples (e.g. ('co2', 'HPB', '50.0')) with unique occurrences of
#"tracer" - "station code" - "station sampling height" triplets:
station_unique_ls = list(set([(item[1], item[2], item[3]) for item in selection_list]))
#Group selection_list items referring to tracer-data from the same station and sampling height to lists:
station_dobj_ls = [[item for item in selection_list
if((item[1]==station_id[0]) & (item[2]==station_id[1]) & (item[3]==station_id[2]))]
for station_id in station_unique_ls]
#Call function to calculate and return a correlation statistics dataframe for the selected station:
return update_corr_stat_multi_binary(station_dobj_ls, start_date, end_date)
#If no data are available for the selected tracer, station(s) and time period:
else:
print("\033[0;31;1m No ICOS Atmospheric Level-2 data available.\033[0;31;0m\n\n")
#Create an interactive box of widgets that calls update_plot_func on button press:
interact_c = interact_manual(update_plot_func,
Tracers = tracers,
Stations = stations,
start_date=DatePicker(description='Starting Date',disabled=False),
end_date=DatePicker(description='Ending Date',disabled=False))
#Set the layout of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '430px'
interact_c.widget.children[0].layout.height = '60px'
interact_c.widget.children[1].layout.width = '430px'
interact_c.widget.children[2].layout.width = '430px'
interact_c.widget.children[3].layout.width = '430px'
interact_c.widget.children[4].description = 'Update Table'
interact_c.widget.children[4].button_style = 'danger'
interact_c.widget.children[4].style.button_color = '#3973ac'
#interact_c.widget.children[4].layout.width = '300px'
interact_c.widget.children[4].layout.margin = '10px 10px 20px 180px' # top/right/bottom/left
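The metadata timestamps are converted above with `datetime.strptime` and the format string `'%Y-%m-%dT%H:%M:%SZ'`, which matches the ISO-style strings served by the metadata store. A minimal example (the timestamp is made up):

```python
from datetime import datetime

t = datetime.strptime('2019-05-07T10:30:00Z', '%Y-%m-%dT%H:%M:%SZ')
print(t.year, t.month, t.day)  # -> 2019 5 7

#For a whole column, pd.to_datetime(series, format='%Y-%m-%dT%H:%M:%SZ')
#would do the same conversion vectorized, without the list comprehension.
```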
def create_smoothing_widgets():
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates a set of widgets; a station dropdown list, a tracer dropdown
list, a 'total num of days' slider, a color-picker and a button. The function populates
the dropdown lists and slider with values and outputs the result.
Input parameters: No Input Parameter(s)
Output: Python Widgets
"""
#Import modules:
from bokeh.plotting import figure
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
#Create lookup dataframe:
df_lookup = create_lookup_df_atc_L2()
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers = df_lookup.variable.unique().tolist()
#reverse list order:
tracers.reverse()
#Create widgets:
tracer = Dropdown(options = tracers)
station = Dropdown(options = create_station_labels(df_lookup))
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer, station and color:
def update_plot_func(Tracer, Station, Days, Color, Citation):
#Get tracer (e.g. 'co2'):
tracer_low_case = Tracer.replace(' mixing ratio (dry mole fraction)', '').lower()
#Get a list of data object URLs that refer to the selected station and tracer:
data_obj_url_ls = df_lookup.dobj.loc[(df_lookup.stationId==Station[0]) &
(df_lookup.height==Station[1]) &
(df_lookup.variable==Tracer)].values
#If L2-data is available for the selected tracer and station:
if(data_obj_url_ls.size>0):
#Get a list of data object IDs (L2-data):
data_obj_id_ls = [data_obj_url_ls[i].replace('https://meta.icos-cp.eu/objects/', '')
for i in range(data_obj_url_ls.size)]
#Call function to return plot and map for the selected station:
update_smoothing_plot_binary(data_obj_id_ls, Station, tracer_low_case, Days, Color)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-2 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj), output_format='pandas').run().cit.iloc[0]
for dobj in data_obj_url_ls]
#Print citation title:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L2:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If no L2-data are available for the selected tracer and station:
else:
print('\033[0;31;1m '+ 'No '+tracer_low_case.upper().translate(SUB)+' Level-2 data available for the selected station' +'\033[0;31;0m\n\n')
#Create an interactive box of widgets that calls update_plot_func on button press:
interact_c = interact_manual(update_plot_func,
Tracer=tracer,
Station=station,
Days = (0,90),
Color=ColorPicker(concise=False,
description='Pick a color',
value='#3973ac',
disabled=False),
Citation=Checkbox(value=True, description='Citation', disabled=False))
#Set the layout of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '420px'
interact_c.widget.children[0].layout.margin = '40px 2px 2px 2px'
interact_c.widget.children[1].layout.width = '420px'
interact_c.widget.children[2].layout.width = '420px'
interact_c.widget.children[3].layout.width = '420px'
interact_c.widget.children[4].layout.width = '420px'
interact_c.widget.children[5].description = 'Update Plot'
interact_c.widget.children[5].button_style = 'danger'
interact_c.widget.children[5].style.button_color = '#3973ac'
interact_c.widget.children[5].layout.margin = '10px 10px 40px 180px' # top/right/bottom/left
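All of the callbacks derive the short data-object ID from the landing-page URL by stripping the fixed prefix with `str.replace`. A sketch, using the example PID from the comments above:

```python
url = 'https://meta.icos-cp.eu/objects/U4VYazHdmZwzr7DxUowMtUu-'

#str.replace, as used in the callbacks:
dobj_id = url.replace('https://meta.icos-cp.eu/objects/', '')

#An equivalent alternative that does not hard-code the full prefix:
dobj_id_alt = url.rsplit('/', 1)[-1]

print(dobj_id)  # -> U4VYazHdmZwzr7DxUowMtUu-
```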
def create_widgets_comparing():
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates a set of widgets; a multiselect station dropdown list, a
tracer dropdown list and a button. The function populates the dropdown lists
with values and outputs the result.
Input parameters: No Input Parameter(s)
Output: Python Widgets
"""
#Create lookup dataframe:
df_lookup = create_lookup_df_atc_L2()
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers = df_lookup.variable.unique().tolist()
#reverse list order:
tracers.reverse()
#Create widgets:
tracer = Dropdown(options = tracers)
station = SelectMultiple(options = create_station_labels(df_lookup), disabled=False)
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer, station and color:
def update_plot_func(Tracer, Station, Citation):
#Get tracer (e.g. 'co2'):
tracer_low_case = Tracer.replace(' mixing ratio (dry mole fraction)', '').lower()
#Get a list of sublists, where every sublist contains the following:
#1. ICOS Level-2 data object URL
#2. Tracer/Gas (e.g. 'co2')
#3. ICOS Station ID (3-character code)
#4. ICOS Station Sampling Height
#that refer to the selected station(s) and tracer:
selection_dobj_url_list = [[df_lookup.dobj.loc[(df_lookup.stationId==station[0]) &
(df_lookup.height==station[1]) &
(df_lookup.variable==Tracer)].values,
tracer_low_case,
station[0],
station[1]]
for station in Station
if df_lookup.dobj.loc[(df_lookup.stationId==station[0]) &
(df_lookup.height==station[1]) &
(df_lookup.variable==Tracer)].values.size>0]
#If Level-2 data are available for the selected tracer and station(s):
if(len(selection_dobj_url_list)>0):
#Get a list of lists, where every sublist contains a data-object-ID, the selected tracer,
#the station code & the sampling height.
#E.g. ['U4VYazHdmZwzr7DxUowMtUu-', 'co2', 'GAT', '30.0']:
selection_list = [[selection_dobj_url_list[i][0][j].replace('https://meta.icos-cp.eu/objects/', ''),
selection_dobj_url_list[i][1],
selection_dobj_url_list[i][2],
selection_dobj_url_list[i][3]]
for i in range(len(selection_dobj_url_list))
for j in range(len(selection_dobj_url_list[i][0]))]
####
#ICOS Atmospheric Level-2 Data for a given station, a given tracer and
#a given sampling height can be stored in two different files in cases
#where the measuring instrument has changed.
#The following code controls for such occurrences and merges the data
#if necessary.
####
#Get a list of tuples (e.g. ('co2', 'HPB', '50.0')) with unique occurrences of
#"tracer" - "station code" - "station sampling height" triplets:
station_unique_ls = list(set([(item[1], item[2], item[3]) for item in selection_list]))
#Group selection_list items referring to tracer-data from the same station and sampling height to lists:
station_dobj_ls = [[item for item in selection_list
if((item[1]==station_id[0]) & (item[2]==station_id[1]) & (item[3]==station_id[2]))]
for station_id in station_unique_ls]
#Get plot displaying tracer-values for the selected station/s:
update_comparing_binary(station_dobj_ls, len(station_unique_ls))
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-2 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj[0][0]), output_format='pandas').run().cit.iloc[0]
for dobj in selection_dobj_url_list]
#Print citation title:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L2:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If no L2-data are available for the selected tracer and/or station(s):
else:
print('\033[0;31;1m '+ 'No Level-2 data available for the selected tracer and/or station/s.' +'\033[0;31;0m\n\n')
#Create an interactive box of widgets that calls update_plot_func on button press:
interact_c = interact_manual(update_plot_func,
Tracer=tracer,
Station=station,
Citation=Checkbox(value=True, description='Citation', disabled=False))
#Set the layout of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '430px'
interact_c.widget.children[0].layout.margin = '40px 2px 2px 2px'
interact_c.widget.children[1].layout.width = '430px'
interact_c.widget.children[1].layout.height = '120px'
interact_c.widget.children[2].layout.width = '430px'
interact_c.widget.children[3].description = 'Update Plot/s'
interact_c.widget.children[3].button_style = 'danger'
interact_c.widget.children[3].style.button_color = '#3973ac'
interact_c.widget.children[3].layout.margin = '10px 10px 40px 180px' # top/right/bottom/left
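The colored status messages throughout this module rely on ANSI SGR escape sequences: `\033[1m` switches to bold, `\033[0;31;1m` to bold red, and a trailing reset sequence restores the default style. A minimal sketch:

```python
#ANSI SGR escape sequences, as used in the print statements above:
BOLD = '\033[1m'
BOLD_RED = '\033[0;31;1m'
RESET = '\033[0m'

msg = BOLD_RED + 'No Level-2 data available.' + RESET
print(msg)                            #renders in bold red on ANSI-capable terminals
print(BOLD + 'Data Citation:' + RESET)
```

Without the trailing reset, the style would bleed into all subsequent terminal output.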
def create_widgets_L1():
"""
Project: 'ICOS Carbon Portal'
Created: Tue May 07 10:30:00 2018
Last Changed: Tue May 07 10:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that creates a set of widgets; a station dropdown list, a tracer
dropdown list, a color-picker, a checkbox and a button. The function populates
the dropdown lists with values and outputs the result.
Input parameters: No Input Parameter(s)
Output: Python Widgets
"""
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
#Create lookup dataframes:
df_lookup_L1 = create_lookup_df_atc_L1() #ICOS Level 1 Atmosphere Data
df_lookup_L2 = create_lookup_df_atc_L2() #ICOS Level 2 Atmosphere Data
#Create a list including all tracers (e.g. CO2, CO, CH4)
tracers_L1 = df_lookup_L1.variable.unique().tolist()
#reverse list order:
tracers_L1.reverse()
#Create widgets:
tracer_L1 = Dropdown(options = tracers_L1)
station_L1 = Dropdown(options = create_station_labels(df_lookup_L1))
#Function that calls functions to update the plot/s and/or map,
#based on the selected tracer, station and color:
def update_plot_func(Tracer, Station, color, Level, Citation):
#Get tracer short:
tracer = Tracer.replace(' mixing ratio (dry mole fraction)', '').lower()
#If "Add Level 2 Data" is not selected and if L1-data is available for the specific station and tracer:
if(Level==False):
#Get a list of L1 data object URLs that refer to the selected station and tracer:
data_obj_url_L1_ls = df_lookup_L1.dobj.loc[(df_lookup_L1.stationId==Station[0]) &
(df_lookup_L1.height==Station[1]) &
(df_lookup_L1.variable==Tracer)].values
#If L1-data is available for the selected tracer and station:
if(data_obj_url_L1_ls.size>0):
#Get a list of data object IDs (L1-data):
data_obj_id_L1_ls = [data_obj_url_L1_ls[i].replace('https://meta.icos-cp.eu/objects/', '')
for i in range(data_obj_url_L1_ls.size)]
#Call function to return plot for the selected station (Level 1 Data):
update_icos_single_station_plot_LX_binary(data_obj_id_L1_ls, Station[0], Station[1], tracer, color, 1)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-1 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj), output_format='pandas').run().cit.iloc[0]
for dobj in data_obj_url_L1_ls]
#Print citation title:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L2:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If no L1-data is available for the selected tracer and station:
else:
print('\033[0;31;1m '+ 'No '+tracer.upper().translate(SUB)+' Level-1 data available for the selected station' +'\033[0;31;0m\n\n')
#If "Add Level 2 Data" is selected:
elif(Level==True):
#Get a list of L1 data object URLs that refer to the selected station and tracer:
data_obj_url_L1_ls = df_lookup_L1.dobj.loc[(df_lookup_L1.stationId==Station[0]) &
(df_lookup_L1.height==Station[1]) &
(df_lookup_L1.variable==Tracer)].values
#Get a list of L2 data object URLs that refer to the selected station and tracer:
data_obj_url_L2_ls = df_lookup_L2.dobj.loc[(df_lookup_L2.stationId==Station[0]) &
(df_lookup_L2.height==Station[1]) &
(df_lookup_L2.variable==Tracer)].values
#If Level-1 & Level-2 data are available for the selected tracer and station:
if((data_obj_url_L1_ls.size>0) & (data_obj_url_L2_ls.size>0)):
#Get a list of data object IDs (L1-data):
data_obj_id_L1_ls = [data_obj_url_L1_ls[j].replace('https://meta.icos-cp.eu/objects/', '')
for j in range(data_obj_url_L1_ls.size)]
#Get a list of data object IDs (L2-data):
data_obj_id_L2_ls = [data_obj_url_L2_ls[k].replace('https://meta.icos-cp.eu/objects/', '')
for k in range(data_obj_url_L2_ls.size)]
#Call function to return plot for the selected station (Level 1 & 2 Data):
update_icos_single_station_plot_L1_L2_binary(data_obj_id_L1_ls, data_obj_id_L2_ls, Station[0], Station[1], tracer, color)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-1 data object:
cit_ls_L1 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj), output_format='pandas').run().cit.iloc[0]
for dobj in data_obj_url_L1_ls]
#Get a list with citation info for every ICOS Level-2 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj), output_format='pandas').run().cit.iloc[0]
for dobj in data_obj_url_L2_ls]
#Concatenate citation lists to one list:
cit_ls = cit_ls_L1 + cit_ls_L2
#Print citation:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If only Level-1 data are available for the selected tracer and station:
elif((data_obj_url_L1_ls.size>0) & (data_obj_url_L2_ls.size<1)):
#Print message:
print('\033[0;31;1m '+ 'No Level-2 data available yet ...' +'\033[0;31;0m\n\n')
#Get a list of data object IDs (L1-data):
data_obj_id_L1_ls = [data_obj_url_L1_ls[j].replace('https://meta.icos-cp.eu/objects/', '')
for j in range(data_obj_url_L1_ls.size)]
#Call function to return plot for the selected station (Level 1):
update_icos_single_station_plot_LX_binary(data_obj_id_L1_ls, Station[0], Station[1], tracer, color, 1)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-1 data object:
cit_ls_L1 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj), output_format='pandas').run().cit.iloc[0]
for dobj in data_obj_url_L1_ls]
#Print citation:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L1:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If only Level-2 data are available for the selected tracer and station:
elif((data_obj_url_L1_ls.size<1) & (data_obj_url_L2_ls.size>0)):
#Print message:
print('\033[0;31;1m '+ 'No Level-1 data available ...' +'\033[0;31;0m\n\n')
#Get a list of data object IDs (L2-data):
data_obj_id_L2_ls = [data_obj_url_L2_ls[j].replace('https://meta.icos-cp.eu/objects/', '')
for j in range(data_obj_url_L2_ls.size)]
#Call function to return plot for the selected station (Level-2 Data):
update_icos_single_station_plot_LX_binary(data_obj_id_L2_ls, Station[0], Station[1], tracer, color, 2)
#If the "citation" checkbox is checked:
if(Citation):
#Get a list with citation info for every ICOS Level-2 data object:
cit_ls_L2 = [RunSparql(sparql_query=sparqls.get_icos_citation(dobj), output_format='pandas').run().cit.iloc[0]
for dobj in data_obj_url_L2_ls]
#Print citation:
print('\n\n\033[1m' + 'Data Citation:' + '\033[0m')
#Loop through all citations:
for cit in cit_ls_L2:
#Print data object citation:
printmd("<sub>"+cit+"</sub>")
#If no L1-data or L2-data are available for the selected tracer and station:
else:
print('\033[0;31;1m '+ 'No Level-1 or Level-2 data available for the selected tracer and station at present.\nTry a new combination!' +'\033[0;31;0m\n\n')
#Create an interactive box of widgets that calls update_plot_func on button press:
interact_c = interact_manual(update_plot_func,
Tracer=tracer_L1,
Station=station_L1,
color=ColorPicker(concise=False,description='Pick a color',value='#3973ac',
disabled=False),
Level=Checkbox(value=False, description='Add Level 2 Data', disabled=False),
Citation=Checkbox(value=True, description='Citation', disabled=False))
#Set the layout of the widgets included in interact_manual:
interact_c.widget.children[0].layout.width = '430px'
interact_c.widget.children[0].layout.margin = '40px 2px 2px 2px'
interact_c.widget.children[1].layout.width = '430px'
interact_c.widget.children[2].layout.width = '430px'
interact_c.widget.children[3].layout.width = '430px'
interact_c.widget.children[3].layout.margin = '12px 2px 2px 2px'
interact_c.widget.children[4].layout.width = '430px'
interact_c.widget.children[5].description = 'Update Plot'
interact_c.widget.children[5].button_style = 'danger'
interact_c.widget.children[5].style.button_color = '#3973ac'
interact_c.widget.children[5].layout.margin = '10px 10px 40px 180px' # top/right/bottom/left
# <a id='control_input'></a>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
# <br>
# <br>
#
# ## 9. Control input
# This part includes functions that validate the user input to the statistics functions.
# +
def check_input_icos_L2_basic_stat(Station, start_date, end_date):
"""
Project: 'ICOS Carbon Portal'
Created: Fri May 17 14:30:00 2019
Last Changed: Fri May 17 14:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that checks the user input to widgets and returns a boolean for
invalid or empty input.
Input parameters: 1. List of sublists, where every sublist contains the Station ID & Sampling Height
(var_name: 'Station', var_type: List)
2. Start date of time period
(var_name: 'start_date', var_type: DateTime Object)
3. End date of time period
(var_name: 'end_date', var_type: DateTime Object)
Output: Boolean (True for valid input, False otherwise)
"""
#Check if NO station has been selected:
if(len(Station)<1):
#Print message:
print(("\033[0;31;1m Select station! \033[0;31;0m\n\n"))
#Return boolean:
return False
#Check if a station has been selected:
else:
#Check if a start-date and/or end-date have been selected:
if((start_date==None)|(end_date==None)):
#Print message:
print(("\033[0;31;1m Select a start date and/or end date! \033[0;31;0m\n\n"))
#Return boolean:
return False
#If a start-date and an end-date have been selected:
else:
#Compute the difference between end_date and start_date:
diff = end_date - start_date
#Check if end-date refers to an earlier date than start-date:
if(diff.days<0):
#Print message:
print('\033[0;31;1m Error...\n The selected start-date corresponds to a later date than the selected end-date.\n Enter new dates!\n\n')
#Return boolean:
return False
#If the selected start-date & end-date are valid:
else:
#Return boolean:
return True
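# The core of the date check above is the sign of `(end_date - start_date).days`. A minimal
# standalone sketch of that rule (hypothetical helper name `dates_valid`):

```python
from datetime import date

def dates_valid(start_date, end_date):
    # Both dates must be set, and the end date must not precede the start date.
    if start_date is None or end_date is None:
        return False
    return (end_date - start_date).days >= 0

print(dates_valid(date(2019, 1, 1), date(2019, 5, 17)))  # True
print(dates_valid(date(2019, 5, 17), date(2019, 1, 1)))  # False
```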
# +
def check_input_icos_L2_correlation(Tracer, Station, start_date, end_date):
"""
Project: 'ICOS Carbon Portal'
Created: Fri May 17 14:30:00 2019
Last Changed: Fri May 17 14:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that checks the user input to widgets and returns a boolean for
invalid or empty input.
Input parameters: 1. List with Tracer(s)/gas(es) (Long Text)
(var_name: 'Tracer', var_type: List)
2. List of sublists, where every sublist contains the Station ID & Sampling Height
(var_name: 'Station', var_type: List)
3. Start date of time period
(var_name: 'start_date', var_type: DateTime Object)
4. End date of time period
(var_name: 'end_date', var_type: DateTime Object)
Output: Boolean (True for valid input, False otherwise)
"""
#Check if NO tracer has been selected:
if(len(Tracer)<1):
#Print message:
print(("\033[0;31;1m Select tracer! \033[0;31;0m\n\n"))
#Return boolean:
return False
#Check if a tracer has been selected:
else:
#Check if NO station has been selected:
if(len(Station)<1):
#Print message:
print(("\033[0;31;1m Select station! \033[0;31;0m\n\n"))
#Return boolean:
return False
#Check if a station has been selected:
else:
#Check if a start-date and/or end-date have been selected:
if((start_date==None)|(end_date==None)):
#Print message:
print(("\033[0;31;1m Select a start date and/or end date! \033[0;31;0m\n\n"))
#Return boolean:
return False
#If a start-date and an end-date have been selected:
else:
#Compute the difference between end_date and start_date:
diff = end_date - start_date
#Check if end-date refers to an earlier date than start-date:
if(diff.days<0):
#Print message:
print('\033[0;31;1m Error...\n The selected start-date corresponds to a later date than the selected end-date.\n Enter new dates!\n\n')
#Return boolean:
return False
#If the selected start-date & end-date are valid:
else:
#Return boolean:
return True
# -
# <a id='statistics'></a>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
# <br>
# <br>
# <br>
#
# ## 10. Statistics
# This part includes functions that calculate statistics for ICOS Level 1 and Level 2 CO, CO$_2$ and CH$_4$ data products.
def calculate_basic_statistics_binary(station_df_ls, tracer):
"""
Project: 'ICOS Carbon Portal'
Created: Wed May 15 14:30:00 2019
Last Changed: Wed May 15 14:30:00 2019
Version: 1.0.0
Author(s): Karolina
Description: Function that loops through a list of sublists, where every sublist contains
the data-dataframe, a dictionary with station info and a dictionary with
tracer info for a specific station, and computes the min, max, mean and
standard deviation of the tracer-column in the data-dataframe of every
station. The statistical values are stored in a separate dataframe. One
dataframe is produced for every station. In cases where more than one
station has been selected, all separate dataframes with basic statistics
results per station are concatenated into one dataframe that is then
returned as output.
Input parameters: 1. List with sublists of data-dataframes for every selected station
(var_name: 'station_df_ls', var_type: List)
2. Tracer/gas, e.g. 'co2'
(var_name: 'tracer', var_type: String)
Output: Pandas DataFrame
Columns:
1. The earliest date in the dataset
(var_name: "start_date", var_type: NumPy DateTime64)
2. The latest date in the dataset
(var_name: "end_date", var_type: NumPy DateTime64)
3. The minimum tracer value in the dataset
(var_name: 'min', var_type: float)
4. The maximum tracer value in the dataset
(var_name: 'max', var_type: float)
5. The average tracer value in the dataset
(var_name: 'mean', var_type: float)
6. The standard deviation of the tracer values in the dataset
(var_name: 'st_dev', var_type: float)
"""
#Dictionary for subscript transformations of numbers:
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")
#Create list to store the metadata and data dataframes for every station:
df_list = []
#Loop through every station's metadata- & data- dataframe:
for df in station_df_ls:
#Get time-period and statistics:
stat_df = pd.DataFrame({'start_date': df[0].index.values.min(),
'end_date': df[0].index.values.max(),
'min': round(pd.to_numeric(df[0][tracer],errors='coerce').min(),2),
'max': round(pd.to_numeric(df[0][tracer],errors='coerce').max(),2),
'mean': round(pd.to_numeric(df[0][tracer],errors='coerce').mean(),2),
'st dev': round(pd.to_numeric(df[0][tracer],errors='coerce').std(),2)},
index=[df[1]['station_name']+', '+
df[1]['station_sampling_height']+
' ('+tracer.upper().translate(SUB)+')'])
#Add dataframe to list:
df_list.append(stat_df)
#Concatenate dataframes to one dataframe:
basic_statistics_df = pd.concat(df_list)
#Sort dataframe index:
basic_statistics_df.sort_index(inplace=True)
#Return dataframe:
return basic_statistics_df
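# The `SUB` translation table used above maps ASCII digits to their Unicode subscript
# equivalents, so tracer names like "CO2" render as "CO₂" in the dataframe index.
# Illustrated in isolation:

```python
# Map ASCII digits to Unicode subscript digits.
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")

print("co2".upper().translate(SUB))  # CO₂
print("ch4".upper().translate(SUB))  # CH₄
```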
# <br>
# <br>
# <br>
# <br>
# <br>
# <div style="text-align: right">
# <a href="#intro">Back to top</a>
# </div>
# <br>
# <br>
| icos_jupyter_notebooks/as_stat_tools/icos_as_stat_tools.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Naive Bayes with PySpark
# This notebook creates and evaluates a Naive Bayes classifier with PySpark
# ## Imports
# +
from os import environ
# Set SPARK_HOME
environ["SPARK_HOME"] = "/home/students/spark-2.2.0"
import findspark
findspark.init()
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.ml.classification import NaiveBayes
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# -
# ## Get Some Context
# Create a SparkContext and a SQLContext context to use
sc = SparkContext(appName="Naive Bayes Classification with Spark")
sqlContext = SQLContext(sc)
# ## Load and Prepare the Data
DATA_FILE = "/home/students/data/mllib/sample_libsvm_data.txt"
# Load the training data
data = sqlContext.read.format("libsvm").load(DATA_FILE)
data.show(5)
# View a single row
data.take(1)
# ## Fit a Naive Bayes Model
# Split the data into train and test sets
splits = data.randomSplit([0.6, 0.4], 1234)
train = splits[0]
test = splits[1]
# Create an instance of a NaiveBayes model
nb = NaiveBayes(smoothing=1.0, modelType="multinomial")
# Train the model
nb_model = nb.fit(train)
nb_model.pi # log prior probabilities of each class
# ## Create Predictions
# Create predictions from the test set
predictions = nb_model.transform(test)
predictions.show(5)
# ## Model Evaluation
# ### MulticlassClassificationEvaluator
#
# The MulticlassClassificationEvaluator expects two input columns: prediction and label.
#
# Available metrics:
# * f1: a measure of a test's accuracy considering both [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall). Best value is 1.
# * precision: the fraction of retrieved documents that are relevant to the query
# * recall: the fraction of the relevant documents that are successfully retrieved
# * weightedPrecision
# * weightedRecall
# * accuracy: the fraction of correct predictions
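# The f1 score listed above is the harmonic mean of precision and recall; a quick sanity
# check of that relationship in plain Python, independent of Spark:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(f1_score(1.0, 1.0))  # 1.0
print(f1_score(0.5, 0.5))  # 0.5
```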
# +
# Use the MulticlassClassificationEvaluator to compute accuracy on the test set
metrics = ['f1','weightedPrecision','weightedRecall','accuracy']
measurements = dict()
for metric in metrics:
metric_eval = MulticlassClassificationEvaluator(labelCol="label",
predictionCol="prediction",
metricName=metric).evaluate(predictions)
measurements[metric] = metric_eval
for key, value in measurements.items():
print("{}: {}".format(key, value))
# -
# ## Shut it Down
sc.stop()
| code/day_6/9 - Naive Bayes with PySpark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os.path as osp
import torch
from tqdm import tqdm
import torch.nn.functional as F
import matplotlib.pyplot as plt
from torch_geometric.data import Data
from torch_geometric.nn import GNNExplainer, GCNConv
from torch_geometric.utils import k_hop_subgraph, from_networkx
import pickle
import networkx as nx
from math import floor
from tqdm import tqdm
import seaborn as sns
from scipy.sparse import coo_matrix,csr_matrix
import sys
sys.path.append("..")
from IPython.display import set_matplotlib_formats
# %matplotlib inline
set_matplotlib_formats('svg')
prefix = '/gpfs_home/spate116/singhlab/GCN_Integration/scripts/BI/pyro_model/synthetic/'
G = nx.read_gpickle( prefix + 'data/syn3_G.pickle')
with open(prefix + 'data/syn3_lab.pickle', 'rb') as f:
labels = pickle.load(f)
x = torch.tensor([x[1]['feat'] for x in G.nodes(data=True)])
edge_index = torch.tensor([x for x in G.edges])
edge_index_flipped = edge_index[:, [1, 0]]
edge_index = torch.cat((edge_index, edge_index_flipped))
y = torch.tensor(labels, dtype=torch.long)
data = Data(x=x, edge_index=edge_index.T, y=y)
class Net(torch.nn.Module):
def __init__(self, x=64):
super(Net, self).__init__()
self.conv1 = GCNConv(10, x)
self.conv2 = GCNConv(x, x)
self.conv3 = GCNConv(x, x)
self.fc = torch.nn.Linear(x, max(y).tolist()+1)
def forward(self, x, edge_index):
x = F.leaky_relu(self.conv1(x, edge_index))
x = F.leaky_relu(self.conv2(x, edge_index))
x = F.leaky_relu(self.conv3(x, edge_index))
return self.fc(x)
# Load everything onto the gpu if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
data = data.to(device)
x, edge_index = data.x, data.edge_index
model = Net(x=64).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
best_loss = 100
pbar = tqdm(range(10000))
for epoch in pbar:
# Training step
model.train()
optimizer.zero_grad()
log_logits = model(x, edge_index)
loss = F.cross_entropy(log_logits, data.y)
loss.backward()
optimizer.step()
# Testing step
model.eval()
best_loss = loss if loss < best_loss else best_loss
pbar.set_description("Acc -> %.4f" % torch.mean((torch.argmax(log_logits, dim=1) == data.y).float()).item())
# -
explainer = GNNExplainer(model, epochs=1000)
node_idx = 549
node_feat_mask, edge_mask = explainer.explain_node(node_idx, x, edge_index)
ax, G = explainer.visualize_subgraph(node_idx, edge_index, edge_mask, y=data.y)
# +
from BayesianExplainerNF import BayesianExplainer
k = 3
sharp = 1e-12
splines = 6
explainer = BayesianExplainer(model, node_idx, k, x, edge_index, sharp, splines)
avgs = explainer.train(epochs=3000, lr=5, lambd=5e-11, window=500, p = 1.1, log=True)
edge_mask = explainer.edge_mask()
ax, G = explainer.visualize_subgraph(node_idx, edge_index, edge_mask, data.y, k)
plt.show()
# -
subset, edge_index_adj, mapping, edge_mask_hard = k_hop_subgraph(
node_idx, 3, edge_index, relabel_nodes=True)
x_adj = x[subset]
edge_index_adj.shape
# +
import numpy as np
model = model.to(device)
full_cat = model(x_adj, edge_index_adj)[mapping].reshape(-1)
full_cat = full_cat.detach().cpu().numpy()
full_cat = np.exp(full_cat) / np.exp(full_cat).sum()
full_cat
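# The `np.exp(...) / np.exp(...).sum()` step above is a plain softmax. For reference, a
# numerically stable stdlib-only version (subtracting the max avoids overflow for large logits):

```python
from math import exp

def softmax(logits):
    m = max(logits)                      # shift by max for numerical stability
    exps = [exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(sum(probs))  # 1.0 (up to float rounding)
```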
# +
N = 20000
masks = 0.7 * torch.ones([
N, edge_index_adj.shape[1]
])
masks = torch.bernoulli(masks)
masks
# +
import numpy as np
from math import log
from tqdm import tqdm
masks_np = masks.cpu().numpy()
log_losses = []
for i in tqdm(range(N)):
mean = model(x_adj, edge_index_adj[:, masks[i, :] == 1])[mapping].reshape(-1).detach().cpu().numpy()
mean = np.exp(mean) / np.exp(mean).sum()
log_losses.append(-(mean[0] * log(full_cat[0]) + mean[1] * log(full_cat[1])))
# -
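# The quantity accumulated in `log_losses` is a cross-entropy between the perturbed
# prediction and the full-graph prediction, H(p, q) = -Σᵢ pᵢ log(qᵢ). In isolation:

```python
from math import log

def cross_entropy(p, q):
    # H(p, q) = -sum_i p_i * log(q_i); assumes q_i > 0 wherever p_i > 0.
    return -sum(pi * log(qi) for pi, qi in zip(p, q) if pi > 0)

print(cross_entropy([1.0, 0.0], [0.5, 0.5]))  # log(2) ≈ 0.693
```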
log_losses = np.array(log_losses)
masks_np
# +
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit(masks_np, log_losses)
# -
imp = np.abs(reg.coef_)
norm_imp = imp / imp.sum()
norm_imp
explainer.visualize_subgraph(node_idx, edge_index, norm_imp, data.y, k)
plt.show()
# +
from sklearn.ensemble import AdaBoostRegressor
reg = AdaBoostRegressor()
reg.fit(masks_np, log_losses)
# -
imp = reg.feature_importances_
norm_imp = imp / imp.sum()
norm_imp
explainer.visualize_subgraph(node_idx, edge_index, norm_imp, data.y, k)
plt.show()
| scripts/BI/pyro_model/lime_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.11 ('117')
# language: python
# name: python3
# ---
# +
import re
import os
import sys
import json
import jieba
sys.path.append("../../")
from utils.io import read_csv, write_to, load_json
# +
def strQ2B(ustring):
"""Convert full-width (quanjiao) characters to half-width (banjiao)."""
rstring = ""
for uchar in ustring:
inside_code=ord(uchar)
if inside_code == 12288: # full-width space converts directly to ASCII space
inside_code = 32
elif (inside_code >= 65281 and inside_code <= 65374): # full-width characters (except space) shift by a fixed offset
inside_code -= 65248
rstring += chr(inside_code)
return rstring
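# The code-point arithmetic in `strQ2B` relies on the full-width ASCII variants occupying a
# contiguous Unicode block (U+FF01–U+FF5E) offset by 0xFEE0 from their half-width
# counterparts, with the ideographic space (U+3000) as the one special case. A
# self-contained mirror of the same logic:

```python
def to_halfwidth(s):
    # Mirror of strQ2B: shift full-width ASCII variants down by 0xFEE0 (65248).
    out = []
    for ch in s:
        code = ord(ch)
        if code == 0x3000:               # ideographic (full-width) space
            code = 0x20
        elif 0xFF01 <= code <= 0xFF5E:   # full-width '!', '0'-'9', 'A'-'Z', ...
            code -= 0xFEE0
        out.append(chr(code))
    return "".join(out)

fullwidth = "".join(chr(0xFEE0 + ord(c)) for c in "Hi5")
print(to_halfwidth(fullwidth))  # Hi5
```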
#
def preprocess(sentence):
s = strQ2B(sentence)
back_num = re.findall('\d+', s)
back_eng = re.findall(r'[a-zA-Z]+', s)
s = re.sub(r'[a-zA-Z]+', 'e', s)
s = re.sub('\d+', 'n', s)
return s
# -
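# `preprocess` collapses every run of Latin letters to 'e' and every run of digits to 'n',
# normalizing out-of-vocabulary tokens for the Chinese spelling-correction task. The effect,
# isolated:

```python
import re

def mask_alnum(s):
    # Collapse letter runs to 'e' and digit runs to 'n', as preprocess() does.
    s = re.sub(r'[a-zA-Z]+', 'e', s)
    return re.sub(r'\d+', 'n', s)

print(mask_alnum("room 101 ok"))  # e n e
```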
def get_sighan_from_json():
all_data = {
"train":None,
"dev":None,
"test":None,
"test14":None,
"test15":None,
}
data_dir = "../../data/rawdata/sighan/csc/"
train_file1 = os.path.join(data_dir, "train_dev.json")
train_file2 = os.path.join(data_dir, "train131415.json")
test14_file = os.path.join(data_dir, "test14.json")
test15_file = os.path.join(data_dir, "test15.json")
all_data["train"] = load_json(train_file1)
all_data["train"].extend(load_json(train_file2))
all_data["train"] = all_data["train"]
all_data["valid14"] = load_json(test14_file)
all_data["valid"] = load_json(test15_file)
#all_data["test"].extend(load_json(test15_file))
return all_data
# +
def light_preprocess(sentence):
import re
import jieba
return [ i for i in jieba.cut(re.sub("\W*", "", sentence)) if len(i) >= 1]
def json2list(data, need_preprocess):
source, target, ids = [], [], []
for element in data:
if need_preprocess:
source.append(preprocess(element["original_text"]))
target.append(preprocess(element["correct_text"]))
ids.append(element["wrong_ids"])
else:
source.append(strQ2B((element["original_text"])))
target.append(strQ2B((element["correct_text"])))
ids.append(element["wrong_ids"])
return source, target, ids
# +
data = get_sighan_from_json()
train_source, train_target, train_ids = json2list(data["train"], False)
valid14_source, valid14_target, valid14_ids = json2list(data["valid14"], False)
valid_source, valid_target, valid_ids = json2list(data["valid"], False)
# +
all_train = train_source + valid14_source + valid_source
all_target = train_target + valid14_target + valid_target
all_ids = train_ids + valid14_ids + valid_ids
# +
def get_all_cor(train_source_target_ids):
train_cor = {}
train_cor_graph = {}
from tqdm import tqdm
for source, target, ids in tqdm(train_source_target_ids):
for i in range(len(source)):
if source[i] != target[i]:
key = (source[i], target[i])
if key in train_cor:
train_cor[key] += 1
else:
train_cor[key] = 1
if source[i] in train_cor_graph:
train_cor_graph[source[i]][target[i]] = 0
else:
train_cor_graph[source[i]] = {}
return train_cor, train_cor_graph
all_cor, cor_graph = get_all_cor(zip(train_source, train_target, train_ids))
# -
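# `get_all_cor` counts (wrong, right) character pairs over aligned source/target sentences.
# The counting pattern, as a minimal standalone sketch (hypothetical helper name):

```python
def char_corrections(src, tgt):
    # Count (wrong_char, right_char) pairs between two aligned strings.
    counts = {}
    for a, b in zip(src, tgt):
        if a != b:
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

print(char_corrections("abcb", "abab"))  # {('c', 'a'): 1}
```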
test_cor, test_cor_graph = get_all_cor(zip((valid14_source + valid_source), (valid14_target+valid_target), (valid14_ids + valid_ids) ) )
# +
print(len(list(test_cor.keys())))
#print(test_cor.values())
test_cor_washed = { k:v for k, v in test_cor.items() if v > 1 }
test_cor_in_train = {}
for k, v in test_cor_washed.items():
if k in all_cor:
test_cor_in_train[k] = all_cor[k]
else:
test_cor_in_train[k] = 0
import matplotlib.pyplot as plt
num_list = list(test_cor_washed.values())
#plt.yticks( [ 0.0001, 0.01 ], color='r') # set the y-axis ticks
s = sum(num_list)
#print([ i / s for i in num_list ])
plt.plot(range(len(num_list)), [ i / sum(num_list) for i in num_list ],)# fc='r')
num_list_2 = list(test_cor_in_train.values())
#s2 = sum(num_list_2)
plt.plot(range(len(num_list_2)), [ i / sum(num_list_2) for i in num_list_2 ], )#bottom=num_list, fc='b')
plt.tick_params(axis='y',colors='red')
plt.show()
# -
# +
length = []
for key, values in cor_graph.items():
length.append(len(values))
if len(values) >= 60:
print(key, values)
length.sort(reverse=True)
print(length)
# +
valid_cor, valid_cor_graph = get_all_cor(zip(valid_source, valid_target, valid_ids))
# -
count = 0
for key in valid_cor.keys():
if key in all_cor:
count += 1
else:
print(key)
print(count / len(valid_cor.keys()))
# +
# errors that can be segmented into words (length > 2)
def count_wrongs(all_data_and_ids):
count = 0
for data, ids in all_data_and_ids:
if not ids:
continue
spans = [ i for i in jieba.cut(data) ]
pointer_ids = 0
pointer = 0
for span in spans:
pointer += len(span)
tmp = ""
if pointer >= ids[pointer_ids]:
if not span.isdigit() and len(span) != 1:
tmp = span
count += 1
else:
break
pointer_ids += 1
if pointer_ids >= len(ids):
break
return count
t = count_wrongs(zip(valid_target, valid_ids))
print(t / len(valid_ids))
# +
# count the class imbalance between train and test in the number of sentences that need correction
train_wrongs = 0
for i in train_ids:
if i:
train_wrongs += 1
print(train_wrongs / len(train_ids))
test_wrongs = 0
for j in valid_ids:
if j:
test_wrongs += 1
print(test_wrongs / len(valid_ids))
# -
t_test = count_wrongs(zip(valid_target, valid_ids))
print(t_test / len(valid_target))
# +
#let's check the dataset hit status
sys.path.append("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition")
from core import get_lattice_dataset
datasets, vocabs, embeddings = get_lattice_dataset(dataset="sighan", path_head="/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/")
# +
total_wrongs = 0
hit = 0
for i in range(len(train_target)):
wrong_char_host =[]
for j in range(len(train_target[i])):
if train_target[i][j] != train_source[i][j]:
wrong_char_host.append(train_target[i][j])
arrows = datasets['train'][i]['lattice'][(datasets['train'][i]['seq_len'] - datasets['train'][i]['lex_nums']):]
for wrong_char in wrong_char_host:
if wrong_char in arrows:
hit += 1
total_wrongs += len(wrong_char_host)
print(total_wrongs, hit, hit / total_wrongs)
# +
total_wrongs = 0
hit = 0
for i in range(len(valid_target)):
wrong_char_host =[]
for j in range(len(valid_target[i])):
if valid_target[i][j] != valid_source[i][j]:
wrong_char_host.append(valid_target[i][j])
arrows = datasets['test'][i]['lattice'][(datasets['test'][i]['seq_len'] - datasets['test'][i]['lex_nums']):]
for wrong_char in wrong_char_host:
if wrong_char in arrows:
hit += 1
total_wrongs += len(wrong_char_host)
print(total_wrongs, hit, hit / total_wrongs)
# -
"".join(datasets['train'][0]['lattice'][(datasets['train'][0]['seq_len'] - datasets['train'][0]['lex_nums']):])
# +
print(train_source[0])
print(all_target[0])
print(all_ids[0])
tmp = [ i for i in jieba.cut(all_target[0])]
print(tmp)
result = ""
for target in all_target:
tmp = [ i for i in jieba.cut(target) ]
pointer = 0
for j in tmp:
pointer += len(j)
if pointer >= all_ids[0][0]:
result = j
break
break
print(result)
# +
from tqdm import tqdm
all_char_list = []
for sentence in tqdm(all_train + all_target):
tmp = [j for j in sentence]
all_char_list += tmp
all_char_set = list(set(all_char_list))
print(len(all_char_set))
write_to("my_char_dict.txt", "\n".join(all_char_set))
# -
write_to("my_char_dict.txt", "\n".join(all_char_set))
def get_all_word(data, ids):
"""
法新社记者报导,法国驻巴蓦斯坦大使杰拉德,以及巴国官员在巴基斯坦西北部的托克哈姆边界关卡迎接十月九日与两名巴基斯坦同业一起遭塔利班逮捕的裴哈。
data : "法新社记者报导,法国驻巴基斯坦大使杰拉德,以及巴国官员在巴基斯坦西北部的托克哈姆边界关卡迎接十月九日与两名巴基斯坦同业一起遭塔利班逮捕的裴哈。"
ids: [12]
"""
if not ids:
return ""
spans = [ i for i in jieba.cut(data)]
result = []
pointer = 0
for span in spans:
pointer += len(span)
tmp = ""
span = re.sub("\W*", "", span)
if span and not span.isdigit() and len(span) != 1:
tmp = span
if tmp:
result.append(tmp)
return result
# +
all_word_dict_plus = {}
from tqdm import tqdm
for data, id in tqdm(zip(all_target, all_ids)):
for i in get_all_word(data, id):
all_word_dict_plus[i] = 0
# -
write_to("all_word_dict.txt", "\n".join([i for i in all_word_dict_plus.keys()]))
def get_magic_word(data, ids):
"""
法新社记者报导,法国驻巴蓦斯坦大使杰拉德,以及巴国官员在巴基斯坦西北部的托克哈姆边界关卡迎接十月九日与两名巴基斯坦同业一起遭塔利班逮捕的裴哈。
data : "法新社记者报导,法国驻巴基斯坦大使杰拉德,以及巴国官员在巴基斯坦西北部的托克哈姆边界关卡迎接十月九日与两名巴基斯坦同业一起遭塔利班逮捕的裴哈。"
ids: [12]
"""
#print(data, ids)
if not ids:
return ""
spans = [ i for i in jieba.cut(data) ]
result = []
pointer_ids = 0
pointer = 0
#print(spans)
for span in spans:
pointer += len(span)
tmp = ""
if pointer > ids[pointer_ids]:
span = re.sub("\W*", "", span)
if span and not span.isdigit():
tmp = span
if tmp:
result.append(tmp)
pointer_ids += 1
if pointer_ids >= len(ids):
break
return result
# +
magic_dict = []
from tqdm import tqdm
for data, id in tqdm(zip(valid_target, valid_ids)):
magic_dict += get_magic_word(data, id)
# -
for i in range(5):
print(magic_dict[i])
# +
c = 0
for i in valid_ids:
if i :
c += len(i)
print(c)
# +
print(len(magic_dict))
count_word = 0
count_ =0
for i in magic_dict:
#
count_ += 1
if len(i) > 1:
count_word += 1
print(count_, count_word)
# +
import re
import os
import sys
import json
import time
import jieba
sys.path.append("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition")
from utils.io import read_csv, write_to, load_json
from utils.trie_utils import list2confusion_trie
confusion_set = read_csv("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/confusion_set/confusion.txt")
confusion_dict = {}
for line in confusion_set:
line = line.split(":")
confusion_dict[line[0][0]] = line[-1]
all_word_list = read_csv("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/scripts/sighan/all_word_dict.txt")
def wash_n(all_word_list):
return [ re.sub("\n", "", i) for i in all_word_list]
all_word_list = wash_n(all_word_list)
all_word_dict = {i:0 for i in all_word_list}
trie = list2confusion_trie(all_word_list, confusion_dict)
def super_get(sentence):
trie.assign(sentence)
trie.my_get_lexion()
return trie.result
path = "./30wdict_utf8.txt"
dict_ = read_csv(path)
word_dict = {}
from tqdm import tqdm
for line in tqdm(dict_):
word_dict[re.sub("\W*", "", line)] = 0
# +
matched_word_dict = []
for sentence in bert_missed_source:#valid_source:
tmp = super_get(sentence)
tmp = [ i[-1] for i in tmp]
matched_word_dict += tmp
matched_word_dict = {i:0 for i in set(matched_word_dict)}
# -
bert_missed = read_csv("bert_missed.txt")
print(len(bert_missed) / 3)
bert_missed_source = []
i = 0
for s in bert_missed:
if i % 3 == 0:
bert_missed_source.append(s)
i+= 1
print(len(bert_missed_source))
need = ['小鸡', '记得', '糊涂', '订位', '怎么', '面见', '弟弟', '炒饭', '汉字', '那店', '八点钟', '这样', '迟到', '很饱', '迟到', '迟到', '安静', '一点', '妈妈', '吃饭', '时候', '希望', '简讯', '那里', '诉取', '不半工', '哪些', '上半', '上半', '轻松', '消息', '与此同时', '这种', '权利', '优家', '失业', '心情', '减少', '著迷', '家长', '唠叨', '影响', '睡觉', '拜托', '存到', '年轻', '威胁', '我们', '回覆', '墙壁', '他们', '这是', '现在', '很忙', '不怎么', '试试看', '录影机', '哪个', '哪个', '不是', '再加', '该不该', '看著', '看著', '有装', '教书', '记录']
len(need)
# +
print("List:")
count = 0
count_super = 0
missed = []
for word in need:#magic_dict:
if word in matched_word_dict:
count_super += 1
if word in all_word_dict and word in matched_word_dict:
count += 1
if len(word) != 1 and word not in matched_word_dict:
missed.append(word)
print("Missed :", len(missed))
print(count_super, count)
print("Set:")
count = 0
count_super = 0
missed = []
for word in list(set(need)):#list(set(magic_dict)):
if word in matched_word_dict:
count_super += 1
if word in all_word_dict and word in matched_word_dict:
count += 1
if len(word) != 1 and word not in matched_word_dict:
missed.append(word)
print("Missed :", len(missed))
print(count_super, count)
# -
print(len(valid_target))
print(train_source[:15], all_target[:15])
print(all_ids[:15])
print(magic_dict[:15])
print(len(magic_dict))
write_to("./magic_dict.txt", "\n".join(magic_dict))
def dict2d(word_dict):
"""
Group the words in word_dict by their length.
"""
d = {}
for word in word_dict:
key = len(word)
if key in d:
d[key].append(word)
else:
d[key] = [word]
return d
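# `dict2d` groups words by length; the same grouping can be written more compactly with
# `dict.setdefault` (equivalent behavior, shown as a separate sketch rather than a drop-in
# replacement):

```python
def group_by_len(words):
    # Bucket words by their character length.
    buckets = {}
    for w in words:
        buckets.setdefault(len(w), []).append(w)
    return buckets

print(group_by_len(["ab", "cd", "xyz"]))  # {2: ['ab', 'cd'], 3: ['xyz']}
```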
magic_dict = read_csv("./magic_dict.txt")
# +
def wash(magic_dict):
result = []
for i in magic_dict:
tmp = re.sub("\n", "", i)
if tmp:
result.append(tmp)
return result
magic_dict = wash(magic_dict)
# +
def get_better_dict(word_dict):
d = {}
for word in word_dict:
key_1 = len(word)
#small -> ignore length==2
if len(word) == 2:
continue
if key_1 in d.keys():
key_2 = word[0]
key_3 = word[-1]
if key_2 in d[key_1][0].keys():
d[key_1][0][key_2].append(word)
else:
d[key_1][0][key_2] = [word]
if key_3 in d[key_1][-1].keys():
d[key_1][-1][key_3].append(word)
else:
d[key_1][-1][key_3] = [word]
else:
d[key_1] = {0:{},-1:{} }
key_2 = word[0]
key_3 = word[-1]
d[key_1][0][key_2] = [word]
d[key_1][-1][key_3] = [word]
return d
# -
d = get_better_dict(magic_dict)
d[3]
d.keys()
def better_match(data, d):
result, map_info = [], []
index = 0
#print(data)
for word in data:
if len(word) not in d.keys():
index += len(word)
continue
length = len(word)
key_left, key_right = word[0], word[-1]
if key_left in d[length][0]:
d_left = d[length][0][word[0]]
else:
d_left = {}
if key_right in d[length][-1]:
d_right = d[length][-1][word[-1]]
else:
d_right = {}
pair = []
for query in d_right:
count = 0
for i in range(len(query)):
if count > 1:
break
elif query[i] == word[i]:
pass
else:
count += 1
if count == 1:
pair.append(query)
map_info.append("$".join(list(map(str, range(index, index+len(word))))))
for query in d_left:
count = 0
for i in range(len(query)):
if count > 1:
break
elif query[i] == word[i]:
pass
else:
count += 1
if count == 1:
if query not in pair:
pair.append(query)
map_info.append("$".join(list(map(str, range(index, index+len(word))))))
index += length
result += pair
return result, map_info
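# The inner loops of `better_match` accept a candidate when it differs from the query word in
# at most one position, i.e. a Hamming distance of at most 1 between equal-length strings.
# That test in isolation:

```python
def within_one_substitution(a, b):
    # Equal-length strings differing in at most one position.
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) <= 1

print(within_one_substitution("cat", "cut"))  # True
print(within_one_substitution("cat", "dog"))  # False
```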
def air_preprocess(sentence):
import re
import jieba
return [ i for i in jieba.cut(sentence) if len(i) >= 1]
print(valid_source[0])
better_match(air_preprocess(valid_source[0]), d)
# +
result = []
import time
start_time = time.time()
from tqdm import tqdm
for data in tqdm(train_source):
result.append(better_match(light_preprocess(data), d))
result_valid14 = []
for data in tqdm(valid14_source):
result_valid14.append(better_match(light_preprocess(data), d))
result_valid = []
for data in tqdm(valid_source):
result_valid.append(better_match(light_preprocess(data), d))
print(time.time()-start_time)
# -
result[0]
sum(map(len, result[2]))
sum(map(lambda x:sum(map(len, x)), result)) / len(result)
print(result_valid14[:5])
print(result_valid14[:5])
print(result_valid[:5])
print(result_valid[:5])
" ".join(list(map("".join, result[6][1])))
print(len(result))
# +
def app(train_source, result):
new = []
for i in range(len(train_source)):
tmp = train_source[i] + "\n" + " ".join(result[i][0]) + "," + " ".join(result[i][1])
new.append(tmp)
return new
train_magic_source, valid14_magic_source, valid_magic_source = app(train_source, result), app(valid14_source, result_valid14), app(valid_source, result_valid)
# -
print(train_magic_source[0])
print(valid14_magic_source[0])
print(valid_magic_source[0])
# +
write_to("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/rawdata/sighan/lattice/train.src", "\n".join(train_magic_source))
write_to("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/rawdata/sighan/lattice/train.tgt", "\n".join(train_target))
write_to("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/rawdata/sighan/lattice/valid14.src", "\n".join(valid14_magic_source))
write_to("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/rawdata/sighan/lattice/valid14.tgt", "\n".join(valid14_target))
write_to("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/rawdata/sighan/lattice/test.src", "\n".join(valid_magic_source))
write_to("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/rawdata/sighan/lattice/test.tgt", "\n".join(valid_target))
write_to("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/rawdata/sighan/lattice/valid.src", "\n".join(valid_magic_source[500:]))
write_to("/remote-home/xtzhang/CTC/CTC2021/SpecialEdition/data/rawdata/sighan/lattice/valid.tgt", "\n".join(valid_target[500:]))
| scripts/sighan/get_magic_dict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Natural Language in Python
#
# Natural language is fun to work with in Python, thanks to easy-to-use tools. Text can be processed quickly with regular expressions, or libraries like `nltk` and `spaCy` can run pre-trained models to tokenize, parse, and vectorize text.
#
# This guide is intended as a quick overview of the options you have in Python.
#
# * Regular expressions
# * Getting rid of punctuation
# * Scrubbing XML
# * Natural Language ToolKit (nltk)
# * Tokenization
# * Part-of-speech tagging
# * Sentence tokenization
# * Stemming
# * Lemmatization
# * spaCy library
# * Tokens and dependencies
# * Named entity recognition
# * Word vectors
#
# ## Regular Expressions
#
# Regular expressions (regex) are extremely useful in natural language processing. You can use them with the [`re` library](https://docs.python.org/3/library/re.html) in python. Regex may be intimidating, but it's worth the effort.
import re
# ### Punctuation
#
# It's a common thing to want to remove punctuation from text. Regex makes this easy.
# +
text = "The brown dog jumped over the lazy cheese; repeatedly. Without the cheese, there is boredom: the dog?"
print(re.sub(r'[.,;:!?-]', ' ', text))
# -
# The above gives extra spaces which can be removed with regex.
# +
text = "The brown dog jumped over the lazy cheese; repeatedly. Without the cheese, there is boredom: the dog?"
spaces = re.sub(r'[.,;:!?-]', ' ', text)
print(re.sub(r'[ ]+', ' ', spaces))
# -
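# The two substitutions can also be combined into a single pass by normalizing all whitespace
# after the punctuation pass, with a `strip()` to drop any leading/trailing space:

```python
import re

text = "Without the cheese, there is boredom: the dog?"
cleaned = re.sub(r'\s+', ' ', re.sub(r'[.,;:!?-]', ' ', text)).strip()
print(cleaned)  # Without the cheese there is boredom the dog
```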
# Regular expressions look weird at first, but a [good cheatsheet](https://pycon2016.regex.training/cheat-sheet) helps. Microsoft also makes [printable cheatsheets](https://docs.microsoft.com/en-us/dotnet/standard/base-types/regular-expression-language-quick-reference), but there may be slight differences between implementations.
#
# ### Getting rid of XML tags
#
# If you're working with web data, regex is useful for cleaning. For example, the 100MB Wikipedia dataset has lots of XML and HTML tags everywhere, which you normally don't want.
# +
with open("./enwik8.txt", "r") as f:
enwik8 = f.read().splitlines()
print(enwik8[50:60])
# -
# Luckily, a researcher named [<NAME>](http://mattmahoney.net/) wrote a nice Perl script for cleaning that stuff out. The fastText team has made that script available [here](https://github.com/facebookresearch/fastText/blob/master/wikifil.pl) and I've translated it into Python below.
# +
cleaned_enwik8 = []
# I've kept the comments in the code, but I've otherwise tweaked it to run in Python
# Program to filter Wikipedia XML dumps to "clean" text consisting only of lowercase
# letters (a-z, converted from A-Z), and spaces (never consecutive).
# All other characters are converted to spaces. Only text which normally appears
# in the web browser is displayed. Tables are removed. Image captions are
# preserved. Links are converted to normal text. Digits are spelled out.
# Written by <NAME>, June 10, 2006. This program is released to the public domain.
for line in enwik8:
    if "<text" in line.lower() and "#redirect" not in line.lower():
        line = line.lower()
        line = re.sub(r"<.*>", r"", line)  # remove xml tags
        line = re.sub(r"&amp;", r"&", line)  # decode URL encoded chars
        line = re.sub(r"&lt;", r"<", line)
        line = re.sub(r"&gt;", r">", line)
        line = re.sub(r"<ref[^<]*<\/ref>", r"", line)  # remove references <ref...> ... </ref>
        line = re.sub(r"<[^>]*>", r"", line)  # remove xhtml tags
        line = re.sub(r"\[http:[^] ]*", r"[]", line)  # remove normal url, preserve visible text
        line = re.sub(r"\|thumb", "", line)  # remove images links, preserve caption
        line = re.sub(r"\|left", "", line)
        line = re.sub(r"\|right", "", line)
        line = re.sub(r"\|\d+px", "", line)
        line = re.sub(r"\[\[image:[^\[\]]*\|", "", line)
        line = re.sub(r"\[\[category:([^|\]]*)[^]]*\]\]", r"[[\1]]", line)  # show categories without markup
        line = re.sub(r"\[\[[a-z\-]*:[^\]]*\]\]", "", line)  # remove links to other languages
        line = re.sub(r"\[\[[^\|\]]*\|", "[[", line)  # remove wiki url, preserve visible text
        line = re.sub(r"\{\{[^\}]*\}\}", "", line)  # remove {{icons}} and {{templates}}
        line = re.sub(r"\{[^\}]*\}", "", line)  # remove {tables}
        line = re.sub(r"\[", "", line)  # remove [ and ]
        line = re.sub(r"\]", "", line)
        line = re.sub(r"&[^;]*;", "", line)  # remove URL encoded chars
        # convert to lowercase letters and spaces, spell digits
        line = " " + line + " "
        line = re.sub(r"0", " zero ", line)
        line = re.sub(r"1", " one ", line)
        line = re.sub(r"2", " two ", line)
        line = re.sub(r"3", " three ", line)
        line = re.sub(r"4", " four ", line)
        line = re.sub(r"5", " five ", line)
        line = re.sub(r"6", " six ", line)
        line = re.sub(r"7", " seven ", line)
        line = re.sub(r"8", " eight ", line)
        line = re.sub(r"9", " nine ", line)
        line = re.sub(r"[^a-z]+", " ", line)  # keep only lowercase letters and spaces
        line = re.sub(r"[ ]+", " ", line)
        line = line.strip()
        if len(line) > 0:
            cleaned_enwik8.append(line)
print(cleaned_enwik8[:5])
# -
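# As an aside, the ten separate digit substitutions above can be collapsed into a single pass by handing `re.sub` a replacement function instead of a string — a small standalone sketch (the helper name `spell_digits` is mine):

```python
import re

DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def spell_digits(line):
    # the callback receives each match object and returns its replacement
    return re.sub(r"\d", lambda m: f" {DIGITS[int(m.group())]} ", line)

# squeeze the extra spaces afterwards, just like the main loop does
print(re.sub(r"[ ]+", " ", spell_digits("route 66")).strip())  # route six six
```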
# The short script scrubs the data clean and leaves the text behind.
#
# ## Natural Language ToolKit
#
# The `nltk` Python package has lots of tools to help you work with text. The following functions may all appear to be magic, but they're mostly based on statistical models.
#
# You can also find tokenizers and part-of-speech taggers for languages other than English.
# +
import nltk
# You will likely have to download nltk packages to use them
#nltk.download()
# -
# ### Tokenization
#
# You can split text into tokens (words) using the punkt tokenizer model (`punkt`). Tokenization is extremely useful for natural language modelling.
#
# Notice that the punctuation is properly separated from the words.
# +
paragram = "The brown dog jumped over the lazy cheese; repeatedly. Without the cheese, there is boredom: the dog?"
print(nltk.word_tokenize(paragram))
# -
# ### Part of speech tagging
#
# The natural language toolkit can also do something called "part of speech tagging" (`averaged_perceptron_tagger` + `treebank`). It will identify the subjects, predicates, etc in your sentence.
tokenized = nltk.word_tokenize(paragram)
nltk.pos_tag(tokenized)
# ### Sentence tokenization
#
# Sentence tokenization is helpful when you want to feed sentences to your model but your raw data is in paragraphs. This uses the `punkt` tokenizer from before.
# +
with open("./Principio.txt", "r") as f:
    principio = " ".join(f.readlines()).replace("\n", "")
print("Raw text")
print(principio)
print()
print("Tokenized sentences")
print(nltk.sent_tokenize(principio))
# -
# ### Stemming
#
# [Stemming](https://en.wikipedia.org/wiki/Stemming) algorithms crop words to their roots. They're a way of reducing your vocabulary size.
# +
# SnowballStemmer("english") implements the Porter2 algorithm
snowball_stemmer = nltk.stem.snowball.SnowballStemmer("english")
paragram = "The quick foxes quickly jumped over the laziest dog. The dogs' owners are saddened."
for word in nltk.word_tokenize(paragram):
    stemmed = snowball_stemmer.stem(word)
    print(f"Original: {word:<10} Stemmed: {stemmed:<10} Changed: {'Y'*(word!=stemmed)}")
# -
# ### Lemmatization
#
# The `wordnet` [lemmatizer](https://en.wikipedia.org/wiki/Lemmatisation) groups inflected forms of a word together, which reduces your vocabulary size. An inflection is a modification of a word to express grammatical features: examples are plural nouns ending in `s`, or verbs conjugated differently depending on who performs the action.
#
# The `lemmatize()` method takes a `pos` argument, but I haven't found it well-explained online. As a basic step I suggest using POS tagging to distinguish nouns from verbs. Below you can see that this helps lemmatize `learned` correctly.
# +
wordnet_lemmatizer = nltk.stem.WordNetLemmatizer()
paragram = "I learned Python. The learned learn Python. Therefore I am learned."
for word, pos in nltk.pos_tag(nltk.word_tokenize(paragram)):
    if pos[0] == "V": pos_arg = "v"
    else: pos_arg = "n"
    lemmatized = wordnet_lemmatizer.lemmatize(word, pos=pos_arg)
    print(f"Original: {word:<10} Lemmatized: {lemmatized:<10} Changed: {'Y'*(word!=lemmatized):<5} POS: {pos}")
# -
# ## spaCy library
#
# The `spacy` library can also work with natural text. I find it has a more modern feel than `nltk`. I recommend visiting their [website](https://spacy.io/usage/) since it has a lot of examples.
#
# You have to download and install models before you can use them.
#
# ```
# python -m spacy download en
# ```
#
# After you've installed a model, you can then load it in spaCy.
# +
import spacy
nlp = spacy.load('en')
# -
# ### Part-of-speech, lemmas, etc
#
# This [code snippet](https://spacy.io/usage/linguistic-features#section-pos-tagging) from the website shows how to run various transformations on your text and quickly pull per-token information out of it.
#
# Notice that spaCy is making some errors with its POS tagging. It always interprets `learned` as a verb.
# +
doc = nlp(u"I learned Python. The learned learn Python. Therefore I am learned.")
print("{:<10} {:<10} {:<10} {:<10} {:<10} {:<10} {:<10} {:<10}".format(
"text", "lemma", "pos", "tag", "dep", "shape", "is_alpha", "is_stop"
))
print(80*"-")
for token in doc:
    print("{:<10} {:<10} {:<10} {:<10} {:<10} {:<10} {:<10} {:<10}".format(
        token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_, token.is_alpha, token.is_stop
    ))
# -
# spaCy conveniently draws visuals for you. The `dep` style means dependency.
# +
doc = nlp(u"The brown dog jumped over the lazy cheese.")
spacy.displacy.render(doc, style="dep", jupyter=True, options={"distance" : 120})
# -
# ### Getting a bigger, better model
#
# I'm going to switch to the medium model `en_core_web_md`, which is 120MB. This will make the following examples work better.
#
# There seems to be a timeout problem when installing these larger models. You can use the [`pip install` instructions](https://spacy.io/usage/models#download-pip) from the guide with the `--timeout=10000` option.
nlp = spacy.load('en_core_web_md')
# ### Named entity recognition
#
# spaCy can also do named entity recognition. A model is used here, so the labels are not 100% accurate (it seems to have been trained on current events).
# +
doc = nlp("""The despotisms of Cinna and Sulla were brief; """
"""the rule of Pompeius and of Crassus soon yielded before Caesar; """
"""the arms of Lepidus and Antonius before Augustus; """
"""who, when the world was wearied by civil strife, subjected it to empire under the title of Princeps.""")
print("{:<20} {:<10} {:<10} {:<10}".format(
"text", "start_char", "end_char", "label"
))
print(50*"-")
for ent in doc.ents:
    print("{:<20} {:<10} {:<10} {:<10}".format(
        ent.text, ent.start_char, ent.end_char, ent.label_
    ))
# -
# Again, spaCy can draw these nicely. (Doesn't display properly on GitHub.)
# +
doc = nlp("""The despotisms of Cinna and Sulla were brief; """
"""the rule of Pompeius and of Crassus soon yielded before Caesar; """
"""the arms of Lepidus and Antonius before Augustus; """
"""who, when the world was wearied by civil strife, subjected it to empire under the title of Princeps.""")
spacy.displacy.render(doc, style="ent", jupyter=True)
# -
# ### Word vectors
#
# The spaCy library also comes with pretrained word embeddings. The docs recommend using a larger model than the default `en` (which is the small, "sm", variant), so the `md` model we loaded above is suitable.
#
# You can then check the similarity of tokens. Ham and bacon are similar to one another, and cars and trucks are similar to one another.
#
# The examples I show below can also be found in the [spaCy vector examples](https://spacy.io/usage/vectors-similarity).
# +
tokens = nlp(u'ham bacon cars trucks')
for token1 in tokens:
print(f"{token1}\n-----")
for token2 in tokens:
print(f"{token1.text:<10} {token2.text:<10} Similarity: {token1.similarity(token2):5.2f}")
print(f"{token1}\n-----")
# -
# You can look up vectors, and if your word isn't in the vocabulary you'll get nothing: the token has no vector and a zero norm.
# +
tokens = nlp(u'ham bus hambus')
print("{:<10} {:<10} {:>15} {:<10}".format(
"token", "has_vector", "vector_norm", "is_oov"
))
print(45*"-")
for token in tokens:
    print("{:<10} {:<10} {:15.2f} {:<10}".format(
        token.text, token.has_vector, token.vector_norm, token.is_oov
    ))
# -
# You can access the vector value with the `.vector` property. You get a `numpy` array.
print(nlp(u'ham').vector.shape)
# If you want to use these vectors in a model, you can retrieve them all from a sentence and then average them.
# +
import numpy as np
tokens = nlp(u'The brown dog jumped over the lazy cheese.')
print(np.mean([token.vector for token in tokens], axis=0).shape)
# -
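# Once you have averaged sentence vectors, a common next step is to compare them with cosine similarity. Here's a minimal numpy-only sketch with made-up 3-d "word vectors" standing in for `token.vector` (spaCy's own `similarity()` method does essentially this for you):

```python
import numpy as np

def sentence_vector(token_vectors):
    """Average word vectors into a single sentence vector."""
    return np.mean(token_vectors, axis=0)

def cosine_similarity(a, b):
    # cos(theta) = a.b / (|a| |b|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy 3-d vectors standing in for real embeddings
sent_a = sentence_vector([[1.0, 0.0, 0.0], [0.8, 0.2, 0.0]])
sent_b = sentence_vector([[0.9, 0.1, 0.0], [1.0, 0.0, 0.1]])

print(round(cosine_similarity(sent_a, sent_b), 3))
```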
# I hope this was useful. Please report any issues!
| Statistical Learning/4_Natural_Language.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import re
import numpy as np
import pandas as pd
from scipy.optimize import minimize_scalar
import seaborn as sns
import matplotlib.pylab as plt
from luescher_nd.database import utilities as ut
from luescher_nd.database.utilities import DATA_FOLDER
from luescher_nd.zeta.extern.pyzeta import zeta
from luescher_nd.plotting import styles
styles.setup(pgf=False)
# -
# %load_ext blackcellmagic
a_inv = 0.0
L = 1.0
# +
diff_sq = lambda x: (a_inv - zeta(x)[0] / np.pi / L) ** 2
bounds = [-10] + [n2 for n2 in ut.get_degeneracy(20) if n2 < 50]
xs = []
for b1, b2 in zip(bounds, np.roll(bounds, -1)):
    if b1 > b2:
        break
    xs.append(
        minimize_scalar(
            diff_sq,
            method="bounded",
            bounds=(b1 + 1.0e-3, b2 - 1.0e-3),
            options={"xatol": 1.0e-16},
        ).x
    )
spectrum = np.array(xs)
# -
files = [f for f in os.listdir(DATA_FOLDER) if f.endswith(".sqlite") and "tmp" not in f]
file_name = f"contact-fitted_a-inv={a_inv:+1.1f}_zeta=spherical_projector=a1g_n-eigs=200.sqlite"
df = ut.read_table(
    os.path.join(DATA_FOLDER, file_name),
    zeta=None,
    round_digits=2,
    filter_poles=False,
    filter_by_nstates=False,
    filter_degeneracy=False,
).query("nlevel < 24 and epsilon < 0.2 and L == @L")[
    ["n1d", "epsilon", "nstep", "L", "x", "nlevel", "mass"]
]
df["L"] = df.L.round(7)
df.head()
# +
data = []
for idx, (l, nstep) in df[["L", "nstep"]].drop_duplicates().iterrows():
    for nlevel, x in enumerate(spectrum):
        data.append({
            "L": l,
            "epsilon": 0,
            "nstep": int(nstep),
            "n1d": None,
            "x": x,
            "nlevel": nlevel,
        })
tf = pd.DataFrame(data)
for deg in ut.get_degeneracy_list(20):
    tf.loc[tf.nlevel >= deg, "nlevel"] += 1
# +
ff = df.groupby(["n1d", "epsilon"]).apply(
    lambda frame: (
        frame.set_index(["L", "nstep", "nlevel"])[["x"]]
        - tf.set_index(["L", "nstep", "nlevel"])[["x"]]
    ).abs()
).reset_index().dropna()
ff["diff_e"] = ff["x"] / ff["epsilon"] / (df["mass"].unique()[0] / 2)
ff["e2"] = ff["epsilon"] ** 2
ff["nstep_label"] = ff.nstep.where(ff.nstep > 0, r"$\infty$")
ff.head()
# +
grid = sns.FacetGrid(
    data=ff.sort_values("epsilon").query("nlevel > 0 and nlevel < 5"),
    col="nlevel",
    hue="nstep_label",
    col_wrap=4,
    sharey=True,
    margin_titles=True,
    hue_order=[1, 2, 4, r"$\infty$"],
)
grid.map(plt.plot, "epsilon", "diff_e", marker=".", ls=":", zorder=10)
grid.add_legend(title=r"$n_\mathrm{step}$")
for ax in grid.axes.flat:
    ax.set_yscale("log")
    ax.set_xscale("log")
    ax.set_xlim(1.9e-2, 2**-4)
grid.set_xlabels(r"$\epsilon \, [\mathrm{fm}]$")
grid.set_ylabels(r"$\left|x_A - x_N\right| / (\mu\epsilon)$")
styles.finalize(grid.fig, width=None)
# -
grid.savefig("continuum-diff-detail.jpg", bbox_inches="tight")
# +
ff["even"] = ff.n1d % 2 == 0
grid = sns.FacetGrid(
    data=ff.query("nlevel > 0 and nlevel < 2").query("epsilon < 0.05"),
    col="nstep",
    row="nlevel",
    hue="even",
    sharey=False,
    margin_titles=True,
    col_order=[1, 2, 4, -1],
)
grid.map(plt.plot, "epsilon", "diff_e", marker="o", ls=":", zorder=10)
grid.add_legend(title="$n_{1d}$ even")
for ax in grid.axes.flat:
    ax.set_yscale("log")
    ax.set_xscale("log")
grid.set_xlabels(r"$\epsilon \, [\mathrm{fm}]$")
grid.set_ylabels(r"$\left|x_A - x_N\right| / (\mu\epsilon)$")
styles.finalize(grid.fig, width=None)
# -
# NB: this reuses the filename from above and overwrites the previous figure
grid.savefig("continuum-diff-detail.jpg", bbox_inches="tight")
| notebooks/devel/spectrum-continuum-limit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="x0mBAu3llLcM"
# # Pre-processing
# + colab={"base_uri": "https://localhost:8080/"} id="WWgXqoxwsBZx" outputId="ca26acec-2447-411d-9a5a-019d89fbbed2"
from google.colab import drive
drive.mount('/content/drive')
# + id="8x_ldaYimX7c"
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
# + colab={"base_uri": "https://localhost:8080/"} id="TCOszNAXhIq-" outputId="23d65087-86b7-4e7d-b290-44dacb5198e1"
# !pip install einops
# !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl
# !pip install timm
# !pip install pydicom
# + colab={"base_uri": "https://localhost:8080/"} id="J-mv0J0EzJx2" outputId="7352455b-6846-4242-e285-1d15ee237e9c"
# !unzip ./drive/MyDrive/torch_project/medi/mlpmixer/chexnet/rsna-pneumonia-detection-challenge.zip
# + colab={"base_uri": "https://localhost:8080/"} id="ZE_4748daVAo" outputId="82fdb0e3-122c-4d9e-9cfd-cfa73ba8ec6c"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("ggplot")
import pydicom
import os
from os import listdir
from os.path import isfile, join
import glob, pylab
from torch import nn
from torch import Tensor
from PIL import Image
import torchvision.transforms as transforms
from torchvision.transforms import Compose, Resize, ToTensor
from torchvision import datasets, transforms, models
import torchvision
from einops import rearrange, reduce, repeat
from einops.layers.torch import Rearrange, Reduce
from torchsummary import summary
import torch
import torch.nn.functional as F
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
import torch_xla.distributed.parallel_loader as pl
import timm
import gc
import time
import random
from datetime import datetime
from tqdm.notebook import tqdm
from sklearn import model_selection, metrics
from sklearn.metrics import f1_score
# + colab={"base_uri": "https://localhost:8080/"} id="njnq4Yhpyw5l" outputId="3b5d061b-06a3-4367-8910-d6ff80b7fdb1"
# Image examples
train_images_dir = './stage_2_train_images/'
train_images = [f for f in listdir(train_images_dir) if isfile(join(train_images_dir, f))]
test_images_dir = './stage_2_test_images/'
test_images = [f for f in listdir(test_images_dir) if isfile(join(test_images_dir, f))]
print('5 Training images', train_images[:5]) # Print the first 5
print('Number of train images:', len(train_images))
print('Number of test images:', len(test_images))
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="FKSBtuU40mPn" outputId="4fb1601e-444b-4830-a5dc-afced3f31809"
train_labels = pd.read_csv('./stage_2_train_labels.csv')
train_labels.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 159} id="IyNWzD7z0ufN" outputId="31117fe0-efbe-4289-ecbb-aa9421d35382"
# Number of positive targets (computed from the data instead of hardcoded)
target_counts = train_labels.groupby('Target')['patientId'].count()
print(round(target_counts[1] / target_counts.sum() * 100, 2), '% of the examples are positive')
pd.DataFrame(target_counts)
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="Rryak3OB0wcd" outputId="8f09e207-d285-4fa2-ca23-c13b98d953bd"
# Distribution of Target in Training Set
plt.style.use('ggplot')
plot = train_labels.groupby('Target') \
    .count()['patientId'] \
    .plot(kind='bar', figsize=(10, 4), rot=0)
# + id="ljj6lgJcmTs0"
# For parallelization in TPUs
os.environ["XLA_USE_BF16"] = "1"
os.environ["XLA_TENSOR_ALLOCATOR_MAXSIZE"] = "100000000"
# + id="EnSOPXKyAlJ8"
def seed_everything(seed):
    """
    Seeds basic parameters for reproducibility of results
    Arguments:
        seed {int} -- Number of the seed
    """
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
# + id="ThjxZ_yTnqhV"
# Pneumonia can show up in several regions, so patientId appears multiple times; keep one row per id
temp = train_labels.drop_duplicates(['patientId'])
# + id="JKSjdOLRnqhW"
# Script to prepare combined dataset
# Class 0: Normal
# Class 1: Pneumonia
seed_everything(1)
DATA_PATH = './drive/My Drive/torch_project/medi/mlpmixer/chexnet/chest_jpg/'
df_val = temp.sample(n=int(temp.shape[0]*0.3))
df_train=temp.drop(index=df_val.index)
df_test = df_val.sample(n=int(temp.shape[0]*0.1))
df_val=df_val.drop(index=df_test.index)
# + colab={"base_uri": "https://localhost:8080/"} id="NiAjhdXeoCPr" outputId="48428c2e-430b-40ed-e497-4c72bcc62d08"
df_train.shape, df_val.shape, df_test.shape
# + id="z6GlunDUnqhW"
# model specific global variables
IMG_SIZE = 224
BATCH_SIZE = 16
LR = 2e-05
GAMMA = 0.7
N_EPOCHS = 10
DATA_DIR="./drive/My Drive/torch_project/medi/mlpmixer"
VIT_PATH = (
"./drive/My Drive/torch_project/medi/mlpmixer/jx_vit_base_p16_224-80ecf9dd.pth"
)
MLP_PATH = "./drive/My Drive/torch_project/medi/mlpmixer/jx_mixer_b16_224-76587d61.pth"
# + id="hzuG68hmnqhW"
class pneumonia_dataset(torch.utils.data.Dataset):
    """
    Helper Class to create the pytorch dataset
    """

    def __init__(self, df, data_path=DATA_PATH, transforms=None):
        super().__init__()
        self.df_data = df.values
        self.data_path = data_path
        self.transforms = transforms

    def __len__(self):
        return len(self.df_data)

    def __getitem__(self, index):
        img_name, _, _, _, _, label = self.df_data[index]
        img_path = os.path.join(self.data_path, img_name + '.jpg')
        img = Image.open(img_path).convert("RGB")
        # fall back to the raw image if no transforms were given
        image = self.transforms(img) if self.transforms is not None else img
        return image, label
# + id="pbnfD0BCnqhW"
# create image augmentations
transforms_train = transforms.Compose(
    [
        transforms.Resize((IMG_SIZE, IMG_SIZE)),
        transforms.RandomHorizontalFlip(p=0.3),
        transforms.RandomVerticalFlip(p=0.3),
        transforms.RandomResizedCrop(IMG_SIZE),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ]
)
transforms_val = transforms.Compose(
    [
        transforms.Resize((IMG_SIZE, IMG_SIZE)),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ]
)
# + [markdown] id="nz1Kmg0UmhLC"
# # ViT Pre-trained
# + id="qBHnk6lrDoMi"
class ViT_MLP_Base16(nn.Module):
    def __init__(self, n_classes, pretrained=False, vit=True):
        super(ViT_MLP_Base16, self).__init__()
        if vit:
            self.model = timm.create_model("vit_base_patch16_224", pretrained=False, in_chans=3)
        else:
            self.model = timm.create_model("gmixer_24_224", pretrained=False, in_chans=3)
        # self.model.norm = nn.LayerNorm((768,), eps=1e-5, elementwise_affine=True)
        if pretrained:
            if vit:
                self.model.load_state_dict(torch.load(VIT_PATH))
            else:
                self.model.load_state_dict(torch.load('./gmixer_24_224_raa-7daf7ae6.pth'))
        self.model.head = nn.Linear(self.model.head.in_features, n_classes)

    def forward(self, x):
        x = self.model(x)
        return x

    def train_one_epoch(self, train_loader, criterion, optimizer, device):
        # keep track of training loss
        epoch_loss = 0.0
        epoch_accuracy = 0.0
        ###################
        # train the model #
        ###################
        self.model.train()
        for i, (data, target) in enumerate(train_loader):
            # move tensors to GPU if CUDA is available
            if device.type == "cuda":
                data, target = data.cuda(), target.cuda()
            elif device.type == "xla":
                data = data.to(device, dtype=torch.float32)
                target = target.to(device, dtype=torch.int64)
            # clear the gradients of all optimized variables
            optimizer.zero_grad()
            # forward pass: compute predicted outputs by passing inputs to the model
            output = self.forward(data)
            # calculate the batch loss
            loss = criterion(output, target)
            # backward pass: compute gradient of the loss with respect to model parameters
            loss.backward()
            # Calculate Accuracy
            accuracy = (output.argmax(dim=1) == target).float().mean()
            # update training loss and accuracy
            epoch_loss += loss
            epoch_accuracy += accuracy
            # perform a single optimization step (parameter update)
            if device.type == "xla":
                xm.optimizer_step(optimizer)
                if i % 20 == 0:
                    xm.master_print(f"\tBATCH {i+1}/{len(train_loader)} - LOSS: {loss}")
            else:
                optimizer.step()
        return epoch_loss / len(train_loader), epoch_accuracy / len(train_loader)

    def validate_one_epoch(self, valid_loader, criterion, device):
        # keep track of validation loss
        valid_loss = 0.0
        valid_accuracy = 0.0
        valid_f1 = 0.0
        ######################
        # validate the model #
        ######################
        self.model.eval()
        for data, target in valid_loader:
            # move tensors to GPU if CUDA is available
            if device.type == "cuda":
                data, target = data.cuda(), target.cuda()
            elif device.type == "xla":
                data = data.to(device, dtype=torch.float32)
                target = target.to(device, dtype=torch.int64)
            with torch.no_grad():
                # forward pass: compute predicted outputs by passing inputs to the model
                output = self.model(data)
                # calculate the batch loss
                loss = criterion(output, target)
                # Calculate Accuracy
                accuracy = (output.argmax(dim=1) == target).float().mean()
                # update average validation loss and accuracy
                valid_loss += loss
                valid_accuracy += accuracy
                valid_f1 += f1_score(output.argmax(dim=1).cpu().numpy(), target.cpu().numpy(), average='macro')
        return valid_loss / len(valid_loader), valid_accuracy / len(valid_loader), valid_f1 / len(valid_loader)
# + id="lvpNZdt2DxyU"
def fit_tpu(
    model, epochs, device, criterion, optimizer, train_loader, valid_loader=None
):
    valid_loss_min = np.inf  # track change in validation loss
    # keeping track of losses as they happen
    train_losses = []
    valid_losses = []
    train_accs = []
    valid_accs = []
    valid_f1s = []
    for epoch in range(1, epochs + 1):
        gc.collect()
        para_train_loader = pl.ParallelLoader(train_loader, [device])
        xm.master_print(f"{'='*50}")
        xm.master_print(f"EPOCH {epoch} - TRAINING...")
        train_loss, train_acc = model.train_one_epoch(
            para_train_loader.per_device_loader(device), criterion, optimizer, device
        )
        xm.master_print(
            f"\n\t[TRAIN] EPOCH {epoch} - LOSS: {train_loss}, ACCURACY: {train_acc}\n"
        )
        train_losses.append(train_loss)
        train_accs.append(train_acc)
        gc.collect()
        if valid_loader is not None:
            gc.collect()
            para_valid_loader = pl.ParallelLoader(valid_loader, [device])
            xm.master_print(f"EPOCH {epoch} - VALIDATING...")
            valid_loss, valid_acc, valid_f1 = model.validate_one_epoch(
                para_valid_loader.per_device_loader(device), criterion, device
            )
            xm.master_print(f"\t[VALID] LOSS: {valid_loss}, ACCURACY: {valid_acc}, F1: {valid_f1}\n")
            valid_losses.append(valid_loss)
            valid_accs.append(valid_acc)
            valid_f1s.append(valid_f1)  # was appending the list to itself
            gc.collect()
            # save model if validation loss has decreased
            if valid_loss <= valid_loss_min and epoch != 1:
                xm.master_print(
                    "Validation loss decreased ({:.4f} --> {:.4f}). Saving model ...".format(
                        valid_loss_min, valid_loss
                    )
                )
                # xm.save(model.state_dict(), f'{DATA_DIR}/checkpoint/best_model.pth')
                valid_loss_min = valid_loss
    return {
        "train_loss": train_losses,
        "valid_losses": valid_losses,
        "train_acc": train_accs,
        "valid_acc": valid_accs,
        "valid_f1": valid_f1s,
    }
# + id="Qk3u_JjKDzyz"
model = ViT_MLP_Base16(n_classes=2, pretrained=True, vit=True)
# + id="mW7L1zzsD4Md"
def _run():
    train_dataset = pneumonia_dataset(df_train, transforms=transforms_train)
    valid_dataset = pneumonia_dataset(df_val, transforms=transforms_val)
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        train_dataset,
        num_replicas=xm.xrt_world_size(),
        rank=xm.get_ordinal(),
        shuffle=True,
    )
    valid_sampler = torch.utils.data.distributed.DistributedSampler(
        valid_dataset,
        num_replicas=xm.xrt_world_size(),
        rank=xm.get_ordinal(),
        shuffle=False,
    )
    train_loader = torch.utils.data.DataLoader(
        dataset=train_dataset,
        batch_size=BATCH_SIZE,
        sampler=train_sampler,
        drop_last=True,
        num_workers=8,
    )
    valid_loader = torch.utils.data.DataLoader(
        dataset=valid_dataset,
        batch_size=BATCH_SIZE,
        sampler=valid_sampler,
        drop_last=True,
        num_workers=8,
    )
    criterion = nn.CrossEntropyLoss()
    # device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    device = xm.xla_device()
    model.to(device)
    lr = LR * xm.xrt_world_size()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    xm.master_print(f"INITIALIZING TRAINING ON {xm.xrt_world_size()} TPU CORES")
    start_time = datetime.now()
    xm.master_print(f"Start Time: {start_time}")
    logs = fit_tpu(
        model=model,
        epochs=N_EPOCHS,
        device=device,
        criterion=criterion,
        optimizer=optimizer,
        train_loader=train_loader,
        valid_loader=valid_loader,
    )
    xm.master_print(f"Execution time: {datetime.now() - start_time}")
    xm.master_print("Saving Model")
    xm.save(
        model.state_dict(), f'{DATA_DIR}/checkpoint/model_5e_{datetime.now().strftime("%Y%m%d-%H%M")}.pth'
    )
# + colab={"base_uri": "https://localhost:8080/"} id="urYU2DewEFTQ" outputId="c69d2a4a-ea14-42fd-c079-d1dd0e878b71"
# Start training processes
def _mp_fn(rank, flags):
    torch.set_default_tensor_type("torch.FloatTensor")
    a = _run()
# _run()
FLAGS = {}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
# + colab={"base_uri": "https://localhost:8080/"} id="-s_aGwIyMUbC" outputId="e16726c4-2615-4d91-d5f6-2f885fd0d9d4"
# load model
vit_checkpoint_path = './drive/My Drive/torch_project/medi/mlpmixer/vit_checkpoint/vit_best.pth'
vit_checkpoint = torch.load(vit_checkpoint_path)
model.load_state_dict(vit_checkpoint)
# + id="jjUb1Yy0fGr2"
def predict_scores(model, raw_data, device):
    model.to(device)
    model.eval()
    test_dataset = pneumonia_dataset(raw_data, transforms=transforms_val)
    test_sampler = torch.utils.data.distributed.DistributedSampler(
        test_dataset,
        num_replicas=xm.xrt_world_size(),
        rank=xm.get_ordinal(),
        shuffle=False,
    )
    test_loader = torch.utils.data.DataLoader(
        dataset=test_dataset,
        batch_size=BATCH_SIZE,
        sampler=test_sampler,
        drop_last=True,
        num_workers=8,
    )
    para_test_loader = pl.ParallelLoader(test_loader, [device])
    test_accuracy = 0.0
    test_f1 = 0.0
    for data, target in para_test_loader.per_device_loader(device):
        data = data.to(device, dtype=torch.float32)
        target = target.to(device, dtype=torch.int64)
        with torch.no_grad():
            output = model(data)
        accuracy = (output.argmax(dim=1) == target).float().mean()
        test_accuracy += accuracy
        test_f1 += f1_score(output.argmax(dim=1).cpu().numpy(), target.cpu().numpy(), average='macro')
    return test_accuracy / len(test_loader), test_f1 / len(test_loader)
# + colab={"base_uri": "https://localhost:8080/"} id="0XE6zQXSQjeu" outputId="06d60c2c-e7f1-4025-ef06-a05a40ec36fb"
device = xm.xla_device()
predict_scores(model, df_test, device)
# + [markdown] id="_2klWVugaeGg"
# #MLP Mixer Pre-trained
# + colab={"base_uri": "https://localhost:8080/"} id="oIlvROx_zHu8" outputId="b0c98f9a-0019-402a-d53b-a148ada2d363"
# !wget https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gmixer_24_224_raa-7daf7ae6.pth
# + id="oTOZ1GZYpxhe"
model = ViT_MLP_Base16(n_classes=2, pretrained=True, vit=False)
# + id="yFhh1EfQphaP"
def _run():
    train_dataset = pneumonia_dataset(df_train, transforms=transforms_train)
    valid_dataset = pneumonia_dataset(df_val, transforms=transforms_val)
    train_sampler = torch.utils.data.distributed.DistributedSampler(
        train_dataset,
        num_replicas=xm.xrt_world_size(),
        rank=xm.get_ordinal(),
        shuffle=True,
    )
    valid_sampler = torch.utils.data.distributed.DistributedSampler(
        valid_dataset,
        num_replicas=xm.xrt_world_size(),
        rank=xm.get_ordinal(),
        shuffle=False,
    )
    train_loader = torch.utils.data.DataLoader(
        dataset=train_dataset,
        batch_size=BATCH_SIZE,
        sampler=train_sampler,
        drop_last=True,
        num_workers=8,
    )
    valid_loader = torch.utils.data.DataLoader(
        dataset=valid_dataset,
        batch_size=BATCH_SIZE,
        sampler=valid_sampler,
        drop_last=True,
        num_workers=8,
    )
    criterion = nn.CrossEntropyLoss()
    # device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    device = xm.xla_device()
    model.to(device)
    lr = LR * xm.xrt_world_size()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    xm.master_print(f"INITIALIZING TRAINING ON {xm.xrt_world_size()} TPU CORES")
    start_time = datetime.now()
    xm.master_print(f"Start Time: {start_time}")
    logs = fit_tpu(
        model=model,
        epochs=N_EPOCHS,
        device=device,
        criterion=criterion,
        optimizer=optimizer,
        train_loader=train_loader,
        valid_loader=valid_loader,
    )
    xm.master_print(f"Execution time: {datetime.now() - start_time}")
    xm.master_print("Saving Model")
    xm.save(
        model.state_dict(), f'{DATA_DIR}/checkpoint/model_5e_{datetime.now().strftime("%Y%m%d-%H%M")}.pth'
    )
# + colab={"base_uri": "https://localhost:8080/"} id="_L8ArZSY0VLJ" outputId="43f0c760-ffac-4e4f-cefc-988c2f3df2e6"
# Start training processes
def _mp_fn(rank, flags):
    torch.set_default_tensor_type("torch.FloatTensor")
    a = _run()
# _run()
FLAGS = {}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
# + colab={"base_uri": "https://localhost:8080/"} id="hmeXXfXOeqds" outputId="3d29536c-77e1-44b3-f52f-3e709995a171"
# load model
mlp_checkpoint_path = './drive/My Drive/torch_project/medi/mlpmixer/checkpoint/mlp_pre.pth'
mlp_checkpoint = torch.load(mlp_checkpoint_path)
model.load_state_dict(mlp_checkpoint)
# + colab={"base_uri": "https://localhost:8080/"} id="CqWt29Raeqds" outputId="243cc81d-eaeb-4e04-b17e-8fe10404e7fe"
predict_scores(model, df_test, xm.xla_device())
# + [markdown] id="wfft3AmOhSrI"
# #ViT
# + id="2yZFR7sxxljN"
class PatchEmbedding(nn.Module):
def __init__(self, in_channels: int = 3, patch_size: int = 16, emb_size: int = 768, img_size: int = 224):
self.patch_size = patch_size
super().__init__()
self.projection = nn.Sequential(
# using a conv layer instead of a linear one -> performance gains
nn.Conv2d(in_channels, emb_size, kernel_size=patch_size, stride=patch_size),
Rearrange('b e (h) (w) -> b (h w) e'),
)
self.cls_token = nn.Parameter(torch.randn(1,1, emb_size))
self.positions = nn.Parameter(torch.randn((img_size // patch_size) **2 + 1, emb_size))
def forward(self, x: Tensor) -> Tensor:
b, _, _, _ = x.shape
x = self.projection(x)
cls_tokens = repeat(self.cls_token, '() n e -> b n e', b=b)
# prepend the cls token to the input
x = torch.cat([cls_tokens, x], dim=1)
# add position embedding
x += self.positions
return x
class MultiHeadAttention(nn.Module):
def __init__(self, emb_size: int = 768, num_heads: int = 8, dropout: float = 0):
super().__init__()
self.emb_size = emb_size
self.num_heads = num_heads
# fuse the queries, keys and values in one matrix
self.qkv = nn.Linear(emb_size, emb_size * 3)
self.att_drop = nn.Dropout(dropout)
self.projection = nn.Linear(emb_size, emb_size)
def forward(self, x : Tensor, mask: Tensor = None) -> Tensor:
# split keys, queries and values in num_heads
qkv = rearrange(self.qkv(x), "b n (h d qkv) -> (qkv) b h n d", h=self.num_heads, qkv=3)
queries, keys, values = qkv[0], qkv[1], qkv[2]
# sum up over the last axis
energy = torch.einsum('bhqd, bhkd -> bhqk', queries, keys) # batch, num_heads, query_len, key_len
if mask is not None:
fill_value = torch.finfo(torch.float32).min
energy = energy.masked_fill(~mask, fill_value)
scaling = self.emb_size ** (1/2)
# scale the energies BEFORE the softmax; dividing the softmax output would break its normalization
att = F.softmax(energy / scaling, dim=-1)
att = self.att_drop(att)
# sum up over the third axis
out = torch.einsum('bhal, bhlv -> bhav ', att, values)
out = rearrange(out, "b h n d -> b n (h d)")
out = self.projection(out)
return out
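# The attention math can be checked in isolation. A single-head NumPy sketch (illustrative only, not this module's implementation), with the scaling applied to the energies before the softmax:

```python
import numpy as np

def sdp_attention(q, k, v):
    """Scaled dot-product attention for one head; q, k, v: (seq_len, d_head)."""
    d_head = q.shape[-1]
    energy = q @ k.T / np.sqrt(d_head)            # scale BEFORE the softmax
    energy -= energy.max(axis=-1, keepdims=True)  # numerical stability
    att = np.exp(energy)
    att /= att.sum(axis=-1, keepdims=True)        # each row sums to 1
    return att @ v, att

q = k = v = np.eye(4)
out, att = sdp_attention(q, k, v)
```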
class ResidualAdd(nn.Module):
def __init__(self, fn):
super().__init__()
self.fn = fn
def forward(self, x, **kwargs):
res = x
x = self.fn(x, **kwargs)
x += res
return x
class FeedForwardBlock(nn.Sequential):
def __init__(self, emb_size: int, expansion: int = 4, drop_p: float = 0.):
super().__init__(
nn.Linear(emb_size, expansion * emb_size),
nn.GELU(),
nn.Dropout(drop_p),
nn.Linear(expansion * emb_size, emb_size),
)
class TransformerEncoderBlock(nn.Sequential):
def __init__(self,
emb_size: int = 768,
drop_p: float = 0.,
forward_expansion: int = 4,
forward_drop_p: float = 0.,
** kwargs):
super().__init__(
ResidualAdd(nn.Sequential(
nn.LayerNorm(emb_size),
MultiHeadAttention(emb_size, **kwargs),
nn.Dropout(drop_p)
)),
ResidualAdd(nn.Sequential(
nn.LayerNorm(emb_size),
FeedForwardBlock(
emb_size, expansion=forward_expansion, drop_p=forward_drop_p),
nn.Dropout(drop_p)
)
))
class TransformerEncoder(nn.Sequential):
def __init__(self, depth: int = 12, **kwargs):
super().__init__(*[TransformerEncoderBlock(**kwargs) for _ in range(depth)])
class ClassificationHead(nn.Sequential):
def __init__(self, emb_size: int = 768, n_classes: int = 4):
super().__init__(
Reduce('b n e -> b e', reduction='mean'),
nn.LayerNorm(emb_size),
nn.Linear(emb_size, n_classes))
class ViT(nn.Sequential):
def __init__(self,
in_channels: int = 3,
patch_size: int = 16,
emb_size: int = 768,
img_size: int = 224,
depth: int = 12,
n_classes: int = 2,
**kwargs):
super().__init__(
PatchEmbedding(in_channels, patch_size, emb_size, img_size),
TransformerEncoder(depth, emb_size=emb_size, **kwargs),
ClassificationHead(emb_size, n_classes)
)
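# With emb_size=768, depth=12, and the default expansion of 4, the encoder's parameter count can be verified by hand. A quick arithmetic sanity check (encoder blocks only; the patch embedding, positions, and head add roughly another 0.75M, landing near the familiar ~86M of ViT-Base):

```python
emb, depth, mlp_ratio = 768, 12, 4

qkv = emb * 3 * emb + 3 * emb                 # fused qkv projection
proj = emb * emb + emb                        # attention output projection
mlp = (emb * mlp_ratio * emb + mlp_ratio * emb) + (mlp_ratio * emb * emb + emb)
norms = 2 * 2 * emb                           # two LayerNorms, weight + bias each
per_block = qkv + proj + mlp + norms          # 7,087,872
encoder = depth * per_block                   # 85,054,464
```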
# + id="0XJnpXLOXOAc"
class ViT_MLP_CUSTOM(nn.Module):
def __init__(self, n_classes, pretrained=False, vit=True):
super(ViT_MLP_CUSTOM, self).__init__()
if vit :
self.model = ViT()
else :
self.model = MlpMixer(n_classes, 12, 16, 768, 384, 3072)
if pretrained:
if vit :
self.model.load_state_dict(torch.load(VIT_PATH))
else :
self.model.load_state_dict(torch.load('./gmixer_24_224_raa-7daf7ae6.pth'))
#self.model.head = nn.Linear(self.model.head.in_features, n_classes)
def forward(self, x):
x = self.model(x)
return x
def train_one_epoch(self, train_loader, criterion, optimizer, device):
# keep track of training loss
epoch_loss = 0.0
epoch_accuracy = 0.0
###################
# train the model #
###################
self.model.train()
for i, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if device.type == "cuda":
data, target = data.cuda(), target.cuda()
elif device.type == "xla":
data = data.to(device, dtype=torch.float32)
target = target.to(device, dtype=torch.int64)
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = self.forward(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# Calculate Accuracy
accuracy = (output.argmax(dim=1) == target).float().mean()
# update training loss and accuracy
# detach so the autograd graph is not kept alive across batches
epoch_loss += loss.detach()
epoch_accuracy += accuracy.detach()
# perform a single optimization step (parameter update)
if device.type == "xla":
xm.optimizer_step(optimizer)
if i % 20 == 0:
xm.master_print(f"\tBATCH {i+1}/{len(train_loader)} - LOSS: {loss}")
else:
optimizer.step()
return epoch_loss / len(train_loader), epoch_accuracy / len(train_loader)
def validate_one_epoch(self, valid_loader, criterion, device):
# keep track of validation loss
valid_loss = 0.0
valid_accuracy = 0.0
valid_f1 = 0.0
######################
# validate the model #
######################
self.model.eval()
for data, target in valid_loader:
# move tensors to GPU if CUDA is available
if device.type == "cuda":
data, target = data.cuda(), target.cuda()
elif device.type == "xla":
data = data.to(device, dtype=torch.float32)
target = target.to(device, dtype=torch.int64)
with torch.no_grad():
# forward pass: compute predicted outputs by passing inputs to the model
output = self.model(data)
# calculate the batch loss
loss = criterion(output, target)
# Calculate Accuracy
accuracy = (output.argmax(dim=1) == target).float().mean()
# update average validation loss and accuracy
valid_loss += loss
valid_accuracy += accuracy
valid_f1 += f1_score(target.cpu().numpy(), output.argmax(dim=1).cpu().numpy(), average='macro')  # f1_score expects (y_true, y_pred)
return valid_loss / len(valid_loader), valid_accuracy / len(valid_loader), valid_f1 / len(valid_loader)
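# For reference, the macro F1 that `f1_score(..., average='macro')` reports is the unweighted mean of per-class F1 scores; a dependency-free sketch of that computation:

```python
def macro_f1(y_true, y_pred, classes):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

score = macro_f1([0, 0, 1, 1], [0, 1, 1, 1], classes=(0, 1))
```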
# + id="C0qabbIgXOAd"
def fit_tpu(
model, epochs, device, criterion, optimizer, train_loader, valid_loader=None
):
valid_loss_min = np.inf  # track change in validation loss
# keeping track of losses as it happen
train_losses = []
valid_losses = []
train_accs = []
valid_accs = []
valid_f1s = []
for epoch in range(1, epochs + 1):
gc.collect()
para_train_loader = pl.ParallelLoader(train_loader, [device])
xm.master_print(f"{'='*50}")
xm.master_print(f"EPOCH {epoch} - TRAINING...")
train_loss, train_acc = model.train_one_epoch(
para_train_loader.per_device_loader(device), criterion, optimizer, device
)
xm.master_print(
f"\n\t[TRAIN] EPOCH {epoch} - LOSS: {train_loss}, ACCURACY: {train_acc}\n"
)
train_losses.append(train_loss)
train_accs.append(train_acc)
gc.collect()
if valid_loader is not None:
gc.collect()
para_valid_loader = pl.ParallelLoader(valid_loader, [device])
xm.master_print(f"EPOCH {epoch} - VALIDATING...")
valid_loss, valid_acc, valid_f1 = model.validate_one_epoch(
para_valid_loader.per_device_loader(device), criterion, device
)
xm.master_print(f"\t[VALID] LOSS: {valid_loss}, ACCURACY: {valid_acc}, F1: {valid_f1}\n")
valid_losses.append(valid_loss)
valid_accs.append(valid_acc)
valid_f1s.append(valid_f1)
gc.collect()
# save model if validation loss has decreased
if valid_loss <= valid_loss_min and epoch != 1:
xm.master_print(
"Validation loss decreased ({:.4f} --> {:.4f}). Saving model ...".format(
valid_loss_min, valid_loss
)
)
#xm.save(model.state_dict(), f'{DATA_DIR}/checkpoint/best_model.pth')
valid_loss_min = valid_loss
return {
"train_loss": train_losses,
"valid_losses": valid_losses,
"train_acc": train_accs,
"valid_acc": valid_accs,
"valid_f1:": valid_f1s
}
# + id="nbmIhl7fXOAd"
model = ViT_MLP_CUSTOM(n_classes=2, pretrained=False, vit=True)
# + id="OuqTSI19XOAd"
def _run():
train_dataset = pneumonia_dataset(df_train, transforms=transforms_train)
valid_dataset = pneumonia_dataset(df_val, transforms=transforms_val)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=True,
)
valid_sampler = torch.utils.data.distributed.DistributedSampler(
valid_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=False,
)
train_loader = torch.utils.data.DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
sampler=train_sampler,
drop_last=True,
num_workers=8,
)
valid_loader = torch.utils.data.DataLoader(
dataset=valid_dataset,
batch_size=BATCH_SIZE,
sampler=valid_sampler,
drop_last=True,
num_workers=8,
)
criterion = nn.CrossEntropyLoss()
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = xm.xla_device()
model.to(device)
lr = LR * xm.xrt_world_size()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
xm.master_print(f"INITIALIZING TRAINING ON {xm.xrt_world_size()} TPU CORES")
start_time = datetime.now()
xm.master_print(f"Start Time: {start_time}")
logs = fit_tpu(
model=model,
epochs=N_EPOCHS,
device=device,
criterion=criterion,
optimizer=optimizer,
train_loader=train_loader,
valid_loader=valid_loader,
)
xm.master_print(f"Execution time: {datetime.now() - start_time}")
xm.master_print("Saving Model")
xm.save(
model.state_dict(), f'{DATA_DIR}/checkpoint/model_5e_{datetime.now().strftime("%Y%m%d-%H%M")}.pth'
)
# + colab={"base_uri": "https://localhost:8080/"} id="msziS_4fXOAe" outputId="f10ac6e9-3c3f-48c5-80e8-c8ae9bde611a"
# Start training processes
def _mp_fn(rank, flags):
torch.set_default_tensor_type("torch.FloatTensor")
a = _run()
# _run()
FLAGS = {}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
# + colab={"base_uri": "https://localhost:8080/"} id="uF6j9OKagxta" outputId="85e743ae-f9da-4baf-c3fc-784b824fe546"
# load model
vit_custom_checkpoint_path = './drive/My Drive/torch_project/medi/mlpmixer/checkpoint/vit_custom.pth'
vit_custom_checkpoint = torch.load(vit_custom_checkpoint_path)
model.load_state_dict(vit_custom_checkpoint)
# + colab={"base_uri": "https://localhost:8080/"} id="LFVYoQAbgxtr" outputId="94fea1c5-506c-4780-a9d1-1bf267a8d10b"
predict_scores(model, df_test, xm.xla_device())
# + [markdown] id="hXCcqK0LwW_m"
# # MLP Mixer
# + id="PtEE6KKso-MG"
class MlpBlock(nn.Module):
def __init__(self, hidden_dim, mlp_dim):
super(MlpBlock, self).__init__()
self.mlp = nn.Sequential(
nn.Linear(hidden_dim, mlp_dim),
nn.GELU(),
nn.Linear(mlp_dim, hidden_dim)
)
def forward(self, x):
return self.mlp(x)
class MixerBlock(nn.Module):
def __init__(self, num_tokens, hidden_dim, tokens_mlp_dim, channels_mlp_dim):
super(MixerBlock, self).__init__()
self.ln_token = nn.LayerNorm(hidden_dim)
self.token_mix = MlpBlock(num_tokens, tokens_mlp_dim)
self.ln_channel = nn.LayerNorm(hidden_dim)
self.channel_mix = MlpBlock(hidden_dim, channels_mlp_dim)
def forward(self, x):
out = self.ln_token(x).transpose(1, 2)
x = x + self.token_mix(out).transpose(1, 2)
out = self.ln_channel(x)
x = x + self.channel_mix(out)
return x
class MlpMixer(nn.Module):
def __init__(self, num_classes, num_blocks, patch_size, hidden_dim, tokens_mlp_dim, channels_mlp_dim, image_size=224):
super(MlpMixer, self).__init__()
num_tokens = (image_size // patch_size)**2
self.patch_emb = nn.Conv2d(3, hidden_dim, kernel_size=patch_size, stride=patch_size, bias=False)
self.mlp = nn.Sequential(*[MixerBlock(num_tokens, hidden_dim, tokens_mlp_dim, channels_mlp_dim) for _ in range(num_blocks)])
self.ln = nn.LayerNorm(hidden_dim)
self.fc = nn.Linear(hidden_dim, num_classes)
def forward(self, x):
x = self.patch_emb(x)
x = x.flatten(2).transpose(1, 2)
x = self.mlp(x)
x = self.ln(x)
x = x.mean(dim=1)
x = self.fc(x)
return x
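# The two mixing steps differ only in which axis the MLP acts on; the transposes in MixerBlock perform exactly that axis swap. A shape-level NumPy sketch (random matrices stand in for the learned MLPs; illustrative only):

```python
import numpy as np

tokens, channels = 196, 768
x = np.random.rand(tokens, channels)       # one image as a token x channel table
W_tok = np.random.rand(tokens, tokens)     # stand-in for the token-mixing MLP
W_ch = np.random.rand(channels, channels)  # stand-in for the channel-mixing MLP

token_mixed = W_tok @ x    # mixes information ACROSS patches, per channel
channel_mixed = x @ W_ch   # mixes information WITHIN each patch, per token
```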
# + id="KCk9yycQkK-I"
model = ViT_MLP_CUSTOM(n_classes=2, pretrained=False, vit=False)
# + id="fxC0ncl5khnL"
def _run():
train_dataset = pneumonia_dataset(df_train, transforms=transforms_train)
valid_dataset = pneumonia_dataset(df_val, transforms=transforms_val)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=True,
)
valid_sampler = torch.utils.data.distributed.DistributedSampler(
valid_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=False,
)
train_loader = torch.utils.data.DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
sampler=train_sampler,
drop_last=True,
num_workers=8,
)
valid_loader = torch.utils.data.DataLoader(
dataset=valid_dataset,
batch_size=BATCH_SIZE,
sampler=valid_sampler,
drop_last=True,
num_workers=8,
)
criterion = nn.CrossEntropyLoss()
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = xm.xla_device()
model.to(device)
lr = LR * xm.xrt_world_size()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
xm.master_print(f"INITIALIZING TRAINING ON {xm.xrt_world_size()} TPU CORES")
start_time = datetime.now()
xm.master_print(f"Start Time: {start_time}")
logs = fit_tpu(
model=model,
epochs=N_EPOCHS,
device=device,
criterion=criterion,
optimizer=optimizer,
train_loader=train_loader,
valid_loader=valid_loader,
)
xm.master_print(f"Execution time: {datetime.now() - start_time}")
xm.master_print("Saving Model")
xm.save(
model.state_dict(), f'{DATA_DIR}/checkpoint/model_5e_{datetime.now().strftime("%Y%m%d-%H%M")}.pth'
)
# + colab={"base_uri": "https://localhost:8080/"} id="GHTa0270khnM" outputId="b82719cc-6717-4681-e20f-a5410ac9f5e2"
# Start training processes
def _mp_fn(rank, flags):
torch.set_default_tensor_type("torch.FloatTensor")
a = _run()
# _run()
FLAGS = {}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
# + colab={"base_uri": "https://localhost:8080/"} id="uGRzKJ4Qg7BI" outputId="41fe9d71-d947-4b89-84e2-06e9fe24b16b"
# load model
mlp_custom_checkpoint_path = './drive/My Drive/torch_project/medi/mlpmixer/checkpoint/mlp_custom.pth'
mlp_custom_checkpoint = torch.load(mlp_custom_checkpoint_path)
model.load_state_dict(mlp_custom_checkpoint)
# + colab={"base_uri": "https://localhost:8080/"} id="S0GiHdISg7BI" outputId="bdd3269f-488e-4f04-8e3e-1098f22f3433"
predict_scores(model, df_test, xm.xla_device())
| Vit and Mixer/tpu_vit_mlp_mixer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## SQL Ingestion Pipeline Documentation
#
# Data is populated into the database using the `sqlIngest` package. `sqlIngest` consists of a `DataHandler` class with various methods for initializing, importing, and updating data in the SQL database.
# ### DataHandler
# Adding to python path for one-off relative import
import pandas as pd
import sys
sys.path.append('../server/src/services/')
from sqlIngest import DataHandler
# The `DataHandler` object is the central tool in `sqlIngest`. It can be used to load data from Socrata or CSV sources, perform formatting, and write that data back to CSV or into a database instance. It has the following dependencies:
#
# - `pandas` for data handling and manipulation
# - `sqlAlchemy` for database operations
# - `sodapy` for Socrata operations
# Initialize instance of DataHandler
loader = DataHandler()
# It is first necessary to load configuration information from `settings.cfg`. In this tutorial `settings.example.cfg` is specified, but you will need to modify this file and resave it as `settings.cfg`. Include the database connection string and user token for your machine and account respectively.
# Load configuration file
loader.loadConfig(configFilePath='../server/src/settings.example.cfg')
# This initializes several values
print('file path:\t\t%s' % loader.configFilePath)
print('database string:\t%s' % loader.dbString)
print('socrata token:\t\t%s' % loader.token)
# We can fetch data from the City of Los Angeles Socrata data stores by specifying the year we are interested in. Data can be fetched either in small increments, by paging over the dataset in multiple queries, or as a full-year chunk in a single request. The recommended page size for Socrata is 1000 entries, but larger page sizes are allowed; speed depends on both the page size and the query size.
#
# __NOTE:__ For unknown reasons, 2015 data struggles with timeouts during paging.
# Fetch partial dataset from Socrata
# (Need Socrata API key for significant number of queries)
loader.fetchSocrata(year=2019, querySize=1000, pageSize=1000)
loader.data.head()
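# The number of page requests implied by a query follows directly from the two sizes; a hypothetical helper (not part of `sqlIngest`) makes the relationship explicit:

```python
import math

def num_pages(query_size, page_size):
    """Number of page requests needed to cover query_size records."""
    return math.ceil(query_size / page_size)
```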
# Fetch full dataset from Socrata (slow)
loader.fetchSocrataFull(year=2015)
loader.data.head()
# Once we have imported the data, we need to perform a cleaning step in order to standardize it for input into the database. This removes columns that we are not interested in tracking and makes sure that data types are correctly formatted for the SQL import.
# Cleaning data for consistency before SQL import
loader.cleanData()
loader.data.head()
# After cleaning we can output the data as a CSV file if desired. Since the `loader.data` object is a pandas dataframe, we can also write it out using any of the associated dataframe methods like `.to_csv`.
# Write data out as CSV
loader.saveCsvFile('../../testFile.csv')
# Once the data has been cleaned, it is ready for import into the database implementation. By default, the `ingestData` method uses the `ingestMethod='replace'` parameter, which __overwrites the existing staging table in the database__. If you don't want this behavior, you can specify `ingestMethod='append'`, but be aware that this could lead to duplicate rows and associated errors if used incorrectly.
# Ingest data into database
loader.ingestData(ingestMethod='replace')
# We are also able to run the full process by using the `populateFullDatabase` method. Be aware that you must still run the initialization and config portions of the script before calling this method. The `yearRange` parameter expects a Python `range` object. Keep in mind that `range` excludes its endpoint, so you must add 1 to the last year you want ingested.
# Run full ingestion pipeline
loader.populateFullDatabase(yearRange=range(2015,2016))
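# A quick illustration of the exclusive endpoint:

```python
years = list(range(2015, 2016))   # covers only 2015
# to ingest 2015 through 2020 inclusive, pass range(2015, 2021)
later = list(range(2015, 2021))
```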
# `sqlIngest` can also run partial updates using a paging request system. Each page of records is checked against the database: if a record with the same `srnumber` is already present, the more recently queried record is inserted.
loader.fetchSocrata(year=2019, querySize=2000, pageSize=2000)
loader.cleanData()
loader.updateDatabase()
| Documentation/sqlIngest_documentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# %matplotlib notebook
# -
import gmag.arrays.carisma as carisma
import gmag.arrays.image as image
import gmag.arrays.themis as themis
import gmag.arrays.canopus as canopus
from gmag import utils
import pandas as pd
import matplotlib.pyplot as plt
sdate = '2011-01-01'
ndays = 1
#load data
df_c=carisma.load('GILL',sdate,ndays=ndays,dl=True)
df_i=image.load('KIL',sdate,ndays=ndays,dl=True)
df_t=themis.load('GILL',sdate,ndays=ndays,dl=True)
df_i.tail(n=50)
r = df_c.join(df_i, how="outer")
r2 = r.join(df_t, how="outer")
r2.tail(50)
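# The outer joins above align the stations on the union of their time indexes, leaving NaN wherever a station has no sample. A toy example with hypothetical column names (not the real GMAG columns):

```python
import pandas as pd

a = pd.DataFrame({"GILL_X": [1.0, 2.0]},
                 index=pd.to_datetime(["2011-01-01 00:00", "2011-01-01 00:01"]))
b = pd.DataFrame({"KIL_X": [3.0]},
                 index=pd.to_datetime(["2011-01-01 00:01"]))
r = a.join(b, how="outer")   # union of both time indexes; missing samples -> NaN
```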
#load data
df_c, meta=canopus.load('GILL','1995-06-23',ndays=1)
meta
| notebooks/GMAG example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + language="javascript"
# // Disables truncation of output window
# IPython.OutputArea.prototype._should_scroll = function(lines) {
# return false;
# }
# -
# Config
output_dir = "trials1"
trial_type = "DEFAULT"
analysis_type = "temporal"
num_trials = 100
options = "-output_dir %s" % output_dir
options += " -type %s" % trial_type
options += " -analysis_type %s" % analysis_type
options += " -num_trials %d" % num_trials
options += " -summarize_only 1" # enable to collate results of completed trials (sets of multiple simulations)
# options += " -analyze_results_only 1" # enable to collate results of completed simulations
from anamod.simulation import run_trials
outputs = run_trials.main(options)
# +
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
from anamod.constants import FDR, POWER, TEMPORAL_FDR, TEMPORAL_POWER, AVERAGE_WINDOW_FDR, AVERAGE_WINDOW_POWER
from anamod.constants import TEMPORAL, WINDOW_OVERLAP
GROUPS = {"Overall Feature Importance Detection": (FDR, POWER),
"Temporal Feature Importance Detection": (TEMPORAL_FDR, TEMPORAL_POWER),
"Average Window Detection": (AVERAGE_WINDOW_FDR, AVERAGE_WINDOW_POWER)}
def visualize(data):
"""Visualize outputs"""
if analysis_type == TEMPORAL:
# Window overlap histogram
fig = go.Figure()
for param, values in data[WINDOW_OVERLAP].items():
fig.add_trace(go.Histogram(x=values, name=param))
fig.update_traces(histnorm="probability", xbins=dict(start=0.0, end=1.0), opacity=0.6)
fig.update_layout(title={"text": "Histogram of Average Window Overlap", "xanchor": "center", "x": 0.5},
xaxis_title="Average Window Overlap", yaxis_title="Probability", template="none",
legend_title=trial_type)
fig.show()
for name, group in GROUPS.items():
fig = go.Figure()
for cat in group:
x, y = ([], [])
for param, values in data[cat].items():
y.extend(values)
x.extend(["n = %s" % param] * len(values))
fig.add_trace(go.Violin(x=x, y=y,
legendgroup=cat, scalegroup=cat, name=cat))
fig.update_traces(box_visible=True, meanline_visible=True, opacity=0.6, points="all")
fig.update_layout(title={"text": name, "xanchor": "center", "x": 0.5},
xaxis_title=trial_type, yaxis_title="Value",
violinmode="group", template="none")
fig.show()
visualize(outputs)
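# The exact overlap statistic is computed inside `anamod`; as an illustration of a [0, 1]-bounded overlap like the one plotted above, here is a Jaccard-style overlap of two inclusive index windows (an assumed definition, not necessarily anamod's):

```python
def window_overlap(a, b):
    """Jaccard overlap of two inclusive index windows given as (start, end)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union
```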
| notebooks/visualize.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# import libraries
import os
import pandas as pd
import json
import gzip
import fasttext
import matplotlib.pyplot as plt
import seaborn as sns
import re
# Train set
path = '../../src/data/schemafiltereddata/TrainTestTables/Small/Train/'
#path = '../../src/data/LocalBusiness/Splitting_ManualCheck/Train_Validation_Test/validation tables/'
#path = '../../src/data/product/train_test_split/output_unfiltered_tables/large/after_manual_checking/val/'
files = [file for file in os.listdir(path) if file.endswith('.json.gz')]
#files = [file for file in os.listdir(path) if file.endswith('.csv')]
files[0]
# +
# if this format is found -> PT number m number s ->
# -
df = pd.read_json(os.path.join(path, '{}'.format(files[1])), compression='gzip', lines=True)
l = [int(s) for s in re.findall(r'\d+', 'PT3M58S')]
l
'-'.join(str(n) for n in l)  # the list elements are ints; join needs strings
df
df = pd.read_json(os.path.join(path, '{}'.format(files[3])), compression='gzip', lines=True)
df
df.duration.apply(lambda x:[int(s) for s in x if s.isdigit()])
df.duration.apply(lambda x: re.sub("PT\d+M\d+S", "", str(x)).lower())
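# Rather than stripping the `PT...M...S` strings, the ISO-8601 durations can be normalized to seconds; a sketch (pattern inferred from the samples above):

```python
import re

def pt_to_seconds(s):
    """Convert an ISO-8601 duration like 'PT3M58S' to total seconds."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", str(s))
    if not m:
        return None
    h, mi, sec = (int(g) if g else 0 for g in m.groups())
    return h * 3600 + mi * 60 + sec
```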
# +
#df = pd.read_json(os.path.join(path, '{}'.format(files[0])), compression='gzip', lines=True)
# +
#df.drop('row_id', axis=1, inplace=True)
# +
#pd.read_json(os.path.join(path, '{}'.format(files[25])), compression='gzip', lines=True)
# -
# clean out special characters
def col_cleaner(df):
for col in df.columns:
df[col] = df[col].apply(lambda x: re.sub("[^0-9a-zA-Z-@]+", " ", str(x)).lower())
try:
df[col] = pd.to_numeric(df[col])
except:
pass
return df
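# The per-cell regex can be exercised standalone; note that `-` and `@` survive while any run of other characters collapses to a single space:

```python
import re

def clean_cell(x):
    """Mirror of the per-cell cleaning applied inside col_cleaner above."""
    return re.sub("[^0-9a-zA-Z-@]+", " ", str(x)).lower()
```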
# +
#col_cleaner(pd.read_json(os.path.join(path, '{}'.format(files[35])), compression='gzip', lines=True))
# +
# check duplicates
#import pickle
#import gzip
#import io
for file in files:
data = col_cleaner(pd.read_json(os.path.join(path, '{}'.format(file)), compression='gzip', lines=True))
#data = col_cleaner(pd.read_csv(os.path.join(path, '{}'.format(file))).drop(['Unnamed: 0', 'row_id'], axis=1))
#data.drop('row_id', axis=1, inplace=True)
if len(data[data.duplicated()]) > 0:
print(file)
#data.to_csv('../../src/data/LocalBusiness/Splitting_ManualCheck/Train_Validation_Test/validation_tables_cleaned/' + file )
#data.to_json(os.path.join('../../src/data/product/train_test_split/output_unfiltered_tables/large/after_manual_checking/train_cleaned/', '{}').format(file), compression='gzip', orient='records', lines=True)
#data.to_json(os.path.join('../../src/data/product/train_test_split/output_unfiltered_tables/large/after_manual_checking/val_cleaned/', '{}').format(file), compression='gzip', orient='records', lines=True)
# +
# local business
# test tables
# LocalBusiness_lawresolution.com_September2020.csv
# LocalBusiness_harborcountry.org_September2020.csv
# LocalBusiness_gc-chamber.com_September2020.csv
# Hotel_hotelscombined.com_September2020.csv
# Restaurant_tajhotels.com_September2020.csv
# LocalBusiness_dailyexaminer.com.au_September2020.csv
# LocalBusiness_sunshinecoastdaily.com.au_September2020.csv
# LocalBusiness_mychamber.org_September2020.csv
# LocalBusiness_frasercoastchronicle.com.au_September2020.csv
# train tables
# LocalBusiness_gochambermaster.com_September2020.csv
# LocalBusiness_goyellow.de_September2020.csv
# LocalBusiness_attorneyhelp.org_September2020.csv
# LocalBusiness_101attorney.com_September2020.csv
# LocalBusiness_aussieweb.com.au_September2020.csv
# LocalBusiness_whitsundaytimes.com.au_September2020.csv
# LocalBusiness_chambersburg.org_September2020.csv
# LocalBusiness_frankfortchamber.com_September2020.csv
# LocalBusiness_golocal.de_September2020.csv
# LocalBusiness_northernstar.com.au_September2020.csv
# LocalBusiness_101dentist.com_September2020.csv
# LocalBusiness_moverreviews.com_September2020.csv
# validation
# LocalBusiness_champaigncounty.org_September2020.csv
# LocalBusiness_yoys.si_September2020.csv
# -
# Cleaned files
path = '../../src/data/schemafiltereddata/cleaned_files/small/'
files = [file for file in os.listdir(path) if file.endswith('.json.gz')]
pd.read_json(os.path.join(path, '{}'.format(files[0])), compression='gzip', lines=True)
# +
# aggregate rating
# name
# offers
# description
#
# +
#pd.read_json('../../src/data/schemafiltereddata/TrainTestTables/Small/Train/Product_completenutrition.com_September2020.json.gz', compression='gzip', lines=True)
# +
#pd.read_json('../../src/data/schemafiltereddata/TrainTestTables/Small/Train/Product_trussgenius.com_September2020.json.gz', compression='gzip', lines=True)
# +
#pd.read_json('../../src/data/schemafiltereddata/TrainTestTables/Small/Train/MusicRecording_bobskon.com_September2020.json.gz', compression='gzip', lines=True)
# +
#pd.read_json('../../src/data/schemafiltereddata/TrainTestTables/Small/Train/MusicRecording_shadcore.com_September2020.json.gz', compression='gzip', lines=True)
# +
#pd.read_json('../../src/data/schemafiltereddata/TrainTestTables/Small/Train/Event_oscodachamber.com_September2020.json.gz', compression='gzip', lines=True)
# +
#pd.read_json('../../src/data/schemafiltereddata/TrainTestTables/Small/Train/CreativeWork_sphosp.com_September2020.json.gz', compression='gzip', lines=True)
# -
# Tabbie cleaner
#path = '../../src/data/schemafiltereddata/TrainTestTables/Test/'
#path = '../../src/data/schemafiltereddata/TrainTestTables/Medium/Train'
path = '../../src/data/schemafiltereddata/TrainTestTables/Large/Train'
#path = '../../src/data/LocalBusiness/Splitting_ManualCheck/Train_Validation_Test/validation tables/'
#path = '../../src/data/product/train_test_split/output_unfiltered_tables/large/after_manual_checking/val/'
files = [file for file in os.listdir(path) if file.endswith('.json.gz')]
#files = [file for file in os.listdir(path) if file.endswith('.csv')]
len(files)
# test
for file in files:
data = col_cleaner(pd.read_json(os.path.join(path, '{}'.format(file)), compression='gzip', lines=True))
if data.shape[1] > 25:
print(data.shape)
print(file)
# medium
for file in files:
data = col_cleaner(pd.read_json(os.path.join(path, '{}'.format(file)), compression='gzip', lines=True))
if data.shape[1] > 25:
print(data.shape)
print(file)
# large
for file in files:
data = col_cleaner(pd.read_json(os.path.join(path, '{}'.format(file)), compression='gzip', lines=True))
if data.shape[1] > 25:
print(data.shape)
print(file)
pd.read_json(os.path.join(path, '{}'.format('Product_phillgrovereviews.com_September2020.json.gz')), compression='gzip', lines=True)
# +
# delete the additional columns - if I understand Luisa correctly, this is not needed
# split tables into multiple parts
# update the csv file with the numbers of the target columns
| notebooks/Schema/Preprocessing/Cleaner_EW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/open-mmlab/mmsegmentation/blob/master/demo/MMSegmentation_Tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="FVmnaxFJvsb8"
# # MMSegmentation Tutorial
# Welcome to MMSegmentation!
#
# In this tutorial, we demo
# * How to do inference with MMSeg trained weight
# * How to train on your own dataset and visualize the results.
# + [markdown] id="QS8YHrEhbpas"
# ## Install MMSegmentation
# This step may take several minutes.
#
# We use PyTorch 1.6 and CUDA 10.1 for this tutorial. You may install other versions by changing the version number in the pip install command.
# + colab={"base_uri": "https://localhost:8080/"} id="UWyLrLYaNEaL" outputId="32a47fe3-f10d-47a1-f6b9-b7c235abdab1"
# Check nvcc version
# !nvcc -V
# Check GCC version
# !gcc --version
# + colab={"base_uri": "https://localhost:8080/"} id="Ki3WUBjKbutg" outputId="14bd14b0-4d8c-4fa9-e3f9-da35c0efc0d5"
# Install PyTorch
# !conda install pytorch=1.6.0 torchvision cudatoolkit=10.1 -c pytorch
# Install MMCV
# !pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6/index.html
# + colab={"base_uri": "https://localhost:8080/"} id="nR-hHRvbNJJZ" outputId="10c3b131-d4db-458c-fc10-b94b1c6ed546"
# !rm -rf mmsegmentation
# !git clone https://github.com/open-mmlab/mmsegmentation.git
# %cd mmsegmentation
# !pip install -e .
# + colab={"base_uri": "https://localhost:8080/"} id="mAE_h7XhPT7d" outputId="83bf0f8e-fc69-40b1-f9fe-0025724a217c"
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMSegmentation installation
import mmseg
print(mmseg.__version__)
# + [markdown] id="eUcuC3dUv32I"
# ## Run Inference with MMSeg trained weight
# + colab={"base_uri": "https://localhost:8080/"} id="2hd41IGaiNet" outputId="b7b2aafc-edf2-43e4-ea43-0b5dd0aa4b4a"
# !mkdir checkpoints
# !wget https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth -P checkpoints
# + id="H8Fxg8i-wHJE"
from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot
from mmseg.core.evaluation import get_palette
# + id="umk8sJ0Xuace"
config_file = '../configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py'
checkpoint_file = '../checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'
# + colab={"base_uri": "https://localhost:8080/"} id="nWlQFuTgudxu" outputId="5e45f4f6-5bcf-4d04-bb9c-0428ee84a576"
# build the model from a config file and a checkpoint file
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
# + id="izFv6pSRujk9"
# test a single image
img = './demo.png'
result = inference_segmentor(model, img)
# + colab={"base_uri": "https://localhost:8080/", "height": 504} id="bDcs9udgunQK" outputId="7c55f713-4085-47fd-fa06-720a321d0795"
# show the results
show_result_pyplot(model, img, result, get_palette('cityscapes'))
# + [markdown] id="Ta51clKX4cwM"
# ## Train a semantic segmentation model on a new dataset
#
# To train on a customized dataset, the following steps are necessary.
# 1. Add a new dataset class.
# 2. Create a config file accordingly.
# 3. Perform training and evaluation.
# + [markdown] id="AcZg6x_K5Zs3"
# ### Add a new dataset
#
# Datasets in MMSegmentation require image and semantic segmentation maps to be placed in folders with the same prefix. To support a new dataset, we may need to modify the original file structure.
#
# In this tutorial, we give an example of converting the dataset. You may refer to [docs](https://github.com/open-mmlab/mmsegmentation/docs/en/tutorials/new_dataset.md) for details about dataset reorganization.
#
# We use [Stanford Background Dataset](http://dags.stanford.edu/projects/scenedataset.html) as an example. The dataset contains 715 images chosen from existing public datasets [LabelMe](http://labelme.csail.mit.edu), [MSRC](http://research.microsoft.com/en-us/projects/objectclassrecognition), [PASCAL VOC](http://pascallin.ecs.soton.ac.uk/challenges/VOC) and [Geometric Context](http://www.cs.illinois.edu/homes/dhoiem/). Images from these datasets are mainly outdoor scenes, each approximately 320×240 pixels in size.
# In this tutorial, we use the region annotations as labels. There are 8 classes in total, i.e. sky, tree, road, grass, water, building, mountain, and foreground object.
# + colab={"base_uri": "https://localhost:8080/"} id="TFIt7MHq5Wls" outputId="74a126e4-c8a4-4d2f-a910-b58b71843a23"
# download and unzip
# !wget http://dags.stanford.edu/data/iccv09Data.tar.gz -O stanford_background.tar.gz
# !tar xf stanford_background.tar.gz
# + colab={"base_uri": "https://localhost:8080/", "height": 377} id="78LIci7F9WWI" outputId="c432ddac-5a50-47b1-daac-5a26b07afea2"
# Let's take a look at the dataset
import mmcv
import matplotlib.pyplot as plt
img = mmcv.imread('iccv09Data/images/6000124.jpg')
plt.figure(figsize=(8, 6))
plt.imshow(mmcv.bgr2rgb(img))
plt.show()
# + [markdown] id="L5mNQuc2GsVE"
# We need to convert the annotations into semantic segmentation maps stored as images.
# + id="WnGZfribFHCx"
import os.path as osp
import numpy as np
from PIL import Image
# convert dataset annotation to semantic segmentation map
data_root = 'iccv09Data'
img_dir = 'images'
ann_dir = 'labels'
# define classes and palette for better visualization
classes = ('sky', 'tree', 'road', 'grass', 'water', 'bldg', 'mntn', 'fg obj')
palette = [[128, 128, 128], [129, 127, 38], [120, 69, 125], [53, 125, 34],
[0, 11, 123], [118, 20, 12], [122, 81, 25], [241, 134, 51]]
for file in mmcv.scandir(osp.join(data_root, ann_dir), suffix='.regions.txt'):
seg_map = np.loadtxt(osp.join(data_root, ann_dir, file)).astype(np.uint8)
seg_img = Image.fromarray(seg_map).convert('P')
seg_img.putpalette(np.array(palette, dtype=np.uint8))
seg_img.save(osp.join(data_root, ann_dir, file.replace('.regions.txt',
'.png')))
# + colab={"base_uri": "https://localhost:8080/", "height": 377} id="5MCSS9ABfSks" outputId="92b9bafc-589e-48fc-c9e9-476f125d6522"
# Let's take a look at the segmentation map we got
import matplotlib.patches as mpatches
img = Image.open('iccv09Data/labels/6000124.png')
plt.figure(figsize=(8, 6))
im = plt.imshow(np.array(img.convert('RGB')))
# create a patch (proxy artist) for every color
patches = [mpatches.Patch(color=np.array(palette[i])/255.,
label=classes[i]) for i in range(8)]
# put those patched as legend-handles into the legend
plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.,
fontsize='large')
plt.show()
# + id="WbeLYCp2k5hl"
# split train/val set randomly
split_dir = 'splits'
mmcv.mkdir_or_exist(osp.join(data_root, split_dir))
filename_list = [osp.splitext(filename)[0] for filename in mmcv.scandir(
osp.join(data_root, ann_dir), suffix='.png')]
with open(osp.join(data_root, split_dir, 'train.txt'), 'w') as f:
# select first 4/5 as train set
train_length = int(len(filename_list)*4/5)
f.writelines(line + '\n' for line in filename_list[:train_length])
with open(osp.join(data_root, split_dir, 'val.txt'), 'w') as f:
    # select last 1/5 as val set
f.writelines(line + '\n' for line in filename_list[train_length:])
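# The 4/5 train : 1/5 val arithmetic above can be sanity-checked on its own. The `715` count comes from the dataset description earlier; the filenames below are placeholders:

```python
# Stand-alone check of the train/val split proportions used above.
# 715 is the image count from the dataset description; names are dummies.
filename_list = ["img_%04d" % i for i in range(715)]
train_length = int(len(filename_list) * 4 / 5)
train_split = filename_list[:train_length]
val_split = filename_list[train_length:]
print(len(train_split), len(val_split))  # 572 143
```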
# + [markdown] id="HchvmGYB_rrO"
# After downloading the data, we need to define the new dataset class `StandfordBackgroundDataset` and register it so it can be referenced from the config.
# + id="LbsWOw62_o-X"
from mmseg.datasets.builder import DATASETS
from mmseg.datasets.custom import CustomDataset
@DATASETS.register_module()
class StandfordBackgroundDataset(CustomDataset):
CLASSES = classes
PALETTE = palette
def __init__(self, split, **kwargs):
super().__init__(img_suffix='.jpg', seg_map_suffix='.png',
split=split, **kwargs)
assert osp.exists(self.img_dir) and self.split is not None
# + [markdown] id="yUVtmn3Iq3WA"
# ### Create a config file
# In the next step, we need to modify the config for training. To accelerate the process, we fine-tune the model from pre-trained weights.
# + id="Wwnj9tRzqX_A"
from mmcv import Config
cfg = Config.fromfile('../configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py')
# + [markdown] id="1y2oV5w97jQo"
# Since the given config is used to train PSPNet on the cityscapes dataset, we need to modify it accordingly for our new dataset.
# + colab={"base_uri": "https://localhost:8080/"} id="eyKnYC1Z7iCV" outputId="6195217b-187f-4675-994b-ba90d8bb3078"
from mmseg.apis import set_random_seed
# Since we use only one GPU, BN is used instead of SyncBN
cfg.norm_cfg = dict(type='BN', requires_grad=True)
cfg.model.backbone.norm_cfg = cfg.norm_cfg
cfg.model.decode_head.norm_cfg = cfg.norm_cfg
cfg.model.auxiliary_head.norm_cfg = cfg.norm_cfg
# modify num classes of the model in decode/auxiliary head
cfg.model.decode_head.num_classes = 8
cfg.model.auxiliary_head.num_classes = 8
# Modify dataset type and path
cfg.dataset_type = 'StandfordBackgroundDataset'
cfg.data_root = data_root
cfg.data.samples_per_gpu = 8
cfg.data.workers_per_gpu = 8
cfg.img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
cfg.crop_size = (256, 256)
cfg.train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=(320, 240), ratio_range=(0.5, 2.0)),
dict(type='RandomCrop', crop_size=cfg.crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='Normalize', **cfg.img_norm_cfg),
dict(type='Pad', size=cfg.crop_size, pad_val=0, seg_pad_val=255),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
cfg.test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(320, 240),
# img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **cfg.img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
cfg.data.train.type = cfg.dataset_type
cfg.data.train.data_root = cfg.data_root
cfg.data.train.img_dir = img_dir
cfg.data.train.ann_dir = ann_dir
cfg.data.train.pipeline = cfg.train_pipeline
cfg.data.train.split = 'splits/train.txt'
cfg.data.val.type = cfg.dataset_type
cfg.data.val.data_root = cfg.data_root
cfg.data.val.img_dir = img_dir
cfg.data.val.ann_dir = ann_dir
cfg.data.val.pipeline = cfg.test_pipeline
cfg.data.val.split = 'splits/val.txt'
cfg.data.test.type = cfg.dataset_type
cfg.data.test.data_root = cfg.data_root
cfg.data.test.img_dir = img_dir
cfg.data.test.ann_dir = ann_dir
cfg.data.test.pipeline = cfg.test_pipeline
cfg.data.test.split = 'splits/val.txt'
# Load the pre-trained PSPNet weights (trained on Cityscapes) to initialize the model
cfg.load_from = 'checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './work_dirs/tutorial'
cfg.runner.max_iters = 200
cfg.log_config.interval = 10
cfg.evaluation.interval = 200
cfg.checkpoint_config.interval = 200
# Set seed to facilitate reproducing the result
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# Let's have a look at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
# + [markdown] id="QWuH14LYF2gQ"
# ### Train and Evaluation
# + colab={"base_uri": "https://localhost:8080/"} id="jYKoSfdMF12B" outputId="422219ca-d7a5-4890-f09f-88c959942e64"
from mmseg.datasets import build_dataset
from mmseg.models import build_segmentor
from mmseg.apis import train_segmentor
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the segmentor
model = build_segmentor(
cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Add an attribute for visualization convenience
model.CLASSES = datasets[0].CLASSES
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_segmentor(model, datasets, cfg, distributed=False, validate=True,
meta=dict())
# + [markdown] id="DEkWOP-NMbc_"
# Inference with trained model
# + colab={"base_uri": "https://localhost:8080/", "height": 645} id="ekG__UfaH_OU" outputId="1437419c-869a-4902-df86-d4f6f8b2597a"
img = mmcv.imread('iccv09Data/images/6000124.jpg')
model.cfg = cfg
result = inference_segmentor(model, img)
plt.figure(figsize=(8, 6))
show_result_pyplot(model, img, result, palette)
# -
| demo/MMSegmentation_Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import/Load Data
# +
import tensorflow as tf
import numpy as np
# Load dataset
training_data = "iris_training.csv"
testing_data = "iris_test.csv"
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(filename=training_data,
features_dtype=np.float32,
target_dtype=np.int)
test_set = tf.contrib.learn.datasets.base.load_csv_with_header(filename=testing_data,
features_dtype=np.float32,
target_dtype=np.int)
# -
feature_name = "iris_features"
feature_columns = [tf.feature_column.numeric_column(feature_name, shape=[4])]
# ### Input Functions
def input_fn(data):
features = {feature_name: tf.constant(data.data)}
label = tf.constant(data.target)
return features, label
# + inputHidden=false outputHidden=false
train_input = lambda: input_fn(training_set)
eval_input = lambda: input_fn(test_set)
# -
# ### Training w/ Linear Classifier
classifier = tf.estimator.LinearClassifier(
feature_columns=feature_columns,
n_classes=3,
model_dir="tmp/iris")
# + inputHidden=false outputHidden=false
# define train and eval specs for tf.estimator.train_and_evaluate
train_spec = tf.estimator.TrainSpec(train_input,
max_steps=3000
)
eval_spec = tf.estimator.EvalSpec(eval_input,
name='mnist-eval'
)
# run training and evaluation
tf.estimator.train_and_evaluate(
classifier, train_spec, eval_spec)
# -
# ### Training w/ Deep Neural Network Estimator
nn_classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[8, 4],
activation_fn=tf.nn.relu,
dropout=0.1,
n_classes=3,
model_dir="tmp/irisnn")
# + inputHidden=false outputHidden=false
# define train and eval specs for tf.estimator.train_and_evaluate
train_spec = tf.estimator.TrainSpec(train_input,
max_steps=20000
)
eval_spec = tf.estimator.EvalSpec(eval_input,
name='mnist-eval'
)
# run training and evaluation
tf.estimator.train_and_evaluate(
nn_classifier, train_spec, eval_spec)
# -
# ### Serving function and exporter
# + inputHidden=false outputHidden=false
feature_spec = {feature_name:
tf.FixedLenFeature(shape=[4], dtype=np.float32)}
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
exporter = tf.estimator.LatestExporter('exporter',serving_fn)
eval_spec = tf.estimator.EvalSpec(eval_input,
name='mnist-eval',
exporters=[exporter]
)
# -
# ### Re-run and export model
# run training and evaluation
tf.estimator.train_and_evaluate(
nn_classifier, train_spec, eval_spec)
# ### Predict on New Samples
# +
new_samples = np.array(
[[6.4, 3.2, 4.5, 1.5],
[5.8, 3.1, 5.0, 1.7]], dtype=np.float32)
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={feature_name: new_samples},
num_epochs=1,
shuffle=False)
predictions = list(nn_classifier.predict(input_fn=predict_input_fn))
predicted_classes = [int(p['classes']) for p in predictions]
print("New Samples, Class Predictions: {}\n".format(predicted_classes))
# -
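# The structure of `predictions` can be illustrated with a hypothetical stand-in (the real TF1 `DNNClassifier.predict` yields dicts with more fields and byte-string class labels; the values below are made up solely to show the extraction step):

```python
# Hypothetical shape of the predictions list; each element is a dict whose
# 'classes' entry holds the predicted class label.
predictions = [
    {"classes": ["1"], "probabilities": [0.1, 0.8, 0.1]},
    {"classes": ["2"], "probabilities": [0.05, 0.15, 0.8]},
]
predicted_classes = [int(p["classes"][0]) for p in predictions]
print("New Samples, Class Predictions: {}".format(predicted_classes))  # [1, 2]
```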
| 3. Distributed Deep Learning with Google ML Engine/2. dive into estimators/tensorflow_estimators.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,py
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:core_acc] *
# language: python
# name: conda-env-core_acc-py
# ---
# # Examine data
#
# This notebook is examining the expression data that will be used in the network analysis
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
import os
import pandas as pd
import plotnine as pn
import seaborn as sns
import matplotlib.pyplot as plt
import umap
import random
import numpy as np
from scripts import paths
# Load expression data
pao1_compendium_filename = paths.PAO1_COMPENDIUM
pa14_compendium_filename = paths.PA14_COMPENDIUM
pao1_compendium = pd.read_csv(pao1_compendium_filename, sep="\t", header=0, index_col=0)
pa14_compendium = pd.read_csv(pa14_compendium_filename, sep="\t", header=0, index_col=0)
# ## Visualize distribution of expression data
# Random PAO1 genes
random_pao1_ids = random.sample(list(pao1_compendium.columns), 4)
sns.pairplot(pao1_compendium[random_pao1_ids])
plt.suptitle("Random set of genes (PAO1)", y=1.05)
# Try removing outlier samples
pao1_compendium_tmp = pao1_compendium[pao1_compendium["PA1337"] < 200]
# Co-operonic PAO1 genes
# pao1_co_operonic_ids = ["PA0001", "PA0002", "PA0003", "PA0004"]
# pao1_co_operonic_ids = ["PA0054","PA0055", "PA0056"]
pao1_co_operonic_ids = ["PA1335", "PA1336", "PA1337"]
sns.pairplot(pao1_compendium_tmp[pao1_co_operonic_ids])
plt.suptitle("Co-operonic set of genes (PAO1)", y=1.05)
# Housekeeping PAO1 gene that we would expect to have consistently high expression
# across samples, and which therefore shouldn't have that peak at 0
sns.displot(pao1_compendium["PA1805"])
# Random PA14 gene
random_pa14_ids = random.sample(list(pa14_compendium.columns), 4)
sns.pairplot(pa14_compendium[random_pa14_ids])
plt.suptitle("Random set of genes (PA14)", y=1.05)
# **Observations:**
# These pair plots tell us what the distribution of the genes look like in our compendia. Overall it looks like genes tend to have a heavy right tail and some genes have a spike at 0 while others don't. As expected, our example housekeeping gene doesn't have this peak since this is a gene that tends to be highly active across all samples (i.e. there is no 0 spike).
#
# These pair plots also give us a rough sense for how correlated genes are. We would expect co-operonic genes to be more highly correlated compared to a random set of genes, which we do see. Some correlations appear not as strong due to the differences in scales between genes - something to consider when we are looking at correlations between genes.
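# A quick check that Pearson correlation itself is scale-invariant - the apparent weakness in a pairplot comes from the plotting scales, not the statistic (synthetic values, made up for illustration):

```python
import numpy as np

# Two perfectly co-varying "genes" on very different scales still have
# Pearson correlation 1; only the pairplot axes make them look different.
rng = np.random.default_rng(0)
gene_a = rng.gamma(2.0, 50.0, size=200)   # heavy right tail, large scale
gene_b = 0.01 * gene_a                    # identical pattern, 100x smaller
print(round(float(np.corrcoef(gene_a, gene_b)[0, 1]), 6))  # 1.0
```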
| 2_correlation_analysis/0_examine_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this notebook we try and implement the error correction.
# +
import cv2 as cv
import numpy as np
from flytracker.components import Frame, BlobDetector
from flytracker.utils import run_localization, FourArenasQRCodeMask, run_tracker
import matplotlib.pyplot as plt
# %load_ext autoreload
# %autoreload 2
# -
# # Getting a frame with an issue
# %%time
coordinates = run_localization(200)
n_detected_flies = np.array([fly_coordinates.shape[0] for fly_coordinates in coordinates])
plt.plot(n_detected_flies)
np.where(n_detected_flies != 40)[0]
# So frame 106.
# +
# For now we have to cycle through the first frames
path = '/Users/gert-janboth/Documents/flyTracker/data/movies/4arenas_QR.h264'
mask = FourArenasQRCodeMask().mask
capture = cv.VideoCapture(path)
localise_flies = BlobDetector()
for frame_idx in np.arange(200):
frame = Frame(capture.read()[1], mask)
location = localise_flies(frame)
if location.shape[0] != 40:
print(frame_idx)
break
# -
location.shape
plt.figure(figsize=(10, 10))
plt.imshow(frame(), cmap='gray')
plt.scatter(location[:, 0], location[:, 1], s=30, c='red', marker='x')
# So the issue is in the upper right arena.
# # Error correction
# The idea behind the error correction is that we run a contour finder and use k-means to subdivide the contours until all flies are recovered.
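# The k-means step can be sketched in isolation with a minimal Lloyd iteration on synthetic "fly pixel" blobs (pure NumPy rather than OpenCV/scikit-learn; all coordinates below are made up):

```python
import numpy as np

# Two synthetic blobs stand in for the pixels of two nearby flies.
rng = np.random.default_rng(0)
true_centres = np.array([[10.0, 10.0], [50.0, 40.0]])
pixels = np.vstack([c + rng.normal(0, 0.5, size=(30, 2)) for c in true_centres])

centres = pixels[[0, -1]].copy()  # crude init: one pixel from each blob
for _ in range(10):
    # assign every pixel to its nearest centre, then recompute the means
    dists = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centres = np.stack([pixels[labels == k].mean(axis=0) for k in range(2)])
print(np.round(centres).astype(int))  # close to [[10 10], [50 40]]
```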
# +
# We first need to threshold and apply some dilation
thresholded_frame = cv.threshold(frame(), 120, 255, cv.THRESH_BINARY_INV)[1]
kernel = cv.getStructuringElement(cv.MORPH_CROSS, (3, 3))
processed_frame = cv.dilate(thresholded_frame, kernel)
processed_frame = cv.medianBlur(processed_frame, 3)
# -
plt.figure(figsize=(15, 15))
plt.imshow(processed_frame)
# Now what if we just kmeans this image?
thresholded_frame
pixels = cv.findNonZero(thresholded_frame).astype(np.float32)
n = 40
kmeans_criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 10, 1.0)
flags = cv.KMEANS_RANDOM_CENTERS
ret, label, center = cv.kmeans(pixels, n, None, kmeans_criteria, 10, flags)
plt.scatter(center[:, 1], center[:, 0], s=30, c='red', marker='x')
plt.figure(figsize=(10, 10))
plt.imshow(frame(), cmap='gray')
#plt.scatter(center[:, 0], center[:, 1], s=30, c='red', marker='x')
from sklearn.cluster import KMeans
fly_locs = np.stack(np.where(thresholded_frame != 0)).T[:, ::-1]  # reverse so columns are (x, y)
# +
plt.figure(figsize=(10, 10))
plt.imshow(frame(), cmap='gray')
plt.scatter(fly_locs[:, 0], fly_locs[:, 1])
# +
# %%time
estim = KMeans(n_clusters=40)
estim.fit(fly_locs)
plt.figure(figsize=(10, 10))
plt.imshow(frame(), cmap='gray')
plt.scatter(estim.cluster_centers_[:, 0], estim.cluster_centers_[:, 1], s=30, c='red', marker='x')
# -
estim.inertia_
# +
from sklearn.cluster import KMeans
class ErrorCorrect:
def __init__(self, n_flies):
self.n_flies = n_flies
self.estimator = KMeans(n_clusters=self.n_flies)
def __call__(self, image):
# We first threshold
thresholded_frame = cv.threshold(image(), 120, 255, cv.THRESH_BINARY_INV)[1]
# Get the location of the non-zero pixels
        fly_pixels = np.stack(np.where(thresholded_frame != 0)).T[:, ::-1]  # reverse so columns are (x, y)
        # Fit on this frame's pixels and get cluster centres
        self.estimator.fit(fly_pixels)
        locations = self.estimator.cluster_centers_
return locations
# -
# # OPEN CV k means?
# %%time
pixels = cv.findNonZero(thresholded_frame).astype(np.float32).squeeze()
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 1000000000, 1e-8)
k = 40
dist_cv, labels, centers = cv.kmeans(pixels, k, None, criteria, 200, cv.KMEANS_RANDOM_CENTERS)
dist_cv
plt.figure(figsize=(10, 10))
plt.imshow(frame(), cmap='gray')
plt.scatter(centers[:, 0], centers[:, 1], s=30, c='red', marker='x')
# # Testing
# %%time
dataset = run_tracker(1000, n_flies=40)
for fly in np.arange(40):
plt.scatter(dataset[dataset[:, 1] == fly][:, 2], dataset[dataset[:, 1] == fly][:, 3])
plt.figure(figsize=(15, 15))
for fly in np.arange(40):
plt.plot(dataset[dataset[:, 1] == fly][90:120, 2], dataset[dataset[:, 1] == fly][90:120, 3])
plt.imshow(frame(), cmap='gray')
plt.ylim([500, 0])
plt.xlim([700, 1100])
| dev/Implementing/error_correction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Alpha Vertex: PreCog 100/500 Securities
# In this notebook, we'll take a look at the Alpha Vertex *Top 100/500 Securities PreCog* dataset, available on [Quantopian](https://www.quantopian.com/store). This dataset spans 2010 through the current day. PreCog uses machine learning models to forecast stock returns at multiple horizons.
#
# The *100* dataset contains 5 day predicted log returns for the top 100 securities by market cap. The *500* dataset contains 5 day predicted log returns for the top 500 securities by market cap.
#
# Update time: Daily data will be updated close to midnight for the previous day. So on the 27th, you will have data with an asof_date of the 26th.
#
# ## Notebook Contents
# There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
#
# - <a href="#interactive"><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
# - <a href="#pipeline"><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available in both the Research & Backtesting environments. Recommended for factor development and moving back & forth between research/backtesting.
#
# ### Free samples and limits
# The result of any expression is limited to 10,000 rows to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
#
# There is a *free* version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.
#
# To access the most up-to-date values for this data set for trading a live algorithm (as with other partner sets), you need to purchase access to the full set.
#
# <a id="interactive"></a>
#
# # Interactive Overview
# ### Accessing the data with Blaze and Interactive on Research
# Partner datasets are available on Quantopian Research through an API service known as [Blaze](http://blaze.pydata.org). Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
#
# Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
#
# It is common to use Blaze to perform a reduction expression on your dataset so that you don't have to pull the whole dataset into memory. You can convert the result of a blaze expression to a Pandas data structure (e.g. a [DataFrame](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html)) and perform further computation, manipulation, and visualization on that structure.
#
# Helpful links:
#
# * [Query building for Blaze](http://blaze.readthedocs.io/en/latest/queries.html)
# * [Pandas-to-Blaze dictionary](http://blaze.readthedocs.io/en/latest/rosetta-pandas.html)
# * [SQL-to-Blaze dictionary](http://blaze.readthedocs.io/en/latest/rosetta-sql.html).
#
# Once you have a Blaze expression that reduces the dataset to less than 10,000 rows, you can convert it to a Pandas DataFrames using:
#
# > `from odo import odo`
# > `odo(expr, pandas.DataFrame)`
#
# #### To see how to create a factor using this data, search for the `Pipeline Overview` section of this notebook or head straight to <a href="#pipeline">Pipeline Overview</a>.
# +
# import the free sample of the dataset
from quantopian.interactive.data.alpha_vertex import (
# Top 100 Securities
precog_top_100 as dataset_100,
# Top 500 Securities
precog_top_500 as dataset_500
)
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
# -
# Let's use Blaze to explore the data a bit. What is the most recent asof_date available?
dataset_500.asof_date.max()
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset_500.count()
# Let's see what the data looks like. We'll grab the first few rows.
dataset_500.peek()
# Let's go over the columns:
#
# - **symbol**: The ticker symbol of the company.
# - **sid**: The security ID (see [here](https://www.quantopian.com/tutorials/getting-started#lesson3) for explanation).
# - **name**: The name of the company.
# - **asof_date**: The date to which this data applies/actual report date
# - **timestamp**: This is our timestamp on when we registered the data.
# - **predicted_five_day_log_return**: The predicted log return for the security over the next 5 days
#
# Fields like `timestamp` and `sid` are standardized across all Quantopian Store Datasets, so the datasets are easy to combine. The `sid` field is also standardized across all Quantopian equity databases.
# Now that we understand the data a bit better, let's get the `predicted_five_day_log_return` data for Apple (sid 24) and visualize it with a chart.
# +
# We start by defining a Blaze expression that gets the rows where symbol == AAPL.
aapl_data = dataset_500[dataset_500.symbol == 'AAPL']
# We then convert the Blaze expression to a pandas DataFrame, which is populated
# with the data resulting from our Blaze expression.
aapl_df = odo(aapl_data, pd.DataFrame)
# Display the first few rows of the DataFrame.
aapl_df.head()
# -
# For plotting purposes, set the index of the DataFrame to the asof_date.
aapl_df.set_index('asof_date', inplace=True)
# Plot the predicted 5-day log return data.
aapl_df['predicted_five_day_log_return'].plot()
# <a id="pipeline"></a>
#
# # Pipeline Overview
# [Pipeline](https://www.quantopian.com/tutorials/pipeline) is a tool that can be used to define computations called factors, filters, or classifiers. These computations can be used in an algorithm to dynamically select securities, compute portfolio weights, compute risk factors, and more.
#
# In research, pipeline is mostly used to explore these computations.
#
# The only method for accessing partner data within an algorithm on Quantopian is in a pipeline. Before moving to [the IDE](https://www.quantopian.com/algorithms) to work on an algorithm, it's a good idea to define your pipeline in research, so that you can iterate on an idea and analyze the output.
#
# To start, we need to import the following:
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
# To access partner data in pipeline, you must import the dataset. If the data is in a format that's difficult to use (e.g. event-based datasets), the data is sometimes available via built-in factors or filters. There are no such built-ins for the Alpha Vertex dataset as the prediction data is in a nice, usable format.
#
# Let's import the pipeline version of the Alpha Vertex dataset:
# These imports can be found in the store panel for each dataset
# (https://www.quantopian.com/data). Note that not all store datasets
# can be used in pipeline yet.
from quantopian.pipeline.data.alpha_vertex import (
# Top 100 Securities
precog_top_100 as dataset_100,
    # Top 500 Securities
precog_top_500 as dataset_500
)
# Now that we've imported the data, let's take a look at which fields are available for each dataset, along with their datatypes.
# +
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
_print_fields(dataset_500)
print "---------------------------------------------------\n"
# -
# Now that we know what fields we have access to, let's define a [pipeline](https://www.quantopian.com/tutorials/pipeline) that gets the latest predicted five day log return for the PreCog 500 dataset for stocks in the [Q1500US](https://www.quantopian.com/tutorials/pipeline#lesson11).
# Import the Q1500US pipeline filter.
from quantopian.pipeline.filters.morningstar import Q1500US
# +
# We only want to get the signal for stocks in the Q1500US that have a non-null
# latest predicted_five_day_log_return.
universe = (Q1500US() & dataset_500.predicted_five_day_log_return.latest.notnull())
# Define our pipeline to return the latest prediction for the stocks in `universe`.
pipe = Pipeline(columns= {
'prediction': dataset_500.predicted_five_day_log_return.latest,
},
screen=universe)
# Run our pipeline (this gets the data).
pipe_output = run_pipeline(pipe, start_date='2014-01-01', end_date='2017-01-01')
# -
# The result is a pandas DataFrame with a MultiIndex.
pipe_output.head()
# Let's see how many securities we have a prediction for each day.
pipe_output.groupby(pipe_output.index.get_level_values(0)).count().plot()
# The set of ~500 stocks in the PreCog top 500 is derived from market cap at the beginning of each year, which we can see above!
# Now, you can try writing an algorithm using this pipeline. The [final lesson in the Pipeline Tutorial](https://www.quantopian.com/tutorials/pipeline#lesson12) gives an example of moving from research to the IDE.
#
# There is also an example algorithm using the PreCog 500 that can be found [here](https://www.quantopian.com/posts/alpha-vertex-precog-dataset).
| docs/memo/notebooks/data/alpha_vertex.precog_top_500/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## The rule configurations and test data in ossec-hids can be used for log analysis
from os import listdir
from os.path import isfile, join
import xml.etree.ElementTree as ET
import xmltodict, json
import pandas as pd
import collections
import re
from pandas.io.json import json_normalize
pd.options.display.max_colwidth = 1000
filepath = r"F:\open-source-check\ossec-testing\tests"
# +
def parseXMLToPd(file):
try:
with open(file, "r", encoding='utf-8') as fd:
#https://github.com/martinblech/xmltodict/issues/2
data = xmltodict.parse('<root>{0}</root>'.format(fd.read()))['root']
#get all the vars
df_vars = pd.DataFrame()
if "var" in data:
try:
df_vars = pd.DataFrame(json_normalize(data["var"]))
except Exception as e:
print(e)
raise e
#replace all the vars in jsonStr
jsonData = json.dumps(data["group"], indent=4)
for index, row in df_vars.iterrows():
key, value = row["@name"], row["#text"]
#for Unrecognized escape sequence
value = re.sub(r"([\\\/])", r"\\\1", value)
jsonData = jsonData.replace('$'+key, value)
#print(jsonData)
return pd.read_json(jsonData)
except Exception as e:
print(file)
raise e
return None
#print(parseXMLToPd(join(filepath, "web_appsec_rules.xml")).head())
#print(parseXMLToPd(join(filepath, "syslog_rules.xml")).head())
#print(parseXMLToPd(join(filepath, "mcafee_av_rules.xml")).head())
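# The escaping step inside `parseXMLToPd` can be illustrated on its own: backslashes and slashes in a var's value must be escaped before the plain string replace, otherwise the spliced JSON becomes unparseable (the `$BAD_WORDS` variable below is a made-up example):

```python
import json
import re

# A small JSON document containing a $-style variable reference.
jsonData = json.dumps({"rule": {"regex": "$BAD_WORDS detected"}}, indent=4)

# Escape backslashes and slashes in the value before splicing it into the
# JSON text, mirroring the re.sub in parseXMLToPd above.
value = r"GET /index\.php"
value = re.sub(r"([\\\/])", r"\\\1", value)
jsonData = jsonData.replace("$BAD_WORDS", value)

print(json.loads(jsonData)["rule"]["regex"])  # GET /index\.php detected
```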
# +
filelist = [f for f in listdir(filepath) if isfile(join(filepath, f))]
print(filelist)
regex=re.compile('^(log \d+ pass = )')
df = pd.DataFrame()
for file in filelist:
datalist = []
try:
with open(join(filepath, file), "r", encoding="utf-8") as fd:
for line in fd:
if re.match(regex, line):
datalist.append(re.sub(regex, '', line))
if len(datalist) > 0:
df_tmp = pd.DataFrame(datalist, columns=["body"])
df_tmp["group_name"] = re.sub(r"(\.ini)$", "", file)
df = df.append(df_tmp, ignore_index = True)
except Exception as e:
print(e)
print(df.shape)
df.to_csv("ossec_testing.csv", sep=',', encoding='utf-8', index=False)
# -
print(df)
| load_data/load_osse_testing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise List
# Conditional structures (`if-else-elif`)
#
# ## Question 1:
# Write a program that asks for two numbers and prints the larger one.
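# One possible solution sketch for Question 1, written as a function instead of reading from `input()` so the comparison logic can be checked directly:

```python
def larger(a, b):
    """Return the larger of two numbers (either one if they are equal)."""
    if a > b:
        return a
    else:
        return b

print(larger(3, 7))    # 7
print(larger(10, -2))  # 10
```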
# ## Question 2
# Write a program that asks for a value and prints whether it is positive or negative.
# ## Question 3
# Write a program that checks whether a typed letter is "F" or "M". Depending on the letter, print:
# - "F" - Female,
# - "M" - Male,
# - otherwise, 'Invalid Sex'.
# ## Question 4
# Write a program that asks for a 4-digit year and determines whether or not it is a leap year.
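# A sketch of the standard Gregorian leap-year rule for Question 4:

```python
def is_leap(year):
    """Leap year: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(2000), is_leap(1900), is_leap(2024))  # True False True
```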
# ## Questao 5
# Faça um Programa que verifique se uma letra digitada é vogal ou consoante
# ## Questao 6
# Faça um Programa que leia três números e mostre-os em ordem decrescente.
# ## Questao 7
# Faça um programa para a leitura de duas notas parciais de um aluno. O programa deve calcular a média alcançada por aluno e apresentar:
# - A mensagem "Aprovado", se a média alcançada for maior ou igual a sete;
# - A mensagem "Reprovado", se a média for menor do que sete;
# - A mensagem "Aprovado com Distinção", se a média for igual a dez.
# ## Questao 8
# Faça um Programa que leia três números e mostre o maior deles.
# ## Questao 9
# Faça um Programa que leia três números e mostre o maior e o menor deles.
# ## Questao 10
# Faça um programa que pergunte o preço de três produtos e informe qual produto você deve comprar, sabendo que a decisão é sempre pelo mais barato
# ## Questao 11
# Faça um Programa que pergunte em que turno você estuda. Peça para digitar
# - M-matutino
# - V-Vespertino
# - N- Noturno.
#
# Imprima a mensagem "Bom Dia!", "Boa Tarde!" ou "Boa Noite!" ou "Valor
# Inválido!", conforme o caso.
# ## Questao 12
# As Organizações Tabajara resolveram dar um aumento de salário aos seus colaboradores e lhe contraram para desenvolver o programa que calculará os reajustes.
#
# Faça um programa que recebe o salário de um colaborador e o reajuste segundo o seguinte critério, baseado no salário atual:
#
# - salários até R\$ 280,00 (incluindo): aumento de 20%
# - salários de R\$ 280,00 a 700,00 (incluindo) : aumento de 15%
# - salários de R\$ 700,00 e 1500,00 (incluindo): aumento de 10%
# - salários de R\$ 1500,00 em diante : aumento de 5%
#
# Após o aumento ser realizado, informe na tela:
# - salário antes do reajuste;
# - percentual de aumento aplicado;
# - valor do aumento;
# - novo salário, após o aumento.
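A possible sketch of a solution to Question 12 (the helper name `adjust_salary` is illustrative, the salary is passed as an argument instead of read with `input`, and the overlapping brackets are resolved with upper bounds inclusive):

```python
def adjust_salary(salary):
    # Choose the raise percentage from the salary brackets via an if-elif chain
    if salary <= 280.00:
        pct = 20
    elif salary <= 700.00:
        pct = 15
    elif salary <= 1500.00:
        pct = 10
    else:
        pct = 5
    raise_amount = salary * pct / 100
    new_salary = salary + raise_amount
    print(f"Salary before adjustment: R$ {salary:.2f}")
    print(f"Raise percentage applied: {pct}%")
    print(f"Raise amount: R$ {raise_amount:.2f}")
    print(f"New salary: R$ {new_salary:.2f}")
    return new_salary

adjust_salary(1000.00)  # falls in the 10% bracket
```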
| exercicios/Lista_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
# # Lab 2: Grover's Algorithm
# In this lab, you will implement Grover's algorithm in `Qiskit` and investigate its behavior following the material presented in lectures 4 to 6.
#
# You might find this chapter of the Qiskit Textbook useful:
# - https://qiskit.org/textbook/ch-algorithms/grover.html
#
# Remember, to run a cell in Jupyter notebooks, you press `Shift` + `Return/Enter` on your keyboard.
# ### Installing necessary packages
# Before we begin, you will need to install some prerequisites into your environment. Run the cell below to complete these installations. At the end, the cell outputs will be cleared.
# +
# !pip install -U -r resources/requirements.txt
from IPython.display import clear_output
clear_output()
# -
# # Review of Grover's Algorithm
# 
# You might recall from lectures 4 to 6 that Grover's algorithm has three main components.
# 1. First, we begin by creating a superposition of all $2^n$ computational basis states by applying a Hadamard ($H$) gate on each qubit starting off in the state $\vert0\rangle^{\otimes n}$. Here, the exponent $\otimes n$ means that we have a tensor product of the states of $n$ qubits.
# 2. Second, we apply an Oracle operator to mark the appropriate elements among the $2^n$ elements. The oracle operator applies a coefficient of $-1$ to each of the marked elements.
# 3. Third, we apply a Diffusion operator, or diffuser, which inverts the amplitude of all elements about the average amplitude.
#
# Putting these components together, and applying the Oracle and Diffusion operators $O(\sqrt{N})$ times, where $N = 2^n$, Grover's algorithm allows us to successfully determine the elements that were marked by the Oracle operator with high probability. This is shown in the block diagram above, where the quantum circuit for Grover's algorithm is depicted with a measurement in the end to read out the qubits.
#
# # Graded Exercise 1: Implementing Grover's Algorithm
#
# As you saw in the lecture, it is not hard to implement Grover's algorithm using `Qiskit`. The goal of this lab is to implement Grover's algorithm by creating a quantum circuit that has the marked elements `000001` and `101010`. You will see that the algorithm outputs one of these two marked elements with probability greater than $99\%$.
#
# Let us build each block step by step.
#
# ### 1.) Phase Oracle
# We start with the phase oracle. You might find it helpful to have a look at the corresponding chapter in the Qiskit textbook: https://qiskit.org/textbook/ch-algorithms/grover.html. However, note that the implementation in the textbook is done on 2 and 3 qubits only, while here we need to apply it to 6 qubits.
#
# **Recall that the action of the phase oracle is to add a phase of $-1$ to all states representing the marked elements, while leaving all other states unchanged.** An easy way to implement the phase oracle is to create an identity matrix on all $n$ qubits (remember that the corresponding dimension of this matrix is $2^n$) and then change those diagonal elements to $-1$ that correspond to the marked elements. Then, you need to convert that unitary into an operator.
#
# We have created a function below called `phase_oracle` which takes in two arguments. The first argument, $n$, gives the number of qubits in the quantum circuit. The second argument, `indices_to_mark`, is a list of the indices whose elements will be marked by the phase oracle with a phase of $-1$. Using these inputs, create a $2^n\times2^n$ identity matrix, and apply a phase of $-1$ to the diagonal elements at locations given in `indices_to_mark`. For example, if $0$ is in `indices_to_mark`, that means you need to set the top-left-most diagonal element of the identity matrix to -1.
#
# Once you complete these steps, apply the unitary operator to the quantum circuit.
from qiskit.quantum_info import Operator
from qiskit import QuantumCircuit
import numpy as np
def phase_oracle(n, indices_to_mark, name = 'Oracle'):
# create a quantum circuit on n qubits
qc = QuantumCircuit(n, name=name)
### WRITE YOUR CODE BETWEEN THESE LINES - START
### WRITE YOUR CODE BETWEEN THESE LINES - END
# convert your matrix (called oracle_matrix) into an operator, and add it to the quantum circuit
qc.unitary(Operator(oracle_matrix), range(n))
return qc
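As an informal sketch of the matrix construction described above (not the graded solution; the helper name `oracle_matrix_for` is only for illustration), one might build `oracle_matrix` with NumPy like this:

```python
import numpy as np

def oracle_matrix_for(n, indices_to_mark):
    # Start from the 2^n x 2^n identity, then flip the sign
    # of the diagonal entry for each marked basis state
    oracle_matrix = np.identity(2**n)
    for index in indices_to_mark:
        oracle_matrix[index, index] = -1
    return oracle_matrix

# Example: marking basis states 1 and 42 on 6 qubits
m = oracle_matrix_for(6, [1, 42])
print(m[1, 1], m[42, 42], m[0, 0])  # -1.0 -1.0 1.0
```

Since the matrix is diagonal with $\pm1$ entries, it is its own inverse, which serves as a quick unitarity sanity check.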
# ### 2.) Diffusion Operator $V$
#
# Next, we define the diffuser, which we called $V$ in the lecture. Its effect is to reflect all amplitudes about the average amplitude. To do so, we simply call the `phase_oracle` with only the zero state ($\vert0\rangle^{\otimes n}$) as the marked element and sandwich it between Hadamard gates applied to all qubits.
def diffuser(n):
# create a quantum circuit on n qubits
qc = QuantumCircuit(n, name='Diffuser')
### WRITE YOUR CODE BETWEEN THESE LINES - START
### WRITE YOUR CODE BETWEEN THESE LINES - END
return qc
# ### 3.) Putting it all together
#
# Finally, we combine the functions to construct Grover's algorithm. We need to determine the optimal number of rounds $r$ as described in the lecture.
#
# This was given by
#
# $$r = \left\lfloor\frac{\pi}{4}\sqrt{\frac{N}{k}}\right\rfloor$$
#
# where $k$ is the number of marked elements, and $\lfloor~\rfloor$ means rounding down to the nearest integer. In the specific example that we consider here, we have six qubits ($N = 2^6$) and two marked elements ($k = 2$), which implies $r = 4$. You can check this yourself by plugging in the numbers.
#
# In the lecture, we have also seen a lower bound on the success probability when using $n$ qubits. In this exercise, the success probability should be higher than $99\%$.
#
# Let's construct a quantum program that finds the marked elements `000001` and `101010` using Grover's algorithm. To do this, we will need to do the following:
# 1. We start with a Hadamard gate on all qubits.
# 2. Next, we apply $r$ rounds of Grover's algorithm, where each round consists of the application of the phase oracle with the marked elements and the diffuser. The indices for the two marked elements `000001` and `101010` are $1$ and $42$.
# 3. Finally, we need to measure all qubits.
#
# The next lines of code put everything together. **You do not need to modify anything below, but you will need to run the cell to submit your solution.**
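Plugging the numbers into the round-count formula above can be checked directly:

```python
import numpy as np

n, k = 6, 2                # six qubits, two marked elements
N = 2**n                   # 64 basis states
r = int(np.floor(np.pi / 4 * np.sqrt(N / k)))
print(r)  # 4
```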
# +
def Grover(n, indices_of_marked_elements):
# Create a quantum circuit on n qubits
qc = QuantumCircuit(n, n)
# Determine r
r = int(np.floor(np.pi/4*np.sqrt(2**n/len(indices_of_marked_elements))))
print(f'{n} qubits, basis states {indices_of_marked_elements} marked, {r} rounds')
# step 1: apply Hadamard gates on all qubits
qc.h(range(n))
# step 2: apply r rounds of the phase oracle and the diffuser
for _ in range(r):
qc.append(phase_oracle(n, indices_of_marked_elements), range(n))
qc.append(diffuser(n), range(n))
# step 3: measure all qubits
qc.measure(range(n), range(n))
return qc
mycircuit = Grover(6, [1, 42])
mycircuit.draw()
# -
# That's it! You might find it useful to run your quantum circuit and see the measurement outcomes, as well as visualize the statevector at the end.
#
# In order to run your quantum circuit and get the measurement outcomes, you simply need to run `Qiskit`'s `execute` function as follows.
from qiskit import Aer, execute
simulator = Aer.get_backend('qasm_simulator')
counts = execute(mycircuit, backend=simulator, shots=1000).result().get_counts(mycircuit)
from qiskit.visualization import plot_histogram
plot_histogram(counts)
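As a sanity check on the quoted $99\%$ figure, the standard Grover analysis (assumed here, not derived in this lab) gives a success probability of $\sin^2((2r+1)\theta)$ with $\sin\theta = \sqrt{k/N}$:

```python
import math

N, k, r = 64, 2, 4
theta = math.asin(math.sqrt(k / N))
p_success = math.sin((2 * r + 1) * theta) ** 2
print(p_success)  # ≈ 0.9992, comfortably above 99%
```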
# # Additional reading
#
# - In the exercise above, we implemented the phase oracle and diffuser as matrices without decomposing them into single- and two-qubit gates. To run on real hardware, one will also need to consider how to build these oracles using gates. You can find examples of how the oracles can be built in the Grover's algorithm section of the Qiskit Textbook here: https://qiskit.org/textbook/ch-algorithms/grover.html
| Labs/introqcqh-lab-2/lab-2.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.6
# language: julia
# name: julia-0.4
# ---
# # Interactive Widgets in IJulia
# IPython 2.0 introduced interactive widgets, which are basically:
#
# * Javascript widgets (sliders, buttons, etcetera)
# * A communications protocol for the widgets to talk to the kernel
# * A Python interface to create and manipulate these.
#
# Thanks to fantastic work by a Google Summer of Code student, [<NAME>](https://github.com/shashi/), the same features are accessible from a Julia interface.
using Interact
@manipulate for n in 1:100
rand(n,n)
end
using Colors
@manipulate for r in 0:0.1:1, g in 0:0.1:1, b in 0:0.1:1, n in 1:100
linspace(RGB(0.0,0.0,0.0), RGB(r,g,b), n)
end
using PyPlot
x = linspace(0,10,1000)
clf()
f = figure()
@manipulate for α = 1:0.1:4, β = 1:0.1:4, leg="a funny plot"
withfig(f) do
plot(x, cos(α*x + sin(β*x)))
legend([leg])
end
end
using SymPy
x = Sym("x")
@manipulate for n=0:20
latex(SymPy.diff(sin(x^2), x, n))
end
| container/interactive/IJulia/tutorial/Interactive Widgets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="GKp_6SRlQqkt"
# # Bangalore House Price Prediction - Supervised Regression Problem
#
# ## Data Preprocessing
# + [markdown] colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" executionInfo={"elapsed": 4344, "status": "ok", "timestamp": 1593086698183, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="Pvv4zbesQlDS" outputId="0ef1fc2f-18e2-4712-8999-ac3f3984bf3c"
# Project Steps:
#
# 1. Look at the big picture.
# 2. Get the data.
# 3. Discover and visualize the data to gain insights.
# 4. Prepare the data for Machine Learning algorithms.
# 5. Select a model and train it.
# 6. Fine-tune your model.
# 7. Present your solution.
# 8. Launch, monitor, and maintain your system.
# + [markdown] colab_type="text" id="bMaJ1G7lQ-yC"
# # 1. Business Problem
# The main goal of this project is to predict the price of a house in Bangalore from its features.
# + [markdown] colab_type="text" id="oDZT1ynvSRfY"
# # Import Libraries
# + colab={"base_uri": "https://localhost:8080/", "height": 71} colab_type="code" executionInfo={"elapsed": 7790, "status": "ok", "timestamp": 1593086701782, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="OkPXjMyZQ-Q0" outputId="504edb5d-31fd-4107-8154-ef3274c5c0e5"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# + [markdown] colab_type="text" id="AFDpgDYCSWGB"
# # 2. Load dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 7729, "status": "ok", "timestamp": 1593086701784, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="ZhD-jJYtSVkh" outputId="f970f71e-3c5d-42e9-c18a-3dbe558ab7fb"
path = "https://drive.google.com/uc?export=download&id=13mP8FeMX09L3utbPcCDp-U2fXnf53gwx"
df_raw = pd.read_csv(path)
df_raw.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" executionInfo={"elapsed": 7562, "status": "ok", "timestamp": 1593086701786, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="K2ZuGoHddACU" outputId="d80d838d-8271-45b3-a505-fc2c35e5ae3a"
df_raw.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" executionInfo={"elapsed": 7462, "status": "ok", "timestamp": 1593086701789, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="sqUihl5AdKVi" outputId="60e45a4e-2434-4cba-f1c8-89a4cd9aca98"
df_raw.tail()
# + [markdown] colab_type="text" id="phcXj_mudWaD"
# # 3. Exploratory Data Analysis
# + colab={} colab_type="code" id="UZsRiyVVdVjY"
df = df_raw.copy() # get the copy of raw data
# + colab={"base_uri": "https://localhost:8080/", "height": 289} colab_type="code" executionInfo={"elapsed": 6147, "status": "ok", "timestamp": 1593086701795, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="r37N_FTKdO5o" outputId="8f7caf5f-b9cd-4bef-a13d-b0cfc02688ab"
# get the information of data
df.info()
# + colab={} colab_type="code" id="nwTvqKFVdm_g"
# We have only 3 numerical features - bath, balcony and price
# and 6 categorical features - area_type, availability, location, size, society and total_sqft
# Target Feature =======>>>>>> price >>>>>>
# Price in lakh
# + colab={"base_uri": "https://localhost:8080/", "height": 297} colab_type="code" executionInfo={"elapsed": 6003, "status": "ok", "timestamp": 1593086701798, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="T6h2p8PsdrEa" outputId="2a7b5321-4ed7-4270-feaf-bdcd26c7340e"
df.describe()
# observe the 75% and max values - the huge difference suggests outliers
# + colab={"base_uri": "https://localhost:8080/", "height": 584} colab_type="code" executionInfo={"elapsed": 9639, "status": "ok", "timestamp": 1593086705507, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="ZSPkjZdwdsrv" outputId="c8a2214d-c11f-40cb-a2da-776b24df2780"
sns.pairplot(df)
# bath and price have slightly linear correlation with some outliers
# + colab={} colab_type="code" id="FzagHR78eQOJ"
# value count of each feature
def value_count(df):
for var in df.columns:
print(df[var].value_counts())
print("--------------------------------")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 9577, "status": "ok", "timestamp": 1593086705513, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="xbC1B74cenAm" outputId="63083a00-46a3-442e-ed10-fe107e02aca2"
value_count(df)
# + colab={"base_uri": "https://localhost:8080/", "height": 286} colab_type="code" executionInfo={"elapsed": 9532, "status": "ok", "timestamp": 1593086705515, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="hR4eve8ye10G" outputId="98e5c60b-fa92-40b7-b77c-5d73a980b28c"
# correlation heatmap
num_vars = ["bath", "balcony", "price"]
sns.heatmap(df[num_vars].corr(),cmap="coolwarm", annot=True)
# bath has a stronger correlation with price than balcony does
# + [markdown] colab_type="text" id="rBJn5yfZfap0"
# # 4. Prepare Data for the Machine Learning Model
# + [markdown] colab_type="text" id="J_NF04EUfggt"
# ## Data cleaning
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 9496, "status": "ok", "timestamp": 1593086705516, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="P8RwWrote8Qf" outputId="ee6a4aec-52ef-4985-fc5a-e608671e7218"
df.isnull().sum() # count how much missing data is available
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 9481, "status": "ok", "timestamp": 1593086705518, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="wdrarSAEfnNz" outputId="b60a8237-98d0-4e3b-f3d7-a71d1b625a95"
df.isnull().mean()*100 # % of missing values
# society has 41.3% missing values (need to drop)
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11905, "status": "ok", "timestamp": 1593086707960, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="VSGHH8hVfrAQ" outputId="5b3c3056-e4a3-4081-c72d-c74b3e7c7a29"
# visualize missing values using a heatmap to see where values are missing
plt.figure(figsize=(16,9))
sns.heatmap(df.isnull())
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11889, "status": "ok", "timestamp": 1593086707962, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="Azmc5V29f-w3" outputId="9e04fb7b-e12a-4d25-da49-74759f81207c"
# Drop ----------> society feature
# because it has 41.3% missing values
df2 = df.drop('society', axis='columns')
df2.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11869, "status": "ok", "timestamp": 1593086707964, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="nn9Sh3VugR9t" outputId="696e1a27-acaa-488d-8cad-571563035b64"
# fill mean value in --------> balcony feature
# because it contains 4.5% missing values
df2['balcony'] = df2['balcony'].fillna(df2['balcony'].mean())
df2.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11849, "status": "ok", "timestamp": 1593086707965, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="Nk-GK233glrd" outputId="e8d117fa-5919-4505-ec79-79be34c3caae"
# drop rows with na values from df2
# because only a very small % of values is missing
df3 = df2.dropna()
df3.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11822, "status": "ok", "timestamp": 1593086707966, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="Lc9HQAZGhZBt" outputId="c93a9a4f-6695-48d2-fc2b-b14842509978"
df3.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11798, "status": "ok", "timestamp": 1593086707967, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="Ih3NqEDDhbfF" outputId="42c60a91-0b8a-442a-ef3a-d23fd57bef36"
df3.head()
# + [markdown] colab_type="text" id="WM_tZEv8hn0T"
# ## Feature Engineering
# + colab={} colab_type="code" id="x9pqpwGohjnU"
# show all the columns and rows
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)
# + [markdown] colab_type="text" id="re24TUzKhziC"
# ### Converting the 'total_sqft' categorical feature to numeric
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11761, "status": "ok", "timestamp": 1593086707970, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="OEP9sAsMhv7c" outputId="aa6bc91d-57dd-486b-be29-ead553902c0b"
df3['total_sqft'].value_counts()
# here we observe that 'total_sqft' contains string values in different formats
# float/int-like values: 1689.28, 817
# range values: 540 - 740
# number and string: 142.84Sq. Meter, 117Sq. Yards, 1Grounds
# the best strategy is to convert it into a number by splitting the string
# + colab={} colab_type="code" id="rxMYQnljjFk1"
total_sqft_int = []
for str_val in df3['total_sqft']:
try:
        total_sqft_int.append(float(str_val)) # a plain numeric string like '123.4' converts directly to float
except:
try:
temp = []
temp = str_val.split('-')
            total_sqft_int.append((float(temp[0])+float(temp[-1]))/2) # a range string like '123 - 534' is split and its mean taken
except:
            total_sqft_int.append(np.nan) # values in any other format become NaN
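The same conversion can be factored into a small helper for illustration (the name `parse_sqft` is hypothetical and not part of the original notebook):

```python
import numpy as np

def parse_sqft(str_val):
    # Plain numbers parse directly; 'a - b' ranges become their midpoint;
    # anything else (e.g. '142.84Sq. Meter') becomes NaN for later dropping
    try:
        return float(str_val)
    except ValueError:
        try:
            low, high = str_val.split('-')
            return (float(low) + float(high)) / 2
        except ValueError:
            return np.nan

print(parse_sqft('1689.28'))      # 1689.28
print(parse_sqft('540 - 740'))    # 640.0
print(parse_sqft('117Sq. Yards')) # nan
```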
# + colab={} colab_type="code" id="cAR7V6RekTZ7"
# reset the index of dataframe
df4 = df3.reset_index(drop=True) # drop=True - don't add index column in df
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11689, "status": "ok", "timestamp": 1593086707973, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="lnRJIGOukeet" outputId="73c10884-ae53-4f46-ecf9-67e1ff403640"
# join df4 and the total_sqft_int list
df5 = df4.join(pd.DataFrame({'total_sqft_int':total_sqft_int}))
df5.head()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11671, "status": "ok", "timestamp": 1593086707975, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="-mvhjP-TkekY" outputId="949af9f6-05a7-47b6-adbe-fc71b7315a23"
df5.tail()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11653, "status": "ok", "timestamp": 1593086707976, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="YiS81zx5k--n" outputId="94c73315-df5c-4ba2-b996-489cec9ecc1c"
df5.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11630, "status": "ok", "timestamp": 1593086707977, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="Xregleu_lGoj" outputId="33f9def9-f249-4921-a16f-eddece056101"
# drop na value
df6 = df5.dropna()
df6.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11608, "status": "ok", "timestamp": 1593086707978, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="tchjeGPmlPiS" outputId="452c9179-8787-4b86-b77f-abe3f97dd74e"
df6.info()
# + [markdown] colab_type="text" id="0mX13Sa0lpdG"
# ## Working on <<<< Size >>>> feature
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11588, "status": "ok", "timestamp": 1593086707979, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="U5xvFlo_ld8s" outputId="72042053-b624-4a67-8bce-a5f4b80ef3e2"
df6['size'].value_counts()
# size feature shows the number of rooms
# + colab={} colab_type="code" id="KkTvN3jVlv7p"
"""
in size feature we assume that
2 BHK = 2 Bedroom == 2 RK
so takes only number and remove sufix text
"""
size_int = []
for str_val in df6['size']:
temp=[]
temp = str_val.split(" ")
try:
size_int.append(int(temp[0]))
except:
size_int.append(np.nan)
print("Noice = ",str_val)
# + colab={} colab_type="code" id="AsPiyp8HmUhA"
df6 = df6.reset_index(drop=True)
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11538, "status": "ok", "timestamp": 1593086707983, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="-ig3eCT8myqg" outputId="9d2d5d84-fb05-42db-d020-37e199ecfee9"
# join df6 and list size_int
df7 = df6.join(pd.DataFrame({'bhk':size_int}))
df7.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 11513, "status": "ok", "timestamp": 1593086707984, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="91metB4mm0tJ" outputId="9ab9aa53-9c9c-48a3-a4d8-efc79705cf0c"
df7.tail()
# + [markdown] colab_type="text" id="k4aiX9WlqQ8W"
# ## Finding and Removing Outliers
# + colab={} colab_type="code" id="YwIdhm5Ym3pj"
# function to create histogram, Q-Q plot and boxplot
# for Q-Q plots
import scipy.stats as stats
def diagnostic_plots(df, variable):
# function takes a dataframe (df) and
# the variable of interest as arguments
# define figure size
plt.figure(figsize=(16, 4))
# histogram
plt.subplot(1, 3, 1)
sns.distplot(df[variable], bins=30)
plt.title('Histogram')
# Q-Q plot
plt.subplot(1, 3, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.ylabel('Variable quantiles')
# boxplot
plt.subplot(1, 3, 3)
sns.boxplot(y=df[variable])
plt.title('Boxplot')
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 13711, "status": "ok", "timestamp": 1593086710216, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="4PBtFOmIqpNd" outputId="3c1f989d-c40a-49b0-bb48-a3e960bbc2d8"
num_var = ["bath","balcony","total_sqft_int","bhk","price"]
for var in num_var:
print("******* {} *******".format(var))
diagnostic_plots(df7, var)
# here we observe outliers using the histogram, Q-Q plot and boxplot
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 13684, "status": "ok", "timestamp": 1593086710218, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="EpcfW0zGsFDr" outputId="b66c1677-c7cb-42e8-84e2-c52bccf40da8"
# here we assume 1 BHK requires a minimum of 350 sqft of area
df7[df7['total_sqft_int']/df7['bhk'] < 350].head()
# now we have found the outliers
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 13621, "status": "ok", "timestamp": 1593086710220, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="_KML6BbAuBhy" outputId="a57a5905-a664-4de7-f7f1-c25f30c270d5"
# if total_sqft per BHK is < 350 then we remove those rows
# (.copy() avoids a SettingWithCopyWarning when we add a column to df8 later)
df8 = df7[~(df7['total_sqft_int']/df7['bhk'] < 350)].copy()
df8.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 13597, "status": "ok", "timestamp": 1593086710222, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="Ub6uPFN4ux_2" outputId="2b950738-1f8e-438d-e48c-c32a013d42ef"
# create a new feature: price per square foot
# it helps to find the outliers
# price is in lakh, so convert into rupees and then divide by total_sqft_int
df8['price_per_sqft'] = df8['price']*100000 / df8['total_sqft_int']
df8.head()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 13572, "status": "ok", "timestamp": 1593086710223, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="sGJwRpkNvFB7" outputId="355c58b9-432a-4d8c-e566-2d74a588ea2d"
df8.price_per_sqft.describe()
# here we can see a huge difference between min and max price_per_sqft
# min 6308.502826 max 176470.588235
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 17402, "status": "ok", "timestamp": 1593086714072, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="qpAdC6wJwJ5E" outputId="ee2f2995-786b-4c11-b94c-f496655b0723"
# Removing outliers with the help of 'price_per_sqft', taking the std and mean per location
def remove_pps_outliers(df):
df_out = pd.DataFrame()
for key, subdf in df.groupby('location'):
m=np.mean(subdf.price_per_sqft)
st=np.std(subdf.price_per_sqft)
reduced_df = subdf[(subdf.price_per_sqft>(m-st))&(subdf.price_per_sqft<=(m+st))]
df_out = pd.concat([df_out, reduced_df], ignore_index = True)
return df_out
df9 = remove_pps_outliers(df8)
df9.shape
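A tiny synthetic check of the filtering rule used above (keep rows within one standard deviation of their group's mean `price_per_sqft`); the toy data here is purely illustrative:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'location': ['A'] * 5,
    'price_per_sqft': [5000, 5100, 4900, 5050, 20000],  # one obvious outlier
})
m, st = np.mean(toy.price_per_sqft), np.std(toy.price_per_sqft)
kept = toy[(toy.price_per_sqft > (m - st)) & (toy.price_per_sqft <= (m + st))]
print(len(kept))  # 4 - the 20000 row is dropped
```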
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 17377, "status": "ok", "timestamp": 1593086714074, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="ZE5Do-K8xXyp" outputId="da4283bf-17ba-472f-a9e6-ca2d6c2b7c8d"
def plot_scatter_chart(df,location):
bhk2 = df[(df.location==location) & (df.bhk==2)]
bhk3 = df[(df.location==location) & (df.bhk==3)]
plt.figure(figsize=(16,9))
plt.scatter(bhk2.total_sqft_int, bhk2.price, color='Blue', label='2 BHK', s=50)
plt.scatter(bhk3.total_sqft_int, bhk3.price, color='Red', label='3 BHK', s=50, marker="+")
plt.xlabel("Total Square Feet Area")
plt.ylabel("Price")
plt.title(location)
plt.legend()
plot_scatter_chart(df9, "<NAME>")
# in the scatter plot below we observe that at the same location the price of
# a 2 BHK house exceeds that of a 3 BHK, so those points are outliers
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 17363, "status": "ok", "timestamp": 1593086714078, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="OWYS0z-wyD3d" outputId="58169a6e-42a4-41c3-f199-e6a47e10953f"
plot_scatter_chart(df9, "Hebbal")
# in the scatter plot below we observe that at the same location the price of
# a 3 BHK house is less than that of a 2 BHK, so those points are outliers
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 18963, "status": "ok", "timestamp": 1593086715701, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="6RtsF0zbyqY7" outputId="f9dd90d7-f768-45d3-a61c-f6c45af3b4f7"
# Removing BHK outliers
def remove_bhk_outliers(df):
exclude_indices = np.array([])
for location, location_df in df.groupby('location'):
bhk_stats = {}
for bhk, bhk_df in location_df.groupby('bhk'):
bhk_stats[bhk]={
'mean':np.mean(bhk_df.price_per_sqft),
'std':np.std(bhk_df.price_per_sqft),
'count':bhk_df.shape[0]}
for bhk, bhk_df in location_df.groupby('bhk'):
stats=bhk_stats.get(bhk-1)
if stats and stats['count']>5:
exclude_indices = np.append(exclude_indices, bhk_df[bhk_df.price_per_sqft<(stats['mean'])].index.values)
return df.drop(exclude_indices, axis='index')
df10 = remove_bhk_outliers(df9)
df10.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 19708, "status": "ok", "timestamp": 1593086716461, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="E9yys1dGz1mm" outputId="ab078cf9-532b-452b-b1d7-f6113068b1f4"
plot_scatter_chart(df10, "Hebbal")
# In the scatter plot below most of the red data points lying among the blue points have been removed
# + [markdown] colab_type="text" id="QQ7lWGTG0f7_"
# ### Remove outliers with the help of the 'bath' feature
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 19692, "status": "ok", "timestamp": 1593086716464, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="8sdFD1NZ0H4t" outputId="8db71e36-bba8-449e-ad88-5be8a92d934e"
df10.bath.unique()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 19674, "status": "ok", "timestamp": 1593086716468, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="2eUHdFGs0p_Q" outputId="2012835e-2d35-484d-c400-ab7c5f14a202"
df10[df10.bath > df10.bhk+2]
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 19659, "status": "ok", "timestamp": 1593086716469, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="nrpCKw3Q03fG" outputId="97662a9f-c2e2-44c7-c034-d7b4ab276f2c"
# keep only rows where the total number of bathrooms is at most bhk + 1
df11 = df10[df10.bath < df10.bhk+2]
df11.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 20336, "status": "ok", "timestamp": 1593086717169, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="pNXZiHv9138i" outputId="4dfea9c8-3e57-47db-c9f3-a7ffd06ab8a9"
plt.figure(figsize=(16,9))
for i,var in enumerate(num_var):
plt.subplot(3,2,i+1)
sns.boxplot(df11[var])
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 20329, "status": "ok", "timestamp": 1593086717176, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="JMVuL-QJ2cxm" outputId="ab636060-dc23-4b26-9ffb-f32b72e86270"
df11.head()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 20316, "status": "ok", "timestamp": 1593086717178, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="d2uIPGKJ3ZKD" outputId="713b738b-597f-43e8-f8d4-5d9520a1f9b7"
df12 = df11.drop(['area_type', 'availability',"location","size","total_sqft"], axis =1)
df12.head()
# + colab={} colab_type="code" id="q13U9W1U30gR"
df12.to_csv("clean_data.csv", index=False) # test ml model on this data
# An ML model trained on this data achieved the best score: XGBoost = 0.914710
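# The comment above reports that an XGBoost model scored about 0.915 on the cleaned data. As a rough, self-contained illustration of that train/evaluate step, here is a sketch using a plain least-squares baseline on synthetic stand-in data — the column names mirror the cleaned frame, but the values are fabricated, since `clean_data.csv` is not loaded in this notebook:

```python
import numpy as np
import pandas as pd

# Fabricated stand-in for the cleaned frame (bath, bhk, price_per_sqft, price)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "bath": rng.integers(1, 5, 200),
    "bhk": rng.integers(1, 5, 200),
    "price_per_sqft": rng.normal(6000, 500, 200),
})
df["price"] = 50 * df["bath"] + 80 * df["bhk"] + 0.01 * df["price_per_sqft"] + rng.normal(0, 5, 200)

# 80/20 split, mirroring the usual train/test workflow
X = df[["bath", "bhk", "price_per_sqft"]].to_numpy()
y = df["price"].to_numpy()
n_train = int(0.8 * len(df))
X_train, X_test = X[:n_train], X[n_train:]
y_train, y_test = y[:n_train], y[n_train:]

# Ordinary least squares with an intercept column as a simple baseline
A = np.c_[X_train, np.ones(len(X_train))]
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
pred = np.c_[X_test, np.ones(len(X_test))] @ coef

# R^2 on the held-out 20%
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

Any gradient-boosted model would slot into the same split/fit/score loop; only the estimator changes.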
# + [markdown] colab_type="text" id="NAh4f1vajG-5"
# # Categorical Variable Encoding
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" executionInfo={"elapsed": 20273, "status": "ok", "timestamp": 1593086717180, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="KgZ2QPkQ4F_3" outputId="5927e9d4-3ed1-45b7-d16d-7f83b90a5c25"
df13 = df11.drop(["size","total_sqft"], axis =1)
df13.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 20229, "status": "ok", "timestamp": 1593086717181, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="bSLo_pYZjd_W" outputId="d8c77ba7-8e18-4e4b-81ea-f4f915b1ec2d"
df14 = pd.get_dummies(df13, drop_first=True, columns=['area_type','availability','location'])
df14.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 275} colab_type="code" executionInfo={"elapsed": 20791, "status": "ok", "timestamp": 1593086717827, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="Y37NypHbkzVX" outputId="e83d9dae-42c5-418d-951e-69367c9c2900"
df14.head()
# + colab={} colab_type="code" id="unULTcCVk-Zs"
df14.to_csv('oh_encoded_data.csv', index=False) # test ml model on this data
# + [markdown] colab_type="text" id="oLTS7un4X48K"
# The columns ['area_type','availability','location'] contain many classes; one-hot encoding all of them greatly increases the size of the DataFrame,
# so we use only those classes which are *frequently* present in each categorical variable
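# A compact sketch of that idea — keep only categories that occur frequently, then one-hot encode just those. The location names and the threshold below are made up for illustration:

```python
import pandas as pd

# Hypothetical location values; the real notebook reads them from df13['location']
s = pd.Series(["Hebbal", "Hebbal", "Yelahanka", "Hebbal", "Rare1", "Rare2"])

min_count = 2                        # keep categories seen at least this often
vc = s.value_counts()
frequent = vc[vc >= min_count].index

# One column per frequent category; rare categories become all-zero rows
dummies = pd.DataFrame({f"location_{cat}": (s == cat).astype(int) for cat in frequent})
print(list(dummies.columns))
print(int(dummies["location_Hebbal"].sum()))
```

Rows whose category was rare end up all-zero, which acts as an implicit "other" bucket and keeps the frame narrow.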
# + [markdown] colab_type="text" id="fcEZKmuBaF6a"
# ## Working on the 'area_type' feature
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21349, "status": "ok", "timestamp": 1593086718436, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="enSMW-udlY6h" outputId="000780f9-60a6-451b-eb9c-36fe6cc1dda0"
df13['area_type'].value_counts()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21334, "status": "ok", "timestamp": 1593086718438, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="3Rc6zRW63lYD" outputId="de89b283-1a88-4829-d0ce-44c664699af8"
df15 = df13.copy()
# apply one-hot encoding to the 'area_type' feature
for cat_var in ["Super built-up Area","Built-up Area","Plot Area"]:
df15["area_type"+cat_var] = np.where(df15['area_type']==cat_var, 1,0)
df15.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21320, "status": "ok", "timestamp": 1593086718441, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="dtqGj98kapL7" outputId="c820a106-91ee-4c20-ad39-cbdacdf621b9"
df15.head(2)
# + [markdown] colab_type="text" id="6j5v5Bjla73m"
# ## Working with the 'availability' feature
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21310, "status": "ok", "timestamp": 1593086718443, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="RpOajj-JauCb" outputId="97ac80be-1a82-4a44-c12d-d6c086bf2560"
df15["availability"].value_counts()
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21298, "status": "ok", "timestamp": 1593086718444, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="uPvvvHO8a_pD" outputId="65eba69f-c093-4394-fbc8-ebc2148dffea"
# In the availability feature, 10525 houses are 'Ready To Move' and the remaining ones become ready on a particular date,
# so we create a new feature 'availability_Ready To Move' with value 1 if availability is 'Ready To Move', else 0
df15["availability_Ready To Move"] = np.where(df15["availability"]=="Ready To Move",1,0)
df15.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21282, "status": "ok", "timestamp": 1593086718445, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="4i75qvJubaxy" outputId="824a57b1-f926-4c90-8874-091f5f5c20c6"
df15.tail()
# + [markdown] colab_type="text" id="Y2fnJEDtbpgd"
# ## Working on the 'location' feature
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21268, "status": "ok", "timestamp": 1593086718446, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="yPzFM1oLbdgq" outputId="ecee8444-e17e-4210-84ae-6441bea43086"
location_value_count = df15['location'].value_counts()
location_value_count
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21252, "status": "ok", "timestamp": 1593086718447, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="kITj96Ygbuhy" outputId="2f0fe5f3-cfed-46f1-d016-4d53e8fbd971"
location_gert_20 = location_value_count[location_value_count>=20].index
location_gert_20
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21238, "status": "ok", "timestamp": 1593086718449, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="b006OZBQb7eB" outputId="0c0bf606-4dcc-4a65-f291-36be486d840d"
# if a location appears 20 times or more, we create a column for it;
# the column is 1 if the row's location matches, else 0 (one-hot encoding)
df16 = df15.copy()
for cat_var in location_gert_20:
df16['location_'+cat_var]=np.where(df16['location']==cat_var, 1,0)
df16.shape
# + colab={"base_uri": "https://localhost:8080/"} colab_type="code" executionInfo={"elapsed": 21228, "status": "ok", "timestamp": 1593086718451, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="VaWXj_SmcYxK" outputId="26ea4c3e-a9e0-424e-94e8-8b4cb81ea90b"
df16.head()
# + [markdown] colab_type="text" id="ARI5vPzqcq7v"
# ## Drop categorical variable
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 21213, "status": "ok", "timestamp": 1593086718453, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="gHXOhOCbcfDC" outputId="80208e8c-60a8-49ac-e9f4-cf5f87b91075"
df17 = df16.drop(["area_type","availability",'location'], axis =1)
df17.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 258} colab_type="code" executionInfo={"elapsed": 21829, "status": "ok", "timestamp": 1593086719098, "user": {"displayName": "indian ai production", "photoUrl": "", "userId": "05336710603640792650"}, "user_tz": -330} id="3AINHMMcdTEA" outputId="05fc942e-e3fe-4c05-b4bc-fac2f899ee43"
df17.head()
# + colab={} colab_type="code" id="_qLhyrx3dA2Y"
df17.to_csv('ohe_data_reduce_cat_class.csv', index=False)
| Bengaluru_House_Price_Prediction/Data_Preprocessing_Bengaluru_House_Price_Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
myArray = [1, 2, 3, 4, 5]
print(myArray)
import numpy as np
myArray = [1, 2, 3, 4, 5]
print(np.sum(myArray))
| mathThinkingInCoSci/change.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Problem set 4: Analyzing data
# [<img src="https://mybinder.org/badge_logo.svg">](https://mybinder.org/v2/gh/NumEconCopenhagen/exercises-2019/master?urlpath=lab/tree/PS4/problem_set_4.ipynb)
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import pydst
dst = pydst.Dst(lang='en')
# # Tasks
# ## Import national account data from Denmark Statistics
# Consider the following dictionary definitions:
# +
columns_dict = {}
columns_dict['TRANSAKT'] = 'variable'
columns_dict['PRISENHED'] = 'unit'
columns_dict['TID'] = 'year'
columns_dict['INDHOLD'] = 'value'
var_dict = {} # var is for variable
var_dict['P.1 Output'] = 'Y'
var_dict['P.3 Final consumption expenditure'] = 'C'
var_dict['P.3 Government consumption expenditure'] = 'G'
var_dict['P.5 Gross capital formation'] = 'I'
var_dict['P.6 Export of goods and services'] = 'X'
var_dict['P.7 Import of goods and services'] = 'M'
unit_dict = {}
unit_dict['2010-prices, chained values'] = 'real'
unit_dict['Current prices'] = 'nominal'
# -
# **Step 1:** Download all of table `nah1`.
# +
# hint, nah1 = dst.get_data(table_id = '?', variables={'TRANSAKT':[?], 'PRISENHED':[?], 'TID':[?]})
# -
# **Step 2:** Rename the columns using `columns_dict` and replace data using `var_dict` and `unit_dict`.
# +
# hint, nah1_true.rename(?,inplace=True)
# for key,value in var_dict.items():
# nah1.variable.replace(?)
#for key,value in unit_dict.items():
# nah1.unit.replace(?)
# -
# **Step 3:** Only keep rows where the variable is in `[Y, C, G, I, X, M]`. Afterwards convert the `value` column to a float.
# +
# write you code here
# nah1.value = nah1.value.astype('float')
# -
# **Step 4:** Discuss what the following summary statistics show.
# +
# nah1_true.groupby(['variable','unit']).describe()
# -
# **Answer:**
# + jupyter={"source_hidden": true}
# a. load
nah1_true = dst.get_data(table_id = 'NAH1', variables={'TRANSAKT':['*'], 'PRISENHED':['*'], 'TID':['*']})
# b. rename and replace
nah1_true.rename(columns=columns_dict,inplace=True)
# c. replace data
for key,value in var_dict.items():
nah1_true.variable.replace(key,value,inplace=True)
for key,value in unit_dict.items():
nah1_true.unit.replace(key,value,inplace=True)
# d. keep if in var_dict
I = False
for key,value in var_dict.items():
I = I | (nah1_true.variable == value)
nah1_true = nah1_true[I]
# e. convert values to numeric
nah1_true.value = nah1_true.value.astype('float')
# d. summary statistics
nah1_true.groupby(['variable','unit']).describe()
# -
# ## Merge with population data from Denmark Statistics
# Load population data from Denmark Statistics:
pop = dst.get_data(table_id = 'FT', variables={'HOVEDDELE':['*'], 'TID':['*']})
pop.rename(columns={'TID':'year','INDHOLD':'population'},inplace=True)
I = pop.HOVEDDELE == 'All Denmark'
pop = pop.loc[I,['year','population']]
pop.head()
# **Question 1:** Merge the population and the national account data, so there is a new column called `population`. Use the **merge function**.
# +
# hint, merged = pd.merge(?,?,how='?',on=[?])
# merged_true.tail(10)
# -
# **Answer:**
# + jupyter={"source_hidden": true}
merged_true = pd.merge(nah1_true,pop,how='left',on=['year'])
merged_true.tail(10)
# -
# **Question 2:** Merge the population on again, so there is a new column called `population_alt`. Use the **join method**.
# +
# pop_with_index = pop.set_index(?)
# pop_with_index.rename(columns={'population':'population_alt'},inplace=True)
# merged_with_index = merged.set_index(?)
# merged_alt = merged_with_index.join(?)
# merged_alt.tail(10)
# -
# **Answer:**
# + jupyter={"source_hidden": true}
pop_with_index = pop.set_index('year')
pop_with_index.rename(columns={'population':'population_alt'},inplace=True)
merged_true_with_index = merged_true.set_index('year')
merged_true_alt = merged_true_with_index.join(pop_with_index)
merged_true_alt.tail(10)
# -
# ## Split-apply-combine-(plot)
# Consider the following **split-apply-combine-plot:**
# +
# a. split
nah1_true_grouped = nah1_true.groupby(['variable','unit'])
nah1_true_grouped_first = nah1_true_grouped.value.first()
nah1_true_grouped_first.name = 'first'
# b. apply
nah1_true.set_index(['variable','unit','year'],inplace=True)
nah1_true = nah1_true.join(nah1_true_grouped_first,how='left',on=['variable','unit'])
nah1_true.reset_index(inplace=True)
# c. combine
nah1_true['indexed'] = nah1_true['value']/nah1_true['first']
# d. plot
def plot(df):
df_indexed = df.set_index('year')
I = df_indexed.unit == 'real'
df_indexed[I].groupby(['variable'])['indexed'].plot(legend=True);
plot(nah1_true)
# -
# **Question** Implement the same split-apply-combine as above using `transform`.
# +
def first(x): # select the first element in a series
return x.iloc[0]
# nah1_alt = nah1_final.copy()
# grouped = nah1_alt.groupby(?)
#nah1_alt[?] = ?.transform(lambda x: ?)
#nah1_alt.head()
# -
# **Answer:**
# + jupyter={"source_hidden": true}
nah1_true_alt = nah1_true.copy()
grouped = nah1_true_alt.groupby(['variable','unit'])
nah1_true_alt['index_transform'] = grouped['value'].transform(lambda x: x/first(x))
nah1_true_alt.head()
# -
# # Problem: The Housing market
#
# ## Housing data
# **Note:** The file `data/bm010_parcel.xlsx` has been downloaded from http://rkr.statistikbank.dk/201.
#
# **Question:** Go through the cell below and ensure you understand ALL commands.
# +
# a. load data
prices = pd.read_excel('data/bm010_parcel.xlsx', skiprows=2)
prices.rename(columns={'Unnamed: 2': 'municipality'}, inplace=True)
# b. delete columns
del prices['Unnamed: 0']
del prices['Unnamed: 1']
# c. rename time columns: 1992K1 -> price19921
time_dict = {}
for y in range(1992,2018+1):
for k in range(1,4+1):
str_from = f'{y}K{k}'
str_to = f'price{y}{k}'
time_dict[str_from] = str_to
prices = prices.rename(columns = time_dict)
# d. drop missing
prices = prices.dropna()
# e. convert to long
prices_long = pd.wide_to_long(prices, stubnames='price', i='municipality', j='year_quarter')
prices_long.reset_index(inplace=True)
# f. drop missing and convert to float
I = prices_long.loc[prices_long.price == '..']
prices_long.drop(I.index, inplace=True)
prices_long.price = prices_long.price.astype('float')
# g. create date variable
prices_long['d'] = (prices_long.year_quarter.astype(str).str[:4] # grab the year, first four digits
+ 'Q' # add the letter Q
+ prices_long.year_quarter.astype(str).str[4]) # the quarter (fifth digit)
prices_long['date'] = pd.to_datetime(prices_long.d)
# h. cleanup
del prices_long['year_quarter']
del prices_long['d']
prices_long.head()
# -
# ## Population data
# **Question:** Go through the cell below and ensure you understand ALL commands.
# +
# a. load data
pop = dst.get_data(table_id='FOLK1A', variables={'Alder':['IALT'], 'CIVILSTAND':['TOT'], 'Køn':['TOT'], 'Tid':['*'], 'OMRÅDE':['*']})
# b. drop and rename columns
for v in ['ALDER', 'CIVILSTAND', 'KØN']:
del pop[v]
pop = pop.rename(columns = {'INDHOLD':'population', 'OMRÅDE': 'municipality'})
# c. drop non-municipalities
for val in ['Region', 'All']:
I = pop['municipality'].str.contains(val)
pop.drop(pop[I].index, inplace=True)
# d. convert to date
pop['date'] = pd.to_datetime(pop.TID)
del pop['TID']
pop.head()
# -
# ## Analysis
# **Problem:** Analyze the co-variation between population growth and house price growth. Reproduce the graphs below.
#
# **Hint:** For the second one consider the `agg` method (similar to but different from `transform`, Google it).
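# Since the hint distinguishes `agg` from `transform`, a tiny illustration (with made-up municipalities and prices) of how their output shapes differ may help:

```python
import pandas as pd

df = pd.DataFrame({
    "municipality": ["A", "A", "B", "B", "B"],
    "price": [10.0, 12.0, 20.0, 22.0, 21.0],
})

g = df.groupby("municipality")["price"]

# agg collapses each group to a single row ...
means = g.agg("mean")                   # index A, B -> 2 rows

# ... while transform broadcasts the result back to the original shape
df["group_mean"] = g.transform("mean")  # 5 rows, aligned with df

print(means.to_dict())
print(df["group_mean"].tolist())
```

So `agg` is the right tool for one number per municipality, while `transform` lets you attach the group statistic to every row.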
# +
# write your code here
# -
# **Answer:**
# + jupyter={"source_hidden": true}
# a. merge
full = pd.merge(pop, prices_long, on=['date','municipality'], how='left')
full.sort_values(['municipality','date'], inplace=True)
# b. take logs
full['log_population'] = np.log(full['population'])
full['log_price'] = np.log(full['price'])
# c. figure 1: log differences
ax = full.groupby('municipality').diff(1).plot(x = 'log_population', y = 'log_price', kind = 'scatter');
ax.set_xlabel('log difference in population')
ax.set_ylabel('log difference in price')
# d. figure 2: mean log differences
ax = full.groupby('municipality').agg(lambda x: np.mean(x.diff())).plot(x = 'log_population', y = 'log_price', kind = 'scatter');
ax.set_xlabel('within-municipality mean log difference in population')
ax.set_ylabel('within-municipality mean log difference in price');
| PS4/problem_set_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# We use logistic regression to perform binary classification of the spam vs. non-spam dataset.
import numpy as np
import matplotlib.pyplot as plt
def file_get_contents(filename):
return np.genfromtxt(filename, delimiter=',')
# We read in the data and split it into training data (80%) and test data (20%).
# +
data_original = file_get_contents('Resourcen/spambase.data')
def extract_training(data):
length = int(0.8 * len(data))
return data[:length]
def normal_training(data):
data = extract_training(data)
for i in range(len(data)):
data[i] = normalisation(data[i])
return data
def extract_test(data):
test = np.copy(data)
    np.random.shuffle(test)  # since the records are not randomly shuffled, we shuffle them ourselves here
length = int(0.2 * len(test))
return test[:length]
training_original = extract_training(data_original)
test_original = extract_test(data_original)
# -
# Before we start, we have to normalize the data.
# +
def normalisation(sample):
mean = get_mean(sample)
diff = np.max(sample) - np.min(sample)
for i in range(0, len(sample)):
sample[i] = (sample[i]-mean)/diff
return sample
def get_mean(data):
return (np.sum(data))/len(data)
# -
# We need an activation function that maps all input values into the interval [0, 1]. Here we apply the sigmoid function.
def sigmoid(x):
tmp = 1.0 + np.exp(-x)
result = 1.0 / tmp
return result
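# One caveat (not addressed in the original notebook): `np.exp(-x)` overflows for large negative `x` and triggers a RuntimeWarning. A numerically stable variant — a sketch, offered as an assumption-free alternative formulation — splits the computation by sign so the exponent is never positive:

```python
import numpy as np

def sigmoid_stable(x):
    # For x >= 0 use 1/(1+e^-x); for x < 0 rewrite as e^x/(1+e^x),
    # so np.exp is only ever called with a non-positive argument.
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    ex = np.exp(x[~pos])
    out[~pos] = ex / (1.0 + ex)
    return out

print(sigmoid_stable(np.array([-1000.0, 0.0, 1000.0])))
```

For extreme inputs this returns exactly 0.0 and 1.0 without any overflow warning.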
# To make training easier, we separate the samples from the labels.
# +
def get_samples(training):
training = np.delete(training, -1, axis=1)
return np.matrix(training)
def get_labels(training):
tmp = np.matrix(training[:, -1])
return np.transpose(tmp)
# -
# We define an error function and minimize it via gradient descent. Performance depends heavily on the choice of learning rate and number of iterations; here both were adjusted a few times until a relatively small error rate was reached.
def get_weight(training):
    data = get_samples(training)
    label = get_labels(training)
    learnRate = 0.99999
    m, n = np.shape(data)  # data matrix: (m*n)
    weight = np.zeros((n, 1))  # initial weight vector; zeros or random initialization
    for i in range(0, 5000):
        predict = sigmoid(np.dot(data, weight))  # (m*n)*(n*1)=(m*1) matrix
        error = np.subtract(predict, label)  # (m*1)-(m*1)
        tmp = np.dot(np.transpose(data), error)  # (n*m)*(m*1)=(n*1) matrix
        '''update the (n*1) weight vector (i.e. step along the descent direction)'''
        weight = np.subtract(weight, np.dot(learnRate, tmp))
        learnRate -= 0.00001  # start with a relatively large learning rate and shrink it step by step
    return weight
# Now we can classify: we check whether the sigmoid output is greater than 1/2, since it corresponds to the probability of the event.
def classify(training, test, weight):
testdata = test[: -1]
prediction = np.dot(np.transpose(weight), testdata)
probability = sigmoid(prediction)
    if (probability > 0.5):  # equivalent to p > 1 - p, where p lies in [0, 1]
return 1
else: return 0
# Let's take a look at the error rate.
# +
def error_rate(training, test):
weight = get_weight(training)
error = 0
for i in range(0, len(test)):
        label = test[i][-1]
        result = classify(training, test[i], weight)
        if (result != label):
error += 1
return (error/len(test))*100
print('error rate is', error_rate(training_original, test_original), '%')
# -
# Now we visualize the result to see what happened. Instead of looking at all n dimensions, we plot two derived features: the mean and the maximum (no minimum, because the minimum is 0 for every sample).
# +
def ploting(data, colour):
data = np.matrix(data)
mean = []
max = []
for i in range(len(data)):
mean_i = get_mean(data[i])
max_i = np.max(data[i])
mean.append(mean_i)
max.append(max_i)
mean = np.array(list(np.matrix(mean)))
max = np.array(list(np.matrix(max)))
x = mean[0][0]
y = max[0][0]
plt.scatter(x, y, 0.1, color= colour)
plt.xlim(0, 500)
plt.ylim(0, 500)
def get_class_plot(training, test):
weight = get_weight(training)
spam = []
non_spam = []
for i in range(0, len(test)):
result = classify(training, test[i], weight)
if (result == 1):
spam.append(test[i])
else: non_spam.append(test[i])
ploting(spam, 'r')
ploting(non_spam, 'g')
plt.show()
print(error_rate(training, test))
data = list(data_original)
training = np.matrix(extract_training(data))
test = extract_test(data)
print(get_class_plot(training_original, test))
# -
# Finally, we produce a confusion matrix.
# +
def confusion_matrix(training, test, weight):
    matrix = np.zeros((2, 2))
    for i in range(0, len(test)):
        if (classify(training, test[i], weight) == 1):
            if (test[i][57] == 1):
                matrix[0][0] += 1  # true positive
            else:
                matrix[1][0] += 1  # false positive
        elif (test[i][57] == 0):
            matrix[1][1] += 1  # true negative
        else:
            matrix[0][1] += 1  # false negative
    return matrix
weight = get_weight(training_original)
print(confusion_matrix(training_original, test_original, weight))
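# From such a 2x2 matrix, precision and recall follow directly. A small sketch with made-up counts — the cell layout in the comments (TP at [0][0], FN at [0][1], FP at [1][0], TN at [1][1]) is a convention assumed for this illustration:

```python
import numpy as np

# Hypothetical counts, assuming the layout
# matrix[0][0]=TP, matrix[0][1]=FN, matrix[1][0]=FP, matrix[1][1]=TN
matrix = np.array([[80.0, 10.0],
                   [15.0, 95.0]])

tp, fn = matrix[0]
fp, tn = matrix[1]

precision = tp / (tp + fp)  # of everything predicted spam, the fraction that really is spam
recall = tp / (tp + fn)     # of all actual spam, the fraction we caught
print(round(precision, 3), round(recall, 3))
```

For spam filtering, precision matters most: a low precision means legitimate mail lands in the spam folder.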
| logistic-regression/Logistic Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# %pylab inline
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
import xgboost as xgb
from soln.dataset import AllCategoricalsFeaturizer
from soln.dataset import featurize_and_to_numpy
from soln.dataset import generate_xv_splits
from soln.dataset import get_augmented_train_and_test_set
from soln.utils import eval_regressor
# -
# %time aug_train_set, aug_test_set = get_augmented_train_and_test_set()
params = {
'objective': 'reg:linear',
'eta': 0.02,
'min_child_weight': 6,
'subsample': 0.7,
'colsample_bytree': 0.6,
'scale_pos_weight': 0.8, # undocumented?!
'silent': 1,
'max_depth': 8,
'max_delta_step': 2,
}
# +
featurizer = AllCategoricalsFeaturizer()
num_rounds = 1000
train_rmsles = []
test_rmsles = []
for i, split in enumerate(generate_xv_splits(aug_train_set)):
print "---------------------- split {}".format(i)
# %time split_np = featurize_and_to_numpy(featurizer, *split)
X_train_np, y_train_np, X_test_np, y_test_np = split_np
xgtrain = xgb.DMatrix(X_train_np, label=y_train_np)
xgtest = xgb.DMatrix(X_test_np)
# %time model = xgb.train(params.items(), xgtrain, num_rounds)
# %time y_train_pred = model.predict(xgtrain)
train_rmsle = np.sqrt(mean_squared_error(y_train_np, y_train_pred))
# %time y_test_pred = model.predict(xgtest)
test_rmsle = np.sqrt(mean_squared_error(y_test_np, y_test_pred))
print "train_rmsle {}; test_rmsle {}".format(train_rmsle, test_rmsle)
train_rmsles.append(train_rmsle)
test_rmsles.append(test_rmsle)
print
print "------------------------------ averages:"
print " train RMSLE avg {} std {}".format(np.mean(train_rmsles), np.std(train_rmsles))
# print " train RMSLEs: {}".format(train_rmsles)
print " test RMSLE avg {} std {}".format(np.mean(test_rmsles), np.std(test_rmsles))
# print " test RMSLEs: {}".format(test_rmsles)
print
# -
| exploration/xv_eval_xgboost.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/semishen/ML100Days/blob/master/Day_009_HW.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="L8XfQEbr1Pnt" colab_type="text"
# # Inspecting and Handling Outliers
# ### Why outliers occur: common causes
# * Unknown values filled in arbitrarily (by convention), e.g. ages recorded as 0 or 999
# * Possible recording errors / typos / systematic errors, e.g. an order listing 1000 copies of a single book
# + [markdown] id="sTdDwMzy1Pnu" colab_type="text"
# # [Assignment goal]
# - Following the hints and guidance below, inspect possible outliers in several different ways
# + [markdown] id="aReOwjfG1Pnv" colab_type="text"
# # [Assignment focus]
# - From the raw data, screen the columns that may contain outliers (In[3], Out[3])
# - Plot the empirical cumulative distribution function (ECDF) of the target values and compare it with the normal CDF to check for outliers (In[6], Out[6], In[7], Out[7])
# + id="MgPPoZlm1Pnv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="f3aee6ba-a854-4605-ea4c-958a8d18bed4"
# Import the required packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# + id="7FcbgT_g1Pny" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 321} outputId="ff635253-38a1-4251-c8b8-7f096af867e3"
app_train = pd.read_csv('application_train.csv')
app_train.head()
# + id="PpvpXNsME6W7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 89} outputId="b616c1ee-72b3-41ad-b3fb-321ead37a6fd"
app_train.dtypes.value_counts()
# + [markdown] id="WAaMhTSg1Pn1" colab_type="text"
# ## Referring to the column descriptions in HomeCredit_columns_description.csv, list three columns you suspect may contain outliers and explain the possible causes
# + id="N68j_BmV1Pn1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="dc39f227-9aa1-46fc-faef-abb7f7d0f3f6"
# First, select the numeric columns
numeric_columns = list(app_train.columns[list((app_train.dtypes == 'int64') | (app_train.dtypes == 'float64'))])
print("{} columns with number type".format(len(numeric_columns)))
# Then drop columns with only 2 distinct values (usually 0/1)
numeric_columns = list(app_train[numeric_columns].columns[list(app_train[numeric_columns].apply(lambda x:len(x.unique())!=2 ))])
print("{} numeric columns without bool type".format(len(numeric_columns)))
# Inspect the value ranges of these columns
for col in numeric_columns:
sns.boxplot(y = col, data = app_train)
plt.show()
# + id="f2en7Tlh1Pn4" colab_type="code" colab={}
# Judging from the plots above, at least these three columns look suspicious:
# AMT_INCOME_TOTAL
# REGION_POPULATION_RELATIVE
# OBS_60_CNT_SOCIAL_CIRCLE
# + [markdown] id="m1oaoJ-t1Pn6" colab_type="text"
# ### Hints: Emprical Cumulative Density Plot, [ECDF](https://zh.wikipedia.org/wiki/%E7%BB%8F%E9%AA%8C%E5%88%86%E5%B8%83%E5%87%BD%E6%95%B0), [ECDF with Python](https://stackoverflow.com/questions/14006520/ecdf-in-python-without-step-function)
# + id="MTbE7adS1Pn6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 757} outputId="6dcbb38e-0ae8-45c1-d3b3-02c2b2bbaebc"
# The maximum is far from both the mean and the median
#print(app_train['AMT_INCOME_TOTAL'].describe())
# Plot the empirical cumulative distribution function (ECDF)
"""
YOUR CODE HERE
"""
value_counts_df = app_train['AMT_INCOME_TOTAL'].value_counts()
#print(value_counts_df)
sorted_df = value_counts_df.sort_index()
#print(sorted_df)
cdf = sorted_df.cumsum()
#print(cdf)
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.xlim([cdf.index.min(), cdf.index.max() * 1.05]) # limit the displayed x-range
plt.ylim([-0.05,1.05]) # limit the displayed y-range
plt.show()
# Change the x-axis to a log scale so we can view the ECDF properly
plt.bar(np.log(list(cdf.index)), cdf/cdf.max())
plt.xlabel('Value (log-scale)')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed y-range
plt.show()
print(app_train['AMT_INCOME_TOTAL'].value_counts().sort_index(ascending = False))
# + [markdown] id="mMI_4YLq1Pn9" colab_type="text"
# ## Supplement: the ECDF of a normal distribution
# 
# + id="Z2WQykgJ1Pn9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 905} outputId="d50059b1-0a0e-4fdb-d404-76c6645ab614"
# The maximum lies outside the bulk of the distribution
print(app_train['REGION_POPULATION_RELATIVE'].describe())
# Plot the empirical cumulative distribution function (ECDF)
"""
Your Code Here
"""
cdf = app_train['REGION_POPULATION_RELATIVE'].value_counts().sort_index().cumsum()
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed y-range
plt.show()
app_train['REGION_POPULATION_RELATIVE'].hist()
plt.show()
print(app_train['REGION_POPULATION_RELATIVE'].value_counts().sort_index(ascending = False))
# For this particular column, data falling outside the bulk of the distribution is not necessarily abnormal; it just means this company has fewer branches in busier areas,
# so region_population_relative is dense at the small values and sparse at the large ones
# + id="tF2-ydeN1PoA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="adf755e7-1f5e-49f4-f13f-04a72aefcd06"
# The maximum lies outside the bulk of the distribution
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].describe())
# Plot the empirical cumulative distribution function (ECDF)
cdf = app_train['OBS_60_CNT_SOCIAL_CIRCLE'].value_counts().sort_index().cumsum()
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.xlim([cdf.index.min() * 0.95, cdf.index.max() * 1.05])
plt.ylim([-0.05,1.05]) # limit the displayed y-range
plt.show()
app_train['OBS_60_CNT_SOCIAL_CIRCLE'].hist()
plt.show()
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].value_counts().sort_index(ascending = False))
# + [markdown] id="GxtGO3461PoC" colab_type="text"
# ## Note: when a histogram looks like the one above (a single bar, with the x-axis stretching so far right that a large blank area appears), it means there are values on the right but very few of them. In that case, consider using value_counts to locate those values
# + id="k-oUydiU1PoC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="5da1f26e-ac8e-4c64-d904-21bd64ecf070"
# Temporarily remove some extreme values and plot the histogram again
# Plot only the data points where OBS_60_CNT_SOCIAL_CIRCLE is less than 20
"""
Your Code Here
"""
loc_a = list(app_train['OBS_60_CNT_SOCIAL_CIRCLE'] < 20)
loc_b = ['OBS_60_CNT_SOCIAL_CIRCLE']
app_train.loc[loc_a, loc_b].hist()
plt.show()
| Day_009_HW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/LondonInternational/Data-Science-Batch-A-OCTAIML2021/blob/main/Session_V_IntrotoPythonProgramming.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="TNYU_Lze-W-p"
# Data Structures: List, Set, Tuple & Dictionary
# + colab={"base_uri": "https://localhost:8080/"} id="i2XJRyqje9Eg" outputId="1c4ac597-d8b6-40c1-e979-aafff760c500"
x = 90
y = 100
z = 34
d = 67
print(x+y+z+d)
# + colab={"base_uri": "https://localhost:8080/"} id="F4ab7-1RfLkj" outputId="097ee011-e580-478f-d352-5fcd330ea0ec"
# List
numList = [90, 100, 34, 67]
print(numList)
# + colab={"base_uri": "https://localhost:8080/"} id="Q7MtuKnyfYQa" outputId="9560208f-ca60-4c23-90f1-b6ac742b8fa0"
sumi = 0
for item in numList:
    sumi = sumi + item
print(sumi)
print(sum(numList))
# + colab={"base_uri": "https://localhost:8080/"} id="DoL3fvz4g90W" outputId="94dda62f-303a-4a38-9708-a5aaa2970ca0"
sum([4,5,6,7])
# + colab={"base_uri": "https://localhost:8080/"} id="ocTxyr8Phdng" outputId="7abb116a-9977-42b5-e4df-5d90ea5192fd"
# Create an empty list
xList = []
print(xList)
# + colab={"base_uri": "https://localhost:8080/"} id="zkI7E6FQh1Io" outputId="dae37b43-bba1-4bc4-b97c-761bbf0cfce3"
# List is heterogeneous in nature - it can hold elements of mixed types
xList = [34,56,'sata',67.89]
print(xList)
# + colab={"base_uri": "https://localhost:8080/"} id="L94PZ5u1h7OY" outputId="43c3e539-3cda-4057-fd3c-b32d57a967a3"
# List supports indexing mechanism - index starts with 0 - positive index/negative index
# pos index starts with 0 -> first element
# negative starts with -1 -> last element
xList = [34,56,689,44,778]
print(xList[0]) # first element
print(xList[2]) # third element
print(xList[-1]) # last element
print(xList[-2]) # second last element
print(xList[-5]) # first element
# + colab={"base_uri": "https://localhost:8080/", "height": 167} id="mHLg1CNxjdgv" outputId="d555b672-5182-4513-a100-9d0b48d90f23"
print(xList[20]) # no element -> indexerror
# + colab={"base_uri": "https://localhost:8080/", "height": 167} id="A0e0O-2VjukA" outputId="c0726f0e-6df3-4c6b-b55f-b1776d0eb4a7"
print(xList[-20]) # no element -> indexerror
# + colab={"base_uri": "https://localhost:8080/"} id="99PGrSo4j3Z4" outputId="10720144-a4cd-47d1-8b05-dc4f5d461e20"
numList = [34,56,31] # 34*56*31 -> 59024
result = 1
for item in numList:
    result = result * item
print(result)
# + colab={"base_uri": "https://localhost:8080/"} id="lpIgNrKOlE-S" outputId="7f01dff3-46f2-4a76-8005-e58dd1f63d26"
# append() -> add one element in a list - last position
xList = [2,3,4]
print(xList)
xList.append(100)
print(xList)
xList = [2,3,4]
print(xList)
xList.append([100,34,56])
print(xList)
# + colab={"base_uri": "https://localhost:8080/"} id="E4CWrLPSlw_r" outputId="8ee1b643-d9f8-433e-f78f-5a00c275f177"
# extend() -> add several elements in a list - last position
xList = [2,3,4]
print(xList)
xList.extend([100,34,56])
print(xList)
# + colab={"base_uri": "https://localhost:8080/"} id="DlmNchjpmNTk" outputId="2825a702-bf3d-4526-a998-48f258056b0f"
xlist = [45,67,87]
xlist.append('amit')
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="HevKi-qOmpX-" outputId="92ccfc2f-a345-41f6-f375-4fffcdd17bc9"
xlist = [45,67,87]
xlist.extend('amit')
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="Uv_Hx7AYm8Cs" outputId="c6c4063d-3840-4706-870e-263ef3f252d4"
xlist = [45,67,87]
xlist.append([67,100])
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="qRAFub6onASU" outputId="13ed270f-40be-4f8c-e0fb-cc36a87d246e"
xlist = [45,67,87]
xlist.extend([67,100])
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="bIDM1cnInGFk" outputId="10956a13-41e0-4bd9-ff49-03a081c783a1"
xlist = [45,67,87]
xlist.append(100)
print(xlist)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="Wr9Yqp3snfDx" outputId="92a6bf3d-5c30-46d9-a3b0-3e40dc9162d9"
xlist = [45,67,87]
xlist.extend(100) # raises TypeError: extend() needs an iterable, and an int is not iterable
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="89kTIH92npjl" outputId="c806af07-2c96-4958-da6c-412d1e03f675"
xlist = [45,67,87]
xlist.extend([100])
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="wCdSD-rgn38H" outputId="5ffb3831-101d-47f2-8487-b49b6ab723aa"
xlist = [45,67,87]
xlist.insert(2,4000) # insert(index,item) function
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="DiDQlL-hoKhg" outputId="19a84db6-1998-4c59-ec96-5bb94cfa114a"
xlist = [45,67,87]
xlist.remove(67) # remove(value) function
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="6Qat4KVXoe2Y" outputId="b6b7d1f0-2a1e-49ec-f2c3-212ac4a13965"
xlist = [45,67,87]
xlist.pop(0) # pop(index) function - removes and returns the value at that index
print(xlist)
# + colab={"base_uri": "https://localhost:8080/"} id="XB2B437FouWz" outputId="43f5c8a0-153d-4a98-ddec-f1db7b83e7da"
xlist = [45,67,4500,5000,87]
print(xlist.index(5000))
# + colab={"base_uri": "https://localhost:8080/"} id="z25ZAHanpIGX" outputId="14099037-7bc4-4180-b7d2-15371eca632c"
# List Slicing
xlist = [45,67,4500,5000,8,7,'lo']
print(xlist[2:4]) # listname[startindex:stopindex]
print(xlist[3:])
print(xlist[-1:-4:-1]) # listname[startindex:stopindex:step]
print(xlist[2:5])
# + colab={"base_uri": "https://localhost:8080/"} id="nNKlokEpqLGc" outputId="6f5a890e-c12c-4a7b-cb85-36b256641133"
# listname[startindex:stopindex:step]
numList = [3,4,5,6,7,60,9] # 3,5,7,9
print(numList[0::2])
# + colab={"base_uri": "https://localhost:8080/"} id="tf_5HUJtrMyI" outputId="4c028291-eba5-4297-a66a-a73df3e0a38a"
alList = []
for number in range(65, 91):
    alList.append(chr(number))
print(alList)
# + colab={"base_uri": "https://localhost:8080/"} id="Strv3eIHrrf0" outputId="462c9a97-fa56-4a22-d81d-a3cdbbbda38c"
# fetch every 4th letter from alList [ startindex : stopindex : step]
alList[0::4] # sublist or slice of list
# + colab={"base_uri": "https://localhost:8080/"} id="Aya1Y87_sxtK" outputId="7e220953-55e5-49d3-c2eb-4ed8d933c4a2"
# fetch every 2nd letter from alList in reverse order, alList [ startindex : stopindex : step]
alList[-1::-2]
# + id="oAyfgEBCsIX4"
# # reverse a list, alList [ startindex : stopindex : step]
# alList[-1::-1]
# + colab={"base_uri": "https://localhost:8080/"} id="ggwZyYPXtBkS" outputId="6eaa8adc-e17a-4a98-dfa8-e78aedd2826c"
alList[:]
# + colab={"base_uri": "https://localhost:8080/"} id="NmcFXS6ltHjt" outputId="ebc0fd1c-01b2-4a02-b002-662baa565d5a"
alList[::]
# + colab={"base_uri": "https://localhost:8080/"} id="L7-2oyo1tOWo" outputId="2e3ef674-82d5-4680-afc5-c52dda1e7c27"
len(alList)
# + colab={"base_uri": "https://localhost:8080/"} id="LhXJpVpCtTTo" outputId="338d6efd-4a9e-4d22-cfa0-882fd6cb62f2"
numList = [8,7,8,56]
print(sum(numList))
# + colab={"base_uri": "https://localhost:8080/"} id="vFO7Ii-eta0I" outputId="13459cb1-c7b2-4d5f-ec2a-f05b7b78466b"
numList = [8,7,8,56]
print(sorted(numList)) # ascending order
print(sorted(numList,reverse=True)) # descending order
numList.sort()
print(numList)
# + id="FpbbPcXitatD"
# List Methods: append,extend,index,remove,pop,sort,count,reverse,clear
# Built-in Functions: len,type,isinstance,print,input,sorted,sum,del,max,min
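# A few of the built-in functions listed above (len, type, isinstance, max, min) are not demonstrated elsewhere in this section; a minimal sketch:

```python
numList = [8, 7, 8, 56]

print(len(numList))               # number of elements
print(type(numList))              # <class 'list'>
print(isinstance(numList, list))  # True
print(max(numList))               # largest element
print(min(numList))               # smallest element
```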
# + colab={"base_uri": "https://localhost:8080/"} id="WvJ3aSv0tajv" outputId="f74667c8-abdb-4ed3-a3a1-221faac8fe50"
numList = [8,7,8,56]
print(numList.count(8))
# + colab={"base_uri": "https://localhost:8080/"} id="RN2xc4aBu9KG" outputId="17cdcf53-5255-4996-e6be-31ac7ec99f0a"
numList = [8,7,8,56]
numList.reverse()
print(numList)
# + colab={"base_uri": "https://localhost:8080/"} id="saFP59WtvE87" outputId="3d80f565-b1c4-4067-c222-9e271f6f0143"
numList = [8,7,8,56]
numList.clear()
print(numList)
# + colab={"base_uri": "https://localhost:8080/"} id="3dO3EddYv6_7" outputId="17f0044c-d522-4631-faf8-d36316966d7d"
numList = [8,7,800,500,67,8,23,6]
del numList[1:3] # delete multiple elements from list
print(numList)
# + colab={"base_uri": "https://localhost:8080/"} id="mi7aNHOWwhkv" outputId="a456ed03-92ff-493f-d477-76f9d7828587"
numList = [8,7,800,500,67,8,23,6]
del numList[2] # delete a single element from list
print(numList)
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="EyoHbmiHvhc4" outputId="a5be674a-0173-4893-b092-0b67ad5fbd71"
numList = [8,7,8,56]
del numList # delete the entire list; referring to it afterwards raises a NameError
print(numList)
# + colab={"base_uri": "https://localhost:8080/"} id="KZhm6YVBvrzZ" outputId="e47e7bf9-e5c7-4329-c3c8-8b154a89dc92"
numList = [8,7,8,56]
print(45 in numList)
print(45 not in numList)
# + id="usZSdvrMv5S0"
# Python program to interchange first and last elements in a list
# Python program to swap two elements in a list
# Python | Ways to find length of list
# Python | Ways to check if element exists in list
# Different ways to clear a list in Python
# Python | Reversing a List
# Python program to find sum of elements in list
# Python | Multiply all numbers in the list
# Python program to find smallest number in a list
# Python program to find largest number in a list
# Python program to find second largest number in a list
# Python program to find N largest elements from a list
# Python program to print even numbers in a list
# Python program to print odd numbers in a List
# Python program to print all even numbers in a range
# Python program to print all odd numbers in a range
# Python program to print positive numbers in a list
# Python program to print negative numbers in a list
# Python program to print all positive numbers in a range
# Python program to print all negative numbers in a range
# Remove multiple elements from a list in Python
# Python – Remove empty List from List
# Python | Count occurrences of an element in a list
# Python | Program to print duplicates from a list of integers
# Python | Sum of number digits in List
# Break a list into chunks of size N in Python
# Python | Sort the values of first list using second list
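# Possible solutions to the first two practice prompts above (one approach of many; the function names are illustrative):

```python
def interchange_first_last(lst):
    # swap the first and last elements in place via tuple unpacking
    if len(lst) >= 2:
        lst[0], lst[-1] = lst[-1], lst[0]
    return lst

def swap_elements(lst, i, j):
    # swap the elements at positions i and j
    lst[i], lst[j] = lst[j], lst[i]
    return lst

print(interchange_first_last([12, 35, 9, 56, 24]))  # [24, 35, 9, 56, 12]
print(swap_elements([23, 65, 19, 90], 0, 2))        # [19, 65, 23, 90]
```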
# ---
# jupyter:
# jupytext:
# split_at_heading: true
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from fastai2.basics import *
import gzip
import pickle
# ## MNIST SGD
# Get the 'pickled' MNIST dataset from http://deeplearning.net/data/mnist/mnist.pkl.gz. We're going to treat it as a standard flat dataset with fully connected layers, rather than using a CNN.
path = Config().data/'mnist'
path.ls()
with gzip.open(path/'mnist.pkl.gz', 'rb') as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
plt.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape
x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train.shape, y_train.min(), y_train.max()
# In lesson2-sgd we did these things ourselves:
#
# ```python
# x = torch.ones(n,2)
# def mse(y_hat, y): return ((y_hat-y)**2).mean()
# y_hat = x@a
# ```
#
# Now instead we'll use PyTorch's functions to do it for us, and also to handle mini-batches (which we didn't do last time, since our dataset was so small).
from torch.utils.data import TensorDataset
bs=64
train_ds = TensorDataset(x_train, y_train)
valid_ds = TensorDataset(x_valid, y_valid)
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True)
valid_dl = TfmdDL(valid_ds, bs=2*bs)
dls = DataLoaders(train_dl, valid_dl)
x,y = dls.one_batch()
x.shape,y.shape
class Mnist_Logistic(Module):
    def __init__(self): self.lin = nn.Linear(784, 10, bias=True)
    def forward(self, xb): return self.lin(xb)
model = Mnist_Logistic().cuda()
model
model.lin
model(x.to(torch.cuda.current_device())).shape  # model(x).shape would fail here because x is still on the CPU
[p.shape for p in model.parameters()]
lr=2e-2
loss_func = nn.CrossEntropyLoss()
def update(x, y, lr):
    wd = 1e-5
    y_hat = model(x)
    # weight decay
    w2 = 0.
    for p in model.parameters(): w2 += (p**2).sum()
    # add to regular loss
    loss = loss_func(y_hat, y) + w2*wd
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p.sub_(lr * p.grad)
            p.grad.zero_()
    return loss.item()
losses = [update(x.to(torch.cuda.current_device()), y.to(torch.cuda.current_device()), lr) for x, y in dls.train]  # originally: losses = [update(x, y, lr) for x, y in dls.train]
plt.plot(losses);
class Mnist_NN(Module):
    def __init__(self):
        self.lin1 = nn.Linear(784, 50, bias=True)
        self.lin2 = nn.Linear(50, 10, bias=True)
    def forward(self, xb):
        x = self.lin1(xb)
        x = F.relu(x)
        return self.lin2(x)
model = Mnist_NN().cuda()
losses = [update(x.to(torch.cuda.current_device()), y.to(torch.cuda.current_device()), lr) for x, y in dls.train]  # originally: losses = [update(x, y, lr) for x, y in dls.train]
plt.plot(losses);
model = Mnist_NN().cuda()
def update(x, y, lr):
    # note: creating the optimizer inside update() resets Adam's running statistics on every step
    opt = torch.optim.Adam(model.parameters(), lr)
    y_hat = model(x)
    loss = loss_func(y_hat, y)
    loss.backward()
    opt.step()
    opt.zero_grad()
    return loss.item()
losses = [update(x.to(torch.cuda.current_device()), y.to(torch.cuda.current_device()), lr) for x, y in dls.train]  # originally: losses = [update(x, y, 1e-3) for x, y in dls.train]
plt.plot(losses);
learn = Learner(dls, Mnist_NN(), loss_func=loss_func, metrics=accuracy)
from fastai2.callback.all import *
learn.lr_find()
learn.fit_one_cycle(1, 1e-2)
learn.recorder.plot_sched()
learn.recorder.plot_loss()
# ## fin
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # The GLM, part 2: inference
# In this notebook, we'll continue with the GLM, focusing on statistical tests (i.e., inference) of parameters. Note that there are two notebooks this week: this one, `glm_part2_inference.ipynb`, and `design_of_experiments.ipynb`. Please do this one first.
#
# Last week, you learned how to estimate parameters of the GLM and how to interpret them. This week, we'll focus on statistical inference of those estimated parameters (and design of experiment, in another notebook). Importantly, we are going to introduce the most important formula in the context of univariate fMRI analyses: the formula for the *t-value*. Make sure you understand this formula, as we will continue to discuss it in the next weeks.
#
# **What you'll learn**: after this week's lab ...
# * you know the different parts of the *t*-value formula and how they relate to your data and experiment;
# * you are able to calculate *t*-values and corresponding *p*-values of parameters from a GLM;
#
# **Estimated time needed to complete**: 1-3 hours <br>
# First some imports
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import inv
# %matplotlib inline
# ## Introduction
# From your statistics classes, you might remember that many software packages (e.g. SPSS, R, SAS) return not only the beta-parameters of linear regression models, but also *t*-values and *p*-values associated with those parameters. These statistics evaluate whether a beta-parameter (or combination of beta-parameters) differs significantly from 0 (or, in fMRI terms: whether a voxel activates/deactivates significantly in response to one or more experimental factors).
#
# In univariate (activation-based) fMRI studies, we need statistics to evaluate the estimated parameters in context of the *uncertainty* of their estimation. As we'll discuss later in more detail, interpreting (and performing inference about) the magnitude of GLM parameters without their associated uncertainty is rarely warranted in univariate fMRI studies. To illustrate the problem with this, let's look at an example.
#
# In this example, we try to predict someone's height (in meters; $\mathbf{y}$) using someone's weight (in kilos; $\mathbf{X}$). (Note that the data is not necessarily representative of the true relationship between height and weight.)
#
# Anyway, let's run a linear regression using weight (in kilos) as a predictor for height (in meters).
# +
data = np.load('weight_height_data.npz')
X, y = data['X'], data['y']
plt.figure(figsize=(10, 6))
plt.scatter(X, y)
plt.title('Relation between weight and height (in meters)', y=1.05, fontsize=20)
plt.xlabel('Weight (kg)', fontsize=20)
plt.ylabel('Height (meters)', fontsize=20)
Xn = np.hstack((np.ones((y.size, 1)), X))
beta = inv(Xn.T @ Xn) @ Xn.T @ y
y_hat = Xn @ beta
mse = np.mean((y_hat - y) ** 2)
plt.plot([X.min(), X.max()], [Xn.min(axis=0) @ beta, Xn.max(axis=0) @ beta], ls='-', c='r')
plt.xlim((X.min(), X.max()))
plt.text(70, 1.9, r'$\hat{\beta}_{weight} = %.5f$' % beta[1], fontsize=18)
plt.text(70, 1.8, r'$MSE = %.5f$' % mse, fontsize=18)
plt.grid()
plt.show()
# -
# Well, quite a modest beta-parameter on the one hand, but on the other hand the Mean Squared Error is also quite low.
# Now, to illustrate the problem of interpreting 'raw' beta-weights, let's rephrase our objective of predicting height based on weight: we'll try to predict *height in centimeters* based on weight (still in kilos). So, what we'll do is just rescale the data points of $\mathbf{y}$ (height in meters) so that they reflect height in centimeters. We can simply do this by multiplying our $\mathbf{y}$ by 100.
y_cm = y * 100
# Now, you wouldn't expect our model to change, right? We only rescaled our target ... As you'll see below, this actually changes a lot!
# <div class='alert alert-warning'>
# <b>ToDo</b> (0 points): Run linear regression like the previous code block, but with <tt>y_cm</tt> instead of <tt>y</tt> as the target variable. You can use the same design (<tt>Xn</tt>). Calculate the beta-parameter and MSE (store them in the variables <tt>beta_cm</tt> and <tt>mse_cm</tt>).
# </div>
# + deletable=false nbgrader={"cell_type": "code", "checksum": "7c0eae7138d8a700d8575f21ad675a3e", "grade": false, "grade_id": "cell-a67cba1915b72950", "locked": false, "schema_version": 3, "solution": true} tags=["raises-exception", "remove-output"]
''' Implement the ToDo here. '''
# YOUR CODE HERE
raise NotImplementedError()
print(beta_cm)
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "1e224b06f09c0995243ea2afac3a24f9", "grade": true, "grade_id": "cell-60d8290f1add9cc2", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false} tags=["raises-exception", "remove-output"]
''' Tests the above ToDo'''
np.testing.assert_almost_equal(beta_cm, beta * 100, decimal=4)
np.testing.assert_almost_equal(mse_cm, mse * 10000, decimal=4)
print("Well done!")
# -
# If you did it correctly, when you compare the beta-parameters between the two models (one where $y$ is in meters, and one where $y$ is in centimeters), you see a massive difference: a 100-fold difference, to be exact\*! This is a nice example where you see that the (raw) value of the beta-parameter is completely dependent on the scale of your variables. (Actually, you could rescale either $\mathbf{X}$ or $\mathbf{y}$; both will have a similar effect on your estimated beta-parameter.)
# <div class='alert alert-info'>
# <b>ToThink</b> (0 points): Note that the MSE is 10,000 times larger in the model with <tt>y_cm</tt> compared to <tt>y</tt> (in meters). From your understanding of how MSE is calculated, do you understand why?
# </div>
# <div class='alert alert-info'>
# <b>ToThink</b> (2 points): By now, you know that the scale of the data (either $X$ or $y$) influences the magnitude of the raw parameter estimates. One could argue that this is not relevant for fMRI data, because all data (i.e., different voxels in the brain) measure the same type of signal, so their scale shouldn't differ that much. This, however, is a false assumption.
#
# Think of (at least) two reasons why voxels might differ in their scale and write them down in the text cell below.
# </div>
# + [markdown] deletable=false nbgrader={"cell_type": "markdown", "checksum": "d472b006c4fd2e42eba1bc902d4e02e3", "grade": true, "grade_id": "cell-2602edc5df20bc9f", "locked": false, "points": 2, "schema_version": 3, "solution": true}
# YOUR ANSWER HERE
# -
# ## How to compute *t*-values and *p*-values
# So, you've seen that interpreting beta-parameters by themselves is useless because their value depends very much on the scale of your variables. But how should we, then, interpret the effects of our predictors on our target-variable? From the plots above, you probably guessed already that it has something to do with the MSE of our model (or, more generally, the model fit). That is indeed the case. As you might have noticed, not only do the beta-parameters depend on the scale of your data, but the errors (residuals) depend on it as well. In other words, not only the *effect* (beta-values) but also the *noise* (errors, MSE) depends on the scale of the variables!
#
# ### *t*-values
# In fact, the key to getting interpretable effects of our predictors is to divide ("normalize") our beta-parameter(s) by some quantity that summarizes how well our model describes the data. This quantity is the **standard error of the beta-parameter**, usually denoted by $\mathrm{SE}_{\beta}$. The standard error of the beta-parameter can be computed by taking the square root of the **variance of the beta-parameter**. If we divide our beta-estimate by its standard error, we compute a statistic you are all familiar with: the *t*-statistic! Formally:
#
# \begin{align}
# t_{\hat{\beta}} = \frac{\hat{\beta}}{\mathrm{SE}_{\hat{\beta}}} = \frac{\hat{\beta}}{\sqrt{\mathrm{var}(\hat{\beta})}}
# \end{align}
# <div class='alert alert-info'>
# <b>ToThink</b> (0 points): Suppose that I know the $\mathrm{SE}$ of a particular beta-parameter. How can I derive the variance of that parameter (i.e., how do I go from the $\mathrm{SE}$ to the variance)? And yes, the answer is as straightforward as you'd think.
# </div>
# Another way to think about it is that the t-value is the "effect" ($\hat{\beta}$) divided by your (un)certainty or confidence in the effect ($\mathrm{SE}_{\hat{\beta}}$). In a way, you can think of t-values as "uncertainty-normalized" effects.
#
# So, what drives (statistical) uncertainty about "effects" (here: $\hat{\beta}$ parameters)? To find out, let's dissect the uncertainty term, $\mathrm{SE}_{\hat{\beta}}$, a little more. The standard error of a parameter can be interpreted conceptually as the "unexplained variance of the model" (or *noise*) multiplied by the "design variance" (or: *the variance of the parameter due to the design*). In this lab, we won't explain what *design variance* means or how to compute it, as this is the topic of the second notebook of this week (`design_of_experiments`).
#
# For now, we treat "design variance", here, as some known (constant) value given the design matrix ($\mathbf{X}$). So, with this information, we can construct a conceptual formula for the standard error of our parameter(s):
#
# \begin{align}
# \mathrm{SE}_{\hat{\beta}} = \sqrt{\mathrm{noise} \cdot \mathrm{design\ variance}}
# \end{align}
#
# Now we also create a "conceptual formula" for the *t*-statistic:
#
# \begin{align}
# t_{\hat{\beta}} = \frac{\hat{\beta}}{\mathrm{SE}_{\hat{\beta}}} = \frac{\mathrm{effect}}{\sqrt{\mathrm{noise} \cdot \mathrm{design\ variance}}}
# \end{align}
#
# **This (conceptual) formula involving effects, noise, and design variance is probably the most important concept of this course**. The effects (*t*-values) we measure in GLM analyses of fMRI data depend on two things: the effect measured ($\hat{\beta}$) and the (un)certainty of the effect ($SE_{\hat{\beta}}$), of which the latter term can be divided into the unexplained variance ("noise") and the design variance (uncertainty of the parameter due to the design).
#
# These two terms (noise and design variance) will be central to the next couple of weeks of this course. In this week's second notebook (topic: design of experiments), we'll focus on how to optimize our *t*-values by minimizing the "design variance" term. Next week (topic: preprocessing), we'll focus on how to (further) optimize our *t*-values by minimizing the error/noise.
#
# While we're going to ignore the design variance for now, we are, however, going to learn how to calculate the "noise" term.
#
# In fact, the noise term is *very* similar to the MSE, but instead of taking the *mean* of the squared residuals, we sum the squared residuals (the "sum of squared errors", SSE) and divide it by the model's degrees of freedom (df). People usually use the $\hat{\sigma}^{2}$ symbol for this noise term:
#
# \begin{align}
# \mathrm{noise} = \hat{\sigma}^{2} = \frac{\sum_{i=1}^{N}(\hat{y_{i}} - y_{i})^2}{\mathrm{df}}
# \end{align}
#
# where the degrees of freedom (df) are defined as the number of samples ($N$) minus the number of predictors *including the intercept* ($P$):
#
# \begin{align}
# \mathrm{df} = N - P
# \end{align}
#
# So, the formula of the *t*-statistic becomes:
#
# \begin{align}
# t_{\hat{\beta}} = \frac{\hat{\beta}}{\sqrt{\frac{\sum_{i=1}^{N}(\hat{y_{i}} - y_{i})^2}{N - P} \cdot \mathrm{design\ variance}}}
# \end{align}
#
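# As a quick numeric sketch of the df and noise terms in this formula (toy numbers, not the notebook's data):

```python
import numpy as np

# toy data: N = 5 observations, P = 2 predictors (intercept + slope)
y = np.array([1.0, 2.0, 2.9, 4.2, 5.1])
y_hat = np.array([1.1, 1.9, 3.0, 4.0, 5.0])

N, P = y.size, 2
df = N - P                                 # degrees of freedom
sigma_hat = np.sum((y_hat - y) ** 2) / df  # sum of squared errors / df
print(df, sigma_hat)
```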
# Alright, enough formulas. Let's see how we can compute these terms in Python. We're going to calculate the *t*-statistic of the weight-predictor for both models (the meter and the centimeter model) to see whether we can show that essentially the (normalized) effect of weight on height in meters is the same as the effect on height in centimeters; in other words, we are going to investigate whether the conversion to *t*-values "normalizes" the beta-parameters.
#
# First, we'll create a function for you to calculate the design-variance. You *don't* have to understand how this works; we're going to explain this to you in detail next week.
def design_variance(X, which_predictor=1):
    ''' Returns the design variance of a predictor (or contrast) in X.
    Parameters
    ----------
    X : numpy array
        Array of shape (N, P)
    which_predictor : int or list/array
        The index of the predictor you want the design var from.
        Note that 0 refers to the intercept!
        Alternatively, "which_predictor" can be a contrast-vector
        (which will be discussed later this lab).
    Returns
    -------
    des_var : float
        Design variance of the specified predictor/contrast from X.
    '''
    is_single = isinstance(which_predictor, int)
    if is_single:
        idx = which_predictor
    else:
        # convert to array so the boolean indexing below also works for lists
        which_predictor = np.array(which_predictor)
        idx = which_predictor != 0
    c = np.zeros(X.shape[1])
    c[idx] = 1 if is_single else which_predictor[idx]
    des_var = c.dot(np.linalg.inv(X.T.dot(X))).dot(c.T)
    return des_var
# So, if you want the design variance of the 'weight' parameter in the variable `Xn` from before, you do:
# use which_predictor=1, because the weight-column in Xn is at index 1 (index 0 = intercept)
design_variance_weight_predictor = design_variance(Xn, which_predictor=1)
print("Design variance of weight predictor is: %.6f " % design_variance_weight_predictor)
# Alright, now we only need to calculate our noise-term ($\hat{\sigma}^2$):
# +
# Let's just redo the linear regression (for clarity)
beta_meter = inv(Xn.T @ Xn) @ Xn.T @ y
y_hat_meter = Xn @ beta_meter
N = y.size
P = Xn.shape[1]
df = (N - P)
print("Degrees of freedom: %i" % df)
sigma_hat = np.sum((y - y_hat_meter) ** 2) / df
print("Sigma-hat (noise) is: %.3f" % sigma_hat)
design_variance_weight = design_variance(Xn, 1)
# -
# Now we can calculate the *t*-value:
t_meter = beta_meter[1] / np.sqrt(sigma_hat * design_variance_weight)
print("The t-value for the weight-parameter (beta = %.3f) is: %.3f" % (beta_meter[1], t_meter))
# That's it! There's not much more to calculating *t*-values in linear regression. Now it's up to you to do the same thing and calculate the *t*-value for the model of height in centimeters, and check if it is the same as the *t*-value for the weight parameter in the model with height in meters.
# <div class='alert alert-warning'>
# <b>ToDo</b> (1 point): Calculate the <em>t</em>-statistic for the beta from the centimeter-model you calculated earlier. Store the value in a new variable named <tt>t_centimeter</tt>. Note: you don't have to calculate the design variance again (because <tt>X</tt> hasn't changed!) — you can reuse the variable <tt>design_variance_weight</tt>.
# </div>
# + deletable=false nbgrader={"cell_type": "code", "checksum": "8506e6c8ddee3462108fa37fd270437b", "grade": false, "grade_id": "cell-1b502342df415d39", "locked": false, "schema_version": 3, "solution": true} tags=["raises-exception", "remove-output"]
''' Implement your ToDo here. '''
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "55245eef9d935d1159eeb27fbf104307", "grade": true, "grade_id": "cell-722437956591ffd0", "locked": true, "points": 1, "schema_version": 3, "solution": false} tags=["raises-exception", "remove-output"]
''' Tests the above ToDo. '''
try:
    np.testing.assert_almost_equal(t_centimeter, t_meter)
except AssertionError as e:
    print("The t-value using height in centimeters is not the same as when using height in meters!")
    raise(e)
print("Well done!")
# -
# ### P-values
# As you can see, calculating *t*-values solves the "problem" of uninterpretable beta-parameters!
#
# Now, the last thing you need to know is how to calculate the statistical significance of your *t*-value, or in other words, how you calculate the corresponding *p*-value. You probably remember that the *p*-value corresponds to the area under the curve of a *t*-distribution associated with your observed *t*-value *and more extreme values*:
# 
# *Image credits: <NAME> and <NAME>, Northern Kentucky University*
#
# The function `stats.t.sf(t_value, df)` from the `scipy` package does exactly this. Importantly, this function *always* returns the right-tailed *p*-value. For negative *t*-values, however, you'd want the left-tailed *p*-value. One way to remedy this is to always pass the absolute value of your *t*-value, `np.abs(t_value)`, to the `stats.t.sf()` function. Also, `stats.t.sf()` returns the one-sided *p*-value; if you want the two-sided *p*-value, simply multiply the returned *p*-value by two.
#
# Let's see how we'd do that in practice:
# +
from scipy import stats
# take the absolute by np.abs(t)
p_value = stats.t.sf(np.abs(t_meter), df) * 2 # multiply by two to create a two-tailed p-value
print('The p-value corresponding to t(%i) = %.3f is: %.8f' % (df, t_meter, p_value))
# -
# ## Contrasts
# We're almost done! We're really at 99% of what you should know about the GLM and fMRI analysis (except for some important caveats that have to do with GLM assumptions, that we'll discuss next week). The only major concept that we need to discuss is **contrasts**. Contrasts are basically follow-up statistical tests of GLM parameters, with which you can implement any (linear) statistical test that you are familiar with. *t*-tests, *F*-tests, ANCOVAs — they can all be realized with the GLM and the right contrast(s). (Again, if you want to know more about this equivalence between the GLM and common statistical tests, check out this [blog post](https://lindeloev.github.io/tests-as-linear/).) Importantly, the choice of contrast should reflect the hypothesis that you want to test.
#
# ### *t*-tests
# T-tests in the GLM can be implemented in two general ways:
#
# **1. Using a contrast of a parameters "against baseline"**
#
# This type of contrast basically tests the hypothesis: "Does my predictor(s) have *any* effect on my dependent variable?" In other words, it tests the following hypothesis:
# * $H_{0}: \beta = 0$ (our null-hypothesis, i.e. no effect)
# * $H_{a}: \beta \neq 0$ (our two-sided alternative hypothesis, i.e. *some* effect)
#
# Note that a directional alternative hypothesis is also possible, i.e., $H_{a}: \beta > 0$ or $H_{a}: \beta < 0$.
#
# **2. Using a contrast between parameters**
#
# This type of contrast basically tests hypotheses such as "Does predictor 1 have a larger effect on my dependent variable than predictor 2?". In other words, it tests the following hypothesis:
# * $H_{0}: \beta_{1} - \beta_{2} = 0$ (our null-hypothesis, i.e. there is no difference)
# * $H_{a}: \beta_{1} - \beta_{2} \neq 0$ (our alternative hypothesis, i.e. there is some difference)
#
# Let's look at an example of how we would evaluate a simple hypothesis that a beta has an *some* effect on the dependent variable. Say we'd have an experimental design with 6 conditions:
#
# * condition 1: images of **male** faces with a **happy** expression
# * condition 2: images of **male** faces with a **sad** expression
# * condition 3: images of **male** faces with a **neutral** expression
# * condition 4: images of **female** faces with a **happy** expression
# * condition 5: images of **female** faces with a **sad** expression
# * condition 6: images of **female** faces with a **neutral** expression
#
# Let's assume we have fMRI data from a run with 100 volumes. We then have a target-signal of shape ($100,$) and a design-matrix (after convolution with a canonical HRF) of shape ($100 \times 7$) (the first predictor is the intercept!). We load in this data below:
# +
data = np.load('data_contrast_example.npz')
X, y = data['X'], data['y']
print("Shape of X: %s" % (X.shape,))
print("Shape of y: %s" % (y.shape,))
# -
# After performing linear regression with these 6 predictors (after convolving the stimulus-onset times with an HRF, etc. etc.), you end up with 7 beta values:
betas = inv(X.T @ X) @ X.T @ y
betas = betas.squeeze() # remove singleton dimension; this is important for later
print("Betas corresponding to our 6 conditions (and intercept):\n%r" % betas.T)
# The first beta corresponds to the intercept, the second beta to the male/happy predictor, the third beta to the male/sad predictor, etc. etc. Now, suppose that we'd like to test whether images of male faces with a sad expression have an influence on voxel activity (our dependent variable).
#
# The first thing you need to do is extract this particular beta value from the array with beta values (I know this sounds really trivial, but bear with me):
beta_male_sad = betas[2]
print("The 'extracted' beta is %.3f" % beta_male_sad)
# In neuroimaging analyses, however, this is usually done slightly differently: using **contrast-vectors**. Basically, a contrast-vector specifies your hypothesis about your beta(s) of interest as a vector. Before explaining it in more detail, let's look at a code example:
# Again, we'd want to test whether the beta of "male_sad" is different from 0
contrast_vector = np.array([0, 0, 1, 0, 0, 0, 0])
contrast = (betas * contrast_vector).sum() # we simply elementwise multiply the contrast-vector with the betas and sum it!
print('The beta-contrast is: %.3f' % contrast)
# "Wow, what a tedious way to just select the third value of the beta-array", you might think. And, in a way, this is indeed somewhat tedious for a contrast against baseline. But let's look at a case where you would want to investigate whether two betas are different - let's say whether male sad faces have a larger effect on our voxel than male happy faces. Again, you *could* do this:
beta_difference = betas[2] - betas[1]
print("Difference between betas: %.3f" % beta_difference)
# ... but you could also use a contrast-vector:
contrast_vector = np.array([0, -1, 1, 0, 0, 0, 0])
contrast = (betas * contrast_vector).sum()
print('The contrast between beta 2 and beta 1 is: %.3f' % contrast)
print('This is exactly the same as beta[2] - beta[1]: %.3f' % (betas[2]-betas[1]))
# "Alright, so using contrast-vectors is just a fancy way of extracting and subtracting betas from each other ...", you might think. In a way, that's true. But you have to realize that once the hypotheses you want to test become more complicated, using contrast-vectors actually starts to make sense.
#
# Let's look at some more elaborate hypotheses. First, let's test whether male faces lead to higher voxel activity than female faces, *regardless of emotion*:
# male faces > female faces
contrast_vector = [0, 1, 1, 1, -1, -1, -1]
male_female_contrast = (contrast_vector * betas).sum()
print("Male - female contrast (regardless of expression): %.2f" % male_female_contrast)
# ... or whether emotional faces (regardless of *which* exact emotion) lead to higher activity than neutral faces:
# Emotion (regardless of which emotion, i.e., regardless of sad/happy) - neutral
contrast_vector = np.array([0, 1, 1, -2, 1, 1, -2])
emo_neutral_contrast = (contrast_vector * betas).sum()
print("Emotion - neutral contrast (regardless of which emotion): %.2f" % emo_neutral_contrast)
# See how contrast-vectors come in handy when calculating (more intricate) comparisons? In the male-female contrast, for example, instead of 'manually' picking out the betas of 'sad_male' and 'happy_male', averaging them, and subtracting their average from the average of the 'female' betas ('happy_female', 'sad_female'), you can simply
# specify a contrast-vector, multiply it with your betas, and sum the result. That's it.
# <div class='alert alert-info'>
# <b>ToThink</b> (1 point): In the last contrast (<tt>emo_neutral_contrast</tt>), we set all the "emotional" predictors (sad/happy) to 1, but the neutral predictors to minus <em>2</em> ... Why are these set to -2 and not -1? Write your answer below.
# </div>
# + [markdown] deletable=false nbgrader={"cell_type": "markdown", "checksum": "5e8a98d537439a52de1fc1912f951a1b", "grade": true, "grade_id": "cell-c9b2ee94e3e03078", "locked": false, "points": 1, "schema_version": 3, "solution": true}
# YOUR ANSWER HERE
# -
# <div class='alert alert-warning'>
# <b>ToDo</b> (1 point): Create a contrast vector for the hypothesis: sad faces (regardless whether it's male or female) activate this voxel more than neutral faces (regardless of whether it's male/female). Multiply this contrast vector with the betas and store the result in a variable named <tt>contrast_todo</tt>.
# </div>
# + deletable=false nbgrader={"cell_type": "code", "checksum": "a89f18ad5a9781174ae700515e70ecf6", "grade": false, "grade_id": "cell-49f8094366dfb9fa", "locked": false, "schema_version": 3, "solution": true} tags=["raises-exception", "remove-output"]
# Implement the sad - neutral contrast here:
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "c6a4c18673134bd033625428dc653115", "grade": true, "grade_id": "cell-8a31e9963406e314", "locked": true, "points": 1, "schema_version": 3, "solution": false} tags=["raises-exception", "remove-output"]
''' Tests the above ToDo. '''
from niedu.tests.nii.week_3 import test_contrast_todo_1
test_contrast_todo_1(betas, contrast_todo)
# -
# We're not only telling you about contrasts because we think they're an elegant way of computing beta-comparisons, but also because virtually every major neuroimaging software package uses them, so that you can specify exactly which hypotheses you want to test! You'll also see this when we work with FSL (in week 5) to perform automated whole-brain linear regression analyses.
#
# Knowing how contrast-vectors work, we can now extend our formula for *t*-tests of beta-parameters such that it can describe **every possible test** (not only *t*-tests, but also ANOVAs, *F*-tests, etc.) of betas "against baseline" or between betas that you can think of:
#
# Our "old" formula of the *t*-test of a beta-parameter:
# \begin{align}
# t_{\hat{\beta}} = \frac{\hat{\beta}_{j}}{\mathrm{SE}_{\hat{\beta}}}
# \end{align}
#
# And now our "generalized" version of the *t*-test of *any* contrast/hypothesis:
#
# \begin{align}
# t_{\mathbf{c}\hat{\beta}} = \frac{\sum_{j=1}^{P}{c_{j}\hat{\beta}_{j}}}{\mathrm{SE}_{\mathbf{c}\hat{\beta}}}
# \end{align}
#
# in which $\mathbf{c}$ represents the entire contrast-vector, and $c_{j}$ represents the $j^{\mathrm{th}}$ value in our contrast vector. By the way, we can simplify the (notation of the) numerator a little bit using some matrix algebra. Remember that multiplying two (equal length) vectors with each other and then summing the values together is the same thing as the (inner) "dot product" between the two vectors?
#
# This means that you can also evaluate this elementwise multiplication and sum of the contrast-vector and the betas using the dot-product:
#
# \begin{align}
# t_{\mathbf{c}\hat{\beta}} = \frac{\mathbf{c}\hat{\beta}}{\mathrm{SE}_{\mathbf{c}\hat{\beta}}}
# \end{align}
# <div class='alert alert-warning'>
# <b>ToDo</b> (0 points): Convince yourself that the elementwise multiplication and sum is mathematically exactly the same as the dot product! Below, we initialized a hypothetical vector with beta-values (<tt>some_betas</tt>) and a hypothetical contrast-vector (<tt>some_cvec</tt>). First, implement the "multiply and sum" approach and then implement the "dot product" approach. You should find that it gives you exactly the same value: -3.34
# </div>
# +
some_betas = np.array([1.23, 2.95, 3.33, 4.19])
some_cvec = np.array([1, 1, -1, -1])
# Try to implement both approaches and convince yourself that it's
# mathematically the same!
# -
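# For reference, here is what both approaches look like side by side (a small sketch reusing the hypothetical `some_betas` and `some_cvec` from above; try the ToDo yourself first before peeking):

```python
import numpy as np

some_betas = np.array([1.23, 2.95, 3.33, 4.19])
some_cvec = np.array([1, 1, -1, -1])

# Approach 1: elementwise multiplication followed by a sum
mult_sum = (some_cvec * some_betas).sum()

# Approach 2: the (inner) dot product between the two vectors
dot = some_cvec @ some_betas

print(mult_sum, dot)  # both give -3.34 (up to floating-point precision)
```

Both expressions compute $\sum_{j} c_{j}\hat{\beta}_{j}$, so they are guaranteed to agree.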
# So, you need the contrast vector in the *numerator* of the *t*-value formula (i.e., $\mathbf{c}\hat{\beta}$), but it turns out that you actually also need the contrast-vector in the denominator, because it's part of the calculation of design variance. Again, we will discuss how this works exactly in the next notebook. In the function `design_variance`, it is also possible to calculate design variance for a particular contrast (not just a single predictor) by passing a contrast vector to the `which_predictor` argument.
#
# We'll show this below:
# E.g., get design-variance of happy/male - sad/male
c_vec = np.array([0, 1, -1, 0, 0, 0, 0]) # our contrast vector!
dvar = design_variance(X, which_predictor=c_vec) # pass c_vec to which_predictor
print("Design variance of happy/male - sad/male: %.3f" % dvar)
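# If you're curious already: for a contrast vector $\mathbf{c}$, the "design" part of the design variance boils down to the quadratic form $\mathbf{c}(X^{T}X)^{-1}\mathbf{c}^{T}$. The sketch below illustrates that quadratic form only; it is not necessarily identical to the course's `design_variance` function, whose details are discussed in the next notebook:

```python
import numpy as np

def design_variance_sketch(X, cvec):
    # Quadratic form c (X'X)^-1 c'; a sketch for illustration only,
    # not necessarily the exact implementation of design_variance
    cvec = np.asarray(cvec, dtype=float)
    return cvec @ np.linalg.inv(X.T @ X) @ cvec

# Hypothetical 3 x 2 design matrix, just to show the mechanics
X_toy = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
print(design_variance_sketch(X_toy, [1, -1]))  # approximately 2.0
```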
# For the rest of the ToDos in this lab, make sure to pass your contrast-vector to the `design_variance` function in order to calculate it correctly.
#
# Now you know enough to do it yourself!
# <div class='alert alert-warning'>
# <b>ToDo</b> (2 points):
#
# Calculate the *t*-value and *p*-value for the hypothesis "sad faces have a larger effect than happy faces (regardless of gender) on our dependent variable" (i.e. voxel activity). In other words, test the hypothesis: $\beta_{sad} - \beta_{happy} \neq 0$ (note that this is a two-sided test!).
#
# Store the *t*-value and *p*-value in the variables <tt>tval_todo</tt> and <tt>pval_todo</tt> respectively. We reload the variables below (we'll call them <tt>X_new</tt> and <tt>y_new</tt>) to make sure you're working with the correct data. Note that the <tt>X_new</tt> variable already contains an intercept; the other six columns correspond to the different predictors (male/happy, male/sad, etc.). In summary, you have to do the following:
#
# - you don't have to calculate the betas; this has already been done (they are stored in the variable <tt>betas</tt>)
# - calculate "sigma-hat" ($\mathrm{SSE} / \mathrm{df}$)
# - calculate design-variance (use the <tt>design_variance</tt> function with a proper contrast-vector)
# - calculate the contrast ($\mathbf{c}\hat{\beta}$)
# - calculate the t-value and p-value
# </div>
# + deletable=false nbgrader={"cell_type": "code", "checksum": "b1c00f8bf7e944964c52b5ff63ef73e3", "grade": false, "grade_id": "cell-55833a9a2174215c", "locked": false, "schema_version": 3, "solution": true} tags=["raises-exception", "remove-output"]
data = np.load('data_contrast_example.npz')
X_new, y_new = data['X'], data['y']
print("Shape of X: %s" % (X_new.shape,))
print("Shape of y: %s" % (y_new.shape,))
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "d1335cf1eee1eac6e83cfb94d6a65872", "grade": true, "grade_id": "cell-411fa3ab43d50400", "locked": true, "points": 2, "schema_version": 3, "solution": false} tags=["raises-exception", "remove-output"]
''' Part 1 of testing the above ToDo. '''
from niedu.tests.nii.week_3 import test_contrast_todo_2
test_contrast_todo_2(X_new, y_new, betas, tval_todo, pval_todo)
print("Well done!")
# -
# ### *F*-tests on contrasts
# In the previous section we discussed how to calculate *t*-values for single contrasts. However, sometimes you might have a hypothesis about multiple contrasts at the same time. This may sound weird, but let's consider an experiment.
#
# Suppose you have data from an experiment in which you showed images of circles that were either blue, red, or green. In that case, you have three predictors. Then, you could have a very specific question, like "Do blue circles activate a voxel significantly compared to baseline?", which corresponds to the following null and alternative hypothesis:
#
# * $H_{0}: \beta_{blue} = 0$ (our null-hypothesis, i.e. there is no activation compared to baseline)
# * $H_{a}: \beta_{blue} > 0$ (our alternative hypothesis, i.e. blue activates relative to baseline)
#
# However, you can also have a more general question, like "Does the presentation of *any* circle (regardless of color) activate a voxel compared to baseline?". This question represents the following null and alternative hypothesis:
#
# * $H_{0}: \beta_{blue} = \beta_{red} = \beta_{green} = 0$
# * $H_{a}: (\beta_{blue} > 0) \vee (\beta_{red} > 0) \vee (\beta_{green} > 0)$
#
# The $\vee$ symbol in the alternative hypothesis means "or". So the alternative hypothesis nicely illustrates our question: is there *any* condition (circle) that activates a voxel more than baseline? This hypothesis-test might sound familiar, because it encompasses the *F*-test! In other words, an *F*-test tests *a collection of contrasts* together. In the example here, the *F*-test tests the following contrasts together (ignoring the intercept) of our beta-parameters:
#
# * `[1, 0, 0]` ($\mathrm{blue} > 0$)
# * `[0, 1, 0]` ($\mathrm{red} > 0$)
# * `[0, 0, 1]` ($\mathrm{green} > 0$)
#
# Thus, an *F*-test basically tests this contrast-*matrix* all at once! Therefore, the *F*-test is a type of "omnibus test"!
#
# Now, let's look at the math behind the *F*-statistic. The *F*-statistic for a set of $K$ contrasts ($K$ being the number of rows in the contrast-matrix) is defined as follows:
#
# \begin{align}
# F = (\mathbf{c}\hat{\beta})^{T}[K\mathbf{c}((X^{T}X)^{-1}\hat{\sigma}^{2})\mathbf{c}^{T}]^{-1}(\mathbf{c}\hat{\beta})
# \end{align}
#
# With a little imagination, you can see how the *F*-test is an extension of the *t*-test of a single contrast to accommodate testing a set of contrasts together. Don't worry, you don't have to understand how the formula for the *F*-statistic works mathematically and you don't have to implement this in Python. But you *do* need to understand what type of hypothesis an *F*-test tests!
#
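# That said, it may help your intuition to see that the formula translates to only a few lines of NumPy. The sketch below assumes independent, homoscedastic noise (as in the rest of this notebook); note that for $K = 1$ it reduces to the square of the *t*-statistic:

```python
import numpy as np

def f_stat(X, y, C):
    # F-statistic for a K x P contrast matrix C; a sketch of the
    # formula above, not production code from any particular package
    betas = np.linalg.inv(X.T @ X) @ X.T @ y
    resid = y - X @ betas
    sigma_sq = (resid @ resid) / (X.shape[0] - X.shape[1])  # SSE / df
    K = C.shape[0]
    cb = C @ betas  # one contrast estimate per row of C
    mid = K * (C @ np.linalg.inv(X.T @ X) @ C.T) * sigma_sq
    return cb @ np.linalg.inv(mid) @ cb
```

With a single-row `C`, `f_stat` should equal the square of the corresponding *t*-value, which makes for a handy sanity check.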
# Let's practice this in a ToDo!
# <div class='alert alert-warning'>
# <b>ToDo</b> (1 point)
#
# Remember the temporal basis sets from before? Suppose we have an experiment with two conditions ("A" and "B") and suppose we've created a design matrix based on convolution with a single-gamma basis set (with a canonical HRF, its temporal derivative, and its dispersion derivative). Together with the intercept, the design matrix thus has 7 columns (2 conditions * 3 HRF + intercept).
#
# The order of the columns is as follows:
# * column 1: intercept
# * column 2: canonical HRF "A"
# * column 3: temporal deriv "A"
# * column 4: dispersion deriv "A"
# * column 5: canonical HRF "B"
# * column 6: temporal deriv "B"
# * column 7: dispersion deriv "B"
#
# Suppose I want to test whether there is *any* difference in response to condition "A" ($A > 0$) compared to baseline, and *I don't care what element of the HRF caused it*. I can use an F-test for this. What would the corresponding contrast-*matrix* (in which each row represents a different contrast) look like?
#
# We've created an 'empty' (all-zeros) 2D matrix below with three rows. It's up to you to fill in the matrix such that it can be used to test the above question/hypothesis.
# </div>
# + deletable=false nbgrader={"cell_type": "code", "checksum": "b4ec5445e719a70ea9483902eb83327a", "grade": false, "grade_id": "cell-82c295ab029883fe", "locked": false, "schema_version": 3, "solution": true} tags=["raises-exception", "remove-output"]
# Fill in the correct values!
contrast_matrix = np.array([
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]
])
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "8dff3dd4e20a31d27684ad772be6e827", "grade": true, "grade_id": "cell-7f33290dbbfa631d", "locked": true, "points": 1, "schema_version": 3, "solution": false} tags=["raises-exception", "remove-output"]
''' Tests the above ToDo. '''
from niedu.tests.nii.week_3 import test_definition_ftest
test_definition_ftest(contrast_matrix)
print("Well done!")
# -
# ### Summary
# Alright, now you know basically everything about how to perform a univariate fMRI analysis!
#
# "Wait, that's it?", you might ask (or not). Well, yeah: regular univariate analyses as you might read about in scientific journals do basically what you've just learned, only not on a single voxel but on each voxel in the brain separately. It's basically one gigantic for-loop across voxels in which, every time, the same design ($\mathbf{X}$) is used to predict a new voxel-signal ($\mathbf{y}$). Afterwards, the *t*-values of the contrast (hypothesis) you're interested in are plotted back onto the brain and color-coded (high *t*-values yellow, low *t*-values red), and voilà, you have your pretty brain plot.
# <div class='alert alert-info'>
# <b>ToThink</b> (1 point): More explained variance (i.e., a smaller "sums of squared error" term) does not always mean that your <em>t</em>-value is higher. Explain how this might happen.
# </div>
# + [markdown] deletable=false nbgrader={"cell_type": "markdown", "checksum": "f10eabffe20ce417ef8507044d2155af", "grade": true, "grade_id": "cell-50d9ec0c4060b7ff", "locked": false, "points": 1, "schema_version": 3, "solution": true, "task": false}
# YOUR ANSWER HERE
# -
# <div class='alert alert-warning'>
# <b>ToDo</b> (2 points): Suppose that, within the hypothesized face-experiment explained earlier, you want to know which parts of the brain show (significantly) more activity during periods without stimuli (i.e., no faces were shown, i.e., "rest") than during periods with stimuli. Define a contrast vector which would test this hypothesis and store it in a variable <tt>cvec_rest</tt>. Remember: the original face experiment had 7 predictors (the first one being the intercept, followed by 6 face predictors).
# </div>
# + deletable=false nbgrader={"cell_type": "code", "checksum": "642023e320be2c5fc4473b0581bf3f12", "grade": false, "grade_id": "cell-fc092da92c99eae3", "locked": false, "schema_version": 3, "solution": true, "task": false} tags=["raises-exception", "remove-output"]
# Implement the assignment here
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "78529115950f515d3fd84b33cde34190", "grade": true, "grade_id": "cell-7ae9b02813f2102f", "locked": true, "points": 2, "schema_version": 3, "solution": false, "task": false} tags=["raises-exception", "remove-output"]
''' Tests the above ToDo. '''
from niedu.tests.nii.week_3 import test_rest_vs_stim_contrast
test_rest_vs_stim_contrast(cvec_rest)
# -
# <div class='alert alert-success'>
# <b>Tip!</b>
# Before handing in your notebooks, we recommend restarting your kernel (<em>Kernel</em> → <em>Restart & Clear Output</em>) and running all your cells again (manually, or by <em>Cell</em> → <em>Run all</em>). By running all your cells one by one (from "top" to "bottom" of the notebook), you may spot potential errors that are caused by accidentally overwriting your variables or running your cells out of order (e.g., defining the variable 'x' in cell 28 which you then use in cell 15).
# </div>
# Source: NI-edu/fMRI-introduction/week_3/glm_part2_inference.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # RGI10 (Asia North)
#
# <NAME> & <NAME>, June-December 2021
import pandas as pd
import geopandas as gpd
import subprocess
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
import numpy as np
from utils import mkdir, submission_summary, needs_size_filter, size_filter, plot_map, plot_date_hist, open_zip_shapefile
import os
# ## Files and storage paths
# +
# Region of interest
reg = 10
# go down from rgi7_scripts/workflow
data_dir = '../../rgi7_data/'
# Level 2 GLIMS files
l2_dir = os.path.join(data_dir, 'l2_sel_reg_tars')
# Output directories
output_dir = mkdir(os.path.join(data_dir, 'l3_rgi7a'))
output_dir_tar = mkdir(os.path.join(data_dir, 'l3_rgi7a_tar'))
# RGI v6 file for comparison later
rgi6_reg_file = os.path.join(data_dir, 'l0_RGIv6', '10_rgi60_NorthAsia.zip')
# +
# Specific to this region: boxes where data has to be selected differently
support_dir = os.path.join(data_dir, 'l0_support_data')
# Path to the boxes file
box_file = os.path.join(support_dir, 'rgi10_boxes.zip')
# -
# ### Load the input data
# Read L2 files
shp = gpd.read_file('tar://' + l2_dir + f'/RGI{reg:02d}.tar.gz/RGI{reg:02d}/RGI{reg:02d}.shp')
# ### List of submissions
sdf, df_cat = submission_summary(shp)
sdf
# - 636 is RGI6
# - 698 is GAMDAMv2 - we use it
# - 726 is a mapping of a few remaining nominal glaciers on three De Long Islands
# - 743 is an update of the Barr inventory for Kamchatka
# +
# # Optional: write out selection in intermediate shape files for manual GIS review
# tmp_output_dir = mkdir(os.path.join(data_dir, 'l0_tmp_data', f'rgi{reg:02d}_inventories'))
# tmp_output_dir_tar = mkdir(os.path.join(data_dir, 'l0_tmp_data'))
# for subid in shp.subm_id.unique():
# s_loc = shp.loc[shp.subm_id == subid]
# s_loc.to_file(tmp_output_dir + f'/subm_{int(subid):03d}.shp')
# print('Taring...')
# print(subprocess.run(['tar', '-zcvf', f'{tmp_output_dir_tar}/rgi{reg:02d}_inventories.tar.gz', '-C',
# os.path.join(data_dir, 'l0_tmp_data'), f'rgi{reg:02d}_inventories']))
# -
# ## Outline selection
glims_rgi = shp.loc[shp.subm_id.isin([636])].copy()
glims_rgi['is_rgi6'] = True
all_others = shp.loc[shp.subm_id.isin([698, 726, 743])].copy()
all_others['is_rgi6'] = False
# Preselected areas to remove
box = open_zip_shapefile(box_file)
# Remove the new regions from rgi
rp = glims_rgi.representative_point()
rp = rp.to_frame('geometry')
rp['orig_index'] = glims_rgi.index
difference = gpd.overlay(rp, box, how='difference')
glims_rgi = glims_rgi.loc[difference['orig_index']].copy()
# Size filter?
needs_size_filter(glims_rgi), needs_size_filter(all_others)
print(len(all_others))
all_others = size_filter(all_others)
print(len(all_others))
rgi7 = pd.concat([glims_rgi, all_others])
# ### Some sanity checks
sdf, df_class = submission_summary(rgi7)
df_class
# Check the orphaned rock outcrops
orphan_f = os.path.join(data_dir, 'l1_orphan_interiors', f'RGI{reg:02d}', f'RGI{reg:02d}.shp')
if os.path.exists(orphan_f):
orphan_f = gpd.read_file(orphan_f)
check = np.isin(rgi7.subm_id.unique(), orphan_f.subm_id.unique())
if np.any(check):
print(f'Orphan rock outcrops detected in subm_id {rgi7.subm_id.unique()[check]}')
orphan_f['area'] = orphan_f.to_crs({'proj':'cea'}).area
# ### Plots
plot_map(rgi7, reg, figsize=(22, 10), linewidth=3, loc='upper center')
plot_map(rgi7, reg, figsize=(22, 10), linewidth=3, loc='upper center', is_rgi6=True)
plot_date_hist(rgi7, reg)
# ### Text for github
fgh = sdf.T
fgh
print(fgh.to_markdown(headers=np.append(['subm_id'], fgh.columns)))
# ## Write out and tar
# +
dd = mkdir(f'{output_dir}/RGI{reg:02d}/', reset=True)
print('Writing...')
rgi7.to_file(dd + f'RGI{reg:02d}.shp')
print('Taring...')
print(subprocess.run(['tar', '-zcvf', f'{output_dir_tar}/RGI{reg:02d}.tar.gz', '-C', output_dir, f'RGI{reg:02d}']))
# -
# ## Consistency check with RGI6
rgi6 = open_zip_shapefile(rgi6_reg_file)
len(rgi7), len(rgi6)
# Test the areas:
rgi6['area'] = rgi6.to_crs({'proj':'cea'}).area
print('Area RGI7a (km2)', rgi7['area'].sum() * 1e-6)
print('Area RGI6 (km2)', rgi6['area'].sum() * 1e-6)
print('diff areas RGI6 - RGI7 computed by us (km2)', (rgi6['area'].sum() - rgi7['area'].sum()) * 1e-6)
# +
# Remove the ids
rp = rgi6.representative_point()
rp = rp.to_frame('geometry')
rp['orig_index'] = rgi6.index
difference = gpd.overlay(rp, box, how='difference')
rgi6_old = rgi6.loc[difference['orig_index']].copy()
difference = gpd.overlay(rp, box, how='intersection')
rgi6_new = rgi6.loc[difference['orig_index']].copy()
assert len(rgi6_new) + len(rgi6_old) == len(rgi6)
# -
print(f'N1 = {len(rgi6_old)} , N2 = {len(glims_rgi)}')
print('Area RGI7 (km2)', glims_rgi['area'].sum() * 1e-6)
print('Area RGI6 (km2)', rgi6_old['area'].sum() * 1e-6)
print('diff', (rgi6_old['area'].sum() - glims_rgi['area'].sum()) * 1e-6)
# Source: workflow/RGI10.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/pachterlab/GRNP_2020/blob/master/notebooks/figure_generation/GenFig2_S4_S5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="dhydD96df06z"
# **Generates figure 2 as well as supplementary figures 4 and 5**
#
# This notebook generates figures that show the fraction of single-copy molecules per gene in different datasets.
#
# Steps:
# 1. Download the code and processed data
# 2. Setup the R environment
# 3. Generate the figures
#
# The data for these figures is produced by the following notebooks:
#
# Processing of FASTQ files with kallisto and bustools:
#
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessEVAL.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessEVALPBMC.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessEVALPBMC_DS.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessEVALPBMC_SW.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessLC.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessMRET.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessMRET2.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessPBMC_NG.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessPBMC_NG_2.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessPBMC_V2.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessPBMC_V3.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessPBMC_V3_2.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessPBMC_V3_3.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/FASTQ_processing/ProcessMARSSEQ.ipynb
#
# Preprocessing of BUG files:
#
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_EVAL.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_EVALPBMC.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_EVALPBMC_DS.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_EVALPBMC_SW.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_LC.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_MRET.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_MRET2.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_PBMC_NG.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_PBMC_NG_2.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_PBMC_V2.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_PBMC_V3.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_PBMC_V3_2.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_PBMC_V3_3.ipynb
# https://github.com/pachterlab/GRNP_2020/blob/master/notebooks/R_processing/ProcessR_MARSSEQ.ipynb
#
#
# + [markdown] id="h8RnKVMXgbzr"
# **1. Download the code and processed data**
# + id="doUAtCxIyOiI" colab={"base_uri": "https://localhost:8080/"} outputId="9871c11f-8370-4f39-81b5-ffb156e04a30"
#download the R code
![ -d "GRNP_2020" ] && rm -r GRNP_2020
# !git clone https://github.com/pachterlab/GRNP_2020.git
# + id="dUNSQ1qBZb2g" colab={"base_uri": "https://localhost:8080/"} outputId="a2d4f83e-cbad-4cd9-d447-486566413fe3"
#download processed data from Zenodo for all datasets
![ -d "figureData" ] && rm -r figureData
# !mkdir figureData
# !cd figureData && wget https://zenodo.org/record/4661263/files/EVAL.zip?download=1 && unzip 'EVAL.zip?download=1' && rm 'EVAL.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/EVALPBMC.zip?download=1 && unzip 'EVALPBMC.zip?download=1' && rm 'EVALPBMC.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/EVALPBMC_DS.zip?download=1 && unzip 'EVALPBMC_DS.zip?download=1' && rm 'EVALPBMC_DS.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/EVALPBMC_SW.zip?download=1 && unzip 'EVALPBMC_SW.zip?download=1' && rm 'EVALPBMC_SW.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/PBMC_V3_3.zip?download=1 && unzip 'PBMC_V3_3.zip?download=1' && rm 'PBMC_V3_3.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/PBMC_V3_2.zip?download=1 && unzip 'PBMC_V3_2.zip?download=1' && rm 'PBMC_V3_2.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/PBMC_V3.zip?download=1 && unzip 'PBMC_V3.zip?download=1' && rm 'PBMC_V3.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/PBMC_NG.zip?download=1 && unzip 'PBMC_NG.zip?download=1' && rm 'PBMC_NG.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/PBMC_NG_2.zip?download=1 && unzip 'PBMC_NG_2.zip?download=1' && rm 'PBMC_NG_2.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/PBMC_V2.zip?download=1 && unzip 'PBMC_V2.zip?download=1' && rm 'PBMC_V2.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/LC.zip?download=1 && unzip 'LC.zip?download=1' && rm 'LC.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/MRET.zip?download=1 && unzip 'MRET.zip?download=1' && rm 'MRET.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/MRET2.zip?download=1 && unzip 'MRET2.zip?download=1' && rm 'MRET2.zip?download=1'
# !cd figureData && wget https://zenodo.org/record/4661263/files/MARSSEQ.zip?download=1 && unzip 'MARSSEQ.zip?download=1' && rm 'MARSSEQ.zip?download=1'
# + id="oesgTqLO0Qje" colab={"base_uri": "https://localhost:8080/"} outputId="3168132c-0c32-4e3d-aecf-ca4bf72eb932"
#Check that download worked
# !cd figureData && ls -l && cd EVAL && ls -l
# + [markdown] id="sCmhNVdYgkWH"
# **2. Prepare the R environment**
# + id="5Gt6rQkSXriM"
#switch to R mode
# %reload_ext rpy2.ipython
# + id="jJ3rQJCdgeJa" colab={"base_uri": "https://localhost:8080/"} outputId="4d14493f-b0f5-4c3e-8d43-018d232ce9ca"
#install the R packages
# %%R
install.packages("tidyverse")
install.packages("ggplot2")
install.packages("DescTools")
install.packages("ggpubr")
install.packages("hexbin")
install.packages("reshape2")
install.packages("farver")
# + [markdown] id="x56fjfCSicrp"
# **3. Generate the figures**
#
# + id="V37XLBAO68oR"
#First set some path variables
# %%R
source("GRNP_2020/RCode/pathsGoogleColab.R")
# + id="R6kuhOmzZL_X" colab={"base_uri": "https://localhost:8080/"} outputId="dce93495-7c42-41e7-98fb-cdd05217bb33"
#Import the code for prediction (available in other notebooks)
# %%R
source(paste0(sourcePath,"ButterflyHelpers.R"))
#source(paste0(sourcePath,"preseqHelpers.R"))
source(paste0(sourcePath,"CCCHelpers.R"))
source(paste0(sourcePath,"ggplotHelpers.R"))
library(tidyverse)
library(ggplot2)
library(ggpubr)
# + id="kqZPO7XjtPPa"
#create figure directory
![ -d "figures" ] && rm -r figures
# !mkdir figures
# + id="tkQeT362BM7V" colab={"base_uri": "https://localhost:8080/", "height": 497} outputId="5bd14418-d100-4c94-8163-a2cc519e6271"
#####################################################
# Fig S4
#####################################################
# %%R
loadStats("LC")
loadStats("PBMC_NG")
loadStats("PBMC_NG_2")
loadStats("PBMC_V3")
loadStats("PBMC_V3_2")
loadStats("PBMC_V3_3")
loadStats("PBMC_V2")
loadStats("EVAL")
loadStats("EVALPBMC")
loadStats("EVALPBMC_DS")
loadStats("EVALPBMC_SW")
loadStats("MRET")
loadStats("MRET2")
loadStats("MARSSEQ")
AddToHexbinData = function(dat, umis, fracOnes, dataset) {
logUmis = log2(umis)
d = tibble(x = logUmis, y = fracOnes, ds = rep(dataset, length(fracOnes)))
if (is.null(dat)) {
dat = d
} else {
dat = bind_rows(dat,d);
}
return(dat)
}
dat = NULL
dat = AddToHexbinData(dat, statsPBMC_V3$UMIs_PBMC_V3_d_100, statsPBMC_V3$FracOnes_PBMC_V3_d_100, "PBMC_V3")
dat = AddToHexbinData(dat, statsPBMC_V3_2$UMIs_PBMC_V3_2_d_100, statsPBMC_V3_2$FracOnes_PBMC_V3_2_d_100, "PBMC_V3_2")
#dat = AddToHexbinData(dat, statsPBMC_V3_3$UMIs_PBMC_V3_3_d_100, statsPBMC_V3_3$FracOnes_PBMC_V3_3_d_100, "PBMC_V3_3") #this is the same as Fig 2A, don't duplicate it!
dat = AddToHexbinData(dat, statsPBMC_NG$UMIs_PBMC_NG_d_100, statsPBMC_NG$FracOnes_PBMC_NG_d_100, "PBMC_NG")
dat = AddToHexbinData(dat, statsPBMC_NG_2$UMIs_PBMC_NG_2_d_100, statsPBMC_NG_2$FracOnes_PBMC_NG_2_d_100, "PBMC_NG_2")
dat = AddToHexbinData(dat, statsPBMC_V2$UMIs_PBMC_V2_d_100, statsPBMC_V2$FracOnes_PBMC_V2_d_100, "PBMC_V2")
dat = AddToHexbinData(dat, statsEVAL$UMIs_EVAL_d_100, statsEVAL$FracOnes_EVAL_d_100, "EVAL")
dat = AddToHexbinData(dat, statsEVALPBMC$UMIs_EVALPBMC_d_100, statsEVALPBMC$FracOnes_EVALPBMC_d_100, "EVALPBMC")
dat = AddToHexbinData(dat, statsLC$UMIs_LC_d_100, statsLC$FracOnes_LC_d_100, "LC")
dat = AddToHexbinData(dat, statsMRET2$UMIs_MRET2_d_100, statsMRET2$FracOnes_MRET2_d_100, "MRET2")
dat = AddToHexbinData(dat, statsEVALPBMC_DS$UMIs_EVALPBMC_DS_d_100, statsEVALPBMC_DS$FracOnes_EVALPBMC_DS_d_100, "EVALPBMC_DS")
dat = AddToHexbinData(dat, statsMRET$UMIs_MRET_d_100, statsMRET$FracOnes_MRET_d_100, "MRET")
dat = AddToHexbinData(dat, statsEVALPBMC_SW$UMIs_EVALPBMC_SW_d_100, statsEVALPBMC_SW$FracOnes_EVALPBMC_SW_d_100, "EVALPBMC_SW")
dat = AddToHexbinData(dat, statsMARSSEQ$UMIs_MARSSEQ_d_100, statsMARSSEQ$FracOnes_MARSSEQ_d_100, "MARSSEQ")
dat$ds = factor(dat$ds, levels = c("PBMC_V3","PBMC_V3_2","PBMC_V3_3","PBMC_NG","PBMC_NG_2","PBMC_V2","EVAL","EVALPBMC","LC","MRET2","EVALPBMC_DS","MRET","EVALPBMC_SW","MARSSEQ"))
#create figure:
figS4 = ggplot(dat) +
stat_binhex(bins=60,na.rm = TRUE, mapping=aes(x = x, y=y, fill = log(..count..))) + # opts(aspect.ratio = 1) +
facet_wrap(facets = ~ds, scales = "free_x", ncol=3) +
labs(x=expression(Log[2]*"(UMI counts)"), y="FSCM") +
theme(panel.background = element_rect("white", "white", 0, 0, "white"),
legend.position= "bottom", legend.direction = "horizontal",#, legend.title = element_blank())
strip.text.x = element_text(size = 12, face = "bold"),
#legend.position= "none",
strip.background = element_blank())
print(figS4) # this plot sometimes fails with an "hbin" error - restart R and try again in that case
ggsave(
paste0(figure_path, "FigS4.png"),
plot = figS4, device = "png",
width = 7, height = 11, dpi = 300)
# + id="yvtUQDEFr8Pl" colab={"base_uri": "https://localhost:8080/", "height": 497} outputId="d035b4a7-61b5-4c7e-bd2b-764a1da92aab"
#############################
# Fig S5 - Hexbin plot
#############################
# %%R
AddToHexbinData2 = function(dat, ds1, ds2) {
stats1 = get(paste0("stats",ds1), envir=.GlobalEnv)
indUMIs1 = which(colnames(stats1) == paste0("UMIs_",ds1,"_d_100"))
indFO1 = which(colnames(stats1) == paste0("FracOnes_",ds1,"_d_100"))
stats2 = get(paste0("stats",ds2), envir=.GlobalEnv)
indUMIs2 = which(colnames(stats2) == paste0("UMIs_",ds2,"_d_100"))
indFO2 = which(colnames(stats2) == paste0("FracOnes_",ds2,"_d_100"))
stats1Filt = stats1[stats1[[indUMIs1]] >= 200, ]
stats2Filt = stats2[stats2[[indUMIs2]] >= 200, ]
merged = inner_join(stats2Filt[, c(1,indFO2)], stats1Filt[, c(1,indFO1)], by="gene")
colnames(merged) = c("gene", "x", "y")
d = merged %>% add_column(ds=paste0(ds1, " vs ", ds2))
if (is.null(dat)) {
dat = d
} else {
dat = bind_rows(dat,d);
}
return(dat)
}
#ds1 = "PBMC_V3_3"
#ds2 = "PBMC_V3_2"
dat2 = NULL
#dat2 = AddToHexbinData2(dat2, "PBMC_V3_3", "PBMC_V3_2") #the same as Fig 2B, don't duplicate it!
dat2 = AddToHexbinData2(dat2, "PBMC_V3_3", "PBMC_V2")
dat2 = AddToHexbinData2(dat2, "PBMC_V2", "EVALPBMC")
dat2 = AddToHexbinData2(dat2, "PBMC_V2", "LC")
dat2 = AddToHexbinData2(dat2, "EVALPBMC", "EVALPBMC_DS")
dat2 = AddToHexbinData2(dat2, "EVALPBMC", "EVALPBMC_SW")
dat2 = AddToHexbinData2(dat2, "EVALPBMC_DS", "EVALPBMC_SW")
dat2 = AddToHexbinData2(dat2, "MRET2", "MRET")
dat2 = AddToHexbinData2(dat2, "EVAL", "MARSSEQ")
#dat2 = AddToHexbinData2(dat2, "PBMC_V3_3", "PBMC_V2")
#specify the order of the plots
dat2$ds = factor(dat2$ds, levels = c("PBMC_V3_3 vs PBMC_V3_2", "PBMC_V3_3 vs PBMC_V2", "PBMC_V2 vs EVALPBMC",
"PBMC_V2 vs LC", "EVALPBMC vs EVALPBMC_DS", "EVALPBMC vs EVALPBMC_SW",
"EVALPBMC_DS vs EVALPBMC_SW", "MRET2 vs MRET", "EVAL vs MARSSEQ"))
dfline = data.frame(x=c(0,1), y=c(0,1))
figS5 = ggplot(dat2) +
stat_binhex(bins=60,na.rm = TRUE, mapping=aes(x = x, y=y, fill = log(..count..))) + # opts(aspect.ratio = 1) +
geom_line(data=dfline, mapping=aes(x = x, y=y), color="black", size=1.5) +
facet_wrap(facets = ~ds, scales = "free_x", ncol=2) +
labs(x="FSCM 2", y="FSCM 1") +
theme(panel.background = element_rect("white", "white", 0, 0, "white"),
legend.position= "bottom", legend.direction = "horizontal",#, legend.title = element_blank())
strip.text.x = element_text(size = 10, face = "bold"),
#legend.position= "none",
strip.background = element_blank())
print(figS5) # this plot sometimes fails with an "hbin" error - restart R and try again in that case
ggsave(
paste0(figure_path, "FigS5.png"),
plot = figS5, device = "png",
width = 6, height = 10, dpi = 300)
# + id="fCES8zTLsO2-" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="f43f1d0f-cd12-4bb8-80be-92012ae25008"
#############################
# Fig 2 - One of each type above
#############################
# %%R
dat3 = NULL
dat3 = AddToHexbinData(dat3, statsPBMC_V3_3$UMIs_PBMC_V3_3_d_100, statsPBMC_V3_3$FracOnes_PBMC_V3_3_d_100, "PBMC_V3_3")
fig2A = ggplot(dat3) +
stat_binhex(bins=60,na.rm = TRUE, mapping=aes(x = x, y=y, fill = log(..count..))) + # opts(aspect.ratio = 1) +
#facet_wrap(facets = ~ds, scales = "free_x", ncol=3) +
ggtitle("FSCM vs Gene Expression") +
labs(x=expression(Log[2]*"(UMI counts)"), y="FSCM") +
theme(panel.background = element_rect("white", "white", 0, 0, "white"),
legend.position= "bottom", legend.direction = "horizontal",#, legend.title = element_blank())
strip.text.x = element_text(size = 12, face = "bold"),
#legend.position= "none",
strip.background = element_blank())
print(fig2A)
dat4 = NULL
dat4 = AddToHexbinData2(dat4, "PBMC_V3_3", "PBMC_V3_2")
dfline = data.frame(x=c(0,1), y=c(0,1))
fig2B = ggplot(dat4) +
stat_binhex(bins=60,na.rm = TRUE, mapping=aes(x = x, y=y, fill = log(..count..))) + # opts(aspect.ratio = 1) +
geom_line(data=dfline, mapping=aes(x = x, y=y), color="black", size=1.5) +
ggtitle("FSCM Across Datasets") +
#facet_wrap(facets = ~ds, scales = "free_x", ncol=2) +
labs(x="FSCM, PBMC_V3_2", y="FSCM, PBMC_V3_3") +
theme(panel.background = element_rect("white", "white", 0, 0, "white"),
legend.position= "bottom", legend.direction = "horizontal",#, legend.title = element_blank())
strip.text.x = element_text(size = 10, face = "bold"),
#legend.position= "none",
strip.background = element_blank())
print(fig2B) # this plot sometimes fails with an "hbin" error - restart R and try again in that case
fig2 = ggarrange(fig2A, fig2B, nrow=1, ncol=2, labels=c("A","B"))
print(fig2)
ggsave(
paste0(figure_path, "Fig2.png"),
plot = fig2, device = "png",
width = 6, height = 4, dpi = 300)
| notebooks/figure_generation/GenFig2_S4_S5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="dmqwDAFK5wK_" colab_type="code" colab={}
import numpy as np
import cupy as cp
import time
# + id="qasTTCQLnjBk" colab_type="code" colab={}
def matprint(mat, fmt="g"):
col_maxes = [max([len(("{:"+fmt+"}").format(x)) for x in col]) for col in mat.T]
for x in mat:
for i, y in enumerate(x):
print(("{:"+str(col_maxes[i])+fmt+"}").format(y), end=" ")
print("")
print()
# + id="XI-l3zMLYjzz" colab_type="code" outputId="46eddf5e-4260-4fe2-ca7b-38402347735f" colab={"base_uri": "https://localhost:8080/", "height": 228}
m = np.ones((3, 3))
v = np.ones((3, 1))
p = np.dot(m,v)
matprint(m)
matprint(v)
matprint(p)
# + id="V-G8GZkpHaQt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="61b16651-5b12-4f24-ddd5-7eed4fcaf6ea"
N = 50000000
# Vector + Vector
############CPU###############
s = time.time()
a_h = np.ones((N))
b_h = np.ones((N))
c_h = np.ones((N))
c_h = a_h + b_h
e = time.time()
cpu_time = (e - s) * 1000.0
print(f"CPU Time: {cpu_time} msec")
############GPU###############
stream = cp.cuda.Stream.null
start = stream.record()
a_d = cp.ones((N))
b_d = cp.ones((N))
c_d = cp.ones((N))
c_d = a_d + b_d
end = stream.record()
end.synchronize()
gpu_time = cp.cuda.get_elapsed_time(start, end)
print(f"GPU Time: {gpu_time} msec")
############Speedup###############
print(f"Speedup = {cpu_time/gpu_time}")
# print(c_d[:10])
# + colab_type="code" outputId="e2dbbd05-2864-4816-9b4e-e9532a42f073" id="AQhbTjB0qvYn" colab={"base_uri": "https://localhost:8080/", "height": 70}
N = 10000000
# Vector * Scalar (Size: 10m)
############CPU###############
s = time.time()
v_cpu = np.ones((N))
p_cpu = v_cpu * 5
e = time.time()
cpu_time = (e - s) * 1000.0
print(f"CPU Time: {cpu_time} msec")
############GPU###############
stream = cp.cuda.Stream.null
start = stream.record()
v_gpu = cp.ones((N))
p_gpu = v_gpu * 5
end = stream.record()
end.synchronize()
gpu_time = cp.cuda.get_elapsed_time(start, end)
print(f"GPU Time: {gpu_time} msec")
############Speedup###############
print(f"Speedup = {cpu_time/gpu_time}")
# + id="RO70m9dY53gw" colab_type="code" outputId="43565966-47c1-4262-cdaf-f4ba4a6e7241" colab={"base_uri": "https://localhost:8080/", "height": 70}
N = 20000
# Matrix * Vector
############CPU###############
s = time.time()
m_cpu = np.ones((N, N))
v_cpu = np.ones((N, 1))
p_cpu = np.dot(m_cpu, v_cpu)
e = time.time()
cpu_time = (e - s) * 1000.0
print(f"CPU Time: {cpu_time} msec")
############GPU###############
stream = cp.cuda.Stream.null
start = stream.record()
m_gpu = cp.ones((N, N))
v_gpu = cp.ones((N, 1))
p_gpu = cp.dot(m_gpu, v_gpu)
end = stream.record()
end.synchronize()
gpu_time = cp.cuda.get_elapsed_time(start, end)
print(f"GPU Time: {gpu_time} msec")
############Speedup###############
print(f"Speedup = {cpu_time/gpu_time}")
# + id="NiA5yqilVvTD" colab_type="code" outputId="988d22fa-96f2-4d9b-bcf6-3d783e739cf1" colab={"base_uri": "https://localhost:8080/", "height": 70}
N = 5000
# Matrix * Matrix
############CPU###############
s = time.time()
m1_cpu = np.ones((N, N))
m2_cpu = np.ones((N, N))
p_cpu = np.dot(m1_cpu, m2_cpu)
e = time.time()
cpu_time = (e - s) * 1000.0
print(f"CPU Time: {cpu_time} msec")
############GPU###############
stream = cp.cuda.Stream.null
start = stream.record()
x_gpu = cp.ones((N, N))
v_gpu = cp.ones((N, N))
p_gpu = cp.dot(x_gpu, v_gpu)
end = stream.record()
end.synchronize()
gpu_time = cp.cuda.get_elapsed_time(start, end)
print(f"GPU Time: {gpu_time} msec")
############Speedup#############
print(f"Speedup = {cpu_time/gpu_time}")
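# A note on the timings above: `time.time` has coarse resolution on some platforms, and a
# single run can be skewed by allocation and warm-up costs. A small helper (a sketch, not
# part of the original benchmark) that takes the best of several `time.perf_counter` runs
# gives steadier CPU numbers:

```python
import time
import numpy as np

def time_cpu_ms(fn, repeats=3):
    # best-of-N wall-clock time in milliseconds
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, (time.perf_counter() - t0) * 1000.0)
    return best

ms = time_cpu_ms(lambda: np.ones(1_000_000) + np.ones(1_000_000))
print(f"CPU Time: {ms} msec")
```

# The same best-of-N idea applies to the GPU side; CuPy also compiles kernels on first use,
# so an untimed warm-up call before `stream.record()` avoids charging compilation time to
# the measurement.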
| cupy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from doridori import Doridori
dori = Doridori('./suk_shortshort.mp4')
coordinates = dori.detect_face()
dori.fit()
dori.save_video('./suk_short2.mp4')
| tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Freight Rates API Example
# ## Setup
# Install the Signal Ocean SDK:
# ```
# pip install signal-ocean
# ```
# Set your subscription key acquired here: https://apis.signalocean.com/profile
# !pip install signal-ocean
# + pycharm={"name": "#%%\n"}
signal_ocean_api_key = '' #replace with your subscription key
# -
# ## Freight Rates API
# The Freight Rates API retrieves freight costs breakdown for a given load,
# discharge port and vessel class. First create connection towards
# Freight Rates API in order to find available ports and vessel classes:
# + pycharm={"name": "#%%\n"}
from signal_ocean import Connection, FreightRatesAPI
from signal_ocean.freight_rates import PortFilter
connection = Connection(api_key=signal_ocean_api_key)
fr_api = FreightRatesAPI(connection)
# -
# #### Find available ports and vessel classes
# Now retrieve the available vessel classes and look if specific ports are available.
# If you want to get all the available ports do not pass any parameter to the
# corresponding method.
# + pycharm={"name": "#%%\n"}
vessel_classes = fr_api.get_vessel_classes()
print(vessel_classes)
cpc = fr_api.get_ports(PortFilter(name_like='CPC'))[0]
augusta = fr_api.get_ports(PortFilter(name_like='Augusta'))[0]
# -
# #### Get freight rates for specific ports and vessel class
# In this example we retrieve today’s freight rate for Clean Panamax Amsterdam - Lome:
# + pycharm={"name": "#%%\n"}
amsterdam = fr_api.get_ports(PortFilter(name_like='Amsterdam'))[0]
lome = fr_api.get_ports(PortFilter(name_like='Lome'))[0]
fr = fr_api.get_freight_pricing(load_port_id=amsterdam.id, discharge_port_id=lome.id,
vessel_classes=["PanamaxTanker"], is_clean=True)
print(fr)
# + [markdown] pycharm={"name": "#%% md\n"}
# We can also plot the dirty Aframax freight rates for the CPC - Augusta route
# from the 1st of January until today:
# + pycharm={"name": "#%%\n"}
from datetime import date, timedelta
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
start_date = date(2021, 1, 1)
end_date = date.today()
delta = end_date - start_date
dates = [start_date + timedelta(days=i) for i in range(delta.days + 1)]
rates = []
for day in dates:
frates = fr_api.get_freight_pricing(load_port_id=cpc.id,
discharge_port_id=augusta.id,
vessel_classes=["Aframax"],
is_clean=False,
date=day)
rates.append(frates[0].rate)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%Y'))
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=1))
plt.plot(dates, rates)
plt.gcf().autofmt_xdate()
| docs/examples/jupyter/FreightRatesAPI/FreightRatesAPI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pathlib import Path
ROOT_PATH = Path.cwd().parent.parent
RAW_TRAIN_DATA_PATH = "dataset/raw_data/train.txt"
RAW_TEST_DATA_PATH = "dataset/raw_data/test.txt"
TRAIN_DATA_PATH = "dataset/ner_data/train.data"
TEST_DATA_PATH = "dataset/ner_data/test.data"
TRAIN_GRAINED_DATA_PATH = "dataset/ner_data/train_grained.data"
TEST_GRAINED_DATA_PATH = "dataset/ner_data/test_grained.data"
MODEL = [
"CRF",
"SVM",
"PYTORCH_CRF",
"BILSTM_CRF",
"BERT_CRF",
"BERT_BILSTM_CRF"
]
MODEL_SELECT = 3
# %set_env PYTHONPATH=$ROOT_PATH
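# IPython expands `$VAR` and `{expr}` inside `!` shell commands from the Python namespace,
# so the cells below build their paths from `MODEL[MODEL_SELECT]`. The expansion is
# equivalent to an f-string (illustrative check only):

```python
MODEL = ["CRF", "SVM", "PYTORCH_CRF", "BILSTM_CRF", "BERT_CRF", "BERT_BILSTM_CRF"]
MODEL_SELECT = 3  # index into MODEL, as above

# what IPython substitutes for model/{MODEL[MODEL_SELECT]}/data/
model_data_path = f"model/{MODEL[MODEL_SELECT]}/data/"
print(model_data_path)
```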
# +
# Generate train, test NER format Data
# !python data_generator.py \
# --RAW_TRAIN_DATA_PATH=$ROOT_PATH/$TRAIN_DATA_PATH \
# --RAW_TEST_DATA_PATH=$ROOT_PATH/$TEST_DATA_PATH \
# --TRAIN_DATA_PATH=$ROOT_PATH/$TRAIN_GRAINED_DATA_PATH \
# --TEST_DATA_PATH=$ROOT_PATH/$TEST_GRAINED_DATA_PATH \
# --OUTPUT_TYPE=split
# +
# Preprocess and generate trainable datasets
# !python data_preprocessor.py \
# --TRAIN_DATA_PATH=$ROOT_PATH/$TRAIN_GRAINED_DATA_PATH \
# --TEST_DATA_PATH=$ROOT_PATH/$TEST_GRAINED_DATA_PATH \
# --RAW_TEST_DATA_PATH=$ROOT_PATH/$RAW_TEST_DATA_PATH \
# --MODEL_DATA_PATH=$ROOT_PATH/model/{MODEL[MODEL_SELECT]}/data/
# +
# Tokenize and training process, use the dataset pickled from data_preprocessor
# !python ner_trainer.py \
# --MODEL={MODEL[MODEL_SELECT]} \
# --TRAIN_DATA_PATH=$ROOT_PATH/$TRAIN_GRAINED_DATA_PATH \
# --MODEL_DATA_PATH=$ROOT_PATH/model/{MODEL[MODEL_SELECT]}/data/ \
# --MODEL_CHECKPOINT_PATH=$ROOT_PATH/model/{MODEL[MODEL_SELECT]}/checkpoint/ \
# --CHECKPOINT_KEEP=3 \
# --SENTENCE_MAX_LENGTH=32 \
# --BATCH_SIZE=16 \
# --EMBEDDING_SIZE=300 \
# --HIIDEN_NUMS=512 \
# --EPOCHS=1 \
# --LEARNING_RATE=1e-3
# + tags=[]
# Predicting process and export the results, use the model generated from training checkpoints
# !python ner_predictor.py \
# --MODEL={MODEL[MODEL_SELECT]} \
# --MODEL_DATA_PATH=$ROOT_PATH/model/{MODEL[MODEL_SELECT]}/data/ \
# --MODEL_CHECKPOINT_PATH=$ROOT_PATH/model/{MODEL[MODEL_SELECT]}/checkpoint/ \
# --MODEL_OUTPUT_PATH=$ROOT_PATH/model/{MODEL[MODEL_SELECT]}/output/ \
# --EMBEDDING_SIZE=300 \
# --HIIDEN_NUMS=512 \
# --LEARNING_RATE=1e-3
# -
| program/main/main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import time
import tweepy
import os
import pandas as pd
from dotenv import load_dotenv
load_dotenv()
consumer_key = os.getenv("TWITTER_CONSUMER_KEY")
consumer_secret = os.getenv("TWITTER_CONSUMER_SECRET")
access_token = os.getenv("TWITTER_ACCESS_TOKEN")
access_token_secret = os.getenv("TWITTER_ACCESS_TOKEN_SECRET")
# -
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
def limit_handled(cursor):
while True:
try:
yield cursor.next()
except tweepy.RateLimitError:
time.sleep(15 * 60)
except StopIteration:
print("End of iterations reached")
break
#1 change sub-topic
# searchQuery = "baseball OR base ball OR mlb OR #baseball OR #mlb -filter:retweets"
# searchQuery = "golf OR #golf OR #themasters OR #pgatour -filter:retweets"
searchQuery = "nfl OR #nfl OR american football OR #americanfootball -filter:retweets"
records = []
print("Collecting tweets...")
for record in limit_handled(tweepy.Cursor(api.search, q=searchQuery).items(15000)):
records.append(record._json)
df = pd.DataFrame(records)
#2 change csv file name
df.to_csv("input/raw/twitter_nfl.csv")
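# The `record._json` payloads are nested dicts, so `pd.DataFrame(records)` leaves columns
# like `user` holding raw dicts. If flat columns are preferred, `pandas.json_normalize`
# expands them - a sketch with made-up records, not the real Twitter schema:

```python
import pandas as pd

# hypothetical records shaped roughly like tweet JSON
records = [
    {"id": 1, "text": "hello", "user": {"screen_name": "alice"}},
    {"id": 2, "text": "world", "user": {"screen_name": "bob"}},
]
flat = pd.json_normalize(records)
print(list(flat.columns))  # nested keys become dotted column names
```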
| collect_twitter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rameshveer/NLP_Projects_TSAI/blob/main/END_S5_RNN_LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="jofyc9OC4Qcf"
# #Imports
# + id="ahBVnrNc3E0U"
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
from IPython import display
plt.style.use('seaborn-white')
# + [markdown] id="crQSAaIz4SkA"
# # Read and process data.
#
# Download the file from this URL: https://drive.google.com/file/d/1UWWIi-sz9g0x3LFvkIZjvK1r2ZaCqgGS/view?usp=sharing
# + id="rgOGxPDP3Wpp"
data = open('text.txt', 'r').read()
# + colab={"base_uri": "https://localhost:8080/", "height": 137} id="aBpKgzBDQ1r9" outputId="f24fd683-9792-46b9-ff4c-e89d132d532c"
data
# + [markdown] id="ZeXXMLRb4kXb"
# Process data and calculate indices
# + id="E5TKeiOp4jtl" colab={"base_uri": "https://localhost:8080/"} outputId="ffa11f7c-b343-4b5d-8743-067972a9d93c"
chars = list(set(data))
print(chars)
data_size, X_size = len(data), len(chars)
print("Corona Virus article has %d characters, %d unique characters" %(data_size, X_size))
char_to_idx = {ch:i for i,ch in enumerate(chars)}
idx_to_char = {i:ch for i,ch in enumerate(chars)}
print(char_to_idx)
print(idx_to_char)
# + [markdown] id="4C53MB135LRY"
# # Constants and Hyperparameters
# + id="dfj21ORa49Ps"
Hidden_Layer_size = 100 #size of the hidden layer
Time_steps = 40 # Number of time steps (length of the sequence) used for training
learning_rate = 1e-1 # Learning Rate
weight_sd = 0.1 #Standard deviation of weights for initialization
z_size = Hidden_Layer_size + X_size #Size of concatenation(H, X) vector
# + colab={"base_uri": "https://localhost:8080/"} id="jJV4M1PAKn7o" outputId="927bf1af-c674-4b52-d4c6-13acdf97d2ed"
z_size
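# To see why `z_size = Hidden_Layer_size + X_size`: the LSTM below stacks the previous
# hidden state on top of the one-hot input, so the concatenated vector has one row per
# hidden unit plus one per character. Toy shapes for illustration:

```python
import numpy as np

H, X = 100, 40  # toy sizes standing in for Hidden_Layer_size and X_size
h_prev = np.zeros((H, 1))
x = np.zeros((X, 1))
z = np.vstack((h_prev, x))  # equivalent to the np.row_stack used in forward()
print(z.shape)
```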
# + [markdown] id="OdmJf4Du5uhb"
# # Activation Functions and Derivatives
# + id="seGHei_D5FGk"
def sigmoid(x):
return np.exp(x) / (1 + np.exp(x))
def dsigmoid(x):
return x * (1 - x)
def tanh(x):
x = np.asarray(x)
return (np.exp(2*x) - 1) / (np.exp(2 * x) + 1)
def dtanh(x):
return 1 - x * x
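# Note that `dsigmoid` and `dtanh` take the *activation output*, not the pre-activation
# input: d(sigmoid)/dx = s(x)(1 - s(x)) and d(tanh)/dx = 1 - tanh(x)^2. A finite-difference
# sanity sketch confirms the convention:

```python
import numpy as np

def sigmoid(x):
    return np.exp(x) / (1 + np.exp(x))

eps = 1e-6
x = 0.3
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)  # central difference
s = sigmoid(x)
analytic = s * (1 - s)  # dsigmoid applied to the *output*, as in the code above
print(abs(numeric - analytic))
```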
# + [markdown] id="KeCvVH1v6Me-"
# # Quiz Question 1
#
# What is the value of sigmoid(0) calculated from your code? (Answer up to 1 decimal point, e.g. 4.2 and NOT 4.29999999, no rounding off).
#
# # Quiz Question 2
#
# What is the value of dsigmoid(sigmoid(0)) calculated from your code?? (Answer up to 2 decimal point, e.g. 4.29 and NOT 4.29999999, no rounding off).
#
# # Quiz Question 3
#
# What is the value of tanh(dsigmoid(sigmoid(0))) calculated from your code?? (Answer up to 5 decimal point, e.g. 4.29999 and NOT 4.29999999, no rounding off).
#
# # Quiz Question 4
#
# What is the value of dtanh(tanh(dsigmoid(sigmoid(0)))) calculated from your code?? (Answer up to 5 decimal point, e.g. 4.29999 and NOT 4.29999999, no rounding off).
# + colab={"base_uri": "https://localhost:8080/"} id="UOUqPAxy_22H" outputId="b394b915-6889-4d79-ed74-4051b2d4cf74"
# Q1
sigmoid(0)
# + colab={"base_uri": "https://localhost:8080/"} id="wUkB3RLoAFdF" outputId="83005614-ad3b-48c7-c2bd-0437012893e0"
# Q2
dsigmoid(sigmoid(0))
# + colab={"base_uri": "https://localhost:8080/"} id="AB-O1CJGAN4A" outputId="23915707-8c41-42c7-8aef-3cd1791c77db"
# Q3
tanh(dsigmoid(sigmoid(0)))
# + colab={"base_uri": "https://localhost:8080/"} id="8HxCDUDgARgS" outputId="8c6b3778-7d4f-4576-f504-445e6b6dde70"
# Q4
dtanh(tanh(dsigmoid(sigmoid(0))))
# + [markdown] id="EeSVipDu8iKE"
# # Parameters
# + id="ICbWNemE6LGV"
class Param:
def __init__(self, name, value):
self.name = name
self.v = value # parameter value
self.d = np.zeros_like(value) # derivative
self.m = np.zeros_like(value) # momentum for Adagrad
# + [markdown] id="j83pZNPE8212"
# We use random weights drawn from a normal distribution (0, weight_sd) for the `tanh` activation function and (0.5, weight_sd) for the `sigmoid` activation function.
#
# Biases are initialized to zeros.
# + [markdown] id="swHwLXOI9E7V"
# # LSTM
# You are making this network, please note f, i, c and o (also "v") in the image below:
# 
#
# Please note that we are concatenating the old_hidden_vector and new_input.
# + [markdown] id="A0DBzNY-90s5"
# # Quiz Question 4
#
# In the class definition below, what should be size_a, size_b, and size_c? ONLY use the variables defined above.
# + id="SFuHhqVq6Wge"
size_a = Hidden_Layer_size
size_b = z_size
size_c = X_size
class Parameters:
def __init__(self):
self.W_f = Param('W_f', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_f = Param('b_f', np.zeros((size_a, 1)))
self.W_i = Param('W_i', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_i = Param('b_i', np.zeros((size_a, 1)))
self.W_C = Param('W_C', np.random.randn(size_a, size_b) * weight_sd)
self.b_C = Param('b_C', np.zeros((size_a, 1)))
self.W_o = Param('W_o', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_o = Param('b_o', np.zeros((size_a, 1)))
#For final layer to predict the next character
self.W_v = Param('W_v', np.random.randn(X_size, size_a) * weight_sd)
self.b_v = Param('b_v', np.zeros((size_c, 1)))
def all(self):
return [self.W_f, self.W_i, self.W_C, self.W_o, self.W_v,
self.b_f, self.b_i, self.b_C, self.b_o, self.b_v]
parameters = Parameters()
# + colab={"base_uri": "https://localhost:8080/"} id="HOo7KtMlNPNi" outputId="41c2952a-9269-4c00-e612-139708207e5a"
parameters.all()
# + [markdown] id="RzmfGLZt_xVs"
# Look at these operations which we'll be writing:
#
# **Concatenation of h and x:**
#
# $z\:=\:\left[h_{t-1},\:x\right]$
#
# $f_t=\sigma\left(W_f\cdot z\:+\:b_f\:\right)$
#
# $i_t=\sigma\left(W_i\cdot z\:+\:b_i\right)$
#
# $\overline{C_t}=\tanh\left(W_C\cdot z\:+\:b_C\right)$
#
# $C_t=f_t\ast C_{t-1}+i_t\ast \overline{C}_t$
#
# $o_t=\sigma\left(W_o\cdot z\:+\:b_o\right)$
#
# $h_t=o_t\ast\tanh\left(C_t\right)$
#
# **Logits:**
#
# $v_t=W_v\cdot h_t+b_v$
#
# **Softmax:**
#
# $\hat{y}=softmax\left(v_t\right)$
#
# + colab={"base_uri": "https://localhost:8080/"} id="p47Ifu-cPnRa" outputId="5bf3886a-5853-42b9-8f05-651fd1d2879e"
param_dict = {x.name: x.v for x in parameters.all()}
param_dict.keys()
# + id="-bUkseNnDott"
def forward(x, h_prev, C_prev, p = parameters):
assert x.shape == (X_size, 1)
assert h_prev.shape == (Hidden_Layer_size, 1)
assert C_prev.shape == (Hidden_Layer_size, 1)
param_dict = {x.name: x.v for x in parameters.all()}
W_f = param_dict['W_f']
W_i = param_dict['W_i']
W_C = param_dict['W_C']
b_f = param_dict['b_f']
b_i = param_dict['b_i']
b_C = param_dict['b_C']
W_o = param_dict['W_o']
b_o = param_dict['b_o']
W_v = param_dict['W_v']
b_v = param_dict['b_v']
z = np.row_stack((h_prev, x))
f = sigmoid(np.dot(W_f, z) + b_f)
i = sigmoid(np.dot(W_i, z) + b_i)
C_bar = tanh(np.dot(W_C, z) + b_C)
C = f * C_prev + i * C_bar
o = sigmoid(np.dot(W_o, z) + b_o)
h = o * tanh(C)
v = np.dot(W_v, h) + b_v
y = np.exp(v) / (np.sum(np.exp(v)) + 1e-8) #softmax
return z, f, i, C_bar, C, o, h, v, y
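# One caveat about the softmax line above: `np.exp(v)` overflows once any logit exceeds
# roughly 709, which shows up as `nan` losses. A numerically stable variant (a sketch, not
# a change to the assignment code) subtracts the max logit first; the result is
# mathematically identical because the shift cancels in the ratio:

```python
import numpy as np

def stable_softmax(v):
    e = np.exp(v - np.max(v))  # shift logits so the largest is 0
    return e / np.sum(e)

v = np.array([[1000.0], [1001.0], [999.0]])  # plain np.exp would overflow here
y = stable_softmax(v)
print(y.sum())
```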
# + [markdown] id="jZrDhZIjFpdI"
# You must finish the function above before you can attempt the questions below.
#
# # Quiz Question 5
#
# What is the output of 'print(len(forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)), parameters)))'?
# + colab={"base_uri": "https://localhost:8080/"} id="G36eqa6tQept" outputId="7f9fc27a-be1a-45af-dbd4-9318ca8bc093"
print(len(forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)), parameters)))
# + [markdown] id="XV-YVl_GGiX8"
# # Quiz Question 6.
#
# Assuming you have fixed the forward function, run this command:
# z, f, i, C_bar, C, o, h, v, y = forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)))
#
# Now, find these values:
#
#
# 1. print(z.shape)
# 2. print(np.sum(z))
# 3. print(np.sum(f))
#
# Copy and paste exact values you get in the logs into the quiz.
#
#
# + id="1GvKVWmTDt3H"
z, f, i, C_bar, C, o, h, v, y = forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)))
# + colab={"base_uri": "https://localhost:8080/"} id="0N_lGg3SR1af" outputId="8ee47b7d-9cd3-433b-cfd3-8100f43bb0cb"
print(z.shape)
print(np.sum(z))
print(np.sum(f))
# + [markdown] id="NeSvhkqwILsG"
# # Backpropagation
#
# Here we are defining the backpropagation. It's too complicated, here is the whole code. (Please note that this would work only if your earlier code is perfect).
# + id="zIa1jUZiGPmF"
def backward(target, dh_next, dC_next, C_prev,
z, f, i, C_bar, C, o, h, v, y,
p = parameters):
assert z.shape == (X_size + Hidden_Layer_size, 1)
assert v.shape == (X_size, 1)
assert y.shape == (X_size, 1)
for param in [dh_next, dC_next, C_prev, f, i, C_bar, C, o, h]:
assert param.shape == (Hidden_Layer_size, 1)
dv = np.copy(y)
dv[target] -= 1
p.W_v.d += np.dot(dv, h.T)
p.b_v.d += dv
dh = np.dot(p.W_v.v.T, dv)
dh += dh_next
do = dh * tanh(C)
do = dsigmoid(o) * do
p.W_o.d += np.dot(do, z.T)
p.b_o.d += do
dC = np.copy(dC_next)
dC += dh * o * dtanh(tanh(C))
dC_bar = dC * i
dC_bar = dtanh(C_bar) * dC_bar
p.W_C.d += np.dot(dC_bar, z.T)
p.b_C.d += dC_bar
di = dC * C_bar
di = dsigmoid(i) * di
p.W_i.d += np.dot(di, z.T)
p.b_i.d += di
df = dC * C_prev
df = dsigmoid(f) * df
p.W_f.d += np.dot(df, z.T)
p.b_f.d += df
dz = (np.dot(p.W_f.v.T, df)
+ np.dot(p.W_i.v.T, di)
+ np.dot(p.W_C.v.T, dC_bar)
+ np.dot(p.W_o.v.T, do))
dh_prev = dz[:Hidden_Layer_size, :]
dC_prev = f * dC
return dh_prev, dC_prev
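# The first step of `backward` (`dv = np.copy(y); dv[target] -= 1`) is the classic
# softmax-with-cross-entropy gradient: d(-log y_target)/dv = y - onehot(target). A
# finite-difference spot check on a toy logit vector (illustrative only) verifies it:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / np.sum(e)

np.random.seed(0)
v = np.random.randn(5, 1)
target = 2

def loss(vv):
    return -np.log(softmax(vv)[target, 0])

dv = softmax(v)
dv[target] -= 1  # analytic gradient, as in backward()

# nudge one logit and compare the slope against the analytic gradient
eps = 1e-6
v_shift = v.copy()
v_shift[0, 0] += eps
numeric = (loss(v_shift) - loss(v)) / eps
print(abs(numeric - dv[0, 0]))
```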
# + [markdown] id="Tnc7WpRkIU5S"
# # Forward and Backward Combined Pass
#
# Let's first clear the gradients before each backward pass
# + id="OJWoC3U1ITf8"
def clear_gradients(params = parameters):
for p in params.all():
p.d.fill(0)
# + [markdown] id="7XN93UnjIgmA"
# Clip gradients to mitigate exploding gradients
# + id="0LTsublxIfFl"
def clip_gradients(params = parameters):
for p in params.all():
np.clip(p.d, -1, 1, out=p.d)
# + [markdown] id="T7XUpDTWIl_Y"
# Calculate and store the values in forward pass. Accumulate gradients in backward pass and clip gradients to avoid exploding gradients.
#
# input, target are list of integers, with character indexes.
# h_prev is the array of initial h at h−1 (size H x 1)
# C_prev is the array of initial C at C−1 (size H x 1)
# Returns loss, final hT and CT
# + id="CQNxjTuZIia_"
def forward_backward(inputs, targets, h_prev, C_prev):
global parameters
# To store the values for each time step
x_s, z_s, f_s, i_s, = {}, {}, {}, {}
C_bar_s, C_s, o_s, h_s = {}, {}, {}, {}
v_s, y_s = {}, {}
# Values at t - 1
h_s[-1] = np.copy(h_prev)
C_s[-1] = np.copy(C_prev)
loss = 0
# Loop through time steps
assert len(inputs) == Time_steps
for t in range(len(inputs)):
x_s[t] = np.zeros((X_size, 1))
x_s[t][inputs[t]] = 1 # Input character
(z_s[t], f_s[t], i_s[t],
C_bar_s[t], C_s[t], o_s[t], h_s[t],
v_s[t], y_s[t]) = \
forward(x_s[t], h_s[t - 1], C_s[t - 1]) # Forward pass
loss += -np.log(y_s[t][targets[t], 0]) # Loss for at t
clear_gradients()
dh_next = np.zeros_like(h_s[0]) #dh from the next character
dC_next = np.zeros_like(C_s[0]) #dC from the next character
for t in reversed(range(len(inputs))):
# Backward pass
dh_next, dC_next = \
backward(target = targets[t], dh_next = dh_next,
dC_next = dC_next, C_prev = C_s[t-1],
z = z_s[t], f = f_s[t], i = i_s[t], C_bar = C_bar_s[t],
C = C_s[t], o = o_s[t], h = h_s[t], v = v_s[t],
y = y_s[t])
clip_gradients()
return loss, h_s[len(inputs) - 1], C_s[len(inputs) - 1]
# + [markdown] id="tcy5u_vRItkV"
# # Sample the next character
# + id="p8SrtJiwIsSm"
def sample(h_prev, C_prev, first_char_idx, sentence_length):
x = np.zeros((X_size, 1))
x[first_char_idx] = 1
h = h_prev
C = C_prev
indexes = []
for t in range(sentence_length):
_, _, _, _, C, _, h, _, p = forward(x, h, C)
idx = np.random.choice(range(X_size), p=p.ravel())
x = np.zeros((X_size, 1))
x[idx] = 1
indexes.append(idx)
return indexes
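# `sample` draws each next character from the softmax distribution rather than taking the
# argmax, which keeps the generated text varied. The key call is `np.random.choice` with a
# probability vector; a toy illustration of its behavior over many draws:

```python
import numpy as np

p = np.array([0.1, 0.7, 0.2])  # toy next-character distribution
np.random.seed(42)
draws = [np.random.choice(range(3), p=p) for _ in range(1000)]
freq = np.bincount(draws, minlength=3) / len(draws)
print(freq)  # empirical frequencies roughly match p
```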
# + [markdown] id="SiWFaWLNIx_L"
# # Training (Adagrad)
#
# Update the graph and display a sample output
#
#
# + id="ENQYU-7AIw0t"
def update_status(inputs, h_prev, C_prev):
#initialized later
global plot_iter, plot_loss
global smooth_loss
# Get predictions for 200 letters with current model
sample_idx = sample(h_prev, C_prev, inputs[0], 200)
txt = ''.join(idx_to_char[idx] for idx in sample_idx)
# Clear and plot
plt.plot(plot_iter, plot_loss)
display.clear_output(wait=True)
plt.show()
#Print prediction and loss
print("----\n %s \n----" % (txt, ))
print("iter %d, loss %f" % (iteration, smooth_loss))
# + [markdown] id="ACXcASJuI73a"
# # Update Parameters
#
# \begin{align}
# \theta_i &\leftarrow \theta_i - \frac{\eta \, d\theta_i}{\sqrt{\sum_{\tau} (d\theta_{i,\tau})^2 + \epsilon}} \\
# d\theta_i &= \frac{\partial L}{\partial \theta_i}
# \end{align}
# + id="bR08TvcjI4Pf"
def update_paramters(params = parameters):
for p in params.all():
p.m += p.d * p.d # Accumulate the sum of squared gradients
#print(learning_rate * dparam)
p.v += -(learning_rate * p.d / np.sqrt(p.m + 1e-8))
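A minimal, self-contained sketch of the same Adagrad step on a single scalar parameter (the gradient values here are made up, and plain variables stand in for the notebook's `parameters` container):

```python
import numpy as np

learning_rate = 0.1
theta = 1.0   # the parameter being trained
m = 0.0       # running sum of squared gradients (p.m in the notebook)

for grad in [0.5, 0.4, 0.3]:           # pretend gradients from three iterations
    m += grad * grad                   # accumulate squared gradients
    theta -= learning_rate * grad / np.sqrt(m + 1e-8)  # per-parameter scaled step

print(theta)
```

Because `m` only grows, the effective step size for each parameter shrinks over time, which is the defining property of Adagrad.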
# + [markdown] id="La9vyJ6RJLFK"
# Initialize the smoothed loss and the buffers used for plotting before training
#
#
# + id="ZVDHbMb7JNGT"
# Exponential average of loss
# Initialize to the expected loss of a random model
smooth_loss = -np.log(1.0 / X_size) * Time_steps
iteration, pointer = 0, 0
# For the graph
plot_iter = np.zeros((0))
plot_loss = np.zeros((0))
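A random model assigns probability `1/X_size` to every character, so its expected loss per character is `-log(1/X_size)`; summing over `Time_steps` characters gives the starting value above. For example, with an assumed vocabulary of 65 characters and 40 time steps:

```python
import numpy as np

X_size, Time_steps = 65, 40   # illustrative sizes, not necessarily the notebook's
smooth_loss = -np.log(1.0 / X_size) * Time_steps
print(round(float(smooth_loss), 2))
```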
# + [markdown] id="HF6vS0VWJqsS"
# # Training Loop
# + id="OQyNSL0iJOxH" colab={"base_uri": "https://localhost:8080/", "height": 350} outputId="842c2ed0-d4d6-4832-9ed6-b1896700536f"
iters_remaining = 50000
while iters_remaining > 0:
# Reset to the start of the data when we run off the end
if pointer + Time_steps >= len(data) or iteration == 0:
g_h_prev = np.zeros((Hidden_Layer_size, 1))
g_C_prev = np.zeros((Hidden_Layer_size, 1))
pointer = 0
inputs = ([char_to_idx[ch]
for ch in data[pointer: pointer + Time_steps]])
targets = ([char_to_idx[ch]
for ch in data[pointer + 1: pointer + Time_steps + 1]])
loss, g_h_prev, g_C_prev = \
forward_backward(inputs, targets, g_h_prev, g_C_prev)
smooth_loss = smooth_loss * 0.999 + loss * 0.001
# Print every hundred steps
if iteration % 100 == 0:
update_status(inputs, g_h_prev, g_C_prev)
update_paramters()
plot_iter = np.append(plot_iter, [iteration])
plot_loss = np.append(plot_loss, [loss])
pointer += Time_steps
iteration += 1
iters_remaining -= 1
# + [markdown] id="2AKpa1BGOItQ"
# # Quiz Question 7.
#
# Run the above code for 50000 iterations, making sure that `Hidden_Layer_size` is 100 (hidden units) and `Time_steps` is 40. What is the loss value you're seeing?
# + [markdown] id="_amQ8OpLdbC4"
# 6.42685
| END_S5_RNN_LSTM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Pandas
# This notebook walks through the core functionality of Pandas
import numpy as np
import pandas as pd
pd.__version__
# <img src='https://www.kdnuggets.com/wp-content/uploads/pandas-02.png'>
# [Image Credit](https://www.kdnuggets.com/2017/01/pandas-cheat-sheet.html)
# ## Creating a Pandas Series
dict_1 = {'Name': 'Chris', 'Item Purchased': 'Dog Food', 'Cost': 22.50}
print(dict_1)
type(dict_1)
series_1 = pd.Series(dict_1)
series_1.index
series_1.values
# ## Creating Pandas-DataFrame
# +
# Example: Series
purchase_1 = pd.Series({'Name': 'Chris',
'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_3 = pd.Series({'Name': 'Vinod',
'Item Purchased': 'Bird Seed',
'Cost': 5.00})
df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2'])
# -
df
type(df)
df.index
df.columns
df.values
purchase_1 = pd.Series({'Name': 'Chris',
'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_4 = pd.Series({'Name': 'Vinod',
'Item Purchased': 'Bird Seed',
'Cost': 5.00,
'Test': 12})
df_2 = pd.DataFrame([purchase_1, purchase_2, purchase_4], index=['Store 1', 'Store 1', 'Store 3'])
df
# ### First look: DataFrames
df_2.head(2)
df_2.tail(2)
df
df_2.describe()
df_2.describe(include='all')
df.info()
# ## Playing with the data
df.loc['Store 1']
df.iloc[0:,0:2]
df.loc['Store 1', 'Cost']
df.loc['Store 1', ['Name', 'Cost']]
df.T
df.T.index
df.loc['Store 1'].iloc[:,1:]
# +
# df.drop?
# -
df
df_3 = df.drop('Cost', axis=1).copy()
df_3
copy_df = df.copy()
print(copy_df)
del copy_df['Name']
copy_df
df
costs = df['Cost']
costs
costs += 2
df
# ## Data Import
# +
# loading CSV files
# -
df = pd.read_csv('data/trainh.csv')
# +
# set the display option so up to 500 columns are shown
pd.set_option('display.max_columns', 500)
# -
df
# understanding the data
df.info()
df.describe()
# +
# loading data from URL
df_csv_aus_html = pd.read_csv('https://raw.githubusercontent.com/zekelabs/data-science-complete-tutorial/master/Data/HR_comma_sep.csv.txt')
# -
type(df_csv_aus_html)
df_csv_aus_html.describe()
# ## Filtering DataFrames
df['SalePrice'] > 50000
only_SalePrice = df.where(df['SalePrice'] > 150000)
only_SalePrice
only_SalePrice_2 = only_SalePrice.dropna()
only_SalePrice_2.head()
only_SalePrice = df.where((df['SalePrice'] < 130000) | (df['SalePrice'] > 180000))
only_SalePrice
df_Auswahl = df[(df['SalePrice'] > 50000) & (df['MSZoning'] == 'RL') & (df['LotShape'] == 'Reg')].copy()
df_Auswahl
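When combining conditions like this, each comparison must get its own parentheses, because `&` and `|` bind more tightly than `<`, `>`, and `==` in Python. A minimal self-contained sketch (the column names and values are made up, not the housing data):

```python
import pandas as pd

df = pd.DataFrame({"SalePrice": [40_000, 120_000, 200_000],
                   "MSZoning": ["RL", "RL", "RM"]})

# Each comparison parenthesized, then combined with & (and) or | (or)
mask = (df["SalePrice"] > 50_000) & (df["MSZoning"] == "RL")
print(df[mask])
```

Writing `df["SalePrice"] > 50_000 & ...` without the parentheses would apply `&` to the scalar first and raise an error or give the wrong rows.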
# ## Cleaning Data
df = pd.read_csv('data/traint.csv')
df.info()
df.isnull().sum()
df['Age'].mean()
df['Age'] = df['Age'].fillna(df['Age'].mean())
df['Age']
# ## Explorative Data Analyse (EDA) - Groups and Pivots
df.groupby('Sex')[['Survived']].mean()
df.groupby(['Sex', 'Pclass'])['Survived'].aggregate('mean').unstack()
df.head(1)
# ## Merging (Joins) & Concat (Anhängen)
# <img src='https://static1.squarespace.com/static/54bb1957e4b04c160a32f928/t/5724fd0bf699bb5ad6432150/1462041871236/?format=750w'>
# [Image Credit](https://www.ryanbaumann.com/blog/2016/4/30/python-pandas-tosql-only-insert-new-rows)
# Pandas makes it very easy to link tables, much like a JOIN in SQL. This is fastest when we exploit the index. (Of course, it also works without using the index...)
# +
# Creating a dataframe
df = pd.DataFrame([{'Name': 'MJ', 'Item Purchased': 'Sponge', 'Cost': 22.50},
{'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter', 'Cost': 2.50},
{'Name': 'Filip', 'Item Purchased': 'Spoon', 'Cost': 5.00}],
index=['Store 1', 'Store 1', 'Store 2'])
df
# +
# Adding a column
df['Date'] = ['December 1', 'January 1', 'mid-May']
df
# +
# Bool column
df['Delivered'] = True
df
# -
df['Feedback'] = ['Positive', None, 'Negative']
df
# +
# Rearrange Index
adf = df.reset_index()
#reset_index: New index
#set_index: Index from chosen column
adf['Date'] = pd.Series({0: 'December 1', 2: 'mid-May'})
adf
# -
# ## How to connect DataFrames (tables)
#
# You need at least two tables with a shared "key" that allows the connection
# +
# Example
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR'},
{'Name': 'Sally', 'Role': 'Course liasion'},
{'Name': 'James', 'Role': 'Grader'}])
staff_df = staff_df.set_index('Name')
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business'},
{'Name': 'Mike', 'School': 'Law'},
{'Name': 'Sally', 'School': 'Engineering'}])
student_df = student_df.set_index('Name')
# -
student_df
staff_df
# +
# Function "merge" allows the combination
# -
pd.merge(staff_df, student_df, how='outer', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='inner', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='left', left_index=True, right_index=True)
pd.merge(staff_df, student_df, how='right', left_index=True, right_index=True)
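The four `how=` modes above differ only in which index keys survive: `outer` keeps the union, `inner` the intersection, and `left`/`right` the keys of one side. A miniature check (the frames here are made-up stand-ins for the staff/student tables):

```python
import pandas as pd

staff = pd.DataFrame({"Role": ["Director of HR", "Grader"]}, index=["Kelly", "James"])
students = pd.DataFrame({"School": ["Business", "Law"]}, index=["James", "Mike"])

outer = pd.merge(staff, students, how="outer", left_index=True, right_index=True)
inner = pd.merge(staff, students, how="inner", left_index=True, right_index=True)

print(sorted(outer.index))  # union of the keys
print(sorted(inner.index))  # intersection of the keys
```

Rows that exist on only one side of an outer/left/right merge get NaN in the columns coming from the other frame.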
# ### Is it possible without an index? Yes, but it will be slower!
staff_df = staff_df.reset_index()
student_df = student_df.reset_index()
staff_df
student_df
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
# +
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR', 'Location': 'State Street'},
{'Name': 'Sally', 'Role': 'Course liasion', 'Location': 'Washington Avenue'},
{'Name': 'James', 'Role': 'Grader', 'Location': 'Washington Avenue'}])
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business', 'Location': '1024 Billiard Avenue'},
{'Name': 'Mike', 'School': 'Law', 'Location': 'Fraternity House #22'},
{'Name': 'Sally', 'School': 'Engineering', 'Location': '512 Wilson Crescent'}])
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
# -
staff_df
student_df
# ### It is also possible to merge on multiple fields
staff_df = pd.DataFrame([{'First Name': 'Kelly', 'Last Name': 'Desjardins', 'Role': 'Director of HR'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'Role': 'Course liasion'},
{'First Name': 'James', 'Last Name': 'Wilde', 'Role': 'Grader'}])
student_df = pd.DataFrame([{'First Name': 'James', 'Last Name': 'Hammond', 'School': 'Business'},
{'First Name': 'Mike', 'Last Name': 'Smith', 'School': 'Law'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'School': 'Engineering'}])
staff_df
student_df
pd.merge(staff_df, student_df, how='inner', left_on=['First Name','Last Name'], right_on=['First Name','Last Name'])
# +
# Creating first dataframe
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index = [0, 1, 2, 3])
# Creating second dataframe
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index = [4, 5, 6, 7])
# Creating third dataframe
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index = [8, 9, 10, 11])
# -
# ### If there is no common "key", you can also concatenate the DataFrames (like UNION in SQL)
df1
df2
df3
# Concatenating the dataframes
pd.concat([df1, df2, df3])
pd.concat([df1, df2, df3], axis=1)
df1.columns = ['E','F','G','H']
df1
pd.concat([df1, df2, df3], axis=0)
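Because `df1`'s columns were just renamed, stacking along axis 0 now produces the union of the column sets, with NaNs wherever a frame lacks a column. A tiny self-contained sketch of that behavior:

```python
import pandas as pd

a = pd.DataFrame({"A": [1, 2]})
b = pd.DataFrame({"B": [3, 4]})

stacked = pd.concat([a, b], axis=0, ignore_index=True)
print(stacked.columns.tolist())          # union of the two column sets
print(int(stacked.isna().sum().sum()))   # NaNs where a frame lacks a column
```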
# ## Indexing from DF and EDA
df = pd.read_csv('data/trainh.csv')
df.head()
df = df.set_index('SalePrice')
df.head()
df = df.reset_index(drop=True)
df.head()
df = pd.read_csv('data/traint.csv')
df.head()
df['Age'].unique()
df = df[df['Age'] == 50]
df.head()
# ## Timestamps in Pandas
t1 = pd.Series(list('abc'), [pd.Timestamp('2016-09-01'), pd.Timestamp('2016-09-02'), pd.Timestamp('2016-09-03')])
t1
type(t1.index)
t2 = pd.Series(list('def'), [pd.Period('2016-09'), pd.Period('2016-10'), pd.Period('2016-11')])
t2
type(t2.index)
# +
# Convert the dates to timestamps
d1 = ['2 June 2013', 'Aug 29, 2014', '2015-06-26', '7/12/16']
ts3 = pd.DataFrame(np.random.randint(10, 100, (4,2)), index=d1, columns=list('ab'))
ts3
# -
ts3.index = pd.to_datetime(ts3.index)
ts3
pd.to_datetime('4.7.12', dayfirst=True)
datum = pd.to_datetime('4.7.12', dayfirst=True)
datum
pd.Timestamp('9/3/2016')-pd.Timestamp('9/9/2010')
pd.Timestamp('9/2/2016 8:10AM') + pd.Timedelta('12D 3H')
dates = pd.date_range('10-01-2016', periods=9, freq='2W-SUN')
dates
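Subtracting two `Timestamp`s yields a `Timedelta`, and adding a `Timedelta` shifts a `Timestamp`. A quick self-contained check of the two arithmetic cells above:

```python
import pandas as pd

# Difference between two dates is a Timedelta
delta = pd.Timestamp("9/3/2016") - pd.Timestamp("9/9/2010")
print(delta.days)

# Adding a Timedelta shifts the Timestamp forward
shifted = pd.Timestamp("9/2/2016 8:10AM") + pd.Timedelta("12D 3H")
print(shifted)
```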
# +
# pd.Period?
# +
# pd.Timestamp?
# -
# ## Pivot
import seaborn as sns
tips = sns.load_dataset("tips")
tips.head(10)
pd.pivot_table(tips, values='tip',
index=['sex'],
columns=['time','smoker'], aggfunc='sum')
# +
# pd.pivot_table?
# -
# ## Melt
df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},'B': {0: 1, 1: 3, 2: 5},'C': {0: 2, 1: 4, 2: 6}})
df
# +
# pd.melt?
# -
pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])
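`melt` unpivots the B and C columns into long format: each original row contributes one row per value column. A self-contained check, re-creating the small frame above:

```python
import pandas as pd

df = pd.DataFrame({"A": ["a", "b", "c"], "B": [1, 3, 5], "C": [2, 4, 6]})
long = pd.melt(df, id_vars=["A"], value_vars=["B", "C"])

print(long.shape)                          # one row per (id row, value column) pair
print(long["variable"].unique().tolist())  # which columns were unpivoted
```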
| 2_Overview_Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="_omgylxzm5i9"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="f0A2utIXbPc5"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="eriCSHTznS4U"
# # Partial Differential Equations
# + [markdown] colab_type="text" id="uYCNQT4snWr6"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/non-ml/pdes.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/non-ml/pdes.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="zxQvbi5gnyMm"
# TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it.
#
# ## Basic setup
#
# A few imports you'll need.
# + colab={} colab_type="code" id="FG6DLet6ol3j"
from __future__ import absolute_import, division, print_function, unicode_literals
#Import libraries for simulation
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
pass
except Exception:
pass
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
# + [markdown] colab_type="text" id="7vd7rHS0oqEF"
# A function for displaying the state of the pond's surface as an image.
# + colab={} colab_type="code" id="fJ8SpYYUoq6G"
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
# + [markdown] colab_type="text" id="NjiZ2_6Mou13"
#
# Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
# + colab={} colab_type="code" id="cH82JlsPozdV"
sess = tf.InteractiveSession()
# + [markdown] colab_type="text" id="Hbk97yero5a9"
# ## Computational convenience functions
# + colab={} colab_type="code" id="XVomNV1OpBbX"
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=tf.float32)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
# + [markdown] colab_type="text" id="f9gBib2lpINO"
# ## Define the PDE
#
# Our pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
# + colab={} colab_type="code" id="7faiwBQhpK1Z"
N = 500
# + [markdown] colab_type="text" id="U_DscmhfpPs0"
#
# Here you create a pond and hit it with some rain drops.
# + colab={} colab_type="code" id="Mtk8t0IOpSrb"
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
# + [markdown] colab_type="text" id="5vzdx9rHpXsl"
# Now you specify the details of the differential equation.
# + colab={} colab_type="code" id="c6uj8LFDpaZO"
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
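The update rules above are a simple explicit integration step for the damped wave equation: the displacement is advanced by the velocity, and the velocity by the damped Laplacian. A TensorFlow-free 1-D sketch (with a made-up three-point stencil and periodic boundaries via `np.roll`) showing that the total displacement is conserved while the drop spreads:

```python
import numpy as np

eps, damping = 0.03, 0.04
u = np.zeros(50)
u[25] = 1.0                # one raindrop in the middle of a 1-D "pond"
ut = np.zeros_like(u)      # velocity field

def laplace_1d(x):
    # Three-point stencil; np.roll wraps around, giving periodic boundaries
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

for _ in range(100):
    u = u + eps * ut
    ut = ut + eps * (laplace_1d(u) - damping * ut)

print(round(float(u.sum()), 6))  # total displacement stays (essentially) constant
print(u[25] < 1.0)               # the drop has started to spread out
```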
# + [markdown] colab_type="text" id="eAjwNRjTppN-"
# ## Run the simulation
#
# This is where it gets fun -- running time forward with a simple for loop.
# + colab={} colab_type="code" id="jJLvEydzprsy"
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
# + [markdown] colab_type="text" id="8AcEDQfbpyDT"
# Look! Ripples!
| site/en/r1/tutorials/non-ml/pdes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
sys.path.append("..")
import time
from huginn.demo_console import demo_console
cl = demo_console()
cl.plot_interest()
cl.get_anomalies(k=3)
cl.plot_interest_with_anomalies()
num_links = 3
t1 = time.time()
cl.get_info(num_links = num_links)
print(str(int(time.time()-t1)) + 's to compute')
| notebooks/demo_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from datetime import date
# +
def voto(a):
idade = date.today().year - a
if 18 > idade >= 16 or idade > 65:
return f'You are {idade}, so your vote is optional!'
elif idade < 16:
t = 16 - idade  # voting becomes possible (optionally) at 16
a2 = t + date.today().year
return f'You cannot vote yet; you will only be able to vote in {a2}'
else:
return f'You are {idade}, so you are required to vote!'
nasc = int(input('What year were you born? '))
print(voto(nasc))
# -
| .ipynb_checkpoints/EX101 - Funções para Votação-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas
import matplotlib.pyplot as plt
from sklearn.svm import NuSVR
from sklearn.model_selection import train_test_split
import numpy as np
data = pandas.read_json("../json/tagged_remote_sensing_2017.json", orient="records")
known, unknown = train_test_split(data[["evi", "ndvi"]], test_size=0.2, random_state=2)
# -
svr = NuSVR(nu=0.5, gamma=0.15, C=100, kernel="rbf")
svr = svr.fit(known[["ndvi"]], known["evi"])
unknown["evi_predicted"] = svr.predict(unknown[["ndvi"]])
# +
fig = plt.figure(figsize=(6,4))
plot_2d = fig.add_subplot(111)
plot_2d.set_xlabel("NDVI")
plot_2d.set_ylabel("EVI")
plot_2d.scatter(known["ndvi"].loc[svr.support_], known["evi"].loc[svr.support_], label='Support vectors',
facecolor="none", edgecolor="g")
plot_2d.scatter(unknown["ndvi"], unknown["evi"], label="Target", marker="x")
plot_2d.scatter(unknown["ndvi"], unknown["evi_predicted"], label="Prediction")
plot_2d.legend()
fig.savefig("pdf/nu_svr.pdf",
dpi=600,
format="pdf",
facecolor="none",
edgecolor="none",
transparent=True,
bbox_inches="tight",
orientation="portrait")
# -
| jupyter/svr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Logistic Regression Consulting Project SOLUTIONS
# ## Binary Customer Churn
#
# A marketing agency has many customers that use their service to produce ads for the client/customer websites. They've noticed that they have quite a bit of churn in clients. They basically randomly assign account managers right now, but want you to create a machine learning model that will help predict which customers will churn (stop buying their service) so that they can correctly assign the customers most at risk to churn an account manager. Luckily they have some historical data, can you help them out? Create a classification algorithm that will help classify whether or not a customer churned. Then the company can test this against incoming data for future customers to predict which customers will churn and assign them an account manager.
#
# The data is saved as customer_churn.csv. Here are the fields and their definitions:
#
# Name : Name of the latest contact at Company
# Age: Customer Age
# Total_Purchase: Total Ads Purchased
# Account_Manager: Binary 0=No manager, 1= Account manager assigned
# Years: Total years as a customer
# Num_Sites: Number of websites that use the service.
# Onboard_date: Date that the name of the latest contact was onboarded
# Location: Client HQ Address
# Company: Name of Client Company
#
# Once you've created the model and evaluated it, test out the model on some new data (you can think of this almost like a hold-out set) that your client has provided, saved under new_customers.csv. The client wants to know which customers are most likely to churn given this data (they don't have the label yet).
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('logregconsult').getOrCreate()
# Load training data
data= spark.read.csv('customer_churn.csv',inferSchema=True,header=True)
data.printSchema()
data.describe().show()
data.columns
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(
inputCols=['Age',
'Total_Purchase',
'Account_Manager',
'Years',
'Num_Sites',
],
outputCol="features")
output = assembler.transform(data)
final_data = output.select('features','churn')
train_data,test_data = final_data.randomSplit([0.7,0.3])
from pyspark.ml.classification import LogisticRegression
lr = LogisticRegression(labelCol='churn')
# Fit the model to the data and call this model lrModel
lrModel = lr.fit(train_data)
trainingSummary = lrModel.summary
trainingSummary.featuresCol
trainingSummary.predictions.describe().show()
# May change soon!
from pyspark.mllib.evaluation import MulticlassMetrics
predictionAndLabels = lrModel.evaluate(test_data)
predictionAndLabels.predictions.show()
predictionAndLabels = predictionAndLabels.predictions.select('churn','prediction')
predictionAndLabels.show()
from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator
evaluator = BinaryClassificationEvaluator(rawPredictionCol='prediction', labelCol='churn')
AUC = evaluator.evaluate(predictionAndLabels)
AUC
# For multiclass
evaluator = MulticlassClassificationEvaluator(predictionCol='prediction', labelCol='churn',
metricName='accuracy')
acc = evaluator.evaluate(predictionAndLabels)
acc
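The `accuracy` metric the evaluator reports is simply the fraction of rows where the prediction matches the label. A library-free sketch of that computation on made-up (churn, prediction) pairs standing in for the evaluated DataFrame:

```python
# Hypothetical (churn label, prediction) pairs, not real model output
pairs = [(1, 1), (0, 0), (1, 0), (0, 0), (1, 1)]

# Accuracy = matches / total rows
accuracy = sum(1 for label, pred in pairs if label == pred) / len(pairs)
print(accuracy)
```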
# ## Predicting on new data
final_lrModel = lr.fit(final_data)
new_customers = spark.read.csv('new_customers.csv',inferSchema=True,header=True)
new_customers.printSchema()
new_customers.describe().show()
test_new_customers = assembler.transform(new_customers)
test_new_customers.printSchema()
final_test = test_new_customers.select('features')
results = final_lrModel.transform(final_test)
results.show()
# Those are the results; hopefully they work out for the client. There is no way we can say with 100% certainty that these predictions will be correct, but we can feel relatively confident in them given the model strength shown earlier.
#
# ## Great Job!
| Python-and-Spark-for-Big-Data-master/Spark_for_Machine_Learning/Logistic_Regression/.ipynb_checkpoints/Logistic_Regression_Consulting_Project_SOLUTIONS-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from itertools import islice
import matplotlib.pyplot as plt
import sys
import numpy as np
from tqdm import tqdm_notebook
import torch
from torchvision import models, transforms, datasets
# -
device = torch.device('cuda')
print(device.type)
print(torch.cuda.get_device_properties(device)) #/ 1024 / 1024 /1024
cpu_device = torch.device('cpu')
print(cpu_device.type)
inception_transforms = transforms.Compose([
transforms.Resize(299),
#transforms.CenterCrop(constants.INPUT_SIZE),
transforms.ToTensor(),
#transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# +
unlabeled_celeba = datasets.ImageFolder('imgs_by_label/celeba_unlabeled/', inception_transforms)
print(unlabeled_celeba)
unlabeled_celeba_loader = torch.utils.data.DataLoader(
unlabeled_celeba, batch_size=1, shuffle=True, num_workers=1)
labeled_celeba = datasets.ImageFolder('imgs_by_label/celeba_labeled/', inception_transforms)
print(labeled_celeba)
labeled_celeba_loader = torch.utils.data.DataLoader(
labeled_celeba, batch_size=1, shuffle=True, num_workers=1)
labeled_progan = datasets.ImageFolder('imgs_by_label/progan_labeled/', inception_transforms)
print(labeled_progan)
labeled_progan_loader = torch.utils.data.DataLoader(
labeled_progan, batch_size=1, shuffle=True, num_workers=1)
# -
def get_inception_features(img_iter, device=None):
inception_net = models.inception_v3(pretrained=True, transform_input=True)
layers_to_grab = [inception_net.Conv2d_1a_3x3, inception_net.Conv2d_2b_3x3,
inception_net.Conv2d_3b_1x1, inception_net.Mixed_5d, inception_net.Mixed_6e,
inception_net.Mixed_7c, inception_net.fc]
layer_features = [None for i in range(len(layers_to_grab))]
def hook_fn(self, inp, out, container, layer_index):
#print(layer_index, inp[0].shape, out.shape)
num_channels = out.shape[1]
if len(out.shape) > 2:
#Warning: this will break for batch sizes > 1
cur_features = out.squeeze().permute(1,2,0).reshape(-1, num_channels)
else:
cur_features = out
if container[layer_index] is None:
container[layer_index] = [cur_features]
else:
#container[layer_index] = torch.cat((container[layer_index], cur_features))
container[layer_index].append(cur_features)
def hook_fn_i(container, i):
return lambda self, inp, out: hook_fn(self, inp, out, container, i)
for i, layer in enumerate(layers_to_grab):
layer.register_forward_hook(hook_fn_i(layer_features, i))
inception_net.eval()
for x,y in tqdm_notebook(img_iter):
#print(x.shape, y)
#plt.imshow((x).squeeze().permute(1, 2, 0))
#plt.show()
out = inception_net(x.to(device))
del(out)
#print(out.sum())
return layer_features
# +
#unlabeled_celeba_features = get_inception_features(unlabeled_celeba_loader)
#flat_unlabeled_celeba_features = [torch.cat(lf, dim=0) for lf in unlabeled_celeba_features]
#print([(len(lf), lf[0].shape) for lf in unlabeled_celeba_features])
#print([lf.shape for lf in flat_unlabeled_celeba_features])
#torch.save(unlabeled_celeba_features, 'unlabeled_celeba_features.pt')
torch.save(flat_unlabeled_celeba_features, 'flat_unlabeled_celeba_features.pt')
# The features from these 734 reference images are 8.9 gigs on disk, yikes!
del unlabeled_celeba_features
# -
flat_unlabeled_celeba_features = torch.load('flat_unlabeled_celeba_features.pt', map_location=torch.device('cpu'))
[layer_feats.shape for layer_feats in flat_unlabeled_celeba_features]
#flat_unlabeled_celeba_features[6][0]
# +
#l = len(flat_unlabeled_celeba_features[0])
#indices = torch.LongTensor(np.random.choice(range(l), size=100, replace=False))
#small_ref_features = torch.index_select(flat_unlabeled_celeba_features[0], dim=0, index = indices)
#print(small_ref_features.shape)
small_celeba_features = []
for layer_feats in flat_unlabeled_celeba_features:
l = len(layer_feats)
indices = torch.LongTensor(np.random.choice(range(l), size=min(l, 10000), replace=False))
small_layer_feats = torch.index_select(layer_feats, dim=0, index = indices).detach()
small_celeba_features.append(small_layer_feats)
for small_layer in small_celeba_features:
print(small_layer.shape)
# -
torch.save(small_celeba_features, 'small_celeba_features.pt')
# +
# small_ref_features = torch.index_select(flat_unlabeled_celeba_features[0], dim=0, index=torch.LongTensor([0,3,5]))
# print(small_ref_features.shape)
# print(small_ref_features[2], '\n', flat_unlabeled_celeba_features[0][5])
# +
#TODO: modify the get_inception_features function to have a "don't flatten" mode for these?
#Or just feed them in one at a time so we don't care about the flattening. (So only mess with
#it if we need batch size > 1)
#celeba_features = get_inception_features(labeled_celeba_loader)
# +
#progan_features_1000_2234 = get_inception_features(islice(labeled_progan_loader,1000,2234))
# -
progan_features_1000_2234 = torch.load('progan_features_1000_2234.pt', map_location=torch.device('cpu'))
# +
#torch.save(progan_features_1000_2234, 'progan_features_1000_2234.pt')
# +
#del(progan_features_1000_2234)
# -
progan_features_0_1000 = torch.load('progan_features_0_1000.pt', map_location=torch.device('cpu'))
# +
L = len(progan_features_0_1000)
N = len(progan_features_0_1000[0])
#N = 3
small_progan_features_0_1000 = []
for i in tqdm_notebook(range(N)):
example_features = []
for l in range(L):
cur_feats = progan_features_0_1000[l][i]
D = len(cur_feats)
#print(cur_feats.shape)
indices = torch.LongTensor(np.random.choice(range(D), size=min(D, 10000), replace=False))
small_cur_feats = torch.index_select(cur_feats, dim=0, index = indices).detach()
example_features.append(small_cur_feats)
small_progan_features_0_1000.append(example_features)
# -
print(len(small_progan_features_0_1000))
print([x.shape for x in small_progan_features_0_1000[999]])
torch.save(small_progan_features_0_1000, 'small_progan_features_0_1000.pt')
# +
L = len(progan_features_1000_2234)
N = len(progan_features_1000_2234[0])
small_progan_features_1000_2234 = []
for i in tqdm_notebook(range(N)):
example_features = []
for l in range(L):
cur_feats = progan_features_1000_2234[l][i]
D = len(cur_feats)
#print(cur_feats.shape)
indices = torch.LongTensor(np.random.choice(range(D), size=min(D, 10000), replace=False))
small_cur_feats = torch.index_select(cur_feats, dim=0, index = indices).detach()
example_features.append(small_cur_feats)
small_progan_features_1000_2234.append(example_features)
# -
print(len(small_progan_features_1000_2234))
print([x.shape for x in small_progan_features_1000_2234[1232]])
torch.save(small_progan_features_1000_2234, 'small_progan_features_1000_2234.pt')
small_progan_features = small_progan_features_0_1000 + small_progan_features_1000_2234
torch.save(small_progan_features, 'small_progan_features.pt')
[(len(lf), lf[0].shape) for lf in progan_features_0_1000]
progan_0_feats = [lf[0] for lf in progan_features_0_1000]
[lf.shape for lf in progan_0_feats]
# +
#TODO: Parallelize as much as possible within memory
#TODO: Run on GPU, see if it's faster
# Features (for single image): #layers x (H*W for that layer) x (C for that layer)
# Reference set (for N comparison images): # layers x (N*H*W for that layer) x (C for that layer)
def layerwise_nn_features(features, reference_set, device, batch_size=1):
assert(len(features) == len(reference_set))
L = len(features)
#print(L)
mean_layer_closest_dists = torch.zeros(L).to(device)
for l in range(L):
#print(l)
lf = features[l].detach().to(device) #layer features
rlf = reference_set[l].detach().to(device) #reference layer features
#print(lf.shape, rlf.shape)
#layer is HxWxC
#rlf[i] is NxC
HtimesW,C = lf.shape
N,C2 = rlf.shape
assert(C == C2)
rlf = rlf.reshape(1, N, C).detach()
num_batches = HtimesW // batch_size
if HtimesW % batch_size != 0: num_batches += 1 # for the fractional batch
#Loop through batches of feature vectors; we can parallelize more later...
#(the final fractional batch is covered by the extra batch added above)
for b in range(num_batches):
x = lf[b*batch_size : (b+1) * batch_size].reshape(-1, 1, C)
cur_batch_size = x.shape[0]
#Differences from vector to all reference vectors in that layer
diffs = (x - rlf).detach()
assert(diffs.shape == (cur_batch_size, N, C))
sqr_dists = torch.sum(diffs**2, dim=2).detach()
assert(sqr_dists.shape == (cur_batch_size, N))
min_sqr_dists = torch.min(sqr_dists, dim=1)[0].detach()
assert(min_sqr_dists.shape == (cur_batch_size,))
min_dists = torch.sqrt(min_sqr_dists).detach()
assert(min_dists.shape == (cur_batch_size,))
mean_layer_closest_dists[l] += torch.sum(min_dists).detach()
del x
del diffs
del sqr_dists
del min_sqr_dists
del min_dists
mean_layer_closest_dists[l] /= (HtimesW)
del lf
del rlf
# x = lf.reshape(HtimesW, 1, C)
# cur_refs = rlf.reshape(1, N, C)
# diffs = x - cur_refs
# assert(diffs.shape == (H*W, N, C))
# sqr_dists = torch.sum(diffs**2, dim=2)
# assert(sqr_dists.shape == (H*W, N))
# min_sqr_dists = torch.min(sqr_dists, dim=1)
# assert(min_dists.shape == (H*W))
# min_dists = torch.sqrt(min_sqr_dists)
# assert(min_dists.shape == (H*W))
# mean_layer_closest_dists[l] = torch.mean(min_dists)
return mean_layer_closest_dists
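The loop in `layerwise_nn_features` computes, per layer, the mean distance from each feature vector to its nearest reference vector. Below is a NumPy-only sketch of the same computation using broadcasting — an illustration with no batching or GPU handling, not a drop-in replacement for the torch implementation above:

```python
import numpy as np

def layerwise_nn_mean_dist(features, reference_set):
    """Mean nearest-neighbor distance per layer (NumPy sketch).

    features: list of (H*W, C) arrays, one per layer
    reference_set: list of (N, C) arrays, one per layer
    """
    out = []
    for lf, rlf in zip(features, reference_set):
        # (H*W, 1, C) - (1, N, C) -> (H*W, N, C) pairwise differences
        diffs = lf[:, None, :] - rlf[None, :, :]
        sqr_dists = (diffs ** 2).sum(axis=2)        # (H*W, N)
        min_dists = np.sqrt(sqr_dists.min(axis=1))  # (H*W,)
        out.append(min_dists.mean())
    return np.array(out)

# Tiny check: one layer, two feature vectors, reference contains one of them.
feats = [np.array([[0.0, 0.0], [3.0, 4.0]])]
refs = [np.array([[0.0, 0.0], [0.0, 1.0]])]
dists = layerwise_nn_mean_dist(feats, refs)
```

For memory reasons the batched loop above is still needed on real feature maps, since the full (H*W, N, C) difference tensor can be very large.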
# +
torch.cuda.empty_cache()
#torch.zeros(4, device=device)
# +
#x1 = progan_0_feats[-1]
#x2 = small_celeba_features[-1]
#torch.min(torch.sqrt(((x1 - x2)**2).sum(dim=1)))
# -
len(small_progan_features), len(small_progan_features[0])
N, L = len(small_progan_features), len(small_progan_features[0])
print(N,L)
small_progan_distance_features = []
for progan_i_feats in tqdm_notebook(small_progan_features):
distance_features = layerwise_nn_features(progan_i_feats, small_celeba_features, device, 32)
small_progan_distance_features.append(distance_features)
#len(small_progan_distance_features), small_progan_distance_features[0]
progan_features_full = np.array(([np.array(x.cpu().detach()) for x in small_progan_distance_features]))
progan_features_full.shape, progan_features_full[0]
torch.save(progan_features_full, 'progan_features_full.pt')
progan_1000_distance_features
layerwise_nn_features(progan_0_feats, small_celeba_features, device, 1)
# +
import gc
from collections import defaultdict
counts = defaultdict(int)
for obj in gc.get_objects():
try:
if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
#print(type(obj), obj.size(), obj.device)
if obj.device == torch.device('cuda:3'):
print(obj.shape)
counts[obj.size()] += 1 # record the size before dropping the reference; del first would raise NameError
del obj
except:
pass
#for k,v in counts.items():
# print(k,v)
# +
counts = defaultdict(int)
for obj in gc.get_objects():
try:
if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
#print(type(obj), obj.size())
counts[obj.size()] += 1
except:
pass
for k,v in counts.items():
print(k,v)
# -
labeled_celeba_x = []
labeled_celeba_y = []
# TODO: Pull % rated as real for each image!
for x,y in tqdm(labeled_celeba_loader):
cur_features = layerwise_nn_features(x, unlabeled_celeba_features)
labeled_celeba_x.append(cur_features)
# Now pull the % label
labeled_celeba_y.append(pct_real_votes)
labeled_progan_x = []
labeled_progan_y = []
# TODO: Pull % rated as real for each image!
for x,y in tqdm(labeled_progan_loader):
cur_features = layerwise_nn_features(x, unlabeled_celeba_features)
labeled_progan_x.append(cur_features)
# Now pull the % label
labeled_progan_y.append(pct_real_votes)
# +
#TODO: Break the features/labels into train/val/test and train a logistic regression model;
#see how well it does out of sample!
| distances/Feature_distances.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import warnings
warnings.filterwarnings('ignore')
# +
from glob import glob
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import pandas as pd
from scipy.interpolate import interp1d
from scipy.signal import gaussian, convolve
from statistics import mean, median
from astropy import stats
from scipy.optimize import curve_fit, least_squares
import collections
import os
import utils
from dl import queryClient as qc
# -
if not os.path.exists('results'):
os.makedirs('results')
if not os.path.exists('results/plots'):
os.makedirs('results/plots')
# + code_folding=[46, 69, 81, 88, 95, 105]
def get_data(df,objname):
order = ['u','g','r','i','z']
best_periods = []
crv=[]
fltrs=[]
for f in order:
selfltr = (df['filter'] == f)
selfwhm = (df['fwhm'] <= 4.0)
sel = selfltr & selfwhm
t = df['mjd'][sel].values
y = df['mag_auto'][sel].values
dy = df['magerr_auto'][sel].values
if len(t) < 25:
continue
best_periods.append(get_ls_period(t,y,objname=objname))
crvi = np.vstack((t,y,dy)).T
crv.append(crvi[np.argsort(crvi[:,0])])
fltrs.append(f)
period = 0
for p in best_periods:
period += p/len(best_periods)
return crv, period, fltrs
def get_tmps(fltrs):
tmps=[]
typs =[]
names=[]
for fltr in fltrs:
typ = []
templets = glob('templets/*{}.dat'.format(fltr))
tmp = np.zeros((len(templets),501,2))
for i in range(len(templets)):
tmp[i] = np.concatenate((np.array([[0,0]]),
np.array(pd.read_csv(templets[i],sep=' ')),
np.array([[1,0]])))
#adjust if filepath to templets changes
if len(templets[i])==17:
typ.append('RRab')
elif len(templets[i])==15:
typ.append('RRc')
typs.append(typ)
names.append(templets)
tmps.append(tmp)
return tmps, names, typs
def double_tmps(tmps):
tmps2=[]
for f in range(len(tmps)):
tmps2.append(np.tile(tmps[f],(2,1)))
tmps2[f][:,int(len(tmps2[f][0])/2):,0] += 1
return tmps2
def plot_periodogram(period,power,best_period=None,objname='',ax=None):
fig = None
if ax is None:
fig, ax = plt.subplots(figsize=(10,7))
ax.plot(period,power,lw=0.1)
ax.set_xlabel('period (days)')
ax.set_ylabel('relative power')
ax.set_title(objname)
if best_period is not None:
ax.axvline(best_period,color='r')
ax.text(0.03,0.93,'period = {:.3f} days'.format(best_period),transform=ax.transAxes,color='r')
if fig is not None: # only save/close a figure created here; a caller-supplied ax owns its own figure
fig.savefig('results/plots/{}_periodogram.png'.format(objname))
plt.close(fig)
def get_ls_period(t,y,min_freq=1./1.,max_freq=1./0.1,objname='_'):
"""Use Lomb-Scargle periodogram to get an estimate on period"""
ls = stats.LombScargle(t, y)
frequency, power = ls.autopower(minimum_frequency=min_freq,maximum_frequency=max_freq)
period = 1./frequency # period is the inverse of frequency
best_period = period[np.argmax(power)]
plot_periodogram(period,power,best_period,objname=objname)
return best_period
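`get_ls_period` relies on the fact that the best period is the inverse of the frequency with maximum periodogram power. As a standalone sanity check of that idea, here is a NumPy-only brute-force sketch (a crude Fourier-style periodogram, not Lomb-Scargle itself) that recovers the period of a noiseless sine:

```python
import numpy as np

t = np.linspace(0, 10, 1000)
y = np.sin(2 * np.pi * t / 2.0)  # true period: 2.0 days

freqs = np.arange(0.1, 1.0, 0.001)
# Power of each candidate frequency against the signal
power = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ y) ** 2
best_period = 1.0 / freqs[np.argmax(power)]  # period is the inverse of frequency
```

Lomb-Scargle generalizes this to unevenly sampled data with heteroscedastic errors, which is why it is used for the survey light curves here.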
def get_pinit(crv,period):
pinit = ()
for ltcrv in crv:
pinit += ((0.0,max(ltcrv[:,1])-min(ltcrv[:,1]),0.0),)
pinit += (period,)
return pinit
def update_pinit(pars,period):
pinit = ()
for i in range(len(pars)):
pinit += (tuple(pars[i,:-1]),)
pinit += (period,)
return pinit
def RemoveOutliers(crv,tmps,pars,period):
n = pars[:,-1].astype(int)
crv_in = []
for i in range(len(crv)):
f = interp1d(tmps[i][n[i],:,0],tmps[i][n[i],:,1]*pars[i,1]+pars[i,2])
phase = (crv[i][:,0]/period-pars[i,0]) %1
dif = abs(crv[i][:,1]-f(phase))
crv_in.append(crv[i][dif<utils.mad(dif)*5])
return crv_in
def double_period(crv,pars,period):
crv2 = []
for i in range(len(crv)):
crv2.append(crv[i].copy())
crv2[i][:,1] -= pars[i,2]
crv2[i][:,0] = (crv2[i][:,0]/period-pars[i,0])%1
crv2[i] = np.tile(crv2[i].T,2).T
crv2[i][int(len(crv2[i])/2):,0] += 1
crv2[i] = crv2[i][crv2[i][:,0].argsort()]
return crv2
# -
def get_tmps(fltrs):
tmps=[]
typs =[]
names=[]
for fltr in fltrs:
typ = ['RRab','RRab','RRab','RRab','RRab','RRab','RRc']
tempnames = ['a1','a2','a3','b1','b2','b3','c']
tmp = np.zeros((len(tempnames),51,2))
tmpmatrix = np.loadtxt('templets/LaydenTemplates.txt',delimiter=',')
tmp[:,:,0] = np.tile(tmpmatrix[:,0],7).reshape(7,51)
tmp[:,:,1] = np.swapaxes(tmpmatrix[:,1:],0,1)
typs.append(typ)
names.append(tempnames)
tmps.append(tmp)
return tmps, names, typs
# + code_folding=[1, 6, 18]
class tmpfitter:
def __init__ (self, tmps):
self.fltr=0
self.n=0
self.tmps=tmps
def model(self, t, t0, amplitude, yoffset):
# modify the template using peak-to-peak amplitude, yoffset
# fold input times t by period, phase shift to match template
xtemp = self.tmps[self.fltr][self.n,:,0]
ytemp = self.tmps[self.fltr][self.n,:,1]*amplitude + yoffset
ph = (t - t0) %1
#print((ph[0],period,t0%1))
#print((period,t0,amplitude,yoffset))
# interpolate the modified template to the phase we want
return interp1d(xtemp,ytemp)(ph)
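The core of `tmpfitter.model` is folding observation times into [0, 1) phase and interpolating the template at those phases. A minimal standalone sketch of that folding-and-interpolation step, using a hypothetical triangular template rather than the real RR Lyrae templets:

```python
import numpy as np

def fold_and_interp(t, t0, xtemp, ytemp):
    """Fold times into [0, 1) phase and linearly interpolate a template."""
    ph = (t - t0) % 1  # same folding as tmpfitter.model
    return np.interp(ph, xtemp, ytemp)

# Hypothetical triangular template: rises to 1 at phase 0.5, back to 0 at 1.
xtemp = np.array([0.0, 0.5, 1.0])
ytemp = np.array([0.0, 1.0, 0.0])
vals = fold_and_interp(np.array([0.25, 1.75]), 0.0, xtemp, ytemp)
```

Because the template spans the full [0, 1] phase range (the padding rows added in `get_tmps` guarantee this), every folded phase falls inside the interpolation domain.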
def tmpfit(crv,tmps,pinit,w=.1,steps=21,n=1):
fitter = tmpfitter(tmps)
lsteps = int(steps/2+.5)
rsteps = steps - lsteps
pl = np.linspace(pinit[-1]-w,pinit[-1],lsteps)
pr = np.linspace(pinit[-1]+w,pinit[-1],rsteps,endpoint=False)
plist = np.zeros(pl.size+pr.size)
plist[0::2] = np.flip(pl)
plist[1::2] = np.flip(pr)
plist = plist[plist>0]
pars = np.zeros((len(tmps),4))
minsumx2 = 10**50
minp = 0
for p in plist:
sumx2=0
ppars=np.zeros((len(tmps),4))
for f in range(len(tmps)):
fitter.fltr = f
phase = crv[f][:,0]/p%n #1 for one period, 2 for two periods
minx2 = 10**50
for i in range(len(tmps[f])):
fitter.n = i
try:
tpars, cov = curve_fit(fitter.model, phase, crv[f][:,1],
bounds = ((-.5,0,-50),(.5,10,50)),
sigma=crv[f][:,2], p0=pinit[f], maxfev=500)
except RuntimeError:
#print('Error: Curve_fit failed on templet={}-{}, p={:.4}'.format(f,i,p))
continue
x2 = sum((fitter.model(phase,tpars[0],tpars[1],tpars[2])-crv[f][:,1])**2/crv[f][:,2]**2)
if x2 < minx2:
ppars[f,:-1] = tpars
ppars[f,-1] = i
minx2 = x2
sumx2 += minx2
if sumx2 > minsumx2:
break
if sumx2 < minsumx2:
minsumx2 = sumx2
minp = p
pars = ppars
npoints=0
for i in range(len(crv)):
npoints += len(crv[i])
return pars, minp, minsumx2/npoints
# -
def fit_plot(objname,file):
star=qc.query(sql="""SELECT meas.*
FROM nsc_dr2.meas
WHERE objectid='{:s}'""".format(objname),
fmt='pandas',
profile='db01')
#print(collections.Counter(star['filter']))
crv,period,fltrs = get_data(star,objname)
if len(fltrs) == 0:
return
tmps, tmpnames, typs = get_tmps(fltrs)
pinit = get_pinit(crv,period)
pars, p, x2 = tmpfit(crv,tmps,pinit,w=.1,steps=25)
crv_in = RemoveOutliers(crv,tmps,pars,p)
pinit = update_pinit(pars,p)
pars_in,p_in,x2 = tmpfit(crv_in,tmps, pinit,w=.01,steps=25)
crv2 = double_period(crv,pars_in,p_in)
tmps2= double_tmps(tmps)
n = pars[:,-1].astype(int)
colors = []
for f in fltrs:
if f == 'r' or f == 'g':
colors.append(f)
else:
colors.append('black')
#Check if each filter is consistent with RR type (RRab or RRc)
consistent = True
for i in range(len(typs)):
for j in range(i+1,len(typs)):
if typs[i][n[i]] != typs[j][n[j]]:
consistent = False
break
if not consistent:
break
if consistent:
typ = typs[0][n[0]]
else:
typ = '???'
fig, ax = plt.subplots(len(fltrs), figsize=(10,7.5), sharex=True, sharey=True)
if len(fltrs) == 1:
ax = [ax]
for i in range(len(fltrs)):
crvmean = mean(crv2[i][:,1])
ax[i].scatter(crv2[i][:,0],crv2[i][:,1]-crvmean,c=colors[i])
ax[i].plot(tmps2[i][n[i],:,0],tmps2[i][n[i],:,1]*pars_in[i,1]-crvmean,c='black')
ax[i].invert_yaxis()
ax[i].set_ylabel(fltrs[i], fontsize=18)
ax[-1].set_xlabel('Phase', fontsize=16)
ax[0].set_title("Object: {} Period: {:.3f} d Type: {}".format(objname,p_in,typ), fontsize=20)
fig.savefig('results/plots/{}.png'.format(objname))
file.write("{},{:.3f},{:.3f},\n".format(objname,x2,p_in))
for i in range(len(fltrs)):
file.write("{:.3f},{:.3f},{:.3f},{}\n".format(pars_in[i][0],pars_in[i][1]/2,pars_in[i][2],tmpnames[i][n[i]]))#[9:]))
file.write("---\n")
plt.close(fig)
from astropy.table import Table
gldorig = np.loadtxt('goldsample/golden_original.txt',delimiter=',',dtype=str)
gldrrab = np.loadtxt('goldsample/golden_RRab.txt',delimiter=',',dtype=str)
t=Table([gldrrab],names=['id'])
t['period'] = -99.99
t['type'] = ' '
t['utyp'] = ' '
t['uprob'] = -99.99
t['uflag'] = -1
t['undat'] = 0
t['uprd'] = -99.99
t['gtyp'] = ' '
t['gprob'] = -99.99
t['gflag'] = -1
t['gndat'] = 0
t['gprd'] = -99.99
t['rtyp'] = ' '
t['rprob'] = -99.99
t['rflag'] = -1
t['rndat'] = 0
t['rprd'] = -99.99
t['ityp'] = ' '
t['iprob'] = -99.99
t['iflag'] = -1
t['indat'] = 0
t['iprd'] = -99.99
t['ztyp'] = ' '
t['zprob'] = -99.99
t['zflag'] = -1
t['zndat'] = 0
t['zprd'] = -99.99
t[:5]
names = ['150536_22075','150023_1179','151047_5422','150537_4644']
file = open("results/parameters.csv",'a')
for name in names:
fit_plot(name,file)
print(name)
file = open("results/parameters.csv",'a')
fit_plot('77516_8215',file)
file.close()
reslist=qc.query(sql="""SELECT id FROM nsc_dr2.object
WHERE variable10sig=1 AND
gmag-rmag>0.1 AND gmag-rmag<0.5
AND ndet>100""",
fmt='table',
profile='db01')
from tqdm import tqdm
file = open("results/parameters.csv",'a')
for i in tqdm(range(20)):#len(reslist))):
fit_plot(reslist[i][0],file)
file.close()
# +
#res = qc.query(sql="""SELECT * from nsc_dr2.meas
# JOIN nsc_dr2.object as obj
# ON meas.objectid=obj.id
# where obj.variable10sig=1 and
# obj.gmag-obj.rmag>.1 and
# obj.gmag-obj.rmag<0.5 and
# obj.ndet>100""",
# fmt='table')
# -
a="templets/103g.dat"
a
a[9:]
| .ipynb_checkpoints/Old Templet Fitter-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Yaniii2021/Linear-Algebra-58019/blob/main/Vectors.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ZyNvfRT-0l9m"
# #Matrix Algebra
# + id="hyzSx79cyyCB"
import numpy as np
# + colab={"base_uri": "https://localhost:8080/"} id="ZmXadF-q1Tht" outputId="b5fd74b0-b6a7-46a0-a8ec-5424891dea8b"
a = np.array([1,2,3]) #This is a 1-D array of 3 elements (it can serve as a 1x3 row vector)
print(a)
# + colab={"base_uri": "https://localhost:8080/"} id="kUwpu5Eh1uFL" outputId="ed26dd17-fa7c-4233-dcfd-ee14a16cbcc0"
b = np.array([[1,2,3],
[4,5,6]])
print(b)
# + colab={"base_uri": "https://localhost:8080/"} id="r36r_EJQ2U2o" outputId="01ae0ad0-207d-4ba7-cb68-18c939eaa90f"
c = np.array([[1,2,3],
[4,5,6],
[7,8,9]])
print(c)
# + colab={"base_uri": "https://localhost:8080/"} id="N-7z22rG2vj4" outputId="39064622-dc50-4dae-aef6-532eb1cd0df0"
d = np.full((3,3), 7)
print(d)
# + colab={"base_uri": "https://localhost:8080/"} id="GhXx3i7x3DJD" outputId="8b34824b-e1d1-4ab2-cde4-6ce7e765463d"
e = np.array([[1,2,3],
[4,5,6],
[7,8,9]])
print(e)
e = np.diagonal([
[1,2,3],
[4,5,6],
[7,8,9]])
print(e)
# + colab={"base_uri": "https://localhost:8080/"} id="LGTzh-PS3mxx" outputId="1f1e8992-c007-4607-a60e-06826bc8fbc5"
f = np.eye(3)
print(f)
# + colab={"base_uri": "https://localhost:8080/"} id="LInofW5L3-e0" outputId="7627b5ee-b9a9-4926-bd04-5ef3b9c5b98f"
g = np.zeros((3,3))
print(g)
# + colab={"base_uri": "https://localhost:8080/"} id="nzWA-tz44Yn9" outputId="2c9bb762-9d52-468d-b8f2-9c0d422349b8"
h = np.empty((0,12))
print(h)
# + [markdown] id="a5pQTV2u4naX"
# #Operations
# + colab={"base_uri": "https://localhost:8080/"} id="HO10TMZD4rAF" outputId="cd869940-f643-4b2c-cc71-4782a71733a8"
print(d+d)
# + colab={"base_uri": "https://localhost:8080/"} id="y3GolSg049Rx" outputId="a3b53424-ccfb-4bc5-d203-2d58aebf9e50"
print(d-d)
| Vectors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="pmU5YUal1eTZ"
# _Lambda School Data Science_
#
# # Join and Reshape datasets
#
# Objectives
# - concatenate data with pandas
# - merge data with pandas
# - understand tidy data formatting
# - melt and pivot data with pandas
#
# Links
# - [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)
# - [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data)
# - Combine Data Sets: Standard Joins
# - Tidy Data
# - Reshaping Data
# - Python Data Science Handbook
# - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append
# - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join
# - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping
# - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables
#
# Reference
# - Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)
# - Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
# + id="5MsWLLW4Xg_i" colab_type="code" outputId="980adbf5-8367-4016-84e2-df0cfd00ed9e" colab={"base_uri": "https://localhost:8080/", "height": 228}
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# + id="gfr4_Ya0XkLI" colab_type="code" outputId="a7dfe9ff-00c0-4a25-d57d-7496b242b07f" colab={"base_uri": "https://localhost:8080/", "height": 243}
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# + id="N4YyGPNdXrT0" colab_type="code" outputId="d64c267f-b0c1-414d-ce60-cc9b9ca1b9fe" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %cd instacart_2017_05_01
# + id="b26wmLUiXtlM" colab_type="code" outputId="b36fbedc-d2d5-4166-d487-d7bedbc0e8af" colab={"base_uri": "https://localhost:8080/", "height": 121}
# !ls -lh *.csv
# + id="6d-7LkXnoCQz" colab_type="code" colab={}
import pandas as pd
# + [markdown] colab_type="text" id="kAMtvSQWPUcj"
# # Assignment
#
# ## Join Data Practice
#
# These are the top 10 most frequently ordered products. How many times was each ordered?
#
# 1. Banana
# 2. Bag of Organic Bananas
# 3. Organic Strawberries
# 4. Organic Baby Spinach
# 5. Organic Hass Avocado
# 6. Organic Avocado
# 7. Large Lemon
# 8. Strawberries
# 9. Limes
# 10. Organic Whole Milk
#
# First, write down which columns you need and which dataframes have them.
#
# Next, merge these into a single dataframe.
#
# Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.
# + id="vvE0EVHgXMFO" colab_type="code" colab={}
#First, write down which columns you need and which dataframes have them.
#I need :
# FROM PRODUCTS-
# 'product_name'
# 'product_id'
# FROM ORDERS_PRODUCTS
# 'order_id'
# 'product_id'
# aisles does not have it.
#aisles = pd.read_csv('aisles.csv')
#print(aisles.shape)
#aisles.head(10)
#departments = pd.read_csv('departments.csv')
#print(departments.shape)
#departments.head(10)
order_products__prior = pd.read_csv('order_products__prior.csv')
print(order_products__prior.shape)
order_products__prior.head()
# + id="V_rx1Yqe2PB1" colab_type="code" colab={}
# I will merge orders_..._prior to orders_..._train for accuracy sake and
#test concact on my own
order_products__train = pd.read_csv('order_products__train.csv')
print(order_products__train.shape)
order_products__train.head(10)
# + id="ihjyNrhV22MZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 219} outputId="95f5ffd1-5642-449d-9151-12e10ed835a5"
order_products = pd.concat([order_products__prior, order_products__train])
#assert lets me check that .shape[0] of the two inputs sums to the new .shape[0]
#this answered a question I had about how to verify the rows were concatenated correctly
assert (order_products__prior.shape[0] + order_products__train.shape[0]) == order_products.shape[0]
print(order_products.shape)
order_products.head()
# + id="k084JoSxuRtp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 239} outputId="c8309ef7-b645-425e-a2d0-3b27591343e4"
orders = pd.read_csv('orders.csv')
print(orders.shape)
orders.head()
# + id="cSari8HVsMbt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 219} outputId="0573ec97-d1c3-42a6-c716-c1c1d3610963"
products = pd.read_csv('products.csv')
print(products.shape)
products.head()
# + id="Rl-wu6kV6URa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="e173a465-d9b7-4b03-d5d3-3b25f5f443e1"
#I need :
# FROM 'products'-
# 'product_name'
# 'product_id'
# FROM 'order_products'
# 'order_id'
# 'product_id'
#What I'm looking for:
#Banana
#Bag of Organic Bananas
#Organic Strawberries
#Organic Baby Spinach
#Organic Hass Avocado
#Organic Avocado
#Large Lemon
#Strawberries
#Limes
#Organic Whole Milk
#Question: how can I build this condition without writing out a line per product?
#Answer: a single line of code does it: .isin(products_needed)
products_needed = [
'Banana',
'Bag of Organic Bananas',
'Organic Strawberries',
'Organic Baby Spinach',
'Organic Hass Avocado',
'Organic Avocado',
'Large Lemon',
'Strawberries',
'Limes',
'Organic Whole Milk'
]
columns_products = [
'product_name',
'product_id',
]
# I don't need a condition yet
#condition = products['product_name'] == products_needed
#subset = products.loc[condition, columns_products]
#subset.head()
merged = (products[['product_id', 'product_name']]
.merge(order_products[['order_id', 'product_id']]))
merged.head()
# + id="SEL9SufEDk0w" colab_type="code" colab={}
condition = ((merged['product_name'] == products_needed[0]) |
(merged['product_name'] == products_needed[1]) |
(merged['product_name'] == products_needed[2]) |
(merged['product_name'] == products_needed[3]) |
(merged['product_name'] == products_needed[4]) |
(merged['product_name'] == products_needed[5]) |
(merged['product_name'] == products_needed[6]) |
(merged['product_name'] == products_needed[7]) |
(merged['product_name'] == products_needed[8]) |
(merged['product_name'] == products_needed[9])
)
#I could have replaced this line with:
#condition = merged['product_name'].isin(products_needed)
# + id="cRkFBVi9FTLn" colab_type="code" colab={}
merged = merged[condition]
# + id="2YTs-rIGHFhl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 219} outputId="a650515e-d228-4f78-f36e-6d2c7e5f211c"
print(merged.shape)
merged.head()
# + id="6MLTcLtvHPbl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 208} outputId="8addf5fb-cf68-47b5-9ae1-a240720d38ee"
merged['product_name'].value_counts().head(10)
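The single-line `.isin` filter noted above behaves exactly like the chained `==`/`|` condition. A tiny self-contained demo on toy data (not the Instacart files):

```python
import pandas as pd

df = pd.DataFrame({'product_name': ['Banana', 'Limes', 'Soda', 'Banana']})
wanted = ['Banana', 'Limes']

# The chained form, one comparison per product
chained = (df['product_name'] == wanted[0]) | (df['product_name'] == wanted[1])
# The concise form: membership test against the whole list
concise = df['product_name'].isin(wanted)

counts = df.loc[concise, 'product_name'].value_counts()
```

Both produce the same boolean mask, so `value_counts` on the filtered frame gives the per-product order counts either way.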
# + [markdown] id="RsiWi4DuXPLP" colab_type="text"
# ## Reshape Data Section
#
# - Replicate the lesson code
# - Complete the code cells we skipped near the beginning of the notebook
# - Table 2 --> Tidy
# - Tidy --> Table 2
# - Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
# + id="LjS7JbpVK-Vw" colab_type="code" colab={}
# %matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan, 2],
[16, 11],
[3, 1]],
index=['<NAME>', '<NAME>', '<NAME>'],
columns=['treatmenta', 'treatmentb'])
table2 = table1.T
# + id="Ss8JqGFKLC6e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 110} outputId="05ba0595-a69a-46f1-ebc6-32dbe37b239b"
table2
# + id="hxw_dEOzLFR5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 110} outputId="1b02886a-fe2c-48b7-ceba-a59a53141dc1"
table2 = table2.reset_index()
table2
# + id="zta_WQ4zLaqs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 233} outputId="df8c640d-9dee-48d0-aa7a-de1d285bb815"
tidy2 = table2.melt(id_vars='index', value_vars=['<NAME>', '<NAME>', '<NAME>'])
tidy2
# + id="23sFkCBNMG0B" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 233} outputId="3c29ce88-1779-4b4a-afd5-6d317797a7dd"
tidy2 = tidy2.rename(columns={
'index': 'trt',
'variable': 'name',
'value': 'result'
})
tidy2
# + id="VBGD3CPbMUHL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 233} outputId="a71286a8-6af4-415d-906b-474fd7fd9542"
tidy2.trt = tidy2.trt.str.replace('treatment', '')
tidy2
# + id="mprPpAm0MlrG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 141} outputId="95ba9082-f2ff-428c-8305-f815cf025004"
wide2 = tidy2.pivot_table(index='trt', columns='name', values='result')
wide2
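The Table 2 → tidy → wide round trip above can be checked on a toy frame. This is a minimal sketch with made-up names, not the lesson's data:

```python
import pandas as pd

# Wide table: one row per person, one column per treatment
table = pd.DataFrame({'treatmenta': [1, 3], 'treatmentb': [2, 4]},
                     index=['alice', 'bob']).reset_index()
# Melt to tidy: one row per (person, treatment) observation
tidy = table.melt(id_vars='index', var_name='trt', value_name='result')
# Pivot back to wide; the values should round-trip unchanged
wide = tidy.pivot_table(index='index', columns='trt', values='result')
```

`pivot_table` aggregates duplicates (mean by default), which is why it also works below for the `flights` passengers-by-year-and-month table.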
# + [markdown] id="nbsX68UnK2vc" colab_type="text"
# - Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
#
# + id="fgxulJQq0uLw" colab_type="code" colab={}
flights = sns.load_dataset('flights')
# + id="1qKc88WI0up-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 219} outputId="cfe5ab7e-9a4a-4cc0-d200-49999ad81a0b"
print(flights.shape)
flights.head()
# + id="jaTX4HO4RNnE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 467} outputId="a283f20b-1463-4674-f9fd-edf79c78a5d1"
#I need to ask more information on aggfunc and how inputs can affect the output
pivot_flights = flights.pivot_table(index='year',
columns='month',
values='passengers')
pivot_flights
# + id="WA5VPqTHTHG-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="2391d2ed-23b8-44f2-b154-513c198440f5"
pivot_flights.plot();
# + [markdown] id="mnOuqL9K0dqh" colab_type="text"
# ## Join Data Stretch Challenge
#
# The [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of "**Popular products** purchased earliest in the day (green) and latest in the day (red)."
#
# The post says,
#
# > "We can also see the time of day that users purchase specific products.
#
# > Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening.
#
# > **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**"
#
# Your challenge is to reproduce the list of the top 25 latest ordered popular products.
#
# We'll define "popular products" as products with more than 2,900 orders.
#
#
# + id="B-QNMrVkYap4" colab_type="code" colab={}
##### YOUR CODE HERE #####
# + [markdown] id="Ij8S60q0YXxo" colab_type="text"
# ## Reshape Data Stretch Challenge
#
# _Try whatever sounds most interesting to you!_
#
# - Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product"
# - Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases"
# - Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)
# - Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
# + id="_d6IA2R0YXFY" colab_type="code" colab={}
##### YOUR CODE HERE #####
| module1-join-and-reshape-data/Jean_Fraga_LS_DS8_121_Join_and_Reshape_Data_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: platzi
# language: python
# name: platzi
# ---
# # Proxies
# ## Hiding your IP
# Many web servers, on receiving multiple requests from the same IP in a short time, block that IP to avoid overload and service problems. This can be a problem for scrapers, since they generate exactly this behavior.<br>
# To avoid being detected we would have to change our public IP address before every request, which would be extremely slow and in many cases impossible, or we can use a **proxy**. A proxy is an intermediary between whoever makes the request (our program) and whoever receives it (the server) that lets us mask the IP the request came from. Using a proxy, the web server will see the proxy's IP rather than ours. While we cannot choose which IP address the request goes out with, we can choose which proxy to send it through.<br>
# The site www.cualesmiip.com lets you see your network's outgoing IP. If you are on a LAN, your local IP is probably something like 192.168.x.x, but the IP you reach the outside world with, your router's IP assigned by your ISP, will be different.<br>
# Useful links:
# - https://free-proxy-list.net/
# - [PySocks](https://pypi.org/project/PySocks/)
import requests
import re
def get_my_ip(url='http://www.cualesmiip.com/', proxies=None):
try:
r = requests.get(url=url, proxies=proxies)
except Exception as e:
print('Error making the request:', e)
return None
if r.status_code != 200:
print("Status Code:", r.status_code)
return None
regex = re.compile(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})')
my_ip = regex.findall(r.text)
return my_ip[0] if my_ip else None
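A hedged usage sketch: `requests` accepts a `proxies` dict mapping scheme to proxy URL, which is what `get_my_ip` forwards. The address below is a documentation placeholder, not a working proxy — substitute one from free-proxy-list.net. The IP-extracting regex can be exercised locally without any network call:

```python
import re

# Placeholder proxy config -- 203.0.113.x is a reserved documentation range.
proxies = {
    'http': 'http://203.0.113.10:8080',
    'https': 'http://203.0.113.10:8080',
}
# get_my_ip would then be called as: get_my_ip(proxies=proxies)

# The same regex used in get_my_ip, tested on a canned page snippet:
regex = re.compile(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})')
snippet = 'Tu IP es 198.51.100.7 segun el servidor'
ip = regex.findall(snippet)
```

If the proxy works, the IP returned through it should differ from the one returned by a direct call with `proxies=None`.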
| NoteBooks/Curso de WebScraping/Unificado/web-scraping-master/Clases_old/Módulo 5_ Tesseract y Proxies/M5C1 - Proxies.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 12._Reinforcement_Learning-DLBnGMow
# language: python
# name: 12._reinforcement_learning-dlbngmow
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf
from scipy.stats import norm
import numpy as np
from scipy.stats import lognorm
import numpy.random as ra
from pandas.plotting import autocorrelation_plot # pandas.tools.plotting was removed; use pandas.plotting
from statsmodels.tsa.arima_model import ARIMA
import sys
np.set_printoptions(threshold=sys.maxsize) # np.nan is no longer a valid threshold in recent NumPy
# -
gas_prices = pd.read_csv('/Users/b1017579/Documents/PhD/Projects/10. ELECSIM/data/raw/fuel/fuel_wholesale_price/natural_gas_historical_price/EIA-STEO_NGSPUUK_M (1).csv')
gas_prices.head()
gas_prices['Date'] = pd.to_datetime(gas_prices['Date'])
gas_prices.head()
plt.plot(gas_prices.Date, gas_prices.Value)
plt.show()
gas_prices.hist()
# Standard deviation for each year
gas_prices_group = gas_prices.groupby(gas_prices.Date.dt.year)
gas_prices_group.std()
plot_acf(gas_prices['Value'])
gas_prices['diff_1'] = gas_prices.Value.diff().diff(periods=12)
gas_prices
plt.plot(gas_prices['diff_1'])
gas_prices = gas_prices.dropna()
plot_acf(gas_prices['diff_1'])
gas_prices.diff_1.plot(kind='hist', density=True) # 'normed' was removed from matplotlib in favor of 'density'
x_range = np.arange(-4, 4, 0.001) # avoid shadowing the builtin range()
plt.plot(x_range, norm.pdf(x_range,0,1))
count, division = np.histogram(gas_prices.diff_1, bins=20)
count, division
count/sum(count)
prob = count/sum(count)
cum_prob = np.cumsum(prob)
cum_prob
fig, ax = plt.subplots()
ax.bar(division[:-1], count, width=np.diff(division), ec="k", align="edge")
fig, ax = plt.subplots()
ax.bar(division[:-1], cum_prob, width=np.diff(division), ec="k", align="edge")
N = 10000
R = ra.uniform(0, 1, N)
count_array = division
cum_prob_array = cum_prob
count_array
gen_points = [count_array[np.argwhere(cum_prob_array == min(cum_prob_array[(cum_prob_array - r) > 0]))][0][0] for r in R]
generated_points = pd.Series(gen_points)
[[x,gen_points.count(x)] for x in set(gen_points)]
generated_points.hist()
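The list comprehension above implements inverse-transform sampling from the empirical CDF: draw a uniform r and pick the first bin whose cumulative probability exceeds it. `np.searchsorted` performs the same lookup fully vectorized. A sketch of the same idea (not a drop-in replacement for the exact indexing above):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)

count, edges = np.histogram(data, bins=20)
cum_prob = np.cumsum(count) / count.sum()

r = rng.uniform(0, 1, 10000)
# Index of the first bin whose cumulative probability exceeds each r
idx = np.searchsorted(cum_prob, r, side='right')
samples = edges[idx]  # left edge of the selected bin, as in the loop above
```

Because `cum_prob[-1]` is exactly 1.0 and r is drawn from [0, 1), every index stays within the bin array, so no clipping is needed.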
gas_prices['diff_1'].std()
# # ARIMA Model
autocorrelation_plot(gas_prices['Value'])
model = ARIMA(gas_prices['Value'], order=(12,1,0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
# +
x_axis = np.arange(-4, 4, 0.001)
residuals = pd.DataFrame(model_fit.resid)
residuals.plot()
plt.show()
residuals.plot(kind='kde')
plt.plot(x_axis, norm.pdf(x_axis,-0.010028,0.634514))
plt.show()
print(residuals.describe())
residuals
# +
# %matplotlib inline
import warnings
import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels as sm
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = (16.0, 12.0)
matplotlib.style.use('ggplot')
# Create models from data
def best_fit_distribution(data, bins=200, ax=None):
"""Model data by finding best fit distribution to data"""
# Get histogram of original data
y, x = np.histogram(data, bins=bins, density=True)
x = (x + np.roll(x, -1))[:-1] / 2.0
# Distributions to check
DISTRIBUTIONS = [
st.alpha,st.anglit,st.arcsine,st.beta,st.betaprime,st.bradford,st.burr,st.cauchy,st.chi,st.chi2,st.cosine,
st.dgamma,st.dweibull,st.erlang,st.expon,st.exponnorm,st.exponweib,st.exponpow,st.f,st.fatiguelife,st.fisk,
st.foldcauchy,st.foldnorm,st.genlogistic,st.genpareto,st.gennorm,st.genexpon,  # frechet_r/frechet_l were removed in SciPy >= 1.6 (use weibull_min/weibull_max)
st.genextreme,st.gausshyper,st.gamma,st.gengamma,st.genhalflogistic,st.gilbrat,st.gompertz,st.gumbel_r,
st.gumbel_l,st.halfcauchy,st.halflogistic,st.halfnorm,st.halfgennorm,st.hypsecant,st.invgamma,st.invgauss,
st.invweibull,st.johnsonsb,st.johnsonsu,st.ksone,st.kstwobign,st.laplace,st.levy,st.levy_l,st.levy_stable,
st.logistic,st.loggamma,st.loglaplace,st.lognorm,st.lomax,st.maxwell,st.mielke,st.nakagami,st.ncx2,st.ncf,
st.nct,st.norm,st.pareto,st.pearson3,st.powerlaw,st.powerlognorm,st.powernorm,st.rdist,st.reciprocal,
st.rayleigh,st.rice,st.recipinvgauss,st.semicircular,st.t,st.triang,st.truncexpon,st.truncnorm,st.tukeylambda,
st.uniform,st.vonmises,st.vonmises_line,st.wald,st.weibull_min,st.weibull_max,st.wrapcauchy
]
# Best holders
best_distribution = st.norm
best_params = (0.0, 1.0)
best_sse = np.inf
# Estimate distribution parameters from data
for distribution in DISTRIBUTIONS:
# Try to fit the distribution
try:
# Ignore warnings from data that can't be fit
with warnings.catch_warnings():
warnings.filterwarnings('ignore')
# fit dist to data
params = distribution.fit(data)
# Separate parts of parameters
arg = params[:-2]
loc = params[-2]
scale = params[-1]
# Calculate fitted PDF and error with fit in distribution
pdf = distribution.pdf(x, loc=loc, scale=scale, *arg)
sse = np.sum(np.power(y - pdf, 2.0))
# if axis pass in add to plot
try:
if ax:
pd.Series(pdf, x).plot(ax=ax)
except Exception:
pass
# identify if this distribution is better
if best_sse > sse > 0:
best_distribution = distribution
best_params = params
best_sse = sse
except Exception:
pass
return (best_distribution.name, best_params)
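# The SSE-based model selection above can be exercised on synthetic data with a
# trimmed candidate list; a minimal, self-contained sketch (the two-distribution
# shortlist is an illustration, not part of the original code):

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(42)
data = rng.normal(loc=2.0, scale=0.5, size=2000)

# Histogram of the data, evaluated at bin centers
y, edges = np.histogram(data, bins=50, density=True)
x = (edges[:-1] + edges[1:]) / 2

best_name, best_sse = None, np.inf
for dist in (st.norm, st.expon):  # shortlist instead of the full list above
    params = dist.fit(data)
    pdf = dist.pdf(x, *params[:-2], loc=params[-2], scale=params[-1])
    sse = np.sum((y - pdf) ** 2)
    if sse < best_sse:
        best_name, best_sse = dist.name, sse

# Normal data should be matched best by the normal distribution
assert best_name == "norm"
```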
def make_pdf(dist, params, size=10000):
"""Generate distributions's Probability Distribution Function """
# Separate parts of parameters
arg = params[:-2]
loc = params[-2]
scale = params[-1]
# Get sane start and end points of distribution
start = dist.ppf(0.01, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.01, loc=loc, scale=scale)
end = dist.ppf(0.99, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.99, loc=loc, scale=scale)
# Build PDF and turn into pandas Series
x = np.linspace(start, end, size)
y = dist.pdf(x, loc=loc, scale=scale, *arg)
pdf = pd.Series(y, x)
return pdf
# Use the ARIMA residuals as the data to fit
data = residuals
# Plot for comparison
plt.figure(figsize=(12,8))
ax = data.plot(kind='hist', bins=50, density=True, alpha=0.5)
# Save plot limits
dataYLim = ax.get_ylim()
# Find best fit distribution
best_fit_name, best_fit_params = best_fit_distribution(data, 200, ax)
best_dist = getattr(st, best_fit_name)
# Update plots
ax.set_ylim(dataYLim)
ax.set_title('Gas price ARIMA residuals\nAll fitted distributions')
ax.set_xlabel('Residual')
ax.set_ylabel('Density')
# Make PDF with best params
pdf = make_pdf(best_dist, best_fit_params)
# Display
plt.figure(figsize=(12,8))
ax = pdf.plot(lw=2, label='PDF', legend=True)
data.plot(kind='hist', bins=50, density=True, alpha=0.5, label='Data', legend=True, ax=ax)
param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
param_str = ', '.join(['{}={:0.2f}'.format(k,v) for k,v in zip(param_names, best_fit_params)])
dist_str = '{}({})'.format(best_fit_name, param_str)
ax.set_title('Gas price ARIMA residuals with best fit distribution\n' + dist_str)
ax.set_xlabel('Residual')
ax.set_ylabel('Density')
# -
# # Coal Price Analysis
coal_price = pd.read_csv('/Users/b1017579/Documents/PhD/Projects/10. ELECSIM/data/raw/fuel/fuel_wholesale_price/coal_historical_price/Coal Futures Historical Data.csv')
coal_price = coal_price[['Date', 'Price']]
coal_price.head()
plt.plot(coal_price.Date, coal_price.Price)
plt.show()
autocorrelation_plot(coal_price.Price)
model = ARIMA(coal_price.Price, order=(12,1,0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
# +
x_axis = np.arange(-15, 15, 0.001)
residuals = pd.DataFrame(model_fit.resid)
residuals.plot()
plt.show()
residuals.plot(kind='kde')
plt.plot(x_axis, norm.pdf(x_axis,-0.042884,3.400331))
plt.show()
print(residuals.describe())
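# The KDE-vs-normal overlays above are visual checks; a formal normality test
# quantifies the same question. A minimal sketch on synthetic residuals
# (`normaltest` on `model_fit.resid` would be the real-data equivalent):

```python
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(1)
# Synthetic residuals standing in for model_fit.resid
resid = rng.normal(loc=-0.04, scale=3.4, size=1000)

stat, p_value = normaltest(resid)
# A large p-value gives no evidence against normality of the residuals
print(f"statistic={stat:.3f}, p-value={p_value:.3f}")
```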
| elecsim/visualisation/fuel_costs/gas_price_distribution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.10 64-bit (''base'': conda)'
# name: python3
# ---
# # Earthquake Detection Workflow
#
# ## Outline
#
# Here we show an example using the current modules in QuakeFlow:
#
# 1. Download data using ObsPy:
#
# [FDSN web service client for ObsPy](https://docs.obspy.org/packages/obspy.clients.fdsn.html#module-obspy.clients.fdsn)
#
# [Mass Downloader for FDSN Compliant Web Services](https://docs.obspy.org/packages/autogen/obspy.clients.fdsn.mass_downloader.html#module-obspy.clients.fdsn.mass_downloader)
#
# 2. PhaseNet for picking P/S phases
#
# Find more details in [PhaseNet github page](https://wayneweiqiang.github.io/PhaseNet/)
#
# 3. GaMMA for associating picks and estimating approximate location and magnitude
#
# Find more details in [GaMMA github page](https://wayneweiqiang.github.io/GMMA/)
#
# 4. Earthquake location, magnitude estimation, etc. (to be continued)
#
# ## 1. Install [miniconda](https://docs.conda.io/en/latest/miniconda.html) and download packages
# + [markdown] tags=[]
# <!-- # %%capture -->
# **First option: install to the base environment**
# ```bash
# git clone https://github.com/wayneweiqiang/PhaseNet.git
# git clone https://github.com/wayneweiqiang/GMMA.git
# conda env update -f=env.yml -n base
# ```
#
# **Second option: install to the quakeflow environment; you then need to select the quakeflow kernel in Jupyter notebook**
# ```bash
# conda env create -f=env.yml -n quakeflow
# python -m ipykernel install --user --name=quakeflow
# ```
# -
import kfp
import kfp.dsl as dsl
import kfp.components as comp
from kfp.components import InputPath, OutputPath
import warnings
warnings.filterwarnings("ignore")
# ## 2. Set configurations
# +
import os
import matplotlib
# matplotlib.use("agg")
import matplotlib.pyplot as plt
# region_name = "Ridgecrest_demo"
# region_name = "Ridgecrest_oneweek"
# region_name = "SaltonSea"
# region_name = "Ridgecrest"
# region_name = "SanSimeon"
# region_name = "Italy"
# region_name = "PNSN"
region_name = "Hawaii"
# region_name = "PuertoRico"
# region_name = "SmithValley"
# region_name = "Antilles"
dir_name = region_name
if not os.path.exists(dir_name):
os.mkdir(dir_name)
root_dir = lambda x: os.path.join(dir_name, x)
run_local = False
# -
def set_config(
index_json: OutputPath("json"),
config_json: OutputPath("json"),
datetime_json: OutputPath("json"),
num_parallel: int = 1,
) -> list:
import obspy
import datetime
import json
pi = 3.1415926
degree2km = pi * 6371 / 180  # ~111.19 km per degree of latitude
# region_name = "Ridgecrest_demo"
# center = (-117.504, 35.705)
# horizontal_degree = 1.0
# vertical_degree = 1.0
# starttime = obspy.UTCDateTime("2019-07-04T17")
# endtime = obspy.UTCDateTime("2019-07-04T19")
# client = "SCEDC"
# network_list = ["CI"]
# channel_list = "HH*,BH*,EH*,HN*"
# region_name = "Ridgecrest_oneweek"
# center = (-117.504, 35.705)
# horizontal_degree = 1.0
# vertical_degree = 1.0
# starttime = obspy.UTCDateTime("2019-07-04T00")
# endtime = obspy.UTCDateTime("2019-07-10T00")
# client = "SCEDC"
# network_list = ["CI"]
# channel_list = "HH*,BH*,EH*,HN*"
region_name = "Hawaii"
center = (-155.32, 19.39)
horizontal_degree = 2.0
vertical_degree = 2.0
starttime = obspy.UTCDateTime("2021-07-01T00")
endtime = obspy.UTCDateTime("2021-11-01T00")
client = "IRIS"
network_list = ["HV", "PT"]
channel_list = "HH*,BH*,EH*,HN*"
# region_name = "PuertoRico"
# center = (-66.5, 18)
# horizontal_degree = 3.0
# vertical_degree = 2.0
# # starttime = obspy.UTCDateTime("2020-01-01T00")
# # endtime = obspy.UTCDateTime("2020-01-05T05")
# starttime = obspy.UTCDateTime("2018-01-01T00")
# endtime = obspy.UTCDateTime("2018-01-01T06")
# # endtime = obspy.UTCDateTime("2018-01-06T00")
# # endtime = obspy.UTCDateTime("2021-08-01T00")
# client = "IRIS"
# network_list = ["*"]
# channel_list = "HH*,BH*,HN*"
# region_name = "SaltonSea"
# center = (-115.53, 32.98)
# horizontal_degree = 1.0
# vertical_degree = 1.0
# starttime = obspy.UTCDateTime("2020-10-01T00")
# endtime = obspy.UTCDateTime("2020-10-01T02")
# client = "SCEDC"
# network_list = ["CI"]
# channel_list = "HH*,BH*,EH*,HN*"
# region_name = "2003SanSimeon"
# center = (-121.101, 35.701)
# horizontal_degree = 1.0
# vertical_degree = 1.0
# starttime = obspy.UTCDateTime("2003-12-22T00")
# endtime = obspy.UTCDateTime("2003-12-24T00")
# client = "NCEDC"
# network_list = ["*"]
# channel_list = "HH*,BH*,EH*,HN*"
# region_name = "Italy"
# center = (13.188, 42.723)
# horizontal_degree = 1.0
# vertical_degree = 1.0
# starttime = obspy.UTCDateTime("2016-08-24T00")
# endtime = obspy.UTCDateTime("2016-08-26T00")
# client = "INGV"
# network_list = ["*"]
# channel_list = "HH*,BH*,EH*,HN*"
# region_name = "SmithValley"
# center = (-119.5, 38.51)
# horizontal_degree = 1.0
# vertical_degree = 1.0
# starttime = obspy.UTCDateTime("2021-07-08T00:00")
# endtime = obspy.UTCDateTime("2021-07-16T00:00")
# client = "NCEDC"
# network_list = ["*"]
# channel_list = "HH*,BH*,EH*,HN*"
# region_name = "Antilles"
# center = (-61.14867, 14.79683)
# horizontal_degree = 0.2
# vertical_degree = 0.2
# starttime = obspy.UTCDateTime("2021-04-10T00")
# endtime = obspy.UTCDateTime("2021-04-15T00")
# client = "RESIF"
# network_list = ["*"]
# channel_list = "HH*,BH*,EH*,HN*"
####### save config ########
config = {}
config["region"] = region_name
config["center"] = center
config["xlim_degree"] = [center[0] - horizontal_degree / 2, center[0] + horizontal_degree / 2]
config["ylim_degree"] = [center[1] - vertical_degree / 2, center[1] + vertical_degree / 2]
config["degree2km"] = degree2km
config["starttime"] = starttime.datetime.isoformat()
config["endtime"] = endtime.datetime.isoformat()
config["networks"] = network_list
config["channels"] = channel_list
config["client"] = client
one_day = datetime.timedelta(days=1)
one_hour = datetime.timedelta(hours=1)
starttimes = []
tmp_start = starttime
while tmp_start < endtime:
starttimes.append(tmp_start.datetime.isoformat())
tmp_start += one_hour
with open(datetime_json, "w") as fp:
json.dump({"starttimes": starttimes, "interval": one_hour.total_seconds()}, fp)
if num_parallel == 0:
num_parallel = min(24, len(starttimes))
# num_parallel = min(60, len(starttimes)//12)
# num_parallel = (len(starttimes)-1)//(24) + 1
# num_parallel = (len(starttimes)-1)//(24*7) + 1
# num_parallel = (len(starttimes)-1)//(24*10) + 1
# num_parallel = (len(starttimes)-1)//(24*14) + 1
# num_parallel = min(60, (len(starttimes)-1)//(24) + 1)
print(f"num_parallel = {num_parallel}")
config["num_parallel"] = num_parallel
idx = [[] for i in range(num_parallel)]
for i in range(len(starttimes)):
idx[i % num_parallel].append(i)  # round-robin assignment of time windows
with open(config_json, 'w') as fp:
json.dump(config, fp)
with open(index_json, 'w') as fp:
json.dump(idx, fp)
return list(range(num_parallel))
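# The hourly window splitting inside set_config can be isolated as a small
# helper; a sketch using plain datetime (no obspy dependency, helper name
# hypothetical):

```python
import datetime

def split_into_windows(start, end, interval=datetime.timedelta(hours=1)):
    """Yield ISO-format start times covering [start, end) at the given interval."""
    t = start
    while t < end:
        yield t.isoformat()
        t += interval

starttimes = list(split_into_windows(
    datetime.datetime(2021, 7, 1), datetime.datetime(2021, 7, 2)))
print(len(starttimes), starttimes[0])
```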
if run_local:
idx = set_config(root_dir("index.json"), root_dir("config.json"), root_dir("datetimes.json"), num_parallel=1)
config_op = comp.func_to_container_op(
set_config,
# base_image='zhuwq0/quakeflow-env:latest',
base_image='python:3.8',
packages_to_install=[
"obspy",
],
)
# ## 3. Download events in the routine catalog
#
# This catalog is not used by QuakeFlow; it is only used for comparison with the detection results.
def download_events(config_json: InputPath("json"), event_csv: OutputPath(str)):
from obspy.clients.fdsn import Client
from collections import defaultdict
import pandas as pd
import json
import matplotlib
# matplotlib.use("agg")
import matplotlib.pyplot as plt
with open(config_json, "r") as fp:
config = json.load(fp)
####### IRIS catalog ########
try:
events = Client(config["client"]).get_events(
starttime=config["starttime"],
endtime=config["endtime"],
minlongitude=config["xlim_degree"][0],
maxlongitude=config["xlim_degree"][1],
minlatitude=config["ylim_degree"][0],
maxlatitude=config["ylim_degree"][1],
)
except Exception:  # fall back to the IRIS client if the configured client fails
events = Client("iris").get_events(
starttime=config["starttime"],
endtime=config["endtime"],
minlongitude=config["xlim_degree"][0],
maxlongitude=config["xlim_degree"][1],
minlatitude=config["ylim_degree"][0],
maxlatitude=config["ylim_degree"][1],
)
print(f"Number of events: {len(events)}")
####### Save catalog ########
catalog = defaultdict(list)
for event in events:
if len(event.magnitudes) > 0:
catalog["time"].append(event.origins[0].time.datetime)
catalog["magnitude"].append(event.magnitudes[0].mag)
catalog["longitude"].append(event.origins[0].longitude)
catalog["latitude"].append(event.origins[0].latitude)
catalog["depth(m)"].append(event.origins[0].depth)
catalog = pd.DataFrame.from_dict(catalog).sort_values(["time"])
catalog.to_csv(
event_csv,
sep="\t",
index=False,
float_format="%.3f",
date_format='%Y-%m-%dT%H:%M:%S.%f',
columns=["time", "magnitude", "longitude", "latitude", "depth(m)"],
)
####### Plot catalog ########
plt.figure()
plt.plot(catalog["longitude"], catalog["latitude"], '.', markersize=1)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.axis("scaled")
plt.xlim(config["xlim_degree"])
plt.ylim(config["ylim_degree"])
# plt.savefig(os.path.join(data_path, "events_loc.png"))
plt.show()
plt.figure()
plt.plot_date(catalog["time"], catalog["magnitude"], '.', markersize=1)
plt.gcf().autofmt_xdate()
plt.ylabel("Magnitude")
plt.title(f"Number of events: {len(events)}")
# plt.savefig(os.path.join(data_path, "events_mag_time.png"))
plt.show()
if run_local:
download_events(root_dir("config.json"), root_dir("events.csv"))
download_events_op = comp.func_to_container_op(
download_events,
#base_image='zhuwq0/quakeflow-env:latest',
base_image='python:3.8',
packages_to_install=[
"obspy",
"pandas",
"matplotlib",
],
)
# ## 4. Download stations
def download_stations(config_json: InputPath("json"), station_csv: OutputPath(str), station_pkl: OutputPath("pickle")):
import pickle
from obspy.clients.fdsn import Client
from collections import defaultdict
import pandas as pd
import json
import matplotlib
# matplotlib.use("agg")
import matplotlib.pyplot as plt
with open(config_json, "r") as fp:
config = json.load(fp)
print("Network:", ",".join(config["networks"]))
####### Download stations ########
stations = Client(config["client"]).get_stations(
network=",".join(config["networks"]),
station="*",
starttime=config["starttime"],
endtime=config["endtime"],
minlongitude=config["xlim_degree"][0],
maxlongitude=config["xlim_degree"][1],
minlatitude=config["ylim_degree"][0],
maxlatitude=config["ylim_degree"][1],
channel=config["channels"],
level="response",
)
print("Number of stations: {}".format(sum([len(x) for x in stations])))
####### Save stations ########
station_locs = defaultdict(dict)
for network in stations:
for station in network:
for chn in station:
sid = f"{network.code}.{station.code}.{chn.location_code}.{chn.code[:-1]}"
if sid in station_locs:
station_locs[sid]["component"] += f",{chn.code[-1]}"
station_locs[sid]["response"] += f",{chn.response.instrument_sensitivity.value:.2f}"
else:
component = f"{chn.code[-1]}"
response = f"{chn.response.instrument_sensitivity.value:.2f}"
dtype = chn.response.instrument_sensitivity.input_units.lower()
tmp_dict = {}
tmp_dict["longitude"], tmp_dict["latitude"], tmp_dict["elevation(m)"] = (
chn.longitude,
chn.latitude,
chn.elevation,
)
tmp_dict["component"], tmp_dict["response"], tmp_dict["unit"] = component, response, dtype
station_locs[sid] = tmp_dict
station_locs = pd.DataFrame.from_dict(station_locs, orient='index')
station_locs.to_csv(
station_csv,
sep="\t",
float_format="%.3f",
index_label="station",
columns=["longitude", "latitude", "elevation(m)", "unit", "component", "response"],
)
with open(station_pkl, "wb") as fp:
pickle.dump(stations, fp)
######## Plot stations ########
plt.figure()
plt.plot(station_locs["longitude"], station_locs["latitude"], "^", label="Stations")
plt.xlabel("X (km)")
plt.ylabel("Y (km)")
plt.axis("scaled")
plt.xlim(config["xlim_degree"])
plt.ylim(config["ylim_degree"])
plt.legend()
plt.title(f"Number of stations: {len(station_locs)}")
# plt.savefig(os.path.join(data_path, "stations_loc.png"))
plt.show()
if run_local:
download_stations(root_dir("config.json"), root_dir("stations.csv"), root_dir("stations.pkl"))
download_stations_op = comp.func_to_container_op(
download_stations,
# base_image='zhuwq0/quakeflow-env:latest',
base_image='python:3.8',
packages_to_install=[
"obspy",
"pandas",
"matplotlib",
],
)
# ## 5. Download waveform data
def download_waveform(
i: int,
index_json: InputPath("json"),
config_json: InputPath("json"),
datetime_json: InputPath("json"),
station_pkl: InputPath("pickle"),
fname_csv: OutputPath(str),
data_path: str,
bucket_name: str = "waveforms",
s3_url: str = "minio-service:9000",
secure: bool = True,
) -> str:
import pickle, os
import obspy
from obspy.clients.fdsn import Client
import time
import json
import random
import threading
lock = threading.Lock()
upload_minio = False
try:
from minio import Minio
minioClient = Minio(s3_url, access_key='minio', secret_key='minio123', secure=secure)
if not minioClient.bucket_exists(bucket_name):
minioClient.make_bucket(bucket_name)
upload_minio = True
except Exception as err:
# print(f"ERROR: can not access minio service! \n{err}")
pass
with open(index_json, "r") as fp:
index = json.load(fp)
idx = index[i]
with open(config_json, "r") as fp:
config = json.load(fp)
with open(datetime_json, "r") as fp:
tmp = json.load(fp)
starttimes = tmp["starttimes"]
interval = tmp["interval"]
with open(station_pkl, "rb") as fp:
stations = pickle.load(fp)
waveform_dir = os.path.join(data_path, config["region"], "waveforms")
if not os.path.exists(waveform_dir):
os.makedirs(waveform_dir)
####### Download data ########
client = Client(config["client"])
fname_list = ["fname"]
def download(i):
# for i in idx:
starttime = obspy.UTCDateTime(starttimes[i])
endtime = starttime + interval
fname = "{}.mseed".format(starttime.datetime.strftime("%Y-%m-%dT%H:%M:%S"))
if not upload_minio:
if os.path.exists(os.path.join(waveform_dir, fname)):
print(f"{fname} exists")
fname_list.append(fname)
return
else:
try:
minioClient.fget_object(bucket_name, os.path.join(config['region'], fname), os.path.join(waveform_dir, fname))
print(f"{bucket_name}/{os.path.join(config['region'], fname)} download to {os.path.join(waveform_dir, fname)}")
fname_list.append(fname)
return
except Exception as err:
print(err)
max_retry = 10
stream = obspy.Stream()
print(f"{fname} download starts")
num_sta = 0
for network in stations:
for station in network:
print(f"********{network.code}.{station.code}********")
retry = 0
while retry < max_retry:
try:
tmp = client.get_waveforms(
network.code, station.code, "*", config["channels"], starttime, endtime
)
# for trace in tmp:
# if trace.stats.sampling_rate != 100:
# print(trace)
# trace = trace.interpolate(100, method="linear")
# trace = trace.detrend("spline", order=2, dspline=5*trace.stats.sampling_rate)
# stream.append(trace)
stream += tmp
num_sta += len(tmp)
break
except Exception as err:
print("Error {}.{}: {}".format(network.code, station.code, err))
message = "No data available for request."
if str(err)[: len(message)] == message:
break
retry += 1
time.sleep(5)
continue
if retry == max_retry:
print(f"{fname}: MAX {max_retry} retries reached : {network.code}.{station.code}")
# stream = stream.merge(fill_value=0)
# stream = stream.trim(starttime, endtime, pad=True, fill_value=0)
stream.write(os.path.join(waveform_dir, fname))
print(f"{fname} download succeeds")
if upload_minio:
minioClient.fput_object(bucket_name, os.path.join(config['region'], fname), os.path.join(waveform_dir, fname))
print(f"{fname} upload to minio {os.path.join(config['region'], fname)}")
lock.acquire()
fname_list.append(fname)
lock.release()
threads = []
MAX_THREADS = 4
for ii, i in enumerate(idx):
t = threading.Thread(target=download, args=(i,))
t.start()
time.sleep(1)
threads.append(t)
if ii % MAX_THREADS == MAX_THREADS - 1:
for t in threads:
t.join()
threads = []
for t in threads:
t.join()
with open(fname_csv, "w") as fp:
fp.write("\n".join(fname_list))
return waveform_dir
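# download_waveform caps concurrency by joining threads in batches of
# MAX_THREADS; the same pattern can be factored into a reusable helper,
# sketched here (run_in_batches is hypothetical, not part of QuakeFlow):

```python
import threading

def run_in_batches(target, args_list, max_threads=4):
    """Run target(arg) for each arg, joining threads in batches of max_threads."""
    threads = []
    for ii, arg in enumerate(args_list):
        t = threading.Thread(target=target, args=(arg,))
        t.start()
        threads.append(t)
        if ii % max_threads == max_threads - 1:
            for t in threads:
                t.join()
            threads = []
    for t in threads:  # join the final partial batch
        t.join()

results = []
lock = threading.Lock()

def worker(x):
    with lock:
        results.append(x * x)

run_in_batches(worker, range(10), max_threads=4)
assert sorted(results) == [x * x for x in range(10)]
```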
# + tags=[]
if run_local:
waveform_path = download_waveform(
0,
root_dir("index.json"),
root_dir("config.json"),
root_dir("datetimes.json"),
root_dir("stations.pkl"),
root_dir("fname.csv"),
data_path=root_dir(""),
)
# -
download_waveform_op = comp.func_to_container_op(
download_waveform,
# base_image='zhuwq0/quakeflow-env:latest',
base_image='python:3.8',
packages_to_install=["obspy", "minio"],
)
def phasenet2gamma(
i: int,
index_json: InputPath("json"),
config_json: InputPath("json"),
station_csv: InputPath(str),
data_path: str,
data_list: InputPath(str),
catalog_csv: OutputPath(str),
picks_csv: OutputPath(str),
bucket_name: str = "catalogs",
s3_url: str = "localhost:9000",
secure: bool = True,
) -> str:
import json
import os
import obspy
import pandas as pd
import requests
import numpy as np
import multiprocessing
from multiprocessing import dummy
import time
import threading
lock = threading.Lock()
# from requests.adapters import HTTPAdapter
# from requests.packages.urllib3.util.retry import Retry
# retry_strategy = Retry(
# total=10,
# status_forcelist=[104, 502],
# )
# adapter = HTTPAdapter(max_retries=retry_strategy)
# http = requests.Session()
# http.mount("http://", adapter)
def convert_mseed(
mseed, stations, sampling_rate=100, n_channel=3, dtype="float32", amplitude=True, remove_resp=True
):
try:
mseed = mseed.detrend("spline", order=2, dspline=5 * mseed[0].stats.sampling_rate)
except:
print(f"Error: spline detrend failed at file {mseed}")
mseed = mseed.detrend("demean")
mseed = mseed.merge(fill_value=0)
starttime = min([st.stats.starttime for st in mseed])
endtime = max([st.stats.endtime for st in mseed])
# endtime = starttime + 60
mseed = mseed.trim(starttime, endtime, pad=True, fill_value=0)
for i in range(len(mseed)):
if mseed[i].stats.sampling_rate != sampling_rate:
# print(f"Resampling {mseed[i].id} from {mseed[i].stats.sampling_rate} to {sampling_rate} Hz")
mseed[i] = mseed[i].interpolate(sampling_rate, method="linear")
order = ['3', '2', '1', 'E', 'N', 'Z']
order = {key: i for i, key in enumerate(order)}
comp2idx = {"3": 0, "2": 1, "1": 2, "E": 0, "N": 1, "Z": 2}
nsta = len(stations)
nt = max(len(mseed[i].data) for i in range(len(mseed)))
data = []
station_id = []
t0 = []
for i in range(nsta):
trace_data = np.zeros([nt, n_channel], dtype=dtype)
empty_station = True
# sta = stations.iloc[i]["station"]
# sta = stations.index[i]
sta = stations.iloc[i]["id"]
comp = stations.iloc[i]["component"].split(",")
if remove_resp:
resp = stations.iloc[i]["response"].split(",")
# resp = station_locs.iloc[i]["response"]
for j, c in enumerate(sorted(comp, key=lambda x: order[x[-1]])):
resp_j = float(resp[j])
if len(comp) != 3:  ## fewer than 3 components
j = comp2idx[c]
if len(mseed.select(id=sta + c)) == 0:
# print(f"Empty trace: {sta+c} {starttime}")
continue
else:
empty_station = False
tmp = mseed.select(id=sta + c)[0].data.astype(dtype)
trace_data[: len(tmp), j] = tmp[:nt]
if stations.iloc[i]["unit"] == "m/s**2":
tmp = mseed.select(id=sta + c)[0]
tmp = tmp.integrate()
tmp = tmp.filter("highpass", freq=1.0)
tmp = tmp.data.astype(dtype)
trace_data[: len(tmp), j] = tmp[:nt]
elif stations.iloc[i]["unit"] == "m/s":
tmp = mseed.select(id=sta + c)[0].data.astype(dtype)
trace_data[: len(tmp), j] = tmp[:nt]
else:
print(
f"Error in {stations.iloc[i]['station']}\n{stations.iloc[i]['unit']} should be m/s**2 or m/s!"
)
if remove_resp:
trace_data[:, j] /= resp_j
if not empty_station:
data.append(trace_data)
station_id.append(sta)
t0.append(starttime.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3])
data = np.stack(data)
meta = {"data": data, "t0": t0, "station_id": station_id}
return meta
# PHASENET_API_URL = "http://phasenet.quakeflow.com"
# GAMMA_API_URL = "http://gamma.quakeflow.com"
PHASENET_API_URL = "http://phasenet-api.default.svc.cluster.local:8000"
GAMMA_API_URL = "http://gamma-api.default.svc.cluster.local:8001"
## read config
with open(index_json, "r") as fp:
index = json.load(fp)
idx = index[i]
with open(config_json, "r") as fp:
config = json.load(fp)
## read stations
stations = pd.read_csv(station_csv, delimiter="\t")
stations = stations.rename(columns={"station": "id"})
## read mseed list
data_list = pd.read_csv(data_list)
manager = multiprocessing.Manager()
# catalog_list = manager.list()
# picks_list = manager.list()
catalog_list = []
picks_list = []
for fname in data_list['fname']:
# def process(fname):
# print(f"Process {fname}\n")
mseed = obspy.read(os.path.join(data_path, fname))
meta = convert_mseed(mseed, stations)
batch = 2
# phasenet_picks = manager.list()
phasenet_picks = []
# for j in range(0, len(meta["station_id"]), batch):
def run_phasenet(j):
req = {
"id": meta['station_id'][j : j + batch],
"timestamp": meta["t0"][j : j + batch],
"vec": meta["data"][j : j + batch].tolist(),
}
# resp = requests.post(f'{PHASENET_API_URL}/predict', json=req)
# phasenet_picks.extend(resp.json())
counts = 0
while True:
try:
resp = requests.post(f'{PHASENET_API_URL}/predict', json=req, timeout=30)
lock.acquire()
phasenet_picks.extend(resp.json())
lock.release()
break
except Exception as e:
if counts > 30:
print(f"Error PhaseNet on {fname} at {j} trace: {e}")
break
print(f"Retry PhaseNet on {fname} at {j} trace")
# time.sleep(3)
counts += 1
time_prev = time.time()
print(f"PhaseNet start on {fname}...")
threads = []
MAX_THREADS = 4
# for ii, i in enumerate(idx):
for ii, j in enumerate(range(0, len(meta["station_id"]), batch)):
t = threading.Thread(target=run_phasenet, args=(j,))
t.start()
time.sleep(3)
threads.append(t)
if ii % MAX_THREADS == MAX_THREADS - 1:
for t in threads:
t.join()
threads = []
for t in threads:
t.join()
print(f"PhaseNet finish on {fname} using {time.time()-time_prev}s")
# print(f"PhaseNet on {fname}...")
# # with multiprocessing.dummy.Pool(processes=min(8, (len(meta["station_id"])-1)//batch+1)) as pool:
# with multiprocessing.dummy.Pool(processes=min(8, (len(meta["station_id"])-1)//batch+1)) as pool:
# pool.map(run_phasenet, range(0, len(meta["station_id"]), batch))
# phasenet_picks = list(phasenet_picks)
time_prev = time.time()
print(f"GaMMA start on {fname}...")
counts = 0
while True:
try:
resp = requests.post(
f'{GAMMA_API_URL}/predict',
json={"picks": phasenet_picks, "stations": stations.to_dict(orient="records"), "config": config},
)
result = resp.json()
catalog_gamma = result["catalog"]
for c in catalog_gamma:
c["file_idx"] = fname.split("/")[-1]
picks_gamma = result["picks"]
for c in picks_gamma:
c["file_idx"] = fname.split("/")[-1]
break
except Exception as e:
if counts > 30:
print(f"Error in GaMMA on {fname}: {e}")
break
print(f"Retry GaMMA on {fname}")
time.sleep(3)
counts += 1
catalog_list.extend(catalog_gamma)
picks_list.extend(picks_gamma)
print(f"GaMMA finish on {fname} using {time.time()-time_prev}s")
# with multiprocessing.dummy.Pool(processes=min(12, len(data_list["fname"]))) as pool:
# pool.map(process, data_list["fname"])
catalog_df = pd.DataFrame(list(catalog_list))
picks_df = pd.DataFrame(list(picks_list))
manager.shutdown()
if len(catalog_df) > 0:
print("GaMMA catalog:")
print(
catalog_df[
["time", "latitude", "longitude", "depth(m)", "magnitude", "covariance", "event_idx", "file_idx"]
]
)
if len(picks_df) > 0:
print("GaMMA association:")
print(picks_df)
with open(catalog_csv, 'w') as fp:
catalog_df.to_csv(
fp,
sep="\t",
index=False,
float_format="%.3f",
date_format='%Y-%m-%dT%H:%M:%S.%f',
columns=["time", "magnitude", "longitude", "latitude", "depth(m)", "covariance", "event_idx", "file_idx"],
)
with open(picks_csv, 'w') as fp:
picks_df.to_csv(
fp,
sep="\t",
index=False,
date_format='%Y-%m-%dT%H:%M:%S.%f',
columns=["id", "timestamp", "type", "prob", "amp", "prob_gmma", "event_idx", "file_idx"],
)
## upload to s3 bucket
try:
from minio import Minio
catalog_dir = os.path.join("/tmp/", bucket_name)
if not os.path.exists(catalog_dir):
os.makedirs(catalog_dir)
minioClient = Minio(s3_url, access_key='minio', secret_key='minio123', secure=secure)
if not minioClient.bucket_exists(bucket_name):
minioClient.make_bucket(bucket_name)
with open(os.path.join(catalog_dir, f"catalog_{idx[0]:04d}.csv"), 'w') as fp:
catalog_df.to_csv(
fp,
sep="\t",
index=False,
float_format="%.3f",
date_format='%Y-%m-%dT%H:%M:%S.%f',
columns=[
"time",
"magnitude",
"longitude",
"latitude",
"depth(m)",
"covariance",
"event_idx",
"file_idx",
],
)
minioClient.fput_object(
bucket_name,
f"{config['region']}/catalog_{idx[0]:04d}.csv",
os.path.join(catalog_dir, f"catalog_{idx[0]:04d}.csv"),
)
with open(os.path.join(catalog_dir, f"picks_{idx[0]:04d}.csv"), 'w') as fp:
picks_df.to_csv(
fp,
sep="\t",
index=False,
date_format='%Y-%m-%dT%H:%M:%S.%f',
columns=["id", "timestamp", "type", "prob", "amp", "prob_gmma", "event_idx", "file_idx"],
)
minioClient.fput_object(
bucket_name,
f"{config['region']}/picks_{idx[0]:04d}.csv",
os.path.join(catalog_dir, f"picks_{idx[0]:04d}.csv"),
)
except Exception as err:
# print(f"ERROR: can not access minio service! \n{err}")
pass
return f"catalog_{idx[0]:04d}.csv"
if run_local:
phasenet2gamma(
0,
root_dir("index.json"),
root_dir("config.json"),
root_dir("stations.csv"),
root_dir(root_dir('waveforms')),  # matches waveform_dir = data_path/region/waveforms written by download_waveform (region == dir_name)
root_dir('fname.csv'),
root_dir("catalog.csv"),
root_dir("picks.csv"),
)
phasenet2gamma_op = comp.func_to_container_op(
phasenet2gamma,
# base_image='zhuwq0/quakeflow-env:latest',
base_image='python:3.8',
packages_to_install=["pandas", "numpy", "tqdm", "minio", "obspy"],
)
# ## 8. Plot catalogs
# + tags=[]
if run_local:
# %run plot_catalog.ipynb
# -
# ## 9. Parallel processing on cloud
#
# Only run this section for parallel jobs on the cloud. A cloud environment must be set up first.
def merge_catalog(
config_json: InputPath("json"),
catalog_csv: OutputPath(str),
picks_csv: OutputPath(str),
bucket_name: str = "catalogs",
s3_url: str = "minio-service:9000",
secure: bool = True,
):
import pandas as pd
from glob import glob
import os
import json
from minio import Minio
minioClient = Minio(s3_url, access_key='minio', secret_key='minio123', secure=secure)
with open(config_json, "r") as fp:
config = json.load(fp)
objects = minioClient.list_objects(bucket_name, prefix=config["region"], recursive=True)
tmp_path = lambda x: os.path.join("/tmp/", x)
for obj in objects:
print(obj._object_name)
minioClient.fget_object(bucket_name, obj._object_name, tmp_path(obj._object_name.split("/")[-1]))
files_catalog = sorted(glob(tmp_path("catalog_*.csv")))
files_picks = sorted(glob(tmp_path("picks_*.csv")))
if len(files_catalog) > 0:
catalog_list = []
for f in files_catalog:
tmp = pd.read_csv(f, sep="\t", dtype=str)
catalog_list.append(tmp)
merged_catalog = pd.concat(catalog_list).sort_values(by="time")
merged_catalog.to_csv(tmp_path("merged_catalog.csv"), sep="\t", index=False)
minioClient.fput_object(bucket_name, f"{config['region']}/merged_catalog.csv", tmp_path("merged_catalog.csv"))
pick_list = []
for f in files_picks:
tmp = pd.read_csv(f, sep="\t", dtype=str)
pick_list.append(tmp)
merged_picks = pd.concat(pick_list).sort_values(by="timestamp")
merged_picks.to_csv(tmp_path("merged_picks.csv"), sep="\t", index=False)
minioClient.fput_object(bucket_name, f"{config['region']}/merged_picks.csv", tmp_path("merged_picks.csv"))
with open(catalog_csv, "w") as fout:
with open(tmp_path("merged_catalog.csv"), "r") as fin:
for line in fin:
fout.write(line)
with open(picks_csv, "w") as fout:
with open(tmp_path("merged_picks.csv"), "r") as fin:
for line in fin:
fout.write(line)
else:
with open(catalog_csv, "w") as fout:
pass
print("No catalog.csv found!")
with open(picks_csv, "w") as fout:
pass
print("No picks.csv found!")
# if run_local:
#     merge_catalog(root_dir("config.json"), root_dir("combined_catalog.csv"), root_dir("combined_picks.csv"), bucket_name="catalogs", s3_url="localhost:9000", secure=False)
merge_op = comp.func_to_container_op(
merge_catalog,
# base_image='zhuwq0/quakeflow-env:latest',
base_image='python:3.8',
packages_to_install=["pandas", "minio"],
)
# ## Define QuakeFlow pipeline
@dsl.pipeline(name='QuakeFlow', description='')
def quakeflow_pipeline(
data_path: str = "/tmp/",
num_parallel=0,
bucket_catalog: str = "catalogs",
s3_url: str = "minio-service:9000",
secure: bool = False,
):
config = config_op(num_parallel)
events = download_events_op(config.outputs["config_json"])
stations = download_stations_op(config.outputs["config_json"])
with kfp.dsl.ParallelFor(config.outputs["output"]) as i:
vop_ = dsl.VolumeOp(
name="Create volume", resource_name=f"data-volume-{str(i)}", size="20Gi", modes=dsl.VOLUME_MODE_RWO
).set_retry(3)
download_op_ = (
download_waveform_op(
i,
config.outputs["index_json"],
config.outputs["config_json"],
config.outputs["datetime_json"],
stations.outputs["station_pkl"],
data_path=data_path,
bucket_name="waveforms",
# s3_url=s3_url,
s3_url="quakeflow-minio.default.svc.cluster.local:9000",
secure=secure,
)
.add_pvolumes({data_path: vop_.volume})
.set_cpu_request("860m") #2CPU per node
.set_retry(3)
.set_display_name('Download Waveforms')
)
download_op_.execution_options.caching_strategy.max_cache_staleness = "P30D"
phasenet2gamma_op_ = (
phasenet2gamma_op(
i,
config.outputs["index_json"],
config.outputs["config_json"],
station_csv=stations.outputs["station_csv"],
data_path=download_op_.outputs["Output"],
data_list=download_op_.outputs["fname_csv"],
bucket_name="catalogs",
s3_url=s3_url,
secure=secure,
).add_pvolumes({data_path: download_op_.pvolume})
.set_cpu_request("860m")
# .set_memory_request("2500M") #6GB per node
.set_display_name('PhaseNet + GaMMA')
)
phasenet2gamma_op_.execution_options.caching_strategy.max_cache_staleness = "P30D"
merge_op_ = merge_op(config.outputs["config_json"], bucket_name="catalogs", s3_url=s3_url, secure=secure).after(
phasenet2gamma_op_
)
merge_op_.execution_options.caching_strategy.max_cache_staleness = "P0D"
vop_.delete().after(merge_op_)
# ```
# helm install quakeflow-minio --set accessKey.password=<PASSWORD> --set secretKey.password=<PASSWORD> --set persistence.size=1T bitnami/minio
# ```
# +
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/weiqiang/.dotbot/cloud/quakeflow_zhuwq.json"
experiment_name = 'QuakeFlow'
pipeline_func = quakeflow_pipeline
run_name = pipeline_func.__name__ + '_run'
arguments = {
"data_path": "/tmp",
"num_parallel": 0,
"bucket_catalog": "catalogs",
"s3_url": "minio-service:9000",
# "s3_url": "quakeflow-minio.default.svc.cluster.local:9000",
"secure": False,
}
if not run_local:
client = kfp.Client(host="66bece58bfa6ae5b-dot-us-west1.pipelines.googleusercontent.com")
kfp.compiler.Compiler().compile(pipeline_func, '{}.zip'.format(experiment_name))
results = client.create_run_from_pipeline_func(
pipeline_func, experiment_name=experiment_name, run_name=run_name, arguments=arguments
)
| docs/workflow-api.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: torch
# language: python
# name: torch
# ---
# +
import time
from pathlib import Path
from random import randint
from matplotlib import pyplot as plt
import torch as np  # note: torch is imported under the alias np; np.unsqueeze/np.cat below are torch ops
from torchvision.utils import save_image
from models.CSA import CSA
from tools.toml import load_option
from plot import array2image
from loader import loader
def mkdir(out_dir):
out_dir = Path(out_dir)
if not out_dir.exists():
out_dir.mkdir(parents=True, exist_ok=True)
def mask_op(mask):
mask = mask.cuda()
mask = mask[0][0]
mask = np.unsqueeze(mask, 0)
mask = np.unsqueeze(mask, 1)
mask = mask.byte()
return mask
# -
# ## Model definition
# +
# Hyperparameter settings
## Fixed parameters
epochs = 15
display_freq = 200
save_epoch_freq = 1
## Model parameters
alpha = 1
beta = 0.2
model_name = f'CSA-{alpha}-{beta}'
# +
base_opt = load_option('options/base.toml')
opt = load_option('options/train.toml')
opt.update(base_opt)
opt.update({'name': model_name})  # set the model name
model = CSA(beta, **opt)
image_save_dir = model.save_dir / 'images'
mkdir(image_save_dir)
# -
# ## Model training
# Training phase
start_epoch = 0
total_steps = 0
iter_start_time = time.time()
for epoch in range(start_epoch, epochs):
epoch_start_time = time.time()
epoch_iter = 0
trainset = loader.trainset(alpha)
for batch, mask in zip(trainset, loader.maskset):
image = batch[0]
mask = mask_op(mask)
total_steps += model.batch_size
epoch_iter += model.batch_size
# It not only sets the masked input data, but also sets the latent mask.
model.set_input(image, mask)
model.set_gt_latent()
model.optimize_parameters()
if total_steps % display_freq == 0:
real_A, real_B, fake_B = model.get_current_visuals()
# real_A = input, real_B = ground truth, fake_B = output
pic = (np.cat([real_A, real_B, fake_B], dim=0) + 1) / 2.0
image_name = f"epoch{epoch}-{total_steps}-{alpha}.jpg"
save_image(pic, image_save_dir/image_name, nrow=1)
if total_steps % 100 == 0:
errors = model.get_current_errors()
t = (time.time() - iter_start_time) / model.batch_size
print(
f"Epoch/total_steps/alpha-beta: {epoch}/{total_steps}/{alpha}-{beta}", dict(errors))
if epoch % save_epoch_freq == 0:
print(f'Saving model at epoch {epoch}, iters {total_steps} to {model.save_dir}')
model.save(epoch)
print(
f'Epoch {epoch}/{epochs-1} took {time.time() - epoch_start_time}s')
model.update_learning_rate()
| train.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Market Analysis
#
# Daily purchase volume, sales volume, and the price per gram of gold over a
# one-year period, broken down by day, are available from a set of gold
# sellers. Build a model that predicts the price per gram of gold from the
# purchase and sales volumes for each of the requested days below:
#
# 2013/4/13
#
# 2013/7/13
#
# 2013/7/14
#
# 2013/11/7
#
# 2013/12/15
#
# 2014/2/9
#
# 2014/2/17
#
# Expected deliverables:
#
# • A document describing the process designed to reach the desired solution
#
# • Runnable code together with all documentation needed to run it
#
# • An answer file containing the predicted values for the requested period
# +
# import the necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
# -
# %matplotlib inline
# - For ease of use I converted the xlsx file to CSV format and renamed the columns
# - First, I explore the data to find useful information
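# The xlsx-to-CSV conversion mentioned above can be scripted with pandas. This is a sketch under assumptions: the workbook name `gold.xlsx` and the column values are hypothetical (only `my_created_gold 2.csv` appears in this notebook); a small in-memory frame stands in for the workbook so the rename + round-trip pattern is runnable end to end:

```python
import io
import pandas as pd

# Real workflow (hypothetical filenames; pd.read_excel needs openpyxl installed):
#   goldcsv = pd.read_excel("gold.xlsx")
#   goldcsv.to_csv("my_created_gold 2.csv", index=False)
# Stand-in frame to demonstrate the rename + round-trip pattern:
df = pd.DataFrame({"col_a": [10, 12], "col_b": [100.0, 101.5]})
df.columns = ["number_of_daily_shopping", "price_per_unit"]  # assign English column names

buf = io.StringIO()
df.to_csv(buf, index=False)   # write CSV (here to memory instead of disk)
buf.seek(0)
restored = pd.read_csv(buf)   # read it back to confirm the round trip
print(list(restored.columns))
```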
# +
# Read the csv data to a dataframe
goldcsv = pd.read_csv('my_created_gold 2.csv')
# +
# Change the string values to float
#goldcsv['number_of_daily_shopping'] = goldcsv['number_of_daily_shopping'].str.replace(',','')
#goldcsv['number_of_daily_sales'] = goldcsv['number_of_daily_sales'].str.replace(',','')
#goldcsv['price_per_unit '] = goldcsv['price_per_unit '].str.replace(',','')
# +
# drop the first column that is unnecessary
goldcsv.drop(goldcsv.columns[0], axis=1)
# +
# return the shape of data
goldcsv.shape
# +
#returns the count of each market
goldcsv['market_number'].value_counts()
# +
# save modified dataframe as csv file
#goldcsv.to_csv('my_created_gold.csv')
# +
# Dropping all rows that contain a NA
clean_golddf = goldcsv.dropna()
# +
# number of all unique markets
number_of_markets = set(goldcsv.market_number.unique())
len(number_of_markets)
# +
# merging three columns (year, month, day) to one date column
date = goldcsv['year'].map(str) +' '+ goldcsv['month'].map(str) + ' ' + goldcsv['day'].map(str)
price = goldcsv['price_per_unit ']
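# Pandas can also assemble a datetime column directly from year/month/day columns, which avoids the manual string concatenation above; a small standalone check:

```python
import pandas as pd

# to_datetime accepts a frame whose columns are named year/month/day
# and assembles a proper datetime64 column from them
df = pd.DataFrame({"year": [2013, 2014], "month": [4, 2], "day": [13, 9]})
df["date"] = pd.to_datetime(df[["year", "month", "day"]])
print(df["date"].tolist())
```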
# +
# manipulate goldcsv DataFrame to a new one by combining the three date columns into one
new_gold_df = goldcsv.drop(goldcsv.columns[0], axis=1)
# -
new_gold_df.head()
new_gold_df['date'] = new_gold_df.apply(lambda x: '%s %s %s' % (x['year'], x['month'], x['day']), axis=1)
new_gold_df['date'] = pd.to_datetime(new_gold_df['date'])  # convert the combined string column to datetime
new_gold_df.head()
| MarketAnalysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="wNr8Y566YC7O"
# + [markdown] id="cr9Yp62Pptp1"
# # Classification result tuning and Plots
# + colab={"base_uri": "https://localhost:8080/"} id="NlcUuRtLZCXO" executionInfo={"status": "ok", "timestamp": 1627555982309, "user_tz": -330, "elapsed": 32328, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="b9b6e063-4c15-48dc-f7be-ef4a01a0f306"
import os, sys
import numpy as np
import pandas as pd  # needed for pd.DataFrame(rf_random.cv_results_) below
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from google.colab import drive
drive.mount('/content/drive')
sys.path.append("/content/drive/MyDrive/GSOC-NMR-project/Work/Notebooks")
from auxillary_functions import *
from polynomial_featextract import poly_featextract
# + colab={"base_uri": "https://localhost:8080/"} id="KUJYaIKGZCoh" executionInfo={"status": "ok", "timestamp": 1627555993703, "user_tz": -330, "elapsed": 11409, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="94e2e86f-13cd-411c-bd74-4b0a923b0d68"
# import raw data and params.txt file
datadir_path = "/content/drive/MyDrive/GSOC-NMR-project/Work/Data/2021-06-21_classify_datagen_all_funcs"
rawdata = load_data(datadir_path)
params = load_params(datadir_path)
ker_integrals = load_wlist(datadir_path) # load wlist.txt file
# Stencil type : {'0' : 'Gaussian', '1' : 'Power Law', '2' : 'RKKY'}
# + colab={"base_uri": "https://localhost:8080/"} id="lVuquLhdZC4O" executionInfo={"status": "ok", "timestamp": 1627555993705, "user_tz": -330, "elapsed": 22, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="5d34ad62-5366-4517-ed3c-f8800dfa14e0"
print(rawdata.shape)
offset = 150
shifted_data, center = get_window(rawdata,2/3,width=offset)
print("The Echo pulse occurs at timestep:",center)
# Rescaled data
rscl_data = shifted_data / np.max(shifted_data,axis=1,keepdims=True)
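# `keepdims=True` above keeps each row's maximum as a column vector, so broadcasting divides every sample by its own peak; a tiny standalone check of the same rescaling:

```python
import numpy as np

data = np.array([[1.0, 2.0, 4.0],
                 [3.0, 6.0, 3.0]])
# the (2, 1) row-max column broadcasts across columns,
# dividing each row by its own maximum
rescaled = data / np.max(data, axis=1, keepdims=True)
print(rescaled)
```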
# + [markdown] id="xN8jMPLPZoH1"
# # Classification
# + [markdown] id="j5-JaI4oa0x_"
# ## Pointwise features
# + id="-WsRnukra7bm" executionInfo={"status": "ok", "timestamp": 1627556061970, "user_tz": -330, "elapsed": 689, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}}
import sklearn
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, f1_score, classification_report
# + id="wF8zDDLPZED2" executionInfo={"status": "ok", "timestamp": 1627556062419, "user_tz": -330, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}}
X_train , X_test, y_train, y_test = train_test_split(rscl_data, params['stencil_type'], test_size=0.2,
stratify=params['stencil_type'], random_state=101)
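# The `stratify` argument keeps the class proportions of `stencil_type` identical in the train and test splits; a quick standalone check with a skewed toy label set:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 80 + [1] * 20)  # 20% minority class

# stratify=y allocates samples per class, preserving the 80/20 ratio
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
print((y_tr == 1).mean(), (y_te == 1).mean())  # both 0.2
```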
# + id="r7J5NnN_bkIH" executionInfo={"status": "ok", "timestamp": 1627556062863, "user_tz": -330, "elapsed": 3, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}}
model = RandomForestClassifier(oob_score=True, n_jobs=-1)
# + [markdown] id="O5jYRqfTruxx"
# ### Search (Grid / Random)
# + id="qrSSfgIZrxpm"
model_params = {
'n_estimators' : [int(x) for x in np.linspace(10, 121, 10)],
'min_samples_split' : [2, 5, 10],
'min_samples_leaf' : [1, 2, 4],
'bootstrap' : [True, False],
}
# + id="EJXvA_5hrx3G"
gs = GridSearchCV(estimator=model, param_grid=model_params, cv=3, n_jobs=-1, verbose=True)
# + id="Vkc93okQ5NxP"
rf_random = RandomizedSearchCV(estimator = model, param_distributions = model_params,
n_iter = 100, cv = 3, verbose=2,scoring='f1_weighted',
random_state=42, n_jobs = -1)
# + colab={"base_uri": "https://localhost:8080/"} id="LACvPaUP5xhs" executionInfo={"status": "ok", "timestamp": 1627288240224, "user_tz": -330, "elapsed": 484361, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="18740ea0-ca96-48bd-af18-f6b0683f22a7"
rf_random.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 878} id="eWzMBTQp9oQ2" executionInfo={"status": "ok", "timestamp": 1627288791484, "user_tz": -330, "elapsed": 326, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="885514c2-714e-4cd7-8255-6762949068c3"
pd.DataFrame(rf_random.cv_results_) #.plot.scatter('mean_fit_time', 'param_n_estimators', ls='-')
# + colab={"base_uri": "https://localhost:8080/"} id="I2lJFzAk5xkm" executionInfo={"status": "ok", "timestamp": 1627288422736, "user_tz": -330, "elapsed": 396, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="b5cd6a59-4657-4d88-b1ab-e6c0024bb6d0"
rf_random.best_params_
# + id="CFtRcj728wXB"
rscv_model = RandomForestClassifier(random_state=0, n_estimators=34, min_samples_split=2, min_samples_leaf=1)
# + colab={"base_uri": "https://localhost:8080/"} id="mZR85zAX8wZ2" executionInfo={"status": "ok", "timestamp": 1627288607726, "user_tz": -330, "elapsed": 4337, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="e4a91949-9c31-42c1-fdc7-7086924f842f"
rscv_model.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="EoBq1r5q8wcM" executionInfo={"status": "ok", "timestamp": 1627288607728, "user_tz": -330, "elapsed": 9, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="3d3851d9-ee4d-453b-b8be-4cd0fe8aca09"
ypred_rscv = rscv_model.predict(X_test)
print(f1_score(y_test, ypred_rscv, average='weighted'))
# + colab={"base_uri": "https://localhost:8080/"} id="LTBEjFcq5xnT" executionInfo={"status": "ok", "timestamp": 1627288612140, "user_tz": -330, "elapsed": 1137, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="4a6448c2-fef8-4020-fe9c-8eb01e67d2ab"
report = classification_report(y_test, ypred_rscv, labels=[0,1,2],target_names =['Gaussian','Power Law','RKKY'])
print(report)
# + colab={"base_uri": "https://localhost:8080/", "height": 395} id="jjwegi-Z9ScD" executionInfo={"status": "ok", "timestamp": 1627288640858, "user_tz": -330, "elapsed": 957, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="61f249a7-02bf-4fa8-91f7-d8cc0729b823"
fig, ax = plt.subplots(figsize=(8,6))
sns.heatmap(confusion_matrix(y_test, ypred_rscv), annot=True, fmt='.0f',
cmap='coolwarm_r', lw=2)
plt.xlabel('Predicted class', fontsize=14)
plt.ylabel('Actual class', fontsize=14)
plt.xticks(np.arange(3)+0.5,['Gaussian','Power Law','RKKY'])
plt.yticks(np.arange(3)+0.5,['Gaussian','Power Law','RKKY'])
plt.show()
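# In scikit-learn's convention, `confusion_matrix(y_true, y_pred)[i, j]` counts samples with true label `i` predicted as `j`, so the heatmap's rows are actual classes and its columns are predictions; a quick orientation check:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
cm = confusion_matrix(y_true, y_pred)
print(cm)  # rows: true class, columns: predicted class
```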
# + id="KD6GXT2C9Set"
# + id="nre8e8oX5xpp"
# + colab={"base_uri": "https://localhost:8080/"} id="QIQmvYyuryD2" executionInfo={"status": "ok", "timestamp": 1627286708195, "user_tz": -330, "elapsed": 667227, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="b0dc4598-d930-47ae-940b-ae3c5739bd15"
gs.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="BHlTJQWWryN-" executionInfo={"status": "ok", "timestamp": 1627286708199, "user_tz": -330, "elapsed": 30, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="73c9d164-56a6-4b30-b2c6-b6977cbba903"
pd.DataFrame(gs.cv_results_)
# + colab={"base_uri": "https://localhost:8080/"} id="LvloNxSqsxMT" executionInfo={"status": "ok", "timestamp": 1627286709079, "user_tz": -330, "elapsed": 10, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="8dfa25bc-a169-4b35-c826-b889acd3c6ac"
gs.best_params_
# + id="Ox0U17z2z0AY"
rf = RandomForestClassifier(random_state=0, **gs.best_params_)
# + colab={"base_uri": "https://localhost:8080/"} id="DGPszKzSz0DM" executionInfo={"status": "ok", "timestamp": 1627287059847, "user_tz": -330, "elapsed": 10866, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="2ceaf845-48a4-49a5-9085-2a2b420c4802"
rf.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="4ogHwbi-sxPf" executionInfo={"status": "ok", "timestamp": 1627287068946, "user_tz": -330, "elapsed": 317, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="ffe36e19-cd9f-4c4f-967e-a49642160ffe"
rf.score(X_test, y_test)
# + colab={"base_uri": "https://localhost:8080/"} id="WLIXbdUJsxRy" executionInfo={"status": "ok", "timestamp": 1627287112435, "user_tz": -330, "elapsed": 340, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="d43bd372-4217-4df0-8a04-91fd964a7d13"
y_pred_gs = rf.predict(X_test)
print(f1_score(y_test, y_pred_gs, average='weighted'))
# + id="UrSvpY1CsxUZ"
# + colab={"base_uri": "https://localhost:8080/"} id="ULaqPxEpbZLN" executionInfo={"status": "ok", "timestamp": 1627283676951, "user_tz": -330, "elapsed": 194502, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="831fb89f-a37a-4840-da43-4e6032d5bc7c"
scores = cross_val_score(model, X_train, y_train, scoring='f1_weighted', cv=5, n_jobs=-1, verbose=True )
# + colab={"base_uri": "https://localhost:8080/"} id="zJUtkDhjcGAk" executionInfo={"status": "ok", "timestamp": 1627283676954, "user_tz": -330, "elapsed": 17, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="9d4e099b-aa30-4aa9-d3e9-6dd08107577c"
scores
# + colab={"base_uri": "https://localhost:8080/"} id="eABYD6MWcvR0" executionInfo={"status": "ok", "timestamp": 1627282605361, "user_tz": -330, "elapsed": 51512, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="65a30119-20d1-4029-ff41-02d3e90df5f7"
model.fit(X_train, y_train)
# + id="sXmgD78Gc3uM"
y_pred = model.predict(X_test)
f1_score(y_test, y_pred, average='weighted')
# + colab={"base_uri": "https://localhost:8080/"} id="Ix-RYKEOgcoc" executionInfo={"status": "ok", "timestamp": 1627282616661, "user_tz": -330, "elapsed": 345, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="27c559e0-a65d-4e6e-d1fa-e03d90a580f2"
report = classification_report(y_test, y_pred, labels=[0,1,2],target_names =['Gaussian','Power Law','RKKY'])
print(report)
# + colab={"base_uri": "https://localhost:8080/", "height": 395} id="5p-tmxfidGWD" executionInfo={"status": "ok", "timestamp": 1627283748856, "user_tz": -330, "elapsed": 1189, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="295d42a6-5227-4c20-9ca7-90e56bb99e05"
fig, ax = plt.subplots(figsize=(8,6))
sns.heatmap(confusion_matrix(y_test, y_pred), annot=True, fmt='.0f',
cmap='coolwarm_r', lw=2)
plt.xlabel('Predicted class', fontsize=14)
plt.ylabel('Actual class', fontsize=14)
plt.xticks(np.arange(3)+0.5,['Gaussian','Power Law','RKKY'])
plt.yticks(np.arange(3)+0.5,['Gaussian','Power Law','RKKY'])
plt.show()
# + id="QHireaaEjEVF"
model.feature_importances_
# + [markdown] id="6AeIkBlahK9x"
# ### wrongly classified curves
# + colab={"base_uri": "https://localhost:8080/", "height": 268} id="pfZ_BO8Am4c_" executionInfo={"status": "ok", "timestamp": 1627283251929, "user_tz": -330, "elapsed": 1247, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="369dccd3-9678-401e-aaaf-a22ee72262c7"
gs_rkky = ((y_test == 0)&(y_pred == 2))
gs_rkky = gs_rkky[gs_rkky == True].index
for curve in rscl_data[gs_rkky,:]:
plt.plot(curve)
# + colab={"base_uri": "https://localhost:8080/"} id="w_fL5qnpeDdc" executionInfo={"status": "ok", "timestamp": 1627283325357, "user_tz": -330, "elapsed": 349, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="25003c9a-bebc-4300-c2d8-b5f396673e7a"
wrongly_classified = np.where(y_test != y_pred)[0]
wrongly_classified
# + colab={"base_uri": "https://localhost:8080/"} id="-p0O3Ipbhm1y" executionInfo={"status": "ok", "timestamp": 1627282039389, "user_tz": -330, "elapsed": 374, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="48f2fc31-581a-4bb8-8b9e-7bc5ec2fbc24"
y_test.iloc[wrongly_classified].iloc[0]
# + colab={"base_uri": "https://localhost:8080/"} id="6E4ac_rohzHL" executionInfo={"status": "ok", "timestamp": 1627281463955, "user_tz": -330, "elapsed": 515, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="4557b107-eb3f-4998-bc28-fcc0929bb5b5"
y_pred[wrongly_classified]
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="LqjhLEsoi1db" executionInfo={"status": "ok", "timestamp": 1627283359375, "user_tz": -330, "elapsed": 5802, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhIcW2ccb-vKB0nzSNQiH_55kdPj7GnUj2nbBDOaKk=s64", "userId": "08907162275712489656"}} outputId="9a8d6823-1c06-4aec-fb8d-721070099d3a"
stencil_type = {0 : 'Gaussian', 1 : 'Power Law', 2 : 'RKKY'}
wrongly_classified_x = rscl_data[y_test.iloc[wrongly_classified].index]
for i in range(0,len(wrongly_classified),3):
fig,ax = plt.subplots()
plt.plot(wrongly_classified_x[i,:])
actual_label = y_test.iloc[wrongly_classified].iloc[i]
pred_label = y_pred[wrongly_classified][i]
plt.title(f"Actual type : {stencil_type[actual_label]} | Predicted type: {stencil_type[pred_label]}")
| Week8/July26-classification-plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Chem
# language: python
# name: chem
# ---
import sys
sys.path.append('..')
import torch
import pickle
import yaml
import seaborn as sn
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from torch.utils.data.dataloader import DataLoader
from reaction_predictors.graph_model.models import RGCNNTrClassifier
from utils.graph_utils import get_bonds, get_nodes
from utils.torch_dataset import Dataset, graph_collate
from utils.draw_utils import draw_gt_reaction
from utils.dataset_utils import prune_dataset_by_length
from reaction_predictors.graph_model.model_utils import train_epoch, evaluate, test
from collections import namedtuple
import pickle
from rdkit.Chem.Draw import IPythonConsole
from IPython.display import SVG, Image
IPythonConsole.molSize = (400,400)
def vizualize_attention(att_map, data):
print(data["smiles"])
plt.figure(figsize=(10, 10))
elems = [num2elem[i] + ':' + str(j) for (i, j) in zip(data["reactants"]["nodes"], data["reactants"]["mask"])]
n_atoms = len(data["reactants"]["nodes"])
sn.heatmap(att_map[:n_atoms, :n_atoms], xticklabels=elems, yticklabels=elems)
plt.show()
def convert(dictionary):
for key, value in dictionary.items():
if isinstance(value, dict):
dictionary[key] = convert(value)
return namedtuple('GenericDict', dictionary.keys())(**dictionary)
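# A quick self-contained check of the `convert` helper (the function is repeated here so the snippet runs on its own): nested dicts become attribute-accessible namedtuples, which is how `config["model"]` etc. are used below:

```python
from collections import namedtuple

def convert(dictionary):
    # same helper as above: recursively turn nested dicts into namedtuples
    for key, value in dictionary.items():
        if isinstance(value, dict):
            dictionary[key] = convert(value)
    return namedtuple('GenericDict', dictionary.keys())(**dictionary)

cfg = convert({"model": {"h_dim": 64}, "name": "demo"})
print(cfg.model.h_dim, cfg.name)  # attribute access instead of key lookup
```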
with open('../scripts/graph_models/MT_EGTBF_100.yml', 'r') as ymlfile:
config = yaml.load(ymlfile, Loader=yaml.FullLoader)
device = 'cuda:0'
model_cfg = convert(config["model"])
data_cfg = convert(config["dataset"])
train_cfg = convert(config["train"])
paths = convert(config["paths"])
# +
meta = pickle.load(open(paths.dataset_path + 'meta.pkl', 'rb'))
node2label = get_nodes(meta['node'], n_molecule_level=data_cfg.n_molecule_level,
n_reaction_level=data_cfg.n_reaction_level)
bond2label = get_bonds(meta['type'], n_molecule_level=data_cfg.n_molecule_level,
n_reaction_level=data_cfg.n_reaction_level,
self_bond=data_cfg.self_bond)
# -
num_rels = len(bond2label)
pad_length = data_cfg.max_num_atoms + 15 * data_cfg.n_molecule_level + \
data_cfg.n_molecule_level * data_cfg.n_reaction_level
num_nodes = pad_length
model = torch.load(paths.save_path, map_location=device)
model = model.to(device)
test_dataset = pickle.load(open(paths.dataset_path + 'test.pkl', 'rb'))
test_dataset = prune_dataset_by_length(test_dataset, data_cfg.max_num_atoms)
ts_dataset = Dataset(test_dataset, device=device, pad_length=pad_length,
bond2label=bond2label, node2label=node2label, feature_idxs=data_cfg.feature_idxs,
target_main_product=data_cfg.target_main_product, target_center=data_cfg.target_center,
n_molecule_level=data_cfg.n_molecule_level, n_reaction_level=data_cfg.n_reaction_level)
test_loader = DataLoader(ts_dataset, train_cfg.batch_size, drop_last=True, collate_fn=graph_collate)
elements = "H He Li Be B C N O F Ne Na Mg Al Si P S Cl Ar K Ca Sc Ti V Cr Mn Fe Co Ni Cu Zn Ga Ge As Se Br Kr Rb Sr Y Zr Nb Mo Tc Ru Rh Pd Ag Cd In Sn Sb Te I Xe Cs Ba La Ce Pr Nd Pm Sm Eu Gd Tb Dy Ho Er Tm Yb Lu Hf Ta W Re Os Ir Pt Au Hg Tl Pb Bi Po At Rn Fr Ra Ac Th Pa U Np Pu Am Cm Bk Cf Es Fm Md No Lr Rf Db Sg Bh Hs Mt Ds Rg Cn Uut Fl Uup Lv Uus Uuo".split()
num2elem = dict(zip(range(1, len(elements)+1), elements))
elem2num = dict(zip(elements, range(1, len(elements)+1)))
model.eval()
attentions = []
in_hiddens = []
out_hiddens = []
with torch.no_grad():
for batch in test_loader:
g = batch[0]
h = model.embed(g.ndata['feats'].T)
g.ndata['h'] = h
h = model.rgcn(g).view((model.batch_size, model.n_nodes, model.h_dim))
in_hiddens.append(h.cpu().detach().numpy())
h = h.permute(1, 0, 2)
attentions.append(model.trans.layers[0].self_attn(h, h, h)[1].cpu().detach().numpy())
out_hiddens.append(model.trans.layers[0].self_attn(h, h, h)[0].permute(1, 0, 2).cpu().detach().numpy())
att_maps = np.concatenate(attentions)
out_hs = np.concatenate(out_hiddens)
in_hs = np.concatenate(in_hiddens)
id2map = dict(zip(test_dataset.keys(), att_maps))
id2out = dict(zip(test_dataset.keys(), out_hs))
id2in = dict(zip(test_dataset.keys(), in_hs))
idx = 0
pic1, pic2 = draw_gt_reaction(test_dataset[idx], mapping=True)
SVG(pic1)
SVG(pic2)
vizualize_attention(id2map[idx], test_dataset[idx])
data = test_dataset[idx]
in_map = id2in[idx]
print(data["smiles"])
plt.figure(figsize=(10, 20))
elems = [num2elem[i] + ':' + str(j) for (i, j) in zip(data["reactants"]["nodes"], data["reactants"]["mask"])]
n_atoms = len(data["reactants"]["nodes"])
sn.heatmap(in_map[:n_atoms, :].T, xticklabels=elems)
plt.show()
data = test_dataset[idx]
out_map = id2out[idx]
print(data["smiles"])
plt.figure(figsize=(10, 20))
elems = [num2elem[i] + ':' + str(j) for (i, j) in zip(data["reactants"]["nodes"], data["reactants"]["mask"])]
n_atoms = len(data["reactants"]["nodes"])
sn.heatmap(out_map[:n_atoms, :].T, xticklabels=elems)
plt.show()
print(data["smiles"])
plt.figure(figsize=(10, 20))
elems = [num2elem[i] + ':' + str(j) for (i, j) in zip(data["reactants"]["nodes"], data["reactants"]["mask"])]
n_atoms = len(data["reactants"]["nodes"])
sn.heatmap(in_map[:n_atoms, :].T + out_map[:n_atoms, :].T, xticklabels=elems)
plt.show()
| notebooks/transformer_visualisation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this example, we will use TensorFlow v1 (version 1.15) to create a simple MLP model and transfer the application to Cluster Serving step by step.
#
# This tutorial is intended for TensorFlow v1 users only. If you are not a TensorFlow v1 user, the Keras tutorial [here](#keras-to-cluster-serving-example.ipynb) is recommended instead.
# ### Original Tensorflow v1 Application
import tensorflow as tf
tf.__version__
# We first define the TensorFlow graph and create some data.
g = tf.Graph()
with g.as_default():
# Graph Inputs
features = tf.placeholder(dtype=tf.float32,
shape=[None, 2], name='features')
targets = tf.placeholder(dtype=tf.float32,
shape=[None, 1], name='targets')
# Model Parameters
weights = tf.Variable(tf.zeros(shape=[2, 1],
dtype=tf.float32), name='weights')
bias = tf.Variable([[0.]], dtype=tf.float32, name='bias')
# Forward Pass
linear = tf.add(tf.matmul(features, weights), bias, name='linear')
ones = tf.ones(shape=tf.shape(linear))
zeros = tf.zeros(shape=tf.shape(linear))
prediction = tf.where(condition=tf.less(linear, 0.),
x=zeros,
y=ones,
name='prediction')
# Backward Pass
errors = targets - prediction
weight_update = tf.assign_add(weights,
tf.reshape(errors * features, (2, 1)),
name='weight_update')
bias_update = tf.assign_add(bias, errors,
name='bias_update')
train = tf.group(weight_update, bias_update, name='train')
saver = tf.train.Saver(name='saver')
import numpy as np
x_train, y_train = np.array([[1,2],[3,4],[1,3]]), np.array([1,2,1])
x_train.shape, y_train.shape
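# For reference, the update rule the graph encodes (predict 1 when w·x + b ≥ 0, then w += error·x and b += error) can be sketched in plain NumPy on the same toy data. This is an illustrative re-implementation, not part of the original notebook:

```python
import numpy as np

x_train = np.array([[1, 2], [3, 4], [1, 3]], dtype=float)
y_train = np.array([1, 2, 1], dtype=float)

w = np.zeros((2, 1))
b = np.zeros((1, 1))
for epoch in range(5):
    for x, t in zip(x_train, y_train):
        linear = x.reshape(1, 2) @ w + b
        pred = 0.0 if linear[0, 0] < 0.0 else 1.0  # mirrors tf.where(linear < 0, zeros, ones)
        err = t - pred
        w += (err * x).reshape(2, 1)  # mirrors the weight_update op
        b += err                      # mirrors the bias_update op
print(w.ravel(), b.ravel())
```

Note the second target is 2 while the prediction is always 0 or 1, so that sample's error stays 1 and the parameters grow each epoch rather than converging.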
# ### Export TensorFlow SavedModel
# Then we train the graph and, inside the `with tf.Session` block, save it in SavedModel format. The detailed code follows; the prediction result is `[1]` for input `[1,2]`.
with tf.Session(graph=g) as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(5):
for example, target in zip(x_train, y_train):
feed_dict = {'features:0': example.reshape(-1, 2),
'targets:0': target.reshape(-1, 1)}
_ = sess.run(['train'], feed_dict=feed_dict)
w, b = sess.run(['weights:0', 'bias:0'])
print('Model parameters:\n')
print('Weights:\n', w)
print('Bias:', b)
saver.save(sess, save_path='perceptron')
pred = sess.run('prediction:0', feed_dict={features: x_train})
print(pred)
# still inside this session, save the model in SavedModel format
inputs = dict([(features.name, features)])
outputs = dict([(prediction.name, prediction)])
inputs, outputs
tf.saved_model.simple_save(sess, "/tmp/mlp_tf1", inputs, outputs)
# ### Deploy Cluster Serving
# After the model is prepared, we start to deploy it on Cluster Serving.
#
# First install Cluster Serving
# ! pip install bigdl-serving
import os
# ! mkdir cluster-serving
os.chdir('cluster-serving')
# ! cluster-serving-init
# ! tail wget-log
# +
# if you encounter a slow download issue like the above, you can use the following command to download instead
# # ! wget https://repo1.maven.org/maven2/com/intel/analytics/bigdl/bigdl-spark_2.4.3/0.9.0/bigdl-spark_2.4.3-0.9.0-serving.jar
# if you downloaded with wget, or see "bigdl-xxx-serving.jar" after "ls", run `mv *serving.jar bigdl.jar` once the download finishes.
# -
# After initialization has finished, check the directory
# ! ls
# Call mv *serving.jar bigdl.jar as mentioned above
# ! mv *serving.jar bigdl.jar
# ! ls
# We config the model path in `config.yaml` to following (the detail of config is at [Cluster Serving Configuration](https://github.com/intel-analytics/bigdl/blob/master/docs/docs/ClusterServingGuide/ProgrammingGuide.md#2-configuration))
# +
## BigDL Cluster Serving
# model:
#   # model path must be provided
#   path: /tmp/mlp_tf1
# -
# ! head config.yaml
# ### Start Cluster Serving
#
# Cluster Serving requires Flink and Redis to be installed, with the corresponding environment variables set; check [Cluster Serving Installation Guide](https://github.com/intel-analytics/bigdl/blob/master/docs/docs/ClusterServingGuide/ProgrammingGuide.md#1-installation) for details.
#
# The Flink cluster should be started before Cluster Serving starts; if it is not, run the following to start a local Flink cluster.
# ! $FLINK_HOME/bin/start-cluster.sh
# After configuration, start Cluster Serving by `cluster-serving-start` (the detail is at [Cluster Serving Programming Guide](https://github.com/intel-analytics/bigdl/blob/master/docs/docs/ClusterServingGuide/ProgrammingGuide.md#3-launching-service))
# ! cluster-serving-start
# ### Prediction using Cluster Serving
# Next, we run the Cluster Serving client code in Python.
from bigdl.serving.client import InputQueue, OutputQueue
input_queue = InputQueue()
# Use the async API to put and get: you have to pass a name argument to enqueue, and use the same name to fetch the result
arr = np.array([1,2])
input_queue.enqueue('my-input', t=arr)
output_queue = OutputQueue()
prediction = output_queue.query('my-input')
# Use the sync API to predict; this will block until the result is returned or the call times out
prediction = input_queue.predict(arr)
prediction
# The `prediction` result will be the same as the one obtained by running the Tensorflow graph directly.
#
# This is the end of this tutorial. If you have any questions, you can raise an issue at [BigDL Github](https://github.com/intel-analytics/bigdl/issues).
| docs/readthedocs/source/doc/Serving/Example/tf1-to-cluster-serving-example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from os import listdir
from os.path import isfile, join
data_example = pd.read_csv("data/2015-04/2015-04-city-of-london-street.csv")
data_example.head(5)
data_example["Crime type"].unique()
# +
dirs = listdir("data")
frames = []
for i in dirs:
    if (i != ".DS_Store") and (i[:4] == "2015"):
        for l in listdir("data/{}".format(i)):
            if l != ".DS_Store" and l[-10:] == "street.csv":
                frames.append(pd.read_csv(join("data", i, l)))
# DataFrame.append was removed in pandas 2.0, so collect the frames and concatenate once
data = pd.concat(frames, ignore_index=True)
# -
data.shape
coord = data[["Longitude","Latitude"]]
from math import radians, cos, sin, asin, sqrt
london_geo = {"Latitude": 51.507, "Longitude": -0.127}
def haversine(coord):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
"""
lon1=london_geo["Longitude"]
lat1=london_geo["Latitude"]
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, coord["Longitude"], coord["Latitude"]])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
c = 2 * asin(sqrt(a))
km = 6367 * c
return km
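# As a quick sanity check on the formula, here is a standalone re-implementation (the `haversine_km` name and the Paris coordinates are illustrative, not from the notebook). London to itself should give 0 km, and London to Paris roughly 343 km:

```python
from math import radians, sin, cos, asin, sqrt

LONDON = {"Latitude": 51.507, "Longitude": -0.127}

def haversine_km(lat, lon, ref=LONDON):
    """Great-circle distance in km from `ref`, same formula as above."""
    lon1, lat1, lon2, lat2 = map(radians, [ref["Longitude"], ref["Latitude"], lon, lat])
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 6367 * 2 * asin(sqrt(a))

d_self = haversine_km(51.507, -0.127)    # London to itself
d_paris = haversine_km(48.8566, 2.3522)  # London to Paris, ~343 km
print(d_self, d_paris)
```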
new_coord = coord[coord["Latitude"].notnull()].copy()  # copy to avoid SettingWithCopyWarning on the insert below
new_coord.insert(len(new_coord.columns), "Weight",1)
new_coord.head()
new_coord = new_coord.groupby(["Latitude", "Longitude"], as_index=False)["Weight"].sum()
new_coord.info()
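# The groupby-sum collapses repeated coordinates into one row whose `Weight` counts the occurrences; in miniature (toy values):

```python
import pandas as pd

pts = pd.DataFrame({"Latitude": [51.5, 51.5, 51.6],
                    "Longitude": [-0.1, -0.1, -0.2],
                    "Weight": [1, 1, 1]})
# Rows with identical coordinates collapse into one, summing their weights.
agg = pts.groupby(["Latitude", "Longitude"], as_index=False)["Weight"].sum()
print(agg["Weight"].tolist())  # [2, 1]
```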
new_coord.insert(len(new_coord.columns), "Distance", new_coord.apply(haversine, axis=1))
new_coord[new_coord["Distance"] <= 30].shape
new_coord[new_coord["Distance"] <= 30].to_json("coord.json", orient="records")
| pycrime.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import basic libraries
import pandas as pd
import numpy as np
import nltk
import matplotlib.pyplot as plt
from wordcloud import WordCloud
pd.set_option('display.max_colwidth', None)
# specify data file
import pathlib
datafile = pathlib.Path.cwd().parent / "data" / "news_headlines"/ "news_headline_data.csv"
# create data frame
df = pd.read_csv(datafile)
# evaluate data frame
df.head()
df.dtypes
# convert date column to date time
df.date = pd.to_datetime(df.date,errors='coerce')
df.dtypes
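# `errors='coerce'` turns unparseable entries into `NaT` instead of raising, which is what lets the null-row check later in this notebook find the bad row. A minimal illustration:

```python
import pandas as pd

# One valid date and one malformed entry; coerce maps the bad one to NaT.
s = pd.to_datetime(pd.Series(["2020-04-27", "not a date"]), errors="coerce")
print(s.isna().sum())  # 1
```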
# general info on dataframe
df.describe(include='all')
# list of publishers
df.publisher.value_counts()
# identify row which had an error in date
df[df.isnull().any(axis=1)]
# Notes:
# Date appears to be available in link so I will do a quick fill-in with that
# quick replacement
df.iloc[102,1]='2020-04-27'
# convert date column to date time
df.date = pd.to_datetime(df.date,errors='coerce')
df.dtypes
df.describe(include='all')
#check for duplicates
df[df.article_title.duplicated(False) == True]
# Notes:
#
# 45-url no longer goes to valid article so I will drop
#
# 30/60 are the same article with the same authors (maybe bringing in author, if available, could also help identify duplicates),
# available from 2 different publishers (based on the notes the authors are listed with AP, so I will keep that one in this case)
#
# 71/79 are the same article from the same publisher, just variations on the url link, so I will keep the one with the earliest date
# Notes:
# While I am dropping the duplicated articles in this notebook, there is a larger question of how to handle them. Are we trying to understand unique articles, or, if an article is listed under more than one publisher, do we want to count it twice given that it may have greater reach? Things to think about and discuss with my partner.
# drop determined duplicates
df.drop(index=[45,30,71],inplace=True)
df.describe(include="all")
fig, ax = plt.subplots(figsize=(10, 6))
df.groupby(df.date.dt.date).size().plot(style="o")
ax.set_ylabel("count")
ax.set_title("All articles")
plt.show()
# Notes: The current dataset looks very limited to more recent articles
pfizer = df[df.article_title.str.lower().str.contains('pfizer')]
fig, ax = plt.subplots(figsize=(10, 6))
pfizer.groupby(pfizer.date.dt.date).size().plot(style="o")
ax.set_ylabel("count")
ax.set_title("Articles containing pfizer")
ax.set_ylim(0)
ax.set_yticks([0, 1, 2, 3, 4])
plt.show()
df[df.article_title.str.lower().str.contains('politico')]
# Notes: There are some article titles that contain a heading for the publisher that could be removed
# Import/download nltk packages
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.stem import PorterStemmer
from collections import Counter
import re  # needed for the substitution step below
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
# User-Entered Parameters
textcolumn = "article_title"
lowercase = True
strip_non_alpha = True
remove_stopwords = True
use_default_stopwords = False
user_defined_stopwords = ["vaccine","to","for","in","of","the","coronavirus","s","on","vaccines","and","covid","is","by","a","as","at","from","with","are","vaccination","its","PoliticoPlaybook","get","says"]
lemmatize_stem_algorithm = "none" #other options are "porter" and "wordnet"
subs = {'<NAME>': 'JohnsonandJohnson',
'White House': 'WhiteHouse',
'<NAME>':'BorisJohnson',
'J&J': 'JohnsonandJohnson',
'POLITICO Playbook': 'PoliticoPlaybook',
'West Virginia': 'WestVirginia'}
number_top_words = 50
number_top_bigrams = 15
# Notes: In the first round I noticed that there are certain words that we would want to avoid splitting up which for a quick fix I added to subs. In the future probably want to do some NER to better identify and adjust:
# (('johnson', 'johnson'), 13),
# (('boris', 'johnson'), 11),
# (('j', 'j'), 8),
# (('politico', 'playbook'), 5),
# (('white', 'house'), 4),
# list English stopwords
print(stopwords.words('english'))
# Notes:
# The typical stopword list would remove many verbs, which we need to understand sentiment; we also need to consider contractions to capture the negated forms of verbs
#
# +
# Create list of text column values
doc = df[textcolumn].tolist()
# Create sentence and word tokens using NLTK tokenizers
sentences = [sent_tokenize(i) for i in doc]
tokenized_sentences = [word_tokenize(i) for i in doc]
words= []
for s in sentences:
for from_, to in subs.items():
s[0] = re.sub(from_, to, s[0], flags=re.IGNORECASE)
for i in s:
words += word_tokenize(i)
# Lowercase word tokens
if lowercase == True:
words = [w.lower() for w in words]
tokenized_sentences = [[w.lower() for w in s] for s in tokenized_sentences]
# Strip non-alphabetic tokens
if strip_non_alpha == True:
words = [w for w in words if w.isalpha()]
tokenized_sentences = [[w for w in s if w.isalpha()] for s in tokenized_sentences]
# Remove stopwords from tokens
if use_default_stopwords == True:
stopwords_all = stopwords.words('english')+user_defined_stopwords
stopwords_all = [sw for sw in stopwords_all if sw not in ["not","no"]] #modification to keep in negatives
else:
stopwords_all = user_defined_stopwords
if remove_stopwords == True:
words = [w for w in words if w not in stopwords_all]
tokenized_sentences = [[w for w in s if w not in stopwords_all] for s in tokenized_sentences]
# Stem/lemmatize word tokens
if lemmatize_stem_algorithm == "wordnet":
wordnet_lemmatizer = WordNetLemmatizer()
words = [wordnet_lemmatizer.lemmatize(w) for w in words]
tokenized_sentences = [[wordnet_lemmatizer.lemmatize(w) for w in s] for s in tokenized_sentences]
elif lemmatize_stem_algorithm == "porter":
porter = PorterStemmer()
words = [porter.stem(w) for w in words]
tokenized_sentences = [[porter.stem(w) for w in s] for s in tokenized_sentences]
# lemmatize_stem_algorithm == "none": leave tokens unchanged
#print("Some tokenized sentences:",tokenized_sentences[:4])
#print("\nSome sentences:",sentences[:4])
#print("\nSome words:",words[:15])
# -
# Top Words List
counts = Counter(words) #count words
counts.most_common(number_top_words) #display most common
# +
# Top Words Cloud
wordcloud = WordCloud(background_color="white",max_words=number_top_words)
wordcloud.fit_words(counts)
plt.figure()
plt.axis("off")
plt.imshow(wordcloud,interpolation='bilinear')
plt.show()
# -
# Top Bigrams
counts = Counter(zip(words, words[1:])) #count words
counts.most_common(number_top_bigrams) #display most common
# Notes: Looks like there is still have some clean-up to bring together some of these into a single token
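# The `Counter(zip(words, words[1:]))` idiom pairs each token with its successor; on a toy token list:

```python
from collections import Counter

toy = ["boris", "johnson", "boris", "johnson", "says"]
# zip(toy, toy[1:]) yields each adjacent pair of tokens.
bigrams = Counter(zip(toy, toy[1:]))
print(bigrams.most_common(1))  # [(('boris', 'johnson'), 2)]
```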
| notebooks/newsheadlinesdata_eda_camille.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Part 1: Encoding: Time Series
#
# Training or fitting predictive models on time series requires organizing the data properly. Time-series encoding deals with representing events that occur over time. There are many different methods for encoding data that occur over time. However, a multilayer neural network fails at predicting time-dependent processes because, for a given input vector, the network always produces the same output vector. For this reason, recurrent neural networks are used for this type of problem.
#
# The variation of temperature during the week is an example of time-series data. For example, if we know that today's temperature is 25 degrees and tomorrow's is 27 degrees, recurrent neural networks and time-series encoding provide another option for predicting the correct temperature for the week. In contrast, a traditional multilayer neural network will always respond with the same output for a given input. If we train a multilayer network to predict tomorrow's temperature, it should return a value of 27 for an input of 25.
#
# Previously, we trained neural networks with input ($x$) and expected output ($y$). $X$ was a matrix: the rows were training examples and the columns were values to predict. The $x$ value will now contain sequences of data. The definition of the $y$ value stays the same.
#
# Dimensions of the training set ($x$):
# * Axis 1: training-set elements (sequences) (must be the same size as $y$)
# * Axis 2: sequence members
# * Axis 3: features in the data (like input neurons)
#
# Previously, we could take a single stock price as input to predict whether we should buy (1), sell (-1), or hold (0). The following code illustrates this encoding.
# +
#
x = [
[32],
[41],
[39],
[20],
[15]
]
y = [
1,
-1,
0,
-1,
1
]
print(x)
print(y)
# -
# The following code creates a DataFrame:
# +
from IPython.display import display, HTML
import pandas as pd
import numpy as np
x = np.array(x)
print(x[:,0])
df = pd.DataFrame({'x':x[:,0], 'y':y})
display(df)
# -
# You might want to include volume along with the stock price. The following code shows how to add an extra dimension to handle volume.
# +
x = [
[32,1383],
[41,2928],
[39,8823],
[20,1252],
[15,1532]
]
y = [
1,
-1,
0,
-1,
1
]
print(x)
print(y)
# -
# Again, as a DataFrame:
# +
from IPython.display import display, HTML
import pandas as pd
import numpy as np
x = np.array(x)
print(x[:,0])
df = pd.DataFrame({'price':x[:,0], 'volume':x[:,1], 'y':y})
display(df)
# -
# Now we arrive at the sequence format. When we want to predict something in a sequence, we must add a dimension and specify a maximum sequence length.
# +
x = [
[[32,1383],[41,2928],[39,8823],[20,1252],[15,1532]],
[[35,8272],[32,1383],[41,2928],[39,8823],[20,1252]],
[[37,2738],[35,8272],[32,1383],[41,2928],[39,8823]],
[[34,2845],[37,2738],[35,8272],[32,1383],[41,2928]],
[[32,2345],[34,2845],[37,2738],[35,8272],[32,1383]],
]
y = [
1,
-1,
0,
-1,
1
]
print(x)
print(y)
# -
# Even if there is only one feature (price), the third dimension must still be used:
#
# +
x = [
[[32],[41],[39],[20],[15]],
[[35],[32],[41],[39],[20]],
[[37],[35],[32],[41],[39]],
[[34],[37],[35],[32],[41]],
[[32],[34],[37],[35],[32]],
]
y = [
1,
-1,
0,
-1,
1
]
print(x)
print(y)
# -
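# The hand-written windows above can also be generated programmatically. A sketch using ordinary oldest-to-newest sliding windows (a hypothetical `make_sequences` helper; the ordering differs slightly from the hand-built example):

```python
import numpy as np

def make_sequences(series, seq_len):
    """Stack overlapping windows of length `seq_len` into the
    (samples, timesteps, features) shape that an RNN/LSTM expects."""
    series = np.asarray(series, dtype=float).reshape(len(series), -1)
    windows = [series[i:i + seq_len] for i in range(len(series) - seq_len + 1)]
    return np.stack(windows)

prices = [32, 41, 39, 20, 15, 35, 37]
x = make_sequences(prices, seq_len=5)
print(x.shape)  # (3, 5, 1)
```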
| SeriesTemporales_LSTM/01_Codificacion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import os
from binascii import hexlify
def topk(filename="bn.npy", k=100000):
d = np.load(filename)
d = d[d['n-gram'] != b'']
d = pd.DataFrame(d)
d['counter'] = abs(d['counter'])
d.sort_values(by='counter',ascending=False, inplace=True)
top = d.head(k)
if top.duplicated('n-gram').any():
print("duplicated found !!!")
return top
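# The core of `topk()` (drop empty n-grams, rank by absolute count, truncate) can be exercised without the .npy files; the toy structured array below is made up:

```python
import numpy as np

# Toy structured array mimicking the .npy files (hypothetical data).
toy = np.array([(b"ab", -5), (b"", 9), (b"cd", 3), (b"ef", -7)],
               dtype=[("n-gram", "S8"), ("counter", np.int64)])

# Same steps as topk(): drop empty n-grams, rank by |counter|, keep the top k.
toy = toy[toy["n-gram"] != b""]
order = np.argsort(-np.abs(toy["counter"]))
top2 = toy[order][:2]
print(list(top2["n-gram"]))  # [b'ef', b'ab']
```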
topk1 = topk("bn.1m.npy", 100000) #topk_of_1m
s1 = set(topk1['n-gram'])
print("len(s1)=%ld" % len(s1))
topk1[topk1['counter'] > 10]
topk2 = topk("bn.newest.npy", 10000) #topk_of_10k
s2 = set(topk2['n-gram'])
print("len(s2)=%ld" % len(s2))
print("len(s2.intersection(s1))=%ld" % len(s2.intersection(s1)))
topk2[topk2['counter'] > 10]
len(s1.intersection(s2))
# s1 and s2 are plain Python sets, so use set operations (not DataFrame .loc/.index)
s1.difference(s2)
s1.intersection(s2)
s2.difference(s1)
s2.intersection(s1)
| compare_topk.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: python3
# language: python
# name: python3
# ---
# # <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109B Data Science 2: Advanced Topics in Data Science
# ## Advanced-Sections: Homework 3 - Echo-State Reservoir Computing (AKA HW6-209)
#
#
#
#
# **Harvard University**<br/>
# **Spring 2020**<br/>
# **Instructors**: <NAME>, <NAME>, & <NAME>
#
#
# <hr style="height:2pt">
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
import os
import pathlib
working_dir = pathlib.Path().absolute()
# Uncomment the line below to help debug if the path to included images don't show
#print(working_dir)
os.chdir(working_dir)
# <hr style="height:2pt">
#
# ### INSTRUCTIONS
#
# - To submit your assignment follow the instructions given in Canvas.
#
# - This homework can be submitted in pairs.
#
# - If you submit individually but you have worked with someone, please include the name of your **one** partner below.
# - Please restart the kernel and run the entire notebook again before you submit. (Exception - you may skip the cells where you train neural networks, running the cells which load previously saved weights instead. However, **don't delete/overwrite the output that model.fit produced during training!**)
#
# <br><BR>
#
# <hr style="height:2pt">
# +
import numpy as np
from numpy import loadtxt
from matplotlib import pyplot as plt
# %matplotlib inline
# This is a class for reservoir computing
# The pyESN.py file must be in the same directory with this notebook
from pyESN import ESN
# -
# ### Overview
# We discussed in class the formulation of Reservoir Computing (RC), an echo-state recurrent neural network. One of the examples that we discussed was the Mackey-Glass nonlinear dynamical system. This example can be found in the seminal paper: http://www.rctn.org/vs265/jaeger04-ESN.pdf. This is the paper that introduced RC, so we highly encourage you to read it.
#
# In this homework, you are asked to work on the Rossler dynamical system. It is a very popular chaotic system that has been used to describe the evolution of chemical reactions. For more information check the Wikipedia page: https://en.wikipedia.org/wiki/R%C3%B6ssler_attractor
# For the implementation of the RC you have to use the class `pyESN` which is available at https://github.com/cknd/pyESN. In this github repository you can also find the Mackey-Glass example. We encourage you to explore this library.
#
# In the homework, you have to employ the RC network to predict the time evolution of a chaotic time series. We provide you three time series `x(t), y(t), z(t)` (files: `x.dat`, `y.dat`, `z.dat`), which are the solutions of the chaotic Rossler system.
#
# In the first question you are asked to make a short-range forecast. It is a prediction where the network is learning from the past (training set) and trying to make a future prediction based on the past. In this case the prediction is not a response to the previous signal, therefore, the input should be an array of ones.
#
# In the second question you are asked to make a long-range forecast. You have to confirm that making a prediction by learning only from the past yields very poor performance. This is expected because we are dealing with a very difficult (chaotic) time series. On the other hand, we saw in class that by using the concept of `observers` we can perform extremely long-range forecasts. To include the observers you need to use three different inputs: a vector of ones, and the two other known time series `y(t)` and `z(t)` (the observers). In this case the prediction is a response to the past behavior and also to the present observers' signals. This kind of prediction is called **inference**. For inference, we need to know the values of the `observers` for future times as well, since they are used as inputs to the RC. This is why we have inference instead of pure forecasting.
#
#
# As we discussed in the class RC is very sensitive to the hyper-parameters. In all the questions you are asked to find the optimal set of hyper-parameters that gives the best predictions. For convenience, we are asking you to optimize just two of the hyper-parameters, the `spectrum radius` and `sparsity term`. The rest of the hyper-parameters are given.
#
# The goals of this homework are for you to:
# 1. learn the mechanics of RC
# 2. confirm that RC training is fast
# 3. learn how to use RC for forecasting
# 4. acknowledge that RC is sensitive to hyper-parameters (no free lunch)
# 5. learn how to optimize hyper-parameters of RC
# 6. evaluate RC forecasting
# 7. learn observers-based RC for inference
# ### Overview for the pyESN library for the RC implementation
#
#
# #### You call the RC as:
# esn = ESN(n_inputs = #, <br>
# $\quad$ $\quad$ n_outputs = #, <br>
# $\quad$ $\quad$ n_reservoir = #,<br>
# $\quad$ $\quad$ sparsity= #,<br>
# $\quad$ $\quad$ random_state= #, <br>
# $\quad$ $\quad$ spectral_radius = #,<br>
# $\quad$ $\quad$ noise= #)
# <br> where # denotes the value that you choose.
#
# ##### Brief explanation of the parameters:
# `n_inputs`: number of input dimensions <br>
# `n_outputs`: number of output dimensions <br>
# `n_reservoir`: number of reservoir neurons <br>
# `random_state`: seed for the random generator<br>
# `sparsity`: proportion of recurrent weights set to zero <br>
# `spectral_radius`: spectral radius of the recurrent weight matrix <br>
# `noise`: noise added to each hidden neuron (regularization) <br>
#
#
# Throughout homework you should fix the following hyper-parameters.
#
# `n_outputs = 1`, <br>
# `n_reservoir = 1000`, <br>
# `noise = 0.0001`, <br>
# `random_state=42` <br>
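# To make `sparsity` and `spectral_radius` concrete, here is how an ESN-style reservoir matrix is typically constructed (a generic sketch of the usual recipe, not pyESN's exact internals):

```python
import numpy as np

rng = np.random.default_rng(42)
n, sparsity, spectral_radius = 200, 0.2, 1.4

# Random recurrent weights, with a `sparsity` fraction zeroed out.
W = rng.uniform(-0.5, 0.5, size=(n, n))
W[rng.random((n, n)) < sparsity] = 0.0

# Rescale so the largest |eigenvalue| matches the target spectral radius.
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
sr = np.max(np.abs(np.linalg.eigvals(W)))
print(round(sr, 6))
```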
# **Helper functions**
#
# We are providing three helper functions. You can use them if you want, or you can make your own implementation; it's up to you. If you define any other helper functions, they should be placed *after* this next cell. While helper functions are useful for keeping code organized, you are not required to use them for this homework.
#
# The given functions calculate the `MSE`, the `residuals`, and prepare the data. `prepareData` splits the output data into training and testing sets, and creates training and testing arrays of ones. Note that `prepareData` **does not** prepare the *observers*; you will need to do that manually in question 2.
# +
# HELPER FUNCTIONS GO HERE
def myMSE(prediction,target):
    # note: despite the name, this returns the root-mean-square error (RMSE)
    return np.sqrt(np.mean((prediction.flatten() - target.flatten() )**2))
def residuals(prediction,target):
return (target.flatten() - prediction.flatten())
def prepareData(target, train_perc=0.9, plotshow=False):
datalen = len(target)
trainlen = int(train_perc*datalen)
testlen = datalen-trainlen
# Train/Test sets
trainTarget = target[:trainlen]
testTarget = target[trainlen:trainlen+testlen]
inputTrain = np.ones(trainlen)
inputTest = np.ones(testlen)
if plotshow:
plt.figure(figsize=(14,3))
plt.plot(range(0,trainlen), trainTarget,'g',label='Train')
plt.plot(range(trainlen,trainlen+testlen), testTarget,'-r',label='Test')
plt.legend(loc=(0.1,1.1),fontsize=18,ncol=2)
plt.tight_layout()
return trainTarget, testTarget, inputTrain, inputTest
# -
# If you define any other helper functions, they should be put in the following cell.
# +
# HELPER FUNCTIONS GO HERE
### your code here
# -
# Load and plot your data (three time series)
### your code here
x = np.genfromtxt('data/x.dat')
y = np.genfromtxt('data/y.dat')
z = np.genfromtxt('data/z.dat')
# plot data
plt.plot(x, label='x')
plt.plot(y, label='y')
plt.plot(z, label='z')
plt.legend();
# <div class='exercise'><b> Question 1: Short-range forecast [50pts total] </b></div>
#
# In this question you are asked to perform a short range prediction. In particular, you have to use the first `95%` of the sequential points of the time-series `x(t)` and predict the final `5%`, this is considered the validation or testing set; in this homework the validation and the testing sets are the same.
#
# First, try to manually find a set of the hyper-parameters `spectral_radius` and `sparsity` that yields a prediction with relatively low validation/testing MSE (smaller than 0.25). Plot the training and the prediction along with the ground truth data. Also, show the residual between the ground truth and your prediction.
#
#
# Next, make a more systematic hyper-parameter optimization by using a grid search for the hyper-parameters `spectral_radius` and `sparsity`. The goal is to find the optimal set that gives the lowest MSE on the prediction. Make a 2D color plot to show the MSE for the different values of `spectral_radius` and `sparsity`.
#
# Finally, you have to make predictions with the optimal hyper-parameter set. Plot the training and the predictions along with the ground truth data. Again, show the residual between the ground truth and your prediction.
#
#
#
# Set the target time-series and name it `target`
# +
### your code here
trainlen=int(len(x)*0.95) # need to convert back to int
testlen=int(len(x)*0.05)
target = x # target time series
# -
# Prepare your data: The target time-series should be split into training and testing sets. Plot the time-series using different colors to indicate the training and testing sets. You might want to use the given helper function `prepareData()` or you can do it by yourself.
### your code here
trainTarget, testTarget, inputTrain, inputTest = prepareData(target, train_perc=0.95, plotshow=True);
# Make a quick prediction until you find a testing MSE < 0.25: Try around `spectral_radius = [1.2, 2.6]` and `sparsity = [0.16, 0.24]`.
#
# +
### your code here
esn = ESN(n_inputs = 1,
n_outputs = 1,
n_reservoir = 1000,
sparsity= .2,
random_state= 42,
spectral_radius = 1.5,
noise= 0.0001)
yfit = esn.fit(inputTrain, trainTarget)
yhat = esn.predict(inputTest)
# +
mse = myMSE(yhat, testTarget) #(prediction, target)
print(mse)
plt.figure(figsize=(14,3))
plt.plot(range(0,trainlen), trainTarget,'g',label='Train')
plt.plot(range(trainlen,trainlen+testlen), testTarget,'-r',label='Test')
plt.plot(range(trainlen,trainlen+testlen), yhat,'--b',label='Prediction', alpha=0.8)
plt.axvline(trainlen)
plt.legend(loc=(0.1,1.1),fontsize=18,ncol=2)
plt.tight_layout();
# -
# #### Hyper-parameters optimization
# Make a search grid for the hyper-parameters `spectral_radius` and `sparsity`. Visualize the result by plotting the testing MSE in a 2D color plot.
# +
### your code here
from tqdm import tqdm
grid = []
for spectra_radius in tqdm(np.arange(1.2, 2.7, .1)):
for sparsity in np.arange(0.16, 0.25, .01):
esn = ESN(n_inputs = 1,
n_outputs = 1,
n_reservoir = 1000,
random_state= 42,
noise= 0.0001,
sparsity= sparsity,
spectral_radius = spectra_radius)
yfit = esn.fit(inputTrain, trainTarget)
yhat = esn.predict(inputTest)
mse = myMSE(yhat, testTarget)
grid.append([spectra_radius, sparsity, mse])
# -
# plot the results
### your code here
grid = np.asarray(grid)
grid
# +
# plot
import seaborn as sns
grid_plot=grid[:, -1].reshape(len(np.arange(1.2, 2.7, .1)), len(np.arange(0.16, 0.25, .01)))
plt.figure(figsize=(6,8))
sns.heatmap(grid_plot, xticklabels=np.around(np.arange(0.16, 0.25, .01), 2),
yticklabels= np.around(np.arange(1.2, 2.7, .1), 1),
cmap = sns.cm.rocket_r)
plt.show()
# -
print(grid_plot.argmin(), round(grid_plot.min(),5))
print(grid[20])
# Optimal Prediction
# +
### your code here
esn = ESN(n_inputs = 1,
n_outputs = 1,
n_reservoir = 1000,
sparsity= .18,
random_state= 42,
spectral_radius = 1.4,
noise= 0.0001)
yfit = esn.fit(inputTrain, trainTarget)
yhat = esn.predict(inputTest)
mse = myMSE(yhat, testTarget) #(prediction, target)
print(mse)
plt.figure(figsize=(14,3))
plt.plot(range(0,trainlen), trainTarget,'g',label='Train')
plt.plot(range(trainlen,trainlen+testlen), testTarget,'-r',label='Test')
plt.plot(range(trainlen,trainlen+testlen), yhat,'--b',label='Prediction', alpha=0.8)
plt.axvline(trainlen)
plt.legend(loc=(0.1,1.1),fontsize=18,ncol=2)
plt.tight_layout();
# -
# <div class='exercise'><b> Question 2: Long-range forecast [50pts total] </b></div>
#
# Here you are asked to make a long-range prediction. Use the first `50%` of your data to train the RC network and then predict the final `50%`. This is a very long prediction and, consequently, it is extremely hard.
#
# First, show that by using the RC as before, it is impossible to make a good prediction (with MSE smaller than 0.4). Make a grid search to check the lowest possible testing MSE.
#
# Next, use the concept of the `observers` and perform an inference prediction. Follow the steps of Question 1. Make a grid search over the hyper-parameters `spectral_radius` and `sparsity`. Visualize the MSE of the prediction using a 2D plot. Then perform an inference prediction with the optimal set. Plot the training and prediction along with the ground truth. Again, show the residuals.
# Prepare your data: The target time-series should be split into training and testing sets.
### your code here
trainTarget, testTarget, inputTrain, inputTest = prepareData(target, train_perc=0.5, plotshow=True);
# +
### your code here
# make a random prediction
trainlen=len(trainTarget)
testlen=len(testTarget)
esn = ESN(n_inputs = 1,
n_outputs = 1,
n_reservoir = 1000,
sparsity= .18,
random_state= 42,
spectral_radius = 1.4,
noise= 0.0001)
yfit = esn.fit(inputTrain, trainTarget)
yhat = esn.predict(inputTest)
mse = myMSE(yhat, testTarget) #(prediction, target)
print(mse)
plt.figure(figsize=(14,3))
plt.plot(range(0,trainlen), trainTarget,'g',label='Train')
plt.plot(range(trainlen,trainlen+testlen), testTarget,'-r',label='Test')
plt.plot(range(trainlen,trainlen+testlen), yhat,'--b',label='Prediction', alpha=0.8)
plt.axvline(trainlen)
plt.legend(loc=(0.1,1.1),fontsize=18,ncol=2)
plt.tight_layout();
# -
# Hyperparameter Optimization
# +
### your code here
grid_50 = []
for spectra_radius in tqdm(np.arange(1.2, 2.7, .1)):
for sparsity in np.arange(0.16, 0.25, .01):
esn = ESN(n_inputs = 1,
n_outputs = 1,
n_reservoir = 1000,
random_state= 42,
noise= 0.0001,
sparsity= sparsity,
spectral_radius = spectra_radius)
yfit = esn.fit(inputTrain, trainTarget)
yhat = esn.predict(inputTest)
mse = myMSE(yhat, testTarget)
grid_50.append([spectra_radius, sparsity, mse])
grid_50 =np.asarray(grid_50)
# -
# plot the results
# +
### your code here
grid_plot=grid_50[:, -1].reshape(len(np.arange(1.2, 2.7, .1)), len(np.arange(0.16, 0.25, .01)))
plt.figure(figsize=(6,8))
sns.heatmap(grid_plot, xticklabels=np.around(np.arange(0.16, 0.25, .01), 2),
yticklabels= np.around(np.arange(1.2, 2.7, .1), 1),
cmap = sns.cm.rocket_r)
plt.show()
# -
print(grid_plot.argmin(), round(grid_plot.min(),5))
print(grid_50[6])
# #### Inference: Observers
# Prepare your `observers`. The given `prepareData()` function does not prepare the observers, so you need to do it manually.
# +
### your code here
split = .5
trainlen= int(len(x)*split)
testlen = int(len(x)*split)
print(trainlen, testlen)
# tTrain = np.ones(trainlen)
# tTest = np.ones(testlen)
# xTrain = x[:trainlen]
# xTest = x[trainlen: trainlen + testlen]
y_train = y[:trainlen]
y_test = y[trainlen:trainlen+testlen]
z_train = z[:trainlen]
z_test = z[trainlen:trainlen+testlen]
inputTrain = np.stack((np.ones(trainlen), y_train, z_train), axis=1)
inputTest = np.stack((np.ones(testlen), y_test, z_test), axis=1)
# -
inputTrain[0]
# Make a quick prediction to see the improvement (without optimizing the hyper-parameters yet)
trainTarget.shape
# +
### your code here
esn = ESN(n_inputs = 3,
n_outputs = 1,
n_reservoir = 1000,
sparsity= .16,
random_state= 42,
spectral_radius = 2.3,
noise= 0.0001)
yfit = esn.fit(inputTrain, trainTarget)
yhat = esn.predict(inputTest)
mse = myMSE(yhat, testTarget) #(prediction, target)
print(mse)
plt.figure(figsize=(14,3))
plt.plot(range(0,trainlen), trainTarget,'g',label='Train')
plt.plot(range(trainlen,trainlen+testlen), testTarget,'-r',label='Test')
plt.plot(range(trainlen,trainlen+testlen), yhat,'--b',label='Prediction', alpha=0.8)
plt.axvline(trainlen)
plt.legend(loc=(0.1,1.1),fontsize=18,ncol=2)
plt.tight_layout();
# -
# Hyper-parameter optimization
# +
### your code here
grid_obs = []
for spectra_radius in tqdm(np.arange(1.2, 2.6, .1)):
for sparsity in np.arange(0.16, 0.24, .01):
esn = ESN(n_inputs = 3,
n_outputs = 1,
n_reservoir = 1000,
random_state= 42,
noise= 0.0001,
sparsity= sparsity,
spectral_radius = spectra_radius)
yfit = esn.fit(inputTrain, trainTarget)
yhat = esn.predict(inputTest)
mse = myMSE(yhat, testTarget)
grid_obs.append([spectra_radius, sparsity, mse])
grid_obs =np.asarray(grid_obs)
# -
# Plot the results
#
# +
### your code here
grid_plot=grid_obs[:, -1].reshape(len(np.arange(1.2, 2.6, .1)), len(np.arange(0.16, 0.24, .01)))
plt.figure(figsize=(6,8))
sns.heatmap(grid_plot, xticklabels=np.around(np.arange(0.16, 0.24, .01), 2),
            yticklabels= np.around(np.arange(1.2, 2.6, .1), 1),
            cmap = sns.cm.rocket_r)
plt.show()
# -
print(grid_plot.argmin(), round(grid_plot.min(),5))
print(grid_obs[grid_plot.argmin()])
# Plot the optimal prediction (inference) here. As in Q1, show the fitting and prediction data along with the ground truth. And, one last time, plot the residuals.
print(grid_obs)
grid_obs[:,-1].argmin()
grid_obs[76]
# +
### your code here
# using the optimal hyperparameters found by the grid search
esn = ESN(n_inputs = 3,
n_outputs = 1,
n_reservoir = 1000,
random_state= 42,
sparsity= 0.22,
spectral_radius = 1.6,
noise= 0.0001)
yfit = esn.fit(inputTrain, trainTarget)
yhat = esn.predict(inputTest)
mse = myMSE(yhat, testTarget) #(prediction, target)
print(mse)
plt.figure(figsize=(14,3))
plt.plot(range(0,trainlen), trainTarget,'g',label='Train')
plt.plot(range(trainlen,trainlen+testlen), testTarget,'-r',label='Test')
plt.plot(range(trainlen,trainlen+testlen), yhat,'--b',label='Prediction', alpha=0.8)
plt.axvline(trainlen)
plt.legend(loc=(0.1,1.1),fontsize=18,ncol=2)
plt.tight_layout();
# -
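# The prompt above also asks for the residuals, which the prediction plot does not show. A minimal sketch of the residual computation, using synthetic stand-ins for the notebook's `yhat` and `testTarget` arrays:

```python
import numpy as np

# Stand-ins for the notebook's prediction and target arrays
rng = np.random.default_rng(42)
testTarget = rng.normal(size=100)
yhat = testTarget + rng.normal(scale=0.05, size=100)

# Residuals are prediction minus ground truth; a near-zero mean and small
# spread indicate an unbiased, tight fit
residuals = (yhat - testTarget).ravel()
print(residuals.mean(), residuals.std())
```

To visualize, plot `residuals` over the test range and add a zero reference line, e.g. `plt.plot(residuals); plt.axhline(0, color='k')`.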
# #### QUESTION: Why is the MSE here different from the grid-search MSE?
# ## **References**
#
# - https://github.com/cknd/pyESN
# - https://github.com/FilippoMB/Reservoir-Computing-framework-for-multivariate-time-series-classification
# - https://towardsdatascience.com/gentle-introduction-to-echo-state-networks-af99e5373c68
#
#
# 1. <NAME> and <NAME>. Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication, Science **304** (2004)
# 2. <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Reservoir observers: Model-free inference of unmeasured variables in chaotic systems, Chaos **27** (2017)
# 3. <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Machine learning with observers predicts complex spatiotemporal behavior. Front. Phys. - Quantum Computing **7** (2019)
#
| content/HW/hw6/cs109b_hw6_209_submit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.13 64-bit (''ProtoShotXAI'': conda)'
# name: python3
# ---
# +
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import numpy as np
import pickle
import matplotlib.pyplot as plt
from tqdm import tqdm
from tensorflow.keras.layers import Input
from tensorflow.keras.models import load_model
from keras.datasets import mnist
from architectures.protoshotxai import ProtoShotXAI
from utils.ploting_function import xai_plot
# -
model_path_pretrained = '../trained_models/adv_pretrained_conv_mnist/'
base_model = load_model(model_path_pretrained)
# +
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = np.expand_dims(x_train,axis = 3)/255
x_test = np.expand_dims(x_test,axis=3)/255
data = pickle.load(open("../data/MNIST_adversarial.pkl", "rb"))
x_test_adv = data['x_test_adv']
y_test_adv = data['y_test_adv']
example_index = 3495
query_adv = np.expand_dims(x_test_adv[example_index,:,:,0],axis=2)
query_adv = np.expand_dims(query_adv,axis=0)
query_adv = np.expand_dims(query_adv,axis=0)
# +
adv_sample = x_test_adv[example_index,:,:,:]
adv_samples = np.tile(np.expand_dims(np.copy(adv_sample),axis=0),(10000,1,1,1))
adv_sum = np.sum(np.abs(adv_samples-x_test),axis=(1,2,3))
adv_where = np.where(adv_sum == np.min(adv_sum))[0][0]
query_benign = np.expand_dims(x_test[adv_where,:,:,0],axis=2)
query_benign = np.expand_dims(query_benign,axis=0)
query_benign = np.expand_dims(query_benign,axis=0)
# -
plt.imshow(1-x_test[adv_where,:,:,0],'gray')
plt.axis('off')
plt.imshow(1-x_test_adv[example_index,:,:,0],'gray')
plt.axis('off')
protoshot = ProtoShotXAI(base_model)
shot = 1000
print(np.argmax(base_model.predict(query_adv[0])))
print(np.argmax(base_model.predict(query_benign[0])))
# +
f = np.linspace(0, 128, 128)
## Adversarial 4, prototype 4
iclass = 4 # prototype 4
support_data = x_train[y_train == iclass]
support_data = support_data[np.random.permutation(support_data.shape[0])[:shot]]
support_data = np.expand_dims(np.copy(support_data),axis=0)
s_feature_adv_proto4, q_feature_adv_proto4, den = protoshot.compute_features(support_data,query_adv,iclass)
s_feature_adv_proto4 = s_feature_adv_proto4.flatten()
q_feature_adv_proto4 = q_feature_adv_proto4.flatten()
den_adv_proto4 = den[0][0]
## Benign 4, prototype 4
s_feature_benign_proto4, q_feature_benign_proto4, den = protoshot.compute_features(support_data,query_benign,iclass)
s_feature_benign_proto4 = s_feature_benign_proto4.flatten()
q_feature_benign_proto4 = q_feature_benign_proto4.flatten()
den_benign_proto4 = den[0][0]
## Adversarial 4, prototype 5
iclass = 5 # prototype 5
support_data = x_train[y_train == iclass]
support_data = support_data[np.random.permutation(support_data.shape[0])[:shot]]
support_data = np.expand_dims(np.copy(support_data),axis=0)
s_feature_adv_proto5, q_feature_adv_proto5, den = protoshot.compute_features(support_data,query_adv,iclass)
s_feature_adv_proto5 = s_feature_adv_proto5.flatten()
q_feature_adv_proto5 = q_feature_adv_proto5.flatten()
den_adv_proto5 = den[0][0]
## Benign 4, prototype 5
s_feature_benign_proto5, q_feature_benign_proto5, den = protoshot.compute_features(support_data,query_benign,iclass)
s_feature_benign_proto5 = s_feature_benign_proto5.flatten()
q_feature_benign_proto5 = q_feature_benign_proto5.flatten()
den_benign_proto5 = den[0][0]
# +
from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = go.Figure()
# '#636EFA', '#EF553B', '#00CC96', '#AB63FA', '#FFA15A', '#19D3F3', '#FF6692', '#B6E880', '#FF97FF', '#FECB52'
fig = make_subplots(rows=2, cols=2, vertical_spacing = 0.15, horizontal_spacing = 0.15,
specs=[[{"secondary_y": True}, {"secondary_y": True}],
[{"secondary_y": True}, {"secondary_y": True}]],
subplot_titles=("Class 4 Features for Benign 4",
"Class 5 Features for Benign 4",
"Class 4 Features for Adversarial 4 (Class: True 4, Predicted 5)",
"Class 5 Features for Adversarial 4 (Class: True 4, Predicted 5)"))
fig.update_annotations(font_size=24)
## Adversarial 4, prototype 4
fig.add_trace(go.Scatter(
x=f, y=s_feature_adv_proto4/np.sqrt(np.sum(s_feature_adv_proto4*s_feature_adv_proto4)),
name='Support prototype feature components',
line=dict(color="#636EFA", width=6),
), secondary_y=False, row=2, col=1)
fig.add_trace(go.Scatter(
x=f, y=q_feature_adv_proto4/np.sqrt(np.sum(q_feature_adv_proto4*q_feature_adv_proto4)),
name='Query feature components',
line=dict(color="#00CC96", width=4),
), secondary_y=False, row=2, col=1)
fig.add_trace(go.Scatter(
x=f, y=q_feature_adv_proto4*s_feature_adv_proto4/den_adv_proto4,
name='ProtoShotXAI components',
line=dict(color="#EF553B", width=2),
), secondary_y=True, row=2, col=1)
fig.update_xaxes(title_text="Feature Number", range = [1,128],row=2, col=1)
fig.update_yaxes(title_text="Feature Weight", range = [-0.6,0.6],secondary_y=False, row=2, col=1)
fig.update_yaxes(title_text="ProtoShotXAI Weight", range = [-0.12,0.12],secondary_y=True, row=2, col=1)
## Benign 4, prototype 4
fig.add_trace(go.Scatter(
x=f, y=s_feature_benign_proto4/np.sqrt(np.sum(s_feature_benign_proto4*s_feature_benign_proto4)),
name='support prototype feature components',
line=dict(color="#636EFA", width=6),
showlegend=False,
), secondary_y=False, row=1, col=1)
fig.add_trace(go.Scatter(
x=f, y=q_feature_benign_proto4/np.sqrt(np.sum(q_feature_benign_proto4*q_feature_benign_proto4)),
name='benign features',
line=dict(color="#00CC96", width=4),
showlegend=False,
), secondary_y=False, row=1, col=1)
fig.add_trace(go.Scatter(
x=f, y=q_feature_benign_proto4*s_feature_benign_proto4/den_benign_proto4,
name='ProtoShot distance components',
line=dict(color="#EF553B", width=2),
showlegend=False,
), secondary_y=True, row=1, col=1)
fig.update_xaxes(title_text="Feature Number", range = [1,128],row=1, col=1)
fig.update_yaxes(title_text="Feature Weight", range = [-0.6,0.6],secondary_y=False, row=1, col=1)
fig.update_yaxes(title_text="ProtoShotXAI Weight", range = [-0.12,0.12],secondary_y=True, row=1, col=1)
## Adversarial 4, prototype 5
fig.add_trace(go.Scatter(
x=f, y=s_feature_adv_proto5/np.sqrt(np.sum(s_feature_adv_proto5*s_feature_adv_proto5)),
name='Support prototype feature components',
line=dict(color="#636EFA", width=6),
showlegend=False,
), secondary_y=False, row=2, col=2)
fig.add_trace(go.Scatter(
x=f, y=q_feature_adv_proto5/np.sqrt(np.sum(q_feature_adv_proto5*q_feature_adv_proto5)),
name='Adversarial feature components',
line=dict(color="#00CC96", width=4),
showlegend=False,
), secondary_y=False, row=2, col=2)
fig.add_trace(go.Scatter(
x=f, y=q_feature_adv_proto5*s_feature_adv_proto5/den_adv_proto5,
name='ProtoShot distance components',
line=dict(color="#EF553B", width=2),
showlegend=False,
), secondary_y=True, row=2, col=2)
fig.update_xaxes(title_text="Feature Number", range = [1,128],row=2, col=2)
fig.update_yaxes(title_text="Feature Weight", range = [-0.6,0.6],secondary_y=False, row=2, col=2)
fig.update_yaxes(title_text="ProtoShotXAI Weight", range = [-0.12,0.12],secondary_y=True, row=2, col=2)
## Benign 4, prototype 5
fig.add_trace(go.Scatter(
x=f, y=s_feature_benign_proto5/np.sqrt(np.sum(s_feature_benign_proto5*s_feature_benign_proto5)),
name='support prototype feature components',
line=dict(color="#636EFA", width=6),
showlegend=False,
), secondary_y=False, row=1, col=2)
fig.add_trace(go.Scatter(
x=f, y=q_feature_benign_proto5/np.sqrt(np.sum(q_feature_benign_proto5*q_feature_benign_proto5)),
name='benign features',
line=dict(color="#00CC96", width=4),
showlegend=False,
), secondary_y=False, row=1, col=2)
fig.add_trace(go.Scatter(
x=f, y=q_feature_benign_proto5*s_feature_benign_proto5/den_benign_proto5,
name='cosine distance components',
line=dict(color="#EF553B", width=2),
showlegend=False,
), secondary_y=True, row=1, col=2)
fig.update_xaxes(title_text="Feature Number", range = [1,128],row=1, col=2)
fig.update_yaxes(title_text="Feature Weight", range = [-0.6,0.6],secondary_y=False, row=1, col=2)
fig.update_yaxes(title_text="ProtoShotXAI Weight", range = [-0.12,0.12],secondary_y=True, row=1, col=2)
fig.update_layout(
font=dict(
size=18,
)
)
fig.update_layout(title_font_size=20)
fig.show()
import plotly.io as pio
pio.write_image(fig, './results/Adversarial_MNIST/Adversarial_MNIST_Features.png', width=2000, height=800)
# +
n_samples = 1000
rand_seq = np.random.permutation(np.shape(x_test_adv)[0])
rand_seq = rand_seq[:n_samples]
scores_benign = np.zeros((n_samples,10))
scores_adv = np.zeros((n_samples,10))
true_vals = np.zeros(n_samples)
progress_bar = True
for irand in tqdm(range(n_samples),disable=(not progress_bar)):
rand_int = rand_seq[irand]
true_vals[irand] = y_test[rand_int]
query_adv = np.expand_dims(x_test_adv[rand_int,:,:,0],axis=2)
query_adv = np.expand_dims(query_adv,axis=0)
query_adv = np.expand_dims(query_adv,axis=0)
query_benign = np.expand_dims(x_test[rand_int,:,:,0],axis=2)
query_benign = np.expand_dims(np.copy(query_benign),axis=0)
query_benign = np.expand_dims(np.copy(query_benign),axis=0)
for iclass in range(10):
support_data = x_train[y_train == iclass]
support_data = support_data[np.random.permutation(support_data.shape[0])[:shot]]
support_data = np.expand_dims(np.copy(support_data),axis=0)
scores_adv[irand,iclass] = protoshot.compute_score(support_data,query_adv,iclass)
scores_benign[irand,iclass] = protoshot.compute_score(support_data,query_benign,iclass)
# +
mask_adv = np.zeros((n_samples,10))
mask_adv[np.arange(n_samples),np.argmax(scores_adv,axis=1)] = 1
mask_benign = np.zeros((n_samples,10))
mask_benign[np.arange(n_samples),true_vals.astype(int)] = 1
in_class_benign = scores_benign[mask_benign==1]
in_class_adv = scores_adv[mask_adv==1]
# +
import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Histogram(x=in_class_benign, name='in-class scores for regular digits', histnorm='probability',
xbins=dict(
start=0.0,
end=1.0,
size=0.02))
)
fig.add_trace(go.Histogram(x=in_class_adv, name='in-class scores for adversarial digits', histnorm='probability',
xbins=dict(
start=0.0,
end=1.0,
size=0.02))
)
# The two histograms are drawn on top of one another
fig.update_layout(barmode='overlay')
fig.update_traces(opacity=0.75)
fig.update_xaxes(title_text="ProtoShotXAI Score")
fig.update_yaxes(title_text="Probability")
fig.update_layout(
title={
'text': "Histogram of In-Class Scores",
'y':0.85,
'x':0.35,
'xanchor': 'center',
},
font=dict(
size=18,
)
)
fig.update_layout(title_font_size=20)
import plotly.io as pio
pio.write_image(fig, './results/Adversarial_MNIST/Adversarial_MNIST_Histrograms.png', width=1000, height=500)
fig.show()
# -
n_points = 1000
thresh = np.arange(n_points)/n_points
ROC_x = np.zeros_like(thresh)
ROC_y = np.zeros_like(thresh)
for i in range(n_points):
    ithresh = thresh[i]
    # normalize by the number of scores, not the threshold count (the two
    # coincide here only because n_samples == n_points)
    ROC_y[i] = np.sum(in_class_benign >= ithresh)/len(in_class_benign)
    ROC_x[i] = np.sum(in_class_adv >= ithresh)/len(in_class_adv)
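# A common summary of the ROC curve built above is the area under it (AUC), which the trapezoidal rule gives directly once the points are sorted by false-positive rate. A sketch with toy (FPR, TPR) points, not the notebook's actual curve:

```python
import numpy as np

# Toy (FPR, TPR) points; in the notebook these come from the threshold
# sweep, in descending-FPR order
ROC_x = np.array([1.0, 0.5, 0.2, 0.0])
ROC_y = np.array([1.0, 0.9, 0.7, 0.0])

# The trapezoidal rule needs ascending x, so sort by FPR first
order = np.argsort(ROC_x)
x, y = ROC_x[order], ROC_y[order]
auc = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2))
print(round(auc, 3))  # -> 0.785
```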
# +
import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Scatter(
x=ROC_x, y=ROC_y,line=dict(width=6)))
fig.update_xaxes(title_text="False Positive Rate",range=[-0.01,1])
fig.update_yaxes(title_text="Adversarial Detection Rate")
fig.update_layout(
title={
'text': "ROC Curve for MNIST Adversarial Detection",
'y':0.87,
'x':0.49,
'xanchor': 'center',
},
font=dict(
size=18,
)
)
fig.update_layout(title_font_size=20)
import plotly.io as pio
pio.write_image(fig, './results/Adversarial_MNIST/Adversarial_MNIST_ROC.png', width=500, height=500)
fig.show()
# -
| experiments/Adversarial_MNIST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
symbols = ['A', 'C', 'G', 'T']
n = 4
symbs = 'ABCDEFGHIJ'
from itertools import product
for i in product(symbs,repeat=2):
print(''.join(i))
for i in range(len(symbs)):
for j in range(len(symbs)):
#if i != j:
print(symbs[i]+symbs[j])
# complete the unfinished nested loop: itertools.product enumerates every
# length-n string over the DNA alphabet in lexicographic order
for kmer in product(symbols, repeat=n):
    print(''.join(kmer))
| LEXF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
#module_path = module_path + "/Final/Algorithms/pySINDy"
if module_path not in sys.path:
sys.path.append(module_path)
print(sys.path)
from pySINDy.sindy import SINDy
import numpy as np
import csv
# +
data_path = 'Test1.csv' #+ input_data
x = np.array([])
y = np.array([])
print(x)
print(y)
# -
#code to create x & y values
with open(data_path) as csvfile:
readCSV = csv.reader(csvfile, delimiter=',')
for row in readCSV:
new_x = float(row[0])
new_y = float(row[1])
x = np.append(x,new_x)
y = np.append(y,new_y)
print(x)
print(y)
data = np.append([x],[y],axis=0)
print(data)
dt = 0.1 #take out for final version
model = SINDy(name='SINDy model for Own Data')
model.fit(data, dt, poly_degree=3, cut_off=0.01)
coef = model.coefficients
desc = model.descriptions
#return coef,desc
print(coef)
print(desc)
| Algorithms/pySINDy/examples/.ipynb_checkpoints/Own Data Test-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %matplotlib inline
#
#
# Save and Load the Model
# ============================
#
# In this section we will look at how to persist model state with saving, loading and running model predictions.
#
#
import torch
import torchvision.models as models
# Saving and Loading Model Weights
# --------------------------------
# PyTorch models store the learned parameters in an internal
# state dictionary, called ``state_dict``. These can be persisted via the ``torch.save``
# method:
#
#
model = models.vgg16(pretrained=True)
torch.save(model.state_dict(), 'model_weights.pth')
# To load model weights, you need to create an instance of the same model first, and then load the parameters
# using ``load_state_dict()`` method.
#
#
model = models.vgg16() # we do not specify pretrained=True, i.e. do not load default weights
model.load_state_dict(torch.load('model_weights.pth'))
model.eval()
# <div class="alert alert-info"><h4>Note</h4><p>Be sure to call the ``model.eval()`` method before inference to set the dropout and batch normalization layers to evaluation mode. Failing to do this will yield inconsistent inference results.</p></div>
#
#
# Saving and Loading Models with Shapes
# -------------------------------------
# When loading model weights, we needed to instantiate the model class first, because the class
# defines the structure of a network. We might want to save the structure of this class together with
# the model, in which case we can pass ``model`` (and not ``model.state_dict()``) to the saving function:
#
#
torch.save(model, 'model.pth')
# We can then load the model like this:
#
#
model = torch.load('model.pth')
# <div class="alert alert-info"><h4>Note</h4><p>This approach uses Python `pickle <https://docs.python.org/3/library/pickle.html>`_ module when serializing the model, thus it relies on the actual class definition to be available when loading the model.</p></div>
#
#
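# The checkpoint recipe linked below bundles the model and optimizer `state_dict`s (plus bookkeeping such as the epoch) into one dictionary. A minimal sketch of that pattern, using a tiny stand-in model rather than the VGG16 above:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Stand-in model and optimizer for illustration only
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Save one dict holding both state_dicts and any training bookkeeping
path = os.path.join(tempfile.gettempdir(), 'checkpoint.pth')
torch.save({
    'epoch': 5,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, path)

# Restore: rebuild the objects, then load each state_dict back
checkpoint = torch.load(path)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(checkpoint['epoch'])  # -> 5
```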
# Related Tutorials
# -----------------
# `Saving and Loading a General Checkpoint in PyTorch <https://pytorch.org/tutorials/recipes/recipes/saving_and_loading_a_general_checkpoint.html>`_
#
#
| 07_SAVE_AND_LOAD_MODEL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''penv'': venv)'
# name: python3
# ---
# + [markdown] id="Cxi8K9mXwl5t"
# <a href="https://colab.research.google.com/github//pylabel-project/samples/blob/main/yolo2pylabeler.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#
# # PyLabeler: View and Edit YOLO annotations
# This notebook is a proof of concept using the [jupyter-bbox-widget](https://github.com/gereleth/jupyter-bbox-widget) and PyLabel to create an interactive image-labeling tool. Use it to read, edit, and save bounding-box annotations to and from multiple annotation formats, including COCO, VOC, and YOLO--all within a Jupyter notebook.
# + id="mMxefSgXN_cM"
import logging
logging.getLogger().setLevel(logging.CRITICAL)
# !pip install pylabel > /dev/null
from pylabel import importer
# + [markdown] id="FJH3E6FwN_cN"
# ## Import Yolo annotations
# First we will import annotations stored in Yolo v5 format.
# + colab={"base_uri": "https://localhost:8080/"} id="pdaM2LrON_cN" outputId="9c9845e5-27ae-4f0a-97dc-4fc659fc940c"
# %%capture
import os, zipfile
#Download sample yolo dataset
os.makedirs("data", exist_ok=True)
# !wget "https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip" -O data/coco128.zip
with zipfile.ZipFile("data/coco128.zip", 'r') as zip_ref:
zip_ref.extractall("data")
# + colab={"base_uri": "https://localhost:8080/", "height": 302} id="NsAQXnTTN_cO" outputId="b809cb47-4d29-4a11-c810-453234f95d51"
path_to_annotations = "data/coco128/labels/train2017/"
#Identify the path to get from the annotations to the images
path_to_images = "../../images/train2017/"
#Import the dataset into the pylable schema
#Class names are defined here https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml
yoloclasses = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush']
dataset = importer.ImportYoloV5(path=path_to_annotations, path_to_images=path_to_images, cat_names=yoloclasses,
img_ext="jpg", name="coco128")
dataset.df.head()
#dataset.df.loc[:, dataset.df.columns.str.startswith('ann')]
# + [markdown] id="3yhvJjOoN_cP"
# ## Edit Annotations
# Use the jupyter_bbox_widget to inspect, edit, and save annotations without leaving the Jupyter notebook. Open an entire dataset for labeling.
# + colab={"base_uri": "https://localhost:8080/", "height": 724, "referenced_widgets": ["2a53a471a65b46828df58a3b644fe28c", "85a87c353908457db16e709559d833e9", "f25d44a9caed4337a7d6680e6198b377", "4a14e100bf074b218fcf552572574b84", "5504bb14d5444636aadfd51a4b2417d7", "b16f86d42208495ab1febee44d7315cb", "f099d32e268f49b19c6da8036e1bc7c6", "78839b2813524c018fd817bc7d402f59", "9b8c8e6660d14a448d226e84bc41105c", "c6040f7edfdc4ea496c063f7512c9266", "ff82b03c565842c18ee3425a28a27241", "e7257c919ec44374977191d0341e93ea", "188539e7b7b441b98bed0760f0dff270", "a4c8d00e3d8e40deb318fd12e9f5e603", "<KEY>", "bdbaac26113243129208e37b569e35e6", "<KEY>", "373a68af757b4548a0429e32d1b98be0", "18de88b549214e708c6b7809190b890b", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "e0b8fcc4765146e288e09211491c820e", "<KEY>", "<KEY>", "<KEY>", "596dedc99c014a37bfef6459e5851be4", "<KEY>", "77b8cc47db2545b39b7c782a046af1b5", "<KEY>", "1be249b01e084e8b89972857e4da2fc8", "c8e800eaebcb438eb8309492a6d51606", "2403556cbb5a4edb8c0f83edbeb9d3ac"]} id="D5u4PDEJN_cQ" outputId="ac0d74ae-6a4d-4027-9f7b-8a1b3e7c37fe"
dataset.labeler.StartPyLaber()
# + [markdown] id="TkM27j5cN_cQ"
# # Check Labels for a specific image
#
# + colab={"base_uri": "https://localhost:8080/", "height": 910, "referenced_widgets": ["f6efbbaba577489594e83579e4948700", "e607b0a292604f06a748323036717507", "272dd98d3dcf4e62b00154404759e85a", "75b8426fc9cb432289212076022833a6", "a1443aa8fb0141c8987a8ab7d0af8fed", "fee52791236d4e6b856df9d7e4765cb0", "<KEY>", "208b7a5f536a43019797d25cc69c81a4", "85288511531045c1a70d65cb8af79ec2", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "45b83bd8de8645b58099c3e444e6a2d1", "<KEY>", "985b038973ff4700824359d5eaceb3cc", "eedf3013b6214ff4a866915578f78fab", "<KEY>", "<KEY>", "<KEY>", "5b0eef35c3114bc0976883df9f226afa", "<KEY>", "<KEY>", "ced06c73b3e14c3f830521405e45a0ea", "9efd8ea9eed44b5cba5a6f4ea9759401", "ca5a4dab5ecd4adeacf0fefe68d9792f", "<KEY>", "<KEY>", "<KEY>", "b862e67fd9e046b09827027203d188c6"]} id="rzPg0WndO054" outputId="b26d77fc-b283-426d-ef11-4f5326f32c8b"
img_filename = '000000000078.jpg'
dataset.labeler.StartPyLaber(image=img_filename)
# + [markdown] id="S5xjtFwxRC1_"
# - Select class 'bird' in the above widget
# - Draw a box around the owl
# - Click **Save**
#
# When you click **Save**, the annotations for that image are updated. Run the cell below to verify that there are now 2 annotations for that image.
# + colab={"base_uri": "https://localhost:8080/", "height": 176} id="91a8PwdvN_cR" outputId="c0ea92d8-0d33-4954-8a67-ab16cc32457a"
dataset.df.loc[dataset.df['img_filename'] == img_filename]
# + colab={"base_uri": "https://localhost:8080/"} id="K_JvcxV6N_cR" outputId="864b22e9-320a-448a-fa10-92e589d20824"
#Export the annotations in Yolo format
dataset.path_to_annotations = 'training/labels/'
os.makedirs(dataset.path_to_annotations, exist_ok=True)
dataset.export.ExportToYoloV5()
#View the Yolo annotations for the above image
# !cat training/labels/000000000078.txt
| yolo2pylabeler.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R 4.0
# language: R
# name: ir40
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Dependencies" data-toc-modified-id="Dependencies-1">Dependencies</a></span></li><li><span><a href="#Functions" data-toc-modified-id="Functions-2">Functions</a></span></li><li><span><a href="#Paths" data-toc-modified-id="Paths-3">Paths</a></span></li><li><span><a href="#Main" data-toc-modified-id="Main-4">Main</a></span><ul class="toc-item"><li><span><a href="#CBTN" data-toc-modified-id="CBTN-4.1">CBTN</a></span></li><li><span><a href="#ICGC" data-toc-modified-id="ICGC-4.2">ICGC</a></span></li><li><span><a href="#TARGET" data-toc-modified-id="TARGET-4.3">TARGET</a></span></li><li><span><a href="#TCGA" data-toc-modified-id="TCGA-4.4">TCGA</a></span></li><li><span><a href="#Merge-all" data-toc-modified-id="Merge-all-4.5">Merge all</a></span></li></ul></li></ul></div>
# -
# # Dependencies
library(tidyr)
library(biomaRt)
# # Functions
matchexp_matrixfx <- function(exp_matrix, estimate_manifest, group, sample_type, matchingcol){
message("dimensions of exp_matrix: ", deparse(substitute(exp_matrix)))
print(dim(exp_matrix))
subset_estimate_manifest <- estimate_manifest[estimate_manifest$group == group,]
subset_estimate_manifest_sampletype <- subset_estimate_manifest[grepl(sample_type, subset_estimate_manifest$sample_type),]
message("dimensions of estimate_manifest_df for: ", group, " sample_type:", sample_type)
print(dim(subset_estimate_manifest_sampletype))
exp_matrix_matchingset <- exp_matrix[,colnames(exp_matrix) %in% subset_estimate_manifest_sampletype[[matchingcol]]]
message("dimensions of exp_matrix_matchingset for: ", deparse(substitute(exp_matrix)), " with matching column: ", matchingcol)
print(dim(exp_matrix_matchingset))
exp_matrix_matchingset <- cbind(exp_matrix[,1:2], exp_matrix_matchingset)
return(exp_matrix_matchingset)
}
# # Paths
manifestpath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Manifests/"
datapath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Data/"
plotpath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Plots/"
# # Main
# Subset to primary samples
# ## CBTN
# This file is post cleanup, post estimate runs, post primary subset, pre clustering. for expression_clustering
CBTTCtpm <- read.table(paste0(datapath, "exp_mat/CBTTC_tpm_matrix_dedup.txt"), header = T, sep = "\t", stringsAsFactors = F, check.names = F)
CBTTCtpm[1:10,1:10]
load(file = paste0(datapath, "/ESTIMATE/estimate_manifest_primary_clean.RData"))
table(estimate_manifest_primary_clean$group)
CBTTC_tpm_matrix_primary <- matchexp_matrixfx(CBTTCtpm, estimate_manifest_primary_clean,"CBTN", "Initial", "sample_id")
CBTTC_tpm_matrix_primary$ensembl_id <- gsub("[.].*", "", CBTTC_tpm_matrix_primary$ensembl_id)
CBTN_tpm_matrix_primary_dedup <- CBTTC_tpm_matrix_primary[!duplicated(CBTTC_tpm_matrix_primary$ensembl_id),]
dim(CBTTC_tpm_matrix_primary)
dim(CBTN_tpm_matrix_primary_dedup)
write.table(CBTN_tpm_matrix_primary_dedup,
file = paste0(datapath, "exp_mat/CBTTC_tpm_matrix_primary_dedup.txt"), sep = "\t", quote = T, row.names = F)
# ## ICGC
ICGC <- read.table(file = paste0(datapath,"exp_mat/ICGC.tpm.matrix.txt"),
sep = "\t", header = TRUE, stringsAsFactors = FALSE)
dim(ICGC)
ICGC[1:10,1:10]
colnames(ICGC)[1] <- "ensembl_id"
# Some cleanup for colnames
colnames(ICGC) <- gsub(".genes.results", "", colnames(ICGC))
colnames(ICGC) <- gsub("[.]", "-", colnames(ICGC))
dim(ICGC)
head(ICGC)
# Remove PAR_Y pseudoautosomal regions
ICGC <- ICGC[!grepl("PAR_Y", ICGC$ensembl_id),]
dim(ICGC)
ICGC$ensembl_id <- gsub("[.].*", "", ICGC$ensembl_id)
ICGC_dedup <- ICGC[ !duplicated(ICGC$ensembl_id),]
dim(ICGC_dedup)
ICGC_exp_matrix_primary <- matchexp_matrixfx(ICGC_dedup, estimate_manifest_primary_clean,"ICGC", "Primary", "sample_id")
save(ICGC_exp_matrix_primary,
file = paste0(datapath, "exp_mat/DKFZ_tpm_matrix_primary.RData"))
# ## TARGET
TARGET_tpm_matrix <- read.table(file = paste0(datapath,"exp_mat/NBL_tpm_matrix_hugo.txt"),
sep = "\t", header = TRUE, stringsAsFactors = FALSE, check.names = F)
dim(TARGET_tpm_matrix)
TARGET_tpm_matrix_primary <- matchexp_matrixfx(TARGET_tpm_matrix, estimate_manifest_primary_clean,
"TARGET", "Primary", "sample_id")
# Remove duplicated genes
TARGET_tpm_matrix_primary$ensembl <- gsub("[.].*", "", TARGET_tpm_matrix_primary$ensembl)
TARGET_tpm_matrix_primary_dedup <- TARGET_tpm_matrix_primary[!duplicated(TARGET_tpm_matrix_primary$ensembl),]
dim(TARGET_tpm_matrix_primary_dedup)
save(TARGET_tpm_matrix_primary_dedup,
file = paste0(datapath, "exp_mat/TARGET_tpm_matrix_primary_dedup.RData"))
# NBL matrix hgnc only
TARGET_tpm_matrix_primary_dedup_hgnc <- TARGET_tpm_matrix_primary[!duplicated(TARGET_tpm_matrix_primary$gene_symbol),]
rownames(TARGET_tpm_matrix_primary_dedup_hgnc) <- TARGET_tpm_matrix_primary_dedup_hgnc$gene_symbol
TARGET_tpm_matrix_primary_dedup_hgnc$gene_symbol <- NULL
TARGET_tpm_matrix_primary_dedup_hgnc$ensembl <- NULL
head(TARGET_tpm_matrix_primary_dedup_hgnc)
write.table(TARGET_tpm_matrix_primary_dedup_hgnc,
file = paste0(datapath, "exp_mat/tpm_matrix_ped_TARGET_NBL_HGNConly.txt"),
sep = "\t", quote = F, row.names = T)
# ## TCGA
TCGA_tpm_matrix <- read.table(file = paste0(datapath,"exp_mat/TCGA_tpm_matrix_hugo.txt"),
sep = "\t", header = TRUE, stringsAsFactors = FALSE, check.names = F)
head(TCGA_tpm_matrix)
TCGA_tpm_matrix_primary <- matchexp_matrixfx(TCGA_tpm_matrix, estimate_manifest_primary_clean,"TCGA", "Primary", "sample_id")
# Remove duplicated ensembl id (if any)
TCGA_tpm_matrix_primary$ensembl <- gsub("[.].*", "", TCGA_tpm_matrix_primary$ensembl)
TCGA_tpm_matrix_primary_dedup <- TCGA_tpm_matrix_primary[!duplicated(TCGA_tpm_matrix_primary$ensembl),]
dim(TCGA_tpm_matrix_primary_dedup)
save(TCGA_tpm_matrix_primary_dedup,
file = paste0(datapath, "exp_mat/TCGA_tpm_matrix_primary_dedup.RData"))
# ## Merge all
colnames(TARGET_tpm_matrix_primary_dedup)[1:2]
colnames(TCGA_tpm_matrix_primary_dedup)[1:2]
colnames(ICGC_exp_matrix_primary)[1:2]
colnames(CBTN_tpm_matrix_primary_dedup)[1:2]
colnames(TARGET_tpm_matrix_primary_dedup)[1] <- "ensembl_id"
colnames(TCGA_tpm_matrix_primary_dedup)[1] <- "ensembl_id"
# merge with ensembl ids
tpm_matrix_ped <- merge(CBTN_tpm_matrix_primary_dedup, TARGET_tpm_matrix_primary_dedup, by = "ensembl_id")
dim(CBTN_tpm_matrix_primary_dedup)
dim(TARGET_tpm_matrix_primary_dedup)
dim(tpm_matrix_ped)
tpm_matrix_ped <- merge(tpm_matrix_ped, ICGC_exp_matrix_primary, by = "ensembl_id")
dim(ICGC_exp_matrix_primary)
dim(tpm_matrix_ped)
head(tpm_matrix_ped)
dim(tpm_matrix_ped)
# +
rownames(tpm_matrix_ped) <- tpm_matrix_ped$ensembl_id
tpm_matrix_ped$gene_symbol <- NULL
tpm_matrix_ped$ensembl_id <- NULL
# -
head(tpm_matrix_ped)
# remove non-coding-RNA and pseudogenes
hg38 <- useMart(biomart="ENSEMBL_MART_ENSEMBL", host="www.ensembl.org",
path="/biomart/martservice", dataset="hsapiens_gene_ensembl")
ensembls <- rownames(tpm_matrix_ped)
ensembl_hgnc_type <- getBM(filters="ensembl_gene_id",
attributes=c("hgnc_symbol","ensembl_gene_id", "entrezgene_id", "gene_biotype"),
values= ensembls, mart=hg38)
table(ensembl_hgnc_type$gene_biotype, useNA = "always")
biotypes <- as.data.frame(table(ensembl_hgnc_type$gene_biotype), stringsAsFactors = F)$Var1
biotypes_pseudogenes <- biotypes[grepl("pseudo", biotypes)]
biotypes_RNA <- biotypes[grepl("RNA", biotypes)]
biotypes_RNA
biotypes_pseudogenes
rm_genes <- ensembl_hgnc_type$ensembl_gene_id[ensembl_hgnc_type$gene_biotype %in% c(biotypes_RNA,"TEC", biotypes_pseudogenes)]
dim(tpm_matrix_ped)
tpm_matrix_ped <- tpm_matrix_ped[!rownames(tpm_matrix_ped) %in% rm_genes,]
dim(tpm_matrix_ped)
tpm_matrix_ped_ensembl <- tpm_matrix_ped[, colnames(tpm_matrix_ped) != "hgnc_symbol"]
head(tpm_matrix_ped_ensembl)
save(tpm_matrix_ped_ensembl,
file = paste0(datapath, "exp_mat/tpm_matrix_ped_primary_dedup_ensembl.RData"))
rownames(tpm_matrix_ped) <- tpm_matrix_ped$hgnc_symbol
tpm_matrix_ped$hgnc_symbol <- NULL
head(tpm_matrix_ped)
dim(tpm_matrix_ped)
save(tpm_matrix_ped,
file = paste0(datapath, "exp_mat/tpm_matrix_ped_primary_dedup.RData"))
write.table(tpm_matrix_ped,
file = paste0(datapath, "exp_mat/tpm_matrix_ped_primary_dedup.txt"), sep = "\t", quote = F)
| notebooks/06_exp_matrix_4clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plotting U.S. marriage and divorce statistics
#
# Example code by [<NAME>](http://www.randalolson.com)
from bokeh.models import HoverTool, NumeralTickFormatter, SingleIntervalTicker, LinearAxis
from bokeh.plotting import figure, show, output_notebook, ColumnDataSource
from bokeh.sampledata.us_marriages_divorces import data
output_notebook()
# +
md_data = data.copy()
# Fill in missing data with a simple linear interpolation
md_data = md_data.interpolate(method='linear', axis=0).ffill().bfill()
# +
# Set up the data sources for the lines we'll be plotting.
# We need separate data sources for each line because we're
# displaying different data in the hover tool.
source_marriages = ColumnDataSource(
data=dict(
# x-axis (Years) for the chart
x=md_data.Year.values,
# y-axis (Marriages per capita) for the chart
y=md_data.Marriages_per_1000.values,
# The string version of the y-value that is displayed in the hover box
y_text=md_data.Marriages_per_1000.apply(lambda x: '{}'.format(round(x, 1))),
# Extra descriptive text that is displayed in the hover box
desc=['marriages per 1,000 people'] * len(md_data),
)
)
source_divorces = ColumnDataSource(
data=dict(
# x-axis (Years) for the chart
x=md_data.Year.values,
# y-axis (Divorces per capita) for the chart
y=md_data.Divorces_per_1000.values,
# The string version of the y-value that is displayed in the hover box
y_text=md_data.Divorces_per_1000.apply(lambda x: '{}'.format(round(x, 1))),
# Extra descriptive text that is displayed in the hover box
desc=['divorces and annulments per 1,000 people'] * len(md_data),
)
)
# +
# Use HTML to mark up the tooltip that displays over the chart
# Note that the variables in the data sources (above) are referenced with a @
hover = HoverTool(tooltips='<font face="Arial" size="3">@y_text @desc in @x</font>', mode='vline')
# Select the tools that will be available to the chart
# Note: the 'resize' tool was removed in newer Bokeh releases
TOOLS = ['pan,wheel_zoom,box_zoom,reset,save'] + [hover]
bplot = figure(tools=TOOLS, width=800, height=500, x_axis_type=None)
# Create a custom x-axis with 10-year intervals
ticker = SingleIntervalTicker(interval=10, num_minor_ticks=0)
xaxis = LinearAxis(ticker=ticker)
bplot.add_layout(xaxis, 'below')
# Customize the y-axis
bplot.yaxis.formatter = NumeralTickFormatter(format='0.0a')
bplot.yaxis.axis_label = '# per 1,000 people'
# Provide a descriptive title for the chart
bplot.title.text = '144 years of marriage and divorce in the U.S.'
# Finally, plot the data!
# Note that the data source determines what is plotted and what shows in the tooltips
bplot.line('x', 'y', color='#1f77b4', line_width=3, source=source_marriages)
bplot.line('x', 'y', color='#ff7f0e', line_width=3, source=source_divorces)
# -
show(bplot)
| examples/howto/us_marriages_divorces/us_marriages_divorces_interactive.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
male_words = ['he', 'him', 'himself', 'man', 'men', 'his']
female_words = ['she', 'her', 'herself', 'woman', 'women', 'hers']
male_prefix = ['mr']
female_prefix = ['miss', 'mis', 'mrs']
file = open('corpus.txt')
def process_sentence(sentence):
# convert to lower
line = sentence.lower()
# remove extra spaces
line = line.strip()
# create a space between word and the punctuation following it
line = re.sub(r"([?.!,¿'])", r" \1 ", line)
# replace everything with space except (a-z, A-Z, 0-9)
# that is remove all punctuations
line = re.sub(r"[^a-zA-Z0-9]+", " ", line)
# convert multiple spaces to a single space
line = re.sub(r'[" "]+', " ", line)
# remove extra spaces
line = line.strip()
# return the line
return line
words = []
lines = []
for line in file:
lines.append(process_sentence(line))
for line in lines:
for word in line.split(' '):
#if word not in words:
#words.add(word)
words.append(word)
len(words)
N = int(input())
names = []
for i in range(N):
names.append(input().lower())
name = 'turner'
len_to_check = 20
score = 0
for i, word in enumerate(words):
if word == name:
s = i - len_to_check if i > len_to_check else 0
e = i + len_to_check if i < len(words) - len_to_check else len(words)
words_to_check = words[s:e]
for w in words_to_check:
if w in male_prefix:
score += 100
if w in female_prefix:
score -= 100
if w in male_words:
score += 5
if w in female_words:
score -= 5
score
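# The window-scan above can be wrapped into a reusable function. This is a sketch using the same heuristic word lists and weights as the cells above; the word sets are disjoint, so the elif chain is equivalent to the separate ifs.

```python
MALE_WORDS = {'he', 'him', 'himself', 'man', 'men', 'his'}
FEMALE_WORDS = {'she', 'her', 'herself', 'woman', 'women', 'hers'}
MALE_PREFIX = {'mr'}
FEMALE_PREFIX = {'miss', 'mis', 'mrs'}

def gender_score(words, name, window=20):
    # positive score suggests a male context, negative a female one
    score = 0
    for i, word in enumerate(words):
        if word != name:
            continue
        lo = max(i - window, 0)
        hi = min(i + window, len(words))
        for w in words[lo:hi]:
            if w in MALE_PREFIX:
                score += 100
            elif w in FEMALE_PREFIX:
                score -= 100
            elif w in MALE_WORDS:
                score += 5
            elif w in FEMALE_WORDS:
                score -= 5
    return score
```

Clamping the slice bounds with `max`/`min` avoids the empty-slice problem near the start and end of the corpus.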
| Heroes and Heroines, Villains and Villainesses/Gender Prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="1KWL1G5sXwFr" colab_type="code" outputId="a69cba38-878f-4079-86ca-387aaf933b92" executionInfo={"status": "ok", "timestamp": 1572126427155, "user_tz": 180, "elapsed": 25866, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBHzrYFhikwGj5HS4HCH2B5iUmYoPpm1AFV6OcFBA=s64", "userId": "06256612867315483887"}} colab={"base_uri": "https://localhost:8080/", "height": 121}
from google.colab import drive
drive.mount('/content/drive')
# + id="gvw4b-lj2bNZ" colab_type="code" colab={}
# !cp -r '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Scripts/.' .
# + id="DDDjJ17CXziL" colab_type="code" colab={}
# !unzip -q '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/Data/test_images320x480.zip'
# + [markdown] colab_type="text" id="yxvzFnySHdqd"
# ### Dependencies
# + id="yCH6-k8q2dpu" colab_type="code" outputId="301c5fde-0153-408b-ba3d-a4960ddeeb4c" executionInfo={"status": "ok", "timestamp": 1572126453475, "user_tz": 180, "elapsed": 52165, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBHzrYFhikwGj5HS4HCH2B5iUmYoPpm1AFV6OcFBA=s64", "userId": "06256612867315483887"}} colab={"base_uri": "https://localhost:8080/", "height": 96}
from utillity_script_cloud_segmentation import *
# + id="nMR4SXlKASc-" colab_type="code" cellView="form" colab={}
#@title
from keras.optimizers import Optimizer
class AdamAccumulated(Optimizer):
"""Adam optimizer with gradient accumulation.
Default parameters follow those provided in the original paper.
# Arguments
accumulation_steps: int > 0. Update gradient in every accumulation steps.
lr: float >= 0. Learning rate.
beta_1: float, 0 < beta < 1. Generally close to 1.
beta_2: float, 0 < beta < 1. Generally close to 1.
epsilon: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
decay: float >= 0. Learning rate decay over each update.
amsgrad: boolean. Whether to apply the AMSGrad variant of this
algorithm from the paper "On the Convergence of Adam and Beyond".
# References
- [Adam - A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980v8)
- [On the Convergence of Adam and Beyond](https://openreview.net/forum?id=ryQu7f-RZ)
"""
def __init__(self, accumulation_steps, lr=0.001, beta_1=0.9, beta_2=0.999,
epsilon=None, decay=0., amsgrad=False, **kwargs):
super(AdamAccumulated, self).__init__(**kwargs)
with K.name_scope(self.__class__.__name__):
self.iterations = K.variable(0, dtype='int64', name='iterations')
self.accumulation_steps = K.variable(accumulation_steps, dtype='int64', name='accumulation_steps')
self.lr = K.variable(lr, name='lr')
self.beta_1 = K.variable(beta_1, name='beta_1')
self.beta_2 = K.variable(beta_2, name='beta_2')
self.decay = K.variable(decay, name='decay')
if epsilon is None:
epsilon = K.epsilon()
self.epsilon = epsilon
self.initial_decay = decay
self.amsgrad = amsgrad
def get_updates(self, loss, params):
grads = self.get_gradients(loss, params)
self.updates = [K.update_add(self.iterations, 1)]
update_cond = K.equal((self.iterations + 1) % self.accumulation_steps, 0)
sub_step = self.iterations % self.accumulation_steps + 1
t = K.cast(self.iterations // self.accumulation_steps, K.floatx()) + 1
lr = self.lr
if self.initial_decay > 0:
lr = lr * (1. / (1. + self.decay * K.cast(self.iterations, K.dtype(self.decay))))
lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) / (1. - K.pow(self.beta_1, t)))
lr_t = K.switch(update_cond, lr_t, 0.0)
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='m_' + str(i)) for (i, p) in enumerate(params)]
vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='v_' + str(i)) for (i, p) in enumerate(params)]
if self.amsgrad:
vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p), name='vhat_' + str(i)) for (i, p) in enumerate(params)]
else:
vhats = [K.zeros(1, name='vhat_' + str(i)) for i in range(len(params))]
self.weights = [self.iterations] + ms + vs + vhats
acc_grads = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
for grad, acc_grad in zip(grads, acc_grads):
ave_grad = grad / K.cast(self.accumulation_steps, K.floatx())
self.updates.append(K.update(
acc_grad,
K.switch(
K.equal(sub_step, 1),
ave_grad,
acc_grad + (ave_grad - acc_grad) / K.cast(sub_step, K.floatx())
),
))
grads = [K.switch(update_cond, grad, K.zeros_like(grad)) for grad in acc_grads]
for p, g, m, v, vhat in zip(params, grads, ms, vs, vhats):
m_t = K.switch(update_cond, (self.beta_1 * m) + (1. - self.beta_1) * g, m)
v_t = K.switch(update_cond, (self.beta_2 * v) + (1. - self.beta_2) * K.square(g), v)
if self.amsgrad:
vhat_t = K.switch(update_cond, K.maximum(vhat, v_t), vhat)
p_t = p - lr_t * m_t / (K.sqrt(vhat_t) + self.epsilon)
self.updates.append(K.update(vhat, vhat_t))
else:
p_t = p - lr_t * m_t / (K.sqrt(v_t) + self.epsilon)
self.updates.append(K.update(m, m_t))
self.updates.append(K.update(v, v_t))
new_p = p_t
if getattr(p, 'constraint', None) is not None:
new_p = p.constraint(new_p)
self.updates.append(K.update(p, new_p))
return self.updates
def get_config(self):
config = {'accumulation_steps': int(K.get_value(self.accumulation_steps)),
'lr': float(K.get_value(self.lr)),
'beta_1': float(K.get_value(self.beta_1)),
'beta_2': float(K.get_value(self.beta_2)),
'decay': float(K.get_value(self.decay)),
'epsilon': self.epsilon,
'amsgrad': self.amsgrad}
base_config = super(AdamAccumulated, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
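# The core idea of the optimizer above, gradient accumulation, can be sketched framework-free: sum gradients over k micro-batches and apply one update with their mean, emulating a k-times-larger batch. This is a simplified SGD sketch, not the Adam logic above.

```python
import numpy as np

def sgd_accumulated(grad_fn, w, micro_batches, k, lr):
    """One SGD update per k micro-batches, using the mean gradient."""
    acc = np.zeros_like(w)
    for step, batch in enumerate(micro_batches, start=1):
        acc = acc + grad_fn(w, batch)   # accumulate micro-batch gradients
        if step % k == 0:               # update only every k-th step
            w = w - lr * acc / k        # mean gradient over the k batches
            acc = np.zeros_like(w)
    return w
```

With k=1 this reduces to plain SGD; a larger k trades update frequency for a larger effective batch size at fixed memory.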
# + _kg_hide-output=true colab_type="code" id="ayE1DJg0fRzl" colab={}
seed = 0
seed_everything(seed)
warnings.filterwarnings("ignore")
# + id="C40wYEyOYgu6" colab_type="code" colab={}
base_path = '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/'
data_path = base_path + 'Data/'
classification_model_base_path = base_path + 'Models/files/classification/'
classification_model_path = classification_model_base_path + '12-Xception_299x299_acc_16.h5'
submission_base_path = data_path + 'submissions/inference/'
test_path = data_path + 'sample_submission.csv'
test_images_path = 'test_images/'
# + [markdown] colab_type="text" id="6SnKKLczHdqn"
# ### Load data
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _kg_hide-input=true _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" colab_type="code" id="pH6kKJKoHdqo" outputId="7ef88320-c8e9-486d-b05e-cfb58e806772" executionInfo={"status": "ok", "timestamp": 1572126454131, "user_tz": 180, "elapsed": 52796, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBHzrYFhikwGj5HS4HCH2B5iUmYoPpm1AFV6OcFBA=s64", "userId": "06256612867315483887"}} colab={"base_uri": "https://localhost:8080/", "height": 212}
submission = pd.read_csv(test_path)
print('Test samples:', len(submission))
# Preprocess data
submission['image'] = submission['Image_Label'].apply(lambda x: x.split('_')[0])
test = pd.DataFrame(submission['image'].unique(), columns=['image'])
display(test.head())
# + [markdown] colab_type="text" id="xyPzEBHGHdqr"
# # Model parameters
# + colab_type="code" id="BBNl0qkSHdqs" colab={}
HEIGHT = 299
WIDTH = 299
CHANNELS = 3
N_CLASSES = 4
label_columns=['Fish', 'Flower', 'Gravel', 'Sugar']
best_tresholds_class = [0.86, 0.90, 0.72, 0.60]
model_name = '27-[seg]-[5-fold]42-unet_densenet169_384x480[class]12-Xception_299x299_acc_16_beta015'
submission_path = submission_base_path + '%s_submission.csv' % (model_name)
# + [markdown] colab_type="text" id="Idm7ex1GHdq_"
# # Model
# + colab_type="code" id="b-TF9Qn3dVBl" colab={}
classification_model = load_model(classification_model_path, custom_objects={'AdamAccumulated':AdamAccumulated})
# + [markdown] colab_type="text" id="kEiavXfkxAzF"
# ### Classification data generator
# + _kg_hide-input=true colab_type="code" id="NYZa5zzHxChz" cellView="both" outputId="33f503fa-512e-449d-ca25-6df614a0d8aa" executionInfo={"status": "ok", "timestamp": 1572126484680, "user_tz": 180, "elapsed": 83328, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBHzrYFhikwGj5HS4HCH2B5iUmYoPpm1AFV6OcFBA=s64", "userId": "06256612867315483887"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
test_datagen=ImageDataGenerator(rescale=1./255.)
classification_test_generator=test_datagen.flow_from_dataframe(
dataframe=test,
directory=test_images_path,
x_col="image",
target_size=(HEIGHT, WIDTH),
class_mode=None,
batch_size=1,
shuffle=False,
seed=seed)
# + [markdown] id="wvsgj0sFrU2i" colab_type="text"
# # Load predictions
# + id="Q1nwxgGbrWvn" colab_type="code" outputId="4c7ab3e2-f30a-4e6b-bcf5-a67e62ca586e" executionInfo={"status": "ok", "timestamp": 1572126485874, "user_tz": 180, "elapsed": 84513, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBHzrYFhikwGj5HS4HCH2B5iUmYoPpm1AFV6OcFBA=s64", "userId": "06256612867315483887"}} colab={"base_uri": "https://localhost:8080/", "height": 195}
prev_submission_path = data_path + 'submissions/inference/14-[seg]-[5-fold]42-unet_densenet169_384x480[class]8-resnet50_224x224_submission_post.csv'
X_test = pd.read_csv(prev_submission_path)
X_test.head()
# + [markdown] id="zors0R3O_V-v" colab_type="text"
# # Apply classification model to test set
# + id="lGu30wrM_Xxm" colab_type="code" colab={}
test_class_preds = classification_model.predict_generator(classification_test_generator)
for index in range(len(label_columns)):
test_class_preds[:,index] = (test_class_preds[:,index] > best_tresholds_class[index]).astype(int)
X_test['empty_mask'] = test_class_preds.reshape(test_class_preds.shape[0]*N_CLASSES)
X_test['EncodedPixels_pred'] = X_test.apply(lambda row: row['EncodedPixels'] if row['empty_mask'] == 0 else np.nan, axis=1)
# + [markdown] id="O4xhyn1pcnrs" colab_type="text"
# ### Number of masks removed
# + id="DJKHVGMvcjti" colab_type="code" outputId="d92cd2c4-c893-48c2-f016-8cfff70eccd5" executionInfo={"status": "ok", "timestamp": 1572126622086, "user_tz": 180, "elapsed": 220714, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBHzrYFhikwGj5HS4HCH2B5iUmYoPpm1AFV6OcFBA=s64", "userId": "06256612867315483887"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
print('Masks removed: %s' % len(X_test[(~X_test['EncodedPixels'].isnull()) & (X_test['empty_mask'] == 1)]))
# + [markdown] id="gc7F9fRkwOup" colab_type="text"
# ### Submission with mask classification
# + id="ybZkgbhtwRs4" colab_type="code" outputId="f19d46ff-b848-4c3c-9091-bdcf4e5fb68e" executionInfo={"status": "ok", "timestamp": 1572126622542, "user_tz": 180, "elapsed": 221164, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBHzrYFhikwGj5HS4HCH2B5iUmYoPpm1AFV6OcFBA=s64", "userId": "06256612867315483887"}} colab={"base_uri": "https://localhost:8080/", "height": 195}
submission_df = X_test[['Image_Label' ,'EncodedPixels_pred']]
submission_df.columns = ['Image_Label' ,'EncodedPixels']
submission_df.to_csv(submission_path, index=False)
display(submission_df.head())
| Model backlog/Inference/Google Colab/27-[seg]-[5-fold]42-unet_densenet169_384x480[class]12-Xception_299x299_acc_16_beta015.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="wMXwIe4pbtro"
from nltk.chat.util import Chat, reflections
#Pairs is a list of patterns and responses.
pairs = [
[
r"(.*)my name is (.*)",
["\nHello %2, How are you today ?\n",]
],
[
r"(.*)help(.*)",
["\nYes! I can help you. I recognise keywords. I need your documents\n",]
],
[
r"(.*) your name ?",
["\nMy name is Disease.Assistant.Bot, but you can just call me DIB and I'm a helpful chatbot.\n",]
],
[
r"how are you (.*) ?",
["\nI'm doing very well\n", "\ni am great !\n","\nTroublesome day at work! but enjoying myself\n",]
],
[
r"sorry (.*)",
["\nIts alright\n","\nIts OK, never mind that\n",]
],
[
r"i'm (.*) (good|well|okay|ok)",
["\nNice to hear that","Alright, great !\n",]
],
[
r"(hi|hey|hello|hola|holla)(.*)",
["\nHello\n", "\nHey there\n",]
],
[
r"what (.*) want ?",
["\nI want your reports and your user ID, so I can tell you the best course of action\n",]
],
[
r"(.*)created(.*)",
["\n<NAME> created me using Python's NLTK library ","top secret ;)\n",]
],
[
r"(.*)my location(.*)",
["\nSituation at your location is good, you're safe! Provided you haven't visited a RED ZONE.\n",]
],
[
r"(.*) good health",
["\nChecking your details, I can see you have been keeping safe. You are a responsible citizen\n ",]
],
[
r"(.*) bad health",
["\nChecking your details, I can see you have been in risk infested areas. Please book a medical appointment asap. \n",]
],
[
r"(.*)doctors(.*)",
["\nIn your locality I can see three doctors who have slots open. \n\nDr.<NAME> \nDr.<NAME> \nDr.<NAME>\n",]
],
[
r"book (.*) slot",
["\nPlease tell me your email ID\n"]
],
[
r"(.*)@(.*)",
["\nA link will be sent to you on the given mail, to book an appointment.\n"]
],
[
r"quit",
["\nBye for now. See you soon :) \n","\nIt was nice talking to you. See you soon :)\n"]
],
[
r"thank you",
['You are welcome :)']
],
[
r"(.*)",
['\nThat is nice to hear\n']
],
]
#Create Chat Bot
chat = Chat(pairs, reflections)
# + colab={"base_uri": "https://localhost:8080/"} id="G4BsvcLEe18n" outputId="0f0140a3-0e34-4ebc-e60e-bf54e00b5be0"
#default message at the start of chat
print("Hi, I'm DIB and I like to chat\nPlease type lowercase English language to start a conversation. Type quit to leave ")
#Start conversation
chat.converse()
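# Under the hood, Chat scans the pairs in order, takes the first pattern that matches, and substitutes regex groups for %N placeholders in the chosen response. A minimal re-only sketch of that behaviour (ignoring nltk's pronoun reflections and random response choice):

```python
import re

def respond(pairs, message):
    # return the first matching response, with %N filled from regex groups
    for pattern, responses in pairs:
        match = re.match(pattern, message, re.IGNORECASE)
        if match:
            response = responses[0]
            for i, group in enumerate(match.groups(), start=1):
                response = response.replace('%' + str(i), group or '')
            return response
    return None
```

Because matching is first-win, the catch-all `r"(.*)"` pair must stay last in the list, as it is above.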
| ChatBot/Conversation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from notebook.services.config import ConfigManager
cm = ConfigManager()
cm.update('livereveal', {
'width': 1024,
'height': 768,
'scroll': True,
})
import pandas as pd
import pylab as plt
import pystan
import seaborn as sns
import numpy as np
# %matplotlib inline
import warnings
warnings.simplefilter('ignore')
# + [markdown] slideshow={"slide_type": "slide"}
# <!-- .element height="80%" width="80%" -->
#
# <http://www.DataJavelin.com>
# ## Dr <NAME>
# + [markdown] slideshow={"slide_type": "subslide"}
# # Bayesian Data Analysis Workflow with Stan
#
# A good workflow for developing and testing models is essential!
#
# 1. Model Building
# 2. Model Inference
# 3. Model Checking
# 4. Model Improvement
#
# For more detailed information see [Bayesian Data Analysis](http://www.stat.columbia.edu/~gelman/book/) book by <NAME> and co-authors and [<NAME>'s case study](https://betanalpha.github.io/assets/case_studies/principled_bayesian_workflow.html#1_bayesian_modeling_and_inference)
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's go through the steps with an example problem
#
# ## Golf putting
# An example from [<NAME>'s blog](https://statmodeling.stat.columbia.edu/2019/03/21/new-golf-putting-data-and-a-new-golf-putting-model/)
# + slideshow={"slide_type": "fragment"}
data=pd.read_csv('orig_golf_data.txt',sep=r'\s+')
data[0:5]
# + slideshow={"slide_type": "subslide"}
p=data['y']/data['n']
error=np.sqrt(p*(1-p)/data['n'])
plt.errorbar(data['x'],data['y']/data['n'],yerr=error,fmt='o')
plt.xlabel('Distance to hole (feet)');
plt.ylabel('Probability of Success' );
# + [markdown] slideshow={"slide_type": "fragment"}
# With error bars taken as simple classical standard deviations $\sqrt{\hat{p}_j(1-\hat{p}_j)/n_j}$ where $\hat{p}_j=y_j/n_j$ success rate for putts taken at distance $x_j$.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Model Building
#
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Build generative model
# 1. construct a probabilistic generative model of the observation process.
# 2. the generative observation model can be a crude approximation to the complexity of the true measurement process
#
# + [markdown] slideshow={"slide_type": "notes"}
# An ideal generative model will use mathematical functions to describe how an observation is produced based on a given model configuration. It is described as generative as it models how the data was generated (i.e. joint distribution)
#
# Simple approximations are often good enough to answer even sophisticated questions. We describe the model as generative as we are trying to build a model that replicates how the data was generated.
#
# Often it is helpful to visualise the model via a probabilistic graphical model. These visualisations are a good way of understanding and showing how different variables relate and depend on each other.
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Our second model, from first principles
# We want to model the probability of success in golf putting as a function of distance from the hole. What is it about the distance that makes it harder to make a putt?
#
# -
# 
#
# + slideshow={"slide_type": "fragment"}
import daft
pgm = daft.PGM(shape=(5,3),observed_style="inner",dpi=150)
# Hierarchical parameters.
pgm.add_node("sigma", r"$\sigma$", 0.5, 2)
pgm.add_node("r", r"$r$", 1.5, 2,fixed=True)
pgm.add_node("R", r"$R$", 2.5, 2,fixed=True)
# Latent variable.
pgm.add_node("x", r"$x_j$", 1, 0.9,fixed=True)
# Data.
pgm.add_node("y", r"$y_j$", 2, 1, observed=True)
pgm.add_node("n", r"$n_j$", 3, 0.9,fixed=True)
pgm.add_edge('sigma','y')
pgm.add_edge('r','y')
pgm.add_edge('R','y')
pgm.add_edge('x','y')
pgm.add_edge('n','y')
pgm.add_plate([0.5, 0.5, 3, 1], label=r"$j = 1, \ldots, J$", shift=-0.1)
# Render and save.
pgm.render()
pgm.show()
# -
# $$\mbox{Pr}\left(|\mbox{angle}| < \sin^{-1}((R-r)/x)\right) = 2\Phi\left(\frac{\sin^{-1}((R-r)/x)}{\sigma}\right) - 1$$
#
# $\Phi$ is the cumulative normal distribution. Since the angle can be $+$ or $-$, $\mbox{Pr}\left(|\mbox{angle}| < \sin^{-1}((R-r)/x)\right) = 2\,\mbox{Pr}\left(0 < \mbox{angle} < \sin^{-1}((R-r)/x)\right) = 2\left(\Phi\left(\sin^{-1}((R-r)/x)/\sigma\right) - \tfrac{1}{2}\right)$, which gives the expression above.
#
#
from scipy.stats import norm
rv = norm()
x=np.arange(-4,5,0.1)
plt.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf')
plt.fill_between(np.arange(0,2,0.1),rv.pdf(np.arange(0,2,0.1)),alpha=0.5)
plt.xlabel('Angle')
plt.text(2, 0.1, r'$\sin^{-1}((R-r)/x))$', fontsize=12);
r=(1.68/2)/12
R=(4.25/2)/12
def success_curve(sigma,x):
return 2*rv.cdf(np.arcsin(((R-r)/x))/sigma)-1
# (the golf ball and hole have diameters 1.68 and 4.25 inches, respectively)
x=np.arange(0.0,20,0.5)
for sigma_angle in [0.5,2,10,20]:
sigma=sigma_angle*np.pi/180.0
plt.plot(x,success_curve(sigma,x),label=r'$\sigma={}^\circ$'.format(sigma_angle))
plt.xlabel('Distance to hole (feet)');
plt.ylabel('Probability of Success' );
plt.legend()
# + [markdown] slideshow={"slide_type": "subslide"}
# It is always good to think about the problem and use informative priors where possible.
#
# Thinking about our model,
# Priors:
# * $\sigma$ can't be lower than 0 or higher than $\pi/2$
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Writing the model in Stan
# * A Stan program is organized into a sequence of named blocks
#
# + slideshow={"slide_type": "fragment"}
example_model="""functions {
// ... function declarations and definitions ...
}
data {
// ... declarations ...
}
transformed data {
// ... declarations ... statements ...
}
parameters {
// ... declarations ...
}
transformed parameters {
// ... declarations ... statements ...}
model {
// ... declarations ... statements ...
}
generated quantities {
// ... declarations ... statements ...
}
"""
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Other things to note:
# * Add comments with `//`
# * Sampling statements with `~`
# * Can print output with `print()`
# * `;` at end of each line
#
#
# Documentation for Stan: https://mc-stan.org/docs/2_19/stan-users-guide/index.html
#
# + slideshow={"slide_type": "subslide"}
golf_model_2="""
data {
int J;
int n[J];
vector[J] x;
int y[J];
real r;
real R;
int fit;//boolean for fitting
}
transformed data {
vector[J] threshold_angle = asin((R-r) ./ x);
}
parameters {
real<lower=0.0> sigma;
}
model {
vector[J] p = 2*Phi(threshold_angle / sigma) - 1;
sigma ~ exponential(2.0);
if (fit>0){
y ~ binomial(n, p);
}
}
generated quantities {
real sigma_degrees = sigma * 180 / pi();
int y_pred[J];
for (i in 1:J){
y_pred[i]=binomial_rng(n[i],2*Phi(threshold_angle[i] / sigma) - 1);
}
}
"""
# + slideshow={"slide_type": "subslide"}
sm=pystan.StanModel(model_code=golf_model_2)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Analyse the Generative Ensemble
# * Analyse a range of samples from generative model
# * Check outputs are realistic
# * Good way of checking model (especially priors) is behaving appropriately and how you expect
# + [markdown] slideshow={"slide_type": "notes"}
# Before drawing inferences from a real observation, we first want to analyse a range of samples from our generative model to check that the resulting putting success relation is realistic. This is a good way of checking that the prior distribution is sensible and the model is behaving appropriately.
#
# To do that we first simulate parameters and observations from the complete generative model. We do this with the Python interface to Stan, PyStan.
#
# Simulating from the generative model lets us see how our model and prior choices affect the trend, whether the results are realistic, and whether we need to go back and rethink the priors. It also gives us simulated data for which we know the true parameter values. We can then run the same inference procedures we would use on real data and test whether we accurately recover those true values, which gives us confidence in the parameter estimates we later obtain from real data.
# + slideshow={"slide_type": "fragment"}
model_data={
'J':len(data),
'n':data['n'],
'x':data['x'],
'y':data['y'],
'r':r,
'R':R,
'fit':0}
# + slideshow={"slide_type": "subslide"}
fit=sm.sampling(data=model_data,chains=4,iter=1000,seed=194838)
# + slideshow={"slide_type": "fragment"}
fit
# + [markdown] slideshow={"slide_type": "notes"}
# * $\hat{R}$ compares variation within and between chains. You want $\hat{R} < 1.1$
# * The amount by which autocorrelation within the chains increases uncertainty in estimates can be measured by the effective sample size, $n_{eff}$. Typical MCMC has a low $n_{eff}$ and requires thinning, since keeping all samples would be too memory intensive. Because Stan is efficient, there is no need to throw samples away. If $n_{eff} / N < 0.001$, there is a problem with the model.
# + slideshow={"slide_type": "subslide"}
pystan.diagnostics.check_hmc_diagnostics(fit,verbose=3)
# + [markdown] slideshow={"slide_type": "subslide"}
# *Divergent transitions*: Critical warning. The step size is too large. Try fixing by increasing `adapt_delta`, e.g. `fit=sm.sampling(data=data,control=dict(adapt_delta=0.9))`
#
# *Maximum Tree depth*: Not as critical. A detail specific to the NUTS algorithm. Fix by increasing the tree depth, e.g. `fit=sm.sampling(data=data,control=dict(max_treedepth=15))`
#
# *BFMI low*: Bayesian Fraction of Missing Information. The adaptation phase of the Markov chains did not turn out well and those chains likely did not explore the posterior distribution efficiently. You can try running for more iterations, but you probably need to re-parameterise the model.
#
#
# Details on diagnostics are [here](https://mc-stan.org/misc/warnings.html). Good explanation for divergences can also be found [here](https://dev.to/martinmodrak/taming-divergences-in-stan-models-5762)
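# The within- versus between-chain comparison behind $\hat{R}$ can be sketched directly in NumPy. This is the basic Gelman-Rubin statistic, without the split-chain refinement that modern Stan uses:

```python
import numpy as np

def rhat(chains):
    """Basic potential scale reduction factor.

    chains: array of shape (n_chains, n_samples) of draws for one parameter.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    var_plus = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_plus / W)
```

Well-mixed chains give values near 1; chains stuck in different regions of the posterior push the statistic well above the 1.1 rule of thumb.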
# + [markdown] slideshow={"slide_type": "subslide"}
# Prior distribution on parameters:
# + slideshow={"slide_type": "fragment"}
plt.hist(fit['sigma_degrees'])
plt.xlabel(r'$\sigma (^\circ)$')
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's look at the prior predictive distribution
# + slideshow={"slide_type": "fragment"}
for i in range(0,1000,100):
plt.plot(data['x'],fit['y_pred'][i,:]/data['n'],'b',alpha=0.2)
plt.plot(data['x'],data['y']/data['n'],'r',label='data')
plt.xlabel('Distance to hole (feet)');
plt.ylabel('Probability of Success' );
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Fit the Simulated Observations and Evaluate
# * Test ability to fit model (testing fitting algorithms)
# * Fit samples from the prior predictive distribution
# * Compare the posterior probability distribution with the truth
#
# + [markdown] slideshow={"slide_type": "notes"}
# We have generated a sample of simulated observations from our generative model. To test our ability to draw inferences with the model when using real data, we can attempt to fit each of these simulated observations and construct a posterior distribution on the parameters of interest. The advantage of fitting simulated results is that we know the truth, so we can compare the posterior probability distributions coming from the inference with the true values. Let's start by fitting one sample from our simulated observations.
# + slideshow={"slide_type": "subslide"}
s=8
plt.plot(data['x'],fit['y_pred'][s,:]/data['n'],'bo',alpha=0.2,label='Predicted data')
plt.plot(data['x'],success_curve(fit['sigma'][s],data['x']),'r',label='Model, $\sigma={:3.1f}^\circ$'.format(fit['sigma_degrees'][s]))
plt.xlabel('Distance to hole (feet)');
plt.ylabel('Probability of Success' );
plt.legend();
# + slideshow={"slide_type": "subslide"}
data_samp={
'J':len(data),
'n':data['n'],
'x':data['x'],
'y':fit['y_pred'][s,:].astype('int'),
'r':r,
'R':R,
'fit':1}
# + slideshow={"slide_type": "fragment"}
fit_samp=sm.sampling(data=data_samp,chains=4,iter=1000,seed=10)
# + slideshow={"slide_type": "subslide"}
fit_samp
# -
pystan.diagnostics.check_hmc_diagnostics(fit_samp)
# +
plt.hist(fit_samp['sigma_degrees'])
plt.axvline(fit['sigma_degrees'][s],color='r')  # mark the true value used to simulate this sample
plt.xlabel(r'$\sigma (^\circ)$')
# + [markdown] slideshow={"slide_type": "notes"}
# When analysing posterior probability distributions for model parameters, it is good practice to do so alongside the prior distribution. This allows us to visualise whether we are gaining much information from the data beyond our prior knowledge.
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's plot replicated data from our model fit and compare it to the fitted data
# + slideshow={"slide_type": "fragment"}
plt.figure(figsize=(15,7.5))
plt.violinplot(fit_samp['y_pred']/data['n'].values,positions=data['x'],showextrema=False);
plt.plot(data['x'],fit['y_pred'][s,:]/data['n'],'o')
plt.xlabel('Distance to hole (feet)');
plt.ylabel('Probability of Success' );
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Evaluate Prior-Posterior Consistency
#
# We have fitted one sample. To be confident our chosen fitting algorithm works for all plausible parameter values (i.e. across the prior), we should fit *many* samples
# + [markdown] slideshow={"slide_type": "notes"}
# To check that the model and our inference technique (i.e. the algorithm used to fit the data) are performing appropriately, we can carry out simulation-based calibration. This involves fitting each prior predictive sample as if it were data; pooled across fits, the posterior distributions should then look like the prior. If everything is working, the rank-statistic plots below for each parameter should be uniform.
#
# For details, see [Talts et al. 2018](https://arxiv.org/pdf/1804.06788) and [Michael Betancourt's case study](https://betanalpha.github.io/assets/case_studies/principled_bayesian_workflow.html#22_computational_faithfulness)
#
# +
samples=[]
n_samp=100
sigma_cal=np.empty(n_samp)
for s in range(0,n_samp):
#set data to one of the samples
data_samp={
'J':len(data),
'n':data['n'],
'x':data['x'],
'y':fit['y_pred'][s,:].astype('int'),
'r':r,
'R':R,
'fit':1}
#fit the data
fit_tmp=sm.sampling(data=data_samp,chains=4,iter=1000,seed=10,verbose=False)
#append samples to list
samples.append(pd.DataFrame(fit_tmp['sigma_degrees'],columns=['sigma_degrees']))
#carry out calibration statistic
sigma_cal[s]=np.sum(fit_tmp['sigma_degrees']<fit['sigma_degrees'][s])
samples=pd.concat(samples)
# -
plt.figure(figsize=(10,10))
plt.subplot(2,1,1)
plt.hist(samples['sigma_degrees'],density=True,alpha=0.5,label='Posterior');
plt.hist(fit['sigma_degrees'],density=True,alpha=0.5,label='Prior');
plt.xlabel(r'$\sigma ^\circ$');
plt.legend()
plt.subplot(2,1,2)
plt.hist(sigma_cal)
plt.xlabel('Rank Statistic')
plt.subplots_adjust(hspace=0.5)
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Analyse Posterior Behaviours
#
# **Z score**
# $$z=|(\mu_{post}-\theta_{true})/\sigma_{post}|$$
#
# quantifies how accurately the posterior recovers the ground truth and whether there is any bias. Values close to zero indicate more accurate, less biased posteriors.
#
# **Posterior Shrinkage**
# $$s=1-\sigma^2_{post}/\sigma^2_{prior}$$
#
# quantifies how much the posterior learns from a given observation. Values close to zero indicate a posterior dominated by the prior; values close to one indicate one dominated by the data.
# + [markdown] slideshow={"slide_type": "notes"}
# Assuming that we are accurately recovering posteriors across all of the simulated observations, we can proceed to analyse the range of behaviours in these posteriors. For example, the posterior z-score of a given parameter,
#
# $$z=|(\mu_{post}-\theta_{true})/\sigma_{post}|$$
#
# quantifies how accurately the posterior recovers the ground truth and whether there is any bias. Values close to zero indicate more accurate, less biased posteriors.
#
# At the same time the posterior shrinkage,
#
# $$s=1-\sigma^2_{post}/\sigma^2_{prior}$$
#
# quantifies how much the posterior learns from a given observation. Our visualisation of the posterior and prior for $\sigma$ had already indicated that the inference had given us information on the parameter. Shrinkage allows us to quantify this. A value near zero indicates that the data provide little information beyond that encoded in the prior distribution, while shrinkage near one indicates highly informative observations.
#
# + [markdown] slideshow={"slide_type": "subslide"}
# <img src="./assets/sensitivity.png" alt="Drawing" style="width: 600px;"/>
# + slideshow={"slide_type": "subslide"}
def zscore(posterior, truth):
return np.abs((np.mean(posterior)-truth)/np.std(posterior))
def shrinkage(posterior,prior):
return 1-(np.var(posterior)/np.var(prior))
n_post_samps=int(len(samples)/n_samp)
z_score_array=np.empty((1,n_samp))
shrinkage_array=np.empty((1,n_samp))
for i in range(0,n_samp):
z_score_array[0,i]=zscore(samples['sigma_degrees'][i*n_post_samps:(i+1)*n_post_samps],fit['sigma_degrees'][i])
shrinkage_array[0,i]=shrinkage(samples['sigma_degrees'][i*n_post_samps:(i+1)*n_post_samps],fit['sigma_degrees'])
# + slideshow={"slide_type": "subslide"}
g=sns.PairGrid(pd.DataFrame(np.vstack((shrinkage_array[0,:],z_score_array[0,:])).T,columns=['Shrinkage','Zscore']))
g.map_diag(plt.hist,color='blue',alpha=0.5,bins=np.arange(0,5,0.1))
g.map_lower(plt.scatter,color='blue',alpha=0.5)
g.axes[1,0].set_xlim(0,1.2)
g.axes[0,1].set_axis_off()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Model Inference
# Having satisfied ourselves that the model is behaving as we expect, let's fit it to the observational data
# + slideshow={"slide_type": "subslide"}
model_data={
'J':len(data),
'n':data['n'],
'x':data['x'],
'y':data['y'],
'r':r,
'R':R,
'fit':1}
# + slideshow={"slide_type": "fragment"}
fit_obs=sm.sampling(data=model_data,chains=4,iter=1000,seed=194838)
# + slideshow={"slide_type": "subslide"}
fit_obs
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Diagnostic tests
# + slideshow={"slide_type": "fragment"}
pystan.diagnostics.check_hmc_diagnostics(fit_obs,verbose=3)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Model Checking
# + slideshow={"slide_type": "subslide"}
plt.hist(fit_obs['sigma_degrees'])
plt.xlabel(r'$\sigma (^\circ)$')
# + slideshow={"slide_type": "subslide"}
plt.figure(figsize=(15,7.5))
plt.violinplot(fit_obs['y_pred']/data['n'].values,positions=data['x'],showextrema=False);
plt.plot(data['x'],data['y']/data['n'],'o')
plt.xlabel('Distance to hole (feet)');
plt.ylabel('Probability of Success' );
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Posterior Predictive Checks
# + [markdown] slideshow={"slide_type": "notes"}
# When examining goodness of fit, the typical method is to look at the residuals, i.e. $\frac{data - model}{\sigma}$. Because we have a distribution of $y^{rep}$, we can do this in a more probabilistic way using posterior predictive checks. For more information on posterior predictive checks, [Gelman et al. 1996](http://www.stat.columbia.edu/~gelman/research/published/A6n41.pdf) is a good starting point.
# + slideshow={"slide_type": "subslide"}
import seaborn as sns
import matplotlib as mpl
sns.set_style("white")
fig=plt.figure(figsize=(10,5))
# This is the colormap I'd like to use.
cm = sns.diverging_palette(220, 20, as_cmap=True)
# Get the histogram (the old `normed` keyword has been removed from numpy; `density` is its replacement)
Y,X = np.histogram(fit_obs['y_pred'][:,0]/data['n'].values[0], 25, density=True)
#C = [cm(((x-X.min())/x_span)) for x in X]
C = [cm(((((x-np.mean(fit_obs['y_pred'][:,0]/data['n'].values[0]))/np.std(fit_obs['y_pred'][:,0]/data['n'].values[0]))+6)/12.0)) for x in X]
plt.bar(X[:-1],Y,color=C,width=X[1]-X[0])
plt.xlabel('Prob. of success at distance '+str(data['x'].values[0]))
plt.axvline(0.94, linestyle='--')
plt.axvline(0.9675,linestyle=':')
plt.annotate('success rate higher than \n the model can explain',xy=(0.9675, 20), xycoords='data',
xytext=(0.9675, 50), textcoords='data',rotation='vertical',size='large')
plt.annotate('success rate in model too high\n compared to data',xy=(0.94, 20), xycoords='data',
xytext=(0.94, 50), textcoords='data',rotation='vertical',size='large')
#ax1 = fig.add_axes([0.05, 0.80, 0.9, 0.15])
ax1 = fig.add_axes([0.94, 0.15, 0.02, 0.7])
norm = mpl.colors.Normalize(vmin=-6, vmax=6)
cb1 = mpl.colorbar.ColorbarBase(ax1, cmap=cm,
norm=norm,
orientation='vertical')
cb1.set_label('$\sigma$')
# + [markdown] slideshow={"slide_type": "subslide"}
# We can calculate the fraction of $y^{rep}$ samples above and below the real success value. This is often referred to as the Bayesian p-value and tells us the probability of drawing the real success measurement from our model, which has been inferred from the data. This tells us whether the model is inconsistent with the data, given the uncertainties in parameters and data.
#
# * $\sim 0.5$ (i.e. near the middle of the distribution) means our model is consistent with the data
# * $0.99$ or $0.01$ (i.e. in the tails) means the model is missing something.
#
# We can convert this to a typical '$\sigma$' level, such that $\sigma < -3$ or $\sigma > 3$ indicates a problem with the model.
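# We can sketch that conversion in isolation (a toy illustration using the standard library's `NormalDist` in place of `scipy.stats.norm.ppf`; the fractions below are made up, not values from our fit):

```python
from statistics import NormalDist

def pvalue_to_sigma(frac_below):
    # Fraction of y_rep samples falling below the observed value ->
    # equivalent 'sigma' level via the inverse standard-normal CDF.
    return NormalDist().inv_cdf(frac_below)

print(pvalue_to_sigma(0.5))    # observation sits in the middle of y_rep
print(pvalue_to_sigma(0.999))  # observation far in the upper tail (> 3 sigma)
```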
# + [markdown] slideshow={"slide_type": "notes"}
# For more information on posterior predictive checks, see:
# * [Bayesian Data Analysis](http://www.stat.columbia.edu/~gelman/book/)
# * [http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf](http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf)
# + slideshow={"slide_type": "subslide"}
def Bayes_P_value(rep_data,obs_data):
import scipy.stats as st
pval=np.empty_like(obs_data)
for i,d in enumerate(obs_data):
ind=rep_data[:,i]<d
pval[i]=st.norm.ppf(sum(ind)/rep_data.shape[0])
return pval
pvalues=Bayes_P_value(fit_obs['y_pred']/data['n'].values,data['y']/data['n'])
# + slideshow={"slide_type": "fragment"}
pvalues
# -
# ## Question:
# Golf is too boring, I want to make it more exciting by having more successful longer putts.
#
# **How big should we make the holes for 50% of 10 feet putts to go in?**
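# As a sketch of the algebra (assuming the angular success model used earlier, $p(x) = 2\Phi\left(\arcsin((R-r)/x)/\sigma\right) - 1$): setting $p = 0.5$ at $x = 10$ feet gives
#
# $$\Phi\left(\frac{\arcsin((R-r)/10)}{\sigma}\right) = \frac{1.5}{2} \quad\Rightarrow\quad R = 10\sin\left(\sigma\,\Phi^{-1}(0.75)\right) + r,$$
#
# which is the expression evaluated below, with `st.norm.ppf(1.5/2)` playing the role of $\Phi^{-1}(0.75)$.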
# +
import scipy.stats as st
R_new=10.0*np.sin(st.norm.ppf(1.5/2)*fit_obs['sigma'])+r
# -
plt.hist(R_new)
plt.axvline(x=R)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Model Improvement
# What is wrong with our model?
#
# How could we improve it? Add a component for distance?
# +
pgm = daft.PGM(shape=(5,3),observed_style="inner",dpi=150)
# Hierarchical parameters.
pgm.add_node("sigma_angle", r"$\sigma_{angle}$", 0.5, 2)
pgm.add_node("r", r"$r$", 1.5, 2,fixed=True)
pgm.add_node("R", r"$R$", 2.5, 2,fixed=True)
pgm.add_node("sigma_dist", r"$\sigma_{dist.}$", 3, 2)
# Latent variable.
pgm.add_node("x", r"$x_j$", 1, 1)
# Data.
pgm.add_node("y", r"$y_j$", 2, 1, observed=True)
pgm.add_node("n", r"$n_j$", 3, 1)
pgm.add_edge('sigma_angle','y')
pgm.add_edge('sigma_dist','y')
pgm.add_edge('r','y')
pgm.add_edge('R','y')
pgm.add_edge('x','y')
pgm.add_edge('n','y')
pgm.add_plate([0.5, 0.5, 3, 1], label=r"$j = 1, \ldots, J$", shift=-0.1)
# Render and save.
pgm.render()
pgm.show()
# -
# ### Hierarchical model for individuals
# If we had success rates for individual golfers, we could extend the model even further.
# * First, we would have a $\sigma_{angle}$ and $\sigma_{dist.}$ for each golfer.
# * Secondly, we could constrain the individual $\sigma_{angle}$ and $\sigma_{dist.}$ to come from an overall distribution, e.g. a normal distribution with mean $\mu$ and standard deviation $\sigma$. Constraining hierarchically allows us to pool and share information across golfers, yet also get a handle on values for each individual.
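# A toy, self-contained sketch of what this hierarchical pooling buys us (a normal-normal shrinkage formula with made-up numbers, not the fitted golf model):

```python
def partial_pool(golfer_mean, n_putts, mu_pop, tau_pop, obs_sd):
    # Precision-weighted compromise between a golfer's own estimate and the
    # population mean: many putts -> trust the golfer's data; few putts ->
    # shrink toward the population-level mean mu_pop.
    w = n_putts / (n_putts + (obs_sd / tau_pop) ** 2)
    return w * golfer_mean + (1 - w) * mu_pop

# A golfer observed on only a handful of putts is pulled toward mu_pop:
print(partial_pool(golfer_mean=3.0, n_putts=5, mu_pop=1.5, tau_pop=0.4, obs_sd=1.0))
# ...while one with thousands of putts keeps an estimate near their own mean:
print(partial_pool(golfer_mean=3.0, n_putts=5000, mu_pop=1.5, tau_pop=0.4, obs_sd=1.0))
```

# With few putts the golfer-level estimate borrows strength from the population; with many putts it is driven by that golfer's own data.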
# +
pgm = daft.PGM(shape=(5,5),observed_style="inner",dpi=150)
# Hierarchical parameters.
pgm.add_node("sigma_angle", r"$\sigma_{angle,i}$", 0.5, 2,scale=1.2)
pgm.add_node("mu_a", r"$\mu_{angle}$", 0.25, 3)
pgm.add_node("sig_a", r"$\sigma_{angle}$", 0.75, 3)
pgm.add_node("r", r"$r$", 1.5, 3,fixed=True)
pgm.add_node("R", r"$R$", 2.5, 3,fixed=True)
pgm.add_node("sigma_dist", r"$\sigma_{dist.,i}$", 3, 2,scale=1.2)
pgm.add_node("mu_d", r"$\mu_{dist.}$", 2.90, 3)
pgm.add_node("sig_d", r"$\sigma_{dist.}$", 3.40, 3)
# Latent variable.
pgm.add_node("x", r"$x_{j,i}$", 1, 1)
# Data.
pgm.add_node("y", r"$y_{j,i}$", 2, 1, observed=True)
pgm.add_node("n", r"$n_{j,i}$", 3, 1)
pgm.add_edge('sigma_angle','y')
pgm.add_edge('sigma_dist','y')
pgm.add_edge('mu_a','sigma_angle')
pgm.add_edge('sig_a','sigma_angle')
pgm.add_edge('mu_d','sigma_dist')
pgm.add_edge('sig_d','sigma_dist')
pgm.add_edge('r','y')
pgm.add_edge('R','y')
pgm.add_edge('x','y')
pgm.add_edge('n','y')
pgm.add_plate([0.5, 0.5, 3, 1], label=r"$j = 1, \ldots, J$", shift=-0.1)
pgm.add_plate([0.2, 0.2, 3.5, 2.5], label=r"$i = 1, \ldots, I$", shift=-0.1)
# Render and save.
pgm.render()
pgm.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analysis of hyperparameter search results
# In the previous notebook we showed how to implement a randomized
# search for tuning the hyperparameters of a `HistGradientBoostingClassifier`
# to fit the `adult_census` dataset. In practice, a randomized hyperparameter
# search is usually run with a large number of iterations.
# In order to avoid the computational cost and still make a decent analysis,
# we load the results obtained from a similar search with 200 iterations.
# +
import pandas as pd
cv_results = pd.read_csv("../figures/randomized_search_results.csv", index_col=0)
cv_results
# -
# We define a function to remove the prefixes in the hyperparameters
# column names.
def shorten_param(param_name):
if "__" in param_name:
return param_name.rsplit("__", 1)[1]
return param_name
cv_results = cv_results.rename(shorten_param, axis=1)
cv_results
# As we have more than 2 parameters in our randomized search, we
# cannot visualize the results using a heatmap. We could still do
# it pair-wise, but having a two-dimensional projection of a
# multi-dimensional problem can lead to a wrong interpretation of
# the scores.
# +
import seaborn as sns
import numpy as np
df = pd.DataFrame(
{
"max_leaf_nodes": cv_results["max_leaf_nodes"],
"learning_rate": cv_results["learning_rate"],
"score_bin": pd.cut(
cv_results["mean_test_score"], bins=np.linspace(0.5, 1.0, 6)
),
}
)
sns.set_palette("YlGnBu_r")
ax = sns.scatterplot(
data=df,
x="max_leaf_nodes",
y="learning_rate",
hue="score_bin",
s=50,
color="k",
edgecolor=None,
)
ax.set_xscale("log")
ax.set_yscale("log")
_ = ax.legend(title="mean_test_score", loc="center left", bbox_to_anchor=(1, 0.5))
# -
# In the previous plot we see that the top performing values are located in a
# band of learning rate between 0.01 and 1.0, but we have no insight into how the
# other hyperparameters interact with such values for the learning rate.
# Instead, we can visualize all the hyperparameters at the same time using a
# parallel coordinates plot.
# +
import numpy as np
import plotly.express as px
fig = px.parallel_coordinates(
cv_results.rename(shorten_param, axis=1).apply(
{
"learning_rate": np.log10,
"max_leaf_nodes": np.log2,
"max_bins": np.log2,
"min_samples_leaf": np.log10,
"l2_regularization": np.log10,
"mean_test_score": lambda x: x,
}
),
color="mean_test_score",
color_continuous_scale=px.colors.sequential.Viridis,
)
fig.show()
# -
# <div class="admonition note alert alert-info">
# <p class="first admonition-title" style="font-weight: bold;">Note</p>
# <p class="last">We <strong>transformed most axis values by taking a log10 or log2</strong> to
# spread the active ranges and improve the readability of the plot.</p>
# </div>
#
# The parallel coordinates plot will display the values of the hyperparameters
# on different columns while the performance metric is color coded. Thus, we are
# able to quickly inspect if there is a range of hyperparameters which is
# working or not.
#
# It is possible to **select a range of results by clicking and holding on any
# axis** of the parallel coordinate plot. You can then slide (move) the range
# selection and cross two selections to see the intersections. You can undo a
# selection by clicking once again on the same axis.
#
# In particular for this hyperparameter search, it is interesting to confirm
# that the yellow lines (top performing models) all reach intermediate values
# for the learning rate, that is, tick values between -2 and 0 which correspond
# to learning rate values of 0.01 to 1.0 once we invert back the log10 transform
# for that axis.
#
# But now we can also observe that it is not possible to select the highest
# performing models by selecting lines on the `max_bins` axis with tick
# values between 1 and 3.
#
# The other hyperparameters are not very sensitive. We can check that if we
# select the `learning_rate` axis tick values between -1.5 and -0.5 and
# `max_bins` tick values between 5 and 8, we always select top performing
# models, whatever the values of the other hyperparameters.
#
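# Mapping those tick values back to actual hyperparameter values just means inverting the log transforms applied before plotting (a quick sketch of the ranges quoted above):

```python
# learning_rate was plotted on a log10 axis: ticks -2..0 correspond to 0.01..1.0
lr_range = [10 ** t for t in (-2, 0)]

# max_bins was plotted on a log2 axis: ticks 5..8 correspond to 32..256
max_bins_range = [2 ** t for t in (5, 8)]

print(lr_range, max_bins_range)
```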
# In this notebook, we saw how to interactively explore the results of a
# large randomized search with multiple interacting hyperparameters.
# In particular we observed that some hyperparameters have very little
# impact on the cross-validation score, while others have to be adjusted
# within a specific range to get models with good predictive accuracy.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (ox)
# language: python
# name: ox
# ---
# +
import geopandas as gpd
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.preprocessing import quantile_transform
# clip then min-max rescale?
minmax = False
# -
# ## Load the data
# +
census_path = 'data/census_data.csv'
indicators_path = 'data/tracts_indicators_grades.csv'
tracts_path = 'data/tracts_shapefile'
ztrax_year_path = 'data/ztrax_years.csv'
output_path = 'data/tracts_indicators_grades_eras_index.csv'
crs = 'epsg:4326'  # dict-style {'init': ...} CRS is deprecated in recent geopandas/pyproj
# -
# tract-level census data
cd = pd.read_csv(census_path, dtype={'geoid':str, 'state':str, 'county':str})
cd.shape
# tract-level hisdac-us vintage data from ztrax
ztrax = pd.read_csv(ztrax_year_path, dtype={'GEOID':str})
ztrax.shape
# tract-level street network indicators
indicators = pd.read_csv(indicators_path, dtype={'geoid':str})
indicators.shape
indicators = pd.merge(indicators, ztrax, left_on='geoid', right_on='GEOID', how='left')
indicators = pd.merge(indicators, cd, left_on='geoid', right_on='geoid', how='inner')
indicators.shape
tracts = gpd.read_file(tracts_path, crs=crs).rename(columns={'ALAND':'aland'})[['GEOID', 'aland']]
tracts.shape
gdf = gpd.GeoDataFrame(pd.merge(indicators, tracts, left_on='geoid', right_on='GEOID'), crs=crs)
gdf = gdf.drop(columns=['GEOID_x', 'GEOID_y'])
gdf.shape
with open('data/states_by_fips.json') as f:
fips_to_state = json.load(f)
gdf['state_abbrev'] = gdf['state'].map(lambda x: fips_to_state[x]['abbreviation'])
gdf.head()
# ## Create and convert variables
# convert land area and densities to square kilometers
gdf['aland'] = gdf['aland'] / 1e6 #convert m2 to km2
gdf['intersect_density'] = (gdf['n'] / gdf['aland']) * (1 - gdf['prop_deadend']) #per km2
gdf['pop_density'] = gdf['total_pop'] / gdf['aland'] #per km2
gdf['aland'] = gdf['aland'] / 1000 #finally convert km2 to 1000s of km2
# population in units of 1,000 persons
gdf['total_pop_k'] = gdf['total_pop'] / 1000
# log of mean street segment length
gdf['length_mean_log'] = np.log(gdf['length_mean'])
# straightness is inverse of circuity
gdf['straightness'] = 1 / gdf['circuity_avg']
# create state dummies
states = gdf['state_abbrev'].unique()
for state in states:
gdf[state] = gdf['state_abbrev'].map(lambda x: 1 if x==state else 0)
# dummy for if tract is rural vs urban
# census bureau considers a block urban if it has at least 1000 people per sq mile
urban_density = 1000 / 2.59 # 1000 people per sq mile converted to sq km
gdf['is_urban'] = (gdf['pop_density'] > urban_density).astype(int)
gdf['is_urban'].value_counts()
gdf['pop_density'] = gdf['pop_density'] / 1000 #1000s of persons per km2
gdf['med_hh_income'] = gdf['med_hh_income'] / 1000 #1000s of USD
# ## Create grid index
#
# The components themselves have very different variances. Before we combine them into an index, we need to re-scale them so that they contribute more equally to the variance of the index. We use three methods.
#
# 1. clipped +/- *n* std devs above/below the mean, then min-max scaled (this is the "main" grid index)
# 2. standardized then min-max scaled (this is a robustness check)
# 3. quantile-transformed then min-max scaled (this is a 2nd robustness check)
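# Method 1 on a toy list, as a pure-Python sketch (illustrative only; the real computation below operates on the pandas dataframe):

```python
import statistics

def clip_minmax(values, sigma=3):
    # Clip to mean +/- sigma standard deviations to tame outliers,
    # then min-max scale the clipped result into the [0, 1] range.
    m, s = statistics.mean(values), statistics.stdev(values)
    clipped = [min(max(v, m - sigma * s), m + sigma * s) for v in values]
    lo, hi = min(clipped), max(clipped)
    return [(v - lo) / (hi - lo) for v in clipped]

print(clip_minmax([0.2, 0.4, 0.5, 0.9]))
```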
# create gdf_index so normalization for index doesn't appear in all subsequent variable's analysis
index_components = ['orientation_order', 'straightness', 'prop_4way']
gdf_index = gdf[index_components].copy()
gdf_index.describe()
# 1. Create clipped, min-max scaled grid index. This is the main calculation method.
# clip vectors to *sigma* std devs above/below mean to make variances more similar
# then min-max scale to get them into (0,1) range
if minmax:
sigma = 3
for col in index_components:
lower = gdf_index[col].mean() - gdf_index[col].std() * sigma
upper = gdf_index[col].mean() + gdf_index[col].std() * sigma
gdf_index[col] = gdf_index[col].clip(lower, upper)
# min-max scaling
gdf_index = (gdf_index - gdf_index.min()) / (gdf_index.max() - gdf_index.min())
# fix any rounding errors so all three components are in range 0 to 1
gdf_index = gdf_index.clip(lower=0, upper=1)
gdf_index.describe()
# 2. As a robustness test, calculate the grid index from standardized (mean-normalized) components.
# standardized (mean-normalized) version with mean=0 and std=1, then min-max scaled from 0 to 1
gdf_index_norm = (gdf_index - gdf_index.mean()) / gdf_index.std()
gdf_index_norm = (gdf_index_norm - gdf_index_norm.min()) / (gdf_index_norm.max() - gdf_index_norm.min())
gdf_index_norm.describe()
# 3. As a second robustness test, calculate the grid index from quantile-transformed components. This scaling method is robust to outliers and makes the mins, maxes, and standard deviations nearly identical across the components.
# quantile-transformed version where each vector is output normally-distributed, then min-max scaled from 0 to 1
gdf_index_quant = quantile_transform(gdf_index, output_distribution='normal', copy=True)
gdf_index_quant = pd.DataFrame(gdf_index_quant, columns=gdf_index.columns)
gdf_index_quant = (gdf_index_quant - gdf_index_quant.min()) / (gdf_index_quant.max() - gdf_index_quant.min())
gdf_index_quant.describe()
# #### Now, calculate the grid index itself from its constituent components
# +
# geometric mean, even-weighting of min-max-normalized components
# this is our "main" grid index for analysis
gdf['grid_index'] = stats.mstats.gmean(gdf_index, axis=1)
# alternative: geometric mean, even-weighting of standardized components
gdf['grid_index_norm'] = stats.mstats.gmean(gdf_index_norm, axis=1)
# alternative: geometric mean, even-weighting of quantile-transformed components
gdf['grid_index_quant'] = stats.mstats.gmean(gdf_index_quant, axis=1)
# -
sample = gdf.sample(n=6, random_state=2)
sample[['geoid', 'state_abbrev', 'grid_index'] + index_components]
# want component indicators that are relevant but not too redundant (ie, strongly correlated)
# here, we see each of our indicators is more strongly correlated with the index than with each other: good
gdf[['grid_index'] + index_components].corr()
# ## Make era dummies then inspect our columns
# +
def get_ztrax_decade(year):
if year < 1940:
return 'prop_1939_earlier'
elif year >= 1940 and year < 1950:
return 'prop_1940_49'
elif year >= 1950 and year < 1960:
return 'prop_1950_59'
elif year >= 1960 and year < 1970:
return 'prop_1960_69'
elif year >= 1970 and year < 1980:
return 'prop_1970_79'
elif year >= 1980 and year < 1990:
return 'prop_1980_89'
elif year >= 1990 and year < 2000:
return 'prop_1990_99'
elif year >= 2000 and year < 2010:
return 'prop_2000_09'
elif year >= 2010 and year < 2020:
return 'prop_2010_later'
# ztrax decade will be that of the median value of all the earliest-year grid cells intersecting the tract
# that is, of all the grid cells in tract, what is the "typical" earliest property date
gdf['ztrax_decade'] = gdf['year_median'].map(get_ztrax_decade)
ztrax_dummies = pd.get_dummies(gdf['ztrax_decade'], prefix='dummy_ztrax')
gdf[ztrax_dummies.columns] = ztrax_dummies
# +
cols = ['prop_1939_earlier', 'prop_1940_49', 'prop_1950_59', 'prop_1960_69',
'prop_1970_79', 'prop_1980_89', 'prop_1990_99', 'prop_2000_09', 'prop_2010_later']
# jitter so we don't get 2 eras with equal value and both are the plurality
np.random.seed(0)
gdf[cols] = gdf[cols].applymap(lambda x: x + np.random.random() * 1e-6)
# +
# %%time
# identify the primary decade algorithmically
def find_earliest_threshold(row, cols, threshold):
for col in cols:
if row[col] > threshold:
return col
def determine_primary_decade(row, cols=cols):
for threshold in [0.5, 0.4, 0.3, 0.2, 0.1]:
decade = find_earliest_threshold(row, cols, threshold)
if decade is not None:
return decade
gdf['primary_decade'] = gdf.apply(determine_primary_decade, axis='columns')
primary_dummies = pd.get_dummies(gdf['primary_decade'], prefix='dummy_primary')
gdf[primary_dummies.columns] = primary_dummies
# +
# %%time
# identify whichever decade is earlier: ztrax or primary
def get_earlier_decade(row):
primary_decade = row['primary_decade']
ztrax_decade = row['ztrax_decade']
if pd.isnull(primary_decade) and pd.notnull(ztrax_decade):
return ztrax_decade
if pd.isnull(ztrax_decade) and pd.notnull(primary_decade):
return primary_decade
if pd.isnull(primary_decade) and pd.isnull(ztrax_decade):
return None
if float(primary_decade[5:9]) < float(ztrax_decade[5:9]):
return primary_decade
else:
return ztrax_decade
gdf['prim_ztrax_decade'] = gdf.apply(get_earlier_decade, axis=1)
primary_ztrax_earliest_dummies = pd.get_dummies(gdf['prim_ztrax_decade'], prefix='dummy_prim_ztrax')
gdf[primary_ztrax_earliest_dummies.columns] = primary_ztrax_earliest_dummies
# +
# %%time
# identify earliest decade by which cumulatively >50% of tract's structures were built
def determine_earliest_cumulative_decade(row):
for col in cols:
if row[col]:
return col
cs = gdf[cols].cumsum(axis='columns') > 0.50
gdf['cumulative_decade'] = cs.apply(determine_earliest_cumulative_decade, axis='columns')
cumulative_dummies = pd.get_dummies(gdf['cumulative_decade'], prefix='dummy_cumulative')
gdf[cumulative_dummies.columns] = cumulative_dummies
# +
# %%time
# identify earliest decade in which >20% of tract's structures were built
def determine_earliest_decade(row, threshold=0.20):
for col in cols:
if row[col] > threshold:
return col
gdf['earliest_decade'] = gdf.apply(determine_earliest_decade, axis='columns')
earliest_dummies = pd.get_dummies(gdf['earliest_decade'], prefix='dummy_earliest')
gdf[earliest_dummies.columns] = earliest_dummies
# +
# %%time
# identify decade in which plurality of tract's structures were built
def determine_plurality_decade(row):
for col in cols:
other_cols = [c for c in cols if c != col]
if (row[col] > row[other_cols]).all():
return col
gdf['plurality_decade'] = gdf.apply(determine_plurality_decade, axis='columns')
plurality_dummies = pd.get_dummies(gdf['plurality_decade'], prefix='dummy_plurality')
gdf[plurality_dummies.columns] = plurality_dummies
# +
# %%time
# identify decade in which majority of tract's structures were built (where a majority exists)
def determine_majority_decade(row):
for col in cols:
if row[col] > 0.5:
return col
gdf['majority_decade'] = gdf.apply(determine_majority_decade, axis='columns')
majority_dummies = pd.get_dummies(gdf['majority_decade'], prefix='dummy_majority')
gdf[majority_dummies.columns] = majority_dummies
# -
decades = ['majority_decade', 'plurality_decade', 'earliest_decade', 'cumulative_decade', 'primary_decade', 'ztrax_decade', 'prim_ztrax_decade']
gdf[decades].apply(lambda x: x.value_counts())
# urban only
gdf[gdf['is_urban']==1][decades].apply(lambda x: x.value_counts())
# +
def fstr(x):
try:
return f'{x:0.3f}'
except:
return x
gdf[cols + decades].sample(n=5, random_state=2).applymap(fstr)
# -
mismatch = gdf[gdf['primary_decade'] != gdf['ztrax_decade']][cols + decades].applymap(fstr)
print(mismatch.shape)
mismatch.head()
# not every tract has residential structures
pd.isnull(gdf['primary_decade']).sum()
str(gdf.columns.sort_values().tolist())
gdf.to_csv(output_path, index=False, encoding='utf-8')
# ## Look at individual stats
response = 'grid_index'
gdf[response].describe()
ax = gdf[response].hist(bins=100)
ax.set_xlim((0,1))
plt.show()
y = gdf[response].sort_values()
fig, ax = plt.subplots(figsize=(5,5))
ax.scatter(x=range(len(y)), y=y, s=20, marker='o', edgecolor='b', color='none', alpha=0.7)
xmax = int(len(gdf) * 1.02)
xmin = int(len(gdf) * -0.02)
ymax = 1.02
ymin = -0.02
plt.plot([xmin, xmax], [ymin, ymax], c='#999999', ls=':', zorder=-1)
ax.set_xlim((xmin,xmax))
ax.set_ylim((ymin,ymax))
ax.set_ylabel(response)
ax.set_xlabel('Tract Rank')
plt.show()
print(gdf.groupby('state_abbrev')[[response, 'prop_4way']].median().sort_values('prop_4way').head(10))
print(gdf.groupby('state_abbrev')[[response, 'prop_4way']].median().sort_values('prop_4way').tail(10))
# total nodes and edges in dataset
print('{:,}'.format(gdf['m'].sum()))
print('{:,}'.format(gdf['n'].sum()))
# +
variables = [response, 'straightness', 'orientation_order', 'prop_4way',
'aland', 'total_pop_k', 'is_urban', 'prop_single_fam', 'med_rooms_per_home',
'intersect_density', 'length_mean', 'prop_deadend', 'k_avg',
'elevations_iqr', 'grade_mean']
gdf[variables].corr()
# -
mask_urban = (gdf['state_abbrev'].isin(states)) & (gdf['is_urban'] == 1)
mask_rural = (gdf['state_abbrev'].isin(states)) & (gdf['is_urban'] == 0)
print(gdf[mask_urban][response].median())
print(gdf[mask_rural][response].median())
ne = ['ME', 'VT', 'NH', 'MA', 'RI', 'CT', 'NJ', 'PA', 'NY']
mask_urban = (gdf['state_abbrev'].isin(ne)) & (gdf['is_urban'] == 1)
mask_rural = (gdf['state_abbrev'].isin(ne)) & (gdf['is_urban'] == 0)
print(gdf[mask_urban][response].median())
print(gdf[mask_rural][response].median())
plains = ['ND', 'SD', 'NE', 'KS', 'OK']
mask_urban = (gdf['state_abbrev'].isin(plains)) & (gdf['is_urban'] == 1)
mask_rural = (gdf['state_abbrev'].isin(plains)) & (gdf['is_urban'] == 0)
print(gdf[mask_urban][response].median())
print(gdf[mask_rural][response].median())
# ## DKRZ NCL notebook example
# <table align="left">
# <tr><td>Title:</td><td>The NCL viewport</td></tr>
# <tr><td>Description</td><td>Shows how to use the viewport resources to resize the plot and position it in the frame</td></tr>
# <tr><td>20.07.18</td><td>kmf</td></tr>
# </table>
# First, we define the graphics output format which should be PNG and of size 300x300 pixels.
wks_type = "png"
wks_type@wkWidth = 300
wks_type@wkHeight = 300
wks = gsn_open_wks(wks_type,"plot_viewport_settings")
# If we do not change any default setting for the viewport NCL will center the plot for us.
plot = gsn_csm_map(wks,True)
# 
# ### Viewport edge
# Well, we can't see the edge of the viewport but we can show it using the **gsn_polyline_ndc** function (ndc - normalized device coordinates). <br/>
# Therefore, we have to assign a variable of type logical and tell NCL not to advance the frame so that the polylines can be added to the plot.
res = True
res@gsnFrame = False
# Draw the polyline close to the frame edge. **The full frame in NDC space is always a square with x: 0.0-1.0 and y: 0.0-1.0.**
# +
x = (/0.0001, 0.9999, 0.9999, 0.0001, 0.0001/)
y = (/0.0001, 0.0001, 0.9999, 0.9999, 0.0001/)
gsn_polyline_ndc(wks,x,y,True)
# -
# Create the plot again but note that we have to use **res** instead of the logical **True** for the plot function.
plot = gsn_csm_map(wks,res)
frame(wks)
# 
# Since we are creating several plots, NCL will save each frame to a separate PNG file named plot_viewport_settings.000001.png, plot_viewport_settings.000002.png, ...
# ### Moving the plot
# The next step is to change some default viewport settings to move the plot upward and slightly to the left. **vpXF** specifies the location of the left edge of the View object's bounding box in NDC (normalized device coordinates) space (default: 0.2), and **vpYF** specifies the location of the top edge of the View object's bounding box in NDC space (default: 0.8).
res@vpXF = 0.05
res@vpYF = 0.99
# Let's see what happens.
# +
gsn_polyline_ndc(wks,x,y,True)
plot = gsn_csm_map(wks,res)
frame(wks)
# -
# 
# Ok, that was a little bit too much. Let's play with the values.
res@vpXF = 0.07
res@vpYF = 0.98
# Create the plot.
# +
gsn_polyline_ndc(wks,x,y,True)
plot = gsn_csm_map(wks,res)
frame(wks)
# -
# 
# Better :-)
# ### Changing the plot size
# The next step is to change the size of the plot with the viewport resource settings **vpWidthF** and **vpHeightF**. **vpWidthF** specifies the width of View object's bounding box in NDC units (default: 0.6).
# **vpHeightF** specifies the height of View object's bounding box in NDC units (default: 0.6).
res@vpWidthF = 0.6
res@vpHeightF = 0.3
# Create the plot.
# +
gsn_polyline_ndc(wks,x,y,True)
plot = gsn_csm_map(wks,res)
frame(wks)
# -
# 
# ### Working with two plots
# Sometimes we want to display two or more plots in the same frame. We could use the **gsn_panel** function to do this for us, but if we want *more control over size and position* it is better to use the **viewport resources**.
# Draw the polylines, add a title to the plot and draw the first plot.
# +
gsn_polyline_ndc(wks,x,y,True)
res@tiMainString = "First plot"
plot = gsn_csm_map(wks,res)
# -
# Create a second plot with its title and draw it below the first one.
# +
res@tiMainString = "Second plot"
res@vpYF = 0.58
plot2 = gsn_csm_map(wks,res)
frame(wks)
# -
# 
#
# ### Playing with multiple plots
# Ok, let's see what we can do with multiple plots using their own resources.
# The next part briefly describes how to handle multiple plots with their own resources, different sizes, and positions. For cleaner code we will define a new graphics output and create new plots.
wks2_type = "png"
wks2_type@wkWidth = 300
wks2_type@wkHeight = 300
wks2 = gsn_open_wks(wks2_type,"plot_viewport_settings_multiple_plots")
# In this example we'll create 4 plots. The common resources for the plots can be copied.
# +
res1 = True
res1@gsnFrame = False
res3 = res1
res4 = res1
# -
# Plot 1 and plot 2 should have the same size and be placed on the left side, one above the other.
# +
res1@vpWidthF = 0.46
res1@vpHeightF = 0.23
res1@vpXF = 0.07
res2 = res1
res1@tiMainString = "First plot"
res1@vpYF = 0.9
res2@tiMainString = "Second plot"
res2@vpYF = 0.5
# -
# Plot 3 and plot 4 should be much smaller and be placed on the right side, one above the other.
# +
res3@vpWidthF = 0.34
res3@vpHeightF = 0.17
res3@vpXF = 0.62
res4 = res3
res3@tiMainString = "Third plot"
res3@vpYF = 0.9
res4@tiMainString = "Fourth plot"
res4@vpYF = 0.6
# -
# Now, create all 4 plots. For convenience, they are simple maps again. Draw a line at the edges.
# +
gsn_polyline_ndc(wks2,x,y,True)
plot1 = gsn_csm_map(wks2,res1)
plot2 = gsn_csm_map(wks2,res2)
plot3 = gsn_csm_map(wks2,res3)
plot4 = gsn_csm_map(wks2,res4)
frame(wks2)
# -
# 
| Visualization/NCL notebooks/NCL_notebook_viewport_settings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="TKj4zNwSL5Me"
# Lambda School Data Science
#
# *Unit 2, Sprint 2, Module 4*
#
# ---
# + id="h_v0H2kcL5Mp"
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/main/data/'
# !pip install category_encoders==2.*
# !pip install pandas-profiling==2.*
# If you're working locally:
# else:
#     DATA_PATH = '../data/'
# + colab={"base_uri": "https://localhost:8080/"} id="USXdYpTUMtI2" outputId="be8de508-cd97-49d2-bf1b-4da3b378cec5"
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from pandas_profiling import ProfileReport
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.metrics import plot_confusion_matrix, classification_report
from category_encoders import OneHotEncoder, OrdinalEncoder
# + [markdown] id="nCc3XZEyG3XV"
# # Module Project: Classification Metrics
#
# This sprint, the module projects will focus on creating and improving a model for the Tanzania Water Pump dataset. Your goal is to create a model to predict whether a water pump is functional, non-functional, or needs repair.
#
# Dataset source: [DrivenData.org](https://www.drivendata.org/competitions/7/pump-it-up-data-mining-the-water-table/).
#
# ## Directions
#
# The tasks for this project are as follows:
#
# - **Task 1:** Use `wrangle` function to import training and test data.
# - **Task 2:** Split training data into feature matrix `X` and target vector `y`.
# - **Task 3:** Split training data into training and validation sets.
# - **Task 4:** Establish the baseline accuracy score for your dataset.
# - **Task 5:** Build `model`.
# - **Task 6:** Calculate the training and validation accuracy score for your model.
# - **Task 7:** Plot the confusion matrix for your model.
# - **Task 8:** Print the classification report for your model.
# - **Task 9:** Identify likely `'non-functional'` pumps in the test set.
# - **Task 10:** Find likely `'non-functional'` pumps serving biggest populations.
# - **Task 11 (`stretch goal`):** Plot pump locations from Task 10.
#
# You should limit yourself to the following libraries for this project:
#
# - `category_encoders`
# - `matplotlib`
# - `pandas`
# - `pandas-profiling`
# - `plotly`
# - `sklearn`
#
#
# # I. Wrangle Data
# + id="tDRREcWPL5Ms"
def wrangle(fm_path, tv_path=None):
if tv_path:
df = pd.merge(pd.read_csv(fm_path,
parse_dates=['date_recorded'], na_values=[0, -2.000000e-08]),
pd.read_csv(tv_path)).set_index('id')
else:
df = pd.read_csv(fm_path,
parse_dates=['date_recorded'],
na_values=[0, -2.000000e-08]).set_index('id')
# Drop constant columns
df.drop(columns=['recorded_by'], inplace=True)
# Dropping high null columns
df.drop(columns=['amount_tsh', 'num_private'], inplace=True)
# Drop HCCCs
cutoff = 100
drop_cols = [col for col in df.select_dtypes('object').columns
if df[col].nunique() > cutoff]
df.drop(columns=drop_cols, inplace=True)
# Drop duplicate columns
dupe_cols = [col for col in df.head(15).T.duplicated().index
if df.head(15).T.duplicated()[col]]
df.drop(columns=dupe_cols, inplace=True)
df['recorded_year'] = df['date_recorded'].dt.year
df['recorded_month'] = df['date_recorded'].dt.month
df['recorded_day'] = df['date_recorded'].dt.day
df.drop(columns='date_recorded', inplace=True)
#feature engineering a new column
df['pump_age'] = df['recorded_year'] - df['construction_year']
return df
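# The duplicate-column check in `wrangle` works by transposing the first 15 rows so that `duplicated()` compares columns instead of rows. A small self-contained illustration (toy frame, not the pump data):

```python
import pandas as pd

df = pd.DataFrame({
    'a': [1, 2, 3],
    'b': [1, 2, 3],  # same values as 'a', so it is a duplicate column
    'c': [9, 8, 7],
})

# Transpose: each column becomes a row, so duplicated() flags repeated columns
dupes = df.T.duplicated()
print(dupes[dupes].index.tolist())
```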
# + [markdown] id="PtXqyCgcL5Mt"
# **Task 1:** Use the `wrangle` function above to read `train_features.csv` and `train_labels.csv` into the DataFrame `df`, and `test_features.csv` into the DataFrame `X_test`.
# + id="HmVJPIGRNDRD"
train_feature_path = DATA_PATH+'waterpumps/train_features.csv'
train_target_path = DATA_PATH+'waterpumps/train_labels.csv'
test_feature_path = DATA_PATH+'waterpumps/test_features.csv'
# + id="tC7Lqh9iL5Mu"
df = wrangle(train_feature_path, train_target_path)
X_test = wrangle(test_feature_path)
# + [markdown] id="YrGk2itgL5Mu"
# # II. Split Data
#
# **Task 2:** Split your DataFrame `df` into a feature matrix `X` and the target vector `y`. You want to predict `'status_group'`.
#
# **Note:** You could avoid a separate validation split by using cross-validation instead, but in this notebook you'll hold out a validation set in the next task.
# + id="TUcOFJETN8-G"
def binary(status):
if status == 'functional':
return 0
else:
return 1
df['needs_repair_or_not'] = df['status_group'].apply(binary)
# + colab={"base_uri": "https://localhost:8080/"} id="IW5vvTd-OTwe" outputId="04c19ac1-292b-467c-f2cf-9276789d245c"
df['needs_repair_or_not'].value_counts()
# + id="2Cqr7XgvL5Mv"
target = 'status_group'
y = df[target]
X = df.drop(columns=[target, 'needs_repair_or_not'])
# + [markdown] id="FKl9k9LzL5Mv"
# **Task 3:** Using a randomized split, divide `X` and `y` into a training set (`X_train`, `y_train`) and a validation set (`X_val`, `y_val`).
# + id="uFBaGLwXL5Mw"
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
# + id="nTXk7KLrNfnM"
assert len(X_train) + len(X_val) == len(X)
# + [markdown] id="vsE52D6IL5Mw"
# # III. Establish Baseline
#
# **Task 4:** Since this is a **classification** problem, you should establish a baseline accuracy score. Figure out what is the majority class in `y_train` and what percentage of your training observations it represents.
# + id="42MFWdW9L5Mx" colab={"base_uri": "https://localhost:8080/"} outputId="b2e36ab4-1c55-4b15-a0d2-f74a19cf3bd5"
baseline_acc = y_train.value_counts(normalize=True).max()
print('Baseline Accuracy Score:', baseline_acc)
# + [markdown] id="X_YEZC8-L5Mx"
# # IV. Build Models
#
# **Task 5:** Build and train your `model`. Include the transformers and predictor that you think are most appropriate for this problem.
# + id="HVgJAJmAL5My"
model = make_pipeline(
OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(n_estimators=47,
n_jobs=-2,
random_state=42)
)
model.fit(X_train, y_train);
# + [markdown] id="OCU_QXu5L5My"
# # V. Check Metrics
#
# **Task 6:** Calculate the training and validation accuracy scores for `model`.
# + id="A29VU-R8L5Mz" colab={"base_uri": "https://localhost:8080/"} outputId="528f4d5a-592c-4bf2-cc65-72de2e5f9218"
training_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)
print('Training Accuracy Score:', training_acc)
print('Validation Accuracy Score:', val_acc)
# + [markdown] id="EB4nR9CwL5Mz"
# **Task 7:** Plot the confusion matrix for your model, using your validation data.
#
# **Note:** Since there are three classes in your target vector, the dimensions of your matrix will be 3x3.
# + id="qKlMH84RL5M0" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="ca91bb81-8c6a-4296-e220-11bdacd920f3"
# Plot 3x3 confusion matrix
# sklearn orders the classes alphabetically:
# 'functional', 'functional needs repair', 'non functional'
plot_confusion_matrix(model,
X_val,
y_val,
values_format='.0f',
display_labels=['Functional', 'Needs Repair', 'Non-functional'])
# + [markdown] id="nTY5uVTuL5M0"
# Calculating precision and recall for a multiclass problem is a bit of a mess. Fortunately, we can use `sklearn`'s classification report.
#
# **Task 8:** Print the classification report for your `model`, using your validation data.
# + id="AJlq1mA1L5M1" colab={"base_uri": "https://localhost:8080/"} outputId="b24155dc-767f-4804-f55e-e8754e227a26"
# Print classification report
print(classification_report(y_val, model.predict(X_val)))
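# Per-class precision and recall are just diagonal-over-column and diagonal-over-row ratios of the confusion matrix. A toy sketch with invented labels (0 = functional, 1 = needs repair, 2 = non-functional) showing the arithmetic `classification_report` performs:

```python
import numpy as np

y_true = np.array([0, 0, 1, 2, 2, 0])
y_pred = np.array([0, 2, 1, 2, 0, 0])

# Build a 3x3 confusion matrix: rows = true class, columns = predicted class
cm = np.zeros((3, 3), dtype=int)
np.add.at(cm, (y_true, y_pred), 1)

precision = cm.diagonal() / cm.sum(axis=0)  # TP / (TP + FP), per predicted class
recall = cm.diagonal() / cm.sum(axis=1)     # TP / (TP + FN), per true class
print(precision)
print(recall)
```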
# + [markdown] id="9QcgXoFQL5M1"
# # VI. Tune Model
#
# Usually, we use this part of the ML workflow to adjust the hyperparameters of our model to increase performance based on metrics like accuracy. Today, we'll use it to help maximize the impact of our water pump repairs when resources are scarce. What if we only had funds to repair 100 water pumps?
#
# (This activity is based on a [post](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050) by Lambda alum <NAME>.)
#
# **Task 9:** Using your model's `predict_proba` method, identify the observations in your **test set** where the model is more than 95% certain that a pump is `'non-functional'`. Put these observations in the DataFrame `X_test_nf`.
# + colab={"base_uri": "https://localhost:8080/"} id="7IH8PjH38bWf" outputId="4f148bd5-53e1-4b80-fd81-58c6e8b265e9"
y_train.value_counts()
# + id="UZkI2LBEAiKd"
# printing the first 10 predicted probabilities
# print(model.predict_proba(X_test)[:10])
# + colab={"base_uri": "https://localhost:8080/", "height": 709} id="FYn1T-_1-ikC" outputId="bc034644-d1d1-4e3c-ae0e-90f21e6a80bc"
# multi-class problem: predict_proba returns one column per class (in alphabetical
# order), so the last column is the 'non functional' probability
X_test['y_predict'] = model.predict_proba(X_test)[:,-1]
X_test
# + colab={"base_uri": "https://localhost:8080/"} id="FV9bx81y-rXK" outputId="dfb58f49-227c-4aab-e7d5-2ce9a4dae015"
X_test.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 674} id="7g95dZYI-vZH" outputId="50299ae5-3ba8-4fc9-9c2a-911302a62059"
mask = X_test['y_predict'] > 0.95
X_test_nf = X_test[mask].copy()
X_test_nf
# + [markdown] id="2LwPjTfAL5M2"
# **Task 10:** Limit `X_test_nf` to the 100 pumps with the largest associated populations.
# + colab={"base_uri": "https://localhost:8080/", "height": 726} id="8GhkF-KguEoN" outputId="bdd0dcb1-a241-4b59-ddac-9e4c6ff603bd"
# Keep the 100 pumps with the largest populations (NaN populations sort last by default)
X_test_nf = X_test_nf.sort_values('population', ascending=False).head(100)
X_test_nf
# + [markdown] id="htE-XYdqL5M3"
# # VII. Communicate Results
#
# **Task 11 (`stretch goal`):** Create a scatter plot with the location of the 100 pumps in `X_test_nf`.
#
# **Note:** If you want to make this a **`super stretch goal`**, create a Mapbox scatter plot using [Plotly](https://plotly.github.io/plotly.py-docs/generated/plotly.express.scatter_mapbox.html).
# + id="vOdqnE6cL5M4" colab={"base_uri": "https://localhost:8080/", "height": 367} outputId="bda35de8-c9dc-4b04-b629-a267d1226600"
import plotly.express as px
figure = px.scatter_mapbox(
X_test_nf,
lat='latitude',
lon='longitude',
hover_name='region',
hover_data=['basin', 'population'],
color_discrete_sequence=['azure'],
zoom=3, height=350
)
figure.update_layout(mapbox_style="open-street-map")
figure.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
figure.show()
| JE_LS_DS_224_assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/brucebra000/DS-Unit-2-Kaggle-Challenge/blob/master/U2S2A4_kaggle_challenge_4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="_k5sD_kD2T97" colab_type="text"
# Lambda School Data Science
#
# *Unit 2, Sprint 2, Module 4*
#
# ---
# + [markdown] colab_type="text" id="nCc3XZEyG3XV"
# # Classification Metrics
#
# ## Assignment
# - [ ] If you haven't yet, [review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.
# - [ ] Plot a confusion matrix for your Tanzania Waterpumps model.
# - [ ] Continue to participate in our Kaggle challenge. Every student should have made at least one submission that scores at least 70% accuracy (well above the majority class baseline).
# - [ ] Submit your final predictions to our Kaggle competition. Optionally, go to **My Submissions**, and _"you may select up to 1 submission to be used to count towards your final leaderboard score."_
# - [ ] Commit your notebook to your fork of the GitHub repo.
# - [ ] Read [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), by Lambda DS3 student <NAME>. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.
#
#
# ## Stretch Goals
#
# ### Reading
# - [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score."_
# - [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)
# - [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) by <NAME>, with video
# - [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)
#
#
# ### Doing
# - [ ] Share visualizations in our Slack channel!
# - [ ] RandomizedSearchCV / GridSearchCV, for model selection. (See module 3 assignment notebook)
# - [ ] More Categorical Encoding. (See module 2 assignment notebook)
# - [ ] Stacking Ensemble. (See below)
#
# ### Stacking Ensemble
#
# Here's some code you can use to "stack" multiple submissions, which is another form of ensembling:
#
# ```python
# import pandas as pd
#
# # Filenames of your submissions you want to ensemble
# files = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']
#
# target = 'status_group'
# submissions = (pd.read_csv(file)[[target]] for file in files)
# ensemble = pd.concat(submissions, axis='columns')
# majority_vote = ensemble.mode(axis='columns')[0]
#
# sample_submission = pd.read_csv('sample_submission.csv')
# submission = sample_submission.copy()
# submission[target] = majority_vote
# submission.to_csv('my-ultimate-ensemble-submission.csv', index=False)
# ```
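# The row-wise majority vote is the line `ensemble.mode(axis='columns')[0]`. A toy run with fabricated predictions (no CSV files needed) shows what it does:

```python
import pandas as pd

# Three fabricated "submissions" predicting the status of four pumps
s1 = pd.Series(['functional', 'non functional', 'functional', 'non functional'])
s2 = pd.Series(['functional', 'functional', 'functional', 'non functional'])
s3 = pd.Series(['non functional', 'non functional', 'functional', 'functional'])

ensemble = pd.concat([s1, s2, s3], axis='columns')
majority_vote = ensemble.mode(axis='columns')[0]  # most frequent value in each row
print(majority_vote.tolist())
```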
# + id="wS5AzR5z3a0R" colab_type="code" outputId="b83a3eef-37f3-4483-9703-3a29a9113021" colab={"base_uri": "https://localhost:8080/", "height": 214}
# !pip install --upgrade category_encoders
# + id="FDDxzPYk37g2" colab_type="code" outputId="9cbe347a-48a0-4a42-dcd6-6cd91ef2621a" colab={"base_uri": "https://localhost:8080/", "height": 160}
# !pip install matplotlib==3.1.0
# + id="5XefNdEC_Yw3" colab_type="code" outputId="8e95c498-a6e4-42a5-8671-cc1e178eccf7" colab={"base_uri": "https://localhost:8080/", "height": 35}
import matplotlib
print(matplotlib.__version__)
# + id="XcPLSBIR9GIY" colab_type="code" outputId="bbe63c3e-cd9e-4209-ea18-8cda514b6c74" colab={"base_uri": "https://localhost:8080/", "height": 231}
# !pip install scikit-plot
# + id="lE76L1uo1hto" colab_type="code" colab={}
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import category_encoders as ce
import scikitplot as skplt
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score
from google.colab import files
# + colab_type="code" id="lsbRiKBoB5RE" colab={}
# %%capture
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# + colab_type="code" id="BVA1lph8CcNX" colab={}
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
# + id="Y0shpI6C2T-K" colab_type="code" outputId="2f74b262-8da7-4f55-e969-b13c412f86a9" colab={"base_uri": "https://localhost:8080/", "height": 435}
print(train.shape)
train.head()
# + id="wSndTBPG5ciu" colab_type="code" outputId="39d16669-0f8a-48d6-935c-7989bcab5baf" colab={"base_uri": "https://localhost:8080/", "height": 35}
#Validation set
train, val = train_test_split(train, stratify = train['status_group'], random_state = 1)
train.shape, val.shape, test.shape
# + id="vjNjJs8t5jVp" colab_type="code" colab={}
#Wrangling data
def wrangle(x):
x = x.copy()
x['latitude'] = x['latitude'].replace(-2e-08, 0)
zero_cols = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population']
for col in zero_cols:
x[col] = x[col].replace(0, np.nan)
x[col+'_MISSING'] = x[col].isnull()
duplicates = ['quantity_group', 'payment_type']
x = x.drop(columns = duplicates)
unusable = ['recorded_by', 'id']
x = x.drop(columns = unusable)
x['date_recorded'] = pd.to_datetime(x['date_recorded'], infer_datetime_format = True)
x['year_recorded'] = x['date_recorded'].dt.year
x['month_recorded'] = x['date_recorded'].dt.month
x['day_recorded'] = x['date_recorded'].dt.day
x['years'] = x['year_recorded'] - x['construction_year']
x['years_MISSING'] = x['years'].isnull()
return x
# + id="bG-tbXwX5v6q" colab_type="code" outputId="703f242a-0ce4-4ea0-feed-cccc451db6ff" colab={"base_uri": "https://localhost:8080/", "height": 35}
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
train.shape, val.shape, test.shape
# + id="hIdAoWke5xQT" colab_type="code" colab={}
#Feature sets and targets
target = 'status_group'
train_features = train.drop(columns = [target])
numeric_features = train_features.select_dtypes(include = 'number').columns.tolist()
cardinality = train_features.select_dtypes(exclude = 'number').nunique()
categorical_features = cardinality[cardinality <= 50].index.tolist()
features = numeric_features + categorical_features
x_train = train[features]
y_train = train[target]
x_val = val[features]
y_val = val[target]
x_test = test[features]
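# The cardinality filter above keeps only the categorical columns with few enough unique values to encode. A toy illustration of the same idiom (invented columns, not the pump data):

```python
import pandas as pd

df = pd.DataFrame({
    'basin': ['a', 'b', 'a', 'c'],         # 3 unique values -> kept
    'wpt_name': ['w1', 'w2', 'w3', 'w4'],  # every row unique -> dropped
    'amount': [1.0, 2.0, 3.0, 4.0],        # numeric, handled separately
})

cardinality = df.select_dtypes(exclude='number').nunique()
keep = cardinality[cardinality <= 3].index.tolist()
print(keep)
```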
# + id="OJXGSzZ16krf" colab_type="code" outputId="b11ac7e7-8d02-4dfa-9ac3-b1fc905c5665" colab={"base_uri": "https://localhost:8080/", "height": 35}
#Pipeline
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names = True, cols = ['basin']),
ce.OrdinalEncoder(),
SimpleImputer(strategy = 'median'),
RandomForestClassifier(n_estimators = 100, random_state = 1, n_jobs = -1)
)
pipeline.fit(x_train, y_train)
y_pred = pipeline.predict(x_val)
print(accuracy_score(y_val, y_pred))
# + id="-aNHJizK556x" colab_type="code" outputId="bf9d9c7b-fc22-42c3-adea-cd4408f34252" colab={"base_uri": "https://localhost:8080/", "height": 404}
#Confusion Matrix
skplt.metrics.plot_confusion_matrix(
y_val,
y_pred,
figsize = (8, 6),
title = f'Confusion Matrix ({len(y_val)})',
normalize = False
);
# + id="3bNfsF11-XhV" colab_type="code" outputId="6bb4636e-a364-466e-a9f5-68f2deb5226f" colab={"base_uri": "https://localhost:8080/", "height": 404}
#Confusion Matrix (Normalized)
skplt.metrics.plot_confusion_matrix(
y_val,
y_pred,
figsize = (8, 6),
title = f'Confusion Matrix ({len(y_val)})',
normalize = True
);
| U2S2A4_kaggle_challenge_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Proof of Concept
# - generating a regular expression that matches an expected substring
# - an individual has a dynamic length and can grow / shrink
# %config IPCompleter.greedy=True
# +
import warnings
warnings.filterwarnings('ignore')
# -
# +
import re
import numpy as np
print('re:', re.__version__)
print('numpy:', np.__version__)
# +
import sys
sys.path.append('..')
from package.ga import BinaryGeneFactory, AbstractFitness, SimpleHillClimber
from package.transformers import IntegerToBinaryString, StringToMapping, KeyArrayToRegex
# -
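# The `SimpleHillClimber` internals live in the local `package.ga` module and are not shown in this notebook. As a rough sketch of what such a loop could look like (names and behavior here are assumptions, not the package's actual API), demonstrated on a toy bit-string problem:

```python
import random

def hill_climb(individual, fitness, mutate, iterations=5000, seed=0):
    """Accept a mutation only when it does not lower the fitness."""
    rng = random.Random(seed)
    best, best_fit = individual, fitness(individual)
    for i in range(iterations):
        candidate = mutate(list(best), rng)  # mutate a copy of the current best
        cand_fit = fitness(candidate)
        if cand_fit >= best_fit:
            best, best_fit = candidate, cand_fit
        if best_fit == 1.0:  # perfect score, stop early
            return best, best_fit, i
    return best, best_fit, iterations

# Toy problem: evolve a bit string of all ones
target_len = 8
fitness = lambda ind: sum(ind) / target_len

def mutate(ind, rng):
    ind[rng.randrange(len(ind))] ^= 1  # flip one random bit
    return ind

best, fit, iteration = hill_climb([0] * target_len, fitness, mutate)
print(best, fit, iteration)
```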
# + language="html"
# <h4>1. text, expected text</h4>
# -
# +
## 1. text -> '{expected string}' within,
expected_text = 'cost: 1500'
expected_text_length = len(expected_text)
text = 'a bird in hand is worth two in the bush?\n' \
+ 'these watches ' + expected_text + '!\n' \
+ 'the ball is in court 1500.'
print(text)
# -
# + language="html"
# <h4>2. setup</h4>
# -
# +
consts = 'abcdefghijklmnopqrstuvwxyz'
regexes = [
r'\s',
r'\d',
r'[a-z]',
r'[:]',
r'[!?.]',
r'[0-9]'
]
complete_set = [ c for c in consts ] + regexes
binary_start = 0
binary_end = len(complete_set) -1 # hard end, values < binary_end
integer_to_binary_transformer = IntegerToBinaryString(5)
gene_factory = BinaryGeneFactory(binary_start, binary_end, 5)
binary_to_regex = {}
for i in range(binary_end):
key = integer_to_binary_transformer.transform(i)
binary_to_regex[key] = complete_set[i]
string_mapper = StringToMapping(binary_to_regex)
to_regex = KeyArrayToRegex(string_mapper)
# -
# +
class Fitness(AbstractFitness):
to_regex = None
expected_match = ''
def __init__(self, to_regex, expected_match, text):
self.to_regex = to_regex
self.expected_match = expected_match
self.text = text
super().__init__()
def evaluate(self, individual, display_logging = False):
fitness = 0.0
regexes = self.to_regex.transform_to_array(individual)
regexes_length = len(regexes)
expected_text_length = len(self.expected_match)
## 1. regex is the same length
if regexes_length == expected_text_length:
fitness += 1.0
elif expected_text_length > regexes_length:
fitness += (1 - ((expected_text_length - regexes_length) / expected_text_length))
else:
fitness += (1 - ((regexes_length - expected_text_length) / regexes_length))
if display_logging:
print('rule 1:', fitness)
compare_individual_elements = np.array([
re.match(ai, self.expected_match[i]) != None
for i, ai
in enumerate(regexes)
if i < expected_text_length
]).astype(int).sum()
fitness += (compare_individual_elements / expected_text_length)
if display_logging:
print('rule 2:', fitness)
regex = self.to_regex.transform(individual)
pattern = re.compile(regex)
matches = pattern.findall(self.text)
if len(matches) > 0:
against_first_match = matches[0]
comparison = np.array([
self.expected_match[i] == against_first_match[i]
for i
in range(len(against_first_match)) if i < expected_text_length
]).astype(int).sum()
fitness += (comparison / expected_text_length)
if display_logging:
print('rule 3:', fitness)
return fitness / 3
fitness_evaluator = Fitness(to_regex, expected_text, text)
# -
# +
def gene_mutator(gene, display_logging = False):
percentage = np.random.rand()
if percentage < .08:
gene = gene_factory.create()
return gene
def individual_height_mutator(individual, display_logging = False):
percentage = np.random.rand()
if percentage < .10:
gene = gene_factory.create()
individual += [gene]
length = len(individual)
if percentage > .90 and length > 0:
individual = individual[:len(individual)-1]
return individual
hill_climber = SimpleHillClimber(fitness_evaluator, [ gene_mutator ], [ individual_height_mutator ])
# -
# + language="html"
# <h4>3. create individual</h4>
# -
# +
individual = gene_factory.create_many(4)
print('binary:', '|'.join(individual))
print('regex: ', '/'+ ''.join(to_regex.transform_and_compress(individual)) + '/gimu')
# -
# + language="html"
# <h4>4. run</h4>
# -
# +
number_of_iterations = 50000
result = hill_climber.run(individual, number_of_iterations)
final_individual = result[0]
final_fitness = result[1]
final_iteration = result[2]
print(
'compressed:',
'/' + to_regex.transform_and_compress(final_individual) + '/gimu',
'~',
'"' + expected_text + '"',
'~',
final_fitness,
'~',
final_iteration
)
print(
'original:',
'/' + to_regex.transform_and_compress(individual) + '/gimu',
)
# -
| notebooks/poc_dynamic_individual.ipynb |